\section{Introduction and statement of main results}
\label{sec:intro}
The main result of this paper is gradient estimates for solutions to quasilinear elliptic equations with nonhomogeneous Dirichlet boundary conditions of the form:
\begin{align}
\label{eq:diveq}
\begin{cases}
\text{div}(A(x,\nabla u)) &= \ \text{div}(|F|^{p-2}F) \quad \text{in} \ \ \Omega, \\
\hspace{1.2cm} u &=\ \sigma \qquad \qquad \qquad \text{on} \ \ \partial \Omega.
\end{cases}
\end{align}
Moreover, this study also provides new fractional maximal estimates for gradients of solutions to this type of problem. Here, the domain $\Omega$ is an open bounded subset of $\mathbb{R}^n$ ($n \ge 2$), the datum $F \in L^p(\Omega;\mathbb{R}^n)$, and the Dirichlet boundary datum $\sigma \in W^{1-1/p,p}(\partial\Omega)$.
Over the past decades, local interior gradient estimates of weak solutions to the classical homogeneous $p$-Laplace equation
\begin{align}
\label{eq:plaplace}
\text{div}(|\nabla u|^{p-2}\nabla u)=0
\end{align}
in the scalar case were developed in the series of papers \cite{Ural1968, LadyUral1968, Uhlenbeck1977,Evans1982} for $p \ge 2$, and the same result was obtained in \cite{Lewis1983} for the case $1<p<2$. Later, in \cite{Tolksdorf1984}, the result was extended to the more general equation $\text{div}(|\nabla u|^{p-2}\nabla u) = f$ with $p \ge 2$. It is well known that when $p \neq 2$, weak solutions to \eqref{eq:plaplace} are of class $C^{1,\alpha}$, i.e., they have H\"older continuous derivatives. In addition, for related results concerning this equation, the authors of \cite{DiBenedetto1983, Lieberman1984, Lieberman1986, Lieberman1988} proved interior $C^{1,\alpha}$ regularity for homogeneous quasilinear elliptic equations of the type $-\text{div}(A(x,u,\nabla u))=0$. Since then, the regularity theory of divergence-form equations modeled on the $p$-Laplace equation $\text{div}(|\nabla u|^{p-2}\nabla u)=-\text{div}(|F|^{p-2}F)$ has continued to be extended in \cite{Iwaniec, DiBenedetto1993} and many references therein.
In recent years, there has been much research activity on the regularity theory for solutions of quasilinear elliptic equations $\text{div}(A(x,\nabla u)) = \text{div}(|F|^{p-2}F)$ under various assumptions on the domain, the nonlinear operator $A$, and the boundary data. For instance, for the homogeneous Dirichlet problem (zero Dirichlet boundary data), S. Byun et al. \cite{SSB3, SSB4, SSB1, SSB2} studied gradient estimates of solutions in the setting of classical Lebesgue spaces (see \cite{SSB1}) and weighted Lebesgue spaces (see \cite{SSB2}), assuming that $\Omega$ is a Reifenberg flat domain and that $A$ satisfies a standard ellipticity condition with small BMO oscillation in $x$. Under various assumptions on $A$ and smoothness requirements on the domain $\Omega$, further generalizations of this type of homogeneous problem are the subject of \cite{CM2014, Phuc2015, CoMi2016, BCDKS, BW1, KZ} and their related references.
Afterwards, more general extensions of the regularity theory to nonhomogeneous quasilinear elliptic equations of the type $\text{div}(A(x,\nabla u)) = \text{div} F$ were discussed and addressed in many papers. In particular, interior $W^{1,q}$ estimates were investigated in \cite{Truyen2016} using the perturbation method proposed by Caffarelli et al. in \cite{CP1998}. More recently, T. Nguyen \cite{Truyen2018} proved global gradient estimates in weighted Morrey spaces for solutions in the case where the nonlinearity $A(x,\xi)$ is measurable in $x$, differentiable in $\xi$, and satisfies a small BMO condition, and the domain $\Omega$ is Reifenberg flat. Many papers on the same topic can be found in \cite{SSB1, SSB2, SSB3, SSB4, BW1, Tuoc2018}, under different hypotheses on the domain $\Omega$, the nonlinearity $A$, and the given boundary data.
The technique using maximal functions was first presented by G. Mingione et al. in \cite{Duzamin2,55DuzaMing} in the context of nonlinear potential theory, and this approach has since been developed in many optimal regularity results. In this context, we follow and continue this line of gradient estimates for the divergence-form problem \eqref{eq:diveq} with general Dirichlet boundary data. Our main results are estimates for gradients of solutions in Lorentz spaces; moreover, arguments involving cut-off fractional maximal functions are used to prove \emph{fractional maximal gradient estimates}. More precisely, we only assume that the complement of the domain $\Omega$ satisfies a $p$-capacity uniform thickness condition, which is weaker than Reifenberg flatness. We note that this $p$-capacity density condition is stronger than the Wiener criterion described in~\cite{Kilp}:
\begin{align*}
\int_0^1{\left(\frac{\text{cap}_p((\mathbb{R}^n \setminus \Omega) \cap \overline{B}_r(x), B_{2r}(x))}{\text{cap}_p(\overline{B}_r(x), B_{2r}(x))} \right)^{\frac{1}{p-1}} \frac{dr}{r}} = \infty,
\end{align*}
which characterizes regular boundary points for the $p$-Laplace Dirichlet problem; here one measures the thickness of the complement of $\Omega$ near the boundary by capacity densities. The class of domains whose complements satisfy the uniform $p$-capacity condition is relatively large, including those with Lipschitz boundaries or those satisfying a uniform corkscrew condition; its definition is recalled in Section \ref{sec:pcapa}. On the other hand, it is weaker than the Reifenberg flatness condition that was discussed in various studies \cite{BP14, BR, BW1_1, BW2, MP11, MP12, ER60}. Additionally, the nonlinearity $A$ here is a Carath\'eodory vector-valued function defined on $\Omega \times \mathbb{R}^n$ satisfying only the following growth and monotonicity conditions:
\begin{align*}
\left| A(x,\xi) \right| &\le \Lambda_1 |\xi|^{p-1},\\
\langle A(x,\xi)-A(x,\eta), \xi - \eta \rangle &\ge \Lambda_2 \left( |\xi|^2 + |\eta|^2 \right)^{\frac{p-2}{2}}|\xi - \eta|^2,
\end{align*}
for every $(\xi,\eta) \in \mathbb{R}^n \times \mathbb{R}^n \setminus \{(0,0)\}$ and a.e. $x \in \mathbb{R}^n$, where $\Lambda_1$ and $\Lambda_2$ are positive constants. This operator and its properties are detailed in Section \ref{sec:A}. The proofs in this paper are based on the method developed in \cite{55QH4,MPT2018,MPT2019} for measure data problems, the so-called ``good-$\lambda$'' technique. Our results provide stronger conclusions, under fewer hypotheses and for a more general problem than in previous studies: the optimal good-$\lambda$ technique of \cite{MPT2018, MPT2019} is applied here to a problem with functional data instead of measure data.
Now let us give a precise statement of our main results. We first state the level-set estimate of Theorem \ref{theo:maintheo_lambda} for the maximal function, which will be important for proving our later results.
\begin{theorem}
\label{theo:maintheo_lambda}
Let $p>1$ and suppose that $\Omega \subset \mathbb{R}^n$ is a bounded domain whose complement satisfies a $p$-capacity uniform thickness condition with constants $c_0, r_0>0$.
Then, for any solution $u$ to \eqref{eq:diveq} with given data $F$, there exist $a \in (0,1)$, $b>0$, $\varepsilon_0 = \varepsilon_0(n,a,b) \in (0,1)$ and a constant $C = C(n,p,\Lambda_1,\Lambda_2,c_0,\mathrm{diam}(\Omega)/{r_0})>0$ such that the following estimate
\begin{align}
\label{eq:mainlambda}
\begin{split}
&\mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\varepsilon^{-a}\lambda, {\mathbf{M}}(|F|^p+|\nabla \sigma|^p) \le \varepsilon^b\lambda \}\cap \Omega \right)\\ &~~~~~~\qquad \leq C \varepsilon \mathcal{L}^n\left(\{ {\mathbf{M}}(|\nabla u|^p)> \lambda\}\cap \Omega \right),
\end{split}
\end{align}
holds for any $\lambda>0$ and $\varepsilon \in (0,\varepsilon_0)$. Here, $a$ and $b$ are parameters depending only on $n,p,\Lambda_1, \Lambda_2,c_0$; they will be specified in the proof.
\end{theorem}
Throughout the paper, $\mathrm{diam}(\Omega)$ denotes the diameter of a set $\Omega$, defined as:
\begin{align*}
\mathrm{diam}(\Omega) = \sup\{d(x,y) \ : \ x,y \in \Omega\},
\end{align*}
and the notation $\mathcal{L}^n(E)$ stands for the $n$-dimensional Lebesgue measure of a set $E \subset \mathbb{R}^n$. Moreover, in this theorem and in what follows, for simplicity we denote the set $\{x \in \Omega: |g(x)| > \Lambda\}$ by $\{|g|>\Lambda\}$ whenever no confusion can arise.
In this regard, our first result concerns gradient norm estimates in classical Lorentz spaces.
\begin{theorem}
\label{theo:regularityM0}
Let $p>1$ and $\Omega \subset \mathbb{R}^n$ be a bounded domain whose complement satisfies a $p$-capacity uniform thickness condition with constants $c_0, r_0>0$. Then, for any solution $u$ to \eqref{eq:diveq} with given data $F \in L^p(\Omega;\mathbb{R}^n)$, $\sigma \in W^{1,p}(\Omega)$, $0<q<\frac{\Theta}{p}$ and $0<s \le \infty$, where $\Theta>p$ is the exponent from Lemma \ref{lem:reverseHolder}, there exists a constant $C = C(n,p,\Lambda_1,\Lambda_2,c_0,r_0,\mathrm{diam}(\Omega),q,s)$ such that the following inequality holds
\begin{align}
\label{eq:regularityM0}
\|\mathbf{M}(|\nabla u|^p)\|_{L^{q,s}(\Omega)}\leq C \|\mathbf{M}(|F|^p+|\nabla \sigma|^p)\|_{L^{q,s}(\Omega)}.
\end{align}
\end{theorem}
Next, we state Theorem \ref{theo:main2}, a somewhat more general form of Theorem \ref{theo:maintheo_lambda}; its proof can be found in Section \ref{sec:proofs}.
\begin{theorem}
\label{theo:main2}
Let $p>1$, $0\le\alpha<n$ and suppose that $\Omega \subset \mathbb{R}^n$ is a bounded domain whose complement satisfies a $p$-capacity uniform thickness condition with constants $c_0, r_0>0$. Then, for any solution $u$ to equation~\eqref{eq:diveq} with given datum $F$, there exist $a \in (0,1)$, $b>0$, $\varepsilon_0 = \varepsilon_0(n,a,b) \in (0,1)$ and a constant $C = C(n, p, \Lambda_1, \Lambda_2, \alpha, c_0, r_0, T_0= \mathrm{diam}(\Omega))$ such that the following estimate
\begin{align*}
\mathcal{L}^n\left(V_{\lambda}^{\alpha}\right) \le C \varepsilon \mathcal{L}^n\left(W_{\lambda}^{\alpha}\right),
\end{align*}
holds for any $\lambda>0$ and $\varepsilon \in (0,\varepsilon_0)$, where
$$V_{\lambda}^{\alpha} = \left\{{\mathbf{M}} {\mathbf{M}}_{\alpha}(|\nabla u|^p) >\varepsilon^{-a} \lambda, \ {\mathbf{M}}_{\alpha}(|F|^p + |\nabla\sigma|^p) \le \varepsilon^{b} \lambda \right\} \cap \Omega,$$
and
$$W_{\lambda}^{\alpha} = \left\{{\mathbf{M}} {\mathbf{M}}_{\alpha} (|\nabla u|^p) > \lambda\right\} \cap \Omega.$$
As before, $a$ and $b$ are parameters depending only on $n,p,\Lambda_1, \Lambda_2,c_0$, and they will be specified in the proof.
\end{theorem}
The next theorem improves the Lorentz gradient estimate of Theorem \ref{theo:regularityM0}: it gives a fractional maximal gradient estimate for solutions to our class of nonhomogeneous equations \eqref{eq:diveq} in terms of the data $F$ and $\sigma$.
\begin{theorem}
\label{theo:regularityMalpha}
Let $p>1$, $0\le\alpha<n$ and $\Omega \subset \mathbb{R}^n$ be a bounded domain whose complement satisfies a $p$-capacity uniform thickness condition with constants $c_0, r_0>0$. Then, for any solution $u$ to \eqref{eq:diveq} with given data $F \in L^p(\Omega;\mathbb{R}^n)$, $\sigma \in W^{1,p}(\Omega)$, $0<q<\frac{\Theta n}{p(n-\alpha)}$ and $0<s \le \infty$, the following inequality
\begin{align}
\label{eq:regularityMalpha}
\|{\mathbf{M}}_\alpha(|\nabla u|^p)\|_{L^{q,s}(\Omega)}\leq C \|\mathbf{M}_\alpha(|F|^p+|\nabla \sigma|^p)\|_{L^{q,s}(\Omega)}
\end{align}
holds. Here, the constant $C$ depends only on $n,p,\Lambda_1,\Lambda_2,\alpha,c_0,r_0,\mathrm{diam}(\Omega),q$ and $s$.
\end{theorem}
In the proofs of the above theorems, we make use of gradient estimate results developed for quasilinear equations with measure data, as well as linear and nonlinear potential and Calder\'on-Zygmund theories (see \cite{DuMin2010, DuMin2011, Mi3, Mi2019, MPT2018, MPT2019}).
Here, the results of Theorems \ref{theo:main2} and \ref{theo:regularityMalpha} generalize those of Theorems \ref{theo:maintheo_lambda} and \ref{theo:regularityM0}. Heuristically, one can expect such a stronger result with general fractional maximal gradient estimates of solutions ($0 \le \alpha<n$), of which $\alpha=0$ is a special case. However, the proofs proceed by a different approach, via cut-off fractional maximal functions and their properties, described in this paper. An open problem is the case of a small range of $\alpha$ ($0 \le \alpha<1$ with $\alpha$ very close to $0$), which would be meaningful because it would enlarge the range of $q$ in Theorems \ref{theo:regularityM0} and \ref{theo:regularityMalpha}. Of course, in that context one should obtain more regularity with the fewest and simplest assumptions on $\Omega$ and the operator $A$ (in some previous works, Reifenberg flatness of $\Omega$ and a small BMO condition on $A$ were assumed). This question seems interesting to many researchers and may appear in upcoming work.
The outline of this paper is as follows. In Section \ref{sec:preliminaries} we collect some preliminaries, notations and assumptions on the problem that are useful for our proofs. Section \ref{sec:cutoff} is devoted to the study of cut-off fractional maximal functions, and proofs of some preparatory lemmas related to them are given there. To make effective use of the good-$\lambda$ method, Section \ref{sec:inter_bound} establishes some important local interior and boundary comparison estimates. Finally, the proofs of the main theorems are provided in Section \ref{sec:proofs}.
\section{Preliminaries}
\label{sec:preliminaries}
This section provides some necessary preliminaries, and we recall some well-known notations and results for later use.
\subsection{The uniform $p$-capacity condition}
\label{sec:pcapa}
In this paper, the domain $\Omega \subset \mathbb{R}^n$ is assumed to have a uniformly $p$-thick complement $\mathbb{R}^n \setminus \Omega$. More precisely, we say that $\mathbb{R}^n \setminus \Omega$ satisfies the \emph{$p$-capacity uniform thickness condition} if there exist two constants $c_0,r_0>0$ such that
\begin{align}
\label{eq:capuni}
\text{cap}_p((\mathbb{R}^n \setminus \Omega) \cap \overline{B}_r(x), B_{2r}(x)) \ge c_0 \text{cap}_p(\overline{B}_r(x),B_{2r}(x)),
\end{align}
for every $x \in \mathbb{R}^n \setminus \Omega$ and $0<r \le r_0$. Here, the $p$-capacity of any compact set $K \subset \Omega$ (relative to $\Omega$) is defined as:
\begin{align*}
\text{cap}_p(K,\Omega) = \inf \left\{ \int_\Omega{|\nabla \varphi|^p dx}: \varphi \in C_c^\infty(\Omega), \varphi \ge \chi_K \right\},
\end{align*}
where $\chi_K$ is the characteristic function of $K$. The $p$-capacity of any open subset $U \subseteq \Omega$ is then defined by:
\begin{align*}
\text{cap}_p(U,\Omega) = \sup \left\{ \text{cap}_p(K,\Omega), \ K \ \text{compact}, \ K \subseteq U \right\}.
\end{align*}
Consequently, the $p$-capacity of any subset $B \subseteq \Omega$ is defined by:
\begin{align*}
\text{cap}_p(B,\Omega) = \inf \left\{ \text{cap}_p(U,\Omega), \ U \ \text{open}, \ B \subseteq U \right\}.
\end{align*}
A function $u$ defined on $\Omega$ is said to be $\text{cap}_p$-quasicontinuous if for every $\varepsilon>0$ there exists $B \subseteq \Omega$ with $\text{cap}_p(B,\Omega) < \varepsilon$ such that the restriction of $u$ to $\Omega\setminus B$ is continuous. Every nonempty complement $\mathbb{R}^n\setminus \Omega$ is uniformly $p$-thick for $p >n$, so this condition is nontrivial only when $p \le n$. For further properties of the $p$-capacity we refer to \cite{HKM2006}.
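For orientation, we recall the standard scaling of the capacity of a ball (see \cite{HKM2006}): for $1<p<n$ there is a constant $C(n,p)>0$ such that
\begin{align*}
\text{cap}_p(\overline{B}_r(x), B_{2r}(x)) = C(n,p)\, r^{n-p}.
\end{align*}
Hence condition \eqref{eq:capuni} amounts to the lower bound $\text{cap}_p((\mathbb{R}^n \setminus \Omega) \cap \overline{B}_r(x), B_{2r}(x)) \ge c_0\, C(n,p)\, r^{n-p}$ for all $x \in \mathbb{R}^n \setminus \Omega$ and $0<r \le r_0$: near every boundary point, the complement occupies a fixed fraction of a ball, measured in capacity.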
\subsection{Assumptions on operator $A$}
\label{sec:A}
In our study of the elliptic equation $\text{div}(A(x,\nabla u)) = \ \text{div}(|F|^{p-2}F)$, the nonlinear operator $A: \Omega \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a Carath\'eodory vector-valued function (that is, $A(\cdot,\xi)$ is measurable on $\Omega$ for every $\xi$ in $\mathbb{R}^n$, and $A(x,\cdot)$ is continuous on $\mathbb{R}^n$ for almost every $x$ in $\Omega$) which satisfies the following growth and monotonicity conditions: for some $1<p\le n$ there exist two positive constants $\Lambda_1$ and $\Lambda_2$ such that
\begin{align}
\label{eq:A1}
\left| A(x,\xi) \right| &\le \Lambda_1 |\xi|^{p-1},
\end{align}
and
\begin{align}
\label{eq:A2}
\langle A(x,\xi)-A(x,\eta), \xi - \eta \rangle &\ge \Lambda_2 \left( |\xi|^2 + |\eta|^2 \right)^{\frac{p-2}{2}}|\xi - \eta|^2
\end{align}
holds for almost every $x$ in $\Omega$ and every $(\xi,\eta) \in \mathbb{R}^n \times \mathbb{R}^n \setminus \{(0,0)\}$.
\subsection{Lorentz spaces}
\label{sec:lorentz}
Let us first recall the definition of the \emph{Lorentz space} $L^{q,t}(\Omega)$ for $0<q<\infty$ and $0<t\le \infty$ (see \cite{55Gra}). It is the set of all Lebesgue measurable functions $g$ on $\Omega$ such that:
\begin{align}
\label{eq:lorentz}
\|g\|_{L^{q,t}(\Omega)} = \left[ q \int_0^\infty{ \lambda^t\mathcal{L}^n \left( \{x \in \Omega: |g(x)|>\lambda\} \right)^{\frac{t}{q}} \frac{d\lambda}{\lambda}} \right]^{\frac{1}{t}} < +\infty,
\end{align}
when $t \neq \infty$. If $t = \infty$, the space $L^{q,\infty}(\Omega)$ is the usual weak-$L^q$ (Marcinkiewicz) space, equipped with the quasinorm:
\begin{align}
\|g\|_{L^{q,\infty}(\Omega)} = \sup_{\lambda>0}{\lambda \mathcal{L}^n\left(\{x \in \Omega:|g(x)|>\lambda\}\right)^{\frac{1}{q}}}.
\end{align}
When $t=q$, the Lorentz space $L^{q,q}(\Omega)$ becomes the Lebesgue space $L^q(\Omega)$.
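The following classical example illustrates how the second index refines the Lebesgue scale. For $g(x) = |x|^{-n/q}$ on $\Omega = B_1(0)$ one has $\mathcal{L}^n(\{x \in B_1: |g(x)|>\lambda\}) = |B_1|\min\{1, \lambda^{-q}\}$, hence
\begin{align*}
\|g\|_{L^{q,\infty}(B_1)} = \sup_{\lambda>0}{\lambda \left(|B_1|\min\{1,\lambda^{-q}\}\right)^{\frac{1}{q}}} = |B_1|^{\frac{1}{q}} < \infty,
\end{align*}
while the integral in \eqref{eq:lorentz} with $t=q$ diverges logarithmically at infinity; thus $g \in L^{q,\infty}(B_1) \setminus L^{q}(B_1)$.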
\subsection{Maximal and Fractional Maximal functions}
\label{sec:MMalpha}
In what follows, we denote by $B_r(x_0) = \{x\in \mathbb{R}^n: |x-x_0|<r\}$ the open ball in $\mathbb{R}^n$ with center $x_0$ and radius $r$. Moreover, $\displaystyle{\fint_{B_r(x)}{f(y)dy}}$ denotes the integral average of $f$ in the variable $y$ over the ball $B_r(x)$, i.e.
\begin{align*}
\fint_{B_r(x)}{f(y)dy} = \frac{1}{|B_r(x)|}\int_{B_r(x)}{f(y)dy}.
\end{align*}
We first recall the definition of the fractional maximal function, following \cite{K1997, KS2003}. For $0 \le \alpha \le n$, the fractional maximal function $\mathbf{M}_\alpha g$ of a locally integrable function $g: \mathbb{R}^n \rightarrow [-\infty,\infty]$ is defined by:
\begin{align}
\label{eq:Malpha}
\mathbf{M}_\alpha g(x) = \sup_{\rho>0}{\rho^\alpha \fint_{B_\rho(x)}{|g(y)|dy}}.
\end{align}
For the case $\alpha=0$, one obtains the Hardy-Littlewood maximal function, $\mathbf{M}g = \mathbf{M}_0g$, defined for each locally integrable function $g$ in $\mathbb{R}^n$ by:
\begin{align}
\label{eq:M0}
\mathbf{M}g(x) = \sup_{\rho>0}{\fint_{B_{\rho}(x)}|g(y)|dy},~~ \forall x \in \mathbb{R}^n.
\end{align}
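As an illustration (ours, not part of the paper's arguments), the following discrete one-dimensional sketch approximates $\mathbf{M}g$ on a uniform grid: the supremum over radii $\rho$ becomes a maximum over grid radii, and $g$ is extended by zero outside the grid.

```python
import numpy as np

# Discrete 1D sketch of the Hardy-Littlewood maximal function: at each grid
# point, take the largest average of |g| over symmetric windows ("balls"),
# extending g by zero outside the array. Radius r = 0 recovers |g(x)| itself,
# mimicking the limit rho -> 0+, so Mg >= |g| pointwise.

def maximal(g):
    g = np.abs(np.asarray(g, dtype=float))
    n = len(g)
    out = np.zeros(n)
    for i in range(n):
        best = 0.0
        for r in range(0, n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            avg = g[lo:hi].sum() / (2 * r + 1)  # cells outside count as 0
            best = max(best, avg)
        out[i] = best
    return out

g = np.array([0.0, 0.0, 3.0, 0.0, 0.0])
Mg = maximal(g)
print(Mg)  # the spike dominates its neighborhood: [0.6, 1.0, 3.0, 1.0, 0.6]
```

Note how $\mathbf{M}g$ spreads the mass of the spike over nearby points while dominating $|g|$ everywhere, which is exactly the behavior the level-set estimates of Section \ref{sec:proofs} exploit.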
Fractional maximal operators have many applications in partial differential equations, potential theory and harmonic analysis: whenever we need to estimate certain quantities of a function $g$, they can often be shown to be dominated by $\mathbf{M}g$, or more generally by $\mathbf{M}_\alpha g$. The fundamental result on the maximal operator is its boundedness on $L^p(\mathbb{R}^n)$ for $1< p \le \infty$: there exists a constant $C(n,p)>0$ such that:
\begin{align*}
\|\mathbf{M}g\|_{L^p(\mathbb{R}^n)} \le C(n,p)\|g\|_{L^p(\mathbb{R}^n)}, \quad \forall g \in L^p(\mathbb{R}^n).
\end{align*}
Moreover, $\mathbf{M}$ is of \emph{weak type} $(1,1)$: there is a constant $C(n)>0$ such that for all $\lambda>0$ and $g \in L^1(\mathbb{R}^n)$, it holds that
\begin{align*}
\mathcal{L}^n\left(\{\mathbf{M}(g)>\lambda\}\right) \le C(n)\frac{\|g\|_1}{\lambda}.
\end{align*}
The standard classical references are \cite{Gra97,55Gra}, and also \cite{55QH10}. Besides this, some well-known properties of maximal and fractional maximal operators will be stated in the following lemmas.
\begin{lemma}
\label{lem:boundM}
The operator $\mathbf{M}$ is bounded from $L^s(\mathbb{R}^n)$ to $L^{s,\infty}(\mathbb{R}^n)$ for $s \ge 1$ (see \cite{55Gra}); that is,
\begin{align}
\mathcal{L}^n\left(\{\mathbf{M}(g)>\lambda\}\right) \le \frac{C}{\lambda^s}\int_{\mathbb{R}^n}{|g(x)|^s dx}, \quad \mbox{ for all } \lambda>0.
\end{align}
\end{lemma}
\begin{lemma}
\label{lem:boundMlorentz}
The maximal function $\mathbf{M}$ is bounded on the Lorentz space $L^{q,s}(\mathbb{R}^n)$ for $q>1$ (see \cite{55Gra}); that is,
\begin{align}
\label{eq:boundM}
\|\mathbf{M}g\|_{L^{q,s}(\mathbb{R}^n)} \le C \|g\|_{L^{q,s}(\mathbb{R}^n)}.
\end{align}
\end{lemma}
Moreover, an important property of the fractional maximal function follows from the boundedness of the maximal function. The proof of this result is a modification of Lemma \ref{lem:boundM} based on the definitions of the maximal and fractional maximal functions, and we give all the details below.
\begin{lemma}
\label{lem:Malpha_prop}
Let $0\le \alpha<n$, $\rho>0$ and $x \in \mathbb{R}^n$. Then, for any $f \in L^1_{\text{loc}}(\mathbb{R}^n)$ the following inequality holds:
\begin{equation*}
\displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M}_\alpha(\chi_{B_{\rho}(x)}f)>\lambda\right\}\right)} \leq C\displaystyle{\left(\frac{\displaystyle{\int_{B_{\rho}(x)}|f(y)|dy}}{\lambda}\right)^{\frac{n}{n-\alpha}}},~~ \mbox{ for all} \ \lambda>0.
\end{equation*}
\end{lemma}
\begin{proof}
First of all, let us prove that for $0 \le \alpha<n$ and any $f \in L^1(\mathbb{R}^n)$, there holds:
\begin{equation*}
\displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M}_\alpha f>1\right\}\right)} \leq C\displaystyle{\left({\displaystyle{\int_{\mathbb{R}^n}|f(y)|dy}}\right)^{\frac{n}{n-\alpha}}}.
\end{equation*}
Indeed, for any $x \in \mathbb{R}^n$, from definition of fractional maximal function $\textbf{M}_\alpha$, one has
\begin{align*}
\mathbf{M}_{\alpha} f(x) &\le C \sup_{\rho>0} \rho^{\alpha-n} \int_{B_{\rho}(x)}|f(y)|dy \\
& = C \sup_{\rho>0} \left(\rho^{-n} \int_{B_{\rho}(x)}|f(y)|dy\right)^{\frac{n-\alpha}{n}} \left(\int_{B_{\rho}(x)}|f(y)|dy\right)^{\frac{\alpha}{n}} \\
& \le C \left[\mathbf{M}f(x)\right]^{1 - \frac{\alpha}{n}} \|f\|^{\frac{\alpha}{n}}_{L^1(\mathbb{R}^n)}.
\end{align*}
It follows that
\begin{align*}
\displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M}_\alpha f>1\right\}\right)} &\leq \displaystyle{\mathcal{L}^n\left(\left\{[\mathbf{M} f]^{1 - \frac{\alpha}{n}} \|f\|^{\frac{\alpha}{n}}_{L^1(\mathbb{R}^n)}>C\right\}\right)} \\ & = \displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M} f > C \|f\|^{-\frac{\alpha}{n-\alpha}}_{L^1(\mathbb{R}^n)}\right\}\right)} .
\end{align*}
Applying Lemma~\ref{lem:boundM} with $s=1$ and $\lambda = C\|f\|^{-\frac{\alpha}{n-\alpha}}_{L^1(\mathbb{R}^n)}$, we obtain that
\begin{align*}
\displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M}_\alpha f>1\right\}\right)} \le \frac{C}{ \|f\|^{-\frac{\alpha}{n-\alpha}}_{L^1(\mathbb{R}^n)}} \int_{\mathbb{R}^n} |f(x)|dx = C \|f\|^{\frac{n}{n-\alpha}}_{L^1(\mathbb{R}^n)}.
\end{align*}
For the general statement, we apply this estimate to the function $\chi_{B_{\rho}(x)}f/\lambda$ in place of $f$; since $\mathbf{M}_\alpha$ is positively homogeneous, this yields that the following inequality holds:
\begin{equation*}
\displaystyle{\mathcal{L}^n\left(\left\{\mathbf{M}_\alpha(\chi_{B_{\rho}(x)}f)>\lambda\right\}\right)} \leq C\displaystyle{\left(\frac{\displaystyle{\int_{B_{\rho}(x)}|f(y)|dy}}{\lambda}\right)^{\frac{n}{n-\alpha}}},~~ \mbox{for all} \ \lambda>0.
\end{equation*}
\end{proof}
\section{Cut-off Fractional Maximal functions and Preparatory lemmas}
\label{sec:cutoff}
In this section, we study the so-called ``cut-off fractional maximal functions'' and their properties, which will be needed in later parts of this paper.
Let $r>0$ and $0\le \alpha \le n$. We define some additional cut-off maximal functions of a locally integrable function $f$, corresponding to the maximal function $\mathbf{M}f$ in \eqref{eq:M0}, as follows:
\begin{align}
\label{eq:MTrf}
\begin{split}
{\mathbf{M}}^rf(x) &= \sup_{0<\rho<r} \fint_{B_\rho(x)}f(y)dy; \\ {\mathbf{T}}^rf(x) &= \sup_{\rho \ge r}\fint_{B_\rho(x)}f(y)dy,
\end{split}
\end{align}
and corresponding to $\mathbf{M}_{\alpha}f$ in \eqref{eq:Malpha} as
\begin{align}
\label{eq:MTralphaf}
\begin{split}
{\mathbf{M}}^r_{\alpha}f(x) &= \sup_{0<\rho<r} \rho^{\alpha} \fint_{B_\rho(x)}f(y)dy; \\ {\mathbf{T}}^r_{\alpha}f(x) &= \sup_{\rho \ge r} \rho^{\alpha}\fint_{B_\rho(x)}f(y)dy.
\end{split}
\end{align}
We remark here that if $\alpha = 0$ then $\mathbf{M}^r_{\alpha}f = \mathbf{M}^rf$ and $\mathbf{T}^r_{\alpha}f = \mathbf{T}^r f$, for all $f \in L^1_{loc}(\mathbb{R}^n)$. The following lemma can be inferred directly from their definitions.
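Underlying Lemma \ref{lem:Malpha} below is the elementary splitting $\mathbf{M}f(x) = \max\{\mathbf{M}^rf(x), \mathbf{T}^rf(x)\}$, since the supremum over $\rho>0$ in \eqref{eq:M0} splits at $\rho=r$. A small numerical sketch (our own illustration, on a discrete grid with $f$ extended by zero) confirms the splitting:

```python
import numpy as np

# On a discrete grid, the sup over all radii splits at the cut-off r_cut:
#   M f = max(M^r f, T^r f),
# where M^r uses radii < r_cut and T^r uses radii >= r_cut (grid averages
# stand in for ball averages here).

def sup_avg(f, i, radii):
    n = len(f)
    best = 0.0
    for r in radii:
        lo, hi = max(0, i - r), min(n, i + r + 1)
        best = max(best, f[lo:hi].sum() / (2 * r + 1))
    return best

rng = np.random.default_rng(0)
f = np.abs(rng.normal(size=40))
r_cut, radii = 5, range(0, 40)

for i in range(len(f)):
    Mf = sup_avg(f, i, radii)
    Mr = sup_avg(f, i, [r for r in radii if r < r_cut])   # M^r
    Tr = sup_avg(f, i, [r for r in radii if r >= r_cut])  # T^r
    assert np.isclose(Mf, max(Mr, Tr))
print("M f = max(M^r f, T^r f) verified at every grid point")
```

Lemma \ref{lem:Malpha} iterates this observation for the composed operator $\mathbf{M}\mathbf{M}_\alpha$.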
\begin{lemma}\label{lem:Malpha}
For any $r>0$ and $0\le \alpha \le n$, we have
\begin{align*}
{\mathbf{M}}{\mathbf{M}}_{\alpha}f(x) \le \max \left\{{\mathbf{M}}^r{\mathbf{M}}^r_{\alpha}f(x), {\mathbf{M}}^r{\mathbf{T}}^r_{\alpha}f(x), {\mathbf{T}}^r{\mathbf{M}}_{\alpha}f(x)\right\},
\end{align*}
for any $x \in \mathbb{R}^n$ and $f \in L^1_{loc}(\mathbb{R}^n)$.
\end{lemma}
We will now prove some inequalities related to these operators that will be needed for our results later.
\begin{lemma}\label{lem:Tr}
Let $r >0$, $k \ge 1$ and $0\le \alpha \le n$. For some $x_1, x_2 \in \mathbb{R}^n$, assume that
$$B_{\rho}(x_1) \subset B_{k\rho}(x_2), \quad \forall \rho \ge r.$$
Then we have the following estimate
\begin{align}
{\mathbf{T}}^r_{\alpha}f(x_1) \le k^{n-\alpha} {\mathbf{M}}_{\alpha}f(x_2),
\end{align}
for every nonnegative $f \in L^1_{loc}(\mathbb{R}^n)$.
\end{lemma}
\begin{proof}
From the definition of the cut-off operator ${\mathbf{T}}^r_{\alpha}$ in \eqref{eq:MTralphaf} and the inclusion $B_{\rho}(x_1) \subset B_{k\rho}(x_2)$, we estimate:
\begin{align*}
{\mathbf{T}}^r_{\alpha}f(x_1) & = \sup_{\rho\ge r} \rho^{\alpha} \fint_{B_{\rho}(x_1)} f(y)dy \\
&\le \sup_{\rho\ge r} \rho^{\alpha} \frac{1}{|B_{\rho}(x_1)|}\int_{B_{k\rho}(x_2)} f(y)dy \\
& = k^{n - \alpha} \sup_{\rho\ge r} \ (k\rho)^{\alpha} \fint_{B_{k\rho}(x_2)} f(y)dy\\
& \le k^{n - \alpha} {\mathbf{M}}_{\alpha}f(x_2).
\end{align*}
\end{proof}
\begin{lemma}\label{lem:MrMr}
Let $r>0$ and $0\le \alpha <n$. Then there exists a constant $C = C(n,\alpha)>0$ such that
\begin{align}\label{eq:MrMr}
{\mathbf{M}}^r {\mathbf{M}}^r_{\alpha} f(x) \le C {\mathbf{M}}^{2r}_{\alpha}f(x),
\end{align}
for any $x \in \mathbb{R}^n$ and every nonnegative $f \in L^1_{loc}(\mathbb{R}^n)$.
\end{lemma}
\begin{proof}
For any $\rho \in (0,r)$ and $y \in B_{\rho}(x)$, we have
\begin{align}\label{eq:Mr1}
{\mathbf{M}}^r_{\alpha} f(y) = \max\left\{ {\mathbf{M}}^{\rho}_{\alpha} f(y), \sup_{\rho\le\delta<r} \delta^{\alpha} \fint_{B_{\delta}(y)}f(z) dz \right\}.
\end{align}
Since $B_{\delta}(y) \subset B_{\rho + \delta}(x)$ for every $\delta>0$, the second term on the right-hand side can be estimated as
\begin{align}\nonumber
\sup_{\rho\le\delta<r} \delta^{\alpha} \fint_{B_{\delta}(y)}f(z) dz & \le \sup_{\rho\le\delta<r} \left(\frac{\delta}{\rho+\delta}\right)^{\alpha-n} (\rho+\delta)^{\alpha} \fint_{B_{\rho+\delta}(x)}f(z) dz \\ \nonumber
& \le 2^{n-\alpha} \sup_{\rho\le\delta<r} (\rho+\delta)^{\alpha} \fint_{B_{\rho+\delta}(x)}f(z) dz \\ \nonumber
& \le 2^{n-\alpha} \sup_{0<R<2r} R^{\alpha} \fint_{B_{R}(x)}f(z) dz \\
\label{eq:Mr2}
& = 2^{n-\alpha} {\mathbf{M}}^{2r}_{\alpha}f(x).
\end{align}
From \eqref{eq:Mr1}, \eqref{eq:Mr2} and the definitions of the cut off fractional maximal function ${\mathbf{M}}^r$ and ${\mathbf{M}}^r_{\alpha}$ in \eqref{eq:MTrf} and \eqref{eq:MTralphaf}, one obtains
\begin{align}\nonumber
{\mathbf{M}}^r {\mathbf{M}}^r_{\alpha} f(x) & = \sup_{0<\rho<r} \fint_{B_{\rho}(x)} {\mathbf{M}}^r_{\alpha} f(y) dy \\ \nonumber
&\le \sup_{0<\rho<r} \fint_{B_{\rho}(x)} {\mathbf{M}}^{\rho}_{\alpha} f(y) dy + 2^{n-\alpha} {\mathbf{M}}^{2r}_{\alpha}f(x) \\ \label{eq:Mr3}
& = I + 2^{n-\alpha} {\mathbf{M}}^{2r}_{\alpha}f(x),
\end{align}
where
\begin{align*}
I & = \sup_{0<\rho<r} \fint_{B_{\rho}(x)} {\mathbf{M}}^{\rho}_{\alpha} f(y) dy,
\end{align*}
and it remains to estimate this term. Indeed, for any $y\in B_{\rho}(x)$, since $B_\delta(y) \subset B_{2\rho}(x)$ whenever $0<\delta<\rho$, we have
\begin{equation*}
{\mathbf{M}}^{\rho}_{\alpha} f(y) = \sup_{0<\delta<\rho} \delta^{\alpha} \fint_{B_{\delta}(y)} \chi_{B_{2\rho}(x)}f(z)dz\leq \mathbf{M}_\alpha[ \chi_{B_{2\rho}(x)}f](y),
\end{equation*}
and it follows that
\begin{align*}
I\leq \sup_{0<\rho<r} \fint_{B_{\rho}(x)} \mathbf{M}_\alpha[\chi_{B_{2\rho}(x)}f](y) dy.
\end{align*}
According to Lemma \ref{lem:Malpha_prop}, we thus get
\begin{align*}
\int_{B_{\rho}(x)} \mathbf{M}_\alpha[\chi_{B_{2\rho}(x)}f](y) dy&= \int_{0}^{\infty} \mathcal{L}^n\left(\left\{\mathbf{M}_\alpha(\chi_{B_{2\rho}(x)}f)>\lambda\right\} \cap B_{\rho}(x) \right) d\lambda\\& \leq C\rho^n\lambda_0+\int_{\lambda_0}^{\infty} \mathcal{L}^n\left(\left\{\mathbf{M}_\alpha(\chi_{B_{2\rho}(x)}f)>\lambda\right\}\right) d\lambda\\&\leq C \rho^n\lambda_0+ C \left(\int_{B_{2\rho}(x)}f(y)dy\right)^{\frac{n}{n-\alpha}}\int_{\lambda_0}^{\infty}\lambda^{-\frac{n}{n-\alpha}}d\lambda\\&= C\rho^n\lambda_0+ C \left(\int_{B_{2\rho}(x)}f(y)dy\right)^{\frac{n}{n-\alpha}}\lambda_0^{-\frac{\alpha}{n-\alpha}}.
\end{align*}
By choosing
\begin{equation*}
\lambda_0=\rho^{-n+\alpha}\int_{B_{2\rho}(x)}f(y)dy,
\end{equation*}
we obtain
\begin{align*}
\int_{B_{\rho}(x)} \mathbf{M}_\alpha[\chi_{B_{2\rho}(x)}f](y) dy&\leq C\rho^{\alpha}\int_{B_{2\rho}(x)}f(y)dy.
\end{align*}
Hence,
\begin{align} \label{eq:Mr4}
I\leq C \sup_{0<\rho<r} \rho^{\alpha-n} \int_{B_{2\rho}(x)}f(y)dy \le C {\mathbf{M}}^{2r}_{\alpha}f(x),
\end{align}
where the dimensional constants have been absorbed into $C$. Combining \eqref{eq:Mr3} and \eqref{eq:Mr4} completes the proof of \eqref{eq:MrMr}.
\end{proof}
\section{Interior and boundary comparison estimates}
\label{sec:inter_bound}
In this section, we present some local interior and boundary comparison estimates for weak solutions $u$ of \eqref{eq:diveq} that are essential to our development.
\begin{proposition}
\label{prop1}
Let $\sigma \in W^{1,p}(\Omega), \ F \in L^p(\Omega)$ and $u$ be a weak solution of~\eqref{eq:diveq}. Then we have
\begin{equation}
\label{eq:prop1}
\int_{\Omega} |\nabla u|^p dx \le C \int_{\Omega} \left(|F|^p + |\nabla \sigma|^p \right)dx.
\end{equation}
Here, the constant $C$ depends only on $p,\Lambda_1, \Lambda_2$.
\end{proposition}
\begin{proof}
Using $u - \sigma$ as a test function in equation~\eqref{eq:diveq}, we obtain
\begin{equation*}
\int_{\Omega} A(x,\nabla u) \nabla u dx = \int_{\Omega} A(x,\nabla u) \nabla \sigma dx + \int_{\Omega} |F|^{p-2} F \nabla (u - \sigma) dx.
\end{equation*}
It follows from conditions~\eqref{eq:A1} and~\eqref{eq:A2} on the operator $A$ that
\begin{equation*}
\int_{\Omega} |\nabla u|^p dx \le C \left( \int_{\Omega} |\nabla u|^{p-1} |\nabla \sigma| dx + \int_{\Omega} |F|^{p-1} |\nabla u| dx + \int_{\Omega} |F|^{p-1} |\nabla \sigma| dx \right).
\end{equation*}
By using H{\"o}lder's inequality and Young's inequality, we obtain that
\begin{align*}
\int_{\Omega} |\nabla u|^{p-1} |\nabla \sigma| dx & \le \left( \int_{\Omega} |\nabla u|^{p} dx \right)^{\frac{p-1}{p}} \left( \int_{\Omega} |\nabla \sigma|^p dx \right)^{\frac{1}{p}} \\
& \le \frac{p-1}{2p} \int_{\Omega} |\nabla u|^{p} dx + \frac{2^{p-1}}{p} \int_{\Omega} |\nabla \sigma|^{p} dx,
\end{align*}
\begin{align*}
\int_{\Omega} |F|^{p-1} |\nabla u| dx \le \frac{1}{2p} \int_{\Omega} |\nabla u|^{p} dx + \frac{p-1}{p} 2^{\frac{1}{p-1}} \int_{\Omega} |F|^{p} dx ,
\end{align*}
\begin{align*}
\int_{\Omega} |F|^{p-1} |\nabla \sigma| dx \le \frac{p-1}{p} \int_{\Omega} |F|^{p} dx + \frac{1}{p} \int_{\Omega} |\nabla \sigma|^{p} dx.
\end{align*}
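For the reader's convenience, we record the $\varepsilon$-weighted Young inequality from which all three bounds above follow, with suitable choices of $\varepsilon$ and of which factor plays the role of $a$: for $a, b \ge 0$ and $\varepsilon > 0$,
\begin{equation*}
ab \le \frac{\varepsilon}{p}\, a^{p} + \frac{p-1}{p}\, \varepsilon^{-\frac{1}{p-1}}\, b^{\frac{p}{p-1}}.
\end{equation*}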
We obtain~\eqref{eq:prop1} by combining these estimates.
\end{proof}
\subsection{In the interior domain}
We first turn our attention to the interior estimates. Let us fix a point $x_0 \in \Omega$ and $0<2R \le r_0$ ($r_0$ was given in \eqref{eq:capuni}). Let $u$ be a solution to \eqref{eq:diveq}; for each ball $B_{2R}=B_{2R}(x_0)\subset\subset\Omega$, we consider the unique solution $w$ to the following equation:
\begin{equation}
\label{eq:I1}
\begin{cases} \mbox{div} A(x,\nabla w) & = \ 0, \quad \ \quad \mbox{ in } B_{2R}(x_0),\\
\hspace{1.2cm} w & = \ u - \sigma, \ \mbox{ on } \partial B_{2R}(x_0).\end{cases}
\end{equation}
We first recall the following version of the interior Gehring lemma applied to the function $w$ defined in \eqref{eq:I1}, which has been studied in \cite[Theorem 6.7]{Giu}. It is also known as a kind of ``reverse'' H\"older inequality with increasing supports. Let us mention that the proof of such reverse H\"older type estimates for $\nabla u$ can be found in \cite{Phuc1, MPT2018}, and that the use of this inequality with small exponents was first proposed by G. Mingione in his fine paper \cite{Mi1} for problems involving measure data. The reader is referred to \cite{Phuc1, MPT2018, Mi3, 55QH4, HOk2019} and the references therein for the proof of this inequality and related results.
\begin{lemma}
\label{lem:reverseHolder}
Let $w$ be the solution to \eqref{eq:I1}. Then, there exist constants $\Theta = \Theta(n,p,\Lambda_1,\Lambda_2)>p$ and $C = C(n,p,\Lambda_1,\Lambda_2)>0$ such that the following estimate
\begin{equation}\label{eq:reverseHolder}
\left(\fint_{B_{\rho/2}(y)}|\nabla w|^{\Theta} dx\right)^{\frac{1}{\Theta}}\leq C\left(\fint_{B_{\rho}(y)}|\nabla w|^p dx\right)^{\frac{1}{p}}
\end{equation}
holds for all $B_{\rho}(y)\subset B_{2R}(x_0)$.
\end{lemma}
\begin{lemma}
\label{lem:I1}
Let $w$ be the unique solution to equation~\eqref{eq:I1}. Then, there exists a positive constant $C = C(n,p,\Lambda_1,\Lambda_2)>0$ such that the following comparison estimate
\begin{multline}
\label{eq:lem1b}
\fint_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx \le C \fint_{B_{2R}(x_0)} \left(|F|^p + |\nabla \sigma|^p\right) dx \\ + C \left(\fint_{B_{2R}(x_0)} |\nabla u|^pdx \right)^{\frac{p-1}{p}} \left(\fint_{B_{2R}(x_0)} |\nabla \sigma|^pdx \right)^{\frac{1}{p}}
\end{multline}
holds for all $p>1$.
\end{lemma}
\begin{proof}
By choosing $u - w - \overline{\sigma}$ as a test function in both equations~\eqref{eq:diveq} and~\eqref{eq:I1}, where $\overline{\sigma} = \sigma$ in $\overline{B}_{2R}(x_0)$, one can show that
\begin{align}
\label{eq:L1I1}
\nonumber
\int_{B_{2R}(x_0)} \left(A(x,\nabla u) - A(x, \nabla w)\right) \nabla (u - w) dx = \int_{B_{2R}(x_0)} \left(A(x,\nabla u) - A(x, \nabla w)\right) \nabla \sigma dx \\ + \int_{B_{2R}(x_0)} |F|^{p-2} F \nabla (u - w) dx - \int_{B_{2R}(x_0)} |F|^{p-2} F \nabla \sigma dx.
\end{align}
The two conditions~\eqref{eq:A1} and~\eqref{eq:A2} on the operator $A$ immediately yield that there exists a positive constant $C$ depending on $\Lambda_1, \Lambda_2$ such that
\begin{multline}
\label{eq:L1I2}
\int_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx \le C \left(\int_{B_{2R}(x_0)} \left(|\nabla u| + |\nabla w |\right)^{p-1} |\nabla \sigma| dx\right. \\ + \left. \int_{B_{2R}(x_0)} |F|^{p-1} |\nabla u - \nabla w| dx + \int_{B_{2R}(x_0)} |F|^{p-1} |\nabla \sigma| dx\right).
\end{multline}
Moreover, let us remark that
\begin{align}
\label{eq:remin}
\begin{split}
\left(|\nabla u| + |\nabla w|\right)^{p-1} & \le \left(2|\nabla u| + |\nabla u - \nabla w|\right)^{p-1} \\
&\le 4^p \left(|\nabla u|^{p-1} + |\nabla u - \nabla w|^{p-1}\right).
\end{split}
\end{align}
Combining this with~\eqref{eq:L1I2} yields
\begin{equation}
\label{eq:L10}
\int_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx \le C \left(I_1 + I_2 + I_3 + I_4\right),
\end{equation}
where
\begin{align*}
& I_1 = \int_{B_{2R}(x_0)} |\nabla u|^{p-1} |\nabla \sigma| dx,\quad I_2= \int_{B_{2R}(x_0)} |\nabla u - \nabla w|^{p-1} |\nabla \sigma| dx,\\
& I_3 = \int_{B_{2R}(x_0)} |F|^{p-1} |\nabla u - \nabla w| dx, \quad \text{and} \quad I_4 = \int_{B_{2R}(x_0)} |F|^{p-1} |\nabla \sigma| dx.
\end{align*}
For any $\varepsilon>0$, thanks to H\"older's inequality and Young's inequality, one readily obtains the following estimates for the terms $I_1,\dots,I_4$:
\begin{equation}
\label{eq:estI0}
I_1 \le \left(\int_{B_{2R}(x_0)} |\nabla \sigma|^p dx \right)^{\frac{1}{p}} \left(\int_{B_{2R}(x_0)} |\nabla u|^{p} dx \right)^{\frac{p-1}{p}},
\end{equation}
\begin{align}
\nonumber
I_2 & \le \left(\int_{B_{2R}(x_0)} |\nabla \sigma|^p dx \right)^{\frac{1}{p}} \left(\int_{B_{2R}(x_0)} |\nabla u- \nabla w |^{p} dx \right)^{\frac{p-1}{p}} \\
\label{eq:estI1}
& \le \frac{1}{p}{\varepsilon^{1-p}} \int_{B_{2R}(x_0)} |\nabla \sigma|^p dx + \frac{p-1}{p}\varepsilon \int_{B_{2R}(x_0)} |\nabla u- \nabla w |^{p} dx ,
\end{align}
\begin{align}
\nonumber
I_3 & \le \left(\int_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx \right)^{\frac{1}{p}} \left(\int_{B_{2R}(x_0)} |F|^{p} dx \right)^{\frac{p-1}{p}} \\
\label{eq:estI2}
& \le \frac{1}{p} \varepsilon \int_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx + \frac{p-1}{p}\varepsilon^{-\frac{1}{p-1}} \int_{B_{2R}(x_0)} |F|^{p} dx,
\end{align}
and
\begin{align}
\nonumber
I_4 & \le \left(\int_{B_{2R}(x_0)} |\nabla \sigma|^p dx \right)^{\frac{1}{p}} \left(\int_{B_{2R}(x_0)} |F|^{p} dx \right)^{\frac{p-1}{p}} \\
\label{eq:estI3}
& \le \frac{1}{p} \int_{B_{2R}(x_0)} |\nabla \sigma|^p dx + \frac{p-1}{p} \int_{B_{2R}(x_0)} |F|^{p} dx.
\end{align}
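Before fixing $\varepsilon$, let us make the absorption explicit: inserting \eqref{eq:estI1} and \eqref{eq:estI2} into \eqref{eq:L10} and moving the terms containing $\nabla u - \nabla w$ to the left-hand side, one gets
\begin{equation*}
\left(1 - C\varepsilon\right) \int_{B_{2R}(x_0)} |\nabla u - \nabla w|^p dx \le C \left( I_1 + I_4 + \varepsilon^{1-p} \int_{B_{2R}(x_0)} |\nabla \sigma|^p dx + \varepsilon^{-\frac{1}{p-1}} \int_{B_{2R}(x_0)} |F|^{p} dx \right),
\end{equation*}
so that any $\varepsilon$ with $C\varepsilon \le \frac{1}{2}$ leaves a factor of at least $\frac{1}{2}$ in front of the left-hand side.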
Choosing $\varepsilon>0$ small enough to absorb the terms containing $\nabla u - \nabla w$ into the left-hand side, and combining~\eqref{eq:L10} with \eqref{eq:estI0}, \eqref{eq:estI1}, \eqref{eq:estI2} and~\eqref{eq:estI3}, we conclude that \eqref{eq:lem1b} holds, where the constant $C$ depends only on $n,p,\Lambda_1, \Lambda_2$.
\end{proof}
\bigskip
\subsection{On the boundary}
Next, we establish comparison estimates on the boundary, analogous to the interior estimates above. First, since $\mathbb{R}^n \setminus \Omega$ is uniformly $p$-thick with constants $c_0, r_0>0$, let $x_0 \in \partial \Omega$ be a boundary point and, for $0<R<r_0/10$, set $\Omega_{10R} = \Omega_{10R}(x_0) = B_{10R}(x_0) \cap \Omega$. With $u$ a weak solution to \eqref{eq:diveq}, we consider the unique solution $w \in (u-\sigma)+W^{1,p}_0(\Omega_{10R})$ to the following equation:
\begin{equation}
\label{eq:B1}
\left\{ \begin{array}{rcl}
\operatorname{div}\left( {A(x,\nabla w)} \right) &=& 0 \quad ~~~~~~\text{in}\quad \Omega_{10R}(x_0), \\
w &=& u-\sigma \quad \text{on} \quad \partial \Omega_{10R}(x_0).
\end{array} \right.
\end{equation}
In what follows we extend $u$ by zero to $\mathbb{R}^n\setminus \Omega$ and $w$ by $u-\sigma$ to $\mathbb{R}^n\setminus \Omega_{10R}$. The following reverse H\"older inequality is a boundary version of Lemma \ref{lem:reverseHolder}; we refer to \cite[Lemma 3.4]{MPT2018} for the detailed proof, or to another version in \cite[Lemma 2.5]{Phuc1}, where the integrals are taken over arbitrary sufficiently small balls.
\begin{lemma}
\label{lem:reverseHolderbnd}
Let $w$ be the solution to \eqref{eq:B1}. Then, there exist two constants $\Theta = \Theta(n,p,\Lambda_1,\Lambda_2,c_0)>p$ and $C = C(n,p,\Lambda_1,\Lambda_2,c_0)>0$ such that the following estimate
\begin{equation}\label{eq:reverseHolderbnd}
\left(\fint_{B_{\rho/2}(y)}|\nabla w|^{\Theta} dx\right)^{\frac{1}{\Theta}}\leq C\left(\fint_{B_{2\rho/3}(y)}|\nabla w|^p dx\right)^{\frac{1}{p}}
\end{equation}
holds for all $B_{2\rho/3}(y) \subset B_{10R}(x_0)$, $y \in B_r(x_0)$.
\end{lemma}
We next state and prove Lemma \ref{lem:l2}, a boundary version of Lemma \ref{lem:I1} which establishes the comparison gradient estimate up to the boundary; this preparatory lemma is crucial for the proofs of our main results.
\begin{lemma}
\label{lem:l2}
Let $w$ be the unique solution to equation~\eqref{eq:B1}. Then, there exists a positive constant $C = C(n,p,\Lambda_1,\Lambda_2)>0$ such that the following comparison estimate
\begin{multline}
\label{eq:lem2}
\fint_{B_{10R}(x_0)} |\nabla u - \nabla w|^p dx \le C \fint_{B_{10R}(x_0)} {(|F|^p + |\nabla \sigma|^p) dx} \\ + C \left(\fint_{B_{10R}(x_0)} |\nabla u|^pdx \right)^{\frac{p-1}{p}} \left(\fint_{B_{10R}(x_0)} |\nabla \sigma|^pdx \right)^{\frac{1}{p}},
\end{multline}
holds for all $p>1$.
\end{lemma}
\begin{proof}
Similarly to the proof of the interior Lemma \ref{lem:I1}, we first choose $u - w - \overline{\sigma}$ as a test function in equations~\eqref{eq:diveq} and~\eqref{eq:B1}, where $\overline{\sigma} = \sigma$ in $\overline{B}_{10R}(x_0)$, which yields that
\begin{align}
\label{eq:L1I1b}
\nonumber
\int_{B_{10R}(x_0)} \left(A(x,\nabla u) - A(x, \nabla w)\right) \nabla (u - w) dx = \int_{B_{10R}(x_0)} \left(A(x,\nabla u) - A(x, \nabla w)\right) \nabla \sigma dx \\ + \int_{B_{10R}(x_0)} |F|^{p-2} F \nabla (u - w) dx - \int_{B_{10R}(x_0)} |F|^{p-2} F \nabla \sigma dx.
\end{align}
The previous assumptions on the operator $A$ (see \eqref{eq:A1} and~\eqref{eq:A2} in Section \ref{sec:A}) immediately yield that there exists a positive constant $C$ depending on $\Lambda_1, \Lambda_2$ such that
\begin{multline}
\label{eq:L1I2b}
\int_{B_{10R}(x_0)} |\nabla u - \nabla w|^p dx \le C \left(\int_{B_{10R}(x_0)} \left(|\nabla u| + |\nabla w |\right)^{p-1} |\nabla \sigma| dx\right. \\ + \left. \int_{B_{10R}(x_0)} |F|^{p-1} |\nabla u - \nabla w| dx + \int_{B_{10R}(x_0)} |F|^{p-1} |\nabla \sigma| dx\right).
\end{multline}
Inequality \eqref{eq:remin} is now applied again to get
\begin{equation}
\label{eq:L10b}
\int_{B_{10R}(x_0)} |\nabla u - \nabla w|^p dx \le C \left(I_1 + I_2 + I_3 + I_4\right),
\end{equation}
where
\begin{align*}
& I_1 = \int_{B_{10R}(x_0)} |\nabla u|^{p-1} |\nabla \sigma| dx,\quad I_2= \int_{B_{10R}(x_0)} |\nabla u - \nabla w|^{p-1} |\nabla \sigma| dx,\\
& I_3 = \int_{B_{10R}(x_0)} |F|^{p-1} |\nabla u - \nabla w| dx, \quad \text{and}\quad I_4 = \int_{B_{10R}(x_0)} |F|^{p-1} |\nabla \sigma| dx.
\end{align*}
For any $\varepsilon>0$, thanks to H\"older's inequality and Young's inequality, one readily obtains the estimate for each term $I_i$ ($i=1,2,3,4$) in much the same way as \eqref{eq:estI0}, \eqref{eq:estI1}, \eqref{eq:estI2} and \eqref{eq:estI3} in the proof of Lemma \ref{lem:I1}, but on the ball $B_{10R}(x_0)$. Then, choosing $\varepsilon>0$ small enough to absorb the terms containing $\nabla u - \nabla w$ into the left-hand side, the assertion of the lemma follows.
\end{proof}
\section{Proofs of main Theorems}
\label{sec:proofs}
This section is devoted to the proofs of our main results, Theorems \ref{theo:maintheo_lambda} and \ref{theo:main2}, and of the gradient norm estimates presented in Theorems \ref{theo:regularityM0} and \ref{theo:regularityMalpha}. The key ingredients of the proofs are some properties of maximal and cut-off maximal functions, together with the following Lemma \ref{lem:mainlem}, which can be viewed as a substitute for the Calder\'on-Zygmund-Krylov-Safonov decomposition. The reader is referred to \cite{CC1995} for the proof of this lemma.
\begin{lemma}
\label{lem:mainlem}
Let $0<\varepsilon<1$ and $R\ge R_1>0$ and the ball $Q:=B_R(x_0)$ for some $x_0\in \mathbb{R}^n$. Let $V\subset W\subset Q$ be two measurable sets satisfying two following properties:
\begin{itemize}
\item[(i)] $\mathcal{L}^n\left(V\right)<\varepsilon \mathcal{L}^n\left(B_{R_1}\right)$;
\item[(ii)] For all $x \in Q$ and $r \in (0,R_1]$, we have $B_r(x) \cap Q \subset W$ provided $\mathcal{L}^n\left(V \cap B_r(x)\right)\geq \varepsilon \mathcal{L}^n\left(B_r(x)\right)$.
\end{itemize}
Then $\mathcal{L}^n\left(V\right)\leq C \varepsilon \mathcal{L}^n\left(W\right)$ for some constant $C=C(n)$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{theo:maintheo_lambda}]
Take $0<\varepsilon<1$ and $\lambda>0$. First of all, let us set:
\begin{align*}
V_\lambda &= \{{\mathbf{M}}(|\nabla u|^p)>\varepsilon^{-a}\lambda, {\mathbf{M}}(|F|^p+|\nabla \sigma|^p) \le \varepsilon^b\lambda \}\cap \Omega; \\
W_\lambda&= \{ {\mathbf{M}}(|\nabla u|^p)> \lambda\}\cap \Omega.
\end{align*}
All we need is to verify that there exists a constant $C$ such that $\mathcal{L}^n\left(V_\lambda\right) <C \varepsilon \mathcal{L}^n\left(W_\lambda\right)$. This process is split into two steps; we start by verifying condition $(i)$ of Lemma \ref{lem:mainlem}.\\
\emph{Step 1.} If $V_\lambda \neq \emptyset$, then there exists $x_1 \in \Omega$ such that:
\begin{align}
\label{eq:x1_1}
{\mathbf{M}}(|\nabla u|^p)(x_1) &>\varepsilon^{-a}\lambda,
\end{align}
and
\begin{align}
\label{eq:x1_2}
{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)(x_1) &\le \varepsilon^b\lambda.
\end{align}
From \eqref{eq:x1_2} and the definition of maximal function ${\mathbf{M}}$, it gives us
\begin{align*}
\sup_{\rho>0}{\fint_{B_\rho(x_1)}{(|F|^p+|\nabla \sigma|^p) dx}} \le \varepsilon^b\lambda
\end{align*}
which implies
\begin{align*}
\fint_{B_\rho(x_1)}{(|F|^p+|\nabla \sigma|^p) dx} \le \varepsilon^b \lambda, \quad \forall \rho>0,
\end{align*}
and we obtain
\begin{align}\label{eq:res3}
\int_{B_{\rho}(x_1)}{(|F|^p+|\nabla \sigma|^p) dx} \le \varepsilon^b\lambda\mathcal{L}^n\left(B_\rho(x_1)\right), \quad \forall\rho>0.
\end{align}
To prove the claim, we choose $\rho=T_0 := \text{diam}(\Omega)$ in \eqref{eq:res3}, so that $\Omega \subset \overline{B}_{T_0}(x_1)$, to get:
\begin{align*}
\begin{split}
\int_{\Omega}{(|F|^p+|\nabla \sigma|^p) dx} &\le \varepsilon^b\lambda \mathcal{L}^n\left(B_{T_0}(x_1)\right)\\
&\le C\varepsilon^b\lambda \left(\frac{T_0}{R_1} \right)^n \mathcal{L}^n\left(B_{R_1}(0)\right)\\
&\le C\varepsilon^b\lambda\mathcal{L}^n\left(B_{R_1}(0)\right).
\end{split}
\end{align*}
On the other hand, from \eqref{eq:x1_1} and the fact that ${\mathbf{M}}$ is bounded from $L^1(\mathbb{R}^n)$ into $L^{1,\infty}(\mathbb{R}^n)$, it is clear that
\begin{align}
\label{eq:res4}
\mathcal{L}^n\left(V_\lambda\right) \le \mathcal{L}^n\left(\left\{ {\mathbf{M}}(|\nabla u|^p)>\varepsilon^{-a}\lambda \right\} \right) \le \frac{C}{\varepsilon^{-a} \lambda}\int_{\Omega}{|\nabla u|^p dx}.
\end{align}
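Here the boundedness of ${\mathbf{M}}$ from $L^1(\mathbb{R}^n)$ into $L^{1,\infty}(\mathbb{R}^n)$ is used in the form of the classical weak-type $(1,1)$ inequality: for every $f \in L^1(\mathbb{R}^n)$ and every $t>0$,
\begin{equation*}
\mathcal{L}^n\left(\left\{x \in \mathbb{R}^n: \ {\mathbf{M}}f(x)>t\right\}\right) \le \frac{C(n)}{t} \int_{\mathbb{R}^n} |f(x)| dx,
\end{equation*}
applied here with $f = \chi_{\Omega}|\nabla u|^p$ and $t = \varepsilon^{-a}\lambda$.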
It follows from Proposition \ref{prop1} that for $p>1$, we have:
\begin{align*}
\mathcal{L}^n\left(V_\lambda\right) \le \frac{C}{\varepsilon^{-a} \lambda}\int_\Omega{(|F|^p+|\nabla \sigma|^p) dx} \le {C\varepsilon^{a+b}}\mathcal{L}^n\left(B_{R_1}(0)\right)<C\varepsilon\mathcal{L}^n\left(B_{R_1}(0)\right),
\end{align*}
where the last estimate comes from the fact that $a+b>1$ and $0<\varepsilon<1$.
Here, one notices that the constant $C$ depends on $n$ and $T_0$. \bigskip
\emph{Step 2.}
Let $x_0$ be fixed in the interior of $\Omega$. We need to prove that for all $x \in Q = B_R(x_0)$ and $r \in (0,R_1]$, we have $B_r(x) \cap Q \subset W_\lambda$ whenever $\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) \ge \varepsilon \mathcal{L}^n\left(B_r(x)\right)$. Indeed, suppose that $V_\lambda \cap B_r(x) \neq \emptyset$ and $B_r(x)\cap \Omega \cap W_\lambda^c \neq \emptyset$. Then, there exist $x_2, x_3 \in B_r(x) \cap \Omega$ such that:
\begin{align}
\label{eq:x2}
{\mathbf{M}}(|\nabla u|^p)(x_2) \le \lambda,
\end{align}
and
\begin{align}
\label{eq:x3}
{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)(x_3) \le \varepsilon^b\lambda.
\end{align}
We now prove that there exists a constant $C = C(n,p,\Lambda_1,\Lambda_2,c_0)>0$ such that
\begin{align}
\label{eq:iigoal}
\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) < C\varepsilon\mathcal{L}^n\left(B_r(x)\right).
\end{align}
For $\rho>0$ and $y \in B_r(x)$ we have:
\begin{align}
\label{eq:res9}
\fint_{B_\rho(y)}{|\nabla u|^p dx} \le \max\left\{\sup_{0<\rho'<r}{\fint_{B_{\rho'}(y)}{\chi_{B_{2r}(x)}|\nabla u|^p dx}}; \ \sup_{\rho' \ge r}{\fint_{B_{\rho'}(y)}{|\nabla u|^p dx}} \right\}.
\end{align}
For $\rho' \ge r$, since $|y-x|<r$ and $|x-x_2|<r$, one has $B_{\rho'}(y) \subset B_{\rho'+r}(x) \subset B_{\rho'+2r}(x_2) \subset B_{3\rho'}(x_2)$, and therefore:
\begin{align*}
\sup_{\rho' \ge r}{\fint_{B_{\rho'}(y)}{|\nabla u|^p dx}} \le 3^n\sup_{\rho'>0}{\fint_{B_{3\rho'}(x_2)}{|\nabla u|^p dx}} \le 3^n{\mathbf{M}}(|\nabla u|^p)(x_2).
\end{align*}
From \eqref{eq:res9} and in the use of \eqref{eq:x2}, we get that:
\begin{align}
\label{eq:res10}
\begin{split}
\fint_{B_\rho(y)}{|\nabla u|^p dx} &\le \max \left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p)(y); 3^n{\mathbf{M}}(|\nabla u|^p)(x_2) \right\}\\
&\le \max\left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p)(y);3^n\lambda \right\}.
\end{split}
\end{align}
Taking the supremum over all $\rho>0$ on the left-hand side, this gives
\begin{align*}
{\mathbf{M}}(|\nabla u|^p)(y) \le \max\left\{ {\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p)(y);3^n\lambda \right\}, \quad \forall y \in B_r(x).
\end{align*}
Let $\varepsilon_0 = \displaystyle{\left( \frac{1}{3}\right)^{\frac{n+1}{a}}} \in (0,1)$, then for all $\lambda>0$ and $\varepsilon \in (0,\varepsilon_0)$:
\begin{align}
\label{eq:res11}
V_\lambda \cap B_r(x) = \left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p)> \varepsilon^{-a}\lambda; {\mathbf{M}}(|F|^p+|\nabla \sigma|^p) \le \varepsilon^b\lambda \right\} \cap B_r(x) \cap \Omega.
\end{align}
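Indeed, the identity \eqref{eq:res11} can be checked directly: for $\varepsilon \in (0,\varepsilon_0)$ one has $\varepsilon^{-a} > \varepsilon_0^{-a} = 3^{n+1} > 3^n$, so the pointwise bound above yields, for every $y \in B_r(x)$,
\begin{equation*}
{\mathbf{M}}(|\nabla u|^p)(y) > \varepsilon^{-a}\lambda \ \Longrightarrow \ {\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p)(y) > \varepsilon^{-a}\lambda,
\end{equation*}
while the reverse implication follows from the pointwise inequality ${\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u|^p) \le {\mathbf{M}}(|\nabla u|^p)$.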
In order to prove \eqref{eq:iigoal}, we have to consider two cases: $B_{2r}(x) \subset\subset\Omega$ (the interior case) and $B_{2r}(x) \cap \partial\Omega \neq \emptyset$ (the boundary case).\\ \medskip\\
\textbf{Case 1: $B_{2r}(x) \subset\subset\Omega$}.\\
Let $w$ be the unique solution to the equation:
\begin{equation}
\label{eq:wsol_inter}
\begin{cases}
\text{div} A(x,\nabla w) &=0, \quad \qquad \text{in} \ B_{2r}(x),\\
w &= u - \sigma, \quad \text{on} \ \partial B_{2r}(x).
\end{cases}
\end{equation}
First, the Lebesgue measure of the set $V_\lambda \cap B_r(x)$ can be split into two parts:
\begin{align}
\label{eq:res15}
\begin{split}
\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) &\le \mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u - \nabla w|^p)>\varepsilon^{-a}\lambda\right\}\cap B_r(x) \right) \\
&+ \mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla w|^p)>\varepsilon^{-a}\lambda \right\} \cap B_r(x) \right).
\end{split}
\end{align}
Each term on the right-hand side can be estimated, by means of the weak-type boundedness of ${\mathbf{M}}$ together with Lemma \ref{lem:I1}, as follows:
\begin{align}\label{eq:res16}
\begin{split}
& \mathcal{L}^n\left(\left\{ {\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u - \nabla w|^p)>\varepsilon^{-a}\lambda\right\}\cap B_r(x) \right) \\
& \le \frac{C}{\varepsilon^{-a} \lambda}\int_{B_{2r}(x)}{|\nabla u - \nabla w|^p dy} \\~~~~~
& \le \frac{Cr^n}{\varepsilon^{-a} \lambda} \left[\fint_{B_{2r}(x)}{(|F|^p+|\nabla\sigma|^p)dy}+ \left(\fint_{B_{2r}(x)} |\nabla u|^pdy \right)^{\frac{p-1}{p}} \left(\fint_{B_{2r}(x)} |\nabla \sigma|^pdy \right)^{\frac{1}{p}} \right]
\end{split}
\end{align}
and
\begin{align}\label{eq:res17}
\mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla w|^p)>\varepsilon^{-a}\lambda \right\} \cap B_r(x) \right) \le \frac{Cr^n}{(\varepsilon^{-a}\lambda)^{\frac{\Theta}{p}}}\fint_{B_{2r}(x)}{|\nabla w|^{\Theta} dy},
\end{align}
where $\Theta = \Theta(n,p,\Lambda_1,\Lambda_2)>p$ and the constant $C>0$ depends on $n,p,\Lambda_1,\Lambda_2$; these arise from the use of the reverse H\"older inequality \eqref{eq:reverseHolder} in the form:
\begin{equation*}
\left( \fint_{B_{2r}(x)}{|\nabla w|^{\Theta} dy}\right)^{1/\Theta}\leq C \left( \fint_{B_{4r}(x)}{|\nabla w|^{p} dy}\right)^{1/p}.
\end{equation*}
Here the parameter $\displaystyle{a=\frac{p}{\Theta}}$ is taken into account, since $\left(\varepsilon^{-a}\right)^{\frac{\Theta}{p}}=\varepsilon^{-1}$. It follows again from the comparison estimate of Lemma \ref{lem:I1} that
\begin{align}
\label{eq:res18}
\begin{split}
\fint_{B_{4r}(x)}{|\nabla w|^p dy} &\le C\fint_{B_{4r}(x)}{|\nabla u|^p dy} + C\fint_{B_{4r}(x)}{|\nabla u - \nabla w|^p dy}\\
&\le C\fint_{B_{4r}(x)}{|\nabla u|^p dy} +C\left[\fint_{B_{4r}(x)}{(|F|^p+|\nabla\sigma|^p)dy} \right.\\
&~~~~~~~+\left. \left(\fint_{B_{4r}(x)} |\nabla u|^pdy \right)^{\frac{p-1}{p}} \left(\fint_{B_{4r}(x)} |\nabla \sigma|^pdy \right)^{\frac{1}{p}} \right].
\end{split}
\end{align}
On the other hand, since $|x-x_2|<r$, one has $B_{4r}(x) \subset B_{5r}(x_2)$; then by \eqref{eq:x2} it follows that
\begin{align}
\label{eq:res19}
\begin{split}
\fint_{B_{4r}(x)}{|\nabla u|^p dy} &\le C \fint_{B_{5r}(x_2)}{|\nabla u|^p dy} \le C\sup_{\rho>0}{\fint_{B_{\rho}(x_2)}{|\nabla u|^p dy}}\\
&= C{\mathbf{M}}(|\nabla u|^p)(x_2) \le C\lambda.
\end{split}
\end{align}
Analogously, since $|x-x_3|<r$, one has $B_{4r}(x) \subset B_{5r}(x_3)$, and from \eqref{eq:x3} we have
\begin{align}
\label{eq:res20}
\begin{split}
\fint_{B_{4r}(x)}{(|F|^p+|\nabla \sigma|^p)dy} &\le C\fint_{B_{5r}(x_3)}{(|F|^p+|\nabla \sigma|^p)dy} \\
& \le C\sup_{\rho>0}{\fint_{B_\rho(x_3)}{(|F|^p+|\nabla \sigma|^p)dy}} \\
&= C{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)(x_3) \le C\varepsilon^b\lambda.
\end{split}
\end{align}
Plugging \eqref{eq:res19} and \eqref{eq:res20} into \eqref{eq:res16}, we obtain that
\begin{align*}
\begin{split}
\mathcal{L}^n\left(\left\{ {\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla u - \nabla w|^p)>\varepsilon^{-a}\lambda\right\}\cap B_r(x) \right) &\le \frac{Cr^n}{\varepsilon^{-a} \lambda}\left(\varepsilon^b\lambda+\varepsilon^{\frac{b}{p}}\lambda \right) = {Cr^n}\varepsilon\left(\varepsilon^{a+b-1}+\varepsilon^{a+\frac{b}{p}-1} \right)\\
&\le {Cr^n}\varepsilon\left(\varepsilon^{a+b-1}+1 \right),
\end{split}
\end{align*}
where the last inequality uses $0<\varepsilon<1$ and the choice of $b$, which ensures $a+\frac{b}{p}\ge 1$;
and, similarly, applying \eqref{eq:res18}, \eqref{eq:res19} and \eqref{eq:res20} to \eqref{eq:res17}:
\begin{align*}
\begin{split}
\mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{2r}(x)}|\nabla w|^p)>\varepsilon^{-a}\lambda \right\} \cap B_r(x) \right) \le \frac{Cr^n}{\varepsilon^{-1}\lambda^{\frac{\Theta}{p}}} \left[\lambda + \varepsilon^b\lambda+\varepsilon^{\frac{b}{p}}\lambda \right]^{\frac{\Theta}{p}}\\
= {Cr^n}{\varepsilon}\left(1+\varepsilon^b+\varepsilon^{\frac{b}{p}} \right)^{\frac{\Theta}{p}}.
\end{split}
\end{align*}
Combining the above estimates with \eqref{eq:res15}, we arrive at
\begin{align}
\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) \le {Cr^n}{\varepsilon}\left[1+\varepsilon^{a+b-1}+ \left(1+\varepsilon^b+\varepsilon^{\frac{b}{p}} \right)^{\frac{\Theta}{p}} \right]<C\varepsilon r^n,
\end{align}
which establishes the desired result.\bigskip
\textbf{Case 2: $B_{2r}(x) \cap \partial\Omega \neq \emptyset$}. Let $x_4 \in \partial \Omega$ be such that $|x_4-x| = \text{dist}(x,\partial\Omega) \le 2r$; then $B_{2r}(x) \subset B_{10r}(x_4)$. Applying Lemma \ref{lem:l2} on the ball $B_{10r}(x_4)$ yields a constant $C = C(n,p,\Lambda_1,\Lambda_2,c_0)>0$ such that:
\begin{multline*}
\fint_{B_{10r}(x_4)} |\nabla u - \nabla w|^p dy \le C \fint_{B_{10r}(x_4)} {(|F|^p + |\nabla \sigma|^p) dy} \\ + C \left(\fint_{B_{10r}(x_4)} |\nabla u|^pdy \right)^{\frac{p-1}{p}} \left(\fint_{B_{10r}(x_4)} |\nabla \sigma|^pdy \right)^{\frac{1}{p}}.
\end{multline*}
As a boundary version of \eqref{eq:res15}, on the ball $B_{10r}(x_4)$ one has
\begin{align}
\label{eq:res1}
\begin{split}
\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) &\le \mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{10r}(x_4)}|\nabla u - \nabla w|^p)>\varepsilon^{-a}\lambda\right\}\cap B_r(x) \right) \\
&+ \mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{10r}(x_4)}|\nabla w|^p)>\varepsilon^{-a}\lambda \right\} \cap B_r(x) \right).
\end{split}
\end{align}
Using the weak-type $(1,1)$ bound for ${\mathbf{M}}$ and Lemma \ref{lem:l2}, we estimate the first term on the right-hand side of \eqref{eq:res1} as
\begin{align*}
\begin{split}
&\mathcal{L}^n\left(\left\{ {\mathbf{M}}(\chi_{B_{10r}(x_4)}|\nabla u - \nabla w|^p)>\varepsilon^{-a}\lambda\right\}\cap B_r(x) \right) \le \frac{Cr^n}{\varepsilon^{-a} \lambda}\fint_{B_{10r}(x_4)}{|\nabla u - \nabla w|^p dy} \\~~~~~ &\le \frac{Cr^n}{\varepsilon^{-a} \lambda} \left[\fint_{B_{10r}(x_4)}{(|F|^p+|\nabla\sigma|^p)dy}+ \left(\fint_{B_{10r}(x_4)} |\nabla u|^pdy \right)^{\frac{p-1}{p}} \left(\fint_{B_{10r}(x_4)} |\nabla \sigma|^pdy \right)^{\frac{1}{p}} \right];
\end{split}
\end{align*}
and according to the reverse H\"older inequality in Lemma \ref{lem:reverseHolderbnd}, there exist $\Theta = \Theta(n,p,\Lambda_1,\Lambda_2,c_0)>p$ and a constant $C = C(n,p,\Lambda_1,\Lambda_2,T_0,\Theta)$ such that
\begin{align*}
\begin{split}
& \mathcal{L}^n\left(\left\{{\mathbf{M}}(\chi_{B_{10r}(x_4)}|\nabla w|^p)>\varepsilon^{-a}\lambda \right\} \cap B_r(x) \right) \le \frac{Cr^n}{\left(\varepsilon^{-a}\lambda\right)^{\frac{\Theta}{p}}}\fint_{B_{10r}(x_4)}{|\nabla w|^\Theta dy}\\
&~~~~~~~\le \frac{Cr^n}{\varepsilon^{-1}\lambda^{\frac{\Theta}{p}}} \left[ \fint_{B_{14r}(x_4)}{|\nabla u|^p dy} + \fint_{B_{14r}(x_4)}{(|F|^p+|\nabla\sigma|^p)dy} \right.\\
&~~~~~~~~~~~~+\left. \left(\fint_{B_{14r}(x_4)} |\nabla u|^pdy \right)^{\frac{p-1}{p}} \left(\fint_{B_{14r}(x_4)} |\nabla \sigma|^pdy \right)^{\frac{1}{p}} \right]^{\frac{\Theta}{p}}.
\end{split}
\end{align*}
With $x_2, x_3$ determined in the previous case and by the definition of $x_4$, since $\text{dist}(x,\partial\Omega) \le 2r$, we can easily check that
\begin{align*}
\begin{split}
\overline{B_{14r}(x_4)} \subset \overline{B_{16r}(x)} \subset \overline{B_{17r}(x_2)},\\
\overline{B_{14r}(x_4)} \subset \overline{B_{16r}(x)} \subset \overline{B_{17r}(x_3)},
\end{split}
\end{align*}
and this gives us the following two inequalities:
\begin{align*}
\fint_{B_{14r}(x_4)}{|\nabla u|^p dy} \le C{\mathbf{M}}(|\nabla u|^p)(x_2) \le C\lambda,
\end{align*}
and
\begin{align*}
\fint_{B_{14r}(x_4)}{(|F|^p+|\nabla \sigma|^p)dy} \le C{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)(x_3) \le C\varepsilon^b\lambda.
\end{align*}
Therefore, we again obtain $\mathcal{L}^n\left(V_\lambda \cap B_r(x)\right) \le C\varepsilon r^n$, and the proof is completed by applying Lemma \ref{lem:mainlem}. That is, there exists a constant $C$ depending only on $n,p,\Lambda_1,\Lambda_2,T_0,c_0,r_0,\Theta$ such that $\mathcal{L}^n\left(V_\lambda\right) \le C\varepsilon\mathcal{L}^n\left(W_\lambda\right)$ holds for any $\lambda>0$ and all $\varepsilon \in (0,\varepsilon_0)$.
\end{proof}
Theorem \ref{theo:maintheo_lambda} provides the key to the gradient norm estimate of solutions to \eqref{eq:diveq} in Lorentz spaces. Having disposed of this preliminary step, we can now proceed to the proof of Theorem \ref{theo:regularityM0}.
\begin{proof}[Proof of Theorem \ref{theo:regularityM0}]
The definition \eqref{eq:lorentz} of the quasi-norm in the Lorentz space $L^{q,s}(\Omega)$ gives:
\begin{align*}
\|{\mathbf{M}}(|\nabla u|^p)\|^s_{L^{q,s}(\Omega)} = q \int_0^\infty{\lambda^s \mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\lambda\} \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}.
\end{align*}
By the change of variable $\lambda \mapsto \varepsilon^{-a}\lambda$ in the integral above, we get that:
\begin{align*}
\|{\mathbf{M}}(|\nabla u|^p)\|^s_{L^{q,s}(\Omega)} = \varepsilon^{-as}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\varepsilon^{-a}\lambda\} \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}},
\end{align*}
and Theorem \ref{theo:maintheo_lambda} yields
\begin{align*}
\mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\varepsilon^{-a}\lambda\}\right) &\le C\varepsilon \mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\lambda\}\cap\Omega \right)\\ &~~+ \mathcal{L}^n\left(\{{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)>\varepsilon^b\lambda\}\cap\Omega \right);
\end{align*}
hence one obtains:
\begin{align*}
\|{\mathbf{M}}(|\nabla u|^p)\|^s_{L^{q,s}(\Omega)} &\le C\varepsilon^{-as+\frac{s}{q}}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}}(|\nabla u|^p)>\lambda\}\cap\Omega \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}\\
&~~~+ C\varepsilon^{-as}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)>\varepsilon^b\lambda\}\cap\Omega \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}.
\end{align*}
Performing a change of variables in the second integral on the right-hand side yields
\begin{align*}
\|{\mathbf{M}}(|\nabla u|^p)\|^s_{L^{q,s}(\Omega)} &\le C\varepsilon^{-as+\frac{s}{q}}\|{\mathbf{M}}\left(|\nabla u|^p \right)\|^s_{L^{q,s}(\Omega)}\\ &~~+C\varepsilon^{-as-bs}\|{\mathbf{M}}(|F|^p+|\nabla \sigma|^p)\|^s_{L^{q,s}(\Omega)}.
\end{align*}
Therefore, for $0<s<\infty$ and $0<q<\frac{\Theta}{p}$, so that $s\left(\frac{1}{q}-a \right)>0$, we choose $\varepsilon_0>0$ sufficiently small such that:
\begin{align*}
C\varepsilon_0^{s\left(\frac{1}{q}-a\right)} \le \frac{1}{2},
\end{align*}
so that for all $\varepsilon \in (0,\varepsilon_0)$ the first term on the right-hand side is absorbed into the left-hand side, and the gradient norm estimate follows. This is precisely the assertion of Theorem \ref{theo:regularityM0}.
\end{proof}
Our next objective is to prove a result stronger than Theorem \ref{theo:maintheo_lambda}, which exploits the cut-off fractional maximal functions effectively in our study. We now present the proof of Theorem \ref{theo:main2}.
\bigskip
\begin{proof}[Proof of Theorem \ref{theo:main2}]
Let $\lambda>0$ and let $u$ be a solution to~\eqref{eq:diveq}. The proof proceeds analogously, applying Lemma~\ref{lem:mainlem} and the preparatory lemmas of Section \ref{sec:cutoff}. We are left with the task of verifying that there exists a constant $C>0$ such that $\mathcal{L}^n\left(V_{\lambda}^{\alpha}\right) \le C \varepsilon \mathcal{L}^n\left(W_{\lambda}^{\alpha}\right)$ for $\varepsilon>0$ small enough. The process is divided into two steps.
{\bf Step 1:} Without loss of generality, we may assume that $V_{\lambda}^{\alpha} \neq \emptyset$ (otherwise there is nothing to prove). It follows that there exists at least one point $x_1 \in \Omega$ such that
\begin{align}
\label{eq:Mx1}
{\mathbf{M}} {\mathbf{M}}_{\alpha}(|\nabla u|^p)(x_1) >\varepsilon^{-a} \lambda, \ \mbox{ and } \ {\mathbf{M}}_{\alpha}(|F|^p + |\nabla\sigma|^p)(x_1) \le \varepsilon^{b} \lambda.
\end{align}
From the boundedness of the maximal function ${\mathbf{M}}$ from $L^1(\mathbb{R}^n)$ into $L^{1,\infty}(\mathbb{R}^n)$, we have
\begin{align}
\label{eq:V1}
\mathcal{L}^n\left(V_{\lambda}^{\alpha}\right) \le \mathcal{L}^n\left(\left\{{\mathbf{M}} {\mathbf{M}}_{\alpha}(|\nabla u|^p) >\varepsilon^{-a} \lambda \right\}\right) \le \frac{C}{\varepsilon^{-a} \lambda} \int_{\Omega} {\mathbf{M}}_{\alpha}(|\nabla u|^p)(y) dy.
\end{align}
On the other hand, we also have
\begin{align*}
\int_{\Omega} \mathbf{M}_\alpha(|\nabla u|^p)(y) dy&= \int_{0}^{\infty} \mathcal{L}^n\left(\left\{y\in \Omega: \ \mathbf{M}_\alpha(|\nabla u|^p)(y)>\lambda\right\}\right) d\lambda\\& \leq CT_0^n\lambda_0+\int_{\lambda_0}^{\infty} \mathcal{L}^n\left(\left\{y \in \Omega: \ \mathbf{M}_\alpha(|\nabla u|^p)(y)>\lambda\right\}\right) d\lambda\\&\leq C T_0^n\lambda_0+ C \left(\int_{\Omega}|\nabla u|^p(y)dy\right)^{\frac{n}{n-\alpha}}\int_{\lambda_0}^{\infty}\lambda^{-\frac{n}{n-\alpha}}d\lambda\\&= CT_0^n\lambda_0+ C \left(\int_{\Omega}|\nabla u|^p(y)dy\right)^{\frac{n}{n-\alpha}}\lambda_0^{-\frac{\alpha}{n-\alpha}}.
\end{align*}
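In the chain above we used the trivial bound $\mathcal{L}^n\left(\left\{y\in \Omega: \ \mathbf{M}_\alpha(|\nabla u|^p)(y)>\lambda\right\}\right) \le \mathcal{L}^n(\Omega) \le C T_0^n$ for $0<\lambda \le \lambda_0$, together with the standard weak-type estimate for the fractional maximal function $\mathbf{M}_\alpha$, $0 \le \alpha < n$:
\begin{equation*}
\mathcal{L}^n\left(\left\{y \in \mathbb{R}^n: \ \mathbf{M}_\alpha f(y) > \lambda \right\}\right) \le C \left(\frac{1}{\lambda}\int_{\mathbb{R}^n} |f(y)| dy \right)^{\frac{n}{n-\alpha}}, \qquad \lambda>0,
\end{equation*}
applied with $f = \chi_{\Omega}|\nabla u|^p$.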
Choosing
\begin{equation*}
\lambda_0=T_0^{-n+\alpha}\int_{\Omega}|\nabla u|^p(y)dy,
\end{equation*}
yields that
\begin{align}\label{eq:Mup}
\int_{\Omega} \mathbf{M}_\alpha(|\nabla u|^p)(y) dy&\leq CT_0^{\alpha}\int_{\Omega}|\nabla u|^p(y)dy.
\end{align}
Applying Proposition~\ref{prop1}, \eqref{eq:Mx1} and \eqref{eq:Mup} simultaneously, we get
\begin{align}\nonumber
\int_{\Omega} {\mathbf{M}}_{\alpha}(|\nabla u|^p)(y) dy & \le CT_0^{\alpha}\int_{\Omega}(|F|^p + |\nabla \sigma|^p)(y)dy \\ \nonumber
&\le C T_0^n T_0^{\alpha} \fint_{B_{T_0}(x_1)} (|F|^p + |\nabla \sigma|^p)(y)dy \\ \nonumber
& \le CT_0^n {\mathbf{M}}_{\alpha}(|F|^p + |\nabla \sigma|^p)(x_1) \\ \nonumber
& \le C\varepsilon^{b} \lambda \mathcal{L}^n\left(B_{T_0}(x_1)\right) \\ \nonumber
&\le C\left(\frac{T_0}{R}\right)^n \varepsilon^{b} \lambda \mathcal{L}^n\left(B_R(0)\right)\\ \label{eq:M}
& \le C \varepsilon^{b} \lambda \mathcal{L}^n\left(B_R(0)\right).
\end{align}
Applying the estimate~\eqref{eq:M} to \eqref{eq:V1}, we conclude that
$$ \mathcal{L}^n\left(V_{\lambda}^{\alpha}\right) \le C \varepsilon^{a+b} \mathcal{L}^n\left(B_R(0)\right) \le C \varepsilon \mathcal{L}^n\left(B_R(0)\right),$$
where the constant $C$ depends on $n$ and $T_0$; here $a = \frac{p}{\Theta}$ and the parameter $b$ is chosen later such that $a+b\ge 1$. Step 1 is thereby completed.\\
{\bf Step 2:} Let $x_0 \in \Omega$. In this step we verify that for all $x \in B_{T_0}(x_0)$, $r \in (0,2R]$ and $\lambda>0$, the following implication holds:
$$\mathcal{L}^n\left(V_{\lambda}^{\alpha} \cap B_r(x)\right) \ge C \varepsilon \mathcal{L}^n\left(B_r(x)\right) \Longrightarrow B_r(x) \cap \Omega \subset W_{\lambda}^{\alpha}.$$
We argue by contraposition: according to Lemma~\ref{lem:mainlem}, it suffices to show that if $V_{\lambda}^{\alpha} \cap B_r(x) \neq \emptyset$ and $B_r(x) \cap \Omega \cap (W_{\lambda}^{\alpha})^c \neq \emptyset$, then $\mathcal{L}^n\left(V_{\lambda}^{\alpha} \cap B_r(x)\right) < C\varepsilon \mathcal{L}^n\left(B_r(x)\right)$. These two assumptions mean that there exist $x_2, x_3 \in B_r(x) \cap \Omega$ such that
\begin{align}
\label{eq:MMx2} &{\mathbf{M}}{\mathbf{M}}_{\alpha} (|\nabla u|^p)(x_2) \le \lambda, \\
\label{eq:MFx3} &{\mathbf{M}}_{\alpha}(|F|^p + |\nabla \sigma|^p)(x_3) \le \varepsilon^b\lambda.
\end{align}
Applying Lemma~\ref{lem:Malpha} gives us the following assertion
\begin{align}
\begin{split}
\label{eq:Vlam}
\mathcal{L}^n\left(V_{\lambda}^{\alpha} \cap B_r(x)\right)&\le \mathcal{L}^n\left(\left\{{\mathbf{M}} {\mathbf{M}}_{\alpha}(|\nabla u|^p) >\varepsilon^{-a} \lambda \right\} \cap B_r(x)\right)\\ &\le \max \left\{Q_1, Q_2, Q_3 \right\},
\end{split}
\end{align}
where
$$ Q_1 = \mathcal{L}^n\left(\left\{{\mathbf{M}}^r {\mathbf{M}}_{\alpha}^r(|\nabla u|^p) >\varepsilon^{-a} \lambda \right\}\cap B_r(x)\right),$$
$$ Q_2 = \mathcal{L}^n\left(\left\{{\mathbf{M}}^r {\mathbf{T}}_{\alpha}^r(|\nabla u|^p) > \varepsilon^{-a}\lambda\right\}\cap B_r(x)\right),$$
and
$$ Q_3 = \mathcal{L}^n\left(\left\{{\mathbf{T}}^r {\mathbf{M}}_{\alpha}(|\nabla u|^p) >\varepsilon^{-a} \lambda \right\}\cap B_r(x)\right).$$
For any $y \in B_r(x)$, it is easy to check that $B_{\rho}(y) \subset B_{2\rho}(x) \subset B_{3\rho}(x_2)$ for all $\rho \ge r$. Using Lemma~\ref{lem:Tr} (applied with $\alpha=0$ and $f = {\mathbf{M}}_{\alpha}(|\nabla u|^p)$) together with \eqref{eq:MMx2}, we obtain
\begin{align*}
{\mathbf{T}}^r {\mathbf{M}}_{\alpha}(|\nabla u|^p)(y) \le 3^{n} {\mathbf{M}} {\mathbf{M}}_{\alpha}(|\nabla u|^p)(x_2) \le 3^{n} \lambda.
\end{align*}
Moreover, for any $\rho \in (0,r)$, $y \in B_r(x)$ and $z \in B_{\rho}(y)$, since $B_{\delta}(z) \subset B_{\delta + 3r}(x_2)$ for all $\delta \ge r$, it turns out that
\begin{align*}
\mathbf{T}^r_{\alpha}(|\nabla u|^p)(z) & = \sup_{\delta\ge r} \delta^{\alpha-n}\int_{B_{\delta}(z)} |\nabla u|^p(\xi)d\xi \\
& \le \sup_{\delta\ge r}\left(\frac{3r+\delta}{\delta}\right)^{n-\alpha}(3r+\delta)^{\alpha-n}\int_{B_{\delta+3r}(x_2)} |\nabla u|^p(\xi)d\xi \\
& \le 4^{n-\alpha} \mathbf{M}_{\alpha}(|\nabla u|^p)(x_2),
\end{align*}
which yields the estimate
\begin{align*}
{\mathbf{M}}^r {\mathbf{T}}_{\alpha}^r(|\nabla u|^p)(y) & = \sup_{0<\rho<r} \fint_{B_{\rho}(y)} {\mathbf{T}}^r_{\alpha} (|\nabla u|^p)(z)dz \\
&\le 4^{n-\alpha} \mathbf{M}_{\alpha}(|\nabla u|^p)(x_2) \le 4^{n-\alpha} \lambda.
\end{align*}
It follows that $Q_2 = Q_3 = 0$ whenever $\varepsilon^{-a}>4^{n} \ge \max\{4^{n-\alpha},3^n\}$.
Therefore, it is possible to choose $\varepsilon_0$ such that $\varepsilon_0^{-a} > 4^{n}$, and then for all $\lambda>0$ we obtain
\begin{align*}
\mathcal{L}^n\left(V_{\lambda}^{\alpha} \cap B_r(x)\right) \le Q_1, \ \mbox{ for all } \ \varepsilon \in (0,\varepsilon_0).
\end{align*}
As before, we need only consider two cases: $B_{2r}(x) \subset\subset\Omega$ (in the interior of the domain) and $B_{2r}(x) \cap \partial\Omega \neq \emptyset$ (near the boundary).\\ \medskip\\
\emph{Case 1.} $B_{2r}(x) \subset \subset \Omega$.\\
Again, let us consider $w$ the unique solution to the equation \eqref{eq:wsol_inter} and apply Lemma~\ref{lem:MrMr} to obtain that
\begin{align}\label{eq:interset}
\mathcal{L}^n\left(V_{\lambda}^{\alpha} \cap B_r(x)\right) \le \mathcal{L}^n\left( \left\{{\mathbf{M}}^{2r}_{\alpha}(\chi_{B_{2r}(x)}|\nabla u|^p) > \varepsilon^{-a} \lambda\right\} \cap B_r(x) \right) \le S_1 + S_2,
\end{align}
where
$$S_1 = \mathcal{L}^n\left( \left\{{\mathbf{M}}^{2r}_{\alpha}(\chi_{B_{2r}(x)}|\nabla u-\nabla w|^p) > \varepsilon^{-a} \lambda\right\} \cap B_r(x) \right),$$
and
$$S_2 = \mathcal{L}^n\left( \left\{{\mathbf{M}}^{2r}_{\alpha}(\chi_{B_{2r}(x)}|\nabla w|^p) > \varepsilon^{-a} \lambda\right\} \cap B_r(x) \right).$$
The boundedness of the fractional maximal function ${\mathbf{M}}_{\alpha}$ from $L^1$ into weak $L^{\frac{n}{n-\alpha}}$ gives
\begin{align*}
S_1 &\le \frac{C}{\left(\varepsilon^{-a}\lambda\right)^{\frac{n}{n-\alpha}}} \left(\int_{B_{2r}(x)} |\nabla u - \nabla w|^p dy\right)^{\frac{n}{n-\alpha}}\\
&\le \frac{C}{\left(\varepsilon^{-a}\lambda\right)^{\frac{n}{n-\alpha}}} r^{\frac{n^2}{n-\alpha}}\left(\fint_{B_{2r}(x)} |\nabla u - \nabla w|^p dy\right)^{\frac{n}{n-\alpha}}\\
&= \frac{C}{\left(\varepsilon^{-a}\lambda\right)^{\frac{n}{n-\alpha}}} r^n \left(r^{\alpha}\fint_{B_{2r}(x)} |\nabla u - \nabla w|^p dy\right)^{\frac{n}{n-\alpha}}.
\end{align*}
Thanks to Lemma \ref{lem:I1} one has
\begin{align}
\label{eq:cas1_4}
\begin{split}
r^{\alpha}\fint_{B_{2r}(x)} |\nabla u - \nabla w|^p dy &\le C r^{\alpha}\fint_{B_{2r}(x)}{(|F|^p+|\nabla \sigma|^p)dy}\\ &+C\left(r^{\alpha}\fint_{B_{2r}(x)}|\nabla u|^pdy \right)^{\frac{p-1}{p}}\left(r^{\alpha}\fint_{B_{2r}(x)}|\nabla \sigma|^pdy\right)^{\frac{1}{p}}.
\end{split}
\end{align}
As $|x-x_2|<r$ and $|x-x_3|<r$, it is not difficult to show that both $B_{2r}(x) \subset B_{4r}(x_2)$ and $B_{2r}(x) \subset B_{4r}(x_3)$ hold. Then, in the use of \eqref{eq:MMx2} and \eqref{eq:MFx3} it gets that
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{2r}(x)}{|\nabla u|^p dy}} \le C \displaystyle{r^{\alpha}\fint_{B_{4r}(x_2)}{|\nabla u|^p dy}} \le C {\mathbf{M}\mathbf{M}}_\alpha (|\nabla u|^p)(x_2) \le C\lambda;
\end{align*}
and
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{2r}(x)}{(|F|^p+|\nabla \sigma|^p) dy}} & \le C\displaystyle{r^{\alpha}\fint_{B_{4r}(x_3)}{(|F|^p+|\nabla \sigma|^p) dy}} \\
& \le C {\mathbf{M}}_{\alpha}(|F|^p+|\nabla \sigma|^p)(x_3) \\
&\le C\varepsilon^b\lambda.
\end{align*}
Combining these with inequality \eqref{eq:cas1_4}, one obtains:
\begin{align*}
\left(r^{\alpha}\fint_{B_{2r}(x)} |\nabla u - \nabla w|^p dy\right)^{\frac{n}{n-\alpha}} &\le C\left[\varepsilon^b\lambda+\lambda^{\frac{p-1}{p}}(\varepsilon^b\lambda)^{\frac{1}{p}} \right]^{\frac{n}{n-\alpha}}\\
&\le C\varepsilon^{\frac{bn}{p(n-\alpha)}}\left(\varepsilon^{\frac{b(p-1)}{p}} + 1 \right)^{\frac{n}{n-\alpha}}\lambda^{\frac{n}{n-\alpha}}.
\end{align*}
Therefore,
\begin{align}\label{eq:S1}
S_1 \le Cr^n \varepsilon^{\left(a+\frac{b}{p} \right)\frac{n}{n-\alpha}}\left(\varepsilon^{\frac{b(p-1)}{p}} +1\right)^{\frac{n}{n-\alpha}} \le Cr^n\varepsilon^{\left(a+\frac{b}{p} \right)\frac{n}{n-\alpha}}.
\end{align}
For the estimation of $S_2$, thanks to the reverse H\"older inequality in Lemma \ref{lem:reverseHolder}, there exist $\Theta=\Theta(n,p,\Lambda_1,\Lambda_2,c_0)>p$ and a constant $C=C(n,p,\Lambda_1,\Lambda_2,c_0,\Theta)>0$ such that:
\begin{multline}
\label{eq:S2est}
S_2 \le \frac{Cr^n}{(\varepsilon^{-a} \lambda)^{\frac{\Theta}{p}\frac{n}{n-\alpha}}} \left(r^{\alpha} \fint_{B_{2r}(x)}{|\nabla w|^\Theta dy}\right)^{\frac{n}{n-\alpha}} \\ \le \frac{Cr^n}{(\varepsilon^{-a} \lambda)^{\frac{\Theta}{p}\frac{n}{n-\alpha}}} \left( r^{\alpha}\fint_{B_{4r}(x)}{|\nabla w|^p dy}\right)^{\frac{\Theta n}{p(n-\alpha)}}.
\end{multline}
Following Lemma \ref{lem:I1}, one has
\begin{align*}
r^{\alpha}\fint_{B_{4r}(x)}{|\nabla w|^p dy} &\le C r^{\alpha}\fint_{B_{4r}(x)}{|\nabla u|^p dy} + Cr^{\alpha} \fint_{B_{4r}(x)}{|\nabla u-\nabla w|^p dy}\\
& \le C r^{\alpha}\fint_{B_{4r}(x)}{|\nabla u|^p dy} + C r^{\alpha}\fint_{B_{4r}(x)} {(|F|^p+|\nabla \sigma|^p)dy} \\ & \qquad + C \left(r^{\alpha}\fint_{B_{4r}(x)}{|\nabla u|^p dy} \right)^{\frac{p-1}{p}}\left(r^{\alpha}\fint_{B_{4r}(x)}{|\nabla\sigma|^p dy} \right)^{\frac{1}{p}} \\
&\le C\left[\lambda + \varepsilon^b\lambda + \lambda^{\frac{p-1}{p}}(\varepsilon^{b}\lambda)^{\frac{1}{p}} \right]\\
&= C\lambda\left(1+\varepsilon^b+\varepsilon^{\frac{b}{p}}\right).
\end{align*}
Thus, applying this to \eqref{eq:S2est}, we finally get the estimate of $S_2$:
\begin{align}\label{eq:S2}
S_2 \le Cr^n\varepsilon^{\frac{a\Theta}{p}\frac{n}{n-\alpha}}\lambda^{\frac{-\Theta n}{p(n-\alpha)}}\left[\lambda\left(1+\varepsilon^b+\varepsilon^{\frac{b}{p}}\right) \right]^{\frac{\Theta n}{p(n-\alpha)}} \le Cr^n\varepsilon^{\frac{a\Theta n}{p(n-\alpha)}}.
\end{align}
We now choose the parameter $a = \frac{p(n-\alpha)}{n\Theta}$, and $b$ is then determined such that
\begin{align*}
a+\frac{b}{p}=1.
\end{align*}
It follows that
\begin{align*}
\left(a+\frac{b}{p} \right)\frac{n}{n-\alpha}\ge 1, \quad \mbox{ and } \quad a + b > a+\frac{b}{p}=1.
\end{align*}
Therefore, from \eqref{eq:interset}, \eqref{eq:S1} and \eqref{eq:S2}, we conclude that
\begin{align*}
\mathcal{L}^n\left(V_{\lambda}^{\alpha}\cap B_r(x)\right) \le C r^n \varepsilon.
\end{align*}
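The conclusion of Case 1 rests on elementary exponent bookkeeping. The following sketch (arbitrary sample values of $n,\alpha,p,\Theta$, assuming only $\Theta>p$ and $0\le\alpha<n$) verifies that with $a=\frac{p(n-\alpha)}{n\Theta}$ and $b$ chosen so that $a+\frac{b}{p}=1$, the exponent of $\varepsilon$ in the bound for $S_1$ is at least $1$, the one for $S_2$ equals $1$, and $a+b>1$:

```python
# Exponent bookkeeping for Case 1 (sample values; Theta > p and
# 0 <= alpha < n are the only constraints assumed): with
# a = p(n-alpha)/(n*Theta) and b such that a + b/p = 1, the
# epsilon-exponent of S_1 is >= 1, that of S_2 equals 1, and a + b > 1.
n, alpha, p, Theta = 3.0, 1.0, 2.0, 2.5
a = p * (n - alpha) / (n * Theta)
b = p * (1 - a)                             # enforces a + b/p = 1
exp_S1 = (a + b / p) * n / (n - alpha)      # exponent in the bound for S_1
exp_S2 = a * Theta * n / (p * (n - alpha))  # exponent in the bound for S_2
print(exp_S1, exp_S2, a + b)
```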
\bigskip
\emph{Case 2.} $B_{2r}(x) \cap \partial\Omega \neq \emptyset$. First of all, let us take a point $x_4 \in \partial\Omega$ satisfying $|x-x_4|=d(x,\partial\Omega)<2r$. Then, it is clear that $B_{2r}(x) \subset B_{10r}(x_4)$.
Let $w$ be the unique solution to the equation:
\begin{align}
\label{eq:solw_bound}
\begin{cases}
\text{div} A(x,\nabla w) &=0, \quad \quad \text{in} \ B_{10r}(x_4)\\
w &= u - \sigma, \quad \text{on} \ \partial B_{10r}(x_4).
\end{cases}
\end{align}
On the ball $B_{10r}(x_4)$, applying Lemma \ref{lem:l2}, the comparison estimate between $\nabla u$ and $\nabla w$, one obtains:
\begin{multline*}
\fint_{B_{10r}(x_4)}{|\nabla u - \nabla w|^p dy} \le C\fint_{B_{10r}(x_4)}{(|F|^p+|\nabla \sigma|^p)dy}\\ + C\left(\fint_{B_{10r}(x_4)}{|\nabla u|^p dy} \right)^{\frac{p-1}{p}}\left(\fint_{B_{10r}(x_4)}{|\nabla \sigma|^p dy} \right)^{\frac{1}{p}}.
\end{multline*}
As the boundary analogue of \eqref{eq:interset}, we may write:
\begin{align}
\label{eq:boundset}
\begin{split}
\mathcal{L}^n\left(V_\lambda^\alpha \cap B_r(x)\right) &\le \mathcal{L}^n\left(\{{\mathbf{M}}_{\alpha}^{2r}\left(\chi_{B_{10r}(x_4)}|\nabla u|^p \right)>\varepsilon^{-a}\lambda\}\cap B_r(x) \right)\\
&\le \mathcal{L}^n\left(\{{\mathbf{M}}_{\alpha}^{2r}\left(\chi_{B_{10r}(x_4)}|\nabla u-\nabla w|^p \right)>\varepsilon^{-a}\lambda\}\cap B_r(x) \right)\\
&~~~~+ \mathcal{L}^n\left(\{{\mathbf{M}}_{\alpha}^{2r}\left(\chi_{B_{10r}(x_4)}|\nabla w|^p \right)>\varepsilon^{-a}\lambda\}\cap B_r(x) \right)\\
&= S_1^B + S_2^B.
\end{split}
\end{align}
Note that each term on the right-hand side of \eqref{eq:boundset} can be estimated similarly to the previous case. More precisely, for the first term one has:
\begin{align}\label{eq:S1B}
\begin{split}
S_1^B &= \mathcal{L}^n\left(\{{\mathbf{M}}_{\alpha}^{2r}\left(\chi_{B_{10r}(x_4)}|\nabla u-\nabla w|^p \right)>\varepsilon^{-a}\lambda\}\cap B_r(x) \right)\\
&\le \frac{C}{(\varepsilon^{-a}\lambda)^{\frac{n}{n-\alpha}}}\left(\int_{B_{10r}(x_4)}{|\nabla u-\nabla w|^p dy} \right)^{\frac{n}{n-\alpha}}\\
&\le \frac{C}{(\varepsilon^{-a}\lambda)^{\frac{n}{n-\alpha}}}r^{\frac{n^2}{n-\alpha}}\left(\fint_{B_{10r}(x_4)}{|\nabla u - \nabla w|^p dy} \right)^{\frac{n}{n-\alpha}}\\
&= \frac{C}{(\varepsilon^{-a}\lambda)^{\frac{n}{n-\alpha}}}r^n \left(r^{\alpha}\fint_{B_{10r}(x_4)}{|\nabla u - \nabla w|^p dy} \right)^{\frac{n}{n-\alpha}}.
\end{split}
\end{align}
With $x_2, x_3$ as in the previous case, since $d(x,\partial\Omega) < 2r$, one can easily check that
\begin{align*}
B_{10r}(x_4) \subset B_{12r}(x) \subset B_{13r}(x_2),\\
B_{10r}(x_4) \subset B_{12r}(x) \subset B_{13r}(x_3).
\end{align*}
Hence,
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{10r}(x_4)}{|\nabla u|^p dy}} \le {{C r^{\alpha}\fint_{B_{13r}(x_2)}{|\nabla u|^p dy} \le C {\bf M M}_{\alpha}(|\nabla u|^p)(x_2)}} \le C\lambda;
\end{align*}
and
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{10r}(x_4)}{(|F|^p+|\nabla \sigma|^p)dy}} & \le C r^{\alpha}\fint_{B_{13r}(x_3)}{(|F|^p+|\nabla \sigma|^p)dy} \\ & \le C{\mathbf{M}}_{\alpha}(|F|^p+|\nabla \sigma|^p)(x_3) \le C\varepsilon^b\lambda.
\end{align*}
Both of them are applied to \eqref{eq:S1B} to get:
\begin{align*}
\left(r^{\alpha}\fint_{B_{10r}(x_4)} |\nabla u-\nabla w|^p dy \right)^{\frac{n}{n-\alpha}} &\le C \left[\varepsilon^b\lambda+\lambda^{\frac{p-1}{p}}(\varepsilon^b\lambda)^{\frac{1}{p}} \right]^{\frac{n}{n-\alpha}}\\
&\le C\left(\varepsilon^{\frac{b}{p}}\lambda\right)^{\frac{n}{n-\alpha}}\left[ \varepsilon^{\frac{b(p-1)}{p}}+1\right]^{\frac{n}{n-\alpha}}.
\end{align*}
Therefore,
\begin{align*}
S_1^B \le Cr^n\varepsilon^{\left( a+\frac{b}{p}\right)\frac{n}{n-\alpha}}\left(\varepsilon^{\frac{b(p-1)}{p}}+1 \right)^{\frac{n}{n-\alpha}} \le Cr^n\varepsilon^{\left(a+\frac{b}{p} \right)\frac{n}{n-\alpha}}.
\end{align*}
For the estimation of $S_2^B$, the reverse H\"older inequality of Lemma \ref{lem:reverseHolderbnd} is applied to show that there exist $\Theta=\Theta(n,p,\Lambda_1,\Lambda_2,c_0)>p$ and a constant $C = C(n,p,\Lambda_1,\Lambda_2,c_0,\Theta)>0$ such that:
\begin{align}
\label{eq:S2est_bnd}
\begin{split}
S_2^B &\le \frac{Cr^n}{(\varepsilon^{-a} \lambda)^{\frac{\Theta}{p}\frac{n}{n-\alpha}}} \left( r^{\alpha}\fint_{B_{10r}(x_4)}{|\nabla w|^\Theta dy}\right)^{\frac{n}{n-\alpha}} \\ &\le \frac{Cr^n}{(\varepsilon^{-a} \lambda)^{\frac{\Theta}{p}\frac{n}{n-\alpha}}} \left( r^{\alpha}\fint_{B_{14r}(x_4)}{|\nabla w|^p dy}\right)^{\frac{\Theta n}{p(n-\alpha)}}.
\end{split}
\end{align}
From the choice of $x_2, x_3$ in the previous case, it is straightforward to check that
\begin{align*}
B_{14r}(x_4) \subset B_{16r}(x)\subset B_{17r}(x_2),\\
B_{14r}(x_4) \subset B_{16r}(x)\subset B_{17r}(x_3),
\end{align*}
which implies
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{14r}(x_4)}{|\nabla u|^p dy}} \le C r^{\alpha}\fint_{B_{17r}(x_2)}{|\nabla u|^pdy} \le C{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)(x_2) \le C\lambda,
\end{align*}
and
\begin{align*}
\displaystyle{r^{\alpha}\fint_{B_{14r}(x_4)}{(|F|^p+|\nabla \sigma|^p)dy}} & \le C r^{\alpha}\fint_{B_{17r}(x_3)}{(|F|^p+|\nabla \sigma|^p)dy} \\ &\le C{\mathbf{M}}_{\alpha}(|F|^p+|\nabla \sigma|^p)(x_3) \le C\varepsilon^b\lambda.
\end{align*}
The use of Lemma \ref{lem:l2} enables us to write:
\begin{align*}
r^{\alpha}\fint_{B_{14r}(x_4)}{|\nabla w|^p dy} &\le C r^{\alpha}\fint_{B_{14r}(x_4)}{|\nabla u|^p dy} + C r^{\alpha}\fint_{B_{14r}(x_4)}{|\nabla u - \nabla w|^p dy}\\ & \le C r^{\alpha}\fint_{B_{20r}(x_4)}{|\nabla u|^p dy} + Cr^{\alpha}\fint_{B_{20r}(x_4)}{(|F|^p+|\nabla \sigma|^p)dy} \\ & \quad + C\left(r^{\alpha}\fint_{B_{20r}(x_4)}{|\nabla u|^p dy} \right)^{\frac{p-1}{p}}\left(r^{\alpha}\fint_{B_{20r}(x_4)}{|\nabla\sigma|^p dy} \right)^{\frac{1}{p}} \\
&\le C\left[\lambda + \varepsilon^b\lambda + \lambda^{\frac{p-1}{p}}(\varepsilon^b\lambda)^{\frac{1}{p}} \right]\\
&= C\lambda\left[1+\varepsilon^b+\varepsilon^{\frac{b}{p}}\right].
\end{align*}
Thus, going back to \eqref{eq:S2est_bnd}, we finally get the estimate of $S_2^B$ as follows
\begin{align*}
S_2^B \le Cr^n\varepsilon^{\frac{a\Theta}{p}\frac{n}{n-\alpha}}\lambda^{\frac{-\Theta n}{p(n-\alpha)}}\left[\lambda\left(1+\varepsilon^b+\varepsilon^{\frac{b}{p}}\right) \right]^{\frac{\Theta n}{p(n-\alpha)}} \le Cr^n\varepsilon^{\frac{a\Theta n}{p(n-\alpha)}}.
\end{align*}
Therefore, as in the previous case, we choose $a = \frac{p(n-\alpha)}{n\Theta}$, and $b$ is taken according to the formula
\begin{align*}
a+\frac{b}{p}=1,
\end{align*}
which yields the desired result.
\end{proof}
It remains to prove the final theorem, in which we establish the $\mathbf{M}_\alpha$ gradient norm estimate of the solution.
Once Theorem \ref{theo:main2} is available, the proof of Theorem \ref{theo:regularityMalpha} follows along the same lines as that of Theorem \ref{theo:regularityM0}.
\begin{proof}[Proof of Theorem \ref{theo:regularityMalpha}]
Let us rewrite the definition of the norm in the Lorentz space $L^{s,q}(\Omega)$ from \eqref{eq:lorentz} as:
\begin{align*}
\|{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)\|^s_{L^{s,q}(\Omega)} = q \int_0^\infty{\lambda^s \mathcal{L}^n\left(\{{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)>\lambda\} \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}.
\end{align*}
Changing the variable $\lambda$ to $\varepsilon^{-a}\lambda$ within the integral, we get that:
\begin{align*}
\|{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)\|^s_{L^{s,q}(\Omega)} = \varepsilon^{-as}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)>\varepsilon^{-a}\lambda\} \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}.
\end{align*}
On the other hand, applying Theorem \ref{theo:main2} implies that
\begin{align*}
\mathcal{L}^n\left(\{{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)>\varepsilon^{-a}\lambda\}\right) &\le C\varepsilon \mathcal{L}^n\left(\{{\mathbf{M}\mathbf{M}_{\alpha}}(|\nabla u|^p)>\lambda\}\cap\Omega \right)\\ &~~+ \mathcal{L}^n\left(\{{\mathbf{M}}_\alpha(|F|^p+|\nabla \sigma|^p)>\varepsilon^b\lambda\}\cap\Omega \right),
\end{align*}
which gives
\begin{align*}
\|{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)\|^s_{L^{s,q}(\Omega)} &\le C\varepsilon^{-as+\frac{s}{q}}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)>\lambda\}\cap\Omega \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}\\
&~~~+ C\varepsilon^{-as}q\int_0^\infty{\lambda^s\mathcal{L}^n\left(\{{\mathbf{M}}_\alpha(|F|^p+|\nabla \sigma|^p)>\varepsilon^b\lambda\}\cap\Omega \right)^{\frac{s}{q}}\frac{d\lambda}{\lambda}}.
\end{align*}
Performing a change of variable in the second integral on the right-hand side, we get
\begin{align*}
\|{\mathbf{M}\mathbf{M}}_\alpha(|\nabla u|^p)\|^s_{L^{s,q}(\Omega)} &\le C\varepsilon^{-as+\frac{s}{q}}\|{\mathbf{M}\mathbf{M}}_\alpha\left(|\nabla u|^p \right)\|^s_{L^{s,q}(\Omega)}\\ &~~+C\varepsilon^{-as-bs}\|{\mathbf{M}}_\alpha(|F|^p+|\nabla \sigma|^p)\|^s_{L^{s,q}(\Omega)}.
\end{align*}
Therefore, for $0<s<\infty$ and $0<q<\frac{1}{a}=\frac{n}{n-\alpha}\cdot\frac{\Theta}{p}$, it turns out that $s\left(\frac{1}{q}-a \right)>0$. Then, it is possible to choose $\varepsilon_0>0$ sufficiently small satisfying:
\begin{align*}
C\varepsilon_0^{s\left(\frac{1}{q}-a\right)} \le \frac{1}{2},
\end{align*}
so that the first term on the right-hand side can be absorbed into the left-hand side; this finishes the proof, for all $\varepsilon \in (0,\varepsilon_0)$.
\end{proof}
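The last step of this proof is a standard absorption argument: from an inequality of the form $X \le \theta X + K$ with $\theta \le \frac{1}{2}$ and $X$ finite, one deduces $X \le 2K$. A minimal sketch of this implication (the function name and the numbers below are ours, purely illustrative):

```python
# Minimal sketch of the absorption argument: if X <= theta*X + K with
# theta <= 1/2 and X finite, then X <= 2K.
def absorb(theta, K, X):
    assert theta <= 0.5 and X <= theta * X + K   # hypotheses
    return X <= 2 * K                            # conclusion

print(absorb(0.4, 3.0, 4.9))   # sample numbers: 4.9 <= 0.4*4.9 + 3.0; prints True
```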
\section{Introduction}
In this work, we are interested in the following complex-valued semilinear heat equation
\begin{equation}\label{equ:problem}
\left\{
\begin{array}{rcl}
\partial_t u &=& \Delta u + F(u), t \in [0,T), \\[0.2cm]
u(0) &=& u_0 \in L^\infty,
\end{array}
\right.
\end{equation}
where $F(u) = u^p$ and $u(t): \mathbb{R}^n \to \mathbb{C}$, $L^\infty := L^\infty(\mathbb{R}^n, \mathbb{C})$, $p > 1$. Though our results hold only when $p \in \mathbb{N}$ (see Theorem \ref{Theorem-profile-complex} below), we keep $p\in \mathbb{R}$ in the introduction, in order to broaden the discussion.
In particular, when $p =2$, model \eqref{equ:problem} evidently becomes
\begin{equation}\label{equation-problem-p=2}
\left\{
\begin{array}{rcl}
\partial_t u &=& \Delta u + u^2, t \in [0,T), \\[0.2cm]
u(0) &=& u_0 \in L^\infty.
\end{array}
\right.
\end{equation}
We remark that equation \eqref{equation-problem-p=2} is closely related to the Constantin-Lax-Majda equation with a viscosity term,
which is a one-dimensional model for the vorticity equation in fluids.
The readers can see more in some of the typical works: Constantin, Lax, Majda \cite{CLMCPAM1985}, Guo, Ninomiya and Yanagida
in \cite{GNYTAMS2013}, Okamoto, Sakajo and Wunsch \cite{OSWnon2008}, Sakajo in \cite{SMAthScitokyo2003} and \cite{Snon2003}, Schochet \cite{SCPAM1986}.
The local Cauchy problem for model \eqref{equ:problem} can be solved (locally in time)
in $L^{\infty}(\mathbb{R}^n,\mathbb{C})$ if $p$ is an integer, by using a fixed-point argument. However, when $p$ is not an integer, the local Cauchy problem has not been solved yet, to our knowledge. This probably comes from the discontinuity of $F(u)$ on $\{u \in \mathbb{R}^*_-\}$. In addition, let us remark that equation
\eqref{equ:problem} has the following family of space independent solutions:
\begin{equation}\label{solu-indipendent-p-notin-Q}
u_k (t) = \kappa e^{i \frac{2k\pi}{p-1}} \left( T -t\right)^{-\frac{1}{p-1}},\text{ for any } k \in \mathbb{Z},
\end{equation}
where $\kappa = (p-1)^{-\frac{1}{p-1}}$.
If $p \in \mathbb{Q}$, this makes a finite number of solutions.
If $p \notin \mathbb{Q},$ then the set
\begin{equation}\label{rescaling-u_k}
\left\{ u_k(t) \frac{(T - t)^{\frac{1}{p-1}}}{\kappa} \left| \right. \quad k \in \mathbb{Z} \right\},
\end{equation}
is countable and dense in the unit circle of $\mathbb{C}$.
\noindent This latter case ($p \notin \mathbb{Q}$) is somehow intermediate between
the case $p \in \mathbb{Q}$ and the case of the twin PDE
\begin{equation}\label{heat-real-|u|p-1u}
\partial_t u = \Delta u + |u|^{p-1} u,
\end{equation}
which admits the following family of space independent solutions
$$ u_{\theta} (t) = \kappa e^{i \theta} (T -t)^{-\frac{1}{p-1}},$$
for any $\theta \in \mathbb{R}$, which turns out to be infinite and covers all the unit circle, after rescaling as in \eqref{rescaling-u_k}. In fact, equation \eqref{heat-real-|u|p-1u} is certainly much easier than equation \eqref{equ:problem}. As a matter of fact, it reduces to the scalar case thanks to a modulation technique, as Filippas and Merle did in \cite{FMjde95}.
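To illustrate the family \eqref{solu-indipendent-p-notin-Q}, the following sketch (sample values $p=3$, $k=1$, $T=1$, $t=0.5$; purely illustrative) checks numerically that $u_k(t)=\kappa e^{\frac{2ik\pi}{p-1}}(T-t)^{-\frac{1}{p-1}}$ satisfies the ODE $u'=u^p$, using the exact derivative $u_k'(t)=\frac{u_k(t)}{(p-1)(T-t)}$:

```python
# Illustrative check (sample values p = 3, k = 1, T = 1, t = 0.5) that the
# space-independent solutions
#   u_k(t) = kappa * exp(2*pi*i*k/(p-1)) * (T-t)^(-1/(p-1))
# satisfy the ODE u' = u^p; the exact derivative is u' = u / ((p-1)(T-t)).
import cmath

p, k, T, t = 3, 1, 1.0, 0.5
kappa = (p - 1) ** (-1.0 / (p - 1))
u = kappa * cmath.exp(2j * cmath.pi * k / (p - 1)) * (T - t) ** (-1.0 / (p - 1))
du = u / ((p - 1) * (T - t))   # exact time derivative of u_k
print(abs(du - u ** p))        # vanishes up to rounding
```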
\bigskip
Since the Cauchy problem for equation \eqref{equ:problem} is already hard when $p \notin \mathbb{N}$, and given that we are more interested in the asymptotic blowup behavior, rather than the well-posedness issue, we will focus in our paper on the case $p \in \mathbb{N}$.
In this case, from the Cauchy theory, the solution of equation \eqref{equ:problem} either exists globally or blows up in finite time. Let us recall that the solution $u(t) = u_1 (t) + i u_2(t)$ blows up in finite time $T < +\infty$ if and only if it exists for all $t \in [0,T)$ and
$$\limsup_{t \to T} \left(\|u_1(t)\|_{L^{\infty}} + \| u_2(t)\|_{L^{\infty}}\right) = +\infty.$$
If $u$ blows up in finite time $T$, a point $a \in \mathbb{R}^n$ is called a blowup point if and only if
there exists a sequence $\{(a_j, t_j)\} \to (a,T)$ as $j \to +\infty$ such that
$$ |u_1(a_j,t_j)| + |u_2(a_j,t_j)| \to +\infty \text{ as } j \to +\infty.$$
\bigskip
The blowup phenomena occur for evolution equations in general, and in semilinear heat equations in particular. Accordingly, an interesting question is to construct for those equations a solution which blows up in finite time and to describe its blowup behavior. These questions are being studied by many authors in the world. Let us recall some blowup results connected to our equation:
\bigskip
\textbf{ $(i)$ The real case:} Bricmont and
Kupiainen \cite{BKnon94} constructed a real positive solution to \eqref{equ:problem} for all $p> 1$, which blows up in finite time $T$, only at the origin and they also gave the profile of the solution such that
$$ \left\| (T-t)^{\frac{1}{p-1}} u (x,t) - f_0 \left( \frac{x}{\sqrt{(T-t)|\ln (T - t)|} } \right) \right\|_{L^{\infty}(\mathbb{R}^n)} \leq \frac{C}{ 1 + \sqrt{|\ln (T-t)|}},$$
where the profile $f_0 $ is defined as follows
\begin{equation}\label{defini-f-0}
f_0 (z) = \left( p-1 + \frac{(p-1)^2 |z|^2}{4p}\right)^{-\frac{1}{p-1}}.
\end{equation}
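As a quick sanity check on \eqref{defini-f-0} (illustrative only; the function name below is ours), note that $f_0(0)=(p-1)^{-\frac{1}{p-1}}=\kappa$, the amplitude of the constant blowup solution, and that $f_0$ decreases in $|z|$:

```python
# Illustrative evaluation of the blowup profile f_0 from (defini-f-0):
# f_0(z) = (p-1 + (p-1)^2 |z|^2 / (4p))^(-1/(p-1)).
def f0(z, p):
    return (p - 1 + (p - 1) ** 2 * abs(z) ** 2 / (4 * p)) ** (-1.0 / (p - 1))

p = 2
kappa = (p - 1) ** (-1.0 / (p - 1))
print(f0(0.0, p), kappa)        # f_0(0) = kappa (= 1 for p = 2)
print(f0(1.0, p) < f0(0.0, p))  # the profile decreases in |z|
```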
\bigskip
\noindent In addition to that, with a different method, Herrero and Vel\'azquez in \cite{HVcpde92} obtained the same result. Later, in \cite{MZdm97} Merle and Zaag simplified the proof of \cite{BKnon94} and proposed the following two-step method (see also the note \cite{MZasp96}):
\begin{itemize}
\item[-] Reduction of the infinite dimensional problem to a finite dimensional one.
\item[-] Solution of the finite dimensional problem thanks to a topological argument based on Index theory.
\end{itemize}
We would like to mention that this method has been successful in various situations such as the work of Tayachi and Zaag \cite{TZpre15}, and also the works of Ghoul, Nguyen and Zaag in \cite{GNZpre16a}, \cite{GNZpre16b}, and \cite{GNZsysparabolic2016}. In those papers, the considered equations were scale invariant; this property was believed to be essential for the construction. Fortunately, with the work of Ebde and Zaag \cite{EZsema11} for the following equation
$$ \partial_t u = \Delta u + |u|^{p-1} u + f (u, \nabla u),$$
where
$$ |f(u,\nabla u ) | \leq C ( 1 + |u|^q + | \nabla u|^{q'}) \text{ with } q < p , q' < \frac{2p}{p+1},$$
that belief was proved to be wrong.
\noindent Going in the same direction as \cite{EZsema11}, Nguyen and Zaag in \cite{NZens16}, have achieved the construction with a stronger perturbation
$$ \partial_t u = \Delta u + | u|^{p-1} u + \frac{\mu |u|^{p - 1} u }{ \ln ^a (2 + u^2)} ,$$
where $\mu \in \mathbb{R}, a > 0 $. Though the results of \cite{EZsema11} and \cite{NZens16} show that the invariance under dilations of the equation is not necessary in the construction method, one might think that the constructions of \cite{EZsema11} and \cite{NZens16} work because the authors adopt a perturbative method around the pure power case $ F(u) = |u|^{p-1} u$. While this is true for \cite{EZsema11}, it is not the case for \cite{NZens16}. In order to fully prove that the construction does not need the invariance by dilation, Duong, Nguyen and Zaag considered in \cite{DNZtunisian-2017} the following equation
$$ \partial_t u = \Delta u + |u|^{p - 1} u \ln ^\alpha ( 2 + u^2),$$
where $\alpha \in \mathbb{R} $ and $ p> 1$, so that there is no invariance under dilation, not even for the main term of the nonlinearity. They succeeded in constructing a stable blowup solution for that equation. Following the above discussion, that work has to be considered as a breakthrough.
\bigskip
\noindent Let us mention that a classification of the blowup behavior of \eqref{equation-problem-p=2} was made available by many authors such as Herrero and Vel\'azquez in \cite{HVcpde92} and Vel\'azquez in \cite{VELcpde92}, \cite{VELtams93}, \cite{VELiumj93} (see also Zaag in \cite{ZAAcmp02} for some refinement). More precisely, and just to stay in one space dimension for simplicity, it is proven in \cite{HVcpde92} that if $u$ is a real solution of \eqref{equ:problem}, which blows up in finite time $T$, and $a$ is a given blowup point, then:
\begin{itemize}
\item[$A.$] Either
$$\sup_{|x - a| \leq K \sqrt{(T - t) |\ln (T - t)|} } \left| \left( T - t\right)^{\frac{1}{p-1}} u(x,t) - f_0\left( \frac{x - a}{\sqrt{(T - t) |\ln (T - t)|}}\right)\right| \to 0 \text{ as } t \to T,$$
for any $K > 0$ where $f_0 (z) $ is defined in \eqref{defini-f-0}.
\item[$B.$] Or, there exist $m \geq 2, m \in \mathbb{N}$ and $C_m > 0$ such that
$$\sup_{|x - a| \leq K (T - t)^{\frac{1}{ 2 m}} } \left| \left( T - t\right)^{\frac{1}{p-1}}u(x,t) - f_m\left( \frac{C_m(x - a)}{ (T - t)^{\frac{1}{2m}}}\right)\right| \to 0 \text{ as } t \to T,$$
for any $K > 0$, where $f_m (z) = (p-1 + |z|^{2m})^{-\frac{1}{p-1}}$.
\end{itemize}
\medskip
\textbf{ $(ii)$ The complex case:} The blowup question for the complex-valued parabolic equations has been studied intensively by many authors, in particular for the Complex Ginzburg Landau (CGL) equation
\begin{equation}\label{ginzburg-landau}
\partial_t u = (1 + i \beta ) \Delta u + (1 + i \delta) | u|^{p-1} u + \gamma u.
\end{equation}
This is the case of an earlier work of Zaag in \cite{ZAAihn98} for equation \eqref{ginzburg-landau} when $\beta = 0$ and $ \delta$ is small enough. Later, Masmoudi and Zaag in \cite{MZjfa08} generalized the result of \cite{ZAAihn98} and constructed a blowup solution for \eqref{ginzburg-landau} with $ p - \delta^2 - \beta \delta - \beta \delta p >0 $ such that the solution satisfies the following
\begin{eqnarray*}
\left\| (T - t)^{\frac{1 + i \delta}{p-1} } | \ln (T -t)|^{-i \mu } u(x,t) - \left(p-1 + \frac{b_{sub} |x|^2}{(T-t)|\ln (T -t)|} \right)^{- \frac{1 + i \delta}{p-1}} \right\|_{L^\infty} \leq \frac{C}{1 + \sqrt{| \ln(T-t)|}},
\end{eqnarray*}
where
$$b_{sub} = \frac{(p-1)^2}{ 4 ( p - \delta^2 - \beta \delta - \beta \delta p )} > 0 .$$
Then, Nouaili and Zaag in \cite{NZ2017} constructed for \eqref{ginzburg-landau} (in the critical case where $\beta = 0$ and $ p = \delta^2 $) a blowup solution
satisfying
\begin{eqnarray*}
\left\| (T - t)^{\frac{1 + i \delta}{p-1} } | \ln (T -t)|^{-i \mu } u(x,t) - \kappa^{-i \delta }\left(p-1 + \frac{b_{cri} |x|^2}{(T-t)|\ln (T -t)|^{\frac{1}{2}}} \right)^{- \frac{1 + i \delta}{p-1}} \right\|_{L^\infty} \leq \frac{C}{1 + |\ln(T-t)|^{\frac{1}{4}} },
\end{eqnarray*}
with
$$ b_{cri} = \frac{(p-1)^2}{ 8 \sqrt{p(p+1)}}, \quad \mu = \frac{\delta}{8 b_{cri} }.$$
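As a consistency check (an illustrative sketch of ours, based on the formula for $b_{sub}$ quoted above), when $\beta=\delta=0$ the coefficient $b_{sub}$ reduces to $\frac{(p-1)^2}{4p}$, which is precisely the $|z|^2$-coefficient in the real profile $f_0$ of \eqref{defini-f-0}:

```python
# Illustrative consistency check: the CGL coefficient quoted above,
# b_sub = (p-1)^2 / (4*(p - delta^2 - beta*delta - beta*delta*p)),
# reduces for beta = delta = 0 to (p-1)^2/(4p), the |z|^2-coefficient of the
# real profile f_0 in (defini-f-0).
def b_sub(p, beta, delta):
    return (p - 1) ** 2 / (4 * (p - delta ** 2 - beta * delta - beta * delta * p))

p = 2
print(b_sub(p, 0.0, 0.0), (p - 1) ** 2 / (4 * p))   # both equal 0.125
```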
\bigskip
\noindent
As for equation \eqref{equation-problem-p=2}, there are many works done in dimension one, such as the work of Guo, Ninomiya, Shimojo and Yanagida, who proved in \cite{GNYTAMS2013} the following results (see Theorems 1.2, 1.3 and 1.5 in that work):
\medskip
$(i)$ \noindent \textit{ (A Fourier-based blowup criterion). If the Fourier transform of the initial data of \eqref{equation-problem-p=2} is real and positive, then the solution blows up in finite time.}
\medskip
$(ii)$ \noindent \textit{ (A simultaneous blowup criterion in dimension one). Assume that the initial data $u^0 = u_1^0 + i u_2^0$ satisfies
$$ u_1^0 \text{ is even }, u_2^0 \text{ is odd with } u_2^0 > 0 \text{ for } x > 0.$$
Then, the fact that the blowup set is compact implies that $u_1$ and $u_2$ blow up simultaneously. }
\medskip
$(iii)$ \noindent \textit{ Assume that $u_0 = u_1^0 + i u_2^0$ satisfies
\begin{eqnarray*}
u_1^0, u_2 ^0 \in C^1 (\mathbb{R}^n), 0 \leq u_1^0 \leq M, u_1^0 \neq M, 0 < u_2^0 \leq L,
\end{eqnarray*}
\begin{eqnarray*}
\lim_{|x| \to + \infty} u_1^0 (x) = M \text{ and } \lim_{|x| \to +\infty} u_2^0 = 0,
\end{eqnarray*}
for some constants $L, M $. Then, the solution $u= u_1 + i u_2$ of \eqref{equation-problem-p=2}, with initial data $u^0$, blows up at time $T (M)$, with $u_2 (t) \not\equiv 0$. Moreover, the real part $u_1(t)$ blows up only at space infinity and $u_2 (t)$ remains bounded. }
\medskip
\noindent Still for equation \eqref{equation-problem-p=2}, Nouaili and Zaag constructed in \cite{NZCPDE2015} a complex solution $ u = u_1+ i u_2 ,$ which blows up in finite time $T$ only at the origin. Moreover, the solution satisfies the following asymptotic behavior
$$\left\| (T -t) u (.,t) - f \left( \frac{.}{ \sqrt{ (T - t) |\ln (T -t)|}}\right) \right\|_{L^{\infty}} \to 0 \text{ as } t \to T,$$
where $f(z) = \frac{1}{ 8 + |z|^2}$ and the imaginary part satisfies the following estimate for all $ K > 0$
\begin{equation}\label{asymptotic-imiginary-nouaili-zaag}
\sup_{|x| \leq K \sqrt{T -t}} \left| (T -t) \tilde v (x, t) - \frac{1}{|\ln (T -t)|^2} \sum_{j=1}^n C_j \left( \frac{x_j^2}{T - t} - 2\right) \right| \leq \frac{C(K)}{ | \ln (T - t)|^\alpha},
\end{equation}
for some $(C_i)_i \neq (0,...,0)$ and $ 2 < \alpha < 2 + \eta$, with $\eta$ small enough. Note that the real and the imaginary parts blow up simultaneously at the origin. Note also that \cite{NZCPDE2015} leaves unanswered the question of the derivation of the profile of the imaginary part, and this is precisely our aim in this paper, not only for equation \eqref{equation-problem-p=2}, but also for equation \eqref{equ:problem} with $ p \in \mathbb{N}, p \geq 2$.
\medskip
\noindent Before stating our result (see Theorem \ref{Theorem-profile-complex} below), we would like to mention some classification results by Harada for blowup solutions of \eqref{equation-problem-p=2}. As a matter of fact, in \cite{HaradaJFA2016}, he classified all blowup solutions of \eqref{equation-problem-p=2} in dimension one, under some reasonable assumption (see \eqref{condition-Harada}, \eqref{harada-imaginary-condi}), as follows (see Theorems 1.4, 1.5 and 1.6 in that work):
\medskip
\noindent \textit{ Consider $u = u_1 + i u_2 $ a blowup solution of \eqref{equation-problem-p=2} in one dimension space with blowup time $T$ and blowup point $\xi$ which satisfies
\begin{equation}\label{condition-Harada}
\sup_{0< t < T} (T-t) \| u(t)\|_{L^\infty} < +\infty.
\end{equation}
Assume in addition that
\begin{equation}\label{harada-imaginary-condi}
\lim_{s \to +\infty} \| w_2(s) \|_{L^2_\rho(\mathbb{R})} = 0 \text{ and } w_2 \not \equiv 0,
\end{equation}
where $\rho$ is defined as follows
\begin{equation}\label{defi-rho-n=1}
\rho (y) = \frac{e^{-\frac{y^2}{4}}}{\sqrt{4 \pi}},
\end{equation}
and $w_2$ is defined by the following change of variables (also called similarity variables):
$$ w_1 (y,s) = (T-t) u_1( \xi + e^{-\frac{s}{2}}y, t ) \text{ and } w_2(y,s) = (T-t) u_2( \xi + e^{-\frac{s}{2}}y, t ) , \text{ where } t = T -e^{-s}.$$
Then, one of the following cases occurs
\begin{eqnarray*}
&(C_1)& \left\{ \begin{array}{rcl}
w_1 &=& 1 - \frac{c_0}{s} h_2 + O (\frac{\ln s}{s^2} ) \text{ in } L^2_{\rho} (\mathbb{R}),\\[0.3cm]
w_2 &= &c_2 s^{- m} e^{- \frac{(m-2)s}{2}} h_m + O \left( s^{-(m+1)} e^{- \frac{(m-2)s}{2}} \ln s\right) \text{ in } L^2_\rho(\mathbb{R}), m \geq 2.
\end{array} \right.\\
&(C_2)& \left\{ \begin{array}{rcl}
w_1 &=& 1 - c_1 e^{- (k-1)s} h_{2k} + O ( e^{- \frac{(2k - 1)s}{2}} ) \text{ in } L^2_{\rho} (\mathbb{R}),\\[0.3cm]
w_2 &=& c_2 e^{- \frac{(m-2)s}{2}} h_m + O \left( e^{- \frac{(m-1)s}{2}} \right) \text{ in } L^2_\rho(\mathbb{R}), k \geq 2, m \geq 2k.
\end{array} \right.
\end{eqnarray*}
where $c_0 = \frac{1}{8}$, $c_1 > 0$, $c_2 \neq 0$, $\rho(y)$ is defined in \eqref{defi-rho-n=1},
and $h_m(y)$ is the rescaled Hermite polynomial of order $m$, defined as follows:
\begin{equation}\label{Hermite}
h_m(y) = \sum_{j=0}^{\left[ \frac{m}{2}\right]} \frac{(-1)^jm! y^{m - 2j}}{j! ( m -2j)!}.
\end{equation}
}
\medskip
\noindent Besides that, Harada also gave a profile for the solutions in similarity variables:
\medskip
\noindent \textit{There exist $\kappa, \sigma, c > 0$ such that
\begin{eqnarray}
(C_1) & \Rightarrow & \left| w_1 - \frac{1}{1 + c_0 s^{-1} h_2 } \right| + \left|s^{\frac{m}{2}} e^{\frac{(m-2)s}{2}} w_2 - \frac{c_2 s^{-\frac{m}{2}} h_m}{(1 + c_0 s^{-1}h_2)^2} \right| < c s^{-\kappa},\label{profile-real-hara}\\
& \text{ for } & |y| \leq s^{(1 + \sigma)}.\nonumber\\
(C_2) & \Rightarrow & \left| w_1 - \frac{1}{ 1 + c_1 e^{- (k-1 ) s } h_{2k} } \right| + \left| e^{\frac{(m - 2k)s}{2k} } w_2 - \frac{c_2 e^{- \frac{(k-1)m s}{2k}}h_m}{ (1 + c_1 e^{-(k-1)s}h_{2k})^2} \right|,\label{profile-imaginary-hara}\\
&\text{ for } & |y| \leq e^{\frac{(k-1 + \sigma)s}{2k}}.\nonumber
\end{eqnarray}}
\bigskip
\noindent
Furthermore, he also gave the final blowup profiles:
\bigskip
\noindent \textit{The blowup profile of $u= u_1 + i u_2$ is given by
\begin{eqnarray*}
(C_1) & \Rightarrow & \left\{ \begin{array}{rcl}
u_1(x,T) & = & \frac{2}{c_0} \left(\frac{|\ln |x||}{ x^2} \right) (1 + o (1)),\\[0.3cm]
u_2(x,T) &= & \frac{c_2 }{2^{m-2} (c_0 )^2} \left(\frac{x^{m-4}}{|\ln|x||^{m-2}} \right) (1 + o(1)),
\end{array} \right.\\[0.3cm]
(C_2) & \Rightarrow & \left\{ \begin{array}{rcl}
u(x, T) &=& \frac{1 + i c_1}{(c_1 - i c_2) } x^{-2k} (1 + o (1)), \text{ if } m = 2k,\\[0.3cm]
u_1(x,T) & = & (c_1 )^{-1} x^{-2k} (1 + o(1)) \text{ and } u_2(x,T) = \frac{c_2 }{ (c_1 )^2} x^{m - 4k} (1 + o (1)), \text{ if } m > 2k.
\end{array} \right.
\end{eqnarray*}}
Then, from the work of Nouaili and Zaag in \cite{NZCPDE2015} and Harada in \cite{HaradaJFA2016} for equation \eqref{equation-problem-p=2}, we derive that the imaginary part $u_2$ also blows up under some conditions. However, none of those works gives a global profile (i.e. valid uniformly on $\mathbb{R}^n$, and not just on an expanding ball as in \eqref{profile-real-hara} and \eqref{profile-imaginary-hara}) for the imaginary part. For that reason, our main motivation in this work is to give a sharp description of the profile of the imaginary part. Our work improves the result of Nouaili and Zaag in \cite{NZCPDE2015}, since it is valid in dimension $n$, not only for $p=2$, but also for any $p \in \mathbb{N}, p \geq 3$. In particular, this is the first time the profile of the imaginary part is given when the solution blows up. More precisely, we have the following theorem:
\begin{theorem}[Existence of a blowup solution for \eqref{equ:problem} and a sharp description of its profile]\label{Theorem-profile-complex} For each $p \geq 2, p \in \mathbb{N}$ and $ p_1 \in (0,1) $, there exists $T_1 (p,p_1)> 0 $ such that for all $T \leq T_1,$ there exist initial data $u^0 = u^0_1 + i u^0_2$ such that equation \eqref{equ:problem} has a unique solution $u (x,t)$ for all $(x,t) \in \mathbb{R}^n \times [0,T)$, satisfying the following:
\begin{itemize}
\item[$i)$] The solution $u$ blows up in finite time $T$ only at the origin. Moreover, it satisfies the following estimates
\begin{equation}\label{esttima-theorem-profile-complex}
\left\| (T- t)^{\frac{1}{p-1}} u (x,t) - f_0 \left( \frac{x}{\sqrt{(T -t) |\ln(T -t)|}} \right) \right\|_{L^{\infty}(\mathbb{R}^n)} \leq \frac{C}{ \sqrt{|\ln (T -t)|}},
\end{equation}
and
\begin{equation}\label{estima-the-imginary-part}
\left\| (T- t)^{\frac{1}{p-1}} |\ln (T-t)| u_2(x,t) - g_0 \left( \frac{x}{\sqrt{(T -t) |\ln(T -t)|}} \right) \right\|_{L^{\infty}(\mathbb{R}^n)} \leq \frac{C}{ |\ln (T -t)|^{\frac{p_1}{2}}},
\end{equation}
where $f_0 $ is defined in \eqref{defini-f-0} and $g_0(z)$ is defined as follows
\begin{eqnarray}
\displaystyle g_0(z) &=& \frac{|z|^2}{\left(p-1 + \frac{(p-1)^2}{4p} |z|^2 \right)^{\frac{p}{p-1}}}\label{defini-g-0-z}.
\end{eqnarray}
\item[$ii)$] There exists a complex function
$u^*(x) \in C^{2}(\mathbb{R}^n \backslash \{0\})$ such that
$u(t) \to u^* = u_1^* + i u_2^*$ as $t \to T$ uniformly on compact sets of
$\mathbb{R}^n \backslash \{0\}$ and we have the following asymptotic expansions:
\begin{equation}\label{asymp-u-start-near-0-profile-complex}
u^*(x) \sim \left[ \frac{(p-1)^2 |x|^2}{ 8 p |\ln|x||}\right]^{-\frac{1}{p-1}}, \text{ as } x \to 0.
\end{equation}
and
\begin{equation}\label{asymp-u-start-near-0-profile-complex-imaginary-part}
u^*_2(x) \sim \frac{2 p}{(p-1)^2} \left[ \frac{ (p-1)^2|x|^2}{ 8p |\ln|x||}\right]^{-\frac{1}{p-1}}\frac{1}{ |\ln|x||} , \text{ as } x \to 0.
\end{equation}
\end{itemize}
\end{theorem}
\begin{remark}\label{remark-initial-data}
The initial data $u^0 $ is given exactly as follows
$$ u^0 = u_1^0 + i u_2^0 ,$$
where
\begin{eqnarray*}
u_1^0 & =& T^{ - \frac{1}{p -1}} \left\{ \left( p-1 + \frac{(p-1)^2|x|^2}{ 4 p T |\ln T|} \right)^{-\frac{1}{p-1}} + \frac{n \kappa }{2 p |\ln T|} \right.\\
& +& \left. \frac{A}{|\ln T|^2} \left( d_{1,0} + d_{1,1} \cdot y \right) \chi_0 \left(\frac{2x}{K \sqrt{T |\ln T| }} \right) \right\},\\
u_2^0 & =& T^{ - \frac{1}{p -1}} \left\{ \frac{|x|^2}{ T |\ln T|^2} \left( p-1 + \frac{(p-1)^2|x|^2}{ 4 p T |\ln T|} \right)^{-\frac{p}{p-1}} - \frac{2 n \kappa }{(p-1) |\ln T|^2} \right.\\
& +& \left. \left[ \frac{A^2}{|\ln T|^{p_1 +2}} \left( d_{1,0} + d_{1,1} \cdot y \right) \chi_0 + \frac{A^5 \ln( |\ln(T)|)}{ |\ln T|^{p_1 + 2}} \left( \frac{1}{2} y^{\mathcal {T} }\cdot d_{2,2} \cdot
y - \text{Tr} (d_{2,2}) \right) \right]\chi_0\left(\frac{2x}{K \sqrt{T |\ln T| }} \right) \right\}.
\end{eqnarray*}
where $\kappa = (p-1)^{- \frac{1}{p-1}}$; $K$ and $A$ are positive constants fixed large enough; $d^{(1)} = (d_{1,0}, d_{1,1})$ and $d^{(2)} = (d_{2,0}, d_{2,1}, d_{2,2})$ are parameters that we fine-tune in our proof; and $\chi_0 \in C^{\infty}_{0}[0,+\infty)$, with $\|\chi_0\|_{L^{\infty}} \leq 1$ and $\text{supp } \chi_0 \subset [0,2]$.
\end{remark}
\begin{remark}
We see below in \eqref{equation-satisfied-u_1-u_2} that the equation satisfied by $u_2$ is almost `linear' in $u_2$. Accordingly, we may slightly modify our proof to construct a solution $u_{c_0} (t) = u_{1,c_0} + i u_{2,c_0}$ with $t \in [0,T)$, $c_0 \neq 0$, which blows up in finite time $T$ only at the origin, such that \eqref{esttima-theorem-profile-complex} and \eqref{asymp-u-start-near-0-profile-complex} hold, together with the following
\begin{equation}
\left\| (T- t)^{\frac{1}{p-1}} |\ln (T-t)| u_{2,c_0}(x,t) - c_0g_{0}\left( \frac{x}{\sqrt{(T -t) |\ln(T -t)|}} \right) \right\|_{L^{\infty}} \leq \frac{C}{ |\ln (T -t)|^{\frac{p_1}{2}}},
\end{equation}
and
\begin{equation}
u^*_2(x) \sim \frac{2 p c_0}{(p-1)^2} \left[ \frac{ (p-1)^2|x|^2}{ 8p |\ln|x||}\right]^{-\frac{1}{p-1}}\frac{1}{ |\ln|x||} , \text{ as } x \to 0.
\end{equation}
\end{remark}
\begin{remark} We deduce from $(ii)$ that $u$ blows up only at $0$. In particular, note that both $u_1$ and $u_2$ blow up. However, $u_2$ blows up at a slower rate than $u_1$, because of the factor $ \frac{1}{|\ln |x||}$.
\end{remark}
\begin{remark}
Nouaili and Zaag constructed a blowup solution of
\eqref{equation-problem-p=2} with a less explicit behavior
for the imaginary part (see \eqref{asymptotic-imiginary-nouaili-zaag}). Here, we do better: we obtain the
profile of the imaginary part in \eqref{estima-the-imginary-part}, and we also describe the asymptotics of the solution in the neighborhood of the blowup point in \eqref{asymp-u-start-near-0-profile-complex-imaginary-part}. In fact, this refined behavior
comes from a more involved formal approach (see Section \ref{section-approach-formal} below), and more parameters to be
fine-tuned in the initial data (see Definition \ref{initial-data-profile-complex}, where we need more parameters than Nouaili and Zaag \cite{NZCPDE2015}, namely $d_2 \in \mathbb{R}^{\frac{n(n+1)}{2}}$). Note also that our profile estimates in \eqref{esttima-theorem-profile-complex} and \eqref{estima-the-imginary-part} are better than the estimates \eqref{profile-real-hara} and \eqref{profile-imaginary-hara} by Harada ($m = 2$), in the sense that we have a uniform estimate on the whole space $\mathbb{R}^n$, and not just for $|y| \leq s^{1 + \sigma}$ for some $\sigma >0$. Another point: our result holds in $n$ space dimensions, unlike the work of Harada in \cite{HaradaJFA2016}, which holds only in one space dimension.
\end{remark}
\begin{remark}
As in the case $p=2$ treated by Nouaili and Zaag \cite{NZCPDE2015}, we suspect this behavior in Theorem \ref{Theorem-profile-complex} to be unstable. This is due to the fact that the number of parameters in the initial data we consider below in Definition \ref{initial-data-profile-complex} is higher than the dimension of
the blowup parameters which is $n+1$ ($n$ for the blowup points and $1$ for the blowup time).
\end{remark}
Besides that, we can use the technique of Merle \cite{Mercpam92}
to construct a solution which blows up at arbitrary given points.
More precisely, we have the following Corollary:
\begin{corollary}[Blowing up at $k$ distinct points]
For any given set of $k$ distinct points $x_1,\dots,x_k$, there exists a solution of \eqref{equ:problem} which blows up exactly at $x_1,\dots,x_k$. Moreover, the local behavior at each
blowup point $x_j$ is also given by \eqref{esttima-theorem-profile-complex}, \eqref{estima-the-imginary-part}, \eqref{asymp-u-start-near-0-profile-complex} and \eqref{asymp-u-start-near-0-profile-complex-imaginary-part}, by replacing $x$ by $x - x_j$ and $L^\infty (\mathbb{R}^n)$ by $L^{\infty} (|x - x_j| \leq \epsilon_0)$, for some $\epsilon_0 >0$.
\end{corollary}
This paper is organized as follows:
- In Section \ref{section-approach-formal}, we adopt a formal approach to show how the profiles we have in Theorem \ref{Theorem-profile-complex} appear naturally.
- In Section \ref{section-exisence solu}, we give the rigorous proof for Theorem \ref{Theorem-profile-complex}, assuming some technical estimates.
- In Section \ref{the proof of proposion-reduction-finite-dimensional}, we prove the technical estimates assumed in Section \ref{section-exisence solu}.
\medskip
\textbf{Acknowledgement:} I would like to express my deep gratitude to Professor Hatem Zaag, my PhD advisor at Paris 13, who guided my first steps in this study. Not only did he introduce me to the subject, he also gave me valuable advice on the writing of a mathematical paper. Besides that, I also thank my family, who encouraged me in my mathematical studies.
\section{Derivation of the profile (formal approach)}\label{section-approach-formal}
In this section, we aim at giving a formal approach to our problem, which helps us to explain how we derive the profile of the solution of \eqref{equ:problem} given in Theorem \ref{Theorem-profile-complex}, as well as the asymptotics of the solution.
\subsection{Modeling the problem}\label{subsection-pro-L}
In this part, we give the definitions and notations which are important for our work, and we explain how the functions $f_0, g_0$ arise as blowup profiles for equation \eqref{equ:problem}, as stated in \eqref{esttima-theorem-profile-complex} and \eqref{estima-the-imginary-part}. Our aim in this section is to give solid (though formal) hints for the existence of a solution $u(t) = u_1 (t) + i u_2(t)$ to equation \eqref{equ:problem} such that
\begin{equation}\label{lim-u-t-=+-infty}
\lim_{t \to T} \|u(t) \|_{L^\infty} = +\infty,
\end{equation}
and $u$ obeys the profiles in \eqref{esttima-theorem-profile-complex} and \eqref{estima-the-imginary-part}, for some $T > 0$. By using equation \eqref{equ:problem}, we deduce that $u_1,u_2$ solve:
\begin{equation}\label{equation-satisfied-u_1-u_2}
\left\{ \begin{array}{rcl}
\partial_t u_1 &=& \Delta u_1 + F_1(u_1,u_2),\\
\partial_t u_2 &=& \Delta u_2 + F_2(u_1, u_2).
\end{array} \right.
\end{equation}
where
\begin{equation}\label{defi-mathbb-A-1-2}
\left\{ \begin{array}{rcl}
F_1(u_1,u_2) &=& \text{ Re} \left[( u_1 + i u_2)^p\right] = \sum_{j=0}^{\left[ \frac{p}{2} \right]} C^{2j }_p ( -1)^{j} u_1^{p-2j} u_2^{2j} ,\\
F_2(u_1, u_2) &=& \text{ Im} \left[( u_1 + i u_2)^p\right] = \sum_{j=0}^{\left[ \frac{p-1}{2} \right]} C^{2j + 1}_p ( -1)^{j} u_1^{p-2j -1} u_2^{2j +1},
\end{array} \right.
\end{equation}
with $\text{Re}[z]$ and $\text{Im}[z]$ being respectively the real and the imaginary part of $z$, and $C^m_{p} = \frac{p!}{ m ! (p-m)!}$ for all $m \leq p$. Let us introduce \textit{the similarity variables}:
\begin{equation}\label{similarity-variales}
w_1 (y,s) = (T-t)^{\frac{1}{p-1}}u_1 (x,t), \quad w_2 (y,s) = (T-t)^{\frac{1}{p-1}}u_2 (x,t), \quad y = \frac{x}{\sqrt {T- t}}, \quad s = - \ln(T- t).
\end{equation}
Thanks to \eqref{equation-satisfied-u_1-u_2}, we derive the system satisfied by $( w_1, w_2),$ for all $y \in \mathbb{R}^n$ and $s \geq -\ln T $ as follows:
\begin{equation}\label{equation-satisfied-by-w-1-2}
\left\{ \begin{array}{rcl}
\partial_s w_1 & = & \Delta w_1 - \frac{1}{2} y \cdot \nabla w_1 - \frac{w_1}{p-1} + F_1(w_1,w_2), \\
\partial_s w_2 &=& \Delta w_2 - \frac{1}{2} y \cdot \nabla w_2 - \frac{w_2}{p-1} + F_2 (w_1,w_2).
\end{array} \right.
\end{equation}
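To fix ideas, let us write the nonlinear terms defined in \eqref{defi-mathbb-A-1-2} in the simplest cases: for $p = 2$,
$$ F_1(u_1,u_2) = u_1^2 - u_2^2, \quad F_2 (u_1, u_2) = 2 u_1 u_2,$$
while for $p = 3$,
$$ F_1 (u_1, u_2) = u_1^3 - 3 u_1 u_2^2, \quad F_2(u_1,u_2) = 3 u_1^2 u_2 - u_2^3.$$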
Then note that studying the asymptotics of $u_1 + i u_2$ as $t \to T$ is equivalent to studying the long-time asymptotics of $w_1 + i w_2$. We are first interested in the set of constant solutions of
\eqref{equation-satisfied-by-w-1-2}, denoted by
$$\mathcal{S} = \left\{ (0,0)\right\} \cup \left\{ \left(\kappa \cos\left(\frac{2 k \pi}{p-1} \right) ,\kappa \sin\left(\frac{2 k \pi}{p-1} \right) \right) \text{ where } \kappa
= (p-1)^{-\frac{1}{p-1}}, k = 0,..., p-1 \right\}.$$
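Let us briefly justify this set: a constant solution $w = w_1 + i w_2 \in \mathbb{C}$ of \eqref{equation-satisfied-by-w-1-2} satisfies
$$ - \frac{w}{p-1} + w^p = 0,$$
since $F_1 + i F_2 = w^p$. Hence, either $w = 0$, or $w^{p-1} = \frac{1}{p-1}$, whose $p-1$ complex solutions are exactly $\kappa e^{\frac{2 i k \pi}{p-1}}$ with $\kappa = (p-1)^{-\frac{1}{p-1}}$, which gives $\mathcal{S}$.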
With the transformation \eqref{similarity-variales}, we slightly precise our goal in \eqref{lim-u-t-=+-infty} by requiring in addition that
$$ (w_1, w_2) \to (\kappa, 0) \text{ as } s \to +\infty.$$
Introducing $w_1 = \kappa + \bar w_1, $ our goal becomes to get
$$ (\bar w_1, w_2) \to (0,0) \text{ as } s \to + \infty. $$
From \eqref{equation-satisfied-by-w-1-2}, we deduce that $\bar w_1, w_2$ satisfy the following system
\begin{equation}\label{system-bar R-varphi}
\left\{ \begin{array}{rcl}
\partial_s \bar w_1 &=& \mathcal{L} \bar w_1 + \bar B_1(\bar w_1, w_2),\\
\partial_s w_2 &=& \mathcal{L} w_2 + \bar B_2 (\bar w_1, w_2).
\end{array} \right.
\end{equation}
where
\begin{eqnarray}
\mathcal{L} &=& \Delta - \frac{1}{2} y \cdot \nabla + Id,\label{define-operator-L}\\
\bar B_1 ( \bar w_1, w_2 ) & = & F_1(\kappa + \bar w_1, w_2)- \kappa^p - \frac{p}{p-1} \bar w_1 , \label{defini-bar-B-1}\\
\bar B_2 (\bar w_1, w_2) &=& F_2(\kappa + \bar w_1, w_2) - \frac{p}{p-1} w_2.\label{defini-bar-B-2}
\end{eqnarray}
It is important to study the linear operator $\mathcal{L}$, as well as the asymptotics of $\bar B_1, \bar B_2$ as $(\bar w_1, w_2) \to (0,0)$, which turn out to be quadratic.
$\bullet $ \textit{ The properties of $\mathcal{L}$:}
We observe that the operator $\mathcal{L}$ plays an important
role in our analysis. It is not difficult to find a functional space in which $\mathcal{L}$ is
self-adjoint. Indeed, $\mathcal{L} $ is self-adjoint in $L^2_\rho(\mathbb{R}^n)$, where $L^2_\rho$ is the weighted space associated with the weight $\rho$ defined by
\begin{equation}\label{def-rho-rho-j}
\rho(y) =\frac{e^{- \frac{|y|^2}{4}}}{(4 \pi)^{\frac{n}{2}}} = \prod_{j=1}^n \rho_j (y_j), \text{ with } \rho_j(y_j) = \frac{e^{- \frac{|y_j|^2}{4}}}{(4 \pi)^{\frac{1}{2}}},
\end{equation}
and the spectrum of $\mathcal{L}$ is given by
$$\textup{spec}(\mathcal{L}) = \displaystyle \left\{1 - \frac m2, m \in \mathbb{N}\right\}.$$
Moreover, we can find eigenfunctions which correspond to each eigenvalue $ 1 - \frac{m}{2}, m \in \mathbb{N}$:
\begin{itemize}
\item[-] The one space dimensional case: the eigenfunction corresponding to
the eigenvalue $1 - \frac m2$ is $h_m$, the rescaled Hermite polynomial given in \eqref{Hermite}. In particular, we have the following orthogonality property:
$$\int_{\mathbb{R}} h_i h_j \rho dy = i! 2^i \delta_{i,j}, \quad \forall (i,j) \in \mathbb{N}^2. $$
\item[-] The higher dimensional case: $n \geq 2$, the eigenspace $\mathcal{E}_{m}$, corresponding
to the eigenvalue $1 - \frac {m}{2}$ is defined as follows:
\begin{equation}\label{eigenspace}
\mathcal{E}_m = \left\{ h_{\beta} = h_{\beta_1} \cdots h_{\beta_n}, \text{ for all } \beta \in \mathbb{N}^n \text{ with } |\beta| = \beta_1 + \cdots +\beta_n = m \right\}.
\end{equation}
\end{itemize}
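For the reader's convenience, let us record the first rescaled Hermite polynomials given by \eqref{Hermite}:
$$ h_0(y) = 1, \quad h_1 (y) = y, \quad h_2(y) = y^2 - 2, \quad h_3(y) = y^3 - 6 y, \quad h_4 (y) = y^4 - 12 y^2 + 12.$$
One can check directly that $\mathcal{L} h_m = \left( 1 - \frac{m}{2}\right) h_m$ in one space dimension; for instance,
$$ \mathcal{L} h_2 = h_2'' - \frac{1}{2} y\, h_2' + h_2 = 2 - y^2 + (y^2 - 2) = 0 = \left(1 - \frac{2}{2}\right) h_2.$$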
As a matter of fact, we can expand an arbitrary function $r \in L^{2}_\rho$ as follows:
\begin{eqnarray*}
r &=& \displaystyle \sum_{\beta \in \mathbb{N}^n} r_\beta h_\beta (y),
\end{eqnarray*}
where $r_\beta$ is the projection of $r$ on $h_\beta$, for any $\beta \in \mathbb{N}^n$, defined as follows:
\begin{equation}\label{projector-h-beta}
r_\beta = \mathbb{P}_\beta (r) = \int r k_\beta \rho dy, \forall \beta \in \mathbb{N}^n,
\end{equation}
with
\begin{equation}\label{note-k-beta-hermite}
k_\beta (y) = \frac{ h_\beta}{\|h_\beta\|^2_{L^2_\rho}}.
\end{equation}
$\bullet $ \textit{ The asymptotics of $\bar B_1(\bar w_1, w_2), \bar B_2(\bar w_1, w_2)$:}
The following asymptotics hold:
\begin{eqnarray}
\bar B_1 (\bar w_1, w_2) &=& \frac{p}{2 \kappa} \bar w_1^2 + O (|\bar w_1|^3 + |w_2|^2),\label{asymptotic-bar-B-1-1}\\
\bar B_2( \bar w_1, w_2) &=& \frac{p}{ \kappa} \bar w_1 w_2 + O \left( |\bar w_1|^2 |w_2| \right) + O \left( |w_2|^3 \right), \label{asymptotic-bar-B-2-1}
\end{eqnarray}
as $(\bar w_1, w_2) \to (0,0)$ (see Lemma \ref{asymptotic-bar-B-1-2} below).
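As an illustration of \eqref{asymptotic-bar-B-1-1} and \eqref{asymptotic-bar-B-2-1}, when $p = 2$ (hence $\kappa = 1$), a direct computation from \eqref{defini-bar-B-1} and \eqref{defini-bar-B-2} gives exactly
$$ \bar B_1 (\bar w_1, w_2) = \bar w_1^2 - w_2^2 \quad \text{and} \quad \bar B_2(\bar w_1, w_2) = 2 \bar w_1 w_2,$$
which agrees with the above asymptotics, since $\frac{p}{2\kappa} = 1$ and $\frac{p}{\kappa} = 2$ in this case.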
\subsection{Inner expansion}\label{subsection-inner-expan}
In this part, we study the asymptotics of the solution
in $L^2_\rho(\mathbb{R}^n).$ Moreover, for simplicity we suppose that $n =1$, and we recall that we aim at constructing a solution of
\eqref{system-bar R-varphi} such that
$( \bar w_1, w_2) \to (0,0) $. Note first that the spectrum of $\mathcal{L}$ contains two positive eigenvalues $1, \frac{1}{2}$, a neutral eigenvalue $0$, and all the other eigenvalues are strictly negative. So, in the representation of the solution in $L^2_\rho,$ it is reasonable to think that the part corresponding to the negative spectrum is easily controlled. Imposing a symmetry condition on the solution with respect to $y$, it is reasonable to look for a solution $(\bar w_1, w_2)$ of the form:
\begin{eqnarray*}
\bar w_1 & = & \bar w_{1,0} h_0 + \bar w_{1,2} h_2, \\
w_2 &=& w_{2,0} h_0 + w_{2,2} h_2.
\end{eqnarray*}
From the assumption that $ (\bar w_1, w_2) \to (0,0)$, we see that $ \bar w_{1,0}, \bar w_{1,2}, w_{2,0} , w_{2,2} \to 0$ as $s \to + \infty$.
We see also that we can understand the asymptotics of the solution $\bar w_1, w_2$ in $L^2_\rho$ from the study of the asymptotics of $\bar w_{1,0}, \bar w_{1,2}, w_{2,0}, w_{2,2}.$
We now project equations \eqref{system-bar R-varphi} on $h_0$ and
$h_2.$ Using the asymptotics of $\bar B_1, \bar B_2$ in \eqref{asymptotic-bar-B-1-1} and \eqref{asymptotic-bar-B-2-1}, we get the following ODEs for $ \bar w_{1,0}, \bar w_{1,2}, w_{2,0} , w_{2,2}:$
\begin{eqnarray}
\partial_s \bar w_{1,0} &=& \bar w_{1,0} + \frac{p}{2 \kappa} \left( \bar w_{1,0}^2 + 8 \bar w_{1,2}^2\right)+ O (|\bar w_{1,0}|^3 + |\bar w_{1,2}|^3) + O( |w_{2,0}|^2 + |w_{2,2}|^2),\label{ODe-bar w_1-0} \\
\partial_s \bar w_{1,2} &=& \frac{p}{\kappa} \left( \bar w_{1,0} \bar w_{1,2} + 4 \bar w_{1,2}^2\right) + O (|\bar w_{1,0}|^3 + |\bar w_{1,2}|^3) + O( |w_{2,0}|^2 + |w_{2,2}|^2) ,\label{ODe-bar w_1-2}\\
\partial_s w_{2,0} &=& w_{2,0} + \frac{p}{\kappa}\left[\bar w_{1,0} w_{2,0} + 8 \bar w_{1,2} w_{2,2} \right] + O ((|\bar w_{1,0}|^2 + |\bar w_{1,2}|^2)(|w_{2,0}|+ |w_{2,2}|)) \label{ODe- w_2-0}\\
&+& O( |w_{2,0}|^3 + |w_{2,2}|^3) ,\nonumber\\
\partial_s w_{2,2} &=& \frac{p}{\kappa} \left[ \bar w_{1,0} w_{2,2} + \bar w_{1,2} w_{2,0} + 8 \bar w_{1,2} w_{2,2}\right] + O ((|\bar w_{1,0}|^2 + |\bar w_{1,2}|^2)(|w_{2,0}|+ |w_{2,2}|)) \label{ODe- w_2-2}\\
&+& O( |w_{2,0}|^3 + |w_{2,2}|^3). \nonumber
\end{eqnarray}
Assuming that
\begin{equation}\label{asumption-bar-w-1-0-lesthan-bar-w-1-2}
\bar w_{1,0}, w_{2,0}, w_{2,2} \ll \bar w_{1,2} \text{ as } s \to + \infty,
\end{equation}
we may simplify the ODE system as follows:
$ \bullet $ \textit{The asymptotics of $\bar w_{1,2}$:}
We deduce from \eqref{ODe-bar w_1-2} and \eqref{asumption-bar-w-1-0-lesthan-bar-w-1-2} that
$$\partial_s \bar w_{1,2} \sim \frac{4 p}{\kappa} \bar w_{1,2}^2 \text{ as } s \to + \infty,$$
which yields
\begin{equation}
\bar w_{1,2} = -\frac{\kappa}{4 p s} + o\left(\frac{1 }{s} \right), \text{ as } s \to + \infty.
\end{equation}
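Let us briefly justify this asymptotic: dividing the approximate ODE $\partial_s \bar w_{1,2} \sim \frac{4p}{\kappa} \bar w_{1,2}^2$ by $\bar w_{1,2}^2$, we see that
$$ \partial_s \left( - \frac{1}{\bar w_{1,2}} \right) \sim \frac{4p}{\kappa}, \text{ hence } - \frac{1}{\bar w_{1,2}} \sim \frac{4 p s}{\kappa} \text{ as } s \to + \infty,$$
which gives $\bar w_{1,2} = - \frac{\kappa}{4 p s} + o\left( \frac{1}{s}\right)$.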
Assuming further that
\begin{equation}\label{asuming-bar-w-1-w-2-2-leq-s^2}
\bar w_{1,0}, w_{2,0}, w_{2,2} \lesssim \frac{1}{s^2},
\end{equation}
we see that
\begin{equation}\label{asymp-bar-w-1-2}
\bar w_{1,2} = -\frac{\kappa}{4 p s} + O\left(\frac{\ln s }{s^2} \right), \text{ as } s \to + \infty.
\end{equation}
$\bullet $ \textit{The asymptotics of $\bar w_{1,0}:$}
By using \eqref{ODe-bar w_1-0}, \eqref{asumption-bar-w-1-0-lesthan-bar-w-1-2} and the asymptotics of $\bar w_{1,2}$ in \eqref{asymp-bar-w-1-2}, we see that
\begin{equation}\label{asymp-bar-w-1-0}
\bar w_{1,0} = O \left( \frac{1}{s^2}\right) \text{ as } s \to + \infty.
\end{equation}
$\bullet $ \textit{The asymptotics of $w_{2,0}$ and $ w_{2,2}$:}
Besides that, we derive from \eqref{ODe- w_2-0}, \eqref{ODe- w_2-2} and \eqref{asuming-bar-w-1-w-2-2-leq-s^2} that
\begin{eqnarray}
\partial_s w_{2,2} &= & \left( -\frac{2}{s} + O \left( \frac{\ln s}{s^2}\right) \right) w_{2,2} + o \left( \frac{1}{s^3}\right),\label{ODE-w-2-2-o-1-s-3}\\
\partial_s w_{2,0} & = & w_{2,0} + O \left( \frac{1}{s^3}\right),\nonumber
\end{eqnarray}
which yields
\begin{eqnarray}
w_{2,2} &=& o \left( \frac{\ln s}{s^2}\right),\nonumber\\
w_{2,0} &=& O \left( \frac{1}{s^3} \right),\label{asymptotic-w-2-0}
\end{eqnarray}
as $s \to + \infty$. This also yields a new ODE for $ w_{2,2}:$
$$\partial_s w_{2,2} = - \frac{2}{s} w_{2,2} + o\left( \frac{\ln^2 s}{s^4} \right),$$
which implies
$$ w_{2,2} = O\left(\frac{1}{s^2} \right).$$
Using again \eqref{ODE-w-2-2-o-1-s-3}, we derive a new ODE for $w_{2,2}$
$$\partial_s w_{2,2} = - \frac{2}{s} w_{2,2} + O\left( \frac{\ln s}{s^4} \right),$$
which yields
\begin{eqnarray}
w_{2,2} &=& \frac{\tilde c_0}{s^2} + O\left( \frac{\ln s}{s^3}\right), \text{ for some } \tilde c_0 \in \mathbb{R}^* \label{asymptotic-w-2-2}.
\end{eqnarray}
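Let us make the last integration explicit: multiplying the ODE $\partial_s w_{2,2} = - \frac{2}{s} w_{2,2} + O\left( \frac{\ln s}{s^4} \right)$ by the integrating factor $s^2$, we get
$$ \partial_s \left( s^2 w_{2,2} \right) = s^2 \partial_s w_{2,2} + 2 s w_{2,2} = O \left( \frac{\ln s}{s^2} \right),$$
so that $s^2 w_{2,2}(s)$ converges to some limit $\tilde c_0 \in \mathbb{R}$, with
$$ s^2 w_{2,2} (s) = \tilde c_0 - \int_s^{+\infty} O \left( \frac{\ln \sigma}{\sigma^2} \right) d \sigma = \tilde c_0 + O \left( \frac{\ln s}{s} \right),$$
which gives \eqref{asymptotic-w-2-2} (at this formal level, the fact that $\tilde c_0 \neq 0$ is an assumption on the solution we construct).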
Noting that our findings \eqref{asymp-bar-w-1-2}, \eqref{asymp-bar-w-1-0}, \eqref{asymptotic-w-2-0} and \eqref{asymptotic-w-2-2} are consistent with our hypotheses in \eqref{asumption-bar-w-1-0-lesthan-bar-w-1-2} and \eqref{asuming-bar-w-1-w-2-2-leq-s^2}, we get the asymptotics of the solutions $w_1$ and $w_2$ as follows:
\begin{eqnarray}
w_1&=& \kappa - \frac{\kappa}{4p s } ( y^2 - 2) + O\left(\frac{1}{s^2} \right), \label{asymptotic-w-1}\\
w_2 &=& \frac{\tilde c_0}{s^2} (y^2 - 2) + O \left( \frac{\ln s}{s^3}\right), \label{asymptotic-w-2}
\end{eqnarray}
in $L^2_\rho(\mathbb{R})$ for some $\tilde c_0$ in $\mathbb{R}^*$.
Using parabolic regularity, we note that the asymptotics \eqref{asymptotic-w-1}, \eqref{asymptotic-w-2} also hold for all $|y| \leq K,$ where $K$ is an arbitrary positive constant.
\subsection{Outer expansion}
As in Subsection \ref{subsection-inner-expan} above, we assume that $n=1$. We see that the asymptotics \eqref{asymptotic-w-1} and \eqref{asymptotic-w-2}
cannot give us a shape, since they hold uniformly on compact sets only, and not on larger sets. Fortunately, we observe from \eqref{asymptotic-w-1} and \eqref{asymptotic-w-2} that the profile may be based on the following variable:
\begin{equation}\label{the-variable-z-profile}
z = \frac{y}{\sqrt s}.
\end{equation}
This motivates us to look for solutions of the form:
\begin{eqnarray*}
w_1(y,s) &=& \sum_{j=0}^{\infty} \frac{R_{1,j} (z)}{s^j}, \\
w_2 (y,s) &=& \sum_{j=1}^{\infty} \frac{R_{2,j}(z)}{s^j}.
\end{eqnarray*}
Using system \eqref{equation-satisfied-by-w-1-2} and gathering terms of order $\frac{1}{s^j}$ for $j=0,...,2$, we obtain
\begin{eqnarray}
0 &=& - \frac{1}{2} R_{1,0}' (z) \cdot z - \frac{R_{1,0} (z)}{p-1} + R_{1,0}^p (z), \label{equa-R-1-0} \\
0 &=& - \frac{1}{2} z R_{1,1}' - \frac{R_{1,1}}{p-1} + p R_{1,0}^{p-1}R_{1,1} + R_{1,0}''
+ \frac{z R_{1,0}'}{2}, \label{equa-R-1-1} \\
0 &=& - \frac{1}{2} R_{2,1}' (z) \cdot z - \frac{R_{2,1}}{p-1} + p R_{1,0}^{p-1} R_{2,1}, \label{equa-R-2-1}\\
0 &=& - \frac{1}{2} R_{2,2}'(z) \cdot z - \frac{R_{2,2}}{p-1} + p R^{p-1}_{1,0} R_{2,2} + R''_{2,1} + R_{2,1} + \frac{1}{2} R_{2,1}' \cdot z + p (p-1) R^{p-2}_{1,0} R_{1,1} R_{2,1}. \label{equa-R-2-2}
\end{eqnarray}
We now solve the above equations:
$ \bullet $ \textit{ The solution $R_{1,0}$:} It is easy to solve \eqref{equa-R-1-0}:
\begin{equation} \label{solu-R-0}
R_{1,0} (z) = (p-1 + b z^2)^{- \frac{1}{p-1}},
\end{equation}
where $b$ is an unknown constant that will be fixed later, according to our purpose.
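A quick way to derive \eqref{solu-R-0} is to set $\varphi = R_{1,0}^{1-p}$: multiplying \eqref{equa-R-1-0} by $(p-1) R_{1,0}^{-p}$, we obtain the linear equation
$$ \frac{1}{2} z \varphi'(z) = \varphi(z) - (p-1),$$
whose solutions are $\varphi(z) = p-1 + b z^2$, $b \in \mathbb{R}$, which gives \eqref{solu-R-0}.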
$ \bullet $ \textit{ The solution $R_{1,1}$:} We rewrite \eqref{equa-R-1-1} under the following form:
$$ \frac{1}{2} z. R_{1,1}'(z) = \left( \frac{(p-1)^2 - bz^2}{(p-1)(p-1 + b z^2)} \right)R_{1,1} + F_{1,1}(z),$$
where
\begin{eqnarray*}
F_{1,1} (z) &=& - \frac{2 b }{p-1} (p-1 + bz^2)^{- \frac{p}{p-1}} + \frac{4 p b^2 z^2}{(p-1)^2} (p-1 + bz^2)^{- \frac{(2p-1)}{p-1}} \\
& - & \frac{b z^2}{p-1} (p-1 + b z^2)^{- \frac{p}{p-1}}.
\end{eqnarray*}
Thanks to the variation of constants method, we see that
\begin{eqnarray}\label{variation-constant-R-1-1}
R_{1,1} = H^{-1} (z) \left( \int \frac{2 }{z}H (z) F_{1,1} (z) dz + C_1 \right),
\end{eqnarray}
where
$$ H (z) = \frac{ (p-1 + bz^2)^{\frac{p}{p-1}}}{ z^2}.$$
Besides that, we have:
\begin{eqnarray*}
\frac{2 H}{z} F_{1,1} &=& - \frac{4 b }{ (p-1) z ^3 } + \frac{8p b^2}{(p-1)^2} \left( \frac{1}{ z (p-1 + b z^2)} \right) - \frac{2b}{ (p-1) z}\\
&=& - \frac{4 b }{ (p-1) z^3} + \frac{1}{ z} \left( -\frac{2b}{p-1} + \frac{8p b^2}{(p-1)^3} \right) \\
&+& (p-1 + b z^2)^{-1} \left( - \frac{8 p b^3 z}{(p-1)^3} \right).
\end{eqnarray*}
We can see that if the coefficient of $\frac{1}{z}$ is nonzero, then we will have a $\ln z$ term in the solution $R_{1,1}$; this term is not analytic, and it would create a singularity in the solution. In order to avoid this singularity, we impose that
\begin{equation*}\label{condition-regularity-R-1}
-\frac{2b}{p-1} + \frac{8p b^2}{(p-1)^3} = 0,
\end{equation*}
which yields
\begin{equation}\label{condition-b}
b= \frac{(p-1)^2}{4 p }.
\end{equation}
Besides that, for simplicity, we assume that $C_1 = 0.$ Using \eqref{variation-constant-R-1-1}, we see that
\begin{eqnarray}
R_{1,1} &=& \frac{(p-1)}{ 2 p} (p-1 + bz ^2)^{- \frac{p}{p-1}} - \frac{p-1}{4 p} z^2 \ln (p-1 + b z^2) (p-1 + b z^2)^{- \frac{p}{p-1}}. \label{solu-R-1-1}
\end{eqnarray}
$ \bullet $ \textit{ The solution $R_{2,1}$:} It is easy to solve \eqref{equa-R-2-1} as follows:
\begin{equation}\label{solu-varphi_1}
R_{2,1} (z) = \frac{z^2}{(p-1 + bz^2)^{\frac{p}{p-1}}}.
\end{equation}
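One can check \eqref{solu-varphi_1} directly: since $p R_{1,0}^{p-1} = \frac{p}{p-1 + b z^2}$ by \eqref{solu-R-0}, and since
$$ \frac{R_{2,1}'(z)}{R_{2,1}(z)} = \frac{2}{z} - \frac{2 p b z}{(p-1)(p-1 + b z^2)},$$
the right-hand side of \eqref{equa-R-2-1} divided by $R_{2,1}$ equals
$$ - 1 + \frac{p b z^2}{(p-1)(p-1 + b z^2)} - \frac{1}{p-1} + \frac{p}{p-1 + b z^2} = - \frac{p}{p-1} + \frac{p(p-1) + p b z^2}{(p-1)(p-1 + b z^2)} = 0.$$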
$ \bullet $ \textit{ The solution $R_{2,2}$:} We rewrite \eqref{equa-R-2-2} as follows
\begin{eqnarray*}
\frac{1}{2} z \cdot R_{2,2}'(z) &=& \left( \frac{(p-1)^2 - bz^2}{(p-1)(p-1 + b z^2)} \right) R_{2,2} (z) + F_{2,2}(z),
\end{eqnarray*}
where
\begin{eqnarray*}
F_{2,2}(z) &=& R''_{2,1} + R_{2,1} + \frac{1}{2} R_{2,1}' \cdot z + p (p-1) R^{p-2}_{1,0} R_{1,1} R_{2,1} \\
&=& 2 (p-1 + b z^2)^{- \frac{p}{p-1}} \\
&-& \frac{10 p b z^2}{p-1} (p-1+ b z^2)^{- \frac{2p-1}{p-1}} + 2 z^2 (p-1 + b z^2)^{-\frac{p}{p-1}} + \frac{(p-1)^2}{2} z^2 (p-1 + b z^2)^{- \frac{3p-2}{p-1}} \\
&+& \frac{4p (2p-1)b^2z^4}{(p-1)^2} (p-1 + b z^2)^{-\frac{3p-2}{p-1}} -\frac{p b z^4}{p-1} (p-1 + b z^2)^{- \frac{2p-1}{p-1}} \\
&- &\frac{(p-1)^2}{4} z^4 \ln (p-1 + b z^2) (p-1+ b z^2)^{- \frac{3p-2}{p-1}}.
\end{eqnarray*}
By using the variation of constants method, we have
\begin{eqnarray}
R_{2,2} (z) = \frac{z^2}{(p-1+ b z^2)^{\frac{p}{p-1}}} \left( \int \frac{2 (p-1+ b z^2)^{\frac{p}{p-1}}}{z^3} F_{2,2}(z) dz + C_2 \right),\label{variation-constant-R-2-2}
\end{eqnarray}
where
\begin{eqnarray*}
\frac{2 (p-1+ b z^2)^{\frac{p}{p-1}}}{z^3} F_{2,2}(z) &=& \frac{4}{z^3} + \left[ 5 - \frac{20 p b}{(p-1)^2}\right] \frac{1}{z} + \frac{z}{p-1+ b z^2} \left[ \frac{20p b}{(p-1)^2} - b - \frac{2 p b}{p-1} \right] \\
&+& \left[ \frac{8 p (2p-1) b^2}{(p-1)^2} - (p-1) p\right] \frac{z}{(p-1+ b z^2)^2} \\
&-& \frac{(p-1)^2}{2} z \ln (p-1 + b z^2) (p-1 + b z^2)^{-2}.
\end{eqnarray*}
We observe that
$$ 5 - \frac{20 p b}{(p-1)^2} = 0, \text{ because } b = \frac{(p-1)^2}{4 p }.$$
So, from \eqref{variation-constant-R-2-2} and assuming that $C_2 = 0,$ we have
\begin{equation}\label{solu-R-2-2}
R_{2,2} (z) = - 2 (p-1 + b z^2) ^{- \frac{p}{p-1}} + H_{2,2} (z),
\end{equation}
where
\begin{eqnarray*}
H_{2,2} (z) &=& C_{2,1} (p) z^2 (p-1 + b z^2)^{-\frac{2p-1}{p-1}} + C_{2,2} (p) z^2 \ln (p-1 + b z^2) (p-1 + b z^2)^{-\frac{p}{p-1}} \\
&+& C_{2,3} (p) z^2 \ln (p-1 + b z^2) (p-1 + b z^2)^{-\frac{2p-1}{p-1}}.
\end{eqnarray*}
\subsection{Matching asymptotics}
Since the outer expansion has to match the inner expansion, this will fix several constants, giving us the following profiles for $w_1$ and $w_2$:
\begin{equation}\label{equavalent-w-1-2-Phi-1-2}
\left\{ \begin{array}{rcl}
w_1 (y,s) &\sim & \Phi_1(y,s),\\
w_2 (y,s) &\sim & \Phi_2(y,s),
\end{array} \right.
\end{equation}
where
\begin{eqnarray}
\Phi_1(y,s) &=& \left( p-1 + \frac{(p-1)^2}{4 p} \frac{|y|^2}{s} \right)^{-\frac{1}{p-1}} + \frac{n \kappa}{2 p s},\label{defi-Phi-1}\\
\Phi_2 (y,s) &=&\frac{|y|^2}{s^2} \left( p-1 + \frac{(p-1)^2}{4 p} \frac{|y|^2}{s} \right)^{-\frac{p}{p-1}} - \frac{2n \kappa}{(p-1) s^2},\label{defi-Phi-2}
\end{eqnarray}
for all $(y,s) \in \mathbb{R}^n \times (0, + \infty)$.
\section{Existence of a blowup solution in Theorem \ref{Theorem-profile-complex}}\label{section-exisence solu}
In Section \ref{section-approach-formal}, we adopted a formal approach in order to justify how the profiles $f_0, g_0$ arise as blowup profiles for equation \eqref{equ:problem}. In this section, we give a rigorous proof of the existence of a solution approaching those profiles.
\subsection{Formulation of the problem}
In this section, we aim at formulating our problem in order to justify the formal approach which is given in the previous section. Introducing
\begin{equation}\label{defini-q-1-2}
\left\{ \begin{array}{rcl}
w_1 &=& \Phi_1 + q_1, \\
w_2 &=& \Phi_2 + q_2,
\end{array} \right.
\end{equation}
where $\Phi_1, \Phi_2$ are defined in \eqref{defi-Phi-1} and \eqref{defi-Phi-2} respectively, then using \eqref{equation-satisfied-by-w-1-2}, we see that $(q_1, q_2) $ satisfy
\begin{equation}\label{equation-satisfied-by-q-1-2}
\partial_s \binom{q_1}{q_2} = \left( \begin{matrix}
\mathcal{L} + V & 0 \\
0 & \mathcal{L} + V
\end{matrix} \right) \binom{q_1}{q_2} + \left( \begin{matrix}
V_{1,1} & V_{1,2} \\
V_{2,1 }& V_{2,2}
\end{matrix} \right) \binom{q_1 }{q_2} + \binom{B_1 (q_1, q_2)}{B_2 (q_1, q_2)} + \binom{R_1 (y,s)}{R_{2} (y,s)}
\end{equation}
where the linear operator $\mathcal{L}$ is defined in \eqref{define-operator-L} and:\\
\medskip
\noindent
- The potential functions $V, V_{1,1}, V_{1,2}, V_{2,1}, V_{2,2} $ are defined as follows
\begin{eqnarray}
V(y,s) &=& p \left( \Phi_1^{p- 1} - \frac{1}{p-1}\right)\label{defini-potentian-V}, \\
V_{1,1} (y,s) & = & \sum_{ j=1}^{ \left[\frac{p}{2}\right]} C_p^{2j} (-1)^j (p-2j) \Phi_1^{p - 2j -1} \Phi_2^{2j} ,\label{defini-V-1-1} \\
V_{1,2} (y,s) & = & \sum_{ j=0}^{ \left[\frac{p}{2}\right]} C_p^{2j} (-1)^j (2j) \Phi_1^{p - 2j} \Phi_2^{2j - 1}, \label{defini-V-1-2} \\
V_{2,1} (y,s) & = & \sum_{ j=0}^{ \left[\frac{p-1}{2}\right]} C_p^{2j+ 1} (-1)^j (p-2j -1) \Phi_1^{p - 2j -2} \Phi_2^{2j+ 1}, \label{defini-V-2-1} \\
V_{2,2} (y,s) & = & \sum_{ j =1 }^{ \left[\frac{p-1}{2}\right]} C_p^{2j+ 1} (-1)^j (2j + 1) \Phi_1^{p - 2j -1} \Phi_2^{2j }. \label{defini-V-2-2}
\end{eqnarray}
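As a formal consistency check (assuming the expressions of $F_1, F_2$ in \eqref{defi-mathbb-A-1-2} together with the normalization $\mathcal{L} = \Delta - \frac{1}{2} y \cdot \nabla + 1$ from \eqref{define-operator-L}), these potentials are exactly what the linearization produces. Indeed, differentiating $F_1$ with respect to its first argument at $(\Phi_1, \Phi_2)$ and isolating the $j=0$ term, we get
$$ \partial_1 F_1 (\Phi_1, \Phi_2) = p \Phi_1^{p-1} + \sum_{j=1}^{\left[\frac{p}{2}\right]} C_p^{2j} (-1)^j (p - 2j) \Phi_1^{p-2j-1} \Phi_2^{2j} = p \Phi_1^{p-1} + V_{1,1},$$
so the coefficient of $q_1$ coming from $\Delta - \frac{1}{2} y \cdot \nabla - \frac{1}{p-1} + \partial_1 F_1 (\Phi_1, \Phi_2)$ is
$$ \mathcal{L} - 1 - \frac{1}{p-1} + p \Phi_1^{p-1} + V_{1,1} = \mathcal{L} + p \left( \Phi_1^{p-1} - \frac{1}{p-1} \right) + V_{1,1} = \mathcal{L} + V + V_{1,1},$$
since $1 + \frac{1}{p-1} = \frac{p}{p-1}$. The same computation applied to $F_2$ yields the potentials $V_{2,1}$ and $V_{2,2}$ in the second equation.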
\medskip
\noindent
- The quadratic terms $B_1 (q_1, q_2), B_2 (q_1,q_2)$ are defined as follows:
\begin{eqnarray}
B_1 (q_1,q_2) & = & F_1 \left( \Phi_1 + q_1, \Phi_2 + q_2 \right) - F_1(\Phi_1, \Phi_2) - \sum_{j=0 }^{ \left[\frac{p}{2}\right]} C_p^{2j} (-1)^j (p-2j) \Phi_1^{p - 2j -1} \Phi_2^{2j} q_1 \label{defini-quadratic-B-1}\\
&-& \sum_{ j=0}^{ \left[\frac{p}{2}\right]} C_p^{2j} (-1)^j (2j) \Phi_1^{p - 2j} \Phi_2^{2j - 1} q_2,\nonumber\\
B_2(q_1, q_2) & = & F_2 \left( \Phi_1 + q_1, \Phi_2 + q_2 \right) - F_2(\Phi_1, \Phi_2) - \sum_{ j=0}^{ \left[\frac{p-1}{2}\right]} C_p^{2j+ 1} (-1)^j (p-2j -1) \Phi_1^{p - 2j -2} \Phi_2^{2j+ 1} q_1 \nonumber\\
& - & \sum_{ j =0 }^{ \left[\frac{p-1}{2}\right]} C_p^{2j+ 1} (-1)^j (2j + 1) \Phi_1^{p - 2j -1} \Phi_2^{2j } q_2.\label{defini-term-under-linear-B-2}
\end{eqnarray}
\medskip
\noindent
- The rest terms $R_1(y,s), R_2(y,s)$ are defined as follows:
\begin{eqnarray}
R_1 (y,s) &=& \Delta \Phi_1 - \frac{1}{2} y \cdot \nabla \Phi_1 - \frac{\Phi_1}{p-1} + F_1 (\Phi_1, \Phi_2) - \partial_s \Phi_1 , \label{defini-the-rest-term-R-1}\\
R_2 (y,s) &=& \Delta \Phi_2 - \frac{1}{2} y \cdot \nabla \Phi_2 - \frac{\Phi_2}{p-1} + F_2 (\Phi_1, \Phi_2) - \partial_s \Phi_2, \label{defini-the-rest-term-R-2}
\end{eqnarray}
where $ F_1, F_2$ are defined in \eqref{defi-mathbb-A-1-2}.
By linearization around $(\Phi_1, \Phi_2)$, our problem is reduced to constructing a solution $(q_1,q_2)$ of system \eqref{equation-satisfied-by-q-1-2} satisfying
$$ \|q_1\|_{L^{\infty}(\mathbb{R}^n)} + \|q_2\|_{L^{\infty}(\mathbb{R}^n)} \to 0 \text{ as } s \to +\infty.$$
Concerning equation \eqref{equation-satisfied-by-q-1-2}, we recall that we already know the properties of the linear operator $\mathcal{L}$ (see page \pageref{define-operator-L}). As for the potentials $V_{j,k},$ where $ j,k \in \{1,2\},$ they admit the following asymptotics:
\begin{eqnarray*}
\sum_{j,k \leq 2} |V_{j,k} (y,s) | \leq \frac{C}{s}, \quad \forall y \in \mathbb{R}^n,\ s\geq 1,
\end{eqnarray*}
(see Lemma \ref{lemmas-potentials}). Regarding the terms $B_1,B_2, R_1, R_2$, we see that whenever $|q_1| + |q_2| \leq 2,$ we have
\begin{eqnarray*}
|B_1(q_1,q_2)| &\leq & C(q_1^2 + q_2^2),\\
|B_2(q_1,q_2)| &\leq& C \left( \frac{|q_1|^2}{s} + |q_1 q_2| + |q_2|^2 \right),\\
\|R_1(\cdot,s)\|_{L^\infty(\mathbb{R}^n)} &\leq & \frac{C}{s}, \\
\|R_2 (\cdot,s)\|_{L^\infty(\mathbb{R}^n)} &\leq & \frac{C}{s^2},
\end{eqnarray*}
(see Lemmas \ref{lemma-quadratic-term-B-1-2} and \ref{lemma-rest-term-R-1-2}). In fact, the dynamics of equation \eqref{equation-satisfied-by-q-1-2} will mainly depend on the main linear operator
$$ \left( \begin{matrix}
\mathcal{L} + V & 0\\
0 & \mathcal{L} + V
\end{matrix} \right), $$
and the effects of the other terms will be less important. For that reason, we need to understand the dynamics of $\mathcal{L} + V$. Since the spectral properties of $\mathcal{L}$ were already introduced in Section \ref{subsection-pro-L}, we will focus here on the effect of $V$.
$i)$ Effect of $V$ inside the blowup region $\{|y| \leq K\sqrt s\}$, with $K>0$ arbitrary: we have
$$ V \to 0 \text{ in } L^2_\rho(|y| \leq K \sqrt s ) \text{ as } s \to + \infty,$$
which means that the effect of $V$ will be negligible with respect to that of $\mathcal{L},$ except perhaps on the null mode of $\mathcal{L}$ (see item $(ii)$ of Proposition \ref{prop-dynamic-q-1-2-alpha-beta} below).
$ii)$ Effect of $V$ outside the blowup region: for each $\epsilon > 0,$ there exist $K_{\epsilon} >0$ and $ s_{\epsilon} >0$ such that
$$ \sup_{\frac{|y|}{\sqrt s} \geq K_{\epsilon}, s \geq s_{\epsilon}} \left| V(y,s) + \frac{p}{p-1} \right| \leq \epsilon.$$
Since $1$ is the biggest eigenvalue of $\mathcal{L}$, the operator $\mathcal{L}+ V$ behaves as an operator with fully negative spectrum outside the blowup region $\{|y| \geq K_\epsilon\sqrt s\}$, which makes the control of the solution in this region easier.
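This limiting value $-\frac{p}{p-1}$ can be read directly from \eqref{defini-potentian-V} and \eqref{defi-Phi-1}: as $\frac{|y|}{\sqrt s} \to + \infty$ (with $s$ large), we have
$$ \Phi_1 (y,s) = \left( p-1 + \frac{(p-1)^2}{4p} \frac{|y|^2}{s} \right)^{-\frac{1}{p-1}} + \frac{n\kappa}{2ps} \to 0, \quad \text{hence} \quad V = p \left( \Phi_1^{p-1} - \frac{1}{p-1} \right) \to - \frac{p}{p-1}.$$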
\medskip
Since the behavior of the potential $V$ inside and outside the blowup region is different,
we will consider the dynamics of the solution for $|y| \leq 2K\sqrt s$ and for $|y| \geq K\sqrt s$ separately for some $K$ to be fixed large.
For that purpose, we introduce the following cut-off function
\begin{equation}\label{def-chi}
\chi(y,s) = \chi_0\left(\frac{|y|}{K \sqrt s} \right),
\end{equation}
where $\chi_0 \in C^{\infty}_{0}[0,+\infty), \|\chi_0\|_{L^{\infty}} \leq 1$ and
$$
\chi_0(x) = \left\{ \begin{array}{l}
1 \quad \text{ for } x \leq 1,\\
0 \quad \text{ for } x \geq 2,
\end{array} \right.$$
and $K$ is a positive constant to be fixed large later. Hence, it is reasonable to consider separately the solution in the blowup region $\{ |y| \leq 2 K \sqrt s \}$ and in the regular region $\{| y| \geq K \sqrt s \}$. More precisely, for every function $q$ in $L^\infty$, we use the following notation:
\begin{equation}\label{defini-q-1-1-e}
q = q_b + q_e \text{ with } q_b = \chi q \text{ and } q_e = (1 - \chi) q.
\end{equation}
Note in particular that $\text{ supp} (q_b) \subset \mathbb{B} ( 0, 2 K \sqrt s)$ and $ \text{ supp} (q_e) \subset \mathbb{R}^n \setminus \mathbb{B} ( 0, K \sqrt s)$.
Besides that, we also expand $q_b$ in $L^2_\rho$ as follows, according to the spectrum of $\mathcal{L}$ (see Section \ref{subsection-pro-L} above):
\begin{equation}\label{representation-q-1-L-2-rho}
q_b (y) = q_0 + q_1 \cdot y + \frac{1}{2} y^{\mathcal{T}} \cdot q_2 \cdot y - \text{ Tr} \left( q_2 \right) + q_- (y) ,
\end{equation}
where
\begin{eqnarray*}
q_0 & = & \int_{\mathbb{R}^n} q_b \rho (y) d y, \\
q_1 &=& \int_{\mathbb{R}^n} q_b \frac{y}{2} \rho (y) d y, \\
q_2 & =& \left ( \int_{\mathbb{R}^n} q_b \left( \frac{1}{4} y_j y_k - \frac{1}{2} \delta_{j,k} \right) \rho (y) d y \right)_{1 \leq j,k \leq n},
\end{eqnarray*}
and $\text{ Tr }(q_2)$ is the trace of the matrix $q_2$.
The reader should keep in mind that $q_0, q_1,q_2$ are the coordinates of $q_b$, not of $q$. Note that $q_m$ is the projection of $q_b$ on the eigenspace of $\mathcal{L}$ corresponding to the eigenvalue $\lambda = 1 - \frac{m}{2}.$ Accordingly, $q_-$ is the projection of $q_b$ on the negative part of the spectrum of $\mathcal{L}.$ As a consequence of \eqref{defini-q-1-1-e} and \eqref{representation-q-1-L-2-rho}, we see that every $q \in L^\infty (\mathbb{R}^n)$ can be decomposed into $5$ components as follows:
\begin{equation}\label{decom-5-parts}
q = q_b + q_e = q_0 + q_1 \cdot y + \frac{1}{2} y^\mathcal{T} \cdot q_2 \cdot y - \text{Tr} (q_2) + q_- + q_e.
\end{equation}
\subsection{The shrinking set}
In this part, we construct a shrinking set, in the sense that the convergence $(q_1,q_2) \to 0$ will be a consequence of the control of $(q_1,q_2)$ in this set. Here is our definition:
\begin{definition}[The shrinking set]\label{the-rhinking set}
For all $A \geq 1, p_1 \in (0,1)$ and $s > 0,$ we introduce the set $V_{p_1,A} (s),$ denoted for simplicity by $ V_A (s),$ as the set of all $(q_1, q_2) \in (L^\infty (\mathbb{R}^n))^2$ satisfying the following conditions:
\begin{eqnarray*}
|q_{1,0} | \leq \frac{A}{s^2} &\text{ and }& |q_{2,0}| \leq \frac{A^2}{s^{p_1 + 2}},\\
|q_{1,j} | \leq \frac{A}{s^2} &\text{ and }& |q_{2,j} | \leq \frac{A^2}{s^{p_1 + 2}}, \forall 1 \leq j \leq n,\\
|q_{1,j,k} | \leq \frac{A^2 \ln s}{s^2} &\text{ and }& |q_{2,j,k} | \leq \frac{A^5 \ln s}{s^{p_1 + 2}}, \forall 1 \leq j,k \leq n,\\
\left\| \frac{q_{1,-} }{1 + |y|^3} \right\|_{L^\infty} \leq \frac{A}{s^{2}} &\text{ and }& \left\| \frac{q_{2,-} }{1 + |y|^{3}} \right\|_{L^\infty} \leq \frac{A^2}{ s^{\frac{p_1 + 5}{2}}},\\
\|q_{1,e} \|_{L^\infty} \leq \frac{A^2}{\sqrt s} & \text{ and } & \|q_{2,e} \|_{L^\infty} \leq \frac{A^3}{ s^{\frac{p_1 + 2}{2}}},
\end{eqnarray*}
where $q_1$ and $q_2$ are decomposed as in \eqref{decom-5-parts}.
\end{definition}
In the following lemma, we show that belonging to $V_A(s)$ implies convergence to $0$; in fact, we have the following more precise statement:
\begin{lemma}\label{lemma-estiam-q-1-2-in V-A}
For all $A \geq 1, s \geq 1,$ if we have $(q_1, q_2) \in V_A (s)$, then the following estimates hold:
\begin{itemize}
\item[$(i)$] $\|q_1\|_{L^\infty (\mathbb{R}^n) } \leq \frac{C A^2}{ \sqrt s} \text{ and } \|q_2\|_{L^\infty(\mathbb{R}^n)} \leq \frac{CA^3}{s^{\frac{p_1 + 2 }{2}}}.$
\item[$(ii)$]
$$ |q_{1,b} (y) | \leq \frac{CA^2 \ln s}{s^2} (1 + |y|^3), \quad |q_{1,e} (y)| \leq \frac{C A^2}{s^2} (1 + |y|^3) \text{ and } |q_1| \leq \frac{C A^2 \ln s}{ s^2} (1 + |y|^3),$$
and
$$ |q_{2,b} (y) | \leq \frac{CA^3 }{s^{\frac{p_1 +5}{2}}} (1 + |y|^3), \quad |q_{2,e} (y)| \leq \frac{C A^3}{s^{\frac{p_1 + 5}{2}}} (1 + |y|^3) \text{ and } |q_2| \leq \frac{C A^3 \ln s}{ s^{\frac{p_1 + 5}{2}}} (1 + |y|^3).$$
\item[$(iii)$] For all $y \in \mathbb{R}^n$ we have
$$ |q_1| \leq C \left[ \frac{A}{s^2}( 1 + |y| ) + \frac{A^2 \ln s}{s^2} (1 + |y|^2) + \frac{A^2}{s^2} (1 + |y|^3) \right],$$
and
$$ |q_2| \leq C \left[ \frac{A^2}{s^{p_1 + 2}} (1 + |y|) + \frac{A^5 \ln s}{s^{p_1 + 2}} (1 + |y|^2) + \frac{A^3}{ s^{\frac{p_1 + 5}{2}}} (1 + |y|^3) \right] .$$
\end{itemize}
where $C$ will henceforth denote a universal constant which depends only on $K$.
\end{lemma}
\begin{proof}
We only prove the estimates for $q_2$, since the estimates for $q_1$ follow similarly and have already been proved in previous papers (see for instance Proposition 4.7 in \cite{TZpre15}). We now take $A \geq 1, s \geq 1$, $(q_1, q_2) \in V_A (s)$ and $y \in \mathbb{R}^n$. We also recall from \eqref{decom-5-parts} that
$$q_2 = q_{2,b} + q_{2,e},$$
where $\text{ supp} (q_{2,b}) \subset \mathbb{B} ( 0, 2 K \sqrt s)$ and $ \text{ supp} (q_{2,e}) \subset \mathbb{R}^n \setminus \mathbb{B} ( 0, K \sqrt s)$.
$(i)$ From \eqref{representation-q-1-L-2-rho}, we have
$$ q_{2,b} = q_{2,0} + q_{2,1} \cdot y + \frac{1}{2} y^\mathcal{T} \cdot q_{2,2} \cdot y - \text{Tr} (q_{2,2}) + q_{2,-} .$$
Therefore,
\begin{eqnarray}
|q_{2,b} (y)| & \leq & |q_{2,0}| + |q_{2,1}| |y| + \max_{j,k \leq n} | q_{2,j,k} | (1 + |y|^2) + \left\| \frac{q_{2,-}}{1 + |y|^3}\right\|_{L^\infty (\mathbb{R}^n)} (1 + |y|^3).\label{modul-q-2-b}
\end{eqnarray}
Then, recalling that $\text{supp} (q_{2,b}) \subset \mathbb{B} ( 0, 2 K \sqrt s)$ and using Definition \ref{the-rhinking set}, we see that
$$ |q_{2,b} (y)| \leq \frac{CA^3}{ s^{\frac{p_1 + 2}{2}}}.$$
Since we also have
$$ | q_{2,e} | \leq \frac{A^3}{s^{\frac{p_1 + 2}{2}}},$$
we end up with
$$ \| q_{2} \|_{L^\infty} \leq \| q_{2,b}\|_{L^\infty} + \| q_{2,e}\|_{L^\infty} \leq \frac{C A^3}{ s^{\frac{p_1 + 2}{2}}} .$$
$(ii)$ Using \eqref{modul-q-2-b} and Definition \ref{the-rhinking set}, we see that
\begin{equation}\label{estima-q-2-b-s-p-1-5}
| q_{2,b} (y) | \leq \frac{C A^3 }{ s^{\frac{p_1 + 5 }{2}}} ( 1 + | y|^3).
\end{equation}
We claim that $q_{2,e}$ satisfies a similar estimate:
\begin{equation}\label{estima-q-2-e-s-p-1-5}
| q_{2,e} (y) | \leq \frac{C A^3 }{ s^{\frac{p_1 + 5 }{2}}} ( 1 + | y|^3).
\end{equation}
Indeed, since $\text{ supp} (q_{2,e}) \subset \mathbb{R}^n \setminus \mathbb{B} ( 0, K \sqrt s),$ we may assume that
$$ \frac{|y| }{K \sqrt s} \geq 1,$$
hence, from Definition \ref{the-rhinking set}, we write
$$ |q_{2,e}(y)| \leq \frac{A^3}{s^{\frac{p_1 + 2}{2}}} \cdot 1 \leq \frac{A^3}{s^{\frac{p_1 + 2}{2}}} \frac{|y|^3}{ K^3 s^\frac{3}{2}} \leq \frac{C A^3 }{s^\frac{p_1 + 5}{2}} ( 1 + |y|^3),$$
and \eqref{estima-q-2-e-s-p-1-5} follows. Using \eqref{estima-q-2-b-s-p-1-5} and \eqref{estima-q-2-e-s-p-1-5}, we see that
$$ |q_2| \leq | q_{2,b} | + |q_{2,e}| \leq \frac{C A^3}{s^\frac{p_1 + 5}{2} } (1 + |y|^3).$$
$(iii)$ This is left to the reader, since it is a direct consequence of Definition \ref{the-rhinking set} and the decomposition \eqref{decom-5-parts}.
\end{proof}
\subsection{Initial data}
Here we suggest a class of initial data, depending on some parameters to be fine-tuned in order to get a good solution for our problem. Here is the initial data:
\begin{definition}[The initial data]\label{initial-data-profile-complex} For each
$A \geq 1, s_0 \geq 1, d_1= (d_{1,0}, d_{1,1}) \in \mathbb{R} \times \mathbb{R}^n, d_2 = (d_{2,0},d_{2,1}, d_{2,2}) \in \mathbb{R} \times \mathbb{R}^{ n} \times \mathbb{R}^{\frac{n(n+1)}{2}}$, we introduce
\begin{eqnarray*}
\phi_{1,A,d_1,s_0} (y) &=& \frac{A}{s_0^2} \left( d_{1,0} + d_{1,1} \cdot y \right) \chi (2 y, s_0),\\
\phi_{2,A, d_2,s_0} (y) &= &\left( \frac{A^2}{s_0^{p_1+2}} \left( d_{2,0} + d_{2,1} \cdot y \right) + \frac{A^5 \ln s_0}{s^{p_1+2}_0}\left( y^{\mathcal{T}} \cdot d_{2,2}\cdot y - 2\textup{ Tr}(d_{2,2}) \right) \right)\chi (2 y, s_0).
\end{eqnarray*}
\end{definition}
\textbf{Remark:} Note that $d_{1,0}$ and $d_{2,0}$ are scalars, $d_{1,1}$ and $d_{2,1}$ are vectors, and $d_{2,2}$ is a square matrix of order $n$. For simplicity, we may drop the parameters except $s_0$ and write $\phi_1 (y,s_0)$ and $\phi_2 (y,s_0)$.
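Note also that the total number of parameters matches the dimension appearing in Lemma \ref{lemma-control-initial-data} below: since $d_{2,2}$ is symmetric, we count
$$ \underbrace{(1 + n)}_{d_1} + \underbrace{\left( 1 + n + \frac{n(n+1)}{2} \right)}_{d_2} = \frac{n^2 + 5n + 4}{2},$$
which is precisely the number of components $( q_{1,0}, (q_{1,j})_{j \leq n}, q_{2,0}, (q_{2,j})_{j \leq n}, (q_{2,j,k})_{j \leq k \leq n})$ to be controlled in $\hat V_A (s_0)$.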
\medskip
We next claim that we can find a domain for $(d_{1},d_{2})$ so that the initial data belongs to $V_A(s_0)$:
\begin{lemma}[Control of initial data in $V_A(s_0)$]\label{lemma-control-initial-data}
There exists $A_1 \geq 1$ such that for all $ A \geq A_1$, there exists $s_1(A) \geq 1$ such that for all $s_0 \geq s_1(A),$ if $(q_1, q_2) (s_0)= \left( \phi_{1}, \phi_{ 2}\right)(s_0) $ where $(\phi_{1}, \phi_{ 2})(s_0)$ are defined in Definition \ref{initial-data-profile-complex}, then, the following properties hold:
\begin{itemize}
\item[$i)$] There exists a set $\mathcal{D}_{A, s_0} \subset \left[ -2, 2\right]^{ \frac{n^2 + 5n + 4}{2}} $ such that the mapping
\begin{eqnarray*}
\Psi_1: \mathbb{R}^{\frac{n^2 + 5n + 4}{2}} &\to & \mathbb{R}^{\frac{n^2 + 5n + 4}{2}} \\
(d_1, d_2) & \mapsto & ( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n})(s_0)
\end{eqnarray*}
is linear and one to one from $\mathcal{D}_{A,s_0}$ to $\hat V_A (s_0)$, where
\begin{eqnarray}
\hat V_A(s) = \left[ - \frac{A}{s^2},\frac{A}{s^2} \right]^{1 + n} \times \left[ - \frac{A^2}{s^{p_1 + 2}},\frac{A^2}{s^{p_1 + 2}} \right]^{1 + n} \times \left[ - \frac{A^5 \ln s}{s^{p_1+2}}, \frac{A^5 \ln s}{s^{ p_1+2}}\right]^{\frac{n(n+1)}{2}}.\label{defini-of-hat-V-A}
\end{eqnarray}
Moreover,
\begin{equation}\label{degree-Psi-1}
\Psi_1 (\partial \mathcal{D}_{A,s_0}) \subset \partial \hat V_A (s_0) \text{ and } \text{deg } ( \Psi_1 \left|_{\partial \mathcal{D}_{A,s_0}} \right.) \neq 0.
\end{equation}
\item[$ii)$] In particular, we have $(q_1, q_2) (s_0) \in V_{A} (s_0),$ and
\begin{eqnarray*}
| q_{1,j,k}(s_0)| &\leq &\frac{A^2 \ln s_0}{2 s_0^2}, \forall j,k \leq n,\\
\left\| \frac{q_{1,-} (.,s_0)}{1 + |y|^{3}} \right\|_{L^\infty} \leq \frac{A}{2s^{2}_0} &\text{ and }& \left\| \frac{q_{2,-} (.,s_0)}{1 + |y|^{3}} \right\|_{L^\infty} \leq \frac{A^{2} }{2s_0^{\frac{p_1 + 5}{2}}},\\
q_{1,e} (\cdot,s_0) = 0 &\text{ and } & q_{2,e} (\cdot,s_0) = 0.
\end{eqnarray*}
\end{itemize}
\end{lemma}
\begin{proof}
The proof is straightforward but a bit lengthy. For that reason, it is omitted, and we kindly refer the reader to Proposition 4.5 in \cite{TZpre15} for a quite similar case.
\end{proof}
Now, we give a key proposition for our argument. More precisely, in the following proposition, we prove the existence of a solution of equation \eqref{equation-satisfied-by-q-1-2} trapped in the shrinking set:
\begin{proposition}[Existence of a solution trapped in $V_A(s)$]\label{pro-existence-d-1-d-1}
There exists $A_2 \geq 1 $ such that for all $ A \geq A_2$, there exists $s_2(A) \geq 1$ such that for all $s_0 \geq s_2(A)$, there exists $(d_1,d_2) \in \mathbb{R}^{ \frac{n^2 + 5n + 4}{2}} $ such that the solution $(q_1, q_2)$ of equation \eqref{equation-satisfied-by-q-1-2} with initial data at time $s_0$ given by $(q_1,q_2)(s_0) = (\phi_1 , \phi_2 )(s_0)$, where $(\phi_1 , \phi_2 )(s_0)$ is defined in Definition \ref{initial-data-profile-complex}, satisfies
$$(q_1, q_2) \in V_A(s), \quad \forall s \in [s_0,+\infty).$$
\end{proposition}
The proof is divided into two steps:
\begin{itemize}
\item The first step: In this step, we reduce our problem to a finite dimensional one. In other words, we aim at proving that the control of $(q_1,q_2)(s)$ in the shrinking set $V_A (s)$ reduces to the control of the components
$$( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n} )(s)$$
in $\hat V_A (s).$
\item The second step: We get the conclusion of Proposition \ref{pro-existence-d-1-d-1} by using a topological argument in finite dimension.
\end{itemize}
\begin{proof} We here give the proof of Proposition \ref{pro-existence-d-1-d-1}.
\medskip
- \textit{Step 1: Reduction to a finite dimensional problem:}
Using \textit{a priori estimates}, our problem will be reduced to the control of a finite number of components.
\begin{proposition}[Reduction to a finite dimensional problem]\label{pro-reduction- to-finit-dimensional}
There exists $A_3 \geq 1$ such that for all $A \geq A_3 $, there exists $s_3 (A) \geq 1 $ such that
for all $s_0 \geq s_3 (A)$, the following holds:
\begin{itemize}
\item[$(a)$] If $(q_1,q_2)(s)$ is a solution
of equation \eqref{equation-satisfied-by-q-1-2} with initial data at time $s_0$ given by $(q_1,q_2)(s_0) = (\phi_1 , \phi_2 )(s_0)$ defined as in Definition \ref{initial-data-profile-complex}, with $(d_1,d_2) \in \mathcal{D}_{A,s_0}$ defined in Lemma \ref{lemma-control-initial-data}.
\item[$(b)$] If we furthermore assume that $(q_1,q_2) (s) \in V_A (s)$ for all $ s \in [s_0,s_1]$ for some $s_1 \geq s_0$, and $(q_1,q_2)(s_1) \in \partial V_A (s_1)$.
\end{itemize}
Then, we have the following conclusions:
\begin{itemize}
\item[$(i)$] (\textit{Reduction to finite dimensions}): We have $( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n} )(s_1) \in \partial \hat V_A (s_1) .$
\item[$(ii)$] (\textit{Transverse outgoing crossing}) There exists $\delta_0 > 0$ such that
\begin{equation}\label{traverse-outgoing crossing}
\forall \delta \in (0,\delta_0), ( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n} )(s_1+\delta) \notin \hat V_A (s_1+\delta),
\end{equation}
which implies that $(q_1,q_2)(s_1 + \delta) \notin V_A (s_1 + \delta) $ for all $ \delta \in (0, \delta_0).$
\end{itemize}
\end{proposition}
This proposition is the heart of the paper and needs many steps to be proved. For that reason, we dedicate a whole section to its proof (Section \ref{the proof of proposion-reduction-finite-dimensional} below). Let us admit it here, and get to the conclusion of Proposition \ref{pro-existence-d-1-d-1} in the second step.
\medskip
\textit{ - Step 2: Conclusion of Proposition \ref{pro-existence-d-1-d-1} by a topological argument.}
In this step, we finish the proof of Proposition \ref{pro-existence-d-1-d-1}. In fact, we aim at proving the existence of a parameter $(d_1,d_2) \in \mathcal{D}_{A,s_0}$ such that the solution $(q_1,q_2)(s)$ of equation \eqref{equation-satisfied-by-q-1-2} with initial data $(q_1 ,q_2 )(s_0) = (\phi_1 , \phi_2 )(s_0)$ exists globally for all $s \in [s_0, + \infty)$ and satisfies
$$ (q_1, q_2)(s) \in V_A (s).$$
Our argument is analogous to the argument of Merle and Zaag in \cite{MZdm97}. For that reason, we only give a brief proof. Let us fix $K, A, s_0$ such that Lemma \ref{lemma-control-initial-data} and Proposition \ref{pro-reduction- to-finit-dimensional} hold. We first consider, for $s \geq s_0$, a solution $(q_1, q_2)_{d_1,d_2}(s)$ of equation \eqref{equation-satisfied-by-q-1-2} whose initial data at $s_0$ depends on $(d_1,d_2)$ as follows:
$$ (q_1,q_2)_{d_1,d_2}(s_0) = (\phi_1, \phi_2) (s_0).$$
From Lemma \ref{lemma-control-initial-data} and by construction of the set $\mathcal{D}_{A,s_0},$ we know that
\begin{equation}\label{q-1-q-2-s-0-V-A-s-0}
(q_1,q_2) (s_0) \in V_A (s_0).
\end{equation}
By contradiction, we assume that for all $(d_1,d_2) \in \mathcal{D}_{A, s_0}$ there exists $s_1 \in [s_0, + \infty)$ such that
$$ (q_1, q_2)_{d_1,d_2} (s_1) \notin V_A(s_1).$$
Then, for each $(d_1, d_2) \in \mathcal{D}_{A,s_0},$ we can define
$$s^* (d_1, d_2) = \inf \{ s_1 \geq s_0 \text{ such that } (q_1,q_2)_{d_1,d_2} (s_1) \notin V_A (s_1)\}.$$
Since there exists $s_1$ such that $(q_1,q_2)(s_1) \notin V_A(s_1)$, we deduce that $s^*(d_1,d_2) < + \infty$ for all $ (d_1,d_2) \in \mathcal{D}_{A, s_0}.$ Besides that, using \eqref{q-1-q-2-s-0-V-A-s-0}, the minimality of $s^* (d_1,d_2),$ the continuity of $(q_1,q_2)$ in $s$ and the closedness of $V_A(s),$ we derive that $ (q_1,q_2) (s^*(d_1,d_2)) \in \partial V_A (s^*(d_1,d_2))$ and, for all $s \in [s_0, s^*(d_1,d_2)],$
$$ (q_1,q_2) (s) \in V_A (s).$$
\noindent
Therefore, from item $(i)$ of Proposition \ref{pro-reduction- to-finit-dimensional} we see that
$$ ( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n} )(s^*(d_1,d_2))\in \partial \hat V_A(s^*(d_1,d_2)).$$
This means that the following mapping $\Gamma$ is well defined:
\begin{eqnarray*}
\Gamma : \mathcal{D}_{A, s_0} & \to & \partial \left( [-1, 1]^{1 + n} \times [-1, 1]^{1 + n} \times [-1, 1]^{\frac{n (n+1)}{2}}\right)\\
(d_1,d_2) &\mapsto & \left( \frac{(s^*)^2}{A}(q_{1,0}, (q_{1,j})_{j\leq n}), \frac{(s^*)^{p_1 + 2}}{A^2} (q_{2,0}, (q_{2,j})_{j\leq n}),
\frac{(s^*)^{p_1+2}}{A^5 \ln s^*} (q_{2,j,k})_{j,k \leq n }\right)(s^*),
\end{eqnarray*}
where $s^* = s^* (d_1,d_2)$.
Moreover, it satisfies the two following properties:
\begin{itemize}
\item[(i)] $\Gamma$ is continuous from $\mathcal{D}_{A, s_0}$ to $ \partial \left( [-1, 1]^{\frac{n^2 + 5n + 4}{2}}\right).$ This is a consequence of item $(ii)$ in Proposition \ref{pro-reduction- to-finit-dimensional}.
\item[(ii)] The degree of the restriction $\Gamma \left|\right. _{\partial \mathcal{D}_{A,s_0}}$ is nonzero.
Indeed, again by item $(ii)$ in Proposition \ref{pro-reduction- to-finit-dimensional}, we have
$$ s^* (d_1,d_2) = s_0$$
in this case. Applying \eqref{degree-Psi-1}, we get the conclusion.
\end{itemize}
In fact, such a mapping $\Gamma$ cannot exist by the index theorem, which is a contradiction. Thus, Proposition \ref{pro-existence-d-1-d-1} follows, assuming Proposition \ref{pro-reduction- to-finit-dimensional} (see Section \ref{the proof of proposion-reduction-finite-dimensional} for the proof of the latter).
\end{proof}
\subsection{ The proof of Theorem \ref{Theorem-profile-complex}}
In this section, we aim at giving the proof of Theorem \ref{Theorem-profile-complex}.
\begin{proof} \textit{ Proof of Theorem \ref{Theorem-profile-complex} assuming Proposition \ref{pro-reduction- to-finit-dimensional}.}
\medskip
+ \textit{The proof of item $(i)$ of Theorem \ref{Theorem-profile-complex}:}
Using Proposition \ref{pro-existence-d-1-d-1}, there exists initial data $(q_1,q_2)_{d_1,d_2}(s_0) = (\phi_1,\phi_2)(s_0)$ such that the solution of equation \eqref{equation-satisfied-by-q-1-2} exists globally on $[s_0, + \infty)$ and satisfies
$$ (q_1,q_2) (s) \in V_A (s), \forall s \in [s_0, + \infty).$$
Thanks to the similarity variables \eqref{similarity-variales}, \eqref{defini-q-1-2} and item $(i)$ in Lemma \ref{lemma-estiam-q-1-2-in V-A}, we conclude that there exists initial data $u^0 $ of the form given in Remark \ref{remark-initial-data}, with $(d_1,d_2)$ given in Proposition \ref{pro-existence-d-1-d-1}, such that the solution $u(t)$ of equation \eqref{equ:problem} exists on $[0,T),$ where $T = e^{- s_0},$ and satisfies \eqref{esttima-theorem-profile-complex} and \eqref{estima-the-imginary-part}. Using these two estimates, we see that
$$ u(0,t) \sim \kappa (T -t)^{-\frac{1}{p-1}} \text{ as } t \to T,$$
which means that $u$ blows up at time $T$ and the origin is a blowup point. It remains to prove that for all $x \neq 0,$ $x$ is not a blowup point of $u$. The following Lemma allows us to conclude.
\begin{lemma}[No blow up under some threshold]\label{lemma-no-blowup-solution}
For all $C_0 > 0,$ $0 \leq T_1 < T$ and $\sigma>0$ small enough, there exists $\epsilon_0 (C_0,T, \sigma)> 0$ such that the following holds: if $u(\xi, \tau)$ satisfies, for all $|\xi| \leq \sigma$ and $\tau \in \left[T_1,T\right)$,
$$ \left| \partial_\tau u - \Delta u \right| \leq C_0 |u|^p$$
and
$$ |u(\xi,\tau)| \leq \epsilon_0 (T -\tau)^{-\frac{1}{p-1}},$$
then $u$ does not blow up at $\xi = 0, \tau = T$.
\end{lemma}
\begin{proof}
The proof of this lemma proceeds similarly to that of Theorem 2.1 in \cite{GKcpam89}. Although the proof of \cite{GKcpam89} was given in the real case, it extends naturally to the complex-valued case.
\end{proof}
\noindent We next use Lemma \ref{lemma-no-blowup-solution} to conclude that $u$ does not blow up at $x_0 \neq 0.$ Indeed, if $x_0 \neq 0$, we use \eqref{esttima-theorem-profile-complex} to deduce the following:
\begin{equation}\label{estima-u-x-0-neq-0}
\sup_{|x - x_0| \leq \frac{|x_0|}{2}} (T - t)^{\frac{1}{p -1}} | u(x,t) | \leq \left| f_0 \left( \frac{ \frac{|x_0|}{2}}{ \sqrt{{(T -t )}|\ln (T -t)| } }\right) \right| + \frac{C}{ \sqrt{ | \ln (T - t)|}} \to 0, \text{ as } t \to T.
\end{equation}
Applying Lemma \ref{lemma-no-blowup-solution} to $u(x - x_0, t),$ with $\sigma$ small enough such that $\sigma \leq \frac{|x_0|}{2}$ and $T_1$ close enough to $T,$ we see that $u(x - x_0, t)$ does not blow up at time $T$ and $x = 0$. Hence $x_0 $ is not a blowup point of $u$. This concludes the proof of item $(i)$ in Theorem \ref{Theorem-profile-complex}.
\medskip
+ \textit{The proof of item $(ii)$ of Theorem \ref{Theorem-profile-complex}:}
Here, we use the argument of Merle in \cite{Mercpam92} to deduce the existence of $u^* = u_1^* + i u_2^*$ such that $u(t) \to u^* $ as $t \to T$ uniformly on compact sets of $\mathbb{R}^n \backslash \{0\}$. In addition to that, we use the techniques in Zaag \cite{Zcpam01}, Masmoudi and Zaag \cite{ MZjfa08}, Tayachi and Zaag \cite{TZpre15} for the proofs of \eqref{asymp-u-start-near-0-profile-complex} and \eqref{asymp-u-start-near-0-profile-complex-imaginary-part}.
\noindent
Indeed, for all $x_0 \in \mathbb{R}^n , x_0 \neq 0 $, we deduce from \eqref{esttima-theorem-profile-complex} and \eqref{estima-the-imginary-part} that not only \eqref{estima-u-x-0-neq-0} holds, but also the following:
\begin{eqnarray}
\sup_{|x - x_0| \leq \frac{|x_0|}{2}} (T - t)^{\frac{1}{p -1}} |\ln(T -t)|| u_2(x,t) | &\leq & \left| \frac{3 |x_0|^2}{2 (T -t ) |\ln (T -t)| } f_0 ^p\left( \frac{ \frac{|x_0|}{2}}{ \sqrt{{(T -t )}|\ln (T -t)| } }\right) \right|\label{estimates-T-t-u-leq-epsilon} \\
&+& \frac{C}{ | \ln (T - t)|^{\frac{p_1}{2}}} \to 0,\nonumber \text{ as } t \to T.
\end{eqnarray}
We now consider $x_0$ such that $|x_0|$ is small enough, and $K_0$ to be fixed later. We define $t_0(x_0)$ by
\begin{equation}\label{x-0-leq-delta-t-x-0}
|x_0| = K_0 \sqrt{ (T -t_0(x_0)) |\ln (T -t_0(x_0))|} .
\end{equation}
Note that $t_0 (x_0)$ is unique when $|x_0|$ is small enough and $t_0 (x_0)\to T$ as $x_0 \to 0$. We introduce the rescaled functions $U(x_0, \xi, \tau)$ and $V_2 (x_0, \xi, \tau)$ as follows:
\begin{equation}\label{equa-upsilon-xi-tau}
U (x_0, \xi, \tau) = \left( T - t_0 (x_0)\right)^{\frac{1}{p-1}} u(x,t),
\end{equation}
and
\begin{equation}\label{defini-V-2-x-0-xi-tau}
V_2 (x_0, \xi, \tau) = |\ln (T- t_0(x_0))| U_2 (x_0, \xi, \tau),
\end{equation}
where $U_2 (x_0, \xi, \tau)$ is defined by
$$ U (x_0, \xi, \tau) = U_1 (x_0, \xi, \tau) + i U_2 (x_0, \xi, \tau),$$
and
\begin{equation}\label{relation-x-and-xi-tau}
(x,t) = \big(x_0 + \xi\sqrt{T - t_0(x_0)},t_0(x_0) + \tau (T - t_0(x_0))\big), \text{ and } (\xi, \tau) \in \mathbb{R}^n \times \left[ - \frac{t_0(x_0)}{T - t_0(x_0)}, 1 \right).
\end{equation}
With these notations, we derive from item $(i)$ in Theorem \ref{Theorem-profile-complex} the following estimates for the initial data (at $\tau = 0$) of $U$ and $V_2$:
\begin{eqnarray}
\sup_{|\xi| \leq |\ln(T - t_0(x_0))|^{\frac{1}{4}}} \left| U(x_0, \xi, 0) - f_0(K_0)\right| & \leq & \frac{C}{ 1 + |\ln(T - t_0(x_0))|^{\frac{1}{4}}} \to 0 \quad \text{ as } x_0 \to 0,\label{condition-initial-K-0-f-0}\\
\sup_{|\xi| \leq |\ln(T - t_0(x_0))|^{\frac{1}{4}}} \left| V_2(x_0, \xi, 0)- g_0(K_0)\right| &\leq &\frac{C}{ 1 + |\ln(T - t_0(x_0))|^{ \gamma_1}} \to 0 \quad \text{ as } x_0 \to 0,\label{condition-initial-K-0-g-0}
\end{eqnarray}
where $f_0 (x), g_0 (x)$ are defined as in \eqref{defini-f-0} and \eqref{defini-g-0-z} respectively, and $\gamma_1 = \min \left( \frac{1}{4} , \frac{p_1}{2} \right) $. Moreover, using equations \eqref{equation-satisfied-u_1-u_2}, we derive the following equations for $U, V_2$: for all $\xi \in \mathbb{R}^n, \tau \in\left[ 0, 1 \right)$
\begin{eqnarray}
\partial_\tau U = \Delta_\xi U + U^p,\label{equa-U-x-0-xi-tau}\\
\partial_\tau V_2 = \Delta_\xi V_2 + V_2 G_2 (U_1, U_2), \label{equa-V-2-x-0-xi-tau}
\end{eqnarray}
where $G_2$ is defined by
\begin{equation}\label{defini-G-2-U-1-2}
G_2 (U_1, U_2) U_2 = F_2 (U_1,U_2),
\end{equation}
and $F_2$ is defined in \eqref{defi-mathbb-A-1-2}. We note that $G_2$ and $F_2$ are polynomials in $U_1, U_2$.
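Let us briefly justify that $G_2$ is indeed a polynomial: every term of $F_2$ carries an odd power of $U_2$ (see \eqref{defi-mathbb-A-1-2}), so $U_2$ factors out:
$$ F_2 (U_1, U_2) = \sum_{j=0}^{\left[\frac{p-1}{2}\right]} C_p^{2j+1} (-1)^j U_1^{p-2j-1} U_2^{2j+1} = U_2 \sum_{j=0}^{\left[\frac{p-1}{2}\right]} C_p^{2j+1} (-1)^j U_1^{p-2j-1} U_2^{2j},$$
so that $G_2 (U_1,U_2) = \sum_{j=0}^{\left[\frac{p-1}{2}\right]} C_p^{2j+1} (-1)^j U_1^{p-2j-1} U_2^{2j}$.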
Besides that, from \eqref{estimates-T-t-u-leq-epsilon} and \eqref{equa-U-x-0-xi-tau}, we can apply Lemma \ref{lemma-no-blowup-solution} to $U$ when $|\xi| \leq |\ln (T - t_0(x_0))|^{\frac{1}{4}}$ and obtain:
\begin{equation}\label{bound-U-xi-tau-x-0}
\sup_{|\xi| \leq \frac{1}{2}|\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |U (x_0, \xi,\tau)| \leq C,
\end{equation}
and we aim at proving that $V_2 (x_0, \xi, \tau)$ satisfies
\begin{equation}\label{bound-V-2-xi-tau-x-0}
\sup_{|\xi| \leq \frac{1}{16}|\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |V_2(x_0, \xi,\tau)| \leq C.
\end{equation}
+ \textit{The proof for \eqref{bound-V-2-xi-tau-x-0}:} We first use \eqref{bound-U-xi-tau-x-0} to derive the following rough estimate:
\begin{equation}\label{estima-V-2-1-step-0}
\sup_{|\xi| \leq \frac{1}{2} |\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |V_2(x_0, \xi,\tau)| \leq C |\ln(T -t_0(x_0))|.
\end{equation}
We first introduce a cut-off function $\psi \in C^\infty_0 (\mathbb{R}^n)$ with $0 \leq \psi \leq 1$, $\text{supp}(\psi ) \subset B(0,1)$ and $\psi = 1 $ on $B( 0, \frac{1}{2}),$ and set
\begin{equation}\label{U-2-1-psi-1-x-0}
\psi_1 (\xi) = \psi \left( \frac{2\xi}{ |\ln (T -t_0 (x_0))|^{\frac{1}{4}}} \right) \text{ and } V_{2,1} (x_0, \xi, \tau) = \psi_1 (\xi) V_2 (x_0,\xi, \tau).
\end{equation}
Then, we deduce from \eqref{equa-V-2-x-0-xi-tau} an equation satisfied by $V_{2,1}$
\begin{equation}\label{equa-V-2-1-x-0}
\partial_\tau V_{2,1} = \Delta_\xi V_{2,1} - 2 \text{ div} (V_2 \nabla \psi_1) + V_2 \Delta \psi_1 + V_{2,1} G_2(U_1,U_2).
\end{equation}
Hence, we can write $V_{2,1}$ as an integral equation as follows:
\begin{equation}\label{equa-integral-V-2-1}
V_{2,1} (\tau) = e^{\Delta \tau} (V_{2,1}(0)) + \int_0^\tau e^{(\tau - \tau')\Delta} \left( - 2 \text{ div } (V_2 \nabla \psi_1) + V_2 \Delta\psi_1 + V_{2,1} G_2(U_1, U_2)(\tau') \right) d \tau'.
\end{equation}
Besides that, using \eqref{bound-U-xi-tau-x-0} and \eqref{estima-V-2-1-step-0} and the fact that
\begin{eqnarray*}
| \nabla \psi_1| \leq \frac{C}{ | \ln (T -t_0(x_0))|^{\frac{1}{4}}},
| \Delta \psi_1| \leq \frac{C}{ | \ln (T -t_0(x_0))|^{\frac{1}{2}}},
\end{eqnarray*}
we deduce that
\begin{eqnarray*}
\left| \int_0^\tau e^{(\tau - \tau')\Delta} \left( - 2 \text{ div } (V_2 \nabla \psi_1) \right) d\tau' \right| &\leq & C \int_{0}^\tau \frac{\| V_2 \nabla \psi_1\|_{L^\infty} (\tau')}{ \sqrt { \tau - \tau'}} d\tau' \leq C |\ln (T - t_0 (x_0))|^{\frac{3}{4}},\\
\left| \int_0^\tau e^{(\tau - \tau')\Delta} \left( V_2 (\tau') \Delta \psi_1 \right) d\tau' \right| &\leq & C \int_0^\tau \| V_2 \Delta \psi_1\|_{\infty} (\tau') d\tau' \leq C |\ln (T -t_0 (x_0))|^\frac{1}{2}, \\
\left| \int_0^\tau e^{(\tau - \tau')\Delta} \left( V_{2,1} G_2(U_1, U_2)(\tau') \right) d\tau' \right| &\leq & C \int_0^\tau \| V_{2,1 } G_2 (U_1,U_2)\|_{L^\infty} (\tau') d\tau'.
\end{eqnarray*}
Note that $G_2 (U_1, U_2)$ in the last line is bounded for $|\xi| \leq \frac{1}{2}|\ln(T -t_0(x_0))|^\frac{1}{4}$ (which contains the support of $\psi_1$) and $\tau \in [0,1)$, because it is a polynomial in $U_1, U_2$ and \eqref{bound-U-xi-tau-x-0} holds. Hence, we derive
$$ \| V_{2,1} G_2 (U_1, U_2)\|_{L^\infty}(\tau') \leq C \| V_{2,1}\|_{L^\infty} (\tau').$$
Hence, from \eqref{equa-integral-V-2-1} and the above estimates, we derive
$$ \| V_{2,1}(\tau)\|_{L^\infty} \leq C | \ln (T -t_0 (x_0)) |^{\frac{3}{4}} + C \int_0^\tau \|V_{2,1}(\tau')\|_{L^\infty} d \tau'. $$
Thanks to Gronwall's lemma, we deduce that
$$ \|V_{2,1} (\tau)\|_{L^\infty} \leq C |\ln(T -t_0(x_0))|^{\frac{3}{4}}, \forall \tau \in [0,1),$$
which yields
\begin{equation}\label{estima-V-2-1-step-1}
\sup_{|\xi| \leq \frac{1}{4} |\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |V_2(x_0, \xi,\tau)| \leq C |\ln(T -t_0(x_0))|^{\frac{3}{4}}.
\end{equation}
We then iterate the argument with
$$ V_{2,2} (x_0, \xi, \tau) = \psi_2 (\xi) V_{2} (x_0,\xi, \tau) \text{ where } \psi_2 (\xi) = \psi \left( \frac{4 \xi}{ |\ln(T -t_0 (x_0))|^{\frac{1}{4}}} \right).$$
Similarly, we deduce that
$$ \sup_{|\xi| \leq \frac{1}{8} |\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |V_2(x_0, \xi,\tau)| \leq C |\ln(T -t_0(x_0))|^{\frac{1}{2}}.$$
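More generally, at the $k$-th iteration one works with the cut-off $\psi_k(\xi) = \psi \left( \frac{2^k \xi}{ |\ln(T -t_0 (x_0))|^{\frac{1}{4}}} \right)$, and the same argument gains a factor $|\ln(T -t_0(x_0))|^{-\frac{1}{4}}$ (coming from the divergence term) at each step, which suggests the pattern
$$ \sup_{|\xi| \leq 2^{-(k+1)} |\ln (T -t_0(x_0))|^\frac{1}{4}, \tau \in [0,1) } |V_2(x_0, \xi,\tau)| \leq C |\ln(T -t_0(x_0))|^{1 - \frac{k}{4}}.$$
In particular, the initial logarithm in \eqref{estima-V-2-1-step-0} is completely absorbed after four steps.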
Repeating this process for a finite number of steps, we obtain \eqref{bound-V-2-xi-tau-x-0}. We now come back to our problem and aim at proving that:
\begin{eqnarray}
\sup_{|\xi| \leq \frac{1}{16} |\ln(T - t_0(x_0))|^{\frac{1}{4}}, \tau \in [0,1)} \left| U (x_0, \xi, \tau) - \hat U_{K_0} (\tau) \right| &\leq & \frac{C}{ 1 + |\ln (T- t_0(x_0) )|^{\gamma_2}}, \label{sup-v-xi-tau-apro-1}\\
\sup_{|\xi| \leq \frac{1}{32}|\ln(T - t_0(x_0))|^{\frac{1}{4}}, \tau \in [0,1)} \left| V_2 (x_0, \xi, \tau) - \hat V_{2,K_0} (\tau) \right| &\leq & \frac{C}{ 1 + |\ln (T- t_0(x_0) )|^{\gamma_3}}, \label{sup-V-2-xi-tau-apro-1}
\end{eqnarray}
where $\gamma_2, \gamma_3$ are positive and small enough, and $( \hat U_{K_0} , \hat V_{2,K_0}) (\tau) $ is the solution of the following system:
\begin{eqnarray}
\partial_\tau \hat U_{K_0} &=& \hat U_{K_0}^p,\label{ODE-hat-U-K-0}\\
\partial_\tau \hat V_{2, K_0} &=& p \hat U_{K_0}^{p-1} \hat V_{2,K_0}.\label{ODE-hat-V-2-K-0}
\end{eqnarray}
with initial data at $\tau = 0$
\begin{eqnarray*}
\hat U_{K_0} (0) &=& f_0 (K_0),\\
\hat V_{2,K_0} (0) &=& g_0 (K_0).
\end{eqnarray*}
The solution is given explicitly by
\begin{eqnarray}
\hat U_{K_0} (\tau) &=& \left( (p-1) (1 - \tau) + \frac{(p-1)^2 K_0^2}{4 p}\right)^{-\frac{1}{p-1}} ,\label{defini-hat-U-K-0-tau}\\
\hat V_{2,K_0} (\tau) &=& K_0 ^2 \left( (p-1) (1 - \tau) + \frac{(p-1)^2 K_0^2}{4 p}\right)^{-\frac{p}{p-1}}. \label{defini-hat-V-2-K-0-tau}
\end{eqnarray}
for all $\tau \in [0,1)$. The proof of \eqref{sup-v-xi-tau-apro-1} is given in Section 5 of Tayachi and Zaag \cite{TZpre15}, and the proof of \eqref{sup-V-2-xi-tau-apro-1} is similar. For the reader's convenience, we give it here. Let us consider
\begin{equation}\label{defini-mathcal-V-2}
\mathcal{V}_2 = V_2 - \hat V_{2,K_0} (\tau).
\end{equation}
Then, $\mathcal{V}_2$ satisfies
\begin{equation}\label{estima-mathcal-V-2}
\sup_{|\xi| \leq \frac{1}{ 16} | \ln (T - t_0(x_0))|^{\frac{1}{4}}, \tau \in [0,1)} | \mathcal{V}_{2}| \leq C.
\end{equation}
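Here we used \eqref{bound-V-2-xi-tau-x-0} together with the fact that $\hat V_{2,K_0}$ is bounded on $[0,1)$. Indeed, setting $b := \frac{(p-1)^2 K_0^2}{4p} > 0$, a direct differentiation of \eqref{defini-hat-U-K-0-tau} and \eqref{defini-hat-V-2-K-0-tau} shows that they do solve the system \eqref{ODE-hat-U-K-0}:
\begin{align*}
\partial_\tau \hat U_{K_0} &= \big( (p-1)(1 - \tau) + b \big)^{-\frac{1}{p-1} - 1} = \hat U_{K_0}^p,\\
\partial_\tau \hat V_{2,K_0} &= p K_0^2 \big( (p-1)(1 - \tau) + b \big)^{-\frac{p}{p-1} - 1} = p \hat U_{K_0}^{p-1} \hat V_{2,K_0},
\end{align*}
and both quantities stay bounded as $\tau \to 1$ since $b > 0$.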
We use \eqref{equa-V-2-x-0-xi-tau} to derive an equation on $\mathcal{V}_2$ as follows:
\begin{equation}\label{equat-mathcal-V-2}
\partial_\tau \mathcal{V}_2 = \Delta \mathcal{V}_2 + p \hat U_{K_0}^{p-1} \mathcal{V}_2 + p (U_1^{p-1} - \hat U_{K_0}^{p-1} ) V_2 + \mathcal{G}_2(x_0, \xi, \tau),
\end{equation}
where
$$ \mathcal{G}_2 (x_0, \xi,\tau) = V_2 [ G_2 (U_1, U_2) - p U_1^{p-1} ].$$
Note that, from the definition of $G_2$ and \eqref{bound-U-xi-tau-x-0}, we deduce that
$$ \sup_{|\xi| \leq \frac{1}{2} |\ln(T -t_0)|^{\frac{1}{4}}, \tau \in [0,1) } | G_2 (U_1, U_2) - p U_1^{p-1}| \leq C |U_2|.$$
Hence, using \eqref{defini-V-2-x-0-xi-tau} and \eqref{bound-V-2-xi-tau-x-0}, we derive
\begin{equation}\label{estima-mathcal-G-2}
\sup_{|\xi| \leq \frac{1}{16} |\ln(T -t_0)|^{\frac{1}{4}}, \tau \in [0,1) } | \mathcal{G}_2 (x_0, \xi, \tau)| \leq \frac{C}{ | \ln (T -t_0 (x_0))|}.
\end{equation}
We also define
$$\bar{\mathcal{V} }_2 = \psi_* (\xi) \mathcal{V}_2,$$
where
$$ \psi_* = \psi \left( \frac{16 \xi}{ |\ln (T- t_0 (x_0))|^{\frac{1}{4}}}\right),$$
and $ \psi $ is the cut-off function which has been introduced above. We also note that $\nabla \psi_*, \Delta \psi_*$ satisfy the following estimates
\begin{equation}\label{estima-psi-nabla-xi-psi-x-0}
\| \nabla_\xi \psi_* \|_{L^\infty} \leq \frac{C}{ |\ln (T- t_0 (x_0))|^{\frac{1}{4}}} \text{ and } \| \Delta_\xi \psi_*\|_{L^\infty} \leq \frac{C}{ |\ln (T- t_0 (x_0))|^{\frac{1}{2}}}.
\end{equation}
In particular, $\bar{\mathcal{V}}_2$ satisfies
\begin{equation}\label{euqa-bar-mathcal-V-2}
\partial_\tau \bar {\mathcal{V}}_2 = \Delta \bar {\mathcal{V}}_2 + p \hat U_{K_0}^{p-1} (\tau) \bar {\mathcal{V}}_2 - 2 \text{ div } (\mathcal{V}_2 \nabla \psi_*) + \mathcal{V}_2 \Delta \psi_* + p (U_1^{p-1} - \hat U_{K_0}^{p-1}) \psi_* V_2 + \psi_* \mathcal{G}_2.
\end{equation}
By Duhamel's principle, we derive the following integral equation
\begin{equation}\label{duhamel-bar-mathcal-V-2}
\bar{ \mathcal{V}}_2 (\tau) = e^{\tau \Delta} (\bar{ \mathcal{V}}_2 (0) ) + \int_0^\tau e^{(\tau - \tau')\Delta} \left( p \hat U_{K_0}^{p-1} \bar {\mathcal{V}}_2 - 2 \text{ div } (\mathcal{V}_2 \nabla \psi_*) + \mathcal{V}_2 \Delta \psi_* + p (U_1^{p-1} - \hat U_{K_0}^{p-1}) \psi_* V_2 + \psi_* \mathcal{G}_2 \right) (\tau') d\tau'.
\end{equation}
Besides that, we use \eqref{sup-v-xi-tau-apro-1}, \eqref{defini-hat-U-K-0-tau}, \eqref{estima-mathcal-V-2}, \eqref{estima-psi-nabla-xi-psi-x-0}, \eqref{estima-mathcal-G-2} to derive the following estimates: for all $\tau \in [0,1)$
\begin{eqnarray*}
|\hat U_{K_0} (\tau )| & \leq & C ,\\
\| \mathcal{V}_2 \nabla \psi_* \|_{L^\infty} (\tau) & \leq & \frac{C }{ |\ln (T -t_0(x_0))|^{\frac{1}{4}}},\\
\| \mathcal{V}_2 \Delta \psi_* \|_{L^\infty} (\tau) & \leq & \frac{C }{ |\ln (T -t_0(x_0))|^{\frac{1}{2}}},\\
\left\| \left(U_1^{p-1} - \hat U_{K_0}^{p-1} \right) \psi_* \right\|_{L^\infty} (\tau)& \leq & \frac{C}{ | \ln(T -t_0(x_0))|^{\gamma_2}}, \\
\| \mathcal{G}_2 \psi_*\|_{L^{\infty}} & \leq & \frac{C}{ |\ln (T -t_0(x_0))| }.
\end{eqnarray*}
where $\gamma_2$ is given in \eqref{sup-v-xi-tau-apro-1}. Hence, we derive from the above estimates that, for all $\tau \in [0,1)$,
\begin{eqnarray*}
| e^{(\tau - \tau')\Delta}p \hat U_{K_0}^{p-1} \bar {\mathcal{V}}_2 (\tau') | & \leq &C \|\bar {\mathcal{V}}_2 (\tau') \|_{L^\infty},\\
| e^{(\tau - \tau')\Delta} (\text{div} (\mathcal{V}_2 \nabla \psi_*)) | &\leq & C \frac{1}{\sqrt{ \tau - \tau'}} \frac{1}{| \ln (T- t_0(x_0))|^{\frac{1}{4}}} ,\\
|e^{(\tau - \tau')\Delta } ( \mathcal{V}_2 \Delta \psi_*) | & \leq & \frac{C}{|\ln(T - t_0(x_0) )|^{\frac{1}{2}} },\\
|e^{(\tau - \tau')\Delta } ( p (U_1^{p-1} - \hat U_{K_0}^{p-1}) \psi_* V_2 )(\tau') | & \leq & \frac{C}{ |\ln (T -t_0(x_0))|^{\gamma_2}},\\
|e^{(\tau - \tau')\Delta } (\psi_* \mathcal{G}_2 )(\tau')| &\leq & \frac{C}{|\ln (T - t_0(x_0))|}.
\end{eqnarray*}
Plugging into \eqref{duhamel-bar-mathcal-V-2}, we obtain
$$ \|\bar{\mathcal{V}}_2 (\tau)\|_{L^\infty} \leq \frac{C}{ |\ln(T -t_0(x_0))|^{\gamma_3}} + C \int_{0}^\tau \|\bar{\mathcal{V}}_2 (\tau')\|_{L^\infty} d \tau' ,$$
where $\gamma_3 = \min (\frac{1}{4}, \gamma_2)$. Then, thanks to Gronwall's inequality, we get
$$\| \bar{ \mathcal{V}}_2\|_{L^\infty} \leq \frac{C}{|\ln(T -t_0(x_0))|^{\gamma_3}}.$$
Hence, \eqref{sup-V-2-xi-tau-apro-1} follows. Finally, we easily find the asymptotics of $u^*$ and $u_2^*$ as follows, thanks to the definition of $U$ and $V_2$ and to estimates \eqref{sup-v-xi-tau-apro-1} and \eqref{sup-V-2-xi-tau-apro-1}:
\begin{equation}\label{limit-u-start}
u^* (x_0) = \lim_{t \to T} u(x_0, t) = (T- t_0 (x_0))^{- \frac{1}{p-1}} \lim_{\tau \to 1} U (x_0, 0, \tau) \sim (T- t_0 (x_0))^{- \frac{1}{p-1}} \left(\frac{(p - 1)^2}{ 4 p} K_0^2 \right)^{- \frac{1}{p-1}},
\end{equation}
and
\begin{equation}\label{limit-im-u-*}
u_2^* (x_0) = \lim_{t \to T} u_2(x_0, t) = \frac{(T- t_0 (x_0))^{- \frac{1}{p-1}}}{|\ln (T- t_0 (x_0))|} \lim_{\tau \to 1} V_2 (x_0, 0, \tau) \sim \frac{(T- t_0 (x_0))^{- \frac{1}{p-1}}}{|\ln (T- t_0 (x_0))|} \left(\frac{(p - 1)^2}{ 4 p} \right)^{- \frac{p}{p-1}} (K_0^2)^{- \frac{1}{p-1}}.
\end{equation}
Using the relation \eqref{x-0-leq-delta-t-x-0}, we find that
\begin{equation}\label{asymp-T-t-0-x-0}
T - t_0(x_0) \sim \frac{|x_0|^2}{ 2 K_0^2 |\ln |x_0||} \text{ and } \ln(T - t_0(x_0)) \sim 2 \ln (|x_0|), \quad \text{ as } x_0 \to 0.
\end{equation}
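The second equivalence in \eqref{asymp-T-t-0-x-0} follows from the first one by taking logarithms:
$$ \ln (T - t_0(x_0)) = 2 \ln |x_0| - \ln \big( 2 K_0^2 |\ln |x_0|| \big) + o(1) = 2 \ln |x_0| \left( 1 + O\left( \frac{\ln |\ln |x_0||}{|\ln |x_0||}\right)\right) \quad \text{ as } x_0 \to 0.$$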
Plugging \eqref{asymp-T-t-0-x-0} into \eqref{limit-u-start} and \eqref{limit-im-u-*}, we get the conclusion of item $(ii)$ of Theorem \ref{Theorem-profile-complex}.
This concludes the proof of Theorem \ref{Theorem-profile-complex} assuming that Proposition \ref{pro-reduction- to-finit-dimensional} holds. Naturally, we need to prove this proposition in order to finish the argument. This will be done in the next section.
\end{proof}
\section{The proof of Proposition \ref{pro-reduction- to-finit-dimensional}}\label{the proof of proposion-reduction-finite-dimensional}
This section is devoted to the proof of Proposition \ref{pro-reduction- to-finit-dimensional},
which is the heart of our analysis. We proceed in two parts. In the first part, we derive \textit{a priori estimates} on $q(s)$ in $V_A(s)$. In the second part, we show that the new bounds are better than those defined in $V_A(s)$, except for the first components $( q_{1,0}, (q_{1,j} )_{j \leq n } , q_{2,0}, (q_{2,j})_{j\leq n}, (q_{2,j,k} )_{j,k \leq n} )(s)$. This means that the problem is reduced to the control of these components, which is the conclusion of item $(i)$ of Proposition \ref{pro-reduction- to-finit-dimensional}. Item $(ii)$ of Proposition \ref{pro-reduction- to-finit-dimensional} is just a direct consequence of the dynamics of these modes. Let us start the first part.
\subsection{A priori estimates on $(q_1,q_2)$ in $V_A(s)$.}
In this subsection, we aim at proving the following proposition:
\begin{proposition}\label{prop-dynamic-q-1-2-alpha-beta}
There exists $A_4\geq 1,$ such that for all $ A \geq A_4$ there
exists $s_4(A)\geq 1$, such that the following holds for all
$s_0 \geq s_4(A)$: we assume that for all $s \in [\sigma,s_1], (q_1,q_2)(s) \in V_A(s)$ for some $s_1\geq s_0$. Then, the following holds for all $s \in [s_0,s_1]$:
\begin{itemize}
\item[$(i)$] (\textit{ODE satisfied by the positive modes}) For all $j \in \{1, \dots, n\}$ we have
\begin{equation}\label{ODE-q-1-0-1}
\left| q_{1,0}' (s) - q_{1,0} (s) \right| + \left| q_{1,j}' (s) - \frac{1}{2}
q_{1,j} (s) \right| \leq \frac{C}{s^2},\forall j\leq n.
\end{equation}
\begin{equation}\label{ODE-Phi-0-1}
\left| q_{2,0}' (s) - q_{2,0}(s) \right| + \left| q_{2,j}' (s) - \frac{1}{2} q_{2,j}(s) \right| \leq \frac{C }{s^{p_1 +2}}, \forall j \leq n.
\end{equation}
\item[$(ii)$] (\textit{ODE satisfied by the null modes}) For all $j,k \leq n$
\begin{equation}\label{ODE-q-1-2}
\left| q_{1,j,k}' (s) + \frac{2}{s} q_{1,j,k} (s) \right| \leq \frac{C A}{s^3},
\end{equation}
\begin{equation}\label{ODE-Phi-2}
\left| q_{2,j,k} '(s) + \frac{2}{s} q_{2,j,k}(s)\right| \leq \frac{C A^2 \ln s}{s^{p_1^* + 3}}.
\end{equation}
\item[$(iii)$] (\textit{Control the negative part})
\begin{equation}\label{Estimata-q-1--}
\left\| \frac{q_{1,-}(.,s)}{1 + |y|^{3}}\right\|_{L^\infty} \leq Ce^{- \frac{s - \tau }{2}} \left\| \frac{q_{1,-}(.,\tau)}{1 + |y|^{3}}\right\|_{L^\infty} + C \frac{ e^{- (s - \tau )^2}}{s^{\frac{3}{2}}} \|q_{1,e}(.,\tau)\|_{L^\infty} + \frac{C (1 + s -\tau)}{s^2},
\end{equation}
\begin{equation}\label{Estimat-q-2--}
\left\| \frac{q_{2,-}(.,s)}{1 + |y|^{3}}\right\|_{L^\infty} \leq Ce^{- \frac{s - \tau }{2}} \left\| \frac{q_{2,-}(.,\tau)}{1 + |y|^{3}}\right\|_{L^\infty} + C \frac{ e^{- (s - \tau )^2}}{s^{\frac{3}{2}}} \|q_{2,e}(.,\tau)\|_{L^\infty} + \frac{C (1 + s -\tau)}{s^{\frac{p_1 + 5}{2}}}.
\end{equation}
\item[$(iv)$] (\textit{Outer part})
\begin{equation}\label{outer-Q-e}
\left\| q_{1,e} (.,s) \right\|_{L^\infty} \leq C e^{- \frac{(s -\tau)}{p} } \|q_{1,e}(.,\tau)\|_{L^\infty} + C e^{s - \tau }s^{\frac{3}{2}} \left\| \frac{q_{1,-}(.,\tau)}{ 1 + |y|^3} \right\|_{L^\infty} + \frac{C (1 + s - \tau)e^{s - \tau}}{\sqrt s},
\end{equation}
\begin{equation}\label{outer-Phi-e}
\left\| q_{2,e} (.,s) \right\|_{L^\infty} \leq C e^{- \frac{(s -\tau)}{p} } \|q_{2,e}(.,\tau)\|_{L^\infty} + C e^{s - \tau }s^{\frac{3}{2}} \left\| \frac{q_{2,-}(.,\tau)}{ 1 + |y|^3} \right\|_{L^\infty} + \frac{C (1 + s - \tau)e^{s - \tau}}{s^{\frac{p_1 +2}{2}}}.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
The proof of this Proposition is given in two steps:
+ \textit{ Step 1: } We will give a proof of items $(i)$ and $(ii)$ by projecting the equations satisfied by $q_1$ and $q_2$.
+ \textit{ Step 2: } We will control the other components by studying the dynamics of the linear operator $\mathcal{L} + V $.
\textbf{Step 1:} We observe that the techniques of the proof for \eqref{ODE-q-1-0-1}, \eqref{ODE-Phi-0-1}, \eqref{ODE-q-1-2} and \eqref{ODE-Phi-2} are the same. So, we only deal with the proof of
\eqref{ODE-q-1-2}. For each $i,j \leq n$, by using the equation in \eqref{equation-satisfied-by-q-1-2} and the definition of $q_{1,i,j}$, we deduce that
\begin{equation}\label{in-ject-equa-q-1-h-j-k}
\left|q_{1,i,j}'(s) - \int \left[ \mathcal{L} q_1 + Vq_1 + B_1(q_1, q_2) + R_1(y,s) \right] \chi(y,s) \left( \frac{y_i y_j }{4} - \frac{\delta_{i,j}}{2} \right) \rho dy \right| \leq C e^{-s},
\end{equation}
if $K$ is large enough. In addition, using the fact that $(q_1,q_2) \in V_A(s)$ together with Lemmas \ref{lemma-estiam-q-1-2-in V-A}, \ref{lemmas-potentials}, \ref{lemma-quadratic-term-B-1-2} and \ref{lemma-rest-term-R-1-2}, we obtain
\begin{eqnarray*}
\left|\int \mathcal{L} q_1 \, \chi \left( \frac{y_i y_j }{4} - \frac{\delta_{i,j}}{2} \right)\rho dy \right| &\leq & \frac{C}{s^3},\\
\left| \int V q_1 \chi \left( \frac{y_i y_j }{4} - \frac{\delta_{i,j}}{2} \right) \rho dy + \frac{2 }{s} q_{1,i,j}(s)\right| &\leq & \frac{C A}{s^3},\\
\left|\int B_1(q_1,q_2) \chi \left( \frac{y_i y_j }{4} - \frac{\delta_{i,j}}{2} \right) \rho dy \right| &\leq & \frac{C }{s^3},\\
\left| \int R_1(y,s) \chi \left( \frac{y_i y_j }{4} - \frac{\delta_{i,j}}{2} \right) \rho dy \right| &\leq & \frac{C}{s^3},
\end{eqnarray*}
if $s \geq s_4(A)$. Then, \eqref{ODE-q-1-2} is derived by adding all the above estimates.
\textbf{ Step 2:} In this part, we will concentrate
on the proof of items $(iii)$ and $(iv)$.
We now rewrite \eqref{equation-satisfied-by-q-1-2} in its integral form: for each $s \geq \tau $
\begin{equation}\label{equationq-1-2-intergral}
\left\{ \begin{array}{rcl}
q_1(s) &=& \mathcal{K}(s,\tau) q_1(\tau) + \int_{\tau}^s \mathcal{K}(s,\sigma) \left[ (V_{1,1} q_1 )(\sigma) + ( V_{1,2} q_2) (\sigma) + B_1(q_1,q_2)(\sigma) + R_1(\sigma) \right] d \sigma \\
&=& \sum_{i=1}^5 \vartheta_{1,i}(s,\tau),\\
q_2(s) & =& \mathcal{K}(s,\tau) q_2(\tau) + \int_{\tau}^s \mathcal{K}(s,\sigma) \left[ (V_{2,1} q_1)(\sigma) + (V_{2,2} q_2)(\sigma) + B_2(q_1,q_2)(\sigma) + R_2(\sigma) \right] d \sigma \\
&=& \sum_{i=1}^5 \vartheta_{2,i}(s,\tau),
\end{array} \right.
\end{equation}
where $\{\mathcal{K}(s,\tau)\}_{s \geq \tau}$ is the fundamental solution associated to the linear operator $\mathcal{L} + V$ and defined by
\begin{equation}\label{fundamental-sol}
\left\{ \begin{array}{l}
\partial_s \mathcal{K}(s,\tau) = (\mathcal{L} + V) \mathcal{K}(s,\tau),\quad \forall s > \tau,\\
\mathcal{K}(\tau,\tau) = Id.
\end{array}
\right.
\end{equation}
Let us now introduce some notations:
\begin{align*}
\vartheta_{1,1}(s,\tau) &= \mathcal{K}(s,\tau) q_1(\tau), \quad \vartheta_{1,2}(s,\tau) = \int_{\tau}^s \mathcal{K}(s,\sigma) (V_{1,1} q_1)(\sigma) d \sigma, \quad \vartheta_{1,3}(s,\tau) = \int_{\tau}^s \mathcal{K}(s,\sigma) ( V_{1,2} q_2)(\sigma) d \sigma,\\
\vartheta_{1,4}(s,\tau) & = \int_{\tau}^s \mathcal{K}(s,\sigma) ( B_1 (q_1,q_2))(\sigma) d \sigma, \quad \vartheta_{1,5} = \int_{\tau}^s \mathcal{K}(s,\sigma) ( R_1 (., \sigma)) d \sigma,
\end{align*}
and
\begin{align*}
\vartheta_{2,1}(s,\tau) &= \mathcal{K}(s,\tau) (q_2(\tau)), \quad \vartheta_{2,2}(s,\tau) = \int_{\tau}^s \mathcal{K}(s,\sigma) (V_{2,1} q_1)(\sigma) d \sigma, \quad \vartheta_{2,3}(s,\tau) = \int_{\tau}^s \mathcal{K}(s,\sigma) ( V_{2,2} q_2)(\sigma) d \sigma,\\
\vartheta_{2,4}(s,\tau) & = \int_{\tau}^s \mathcal{K}(s,\sigma) ( B_2 (q_1,q_2))(\sigma) d \sigma, \quad \vartheta_{2,5} = \int_{\tau}^s \mathcal{K}(s,\sigma) ( R_2 (., \sigma)) d \sigma.
\end{align*}
From \eqref{equationq-1-2-intergral}, we can see the strong influence of the kernel $\mathcal{K}$. For that reason, we will study the dynamics of this operator:
\begin{lemma}[A priori estimates of the linearized operator]\label{dynamic-K-feym}For all $\rho^* \geq 0$, there exists $s_5(\rho^*) \geq 1$, such that if $\sigma \geq s_5(\rho^*)$ and $v \in L^{2}_{\rho}$ satisfies
\begin{equation}\label{condition-q-sigma}
\sum_{m=0}^2 |v_m| + \left\|\frac{v_-}{1 + |y|^3}\right\|_{L^{\infty}} +\|v_e\|_{L^{\infty}} < \infty,
\end{equation}
then, for all $ s \in [\sigma, \sigma + \rho^*],$ the function $\theta(s) = \mathcal{K}(s,\sigma) v $ satisfies
\begin{equation}\label{control-K-q-sigma1}
\begin{array}{l}
\left\|\frac{\theta_-(y,s)}{1 + |y|^3}\right\|_{L^{\infty}} \leq \frac{C e^{s -\sigma} \left( (s - \sigma)^2 +1 \right)}{s} \left( |v_0| + |v_1| + \sqrt s |v_2|\right)\\
\quad \quad \quad \quad \quad + C e^{-\frac{(s-\sigma)}{2}} \left\|\frac{v_-}{1 + |y|^3}\right\|_{L^{\infty}} + C \frac{e^{-(s-\sigma)^2} }{s^{\frac{3}{2}}} \|v_e\|_{L^{\infty}},
\end{array}
\end{equation}
and
\begin{equation}\label{control-K-q-e}
\|\theta_e(y,s)\|_{L^{\infty}} \leq C e^{s -\sigma} \left( \sum_{l=0}^2 s^{\frac{l}{2}} |v_l| +s^{\frac{3}{2}} \left\|\frac{v_-}{1 +|y|^3}\right\|_{L^{\infty}}\right) + C e^{-\frac{s -\sigma}{p}} \|v_e\|_{L^{\infty}}.
\end{equation}
\end{lemma}
\begin{proof} The proof of this result was given by Bricmont and Kupiainen \cite{BKnon94} in the one dimensional case. Later, it was extended to the higher dimensional case by Nguyen and Zaag \cite{NZens16}. We kindly refer interested readers to Lemma 2.9 in \cite{NZens16} for details of the proof.
\end{proof}
We now use Lemmas \ref{dynamic-K-feym}, \ref{lemma-estiam-q-1-2-in V-A}, \ref{lemmas-potentials}, \ref{lemma-quadratic-term-B-1-2} and \ref{lemma-rest-term-R-1-2} to deduce the following Lemma which implies Proposition \ref{prop-dynamic-q-1-2-alpha-beta}.
\begin{lemma}\label{control-prin-q-e-q-}
For all $A \geq 1, \rho^* \geq 0$, there exists $s_6(A, \rho^*) \geq 1$ such that the following holds for all $s_0 \geq s_6(A,\rho^*)$: if $(q_1,q_2)(s) \in V_A(s)$ for all $s \in [\tau, \tau + \rho^*]$, where $\tau \geq s_0$, then we have the following properties for all $ s \in [\tau, \tau + \rho^*] $:
\begin{itemize}
\item[$i)$] (The linear term $\vartheta_{1,1}(s,\tau)$ and $\vartheta_{2,1}(s,\tau)$)
\begin{eqnarray*}
\left\|\frac{(\vartheta_{1,1}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & C e^{- \frac{s - \tau}{2}} \left\|\frac{q_{1,-}(., \tau)}{1 + |y|^3} \right\|_{L^{\infty}} + \frac{C e^{- (s-\tau)^2}}{s^{\frac{3}{2}}} \|q_{1,e} (\tau)\|_{L^\infty} + \frac{C }{s^{2}} ,\\
\| (\vartheta_{1,1}(s,\tau))_e\|_{L^{\infty}} &\leq & C e^{- \frac{s - \tau }{p}} \|q_{1,e} (\tau)\|_{L^\infty} + C e^{s- \tau} s^{\frac{3}{2}} \left\|\frac{q_{1,-}(., \tau)}{1 + |y|^3} \right\|_{L^{\infty}} + \frac{C }{ \sqrt s},\\
\left\|\frac{(\vartheta_{2,1}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & C e^{- \frac{s - \tau}{2}} \left\|\frac{q_{2,-}(., \tau)}{1 + |y|^3} \right\|_{L^{\infty}} + \frac{C e^{- (s-\tau)^2}}{s^{\frac{3}{2}}} \|q_{2,e} (\tau)\| + \frac{C }{s^{\frac{p_1+ 5}{2}}} ,\\
\| (\vartheta_{2,1}(s,\tau))_e\|_{L^{\infty}} &\leq & C e^{- \frac{s - \tau }{p}} \|q_{2,e} (\tau)\|_{L^\infty} + C e^{s- \tau} s^{\frac{3}{2}} \left\|\frac{q_{2,-}(., \tau)}{1 + |y|^3} \right\|_{L^{\infty}} + \frac{C }{ s^{\frac{p_1 + 2}{2}}}.
\end{eqnarray*}
\item[$ii)$] (The potential terms $\vartheta_{1,2}(s,\tau)$ and $\vartheta_{2,2}(s,\tau)$)
\begin{eqnarray*}
\left\|\frac{(\vartheta_{1,2}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C(s - \tau)}{ s^{2 }}, \quad \| (\vartheta_{1,2}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s - \tau)}{ s^{\frac{1}{2} }},\\
\left\|\frac{(\vartheta_{2,2}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C(s - \tau)}{ s^{ \frac{p_1 + 5}{2} }}, \quad \| (\vartheta_{2,2}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s - \tau)}{ s^{\frac{p_1 + 2}{2} }}.
\end{eqnarray*}
\item[$iii)$] (The coupling terms $\vartheta_{1,3}(s,\tau) $ and $\vartheta_{2,3}(s,\tau) $)
\begin{eqnarray*}
\left\|\frac{(\vartheta_{1,3}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{2}}, \quad \| (\vartheta_{1,3}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{1}{2} }},\\
\left\|\frac{(\vartheta_{2,3}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{\frac{p_1+5}{2}}}, \quad \| (\vartheta_{2,3}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{p_1+2}{2} }}.
\end{eqnarray*}
\item[$iv)$] (The quadratic terms $\vartheta_{1,4}(s,\tau) $ and $\vartheta_{2,4}(s,\tau) $)
\begin{eqnarray*}
\left\|\frac{(\vartheta_{1,4}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{2}}, \quad \| (\vartheta_{1,4}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{1}{2} }},\\
\left\|\frac{(\vartheta_{2,4}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{\frac{p_1+5}{2}}}, \quad \| (\vartheta_{2,4}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{p_1+2}{2} }}.
\end{eqnarray*}
\item[$v)$] (The rest terms $\vartheta_{1,5}(s,\tau) $ and $\vartheta_{2,5}(s,\tau) $)
\begin{eqnarray*}
\left\|\frac{(\vartheta_{1,5}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{2}}, \quad \| (\vartheta_{1,5}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{1}{2} }},\\
\left\|\frac{(\vartheta_{2,5}(s,\tau))_-}{1 + |y|^3} \right\|_{L^{\infty}} &\leq & \frac{C (s - \tau) }{ s^{\frac{p_1+5}{2}}}, \quad \| (\vartheta_{2,5}(s,\tau))_e\|_{L^{\infty}} \leq \frac{C (s -\tau ) }{ s^{\frac{p_1+2}{2} }}.
\end{eqnarray*}
\end{itemize}
\end{lemma}
\begin{proof}
The result is implied from the definition of the shrinking set $V_A(s) $ and Lemma \ref{lemma-estiam-q-1-2-in V-A} and the bounds for $V, V_{j,k}, B_1, B_2, R_1, R_2$ with $j ,k \in \{1,2\}$ which are shown in Lemmas \ref{lemmas-potentials}, \ref{lemma-quadratic-term-B-1-2} and \ref{lemma-rest-term-R-1-2}. For details in a quite similar case, see Lemma 4.20 in Tayachi and Zaag \cite{TZpre15}.
\end{proof}
Finally, the conclusions of $(iii)$ and $(iv)$ of Proposition \ref{prop-dynamic-q-1-2-alpha-beta} follow by using formula \eqref{equationq-1-2-intergral} and Lemma \ref{control-prin-q-e-q-}. This concludes the proof of Proposition \ref{prop-dynamic-q-1-2-alpha-beta}.
\end{proof}
\subsection{ Conclusion of the proof of Proposition \ref{pro-reduction- to-finit-dimensional}}
In this subsection, we will prove a proposition which implies Proposition \ref{pro-reduction- to-finit-dimensional} directly. More precisely, this is our statement:
\begin{proposition}\label{control-q(s)-V-A-s-1-2}
There exists $A_7\geq 1$ such that for all $ A \geq A_7$, there exists $s_7(A)\geq 1$ such that for all $s_0 \geq s_7(A)$, we have the following properties: If the following conditions hold:
\begin{itemize}
\item[$a)$] $(q_1,q_2)(s_0) = (\phi_1,\phi_2) $ with $(d_0,d_1) \in \mathcal{D}_{A,s_0}$,
\item[$b)$] For all $s \in [s_0,s_1]$ we have $(q_1,q_2)(s) \in V_A(s)$.
\end{itemize}
Then for all $s \in [s_0,s_1]$, we have
\begin{eqnarray}
\forall i,j \in \{1, \cdots, n\}, \quad |q_{1,i,j}(s)| & \leq & \frac{A^2 \ln s}{2 s^2}, \label{conq_1-2} \\
\left\| \frac{q_{1,-}(y,s)}{1 + |y|^3}\right\|_{L^{\infty}} &\leq & \frac{A}{2 s^{2}}, \quad \|q_{1,e}(s)\|_{L^{\infty}} \leq \frac{A^2}{2 \sqrt s}, \label{conq-q-1--and-e}\\
\left\| \frac{q_{2,-}(y,s)}{1 + |y|^3}\right\|_{L^{\infty}} &\leq & \frac{A^2}{2 s^{\frac{p_1 + 5}{2}}},\quad \|q_{2,e}(s)\|_{L^{\infty}} \leq \frac{A^3}{2 s^{\frac{p_1 + 2}{2}}}. \label{conq-q-2--and-e}
\end{eqnarray}
where $\mathcal{D}_{A,s_0}$ is introduced in Lemma \ref{lemma-control-initial-data} and $(\phi_1,\phi_2)$ is defined as in Definition \ref{initial-data-profile-complex}.
\end{proposition}
\begin{proof}
The proof relies on Proposition \ref{prop-dynamic-q-1-2-alpha-beta} and the details are similar to Proposition 4.7 of Merle and Zaag \cite{MZdm97}. For that reason, we only give a short proof of \eqref{conq_1-2}. Since $\left( \tau^2 q_{1,j,k}(\tau) \right)' = \tau^2 \left( q_{1,j,k}'(\tau) + \frac{2}{\tau} q_{1,j,k}(\tau) \right)$, we use \eqref{ODE-q-1-2} to deduce that
$$ \left| \int_{s_0}^s \left(\tau^2 q_{1,j,k} (\tau)\right)' d\tau \right| \leq CA ( \ln (s) - \ln (s_0)), $$
which yields
$$ |q_{1,j,k} (s)| \leq C A s^{-2}\ln s \leq \frac{A^2 \ln s}{2 s^2 },$$
provided that $A \geq A_7$ and $s_0 \geq s_7 (A)$ are large enough. Then, \eqref{conq_1-2} follows.
\end{proof}
We now give the conclusion of the proof of Proposition \ref{pro-reduction- to-finit-dimensional}:
\begin{proof}
From Proposition \ref{control-q(s)-V-A-s-1-2}, if $(q_1,q_2)(s_1) \in \partial V_A(s_1)$ then:
\begin{equation}\label{q-1-0-j-k-in-partial-hat-V-A-s-1}
\left(q_{1,0}, (q_{1,j})_{1 \leq j\leq n}, q_{2,0},(q_{2,j})_{1 \leq j\leq n},(q_{2,j,k})_{1 \leq j,k \leq n}\right)(s_1) \in \partial \hat V_A (s_1).
\end{equation}
This concludes item $(i)$ of Proposition \ref{pro-reduction- to-finit-dimensional}.
\noindent
\textit{The proof of item $(ii)$ of Proposition \ref{pro-reduction- to-finit-dimensional}.} Thanks to \eqref{q-1-0-j-k-in-partial-hat-V-A-s-1}, we distinguish the following two cases:
+ The first case: There exist $j_0 \in \{ 1,...,n\}$ and $\epsilon_0 \in \{-1, 1\}$ such that either $q_{1,0} (s_1) = \epsilon_0 \frac{A}{s_1^2}$ or $ q_{1,j_0}(s_1) = \epsilon_0 \frac{A}{s_1^2}$ or $q_{2,0}(s_1) = \epsilon_0\frac{A^2}{s_1^{p_1+ 2}}$ or $q_{2,j_0} (s_1) = \epsilon_0 \frac{A^2}{s_1^{p_1 + 2}}$. Without loss of generality, we can suppose that $ q_{1,0}(s_1) = \epsilon_0 \frac{A}{s_1^2} $ (the other cases are similar). Then, by using \eqref{ODE-q-1-0-1}, we can prove that the sign of $q_{1,0}'(s_1)$ is opposite to the sign of $\left( \epsilon_0 \frac{A}{s^2}\right)'(s_1)$. In other words,
$$ \epsilon_0 \left( q_{1,0} - \epsilon_0 \frac{A}{s^{2}} \right)'(s_1) > 0. $$
+ The second case: There exist $j_0, k_0 \leq n$ and $\epsilon_0 \in \{-1,1\}$ such that $q_{2,j_0,k_0} (s_1) = \epsilon_0 \frac{A^2}{s_1^{p_1+2}}$. By using \eqref{ODE-Phi-2}, we can prove that
$$ \epsilon_0 \left( q_{2,j_0,k_0} - \epsilon_0 \frac{A^2}{s^{p_1 + 2}} \right)'(s_1) > 0. $$
Finally, we deduce that there exists $\delta_0 > 0$ such that for all $\delta \in (0, \delta_0)$ we have
$$\left(q_{1,0}, (q_{1,j})_{1 \leq j\leq n}, q_{2,0},(q_{2,j})_{1 \leq j\leq n},(q_{2,j,k})_{1 \leq j,k \leq n}\right)(s_1 + \delta ) \notin \hat V_A (s_1 + \delta ). $$
if $A \geq A_3$ and $s_0 \geq s_3(A)$ are large enough. Then, item $(ii)$ of the Proposition follows. Hence, we also derive the conclusion of Proposition \ref{pro-reduction- to-finit-dimensional}.
\end{proof}
\subsection{Other Notions of Fairness}
There are several other popular notions of fairness and efficiency. We list them here:
\begin{description}
\item[\text{MAX-USW}] An allocation $X$ is said to maximize {\em utilitarian social welfare} if it maximizes $\sum_{i \in N} v_i(X_i)$.
\item[\text{MNW}] An allocation $X$ is said to maximize {\em Nash social welfare} if it maximizes $\prod_{i \in N} v_i(X_i)$.
\item[\text{MMS}] An agent's {\em maxmin share} is defined as the utility they would obtain if they divided the goods into $n$ bundles as favorably as possible and then received the worst of these bundles. More formally,
\begin{align*}
\text{MMS}_i = \max_{X = (X_1, X_2, \dots, X_n)} \min_{j \in [n]} v_i(X_j)
\end{align*}
An allocation is $c\text{MMS}$ if it guarantees every agent $i \in N$ utility of at least $c \text{MMS}_i$.
\item[Min-squared] A \text{MAX-USW}{} allocation $X$ is min-squared if it minimizes $\sum_{i \in N} v_i(X_i)^2$ among all the allocations which are \text{MAX-USW}.
\item[Envyfreeness] An allocation $X$ is said to be envy free if for all $i, j \in N$, $v_i(X_i) \ge v_i(X_j)$ i.e. no agent values another agent's bundle over their own. However, this is not always achievable --- consider a problem instance with two agents and one good. A few relaxations have been proposed and studied in the literature:
\begin{inparaenum}[(a)]
\item {\em EF1}: $X$ is EF1 if for every pair of agents $i, j \in N$, there is always a good $g \in X_j$ such that $v_i(X_i) \ge v_i(X_j - g)$.
\item {\em EFX}: $X$ is EFX if for every pair of agents $i, j \in N$, for any $g \in X_j$, we have $v_i(X_i) \ge v_i(X_j - g)$.
\item {\em Stochastic envyfreeness}: A randomized allocation $X$ is stochastic envy free if for all pairs of agents $i, j \in N$ and every value $t$, we have $\Pr[v_i(X_i) \ge t] \ge \Pr[v_i(X_j) \ge t]$.
\item {\em Ex-ante envyfreeness}: A randomized allocation $X$ is ex-ante envy free if for all pairs of agents $i, j \in N$, we have $\mathbb{E}[v_i(X_i)] \ge \mathbb{E}[v_i(X_j)]$. EFX and stochastic envyfreeness are stronger notions than EF1 and ex-ante envyfreeness respectively.
\end{inparaenum}
\item[Proportionality] An allocation $X$ is proportional if for all agents $i \in N$, $v_i(X_i) \ge \frac{v_i(G)}{n}$. Like envyfreeness, proportionality is also impossible to guarantee: again, consider the instance with two agents and one good. There have been two prominent relaxations of proportionality proposed in the literature:
\begin{inparaenum}[(a)]
\item {\em PROP1}: $X$ is PROP1 if for every agent $i \in N$, there exists a good $g \in G$ such that $v_i(X_i+ g) \ge \frac{v_i(G)}{n}$.
\item {\em Ex-ante proportionality}: A randomized allocation $X$ is said to be ex-ante proportional if for all agents $i \in N$, $\mathbb{E}[v_i(X_i)] \ge \frac{v_i(G)}{n}$.
\end{inparaenum}
\item[Lorenz dominating] An allocation $X$ is said to be Lorenz-dominating if for all other allocations $Y$ and every $k \in [n]$, we have $\sum_{j = 1}^k u^X_j \ge \sum_{j = 1}^k u^Y_j$ where $\vec u^X$ and $\vec u^Y$ are the utility vectors of $X$ and $Y$ respectively, sorted in non-decreasing order.
\end{description}
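As a concrete illustration of the last definition (this snippet is ours, not part of any cited construction), checking whether one utility vector Lorenz-dominates another amounts to comparing all prefix sums of the two sorted vectors:

```python
def lorenz_dominates(u, v):
    """Return True iff the sorted vector of u Lorenz-dominates that of v,
    i.e. every prefix sum of sorted(u) is at least that of sorted(v)."""
    su, sv = sorted(u), sorted(v)
    prefix_u = prefix_v = 0
    for a, b in zip(su, sv):
        prefix_u += a
        prefix_v += b
        if prefix_u < prefix_v:
            return False
    return True
```

For instance, the utility vector $(2,2,2)$ Lorenz-dominates $(1,2,3)$, but not conversely.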
Apart from fairness and efficiency, an important property of algorithms which compute fair allocations is {\em strategyproofness} i.e. no agent can gain by misreporting their valuation function.
\citet{Babaioff2021Dichotomous} use the concept of Lorenz dominating allocations but use auxiliary goods to create a priority ordering between the agents.
Given a priority ordering $\pi: N \to [n]$, they add $n$ goods $A = \{a_1, a_2, \dots, a_n\}$ to the problem instance, where each agent $i$ values $a_i$ at $\frac{\pi(i)}{n^2}$ and the other goods in $A$ at $0$.
More formally, they create a new fair allocation problem instance with the set of agents $N$ and the set of goods $G \cup A$, where each agent $i$ has the following valuation function $v'_i$:
\begin{align*}
v'_i(S) = v_i(S \cap G) +\frac{\pi(i)}{n^2}|S \cap \{a_i\}|
\end{align*}
The goods in $A$ do not exist in reality and serve only to break ties between agents. After computing a Lorenz dominating allocation for this artificial problem instance, they remove the goods from $A$ and allocate the goods from $G$ to the agents according to the computed allocation. They refer to this allocation as a Lorenz dominating allocation with respect to the priority order $\pi$.
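The augmentation step above can be sketched in a few lines of Python (an illustrative sketch of the construction, not the authors' code; all names are ours):

```python
# Build the augmented instance with priority tie-breaking goods: each agent i
# receives one auxiliary good a_i worth pi(i)/n^2 to them and 0 to everyone else.

def augment_with_priority_goods(agents, goods, valuations, pi):
    """valuations: dict mapping agent -> set function v_i; pi: agent -> rank in [1..n]."""
    n = len(agents)
    aux = {i: f"a_{i}" for i in agents}        # one auxiliary good per agent
    new_goods = set(goods) | set(aux.values())

    def make_v(i):                             # factory so each closure binds its own i
        def v_prime(S):
            real = valuations[i](S & set(goods))       # value of the real goods only
            bonus = pi[i] / n ** 2 if aux[i] in S else 0.0
            return real + bonus
        return v_prime

    return new_goods, {i: make_v(i) for i in agents}
```

Once a Lorenz dominating allocation is computed for the augmented instance, the auxiliary goods are simply discarded.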
\citet{Babaioff2021Dichotomous} show that Lorenz dominating allocations with respect to any ordering $\pi$ are \text{MNW}, \text{MAX-USW}, Min-squared, leximin, EFX and $(1/2)\text{MMS}$. They also show that these allocations can be computed in polynomial time.
\begin{theorem}[\citet{Babaioff2021Dichotomous}]
When agents have MRF valuations, Lorenz dominating allocations with respect to any ordering $\pi$ are \text{MNW}, \text{MAX-USW}, Min-squared, leximin, EFX and $(1/2)\text{MMS}$. They can also be computed in polynomial time. When agents have additive valuations, these Lorenz dominating allocations are $\text{MMS}$ as well.
\end{theorem}
They also propose a mechanism which chooses the priority order $\pi$ uniformly at random. Apart from the guarantees offered by the above theorem, they show that the mechanism is strategyproof and results in a random allocation which is stochastically envy-free and ex-ante proportional. Their mechanism is called randomized prioritized egalitarian (RPE) and works as follows:
\begin{enumerate}[(a)]
\item Choose a priority order $\pi$ uniformly at random
\item Ask each agent to report their preferences. If an agent's reported preferences are not an MRF, assume their valuation for every bundle is $0$.
\item Compute a Lorenz dominating allocation for the reported preferences with respect to the priority order $\pi$.
\end{enumerate}
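The control flow of the three steps above is straightforward; here is a hedged Python sketch. The subroutines \texttt{lorenz\_dominating} (step (c)) and \texttt{is\_mrf} are hypothetical placeholders supplied by the caller --- computing the Lorenz dominating allocation itself is the non-trivial part and is not shown.

```python
import random

# Sketch of the RPE mechanism's three steps. lorenz_dominating and is_mrf
# are placeholder callables (assumptions for illustration, not real APIs).

def rpe(agents, goods, reported, lorenz_dominating, is_mrf):
    # (a) draw a priority order uniformly at random
    pi = list(agents)
    random.shuffle(pi)
    order = {i: rank + 1 for rank, i in enumerate(pi)}
    # (b) replace any non-MRF report with the zero valuation
    vals = {i: reported[i] if is_mrf(reported[i]) else (lambda S: 0)
            for i in agents}
    # (c) compute a Lorenz dominating allocation w.r.t. the priority order
    return lorenz_dominating(agents, goods, vals, order)
```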
\begin{theorem}[\citet{Babaioff2021Dichotomous}]
The RPE mechanism is strategyproof and, when agents report their preferences truthfully, results in an allocation which is stochastically envy-free and ex-ante proportional.
\end{theorem}
To add to this cornucopia of positive results, we show that Lorenz dominating allocations with respect to any priority ordering $\pi$ are PROP1 as well. Our proof uses the following simple observation.
\begin{obs}\label{obs:unallocated-marginal}
Suppose agents have MRF valuations and let $X$ be a \text{MAX-USW}{} allocation. Then, for any agent $i \in N$, we have $v_i(\bigcup_{j \in N} X_j) = v_i(G)$.
\end{obs}
\begin{proof}
Let $G' = \bigcup_{j \in N} X_j$.
Since $X$ is utility maximizing, we must have $\Delta_i(X_i, g) = 0$ for all $g \in X_0 = G \setminus G'$: if $\Delta_i(X_i, g) = 1$ for some $g \in X_0$, we could move this good from $X_0$ to $X_i$, increasing $i$'s utility and therefore the utilitarian social welfare.
Without loss of generality, let $X_0 = \{g_1, g_2, \dots, g_k\}$ and let $Z_j = \{g_1, g_2, \dots, g_j\}$. We have
\begin{align*}
v_i(G) = v_i(G') + \sum_{j = 1}^k \Delta_i(G' \cup Z_{j-1}, g_j) \le v_i(G') + \sum_{j = 1}^k \Delta_i(X_i, g_j) = v_i(G')
\end{align*}
Since marginal gains cannot be negative and $G' \subseteq G$, we have $v_i(G') \le v_i(G)$ as well. Combining the two inequalities gives us our result.
\end{proof}
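The observation is easy to spot-check by brute force on a toy instance. The snippet below (our sanity check, using binary additive valuations --- a special case of MRFs) enumerates all allocations of three goods to two agents, keeps the \text{MAX-USW}{} ones, and confirms that $v_i(\bigcup_j X_j) = v_i(G)$ for each agent:

```python
from itertools import product

# Binary additive valuations: v_i(S) = |intersection of S and D_i|, where D_i
# is the set of goods agent i values at 1. Owner 0 encodes "unallocated" (X_0).
goods = ["g1", "g2", "g3"]
desired = {1: {"g1", "g2"}, 2: {"g2", "g3"}}
v = lambda i, S: len(set(S) & desired[i])

best_usw, best = -1, []
for owners in product([0, 1, 2], repeat=len(goods)):
    X = {i: {g for g, o in zip(goods, owners) if o == i} for i in (1, 2)}
    usw = v(1, X[1]) + v(2, X[2])
    if usw > best_usw:
        best_usw, best = usw, [X]
    elif usw == best_usw:
        best.append(X)

# every MAX-USW allocation satisfies the observation
for X in best:
    allocated = X[1] | X[2]
    assert all(v(i, allocated) == v(i, goods) for i in (1, 2))
```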
\begin{prop}
When agents have MRF valuations, Lorenz dominating allocations with respect to any priority ordering $\pi$ are PROP1.
\end{prop}
\begin{proof}
Let $X$ be a non-redundant Lorenz dominating allocation. Consider the agent $i$.
If $i$ does not envy any other agent, we have $v_i(X_i) \ge v_i(X_j)$ for all $j \in N$. Summing up the inequality for all $j \in N$, we get
\begin{align*}
n v_i(X_i) &\ge \sum_{j \in N} v_i(X_j) \ge v_i(\bigcup_{j \in N} X_j) = v_i(G)
\end{align*}
The second inequality results from the repeated application of \cref{obs:union-upperbound} and the final equality is implied by \cref{obs:unallocated-marginal}.
If $i$ envies some other agent (say $j$), then $|X_j| \ge |X_i| + 1$: if $|X_j| \le |X_i|$, then from \cref{obs:size-upperbound}, $v_i(X_j) \le |X_i| = v_i(X_i)$, so $i$ does not envy $j$. Using similar arguments to those of \cref{obs:wlog-nonredundant} and \cref{obs:nonredundance-exchange}, we can show that there exists a good $g^*$ such that $\Delta_i(X_i, g^*) = 1$.
Since the allocation is EFX, we must have $v_i(X_j) \le v_i(X_i) + 1$ for all $j \in N$: if $v_i(X_j) \ge v_i(X_i) + 2$, then dropping any good $g$ from $X_j$ reduces its value by at most $1$ due to binary marginal gains, i.e. $v_i(X_j - g) \ge v_i(X_j) - 1 \ge v_i(X_i) + 1$, a contradiction since $i$ would still envy $j$ after dropping a good.
Combining the above two paragraphs, we get that $v_i(X_i + g^*) \ge v_i(X_j)$ for all $j \in N$. Summing up this inequality for all $j \in N$, we get
\begin{align*}
n v_i(X_i + g^{*}) &\ge \sum_{j \in N} v_i(X_j) \ge v_i(\bigcup_{j \in N} X_j) = v_i(G)
\end{align*}
The second inequality results from the repeated application of \cref{obs:union-upperbound} and the final equality is implied by \cref{obs:unallocated-marginal}. This completes the proof.
\end{proof}
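The proposition can likewise be spot-checked mechanically. In the sketch below (ours; binary additive valuations, a single tiny instance) we select the allocation whose sorted prefix-sum vector is lexicographically maximal, verify that it Lorenz dominates every other allocation componentwise, and then verify PROP1:

```python
from itertools import product

# Tiny instance: agent 1 wants only g1, agent 2 wants everything.
goods = ["g1", "g2", "g3"]
desired = {1: {"g1"}, 2: {"g1", "g2", "g3"}}
v = lambda i, S: len(set(S) & desired[i])
n = len(desired)

def sorted_prefix(X):
    u = sorted(v(i, X[i]) for i in desired)
    return [sum(u[:k + 1]) for k in range(n)]

allocs = [{i: {g for g, o in zip(goods, owners) if o == i} for i in desired}
          for owners in product([1, 2], repeat=len(goods))]
best = max(allocs, key=sorted_prefix)

# best Lorenz dominates every allocation (componentwise prefix sums)...
assert all(a >= b for Y in allocs
           for a, b in zip(sorted_prefix(best), sorted_prefix(Y)))
# ...and is PROP1: a single extra good lifts each agent to v_i(G)/n
for i in desired:
    assert any(v(i, best[i] | {g}) >= v(i, goods) / n for g in goods)
```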
We also show that Lorenz dominating allocations with respect to the ordering $\pi$ and prioritized leximin allocations with respect to the ordering $\pi$ are equivalent. This implies that our algorithm also possesses all the fairness properties discussed above and it can be used as a subroutine in RPE mechanism.
\todo{Continue writing from here}
\if 0
We denote the set of all agents which are present in at least one transfer path starting with $i$ under the allocation $X$ as $D_i(X)$.
For a set of agents $S$, $D_S(X) = \bigcup_{i \in S}D_i(X)$.
Note that the fake agent $0$ may appear in a transfer path, and hence in $D_i(X)$, for some $i$ and $X$. A transfer path involving agent $0$ is one where an agent takes an item from the unallocated pile $X_0$. These transfer paths play a crucial role in our analysis. This is mainly because, as we show in \cref{prop:demand-set-optimal}, if $X$ is non-redundant, no allocation has a better USW for the agents in $D_i(X)$ than $X$.
We prove two useful Lemmata.
\begin{lemma}\label{lem:demand-set-Pareto}
Let $X$ be a non-redundant allocation. When agents have MRF valuations, if there exists an allocation that has a higher USW for a particular set of agents $S \subseteq N \cup \{0\}$, then there exists a non-redundant allocation which Pareto dominates $X$ for the set of agents $S$.
\end{lemma}
\begin{proof}
Out of all the allocations which have a higher USW for a particular set of agents $S$, let $Y$ be a non-redundant allocation which maximizes $\sum_{j \in N \cup \{0\}}|X_j \cap Y_j|$.
We argue that $Y$ Pareto dominates $X$ for the set of agents $S$.
Assume for contradiction that $Y$ does not Pareto dominate $X$ for the agents in $S$.
Then we must have some agent $i \in S$ such that $|X_i| > |Y_i|$.
Using the matroid exchange property, there must exist some good $g \in X_i \setminus Y_i$ such that $v_i(Y_i + g) > v_i(Y_i)$. This good $g$ is not in $Y_0$ by assumption since the allocation is non-redundant. Therefore this good belongs to another agent's allocation, say $Y_{i'}$.
If $|X_{i'}| < |Y_{i'}|$, then we can create a new allocation $Z$ as follows
\begin{align*}
Z_j =
\begin{cases}
Y_j + g & j = i \\
Y_j - g & j = i' \\
Y_j & \text{otherwise.}
\end{cases}
\end{align*}
It is easy to see that $Z$ has a weakly greater USW for the agents in $S$ than $Y$ and $\sum_{j \in N \cup \{0\}} |X_j \cap Z_j| > \sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$. This contradicts our initial assumption on $Y$.
If $|X_{i'}| \ge |Y_{i'}|$, then again by the matroid exchange property there must be a good $g' \in X_{i'} \setminus (Y_{i'} - g) = X_{i'} \setminus Y_{i'}$ such that $v_{i'}(Y_{i'} + g' - g) > v_{i'}(Y_{i'} - g)$. Assume this good is in $Y_{i''}$ $(i'' \ne i')$.
By continuing this process with $i''$, we either end up in a cycle (for example, if $i'' = i$) or we have an acyclic sequence of agents $i \rightarrow i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_r$ such that $|Y_{i_r}| > |X_{i_r}|$.
If we obtain an acyclic sequence of agents $i \rightarrow i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_r$ such that $|Y_{i_r}| > |X_{i_r}|$, this is a transfer path for $i$ under the allocation $Y$ such that in every transfer, we can give an agent a good they had in $X$ and take away a good they did not have in $X$. This is true by construction. Let $Z$ be the allocation that results from making the above described transfer along this path. The allocation $Z$ has a weakly greater USW for the set of agents $S$ than $Y$ and we have $\sum_{j \in N \cup \{0\}} |X_j \cap Z_j| > \sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$.
Similarly, if we have a cycle $i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_r \rightarrow i_1$, there exists a cyclic transfer such that at every transfer, we can give an agent a good they had in $X$ and take away a good they did not have in $X$. It is easy to see that the resulting allocation $Z$ provides each agent with the same utility as $Y$. This transfer also results in an allocation $Z$ such that $\sum_{j \in N \cup \{0\}} |X_j \cap Z_j| > \sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$.
\end{proof}
\begin{lemma}\label{lem:transfer}
Consider two non-redundant allocations $X$ and $Y$. If for some $i \in N \cup \{0\}$, $|X_i| \le |Y_i|$ and $X_i \nsubseteq Y_i$, then for every good $g \in X_i \setminus Y_i$, there exists a good $g' \in Y_i \setminus X_i$ such that $v_i(Y_i - g' + g) = v_i(Y_i)$ and $v_i(X_i + g' - g) = v_i(X_i)$.
\end{lemma}
\begin{proof}
This proof has been adapted from \citet{brualdi1969bases}.
We switch to matroid speak for this proof. The matroid we refer to here is the one defined by the rank function $v_i$. We use the following properties of circuits for the proof.
\begin{lemma}[\citet{brualdi1969bases}]\label{lem:circuit-unique}
Given a matroid $(E, \cal I)$, if some set $S \in \cal I$ and there exists an element $e \in E$ such that $S + e \notin \cal I$, then $S + e$ contains a unique circuit.
\end{lemma}
\begin{lemma}[\citet{asche1966minimal}]\label{lem:circuit-intersection}
If $C_1$ and $C_2$ are circuits with $a \in C_1 \cap C_2$ and $b \in C_1 \setminus C_2$, there exists a circuit $C_3$ such that $b \in C_3 \subseteq (C_1 \cup C_2) - a$
\end{lemma}
In matroid terms, the goal of this proof is to find sets $Y_i + g - g'$ and $X_i - g + g'$ which are independent.
If $Y_i + g$ is independent, we are done. To find $g'$, we apply the exchange property --- there must be an element $g' \in (Y_i + g) \setminus X_i$ such that $X_i + g'$ is independent. If $Y_i + g$ is not independent, it must contain a unique circuit $C$ (\cref{lem:circuit-unique}). We must have $C \cap (Y_i \setminus X_i) \ne \emptyset$. If there exists $g'' \in C \cap (Y_i \setminus X_i)$ such that $X_i + g''$ is independent, we are done.
Otherwise, for all $g'' \in C \cap (Y_i \setminus X_i)$, $X_i + g''$ must contain a unique circuit (\cref{lem:circuit-unique}). If none of these circuits contain $g$, then we can repeatedly apply \cref{lem:circuit-intersection} to create a circuit which contains $g$ but is contained in $X_i$ --- a contradiction. Therefore, there must be a $g' \in C \cap (Y_i \setminus X_i)$ such that the unique circuit contained in $X_i \cup g'$ contains $g$. From this, we have that $Y_i - g' + g$ and $X_i -g + g'$ are independent sets. This completes the proof.
\end{proof}
\begin{prop}\label{prop:demand-set-optimal}
Let $X$ be any non-redundant allocation. There exists no allocation $Y$ that achieves a higher USW for the set of agents $D_S(X)$, for any $S \subseteq N \cup \{0\}$.
\end{prop}
\begin{proof}
Assume for contradiction there is a non-redundant allocation $Y$ with higher USW for the set of agents $D_S(X)$ for some $S \subseteq N \cup \{0\}$. From \cref{lem:demand-set-Pareto}, we can assume without loss of generality that $Y$ Pareto dominates $X$ for the set of agents $D_S(X)$. Further assume that, of all the allocations which Pareto dominate $X$ for the set of agents $D_S(X)$, $Y$ maximizes $\sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$.
Let $X_S = \bigcup_{j \in D_S(X)} X_j$ be the set of goods allocated to agents in $D_S(X)$.
If $X_j \subseteq Y_j$ for all $j \in D_S(X)$, then this proof is trivial. There must be some agent $i'$ such that $Y_{i'} \setminus X_{i'} \ne \emptyset$. Let $g$ be an arbitrary good in $Y_{i'} \setminus X_{i'}$. $g$ is not in $X_S$ since all the agents in $D_S(X)$ received supersets of their allocations in $X$. Let the agent who has $g$ in $X$ be $i''$. There must be a transfer path from some agent $i \in S$ to $i'$ by definition since $i'$ in $D_S(X)$. This transfer path can be extended to include an edge from $i'$ to $i''$ since whatever good $i'$ gives away in the initial path, it can take $g$ from $i''$ and retain its original valuation. This means there is a transfer path from $i$ to $i''$ which is a contradiction since $i'' \notin D_S(X)$.
We now move to the case where $X_j \nsubseteq Y_j$ for some $j \in D_S(X)$. Since $Y$ Pareto dominates $X$ for the set of agents $D_S(X)$, there must exist an agent $i_1 \in D_S(X)$ such that $v_{i_1}(Y_{i_1}) > v_{i_1}(X_{i_1})$. By the matroid exchange property, there must exist a good $g \in Y_{i_1} \setminus X_{i_1}$ such that $v_{i_1}(X_{i_1} + g) > v_{i_1}(X_{i_1})$. Let this good belong to some other agent $i_2$. By the same logic used in the previous paragraph, $i_2 \in D_S(X)$. We create an allocation $Z$ as a copy of $Y$ but move $g$ from $i_1$ to $i_2$. If $v_{i_2}(Z_{i_2}) > v_{i_2}(Y_{i_2})$, then we are done because we have created a Pareto dominant allocation $Z$ such that $\sum_{j \in N \cup \{0\}} |X_j \cap Z_j| > \sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$. Otherwise there must be some good $g' \in Y_{i_2}$ that $i_2$ can give away such that $v_{i_2} (Y_{i_2} + g - g') = v_{i_2}(Y_{i_2})$ and $v_{i_2}(X_{i_2} - g + g') = v_{i_2}(X_{i_2})$ (from \cref{lem:transfer}). This must belong to some other agent $i_3$. We update $Z$ by moving $g'$ from $i_2$ to $i_3$. We continue this process till we have a sequence of agents (which may repeat) $i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_r$. Note that all of these agents are a part of $D_i(X)$ since by \cref{lem:transfer} and our construction. To show this it suffices to show that there is a transfer path from $i_1$ to $i_r$ since if there is a path that can increase $i_1$'s utility, this path can be extended to increase $i$'s utility for some $i$ in $S$ since $i' \in D_S(X)$. It is easy to see that giving each $i_p$ the same good in $i_{p+1}$ that was transferred from $Z_{i_p}$ to $Z_{i_{p+1}}$ for all $p \in [r-1]$ is a valid transfer path from $i_1$ to $i_r$.
We stop the sequence if the allocation after the transfer is Pareto dominant for the set of agents $D_i(X)$. This would create a Pareto dominant allocation $Z$ with $\sum_{j \in N \cup \{0\}} |X_j \cap Z_j| > \sum_{j \in N \cup \{0\}} |X_j \cap Y_j|$ --- a contradiction. Note that the sequence has to be finite because it will end when all the agents in $D_i(X)$ have the same bundles as they did in $X$. However note that this is not possible since there are goods that agents in $D_i(X)$ have in $Y$ that they do not have in $X$. We do not transfer goods to agents outside $D_i(X)$. So this would mean the goods just disappeared which is a contradiction.
\end{proof}
\fi
\if 0
\subsection{Correctness}
\begin{prop}\label{prop:transfer-path-concat}
Let $X = X_0,\dots,X_n$ be some allocation. If there is a transfer path from $j$ to $0$, and $j\in D_i(X)$, then there is a transfer path from $i$ to $0$
\end{prop}
\begin{proof}
Let us first consider the case where the two transfer paths do not share any intermediate players. Since $j\in D_i(X)$, there is some transfer path from $i$ to $j$, where $j$ loses some item $g$. Since there is a transfer path from $j$ to $0$, then there is a sequence of transfers ending with $0$ such that $j$ gains an item $g'$ for which $\Delta_j(X_j,g') = 1$. Now, since $v_j$ is an MRF, $1 =\Delta_j(X_j,g') \le \Delta_j(X_j\setminus \{g\},g')$, thus $\Delta_j(X_j\setminus\{g\},g') = 1$. In other words, if we concatenate the transfer path from $i$ to $j$, and the transfer path from $j$ to $0$, we obtain a valid transfer path from $i$ to $0$.
Let us next consider the case where there is a single agent $k\ne j,i,0$ who appears in both the transfer path from $i$ to $j$, and from $j$ to $0$.
Let $(i,p_1,\dots,p_r,k,p_{r+1},\dots,p_s,j)$ be the transfer path from $i$ to $j$, and let $(j,q_1,\dots,q_t,k,q_{t+1},\dots,q_u,0)$ be the transfer path from $j$ to $0$ (any of the sub-paths under this notation can be empty with no change to the proof).
Suppose that under the transfer path from $i$ to $j$, $k$ loses the item $c$, and under the transfer path from $j$ to $0$, $k$ loses the item $d$.
If $c = d$, then $(i,p_1,\dots,p_r,k,q_{t+1},\dots,q_u,0)$ is a valid transfer path from $i$ to $0$. If $c \ne d$ then we do the following:
\end{proof}
\begin{lemma}\label{lem:no-0-path-after-flag}
At every iteration after $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace for some agent $i$, $0 \notin D_i(X)$
\end{lemma}
\begin{proof}
First, $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace only when there is no path from $0$ to $i$, thus $0 \notin D_i(X)$ immediately following the step where $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace. We must show that it is impossible that such a path appears due to other agents' item transfers.
Let us consider the first iteration after $\texttt{flag}\xspace_i$ was set to \texttt{true}\xspace, where a path formed from $i$ to agent $0$.
Suppose that this is a path of length $1$, i.e. $0$ can now transfer some good $g$ to $i$ that increases $i$'s welfare. The good $g$ was not in $X_0$ before, thus it must be that agent $0$ appeared as an intermediate agent, received $g$ from some agent $j$, who then continued a transfer path ending again in agent $0$. However, this implies that the truncated transfer path $(i,j,\dots,0)$ was available before the current iteration, a contradiction to our assumption that the current iteration is the first where a path formed from $i$ to agent $0$.
The general case is similar. Let us call the newly formed transfer path $(i,p_1,\dots,p_r=0)$, where $g_1$ is received by $i$; $g_2$ is received by $p_1$ and so on, and $g_r$ is given by agent $0$ to $p_{r-1}$. At least one good $g_\ell$ was not in the possession of $p_\ell$ prior to the current iteration, and thus was transferred to $p_\ell$ via some transfer path from some player $j$ that ends at agent $0$. However, this means that $(i,p_1,\dots,p_\ell,j,\dots,0)$ is a valid transfer path that existed prior to the current iteration, a contradiction to our assumption.
\end{proof}
\begin{lemma}\label{lem:flagged-no-bundle-change}
At every iteration after $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace under Algorithm \ref{algo:yankee-swap}, the bundle of agent $i$ remains unchanged.
\end{lemma}
\begin{proof}
The only way that an agent's bundle can change is if they are a part of a transfer path that ends at $0$. Let us consider the first iteration where agent $i$'s bundle changes. If $i$'s bundle has changed, then this means that there is some player $j$ that initiated a transfer path including $i$, which terminates at $0$. However, this in particular means that player $i$ has a transfer path that ends at $0$ as well, a contradiction to Lemma \ref{lem:no-0-path-after-flag}.
\end{proof}
\begin{lemma}\label{lem:no-new-agents-D-i}
At every iteration after $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace, no new transfer paths starting at $i$ form.
\end{lemma}
\begin{proof}
Let $X = (X_0,X_1,\dots,X_n)$ be the allocation when $\texttt{flag}\xspace_i$ was set to \texttt{true}\xspace.
Let $j$ be the first agent that $i$ formed a new transfer path to during the run of Algorithm \ref{algo:yankee-swap}, after $\texttt{flag}\xspace_i$ was set to \texttt{true}\xspace. Let us call the allocation at that time $Y=(Y_0,Y_1,\dots,Y_n)$.
We first analyze the case where there is a direct path from $i$ to $j$, i.e. under $Y$ there is some good $g \in Y_j$ such that $\Delta_i(Y_i,g) = 1$. By Lemma \ref{lem:flagged-no-bundle-change}, $Y_i = X_i$, i.e. $\Delta_i(X_i,g) = 1$. In order to have player $j$ receive $g$, there must have been some transfer path ending at $0$ where $g$ was transferred to $j$; however, this means that $g$ belonged to some other agent $k$, i.e. a transfer path starting at $k$ and ending at $0$ exists, say $(k,p_1,\dots,p_r,0)$. However, this implies that the path $(i,k,p_1,\dots,p_r,0)$ existed in the previous round, a contradiction to Lemma \ref{lem:no-0-path-after-flag}
\end{proof}
\begin{lemma}\label{lem:size-upper-bound}
At every iteration of Algorithm \ref{algo:yankee-swap}, for every $i\in N$ and every agent $j$ in $D_i(X)$, $|X_j|\le |X_i|+1$.
\end{lemma}
\begin{proof}
Suppose for contradiction that some $j\in D_i(X)$ has an allocation such that $|X_j| \ge |X_i|+2$.
Let us consider the iteration of Algorithm \ref{algo:yankee-swap} where the size of $X_j$ grows from $|X_i|+1$ to $|X_i|+2$. The only way that agents' bundles grow is if they initiate a transfer path. Agent $i$'s welfare is lower than $j$'s; furthermore, $\texttt{flag}\xspace_i=\texttt{false}\xspace$: there is a transfer path from $j$ to $0$, and one from $i$ to $j$ and in particular, there is a transfer path from $i$ to $0$ as per Proposition \ref{prop:transfer-path-concat}. Thus, it cannot be that $j$ was selected to receive an item prior to $i$ at that iteration, since Algorithm \ref{algo:yankee-swap} always selects agents with smaller bundles to initiate transfers, and $|X_j|=|X_i|+1> |X_i|$.
Finally, it is impossible that $j$ joins $D_i(X)$ after $\texttt{flag}\xspace_i$ is set to \texttt{true}\xspace, as per Lemma \ref{lem:no-new-agents-D-i}.
\end{proof}
\begin{theorem}
\cref{algo:yankee-swap} terminates in a non-redundant leximin allocation.
\end{theorem}
\begin{proof}
Let $X = X_0,X_1,\dots,X_n$ be the output of Algorithm \ref{algo:yankee-swap}.
Assume for contradiction that it does not terminate in a leximin solution. Let $Y$ be a leximin solution that maximally intersects with $X$, i.e. one maximizing $\sum_{j \in N \cup \{0\}} |Y_j \cap X_j|$. Let $i$ be the agent with the lowest utility that $Y$ provides higher utility to, and let $u = v_i(X_i) = |X_i|$. Let $S$ be the set of all $j \in N$ with $|X_j| = u$.
We have, from Proposition \ref{prop:demand-set-optimal} that no allocation can provide a higher USW for $D_S(X)$. Therefore, we have $\sum_{j \in D_S(X)} v_j(Y_j) \le \sum_{j \in D_S(X)} v_j(X_j)$.
If $Y$ has the same number of agents with utility $u$ as $X$, then there exists an agent (say $i_1$) such that $u = |Y_{i_1}| < |X_{i_1}|$.
Using the matroid exchange property, there must be some good $g_1 \in X_{i_1} \setminus Y_{i_1}$ such that $\Delta_{i_1}(Y_{i_1},g_1) = 1$.
Let this good belong to another agent $i_2$. Similarly, if, $|Y_{i_2}| \le |X_{i_2}|$ there exists some good $g_2 \in X_{i_2} \setminus Y_{i_2}$ such that $\Delta_{i_2}(Y_{i_2}, g_2)=1$.
We continue this sequence until we reach an agent $i_r$ such that $|Y_{i_r}| > |X_{i_r}|$. Therefore, we have a sequence of agents $i_1,i_2,\dots,i_r$, such that $|Y_{i_r}| > |X_{i_r}|$. If $|Y_{i_r}| > u+1$, then we can transfer backwards along this path from $i_r$ to $i_1$, to create an allocation that lexicographically dominates $Y$, a contradiction.
If $|Y_{i_r}| \le u$, then from $|Y_{i_r}| > |X_{i_r}|$, we get that $i_{r}$ has utility $\le u -1$ and $Y$ provides higher utility to $i_r$ contradicting our choice of $i$. If $|Y_{i_r}| = u+1$, we can transfer along the path $i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_r$ such that at each transfer we give an agent a good they had in $X$ and remove a good they did not have in $X$. This transfer creates an allocation $Z$ with the same utility vector as $Y$ but has a higher intersection with $X$ i.e. $\sum_{j \in N \cup \{0\}} |Z_j \cap X_j| > \sum_{j \in N \cup \{0\}} |Y_j \cap X_j|$ --- again, a contradiction.
If $Y$ has fewer agents with utility $u$, then at least one of the agents in $S$ must have higher utility in $Y$ than in $X$. However since $\sum_{j \in D_S(X)} v_j(Y_j) \le \sum_{j \in D_S(X)} v_j(X_j)$, some agents in $D_S(X)$ must have lower utility in $Y$ than in $X$. Due to \cref{lem:size-upper-bound}, these agents must have a bundle of size $u+1$ in $X$ and a bundle of size $u$ in $Y$. If any of these agents have a bundle of size $\le u$ in $X$ and $\le u-1$ in $Y$, it is easy to see that $X$ lexicographically dominates $Y$ contradicting the assumption that $Y$ is leximin.
Therefore, for every additional unit of utility that an agent of size $u$ gains in $Y$, there exists an agent who does not have a size $u$ bundle in $X$ but does have a size $u$ bundle in $Y$. This means that there cannot be fewer agents with a bundle of size $u$ in $Y$. Therefore, $X$ is in fact leximin.
\end{proof}
\fi
\section{Warm-Up: Envy-Induced Transfers}\label{sec:EIT-simple}
\citet{benabbou2021MRF} propose the following algorithm for computing EF-1 and USW maximizing allocations under MRF valuations. First, compute a USW optimal allocation (using matroid intersection algorithms); next, while there exists any envy beyond one good between any two agents, transfer an item from the envied agent to the envious agent (the existence of an item that will increase the envious agent's utility in the envied agent's bundle is guaranteed since valuations are submodular).
This algorithm carries two benefits over \citet{Babaioff2021Dichotomous}'s method: it is both simpler to implement, and runs faster.
We begin by presenting a further simplification of \citeauthor{benabbou2021MRF}'s method: rather than transferring items only between envious agents, we transfer items whenever the gap in valuations is at least $2$. More formally, a \emph{valid transfer} between agents $i$ and $j$ is one where $j$ can get a positive marginal gain from one of the items in $i$'s bundle, and $i$'s utility exceeds $j$'s utility by more than $1$.
\begin{algorithm}
\caption{Computing an USW+EF-1 allocation for MRFs }
\label{algo:EIT-algorithm}
\begin{algorithmic}
\State $X = (X_1, \dots, X_n) \gets \texttt{ComputeCleanOptimalAllocation}(G,v_1,\dots,v_n)$
\State $\texttt{flag}\xspace \gets \texttt{true}\xspace$
\While{$\texttt{flag}\xspace =\texttt{true}\xspace$}
\State $\texttt{flag}\xspace \gets \texttt{false}\xspace$
\For{$j \in N$ sorted from highest utility to lowest utility agent}
\For{$i \in N$ sorted from lowest utility to highest utility agent}
\If{$\exists g \in X_j$ such that $\Delta_i(X_i,g) = 1$ and $v_j(X_j) > v_i(X_i) + 1$}
\State $X_i \gets X_i \cup \{g\}$
\State $X_j \gets X_j \setminus \{g\}$
\State $\texttt{flag}\xspace \gets \texttt{true}\xspace$
\EndIf
\EndFor
\EndFor
\EndWhile
\State \Return $(X_1,\dots,X_n)$
\end{algorithmic}
\end{algorithm}
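For concreteness, the transfer loop of the algorithm above might be rendered in Python as follows (our illustrative sketch, not the paper's code; the initial clean, USW-optimal allocation --- computed via matroid intersection --- is assumed as input):

```python
def envy_induced_transfers(X, value, marginal):
    """X: dict agent -> set of goods, assumed clean and USW optimal.
    value(i, S) = v_i(S); marginal(i, S, g) = Delta_i(S, g), in {0, 1}."""
    changed = True
    while changed:
        changed = False
        # richest agents donate first, poorest agents receive first
        donors = sorted(X, key=lambda j: value(j, X[j]), reverse=True)
        receivers = sorted(X, key=lambda i: value(i, X[i]))
        for j in donors:
            for i in receivers:
                if i == j or value(j, X[j]) <= value(i, X[i]) + 1:
                    continue
                g = next((g for g in X[j] if marginal(i, X[i], g) == 1), None)
                if g is not None:            # valid transfer: move g from j to i
                    X[j] = X[j] - {g}
                    X[i] = X[i] | {g}
                    changed = True
    return X
```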
\begin{theorem}\label{thm:EIT-simplified}
Algorithm \ref{algo:EIT-algorithm} outputs an EF-1 and USW optimal allocation.
\end{theorem}
\begin{proof}
Algorithm \ref{algo:EIT-algorithm} starts with an optimal item allocation. At every round, an agent loses an item (and therefore their utility decreases by $1$), and another gains an item that they have positive marginal benefit for (and therefore their utility increases by $1$). Thus, the overall welfare of the allocation is unchanged by the transfers.
Next, we observe that we continue transferring items from ``rich'' agents to ``poor'' agents as long as the difference in their utilities is more than $1$. Suppose that agent $i$ envies agent $j$ for more than one good, i.e. $v_i(X_i)+1 < v_i(X_j)$. The allocation is initially clean, and remains clean at every iteration, since \begin{inparaenum}[(a)]
\item we always give items to agents that gain from receiving them, and
\item when we take an item from an agent, their utility must decrease by $1$ (or else we did not start with a USW optimal allocation).
\end{inparaenum}
Since agent valuations have binary gains, $v_j(X_j) = |X_j|$, and $v_i(X_i) = |X_i|$. Furthermore, since agent valuations are submodular, $v_i(X_j) \le |X_j|$. Putting these together we have that
\begin{align*}
v_i(X_i)+1 < v_i(X_j) &\Rightarrow v_i(X_i)+1 < |X_j| = v_j(X_j)
\end{align*}
Thus, $v_j(X_j)>v_i(X_i)+ 1$. Furthermore, as shown by \citeauthor{benabbou2021MRF}, under MRF valuations, if $i$ envies $j$, there is some item $g$ in $j$'s bundle such that $\Delta_i(X_i,g) = 1$.
We conclude that Algorithm \ref{algo:EIT-algorithm} does not terminate as long as some agent envies another for more than one good.
Finally, Algorithm \ref{algo:EIT-algorithm} terminates in polynomial time due to a potential function argument similar to the one made in \cite{benabbou2021MRF}: if we transfer an item from $j$ to $i$ and $v_i(X_i) +1 < v_j(X_j)$, the potential function $\sum_{i = 1}^n v_i(X_i)^2$ strictly decreases.
\end{proof}
The proof of Theorem \ref{thm:EIT-simplified} does not make use of the fact that we first enact feasible item transfers from the most well-off agents to the least well-off agents. However, doing so does result in more efficient transfers.
\begin{example}
Consider a setting with three agents with binary additive utilities, and $6k$ identical items desired by all agents. We start with the optimal allocation assigning all items to agent 3. A naive item transfer protocol might first transfer items from agent 3 to agent 2 until both have $3k$ items (a total of $3k$ iterations), and then transfer $k$ items from agent 2 to agent 1, and another $k$ items from agent 3 to agent 1 (for $2k$ additional iterations). This results in a total of $5k$ transfers. However, following Algorithm \ref{algo:EIT-algorithm} will have agent 3 alternate item transfers between agents 1 and 2 until all agents have exactly $2k$ items, for a total of $4k$ transfers.
\end{example}
Transferring items between disparate agents, regardless of their envy, is also a good way of balancing non-egalitarian outcomes.
\begin{example}
Consider the case of two agents, where agent 1 only wants apples and agent 2 wants either apples or bananas. We have one apple and one banana. Assigning both the apple and the banana to agent 2 is a socially optimal outcome; furthermore, since agent 1 only wants the apple, this is an EF-1 outcome as well. Thus, \citeauthor{benabbou2021MRF}'s algorithm will terminate and output this allocation. However, Algorithm \ref{algo:EIT-algorithm} will transfer the apple from agent 2 to agent 1, resulting in a utility of 1 to both agents.
\end{example}
While Algorithm \ref{algo:EIT-algorithm} is somewhat simpler than \citeauthor{benabbou2021MRF} and \citeauthor{Babaioff2021Dichotomous}'s proposed methods, it still suffers from two major shortcomings. First, single item transfers may not achieve a leximin outcome (see Example \ref{ex:single-transfers-break}); secondly, it still requires one to first compute an optimal allocation, and then correct envy issues via item transfers.
\begin{example}\label{ex:single-transfers-break}
We have three agents, $1,2$ and $3$, and five items, four apples ($a$) and one banana ($b$). The current allocation is as follows:
$$X_1 = \emptyset; X_2 = \{a,b\}; X_3 = 3\times a.$$
In other words, agent 3 has three apples, agent 2 has an apple and a banana, and agent 1 has nothing. Agents have binary additive valuations, where agent 1 only wants bananas, agent 2 wants either apples or bananas, and agent 3 wants only apples.
The current allocation is efficient and EF-1: agent 1 is the only envious agent, and desires agent 2's banana. In addition, any single item transfer will not result in an EF-1 and efficient allocation: giving the banana to agent 1 makes agent 2 envy agent 3's abundance of apples. However, we can transfer items along a path: agent 3 gives one apple to agent 2, who in turn gives a banana to agent 1. The resulting allocation is leximin and envy-free.
\end{example}
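Example \ref{ex:single-transfers-break} is easy to verify mechanically; the following snippet (our check, using the binary additive valuations of the example) applies the path transfer and confirms the resulting allocation is envy-free with utilities $(1, 2, 2)$:

```python
# Goods: one banana and four apples; agent 1 wants bananas, agent 2 wants
# both kinds, agent 3 wants apples.
kind = {"b": "banana", "a1": "apple", "a2": "apple", "a3": "apple", "a4": "apple"}
wants = {1: {"banana"}, 2: {"apple", "banana"}, 3: {"apple"}}
v = lambda i, S: len({g for g in S if kind[g] in wants[i]})

X = {1: set(), 2: {"a1", "b"}, 3: {"a2", "a3", "a4"}}
# path transfer: agent 3 gives an apple to agent 2, who gives the banana to agent 1
X[3] -= {"a2"}; X[2] |= {"a2"}
X[2] -= {"b"};  X[1] |= {"b"}

# utilities are now (1, 2, 2) and no agent envies any other
assert [v(i, X[i]) for i in (1, 2, 3)] == [1, 2, 2]
assert all(v(i, X[i]) >= v(i, X[j]) for i in (1, 2, 3) for j in (1, 2, 3))
```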
\section{Introduction}\label{sec:intro}
Fair allocation of indivisible goods is an extremely popular problem in the EconCS community.
We would like to assign a set of indivisible \emph{items} to a set of \emph{agents} who express \emph{preferences} over the sets of items (or \emph{bundles}) they receive.
For example, consider the problem of assigning course slots to students \citep{Budish2011EF1,budish2017coursematch}. Each student $i$ has a preference over the set of classes they are assigned, in addition to other constraints such as scheduling conflicts and prerequisites.
We wish to identify an assignment of course slots to students that is both \emph{efficient} --- no student wants any additional available slots --- and \emph{fair} --- no student would prefer another student's assignment to their own.
Course allocation can thus be naturally cast as an instance of a fair allocation problem.
To this end, one might wish to implement some fair allocation mechanism from the literature.
However, the algorithms proposed in the fair division literature are becoming increasingly complex, which often precludes their consideration in actual applications. Consider for example the CourseMatch algorithm \citep{budish2017coursematch}, used to assign MBA students to classes at the Wharton School of Business. \citeauthor{budish2017coursematch} state that
``To find allocations, CourseMatch performs a massive parallel heuristic search that solves billions of mixed-integer programs to output an approximate competitive equilibrium in a fake-money economy for courses''. This framework has been applied to the course allocation system at the UPenn Wharton School of Business, which admits approximately $1700$ students to roughly $350$ courses. To understand the system, students are referred to nine instructional videos and a 12-page manual.
While this system may be appropriate for a specialized MBA program, it may not be as effective for university-wide application, especially in settings with non-expert end-users.
Spliddit \citep{goldman2015spliddit} is another application of fair allocation mechanisms to real-world instances; however, it too does not scale well, as its underlying mechanism solves a mixed-integer linear program to find a Nash-welfare maximizing allocation \citep{Caragiannis2016MNW}.
We propose a \emph{fast} and \emph{simple} fair allocation mechanism that offers strong fairness and efficiency guarantees for agents with \emph{matroid rank valuations}, also known as \emph{binary submodular valuations}.
Previous literature on fair allocation under binary submodular valuations \citep{Barman2021MRFMaxmin, benabbou2021MRF, Babaioff2021Dichotomous} uses complex algorithms for matroid intersection (or union) as a subroutine in their proposed algorithms. This complexity significantly hinders the real-world adoption of such methods.
Thus, our main goal in this work is to
\begin{displayquote}
\textit{develop a simple and fast algorithm to compute fair and efficient allocations under binary submodular valuations.}
\end{displayquote}
Binary submodular valuations naturally arise in course allocation for students \citep{benabbou2021MRF} and shift allocation for employees \citep{Barman2021MRFTruthful}.
If we assume that students simply want to take as many classes as they are allowed to, subject to scheduling constraints, then student preferences induce \emph{submodular} valuations \citep{benabbou2019group}.
Submodular valuations exhibit decreasing returns to scale: the larger the bundle agents already have, the less marginal gain they get from additional items.
Under binary submodular valuations, each agent values each good at either $1$ or $0$. Since these valuations correspond to the rank function of some matroid, they are commonly referred to as {\em matroid rank functions} (MRFs).
MRFs are highly structured, a structure that has been exploited in the optimization literature \citep{krause2014submodular,oxley2011matroids} and more recently, in fair allocation \citep{Babaioff2021Dichotomous,Barman2021MRFMaxmin,Barman2021MRFTruthful,benabbou2021MRF}.
Most notably, \citet{Babaioff2021Dichotomous} show that, when agents have MRF valuations, there exists a polynomial time algorithm to compute an allocation which is leximin, envy-free up to any good, and maximizes both the utilitarian social welfare and the Nash social welfare.
However, a runtime analysis of their algorithm places its runtime at roughly $O(n^6 m^{9/2})$ time (where $n$ and $m$ are the number of agents and goods respectively)
which significantly hinders scalability.
\subsection{Our Contribution and Techniques}\label{subsec:contrib}
Our main contribution is an algorithm (\cref{algo:yankee-swap} in \Cref{sec:yankee-swap}) which computes the same Lorenz dominating allocations as \citet{Babaioff2021Dichotomous} but runs in $O((n+m)(n + \tau) m^{2})$ time; $\tau$ is the time it takes to compute the valuation $v_i(S)$ for any bundle $S \subseteq G$ and agent $i \in N$. This is a significant speed up compared to \citeauthor{Babaioff2021Dichotomous}'s runtime of $O(n^6 m^{7/2} (m + \tau) \log{nm})$.
Our algorithm (known colloquially as {\em Yankee Swap}) is similar in spirit to the well known round robin algorithm. In the round robin algorithm, agents initially start with an empty bundle and proceed in rounds picking a good in each round one by one from the pool of unallocated goods; this is essentially the same as the draft mechanism used by major sports leagues to recruit new players.
Agents take sequential actions under our algorithm as well.
Unlike the round robin algorithm, agents have the power to steal goods from other agents if they do not like any good in the pool of unallocated goods. The agents who are stolen from then make up for their loss by taking a good from the pool of unallocated goods or by stealing a good from someone else.
This procedure results in {\em transfer paths} --- an agent steals a good from someone, who potentially steals a good from someone else and this goes on till someone takes a good from the pool of unallocated goods.
The main difference between our method and the colloquial Yankee Swap\footnote{Yankee swap is also known as ``Nasty Christmas'' or ``White Elephant''. See \url{https://youtu.be/19ulSNSRKyU} for a discussion.} is that we only allow an agent to steal a good when a transfer path exists.
This means that the utility of every agent on the transfer path remains the same (except for the utility of the agent that initiates it, whose utility increases by $1$).
\subsection{Related Work}\label{subsec:rel-work}
Binary valuation functions (otherwise called {\em dichotomous preferences}) have been studied in various contexts in the economics and computer science literature. More specifically, binary valuations have been studied in mechanism design \citep{Ortega2018dichotomousmechanism,bogomolnaia2005dichotomousmechanism}, auctions \citep{Babaioff2009Auction, Mishra2013dichotomousauction}, and exchange \citep{Roth2005Dichotomousexchange, Aziz2019Dichotomousexchange}.
Binary valuations have also been extensively studied for the specific problem of fair allocation. \citet{halpern2020binaryadditive} and \citet{Suksumpong2022weightednash} study fair allocation in the restricted setting of binary additive valuations. \citet{Barman2021MRFMaxmin} study the computation of the maxmin share under binary submodular valuations.
\citet{barman2018pathtransfers, Darmann2015MaximizingNP} and \citet{Barman2021approxmnw} study the computation of a max Nash welfare allocation under various binary valuation classes. It is also worth noting that our work is not the first to explore transfer paths: \citet{Suksumpong2022weightednash, barman2018pathtransfers} and \citet{Barman2021MRFMaxmin} also use some form of transfer paths in their algorithm design.
Lastly, \citet{benabbou2021MRF} and \citet{Babaioff2021Dichotomous} study the computation of fair and efficient allocations under matroid rank valuations. \citet{benabbou2021MRF} presented the first positive algorithmic result for this valuation class by showing that a utilitarian social welfare maximizing and envy-free up to one good allocation can be computed in polynomial time. This result was later significantly improved upon by \citet{Babaioff2021Dichotomous}, whose work we discuss in detail in \cref{subsec:lorenz-dominating}.
\section{Preliminaries}\label{sec:prelims}
We use $[t]$ to denote the set $\{1, 2, \dots, t\}$. For the sake of readability, for a set $A$ and an item $g$, we replace $A \setminus \{g\}$ (resp. $A \cup \{g\}$) with $A - g$ (resp. $A + g$).
We have a set of $n$ {\em agents} $N = [n]$ and a set of $m$ {\em goods} $G = \{g_1, g_2, \dots, g_m\}$. Each agent $i$ has a {\em valuation function} $v_i:2^G \mapsto \mathbb{R}_+$ --- $v_i(S)$ corresponds to the value agent $i$ has for the bundle of goods $S$.
We let $\Delta_i(S,g) \triangleq v_i(S + g) - v_i(S)$ be the marginal utility of agent $i$ from receiving the good $g$, given that they already own the bundle $S$.
Unless otherwise mentioned, we assume that $v_i$ is a {\em matroid rank function} (MRF). Due to their equivalence, we use binary submodular valuation and matroid rank function interchangeably. More formally, a function $v_i$ is a matroid rank function if
\begin{inparaenum}[(a)]
\item $v_i(\emptyset) = 0$,
\item for every $S\subseteq G$ and every $g \in G$, $\Delta_i(S,g) \in \{0,1\}$, and
\item $v_i$ is submodular: for every $S \subseteq T \subseteq G$ and every $g \in G\setminus T$, $\Delta_i(S,g) \ge \Delta_i(T,g)$.
\end{inparaenum}
Since there may not be a polynomial space representation of these valuation functions, we assume oracle access to each $v_i$: given a bundle of goods $S \subseteq G$, we can compute $v_i(S)$ in at most $\tau$ time.
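As a concrete illustration, here is one simple MRF with all three properties checked by brute force; the partition-matroid structure (an agent gains value from at most one good per "time slot") is our illustrative choice, not an object from the paper:

```python
from itertools import combinations

# A partition-matroid rank function: goods are grouped into categories
# (e.g. course time slots) and at most one good per category adds value.
category = {"g1": "mon", "g2": "mon", "g3": "tue", "g4": "wed"}
G = list(category)

def v(S):
    """Rank of S: number of distinct categories covered -- an MRF."""
    return len({category[g] for g in S})

def delta(S, g):
    return v(S | {g}) - v(S)

# Check the three MRF axioms over the full power set of G.
assert v(set()) == 0                                       # (a) normalization
for r in range(len(G) + 1):
    for S in map(set, combinations(G, r)):
        for g in G:
            assert delta(S, g) in {0, 1}                   # (b) binary marginals
            for h in set(G) - S - {g}:
                assert delta(S, g) >= delta(S | {h}, g)    # (c) submodularity
```

Checking submodularity only against supersets of the form $S + h$ suffices, since the general condition follows by a chain argument.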
An {\em allocation} is a partition of the set of goods $X = (X_0, X_1, \dots, X_n)$ where each agent $i$ receives the bundle $X_i$, and $X_0$ consists of the unallocated goods.
An allocation is {\em non-redundant} (or \emph{clean}) if for every agent $i \in N$, and every good $g \in X_i$, $v_i(X_i) > v_i(X_i - g)$. \citet{benabbou2021MRF} show that for MRF valuations, this is equivalent to having $v_i(X_i) = |X_i|$ for every $i \in N$.
We sometimes refer to $v_i(X_i)$ as the {\em utility} (or {\em value}) of $i$ under the allocation $X$.
For ease of analysis, we treat $0$ as an agent whose valuation function is $v_0(S) = |S|$; this valuation function is trivially an MRF.
Due to the choice of $v_0$, any clean allocation for the set of agents $N$ is also trivially clean for the set of agents $N + 0$. However, none of the fairness notions we discuss consider the (dummy) agent $0$.
Several fairness desiderata have been proposed and studied in the literature; three of them stand out:
\begin{description}[leftmargin=0cm]
\item[Envy-Freeness:] An allocation is {\em envy free} if no agent prefers another agent's bundle to their own.
This is impossible to guarantee when all goods are allocated --- consider an instance with two agents and only one indivisible good.
Due to this impossibility, several relaxations have been studied in the literature.
The most popular relaxation of envy freeness is {\em envy freeness up to one good} (EF1) \citep{Budish2011EF1,Lipton2004EF1}. An allocation is EF1 if no agent envies another agent after dropping some good from the latter agent's bundle. An EF1 allocation can be computed in polynomial time for most realistic valuation classes \citep{Lipton2004EF1}.
More recently, a stronger relaxation called {\em envy free up to any good} (EFX) \citep{Caragiannis2016MNW} has gained popularity: an allocation is EFX if no agent envies another agent after dropping {\em any} good from the latter agent's bundle. In contrast to EF1 allocations, the existence of EFX allocations is still an open question for several classes of valuation functions \citep{Plaut2017EFX}.
\item[Maximin Share:] An agent's {\em maximin share} (\text{MMS}) is defined as the value they would obtain if they divided the goods into $n$ bundles themselves and picked the worst of these bundles. More formally,
\begin{align*}
\text{MMS}_i = \max_{X = (X_1, X_2, \dots, X_n)} \min_{j\in [n]} v_i(X_j)
\end{align*}
\citet{procaccia2014fairenough} show that it is not always possible to guarantee each agent their maximin share; instead, past works \citep{Kurokawa2018Maxmin} try to guarantee every agent a non-negligible fraction of their maximin share. For some $c \in (0, 1]$, an allocation is $c$-$\text{MMS}$ if it guarantees every agent $i \in N$ a value of at least $c\cdot \text{MMS}_i$.
\item[Leximin:] An allocation is {\em leximin} if it maximizes the value provided to the agent with the least value and, conditioned on this, maximizes the value provided to the agent with the second least value, and so on. While leximin allocations are computationally intractable when agent valuations are unrestricted \citep[Theorem 4.2]{benabbou2021MRF},
they can be computed in polynomial time for several simple classes of valuation functions \citep{halpern2020binaryadditive, Babaioff2021Dichotomous}.
\end{description}
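The two envy relaxations above are easy to state as explicit checks; a sketch assuming a set-valuation oracle `v(i, S)` (this interface is our assumption, not the paper's):

```python
def is_ef1(X, v, agents):
    """EF1: i stops envying j after dropping SOME good from X[j]."""
    return all(
        v(i, X[i]) >= v(i, X[j])
        or any(v(i, X[j] - {g}) <= v(i, X[i]) for g in X[j])
        for i in agents for j in agents if i != j
    )

def is_efx(X, v, agents):
    """EFX: i stops envying j after dropping ANY good from X[j]."""
    return all(
        v(i, X[j] - {g}) <= v(i, X[i])
        for i in agents for j in agents if i != j for g in X[j]
    )

# Two agents, one good (the impossibility example from the text):
# not envy-free, but both EF1 and EFX.
v = lambda i, S: len(S)
X = {1: {"g"}, 2: set()}
assert is_ef1(X, v, [1, 2]) and is_efx(X, v, [1, 2])
# Giving two valued goods to one agent breaks EF1.
Y = {1: {"g1", "g2"}, 2: set()}
assert not is_ef1(Y, v, [1, 2])
```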
When envy is the main consideration, an allocation where no agent gets any good is envy free.
While this is fair, it is very inefficient.
Therefore, coupled with fairness metrics, algorithms usually guarantee some {\em efficiency} criterion as well.
We consider two popular notions of efficiency:
\begin{description}
\item[Utilitarian Social Welfare:] The {\em utilitarian social welfare} of an allocation is defined as the {\em sum} of the value obtained by each agent i.e. $\sum_{i \in N} v_i(X_i)$.
\item[Nash Social Welfare:] The {\em Nash social welfare} of an allocation is defined as the {\em product} of the value obtained by each agent i.e. $\prod_{i \in N} v_i(X_i)$.
\end{description}
Allocations which maximize utilitarian social welfare and Nash social welfare are referred to as \text{MAX-USW}{} and \text{MNW}{} respectively.
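Both welfare notions are one-line aggregates over a valuation oracle; a small sketch (the oracle interface is our assumption):

```python
from math import prod

def usw(X, v, agents):
    """Utilitarian social welfare: the sum of realized values."""
    return sum(v(i, X[i]) for i in agents)

def nsw(X, v, agents):
    """Nash social welfare: the product of realized values."""
    return prod(v(i, X[i]) for i in agents)

v = lambda i, S: len(S)
X = {1: {"g1", "g2"}, 2: {"g3"}}
assert usw(X, v, [1, 2]) == 3
assert nsw(X, v, [1, 2]) == 2
```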
Before we proceed, we show a simple but useful result about matroid rank valuations --- if an agent values the bundle $Y$ more than the bundle $X$, there must be a good $g \in Y$ such that $\Delta_i(X,g) = 1$. Variants of this result have also been shown by \citet{Babaioff2021Dichotomous} and \citet{benabbou2021MRF}.
\begin{obs}\label{obs:exchange}
Suppose that agents have binary submodular valuations. If $X$ and $Y$ are two allocations and $v_i(X_i) < v_i(Y_i)$ for some $i \in N + 0$, there exists a good $g \in Y_i \setminus X_i$ such that $\Delta_i(X_i, g) = 1$.
\end{obs}
\begin{proof}
Let $Y_i = \{g_1, \dots, g_k\}$ and $Z^j = \{g_1, \dots, g_j\}$ ($Z^0 = \emptyset$). We need to show that $\Delta_i(X_i, g_j) = 1$ for some $j \in [k]$.
If $\Delta_i(X_i \cup Z^{j-1}, g_j) = 1$ for some $j \in [k]$, then we have $1 \ge \Delta_i(X_i, g_j) \ge \Delta_i(X_i \cup Z^{j-1}, g_j) = 1$ and we are done. If this is not the case, then we have
\begin{align*}
v_i(Y_i \cup X_i) = v_i(X_i) + \sum_{j = 1}^k \Delta_i(X_i \cup Z^{j-1}, g_j) = v_i(X_i)
\end{align*}
However, since marginal gains are not negative, we must have $v_i(X_i \cup Y_i) \ge v_i(Y_i) > v_i(X_i)$ which creates a contradiction. Therefore, $\Delta_i(X_i \cup Z^{j-1}, g_j) = 1$ for some $j \in [k]$ which implies $\Delta_i(X_i, g_j) = 1$ for some $j \in [k]$.
\end{proof}
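Observation \ref{obs:exchange} can also be sanity-checked by brute force on a tiny binary submodular valuation; the truncated additive function below is our example:

```python
from itertools import combinations

G = ["g1", "g2", "g3", "g4"]
D = {"g1", "g2", "g3"}                   # desired goods
# An MRF: rank of a truncated uniform matroid on D (goods outside D are loops).
v = lambda S: min(len(set(S) & D), 2)
delta = lambda S, g: v(set(S) | {g}) - v(set(S))

# Whenever v(X) < v(Y), some good outside X has marginal value 1 for X.
subsets = [set(S) for r in range(len(G) + 1) for S in combinations(G, r)]
for X in subsets:
    for Y in subsets:
        if v(X) < v(Y):
            assert any(delta(X, g) == 1 for g in set(G) - X)
```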
\subsection{Prioritized Lorenz Dominating Allocations}\label{subsec:lorenz-dominating}
We define the {\em sorted utility vector} of an allocation $X$ as $\vec{u}^X = (u^X_1, u^X_2, \dots u^X_n)$ which corresponds to the vector $(v_1(X_1), v_2(X_2), \dots v_n(X_n))$ sorted in ascending order (ties broken arbitrarily).
We say an allocation $X$ \emph{Lorenz dominates} the allocation $Y$ (denoted $X \succ_{\texttt{lorenz}} Y$) if for all $k \in [n]$, $\sum_{j = 1}^k u^X_j \ge \sum_{j = 1}^k u^Y_j$.
An allocation $X$ is {\em Lorenz dominating} if for all allocations $X'$, we have $X \succ_{\texttt{lorenz}} X'$.
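Lorenz dominance reduces to comparing prefix sums of sorted utility vectors; a direct sketch:

```python
from itertools import accumulate

def lorenz_dominates(u, w):
    """True if utility vector u weakly Lorenz dominates w (inputs need not be sorted)."""
    return all(a >= b for a, b in zip(accumulate(sorted(u)), accumulate(sorted(w))))

# The sorted vector (1, 2) Lorenz dominates (0, 3), but not vice versa:
# prefix sums (1, 3) vs. (0, 3).
assert lorenz_dominates([2, 1], [0, 3])
assert not lorenz_dominates([0, 3], [2, 1])
```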
\citet{Babaioff2021Dichotomous} show that when agents have MRF valuations, a non-redundant Lorenz dominating allocation always exists and satisfies several desirable fairness and efficiency guarantees. We formalize this result below.
\begin{theorem}[\citet{Babaioff2021Dichotomous}]\label{thm:lorenz-fair}
When agents have MRF valuations, a non-redundant Lorenz dominating allocation always exists and is \text{MNW}, \text{MAX-USW}, EFX, leximin and $\frac12$-\text{MMS}.
\end{theorem}
The one minor drawback of the ordering defined above is that it does not distinguish between two allocations with the same sorted utility vector, even when agents receive a different utility in the two allocations.
To understand this better, consider the following example.
\begin{example}\label{ex:lorenz}
Consider a problem instance with two agents $\{1, 2\}$ and three goods $\{g_1, g_2, g_3\}$. The valuation function for each agent $v_i(S) = |S|$ for $i \in \{1,2\}$.
Any Lorenz dominating allocation for this problem has sorted utility vector $(1, 2)$: one agent gets one good while the other agent gets two.
Consider two allocations $X$ and $Y$ defined as follows
\begin{align*}
X_1 = \{g_1, g_2\} &\qquad X_2 = \{g_3\} \\
Y_1 = \{g_1\} &\qquad Y_2 = \{g_2, g_3\}
\end{align*}
Both $X$ and $Y$ are Lorenz dominating but give agent $1$ a different utility: agent $1$ gets two goods in the first allocation but only one in the second.
\end{example}
It is desirable to distinguish between the two allocations described above: if we have one solution concept under which agent $1$ always gets two goods and another under which agent $2$ always gets two goods, then randomizing between the two yields a (random) allocation that is arguably fairer, since it is not always the case that the same agent gets the higher utility.
To this end, \citet{Babaioff2021Dichotomous} introduce a priority order over the set of agents.
The priority ordering is modelled as a permutation over the set of agents $\pi: N \mapsto [n]$ where agents with a lower value of $\pi$ have a higher priority.
This ordering is enforced by adding a set of artificial goods $A = \{a_1, a_2, \dots a_n\}$ to the problem instance; each agent $i$ values $a_i$ at $\frac{\pi(i)}{n^2}$ and all other goods in $A$ at $0$.
More formally, to enforce a priority ordering, \citet{Babaioff2021Dichotomous} create a new fair allocation instance by introducing a new set of items $A=\{a_1,\dots,a_n\}$, where each agent $i$ has the following valuation function $v'_i$:
\begin{align*}
v'_i(S) = v_i(S \cap G) +\frac{\pi(i)}{n^2}|S \cap \{a_i\}|
\end{align*}
We refer to this problem instance as the {\em augmented} problem instance with the priority order $\pi$. When $\pi$ is clear from context, we simply refer to this problem instance as the augmented problem instance.
The goods in $A$ do not exist in reality and serve only to break ties between agents.
After computing a Lorenz dominating allocation (say $X'$) for the augmented problem instance, \citet{Babaioff2021Dichotomous} remove the goods from $A$ and allocate the goods from $G$ to the agents according to the computed allocation $X'$.
They refer to this allocation of the set of goods $G$ as a Lorenz dominating allocation w.r.t. the priority order $\pi$. It is easy to see that Lorenz dominating allocations with respect to any priority ordering $\pi$ are Lorenz dominating \citep[Theorem 4]{Babaioff2021Dichotomous} and therefore retain all the desirable fairness properties in \cref{thm:lorenz-fair}.
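On Example \ref{ex:lorenz}, the augmented valuations break the tie in favor of the higher-priority agent; a numeric check, with $\pi(1)=1$ and $\pi(2)=2$ (our choice of ordering):

```python
n = 2
pi = {1: 1, 2: 2}                        # agent 1 has higher priority (lower pi)

def v_aug(i, real_bundle_size):
    """Augmented value when agent i also holds their artificial good a_i."""
    return real_bundle_size + pi[i] / n**2

# The two Lorenz dominating allocations of the example:
# X gives agent 1 two goods, Y gives agent 2 two goods.
ux = sorted([v_aug(1, 2), v_aug(2, 1)])  # [1.5, 2.25]
uy = sorted([v_aug(1, 1), v_aug(2, 2)])  # [1.25, 2.5]

def prefix(u):
    out, s = [], 0.0
    for x in u:
        s += x
        out.append(s)
    return out

# X Lorenz dominates Y in the augmented instance, so X is selected:
# the higher-priority agent receives the extra good.
assert all(a >= b for a, b in zip(prefix(ux), prefix(uy)))
assert not all(a >= b for a, b in zip(prefix(uy), prefix(ux)))
```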
This priority order can be used to guarantee additional fairness properties --- by randomly choosing this priority order, we can generate allocations that are ex-ante envy-free and ex-ante proportional.
A random allocation $X$ is {\em ex-ante envy-free} if, in expectation, no agent envies another agent i.e. $\mathbb{E}_X[v_i(X_i)] \ge \mathbb{E}_X[v_i(X_j)]$ for all $i, j \in N$.
Similarly, a random allocation $X$ is {\em ex-ante proportional} if each agent, in expectation, receives a utility greater than the $n$-th fraction of their value for the entire bundle of goods i.e. $\mathbb{E}_X[v_i(X_i)] \ge \frac{v_i(G)}{n}$.
More formally, \citet{Babaioff2021Dichotomous} define the randomized prioritized egalitarian (RPE) mechanism which performs the following steps:
\begin{enumerate}
\item Choose a priority order $\pi$ uniformly at random.
\item Elicit the preferences of each agent. If the agent's valuation function is not an MRF, set their value for all bundles to be equal to $0$.
\item Compute a Lorenz dominating allocation with respect to the ordering $\pi$ for the above elicited preferences.
\end{enumerate}
\citet{Babaioff2021Dichotomous} show that the RPE mechanism computes an ex-ante envy-free and ex-ante proportional allocation. Another desirable property of mechanisms is that of {\em strategyproofness} --- a mechanism is {\em strategyproof} if no agent can get a better outcome by lying about their valuation function. \citet{Babaioff2021Dichotomous} show that the RPE mechanism is strategyproof as well.
\section{Yankee Swap}\label{sec:yankee-swap}
In this section, we describe an algorithm to compute prioritized Lorenz dominating allocations.
The algorithm we propose is known colloquially as a \emph{Yankee swap}: we proceed in rounds, and at every round agents have a choice of either taking an unallocated item (that offers them a positive marginal gain) or stealing an item from another agent (that offers them positive marginal gain), who then steals an item from another agent, and so on.
However, as mentioned in the introduction, we restrict the items agents can steal at every point to items which the victims of the theft can make up for by stealing other items.
More formally, we define these transfer paths recursively as follows: a \emph{transfer path} in an allocation $X$ is a sequence of agents in $N \cup \{0\}$, $(p_1,p_2,\dots,p_r)$, such that for some good $g \in X_{p_2}$:
\begin{inparaenum}[(a)]
\item $\Delta_{p_1}(X_{p_1},g) = 1$.
\item If $X'$ is the allocation that results from moving $g$ from $p_2$ to $p_1$ in $X$, then there exists a transfer path $(p_2, \dots, p_r)$ in $X'$ that does not involve the transfer of the good $g$.
\end{inparaenum}
In other words, there is a set of goods that can be transferred along the path such that agent $p_1$'s utility increases by $1$, agents $p_2,\dots,p_{r-1}$'s utilities are unchanged, and agent $p_r$'s utility decreases by $1$.
While paths can be cyclic, i.e., an agent can appear multiple times in a transfer path sequence, every good may be transferred at most once.
Since goods are transferred at most once, paths can be characterized by a sequence of goods $(g_{i_1}, g_{i_2}, \dots, g_{i_k})$ and an agent $i$ where $g_{i_k}$ gets transferred to the agent that has $g_{i_{k-1}}$, $g_{i_{k-1}}$ gets transferred to the agent that has $g_{i_{k-2}}$ and so on until finally, $g_{i_1}$ gets transferred to agent $i$. Depending on the context, we use both notations of transfer paths in our algorithms and analysis.
\subsection{The Algorithm}\label{sec:yankee-swap-algo}
The Yankee Swap algorithm takes as input a fair allocation instance $(N, G, \{v_i\}_{i \in N})$ and a priority ordering $\pi: N \mapsto [n]$ over the set of agents.
The algorithm first allocates all the goods to $X_0$ --- all items are initially unallocated.
Then, we pick an agent (other than agent $0$) with the least utility so far and check whether there is a transfer path starting from them and ending at $0$; ties are broken in favor of agents with higher priority.
If a path exists, we transfer goods backwards along the path giving the agent an additional unit of value.
Otherwise, we label them as finished and do not try to improve their allocation further; this label is stored as a $\texttt{flag}\xspace$ variable for each agent. The $\texttt{flag}\xspace$ variable for some agent $i$ takes the value $\texttt{true}\xspace$ when the agent is labelled finished and $\texttt{false}\xspace$ otherwise.
We repeat this procedure until all the agents are labelled finished; see \cref{algo:yankee-swap}.
\begin{algorithm}
\caption{Yankee Swap}
\label{algo:yankee-swap}
\begin{algorithmic}
\Require The set of agents $N = [n]$, the set of goods $G$, oracle access to valuation functions $\{v_i\}_{i \in N}$ and a priority order over the agents $\pi: N \mapsto [n]$
\State $X = (X_0, X_1, \dots, X_n) \gets (G, \emptyset, \dots, \emptyset)$
\State $\texttt{flag}\xspace_j \gets \texttt{false}\xspace \quad \forall j \in N$
\While{$\texttt{flag}\xspace_j = \texttt{false}\xspace$ for some $j \in N$}
\State Let $T$ be the agents $j \in N$ with $\texttt{flag}\xspace_j = \texttt{false}\xspace$
\State Let $T'$ be the agents in $T$ with least utility in $X$
\State Let $i$ be the highest priority agent in $T'$ according to $\pi$
\State Check if there exists a transfer path in $X$ starting at $i$ which ends at $0$
\If{a path $(g_{i_1}, g_{i_2}, \dots, g_{i_k})$ exists}
\State Transfer goods along the path and update $X$
\Else
\State $\texttt{flag}\xspace_i \gets \texttt{true}\xspace$
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
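For concreteness, here is a compact executable sketch of \cref{algo:yankee-swap}. The transfer-path search is a breadth-first search over an exchange graph on goods (an edge from $g$ to $h$ means that $g$'s owner can replace $g$ with $h$ at no loss in value); the function names and oracle interface are our own, and the test instance below is Example \ref{ex:single-transfers-break}, which has binary additive valuations. For general MRFs the validity of the resulting transfers relies on taking shortest paths, which BFS provides.

```python
from collections import deque

def find_transfer_path(i, X, owner, delta):
    """BFS for a shortest transfer path from agent i to the pool (agent 0).

    Nodes are goods; the returned path is in the good-sequence form used in the
    text: its first good goes to i, and its last good leaves the pool."""
    frontier = deque(g for g in owner if owner[g] != i and delta(i, X[i], g) == 1)
    parent = {g: None for g in frontier}          # parent also serves as visited set
    while frontier:
        g = frontier.popleft()
        j = owner[g]
        if j == 0:                                # reached an unallocated good
            path = []
            while g is not None:
                path.append(g)
                g = parent[g]
            return list(reversed(path))
        for h in owner:                           # j can make up for losing g by taking h
            if h not in parent and owner[h] != j and delta(j, X[j] - {g}, h) == 1:
                parent[h] = g
                frontier.append(h)
    return None

def yankee_swap(n, goods, value, priority):
    """value(i, S) is an oracle for agent i's MRF; lower priority[i] = higher priority."""
    X = {i: set() for i in range(n + 1)}
    X[0] = set(goods)                             # agent 0 holds the unallocated pool
    owner = {g: 0 for g in goods}
    delta = lambda i, S, g: value(i, S | {g}) - value(i, S)
    done = set()
    while len(done) < n:
        active = [j for j in range(1, n + 1) if j not in done]
        # Least utility first (bundles are kept clean, so |X[j]| is j's utility),
        # ties broken by priority.
        i = min(active, key=lambda j: (len(X[j]), priority[j]))
        path = find_transfer_path(i, X, owner, delta)
        if path is None:
            done.add(i)                           # flag_i <- true
            continue
        prev = i                                  # shift each good back one step
        for g in path:
            j = owner[g]
            X[j].remove(g)
            X[prev].add(g)
            owner[g] = prev
            prev = j
    return X

# Example ex:single-transfers-break: agent 1 wants only the banana,
# agent 2 wants everything, agent 3 wants only apples.
goods = {"a1", "a2", "a3", "a4", "b"}
wants = {1: {"b"}, 2: set(goods), 3: goods - {"b"}}
value = lambda i, S: len(S & wants[i])
X = yankee_swap(3, goods, value, {1: 1, 2: 2, 3: 3})
assert X[1] == {"b"}
assert sorted(len(X[i]) for i in (1, 2, 3)) == [1, 2, 2]
```

On this instance the algorithm reaches the leximin utility vector $(1,2,2)$: agent 1 takes the banana early, and once the pool is empty no agent can find a transfer path ending at agent $0$, so all flags are set.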
\subsection{Analysis}\label{subsec:yankee-swap-correctness}
In this section, we show that Algorithm \ref{algo:yankee-swap} computes prioritized Lorenz dominating allocations.
Our correctness result uses the following lemma that provides sufficient conditions for a transfer path to exist. A similar version of this lemma appears in \citet[Lemma 17]{Babaioff2021Dichotomous} and \citet[Lemma 3.12]{benabbou2021MRF}.
\begin{lemma}\label{lem:babaioff-paths}
Let $X$ and $Y$ be two non-redundant allocations for the set of agents $N + 0$. Let $S^{-}$ be the set of all agents $i \in N + 0$ where $|X_i| < |Y_i|$, $S^{=}$ be the set of all agents $i \in N + 0$ where $|X_i| = |Y_i|$ and $S^{+}$ be the set of all agents $i \in N + 0$ where $|X_i| > |Y_i|$.
For any agent $i \in S^{-}$, there exists a transfer path from $i$ to some agent $k \in S^{+}$ in $X$.
\end{lemma}
\begin{proof}
Our proof is constructive. We construct the transfer path using the following recursive loop which we denote by $\texttt{Loop}(X, Y, i)$:
Since $X$ and $Y$ are non-redundant, using \cref{obs:exchange}, there is some good $g \in Y_i \setminus X_i$ such that $\Delta_i(X_i, g) = 1$. This good must belong to some other agent, say $j \in N + 0$. If $j \in S^{+}$, we are done. Otherwise, move the good from $j$ to $i$ to create a new allocation $X'$. We now compare $X'$ and $Y$ and define ${S'}^{-}$, ${S'}^{=}$ and ${S'}^{+}$ analogously to ${S}^{-}$, ${S}^{=}$ and ${S}^{+}$. Since $j \in S^{-} \cup S^{=}$ and $j$ lost a good in $X'$, we must have that $j \in {S'}^{-}$. Further, $i \in {S'}^{=} \cup {S'}^{-}$, which means ${S'}^{+} = S^{+}$. We then repeat this process with the allocations $X'$ and $Y$ and the agent $j$ i.e. we run $\texttt{Loop}(X', Y, j)$.
Note that the above loop must terminate since the value $\sum_{j \in N + 0} |X'_j \cap Y_j|$ increases by $1$ at every iteration. The value $\sum_{j \in N + 0} |X'_j \cap Y_j|$ is upper bounded by $|G|$. Since $S^{+}$ does not change, at some iteration, we take a good from an agent in $S^+$ and the above loop terminates.
Let $(p_1, p_2, \dots, p_r, k)$ be the sequence of agents we take a good from in the above loop --- we take a good from $p_1$ and give it to $i$, take a good from $p_2$ and give it to $p_1$, and so on.
In order to show that $(i, p_1, \dots, p_r, k)$ forms a transfer path, the last thing we need to show is that $g$ never gets transferred out of $i$ which would imply that goods get transferred at most once. This is easy to see: the only goods that can potentially get transferred in $X'$ are the goods in $Y_j \setminus X'_j$ for any $j \in N + 0$. Since $g \in Y_i \cap X'_i$, it will never get transferred out. Similarly, no good that is transferred to some agent ever gets transferred out.
\end{proof}
We will also need these useful lemmas.
\begin{lemma}\label{lem:choice-equivalence}
At the beginning of any iteration of \cref{algo:yankee-swap}, let $i$ be the highest priority agent with least utility such that $\texttt{flag}\xspace_i = \texttt{false}\xspace$ i.e. let $i$ be the agent chosen by the algorithm to be the starting point of the transfer path. Let $W$ be the allocation at the beginning of the iteration and let $W'$ be an allocation for the augmented problem instance such that $W'_j = W_j + a_j$ for all $j \in N$. Then $i$ is the least valued agent in $W'$ among all the agents whose flag is $\texttt{false}\xspace$.
\end{lemma}
\begin{proof}
Let $N_{\texttt{false}\xspace}$ be the set of agents whose flag is set to false at the beginning of the iteration in consideration. For any agent $u \in N_{\texttt{false}\xspace}$, if $|W_u| > |W_i|$, we have $v'_u(W'_u) > v'_i(W'_i)$ since the auxiliary goods provide a value less than $1$ for both agents and $W$ is non-redundant. If $|W_u| = |W_i|$, then we must have $v'_u(W'_u) > v'_i(W'_i)$ since we chose $i$ as the agent with highest priority among those with a bundle of size $|W_i|$. Since our first constraint on $i$ was that it minimize $|W_i|$, we can never have $|W_u| < |W_i|$.
\end{proof}
\begin{lemma}\label{lem:yankee-swap-size-upper-bounds}
At the beginning of any iteration of \cref{algo:yankee-swap}, let $i$ be the highest priority agent with least utility such that $\texttt{flag}\xspace_i = \texttt{false}\xspace$ i.e. let $i$ be the agent chosen by the algorithm to be the source of the transfer paths. Let $W$ be the allocation at the beginning of the iteration. If $|W_i| = k$, then the following holds:
\begin{enumerate}
\item Agents with higher priority than $i$ have a bundle of size at most $k+1$.
\item Agents with a lower priority than $i$ have a bundle of size at most $k$.
\end{enumerate}
\end{lemma}
\begin{proof}
This result stems from the sequential nature of the allocation. Let $t$ be the iteration of the algorithm being examined.
Let $j$ be an agent with higher priority than $i$ with a bundle of size at least $k+2$. Consider the start of the iteration $t'$ where $W_j$ moved from a bundle of size $k+1$ to a bundle of size $k+2$. Since bundle sizes increase monotonically, this iteration is unique and well-defined. Further, we also have that $t' < t$. In the iteration $t'$, $j$ was the agent with least utility $(k+1)$ and highest priority among the agents whose flag was set to false; otherwise, $j$ would not have received a good. However, at iteration $t'$, $i$'s bundle had a size of at most $k$ and its flag was set to false: at iteration $t > t'$, $i$'s bundle had a size of $k$ and its flag was false, bundle sizes increase monotonically, and flags are never unset once set to true. This is a contradiction, since it implies $j$ was not the agent with least utility among the agents whose flag was set to false at iteration $t'$; therefore, $j$ cannot have a bundle of size at least $k+2$.
We can similarly deal with the case where $j$ has a lower priority than $i$.
\end{proof}
We are now ready to prove our main result.
\begin{theorem}\label{thm:yankee-swap-leximin}
The allocation output by \cref{algo:yankee-swap} is a non-redundant Lorenz dominating allocation with respect to the ordering $\pi$.
\end{theorem}
\begin{proof}
At every iteration of \cref{algo:yankee-swap}, the allocation remains non-redundant: by the definition of transfer paths, agents only ever take unassigned items/steal items that they have a positive marginal gain for.
It is also easy to see that the algorithm always terminates; at every iteration, either a $\texttt{flag}\xspace$ is set to $\texttt{true}\xspace$ or $|X_0|$ decreases by $1$. Since $|X_0|$ never increases and no $\texttt{flag}\xspace$ is ever reset from $\texttt{true}\xspace$ to $\texttt{false}\xspace$, there can only be a finite number of iterations.
We now show that the allocation output by \cref{algo:yankee-swap} is Lorenz dominating.
Assume for contradiction that the allocation output by \cref{algo:yankee-swap} is not Lorenz dominating with respect to the ordering $\pi$. Let $X$ be the allocation output by Algorithm \ref{algo:yankee-swap} and $Y$ be a Lorenz dominating allocation with respect to the ordering $\pi$ ($Y$ is shown to always exist by \citet{Babaioff2021Dichotomous}). We use $Y'$ to denote an allocation for the augmented problem instance defined as $Y'_i = Y_i + a_i$ for all agents $i \in N$. We define $X'$ similarly as $X'_i = X_i + a_i$. Since $Y$ is Lorenz dominating with respect to the ordering $\pi$ for the original problem instance, $Y'$ is Lorenz dominating for the augmented problem instance \citep[Theorem 4]{Babaioff2021Dichotomous}.
If $v_i(X_i) \ge v_i(Y_i)$ for all $i \in N$, then since $Y$ is \text{MAX-USW}, we must have $v_i(X_i) = v_i(Y_i)$ for all $i$, so $X$ is also Lorenz dominating with respect to the ordering $\pi$, contrary to our assumption. Thus, there must be at least one $i\in N$ such that $v_i(X_i) < v_i(Y_i)$.
Let $i \in \argmin \{v_i(X_i)\mid v_i(X_i) < v_i(Y_i)\}$. If there are multiple such agents, we pick the one with highest priority. Note that, from an argument similar to \cref{lem:choice-equivalence}, this is equivalent to saying $i$ is the least valued agent in $X'$ who receives a higher value in $Y'$ than in $X'$.
Consider the iteration of the algorithm where $\texttt{flag}\xspace_i$ was set to $\texttt{true}\xspace$. Let $W$ be the allocation at the beginning of the iteration. Let $k = |W_i|$.
We have the following Lemma.
\begin{lemma}\label{lem:lorenz-dominance}
For all $h \in N$, we have $|Y_h| \ge |W_h|$.
\end{lemma}
\begin{proof}
Assume for contradiction that this is not true.
Let $W'$ be an allocation in the augmented problem instance such that $W'_j = W_j + a_j$ for all $j \in N$. We show that $Y'$ does not Lorenz dominate $W'$. This contradicts the fact that $Y'$ is Lorenz dominating. Let $j$ be the agent with lowest utility under $Y$ such that $|Y_j| < |W_j|$; we break ties by choosing the agent with highest priority. Again, this is equivalent to saying that $j$ is the agent with least utility in $Y'$ such that $v'_j(Y'_j) < v'_j(W'_j)$.
Before we move into the technical details, we provide the main idea of the proof here. In this proof, we examine the sorted utility vectors of $W'$ and $Y'$. We show that any agent who receives a utility of less than $v'_j(Y'_j)$ in $Y'$ receives the same utility in $W'$, and vice versa, i.e., any agent who receives a utility of less than $v'_j(Y'_j)$ in $W'$ receives the same utility in $Y'$. By doing this, we can compare the first element in the sorted utility vectors of $W'$ and $Y'$ that is unequal; we then show that this element is greater in the sorted utility vector of $W'$ than in $Y'$, creating a contradiction and completing the proof. This is summarized in Figure \ref{fig:sorted-utility-vector}.
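As a side illustration of the dominance comparison used throughout this proof, the Lorenz dominance test on two utility vectors can be sketched in a few lines of Python. This is a hypothetical helper, not part of the paper's algorithm; utility vectors are plain lists of numbers:

```python
from itertools import accumulate

def lorenz_dominates(u, w):
    """True iff utility vector u Lorenz dominates w: every prefix sum of
    u's ascending-sorted values is at least the corresponding prefix of w."""
    prefix_u = list(accumulate(sorted(u)))
    prefix_w = list(accumulate(sorted(w)))
    return all(a >= b for a, b in zip(prefix_u, prefix_w))
```

For example, the vector $(2,2,2)$ Lorenz dominates $(1,2,3)$ but not conversely.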
\begin{figure}
\centering
\scalebox{0.8}{%
\begin{tikzpicture}
\begin{axis} [ybar stacked,
bar width = 15pt,
ymin = 0,
ymax = 10,
xtick = \empty,
bar shift = -8pt
]
\addplot[fill=blue] coordinates {(1,1) (2,1) (3,1) (4,1)(5,1)};
\addplot[fill=blue] coordinates {(1,1) (2,1) (3,1) (4,1)(5,1)};
\addplot[fill=blue] coordinates {(1, 0)(2,0.2) (3,1) (4,1)(5,1)};
\addplot[fill=blue] coordinates {(1, 0)(2, 0)(3,0.5) (4,1)(5,1)};
\addplot[fill=blue] coordinates {(1, 0)(2, 0) (3, 0)(4,1)(5,1)};
\addplot[fill=blue] coordinates {(1, 0)(2,0)(3,0)(4,0.5)(5,1)}; \label{plot_one}
\node at (axis cs: 3.75, 7){$u$};
\node at (axis cs: 4.2, 7){$j$};
\draw[->] (axis cs: 3.75, 4.5) edge (axis cs: 3.75, 6.6) (axis cs: 4.2, 4.5) edge (axis cs: 4.2, 6.6);
\end{axis}
\begin{axis} [ybar stacked,
bar width = 15pt,
ymin = 0,
ymax = 10,
xtick = \empty,
bar shift = 8pt,
hide axis
]
\addlegendimage{/pgfplots/refstyle=plot_one}\addlegendentry{$W'$}
\addplot[fill=red] coordinates {(1,1) (2,1) (3,1) (4,1)(5,1)};
\addplot[fill=red] coordinates {(1,1) (2,1) (3,1) (4,1)(5,1)};
\addplot[fill=red] coordinates {(1, 0)(2,0.2) (3,1) (4,1)(5,1)};
\addplot[fill=red] coordinates {(1, 0)(2, 0)(3,0.5) (4,1)(5,1)};
\addplot[fill=red] coordinates {(1, 0)(2, 0) (3, 0)(4,1)(5,1)};
\addplot[fill=red] coordinates {(1, 0)(2,0)(3,0)(4,0)(5,1)};
\addlegendentry{$Y'$};
\end{axis}
\end{tikzpicture}
}
\caption{The idea behind the proof of \cref{lem:lorenz-dominance}: We plot the sorted utility vectors of $W'$ and $Y'$. Each large block can be thought of as a good and each of the smaller blocks can be thought of as auxiliary goods. Our first claim is to show that all the values to the left of $j$ are equal in both $W'$ and $Y'$ and correspond to the same agent. Our second claim is to show that the bar corresponding to $u$ is taller than that of $j$.}
\label{fig:sorted-utility-vector}
\end{figure}
We first show that $v'_j(Y_j') < v'_i(W_i')$.
If $j$ has a higher priority than $i$, $|W_j| \le k+1$ (\Cref{lem:yankee-swap-size-upper-bounds}). By our choice of $j$, $|Y_j| \le |W_j| - 1 \le k = |W_i|$. Since $j$ has a higher priority, we have $v'_j(Y'_j) < v'_i(W'_i)$. If $j$ has a lower priority than $i$, $|W_j| \le k$ (\Cref{lem:yankee-swap-size-upper-bounds}). By our choice of $j$, $|Y_j| \le |W_j| - 1 \le k - 1 < |W_i|$. Since $|W_i|$ is greater than $|Y_j|$ by at least $1$, we have $v'_j(Y'_j) < v'_i(W'_i)$ irrespective of their priorities.
We observe that for any $h\in N$ whose utility is less than $v'_j(Y'_j)$ under $Y'$, we have $v'_h(Y'_h) = v'_h(W'_h)$. This is because we pick an agent $j$ with minimal utility under $Y'$ for which $v'_j(Y'_j) < v'_j(W'_j)$. Therefore, for any agent $h$ for whom $v'_h(Y'_h) <v'_j(Y'_j)$ we must have that $v'_h(Y'_h) \ge v'_h(W'_h)$.
If the inequality is strict, we have $v'_h(W'_h) < v'_h(Y'_h) < v'_j(Y'_j) < v'_i(W'_i)$.
This implies that $\texttt{flag}\xspace_h$ was set to $\texttt{true}\xspace$ before $\texttt{flag}\xspace_i$ was set to $\texttt{true}\xspace$ i.e. at the iteration where $\texttt{flag}\xspace_i$ was set to $\texttt{true}\xspace$, $\texttt{flag}\xspace_h$ was already true; otherwise, according to \cref{lem:choice-equivalence}, \cref{algo:yankee-swap} would have chosen $h$ instead of $i$ creating a contradiction.
If $\texttt{flag}\xspace_h$ was set to true, $|W_h| = |X_h|$.
This implies that $v'_h(X'_h) = v'_h(W'_h) < v'_i(W'_i) \le v'_i(X'_i)$ and $v'_h(X'_h) < v'_h(Y'_h)$. This contradicts our choice of $i$.
We can similarly show that, for any agent $h$ with utility less than $v'_j(Y'_j)$ in $W'$, we have $v'_h(Y'_h) = v'_h(W'_h)$.
Let $j$ be the $l$-th least valued agent in $Y'$. From our discussion above, the first $l-1$ least valued agents in both $W'$ and $Y'$ have the same utility in both allocations. Let the $l$-th least valued agent in $W'$ be $u$. From the definition of Lorenz dominance, if $v'_j(Y'_j) < v'_{u}(W'_{u})$, $Y'$ does not Lorenz dominate $W'$.
If $v'_j(Y'_j) = v'_{u}(W'_{u})$, this implies that $j = u$ since the fractional part of $v'_h(Y'_h)$ is unique for every agent $h \in N$ due to the way we allocate the auxiliary goods $A$. However, if $j = u$, then by assumption we have $v'_j(Y'_j) < v'_j(W'_j) = v'_{u}(W'_{u}) = v'_j(Y'_j)$, a contradiction. Therefore $v'_j(Y'_j) \ne v'_{u}(W'_{u})$. If $v'_{u}(W'_{u}) < v'_j(Y'_j)$, then from our discussion, we must have $v'_{u}(W'_{u}) = v'_{u}(Y'_{u}) < v'_j(Y'_j)$. This implies that there are $l$ agents (the first $l$ least valued agents in $W'$) which have a lower utility than $j$ in $Y'$, contradicting our assumption on $l$. Therefore, we must have $v'_{u}(W'_{u}) > v'_j(Y'_j)$, proving that $Y'$ does not Lorenz dominate $W'$; the sum of the first $l$ elements in the sorted utility vector of $W'$ is greater than the sum of the first $l$ elements in the sorted utility vector of $Y'$. Since $Y'$ is Lorenz dominating, this results in a contradiction and completes the proof.
\end{proof}
Construct a new allocation $Z$ starting at $Y$ and moving goods from agents in $N - i$ arbitrarily to $0$ till $|Z_j| = |W_j|$ for all $j \in N \setminus \{i\}$. In other words, let $Z_j$ be a size $|W_j|$ subset of $Y_j$ for all $j \in N -i$ and $Z_i = Y_i$. Due to \cref{lem:lorenz-dominance}, there will be no $j \in N - i$ where $|Z_j| < |W_j|$. Given the dummy agent $0$'s valuation function, this allocation is still non-redundant for the agents $N + 0$. Note that $|W_i| < |Z_i|$ and $|W_0| > |Z_0|$.
Using \cref{lem:babaioff-paths} on the allocations $W$ and $Z$, we get that there must be a transfer path from $i$ to $0$ in $W$. This stems from the fact that $0$ is the only agent in $S^{+}$. This transfer path contradicts the algorithm since we chose the iteration where $\texttt{flag}\xspace_i$ was set to $\texttt{true}\xspace$ implying that there is no path from $i$ to $0$ in $W$.
\end{proof}
\subsection{Computing Path Transfers}\label{subsec:path-transfers}
In this section, we provide a simple algorithm to compute path transfers. A general version of this method can be used to decide if a set is independent in a matroid union \citep{schrijver-book}. This approach is also used by \citet{Barman2021MRFMaxmin} whose notation we follow.
We define the {\em exchange graph} of an allocation $\cal G(X) = (G, E)$ as a directed graph defined on the set of goods $G$. Let $g \in X_j$ for some good $g$ and agent $j$. An edge exists from $g$ to some other good $g' \notin X_j$ if $v_j(X_j - g + g') = v_j(X_j)$. In other words, there is an edge from $g$ to $g'$ if the agent who owns $g$ can replace it with $g'$ with no loss to their utility. For the sake of our analysis, if the good $g$ is stolen from agent $j$, then that good can be replaced with $g'$ iff the edge $(g,g')$ exists.
In order to check if a transfer path exists from some agent $i$ to $0$, we construct the exchange graph $\cal G(X)$.
We then compute the set of goods which have a marginal gain of $1$ for the agent $i$ under the allocation $X$ i.e. $F_i(X) = \{g \in G\, |\, \Delta_i(X_i, g) = 1\}$. In the exchange graph, we find the shortest path (if there exists one) from $F_i(X)$ to goods in $X_0$; this can be done by adding a source node $s$ in the exchange graph with edges to all the goods in $F_i(X)$ and then using breadth-first search to find the shortest path from $s$ to $X_0$. From the path in the exchange graph, we can also determine exactly which goods to transfer backwards along the path to update the allocation --- if we have a path $(s, g_1, g_2, \dots, g_k)$, we transfer the good $g_k$ to the agent who has $g_{k-1}$, transfer $g_{k-1}$ to the agent who has $g_{k-2}$ and so on. Finally, we give $g_1$ to $i$; see \cref{algo:transfer}.
\begin{algorithm}
\caption{Computing Transfer Paths}
\label{algo:transfer}
\begin{algorithmic}
\Require An allocation $X$ and two agents $i$ and $j$
\State Construct the exchange graph $\cal G(X)$
\State Add a source node $s$ to $\cal G(X)$ with an edge to all the goods in $F_i(X)$
\State Find the shortest path from $s$ to $X_j$ using breadth-first search in $\cal G(X)$
\If{A path exists}
\State Return the path $(s, g_{i_1}, g_{i_2}, \dots, g_{i_k})$
\Else
\State Return \texttt{false}\xspace
\EndIf
\end{algorithmic}
\end{algorithm}
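The procedure above can be sketched in Python as follows. This is an illustrative implementation under simplifying assumptions, not the paper's reference code: the allocation is a dict mapping each agent (including the dummy agent $0$) to a set of goods, and each valuation is an arbitrary callable on sets of goods.

```python
from collections import deque

def shortest_transfer_path(X, v, i, j):
    """BFS in the exchange graph from F_i(X) to X_j.
    X: dict agent -> set of goods; v: dict agent -> valuation callable.
    Returns the good path [g_1, ..., g_k] or None if none exists."""
    owner = {g: a for a, bundle in X.items() for g in bundle}
    goods = set(owner)

    def swap_edges(g):
        # Edge g -> g': the owner of g can swap g for g' at no loss.
        a = owner[g]
        B = X[a]
        for gp in goods - B:
            if v[a]((B - {g}) | {gp}) == v[a](B):
                yield gp

    # Source layer: goods with unit marginal gain for agent i.
    frontier = [g for g in goods - X[i]
                if v[i](X[i] | {g}) == v[i](X[i]) + 1]
    parent = {g: None for g in frontier}
    q = deque(frontier)
    while q:
        g = q.popleft()
        if g in X[j]:               # reached j's bundle: rebuild the path
            path = []
            while g is not None:
                path.append(g)
                g = parent[g]
            return path[::-1]
        for gp in swap_edges(g):
            if gp not in parent:
                parent[gp] = g
                q.append(gp)
    return None
```

Transferring along a returned path $(g_1,\dots,g_k)$ means each $g_l$ replaces $g_{l-1}$ in its owner's bundle and $g_1$ goes to $i$, so agent $j$ loses exactly $g_k$.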
That the shortest path in the exchange graph from $F_i(X)$ to $X_0$ is a transfer path is well known in the matroid literature (albeit using different terminology). This result was first adapted by \citet[Lemma 1]{Barman2021MRFMaxmin} to fair allocation. The missing proofs in this section can be found in Appendix \ref{apdx:path-transfers}.
\begin{restatable}{lemma}{lempathtransfer}\label{lem:path-transfer-equivalence}
Given a non-redundant allocation $X$, the shortest path from $F_i(X)$ to $X_j$ in the exchange graph $\cal G(X)$ is a transfer path from $i$ to $j$ ($i \ne j$ and $i, j \in N+ 0$) in $X$ i.e. transferring goods along the shortest path results in an allocation where $i$'s value for its bundle goes up by $1$, $j$'s value for its bundle goes down by $1$ and all the other agents see no change in the value of their bundles.
\end{restatable}
\citet[Lemma 5]{Barman2021MRFMaxmin} also show that a result similar to \cref{lem:babaioff-paths} is true for paths in the exchange graph as well.
\begin{restatable}{theorem}{thmexchangegraph}\label{thm:exchange-path}
Let $X$ and $Y$ be two non-redundant allocations for the set of agents $N + 0$. Let $S^{-}$ be the set of all agents $i \in N + 0$ where $|X_i| < |Y_i|$, $S^{=}$ be the set of all agents $i \in N + 0$ where $|X_i| = |Y_i|$ and $S^{+}$ be the set of all agents $i \in N + 0$ where $|X_i| > |Y_i|$.
For any agent $i \in S^{-}$, there exists a path in the exchange graph from $F_i(X)$ to $X_k$ for some $k \in S^{+}$.
\end{restatable}
Armed with these two results, we are ready to show correctness, i.e., a path exists from $F_i(X)$ to $X_j$ in the exchange graph iff a transfer path exists from $i$ to $j$ in the allocation $X$.
\begin{theorem}\label{thm:path-correctness}
Given a non-redundant allocation $X$, a transfer path exists from agent $i$ to agent $j$ in $X$ if and only if \cref{algo:transfer} outputs a path. Furthermore, the path output by \cref{algo:transfer} is a transfer path.
\end{theorem}
\begin{proof}
The second statement is implied by \cref{lem:path-transfer-equivalence} so we only prove the first statement.
$(\Rightarrow)$ Assume \cref{algo:transfer} outputs a path. This implies there exists a path from $F_i(X)$ to $X_j$ in the exchange graph. From \cref{lem:path-transfer-equivalence}, this implies that there is a transfer path from $i$ to $j$.
$(\Leftarrow)$ Assume there is a transfer path from $i$ to $j$ in $X$. Let $Y$ be the allocation that arises from transferring goods along the transfer path. Apply \cref{thm:exchange-path} to allocations $X$ and $Y$ and the agent $i$. Agent $i$ is clearly in $S^{-}$ and the only agent in $S^{+}$ is $j$ from the definition of the transfer path. Therefore, there must exist a path from $F_i(X)$ to $X_j$ in $\cal G(X)$. This implies that \cref{algo:transfer} outputs a path.
\end{proof}
\subsection{Time Complexity Analysis}
We now analyze the time complexity of \cref{algo:yankee-swap}. Our analysis assumes the allocation $X$ is stored as a binary matrix where $X(i, g) = 1$ if and only if $g \in X_i$.
This allows us to assume checking if a good belongs to an agent, adding a good to a bundle, and removing a good from a bundle can be done in $O(1)$ time.
Our time complexity analysis is based on two simple observations. First, the loop in \cref{algo:yankee-swap} runs at most $n+m$ times. This is because at each round, we reduce the size of $X_0$ by $1$ or set the flag of some agent to $\texttt{true}\xspace$ --- the former happens at most $m$ times and the latter happens at most $n$ times; once a flag is set to \texttt{true}\xspace, it does not go back to \texttt{false}\xspace.
Second, \cref{algo:transfer} runs in $O(m^2(n + \tau))$ time. This can be seen by closely examining each step of \cref{algo:transfer}. Construction of the exchange graph can be done by examining each possible pair of goods $(g, g')$, finding the agent $i$ whose bundle contains $g$, and checking if $v_i(X_i - g + g') = v_i(X_i)$. This can trivially be done in $O(m^2 (n + \tau))$ time. This is the most expensive step of the algorithm.
Adding a source node and the required edges takes $O(m \tau)$ time; we only need to check if $\Delta_i(X_i, g) = 1$ for each good $g$. Running breadth first search and finding the shortest path takes $O(m^2)$ time since there are $O(m)$ nodes in the graph. Transferring along the path takes $O(nm)$ time --- we simply iterate through the path transferring each good to the owner of the previous good; finding the owner takes $O(n)$ time.
Combining the two observations, we have the following result.
\begin{theorem}\label{thm:runtime}
\cref{algo:yankee-swap} runs in $O(m^2(n + \tau)(m+n))$ time.
\end{theorem}
\subsection{Comparison to \citet{Babaioff2021Dichotomous}}\label{subsec:comparison}
The algorithm to compute Lorenz dominating allocations by \citet{Babaioff2021Dichotomous} works as follows: they start with a \text{MAX-USW}{} allocation and then repeatedly check if there exists a path transfer that improves the objective function $\sum_{i \in N} \big (v_i(X_i) + \frac{\pi(i)}{n^2}\big )^2$. They check for and compute path transfers using matroid intersection algorithms. Specifically, they use algorithms for the matroid intersection problem to compute a \text{MAX-USW}{} allocation for the modified problem instance where each agent $i$'s valuation function is upper bounded by some pre-defined value $k_i$, i.e., $v_i^{new}(S) = \min\{v_i(S), k_i\}$. Note that computing the value of a bundle with respect to this modified valuation function still takes $O(\tau)$ time; therefore, from a time complexity perspective, computing a \text{MAX-USW}{} allocation for the valuation profile $v$ is equivalent to computing a \text{MAX-USW}{} allocation for the valuation profile $v^{new}$.
We first analyze the time complexity of computing a \text{MAX-USW}{} allocation and then use it to analyze the time complexity of the algorithm by \citet{Babaioff2021Dichotomous}. Our analysis uses the matroid intersection algorithm by \citet{Chakrabarty2019MatroidIntersection} which is, to the best of our knowledge, the best algorithm for the matroid intersection problem with access to a rank oracle. The proof can be found in \cref{apdx:comparison}.
\begin{restatable}{lemma}{lemmatintersection}\label{lem:mat-intersection-time}
When agents have MRF valuations, computing a \text{MAX-USW}{} allocation takes $O(n^2 m^{3/2} (m+\tau) \log{nm})$ time via matroid intersection.
\end{restatable}
Note immediately that when $m = \Theta(n)$, our algorithm computes a \text{MAX-USW}{} allocation faster than the matroid intersection based approach. We now present the runtime of the algorithm by \citet{Babaioff2021Dichotomous}.
\begin{theorem}\label{thm:babaioff-runtime}
The algorithm by \citet{Babaioff2021Dichotomous} computes Lorenz dominating allocations in $\cal O(n^6 m^{7/2}(m + \tau)\log{nm})$ time.
\end{theorem}
\begin{proof}
\citet{Babaioff2021Dichotomous} show that their algorithm computes a \text{MAX-USW}{} solution at most $O(n^4m^2)$ times. Combining this with \cref{lem:mat-intersection-time}, we get the required time complexity.
\end{proof}
Indeed, our algorithm is significantly faster than that of \citet{Babaioff2021Dichotomous}. This speed-up comes mainly from two places. First, even though \citet{Babaioff2021Dichotomous} use transfer paths, our method of computing them is faster. Second, by carefully choosing which transfer paths to check for, we check far fewer paths. Combining these two factors, when $m = \Theta(n)$, our algorithm is faster by a factor of $O(n^{13/2} \log{n})$.
\section{Conclusions and Future Work}\label{sec:conclusion}
In this work, we analyse the Yankee Swap algorithm. Our results are surprising: when agents have binary submodular valuations, Yankee Swap is a fast method to output fair and efficient allocations.
This work suggests the existence of several simple algorithms that are as powerful as complex algorithms present in the literature.
Indeed, several problems in fair allocation are yet to admit simple approaches --- the computation of EF1 and Pareto optimal allocations when agents have additive valuations is one such problem.
Other examples include the computation of an EFX allocation when there are only three agents and the computation of a welfare maximizing EF1 allocation.
Another line of future work is the analysis of Yankee Swap in other fair allocation problems. We conjecture that, when agents have entitlements (or priorities) \citep{chakraborty2021weighted}, the Yankee Swap algorithm modified to break ties using entitlements can be used to compute a weighted leximin allocation. We also believe that the Yankee Swap can be used in fair chore allocation problems when agents have dichotomous valuations.
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
\begin{figure*}[ht]
\vspace{-4mm}
\includegraphics[width=\textwidth]{figures/main.pdf}
\caption{\textbf{System Overview} Our system optimization process can be divided into two stages. In the first stage, the time-pose function learns the unknown depth camera poses from an implicit functional relationship between time and camera poses in the RGB sequence. In the second stage, the complementary poses are used to train an implicit 3D scene representation with depth supervision, and the estimated poses are simultaneously refined.}
\label{fig:main}
\end{figure*}
Recently, Neural Radiance Fields \cite{mildenhall_nerf_2020} have been successfully extended to represent large-scale scenes by methods like Block-NeRF \cite{tancik_block-nerf_2022}, UrbanNeRF \cite{rematas_urban_2022}, Mega-NeRF \cite{turki_mega-nerf_2022} and BungeeNeRF \cite{xiangli_bungeenerf_2022}. Due to their revolutionary view synthesis quality, these algorithms have the potential to serve as mapping tools for next-generation visual navigation systems. We envision a future in which SLAM algorithms \cite{zhu_nice-slam_2022} estimate the poses of agents with unprecedented accuracy, based on the differences between sensory inputs and photorealistic images rendered from radiance fields. However, these methods have not yet systematically addressed the issue of using depth as a regularization, which has been shown to be an effective technique for NeRF \cite{deng_depth-supervised_2022, roessle_dense_2022}.
We note that using RGB-D signals as supervision for large-scale NeRF training is not trivial due to practical issues. Compact synchronized RGB-D cameras like Microsoft Kinect \cite{zhang_kinect_2012} or Intel Realsense \cite{zabatani_intel_2020} have limited sensing range and are thus not applicable in smart transportation applications like autonomous driving or drone delivery. Using asynchronous RGB and depth sensors, on the other hand, brings substantially more complexity, as the transformation matrices between RGB and depth frames need to be estimated. We compare our problem of interest with other existing problems of widespread concern in figure \ref{fig:teaser}: (a) RGB-D calibration methods \cite{jeong_self-calibrating_2021} estimate the transformation relationship between the depth camera and the RGB camera, allowing point-to-point correspondence between depth and RGB maps. (b) Given continuously acquired RGB or depth maps, RGB-based SLAMs \cite{campos_orb-slam3_2021, engel_direct_2018, mur-artal_orb-slam_2015} and depth-based SLAMs \cite{niesner_real-time_2013, xu_multi-scale_2018} estimate the transfer matrices $\{\mathbf{T}_{ij}\}$ between adjacent frames. (c) Our problem of interest estimates the camera poses $\{\mathbf{T}_{i}\}$ of the depth sequence by learning the trajectory prior from the timestamp-pose pairs of the RGB sequence.
Inspired by the success of recent NeRF methods that optimize the radiance field and camera intrinsic/extrinsic parameters jointly \cite{jeong_self-calibrating_2021, lin_barf_2021}, we aim to develop a method that self-calibrates the mismatch between RGB-D frames automatically while optimizing the radiance field, which we name \textbf{AsyncNeRF}. We notice that there exists a natural solution to this setting: treating RGB and depth frames separately as captured by cameras with a missing modality at certain timestamps. As such, prior methods like BARF \cite{lin_barf_2021} can be readily extended to this scenario. However, this baseline method fails to leverage a useful domain-specific prior in this problem: the RGB and depth cameras actually go over the same continuous underlying trajectory.
To this end, we propose to use a novel time-pose function to model this prior, which maps a timestamp to a 6-DoF camera pose. Just like the way that distance and radiance fields approximate functions with 3D/5D inputs, this time-pose function approximates a function that takes the 1D timestamp as input and outputs a transformation in the SE(3) manifold. In other words, this time-pose function is also an implicit representation network. We combine it with a city-scale radiance field to form a cascaded architecture as shown in figure \ref{fig:main}. As such, training this architecture can simultaneously build a city-scale radiance field for scene mapping and calibrate the mismatch between RGB-D frames.
To summarize, we have the following contributions:
\begin{itemize}
\item We formalize a new important problem: training city-scale neural radiance fields from asynchronous RGB-D sequences, which is deeply rooted in practical issues encountered in real-world applications.
\item We identify an important domain-specific prior in this problem: RGB-D frames are sampled from the same underlying trajectory. We instantiate this prior into a novel time-pose function and develop a cascaded implicit representation network.
\item In order to systematically study the problem, we build a photorealistically rendered synthetic dataset that mimics different types of mismatch. \item Through comprehensive benchmarking on this new dataset, we demonstrate that our method can improve mapping performance over strong prior arts. We release our codes, data, and models.
\end{itemize}
\section{Related Works}
\label{sec:related}
\input{related_works.tex}
\section{Method}
\label{sec:method}
Our system input consists of a set of RGB images $\{\mathcal{I}_i\}_{i=1}^{N_{c}}$, a set of depth maps $\{\mathcal{D}_i\}_{i=1}^{N_{d}}$, a set of camera poses $\{\mathbf{T}_i^c\}_{i=1}^{N_{c}}$ that are synchronized in time with the RGB images, and the EXIF information from which we obtain the timestamps $\{t_i\}_{i=1}^{N_c+N_d}$ of the images and the corresponding camera intrinsics. The RGB images and depth maps are both captured with a drone on the same trajectory, but they are not necessarily synchronized in terms of acquisition time.
The problem of interest can be formulated as a two-stage optimization problem, as shown in Fig. \ref{fig:optimization_flow}.
First, we predict the initial values of the camera poses $\{\mathbf{\hat T}_j^d\}$ corresponding to the depth maps using an implicit functional relationship (shown in equation \ref{eq:implicit-time-pose}) between the capture time and the camera poses from the RGB images.
Second, we learn the scene representation using posed RGB images, and simultaneously optimize the inaccurate initial values of the depth map poses input in the previous stage.
\begin{equation}
\mathcal{T}_\Theta:\quad t_i\to\ \mathbf{\hat T}_i = [\ x_i,\ q_i\ ],
\label{eq:implicit-time-pose}
\end{equation}
where $t_i$ is the input of this function, which is the timestamp, $ \mathbf{\hat T}_i$ denotes the output camera pose represented by a translation vector $x_i$ and a rotation vector $q_i$, and $\Theta$ is the function parameters.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{figures/optimization_flow.pdf}
\caption{\textbf{Optimization Pipeline.} (i) the time-pose function learns the embedded trajectory prior from the RGB sequence and predicts the depth camera poses; (ii), (iii) the ground-truth RGB camera poses and the estimated depth camera poses are used in learning the 3D scene representation, which generates observed RGB images and depth maps at novel perspectives (iv).}
\label{fig:optimization_flow}
\end{figure}
We introduce in section \ref{subsec:method-timepose} and \ref{subsec:timepose-optimize} a method of generating initial poses of depth maps using an implicit time-pose function of the trajectory prior. The functional relationship is learned from the time-pose pairs in the RGB sequence.
In section \ref{subsec:barf} we introduce our method for mapping and optimizing the predicted poses simultaneously.
\subsection{Time-Pose Function}
\label{subsec:method-timepose}
In this part, we introduce the time-pose function that estimates the camera poses.
We represent the camera trajectory as an implicit time-pose function whose input is a timestamp, and whose output is a 6-DoF camera pose.
The pose output consists of a 3D translation vector $\hat x_i$ and a 4-D rotation vector represented as a quaternion $\hat q_i$.
\noindent\textbf{Network Overview}
The time-pose function is approximated with a 1-dimensional multi-resolution hash grid $\{\mathcal{G}_\theta^{(l)}\}_{l=1}^L$, followed by an MLP structure $\mathcal{F}_\Theta$.
The hash grid consists of $L$ levels of separate feature grids with trainable hash encodings\cite{muller_instant_2022}.
When querying the camera pose $\mathbf{\hat T}_i$ for an arbitrary timestamp $t_i$ that is in the interval of the timestamps, we sample the hash encodings from each grid layer and perform quadratic interpolation on the extracted encodings to obtain a feature vector $\mathcal V_i$.
After obtaining the interpolated feature vector, a shared MLP is used to process the input, whose output is then fed into two separated fully connected layers to predict the output translation $\hat x_i$ and rotation $\hat q_i$ vector respectively.
The forward pass can be expressed in the following equations:
\begin{align}
\mathcal V_i &= \mathcal{F}_\Theta\left(\{\text{interp}(\text{h}(t_i, \pi_l),\ \mathcal G_{\theta}^{(l)})\}_{l=1}^L\right), \\
\mathbf{\hat T}_i &= [\hat x_i, \hat q_i] = \left[\,l_\text{trans}(\mathcal V_i, \Theta_\text{trans}),\ l_\text{rot}(\mathcal V_i, \Theta_\text{rot})\,\right],
\end{align}
where $\text{interp}$ denotes the interpolation operator, $\text{h}$ is the hash function parameterized by $\pi_l$, and $l_\text{trans}, l_\text{rot}$ are the two fully connected layers, where $\Theta_\text{trans}, \Theta_\text{rot}$ represent their network parameters respectively.
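As an illustration of the lookup just described, here is a minimal pure-Python sketch of a 1-D multi-resolution hashed feature grid. It is a simplification under assumptions: the hash function is a toy stand-in for the Instant-NGP hash, linear interpolation replaces the paper's quadratic interpolation, and the MLP head is omitted. `tables` is a list of levels, each a list of feature vectors (the trainable hash-table entries).

```python
def hash_index(level, cell, table_size):
    # Toy stand-in for a spatial hash over grid cells (1-D case).
    return (cell * 2654435761 ^ level * 805459861) % table_size

def query_feature(t, tables, base_res=4, growth=2.0):
    """Look up and linearly interpolate per-level hashed features for a
    normalized timestamp t in [0, 1]; returns the concatenated vector."""
    feat = []
    for l, table in enumerate(tables):
        res = int(base_res * growth ** l)   # grid cells at this level
        x = t * res
        c0 = min(int(x), res - 1)           # left neighbor cell
        w = x - c0                          # interpolation weight
        f0 = table[hash_index(l, c0, len(table))]
        f1 = table[hash_index(l, c0 + 1, len(table))]
        feat.extend((1 - w) * a + w * b for a, b in zip(f0, f1))
    return feat
```

When the number of cells at a level exceeds the table size, distinct cells collide in the hash table, exactly as in the multi-resolution hash encoding the paper builds on.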
\noindent\textbf{Depth-pose Prediction}
Since both the depth maps and the RGB images are collected by the same drone in the same flight, they have an almost identical trajectory in temporal and spatial terms except for the difference in the placement of the two sensors on the aircraft.
Therefore we can directly predict the poses corresponding to the depth sequence timestamps using the implicit time-pose function we learned in the RGB sequence with a pre-defined pose transformation $\textbf{T}_\text{sensor}$ between sensor positions.
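Concretely, the depth camera pose at a queried timestamp can be obtained by composing the predicted pose with the fixed rig offset $\mathbf{T}_\text{sensor}$. A sketch in homogeneous $4\times 4$ matrix form (a hypothetical helper; in practice the offset comes from the rig calibration):

```python
def compose(T_a, T_b):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(T_a[i][k] * T_b[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]

def depth_pose(T_rgb, T_sensor):
    # Depth camera pose = queried RGB camera pose composed with rig offset.
    return compose(T_rgb, T_sensor)
```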
\subsection{Optimizing Time-Pose Function}
\label{subsec:timepose-optimize}
The most common choices for representing rotation when optimizing camera poses are rotation matrices\cite{yen-chen_inerf_2021} or Euler angles\cite{su2015render, tulsiani2015viewpoints}. However, these are not continuous representations of rotation\cite{zhou2019continuity}, since their representation spaces are not homeomorphic to SO(3). We choose unit quaternions as our raw representation because arbitrary 4-D values are easily mapped to legitimate rotations by normalizing them to unit length\cite{kendall2015posenet}.
To optimize the time-pose function, we propose the following objective function:
\begin{equation}
\mathcal{L}=\lambda_\text{trans}\mathcal{L}_\text{trans}+\lambda_\text{rot}\mathcal{L}_\text{rot}+\lambda_\text{speed}\mathcal{L}_\text{speed},
\end{equation}
where $\lambda_\text{trans}, \lambda_\text{rot}, \lambda_\text{speed}$ are the weighting parameters.
\noindent\textbf{Direct Optimization of Translation and Rotation}
We directly optimize the translation and the rotation vector in the Euclidean space by evaluating the mean square error (MSE) of the estimated camera poses and the ground-truth pose vectors:
\begin{align}
\mathcal{L}_\text{trans}&=\text{MSE}(x(\{t_i\}),\hat{x}(\{t_i\}))=\frac 1 n \sum_{i=1}^n(x_i-\hat{x}_i)^2,\\
\mathcal{L}_\text{rot}&=\text{MSE}(q(\{t_i\}),\hat{q}(\{t_i\}))=\frac 1 n \sum_{i=1}^n(q_i-\hat{q}_i)^2.
\end{align}
Since $x$ and $q$ are in different units, the scaling factors $\lambda_\text{trans}$ and $\lambda_\text{rot}$ play an important role in balancing the losses. To prevent translation and rotation from influencing each other in training and to tap into possible mutual facilitation, we make the weighting factors learnable\cite{kendall2017geometric}.
\noindent\textbf{Gradient Optimization of Motion Speed}
Observing that the time-pose function is essentially a function of displacement and angular displacement with respect to time, we can use the average velocity calculated from the ground-truth camera pose to supervise the gradient of the network output.
Since the velocity variation is small and the angular velocity variation is relatively larger in the scenes captured by the drone, only the average velocity is used to supervise the neural network:
\begin{equation}
\mathcal{L}_\text{speed}=\text{MSE}(v(t_i), \hat{v}(t_i))=\frac 1 n \sum_{i=1}^n(v(t_i)-\frac{\partial \hat x}{\partial t}(t_i))^2,
\end{equation}
where $v(t_i)=\left.\frac{\partial x}{\partial t}\right|_{t=t_i}\approx\frac{x_i-x_{i-1}}{t_i-t_{i-1}}$.
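The combined objective can be sketched as follows. This toy Python version is an illustration under assumptions: fixed scalar weights stand in for the learnable weighting of Kendall et al., and the network gradient $\partial \hat x/\partial t$ is approximated by the same finite difference used for the ground-truth average velocity.

```python
def time_pose_loss(ts, gt_x, gt_q, pred_x, pred_q,
                   w_trans=1.0, w_rot=1.0, w_speed=1.0):
    """Combined time-pose objective on a batch of n timestamps.
    gt_x/pred_x hold 3-D translations, gt_q/pred_q unit quaternions."""
    n = len(ts)

    def mse(A, B):
        # Sum squared error over vector components, averaged over the batch.
        return sum(sum((a - b) ** 2 for a, b in zip(pa, pb))
                   for pa, pb in zip(A, B)) / n

    l_trans = mse(gt_x, pred_x)
    l_rot = mse(gt_q, pred_q)
    # Average velocities over consecutive frames supervise the trajectory
    # gradient (finite-difference approximation on both sides).
    l_speed = 0.0
    for i in range(1, n):
        dt = ts[i] - ts[i - 1]
        v_gt = [(a - b) / dt for a, b in zip(gt_x[i], gt_x[i - 1])]
        v_pr = [(a - b) / dt for a, b in zip(pred_x[i], pred_x[i - 1])]
        l_speed += sum((a - b) ** 2 for a, b in zip(v_gt, v_pr))
    l_speed /= max(n - 1, 1)
    return w_trans * l_trans + w_rot * l_rot + w_speed * l_speed
```

A perfect prediction drives all three terms to zero, while a constant translation offset is penalized only by the translation term, since the finite-difference velocities remain unchanged.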
\begin{figure*}[htb]
\centering
\includegraphics[width=0.95\textwidth]{figures/Dataset-city.pdf}
\caption{Visualization of our proposed \textbf{AUS dataset}.}
\label{fig:data-city}
\end{figure*}
\subsection{Learning Large-scale Implicit Fields with Joint Time-Pose Function Error Compensation}
\label{subsec:barf}
While the time-pose function provides a great initial value for the mapping stage, there still exists noticeable error in some of the outlying frames.
In this section, we describe how we perform simultaneous mapping and pose optimization, which compensates for the error of the time-pose function.
We partition the city scene map into a series of equal-sized blocks in terms of spatial scope, and each block learns its scene representation with an implicit field separately, while the camera poses $\hat T_i^d$ corresponding to the depth map is also optimized in the training process.
\noindent\textbf{Scene Representation}
We represent the scene model as a series of implicit functions mapping from spatial point coordinates and viewing directions to radiance values, $\{f_\text{MLP}^{(i) }\}_{i=1}^{N_x\times N_y}$, where $N_x, N_y$ denote the spatial grid size.
Each implicit function represents a geographic region with $\textbf{x}_i^\text{centroid}$ as its centroid.
\begin{equation}
\vspace{-1mm}
f^\text{(k)}_\text{MLP}(\gamma^{(\alpha)}(\mathbf{x}_\text{pts}),\mathbf{d},l^{(a)})\to(\hat c,\sigma),
\end{equation}
\begin{equation*}
\text{where }\text k = \mathop{\arg\min}\limits_{j}||\mathbf{x}_\text{pts}-\mathbf{x}_{j}^\text{centroid}||_2.
\vspace{-1mm}
\end{equation*}
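This nearest-centroid routing of a query point to its block can be sketched as follows; the centroid layout is an illustrative assumption:

```python
import numpy as np

def nearest_block(x_pts, centroids):
    """k = argmin_j || x_pts - x_j^centroid ||_2 : route a query point
    to the implicit function whose centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - x_pts, axis=1)))

# Illustrative 2x2 grid of block centroids with unit spacing.
centroids = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
k = nearest_block(np.array([0.9, 0.1, 0.0]), centroids)
```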
For view synthesis, we adopt volume rendering techniques that are proven to be powerful. To be specific, we sample a set of points for each emitted camera ray in a coarse-to-fine manner \cite{mildenhall_nerf_2020} and accumulate the radiance and the ray-depth along the corresponding ray to calculate the rendered color $\mathcal{\hat I}$ and depth $\mathcal{\hat D}$.
To obtain the radiance of a spatial point $\mathbf{x}_\text{pts}$, we use the nearest scene model for prediction. A set of per-image appearance embedding $l^{(a)}$ \cite{martin-brualla_nerf_2021} is also optimized simultaneously in the training.
\begin{align}
\mathcal {\hat I}(\mathbf o, \mathbf d)&=\int_\text{near}^\text{far}T(t)\sigma^{(k)}(\mathbf{x}(t))\cdot c^{(k)}(\mathbf{x}(t),\mathbf{d})\text dt,\\
\mathcal{\hat D}(\mathbf o, \mathbf d)&=\int_\text{near}^\text{far}T(t)\sigma^{(k)}(\mathbf{x}(t))\cdot t\text dt,
\end{align}
where $\textbf{o}$ and $\textbf{d}$ denote the position and orientation of the sampled ray, $\textbf{x}(t) = \textbf{o} + t\textbf{d}$ represents the sampled point coordinates in the world space, and $T(t)=\exp\left(-\int_\text{near}^t\sigma^{(k)}(\textbf{x}(s))\text{d}s\right)$ is the accumulated transmittance.
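A minimal numerical sketch of these rendering integrals, using the standard discretization of \cite{mildenhall_nerf_2020} with per-interval opacities $\alpha_i = 1-\exp(-\sigma_i\delta_i)$; the sample count and the toy density field are illustrative:

```python
import numpy as np

def render_ray(t, sigma, color):
    """Quadrature for the color and depth integrals: with interval
    lengths delta_i, opacity alpha_i = 1 - exp(-sigma_i * delta_i) and
    transmittance T_i = prod_{j<i} (1 - alpha_j), the weights
    w_i = T_i * alpha_i give color = sum w_i c_i, depth = sum w_i t_i."""
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-sigma * delta)
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    w = T * alpha
    return (w[:, None] * color).sum(axis=0), (w * t).sum()

t = np.linspace(0.1, 2.0, 64)                 # sample depths along the ray
sigma = np.where(t > 1.0, 50.0, 0.0)          # opaque surface near t = 1
color = np.tile([1.0, 0.0, 0.0], (64, 1))     # red radiance everywhere
rgb, depth = render_ray(t, sigma, color)
```

The ray saturates just past the surface, so the rendered depth lands near $t=1$ and the rendered color approaches pure red.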
\noindent\textbf{Optimization}
We jointly optimize the inaccurate camera poses and the scene mapping:
When fitting the parameters $\Theta_\text{MLP}^{(k)}$ of the scene representation, the estimated depth camera poses $\hat T_i\in SE(3)$ (with translation $t\in \mathbb{R}^3$ and rotation $R\in SO(3)$) are simultaneously optimized on the manifold:
\begin{equation}
\hat{\Theta}_\text{MLP},\mathbf{\hat{T}}=\underset{\textbf{T} \in SE(3), \Theta}{\operatorname{argmin}} \mathcal{L}(\textbf{T},{\Theta_\text{MLP}} \mid {\textbf{T}_0},\{\mathcal{I}_i\},\{\mathcal D_i\}),
\end{equation}
where $\mathcal{L}$ is the objective function, and $\textbf{T}_0$ is the initial pose input from the time-pose function.
To train the implicit representation to produce realistic RGB renderings and accurate depth estimates, we use the objective function in Eq.~\ref{eq:barf_loss}.
\begin{equation}
\label{eq:barf_loss}
\mathcal{L} = \lambda_\text{color}\text{MSE}(\mathcal{I}, \mathcal{\hat I})+\lambda_\text{depth}(\alpha_0)\text{MSE}(\mathcal{D}, \mathcal{\hat D}),
\end{equation}
where $\lambda_\text{color}$ and $\lambda_\text{depth}(\alpha_0)$ are weighting hyper-parameters for the color and depth losses; the depth loss weight grows gradually from zero as training progresses ($\alpha_0$ denotes training progress).
To compensate for the error in the poses extracted from the time-pose function, we further optimize the estimated poses $\mathbf{\hat T}$ by applying a set of trainable pose correction terms $\{\xi_x^{(i)}, \xi_q^{(i)}\}$ (where $\xi_x^{(i)}\in \mathbb{R}^3, \xi_q^{(i)}\in\mathfrak{so}(3)$) to the initial poses $\{\hat x_{0,i}, \hat q_{0,i}\}$ in each iteration:
\begin{align}
\hat{q}_i' &= \text{Exp}(\xi_q^{(i)})\cdot \hat q_{0,i},\\
\hat{x}_i' &= \xi_{x}^{(i)} + \hat x_{0,i},
\end{align}
where $\text{Exp}(\cdot)$ is the exponential mapping that uses the Rodrigues formula \cite{Rodrigues1840} to map rotation vectors in $\mathfrak{so}(3)$ to rotation matrices.
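The exponential map can be sketched directly from the Rodrigues formula (this is the textbook formula, not our exact implementation):

```python
import numpy as np

def so3_exp(xi):
    """Rodrigues formula: R = I + sin(th)/th * K + (1 - cos(th))/th^2 * K^2,
    with K the skew-symmetric matrix of xi and th = ||xi||."""
    th = np.linalg.norm(xi)
    if th < 1e-12:
        return np.eye(3)                       # Exp(0) = I
    K = np.array([[0.0, -xi[2], xi[1]],
                  [xi[2], 0.0, -xi[0]],
                  [-xi[1], xi[0], 0.0]])
    return np.eye(3) + (np.sin(th) / th) * K \
        + ((1.0 - np.cos(th)) / th ** 2) * (K @ K)

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # quarter turn about z
```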
\noindent\textbf{Dynamic Low-pass Filter}
As stated in BARF\cite{lin_barf_2021}, when solving the pose calibration problem, smoother signals predict more consistent displacements than complex signals, which can otherwise easily lead to sub-optimal optimization results.
The positional encoding in the traditional NeRF rendering significantly improves the synthesized view in terms of high-frequency details.
Applying a low-pass filter to weaken the effect of positional encoding therefore has a smoothing effect.
Thus, a dynamic low-pass filter is used in Async-NeRF during the joint-optimization stage to help optimize inaccurate depth frame poses.
\input{giant_table.tex}
\section{Experiments}
\label{sec:exp}
In this section, we show the effectiveness of our two-stage pipeline. We first introduce the experimental setup in sections \ref{subsec:datasets} and \ref{subsec:implementation-details}.
In section \ref{subsec:results}, we qualitatively and quantitatively evaluate our proposed methods.
\subsection{Datasets}
\label{subsec:datasets}
We evaluate our method on our proposed Asynchronous Urban Scene (AUS) dataset (Figure \ref{fig:data-city}). The AUS dataset is generated from a simulator: with loaded scene models, the simulator creates asynchronous RGB-D sequences that mimic real-world asynchronous captures.
The proposed dataset consists of 2 realistic scenes and 4 virtual scenes. The former uses the New York and San Francisco city scene models provided by Kirill Sibiriakov\cite{ArtStation}: AUS-NewYork covers a $250\times 150\,\mathrm{m}^{2}$ area with many detailed buildings, and AUS-SanFrancisco covers a $500\times 250\,\mathrm{m}^{2}$ area near the Golden Gate Bridge. The latter uses the simulation model files provided in the UrbanScene3D dataset \cite{DBLP:journals/corr/abs-2107-04286}.
We make use of the physics engine of Unreal Engine together with AirSim\cite{DBLP:journals/corr/ShahDLK17} to model physical properties, and use three types of trajectories: a Zig-Zag trajectory, a more complex planned random trajectory, and a complex manually controlled trajectory. In the virtual scenes we only provide manually controlled trajectories, since the scene sizes are relatively small.
\begin{figure}[ht]
\centering
\includegraphics[width=0.46\textwidth]{figures/offset.pdf}
\caption{Visual demonstration of the sampling strategies. We sample one RGB frame from every 5 frames of raw RGB-D data. We calculate the depth frame sampling time by adding different offsets to the RGB frame timestamps. Two types of offsets are included in the AUS dataset: (1) fixed offsets of 10\%-50\% of the time interval between two adjacent RGB frames (5 traces in total); (2) unfixed offset by random.}
\label{fig:offset}
\end{figure}
In real scenarios, RGB cameras and LiDAR are usually not aligned in time, so we sample raw RGB-D sequences in the simulator at a higher frequency (50 fps) and re-sample them with different starting frames to obtain asynchronous RGB-D sequences. To further increase realism, we add random perturbations to the fixed offsets. The sampling method is shown in Figure \ref{fig:offset}.
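The re-sampling strategy can be sketched as follows; the stride of 5 and the 50 fps rate follow the text, while the function signature and jitter magnitude are illustrative assumptions:

```python
import random

def resample_async(n_raw, raw_fps=50.0, stride=5, offset_frac=0.3, jitter=0.0):
    """Keep every `stride`-th raw frame as an RGB frame; place each depth
    frame at the RGB timestamp plus `offset_frac` of the RGB inter-frame
    interval, optionally perturbed by uniform random jitter."""
    dt = 1.0 / raw_fps
    interval = stride * dt
    rgb_t = [i * interval for i in range(n_raw // stride)]
    depth_t = [t + interval * (offset_frac + random.uniform(-jitter, jitter))
               for t in rgb_t]
    return rgb_t, depth_t

random.seed(0)
rgb_t, depth_t = resample_async(50, offset_frac=0.3, jitter=0.0)
```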
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{figures/qualitative.pdf}
\caption{\textbf{Qualitative Results.} Async-NeRF can render photo-realistic novel views and the best depth estimation results.}
\label{fig:qualitative}
\end{figure*}
\subsection{Implementation Details}
\label{subsec:implementation-details}
\noindent\textbf{Time-Pose Function}
In our experiments, we maintain a multi-resolution feature grid with $L=2$ layers, concatenate the interpolated feature vectors from the different grid layers, and process them through a shallow MLP of 5 layers with 1024 neurons each.
We use a spatial hash function $h^{(l)}(x) = \left\lfloor x\right\rfloor \oplus \pi_l\ \text{mod}\ N_l$ for each layer of the grid, where $\oplus$ denotes the bit-wise XOR operation, $N_l$ is the number of feature vectors in layer $l$, and $\pi_l$ is the selected large prime number. In our experiments, we set $\pi_0 = 1, \pi_1 = 2,654,435,761$.
We use the Adam\cite{Kingma2015AdamAM} optimizer with an initial learning rate of $5\times10^{-4}$ decaying exponentially to $5\times10^{-5}$.
The initial weighting parameters for optimizing the time-pose function are set to a default of $\lambda_\text{trans}=1, \lambda_\text{rot}=1$, and $\lambda_\text{speed}=10^{-3}$.
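The per-layer spatial hash can be sketched directly from its definition; the table size and query values are illustrative:

```python
import math

PRIMES = (1, 2_654_435_761)   # pi_0, pi_1 as in our experiments

def hash_index(x, layer, n_vectors):
    """1-D spatial hash h(x) = (floor(x) XOR pi_l) mod N_l for a scalar
    coordinate x (here a timestamp), layer l and table size N_l."""
    return (math.floor(x) ^ PRIMES[layer]) % n_vectors

idx = hash_index(12.7, layer=1, n_vectors=1024)
```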
\noindent\textbf{Joint Optimization}
We follow the spatial partitioning method from Mega-NeRF\cite{turki_mega-nerf_2022}. We train all models for 500K iterations and render a batch of 1024 rays at each step, with a learning rate of $5\times 10^{-4}$ decaying to $5\times 10^{-5}$ for the scene representation networks, and $1\times 10^{-6}$ decaying to $1\times10^{-7}$ for the pose optimization.
The hyper-parameters for each loss in this stage are set to $\lambda_\text{color} = 1$, and $\lambda_\text{depth}(\alpha_0) = 10^{-3}\cdot\alpha_0$.
\subsection{Results}
\label{subsec:results}
We evaluate our proposed Async-NeRF against NeRFW\cite{martin-brualla_nerf_2021} and city-scale Mega-NeRF \cite{turki_mega-nerf_2022}.
We present the quantitative results in table \ref{tab:joint}.
\noindent\textbf{Time-Pose Function}
We evaluate the pose error to demonstrate the ability of the time-pose function to localize from time-pose sequences. As shown in the table, our method achieves an average pose accuracy of better than $1.5\,m$ in translation and $2^\circ$ in rotation.
\noindent\textbf{Mapping}
We use the standard metrics for novel view synthesis and depth estimation. For view synthesis, we report PSNR, SSIM, and the VGG implementation of LPIPS \cite{zhang_unreasonable_2018}. For depth estimation, we report RMSE, RMSE log, $\delta_1$, $\delta_2$, and $\delta_3$.
We also present the mapping results qualitatively in Figure \ref{fig:qualitative}. Our method synthesizes photo-realistic novel views with quality comparable to recent state-of-the-art methods and significantly outperforms current RGB-based methods on depth estimation.
\subsection{Ablation Studies}
\label{subsec:ablation}
\noindent\textbf{Time-Pose Function Network Structure}
We evaluate our methods against different network structures on localization accuracy:
(a) \textbf{Pure MLP}: directly processes the input timestamps with an MLP.
(b) \textbf{1-D Feature Grid}: maintains one feature vector for each second of the timestamp span.
(c) \textbf{Ours}: our proposed 1-D multi-resolution hash grid with multiple layers of resolution.
The results (Table \ref{tab:ablation-network}) show that our proposed multi-resolution structure outperforms the other network structures in accuracy.
\noindent\textbf{Ablation of the Speed Optimization}
We compare the localization accuracy and optimization time of our method with and without the gradient optimization of motion speed. Quantitative results are listed in Table \ref{tab:ablation-network}.
\begin{table}[ht]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllll}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{2}{c}{rotation $^\circ$} & \multicolumn{2}{c}{translation $m$} \\ \cline{2-5}
\multicolumn{1}{c}{} & mean & median & mean & median \\ \hline
MLP & 26.62 & 17.53 & 8.23 & 7.5 \\
Feature Grid & 15.86 & 14.56 & 8.53 & 7.48 \\
Ours (L=1) & 24.24 & 12.99 & 9.2 & 8.01 \\
Ours w/o speed constraint & 12.96 & 11.21 & 19.95 & 12.28 \\
Ours & \textbf{11.36} & \textbf{11.17} & \textbf{6.29} & \textbf{4.03} \\ \hline
\end{tabular}%
}
\caption{Result of the ablation on different network structures and the use of speed optimization}
\label{tab:ablation-network}
\end{table}
\begin{table}[ht]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllllll}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{Scene}} & \multicolumn{2}{c}{Ours} & \multicolumn{2}{c}{Mega-NeRF} & \multicolumn{2}{c}{Mega-NeRF-Depth} \\ \cline{2-7}
\multicolumn{1}{c}{} & PSNR & RMSE & PSNR & RMSE & PSNR & RMSE \\ \hline
NY & 24.24 & 5.93 & 24.025 & 42.15 & 19.701 & 15.94 \\
SF & 22.6985 & 7.26 & 19.9957 & 32.1686 & 19.0666 & 11.3864 \\
Bridge & 29.0644 & 26.55 & 27.9767 & 120.41 & 22.3456 & 96.16 \\
Town & 25.315 & 15.61 & 24.6925 & 129.5 & 20.135 & 81.99 \\
School & 26.51 & 21.192 & 25.5729 & 63.1047 & 21.9129 & 42.74 \\
Castle & 28.2157 & 16.6617 & 28.0643 & 54.9895 & 23.23 & 38.8983 \\ \hline
\end{tabular}%
}
\caption{Result of the ablation of error compensation}
\label{tab:ablation-mega-nerf-depth}
\end{table}
\noindent\textbf{Ablation of Pose Error Compensation}
We train a Mega-NeRF\cite{turki_mega-nerf_2022} with depth supervision from the ground-truth depth maps and the depth camera pose output from the time-pose function and compare its mapping results with our proposed method. From the evaluation results (Table \ref{tab:ablation-mega-nerf-depth}), we find that depth supervision with errors can improve Mega-NeRF's effectiveness on depth prediction, but incorrect geometric information in turn affects the performance of the rendering network.
\begin{table}[htbp]
\centering
\resizebox{0.38\textwidth}{!}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{offset} & \multicolumn{2}{c}{rotation $^\circ$} && \multicolumn{2}{c}{translation $m$} \\ \cmidrule{2-3} \cmidrule{5-6}& mean& median&& mean& median\\
\midrule
10\%&0.6578&0.2590&&3.4248&1.8818\\
20\%&0.9422&0.5516&&4.9643&3.898\\
30\%&1.2383&0.7914&&6.4142&5.7807\\
40\%&1.4141&0.7538&&7.2644&6.617\\
50\%&1.4984&0.8419&&5.1704&6.5162\\
random&1.1228&0.5155&&5.1704&4.0544\\
\bottomrule
\end{tabular}
}
\vspace{-0.1in}
\caption{Result of the ablation of different sampling offset}
\label{tab:ablation-offset}
\end{table}
\noindent\textbf{Ablation of Different Sampling Offset}
We compare the localization accuracy of implicit trajectory representations trained with RGB poses in predicting depth camera poses under different sampling offsets. We trained a time-pose function with a sparse set of time-pose pairs to amplify the differences in quantitative results (Table \ref{tab:ablation-offset}).
\section{Conclusion}
\label{sec:conclusion}
We present Async-NeRF, a pipeline that uses an implicit functional relationship between time and pose to calibrate RGB-D poses and to train a large-scale implicit scene representation. Our experiments show that Async-NeRF effectively registers depth camera poses by leveraging the trajectory prior embedded in the RGB time-pose relationship. Meanwhile, Async-NeRF learns a 3D scene representation for photo-realistic novel view synthesis and accurate depth estimation.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Stellar mass black holes and neutron stars are the end point of
massive star evolution via supernovae or gamma-ray bursts. Nearly all
of the Galactic black holes, and many neutron stars, found so far are
in binaries. Their properties are the observable consequences of
binary interactions. Studying these remnants provides vital clues to
understanding the evolutionary processes that produce them, both in
terms of single massive star evolution, and binary star evolution.
For example, the current stellar mass black hole distribution based on
a sample of about 20 objects appears to be disjoint from that of
neutron stars (\citealt{2010ApJ...725.1918O};
\citealt{2012ApJ...757...55O}; \citealt{2011ApJ...741..103F})
suggesting a bimodality in formation that produces either low--mass
neutron stars or relatively high--mass black holes, with few systems
in between. This remains a challenge for supernova models to reproduce
(\citealt{2012ApJ...749...91F};
\citealt{2012ApJ...757...91B}). \citet{2012ApJ...757...36K} argue that
this mass gap may, in part, be due to systematic effects
underestimating the system inclination.
Unfortunately, our observational sample, particularly in the case of
black holes, is largely comprised of objects discovered in transient
X-ray outbursts, leading to a variety of possible selection effects
that could obscure the properties of the true population (e.g.~see
\citealt{2005ApJ...623.1017N}). For instance, one could envisage,
using the disk instability model including disk irradiation effects
(cf.~\citealt{2008NewAR..51..752L}), an inverse correlation between
the accretor mass and the duty cycle, reducing the chance of
detection in outburst of relatively low-mass black holes. Additional
selection effects could be invoked by the black hole mass -- orbital
period correlation (\citealt{2002ApJ...575..996L}) and possibly
related to that, the optical and X-ray outburst peak luminosity --
orbital period correlation (\citealt{1998MNRAS.295L...1S} and
\citealt{2010ApJ...718..620W}, respectively).
To mitigate the selection effects incurred by selecting systems that
recently went through an outburst cycle we designed the Galactic Bulge
Survey (GBS; \citealt{2011ApJS..194...18J}). The GBS is a wide,
shallow {\it Chandra}\, X-ray survey of the Galactic Bulge aiming to uncover
many ($>$100) new {\it quiescent} black hole and neutron star
binaries. As a result, we may find sources quite different to those
identified in outburst. A second goal of the survey is to constrain
binary evolution models (e.g.~\citealt{1999MNRAS.309..253K};
\citealt{2002ApJ...571L..37P}; \citealt{2004ApJ...603..690B}) using
the observed number ratio between $\approx$hundred X-ray binaries and
several hundred CVs that we expect to find. This number will in
particular put constraints on uncertain phases in the binary evolution
such as the common envelope phase (e.g.~\citealt{2006MNRAS.369.1152K}; \citealt{2013A&ARv..21...59I}).
For both these science goals we need to classify the X-ray
sources. Given that this classification relies on
multi-wavelength data, by design, the survey area is sufficiently out
of the plane to allow (multi-epoch) optical and near-infrared (NIR)
follow-up of the majority of detected sources. In addition to
classification, optical and NIR spectroscopic observations are also
crucial for dynamical studies to derive compact object masses (and
sometimes the dynamical masses are necessary for classification,
e.g.~\citealt{2013MNRAS.428.3543R}).
The GBS is well under way. Radio counterparts to a sample of sources
from the first part of the X-ray survey have been identified by
\citet{2012MNRAS.426.3057M}. \citet{2012ApJ...761..162H} reported on
associations of X-ray sources with the brightest optical
counterparts. Results from optical variability alone
(\citealt{2012AcA....62..133U}) and optical variability and
spectroscopic studies together (\citealt{2013MNRAS.428.3543R};
\citealt{2013ApJ...769..120B}; \citealt{2013arXiv1310.2597H}; Torres
et al.~2013 submitted) are appearing. Furthermore, we are using NIR
observations from the NIR surveys Two Micron All Sky Survey (2MASS),
VISTA Variables in The Via Lactea (VVV) and the UKIRT Infrared Deep
Sky Survey to identify counterparts to the GBS X-ray sources (Greiss
et al.~2013 submitted).
We here report on {\it Chandra}\, observations of the final $\approx$quarter
of the sky area of 12 square degrees that makes up the GBS, completing
the {\it Chandra}\, survey observations of the GBS area. The initial
three-quarters were reported in \citet{2011ApJS..194...18J}. In
addition, we provide the radio counterparts to the X-ray sources
discovered in the final part after \citet{2012MNRAS.426.3057M}
reported on archival radio sources for the first
three-quarters. Finally, we investigate the spatial distribution of
all the X-ray sources found in the GBS area and by comparing with the
ROSAT sources in the sky area we report on here we investigate the
variability properties of the new GBS X-ray sources.
\section{{\it Chandra}\,X--ray observations, analysis and results}
\subsection{Source detection}
We have obtained 63 observations with the {\it Chandra}\, X--ray observatory
(\citealt{2002PASP..114....1W}) covering the remaining quarter of
the total area of twelve square degrees that we call the GBS.
We employed as much as possible the same analysis tools and techniques
as described in Jonker et al.~(2011) in order to make the survey as
homogeneous as possible. We also follow the source naming convention
introduced there: sources reported in Jonker et al.~(2011) are referred
to as CX\# (after {\it Chandra}\, X-ray source, where the numeral
indicates the position of that source in the list, with the sources
providing the largest number of counts at detection having the lowest
numerals), while new sources found in the 63 new observations are
called CXB\#.
In the left panel in Figure~\ref{changbs} we show the 63 new {\it Chandra}\,
observations we report on here. The red curved line indicates the
composite outline of each circular field of view of 14\hbox{$^\prime$}\,
diameter of these 63 observations. The grey curved lines bordering
the white points indicate the composite outline of each circular field
of view of 14\hbox{$^\prime$}\, diameter of the individual {\it Chandra}\, observations
obtained and the detected sources reported in
\citet{2011ApJS..194...18J}, respectively. The area near $l=0^\circ$
is covered by the observations from \citet{2009ApJ...706..223H}.
Sources found in 2 ks-long segments of those exposures were listed in
Jonker et al.~(2011) as well. In the right panel in
Figure~\ref{changbs} the white circles indicate the position of the
detected point sources. The size of the white circles is an indication
of the number of {\it Chandra}\, counts detected for that particular source.
\begin{figure*} \hbox{
\includegraphics[angle=0,width=8cm,clip]{ao13-obs-dustmap.ps}
\includegraphics[angle=0,width=12cm,clip]{gbs-map.eps}} \caption{{\it
Left panel:} The large black plus white rimmed saw--tooth boxes are
the outline of our optical observations of the GBS area in Galactic
coordinates. The grey scale image depicts the total reddening in the
Sloan $i^\prime$--band filter, $A_{i^\prime}$, estimated from the
\textsc{Cobe} dust maps \citep{1998ApJ...500..525S}. The overplotted
white circles indicate the position of the {\it Chandra}\, X--ray sources
detected in the GBS reported in Jonker et al.~(2011). The sources
found in the areas near $l=0^\circ$ and $1^\circ<|b|<2^\circ$ were
reported in Jonker et al.~(2011) but the observations were from
\citet{2009ApJ...706..223H}. The red rimmed curved lines indicate the
composite outline of each circular field of view of 14\hbox{$^\prime$}\,
diameter of the 63 {\it Chandra}\, observations that we report on in this
paper. {\it Right panel:} The grey scale image and contours depict the
total absorption $E(B-V)$, estimated from the extinction maps from the
VVV (\citealt{2012A&A...543A..13G}). The overplotted white circles
indicate the position of all X--ray sources detected in the GBS
including the new sources we report on here. The size of the white
circles is proportional to the number of {\it Chandra}\, counts detected for
that particular source. The dashed rectangle outlines the region of
the survey of the Galactic Center of
\citet{2002Natur.415..148W}. \\
\\ \\ \\ } \label{changbs} \end{figure*}
The {\it Chandra}\,
observations have been performed using the I0 to I3 CCDs of the Advanced CCD
Imaging Spectrometer (ACIS) detector (\citealt{1997AAS...190.3404G};
ACIS--I). The observation identification (ID) numbers for the data presented
here are \dataset [ADS/Sa.CXO#DefSet/GBS] {13528--13590}. We reprocessed and
analyzed the data using the {\sc CIAO 4.3} software developed by the {\it Chandra}\,
X--ray Center and employing {\sc CALDB} version 4.4.6. The data telemetry
mode was set to {\it very faint} for all observations. The {\it very faint}
mode provides 5$\times$5 pixel information per X--ray event. This allows for
a better screening of events caused by cosmic rays. In our analysis we
selected events only if their energy falls in the 0.3--8 keV range.
We used {\sc wavdetect} to search for X--ray sources in each of the
observations using data covering the full 0.3--8, the 0.3--2.5 and the
2.5--8 keV energy bands, separately. We set the {\sc sigthresh} in
{\sc wavdetect} to 1$\times 10^{-7}$, which implies that for a
background count rate constant over the ACIS-I CCDs there would be
$<$0.1 spurious source detection per observation as about $1\times
10^6$ pixels are searched per observation. However, in most cases a
source is not detected in a single pixel, thus our estimate of 0.1
spurious source per observation is very conservative. Furthermore, as
we explain below, we applied additional selection criteria. This
lowers the number of spurious sources further.
We retained all sources for which Poisson statistics indicates that
the probability of obtaining the number of detected source counts by
chance, given the expectation for the local background count rate, is
lower than 1$\times 10^{-6}$. This would be equivalent to a $>5\sigma$
source detection in Gaussian statistics. Next, we deleted all sources
for which {\sc wavdetect} was not able to provide an estimate of the
uncertainty on the right ascension [$\alpha$] and on declination
[$\delta$] as this indicates often that all counts fell in 1 pixel
which could well be due to faint afterglow events caused by cosmic ray
hits. In addition, we impose a 3 count minimum for source detection as
\citet{2005ApJS..161....1M} simulated that in their XBootes survey
with 5 ks ACIS--I exposures, 14 per cent of the 2 count sources were
spurious (note that this percentage will probably be lower for our GBS
exposures of 2 ks).
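The retention criterion can be sketched with the Poisson tail probability; the helper names and the example background expectation are illustrative, not taken from the detection pipeline:

```python
import math

def poisson_tail(n_counts, mu_bg):
    """P(X >= n_counts) for X ~ Poisson(mu_bg): the chance that the
    observed counts arise from the background alone."""
    cdf = sum(math.exp(-mu_bg) * mu_bg ** k / math.factorial(k)
              for k in range(n_counts))
    return 1.0 - cdf

def is_retained(n_counts, mu_bg, threshold=1e-6):
    """GBS retention cut (>5 sigma equivalent) plus the 3-count minimum."""
    return n_counts >= 3 and poisson_tail(n_counts, mu_bg) < threshold

kept = is_retained(5, 0.05)   # 5 counts over an expected background of 0.05
```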
Since our {\it Chandra}\, observations were designed to overlap near the
edges, we searched for multiple detections of the same source either
in one of the energy sub-bands or in the full energy band. We consider
sources with positions falling within 5\hbox{$^{\prime\prime}$}\, of each other likely
multiple detections of the same source. This radius is larger than
that of 3\hbox{$^{\prime\prime}$}\, that we took in \citet{2011ApJS..194...18J} as we
found out that still some multiple detections of the same source
remained for sources detected with large off-axis angles (see
\citealt{2012ApJ...761..162H} for the list of 18 sources from
\citealt{2011ApJS..194...18J} that were in fact multiple detections of
the same source.) This means that in \citet{2011ApJS..194...18J} we
found 1216 unique sources.
In the last quarter of the GBS area that we report on here, we found
that 26 sources are detected more than once. Out of these 26 sources,
23 sources are detected two times, and 3 sources are detected three
times. Two of the sources detected twice were already detected and
reported in \citet{2011ApJS..194...18J} (CX155 and CX314). We do not
list these two sources in the Table~\ref{srclist} as they were
mentioned in \citet{2011ApJS..194...18J}. The properties that we list
in Table~\ref{srclist} for the sources that are detected multiple
times are those of the detection that gave rise to the largest number
of X--ray counts. In Table~\ref{srclist} we also list the number of
times that sources are detected.
Besides the multiple detections of CX155 and CX314, fourteen additional
sources detected once in the Cycle 13 {\it Chandra}\, observations were previously
detected and listed in \citet{2011ApJS..194...18J}. These sources are CX15,
CX17, CX25, CX44, CX60, CX69, CX79, CX137, CX221, CX266, CX312, CX355, CX374,
CX439. In most cases the off-axis angle of the source position was larger
during the new observations and, given that a similar number of X--ray counts
was detected in each instance, the source position provided in
\citet{2011ApJS..194...18J} is the most accurate X--ray position available.
The main exception where we consider the newly derived position to be more
accurate is CX314. CX314 was detected at 10.8\hbox{$^\prime$}\, off-axis at 8 counts
in the {\it Chandra}\, detection leading to its discovery. The new detection we
report on here provides 17 counts and the source was 5.9\hbox{$^\prime$}\, off-axis in
ObsID 13581. The new best-fit source position is ($\alpha$,
$\delta$)=(266.6461515,-31.8136964) which is 2.6\hbox{$^{\prime\prime}$}\, from the
previously reported position.
Others, like CX25, were detected closer on axis in the new cycle 13
observations (6.7\hbox{$^\prime$}\, off--axis with 6 counts) but with much more
counts in the observation reported in \citet{2011ApJS..194...18J}
(7.2\hbox{$^\prime$}\, off-axis with 48 counts) than in the new cycle 13
observation implying that the position provided in
\citet{2011ApJS..194...18J} will be more accurate. We do conclude that
CX25 is variable in X-rays.
In total we detected 424 distinct sources in the area indicated with
red circles and the red curved lines on the left side in
Figure~\ref{changbs}. The source list is given in Table~\ref{srclist}
and the table provides information on $\alpha$, $\delta$, the error on
$\alpha$ and $\delta$, total number of counts detected, the
observation ID of the observation resulting in the detection and the
off-axis angle at which the source is detected. The error on $\alpha$
and $\delta$ are the error provided by {\sc wavdetect}, it does not
take into account the typical {\it Chandra}\, bore--sight uncertainty of
0.6\hbox{$^{\prime\prime}$}\, (90 per cent confidence). We do, however, add a column to
Table~\ref{srclist} quoting the total uncertainty on the source
position following formula 4 in \citet{2010ApJS..189...37E}. For
clarity, we repeat their equation below:
\begin{displaymath}
\log P = \left\{ \begin{array}{ll}
0.1145\,\theta - 0.4957\,\log C + 0.1932 & \mathrm{for}\ 0.0 < \log C < 2.1393, \\
0.0968\,\theta - 0.2064\,\log C - 0.4260 & \mathrm{for}\ 2.1393 < \log C < 3.3,
\end{array} \right.
\end{displaymath}
where $\theta$ is the off-axis angle in arcminutes and $C$ is the
detected number of X-ray photons. The positional error $P$ is given in
arcseconds and it corresponds to a 95\% confidence interval.
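For convenience, the positional uncertainty can be evaluated directly from this relation (a sketch; the treatment of the boundary at $\log C = 2.1393$ is our own choice):

```python
import math

def positional_error(theta, counts):
    """95% positional uncertainty P (arcsec) from the off-axis angle
    theta (arcmin) and the number of detected counts."""
    logc = math.log10(counts)
    if 0.0 < logc < 2.1393:
        logp = 0.1145 * theta - 0.4957 * logc + 0.1932
    elif 2.1393 <= logc < 3.3:
        logp = 0.0968 * theta - 0.2064 * logc - 0.4260
    else:
        raise ValueError("count range outside the fitted relation")
    return 10.0 ** logp

p = positional_error(theta=5.0, counts=10)   # 5 arcmin off-axis, 10 counts
```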
We provide individual {\it Chandra}\, source names; however, for brevity we
use the source number in Table~\ref{srclist} preceded by ``CXB'' to
indicate which source we discuss in this paper. For the error $\sigma_N$
on the detected number of counts $N$, \citet{2005ApJS..161..271G} give
$\sigma_N = 1+\sqrt{N+0.75}$ after \citet{1986ApJ...303..336G}. To allow
a rough, easy calculation of the source flux based on the detected
number of source counts, we give the conversion factor for a source
spectrum of a power law with photon index 2 absorbed by $N_{\rm
H}=1\times 10^{22}$ cm$^{-2}$: $7.76\times 10^{-15}$~erg cm$^{-2}$
s$^{-1}$\,photon$^{-1}$.
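These conversions can be sketched as follows; the example count value is taken from Table~\ref{srclist}, and the function names are illustrative:

```python
import math

CONVERSION = 7.76e-15   # erg cm^-2 s^-1 per detected photon, for a
                        # power law with photon index 2, N_H = 1e22 cm^-2

def rough_flux(counts):
    """Rough source flux estimate from the detected counts."""
    return counts * CONVERSION

def count_error(counts):
    """sigma_N = 1 + sqrt(N + 0.75) (Gehrels 1986 approximation)."""
    return 1.0 + math.sqrt(counts + 0.75)

f = rough_flux(161)     # e.g. the counts listed for CXB1
s = count_error(161)
```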
\renewcommand{\arraystretch}{2.0}
\begin{center}
\begin{longtable*}{cccccccccccc}
\caption{PLACEHOLDER, FIRST TEN ENTRIES ONLY! The GBS X--ray source
list providing the GBS source name, the source number as used in
this paper is preceded by "CXB" to differentiate it from the
sources in Jonker et al.~(2011), $\alpha$, $\delta$ in decimal
degrees, the $3\sigma$ error on localizing the source on the detector
$\alpha$ and $\delta$ in arcseconds, total number of counts
detected, the ID of the observation resulting in the
detection, the off-axis angle at which the source is detected, the
number of times the source was detected in the {\it Chandra}\,
observations, the 95\% confidence positional uncertainty
($\Delta$pos) calculated according to formula 4 in
\citet{2010ApJS..189...37E} taking the boresight uncertainty into
account, and the hardness ratio (HR) for sources detected with
more than 20 counts. The hardness is defined as the ratio between
the count rate in the 2.5--8 keV minus that in the 0.3--2.5 keV
band to the count rate in the full 0.3--8 keV energy band. The HR
is calculated for the detection where the off-axis angle was
smallest if the source was detected multiple times.}
\label{srclist}\\
\hline
Source & CXB\# & $\alpha$& $\delta$ & $\Delta \alpha$ &
$\Delta\delta$ & \# & Obs & Off-axis & \# of & $\Delta$pos & HR \\[0.1mm]
name & & (degrees) & (degrees) & (\hbox{$^{\prime\prime}$}) & (\hbox{$^{\prime\prime}$}) & (cnt) & ID &
angle (\hbox{$^\prime$}) & detec. & (\hbox{$^{\prime\prime}$}) & \\[0.1mm]
CXOGBSJ175748.7-275214 & CXB1 & 269.4529160 & -27.8707194 & 0.19 & 0.22 & 161 & 13536 & 7.74 & 1 & 0.74 & -0.61$\pm$0.06 \\
CXOGBSJ175359.8-292907 & CXB2 & 268.4994759 & -29.4852781 & 0.09 & 0.05 & 148 & 13550 & 4.35 & 2 & 0.35 & -0.18$\pm$0.02 \\
CXOGBSJ174614.3-321949 & CXB3 & 266.5599883 & -32.3303786 & 0.06 & 0.05 & 105 & 13574 & 2.64 & 1 & 0.31 & 0.28$\pm$0.03 \\
CXOGBSJ173416.2-304538 & CXB4 & 263.5678548 & -30.7607505 & 0.15 & 0.09 & 70 & 13586 & 3.78 & 1 & 0.51 & -0.90$\pm$0.12 \\
CXOGBSJ173208.6-302828 & CXB5 & 263.0362304 & -30.4746348 & 0.07 & 0.10 & 66 & 13587 & 3.78 & 1 & 0.53 & -0.75$\pm$0.10 \\
CXOGBSJ174517.0-321356 & CXB6 & 266.3208565 & -32.2323620 & 0.11 & 0.11 & 66 & 13577 & 3.73 & 2 & 0.52 & 0.78$\pm$0.11 \\
CXOGBSJ175551.6-283213 & CXB7 & 268.9650346 & -28.5369772 & 0.06 & 0.05 & 65 & 13533 & 1.83 & 1 & 0.32 & 0.34$\pm$0.05 \\
CXOGBSJ175432.1-292824 & CXB8 & 268.6339299 & -29.4734138 & 0.28 & 0.26 & 65 & 13550 & 7.49 & 2 & 1.42 & -0.78$\pm$0.11 \\
CXOGBSJ174916.6-311518 & CXB9 & 267.3192034 & -31.2550666 & 0.09 & 0.07 & 64 & 13569 & 3.52 & 1 & 0.50 & -0.95$\pm$0.13 \\
CXOGBSJ175832.4-275244 & CXB10 & 269.6350093 & -27.8789043 & 0.13 & 0.11 & 53 & 13558 & 4.30 & 1 & 0.68 & -0.56$\pm$0.09 \\
\end{longtable*}
\end{center}
\renewcommand{\arraystretch}{1.0}
\subsection{X--ray spectral information}
We extract source counts using circular source extraction regions of
10\hbox{$^{\prime\prime}$}. Background extraction regions are annuli with inner and outer
radii of 15\hbox{$^{\prime\prime}$}\, and 30\hbox{$^{\prime\prime}$}, respectively. We plot the 27 sources
for which we detected 20 or more counts in a hardness -- intensity diagram
(Figure~\ref{hardnessinten}). To mitigate the effects that small
differences in exposure time across our survey can have, we use count rates
as a measure of intensity. We define the hardness ratio as the ratio
between the count rate in the 2.5--8 keV minus that in the 0.3--2.5 keV
band to the count rate in the full 0.3--8 keV energy band
(after~\citealt{2004ApJS..150...19K}). We derived the hardness using {\sl
XSPEC} version 12.7 (\citealt{1996adass...5...17A}) by determining the
count rates in the soft and hard band taking the response and ancillary
response file for each of the sources. For these 27 sources photon
pile--up is less than 10\% even for the brightest source. Naively, one
would expect most hard sources to be more distant and more absorbed than
the soft sources, as the intrinsic spectral shape of the most numerous
classes of sources we expect to find does not differ much.
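The hardness definition above can be written compactly; since the full-band rate is the sum of the two sub-band rates, HR is confined to $[-1,1]$. The following sketch is purely illustrative (the actual rates were derived with {\sl XSPEC} using the response files):

```python
def hardness_ratio(rate_soft, rate_hard):
    """HR = (hard - soft) / full, with the full 0.3-8 keV rate
    being the sum of the 0.3-2.5 and 2.5-8 keV rates."""
    return (rate_hard - rate_soft) / (rate_soft + rate_hard)

# A source emitting only below 2.5 keV has HR = -1;
# only above 2.5 keV, HR = +1; equal rates give HR = 0.
print(hardness_ratio(9.0, 1.0))   # -0.8, a soft source
print(hardness_ratio(1.0, 1.0))   # 0.0
```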
\begin{figure} \includegraphics[angle=0,width=7cm]{new-hid13-inclnhpl2.ps}
\caption{The hardness -- intensity diagram for the 27 sources for
which 20 or more counts were detected in {\it Chandra}\, cycle 13
observations for the GBS survey. To mitigate effects of small
differences in exposure times we used count rates as a measure of
intensity. The hardness is defined as the ratio between the count
rate in the 2.5--8 keV minus that in the 0.3--2.5 keV band to the
count rate in the full 0.3--8 keV energy band. Hard sources fall
in the top half and soft sources in the bottom half of this
figure. The green line shows the influence of the
extinction ($N_{\rm H}$) on a power law spectrum with index 2 for a
source count rate of 0.05 counts s$^{-1}$ and $N_{\rm H}$ values
increasing from bottom right to top left from (0.01, 0.1, 1, 3,
10)$\times 10^{22}$ cm$^{-2}$. } \label{hardnessinten} \end{figure}
The most interesting aspect from Figure~\ref{hardnessinten} is perhaps the
presence of three bright (rate $>$2.5$\times 10^{-2}$ counts s$^{-1}$) and
relatively hard sources (HR$>$0). Their relatively hard spectrum makes it
likely that these three sources (CXB3 [HR=0.28$\pm$0.03], CXB6
[HR=0.78$\pm$0.11], and CXB7 [HR=0.34$\pm$0.05]) suffered significantly
from X-ray absorption; they are therefore likely at a distance of more
than 3 kpc which, given their relatively high X-ray flux, implies a
substantial X-ray luminosity. CXB3 is probably a transient source (see below)
and none of the three sources is associated with archival radio emission
(see below) decreasing the chance that they are background AGN, and making
them potential X-ray binaries.
As foreseen, the spectral information is insufficient for source
classification for the majority of the detected sources;
classification will therefore have to come from (multi-epoch)
multi-wavelength observations. Finally, there seems to
be a dichotomy in the hardness with one peak centered on a hardness of
0.2 and another centered on -0.8 with a paucity of sources with
hardness 0. A similar dichotomy was reported in
\citet{2011MNRAS.413..595W} and \citet{2011ApJS..194...18J} (see the
latter paper for a possible explanation for the nature of this
dichotomy).
\subsection{{\it Chandra}\, light curves of sources CXB1--10}
We inspected the {\it Chandra}\, light curves of sources CXB1--10. We rebinned
the light curves in 200~s bins. Sources CXB1, 2, 3, 6 and 9 show
suggestive evidence for flare-like variability. Fitting the light
curve with a constant gives a $\chi^2$ value of 16 (for 10 degrees of
freedom [d.o.f.]), 35.9 (9 d.o.f.), 19.5 (10 d.o.f.), 18 (10 d.o.f.),
16.4 (9 d.o.f.), respectively. The light curves of sources CXB4, 5,
7, 8 and 10 are consistent with being constant with $\chi^2$ values of
8.4 (10 d.o.f.), 7.5 (9 d.o.f.), 11 (10 d.o.f.), 10 (9 d.o.f.) and 3.8
(10 d.o.f.), respectively. We do note that the number of counts in
each 200~s bin varies between 35 and 3 counts between these sources
and as a function of time. Therefore, certainly for the bins
containing only a few counts the use of the $\chi^2$ statistic is
suspect. The small number of counts per bin in several cases makes it
likely that some of the high values of reduced $\chi^2$ are occurring
due to chance fluctuations.
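The constant-fit test above amounts to comparing each 200~s bin with the mean rate. A minimal sketch (ours, with made-up counts rather than the actual GBS light curves; the same caveat about $\sqrt{N}$ errors for low-count bins applies):

```python
def chi2_constant(counts_per_bin):
    """Chi-squared of a binned light curve against its mean,
    with sqrt(N) errors; only trustworthy for N >~ 20 per bin."""
    mean = sum(counts_per_bin) / len(counts_per_bin)
    chi2 = sum((n - mean) ** 2 / n for n in counts_per_bin)
    return chi2, len(counts_per_bin) - 1   # chi2, d.o.f.

# A flat light curve with one flare-like 200-s bin:
chi2, dof = chi2_constant([10, 9, 11, 8, 30, 10, 9, 11, 10, 9, 10])
print(round(chi2, 1), dof)   # ~16.1 for 10 d.o.f.
```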
In Figure~\ref{lightcurves} we plot the light curves of the sources
for which there is evidence for variability during the
observations (i.e.~CXB1, CXB2, CXB3, CXB6, and CXB9) and for
comparison we plot in the top panel of the same Figure the light curve
of CXB10 for which our current data provides no evidence that the
source varies during the observation.
\begin{figure} \includegraphics[angle=0,width=9cm]{lightcurves.eps}
\caption{The two panels show the {\it Chandra}\, X-ray light curves of six
CXB sources. Each point is an average of 200~s of {\it Chandra}\,
data. For five sources there is suggestive evidence that the source
is variable during the {\it Chandra}\, observation (CXB1, CXB2, CXB3,
CXB6, and CXB9). For comparison we plot in the top panel also the
light curve of CXB10 for which we find no evidence that the source
varied during the observation.} \label{lightcurves} \end{figure}
\section{Discussion}
Using 63 {\it Chandra}\, observations we cover the remaining $\approx$25 per
cent of the 12 square degrees that comprise the Galactic Bulge Survey
(\citealt{2011ApJS..194...18J}). In this paper we provide the list of
424 X-ray sources that we find in this area and that have three or more
counts in the short (2 ks) {\it Chandra}\, observations.
In total we detected 1640 unique X-ray sources. Of these 875 are
detected at Galactic latitudes below the plane and 765 at Galactic
latitudes above the plane. For a symmetric distribution of 1640
sources one would expect 820$\pm$20 on either side, making the
detected distribution marginally skewed. We investigated the nature of
this asymmetry by dividing the number of sources over the four
quadrants they were detected in. We defined the quadrants according to
the Galactic coordinates of the source and counted the number of
sources in each quadrant: (-l,-b: \#382), (-l,+b: \#399), (+l,+b:
\#366) and (+l,-b: \#493). It turns out that the quadrant (+l,-b) is
responsible for the apparent asymmetry in the number of detected
sources (see Figure~\ref{distri}).
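The quoted $820\pm20$ expectation, and the size of the deviation, follow from binomial counting statistics; a sketch (ours):

```python
import math

n_total = 1640
below_plane = 875          # sources at Galactic latitudes b < 0

expected = n_total / 2.0                   # 820 per side
sigma = math.sqrt(n_total * 0.5 * 0.5)     # binomial width, ~20
deviation = (below_plane - expected) / sigma

print(round(sigma), round(deviation, 1))   # 20, ~2.7 sigma
```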
\begin{figure} \includegraphics[angle=0,width=9cm]{nr-src-diff-quadrants.eps}
\caption{The cumulative distribution of the number of X-ray sources
as a function of the number of source X-ray counts discovered in
the GBS for four different quadrants according to the Galactic
coordinates of the sources (-l,-b), (-l,+b), (+l,+b) and
(+l,-b). The full histogram shows the cumulative difference in the
number of X-ray sources as a function of the number of detected
source X-rays found in the (+l,-b) and the (-l,-b) quadrant. The
difference is qualitatively the same when comparing the number of
X-ray sources in the (+l, -b) and the other quadrants. There is a
clear excess of the number of X-ray sources discovered in the
(+l,-b) quadrant when compared with the other quadrants. The
difference increases with X-ray count rate up to sources with
$\approxlt$10 X-ray counts per source.
} \label{distri} \end{figure}
Most of the sources we expect to have detected are relatively nearby
(within 3 kpc; \citealt{2011ApJS..194...18J}), nevertheless, the
different average extinction in the GBS areas in the four quadrants
could still have a significant influence on the number of detected
sources. The average extinction is indeed lower in the (+l,-b)
quadrant where we detected most new X-ray sources (cf.~the right panel
of Figure~\ref{changbs}). The overdensity of sources we find in
quadrant (+l,-b) of the GBS area coincides with the presence of
diffuse X-ray emitting gas in that part of the GBS area, as found by
ROSAT (\citealt{1997ApJ...485..125S}).
In order to investigate this asymmetry further we compared the
different background levels in our {\it Chandra}\ observations as determined
by the {\sc wavdetect} tool (see Figure~\ref{backchan}, a higher
background is indicated by a lighter shade of gray). The background
levels could influence the detection probability especially for
sources with 3 counts falling far away from the optical axis of the
satellite. The diffuse emission could show up as pixels with 1 or 2
counts spread over the detector; alternatively, in areas with a lower
extinction a larger number of 1- and 2-count sources, such as RS~CVn
systems and coronally active stars, can be present.
For a background count rate per pixel per second of $\approx5\times
10^{-7}$ (see Figure~\ref{backchan}) and 2~ks exposures and
$\approxlt$100 pixels for the point spread function far off axis, the
expected background rate is $\approxlt$0.1 count per 2~ks observation
in such an area. Whereas there is indeed a difference in the
background count rate in line with the expectation from either more
1--2 count point sources or more diffuse emission in the (+l,-b)
quadrant of the GBS area, this enhanced background does not have a
large effect on the number of 3 count sources even far off-axis.
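The order-of-magnitude background estimate above is straightforward arithmetic; for completeness (this sketch is ours):

```python
rate_per_pixel = 5e-7   # background counts per pixel per second
exposure = 2000.0       # s, the typical GBS exposure
psf_pixels = 100        # approximate PSF area far off-axis

expected_bg = rate_per_pixel * exposure * psf_pixels
print(expected_bg)      # ~0.1 background count per observation
```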
\begin{figure} \includegraphics[angle=0,width=9cm]{back.eps}
\caption{The background count rate (pixel$^{-1}$ s$^{-1}$) as
measured by {\it Chandra}. The background is higher in the (+l,-b) part of
the GBS area than in the other areas. We removed two observations
from this plot; one where we used the FAINT event mode that does
not allow for the thorough cleaning of cosmic ray afterglow events
and therefore yields a much higher background and one where the
background is artificially increased due to the presence of a very
bright X-ray source. } \label{backchan} \end{figure}
We conclude that the overdensity of sources in the (+l,-b) part of our
GBS area is likely caused by the lower average extinction in that
quadrant of the GBS survey area, whereas the higher X-ray background
in that area is in line with the diffuse gas as found by
\citet{1997ApJ...485..125S}. Those authors argued that this diffuse
gas is at the distance of the bulge.
\subsection{Comparison with ROSAT sources}
In order to investigate whether sources in our CXB source list are detected by
ROSAT we cross-correlated the GBS CXB source list with the ROSAT All Sky
Survey (RASS; \citealt{1999A&A...349..389V}). We queried both the Bright as
well as the Faint catalog, and the ROSAT High Resolution Imager (HRI)
Pointed Observations (1RXH), and the Second ROSAT Position Sensitive
Proportional Counter (PSPC) Catalog (2RXP) using the VizieR database. To
accommodate the relatively large positional uncertainties in many of the
ROSAT source detections, we searched for ROSAT sources within 30\hbox{$^{\prime\prime}$}\,of
the {\it Chandra}\, positions of our CXB sources.
We find two RASS (faint) sources that have a position relatively close to
the GBS CXB sources CXB9 and CXB11. These sources are probably associated
with the ROSAT sources 1RXS~J174916.5-311509 and 1RXS~J175019.0-302654,
respectively. CXB9 is 9.3\hbox{$^{\prime\prime}$}\, away from 1RXS~J174916.5-311509. CXB9
is also associated with an O8~III Tycho-2 source
(\citealt{2012ApJ...761..162H}, see their work for further details on this
source). CXB11 is 22\hbox{$^{\prime\prime}$}\, away from 1RXS~J175019.0-302654 which is
probably the same source as 2RXP~J175020.0-302616. We furthermore find that
CXB55 is likely to correspond to 1RXH~J175017.6-311427 (reported in
\citealt{1994ApJ...423..633R}). The angular distance between the two
sources is 12.2\hbox{$^{\prime\prime}$}. Finally, CXB93 might be related to 1RXH
J174612.7-320637 which is located at an angular separation of 25\hbox{$^{\prime\prime}$}.
\subsection{Transient sources}
The first three CXB sources (CXB1--3) are bright enough that they should
have been detected in the RASS if they were as bright during the RASS as
they are in our {\it Chandra}\, observations. However, they were not detected in
the RASS, and thus we are inclined to conclude that their X-ray luminosity
has significantly varied between our {\it Chandra}\, and the RASS observation.
Before firmly concluding that these sources are variable, we
inspected the {\it Chandra}\, X-ray spectrum of each of these sources. CXB1 and CXB2
have spectra that should have allowed for a detection in RASS, however, we
found that the spectrum of CXB3 is strongly absorbed potentially providing
an explanation as to why ROSAT did not detect the source. Using
C-statistics we fit a spectral model consisting of a power-law absorbed by
interstellar material to the X-ray spectrum. For CXB3 we find a best fit
N$_H=(2.7\pm 0.9)\times 10^{22}$ cm$^{-2}$ for a power law index of $2.4\pm
0.7$. Extrapolating this model to the ROSAT band (0.01-2.5 keV) we find
that the source flux is 2.5$\times 10^{-12}$~erg~cm$^{-2}$~s$^{-1}$. This implies that the
source should have been detected by the RASS although we note that the
extrapolation to low energies carries a significant uncertainty. We
tentatively conclude that CXB1, 2, and 3 are transient or at least
highly variable sources.
CXB3 has a bright near-infrared counterpart of $K=10.06\pm0.04$
(2MASS~J17461440$-$3219494; this 2MASS source was not picked up in our
cross-correlation with {\sc simbad}, see Section~\ref{optical}) at an angular
distance of 0\farcs13, which is consistent with the 95\% confidence
uncertainty on the position of the source of 0\farcs31 (see
Table~\ref{srclist}). The extinction towards the source as given by
\citet{2012A&A...543A..13G} is $E(B-V)\sim 2.8$. This yields an
N$_H\sim1.6\times 10^{22}$ cm$^{-2}$ which is consistent within the
uncertainties with the value we find from our fit to the X-ray
spectrum (using the conversion of $E(B-V)$ to $A_V$ using a gas to
dust ratio of $R=3.1$ and the conversion from $A_V$ to N$_H$ from
\citealt{1995A&A...293..889P}). This value for the extinction is also
consistent with a distance to the source of $\sim 8$~kpc. For that
distance the source luminosity will be around $6\times10^{33}$~erg~s$^{-1}$. The source
is also detected in the 2MASS, WISE and GLIMPSE surveys
(\citealt{2006AJ....131.1163S}; \citealt{2010AJ....140.1868W};
\citealt{2003PASP..115..953B} and \citealt{2009PASP..121..213C},
respectively) as well as in our Blanco/DECam $r^\prime$ data at
$r^\prime\sim19.8$ (Johnson et al. in prep). Correcting for the
reddening of \citet{2012A&A...543A..13G} we find that the spectral
energy distribution fits well with the Kurucz model
(\citealt{2004astro.ph..5087C}) of a late-K red giant with $T_{\rm eff}=4000$~K
and $\log g=1.5$. This source is a candidate symbiotic X-ray binary
(cf.~\citealt{2013arXiv1310.2597H}).
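The extinction-to-column conversion used for CXB3 can be reproduced as follows; the coefficient $1.79\times10^{21}$~cm$^{-2}$~mag$^{-1}$ is the \citet{1995A&A...293..889P} value implied by the citation in the text (the function name is ours):

```python
R_V = 3.1            # gas-to-dust ratio adopted in the text
NH_PER_AV = 1.79e21  # cm^-2 per magnitude of A_V (Predehl & Schmitt 1995)

def ebv_to_nh(ebv):
    """N_H from a reddening E(B-V) via A_V = R_V * E(B-V)."""
    return NH_PER_AV * R_V * ebv

# E(B-V) ~ 2.8 towards CXB3 (Gonzalez et al. 2012):
print(ebv_to_nh(2.8))   # ~1.6e22 cm^-2, as quoted in the text
```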
We also investigated whether ROSAT sources found using the RASS (Bright and
Faint catalogues) as well as pointed observations (from the HRI and the
PSPC) fall in the observed GBS CXB area but were not detected. We use {\sc
Topcat} to cross-correlate the VizieR ROSAT catalogues mentioned above with
the coordinates of the {\it Chandra}\, pointing centers. We considered a sky area
of 7\hbox{$^\prime$}\, around the {\it Chandra}\, pointing centers in this
cross-correlation. The resulting list contains all the ROSAT sources that
fall inside this sky area. We remove the ROSAT sources that have an
associated GBS CXB counterpart within 30\hbox{$^{\prime\prime}$}\,(see above). Below we
discuss the ROSAT sources that were no longer detected in the GBS CXB
observations.
1RXH~J174423.1-320254 and 1RXH~J174449.9-321701 have no CXB
counterpart within 30\hbox{$^{\prime\prime}$}, however, both sources were detected by
ROSAT at a signal-to-noise ratio of only 3 and 2.7, respectively. The
Second ROSAT PSPC Catalog source 2RXP~J175138.6-295024 also went
undetected in the CXB. The false alarm probability for the ROSAT
detection of this source is 1.2$\times 10^{-2}$.
There are 15 sources from the RASS Faint source catalog within 7\hbox{$^\prime$}\,
of a {\it Chandra}\, CXB pointing that do not have a CXB counterpart within
30\hbox{$^{\prime\prime}$}\, (see Table~\ref{faintrass}). However, we note that the
uncertainty on the position of the faint RASS sources ranges between
14-49\hbox{$^{\prime\prime}$}\,and the search radius of 30\hbox{$^{\prime\prime}$}\, might be too strict.
However, enlarging the matching radius introduces other problems. E.g.~for a
search radius of 1\hbox{$^\prime$}, 1RXS~J174608.8-320544 has two potential CXB
counterparts CXB93 and CXB406. CXB93 is at 57.2\hbox{$^{\prime\prime}$}\, and CXB406 lies at
51.6\hbox{$^{\prime\prime}$}\, from 1RXS~J174608.8-320544 (CXB93 and CXB406 are
94.7\hbox{$^{\prime\prime}$}\, apart and they are thus not consistent with being the same
source). Interestingly, given that 1RXS~J174608.8-320544 is marked as a
potentially extended source in the RASS catalog, it might be that the
source is a blend of CXB93 and CXB406.
\begin{table} \caption{RASS faint sources without CXB X-ray counterparts
within 30\hbox{$^{\prime\prime}$}. $\Delta$WCS is the uncertainty on the source position
provided by the RASS. $L$ is the likelihood of source detection $L =
-\ln(1-P)$, where $P$ is the probability that the source is real. Those
sources with $L\approxgt 9$ that went undetected in the GBS are good
candidate transients. }
\label{faintrass}
\begin{center}
\begin{tabular}{lcccc}
1RXS & RA &DEC & $\Delta$WCS (\hbox{$^{\prime\prime}$})& $L$\\
J175237.6-294714$^a$ &268.1567 &-29.78722 &49& 9 \\
J175343.3-291444 &268.4304 &-29.2457 &16& 8 \\
J175342.4-290809 &268.4267 &-29.1358 &19& 10\\
J175420.8-285412 &268.5867 &-28.9033 &15& 12\\
J175606.4-283311 &269.0267 &-28.5532 &30& 8\\
J175712.8-280510 &269.3033 &-28.0863 &17& 10\\
J175836.1-273358 &269.6504 &-27.5661 &19& 8\\
J175050.7-301735 &267.7112 &-30.2932 &37& 7\\
J175323.2-295649 &268.3467 &-29.9471 &19& 8\\
J175334.9-295013 &268.3954 &-29.8369 &14& 8\\
J175421.9-292206 &268.5913 &-29.3683 &14& 15\\
J175855.9-272945 &269.7329 &-27.4960 &27& 9 \\
J175019.0-304843 &267.5792 &-30.8119 &30& 10\\
J174906.7-311915 &267.2779 &-31.3208 &21& 11\\
J174608.8-320544$^a$ &266.5367 &-32.0956 &25& 17\\
\end{tabular}
\end{center}
{\footnotesize $^a$ Marked in the RASS as a potentially extended ROSAT source.}
\end{table}
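The detection likelihood $L$ in Table~\ref{faintrass} translates into the chance that a source is spurious via $1-P=e^{-L}$; a small sketch (ours) makes the $L\approxgt 9$ threshold concrete:

```python
import math

def spurious_probability(L):
    """1 - P = exp(-L), inverting L = -ln(1 - P)."""
    return math.exp(-L)

# L = 9 -> ~1.2e-4 chance that the detection is spurious;
# L = 7 -> ~9e-4, i.e. a noticeably less secure detection.
print(spurious_probability(9), spurious_probability(7))
```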
For all the sources with $L\approxgt 9$ in Table~\ref{faintrass} as
well as the two 1RXH sources and the one 2RXP source not detected in
CXB, it is conceivable that the ROSAT observations found the source in
a bright state and/or that the source spectrum is too soft to allow
for a detection in the GBS CXB observations. Several sources present
secure ROSAT detections and they should have been detected in our CXB
observations. E.g.~1RXS~J175421.9-292206 is detected at more than
5$\sigma$ significance with ROSAT hardness ratio 1 (HR1)=$0.43\pm
0.37$ and hardness ratio 2 (HR2)=$0.22\pm0.42$. Here, HR1= (B-A)/(B+A)
and HR2= (D-C)/(D+C), with A=0.11-0.41 keV, B=0.52-2.0 keV, C=0.5-0.9
keV, and D=0.9-2.0 keV count rate. Therefore, the X-ray spectrum is
not too soft for {\it Chandra}, indicating that this source has varied between
the ROSAT and the {\it Chandra}\, observations. For some other sources, most
notably those with $L\approxlt9$ in Table~\ref{faintrass}, the ROSAT
detection significance is also so low that they could be spurious
detections.
\section{Radio NVSS detections and {\sc simbad} listing of GBS CXB sources}
\label{optical}
After \citet{2012MNRAS.426.3057M} we provide the result from the
cross-correlation between the CXB source list and the NRAO VLA Sky Survey
(NVSS), where NRAO and VLA stand for National Radio Astronomy Observatory
and Very Large Array, respectively. We considered sources within
30\hbox{$^{\prime\prime}$}\, of a CXB source as a likely match. Table~\ref{radio} contains
the nine NVSS sources we find and their likely CXB counterpart.
The three radio bright objects associated with CXB23, CXB127 and CXB150 are
also detected in \citet{2004AJ....128.1646N} as 330 MHz sources called
GCPS~359.845-1.845 ($\Delta=3.8$\hbox{$^{\prime\prime}$}; S$_{330 {\rm MHz}}=764$mJy),
GCPS~358.154-1.680 ($\Delta=17$\hbox{$^{\prime\prime}$}; S$_{330 {\rm MHz}}=1464$mJy), and
GCPS~359.912-1.815 ($\Delta=3.6$\hbox{$^{\prime\prime}$}; S$_{330 {\rm MHz}}=474$mJy),
respectively. For $S_\nu \propto \nu^\alpha$, where $\nu$ is the radio
frequency and $S_\nu$ the radio flux density, this yields
$\alpha=-0.5,-0.7,-0.5$, respectively. These sources have radio spectra
consistent with being Active Galactic Nuclei and we thus tentatively
classify CXB23, CXB127 and CXB150
as such.
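The two-point spectral indices quoted above follow directly from the 330~MHz and 1.4~GHz flux densities; this sketch (ours) recovers them:

```python
import math

def spectral_index(s_low, nu_low, s_high, nu_high):
    """Two-point radio spectral index alpha for S_nu ~ nu**alpha."""
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# 330 MHz fluxes from Nord et al. (2004), 1.4 GHz from the NVSS (mJy):
for name, s330, s1400 in [("CXB23", 764.0, 350.0),
                          ("CXB127", 1464.0, 540.0),
                          ("CXB150", 474.0, 235.0)]:
    print(name, round(spectral_index(s330, 330.0, s1400, 1400.0), 1))
```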
For the other GBS CXB sources with potential radio counterparts it is more
difficult to provide a classification on the basis of the potential
association with the radio source alone.
\begin{table*}
\caption{NVSS sources close to CXB X-ray sources. }
\begin{center}
\label{radio}
\begin{tabular}{lccccccc}
\multicolumn{1}{l}{CXB\#} &
\multicolumn{1}{c}{NVSS} &
\multicolumn{1}{c}{RA (J2000)} &
\multicolumn{1}{c}{Dec (J2000)} &
\multicolumn{1}{c}{$\Delta$RA (sec)} &
\multicolumn{1}{c}{$\Delta$DEC (\hbox{$^{\prime\prime}$})} &
\multicolumn{1}{c}{S1.4 (mJy)} &
\multicolumn{1}{c}{Separation (\hbox{$^{\prime\prime}$})} \\
\hline
CXB19& 175737-281000 & 17 57 37.72 & -28 10 00.7 & 0.38 & 9.4 & 3.8$\pm$ 0.6 & 7.9\\
CXB23&175230-300107 & 17 52 30.97 & -30 01 07.8 & 0.03 & 0.6 & 350$\pm$10 & 0.95\\
CXB28&175205-303026 & 17 52 05.68 & -30 30 26.7 & 0.43 & 6.7 & 2.8$\pm$0.5 & 3.6\\
CXB127&174748-312315 & 17 47 48.62 & -31 23 15.2 & 0.04 & 0.6 & 540$\pm$ 15 & 5.0\\
CXB150&175233-295645 & 17 52 33.16 & -29 56 45.5 & 0.03 & 0.6 & 235$\pm$10 & 0.71\\
CXB162&173357-302729 & 17 33 57.85 & -30 27 29.2 & 0.14 & 1.9 & 8.3$\pm$0.5 & 1.37\\
CXB163&173229-302522 & 17 32 29.34 & -30 25 22.7 & 0.62 & 8.6 & 2.4$\pm$0.6 & 17.9\\
CXB288&173251-302919 & 17 32 51.74 & -30 29 19.4 & 0.05 & 0.7 & 35$\pm$1.2 & 1.1\\
CXB384&174857-310445 & 17 48 57.10 & -31 04 45.4 & 0.46 & 8.3 & 5.5$\pm$0.7 & 3.5\\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\caption{Optical or near-infrared sources found in the {\sc simbad} data base within 5\hbox{$^{\prime\prime}$}\,of CXB X-ray sources. Radio or
X-ray sources found in the {\sc simbad} data base within 30\hbox{$^{\prime\prime}$}\,of CXB
X-ray sources. Angular D stands for the angular distance between the
{\sc simbad} and the CXB source position. PM* means high proper motion
star, EB* stands for eclipsing binary star. V* denotes variable
star and ** means double or multiple star. PN stands for planetary
nebula and YSO for young stellar object. Finally, supernova
remnant is abbreviated by SNR and Cataclysmic Variable by CV.}
\begin{center}
\label{simbad}
\begin{tabular}{llccccccc}
\multicolumn{1}{l}{\#} &
\multicolumn{1}{l}{CXB\#} &
\multicolumn{1}{c}{RAJ2000-CXB} &
\multicolumn{1}{c}{DECJ2000-CXB} &
\multicolumn{1}{c}{Angular D} &
\multicolumn{1}{c}{Simbad name} &
\multicolumn{1}{c}{RAJ2000} &
\multicolumn{1}{c}{DEJ2000} &
\multicolumn{1}{c}{ID} \\
\hline
1 & CXB2 & 268.499460 &-29.4852781 &7.2 & AX J1754.0-2929 &268.500000 &-29.483333 &X \\
2 & CXB5$^a$ & 263.0362304 &-30.474635 &0.2 & HD 315961 &263.036154 &-30.474636 & K5 \\
3a & CXB9$^a$ & 267.3192035 &-31.2550666 &0.6 & HD 161853 &267.319017 &-31.255022 & O8~III \\
3b & CXB9 & 267.3192035 &-31.2550666 &3.4 & PN RPZM 40 &267.319583 &-31.254167 &PN? \\
3c & CXB9 & 267.3192035 &-31.2550666 &10 & 1RXS J174916.5-311509 &267.318671 &-31.252244 &X \\
4 & CXB10 & 269.6350093 &-27.8789043 &0.7 & MACHO 401.48296.2600 &269.635208 &-27.879000 &CV \\
5 & CXB11 & 267.5862652 &-30.4477944 &22 & 1RXS J175019.0-302654 &267.579158 &-30.448469 &X \\
6 & CXB17$^a$ & 268.6255656 &-29.3992464 &0.2 & 2MASS J17543011-2923572 &268.625488 &-29.399244 &IR \\
7 & CXB21 & 268.7011304 &-29.3277772 &3.7 & OGLE BUL-SC4 568004 &268.700458 &-29.328611 &V* \\
8a& CXB23 & 268.1288255 &-30.0186408 &0.6 & [IBR2011] J1752-3001 &268.128960 &-30.018515 &Radio \\
8b& CXB23 & 268.1288255 &-30.0186408 &1.5 & [LKL2000] 43 &268.129167 &-30.018333 &Radio \\
8c& CXB23 & 268.1288255 &-30.0186408 &4.3 & GCPS 111 &268.130208 &-30.018500 &Radio \\
9& CXB26 & 268.4491784 &-29.7439772 &1.0 & OGLE BUL-SC3 6033 &268.448875 &-29.743861 &CV \\
10& CXB28 & 268.0240465 &-30.5064844 &2.1 & 2XMM J175205.6-303023 &268.023375 &-30.506556 &X \\
11& CXB29 & 268.5549195 &-29.4830887 &0.6 & OGLE BUL-SC4 155897 &268.554750 &-29.483028 &V* \\
12 & CXB34 & 266.8706341 &-32.2448156 &12 & 2MASS J17472806-3214462 &266.866917 &-32.246194 &X \\
13& CXB36$^a$ & 266.5600100 &-32.1033654 &4.1 & LTT 7073 &266.560160 &-32.102233 &PM* M2~V \\
14& CXB49 & 267.3703237 &-31.3067944 &0.8 & 2MASS J17492885-3118237 &267.370225 &-31.306603 &Candidate YSO \\
15& CXB54 & 268.1172224 &-29.9895816 &13 & RRF 9 &268.114167 &-29.987222 &Radio \\
16& CXB55 & 267.5735447 &-31.2430775 &12 & [RDL94] Terzan 6 A &267.574167 &-31.239722 &X \\
17& CXB58 & 268.5832235 &-29.6379212 &0.8 & 2MASS J17541996-2938157 &268.583188 &-29.637694 &EB* \\
18& CXB63 & 267.6738181 &-30.1941350 &1.3 & Cl* NGC 6451 KF 227 &267.674208 &-30.194250 &in Cluster \\
19a& CXB93$^a$ & 266.5529013 &-32.1035349 &2.5 & LTT 7072 &266.552088 &-32.103529 &PM* M2~V \\
19b& CXB93$^a$ & 266.5529013 &-32.1035349 &3.7 & ** LDS 611 / GJ 2130 C &266.553167 &-32.102528 &** \\
20& CXB97 & 269.7613953 &-27.4890113 &0.9 & V* V1723 Sgr &269.761125 &-27.488917 &EB*WUMa \\
21& CXB100& 268.4645298& -29.650292 & 2.3 & OGLE BUL-SC3 769186 & 268.464292& -29.650889& V* \\
22& CXB112& 263.2739071& -30.5863552& 2.0 & LP 920-61 & 263.274083& -30.585833& PM* M2.5 \\
23& CXB116$^a$& 269.2814150& -27.1476849& 0.4 & HD 314886 & 269.281369& -27.147590& A5 \\
24& CXB127& 266.9509625& -31.3875612& 3.0 & NVSS J174748-312315 & 266.950958& -31.388389& Radio \\
25& CXB128$^a$& 266.7138646& -25.7794799& 1.5 & CD-25 12283 & 266.714287& -25.779338& F8 \\
26a& CXB150& 268.1381712& -29.9457729& 0.7 & VCS4 J1752-2956 & 268.137946& -29.945806& Radio \\
26b& CXB150& 268.1381712& -29.9457729& 3.5 & GCPS 115 & 268.139292& -29.945750& Radio \\
27& CXB181$^a$& 268.73059000& -29.2027756& 0.3 & HD 162962 & 268.730569& -29.202854& A \\
28& CXB183& 268.6757225& -28.8307272& 3.0 & IRAS 17515-2849 & 268.674792& -28.830500& Star \\
29& CXB200$^a$& 263.4644661& -30.8417862& 0.5 & TYC 7376-433-1 & 263.464475& -30.841914& Star \\
30& CXB211$^a$& 265.8693744& -32.2325220& 2.6 & HD 160826 & 265.870188& -32.232264& B9~V \\
31& CXB225$^a$& 269.0803986& -28.4701699& 2.5 & TYC 6853-3032-1 & 269.079825& -28.470642& Star \\
32& CXB233$^a$& 268.83897484& -28.5734201& 1.1 & HD 316692 & 268.839115& -28.573143& A0 \\
33& CXB245& 268.2919765& -29.3556874& 0.5 & OGLE J175310.04-292120.6 & 268.291833& -29.355722& Dwarf Nova \\
34a& CXB256& 267.7514663& -30.3199539& 1.7 & Cl* NGC 6451 PMR 65& 267.751250& -30.319528& in Cluster \\
34b& CXB256& 267.7514663& -30.3199539& 1.7 & Cl* NGC 6451 PMR 64& 267.751917& -30.319667& in Cluster \\
35& CXB287$^a$& 263.3901785& -30.534113 & 1.5 & HD 158982 & 263.389732& -30.533990& A2~IV/V \\
36& CXB293& 268.710370 & -29.3371961& 0.4 & 2MASS J17545048-2920142 & 268.710375& -29.337306& EB* \\
37& CXB302$^a$& 269.6706800& -27.9024008& 0.3 & TYC 6849-1627-1 & 269.670621& -27.902478& Star \\
38& CXB306$^a$& 269.5399321& -28.1418302& 0.4 & HD 163613 & 269.539931& -28.141712& B1~Iab \\
39& CXB352& 268.4262642& -29.8320194& 1.3 & OGLEII DIA BUL-SC3 5152 & 268.426333& -29.831667& EB* \\
40& CXB361& 268.1649063& -29.752345 & 5.0 & OGLE BUL-SC37 441760 & 268.163375& -29.751944& V* \\
41& CXB366& 268.1003203& -29.7169994& 0.2 & 2MASS J17522407-2943013 & 268.100292& -29.717056& EB* \\
42& CXB380& 267.3212976& -31.2837757& 11.5& SNR G358.4-01.9 & 267.325000& -31.283333& SNR \\
43a& CXB422$^a$& 262.8208422& -30.3215429& 0.8 & HD 315956 & 262.820644& -30.321404& F2 \\
43b& CXB422$^a$& 262.8208422& -30.3215429& 3.2 & [RHI84] 9-186 & 262.820125& -30.320917& M4 \\
\end{tabular}
\end{center}
{\footnotesize $^a$ Association already found in \citet{2012ApJ...761..162H} }
\end{table*}
Finally, we cross-correlated the positions of the CXB sources with the
entries in {\sc simbad} where we retained optical sources that have a
position within 5\hbox{$^{\prime\prime}$}\, of that of a CXB source and radio and X-ray
sources that have a position within 30\hbox{$^{\prime\prime}$}\, from a CXB source.
Table~\ref{simbad} contains the resulting list of sources. Some of the NVSS
sources are not found this way (cf.~with Table~\ref{radio}) whereas others
are (e.g.~the match between the NVSS source and CXB23 is also found using
{\sc simbad}). Many of the associations of CXB sources with bright optical
counterparts were already found in \citet{2012ApJ...761..162H}. Note that
some CXB sources have multiple entries as they have more than one potential
counterpart within 5\hbox{$^{\prime\prime}$}, such as CXB93, CXB256 and CXB422, or they have
multiple detections of presumably the same object with slightly different
positions such CXB9, CXB23 and CXB150.
In order to estimate the number of false positive identifications, we
then shifted all the CXB source positions by 15 or 30\hbox{$^{\prime\prime}$}\, north
or south, and we redid the cross-correlation. On average, we get 5.5
{\sc simbad} matches, almost all of these spurious matches are OGLE sources,
with a few matches to stars from the open cluster NGC~6451. We
therefore conclude that $\sim$38 of our 43 opt/IR matches are real
matches, with the OGLE matches being subject to the highest
false-alarm probability.
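The shift-and-rematch estimate above can be sketched as follows. This is an illustrative reimplementation, not the code used for the survey; the test coordinates are taken from the CXB366/2MASS entry of Table~\ref{simbad}, and the haversine separation is a standard choice for small angular offsets.

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcseconds between two (RA, Dec) pairs in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Haversine formula: numerically stable at the few-arcsecond separations used here.
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def match_counts(xray, catalog, radius_arcsec=5.0, dec_shift_deg=0.0):
    """Number of X-ray positions with at least one catalogue counterpart within
    the matching radius, optionally after shifting the X-ray declinations
    (the shifted run estimates the false-positive rate)."""
    return sum(
        1 for ra, dec in xray
        if any(ang_sep_arcsec(ra, dec + dec_shift_deg, rc, dc) <= radius_arcsec
               for rc, dc in catalog)
    )
```

Running `match_counts` once at the true positions and again with `dec_shift_deg=30/3600` mimics the comparison between the real and the shifted cross-correlation.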
From Table~\ref{simbad}, we find three Cataclysmic Variables with
close positional matches to the CXB X-ray source positions (CXB10,
CXB26 and CXB245). These associations are probably all real. Finally,
CXB97 is well matched with a W~UMa type source. This is likely to be
a real match, and part of the predicted W~UMa population.
\section{Summary}
In this paper we have presented the {\it Chandra}\, source list and some properties
of the X--ray sources of observations covering the $\approx$ one quarter of the
total survey area of 12 square degrees that remained to be done after the work of
\citet{2011ApJS..194...18J}. This paper thus completes the {\it Chandra}\, survey
part of the Galactic Bulge Survey (GBS). The accurate {\it Chandra}\, source positions
will help identify the optical, near-infrared and UV counterparts. The 424
X--ray sources that have been discovered here, together with the 1216 unique
sources from \citet{2011ApJS..194...18J}, compares well with the total number
of $\approx 1650$ X--ray sources that we predicted we should detect in the
full 12 square degrees. However, this is of course no guarantee that the
number of sources per source class is close to the number we calculated.
Optical and near--infrared photometry, including variability information, and
spectroscopy are necessary to determine the nature of each of the sources (see
for instance \citealt{2013MNRAS.428.3543R}, \citealt{2013ApJ...769..120B},
\citealt{2013arXiv1310.2597H}, \citealt{2013arXiv1310.0224T}).
We discussed the apparent overdensity of sources in the (+l,-b)
quadrant of the GBS area. We conclude that this is caused by the lower
extinction in this quadrant.
We compared our source list with the source list of the RASS. Furthermore,
we compared our {\it Chandra}\, source list with the sources found in the catalog of
sources derived from pointed HRI and PSPC ROSAT observations that fall inside
the GBS area. Finally, we investigated whether some of the sources we report
on here are present in public radio surveys.
\section*{Acknowledgments} \noindent The authors would like to thank
the CXC for scheduling the {\it Chandra}\, observations. R.I.H., C.T.B., and C.B.J.
acknowledge support from the National Science Foundation under Grant
No. AST-0908789. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France. This research has made use of the
VizieR catalogue access tool, CDS, Strasbourg, France. The original
description of the VizieR service was published in A\&AS 143, 23. This
publication makes use of data products from the Wide-field Infrared
Survey Explorer, which is a joint project of the University of
California, Los Angeles, and the Jet Propulsion Laboratory/California
Institute of Technology, funded by the National Aeronautics and Space
Administration (NASA). This work is based in part on observations made
with the Spitzer Space Telescope, which is operated by the Jet
Propulsion Laboratory, California Institute of Technology under a
contract with NASA.
\bibliographystyle{apj}
\section{Introduction}
The combination of coupled-channel unitarity and chiral Lagrangians, the so-called
unitary chiral approach, has been quite
successful in explaining the nature of, and data related to, some low-lying mesonic and baryonic excitations (see Ref.~\cite{Oller:2000ma} for an introduction and earlier references). For instance,
the scalar resonances $f_0(600)$, $f_0(980)$, and $a_0(980)$, appear
naturally in the unitarized Nambu-Goldstone-boson self-interactions~\cite{Oller:1997ti}; the
$\Lambda(1405)$, on the other hand, is found to originate from
the interaction between the baryon octet of the proton and the pseudoscalar octet of the pion~\cite{Kaiser:1995eg}.
Because these states are dynamically generated from meson-meson or meson-baryon interactions,
they are viewed as composite particles or hadronic molecules, instead of genuine
$q\bar{q}$ or $qqq$ systems. Studies of their behavior in the large $N_c$ limit have
confirmed that they are largely meson-meson or meson-baryon
composite systems, though some of them may contain a small genuine $q\bar{q}$ or $qqq$ component~\cite{Pelaez:2003dy}.
Both chiral symmetry and coupled-channel unitarity play an important role in the success of
the unitary chiral approach. Chiral symmetry puts strong constraints on the allowed forms of the interaction Lagrangians. Coupled-channel unitarity, on the other hand,
extends the application region of chiral perturbation theory (ChPT). This is achieved in
the Bethe-Salpeter equation method by summing over all the $s$-wave bubble diagrams.
Inspired by the success of the unitary chiral approach, a further extension has recently been taken to
study the interaction between two vector mesons and between one vector meson and
one baryon~\cite{Molina:2008jw,Geng:2008gx,Molina:2009eb,Gonzalez:2008pv,Sarkar:2009kx}. The novelty is that instead of using interaction kernels provided by ChPT, one uses transition amplitudes provided by the hidden-gauge Lagrangians, which lead to
a suitable description of the interaction of vector mesons among themselves and of vector mesons with other mesons or baryons. Coupled-channel unitarity works in the same way as in the unitary chiral approach, but now the dynamics is provided by
the hidden-gauge Lagrangians~\cite{Bando:1984ej}. As shown by several recent works~\cite{Molina:2008jw,Geng:2008gx,Molina:2009eb,Gonzalez:2008pv,Sarkar:2009kx}, this combination seems to work very well.
In this talk, we report on a recent study of vector meson-vector meson interaction using
the unitary approach
where 11 states are found dynamically generated~\cite{Geng:2008gx}. First, we briefly explain the unitary approach. Then, we show the main results and compare them with available data, followed by a short summary at the end.
\section{Theoretical framework}
In the following, we briefly
outline the main ingredients of the unitary approach (details can be found in Refs.~\cite{Molina:2008jw,Geng:2008gx}).
There are two basic building-blocks in this approach: transition amplitudes
provided by the hidden-gauge Lagrangians~\cite{Bando:1984ej} and a unitarization procedure. We adopt the Bethe-Salpeter equation method
$T=(1-VG)^{-1}V$
to unitarize the transition amplitudes $V$ for $s$-wave interactions,
where $G$ is a diagonal matrix
of the vector meson-vector meson one-loop function
\begin{equation}
G=i\int\frac{d^4q}{(2\pi)^4}\frac{1}{q^2-M_1^2}\frac{1}{q^2-M_2^2}
\end{equation}
with $M_1$ and $M_2$ the masses of the two vector mesons.
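Schematically, the Bethe-Salpeter resummation is just a matrix inversion at each energy once $V$ and the diagonal loop matrix $G$ have been evaluated. The sketch below is illustrative only: the two-channel kernel and the (complex) loop values are placeholder numbers, not those of the hidden-gauge model.

```python
import numpy as np

def unitarize(V, G):
    """Coupled-channel resummation T = (1 - V G)^(-1) V for an n x n kernel V
    and diagonal loop matrix G = diag(G_1, ..., G_n)."""
    n = V.shape[0]
    return np.linalg.solve(np.eye(n) - V @ np.diag(G), V)

# Placeholder two-channel kernel and loop-function values at one energy.
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
G = np.array([-0.10 + 0.05j, -0.20 + 0.02j])
T = unitarize(V, G)
```

Using `np.linalg.solve` rather than an explicit inverse is the usual numerically preferable choice; with $G=0$ the resummation collapses back to the tree-level kernel.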
In Refs.~\cite{Molina:2008jw,Geng:2008gx}
three mechanisms, as shown in Fig.~\ref{fig:dia1}, have been taken into account for the calculation of the transition amplitudes $V$:
the four-vector-contact term, the $t(u)$-channel vector-exchange amplitude, and the direct box amplitude with two intermediate pseudoscalar mesons. Other possible mechanisms, e.g., crossed box amplitudes and
box amplitudes involving anomalous couplings, have been neglected; their contributions are assumed to be small, as has been explicitly shown to be the case for $\rho\rho$ scattering in Ref.~\cite{Molina:2008jw}.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.35]{vvdiagram3.eps}}
\caption{Transition amplitudes $V$ appearing in the coupled-channel Bethe-Salpeter equation.\label{fig:dia1}}
\end{figure}
Among the three mechanisms considered for $V$, the
four-vector-contact term and $t(u)$-channel vector-exchange one are responsible for
the formation of resonances or bound states if the interaction generated by them
is strong enough. In this sense,
the dynamically generated states can be thought of as ``vector meson-vector meson molecules.'' On the
other hand, the consideration of the imaginary part of the direct box amplitude allows the generated states to decay into two pseudoscalars. It should
be stressed that in the present approach these two mechanisms play quite different roles:
the four-vector-contact term and the $t(u)$-channel vector-exchange one are responsible
for generating the resonances whereas the direct box amplitude mainly contributes to their decays.
Since the energy regions we are interested in are close to the two vector-meson threshold, the three-momenta of the external vector mesons are smaller than the corresponding masses, $|\vec{q}|^2/M^2\ll1$, and therefore can be safely neglected. This considerably simplifies
the calculation of the four-vector-contact term and the $t(u)$-channel vector-exchange amplitude, whose explicit expressions can be found in Appendix A of Ref.~\cite{Geng:2008gx}. To calculate the box diagram, one has
to further introduce two parameters, $\Lambda$ and $\Lambda_b$. The parameter $\Lambda$ regulates the four-point loop function, and $\Lambda_b$ is related to the form factors used for the vector-pseudoscalar-pseudoscalar vertex, which
is inspired by the empirical form factors used in the study of vector-meson decays~\cite{Titov:2000bn}.
In the present work we use $\Lambda=1$ GeV and $\Lambda_b=1.4$ GeV, unless otherwise stated.
The values of $\Lambda$ and
$\Lambda_b$ have been fixed in Ref.~\cite{Molina:2008jw} to obtain the widths of the $f_0(1370)$ and $f_2(1270)$. They are found to provide a good description of the widths of the $f'_2(1525)$, $K_2^*(1430)$, and $f_0(1710)$ as well (see Table I).
To regularize the two vector-meson one-loop function, Eq.~(2.1), one has to introduce either cutoffs in the cutoff method or subtraction constants in the dimensional regularization method. Further details concerning the values of these parameters can be found in Ref.~\cite{Geng:2008gx}. The results presented in this talk are obtained using the dimensional regularization method with the values of the subtraction constants given in Ref.~\cite{Geng:2008gx}. In particular, we have fine-tuned the values of three
subtraction constants to reproduce the masses of the three tensor states, i.e.,
the $f_2(1270)$, the $f'_2(1525)$, and the $K_2^*(1430)$. The masses of the other
eight states are predictions.
\section{Results and discussions}
\begin{table*}[b]
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{0.1cm}
\caption{The properties (mass, width), in units of MeV, of the 11 dynamically
generated states and, where they exist, those of their PDG
counterparts~\cite{Amsler:2008zz}. The association of the dynamically generated states with their experimental counterparts is determined by matching their mass, width, and decay pattern.
\label{table:sum}}
\begin{center}
\begin{tabular}{c|c|cc|ccc}\hline\hline
$I^{G}(J^{PC})$&\multicolumn{3}{c|}{Theory} & \multicolumn{3}{c}{PDG data}\\\hline
& Pole position &\multicolumn{2}{c|}{Real axis} & Name & Mass & Width \\
& & $\Lambda_b=1.4$ GeV & $\Lambda_b=1.5$ GeV & \\\hline
$0^+(0^{++})$ & (1512,51) & (1523,257) & (1517,396)& $f_0(1370)$ & 1200$\sim$1500 & 200$\sim$500\\
$0^+(0^{++})$ & (1726,28) & (1721,133) & (1717,151)& $f_0(1710)$ & $1724\pm7$ & $137\pm 8$\\
$0^-(1^{+-})$ & (1802,78) & \multicolumn{2}{c|} {(1802,49)} & $h_1$\\
$0^+(2^{++})$ & (1275,2) & (1276,97) & (1275,111) & $f_2(1270)$ & $1275.1\pm1.2$ & $185.0^{+2.9}_{-2.4}$\\
$0^+(2^{++})$ & (1525,6) & (1525,45) &(1525,51) &$f_2'(1525)$ & $1525\pm5$ & $73^{+6}_{-5}$\\\hline
$1^-(0^{++})$ & (1780,133) & (1777,148) &(1777,172) & $a_0$\\
$1^+(1^{+-})$ & (1679,235) & \multicolumn{2}{c|}{(1703,188)} & $b_1$ \\
$1^-(2^{++})$ & (1569,32) & (1567,47) & (1566,51)& $a_2(1700)??$
\\\hline
$1/2(0^+)$ & (1643,47) & (1639,139) &(1637,162)& $K_0^*$ \\
$1/2(1^+)$ & (1737,165) & \multicolumn{2}{c|}{(1743,126)} & $K_1(1650)?$\\
$1/2(2^+)$ & (1431,1) &(1431,56) & (1431,63) &$K_2^*(1430)$ & $1429\pm 1.4$ & $104\pm4$\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
Searching for poles of the scattering matrix $T$ on the second Riemann sheet, we find
11 states in nine strangeness-isospin-spin channels as shown in Table I. Theoretical masses and widths are obtained with two
different methods: In the ``pole position'' method,
the mass corresponds to the
real part of the pole position on the complex plane and the width corresponds to twice
its imaginary part. In this case, the box diagrams
corresponding to decays into two pseudoscalars are not included.
In the ``real axis'' method, the resonance parameters are
obtained from the modulus squared of the amplitudes of the dominant channel of each state
on the real axis\footnote{See Tables I, II, and III of Ref.~\cite{Geng:2008gx}.}, where the mass corresponds to the energy at which the modulus squared has a maximum and the width corresponds to the difference
between the two energies where the modulus squared is half of the
maximum value. In this latter case, the box amplitudes are included. The results
shown in Table I have been obtained using two different values of $\Lambda_b$,
which serve to quantify the uncertainties related to this parameter.
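The real-axis prescription, mass at the maximum of the modulus squared and width from the half-maximum crossings, can be sketched numerically as follows; the Lorentzian input in the test is illustrative only, not a model amplitude.

```python
import numpy as np

def mass_and_width(sqrt_s, t2):
    """Peak position and full width at half maximum of |T|^2 sampled on a grid
    of real energies; assumes a single isolated peak on the grid."""
    i = int(np.argmax(t2))
    half = t2[i] / 2.0
    above = np.where(t2 >= half)[0]          # outermost half-maximum crossings
    return sqrt_s[i], sqrt_s[above[-1]] - sqrt_s[above[0]]
```

For a narrow, symmetric peak this reproduces the usual Breit-Wigner-like mass and width; for overlapping or strongly distorted peaks a fit would be needed instead.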
Our treatment of the box amplitudes enables us to obtain
the decay branching ratios of the generated states into two pseudoscalar mesons using the real-axis method in the
following way:
\begin{itemize}
\item First, we calculate the width of the selected state with and without the contributions of
the box diagrams,
$\Gamma(\mathrm{total})=\Gamma(\mathrm{VV})+\Gamma(\mathrm{PP})$ and $\Gamma(\mathrm{VV})$.
\item Second, we estimate the partial decay width into a particular two-pseudoscalar channel of the state by including only the contribution of the particular channel. Taking the $\pi\pi$ channel as an example, this way we obtain $\Gamma(\mathrm{w}\pi\pi)$.
The contribution of the $\pi\pi$ channel is then determined as $\Gamma({\pi\pi})=\Gamma(\mathrm{w}\pi\pi)-\Gamma(\mathrm{VV})$ and its branching ratio is calculated
as $\Gamma({\pi\pi})/\Gamma(\mathrm{total})$. The partial decay branching ratios into other two-pseudoscalar channels are calculated similarly. It should be noted that we have assumed
that interference between contributions of different channels is small, which seems
to be justified since the sum of the calculated partial decay widths agrees well with the total decay width.
\end{itemize}
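In code, the two steps above reduce to simple bookkeeping; the sketch below uses hypothetical widths and channel names purely to illustrate the subtraction.

```python
def branching_ratios(gamma_vv, gamma_total, gamma_with_channel):
    """Partial widths and branching ratios, neglecting interference between
    the two-pseudoscalar channels.  gamma_with_channel maps a channel name to
    the width obtained with only that channel's box amplitude included."""
    partial = {ch: gw - gamma_vv for ch, gw in gamma_with_channel.items()}
    ratios = {ch: g / gamma_total for ch, g in partial.items()}
    ratios["VV"] = gamma_vv / gamma_total
    return ratios
```

The no-interference assumption is checked exactly as in the text: the branching ratios, including the VV mode, should sum to approximately one.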
The branching ratios obtained in this way are given in Tables II, III, and IV, in comparison with available data~\cite{Amsler:2008zz}.
From Table II, one can see that our results for the
two $f_2$ states agree very well with the data. For the $f_0(1370)$, according to the PDG~\cite{Amsler:2008zz},
the $\rho\rho$ mode is dominant.
In our approach, however, the $\pi\pi$ mode is dominant, which is consistent with the results of
Ref.~\cite{Albaladejo:2008qa}
and the recent analysis of D. V. Bugg~\cite{Bugg:2007ja}. For the $f_0(1710)$, using the branching
ratios given in Table I, we obtain $\Gamma(\pi\pi)/\Gamma(K\bar{K})<1\%$ and $\Gamma(\eta\eta)/\Gamma(K\bar{K})\sim49\%$.
On the other hand, the PDG gives the following
averages: $\Gamma(\pi\pi)/ \Gamma(K\bar{K}) =0.41_{-0.17}^{+ 0.11}$,
and $\Gamma(\eta\eta)/\Gamma(K \bar{K})=0.48 \pm0.15$~\cite{Amsler:2008zz}. Our calculated branching ratio
for the $\eta\eta$ channel is in agreement with their average, while the ratio for
the $\pi\pi$ channel is much smaller. However, we notice that
the above PDG $\Gamma(\pi\pi)/ \Gamma(K\bar{K})$ ratio is taken from the BES data on
$J/\psi\rightarrow \gamma\pi^+\pi^-$~\cite{Ablikim:2006db}, which comes from
a partial wave analysis that includes seven resonances. On the other hand,
the BES data on $J/\psi\rightarrow\omega K^+K^-$~\cite{Ablikim:2004st} give an upper limit
$\Gamma(\pi\pi)/ \Gamma(K\bar{K})<11\%$ at the $95\%$ confidence level. Clearly more analysis is
advised to settle the issue.
Compared to the decay branching ratios into
two pseudoscalar mesons of the isospin 0 states, those of the isospin 1 states with spin either 0 or 2 are relatively small, as shown in Table III.
In Table IV, one can see that the dominant decay mode of the $K^*_2(1430)$ is $K\pi$ both theoretically and experimentally. However, other modes, such as $\rho K$, $K^*\pi$, and $K^*\pi\pi$,
account for half of its decay width according to the PDG~\cite{Amsler:2008zz}. This is consistent with the fact that our $K^*_2(1430)$ is narrower than its experimental counterpart, as can be seen from
Table I.
The three spin 1 states with the quantum numbers of $h_1$, $b_1$ and $K_1$, do not decay into two pseudoscalars in our approach since
the box diagrams do not contribute as explained in Ref.~\cite{Geng:2008gx}.
\begin{table}
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{0.2cm}
\caption{Branching ratios of the $f_0(1710)$, $f_0(1370)$, $f_2(1270)$, and $f'_2(1525)$ in comparison with data~\cite{Amsler:2008zz}.}
\begin{tabular}{c|cc|cc|cc|cc}
\hline\hline
&\multicolumn{2}{c|}{$\Gamma(\pi\pi)/\Gamma(\mathrm{total})$}&\multicolumn{2}{c|}{$\Gamma(\eta\eta)/\Gamma(\mathrm{total})$}&\multicolumn{2}{c|}{$\Gamma(K\bar{K})/\Gamma(\mathrm{total})$}
&\multicolumn{2}{c}{$\Gamma(\mathrm{VV})/\Gamma(\mathrm{total})$}\\
& Our model & Data & Our model & Data & Our model & Data & Our model & Data\\
\hline
$f_0(1370)$ & $\sim72\%$ & & $<1\%$ & & $\sim10\%$ & &$\sim18\%$ &\\
$f_0(1710)$ & $<1\%$ & & $\sim27\%$ & & $\sim55\%$&& $\sim18\%$ &\\
$f_2(1270)$ & $\sim88\%$ & $84.8\%$ & $ <1\%$ & $<1\%$ & $\sim 10\%$&$4.6\%$ & $<1\%$ &\\
$f'_2(1525)$ &$<1\%$ & $0.8\%$ & $\sim21\%$ & $10.4\%$ & $\sim 66\%$& $88.7\%$ & $\sim13\%$&\\
\hline\hline
\end{tabular}
\end{table}
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{0.3cm}
\caption{Branching ratios of the $a_0$ and $a_2$ states.}
\begin{tabular}{c|ccc}
\hline\hline
&$\Gamma(K\bar{K})/\Gamma(\mathrm{total})$&$\Gamma(\pi\eta)/\Gamma(\mathrm{total})$&$\Gamma(\mathrm{VV})/\Gamma(\mathrm{total})$\\
\hline
$a_0$ & $\sim27\%$ & $\sim23\%$ & $\sim50\%$ \\
$a_2$ & $\sim21\%$ & $\sim17\%$ & $\sim62\%$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{0.3cm}
\begin{center}
\caption{Branching ratios of the $K^*_0$ and $K^*_2(1430)$ states in comparison with data~\cite{Amsler:2008zz}.}
\begin{tabular}{c|cc|cc|cc}
\hline\hline
&\multicolumn{2}{c|}{$\Gamma(K\pi)/\Gamma(\mathrm{total})$}&\multicolumn{2}{c|}{$\Gamma(K\eta)/\Gamma(\mathrm{total})$}&\multicolumn{2}{c}{$\Gamma(\mathrm{VV})/\Gamma(\mathrm{total})$}\\
& Our model & Data & Our model & Data & Our model & Data \\
\hline
$K^*_0$ & $\sim65\%$ & & $\sim9\%$ & & $\sim26\%$ \\
$K^*_2(1430)$ & $\sim93\%$ & $49.9\%$ & $\sim5\%$ & $<1\%$ & $\sim2\%$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
It is interesting to note that out of the 21 combinations of
strangeness, isospin and spin, we have found resonances only in nine of
them. In none of the ``exotic'' channels, i.e., those whose quantum
numbers cannot be formed from $q\bar{q}$ combinations, do we find dynamically
generated resonances; these include the three (strangeness=0, isospin=2)
channels, the three (strangeness=1, isospin=3/2) channels, and the six
strangeness=2 channels (with either isospin=0 or isospin=1).
On the other hand, there do exist some structures on the real axis.
For instance, in the (strangeness=0, isospin=2) channel, one finds a
dip around $\sqrt{s}=1300$ MeV in the spin=0 channel, and a broad
bump in the spin=2 channel around $\sqrt{s}=1400$ MeV, as can be
clearly seen from Fig.~7 of Ref.~\cite{Geng:2008gx}. In the
(strangeness=1, isospin=3/2) and (strangeness=2, isospin=1) channels,
one observes similar structures occurring at shifted energies due to
the different masses of the $\rho$ and the $K^*$, as can be seen
from Figs.~8 and 9 of Ref.~\cite{Geng:2008gx}. However, these states do not correspond to poles on the complex plane, and hence, according to the common criteria, they do
not qualify as resonances.
\section{Summary and conclusions}
We have performed a study of vector meson-vector meson interaction
using a unitary approach. Employing the coupled-channel
Bethe-Salpeter equation to unitarize the transition
amplitudes provided by the hidden-gauge Lagrangians, we find that 11 states are dynamically generated
in nine strangeness-isospin-spin channels. Among them, five states are associated with those reported in
the PDG, i.e., the $f_0(1370)$, the $f_0(1710)$, the $f_2(1270)$,
the $f'_2(1525)$, and the $K_2^*(1430)$. The association of two other
states, the $a_2(1700)$ and the $K_1(1650)$, is likely, particularly the
$K_1(1650)$, but less
certain. Four of the 11 dynamically generated states cannot be
associated with any known states in the PDG. Another interesting finding of our work is that the broad bumps found in
four exotic channels do not correspond to poles on the
complex plane and, hence, do
not qualify as resonances.
\section{Acknowledgments}
L. S. Geng thanks R. Molina, L. Alvarez-Ruso, and M. J. Vicente Vacas for
useful discussions. This work is partly
supported by DGICYT Contract No. FIS2006-03438 and the EU Integrated
Infrastructure Initiative Hadron Physics Project under contract
RII3-CT-2004-506078. L.S.G. acknowledges support from the MICINN in the Program ``Juan de la Cierva.''
\chapter*{Abstract}
\textsc{Uniform} algebras have been extensively investigated because of their importance in the theory of uniform approximation and as examples of complex Banach algebras. An interesting question is whether analogous algebras exist when a complete valued field other than the complex numbers is used as the underlying field of the algebra. In the Archimedean setting, this generalisation is given by the theory of real function algebras introduced by S. H. Kulkarni and B. V. Limaye in the 1980s. This thesis establishes a broader theory accommodating any complete valued field as the underlying field by involving Galois automorphisms and using non-Archimedean analysis. The approach taken keeps close to the original definitions from the Archimedean setting.
\vspace{3mm}\\
Basic function algebras are defined and generalise real function algebras to all complete valued fields whilst retaining the obligatory properties of uniform algebras.\\
Several examples are provided. A basic function algebra is constructed in the non-Archimedean setting on a $p$-adic ball such that the only globally analytic elements of the algebra are constants.\\
Each basic function algebra is shown to have a lattice of basic extensions related to the field structure. In the non-Archimedean setting it is shown that certain basic function algebras have residue algebras that are also basic function algebras.\\
A representation theorem is established. Commutative unital Banach $F$-algebras with square preserving norm and finite basic dimension are shown to be isometrically $F$-isomorphic to some subalgebra of a basic function algebra. The condition of finite basic dimension is always satisfied in the Archimedean setting by the Gel'fand-Mazur Theorem. The spectrum of an element is considered.\\
The theory of non-commutative real function algebras was established by K. Jarosz in 2008. The possibility of their generalisation to the non-Archimedean setting is established in this thesis and also appeared in a paper by J. W. Mason in 2011.\\
In the context of complex uniform algebras, a new proof is given using transfinite induction of the Feinstein-Heath Swiss cheese ``Classicalisation'' theorem. This new proof also appeared in a paper by J. W. Mason in 2010.
\chapter*{Acknowledgements}
\pdfbookmark[0]{Acknowledgements}{acknowledgements}
I would particularly like to thank my supervisor J. F. Feinstein for his guidance and enthusiasm over the last four years. Through his expert knowledge of Banach algebra theory he has helped me to identify several productive lines of research whilst always allowing me the freedom required to make the work my own.\\
In addition to my supervisor, I. B. Fesenko also positively influenced the direction of my research. During my doctoral training I undertook a postgraduate training module on the theory of local fields given by I. B. Fesenko. With extra reading, this enabled me to work both in the Archimedean and non-Archimedean settings as implicitly suggested by my thesis title.\\
It was a pleasure to know my friends in the algebra and analysis group at Nottingham and I thank them for their interest in my work and hospitality.\\
I am grateful to the School of Mathematical Sciences at the University of Nottingham for providing funds in support of my conference participation and visits.\\
Similarly I appreciate the support given to me by the EPSRC through a Doctoral Training Grant.\\
This PhD thesis was examined by A. G. O'Farrell and J. Zacharias who I thank for their time and interest in my work.
\tableofcontents
\newpage
\pagenumbering{arabic}
\include{chapter1}
\include{chapter2}
\include{chapter3}
\include{chapter4}
\include{chapter5}
\include{chapter6}
\include{chapter7}
\phantomsection
\addcontentsline{toc}{chapter}{References}
\nocite{*}
\chapter[Introduction]{Introduction}
\label{cha:IN}
This short chapter provides an informal overview of the material in this thesis. Justification of the statements made in this chapter can therefore be found in the main body of the thesis which starts at Chapter \ref{cha:CVF}.
\section[Background and Overview]{Background and Overview}
\label{sec:INBO}
Complex uniform algebras have been extensively investigated because of their importance in the theory of uniform approximation and as examples of complex Banach algebras. Let $C_{\mathbb{C}}(X)$ denote the complex Banach algebra of all continuous complex-valued functions defined on a compact Hausdorff space $X$. A complex uniform algebra $A$ is a subalgebra of $C_{\mathbb{C}}(X)$ that is complete with respect to the sup norm, contains the constant functions making it a unital complex Banach algebra and separates the points of $X$ in the sense that for all $x_{1},x_{2}\in X$ with $x_{1}\not=x_{2}$ there is $f\in A$ satisfying $f(x_{1})\not=f(x_{2})$. Attempting to generalise this definition to other complete valued fields simply by replacing $\mathbb{C}$ with some other complete valued field $L$ produces very limited results. This is because the various versions of the Stone-Weierstrass theorem restrict our attention to $C_{L}(X)$ in this case.\\
However the theory of real function algebras introduced by S. H. Kulkarni and B. V. Limaye in the 1980s does provide an interesting generalisation of complex uniform algebras. One important departure in the definition of these algebras from that of complex uniform algebras is that they are real Banach algebras of continuous complex-valued functions. Similarly the elements of the algebras introduced in this thesis are also continuous functions that take values in some complete valued field or division ring extending the field of scalars over which the algebra is a vector space.\\
A prominent aspect of the emerging theory is that it has a lot to do with representation. As a very simple example, the field of complex numbers itself is isometrically isomorphic to a real function algebra, albeit on a two-point space.\\
When considering the generalisation of complex uniform algebras over all complete valued fields I naturally wanted the complex uniform algebras and real function algebras to appear directly as instances of the new theory. This resulted in the definition of basic function algebras involving the use of a Galois automorphism and homeomorphic endofunction that interact in a useful way, see Definition \ref{def:CGBFA}. In retrospect these particular algebras should more appropriately be referred to as cyclic basic function algebras since the functions involved take values in some cyclic extension of the underlying field of scalars of the algebra.\\
Necessarily this thesis starts by surveying complete valued fields and their properties. The transition from the Archimedean setting to the non-Archimedean setting preserves in places several of the nice properties that complete Archimedean fields have. However all complete non-Archimedean fields are totally disconnected, some of them are not locally compact and there is no non-Archimedean analog of the Gel'fand-Mazur Theorem.\\
On the other hand some complete non-Archimedean fields have interesting properties that only appear in the non-Archimedean setting. Consider for example the closed unit disc of the complex plane. It is closed under multiplication but not with respect to addition. In the non-Archimedean setting the closed unit ball $\mathcal{O}_{F}$, of a complete valued field $F$, is a ring since in this case the valuation involved observes the strong version of the triangle inequality, see Definition \ref{def:CVFV}. The set $\mathcal{M}_{F}=\{a\in F:|a|_{F}<1\}$ is a maximal ideal of $\mathcal{O}_{F}$ from which the residue field $\overline{F}=\mathcal{O}_{F}/\mathcal{M}_{F}$ is obtained. The residue field is of great importance in the study of such fields.\\
Similarly in the non-Archimedean setting we will see that certain basic function algebras have residue algebras that are also basic function algebras. In the process of proving this result an interesting fact is shown concerning a large class of complete non-Archimedean fields. For such a field $F$ and every finite extension $L$ of $F$, extending $F$ as a valued field, it is shown that for each Galois automorphism $g\in\mbox{Gal}(^{L}/_{F})$ there exists a set $\mathcal{R}_{L,g}\subseteq\mathcal{O}_{L}$ of residue class representatives such that the restriction of $g$ to $\mathcal{R}_{L,g}$ is an endofunction, i.e. a self map, on $\mathcal{R}_{L,g}$. This fact is probably known to certain number theorists.\\
This thesis also includes several examples of basic function algebras and these are considered at depth. A new proof of an existing theorem in the setting of complex uniform algebras is given and theory in the non-commutative setting is also considered.\\
With respect to commutative Banach algebra theory, Chapter \ref{cha:RT} presents an interesting new Gel'fand representation result extending those of the Archimedean setting. In particular we have the following theorem where the condition of finite basic dimension is automatically satisfied in the Archimedean setting and compensates for the lack of a Gel'fand-Mazur Theorem in the non-Archimedean setting. See Chapter \ref{cha:RT} for full details.
\begin{theorem}
\label{thr:INREPCRLF}
Let $F$ be a locally compact complete valued field with nontrivial valuation. Let $A$ be a commutative unital Banach $F$-algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$ and finite basic dimension. Then:
\begin{enumerate}
\item[(i)]
if $F$ is the field of complex numbers then $A$ is isometrically $F$-isomorphic to a complex uniform algebra on some compact Hausdorff space $X$;
\item[(ii)]
if $F$ is the field of real numbers then $A$ is isometrically $F$-isomorphic to a real function algebra on some compact Hausdorff space $X$;
\item[(iii)]
if $F$ is non-Archimedean then $A$ is isometrically $F$-isomorphic to a non-Archimedean analog of the real function algebras on some Stone space $X$ where by a Stone space we mean a totally disconnected compact Hausdorff space.
\end{enumerate}
In particular $A$ is isometrically $F$-isomorphic to some subalgebra $\hat{A}$ of a basic function algebra and $\hat{A}$ separates the points of $X$.
\end{theorem}
Note that (i) and (ii) of Theorem \ref{thr:INREPCRLF} are the well known results from the Archimedean setting. This brings us to the following summary.
\section[Summary]{Summary}
\label{sec:INSU}
\begin{enumerate}
\item[Chapter 2:]
The relevant background concerning complete valued fields is provided. Several examples are given and the topological properties of complete valued fields are compared and discussed. A particularly useful and well known way of expressing the extension of a valuation is considered and the relevant Galois theory is introduced.
\item[Chapter 3:]
Some background concerning functional analysis over complete valued fields is given. Analytic functions are discussed. Banach $F$-algebras are introduced and the spectrum of an element is considered.
\item[Chapter 4:]
Complex uniform algebras are introduced. In the context of complex uniform algebras, a new proof, using transfinite induction, is given of the Feinstein-Heath Swiss cheese ``Classicalisation'' theorem. This new proof also appeared in a paper by J. W. Mason in 2010. This is followed by a preliminary discussion concerning non-complex analogs of uniform algebras. Real function algebras are introduced.
\item[Chapter 5:]
Basic function algebras are defined providing the required generalisation of real function algebras to all complete valued fields. A generalisation theorem proves that basic function algebras have the obligatory properties of uniform algebras.\\
Several examples are provided. Complex uniform algebras and real function algebras now appear as instances of the new theory. A basic function algebra is constructed in the non-Archimedean setting on a $p$-adic ball such that the only globally analytic elements of the algebra are constants.\\
Each basic function algebra is shown to have a lattice of basic extensions related to the field structure. Further, in the non-Archimedean setting it is shown that certain basic function algebras have residue algebras that are also basic function algebras. To prove this each Galois automorphism, for certain field extensions, is shown to restrict to an endofunction on some set of residue class representatives.
\item[Chapter 6:]
A representation theorem is established in the context of locally compact complete fields with nontrivial valuation. For such a field $F$, commutative unital Banach $F$-algebras with square preserving norm and finite basic dimension are shown to be isometrically $F$-isomorphic to some subalgebra of a basic function algebra. The condition of finite basic dimension is automatically satisfied in the Archimedean setting by the Gel'fand-Mazur Theorem.
\item[Chapter 7:]
The theory of non-commutative real function algebras was established by K. Jarosz in 2008. The possibility of their generalisation to the non-Archimedean setting is established in this thesis, having been originally pointed out in a paper by J. W. Mason in 2011. The thesis concludes with a list of open questions highlighting the potential for further interesting developments of this theory.
\end{enumerate}
\chapter[Complete valued fields]{Complete valued fields}
\label{cha:CVF}
In this chapter we survey some of the basic facts and definitions concerning complete valued fields. Whilst also providing a background, most of the material presented here is required by later chapters and has been selected accordingly.
\section{Introduction}
\label{sec:CVFINT}
We begin with some definitions.
\begin{definition}
We adopt the following terminology:
\begin{enumerate}
\item[(i)]
Let $F$ be a field. We will call a multiplicative norm $|\cdot|_{F}:F\rightarrow \mathbb{R}$ a {\em valuation} on $F$ and $F$ together with $|\cdot|_{F}$ a {\em valued field}.
\item[(ii)]
Let $F$ be a valued field. If the valuation on $F$ satisfies the strong triangle inequality,
\begin{equation*}
|a-b|_{F}\leq\mbox{max}(|a|_{F},|b|_{F})\mbox{ for all }a,b\in F,
\end{equation*}
then we call $|\cdot|_{F}$ a {\em non-Archimedean valuation} and $F$ a {\em non-Archimedean field}. Else we call $|\cdot|_{F}$ an {\em Archimedean valuation} and $F$ an {\em Archimedean field}.
\item[(iii)]
If a valued field is complete with respect to the metric obtained from its valuation then we call it a {\em complete valued field}. Similarly we have {\em complete valuation} and {\em complete non-Archimedean field} etc.
\item[(iv)]
More generally, a metric space $(X,d)$ is called an {\em ultrametric space} if the metric $d$ satisfies the strong triangle inequality,
\begin{equation*}
d(x,z)\leq\mbox{max}(d(x,y),d(y,z))\mbox{ for all }x,y,z\in X.
\end{equation*}
\end{enumerate}
\label{def:CVFV}
\end{definition}
The following theorem is a characterisation of non-Archimedean fields, courtesy of \cite[p18]{Schikhof}.
\begin{theorem}
Let $F$ be a valued field. Then $F$ is a non-Archimedean field if and only if $|2|_{F}\leq1$.
\label{thr:CVFCHAR}
\end{theorem}
\begin{remark}
Whilst it is clear from the definition of the strong triangle inequality that an Archimedean field cannot be extended as a valued field to a non-Archimedean field, Theorem \ref{thr:CVFCHAR} also shows that a non-Archimedean field cannot be extended to an Archimedean field.
\label{rem:CVFE}
\end{remark}
\begin{theorem}
Let $F$ be a valued field. Let $\mathfrak{C}$, with pointwise operations, be the ring of Cauchy sequences of elements of $F$ and let $\mathfrak{N}$ denote its ideal of null sequences. Then the completion $\mathfrak{C}/\mathfrak{N}$ of $F$ with the function
\begin{equation*}
|(a_{n})+\mathfrak{N}|_{\mathfrak{C}/\mathfrak{N}}:=\lim_{n\to\infty}|a_{n}|_{F},
\end{equation*}
for $(a_{n})+\mathfrak{N}\in\mathfrak{C}/\mathfrak{N}$, is a complete valued field extending $F$ as a valued field.
\label{thr:CVFCOM}
\end{theorem}
\begin{proof}
We will only highlight one important part of the proof since further details can be found in \cite[p80]{McCarthy}. We first note that since the valuation $|\cdot|_{F}$ is multiplicative we have $|a^{-1}|_{F}=|a|^{-1}_{F}$ for all units $a\in F^{\times}$. Let $(a_{n})$ be a Cauchy sequence taking values in $F$ but not a null sequence. Then there exists $\delta>0$ and $N\in\mathbb{N}$ such that for all $n>N$ we have $|a_{n}|_{F}>\delta$. If $(a_{n})$ takes the value $0$ then a null sequence can be added to $(a_{n})$ such that the resulting sequence $(b_{n})$ does not take the value $0$ and $(b_{n})$ agrees with $(a_{n})$ for all $n>N$. Hence for all $m>N$ and $n>N$ we have
\begin{equation*}
|b^{-1}_{m}-b^{-1}_{n}|_{F}=|b^{-1}_{m}|_{F}|b^{-1}_{n}|_{F}|b_{n}-b_{m}|_{F}<\frac{1}{\delta^{2}}|b_{n}-b_{m}|_{F}
\end{equation*}
and so the sequence $(b^{-1}_{n})$ is also a Cauchy sequence. This shows that the ideal of null sequences $\mathfrak{N}$ is maximal and $\mathfrak{C}/\mathfrak{N}$ is therefore a field as opposed to merely a ring.
\end{proof}
\begin{definition}
Let $F$ be a valued field. We will call a function $\nu:F\rightarrow\mathbb{R}\cup\{\infty\}$ a {\em valuation logarithm} if and only if for an appropriate fixed $r>1$ we have $|a|_{F}=r^{-\nu(a)}$ for all $a\in F$.
\label{def:CVFVL}
\end{definition}
\begin{remark} We have the following basic facts.
\begin{enumerate}
\item[(i)]
With reference to Definition \ref{def:CVFV}, a valuation logarithm $\nu$ on a non-Archimedean field $F$ has the following properties. For $a, b\in F$ we have:
\begin{enumerate}
\item[(1)]
$\nu(a+b)\geq\mbox{min}(\nu(a),\nu(b))$;
\item[(2)]
$\nu(ab)=\nu(a)+\nu(b)$;
\item[(3)]
$\nu(1)=0$ and $\nu(a)=\infty$ if and only if $a=0$.
\end{enumerate}
\item[(ii)]
Every valued field $F$ has a valuation logarithm since we can take $r=e$, where $e:=\exp(1)$, and for $a\in F$ define
\begin{equation*}
\nu(a): = \left\{ \begin{array}{l@{\quad\mbox{if}\quad}l}-\log|a|_{F} & a\not=0 \\ \infty & a=0. \end{array} \right.
\end{equation*}
\item[(iii)]
If $\nu$ is a valuation logarithm on a valued field $F$ then so is $c\nu$ for any $c\in\mathbb{R}$ with $c>0$. However there will sometimes be a preferred choice. For example a valuation logarithm of {\em rank 1} is such that $\nu(F^{\times})=\mathbb{Z}$.
\end{enumerate}
\label{rem:CVFVL}
\end{remark}
\begin{lemma}
Let $F$ be a non-Archimedean field with valuation logarithm $\nu$. If $a,b\in F$ are such that $\nu(a)<\nu(b)$ then $\nu(a+b)=\nu(a)$.
\label{lem:CVFEQ}
\end{lemma}
\begin{proof}
Given that $\nu(a)<\nu(b)$ we have $\nu(a+b)\geq\mbox{min}(\nu(a),\nu(b))=\nu(a)$. Moreover $0=\nu(1)=\nu((-1)(-1))=2\nu(-1)$ therefore giving $\nu(-b)=\nu(-1)+\nu(b)=\nu(b)$. Hence $\nu(a)\geq\mbox{min}(\nu(a+b),\nu(-b))=\mbox{min}(\nu(a+b),\nu(b))$. But $\nu(a)<\nu(b)$ and so $\nu(a)\geq\nu(a+b)$ giving $\nu(a+b)=\nu(a)$.
\end{proof}
Before looking at specific examples of complete valued fields we first consider some of the theory concerning series representations of elements.
\subsection{Series expansions of elements of valued fields}
\label{subsec:CVFSE}
\begin{definition}
Let $F$ be a valued field. If 1 is an isolated point of $|F^{\times}|_{F}$, equivalently 0 is an isolated point of $\nu(F^{\times})$ for $\nu$ a valuation logarithm on $F$, then the valuation on $F$ is said to be {\em discrete}, else it is said to be {\em dense}.
\label{def:CVFDIS}
\end{definition}
\begin{lemma}
If a valued field $F$ has a discrete valuation then $\nu(F^{\times})$ is a discrete subset of $\mathbb{R}$ for $\nu$ a valuation logarithm on $F$.
\label{lem:CVFDV}
\end{lemma}
\begin{proof}
We show the contrapositive. Suppose there is a sequence $(a_{n})$ of elements of $F^{\times}$ such that $\nu(a_{n})$ converges to a point of $\mathbb{R}$ with $\nu(a_{n})\not=\lim_{m\to\infty}\nu(a_{m})$ for all $n\in\mathbb{N}$. We can take $(a_{n})$ to be such that $\nu(a_{n})\not=\nu(a_{m})$ for $n\not=m$. Then setting $b_{n}:=a_{n}a^{-1}_{n+1}$ defines a sequence $(b_{n})$ such that
\begin{equation*}
\nu(b_{n})=\nu(a_{n}a^{-1}_{n+1})=\nu(a_{n})+\nu(a^{-1}_{n+1})=\nu(a_{n})-\nu(a_{n+1})
\end{equation*}
which converges to 0.
\end{proof}
The following standard definitions are particularly important.
\begin{definition}
For $F$ a non-Archimedean field with valuation logarithm $\nu$, define:
\begin{enumerate}
\item[(i)]
$\mathcal{O}_{F}:=\{a\in F:\nu(a)\geq0,\mbox{ equivalently }|a|_{F}\leq1\}$ the {\em ring of integers} of $F$ noting that this is a ring by the strong triangle inequality;
\item[(ii)]
$\mathcal{O}^{\times}_{F}:=\{a\in F:\nu(a)=0,\mbox{ equivalently }|a|_{F}=1\}$ the units of $\mathcal{O}_{F}$;
\item[(iii)]
$\mathcal{M}_{F}:=\{a\in F:\nu(a)>0,\mbox{ equivalently }|a|_{F}<1\}$ the maximal ideal of $\mathcal{O}_{F}$ of elements without inverses in $\mathcal{O}_{F}$;
\item[(iv)]
$\overline{F}:=\mathcal{O}_{F}/\mathcal{M}_{F}$ the {\em residue field} of $F$ of {\em residue classes}.
\end{enumerate}
\label{def:CVFRF}
\end{definition}
\begin{definition}
Let $F$ be a field with a discrete valuation and valuation logarithm $\nu$.
\begin{enumerate}
\item[(i)]
If $|F|_{F}=\{0,1\}$, equivalently $\nu(F)=\{0,\infty\}$, then the valuation is called {\em trivial}.
\item[(ii)]
If $|\cdot|_{F}$ is not trivial then an element $\pi\in F^{\times}$ such that $\nu(\pi)=\min\left(\nu(F^{\times})\cap(0,\infty)\right)$ is called a {\em prime} element since $\pi\not=ab$ for all $a,b\in\mathcal{O}_{F}\backslash\mathcal{O}^{\times}_{F}$ given above.
\end{enumerate}
\label{def:CVFPE}
\end{definition}
\begin{remark}
For a field $F$, as in part (ii) of Definition \ref{def:CVFPE}, it follows easily from Lemma \ref{lem:CVFDV} that $F$ has a prime element $\pi$ and from Remark \ref{rem:CVFVL} that $\nu(F^{\times})=\nu(\pi)\mathbb{Z}$ which we call the {\em value group}. Moreover for $a\in F^{\times}$ we have
\begin{equation*}
|a|_{F}=r^{-\nu(a)}=e^{\nu(\pi)\log(r)(-\nu(a)/\nu(\pi))}=e^{\log(|\pi|^{-1}_{F})(-\nu(a)/\nu(\pi))}=\left(|\pi|^{-1}_{F}\right)^{-\nu(a)/\nu(\pi)}
\end{equation*}
giving a rank 1 valuation logarithm $\frac{1}{\nu(\pi)}\nu$ noting that $|\pi|^{-1}_{F}>1$ since $r>1$.
\label{rem:CVFVG}
\end{remark}
\begin{theorem}
Let $F$ be a valued field with a non-trivial, discrete valuation. Let $\pi$ be a prime element of $F$ and let $\mathcal{R}\subseteq\mathcal{O}^{\times}_{F}\cup\{0\}$ be a set of residue class representatives with 0 representing $\bar{0}=\mathcal{M}_{F}$. Then every element $a\in F^{\times}$ has a unique series expansion over $\mathcal{R}$ of the form
\begin{equation*}
a=\sum_{i=n}^{\infty}a_{i}\pi^{i}\quad\mbox{for some }n\in\mathbb{Z}\mbox{ with }a_{n}\not=0.
\end{equation*}
Moreover if $F$ is complete then every series over $\mathcal{R}$ of the above form defines an element of $F^{\times}$.
\label{thr:CVFSE}
\end{theorem}
\begin{remark}
Concerning Theorem \ref{thr:CVFSE}.
\begin{enumerate}
\item[(i)]
A proof is given in \cite[p28]{Schikhof}, in fact a generalisation of Theorem \ref{thr:CVFSE} is also given that can be applied to non-Archimedean fields with a dense valuation.
\item[(ii)]
For $a=\sum_{i=n}^{\infty}a_{i}\pi^{i}$ as in Theorem \ref{thr:CVFSE} and using the rank 1 valuation logarithm of Remark \ref{rem:CVFVG} we have, for each $i\geq n$, $\nu(a_{i}\pi^{i})=\nu(a_{i})+i\nu(\pi)=i$ if $a_{i}\pi^{i}\not=0$ and $\nu(a_{i}\pi^{i})=\infty$ otherwise. Further by Lemma \ref{lem:CVFEQ} $b_{m}:=\sum_{i=n}^{m}a_{i}\pi^{i}$ defines a Cauchy sequence in $F$, with respect to $|\cdot|_{F}$, and its limit is $a$. Hence, since for each $m>n$ we have $\big||a|_{F}-|b_{m}|_{F}\big|\leq |a-b_{m}|_{F}$, $|b_{m}|_{F}$ converges in $\mathbb{R}$ to $|a|_{F}$. But $\nu(b_{m})=n$ for all $m>n$ by Lemma \ref{lem:CVFEQ} and so $\nu(a)=n$.
\end{enumerate}
\label{rem:CVFSV}
\end{remark}
We will now look at some examples of complete valued fields and consider the availability of such structures in the Archimedean and non-Archimedean settings.
\subsection{Examples of complete valued fields}
\label{subsec:CVFEX}
\begin{example} Here are some non-Archimedean examples.
\begin{enumerate}
\item[(i)]
Let $F$ be any field. Then $F$ with the trivial valuation is a non-Archimedean field. It is complete, noting that in this case each Cauchy sequence is constant after some finite number of initial values. The trivial valuation induces the discrete topology on $F$, in which every subset of $F$ is {\em clopen} i.e. both open and closed. Furthermore $F$ will coincide with its own residue field.
\item[(ii)]
There are examples of complete non-Archimedean fields of non-zero characteristic with non-trivial valuation. For each there is a prime $p$ such that the field is a transcendental extension of the finite field $\mathbb{F}_{p}$ of $p$ elements. The reason why such a field is not an algebraic extension of $\mathbb{F}_{p}$ follows easily from the fact that the only valuation on a finite field is the trivial valuation. One example of this sort is the valued field of formal Laurent series $\mathbb{F}_{p}\{\{T\}\}$ in one variable over $\mathbb{F}_{p}$ with termwise addition,
\begin{equation*}
\mbox{$\sum_{n\in\mathbb{Z}}a_{n}T^{n}+\sum_{n\in\mathbb{Z}}b_{n}T^{n}:=\sum_{n\in\mathbb{Z}}(a_{n}+b_{n})T^{n}$,}
\end{equation*}
multiplication in the form of the Cauchy product,
\begin{equation*}
\mbox{$(\sum_{n\in\mathbb{Z}}a_{n}T^{n})(\sum_{n\in\mathbb{Z}}b_{n}T^{n}):=\sum_{n\in\mathbb{Z}}(\sum_{i\in\mathbb{Z}}a_{i}b_{n-i})T^{n}$,}
\end{equation*}
and valuation given at zero by $|0|_{T}:=0$ and on the units $\mathbb{F}_{p}\{\{T\}\}^{\times}$ by,
\begin{equation*}
\mbox{$|\sum_{n\in\mathbb{Z}}a_{n}T^{n}|$}_{T}:=r^{-\mbox{min}\{n:a_{n}\not= 0\}}\mbox{ for any fixed }r>1.
\end{equation*}
The valuation on $\mathbb{F}_{p}\{\{T\}\}$ is discrete and its residue field is isomorphic to $\mathbb{F}_{p}$. The above construction also gives a complete non-Archimedean field if we replace $\mathbb{F}_{p}$ with any other field $F$, see \cite[p288]{Schikhof}.
\item[(iii)]
On the other hand complete valued fields of characteristic zero necessarily contain one of the completions of the rational numbers $\mathbb{Q}$. The Levi-Civita field $R$ is such a valued field, see \cite{Shamseddine}. Each element $a\in R$ can be represented as a formal power series of the form
\begin{equation*}
\mbox{$a=\sum_{q\in\mathbb{Q}}a_{q}T^{q}$ with $a_{q}\in\mathbb{R}$ for all $q\in\mathbb{Q}$}
\end{equation*}
such that for each $q\in\mathbb{Q}$ there are at most finitely many $q'<q$ with $a_{q'}\not=0$. Moreover addition, multiplication and the valuation for $R$ can all be obtained by analogy with example (ii) above. A total order can be put on the Levi-Civita field such that the order topology agrees with the topology induced by the field's valuation which is non-trivial. To verify this one shows that the order topology sub-base of open rays topologically generates the valuation topology sub-base of open balls and vice versa. This might be useful to those interested in generalising the theory of C*-algebras to new fields where there is a need to define positive elements. The completion of $\mathbb{Q}$ that the Levi-Civita field contains is in fact $\mathbb{Q}$ itself since the valuation when restricted to $\mathbb{Q}$ is trivial.
\end{enumerate}
\label{exa:CVFA}
\end{example}
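As an aside, the arithmetic of example (ii) is easy to experiment with on finitely many terms. The following Python sketch (an illustration only; series are stored as exponent-to-coefficient dictionaries and the function names are ours) implements the Cauchy product over $\mathbb{F}_{p}$ together with the valuation logarithm $\min\{n:a_{n}\not=0\}$, from which the multiplicativity of $|\cdot|_{T}$ can be observed on examples.

```python
# Truncated formal Laurent series over F_p, represented as
# {exponent: coefficient} dictionaries with coefficients in {0, ..., p-1}.

def laurent_mul(f, g, p):
    """Cauchy product (the coefficient of T^n is sum_i a_i * b_{n-i})
    of two finite Laurent series over F_p; zero coefficients are dropped."""
    h = {}
    for m, a in f.items():
        for n, b in g.items():
            h[m + n] = (h.get(m + n, 0) + a * b) % p
    return {n: c for n, c in h.items() if c != 0}

def nu_T(f):
    """Valuation logarithm min{n : a_n != 0}, so that |f|_T = r**(-nu_T(f));
    the zero series has valuation logarithm infinity."""
    return min(f) if f else float("inf")
```

For instance over $\mathbb{F}_{5}$, multiplying $T^{-1}+2$ by $3T$ gives $3+T$, and the valuation logarithms add, reflecting that $|\cdot|_{T}$ is multiplicative since $\mathbb{F}_{p}$ has no zero divisors.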
We consider what examples of complete Archimedean fields there are. Since the only valuation on a finite field is the trivial valuation, it follows from Remark \ref{rem:CVFE} that every Archimedean field is of characteristic zero. Moreover every non-trivial valuation on the rational numbers is given by Ostrowski's Theorem, see \cite[p2]{Fesenko}\cite[p22]{Schikhof}.
\begin{theorem}
A non-trivial valuation on $\mathbb{Q}$ is either a power of the absolute valuation $|\cdot|_{\infty}^{c}$, with $0<c\leq1$, or a power of the $p$-adic valuation $|\cdot|_{p}^{c}$ for some prime $p\in\mathbb{N}$ with positive $c\in\mathbb{R}$.
\label{thr:CVFOS}
\end{theorem}
\begin{remark}
We will look at the $p$-adic valuations on $\mathbb{Q}$ and the $p$-adic numbers in Example \ref{exa:CVFPN}. We note that any two of the valuations mentioned in Theorem \ref{thr:CVFOS} that are not the same up to a positive power will also not be equivalent as norms. Further, since all of the $p$-adic valuations are non-Archimedean, Theorem \ref{thr:CVFOS} implies that every complete Archimedean field contains $\mathbb{R}$, with a positive power of the absolute valuation, as a valued sub-field. It turns out that almost all complete valued fields are non-Archimedean with $\mathbb{R}$ and $\mathbb{C}$ being the only two Archimedean exceptions up to isomorphism as topological fields, see \cite[p36]{Schikhof}. This in part follows from the Gel'fand-Mazur Theorem which depends on spectral analysis involving Liouville's Theorem and the Hahn-Banach Theorem in the complex setting. We will return to these issues in the more general setting of Banach $F$-algebras.
\label{rem:CVFOS}
\end{remark}
\begin{example}
\label{exa:CVFPN}
Let $p\in\mathbb{N}$ be a prime. Then with reference to Remark \ref{rem:CVFVL}, for $n\in\mathbb{Z}$,
\begin{equation*}
\nu_{p}(n):=\left\{\begin{array}{l@{\quad\mbox{if}\quad}l}\mbox{max}\{i\in\mathbb{N}_{0}:p^{i}|n\} & n\not=0 \\ \infty & n=0 \end{array}\right. , \quad\mathbb{N}_{0}:=\mathbb{N}\cup\{0\},
\end{equation*}
extends uniquely to $\mathbb{Q}$ under the properties of a valuation logarithm. Indeed for $n\in\mathbb{N}$ we have
\begin{equation*}
0=\nu_{p}(1)=\nu_{p}(n/n)=\nu_{p}(n)+\nu_{p}(1/n)
\end{equation*}
giving $\nu_{p}(1/n)=-\nu_{p}(n)$ etc. The standard $p$-adic valuation of $a\in\mathbb{Q}$ is then given by $|a|_{p}:=p^{-\nu_{p}(a)}$. This is a discrete valuation on $\mathbb{Q}$ with respect to which $p$ is a prime element in the sense of Definition \ref{def:CVFPE}. Moreover $\mathcal{R}_{p}:=\{0,1,\cdots,p-1\}$ is one choice of a set of residue class representatives for $\mathbb{Q}$. This is because for $m,n\in\mathbb{N}$ with $p\nmid m$ and $p\nmid n$ we have that $m$, using the Division algorithm, can be expressed as $m=a_{1}+pb_{1}$ and $1/n$, using the extended Euclidean algorithm, can be expressed as $1/n=a_{2}+pb_{2}/n$ with $a_{1},a_{2}\in\{1,\cdots,p-1\}$ and $b_{1},b_{2}\in\mathbb{Z}$. Hence, with reference to Definition \ref{def:CVFRF}, $m/n$ can be expressed as $m/n=a_{3}+pb_{3}/n$ with $a_{3}\in\{1,\cdots,p-1\}$ and $pb_{3}/n\in\mathcal{M}_{p}$ as required. With these details in place we can apply Theorem \ref{thr:CVFSE} so that every element $a\in\mathbb{Q}^{\times}$ has a unique series expansion over $\mathcal{R}_{p}$ of the form
\begin{equation*}
a=\sum_{i=n}^{\infty}a_{i}p^{i}\quad\mbox{for some }n\in\mathbb{Z}\mbox{ with }a_{n}\not=0.
\end{equation*}
The completion of $\mathbb{Q}$ with respect to $|\cdot|_{p}$ is the field of $p$-adic numbers denoted $\mathbb{Q}_{p}$. The elements of $\mathbb{Q}_{p}^{\times}$ are all of the series of the above form when using the expansion over $\mathcal{R}_{p}$. Further, with reference to Remark \ref{rem:CVFSV}, for such an element $a=\sum_{i=n}^{\infty}a_{i}p^{i}$ with $a_{n}\not=0$ we have $\nu_{p}(a)=n$. As an example of such expansions for $p=5$ we have,
\begin{equation*}
\frac{1}{2}=3\cdot5^{0}+2\cdot5+2\cdot5^{2}+2\cdot5^{3}+2\cdot5^{4}+\cdots.
\end{equation*}
More generally the residue field of $\mathbb{Q}_{p}$ is the finite field $\mathbb{F}_{p}$ of $p$ elements. Each non-zero element of $\mathbb{F}_{p}$ has a lift to a $(p-1)$th root of unity in $\mathbb{Q}_{p}$, see \cite[p37]{Fesenko}. These roots of unity together with $0$ also constitute a set of residue class representatives for $\mathbb{Q}_{p}$. The ring that they generate embeds as a ring into the complex numbers, e.g. see Figure \ref{fig:CVFRL}.
\begin{figure}[h]
\begin{center}
\includegraphics{Thesisfig1}
\end{center}
\caption{Part of a ring in $\mathbb{C}$. The points are labeled with the first coefficient of their corresponding $5$-adic expansion over $\mathcal{R}_{5}$ under a ring isomorphism.}
\label{fig:CVFRL}
\end{figure}
Moreover as a field, rather than as a valued field, $\mathbb{Q}_{p}$ has an embedding into $\mathbb{C}$. The $p$-adic valuation on $\mathbb{Q}_{p}$ can then be extended to a complete valuation on the complex numbers which in this case as a valued field we denote as $\mathbb{C}_{p}$, see \cite[p46]{Schikhof}\cite{Roquette}.
\end{example}
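The valuation logarithm $\nu_{p}$ and the expansion over $\mathcal{R}_{p}$ are readily computable. The following Python sketch (an illustration only; the function names are ours and rationals are handled exactly via the standard \texttt{fractions} module) computes $\nu_{p}$ on $\mathbb{Q}$ and the first few coefficients of the expansion of an element of non-negative valuation, reproducing the $5$-adic expansion of $\frac{1}{2}$ displayed in the example.

```python
from fractions import Fraction

def nu_p(a, p):
    """p-adic valuation logarithm on Q, with nu_p(0) = infinity."""
    a = Fraction(a)
    if a == 0:
        return float("inf")
    v, num, den = 0, a.numerator, a.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def digits_over_Rp(a, p, k):
    """First k coefficients a_0, a_1, ... of the expansion of a over
    R_p = {0, ..., p-1}, assuming nu_p(a) >= 0."""
    a, out = Fraction(a), []
    for _ in range(k):
        # a_i represents the residue class of a: solve a = d (mod p)
        # using the inverse of the denominator modulo p.
        d = (a.numerator * pow(a.denominator, -1, p)) % p
        out.append(d)
        a = (a - d) / p
    return out
```

For example \texttt{digits\_over\_Rp(Fraction(1, 2), 5, 5)} returns the coefficients $3,2,2,2,2$, matching the displayed expansion of $\frac{1}{2}$ over $\mathcal{R}_{5}$.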
Finally it is interesting to note that the different standard valuations on $\mathbb{Q}$, when restricted to the units $\mathbb{Q}^{\times}$, are related by the equation $|\cdot|_{0}|\cdot|_{2}|\cdot|_{3}|\cdot|_{5}\cdots|\cdot|_{\infty}=1$ where $|\cdot|_{0}$ denotes the trivial valuation and $|\cdot|_{\infty}$ the absolute value function. See \cite[p3]{Fesenko}.
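This relation is easy to check numerically for any given $a\in\mathbb{Q}^{\times}$, since only finitely many of the factors differ from $1$. The following Python sketch (an illustration only; the function names are ours) multiplies $|a|_{\infty}$ by $|a|_{p}$ over the primes dividing the numerator or denominator of $a$; the trivial valuation contributes a factor of $1$ and is omitted.

```python
from fractions import Fraction

def abs_p(a, p):
    """Standard p-adic valuation |a|_p = p**(-nu_p(a)) of a nonzero rational."""
    a, v = Fraction(a), 0
    num, den = a.numerator, a.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def valuation_product(a):
    """|a|_infinity times the product of |a|_p over the finitely many
    primes p with |a|_p != 1; all remaining factors equal 1."""
    a = Fraction(a)
    total = abs(a)                       # the factor |a|_infinity
    n = abs(a.numerator) * a.denominator
    p = 2
    while n > 1:                         # trial division to find the primes
        if n % p == 0:
            total *= abs_p(a, p)
            while n % p == 0:
                n //= p
        p += 1
    return total
```

For $a=-\frac{12}{5}$ the factors are $|a|_{\infty}=\frac{12}{5}$, $|a|_{2}=\frac{1}{4}$, $|a|_{3}=\frac{1}{3}$ and $|a|_{5}=5$, whose product is $1$ as claimed.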
\subsection{Topological properties of complete valued fields}
\label{subsec:CVFTP}
In this subsection we consider the connectedness and local compactness of complete valued fields.
\begin{definition}
Let $X$ be a topological space and $Y\subseteq X$.
\begin{enumerate}
\item[(i)]
If $Y$ cannot be expressed as the disjoint union of two non-empty clopen subsets with respect to the relative topology then $Y$ is said to be a {\em connected subset} of $X$.
\item[(ii)]
If the only non-empty connected subsets of $X$ are singletons then $X$ is said to be {\em totally disconnected}.
\item[(iii)]
If for each pair of points $x,y\in X$ there exists a continuous map $f:I\rightarrow X$, $I:=[0,1]\subseteq\mathbb{R}$, such that $f(0)=x$ and $f(1)=y$ then $X$ is {\em path-connected}.
\item[(iv)]
A {\em neighborhood base} $\mathfrak{B}_{x}$ at a point $x\in X$ is a collection of neighborhoods of $x$ such that for every neighborhood $U$ of $x$ there is $V\in\mathfrak{B}_{x}$ with $V\subseteq U$.
\item[(v)]
We call $X$ {\em locally compact} if and only if each point in $X$ has a neighborhood base consisting of compact sets.
\end{enumerate}
\label{def:CVFCO}
\end{definition}
The following lemma is well known; however, I have provided a proof for the reader's convenience.
\begin{lemma}
Let $F$ be a non-Archimedean field. Then $F$ is totally disconnected.
\label{lem:CVFTD}
\end{lemma}
\begin{proof}
Let $F$ be a non-Archimedean field and $r\in\mathbb{R}$ with $r>0$. For $a,b\in F$, $a\sim b$ if and only if $|a-b|_{F}<r$ defines an equivalence relation on $F$ by the strong triangle inequality noting that for transitivity if $a\sim b$ and $b\sim c$ then
\begin{equation*}
|a-c|_{F}\leq\mbox{max}(|a-b|_{F},|b-c|_{F})<r.
\end{equation*}
Hence for $a\in F$ the $F$-ball $B_{r}(a)$ is an equivalence class and so every element of $B_{r}(a)$ is at its center because every element is an equivalence class representative. In particular if $b\in B_{r}(a)$ then $B_{r}(b)=B_{r}(a)$ but for $b\notin B_{r}(a)$ we have $B_{r}(b)\cap B_{r}(a)=\emptyset$, showing that $B_{r}(a)$ is clopen. Since this holds for every $r>0$, $a$ has a neighborhood base of clopen balls. Hence since $F$ is Hausdorff, $\{a\}$ is the only connected subset of $F$ with $a$ as an element and so $F$ is totally disconnected.
\end{proof}
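The phenomenon in the proof, that every element of a ball is at its center, can also be observed numerically. The following Python sketch (a finite check over a sample of integers with the $5$-adic valuation of Example \ref{exa:CVFPN}, not a proof; the function names are ours) verifies that the open ball $B_{1/5}(0)$ restricted to the sample is unchanged when recentered at any of its elements.

```python
from fractions import Fraction

def abs_p(a, p):
    """|a|_p for an integer a, so nu_p(a) >= 0, with |0|_p = 0."""
    if a == 0:
        return Fraction(0)
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return Fraction(1, p ** v)

def ball(center, r, p, sample):
    """The open ball B_r(center) intersected with a finite sample of integers."""
    return {x for x in sample if abs_p(x - center, p) < r}
```

Over the sample $\{0,\ldots,124\}$, the ball $B_{1/5}(0)$ consists of the multiples of $25$, and recentering at any of these multiples returns the same set, illustrating the equivalence classes in the proof.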
\begin{remark} We make the following observations.
\begin{enumerate}
\item[(i)]
Every complete Archimedean field is path-connected whereas every complete non-Archimedean field is totally disconnected.
\item[(ii)]
In general a valued field being totally disconnected is not the same as it being discrete. For example $\mathbb{Q}$ with the absolute valuation is a totally disconnected Archimedean field but it is obviously neither discrete nor complete. Also it is easy to show that a valued field admits a non-constant path if and only if it is path-connected, see \cite[p197]{Willard} for the standard definitions used here.
\item[(iii)]
With reference to the proof of Lemma \ref{lem:CVFTD}, $a\sim b$ if and only if $|a-b|_{F}\leq r$, noting the change from the strict inequality, is again an equivalence relation on $F$. Hence every ball of positive radius in a non-Archimedean field is clopen although a ball $\bar{B}_{r}(a):=\{b\in F:|b-a|_{F}\leq r\}$ may contain elements in addition to those in $B_{r}(a)$ depending on whether $r\in|F^{\times}|_{F}$. To clarify then, in the non-Archimedean setting $\bar{B}_{r}(a)$ does not denote the closure of $B_{r}(a)$ with respect to the valuation.
\item[(iv)]
In section \ref{subsec:UASCS} concerning complex uniform algebras we will look at Swiss cheese sets. For a non-Archimedean field $F$ if $a,b\in F$ and $r_{1},r_{2}\in\mathbb{R}$ with $r_{1}\geq r_{2}>0$ then either $B_{r_{2}}(b)\subseteq B_{r_{1}}(a)$ or $B_{r_{2}}(b)\cap B_{r_{1}}(a)=\emptyset$ since either $B_{r_{1}}(b)=B_{r_{1}}(a)$ or $B_{r_{1}}(b)\cap B_{r_{1}}(a)=\emptyset$. Further if $S$ is an $F$-ball or the complement of an $F$-ball then the closure of $S$ with respect to $|\cdot|_{F}$ coincides with $S$ since $F$-balls are clopen. Hence a Swiss cheese set $X\subseteq F$ will be classical exactly when there exists a countable or finite collection $\mathcal{D}$ of $F$-balls, with finite radius sum, and an $F$-ball $\Delta$ such that each element of $\mathcal{D}$ is a subset of $\Delta$ and $X=\Delta\backslash\bigcup\mathcal{D}$. It follows that such a set $X$ can be empty in the non-Archimedean setting.
\end{enumerate}
\label{rem:CVFTD}
\end{remark}
\begin{theorem}
Let $X$ be a Hausdorff space. Then $X$ is locally compact if and only if each point in $X$ has a compact neighborhood.
\label{thr:CVFHL}
\end{theorem}
\begin{theorem}
Let $F$ be a complete non-Archimedean field that is not simultaneously infinite and equipped with the trivial valuation. Then the following are equivalent:
\begin{enumerate}
\item[(i)]
$F$ is locally compact;
\item[(ii)]
the residue field $\overline{F}$ is finite and the valuation on $F$ is discrete;
\item[(iii)]
each bounded sequence in $F$ has a convergent subsequence;
\item[(iv)]
each infinite bounded subset of $F$ has an accumulation point in $F$;
\item[(v)]
each closed and bounded subset of $F$ is compact.
\end{enumerate}
\label{thr:CVFHB}
\end{theorem}
Proofs of Theorem \ref{thr:CVFHL} and Theorem \ref{thr:CVFHB} can be found in \cite[p130]{Willard} and \cite[p29,p57]{Schikhof} respectively.
\begin{remark}
Concerning Theorem \ref{thr:CVFHB}.
\begin{enumerate}
\item[(i)]
Let $F$ be an infinite field with the trivial valuation. Then $F$ is locally compact since for $a\in F$ it follows that $\{\{a\}\}$ is a neighborhood base of compact sets for $a$. However $F$ does not have any of the other properties given in Theorem \ref{thr:CVFHB}. For example the residue field $\overline{F}$ is $F$, and so it is not finite; similarly $F$ itself is closed and bounded but not compact.
\item[(ii)]
We will call a complete non-Archimedean field $F$ that satisfies (ii) of Theorem \ref{thr:CVFHB} a {\em local field}. Some authors weaken the condition on the residue field when defining local fields so that the residue field needs only to be of prime characteristic for some prime $p$ and perfect, that is the Frobenius endomorphism on $\overline{F}$, $\bar{a}\to \bar{a}^{p}$, is an automorphism.
\item[(iii)]
Since the only complete Archimedean fields are $\mathbb{R}$ and $\mathbb{C}$ all but property (ii) of Theorem \ref{thr:CVFHB} hold for complete Archimedean fields by the Heine-Borel Theorem and Bolzano-Weierstrass Theorem etc. In fact this provides one way to prove Theorem \ref{thr:CVFHB} since if $F$ is a local field then there is a homeomorphic embedding of $F$ into $\mathbb{C}$ as a closed unbounded subset.
\item[(iv)]
By the details given in Example \ref{exa:CVFPN} it is immediate that for each prime $p$ the field of $p$-adic numbers $\mathbb{Q}_{p}$ is a local field. However $\mathbb{C}_{p}$ is not a local field since its valuation is dense and its residue field is infinite, see \cite[p45]{Schikhof}.
\end{enumerate}
\label{rem:CVFHB}
\end{remark}
\section{Extending complete valued fields}
\label{sec:CVFEGT}
In this section and later chapters we will adopt the following notation.
\notation
If $F$ is a field and $L$ is a field extending $F$ then we will denote the Galois group of $F$-automorphisms on $L$, that is automorphisms on $L$ that fix the elements of $F$, by $\mbox{Gal}(^{L}/_{F})$. Further we will denote fixed fields by:
\begin{enumerate}
\item[(i)]
$L^{g}:=\{x\in L:g(x)=x\}$, for $g\in\mbox{Gal}(^{L}/_{F})$;
\item[(ii)]
$L^{G}:=\bigcap_{g\in G}L^{g}$, for a subgroup $G\leqslant\mbox{Gal}(^{L}/_{F})$.
\end{enumerate}
More generally if $S$ is a set and $G$ is a group of self maps $g:S\rightarrow S$, with group law composition, then we will denote:
\begin{enumerate}
\item[(1)]
$\mbox{ord}(g):=\mbox{min}\{n\in\mathbb{N}:g^{(n)}=\mbox{id}\}$, the order of an element $g\in G$ with finite order;
\item[(2)]
$\mbox{ord}(g,s):=\mbox{min}\{n\in\mathbb{N}:g^{(n)}(s)=s\}$, the order at an element $s\in S$, when finite, of an element $g\in G$;
\item[(3)]
$\mbox{ord}(g,S):=\{\mbox{ord}(g,s):s\in S\}$, the order set of an element $g\in G$ with finite order.
\end{enumerate}
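The notation (1)--(3) can be illustrated with a simple self map on a finite set, noting for instance that complex conjugation, as an element of $\mbox{Gal}(^{\mathbb{C}}/_{\mathbb{R}})$, has order $2$. The following Python sketch (an illustration only; the function names are ours) computes $\mbox{ord}(g,s)$ by iterating $g$, and $\mbox{ord}(g)$ as the least common multiple of the orders at the individual points.

```python
from math import lcm

def ord_at(g, s):
    """ord(g, s): the least n >= 1 with the n-th iterate g^(n)(s) = s."""
    n, t = 1, g(s)
    while t != s:
        t = g(t)
        n += 1
    return n

def ord_map(g, S):
    """ord(g): the least n >= 1 with g^(n) = id on S, equivalently the
    least common multiple of ord(g, s) over s in S."""
    return lcm(*(ord_at(g, s) for s in S))
```

For example the map $s\mapsto 3s \bmod 7$ on $\{0,\ldots,6\}$ fixes $0$ and permutes the remaining elements in a single cycle, so its order set is $\{1,6\}$ and its order is $6$.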
In the rest of this section we will look mainly at extensions of valued fields, including their valuations, as well as some Galois theory used in later chapters.
\subsection{Extensions}
\label{subsec:CVFE}
The first theorem below is rather general in scope.
\begin{theorem}
Let $F$ be a complete non-Archimedean field. All non-Archimedean norms, that is norms that observe the strong triangle inequality, on a finite dimensional $F$-vector space $E$ are equivalent. Further $E$ is a Banach space, i.e. complete normed space, with respect to each norm.
\label{thr:CVFEN}
\end{theorem}
Theorem \ref{thr:CVFEN} also holds for complete Archimedean fields and in the Archimedean setting proofs often make use of the underlying field being locally compact. However the complete non-Archimedean field $F$ in Theorem \ref{thr:CVFEN} is not assumed to be locally compact and so a proof of the theorem from \cite{Schikhof} has been included below for interest.
\begin{proof}[Proof of Theorem \ref{thr:CVFEN}]
We use induction on $n:=\dim E$. The base case $n=1$ is immediate. Suppose Theorem \ref{thr:CVFEN} holds for $(n-1)$-dimensional spaces and let $E$ be such that $\dim E=n$. We choose a base $e_{1},\cdots,e_{n}$ for $E$ and define
\begin{equation*}
\|x\|_{\infty}:=\max_{i}|a_{i}|_{F}\quad\mbox{for }x=\sum_{i=1}^{n}a_{i}e_{i}\in E.
\end{equation*}
Note that $\|\cdot\|_{\infty}$ is a non-Archimedean norm on $E$ by $|\cdot|_{F}$ being a non-Archimedean valuation. Now let $\|\cdot\|$ be any other non-Archimedean norm on $E$. We show that $\|\cdot\|$ is equivalent to $\|\cdot\|_{\infty}$. For $x=\sum_{i=1}^{n}a_{i}e_{i}\in E$ we have
\begin{equation*}
\|x\|\leq\max_{i}|a_{i}|_{F}\|e_{i}\|\leq M\|x\|_{\infty}
\end{equation*}
where $M:=\max_{i}\|e_{i}\|$. Hence it remains to show that there is a positive constant $N$ such that for all $x\in E$ we have $\|x\|\geq N\|x\|_{\infty}$. Let $D$ be the linear subspace generated by $e_{1},\cdots,e_{n-1}$. By the inductive hypothesis there is $c>0$ such that for all $x\in D$ we have $\|x\|\geq c\|x\|_{\infty}$. Further $D$ is complete and hence closed in $E$ with respect to $\|\cdot\|$. Hence for
\begin{equation*}
c':=\|e_{n}\|^{-1}\inf\{\|e_{n}-y\|:y\in D\}
\end{equation*}
we have $0<c'\leq1$. Now set $N:=\min(c'c,c'\|e_{n}\|)$ and let $x\in E$. Then $x=y+a_{n}e_{n}$ for some $y\in D$ and $a_{n}\in F$. If $a_{n}\not=0$ then $\|x\|=|a_{n}|_{F}\|e_{n}+a_{n}^{-1}y\|\geq|a_{n}|_{F}\|e_{n}\|c'=c'\|a_{n}e_{n}\|$. But then we also get
\begin{equation*}
\|x\|\geq\max(c'\|x\|,c'\|a_{n}e_{n}\|)\geq c'\|x-a_{n}e_{n}\|=c'\|y\|
\end{equation*}
and this inequality also holds for $a_{n}=0$ since $0<c'\leq1$. We get
\begin{equation*}
\|x\|\geq c'\max(\|y\|,\|a_{n}e_{n}\|)\geq c'\max(c\|y\|_{\infty},\|e_{n}\| |a_{n}|_{F})\geq N\max(\|y\|_{\infty},|a_{n}|_{F})
\end{equation*}
where $N\max(\|y\|_{\infty},|a_{n}|_{F})=N\|x\|_{\infty}$. Hence $\|\cdot\|$ and $\|\cdot\|_{\infty}$ are equivalent. Finally we note that a sequence in $E$ is a Cauchy sequence with respect to $\|\cdot\|_{\infty}$ if and only if each of its coordinate sequences is a Cauchy sequence with respect to $|\cdot|_{F}$. Hence $E$ is a Banach space with respect to each norm by the equivalence of norms and completeness of $F$.
\end{proof}
\begin{remark}
Let $L$ be a non-Archimedean field and let $F$ be a complete subfield of $L$. If $L$ is a finite extension of $F$ then $L$ is also complete by Theorem \ref{thr:CVFEN}. Now suppose $L$ is such a complete finite extension of $F$. Then, viewing $L$ as a finite dimensional $F$-vector space, we note that convergence in $L$ is coordinate-wise since $|\cdot|_{L}$ is equivalent to $\|\cdot\|_{\infty}$ by Theorem \ref{thr:CVFEN}. Hence each element $g\in\mbox{Gal}(^{L}/_{F})$ is continuous since it is linear over $F$. We will see later in this section that, for such complete finite extensions, each element of $\mbox{Gal}(^{L}/_{F})$ is in fact an isometry. Finally, in all cases, if $L$ is complete and $g$ is continuous then the fixed field $L^{g}$ is also complete. To see this let $(a_{n})$ be a Cauchy sequence in $L^{g}$ and let $a$ be its limit in $L$. For $\mathrm{id}$ the identity map on $L$, note that $g-\mathrm{id}$ is also continuous on $L$ and so $L^{g}=(g-\mathrm{id})^{-1}(0)$ is a closed subset of $L$. In particular we have $a\in L^{g}$ as required.
\label{rem:CVFEN}
\end{remark}
The following is Krull's extension theorem, a proof can be found in \cite[p34]{Schikhof}.
\begin{theorem}
Let $F$ be a subfield of a field $L$ and let $|\cdot|_{F}$ be a non-Archimedean valuation on $F$. Then there exists a non-Archimedean valuation on $L$ that extends $|\cdot|_{F}$.
\label{thr:CVFKET}
\end{theorem}
The following corollary to Theorem \ref{thr:CVFKET}, which also uses Theorem \ref{thr:CVFCOM}, contrasts with the Archimedean setting.
\begin{corollary}
For every complete non-Archimedean field $F$ there exists a proper extension $L$ of $F$ for which the complete valuation on $F$ extends to a complete valuation on $L$.
\label{cor:CVFEE}
\end{corollary}
Moreover an extension of a valuation is often unique.
\begin{theorem}
Let $F$ be a complete non-Archimedean field, let $L$ be an algebraic extension of $F$ and let $a\in L$. Then:
\begin{enumerate}
\item[(i)]
there is a unique valuation $|\cdot|_{L}$ on $L$ that extends the valuation on $F$;
\item[(ii)]
if $\|\cdot\|$ is an arbitrary norm on the $F$-vector space $L$ then $|a|_{L}=\lim_{n\to\infty}\sqrt[n]{\|a^{n}\|}$.
\end{enumerate}
\label{thr:CVFUT}
\end{theorem}
\begin{remark}
We make the following observations.
\begin{enumerate}
\item[(i)]
Part (i) of Theorem \ref{thr:CVFUT} follows easily from Theorem \ref{thr:CVFKET} and Theorem \ref{thr:CVFEN}, applied in turn, noting that if $a\in L$ then $a$ is also an element of a finite extension of $F$. See \cite[p39]{Schikhof} for the rest of the proof.
\item[(ii)]
It is worth emphasizing that Theorem \ref{thr:CVFEN}, Theorem \ref{thr:CVFKET} and Theorem \ref{thr:CVFUT} all hold for the case where the valuation on $F$ is trivial.
\item[(iii)]
Now for $L$ and $F$ conforming to the conditions of Theorem \ref{thr:CVFUT} we have that each $g\in\mbox{Gal}(^{L}/_{F})$ is indeed an isometry on $L$ since $|a|':=|g(a)|_{L}$, for $a\in L$, is a valuation on $L$ giving $|g(a)|_{L}=|a|_{L}$ by uniqueness.
\end{enumerate}
\label{rem:CVFUT}
\end{remark}
The following theory will often allow us to express the extension of a valuation in a particularly useful form. We begin with a standard theorem.
\begin{theorem}
Let $F$ be a field, let $L$ be an algebraic extension of $F$ and let $a\in L$. Then there is a unique monic irreducible polynomial $\mbox{Irr}_{F,a}(x)\in F[x]$ such that $\mbox{Irr}_{F,a}(a)=0$. Moreover, for the simple extension $F(a)$, we have $[F(a):F]=\deg\mbox{Irr}_{F,a}(x)$ where $[F(a):F]$ denotes the dimension of $F(a)$ as an $F$-vector space.
\label{thr:CVFIR}
\end{theorem}
\begin{definition}
Let $F$ be a field and let $L$ be an algebraic extension of $F$.
\begin{enumerate}
\item[(i)]
An element $a\in L$ is said to be {\em separable} over $F$ if $a$ is not a repeated root of its own irreducible polynomial $\mbox{Irr}_{F,a}(x)$.
\item[(ii)]
We call $L_{sc}:=\{a\in L:a\mbox{ is separable over }F\}$ the {\em separable closure} of $F$ in $L$.
\item[(iii)]
The extension $L$ is said to be a {\em separable extension} of $F$ if $L=L_{sc}$.
\item[(iv)]
Let $f(x)\in F[x]$. Then $L$ is called a {\em splitting field} of $f(x)$ over $F$ if $f(x)$ splits completely in $L[x]$ as a product of linear factors but not over any proper subfield of $L$ containing $F$.
\item[(v)]
We will call $L$ a {\em normal extension} of $F$ if $L$ is the splitting field over $F$ of some polynomial in $F[x]$.
\item[(vi)]
The field $L$ is called a {\em Galois extension} of $F$ if $L^{G}=F$ for $G:=\mbox{Gal}(^{L}/_{F})$.
\end{enumerate}
\label{def:CVFSN}
\end{definition}
\begin{remark}
Following Definition \ref{def:CVFSN} we note that the separable closure $L_{sc}$ of $F$ in $L$ is a field with $F\subseteq L_{sc}\subseteq L$. Moreover if $F$ is of characteristic zero then $L$ is a separable extension of $F$.
\label{rem:CVFSN}
\end{remark}
For proofs of the following two theorems and Remark \ref{rem:CVFSN} see \cite[p13-p19,p36]{McCarthy}.
\begin{theorem}
Let $F$ be a field and let $L$ be a finite extension of $F$. Then there is a normal extension $L_{ne}$ of $F$ which contains $L$ and which is the smallest such extension in the sense that if $K$ is a normal extension of $F$ which contains $L$ then there is an $L$-monomorphism of $L_{ne}$ into $K$, i.e. an embedding of $L_{ne}$ into $K$ that fixes $L$.
\label{thr:CVFNE}
\end{theorem}
\begin{theorem}
Let $F$ be a field and let $L$ be a finite extension of $F$. Then, with reference to Theorem \ref{thr:CVFNE} and Definition \ref{def:CVFSN}, there are exactly $[L_{sc}:F]$ distinct $F$-isomorphisms of $L$ onto subfields of $L_{ne}$. Further if $L=L_{ne}$ then $\#\mbox{Gal}(^{L}/_{F})=[L_{sc}:F]$. Moreover $L$ is a Galois extension of $F$ if and only if $L_{sc}=L=L_{ne}$ in which case $\#\mbox{Gal}(^{L}/_{F})=[L:F]$.
\label{thr:CVFGL}
\end{theorem}
\begin{definition}
Let $F$ be a field, let $L$ be a finite extension of $F$ and let $n_{0}:=[L_{sc}:F]$. By Theorem \ref{thr:CVFGL} there are exactly $n_{0}$ distinct $F$-isomorphisms $g_{1},\cdots,g_{n_{0}}$ of $L$ onto subfields of $L_{ne}$. The {\em norm map} $N_{^{L}/_{F}}:L\rightarrow F$ is defined as
\begin{equation*}
N_{^{L}/_{F}}(a):=\left(\prod_{i=1}^{n_{0}}g_{i}(a)\right)^{[L:L_{sc}]}\quad\mbox{for }a\in L.
\end{equation*}
\label{def:CVFNM}
\end{definition}
A proof showing that the norm map only takes values in the ground field can be found in \cite[p23,p24]{McCarthy}. Using the preceding theory we can now state and prove a theorem that will often allow us to express the extension of a valuation in a particularly useful form. The theorem appears in the literature; however, having set out the preceding theory, the proof presented here is more direct than in the sources I have seen.
\begin{theorem}
Let $F$ be a complete non-Archimedean field with valuation $|a|_{F}=r^{-\nu(a)}$, for $a\in F$, where $\nu$ is a valuation logarithm on $F$. Let $L$ be a finite extension of $F$ as a field. Then, with reference to Theorem \ref{thr:CVFUT} and Theorem \ref{thr:CVFEN}, the unique extension of $|\cdot|_{F}$ to a complete valuation $|\cdot|_{L}$ on $L$ is given by
\begin{equation*}
|a|_{L}=\sqrt[n]{|N_{^{L}/_{F}}(a)|_{F}}=r^{-\omega(a)}\quad\mbox{for }a\in L,
\end{equation*}
where $n=[L:F]$ and $\omega:=\frac{1}{n}\nu\circ N_{^{L}/_{F}}$ is the corresponding extension of $\nu$ to $L$. If in addition the valuation $|\cdot|_{F}$ is discrete then $|\cdot|_{L}$ is also discrete. If further $|\cdot|_{F}$ is non-trivial and $\nu$ is the rank 1 valuation logarithm of Remark \ref{rem:CVFVG} then $e\omega(L^{\times})=\mathbb{Z}$ for some $e\in\mathbb{N}$.
\label{thr:CVFEE}
\end{theorem}
\begin{proof}
Let $L_{ne}$ be the normal extension of $F$ containing $L$ of Theorem \ref{thr:CVFNE}. Since $L_{ne}$ is the splitting field of some polynomial in $F[x]$ it is a finite extension of $F$ and so also of $L$. Hence by Theorem \ref{thr:CVFUT} the valuation $|\cdot|_{L}$ extends uniquely to a valuation $|\cdot|_{L_{ne}}$ on $L_{ne}$. Let $n_{0}:=[L_{sc}:F]$ and let $g_{1},\cdots,g_{n_{0}}$ be the $n_{0}$ distinct $F$-isomorphisms of $L$ onto subfields of $L_{ne}$ as given by Theorem \ref{thr:CVFGL}. Then for each $i\in\{1,\cdots,n_{0}\}$ we have that $|a|_{i}:=|g_{i}(a)|_{L_{ne}}$, for $a\in L$, is a valuation on $L$ extending $|\cdot|_{F}$. Hence, by the uniqueness of $|\cdot|_{L}$ as an extension of $|\cdot|_{F}$ to $L$, each of $g_{1},\cdots,g_{n_{0}}$ is an isometry from $L$ onto a subfield of $L_{ne}$ with respect to $|\cdot|_{L_{ne}}$. Hence setting $n:=[L:F]$, $n_{1}:=[L:L_{sc}]$ and noting that the norm map $N_{^{L}/_{F}}$ takes values in $F$, we have for all $a\in L$
\begin{equation*}
|a|_{L}=\sqrt[n]{|a|_{L}^{[L:L_{sc}][L_{sc}:F]}}=\sqrt[n]{\left(\prod_{i=1}^{n_{0}}|g_{i}(a)|_{L_{ne}}\right)^{n_{1}}}=\sqrt[n]{\left|\left(\prod_{i=1}^{n_{0}}g_{i}(a)\right)^{n_{1}}\right|_{L_{ne}}}=\sqrt[n]{|N_{^{L}/_{F}}(a)|_{F}}.
\end{equation*}
Therefore we also have $\omega(L^{\times})=\frac{1}{n}\nu\circ N_{^{L}/_{F}}(L^{\times})\subseteq\frac{1}{n}\nu(F^{\times})$ and so if $|\cdot|_{F}$ is a discrete valuation then so is $|\cdot|_{L}$. Moreover $\omega$ is indeed an extension of $\nu$ since for $a\in F$ we have $\omega(a)=\frac{1}{n}\nu\circ N_{^{L}/_{F}}(a)=\frac{1}{n}\nu(a^{n})=\frac{1}{n}n\nu(a)=\nu(a)$. Now suppose that $\nu$ is a rank 1 valuation logarithm so that $\nu(F^{\times})=\mathbb{Z}$ and $\omega(L^{\times})\subseteq\frac{1}{n}\mathbb{Z}$. Then there are at most $n$ elements in $\omega(L^{\times})\cap(0,1]$ but also at least $1$ element since there is $\pi\in F^{\times}$ that is prime with respect to $\nu$ giving $\omega(\pi)=\nu(\pi)=1$. Hence let $e':=\min(\omega(L^{\times})\cap(0,1])$ and let $a\in L^{\times}$ be such that $\omega(a)=e'$. We show that $\omega(L^{\times})=e'\mathbb{Z}$. Let $b\in L^{\times}$, giving $\omega(b)=ke'+\varepsilon$ for some $0\leq\varepsilon<e'$ and $k\in\mathbb{Z}$. Then since $a^{k},a^{-k}\in L^{\times}$ we have $\omega(ba^{-k})=\omega(b)-k\omega(a)=ke'+\varepsilon-ke'=\varepsilon$ giving $\varepsilon=0$ by the definition of $e'$. Hence $\omega(L^{\times})\subseteq e'\mathbb{Z}$. On the other hand for $k\in\mathbb{Z}$ we have $\omega(a^{k})=k\omega(a)=ke'$ so $e'\mathbb{Z}\subseteq\omega(L^{\times})$ giving $\omega(L^{\times})=e'\mathbb{Z}$. Finally since $\omega(\pi)=1$ we have $1\in e'\mathbb{Z}$ and so there is $e\in\mathbb{N}$ such that $e'e=1$ giving $e\omega(L^{\times})=\mathbb{Z}$ which completes the proof.
\end{proof}
\begin{remark}
Let $F$ and $L$ be as in Theorem \ref{thr:CVFEE} with non-trivial discrete valuations. Let $\nu$ be the rank 1 valuation logarithm on $F$ and let $\omega$ be the extension of $\nu$ to $L$.
\begin{enumerate}
\item[(i)]
Under addition, $\omega(F^{\times})$ and $\omega(L^{\times})$ are groups. It is immediate from Theorem \ref{thr:CVFEE} that $e=[\omega(L^{\times}):\omega(F^{\times})]$, the index of $\omega(F^{\times})$ in $\omega(L^{\times})$.
\item[(ii)]
If $e=1$ then $L$ is called an {\em unramified} extension of $F$. If $e=[L:F]$ then $L$ is called a {\em totally ramified} extension of $F$. Other classifications are also in use in the literature.
\item[(iii)]
The value of $e$ has implications for the degree of the extension $\overline{L}$ of the residue field $\overline{F}$. For $F$ and $L$ as specified in these remarks we have $[L:F]=e[\overline{L}:\overline{F}]$, see \cite[p107,p108]{McCarthy} for details. Hence in this case, with reference to Theorem \ref{thr:CVFHB}, if $F$ is locally compact then $L$ is locally compact.
\end{enumerate}
\label{rem:CVFEE}
\end{remark}
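By way of illustration, here are the standard quadratic examples over the $p$-adic numbers for Theorem \ref{thr:CVFEE} and Remark \ref{rem:CVFEE}; they are not needed in the sequel.
\begin{example}
Let $p$ be an odd prime and let $F:=\mathbb{Q}_{p}$ with rank 1 valuation logarithm $\nu$.
\begin{enumerate}
\item[(i)]
For $L:=\mathbb{Q}_{p}(\sqrt{p})$ we have $n=[L:F]=2$ and $N_{^{L}/_{F}}(x+y\sqrt{p})=x^{2}-py^{2}$. Hence $\omega(\sqrt{p})=\frac{1}{2}\nu(-p)=\frac{1}{2}$ and so, by Theorem \ref{thr:CVFEE}, $\omega(L^{\times})=\frac{1}{2}\mathbb{Z}$ giving $e=2=[L:F]$, that is $L$ is a totally ramified extension of $F$.
\item[(ii)]
For $u$ a unit of $\mathbb{Q}_{p}$ whose residue is not a square in $\overline{F}$ and $L:=\mathbb{Q}_{p}(\sqrt{u})$ we have $\omega(\sqrt{u})=\frac{1}{2}\nu(-u)=0$ and in fact $\omega(L^{\times})=\mathbb{Z}$, so $e=1$ and $L$ is an unramified extension of $F$ with $[\overline{L}:\overline{F}]=2$, in accordance with $[L:F]=e[\overline{L}:\overline{F}]$.
\end{enumerate}
\end{example}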
\subsection{Galois theory}
\label{subsec:CVFGT}
The following is the fundamental theorem of Galois theory, see \cite[p36]{McCarthy}.
\begin{theorem}
Let $F$ and $E$ be fields such that $E$ is a finite Galois extension of $F$, that is $E^{G}=F$ for $G:=\mbox{Gal}(^{E}/_{F})$. Then we have the following one-one correspondence
\begin{equation*}
\{G':G'\leqslant G\mbox{ is a subgroup}\}\leftrightarrow\{E':E'\mbox{ is a field with }F\subseteq E'\subseteq E\}
\end{equation*}
given by the inverse maps $G'\mapsto E^{G'}$ and $E'\mapsto\mbox{Gal}(^{E}/_{E'})$.
\label{thr:CVFFTG}
\end{theorem}
\begin{corollary}
Let $F$ and $L$ be fields such that $L$ is a finite extension of $F$ and let $G:=\mbox{Gal}(^{L}/_{F})$. Then $L$ is a finite Galois extension of $L^{G}$ and so for $L$ and $L^{G}$ Theorem \ref{thr:CVFFTG} is applicable.
\label{cor:CVFLFTG}
\end{corollary}
\begin{proof}
We show that $L$ is a Galois extension of $L^{G}$. For $g\in\mbox{Gal}(^{L}/_{F})$ we have $g(a)=a$ for all $a\in L^{G}$ and so $g\in\mbox{Gal}(^{L}/_{L^{G}})$. On the other hand for $g\in\mbox{Gal}(^{L}/_{L^{G}})$ we have $g(a)=a$ for all $a\in F$ since $F\subseteq L^{G}$ and so $g\in\mbox{Gal}(^{L}/_{F})$. Therefore $\mbox{Gal}(^{L}/_{F})=\mbox{Gal}(^{L}/_{L^{G}})$ and so setting $G':=\mbox{Gal}(^{L}/_{L^{G}})$ gives $L^{G'}=L^{G}$ as required.
\end{proof}
The following group theory result must be known. However we will provide a proof in lieu of a reference.
\begin{lemma}
Let $(G,+)$ be a group and $g\in\mbox{Aut}(G)$ be a group automorphism on $G$. If $a,b\in G$ are such that $\mbox{gcd}(\mbox{ord}(g,a),\mbox{ord}(g,b))=1$ then $\mbox{ord}(g,a+b)=\mbox{ord}(g,a)\mbox{ord}(g,b)$.
\label{lem:CVFGCD}
\end{lemma}
\begin{proof}
We assume the conditions of Lemma \ref{lem:CVFGCD} and note that the result is immediate if one or more of $\mbox{ord}(g,a)$ and $\mbox{ord}(g,b)$ is equal to 1. So assuming otherwise, let $p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{i}^{k_{i}}$ and $q_{1}^{l_{1}}q_{2}^{l_{2}}\cdots q_{j}^{l_{j}}$ be the prime decompositions of $\mbox{ord}(g,a)$ and $\mbox{ord}(g,b)$ respectively. For $n:=\mbox{ord}(g,a)\mbox{ord}(g,b)$ we have
\begin{equation*}
g^{(n)}(a+b)=g^{(n)}(a)+g^{(n)}(b)=a+b.
\end{equation*}
Therefore $\mbox{ord}(g,a+b)|n$. Suppose towards a contradiction that $\mbox{ord}(g,a+b)<n$. Then $\mbox{ord}(g,a+b)|\frac{n}{r}$ for some $r\in\{p_{1},p_{2},\cdots,p_{i},q_{1},q_{2},\cdots,q_{j}\}$. If $r=p_{m}$ for some $m\in\{1,2,\cdots,i\}$ then $a+b=g^{(\frac{n}{r})}(a+b)=g^{(\frac{n}{r})}(a)+g^{(\frac{n}{r})}(b)=g^{(\frac{n}{r})}(a)+b$ giving, by right cancellation of $b$, $g^{(\frac{n}{r})}(a)=a$. It then follows that
\begin{equation*}
p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{m}^{k_{m}}\cdots p_{i}^{k_{i}}|p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{m}^{k_{m}-1}\cdots p_{i}^{k_{i}}q_{1}^{l_{1}}q_{2}^{l_{2}}\cdots q_{j}^{l_{j}}
\end{equation*}
giving $p_{m}|q_{1}^{l_{1}}q_{2}^{l_{2}}\cdots q_{j}^{l_{j}}$ which is a contradiction since $\mbox{gcd}(\mbox{ord}(g,a),\mbox{ord}(g,b))=1$. A similar contradiction occurs for $r=q_{m}$ with $m\in\{1,2,\cdots,j\}$. Hence $\mbox{ord}(g,a+b)=n$ as required.
\end{proof}
\begin{lemma}
Let $F$ be a field with finite extension $L$ and let $g\in\mbox{Gal}(^{L}/_{F})$. If $n\in\mathbb{N}$ is such that $n|\mbox{ord}(g)$ then $n\in\mbox{ord}(g,L)$.
\label{lem:CVFOSET}
\end{lemma}
\begin{proof}
Suppose towards a contradiction that there is $n\in\mathbb{N}$ such that $n|\mbox{ord}(g)$ but $n\notin\mbox{ord}(g,L)$. We can take $n$ to be the least such element and note that $n\not= 1$ since $1\in\mbox{ord}(g,L)$. Express $n$ as $n=p^{k}r$ where $p$ is a prime, $k,r\in\mathbb{N}$ and $p\nmid r$. We thus have the following two cases.\\
{\bf{Case:}} $r\not= 1$. In this case by the definition of $n$ we have $p^{k},r\in\mbox{ord}(g,L)$ and so there are $a,b\in L$ with $\mbox{ord}(g,a)=p^{k}$, $\mbox{ord}(g,b)=r$ and
\begin{equation*}
\mbox{gcd}(\mbox{ord}(g,a),\mbox{ord}(g,b))=1.
\end{equation*}
Then by Lemma \ref{lem:CVFGCD} we have $\mbox{ord}(g,a+b)=\mbox{ord}(g,a)\mbox{ord}(g,b)$ which contradicts our assumption that $n\notin\mbox{ord}(g,L)$.\\
{\bf{Case:}} $r=1$. In this case $n=p^{k}$ and note that $\mbox{ord}(g)=nm$ for some $m\in\mathbb{N}$. Hence we have the following subgroups of $G:=\mbox{Gal}(^{L}/_{F})$:
\begin{enumerate}
\item[(i)]
$\langle g^{(n)}\rangle:=(\{\mbox{id},g^{(n)},g^{(2n)},\cdots,g^{((m-1)n)}\},\circ)<G$;
\item[(ii)]
$\langle g^{(\frac{n}{p})}\rangle:=(\{\mbox{id},g^{(\frac{n}{p})},g^{(2\frac{n}{p})},\cdots,g^{((mp-1)\frac{n}{p})}\},\circ)\leqslant G$.
\end{enumerate}
Therefore $\#\langle g^{(n)}\rangle=m$, $\#\langle g^{(\frac{n}{p})}\rangle=mp$ and $\langle g^{(n)}\rangle$ is a proper normal subgroup of $\langle g^{(\frac{n}{p})}\rangle$. Hence by Corollary \ref{cor:CVFLFTG} we have the following tower of fields
\begin{equation*}
L^{G}\subseteq L^{\langle g^{(n/p)}\rangle}\subsetneqq L^{\langle g^{(n)}\rangle}\subseteq L.
\end{equation*}
Now it is immediate that $L^{g^{(n)}}=L^{\langle g^{(n)}\rangle}$ and $L^{g^{(n/p)}}=L^{\langle g^{(n/p)}\rangle}$ and so there is some $a\in L^{g^{(n)}}\backslash L^{g^{(n/p)}}$ with $\mbox{ord}(g,a)|n$ but $\mbox{ord}(g,a)\nmid\frac{n}{p}$. Therefore $\mbox{ord}(g,a)=p^{k}=n$ which again contradicts our assumption that $n\notin\mbox{ord}(g,L)$. In particular the lemma holds.
\end{proof}
The following lemma is well known but we will provide a proof in lieu of a reference.
\begin{lemma}
\label{lem:CVFPOL}
Let $F$ be a field, let $L$ be an algebraic extension of $F$ and let $a\in L$. For the simple extension $F(a)$ of $F$ and $F[X]$ the ring of polynomials over $F$ we have $F(a)=F[a]$.
\end{lemma}
\begin{proof}
It is immediate that $F[a]\subseteq F(a)$. Now by Theorem \ref{thr:CVFIR} there is a unique monic irreducible polynomial $\mbox{Irr}_{F,a}(x)\in F[X]$ such that $\mbox{Irr}_{F,a}(a)=0$. Further for any element $\frac{p(a)}{q(a)}\in F(a)$, given by $p(x),q(x)\in F[X]$, we have $q(a)\not=0$. Hence the irreducible polynomial $\mbox{Irr}_{F,a}(x)$ does not divide $q(x)$ and so the two are relatively prime, that is we have $\mbox{gcd}(\mbox{Irr}_{F,a},q)=1$. Therefore by Bezout's identity there are $s(x),t(x)\in F[X]$ such that $s(x)q(x)+t(x)\mbox{Irr}_{F,a}(x)=1$. Evaluating at $a$ gives $s(a)q(a)=1$, so that $q(a)^{-1}=s(a)$ and $\frac{p(a)}{q(a)}=p(a)s(a)$, which is an element of $F[a]$ as required.
\end{proof}
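The following elementary example illustrates the Bezout argument in the proof of Lemma \ref{lem:CVFPOL}.
\begin{example}
Let $F:=\mathbb{Q}$, $L:=\mathbb{Q}(\sqrt{2})$, $a:=\sqrt{2}$ and consider $\frac{1}{1+\sqrt{2}}\in F(a)$, so that $p(x)=1$ and $q(x)=x+1$. Here $\mbox{Irr}_{F,a}(x)=x^{2}-2$ and for $s(x):=x-1$ and $t(x):=-1$ we have
\begin{equation*}
s(x)q(x)+t(x)\mbox{Irr}_{F,a}(x)=(x-1)(x+1)-(x^{2}-2)=1.
\end{equation*}
Evaluating at $a$ gives $q(a)^{-1}=s(a)=\sqrt{2}-1$ and so $\frac{1}{1+\sqrt{2}}=\sqrt{2}-1$, which is indeed an element of $F[a]$.
\end{example}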
\chapter[Functions and algebras]{Functions and algebras}
\label{cha:FAA}
In this chapter we build upon some of the basic facts and analysis of complete valued fields surveyed in Chapter \ref{cha:CVF}. The first section establishes particular facts in functional analysis over complete valued fields that will be used in later chapters. However it is not the purpose of the first section to provide an extensive introduction to the subject. The second section provides background on Banach $F$-algebras, that is Banach algebras over a complete valued field $F$. Whilst some of the details are included purely as background, others also support the discussion from Remark \ref{rem:CVFOS} of Chapter \ref{cha:CVF}.
\section{Functional analysis over complete valued fields}
\label{sec:FAAFA}
We begin with the following lemma.
\begin{lemma}
Let $F$ be a non-Archimedean field and let $(a_{n})$ be a sequence of elements of $F$.
\begin{enumerate}
\item[(i)]
If $\lim_{n\to\infty}a_{n}=a$ for some $a\in F^{\times}$ then there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $|a_{n}|_{F}=|a|_{F}$. We will call this {\em convergence from the side}, as opposed to convergence from above or below.
\item[(ii)]
If $F$ is also complete then $\sum{a_{n}}$ converges if and only if $\lim_{n\to\infty}a_{n}=0$, in sharp contrast to the Archimedean case. Further if $\sum{a_{n}}$ does converge then
\begin{equation*}
\left|\sum{a_{n}}\right|_{F}\leq\max\{|a_{n}|_{F}:n\in\mathbb{N}\}.
\end{equation*}
\end{enumerate}
\label{lem:FAACS}
\end{lemma}
\begin{proof}
For (i), since $a\not=0$ we have $|a|_{F}>0$ and so there is some $N\in\mathbb{N}$ such that, for all $n\geq N$, $|a_{n}-a|_{F}<|a|_{F}$. Hence for all $n\geq N$ we have by Lemma \ref{lem:CVFEQ} that $|a_{n}|_{F}=|(a_{n}-a)+a|_{F}=|a|_{F}$.\\
For (ii), suppose $\lim_{n\to\infty}a_{n}=0$ and let $\varepsilon>0$. Then there is $N\in\mathbb{N}$ such that for all $n\geq N$ we have $|a_{n}|_{F}<\varepsilon$. Hence for $n_{1},n_{2}\in\mathbb{N}$ with $N<n_{1}<n_{2}$ we have
\begin{equation*}
\left|\sum_{i=1}^{n_{2}}a_{i}-\sum_{i=1}^{n_{1}}a_{i}\right|_{F}=\left|\sum_{i=n_{1}+1}^{n_{2}}a_{i}\right|_{F}\leq\max\{|a_{n_{1}+1}|_{F},\cdots,|a_{n_{2}}|_{F}\}<\varepsilon.
\end{equation*}
Hence the sequence of partial sums is a Cauchy sequence in $F$ and so converges. The converse is immediate. Further suppose $\sum{a_{n}}$ does converge. For $\sum{a_{n}}\not=0$ we have by (i) that there is $N\in\mathbb{N}$ such that for all $n\geq N$
\begin{equation*}
\left|\sum_{i=1}^{\infty}a_{i}\right|_{F}=\left|\sum_{i=1}^{n}a_{i}\right|_{F}\leq\max\{|a_{1}|_{F},\cdots,|a_{n}|_{F}\}\leq\max\{|a_{i}|_{F}:i\in\mathbb{N}\}.
\end{equation*}
On the other hand for $\sum{a_{n}}=0$ the result is immediate.
\end{proof}
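The following standard example in the $p$-adic numbers illustrates both parts of Lemma \ref{lem:FAACS}.
\begin{example}
In $\mathbb{Q}_{p}$ we have $|p^{n}|_{p}=p^{-n}\to0$ and so by part (ii) of Lemma \ref{lem:FAACS} the geometric series $\sum_{n=0}^{\infty}p^{n}$ converges. Indeed the partial sums satisfy $\sum_{n=0}^{N}p^{n}=\frac{1-p^{N+1}}{1-p}\to\frac{1}{1-p}$ and $\left|\frac{1}{1-p}\right|_{p}\leq\max\{|p^{n}|_{p}:n\geq0\}=1$ in accordance with the stated bound. On the other hand the harmonic series $\sum\frac{1}{n}$ diverges in $\mathbb{Q}_{p}$ since $|\frac{1}{n}|_{p}=1$ whenever $p\nmid n$, so that the terms do not tend to $0$, whilst in the Archimedean case the terms of the harmonic series tend to $0$ and yet the series still diverges.
\end{example}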
The following theorem appears in \cite[p59]{Schikhof} without proof.
\begin{theorem}
Let $F$ be a complete non-Archimedean field and let $a_{0},a_{1},a_{2},\cdots$ be a sequence of elements of $F$. Define the radius of convergence by
\begin{equation*}
\rho:=\frac{1}{\limsup_{n\to\infty}\sqrt[n]{|a_{n}|_{F}}}\quad\mbox{where by convention }0^{-1}=\infty\mbox{ and }\infty^{-1}=0.
\end{equation*}
Then the power series $\sum{a_{n}x^{n}}$, $x\in F$, converges if $|x|_{F}<\rho$ and diverges if $|x|_{F}>\rho$. Furthermore for each $t\in(0,\infty)$ with $t<\rho$ the convergence is uniform on $\bar{B}_{t}(0):=\{a\in F:|a|_{F}\leq t\}$.
\label{thr:FAACP}
\end{theorem}
\begin{proof}
Note that the following equalities hold, except for when $|x|_{F}=\rho=0$,
\begin{equation}
\label{equ:FAACP}
\limsup_{n\to\infty}\sqrt[n]{|a_{n}x^{n}|_{F}}=\limsup_{n\to\infty}\sqrt[n]{|a_{n}|_{F}|x|_{F}^{n}}=\frac{|x|_{F}}{\rho}.
\end{equation}
Suppose $\sum{a_{n}x^{n}}$ is divergent. Then by part (ii) of Lemma \ref{lem:FAACS}, $\lim_{n\to\infty}a_{n}x^{n}$ is not $0$. Therefore there is some $\varepsilon\in(0,1]$ such that for each $m\in\mathbb{N}$ there is $n>m$ with $|a_{n}x^{n}|_{F}\geq\varepsilon$, in particular $\sqrt[n]{|a_{n}x^{n}|_{F}}\geq\sqrt[n]{\varepsilon}\geq\sqrt[m]{\varepsilon}$. Hence since $\lim_{m\to\infty}\sqrt[m]{\varepsilon}=1$ we have $\limsup_{n\to\infty}\sqrt[n]{|a_{n}x^{n}|_{F}}\geq1$. Therefore in this case $|x|_{F}\geq\rho$ by (\ref{equ:FAACP}). In particular for cases where $|x|_{F}<\rho$ the series $\sum{a_{n}x^{n}}$ converges.\\
On the other hand suppose $\sum{a_{n}x^{n}}$ converges. Then since $\lim_{n\to\infty}|a_{n}x^{n}|_{F}=0$ we have $\limsup_{n\to\infty}\sqrt[n]{|a_{n}x^{n}|_{F}}\leq\limsup_{n\to\infty}\sqrt[n]{\frac{1}{2}}=1$. Therefore in this case $|x|_{F}\leq\rho$ by (\ref{equ:FAACP}). In particular for cases where $|x|_{F}>\rho$ the series $\sum{a_{n}x^{n}}$ diverges.\\
Now suppose there is $t\in(0,\infty)$ with $t<\rho$ and let $\varepsilon>0$. If the valuation on $F$ is dense then $|F^{\times}|_{F}$ is dense in the positive reals and so there is some $x_{0}\in F^{\times}$ with $t<|x_{0}|_{F}<\rho$. Alternatively, if the valuation on $F$ is discrete, there is $x_{0}\in\bar{B}_{t}(0)$ with $|x_{0}|_{F}=\max\{|a|_{F}:a\in\bar{B}_{t}(0)\}$. In either case, since $|x_{0}|_{F}<\rho$, $\sum{a_{n}x_{0}^{n}}$ converges and so $\lim_{n\to\infty}a_{n}x_{0}^{n}=0$. Hence there is some $N\in\mathbb{N}$ such that for all $n>N$ we have $|a_{n}x_{0}^{n}|_{F}<\varepsilon$. Now by the last part of Lemma \ref{lem:FAACS} we have for all $m>N$ and $x\in \bar{B}_{t}(0)$ that
\begin{equation*}
\left|\sum_{n=1}^{\infty}a_{n}x^{n}-\sum_{n=1}^{m}a_{n}x^{n}\right|_{F}=\left|\sum_{n=m+1}^{\infty}a_{n}x^{n}\right|_{F}\leq\max\{|a_{n}x^{n}|_{F}:n\geq m+1\}.
\end{equation*}
But $|a_{n}x^{n}|_{F}=|a_{n}|_{F}|x|_{F}^{n}\leq |a_{n}|_{F}|x_{0}|_{F}^{n}=|a_{n}x_{0}^{n}|_{F}$ and so $\max\{|a_{n}x^{n}|_{F}:n\geq m+1\}<\varepsilon$ and the convergence is uniform on $\bar{B}_{t}(0)$.
\end{proof}
\begin{remark}
With reference to Theorem \ref{thr:FAACP}.
\begin{enumerate}
\item[(i)]
We note that the radius of convergence as defined in Theorem \ref{thr:FAACP} is the same as that used in the Archimedean setting when replacing $F$ with the complex numbers. However, unlike in the complex setting, if the valuation on $F$ is discrete then a power series $\sum{a_{n}x^{n}}$ may not have a unique choice for the definition of its radius of convergence since $|F^{\times}|_{F}$ is discrete in this case.
\item[(ii)]
We need to be careful when considering convergence of power series. Let $|\cdot|_{\infty}$ denote the absolute valuation on $\mathbb{R}$ and let $|\cdot|_{0}$ denote the trivial valuation on $\mathbb{R}$. All power series are convergent on $B_{1}(0):=\{a\in\mathbb{R}:|a|_{0}<1\}=\{0\}$ with respect to $|\cdot|_{0}$, whereas the only power series that are convergent at a point $a\in\mathbb{R}^{\times}$ with respect to $|\cdot|_{0}$ are polynomials. On the other hand $\exp(x):=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$ converges everywhere on $\mathbb{R}$ with respect to $|\cdot|_{\infty}$. The function $\exp(x)$ defined with respect to $|\cdot|_{\infty}$ is a continuous function on all of $\mathbb{R}$ with respect to $|\cdot|_{0}$ but does not have a power series representation on $\mathbb{R}$ with respect to $|\cdot|_{0}$. Similarly $\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$ does not converge everywhere on the $p$-adic numbers $\mathbb{Q}_{p}$ with respect to $|\cdot|_{p}$, see \cite[p70]{Schikhof} for details in this case.
\item[(iii)]
Under the conditions of Theorem \ref{thr:FAACP}, suppose that the ball $B_{\rho}(0)$ is without isolated points where $\rho$ is the radius of convergence of $f(x):=\sum{a_{n}x^{n}}$. Then, with differentiation defined as in the Archimedean setting, the derivative of $f$ exists on $B_{\rho}(0)$ and it is $f'(x)=\sum{na_{n}x^{n-1}}$. We will not consider this in depth but note, for $x\in B_{\rho}(0)$, the series $\sum{a_{n}x^{n}}$ converges giving $\lim_{n\to\infty}c_{n}=0$ for $c_{n}:=a_{n}x^{n}$ by Lemma \ref{lem:FAACS}. Hence for all $n\in\mathbb{N}$, since $|n|_{F}=|1_{1}+\cdots+1_{n}|_{F}\leq\max\{|1_{1}|_{F},\cdots,|1_{n}|_{F}\}=1$, we have for $x\not=0$ that
\begin{equation*}
|na_{n}x^{n-1}|_{F}=|n|_{F}|x^{-1}|_{F}|a_{n}x^{n}|_{F}\leq|x^{-1}|_{F}|c_{n}|_{F}.
\end{equation*}
Therefore the series $\sum{na_{n}x^{n-1}}$ also converges on $B_{\rho}(0)$ by Lemma \ref{lem:FAACS}.
\end{enumerate}
\label{rem:FAACP}
\end{remark}
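As an illustration of Theorem \ref{thr:FAACP}, the following standard computation, cf. \cite[p70]{Schikhof}, gives the radius of convergence of the exponential series over the $p$-adic numbers.
\begin{example}
Consider $\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$ over $\mathbb{Q}_{p}$. By Legendre's formula the number of factors of $p$ in $n!$ is $\frac{n-s_{p}(n)}{p-1}$ where $s_{p}(n)$ denotes the sum of the digits of $n$ in base $p$. Hence $\left|\frac{1}{n!}\right|_{p}=p^{\frac{n-s_{p}(n)}{p-1}}$ and, since $\frac{s_{p}(n)}{n}\to0$,
\begin{equation*}
\limsup_{n\to\infty}\sqrt[n]{\left|\frac{1}{n!}\right|_{p}}=\lim_{n\to\infty}p^{\frac{n-s_{p}(n)}{n(p-1)}}=p^{\frac{1}{p-1}}\quad\mbox{giving}\quad\rho=p^{-\frac{1}{p-1}}<1.
\end{equation*}
In particular the series converges for $|x|_{p}<p^{-\frac{1}{p-1}}$ and diverges for $|x|_{p}>p^{-\frac{1}{p-1}}$, so it certainly does not converge everywhere on $\mathbb{Q}_{p}$.
\end{example}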
\subsection{Analytic functions}
\label{subsec:FAAAF}
Let $F$ be a complete valued field. In this subsection we consider $F$-valued functions that are analytic on the interior of some subset of $F$ that is without isolated points. In particular the situation concerning such analytic functions is somewhat different in the non-Archimedean setting to that in the Archimedean one, even though the standard results of differentiation such as the chain rule and Leibniz rule are the same, see \cite[p59]{Schikhof}. Recall that if a complex valued function $f$ is analytic on an open disc $D_{r}(a)\subseteq\mathbb{C}$ then $f$ can be represented by the convergent power series
\begin{equation*}
f(z)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(z-a)^{n}\quad\mbox{for }z\in D_{r}(a),
\end{equation*}
known as the {\em Taylor expansion} of $f$ about $a$, where $f^{(n)}(a)$ is the $n$th derivative of $f$ at $a$. Moreover if $b\in D_{r}(a)$ then $f$ can also be expanded about $b$. However this expansion need not be convergent on all of $D_{r}(a)$ merely on the largest open disc centered at $b$ contained in $D_{r}(a)$ since a lack of differentiability of $f$ at points on, or outside, the boundary of $D_{r}(a)$ will restrict the radius of convergence of such an expansion, see \cite[p449,p450]{Apostol}.\\
Now for $F$ a complete non-Archimedean field the same scenario in this case is such that if $f$ is analytic on a ball $B_{r}(a)\subseteq F$ and can be represented by a Taylor expansion about $a$ on all of $B_{r}(a)$ then $f$ can be represented by a Taylor expansion about any other point $b\in B_{r}(a)$ and this expansion will also be valid on all of $B_{r}(a)$, see \cite[p68]{Schikhof}. This is closely related to the fact that every point of $B_{r}(a)$ is at its center, see the proof of Lemma \ref{lem:CVFTD}. However in general a function $f$ analytic on $B_{r}(a)\subseteq F$ need not have a Taylor expansion about $a$ that is valid on all of $B_{r}(a)$. This is because $B_{r}(a)$ can be decomposed as a disjoint union of clopen balls, see Remark \ref{rem:CVFTD}, upon each of which $f$ can independently be defined. This leads to the following definitions.
\begin{definition}
Let $F$ be a complete valued field with non-trivial valuation.
\begin{enumerate}
\item[(i)]
We will call a subset $X\subseteq F$ {\em strongly convex} if $X$ is either $F$, the empty set $\emptyset$, a ball or a singleton set.
\item[(ii)]
Let $X$ be an open strongly convex subset of $F$ and let $f:X\rightarrow F$ be a continuous $F$-valued function on $X$. If $f$ can be represented by a single Taylor expansion that is valid on all of $X$ then we say that $f$ is {\em globally analytic} on $X$.
\item[(iii)]
Let $X$ be an open subset of $F$ and let $f:X\rightarrow F$ be a continuous $F$-valued function on $X$. If for each $a\in X$ there is an open strongly convex neighborhood $V\subseteq X$ of $a$ such that $f|_{V}$ is globally analytic on $V$ then we say that $f$ is {\em locally analytic} on $X$.
\item[(iv)]
Let $X$ and $f$ be as in (iii). As usual, if the derivative
\begin{equation*}
f'(a):=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}
\end{equation*}
exists at every $a\in X$ then we say that $f$ is {\em analytic} on $X$.
\item[(v)]
In the case where $X=F$ we similarly define {\em globally entire}, {\em locally entire} and {\em entire} functions on $X$.
\end{enumerate}
\label{def:FAAGL}
\end{definition}
\begin{remark}
Note that the condition in Definition \ref{def:FAAGL} that $F$ has a non-trivial valuation is there because it does not make sense to talk about analytic functions defined on a space without accumulation points. We also note that (ii), (iii) and (iv) of Definition \ref{def:FAAGL} are equivalent in the complex setting for $X$ an open strongly convex subset of $\mathbb{C}$, see \cite[p450]{Apostol}.
\label{rem:FAAGL}
\end{remark}
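Following Remark \ref{rem:FAAGL}, the following standard example shows that in the non-Archimedean setting a function can be locally analytic, and analytic, without being globally analytic.
\begin{example}
Let $f:\bar{B}_{1}(0)\rightarrow\mathbb{Q}_{p}$ be given by $f(x):=0$ for $|x|_{p}<1$ and $f(x):=1$ for $|x|_{p}=1$. Both $\{x:|x|_{p}<1\}$ and $\{x:|x|_{p}=1\}$ are clopen and $f$ is constant on a ball around each point of $\bar{B}_{1}(0)$, so $f$ is locally analytic and also analytic with $f'=0$. However $f$ is not globally analytic on $\bar{B}_{1}(0)$: if $f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$ were valid on all of $\bar{B}_{1}(0)$ then, since by Strassmann's theorem a non-zero power series converging on $\bar{B}_{1}(0)\subseteq\mathbb{Q}_{p}$ has only finitely many zeros there, the vanishing of $f$ on the infinite set $\{x:|x|_{p}<1\}$ would force $a_{n}=0$ for all $n$, contradicting $f(1)=1$.
\end{example}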
Now let $F$ be a complete non-Archimedean field and let $(f_{n})$ be a sequence of $F$-valued functions analytic on $\bar{B}_{1}(0)$ and converging uniformly on $\bar{B}_{1}(0)$ to a function $f$. We ask whether $f$ will also be analytic on $\bar{B}_{1}(0)$ in this case. It is very well known that the answer to the analogous question involving the complex numbers is yes, although in this case the functions are required to be continuous on $\bar{B}_{1}(0)$ and analytic only on the interior of $\bar{B}_{1}(0)$ since $\bar{B}_{1}(0)$ will not be clopen. In the case involving the real numbers the answer to the question is of course no since, for example, a function with a chevron-shaped graph in $\mathbb{R}^{2}$ can be uniformly approximated by differentiable functions. In the non-Archimedean setting the following theorem provides insight for when $F$ is not locally compact and also gives a maximum principle result, see \cite[p122]{Schikhof} for proof.
\begin{theorem}
Let $F$ be a complete non-Archimedean field that is not locally compact and let $r\in|F^{\times}|_{F}$.
\begin{enumerate}
\item[(i)]
If $f_{1},f_{2},\cdots$ are globally analytic functions on $\bar{B}_{r}(0)$ and if $f:=\lim_{n\to\infty}f_{n}$ uniformly on $\bar{B}_{r}(0)$ then $f$ is also globally analytic on $\bar{B}_{r}(0)$.
\item[(ii)]
Let $f$ be a globally analytic function on $\bar{B}_{r}(0)$ with power series $f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$.\\
If the valuation $|\cdot|_{F}$ is dense then
\begin{equation*}
\sup\{|f(x)|_{F}:|x|_{F}\leq r\}=\sup\{|f(x)|_{F}:|x|_{F}<r\}=\max\{|a_{n}|_{F}r^{n}:n\geq0\}<\infty.
\end{equation*}
If the residue field $\overline{F}$ is infinite then
\begin{equation*}
\max\{|f(x)|_{F}:|x|_{F}\leq r\}=\max\{|f(x)|_{F}:|x|_{F}=r\}=\max\{|a_{n}|_{F}r^{n}:n\geq0\}<\infty.
\end{equation*}
\end{enumerate}
\label{thr:FAAMP}
\end{theorem}
\begin{remark}
In Theorem \ref{thr:FAAMP} $\bar{B}_{r}(0)$ is not compact since $F$ is not locally compact. In fact every ball of positive radius is not compact in this case and this follows from Theorem \ref{thr:CVFHB} noting that translations and non-zero scalings in $F$ are homeomorphisms on $F$. Now since we are progressing towards a study of uniform algebras and their generalisation over complete valued fields we note that in order to use the uniform norm, see Remark \ref{rem:UASA}, on such algebras of continuous functions we need the functions to be bounded. Hence to avoid imposing boundedness directly it is convenient to work on compact spaces.
\label{rem:FAAMP}
\end{remark}
For $\bar{B}_{1}(0)$ compact, i.e. in the $F$ locally compact case, we provide the following example to show that in this case the uniform limit of locally analytic, and hence analytic, functions on $\bar{B}_{1}(0)$ need not be analytic.
\begin{example}
Let $F$ be a locally compact, complete non-Archimedean field with non-trivial valuation. Then we have the following sequence of functions on $\bar{B}_{1}(0)\subseteq F$,
\begin{equation*}
f_{n}(x):=\left\{ \begin{array}{l@{\quad\mbox{if}\quad}l}\pi^{\nu(x)} & \nu(x)<n \\ 0 & \nu(x)\geq n \end{array} \right.\quad\mbox{for }x\in\bar{B}_{1}(0)
\end{equation*}
where $\pi$ is a prime element of $F$ and $\nu$ is the rank 1 valuation logarithm. For each $n\in\mathbb{N}$, $f_{n}$ is a locally constant function since convergence in $F$ is from the side, see Lemma \ref{lem:FAACS}, and so $f_{n}$ is locally analytic on $\bar{B}_{1}(0)$. Moreover the sequence $(f_{n})$ converges uniformly on $\bar{B}_{1}(0)$ to the continuous function
\begin{equation*}
f(x):=\left\{ \begin{array}{l@{\quad\mbox{if}\quad}l}\pi^{\nu(x)} & x\not=0 \\ 0 & x=0 \end{array} \right.\quad\mbox{for }x\in\bar{B}_{1}(0)
\end{equation*}
with $\lim_{x\to0}f(x)=0$ since $|f(x)|_{F}=|x|_{F}$ for all $x\in\bar{B}_{1}(0)$. We now show that $f$ is not differentiable at zero. Let $a_{1},a_{2},\cdots$ and $b_{1},b_{2},\cdots$ be sequences in $F$ given by $a_{n}:=\pi^{n}$ and $b_{n}:=-\pi^{n}$. Both of these sequences tend to zero as $n$ tends to $\infty$. But then
\begin{equation*}
\frac{f(a_{n})-f(0)}{a_{n}}=(\pi^{n}-0)\pi^{-n}=1\quad\mbox{and }\frac{f(b_{n})-f(0)}{b_{n}}=(\pi^{n}-0)(-\pi^{-n})=-1
\end{equation*}
for all $n\in\mathbb{N}$ so that the limit $\lim_{x\to0}\frac{f(x)-f(0)}{x}$ does not exist as required. Alternatively we can obtain a similar example by redefining $f$ as
\begin{equation*}
f(x):=\left\{ \begin{array}{l@{\quad\mbox{if}\quad}l}\pi^{\frac{1}{2}\nu(x)} & \nu(x)\mbox{ is even} \\ \pi^{\frac{1}{2}(\nu(x)-1)} & \nu(x)\mbox{ is odd} \\ 0 & x=0 \end{array} \right.\quad\mbox{for }x\in\bar{B}_{1}(0).
\end{equation*}
In this case $\lim_{x\to0}\frac{f(x)-f(0)}{x}$ blows up with respect to $|\cdot|_{F}$ as demonstrated by the sequence $c_{1},c_{2},\cdots$ with $c_{n}:=\pi^{2n}$.
\label{exa:FAANA}
\end{example}
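Both counterexamples in Example \ref{exa:FAANA} can be checked numerically. The sketch below is our own illustration, not part of the thesis: it takes $F=\mathbb{Q}_{p}$ with $p=5$ as a locally compact complete non-Archimedean field, with prime element $\pi=p$ and the $p$-adic valuation as the rank 1 valuation logarithm, and evaluates the relevant difference quotients at rational points.

```python
from fractions import Fraction

# F = Q_5, pi = 5, nu = 5-adic valuation (rank 1 valuation logarithm).
# Exact rational arithmetic suffices for the difference quotients below.
p = 5

def nu(x):
    """p-adic valuation of a non-zero rational x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def f(x):
    """The uniform limit f(x) = pi^{nu(x)} for x != 0 and f(0) = 0."""
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(p) ** nu(x)

def f_alt(x):
    """The redefined f: pi^{nu(x)/2} for nu(x) even, pi^{(nu(x)-1)/2} odd."""
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(p) ** (nu(x) // 2)

# Difference quotients of f at 0 along a_n = pi^n and b_n = -pi^n:
quot_a = [f(Fraction(p) ** n) / Fraction(p) ** n for n in range(1, 6)]
quot_b = [f(-Fraction(p) ** n) / -(Fraction(p) ** n) for n in range(1, 6)]

# For f_alt along c_n = pi^{2n} the quotients blow up in |.|_F = p^{-nu(.)}:
blow = [p ** -nu(f_alt(Fraction(p) ** (2 * n)) / Fraction(p) ** (2 * n))
        for n in range(1, 6)]
```

The quotients along $(a_{n})$ are all $1$ and along $(b_{n})$ all $-1$, so no derivative exists at zero; the values in `blow` are $p, p^{2}, \ldots$, matching the blow-up demonstrated by $(c_{n})$.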
Later when we look at non-complex analogs of uniform algebras we will see, from Kaplansky's non-Archimedean generalisation of the Stone-Weierstrass theorem, that the continuous functions in Example \ref{exa:FAANA} can be uniformly approximated by polynomials on $\bar{B}_{1}(0)$ given that $\bar{B}_{1}(0)$ is compact in this case. Hence Example \ref{exa:FAANA} also shows that, for $F$ locally compact, the uniform limit of globally analytic functions on $\bar{B}_{1}(0)$ need not be analytic in contrast to Theorem \ref{thr:FAAMP}.\\
In anticipation of topics in the next section we now consider Liouville's theorem. It is immediate that the standard Liouville theorem never holds in the non-Archimedean setting since for a complete non-Archimedean field $F$ with non-trivial valuation the indicator function $\chi_{_{B}}$ for $B:=\bar{B}_{1}(0)$ is a non-constant bounded locally analytic function from $F$ to $F$ noting that $\bar{B}_{1}(0)$ is a clopen subset of $F$. However the following is called the ultrametric Liouville theorem.
\begin{theorem}
Let $F$ be a complete non-Archimedean field with non-trivial valuation. Then every bounded globally analytic function from $F$ to $F$ is constant if and only if $F$ is not locally compact.
\label{thr:FAAULT}
\end{theorem}
\begin{proof}
See \cite[p124,p125]{Schikhof} for a full proof of Theorem \ref{thr:FAAULT}. However, the proof in the ``if'' direction is as follows. Let $f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$, for $x\in F$, be as in Theorem \ref{thr:FAAULT}. Since $f$ is bounded there is $M<\infty$ such that $|f(x)|_{F}\leq M$ for all $x\in F$. Let $m\in\mathbb{N}$. Since $F$ is not locally compact we can apply (ii) of Theorem \ref{thr:FAAMP} so that for $r\in|F^{\times}|_{F}$ we have
\begin{equation*}
|a_{m}|_{F}r^{m}\leq\max\{|a_{n}|_{F}r^{n}:n\geq0\}=\sup\{|f(x)|_{F}:|x|_{F}\leq r\}\leq M.
\end{equation*}
This holds for every $r\in|F^{\times}|_{F}$, and since the valuation is non-trivial $|F^{\times}|_{F}$ is unbounded, so $a_{m}=0$ for each $m\in\mathbb{N}$, leaving $f=a_{0}$.
\end{proof}
For a field $F$ with the trivial valuation we note that $F$ is locally compact and that there are bounded non-constant polynomials from $F$ to $F$, where we take polynomials to be the analog of globally analytic functions in this case.
\section{Banach {\it F}-algebras}
\label{sec:FAABFA}
We begin this section with the following definitions.
\begin{definition}
Let $F$ be a complete valued field.
\begin{enumerate}
\item[(i)]
A {\em general Banach ring} is a normed ring $R$ that is complete with respect to its norm which is required to be sub-multiplicative, i.e.
\begin{equation*}
\|ab\|_{R}\leq\|a\|_{R}\|b\|_{R}\quad\mbox{for all }a,b\in R.
\end{equation*}
We do not assume that $R$ has a multiplicative identity or that its multiplication is commutative, we merely assume it is associative.
\item[(ii)]
A {\em Banach ring} is a general Banach ring $R$ that has a left/right multiplicative identity satisfying $\|1_{R}\|_{R}=1=\|-1_{R}\|_{R}$.
\item[(iii)]
A {\em Banach $F$-algebra} is a general Banach ring $A$ that is also a normed vector space over $F$, with respect to the ring's addition operation and norm, and such that the ring's multiplication operation is a bilinear map over $F$, i.e. respectively
\begin{equation*}
\|\alpha a\|_{A}=|\alpha|_{F}\|a\|_{A}\mbox{ and }(\alpha a)b=a(\alpha b)=\alpha(ab)\quad\mbox{for all }a,b\in A\mbox{ and }\alpha\in F.
\end{equation*}
\item[(iv)]
A {\em unital Banach $F$-algebra} is a Banach $F$-algebra that is also a Banach ring, as opposed to being merely a general Banach ring.
\item[(v)]
By {\em commutative general Banach ring} and {\em commutative Banach $F$-algebra} etc. we mean that the multiplication is commutative in these cases. By {\em $F$-algebra} we mean the structure of a Banach $F$-algebra but without the requirement of a norm.
\end{enumerate}
\label{def:FAABA}
\end{definition}
\begin{remark}
In Definition \ref{def:FAABA} we always require a multiplicative identity to be different to the additive identity. As standard we will usually dispense with the subscript when denoting elements of the structures defined in Definition \ref{def:FAABA} and in the Archimedean setting we will call a Banach $\mathbb{C}$-algebra a complex Banach algebra and a Banach $\mathbb{R}$-algebra a real Banach algebra.
\label{rem:FAABA}
\end{remark}
\subsection{Spectrum of an element}
\label{subsec:FAASE}
The following discussion concerns the spectrum of an element.
\begin{definition}
Let $F$ be a complete valued field and let $A$ be a unital Banach $F$-algebra. Then for $a\in A$ we call the set
\begin{equation*}
\mbox{Sp}(a):=\{\lambda\in F:\lambda-a\mbox{ is not invertible in }A\}
\end{equation*}
the spectrum of $a$.
\label{def:FAASP}
\end{definition}
\begin{theorem}
Every element of every unital complex Banach algebra has non-empty spectrum.
\label{thr:FAANES}
\end{theorem}
Theorem \ref{thr:FAANES} is very well known. A proof can be found in \cite[p11]{Stout} and relies on Liouville's theorem and the Hahn-Banach theorem in the complex setting. We will confirm that this result is unique among unital Banach $F$-algebras and give details of where the proof from the complex setting fails for other complete valued fields. Let us first recall the Gelfand-Mazur theorem which demonstrates the importance of Theorem \ref{thr:FAANES} in the complex setting and supports Remark \ref{rem:CVFOS} of Chapter \ref{cha:CVF}.
\begin{theorem}
A unital complex Banach algebra that is also a division ring is isometrically isomorphic to the complex numbers.
\label{thr:FAAGM}
\end{theorem}
\begin{proof}
Let $A$ be a unital complex Banach algebra that is also a division ring and let $a\in A$. Since in this case $\mbox{Sp}(a)$ is non-empty, there is some $\lambda\in\mbox{Sp}(a)$. Hence because $A$ is a division ring $\lambda-a=0$ giving $a=\lambda$. More accurately we have $a=\lambda 1_{A}$ but because $A$ is unital we have $\|a\|_{A}=\|\lambda 1_{A}\|_{A}=|\lambda|_{\infty}\|1_{A}\|_{A}=|\lambda|_{\infty}$ and so the map from $A$ onto $\mathbb{C}$ given by $\lambda 1_{A}\mapsto\lambda$ is an isometric isomorphism.
\end{proof}
\begin{remark}
In the Archimedean setting it follows immediately from Theorem \ref{thr:FAAGM} that any complete valued field containing the complex numbers as a valued subfield will coincide with the complex numbers. Note that the proof of Theorem \ref{thr:FAAGM} is very well known.
\label{rem:FAAGM}
\end{remark}
In contrast to Theorem \ref{thr:FAANES} we have the following lemma. The result is certainly known but we give full details in lieu of a reference.
\begin{lemma}
Let $F$ be a complete valued field other than the complex numbers. Then there exists a unital Banach $F$-algebra $A$ such that $\mbox{Sp}(a)=\emptyset$ for some $a\in A$.
\label{lem:FAAES}
\end{lemma}
\begin{proof}
Let $F$ be a complete valued field other than the complex numbers. By Corollary \ref{cor:CVFEE} in the non-Archimedean setting, and since $\mathbb{R}$ is the only complete valued field other than $\mathbb{C}$ in the Archimedean setting, we can always find a complete valued field $L$ that is a proper extension of $F$. Let $a\in L\backslash F$ and note that $L$ is a unital Banach $F$-algebra. Then for every $\lambda\in F$ we have $\lambda-a\not=0$ and so $\lambda-a$ is invertible in $L$ since $L$ is a field. Hence $\mbox{Sp}(a)=\emptyset$.
\end{proof}
Whilst not treating every case, we now consider where the proof of Theorem \ref{thr:FAANES} fails when applied to unital Banach $F$-algebras with $F\not=\mathbb{C}$. For $F=\mathbb{R}$ the Hahn-Banach theorem holds but Liouville's theorem does not, with the trigonometric $\sin$ function restricted to $\mathbb{R}$ as an example of a non-constant, bounded, analytic function from $\mathbb{R}$ to $\mathbb{R}$. In the non-Archimedean setting we do have the ultrametric Liouville theorem, Theorem \ref{thr:FAAULT}, for $F$ not locally compact, and there is also an ultrametric Hahn-Banach theorem for spherically complete fields, as follows.
\begin{definition}
An ultrametric space, see Definition \ref{def:CVFV}, is {\em spherically complete} if each nested sequence of balls has a non-empty intersection.
\label{def:FAASC}
\end{definition}
\begin{theorem}
Let $F$ be a spherically complete non-Archimedean field and let $V$ be an $F$-vector space, $s$ a seminorm on $V$ and $V_{0}\subseteq V$ a vector subspace. Then for every linear functional $\ell_{0}:V_{0}\rightarrow F$ such that $|\ell_{0}(v)|_{F}\leq s(v)$ for all $v\in V_{0}$ there is a linear functional $\ell:V\rightarrow F$ such that $\ell|_{V_{0}}=\ell_{0}$ and $|\ell(v)|_{F}\leq s(v)$ for all $v\in V$.
\label{thr:FAAHB}
\end{theorem}
\begin{remark}
We note that Theorem \ref{thr:FAAHB} is exactly the same as the Hahn-Banach theorem from the Archimedean setting, see \cite[p472]{Stout}, except with $\mathbb{R}$ and $\mathbb{C}$ replaced by any spherically complete non-Archimedean field. A proof can be found in both \cite[p51]{Schneider} and \cite[p288]{Schikhof}, the latter of which further states that Theorem \ref{thr:FAAHB} fails if $F$ is replaced by a non-spherically complete field. It is immediate that spherically complete ultrametric spaces are complete.
\label{rem:FAAHB}
\end{remark}
A proof of the following lemma can be found in \cite[p6]{Schneider}.
\begin{lemma}
All complete non-Archimedean fields with a discrete valuation are spherically complete. In particular if $F$ is a complete non-Archimedean field that is locally compact then $F$ is spherically complete.
\label{lem:FAASC}
\end{lemma}
From the above details we see that for both Theorem \ref{thr:FAAULT} and Theorem \ref{thr:FAAHB} to be applicable we need a non-locally compact, spherically complete, non-Archimedean field. This restricts the possibilities since for example, for any prime $p$, a finite extension of $\mathbb{Q}_{p}$ is locally compact and $\mathbb{C}_{p}$, whilst not locally compact, is also not spherically complete, see \cite[p5]{Schneider}. However, with reference to (ii) of Example \ref{exa:CVFA}, the complete non-Archimedean field $\mathbb{C}\{\{T\}\}$ is not locally compact, since it has an infinite residue field, and it is spherically complete, since its valuation is discrete. Moreover the totally ramified, see Remark \ref{rem:CVFEE}, simple extension $\mathbb{C}\{\{T\}\}(\sqrt{T})$ is a unital Banach $\mathbb{C}\{\{T\}\}$-algebra with complete valuation given by Theorem \ref{thr:CVFEE}. But by the proof of Lemma \ref{lem:FAAES} we have $\mbox{Sp}(\sqrt{T})=\emptyset$. So let us briefly review how the proof of Theorem \ref{thr:FAANES} from \cite[p11]{Stout} works and then consider where it fails for $\mathbb{C}\{\{T\}\}(\sqrt{T})$.\\
Let $A$ be a unital complex Banach algebra and let $a\in A$. Suppose towards a contradiction that $\mbox{Sp}(a)=\emptyset$. Then $\lambda-a$ is invertible for all $\lambda\in\mathbb{C}$. In particular $a^{-1}$ exists in $A$ and the map $\ell_{0}:\mathbb{C}a^{-1}\rightarrow\mathbb{C}$, given by $\ell_{0}(\lambda a^{-1}):=\lambda\alpha$ for a fixed $\alpha\in\mathbb{C}$ with $0<|\alpha|_{\infty}\leq\|a^{-1}\|_{A}$, is a continuous linear functional on the subspace $\mathbb{C}a^{-1}$ of $A$ to which the Hahn-Banach theorem can be applied directly. Hence there exists a continuous linear functional $\ell:A\rightarrow\mathbb{C}$ such that $\ell(-a^{-1})=-\alpha\not=0$. On the other hand for any continuous linear functional $\varphi:A\rightarrow\mathbb{C}$ we can define a function $f_{\varphi}:\mathbb{C}\rightarrow\mathbb{C}$ by
\begin{equation*}
f_{\varphi}(\lambda):=\varphi((\lambda-a)^{-1}).
\end{equation*}
The proof then shows that $f_{\varphi}$ is differentiable at every point of $\mathbb{C}$ and is therefore an entire function. Moreover $\lim_{\lambda\to\infty}f_{\varphi}(\lambda)=0$ since
\begin{equation*}
|f_{\varphi}(\lambda)|_{\infty}=\left|\frac{1}{\lambda}\varphi((1-\lambda^{-1}a)^{-1})\right|_{\infty}\leq\frac{1}{|\lambda|_{\infty}}\|\varphi\|_{\mbox{\footnotesize op}}\|(1-\lambda^{-1}a)^{-1}\|_{A},
\end{equation*}
where $\|\cdot\|_{\mbox{\footnotesize op}}$ is the standard operator norm. Hence, by Liouville's theorem in the complex setting, $f_{\varphi}$ is the zero function. But we have $f_{\ell}(0)=-\alpha\not=0$, a contradiction, and so $\mbox{Sp}(a)\not=\emptyset$ as required. Note however that the function $f_{\varphi}$ is defined on $\mathbb{C}\backslash\mbox{Sp}(a)$.\\
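The behaviour the complex proof exploits can be watched in a concrete unital complex Banach algebra. The sketch below is our own illustration, not part of the argument above: in $M_{2}(\mathbb{C})$ we fix an element $a$ and take $\varphi$ to be the continuous linear functional given by the $(0,0)$ matrix entry; then $f_{\varphi}$ is defined off $\mbox{Sp}(a)$, the spectrum is non-empty, and $f_{\varphi}(\lambda)\to0$ as $|\lambda|\to\infty$.

```python
import numpy as np

# Our own choice of element a in the unital complex Banach algebra M_2(C)
# and of continuous linear functional phi(b) = b[0, 0].
a = np.array([[2.0, 1.0], [0.0, 3.0]], dtype=complex)
I = np.eye(2, dtype=complex)

def f_phi(lam):
    """f_phi(lambda) = phi((lambda - a)^{-1}) for lambda outside Sp(a)."""
    return np.linalg.inv(lam * I - a)[0, 0]

# The spectrum is non-empty (here the eigenvalues 2 and 3):
spectrum = np.linalg.eigvals(a)

# f_phi(lambda) -> 0 as |lambda| -> infinity, the decay the proof
# combines with Liouville's theorem:
decay = [abs(f_phi(10.0 ** k)) for k in range(1, 5)]
```

Here `decay` is strictly decreasing towards zero, while `spectrum` is non-empty, exactly the combination that forces the contradiction in the complex setting.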
Now for $\mathbb{C}\{\{T\}\}(\sqrt{T})$ the coordinate projection $P:\mathbb{C}\{\{T\}\}(\sqrt{T})\rightarrow\mathbb{C}\{\{T\}\}$ given by $P(\alpha+\beta\sqrt{T}):=\alpha$, where $\alpha,\beta\in\mathbb{C}\{\{T\}\}$, is a continuous linear functional analogous to an evaluation functional noting that convergence in $\mathbb{C}\{\{T\}\}(\sqrt{T})$ is coordinate-wise over $\mathbb{C}\{\{T\}\}$ by Remark \ref{rem:CVFEN}. Hence we can define a function $f_{P}:\mathbb{C}\{\{T\}\}\rightarrow\mathbb{C}\{\{T\}\}$ given by
\begin{equation*}
f_{P}(\lambda):=P((\lambda-\sqrt{T})^{-1})=P((\lambda+\sqrt{T})(\lambda^{2}-T)^{-1})=\lambda(\lambda^{2}-T)^{-1}.
\end{equation*}
The function $f_{P}$ is defined on all of $\mathbb{C}\{\{T\}\}$ since the roots of $\lambda^{2}-T$ are $\sqrt{T}$ and $-\sqrt{T}$. Furthermore $f_{P}$ is not constant and so it is the relative weakness of the ultrametric Liouville theorem in the non-Archimedean setting that allows the argument used in the proof of Theorem \ref{thr:FAANES} to fail in this case. Indeed we will now show that $f_{P}$ is not globally analytic on all of $\mathbb{C}\{\{T\}\}$. The first derivative of $f_{P}$ is
\begin{equation*}
f_{P}^{(1)}(\lambda)=(\lambda^{2}-T)^{-1}-2\lambda^{2}(\lambda^{2}-T)^{-2}
\end{equation*}
and so $f_{P}(0)=0$ and $f_{P}^{(1)}(0)=-\frac{1}{T}$. Continuing in this way we obtain the Taylor expansion of $f_{P}$ about zero as
\begin{equation*}
f_{P}(\lambda)=\sum_{n=0}^{\infty}\alpha_{n}\lambda^{n}=-\left(\frac{\lambda}{T}+\frac{\lambda^{3}}{T^{2}}+\frac{\lambda^{5}}{T^{3}}+\frac{\lambda^{7}}{T^{4}}+\cdots\right),\quad\mbox{for }|\lambda|_{T}<\rho,
\end{equation*}
where $\alpha_{n}:=\frac{f_{P}^{(n)}(0)}{n!}=-\frac{1-(-1)^{n}}{2}T^{-\frac{1}{2}(1+n)}\in\mathbb{C}\{\{T\}\}$ and
\begin{equation*}
\rho=\frac{1}{\limsup_{n\to\infty}\sqrt[n]{|\alpha_{n}|_{T}}}
\end{equation*}
is the radius of convergence of the Taylor series expansion. We now show that $\rho$ is finite. Using the rank 1 valuation logarithm, for $\sum_{n\in\mathbb{Z}}a_{n}T^{n}\in\mathbb{C}\{\{T\}\}^{\times}$ we have $|\sum_{n\in\mathbb{Z}}a_{n}T^{n}|_{T}=r^{-\min\{n:a_{n}\not=0\}}$ for some fixed $r>1$. Hence, noting that $\alpha_{2n}=0$ and $\alpha_{2n-1}=-T^{-n}$ for $n\in\mathbb{N}$, we have
\begin{equation*}
\limsup_{n\to\infty}\sqrt[n]{|\alpha_{n}|_{T}}=\lim_{n\to\infty}\sqrt[2n-1]{|\alpha_{2n-1}|_{T}}=\lim_{n\to\infty}\sqrt[2n-1]{r^{n}}=\lim_{n\to\infty}r^{\frac{n}{2n-1}}=r^{\frac{1}{2}}.
\end{equation*}
Hence $\rho=\frac{1}{\sqrt{r}}<1$ since $r>1$. In particular $f_{P}$ is only locally analytic on $\mathbb{C}\{\{T\}\}$ and not globally analytic, consistent with the ultrametric Liouville theorem not being applicable to $f_{P}$ as required.
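The two computations above, the Taylor coefficients of $f_{P}$ about zero and the limit $r^{n/(2n-1)}\to r^{1/2}$, can be verified symbolically and numerically. The sketch below is our own check, with $r=2$ chosen arbitrarily as a value greater than $1$.

```python
import sympy as sp

# Taylor expansion of f_P(lambda) = lambda/(lambda^2 - T) about lambda = 0;
# the text asserts alpha_{2n-1} = -T^{-n} and alpha_{2n} = 0.
lam, T = sp.symbols('lambda T')
f_P = lam / (lam ** 2 - T)

series = sp.series(f_P, lam, 0, 8).removeO()
coeffs = [series.coeff(lam, n) for n in range(8)]
# Expect [0, -1/T, 0, -1/T**2, 0, -1/T**3, 0, -1/T**4].

# With |alpha_{2n-1}|_T = r^n for a fixed r > 1, the (2n-1)-th roots are
# r^{n/(2n-1)}, which approach r^{1/2}, so rho = 1/sqrt(r) < 1:
r = 2.0
roots = [r ** (n / (2 * n - 1)) for n in range(1, 200)]
```

The tail of `roots` is close to $\sqrt{2}=r^{1/2}$, consistent with $\rho=1/\sqrt{r}$.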
\begin{definition}
Let $F$ be a complete valued field and let $A$ be a unital Banach $F$-algebra. Define $\mathcal{F}(A)$ as the set of all complete valued fields $L$ contained inside $A$ over which $A$ is also a unital Banach $L$-algebra.
\label{def:FAAMF}
\end{definition}
\begin{remark}
\label{rem:FAAFC}
Concerning the spectrum of an element.
\begin{enumerate}
\item[(i)]
It is tempting to conjecture that a generalisation of Theorem \ref{thr:FAANES} might hold for every complete valued field $F$ provided that, given $F$, we restrict the statement to those unital Banach $F$-algebras $A$ for which $F$ is a maximal element of $\mathcal{F}(A)$. This conjecture is false in both the non-commutative and commutative settings by Lemma \ref{lem:FAAHQ} below. However for a more general version of the conjecture one could permit the elements of $\mathcal{F}(A)$ to be complete normed division rings.
\item[(ii)]
Let $A$ be a unital real Banach algebra. In order to avoid an element $a\in A$ having empty spectrum Kaplansky gave the following alternative definition in this case,
\begin{equation*}
\mbox{Sp}_{\mathcal{K}}(a):=\{\alpha+i\beta\in\mathbb{C}:(a-\alpha)^{2}+\beta^{2}\mbox{ is not invertible in }A\}.
\end{equation*}
We won't investigate this definition here but for more details see \cite[p6]{Kulkarni-Limaye1992}.
\end{enumerate}
\end{remark}
\begin{lemma}
In both the non-commutative and commutative algebra settings one can find a complete valued field $F$, a unital Banach $F$-algebra $A$ and an element $a\in A$ such that $F$ is a maximal element of $\mathcal{F}(A)$ and $\mbox{Sp}(a)=\emptyset$.
\label{lem:FAAHQ}
\end{lemma}
\begin{proof}
Hamilton's real quaternions, $\mathbb{H}$, are an example of a non-commutative complete Archimedean division ring and unital real Banach algebra. Viewing $\mathbb{H}$ as a real vector space, the valuation on $\mathbb{H}$ is the Euclidean norm which is complete, Archimedean and indeed a valuation since it is multiplicative on $\mathbb{H}$, see \cite[p56,p57]{Lam}. By the Gelfand-Mazur theorem, Theorem \ref{thr:FAAGM}, $\mathbb{H}$ is not a unital complex Banach algebra, since it is different to $\mathbb{C}$, and so $\mathbb{R}$ is maximal in $\mathcal{F}(\mathbb{H})$. Moreover for $a\in\mathbb{H}\backslash\mathbb{R}$ it is immediate that we have $\mbox{Sp}(a)=\emptyset$.\\
In the commutative setting consider the field of complex numbers $\mathbb{C}$ with the absolute valuation replaced by the $L_{1}$-norm as it applies to the real vector space $\mathbb{R}^{2}$, that is for $a=\alpha+i\beta\in\mathbb{C}$ we have $\|a\|_{1}:=|\alpha|_{\infty}+|\beta|_{\infty}$. Then $\mathbb{C}$ is complete with respect to $\|\cdot\|_{1}$ by the equivalence of norms on finite dimensional $\mathbb{R}$-vector spaces. Expressing complex numbers in their coordinate form it is easy to show that $\|\cdot\|_{1}$ is sub-multiplicative and so $(\mathbb{C},\|\cdot\|_{1})$ is a unital real Banach algebra. However $\|\cdot\|_{1}$ is not multiplicative since $\|(1+i)(1-i)\|_{1}=\|2\|_{1}=2<4=\|1+i\|_{1}\|1-i\|_{1}$ and so $\|\cdot\|_{1}$ is not a valuation on $\mathbb{C}$. Consequently $\mathbb{R}$ is maximal in $\mathcal{F}((\mathbb{C},\|\cdot\|_{1}))$ and over $\mathbb{R}$, $\mbox{Sp}(i)=\emptyset$ which completes the proof.
\end{proof}
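The norm computations in the commutative half of the proof are elementary arithmetic; as a quick sanity check, our own illustration below verifies that the $L_{1}$-norm on $\mathbb{C}$ is sub-multiplicative (on random samples) yet fails to be multiplicative at the pair $1+i$, $1-i$.

```python
import random

# L1-norm of a complex number viewed as a point of R^2.
def norm1(z):
    return abs(z.real) + abs(z.imag)

lhs = norm1((1 + 1j) * (1 - 1j))          # ||2||_1 = 2
rhs = norm1(1 + 1j) * norm1(1 - 1j)       # 2 * 2 = 4

# Sub-multiplicativity ||zw||_1 <= ||z||_1 ||w||_1 on random samples:
random.seed(0)
pairs = [(complex(random.uniform(-5, 5), random.uniform(-5, 5)),
          complex(random.uniform(-5, 5), random.uniform(-5, 5)))
         for _ in range(1000)]
submult_ok = all(norm1(z * w) <= norm1(z) * norm1(w) + 1e-9 for z, w in pairs)
```

So `lhs` is $2$ and `rhs` is $4$: the norm is sub-multiplicative but not multiplicative, hence not a valuation.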
\chapter[Uniform algebras]{Uniform algebras}
\label{cha:UA}
In the first section of this chapter we survey some of the basic facts about complex uniform algebras and recall the close connection with the study of compact Hausdorff spaces, such as Swiss cheese sets, upon which such algebras of functions are defined. An inductive proof by the author of the Feinstein-Heath Swiss cheese ``classicalisation''
theorem is then presented. An article containing this proof has been published by the American Mathematical Society, see \cite{Mason}. In the second section of this chapter we turn our attention to non-complex analogs of uniform algebras. The constraints imposed by the various generalisations of the Stone-Weierstrass theorem are considered and the theory of real function algebras developed by Kulkarni and Limaye is introduced. We will establish the topological requirements of the spaces upon which algebras of functions in the non-Archimedean setting can be defined whilst qualifying as non-complex analogs of uniform algebras. These observations together with some of the details and examples from other chapters have been gathered together by the author into a survey paper which was subsequently accepted for publication by the American Mathematical Society, see \cite{Mason2011}.
\section{Complex uniform algebras}
\label{sec:UAC}
\begin{definition}
\label{def:UACUA}
Let $C_{\mathbb{C}}(X)$ be the unital complex Banach algebra of all continuous complex valued functions, defined on a compact Hausdorff space $X$, with pointwise operations and the sup norm given by
\begin{equation*}
\|f\|_{\infty}:=\sup_{x\in X}|f(x)|_{\infty}\quad\mbox{for all }f\in C_{\mathbb{C}}(X).
\end{equation*}
A {\em{uniform algebra}}, $A$, is a subalgebra of $C_{\mathbb{C}}(X)$ that is complete with respect to the sup norm, contains the constant functions making it a unital complex Banach algebra and separates the points of $X$ in the sense that for all $x_{1}, x_{2}\in X$ with $x_{1}\not= x_{2}$ there is $f\in A$ satisfying $f(x_{1})\not= f(x_{2})$.
\end{definition}
\begin{remark}
Introductions to uniform algebras can be found in \cite{Browder}, \cite{Gamelin} and \cite{Stout}. Some authors take Definition \ref{def:UACUA} to be a representation of uniform algebras and take a uniform algebra $A$ to be a unital complex Banach algebra with a square preserving norm, that is $\|a^{2}\|=\|a\|^{2}$ for all $a\in A$, which they sometimes then refer to as a uniform norm. This is quite legitimate since, as we will discuss in depth in the section on representation theory, the Gelfand transform shows us that every such algebra is isometrically isomorphic to an algebra conforming to Definition \ref{def:UACUA}. In this thesis we mainly introduce generalisations of Definition \ref{def:UACUA} over complete valued fields and then investigate the important representation theory results. Hence for us by {\em uniform norm} we will mean the sup norm.
\label{rem:UASA}
\end{remark}
It is very well known that in the complex setting, for suitable $X$, there exist uniform algebras that are proper subalgebras of $C_{\mathbb{C}}(X)$. However if $A$ is such a uniform algebra then $A$ is not self-adjoint, that is there is $f\in A$ with $\bar{f}\notin A$ where $\bar{f}$ denotes the complex conjugate of $f$. This result is the complex Stone-Weierstrass theorem, generalisations of which we will meet in Section \ref{sec:UANC}. We will also meet several analogs of the following example.
\begin{example}
A standard example is the {\em{disc algebra}} $A(\Delta)\subseteq C_{\mathbb{C}}(\Delta)$, of continuous functions analytic on the interior of $\Delta:=\{z\in\mathbb{C}:|z|\leq1\}$, which is as far from being self-adjoint as possible since if both $f$ and $\bar{f}$ are in $A(\Delta)$ then $f$ is constant, see \cite[p47]{Kulkarni-Limaye1992}. Also $P(\Delta)=A(\Delta)$ where $P(\Delta)$ is the uniform algebra of all functions on $\Delta$ that can be uniformly approximated by polynomials restricted to $\Delta$ with complex coefficients. This largely follows from Remark \ref{rem:FAAGL}, see \cite[p5]{Browder} or \cite[p2]{Stout}.
\label{exa:UADA}
\end{example}
For a compact Hausdorff space $X$ let $R(X)$ denote the uniform algebra of all functions on $X$ that can be uniformly approximated by rational functions from $C_{\mathbb{C}}(X)$. We also generalise to $X$ the uniform algebras introduced in Example \ref{exa:UADA} giving $A(X)$ and $P(X)$. In the theory of uniform approximation it is standard to ask for which $X$ is one or more of the following inclusions non-trivial
\begin{equation*}
P(X)\subseteq R(X)\subseteq A(X)\subseteq C_{\mathbb{C}}(X).
\end{equation*}
Whilst not always the case, this often only depends on $X$ up to homeomorphism. In particular many properties of uniform algebras are topological properties of the spaces upon which they are defined. Hence there is a strong connection between the study of uniform algebras and that of compact Hausdorff spaces. Therefore, in addition to being of interest in their own right, uniform algebras are important in the theory of uniform approximation; as examples of complex Banach algebras; in representation theory and in the study of compact Hausdorff spaces. With respect to the latter, we now turn our attention to the compact plane sets known as Swiss cheese sets.
\subsection{Swiss cheese sets in the complex plane}
\label{subsec:UASCS}
Throughout subsections \ref{subsec:UASCS} and \ref{subsec:UACT}, all discs in the complex plane are required to have finite positive radius. More generally let $\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}$ from here on throughout the thesis. We begin with the following definitions taken from \cite{Feinstein-Heath}.
\begin{definition}
For a disc $D$ in the plane let $r(D)$ denote the radius of $D$.
\begin{enumerate}
\item[(i)]
A {\em{Swiss cheese}} is a pair ${\bf{D}}:=(\Delta,\mathcal{D})$ for which $\Delta$ is a closed disc and $\mathcal{D}$ is a countable or finite collection of open discs. A Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$ is {\em{classical}} if the closures of the discs in $\mathcal{D}$ intersect neither one another nor $\mathbb{C}\backslash\mbox{\rm{int}}\Delta$, and $\sum_{D\in\mathcal{D}}r(D)<\infty$.
\item[(ii)]
The {\em{associated Swiss cheese set}} of a Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$ is the plane set $X_{\bf{D}}:=\Delta\backslash\bigcup\mathcal{D}$.\\
A {\em{classical Swiss cheese set}} is a plane set $X$ for which there exists a classical Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$ such that $X=X_{\bf{D}}$.
\item[(iii)]
For a Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$, we define $\delta({\bf{D}}):=r(\Delta)-\sum_{D\in\mathcal{D}}r(D)$ so that $\delta({\bf{D}})>-\infty$ if and only if $\sum_{D\in\mathcal{D}}r(D)<\infty$.
\end{enumerate}
\label{def:UASC}
\end{definition}
Figure \ref{fig:UASCS} provides an example for (ii) of Definition \ref{def:UASC}.
\begin{figure}[h]
\begin{center}
\includegraphics{Thesisfig2}
\end{center}
\caption{A classical Swiss cheese set.}
\label{fig:UASCS}
\end{figure}
Swiss cheese sets are used extensively in the theory of uniform algebras since they provide many examples of uniform algebras with particular properties. For examples see \cite{Feinstein}, \cite[Ch2]{Gamelin} and \cite{Roth}. In particular \cite{Feinstein-Heath} includes a survey of the use of Swiss cheese constructions
in the theory of uniform algebras. The following example is from \cite{Roth}.
\begin{lemma}
For $X\subseteq\mathbb{C}$ non-empty and compact, let $A_{1}\subseteq A_{2}\subseteq C_{\mathbb{C}}(X)$ be uniform algebras and let $A_{0}\subseteq A_{1}$ be a subset that is uniformly dense in $A_{1}$. Suppose we can find a continuous linear functional $\varphi:C_{\mathbb{C}}(X)\rightarrow\mathbb{C}$ such that $\varphi(A_{0})=\{0\}$ and $\varphi(f)=a\not=0$ for some $f\in A_{2}$. Then $A_{1}\not=A_{2}$.
\label{lem:UARO}
\end{lemma}
\begin{proof}
Let $q\in A_{0}$. Then
\begin{equation*}
0<|a|_{\infty}=|\varphi(f)-\varphi(q)|_{\infty}=|\varphi(f-q)|_{\infty}\leq\|f-q\|_{\infty}\|\varphi\|_{\mbox{\footnotesize op}}
\end{equation*}
giving $\|f-q\|_{\infty}\geq|a|_{\infty}\|\varphi\|_{\mbox{\footnotesize op}}^{-1}>0$ for all $q\in A_{0}$. Hence $f$ cannot be uniformly approximated by elements of $A_{0}$ and so $f\notin A_{1}$, giving $A_{1}\not=A_{2}$. More simply, $\varphi(A_{1})=\{0\}$ by continuity whereas $\varphi(f)\not=0$.
\end{proof}
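The mechanism of Lemma \ref{lem:UARO} can be seen numerically in a classical special case. The sketch below is our own illustration and anticipates the construction that follows: take $X$ to be the unit circle, $A_{0}$ the polynomials in $z$ restricted to $X$, and $\varphi(f)$ the contour integral of $f$ around the circle. By Cauchy's theorem $\varphi$ annihilates $A_{0}$, yet $\varphi$ of the conjugation function is $2\pi i\not=0$, so conjugation cannot be uniformly approximated by polynomials on the circle.

```python
import numpy as np

# Parametrise the unit circle by gamma(x) = exp(2*pi*i*x), x in [0, 1),
# on a uniform grid, and approximate the contour integral by a Riemann sum.
N = 20000
x = np.linspace(0.0, 1.0, N, endpoint=False)
z = np.exp(2j * np.pi * x)            # gamma(x)
dz = 2j * np.pi * z / N               # gamma'(x) dx on the grid

def phi(f):
    """Riemann-sum approximation of the contour integral of f."""
    return np.sum(f(z) * dz)

# phi annihilates polynomials in z ...
poly_vals = [abs(phi(lambda w, k=k: w ** k)) for k in range(5)]
# ... but not the conjugation function:
conj_val = phi(np.conj)
```

The values in `poly_vals` vanish to rounding error while `conj_val` is close to $2\pi i$, exactly the situation the lemma turns into a separation of algebras.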
\begin{example}
\label{exa:UARO}
It is possible to have $R(X)\not=A(X)$. Let $D_{0}$ be a closed disc and let ${\bf{D}}=(D_{0},\mathcal{D})$ be a classical Swiss cheese with $\delta({\bf{D}})>0$ and $\mathcal{D}$ infinite. Let $(D_{n})$ be a sequence of open discs such that the map $n\mapsto D_{n}$ is a bijection from $\mathbb{N}$ to $\mathcal{D}$. For $n\in\mathbb{N}_{0}$ let $a_{n}$ denote the centre and $r_{n}:=r(D_{n})$ the radius of $D_{n}$, and define $\gamma_{n}:[0,1]\rightarrow\mathbb{C}$ as the circular path
\begin{equation*}
\gamma_{n}(x):=r_{n}\exp(2\pi ix)+a_{n}
\end{equation*}
around the boundary $\partial D_{n}$. Now for a rational function $q\in C_{\mathbb{C}}(X_{\bf{D}})$ we note that on $\mathbb{C}$ the finitely many poles of $q$ lie in the open complement of $X_{\bf{D}}$ and so $X_{\bf{D}}$ is a subset of an open subset of $\mathbb{C}$ upon which $q$ is analytic. Hence by Cauchy's theorem, see \cite[p218]{Rudin}, we have $\varphi(q)=0$ for $\varphi:C_{\mathbb{C}}(X_{\bf{D}})\rightarrow\mathbb{C}$ defined by
\begin{equation*}
\varphi(f):=\int_{\gamma_{0}}f{\mathrm{d}}z-\sum_{n=1}^{\infty}\int_{\gamma_{n}}f{\mathrm{d}}z\quad\mbox{for }f\in C_{\mathbb{C}}(X_{\bf{D}}).
\end{equation*}
We now check that $\varphi$ is a bounded linear functional on $C_{\mathbb{C}}(X_{\bf{D}})$. The following uses the fundamental estimate. Let $f\in C_{\mathbb{C}}(X_{\bf{D}})$ with $\|f\|_{\infty}\leq 1$. Then,
\begin{align*}
|\varphi(f)|_{\infty} &=\left|\int_{\gamma_{0}}f(z){\mathrm{d}}z-\sum_{n=1}^{\infty}\int_{\gamma_{n}}f(z){\mathrm{d}}z\right|_{\infty}\\ &\leq\sum_{n=0}^{\infty}\left|\int_{\gamma_{n}}f(z){\mathrm{d}}z\right|_{\infty}\\
&\leq\sum_{n=0}^{\infty}\|f\|_{\infty}\int_{0}^{1}|\gamma'_{n}(x)|_{\infty}{\mathrm{d}}x\\
&\leq\sum_{n=0}^{\infty}\int_{0}^{1}|r_{n}2\pi i \exp(2\pi i x)|_{\infty}{\mathrm{d}}x\\
&=\sum_{n=0}^{\infty}r_{n}2\pi\int_{0}^{1}{\mathrm{d}}x=2\pi\left(\sum_{n=0}^{\infty}r_{n}\right)<4\pi r_{0}
\end{align*}
where $\sum_{n=0}^{\infty}r_{n}<2r_{0}$ since $\delta({\bf{D}})>0$. Now $4\pi r_{0}$ is an upper bound for the series of absolute values of the terms, hence we have absolute convergence. Since absolute convergence implies convergence we have, for all non-zero $f\in C_{\mathbb{C}}(X_{\bf{D}})$,
\begin{equation*} \varphi(f)=\varphi\left(\frac{\|f\|_{\infty}}{\|f\|_{\infty}}f\right)=\|f\|_{\infty}\varphi\left(\frac{f}{\|f\|_{\infty}}\right)=\|f\|_{\infty}a_{f}\in\mathbb{C}
\end{equation*}
for some $a_{f}\in\mathbb{C}$, and trivially $\varphi(0)=0$. Moreover, our calculation shows that $\varphi$ is bounded with $\|\varphi\|_{\mbox{\footnotesize op}}<4\pi r_{0}$. The linearity of $\varphi$ follows from the linearity of integrating over a sum of terms and so $\varphi$ is a continuous linear functional on $C_{\mathbb{C}}(X_{\bf{D}})$. Next we note that the function $g:z\mapsto\bar{z}$ on $X_{\bf{D}}$ given by complex conjugation is an element of $C_{\mathbb{C}}(X_{\bf{D}})$. Cauchy's theorem does not imply that $\varphi(g)$ will be zero since $g$ is not analytic on any non-empty open subset of $\mathbb{C}$. We have
\begin{equation*}
\varphi(g)=\int_{\gamma_{0}}g(z){\mathrm{d}}z-\sum_{n=1}^{\infty}\int_{\gamma_{n}}g(z){\mathrm{d}}z=2\pi i\left(r_{0}^{2}-\sum_{n=1}^{\infty}r_{n}^{2}\right),
\end{equation*}
since for each $n\in\mathbb{N}_{0}$
\begin{align*}
\int_{\gamma_{n}}g(z){\mathrm{d}}z &=\int_{0}^{1}g(\gamma_{n})\gamma'_{n}{\mathrm{d}}x\\
&=\int_{0}^{1}(r_{n}\exp(-2\pi ix)+\bar{a}_{n})r_{n}2\pi i\exp(2\pi ix){\mathrm{d}}x\\
&=2\pi i r_{n}\int_{0}^{1}(r_{n}+\bar{a}_{n}\exp(2\pi ix)){\mathrm{d}}x\\
&=2\pi i r_{n}\left(r_{n}\int_{0}^{1}{\mathrm{d}}x+\bar{a}_{n}\int_{0}^{1}\exp(2\pi ix){\mathrm{d}}x\right)\\
&=2\pi i r_{n}\left(r_{n}+\bar{a}_{n}\left[\frac{1}{2\pi i}\exp(2\pi ix)\right]_{0}^{1}\right)\\
&=2\pi i r_{n}^{2}.
\end{align*}
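The value of each $\int_{\gamma_{n}}g(z)\,{\mathrm{d}}z$ can also be confirmed numerically. The following minimal sketch (assuming NumPy; the function name is ours) discretises the parametrised integral $\int_{0}^{1}\overline{\gamma(x)}\,\gamma'(x)\,{\mathrm{d}}x$ with the rectangle rule, which is essentially exact here because the integrand is smooth and $1$-periodic:

```python
import numpy as np

def integral_conj_over_circle(a, r, n=4096):
    """Rectangle-rule value of the contour integral of g(z) = conj(z)
    over gamma(x) = r*exp(2*pi*i*x) + a for x in [0, 1]."""
    x = np.arange(n) / n                            # uniform grid on [0, 1)
    gamma = r * np.exp(2j * np.pi * x) + a          # gamma(x)
    dgamma = 2j * np.pi * r * np.exp(2j * np.pi * x)  # gamma'(x)
    return np.mean(np.conj(gamma) * dgamma)

# The derivation in the text predicts 2*pi*i*r^2, independent of the centre a.
val = integral_conj_over_circle(a=0.3 - 0.7j, r=0.25)
```

For any choice of centre $a$ the computed value agrees with $2\pi ir^{2}$ to within floating-point error, matching the derivation above.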
Furthermore $\sum_{n=1}^{\infty}r_{n}^{2}\leq(\sum_{n=1}^{\infty}r_{n})^{2}<r_{0}^{2}$ since $\sum_{n=1}^{\infty}r_{n}<r_{0}$, by $\delta({\bf{D}})>0$, and so $\varphi(g)\not=0$. Therefore by Lemma \ref{lem:UARO} we have $R(X_{\bf{D}})\not=C_{\mathbb{C}}(X_{\bf{D}})$ and $g\not\in R(X_{\bf{D}})$. Certainly this result is immediate in the case where $X_{\bf{D}}$ has interior since then $g$ will not be an element of $A(X_{\bf{D}})$. However this is one of the occasions where the usefulness of Swiss cheese set constructions becomes evident since, with some consideration, it is straightforward to construct a classical Swiss cheese ${\bf{D}}=(D_{0},\mathcal{D})$ with $\delta({\bf{D}})>0$ such that $X_{\bf{D}}$ has empty interior. Since $A(X_{\bf{D}})$ is the uniform algebra of all elements from $C_{\mathbb{C}}(X_{\bf{D}})$ that are analytic on the interior of $X_{\bf{D}}$, in this case we have $A(X_{\bf{D}})=C_{\mathbb{C}}(X_{\bf{D}})$ and so by the above $R(X_{\bf{D}})\not=A(X_{\bf{D}})$ which completes this example.
\end{example}
Let ${\bf{D}}$ be a Swiss cheese as specified in Example \ref{exa:UARO} such that $X_{\bf{D}}$ has empty interior. The following subsection shows that in this case there is actually no need to require ${\bf{D}}$ to be classical in order that $R(X_{\bf{D}})\not=A(X_{\bf{D}})$.
\subsection{Classicalisation theorem}
\label{subsec:UACT}
In this subsection we give a new proof of an existing theorem by J. F. Feinstein and M. J. Heath, see \cite{Feinstein-Heath}. The theorem states that any Swiss cheese set defined by a Swiss cheese ${\bf{D}}$ with $\delta({\bf{D}})>0$ contains a Swiss cheese set as a subset defined by a classical Swiss cheese ${\bf{D^{'}}}$ with $\delta({\bf{D^{'}}})\geq\delta({\bf{D}})$. Feinstein and Heath begin their proof by developing a theory of allocation maps connected to such sets. A partial order on a family of these allocation maps is then introduced and Zorn's lemma applied. We take a more direct approach by using transfinite induction, cardinality and disc assignment functions, where a disc assignment function is a kind of labeled Swiss cheese that defines a Swiss cheese set. An explicit theory of allocation maps is no longer required although we are still using them implicitly. In this regard we will discuss the connections with the original proof of Feinstein and Heath. See \cite[p266]{Kelley} and \cite[p9]{Dales} for useful introductions to ordinals and transfinite induction, which are used in this subsection. We begin with the following definitions.
\begin{definition}
Let $\mathcal{O}$ be the set of all open discs and complements of closed discs in the complex plane.
\begin{enumerate}
\item[(i)]
A {\em{disc assignment function}} $d:S\rightarrow\mathcal{O}$ is a map from a subset $S\subseteq\mathbb{N}_{0}$, with $0\in S$, into $\mathcal{O}$ such that ${\bf{D}}_{d}:=(\mathbb{C}\backslash d(0),d(S\backslash\{0\}))$ is a Swiss cheese. We allow $S\backslash\{0\}$ to be empty since a Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$ can have $\mathcal{D}=\emptyset$.
\item[(ii)]
For a disc assignment function $d:S\rightarrow\mathcal{O}$ and $i\in S$ we let $\bar{d}(i)$ denote the closure of $d(i)$ in $\mathbb{C}$, that is $\bar{d}(i):=\overline{d(i)}$. A disc assignment function $d:S\rightarrow\mathcal{O}$ is said to be {\em{classical}} if for all $(i,j)\in S^{2}$ with $i\not= j$ we have $\bar{d}(i)\cap\bar{d}(j)=\emptyset$ and $\sum_{n\in S\backslash\{0\}}r(d(n))<\infty$.
\item[(iii)]
For a disc assignment function $d:S\rightarrow\mathcal{O}$ we let $X_{d}$ denote the associated Swiss cheese set of the Swiss cheese ${\bf{D}}_{d}$.
\item[(iv)]
A disc assignment function $d:S\rightarrow\mathcal{O}$ is said to have the {\em{Feinstein-Heath condition}} when $\sum_{n\in S\backslash\{0\}}r(d(n))<r(\mathbb{C}\backslash d(0))$.
\item[(v)]
Define $H$ as the set of all disc assignment functions with the Feinstein-Heath condition.\\
For $h\in H$, $h:S\rightarrow\mathcal{O}$, define $\delta_{h}:=r(\mathbb{C}\backslash h(0))-\sum_{n\in S\backslash\{0\}}r(h(n))>0$.
\end{enumerate}
\label{def:UAPSC}
\end{definition}
Here is the Feinstein-Heath Swiss cheese ``Classicalisation'' theorem as it appears in \cite{Feinstein-Heath}.
\begin{theorem}
For every Swiss cheese ${\bf{D}}$ with $\delta({\bf{D}})>0$, there is a classical Swiss cheese ${\bf{D^{'}}}$ with $X_{\bf{D^{'}}}\subseteq X_{\bf{D}}$ and $\delta({\bf{D^{'}}})\geq\delta({\bf{D}})$.
\label{thr:UAPFHT}
\end{theorem}
From Definition \ref{def:UAPSC} we note that if a disc assignment function $d:S\rightarrow\mathcal{O}$ is classical then the Swiss cheese ${\bf{D}}_{d}$ will also be classical. Similarly if $d$ has the Feinstein-Heath condition then $\delta({\bf{D}}_{d})>0$. The converse of each of these implications will not hold in general because $d$ need not be injective. However it is immediate that for every Swiss cheese ${\bf{D}}=(\Delta,\mathcal{D})$ with $\delta({\bf{D}})>0$ there exists an injective disc assignment function $h\in H$ such that ${\bf{D}}_{h}={\bf{D}}$. We note that every disc assignment function $h\in H$ has $\delta({\bf{D}}_{h})\ge\delta_{h}$ with equality if and only if $h$ is injective and that classical disc assignment functions are always injective. With these observations it easily follows that Theorem \ref{thr:UAPFHT} is equivalent to the following theorem involving disc assignment functions.
\begin{theorem}
For every disc assignment function $h\in H$ there is a classical disc assignment function $h^{'}\in H$ with $X_{h^{'}}\subseteq X_{h}$ and $\delta_{h^{'}}\geq\delta_{h}$.
\label{thr:UAPHT}
\end{theorem}
Several lemmas from \cite{Feinstein-Heath} and \cite[\S 2.4.1]{Heath} will be used in the proof of Theorem \ref{thr:UAPHT} and we consider them now.
\begin{lemma}
Let $D_{1}$ and $D_{2}$ be open discs in $\mathbb{C}$ with radii $r(D_{1})$ and $r(D_{2})$ respectively such that $\bar{D}_{1}\cap\bar{D}_{2}\not=\emptyset$. Then there is an open disc $D$ with $D_{1}\cup D_{2}\subseteq D$ and with radius $r(D)\le r(D_{1})+r(D_{2})$.
\label{lem:UAP2.4.13}
\end{lemma}
Figure \ref{fig:UAPdisclemmas}, Example 1 exemplifies the application of Lemma \ref{lem:UAP2.4.13}.
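The disc $D$ of Lemma \ref{lem:UAP2.4.13} can be realised by an explicit construction: writing $d:=|a_{1}-a_{2}|$ for the distance between the centres, the closures meet precisely when $d\leq r_{1}+r_{2}$, and the smallest disc containing $D_{1}\cup D_{2}$ then has radius at most $r_{1}+r_{2}$. The following Python sketch of this construction is our own illustration, not taken from \cite{Feinstein-Heath}:

```python
def enclosing_disc(a1, r1, a2, r2):
    """Centre and radius of the smallest disc containing the discs
    (a1, r1) and (a2, r2); when their closures meet, i.e. when
    |a1 - a2| <= r1 + r2, the returned radius is at most r1 + r2."""
    d = abs(a1 - a2)
    if d + r2 <= r1:                     # second disc inside the first
        return a1, r1
    if d + r1 <= r2:                     # first disc inside the second
        return a2, r2
    r = (d + r1 + r2) / 2                # d > 0 here, so no division by zero
    c = a1 + (r - r1) * (a2 - a1) / d    # centre on the segment of centres
    return c, r

c, r = enclosing_disc(0.0, 1.0, 1.5, 1.0)
```

In this example $d=1.5\leq r_{1}+r_{2}=2$, and the hull has radius $1.75\leq 2$ and contains both discs, as in the lemma.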
\begin{lemma}
Let $D$ be an open disc and $\Delta$ be a closed disc such that $\bar{D}\not\subseteq\mbox{\textup{int}}\Delta$ and $\Delta\not\subseteq\bar{D}$. Then there is a closed disc $\Delta^{'}\subseteq\Delta$ with $D\cap\Delta^{'}=\emptyset$ and $r(\Delta^{'})\geq r(\Delta)-r(D)$.
\label{lem:UAP2.4.14}
\end{lemma}
Figure \ref{fig:UAPdisclemmas}, Example 2 exemplifies the application of Lemma \ref{lem:UAP2.4.14}.
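Lemma \ref{lem:UAP2.4.14} likewise admits an explicit construction: push $\Delta^{'}$ away from $D$ along the line of centres until it just touches the circle $\partial D$. The hypotheses give $|c_{D}-c_{\Delta}|+r(D)\geq r(\Delta)$ and $|c_{D}-c_{\Delta}|+r(\Delta)>r(D)$, which make the radius below at least $r(\Delta)-r(D)$. This Python sketch is our own illustration of one such construction; the lemma itself does not prescribe it:

```python
def avoiding_subdisc(cD, r, cT, R):
    """A closed disc Delta' = (centre, radius) with Delta' contained in
    the closed disc Delta = (cT, R), disjoint from the open disc
    D = (cD, r), and of radius at least R - r, under the hypotheses
    of the lemma (|cD - cT| + r >= R and |cD - cT| + R > r)."""
    d = abs(cT - cD)
    if d >= R + r:                        # closure of D already misses Delta
        return cT, R
    u = (cT - cD) / d if d > 0 else 1.0   # unit direction away from D
    rho = (R + d - r) / 2                 # rho >= R - r since d + r >= R
    c = cD + (r + rho) * u                # Delta' touches the circle |z - cD| = r
    return c, rho

c, rho = avoiding_subdisc(0.0, 0.5, 1.0, 1.0)
```

Here $\rho=0.75\geq R-r=0.5$, the returned disc lies inside $\Delta$ and misses $D$.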
\begin{figure}[h]
\begin{center}
\includegraphics{Thesisfig3}
\end{center}
\caption{Examples for lemmas \ref{lem:UAP2.4.13} and \ref{lem:UAP2.4.14}.}
\label{fig:UAPdisclemmas}
\end{figure}
\begin{lemma}
Let $\mathcal{F}$ be a non-empty, nested collection of open discs in $\mathbb{C}$, such that we have $\sup\{r(E):E\in\mathcal{F}\}<\infty$. Then $\bigcup\mathcal{F}$ is an open disc $D$. Further, for $\mathcal{F}$ ordered by inclusion, $r(D)=\lim_{E\in\mathcal{F}}r(E)=\sup_{E\in\mathcal{F}}r(E)$.
\label{lem:UAP2.4.11}
\end{lemma}
\begin{lemma}
Let $\mathcal{F}$ be a non-empty, nested collection of closed discs in $\mathbb{C}$, such that we have $\inf\{r(E):E\in\mathcal{F}\}>0$. Then $\bigcap\mathcal{F}$ is a closed disc $\Delta$. Further, for $\mathcal{F}$ ordered by reverse inclusion, $r(\Delta)=\lim_{E\in\mathcal{F}}r(E)=\inf_{E\in\mathcal{F}}r(E)$.
\label{lem:UAP2.4.12}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thr:UAPHT}]
At the heart of the proof of Theorem \ref{thr:UAPHT} is a completely defined map $f:H\rightarrow H$ which we now define case by case.
\begin{definition}
\label{def:UAPfHtoH}
Let $f:H\rightarrow H$ be the self map with the following construction.\\
{\em{Case 1:}} If $h\in H$ is a classical disc assignment function then define $f(h):=h$.\\
{\em{Case 2:}} If $h\in H$ is not classical then for $h:S\rightarrow\mathcal{O}$ let
\begin{equation*}
I_{h}:=\{(i,j)\in S^{2}:\bar{h}(i)\cap\bar{h}(j)\not=\emptyset, i\not= j\}.
\end{equation*}
We then have lexicographic ordering on $I_{h}$ given by
\begin{equation*}
(i,j)\lesssim(i^{'},j^{'})\mbox{ if and only if $i<i^{'}$ or ($i=i^{'}$ and $j\le j^{'}$).}
\end{equation*}
Since this is a well-ordering on $I_{h}$, let $(n,m)$ be the minimum element of $I_{h}$ and hence note that $m\not= 0$ since $m>n$. We proceed toward defining $f(h):S^{'}\rightarrow\mathcal{O}$.
\begin{equation*}
\mbox{Define $S^{'}:=S\backslash\{m\}$ and for $i\in S^{'}\backslash\{n\}$ we define $f(h)(i):=h(i)$.}
\end{equation*}
It remains for the definition of $f(h)(n)$ to be given and to this end we have the following two cases.\\
{\em{Case 2.1:}} $n\not=0$. In this case, by Definition \ref{def:UAPSC}, we note that both $h(m)$ and $h(n)$ are open discs. Associating $h(m)$ and $h(n)$ with $D_{1}$ and $D_{2}$ of Lemma \ref{lem:UAP2.4.13} we define $f(h)(n)$ to be the open disc satisfying the properties of $D$ of the lemma. Note in particular that,
\begin{equation}
h(m)\cup h(n)\subseteq f(h)(n)\mbox{ with }n<m.
\label{equ:UAPsset1}
\end{equation}
{\em{Case 2.2:}} $n=0$. In this case, by Definition \ref{def:UAPSC}, we note that $h(m)$ is an open disc and $h(0)$ is the complement of a closed disc. Associate $h(m)$ with $D$ from Lemma \ref{lem:UAP2.4.14} and put $\Delta :=\mathbb{C}\backslash h(0)$. Since $(0,m)\in I_{h}$ we have $\bar{h}(0)\cap\bar{h}(m)\not=\emptyset$ and so $\bar{h}(m)\not\subseteq\mbox{\textup{int}}\Delta$, noting $\mbox{\textup{int}}\Delta=\mathbb{C}\backslash\bar{h}(0)$. Further, since $h\in H$ we have $r(h(m))<r(\Delta)$ and so $\Delta\not\subseteq\bar{h}(m)$. Therefore the conditions of Lemma \ref{lem:UAP2.4.14} are satisfied for $h(m)$ and $\Delta$. Hence we define $f(h)(0)$ to be the complement of the closed disc satisfying the properties of $\Delta^{'}$ of Lemma \ref{lem:UAP2.4.14}. Note in particular that,
\begin{equation}
h(m)\cup h(0)\subseteq f(h)(0)\mbox{ with }0<m.
\label{equ:UAPsset2}
\end{equation}
\end{definition}
For this definition of the map $f$ we have yet to show that $f$ maps into $H$. We now show this together with certain other useful properties of $f$.
\begin{lemma}Let $h\in H$, then the following hold:
\begin{enumerate}
\item[(i)]
$f(h)\in H$ with $\delta_{f(h)}\geq\delta_{h}$;
\item[(ii)]
For $(h:S\rightarrow\mathcal{O})$ $\mapsto$ $(f(h):S^{'}\rightarrow\mathcal{O})$ we have $S^{'}\subseteq S$ with equality if and only if $h$ is classical. Otherwise $S^{'}=S\backslash\{m\}$ for some $m\in S\backslash\{0\}$;
\item[(iii)]
$X_{f(h)}\subseteq X_{h}$;
\item[(iv)]
For all $i\in S^{'}, h(i)\subseteq f(h)(i)$.
\end{enumerate}
\label{lem:UAPf}
\end{lemma}
\begin{proof}
We need only check (i) and (iii) for cases 2.1 and 2.2 of the definition of $f$, as everything else is immediate. Let $h\in H$.\\
(i) It is clear that $f(h)$ is a disc assignment function. It remains to check that $\delta_{f(h)}\geq\delta_{h}$.\\
For Case 2.1 we have, by Lemma \ref{lem:UAP2.4.13},
\begin{align*}
\delta_{h}&=r(\mathbb{C}\backslash h(0))-(r(h(m))+r(h(n)))-\sum_{i\in S\backslash\{0,m,n\}}r(h(i))\\
&\leq r(\mathbb{C}\backslash h(0))-r(f(h)(n))-\sum_{i\in S\backslash\{0,m,n\}}r(h(i))=\delta_{f(h)}.
\end{align*}
For Case 2.2 we have, by Lemma \ref{lem:UAP2.4.14},
\begin{align*}
\delta_{h}&=r(\mathbb{C}\backslash h(0))-r(h(m))-\sum_{i\in S\backslash\{0,m\}}r(h(i))\\
&\leq r(\mathbb{C}\backslash f(h)(0))-\sum_{i\in S\backslash\{0,m\}}r(h(i))=\delta_{f(h)}.
\end{align*}
(iii) Since $X_{h}=\mathbb{C}\backslash\bigcup_{i\in S}h(i)$ we require $\bigcup_{i\in S}h(i)\subseteq\bigcup_{i\in S^{'}}f(h)(i)$.\\
For Case 2.1 we have by Lemma \ref{lem:UAP2.4.13} that $h(m)\cup h(n)\subseteq f(h)(n)$, as shown at (\ref{equ:UAPsset1}), giving $\bigcup_{i\in S}h(i)\subseteq\bigcup_{i\in S^{'}}f(h)(i)$.\\ For Case 2.2 put $\Delta:=\mathbb{C}\backslash h(0)$ and $\Delta^{'}:=\mathbb{C}\backslash f(h)(0)$. We have by Lemma \ref{lem:UAP2.4.14} that $\Delta^{'}\subseteq\Delta$ and $h(m)\cap\Delta^{'}=\emptyset$. Hence $h(0)\cup h(m)\subseteq f(h)(0)$, as shown at (\ref{equ:UAPsset2}), and so $\bigcup_{i\in S}h(i)\subseteq\bigcup_{i\in S^{'}}f(h)(i)$ as required.
\end{proof}
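For a finite family of discs the map $f$ can simply be iterated, and must terminate, since each non-classical step removes one label from the domain. The following toy sketch (our own simplification: the disc $h(0)$ and Case 2.2 are omitted, so only the combining step of Case 2.1 is modelled) illustrates the process:

```python
def closures_meet(d1, d2):
    """Do the closures of two open discs (centre, radius) intersect?"""
    (a1, r1), (a2, r2) = d1, d2
    return abs(a1 - a2) <= r1 + r2

def classicalise(discs):
    """Iterate the combining step (Case 2.1) on a finite labelled family
    of open discs until all closures are pairwise disjoint; each merge
    removes one label, so the loop terminates."""
    discs = dict(enumerate(discs))
    while True:
        labels = sorted(discs)
        clash = next(((i, j) for i in labels for j in labels
                      if i < j and closures_meet(discs[i], discs[j])), None)
        if clash is None:
            return list(discs.values())
        n, m = clash                          # lexicographically first pair
        (a1, r1), (a2, r2) = discs[n], discs[m]
        d = abs(a1 - a2)
        if d + r2 <= r1:
            c, r = a1, r1
        elif d + r1 <= r2:
            c, r = a2, r2
        else:
            r = (d + r1 + r2) / 2
            c = a1 + (r - r1) * (a2 - a1) / d
        discs[n] = (c, r)                     # h(m) and h(n) combined into f(h)(n)
        del discs[m]                          # the label m leaves the domain

merged = classicalise([(0.0, 1.0), (1.5, 1.0), (5.0, 0.5)])
```

On this input the first two discs are merged into a single disc of radius $1.75\leq 1+1$, after which all closures are disjoint, mirroring how the sum of radii behaves in (i) of Lemma \ref{lem:UAPf}.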
We will use $f:H\rightarrow H$ to construct an ordinal sequence of disc assignment functions and then apply a cardinality argument to show that this ordinal sequence must stabilise at a classical disc assignment function. We construct the ordinal sequence so that it has the right properties.
\begin{definition}Let $h\in H$.
\begin{enumerate}
\item[(a)]
Define $h^{0}:S_{0}\rightarrow\mathcal{O}$ by $S_{0}:=S$ and $h^{0}:=h$.
\end{enumerate}
Now let $\alpha>0$ be an ordinal for which we have defined $h^{\beta}\in H$ for all $\beta<\alpha$.
\begin{enumerate}
\item[(b)]
If $\alpha$ is a successor ordinal then define $h^{\alpha}:S_{\alpha}\rightarrow\mathcal{O}$ by $h^{\alpha}:=f(h^{\alpha-1})$.
\item[(c)]
If $\alpha$ is a limit ordinal then define $h^{\alpha}:S_{\alpha}\rightarrow\mathcal{O}$ as follows.
\begin{equation*}
\mbox{Set } S_{\alpha}:=\bigcap_{\beta<\alpha}S_{\beta}. \mbox{ Then for } n\in S_{\alpha} \mbox{ define } h^{\alpha}(n):=\bigcup_{\beta<\alpha}h^{\beta}(n).
\end{equation*}
\end{enumerate}
\label{def:UAPh ord}
\end{definition}
Suppose that for every ordinal $\alpha$ for which Definition \ref{def:UAPh ord} can be applied we have $h^{\alpha}\in H$. Then Definition \ref{def:UAPh ord} can be applied for every ordinal $\alpha$ by transfinite induction and therefore defines an ordinal sequence of disc assignment functions. We will use transfinite induction to prove Lemma \ref{lem:UAPh ord} below, which asserts that $h^{\alpha}$ is an element of $H$ as well as other useful properties of $h^{\alpha}$.
\begin{lemma}
Let $\alpha$ be an ordinal number and let $h\in H$. Then the following hold:
\begin{enumerate}
\item[($\alpha$,1)]
$h^{\alpha}\in H$ with $\delta_{h^{\alpha}}\geq\delta_{h}$;
\begin{enumerate}
\item[($\alpha$,1.1)]
$0\in S_{\alpha}$;
\item[($\alpha$,1.2)]
$h^{\alpha}(0)$ is the complement of a closed disc and\\
$h^{\alpha}(n)$ is an open disc for all $n\in S_{\alpha}\backslash\{0\}$;
\item[($\alpha$,1.3)]
$\sum_{n\in S_{\alpha}\backslash \{0\}}r(h^{\alpha}(n))\leq r(\mathbb{C}\backslash h^{\alpha}(0))-\delta_{h}$;
\end{enumerate}
\item[($\alpha$,2)]
For all $\beta\leq\alpha$ we have $S_{\alpha}\subseteq S_{\beta}$;
\item[($\alpha$,3)]
For all $\beta\leq\alpha$ we have $X_{h^{\alpha}}\subseteq X_{h^{\beta}}$;
\item[($\alpha$,4)]
For all $n\in S_{\alpha}$, $\{h^{\beta}(n):\beta\leq\alpha\}$ is a nested increasing family of open sets.
\end{enumerate}
\label{lem:UAPh ord}
\end{lemma}
\begin{proof} We will use transfinite induction.\\
For $\alpha$ an ordinal number let $P(\alpha)$ be the proposition, Lemma \ref{lem:UAPh ord} holds at $\alpha$.\\
The base case $P(0)$ is immediate and our inductive hypothesis is that for all $\beta<\alpha$, $P(\beta)$ holds.\\
Now for $\alpha$ a successor ordinal we have $h^{\alpha}=f(h^{\alpha-1})$ and so $P(\alpha)$ is immediate by the inductive hypothesis and Lemma \ref{lem:UAPf}. Now suppose $\alpha$ is a limit ordinal.
We have $S_{\alpha}:=\bigcap_{\beta<\alpha}S_{\beta}$ giving, for all $\beta\le\alpha$, $S_{\alpha}\subseteq S_{\beta}$. Hence ($\alpha$,2) holds. Also for all $\beta<\alpha$ we have $0\in S_{\beta}$ by ($\beta$,1.1). So $0\in S_{\alpha}$ showing that ($\alpha$,1.1) holds. To show ($\alpha$,1.2) we will use lemmas \ref{lem:UAP2.4.11} and \ref{lem:UAP2.4.12}.
\begin{enumerate}
\item[(i)]
Now for all $n\in S_{\alpha}\backslash\{0\}$, $\{h^{\beta}(n):\beta<\alpha\}$ is a nested increasing family of open discs by ($\beta$,1.2) and ($\beta$,4).
\item[(ii)]
Further, $\{\mathbb{C}\backslash h^{\beta}(0):\beta<\alpha\}$ is a nested decreasing family of closed discs by ($\beta$,1.2) and ($\beta$,4).
\item[(iii)]
Now for $n\in S_{\alpha}\backslash\{0\}$ and $\beta<\alpha$ we have\\ $r(h^{\beta}(n))\leq\sum_{m\in S_{\beta}\backslash\{0\}}r(h^{\beta}(m))=r(\mathbb{C}\backslash h^{\beta}(0))-\delta_{h^{\beta}}\leq r(\mathbb{C}\backslash h(0))-\delta_{h}$, by ($\beta$,1) and (ii). Hence $\sup\{r(h^{\beta}(n)):\beta<\alpha\}\leq r(\mathbb{C}\backslash h(0))-\delta_{h}$.
So by (i) and Lemma \ref{lem:UAP2.4.11} we have for $n\in S_{\alpha}\backslash\{0\}$ that
\begin{equation*}
h^{\alpha}(n):=\bigcup_{\beta<\alpha}h^{\beta}(n)
\end{equation*}
is an open disc with,
\begin{equation*}
r(h^{\alpha}(n))=\sup_{\beta<\alpha}r(h^{\beta}(n))\leq r(\mathbb{C}\backslash h(0))-\delta_{h}.
\end{equation*}
\item[(iv)]
Now for $\beta<\alpha$ we have $r(\mathbb{C}\backslash h^{\beta}(0))\geq\delta_{h}$ by ($\beta$,1.3).\\
Hence $\inf\{r(\mathbb{C}\backslash h^{\beta}(0)):\beta<\alpha\}\geq\delta_{h}$. So by De Morgan, (ii) and Lemma \ref{lem:UAP2.4.12} we have
\begin{equation*}
\mathbb{C}\backslash h^{\alpha}(0):=\mathbb{C}\backslash\bigcup_{\beta<\alpha}h^{\beta}(0)=\bigcap_{\beta<\alpha}\mathbb{C}\backslash h^{\beta}(0)
\end{equation*}
is a closed disc with,
\begin{equation*}
r(\mathbb{C}\backslash h^{\alpha}(0))=\inf_{\beta<\alpha}r(\mathbb{C}\backslash h^{\beta}(0))\geq\delta_{h}.
\end{equation*}
Hence $h^{\alpha}(0)$ is the complement of a closed disc and so ($\alpha$,1.2) holds.
\end{enumerate}
We now show that ($\alpha$,4) holds. By ($\beta$,4) we have, for all $n\in S_{\alpha}$, $\{h^{\beta}(n):\beta<\alpha\}$ is a nested increasing family of open sets. We also have $h^{\alpha}(n)=\bigcup_{\beta<\alpha}h^{\beta}(n)$ so, for all $\beta\leq\alpha$, $h^{\beta}(n)\subseteq h^{\alpha}(n)$ and $h^{\alpha}(n)$ is an open set since ($\alpha$,1.2) holds. Hence ($\alpha$,4) holds.
We will now show that ($\alpha$,1.3) holds. We first prove that, for all $\lambda<\alpha$, we have
\begin{equation}
\sum_{m\in S_{\alpha}\backslash\{0\}}r(h^{\alpha}(m))\leq r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h}.
\label{equ:UAPinequ1}
\end{equation}
Let $\lambda<\alpha$, and suppose, towards a contradiction, that
\begin{equation}
\sum_{m\in S_{\alpha}\backslash\{0\}}r(h^{\alpha}(m))>r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h},
\label{equ:UAPcont}
\end{equation}
noting that the right hand side of (\ref{equ:UAPcont}) is non-negative by ($\lambda$,1.3).\\
Set
\begin{equation*}
\varepsilon:=\frac{1}{2}\left(\sum_{m\in S_{\alpha}\backslash\{0\}}r(h^{\alpha}(m))-(r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h})\right)>0.
\end{equation*}
Then there exists $n\in S_{\alpha}\backslash\{0\}$ such that for $S_{\alpha}|_{1}^{n}:=\{m\in S_{\alpha}\backslash\{0\}:m\leq n\}$ we have
\begin{equation}
\sum_{m\in S_{\alpha}|_{1}^{n}}r(h^{\alpha}(m))>r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h}+\varepsilon>0.
\label{equ:UAPinequ2}
\end{equation}
Further for each $m\in S_{\alpha}|_{1}^{n}$ we have, by (iii), $r(h^{\alpha}(m))=\sup_{\beta<\alpha}r(h^{\beta}(m))$. Hence for each $m\in S_{\alpha}|_{1}^{n}$ there exists $\beta_{m}<\alpha$ such that $r(h^{\beta_{m}}(m))\geq r(h^{\alpha}(m))-\frac{1}{2k}\varepsilon$, for $k:=|S_{\alpha}|_{1}^{n}|$, $k\not=0$ by (\ref{equ:UAPinequ2}). Let $\lambda^{'}:=\max\{\beta_{m}:m\in S_{\alpha}|_{1}^{n}\}<\alpha$ and note that this is a maximum over a finite set of elements since $S_{\alpha}|_{1}^{n}\subseteq\mathbb{N}$ is finite. Now for any $\gamma$ with $\max\{\lambda,\lambda^{'}\}\leq\gamma<\alpha$ we have,
\begin{align*}
\sum_{m\in S_{\gamma}\backslash\{0\}}r(h^{\gamma}(m))&\geq\sum_{m\in S_{\alpha}\backslash\{0\}}r(h^{\gamma}(m))& &(\mbox{since }S_{\alpha}\subseteq S_{\gamma})&\\
&\geq\sum_{m\in S_{\alpha}|_{1}^{n}}r(h^{\gamma}(m))& &&\\
&\geq\sum_{m\in S_{\alpha}|_{1}^{n}}r(h^{\beta_{m}}(m))& &(\mbox{by ($\gamma$,4)})&\\
&\geq\sum_{m\in S_{\alpha}|_{1}^{n}}(r(h^{\alpha}(m))-\frac{\varepsilon}{2k})& &(\mbox{by the above})&\\
&>r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h}+\varepsilon-k\frac{\varepsilon}{2k}& &(\mbox{by (\ref{equ:UAPinequ2}) and }k:=|S_{\alpha}|_{1}^{n}|)&\\
&>r(\mathbb{C}\backslash h^{\lambda}(0))-\delta_{h}& &&\\
&\geq r(\mathbb{C}\backslash h^{\gamma}(0))-\delta_{h}& &(\mbox{by (ii)}).&
\end{align*}
This contradicts ($\gamma$,1.3). Hence we have shown that, for all $\lambda<\alpha$, (\ref{equ:UAPinequ1}) holds.\\ Now by (iv) we have $r(\mathbb{C}\backslash h^{\alpha}(0))=\inf_{\lambda<\alpha}r(\mathbb{C}\backslash h^{\lambda}(0))$.\\ Hence we have $\sum_{m\in S_{\alpha}\backslash\{0\}}r(h^{\alpha}(m))\leq r(\mathbb{C}\backslash h^{\alpha}(0))-\delta_{h}$ and so ($\alpha$,1.3) holds.
We now show that ($\alpha$,3) holds. We will show that for all ordinals $\beta<\alpha$,\\ $\bigcup_{i\in S_{\beta}}h^{\beta}(i)\subseteq\bigcup_{i\in S_{\alpha}}h^{\alpha}(i)$. Let $\beta<\alpha$ and $z\in\bigcup_{i\in S_{\beta}}h^{\beta}(i)$. Define,
\begin{equation*}
m:=\min\{i\in\mathbb{N}_{0}:\mbox{ there exists }\lambda<\alpha\mbox{ with } i\in S_{\lambda}\mbox{ and }z\in h^{\lambda}(i)\}.
\end{equation*}
By the definition of $m$ there exists $\zeta<\alpha$ with $m\in S_{\zeta}$ and $z\in h^{\zeta}(m)$. We claim that the set $\{\lambda<\alpha:m\not\in S_{\lambda}\}$ is empty. To prove this suppose towards a contradiction that we can define,
\begin{equation*}
\lambda^{'}:=\min\{\lambda<\alpha:m\not\in S_{\lambda}\}.
\end{equation*}
Then $\lambda^{'}>0$ since, by ($\zeta$,2), $S_{\zeta}\subseteq S_{0}$ with $m\in S_{\zeta}$. If $\lambda^{'}$ is a limit ordinal then $m\not\in S_{\lambda^{'}}=\bigcap_{\gamma<\lambda^{'}}S_{\gamma}$ giving $m\not\in S_{\gamma}$, for some $\gamma<\lambda^{'}$, and this contradicts the definition of $\lambda^{'}$. If $\lambda^{'}$ is a successor ordinal then $h^{\lambda^{'}}=f(h^{\lambda^{'}-1})$ with $m\in S_{\lambda^{'}-1}$ by the definition of $\lambda^{'}$. By $m\not\in S_{\lambda^{'}}$ and Definition \ref{def:UAPfHtoH} of $f:H\rightarrow H$, $h^{\lambda^{'}-1}$ is not classical. Therefore by (\ref{equ:UAPsset1}) and (\ref{equ:UAPsset2}) of Definition \ref{def:UAPfHtoH} there is $n\in S_{\lambda^{'}}$ with $n<m$ and $h^{\lambda^{'}-1}(m)\subseteq h^{\lambda^{'}}(n)$. Further for all $\lambda$ with $\lambda^{'}\leq\lambda<\alpha$ we have $m\not\in S_{\lambda}$ since $m\not\in S_{\lambda^{'}}$ and, by ($\lambda$,2), $S_{\lambda}\subseteq S_{\lambda^{'}}$. Hence we have $\zeta<\lambda^{'}$. Now, by ($\lambda^{'}-1$, 4), $\{h^{\gamma}(m):\gamma\leq\lambda^{'}-1\}$ is a nested increasing family of sets giving $z\in h^{\zeta}(m)\subseteq h^{\lambda^{'}-1}(m)\subseteq h^{\lambda^{'}}(n)$ with $n\in S_{\lambda^{'}}$. This contradicts the definition of $m$ since $n<m$. Hence we have shown that $\{\lambda<\alpha:m\not\in S_{\lambda}\}$ is empty giving $m\in S_{\alpha}=\bigcap_{\lambda<\alpha}S_{\lambda}$. Therefore, by Definition \ref{def:UAPh ord} and the definition of $\zeta$, we have $z\in h^{\zeta}(m)\subseteq\bigcup_{\lambda<\alpha}h^{\lambda}(m)=h^{\alpha}(m)\subseteq\bigcup_{i\in S_{\alpha}}h^{\alpha}(i)$ as required. Hence ($\alpha$,3) holds. Therefore we have shown, by the principle of transfinite induction, that $P(\alpha)$ holds and this concludes the proof of Lemma \ref{lem:UAPh ord}.
\end{proof}
Recall that our aim is to prove that for every $h\in H$ there is a classical disc assignment function $h^{'}\in H$ with $X_{h^{'}}\subseteq X_{h}$ and $\delta_{h^{'}}\geq\delta_{h}$. We have the following closing argument using cardinality. By ($\alpha$,2) of Lemma \ref{lem:UAPh ord} we obtain a nested ordinal sequence of domains $(S_{\alpha})$,\\
$\mathbb{N}_{0}\supseteq S\supseteq S_{1}\supseteq S_{2}\supseteq\cdots\supseteq S_{\omega}\supseteq S_{\omega+1}\supseteq\cdots\supseteq\{0\}$.\\
Now setting $S_{\alpha}^{c}:=\mathbb{N}_{0}\backslash S_{\alpha}$ gives a nested ordinal sequence $(S_{\alpha}^{c})$,\\
$\emptyset\subseteq S^{c}\subseteq S_{1}^{c}\subseteq S_{2}^{c}\subseteq\cdots\subseteq S_{\omega}^{c}\subseteq S_{\omega+1}^{c}\subseteq\cdots\subseteq\mathbb{N}$.
\begin{lemma}
For the disc assignment function $h^{\beta}$ we have,\\
$h^{\beta}$ is classical if and only if $(S_{\alpha})$ has stabilised at $\beta$, i.e. $S_{\beta+1}=S_{\beta}$.
\label{lem:UAPstab}
\end{lemma}
\begin{proof}
The proof follows directly from (ii) of Lemma \ref{lem:UAPf}.
\end{proof}
Now let $\omega_{1}$ be the first uncountable ordinal. Suppose towards a contradiction that, for all $\beta<\omega_{1}$, $(S_{\alpha})$ has not stabilised at $\beta$. Then for each $\beta<\omega_{1}$ there exists some $n_{\beta+1}\in\mathbb{N}$ such that $n_{\beta+1}\in S_{\beta+1}^{c}$ but $n_{\beta+1}\not\in S_{\alpha}^{c}$ for all $\alpha\leq\beta$. Hence since there are uncountably many $\beta<\omega_{1}$ we have $S_{\omega_{1}}^{c}$ uncountable with $S_{\omega_{1}}^{c}\subseteq\mathbb{N}$, a contradiction. Therefore there exists $\beta<\omega_{1}$ such that $(S_{\alpha})$ has stabilised at $\beta$ and so, by Lemma \ref{lem:UAPstab}, $h^{\beta}$ is classical. Now by ($\beta$,1) of Lemma \ref{lem:UAPh ord} we have $h^{\beta}\in H$ with $\delta_{h^{\beta}}\geq\delta_{h}$ and by ($\beta$,3) we have $X_{h^{\beta}}\subseteq X_{h}$. In particular this completes the proof of Theorem \ref{thr:UAPHT} and the Feinstein-Heath Swiss cheese ``Classicalisation'' theorem.
\end{proof}
The proof of Theorem \ref{thr:UAPFHT} as presented above proceeded without reference to a theory of allocation maps. In the original proof of Feinstein and Heath, \cite{Feinstein-Heath}, allocation maps play a central role. We will recover a key allocation map from the original proof using the map $f:H\rightarrow H$ of Definition \ref{def:UAPfHtoH}. Here is the definition of an allocation map as it appears in \cite{Feinstein-Heath}.
\begin{definition}
Let ${\bf{D}}=(\Delta,\mathcal{D})$ be a Swiss cheese. We define
\begin{equation*}
\widetilde{{\bf{D}}}=\mathcal{D}\cup\{\mathbb{C}\backslash\Delta\}.
\end{equation*}
Now let ${\bf{E}}=(\mathsf{E},\mathcal{E})$ be a second Swiss cheese, and let $f:\widetilde{{\bf{D}}}\rightarrow\widetilde{{\bf{E}}}$. We define $\mathcal{G}(f)=f^{-1}(\mathbb{C}\backslash\mathsf{E})\cap\mathcal{D}$. We say that $f$ is an {\bf{allocation map}} if the following hold:
\begin{enumerate}
\item[(A1)] for each $U\in\widetilde{{\bf{D}}}$, $U\subseteq f(U)$;
\item[(A2)]
\begin{equation*}
\sum_{D\in\mathcal{G}(f)}r(D)\geq r(\Delta)-r(\mathsf{E});
\end{equation*}
\item[(A3)] for each $E\in\mathcal{E}$,
\begin{equation*}
\sum_{D\in f^{-1}(E)}r(D)\geq r(E).
\end{equation*}
\end{enumerate}
\label{def:UAPAllo}
\end{definition}
Let ${\bf{D}}$ be the Swiss cheese of Theorem \ref{thr:UAPFHT} and let $\mathcal{S}({\bf{D}})$ be the family of allocation maps defined on $\widetilde{{\bf{D}}}$. In \cite{Feinstein-Heath} a partial order is applied to $\mathcal{S}({\bf{D}})$ and subsequently a maximal element $f_{\mbox{\scriptsize{max}}}$ is obtained using Zorn's lemma. The connection between allocation maps and Swiss cheeses is then exploited. Towards a contradiction the non-existence of the desired classical Swiss cheese ${\bf{D^{'}}}$ of Theorem \ref{thr:UAPFHT} is assumed. This assumption implies the existence of an allocation map $f'\in\mathcal{S}({\bf{D}})$ that is higher in the partial order applied to $\mathcal{S}({\bf{D}})$ than $f_{\mbox{\scriptsize{max}}}$, a contradiction. The result follows. It is at the last stage of the original proof where a connection to the new version can be found. In the construction of Feinstein and Heath the allocation map $f'$ factorizes as $f'=g\circ f_{\mbox{\scriptsize{max}}}$ where $g$ is also an allocation map. Let ${\bf{E}}=(\mathsf{E},\mathcal{E})$ be a non-classical Swiss cheese with $\delta({\bf{E}})>0$. Using the same method of construction that Feinstein and Heath use for $g$, an allocation map $g_{\mbox{\tiny{E}}}$ defined on $\widetilde{{\bf{E}}}$ can be obtained without contradiction. Clearly $\widetilde{{\bf{E}}}\not=f_{\mbox{\scriptsize{max}}}(\widetilde{{\bf{D}}})$. We will obtain $g_{\mbox{\tiny{E}}}$ using the map $f:H\rightarrow H$ of Definition \ref{def:UAPfHtoH}. Let $h\in H$, $h:S\rightarrow\mathcal{O}$, be an injective disc assignment function such that ${\bf{D}}_{h}={\bf{E}}$ and recall from Definition \ref{def:UAPfHtoH} that $f(h):S^{'}\rightarrow\mathcal{O}$ has $S^{'}=S\backslash\{m\}$ where $(n,m)$ is the minimum element of $I_{h}$. Set ${\bf{E'}}:={\bf{D}}_{f(h)}$. By Definitions \ref{def:UAPSC} and \ref{def:UAPAllo} we have
\begin{equation*}
\widetilde{{\bf{E}}}=\widetilde{{\bf{D}}_{h}}=h(S)\mbox{ and }\widetilde{{\bf{E'}}}=\widetilde{{\bf{D}}_{f(h)}}=f(h)(S^{'}).
\end{equation*}
Now define a map $\iota:\widetilde{{\bf{E}}}\rightarrow S^{'}$ by,
\begin{equation*}
\mbox{for } U\in\widetilde{{\bf{E}}}\mbox{, }\quad\iota(U):=\begin{cases} h^{-1}(U) &\mbox{ if }\quad h^{-1}(U)\not=m\\ n &\mbox{ if }\quad h^{-1}(U)=m \end{cases},
\end{equation*}
and note that this is well defined since $h$ is injective.
\begin{figure}[h]
\begin{equation*}
\xymatrix{
\widetilde{{\bf{E}}}\ar[r]^{\mbox{{\small{$g$}}}_{\mbox{\tiny{E}}}}\ar[d]_{\iota}&\widetilde{{\bf{E'}}}\\
S^{'}\ar[ru]_{f(h)}&
}
\end{equation*}
\caption{$g_{\mbox{\tiny{E}}}=f(h)\circ\iota$.}
\label{fig:UAPgEandf}
\end{figure}
The commutative diagram in Figure \ref{fig:UAPgEandf} shows how $g_{\mbox{\tiny{E}}}$ is obtained using $f:H\rightarrow H$. The construction of $f$ in Definition \ref{def:UAPfHtoH} was developed from the construction that Feinstein and Heath used for $g$. The method of combining discs in Lemma \ref{lem:UAP2.4.13} also appears in \cite{Zhang}.
\begin{remark}
\label{rem:UASCS}
Concerning classicalisation.
\begin{enumerate}
\item[(i)]
Interestingly, as Heath points out in \cite{Feinstein-Heath}, every classical Swiss cheese set in $\mathbb{C}$ with empty interior is homeomorphic to the Sierpi\'{n}ski carpet. Hence up to homeomorphism there is only one Swiss cheese set of this type. In particular if $X_{\bf{D}}$ is a Swiss cheese set contained in $\mathbb{C}$ with empty interior then either of the conditions, ${\bf{D}}$ is classical or $\delta({\bf{D}})>0$, is enough for $R(X_{\bf{D}})\not=A(X_{\bf{D}})$.
\item[(ii)]
I also anticipate the possibility of an analog of Theorem \ref{thr:UAPHT} on the sphere. Let $S\subseteq\mathbb{R}^{3}$ be a sphere of finite positive radius $r_{s}$ and center $c\in\mathbb{R}^{3}$. For $a,b\in S$ let $d_{s}(a,b):=r_{s}\angle acb$ be the length of the geodesic path in $S$ from $a$ to $b$. Now $d_{s}$ is a metric with respect to which we will define open and closed $S$-discs contained in $S$. With analogy to Definition \ref{def:UASC} let ${\bf{D}}_{s}:=(\Delta,\mathcal{D})$ be a Swiss cheese on $S$. Then either $\Delta=S$ or ${\bf{D}}_{s}^{'}:=(S,\mathcal{D}\cup\{S\backslash\Delta\})$ is a Swiss cheese on $S$, since $S\backslash\Delta$ is an open $S$-disc, for which $X_{{\bf{D}}_{s}^{'}}=X_{{\bf{D}}_{s}}$ in $S$. Further we have
\begin{equation*}
\delta({\bf{D}}_{s}):=r(\Delta)-\sum_{D\in\mathcal{D}}r(D)=\pi r_{s}-r(S\backslash\Delta)-\sum_{D\in\mathcal{D}}r(D)=\delta({\bf{D}}_{s}^{'})
\end{equation*}
and so for our choice of metric on $S$ we note that $\delta$ is independent of whether we use ${\bf{D}}_{s}$ or ${\bf{D}}_{s}^{'}$. Hence the situation for the sphere is a little simpler than that for the plane since we can allow all Swiss cheeses on the sphere to have the form ${\bf{D}}_{s}:=(S,\mathcal{D})$ and avoid the need to handle a special closed $S$-disc $\Delta$. Therefore on the sphere analogs of lemmas \ref{lem:UAP2.4.14} and \ref{lem:UAP2.4.12} are not required. We will not establish here whether the condition $\delta({\bf{D}}_{s})>0$ is sufficient for the analog of Theorem \ref{thr:UAPHT} on $S$ to hold since the next step in generalising this theorem should be to establish the class of all metric spaces for which a general version of the theorem holds. However the sphere is of particular interest in the context of uniform algebras since, less one point, the sphere is homeomorphic to the plane allowing many examples of uniform algebras to be defined on subsets of the sphere.
\end{enumerate}
\end{remark}
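The geodesic metric $d_{s}(a,b):=r_{s}\angle acb$ used in the remark is straightforward to compute from coordinates. The following Python sketch is only an illustration, assuming the input points already lie on the sphere (the clamp guards against floating-point rounding):

```python
import math

def d_s(a, b, c, r_s):
    # geodesic distance r_s * angle(acb) between points a and b lying on
    # the sphere with centre c and radius r_s (inputs assumed on the sphere)
    u = [ai - ci for ai, ci in zip(a, c)]
    v = [bi - ci for bi, ci in zip(b, c)]
    cos_angle = sum(ui * vi for ui, vi in zip(u, v)) / (r_s * r_s)
    return r_s * math.acos(max(-1.0, min(1.0, cos_angle)))

# antipodal points on the unit sphere are at geodesic distance pi * r_s
print(d_s((0, 0, 1), (0, 0, -1), (0, 0, 0), 1.0))  # 3.141592653589793
```

The maximum value $\pi r_{s}$, attained at antipodal points, matches the $\pi r_{s}$ term in the displayed expression for $\delta({\bf{D}}_{s}^{'})$ above.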
\section{Non-complex analogs of uniform algebras}
\label{sec:UANC}
The most obvious non-complex analog of Definition \ref{def:UACUA} is obtained by simply replacing the complex numbers in the definition by some other complete valued field $F$. In this case, whilst $C_{F}(X)$ will be complete and contain the constants, we need to take care concerning the topology on $X$ when $F$ is non-Archimedean, e.g. $C_{\mathbb{Q}_{p}}([0,1])$ only contains the constants.
\begin{theorem}
Let $F$ be a complete non-Archimedean valued field and let $C_{F}(X)$ be the unital Banach $F$-algebra of all continuous $F$-valued functions defined on a compact, Hausdorff space $X$. Then $C_{F}(X)$ separates the points of $X$ if and only if $X$ is totally disconnected.
\label{thr:UAXtop}
\end{theorem}
Before giving a proof of Theorem \ref{thr:UAXtop} we have the following version of Urysohn's lemma which will certainly already be known in some form because of its simplicity.
\begin{lemma}
Let $X$ be a totally disconnected, compact, Hausdorff space with finite subset $\{x,y_{1},y_{2},y_{3}, \cdots, y_{n}\}\subseteq X$, $x\not= y_{i}$ for all $i\in\{1, \cdots, n\}$ where $n\in\mathbb{N}$. Let $L$ be any non-empty topological space and $a,b\in L$. Then there exists a continuous map $h:X\longrightarrow L$ such that $h(x)=a$ and $h(y_{1})=h(y_{2})=h(y_{3})= \cdots =h(y_{n})=b$.
\label{lem:UAUrys}
\end{lemma}
\begin{proof}
Since $X$ is a Hausdorff space, for each $i\in\{1, \cdots, n\}$ there are disjoint open subsets $U_{i}$ and $V_{i}$ of $X$ with $x\in U_{i}$ and $y_{i}\in V_{i}$. Hence $U:=\bigcap_{i\in\{1, \cdots, n\}}U_{i}$ is an open subset of $X$ with $x\in U$ and $U\cap V_{i}=\emptyset$ for all $i\in\{1, \cdots, n\}$. Now since $X$ is a totally disconnected, compact, Hausdorff space, $x$ has a neighborhood base of clopen sets, see \cite[Theorem 29.7]{Willard} noting that $X$ is locally compact by Theorem \ref{thr:CVFHL}. Hence there is a clopen subset $W$ of $X$ with $x\in W\subseteq U$. The function $h:X\longrightarrow L$ given by $h(W):=\{a\}$ and $h(X\backslash W):=\{b\}$ is continuous as required.
\end{proof}
We now give the proof of Theorem \ref{thr:UAXtop}.
\begin{proof}
With reference to Lemma \ref{lem:UAUrys} it remains to show that $C_{F}(X)$ separates the points of $X$ only if $X$ is totally disconnected. Let $X$ be a compact, Hausdorff space such that $C_{F}(X)$ separates the points of $X$. Let $U$ be a non-empty connected subset of $X$ and let $f\in C_{F}(X)$. We note that $f(U)$ is a connected subset of $F$ since $f$ is continuous. Now, since $F$ is non-Archimedean it is totally disconnected i.e. its connected subsets are singletons. Hence $f(U)$ is a singleton and so $f$ is constant on $U$. Therefore, since $C_{F}(X)$ separates the points of $X$, $U$ is a singleton and $X$ is totally disconnected.
\end{proof}
We next consider the constraints on $C_{F}(X)$ revealed by generalisations of the Stone-Weierstrass theorem. In the real case the Stone-Weierstrass theorem for $C_{\mathbb{R}}(X)$ says that for every compact Hausdorff space $X$, $C_{\mathbb{R}}(X)$ is without a proper subalgebra that is uniformly closed, contains the real numbers and separates the points of $X$. A proof can be found in \cite[p50]{Kulkarni-Limaye1992}. The non-Archimedean case is given by a theorem of Kaplansky, see \cite[p157]{Berkovich} or \cite{Kaplansky}.
\begin{theorem}
Let $F$ be a complete non-Archimedean valued field, let $X$ be a totally disconnected compact Hausdorff space, and let $A$ be an $F$-subalgebra of $C_{F}(X)$ which satisfies the following conditions:
\begin{enumerate}
\item[(i)]
the elements of $A$ separate the points of $X$;
\item[(ii)]
for each $x\in X$ there exists $f\in A$ with $f(x)\not=0$.
\end{enumerate}
Then $A$ is everywhere dense in $C_{F}(X)$.
\label{thr:UAKapl}
\end{theorem}
Note that, in Theorem \ref{thr:UAKapl}, $A$ being an $F$-subalgebra of $C_{F}(X)$ means that $A$ is a subalgebra of $C_{F}(X)$ and a vector space over $F$. If we take $A$ to be unital then condition (ii) in Theorem \ref{thr:UAKapl} is automatically satisfied and the theorem is analogous to the real version of the Stone-Weierstrass theorem. In subsection \ref{subsec:UARFA} we will see that real function algebras are a useful example when considering non-complex analogs of uniform algebras with qualifying subalgebras.
\subsection{Real function algebras}
\label{subsec:UARFA}
Real function algebras were introduced by Kulkarni and Limaye in a paper from 1981, see \cite{Kulkarni-Limaye1981}. For a thorough text on the theory see \cite{Kulkarni-Limaye1992}. The following definition is a little more general than what we need in this subsection.
\begin{definition}
\label{def:UATIAI}
Let $X$ be a topological space and let $\tau:X\rightarrow X$ be a homeomorphism.
\begin{enumerate}
\item[(i)]
We will call $\tau$ a {\em topological involution on} $X$ if $\tau\circ\tau =\mbox{id}$ on $X$.
\item[(ii)]
We will call $\tau$ a {\em topological element of finite order on} $X$ if $\tau$ has finite order but with $\mbox{ord}(\tau)>2$.
\end{enumerate}
Let $F$ and $L$ be complete valued fields such that $L$ is a finite extension of $F$ as a valued field and let $g\in\mbox{Gal}(^{L}/_{F})$. Let $A$ either be an $F$-algebra or an $L$-algebra for which $\sigma:A\rightarrow A$ is a map satisfying $\mbox{ord}(\sigma)=\mbox{ord}(g)$ and for all $a,b\in A$ and scalars $\alpha$:
\begin{enumerate}
\item[]
$\sigma(a+b)=\sigma(a)+\sigma(b)$;
\item[]
$\sigma(ab)=\sigma(b)\sigma(a)$;
\item[]
$\sigma(\alpha a)=g(\alpha)\sigma(a)$.
\end{enumerate}
\begin{enumerate}
\item[(iii)]
We will call $\sigma$ an {\em algebraic involution on} $A$ if $\sigma\circ\sigma =\mbox{id}$ on $A$.
\item[(iv)]
We will call $\sigma$ an {\em algebraic element of finite order on} $A$ if $\sigma$ has finite order with $\mbox{ord}(\sigma)>2$.
\end{enumerate}
\end{definition}
Note, in Definition \ref{def:UATIAI} the requirement that $\tau$ be a homeomorphism is satisfied if $\tau$ is continuous.
\begin{definition}
\label{def:UARefa}
Let $X$ be a compact Hausdorff space and $\tau$ a topological involution on $X$. A {\em{real function algebra on}} $(X,\tau)$ is a real subalgebra $A$ of
\begin{equation*}
C(X,\tau):=\{f\in C_{\mathbb{C}}(X):f(\tau(x))=\bar{f}(x)\mbox{ for all }x\in X\}
\end{equation*}
that is complete with respect to the sup norm, contains the real numbers and separates the points of $X$.
\end{definition}
\begin{remark}
Concerning real function algebras.
\begin{enumerate}
\item[(i)]
Later, Theorem \ref{thr:CGGUA} will confirm that $C(X,\tau)$ in Definition \ref{def:UARefa} is itself always a real function algebra on $(X,\tau)$ and in some sense it is to real function algebras as $C_{\mathbb{C}}(X)$ is to complex uniform algebras.
\item[(ii)]
Let $X$ be a compact Hausdorff space and $Y$ a closed non-empty subset of $X$. Then $C_{Y}:=\{f\in C_{\mathbb{C}}(X):f(Y)\subseteq\mathbb{R}\}$ is a commutative real Banach algebra. As pointed out in \cite[p2]{Kulkarni-Limaye1992}, every such $C_{Y}$ can be transformed into a real function algebra but the converse of this is false. Hence Definition \ref{def:UARefa} describes a more general object.
\item[(iii)]
With reference to Definition \ref{def:UARefa} we have $C(X,\tau)=\{f\in C_{\mathbb{C}}(X):\sigma(f)=f\}$ where $\sigma(f):=\bar{f}\circ\tau$. Moreover $\sigma$ is an algebraic involution on $C_{\mathbb{C}}(X)$ and every algebraic involution on $C_{\mathbb{C}}(X)$ arises from a topological involution on $X$ in this way, see \cite[p29]{Kulkarni-Limaye1992} for a proof.
\end{enumerate}
\label{rem:UARefa}
\end{remark}
The following example is a useful standard.
\begin{example}
\label{exa:UARdal}
Recall from Example \ref{exa:UADA} the disc algebra $A(\Delta)$ on the closed unit disc and let $\tau:\Delta\longrightarrow\Delta$ be the map $\tau(z):=\bar{z}$ given by complex conjugation, which we note is a Galois automorphism on $\mathbb{C}$. Now let
\begin{equation*}
B(\Delta):=A(\Delta)\cap C(\Delta,\tau).
\end{equation*}
We see that $B(\Delta)$ is complete since both $A(\Delta)$ and $C(\Delta,\tau)$ are, and similarly $B(\Delta)$ contains the real numbers. Further, by the definition of $C(\Delta,\tau)$ and the fact that $A(\Delta)=P(\Delta)$, we have that $B(\Delta)$ is the $\mathbb{R}$-algebra of all uniform limits of polynomials on $\Delta$ with real coefficients. Hence $B(\Delta)$ separates the points of $\Delta$ since it contains the function $f(z):=z$. However, whilst $\tau$ is in $C(\Delta,\tau)$, it is not an element of $A(\Delta)$. Therefore $B(\Delta)$ is a real function algebra on $(\Delta,\tau)$ and a proper subalgebra of $C(\Delta,\tau)$. It is referred to as the {\em{real disc algebra}}.
\end{example}
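Membership of $C(\Delta,\tau)$ is the pointwise condition $f(\bar{z})=\overline{f(z)}$, so it can be probed numerically. The following sketch is only an illustration; the sample points and test functions are arbitrary choices, not part of the example:

```python
def in_C_tau(f, points, tol=1e-12):
    # test the defining condition f(conj(z)) == conj(f(z)) of C(Delta, tau)
    # on finitely many sample points of the closed unit disc
    return all(abs(f(z.conjugate()) - f(z).conjugate()) <= tol for z in points)

pts = [0.5 + 0.3j, -0.2 + 0.7j, 0.9j, 0.4 + 0j]

print(in_C_tau(lambda z: z, pts))             # True: f(z) = z
print(in_C_tau(lambda z: z**2 - 3 * z, pts))  # True: real coefficients
print(in_C_tau(lambda z: 1j * z, pts))        # False: i is not fixed by conjugation
```

The real-coefficient polynomials pass and $f(z):=iz$ fails, in line with $B(\Delta)$ being the uniform limits of real-coefficient polynomials.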
Finally for each compact Hausdorff space $X$, $C_{\mathbb{C}}(X)$ can be put into the form of a real function algebra as the following example shows. In particular $\mathbb{C}$ can be expressed as a real function algebra on a two point set.
\begin{example}
\label{exa:UATPS}
Let $X$ be a compact Hausdorff space, let $Y:=\{i,-i\}\subseteq\mathbb{C}$ have the trivial topology and give $X\times Y$ the product topology. We note that the subspace given by $X_{i}:=\{(x,y)\in X\times Y:y=i\}$ is homeomorphic to $X$ and similarly so is $X_{-i}$. Define a topological involution $\tau:X\times Y\rightarrow X\times Y$ by $(x,y)\mapsto (x,\bar{y})$. Then $C_{\mathbb{C}}(X_{i})$ is isometrically isomorphic to $C(X\times Y,\tau)$ by way of the mapping $f\mapsto h_{f}$ where
\begin{equation*}
h_{f}(z):=\begin{cases} f(z) &\mbox{ if }\quad z\in X_{i}\\ \bar{f}(\tau(z)) &\mbox{ if }\quad z\in X_{-i} \end{cases},\quad\mbox{for } f\in C_{\mathbb{C}}(X_{i}),
\end{equation*}
so that for $z\in X_{i}$ we have
\begin{equation*}
h_{f}(\tau(z))=\bar{f}(\tau(\tau(z)))=\bar{f}(z)=\bar{h_{f}}(z)
\end{equation*}
and for $z\in X_{-i}$
\begin{equation*}
h_{f}(\tau(z))=f(\tau(z))=\bar{\bar{f}}(\tau(z))=\bar{h_{f}}(z)
\end{equation*}
showing that $h_{f}\in C(X\times Y,\tau)$. The inverse mapping from $C(X\times Y,\tau)$ to $C_{\mathbb{C}}(X_{i})$ is given by the restriction map $h\mapsto h|_{X_{i}}$. One might suspect that such a mapping exists for every $C(Z,\tau)$ by restricting its elements to a compact subspace of equivalence class representatives for the forward orbits of $\tau$. But this is not the case in general since there can be $z\in Z$ with $\mbox{ord}(\tau,z)=1$, forcing all of the functions to be real valued at $z$ and preventing the representation of the complex constants in $C(Z,\tau)$.
\end{example}
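The verification that $h_{f}\in C(X\times Y,\tau)$ can also be carried out mechanically on a toy model; here the two-point $X$ and the values of $f$ are hypothetical choices made only for this sketch:

```python
# toy model: X = {0, 1} and Y = {i, -i}, with tau conjugating the Y-coordinate
X = [0, 1]
Y = [1j, -1j]
points = [(x, y) for x in X for y in Y]

def tau(p):
    x, y = p
    return (x, y.conjugate())

def f(p):
    # an arbitrary complex-valued function on X_i = X x {i}
    x, _ = p
    return 2 + 3j if x == 0 else -1 + 0.5j

def h_f(p):
    # the extension of f to X x Y from the example
    x, y = p
    return f(p) if y == 1j else f(tau(p)).conjugate()

# h_f satisfies h_f(tau(p)) = conj(h_f(p)) at every point, so h_f is in C(X x Y, tau)
print(all(h_f(tau(p)) == h_f(p).conjugate() for p in points))  # True
# restricting h_f back to X_i recovers f
print(all(h_f((x, 1j)) == f((x, 1j)) for x in X))  # True
```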
\chapter[Commutative generalisation over complete valued fields]{Commutative generalisation over\\ complete valued fields}
\label{cha:CG}
If $J$ is a maximal ideal of a commutative unital complex Banach algebra $A$ then $J$ has codimension one since $A/J$ with the quotient norm is isometrically isomorphic to the complex numbers. This follows from the Gelfand-Mazur theorem noting that $A/J$ with the quotient norm is unital since $J$ is closed as a subset of $A$ and $J$ is different to $A$, see Lemma \ref{lem:RTCBR} and \cite[p16]{Stout}.\\
In contrast, for a complete non-Archimedean field $F$, if $I$ is a maximal ideal of a commutative unital Banach $F$-algebra then $I$ may have large finite or infinite codimension, note Corollary \ref{cor:CVFEE}. Hence, with Gelfand transform theory in mind, it makes sense to consider non-Archimedean analogs of uniform algebras in the form suggested by real function algebras where the functions take values in a complete extension of the underlying field of scalars. Moreover when there is a lattice of intermediate fields then these fields provide a way for a lattice of extensions of the algebra to occur. See \cite[Ch1]{Berkovich} and \cite[Ch15]{Escassut} for one form of the Gelfand transform in the non-Archimedean setting. This chapter introduces the main definitions of interest in the thesis. We will generalise the definitions made by Kulkarni and Limaye to all complete valued fields, show that the algebras obtained all qualify as generalisations of uniform algebras and that restricting attention to the Archimedean setting recovers the complex uniform algebras and real function algebras. Non-Archimedean examples and residue algebras are also introduced.
\section{Main definitions}
\label{sec:CGMD}
The following definition gives the requirements for those commutative algebras that are to be considered as generalisations of uniform algebras.
\newpage
\begin{definition}
Let $F$ and $L$ be complete valued fields such that $L$ is an extension of $F$ as a valued field. Let $X$ be a compact Hausdorff space and let $C_{L}(X)$ be the unital Banach $L$-algebra of all continuous $L$-valued functions on $X$ with pointwise operations and the sup norm. If a subset $A$ of $C_{L}(X)$ satisfies:
\begin{enumerate}
\item[(i)]
$A$ is an $F$-algebra under pointwise operations;
\item[(ii)]
$A$ is complete with respect to $\|\cdot\|_{\infty}$;
\item[(iii)]
$F\subseteq A$;
\item[(iv)]
$A$ separates the points of $X$,
\end{enumerate}
then we will call $A$ an $^{L}/_{F}$ {\em uniform algebra} or just a {\em uniform algebra} when convenient.
\label{def:CGLFUA}
\end{definition}
In the language of Definition \ref{def:CGLFUA}, an $^{L}/_{F}$ uniform algebra is a Banach $F$-algebra of $L$-valued functions; moreover every $^{L}/_{L}$ uniform algebra is also an $^{L}/_{F}$ uniform algebra. We now generalise, in two parts, Kulkarni and Limaye's definition of a real function algebra.
\begin{definition}
Let $F$ and $L$ be complete valued fields such that $L$ is a finite extension of $F$ as a valued field. Let $X$ be a compact Hausdorff space and totally disconnected if $F$ is non-Archimedean. Define $C(X,\tau,g)\subseteq C_{L}(X)$ as the subset of elements $f\in C_{L}(X)$ for which the diagram in Figure \ref{fig:CGBFA} commutes.
\begin{figure}[h]
\begin{equation*}
\xymatrix{
X\ar@{->}[rr]^{f}\ar@{->}[dd]_{\tau}&&L\ar@{->}[dd]^{g}&&\mbox{(i) $g\in\mbox{Gal}(^{L}/_{F})$;}\hspace{28mm}\\
&&&\mbox{Where:}&\mbox{(ii) $\tau:X\rightarrow X$ with $\mbox{ord}(\tau)|\mbox{ord}(g)$;}\\
X\ar@{->}[rr]_{f}&&L&&\mbox{(iii) $g$ and $\tau$ are continuous.}\hspace{13.5mm}
}
\end{equation*}
\caption{Commutative diagram for $f\in C(X,\tau,g)$.}
\label{fig:CGBFA}
\end{figure}
We will call $C(X,\tau,g):=\{f\in C_{L}(X): f(\tau(x))=g(f(x)) \mbox{ for all } x\in X \}$ the {\em basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$} or just a {\em basic function algebra} when convenient.
\label{def:CGBFA}
\end{definition}
\begin{definition}
Let $F$ and $L$ be complete valued fields such that $L$ is a finite extension of $F$ as a valued field. Let $(X,\tau,g)$ conform to the conditions of Definition \ref{def:CGBFA} and let $A$ be a subset of the basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$. If $A$ is also an $^{L}/_{L^{g}}$ uniform algebra then we will call $A$ an {\em $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$}.
\label{def:CGLGFA}
\end{definition}
\begin{remark}
In definitions \ref{def:CGBFA} and \ref{def:CGLGFA} the valued field $L^{g}$ is complete by Remark \ref{rem:CVFEN}. The continuity of $g$ in Definition \ref{def:CGBFA} is only an observation since $g$ is an isometry on $L$ by Remark \ref{rem:CVFUT}. In fact $g$ also acts as an isometric automorphism on $C(X,\tau,g)$.
\label{rem:CGBFA}
\end{remark}
\section{Generalisation theorems}
\label{sec:CGGT}
With Definition \ref{def:CGLGFA} in mind the following theorem, which is the main theorem of this chapter, clarifies why an algebra conforming to the conditions of Definition \ref{def:CGBFA} is to be called a basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$.
\begin{theorem}
Let $(X,\tau,g)$ conform to the conditions of Definition \ref{def:CGBFA}. Then the basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ is always an $^{L}/_{L^{g}}$ uniform algebra.
\label{thr:CGGUA}
\end{theorem}
\begin{remark}
We will see in the proof of Theorem \ref{thr:CGGUA} that ord$(\tau)|$ord$(g)$ is an optimum condition in Definition \ref{def:CGBFA} since if we do not include it in the definition then $C(X,\tau,g)$ separates the points of $X$ if and only if ord$(\tau)|$ord$(g)$ as per Figure \ref{fig:CGED}.
\label{rem:CGOC}
\end{remark}
\begin{figure}[h]
\begin{equation*}
\xymatrix{
&\mbox{ord}(\tau)|\mbox{ord}(g)\ar@2{->}[ld]_{1}&\\
\mbox{ord}(\tau,X)\subseteq\mbox{ord}(g,L)\ar@2{->}[rr]^{2}&&C(X,\tau,g)\mbox{ separates }X\ar@2{->}[lu]_{3}
}
\end{equation*}
\caption{Equivalence diagram for Definition \ref{def:CGBFA}.}
\label{fig:CGED}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thr:CGGUA}]
Let $(X,\tau,g)$ conform to the conditions of Definition \ref{def:CGBFA}. It is immediate that $C(X,\tau,g)$ is a ring under pointwise operations and $L^{g}\subseteq C(X,\tau,g)$. We now show that $C(X,\tau,g)$ is complete with respect to the sup norm. First note that
\begin{equation*}
C(X,\tau,g)=\{f\in C_{L}(X):\sigma(f)=f\}
\end{equation*}
where $\sigma(f):=g^{(\mbox{ord}(g)-1)}\circ f\circ\tau$ is an isometry on $C_{L}(X)$ since $\tau$ is surjective and $g$ is an isometry on $L$ by Remark \ref{rem:CVFUT}. Further $\sigma$ is either an algebraic involution or an algebraic element of finite order on $C_{L}(X)$. Hence, since $C_{L}(X)$ is commutative, $\sigma$ is in fact an isometric automorphism on $C_{L}(X)$. Now let $(f_{n})$ be a Cauchy sequence in $C(X,\tau,g)$ and let $f$ be its limit in $C_{L}(X)$. Then for each $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that for all $n>N$ we have $\|f-f_{n}\|_{\infty}=\|\sigma(f_{n})-\sigma(f)\|_{\infty}<\frac{\varepsilon}{2}$. But then $\|f-\sigma(f)\|_{\infty}=\|f-\sigma(f_{n})+\sigma(f_{n})-\sigma(f)\|_{\infty}\leq \|f-f_{n}\|_{\infty}+\|\sigma(f_{n})-\sigma(f)\|_{\infty}<\varepsilon$. Hence $\|f-\sigma(f)\|_{\infty}=0$ giving $\sigma(f)=f$ and so $f\in C(X,\tau,g)$. Hence $C(X,\tau,g)$ is complete. It remains to show that $C(X,\tau,g)$ separates the points of $X$ and to this end we now show each of the implications in Figure \ref{fig:CGED}.\\
{\em{1:}} Let $n\in\mbox{ord}(\tau,X)$. It is immediate that $n|\mbox{ord}(\tau)$ and since $\mbox{ord}(\tau)|\mbox{ord}(g)$ we have $n\in\mbox{ord}(g,L)$ by Lemma \ref{lem:CVFOSET}. We also note that the converse is immediate since for each $n\in\mbox{ord}(g,L)$ we have $n|\mbox{ord}(g)$ and so $\mbox{ord}(\tau)|\mbox{ord}(g)$.\\
{\em{2:}} Note that $\mbox{ord}(\sigma)=\mbox{ord}(g)$ and so, like a norm map, for every $h\in C_{L}(X)$ we have
\begin{align*}
&h\sigma(h)\sigma^{(2)}(h)\cdots\sigma^{(\mbox{ord}(g)-1)}(h)\in C(X,\tau,g)\quad\mbox{and}\\
&h+\sigma(h)+\sigma^{(2)}(h)+\cdots+\sigma^{(\mbox{ord}(g)-1)}(h)\in C(X,\tau,g).
\end{align*}
Now if $g=\mbox{id}$ is the identity then $C(X,\tau,g)=C_{L}(X)$ which separates the points of $X$ when $L$ is Archimedean by Urysohn's lemma, since $X$ is locally compact, and when $L$ is non-Archimedean by Theorem \ref{thr:UAXtop} since we required $X$ to be totally disconnected in this case. So now suppose $\mbox{ord}(g)>1$ and let $x,y\in X$ with $x\not=y$. We need to check two cases.\\
{\em{Case 1:}} In this case $y\not=\tau^{(n)}(x)$ for all $n\in\mathbb{N}$ with $n\leq\mbox{ord}(\tau,x)$. By Urysohn's lemma in the Archimedean setting or Lemma \ref{lem:UAUrys} otherwise, there is $h\in C_{L}(X)$ with $h(y)=0$ and $h(\tau^{(n)}(x))=1$ for all $n\in\mathbb{N}_{0}$. Let $f:=h\sigma(h)\sigma^{(2)}(h)\cdots\sigma^{(\mbox{ord}(g)-1)}(h)$ so that $f\in C(X,\tau,g)$ with $f(y)=0$ by construction and $f(x)=1$ given that $g(1)=1$. Then in this case $x$ and $y$ are separated by $f$.\\
{\em{Case 2:}} In this case $y=\tau^{(n)}(x)$ for some $n\in\mathbb{N}$ with $n<\mbox{ord}(\tau,x)$. Let $m:=\mbox{ord}(g)$ and $k:=\mbox{ord}(\tau,x)$ and note therefore that we have $1\leq n\leq k-1$, since $y\not=x$, and $m=km'$ for some $m'\in\mathbb{N}$. Further since $\mbox{ord}(\tau,X)\subseteq\mbox{ord}(g,L)$ there is $a\in L$ with $\mbox{ord}(g,a)=k$. By Urysohn's lemma in the Archimedean setting or Lemma \ref{lem:UAUrys} otherwise, there is $h\in C_{L}(X)$ with $h(x)=a$ and $h(\tau^{(i)}(x))=0$ for $1\leq i\leq k-1$. We will now check two sub-cases.\\
{\em{Case 2.1:}} The characteristic of the field $L$ is zero, i.e. $\mbox{char}(L)=0$.\\
Let $f:=h+\sigma(h)+\sigma^{(2)}(h)+\cdots+\sigma^{(\mbox{ord}(g)-1)}(h)$ so that we have $f\in C(X,\tau,g)$ with $f=h+g^{(m-1)}\circ h\circ\tau+g^{(m-2)}\circ h\circ\tau^{(2)}+\cdots+g\circ h\circ\tau^{(m-1)}$. This gives
\begin{align*}
f(x)=&h(x)+g^{(m-k)}\circ h(\tau^{(k)}(x))+g^{(m-2k)}\circ h(\tau^{(2k)}(x))+\cdots\\
&\cdots+g^{(m-(m'-1)k)}\circ h(\tau^{((m'-1)k)}(x))\\
=&h(x)+g^{((m'-1)k)}\circ h(\tau^{(k)}(x))+g^{((m'-2)k)}\circ h(\tau^{(2k)}(x))+\cdots\\
&\cdots+g^{(k)}\circ h(\tau^{((m'-1)k)}(x))\\
=&a+a+a+\cdots+a,\quad m'\mbox{ times,}\\
=&m'a\quad\mbox{and}
\end{align*}
\begin{align*}
f(y)=&f(\tau^{(n)}(x))\\
=&g^{(m-(k-n))}\circ h(\tau^{(k)}(x))+g^{(m-(2k-n))}\circ h(\tau^{(2k)}(x))+\cdots\\
&\cdots+g^{(m-((m'-1)k-n))}\circ h(\tau^{((m'-1)k)}(x))+g^{(m-(m'k-n))}\circ h(\tau^{(m'k)}(x))\\
=&g^{((m'-1)k+n)}\circ h(\tau^{(k)}(x))+g^{((m'-2)k+n)}\circ h(\tau^{(2k)}(x))+\cdots\\
&\cdots+g^{(k+n)}\circ h(\tau^{((m'-1)k)}(x))+g^{(n)}\circ h(\tau^{(m)}(x))\\
=&m'g^{(n)}(a)\quad\mbox{with }1\leq n\leq k-1.
\end{align*}
Hence since $\mbox{ord}(g,a)=k$ we have $f(x)\not=f(y)$.\\
{\em{Case 2.2:}} The characteristic of $L$ is $p$, i.e. $\mbox{char}(L)=p$, for some prime $p\in\mathbb{N}$. In this case the proof for Case 2.1 breaks down when $m'=p^{t}s$ with $s,t\in\mathbb{N}$, $p\nmid s$. So with respect to such circumstances define $f':=h\sigma^{(sk)}(h)\sigma^{(2sk)}(h)\cdots\sigma^{((p^{t}-1)sk)}(h)$ and
\begin{equation*}
f:=f'+\sigma(f')+\sigma^{(2)}(f')+\cdots+\sigma^{(sk-1)}(f').
\end{equation*}
We will now show that $\sigma(f)=f$ so that $f\in C(X,\tau,g)$ and note that this is satisfied if $\sigma^{(sk)}(f')=f'$ since $\sigma(f)=\sigma(f')+\sigma^{(2)}(f')+\sigma^{(3)}(f')+\cdots+\sigma^{(sk)}(f')$. Indeed we have that $\sigma^{(sk)}(f')=\sigma^{(sk)}(h)\sigma^{(2sk)}(h)\sigma^{(3sk)}(h)\cdots\sigma^{(p^{t}sk)}(h)$ with $\sigma^{(p^{t}sk)}(h)=\sigma^{(m)}(h)=h$ so that $\sigma^{(sk)}(f')=f'$ giving $f\in C(X,\tau,g)$. Now, for $0\leq i\leq sk-1$, we have
\begin{align*}
\sigma^{(i)}(f')(x)=&\sigma^{(i)}(h)(x)\sigma^{(sk+i)}(h)(x)\sigma^{(2sk+i)}(h)(x)\cdots\sigma^{((p^{t}-1)sk+i)}(h)(x)\\
=&\begin{cases} a^{p^{t}} &\mbox{ if }\quad k|i\\ 0 &\mbox{ if }\quad k\nmid i \end{cases}
\end{align*}
since $\sigma^{(kj)}(h)(x)=g^{(m-kj)}\circ h(\tau^{(kj)}(x))=g^{((p^{t}s-j)k)}\circ h(x)=a$ for $kj<m$, $j\in\mathbb{N}_{0}$, and $\sigma^{(j)}(h)(x)=g^{(m-j)}\circ h(\tau^{(j)}(x))=g^{(m-j)}(0)=0$ for $j<m$, $k\nmid j$. Hence
\begin{align*}
f(x)=&f'(x)+\sigma(f')(x)+\sigma^{(2)}(f')(x)+\cdots+\sigma^{(sk-1)}(f')(x)\\
=&f'(x)+\sigma^{(k)}(f')(x)+\sigma^{(2k)}(f')(x)+\cdots+\sigma^{((s-1)k)}(f')(x)\\
=&sa^{p^{t}}.
\end{align*}
But for $0\leq i\leq sk-1$ we also have
\begin{align*}
\sigma^{(i)}(f')(y)=&\sigma^{(i)}(h)(y)\sigma^{(sk+i)}(h)(y)\sigma^{(2sk+i)}(h)(y)\cdots\sigma^{((p^{t}-1)sk+i)}(h)(y)\\
=&\begin{cases} g^{(n)}(a)^{p^{t}} &\mbox{ if }\quad k|(i+n)\\ 0 &\mbox{ if }\quad k\nmid(i+n) \end{cases}
\end{align*}
since if $k|(i+n)$ then $i$ has the form $kj-n$ and the exponents of $\sigma$ therefore also have the form $kj-n<m$, $j\in\mathbb{N}$, giving
\begin{align*}
\sigma^{(kj-n)}(h)(y)=&g^{(m-(kj-n))}\circ h(\tau^{(kj-n)}(y))\\
=&g^{((p^{t}s-j)k+n)}\circ h(\tau^{(kj-n)}(\tau^{(n)}(x)))\\
=&g^{((p^{t}s-j)k+n)}\circ h(x)=g^{(n)}(a)
\end{align*}
and if $k\nmid(i+n)$ then for an exponent $j<m$ of $\sigma$ we also have $k\nmid(j+n)$ giving
\begin{align*}
\sigma^{(j)}(h)(y)=&g^{(m-j)}\circ h(\tau^{(j)}(y))\\
=&g^{(m-j)}\circ h(\tau^{(j+n)}(x))\\
=&g^{(m-j)}(0)=0.
\end{align*}
Hence
\begin{align*}
f(y)=&f'(y)+\sigma(f')(y)+\sigma^{(2)}(f')(y)+\cdots+\sigma^{(sk-1)}(f')(y)\\
=&\sigma^{(k-n)}(f')(y)+\sigma^{(2k-n)}(f')(y)+\cdots+\sigma^{(sk-n)}(f')(y)\\
=&sg^{(n)}(a)^{p^{t}}.
\end{align*}
Now since $p\nmid s$ we have $s\in L^{\times}$. Furthermore recall that since $\mbox{ord}(g,a)=k$ and $1\leq n\leq k-1$ we have $g^{(n)}(a)\not=a$. Therefore it remains to show that $g^{(n)}(a)^{p^{t}}\not=a^{p^{t}}$ in order to conclude that $f(y)\not=f(x)$. Recall that $p=\mbox{char}(L)\in\mathbb{N}$ is a prime. For $b\in L$ the Frobenius $\mbox{Frob}:L\rightarrow L$, $\mbox{Frob}(b):=b^{p}$, is an endomorphism on $L$. We now show that the Frobenius is injective on $L$. Let $b_{1},b_{2}\in L$ with $b_{1}^{p}=b_{2}^{p}$.\\
The case $p>2$ gives $(b_{1}-b_{2})^{p}=b_{1}^{p}-b_{2}^{p}=0$.\\
The case $p=2$ gives $(b_{1}-b_{2})^{2}=b_{1}^{2}+b_{2}^{2}=2b_{1}^{2}=0$.\\
In each case $L$ is an integral domain and so $b_{1}-b_{2}=0$ giving $b_{1}=b_{2}$ as required. Therefore $\mbox{Frob}^{(t)}:L\rightarrow L$, $\mbox{Frob}^{(t)}(b):=b^{p^{t}}$, is also injective giving $g^{(n)}(a)^{p^{t}}\not=a^{p^{t}}$ since $g^{(n)}(a)\not=a$ and this finishes the proof of implication 2 in Figure \ref{fig:CGED}.\\
{\em{3:}} We now show implication 3 in Figure \ref{fig:CGED} by showing the contrapositive. Suppose $\mbox{ord}(\tau)\nmid\mbox{ord}(g)$. Then there exists some $x\in X$ such that $\tau^{(\mbox{ord}(g))}(x)\not=x$. Let $y:=\tau^{(\mbox{ord}(g))}(x)$. Now for all $f\in C(X,\tau,g)$ we have for all $i\in\mathbb{N}$ that
\begin{equation*}
f\circ\tau^{(i)}=f\circ\tau\circ\tau^{(i-1)}=g\circ f\circ\tau^{(i-1)}=\cdots=g^{(i)}\circ f.
\end{equation*}
Therefore $f(y)=f\circ\tau^{(\mbox{ord}(g))}(x)=g^{(\mbox{ord}(g))}\circ f(x)=f(x)$. Hence for all $f\in C(X,\tau,g)$ we have $f(x)=f(y)$ as required.\\
Hence having shown each of the implications in Figure \ref{fig:CGED}, $\mbox{ord}(\tau)|\mbox{ord}(g)$ is a necessary and sufficient condition in Definition \ref{def:CGBFA} in order that $C(X,\tau,g)$ separates the points of $X$ and this completes the proof of Theorem \ref{thr:CGGUA}.
\end{proof}
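The smallest Archimedean instance of the Case 2 construction can be checked directly: take $g$ to be complex conjugation, $\tau$ the swap of a two-point space, and $h(x)=a=i$ with $\mbox{ord}(g,a)=2$. The following Python sketch only illustrates the proof's mechanism under these particular assumed choices:

```python
# X = {0, 1}, tau swaps the points, g is complex conjugation, ord(g) = ord(tau) = 2
tau = {0: 1, 1: 0}
h = {0: 1j, 1: 0}  # h(x) = a = i and h(tau(x)) = 0, as in Case 2

# sigma(h) := g^(ord(g)-1) o h o tau, here conj o h o tau
sigma_h = {x: h[tau[x]].conjugate() for x in (0, 1)}
f = {x: h[x] + sigma_h[x] for x in (0, 1)}  # f := h + sigma(h)

# f lies in C(X, tau, g), i.e. f(tau(x)) = g(f(x)) ...
print(all(f[tau[x]] == f[x].conjugate() for x in (0, 1)))  # True
# ... and f separates x = 0 from y = tau(x) = 1, since f(x) = a and f(y) = g(a)
print(f[0] != f[1])  # True
```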
\begin{remark}
It is worth noting that for $\mbox{char}(L)=p$ the Frobenius is also an endomorphism on $C(X,\tau,g)$. Moreover for $L$ of any characteristic we have seen in the proof of Theorem \ref{thr:CGGUA} that $\sigma$, given by $\sigma(f):=g^{(\mbox{ord}(g)-1)}\circ f\circ\tau$, is an isometric automorphism on $C_{L}(X)$ with fixed elements $C(X,\tau,g)$.
\label{rem:CGFROB}
\end{remark}
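The injectivity of the Frobenius used above can be sanity-checked computationally on a small field of characteristic $p$; the model $\mathbb{F}_{9}=\mathbb{F}_{3}[i]$ with $i^{2}=-1$ below is a toy example only, not one of the complete valued fields of this chapter:

```python
# F_9 = F_3[i] with i^2 = -1 (x^2 + 1 is irreducible over F_3 since -1 is not
# a square mod 3); elements are pairs (a, b) representing a + b*i, a, b in {0, 1, 2}
p = 3
F9 = [(a, b) for a in range(3) for b in range(3)]

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

# Frob(b) = b^p takes 9 distinct values on the 9 elements, so it is injective
frob = {x: power(x, p) for x in F9}
print(len(set(frob.values())) == len(F9))  # True
```

Here the Frobenius is in fact the nontrivial automorphism of $\mathbb{F}_{9}$ over $\mathbb{F}_{3}$, e.g. it sends $i$ to $-i$.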
With reference to the complex, real and non-Archimedean Stone-Weierstrass theorems from Chapter \ref{cha:UA}, the following combined Stone-Weierstrass theorem is immediate.
\begin{theorem}
Let $L$ be a complete valued field. Let $X$ conform to Definition \ref{def:CGBFA} and let $A$ be an $^{L}/_{L}$ function algebra on $(X,\mbox{id},\mbox{id})$. Then either $A=C_{L}(X)$ or $L=\mathbb{C}$ and $A$ is not self-adjoint, that is, there is $f\in A$ with $\bar{f}\notin A$.
\label{thr:CGGSW}
\end{theorem}
\section{Examples}
\label{sec:CGEXA}
Our first example considers $^{L}/_{L^{g}}$ function algebras in the Archimedean setting.
\begin{example}
Let $F=\mathbb{R}$, $L=\mathbb{C}$ and $X$ be a compact Hausdorff space. We have $\mbox{Gal}(^{\mathbb{C}}/_{\mathbb{R}})=\langle\mbox{id}, \bar{z}\rangle$.\\
Setting $g=\mbox{id}$ in Definition \ref{def:CGBFA} forces $\tau$ to be the identity on $X$. In this case it's immediate that $C(X,\tau,g)=C_{\mathbb{C}}(X)$ and each $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ is a complex uniform algebra.\\
On the other hand, setting $g=\bar{z}$ forces $\tau$ to be a topological involution on $X$. In this case the $^{L}/_{L^{g}}$ function algebras on $(X,\tau,g)$ are precisely the real function algebras of Kulkarni and Limaye.
\label{exa:CGAEFA}
\end{example}
Our first non-Archimedean example is very straightforward involving the trivial valuation.
\begin{example}
\label{exa:CGTVFA}
Let $F=\mathbb{Q}$, but with the trivial valuation instead of the absolute valuation, and let $L=\mathbb{Q}(a)$ with the trivial valuation where $a=\exp(\frac{1}{10}2\pi i)$. With reference to Theorem \ref{thr:CVFIR}, and having factorised $x^{10}-1$ in $F[x]$, we have $\mbox{Irr}_{F,a}(x)=x^{4}-x^{3}+x^{2}-x+1$ which gives $[L,F]=\mbox{degIrr}_{F,a}(x)=4$. The roots of $\mbox{Irr}_{F,a}(x)$ are the elements of $S:=\{a,a^{3},a^{7},a^{9}\}$ and so, with reference to Definition \ref{def:CVFSN}, $L$ is a normal extension of $F$. Moreover with reference to Remark \ref{rem:CVFSN} $L$ is a separable extension of $F$ and so $L$ is also a Galois extension of $F$ with $\#\mbox{Gal}(^{L}/_{F})=[L,F]=4$ by Theorem \ref{thr:CVFGL}. Now for $g\in\mbox{Gal}(^{L}/_{F})$ we must have $g:S\rightarrow S$ since, for $b\in S$, $0=g(0)=g(\mbox{Irr}_{F,a}(b))=\mbox{Irr}_{F,a}(g(b))$. Putting $g(a):=a^{3}$ makes $g$ a generator of $\mbox{Gal}(^{L}/_{F})$ and we have
\begin{equation*}
g(a)=a^{3},\quad g^{(2)}(a)=a^{9},\quad g^{(3)}(a)=a^{7}\mbox{ and }g^{(4)}(a)=a.
\end{equation*}
Hence $L$ is a cyclic extension of $F$ meaning that $\mbox{Gal}(^{L}/_{F})$ is a cyclic group. Moreover
\begin{align*}
(a+a^{9}-a^{3}-a^{7})^{2}=&4-a^{2}-a^{4}-a^{6}-a^{8}\\
=&4-(a^{4}-a^{3}+a^{2}-a)\\
=&4-(\mbox{Irr}_{F,a}(a)-1)=5
\end{align*}
giving $a+a^{9}-a^{3}-a^{7}=\sqrt{5}$ noting that the real part of each term is positive. Further
\begin{align*}
g(\sqrt{5})=&g(a+a^{9}-a^{3}-a^{7})\\
=&g(a+g^{(2)}(a)-g(a)-g^{(3)}(a))\\
=&g(a)+g^{(3)}(a)-g^{(2)}(a)-a=-\sqrt{5}
\end{align*}
and so we have the intermediate field $\mathbb{Q}(a)^{\langle g^{(2)}\rangle}=\mathbb{Q}(\sqrt{5})$. Now let $S_{1}\subseteq\mathbb{N}\times\{1\}$, $S_{2}\subseteq\mathbb{N}\times\{\sqrt{5},-\sqrt{5}\}$, $S_{3}\subseteq\mathbb{N}\times\{a,a^{3},a^{7},a^{9}\}$ and $X:=S_{1}\cup S_{2}\cup S_{3}$ all be non-empty finite sets such that for $(x,y)\in X$ we have $(x,g(y))\in X$. Put the trivial topology on $X$ so that $X$ is a totally disconnected compact Hausdorff space noting that a set with the trivial topology is compact if and only if it is finite. Define a topological element of finite order $\tau$ on $X$ by $\tau((x,y)):=(x,g(y))$ and note that for our choice of topology every self map on $X$ is continuous as is every map from $X$ to $L$. Hence $C_{L}(X)$ is the $^{\mathbb{Q}(a)}/_{\mathbb{Q}}$ uniform algebra of all functions from $X$ to $L$ and we also have $\mbox{ord}(\tau)|\mbox{ord}(g)$ by construction. Hence with reference to Definition \ref{def:CGBFA} we have $C(X,\tau,g)$ as an example of a basic $^{\mathbb{Q}(a)}/_{\mathbb{Q}}$ function algebra. For $z\in X$ each $f\in C(X,\tau,g)$ is such that
\begin{align*}
\hspace{4cm}&f(z)\in\mathbb{Q}(a) &\mbox{if }&z\in S_{3},\\
&f(z)\in\mathbb{Q}(\sqrt{5}) &\mbox{if }&z\in S_{2}\mbox{ and}\\
&f(z)\in\mathbb{Q} &\mbox{if }&z\in S_{1}\mbox{ since}\hspace{4cm}
\end{align*}
$f(z)=f(\tau^{(\mbox{ord}(\tau,z))}(z))=g^{(\mbox{ord}(\tau,z))}(f(z))$ giving $\mbox{ord}(g,f(z))|\mbox{ord}(\tau,z)$. Furthermore $C(X,\tau,g)$ extends to $C(X,\tau^{(2)},g^{(2)})$ which is a basic $^{\mathbb{Q}(a)}/_{\mathbb{Q}(\sqrt{5})}$ function algebra. We will look at such extensions in the next section. Finally note that in general if we use the trivial valuation on $L$ then for every totally disconnected compact Hausdorff space $X$ the sup norm $\|\cdot\|_{\infty}$ is the trivial norm on $C_{L}(X)$.
\end{example}
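The computation $g(\sqrt{5})=-\sqrt{5}$ in the example above can be verified numerically. The sketch below takes $a=\exp(\frac{1}{10}2\pi i)$, the choice consistent with the orbit $\{a,a^{3},a^{7},a^{9}\}$, so that $g$ acts on powers of $a$ by $g(a^{k})=a^{3k\bmod 10}$; this is only a floating point sanity check, not part of the formal argument.

```python
import cmath
import math

# a = exp(2*pi*i/10); g acts on powers of a by g(a^k) = a^(3k mod 10)
a = cmath.exp(2j * math.pi / 10)

def g_exp(k):
    # action of g on the exponent of a^k
    return (3 * k) % 10

# sqrt(5) = a + a^9 - a^3 - a^7, stored as (sign, exponent) pairs
terms = [(1, 1), (1, 9), (-1, 3), (-1, 7)]
s = sum(sign * a ** k for sign, k in terms)
gs = sum(sign * a ** g_exp(k) for sign, k in terms)

print(abs(s - math.sqrt(5)))   # ~0: the combination equals sqrt(5)
print(abs(gs + math.sqrt(5)))  # ~0: g sends it to -sqrt(5)
```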
We now look at some non-Archimedean examples involving a degree two extension of the 5-adic numbers.
\begin{example}
\label{exa:CGEXONE}
Let $F:=\mathbb{Q}_{5}$ and $L:=\mathbb{Q}_{5}(\sqrt{2})$. Suppose towards a contradiction that $\sqrt{2}$ is already an element of $\mathbb{Q}_{5}$. With reference to Chapter \ref{cha:CVF}, we would have $1=|2|_{5}=|\sqrt{2}^{2}|_{5}=|\sqrt{2}|_{5}^{2}$ giving $|\sqrt{2}|_{5}=1$. But then $\sqrt{2}$ would have a 5-adic expansion over $\mathcal{R}_{5}:=\{0,1,\cdots,4\}$ of the form $\sum_{i=0}^{\infty}a_{i}5^{i}$ with $a_{0}\not=0$. Hence
\begin{equation}
a_{0}^{2} +2a_{0}\sum_{i=1}^{\infty}a_{i}5^{i}+\left(\sum_{i=1}^{\infty}a_{i}5^{i}\right)^{2}
\label{equ:CGsqrt}
\end{equation}
should be equal to 2. In particular $a_{0}^{2}$ should have the form $2+b$ where $b$ is a natural number, with a factor of 5, that cancels with the remaining terms of (\ref{equ:CGsqrt}). But since $a_{0}\in\{1,2,3,4\}$ we have $a_{0}^{2}\in\{1,4,4+5,1+3\cdot5\}$, a contradiction. Therefore we have $\mbox{Irr}_{F,\sqrt{2}}(x)=x^{2}-2$ giving $[L,F]=2$ and so $\mathbb{Q}_{5}(\sqrt{2})=\mathbb{Q}_{5}\oplus\sqrt{2}\mathbb{Q}_{5}$ as a $\mathbb{Q}_{5}$-vector space. It is immediate that $L$ is a Galois extension of $F$ with $\mbox{Gal}(^{L}/_{F})=\langle\mbox{id}, g\rangle$ where $g(\sqrt{2})=-\sqrt{2}$. The complete valuation on $F$ has a unique extension to a complete valuation on $L$, see Theorem \ref{thr:CVFEE}. Explicitly we have, for all $a\in L$,
\begin{equation*}
|a|_{L}=\sqrt{|a|_{L}|g(a)|_{L}}=\sqrt{|ag(a)|_{L}}=\sqrt{|ag(a)|_{5}},
\end{equation*}
noting that $ag(a)\in F$. Moreover in terms of the valuation logarithm $\nu_{5}$ on $F$ we have $\sqrt{|ag(a)|_{5}}=5^{-\frac{1}{2}\nu_{5}(ag(a))}$ and so the valuation logarithm for $L$ is $\omega(a):=\frac{1}{2}\nu_{5}(ag(a))$ for $a\in L$. Hence for $a+\sqrt{2}b\in L^{\times}$, with $a,b\in F$, we have $\omega(a+\sqrt{2}b)=\frac{1}{2}\nu_{5}(a^{2}-2b^{2})$. We show that this in fact gives
\begin{equation}
\omega(a+\sqrt{2}b)=\min\{\nu_{5}(a),\nu_{5}(b)\}.
\label{equ:CGINF}
\end{equation}
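The identity (\ref{equ:CGINF}) can be spot-checked numerically before the proof below, since for nonzero integers $a,b$ the quantity $\frac{1}{2}\nu_{5}(a^{2}-2b^{2})$ is computable exactly; the following sketch samples integer pairs using the standard 5-adic valuation on the nonzero integers.

```python
def nu5(n):
    # 5-adic valuation of a nonzero integer
    v = 0
    while n % 5 == 0:
        n //= 5
        v += 1
    return v

# omega(a + sqrt(2) b) = (1/2) nu5(a^2 - 2 b^2); compare with min(nu5(a), nu5(b)).
# a^2 - 2 b^2 is never 0 for integers a, b >= 1 since sqrt(2) is irrational.
ok = all(
    nu5(a * a - 2 * b * b) == 2 * min(nu5(a), nu5(b))
    for a in range(1, 151)
    for b in range(1, 151)
)
print(ok)  # True
```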
First recall that $\nu_{5}(0)=\infty$. If $b=0$ then $\omega(a)=\frac{1}{2}\nu_{5}(a^{2})=\frac{1}{2}2\nu_{5}(a)=\nu_{5}(a)$.\\
If $a=0$ then $\omega(\sqrt{2}b)=\frac{1}{2}\nu_{5}(-2b^{2})=\frac{1}{2}(\nu_{5}(-2)+2\nu_{5}(b))=\frac{1}{2}2\nu_{5}(b)=\nu_{5}(b)$, noting that $\nu_{5}(-2)=0$ since $-2=3+\sum_{i=1}^{\infty}4\cdot5^{i}$.\\
If $a,b\in F^{\times}$ and $\nu_{5}(a)\not=\nu_{5}(b)$ then by the above $\nu_{5}(a^{2})\not=\nu_{5}(-2b^{2})$. Hence, by Lemma \ref{lem:CVFEQ}, $\omega(a+\sqrt{2}b)=\frac{1}{2}\nu_{5}(a^{2}-2b^{2})=\frac{1}{2}\min\{\nu_{5}(a^{2}),\nu_{5}(-2b^{2})\}=\min\{\nu_{5}(a),\nu_{5}(b)\}$.\\
If $a,b\in F^{\times}$ and $\nu_{5}(a)=\nu_{5}(b)=n$ for some $n\in\mathbb{Z}$ then the expansion $a=\sum_{i=n}^{\infty}a_{i}5^{i}$ over $\mathcal{R}_{5}$ has $a_{n}\in\{1,2,3,4\}$ and so the expansion $a^{2}=\sum_{i=2n}^{\infty}a'_{i}5^{i}$ has $\overline{a'_{2n}}=\overline{a_{n}^{2}}$ in the residue field $\overline{F}=\mathbb{F}_{5}$ giving $a'_{2n}\in\{1,4\}$. Similarly the expansion $b=\sum_{i=n}^{\infty}b_{i}5^{i}$ has $b_{n}\in\{1,2,3,4\}$ and so $-2b^{2}=\left(3+\sum_{i=1}^{\infty}4\cdot5^{i}\right)\left(\sum_{i=n}^{\infty}b_{i}5^{i}\right)^{2}=\sum_{i=2n}^{\infty}b'_{i}5^{i}$ has $\overline{b'_{2n}}=\overline{3b_{n}^{2}}$ in $\overline{F}$ giving $b'_{2n}\in\{2,3\}$. Hence the expansion $a^{2}-2b^{2}=\sum_{i=2n}^{\infty}c_{i}5^{i}$ has $\overline{c_{2n}}=\overline{a'_{2n}+b'_{2n}}$ in $\overline{F}$ giving $c_{2n}\in\{1,2,3,4\}$. In particular $c_{2n}\not=0$ and so
\begin{equation*}
\omega(a+\sqrt{2}b)=\frac{1}{2}\nu_{5}(a^{2}-2b^{2})=\frac{1}{2}\nu_{5}(\sum_{i=2n}^{\infty}c_{i}5^{i})=\frac{1}{2}2n=n
\end{equation*}
and this completes the proof of (\ref{equ:CGINF}). With reference to Remark \ref{rem:CVFEE} it follows that $L$ is an unramified extension of $F$ with $|a+\sqrt{2}b|_{L}=\max\{|a|_{L},|b|_{L}\}=\max\{|a|_{F},|b|_{F}\}$ for $a,b\in F$. Further it follows easily from (\ref{equ:CGINF}) that $\mathcal{R}_{L}:=\{a+\sqrt{2}b:a,b\in\{0,\cdots,4\}\}$ is a set of representatives in $L$ of the elements in the residue field $\overline{L}$. Hence $\overline{L}=\mathbb{F}_{25}$ since $\#\mathcal{R}_{L}=25$ and $[\overline{L},\overline{F}]=2$. Therefore, by Theorem \ref{thr:CVFHB}, $L$ is locally compact and the unit ball $\Delta_{L}:=\{x\in L:|x|_{L}\leq1\}=\{x\in L:\omega(x)\geq0\}$ is a totally disconnected compact Hausdorff space with respect to $|\cdot|_{L}$. Further if we take $\tau_{1}$ to be the restriction of $g$ to $\Delta_{L}$ then, since $g$ is an isometry on $L$, $\tau_{1}$ is a topological involution on $\Delta_{L}$ and the basic $^{L}/_{F}$ function algebra
\begin{equation*}
C(\Delta_{L},\tau_{1},g)=\{f\in C_{L}(\Delta_{L}):f(\tau_{1}(x))=g(f(x))\mbox{ for all }x\in\Delta_{L}\}
\end{equation*}
is a non-Archimedean analog of the real disc algebra. Now let $f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$ be a power series in $C(\Delta_{L},\tau_{1},g)$. Then for $x\in\Delta_{L}$ and $\sigma$ from Remark \ref{rem:CGFROB} we have
\begin{equation*}
\sum_{n=0}^{\infty}a_{n}x^{n}=f(x)=\sigma(f)(x)=g\left(\sum_{n=0}^{\infty}a_{n}g(x)^{n}\right)=\sum_{n=0}^{\infty}g(a_{n})x^{n}
\end{equation*}
where the last equality follows because the two series have identical sequences of partial sums. Subtracting gives, for $x\in\Delta_{L}$, $\sum_{n=0}^{\infty}(a_{n}-g(a_{n}))x^{n}=0$. In general we cannot immediately conclude that each pair of coefficients $a_{n}$ and $g(a_{n})$ is equal, since $\Delta_{L}$ could consist entirely of roots of the series $\sum_{n=0}^{\infty}(a_{n}-g(a_{n}))x^{n}$ whilst some element of $L$ in the region of convergence of the series is not a root. However since $0\in\Delta_{L}$ we have $a_{0}=g(a_{0})$. Now let $m\in\mathbb{N}$ be such that for all $i\in\mathbb{N}_{0}$ with $i<m$ we have $a_{i}=g(a_{i})$. Then $x^{m}\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}=0$ on $\Delta_{L}$ and $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}=0$ on $\Delta_{L}\backslash\{0\}$. Let $\rho$ be the radius of convergence of $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}$. Then with reference to Theorem \ref{thr:FAACP}, since $1\in\Delta_{L}\backslash\{0\}$ with $|1|_{L}=1$ and $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}$ converges on $\Delta_{L}\backslash\{0\}$ we have $\rho\geq1$. Hence $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}$ converges uniformly on the ball $\bar{B}_{\frac{1}{5}}(0)=\{x\in L:\omega(x)\geq 1\}$ by Theorem \ref{thr:FAACP}. Therefore $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}$ is continuous on $\bar{B}_{\frac{1}{5}}(0)$ and so $\sum_{n=m}^{\infty}(a_{n}-g(a_{n}))x^{n-m}=0$ at $0\in\bar{B}_{\frac{1}{5}}(0)$. Hence $a_{m}=g(a_{m})$ and by induction we have shown that $a_{n}=g(a_{n})$ for all $n\in\mathbb{N}_{0}$. In particular all the power series in $C(\Delta_{L},\tau_{1},g)$ have only $F$-valued coefficients. However since $\Delta_{L}\not\subseteq F$ these functions take values in $L$.
\end{example}
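The congruence step at the start of the example, that $a_{0}^{2}$ can never take the form $2+b$ with $5|b$, amounts to the fact that $2$ is not a square in the residue field $\mathbb{F}_{5}$; a one-line check:

```python
# Squares mod 5 are exactly {0, 1, 4}, so no 5-adic integer with
# nonzero leading digit can square to 2.
squares_mod_5 = {x * x % 5 for x in range(5)}
print(squares_mod_5)       # {0, 1, 4}
print(2 in squares_mod_5)  # False
```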
Whilst the last example of a basic function algebra included many globally analytic functions, the only globally analytic functions in the following example are the constants. However, many locally analytic functions are included.
\begin{example}
\label{exa:CGEXTWO}
Let $F$, $L$, $\Delta_{L}$, $\omega$ and $g$ be as in Example \ref{exa:CGEXONE} and therefore note that $\omega|_{\Delta_{L}}:\Delta_{L}\rightarrow\mathbb{N}_{0}\cup\{\infty\}$. Define $\tau_{2}(0):=0$ and for $x\in\Delta_{L}\backslash\{0\}$,
\begin{equation}
\tau_{2}(x):= \left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
5x & 2\mid\omega(x) \\
5^{-1}x & 2\nmid\omega(x).
\end{array} \right.
\label{equ:CGTAUTWO}
\end{equation}
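As a sanity check that (\ref{equ:CGTAUTWO}) defines an involution, the following sketch restricts $\tau_{2}$ to the nonzero rational points of $\Delta_{L}$, where the valuation logarithm $\omega$ agrees with $\nu_{5}$; this restriction is made purely so that the map can be tested with exact arithmetic.

```python
from fractions import Fraction

def nu5(q):
    # 5-adic valuation of a nonzero Fraction; agrees with omega on rational points
    v, n, d = 0, q.numerator, q.denominator
    while n % 5 == 0:
        n //= 5
        v += 1
    while d % 5 == 0:
        d //= 5
        v -= 1
    return v

def tau2(x):
    # the definition of tau_2 for nonzero x, with nu5 in place of omega
    return 5 * x if nu5(x) % 2 == 0 else x / 5

samples = [Fraction(n) * Fraction(5) ** k for n in (1, 2, 3, 4, 7) for k in range(6)]
print(all(tau2(tau2(x)) == x for x in samples))  # True: ord(tau_2) divides 2
print(all(nu5(tau2(x)) >= 0 for x in samples))   # True: tau_2 maps Delta_L into itself
```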
Let $x\in\Delta_{L}$ with $\omega(x)\in\mathbb{N}_{0}$. Then $\omega(\tau_{2}(x))=\omega(5x)=\omega(x)+\omega(5)=\omega(x)+1$ if $2|\omega(x)$. Similarly $\omega(\tau_{2}(x))=\omega(x)-1$ if $2\nmid\omega(x)$. Hence when $\omega(x)\in\mathbb{N}_{0}$ we have $\omega(\tau_{2}(x))\in\mathbb{N}_{0}$ giving $\tau_{2}(x)\in\Delta_{L}$. Further $\tau_{2}:\Delta_{L}\rightarrow\Delta_{L}$ since $\tau_{2}(0)=0$. Moreover $\mbox{ord}(\tau_{2})=2$ and so to show that $\tau_{2}$ is a topological involution on $\Delta_{L}$ it remains to show that $\tau_{2}$ is continuous. Let $x\in\Delta_{L}$ and $(x_{n})$ be a sequence in $\Delta_{L}$ such that $\lim_{n\to\infty}x_{n}=x$. Let $\varepsilon>0$. For $x\not=0$ there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(x_{n})=\omega(x)$ since convergence in $L$ is from the side, see Lemma \ref{lem:FAACS}. With reference to (\ref{equ:CGTAUTWO}) this gives, for all $n\geq N$, $\tau_{2}(x_{n})=\tau_{2}(x_{n})x_{n}^{-1}x_{n}=\tau_{2}(x)x^{-1}x_{n}$. Further since $\lim_{n\to\infty}x_{n}=x$ there exists $M\in\mathbb{N}$ such that, for all $m\geq M$, $|\tau_{2}(x)x^{-1}|_{L}|x-x_{m}|_{L}<\varepsilon$. Hence for all $n\geq\max\{M,N\}$ we have
\begin{equation*}
|\tau_{2}(x)-\tau_{2}(x_{n})|_{L}=|\tau_{2}(x)x^{-1}(x-x_{n})|_{L}=|\tau_{2}(x)x^{-1}|_{L}|x-x_{n}|_{L}<\varepsilon.
\end{equation*}
On the other hand for $x=0$ note that $\omega(\tau_{2}(x_{n}))\geq\omega(x_{n})-1$ for all $n\in\mathbb{N}$. In this case since $\lim_{n\to\infty}x_{n}=0$ there exists $N'\in\mathbb{N}$ such that for all $n\geq N'$ we have $5|x_{n}|_{L}<\varepsilon$ giving
\begin{equation*}
|\tau_{2}(x_{n})|_{L}=5^{-\omega(\tau_{2}(x_{n}))}\leq5^{-(\omega(x_{n})-1)}=5|x_{n}|_{L}<\varepsilon
\end{equation*}
as required. Hence $\tau_{2}$ is a topological involution on $\Delta_{L}$. We now consider the basic $^{L}/_{F}$ function algebra
\begin{equation*}
C(\Delta_{L},\tau_{2},g)=\{f\in C_{L}(\Delta_{L}):f(\tau_{2}(x))=g(f(x))\mbox{ for all }x\in\Delta_{L}\}.
\end{equation*}
We begin by proving that the only power series in $C(\Delta_{L},\tau_{2},g)$ are the constants belonging to $F$. Let $f(x):=\sum_{n=0}^{\infty}a_{n}x^{n}$ be a power series in $C(\Delta_{L},\tau_{2},g)$. Since $\tau_{2}(0)=0$ and $f(\tau_{2}(0))=g(f(0))$ we have $a_{0}=g(a_{0})$ giving $a_{0}\in F$ and so $a_{0}\in C(\Delta_{L},\tau_{2},g)$. Hence $h:=f-a_{0}$ is also in $C(\Delta_{L},\tau_{2},g)$. Suppose towards a contradiction that $h$ is not identically zero on $\Delta_{L}$. Since $1\in\Delta_{L}$, $\sum_{n=1}^{\infty}a_{n}$ converges and so by Lemma \ref{lem:FAACS} we have $\lim_{n\to\infty}\omega(a_{n})=\infty$. Hence we can define $M:=\min\{\omega(a_{n}):n\in\mathbb{N}\}$. Also let $m:=\min\{n\in\mathbb{N}:a_{n}\not=0\}$. Now since $\Delta_{L}=\{x\in L:\omega(x)\geq0\}$ we can find $y\in\Delta_{L}\backslash\{0\}$ such that $2|\omega(y)$ and $M+\omega(y)>\omega(a_{m})$. Hence for every $n>m$ we have
\begin{equation*}
\omega(a_{m}y^{m})=\omega(a_{m})+m\omega(y)<M+\omega(y)+m\omega(y)\leq\omega(a_{n})+n\omega(y)=\omega(a_{n}y^{n}).
\end{equation*}
So, by Lemma \ref{lem:FAACS}, $\omega\left(\sum_{n=m+1}^{\infty}a_{n}y^{n}\right)\geq\min\{\omega(a_{n}y^{n}):n\geq m+1\}>\omega(a_{m}y^{m})$. Hence $\omega\left(\sum_{n=m}^{\infty}a_{n}y^{n}\right)=\omega\left(a_{m}y^{m}+\sum_{n=m+1}^{\infty}a_{n}y^{n}\right)=\omega(a_{m}y^{m})$ by Lemma \ref{lem:CVFEQ}. Similarly for every $n>m$ we have
\begin{align*}
\omega(a_{m}5^{m}y^{m})=&\omega(a_{m})+m(\omega(y)+1)\\
<&M+\omega(y)+1+m(\omega(y)+1)\\
\leq&\omega(a_{n})+n(\omega(y)+1)=\omega(a_{n}5^{n}y^{n}),
\end{align*}
giving $\omega\left(\sum_{n=m}^{\infty}a_{n}5^{n}y^{n}\right)=\omega(a_{m}5^{m}y^{m})=\omega(5^{m})+\omega(a_{m}y^{m})=m+\omega(a_{m}y^{m})$. Now $h(\tau_{2}(y))=g(h(y))$ and $2|\omega(y)$ hence $\sum_{n=m}^{\infty}a_{n}5^{n}y^{n}=g\left(\sum_{n=m}^{\infty}a_{n}y^{n}\right)$. But, since $g$ is an isometry, $\omega\left(g\left(\sum_{n=m}^{\infty}a_{n}y^{n}\right)\right)=\omega(a_{m}y^{m})$ and $\omega\left(\sum_{n=m}^{\infty}a_{n}5^{n}y^{n}\right)=m+\omega(a_{m}y^{m})$ with $m\in\mathbb{N}$ which is a contradiction. Therefore $h$ is identically zero on $\Delta_{L}$ as required. However, whilst the only power series in $C(\Delta_{L},\tau_{2},g)$ are constants belonging to $F$, it is easy to construct locally analytic elements of $C(\Delta_{L},\tau_{2},g)$ using power series. Define
\begin{equation*}
\mathcal{C}(n):=\{x\in\Delta_{L}:\omega(x)=n\}\mbox{ for }n\in\omega(\Delta_{L}),
\end{equation*}
let $(e_{n})_{n\in\mathbb{N}}$ be the even sequence in $\omega(\Delta_{L})$ given by $e_{n}:=2(n-1)$ and let $a\in F$. Now let $(f_{n})_{n\in\mathbb{N}}$ be a sequence of power series with the following properties:
\begin{enumerate}
\item[(i)]
for all $n\in\mathbb{N}$ the coefficients of $f_{n}$ are elements of $L$;
\item[(ii)]
for all $n\in\mathbb{N}$ we have $5^{-e_{n}}<\rho_{n}$ where $\rho_{n}$ is the radius of convergence of $f_{n}$ so that $f_{n}$ is convergent on $\mathcal{C}(e_{n})$;
\item[(iii)]
we have $\lim_{n\to\infty}\inf\{\omega(f_{n}(x)-a):x\in\mathcal{C}(e_{n})\}=\infty$.
\end{enumerate}
We then define $f:\Delta_{L}\rightarrow L$ by
\begin{equation*}
f(x):= \left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
f_{n}(x) & x\in\mathcal{C}(e_{n})\\
g(f_{n}(\tau_{2}(x))) & x\in\mathcal{C}(e_{n}+1)\\
a & x=0.
\end{array} \right.
\end{equation*}
We show that $f$ is continuous. Let $x\in\Delta_{L}$ and let $(x_{n})$ be a sequence in $\Delta_{L}$ such that $\lim_{n\to\infty}x_{n}=x$.\\
If $x\not=0$ then by Lemma \ref{lem:FAACS} there is $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(x_{n})=\omega(x)$. If for some $m\in\mathbb{N}$ we have $x\in\mathcal{C}(e_{m})$ then $f(x)=f_{m}(x)$ and for all $n\geq N$ we have $f(x_{n})=f_{m}(x_{n})$ since $x_{n}\in\mathcal{C}(e_{m})$. Hence by the continuity of $f_{m}$ on $\mathcal{C}(e_{m})$ we have $\lim_{n\to\infty}f(x_{n})=f(x)$.\\
If for some $m\in\mathbb{N}$ we have $x\in\mathcal{C}(e_{m}+1)$ then $f(x)=g(f_{m}(\tau_{2}(x)))$, with $\tau_{2}(x)\in\mathcal{C}(e_{m})$, and for all $n\geq N$ we have $f(x_{n})=g(f_{m}(\tau_{2}(x_{n})))$, since $x_{n}\in\mathcal{C}(e_{m}+1)$, with $\tau_{2}(x_{n})\in\mathcal{C}(e_{m})$. Now $\tau_{2}$ is continuous on $\Delta_{L}$, $f_{m}$ is continuous on $\mathcal{C}(e_{m})$ and $g$ is an isometry on $L$. Hence again we have $\lim_{n\to\infty}f(x_{n})=f(x)$.\\
If $x=0$ then by the definition of $f$ we have $f(x)=a$. Let $\varepsilon<\infty$. We need to show that there is $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(f(x_{n})-a)>\varepsilon$. By property (iii) given in the construction of $f$ there is $M\in\mathbb{N}$ such that for all $m\geq M$ we have $\inf\{\omega(f_{m}(y)-a):y\in\mathcal{C}(e_{m})\}>\varepsilon$. Since $\lim_{n\to\infty}\omega(x_{n})=\infty$ there is $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(x_{n})\geq e_{M}$. So let $n\geq N$. Then either $x_{n}=0$, noting that $\omega(0)=\infty$, or there is $m\geq M$ with either $x_{n}\in\mathcal{C}(e_{m})$ or $x_{n}\in\mathcal{C}(e_{m}+1)$.\\
For $x_{n}=0$ we have $\omega(f(0)-a)=\omega(a-a)=\infty>\varepsilon$.\\
For $x_{n}\in\mathcal{C}(e_{m})$ we have $\omega(f(x_{n})-a)=\omega(f_{m}(x_{n})-a)>\varepsilon$ since $m\geq M$.\\
For $x_{n}\in\mathcal{C}(e_{m}+1)$ define $y:=\tau_{2}(x_{n})$ and note that $y\in\mathcal{C}(e_{m})$. Then
\begin{align*}
\omega(f(x_{n})-a)=&\omega(g(f_{m}(\tau_{2}(x_{n})))-a)&&\\
=&\omega(g(f_{m}(y)-a)) &&(\mbox{since }a\in F)\\
=&\omega(f_{m}(y)-a) &&(\mbox{since }g\mbox{ is an isometry on }L)\\
>&\varepsilon &&(\mbox{since }m\geq M\mbox{ and }y\in\mathcal{C}(e_{m})).
\end{align*}
Hence $f$ is continuous. We now show that $f\in C(\Delta_{L},\tau_{2},g)$. Let $x\in\Delta_{L}$.\\
For $x=0$ we have $f(\tau_{2}(0))=f(0)=a=g(a)=g(f(0))$ since $a\in F$.\\
For $x\in\mathcal{C}(e_{n})$, for some $n\in\mathbb{N}$, we have $f(x)=f_{n}(x)$. Define $y:=\tau_{2}(x)$ giving $y\in\mathcal{C}(e_{n}+1)$. Then we have
\begin{equation*}
f(\tau_{2}(x))=f(y)=g(f_{n}(\tau_{2}(y)))=g(f_{n}(\tau_{2}(\tau_{2}(x))))=g(f_{n}(x))=g(f(x)).
\end{equation*}
For $x\in\mathcal{C}(e_{n}+1)$, for some $n\in\mathbb{N}$, we have $f(x)=g(f_{n}(\tau_{2}(x)))$. Put $y:=\tau_{2}(x)$ giving $y\in\mathcal{C}(e_{n})$. Then we have
\begin{equation*}
f(\tau_{2}(x))=f(y)=f_{n}(y)=f_{n}(\tau_{2}(x))=g(g(f_{n}(\tau_{2}(x))))=g(f(x)).
\end{equation*}
Hence $f\in C(\Delta_{L},\tau_{2},g)$ as required. Now suppose there is $N\in\mathbb{N}$ such that for all $n\geq N$ we have $f_{n}=a$. Then $f$ will be locally analytic on $\Delta_{L}$ noting that convergence in $\Delta_{L}$ is from the side, in particular see the proof of Lemma \ref{lem:FAACS}.
\end{example}
\begin{remark}
Concerning examples \ref{exa:CGEXONE} and \ref{exa:CGEXTWO}.
\begin{enumerate}
\item[(i)]
Since $g$ is an isometry on $\Delta_{L}$ and $5\in F$ we note that $\tau_{1}\circ\tau_{2}$ is also a topological involution on $\Delta_{L}$ with $\mbox{ord}(\tau_{1}\circ\tau_{2})=2$. For $f\in C(\Delta_{L},\tau_{1},g)\cap C(\Delta_{L},\tau_{2},g)$ and $x\in\Delta_{L}$ we have $f(\tau_{1}\circ\tau_{2}(x))=g(f(\tau_{2}(x)))=g(g(f(x)))=f(x)$ which gives $f\in C(\Delta_{L},\tau_{1}\circ\tau_{2},\mbox{id})$. But, with reference to Figure \ref{fig:CGED}, $C(\Delta_{L},\tau_{1}\circ\tau_{2},\mbox{id})$ is not a basic function algebra since it fails to separate the points of $\Delta_{L}$ noting that we have $\mbox{ord}(\tau_{1}\circ\tau_{2})\nmid\mbox{ord}(\mbox{id})$. Hence $C(\Delta_{L},\tau_{1},g)\cap C(\Delta_{L},\tau_{2},g)$ is not a $^{L}/_{F}$ function algebra on $(\Delta_{L},\tau_{1},g)$. However $C(\Delta_{L},\tau_{1}\circ\tau_{2},g)$ is a basic function algebra.
\item[(ii)]
We note that by Theorem \ref{thr:CGGSW} every element $f\in C(\Delta_{L},\tau_{2},g)$ can be uniformly approximated by polynomials belonging to $C_{L}(\Delta_{L})$. However, apart from the elements of $F$, none of these polynomials belong to $C(\Delta_{L},\tau_{2},g)$.
\end{enumerate}
\end{remark}
In the next section we look at ways of obtaining more basic function algebras.
\section{Non-Archimedean new basic function algebras from old}
\label{sec:CGNANFO}
\subsection{Basic extensions}
\label{subsec:CGBE}
The following theorem concerns extensions of basic function algebras resulting from the field structure involved.
\begin{theorem}
\label{thr:CGBET}
Let the basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ be such that $\mbox{Gal}(^{L}/_{L^{g}})$ and $\langle\mbox{id}\rangle$ are respectively at the top and bottom of a lattice of groups with intermediate elements. Then $C_{L}(X)$ and $C(X,\tau,g)$ are respectively at the top and bottom of a particular lattice of basic function algebras with intermediate elements and there is a one-one correspondence between the subgroups of $\mbox{Gal}(^{L}/_{L^{g}})$ and the elements of this lattice which we will call the lattice of basic extensions of $C(X,\tau,g)$.
\end{theorem}
\begin{proof}
With reference to Remark \ref{rem:CGFROB}, the automorphism $\sigma(f)=g^{(\mbox{ord}(g)-1)}\circ f\circ\tau$, for $f\in C_{L}(X)$, is such that $C_{L}(X)^{\langle\sigma\rangle}=C(X,\tau,g)$ where
\begin{equation*}
C_{L}(X)^{\langle\sigma\rangle}:=\{f\in C_{L}(X):\sigma(f)=f\}.
\end{equation*}
Now by the fundamental theorem of Galois theory we have $\mbox{Gal}(^{L}/_{L^{g}})=\langle g\rangle$ and so $\mbox{Gal}(^{L}/_{L^{g}})$ is a cyclic group. Moreover we have $\mbox{ord}(\sigma)=\mbox{ord}(g)$ giving $\langle\sigma\rangle\cong\langle g\rangle$ as cyclic groups. It is standard from group theory that a subgroup of a cyclic group is cyclic. In particular, for $n|\mbox{ord}(\sigma)$, $\langle\sigma^{(n)}\rangle$ is the unique cyclic subgroup of $\langle\sigma\rangle$ of size $\#\langle\sigma^{(n)}\rangle=\mbox{ord}(\sigma^{(n)})=\frac{\mbox{ord}(\sigma)}{n}$. Moreover for $G$ a subgroup of $\langle\sigma\rangle$ we have $\frac{\mbox{ord}(\sigma)}{\#G}\in\mathbb{N}$, by Lagrange's theorem, and so for $n=\frac{\mbox{ord}(\sigma)}{\#G}$ we have $\langle\sigma^{(n)}\rangle=G$ with $n|\mbox{ord}(\sigma)$. Hence we define a map $\varsigma:\{\langle\sigma^{(n)}\rangle:n|\mbox{ord}(\sigma)\}\rightarrow\{C_{L}(X)^{\langle\sigma^{(n)}\rangle}:n|\mbox{ord}(\sigma)\}$ by
\begin{equation*}
\varsigma(\langle\sigma^{(n)}\rangle):=C_{L}(X)^{\langle\sigma^{(n)}\rangle}:=\{f\in C_{L}(X):\sigma^{(n)}(f)=f\}=C(X,\tau^{(n)},g^{(n)}).
\end{equation*}
Now let $n|\mbox{ord}(\sigma)$. Since $\mbox{ord}(\tau)|\mbox{ord}(g)$ we also have $\mbox{ord}(\tau,X)\subseteq\mbox{ord}(g,L)$, see Figure \ref{fig:CGED}. Hence $\mbox{ord}(\tau^{(n)},X)\subseteq\mbox{ord}(g^{(n)},L)$ giving $\mbox{ord}(\tau^{(n)})|\mbox{ord}(g^{(n)})$ and so $\varsigma(\langle\sigma^{(n)}\rangle)$ is a basic function algebra. Moreover since
\begin{equation*}
C(X,\tau^{(n)},g^{(n)})=\{f\in C_{L}(X):f(\tau^{(n)}(x))=g^{(n)}(f(x))\mbox{ for all }x\in X\}
\end{equation*}
the constants in $\varsigma(\langle\sigma^{(n)}\rangle)$ are the elements of the field $L^{\langle g^{(n)}\rangle}$ and so $\varsigma$ is injective by the fundamental theorem of Galois theory. Finally it is immediate that the elements of $\{C_{L}(X)^{\langle\sigma^{(n)}\rangle}:n|\mbox{ord}(\sigma)\}$ form a lattice as described in the theorem and this completes the proof.
\end{proof}
\begin{example}
\label{exa:CGBEL}
Let $F=\mathbb{Q}$ and let $L=\mathbb{Q}(a)$ where $a=\exp(\frac{1}{14}2\pi i)$. Having factorised $x^{14}-1$ in $F[x]$, we have $\mbox{Irr}_{F,a}(x)=x^{6}-x^{5}+x^{4}-x^{3}+x^{2}-x+1$ with roots $S:=\{a,a^{3},a^{5},a^{9},a^{11},a^{13}\}$. Hence $L$ is the splitting field of $\mbox{Irr}_{F,a}(x)$ over $F$ and so $L$ is a Galois extension of $F$ with $\#\mbox{Gal}(^{L}/_{F})=[L,F]=6$ by Theorem \ref{thr:CVFGL}. In fact putting $g(a):=a^{3}$ makes $g$ a generator of $\mbox{Gal}(^{L}/_{F})$ and so $L$ is a cyclic extension of $F$ and $F=L^{g}$ since $L$ is a Galois extension. We can take $C(X,\tau,g)$ to be a basic $^{L}/_{L^{g}}$ function algebra constructed by analogy with Example \ref{exa:CGTVFA} where $X\subseteq\mathbb{N}\times L$ is non-empty and finite with $\tau((x,y))=(x,g(y))\in X$ for all $(x,y)\in X$. In this case Figure \ref{fig:CGBE} shows the lattice of basic extensions of $C(X,\tau,g)$ as given by Theorem \ref{thr:CGBET}.
\begin{figure}[h]
\begin{equation*}
\xymatrix{
&&&C_{L}(X)^{\langle\mbox{\footnotesize{id}}\rangle}&\\
&\langle\mbox{id}\rangle\ar@{.>}[rru]^{\varsigma}\ar@{->}[ld]\ar@{->}[rd]&C_{L}(X)^{\langle\sigma^{(2)}\rangle}\ar@{->}[ru]&&C_{L}(X)^{\langle\sigma^{(3)}\rangle}\ar@{->}[lu]\\
\langle\sigma^{(2)}\rangle\ar@{.>}[rru]\ar@{->}[rd]&\hspace{20mm}&\langle\sigma^{(3)}\rangle\ar@{.>}[rru]\ar@{->}[ld]&C_{L}(X)^{\langle\sigma\rangle}\ar@{->}[lu]\ar@{->}[ru]&\\
&\langle\sigma\rangle\ar@{.>}[rru]&&&
}
\end{equation*}
\caption{Lattice of basic extensions.}
\label{fig:CGBE}
\end{figure}
Finally we note that in this example $F=\mathbb{Q}$, which as a complete valued field must carry the trivial valuation, was chosen just to keep things simple.
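The facts used above, that $g(a)=a^{3}$ generates a group of order $6$ and that the orbit of $a$ under $g$ is the root set $S$, reduce to arithmetic on exponents modulo $14$ and can be checked directly:

```python
# g(a) = a^3 acts on exponents by multiplication by 3 modulo 14.
def order_mod(k, m):
    # multiplicative order of k in (Z/mZ)^x
    e, n = k % m, 1
    while e != 1:
        e = e * k % m
        n += 1
    return n

print(order_mod(3, 14))  # 6, matching #Gal(L/F) = [L, F] = 6

orbit, e = [], 1
while True:
    orbit.append(e)
    e = e * 3 % 14
    if e == 1:
        break
print(sorted(orbit))  # [1, 3, 5, 9, 11, 13], the exponents of the roots in S
```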
\end{example}
\subsection{Residue algebras}
\label{subsec:CGRA}
We begin with an analog of Definition \ref{def:CVFRF}.
\begin{definition}
\label{def:CGRA}
For $C(X,\tau,g)$ a basic $^{L}/_{L^{g}}$ function algebra in the non-Archimedean setting, with valuation logarithm $\omega$ on $L$, we define:
\begin{enumerate}
\item[(i)]
$\mathcal{O}(X,\tau,g):=\{f\in C(X,\tau,g):\inf_{x\in X}\omega(f(x))\geq0,\mbox{\small{ equivalently }}\|f\|_{\infty}\leq1\}$;
\item[(ii)]
$\mathcal{O}^{\times}(X,\tau,g):=\{f\in C(X,\tau,g):\omega(f(x))=0,\mbox{\small{ equivalently }}|f(x)|_{L}=1,\forall x\in X\}$;
\item[(iii)]
$\mathcal{J}(X,\tau,g):=\{f\in C(X,\tau,g):\inf_{x\in X}\omega(f(x))>0,\mbox{\small{ equivalently }}\|f\|_{\infty}<1\}$;
\item[(iv)]
$\mathcal{M}^{y}(X,\tau,g):=\{f\in\mathcal{O}(X,\tau,g):\omega(f(y))>0,\mbox{\small{ equivalently }}|f(y)|_{L}<1\}$ for $y\in X$.
\end{enumerate}
\end{definition}
In this subsection we will mainly be interested in the following two theorems and their proofs. The main theorem is Theorem \ref{thr:CGRAT} which concerns the residue algebra of particular basic function algebras. Before proving these theorems we will need to prove several other results that are also of interest in their own right.
\begin{theorem}
\label{thr:CGRUI}
If $C(X,\tau,g)$ is a basic $^{L}/_{L^{g}}$ function algebra in the non-Archimedean setting then:
\begin{enumerate}
\item[(i)]
$\mathcal{O}(X,\tau,g)$ is a ring;
\item[(ii)]
$\mathcal{O}^{\times}(X,\tau,g)$ is the multiplicative group of units of $\mathcal{O}(X,\tau,g)$;
\item[(iii)]
$\mathcal{J}(X,\tau,g)$ is an ideal of $\mathcal{O}(X,\tau,g)$;
\item[(iv)]
$\mathcal{M}^{y}(X,\tau,g)$ is a maximal ideal of $\mathcal{O}(X,\tau,g)$ for each $y\in X$.
\end{enumerate}
\end{theorem}
\begin{theorem}
\label{thr:CGRAT}
Let $F$ be a locally compact complete non-Archimedean field of characteristic zero with nontrivial valuation. Let $L$ be a finite unramified extension of $F$ with $L^{g}=F$ for some $g\in\mbox{Gal}(^{L}/_{F})$ and let $C(X,\tau,g)$ be a basic $^{L}/_{F}$ function algebra. Then there is an isometric isomorphism
\begin{equation*}
\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)\cong C(X,\tau,\bar{g})
\end{equation*}
where $C(X,\tau,\bar{g})$ is the basic $^{\overline{L}}/_{\overline{F}}$ function algebra on $(X,\tau,\bar{g})$. Here $\overline{F}$ and $\overline{L}$ are respectively the residue field of $F$ and $L$ whilst $\bar{g}$ is the residue automorphism on $\overline{L}$ induced by $g$. More generally $L$ need not be an unramified extension of $F$ for the above to hold provided that we impose the condition $\mbox{ord}(\tau)|\mbox{ord}(\bar{g})$ directly.
\end{theorem}
\begin{remark}
\label{rem:CGRAT}
Concerning Theorem \ref{thr:CGRAT}.
\begin{enumerate}
\item[(i)]
The conditions in Theorem \ref{thr:CGRAT} imply that $F$ contains a $p$-adic field, up to a positive exponent of the valuation, since by Theorem \ref{thr:CVFHB} the residue field $\overline{F}$ is finite and so the valuation on $F$ when restricted to $\mathbb{Q}$ can not be trivial.
\item[(ii)]
Since $\overline{L}$ is finite the valuation on $\overline{L}$ is the trivial valuation. In general the quotient norm on a residue field is the trivial valuation. In particular for $\bar{a}\in\overline{L}$ with residue class representative $a\in\mathcal{O}_{L}$ we have, by Lemma \ref{lem:CVFEQ}, that
\begin{equation*}
\min\{|a-b|_{L}:b\in\mathcal{M}_{L}\}=\left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
1 & a\notin\mathcal{M}_{L} \\
0 & a\in\mathcal{M}_{L}.
\end{array} \right.
\end{equation*}
\item[(iii)]
For the reader's convenience we show that $\bar{g}$ is well defined, although this is covered in \cite[p52]{Fesenko}. Let $\bar{a}\in\overline{L}$ with residue class representative $a\in\mathcal{O}_{L}$. For $g\in\mbox{Gal}(^{L}/_{F})$ we obtain $\bar{g}\in\mbox{Gal}(^{\overline{L}}/_{\overline{F}})$ by $\bar{g}(\bar{a}):=\overline{g(a)}$. Now let $b\in\mathcal{O}_{L}$ with $b\not=a$ but $\bar{b}=\bar{a}$ so that $a-b\in\mathcal{M}_{L}$. By Remark \ref{rem:CVFUT}, $g$ is an isometry on $L$ and so $g(a)-g(b)=g(a-b)\in\mathcal{M}_{L}$ giving $\bar{g}(\bar{b})=\overline{g(b)}=\overline{g(a)}=\bar{g}(\bar{a})$ and so $\bar{g}$ is well defined.
\item[(iv)]
The map $g\mapsto\bar{g}$ is a homomorphism from $\mbox{Gal}(^{L}/_{F})$ to $\mbox{Gal}(^{\overline{L}}/_{\overline{F}})$. Indeed for $\bar{a}\in\overline{L}$ and $g_{1},g_{2}\in\mbox{Gal}(^{L}/_{F})$ we have
\begin{equation*}
\overline{g_{1}\circ g_{2}}(\bar{a})=\overline{g_{1}\circ g_{2}(a)}=\overline{g_{1}(g_{2}(a))}=\bar{g}_{1}\left(\overline{g_{2}(a)}\right)=\bar{g}_{1}(\bar{g}_{2}(\bar{a}))=\bar{g}_{1}\circ\bar{g}_{2}(\bar{a}).
\end{equation*}
Under the conditions of Theorem \ref{thr:CGRAT} this homomorphism becomes an isomorphism as per Lemma \ref{lem:CGORDG} below, see \cite[p52]{Fesenko}. In particular this ensures that $C(X,\tau,\bar{g})$, in Theorem \ref{thr:CGRAT}, is a basic function algebra since $\mbox{ord}(\bar{g})=\mbox{ord}(g)$ gives $\mbox{ord}(\tau)|\mbox{ord}(\bar{g})$.
\end{enumerate}
\end{remark}
\begin{lemma}
\label{lem:CGORDG}
Let $F$ be a local field, as per Remark \ref{rem:CVFHB}, and let $L$ be a finite unramified Galois extension of $F$. Then $\mbox{Gal}(^{L}/_{F})\cong\mbox{Gal}(^{\overline{L}}/_{\overline{F}})$ giving $\mbox{ord}(\bar{g})=\mbox{ord}(g)$ for all $g\in\mbox{Gal}(^{L}/_{F})$.
\end{lemma}
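In the setting of Example \ref{exa:CGEXONE} the lemma can be checked concretely: there $\overline{L}=\mathbb{F}_{25}$, which the sketch below models as $\mathbb{F}_{5}[x]/(x^{2}-2)$ with elements stored as coefficient pairs (a representation chosen only for this computation), and the residue automorphism $\bar{g}$ is the Frobenius map $t\mapsto t^{5}$, which indeed has order $2=\mbox{ord}(g)$ and fixed field $\overline{F}=\mathbb{F}_{5}$.

```python
# F_25 modelled as F_5[x]/(x^2 - 2); (c0, c1) stands for c0 + c1*sqrt(2) mod 5.
def mul(u, v):
    a0, a1 = u
    b0, b1 = v
    return ((a0 * b0 + 2 * a1 * b1) % 5, (a0 * b1 + a1 * b0) % 5)

def frob(t):
    # the Frobenius automorphism t -> t^5 on F_25
    r = (1, 0)
    for _ in range(5):
        r = mul(r, t)
    return r

field = [(c0, c1) for c0 in range(5) for c1 in range(5)]
print(all(frob(frob(t)) == t for t in field))  # True: ord(frob) divides 2
print(frob((0, 1)))                            # (0, 4): sqrt(2) -> -sqrt(2), as g does
print([t for t in field if frob(t) == t] == [(c, 0) for c in range(5)])  # fixed field is F_5
```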
The following definition and lemma will be useful when proving Theorem \ref{thr:CGRUI}. Note that the first part of Lemma \ref{lem:CGVLF} makes sense even though we have yet to show that $\mathcal{O}(X,\tau,g)$ is a ring.
\begin{definition}
\label{def:CGVLF}
Let $L$ be a complete valued field with valuation logarithm $\omega$ and let $X$ be a totally disconnected compact Hausdorff space.
\begin{enumerate}
\item[(i)]
We call a map $\iota:X\rightarrow\omega(\mathcal{O}_{L})$ a {\em value level function}.
\item[(ii)]
We place a partial order on the set of all value level functions by setting
\begin{equation*}
\iota_{1}\geq\iota_{2}\mbox{ if and only if for all }x\in X\mbox{ we have }\iota_{1}(x)\geq\iota_{2}(x).
\end{equation*}
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:CGVLF}
Let $C(X,\tau,g)$ be a basic $^{L}/_{L^{g}}$ function algebra in the non-Archimedean setting with valuation logarithm $\omega$ on $L$ and let $\iota:X\rightarrow\omega(\mathcal{O}_{L})$ be a value level function. Then:
\begin{enumerate}
\item[(i)]
$\mathcal{M}_{\iota}(X,\tau,g):=\{f\in C(X,\tau,g):\omega(f(x))\geq\iota(x)\mbox{ for all }x\in X\}$ is an ideal of $\mathcal{O}(X,\tau,g)$;
\item[(ii)]
for $\iota'$ another value level function with $\iota\geq\iota'$ we have $\mathcal{M}_{\iota}(X,\tau,g)\subseteq\mathcal{M}_{\iota'}(X,\tau,g)$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (i), let $f_{1},f_{2}\in\mathcal{M}_{\iota}(X,\tau,g)$ and $f\in\mathcal{O}(X,\tau,g)$. Then for each $x\in X$ we have $\omega(f_{1}(x)+f_{2}(x))\geq\min\{\omega(f_{1}(x)),\omega(f_{2}(x))\}\geq\iota(x)$ giving $f_{1}+f_{2}\in\mathcal{M}_{\iota}(X,\tau,g)$ and $\omega(f_{1}(x)f(x))=\omega(f_{1}(x))+\omega(f(x))\geq\omega(f_{1}(x))\geq\iota(x)$ giving $f_{1}f\in\mathcal{M}_{\iota}(X,\tau,g)$ as required. For (ii), this is immediate.
\end{proof}
Note that the mapping $\iota\mapsto\mathcal{M}_{\iota}(X,\tau,g)$ is not assumed to be injective. We now prove Theorem \ref{thr:CGRUI}.
\begin{proof}[Proof of Theorem \ref{thr:CGRUI}]
For (i), note that since $\omega(1)=0$, $\omega(0)=\infty$ and $1,0\in F$ we have $1,0\in\mathcal{O}(X,\tau,g)$. Further $\mathcal{O}(X,\tau,g)$ is closed under multiplication and addition by Lemma \ref{lem:CGVLF} since for the value level function that is constantly zero we have $\mathcal{O}(X,\tau,g)=\mathcal{M}_{0}(X,\tau,g)$. Hence $\mathcal{O}(X,\tau,g)$ is a ring.\\
For (ii), we need to show that $\mathcal{O}^{\times}(X,\tau,g)=\mathcal{O}(X,\tau,g)^{\times}$. Let $f\in\mathcal{O}(X,\tau,g)^{\times}$. Then for all $x\in X$ we have $\omega(f(x))\geq0$ and $\omega(f^{-1}(x))\geq0$ since $f,f^{-1}\in\mathcal{O}(X,\tau,g)^{\times}$ but we also have $\omega(f^{-1}(x))=\omega((f(x))^{-1})=-\omega(f(x))$ giving $\omega(f(x))=0$. Hence $\mathcal{O}(X,\tau,g)^{\times}\subseteq\mathcal{O}^{\times}(X,\tau,g)$. Now let $f\in\mathcal{O}^{\times}(X,\tau,g)$. We have
\begin{equation}
\label{equ:CGUNIT}
\omega(f^{-1}(x))=-\omega(f(x))=0
\end{equation}
for all $x\in X$ and so it remains to show that $f^{-1}$ is an element of $C(X,\tau,g)$. We have $1=g(1)=g(f^{-1}f)=g(f^{-1})g(f)$ giving $g(f^{-1})=(g(f))^{-1}$ and so
\begin{equation*}
f^{-1}(\tau)=(f(\tau))^{-1}=(g(f))^{-1}=g(f^{-1}).
\end{equation*}
For continuity let $x\in X$ and $(x_{n})$ be a sequence in $X$ with $\lim_{n\to\infty}x_{n}=x$ in $X$. Then by (\ref{equ:CGUNIT}) we have
\begin{align*}
\omega(f^{-1}(x_{n})-f^{-1}(x))=&\omega(f^{-1}(x_{n})-f^{-1}(x))+\omega(f(x))\\
=&\omega((f^{-1}(x_{n})-f^{-1}(x))f(x))\\
=&\omega(f^{-1}(x_{n})f(x)-1)\\
=&\omega(f^{-1}(x_{n})(f(x)-f(x_{n})))\\
=&\omega(f^{-1}(x_{n}))+\omega(f(x)-f(x_{n}))\\
=&\omega(f(x)-f(x_{n})).
\end{align*}
Therefore by the continuity of $f$ we have $\lim_{n\to\infty}f^{-1}(x_{n})=f^{-1}(x)$ from which it follows that $\mathcal{O}^{\times}(X,\tau,g)\subseteq\mathcal{O}(X,\tau,g)^{\times}$ as required.\\
For (iii), taking into account that the valuation on $L$ could be dense, $\mathcal{J}(X,\tau,g)$ is an ideal of $\mathcal{O}(X,\tau,g)$ by Lemma \ref{lem:CGVLF} since $\mathcal{J}(X,\tau,g)=\bigcup_{n\in\mathbb{N}}\mathcal{M}_{\frac{1}{n}}(X,\tau,g)$ noting that $1\not\in\mathcal{M}_{\frac{1}{n}}(X,\tau,g)$ for all $n\in\mathbb{N}$ and that a union of nested ideals is an ideal.\\
For (iv), $\mathcal{M}^{y}(X,\tau,g)$ is an ideal of $\mathcal{O}(X,\tau,g)$ by Lemma \ref{lem:CGVLF} since $\mathcal{M}^{y}(X,\tau,g)=\bigcup_{n\in\mathbb{N}}\mathcal{M}_{\frac{1}{n}\chi_{\{y\}}}(X,\tau,g)$ where $\chi_{\{y\}}$ is the indicator function. Also by Lemma \ref{lem:CGVLF}, since $\frac{1}{n}\geq\frac{1}{n}\chi_{\{y\}}$ for all $n\in\mathbb{N}$, we have $\mathcal{J}(X,\tau,g)\subseteq\mathcal{M}^{y}(X,\tau,g)$. We now show that $\mathcal{M}^{y}(X,\tau,g)$ is a maximal ideal of $\mathcal{O}(X,\tau,g)$. Let $\mathcal{I}(X,\tau,g)$ be a, not necessarily proper, ideal of $\mathcal{O}(X,\tau,g)$ with $\mathcal{M}^{y}(X,\tau,g)\subsetneqq\mathcal{I}(X,\tau,g)$. Then there is $f\in\mathcal{I}(X,\tau,g)$ with $\omega(f(y))=0$. Define on $X$
\begin{equation*}
f'(x):=\left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
0 & \omega(f(x))=0 \\
1 & \omega(f(x))>0.
\end{array} \right.
\end{equation*}
We show that $f'$ is an element of $\mathcal{M}^{y}(X,\tau,g)$ and so $f'\in\mathcal{I}(X,\tau,g)$. For continuity let $x\in X$ and $(x_{n})$ be a sequence of elements of $X$ with $\lim_{n\to\infty}x_{n}=x$ in $X$. Since $f$ is continuous we have $\lim_{n\to\infty}f(x_{n})=f(x)$ with respect to $\omega$. Hence if $f(x)=0$ then there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(f(x_{n}))=\omega(f(x_{n})-f(x))>0$. If $f(x)\not=0$ then since convergence in $L$ is from the side, see Lemma \ref{lem:FAACS}, there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $\omega(f(x_{n}))=\omega(f(x))$. Hence in every case there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $f'(x_{n})=f'(x)$ and so $f'$ is continuous. We need to show that $f'(\tau(x))=g(f'(x))$. Since $g$ is an isometry on $L$ we have $\omega(f(\tau(x)))=\omega(g(f(x)))=\omega(f(x))$ giving
\begin{align*}
f'(\tau(x))=&\left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
0 & \omega(f(\tau(x)))=0 \\
1 & \omega(f(\tau(x)))>0
\end{array} \right.\\
=&f'(x)\\
=&g(f'(x))
\end{align*}
noting that $f'$ takes values only in $\{0,1\}\subseteq F$. Now $\omega(1)=0$ and $\omega(0)=\infty$ so that for all $x\in X$ we have $\omega(f'(x))\geq0$ giving $f'\in\mathcal{O}(X,\tau,g)$. Further since $\omega(f(y))=0$ we have $\omega(f'(y))=\omega(0)=\infty$ and so we have shown that $f'\in\mathcal{M}^{y}(X,\tau,g)\subsetneqq\mathcal{I}(X,\tau,g)$. Now since $\mathcal{I}(X,\tau,g)$ is an ideal we have $f+f'\in\mathcal{I}(X,\tau,g)$. Moreover by the definition of $f'$, for each $x\in X$, if $\omega(f(x))=0$ then $\omega(f'(x))=\omega(0)=\infty$ and if $\omega(f(x))>0$ then $\omega(f'(x))=\omega(1)=0$. Hence for all $x\in X$, $\omega(f(x)+f'(x))=0$ by Lemma \ref{lem:CVFEQ} and so $f+f'\in\mathcal{O}^{\times}(X,\tau,g)$ giving $\mathcal{I}(X,\tau,g)=\mathcal{O}(X,\tau,g)$. Therefore $\mathcal{M}^{y}(X,\tau,g)$ is a maximal ideal of $\mathcal{O}(X,\tau,g)$ and this completes the proof of Theorem \ref{thr:CGRUI}.
\end{proof}
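The telescoping identity used in the proof of (ii), $\omega(f^{-1}(x_{n})-f^{-1}(x))=\omega(f(x)-f(x_{n}))$, reflects the fact that inversion preserves valuations of differences on units of the valuation ring. The following Python sketch, which is not part of the formal development, checks this numerically for the $5$-adic valuation logarithm on $\mathbb{Q}$; the prime $p=5$ and the sample points are illustrative choices of mine.

```python
from fractions import Fraction

def nu_p(q, p):
    """p-adic valuation logarithm of a nonzero rational q."""
    q = Fraction(q)
    v = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:   # count factors of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # subtract factors of p in the denominator
        den //= p
        v -= 1
    return v

p = 5
# Units of the valuation ring: valuation logarithm zero.
a, b = Fraction(2), Fraction(7)
assert nu_p(a, p) == 0 and nu_p(b, p) == 0
# Inversion preserves valuations of differences on units:
# nu(1/a - 1/b) = nu(b - a).
lhs = nu_p(1/a - 1/b, p)   # 1/2 - 1/7 = 5/14, valuation 1
rhs = nu_p(b - a, p)       # 7 - 2 = 5, valuation 1
print(lhs, rhs)
```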
The following two lemmas will be used in the proof of Theorem \ref{thr:CGRAT}. The first of these, Lemma \ref{lem:CGSLCF}, will be known but we provide a proof in the absence of a reference. The second, Lemma \ref{lem:CGIRC}, may be new since I have not seen it in the literature; however, it could be known to some number theorists.
\begin{lemma}
\label{lem:CGSLCF}
Let $F$ be a complete non-Archimedean field with a nontrivial, discrete valuation and valuation logarithm $\nu$. Let $\pi$ be a prime element and $\mathcal{R}$ be a set of residue class representatives for $F$, as shown in Theorem \ref{thr:CVFSE}. Then, for $X$ a compact Hausdorff space, each $f\in C_{F}(X)$ has a unique expansion, as a series of locally constant $\mathcal{R}$-valued functions, of the form
\begin{equation*}
f=\sum_{i=n}^{\infty}f_{i}\pi^{i},\quad\mbox{for some }n\in\mathbb{Z}.
\end{equation*}
Moreover, for $j\geq n$ and $x,y\in X$ with $\nu(f(x)-f(y))>j\nu(\pi)$, we have $f_{i}(x)=f_{i}(y)$ for all $i$ in the interval $n\leq i\leq j$.
\end{lemma}
\begin{proof}
Let $f\in C_{F}(X)$ and note that since $X$ is compact, $f$ is bounded. Hence there is $n\in\mathbb{Z}$ such that, for all $x\in X$, $\nu(f(x))\geq n\nu(\pi)$. Therefore by allowing terms to be zero where necessary and by using the unique $\pi$-power series expansion over $\mathcal{R}$ for elements of $F^{\times}$, as shown in Theorem \ref{thr:CVFSE}, we have for each $x\in X$
\begin{equation*}
f(x)=\sum_{i=n}^{\infty}f_{i}(x)\pi^{i}\in F.
\end{equation*}
Hence for each $i\geq n$ we have obtained a function $f_{i}:X\rightarrow\mathcal{R}$ and the resulting expansion $f=\sum_{i=n}^{\infty}f_{i}\pi^{i}$ is unique. Now for $j\geq n$ let $x,y\in X$ be such that we have $\nu(f(x)-f(y))>j\nu(\pi)$. If we do not have $f_{k}(x)=f_{k}(y)$ for all $k\geq n$ then let $k\geq n$ be the first integer for which $f_{k}(x)\not=f_{k}(y)$. Therefore $f_{k}(x)$ and $f_{k}(y)$ are representatives in $\mathcal{O}_{F}$ of two different residue classes. Hence $f_{k}(x)-f_{k}(y)\not\in\mathcal{M}_{F}$ showing that $\nu(f_{k}(x)-f_{k}(y))=0$. Therefore by Lemma \ref{lem:CVFEQ} and the definition of $k$ we have
\begin{align*}
k\nu(\pi)=&\nu(f_{k}(x)-f_{k}(y))+\nu(\pi^{k})\\
=&\nu((f_{k}(x)-f_{k}(y))\pi^{k})\\
=&\nu\left((f_{k}(x)-f_{k}(y))\pi^{k}+\sum_{i=k+1}^{\infty}f_{i}(x)\pi^{i}-\sum_{i=k+1}^{\infty}f_{i}(y)\pi^{i}\right)\\
=&\nu\left(\sum_{i=n}^{\infty}f_{i}(x)\pi^{i}-\sum_{i=n}^{\infty}f_{i}(y)\pi^{i}\right)\\
=&\nu(f(x)-f(y))>j\nu(\pi).
\end{align*}
Hence $k>j$ giving $f_{i}(x)=f_{i}(y)$ for all $i$ in the interval $n\leq i\leq j$. Finally we show, for all $j\geq n$, that $f_{j}$ is a locally constant function. For $x\in X$ define the following ball in $F$
\begin{equation*}
B_{j\nu(\pi)}(f(x)):=\{a\in F:\nu(f(x)-a)>j\nu(\pi)\}.
\end{equation*}
Then since $f$ is continuous there exists an open subset $U$ of $X$ with $x\in U$ such that $f(U)\subseteq B_{j\nu(\pi)}(f(x))$. Hence for each $y\in U$ we have $\nu(f(x)-f(y))>j\nu(\pi)$ and so $f_{j}(x)=f_{j}(y)$. In particular $f_{j}$ is constant on $U$ and this completes the proof of Lemma \ref{lem:CGSLCF}.
\end{proof}
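For a constant function the expansion of Lemma \ref{lem:CGSLCF} reduces to the $\pi$-power series over $\mathcal{R}$ for elements of $F$. The short Python sketch below, which is illustrative only, takes $F=\mathbb{Q}_{5}$, $\pi=5$ and $\mathcal{R}=\{0,\ldots,4\}$ as my own choices, computes the digits of integers viewed in $\mathbb{Z}_{5}$, and exhibits the last assertion of the lemma: $\nu(x-y)>j\nu(\pi)$ forces the digits of $x$ and $y$ to agree up to index $j$.

```python
def p_adic_digits(a, p, n):
    """First n digits of the p-adic expansion a = sum_i a_i p^i,
    for an integer a viewed in Z_p, with digits in R = {0,...,p-1}."""
    digits = []
    for _ in range(n):
        d = a % p          # representative of the residue class of a
        digits.append(d)
        a = (a - d) // p   # strip the lowest term and shift down
    return digits

# In Z_5 the constant -1 has the expansion 4 + 4*5 + 4*5^2 + ...
print(p_adic_digits(-1, 5, 6))        # all digits equal 4
# nu(y - x) = 3 > 2, so the digits at indices 0, 1, 2 agree.
x, y = 7, 7 + 5**3
print(p_adic_digits(x, 5, 5), p_adic_digits(y, 5, 5))
```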
Lemma \ref{lem:CGSLCF} has the following corollary which, for locally constant functions, goes slightly further than Theorem \ref{thr:CGGSW} since it does not assume that $X$ is totally disconnected.
\begin{corollary}
\label{cor:CGSLCF}
Let $F$ and $X$ be as in Lemma \ref{lem:CGSLCF} and let $\mathrm{LC}_{F}(X)$ be the set of all locally constant $F$ valued functions defined on $X$. Then $\mathrm{LC}_{F}(X)$ is uniformly dense in $C_{F}(X)$.
\end{corollary}
\begin{proof}
For $f\in C_{F}(X)$ let $f=\sum_{i=n}^{\infty}f_{i}\pi^{i}$ be the expansion from Lemma \ref{lem:CGSLCF}. We note that a finite sum of locally constant functions is locally constant. Hence, for each $m\geq n$, $f_{i\leq m}:=\sum_{i=n}^{m}f_{i}\pi^{i}$ is an element of $\mathrm{LC}_{F}(X)$. Let $\varepsilon\in\mathbb{R}$ and let $m\geq n$ be such that $m>\frac{\varepsilon}{\nu(\pi)}$. Then we have $\inf_{x\in X}\nu(f(x)-f_{i\leq m}(x))=\inf_{x\in X}\nu(\sum_{i=m+1}^{\infty}f_{i}(x)\pi^{i})\geq\nu(\pi^{m+1})>m\nu(\pi)>\varepsilon$ as required.
\end{proof}
Here is the second of the two lemmas that will be used in the proof of Theorem \ref{thr:CGRAT}.
\begin{lemma}
\label{lem:CGIRC}
Let $F$ and $L$ be non-Archimedean fields with $L$ a finite extension of $F$ as a valued field such that the following holds:
\begin{enumerate}
\item[(i)]
we have $\mathbb{Q}\subseteq F$ and the valuation logarithm $\nu$ on $F$ when restricted to $\mathbb{Q}$ is a $p$-adic valuation logarithm;
\item[(ii)]
the residue field $\overline{F}$ is finite and so $\overline{L}$ is also finite;
\item[(iii)]
the elements of $\mbox{Gal}(^{L}/_{F})$ are isometric on $L$, noting that this is automatically satisfied if $F$ is complete.
\end{enumerate}
Then for each $g\in\mbox{Gal}(^{L}/_{F})$ there exists a set $\mathcal{R}_{L,g}\subseteq\mathcal{O}_{L}^{\times}\cup\{0\}$ of residue class representatives for $L$ such that the restriction of $g$ to $\mathcal{R}_{L,g}$ is an endofunction $g|_{\mathcal{R}_{L,g}}:\mathcal{R}_{L,g}\rightarrow\mathcal{R}_{L,g}$.
\end{lemma}
\begin{proof}
Let $\omega$ be the extension of $\nu$ to $L$ and let $\mathcal{R}_{L}$ with $0\in\mathcal{R}_{L}$ be an arbitrary set of residue class representatives for $L$. Fix $g\in\mbox{Gal}(^{L}/_{F})$ and for $a\in\mathcal{O}_{L}^{\times}$ denote the orbit of $a$ with respect to $g$ by
\begin{equation*}
\langle g\rangle(a):=\{g^{(n)}(a):n\in\{1,\cdots,\mbox{ord}(g,a)\}\}.
\end{equation*}
Also denote $\overline{\langle g\rangle(a)}:=\{\overline{g^{(n)}(a)}:n\in\{1,\cdots,\mbox{ord}(g,a)\}\}=\langle \bar{g}\rangle(\bar{a})\subseteq\overline{L}$. We will show that $\mathcal{R}_{L,g}\subseteq\mathcal{O}_{L}^{\times}\cup\{0\}$ can be constructed from $\mathcal{R}_{L}$. Clearly we can let $0$ represent $\bar{0}$ and so include $0$ in $\mathcal{R}_{L,g}$. More generally we need to make sure that:
\begin{enumerate}
\item[(1)]
for each $a'\in\mathcal{R}_{L}$ there is precisely one element $a\in\mathcal{R}_{L,g}$ such that $\bar{a}=\overline{a'}$, that is $a=a'+b$ for some $b\in L$ with $\omega(b)>0$;
\item[(2)]
for each $a\in\mathcal{R}_{L,g}$ we have $g(a)\in\mathcal{R}_{L,g}$.
\end{enumerate}
To this end we will show that the following useful facts hold for Lemma \ref{lem:CGIRC}.
\begin{enumerate}
\item[(a)]
For $a_{1},a_{2}\in\mathcal{O}_{L}^{\times}$ either $\overline{\langle g\rangle(a_{1})}\cap\overline{\langle g\rangle(a_{2})}=\emptyset$ or $\overline{\langle g\rangle(a_{1})}=\overline{\langle g\rangle(a_{2})}$. Clearly, since $\mbox{ord}(g)$ is finite, if $\langle g\rangle(a_{1})\cap\langle g\rangle(a_{2})\not=\emptyset$ then $\langle g\rangle(a_{1})=\langle g\rangle(a_{2})$.
\item[(b)]
Let $a'\in\mathcal{R}_{L}\backslash\{0\}$. Then there exists $a\in\mathcal{O}_{L}^{\times}$ with $\bar{a}=\overline{a'}$ such that if $a_{1},a_{2}\in\langle g\rangle(a)$ with $a_{1}\not=a_{2}$ then $\overline{a_{1}}\not=\overline{a_{2}}$. Further since $g$ is an isometry we have $\omega(a_{1})=0$ for all $a_{1}\in\langle g\rangle(a)$. This ensures that every residue class that has a representative in $\langle g\rangle(a)$ has only one representative in $\langle g\rangle(a)$.
\end{enumerate}
Hence by applying (a) and (b) above we obtain $\mathcal{R}_{L,g}$ as a disjoint union of the orbits of finitely many elements from $\mathcal{O}_{L}^{\times}\cup\{0\}$. Note if $\mathcal{R}_{F}$ is a set of residue class representatives for $F$ and $\mathcal{R}_{F}\subseteq\mathcal{R}_{L}$ then with the above construction we can choose to have $\mathcal{R}_{F}\subseteq\mathcal{R}_{L,g}$ since $g$ restricts to the identity map on $\mathcal{R}_{F}$. Also note that (b) is not in general satisfied for all $a\in\mathcal{O}_{L}^{\times}$ with $\bar{a}=\overline{a'}$. Indeed, in the case of Example \ref{exa:CGEXONE} where $L=\mathbb{Q}_{5}(\sqrt{2})$, for $a=1+5\sqrt{2}$ we have $g(a)=1-5\sqrt{2}\not=a$ and yet $\overline{g(a)}=\bar{a}=\bar{1}$. We will now prove that (a) and (b) above hold.\\
For (a) it is enough to confirm that $\overline{\langle g\rangle(a)}=\langle\bar{g}\rangle(\bar{a})$ for all $a\in\mathcal{O}_{L}^{\times}$. Since $g$ is an isometry, (iii) and (iv) of Remark \ref{rem:CGRAT} are applicable and so we have for each $n\in\mathbb{N}$ that $\overline{g^{(n)}(a)}=\overline{g^{(n)}}(\bar{a})=\bar{g}^{(n)}(\bar{a})$. Hence the result follows.\\
For (b) we first note that, for each $a'\in\mathcal{R}_{L}\backslash\{0\}$, $g$ maps residue class to residue class. That is $g$ restricts to a bijection $g|_{\overline{a'}}:\overline{a'}\rightarrow\overline{g(a')}$ and $g$ also restricts to a bijection $g|_{\overline{g(a')}}:\overline{g(a')}\rightarrow\overline{g^{(2)}(a')}$ and so forth. This is because $g$ restricts to a bijective endofunction on $\mathcal{M}_{L}$ since $g$ has finite order and is an isometry. Now for (b) to hold we need to check that for each $a'\in\mathcal{R}_{L}\backslash\{0\}$ there is an $a\in\overline{a'}$ such that when the forward orbit of $a$, with respect to $g$, returns to a residue class it has visited before then it returns to the same element of that residue class. Let $a'\in\mathcal{R}_{L}\backslash\{0\}$ and let $n$ be the first element of $\{1,2,\cdots,\mbox{ord}(g,a')\}$ such that there exists an $i\in\{0,1,\cdots,n-1\}$ with $g^{(i)}(a')$ in the same residue class as $g^{(n)}(a')$. Hence $\omega(g^{(i)}(a')-g^{(n)}(a'))>0$ and since $g$ is an isometry we have $\omega(a'-g^{(n-i)}(a'))>0$ giving $i=0$ by the definition of $n$. Therefore $g^{(n)}(a')=a'+b$ for some $b\in\mathcal{M}_{L}$ and $g^{(n)}$ restricts to $g^{(n)}|_{\overline{a'}}:\overline{a'}\rightarrow\overline{a'}$.\\
Hence for (b) to hold it is enough to show that there is $a\in\overline{a'}$ which is a fixed point with respect to $g^{(n)}$. To this end we more generally show that for each $g\in\mbox{Gal}(^{L}/_{F})$ with $g|_{\overline{a'}}:\overline{a'}\rightarrow\overline{a'}$ there is a fixed point $a\in\overline{a'}$ of $g$. So for such a $g\in\mbox{Gal}(^{L}/_{F})$ let $m:=\mbox{ord}(g,a')$. Now recall that we have $\mathbb{Q}\subseteq F$ and that $\nu$ on $F$ when restricted to $\mathbb{Q}$ is a $p$-adic valuation logarithm for some prime $p$. Hence we have two cases, $p\nmid m$ and $p|m$.\\
Suppose $p\nmid m$. We have $g(a')=a'+b$ for some $b\in\mathcal{M}_{L}$. Then we have
\begin{align*}
g^{(2)}(a')=&g(a'+b)=g(a')+g(b)=a'+b+g(b),\\
g^{(3)}(a')=&g(a'+b+g(b))=g(a')+g(b)+g^{(2)}(b)=a'+b+g(b)+g^{(2)}(b),\\
\vdots &\\
g^{(m-1)}(a')=&a'+b+g(b)+g^{(2)}(b)+\cdots+g^{(m-2)}(b).
\end{align*}
Hence consider
\begin{align*}
a:=&\frac{1}{m}(a'+g(a')+g^{(2)}(a')+\cdots+g^{(m-1)}(a'))\\
=&\frac{1}{m}(ma'+(m-1)b+(m-2)g(b)+\cdots+(m-(m-1))g^{(m-2)}(b)).
\end{align*}
Since $\mathbb{Q}\subseteq F$ we have $g(\frac{1}{m})=\frac{1}{m}$ giving $g(a)=a$. Moreover since $p\nmid m$ we have $\omega(m^{-1})=\nu(m^{-1})=-\nu(m)=0$. Therefore
\begin{align*}
\omega(a-a')=&\omega\left(\frac{1}{m}((m-1)b+(m-2)g(b)+(m-3)g^{(2)}(b)+\cdots+g^{(m-2)}(b))\right)\\
=&0+\omega((m-1)b+(m-2)g(b)+(m-3)g^{(2)}(b)+\cdots+g^{(m-2)}(b))\\
\geq &\min\{\omega((m-1)b),\omega((m-2)g(b)),\omega((m-3)g^{(2)}(b)),\cdots,\omega(g^{(m-2)}(b))\}\\
=&\omega(g^{(m-2)}(b))=\omega(b)>0.
\end{align*}
Hence $a$ is an element of $\overline{a'}$ with $g(a)=a$ as required.\\
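In the setting of Example \ref{exa:CGEXONE}, with $L=\mathbb{Q}_{5}(\sqrt{2})$, the averaging construction above can be made concrete. The following Python sketch, illustrative only, models elements of $L$ as pairs $(x,y)$ standing for $x+y\sqrt{2}$; this pair encoding is my own. It checks that for $a'=1+5\sqrt{2}$, where $m=\mbox{ord}(g,a')=2$ and $p=5\nmid m$, the average $a=\frac{1}{2}(a'+g(a'))=1$ is a $g$-fixed representative of $\overline{a'}$.

```python
from fractions import Fraction

# Elements of Q_5(sqrt(2)) as pairs (x, y) meaning x + y*sqrt(2);
# this encoding is ours, chosen to mirror Example CGEXONE.
def g(e):
    """The nontrivial Galois automorphism: sqrt(2) -> -sqrt(2)."""
    x, y = e
    return (x, -y)

def avg(e1, e2):
    """Midpoint (e1 + e2)/2, computed componentwise."""
    return ((e1[0] + e2[0]) / 2, (e1[1] + e2[1]) / 2)

a_prime = (Fraction(1), Fraction(5))   # a' = 1 + 5*sqrt(2)
a = avg(a_prime, g(a_prime))           # a = (a' + g(a'))/2
print(a == (1, 0))   # True: the average is the fixed point 1
print(g(a) == a)     # True: a is fixed by g
# a - a' = -5*sqrt(2) lies in M_L, so a represents the class of a'.
```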
Suppose $p|m$. Then there are $n,m'\in\mathbb{N}$ such that $m=p^{n}m'$ with $p\nmid m'$. Hence $\mbox{ord}(g^{(p^{n-1}m')},a')=p$. Now suppose that the following holds.
\begin{enumerate}
\item[(b2)]
Let $a_{0}\in\mathcal{O}_{L}^{\times}$. Then for each $g'\in\mbox{Gal}(^{L}/_{F})$ with $g'|_{\overline{a_{0}}}:\overline{a_{0}}\rightarrow\overline{a_{0}}$ and $\mbox{ord}(g',a_{0})=p$ there is a fixed point $a_{1}\in\overline{a_{0}}$ of $g'$.
\end{enumerate}
Then by applying (b2) there is $a_{1}\in\overline{a'}$ which is a fixed point of $g^{(p^{n-1}m')}$ and so we have $\mbox{ord}(g,a_{1})|p^{n-1}m'$. By repeated application of (b2) we can obtain an element $a_{n}\in\overline{a'}$ such that $\mbox{ord}(g,a_{n})|m'$. Now, since the set $\mathcal{R}_{L}$ was an arbitrary set of residue class representatives for $L$ and $a_{n}$ represents $\overline{a'}$, we can apply the case $p\nmid m$ for $a_{n}$ to obtain $a\in\overline{a_{n}}=\overline{a'}$ with $g(a)=a$ as required.\\
It remains to show that (b2) holds. So let $a_{0}\in\mathcal{O}_{L}^{\times}$ and $g'\in\mbox{Gal}(^{L}/_{F})$ satisfy the conditions of (b2). Hence for some $b\in\mathcal{M}_{L}$ we have
\begin{align*}
g'(a_{0})=&a_{0}+b,\\
g'^{(2)}(a_{0})=&a_{0}+b+g'(b),\\
\vdots &\\
g'^{(p-1)}(a_{0})=&a_{0}+b+g'(b)+\cdots+g'^{(p-2)}(b).
\end{align*}
Define $b_{1}:=b$, $b_{2}:=b+g'(b)$, $\cdots$, $b_{p-1}:=b+g'(b)+\cdots+g'^{(p-2)}(b)$ and note that for all $i\in\{1,\cdots,p-1\}$ we have
\begin{equation}
\label{equ:CGWBI}
\omega(b_{i})\geq\min\{\omega(b),\omega(g'(b)),\cdots,\omega(g'^{(p-2)}(b))\}=\omega(b)>0.
\end{equation}
Now since $\omega|_{\mathbb{Q}}$ is the $p$-adic valuation logarithm $\nu_{p}$ we have $\mathbb{F}_{p}\subseteq\overline{L}$. Therefore since $\overline{L}$ is a finite field we have $\#\overline{L}=p^{k}$ for some $k\in\mathbb{N}$. Hence we consider
\begin{align*}
a_{1}:=&(a_{0}g'(a_{0})g'^{(2)}(a_{0})\cdots g'^{(p-1)}(a_{0}))^{p^{k-1}}\\
=&(a_{0}(a_{0}+b_{1})(a_{0}+b_{2})\cdots(a_{0}+b_{p-1}))^{p^{k-1}}\\ =&(a_{0}^{p}+a_{0}b_{1}(a_{0}+b_{2})\cdots(a_{0}+b_{p-1})+a_{0}^{2}b_{2}(a_{0}+b_{3})\cdots(a_{0}+b_{p-1})+\cdots\\
&\cdots+a_{0}^{p-1}b_{p-1})^{p^{k-1}}.
\end{align*}
Now, by Lemma \ref{lem:CVFEQ} and (\ref{equ:CGWBI}), we have
\begin{align*}
\omega(a_{0}b_{1}(a_{0}+b_{2})\cdots(a_{0}+b_{p-1}))=&\omega(a_{0})+\omega(b_{1})+\omega(a_{0}+b_{2})+\cdots\\
&\cdots+\omega(a_{0}+b_{p-1})\\
=&0+\omega(b_{1})+0+\cdots+0\\
\geq&\omega(b)>0.
\end{align*}
The same inequality holds for later terms in the above expansion of $a_{1}$, hence for
\begin{equation*}
c:=a_{0}b_{1}(a_{0}+b_{2})\cdots(a_{0}+b_{p-1})+a_{0}^{2}b_{2}(a_{0}+b_{3})\cdots(a_{0}+b_{p-1})+\cdots+a_{0}^{p-1}b_{p-1}
\end{equation*}
we have $\omega(c)>0$. This gives
\begin{equation*}
a_{1}=(a_{0}^{p}+c)^{p^{k-1}}=a_{0}^{p^{k}}+\sum_{i=1}^{p^{k-1}}\binom{p^{k-1}}{i}a_{0}^{p(p^{k-1}-i)}c^{i}
\end{equation*}
such that for each $i\in\{1,\cdots,p^{k-1}\}$ we have
\begin{align*}
\omega\left(\binom{p^{k-1}}{i}a_{0}^{p(p^{k-1}-i)}c^{i}\right)=&\omega\left(\binom{p^{k-1}}{i}\right)+p(p^{k-1}-i)\omega(a_{0})+i\omega(c)\\
=&\omega\left(\binom{p^{k-1}}{i}\right)+0+i\omega(c)>0
\end{align*}
noting that $\omega\left(\binom{p^{k-1}}{i}\right)\geq0$ since $\omega|_{\mathbb{Q}}$ is the $p$-adic valuation logarithm $\nu_{p}$. Hence for $c':=\sum_{i=1}^{p^{k-1}}\binom{p^{k-1}}{i}a_{0}^{p(p^{k-1}-i)}c^{i}$ we have $\omega(c')>0$. Further since $\#\overline{L}^{\times}=p^{k}-1$ we have $\overline{a_{0}}^{p^{k}-1}=\bar{1}$ by Lagrange's theorem. In particular $\overline{a_{1}}=\overline{a_{0}^{p^{k}}}+\overline{c'}=\overline{a_{0}}^{p^{k}}+\bar{0}=\overline{a_{0}}$ giving $a_{1}\in\overline{a_{0}}$ and since $a_{1}=(a_{0}g'(a_{0})g'^{(2)}(a_{0})\cdots g'^{(p-1)}(a_{0}))^{p^{k-1}}$ with $\mbox{ord}(g',a_{0})=p$ we have $g'(a_{1})=a_{1}$ as required. This completes the proof of Lemma \ref{lem:CGIRC}.
\end{proof}
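In the simplest case $L=F=\mathbb{Q}_{p}$ with $g$ the identity, the map $a_{0}\mapsto a_{0}^{p^{k}}$ appearing in the proof of (b2) is the classical route to the Teichm\"uller representatives: iterating $a\mapsto a^{p}$ modulo $p^{n}$ stabilises at the unique representative $t$ of $\bar{a}$ satisfying $t^{p}=t$. The Python sketch below is an illustration outside the formal development; the parameters $p=5$ and $n=4$ are my choices.

```python
def teichmuller(a, p, n):
    """Iterate a -> a^p mod p^n until it stabilises.  For residue
    field F_p (i.e. k = 1) this mirrors the map a_0 -> a_0^(p^k)
    used in the proof of (b2); the limit t satisfies t^p = t mod p^n
    and represents the residue class of a."""
    m = p ** n
    t = a % m
    while pow(t, p, m) != t:
        t = pow(t, p, m)
    return t

p, n = 5, 4
t = teichmuller(2, p, n)
print(t)                      # 182, the Teichmuller lift of 2 mod 5^4
print(t % p == 2)             # True: t represents the class of 2
print(pow(t, p, p**n) == t)   # True: t is fixed by a -> a^p
```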
We will now prove Theorem \ref{thr:CGRAT}.
\begin{proof}[Proof of Theorem \ref{thr:CGRAT}]
For $f\in\mathcal{O}(X,\tau,g)$ let $\tilde{f}:=f+\mathcal{J}(X,\tau,g)$ denote the quotient class to which $f$ belongs and let $\pi$ be a prime element of $L$. Note, $\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)$ is endowed with the usual quotient operations and the quotient norm which in this case gives the trivial valuation. We begin by establishing a set $\mathcal{R}(X,\tau,g)\subseteq\mathcal{O}(X,\tau,g)$ of quotient class representatives for $\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)$. By Lemma \ref{lem:CGIRC} there is a set $\mathcal{R}_{L,g}$ of residue class representatives for $\overline{L}$ such that $g|_{\mathcal{R}_{L,g}}:\mathcal{R}_{L,g}\rightarrow\mathcal{R}_{L,g}$. Furthermore by Lemma \ref{lem:CGSLCF} every $f\in C_{L}(X)$ has a unique expansion of the form
\begin{equation}
\label{equ:CGXRL}
f=\sum_{i=n}^{\infty}f_{i}\pi^{i},\quad\mbox{for some }n\in\mathbb{Z},
\end{equation}
where, for each $i\geq n$, $f_{i}:X\rightarrow\mathcal{R}_{L,g}$ is a locally constant function. Hence, using expansion (\ref{equ:CGXRL}), for $f\in\mathcal{O}(X,\tau,g)$ we have
\begin{equation*}
f_{0}\circ\tau+h\circ\tau=f\circ\tau=g\circ f=g\circ f_{0}+g\circ h
\end{equation*}
where $h:=\sum_{i=1}^{\infty}f_{i}\pi^{i}$ with $\omega(h(x))>0$ for all $x\in X$. Now note that $\tau:X\rightarrow X$ and $g|_{\mathcal{R}_{L,g}}:\mathcal{R}_{L,g}\rightarrow\mathcal{R}_{L,g}$ give $f_{0}\circ\tau:X\rightarrow\mathcal{R}_{L,g}$ and $g\circ f_{0}:X\rightarrow\mathcal{R}_{L,g}$. Further since $g$ is an isometry on $L$ we have $\omega(h\circ\tau(x))>0$ and $\omega(g\circ h(x))>0$ for all $x\in X$. Hence, since the expansion of $f\circ\tau$ in the form of (\ref{equ:CGXRL}) is unique, we have $f_{0}\circ\tau=g\circ f_{0}$ and $h\circ\tau=g\circ h$. Moreover $f_{0}$ is continuous, being locally constant, and, for $x\in X$,
\begin{equation*}
\omega(f_{0}(x))=\left\{ \begin{array} {l@{\quad\mbox{if}\quad}l}
\infty & f_{0}(x)=0 \\
0 & f_{0}(x)\not=0.
\end{array} \right.
\end{equation*}
In particular we have $f_{0}\in\mathcal{O}(X,\tau,g)$. Hence we also have $h=f-f_{0}\in\mathcal{O}(X,\tau,g)$ since $\mathcal{O}(X,\tau,g)$ is a ring. But since $\omega(h(x))>0$ for all $x\in X$ we in fact have $h\in\mathcal{J}(X,\tau,g)$ giving
\begin{equation*}
\tilde{f}=\widetilde{f_{0}}.
\end{equation*}
Now by the uniqueness of expansions in the form of (\ref{equ:CGXRL}) and since $\mathcal{J}(X,\tau,g)$ is an ideal we have for any other element $f'=f'_{0}+h'\in\mathcal{O}(X,\tau,g)$ that $\widetilde{f'}=\tilde{f}$ if and only if $f'_{0}=f_{0}$. Hence using expansion (\ref{equ:CGXRL}) we define
\begin{equation*}
\mathcal{R}(X,\tau,g):=\left\{f_{0}:f=\sum_{i=0}^{\infty}f_{i}\pi^{i}\in\mathcal{O}(X,\tau,g)\right\}
\end{equation*}
noting that $0\in\mathcal{R}(X,\tau,g)$ since $0\in\mathcal{R}_{L,g}$ and $0\in\mathcal{O}(X,\tau,g)$. We now define a map $\phi:\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)\rightarrow C(X,\tau,\bar{g})$ by
\begin{equation*}
\phi(\tilde{f})=\phi(\widetilde{f_{0}})=\phi(f_{0}+\mathcal{J}(X,\tau,g)):=\overline{f_{0}}
\end{equation*}
where for $x\in X$ we define $\overline{f_{0}}(x):=\overline{f_{0}(x)}=f_{0}(x)+\mathcal{M}_{L}$. We show that $\phi$ is a ring isomorphism by checking that:
\begin{enumerate}
\item[(i)]
for all $\tilde{f}\in\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)$ we have $\overline{f_{0}}\in C(X,\tau,\bar{g})$;
\item[(ii)]
$\phi$ is multiplicative, linear and $\phi(\tilde{1})=\bar{1}$;
\item[(iii)]
$\mbox{ker}(\phi)=\{\tilde{0}\}$ ensuring that $\phi$ is injective;
\item[(iv)]
$\phi$ is surjective.
\end{enumerate}
For (i), since $f_{0}$ is a locally constant function on $X$ we have $\overline{f_{0}}\in C_{\overline{L}}(X)$. Furthermore we have already shown above that $f_{0}\circ\tau=g\circ f_{0}$. Hence for each $x\in X$ we have $\overline{f_{0}}(\tau(x))=\overline{f_{0}(\tau(x))}=\overline{g(f_{0}(x))}=\bar{g}\left(\overline{f_{0}(x)}\right)=\bar{g}\left(\overline{f_{0}}(x)\right)$ and so $\overline{f_{0}}\in C(X,\tau,\bar{g})$.\\
For (ii), let $\tilde{f},\widetilde{f'}\in\mathcal{O}(X,\tau,g)/\mathcal{J}(X,\tau,g)$. We show that $\phi$ is multiplicative. Set $h:=f_{0}f'_{0}$ giving $f_{0}f'_{0}=h_{0}+h'$ with $h'\in\mathcal{J}(X,\tau,g)$ and $f_{0},f'_{0},h_{0}\in\mathcal{R}(X,\tau,g)$. Hence for each $x\in X$ we have $h'(x)\in\mathcal{M}_{L}$. Therefore for each $x\in X$ we have
\begin{align*}
\phi(\tilde{f}\widetilde{f'})(x)=\phi(\widetilde{f_{0}}\widetilde{f'_{0}})(x)=&\phi(\widetilde{f_{0}f'_{0}})(x)\\
=&\phi(\widetilde{h_{0}})(x)\\
=&\overline{h_{0}}(x)\\
=&\overline{h_{0}(x)}\\
=&\overline{h_{0}(x)+h'(x)}\\
=&\overline{f_{0}(x)f'_{0}(x)}\\
=&\overline{f_{0}}(x)\overline{f'_{0}}(x)\\
=&(\phi(\widetilde{f_{0}})\phi(\widetilde{f'_{0}}))(x)=(\phi(\tilde{f})\phi(\widetilde{f'}))(x).
\end{align*}
Linearity, $\phi(\tilde{f}+\widetilde{f'})=\phi(\tilde{f})+\phi(\widetilde{f'})$, is shown in much the same way. Showing that $\phi(\tilde{1})=\bar{1}$ is almost immediate. Let $1_{0}$ be the representative in $\mathcal{R}_{L,g}$ of $\bar{1}$. Then we have $1_{0}\in\mathcal{R}(X,\tau,g)$ giving $\phi(\tilde{1})=\phi(\widetilde{1_{0}})=\overline{1_{0}}=\bar{1}$ as required. In fact we can always choose $\mathcal{R}_{L,g}$ such that $1_{0}=1$.\\
For (iii), let $f\in\mathcal{O}(X,\tau,g)$. If for all $x\in X$ we have $\phi(\widetilde{f_{0}})(x)=\overline{f_{0}(x)}=\bar{0}$ then $\omega(f_{0}(x))>0$ for all $x\in X$ giving $f_{0}\in\mathcal{J}(X,\tau,g)$. Hence $\widetilde{f_{0}}=\tilde{0}$ and so $\mbox{ker}(\phi)=\{\tilde{0}\}$. In fact since $f_{0}$ is an element of $\mathcal{R}(X,\tau,g)$ we have $f_{0}=0$ in this case.\\
For (iv), given $\bar{f}\in C(X,\tau,\bar{g})$ and $x\in X$ we have $\bar{f}(x)=a_{0}(x)+\mathcal{M}_{L}$ for some element $a_{0}(x)$ of $\mathcal{R}_{L,g}$ since $\mathcal{R}_{L,g}$ is a set of residue class representatives for $\overline{L}$. Since the valuation on $\overline{L}$ is the trivial valuation, $\bar{f}$ is a locally constant function and hence, when viewed as a function on $X$, so is $a_{0}$. Therefore $a_{0}$ is a continuous $\mathcal{O}_{L}$ valued function noting that $\mathcal{R}_{L,g}\subseteq\mathcal{O}_{L}$. Further since $\bar{f}\in C(X,\tau,\bar{g})$ we have for each $x\in X$ that
\begin{equation*}
\overline{a_{0}(\tau(x))}=\bar{f}(\tau(x))=\bar{g}(\bar{f}(x))=\bar{g}\left(\overline{a_{0}(x)}\right)=\overline{g(a_{0}(x))}.
\end{equation*}
Now because $g|_{\mathcal{R}_{L,g}}:\mathcal{R}_{L,g}\rightarrow\mathcal{R}_{L,g}$ we have $g(a_{0}(x))\in\mathcal{R}_{L,g}$ giving $a_{0}(\tau(x))=g(a_{0}(x))$ for all $x\in X$. Hence, as a function on $X$, $a_{0}\in\mathcal{O}(X,\tau,g)$ and so $a_{0}\in\mathcal{R}(X,\tau,g)$ with $\phi(\widetilde{a_{0}})=\overline{a_{0}}=\bar{f}$ as required. Finally, since the valuation on $\overline{L}$ is the trivial valuation, the sup norm on $C(X,\tau,\bar{g})$ is the trivial norm. Therefore it is immediate that $\phi$ is an isometry and this completes the proof of Theorem \ref{thr:CGRAT}.
\end{proof}
The last result of this section follows easily from the preceding results.
\begin{corollary}
\label{cor:CGLCUD}
Let $F$, $L$ and $g\in\mbox{Gal}(^{L}/_{F})$ conform to Lemma \ref{lem:CGIRC} with $F$ and $L$ having complete nontrivial discrete valuations and let $C(X,\tau,g)$ be a basic $^{L}/_{L^{g}}$ function algebra. Further let $\mathcal{R}(X,\tau,g)\subseteq\mathcal{O}(X,\tau,g)$ be the subset of all locally constant $\mathcal{R}_{L,g}$ valued functions. If there is a prime element $\pi$ of $L$ such that $g(\pi)=\pi$ then each $f\in C(X,\tau,g)\backslash\{0\}$ has a unique series expansion of the form
\begin{equation*}
f=\sum_{i=n}^{\infty}f_{i}\pi^{i},\quad\mbox{for some }n\in\mathbb{Z},
\end{equation*}
where for each $i\geq n$ we have $f_{i}\in\mathcal{R}(X,\tau,g)$. In particular the subset of all locally constant functions, $\mathrm{LC}(X,\tau,g)\subseteq C(X,\tau,g)$, is uniformly dense in $C(X,\tau,g)$.
\end{corollary}
\begin{remark}
\label{rem:CGLCUD}
For $L$ an unramified extension of $F$, every prime element $\pi\in F$ is a prime element of $L$ with $g(\pi)=\pi$. In particular Corollary \ref{cor:CGLCUD} holds when $L$ is a finite unramified extension of $\mathbb{Q}_{p}$ as is the case for examples \ref{exa:CGEXONE} and \ref{exa:CGEXTWO}.
\end{remark}
\begin{proof}[Proof of Corollary \ref{cor:CGLCUD}]
Let $f$ be an element of $C(X,\tau,g)\backslash\{0\}$ and let $\pi$ be a prime element of $L$ with $g(\pi)=\pi$. By Lemma \ref{lem:CGSLCF} $f$ has a unique series expansion of the form
\begin{equation*}
f=\sum_{i=n}^{\infty}f_{i}\pi^{i},\quad\mbox{for some }n\in\mathbb{Z},
\end{equation*}
with $f_{n}\not=0$ and $f_{i}:X\rightarrow\mathcal{R}_{L,g}$ a locally constant function for all $i\geq n$. Hence
\begin{equation*}
\sum_{i=n}^{\infty}(f_{i}\circ\tau)\pi^{i}=f\circ\tau=g\circ f=\sum_{i=n}^{\infty}(g\circ f_{i})(g(\pi))^{i}=\sum_{i=n}^{\infty}(g\circ f_{i})\pi^{i}.
\end{equation*}
Therefore since $g$ restricts to an endofunction on $\mathcal{R}_{L,g}$ and by the uniqueness of the expansion we have $f_{i}\circ\tau=g\circ f_{i}$ for all $i\geq n$. Hence $f_{i}$ is an element of $\mathcal{R}(X,\tau,g)$ for all $i\geq n$ and for each $m\in\mathbb{N}$ we have $\sum_{i=n}^{n+m-1}f_{i}\pi^{i}\in C(X,\tau,g)$. Finally $\left(\sum_{i=n}^{n+m-1}f_{i}\pi^{i}\right)_{m}$ is a sequence of locally constant functions which converges uniformly to $f$ as required.
\end{proof}
This brings us to the end of Chapter \ref{cha:CG}. In the next chapter we will see that $^{L}/_{L^{g}}$ function algebras have a part to play in representation theory.
\chapter[Representation theory]{Representation theory}
\label{cha:RT}
The first section of this chapter introduces several results from the theory of Banach rings and Banach $F$-algebras that we will use later in the chapter. These results have been taken from \cite[Ch1]{Berkovich}. However I have provided a thorough proof of each result, giving significantly more detail than \cite{Berkovich}, since some of them may not be widely known. The second section begins by recalling which Banach $F$-algebras can be represented by complex uniform algebras or real function algebras in the Archimedean setting; one such result in the non-Archimedean setting, provided by \cite{Berkovich}, is also noted. We then develop this theory further by identifying a large class of Banach $F$-algebras that can be represented by $^{L}/_{L^{g}}$ function algebras. The resulting representation theorem is the main result of interest in this chapter, and the rest of the chapter is given over to its proof.
\section{Further Banach rings and Banach {\it F}-algebras}
\label{sec:RTBR}
Since the definition of a Banach ring was given in Definition \ref{def:FAABA} we begin with the first lemma.
\begin{lemma}
\label{lem:RTRRT}
Let $R$ be a Banach ring and let $r\in\mathbb{R}$ be positive. Define
\begin{equation*}
R\langle r^{-1}T\rangle:=\{f=\sum_{i=0}^{\infty}a_{i}T^{i}:a_{i}\in R\mbox{ and }\sum_{i=0}^{\infty}\|a_{i}\|_{R}r^{i}<\infty\}.
\end{equation*}
Then with the Cauchy product and usual addition:
\begin{enumerate}
\item[(i)]
we have that $R\langle r^{-1}T\rangle$ is a Banach ring with respect to the norm
\begin{equation*}
\|f\|_{R,r}:=\sum_{i=0}^{\infty}\|a_{i}\|_{R}r^{i};
\end{equation*}
\item[(ii)]
for $a\in R$ we have $1-aT$ invertible in $R\langle r^{-1}T\rangle$ if and only if $\sum_{i=0}^{\infty}\|a^{i}\|_{R}r^{i}<\infty$.
\end{enumerate}
\end{lemma}
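Before the proof, the submultiplicativity $\|f_{1}f_{2}\|_{R,r}\leq\|f_{1}\|_{R,r}\|f_{2}\|_{R,r}$ asserted in (i) can be sanity-checked numerically. The Python sketch below is illustrative only: it takes $R=\mathbb{R}$ with the absolute value as the Banach-ring norm, truncates the series to polynomials, and uses sample coefficients of my own choosing.

```python
# Numerical check of submultiplicativity of the weighted norm
# ||f||_{R,r} = sum_i |a_i| r^i for R = R with the absolute value.
def norm(coeffs, r):
    """Weighted norm of a truncated series with coefficient list coeffs."""
    return sum(abs(a) * r**i for i, a in enumerate(coeffs))

def cauchy_product(f, g):
    """Coefficients of f*g: c_i = sum_{k<=i} a_k b_{i-k}."""
    c = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return c

r = 0.5
f1 = [1.0, -2.0, 3.0]   # 1 - 2T + 3T^2
f2 = [0.5, 4.0]         # 0.5 + 4T
prod = cauchy_product(f1, f2)
# Cancellation in the Cauchy product makes the inequality strict here.
print(norm(prod, r) <= norm(f1, r) * norm(f2, r))  # True
```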
\begin{proof}
For (i), let $f_{1}=\sum_{i=0}^{\infty}a_{i}T^{i}$ and $f_{2}=\sum_{i=0}^{\infty}b_{i}T^{i}$ be elements of $R\langle r^{-1}T\rangle$. Then
\begin{align*}
\|f_{1}+f_{2}\|_{R,r}=&\left\|\sum_{i=0}^{\infty}(a_{i}+b_{i})T^{i}\right\|_{R,r}\\
=&\sum_{i=0}^{\infty}\|a_{i}+b_{i}\|_{R}r^{i}\\
\leq&\sum_{i=0}^{\infty}\left(\|a_{i}\|_{R}+\|b_{i}\|_{R}\right)r^{i}\\
=&\sum_{i=0}^{\infty}\|a_{i}\|_{R}r^{i}+\sum_{i=0}^{\infty}\|b_{i}\|_{R}r^{i}\\
=&\|f_{1}\|_{R,r}+\|f_{2}\|_{R,r}<\infty
\end{align*}
showing that $R\langle r^{-1}T\rangle$ is closed under addition and that the triangle inequality holds for $\|\cdot\|_{R,r}$. Clearly $\|f\|_{R,r}=0$ if and only if $f=0$. Further
\begin{align*}
\|f_{1}f_{2}\|_{R,r}=&\left\|\sum_{i=0}^{\infty}\left(\sum_{k=0}^{i}a_{k}b_{i-k}\right)T^{i}\right\|_{R,r}\\
=&\sum_{i=0}^{\infty}\left\|\sum_{k=0}^{i}a_{k}b_{i-k}\right\|_{R}r^{i}\\
\leq&\sum_{i=0}^{\infty}\left(\sum_{k=0}^{i}\|a_{k}\|_{R}\|b_{i-k}\|_{R}\right)r^{i}\\
=&\left(\sum_{i=0}^{\infty}\|a_{i}\|_{R}r^{i}\right)\left(\sum_{i=0}^{\infty}\|b_{i}\|_{R}r^{i}\right),\mbox{ by Mertens' Theorem, see \cite[p204]{Apostol}}\\
=&\|f_{1}\|_{R,r}\|f_{2}\|_{R,r}<\infty
\end{align*}
showing that $R\langle r^{-1}T\rangle$ is closed under multiplication and $\|\cdot\|_{R,r}$ is sub-multiplicative. Furthermore we have $1_{R,r}=1_{R}T^{0}$ which gives $\|1_{R,r}\|_{R,r}=\|1_{R}\|_{R}r^{0}=1$ and similarly $\|-1_{R,r}\|_{R,r}=\|-1_{R}\|_{R}r^{0}=1$.\\
We now show that $R\langle r^{-1}T\rangle$ is complete. Let $\left(\sum_{i=0}^{\infty}a_{i,n}T^{i}\right)_{n}$ be a Cauchy sequence in $R\langle r^{-1}T\rangle$. Then for $k\in \mathbb{N}_{0}$ and $\varepsilon>0$ there exists $M\in\mathbb{N}$ such that for all $m,m'\geq M$ we have $\|\sum_{i=0}^{\infty}a_{i,m}T^{i}-\sum_{i=0}^{\infty}a_{i,m'}T^{i}\|_{R,r}=\sum_{i=0}^{\infty}\|a_{i,m}-a_{i,m'}\|_{R}r^{i}<\varepsilon r^{k}$. Hence for all $m,m'\geq M$ we have $\|a_{k,m}-a_{k,m'}\|_{R}<\varepsilon$ and so for all $k\in \mathbb{N}_{0}$, $(a_{k,n})_{n}$ is a Cauchy sequence in $R$. Since $R$ is a Banach ring $(a_{k,n})_{n}$ converges to some $b_{k}\in R$.\\
We show that $\sum_{i=0}^{\infty}b_{i}T^{i}$ is an element of $R\langle r^{-1}T\rangle$. Let $\varepsilon_{0}>0$. Then there exists $M\in\mathbb{N}$ such that for all $m\geq M$ we have
\begin{align*}
\left\|\sum_{i=0}^{\infty}a_{i,m}T^{i}\right\|_{R,r}\leq&\left\|\sum_{i=0}^{\infty}a_{i,m}T^{i}-\sum_{i=0}^{\infty}a_{i,M}T^{i}\right\|_{R,r}+\left\|\sum_{i=0}^{\infty}a_{i,M}T^{i}\right\|_{R,r}\\
<&\varepsilon_{0}+\left\|\sum_{i=0}^{\infty}a_{i,M}T^{i}\right\|_{R,r}<\infty.
\end{align*}
Now let $N\in\mathbb{N}_{0}$ and $\varepsilon>0$. Since for each $k\in\mathbb{N}_{0}$, $(a_{k,n})_{n}$ is a Cauchy sequence in $R$ with limit $b_{k}$, there is $M'\in\mathbb{N}$ such that for all $m'\geq M'$ we have $\sum_{i=0}^{N}\|b_{i}-a_{i,m'}\|_{R}r^{i}<\varepsilon$. Hence letting $m_{0}\geq\max\{M,M'\}$ gives
\begin{align*}
\left\|\sum_{i=0}^{N}b_{i}T^{i}\right\|_{R,r}=&\left\|\sum_{i=0}^{N}b_{i}T^{i}-\sum_{i=0}^{N}a_{i,m_{0}}T^{i}+\sum_{i=0}^{N}a_{i,m_{0}}T^{i}\right\|_{R,r}\\
\leq&\left\|\sum_{i=0}^{N}(b_{i}-a_{i,m_{0}})T^{i}\right\|_{R,r}+\left\|\sum_{i=0}^{\infty}a_{i,m_{0}}T^{i}\right\|_{R,r}\\
<&\varepsilon+\varepsilon_{0}+\left\|\sum_{i=0}^{\infty}a_{i,M}T^{i}\right\|_{R,r}.
\end{align*}
Since $\varepsilon>0$ was arbitrary we have $\|\sum_{i=0}^{N}b_{i}T^{i}\|_{R,r}\leq\varepsilon_{0}+\|\sum_{i=0}^{\infty}a_{i,M}T^{i}\|_{R,r}$. Since this holds for each $N\in\mathbb{N}_{0}$ we have $\|\sum_{i=0}^{\infty}b_{i}T^{i}\|_{R,r}\leq\varepsilon_{0}+\|\sum_{i=0}^{\infty}a_{i,M}T^{i}\|_{R,r}$ giving $\sum_{i=0}^{\infty}b_{i}T^{i}\in R\langle r^{-1}T\rangle$ as required.\\
Let $\varepsilon>0$. We will show, for large enough $n\in\mathbb{N}$, that $\|\sum_{i=0}^{\infty}a_{i,n}T^{i}-\sum_{i=0}^{\infty}b_{i}T^{i}\|_{R,r}<\varepsilon$ and so $R\langle r^{-1}T\rangle$ is complete. Since $\left(\sum_{i=0}^{\infty}a_{i,n}T^{i}\right)_{n}$ is a Cauchy sequence there exists $M_{1}\in\mathbb{N}$ such that for all $m,n\geq M_{1}$ we have $\|\sum_{i=0}^{\infty}(a_{i,n}-a_{i,m})T^{i}\|_{R,r}<\varepsilon/4$.\\
Let $n\geq M_{1}$. Since $\sum_{i=0}^{\infty}a_{i,n}T^{i}$ and $\sum_{i=0}^{\infty}b_{i}T^{i}$ are elements of $R\langle r^{-1}T\rangle$ there exists $N\in\mathbb{N}$ such that $\|\sum_{i=N+1}^{\infty}a_{i,n}T^{i}\|_{R,r}<\varepsilon/4$ and $\|\sum_{i=N+1}^{\infty}b_{i}T^{i}\|_{R,r}<\varepsilon/4$.\\
Since for each $i\in\mathbb{N}_{0}$, $(a_{i,m})_{m}$ is a Cauchy sequence in $R$ with limit $b_{i}$, there is $M_{2}\in\mathbb{N}$ such that for all $m\geq M_{2}$ we have $\|\sum_{i=0}^{N}(a_{i,m}-b_{i})T^{i}\|_{R,r}=\sum_{i=0}^{N}\|a_{i,m}-b_{i}\|_{R}r^{i}<\varepsilon/4$. Let $m=\max\{M_{1},M_{2}\}$ and define $c_{n}:=\|\sum_{i=0}^{\infty}a_{i,n}T^{i}-\sum_{i=0}^{\infty}b_{i}T^{i}\|_{R,r}$, then
\begin{align*}
c_{n}=&\left\|\sum_{i=N+1}^{\infty}a_{i,n}T^{i}+\sum_{i=0}^{N}(a_{i,n}-a_{i,m})T^{i}+\sum_{i=0}^{N}(a_{i,m}-b_{i})T^{i}-\sum_{i=N+1}^{\infty}b_{i}T^{i}\right\|_{R,r}\\
<&\varepsilon/4+\varepsilon/4+\varepsilon/4+\varepsilon/4=\varepsilon\mbox{ as required.}
\end{align*}
For (ii), the result is obvious for $a=0$ and so suppose $a\not=0$. If $\sum_{i=0}^{\infty}\|a^{i}\|_{R}r^{i}<\infty$ then $\sum_{i=0}^{\infty}a^{i}T^{i}$ is an element of $R\langle r^{-1}T\rangle$ and by the definition of the Cauchy product we have
\begin{align*}
\left(\sum_{i=0}^{\infty}a^{i}T^{i}\right)(1-aT)=&(a^{0}1_{R})T^{0}+\sum_{i=1}^{\infty}(a^{i}1_{R}+a^{i-1}(-a))T^{i}\\
=&1_{R}T^{0}+\sum_{i=1}^{\infty}a^{i-1}(a+(-a))T^{i}\\
=&1_{R}T^{0}=1_{R,r}=1.
\end{align*}
Similarly this holds for $(1-aT)(\sum_{i=0}^{\infty}a^{i}T^{i})$ and so $1-aT$ is invertible.\\
Now conversely if $1-aT$ is invertible in $R\langle r^{-1}T\rangle$ then for $\sum_{i=0}^{\infty}b_{i}T^{i}$ the inverse of $1-aT$ in $R\langle r^{-1}T\rangle$ we have by the definition of the Cauchy product
\begin{equation*}
1_{R,r}=\left(\sum_{i=0}^{\infty}b_{i}T^{i}\right)(1-aT)=(b_{0}1_{R})T^{0}+\sum_{i=1}^{\infty}(b_{i}1_{R}+b_{i-1}(-a))T^{i}.
\end{equation*}
Hence $b_{0}=1_{R}=a^{0}$ and, for each $i\in\mathbb{N}$, $0=b_{i}+b_{i-1}(-a)$ giving
\begin{equation*}
b_{i-1}a=b_{i}+b_{i-1}a+b_{i-1}(-a)=b_{i}+b_{i-1}(a+(-a))=b_{i}.
\end{equation*}
Therefore for each $i\in\mathbb{N}$, $b_{i}=b_{i-1}a$ with $b_{0}=1_{R}$ giving $b_{i}=a^{i}$ by induction. Hence $\sum_{i=0}^{\infty}a^{i}T^{i}=\sum_{i=0}^{\infty}b_{i}T^{i}$ is an element of $R\langle r^{-1}T\rangle$ and so $\sum_{i=0}^{\infty}\|a^{i}\|_{R}r^{i}<\infty$ as required.
\end{proof}
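To see (ii) at work in the simplest Banach ring $R=\mathbb{R}$ with its usual absolute value, note that $\|a^{i}\|_{R}r^{i}=|a|^{i}r^{i}$, so the series converges exactly when $|a|r<1$. The sketch below (an illustration only, not part of the proof) truncates $\sum_{i=0}^{\infty}a^{i}T^{i}$, forms its Cauchy product with $1-aT$ and checks that it telescopes to $1$ up to a single tail term.

```python
# Illustration for R = (R, |.|): 1 - a*T is invertible in R<r^{-1}T>
# exactly when sum_i |a|^i r^i < infinity, i.e. when |a|*r < 1.

def norm_Rr(coeffs, r):
    """The norm ||sum a_i T^i||_{R,r} = sum |a_i| r^i for R = R."""
    return sum(abs(a) * r**i for i, a in enumerate(coeffs))

def cauchy_product(f, g):
    """Coefficients of the Cauchy product of two finite power series."""
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

a, r, N = 0.5, 1.5, 40                   # |a|*r = 0.75 < 1, so 1 - aT is invertible
inverse = [a**i for i in range(N)]       # truncation of sum_i a^i T^i
one_minus_aT = [1.0, -a]

prod = cauchy_product(inverse, one_minus_aT)
# The product telescopes: 1 in degree 0, zeros in degrees 1..N-1,
# and a single truncation tail -a^N in degree N.
assert abs(prod[0] - 1.0) < 1e-12
assert all(abs(c) < 1e-12 for c in prod[1:N])

# The norm of the truncated inverse is a partial geometric sum,
# bounded by 1/(1 - |a|*r) in line with part (ii).
assert norm_Rr(inverse, r) < 1.0 / (1.0 - abs(a) * r)
```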
\begin{remark}
\label{rem:RTRRT}
Since $R\langle r^{-1}T\rangle$ extends $R$ as a ring, it is immediate from the definition of the Cauchy product that $R\langle r^{-1}T\rangle$ is commutative if and only if $R$ is commutative. Similarly, by the definition of the norm $\|\cdot\|_{A,r}$ and the Cauchy product, if $A$ is a unital Banach $F$-algebra then $A\langle r^{-1}T\rangle$ is also a unital Banach $F$-algebra. These details are easily checked.
\end{remark}
The following definitions will be used many times in this chapter.
\begin{definition}
\label{def:RTBMS}
Let $R$ be a Banach ring. A {\em bounded multiplicative seminorm} on $R$ is a map $|\cdot|:R\rightarrow\mathbb{R}$ taking non-negative values that is:
\begin{enumerate}
\item[(1)]
bounded, $|a|\leq\|a\|_{R}$ for all $a\in R$, but not constantly zero on $R$;
\item[(2)]
multiplicative, $|ab|=|a||b|$ for all $a,b\in R$ and hence $|1_{R}|=1$ by setting $a=1_{R}$ and $b\not\in\mbox{ker}(|\cdot|)$;
\item[(3)]
a seminorm and so $|\cdot|$ also satisfies the triangle inequality and $0\in\mbox{ker}(|\cdot|)$ but the kernel is not assumed to be a singleton.
\end{enumerate}
\end{definition}
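A concrete example (our illustration, not drawn from the text): on the Banach ring $\mathbb{Z}$ with the usual absolute value, fix a prime $p$ and set $|n|:=0$ if $p$ divides $n$ and $|n|:=1$ otherwise. This map satisfies (1)-(3) and its kernel is the proper closed prime ideal $p\mathbb{Z}$, so it is the kind of seminorm whose kernels populate $\mathcal{M}_{0}$ in Definition \ref{def:RTMOA} below. A minimal check in code:

```python
# A bounded multiplicative seminorm on the Banach ring (Z, |.|):
# |n| := 0 if the prime P divides n, else 1.  Kernel: the prime ideal PZ.

P = 7  # any prime works

def sn(n: int) -> float:
    return 0.0 if n % P == 0 else 1.0

sample = range(-50, 51)
# (1) bounded by the ring norm and not constantly zero
assert all(sn(n) <= abs(n) for n in sample) and sn(1) == 1.0
# (2) multiplicative (P is prime, so P | ab iff P | a or P | b)
assert all(sn(a * b) == sn(a) * sn(b) for a in sample for b in sample)
# (3) the triangle inequality holds and 0 lies in the kernel
assert all(sn(a + b) <= sn(a) + sn(b) for a in sample for b in sample)
# the kernel is exactly PZ
assert all((sn(n) == 0.0) == (n % P == 0) for n in sample)
```

Note that $p\mathbb{Z}$ is closed automatically here since $\mathbb{Z}$ is discrete under its usual absolute value.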
\begin{definition}
\label{def:RTMOA}
Let $F$ be a complete non-Archimedean field and let $A$ be a commutative unital Banach $F$-algebra. In this chapter $\mathcal{M}_{0}(A)$ will denote the set of all proper closed prime ideals of $A$ that are the kernels of bounded multiplicative seminorms on $A$. For $x_{0}\in\mathcal{M}_{0}(A)$, or any proper closed ideal of $A$, we will denote the quotient norm on $A/x_{0}$ by $|\cdot|_{x_{0}}$ that is $|a+x_{0}|_{x_{0}}:=\inf\{\|a+b\|_{A}:b\in x_{0}\}$ for $a\in A$.
\end{definition}
We now proceed with a number of lemmas. In particular, towards the end of Section \ref{sec:RTBR} it will be shown that $\mathcal{M}_{0}(A)$ is always nonempty.
\begin{lemma}
\label{lem:RTBMS}
Let $A$ be a unital Banach $F$-algebra. If $|\cdot|$ is a bounded multiplicative seminorm on $A$ as a Banach ring then we have $|\alpha|=|\alpha|_{F}$ for all $\alpha\in F$. Hence since $|\cdot|$ is multiplicative it is also a vector space seminorm, that is $|\alpha a|=|\alpha|_{F}|a|$ for all $a\in A$ and $\alpha\in F$.
\end{lemma}
\begin{proof}
For $\alpha\in F^{\times}$ we first note that $1=|1_{A}|=|\alpha\alpha^{-1}|=|\alpha||\alpha^{-1}|$ and so $|\alpha|\not=0$ and $|\alpha^{-1}|=|\alpha|^{-1}$. Similarly $|\alpha^{-1}|_{F}=|\alpha|_{F}^{-1}$ since $|\cdot|_{F}$ is a valuation. Moreover since $|\cdot|$ is bounded we have $|\alpha|\leq\|\alpha 1_{A}\|_{A}=|\alpha|_{F}\|1_{A}\|_{A}=|\alpha|_{F}$. Since this holds for all $\alpha\in F^{\times}$ we also have $|\alpha^{-1}|\leq |\alpha^{-1}|_{F}$ giving $|\alpha|_{F}\leq|\alpha|$ and so $|\alpha|=|\alpha|_{F}$ for all $\alpha\in F$.
\end{proof}
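For concreteness (an illustration, not part of the lemma), the prototypical complete non-Archimedean field is $\mathbb{Q}_{p}$, whose valuation restricts to the $p$-adic absolute value $|x|_{p}=p^{-v_{p}(x)}$ on $\mathbb{Q}$. The sketch below implements it and checks multiplicativity and the strong triangle inequality on a sample of rationals.

```python
from fractions import Fraction

# The p-adic absolute value on Q: |x|_p = p^{-v_p(x)}, the prototypical
# non-Archimedean valuation |.|_F appearing throughout this chapter.

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a non-zero rational."""
    n, d, v = x.numerator, x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def abs_p(x: Fraction, p: int) -> Fraction:
    if x == 0:
        return Fraction(0)
    v = vp(x, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

p = 5
xs = [Fraction(a, b) for a in range(-6, 7) for b in range(1, 7) if a != 0]
for x in xs:
    for y in xs:
        # multiplicative, as any valuation must be
        assert abs_p(x * y, p) == abs_p(x, p) * abs_p(y, p)
        # strong triangle inequality |x + y|_p <= max(|x|_p, |y|_p)
        assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
```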
\begin{lemma}
\label{lem:RTQN}
Let $F$ and $A$ be as in Definition \ref{def:RTMOA}. For $x_{0}\in\mathcal{M}_{0}(A)$, or any proper closed ideal of $A$, the following holds:
\begin{enumerate}
\item[(i)]
the quotient ring $A/x_{0}$ contains an isomorphic copy of $F$ and is an integral domain if $x_{0}$ is prime;
\item[(ii)]
the quotient norm is such that $|\alpha+x_{0}|_{x_{0}}=|\alpha|_{F}$ for all $\alpha\in F$;
\item[(iii)]
the quotient norm $|\cdot|_{x_{0}}$ is an $F$-vector space norm on $A/x_{0}$, as opposed to merely a seminorm, and it is sub-multiplicative;
\item[(iv)]
if $\|\cdot\|_{A}$ is square preserving, that is $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$, then both $\|\cdot\|_{A}$ and $|\cdot|_{x_{0}}$ observe the strong triangle inequality noting that $F$ is non-Archimedean;
\item[(v)]
viewed as a seminorm on $A$ by way of the map $a\mapsto |a+x_{0}|_{x_{0}}$, the seminorm $|\cdot|_{x_{0}}$ is bounded.
\end{enumerate}
\end{lemma}
\begin{proof}
For (i), if $a_{1}+x_{0}$, $a_{2}+x_{0}\in A/x_{0}$ with $(a_{1}+x_{0})(a_{2}+x_{0})=a_{1}a_{2}+x_{0}=0+x_{0}$ then we have $a_{1}a_{2}\in x_{0}$. Hence if $x_{0}$ is a prime ideal of $A$ then at least one of $a_{1}+x_{0}$ and $a_{2}+x_{0}$ is equal to $0+x_{0}$ and so $A/x_{0}$ is an integral domain. It is immediate that $A/x_{0}$ has a subset that is an isomorphic copy of $F$ since $x_{0}$ is a proper ideal of $A$.\\
For (ii), we first show that $|1+x_{0}|_{x_{0}}=1$. Note that $|1+x_{0}|_{x_{0}}\leq\|1\|_{A}=1$ since $0\in x_{0}$. So now suppose toward a contradiction that there is $b\in x_{0}$ such that $\|1+b\|_{A}<1$. We have for all $n\in\mathbb{N}$ that $b_{n}:=-((1+b)^{n}-1)$ is an element of $x_{0}$ since $x_{0}$ is an ideal of $A$. But $\|1-b_{n}\|_{A}=\|(1+b)^{n}\|_{A}\leq\|1+b\|_{A}^{n}$ with $\lim_{n\to\infty}\|1+b\|_{A}^{n}=0$ and so $1$ is an element of $x_{0}$ since $x_{0}$ is closed which contradicts $x_{0}$ being a proper ideal of $A$. We conclude that $\|1+b\|_{A}\geq1$ for all $b\in x_{0}$ and so $|1+x_{0}|_{x_{0}}\geq1$. Hence $|1+x_{0}|_{x_{0}}=1$ by the above. Now for $\alpha\in F^{\times}$ we have $x_{0}=\alpha x_{0}$ since $\alpha$ is invertible where $\alpha x_{0}:=\{\alpha b:b\in x_{0}\}$. Hence
\begin{align*}
|\alpha+x_{0}|_{x_{0}}=&\inf\{\|\alpha+b\|_{A}:b\in x_{0}\}\\
=&\inf\{\|\alpha+\alpha b\|_{A}:b\in x_{0}\}\\
=&\inf\{|\alpha|_{F}\|1+b\|_{A}:b\in x_{0}\}\\
=&|\alpha|_{F}|1+x_{0}|_{x_{0}}=|\alpha|_{F}
\end{align*}
as required. In a similar way for $a\in A$ one shows that $|\alpha a+x_{0}|_{x_{0}}=|\alpha|_{F}|a+x_{0}|_{x_{0}}$.\\
For (iii), we note that $|\cdot|_{x_{0}}$ is a norm on $A/x_{0}$ because $x_{0}$ is closed as a subset of $A$ so that for $a\in A\backslash x_{0}$ there is $\varepsilon>0$ with $\|a+b\|_{A}\geq\varepsilon$ for all $b\in x_{0}$ giving $|a+x_{0}|_{x_{0}}\geq\varepsilon$. We now show that $|\cdot|_{x_{0}}$ is sub-multiplicative. For $a_{1},a_{2}\in A$ we have
\begin{align*}
|a_{1}a_{2}+x_{0}|_{x_{0}}=&\inf\{\|a_{1}a_{2}+b\|_{A}:b\in x_{0}\}\\
\leq&\inf\{\|a_{1}a_{2}+a_{1}b_{2}+a_{2}b_{1}+b_{1}b_{2}\|_{A}:b_{1},b_{2}\in x_{0}\}\\
\leq&\inf\{\|a_{1}+b_{1}\|_{A}\|a_{2}+b_{2}\|_{A}:b_{1},b_{2}\in x_{0}\}\\
=&|a_{1}+x_{0}|_{x_{0}}|a_{2}+x_{0}|_{x_{0}}.
\end{align*}
For (iv), suppose $\|\cdot\|_{A}$ is square preserving. In this case the proof of Theorem \ref{thr:CVFCHAR} also works for $A$ and so $\|\cdot\|_{A}$ observes the strong triangle inequality, see \cite[p18]{Schikhof} for details. Hence for $a_{1},a_{2}\in A$ we also have
\begin{align*}
|a_{1}+a_{2}+x_{0}|_{x_{0}}=&\inf\{\|a_{1}+a_{2}+b\|_{A}:b\in x_{0}\}\\
=&\inf\{\|a_{1}+b_{1}+a_{2}+b_{2}\|_{A}:b_{1},b_{2}\in x_{0}\}\\
\leq&\inf\{\max\{\|a_{1}+b_{1}\|_{A},\|a_{2}+b_{2}\|_{A}\}:b_{1},b_{2}\in x_{0}\}\\
=&\max\{\inf\{\|a_{1}+b_{1}\|_{A}:b_{1}\in x_{0}\},\inf\{\|a_{2}+b_{2}\|_{A}:b_{2}\in x_{0}\}\}\\
=&\max\{|a_{1}+x_{0}|_{x_{0}},|a_{2}+x_{0}|_{x_{0}}\}.
\end{align*}
For (v), since we have $0\in x_{0}$ it is immediate that $|a+x_{0}|_{x_{0}}\leq\|a\|_{A}$ for all $a\in A$.
\end{proof}
\begin{lemma}
\label{lem:RTCBR}
Let $R$ be a commutative Banach ring. Then:
\begin{enumerate}
\item[(i)]
if $a\in R$ has $\|1-a\|_{R}<1$ then $a$ is invertible in $R$;
\item[(ii)]
for $I$ a proper ideal of $R$ the closure $J$ of $I$, as a subset of $R$, is a proper ideal of $R$;
\item[(iii)]
each non-invertible element of $R$ is an element of some maximal ideal of $R$. The maximal ideals of $R$ are proper, closed and prime.
\end{enumerate}
\end{lemma}
\begin{proof}
For (i), for $a\in R$ with $\|1-a\|_{R}<1$ let $\delta>0$ be such that $\|1-a\|_{R}<\delta<1$. Then setting $b:=1-a$ gives $\|b^{n}\|_{R}\leq\|b\|_{R}^{n}<\delta^{n}<1$ for all $n\in\mathbb{N}$. Therefore we have $\sum_{n=0}^{m}\|b^{n}\|_{R}<\sum_{n=0}^{\infty}\delta^{n}=\frac{1}{1-\delta}$ for each $m\in\mathbb{N}$ and so $\sum_{n=0}^{\infty}b^{n}\in R$ since $R$ is complete. Moreover
\begin{align*}
\left\|(1-b)\sum_{n=0}^{\infty}b^{n}-1\right\|_{R}=&\left\|(1-b)\sum_{n=0}^{\infty}b^{n}-(1-b)\sum_{n=0}^{m}b^{n}+(1-b)\sum_{n=0}^{m}b^{n}-1\right\|_{R}\\
\leq&\left(\sum_{n=m+1}^{\infty}\|b^{n}\|_{R}\right)\|1-b\|_{R}+\|b^{m+1}\|_{R}.
\end{align*}
Hence $a$ is invertible in $R$ since $1-b=1-(1-a)=a$ and
\begin{equation*}
\lim_{m\to\infty}\left(\left(\sum_{n=m+1}^{\infty}\|b^{n}\|_{R}\right)\|1-b\|_{R}+\|b^{m+1}\|_{R}\right)=0.
\end{equation*}
For (ii), let $I$ be a proper ideal of $R$ and $J$ its closure as a subset. For $a,b\in J$ there are sequences $(a_{n})$, $(b_{n})$ of elements of $I$ converging to $a$ and $b$ respectively with respect to $\|\cdot\|_{R}$. Hence for $a'\in R$, $(a'a_{n})$ is a sequence in $I$ with $\|a'a-a'a_{n}\|_{R}\leq\|a'\|_{R}\|a-a_{n}\|_{R}$ for each $n\in\mathbb{N}$ and so $a'a\in J$ since $\lim_{n\to\infty}\|a-a_{n}\|_{R}=0$. Similarly $(a_{n}+b_{n})$ is a sequence in $I$ with $\|(a+b)-(a_{n}+b_{n})\|_{R}\leq\|a-a_{n}\|_{R}+\|b-b_{n}\|_{R}$ for each $n\in\mathbb{N}$ and so $a+b\in J$. Hence $J$ is an ideal of $R$. Now since $I$ is a proper ideal of $R$ each element $a_{n}$ of the sequence $(a_{n})$ is not invertible and so $\|1-a_{n}\|_{R}\geq1$ for all $n\in\mathbb{N}$ by (i). Hence $1\leq\|1-a+a-a_{n}\|_{R}\leq\|1-a\|_{R}+\|a-a_{n}\|_{R}$ for all $n\in\mathbb{N}$ and so $\|1-a\|_{R}\geq1$ for all $a\in J$ giving $1\not\in J$. Hence $J$ is proper.\\
For (iii), let $a$ be a non-invertible element of $R$ noting that we can always take $a=0$. The principal ideal $I_{a}:=aR$ is proper since for all $b\in R$, $ab\not=1$. By Zorn's lemma $I_{a}$ is a subset of some maximal ideal $J_{a}$ of $R$. Every maximal ideal $J$ of $R$ is proper and prime, noting that $R/J$ is a field, or by other means, and closed as a subset of $R$ by (ii).
\end{proof}
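The argument for (i) is the familiar Neumann series. The following sketch (an illustration in the simplest Banach ring $R=\mathbb{C}$, not part of the proof) sums $\sum_{n\geq0}(1-a)^{n}$ by truncation and checks that it inverts $a$.

```python
# Neumann series: if ||1 - a|| < 1 then a is invertible in R with
# inverse sum_{n>=0} (1 - a)^n.  Illustrated in the Banach ring C.

def neumann_inverse(a: complex, terms: int = 200) -> complex:
    b = 1 - a
    assert abs(b) < 1, "hypothesis ||1 - a||_R < 1 of part (i)"
    total, power = 0 + 0j, 1 + 0j
    for _ in range(terms):
        total += power      # accumulate the partial sum of (1-a)^n
        power *= b
    return total

a = 0.8 + 0.3j                      # |1 - a| = |0.2 - 0.3j| < 1
inv = neumann_inverse(a)
assert abs(a * inv - 1) < 1e-12     # a * a^{-1} = 1 up to truncation error
```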
\begin{remark}
\label{rem:RTCBR}
We note that if $A$ is a commutative unital Banach $F$-algebra then Lemma \ref{lem:RTCBR} applies to $A$ and $A\langle r^{-1}T\rangle$ for each $r>0$.
\end{remark}
\begin{lemma}
\label{lem:RTAUX}
Let $F$ be a complete non-Archimedean field and let $A$ be a commutative unital Banach $F$-algebra with maximal ideal $m_{0}$. Let $S(A)$ denote the set of all norms on the field $A/m_{0}$ that are also unital bounded seminorms on $A$ as a Banach ring. That is, if $|\cdot|$ is an element of $S(A)$ then $|1|=1$ and $|\cdot|$ conforms to Definition \ref{def:RTBMS} except that it need not be multiplicative, merely sub-multiplicative. It follows that:
\begin{enumerate}
\item[(i)]
the set $S(A)$ is non-empty;
\item[(ii)]
for $|\cdot|\in S(A)$, $\overline{A/m_{0}}$ the completion of $A/m_{0}$ with respect to $|\cdot|$, $r>0$ and $a\in A/m_{0}$, if $a-T$ is non-invertible in $\overline{A/m_{0}}\langle r^{-1}T\rangle$ then there is $|\cdot|'\in S(A)$ with $|a|'\leq r$ and $|b|'\leq|b|$ for all $b\in A/m_{0}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (i), we note that the quotient norm $|\cdot|_{m_{0}}$ is an element of $S(A)$ since (iii) of Lemma \ref{lem:RTCBR} shows that (ii), (iii) and (v) of Lemma \ref{lem:RTQN} apply to $|\cdot|_{m_{0}}$.\\
For (ii), suppose that $a-T$ is non-invertible in $\overline{A/m_{0}}\langle r^{-1}T\rangle$. Then $a-T$ is an element of some maximal ideal $J$ of $\overline{A/m_{0}}\langle r^{-1}T\rangle$ by Lemma \ref{lem:RTCBR}. Hence the quotient norm $|\cdot|_{J}$ on $\overline{A/m_{0}}\langle r^{-1}T\rangle/J$ is an element of $S(\overline{A/m_{0}}\langle r^{-1}T\rangle)$ by Lemma \ref{lem:RTQN}. Therefore since $J$ is closed and $A/m_{0}$ is a field, $|a'|':=|a'+J|_{J}$, for $a'\in A/m_{0}$, defines a norm on $A/m_{0}$. Since $|\cdot|_{J}$ is an element of $S(\overline{A/m_{0}}\langle r^{-1}T\rangle)$ we have that $|\cdot|'$ is unital as a seminorm on $A$. Similarly since $|\cdot|_{J}$ is bounded as a seminorm on $\overline{A/m_{0}}\langle r^{-1}T\rangle$ we have for all $a'\in A$ that
\begin{equation}
\label{equ:RTAUX}
|a'+m_{0}|'=|(a'+m_{0})+J|_{J}\leq\|a'+m_{0}\|_{\overline{A/m_{0}},r}=|a'+m_{0}|\leq\|a'\|_{A}
\end{equation}
noting that $|\cdot|$ is an element of $S(A)$. Hence $|\cdot|'$ is bounded as a seminorm on $A$ and so $|\cdot|'$ is an element of $S(A)$. Further since $a-T$ is an element of $J$ we have
\begin{equation*}
|a|'=|a+J|_{J}=|T+J|_{J}\leq\|T\|_{\overline{A/m_{0}},r}=r.
\end{equation*}
Finally we have $|b|'\leq|b|$ for all $b\in A/m_{0}$ by (\ref{equ:RTAUX}).
\end{proof}
The following Lemma will be particularly useful in Section \ref{sec:RTR}.
\begin{lemma}
\label{lem:RTNMT}
Let $F$ be a complete non-Archimedean field and let $A$ be a commutative unital Banach $F$-algebra. With reference to Definition \ref{def:RTMOA} the following holds:
\begin{enumerate}
\item[(i)]
the set $\mathcal{M}_{0}(A)$ is non-empty since every maximal ideal of $A$ is an element of $\mathcal{M}_{0}(A)$;
\item[(ii)]
an element $a\in A$ is invertible if and only if $a+x_{0}\not=0+x_{0}$ for all $x_{0}\in\mathcal{M}_{0}(A)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Whilst this proof provides more detail, much of the following has been taken from \cite[Ch1]{Berkovich}. For (i), let $m_{0}$ be a maximal ideal of $A$. Hence the quotient ring $A/m_{0}$ is a field. Let $S(A)$ be as in Lemma \ref{lem:RTAUX} and note therefore that $S(A)$ is non-empty. We put a partial order on $S(A)$ by $|\cdot|\lesssim|\cdot|'$ if and only if $|a+m_{0}|\leq|a+m_{0}|'$ for all $a+m_{0}\in A/m_{0}$. Now let $E$ be a chain in $S(A)$, that is $E$ is a subset of $S(A)$ such that $\lesssim$ restricts to a total order on $E$. Define a map $|\cdot|_{E}:A/m_{0}\rightarrow\mathbb{R}$ by
\begin{equation*}
|a+m_{0}|_{E}:=\inf\{|a+m_{0}|:|\cdot|\in E\}.
\end{equation*}
We will show that $|\cdot|_{E}$ is a lower bound for $E$ in $S(A)$. It is immediate from the definition of $|\cdot|_{E}$ that it is unital and bounded since all of the elements of $E$ are. Hence it suffices to show that $|\cdot|_{E}$ is a sub-multiplicative norm on $A/m_{0}$. Clearly $|0+m_{0}|_{E}=0$ so, simplifying our notation slightly, let $a$ be an element of $A/m_{0}^{\times}$. We show that $|a|_{E}\not=0$. Let $|\cdot|$ be an element of $E$ and suppose towards a contradiction that there is $|\cdot|'\in E$ such that $|a|'<\min\{|a|,|a^{-1}|^{-1}\}$. Then
\begin{equation*}
1=|1|'=|aa^{-1}|'\leq|a|'|a^{-1}|'<|a^{-1}|^{-1}|a^{-1}|'.
\end{equation*}
Hence by the above we have $|a^{-1}|<|a^{-1}|'$ and $|a|'<|a|$ giving $|\cdot|'\not\lesssim|\cdot|$ and $|\cdot|\not\lesssim|\cdot|'$ which contradicts both $|\cdot|$ and $|\cdot|'$ being elements of $E$. Therefore for all $|\cdot|'\in E$ we have $|a|'\geq\min\{|a|,|a^{-1}|^{-1}\}$. In particular $|a|_{E}\not=0$. Now for $a,b\in A/m_{0}$ we have
\begin{align*}
|a+b|_{E}=&\inf\{|a+b|:|\cdot|\in E\}\\
\leq&\inf\{|a|+|b|:|\cdot|\in E\}\\
=&\inf\{|a|+|b|':|\cdot|,|\cdot|'\in E\},\quad\dag\\
=&|a|_{E}+|b|_{E},
\end{align*}
where line $\dag$ follows from the line above it because if $|\cdot|\lesssim |\cdot|'$ then $|a|+|b|\leq |a|+|b|'$. Hence the triangle inequality holds for $|\cdot|_{E}$. Similarly we have $|ab|_{E}\leq |a|_{E}|b|_{E}$ and so $|\cdot|_{E}$ is sub-multiplicative as required. Hence $|\cdot|_{E}$ is a lower bound for $E$ in $S(A)$. Therefore by Zorn's lemma there exists a minimal element of $S(A)$ with respect to $\lesssim$. Let $|\cdot|$ be a minimal element of $S(A)$ and denote by $\overline{A/m_{0}}$ the completion of $A/m_{0}$ with respect to $|\cdot|$. We will show that $|\cdot|$ is multiplicative on $A/m_{0}$ and hence satisfies (i) of Lemma \ref{lem:RTNMT}. Note that for now we should only take $\overline{A/m_{0}}$ to be an integral domain and not a field since we can't apply Theorem \ref{thr:CVFCOM}.\\
Now since $A/m_{0}$ is a field $|\cdot|$ will be multiplicative if $|a^{-1}|=|a|^{-1}$ for all $a\in A/m_{0}^{\times}$ since for $a,b\in A/m_{0}^{\times}$ with $|a^{-1}|=|a|^{-1}$ we have $|b|=|baa^{-1}|\leq|ba||a^{-1}|=|ba||a|^{-1}$ giving $|a||b|\leq|ab|$ and since $|\cdot|$ is sub-multiplicative we have $|ab|=|a||b|$. Hence we will show that $|a^{-1}|=|a|^{-1}$ for all $a\in A/m_{0}^{\times}$. To this end we first show that $|\cdot|$ is power multiplicative that is $|a^{n}|=|a|^{n}$ for all $a\in A/m_{0}$ and $n\in\mathbb{N}$. Suppose towards a contradiction that there is $a\in A/m_{0}$ with $|a^{n}|<|a|^{n}$ for some $n>1$. We claim that $a-T$ is non-invertible in the Banach ring $\overline{A/m_{0}}\langle r^{-1}T\rangle$ with $r:=\sqrt[n]{|a^{n}|}$. By Lemma \ref{lem:RTRRT} it suffices to show that the series $\sum_{i=0}^{\infty}|a^{-i}|r^{i}$ does not converge. Expressing $i$ as $i=pn+q$, for some $q\in\{0,\cdots,n-1\}$, we have $|a^{i}|\leq|a^{n}|^{p}|a^{q}|$ and $|a^{i}|^{-1}\leq|a^{-i}|$ since $1=|a^{i}a^{-i}|\leq|a^{i}||a^{-i}|$. Therefore
\begin{equation*}
|a^{-i}|r^{i}\geq|a^{i}|^{-1}|a^{n}|^{p+\frac{q}{n}}\geq\frac{|a^{n}|^{p}|a^{n}|^{\frac{q}{n}}}{|a^{n}|^{p}|a^{q}|}=\frac{|a^{n}|^{\frac{q}{n}}}{|a^{q}|}.
\end{equation*}
Hence $|a^{-i}|r^{i}\geq\min\{\frac{|a^{n}|^{\frac{q}{n}}}{|a^{q}|}:q\in\{0,\cdots,n-1\}\}>0$ for all $i\geq0$. Therefore $a-T$ is non-invertible in $\overline{A/m_{0}}\langle r^{-1}T\rangle$ with $r:=\sqrt[n]{|a^{n}|}$. Now by Lemma \ref{lem:RTAUX} there exists $|\cdot|'\in S(A)$ such that $|a|'\leq r$ and $|b|'\leq|b|$ for all $b\in A/m_{0}$. But, since $|a^{n}|<|a|^{n}$, this gives $|a|'\leq r=\sqrt[n]{|a^{n}|}<|a|$ which contradicts $|\cdot|$ being a minimal element of $S(A)$. Hence we have shown that $|a^{n}|=|a|^{n}$ for all $a\in A/m_{0}$ and $n\in\mathbb{N}$.\\
Now suppose towards a contradiction that there exists an element $a\in A/m_{0}^{\times}$ with $|a|^{-1}<|a^{-1}|$. We claim that $a-T$ is non-invertible in $\overline{A/m_{0}}\langle r^{-1}T\rangle$ with $r:=|a^{-1}|^{-1}$. Again by Lemma \ref{lem:RTRRT} it suffices to show that the series $\sum_{i=0}^{\infty}|a^{-i}|r^{i}$ does not converge. Indeed since $|\cdot|$ is power multiplicative we have
\begin{equation*}
|a^{-i}|r^{i}=|(a^{-1})^{i}|r^{i}=|a^{-1}|^{i}(|a^{-1}|^{-1})^{i}=|a^{-1}|^{0}=1.
\end{equation*}
Hence $a-T$ is non-invertible in $\overline{A/m_{0}}\langle r^{-1}T\rangle$ with $r:=|a^{-1}|^{-1}$. Now again by Lemma \ref{lem:RTAUX} there exists $|\cdot|'\in S(A)$ such that $|a|'\leq r$ and $|b|'\leq|b|$ for all $b\in A/m_{0}$. But, since $|a|^{-1}<|a^{-1}|$, this gives $|a|'\leq r=|a^{-1}|^{-1}<|a|$ which contradicts $|\cdot|$ being a minimal element of $S(A)$. Hence we have shown that $|a^{-1}|=|a|^{-1}$ for all $a\in A/m_{0}^{\times}$ and so $|\cdot|$ is multiplicative. Finally $m_{0}$ is the kernel of $|\cdot|$ and as a maximal ideal of $A$ it is proper, closed and prime by Lemma \ref{lem:RTCBR}. In particular since $m_{0}$ was an arbitrary maximal ideal of $A$ every maximal ideal of $A$ is an element of $\mathcal{M}_{0}(A)$.\\
For (ii), for $a$ an invertible element of $A$ we have $a\not\in x_{0}$ for all $x_{0}\in\mathcal{M}_{0}(A)$ since $x_{0}$ is a proper ideal of $A$. Hence $a+x_{0}\not=0+x_{0}$ in $A/x_{0}$ for all $x_{0}\in\mathcal{M}_{0}(A)$. On the other hand for $a$ a non-invertible element of $A$ we have by Lemma \ref{lem:RTCBR} that $a$ is an element of a maximal ideal $J_{a}$ of $A$. By (i) above, $J_{a}$ is an element of $\mathcal{M}_{0}(A)$ and $a+J_{a}=0+J_{a}$ in $A/J_{a}$. Therefore for $a$ a non-invertible element of $A$ we do not have $a+x_{0}\not=0+x_{0}$ in $A/x_{0}$ for all $x_{0}\in\mathcal{M}_{0}(A)$.
\end{proof}
With the preceding theory in place we can now turn our attention to the main topic of this chapter.
\section{Representations}
\label{sec:RTR}
\subsection{Established theorems}
\label{subsec:RTRET}
The particular well-known representation theorems in the Archimedean setting that we will find an analog of in the non-Archimedean setting are as follows. See \cite[p35]{Kulkarni-Limaye1992} for details of Theorem \ref{thr:RTREPR}.
\begin{theorem}
\label{thr:RTREPC}
Let $A$ be a commutative unital complex Banach algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$. Then $A$ is isometrically isomorphic to a uniform algebra on some compact Hausdorff space $X$, in other words a $^{\mathbb{C}}/_{\mathbb{C}}$ function algebra on $(X,\mbox{id},\mbox{id})$.
\end{theorem}
\begin{theorem}
\label{thr:RTREPR}
Let $A$ be a commutative unital real Banach algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$. Then $A$ is isometrically isomorphic to a real function algebra on some compact Hausdorff space $X$ with topological involution $\tau$ on $X$, in other words a $^{\mathbb{C}}/_{\mathbb{R}}$ function algebra on $(X,\tau,\bar{z})$.
\end{theorem}
We will now recall some of the theory behind Theorem \ref{thr:RTREPC}. For more details see \cite[p29]{Stout} or \cite[p4,p11]{Gamelin}. The space $X$ is the character space $\mbox{Car}(A)$ which as a set is the set of all non-zero, complex-valued, multiplicative $\mathbb{C}$-linear functionals on $A$. It turns out that the characters on $A$ are all automatically continuous. Note that in the case of Theorem \ref{thr:RTREPR} the functionals are complex-valued but $\mathbb{R}$-linear and $\tau$ maps each such functional to its complex conjugate. For a commutative unital complex Banach algebra $A$ the Gelfand transform is a homomorphism from $A$ to a space of complex-valued functions $\hat{A}$ defined by $a\mapsto\hat{a}$ where $\hat{a}(\varphi):=\varphi(a)$ for all $a\in A$ and $\varphi\in\mbox{Car}(A)$. The topology on $\mbox{Car}(A)$ is the initial topology given by the family of functions $\hat{A}$. Known in this case as the Gelfand topology, it is the weakest topology on $\mbox{Car}(A)$ such that all the elements of $\hat{A}$ are continuous, giving $\hat{A}\subseteq C_{\mathbb{C}}(\mbox{Car}(A))$. The norm given to $\hat{A}$ is the sup norm.\\
Now for a commutative unital complex Banach algebra $A$ the set of maximal ideals of $A$ and the set of kernels of the elements of $\mbox{Car}(A)$ agree. In Theorem \ref{thr:RTREPC}, $\|\cdot\|_{A}$ being square preserving ensures that $A$ is semisimple, that is that the Jacobson radical of $A$ is $\{0\}$ where the Jacobson radical is the intersection of all maximal ideals of $A$ and so the intersection of all the kernels of elements of $\mbox{Car}(A)$. Forcing $A$ to be semisimple ensures that the Gelfand transform is injective since if $A$ is semisimple then the kernel of the Gelfand transform is $\{0\}$. Similarly to confirm that the Gelfand transform is injective it is enough to show that it is an isometry. Given Theorem \ref{thr:RTREPC} it is immediate that a commutative unital complex Banach algebra $A$ is isometrically isomorphic to a uniform algebra if and only if its norm is square preserving since the sup norm has this property. Hence Theorem \ref{thr:RTREPC} provides a characterisation of uniform algebras.\\
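For orientation, here is a toy model of this construction (an illustration only, assuming the simplest case $A=\mathbb{C}^{X}$ for a finite set $X$ with the sup norm, where the characters are exactly the point evaluations): the Gelfand transform $a\mapsto\hat{a}$ is then multiplicative and a sup-norm isometry.

```python
# Toy Gelfand transform for A = C^X with X finite and the sup norm.
# Each point x in X gives a character a -> a(x); the transform
# a -> a_hat with a_hat(phi) = phi(a) is multiplicative and isometric.

X = ["x1", "x2", "x3"]
characters = [lambda a, x=x: a[x] for x in X]   # the point evaluations

def gelfand(a):
    return [phi(a) for phi in characters]       # a_hat as a list of values

def sup_norm(values):
    return max(abs(v) for v in values)

a = {"x1": 1 + 2j, "x2": -3j, "x3": 0.5}
b = {"x1": 2.0, "x2": 1 + 1j, "x3": -1.0}
ab = {x: a[x] * b[x] for x in X}                # pointwise product in A

# The transform is multiplicative and an isometry for the sup norm.
assert gelfand(ab) == [s * t for s, t in zip(gelfand(a), gelfand(b))]
assert sup_norm(gelfand(a)) == sup_norm(a.values())
```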
Now in the non-Archimedean setting Berkovich, the author of \cite{Berkovich}, takes the following approach involving Definition \ref{def:RTMACH}.
\begin{definition}
\label{def:RTMACH}
Let $F$ be a complete non-Archimedean field and let $A$ be a commutative unital Banach $F$-algebra. Define $\mathcal{M}_{1}(A)$ to be the set of all bounded multiplicative seminorms on $A$. Further a {\em{character on}} $A$ is a non-zero, multiplicative $F$-linear functional on $A$ that takes values in some complete field extending $F$ as a valued field.
\end{definition}
For an appropriate topology, $\mathcal{M}_{1}(A)$ plays the role for $A$ in Definition \ref{def:RTMACH} that the maximal ideal space, equivalently the character space, plays in the Archimedean setting. For $|\cdot|\in \mathcal{M}_{1}(A)$ let $x_{0}:=\mbox{ker}(|\cdot|)$. Then $x_{0}$ is a proper closed prime ideal of $A$. Hence the quotient ring $A/x_{0}$ is an integral domain. Lemma \ref{lem:RTQRV} is useful here.
\begin{lemma}
\label{lem:RTQRV}
Let $F$ be a complete valued field and let $A$ be a commutative unital Banach $F$-algebra. For $|\cdot|$ a bounded multiplicative seminorm on $A$ with kernel $x_{0}$ the value $|a|$ of $a\in A$ only depends on the quotient class in $A/x_{0}$ to which $a$ belongs. Hence $|\cdot|$ is well defined when used as a valuation on $A/x_{0}$ by setting $|a+x_{0}|:=|a|$. Further $x_{0}$ is a closed subset of $A$.
\end{lemma}
\begin{proof}
For $a\in A$ and $b\in x_{0}$ we have $|a|=|a+b-b|\leq|a+b|+|b|=|a+b|$ and $|a+b|\leq|a|+|b|=|a|$ hence $|a+b|=|a|$ as required. Furthermore this also gives an easy way of seeing that $x_{0}$ is a closed subset of $A$. Let $a$ be an element of $A\backslash x_{0}$, so that $|a|>0$. Then for all $b\in x_{0}$ we have $\|a-b\|_{A}\geq|a-b|=|a|>0$ since $|\cdot|$ is bounded, so $a$ is not in the closure of $x_{0}$ and hence $x_{0}$ is closed.
\end{proof}
Now by Lemma \ref{lem:RTQRV} we can take $|\cdot|$ to be a valuation on $A/x_{0}$ and hence extend it to a valuation on the field of fractions $\mbox{Frac}(A/x_{0})$. Hence an element $|\cdot|\in \mathcal{M}_{1}(A)$ defines a character on $A$ by sending the elements of $A$ to their image in the completion of $\mbox{Frac}(A/x_{0})$ with respect to $|\cdot|$. With these details in place we have the following theorem by Berkovich, see \cite[p157]{Berkovich}.
\begin{theorem}
\label{thr:RTREPF}
Let $F$ be a complete non-Archimedean field. Let $A$ be a commutative unital Banach $F$-algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$. Suppose that all of the characters of $A$ take values in $F$. Then:
\begin{enumerate}
\item[(i)]
the space $\mathcal{M}_{1}(A)$ is totally disconnected;
\item[(ii)]
the Gelfand transform gives an isomorphism from $A$ to $C_{F}(\mathcal{M}_{1}(A))$.
\end{enumerate}
\end{theorem}
As we move on to the next subsection it is worth pointing out that the Gelfand theory presented in \cite{Berkovich} does not make use of any definition such as that of $^{L}/_{L^{g}}$ function algebras.
\subsection{Motivation}
\label{subsec:RTRMOT}
For $A$ a commutative unital complex Banach algebra it is straightforward to confirm that there is a one-one correspondence between the elements of $\mbox{Car}(A)$ and the elements of the maximal ideal space. Since $A$ is unital the complex constants are elements of $A$ and for $\varphi\in\mbox{Car}(A)$, $\varphi$ restricts to the identity on $\mathbb{C}$. Hence by the first isomorphism theorem for rings we have
\begin{equation}
\label{equ:RTCONG}
A/\mbox{ker}(\varphi)\cong\varphi(A)=\mathbb{C}
\end{equation}
showing that $\mbox{ker}(\varphi)$ is a maximal ideal of $A$. Therefore, by also noting the prelude to Chapter \ref{cha:CG}, the set of maximal ideals of $A$ and the set of kernels of the elements of $\mbox{Car}(A)$ do indeed agree. It remains to show that no two characters on $A$ have the same kernel, and this marks an important difference with the theory we are about to present. First, though, let $\varphi,\phi$ be elements of $\mbox{Car}(A)$ with $\mbox{ker}(\varphi)=\mbox{ker}(\phi)$. We note that for each $a\in A$ there is a unique $\alpha\in\mathbb{C}$ representing the quotient class $a+\mbox{ker}(\varphi)$ by (\ref{equ:RTCONG}). Hence for some $b\in\mbox{ker}(\varphi)$ we have $a+b=\alpha$ giving
\begin{equation*}
\varphi(a)=\varphi(a)+\varphi(b)=\varphi(a+b)=\varphi(\alpha)=\alpha=\phi(\alpha)=\phi(a+b)=\phi(a)+\phi(b)=\phi(a)
\end{equation*}
and so no two characters on $A$ have the same kernel.\\
Now let $F$ be a complete non-Archimedean field. We wish to identify sufficient conditions for a commutative unital Banach $F$-algebra to be represented by some $^{L}/_{F}$ function algebra. In this respect the following lemma is informative and motivates an appropriate choice of character space in Subsection \ref{subsec:RTRFBD}.
\begin{lemma}
\label{lem:RTMOTMA}
For $A$ an $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$, where $L$ can be Archimedean or non-Archimedean and $A$ is not assumed to be basic, define a family of maps on $A$ by
\begin{equation*}
|f|_{A,x}:=|f(x)|_{L}\quad\mbox{for }x\in X\mbox{ and }f\in A.
\end{equation*}
Then for each $x\in X$:
\begin{enumerate}
\item[(i)]
the map $|\cdot|_{A,x}$ is a bounded multiplicative seminorm on $A$;
\item[(ii)]
the kernel $\mbox{\rm{ker}}(|\cdot|_{A,x})$, which is the same as $\mbox{\rm{ker}}(\hat{x})$ where $\hat{x}$ is the evaluation character $\hat{x}(f):=f(x)$ on $A$, is not only a proper closed prime ideal of $A$ but also a maximal ideal;
\item[(iii)]
we have $\mbox{\rm{ker}}\left(\widehat{\tau(x)}\right)=\mbox{\rm{ker}}(\hat{x})$ even if $\tau$ is not the identity and in general different evaluation characters can have the same kernel.
\end{enumerate}
\end{lemma}
\begin{proof}
For (i), it is immediate that $|\cdot|_{A,x}$ is a bounded multiplicative seminorm on $A$ since the norm on $A$ is the sup norm and $|\cdot|_{L}$ is a valuation on $L$.\\
For (ii), it is immediate that $\mbox{ker}(|\cdot|_{A,x})$ is a proper ideal of $A$ noting that $|1|_{A,x}=|1|_{L}=1$. It remains to show that $\mbox{ker}(|\cdot|_{A,x})$ is a maximal ideal of $A$ noting Lemma \ref{lem:RTCBR}. To this end we show that the quotient ring $A/\mbox{ker}(\hat{x})$ is a field. We first note that $L^{g}\subseteq\hat{x}(A)\subseteq L$ and that $\hat{x}(A)$ is a ring and so an integral domain. Further by the first isomorphism theorem for rings we have $A/\mbox{ker}(\hat{x})\cong\hat{x}(A)$ and so $A/\mbox{ker}(\hat{x})$ contains an embedding of $L^{g}$ and each element $a\in A/\mbox{ker}(\hat{x})$ is an element of an algebraic extension of $L^{g}$ since $L$ is a finite extension of $L^{g}$. Therefore for $a\in A/\mbox{ker}(\hat{x})$ with $a\not=0$ we have by Lemma \ref{lem:CVFPOL} that $L^{g}(a)=L^{g}[a]$ where $L^{g}(a)$ is a simple extension of $L^{g}$ and $L^{g}[X]$ is the ring of polynomials over $L^{g}$. Hence, since $L^{g}[a]\subseteq A/\mbox{ker}(\hat{x})$, the inverse $a^{-1}$ is an element of $A/\mbox{ker}(\hat{x})$ which is therefore a field as required.\\
For (iii), we note that for all $f\in A$ and $x\in X$ we have $f(\tau(x))=g(f(x))$ since $f$ is an element of $C(X,\tau,g)$. Further since $g\in\mbox{Gal}(^{L}/_{F})$ we have $g(f(x))=0$ if and only if $f(x)=0$ and so $\mbox{ker}\left(\widehat{\tau(x)}\right)=\mbox{ker}(\hat{x})$. However in general $f(x)$ need not be equal to $g(f(x))$ and so different evaluation characters can have the same kernel.
\end{proof}
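As a toy illustration of part (i) of Lemma \ref{lem:RTMOTMA} in the Archimedean case $L=\mathbb{C}$ with trivial $\tau$ and $g$, the following minimal sketch (an illustration only, with floating-point tolerances; the four-point space and random functions are of course not from the text) evaluates the maps $|f|_{A,x}=|f(x)|$ on complex-valued functions and confirms that each is multiplicative, subadditive and bounded by the sup norm.

```python
import random

X = range(4)                      # a toy four-point space

def sup_norm(f):                  # the sup norm ||f||_infty over X
    return max(abs(f[x]) for x in X)

def seminorm(f, x):               # the map |f|_{A,x} := |f(x)|
    return abs(f[x])

random.seed(0)
def rand_f():
    # a random C-valued function on X, stored as a list indexed by X
    return [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in X]

for _ in range(100):
    f, g = rand_f(), rand_f()
    fg = [f[x] * g[x] for x in X]     # pointwise product
    s  = [f[x] + g[x] for x in X]     # pointwise sum
    for x in X:
        # multiplicative, subadditive, and bounded by the sup norm
        assert abs(seminorm(fg, x) - seminorm(f, x) * seminorm(g, x)) < 1e-9
        assert seminorm(s, x) <= seminorm(f, x) + seminorm(g, x) + 1e-9
        assert seminorm(f, x) <= sup_norm(f) + 1e-9
```

The kernel of $|\cdot|_{A,x}$ is visibly the set of functions vanishing at $x$, matching part (ii).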
\subsection{Representation under finite basic dimension}
\label{subsec:RTRFBD}
This subsection will involve the use of Definition \ref{def:RTFISO}.
\begin{definition}
\label{def:RTFISO}
Suppose $F_{1}$ and $F_{2}$ are extensions of a field $F$ such that there exists an isomorphism $\varphi:F_{1}\rightarrow F_{2}$ with $\varphi(a)=a$ for all $a\in F$. Then $\varphi$ is called an $F$-{\em isomorphism} and $F_{1}$ and $F_{2}$ are called $F$-{\em isomorphic} or with the same meaning $F$-{\em conjugate}. Similarly if $F$ is complete then we can talk of $F$-{\em isomorphic} Banach $F$-algebras etc.
\end{definition}
The following definition and theorem will be the focus of attention for the rest of this chapter.
\begin{definition}
\label{def:RTFBD}
Let $F$ be a complete valued field and let $A$ be a commutative unital Banach $F$-algebra. We say that $A$ has {\em finite basic dimension} if there exists a finite extension $L$ of $F$ extending $F$ as a valued field such that:
\begin{enumerate}
\item[(i)]
for each proper closed prime ideal $J$ of $A$, that is the kernel of a bounded multiplicative seminorm on $A$, the field of fractions $\mbox{Frac}(A/J)$ is $F$-isomorphic to a subfield of $L$;
\item[(ii)]
there is $g\in\mbox{Gal}(^{L}/_{F})$ with $L^{g}=F$.
\end{enumerate}
Cases where $L=F$ are allowed.
\end{definition}
The purpose of Definition \ref{def:RTFBD} is to generalise to the non-Archimedean setting conditions that are innately present in the Archimedean case due to the Gelfand Mazur theorem. We will discuss this in Remark \ref{rem:RTFBD}.
\begin{theorem}
\label{thr:RTREPLF}
Let $F$ be a locally compact complete non-Archimedean field with nontrivial valuation. Let $A$ be a commutative unital Banach $F$-algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$ and finite basic dimension. Then:
\begin{enumerate}
\item[(i)]
for some finite extension $L$ of $F$ extending $F$ as a valued field, a character space $\mathcal{M}(A)$ of $L$-valued, multiplicative $F$-linear functionals can be defined;
\item[(ii)]
the space $\mathcal{M}(A)$ is a totally disconnected compact Hausdorff space;
\item[(iii)]
$A$ is isometrically $F$-isomorphic to a $^{L}/_{F}$ function algebra on $(\mathcal{M}(A),g,g)$ for some $g\in\mbox{Gal}(^{L}/_{F})$.
\end{enumerate}
\end{theorem}
\begin{remark}
\label{rem:RTFBD}
Concerning the condition of finite basic dimension.
\begin{enumerate}
\item[(i)]
We first note that all commutative unital complex Banach algebras and commutative unital real Banach algebras have finite basic dimension. To see this let $A$ be such an algebra and let $J$ be a proper closed prime ideal of $A$ such that $J$ is the kernel of a bounded multiplicative seminorm $|\cdot|$ on $A$. Then, by Lemma \ref{lem:RTBMS} and Lemma \ref{lem:RTQRV}, $|\cdot|$ extends the absolute valuation on $\mathbb{R}$ to a valuation on the integral domain $A/J$. Extending $|\cdot|$ to a valuation on $\mbox{Frac}(A/J)$ gives either $\mathbb{R}$ or $\mathbb{C}$ by the Gelfand Mazur theorem and noting Theorem \ref{thr:CVFCOM}. Finally with consideration of $\mbox{Gal}(^{\mathbb{C}}/_{\mathbb{R}})$ the result follows. Hence we note that with little modification Theorem \ref{thr:RTREPC}, Theorem \ref{thr:RTREPR} and Theorem \ref{thr:RTREPLF} could be combined into a single theorem.
\item[(ii)]
The argument in (i) was deliberately a little naive: for every commutative unital Banach $F$-algebra $A$ with finite basic dimension, the kernel of every bounded multiplicative seminorm on $A$ is in fact a maximal ideal of $A$. This follows easily from Lemma \ref{lem:CVFPOL} since such a kernel $J$ is a proper closed prime ideal of $A$ and the elements of the quotient ring $A/J$ are algebraic over $F$, so that $A/J$ is a field.
\item[(iii)]
Finally if $A$ is a commutative unital Banach $F$-algebra then in general the set of maximal ideals of $A$ is a subset of the set of kernels of bounded multiplicative seminorms on $A$ by Lemma \ref{lem:RTNMT}. Hence Theorem \ref{thr:RTREPLF} might be strengthened if we can find a proof that accepts changing (i) in Definition \ref{def:RTFBD} to the condition that for each maximal ideal $J$ of $A$ the field $A/J$ is $F$-isomorphic to a subfield of $L$. This is something for the future. The change only makes a difference for cases where there is a bounded multiplicative seminorm on $A$ with kernel $J$ such that $A/J$ has elements that are transcendental over $F$ since otherwise $J$ is a maximal ideal of $A$.
\end{enumerate}
\end{remark}
\begin{proof}[Proof of Theorem \ref{thr:RTREPLF}]
Let $\mathcal{M}_{0}(A)$ be as in Definition \ref{def:RTMOA}. Now $A$ has finite basic dimension so for each $x_{0}\in\mathcal{M}_{0}(A)$ the quotient ring $A/x_{0}$ is a field by Remark \ref{rem:RTFBD}. Further there is a finite extension $L$ of $F$ extending $F$ as a valued field such that for all $x_{0}\in\mathcal{M}_{0}(A)$ the field $A/x_{0}$ is $F$-isomorphic to a subfield of $L$. Moreover for $|\cdot|$ a bounded multiplicative seminorm on $A$ with kernel $x_{0}$ the map $|a+x_{0}|_{A/x_{0}}:=|a|$, for $a\in A$, defines a valuation on $A/x_{0}$ extending the valuation on $F$ by Lemma \ref{lem:RTQRV} and Lemma \ref{lem:RTBMS}. We note that since $L$ and $A/x_{0}$ are both finite extensions of $F$ they are complete valued fields. Further since $|\cdot|_{A/x_{0}}$ is defined by a bounded multiplicative seminorm on $A$ we have
\begin{equation}
\label{equ:RTBOU}
|a+x_{0}|_{A/x_{0}}\leq\|a\|_{A}\quad\mbox{for all }a\in A.
\end{equation}
We now progress towards defining the character space of $A$. Define $\mathcal{M}(A)$ as the set of all pairs $x:=(x_{0},\varphi)$ where $x_{0}\in\mathcal{M}_{0}(A)$ and $\varphi$ is an $F$-isomorphism from $A/x_{0}$ to a subfield of $L$ extending $F$. Then to each $x=(x_{0},\varphi)\in\mathcal{M}(A)$ we associate a map $\hat{x}:A\rightarrow L$ given by $\hat{x}(a):=\varphi(a+x_{0})$ for all $a\in A$. Note that for each element $x=(x_{0},\varphi)\in\mathcal{M}(A)$ we have
\begin{equation}
\label{equ:RTEQU}
|a+x_{0}|_{A/x_{0}}=|\hat{x}(a)|_{L}\quad\mbox{for all }a\in A
\end{equation}
by the uniqueness of the valuation on $A/x_{0}$ extending the valuation on $F$, see Theorem \ref{thr:CVFUT}. In particular each $F$-isomorphism from $A/x_{0}$ to a subfield of $L$ extending $F$ is an isometry and similarly we recall that each element of $\mbox{Gal}(^{L}/_{F})$ is isometric. Now for the element $g\in\mbox{Gal}(^{L}/_{F})$ with $L^{g}=F$, or indeed any other element of $\mbox{Gal}(^{L}/_{F})$, we note that $g$ can be considered as a map of finite order $g:\mathcal{M}(A)\rightarrow\mathcal{M}(A)$ given by $g((x_{0},\varphi)):=(x_{0},g\circ\varphi)$. In particular for $x=(x_{0},\varphi_{1})\in\mathcal{M}(A)$ we have $g\circ\hat{x}=\widehat{g(x)}$ and so there is $y=(y_{0},\varphi_{2})\in\mathcal{M}(A)$ with $y_{0}=x_{0}$ such that the diagram in Figure \ref{fig:RTCMUT} commutes.
\begin{figure}[h]
\begin{equation*}
\xymatrix{
L&&L\ar@{->}[ll]_{g}\\
&A/x_{0}\ar@{->}[ru]^{\varphi_{1}}\ar@{->}[lu]_{\varphi_{2}}&\\
&A\ar@{->}[ruu]_{\hat{x}}\ar@{->}[u]_{q}\ar@{->}[luu]^{\hat{y}}&
}
\end{equation*}
\caption{Commutative diagram for the characters associated to {\it{x}} and {\it{y}}.}
\label{fig:RTCMUT}
\end{figure}
Note that in the case of Figure \ref{fig:RTCMUT} the fields $\hat{x}(A)$ and $\hat{y}(A)$ are $F$-conjugate and could actually be the same subfield of $L$ if the restriction $g|_{\hat{x}(A)}$ is an element of $\mbox{Gal}(^{\hat{x}(A)}/_{F})$. Now by construction for each $x\in\mathcal{M}(A)$ the map $\hat{x}$ is a non-zero, $L$-valued, multiplicative $F$-linear functional on $A$. Hence $\hat{x}$ is continuous since we have
\begin{equation}
\label{equ:RTXBOU}
|\hat{x}(a)|_{L}\leq\|a\|_{A}\quad\mbox{for all }a\in A
\end{equation}
by (\ref{equ:RTEQU}) and (\ref{equ:RTBOU}). We now set up the Gelfand transform in the usual manner by defining a map
\begin{equation*}
\widehat{\cdot}:A\rightarrow\hat{A},\quad a\mapsto\hat{a},
\end{equation*}
where the elements of $\hat{A}$ are the functions $\hat{a}:\mathcal{M}(A)\rightarrow L$ given by $\hat{a}(x):=\hat{x}(a)$. We equip $\hat{A}$ with the binary operations of pointwise addition and multiplication and put the sup norm
\begin{equation*}
\|\hat{a}\|_{\infty}:=\sup_{x\in\mathcal{M}(A)}|\hat{a}(x)|_{L}\quad\mbox{for all }\hat{a}\in\hat{A}
\end{equation*}
on $\hat{A}$ making $\hat{A}$ a commutative unital normed $F$-algebra. Note that with these binary operations it is immediate that the Gelfand transform is an $F$-homomorphism and so $\hat{A}$ is closed under addition and multiplication. Later we will show that $\widehat{\cdot}:A\rightarrow\hat{A}$ is an isometry and so it is also injective. It then follows that $\hat{A}$ is a Banach $F$-algebra since $A$ and $\hat{A}$ are isometrically $F$-isomorphic.\\
Now we equip $\mathcal{M}(A)$ with the Gelfand topology which is the initial topology of $\hat{A}$. Hence the elements of $\hat{A}$ are continuous $L$-valued functions on the space $\mathcal{M}(A)$. We show that $\hat{A}$ separates the points of $\mathcal{M}(A)$ and that $\mathcal{M}(A)$ is a compact Hausdorff space. Let $x$ and $y$ be elements of $\mathcal{M}(A)$ with $x=(x_{0},\varphi)$, $y=(y_{0},\phi)$ and $x\not=y$. If $x_{0}\not=y_{0}$ then there is $a\in x_{0}\cup y_{0}$ such that $a\not\in x_{0}\cap y_{0}$ for which precisely one of $\hat{a}(x)=\hat{x}(a)$ and $\hat{a}(y)=\hat{y}(a)$ is zero. If $x_{0}=y_{0}$ then $\varphi\not=\phi$ on $A/x_{0}$. Hence there is some $a\in A$ such that $\varphi(a+x_{0})\not=\phi(a+x_{0})$ giving
\begin{equation*}
\hat{a}(x)=\hat{x}(a)=\varphi(a+x_{0})\not=\phi(a+x_{0})=\hat{y}(a)=\hat{a}(y)
\end{equation*}
and so $\hat{A}$ separates the points of $\mathcal{M}(A)$. We now show that $\mathcal{M}(A)$ is Hausdorff; the proof is in fact standard. Let $x$ and $y$ be elements of $\mathcal{M}(A)$ with $x\not=y$. Since $\hat{A}$ separates the points of $\mathcal{M}(A)$ there is $\hat{a}\in\hat{A}$ such that $\hat{a}(x)\not=\hat{a}(y)$. Further $L$ is Hausdorff and so there are disjoint open subsets $U_{1}$ and $U_{2}$ of $L$ such that $\hat{a}(x)\in U_{1}$ and $\hat{a}(y)\in U_{2}$. Since the topology on $\mathcal{M}(A)$ is the initial topology of $\hat{A}$ the preimage $\hat{a}^{-1}(U_{1})$ is an open neighborhood of $x$ in $\mathcal{M}(A)$ and the preimage $\hat{a}^{-1}(U_{2})$ is an open neighborhood of $y$ in $\mathcal{M}(A)$. Moreover $\hat{a}^{-1}(U_{1})$ and $\hat{a}^{-1}(U_{2})$ are disjoint because $U_{1}$ and $U_{2}$ are, as required.\\
The following, showing that $\mathcal{M}(A)$ is compact, is an adaptation of part of the proof of Theorem \ref{thr:RTREPR} from \cite[p23]{Kulkarni-Limaye1992}. For each $a\in A$ define $L_{a}:=\{\alpha\in L:|\alpha|_{L}\leq\|a\|_{A}\}$ and $L_{A}:=\prod_{a\in A}L_{a}$ with the product topology. Each $L_{a}$ is compact by Theorem \ref{thr:CVFHB} noting that $L$ is locally compact by Remark \ref{rem:CVFEE}. Hence $L_{A}$ is compact by Tychonoff's Theorem. Now by (\ref{equ:RTXBOU}) we have $|\hat{x}(a)|_{L}\leq\|a\|_{A}$ for all $x\in\mathcal{M}(A)$ and $a\in A$. Therefore for each $x\in\mathcal{M}(A)$ we have $\hat{x}(a)\in L_{a}$ and so $\hat{x}$ is a point of $L_{A}$ and $\mathcal{M}(A)$ can be considered as a subset of $L_{A}$. Now the product topology on $L_{A}$ is the initial topology of the family of coordinate projections $P_{a}:L_{A}\rightarrow L_{a}$, $a\in A$. Since we have $P_{a}|_{\mathcal{M}(A)}=\hat{a}$ the Gelfand topology on $\mathcal{M}(A)$ is the initial topology of the family of coordinate projections restricted to $\mathcal{M}(A)$. Hence the topology on $\mathcal{M}(A)$ is the relative topology of $\mathcal{M}(A)$ as a subspace of $L_{A}$. Since $L_{A}$ is compact, any subspace of $L_{A}$ that is closed as a subset is also compact. Hence it remains to show that $\mathcal{M}(A)$ is a closed subset of $L_{A}$. Let $\varphi\in L_{A}$ be in the closure of $\mathcal{M}(A)$. Hence we have $|\varphi(a)|_{L}\leq\|a\|_{A}$ for all $a\in A$ and there is a net $(x_{\lambda})$ in $\mathcal{M}(A)$ converging to $\varphi$. Now since $L_{A}$ has the product topology, convergence in $L_{A}$ is coordinate-wise, see \cite[\S 8]{Willard}. Therefore for $a,b\in A$ we have
\begin{equation*}
\varphi(a+b)=\lim\hat{x}_{\lambda}(a+b)=\lim(\hat{x}_{\lambda}(a)+\hat{x}_{\lambda}(b))=\varphi(a)+\varphi(b).
\end{equation*}
Similarly, $\varphi(ab)=\varphi(a)\varphi(b)$ and $\varphi(\alpha)=\alpha$ for all $a,b\in A$ and $\alpha\in F$. Now since $\varphi$ takes values in $L$ and $L$ is a finite extension of $F$ we have that $\varphi(A)$ is a subfield of $L$ extending $F$ by Lemma \ref{lem:CVFPOL}. Hence since $A/\mbox{ker}(\varphi)\cong\varphi(A)$, by the first isomorphism theorem for rings, the kernel of $\varphi$ is a maximal ideal of $A$. Therefore $\mbox{ker}(\varphi)$ is an element of $\mathcal{M}_{0}(A)$. Further $\varphi$ defines an $F$-isomorphism from $A/\mbox{ker}(\varphi)$ to a subfield of $L$ extending $F$ by $\varphi'(a+\mbox{ker}(\varphi)):=\varphi(a)$. Hence we have obtained $y:=(\mbox{ker}(\varphi),\varphi')$ which is an element of $\mathcal{M}(A)$ with $\hat{y}=\varphi$ and so $\mathcal{M}(A)$ is closed as a subset of $L_{A}$.\\
We will now show that $g:\mathcal{M}(A)\rightarrow\mathcal{M}(A)$ is continuous. The set of preimages
\begin{equation*}
\mathcal{S}:=\{\hat{a}^{-1}(U):\hat{a}\in\hat{A}\mbox{ and }U\subseteq L\mbox{ is open}\}
\end{equation*}
is a sub-base for the Gelfand topology on $\mathcal{M}(A)$. To show that $g:\mathcal{M}(A)\rightarrow\mathcal{M}(A)$ is continuous it is enough to show that for each $V\in\mathcal{S}$ the preimage $g^{-1}(V)$ is also an element of $\mathcal{S}$. We note that $g:\mathcal{M}(A)\rightarrow\mathcal{M}(A)$ is a bijection since $g$ has finite order. So let $V=\hat{a}^{-1}(U)$ be an element of $\mathcal{S}$ for some $\hat{a}\in\hat{A}$ and open $U\subseteq L$. We have $x=(x_{0},\varphi)\in\mathcal{M}(A)$ an element of $V$ if and only if $\hat{a}(x)=\hat{x}(a)=\varphi(a+x_{0})$ is an element of $U$. Now consider the elements of the preimage $g^{-1}(V)$ and note that they are the elements $y=(y_{0},\phi)\in\mathcal{M}(A)$ such that $g(y)=(y_{0},g\circ\phi)\in V$. These are precisely the elements of $\mathcal{M}(A)$ such that
\begin{equation*}
\hat{a}(y)=\hat{y}(a)=\phi(a+y_{0})\in g^{(\mbox{ord}(g)-1)}(U).
\end{equation*}
And so $g^{-1}(V)=\hat{a}^{-1}\left(g^{(\mbox{ord}(g)-1)}(U)\right)$ and since $g$ is an isometry on $L$ we note that $g^{(\mbox{ord}(g)-1)}(U)$ is an open subset of $L$. Hence $g^{-1}(V)$ is an element of $\mathcal{S}$ as required.\\
We now show that the Gelfand transform is an isometry. Note that the following adapts material that can be found in \cite[Ch1]{Berkovich}. Let $a$ be an element of $A$. By (\ref{equ:RTXBOU}) we have $|\hat{a}(x)|_{L}=|\hat{x}(a)|_{L}\leq\|a\|_{A}$ for all $x\in\mathcal{M}(A)$ and so $\|\hat{a}\|_{\infty}\leq\|a\|_{A}$. For the reverse inequality let $\varepsilon>0$ and set $r:=\|\hat{a}\|_{\infty}+\varepsilon$. Then for all $x_{0}\in\mathcal{M}_{0}(A)$ we have
\begin{equation}
\label{equ:RTLER}
|a+x_{0}|_{A/x_{0}}=|\hat{x}(a)|_{L}=|\hat{a}(x)|_{L}\leq\|\hat{a}\|_{\infty}<r
\end{equation}
for some $x=(x_{0},\varphi)\in\mathcal{M}(A)$ by $(\ref{equ:RTEQU})$ and noting that $A$ has finite basic dimension. Now consider the commutative unital Banach $F$-algebra $A\langle rT\rangle$. Let $\mathcal{M}_{0}(A\langle rT\rangle)$ be the set of all proper closed prime ideals of $A\langle rT\rangle$ that are the kernels of bounded multiplicative seminorms on $A\langle rT\rangle$. Note that $\mathcal{M}_{0}(A\langle rT\rangle)$ is non-empty by Lemma \ref{lem:RTNMT}. We recall that the elements of $A\langle rT\rangle$ are of the form $\sum_{i=0}^{\infty}a_{i}T^{i}$ with
\begin{equation*}
\left\|\sum_{i=0}^{\infty}a_{i}T^{i}\right\|_{A,r^{-1}}=\sum_{i=0}^{\infty}\|a_{i}\|_{A}(r^{-1})^{i}=\sum_{i=0}^{\infty}\|a_{i}\|_{A}r^{-i}<\infty
\end{equation*}
and $a_{i}\in A$ for all $i\in\mathbb{N}_{0}$. Hence $A$ is a subring of $A\langle rT\rangle$ since for each $b\in A$ we have $b=bT^{0}$ an element of $A\langle rT\rangle$. Now for $y_{0}\in\mathcal{M}_{0}(A\langle rT\rangle)$ let $|\cdot|$ be a bounded multiplicative seminorm on $A\langle rT\rangle$ with $y_{0}=\mbox{ker}(|\cdot|)$. Since $|\cdot|$ is bounded we have
\begin{equation}
\label{equ:RTLERI}
|T|\leq\|T\|_{A,r^{-1}}=r^{-1}.
\end{equation}
Moreover since for $b\in A$ we have $\|bT^{0}\|_{A,r^{-1}}=\|b\|_{A}(r^{-1})^{0}=\|b\|_{A}$, the restriction $|\cdot||_{A}$ is a bounded multiplicative seminorm on $A$. Further $m_{0}:=\mbox{ker}(|\cdot||_{A})$ is closed as a subset of $A$ by Lemma \ref{lem:RTQRV} and so $m_{0}$ is an element of $\mathcal{M}_{0}(A)$. Hence $m_{0}$ is a maximal ideal of $A$ by Remark \ref{rem:RTFBD}. In particular $|b+m_{0}|_{A/m_{0}}:=|b|$, for $b\in A$, is the unique valuation on $A/m_{0}$ extending the valuation on $F$ as we have seen earlier in this proof for other elements of $\mathcal{M}_{0}(A)$. Therefore by (\ref{equ:RTLER}) and (\ref{equ:RTLERI}) we have
\begin{equation*}
|aT|=|a||T|\leq|a+m_{0}|_{A/m_{0}}r^{-1}<rr^{-1}=1.
\end{equation*}
Furthermore $1=|1|\leq|1-aT|+|aT|$ and so we have $|1-aT|\geq1-|aT|>0$. Therefore $1-aT$ is not an element of $y_{0}$ since $y_{0}$ is the kernel of $|\cdot|$. Since $y_{0}$ was any element of $\mathcal{M}_{0}(A\langle rT\rangle)$ we have $1-aT\not\in y_{0}$ for all $y_{0}\in\mathcal{M}_{0}(A\langle rT\rangle)$. Hence by Lemma \ref{lem:RTNMT} we note that $1-aT$ is invertible in $A\langle rT\rangle$. Therefore by Lemma \ref{lem:RTRRT} the series $\sum_{i=0}^{\infty}\|a^{i}\|_{A}r^{-i}$ converges. In particular we can find $N\in\mathbb{N}$ such that for all $n>N$ we have $\|a^{2^{n}}\|_{A}r^{-2^{n}}<\frac{1}{2}$ giving $(\|a\|_{A}r^{-1})^{2^{n}}<\frac{1}{2}$ since $\|\cdot\|_{A}$ is square preserving. Hence $\|a\|_{A}<r=\|\hat{a}\|_{\infty}+\varepsilon$ and since $\varepsilon>0$ was arbitrary we have $\|a\|_{A}\leq\|\hat{a}\|_{\infty}$ and so $\|a\|_{A}=\|\hat{a}\|_{\infty}$ as required.\\
What remains to be shown is that the elements of $\hat{A}$ are also elements of $C(\mathcal{M}(A),g,g)$ and that $\mathcal{M}(A)$ is totally disconnected. For $\hat{a}\in\hat{A}$ and $x=(x_{0},\varphi)\in\mathcal{M}(A)$ we have
\begin{align*}
\hat{a}(g(x))=\hat{a}((x_{0},g\circ\varphi))=&\widehat{(x_{0},g\circ\varphi)}(a)\\
=&g\circ\varphi(a+x_{0})\\
=&g(\varphi(a+x_{0}))\\
=&g\left(\widehat{(x_{0},\varphi)}(a)\right)=g(\hat{x}(a))=g(\hat{a}(x))
\end{align*}
and so $\hat{a}$ is an element of $C(\mathcal{M}(A),g,g)$. Finally it is immediate that $\mathcal{M}(A)$ is totally disconnected since $\hat{A}$ separates the points of $\mathcal{M}(A)$, the elements of $\hat{A}$ are continuous functions from $\mathcal{M}(A)$ to $L$, the image of a connected component is connected for continuous functions and $L$ is totally disconnected. In particular see the proof of Theorem \ref{thr:UAXtop}. This completes the proof of Theorem \ref{thr:RTREPLF}.
\end{proof}
In the next chapter we will survey some existing results in the Archimedean non-commutative setting and also consider the possibility of their generalisation to the non-Archimedean setting. We will then finish by noting some of the open questions arising from the thesis.
\chapter[Non-commutative generalisation and open questions]{Non-commutative generalisation and open questions}
\label{cha:NG}
In recent years a theory of non-commutative real function algebras has been developed by Jarosz, see \cite{Jarosz} and \cite{Abel-Jarosz}. In the first section of this short chapter we survey and remark upon some of this non-commutative Archimedean theory and consider the possibility of non-commutative non-Archimedean analogs. In the second section we note some of the open questions arising from the thesis.
\section{Non-commutative generalisation}
\label{sec:NGNCG}
\subsection{Non-commutative real function algebras}
\label{subsec:NGNCRFA}
In the recent theory of non-commutative real function algebras the continuous functions involved take values in Hamilton's real quaternions, $\mathbb{H}$, which are an example of a non-commutative complete Archimedean division ring and $\mathbb{R}$-algebra. Viewing $\mathbb{H}$ as a real vector space, the valuation on $\mathbb{H}$ is the Euclidean norm, which is complete, Archimedean and indeed a valuation since it is multiplicative on $\mathbb{H}$. To put $\mathbb{H}$ into context, as in the case of complete Archimedean fields, there are very few unital division algebras over the reals with the Euclidean norm as a valuation. Up to isomorphism they are $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and the octonions $\mathbb{O}$. We note that the octonions are non-associative. The proof that there are no other unital division algebras over the reals with the Euclidean norm as a valuation is given by Hurwitz's 1, 2, 4, 8 Theorem, see \cite[Ch1]{Shapiro} and \cite{Lewis}. In particular for such an algebra $\mathbb{A}$ the square of the Euclidean norm is a regular quadratic form on $\mathbb{A}$ and since for $\mathbb{A}$ the Euclidean norm is a valuation it is multiplicative. This shows that $\mathbb{A}$ is a real composition algebra to which Hurwitz's 1, 2, 4, 8 Theorem can be applied.\\
Here we only briefly consider non-commutative real function algebras and hence the reader is also referred to \cite{Jarosz}. Note I am unaware of any such developments involving the octonions. Here is Jarosz's analog of $C(X,\tau)$ from Definition \ref{def:UARefa}.
\begin{definition}
\label{def:NGNCRFA}
Let $\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})$ be the group of all automorphisms on $\mathbb{H}$ that are the identity on $\mathbb{R}$. Let $X$ be a compact space and $\mbox{Hom}(X)$ be the group of homeomorphisms on $X$. For a group homomorphism $\Phi:\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})\rightarrow\mbox{Hom}(X)$, $\Phi(T)=\Phi_{T}$, we define
\begin{equation*}
C_{\mathbb{H}}(X,\Phi):=\{f\in C_{\mathbb{H}}(X):f(\Phi_{T}(x))=T(f(x))\mbox{ for all }x\in X\mbox{ and } T\in\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})\}.
\end{equation*}
\end{definition}
\begin{remark}
\label{rem:NGPHI}
Concerning Definition \ref{def:NGNCRFA}.
\begin{enumerate}
\item[(i)]
The groups $\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})$ and $\mbox{Hom}(X)$ in Definition \ref{def:NGNCRFA} have composition as their group operation. We note that the map $\ast:\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})\times C_{\mathbb{H}}(X)\rightarrow C_{\mathbb{H}}(X)$ given by $(T\ast f)(x):=T^{-1}(f(\Phi_{T}(x)))$ is similar to a group action on $C_{\mathbb{H}}(X)$, only with the usual associativity replaced by $(T_{1}\circ T_{2})\ast f=T_{2}\ast(T_{1}\ast f)$.
\item[(ii)]
There is an interesting similarity between Definition \ref{def:NGNCRFA} and Definition \ref{def:CGBFA} of Basic function algebras. Let $X$ be a compact Hausdorff space, $F$ a complete valued field and $L$ a finite extension of $F$. Further let $\langle g\rangle$ be the cyclic group generated by some $g\in\mbox{Gal}(^{L}/_{F})$ and similarly let $\langle\tau\rangle$ be the cyclic group generated by some homeomorphism $\tau:X\rightarrow X$. Then there exists a surjective group homomorphism $\Phi:\langle g\rangle\rightarrow\langle\tau\rangle$ if and only if $\mbox{ord}(\tau)|\mbox{ord}(g)$. To see this suppose such a surjective group homomorphism exists. Then there are $m,n\in\mathbb{N}$ such that $\Phi(g^{(m)})=\mbox{id}$ and $\Phi(g^{(n)})=\tau$. This gives
\begin{align*}
\tau^{(\mbox{ord}(g))}=\Phi(g^{(n)})^{(\mbox{ord}(g))}=&\Phi(g^{(n\mbox{ord}(g))})\\
=&\Phi(\mbox{id})\\
=&\mbox{id}\circ\Phi(\mbox{id})\\
=&\Phi(g^{(m)})\circ\Phi(\mbox{id})\\
=&\Phi(g^{(m)}\circ\mbox{id})=\Phi(g^{(m)})=\mbox{id}
\end{align*}
and so $\mbox{ord}(\tau)|\mbox{ord}(g)$. Conversely if $\mbox{ord}(\tau)|\mbox{ord}(g)$ then $\Phi$ defined by $\Phi(g):=\tau$ will do. It is an interesting question then whether the definition of basic function algebras can be further generalised by utilizing group homomorphisms as Definition \ref{def:NGNCRFA} suggests noting that $\Phi$ is onto for some subgroup of $\mbox{Hom}(X)$. In particular, with reference to Definition \ref{def:CGBFA}, we have considered basic $^{L}/_{L^{g}}$ function algebras where $g$ is an element of $\mbox{Gal}(^{L}/_{F})$. We note that $L$ is a cyclic extension of $L^{g}$ by the fundamental theorem of Galois theory. Therefore it is interesting to consider the possibility of basic $^{L}/_{F}$ function algebras where $L$ is a Galois extension of $F$ but not necessarily a cyclic extension. Such group homomorphisms might also be useful in cases involving infinite extensions of $F$.
\item[(iii)]
Turning our attention back to the non-commutative setting, as a conjecture I suggest that Definition \ref{def:NGNCRFA} may also be useful if $\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})$ is replaced by a subgroup, particularly when considering extensions of the algebra.
\end{enumerate}
\end{remark}
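The divisibility criterion in Remark \ref{rem:NGPHI}(ii) can be checked by brute force for small cyclic groups. The sketch below is an illustration only: it identifies $\langle g\rangle$ with $\mathbb{Z}_{n}$ and $\langle\tau\rangle$ with $\mathbb{Z}_{m}$, where a homomorphism is determined by the image $e$ of a generator subject to $ne\equiv 0\pmod{m}$, and surjectivity amounts to $\gcd(e,m)=1$.

```python
from math import gcd

def surjective_homs(n, m):
    """Group homomorphisms Z_n -> Z_m are determined by the image e of a
    generator, subject to n*e = 0 (mod m); such a homomorphism is
    surjective iff gcd(e, m) = 1.  Return the images e that define
    surjective homomorphisms."""
    return [e for e in range(m) if (n * e) % m == 0 and gcd(e, m) == 1]

# A surjective group homomorphism <g> -> <tau> exists iff ord(tau) | ord(g):
for n in range(1, 13):          # n plays the role of ord(g)
    for m in range(1, 13):      # m plays the role of ord(tau)
        assert bool(surjective_homs(n, m)) == (n % m == 0)
```

The converse direction of the remark corresponds here to the choice $e=1$, i.e. $\Phi(g):=\tau$, which is always available when $m\mid n$.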
Definition \ref{def:NGNCRFA} has been used by Jarosz in the representation of non-commutative real Banach algebras with square preserving norm as follows.
\begin{definition}
\label{def:NGFNON}
A real algebra $A$ is {\em{fully non-commutative}} if every nonzero multiplicative linear functional $\varphi:A\rightarrow\mathbb{H}$ is surjective.
\end{definition}
\begin{theorem}
\label{thr:NGREPHR}
Let $A$ be a non-commutative real Banach algebra with $\|a^{2}\|_{A}=\|a\|_{A}^{2}$ for all $a\in A$. Then there is a compact set $X$ and an isomorphism $\Phi:\mbox{Gal}(^{\mathbb{H}}/_{\mathbb{R}})\rightarrow\mbox{Hom}(X)$ such that $A$ is isometrically isomorphic with a subalgebra $\hat{A}$ of $C_{\mathbb{H}}(X,\Phi)$. Furthermore $a\in A$ is invertible if and only if the corresponding element $\hat{a}\in\hat{A}$ does not vanish on $X$. If $A$ is fully non-commutative then $\hat{A}=C_{\mathbb{H}}(X,\Phi)$.
\end{theorem}
Jarosz also gives the following Stone-Weierstrass theorem type result.
\begin{theorem}
\label{thr:NGNCSW}
Let $X$ be a compact Hausdorff space and let $A$ be a fully non-commutative closed subalgebra of $C_{\mathbb{H}}(X)$. Then $A=C_{\mathbb{H}}(X)$ if and only if $A$ strongly separates the points of $X$, that is for all $x_{1},x_{2}\in X$ with $x_{1}\not=x_{2}$ there is $f\in A$ satisfying $f(x_{1})\not=f(x_{2})=0$.
\end{theorem}
\subsection{Non-commutative non-Archimedean analogs}
\label{subsec:NGNCNA}
Non-commutative, non-Archimedean analogs of uniform algebras have yet to be seen. Hence in this subsection we give an example of a non-commutative extension of a complete non-Archimedean field which would be appropriate when considering such analogs of uniform algebras. We first have the following definition from the general theory of quaternion algebras. The main reference for this subsection is \cite[Ch3]{Lam} but \cite{Lewis} is also useful.
\begin{definition}
\label{def:NGGTQA}
Let $F$ be a field, with characteristic not equal to 2, and $s,t\in F^{\times}$ where $s=t$ is allowed. We define the {\em{quaternion $F$-algebra}} $(\frac{s,t}{F})$ as follows. As a 4-dimensional vector space over $F$ we define
\begin{equation*}
\left(\frac{s,t}{F}\right):=\{a+bi+cj+dk:a,b,c,d\in F\}
\end{equation*}
with $\{1,i,j,k\}$ as a natural basis giving the standard coordinate-wise addition and scalar multiplication. As an $F$-algebra, multiplication in $(\frac{s,t}{F})$ is given by
\begin{equation*}
i^{2}=s,\quad j^{2}=t,\quad k=ij=-ji
\end{equation*}
together with the usual distributive law and multiplication in $F$.
\end{definition}
Hamilton's real quaternions, $\mathbb{H}:=(\frac{-1,-1}{\mathbb{R}})$ with the Euclidean norm, is an example of a non-commutative, complete valued, Archimedean division algebra over $\mathbb{R}$. It is not the case that every quaternion algebra $(\frac{s,t}{F})$ will be a division algebra, although there are many examples that are. For our purposes we have the following example.
\begin{example}
\label{exa:NGNCNA}
Using $\mathbb{Q}_{5}$, the complete non-Archimedean field of 5-adic numbers, define
\begin{equation*}
\mathbb{H}_{5}:=\left(\frac{5,2}{\mathbb{Q}_{5}}\right).
\end{equation*}
Then for $q,r\in\mathbb{H}_{5}$, $q=a+bi+cj+dk$, the conjugation on $\mathbb{H}_{5}$ given by
\begin{equation*}
\bar{q}:=a-bi-cj-dk
\end{equation*}
is such that $\overline{q+r}=\bar{q}+\bar{r}$, $\overline{qr}=\bar{r}\bar{q}$, $\bar{q}q=q\bar{q}=a^{2}-5b^{2}-2c^{2}+10d^{2}$ with $\bar{q}q\in\mathbb{Q}_{5}$. Further
\begin{equation*}
|q|_{\mathbb{H}_{5}}:=\sqrt{|\bar{q}q|_{5}}
\end{equation*}
is a complete non-Archimedean valuation on $\mathbb{H}_{5}$, where $|\cdot|_{5}$ is the 5-adic valuation on $\mathbb{Q}_{5}$. In particular $\mathbb{H}_{5}$, together with $|\cdot|_{\mathbb{H}_{5}}$, is an example of a non-commutative, complete valued, non-Archimedean division algebra over $\mathbb{Q}_{5}$. When showing this directly it is useful to know that for $a,b,c,d\in\mathbb{Q}_{5}$ we have
\begin{equation*}
\nu_{5}(a^{2}-5b^{2}-2c^{2}+10d^{2})=\mbox{min}\{\nu_{5}(a^{2}),\nu_{5}(5b^{2}),\nu_{5}(2c^{2}),\nu_{5}(10d^{2})\}
\end{equation*}
where $\nu_{5}$ is the 5-adic valuation logarithm as defined in Example \ref{exa:CVFPN}. Given the above, we will confirm that $|\cdot|_{\mathbb{H}_{5}}$ is multiplicative. For more details please see the suggested references \cite[Ch3]{Lam} and \cite{Lewis}. Let $q,r\in\mathbb{H}_{5}$ and note that we have $\bar{r}\bar{q}qr=\bar{q}q\bar{r}r$ since $\bar{q}q$ is an element of $\mathbb{Q}_{5}$. Therefore
\begin{align*}
|qr|_{\mathbb{H}_{5}}=\sqrt{|\overline{qr}qr|_{5}}=&\sqrt{|\bar{r}\bar{q}qr|_{5}}\\
=&\sqrt{|\bar{q}q\bar{r}r|_{5}}\\
=&\sqrt{|\bar{q}q|_{5}|\bar{r}r|_{5}}=\sqrt{|\bar{q}q|_{5}}\sqrt{|\bar{r}r|_{5}}=|q|_{\mathbb{H}_{5}}|r|_{\mathbb{H}_{5}}
\end{align*}
as required.
\end{example}
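The identities of Example \ref{exa:NGNCNA} lend themselves to a direct numeric check on quaternions with rational coefficients, which are dense in $\mathbb{H}_{5}$. The following sketch is an illustration only: it implements the standard multiplication table of $(\frac{s,t}{F})$ (so $i^{2}=5$, $j^{2}=2$, $k=ij=-ji$, $k^{2}=-10$) and verifies both the multiplicativity of the norm form $\bar{q}q$ and the stated formula for $\nu_{5}$.

```python
from fractions import Fraction
import math, random

S, T = 5, 2  # H_5 = (5,2 / Q_5): i^2 = 5, j^2 = 2, k = ij = -ji, k^2 = -10

def qmul(q, r):
    """Multiply q = (a, b, c, d) representing a + bi + cj + dk."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = r
    return (a1*a2 + S*b1*b2 + T*c1*c2 - S*T*d1*d2,
            a1*b2 + b1*a2 - T*c1*d2 + T*d1*c2,
            a1*c2 + c1*a2 + S*b1*d2 - S*d1*b2,
            a1*d2 + d1*a2 + b1*c2 - c1*b2)

def norm_form(q):
    """qbar q = a^2 - 5 b^2 - 2 c^2 + 10 d^2, an element of Q_5."""
    a, b, c, d = q
    return a*a - S*b*b - T*c*c + S*T*d*d

def nu5(x):
    """5-adic valuation logarithm of a rational x (math.inf for x = 0)."""
    x = Fraction(x)
    if x == 0:
        return math.inf
    v, n = 0, x.numerator
    while n % 5 == 0:
        v, n = v + 1, n // 5
    d = x.denominator
    while d % 5 == 0:
        v, d = v - 1, d // 5
    return v

random.seed(1)
def rand_q():
    return tuple(Fraction(random.randint(-50, 50), random.randint(1, 20))
                 for _ in range(4))

for _ in range(200):
    q, r = rand_q(), rand_q()
    # multiplicativity of the norm form, hence of |.|_{H_5}
    assert norm_form(qmul(q, r)) == norm_form(q) * norm_form(r)
    # nu_5 of the norm form equals the minimum over its four terms
    a, b, c, d = q
    terms = [a*a, -S*b*b, -T*c*c, S*T*d*d]
    assert nu5(norm_form(q)) == min(nu5(t) for t in terms)
```

The second assertion never fails because the terms of even $5$-adic valuation cannot cancel ($2$ is a non-residue mod $5$) and likewise for the terms of odd valuation, exactly as the example's $\nu_{5}$ formula asserts.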
More generally for the $p$-adic field $\mathbb{Q}_{p}$ the quaternion algebra $(\frac{p,u}{\mathbb{Q}_{p}})$ will be a division algebra as long as $u$ is a unit of $\{a\in\mathbb{Q}_{p}:|a|_{p}\leq 1\}$, i.e. $|u|_{p}=1$, and $\mathbb{Q}_{p}(\sqrt{u})$ is a quadratic extension of $\mathbb{Q}_{p}$.
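For odd $p$ and a unit $u$, whether $\mathbb{Q}_{p}(\sqrt{u})$ is a quadratic extension reduces, by Hensel's lemma, to checking that the residue of $u$ is a quadratic non-residue mod $p$, and Euler's criterion makes this a one-line computation. The sketch below is an illustration only and covers just this residue criterion for odd primes.

```python
def is_unit_nonresidue(u, p):
    """For an odd prime p and a unit u mod p, Euler's criterion:
    u is a quadratic non-residue mod p iff u^((p-1)/2) = -1 (mod p)."""
    return u % p != 0 and pow(u, (p - 1) // 2, p) == p - 1

# u = 2 is a non-residue mod 5, so Q_5(sqrt(2)) is a quadratic extension
assert is_unit_nonresidue(2, 5)

# candidate units u giving quadratic extensions Q_p(sqrt(u)) at a few odd primes
for p in [3, 5, 7, 11]:
    print(p, [u for u in range(1, p) if is_unit_nonresidue(u, p)])
```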
\section{Open questions}
\label{sec:NGOQ}
There are many open questions related to the content of this thesis; I had intended to investigate more of them, but time did not permit. Many of these questions come from the need to generalise established Archimedean results, whilst others arise from the developing theory itself. We now consider some of these questions and note that several of them appear to be quite accessible.
\begin{enumerate}
\item[(Q1)]
J. Wermer gave the following theorem in 1963.
\begin{theorem}
\label{thr:NGWER}
Let $X$ be a compact Hausdorff space, $A\subseteq C_{\mathbb{C}}(X)$ a complex uniform algebra and $\Re(A):=\{\Re(f):f=\Re(f)+i\Im(f)\in A\}$ the set of the real components of the functions in $A$. If $\Re(A)$ is a ring then $A=C_{\mathbb{C}}(X)$.
\end{theorem}
The following analog of Theorem \ref{thr:NGWER} for real function algebras was given by S. H. Kulkarni and N. Srinivasan in \cite{Kulkarni-Srinivasan}, although I have not used their notation.
\begin{theorem}
\label{thr:NGWKS}
Let $X$ be a compact Hausdorff space, $\tau$ a topological involution on $X$ and $A$ a $^{\mathbb{C}}/_{\mathbb{R}}$ function algebra on $(X,\tau,\bar{z})$, i.e. a real function algebra. If $\Re(A)$ is a ring then $A=C(X,\tau,\bar{z})$.
\end{theorem}
It is interesting to know whether Theorem \ref{thr:NGWKS} can be generalised to all $^{L}/_{L^{g}}$ function algebras on $(X,\tau,g)$. Of course the result would be trivial if, in the non-Archimedean setting, the basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ is the only $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$. By analogy with $\Re(A)$ above, in this case we should ask whether the set of $L^{g}$ components of the functions in $C(X,\tau,g)$ forms a ring.
\item[(Q2)]
As alluded to in (Q1) we have not given an example in the non-Archimedean setting of a $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ that is not basic. We need to know whether the basic function algebras are the only such examples. Theorem \ref{thr:UAKapl}, Kaplansky's version of the Stone-Weierstrass Theorem, may be important here. Further, even if in the non-Archimedean setting there is a $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$ that is not basic, such an algebra might still be isometrically isomorphic to some basic function algebra.
\item[(Q3)]
With reference to Theorem \ref{thr:RTREPLF} we note that there are plenty of examples of commutative, unital Banach $F$-algebras with finite basic dimension in the non-Archimedean setting. Indeed if $K$ is not only a finite Galois extension of $F$ but also a cyclic extension then taking $A:=K$ gives such an algebra. In this case the character space $\mathcal{M}(A)$ will be finite with each element given by an element of $\mbox{Gal}(^{K}/_{F})$, see the proof of Theorem \ref{thr:RTREPLF} for details. However such examples are not particularly interesting and it would be good to know whether all $^{L}/_{L^{g}}$ function algebras on $(X,\tau,g)$ have finite basic dimension so that Theorem \ref{thr:RTREPLF} becomes closer to a characterisation result. We recall that all commutative unital complex Banach algebras and commutative unital real Banach algebras have finite basic dimension, see Remark \ref{rem:RTFBD}.
\item[(Q4)]
With reference to Definition \ref{def:CGBFA} of the basic $^{L}/_{L^{g}}$ function algebra on $(X,\tau,g)$, the map $\sigma(f)=g^{(\mbox{ord}(g)-1)}\circ f\circ\tau$ on $C_{L}(X)$ is such that each $f\in C_{L}(X)$ is an element of $C(X,\tau,g)$ if and only if $\sigma(f)=f$. We have seen that $\sigma$ is either an algebraic involution on $C_{L}(X)$ or an algebraic element of finite order on $C_{L}(X)$. It should be established whether every such involution and element of finite order on $C_{L}(X)$ has the form of $\sigma$ for some $g$ and $\tau$. This is the case for real function algebras, see \cite[p29]{Kulkarni-Limaye1992}.
\item[(Q5)]
As described in Remark \ref{rem:NGPHI} it might be possible to generalise the definition of Basic function algebras by involving a group homomorphism in the definition. The algebras currently given by Definition \ref{def:CGBFA} could then appropriately be referred to as cyclic basic function algebras given that the group $\mbox{Gal}(^{L}/_{L^{g}})$ is cyclic. Further the possibility of generalising the definition of Basic function algebras to the case where the functions take values in some infinite extension of the underlying field over which the algebra is a vector space should also be considered. The involvement of a group homomorphism might also be useful in this case as well as some more of the theory from \cite{Berkovich}.
\item[(Q6)]
As seen in Subsection \ref{subsec:NGNCNA} the general theory of quaternion algebras provides the necessary structures for generalising the theory of non-commutative real function algebras to the non-Archimedean setting. Further with reference to Subsection \ref{subsec:CGBE} it would be interesting to see what sort of lattice of basic extensions the non-commutative real function algebras have. We can also look at this in the non-Archimedean setting along with the residue algebra.
\item[(Q7)]
A proof of the following theorem can be found in \cite[p18]{Kulkarni-Limaye1992}.
\begin{theorem}
\label{thr:NGCOM}
Let $A$ be a unital Banach algebra in the Archimedean setting satisfying one of the following conditions:
\begin{enumerate}
\item[(i)]
the algebra $A$ is a complex algebra and there exists some positive constant $c$ such that $\|a\|_{A}^{2}\leq c\|a^{2}\|_{A}$ for all $a\in A$;
\item[(ii)]
the algebra $A$ is a real algebra and there exists some positive constant $c$ such that $\|a\|_{A}^{2}\leq c\|a^{2}+b^{2}\|_{A}$ for all $a,b\in A$ with $ab=ba$.
\end{enumerate}
Then $A$ is commutative.
\end{theorem}
It would be interesting to establish whether there is such a theorem for all unital Banach $F$-algebras. If not then perhaps some special cases are possible in the non-Archimedean setting. The proof of Theorem \ref{thr:NGCOM} uses Liouville's theorem and some spectral theory in the Archimedean setting. Both of these are different in the non-Archimedean setting, see Theorem \ref{thr:FAAULT} and Subsection \ref{subsec:FAASE}.
\item[(Q8)]
It might be interesting to investigate the isomorphism classes of basic function algebras. That is, for a given basic function algebra $A$, are there other basic function algebras that are isometrically isomorphic to $A$?
\item[(Q9)]
It is interesting to consider whether the Kaplansky spectrum of Remark \ref{rem:FAAFC} can be used for some cases in the non-Archimedean setting and, if so, whether it is one such definition in some larger family of definitions of spectrum applicable in the non-Archimedean setting.
\item[(Q10)]
More broadly the established theory of Banach algebras provides a large supply of topics that can be considered for generalisation over complete valued fields. In addition to several of the other references included in this thesis, \cite{Dales} will be of much interest when considering such possibilities. One obvious example is the generalisation of automatic continuity results, that is, of conditions on a Banach $F$-algebra that force homomorphisms from, or to, that algebra to be continuous. There is one such result in this thesis, noting that in Theorem \ref{thr:RTREPLF} the elements of $\mathcal{M}(A)$ are automatically continuous. Further, \cite{Bachman} may also be of interest concerning function algebras.
\item[(Q11)]
As mentioned in Remark \ref{rem:UASCS} there is a possible generalisation of the Swiss cheese classicalisation theorem to the Riemann sphere and possibly to a more general class of metric spaces.
\item[(Q12)]
It might be interesting to consider generalising over all complete valued fields the theory of algebraic extensions of commutative unital normed algebras. See the survey paper \cite{Dawson} for details.
\item[(Q13)]
The possibility of generalising $C^{*}$-algebras over complete valued fields is interesting but perhaps not straightforward. The Levi-Civita field might be of interest here since it is totally ordered such that the order topology agrees with the valuation topology; hence it might be possible to define positive elements in this case. Perhaps the algebraic elements of finite order mentioned in (Q4) are relevant. There is also a monograph by Goodearl from 1982 on real $C^{*}$-algebras that might be of use. The possibility of a non-Archimedean theory of von Neumann algebras might also be a good place to start.
\end{enumerate} |
\section{Multi-task Multi-agent RL}
This section introduces MT-MARL under partial observability. We formalize single-task MARL using the Decentralized Partially Observable Markov Decision Process (Dec-POMDP), defined as $\langle \mathcal{I}, \mathcal{S}, \bm{\mathcal{A}}, \mathcal{T}, \mathcal{R}, \bm{\Omega}, \mathcal{O}, \gamma \rangle$, where $\mathcal{I}$ is a set of $n$ agents, $\mathcal{S}$ is the state space, $\bm{\mathcal{A}} = \times_i \mathcal{A}^{(i)}$ is the joint action space, and $\bm{\Omega} = \times_i \Omega^{(i)}$ is the joint observation space \cite{bernstein2002complexity}.\footnote{Superscripts indicate \emph{local} parameters for agent $i \in \mathcal{I}$.} Each agent $i$ executes action $a^{(i)} \in \mathcal{A}^{(i)}$, where joint action $\bm{a} = \langle a^{(1)}, \ldots, a^{(n)} \rangle$ causes environment state $s \in \mathcal{S}$ to transition with probability $P(s'|s,\bm{a}) = \mathcal{T}(s,\bm{a},s')$. At each timestep, each agent receives observation $o^{(i)} \in \Omega^{(i)}$, with joint observation probability $P(\bm{o}|s',\bm{a}) = \mathcal{O}(\bm{o},s',\bm{a})$, where $\bm{o} = \langle o^{(1)}, \ldots, o^{(n)} \rangle$. Let local observation history at timestep $t$ be $\vec{o_t}^{(i)} = (o^{(i)}_{1},\ldots,o^{(i)}_{t})$, where $\vec{o_t}^{(i)} \in \vec{O_t}^{(i)}$. Single-agent policy $\pi^{(i)}: \vec{O_t}^{(i)} \mapsto \mathcal{A}^{(i)}$ conducts action selection, and the joint policy is denoted $\bm{\pi} = \langle \pi^{(1)}, \ldots, \pi^{(n)} \rangle$. For simplicity, we consider only pure joint policies, as finite-horizon Dec-POMDPs have at least one pure joint optimal policy \cite{oliehoek2008optimal}. The team receives a joint reward $r_t = \mathcal{R}(s_t,\bm{a}_t) \in \mathbb{R}$ at each timestep $t$, the objective being to maximize the value (or expected return), $V = \mathbb{E}[\sum_{t=0}^{H}\gamma^{t}r_t]$. While Dec-POMDP \emph{planning} approaches assume agents do not observe intermediate rewards, we make the typical RL assumption that they do.
This assumption is consistent with prior work in MARL \cite{banerjee2012sample,peshkin2000learning}.
ILs provide a scalable way to learn in Dec-POMDPs, as each agent's policy maps local observations to actions. However, the domain appears \emph{non-stationary} from the perspective of each Dec-POMDP agent, a property we formalize by extending the definition by \citet{laurent2011world}.
\begin{definition}
Let $\bm{a}^{-(i)} = \bm{a} \setminus \{a^{(i)}\}$. Local decision process for agent $i$ is stationary if, for all timesteps $t,u \in \mathbb{N}$,
\begin{equation}\label{eq:stationary_trans}
\thinmuskip=0mu
\sum_{\mathclap{\hspace{13pt}\bm{a}_t^{-(i)}\in \bm{\mathcal{A}}^{-(i)}}} P(s'|s,\langle a^{(i)}, \bm{a}_{t}^{-(i)} \rangle )
= \sum_{\mathclap{\hspace{13pt}\bm{a}_u^{-(i)}\in \bm{\mathcal{A}}^{-(i)}}} P(s'|s, \langle a^{(i)}, \bm{a}_{u}^{-(i)} \rangle),
\end{equation}
and
\begin{equation}\label{eq:stationary_obs}
\thinmuskip=0mu
\sum_{\mathclap{\hspace{22pt}\bm{a}_t^{-(i)}\in \bm{\mathcal{A}}^{-(i)}}} P(o^{(i)}|s',\langle a^{(i)}, \bm{a}_{t}^{-(i)} \rangle) = \sum_{\mathclap{\bm{a}_u^{-(i)}\in \bm{\mathcal{A}}^{-(i)}}} P(o^{(i)}|s',\langle a^{(i)}, \bm{a}_{u}^{-(i)} \rangle).
\end{equation}
\end{definition}
Letting $\bm{\pi}^{-(i)} = \bm{\pi} \setminus \{\pi^{(i)}\}$, non-stationarity from the local perspective of agent $i$ follows as, in general, $\bm{a}_t^{-(i)} = \bm{\pi}^{-(i)}(\vec{\bm{o}}_t) \neq \bm{\pi}^{-(i)}(\vec{\bm{o}}_u) = \bm{a}_u^{-(i)}$, which causes violation of \eqref{eq:stationary_trans} and \eqref{eq:stationary_obs}. Thus, MARL extensions of single-agent algorithms that assume stationary environments, such as Dec-DRQN, are inevitably ill-fated. This motivates our decision to first design a single-task, decentralized MARL approach targeting non-stationarity in Dec-POMDP learning.
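To make the violation concrete, consider the following toy computation (all numbers invented; two states, a fixed local action for agent $i$, and a single teammate). The transition kernel that agent $i$ effectively faces shifts as soon as the teammate's policy changes:

```python
# P[s'][teammate_action] = transition probability to s', for a fixed
# state s and a fixed local action of agent i (invented values).
P = {0: {"a1": 0.9, "a2": 0.2},
     1: {"a1": 0.1, "a2": 0.8}}

def marginal(pi_teammate):
    """Transition kernel agent i effectively faces, marginalizing the
    teammate's action under the teammate's current policy."""
    return {sp: sum(pi_teammate[u] * P[sp][u] for u in pi_teammate)
            for sp in P}

early = marginal({"a1": 1.0, "a2": 0.0})  # teammate plays a1 at time t
late  = marginal({"a1": 0.0, "a2": 1.0})  # teammate switched to a2 at time u
assert early != late  # the local decision process is non-stationary
```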
The MT-MARL problem in partially observable settings is now introduced by extending the single-agent, fully-observable definition of \citet{fernandez2006probabilistic}.
\begin{definition} A partially-observable MT-MARL Domain $\mathcal{D}$ is a tuple $\langle \mathcal{I}, \mathcal{S}, \bm{\mathcal{A}}, \bm{\Omega}, \gamma \rangle$, where $\mathcal{I}$ is the set of agents, $\mathcal{S}$ is the environment state space, $\bm{\mathcal{A}}$ is the joint action space, $\bm{\Omega}$ is the joint observation space, and $\gamma$ is the discount factor.
\end{definition}
\begin{definition} A partially-observable MT-MARL Task $T_j$ is a tuple $\langle \mathcal{D}, \mathcal{T}_j, \mathcal{R}_j, \mathcal{O}_j \rangle$, where $\mathcal{D}$ is a shared underlying domain; $\mathcal{T}_j$, $\mathcal{R}_j$, $\mathcal{O}_j$ are, respectively, the task-specific transition, reward, and observation functions.
\end{definition}
In MT-MARL, each episode \mbox{$e \in \{1,\ldots,E\}$} consists of a randomly sampled Task $T_j$ from domain $\mathcal{D}$. The team observes the task ID, $j$, during learning, but not during execution. The objective is to find a joint policy that maximizes average empirical execution-time return in all $E$ episodes, $\bar{V} = \frac{1}{E} \sum_{e=0}^{E} \sum_{t=0}^{H_e}\gamma^{t}\mathcal{R}_e(s_t,\bm{a}_t)$, where $H_e$ is the time horizon of episode $e$.
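Since $\bar{V}$ is simply an average of per-episode discounted returns, it can be computed directly from reward traces; a small sketch (invented sparse-reward traces with varying horizons $H_e$, helper names our own):

```python
GAMMA = 0.95

def discounted_return(rewards, gamma=GAMMA):
    # sum_t gamma^t r_t over one episode
    return sum(gamma**t * r for t, r in enumerate(rewards))

def average_return(episodes, gamma=GAMMA):
    # bar V: mean of per-episode discounted returns; each episode may come
    # from a different sampled task and have its own horizon H_e.
    return sum(discounted_return(rs, gamma) for rs in episodes) / len(episodes)

# Invented reward traces: sparse terminal rewards, differing horizons.
episodes = [[0, 0, 1], [0, 0, 0, 1], [0, 1]]
v_bar = average_return(episodes)
assert abs(v_bar - (0.95**2 + 0.95**3 + 0.95) / 3) < 1e-12
```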
\section{Approach}
This section introduces a two-phase approach for partially-observable MT-MARL; the approach first conducts single-task specialization, and subsequently unifies task-specific DRQNs into a joint policy that performs well in all tasks.
\subsection{Phase I: Dec-POMDP Single-Task MARL}
As Dec-POMDP RL is notoriously complex (and solving for the optimal policy is NEXP-complete even with a known model \cite{bernstein2002complexity}), we first introduce an approach for stable single-task MARL. This enables agents to learn coordination, while also learning Q-values needed for computation of a unified MT-MARL policy.
\subsubsection{Decentralized Hysteretic Deep Recurrent Q-Networks (Dec-HDRQNs)}
Due to partial observability and local non-stationarity, model-based Dec-POMDP MARL is extremely challenging \cite{banerjee2012sample}. Our approach is model-free and decentralized, learning Q-values for each agent. In contrast to policy tables or finite-state controllers (FSCs), Q-values are amenable to the multi-task distillation process as they inherently measure the quality of all actions, rather than just the optimal action.
Overly-optimistic MARL approaches (e.g., Distributed Q-learning \cite{lauer2000algorithm}) completely ignore low returns, which are assumed to be caused by teammates' exploratory actions. This causes severe overestimation of Q-values in stochastic domains. Hysteretic Q-learning \cite{matignon2007hysteretic}, instead, uses the insight that low returns may also be caused by domain stochasticity, which should not be ignored. This approach uses two learning rates: nominal learning rate, $\alpha$, is used when the TD-error is non-negative; a smaller learning rate, $\beta$, is used otherwise (where $0 < \beta < \alpha < 1$). The result is hysteresis (lag) of Q-value degradation for actions associated with positive past experiences that occurred due to successful cooperation. Agents are, therefore, robust against negative learning due to teammate exploration and concurrent actions. Notably, unlike Distributed Q-learning, Hysteretic Q-learning permits eventual degradation of Q-values that were overestimated due to outcomes unrelated to their associated action.
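The hysteretic rule amounts to a one-line change to the standard tabular backup. A minimal sketch (tabular, single-agent view; constants invented, and not the full Dec-HDRQN update introduced below):

```python
ALPHA, BETA, GAMMA = 0.1, 0.01, 0.95  # nominal rate, hysteretic rate, discount

def hysteretic_update(Q, s, a, r, s_next):
    """One tabular hysteretic Q-learning backup (two learning rates)."""
    delta = r + GAMMA * max(Q[s_next].values()) - Q[s][a]   # TD error
    lr = ALPHA if delta >= 0 else BETA   # degrade only slowly on bad news
    Q[s][a] += lr * delta
    return delta

Q = {"s": {"a": 1.0, "b": 0.0}, "s'": {"a": 0.0, "b": 0.0}}
# A negative return (e.g., caused by a teammate's exploratory action)
# barely erodes the optimistic estimate for the coordinated action:
d = hysteretic_update(Q, "s", "a", -1.0, "s'")
assert d < 0 and abs(Q["s"]["a"] - (1.0 + BETA * d)) < 1e-12
```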
Hysteretic Q-learning has enjoyed a strong empirical track record in fully-observable MARL \cite{xu2012multiagent,matignon2012independent,barbalios2014robust}, exhibiting similar performance as more complex approaches. Encouraged by these results, we introduce Decentralized Hysteretic Deep Recurrent Q-Networks (Dec-HDRQNs) for partially-observable domains. This approach exploits the robustness of hysteresis to non-stationarity and alter-exploration, in addition to the representational power and memory-based decision making of DRQNs. As later demonstrated, Dec-HDRQN is well-suited to Dec-POMDP MARL, as opposed to non-hysteretic Dec-DRQN.
\subsubsection{Concurrent Experience Replay Trajectories (CERTs)}
Experience replay (sampling a memory bank of experience tuples $\langle s,a,r,s' \rangle$ for TD learning) was first introduced by \citet{lin1992self} and recently shown to be crucial for stable deep Q-learning \cite{mnih2015human}.
With experience replay, sampling cost is reduced as multiple TD updates can be conducted using each sample, enabling rapid Q-value propagation to preceding states \emph{without} additional environmental interactions. Experience replay also breaks temporal correlations of samples used for Q-value updates---crucial for reducing generalization error, as the stochastic optimization algorithms used for training DQNs typically assume i.i.d. data \cite{bengio2012practical}.
Despite the benefits in single-agent settings, existing MARL approaches have found it necessary to disable experience replay \cite{foerster2016learning}. This is due to the non-concurrent (and non-stationary) nature of local experiences when sampled independently for each agent, despite the agents learning concurrently. A contributing factor is that inter-agent desynchronization of experiences compounds the prevalence of earlier-mentioned shadowed equilibria challenges, destabilizing coordination. As a motivating example, consider a 2 agent game where $\mathcal{A}^{(1)} = \mathcal{A}^{(2)} =\{a_1,a_2\}$. Let there be two optimal joint actions: $\langle a_1, a_1 \rangle$ and $\langle a_2, a_2 \rangle$ (e.g., only these joint actions have positive, equal reward). Given \emph{independent} experience samples for each agent, the first agent may learn action $a_1$ as optimal, whereas the second agent learns $a_2$, resulting in arbitrarily poor joint action $\langle a_1, a_2\rangle$. This motivates a need for \emph{concurrent} (synchronized) sampling of experiences across the team in MARL settings. Concurrent experiences induce correlations in local policy updates, so that given existence of multiple equilibria, agents tend to converge to the same one. Thus, we introduce Concurrent Experience Replay Trajectories (CERTs), visualized in \cref{fig:CERT_structure}. During execution of each learning episode $e \in \mathbb{N}^{+}$, each agent $i$ collects experience tuple $\langle o_t^{(i)}, a_t^{(i)}, r_t, o_{t+1}^{(i)} \rangle$ at timestep $t$, where $o_t$, $a_t$, and $r_t$ are current observation, action, and reward, and $o_{t+1}$ is the subsequent observation. \cref{fig:CERT_structure} visualizes each experience tuple as a cube. Experiences in each episode are stored in a sequence (along time axis $t$ of \cref{fig:CERT_structure}), as Dec-HDRQN assumes an underlying RNN architecture that necessitates sequential samples for each training iteration. 
Importantly, as all agents are aware of timestep $t$ and episode $e$, they store their experiences concurrently (along agent index axis $i$ of \cref{fig:CERT_structure}). Upon episode termination, a new sequence is initiated (a new row along episode axis $e$ of \cref{fig:CERT_structure}). No restrictions are imposed on terminal conditions (i.e., varying trajectory lengths are permitted along axis $t$ of \cref{fig:CERT_structure}). CERTs are a first-in first-out circular queue along the episode axis $e$, such that old episodes are eventually discarded.
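A CERT for a single agent can be sketched as a bounded queue of episode traces (field names and capacity are our own; this omits the concurrent-sampling machinery described next):

```python
from collections import deque

class CERT:
    """Concurrent experience replay trajectories for ONE agent: a FIFO
    circular queue over episodes, each episode a list of local tuples
    <o_t, a_t, r_t, o_{t+1}>."""
    def __init__(self, capacity):
        self.episodes = deque(maxlen=capacity)  # old episodes fall off

    def start_episode(self):
        self.episodes.append([])

    def store(self, o, a, r, o_next):
        self.episodes[-1].append((o, a, r, o_next))

cert = CERT(capacity=2)
for ep in range(3):                      # episode axis e
    cert.start_episode()
    for t in range(2 + ep):              # varying horizons are allowed
        cert.store(f"o{t}", f"a{t}", 0.0, f"o{t+1}")
assert len(cert.episodes) == 2           # episode 0 was discarded
assert len(cert.episodes[-1]) == 4       # last episode had 4 experiences
```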
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=1\linewidth,page=1]{./figs/CERT.pdf}
\caption{CERT structure.}
\label{fig:CERT_structure}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=1\linewidth,page=2]{./figs/CERT.pdf}
\caption{CERT minibatches.}
\label{fig:CERT_minibatches}
\end{subfigure}
\caption{Concurrent training samples for MARL. Each cube signifies an experience tuple $\langle o_t^{(i)}, a_t^{(i)}, r_t, o_{t+1}^{(i)} \rangle$. Axes $e$, $t$, $i$ correspond to episode, timestep, and agent indices, respectively. }
\label{fig:certs}
\end{figure}
\subsubsection{Training Dec-HDRQNs using CERTs}
Each agent $i$ maintains DRQN $Q^{(i)}(o_t^{(i)},h_{t-1}^{(i)},a^{(i)};\theta^{(i)})$, where $o_t^{(i)}$ is the latest local observation, $h^{(i)}_{t-1}$ is the RNN hidden state, $a^{(i)}$ is the action, and $\theta^{(i)}$ are the local DRQN parameters. DRQNs are trained on experience sequences (traces) with \emph{tracelength} $\tau$. \Cref{fig:CERT_minibatches} visualizes the minibatch sampling procedure for training, with $\tau=4$. In each training iteration, agents first sample a \emph{concurrent} minibatch of episodes. All agents' sampled traces have the same starting timesteps (i.e., are coincident along agent axis $i$ in \cref{fig:CERT_minibatches}). Guaranteed concurrent sampling merely requires a one-time (offline) consensus of agents' random number generator seeds prior to initiating learning. This ensures our approach is fully decentralized and assumes no explicit communication, even during learning. \cref{fig:CERT_minibatches} shows a minibatch of 3 episodes, $e$, sampled in red. To train DRQNs, \citet{hausknecht2015deep} suggest randomly sampling a timestep within each episode, and training using $\tau$ backward steps. However, this imposes a bias where experiences in each episode's final $\tau$ timesteps are used in fewer recurrent updates. Instead, we propose that for each sampled episode $e$, agents sample a concurrent start timestep $t_0$ for the trace from interval $\{-\tau+1,\ldots, H_e\}$, where $H_e$ is the timestep of the episode's final experience. For example, the three sampled (red) traces in \cref{fig:CERT_minibatches} start at timesteps $+1$, $-1$, and $+2$, respectively. This ensures all experiences have equal probability of being used in updates, which we found especially critical for fast training on tasks with only terminal rewards.
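The concurrent sampling scheme can be sketched as follows (helper names are ours). Sharing the random seed makes all agents draw identical $(e, t_0)$ pairs, and each trace is clipped to its episode and suffix-padded to length $\tau$:

```python
import random

TAU = 4  # tracelength

def sample_trace_starts(horizons, batch, seed):
    """Sample (episode, t0) pairs for a minibatch. horizons[e] is the
    timestep H_e of episode e's final experience; a shared seed makes
    every agent draw the SAME pairs, so traces are concurrent."""
    rng = random.Random(seed)                          # one-time agreed seed
    picks = []
    for _ in range(batch):
        e = rng.randrange(len(horizons))
        t0 = rng.randrange(-TAU + 1, horizons[e] + 1)  # t0 in {-tau+1,...,H_e}
        picks.append((e, t0))
    return picks

def pad_trace(episode, t0, tau=TAU, blank=None):
    """Clip a trace to the episode and suffix-pad it to length tau
    (prefix padding would disturb the RNN's internal state)."""
    trace = [episode[t] for t in range(max(t0, 0), min(t0 + tau, len(episode)))]
    return trace + [blank] * (tau - len(trace))

episode = ["e0", "e1", "e2", "e3", "e4", "e5"]         # experiences at t = 0..5
assert pad_trace(episode, -2) == ["e0", "e1", None, None]
assert pad_trace(episode, 4) == ["e4", "e5", None, None]
# Agents sharing a seed draw identical (episode, t0) pairs:
assert sample_trace_starts([5, 7, 6], 8, seed=1) == sample_trace_starts([5, 7, 6], 8, seed=1)
```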
Sampled traces sometimes contain elements outside the episode interval (indicated as $\varnothing$ in \cref{fig:CERT_minibatches}). We discard $\varnothing$ experiences and zero-pad the suffix of associated traces (to ensure all traces have equal length $\tau$, enabling seamless use of fixed-length minibatch optimizers in standard deep learning libraries). Suffix (rather than prefix) padding ensures RNN internal states of non-$\varnothing$ samples are unaffected. In training iteration $j$, agent $i$ uses the sampling procedure to collect a minibatch of traces from CERT memory $\mathcal{M}^{(i)}$,
\begin{align}\label{eq:CERT_minibatch}
\mathcal{B} = \{\langle &\langle o_{t_0}^{b}, a_{t_0}^{b}, r_{t_0}^{b}, o_{t_0+1}^{b} \rangle, \ldots,\\ &\langle o_{t_0+\tau-1}^{b}, a_{t_0+\tau-1}^{b}, r_{t_0+\tau-1}^{b}, o_{t_0+\tau}^{b} \rangle\rangle\}_{b=\{1,\ldots,B\}}\nonumber,
\end{align}
where $t_0$ is the start timestep for each trace, $b$ is trace index, and $B$ is number of traces (minibatch size).\footnote{For notational simplicity, agent superscripts $(i)$ are excluded from local experiences $\langle o^{(i)}, a^{(i)}, r, o'^{(i)} \rangle$ in \cref{eq:CERT_minibatch,eq:DRQN_targets,eq:DRQN_loss,eq:distillation_loss}.} Each trace $b$ is used to calculate a corresponding sequence of target values,
\begin{equation}\label{eq:DRQN_targets}
\{\langle \langle y_{t_0}^{b} \rangle, \ldots, \langle y_{t_0+\tau-1}^{b}\rangle\rangle\}_{b=\{1,\ldots,B\}},
\end{equation}
where $y_t^{b} = r_t^{b} + \gamma \max_{a'}Q(o_{t+1}^{b},h_{t}^{b},a';\hat{\theta}^{(i)}_{j})$. Target network parameters $\hat{\theta}^{(i)}_{j}$ are updated less frequently, for stable learning \cite{mnih2015human}. Loss over all traces is,
\begin{equation}\label{eq:DRQN_loss}
\thinmuskip=0mu
L_j(\theta_{j}^{(i)}) = \mathbb{E}_{(o_{t}^{b},a_t^{b},r_t^{b},o_{t+1}^{b})\sim \mathcal{M}^{(i)}} \lbrack(\delta_t^{b})^{2}\rbrack,
\end{equation}
where $\delta_t^{b} = y_t^{b}-Q(o_t^{b},h_{t-1}^{b},a_t^{b};\theta_{j}^{(i)})$. Loss contributions of suffix $\varnothing$-padding elements are masked out. Parameters are updated via gradient descent on \cref{eq:DRQN_loss}, with the caveat of hysteretic learning rates $0 < \beta < \alpha < 1$, where learning rate $\alpha$ is used if $\delta_t^b \geq 0$, and $\beta$ is used otherwise.
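The masked, hysteretic loss over a single trace can be sketched as below (plain Python; weighting each squared TD error by $\alpha$ or $\beta$ scales that term's gradient contribution, which is an equivalent way of realizing the two learning rates; values invented):

```python
W_POS, W_NEG = 1.0, 0.1   # hysteretic weights, playing the roles of alpha, beta

def hysteretic_masked_loss(targets, predictions, mask):
    """Mean squared TD error over one trace: padded elements are masked
    out, and negative TD errors are down-weighted (hysteresis)."""
    total, count = 0.0, 0
    for y, q, m in zip(targets, predictions, mask):
        if not m:                      # a suffix-padding element: skip it
            continue
        delta = y - q                  # TD error for this timestep
        w = W_POS if delta >= 0 else W_NEG
        total += w * delta ** 2
        count += 1
    return total / max(count, 1)

y    = [1.0, 0.5, 0.0, 0.0]
q    = [0.5, 1.0, 9.9, 9.9]            # last two entries are padding junk
mask = [1, 1, 0, 0]
loss = hysteretic_masked_loss(y, q, mask)
assert abs(loss - (W_POS * 0.25 + W_NEG * 0.25) / 2) < 1e-12
```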
\subsection{Phase II: Dec-POMDP MT-MARL}
Following task specialization, the second phase involves distillation of each agent's set of DRQNs into a unified DRQN that performs well in all tasks without explicit provision of task ID. Using DRQNs, our approach extends the single-agent, fully-observable MTRL method proposed by \citet{rusu2015policy} to Dec-POMDP MT-MARL. Specifically, once Dec-HDRQN specialization is conducted for each task, multi-task learning can be treated as a regression problem over Q-values. During multi-task learning, our approach iteratively conducts data collection and regression.
For data collection, agents use each specialized DRQN (from Phase I) to execute actions in corresponding tasks, resulting in a set of regression CERTs $\{\mathcal{M}_R\}$ (one per task), each containing sequences of \emph{regression experiences} $\langle o_t^{(i)}, Q_t^{(i)} \rangle$, where $Q_t^{(i)} = Q_t^{(i)}(\vec{o_t}^{(i)};\theta^{(i)})$ is the specialized DRQN's Q-value vector for agent $i$ at timestep $t$.
Supervised learning of Q-values is then conducted. Each agent samples experiences from its local regression CERTs to train a \emph{single} distilled DRQN with parameters $\theta_R^{(i)}$. Given a minibatch of regression experience traces $\mathcal{B}_R = \{\langle \langle o_{t_0}^{b}, Q_{t_0}^{b} \rangle, \ldots, \langle o_{t_0+\tau-1}^{b}, Q_{t_0+\tau-1}^{b} \rangle\rangle\}_{b=\{1,\ldots,B\}}$, the following tempered Kullback-Leibler (KL) divergence loss is minimized for each agent,
\begin{align}\label{eq:distillation_loss}
&L_{KL}(\mathcal{B}_R,\theta_R^{(i)}; T)\\
&\!\!= \mathbb{E}_{(o_{t}^{b},Q_{t}^{b})\sim \{\mathcal{M_R}^{(i)}\}}\!\!\! \sum_{a=1}^{|\mathcal{A}^{(i)}|}\!\! \text{softmax}_a(\frac{Q_{t}^{b}}{T})\ln\frac{\text{softmax}_a(\frac{Q_{t}^{b}}{T})}{\text{softmax}_a(Q_{t,R}^{b})}\nonumber,
\end{align}
where $Q_{t,R}^{b} = Q_{t,R}^{b}(\vec{o_t}^{b};\theta_R^{(i)})$ is the vector of action-values predicted by distilled DRQN given the same input as the specialized DRQN, $T$ is the softmax temperature, and $\text{softmax}_a$ refers to the $a$-th element of the softmax output. The motivation behind loss function \eqref{eq:distillation_loss} is that low temperatures ($0<T<1$) lead to sharpening of specialized DRQN action-values, $Q_{t}^{b}$, ensuring that the distilled DRQN ultimately chooses similar actions as the specialized policy it was trained on. We refer readers to \citet{rusu2015policy} for additional analysis of the distillation loss. Note that concurrent sampling is not necessary during the distillation phase, as it is entirely supervised; CERTs are merely used for storage of the regression experiences.
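A sketch of the tempered distillation loss for one observation (plain Python; Q-vectors invented) illustrates why low temperatures push the distilled network toward the specialized policy's action ranking:

```python
import math

def softmax(xs):
    m = max(xs)                         # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(q_teacher, q_student, T=0.01):
    """Tempered KL loss: sharpen the specialized (teacher) Q-values with
    temperature T < 1, compare against the distilled (student) output."""
    p = softmax([q / T for q in q_teacher])   # sharpened teacher
    s = softmax(q_student)
    return sum(pa * math.log(pa / sa) for pa, sa in zip(p, s) if pa > 0)

teacher = [1.0, 0.2, 0.1]
# A student that ranks actions like the teacher incurs a lower loss than
# one that prefers a different action:
good, bad = [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]
assert distill_kl(teacher, good) < distill_kl(teacher, bad)
```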
\section{Background}
\subsection{Reinforcement Learning}
Single-agent RL under full observability is typically formalized using Markov Decision Processes (MDPs) \cite{sutton1998reinforcement}, defined as tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma \rangle$. At timestep $t$, the agent with state $s \in \mathcal{S}$ executes action $a \in \mathcal{A}$ using policy $\pi(a|s)$, receives reward $r_{t} = \mathcal{R}(s) \in \mathbb{R}$, and transitions to state $s' \in \mathcal{S}$ with probability $P(s'|s,a) = \mathcal{T}(s,a,s')$. Denoting discounted return as $R_t = \sum_{t'=t}^{H}\gamma^{t'-t}r_{t'}$, with horizon $H$ and discount factor $\gamma \in [0,1)$, the action-value (or Q-value) is defined as $Q^{\pi}(s,a) = \mathbb{E}_{\pi}[R_t|s_t=s,a_t=a]$. Optimal policy $\pi^{*}$ maximizes the Q-value function, $Q^{\pi^{*}}(s,a) = \max_{\pi}Q^{\pi}(s,a)$. In RL, the agent interacts with the environment to learn $\pi^{*}$ \emph{without} explicit provision of the MDP model. Model-based methods first learn $\mathcal{T}$ and $\mathcal{R}$, then use a planner to find $Q^{\pi^{*}}$. Model-free methods typically directly learn Q-values or policies, so can be more space and computation efficient.
Q-learning \cite{watkins1992q} iteratively estimates the optimal Q-value function using backups,
$Q(s,a) = Q(s,a) + \alpha[r + \gamma \max_{a'} Q(s',a')-Q(s,a)]$,
where $\alpha \in [0,1)$ is the learning rate and the term in brackets is the temporal-difference (TD) error. Convergence to $Q^{\pi^*}$ is guaranteed in the tabular (no approximation) case provided sufficient state/action space exploration; however, tabular learning is unsuitable for problems with large state/action spaces. Practical TD methods instead use function approximators \cite{gordon1995stable} such as linear combinations of basis functions or neural networks, leveraging inductive bias to execute similar actions in similar states. Deep Q-learning is a state-of-the-art approach using a Deep Q-Network (DQN) for Q-value approximation \cite{mnih2015human}. At each iteration $j$, experience tuple $\langle s,a,r,s' \rangle$ is sampled from replay memory $\mathcal{M}$ and DQN parameters $\theta$ are updated to minimize loss $L_j(\theta_j) = \mathbb{E}_{(s,a,r,s')\sim \mathcal{M}} \lbrack(r+\gamma \max_{a'}Q(s',a';\hat{\theta}_{j})-Q(s,a;\theta_j))^{2}\rbrack$. Replay memory $\mathcal{M}$ is a first-in first-out queue containing the set of latest experience tuples from $\epsilon$-greedy policy execution. \emph{Target network} parameters $\hat{\theta}_{j}$ are updated less frequently and, in combination with experience replay, are critical for stable deep Q-learning.
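The backup and the target-network idea can be illustrated in tabular form (a sketch with invented values, not the DQN implementation itself):

```python
GAMMA, LR = 0.9, 0.5

def td_backup(Q, Q_target, s, a, r, s_next):
    """One Q-learning backup; bootstrapping from a separate, periodically
    synced copy Q_target mirrors the DQN target-network trick."""
    y = r + GAMMA * max(Q_target[s_next].values())   # TD target
    Q[s][a] += LR * (y - Q[s][a])                    # move toward the target

Q = {"s0": {"L": 0.0, "R": 0.0}, "s1": {"L": 0.0, "R": 0.0}}
Q_target = {k: dict(v) for k, v in Q.items()}        # frozen copy
td_backup(Q, Q_target, "s0", "R", 1.0, "s1")
assert Q["s0"]["R"] == 0.5            # 0 + 0.5 * (1 + 0.9*0 - 0)
Q_target = {k: dict(v) for k, v in Q.items()}        # periodic sync
td_backup(Q, Q_target, "s0", "R", 1.0, "s1")
assert Q["s0"]["R"] == 0.75           # moves halfway toward target 1.0
```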
Agents in partially-observable domains receive observations of the latent state. Such domains are formalized as Partially Observable Markov Decision Processes (POMDPs), defined as $\langle \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \Omega, \mathcal{O}, \gamma \rangle$ \cite{kaelbling1998planning}. After each transition, the agent observes $o \in \Omega$ with probability $P(o|s',a) = \mathcal{O}(o,s',a)$. Due to noisy observations, POMDP policies map observation histories to actions. As Recurrent Neural Networks (RNNs) inherently maintain an internal state $h_t$ to compress input history until timestep $t$, they have been demonstrated to be effective for learning POMDP policies \cite{wierstra2007solving}. Recent work has introduced Deep Recurrent Q-Networks (DRQNs) \cite{hausknecht2015deep}, combining Long Short-Term Memory (LSTM) cells \cite{hochreiter1997long} with DQNs for RL in POMDPs. Our work extends this single-task, single-agent approach to the multi-task, multi-agent setting.
\subsection{Multi-agent RL}
Multi-agent RL (MARL) involves a set of agents in a shared environment, which must learn to maximize their individual returns \cite{bucsoniu2010multi}. Our work focuses on cooperative settings, where agents share a joint return. \citet{claus1998dynamics} dichotomize MARL agents into two classes: Joint Action Learners (JALs) and Independent Learners (ILs). JALs observe actions taken by \emph{all} agents, whereas ILs only observe \emph{local} actions. As observability of joint actions is a strong assumption in partially observable domains, ILs are typically more practical, despite having to solve a more challenging problem \cite{claus1998dynamics}. Our approach utilizes ILs that conduct both learning \emph{and} execution in a decentralized manner.
Unique challenges arise in MARL due to agent interactions during learning \cite{bucsoniu2010multi,matignon2012independent}. Multi-agent domains are non-stationary from agents' local perspectives, due to teammates' interactions with the environment. ILs, in particular, are susceptible to \emph{shadowed equilibria}, where local observability and non-stationarity cause locally optimal actions to become a globally sub-optimal joint action \cite{fulda2007predicting}. Effective MARL requires each agent to tightly coordinate with fellow agents, while also being robust against destabilization of its own policy due to environmental non-stationarity. Another desired characteristic is robustness to \emph{alter-exploration}, or drastic changes in policies due to exploratory actions of teammates \cite{matignon2012independent}.
\subsection{Transfer and Multi-Task Learning}
Transfer Learning (TL) aims to generalize knowledge from a set of \emph{source tasks} to a \emph{target task} \cite{pan2010survey}. In single-agent, fully-observable RL, each task is formalized as a distinct MDP (i.e., MDPs and tasks are synonymous) \cite{taylor2009transfer}. While TL assumes sequential transfer, where source tasks have been previously learned and may not even be related to the target task, Multi-Task Reinforcement Learning (MTRL) aims to learn a policy that performs well on related target tasks from an underlying task distribution \cite{caruana1998multitask, pan2010survey}. MTRL tasks can be learned simultaneously or sequentially \cite{taylor2009transfer}. MTRL directs the agent's attention towards pertinent training signals learned on individual tasks, enabling a \emph{unified} policy to generalize well across all tasks. MTRL is most beneficial when target tasks share common features \cite{wilson2007multi}, and most challenging when the task ID is not explicitly specified to agents during execution -- the setting addressed in this paper.
\section{Contribution}
This paper introduced the first formulation and approach for multi-task multi-agent reinforcement learning under partial observability.
Our approach combines hysteretic learners, DRQNs, CERTs, and distillation, demonstrably achieving multi-agent coordination using a single joint policy in a set of Dec-POMDP tasks with sparse rewards, despite not being provided task identities during execution. The parametric nature of the capture tasks used for evaluation (e.g., variations in grid size, target assignments and dynamics, sensor failure probabilities) makes them good candidates for ongoing benchmarks of multi-agent multi-task learning. Future work will investigate incorporation of skills (macro-actions) into the framework, extension to domains with heterogeneous agents, and evaluation on more complex domains with much larger numbers of tasks.
\section{Evaluation}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_drqn_better_label.pdf}
\caption{Learning via Dec-DRQN.}
\label{fig:2agt_mamt_drqn_v}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_hdrqn_better_label.pdf}
\caption{Learning via Dec-HDRQN (our approach).}
\label{fig:2agt_mamt_hdrqn_v}
\end{subfigure}
\caption{Task specialization for MAMT domain with $n=2$ agents, $P_f = 0.3$. (a) Without hysteresis Dec-DRQN policies destabilize in the $5 \times 5$ task and fails to learn in the $6 \times 6$ and $7 \times 7$ tasks. (b) Dec-HDRQN (our approach) performs well in all tasks.}
\label{fig:2agt_mamt_v_comparisons}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{./figs/3agt_mamt_better_label.pdf}
\caption{The advantage of hysteresis is even more pronounced for MAMT with $n=3$ agents. $P_f = 0.6$ for $3\times3$ task, and $P_f = 0.1$ for $4\times 4$ task. Dec-HDRQN indicated by (H).}
\label{fig:3agt_v}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{./figs/beta_sensitivity.pdf}
\caption{Dec-HDRQN sensitivity to learning rate $\beta$ ($6 \times 6$ MAMT domain, $n=2$ agents, $P_f=0.25$). Anticipated return $Q(o_0,a_0)$ upper bounds actual return due to hysteretic optimism. }
\label{fig:beta_sensitivity}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=1\linewidth]{./figs/distillation_all_combined.pdf}
\caption{MT-MARL performance of the proposed Dec-HDRQN specialization/distillation approach (labeled as \emph{Distilled}) and simultaneous learning approach (labeled as \emph{Multi}). Multi-task policies for both approaches were trained on all MAMT tasks from $3 \times 3$ through $6 \times 6$. Performance shown only for $4 \times 4$ and $6 \times 6$ domains for clarity. Distilled approach shows specialization training (Phase I of approach) until 70K epochs, after which distillation is conducted (Phase II of approach). Letting the simultaneous learning approach run for up to 500K episodes did not lead to significant performance improvement. By contrast, the performance of our approach during the distillation phase (which includes task identification) is almost as good as its performance during the specialization phase.}
\label{fig:distillation_all_combined}
\end{figure*}
\subsection{Task Specialization using Dec-HDRQN}\label{sec:expt_specialize}
We first evaluate single-task performance of the introduced Dec-HDRQN approach on a series of increasingly challenging domains. Domains are designed to support a large number of task variations, serving as a useful MT-MARL benchmarking tool. All experiments use DRQNs with 2 multi-layer perceptron (MLP) layers, an LSTM layer \cite{hochreiter1997long} with 64 memory cells, and another 2 MLP layers. MLPs have 32 hidden units each and rectified linear unit nonlinearities are used throughout, with the exception of the final (linear) layer. Experiments use $\gamma = 0.95$ and Adam optimizer \mbox{\cite{kingma2014adam}} with base learning rate 0.001. Dec-HDRQNs use hysteretic learning rate $\beta = 0.2$ to $0.4$.
All results are reported for batches of 50 randomly-initialized episodes.
Performance is evaluated on both multi-agent single-target (MAST) and multi-agent multi-target (MAMT) capture domains, variations of the existing meeting-in-a-grid Dec-POMDP benchmark \cite{amato2009incremental}. Agents $i\in\{1,\ldots,n\}$ in an $m\times m$ toroidal grid receive $+1$ terminal reward only when they \emph{simultaneously} capture moving targets (1 target in MAST, and $n$ targets in MAMT). Each agent always observes its own location, but only sometimes observes targets' locations. Target dynamics are unknown to agents and vary across tasks. Similar to the Pong POMDP domain of \citet{hausknecht2015deep}, our domains include observation flickering: in each timestep, observations of targets are sometimes obscured, with probability $P_f$. In MAMT, each agent is assigned a unique target to capture, yet is unaware of the assignment (which also varies across tasks). Agent/target locations are randomly initialized in each episode. Actions are `move north', `south', `east', `west', and `wait', but transitions are noisy (0.1 probability of moving to an unintended adjacent cell).
In the MAST domain, each task is specified by a unique grid size $m$; in MAMT, each task also has a unique agent-target assignment. The challenge is that agents must learn particular roles (to ensure coordination) and also discern aliased states (to ensure quick capture of targets) using local noisy observations. Tasks end after $H$ timesteps, or upon simultaneous target capture. Cardinality of local policy space for agent $i$ at timestep $t$ is $O(|\mathcal{A}^{(i)}|^{\frac{|\Omega^{(i)}|^t-1}{|\Omega^{(i)}|-1}})$ \cite{oliehoek2008optimal}, where $|\mathcal{A}^{(i)}| = 5$, $|\Omega^{(i)}| = m^4$ for MAST, and $|\Omega^{(i)}| = m^{2(n+1)}$ for MAMT. Across all tasks, non-zero reward signals are extremely sparse, appearing in the terminal experience tuple only if targets are \emph{simultaneously} captured. Readers are referred to the supplementary material for domain visualizations.
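A minimal sketch of the toroidal transition and simultaneous-capture reward described above (function and variable names are illustrative, not the paper's implementation):

```python
import random

# Action effects on a 2-D grid; 'wait' leaves the agent in place.
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0),
         "west": (-1, 0), "wait": (0, 0)}

def step(pos, action, m, rng, noise=0.1):
    """Toroidal m x m grid step: with probability `noise`, the agent
    slips to a random adjacent cell instead of the intended one."""
    dx, dy = MOVES[action]
    if rng.random() < noise:
        dx, dy = rng.choice([v for v in MOVES.values() if v != (0, 0)])
    return ((pos[0] + dx) % m, (pos[1] + dy) % m)   # wraparound

def joint_reward(agent_pos, target_pos):
    """+1 only when every agent sits on its assigned target simultaneously."""
    return 1.0 if all(a == t for a, t in zip(agent_pos, target_pos)) else 0.0
```

The `% m` wraparound gives the toroidal topology, and `joint_reward` captures why the signal is so sparse: a single straying agent zeroes the whole team's reward.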
The failure point of Dec-DRQN is first compared to Dec-HDRQN in the MAST domain with $n=2$ and $P_f = 0$ (full observability) for increasing task size, starting from $4 \times 4$. Despite the domain simplicity, Dec-DRQN fails to match Dec-HDRQN at the $8 \times 8$ mark, receiving value 0.05$\pm$0.16 in contrast to Dec-HDRQN's 0.76$\pm$0.11 (full results reported in supplementary material).
Experiments are then scaled up to a 2 agent, 2 target MAMT domain with $P_f=0.3$. Empirical returns throughout training are shown in \cref{fig:2agt_mamt_v_comparisons}. In the MAMT tasks, a well-coordinated policy induces agents to capture targets simultaneously (yielding joint $+1$ reward). If any agent strays from this strategy during learning (e.g., while exploring), teammates receive no reward even while executing optimal local policies, leading them to deviate from learned strategies.
Due to lack of robustness against alter-exploration/non-stationarity, the Dec-DRQN becomes unstable in the $5 \times 5$ task, and fails to learn altogether in the $6 \times 6$ and $7 \times 7$ tasks (\cref{fig:2agt_mamt_drqn_v}). Hysteresis affords Dec-HDRQN policies the stability necessary to consistently achieve agent coordination (\cref{fig:2agt_mamt_hdrqn_v}). A \emph{centralized-learning} variation of Dec-DRQN with inter-agent parameter sharing (similar to RIAL-PS in \citet{foerster2016learning}) was also tested, but was not found to improve performance (see supplementary material). These results further validate that, despite its simplicity, hysteretic learning significantly improves the stability of MARL in cooperative settings. Experiments are also conducted for the $n=3$ MAMT domain (\cref{fig:3agt_v}). This domain poses significant challenges due to reward sparsity. Even in the $4 \times 4$ task, only 0.02\% of the joint state space has a non-zero reward signal. Dec-DRQN fails to find a coordinated joint policy, receiving near-zero return after training. Dec-HDRQN successfully coordinates the 3 agents. Note the high variance in empirical return for the $3 \times 3$ task is due to flickering probability being increased to $P_f = 0.6$.
Sensitivity of Dec-HDRQN empirical performance to hysteretic learning rate $\beta$ is shown in \cref{fig:beta_sensitivity}, where lower $\beta$ corresponds to higher optimism; $\beta=0$ causes monotonic increase of approximated Q-values during learning, whereas $\beta=1$ corresponds to Dec-DRQN. Due to the optimistic assumption, anticipated returns at the initial timestep, $Q(o_0,a_0)$, overestimate true empirical return. Despite this, $\beta \in [0.1,0.6]$ consistently enables learning of a well-coordinated policy, with $\beta \in [0.4,0.5]$ achieving best performance. Readers are referred to the supplementary material for additional sensitivity analysis of convergence trends with varying $\beta$ and CERT tracelength $\tau$.
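The asymmetry controlled by $\beta$ can be sketched in the tabular case as follows; in Dec-HDRQN the same asymmetric scaling is applied to the network's TD errors rather than to table entries, and the names below are illustrative:

```python
def hysteretic_q_update(q, td_target, alpha, beta):
    """One hysteretic Q-learning update (tabular sketch).

    Positive TD errors use the full learning rate `alpha`; negative
    ones are scaled by `beta` in [0, 1].  beta = 1 recovers ordinary
    Decentralized Q-learning, while beta = 0 only ever increases
    Q-values (Distributed-Q-style optimism).
    """
    delta = td_target - q
    lr = alpha if delta >= 0 else beta * alpha
    return q + lr * delta
```

The cautious optimism comes from discounting decreases: a teammate's exploratory action that zeroes the joint reward pulls the local Q-value down only weakly, stabilizing the learned coordination.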
\subsection{Multi-tasking using Distilled Dec-HDRQN}
We now evaluate distillation of specialized Dec-HDRQN policies (as learned in \cref{sec:expt_specialize}) for MT-MARL. A first approach is to forgo specialization and directly learn a Dec-HDRQN using a pool of experiences from all tasks. This approach, called Multi-DQN by \citet{rusu2015policy}, is susceptible to convergence issues even in single-agent, fully-observable settings. In \cref{fig:distillation_all_combined}, we compare these approaches (where we label ours as `Distilled', and Multi-HDRQN as `Multi'). Both approaches were trained to perform multi-tasking on 2-agent MAMT tasks ranging from $3 \times 3$ to $6 \times 6$, with $P_f = 0.3$. Our distillation approach uses no task-specific MLP layers, unlike \citet{rusu2015policy}, due to our stronger assumptions on task relatedness and lack of execution-time observability of task identity.
In \cref{fig:distillation_all_combined}, our MT-MARL approach first performs Dec-HDRQN specialization training on each task for 70K epochs, and then performs distillation for 100K epochs. A grid search was conducted for temperature hyperparameter in \cref{eq:distillation_loss} ($T=0.01$ was found suitable). Note that performance is plotted only for the $4 \times 4$ and $6 \times 6$ tasks, simply for plot clarity (see supplementary material for MT-MARL evaluation results on all tasks). Multi-HDRQN exhibits poor performance across all tasks due to the complexity involved in concurrently learning over multiple Dec-POMDPs (with partial observability, transition noise, non-stationarity, varying domain sizes, varying target dynamics, and random initializations). We experimented with larger and smaller network sizes for Multi-HDRQN, with no major difference in performance (we also include training results for 500K Multi-HDRQN iterations in the supplementary). By contrast, our proposed MT-MARL approach achieves near-nominal execution-time performance on all tasks using a single distilled policy for each agent -- despite not explicitly being provided the task identity.
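The role of the temperature $T$ in distillation-style targets can be sketched as below; this is only an illustration of why a small $T$ (such as the $0.01$ found by grid search) sharpens teacher targets, not the exact loss of \cref{eq:distillation_loss}:

```python
import math

def soft_targets(q_values, temperature):
    """Temperature-softened policy from a teacher's Q-values (sketch).
    Subtracting the max before exponentiating avoids overflow."""
    scaled = [q / temperature for q in q_values]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q, eps=1e-12):
    """KL divergence between teacher targets p and student outputs q."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

At low temperature the soft targets concentrate almost all mass on the teacher's greedy action, so the student is trained to match the teacher's decisions rather than its raw Q-magnitudes.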
\section{Introduction}
In multi-task reinforcement learning (MTRL) agents are presented several related \emph{target} tasks \cite{taylor2009transfer, caruana1998multitask} with shared characteristics. Rather than specialize on a single task, the objective is to generalize performance across all tasks. For example, a team of autonomous underwater vehicles (AUVs) learning to detect and repair faults in deep-sea equipment must be able to do so in many settings (varying water currents, lighting, etc.), not just under the circumstances observed during training.
Many real-world problems involve multiple agents with partial observability and limited communication (e.g., the AUV example) \cite{DecPOMDPBook16}, but generating accurate models for these domains is difficult due to complex interactions between agents and the environment. Learning is difficult in these settings due to partial observability and local viewpoints of agents, which perceive the environment as non-stationary due to teammates' actions. Efficient learners must extract knowledge from past tasks to accelerate learning and improve generalization to new tasks. Learning specialized policies for individual tasks can be problematic, as not only do agents have to store a distinct policy for each task, but in practice face scenarios where the identity of the task is often non-observable.
Existing MTRL methods focus on single-agent and/or fully observable settings \cite{taylor2009transfer}. By contrast, this work considers cooperative, independent learners operating in partially-observable, stochastic environments, receiving feedback in the form of local noisy observations and joint rewards. This setting is general and realistic for many multi-agent domains. We introduce the multi-task multi-agent reinforcement learning (MT-MARL) under partial observability problem, where the goal is to maximize execution-time performance on a set of related tasks, without explicit knowledge of the task identity. Each MT-MARL task is formalized as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) \cite{bernstein2002complexity}, a general formulation for cooperative decision-making under uncertainty. MT-MARL poses significant challenges, as each agent must learn to coordinate with teammates to achieve good performance, ensure policy generalization across \emph{all} tasks, and conduct (implicit) execution-time inference of the underlying task ID to make sound decisions using local noisy observations. As typical in existing MTRL approaches, this work focuses on average asymptotic performance across all tasks \cite{caruana1998multitask, taylor2009transfer} and sample-efficient learning.
We propose a two-phase MT-MARL approach that first uses cautiously-optimistic learners in combination with Deep Recurrent Q-Networks (DRQNs) \mbox{\cite{hausknecht2015deep}} for action-value approximation. We introduce Concurrent Experience Replay Trajectories (CERTs), a decentralized extension of experience replay \cite{lin1992self,mnih2015human} targeting sample-efficient and stable MARL. This first contribution enables coordination in single-task MARL under partial observability. The second phase of our approach distills each agent's specialized action-value networks into a generalized recurrent multi-task network. Using CERTs and optimistic learners, well-performing distilled policies \cite{rusu2015policy} are learned for multi-agent domains. Both the single-task and multi-task phases of the algorithm are demonstrated to achieve good performance on a set of multi-agent target capture Dec-POMDP domains. The approach makes no assumptions about communication capabilities and is fully decentralized during learning and execution. To our knowledge, this is the first formalization of decentralized MT-MARL under partial observability.
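A toy sketch of the concurrency idea behind CERTs: each agent keeps its own trajectory buffer, but a shared sampling seed makes all agents replay the same episodes and timesteps in lockstep. Class and method names are illustrative and deliberately simplified relative to the actual CERT construction.

```python
import random

class CERT:
    """Concurrent Experience Replay Trajectories (illustrative sketch).

    Each agent stores its own episodes; minibatches are drawn with a
    shared seed so all agents' sampled traces are aligned in time."""

    def __init__(self, capacity=1000):
        self.episodes = []          # each episode: list of (obs, act, rew)
        self.capacity = capacity

    def add_episode(self, trajectory):
        self.episodes.append(trajectory)
        if len(self.episodes) > self.capacity:
            self.episodes.pop(0)    # evict oldest episode

    def sample(self, batch_size, tracelength, seed):
        rng = random.Random(seed)   # same seed across agents -> aligned draws
        batch = []
        for _ in range(batch_size):
            ep = rng.choice(self.episodes)
            start = rng.randrange(max(1, len(ep) - tracelength + 1))
            batch.append(ep[start:start + tracelength])
        return batch
```

Aligned sampling matters because each agent's TD target implicitly depends on what its teammates experienced at the same timestep; replaying mismatched moments would amplify the apparent non-stationarity.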
\section*{Acknowledgements}
The authors thank the anonymous reviewers for their insightful feedback and suggestions. This work was supported by Boeing Research \& Technology, ONR MURI Grant N000141110688 and BRC Grant N000141712072.
\bibliographystyle{icml2017}
\section{Related Work}
\subsection{Multi-agent RL}
\citet{bucsoniu2010multi} present a taxonomy of MARL approaches. Partially-observable MARL has received limited attention. Works include model-free gradient-ascent based methods \cite{peshkin2000learning,Dutech01}, simulator-supported methods to improve policies using a series of linear programs \cite{wu2012rollout}, and model-based approaches where agents learn in an interleaved fashion to reduce destabilization caused by concurrent learning \cite{banerjee2012sample}.
Recent scalable methods use Expectation Maximization to learn finite state controller (FSC) policies \cite{Wu13,LiuIJCAI15,LiuAAAI16}.
Our approach is most related to IL algorithms that learn Q-values, as their representational power is more conducive to transfer between tasks, in contrast to policy tables or FSCs. The majority of existing IL approaches assume full observability. \citet{matignon2012independent} survey these approaches, the most straightforward being Decentralized Q-learning \cite{tan1993multi}, where each agent performs independent Q-learning. This simple approach has some empirical success \cite{matignon2012independent}. Distributed Q-learning \cite{lauer2000algorithm} is an optimal algorithm for deterministic domains; it updates Q-values only when they are guaranteed to increase, and the policy only for actions that are no longer greedy with respect to Q-values. \citet{bowling2002multiagent} conduct Policy Hill Climbing using the Win-or-Learn Fast heuristic to decrease (increase) each agent's learning rate when it performs well (poorly). Frequency Maximum Q-Value heuristics \cite{kapetanakis2002reinforcement} bias action selection towards those consistently achieving max rewards. Hysteretic Q-learning \cite{matignon2007hysteretic} addresses miscoordination using cautious optimism to stabilize policies while teammates explore. Its track record of empirical success against complex methods \cite{xu2012multiagent,matignon2012independent,barbalios2014robust} leads us to use it as a foundation for our MT-MARL approach. \citet{foerster2016learning} present architectures to learn communication protocols for Dec-POMDP RL, noting best performance using a centralized approach with inter-agent backpropagation and parameter sharing. They also evaluate a model combining Decentralized Q-learning with DRQNs, which they call Reinforced Inter-Agent Learning. Given the decentralized nature of this latter model (called Dec-DRQN herein for clarity), we evaluate our method against it. 
Concurrent to our work, \citet{foerster2017stabilising} investigated an alternate means of stabilizing experience replay for the centralized learning case.
\subsection{Transfer and Multi-task RL}
\citet{taylor2009transfer} and \citet{torrey2009transfer} provide excellent surveys of transfer and multi-task RL, which almost exclusively target single-agent, fully-observable settings. \citet{tanaka2003multitask} use first and second-order statistics to compute a prioritized sweeping metric for MTRL, enabling an agent to maximize lifetime reward over task sequences. \citet{fernandez2006probabilistic} introduce an MDP policy similarity metric, and learn a \emph{policy library} that generalizes well to tasks within a shared domain. \citet{wilson2007multi} consider TL for MDPs, learning a Dirichlet Process Mixture Model over source MDPs, used as an informative prior for a target MDP. They extend the work to multi-agent MDPs by learning characteristic agent roles \cite{wilson2008learning}. \citet{brunskill2013sample} introduce an MDP clustering approach that reduces negative transfer in MTRL, and prove reduction of sample complexity of exploration using transfer. \citet{taylor2013transfer} introduce \emph{parallel transfer} to accelerate multi-agent learning using inter-agent transfer.
Recent work extends the notion of neural network distillation \cite{hinton2015distilling} to DQNs for single-agent, fully-observable MTRL, first learning a set of specialized teacher DQNs, then distilling teachers to a single multi-task network \cite{rusu2015policy}. The efficacy of the distillation technique for single-agent MDPs with large state spaces leads our work to use it as a foundation for the proposed MT-MARL under partial observability approach.
\section{Multi-agent Multi-target (MAMT) Domain Overview}
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[page=1,width=1\linewidth]{./figs/toroidal_domain.pdf}
\caption{Agents must learn inherent toroidal transition dynamics in the domain for fast target capture (e.g., see green target).}
\label{fig:toroidal_domain_init}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[page=2,width=1\linewidth]{./figs/toroidal_domain.pdf}
\caption{In MAMT tasks, no reward is given to the team above, despite two agents successfully capturing their targets.}
\label{fig:toroidal_domain_bad}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[page=3,width=1\linewidth]{./figs/toroidal_domain.pdf}
\caption{Example successful simultaneous capture scenario, where the team is given $+1$ reward.}
\label{fig:toroidal_domain_good}
\end{subfigure}
\caption{Visualization of MAMT domain. Agents and targets operate on a toroidal $m \times m$ gridworld. Each agent (circle) is assigned a unique target (cross) to capture, but does not observe its assigned target ID. Targets' states are fully occluded at each timestep with probability $P_f$. Despite the simplicity of gridworld transitions, reward sparsity makes this an especially challenging task. During both learning and execution, the team receives no reward unless all targets are captured \emph{simultaneously} by their corresponding agents.}
\end{figure*}
\section{Empirical Results: Learning on Multi-agent Single-Target (MAST) Domain}
Multi-agent Single-target (MAST) domain results for Dec-DRQN and Dec-HDRQN, with 2 agents and $P_f = 0.0$ (observation flickering disabled). These results mainly illustrate that Dec-DRQN sometimes has \emph{some} empirical success in low-noise domains with small state-space. Note that in the $8 \times 8$ task, Dec-HDRQN significantly outperforms Dec-DRQN, which converges to a sub-optimal policy despite domain simplicity.
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mast_drqn_v.pdf}
\caption{Dec-DRQN empirical returns during training.}
\label{fig:2agt_mast_drqn_v}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mast_drqn_q.pdf}
\caption{Dec-DRQN anticipated values during training. }
\label{fig:2agt_mast_drqn_q}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mast_hdrqn_v.pdf}
\caption{Dec-HDRQN empirical returns during training.}
\label{fig:2agt_mast_hdrqn_v}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mast_hdrqn_q.pdf}
\caption{Dec-HDRQN anticipated values during training. }
\label{fig:2agt_mast_hdrqn_q}
\end{subfigure}
\caption{Multi-agent Single-target (MAST) domain results for Dec-DRQN and Dec-HDRQN, with 2 agents and $P_f = 0.0$ (observation flickering disabled). All plots conducted (at each training epoch) for a batch of 50 randomly-initialized episodes. Anticipated value plots (on right) were plotted for the exact starting states and actions undertaken for the episodes used in the plots on the left.}
\label{fig:mast_plots_comparisons}
\end{figure*}
\clearpage
\section{Empirical Results: Learning on MAMT Domain}
Multi-agent Multi-target (MAMT) domain results, with 2 agents and $P_f = 0.3$. We also evaluated performance of inter-agent parameter sharing (a centralized approach) in Dec-DRQN (which we called Dec-DRQN-PS). Additionally, a Double-DQN variant (Dec-DDRQN) was evaluated and found to have negligible impact on performance.
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_drqn_v.pdf}
\caption{Dec-DRQN empirical returns during training. }
\label{fig:2agt_mamt_drqn_v_appendix}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_drqn_q_init.pdf}
\caption{Dec-DRQN anticipated values during training. }
\label{fig:2agt_mamt_drqn_q_appendix}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_drqn_ps_v.pdf}
\caption{Dec-DRQN-PS (with parameter sharing), empirical returns during training.}
\label{fig:2agt_mamt_drqn_ps_v_appendix}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_drqn_ps_q_init.pdf}
\caption{Dec-DRQN-PS (with parameter sharing), anticipated values during training.}
\label{fig:2agt_mamt_drqn_ps_q_init_appendix}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_double_drqn_v.pdf}
\caption{Dec-DDRQN (double DRQN) empirical returns during training. }
\label{fig:2agt_mamt_double_drqn_v_appendix}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_double_drqn_q_init.pdf}
\caption{Dec-DDRQN (double DRQN) anticipated values during training. }
\label{fig:2agt_mamt_double_drqn_q_init_appendix}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_hdrqn_v.pdf}
\caption{Dec-HDRQN (our approach) empirical returns during training.}
\label{fig:2agt_mamt_hdrqn_v_appendix}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/2agt_mamt_hdrqn_q_init.pdf}
\caption{Dec-HDRQN (our approach) anticipated values during training.}
\label{fig:2agt_mamt_hdrqn_q_appendix}
\end{subfigure}
\caption{MAMT domain results for Dec-DRQN and Dec-HDRQN, with 2 agents and $P_f = 0.3$. All plots conducted (at each training epoch) for a batch of 50 randomly-initialized episodes. Anticipated value plots (on right) were plotted for the exact starting states and actions undertaken for the episodes used in the plots on the left.}
\label{fig:mamt_plots_comparisons_appendix}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{./figs/finalplot_multiagent_q_comparison.pdf}
\caption{Comparison of agents' anticipated value plots using Dec-HDRQN during training. MAMT domain, with 2 agents and $P_f = 0.3$. All plots conducted (at each training epoch) for a batch of 50 randomly-initialized episodes. For a given task, agents have similar anticipated value convergence trends due to shared reward; differences are primarily caused by random initial states and independently sampled target occlusion events for each agent. }
\label{fig:finalplot_multiagent_q_comparison}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{subfigure}[h]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/3agt_v.pdf}
\caption{Empirical returns during training. For batch of 50 randomly-initialized games.}
\label{fig:3agt_v_appendix}
\end{subfigure}
\hfill
\begin{subfigure}[h]{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/3agt_q_init.pdf}
\caption{Anticipated values during training. For specific starting states and actions undertaken in the same 50 randomly-initialized games as \cref{fig:3agt_v_appendix}.}
\label{fig:3agt_q_init_appendix}
\end{subfigure}
\caption{MAMT domain results for Dec-DRQN and Dec-HDRQN, with $n=3$ agents. $P_f = 0.6$ for the $3 \times 3$ task, and $P_f = 0.1$ for the $4 \times 4$ task.}
\label{fig:mamt_plots_3agt_comparisons_appendix}
\end{figure*}
\twocolumn[]
\section{Empirical Results: Learning Sensitivity to Dec-HDRQN Negative Learning Rate $\beta$ }
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/v_varying_beta.pdf}
\caption{Sensitivity of Dec-HDRQN empirical returns to $\beta$ during training. For batch of 50 randomly-initialized games.}
\label{fig:v_varying_beta}
\end{subfigure}
\hfill
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/q_init_varying_beta.pdf}
\caption{Sensitivity of Dec-HDRQN predicted action-values to $\beta$ during training. For specific starting states and actions undertaken in the same 50 randomly-initialized games of \cref{fig:v_varying_beta}.}
\label{fig:q_init_varying_beta}
\end{subfigure}
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/q_varying_beta.pdf}
\caption{Sensitivity of Dec-HDRQN average Q values to $\beta$ during training. For random minibatch of 32 experienced observation inputs.}
\label{fig:q_varying_beta}
\end{subfigure}
\caption{Learning sensitivity to $\beta$ for $6 \times 6$, 2 agent MAMT domain with $P_f=0.25$. All plots for agent $i=0$. $\beta=1$ corresponds to Decentralized Q-learning, $\beta=0$ corresponds to Distributed Q-learning (not including the distributed policy update step).}
\label{fig:beta_sensitivity_values}
\end{figure}
\newpage
\section{Empirical Results: Learning Sensitivity to Dec-HDRQN Recurrent Training Tracelength Parameter $\tau$}
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/v_varying_tracelength.pdf}
\caption{Dec-HDRQN sensitivity to tracelength $\tau$. $6 \times 6$ MAMT domain with $P_f=0.25$. }
\label{fig:v_varying_tracelength_appendix}
\end{subfigure}
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/q_init_varying_tracelength.pdf}
\caption{Sensitivity of Dec-HDRQN predicted action-values to recurrent training tracelength parameter. For specific starting states and actions undertaken in the same 50 randomly-initialized games of \cref{fig:v_varying_tracelength_appendix}.}
\label{fig:q_init_varying_tracelength}
\end{subfigure}
\caption{Learning sensitivity to recurrent training tracelength parameter $\tau$ for $6 \times 6$, 2 agent MAMT domain with $P_f=0.25$. All plots for agent $i=0$.}
\label{fig:tracelength_sensitivity_values}
\end{figure}
\newpage
\section{Empirical Results: Multi-tasking Performance Comparison}
The below plots show multi-tasking performance of both the distillation and Multi-HDRQN approaches. Both approaches were trained on the $3 \times 3$ through $6 \times 6$ MAMT tasks. Multi-HDRQN failed to achieve specialized-level performance on all tasks, despite 500K training epochs. By contrast, the proposed MT-MARL distillation approach achieves nominal performance after 100K epochs.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/mhdrqn.pdf}
\caption{MT-MARL via Multi-HDRQN.}
\label{fig:mhdrqn_appendix}
\end{subfigure} \\
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{./figs/distillation.pdf}
\caption{MT-MARL via specialized and distilled Dec-HDRQN.}
\label{fig:distillation_appendix}
\end{subfigure}
\caption{Multi-task performance on MAMT domain, $n=2$ agents and $P_f=0.3$.}
\label{fig:multitask_comparisons_appendix}
\end{figure}
\section{Introduction}
\input{sections/intro}
\section{Preliminaries}
\input{sections/preliminary}
\section{Hierarchical policy blending as optimal transport}\label{sec:pbot}
\input{sections/methods}
\section{Experiments}\label{sec:experiments}
\input{sections/experiments}
\section{Related work}
\input{sections/related}
\section{Conclusion}\label{sec:discussion}
\input{sections/discussion}
\clearpage
\acks{This research is supported by the German Research Foundation through the collaborative projects METRIC4IMITATION (PE 2315/11-1), CHIRON, and the Emmy Noether Programme (CH 2676/1-1). The authors would like to thank Joao Carvalho for his feedback, and Snehal Jauhri for his advice on setting up the whole-body control environment.}
\subsection{Toy Environments}
\begin{figure}[tb]
\centering
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\linewidth]{figures/maze_env.png}
\end{minipage}
%
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\linewidth]{figures/box_env.png}
\end{minipage}
\caption{(Left) Planar Maze Environment. (Right) Planar Box Environment. Green lines are expert rollouts.}
\label{fig:toy_env}
\vspace{-0.6cm}
\end{figure}
The \textbf{Planar Maze Environment} is a cluttered environment in which an agent moves from a random start (red) to a random goal (green) position, Fig.~\ref{fig:toy_env}. The environment randomly samples $K$ circular obstacles inside a restricted area between the start and goal positions. We model the movement of the obstacles using simple Euler integration. This maze mimics a dense, cluttered and dynamic environment: local minima are created but often disappear on their own, yet the control methods have to be reactive enough to avoid collisions. Unlike the maze, the \textbf{Planar Box Environment} (Fig.~\ref{fig:toy_env}) is a sparse domain, with the box being either static or dynamic, and its motion modeled with constant velocity. In this setting, the agent's start position is sampled randomly to the right or left of the box. The challenge lies in not getting stuck in a local minimum in front of (left or right of) or below the box. Furthermore, the dynamic nature complicates the planning of a promising solution. Here, although the reactivity requirement is relaxed due to the sparsity, difficult local minima always exist.
For all methods, we design common RMPs such as collision avoidance and goal reaching, as in~\citep{cheng_rmpflow_2018}. To achieve a curving behavior, we design an expert $\pi_{\mathbf{curl}}$ that exerts forces on the normal space of the potential forces, and add two opposing curling experts for balancing. Although this extension does not affect the RMPflow baseline, the hierarchical methods can change the weight scaling and thus achieve curving behavior.
\noindent\textbf{Comparative Evaluation.}~Table~\ref{tab:static_toy} shows the comparative results for the static versions of the toy environments. The myopic RMPflow is not able to solve the Box domain, but it guarantees safety due to its stability property. Short horizons in HiPBI and HiPBOT are not as effective as longer ones. Notably, HiPBOT with only a 10-step look-ahead outperforms the baselines in most metrics, guaranteeing maximum safety and good success rates. Table~\ref{tab:dynamic_toy} shows the dynamic versions of the toy tasks, for both synchronous (S) and asynchronous (A) execution of policy blending. HiPBOT outperforms the baselines in all cases with short horizons, making it much faster to deploy in highly dynamic domains. Considering rollout computation and optimization, HiPBOT ($h=10$) achieves a mean planning rate of around 30Hz thanks to its shorter horizon, while HiPBI ($h=50$) runs at about 2Hz. This computation gap hurts the performance of the baselines even more in highly dynamic environments, as seen in \cref{fig:stress_method}, depicting a \textit{stress-test} on the 2D Maze for changing acceleration levels of the obstacles by adding Brownian noise. We evaluate a plain goal-reaching success rate--regardless of collisions--along with the safety rate and the collision-free success rate. Evidently, HiPBOT with $h=10$ performs better overall, even in the extreme scenarios~\footnote{See videos in our project website \url{https://sites.google.com/view/hipobot/experiment-videos}}.
\begin{figure}[tb]
\begin{center}
{\scriptsize
\captionof{table}{{Evaluation of HiPBOT versus baselines on the static planar environments. This experiment shows the capabilities of HiPBOT to overcome local minima (100 seeds per evaluation).}}\label{tab:static_toy}
\vspace{-0.25cm}
\tabcolsep1.pt
\adjustbox{max width=0.89\textwidth}{
\begin{tabular*}{\linewidth}{l
@{\extracolsep{\fill}}
c c c c @{\extracolsep{\fill}} c
c c c c}
\toprule
\phantom{Var.} &
\multicolumn{4}{c}{2D Toy Box Environment} && \multicolumn{4}{c}{2D Toy Maze Environment}\\
\cmidrule{2-5}
\cmidrule{7-10}
& {SUC$[\%]$} & {SAFE$[\%]$} & {D2G} & {TS} & {} & {SUC$[\%]$} & {SAFE$[\%]$} & {D2G} & {TS} \\
\midrule
RMP\textit{flow}
& $0$ & $\mathbf{100}$ & $198.8\,\pm0.7$ & $500.0\,\pm0.0$
&
& $73$ & $99$ & $161.5\,\pm296.7$ & $338.7\,\pm100.8$ \\[1ex]
HiPBI ($h=25$)
& $0$ & $\mathbf{100}$ & $198.9\,\pm0.5$ & $500.0\,\pm0.0$
&
& $77$ & $93$ & $148.2\,\pm284.5$ & $331.3\,\pm98.9$ \\
HiPBI ($h=5$)
& $64$ & $\mathbf{100}$ & $82.1\,\pm79.9$ & $354.3\,\pm169.7$
&
& $77$ & $97$ & $151.1\,\pm284.1$ & $\mathbf{323.2\,\pm99.3}$ \\
\textbf{HiPBOT} ($h=5$)
& $0$ & $\mathbf{100}$ & $176.2\,\pm1.2$ & $500.0\,\pm0.0$
&
& $72$ & $96$ & $200.0\,\pm334.2$ & $386.2\,\pm234.2$\\
\textbf{HiPBOT} ($h=10$)
& $\mathbf{93}$ & $\mathbf{100}$ & $\mathbf{17.2\,\pm6.7}$ & $\mathbf{132.9\,\pm10.1}$
&
& $\mathbf{82}$ & $\mathbf{100}$ & $\mathbf{138.5\,\pm300.6}$ & $401.0\,\pm281.6$ \\
\bottomrule
\end{tabular*}}}%
\end{center}
\vspace{-0.7cm}
\begin{center}
{\scriptsize
\captionof{table}{{Evaluation of HiPBOT vs. baselines on the dynamic planar environments with 10-pixel velocity levels. We also compare HiPBI and HiPBOT in synchronous (S) and asynchronous (A) settings. In (S), the simulation waits for the blending solution before the agent steps in the environment. In (A), the environment and the methods act asynchronously. This experiment demonstrates the computational advantage of HiPBOT with short look-ahead horizon, balancing between being reactive and explorative for safety (100 seeds per evaluation).}}\label{tab:dynamic_toy}
\vspace{-0.25cm}
\tabcolsep1.pt
\adjustbox{max width=0.89\textwidth}{
\begin{tabular*}{\linewidth}{l
@{\extracolsep{\fill}}
c c c c @{\extracolsep{\fill}} c
c c c c}
\toprule
\phantom{Var.} &
\multicolumn{4}{c}{2D Toy Box Environment} && \multicolumn{4}{c}{2D Toy Maze Environment}\\
\cmidrule{2-5}
\cmidrule{7-10}
& {SUC$[\%]$} & {SAFE$[\%]$} & {D2G} & {TS} & {} & {SUC$[\%]$} & {SAFE$[\%]$} & {D2G} & {TS} \\
\midrule
RMP\textit{flow}
& $0$ & $\mathbf{100}$ & $198.9\,\pm1.5$ & $500.0\,\pm0.0$
&
& $77$ & $89$ & $161.5\,\pm620.0$ & $330.7\,\pm191.3$ \\[1ex]
HiPBI ($h=25$, S)
& $2$ & $\mathbf{100}$ & $189.3\,\pm44.7$ & $490.9\,\pm81.8$
&
& $98$ & $\mathbf{99}$ & $20.3\,\pm172.7$ & $247.6\,\pm55.8$ \\
HiPBI ($h=50$, S)
& $61$ & $\mathbf{100}$ & $49.5\,\pm75.6$ & $276.6\,\pm251.2$
&
& $\mathbf{99}$ & $\mathbf{99}$ & $17.5\,\pm162.6$ & $\mathbf{247.5\,\pm47.6}$ \\
HiPBI ($h=75$, S)
& $\mathbf{100}$ & $\mathbf{100}$ & $\mathbf{7.3\,\pm5.9}$ & $131.9\,\pm18.0$
&
& $\mathbf{99}$ & $\mathbf{99}$ & $\mathbf{19.0\,\pm171.7}$ & $252.1\,\pm47.3$ \\
\textbf{HiPBOT} ($h=5$, S)
& $0$ & $\mathbf{100}$ & $199.3\,\pm1.1$ & $500.0\,\pm0.0$
&
& $\mathbf{99}$ & $\mathbf{99}$ & $26.1\,\pm108.8$ & $315.9\,\pm129.5$ \\
\textbf{HiPBOT} ($h=10$, S)
& $\mathbf{100}$ & $\mathbf{100}$ & $25.8\,\pm0.6$ & $143.9\,\pm22.8$
&
& $\mathbf{99}$ & $\mathbf{99}$ & $22.0\,\pm72.9$ & $294.2\,\pm108.1$ \\
\textbf{HiPBOT} ($h=15$, S)
& $\mathbf{100}$ & $\mathbf{100}$ & $16.4\,\pm4.1$ & $\mathbf{127.3\,\pm18.2}$
&
& 98 & 98 & $30.8\,\pm104.5$ & $312.9\,\pm145.6$ \\[1ex]
HiPBI ($h=25$, A)
& $7$ & $\mathbf{100}$ & $178.6\,\pm71.1$ & $477.1\,\pm120.3$
&
& $83$ & $84$ & $116.2\,\pm386.3$ & $294.2\,\pm131.4$ \\
HiPBI ($h=50$, A)
& $73$ & $\mathbf{100}$ & $40.1\,\pm76.9$ & $324.3\,\pm169.7$
&
& $85$ & $87$ & $100.0\,\pm357.9$ & $\mathbf{293.4\,\pm123.7}$ \\
HiPBI ($h=75$, A)
& $\mathbf{100}$ & $\mathbf{100}$ & $\mathbf{8.5\,\pm6.0}$ & $205.8\,\pm35.3$
&
& $86$ & $87$ & $106.1\,\pm376.5$ & $297.3\,\pm122.1$ \\
\textbf{HiPBOT} ($h=5$, A)
& $0$ & $\mathbf{100}$ & $199.1\,\pm1.1$ & $500.0\,\pm0.0$
&
& $94$ & $94$ & $55.1\,\pm170.5$ & $321.9\,\pm176.2$\\
\textbf{HiPBOT} ($h=10$, A)
& $\mathbf{100}$ & $\mathbf{100}$ & $22.2\,\pm4.2$ & $147.9\,\pm20.3$
&
& $\mathbf{99}$ & $\mathbf{99}$ & $\mathbf{20.5\,\pm99.6}$ & $286.0\,\pm80.9$ \\
\textbf{HiPBOT} ($h=15$, A)
& $\mathbf{100}$ & $\mathbf{100}$ & $17.5\,\pm3.3$ & $\mathbf{126.4\,\pm18.2}$
&
& $94$ & $94$ & $59.8\,\pm203.4$ & $330.6\,\pm188.3$ \\
\bottomrule
\end{tabular*}}}%
\end{center}
\vspace{-0.25cm}
\centering
\includegraphics[width=0.9\linewidth]{figures/stress_methods.pdf}
\vspace{-0.25cm}
\captionof{figure}{\scriptsize Comparative evaluation for different velocity and acceleration levels of obstacles (30 seeds per evaluation).}
\label{fig:stress_method}
\vspace{-0.5cm}
\end{figure}
\begin{figure}[tb]
\centering
\vspace{-0.1cm}
\includegraphics[width=0.89\linewidth]{figures/stress_hipbot.pdf}
\vspace{-0.25cm}
\caption{Stress test of HiPBOT on extreme velocity and noisy acceleration levels. We run 30 seeds per evaluation.}
\label{fig:stress_hipbot}
\centering
\vspace{-0.125cm}
\includegraphics[width=0.75\linewidth]{figures/panda_exp.pdf}
\vspace{-0.25cm}
\caption{Comparative study in the manipulation environment in both static and dynamic obstacle setting, with increasing obstacle number. We run 70 seeds per evaluation.}
\label{fig:eval_panda}
\vspace{-0.6cm}
\end{figure}
\noindent\textbf{Stress-test of HiPBOT.}~We even tested more difficult scenarios for the HiPBOT ($h=10$) in the 2D Maze environment, where we varied both numbers of dynamic obstacles and their acceleration levels. While there is total success regardless of collisions, safety is compromised for these very dynamic cases.
We hypothesize that longer horizons, more efficient optimization, and even a ``cleverer'' exploration strategy during planning are necessary for these more complex environments.
\subsection{Manipulator Environment}~We test the scalability of our method in high-dimensional manipulation tasks, Fig.~\ref{fig:high_dof}. The robot has to start from a randomly selected box (brown) and reach the randomly selected target box (green), while avoiding all possible collisions with itself and the dynamic environment (red obstacles can be static or moving with constant velocity). We implemented eight expert RMPs, ranging from joint \& velocity limits and self-collision to target reaching and obstacle avoidance. Both HiPBI and HiPBOT use the same experts with four additional local curling experts for the end-effector that operate in the normal space of the target-reaching potential. While RMPflow cancels out the curling, hierarchical methods adapt the temperatures to achieve the desired dynamic behavior. HiPBOT with horizon 10 is again superior in terms of collision-free success rate. We see that performance drops in the static environment due to difficult local minima that do not vanish over time. This case would require longer look-ahead horizons, but we want to point to the increased efficiency with as few as 10 steps and a mean planning rate of 6.4Hz, compared to the 50 steps of HiPBI with a mean planning time of 3.5s, which yields lower performance.
\subsection{Whole-Body Environment}~Finally, we demonstrate HiPBOT capabilities in the MEMA setting with a high-dimensional, multi-objective and highly dynamic environment, where the TIAGo++ must track two potentially conflicting reference trajectories while avoiding self-collision and an obstacle, Fig.~\ref{fig:high_dof}. As demonstrated in the videos, HiPBOT is able to compromise between objectives thanks to its ability to adapt expert priorities online. In contrast, RMPflow struggles to find good situational actions and eventually collides.
\subsection{Product of experts}
We formalize each expert policy $i$ in the form of a Boltzmann distribution
$
\pi_{i}({\bm{a}} \mid {\bm{s}}; {\boldsymbol{\theta}}_{i})
\propto
\exp(-E({\bm{s}}, {\bm{a}}; {\boldsymbol{\theta}}_{i})),
$
where the quantities ${\bm{s}} \in \mc{S}$ and ${\bm{a}} \in \mc{A}$ denote a state and an action, respectively. An energy function $E:\mc{S} \times \mc{A} \rightarrow \mathbb{R}$ assigns a cost to each state-action pair. The energy function $E$ and its hyperparameters ${\boldsymbol{\theta}}_{i}$ are usually designed or learned in advance. Following the Product of Experts (PoE)~\citep{hinton2002training} formulation, the blended policy for an agent can be defined as
\begin{equation}
\label{eq:poe}
\pi({\bm{a}} \mid {\bm{s}},\;{\boldsymbol{\beta}}) = \prod_{i=1}^{n} \pi_{i}({\bm{a}} \mid {\bm{s}};\;{\boldsymbol{\theta}}_{i})^{\beta_i} \propto \exp\left(-\sum_{i=1}^{n} \beta_i E({\bm{s}}, {\bm{a}}; {\boldsymbol{\theta}}_{i}) \right)
\end{equation}
with weighting factors ${\boldsymbol{\beta}}$, also known as \textit{temperatures}, representing the importance or relevance of each policy in the product. In the logarithmic space, this blending corresponds to a weighted superposition. The optimal action results from
$
{\bm{a}}^* = \argmax_{ {\bm{a}} \in \mc{A} } \log \pi ({\bm{a}} \mid {\bm{s}}, {\boldsymbol{\beta}}) = \argmin_{ {\bm{a}} \in \mc{A} } \sum_{i=1}^{n} \beta_i E({\bm{s}}, {\bm{a}}; {\boldsymbol{\theta}}_{i})
$
depending on the state ${\bm{s}}$ and ${\boldsymbol{\beta}}$. We can further make the temperature state-dependent, ${\boldsymbol{\beta}}({\bm{s}})$, allowing the relevance or importance of each expert to change with the state. Adapting the relevance of experts online makes it possible to inject planning into the otherwise myopic policy $\pi({\bm{s}}, {\bm{a}})$.
In the RMP framework, the energy $E_{i}({\bm{x}}, {\bm{a}}^{{\mathcal{X}}_i}; {\boldsymbol{\theta}}_{i})$ is usually designed in the task space ${\mathcal{X}}_i$ as a quadratic function having smooth and convex properties, with corresponding task map ${\bm{x}} = \phi_i({\bm{s}})$. Accordingly, the Boltzmann distribution forms a Gaussian
$
\pi_{i}({\bm{a}}^{{\mathcal{X}}_i} \mid {\bm{x}}; {\boldsymbol{\theta}}_{i}) =
{\mathcal{N}}({\bm{f}}_i({\bm{x}}), {\bm{M}}_i({\bm{x}})^{-1})
$
with the forcing function ${\bm{f}}_i({\bm{x}})$ and ${\bm{M}}_i({\bm{x}})$ as the mean and the weighting matrix (i.e., the precision matrix), respectively. Within the PoE view, the \textit{pullbacked} forcing term and precision matrix of the configuration policy would be
$
{\bm{f}}({\bm{s}}) = \sum_{i=1}^n\beta_i({\bm{s}}){\bm{J}}_{\phi_i}^{\dagger} {\bm{f}}_i({\bm{x}}), \,{\bm{M}} = \sum_{i=1}^n {\bm{J}}_{\phi_i}^{\intercal} {\bm{M}}_i({\bm{x}}) {\bm{J}}_{\phi_i}
$, respectively.
Due to the quadratic nature of the RMP energy functions, given the current state and determined temperatures, the optimal action can be computed analytically in closed form as
\begin{equation}\label{eq:agent_optimal_a}
{\bm{a}}^* = \argmin_{ {\bm{a}} \in \mc{A} } \sum_{i=1}^{n} \beta_i E({\bm{s}}, {\bm{a}}; {\boldsymbol{\theta}}_{i}) = {\bm{M}}^\dagger {\bm{f}}({\bm{s}}).
\end{equation}
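For concreteness, the pullback formulas and the closed-form action above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code; the plain-list container layout is an assumption.

```python
import numpy as np

def blended_action(forces, precisions, jacobians, betas):
    """Pull task-space Gaussian experts (RMPs) back to configuration space
    and return the closed-form optimal action a* = M^+ f.

    forces:     list of task-space forcing vectors f_i(x)
    precisions: list of task-space weight matrices M_i(x)
    jacobians:  list of task-map Jacobians J_{phi_i}
    betas:      list of temperatures beta_i(s) >= 0
    """
    q_dim = jacobians[0].shape[1]
    f = np.zeros(q_dim)
    M = np.zeros((q_dim, q_dim))
    for f_i, M_i, J_i, b_i in zip(forces, precisions, jacobians, betas):
        f += b_i * np.linalg.pinv(J_i) @ f_i   # temperature-weighted pullback force
        M += J_i.T @ M_i @ J_i                 # pullback precision matrix
    return np.linalg.pinv(M) @ f               # closed-form optimal action
```

With a single expert, an identity task map and unit precision, the blended action reduces to the expert's own forcing term scaled by its temperature.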
\subsection{Product of experts-agents}
Let us consider multi-arm systems (Fig. \ref{fig:high_dof}-left), where each robotic arm can be considered as an agent acting on the whole system to execute some tasks. Assuming the agents' behaviors are collaborative, and that there exists a pool of $n$ experts and $m$ agents, we propose a simple solution for the Multi-Experts-Multi-Agents (MEMA) policy blending problem by extending the PoE \eqref{eq:poe}, as defined in Definition \ref{def:mema}.
\begin{definition}[Product of Experts-Agents] \label{def:mema}
Let ${\boldsymbol{\beta}} \in \mathbb{R}_+^{n \times m}$ be the positive blending weight matrix for $n$ experts and $m$ agents. The MEMA blending policy is defined as the product of experts-agents (PoEA)
\begin{equation}\label{eq:mema}
\pi({\bm{a}} \mid {\bm{s}},\;{\boldsymbol{\beta}}) = \prod_{i, j=1}^{n, m} \pi_{i}({\bm{a}} \mid {\bm{s}};\;{\boldsymbol{\theta}}_{i, j})^{\beta_{i, j}} \propto \exp\left(-\sum_{i, j=1}^{n, m} \beta_{i, j} E({\bm{s}}, {\bm{a}};\;{\boldsymbol{\theta}}_{i, j}) \right)
\end{equation}
\end{definition}
PoEA can be realized under the RMP framework, where the optimal action ${\bm{a}}^*_j$ of the $j^{\text{th}}$-agent can be computed using \eqref{eq:agent_optimal_a} with the $j^{\text{th}}$-column of ${\boldsymbol{\beta}}$. The final optimal action acting on the system is ${\bm{a}}^* = \sum_j {\bm{M}}_j^\dagger {\bm{f}}_j({\bm{s}})$.
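The PoEA computation, taking each agent's action with the corresponding column of ${\boldsymbol{\beta}}$ and summing over agents, can be sketched as follows (an illustrative sketch; the nested expert-by-agent list layout is an assumption):

```python
import numpy as np

def poea_action(forces, precisions, jacobians, beta):
    """MEMA action: per-agent closed-form actions a_j = M_j^+ f_j, computed
    with the j-th column of beta, then summed over the m agents.

    forces[i][j], precisions[i][j], jacobians[i][j]: expert i as seen by
    agent j; beta: (n, m) positive weight matrix.
    """
    n, m = beta.shape
    q_dim = jacobians[0][0].shape[1]
    a = np.zeros(q_dim)
    for j in range(m):
        f_j = np.zeros(q_dim)
        M_j = np.zeros((q_dim, q_dim))
        for i in range(n):
            f_j += beta[i, j] * np.linalg.pinv(jacobians[i][j]) @ forces[i][j]
            M_j += jacobians[i][j].T @ precisions[i][j] @ jacobians[i][j]
        a += np.linalg.pinv(M_j) @ f_j   # agent j's contribution
    return a
```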
\subsection{Policy blending as entropic-regularized unbalanced optimal transport}
In practice, we found the normalizing constraint on the policy temperatures restrictive: it requires spreading enough mass across the policy weight-scalings, leading to overestimation. On the other hand, with a large number of experts, the normalized temperature matrix puts too little mass on each expert policy, leading to underestimation and under-performance. We therefore propose to cast policy blending as an entropic-regularized UOT problem, relaxing the normalizing constraint.
\begin{definition}[MEMA Policy Blending] \label{def:ot_problem}
Let ${\boldsymbol{\mu}} \in \mathbb{R}_+^n, {\bm{\nu}} \in \mathbb{R}_+^m$ be arbitrary positive vectors representing the priors of expert-policy and agent-policy temperatures, respectively. The temperature matrix as transportation plan ${\boldsymbol{\beta}}({\bm{s}}) \in \mathbb{R}_+^{n \times m}$ is a strictly positive matrix. The entropic-regularized UOT for the policy blending is defined as
\begin{equation}\label{eq:policy_blending_ot}
{\boldsymbol{\beta}}^*({\bm{s}}) = \argmin_{{\boldsymbol{\beta}} \in \mathbb{R}_+^{n \times m}}\dotprod{{\boldsymbol{\beta}}}{{\bm{C}}} - \frac{1}{\lambda} H({\boldsymbol{\beta}}) + \lambda_{KL} \left(\widetilde{\mathrm{KL}}({\boldsymbol{\beta}}\bm{1}_m \;\|\; {\boldsymbol{\mu}}) + \widetilde{\mathrm{KL}}({\boldsymbol{\beta}}^\intercal\bm{1}_n \;\|\; {\bm{\nu}})\right)
\end{equation}
where ${\bm{C}}({\bm{s}})$ is the state-dependent cost matrix, which can be learned or computed analytically.
\end{definition}
The solution of \eqref{eq:policy_blending_ot} is unique due to the strict convexity of the objective in ${\boldsymbol{\beta}}$. This uniqueness, together with the practical computational complexity of the generalized Sinkhorn algorithm solving (\ref{eq:policy_blending_ot}), makes the formulation well-suited for reactive motion generation.
In reactive motion generation, the objectives are usually goal-reaching, obstacle avoidance, and self-collision avoidance in dynamic settings. Hence, we follow these objectives to design the state-dependent cost matrix as
\begin{equation}\label{eq:cost_matrix}
\left[{\bm{C}}({\bm{s}})\right]_{i, j} = \frac{1}{h}\sum_{t=1}^h w_g\underbrace{d({\bm{s}}^t_{i, j}, {\bm{s}}^g_j)}_{\textrm{Goal Cost}} + w_c \underbrace{\exp\left(-\frac{\textrm{DF}_j({\bm{s}}^t_{i, j})^2}{2l^2}\right)}_{\textrm{Collision-Avoidance Cost}}
\end{equation}
where from the current system state ${\bm{s}}$, the rollout with horizon $h$ from the perspective of the $j^{\text{th}}$-agent following the $i^{\text{th}}$-expert is $\{{\bm{s}}, {\bm{s}}_{i, j}^1, \ldots, {\bm{s}}^h_{i, j}\}$. We assume the transition dynamics ${\mathcal{T}}(\cdot, \cdot)$ of the system are known, and the rollout is computed by following ${\bm{s}}^{t+1} = {\mathcal{T}}({\bm{s}}^t, {\bm{a}}^*),\, {\bm{a}}^* = \argmin E({\bm{s}}, {\bm{a}}; {\boldsymbol{\theta}})$. ${\bm{s}}^g_j$ is the $j^{\text{th}}$-agent goal state, and $\textrm{DF}_j(\cdot)$ is the distance field measuring the closest distance of the $j^{\text{th}}$-agent's robot links to obstacles, including itself (i.e., self-distance). $l$ is the hyperparameter for the collision margin. Note that the goal or obstacles can change over time; thus, the goal and distance field are also updated in the loop. This cost design enables the integration of additional, higher levels of planning abstraction, e.g., task planning, where a symbolic planner can set intermediate goals or other contexts in the cost matrix (not integrated in the current work). Since experts are independent by assumption, the elements of the cost matrix can be computed in parallel on a GPU. An algorithm overview of our HiPBOT framework can be found on the project website.
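For illustration, the cost matrix in \eqref{eq:cost_matrix} can be evaluated from precomputed rollouts as below. The array layout and the distance-field callable standing in for $\textrm{DF}_j$ are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def cost_matrix(rollouts, goals, dist_field, w_g=1.0, w_c=1.0, l=0.1):
    """State-dependent cost C[i, j]: mean over the horizon of a
    goal-distance term plus a soft collision penalty.

    rollouts:   array (n_experts, m_agents, h, state_dim) of rolled-out states
    goals:      array (m_agents, state_dim) of per-agent goal states
    dist_field: callable state -> closest distance to obstacles (stand-in
                for DF_j; an assumption for this sketch)
    """
    n, m, h, _ = rollouts.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            goal = np.linalg.norm(rollouts[i, j] - goals[j], axis=-1)
            df = np.array([dist_field(s) for s in rollouts[i, j]])
            coll = np.exp(-df**2 / (2 * l**2))   # soft collision penalty
            C[i, j] = np.mean(w_g * goal + w_c * coll)
    return C
```

Since each entry depends only on its own rollout, the double loop parallelizes trivially, matching the remark about GPU evaluation.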
\subsection{Stability Analysis}
Since HiPBOT deploys RMPs as expert policies, it sets the scaling factors for the expert Riemannian matrices ${\bm{M}}_{i, j}$ at the upper level, and analyzing its local stability is straightforward.
\begin{proposition}[Local Stability]
By Definition \ref{def:ot_problem}, ${\boldsymbol{\beta}} \in \mathbb{R}_+^{n \times m}$, i.e., the weight matrix of the expert Riemannian matrices is strictly positive. If all expert RMPs are in the form of Geometric Dynamical Systems, then by \textbf{Theorem 2} in~\citep{cheng_rmpflow_2018}, the system that follows the configuration policy given by the Product of Experts-Agents in Definition \ref{def:mema} converges to the forward invariant set $\mathcal{C}_{\infty}:=\left\{({\bm{s}}, \dot{{\bm{s}}}): {\bm{f}}({\bm{s}})=0, \dot{{\bm{s}}}=0\right\}$.
\end{proposition}
Note that this local stability of HiPBOT is only valid for static environments, where the parameters for collision-avoidance RMPs do not change. Nevertheless,
for dynamic environments, in most cases, we empirically observed that the agents also exhibit locally stable behaviors, and we plan to investigate this theoretically further in the future.
\subsection{Preliminaries on optimal transport}
\noindent\textbf{Histograms and discrete transport.}~For two histograms ${\bm{n}} \in \Sigma_n$ and ${\bm{m}} \in \Sigma_m$ in the simplex $\Sigma_d \defeq \{{\bm{x}} \in \mathbb{R}^d_+: {\bm{x}}^\intercal\bm{1}_d=1\}$, we write the transport polytope $U({\bm{n}}, {\bm{m}})$, namely the polyhedral set of $n \times m$ matrices
$
U({\bm{n}}, {\bm{m}}) \defeq \{{\bm{P}} \in \mathbb{R}_+^{n \times m}\; |\; {\bm{P}}\bm{1}_m={\bm{n}}, {\bm{P}}^\intercal\bm{1}_n={\bm{m}}\} \,
$
where $\bm{1}_d$ is the $d$-dimensional vector of ones. $U({\bm{n}}, {\bm{m}})$ contains all non-negative $n \times m$ matrices with row and column sums ${\bm{n}}$ and ${\bm{m}}$, respectively. From a probabilistic view, the set $U({\bm{n}}, {\bm{m}})$ contains all possible \emph{joint probabilities} of two multinomial random variables $(X, Y)$ having histograms ${\bm{n}}$ and ${\bm{m}}$, respectively. Indeed, any matrix ${\bm{P}} \in U({\bm{n}}, {\bm{m}})$ can be identified with the joint probability for $(X, Y)$ such that $p(X=i,Y=j)=p_{ij}$. We define the entropy $H(\cdot)$ of these histograms and their marginals as
$
{\bm{x}} \in \Sigma_n,\, H({\bm{x}})=-\sum_{i=1}^n {x_i} \log {x_i},\, H({\bm{P}})=-\sum_{i,j=1}^{n, m} p_{ij} (\log p_{ij} - 1).
$
\noindent\textbf{Entropic-regularized optimal transport.}~Given a $n \times m$ cost matrix ${\bm{M}}$, the OT between ${\bm{n}}$ and ${\bm{m}}$ given cost ${\bm{M}}$ is
$
d_M({\bm{n}}, {\bm{m}}) \defeq \min_{{\bm{P}}\in U({\bm{n}}, {\bm{m}})} \dotprod{{\bm{P}}}{{\bm{M}}},
$
where $\langle \cdot,\cdot \rangle$ is the Frobenius dot-product. For a general matrix ${\bm{M}}$ and $d = \max (n, m)$, the worst case complexity of classical algorithms~\citep{ahuja1988network, orlin1988faster} solving for the optimal plan ${\bm{P}}^*$ is $O(d^3\log d)$. To deal with the scalability of computing the OT,~\citet{cuturi2013sinkhorn} proposes to regularize its objective function by the entropy of the transport plan, which results in the entropic-regularized OT. Although the Sinkhorn distance (Definition 1 in~\citep{cuturi2013sinkhorn}) is defined by minimizing over a tighter transport polytope obtained by imposing a hard constraint on the entropy $H({\bm{P}})$, it can be rewritten using a Lagrange multiplier for the entropy constraint
\begin{equation}\label{eq:sinkhorn_dist}
d_{{\bm{M}}}^{\lambda}({\bm{n}}, {\bm{m}}) \defeq \dotprod{{\bm{P}}^\lambda}{{\bm{M}}},\,\text{ where } {\bm{P}}^\lambda=\argmin_{{\bm{P}}\in U({\bm{n}}, {\bm{m}})} \dotprod{{\bm{P}}}{{\bm{M}}} - \frac{1}{\lambda} H({\bm{P}}).
\end{equation}
The solution ${\bm{P}}^\lambda$ is unique due to the strict convexity of the negative entropy term. The entropic regularization enables the efficient Sinkhorn-Knopp algorithm to solve OT, shown to have a complexity of $\Tilde{O}(d^2/\epsilon^3)$ ~\citep{altschuler2017near}.
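The Sinkhorn-Knopp updates behind \eqref{eq:sinkhorn_dist} are compact enough to sketch directly; the following is an illustrative NumPy version (the fixed iteration budget, instead of a convergence criterion, is our simplification):

```python
import numpy as np

def sinkhorn(n_hist, m_hist, M, lam, n_iter=1000):
    """Sinkhorn-Knopp iteration for the entropic-regularized OT plan
    P = argmin <P, M> - (1/lam) H(P) over the polytope U(n, m)."""
    K = np.exp(-lam * M)              # Gibbs kernel
    u = np.ones_like(n_hist)
    for _ in range(n_iter):
        v = m_hist / (K.T @ u)        # match column marginals
        u = n_hist / (K @ v)          # match row marginals
    return u[:, None] * K * v[None, :]
```

The returned plan has (approximately) the prescribed row and column sums, and larger `lam` drives it toward the unregularized optimal plan.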
\noindent\textbf{Entropic-Regularized Unbalanced Optimal Transport.}~A key constraint of classical OT is that it requires the input measures to be normalized to the unit mass, which is a problematic assumption for many applications that require handling arbitrary positive measures (mass creation or destruction), and/or allowing for only partial displacement of mass. The entropic-regularized UOT~\citep{frogner2015learning, chizat2018scaling} is defined as
\begin{equation}\label{eq:uot}
{\bm{P}}^{\lambda}_{UOT} \defeq \argmin_{{\bm{P}}\in \mathbb{R}_+^{n \times m}}\dotprod{{\bm{P}}}{{\bm{M}}} - \frac{1}{\lambda} H({\bm{P}}) + \lambda_{KL} \left(\widetilde{\mathrm{KL}}({\bm{P}}\bm{1}_m \;\|\; {\bm{n}}) + \widetilde{\mathrm{KL}}({\bm{P}}^\intercal\bm{1}_n \;\|\; {\bm{m}})\right)
\end{equation}
where now ${\bm{n}} \in \mathbb{R}_+^n, {\bm{m}} \in \mathbb{R}_+^m$ are arbitrary positive vectors, $\lambda_{KL}$ is the marginal regularization scalar, and $\widetilde{\mathrm{KL}}({\bm{w}} || {\bm{z}}) = {\bm{w}}^\intercal\log ({\bm{w}} \oslash {\bm{z}}) - \bm{1}^\intercal{\bm{w}} + \bm{1}^\intercal{\bm{z}}$ is the generalized Kullback-Leibler (KL) divergence between two positive vectors ${\bm{w}}, {\bm{z}} \in \mathbb{R}_+^k$ ($\oslash$ is the element-wise division), with the convention $0 \log 0 = 0$.~\citet{pham2020unbalanced} show that the generalized Sinkhorn-Knopp matrix scaling algorithm~\citep{frogner2015learning} solves the dual of (\ref{eq:uot}) with complexity of $\Tilde{O}(d^2/\epsilon)$ and is guaranteed to converge (Theorem 4.1 in~\citep{chizat2018scaling}). With these properties, casting policy blending as an entropic-regularized UOT problem is desirable, since assuming a Gaussian distribution over blending weights is often non-realistic, as there may be multiple policies having similar priorities given a situation. Furthermore, UOT relaxes the normalizing constraint, as policy blending weights are usually unnormalized quantities. Finally, the blending problem as entropic-regularized UOT benefits from the computational efficiency of the Sinkhorn-Knopp algorithm.
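The generalized scaling updates can be sketched similarly. The soft-marginal exponent $\lambda_{KL}/(\lambda_{KL} + 1/\lambda)$ follows the scaling scheme of~\citep{chizat2018scaling}; the fixed iteration budget is our simplification:

```python
import numpy as np

def sinkhorn_uot(n_vec, m_vec, M, lam, lam_kl, n_iter=2000):
    """Generalized Sinkhorn-Knopp for entropic-regularized UOT: the hard
    marginal constraints are softened by KL terms weighted by lam_kl."""
    eps = 1.0 / lam                   # entropic regularization strength
    fi = lam_kl / (lam_kl + eps)      # soft-marginal exponent
    K = np.exp(-lam * M)
    u = np.ones_like(n_vec)
    v = np.ones_like(m_vec)
    for _ in range(n_iter):
        u = (n_vec / (K @ v)) ** fi   # softly match row marginals
        v = (m_vec / (K.T @ u)) ** fi # softly match column marginals
    return u[:, None] * K * v[None, :]
```

As $\lambda_{KL} \to \infty$ the exponent tends to $1$ and the updates recover the balanced Sinkhorn iteration; for moderate $\lambda_{KL}$ the plan may create or destroy mass.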
\subsection{Riemannian Motion Policy}
An RMP~\citep{ratliff_rmp_2018} is a mathematical object $({\bm{\pi}}, {\bm{M}})$ representing reactive, modular, and composable motion generation policies, where ${\bm{\pi}}$ is a deterministic policy mapping states to actions, and ${\bm{M}}$ is the Riemannian matrix representing the policy weight. The state ${{\bm{s}} = ({\bm{q}}, \dot{{\bm{q}}}, {\bm{c}})}$ represents the robot's position ${\bm{q}}\in \mathbb{R}^q$, velocity $\dot{{\bm{q}}}\in \mathbb{R}^q$ and environment context ${\bm{c}}$. We assume a set of task maps $\{\phi_i: {\mathcal{Q}} \xrightarrow{} {\mathcal{X}}_i\}$ that relate the robot configuration space ${\mathcal{Q}}$ and a certain task space ${\mathcal{X}}_i$ of the $i^{\text{th}}$ task. Then, given a set of task-space policies $({\bm{\pi}}^{{\mathcal{X}}_i}, {\bm{M}}^{{\mathcal{X}}_i})$, we can represent a policy in the robot configuration space by ${\bm{\pi}} = \sum_i{\bm{J}}_{\phi_i}^{\dagger} {\bm{\pi}}^{{\mathcal{X}}_i}(\phi({\bm{s}}))\text{, }{\bm{M}} = \sum_i {\bm{J}}_{\phi_i}^{\intercal} {\bm{M}}^{{\mathcal{X}}_i} {\bm{J}}_{\phi_i}$, with ${\bm{J}}_{\phi_i}$ the Jacobian of the task map $\phi_i$ and $\dagger$ the pseudo-inverse operator. Finally, the configuration action is the input acceleration ${\bm{a}}=\ddot{{\bm{q}}} = {\bm{M}}^{\dagger}{\bm{\pi}} \in \mathbb{R}^q$.
\section{\protect\bigskip \textbf{Introduction}}
A pseudo-Riemannian metric $g$ on a smooth $2n-$manifold $M$ is called
neutral if it has signature $(n,n)$. A pair $(M,g)$ is called a
pseudo-Riemannian manifold. An anti-K\"{a}hler structure on a manifold $M$
consists of an almost complex structure $J$ and a neutral metric $g$
satisfying the following conditions:
\textbullet\ algebraic conditions
$(a)$ $J$ is an almost complex structure: $J^{2}=-id.$
$(b)$ The neutral metric $g$ is anti-Hermitian relative to $J$
\begin{equation*}
g(JX,JY)=-g(X,Y)
\end{equation*}
or equivalently
\begin{equation}
g(JX,Y)=g(X,JY),\forall X,Y\in TM. \label{GC}
\end{equation}
\textbullet\ analytic condition
$(c)$ $J$ is parallel relative to the Levi-Civita connection $\nabla ^{g}$
$(\nabla ^{g}J=0)$. This condition is equivalent to ${\Phi }_{J}g=0$, where
${\Phi }_{J}$ is the Tachibana operator \cite{IscanSalimov:2009}.
Obviously, by the algebraic conditions, the triple $(M,J,g)$ is an almost
anti-Hermitian manifold. Given the anti-Hermitian structure $(J,g)$ on a
manifold $M$, we can immediately recover the other anti-Hermitian metric,
called the twin metric, by the formula
\begin{equation*}
G(X,Y)=(g\circ J)(X,Y)=g(JX,Y).
\end{equation*}
Thus, the triple $(M,J,G)$ is another almost anti-Hermitian manifold.
Note that the condition (\ref{GC}) also refers to the purity of $g$
relative to $J$. From now on, by manifold we understand a smooth
$2n-$manifold, and we will use the notations $J$, $g$ and $G$ for the
almost complex structure, the pseudo-Riemannian metric and the twin
metric, respectively. In addition, we shall refer to the quadruple
$(M,J,g,G)$ as an almost anti-Hermitian manifold.
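As a concrete low-dimensional illustration (our own example, with arbitrarily chosen entries): on $\mathbb{R}^{2}$ with the standard almost complex structure, the purity condition (\ref{GC}) forces the matrix of $g$ to have the form $\left( \begin{smallmatrix} a & b \\ b & -a \end{smallmatrix} \right)$, which is automatically neutral. The defining identities can be checked numerically:

```python
import numpy as np

# Standard almost complex structure on R^2 and a metric anti-Hermitian
# with respect to it; g(JX, Y) = g(X, JY) forces g = [[a, b], [b, -a]]
# (the specific values of a, b below are an arbitrary illustrative choice).
J = np.array([[0.0, -1.0], [1.0, 0.0]])
a, b = 2.0, 3.0
g = np.array([[a, b], [b, -a]])

def inner(metric, X, Y):
    return X @ metric @ Y

X, Y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
# Purity / anti-Hermitian condition: g(JX, Y) = g(X, JY)
assert np.isclose(inner(g, J @ X, Y), inner(g, X, J @ Y))
# Equivalent form: g(JX, JY) = -g(X, Y)
assert np.isclose(inner(g, J @ X, J @ Y), -inner(g, X, Y))
# The twin metric G(X, Y) = g(JX, Y) is again symmetric
G = J.T @ g
assert np.allclose(G, G.T)
```

Note that $\det g = -(a^{2}+b^{2}) < 0$, so the metric indeed has signature $(1,1)$.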
Our paper aims to study Codazzi pairs on an almost anti-Hermitian manifold
$(M,J,g,G)$. The structure of the paper is as follows. In Sect. 2, we start
with the $g-$conjugation, $G-$conjugation and $J-$conjugation of arbitrary
linear connections. Then we state the relations among the $(0,4)-$curvature
tensors of these conjugate connections and also show that the set consisting
of the $g-$conjugation, $G-$conjugation, $J-$conjugation and the identity
operation is a Klein group on the space of linear connections. In Sect. 3,
we obtain some remarkable results under the assumption that $(\nabla ,G)$ or
$(\nabla ,J)$ is a Codazzi pair, where $\nabla $ is a linear connection. One
of them is a necessary and sufficient condition under which the almost
anti-Hermitian manifold $(M,J,g,G)$ is anti-K\"{a}hler relative to a
torsion-free linear connection $\nabla $. Sect. 4 closes our paper with
statistical structures under the assumption that $\nabla $ is $J-$invariant,
for a torsion-free linear connection $\nabla $.
\section{Conjugate connections}
In the following let $(M,J,g,G)$ be an almost anti-Hermitian manifold and
$\nabla $ be a linear connection. We define respectively the conjugate
connections of $\nabla $ relative to $g$ and $G$ as the linear connections
determined by the equations
\begin{equation*}
Zg\left( X,Y\right) =g\left( {\nabla }_{Z}X,Y\right) +g\left( X,{{\nabla }_{Z}^{\ast }}Y\right)
\end{equation*}
and
\begin{equation*}
ZG\left( X,Y\right) =G\left( {\nabla }_{Z}X,Y\right) +G\left( X,{{\nabla }_{Z}^{\dagger }}Y\right)
\end{equation*}
for all vector fields $X,Y,Z$ on $M$. We call these connections the
$g-$conjugate connection and the $G-$conjugate connection, respectively.
Note that both the $g-$conjugate connection and the $G-$conjugate connection
of a linear connection are involutive: ${\left( {\nabla }^{\ast }\right) }^{\ast }=\nabla $
and ${\left( {\nabla }^{\dagger }\right) }^{\dagger }=\nabla $.
Conjugate connections are a natural generalization of the Levi-Civita
connection from Riemannian geometry. In particular, ${\nabla }^{\ast }$
(or ${\nabla }^{\dagger }$) coincides with $\nabla $ if and only if $\nabla $
is the Levi-Civita connection of $g$ (or $G$).
Given a linear connection $\nabla $ of $(M,J,g,G)$, the $J-$conjugate
connection of $\nabla $, denoted ${\nabla }^{J}$, is a new linear connection
given by
\begin{equation*}
{\nabla }_{X}^{J}Y=J^{-1}(\nabla _{X}JY)
\end{equation*}
for any vector fields $X$ and $Y$ on $M$ \cite{Simon}. Since conjugate
connections arise from affine differential geometry and from the geometric
theory of statistical inference, many studies have been carried out on them in
recent years \cite{Amari,Nomizu,Nomizu2}.
Through the relationships among the $g-$conjugate connection ${\nabla }^{\ast }$,
the $G-$conjugate connection ${\nabla }^{\dagger }$ and the $J-$conjugate
connection ${\nabla }^{J}$ of $\nabla $, we have the following result.
\begin{theorem}
Let $(M,J,g,G)$ be an almost anti-Hermitian manifold. ${\nabla }^{\ast }$, ${\nabla }^{\dagger }$ and ${\nabla }^{J}$ denote respectively the $g-$conjugation, $G-$conjugation and $J-$conjugation of a linear connection
$\nabla $. Then $(id,\ast ,\dagger ,J)$ acts as the 4-element Klein group on
the space of linear connections:
\begin{eqnarray*}
i)\text{ }{\left( {\nabla }^{\ast }\right) }^{\ast } &=&{\left( {\nabla }^{\dagger }\right) }^{\dagger }={\left( {\nabla }^{J}\right) }^{J}=\nabla , \\
ii)\text{ }{\left( {\nabla }^{\dagger }\right) }^{J} &=&{\left( {\nabla }^{J}\right) }^{\dagger }={\nabla }^{\ast }, \\
iii)\text{ }{\left( {\nabla }^{\ast }\right) }^{J} &=&{\left( {\nabla }^{J}\right) }^{\ast }={\nabla }^{\dagger }, \\
iv)\text{ }{\left( {\nabla }^{\ast }\right) }^{\dagger } &=&{\left( {\nabla }^{\dagger }\right) }^{\ast }={\nabla }^{J}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
\textit{i)} The statement is a direct consequence of definitions of
conjugate connections.
\textit{ii)} We compute
\begin{eqnarray*}
G\left( {{\left( {\nabla }^{\dagger }\right) }_{Z}^{J}}X,Y\right) &=&G\left( J^{-1}{\nabla }_{Z}^{\dagger }\left( JX\right) ,Y\right) \\
&=&G\left( {\nabla }_{Z}^{\dagger }\left( JX\right) ,J^{-1}Y\right) \\
&=&ZG\left( JX,J^{-1}Y\right) -G(JX,{\nabla }_{Z}\left( J^{-1}Y\right) ) \\
&=&Zg\left( J^{2}X,\ J^{-1}Y\right) -g(J^{2}X,{\nabla }_{Z}\left( J^{-1}Y\right) ) \\
&=&-Zg\left( X,J^{-1}Y\right) +g(X,{\nabla }_{Z}\left( J^{-1}Y\right) ) \\
&=&-g\left( {\nabla }_{Z}^{\ast }X,J^{-1}Y\right) =G({\nabla }_{Z}^{\ast }X,Y)
\end{eqnarray*}
which gives ${{\left( {\nabla }^{\dagger }\right) }^{J}={\nabla }^{\ast }}$.
Similarly,
\begin{eqnarray*}
ZG\left( X,Y\right) &=&G\left( {\nabla }_{Z}^{J}X,Y\right) +G\left( X,{{\left( {\nabla }^{J}\right) }_{Z}^{\dagger }}Y\right) , \\
Zg\left( JX,Y\right) &=&g\left( JJ^{-1}{\nabla }_{Z}\left( JX\right) ,Y\right) +g\left( JX,{{\left( {\nabla }^{J}\right) }_{Z}^{\dagger }}Y\right) , \\
Zg\left( JX,Y\right) &=&g\left( {\nabla }_{Z}\left( JX\right) ,Y\right) +g\left( JX,{{\left( {\nabla }^{J}\right) }_{Z}^{\dagger }}Y\right) , \\
g(JX,{\nabla }_{Z}^{\ast }Y) &=&g\left( JX,{{\left( {\nabla }^{J}\right) }_{Z}^{\dagger }}Y\right)
\end{eqnarray*}
which establishes ${\left( {\nabla }^{J}\right) }^{\dagger }={\nabla }^{\ast }$. Hence, we get ${\left( {\nabla }^{\dagger }\right) }^{J}={\left( {\nabla }^{J}\right) }^{\dagger }={\nabla }^{\ast }$.
\textit{iii)} On applying the $J-$conjugation to both sides of $ii)$, ${{\nabla }^{\dagger }}={({\nabla }^{\ast })}^{J}$ and also
\begin{eqnarray*}
g\left( JX,{{\left( {\nabla }^{J}\right) }_{Z}^{\ast }}Y\right) &=&\ Zg\left( JX,Y\right) -g\left( {\nabla }_{Z}^{J}\left( JX\right) ,Y\right) \\
&=&\ ZG\left( X,Y\right) -G\left( J^{-1}{\nabla }_{Z}^{J}\left( JX\right) ,Y\right) \\
&=&\ ZG\left( X,Y\right) -G\left( J^{-1}J^{-1}{\nabla }_{Z}\left( J^{2}X\right) ,Y\right) \\
&=&ZG\left( X,Y\right) -G\left( {\nabla }_{Z}X,Y\right) \\
&=&\ G\left( X,{\nabla }_{Z}^{\dagger }Y\right) =\ g\left( JX,{\nabla }_{Z}^{\dagger }Y\right) .
\end{eqnarray*}
These show that ${{\nabla }^{\dagger }}={({\nabla }^{\ast })}^{J}={{\left( {\nabla }^{J}\right) }^{\ast }}$.
\textit{iv) }On applying the $G-$conjugation to both sides of $ii)$, ${\nabla }^{J}={\left( {\nabla }^{\ast }\right) }^{\dagger }$ and on applying
the $g-$conjugation to both sides of $iii)$, ${\nabla }^{J}={\left( {\nabla }^{\dagger }\right) }^{\ast }$. This completes the proof.
\end{proof}
Recall that the curvature tensor field $R$ of a linear connection $\nabla $
is the tensor field given, for all vector fields $X,Y,Z$, by
\begin{equation*}
R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{\lbrack
X,Y]}Z.
\end{equation*}
If $(M,g)$ is a (pseudo-)Riemannian manifold, it is sometimes convenient to
view the curvature tensor field as a $(0,4)-$tensor field by
\begin{equation*}
R(X,Y,Z,W)=g(R(X,Y)Z,W)
\end{equation*}
called the $(0,4)-$curvature tensor field. If we consider the relationship
among the $(0,4)-$curvature tensor fields of $\nabla $, ${\nabla }^{\ast }$
and ${\nabla }^{J}$, we obtain the following.
\begin{theorem}
Let $(M,J,g,G)$ be an almost anti-Hermitian manifold. ${\nabla }^{\ast }$
and ${\nabla }^{J}$ denote respectively $g-$conjugation and $J-$conjugation
of a linear connection $\nabla $ on $M$. The relationship among the $(0,4)-$curvature tensor fields $R,R^{\ast }$ and $R^{J}$ of $\nabla $, ${\nabla }^{\ast }$ and ${\nabla }^{J}$ is as follows
\begin{equation*}
R\left( X,Y,JZ,W\right) =-R^{\ast }\left( X,Y,W,JZ\right) =R^{J}(X,Y,Z,JW)
\end{equation*}
for all vector fields $X,Y,Z,W$ on $M$.
\end{theorem}
\begin{proof}
Since the relation is linear in the arguments $X,$ $Y,W$ and $Z$, it
suffices to prove it only on a basis. Therefore we assume $X,Y,W,Z\in \{
\frac{\partial }{\partial x^{1}},...,\frac{\partial }{\partial x^{2n}}\}$
and take computational advantage of the following vanishing Lie brackets
\begin{equation*}
\lbrack X,Y]=[Y,W]=[W,Z]=0.
\end{equation*}
Then we get
\begin{eqnarray*}
XYG\left( Z,W\right) &=&X\left( Yg\left( JZ,W\right) \right) \\
&=&X(g\left( {\nabla }_{Y}JZ,W\right) )+X\left( g\left( JZ,{\nabla }_{Y}^{\ast }W\right) \right) \\
&=&g\left( {\nabla }_{X}{\nabla }_{Y}JZ,W\right) +g\left( {\nabla }_{Y}JZ,{\nabla }_{X}^{\ast }W\right) \\
&&+g\left( {\nabla }_{X}JZ,{\nabla }_{Y}^{\ast }W\right) +g\left( JZ,{\nabla }_{X}^{\ast }{\nabla }_{Y}^{\ast }W\right)
\end{eqnarray*}
and by alternation
\begin{eqnarray*}
YXG\left( Z,W\right) &=&g\left( {\nabla }_{Y}{\nabla }_{X}JZ,W\right) +g\left( {\nabla }_{X}JZ,{\nabla }_{Y}^{\ast }W\right) \\
&&+g\left( {\nabla }_{Y}JZ,{\nabla }_{X}^{\ast }W\right) +g\left( JZ,{\nabla }_{Y}^{\ast }{\nabla }_{X}^{\ast }W\right) .
\end{eqnarray*}
From the above relations, we find
\begin{eqnarray*}
0 &=&\left[ X,Y\right] G\left( Z,W\right) =XYG\left( Z,W\right) -YXG\left( Z,W\right) \\
0 &=&g\left( {\nabla }_{X}{\nabla }_{Y}JZ-{\nabla }_{Y}{\nabla }_{X}JZ,W\right) +g\left( JZ,{\nabla }_{X}^{\ast }{\nabla }_{Y}^{\ast }W-{\nabla }_{Y}^{\ast }{\nabla }_{X}^{\ast }W\right) \\
0 &=&R\left( X,Y,JZ,W\right) +R^{\ast }\left( X,Y,W,JZ\right)
\end{eqnarray*}
and similarly
\begin{eqnarray*}
0 &=&\left[ X,Y\right] G\left( Z,W\right) =XYG\left( Z,W\right) -YXG\left( Z,W\right) \\
0 &=&G\left( J^{-1}{\nabla }_{X}J(J^{-1}{\nabla }_{Y}JZ)-J^{-1}{\nabla }_{Y}J(J^{-1}{\nabla }_{X}JZ),W\right) \\
&&+G\left( Z,{\nabla }_{X}^{\ast }{\nabla }_{Y}^{\ast }W-{\nabla }_{Y}^{\ast }{\nabla }_{X}^{\ast }W\right) \\
0 &=&G\left( {\nabla }_{X}^{J}{\nabla }_{Y}^{J}Z-{\nabla }_{Y}^{J}{\nabla }_{X}^{J}Z,W\right) +G\left( Z,{\nabla }_{X}^{\ast }{\nabla }_{Y}^{\ast }W-{\nabla }_{Y}^{\ast }{\nabla }_{X}^{\ast }W\right) \\
0 &=&g\left( {\nabla }_{X}^{J}{\nabla }_{Y}^{J}Z-{\nabla }_{Y}^{J}{\nabla }_{X}^{J}Z,JW\right) +g\left( {\nabla }_{X}^{\ast }{\nabla }_{Y}^{\ast }W-{\nabla }_{Y}^{\ast }{\nabla }_{X}^{\ast }W,JZ\right) \\
0 &=&R^{J}(X,Y,Z,JW)+R^{\ast }\left( X,Y,W,JZ\right) .
\end{eqnarray*}
Hence, it follows that $R\left( X,Y,JZ,W\right) =-R^{\ast }\left( X,Y,W,JZ\right) =R^{J}(X,Y,Z,JW)$.
\end{proof}
\section{Codazzi Pairs}
Let $\nabla $ be an arbitrary linear connection on a pseudo-Riemannian
manifold $(M,g)$. Given the pair $(\nabla ,g)$, we construct respectively
the $(0,3)-$tensor fields $F$ and $F^{\ast }$ by
\begin{equation*}
F(X,Y,Z):=(\nabla _{Z}g)(X,Y)
\end{equation*}
and
\begin{equation*}
F^{\ast }(X,Y,Z):=(\nabla _{Z}^{\ast }g)(X,Y),
\end{equation*}
where $\nabla ^{\ast }$ is the $g-$conjugation of $\nabla $. The tensor field $F$
(or $F^{\ast }$) is sometimes referred to as the cubic form associated to
the pair $(\nabla ,g)$ (or $(\nabla ^{\ast },g)$). These tensors are related
via
\begin{equation*}
F(X,Y,Z)=g(X,(\nabla ^{\ast }-\nabla )_{Z}Y)
\end{equation*}
so that
\begin{equation*}
F^{\ast }(X,Y,Z):=(\nabla _{Z}^{\ast }g)(X,Y)=-F(X,Y,Z).
\end{equation*}
Therefore $F(X,Y,Z)=F^{\ast }(X,Y,Z)=0$ if and only if $\nabla ^{\ast
}=\nabla $, that is, $\nabla $ is $g-$self-conjugate \cite{Zhang}.
For an almost complex structure $J$, a pseudo-Riemannian metric $g$ and a
symmetric bilinear form $\rho $ on a manifold $M$, we call $(\nabla ,J)$ and
$(\nabla ,\rho )$, respectively, a Codazzi pair if the covariant
derivatives $(\nabla J)$ and $(\nabla \rho )$, respectively, are (totally)
symmetric in $X,Y,Z$ \cite{Simon}:
\begin{equation*}
\left( {\nabla }_{Z}J\right) X=\left( {\nabla }_{X}J\right) Z\text{, }\left(
{\nabla }_{Z}\rho \right) \left( X,Y\right) =\left( {\nabla }_{X}\rho
\right) \left( Z,Y\right) \text{.}
\end{equation*}
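For reference, in local coordinates $x^{1},\dots ,x^{2n}$ these two Codazzi conditions take the following explicit form (a standard reformulation, included here for convenience):

```latex
% Codazzi pair (\nabla, \rho): the cubic form \nabla_i \rho_{jk} is symmetric in i,j
\begin{equation*}
\nabla _{i}\rho _{jk}:=\partial _{i}\rho _{jk}-\Gamma _{ij}^{m}\rho _{mk}-\Gamma _{ik}^{m}\rho _{jm},\qquad \nabla _{i}\rho _{jk}=\nabla _{j}\rho _{ik},
\end{equation*}
% Codazzi pair (\nabla, J): \nabla J is symmetric in its two lower indices
\begin{equation*}
\nabla _{i}J_{j}^{k}:=\partial _{i}J_{j}^{k}+\Gamma _{im}^{k}J_{j}^{m}-\Gamma _{ij}^{m}J_{m}^{k},\qquad \nabla _{i}J_{j}^{k}=\nabla _{j}J_{i}^{k}.
\end{equation*}
```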
\subsection{The Codazzi pair $(\protect\nabla ,G)$}
Let $\nabla $ be a linear connection on $(M,J,g,G)$. Next we shall
consider the Codazzi pair $(\nabla ,G)$. Here, the $(0,3)-$tensor field
$F$ is defined by
\begin{equation*}
F(X,Y,Z):=\left( {\nabla }_{Z}G\right) \left( X,\ Y\right) .
\end{equation*}
\begin{proposition}
\label{pr1}Let $\nabla $ be a linear connection on $(M,J,g,G)$. If $(\nabla
,G)$ is a Codazzi pair, then the following statements hold:
$i)$ $F\left( X,Y,Z\right) =\left( {\nabla }_{Z}G\right) \left( X,Y\right) $
is totally symmetric,
$ii)$ $\left( {\nabla }_{JZ}^{\ast }G\right) \left( X,Y\right) =\left( {\nabla }_{JX}^{\ast }G\right) \left( Z,Y\right) ,$
$iii)$ $T^{\nabla }=T^{{\nabla }^{\ast }}$ if and only if $(\nabla ^{\ast
},J)$ is a Codazzi pair,
$iv)$ $T^{\nabla }=T^{{\left( {\nabla }^{\ast }\right) }^{J}}$,
where $\nabla ^{\ast }$ is the $g-$conjugation of $\nabla $ and ${\left( {\nabla }^{\ast }\right) }^{J}$ is the $J-$conjugation of $\nabla ^{\ast }$.
\end{proposition}
\begin{proof}
$i)$ Due to the symmetry of $G$, $F(X,Y,Z)=\left( {\nabla }_{Z}G\right) \left(
X,Y\right) =\left( {\nabla }_{Z}G\right) \left( Y,X\right) =F(Y,X,Z)$. Also,
for $({\nabla },G)$ being a Codazzi pair, $F(X,Y,Z)=\left( {\nabla }_{Z}G\right) \left( X,Y\right) =\left( {\nabla }_{X}G\right) \left( Z,Y\right) =F(Z,Y,X)$, that is, $F$ is totally symmetric in all of its
indices.
$ii)$ By virtue of the purity of $g$ relative to $J$, we obtain
\begin{equation*}
\left( {\nabla }_{Z}G\right) \left( X,Y\right) =\left( {\nabla }_{X}G\right)
\left( Z,\ Y\right)
\end{equation*}
\begin{eqnarray*}
&&Zg\left( JX,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) -g\left( JX,{\nabla }_{Z}Y\right) \\
&=&Xg\left( JZ,Y\right) -g\left( J{\nabla }_{X}Z,Y\right) -g\left( JZ,{\nabla }_{X}Y\right)
\end{eqnarray*}
\begin{equation*}
g\left( {\nabla }_{Z}^{\ast }\left( JX\right) ,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) =g\left( {\nabla }_{X}^{\ast }\left( JZ\right) ,\ Y\right) -g\left( J{\nabla }_{X}Z,Y\right)
\end{equation*}
\begin{eqnarray*}
&&g\left( {{\nabla }_{Z}^{\ast }}\left( JX\right) ,Y\right) -Zg\left(
X,JY\right) +g\left( X,{{\nabla }_{Z}^{\ast }}\left( JY\right) \right) \\
&=&\ g\left( {{\nabla }_{X}^{\ast }}\left( JZ\right) ,Y\right) -Xg\left(
Z,JY\right) +g\left( Z,{{\nabla }_{X}^{\ast }}\left( JY\right) \right)
\end{eqnarray*}
\begin{eqnarray*}
&&Zg\left( X,JY\right) -g\left( {{\nabla }_{Z}^{\ast }}\left( JX\right)
,Y\right) -g\left( X,{{\nabla }_{Z}^{\ast }}\left( JY\right) \right) \\
&=&Xg\left( Z,JY\right) -g\left( {{\nabla }_{X}^{\ast }}\left( JZ\right) ,\
Y\right) -g\left( Z,{{\nabla }_{X}^{\ast }}\left( JY\right) \right) .
\end{eqnarray*}
Putting $X=JX,\ Y=JY$ and $Z=JZ$ in the last relation, we find
\begin{eqnarray*}
&&JZg\left( JX,J(JY)\right) -g\left( {{\nabla }_{JZ}^{\ast }}\left(
J(JX)\right) ,JY\right) -g\left( JX,{{\nabla }_{JZ}^{\ast }}\left(
J(JY)\right) \right) \\
&=&JXg\left( JZ,J(JY)\right) -g\left( {{\nabla }_{JX}^{\ast }}\left(
J(JZ)\right) ,JY\right) -g\left( JZ,{{\nabla }_{JX}^{\ast }}\left(
J(JY)\right) \right)
\end{eqnarray*}
\begin{eqnarray*}
&&JZg\left( JX,Y\right) -g\left( {{\nabla }_{JZ}^{\ast }}X,JY\right)
-g\left( JX,{{\nabla }_{JZ}^{\ast }}Y\right) \\
&=&JXg\left( JZ,Y\right) -g\left( {{\nabla }_{JX}^{\ast }}Z,JY\right)
-g\left( JZ,{{\nabla }_{JX}^{\ast }}Y\right)
\end{eqnarray*}
\begin{eqnarray*}
&&JZG\left( X,Y\right) -G\left( {{\nabla }_{JZ}^{\ast }}X,Y\right) -G\left(
X,{{\nabla }_{JZ}^{\ast }}Y\right) \\
&=&JXG\left( Z,Y\right) -G\left( {{\nabla }_{JX}^{\ast }}Z,Y\right) -G\left(
Z,{{\nabla }_{JX}^{\ast }}Y\right)
\end{eqnarray*}
\begin{equation*}
\left( {\nabla }_{JZ}^{\ast }G\right) \left( X,Y\right) =\left( {\nabla }_{JX}^{\ast }G\right) \left( Z,Y\right) .
\end{equation*}
$iii)$ Let $T^{\nabla }$ and $T^{{\nabla }^{\ast }}$ be respectively the
torsion tensors of $\nabla $ and its $g-$conjugation $\nabla ^{\ast }$. We
calculate
\begin{equation*}
\left( {\nabla }_{Z}G\right) \left( X,Y\right) =\left( {\nabla }_{X}G\right)
\left( Z,Y\right)
\end{equation*}
\begin{eqnarray*}
&&Zg\left( JX,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) -g\left( JX,{\nabla }_{Z}Y\right) \\
&=&Xg\left( JZ,Y\right) -g\left( J{\nabla }_{X}Z,Y\right) -g\left( JZ,{\nabla }_{X}Y\right)
\end{eqnarray*}
\begin{eqnarray*}
&&g\left( {\nabla }_{Z}^{\ast }\left( JX\right) ,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) \\
&=&g\left( {\nabla }_{X}^{\ast }\left( JZ\right) ,Y\right) -g\left( J{\nabla }_{X}Z,Y\right)
\end{eqnarray*}
\begin{eqnarray*}
&&G\left( J^{-1}{\nabla }_{Z}^{\ast }\left( JX\right) ,Y\right) -G\left( {\nabla }_{Z}X,Y\right) \\
&=&G\left( J^{-1}{\nabla }_{X}^{\ast }\left( JZ\right) ,Y\right) -G\left( {\nabla }_{X}Z,Y\right)
\end{eqnarray*}
\begin{equation}
G\left( J^{-1}\left\{ {\nabla }_{Z}^{\ast }\left( JX\right) -{\nabla }_{X}^{\ast }\left( JZ\right) \right\} ,Y\right) =G\left( {\nabla }_{Z}X-{\nabla }_{X}Z,Y\right)  \label{GC1}
\end{equation}
from which we get
\begin{equation*}
J^{-1}\left\{ {\nabla }_{Z}^{\ast }\left( JX\right) -{\nabla }_{X}^{\ast }\left( JZ\right) \right\} ={\nabla }_{Z}X-{\nabla }_{X}Z
\end{equation*}
\begin{equation*}
J^{-1}\left\{ \left( {\nabla }_{Z}^{\ast }J\right) X+J{\nabla }_{Z}^{\ast }X-\left( {\nabla }_{X}^{\ast }J\right) Z-J{\nabla }_{X}^{\ast }Z\right\} ={\nabla }_{Z}X-{\nabla }_{X}Z
\end{equation*}
\begin{eqnarray*}
&&J^{-1}\left\{ \left( {\nabla }_{Z}^{\ast }J\right) X-\left( {\nabla }_{X}^{\ast }J\right) Z\right\} +\left( {\nabla }_{Z}^{\ast }X-{\nabla }_{X}^{\ast }Z-\left[ Z,X\right] \right) \\
&=&{\nabla }_{Z}X-{\nabla }_{X}Z-\left[ Z,X\right]
\end{eqnarray*}
\begin{equation*}
J^{-1}\left\{ \left( {\nabla }_{Z}^{\ast }J\right) X-\left( {\nabla }_{X}^{\ast }J\right) Z\right\} +T^{{\nabla }^{\ast }}\left( Z,X\right) =T^{\nabla }(Z,X).
\end{equation*}
This means that $T^{{\nabla }^{\ast }}\left( Z,X\right) =T^{\nabla }(Z,X)$
if and only if $\left( {\nabla }_{Z}^{\ast }J\right) X=\left( {\nabla }_{X}^{\ast }J\right) Z$.
$iv)$ From (\ref{GC1}), we can write
\begin{equation*}
G\left( {{\left( {\nabla }^{\ast }\right) }_{Z}^{J}}X-{{\left( {\nabla }^{\ast }\right) }_{X}^{J}}Z,Y\right) =\ G\left( {\nabla }_{Z}X-{\nabla }_{X}Z,Y\right)
\end{equation*}
\begin{equation*}
G(T^{{\left( {\nabla }^{\ast }\right) }^{J}}(Z,X),Y)=G(T^{\nabla }(Z,X),Y)
\end{equation*}
\begin{equation*}
T^{{\left( {\nabla }^{\ast }\right) }^{J}}(Z,X)=T^{\nabla }(Z,X).
\end{equation*}
\end{proof}
Now we shall state the following proposition without proof, because its
proof is similar to that of Proposition 2.10 in \cite{Zhang}.
\begin{proposition}
\noindent \label{pr2}Let $\nabla $ be a linear connection on $(M,J,g,G)$.
Then the following statements are equivalent:
$i)$ $(\nabla ,G)$ is a Codazzi pair,
$ii)$ $({\nabla }^{\dagger },G)$ is a Codazzi pair,
$iii)$ $F^{\dagger }\left( X,Y,Z\right) =\left( {\nabla }_{Z}^{\dagger
}G\right) \left( X,Y\right) $ is totally symmetric,
$iv)$ $T^{\nabla }=T^{{\nabla }^{\dagger }}.$
\end{proposition}
\noindent As a corollary to Propositions \ref{pr1} and \ref{pr2}, we obtain
the following conclusion.
\begin{corollary}
Let $(M,J,g,G)$ be an almost anti-Hermitian manifold. ${\nabla }^{\ast }$
and ${\nabla }^{\dagger }$ denote respectively the $g-$conjugation and $G-$conjugation of a linear connection $\nabla $ on $M$. If $(\nabla ,G)$ and ${\left( {\nabla }^{\ast },J\right) }$ are Codazzi pairs, then $T^{\nabla }=T^{{\nabla }^{\ast }}=T^{{\nabla }^{\dagger }}.$
\end{corollary}
\noindent
\subsection{The Codazzi pair $(\protect\nabla ,J)$}
\begin{proposition}
Let $\nabla $ be a linear connection on $(M,J,g,G)$. ${\nabla }^{\dagger }$
denotes the $G-$conjugation of $\nabla $ on $M$. Under the assumption that
$(\nabla ,G)$ is a Codazzi pair, $({\nabla }^{\dagger },J)$ is a Codazzi
pair if and only if $({\nabla ,g)}$ is so.
\end{proposition}
\begin{proof}
Using the definition of the $G-$conjugation and $T^{\nabla }=T^{{\nabla }^{\dagger }}$, we find
\begin{equation*}
G\left( ({\nabla }_{Z}^{\dagger }J)X-({\nabla }_{X}^{\dagger }J)Z,Y\right) =G({\nabla }_{Z}^{\dagger }JX-J{\nabla }_{Z}^{\dagger }X,Y)-G({\nabla }_{X}^{\dagger }JZ-J{\nabla }_{X}^{\dagger }Z,Y)
\end{equation*}
\begin{eqnarray*}
&=&ZG\left( JX,Y\right) -G\left( JX,{\nabla }_{Z}Y\right) -G\left( J{\nabla }_{Z}^{\dagger }X,Y\right) -XG\left( JZ,Y\right) \\
&&+G\left( JZ,{\nabla }_{X}Y\right) +G(J{\nabla }_{X}^{\dagger }Z,Y)
\end{eqnarray*}
\begin{eqnarray*}
&=&ZG\left( JX,Y\right) -G\left( JX,{\nabla }_{Z}Y\right) -XG\left( JZ,Y\right) +G\left( JZ,{\nabla }_{X}Y\right) \\
&&+G\left( J({\nabla }_{X}^{\dagger }Z-{\nabla }_{Z}^{\dagger }X-\left[ Z,X\right] )+J\left[ Z,X\right] ,Y\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&ZG\left( JX,Y\right) -G\left( JX,{\nabla }_{Z}Y\right) -XG\left( JZ,Y\right) +G\left( JZ,{\nabla }_{X}Y\right) \\
&&+G\left( J({\nabla }_{X}Z-{\nabla }_{Z}X-\left[ Z,X\right] )+J\left[ Z,X\right] ,Y\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&-Zg\left( X,Y\right) +g\left( X,{\nabla }_{Z}Y\right) +Xg\left( Z,Y\right) \\
&&-g\left( Z,{\nabla }_{X}Y\right) +g\left( {\nabla }_{Z}X,Y\right) -g({\nabla }_{X}Z,Y) \\
&=&({\nabla }_{X}g)(Z,Y)-({\nabla }_{Z}g)(X,Y).
\end{eqnarray*}
\end{proof}
Now we consider the ${\Phi }-$operator (or Tachibana operator \cite{Tachibana}) applied to the anti-Hermitian metric $g$:
\begin{equation}
({\Phi }_{J}g)(X,Y,Z)=(L_{JX}g-L_{X}(g\circ J))(Y,Z).  \label{GC3}
\end{equation}
Because of the fact that the twin metric $G$ on an almost anti-Hermitian
manifold $(M,J,g)$ is an anti-Hermitian metric, we can apply the ${\Phi }-$operator to the twin metric $G$ \cite{Salimov3}:
\begin{eqnarray}
({\Phi }_{J}G)(X,Y,Z) &=&(L_{JX}G-L_{X}(G\circ J))(Y,Z)  \label{GC4} \\
&=&({\Phi }_{J}g)(X,JY,Z)+g(N_{J}(X,Y),Z).  \notag
\end{eqnarray}
\begin{proposition}
\label{pr3}Let $\nabla $ be a torsion-free linear connection on $(M,J,g,G)$.
If $({\nabla },J)$ is a Codazzi pair, then
\begin{equation*}
({\Phi }_{J}G)(X,Y,Z)=({\Phi }_{J}g)(X,JY,Z)=\left( {\nabla }_{JX}G\right)
\left( Y,Z\right) -\left( {\nabla }_{X}g\right) \left( JY,JZ\right) .
\end{equation*}
\end{proposition}
\begin{proof}
Using ${\nabla }_{X}Z-{\nabla }_{Z}X=\left[ X,Z\right] $, from (\ref{GC3})
we get
\begin{equation*}
\left( {\Phi }_{J}g\right) \left( X,JY,Z\right) =(L_{JX}g-L_{X}(g\circ J))\left( JY,Z\right)
\end{equation*}
\begin{equation*}
=\left( L_{JX}g\right) \left( JY,Z\right) -\left( L_{X}(g\circ J)\right) \left( JY,Z\right)
\end{equation*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( L_{JX}JY,Z\right) -g\left( JY,L_{JX}Z\right) -X(g\circ J)\left( JY,Z\right) \\
&&+(g\circ J)\left( L_{X}JY,Z\right) +(g\circ J)\left( JY,L_{X}Z\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( \left[ JX,JY\right] ,Z\right) -g\left( JY,\left[ JX,Z\right] \right) -X(g\circ J)\left( JY,Z\right) \\
&&+(g\circ J)\left( \left[ X,JY\right] ,Z\right) +(g\circ J)\left( JY,\left[ X,Z\right] \right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( {\nabla }_{JX}JY-{\nabla }_{JY}JX,Z\right) -g\left( JY,{\nabla }_{JX}Z-{\nabla }_{Z}JX\right) \\
&&-X(g\circ J)\left( JY,Z\right) +(g\circ J)\left( {\nabla }_{X}JY-{\nabla }_{JY}X,Z\right) +(g\circ J)\left( JY,{\nabla }_{X}Z-{\nabla }_{Z}X\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( \left( {\nabla }_{JX}J\right) Y+J{\nabla }_{JX}Y-\left( {\nabla }_{JY}J\right) X-J{\nabla }_{JY}X,Z\right) \\
&&-g\left( JY,{\nabla }_{JX}Z-\left( {\nabla }_{Z}J\right) X-J{\nabla }_{Z}X\right) -Xg\left( JY,JZ\right) \\
&&+g\left( \left( {\nabla }_{X}J\right) Y+J{\nabla }_{X}Y-{\nabla }_{JY}X,JZ\right) +g\left( JY,J{\nabla }_{X}Z-J{\nabla }_{Z}X\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( \left( {\nabla }_{JX}J\right) Y,Z\right)
-g\left( J{\nabla }_{JX}Y,Z\right) \\
&&+g\left( \left( {\nabla }_{JY}J\right) X,Z\right) +g\left( J{\nabla
_{JY}X,Z\right) -g\left( JY,{\nabla }_{JX}Z\right) +g\left( JY,\left(
\nabla }_{Z}J\right) X\right) \\
&&+g\left( JY,J{\nabla }_{Z}X\right) -Xg\left( JY,JZ\right) +g\left( \left(
\nabla }_{X}J\right) Y,JZ\right) +g\left( J{\nabla }_{X}Y,JZ\right) \\
&&-g\left( {\nabla }_{JY}X,JZ\right) +g\left( JY,J{\nabla }_{X}Z\right)
-g\left( JY,J{\nabla }_{Z}X\right) .
\end{eqnarray*
By virtue of the purity of $g$ relative to $J$ and the Codazzi condition $({\nabla }_{Z}J)X=({\nabla }_{X}J)Z$, the last relation reduces to
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( \left( {\nabla }_{JX}J\right) Y,Z\right) -g\left( J{\nabla }_{JX}Y,Z\right) +g\left( \left( {\nabla }_{JY}J\right) X,Z\right) \\
&&+g\left( J{\nabla }_{JY}X,Z\right) -g\left( JY,{\nabla }_{JX}Z\right) +g\left( JY,\left( {\nabla }_{Z}J\right) X\right) \\
&&+g\left( JY,J{\nabla }_{Z}X\right) -Xg\left( JY,JZ\right) +g\left( \left( {\nabla }_{X}J\right) Y,JZ\right) \\
&&+g\left( J{\nabla }_{X}Y,JZ\right) -g\left( J{\nabla }_{JY}X,Z\right) +g\left( JY,J{\nabla }_{X}Z\right) -g\left( JY,J{\nabla }_{Z}X\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXg\left( JY,Z\right) -g\left( J{\nabla }_{JX}Y,Z\right) -g\left( JY,{\nabla }_{JX}Z\right) +g\left( JY,\left( {\nabla }_{Z}J\right) X\right) \\
&&-Xg\left( JY,JZ\right) +g\left( \left( {\nabla }_{X}J\right) Y,JZ\right) +g\left( J{\nabla }_{X}Y,JZ\right) +g\left( JY,J{\nabla }_{X}Z\right)
\end{eqnarray*}
\begin{eqnarray*}
&=&JXG\left( Y,Z\right) -G\left( {\nabla }_{JX}Y,Z\right) -G\left( Y,{\nabla }_{JX}Z\right) -Xg\left( JY,JZ\right) \\
&&+g\left( {\nabla }_{X}JY,JZ\right) +g\left( JY,{\nabla }_{X}JZ\right)
\end{eqnarray*}
\begin{equation}
=\left( {\nabla }_{JX}G\right) \left( Y,Z\right) -\left( {\nabla }_{X}g\right) \left( JY,JZ\right) .  \label{GC5}
\end{equation}
Relative to the torsion-free connection $\nabla $, the Nijenhuis tensor has
the following form:
\begin{equation*}
N_{J}\left( X,Y\right) =-J\{(\nabla _{JY}J)JX-(\nabla _{JX}J)JY\}+J\{(\nabla _{Y}J)X-(\nabla _{X}J)Y\}.
\end{equation*}
From here, it is easy to see that $N_{J}\left( X,Y\right) =0$ because $({\nabla },J)$
is a Codazzi pair. Hence, taking account of (\ref{GC4}) and (\ref{GC5}) we
have
\begin{equation*}
\left( {\Phi }_{J}G\right) \left( X,Y,Z\right) =\left( {\Phi }_{J}g\right)
\left( X,JY,Z\right) =\left( {\nabla }_{JX}G\right) \left( Y,Z\right)
-\left( {\nabla }_{X}g\right) \left( JY,JZ\right) .
\end{equation*}
\end{proof}
As is well known, the anti-K\"{a}hler condition ($\nabla ^{g}J=0$) is
equivalent to the $\mathbb{C}$-holomorphicity (analyticity) of the anti-Hermitian metric $g$, that is, ${\Phi }_{J}g=0$. If the anti-Hermitian metric $g$ is
$\mathbb{C}$-holomorphic, then the triple $(M,J,g)$ is an anti-K\"{a}hler manifold \cite{IscanSalimov:2009}.
\begin{theorem}
Let $\nabla $ be a torsion-free linear connection on $(M,J,g,G)$. Under the
assumption that $({\nabla },J)$ is a Codazzi pair, $(M,J,g,G)$ is an
anti-K\"{a}hler manifold if and only if the following condition is fulfilled:
\begin{equation*}
\left( {\nabla }_{JX}G\right) \left( Y,Z\right) =\left( {\nabla }_{X}g\right) \left( JY,JZ\right) .
\end{equation*}
\end{theorem}
\begin{proof}
The statement is a direct consequence of Proposition \ref{pr3}.
\end{proof}
\section{$J-$invariant Linear Connections}
Given an arbitrary linear connection $\nabla $ on an almost complex manifold
$(M,J)$, if the following condition is satisfied
\begin{equation*}
{\nabla }_{X}JY={J\nabla }_{X}Y
\end{equation*}
for any vector fields $X,Y$ on $M$, then $\nabla $ is called a $J-$invariant
linear connection on $M$.
\begin{proposition}
\label{pr4} Let $\nabla $ be a linear connection on $(M,J,g,G)$. ${\nabla }^{\ast }$ and ${\nabla }^{\dagger }$ denote respectively the $g-$conjugation and
$G-$conjugation of $\nabla $ on $M$. Then
$i)$ $\nabla $ is $J-$invariant if and only if ${\nabla }^{\ast }$ is so.
$ii)$ $\nabla $ is $J-$invariant if and only if ${\nabla }^{\dagger }$ is so.
\end{proposition}
\begin{proof}
$i)$ Using the definition of $g-$conjugation and the purity of $g$ relative
to $J$, we have
\begin{equation*}
G\left( {{\nabla }_{X}^{\ast }}JY-{J{\nabla }_{X}^{\ast }}Y,Z\right)
=g\left( {{\nabla }_{X}^{\ast }}JY,JZ\right) -{g(J{\nabla }_{X}^{\ast }}Y,JZ)
\end{equation*}
\begin{equation*}
=-Xg\left( Y,Z\right) -g\left( JY,{\nabla }_{X}JZ\right) +Xg\left(
Y,Z\right) -g\left( Y,{\nabla }_{X}Z\right)
\end{equation*}
\begin{equation*}
=-g\left( JY,{\nabla }_{X}\ JZ\right) +g\left( JY,J{\nabla }_{X}Z\right)
=-G\left( Y,{\nabla }_{X}JZ\right) +G\left( Y,J{\nabla }_{X}Z\right) .
\end{equation*}
Hence, ${{\nabla }_{X}^{\ast }}JY={J{\nabla }_{X}^{\ast }}Y$ if and only if
${\nabla }_{X}JZ=J{{\nabla }_{X}Z}$.
$ii)$ Similarly, we get
\begin{equation*}
G\left( {\nabla }_{X}^{\dagger }JY-J{\nabla }_{X}^{\dagger }Y,Z\right) =G\left( {\nabla }_{X}^{\dagger }JY,Z\right) -G(J{\nabla }_{X}^{\dagger }Y,Z)
\end{equation*}
\begin{equation*}
=XG\left( JY,Z\right) -G\left( JY,{\nabla }_{X}Z\right) -XG\left(
Y,JZ\right) +G\left( Y,{\nabla }_{X}\ JZ\right)
\end{equation*}
\begin{equation*}
=G\left( Y,{\nabla }_{X}JZ\right) -G\left( JY,{\nabla }_{X}Z\right) =G\left(
{\nabla }_{X}\ JZ-{J\nabla }_{X}Z,Y\right)
\end{equation*}
which gives the result.
\end{proof}
\begin{proposition}
\label{pr5}Let $\nabla $ be a $J-$invariant linear connection on $(M,J,g,G)$. ${\nabla }^{\ast }$ and ${\nabla }^{\dagger }$ denote respectively the $g-$conjugation and $G-$conjugation of $\nabla $ on $M$. The following
statements hold:
$i)$ ${\nabla }^{\dagger }$ coincides with ${\nabla }^{\ast }$,
$ii)$ $(\nabla ,G)$ is a Codazzi pair if and only if $(\nabla ,g)$ is so.
\end{proposition}
\begin{proof}
$i)$ By the definition of the $g-$conjugation, the $G-$conjugation and the $J-$invariance, we have
\begin{equation*}
ZG\left( X,Y\right) =G\left( {\nabla }_{Z}X,Y\right) +G\left( X,{\nabla }_{Z}^{\dagger }Y\right)
\end{equation*}
\begin{equation*}
Zg\left( JX,Y\right) =g\left( J{\nabla }_{Z}X,Y\right) +g\left( JX,{\nabla }_{Z}^{\dagger }Y\right)
\end{equation*}
\begin{equation*}
Zg\left( JX,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) =g\left( JX,{\nabla }_{Z}^{\dagger }Y\right)
\end{equation*}
\begin{equation*}
Zg\left( JX,Y\right) -g\left( {\nabla }_{Z}JX,Y\right) =g\left( JX,{\nabla }_{Z}^{\dagger }Y\right)
\end{equation*}
\begin{equation*}
g\left( JX,{\nabla }_{Z}^{\ast }Y\right) =g\left( JX,{\nabla }_{Z}^{\dagger }Y\right) \iff {{\nabla }^{\ast }}={\nabla }^{\dagger }.
\end{equation*}
$ii)$ Using the purity of $g$ relative to $J$, we get
\begin{equation*}
\left( {\nabla }_{Z}G\right) \left( X,Y\right) =\left( {\nabla }_{X}G\right)
\left( Z,Y\right)
\end{equation*}
\begin{equation*}
Zg\left( JX,Y\right) -g\left( J{\nabla }_{Z}X,Y\right) -g\left( JX,{\nabla }_{Z}Y\right) =Xg\left( JZ,Y\right) -g\left( J{\nabla }_{X}Z,Y\right) -g\left( JZ,{\nabla }_{X}Y\right)
\end{equation*}
\begin{equation*}
Zg\left( X,JY\right) -g\left( {\nabla }_{Z}X,JY\right) -g\left( X,{J\nabla }_{Z}Y\right) =Xg\left( Z,JY\right) -g\left( {\nabla }_{X}Z,JY\right) -g\left( Z,{J\nabla }_{X}Y\right)
\end{equation*}
\begin{equation*}
Zg\left( X,JY\right) -g\left( {\nabla }_{Z}X,JY\right) -g\left( X,{\nabla }_{Z}JY\right) =Xg\left( Z,JY\right) -g\left( {\nabla }_{X}Z,JY\right) -g\left( Z,{\nabla }_{X}JY\right)
\end{equation*}
\begin{equation*}
\left( {\nabla }_{Z}g\right) \left( X,JY\right) =\left( {\nabla }_{X}g\right) \left( Z,JY\right) .
\end{equation*}
\end{proof}
For the moment, we consider a torsion-free linear connection $\nabla $ on a
pseudo-Riemannian manifold $(M,g)$. In this case, if $(\nabla ,g)$ is a
Codazzi pair, which characterizes what is known to information geometers as
a statistical structure, then the manifold $M$ together with the statistical
structure $(\nabla ,g)$ is called a statistical manifold. The notion of a
statistical manifold was originally introduced by Lauritzen \cite{Lauritzen}. Statistical manifolds are widely studied in affine differential geometry
\cite{Lauritzen,Nomizu} and play a central role in information geometry.
\begin{theorem}
Let $\nabla $ be a $J-$invariant torsion-free linear connection on
$(M,J,g,G)$. ${\nabla }^{\dagger }$ and ${\nabla }^{\ast }$ denote
respectively the $G-$conjugation and $g-$conjugation of $\nabla $ on $M$.
If $({\nabla },G)$ is a statistical structure, then the following statements
hold:
$i)$ $({\nabla }^{\dagger },g)$ is a statistical structure,
$ii)$ $\left( \nabla ,g\right) $ is a statistical structure,
$iii)$ $({\nabla }^{\ast },g)$ is a statistical structure.
Conversely, if any one of the statements $i)-iii)$ is satisfied, then $({\nabla },G)$ is a statistical structure.
\end{theorem}
\begin{proof}
The result comes directly from Propositions \ref{pr4} and \ref{pr5}.
\end{proof}
\begin{theorem}
Let $\nabla $ be a $J-$invariant torsion-free linear
connection on $(M,J,g,G)$. ${\nabla }^{\ast }$ denotes the $g-$conjugation of
$\nabla $ on $M$. $({\nabla },G)$ is a statistical structure if and only if
$({\nabla }^{\ast },G)$ is so.
\end{theorem}
\begin{proof}
The result immediately follows from Proposition \ref{pr1}, using the
condition of $\nabla $ being $J-$invariant.
\end{proof}
\section{Introduction}
Integrability provides powerful tools for some special non-linear differential equations. An indicative example, which will occupy our interest, is provided by the equations of motion of NLSMs on symmetric spaces. The equations of motion of the NLSM can be obtained as the compatibility condition of an auxiliary system, which is a linear system of first order differential equations. The dressing method \cite{Zakharov:1973pp,Zakharov:1980ty} takes advantage of gauge transformations of the auxiliary system, which can be performed in a trivial manner, in order to construct new non-trivial solutions of the NLSM. The implementation of the dressing method relies on the existence of a known solution, which is called the seed solution. In order to obtain a dressed solution of the NLSM, one has to solve the auxiliary system and impose appropriate constraints. The latter ensure that the dressed solution belongs to the symmetric space of the NLSM. Once this task is performed, a whole tower of solutions of the NLSM can be constructed algebraically.
Another aspect of integrability is related to the fact that the embedding equations of the solution of the NLSM in the target space, which is in turn embedded in an enhanced flat space, are in fact multicomponent generalizations of the sine-Gordon equation, the so-called Symmetric Space Sine-Gordon models (SSSGs). This is known as the Pohlmeyer reduction \cite{Pohlmeyer:1975nb,Lund:1976ze,Miramontes:2008wt}. An important implication of the Pohlmeyer reduction is the fact that the equations of motion of the NLSM become linear, given a solution of the Pohlmeyer reduced theory. Notice that these linear equations do not provide the general solution of the NLSM, but only the solutions that correspond to the given Pohlmeyer counterpart. It is significant that the Pohlmeyer reduction is a many-to-one mapping. There is a whole family of solutions of the NLSM that corresponds to the same Pohlmeyer field. In what follows, the term ``family" always refers to this set of solutions.
The SSSGs possess \Backlund transformations \cite{Bakas:1995bm}, which are the analogue of the dressing transformations of the NLSMs. These are sets of first order differential equations that interrelate different pairs of solutions of the SSSGs. It has been shown that dressing transformations are equivalent to \Backlund transformations of the Pohlmeyer reduced theory \cite{Hollowood:2009tw}. Solutions obtained by \Backlund transformations of the same seed solution can be combined algebraically using addition formulas in order to create new ones, see e.g. \cite{Park:1995np}. In order to construct new solutions algebraically, a set of first order differential equations has to be solved: in the case of the NLSM these are the equations of the auxiliary system, whereas in the Pohlmeyer reduced theory they are the \Backlund transformations.
In \cite{Spradlin:2006wk} the dressing method was applied to the simplest possible seed, the BMN particle \cite{Berenstein:2002jq}, in order to produce the Giant Magnons \cite{Hofman:2006xt}. In our recent work \cite{Katsinis:2018ewd} we applied the dressing method to elliptic string solutions in $\mathbb{R}\times \mathrm{S}^2$, discussed in \cite{Katsinis:2018zxi}, using a mapping between $\mathrm{S}^2$ and the coset $\mathrm{SO}(3)/\mathrm{SO}(2)$. The auxiliary system in this case was solved in a systematic way. This was achieved by incorporating an appropriate parametrization, which implements the dressing transformation through its effect on the identity element of the coset, along with special properties of the elliptic seeds.
Motivated by the systematic solution of the auxiliary system for the elliptic seeds, in this work, we obtain a formal solution of the auxiliary system for an arbitrary seed. This formal solution is expressed in terms of a specific element of the family of the seed. This implies that the particular NLSM has a more fundamental property, which is a non-linear superposition rule. The dressing method is exactly the implementation of this non-linear superposition rule.
The structure of the paper is as follows: In section \ref{sec:strings_s2} we review basic elements of the NLSM that describes strings propagating on $\mathbb{R}\times \mathrm{S}^2$. In section \ref{sec:dressing} we discuss the dressing method and solve the auxiliary system for an arbitrary seed. In section \ref{sec:discussion} we discuss our results. There are also some appendices. Appendices \ref{sec:Normalization} and \ref{sec:remaining} contain some technical details on the derivation of the solution of the auxiliary system, while in appendix \ref{sec:dressing_factor} the construction of the simplest dressing factor is presented and the equivalence of the corresponding dressing transformation to the Pohlmeyer reduced theory is discussed.
\section{Strings in $\mathbb{R}\times \mathrm{S}^2$}
\label{sec:strings_s2}
In this section, we introduce the conventions that are used throughout the text and discuss the equations that describe the propagation of strings on $\mathbb{R}\times \mathrm{S}^2$. Our convention for the world-sheet metric is $\mathrm{diag}(-1,1)$, while the light-cone world-sheet coordinates are defined as $\xi^\pm=\left(\xi^1\pm\xi^0\right)/2$. The last relation implies $\partial_\pm=\partial_1\pm\partial_0$. Additionally, we set the radius of $\textrm{S}^2$ equal to one. We consider $\textrm{S}^2$ as being embedded in a flat three dimensional space and denote the corresponding embedding functions as $\vec{X}$. The NLSM action reads
\begin{equation}\label{eq:action}
S=T\int d\xi^+d\xi^-\left[-\partial_+ X^0\partial_- X^0+\partial_+\vec{X}\cdot\partial_-\vec{X}+\nu\left(\vec{X}\cdot\vec{X}-1\right)\right],
\end{equation}
where $T$ is the string tension and $\nu$ is the Lagrange multiplier, which ensures that the string propagates on the sphere surface. The equations of motion read
\begin{align}
\partial_+\partial_-X^0&=0,\\
\partial_+\partial_-\vec{X}&=\nu\vec{X}.
\end{align}
The equation for the temporal component $X^0$ is trivially solved by
\begin{equation}
X^0(\xi^+,\xi^-)=f_+\left(\xi^+\right)+f_-\left(\xi^-\right).
\end{equation}
Taking advantage of the invariance under diffeomorphisms, we may select $\xi^\pm$ so that
\begin{equation}
f_\pm\left(\xi^\pm\right)=m_\pm\xi^\pm.
\end{equation}
This choice corresponds to a ``linear" gauge for the temporal component $X^0$, which is a generalization of the static gauge. By differentiating the geometric constraint $\vert \vec{X}\vert^2=1$ and using the equations of motion, we can determine the Lagrange multiplier, which reads
\begin{equation}
\nu=-\left(\partial_+\vec{X}\right)\cdot\left(\partial_-\vec{X}\right).
\end{equation}
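In more detail, acting with $\partial_+\partial_-$ on the geometric constraint and using the equations of motion yields
\begin{equation}
0=\frac{1}{2}\partial_+\partial_-\left(\vec{X}\cdot\vec{X}\right)=\left(\partial_+\vec{X}\right)\cdot\left(\partial_-\vec{X}\right)+\vec{X}\cdot\partial_+\partial_-\vec{X}=\left(\partial_+\vec{X}\right)\cdot\left(\partial_-\vec{X}\right)+\nu,
\end{equation}
where we used $\vec{X}\cdot\vec{X}=1$ in the last step.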
The action \eqref{eq:action} is supplemented by the Virasoro constraints, which read
\begin{equation}\label{eq:def_Vir}
\partial_\pm\vec{X}\cdot\partial_\pm\vec{X}=m^2_\pm.
\end{equation}
The embedding of the string in the enhanced flat space of $\textrm{S}^2$, which is $\mathbb{R}^3$, is controlled by a single degree of freedom, the Pohlmeyer field $a$, which is defined so that
\begin{equation}\label{eq:def_Pohl}
\partial_+\vec{X}\cdot\partial_-\vec{X}=m_+ m_-\cos a.
\end{equation}
The standard Pohlmeyer reduction implies that $a$ satisfies the sine-Gordon equation \cite{Pohlmeyer:1975nb}. In our conventions the latter reads
\begin{equation}\label{eq:SG}
\partial_+\partial_- a=-m_+ m_-\sin a,
\end{equation}
where $m_+m_-<0,$ so that $\xi^0$ is the time-like world-sheet coordinate and $\xi^1$ the space-like one. For details on the Pohlmeyer reduction without gauge fixing $f_+$ and $f_-$ see \cite{Katsinis:2018zxi}. Taking advantage of the Pohlmeyer field, the equations of motion of the NLSM read
\begin{equation}\label{eq:NLSM_EOM}
\partial_+\partial_-\vec{X}=-m_+m_-\cos a\vec{X}.
\end{equation}
Given a solution of the Pohlmeyer reduced theory, i.e. of the sine-Gordon equation, these equations are linear.
We introduce the usual spherical coordinates: the polar angle $\theta$ and the azimuthal angle $\phi$, where $\theta=0$ corresponds to the z-axis. Then, the equations of motion read
\begin{align}
\partial_0\left[\sin^2\theta\partial_0\phi\right]&=\partial_1\left[\sin^2\theta\partial_1\phi\right],\\
\partial_0^2\theta-\cos\theta\sin\theta\left(\partial_0\phi\right)^2&=\partial_1^2\theta-\cos\theta\sin\theta\left(\partial_1\phi\right)^2,
\end{align}
while the Virasoro constraints assume the form
\begin{equation}
\left(\partial_1\theta \pm \partial_0\theta\right)^2+\sin^2\theta\left(\partial_1\phi \pm \partial_0\phi\right)^2=m^2_\pm.
\end{equation}
Finally, the coordinates of the string are related to the Pohlmeyer field by
\begin{equation}
\left(\partial_1\theta\right)^2 - \left(\partial_0\theta\right)^2+\sin^2\theta\left[\left(\partial_1\phi\right)^2 - \left(\partial_0\phi\right)^2\right]=m_+m_-\cos a.
\end{equation}
This relation does not determine the sign of the Pohlmeyer field. We fix it by demanding
\begin{equation}\label{eq:rule_sin_Pohl}
\vec{X}\cdot\left(\partial_+\vec{X}\times\partial_-\vec{X}\right)=2\sin\theta\left[\partial_0\theta\partial_1\phi-\partial_1\theta\partial_0\phi\right]=m_+m_-\sin a.
\end{equation}
Combining the above, we obtain the following expressions that will be used extensively in what follows:
\begin{align}
\left(\partial_0\theta\right)^2+\sin^2\theta\left(\partial_0\phi\right)^2&=\frac{m_+^2}{4}+\frac{m_-^2}{4}-\frac{m_+m_-}{2}\cos a,\label{eq:rule00}\\
\left(\partial_1\theta\right)^2+\sin^2\theta\left(\partial_1\phi\right)^2&=\frac{m_+^2}{4}+\frac{m_-^2}{4}+\frac{m_+m_-}{2}\cos a,\label{eq:rule11}\\
\partial_0\theta\partial_1\theta+\sin^2\theta\partial_0\phi\partial_1\phi&=\frac{m_+^2}{4}-\frac{m_-^2}{4}.\label{eq:rule01}
\end{align}
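These expressions follow directly from the Virasoro constraints \eqref{eq:def_Vir} and the definition of the Pohlmeyer field \eqref{eq:def_Pohl}. Since $\partial_\pm=\partial_1\pm\partial_0$, the latter assume the form
\begin{align}
m_\pm^2&=\left\vert\partial_1\vec{X}\right\vert^2+\left\vert\partial_0\vec{X}\right\vert^2\pm2\,\partial_0\vec{X}\cdot\partial_1\vec{X},\\
m_+m_-\cos a&=\left\vert\partial_1\vec{X}\right\vert^2-\left\vert\partial_0\vec{X}\right\vert^2.
\end{align}
Solving this linear system for the bilinears $\left\vert\partial_0\vec{X}\right\vert^2$, $\left\vert\partial_1\vec{X}\right\vert^2$ and $\partial_0\vec{X}\cdot\partial_1\vec{X}$, and expressing them in spherical coordinates, yields \eqref{eq:rule00}, \eqref{eq:rule11} and \eqref{eq:rule01}.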
It is important to point out that the Pohlmeyer reduced theory depends only on the product $m_+m_-$. The Pohlmeyer reduction is a many-to-one mapping. For each solution of the Pohlmeyer reduced theory, there is a whole family of NLSM solutions\footnote{This family is an associate (Bonnet) family of world-sheets.}, which corresponds to the same product $m_+m_-$. Its members are parametrized by the ratio $m_+/m_-$, see e.g. \cite{Katsinis:2018zxi}.
\section{Dressed Strings in $\mathbb{R}\times \mathrm{S}^2$}
\label{sec:dressing}
The dressing method enables us to construct new solutions of a NLSM once a solution of the latter is known. We refer to this solution as the seed solution. Given the seed solution, we may obtain a new solution of the NLSM by solving a pair of first order equations, the so-called auxiliary system. This is considered a simpler task than solving the original equations of motion, which are non-linear and second order. We will not only verify this statement, but we will also obtain the solution of the auxiliary system, which corresponds to strings in $\mathbb{R}\times \mathrm{S}^2$, for an arbitrary seed.
The auxiliary system reads
\begin{equation}\label{eq:auxiliary_system}
\partial_\pm\Psi(\lambda)=\frac{1}{1\pm\lambda}\left(\partial_\pm g\right)g^{-1}\Psi(\lambda),
\end{equation}
where $\lambda$ is the spectral parameter, which is in general complex. The seed solution $X$ is mapped to an element of the coset $\mathrm{SO}(3)/\mathrm{SO}(2)$, which is denoted as $g$. The compatibility relation $\partial_+\partial_-\Psi=\partial_-\partial_+\Psi$, which ensures the local existence of a solution of the auxiliary system, implies that $g$ obeys the equations of motion $\partial_+\left(\left(\partial_-g\right)g^{-1}\right)+\partial_-\left(\left(\partial_+g\right)g^{-1}\right)=0$. The normalization of $\Psi(\lambda)$ is fixed as
\begin{equation}\label{eq:Psi_initial_condition}
\Psi(0)=g.
\end{equation}
The main idea of the dressing method is the fact that a gauge transformation of the auxiliary field
\begin{equation}
\Psi^\prime(\lambda)=\chi(\lambda)\Psi(\lambda),
\end{equation}
corresponds to a new, non-trivial element of the coset, namely
\begin{equation}
g^\prime=\chi(0)g,
\end{equation}
which is associated to a new string solution, via the inverse mapping. We refer the reader to \cite{Hollowood:2009tw,Katsinis:2018ewd} for more details on the dressing method.
The mapping from $\mathbb{R}^3,$ the enhanced space of $\mathrm{S}^2$, to the coset $\mathrm{SO}(3)/\mathrm{SO}(2),$ that is used, is
\begin{equation}\label{eq:g_mapping}
g=J\left(I-2XX^T\right),\qquad J=\left(I-2X_0X_0^T\right),
\end{equation}
where $X_0$ is a constant vector and $X^TX=X_0^TX_0=1.$ It is easy to show that $\left(I-2XX^T\right)^2=I,$ for any unit norm vector $X$, which implies that
\begin{equation}\label{eq:g_constraints_1}
gJ gJ=I,\qquad
g^T=g^{-1}.
\end{equation}
In addition, $g$ is real, i.e.
\begin{equation}
\bar{g}=g.\label{eq:g_constraints_2}
\end{equation}
Thus, $g$ is indeed an element of the coset $\mathrm{SO}(3)/\mathrm{SO}(2)$. On a more formal basis, starting with the group $\mathrm{SL}(3;\mathbb{C})$, the coset can be constructed using the following involutions
\begin{align}
\sigma_1\left(g\right)&=(g^\dagger)^{-1},\label{eq:inv_1}\\
\sigma_2\left(g\right)&=JgJ,\label{eq:inv_2}\\
\sigma_3\left(g\right)&=\bar{g}.\label{eq:inv_3}
\end{align}
Demanding invariance under the first involution restricts $g$ to $\mathrm{SU}(3)$. Setting $\sigma_2\left(g\right)=g^{-1}$ restricts $g$ further, to the coset $\mathrm{SU}(3)/\mathrm{U}(2)$. Finally, invariance under the last involution implies that $g$ is an element of the coset $\mathrm{SO}(3)/\mathrm{SO}(2)$. Applying the same involutions to the auxiliary system \eqref{eq:auxiliary_system} implies that the transformed $\Psi(\xi^0,\xi^1;\lambda)$ must belong to the set of solutions of the auxiliary system. The latter is generated by the right multiplication of a given solution with a constant matrix; in our discussion this solution is $\Psi(\xi^0,\xi^1;\lambda)$. Thus, the following constraints must be imposed\footnote{Equation \eqref{eq:Psi_inv} corresponds to the action of both involutions \eqref{eq:inv_1} and \eqref{eq:inv_3}.}:
\begin{align}
\Psi(\lambda)m_1(\lambda)&=\left(\Psi(\lambda)^T\right)^{-1},\label{eq:Psi_inv}\\
\Psi(\lambda)m_2(\lambda)&=gJ\Psi(1/\lambda)J,\label{eq:Psi_cos}\\
\Psi(\lambda)m_3(\lambda)&=\overline{\Psi(\bar{\lambda})}.\label{eq:Psi_real}
\end{align}
The matrices $m_i$ themselves are subject to constraints, which stem from the fact that the involutions satisfy $\sigma^2=I$. In particular, they obey
\begin{align}
m_1(\lambda)&=m_1^T(\lambda),\label{eq:constraint_m_inv}\\
m_2(\lambda)J m_2(1/\lambda) J&=I,\label{eq:constraint_m_cos}\\
m_3(\lambda)\bar{m}_3(\bar{\lambda})&=I.\label{eq:constraint_m_real}
\end{align}
In addition, since $\Psi(0)=g$, the matrices $m_1$ and $m_3$ must reduce to the identity matrix for $\lambda=0$, i.e.
\begin{equation}
m_1(0)=m_3(0)=I.\label{eq:constraint_m_0}
\end{equation}
These matrices are related to the so called reduction group \cite{Zakharov:1973pp,Mikhailov:1981us}. As we will show subsequently, the dressed string solution is not affected by the choice of these matrices.
\subsection{The Auxiliary System}
Having set the framework, we are ready to implement the dressing method. We follow the approach introduced in \cite{Katsinis:2018ewd}, parametrizing the seed as a rotation of a constant reference vector. In order to proceed we change the coordinates of the auxiliary system \eqref{eq:auxiliary_system} from the left- and right-moving coordinates $\xi^{\pm}$ to $\xi^0$ and $\xi^1$. The auxiliary system assumes the form
\begin{equation}
\partial_i\Psi(\lambda)=\left[\left(\tilde{\partial}_i g\right)g^{-1}\right]\Psi(\lambda),
\end{equation}
where $i=0,1$ and
\begin{equation}
\tilde{\partial}_{0/1}=\frac{1}{1-\lambda^2}\partial_{0/1}-\frac{\lambda}{1-\lambda^2}\partial_{1/0}.
\end{equation}
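One can verify this form directly: since $\partial_1=\left(\partial_++\partial_-\right)/2$ and $\partial_0=\left(\partial_+-\partial_-\right)/2$, equation \eqref{eq:auxiliary_system} implies
\begin{equation}
\partial_1\Psi=\frac{1}{2}\left[\frac{1}{1+\lambda}\left(\partial_+g\right)+\frac{1}{1-\lambda}\left(\partial_-g\right)\right]g^{-1}\Psi=\left[\frac{1}{1-\lambda^2}\left(\partial_1g\right)-\frac{\lambda}{1-\lambda^2}\left(\partial_0g\right)\right]g^{-1}\Psi
\end{equation}
and similarly for $\partial_0\Psi$.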
We express the seed solution of the NLSM as
\begin{equation}
X= U \hat{X},
\end{equation}
where $\hat{X}$ is a constant vector and $U$ is a rotation matrix. The element of the coset that corresponds to the seed solution can be expressed as
\begin{equation}\label{eq:ghat_definition}
g=J U J \hat{g}U^T,
\end{equation}
where
\begin{equation}\label{eq:ghat_mapping}
\hat{g}:=J\left(I-2\hat{X}\hat{X}^T\right).
\end{equation}
Notice that $\hat{g}$ is an element of the coset. In a similar manner, we define $\hat{\Psi}$ as
\begin{equation}\label{eq:psihat_definition}
\Psi=J U J\hat{\Psi}.
\end{equation}
The auxiliary system assumes the form
\begin{equation}
\partial_i\hat{\Psi}=\left\{J U^T \left[\left(\tilde{\partial}_i-\partial_i\right)U\right]J-\hat{g}U^T\left[\tilde{\partial}_iU\right]\hat{g}^T+\left[\tilde{\partial}_i\hat{g}\right]\hat{g}^T\right\}\hat{\Psi}.
\end{equation}
We select $X_0$ to be the unit norm vector along the z axis, i.e.
\begin{equation}
X_0=\begin{pmatrix}
0 \\ 0 \\ 1
\end{pmatrix},
\end{equation}
so that $J=\mathrm{diag}(1,1,-1)$. Moreover, the matrix $U$ can be selected so that $\hat{X}=X_0$. Thus, $\hat{g}$ becomes the identity element of the coset and the equations of the auxiliary system assume the form
\begin{equation}
\partial_i\hat{\Psi}=\left\{J U^T \left[\left(\tilde{\partial}_i-\partial_i\right)U\right]J-U^T\left[\tilde{\partial}_iU\right]\right\}\hat{\Psi},
\end{equation}
while the normalization \eqref{eq:Psi_initial_condition} reduces to
\begin{equation}\label{eq:Psi_hat_initial_condition}
\hat{\Psi}(0)=U^T.
\end{equation}
In addition, the constraints \eqref{eq:Psi_inv}, \eqref{eq:Psi_cos} and \eqref{eq:Psi_real} for $\Psi$, imply that $\hat{\Psi}$ is subject to the following constraints:
\begin{align}
\hat{\Psi}(\lambda)m_1(\lambda)&=\left(\hat{\Psi}(\lambda)^T\right)^{-1},\label{eq:Psi_hat_inv}\\
\hat{\Psi}(\lambda)m_2(\lambda)&=J\hat{\Psi}(1/\lambda)J,\label{eq:Psi_hat_cos}\\
\hat{\Psi}(\lambda)m_3(\lambda)&=\bar{\hat{\Psi}}(\bar{\lambda}).\label{eq:Psi_hat_real}
\end{align}
We express the matrix $U$ as $U=U_2U_1$, where
\begin{equation}
U_1=\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
- \sin\theta & 0 & \cos\theta
\end{pmatrix}, \qquad U_2= \begin{pmatrix}
\cos\phi & -\sin\phi & 0\\
\sin\phi & \cos\phi & 0\\
0 & 0 & 1
\end{pmatrix}.\label{eq:def_U}
\end{equation}
The auxiliary system can be expressed as
\begin{equation}\label{eq:auxiliary_psi_hat}
\partial_i\hat{\Psi}=\left(t_i^jT_j\right)\hat{\Psi},
\end{equation}
where $T_j$ are the generators of the group $\mathrm{SO}(3)$:
\begin{equation}
T_1=\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{pmatrix},\quad
T_2=\begin{pmatrix}
0 & 0 & 1\\
0 & 0 & 0\\
-1 & 0 & 0
\end{pmatrix},\quad
T_3=\begin{pmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}.\label{eq:generators_T}
\end{equation}
It is a matter of algebra to show that
\begin{align}
t_{0/1}^1&=\sin\theta\left(\frac{1+\lambda^2}{1-\lambda^2}\partial_{0/1}\phi- \frac{2\lambda}{1-\lambda^2}\partial_{1/0}\phi\right),\\
t_{0/1}^2&=-\frac{1+\lambda^2}{1-\lambda^2}\partial_{0/1}\theta+\frac{2\lambda}{1-\lambda^2}\partial_{1/0}\theta,\\
t_{0/1}^3&=-\cos\theta\partial_{0/1}\phi.
\end{align}
For later convenience we define
\begin{equation}
\vec{t}_i:=\begin{pmatrix}
t_i^1\\
t_i^2\\
t_i^3
\end{pmatrix},
\qquad
\vec{\tau}_i:=\vec{t}_i-\left(\vec{X}_0\cdot\vec{t}_i\right)\vec{X}_0=\begin{pmatrix}
t_i^1\\
t_i^2\\
0
\end{pmatrix}.\label{eq:def_vectors}
\end{equation}
Notice that
\begin{equation}
\frac{d}{d\lambda}\vec{t}_0=-\frac{2\lambda}{1-\lambda^2}\vec{\tau}_1,\qquad\frac{d}{d\lambda}\vec{t}_1=-\frac{2\lambda}{1-\lambda^2}\vec{\tau}_0\label{eq:der_lambda}.
\end{equation}
Under the inversion $\lambda\rightarrow 1/\lambda$, the quantities $\vec{\tau}_i$ and $t_i^3$ have the following parity properties:
\begin{align}\label{eq:tau_inv}
\vec{\tau}_i(1/\lambda)&=-\vec{\tau}_i(\lambda),\qquad t_i^3(1/\lambda)=t_i^3(\lambda).
\end{align}
In addition, all quantities are real functions of the complex spectral parameter, i.e.
\begin{align}\label{eq:t_reality}
\bar{\vec{t}}_i(\bar{\lambda})&=\vec{t}_i(\lambda).
\end{align}
The derivatives of $\vec{t}_i$ and $\vec{\tau}_i$ obey the following algebra
\begin{align}
\partial_1\vec{t}_0-\partial_0\vec{t}_1&=\vec{t}_1\times\vec{t}_0,\label{eq:der_t}\\
\partial_1\vec{\tau}_1-\partial_0\vec{\tau}_0&=\vec{\tau}_0\times\vec{t}_0+\vec{t}_1\times\vec{\tau}_1.\label{eq:der_tau}
\end{align}
Notice that \eqref{eq:der_tau} can be obtained from \eqref{eq:der_t} using \eqref{eq:der_lambda}.
Moreover, it is straightforward to show that:
\begin{align}
\vert\vec{\tau}_0\vert^2&=\frac{m_+^2}{4}\left(\frac{1-\lambda}{1+\lambda}\right)^2+\frac{m_-^2}{4}\left(\frac{1+\lambda}{1-\lambda}\right)^2-\frac{m_+m_-}{2}\cos a,\label{eq:t00}\\
\vert\vec{\tau}_1\vert^2&=\frac{m_+^2}{4}\left(\frac{1-\lambda}{1+\lambda}\right)^2+\frac{m_-^2}{4}\left(\frac{1+\lambda}{1-\lambda}\right)^2+\frac{m_+m_-}{2}\cos a,\label{eq:t11}\\
\vec{\tau}_0\cdot\vec{\tau}_1&=\frac{m_+^2}{4}\left(\frac{1-\lambda}{1+\lambda}\right)^2-\frac{m_-^2}{4}\left(\frac{1+\lambda}{1-\lambda}\right)^2.\label{eq:t01}
\end{align}
The careful reader will recognize that these relations are identical to \eqref{eq:rule00}, \eqref{eq:rule11} and \eqref{eq:rule01} upon the substitutions $\partial_i\vec{X}\rightarrow\vec{\tau}_i$ and
\begin{equation}\label{eq:m_lambda}
m^2_\pm\rightarrow m^2_\pm \left(\frac{1\mp\lambda}{1\pm\lambda}\right)^2.
\end{equation}
This fact will be crucial in what follows. In addition, one may obtain
\begin{equation}\label{eq:cross_01}
\vec{\tau}_0\times\vec{\tau}_1=\frac{1}{2}m_+m_-\sin a\vec{X}_0,
\end{equation}
which is analogous to \eqref{eq:rule_sin_Pohl}.
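As an independent check, the relations \eqref{eq:t00}, \eqref{eq:t11} and \eqref{eq:t01} can be verified symbolically. The following is a minimal sketch using \texttt{sympy}; the symbol names are arbitrary and not part of the text. It expresses the bilinears of $\vec{\tau}_0$ and $\vec{\tau}_1$ in terms of the seed bilinears appearing in \eqref{eq:rule00}, \eqref{eq:rule11} and \eqref{eq:rule01}, and verifies the $\lambda$-dependent factors that implement the substitution \eqref{eq:m_lambda}.

```python
import sympy as sp

# spectral parameter and generic world-sheet data of the seed
lam, th = sp.symbols('lam theta')
th0, th1, ph0, ph1 = sp.symbols('th0 th1 ph0 ph1')  # d_0 theta, d_1 theta, d_0 phi, d_1 phi

A = (1 + lam**2) / (1 - lam**2)
B = 2 * lam / (1 - lam**2)

# tau_0 and tau_1, cf. the components t_i^1 and t_i^2
tau0 = sp.Matrix([sp.sin(th) * (A * ph0 - B * ph1), -(A * th0 - B * th1), 0])
tau1 = sp.Matrix([sp.sin(th) * (A * ph1 - B * ph0), -(A * th1 - B * th0), 0])

# bilinears of the seed, i.e. the left-hand sides of the "rule" relations
R00 = th0**2 + sp.sin(th)**2 * ph0**2
R11 = th1**2 + sp.sin(th)**2 * ph1**2
R01 = th0 * th1 + sp.sin(th)**2 * ph0 * ph1

# |tau_0|^2, |tau_1|^2 and tau_0.tau_1 as combinations of the seed bilinears
assert sp.simplify(tau0.dot(tau0) - (A**2 * R00 + B**2 * R11 - 2 * A * B * R01)) == 0
assert sp.simplify(tau1.dot(tau1) - (A**2 * R11 + B**2 * R00 - 2 * A * B * R01)) == 0
assert sp.simplify(tau0.dot(tau1) - ((A**2 + B**2) * R01 - A * B * (R00 + R11))) == 0

# the lambda-dependent factors dressing m_+^2 and m_-^2
assert sp.simplify((A - B)**2 - ((1 - lam) / (1 + lam))**2) == 0
assert sp.simplify((A + B)**2 - ((1 + lam) / (1 - lam))**2) == 0
assert sp.simplify(A**2 - B**2 - 1) == 0
```

Combining the verified decompositions with the seed relations and the identities $\left(A\mp B\right)^2=\left(\frac{1\mp\lambda}{1\pm\lambda}\right)^2$, $A^2-B^2=1$ reproduces \eqref{eq:t00}, \eqref{eq:t11} and \eqref{eq:t01}.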
Finally, the expressions of $U_1$ and $U_2$, which are given by \eqref{eq:def_U}, imply that the condition \eqref{eq:Psi_hat_initial_condition} assumes the form
\begin{equation}\label{eq:Psi_hat_initial_condition_explicit}
\hat{\Psi}(0)=\begin{pmatrix}
\cos\theta\cos\phi & \cos\theta\sin\phi & -\sin\theta \\
-\sin\phi & \cos\phi & 0 \\
\sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta
\end{pmatrix}.
\end{equation}
\subsection{The Solution of the Auxiliary System}
The auxiliary system \eqref{eq:auxiliary_psi_hat} comprises three independent, identical pairs of equations, one for each column of $\hat{\Psi}$, which we denote as\footnote{In this notation $\hat{\Psi}=\begin{pmatrix}\vec{\hat{\Psi}}_1 & \vec{\hat{\Psi}}_2 & \vec{\hat{\Psi}}_3\end{pmatrix}$.} $\vec{\hat{\Psi}}_j$. In particular, each column obeys the equations
\begin{equation}\label{eq:auxialiary_vector_psi_hat}
\partial_i\vec{\hat{\Psi}}_j=\vec{t}_i\times\vec{\hat{\Psi}}_j,
\end{equation}
where $j=1,2,3.$ Let us consider the inner product of two arbitrary solutions of this system of equations. It is straightforward to show that
\begin{equation}\label{eq:d_inner_product}
\partial_i\left[\vec{\hat{\Psi}}_j\cdot\vec{\hat{\Psi}}_k\right]=\left(\vec{t}_i\times\vec{\hat{\Psi}}_j\right)\cdot\vec{\hat{\Psi}}_k+\vec{\hat{\Psi}}_j\cdot\left(\vec{t}_i\times\vec{\hat{\Psi}}_k\right)=0.
\end{equation}
This proves that the constraint \eqref{eq:Psi_hat_inv}, which implies $\vec{\hat{\Psi}}_j\cdot\vec{\hat{\Psi}}_k=\left(m^{-1}_1(\lambda)\right)_{jk}$, is compatible with the equations of the auxiliary system\footnote{We remind the reader that the matrix $m_1(\lambda)$ is symmetric due to \eqref{eq:constraint_m_inv}.}. The system \eqref{eq:auxialiary_vector_psi_hat} has three linearly independent solutions. For some given $\xi^0$ and $\xi^1$, we may specify linear combinations of these solutions, which we denote as $\vec{\hat{V}}_j$, that form an orthonormal basis. Due to the linearity of the equations \eqref{eq:auxialiary_vector_psi_hat}, $\vec{\hat{V}}_j$ satisfy
\begin{equation}\label{eq:auxialiary_vector}
\partial_i\vec{\hat{V}}_j=\vec{t}_i\times\vec{\hat{V}}_j.
\end{equation}
Then, equation \eqref{eq:d_inner_product} implies that these vectors form an orthonormal basis for any $\xi^0$ and $\xi^1$, i.e.
\begin{equation}\label{eq:con_orthonormal}
\vec{\hat{V}}_j\cdot\vec{\hat{V}}_k=\delta_{jk}.
\end{equation}
We will solve \eqref{eq:auxialiary_vector} by projecting it on linearly independent directions, namely $\vec{\hat{V}}_j$, $\vec{X}_0$ and $\vec{X}_0\times\vec{\tau}_i$. Obviously, the equations obtained by the projection of \eqref{eq:auxialiary_vector} along $\vec{\hat{V}}_j$ are redundant, since they are equivalent to the constraint \eqref{eq:con_orthonormal}.
Recognizing that the third components of $\vec{\hat{V}}_j$ are special will enable us to solve the rest of the equations of the auxiliary system, as well as the constraints. This is due to the fact that $\vec{X}_0$ is parallel to the third axis. These components obey the same equations of motion as the embedding functions of the string solution \eqref{eq:NLSM_EOM}, i.e.
\begin{equation}\label{eq:V_3_eom}
\partial_1^2\hat{V}_j^3-\partial_0^2\hat{V}_j^3=-m_+m_-\cos a \hat{V}_j^3.
\end{equation}
Let us prove this statement. The auxiliary system \eqref{eq:auxialiary_vector} implies that
\begin{multline}
\partial_1^2 \hat{V}_j^3-\partial_0^2 \hat{V}_j^3= \\ \left[\vec{X}_0\times\left(\partial_1 \vec{\tau}_1-\partial_0 \vec{\tau}_0\right)\right]\cdot\vec{\hat{V}}_j+\left[\vec{t}_1\times\left[\vec{t}_1\times\vec{\hat{V}}_j\right]\right]\cdot \vec{X}_0-\left[\vec{t}_0\times\left[\vec{t}_0\times\vec{\hat{V}}_j\right]\right]\cdot \vec{X}_0.
\end{multline}
Taking the equation \eqref{eq:der_tau}, as well as \eqref{eq:def_vectors}, into account, it is easy to show that
\begin{equation}
\partial_1^2 \hat{V}_j^3-\partial_0^2 \hat{V}_j^3=-\left(\vert\vec{\tau}_1\vert^2-\vert\vec{\tau}_0\vert^2\right)\hat{V}_j^3.
\end{equation}
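The intermediate steps involve only the identity $\vec{a}\times\left(\vec{b}\times\vec{c}\right)=\vec{b}\left(\vec{a}\cdot\vec{c}\right)-\vec{c}\left(\vec{a}\cdot\vec{b}\right)$ and the decomposition $\vec{t}_i=\vec{\tau}_i+t_i^3\vec{X}_0$, which imply
\begin{equation}
\left[\vec{t}_i\times\left[\vec{t}_i\times\vec{\hat{V}}_j\right]\right]\cdot \vec{X}_0=t_i^3\left(\vec{t}_i\cdot\vec{\hat{V}}_j\right)-\left(\vert\vec{\tau}_i\vert^2+\left(t_i^3\right)^2\right)\hat{V}_j^3,
\end{equation}
as well as $\vec{X}_0\times\left(\partial_1\vec{\tau}_1-\partial_0\vec{\tau}_0\right)=t_0^3\vec{\tau}_0-t_1^3\vec{\tau}_1$, which follows from \eqref{eq:der_tau}. In the difference of the two projections, all terms containing $t_i^3$ cancel.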
It is important that the component of $\vec{\hat{V}}_j$, which is parallel to $\vec{X}_0$, namely $\hat{V}_j^3$, obeys a second order equation that is \emph{decoupled}, i.e. it does not contain the other components. Moreover, equations \eqref{eq:t00} and \eqref{eq:t11} trivially imply that this relation assumes the form of \eqref{eq:V_3_eom}.
It is evident that we have to single out $\hat{V}_j^3$ and use the auxiliary system in order to express the other two components $\hat{V}_j^1$ and $\hat{V}_j^2$ in terms of the former. The projection of the auxiliary system \eqref{eq:auxialiary_vector} on the direction of $\vec{X}_0$ reads
\begin{equation}
\partial_i\hat{V}_j^3=\left[\vec{X}_0\times\vec{\tau}_i\right]\cdot\vec{\hat{V}}_j,
\end{equation}
since $\vec{X}_0\times\vec{t}_i=\vec{X}_0\times\vec{\tau}_i$. This implies that the column $\vec{\hat{V}}_j$ has the following form
\begin{equation}\label{eq:column_form}
\vec{\hat{V}}_j= \frac{\vec{\tau}_1}{\left(\vec{\tau}_0\times\vec{\tau}_1\right)\cdot\vec{X}_0}\partial_0 \hat{V}_j^3- \frac{\vec{\tau}_0}{\left(\vec{\tau}_0\times\vec{\tau}_1\right)\cdot\vec{X}_0}\partial_1 \hat{V}_j^3+\hat{V}_j^3\vec{X}_0.
\end{equation}
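The coefficients in \eqref{eq:column_form} are fixed by the projections. As long as $\sin a\neq0$, equation \eqref{eq:cross_01} implies that $\vec{\tau}_0$, $\vec{\tau}_1$ and $\vec{X}_0$ are linearly independent and, for instance,
\begin{equation}
\left[\vec{X}_0\times\vec{\tau}_0\right]\cdot\vec{\hat{V}}_j=\frac{\left[\vec{X}_0\times\vec{\tau}_0\right]\cdot\vec{\tau}_1}{\left(\vec{\tau}_0\times\vec{\tau}_1\right)\cdot\vec{X}_0}\partial_0\hat{V}_j^3=\partial_0\hat{V}_j^3,
\end{equation}
in agreement with the projection of the auxiliary system on $\vec{X}_0$; the projection on $\vec{X}_0\times\vec{\tau}_1$ works similarly.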
Thus, the components $\hat{V}_j^3$ completely specify the solution $\vec{\hat{V}}_j$.
Finally, we may obtain another pair of independent equations by projecting the auxiliary system \eqref{eq:auxialiary_vector} on $\vec{X}_0\times\vec{\tau}_i$. After some simple algebraic manipulations, this yields
\begin{equation}\label{eq:last_eq}
\partial^2_i \hat{V}_j^3=\left(\partial_i\vec{\tau}_i-\vec{t}_i\times\vec{\tau}_i\right)\cdot\left[\vec{\hat{V}}_j\times\vec{X}_0\right]-\vert\vec{\tau}_i\vert^2 \hat{V}_j^3.
\end{equation}
By virtue of \eqref{eq:der_tau}, the difference of this equation for $i=0$ and $i=1$ is trivially equation \eqref{eq:V_3_eom}. Thus, the vector $\vec{\hat{V}}_j$ has the form of equation \eqref{eq:column_form}, where $\hat{V}_j^3$ obeys equations \eqref{eq:V_3_eom} and \eqref{eq:last_eq} for either value of $i$. In addition, one has to impose the constraints \eqref{eq:con_orthonormal}.
We continue the analysis taking advantage of the latter. As a direct consequence of equation \eqref{eq:con_orthonormal}, it holds that $\sum_j\left(\hat{V}^3_j\right)^2=1$. Therefore, with equation \eqref{eq:Psi_hat_initial_condition_explicit} in mind, we may define
\begin{align}
\hat{V}^3_1&=\sin\Theta\cos\Phi,\\
\hat{V}^3_2&=\sin\Theta\sin\Phi,\\
\hat{V}^3_3&=\cos\Theta,
\end{align}
where $\Theta=\Theta\left(\xi^0,\xi^1;\lambda\right)$ and $\Phi=\Phi\left(\xi^0,\xi^1;\lambda\right)$ will be specified by the various equations and constraints. It is obvious that the condition \eqref{eq:Psi_hat_initial_condition_explicit} is equivalent to the fact that for $\lambda=0$ these functions should reduce to the coordinates of the seed solution, i.e.
\begin{equation}\label{eq:initial_condition_theta_phi}
\Theta\vert_{\lambda=0}=\theta,\qquad \Phi\vert_{\lambda=0}=\phi.
\end{equation}
So far, we know that the functions $\Theta$ and $\Phi$ could describe a solution of the NLSM, which has the same Pohlmeyer counterpart as the seed solution. We turn to the condition that $\vec{\hat{V}}_j$ should form an orthonormal basis, as suggested by \eqref{eq:con_orthonormal}. This condition allows us to obtain the analogue of the Virasoro constraints, which are obeyed by $\Theta$ and $\Phi$. We derive them in appendix \ref{sec:Normalization}. They read
\begin{align}
\left(\partial_0\Theta\right)^2+\sin^2\Theta\left(\partial_0\Phi\right)^2&=\vert\vec{\tau}_0\vert^2=\frac{m_+^2}{4}\frac{\left(1-\lambda\right)^2}{\left(1+\lambda\right)^2}+\frac{m_-^2}{4}\frac{\left(1+\lambda\right)^2}{\left(1-\lambda\right)^2}-\frac{m_+m_-}{2}\cos a,\label{eq:Vir_00_lambda}\\
\left(\partial_1\Theta\right)^2+\sin^2\Theta\left(\partial_1\Phi\right)^2&=\vert\vec{\tau}_1\vert^2=\frac{m_+^2}{4}\frac{\left(1-\lambda\right)^2}{\left(1+\lambda\right)^2}+\frac{m_-^2}{4}\frac{\left(1+\lambda\right)^2}{\left(1-\lambda\right)^2}+\frac{m_+m_-}{2}\cos a,\label{eq:Vir_11_lambda}\\
\partial_0\Theta\partial_1\Theta+\sin^2\Theta\partial_0\Phi\partial_1\Phi&=\vec{\tau}_0\cdot\vec{\tau}_1=\frac{m_+^2}{4}\frac{\left(1-\lambda\right)^2}{\left(1+\lambda\right)^2}-\frac{m_-^2}{4}\frac{\left(1+\lambda\right)^2}{\left(1-\lambda\right)^2}.\label{eq:Vir_01_lambda}
\end{align}
In appendix \ref{sec:remaining}, we show that equation \eqref{eq:last_eq} is satisfied without any further constraints on $\Theta$ and $\Phi$.
Therefore, the triplet which is composed by the third components of the vectors $\vec{\hat{V}}_j$ obeys:
\begin{enumerate}
\item The normalization $\sum_j\left(\hat{V}^3_j\right)^2=1$, which is analogous to the geometric constraint $\vert\vec{X}\vert^2=1$ that defines $\mathrm{S}^2$. This justifies the definition of $\Theta$ and $\Phi$ in the same fashion as $\theta$ and $\phi$ in the original NLSM.
\item Equations \eqref{eq:Vir_00_lambda}, \eqref{eq:Vir_11_lambda} and \eqref{eq:Vir_01_lambda}, which are identical to equations \eqref{eq:rule00}, \eqref{eq:rule11} and \eqref{eq:rule01} upon the substitution \eqref{eq:m_lambda}. It is important that this transformation leaves the product $m_+m_-$ invariant. This implies that the triplet $\hat{V}^3_j$ obeys the same ``Virasoro'' constraints as the seed, but with different constants $m_\pm$, and that it has the same ``Pohlmeyer counterpart'' as the seed solution.
\item Equation \eqref{eq:V_3_eom}, which is identical to the equations of motion \eqref{eq:NLSM_EOM} that are obeyed by the components of the original seed solution with a given Pohlmeyer counterpart.
\end{enumerate}
Thus, following the discussion at the end of section \ref{sec:strings_s2}, the triplet $\hat{V}^3_j$ is given by \emph{the member of the family of the seed which corresponds to the ratio }
\begin{equation}
\frac{m_+}{m_-}\left(\lambda\right)=\left(\frac{1-\lambda}{1+\lambda}\right)^2\frac{m_+}{m_-}.\label{eq:m_of_lambda}
\end{equation}
However, since $\lambda$ is in general complex, one is not restricted to the real solutions of the family of the seed; in general, one has to consider their analytic continuation.
Obviously, for $\lambda=0$, equations \eqref{eq:Vir_00_lambda}, \eqref{eq:Vir_11_lambda} and \eqref{eq:Vir_01_lambda} reduce to the relevant equations of the seed solution. One may be tempted to conclude that this ``virtual'' solution of the NLSM reduces to the seed itself; however, this holds only up to a global rotation. To ensure that no such global rotation is involved, so that the condition \eqref{eq:initial_condition_theta_phi} is satisfied, one has to apply \eqref{eq:m_lambda} directly to the coordinates of the seed solution.
Let $\hat{V}(\lambda)$ be the matrix whose columns are the three orthonormal solutions $\vec{\hat{V}}_j$ of the system \eqref{eq:auxialiary_vector} that we constructed above. Taking into account the freedom of the right multiplication of a solution of \eqref{eq:auxiliary_psi_hat} with a constant matrix $C(\lambda)$, we consider the whole class of solutions of the auxiliary system
\begin{equation}\label{eq:definition_C}
\hat{\Psi}(\lambda)=\hat{V}(\lambda) C(\lambda).
\end{equation}
Obviously, equation \eqref{eq:con_orthonormal} implies
\begin{equation}\label{eq:V_Orth}
\hat{V}^T(\lambda)=\hat{V}^{-1}(\lambda).
\end{equation}
Equation \eqref{eq:tau_inv} implies that the matrix $\hat{V}$ transforms under the inversion of $\lambda$ as
\begin{equation}\label{eq:V_Coset}
\hat{V}\left(1/\lambda\right)=-J \hat{V}\left(\lambda\right)M\left(\lambda\right),
\end{equation}
where the matrix $M$ represents the transformation of $\hat{V}_j^3$ under $\lambda\rightarrow 1/\lambda$. Since $\hat{V}_j^3(\lambda)$ satisfy the equations of motion \eqref{eq:V_3_eom}, so does $\hat{V}_j^3(1/\lambda)$. In addition, $\hat{V}_j^3(1/\lambda)$ obeys equations \eqref{eq:Vir_00_lambda}, \eqref{eq:Vir_11_lambda} and \eqref{eq:Vir_01_lambda}. Thus, it is related to $\hat{V}_j^3(\lambda)$ by a global rotation. The corresponding rotation matrix $M$ obeys
\begin{align}
M\left(\lambda\right)M\left(1/\lambda\right)&=I,\\
M^T\left(\lambda\right)M\left(\lambda\right)&=I,\\
\bar{M}\left(\bar{\lambda}\right)&=M\left(\lambda\right).
\end{align}
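The first of these relations follows from applying \eqref{eq:V_Coset} twice,
\begin{equation}
\hat{V}\left(\lambda\right)=-J\hat{V}\left(1/\lambda\right)M\left(1/\lambda\right)=J^2\hat{V}\left(\lambda\right)M\left(\lambda\right)M\left(1/\lambda\right)=\hat{V}\left(\lambda\right)M\left(\lambda\right)M\left(1/\lambda\right),
\end{equation}
while the other two follow in a similar manner from \eqref{eq:V_Orth} and \eqref{eq:V_Reality}, respectively.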
In any case, given a specific seed solution, one will be able to specify the matrix $M$\footnote{In the case of the BMN particle and the elliptic strings $M=-J$.}.
Similarly, equation \eqref{eq:t_reality} and the fact that the seed solution is a real function of $m_+$ and $m_-$ imply that
\begin{equation}\label{eq:V_Reality}
\bar{\hat{V}}\left(\bar{\lambda}\right)=\hat{V}\left(\lambda\right).
\end{equation}
It is trivial to show that the above imply
\begin{align}
m_1(\lambda)&=\left[C^T(\lambda)C(\lambda)\right]^{-1},\label{eq:def_m1_C}\\
m_2(\lambda)&=-C^{-1}(\lambda)M(\lambda)C(1/\lambda)J,\label{eq:def_m2_C}\\
m_3(\lambda)&=C^{-1}(\lambda)\bar{C}(\bar{\lambda}).\label{eq:def_m3_C}
\end{align}
These matrices identically satisfy \eqref{eq:constraint_m_inv}, \eqref{eq:constraint_m_cos} and \eqref{eq:constraint_m_real}. Furthermore, since for $\lambda=0$ the matrix $\hat{V}(\lambda)$ satisfies
\begin{equation}
\hat{V}(0)=U^T,
\end{equation}
it is evident that
\begin{equation}
C(0)=I.
\end{equation}
The latter implies that the equations \eqref{eq:constraint_m_0} are satisfied too.
The upshot of this analysis is an unexpected statement. If one knows not only the seed solution, but also the whole family of solutions that correspond to the same Pohlmeyer counterpart as the seed\footnote{This is the case when one constructs NLSM solutions via the ``inversion'' of the Pohlmeyer reduction in the spirit of \cite{Bakas:2016jxp}.}, then one can construct \emph{algebraically} the corresponding solution of the auxiliary system. For real values of the spectral parameter, the elements of the auxiliary field are constructed via an interpolation between different members of this family of solutions. In general, they are determined by the analytic continuation of the family.
In appendix \ref{subsec:simplest_dressing_factor} the simplest dressing factor is constructed. This contains a pair of poles on the unit circle at $e^{\pm i\theta_1}$. In appendix \ref{subsec:dressed_solution} we derive the corresponding dressed solution for a general seed. We show that this obeys the equations of motion and the same Virasoro constraints as the seed. The cosine of the Pohlmeyer field of the dressed solution is given by
\begin{equation}\label{eq:add_cos_pohl}
m_+m_-\cos a^\prime=m_+m_-\cos a+\partial_+\partial_-\ln\left[\left(\hat{W}^T X_0\right)^2\right],
\end{equation}
where $\hat{W}^T=J \hat{V}(e^{i\theta_1})p$ and $p$ is a constant complex column, which obeys appropriate conditions (see appendix \ref{subsec:simplest_dressing_factor}). The argument of the logarithm is simply a linear combination of $\hat{V}^3_j$, i.e. the analytic continuation of the family of the seed string solution.
Notice that the proofs of the facts that the dressing transformation with the simplest dressing factor preserves the Virasoro constraints and that the dressed solution obeys the equations of motion are valid for any number of dimensions. Similarly, the structure of the addition formula \eqref{eq:add_cos_pohl} is the same for any NLSM defined on $\textrm{R}\times \textrm{S}^d$. Obviously, if $d\geq3$, one has to appropriately generalize the presented solution of the auxiliary system $\hat{V}(\lambda)$.
\section{Discussion}
\label{sec:discussion}
Integrability of NLSMs on symmetric spaces stems from the existence of the Lax connection, which is flat and leads to an infinite tower of conserved charges. Yet, there are more aspects of integrability related to NLSMs. Given a seed solution of the NLSM, the dressing method enables the construction of new solutions of the NLSM through a pair of first order differential equations, the auxiliary system. Once this system is solved, multiple dressing transformations can be performed systematically. The Pohlmeyer reduction reveals that the embedding of the world-sheet into the target space, which is in turn embedded into a flat enhanced space is described by integrable models. Given a solution of the Pohlmeyer reduced theory, \Backlund transformations can be employed in order to construct new solutions. Moreover, by substituting a solution of the Pohlmeyer reduced theory in the equations of motion of the NLSM, these become linear, since the Lagrange multiplier acts as a self-consistent potential. The dressing method and the \Backlund transformations are interrelated, as the application of the dressing method on the NLSM automatically performs a \Backlund transformation on the Pohlmeyer reduced theory.
In this work we discussed strings, which, as time flows, propagate on a two-dimensional sphere. Their motion is described by the NLSM on $\mathrm{S}^2$. It is well known that the Pohlmeyer reduced theory of this NLSM is the sine-Gordon equation. We applied the dressing method on this NLSM using a mapping of $\mathrm{S}^2$ to the coset $\mathrm{SO}(3)/\mathrm{SO}(2)$. Taking advantage of the parametrization introduced in \cite{Katsinis:2018ewd}, \emph{we obtained the solution of the auxiliary system for an arbitrary seed solution. This solution is built by combining appropriately the seed solution with a virtual one.} The latter has the same Pohlmeyer counterpart as the seed solution, it solves the NLSM equations of motion, yet, in general it is complex and obeys altered Virasoro constraints, which do not correspond to a valid string solution in $\mathbb{R}\times \mathrm{S}^2$. \emph{This virtual solution can be constructed trivially as long as one knows the whole class of solutions of the NLSM that correspond to a given solution of the Pohlmeyer reduced theory.} Subsequently, we constructed the solution of the NLSM that corresponds to the simplest dressing factor, namely the one that has a pair of poles on the unit circle. \emph{The dressed solution of the NLSM is a non-linear superposition of the seed solution of the NLSM and the virtual one.} This is a completely novel aspect of integrability of NLSMs.
Furthermore, we derived an addition formula for the on-shell Lagrangian density. This addition formula encapsulates the pair of the first order equations that constitute the \Backlund transformation of the sine-Gordon equation. We specify the relation between the location of the poles of the dressing factor and the spectral parameter of the \Backlund transformations. Our construction proves that the knowledge of the whole class of solutions of the NLSM that correspond to a given solution of the sine-Gordon equation, enables the insertion of solitons in this solution of the sine-Gordon equation \emph{without solving the equations of the \Backlund transformation.} As we obtained the general solution of the auxiliary system, our work implies that \emph{the dressing method is actually implementing the non-linear superposition we presented.} At the level of the sine-Gordon equation, since \emph{solitons} are inserted through \Backlund transformations, we showed that they \emph{are the Pohlmeyer counterpart of the non-linear superposition at the level of NLSM.} It is worth noticing that this non-linear superposition does not rely on finite gap integration and explicit construction of solutions of the NLSM; it is a fundamental property.
Non-linear equations are characterized by the fact that one can not construct new solutions of them by forming linear combinations of known solutions. Yet, the fact that the solutions of the Pohlmeyer reduced theory render the equations of motion of the NLSM linear seems to be a key element in our construction. Two solutions of the same equations of motion, which are effectively linear for the given solution of the sine-Gordon equation, are the ones that are superimposed in order to obtain a new solution. This operation constructs a dressed solution that does not correspond to the same Pohlmeyer field, thus the dressed solution belongs to a different ``effectively linearised" sector. It is interesting that starting from an arbitrary seed, the whole tower of sectors that are reached through the non-linear superposition is built by inserting solitons in the Pohlmeyer counterpart of the seed solution.
Conversely, let us consider our construction from the point of view of the sine-Gordon equation. In order to perform a \Backlund transformation on a given seed solution one needs to solve a pair of first order \emph{non-linear} differential equations. The presented analysis shows that this is equivalent to the construction of the family of NLSM solutions which correspond to the specific Pohlmeyer field. This requires the general solution of a \emph{linear} second order differential equation \eqref{eq:NLSM_EOM}, whereas the non-linear part of the calculation has become purely algebraic. The latter is the enforcement of the geometric and Virasoro constraints. Once the family has been constructed, the application of the dressing transformation using our construction and equation \eqref{eq:add_cos_pohl} effectively linearizes the \Backlund transformations.
The generalization of this work to symmetric spaces such as $\textrm{S}^d$, $\textrm{AdS}_d$, $\textrm{dS}_d$ and $\textrm{CP}^d$, as well as direct products thereof, which are relevant for the gauge/gravity duality, is highly interesting. The same is true for Euclidean NLSMs on $\textrm{H}^d$ that are relevant for the holographic calculation of Wilson loops. In a similar manner one may study the implications of the choice of the coset that is used in order to implement the dressing method. It is known that the mapping of a symmetric space to different cosets may lead to different solutions \cite{Kalousios:2006xy}.
Finally, the implications of this construction for the physics of NLSMs deserve a thorough study. Our previous works \cite{Katsinis:2019oox,Katsinis:2019sdo} revealed that dressed elliptic strings have interesting physical properties. A compelling finding is that there is a special class of dressed string solutions, which consists of the strings that correspond to the unstable modes of their precursors. These instabilities are related to the propagation of superluminal solitons on the background of the Pohlmeyer counterpart of the seed. It would be interesting to investigate whether one can discuss similar properties for arbitrary seed solutions in the context of the presented construction. Another potential implication of this construction regards the spectral problem of AdS/CFT. The latter was solved in \cite{Beisert:2005bm} in the thermodynamic limit, which is associated with long strings. Long strings can naturally be constructed via the application of the dressing method. Since they propagate on an infinite size world-sheet, the Pohlmeyer counterpart has a diverging period. The latter corresponds precisely to the existence of a soliton. As a result, long strings can be described as the non-linear superposition of a short string with a virtual one. It is interesting to study applications of our construction in this context, as it can be used for a general short string seed. As a last comment, the presented construction, in particular the addition formula for the cosine of the Pohlmeyer field, describes the instanton contributions to the action of the $\textrm{O}(3)$ sigma model over any zero-instanton classical configuration. Perhaps this could be incorporated in investigations along the lines of \cite{Krichever:2020tgp}.
\subsection*{Acknowledgements}
The research of D.K. is co-financed by Greece and the European Union (European Social Fund- ESF) through the Operational Programme ``Human Resources Development, Education and Lifelong Learning'' in the context of the project ``Strengthening Human Resources Research Potential via Doctorate Research'' (MIS-5000432), implemented by the State Scholarships Foundation (IKY). The research of G.P. has received funding from the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), in the framework of the ``First Post-doctoral researchers support'', under grant agreement No 2595.
\section{The DeCon Language}
\label{sec:language}
The syntax of a DeCon smart contract is shown as follows:
\[
\boxed{
\begin{array}{rcl}
(Contract)~P & \coloneqq & Decl ~|~ Annot ~|~ Rule \\
Decl & \coloneqq & SR ~|~ SG \\
Annot & \coloneqq & .(init ~|~ violation ~|~ public)~Str \\
(Relation)~SR & \coloneqq & .decl~Str(Str: Type,~Str: Type,...)[k1,k2,...] \\
(Singleton~relation)~SG & \coloneqq & .decl~*Str(Str: Type,~Str: Type,...) \\
Type & \coloneqq & address ~|~ uint ~|~ int ~|~ bool \\
\end{array}
}
\]
A DeCon smart contract consists of three main blocks:
(1) Relation declarations, (2) Relation annotations, and (3) Rules.
\noindent \textbf{Relation declarations.}
There are two kinds of relation declarations.
Simple relations ($SR$) have a string for a relation name,
followed by a schema in parenthesis, and optional primary
key indices in a square bracket.
The schema consists of a list of column names and types.
When inserting a new tuple to a relational table,
if a row with the same primary keys exists, then the row is updated by the new tuple.
Singleton relations ($SG$) are relations annotated with a \lstinline{*} symbol.
These relations have only one row.
Row insertion is also an update for singleton relations.
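As a concrete illustration of these update semantics, the following Python sketch (ours, not DeCon's implementation) models a relational table whose rows are keyed by their primary-key columns; a singleton relation is the degenerate case of an empty key, so it holds at most one row:

```python
# Sketch of DeCon-style table semantics: inserting a tuple whose
# primary-key columns match an existing row overwrites that row.
class Table:
    def __init__(self, n_cols, key_indices):
        self.n_cols = n_cols
        self.key_indices = key_indices  # [] for a singleton relation
        self.rows = {}                  # primary key -> full tuple

    def insert(self, row):
        assert len(row) == self.n_cols
        key = tuple(row[i] for i in self.key_indices)
        self.rows[key] = row            # insert or update

# votes(proposal: uint, c: uint)[0]
votes = Table(2, [0])
votes.insert((7, 1))
votes.insert((7, 2))   # same primary key: the row is updated
assert votes.rows[(7,)] == (7, 2)

# singleton relation: *winningProposal(p: uint)
winning = Table(1, [])
winning.insert((3,))
winning.insert((5,))   # always at most one row
assert list(winning.rows.values()) == [(5,)]
```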
\noindent \textbf{Relation annotations.}
DeCon supports three kinds of relation annotations.
First, \lstinline{init} indicates that the relation is initialized
by a constructor argument passed during deployment.
Second, \lstinline{violation} means that the relation represents a
safety violation query.
Third, \lstinline{public} generates a public interface to
read the contents of the corresponding relational table.
\noindent \textbf{Rules.}
The syntax of a DeCon rule is as follows:
\[
\boxed{
\begin{array}{rl}
Rule & \coloneqq H(\bar{x}) \dldeduce Body \\
Body & \coloneqq Join ~|~ R(\bar{x}), y = Agg~n: R(\bar{y}) \\
Join & \coloneqq R(\bar{x}) ~|~ Pred, Join \\
Agg & \coloneqq sum~|~max~|~min~|~count \\
Pred & \coloneqq R(\bar{x}) ~|~ C(\bar{x}) ~|~ y = F(\bar{x})\\
(Condition)~C & \coloneqq > ~|~ < ~|~ \geq ~|~ \leq ~|~ \neq ~|~ == \\
(Function)~F & \coloneqq + ~|~ - ~|~ \times ~|~ / \\
\end{array}
}
\]
where $H(\bar{x})$ and $R(\bar{x})$ are relational literals,
with $H$ and $R$ being the relation name, and $\bar{x}$ is an array
of variables or constants.
\noindent \textbf{Rule semantics.}
A DeCon rule is of the form $head \dldeduce body$,
and is interpreted from right to left: if the body is true,
then it inserts the head tuple into the corresponding relational table.
A rule body is a conjunction of literals,
and is evaluated to true if there exists a valuation of variables
$\pi: V \mapsto D$ such that all literals are true.
$\pi$ maps a variable $v \in V$ to its concrete value in domain $D$.
Given a variable valuation $\pi$,
a relational literal is evaluated to true if and only if
there exists a matching row in the corresponding relational table.
Other kinds of literals, including conditions, functions,
and aggregations, are interpreted as constraints
on the variables.
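The valuation-based semantics above can be sketched in a few lines of Python (an illustrative model of the declarative reading, not DeCon's generated code): a join body holds if some valuation $\pi$ of the variables matches a row in every referenced table. For simplicity the sketch handles only variables and the wildcard \lstinline{_}, not constants or conditions:

```python
def match(literal_vars, row, valuation):
    """Try to extend `valuation` so the literal matches `row`."""
    out = dict(valuation)
    for v, c in zip(literal_vars, row):
        if v == "_":
            continue
        if v in out and out[v] != c:
            return None
        out[v] = c
    return out

def eval_join(body, tables):
    """body: list of (relation, [vars or '_']); returns all valuations pi."""
    results = [dict()]
    for rel, vars_ in body:
        results = [m for pi in results for row in tables[rel]
                   if (m := match(vars_, row, pi)) is not None]
    return results

tables = {"isVoter": {("alice",)}, "vote": {("alice", "p1")}}
# body: isVoter(v), vote(v, p)
pis = eval_join([("isVoter", ["v"]), ("vote", ["v", "p"])], tables)
assert pis == [{"v": "alice", "p": "p1"}]
```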
\noindent \textbf{Join rules.}
Join rules are rules that have a list of predicates in the rule body,
and contain at least one relational literal.
A predicate can be either a relational literal,
a condition, or a function.
\noindent \textbf{Transaction rules.}
Transaction rules are a special kind of join rules that
have one special literal in the body: transaction handlers.
A transaction handler literal has \lstinline{recv_} prefix
in its relation name, and is evaluated to true when
the corresponding transaction request is received.
The rest of the rule body specifies the approving condition for the transaction.
\noindent \textbf{Aggregation rules}
are rules that contain a relational literal
$R(\bar{x})$ and an aggregator literal $y = Agg~n: R(\bar{y})$,
where $Agg$ can be either $max$, $min$, $count$, or $sum$.
For each valid valuation of variables in $R(\bar{x})$,
it computes the aggregate on the matching rows in $R(\bar{y})$.
Take the following rule from the voting contract as an example.
\begin{lstlisting}
votes(p,c) :- vote(_,p), c = count: vote(_,p).
\end{lstlisting}
For each unique value $p$ in the second column of table \lstinline{vote},
the aggregator \lstinline{c = count: vote(_,p)}
counts the number of rows in table \lstinline{vote} whose second column
equals $p$.
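Under the semantics just described, this rule can be mimicked in Python as follows (an illustrative sketch, not the compiled contract code):

```python
# Group the vote table by its second column and count matching rows,
# producing the votes(p, c) table of the running example.
from collections import Counter

def eval_votes_rule(vote_table):
    """votes(p,c) :- vote(_,p), c = count: vote(_,p)."""
    counts = Counter(p for _, p in vote_table)
    return {(p, c) for p, c in counts.items()}

vote = {("alice", "p1"), ("bob", "p1"), ("carol", "p2")}
assert eval_votes_rule(vote) == {("p1", 2), ("p2", 1)}
```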
\section{Conclusion}
We present DCV\xspace, an automatic safety verification tool
for declarative smart contracts written in the DeCon language.
It leverages the high-level abstraction of DeCon to generate
succinct models of the smart contracts,
performs sound verification via mathematical induction,
and applies domain-specific adaptations of the Houdini algorithm
to infer inductive invariants.
Evaluation shows that it is highly efficient,
verifying all 10 benchmark smart contracts,
with significant speedup over the baseline tools.
Our experience with DCV\xspace has also inspired interesting directions
for future research.
First, although DCV\xspace can verify a wide range of contracts
in the financial domain, we find certain interesting applications
that require non-trivial extensions to the modeling language,
including contract inheritance, interaction between contracts,
and functions that lie outside relational logic.
Second, since DCV\xspace verifies on the contract logic-level,
we would also like to verify translation correctness for the
DeCon-to-Solidity compiler, to
ensure the end-to-end soundness of DCV\xspace's verification results.
\section{Evaluation}
\label{sec:eval}
\noindent \textbf{Benchmarks}.
We survey public smart contract repositories~\cite{openzepplin,verx-bench,solidity-examples},
and gather 10 representative contracts as the evaluation benchmarks.
Each selected contract either has contract-level safety specifications annotated,
or has proper documentation from which we can come up with a
contract-level safety specification.
They cover a wide range of application domains, including ERC20~\cite{erc20} and
ERC721~\cite{erc721}, the two most popular token standards.
Table~\ref{tab:benchmarks} shows all contract names and their target properties.
\input{tab_benchmarks}
\noindent \textbf{Baselines}.
We use solc~\cite{solidity} and solc-verify~\cite{hajdu2019solc} as the comparison
baselines.
Solc is a Solidity compiler with a built-in checker
to verify assertions in source programs.
It has been actively maintained by the Ethereum community,
and version 0.8.13 is used for this experiment.
Solc-verify extends from solc 0.7.6 and performs automated formal verification
using strategies of specification annotation and modular program verification.
We have also considered Verx~\cite{permenev2020verx} and Zeus~\cite{kalra2018zeus},
but neither is publicly available.
\noindent \textbf{Experiment setup}.
We modify certain functionalities and syntax of the benchmark contracts
so that they are compatible with all comparison tools.
In particular, the delegate vote function of the voting contract contains
recursion, which is not yet supported by DeCon, and is thus dropped.
In addition, solc and solc-verify do not support inline assembly analysis.
Therefore, inline assembly in the Solidity contracts are replaced with native Solidity code.
Minor syntax changes are also made to
satisfy version requirements of the two baseline tools.
With these modifications, for each reference contract in Solidity,
we implement its counterpart in DeCon.
Then we conduct verification tasks on three versions of benchmark contracts:
(1) DeCon contracts with DCV\xspace,
(2) reference Solidity contracts with solc and solc-verify,
and (3) Solidity contracts generated from DeCon with solc and solc-verify.
For each set of verification tasks, we measure the verification time
and set the time budget to be one hour.
All experiments are performed on a server with
32 2.6GHz cores and 125GB memory.
\input{tab_results}
\noindent \textbf{Results}.
Table~\ref{tab:results} shows the evaluation results.
DCV\xspace verifies all but two contracts in one second,
with ERC1155 in three seconds and auction in 54 seconds.
In particular, the properties for the voting and
auction contract are not inductive, and thus require
inductive invariant generation.
Auction takes more time because it contains more rules
and has a more complicated inductive invariant.
On the other hand, solc only successfully verifies four reference
contracts, with comparable efficiency. It times out on four contracts,
and reports SMT solver invocation error on another two.
This error has been an open issue according to the GitHub repository
issue tracker~\cite{solc-error}, which
is sensitive to the operating system and the underlying library versions of Z3.
Similarly, solc-verify verifies five reference contracts,
and reports unknown on three others.
It also returns errors on two contracts
because it cannot analyze certain parts of the included OpenZepplin libraries,
although the libraries are written in compatible Solidity version.
For Solidity contracts generated from DeCon,
solc verifies one and solc-verify verifies five.
The performance difference between the reference version
and the DeCon-generated version is potentially caused by the fact
that DeCon generates stand-alone contracts that implement all
functionalities without external libraries.
On the other hand, DeCon implements contract states (relations)
as mappings from primary keys to tuples, which may incur
extra analysis complexity compared to the reference version.
In summary, DCV\xspace is highly efficient in verifying
contract-level safety invariants, and can handle a wider range of smart contracts compared to other tools.
By taking advantage of the high-level abstractions of the DeCon
language, it achieves significant speedup over
the baseline verification tools. In several instances, alternative tools time out after an hour or report an error, while DCV\xspace completes verification successfully.
\section{Illustrative Example}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/overview.pdf}
\caption{Overview of DCV\xspace.}
\label{fig:overview}
\end{figure}
Figure~\ref{fig:overview} presents an overview of DCV\xspace.
It takes a smart contract and a property specification
(in the form of a violation query) as input,
both of which are written in the DeCon language (Section~\ref{sec:language}).
The smart contract is then translated into a state transition
system, and the property is translated into a safety invariant
on the system states.
DCV\xspace then proves the transition system preserves the safety
invariant by mathematical induction.
In our implementation, theorem proving is performed by Z3~\cite{z3},
an automatic theorem prover.
If the proof succeeds, the smart contract is verified to be safe,
meaning that the violation query result is always empty,
and an inductive invariant is returned as a safety proof.
Otherwise, DCV\xspace returns ``unknown'', meaning that the smart contract
may not satisfy the specified safety invariant.
In the rest of this section, we use a voting contract (Listing~\ref{lst:voting})
as an example to illustrate the work flow of DCV\xspace. This example is adapted from the voting example in Solidity~\cite{solidity-examples}, simplified for ease of exposition.
\input{lst_voting}
\subsection{A Voting Contract}
\label{sec:decon-example}
Listing~\ref{lst:voting} shows a voting contract written in DeCon.
In DeCon, transaction records and contract states are modeled as relational
tables (lines 1-10). These declarations define table schemas in relational databases,
where each schema has the table name followed by column names and types
in a parenthesis. Optionally, a square bracket annotates the index of the primary key columns,
meaning that these columns uniquely identify a row.
For example, the relation \lstinline{votes(proposal: uint, c: uint)[0]}
on line 5 has two columns, named \lstinline{proposal} and \lstinline{c},
and both have type \lstinline{uint}. The first column is the primary key.
If no primary keys are annotated, all columns are interpreted as primary keys,
i.e., the table is a set of tuples.
A special kind of relation is singleton relation, annotated by $*$.
Singleton relations only have one row, e.g., \lstinline{winningProposal}
in line 8.
By default all relational tables are initialized to be empty, except relations
annotated by the \lstinline{init} keyword (line 12).
These relations are initialized by the constructor arguments passed during deployment.
Each transaction is written in the form of a rule used in Datalog programs:
\lstinline{head :- body.}
The rule body consists of a list of relational literals, and is evaluated
to true if and only if all relational literals are true.
If the rule body is true, the head is inserted into the corresponding
relational table.
For example, the rule in line 15 specifies that a \lstinline{vote} transaction
can be committed if there is no winner yet,
the message sender is a voter, and the voter has not voted yet.
The literal \lstinline{recv_vote(p)} is a transaction handler
that evaluates to true on receiving a \lstinline{vote} transaction request.
Rules that contain such transaction handlers (literal with a \lstinline{recv_}
prefix in the relation name) are called transaction rules.
Inserting a new \lstinline{vote(v,p)} literal triggers updates to all its
direct dependent rules.
A rule is directly dependent on a relation $R$ if and only
if a literal of relation $R$ is in its body.
In this case, relation \lstinline{wins} and \lstinline{voted} are updated next.
The chain of dependent rule updates goes on until no further dependent rules
can be triggered, and the transaction handling is finished.
On the other hand, if the body of a transaction rule is evaluated to false
on receiving a transaction request,
then no dependent rule is triggered, and the transaction is returned as failed.
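The transaction-handling flow described above can be condensed into the following Python model (our simplification of the listing, not DeCon-generated code; the quorum size \lstinline{Q} and relation names follow the running example): check the guard of the transaction rule, then propagate the insertion through its dependent rules.

```python
Q = 3  # assumed quorum size, as in the running example

def recv_vote(state, sender, proposal):
    # guard of the transaction rule (Listing 1, line 15)
    if state["hasWinner"] or sender not in state["isVoter"] \
            or sender in state["voted"]:
        return False                      # transaction returned as failed
    # head insertion triggers the dependent rules
    state["vote"].add((sender, proposal))
    state["voted"].add(sender)            # voted(v) from vote(v, _)
    n = sum(1 for _, p in state["vote"] if p == proposal)
    state["votes"][proposal] = n          # votes(p, c) aggregation
    if n >= Q:                            # wins(p) once the quorum is met
        state["wins"].add(proposal)
        state["hasWinner"] = True
    return True

s = {"isVoter": {"a", "b", "c"}, "voted": set(), "vote": set(),
     "votes": {}, "wins": set(), "hasWinner": False}
assert recv_vote(s, "a", "p1") and recv_vote(s, "b", "p1")
assert not s["hasWinner"]
assert recv_vote(s, "c", "p1") and s["wins"] == {"p1"}
assert not recv_vote(s, "a", "p2")       # rejected: a winner already exists
```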
Line 31 specifies a safety property in the form of a violation query.
If the rule is evaluated to true, it means that there exist
two different winning proposals, indicating a violation of the safety invariant
that there is at most one winning proposal.
Such a violation query rule is expected to be always false during the execution
of a correct smart contract.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/transition-system.pdf}
\caption{The voting contract as a state transition system.}
\label{fig:transition-system}
\end{figure}
\subsection{Translating DeCon Contract to State Transition System}
In order to verify the DeCon contract against the safety invariant,
the declarative rules are translated into a state transition system.
Figure~\ref{fig:transition-system} illustrates part of the transition system translated
from the voting contract in Listing~\ref{lst:voting},
where all relational tables are the states, and every smart contract
transaction commit results in a state transition step.
The middle portion of Figure~\ref{fig:transition-system} shows a state after
$i$ transactions from one of the initial states,
where proposal $p_1$ has two votes, proposal $p_2$ has one vote,
and there is no winner yet.
Two outgoing edges from this state are highlighted.
Suppose the quorum size $Q$ in this example is three.
On the top is the transaction \lstinline{vote(p1)}, where $p_1$ gets another vote,
making it reach the quorum and become the winner.
The edge annotates the conditions for this transaction to go through
(only a fraction of the condition is shown in the figure due to space limit).
It is translated from the transaction rule $r$ in Listing~\ref{lst:voting} line 15
($\var{recv\_vote}(p_1) \land \neg hasWinner \land ...$),
as well as $r$'s dependent rules from line 19 to 26 ($\var{votes}[p_1] \geq Q \land ...$).
This edge leads to a new state where proposal $p_1$'s votes is incremented by one,
and it becomes the winner, which is also translated from line 19 to 26.
Similarly, the bottom right shows another transaction where proposal $p_2$ gets
a vote, but $\var{hasWinner}$ remains $False$ since there is no proposal reaching
the quorum.
Section~\ref{sec:transition} formally describes the algorithm to
translate a DeCon smart contract into a state transition system.
\noindent \textbf{Property.}
The violation query rule (line 31) is translated into the following safety invariant:
$\neg [ \exists p_1,p_2.~ wins(p_1) \land wins(p_2) \land p_1 \neq p_2 ]$.
It says that there do not exist proposals $p_1$ and $p_2$ such
that the violation query is true,
which means that there is at most one winning proposal.
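Phrased operationally (our sketch, not tool code), the violation query is a check that should never succeed on the set of winning proposals:

```python
# The violation query is non-empty iff two distinct winners exist.
def violation(wins):
    return any(p1 != p2 for p1 in wins for p2 in wins)

assert not violation(set())
assert not violation({"p1"})
assert violation({"p1", "p2"})
```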
\subsection{Proof by Induction}
Given the state transition system translated from the DeCon smart contract,
the target property $prop(s)$ is proven by mathematical induction.
In particular, let $S$ be the set of states in the transition system,
and $E$ be the set of transaction types.
Given $s,s'\in S, e \in E$, let $\var{init}(s)$ indicate whether $s$ is in the
initial state, and $\var{tr}(s,e,s')$ indicate whether $s$ can transition
to $s'$ via transaction $e$. The mathematical induction is as follows:
\begin{equation}
\label{eq:induction}
\boxed{
\begin{array}{rl}
\text{ProofByInduction}(init, tr, prop) \triangleq & \text{Base}(init,prop) \land \text{Induction}(tr,prop) \\
\text{Base}(init,prop) \triangleq & \forall s \in S.~init(s) \implies prop(s) \\
\text{Induction}(tr,prop) \triangleq & \forall s,s' \in S, e \in E.~inv(s) \land prop(s) \\
& \land tr(s,e,s') \implies inv(s') \land prop(s') \\
\end{array}
}
\end{equation}
where $inv(s) \land prop(s)$ is an inductive invariant inferred by DCV\xspace
such that $prop(s)$ is proved to be an invariant of the transition system.
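For a finite abstraction of the state space, the proof rule in Equation~\ref{eq:induction} can be rendered executably as follows (an illustrative Python sketch of ours; DCV\xspace instead discharges these obligations symbolically with Z3):

```python
# Enumerate states, check Base on initial states and Induction across
# every transition, given a candidate strengthening inv(s).
def proof_by_induction(states, events, init, tr, inv, prop):
    base = all(prop(s) for s in states if init(s))
    induction = all(
        inv(s2) and prop(s2)
        for s in states if inv(s) and prop(s)
        for e in events
        for s2 in states if tr(s, e, s2))
    return base and induction

# toy system: a counter that starts at 0 and only increments;
# property: the counter never becomes negative
states = range(-2, 5)
events = ["inc"]
init = lambda s: s == 0
tr = lambda s, e, s2: s2 == s + 1
prop = lambda s: s >= 0
inv = lambda s: True   # here prop is inductive on its own
assert proof_by_induction(states, events, init, tr, inv, prop)
```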
To find such an inductive invariant, DCV\xspace first generates a set of
candidate invariants using predicates extracted from
transaction rules in the DeCon contract,
and then applies the Houdini algorithm~\cite{lahiri2009complexity}
to find the inductive invariant.
The detailed steps are as follows:
\noindent \textbf{(1) Extract predicates from all transaction rules.}
Take the transaction rule in line 15 as an example.
The following predicates can be extracted from it:
$\neg \var{hasWinner}$, $\neg \var{voted}[v]$, $\var{isVoter}[v]$.
\noindent \textbf{(2) Generate candidate invariants.}
Given the extracted predicates,
candidate invariants are generated in the form
$\forall x \in X. ~\neg init(s) \implies \neg p(s,x)$,
where $X$ is the set of all possible values of the local variables
(variables other than the state variables) in predicate $p(s,x)$,
and $p(s,x)$ is one of the predicates extracted from the transaction rules.
$\neg init(s)$ is introduced as the premise of the implication
so that the candidate invariant is trivially implied by the system's
initial constraints.
Having $\neg p(s,x)$ in the implication conclusion is based on the
heuristic that, in order to prove safety invariants, the lemma should
prohibit the system from making an unsafe transition.
For example, the following invariant is generated following the
above pattern:
\begin{equation}
\label{eq:invariant-example}
\forall u \in \var{Proposal}. ~ \var{wins}[u] \implies \var{hasWinner}
\end{equation}
The invariant expresses that if any proposal $u \in \var{Proposal}$ is marked
as winner, the predicate $hasWinner$ must also be true.
\noindent \textbf{(3) Infer inductive invariants.}
Given the set of candidate invariants, DCV\xspace applies the Houdini
algorithm~\cite{lahiri2009complexity} and returns the
formula in Equation~\ref{eq:invariant-example} as an inductive invariant.
Applying the inductive invariant $inv$ to the induction procedure (Equation~\ref{eq:induction}),
the target property can be proven.
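To make the induction scheme concrete, the following is an illustrative sketch (ours, not DCV\xspace's implementation: DCV\xspace discharges these checks symbolically with a solver, whereas here the toy finite-state voting model, its quorum, and the predicate names are invented so the base case and induction step of Equation~\ref{eq:induction} can be checked by enumeration):

```python
from itertools import product

# Toy finite-state abstraction of the voting contract (illustrative only):
# a state is (votes_p1, votes_p2, has_winner), with vote counts capped at 2.
QUORUM = 2
STATES = list(product(range(3), range(3), [False, True]))

def init(s):
    return s == (0, 0, False)

def tr(s, s2):
    # A vote is cast for proposal 0 or 1; hasWinner is updated accordingly.
    v1, v2, hw = s
    if hw:
        return False
    for p in (0, 1):
        n1, n2 = (v1 + 1, v2) if p == 0 else (v1, v2 + 1)
        if n1 <= 2 and n2 <= 2 and s2 == (n1, n2, max(n1, n2) >= QUORUM):
            return True
    return False

def prop(s):
    # Safety: the two proposals never both reach the quorum.
    return not (s[0] >= QUORUM and s[1] >= QUORUM)

def inv(s):
    # Auxiliary invariant: hasWinner tracks whether some proposal won.
    return s[2] == (max(s[0], s[1]) >= QUORUM)

def base(states):
    # Base(init, prop): every initial state satisfies the property.
    return all(prop(s) for s in states if init(s))

def induction(states, strengthen):
    # Induction(tr, prop): strengthen /\ prop is preserved by every transition.
    return all(strengthen(s2) and prop(s2)
               for s in states if strengthen(s) and prop(s)
               for s2 in states if tr(s, s2))
```

Here `prop` alone is not inductive: the unreachable state `(2, 1, False)` satisfies it but can step to `(2, 2, True)`. Strengthened with `inv`, the induction step goes through, mirroring the role of $inv(s) \land prop(s)$ above.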
\section{Introduction}
Smart contracts are programs that process transactions on blockchains, a
type of decentralized and distributed ledger.
The combination of smart contracts and blockchains has enabled a wide range of
innovations in fields including banking~\cite{erc20-tokens},
trading~\cite{notheisen2017trading,nft},
and financing~\cite{wang2018overview}, etc.
Nowadays, smart contracts manage a massive amount of digital
assets\footnote{According to \href{https://etherscan.io/tokens}{Etherscan},
at the time of writing,
the top ERC20 tokens manage billions of dollars' worth of tokens.},
but they also suffer from security vulnerabilities~\cite{dao,dice2win,kingofether},
which lead to significant financial losses.
In addition, since smart contracts are stored and executed on
blockchains, once they are deployed, it is hard to terminate execution
and update the contracts when new vulnerabilities are discovered.
One way to reduce potential vulnerabilities, whose patterns are unknown
during development, is safety verification:
a smart contract that is always safe is less likely to suffer from
undiscovered vulnerabilities~\cite{kalra2018zeus,permenev2020verx}.
Thus, this work focuses on the problem of safety verification for smart contracts.
Most existing solutions directly verify the implementation of smart
contracts~\cite{solidity-smt,hajdu2019solc,marescotti2020accurate,kalra2018zeus,permenev2020verx}.
These solutions have worked very well on verifying transaction-level properties,
e.g., pre/post conditions, integer overflow, etc. However, when it comes to contract-level properties,
where an invariant needs to hold across an infinite sequence of transactions,
these approaches suffer from low efficiency due to state explosion issues.
Some solutions~\cite{frank2020ethbmc,antonino2020formalising}
trade soundness for efficiency,
verifying properties up to a certain number of transactions.
On the other hand, in model-based verification approaches,
a formal model of the smart contract is specified
separately from the implementation.
Given such a formal model and the implementation, two kinds of verification are performed:
(1) does the formal model satisfy the desired properties~\cite{nehai2018model,cassez2022deductive}?
(2) is the implementation consistent with the formal model~\cite{chen2018language}?
This verification approach is more efficient, because the formal model
typically abstracts away implementation details that are irrelevant to the verification task.
However, a separate model needs to be written
in addition to the implementation, and is typically in a formal
language that is unfamiliar to software engineers.
In this paper, we propose an alternative verification approach based on
an executable specification of smart contracts.
In particular, we target smart contracts written in DeCon~\cite{chen2022declarative},
a domain-specific language for smart contract specification and implementation.
A DeCon contract is in itself a declarative specification of the smart contract logic,
making it more efficient to reason about than a low-level implementation in Solidity.
It is also executable, in that it can be automatically compiled
into a Solidity program that can be deployed on the Ethereum blockchain.
Once verification is completed, the automatic code generation saves developers the effort of
manually implementing the contract following the specification.
The high-level abstraction and executability make DeCon
an ideal target for verifying contract-level properties.
We implement a prototype, DCV\xspace (\underline{D}e\underline{C}on \underline{V}erifier),
for verifying declarative smart contracts.
DCV\xspace performs sound verification of safety invariants using mathematical induction.
A typical challenge in induction is to infer inductive invariants that can help
prove the target property.
A key insight is that the DeCon language exposes the exact logical
predicates that are necessary for constructing such inductive invariants,
which makes inductive invariant inference tractable.
As another benefit of using DeCon as the verification target,
DeCon provides uniform interfaces for both contract implementation
and property specification.
It models the smart contract states as relational databases, and properties as
violation queries against these databases. Thus, both the smart contract
transaction logic and properties can be specified in a declarative and succinct
way.
With DCV\xspace, developers can specify both the contract logic and its properties
in DeCon, and have it verified and implemented automatically.
This paper makes the following contributions.
\begin{itemize}
\item A sound and efficient verification method for smart contracts,
targeting contract-level safety invariants, based on a declarative specification language and an induction proof strategy
(Sections~\ref{sec:program-transformation},~\ref{sec:verification-method}).
\item A domain-specific adaptation of the Houdini algorithm~\cite{lahiri2009complexity}
to infer inductive invariants for induction proof (Section~\ref{sec:verification-method}).
\item An open-source verification tool for future study and comparison.
\item Evaluation that compares DCV\xspace with state-of-the-art verification
tools, on ten representative benchmark smart contracts.
DCV\xspace successfully verifies all benchmarks, is able to handle benchmarks not supported by other tools, and is significantly more efficient than baseline tools. In some instances, DCV\xspace completes verification within seconds when other tools timeout after an hour
(Section~\ref{sec:eval}).
\end{itemize}
\section{Verification Method}
\label{sec:verification-method}
\subsection{Proof by Induction}
Given a state transition system $\langle S, I, E, Tr \rangle$
transformed from the DeCon contract,
DCV\xspace uses mathematical induction to prove the target property $prop$.
The induction procedure is defined in Equation~\ref{eq:induction},
where $init$ and $tr$ are the Boolean formulas that define the set
of initial states $I$ and the transition relation $Tr$.
The rest of this section introduces the algorithm to infer the
inductive invariant $inv$ used by the induction proof.
\begin{algorithm}
\caption{Procedure to find inductive invariants.}
\label{alg:find-indcutive-invariant}
\hspace*{\algorithmicindent} \textbf{Input}:
a transition system $ts$,
a map from relation to its modeling variable $\Gamma$,
and a set of DeCon transaction rules $R$. \\
\hspace*{\algorithmicindent} \textbf{Output}:
an inductive invariant of $ts$.
\begin{algorithmic}[1]
\Function{FindInductiveInvariant}{C,ts}
\For{inv \textbf{in} C}
\If{refuteInvariant(inv, C, ts)}
\State \Return $\text{FindInductiveInvariant}(C \setminus inv, ts)$
\EndIf
\EndFor
\State \Return $ \bigwedge_{c_i \in C} c_i $
\EndFunction
\State $P \gets \bigcup_{r \in R} \text{ExtractPredicates}(r,\Gamma) $
\State $C \gets \text{GenerateCandidateInvariants(P)} $
\State \Return $\text{FindInductiveInvariant}(C,ts)$
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:find-indcutive-invariant} presents the procedure to
infer inductive invariants.
It first extracts a set of predicates $P$ from the set of transaction rules $R$
(Section~\ref{sec:extract}).
Then it generates a set of candidate invariants using predicates in $P$,
following two heuristic patterns (Section~\ref{sec:patterns}).
Finally, it invokes a recursive subroutine \textsc{FindInductiveInvariant}
to find an inductive invariant.
The procedure \textsc{FindInductiveInvariant} is adapted from the Houdini algorithm~\cite{lahiri2009complexity}.
It iteratively refutes candidate invariants in $C$, until there is no candidate that
can be refuted, and returns the conjunction of all remaining invariants.
The subroutine $\text{refuteInvariant}$ is defined in Equation~\ref{eq:refute};
it refutes a candidate invariant if it is not inductive.
\begin{equation}
\begin{array}{rl}
\text{refuteInvariant}(inv,C,ts) \triangleq & \neg (ts.init \implies inv) \\
& \lor \neg [(\bigwedge_{c\in C}c) \land ts.tr \implies inv'] \\
\end{array}
\label{eq:refute}
\end{equation}
where $inv'$ is obtained by replacing all state variables in $inv$
with their corresponding variables in the next transition step.
A property of this algorithm is that, given a set of candidate invariants $C$,
it always returns the strongest inductive invariant that can be constructed
in the form of conjunction of the candidates in $C$~\cite{lahiri2009complexity}.
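The refutation loop can be sketched in a few lines of pure Python over an explicit finite-state model (the model, candidate predicates, and names below are ours for illustration; DCV\xspace performs the same pruning symbolically with an SMT solver rather than by enumeration):

```python
from itertools import product

def houdini(candidates, states, init, tr):
    """Drop every candidate refuted by an initial state, or by a transition
    from a state satisfying the conjunction of the remaining candidates."""
    C = list(candidates)
    while True:
        conj = lambda s: all(c(s) for c in C)
        refuted = [c for c in C
                   if any(init(s) and not c(s) for s in states)
                   or any(conj(s) and tr(s, s2) and not c(s2)
                          for s in states for s2 in states)]
        if not refuted:
            return C
        C = [c for c in C if c not in refuted]

# Toy model: a counter x in 0..3 that sets a flag once it reaches 2.
STATES = list(product(range(4), [False, True]))
init = lambda s: s == (0, False)
tr = lambda s, s2: s[0] < 3 and s2 == (s[0] + 1, s[1] or s[0] + 1 >= 2)

c_flag_late = lambda s: (not s[1]) or s[0] >= 2   # flag implies x >= 2
c_flag_set  = lambda s: s[0] < 2 or s[1]          # x >= 2 implies flag
c_never     = lambda s: not s[1]                  # too strong: gets refuted

result = houdini([c_flag_late, c_never, c_flag_set], STATES, init, tr)
```

Consistent with the property cited above, `result` is the conjunction of all surviving candidates: `c_never` is refuted by the transition `(1, False) -> (2, True)`, while the other two together form an inductive invariant.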
\subsection{Predicate Extraction}
\label{sec:extract}
\begin{algorithm}
\caption{ExtractPredicates(r, $\Gamma$).}
\label{alg:extract-predicate}
\hspace*{\algorithmicindent} \textbf{Input}:
a transaction rule $r$, a map from relation to its modeling variable $\Gamma$. \\
\hspace*{\algorithmicindent} \textbf{Output}:
a set of predicates $P$.
\begin{algorithmic}[1]
\State $\tau \gets r.trigger $
\State $P_0 \gets \{ p ~|~ l \in r.body, \Gamma,\tau \vdash l \rightsquigarrow p \} $
\State $P_1 \gets \{ p \land q ~|~ p \in P_0, q \in \text{MatchingPredicates}(p,r) \}$
\State \Return $P_0 \cup P_1$
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:extract-predicate} presents the predicate extraction procedure.
It first transforms each literal in the transaction rule into a predicate,
and puts them into a set $P_0$.
Some predicates in $P_0$ do not contain enough information on their own, e.g.,
predicates that contain only free variables,
because the logic of a rule is established by the relations among its literals
(e.g., two literals sharing the same variable $v$ means joining on the
corresponding columns).
On the contrary, predicates that contain constants, e.g., $\var{hasWinner} = True$,
convey the matching of a column to a concrete value,
and can thus be used directly in candidate invariant construction.
Therefore, in the next step, each predicate $p$ in $P_0$ is augmented by one of its
matching predicates in $\text{MatchingPredicates}(p,r)$,
which is the set of predicates in rule $r$ that share at least one
variable with predicate $p$. This set of augmented predicates is $P_1$.
Finally, the union of $P_0$ and $P_1$ is returned.
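The augmentation step can be sketched as follows (a deliberate simplification of DeCon's syntax, with names of our own choosing: a literal is a `(relation, args)` pair, string arguments stand for variables, and everything else is a constant):

```python
# Body of the vote transaction rule, paraphrased as (relation, args) pairs.
rule_body = [
    ("recv_vote", ("v", "p")),    # trigger literal: variables only
    ("hasWinner", (False,)),      # contains a constant
    ("voted", ("v", False)),
    ("isVoter", ("v", True)),
]

def variables(lit):
    # String arguments are variables in this toy representation.
    return {a for a in lit[1] if isinstance(a, str)}

def has_constant(lit):
    return any(not isinstance(a, str) for a in lit[1])

def matching_predicates(p, body):
    # Predicates of the same rule sharing at least one variable with p.
    return [q for q in body if q is not p and variables(p) & variables(q)]

def extract_predicates(body):
    p0 = list(body)
    # Augment constant-free predicates with a matching predicate each.
    p1 = [(p, q) for p in p0 if not has_constant(p)
          for q in matching_predicates(p, body)]
    return p0 + p1
```

On this body, only the trigger literal is constant-free, so $P_1$ pairs it with the two literals sharing the variable `v`.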
\subsection{Candidate Invariant Generation}
\label{sec:patterns}
Given the set of predicates in $P$,
DCV\xspace generates candidate invariants in the following patterns:
\begin{gather*}
\{ \forall x \in X.~ \neg init(s) \implies \neg p(s,x) ~|~ p \in P \} \\
\{ \forall x \in X.~ \neg init(s) \land q(s,x) \implies \neg p(s,x) ~|~
p,q \in P \}
\end{gather*}
where $X$ is the set of non-state variables in the body of the formula.
$\neg init(s)$ is used as the implication premise
so that the whole formula can be trivially implied by the
transition system's initial constraints.
Having $\neg p$ as the implication conclusion is based on the
observation that, in order to prove safety invariants, a lemma is needed to
prevent the system from unsafe transitions.
In the second pattern, we add another predicate $q \in P$ to the implication
premise to make the pattern more robust.
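A sketch of the two generation patterns over predicate names (string templates only, with the quantified variable sets elided; this is our illustration, not DCV\xspace's internal representation):

```python
def generate_candidate_invariants(preds):
    # Pattern 1: forall x in X. not init(s) => not p(s, x)
    singles = [f"forall X. not init(s) => not {p}(s, X)" for p in preds]
    # Pattern 2: forall x in X. not init(s) and q(s, x) => not p(s, x)
    pairs = [f"forall X. not init(s) and {q}(s, X) => not {p}(s, X)"
             for p in preds for q in preds if p != q]
    return singles + pairs
```

For $n$ predicates this yields $n$ candidates of the first pattern and $n(n-1)$ of the second.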
\section{Related work}
\label{sec:related}
\noindent \textbf{Verification of Solidity smart contracts.}
Solc~\cite{marescotti2020accurate}, Solc-verify~\cite{hajdu2019solc},
Zeus~\cite{kalra2018zeus},
Verisol~\cite{wang2019formal},
and Verx~\cite{permenev2020verx}
perform safety verification for smart contracts.
Similar to DCV\xspace, they infer inductive invariants to perform sound
verification of safety properties.
They also generate counter-examples as a sequence of transactions
to disprove the safety properties.
SmartACE~\cite{wesley2022verifying} is a safety verification framework that
incorporates a wide variety of verification techniques, including fuzzing,
bounded model checking, symbolic execution, etc.
In addition to safety properties, SmartPulse~\cite{stephens2021smartpulse}
supports liveness verification. It leverages
the counterexample-guided abstraction refinement (CEGAR) paradigm to perform
efficient model checking, and can generate attacks given an environment
model.
DCV\xspace differs from these works in that it uses a high-level executable
specification, DeCon, as the verification target.
Such high-level modeling improves verification efficiency,
but it also means that DCV\xspace can only apply to
smart contracts written in DeCon, which is a new language,
while the other tools can work on most existing smart contracts in Solidity.
\noindent \textbf{Formal semantics of smart contracts.}
KEVM~\cite{chen2018language} introduces formal semantics for smart contracts,
and can automatically verify that a Solidity program (its compiled EVM bytecode)
implements the formal semantics specified in KEVM.
This verification is also sound, but it focuses on the functional
correctness of each Solidity function, instead of the state
invariants across multiple transactions.
Formal semantics of EVM bytecode have also been formalized
in F*~\cite{grishchenko2018semantic} and Isabelle/HOL~\cite{amani2018towards}.
Scilla~\cite{sergey2019safer} is a type-safe intermediate
language for smart contracts that also provides formal semantics.
They offer precise models of the smart contract behaviors,
and support deductive verification via proof assistants.
However, working with a proof assistant requires non-trivial manual effort.
On the contrary, DCV\xspace provides fully automatic verification.
\noindent \textbf{Vulnerability detection.}
Securify~\cite{tsankov2018securify}
encodes smart contract semantic information into relational facts,
and uses Datalog solver to search for property
compliance and violation patterns in these facts.
Oyente~\cite{luu2016making} uses symbolic execution
to check generic security vulnerabilities,
including reentrancy attack, transaction order dependency, etc.
Maian~\cite{Nikoli2018finding} detects vulnerabilities by
analyzing transaction traces.
Unlike the sound verification tools,
which require some amount of formal specification
from the users, these works require no formal
specification and can be directly applied to
any existing smart contract without modification,
offering a quick and light-weight alternative to sound
verification, although they may suffer from false positives
or negatives.
\noindent \textbf{Fuzzing and testing.}
Fuzzing and testing techniques have also been widely applied
to smart contract verification.
They complement deductive verification tools by
presenting concrete counter-examples.
ContractFuzzer~\cite{jiang2018contractfuzzer} instruments
EVM bytecodes to log run-time contract behaviors,
and uncovers security vulnerabilities from these
run-time logs.
Smartian~\cite{choi2021smartian} uses static
analysis to predict effective transaction sequences,
and uses this information to guide the fuzzing process.
SmarTest~\cite{so2021smartest} introduces a language
model for vulnerable transaction sequences, and uses
this model to guide the search path in the fuzzing phase.
\section{Program Transformation}
\label{sec:program-transformation}
\subsection{Declarative Smart Contracts as Transition Systems}
This section introduces the algorithm to translate a DeCon smart contract
into a state transition system $\langle S,I,E,Tr \rangle$ where
\begin{itemize}
\item $S$ is the state space: the set of all possible valuations
of all relational tables in DeCon.
\item $I \subseteq S$ is the set of initial states that
satisfy the initial constraints of the system.
All relations are by default initialized to zero, or unconstrained
if they are annotated to be initialized by constructor arguments.
\item $E$ is the set of events. Each element in $E$ corresponds to a
kind of transaction in DeCon.
\item $Tr \subseteq S \times E \times S$ is the transition relation,
generated from DeCon rules.
$Tr(s,e,s')$ means that state $s$ can transition to state $s'$
via transaction $e$.
\end{itemize}
In the rest of this section, we introduce the algorithm to generate the transition
relation from a DeCon smart contract.
\subsection{Transition Relation}
\label{sec:transition}
The transition relation $\var{Tr}$ is defined by
a boolean formula $\var{tr}: S \times E \times S \mapsto Bool$.
Given $s,s' \in S, e \in E$, $s$ can transition to $s'$ in one step
via transaction $e$ if and only if $\var{tr}(s,e,s')$ is true.
Equation~\ref{eq:transition} defines $\var{tr}$ as a disjunction over
the set of formulas encoding each transaction rule.
$R$ is the set of rules in the DeCon contract.
$\Gamma$ is a map from relation to its modeling variable,
e.g., the relation \lstinline{vote(proposal:uint,c:uint)[0]}
is mapped to $\var{votes}: uint \mapsto uint$.
Recall from Section~\ref{sec:language} that transaction rules
are rules that listen to incoming transactions and are only
triggered by an incoming transaction request
($r.trigger$ is the literal with the \lstinline{recv_} prefix in $r$'s body).
\begin{equation}
\label{eq:transition}
tr \triangleq \bigvee\limits_{r \in \var{TransactionRules}} [\text{EncodeDeConRule}(r,R,\Gamma,r.trigger)
\land e = r.TxName ]
\end{equation}
\begin{algorithm}
\caption{$\text{EncodeDeConRule}(r,R,\Gamma,\tau)$.}
\label{alg:transform-rule}
\hspace*{\algorithmicindent} \textbf{Input}:
(1) a DeCon rule $r$,
(2) the set of all DeCon rules $R$,
(3) a map from relation to its modeling variable $\Gamma$,
(4) a trigger $\tau$, the newly inserted literal that triggers $r$'s update.\\
\hspace*{\algorithmicindent} \textbf{Output}:
A boolean formula over $S \times S$, encoding $r$'s body condition,
and all state updates triggered by inserting $r$'s head literal.
\begin{algorithmic}[1]
\State $ \var{Body} \gets \text{EncodeRuleBody}(\Gamma, \tau, r)$
\State $ \var{Dependent} \gets \{ \text{EncodeDeConRule}(dr,R,\Gamma, r.head)~|~
dr \in \text{DependentRules}(r,R)\}$
\State $(H, H') \gets \text{GetStateVariable}(\Gamma, r.head)$
\State $\var{Update} \gets H' = H.insert(r.head)$
\State $\var{TrueBranch} \gets \var{Body} \land \var{Update} \land (\bigwedge_{d \in \var{Dependent}} d ) $
\State $\var{FalseBranch} \gets \neg \var{Body} \land (H'=H)$
\State \Return $ \var{TrueBranch} \oplus \var{FalseBranch} $
\end{algorithmic}
\end{algorithm}
The procedure $\text{EncodeDeConRule}$ is defined by Algorithm~\ref{alg:transform-rule}.
We explain it using the voting contract in Listing~\ref{lst:voting} as an example.
In step 1, $r$'s body is encoded as a Boolean formula, $\var{Body}$,
by calling the procedure $\text{EncodeRuleBody}$ (Section~\ref{sec:encode-rule-body}).
Take the rule for the \lstinline{vote} transaction in Listing~\ref{lst:voting}, line 15, as an example.
Its body is encoded as:\\
\[ \neg \var{hasWinner}
\land~ \neg \var{hasVoted}[v]
\land~ \var{isVoter}[v] \]
In step 2, for each direct dependent rule $dr$ of $r$, it gets $dr$'s encoding
by recursively calling itself on $dr$.
A rule $dr$ is directly dependent on rule $r$ if and only if $r$'s head relation
appears in $dr$'s body. For example, the rules in lines 19 and 26 of Listing~\ref{lst:voting} are directly dependent
on the \lstinline{vote} transaction rule in line 15.
Step 3 generates state variables for the head relation,
where $H$ is for the current step, and $H'$ is for the next transition step.
Step 4 generates the head relation update constraint:
$H'$ equals the result of inserting $r$'s head literal into $H$.
Suppose we are in the recursion step for encoding
the \lstinline{votes} rule in line 19; its update constraint
is generated as:
$\var{votes}' = Store(\var{votes},p,\var{votes}[p]+1)$,
where the vote count for proposal $p$ is incremented by one.
Step 5 generates the constraint where $r$'s body is true,
in conjunction with the update constraint and all dependent rules'
constraints. Step 6, on the other hand, generates constraints
where $r$'s body is false, no dependent rule is triggered,
and the head relation remains the same.
Step 7 returns the final formula as an exclusive-or of the true and
false branches,
which encodes $r$'s body and how its update affects
other relations in the contract.
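The recursion over dependent rules can be sketched with string-valued constraints (the rule records below paraphrase the voting contract, and the field names and `unchanged` placeholder are ours; DCV\xspace builds solver terms rather than strings):

```python
# Each rule: its head relation, body condition, head update, and the
# relations it reads (a rule depends on rules whose heads it reads).
RULES = [
    {"head": "voted",
     "body": "not hasWinner and not voted[v] and isVoter[v]",
     "update": "voted' = Store(voted, v, true)",
     "reads": ["recv_vote"]},
    {"head": "votes",
     "body": "true",
     "update": "votes' = Store(votes, p, votes[p] + 1)",
     "reads": ["voted"]},
    {"head": "wins",
     "body": "votes[p] >= quorum",
     "update": "wins' = Store(wins, p, true)",
     "reads": ["votes"]},
]

def encode_decon_rule(rule, rules):
    # Recursively encode every rule that reads this rule's head relation.
    deps = [encode_decon_rule(dr, rules) for dr in rules
            if rule["head"] in dr["reads"]]
    true_branch = " and ".join([rule["body"], rule["update"]] + deps)
    false_branch = f"not ({rule['body']}) and unchanged"
    return f"(({true_branch}) xor ({false_branch}))"

enc = encode_decon_rule(RULES[0], RULES)
```

The encoding of the transaction rule thus nests one `xor` (true/false branch) per rule in the dependency chain, matching steps 5--7 of Algorithm~\ref{alg:transform-rule}.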
\subsection{Encoding Rule Bodies}
\label{sec:encode-rule-body}
The procedure EncodeRuleBody is defined by two sets of inference rules.
The first judgment of the form
$\Gamma, \tau \vdash r \hookrightarrow \phi$
states that a DeCon rule $r$ is encoded by a boolean formula $\phi$
under context $\Gamma$ and $\tau$.
The second judgment of the form
$ \Gamma,\tau \vdash \var{Pred} \rightsquigarrow \phi $
states that a predicate $\var{Pred}$ is encoded by
a formula $\phi$ under context $\Gamma$ and $\tau$.
The contexts ($\Gamma$ and $\tau$) of both judgment forms
are the same as the inputs of Algorithm~\ref{alg:transform-rule}.
The judgment $\Gamma, \tau \vdash r \hookrightarrow \phi$
is defined by the following inference rules:
\[
\infer[(Join1)]{
\Gamma,\tau \vdash H(\bar{y}) \dldeduce R(\bar{x}) \hookrightarrow \phi
}{
\Gamma,\tau \vdash R(\bar{x}) \rightsquigarrow \phi
}
\]
\[
\infer[(Join2)]{
\Gamma,\tau \vdash H(\bar{y}) \dldeduce Pred, Join \hookrightarrow \phi_1 \land \phi_2
}{
\Gamma,\tau \vdash Pred \rightsquigarrow \phi_1
&
\Gamma,\tau \vdash H(\bar{y}) \dldeduce Join \hookrightarrow \phi_2
}
\]
\[
\infer[(Sum)]{
\Gamma,\tau \vdash H(\bar{y}) \dldeduce R(\bar{x}), s=sum~n: R(\bar{z})
\hookrightarrow s=s'
}{
s'=\Gamma(H)[\bar{k}].value + n
}
\]
where $\bar{k}$ represents the primary keys of relation $H$, extracted from the
array $\bar{y}$, and $\Gamma(H)[\bar{k}].value$ reads the current sum value.
Note that, unlike the join rules, the literal $R(\bar{x})$
here does not join with the aggregation literal,
because it is only introduced to obtain valid valuations for the rule variables
(every row in table $R$ is a valid valuation).
For each valid valuation, the aggregator computes the aggregate summary
for the matching rows in table $R$ (Section~\ref{sec:language}).
This applies to all inference rules for aggregators.
\[
\infer[(Max)]{
\Gamma, \tau \vdash H(\bar{y}) \dldeduce R(\bar{x}), m=max~n: R(\bar{z})
\hookrightarrow \phi
}
{
m' = \Gamma(H)[\bar{k}].value
& \phi \coloneqq (n > m' \land m=n) \oplus (n \leq m' \land m=m')
}
\]
\[
\infer[(Min)]{
\Gamma, \tau \vdash H(\bar{y}) \dldeduce R(\bar{x}), m=min~n: R(\bar{z})
\hookrightarrow \phi
}
{
m' = \Gamma(H)[\bar{k}].value
&
\phi \coloneqq (n < m' \land m=n) \oplus (n \geq m' \land m=m')
}
\]
\[
\infer[(Count)]{
\Gamma,\tau \vdash H(\bar{y}) \dldeduce R(\bar{x}), c=count: R(\bar{z})
\hookrightarrow \phi
}{
\phi \coloneqq c=\Gamma(H)[\bar{k}].value+1
}
\]
The following are the inference rules for the judgment
$ \Gamma,\tau \vdash \var{Pred} \rightsquigarrow \phi $:
\[
\infer[(Lit1)]{
\Gamma, \tau \vdash R(\bar{x}) \rightsquigarrow \tau = R(\bar{x})
}{
\tau.rel = R
}
\quad
\infer[(Lit2)]{
\Gamma, \tau \vdash R(\bar{x}) \rightsquigarrow \Gamma(R)[\bar{k}] = \bar{v}
}{
\tau.rel \neq R
}
\]
where $\bar{k}$ represents the primary keys in relational literal $R(\bar{x})$,
extracted from $\bar{x}$,
and $\bar{v}$ represents the remaining fields in $\bar{x}$.
\[
\infer[(Condition)]{
\Gamma, \tau \vdash C \rightsquigarrow C
}{}
\quad
\infer[(Function)]{
\Gamma, \tau \vdash y = F(\bar{x}) \rightsquigarrow y=F(\bar{x})
}{}
\]
\paragraph{Assumptions.}
DCV\xspace assumes that on every new incoming transaction request,
there is at most one new tuple derived by each rule,
and that there is no recursion in the rules.
Recursion means that there is a mutual dependency between
rules. A rule $r_a$ is dependent on another rule $r_b$
($r_a \rightarrow r_b$)
if and only if $r_b$'s head relation appears in $r_a$'s body,
or there exists another rule $r_c$ such that
$r_a \rightarrow r_c \land r_c \rightarrow r_b$.
This assumption keeps the size of the transition constraint
linear in the number of rules in the DeCon contract,
thus making the safety verification tractable.
We find this assumption holds for most smart contracts
in the financial domain, and is true for all of the ten
benchmark contracts in our evaluation.
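The no-recursion assumption amounts to acyclicity of the rule dependency graph, which can be checked with a standard depth-first search (an illustrative sketch; the rule names are ours):

```python
def has_recursion(edges):
    """edges maps each rule head to the heads it directly depends on;
    returns True iff the dependency relation contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def dfs(n):
        color[n] = GRAY          # n is on the current DFS path
        for m in edges.get(n, []):
            c = color.get(m, WHITE)
            if c == GRAY or (c == WHITE and dfs(m)):
                return True      # back edge: a cycle exists
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(edges))

# Voting contract: wins depends on votes, which depends on voted (acyclic).
voting = {"wins": ["votes"], "votes": ["voted"], "voted": []}
```

A mutually dependent pair of rules would be rejected by this check.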
\subsection{Safety Invariant Generation}
Each violation query rule $qr$ in a DeCon contract
is first encoded as a formula $\phi$ such that
$\Gamma,\tau \vdash qr \hookrightarrow \phi$.
Note that the context $\Gamma$ is the same mapping used
in the transition system encoding process.
The second context, trigger $\tau$, is a reserved literal $check()$,
which triggers the violation query rule after every transaction.
Next, the safety invariant is generated from $\phi$ as follows:
\[
Prop \triangleq \neg (\exists x \in X. \phi(s,x))
\]
where $X$ is the set of all possible values of the non-state variables in $\phi$.
The property states that there exist no valuations of the variables
in $X$ such that the violation query is non-empty.
In other words, the system is safe from such violation.
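Concretely, the generated property holds on a state exactly when the violation query returns no witnesses; the following toy evaluation over an explicit `wins` table (our own representation of a relational state) illustrates this for the voting contract's query:

```python
def violation_witnesses(state):
    """All pairs of distinct winning proposals: witnesses of the query
    wins(p1) and wins(p2) and p1 != p2."""
    wins = state["wins"]
    return [(p, q) for p in wins for q in wins
            if wins[p] and wins[q] and p != q]

def prop(state):
    # The safety invariant: the violation query is empty.
    return not violation_witnesses(state)
```

A state with a single winner satisfies the property; a state with two winners yields the witnesses that would be reported as a violation.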
\section{Introduction}
Lie used algebraic symmetry properties of differential equations to
extract their solutions \cite{lie1, lie, lie2, lie3}. One method
developed was to transform the equation to linear form by changing
the dependent and independent variables invertibly. Such
transformations are called \emph{point transformations} and the
transformed equations are said to be \emph{linearized}. Equations
that can be so transformed are said to be {\it linearizable}. Lie
proved that the necessary and sufficient condition for a scalar
nonlinear ordinary differential equation (ODE) to be linearizable is
that it must have eight Lie point symmetries. He exploited the fact
that all scalar linear second-order ODEs are equivalent under point
transformations \cite{sur}, i.e. every linearizable scalar
second-order ODE is reducible to the free particle equation. While
the situation is not so simple for scalar linear ODEs of order
$n\geq3$, it was proved that there are three equivalence classes
with $n+1$, $n+2$ or $n+4$ infinitesimal symmetry generators
\cite{lea}.
For linearization of systems of two nonlinear ODEs, we will first
consider the equivalence of the corresponding linear systems under
point transformations. Nonlinear systems of two second-order ODEs
that are linearizable to systems of ODEs with constant coefficients,
were proved to have three equivalence classes \cite{gor}. They have
$7$, $8$ or $15$-dimensional Lie algebras. This result was extended
to those nonlinear systems which are equivalent to linear systems of
ODEs with constant or variable coefficients \cite{sb}. They obtained
an ``optimal" canonical form of the linear systems involving three
parameters, whose specific choices yielded five equivalence classes,
namely with $5$, $6$, $7$, $8$ or $15$-dimensional Lie algebras.
Geometric methods were developed to transform nonlinear systems of
second-order ODEs \cite{aa,mq1,mq2} to a system of the free particle
equations by treating them as geodesic equations and then projecting
those equations down from an $m\times m$ system to an $(m-1)\times
(m-1)$ system. In this process the originally homogeneous
quadratically semi-linear system in $m$ dimensions generically
becomes a non-homogeneous, cubically semi-linear system in $(m-1)$
dimensions. When used for $m=2$, the Lie conditions for the scalar
ODE are recovered precisely. The criterion for linearizability is
simply that the manifold for the (projected) geodesic equations be
flat. The symmetry algebra in this case is $sl(n+2,\R)$ and hence
the number of generators is $n^2+4n+3$. Thus for a system of two
equations to be linearizable by this method it must have 15
generators.
A scalar complex ODE involves two real functions of two real
variables, yielding a system of two partial differential equations
(PDEs) \cite{saj, saj1}. By restricting the independent variable to
be real we obtain a system of ODEs. Complex symmetry analysis (CSA)
provides the symmetry algebra for systems of two ODEs with the help
of the symmetry generators of the corresponding complex ODE. This is
not a simple matter of doubling the generators for the scalar
complex ODE. The inequivalence of these systems with the above
mentioned systems obtained earlier (by geometric means) \cite{mq2},
has been proved \cite{saf2}. \emph{Thus their symmetry structures
are not the same}. We prove that a general two-dimensional system of
second-order ODEs corresponds to a scalar complex second-order ODE
if the coefficients of the system satisfy Cauchy-Riemann equations
(CR-equations). We provide the full symmetry algebra for the systems
of ODEs that correspond to linearizable scalar complex ODEs. For
this purpose we derive a \emph{reduced optimal canonical form} for
linear systems obtainable from a complex linear equation. We prove
that this form provides three equivalence classes of linearizable
systems of two second-order ODEs, while there exist five
linearizable classes \cite{sb} by real symmetry analysis. This
difference arises due to the fact that in CSA we invoke equivalence
of {\it scalar} second-order ODEs to obtain the reduced optimal
form, while in real symmetry analysis equivalence of linear {\it
systems} of two ODEs was used to derive their optimal form. The
nonlinear systems transformable to one of the three equivalence
classes we provide here, are characterized by complex
transformations of the form
\begin{eqnarray*}
T:(x,u(x))\rightarrow (\chi(x),U(x,u)).
\end{eqnarray*}
Indeed, these complex transformations generate these linearizable
classes of two dimensional systems. Note that not all the complex
linearizing transformations for scalar complex equations provide the
corresponding real transformations for systems.
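To fix ideas, here is a brief sketch of the correspondence (standard in CSA; the notation is ours). Consider a scalar complex second-order ODE $u''=w(x,u,u')$ with real independent variable $x$. Writing $u=y+iz$ and $w=w_{1}+iw_{2}$ splits it into the system
\begin{eqnarray*}
y''=w_{1}(x,y,z,y',z'),\qquad z''=w_{2}(x,y,z,y',z'),
\end{eqnarray*}
and the analyticity of $w$ in $u$ and $u'$ imposes the CR-equations on the coefficients,
\begin{eqnarray*}
\frac{\partial w_{1}}{\partial y}=\frac{\partial w_{2}}{\partial z},\quad
\frac{\partial w_{1}}{\partial z}=-\frac{\partial w_{2}}{\partial y},\quad
\frac{\partial w_{1}}{\partial y'}=\frac{\partial w_{2}}{\partial z'},\quad
\frac{\partial w_{1}}{\partial z'}=-\frac{\partial w_{2}}{\partial y'}.
\end{eqnarray*}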
The plan of the paper is as follows. In the next section we present
the preliminaries for determining the symmetry structures. The third
section deals with the conditions derived for systems that can be
obtained by CSA. In section four we obtain the reduced optimal
canonical form for systems associated with complex linear ODEs. The
theory developed to classify linearizable systems of ODEs
transformable to this reduced optimal form is given in the fifth
section. Applications of the theory are given in the next section.
The last section summarizes and discusses the work.
\section{Preliminaries}
The simplest form of a second-order equation has the
maximal-dimensional algebra, $sl(3,\R)$. To discuss the equivalence
of systems of two linear second-order ODEs, we need to use the
following result for the equivalence of a general system of $n$
linear homogeneous second-order ODEs with $2n^{2}+n$ arbitrary
coefficients and some canonical forms that have fewer arbitrary
coefficients \cite{wf}. Any system of $n$ second-order
non-homogeneous linear ODEs
\begin{eqnarray}
\ddot{\textbf{u}}=\textbf{A} \dot{\textbf{u}}+\textbf{B}
\textbf{u}+\textbf{c},\label{1}
\end{eqnarray}
can be mapped invertibly to one of the following forms
\begin{equation}
\ddot{\textbf{v}}=\textbf{C} \dot{\textbf{v}},\label{2}
\end{equation}
\begin{equation}
\ddot{\textbf{w}}=\textbf{D} \textbf{w},\label{3}
\end{equation}
where $\textbf{A}$, $\textbf{B}$, $\textbf{C}$, $\textbf{D}$ are
$n\times n$ matrix functions, $\textbf{u}$, $\textbf{v}$,
$\textbf{w}$, $\textbf{c}$ are vector functions and dot represents
differentiation relative to the independent variable $t$. For a
system of two second-order ODEs ($n=2$) there are a total of $10$
coefficients for the system represented by equation ($\ref{1}$). It
is reducible to the first and second canonical forms, ($\ref{2}$)
and ($\ref{3}$) respectively. Thus a system with $4$ arbitrary
coefficients of the form
\begin{eqnarray}
\ddot{w_{1}}=d_{11}(t)w_{1}+d_{12}(t)w_{2},\nonumber\\
\ddot{w_{2}}=d_{21}(t)w_{1}+d_{22}(t)w_{2},\label{4}
\end{eqnarray}
can be obtained by using the equivalence of ($\ref{1}$) and the
counterpart of the Laguerre-Forsyth second canonical form
($\ref{3}$). This result demonstrates the equivalence of systems of
two ODEs having $10$ and $4$ arbitrary coefficients respectively.
The number of arbitrary coefficients can be further reduced to three
by the change of variables \cite{sb}
\begin{eqnarray}
\tilde{y}=w_{1}/\rho(t),\hspace{2mm}\tilde{z}=w_{2}/\rho(t),
\hspace{2mm}x=\int^{t}\rho^{-2}(s)ds,\label{5}
\end{eqnarray}
where $\rho$ satisfies
\begin{equation}
\rho^{\prime\prime}-\frac{d_{11}+d_{22}}{2}\rho=0,\label{6}
\end{equation}
to the linear system
\begin{eqnarray}
\tilde{y}^{\prime\prime}=\tilde{d}_{11}(x)\tilde{y}+\tilde{d}_{12}(x)\tilde{z},\nonumber\\
\tilde{z}^{\prime\prime}=\tilde{d}_{21}(x)\tilde{y}-\tilde{d}_{11}(x)\tilde{z},\label{7}
\end{eqnarray}
where
\begin{eqnarray}
\tilde{d}_{11}=\frac{\rho^{3}(d_{11}-d_{22})}{2},\hspace{2mm}\tilde{d}_{12}=
\rho^{3}d_{12},\hspace{2mm}\tilde{d}_{21}=\rho^{3}d_{21}.\label{8}
\end{eqnarray}
This procedure of reduction of arbitrary coefficients for
linearizable systems simplifies the classification problem
enormously. System ($\ref{7}$) is called the \emph{optimal canonical
form} for linear systems of two second-order ODEs, as it has the
fewest arbitrary coefficients, namely three.
\section{Systems of ODEs obtainable by CSA}
Following the classical Lie procedure, one uses point
transformations
\begin{equation}
X=X(x,y,z),\hspace{2mm}Y=Y(x,y,z),\hspace{2mm}Z=Z(x,y,z),\label{9}
\end{equation}
to map the general linearizable system of two second-order ODEs
\cite{jm}, which is (at most) cubically semi-linear in both the
dependent variables,
\begin{eqnarray}
y^{^{\prime \prime }}=\omega_{1}(x,y,z,y^{^{\prime }},z^{^{\prime }}),\nonumber \\
z^{^{\prime \prime }}=\omega_{2}(x,y,z,y^{^{\prime }},z^{^{\prime
}}),\label{10}
\end{eqnarray}
where prime denotes differentiation relative to $x$, to the simplest
form
\begin{equation}
Y^{\prime\prime}=0,\hspace{2mm}Z^{\prime\prime}=0,\label{11}
\end{equation}
where the prime now denotes differentiation with respect to $X$ and
the mappings ($\ref{9}$) are invertible. The derivatives transform
as
\begin{eqnarray}
Y^{\prime}=\frac{D_{x}(Y)}{D_{x}(X)}=F_{1}(x,y,z,y^{^{\prime
}},z^{^{\prime }}),\nonumber \\
Z^{\prime}=\frac{D_{x}(Z)}{D_{x}(X)}=F_{2}(x,y,z,y^{^{\prime
}},z^{^{\prime}}),\label{12}
\end{eqnarray}
and
\begin{equation}
Y^{\prime\prime}=\frac{D_{x}(F_{1})}{D_{x}(X)},~~
Z^{\prime\prime}=\frac{D_{x}(F_{2})}{D_{x}(X)},\label{13}
\end{equation}
where $D_{x}=\partial_{x}+y^{\prime}\partial_{y}+z^{\prime}\partial_{z}+y^{\prime\prime}\partial_{y^{\prime}}+z^{\prime\prime}\partial_{z^{\prime}}+\cdots$ is the total derivative operator. This yields
\begin{equation}
\begin{tabular}{l}
$y^{^{\prime \prime }}+\alpha_{11}y^{^{\prime
}3}+\alpha_{12}y^{^{\prime }2}z^{^{\prime }}+\alpha_{13}y^{^{\prime
}}z^{^{\prime }2}+\alpha_{14}z^{^{\prime }3}+\beta_{11}y^{^{\prime
}2}+\beta_{12}y^{^{\prime
}}z^{^{\prime }}+\beta_{13}z^{^{\prime }2}$ \\
$+\gamma_{11}y^{^{\prime }}+\gamma_{12}z^{^{\prime }}+\delta_{1}=0,$ \\
\\
$z^{^{\prime \prime }}+\alpha_{21}y^{^{\prime
}3}+\alpha_{22}y^{^{\prime }2}z^{^{\prime }}+\alpha_{23}y^{^{\prime
}}z^{^{\prime }2}+\alpha_{24}z^{^{\prime }3}+\beta_{21}y^{^{\prime
}2}+\beta_{22}y^{^{\prime
}}z^{^{\prime }}+\beta_{23}z^{^{\prime }2}$ \\
$+\gamma_{21}y^{^{\prime }}+\gamma_{22}z^{^{\prime
}}+\delta_{2}=0,$\label{14}
\end{tabular}
\end{equation}
the coefficients being functions of the independent and dependent
variables. System ($\ref{14}$) is the most general candidate for two
second-order ODEs that may be linearizable. Another candidate for
linearizability of two dimensional systems, obtainable from the
most general form of a complex linearizable equation
\begin{eqnarray}
u^{\prime\prime}+E_{3}(x,u)u^{\prime 3}+E_{2}(x,u)u^{\prime
2}+E_{1}(x,u)u^{\prime}+E_{0}(x,u)=0,\label{15}
\end{eqnarray}
where $u$ is a complex function of the real independent variable
$x$, is also cubically semi-linear i.e. a system of the form
\begin{equation}
\begin{tabular}{l}
$y^{^{\prime \prime }}+\bar \alpha_{11}y^{^{\prime }3}-3\bar
\alpha_{12}y^{^{\prime }2}z^{^{\prime }}-3\bar
\alpha_{11}y^{^{\prime }}z^{^{\prime }2}+\bar \alpha_{12}z^{^{\prime
}3}+\bar \beta_{11}y^{^{\prime }2}-2\bar \beta_{12}y^{^{\prime
}}z^{^{\prime }}-\bar \beta_{11}z^{^{\prime }2}$ \\
$+\bar \gamma_{11}y^{^{\prime }}-\bar \gamma_{12}z^{^{\prime }}+\bar \delta_{11}=0,$ \\
\\
$z^{^{\prime \prime }}+\bar \alpha_{12}y^{^{\prime }3}+3\bar
\alpha_{11}y^{^{\prime }2}z^{^{\prime }}-3\bar
\alpha_{12}y^{^{\prime }}z^{^{\prime }2}-\bar \alpha_{11}z^{^{\prime
}3}+\bar \beta_{12}y^{^{\prime }2}+2\bar \beta_{11}y^{^{\prime
}}z^{^{\prime }}-\bar \beta_{12}z^{^{\prime }2}$ \\
$+\bar \gamma_{12}y^{^{\prime }}+\bar \gamma_{11}z^{^{\prime }}+\bar
\delta_{12}=0,$\label{16}
\end{tabular}
\end{equation}
where the coefficients $\bar \alpha_{1i}$, $\bar \beta_{1i}$, $\bar
\gamma_{1i}$ and $\bar \delta_{1i}$, for $i=1,2$, are functions of
$x$, $y$ and $z$. Clearly, the system ($\ref{16}$) corresponds to
($\ref{15}$) if the coefficients $\bar \alpha_{1i}$, $\bar
\beta_{1i}$, $\bar \gamma_{1i}$ and $\bar \delta_{1i}$ satisfy the
CR-equations, i.e.
$\bar \alpha_{11,y}=\bar \alpha_{12,z}$, $\bar \alpha_{12,y}=-\bar \alpha_{11,z}$,
and similarly for the remaining coefficients. This is evident, since
($\ref{15}$) generates a system on splitting the complex
coefficients $E_{j}$, for $j=0,1,2,3$, into real and imaginary parts
\begin{eqnarray}
E_{3}=\bar \alpha_{11}+i\bar \alpha_{12},~~ E_{2}=\bar
\beta_{11}+i\bar \beta_{12},~~ E_{1}=\bar \gamma_{11}+i\bar
\gamma_{12},~~ E_{0}=\bar \delta_{11}+i\bar \delta_{12},\label{17}
\end{eqnarray}
where all the functions are analytic. Hence we can state the following theorem.\\
\newline \textbf{Theorem 1.} \textit{A general two dimensional system of second-order ODEs} ($\ref{10}$) \textit{corresponds to a complex equation}
\begin{eqnarray}
u^{\prime\prime}=\omega(x,u,u^{\prime}),
\end{eqnarray}
\textit{if and only if $\omega_{1}$ and $\omega_{2}$ satisfy the
CR-equations}
\begin{eqnarray}
\omega_{1,y}=\omega_{2,z},~~\omega_{1,z}=-\omega_{2,y},\nonumber\\
\omega_{1,y^{\prime}}=\omega_{2,z^{\prime}},~~\omega_{1,z^{\prime}}=-\omega_{2,y^{\prime}}.
\end{eqnarray}
\\
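As a quick check of Theorem 1, write $u=y+iz$ and
$\omega=\omega_{1}+i\omega_{2}$, so that $u^{\prime\prime}=\omega$
splits into the system $y^{\prime\prime}=\omega_{1}$,
$z^{\prime\prime}=\omega_{2}$. Analyticity of $\omega$ in $u$ and
$u^{\prime}$ requires
\begin{eqnarray}
\frac{\partial\omega}{\partial\bar{u}}=\frac{1}{2}\left[(\omega_{1,y}-\omega_{2,z})+i(\omega_{2,y}+\omega_{1,z})\right]=0,\nonumber\\
\frac{\partial\omega}{\partial\bar{u}^{\prime}}=\frac{1}{2}\left[(\omega_{1,y^{\prime}}-\omega_{2,z^{\prime}})+i(\omega_{2,y^{\prime}}+\omega_{1,z^{\prime}})\right]=0,\nonumber
\end{eqnarray}
which is precisely the set of CR-equations stated above.\\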
For the correspondence of both the cubic forms ($\ref{14}$) and
($\ref{16}$) of two dimensional systems we state the following
theorem.\\
\newline \textbf{Theorem 2.} \textit{A system of the
form} ($\ref{14}$) \textit{corresponds to} ($\ref{16}$) \textit{if
and only if the coefficients $\alpha_{ij}$, $\beta_{ik}$,
$\gamma_{il}$ and $\delta_{i}$ satisfy the following conditions
\begin{eqnarray}
\alpha_{11}=-\frac{1}{3}\alpha_{13}=\frac{1}{3}\alpha_{22}=-\alpha_{24},\nonumber\\
-\frac{1}{3}\alpha_{12}=\alpha_{14}=\alpha_{21}=-\frac{1}{3}\alpha_{23},\nonumber\\
\beta_{11}=\frac{1}{2}\beta_{22}=-\beta_{13},\nonumber\\
\beta_{21}=-\frac{1}{2}\beta_{12}=-\beta_{23},\nonumber\\
\gamma_{11}=\gamma_{22},\quad \gamma_{21}=-\gamma_{12},\label{18}
\end{eqnarray}
where $i=l=1,2$, $j=1,...,4$ and $k=1,2,3$.}\\
\newline \textbf{Proof.} The result follows by rewriting the above
coefficients as $\bar \alpha_{1i}$, $\bar \beta_{1i}$ and $\bar
\gamma_{1i}$, respectively; these correspond to the complex
coefficients of ($\ref{15}$) if and only if they satisfy the
CR-equations.\\
\newline Thus Theorems 1 and 2 identify those two dimensional systems
which are obtainable from complex equations.
\section{Reduced optimal canonical forms}
The simplest forms for linear systems of two second-order ODEs
corresponding to complex scalar ODEs can be established by invoking
the equivalence of scalar second-order linear ODEs. Consider a
general linear scalar complex second-order ODE
\begin{equation}
u^{\prime\prime}=\zeta_{1}(x)u^{\prime}+\zeta_{2}(x)u+\zeta_{3}(x),\label{21}
\end{equation}
where prime denotes differentiation relative to $x$ and
$u(x)=y(x)+iz(x)$ is a complex function of the real independent
variable $x$. As all the linear scalar second-order ODEs are
equivalent, so equation ($\ref{21}$) is equivalent to the following
scalar second-order complex ODEs
\begin{equation}
u^{\prime\prime}=\zeta_{4}(x)u^{\prime},\label{22}
\end{equation}
\begin{equation}
u^{\prime\prime}=\zeta_{5}(x)u,\label{23}
\end{equation}
where all the three forms ($\ref{21}$), ($\ref{22}$) and
($\ref{23}$) are transformable to each other. Indeed these three
forms are reducible to the free particle equation. These three
complex scalar linear ODEs belong to the same equivalence class,
i.e. all have eight Lie point symmetry generators. In this paper we
prove that the systems obtainable by these forms using CSA have more
than one equivalence class. To extract systems of two linear ODEs
from ($\ref{22}$) and ($\ref{23}$) we put
$\zeta_{4}(x)=\alpha_{1}(x)+i\alpha_{2}(x)$ and
$\zeta_{5}(x)=\alpha_{3}(x)+i\alpha_{4}(x)$ to obtain two linear
forms of system of two linear second-order ODEs
\begin{eqnarray}
y^{\prime\prime}=\alpha_{1}(x)y^{\prime}-\alpha_{2}(x)z^{\prime},\nonumber\\
z^{\prime\prime}=\alpha_{2}(x)y^{\prime}+\alpha_{1}(x)z^{\prime}.\label{24}
\end{eqnarray}
and
\begin{eqnarray}
y^{\prime\prime}=\alpha_{3}(x)y-\alpha_{4}(x)z,\nonumber\\
z^{\prime\prime}=\alpha_{4}(x)y+\alpha_{3}(x)z,\label{25}
\end{eqnarray}
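This splitting can be verified directly: with $u=y+iz$ and
$\zeta_{5}=\alpha_{3}+i\alpha_{4}$,
\begin{eqnarray}
\zeta_{5}u=(\alpha_{3}+i\alpha_{4})(y+iz)=(\alpha_{3}y-\alpha_{4}z)+i(\alpha_{4}y+\alpha_{3}z),\nonumber
\end{eqnarray}
so equating real and imaginary parts of $u^{\prime\prime}=\zeta_{5}u$
yields ($\ref{25}$); the form ($\ref{24}$) arises analogously from
($\ref{22}$).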
Thus we state the following theorem.\\
\newline \textbf{Theorem 3.}
\textit{If a system of two second-order ODEs is linearizable via
invertible complex point transformations then it can be mapped to
one of the two forms} ($\ref{24}$) \textit{or} ($\ref{25}$).
Notice that here we have \emph{only two arbitrary coefficients} in
both the linear forms, while the minimum number obtained before was
three i.e. a system of the form ($\ref{7}$). The reason we can
reduce further is that we are dealing with the special classes of
linear systems of ODEs that correspond to the scalar complex ODEs.
In fact ($\ref{25}$) can be reduced further by the change of
variables
\begin{eqnarray}
Y=y/\rho(t),\hspace{2mm}Z=z/\rho(t),\hspace{2mm}
x=\int^{t}\rho^{-2}(s)ds,\label{26}
\end{eqnarray}
where $\rho$ satisfies
\begin{equation}
\rho^{\prime\prime}-\alpha_{3}\rho=0,\label{27}
\end{equation}
to
\begin{eqnarray}
Y^{\prime\prime}=-\beta(x)Z,~~~~
Z^{\prime\prime}=\beta(x)Y,\label{28}
\end{eqnarray}
where $\beta=\rho^{3}\alpha_{4}$. We state this result in the form
of a theorem.\\
\newline \textbf{Theorem 4.} \textit{Any linear
system of two second-order ODEs of the form} ($\ref{25}$)
\textit{with two arbitrary coefficients is transformable to a
simplest system of two linear ODEs} ($\ref{28}$) \textit{with one
arbitrary coefficient via real point transformations} ($\ref{26}$)
\textit{and} ($\ref{27}$).
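Note that, in complex notation, the system ($\ref{28}$) is simply the
single scalar equation $U^{\prime\prime}=i\beta(x)U$ for $U=Y+iZ$,
since
\begin{eqnarray}
U^{\prime\prime}=Y^{\prime\prime}+iZ^{\prime\prime}=-\beta Z+i\beta Y=i\beta(Y+iZ),\nonumber
\end{eqnarray}
i.e. it is of the form ($\ref{23}$) with a purely imaginary
coefficient $\zeta_{5}=i\beta$.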
Equation ($\ref{28}$) is the \emph{reduced optimal canonical form}
for systems associated with complex ODEs, with just one coefficient
which is an arbitrary function of $x$. The equivalence of systems
($\ref{24}$) and ($\ref{25}$) can be established via invertible
point transformations, so we state the following theorem.\\
\newline
\textbf{Theorem 5.} \textit{Two linear forms of the systems of two
second-order ODEs} ($\ref{24}$) \textit{and} ($\ref{25}$)
\textit{are equivalent via invertible point transformations}
\begin{eqnarray}
y=M_{1}(x)y_{1}-M_{2}(x)y_{2}+y^{*},\nonumber\\
z=M_{1}(x)y_{2}+M_{2}(x)y_{1}+z^{*},\label{29}
\end{eqnarray}
\textit{of the dependent variables only, where} $M_{1}(x)$,
$M_{2}(x)$ \textit{are two linearly independent solutions of}
\begin{eqnarray}
\alpha_{1}M_{1}-\alpha_{2}M_{2}=2M^{\prime}_{1},\nonumber\\
\alpha_{1}M_{2}+\alpha_{2}M_{1}=2M^{\prime}_{2},\label{30}
\end{eqnarray}
\textit{and} $y^{*}$, $z^{*}$ \textit{are the particular solutions
of} ($\ref{24}$).\\
\newline {\bf Proof.} Differentiating the
set of equations ($\ref{30}$) and using the result in the linear
form ($\ref{24}$), routine calculations show that ($\ref{24}$) can
be mapped to ($\ref{25}$), where
\begin{eqnarray}
\alpha_{3}(x)=\frac{1}{M^{2}_{1}+M^{2}_{2}}[M_{1}(\alpha_{1}M^{\prime}_{1}-\alpha_{2}M^{\prime}_{2}-M^{\prime\prime}_{1})+M_{2}(\alpha_{1}M^{\prime}_{2}+\alpha_{2}M^{\prime}_{1}-M^{\prime\prime}_{2})],\nonumber\\
\alpha_{4}(x)=\frac{1}{M^{2}_{1}+M^{2}_{2}}[M_{1}(\alpha_{1}M^{\prime}_{2}+\alpha_{2}M^{\prime}_{1}-M^{\prime\prime}_{2})-M_{2}(\alpha_{1}M^{\prime}_{1}-\alpha_{2}M^{\prime}_{2}-M^{\prime\prime}_{1})].\nonumber\\
\label{31}
\end{eqnarray}
Thus the linear form ($\ref{24}$) is reducible to
($\ref{28}$).\\
\newline \textbf{Remark 1.} Any nonlinear system of
two second-order ODEs that is linearizable by complex methods can be
mapped invertibly to a system of the form ($\ref{28}$) with one
coefficient which is an arbitrary function of the independent
variable.
\section{Symmetry structure of linear systems obtained by CSA}
To use the reduced canonical form \cite{hs} for deriving the
symmetry structure of linearizable systems associated with the
complex scalar linearizable ODEs, we obtain a system of PDEs whose
solution provides the symmetry generators for the corresponding
linearizable systems of
two second-order ODEs.\\
\newline \textbf{Theorem 6.} \textit{Linearizable systems of two second-order ODEs reducible to the
linear form} ($\ref{28}$) \textit{via invertible complex point
transformations, have} $6$, $7$ \textit{or
$15$-dimensional Lie point symmetry algebras.}\\
\newline
\textbf{Proof.} The symmetry conditions provide the following set of
PDEs for the system ($\ref{28}$)
\begin{eqnarray}
\xi_{yy}=\xi_{yz}=\xi_{zz}=0=\eta_{1,zz}=\eta_{2,yy},\label{32}\\
\eta_{1,yy}-2\xi_{xy}=\eta_{1,yz}-\xi_{xz}=\eta_{2,yz}-\xi_{xy}=
\eta_{2,zz}-2\xi_{xz}=0,\label{33}\\
\xi_{xx}-2\eta_{1,xy}-3\beta(x)\xi_{,y}z+\beta(x)\xi_{,z}y
=\eta_{1,xz}+\beta(x)\xi_{,z}z=0,\label{34}\\
\xi_{xx}-2\eta_{2,xz}+3\beta(x)\xi_{,z}y-\beta(x)\xi_{,y}z=
\eta_{2,xy}-\beta(x)\xi_{,y}y=0,\label{35}\\
\eta_{1,xx}+\beta(x)(\eta_{1,z}y+2\xi_{,x}z-\eta_{1,y}z+\eta_{2})
+\beta^{\prime}(x)z\xi=0,\label{36}\\
\eta_{2,xx}+\beta(x)(\eta_{2,z}y-2\xi_{,x}y-\eta_{2,y}z-\eta_{1})
-\beta^{\prime}(x)y\xi=0.\label{37}
\end{eqnarray}
Equations ($\ref{34}$)-($\ref{37}$) involve an arbitrary function of
the independent variable and its first derivatives. Using equations
($\ref{32}$) and ($\ref{33}$) we have the following solution set
\begin{eqnarray}
\xi=\gamma_{1}(x)y+\gamma_{2}(x)z+\gamma_{3}(x),\nonumber\\
\eta_{1}=\gamma^{\prime}_{1}(x)y^{2}+\gamma^{\prime}_{2}(x)yz+
\gamma_{4}(x)y+\gamma_{5}(x)z+\gamma_{6}(x),\nonumber\\
\eta_{2}=\gamma^{\prime}_{1}(x)yz+\gamma^{\prime}_{2}(x)z^{2}+
\gamma_{7}(x)y+\gamma_{8}(x)z+\gamma_{9}(x).\label{38}
\end{eqnarray}
Using equations ($\ref{34}$) and ($\ref{35}$), we get
\begin{eqnarray}
\beta(x)\gamma_{1}(x)=0=\beta(x)\gamma_{2}(x).\label{39}
\end{eqnarray}
Now, taking $\beta(x)$ to be zero, a non-zero constant, or an arbitrary function of $x$ generates the following cases.\\
\newline\textbf{Case 1.1.} $\beta(x)=0$.\\
The set of determining equations ($\ref{32}$)-($\ref{37}$) will
reduce to a trivial system of PDEs
\begin{eqnarray}
\eta_{1,xx}=\eta_{1,xz}=\eta_{1,zz}=0,\nonumber\\
\eta_{2,xx}=\eta_{2,xy}=\eta_{2,yy}=0,\nonumber\\
2\xi_{,xy}-\eta_{1,yy}=0=2\xi_{,xz}-\eta_{2,zz},\nonumber\\
\xi_{,xz}-\eta_{1,yz}=0=\xi_{,xy}-\eta_{2,yz},\nonumber\\
\xi_{,xx}-2\eta_{1,xy}=0=\xi_{,xx}-2\eta_{2,xz},\label{40}
\end{eqnarray}
which can be extracted classically for the system of free particle
equations. Solving it we find a $15$-dimensional Lie point symmetry algebra.\\
\newline\textbf{Case 1.2.} $\beta(x)\neq0$.\\
Then ($\ref{39}$) implies $\gamma_{1}(x)=\gamma_{2}(x)=0$ and
($\ref{38}$) reduces to
\begin{eqnarray}
\xi=\gamma_{3}(x),\nonumber\\
\eta_{1}=(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{3})y+c_{1}z+
\gamma_{6}(x),\nonumber\\
\eta_{2}=c_{2}y+(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{4})z+
\gamma_{9}(x).\label{41}
\end{eqnarray}
Here two subcases arise.\\
\newline\textbf{Case 1.2.1.} $\beta(x)$ \emph{is a non-zero constant}.\\
As equations ($\ref{36}$) and ($\ref{37}$) involve the derivatives
of $\beta(x)$, which will now be zero, equations
($\ref{34}$)-($\ref{37}$) and ($\ref{41}$) yield a $7$-dimensional
Lie algebra. The explicit expressions of the symmetry generators
involve trigonometric functions. But for a simple demonstration of
the algorithm consider $\beta(x)=1$. The solution of the set of the
determining equations is
\begin{equation}
\xi=C_{1},\nonumber
\end{equation}
and
\begin{eqnarray}
\eta_{1}=C_{2}y+[-C_{4}e^{x/\sqrt{2}}-C_{3}e^{-x/\sqrt{2}}]
\sin({x/\sqrt{2}})+C_{6}e^{x/\sqrt{2}}\cos({x/\sqrt{2}})+
\nonumber\\
C_{5}e^{-x/\sqrt{2}}\cos({x/\sqrt{2}})+C_{7}z,\nonumber\\
\eta_{2}=[-C_{6}e^{x/\sqrt{2}}+C_{5}e^{-x/\sqrt{2}}]
\sin({x/\sqrt{2}})-C_{4}e^{x/\sqrt{2}}\cos({x/\sqrt{2}})-C_{2}z+
\nonumber\\
C_{3}e^{-x/\sqrt{2}}\cos({x/\sqrt{2}})+C_{7}y.\label{42}
\end{eqnarray}
This yields a $7$-dimensional symmetry algebra.\\
\newline\textbf{Case 1.2.2.1.} $\beta(x)=x^{-2}, x^{-4}$ or $(x+1)^{-4}$.\\
Equations ($\ref{34}$)-($\ref{37}$) and ($\ref{41}$) yield a
$7$-dimensional Lie algebra. Thus the $7$-dimensional algebras can
be related with systems which have variable coefficients in their
linear forms, apart from the linear forms with constant coefficients.\\
\newline\textbf{Case 1.2.2.2.} $\beta(x)=x^{-1}, x^{2}$, $x^{2}\pm C_{0}$ or $e^{x}$.\\
Using equations ($\ref{34}$)-($\ref{37}$) and ($\ref{41}$), we
arrive at a $6$-dimensional Lie point symmetry algebra. The explicit
expressions involve special functions, e.g. for $\beta(x)=x^{-1}$,
$x^{2}$, $x^{2}\pm C_{0}$ we get Bessel functions. Similarly for
$\beta(x)=e^{x}$ there are six symmetries, including the generators
$y\partial_{y}-e^{x}z\partial_{z}$,
$z\partial_{z}+e^{x}y\partial_{y}$. The remaining four generators
come from the solution of an ODE of order four.\\
Thus there is only a $6$, $7$ or $15$-dimensional algebra for
linearizable systems of two second-order ODEs transformable to
($\ref{28}$) via invertible complex point transformations. We are
not investigating the remaining two linear forms ($\ref{24}$) and
($\ref{25}$), because these are transformable to system ($\ref{28}$)
i.e. all these forms have the same symmetry structures. The linear
forms providing $6$- or $7$-dimensional algebras here are also
obtainable as linear forms extractable from ($\ref{7}$), with a $6$-
or $7$-dimensional algebra respectively. Consider ($\ref{7}$) with all the
coefficients to be non-zero constants i.e.
$\tilde{d}_{11}(x)=a_{0}$, $\tilde{d}_{12}(x)=b_{0}$ and
$\tilde{d}_{21}(x)=c_{0}$, where
\begin{eqnarray}
a_{0}^{2}+b_{0}c_{0}\neq 0.\label{43}
\end{eqnarray}
This system provides seven symmetry generators. The linear form
($\ref{28}$) also provides a $7$-dimensional algebra with constant
coefficients satisfying ($\ref{43}$), while the $8$-dimensional
symmetry algebra was extracted \cite{sb} by assuming
\begin{eqnarray}
a_{0}^{2}+b_{0}c_{0}=0.\label{44}
\end{eqnarray}
Such linear forms cannot be obtained from ($\ref{28}$). These two
examples explain why a $7$-dimensional algebra can be obtained from
($\ref{28}$), but a linear form with an $8$-dimensional algebra is
not obtainable from
it.\\
\newline To prove these observations consider arbitrary point
transformations of the form
\begin{eqnarray}
\tilde{y}=a(x)y+b(x)z,~~~\tilde{z}=c(x)y+d(x)z.\label{45}
\end{eqnarray}
\newline\textbf{Case a.} If $a(x)=a_{0}$, $b(x)=b_{0}$, $c(x)=c_{0}$ and
$d(x)=d_{0}$ are constants then ($\ref{45}$) implies
\begin{eqnarray}
\tilde{y}^{\prime\prime}=a_{0}y^{\prime\prime}+b_{0}z^{\prime\prime},\nonumber\\
\tilde{z}^{\prime\prime}=c_{0}y^{\prime\prime}+d_{0}z^{\prime\prime}.\label{46}
\end{eqnarray}
Using ($\ref{7}$) and ($\ref{25}$) in the above equation we find
\begin{eqnarray}
(a_{0}d_{0}-b_{0}c_{0})y^{\prime\prime}=((a_{0}d_{0}+b_{0}c_{0})\tilde{d}_{11}(x)+c_{0}d_{0}\tilde{d}_{12}(x)-a_{0}b_{0}\tilde{d}_{21}(x))y+\nonumber\\
(2b_{0}d_{0}\tilde{d}_{11}(x)+d^{2}_{0}\tilde{d}_{12}(x)-b^{2}_{0}\tilde{d}_{21}(x))z,\nonumber\\
(a_{0}d_{0}-b_{0}c_{0})z^{\prime\prime}=((a_{0}d_{0}+b_{0}c_{0})\tilde{d}_{11}(x)+c_{0}d_{0}\tilde{d}_{12}(x)-a_{0}b_{0}\tilde{d}_{21}(x))z+\nonumber \\
(2a_{0}c_{0}\tilde{d}_{11}(x)+c^{2}_{0}\tilde{d}_{12}(x)-a^{2}_{0}\tilde{d}_{21}(x))y,\label{47}
\end{eqnarray}
where $a_{0}d_{0}-b_{0}c_{0}\neq0$. Using ($\ref{25}$), ($\ref{47}$)
and the linear independence of the $\tilde{d}$'s gives
\begin{eqnarray}
a_{0}b_{0}=c_{0}d_{0}=0,\nonumber\\
a^{2}_{0}-b^{2}_{0}=c^{2}_{0}-d^{2}_{0}=0,\nonumber\\
a_{0}d_{0}+b_{0}c_{0}=a_{0}c_{0}-b_{0}d_{0}=0,\label{48}
\end{eqnarray}
whose only solution is $a_{0}=b_{0}=c_{0}=d_{0}=0$; this is
inconsistent with ($\ref{47}$), since the requirement was
$a_{0}d_{0}-b_{0}c_{0}\neq 0$.\\
\newline \textbf{Case b.} If $a(x)$,
$b(x)$, $c(x)$ and $d(x)$ are arbitrary functions of $x$ then
\begin{eqnarray}
\tilde{y}^{\prime\prime}=a(x)y^{\prime\prime}+b(x)z^{\prime\prime}+a^{\prime\prime}(x)y+b^{\prime\prime}(x)z+2a^{\prime}(x)y^{\prime}+2b^{\prime}(x)z^{\prime},\nonumber\\
\tilde{z}^{\prime\prime}=c(x)y^{\prime\prime}+d(x)z^{\prime\prime}+c^{\prime\prime}(x)y+d^{\prime\prime}(x)z+2c^{\prime}(x)y^{\prime}+2d^{\prime}(x)z^{\prime}.\label{49}
\end{eqnarray}
Thus we obtain
\begin{eqnarray}
(ad-bc)y^{\prime\prime}=[(ad+bc)\tilde{d}_{11}+cd\tilde{d}_{12}-ab\tilde{d}_{21}-a^{\prime\prime}d+c^{\prime\prime}b]y+(2bd\tilde{d}_{11}+\nonumber\\
d^{2}\tilde{d}_{12}-b^{2}\tilde{d}_{21}-b^{\prime\prime}d+d^{\prime\prime}b)z-2d(a^{\prime}y^{\prime}+b^{\prime}z^{\prime})+2b(c^{\prime}y^{\prime}+d^{\prime}z^{\prime}),\label{50}\\
(ad-bc)z^{\prime\prime}=(2ac\tilde{d}_{11}+c^{2}\tilde{d}_{12}-a^{2}\tilde{d}_{21}-a^{\prime\prime}c+c^{\prime\prime}a)y+[(ad+bc)\tilde{d}_{11}+\nonumber\\
cd\tilde{d}_{12}-ab\tilde{d}_{21}-b^{\prime\prime}c+d^{\prime\prime}a]z-2c(a^{\prime}y^{\prime}+b^{\prime}z^{\prime})+2a(c^{\prime}y^{\prime}+d^{\prime}z^{\prime}).\label{51}
\end{eqnarray}
Comparing the coefficients as before and using the linear
independence of $\tilde{d}$'s we obtain
\begin{eqnarray}
a^{\prime}(x)=b^{\prime}(x)=c^{\prime}(x)=d^{\prime}(x)=0,\label{52}
\end{eqnarray}
which implies that the system reduces to the form ($\ref{47}$),
leaving us again with the same result. Thus we have the following
theorem.\\
\newline \textbf{Theorem 7.} \textit{The linear forms for
systems of two second-order ODEs obtainable by CSA are in general
inequivalent to those linear forms obtained by real symmetry
analysis.}
Before presenting some illustrative applications of the theory
developed we refine Theorem 6 by using Theorem 7 to make the
following remark.\\
\newline \textbf{Remark 2.} There are \emph{only}
$6$, $7$ or $15$-dimensional algebras for linearizable systems
obtainable by scalar complex linearizable ODEs, i.e. there are no
$5$ or $8$-dimensional Lie point symmetry algebras for such systems.
\section{Applications}
Consider a system of non-homogeneous geodesic-type differential
equations
\begin{eqnarray}
y^{\prime\prime}+y^{\prime 2}-z^{\prime 2}=\Omega_{1}(x,y,z,y^{\prime},z^{\prime}),\nonumber \\
z^{\prime\prime}+2y^{\prime}z^{\prime}=\Omega_{2}(x,y,z,y^{\prime},z^{\prime}).\label{53}
\end{eqnarray}
where $\Omega_{1}$ and $\Omega_{2}$ are linear functions of the
dependent variables and their derivatives. This system corresponds
to a complex scalar equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}= \Omega(x,u,u^{\prime}),\label{54}
\end{eqnarray}
which is either transformable to the free particle equation or one
of the linear forms ($\ref{21}$)-($\ref{23}$), by means of the
complex transformations
\begin{eqnarray}
\chi=\chi(x), ~U(\chi)=e^{u}.\label{55}
\end{eqnarray}
These are further transformable to the free particle equation by
utilizing another set of invertible complex point transformations.
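The role of the substitution $U=e^{u}$ in ($\ref{55}$) is easily
seen: taking $\chi=x$ for simplicity,
\begin{eqnarray}
U^{\prime}=u^{\prime}e^{u},\qquad U^{\prime\prime}=(u^{\prime\prime}+u^{\prime 2})e^{u},\nonumber
\end{eqnarray}
so the quadratic term in ($\ref{54}$) is absorbed and
$u^{\prime\prime}+u^{\prime 2}=\Omega$ becomes
$U^{\prime\prime}=e^{u}\Omega$.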
Generally, the system ($\ref{53}$) is transformable to a system of
the free particle equations or a linear system of the form
\begin{eqnarray}
Y^{\prime\prime}=\widetilde{\Omega}_{1}(\chi, Y, Z, Y^{\prime}, Z^{\prime})-\widetilde{\Omega}_{2}(\chi, Y, Z, Y^{\prime}, Z^{\prime}),\nonumber \\
Z^{\prime\prime}=\widetilde{\Omega}_{2}(\chi, Y, Z, Y^{\prime},
Z^{\prime})+\widetilde{\Omega}_{1}(\chi, Y, Z, Y^{\prime},
Z^{\prime}).\label{56}
\end{eqnarray}
Here $\widetilde{\Omega}_{1}$ and $\widetilde{\Omega}_{2}$ are
linear functions of the dependent variables and their derivatives;
the map is an invertible change of variables obtainable from ($\ref{55}$).
The linear form ($\ref{56}$) can be mapped to a maximally symmetric
system if and only if there exist some invertible complex
transformations of the form ($\ref{55}$), otherwise these forms can
not be reduced further. This is the reason why we obtain three
equivalence classes, namely with $6$-, $7$- and $15$-dimensional
algebras, for systems corresponding to linearizable complex
equations, which themselves form a single equivalence class. We
first consider an example of a
nonlinear system that admits a $15$-dimensional algebra which can
mapped to the free particle system using ($\ref{55}$). Then we
consider four applications to nonlinear systems of quadratically
semi-linear ODEs transformable to ($\ref{56}$) via ($\ref{55}$) that
are not further
reducible to the free particle system.\\
\newline \textbf{1.} Consider ($\ref{53}$) with
\begin{eqnarray}
\Omega_{1}=-\frac{2}{x}y^{\prime}, \nonumber\\
\Omega_{2}=-\frac{2}{x}z^{\prime},\label{58}
\end{eqnarray}
which admits a $15$-dimensional algebra. The real linearizing
transformations
\begin{eqnarray}
\chi(x)=\frac{1}{x},~Y=e^{y}\cos(z),~Z=e^{y}\sin(z),\label{57}
\end{eqnarray}
obtainable from the complex transformations ($\ref{55}$) with
$U(\chi)=Y(\chi)+iZ(\chi)$, map the above nonlinear system to
$Y^{\prime\prime}=0$, $Z^{\prime\prime}=0$. Moreover, the solution
of ($\ref{53}$) with ($\ref{58}$) corresponds to the solution of the
associated complex equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}+\frac{2}{x}u^{\prime}=0.\label{59}
\end{eqnarray}
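This linearization can be verified explicitly: with $U=e^{u}$ and
$\chi=1/x$, so that $d\chi/dx=-x^{-2}$, a direct computation gives
\begin{eqnarray}
\frac{dU}{d\chi}=-x^{2}u^{\prime}e^{u},\qquad
\frac{d^{2}U}{d\chi^{2}}=x^{4}e^{u}\Big(u^{\prime\prime}+u^{\prime 2}+\frac{2}{x}u^{\prime}\Big),\nonumber
\end{eqnarray}
so ($\ref{59}$) is equivalent to $U^{\prime\prime}(\chi)=0$, in
agreement with the transformations ($\ref{57}$).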
\newline
\textbf{2.} Now consider $\Omega_{1}$ and $\Omega_{2}$ to be linear
functions of the first derivatives $y^{\prime},~z^{\prime}$, i.e.,
system ($\ref{53}$) with
\begin{eqnarray}
\Omega_{1}=c_{1}y^{\prime}-c_{2}z^{\prime},\nonumber\\
\Omega_{2}=c_{2}y^{\prime}+c_{1}z^{\prime},\label{60}
\end{eqnarray}
which admits a $7$-dimensional algebra, provided $c_{1}$ and
$c_{2}$ are not both zero. It is associated with the
complex equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}-cu^{\prime}=0.
\end{eqnarray}
The transformations ($\ref{55}$) generate the real
transformations
\begin{eqnarray}
\chi(x)= x,~Y=e^{y}\cos(z),~Z=e^{y}\sin(z),\label{61}
\end{eqnarray}
which map the nonlinear system to a linear system of the form
($\ref{24}$), i.e.,
\begin{eqnarray}
Y^{\prime\prime}=c_{1}Y^{\prime}-c_{2}Z^{\prime},\nonumber\\
Z^{\prime\prime}=c_{2}Y^{\prime}+c_{1}Z^{\prime},\label{62}
\end{eqnarray}
which also has a $7$-dimensional symmetry algebra and corresponds to
\begin{eqnarray}
U^{\prime\prime}-cU^{\prime}=0.\label{66}
\end{eqnarray}
All linear second-order ODEs are transformable to the free
particle equation; thus we can invertibly transform the above
equation to $\widetilde{U}^{\prime\prime}=0$, using
\begin{eqnarray}
(\chi(x), U)\rightarrow(\widetilde{\chi}=\alpha+\beta
e^{c\chi(x)},\widetilde{U}=U),
\end{eqnarray}
where $\alpha$, $\beta$ and $c$ are complex. But these complex
transformations cannot generate real transformations to reduce the
corresponding system ($\ref{62}$) to a maximally symmetric system.\\
\newline \textbf{3.} A system with a $6$-dimensional Lie algebra is obtainable from ($\ref{53}$) by introducing a linear
function of $x$ in the above coefficients, i.e.,
\begin{eqnarray}
\Omega_{1}=(1+x)(c_{1}y^{\prime}-c_{2}z^{\prime}),\nonumber\\
\Omega_{2}=(1+x)(c_{2}y^{\prime}+c_{1}z^{\prime}),\label{63}
\end{eqnarray}
Then the same transformations ($\ref{61}$) convert
the above system into the linear system
\begin{eqnarray}
Y^{\prime\prime}=(1+\chi) \left (c_{1}Y^{\prime}-c_{2}Z^{\prime}
\right ),\nonumber\\
Z^{\prime\prime}=(1+\chi) \left
(c_{2}Y^{\prime}+c_{1}Z^{\prime} \right ),\label{64}
\end{eqnarray}
where the nonlinear system defined by ($\ref{63}$) and the linear
system ($\ref{64}$) both have six-dimensional symmetry algebras.
Again, the above system is a special case of the linear system
($\ref{24}$).\\
\newline \textbf{4.} If we choose $\Omega_{1}=c_{1},~
\Omega_{2}=c_{2}$, where $c_{i}$ $(i=1,2)$ are non-zero constants,
then under the same real transformations ($\ref{61}$), the nonlinear
system ($\ref{53}$) takes the form
\begin{eqnarray}
Y^{\prime\prime}=c_{1}Y-c_{2}Z,\nonumber\\
Z^{\prime\prime}=c_{2}Y+c_{1}Z.\label{65}
\end{eqnarray}
which is a special case of the linear form ($\ref{25}$).
\section{Conclusion}
The classification of linearizable systems of two second-order ODEs
was obtained by using the equivalence properties of systems of two
linear second-order ODEs \cite{sb}. The ``optimal canonical form" of
the corresponding linear systems of two second-order ODEs, to which
a linearizable system could be mapped, is crucial. This canonical
form used invertible transformations, the invertibility of these
mappings ensuring that the symmetry structure is preserved. That
optimal canonical form of the linear systems of two second-order
ODEs led to five linearizable classes with respect to Lie point
symmetry algebras with dimensions $5$, $6$, $7$, $8$ and $15$.
Systems of two second-order ODEs appearing in CSA correspond to some
scalar complex second-order ODE. We proved the existence of a
reduced optimal canonical form for such linear systems of two ODEs.
This reduced canonical form provided three equivalence classes,
namely with $6$, $7$ or $15$-dimensional point symmetry algebras.
Two cases are eliminated in the theory of complex symmetries: those
of $5$ and $8$-dimensional algebras. The systems corresponding to a
complex linearized scalar ODE involve one parameter which can only
cover {\it three} possibilities: (a) it is zero; (b) it is a
non-zero constant; and (c) it is a non-constant function. The
non-existence of $5$- and $8$-dimensional algebras for the linear forms
appearing due to CSA has been proved by showing that these forms are
not equivalent to those provided by the real symmetry approach for
systems \cite{sb} with $5$ and $8$ generators.
Work is in progress \cite{saf3} to find complex methods of solving a
class of 2-dimensional \emph{nonlinearizable} systems of
second-order ODEs. This class is also obtainable from the linearizable
scalar complex second-order ODEs, which are transformable to the
free particle equation via an invertible change of the dependent and
independent variables of the form
\begin{eqnarray}
\chi=\chi(x,u), ~U(\chi)=U(x,u).
\end{eqnarray}
Notice that these transformations are different from ($\ref{55}$).
The real transformations corresponding to the complex
transformations above cannot be used to linearize the real system.
But the linearizability of the complex scalar equations can be used
to provide solutions for the corresponding systems.\\
\newline\textbf{Acknowledgements}\newline The authors are grateful to
Fazal Mahomed for useful comments and discussion on this work. MS is
most grateful to NUST for providing financial support.
\bc{\bf REFERENCES}\ec \vspace{-1.8cm}
\renewcommand{\refname}{}
\section{Introduction}
\label{sec:introduction}
We began this investigation desiring to understand the relationship between prior belief and the resulting uncertainty in predictions obtained from inference
in the hope that new insights would provide a sound basis to improve prediction credibility in machine learning.
The mathematical and epistemological foundations of rational belief, from which the laws of probability and Bayesian inference are derived as an extended logic from binary propositional logic \citep{Cox1946},
lead us to assert the central role of Bayesian inference in obtaining rigorous justification for uncertainty in predictions.
Although this foundation of reason holds generally, it is critical when we need to learn robust predictions from limited datasets.
Yet, applying Bayesian inference within the machine learning context requires addressing a fundamental challenge: inference requires prior belief.
When the amount of evidence contained within a dataset regarding a phenomenon of interest is extremely limited, specifying prior belief is not merely an inconvenience;
it is the dominant source of uncertainty in predictions.
Examples of such data limitations include having few observations, noisy measurements, skewed or highly imbalanced labels of interest, or even a degree of mislabeling in the data.
When predictive models integrate well-understood physical principles, they are often accompanied by physically plausible parameter ranges that provide a strong basis for prior belief.
Likewise, canonical priors are acceptable for simple approximations with relatively few unconstrained parameters in comparison to the size of the dataset intended for inference.
\citet{Kass1996} give a thorough survey of related work.
In contrast, the machine learning paradigm seeks to instrument arbitrary algorithms with high parameter dimensionality.
A typical architecture may have tens of thousands, or perhaps millions, of free parameters.
In this setting, the sensitivity of predictions to an arbitrary choice of prior belief may be unacceptable for applications of consequence \citep{Owhadi2015}.
\subsection{Our Contributions}
Expanding on the work of \citet{Solomonoff1964a,Solomonoff1964b,Solomonoff2009}, \citet{Kolmogorov1965}, \citet{Rissanen1983,Rissanen1984}, and \citet{Hutter2007},
we develop a theoretical framework that assigns plausibility to arbitrary inference architectures.
Just as Solomonoff derives algorithmic probability from program length, \tAdd{the minimum} number of bits needed to encode a program for a specified \tAdd{Universal Turing Machine (UTM)},
we show how a modest generalization yields a universal hyperprior over symbolic encodings of ordinary priors.
We may regard an ordinary prior, that which is typically used in Bayesian inference, as a restricted state of belief from a general universe of potential explanatory models.
Within our framework, every choice of computational architecture, and associated prior over model parameters, is just a restriction of prior belief.
Our hyperprior provides a means to measure and control the complexity of such choices.
We show how our theory of information \citep{Duersch2020}, \Cref{thm:info}, allows us to derive a training objective from the information that is created when we select a prior representation, observe the training data,
and either infer the posterior distribution or construct a variational approximation of it.
\citet{Zhang2018} provide a thorough survey \tAdd{of recent work on variational inference}.
Our main result, \Cref{thm:parsimony_optimization},
clarifies how we may understand learning as an information optimization problem.
Our parsimony objective separates into three components:
\begin{itemize}
\item Encoding information contained within a symbolic description of prior belief;
\item Model information gained through inference using evidence;
\item Predictive information gained regarding the observed labels from plausible models.
\end{itemize}
In our derivation, the first two terms appear with negative signs and the third with a positive sign, revealing how our theory suppresses complexity as an intrinsic tradeoff against increased agreement with observed labels.
The first component guards against excessive complexity in our description of prior belief and the second guards against priors that are poorly suited to our data.
In contrast, the third component promotes agreement between resulting predictions and the data.
\tAdd{The main distinction between the second and third components is that model information is measured in the space of explanations, whereas predictive information measures the result of applying plausible models to our data.}
\uAdd{We review work on universal priors over integers, corresponding binary representations, and show how a simple integer encoding approximates the scaling invariance of Jeffreys prior.}
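As one concrete possibility for such an encoding (our illustration; the normalizing constant and exact code are assumptions, not the paper's construction), Rissanen's iterated-logarithm code length yields a proper prior over positive integers whose roughly $1/n$ decay mimics the scale invariance of the Jeffreys prior:

```python
import math

def log_star(n):
    """Rissanen-style universal code length, in bits, for integer n >= 1.

    Sums the positive terms of log2(n), log2(log2(n)), ... plus a
    normalizing constant; the induced prior P(n) = 2**(-log_star(n))
    is proper and decays slightly faster than 1/n.
    """
    length = math.log2(2.865064)    # normalizing constant (assumed value)
    term = math.log2(n)
    while term > 0.0:
        length += term
        term = math.log2(term)
    return length

# Longer codes for larger integers: description length grows with magnitude.
lengths = [log_star(n) for n in (1, 2, 16, 65536)]
```

Larger integers always cost more bits, so the induced prior intrinsically favors simpler (smaller) choices.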
We then demonstrate this theory with two learning prototypes.
Our first algorithm casts polynomial regression within this framework, predicting a distribution over continuous outcomes from a continuous input.
By setting the maximum polynomial degree to be much higher than the data merits, standard machine learning training strategies are susceptible to memorization,
as we demonstrate by applying gradient based training with leave-one-out cross-validation.
In contrast, our prototype discovers much simpler models from the same high-degree basis.
Moreover, when we aggregate predictions over an ensemble of polynomial representations,
our prototype demonstrates the natural increase in uncertainty we intuitively associate with extrapolation.
Our second algorithm samples ensembles of decision trees that are constructed using the parsimonious inference objective.
These models aim to predict discrete labels through a sequence of partitions on continuous feature coordinates.
Our random forest prototype demonstrates the ability to learn credible prediction uncertainty from extremely small and heavily skewed datasets,
which we contrast with a standard decision tree model and bootstrap aggregation.
Both of these algorithms achieve superior prediction uncertainty through Bayesian inference from
prior belief that is derived to both quantify and naturally suppress complexity over arbitrary explanations.
\tAdd{Although our basic hyperprior is subject to a choice of interpreter---which need not be Turing-complete, but must transform valid codes into coherent probability distributions over predictive models---we go on to show that to be consistent with this theory, there exists a unique hyperprior over an ensemble of Turing-complete interpreters, \Cref{thm:utm_ensemble}.}
\subsection{Organization}
\Cref{sec:background} begins with a discussion and illustration of the severe inadequacies of traditional machine learning training approaches that depend on cross-validation.
We then briefly review the critical connections between scientific principles, rational belief, and Bayesian inference,
which provide a sound theory to obtain rigorously justified uncertainty in predictions.
When placed in the machine learning context, however, we explore how principled justification for prior belief over abstract models, as well as our unavoidable disregard for an infinite number of alternative models, remains a critical challenge.
Further, we summarize how our theory of information is derived to satisfy key properties that allow us to relate the various forms of complexity that follow in the parsimonious inference objective.
\Cref{sec:complexity} continues with our main contributions, including a discussion of generalized description length, a coherent complexity hyperprior, and the principles of minimum information and maximum entropy.
These notions culminate in the parsimonious inference objective, providing a suitable framework to understand and control model complexity over arbitrary learning architectures.
We also show how this objective allows us to quantify memorization.
Our theory allows us to apply these concepts within a wide variety of approaches to solve learning problems, including variational inference techniques.
\Cref{sec:implementation} examines implementation details within our prototype algorithms, including efficient encodings and training strategies for polynomial regression and decision trees.
\Cref{sec:discussion} concludes with a discussion of \tAdd{how we may consistently compare multiple interpreters}, a pathway to frame and address computability \`a priori, our theory's relationship to other work, and a summary of our findings.
\section{Background}
\label{sec:background}
In order to clarify how we may improve trust in machine learning predictions, we must begin with the origin of trust in science.
\tAdd{The epistemological foundation of the scientific method shares a fundamental connection with Bayesian inference and determines how we may optimally account for evidence to learn plausible explanations.
Bayesian theory alone, however, does not provide a complete learning framework when we employ high-parameter families, such as most machine learning architectures.
Thus, we also review Solomonoff's and Kolmogorov's notions of complexity as a means to promote simplicity in learned models.
To motivate the need for this discussion, we begin by illustrating the severe deficiencies of standard machine learning training practices when they are applied to small datasets.}
\subsection{Memorization}
The term \textit{memorization} is often conflated with \textit{overtraining}, but we distinguish these terms as follows.
Overtraining is characterized by degradation in prediction quality on unseen data that occurs after an initial stage of improvements.
In contrast, memorization refers more generally to any predictive algorithm that exhibits unjustifiable confidence, or low prediction uncertainty, in the training dataset labels that were used to adjust model parameters.
\Cref{sec:info_min} provides rigorous analysis to justify this view.
Conflating these terms leads to an incorrect picture of the problem, namely that to avoid memorization we must merely halt training at the correct moment.
Machine learning algorithms are typically trained using some variation of stochastic gradient descent \citep{Robbins1951}.
When applied to overparameterized models, traditional optimization strategies are subject to overtraining.
Cross-validation \citep{Allen1974} attempts to prevent overtraining by monitoring predictions on a holdout dataset, but we show how this method still fails to prevent memorization on small datasets.
The same strategy is also used to tune hyperparameters, such as regularization weights and learning-rate schedules.
The obvious difficulty presented by cross-validation is the inherent tradeoff between using as much data as possible to train parameters, but also having a reliable estimator for prediction quality.
For limited data, standard practices apply some form of k-fold cross-validation \citep{Hastie2009}.
One forms k distinct partitions of the dataset, trains k models respectively, and aggregates predictions by averaging.
Leave-one-out cross-validation uses the same number of partitions as datapoints.
Each partition reserves only one observation to estimate the best model over each training trajectory.
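The leave-one-out procedure used in this experiment can be sketched in a few lines; the data, degrees, and random seed below are placeholder choices for illustration, not the prototype's actual configuration.

```python
import numpy as np

def loo_cv_score(x, y, degree):
    """Average squared error on each held-out point over all n splits."""
    n = len(x)
    errors = []
    for i in range(n):
        train = np.arange(n) != i               # hold out the i-th observation
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[i])
        errors.append((y[i] - pred) ** 2)
    return float(np.mean(errors))

# Toy data: 12 noisy samples of a smooth function.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 12)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(12)

# Score a few candidate degrees; lower is better.
scores = {d: loo_cv_score(x, y, d) for d in (1, 3, 5, 9)}
```

Even when the score ranks candidate degrees sensibly, nothing in this procedure penalizes excess complexity within a fixed high-degree basis, which is the failure mode illustrated next.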
\begin{figure}[h!]
\centering
\includegraphics[width=0.825\textwidth]{r0_fig.png}
\caption{\small
Illustration of standard training shortcomings.
Top-left: optimum obtained from holding out the green point.
Bottom-left: mean predictions over all single-holdout optima.
Top-middle: idealized training on all original data and 1000 extra validation points from ground truth.
Bottom-middle: mean predictions over 12 random starts, $\mathcal{N}(\theta_i \mid \mu=0,\sigma = 0.2)$.
Removing the tradeoff between holdout and training data does not prevent complexities in predictions.
Top-right: optimal model discovered using our theoretical framework and prototype algorithm.
Bottom-right: our aggregate accounts for many plausible models, improving robustness and demonstrating natural extrapolation uncertainty.
}
\label{fig:deficiencies}
\end{figure}
\Cref{fig:deficiencies} demonstrates these techniques using polynomial regression, fitting 20th degree polynomials with only 12 points.
\tAdd{Retaining more polynomial coefficients than training points allows us to observe how standard training fails when data are limited.}
The top-left shows an example of a single model trained by holding out one point for validation, shown in green.
The bottom-left shows average predictions over 12 such models.
Yet, suppose we could train with all 12 points while remaining highly confident that we will halt training at the correct moment.
This ideal is demonstrated as a thought experiment in the middle column of \Cref{fig:deficiencies} by sampling 1000 extra data from the \textit{generative process}, the ground truth mechanism that creates observations.
We see that eliminating the tradeoff between training and validation would not prevent the development of artifacts that confidently hew to scant observations, i.e.~memorization.
\tAdd{Typically, one would also use cross-validation to tune the optimal polynomial degree, which would certainly constrain complexity somewhat.
The purpose of this experiment, however, is to show that na\"ive training may never even explore low-complexity models, especially in high dimensions.
For most high-parameter families, such as neural networks, there is no feasible hierarchy of bases that would be analogous to limiting the polynomial degree.
For example, if we wished to constrain model parameters to a fixed sparsity pattern, the number of patterns to test would grow exponentially in the number of nonzero elements.}
This experiment demonstrates how neither of the competing cross-validation objectives address the core problem with learning from limited data.
Memorization is often framed in terms of a bias-variance tradeoff;
predictions should avoid fluctuating rapidly, but also remain flexible enough to extract predictive patterns.
In our theoretical framework, however, memorization is more comprehensively and rigorously understood as unparsimonious model complexity, i.e.~increases in model information that are not justified by only small improvements to training predictions.
Regularization strategies attempt to address this heuristically by penalizing excessive freedom in learning parameters, for example attaching an $\ell_1$ or $\ell_2$ norm to the training objective.
While many of these approaches can be equivalently cast as choices of prior belief, they lack \tAdd{a unifying principle that would illuminate and resolve choices of regularization shape and weight.
One must, again, resort to hyperparameter tuning via cross-validation, thus failing to address the core challenge: to efficiently learn from limited data.}
\subsection{Scientific Reasoning and Bayesian Inference}
In order to reiterate the concrete relationship between Bayesian inference and scientific reasoning,
we review the epistemological foundations of reason at the center of the scientific method.
These foundations bear decisive consequences regarding the valid forms of analysis we may pursue in order to obtain rational predictions.
At its core, the scientific method relies on coherent mathematical models of observable phenomena that have been informed over centuries of physical measurements.
Within the field of epistemology, this is the naturalist view of rational belief \citep{Brandt1985}.
It holds that validity is ultimately derived from consistency, which can be understood in three key components:
\begin{enumerate}
\item Rational beliefs must be logical, avoiding internal contradictions;
\item Rational beliefs must be empirical, accounting for all available evidence;
\item Rational beliefs must be predictive, continually reassessing validity by how well predictions agree with new observations.
\end{enumerate}
The third point is really nothing more than a restatement of the second point, placing emphasis on the evolving nature of rational beliefs as new data become available.
The critical significance of the first point is that it provides a path to elevate the second and third points to a rigorous extended logic: Bayesian inference.
Building on the rich body of work by many scholars---including \citet[original 1926]{Ramsey2016}, \citet{DeFinetti1937}, and \citet[original 1939]{Jeffreys1998}---\citet{Cox1946} shows that for a mathematical framework analyzing degrees of truth, belief as an extended logic,
to be consistent with binary propositional logic, that formalism must satisfy the laws of probability:
\begin{enumerate}
\item Probability is nonnegative.
\item Only impossibility has probability zero.
\item Only certainty has maximum probability, normalized to one.
\item To revise the degree of credibility we assign to a model upon reviewing empirical evidence, we must apply Bayes' theorem.
\end{enumerate}
Consequently, the Bayesian paradigm provides a uniquely rigorous approach to quantify uncertainty in predictions derived through inductive reasoning.
Therefore, the only logically correct path to quantify and suppress memorization in learning must be cast within the Bayesian perspective.
Analysis proceeds with a probability distribution called the prior $\pP(\vTheta)$ that quantifies our lack of information, or initial uncertainty, in plausible explanatory models.
Here, $\vTheta$ is any specific parameter state within a model class, or computational architecture.
When we need to emphasize the prior's dependence on a model class, as well as the shape of the parameter distribution within that class, we will write the prior as $\pP(\vTheta \mid \vPsi)$,
where a description $\vPsi$, or hyperparameter sequence, provides such details.
We will examine how $\vPsi$ plays a key role regarding model complexity in detail in \Cref{sec:complexity}.
The empirical data are expressed as a set of ordered pairs $\sData=\{ (\vX_i, \vY_i) \mid i \in [n] \}$ that have been sampled from the generative process $\pG(\vX, \vY)$.
Features $\vX_i$ are used to predict labels $\vY_i$ from the architecture paired with $\vTheta$.
We write the predicted distribution over all potential labels as $\pP(\vY_i \mid \vX_i, \vTheta)$.
If the ordered pairs in $\sData$ represent independent samples from the underlying process,
the likelihood is evaluated as $\pP(\sData \mid \vTheta) = \prod_{i\in [n]} \pP(\vY_i \mid \vX_i, \vTheta)$, which expresses the probability of observing $\sData$ if a hypothetical explanation $\vTheta$ held.
Then, we update our beliefs according to Bayes' theorem.
In our picture, we hold that having $\vTheta$ alone is sufficient to evaluate predictions.
When we explicate the role of prior descriptions $\vPsi$, that means inference can be written as
\submitBlank
\begin{align*}
\pP(\vTheta \mid \sData, \vPsi) = \frac{\pP(\sData \mid \vTheta) \pP(\vTheta \mid \vPsi)}{\pP(\sData \mid \vPsi)}
\quad\text{where}\quad
\pP(\sData \mid \vPsi) = \integ{d\vTheta} \pP(\sData \mid \vTheta) \pP(\vTheta \mid \vPsi)
\end{align*}
is the model-class evidence.
If we have a hyperprior $\pP(\vPsi)$ over potential descriptions, we can also infer the hyperposterior
\submitBlank
\begin{align*}
\pP(\vPsi \mid \sData) = \frac{\pP(\sData \mid \vPsi) \pP(\vPsi)}{\pP(\sData)}
\quad\text{where}\quad
\pP(\sData) = \integ{d\vPsi} \pP(\sData \mid \vPsi) \pP(\vPsi).
\end{align*}
The central point of inference is that it does not attempt to identify a single explanation matching the data, as with stochastic gradient optimization and cross-validation.
Rather, inference naturally adheres to the Epicurean principle---we should retain multiple explanations according to their respective degrees of plausibility---within a coherent mathematical framework.
\tAdd{As a distribution, the posterior is meaningful in a way that a single model is not; it allows us to update our beliefs consistently as new evidence emerges.}
We obtain rational predictions by evaluating the posterior predictive integral, or even the hyperposterior predictive integral, respectively constructed as
\submitBlank
\begin{align*}
& \pP(\vY \mid \vX, \sData, \vPsi) = \integ{d\vTheta} \pP(\vY \mid \vX, \vTheta) \pP(\vTheta \mid \sData, \vPsi)
\quad\text{and}\\
& \pP(\vY \mid \vX, \sData) = \integ{d\vPsi} \pP(\vY \mid \vX, \sData, \vPsi) \pP(\vPsi \mid \sData).
\end{align*}
The resulting predictions meet the exigent standard of rational belief for meaningful uncertainty quantification.
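A toy numerical instance of these formulas (hypothetical; discrete coin-bias models stand in for a continuous parameter space) replaces each integral with a sum:

```python
import numpy as np

# Discrete stand-in for the model class: candidate biases of a coin.
thetas = np.linspace(0.01, 0.99, 99)
prior = np.full(len(thetas), 1.0 / len(thetas))   # P(theta | Psi), uniform

# Evidence: 7 heads observed in 10 independent flips.
heads, flips = 7, 10
likelihood = thetas**heads * (1.0 - thetas)**(flips - heads)  # P(D | theta)

# Bayes' theorem: posterior is likelihood times prior over the evidence.
evidence = np.sum(likelihood * prior)             # P(D | Psi)
posterior = likelihood * prior / evidence         # P(theta | D, Psi)

# Posterior predictive for the next flip: average over all retained models.
p_next_heads = float(np.sum(thetas * posterior))
```

Every candidate bias contributes according to its plausibility, in keeping with the Epicurean principle, rather than committing to the single maximum-likelihood estimate ($0.7$ here).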
\subsection{The Universal Scope of Prior Belief}
The pervasive objection to the Bayesian paradigm is the lack of clear provenance for prior belief.
In addition to objections based on subjectivity, translating our intuitive beliefs into distributions can be difficult.
This problem is exacerbated in the domain of machine learning, where computational models are abstract and driven only by practical utility, rather than well-understood physical principles.
The premise of machine learning is that we do not need to integrate expert knowledge and specialized scientific theory into algorithms to obtain useful predictions from data,
which is at odds with the traditional view of priors, namely that we must be able to express our beliefs.
Prediction sensitivity to prior belief is most apparent when the number of parameters approaches or exceeds the size of our dataset, as illustrated in \Cref{fig:sensitivity}.
If we have $n$ observations and $k > n$ differentiable parameters, then every point in parameter space must have at least $k-n$ perturbable dimensions in which the likelihood gradient is zero.
Because the likelihood remains constant as we move through these dimensions, each point lives within a $(k-n)$-dimensional submanifold wherein prior belief entirely determines the structure of posterior belief.
Thus, within these submanifolds, the contribution of parameter uncertainty to prediction uncertainty is not affected by evidence.
Clearly, we cannot be satisfied with meeting only the bare conditions for technically rational belief;
we require concrete philosophical justification for prior belief.
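This basis dependence is easy to reproduce with closed-form Bayesian linear regression: the same isotropic normal prior on coefficients induces different priors over functions under monomial and Chebyshev bases, so the posterior-mean predictions differ. The data, degree, and noise variance in this sketch are illustrative assumptions, not the settings behind the figure.

```python
import numpy as np

def posterior_mean_predict(Phi_train, y, Phi_test, noise_var=0.01):
    """Posterior-mean prediction for linear-in-features regression
    with prior N(0, I) on the coefficients and Gaussian noise."""
    k = Phi_train.shape[1]
    precision = Phi_train.T @ Phi_train / noise_var + np.eye(k)
    mean_w = np.linalg.solve(precision, Phi_train.T @ y / noise_var)
    return Phi_test @ mean_w

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 8)                  # fewer points than parameters
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(8)
x_test = np.linspace(-1.0, 1.0, 50)

degree = 15
mono_train = np.vander(x, degree + 1, increasing=True)
mono_test = np.vander(x_test, degree + 1, increasing=True)
cheb_train = np.polynomial.chebyshev.chebvander(x, degree)
cheb_test = np.polynomial.chebyshev.chebvander(x_test, degree)

pred_mono = posterior_mean_predict(mono_train, y, mono_test)
pred_cheb = posterior_mean_predict(cheb_train, y, cheb_test)
gap = float(np.max(np.abs(pred_mono - pred_cheb)))  # same prior, different beliefs
```

The nonzero gap confirms that inference alone cannot settle the choice of basis; that choice is itself part of prior belief.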
\begin{figure}[b!]
\centering
\includegraphics[width=\textwidth]{r2a_fig.png}
\caption{\vspace{-0.1in}\small
Illustration of prediction sensitivity to prior belief.
The first row uses Chebyshev polynomial bases and the second uses standard bases.
The third row exacerbates the problem with basis-dependence by using a polynomial that already memorized the data for the first basis function, followed by the Chebyshev basis.
All priors are normal, $\pN(\vTh \mid 0, \mI)$.
The choice of basis clearly matters; inference alone does not prevent the development of artifacts we associate with memorization.
}
\label{fig:sensitivity}
\end{figure}
In order to appreciate the solution, we must grasp the full severity of the problem by framing it in the most arduous scope.
We can define the \textit{model universe} as the set containing every computational architecture that could produce coherent predictions over $\vY$ from $\vX$.
By considering inference over the model universe, we see that every architectural design decision is equivalent to a choice of support for prior belief, i.e.~the subdomain in which prior belief is nonzero.
Using model-class descriptions $\vPsi$ to capture potential choices of prior belief allows us to subsume these complications by investigating the correct form of a hyperprior $\pP(\vPsi)$.
This picture also illuminates a second significant challenge in the machine learning setting, the problem of dimensionality in high-parameter families.
Even if we obtain an attractive hyperprior, we can always construct increasingly complicated architectures.
It is not possible to explore all of them.
Occam's Razor provides a compelling path to a solution;
explanations should not exhibit more complexity than what is required to explain the evidence.
We conclude that a comprehensive hyperprior must compute and suppress a rigorous formulation of complexity.
Moreover, our learning framework must justify disregarding infinite dimensions from inference and simultaneously address how to feasibly construct or approximate restricted posteriors.
\subsection{Controlling Complexity}
Bayesian theorists have had a persistent interest in articulating core principles for constructing priors over abstract models, particularly within the objectivist Bayesian philosophy.
Examples include maximum entropy priors \citep{Jaynes1957,Good1963} and Jeffreys priors \citep{Jeffreys1946}.
Other approaches use information criteria to determine a suitable number of parameters, such as the Akaike Information Criterion (AIC)~\citep{Akaike1974} and Bayesian Information Criterion (BIC)~\citep{Schwarz1978}.
In contrast to these approaches that explicitly compare model classes, in which a parameter is either present or absent,
Automatic Relevance Determination (ARD) \citep{Mackay1995,Neal2012} takes a softer approach.
ARD uses a hyperprior to express uncertain relevance of different parameters and features in a model.
It postulates that most model parameters should be close to zero because only a limited number of features are relevant for prediction.
Through inference, relevant model parameters can be identified automatically.
Having been specifically developed for neural networks, ARD provides an important perspective to understand complete theories of learning.
We will discuss the relationship between ARD priors and our theory in \Cref{sec:ardprior}.
Perhaps the most principled approach to a universal prior is Solomonoff's work on algorithmic probability \citep{Solomonoff1960,Solomonoff1964a,Solomonoff1964b,Solomonoff2009}.
Solomonoff derives a prior over all possible programs based upon their lengths using binary encodings subject to an optimal UTM.
\citet{Hutter2007}, reviewing central principles of reason, goes further to show how Solomonoff's framework solves important philosophical problems in the Bayesian setting,
including predictive and decision-theoretic performance bounds under the assumption that the generative process is a program.
\citet{Potapov2012} also discuss Solomonoff's algorithmic probability, emphasizing the importance of retaining many alternative models to not only learn robust predictions,
but also maintain adaptability in decision making.
Kolmogorov's work on mapping complexity \citep{Kolmogorov1965} is closely related and we will examine it in detail in \Cref{sec:kolmogorov}.
Rissanen's work on universal priors and Minimum Description Length (MDL) \citep{Rissanen1983,Rissanen1984} is also related.
We examine his universal prior on integers in \Cref{sub:encodings} and the relationship between our theory and MDL in \Cref{sub:mdl}.
The key advantage of Solomonoff's approach is that it applies generally to any model we can program, thus eliminating artificial constraints on computational architectures.
Solomonoff does not separate the model $\vTheta$ from the model class $\vPsi$, since any model from any model class can be expressed as a program.
As this encoding-length based prior provides a strong base for our work, we provide a detailed discussion in \Cref{sec:solomonoff}.
\tAdd{Information-theoretic formulations of complexity can be traced back to \citet{Shannon1948} and the concept of entropy as a measure of the uncertainty associated with sequences of discrete symbols that may be transmitted over a communication channel.}
In order to rigorously understand how information in our datasets relates to Bayesian inference and encoding complexity,
we developed a theory of information \citep{Duersch2020} rooted in understanding information as an expectation over rational belief.
Given an arbitrary latent variable $\vZ$, we would like to measure the information gained by shifting belief between hypothetical states, i.e.~from $\pQ_0(\vZ)$ to $\pQ_1(\vZ)$.
We require this measurement to be taken relative to a third state of belief, $\pR(\vZ)$, which we hold to be valid.
The precise reasoning by which validity of $\pR(\vZ)$ is derived is an important epistemological question.
For our purposes, $\pR(\vZ)$ will either be rational belief, expressing our present understanding of the actual state of affairs, or a choice, representing a hypothetical state of affairs following a decision.
Rational belief is defined as the posterior distribution resulting from inference, which reserves some nonnegative probability for every outcome that is plausible.
In contrast, we regard a choice as a restriction on the support of belief, effectively confining probability to any distribution that we can describe.
This occurs when we must adhere to a course of action from a set of mutually incompatible options.
The following postulates and \Cref{thm:info} summarize key results.
\begin{enumerate}
\item Information gained by changing belief from $\pQ_0(\vZ)$ to $\pQ_1(\vZ)$ is quantified as an expectation over a third state $\pR(\vZ)$, called the view of expectation.
\item Information is additive over independent belief processes.
\item If belief does not change then no information is gained, regardless of the view of expectation.
\item Information gained from any normalized prior state of belief $\pQ_0(\vZ)$ to an updated state of belief $\pR(\vZ)$ in the view of $\pR(\vZ)$ must be nonnegative.
\end{enumerate}
\begin{theorem}
\label{thm:info} \thmTitle{Information as a rational measure of change in belief.}
Information, measured in bits, satisfying these postulates must take the form
\submitBlank
\begin{displaymath}
\info{\pR(\vZ)}{\pQ_1(\vZ)}{\pQ_0(\vZ)} = \integ{d\vZ} \pR(\vZ) \log_2\left( \frac{\pQ_1(\vZ)}{\pQ_0(\vZ)} \right) \text{bits}.
\end{displaymath}
\end{theorem}
When the view of expectation is the same as the target belief, we recover the Kullback-Leibler divergence \citep{Kullback1951}
\submitBlank
\begin{align*}
\KL{\pR(\vZ)}{\pQ(\vZ)} = \info{\pR(\vZ)}{\pR(\vZ)}{\pQ(\vZ)}.
\end{align*}
The entropy of a distribution $\pP(\vZ)$ over discrete outcomes $\vZ \in \{ \vZ_i \mid i \in [n] \}$ is equivalent to the expected information gained upon realization in the view of the realization
\submitBlank
\begin{displaymath}
\entropy{\pP(\vZ)} = \sum_{i=1}^n \pP(\vZ_i) \log_2\!\left( \frac{1}{\pP(\vZ_i)} \right) \text{bits}.
\end{displaymath}
Our theory allows us to relate and analyze changes in belief regarding our data, model parameters, and hyperparameters within a unified framework.
\Cref{sec:corollaries} provides selected corollaries of \Cref{thm:info} for reference.
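For discrete outcomes the integral in \Cref{thm:info} becomes a finite sum, and both the KL divergence and entropy follow as the special cases noted above. A minimal numerical check (our sketch, with arbitrary belief states) follows:

```python
import numpy as np

def info_bits(r, q1, q0):
    """Information gained moving belief from q0 to q1, in the view r."""
    return float(np.sum(r * np.log2(q1 / q0)))

q0 = np.array([1/3, 1/3, 1/3])     # prior state of belief
r = np.array([0.7, 0.2, 0.1])      # updated (valid) state of belief

# KL divergence: the view of expectation equals the target belief.
kl = info_bits(r, r, q0)

# Entropy of r: expected information upon realization, sum r * log2(1/r).
entropy = info_bits(r, np.ones(3), r)

# Postulate 3: unchanged belief yields zero information in any view.
no_change = info_bits(r, q0, q0)
```

Passing all-ones as the target belief is only a computational trick to recover the entropy sum; it is not itself a normalized distribution.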
Our main result, \Cref{thm:parsimony_optimization}, builds on this work to show how memorization may be quantified, \Cref{cor:memorization}, and prevented.
Simple changes to the model structure that benefit multiple predictions are parsimonious, worthwhile investments.
In contrast, memorization is a wasteful transfer of information from the space of predictions to the space of explanations.
As we must inevitably solve feasible approximations of the posterior predictive integral in order to obtain practical predictions, our theory provides additional benefit by allowing us to analyze posterior approximations within the same formalism.
\section{Complexity and Parsimony}
\label{sec:complexity}
We present our theoretical learning framework in three parts.
First, we analyze a modest generalization of Kolmogorov's notion of program length to sequences of symbols drawn from arbitrary alphabets that may be conditioned on previously realized symbols.
This simplifies our ability to assign complexity to arbitrary descriptions of prior belief.
Second, we use description length to derive a hyperprior that extends Solomonoff's formulation of algorithmic probability to general inference architectures.
Third, we cast learning as an information minimization principle.
We show how learning balances the two forms of information contained within models, due to both prior descriptions and inference, against the information the models provide about our dataset.
Not only does this formalism allow us to analyze the utility of potential restrictions of prior belief, we also recover variational inference optimization from the same principle.
\subsection{Program Length and Kolmogorov Complexity}
\label{sec:kolmogorov}
\newSet<\sObj>{A}
\newSet<\sSubObj>{B}
\DeclareDocumentCommand \sSub{m}{\sSubObj\!\left[ #1 \right]}
\newVector<\vObj>{a}
\newVector<\vObjr>{\check{a}}
\newMatrix<\sAlp>{\Sigma}
\newSet<\sX>{X}
\newSet<\sY>{Y}
Kolmogorov's discussion of complexity begins with a countable set of objects $\sObj = \{ \vObj \}$ that are indexed with binary sequences.
For Kolmogorov, an object $\vObj$ is a program and the length of the program $\fLen(\vObj)$ is taken to be the number of binary digits in a corresponding binary sequence $\vPsi(\vObj)$.
Given a domain of program inputs $\sX$ and a codomain of outputs $\sY$,
a \textit{programming method} $\vPhi(\cdot, \cdot)$ accepts the program $\vObj$ and an input $\vX \in \sX$ and returns an output $\vY = \vPhi(\vObj, \vX) \in \sY$.
The Kolmogorov complexity of an ordered pair $(\vX, \vY) \in \sX \times \sY$ is the length of the shortest program that is capable of reproducing the pair
\submitBlank
\begin{align*}
\fKol_{\vPhi}(\vX, \vY) = \min_{\vObj\in\sSubObj}\, \fLen(\vObj)
\quad\text{where}\quad
\sSubObj = \left\{ \vObj \mid \vY = \vPhi(\vObj, \vX) \right\} \subset \sObj.
\end{align*}
An ordered pair may be understood to enforce multiple function values or even the entire mapping that defines a function.
Further, Kolmogorov's framework easily captures the complexity of a singleton $\vY$ by taking an empty input, $\vX=\emptyset$.
Because Turing-complete programming methods can simulate one another,
\tAdd{the shortest length of a program in a new language is bound from above by that of the source language plus a constant;
the new shortest program cannot be longer than simply attaching a simulator to the source version.
Thus, the length of an efficient simulator for the original programming method provides the bounding constant offset.}
The descriptions of interest to us, however, may not admit perfectly efficient binary codes.
For example, we may wish to represent the outcome of rolling a balanced six-sided die, \tAdd{having an approximate entropy of $2.585$ bits.}
Rather than solving for an optimal binary encoding \citep{Huffman1952},
\tAdd{yielding an expected length of approximately $2.667$ bits,}
it is convenient to extend the notion of length to finite sequences of symbols drawn from multiple alphabets.
\tAdd{In this case, using an alphabet with six symbols would allow the encoding to exactly achieve the entropy limit.}
See \Cref{sub:decision_trees} for another example in which this extension supports efficient descriptions of feature domain partitions.
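The figures above are easy to verify numerically. The following sketch (the helper \texttt{huffman\_lengths} is our own illustration, not part of the formalism) recovers both the $\log_2(6) \approx 2.585$ bit entropy and the $8/3 \approx 2.667$ bit expected Huffman length for the balanced die.

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    """Code length that Huffman's algorithm assigns to each symbol."""
    counter = itertools.count()  # tiebreaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, members1 = heapq.heappop(heap)
        p2, _, members2 = heapq.heappop(heap)
        for i in members1 + members2:
            lengths[i] += 1  # each merge prepends one bit to every member's code
        heapq.heappush(heap, (p1 + p2, next(counter), members1 + members2))
    return lengths

probs = [1 / 6] * 6
entropy = sum(p * math.log2(1 / p) for p in probs)                    # ~2.585 bits
expected = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))  # ~2.667 bits
```

With six equiprobable outcomes, Huffman coding yields four 3-bit codes and two 2-bit codes, so the expected length exceeds the entropy limit that a six-symbol alphabet achieves exactly.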
Let a description be composed of a sequence of symbols represented as $\vPsi = (\vS_i)_{i=1}^n$.
When we wish to draw attention to the role of a sequence as an encoding of an object, we write $\vPsi(\vObj)$.
For our purposes, these objects do not necessarily need to be programs.
Rather, they are simply descriptions of belief, $\pP(\vTh \mid \vPsi)$.
For each $i \in [n]$, a symbol $\vS_i$ is selected from an alphabet $\sAlp_i$.
We emphasize that each alphabet is allowed to depend on previously realized symbols in the sequence so that the sequence of alphabets is not fixed.
To avoid cumbersome notation and excessive indexing variables,
we express subsequences as $(\vS)_{1}^{j}$ and leave the natural indexing $(\vS_i)_{i=1}^{j}$ implied.
Let $(\vS)_{1}^{0}$ indicate the empty subsequence.
We also leave the conditional dependence of each alphabet on previous symbols implied so that we may simply write $\sAlp_{i}$ rather than $\sAlp_{i}\!\left[ (\vS)_{1}^{i-1} \right]$.
As with Kolmogorov, the length of an object is derived from a sequence, $\fLen(\vObj) = \fLen(\vPsi(\vObj))$.
If we treat each symbol in an encoding $\vPsi(\vObj)$ as a discrete random variable, we have
\submitBlank
\begin{align*}
\pP(\vPsi(\vObj)) = \prod_{i=1}^n \pP(\vS_i \mid (\vS)_{1}^{i-1}).
\end{align*}
The entropy corresponding to each potential symbol, or the information we expect to gain upon realization,
is maximized if and only if the probability of each symbol is uniform over its alphabet, $\pP(\vS_i \mid (\vS)_{1}^{i-1}) = \frac{1}{|\sAlp_i|}$.
That is,
\submitBlank
\begin{align*}
\sum_{\vS_i \in \sAlp_i} \pP(\vS_i \mid (\vS)_{1}^{i-1}) \log_2\!\left(\frac{1}{\pP(\vS_i \mid (\vS)_{1}^{i-1})}\right) \leq \log_2(|\sAlp_i|).
\end{align*}
Our construction of generalized length in \Cref{def:length} invokes the principle of maximum entropy to remove restrictions on the kinds of codes we can consider,
while recovering Kolmogorov's length when all symbols are binary digits.
\begin{definition}
\label{def:length} \thmTitle{Generalized Length as Maximum Entropy Encoding.}
The generalized length of an arbitrary sequence $\vPsi$ is the upper bound on entropy of the corresponding sequence of alphabets from which each symbol is drawn,
\submitBlank
\begin{displaymath}
\fLen(\vPsi) = \sum_{i=1}^n \log_2( |\sAlp_i| ) \,\text{bits}.
\end{displaymath}
\end{definition}
\Cref{cor:length_lowerbound} shows that this length saturates the information lower bound in Shannon's source coding theorem \citep{Shannon1948}.
We provide all proofs in \Cref{sec:proofs}.
Furthermore, when we develop an efficient encoding for the kinds of objects we would like to use,
\Cref{cor:probability_from_length} allows us to naturally derive the probability of an object from the encoding.
\begin{corollary}
\label{cor:length_lowerbound} \thmTitle{Generalized Length Lower Bound.}
Given a set of objects $\sObj$ and probabilities $\pP(\vObj)$ for all $\vObj \in \sObj$,
the expected generalized length of an object is bound from below by the entropy
\submitBlank
\begin{displaymath}
\expect_{\pP(\vObj)} \fLen(\vPsi(\vObj)) \ge \expect_{\pP(\vObj)} \log_2\!\left(\frac{1}{\pP(\vObj)}\right).
\end{displaymath}
\end{corollary}
\begin{corollary}
\label{cor:probability_from_length} \thmTitle{Probability from Length.}
Given a set of objects $\sObj$ and a maximum entropy encoding with $\pP(\vObj) = \pP(\vPsi(\vObj))$ for all $\vObj \in \sObj$,
the generalized length satisfies
\submitBlank
\begin{displaymath}
\pP(\vObj) = 2^{-\fLen(\vObj)}.
\end{displaymath}
\end{corollary}
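As a minimal numerical illustration of the definition and corollaries above (the mixed-alphabet description below is a hypothetical example of our own), the generalized length of a sequence with one six-symbol alphabet followed by two binary alphabets is $\log_2(6) + 2 \approx 4.585$ bits, and the induced probability is the uniform mass over the $6 \cdot 2 \cdot 2 = 24$ possible sequences.

```python
import math

def generalized_length(alphabet_sizes):
    """Generalized length: the entropy upper bound sum_i log2(|alphabet_i|), in bits."""
    return sum(math.log2(n) for n in alphabet_sizes)

def probability_from_length(alphabet_sizes):
    """Maximum-entropy probability of a sequence: 2 to the minus generalized length."""
    return 2.0 ** (-generalized_length(alphabet_sizes))

# A hypothetical three-symbol description: one die-roll symbol, then two bits.
sizes = [6, 2, 2]
length = generalized_length(sizes)      # log2(6) + 1 + 1 ~ 4.585 bits
prob = probability_from_length(sizes)   # 1/24, uniform over all 24 sequences
```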
\subsection{Algorithmic Probability}
\label{sec:solomonoff}
\DeclareDocumentCommand \pQObj{} {\pQ(\vObj)}
\DeclareDocumentCommand \pQsObj{} {\pQs(\vObj)}
\tAdd{Five years before Kolmogorov published his work on mapping complexity}, Solomonoff articulated the foundations for inductive inference and algorithmic probability.
He was specifically interested in programs capable of reproducing a binary sequence $\vY$,
i.e.~the subset of programs $\sSubObj = \left\{ \vObj \mid \vY = \vPhi(\vObj, \emptyset) \right\} \subset \sObj$,
and he derived the probabilistic contribution of each program to plausible continuations of the sequence
\submitBlank
\begin{align*}
\pP(\vObj \mid \vY) \propto \begin{cases}
2^{-\fLen(\vPsi(\vObj))} & \vObj \in \sSubObj \\
0 & \vObj \notin \sSubObj
\end{cases}
\end{align*}
where, as with Kolmogorov's picture, length corresponds to an optimal binary encoding, subject to \tAdd{an optimal UTM.
That is, the UTM for which the optimal binary encoding is shortest.}
We understand his result as Bayesian inference wherein \Cref{cor:probability_from_length} provides prior belief and a program has unit likelihood if \tAdd{it halts and} reproduces the sequence.
\tAdd{Otherwise, the likelihood is zero.}
It follows that the Kolmogorov complexity is simply the \tAdd{length of the Maximum A Posteriori (MAP)} estimator in the same picture.
As such, the Kolmogorov complexity is the minimum amount of information that is possible to gain
by restricting belief to a discrete program that is capable of reproducing a desired ordered pair.
If, however, we allow distributions of belief over many programs, so that $\pQObj \geq 0$ for any $\vObj \in \sSubObj$,
\Cref{cor:solomonoff} shows that Solomonoff's algorithmic probability is the minimizer of information gain, improving beyond the Kolmogorov complexity.
\begin{corollary}
\label{cor:solomonoff} \thmTitle{Information Optimality of Solomonoff Programs.}
If we measure the change in belief from all possible programs according to \Cref{cor:probability_from_length}
to any distribution $\pQObj$ that restricts belief to programs capable of reproducing an input-output pair $(\vX, \vY)$,
the minimizer
\submitBlank
\begin{align*}
& \pQsObj = \argmin_{\pQObj}\, \KL{\pQObj}{\pP(\vObj)}
\quad\text{subject to}\quad
\pQObj = 0
\quad\forall\quad
\vObj \notin \sSubObj = \left\{ \vObj \mid \vY = \vPhi(\vObj, \vX) \right\},
\end{align*}
is uniquely given by
\submitBlank
\begin{align*}
& \pQsObj = \frac{2^{-\fLen(\vPsi(\vObj))}}{\pP(\sSubObj)}
\quad\forall\quad
\vObj \in \sSubObj
\quad\text{where}\quad
\pP(\sSubObj) = \sum_{\vObj \in \sSubObj} 2^{-\fLen(\vPsi(\vObj))}.
\end{align*}
\end{corollary}
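A toy numerical sketch illustrates the corollary; the three program lengths below are hypothetical values of our own choosing, not derived from any particular programming method. Restricting belief to the reproducing subset and renormalizing the parsimonious prior yields the stated minimizer, and any other feasible distribution gains strictly more information.

```python
import math

def kl(q, p):
    """Kullback-Leibler divergence in bits between pointwise-given distributions."""
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Hypothetical: three programs reproduce the pair, with lengths 3, 5, and 5 bits.
lengths = [3, 5, 5]
prior = [2.0 ** -l for l in lengths]   # restriction of the prior to the subset B
mass = sum(prior)                      # P(B) = 2**-3 + 2**-5 + 2**-5 = 0.1875
q_star = [p / mass for p in prior]     # renormalized minimizer: [2/3, 1/6, 1/6]

# Any other feasible distribution over the subset gains more information.
alternative = [0.5, 0.25, 0.25]
assert kl(q_star, prior) < kl(alternative, prior)
```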
Solomonoff's picture is even more general than it first appears.
There is no need to confine our attention to binary programs that reproduce binary sequences.
We may apply the same framework to any algorithm that generates coherent probabilities on a given dataset, $\pP(\vY \mid \vX, \vObj)$.
Since any algorithm we write is ultimately still a program, this induces universal prior belief over arbitrary predictive algorithms.
\tAdd{Again, such a prior depends on the choice of UTM or programming method.}
Then, Bayesian inference yields rational belief as a posterior distribution over all such algorithms.
Yet, this approach is fundamentally difficult because it requires us to efficiently explore the posterior over suitable programs via their discrete sequences.
Since it is not always possible to anticipate how a program will respond to given inputs in finite time, we arrive at the problem of uncomputability.
Moreover, even if we discover seemingly high posterior programs, we cannot guarantee that our sample adequately approximates the posterior predictive integral.
We discuss this problem and a potential solution further in \Cref{sec:computability}.
Rather than restricting our attention to programs, we would like to allow more general inference architectures.
As indicated earlier, we accomplish this by relaxing $\vPsi$ to be merely a description of prior belief.
\tAdd{Doing so requires an \textit{interpreter}, which translates valid sequences into coherent distributions over valid models, $\pP(\vTh \mid \vPsi)$,
and then computes coherent likelihoods from specific models, $\pP(\vY \mid \vX, \vTh)$.
If the interpreter is Turing complete, then a description could still be a complete program,
but we can also consider interpreters that merely require specification of a few hyperparameters of a distribution that is well-suited to our data source.}
Not only are short descriptions easier to discover; letting our data drive updates in belief through inference is often substantially more information-efficient than accounting for a full program that computes the equivalent result.
In this picture, we lose the ability of interpreters to simulate one another, but we gain access to simpler encodings that may be much easier to propose and evaluate.
That said, the choice of interpreter is an important problem.
We revisit this issue in \Cref{sec:interpreters}.
We remark that subjective prior beliefs may be expressible by allowing symbol probabilities to be nonuniform over relevant alphabets.
In this view, just as generalized length is the limit of expected information gained by realization from an encoding,
the corresponding probability in \Cref{cor:probability_from_length} may be regarded as the limit of subjective priors within a given encoding.
Subjective prior beliefs are also reflected by the choice of interpreter.
The parsimonious hyperprior over corresponding model classes from \Cref{cor:probability_from_length}, $\pP(\vPsi) = 2^{-\fLen(\vPsi)}$,
is enough to complete the Bayesian framework with well-founded justification for how we arrive at prior belief over arbitrary architectures.
When we perform Bayesian inference from a parsimonious prior or hyperprior, we call the result parsimonious rational belief.
To achieve computational feasibility, however, we still need to investigate principled restrictions of belief.
\subsection{The Principle of Information Minimization}
\label{sec:info_min}
We develop this paradigm in order to promote computational feasibility while retaining well-founded theoretical justification for resulting predictions.
As alluded to in \Cref{cor:solomonoff}, we can cast learning as an information minimization problem over our total change in belief due to observing the training data $\sData$,
selecting one or more model classes $\vPsi$, and solving for distributions over models $\vTheta$ within each class.
The principle of minimum information \citep{Evans1969}, based on the closely related principle of maximum entropy \citep{Jaynes1957},
intuitively states that by driving the information gained upon viewing the training data as low as possible, we obtain better predictions.
If a dataset contains a highly predictive pattern, then once that pattern is known we can obtain strong predictions.
As a consequence, the information gained by observing new labels drops.
In contrast, the information gained by observing new labels will remain high when there is no discernible pattern.
We derive \Cref{thm:parsimony_optimization} from a rigorous formulation of the minimum information principle, using our work regarding information as a rational measure of change in belief, and we show how this information objective can be manipulated into three terms
that provide insight into how we may understand and control complexity during learning as a constrained optimization problem.
\DeclareDocumentCommand \pRY{} {\pR(\vY \mid \vYr)}
\DeclareDocumentCommand \pQPsi{} {\pQ(\vPsi)}
\DeclareDocumentCommand \pQsPsi{} {\pQs(\vPsi)}
\DeclareDocumentCommand \pQTh{} {\pQ(\vTh)}
\DeclareDocumentCommand \pQThCPsi{} {\pQ(\vTh \mid \vPsi)}
\DeclareDocumentCommand \pQsThCPsi{} {\pQs(\vTh \mid \vPsi)}
\DeclareDocumentCommand \pQThPsi{} {\pQ(\vTh, \vPsi)}
\DeclareDocumentCommand \pQsThPsi{} {\pQs(\vTh, \vPsi)}
\DeclareDocumentCommand \pQsY{} {\pQs(\vY)}
\begin{theorem}
\label{thm:parsimony_optimization} \thmTitle{Parsimonious Inference Optimization.}
Let our training dataset be represented as an ordered pair $(\vX, \vY)$.
A model $\vTh$ computes coherent probabilities over potential labels $\vY$ from features $\vX$ as $\pP(\vY \mid \vX, \vTh)$.
Shorthand $\pP(\vY \mid \vTh)$ leaves dependence on $\vX$ implied.
A description of the model class is represented by a sequence $\vPsi$, which restricts prior belief to $\pP(\vTh \mid \vPsi)$.
The parsimonious hyperprior over potential sequences induced by generalized length is $\pP(\vPsi) = 2^{-\fLen(\vPsi)}$.
Our joint belief in labels, models, and prior encodings is given by $\pP(\vY, \vTh, \vPsi) = \pP(\vY \mid \vTh) \pP(\vTh \mid \vPsi) \pP(\vPsi)$.
Viewing the realized labels $\vYr$ from the dataset changes rational belief to $\pRY$, a distribution assigning full probability to the observed outcomes.
Let any potential choice of belief over descriptions after viewing the data be $\pQPsi$.
Likewise, within a model class $\vPsi$, an arbitrary distribution over models is $\pQThCPsi$.
The total information gained is given by the Kullback-Leibler divergence
\submitBlank
\begin{align*}
\KL{\pRY \pQThCPsi \pQPsi}{ \pP(\vY \mid \vTh) \pP(\vTh \mid \vPsi) \pP(\vPsi)}.
\end{align*}
The minimizer of this objective is equivalent to the maximizer of the following parsimony objective, expressed in three parts
\submitBlank
\begin{align*}
\omega\tAdd{[\pQThCPsi, \pQPsi]} =
& \expect_{\pQThCPsi \pQPsi} \info{\pRY}{\pP(\vY \mid \vTh)}{\pP(\vY \mid \vTh_0)}
& &\text{(prediction information)} \\
& - \expect_{\pQPsi} \KL{\pQThCPsi}{\pP(\vTh \mid \vPsi)}
& -&\text{(inference information)} \\
& - \expect_{\pQPsi} \fLen(\vPsi) + \entropy{\pQPsi},
& -&\text{(description information)}
\end{align*}
where $\vTh_0$ anchors the predictive information measurement to any fixed baseline.
\end{theorem}
\tAdd{The notation $\pRY$ is intended to emphasize that our rational belief in plausible labels is totally restricted to what we have observed after viewing the evidence.
In contrast, the arguments to the optimization objective, $\pQThCPsi$ and $\pQPsi$, do not depend on the data until optimization constrains them.}
We hold that the view of expectation taken in \Cref{thm:parsimony_optimization} is valid because it represents the known labels and the actual distributions that will be used in practice to compute predictions.
Further, this construction of information, rather than the reversed divergence $\KL{\pP(\vY, \vTh, \vPsi)}{\pRY \pQThCPsi \pQPsi}$, is necessary to avoid multiple infinities;
for each description $\vPsi$ with $\pQPsi=0$, the reversed divergence is infinite.
Moreover, we cannot avoid eliminating an infinite number of such cases from consideration.
The first term, prediction information, is the expected information gained about training data resulting from our belief in explanations.
Both secondary terms, inference information and description information, account for model complexity.
Anchoring predictive information to any fixed model $\pP(\vY \mid \vTh_0)$ allows us to coherently interpret label information as that which is gained relative to $\vTh_0$.
Any fixed predictive distribution suffices, including the prior predictive
\submitBlank
\begin{align*}
\pP(\vY \mid \vX) = \integ{d\vPsi\,d\vTh} \pP(\vY \mid \vX, \vTh) \pP(\vTh \mid \vPsi) \pP(\vPsi).
\end{align*}
However, because the prior predictive may be difficult (or impossible) to compute, it is much simpler to use a na\"ive model $\vTh_0$.
If we disregard the role of $\vPsi$ and only account for prediction information and inference information from prior belief $\pP(\vTh)$, i.e.~from the first two terms of the parsimonious inference objective,
then we recover a form of the Bayesian Occam's Razor \citep{Mackay1992a},
\submitBlank
\begin{align*}
& \info{\pRY}{\pP(\vY)}{\pP(\vY \mid \vTh_0)} = \log_2\!\left( \frac{\pP(\vYr)}{\pP(\vYr \mid \vTh_0)} \right) \\
&= \expect_{\pP(\vTh \mid \vYr)} \log_2\!\left(\frac{\pP(\vYr \mid \vTh)}{\pP(\vYr \mid \vTh_0)}\right) - \KL{\pP(\vTh \mid \vYr)}{\pP(\vTh)},
\end{align*}
provided we use the exact posterior $\pP(\vTh \mid \vYr)$ in place of $\pQThCPsi$.
Otherwise, we recover a variational inference objective that is equivalent to maximizing the Evidence Lower Bound (ELBO).
The Bayesian Occam's Razor also reveals the tradeoff between the information that explanations provide about our data and the complexity of those explanations,
but it leaves the provenance of $\pP(\vTh)$ unaddressed.
The description information terms show how our theory subsumes the Principle of Maximum Entropy.
If we were to disregard the critical role that the parsimonious hyperprior plays in controlling complexity, i.e.~dropping expected length,
we would be left with an optimization objective that drives increases in entropy within our chosen distribution of descriptions.
Doing so, however, would mean that long and complicated descriptions would be just as plausible as short and simple descriptions, as long as they are equally capable of explaining the data.
Correctly accounting for description length completes a rigorous formulation of Occam's Razor.
\DeclareDocumentCommand \sFeasible{} {\mathcal{F}}
Critically, \Cref{cor:model_inference,cor:hyper_inference} show that unconstrained optimization recovers, and is therefore consistent with, Bayesian inference and parsimonious rational belief.
As demonstrated in \Cref{sub:decision_trees}, some prior beliefs facilitate exact inference and easily allow us to take $\pQThCPsi =\pP(\vTh \mid \vYr, \vPsi)$.
\begin{corollary}
\label{cor:model_inference} \thmTitle{Optimality of Inference.}
Given a single description $\vPsi$ specifying prior belief $\pP(\vTh \mid \vPsi)$, the conditionally optimal distribution over models,
\submitBlank
\begin{align*}
\pQsThCPsi = \argmax_{\pQThCPsi}\quad
\expect_{\pQThCPsi} \info{\pRY}{\pP(\vY \mid \vTh)}{\pP(\vY \mid \vTh_0)} - \KL{\pQThCPsi}{\pP(\vTh \mid \vPsi)},
\end{align*}
is the posterior distribution $\pQsThCPsi = \pP(\vTh \mid \vYr, \vPsi)$.
\end{corollary}
\begin{corollary}
\label{cor:hyper_inference} \thmTitle{Optimality of Hyper Inference.}
Applying the optimizer from \Cref{cor:model_inference} to the objective in \Cref{thm:parsimony_optimization} produces the second optimization problem
\submitBlank
\begin{align*}
\pQsPsi = \argmax_{\pQPsi}\quad
\expect_{\pQPsi} \log_2\!\left( \frac{\pP(\vYr \mid \vPsi)}{\pP(\vYr \mid \vTh_0)} \right) - \KL{\pQPsi}{\pP(\vPsi)}.
\end{align*}
The optimizer is the hyperposterior distribution, $\pQsPsi = \pP(\vPsi \mid \vYr)$.
\end{corollary}
\tAdd{Yet, unconstrained optimization of $\pQPsi$, subject to a Turing-complete interpreter, would need to explore unlimited varieties of programs and model classes.
Instead, we can restrict the support of prior belief and posterior approximations to a feasible set, $\sFeasible = \left\{ \pQThCPsi \pQPsi \right\}$,
and \Cref{thm:parsimony_optimization} still provides a consistent framework to evaluate and compare the utility of such restrictions.}
The parsimony objective also allows us to understand and quantify memorization of training data in \Cref{cor:memorization} as a bound on the increase in model complexity that is required to achieve increased agreement between predictions and our training data.
\tAdd{This bound also holds when we restrict $\sFeasible$.}
\begin{corollary}
\label{cor:memorization} \thmTitle{Quantifying Memorization.}
We can write the combined model complexity terms as $\vChi[\pQThPsi] = \KL{\pQThPsi}{\pP(\vTh, \vPsi)}$
and let $\pQsThPsi$ be the constrained optimizer of the parsimony objective, restricted to a given feasible set $\sFeasible = \left\{ \pQThPsi \right\}$.
Let the optimal predictions be written as
\submitBlank
\begin{align*}
\pQsY = \integ{d\vPsi\, d\vTh} \pP(\vY \mid \vTh) \pQsThPsi.
\end{align*}
Every feasible alternative $\pQThPsi$ must satisfy
\submitBlank
\begin{align*}
\vChi[\pQThPsi] - \vChi[\pQsThPsi] \geq \expect_{\pQThPsi} \info{\pRY}{\pP(\vY \mid \vTh)}{\pQsY},
\end{align*}
showing that any increased agreement with training data can only be achieved by a still greater increase in model complexity.
\end{corollary}
\tAdd{\Cref{sub:polynomial_regression} includes a visualization of this complexity tradeoff, \Cref{fig:polynomial_memorization}, corresponding to potential polynomial representations for a regression model.
As our experiments demonstrate, restricting our attention to classes of simple descriptions provides a tractable means to discover models and control complexity.}
\section{Implementation}
\label{sec:implementation}
The parsimony objective acts on opportunities for compression to reduce the complexity of our belief over models through both the description of prior belief and the information gained due to inference.
While there are many ways to encode the concepts that we need to articulate prior belief, compression is only possible if the interpreter admits a range of code lengths.
Consequently, it is important to review some efficient encodings, capable of expressing increasing degrees of specificity with longer codes, that are needed by our prototype implementations.
Then we discuss our algorithms for polynomial regression followed by decision trees.
\subsection{Useful Encodings}
\label{sub:encodings}
Sometimes we need to identify one of multiple states without any principle that would allow us to break the symmetry among potential outcomes.
For example, our decision tree algorithm requires a feature dimension to be specified from $n$ possibilities.
Laplace's principle of insufficient reason indicates that our encoding should not break symmetry among hypothetical permutations of the features.
We can easily handle this case by representing each state with a single symbol from an alphabet of $n$ possibilities.
As the cardinality of the set increases, however, the information provided by realizing a symbol increases logarithmically.
Thus, this approach cannot hold for countably infinite sets, such as the integers or rational numbers, or the information would diverge.
Instead, we must break symmetry with either some notion of magnitude, some notion of precision, or both.
\subsubsection{Nonnegative integers}
Rissanen's universal prior over integers \citep{Rissanen1983} can be derived by counting outcomes over binary sequences of increasing length.
Provided the sequence length is known, any nonnegative integer $z$ can be encoded with $\lfloor \log_2(z+1) \rfloor$ binary digits,
as shown in \Cref{fig:rissanen_1}.
Yet, the sequence length is also a nonnegative integer, thus a recursive encoding of arbitrary nonnegative integers will have length approaching
\submitBlank
\begin{align*}
\log_2^*(z) =& \lfloor \log_2(z+1) \rfloor + \lfloor \log_2\left( \lfloor \log_2(z+1) \rfloor + 1 \right) \rfloor\\
& + \lfloor \log_2\left( \lfloor \log_2\left( \lfloor \log_2(z+1) \rfloor +1 \right) \rfloor + 1\right) \rfloor +\cdots.
\end{align*}
\begin{table}[h]
\centering
\begin{tabular}{| r | r | r | r | r | r | r | r | r | r | r |}
\thickhline
Sequence & & 0 & 1 & 00 & 01 & 10 & 11 & 000 & 001 & $\cdots$ \\ \thickhline
$z$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & $\cdots$ \\ \thickhline
\end{tabular}
\caption{\small
Enumeration of binary sequences of increasing length.
}
\label{fig:rissanen_1}
\end{table}
\uAdd{The Elias $\gamma$ coding \citep{Elias1975} simply represents the sequence length with a negated unary prefix.
Elias $\delta$ codes add one recursion, thus representing the sequence length with a $\gamma$ code.
Elias $\omega$ codes allow for arbitrary recursions by building up positive integers that either represent the length of the next sequence or the final outcome.
Decoding begins with the initial value $N=1$.
If the next bit is $0$, then $N$ is the final value.
Otherwise, the leading $1$ followed by $N$ bits encodes the updated value of $N$.
The process repeats until the next segment has a leading $0$.
We can take $z=N-1$ for nonnegative integers.}
\uAdd{In practice, however, we do not need representations for arbitrarily large integers and we can obtain more efficient codes with a limited maximum representation.
For example, we can start with a single bit to represent the length of the subsequent code segment and iterate representations from \Cref{fig:rissanen_1} a predetermined number of times.
If we know how many recursions are needed to represent a maximum integer, we obtain a code that approximates Rissanen's universal prior.}
\Cref{fig:rissanen_2} shows how the first few Rissanen codes are formed.
This encoding becomes very efficient for large integers, but the number of length recursions must be set high enough.
\begin{table}[h]
\centering
\begin{tabular}{| r | r | r | r | r | r | r | r |}
\thickhline
$\vPsi_0$ & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ \thickhline
$z_0$ & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ \thickhline
$\vPsi_1$ & & 0 & 0 & 1 & 1 & 1 & 1 \\ \thickhline
$z_1$ & 0 & 1 & 1 & 2 & 2 & 2 & 2 \\ \thickhline
$\vPsi_2$ & & 0 & 1 & 00 & 01& 10 & 11 \\ \thickhline
$z_2$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \thickhline
\end{tabular}
\caption{\small
The first sequence $\vPsi_0$ has an implied length of 1 bit.
The represented outcome $z_0$ indicates the length of $\vPsi_1$ and so on.
$\text{Rissanen}_i$ codes are formed by concatenation $(\vPsi_0, \vPsi_1, \ldots, \vPsi_i)$.
With three length recursions, numbers 0 through 126 are compressed to use between 1 and 9 bits.
}
\label{fig:rissanen_2}
\end{table}
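This recursive construction can be sketched as follows; the encoder is our own illustration, working backward from $z$ so that each known-length segment encodes $z_j = 2^{z_{j-1}} - 1 + \text{value}$, and its outputs agree with the $\text{Rissanen}_i$ entries tabulated here.

```python
def rissanen_encode(z, recursions):
    """Rissanen-style code with a fixed number of length recursions.

    Working backward from z: each segment of known length L encodes
    value = z - (2**L - 1), and L becomes the integer for the next
    (shorter) segment; the final segment is the single leading bit."""
    segments = []
    for _ in range(recursions):
        length = (z + 1).bit_length() - 1        # floor(log2(z + 1))
        value = z - (2 ** length - 1)
        segments.append(format(value, "b").zfill(length) if length else "")
        z = length
    if z > 1:
        raise ValueError("not representable with this many recursions")
    segments.append(str(z))
    return "".join(reversed(segments))
```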
\uAdd{We can also obtain good compression by using a single symbol to indicate the length of the remaining sequence.
Length-symbol codes also approximate the scaling invariance of the Jeffreys prior.
See \Cref{sec:ardprior} for further discussion of this property.
\Cref{fig:unary} provides a comparison.
Although we show codes with a 2-bit length symbol for easy comparison to other binary codes, the length symbol does not need to have a binary representation in general.}
\DeclareDocumentCommand \norep{}{n.r.}
\begin{table}[h]
\centering\uAdd{
\begin{tabular}{| r | r | r | r | r | r | r | r | r |}
\thickhline
$z$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \thickhline
$\text{Unary}$ & 0 & 10 & 110 & 1110 & 11110 & 111110 & 1111110 & 11111110 \\ \thickhline
$\text{Elias } \gamma$ & 1 & 010 & 011 & 00100 & 00101 & 00110 & 00111 & 0001000 \\ \thickhline
$\text{Elias } \delta$ & 1 & 0100 & 0101 & 01100 & 01101 & 01110 & 01111 & 00100000 \\ \thickhline
$\text{Elias } \omega$ & 0 & 100 & 110 & 101000 & 101010 & 101100 & 101110 & 1110000 \\ \thickhline
$\text{Rissanen}_1$ & 0 & 10 & 11 & \norep & \norep & \norep & \norep & \norep \\ \thickhline
$\text{Rissanen}_2$ & 0 & 100 & 101 & 1100 & 1101 & 1110 & 1111 &\norep \\ \thickhline
$\text{Rissanen}_3$ & 0 & 1000 & 1001 & 10100 & 10101 & 10110 & 10111 & 1100000 \\ \thickhline
$\text{Length-symbol}$ & 00 & 010 & 011 & 1000 & 1001 &1010 & 1011 & 11000 \\ \thickhline
\end{tabular}
\caption{\small
Nonnegative integers with unary codes, Elias codes, Rissanen codes, and a 2-bit length-symbol code.
Integers that have no representation are indicated by \norep
}}
\label{fig:unary}
\end{table}
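The 2-bit length-symbol row of the table can be reproduced with the following sketch (a helper of our own): the fixed-width symbol gives the payload width $k$, after which $z - (2^k - 1)$ follows in exactly $k$ bits.

```python
def length_symbol_encode(z, symbol_bits=2):
    """Length-symbol code: a fixed-width symbol gives the payload width k,
    then z - (2**k - 1) follows in exactly k bits.

    With a 2-bit length symbol, k ranges over 0..3 and 0 <= z <= 14."""
    k = 0
    while z >= 2 ** (k + 1) - 1:
        k += 1
    if k >= 2 ** symbol_bits:
        raise ValueError("z too large for this length symbol")
    payload = format(z - (2 ** k - 1), "b").zfill(k) if k else ""
    return format(k, "b").zfill(symbol_bits) + payload
```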
\subsubsection{Binary Fractions}
\label{sub:binary_fractions}
It will also be useful to represent a dense distribution of fractions on the open unit interval $\vQ \in (0,1)$, thus allowing us to approximate any real number to arbitrary precision by a variety of potential transformations.
Binary fractions, with a denominator that is an integer power of 2 and a numerator that is odd, provide such a set with a convenient encoding.
These fractions can be written as
\submitBlank
\begin{align*}
q = \frac{2 i - 1}{2^{z+1}}
\quad\text{where}\quad
i \in [2^z]
\quad\text{and}\quad
z \in \mathbb{Z}_{\geq 0}.
\end{align*}
If we desire all fractions of a specific precision, corresponding to a fixed $z$, to have the same encoding length, then the numerator may be regarded as a single symbol with $2^z$ outcomes or $z$ bits.
We must also represent the precision $z$ with one of the integer encodings above.
\uAdd{As with the length-symbol codes, we can also represent $z$ with a single symbol that indicates the number of numerator bits to read.}
\Cref{fig:open_frac} shows some examples.
We can then translate and scale $q$ to represent an angle on the real Riemann circle; the corresponding real numbers are $r = \tan(\pi(q - 1/2))$,
but other choices are also possible, such as inverting the normal cumulative distribution function, $r = \sqrt{2} \erf^{-1}(2q-1)$.
Multiplying the result by some $\sigma>0$ would set any desirable scale of outcomes.
\begin{table}[h]
\centering\uAdd{
\begin{tabular}{| r | r | r | r | r | r | r | r | r | r |}
\thickhline
$q$ & $1/2$ & $1/4$ & $3/4$ & $1/8$ & $3/8$ & $5/8$ & $7/8$ & $1/16$ & $\cdots$ \\ \thickhline
$\text{Code}$ & 00 & 010 & 011 & 1000 & 1001 & 1010 & 1011 & 11000 & $\cdots$ \\ \thickhline
\end{tabular}
\caption{\small
Leading binary fractions on the open unit interval with a 2-bit encoding of $z$.
}}
\label{fig:open_frac}
\end{table}
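The fraction encoding and the Riemann-circle transformation can be sketched as follows (helper names are ours; the codes agree with the table above, using a 2-bit precision symbol followed by the numerator index in exactly $z$ bits):

```python
import math

def encode_fraction(i, z, symbol_bits=2):
    """Code for q = (2*i - 1) / 2**(z + 1): the precision z as a fixed-width
    symbol, then the numerator index i - 1 in exactly z bits."""
    assert 1 <= i <= 2 ** z and 0 <= z < 2 ** symbol_bits
    payload = format(i - 1, "b").zfill(z) if z else ""
    return format(z, "b").zfill(symbol_bits) + payload

def to_real(i, z):
    """Map q onto the real line via the Riemann-circle angle tan(pi*(q - 1/2))."""
    q = (2 * i - 1) / 2 ** (z + 1)
    return math.tan(math.pi * (q - 0.5))
```

For instance, $q = 3/4$ encodes as \texttt{011} and maps to $\tan(\pi/4) = 1$ on the real line.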
\subsection{Polynomial Regression}
\label{sub:polynomial_regression}
Our regression prototype directly encodes a polynomial with a description of coefficients and captures uncertainty with a hyperposterior ensemble.
Since any $n$th degree polynomial can be written as a linear combination of $n+1$ basis functions of ascending degree, we first need to identify the degree of the polynomial.
\uAdd{Our experiments show that a length-symbol encoding serves this purpose well.}
Coefficients are represented in the Chebyshev basis.
Since critical points equioscillate in this basis, the corresponding polynomial coefficients are interpretable as the length scales of oscillation.
Because we expect all $n+1$ coefficients to take nontrivial values, the encoding reserves a variable-length segment for each coefficient, rather than attempting to use a sparse encoding.
Still, natural sparsity will result from binary fractional codes, representing angles on the Riemann circle, after transforming them to the corresponding real numbers.
\begin{algorithm}[h!]
\caption{Parsimonious Polynomial Regression Gibbs Sampler}
\label{alg:pars_polyreg}
\fontsize{10}{16}\selectfont
\begin{algorithmic}[1]
\Require Vectors $\vX$ and $\vY$ provide abscissas and ordinates, respectively,
with $\vY$ scaled so that the intrinsic stochasticity of the process is $\sigma=1$.
\tAdd{Generate $n$ samples from polynomials using at most $b$ Chebyshev basis functions.}
\Ensure $\mPsi = \{\vPsi\}$ is an ensemble of hyperposterior polynomial descriptions $\vPsi_i \sim \pP(\vPsi \mid \mX, \vY)$.
\Function{ParsimoniousRegression}{$\vX, \vY, n, b$}
\State Initialize $\vPsi$ to the zero polynomial
\For{each sample iteration $i = 1, 2, \dots, n$}
\State Generate a random permutation of the basis functions.
\For{each permuted coefficient $j = 1, 2, \dots, b$}
\State Identify the leading nonzero coefficient $k$ in $\vPsi$.
\State Form tensor product of all representable perturbations over both $j$ and $k$.
\State Update $\vPsi$ by sampling the hyperposterior, restricted to these perturbations.
\EndFor
\State \tAdd{Add $\vPsi$ to the hyperposterior ensemble $\mPsi$.}
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
\Cref{alg:pars_polyreg} samples the hyperposterior using a nonreversible sequence of reversible samples over all representable polynomial coefficients.
\tAdd{
We regard each sample of polynomial coefficients as a Dirac delta distribution concentrated at a single polynomial.
This is equivalent to treating the complexity hyperprior as an ordinary prior over polynomial descriptions.
Because our encoding length is most sensitive to the degree, our sampler proposes all joint perturbations of the leading nonzero and each other coefficient in a randomly permuted order.}
All other coefficients are held fixed in each proposal set.
For each coefficient, the sampler considers all binary fractions with $z \leq 4$.
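To make one step of the sampler concrete, the following simplified Python sketch (ours) scores every representable value of a single coefficient by likelihood times the complexity hyperprior and samples the restricted hyperposterior. It departs from \Cref{alg:pars_polyreg} in several assumed details: it uses the monomial rather than the Chebyshev basis, perturbs one coefficient instead of the joint leading-coefficient pairs, and assumes a code of one flag bit, a 3-bit precision symbol, and $z$ numerator bits.

```python
import math, random

def representable_values(z_max=4):
    """Zero plus every binary fraction with z <= z_max, mapped to the reals.
    Code lengths follow the assumed scheme described above."""
    vals = [(0.0, 1)]  # the flag bit alone marks a zero coefficient
    for z in range(z_max + 1):
        for i in range(1, 2 ** z + 1):
            q = (2 * i - 1) / 2 ** (z + 1)
            vals.append((math.tan(math.pi * (q - 0.5)), 1 + 3 + z))
    return vals

def resample_coefficient(coeffs, j, xs, ys, values):
    """Gibbs-style update: sample coefficient j from the hyperposterior
    restricted to representable values, holding all others fixed."""
    log_posts = []
    for value, bits in values:
        coeffs[j] = value
        # Gaussian log likelihood with unit noise; monomial basis for brevity.
        ll = sum(-0.5 * (y - sum(c * x ** k for k, c in enumerate(coeffs))) ** 2
                 for x, y in zip(xs, ys))
        log_posts.append(ll - bits * math.log(2))
    m = max(log_posts)
    weights = [math.exp(lp - m) for lp in log_posts]
    idx = random.choices(range(len(values)), weights=weights)[0]
    coeffs[j] = values[idx][0]
    return coeffs
```

When the data carry no signal, the short zero code dominates the restricted hyperposterior, illustrating how sparsity emerges from the encoding.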
\begin{figure}[h!]
\centering
\includegraphics[width=0.825\textwidth]{r1_fig.png}
\caption{\vspace{-0.1in}\small
Regression experiments comparing leave-one-out cross validation (left column), the hyper-MAP (middle column), and the hyperposterior aggregated over 50 samples (right column).
All data come from the same ground truth.
The hyper-MAP is consistently simpler than the leave-one-out aggregate.
The hyperposterior aggregate naturally captures extrapolation risk, increasing uncertainty as we deviate from data.
More data allow modest increases in complexity to reduce uncertainty.}
\label{fig:regression}
\end{figure}
\tAdd{\Cref{fig:regression} compares the complexity suppression of leave-one-out cross-validation with our results from \Cref{alg:pars_polyreg} using 20th degree polynomials.
In order to provide a fair comparison, we generate 21 full leave-one-out ensembles, corresponding to each polynomial degree, and then select the ensemble with the best average over holdout log-likelihoods.
Thus, both approaches explore the same model families, wherein all coefficients above a certain degree are zero.
We see that, if the hyperparameter search covers a range of dimensions, the standard approach achieves some, albeit limited, success in identifying relatively low complexity ensembles.
}\uAdd{We also tested unary, Elias $\gamma$, and sufficient Rissanen codes for both the polynomial degree and corresponding binary fractions using the data in the second row of \Cref{fig:regression}.
Specifically, we used $\text{Rissanen}_3$ codes for the polynomial degree and $\text{Rissanen}_2$ for the binary fraction precision.
The resulting aggregate parsimony objectives are compared in \Cref{fig:reg_codes}.
}
\begin{table}[h]
\centering\uAdd{
\begin{tabular}{| r | r | r | r | r |}
\thickhline
Polynomial encoding & Unary & Elias $\gamma$ & Rissanen & Length-symbol \\ \thickhline
Parsimony objective (bits) & $103.0$ & $104.5$ & $101.9$ & $104.8$ \\ \thickhline
\end{tabular}
\caption{\small
Length-symbol codes give the optimal parsimony objective for this dataset.
}}
\label{fig:reg_codes}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.825\textwidth]{r3c_fig.png}
\caption{\vspace{-0.1in}\small
Memorization demonstration with regression.
We plot prediction information against model complexity (left) for models obtained from \Cref{alg:pars_polyreg} (left cluster)
and for likelihood samples obtained from the same algorithm by disregarding generalized length (right cluster).
Likelihood samples have much longer descriptions, some achieving better agreement with the data (memorization domain).
Neither the MLE (middle) nor the likelihood ensemble (right) performs well.
\vspace{-0.1in}
}
\label{fig:polynomial_memorization}
\end{figure}
\tAdd{\Cref{fig:polynomial_memorization} demonstrates memorization by applying \Cref{alg:pars_polyreg} to the same data in the second row of \Cref{fig:regression}, but replacing the hyperprior with a uniform distribution,
effectively sampling the likelihood by disregarding generalized length.
Plotting prediction information against model complexity (left) for both ensembles shows that the likelihood ensemble has much higher complexity.
If we constrain feasible beliefs to only single samples from either of the ensembles,
then the MAP defines the complexity tradeoff limit in \Cref{cor:memorization}, as well as the corresponding memorization domain in which
models may gain increased agreement with the data at the cost of an even greater increase in complexity.
Memorization is worst at the Maximum Likelihood Estimator (MLE), found within the likelihood ensemble.
The corresponding predictions (middle) closely fit the data.
Predictions from the likelihood ensemble (right) show increased uncertainty, but still violate Occam's Razor,
thus underscoring the central role of generalized length in suppressing memorization.}
\subsection{Decision Trees}
\label{sub:decision_trees}
\DeclareDocumentCommand \nLabel{} {\ell}
\DeclareDocumentCommand \nFeature{} {k}
\DeclareDocumentCommand \iLabel{} {y}
Decision trees predict discrete classifications, labels, by evaluating a sequence of binary decisions.
Each case in our training dataset is represented by both a feature vector $\vX$, with $\nFeature$ components that are each comparable to a threshold, and an enumerated label $y \in [\nLabel]$, where $\nLabel$ is the number of labels.
Evaluation begins at the root node, representing the axis-aligned bounding box of potential features.
The node specifies a feature dimension and a comparison threshold serving to partition the feature domain into two components, the left and right child nodes.
The comparison outcome indicates membership and the process iterates so that a sequence of binary decisions filters each case through a series of increasingly restrictive partitions,
each of which is intended to simplify the classification problem.
This filtration terminates at a leaf node that specifies either a single label or, more generally, probabilities over all labels.
Decision trees are trained using a recursive process that also begins at the root node.
Given the set of training cases that are members of a node, we must either construct a branch structure or halt splitting and finalize label probabilities.
The conventional procedure evaluates every potential splitting outcome with some utility function and then chooses the optimizer.
While a wide variety of utility functions are used in practice,
a standard information-theoretic approach maximizes the reduction in entropy due to the splitting.
Let $\vC_{\iLabel}$ represent the count of training cases with the label $y$ that fall within a given node domain.
If the node were a leaf, the frequentist approach to predicting label probabilities would use the sample mean
\submitBlank
\begin{align*}
\vMu_{\iLabel} = \frac{\vC_{\iLabel}}{c}
\quad\text{where}\quad
c = \sum_{\iLabel=1}^{\nLabel} \vC_{\iLabel}.
\end{align*}
We denote the corresponding variables for hypothetical left and right child nodes using superscripts, e.g.~$\vC_{\iLabel}^{(L)}$ and $\vC_{\iLabel}^{(R)}$, respectively.
The reduction in entropy associated with a potential splitting, weighted by the fraction of cases that appear within the respective domains of each child, is
\submitBlank
\begin{align*}
& \Delta S =
\frac{c^{(L)}}{c} \sum_{\iLabel=1}^{\nLabel} \vMu_{\iLabel}^{(L)} \log_2( \vMu_{\iLabel}^{(L)} ) + \frac{c^{(R)}}{c} \sum_{\iLabel=1}^{\nLabel} \vMu_{\iLabel}^{(R)} \log_2( \vMu_{\iLabel}^{(R)} ) - \sum_{\iLabel=1}^{\nLabel} \vMu_{\iLabel} \log_2( \vMu_{\iLabel} ).
\end{align*}
The splitting that maximizes $\Delta S$ is accepted and the resulting child nodes are trained recursively until no further reduction is possible, i.e.~when a node contains only cases of a single label.
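For reference, a minimal Python sketch of this conventional utility computation (ours; labels are arbitrary hashable values and entropies are in bits):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of label frequencies; empty nodes contribute 0."""
    c = sum(counts.values())
    if c == 0:
        return 0.0
    return -sum((n / c) * math.log2(n / c) for n in counts.values() if n > 0)

def split_gain(labels_left, labels_right):
    """Entropy reduction Delta S for a candidate split,
    weighting each child by its fraction of the cases."""
    left, right = Counter(labels_left), Counter(labels_right)
    parent = left + right
    c = sum(parent.values())
    return (entropy(parent)
            - (sum(left.values()) / c) * entropy(left)
            - (sum(right.values()) / c) * entropy(right))
```

A split that perfectly separates two balanced labels gains one full bit, while a split that leaves both children as mixed as the parent gains nothing.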
Bootstrap aggregation constructs an ensemble of decision trees, a random forest, by resampling the dataset.
This consists of forming a new dataset, the same size as the original, by sampling the original dataset uniformly with replacement.
Predictions are then aggregated by taking the average over the ensemble.
\tAdd{Our approach encodes each decision tree as a state of prior belief on the corresponding partition of feature coordinates.
We describe the partition recursively using an encoding for each node in the binary search tree.
Each node begins by specifying whether it is a leaf or branch with 1 bit.
If the node is a branch, a symbol from $\nFeature$ possibilities gives the feature dimension used in the comparison, thus contributing $\log_2(\nFeature)$ bits to the generalized length.
Since we can translate and scale features in the given dimension to the unit interval $[0,1]$,
we can represent any splitting threshold as a binary fraction on the open unit interval.
Note that it is never useful to split at either $0$ or $1$.
The encoding continues by describing the left and right children.}
\tAdd{If the node is a leaf, its feature partition has an independent prior, a flat Dirichlet distribution, over the simplex of all coherent label probabilities.}
Let $\vTh$ represent a vector of label probabilities within a single leaf.
We can perform exact inference using label counts $\vC_{\iLabel}$ to recover Laplace's rule of succession
\submitBlank
\begin{align*}
\expect_{\pP(\vTh \mid \vYr)} \left[ \vTh_{\iLabel} \right] = \frac{\vC_{\iLabel} + 1}{c + \nLabel},
\end{align*}
which is the posterior-predictive distribution for labels of new data that land within this leaf.
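A one-function Python sketch of this posterior-predictive rule (ours, for illustration):

```python
def laplace_rule(counts, n_labels):
    """Posterior-predictive label probabilities under a flat Dirichlet prior:
    (C_y + 1) / (c + l), which always sum to one."""
    c = sum(counts)
    return [(counts[y] + 1) / (c + n_labels) for y in range(n_labels)]
```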
\tAdd{Our parsimonious decision trees are generated using a recursive process that begins by calling \Cref{alg:pars_node} on the full training dataset to form the root node.
The remaining structure is generated by sampling both a feature dimension and threshold or by halting to form a leaf node that infers label probabilities as above.
If the structure splits, then data are partitioned accordingly, and the process repeats on the left and right children.}
\begin{algorithm}
\caption{Parsimonious Node Construction for Decision Trees}
\label{alg:pars_node}
\fontsize{10}{16}\selectfont
\tAdd{
\begin{algorithmic}[1]
\Require $\mX$ is a matrix of features and $\vY$ is the corresponding vector of labels for data in this node's partition.
\tAdd{We sample an approximate annealed hyperposterior, controlled by an annealing schedule $\vAlpha$ and recursion depth $d$.}
\Ensure Sample a node description $\vPsi$ with proposal probability $\pS(\vPsi)$ and unnormalized hyperposterior $\pP(\vPsi \mid \mX, \vY)$.
\Function{$\left[\vPsi,\, \pS(\vPsi),\, \pP(\vPsi \mid \mX, \vY) \right]$ = ParsimonyNode}{$\mX, \vY, \vAlpha, d$}
\State Let $\vPsi_0$ describe this node as a leaf.
\State Compute the likelihood $\pP(\vY \mid \vPsi_0)$ from a flat Dirichlet prior over all label probabilities.
\State Enumerate every possible feature domain splitting for this node as $i \in [n]$.
\For{\label{line:pars_node_for} each splitting $i = 1, 2, \dots, n$}
\State Let $\vPsi_i$ describe this node as a branch.
\State Partition data with the splitting, $(\mX^{(L)}, \vY^{(L)})$ and $(\mX^{(R)}, \vY^{(R)})$.
\State \label{line:apprx_post} Approximate the likelihood assuming both children are independent leaf nodes
\begin{align*}
\pP(\vY \mid \vPsi_i) \approx \pP(\vY^{(L)} \mid \vPsi_i)\pP(\vY^{(R)} \mid \vPsi_i).
\end{align*}
\EndFor
\State Sample $\vPsir$ from $\pS(\vPsi_i) \propto \pP(\vY \mid \vPsi_i)^{\vAlpha_d} \pP(\vPsi_i)$ over $i = 0, 1, \ldots, n$.
\If{$\vPsir$ is a branch node}
\State Partition data accordingly as $(\mX^{(L)}, \vY^{(L)})$ and $(\mX^{(R)}, \vY^{(R)})$.
\State Recursively construct left and right child nodes as
\begin{align*}
\left[\vPsi^{(L)},\, \pS(\vPsi^{(L)}),\, \pP(\vPsi^{(L)} \mid \mX^{(L)}, \vY^{(L)}) \right] & = \textbf{ParsimonyNode}(\mX^{(L)}, \vY^{(L)}, \vAlpha, d+1)\quad\text{and}\\
\left[\vPsi^{(R)},\, \pS(\vPsi^{(R)}),\, \pP(\vPsi^{(R)} \mid \mX^{(R)}, \vY^{(R)}) \right] & = \textbf{ParsimonyNode}(\mX^{(R)}, \vY^{(R)}, \vAlpha, d+1).
\end{align*}
\State Concatenate descriptions, $\vPsi = \left[ \vPsir,\; \vPsi^{(L)}, \vPsi^{(R)} \right]$.
\State Compose sample probabilities, $\pS(\vPsi) = \pS(\vPsir) \pS(\vPsi^{(L)})\pS(\vPsi^{(R)})$.
\State Compose the hyperposterior
\begin{align*}
\pP(\vPsi \mid \mX, \vY) = \pP(\vPsi^{(L)} \mid \mX^{(L)}, \vY^{(L)}) \pP(\vPsi^{(R)} \mid \mX^{(R)}, \vY^{(R)}) \pP(\vPsir).
\end{align*}
\Else{\;($\vPsir$ is the leaf node)}
\State Set $\vPsi = \vPsir$ and $\pS(\vPsi) = \pS(\vPsir)$.
\State Compute the unnormalized hyperposterior, $\pP(\vPsi \mid \mX, \vY) = \pP(\vY \mid \vPsir) \pP(\vPsir)$.
\EndIf
\EndFunction
\end{algorithmic}
}
\end{algorithm}
\tAdd{If we wanted to sample each splitting from the exact posterior, we would need to marginalize over all elaborations to the node structure that could follow.
Since the number of such structures grows exponentially with depth, this would not be computationally feasible.
Instead, we must sample an approximate posterior by assuming children will be leaf nodes, as in the standard approach.
Unfortunately, this approximation can generate over-attractive splitting domains, thus making it difficult to sample high posterior alternatives.
To mitigate this problem, we anneal the likelihood in the posterior approximation to increase sample diversity.
In our experiments, we used an annealing schedule that disregards the likelihood for the first two branch depths, $\vAlpha = [0,\, 0,\, 1,\, 1,\,\ldots]$.
The hyperposterior predictive integral can then be corrected using importance weighting over the ensemble of decision trees.
For each tree $t$ with a description $\vPsi_t$, we need the probabilities of both the composition of sampled proposals $\pS(\vPsi_t)$ and the posterior $\pP(\vPsi_t \mid \mX, \vY)$, up to the unknown normalization.
This gives importance weights}
\submitBlank
\begin{align*}
w_t \propto \frac{\pP(\vPsi_t \mid \vYr)}{\pS(\vPsi_t)}
\quad\text{so that}\quad
\sum_t w_t = 1.
\end{align*}
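In practice, both quantities are available only as log probabilities, so the weights are best normalized in log space; a small Python sketch of this self-normalization (ours):

```python
import math

def importance_weights(log_posts, log_props):
    """Self-normalized importance weights w_t proportional to
    P(Psi_t | Y) / S(Psi_t), computed stably via log-sum-exp."""
    log_w = [lp - ls for lp, ls in zip(log_posts, log_props)]
    m = max(log_w)  # subtract the max before exponentiating
    unnorm = [math.exp(lw - m) for lw in log_w]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```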
Because we are performing exact inference, we could skip the following information analysis and compute the log likelihood directly from label counts
\submitBlank
\begin{align*}
\log_2\left( \pP(\vYr \mid \vPsi) \right) = \log_2\!\left(\frac{\Gamma(\nLabel)}{\Gamma(c+\nLabel)}\right) +\sum_{\iLabel=1}^{\nLabel} \log_2(\Gamma(\vC_{\iLabel}+1)).
\end{align*}
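This closed form can be evaluated with log-gamma functions; the following Python sketch (ours) also checks it against the chain rule built from Laplace's rule of succession, which must agree:

```python
import math

def log2_marginal_likelihood(counts):
    """Dirichlet-multinomial log2 marginal likelihood of an ordered label
    sequence with the given counts, under a flat Dirichlet prior."""
    l, c = len(counts), sum(counts)
    out = math.lgamma(l) - math.lgamma(c + l)
    out += sum(math.lgamma(n + 1) for n in counts)
    return out / math.log(2)

def log2_sequential(labels, n_labels):
    """Chain rule using Laplace's rule of succession at each step;
    agrees with the closed form above for any ordering."""
    seen = [0] * n_labels
    out = 0.0
    for y in labels:
        out += math.log2((seen[y] + 1) / (sum(seen) + n_labels))
        seen[y] += 1
    return out
```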
However, we can use this opportunity to demonstrate how the analysis would proceed if a variational approximation were used
\tAdd{by simply replacing $\pP(\vTh \mid \vYr)$ below with any distribution from a feasible set, $\pQTh \in \sFeasible$.}
The amount of information due to change in model belief is
\submitBlank
\begin{align*}
&\KL{\pP(\vTh \mid \vYr)}{\pP(\vTh)} \\
&= \frac{1}{\log(2)} \left(
\log\!\left(\frac{\Gamma(c+\nLabel)}{\Gamma(\nLabel)}\right) - c \digamma(c + \nLabel)
+ \sum_{\iLabel=1}^{\nLabel} \vC_{\iLabel} \digamma(\vC_{\iLabel} + 1) - \log(\Gamma(\vC_{\iLabel} + 1)) \right)
\end{align*}
where $\digamma(x) = \frac{d}{dx} \log(\Gamma(x))$ is the digamma function.
The prediction information gained about labels from a uniform prior is
\submitBlank
\begin{align*}
\expect_{\pP(\vTh \mid \vYr)} \info{\pRY}{\pP(\vY \mid \vTh)}{\pP(\vY \mid \vTh_0)}
= \frac{1}{\log(2)} \left( c \log(\nLabel) - c \digamma(c+\nLabel) + \sum_{\iLabel=1}^{\nLabel} \vC_{\iLabel} \digamma(\vC_{\iLabel}+1) \right).
\end{align*}
Subtracting inference information from predictive information recovers the log likelihood up to an additive constant.
\begin{figure}[b!]
\centering
\includegraphics[width=0.75\textwidth]{dt1_fig.png}
\vspace{-0.1in}
\caption{\small
These first decision tree experiments use a highly skewed generative process with well-separated label domains.
Conventional decision trees and random forests, columns 1 and 3, obtain highly confident predictions despite having few data.
Our parsimonious trees and hyperposterior aggregates, columns 2 and 4, only gradually reduce uncertainty and
demonstrate better extrapolation uncertainty.
\vspace{-0.1in}
}
\label{fig:decision_tree_1}
\end{figure}
Our first set of decision tree experiments, \Cref{fig:decision_tree_1}, examines learning from a generative process that cleanly partitions the data into regions containing a single label.
Row one compares models that learn from a single sample.
Rows two and three learn from 25 and 100 samples, respectively.
The leading two columns compare conventional decision trees with parsimonious decision trees.
The last two columns compare bootstrap aggregation with our hyperposterior aggregates.
\tAdd{Every aggregate ensemble contains 1000 decision tree samples.}
This is a highly skewed generative process;
even with 25 samples, the second row still has no realizations of a blue label.
Yet, the parsimonious aggregate predictions in both rows 1 and 2 naturally increase uncertainty as the prediction domain deviates from the training data.
In contrast, the conventional approach obtains absolute certainty in regions that lack data.
Blue labels finally appear in the third row, showing how the parsimonious forest reacts to skewed data.
This generative process is incapable of generating data in the off-diagonal regions,
but without any way of knowing that, we should be highly skeptical of certainty in the absence of evidence.
\begin{figure}[b!]
\centering
\includegraphics[width=0.75\textwidth]{dt2_fig.png}
\vspace{-0.1in}
\caption{\small
Our second decision tree experiments investigate a generative process with smooth mixing of labels.
Conventional approaches increase complexity rapidly with more data, yielding predictive artifacts that hew to few points.
In contrast, parsimonious trees remain simple, only gaining confidence with sufficient evidence.
\vspace{-0.1in}
}
\label{fig:decision_tree_2}
\end{figure}
The second set of experiments, \Cref{fig:decision_tree_2}, examines a generative process that mixes labels.
There is nonzero probability of generating a point of either label at any location, but red labels are more likely to appear on the left and blue on the right.
We observe that typical decision trees are more complicated than their parsimonious counterparts, as expected.
Moreover, complexity increases rapidly as our dataset grows.
In contrast, parsimonious decision trees increase complexity gradually.
The typical approach also yields confident artifacts in the vicinity of few data,
whereas parsimonious trees and parsimonious forests only gradually reduce uncertainty.
\section{Discussion}
\label{sec:discussion}
Because our formulation of complexity accounts for information from arbitrary descriptions,
it already contains the functionality needed to address a wide variety of challenges.
First, we examine how to include other sources of prior belief in this framework and how to extend it to prior belief over multiple interpreters.
Second, we discuss how changing the scope of descriptions to account for symbols generated and communicated by elementary operations during the evaluation of predictions provides a mechanism to prefer fast algorithms.
Third, we explore how other Bayesian hyperpriors relate to description complexity.
Fourth, we compare the non-Bayesian treatment of probability in Rissanen's Minimum Description Length to our approach.
Finally, we offer our concluding remarks.
\subsection{Comparing and Inferring Interpreters}
\label{sec:interpreters}
We can easily compare interpreters within the same theoretical framework by identifying a common \tAdd{\textit{language},
by which we indicate an interpreter that is also Turing-complete.
Let $\mPhi = \left\{ \vPhi_i \mid i \in [n]\right\}$ represent an ensemble of interpreters written in the common language $\vPhi^*$.
If $\vPhi_i$ is capable of producing a state $\vObj$ from an encoding $\vPsi_i(\vObj)$ and the interpreter itself is encoded by $\vPsi^*(\vPhi_i)$ within the common language,
then applying the same hyperprior gives
\begin{align*}
\pP(\vObj, \vPhi_i) = \pP(\vObj \mid \vPhi_i) \pP(\vPhi_i) = 2^{-\fLen(\vPsi_i(\vObj)) - \fLen(\vPsi^*(\vPhi_i))}.
\end{align*}
This is equivalent to simply prepending each state encoding with that of the relevant interpreter, $\vPsi^*(\vObj, \vPhi_i) = \left[\vPsi^*(\vPhi_i),\, \vPsi_i(\vObj)\right]$,
and regarding the common language as the shared interpreter of all such codes.}
\tAdd{Although this approach merely shifts the burden of how we derive interpreter validity to another level of abstraction,
we can still obtain practical insight into credible interpreters with this view.
Nefarious interpreters, such as the third basis in \Cref{fig:sensitivity}, effectively transfer complexity from an otherwise long encoding, subject to a simple interpreter, to a short encoding, subject to a long interpreter.
Yet, when we shift the derivation of plausibility to a common language, the excessive complexity becomes visible in both cases.}
This approach also provides a formulation for grammar discovery through inference.
If we have several datasets and associated learning problems that should be explainable within a common language, then we can infer the structure of an efficient interpreter.
An interpreter that represents common functions among the different learning problems efficiently will be more likely than one that solves a single problem well by hiding complex functions with shortcuts.
\tAdd{Ultimately, however, deriving interpreter validity from a common language alone creates a problem of infinite regress;
the question remains of how we may derive plausibility among several languages.
Although we hold that this question does not impede practical applications, it can be answered by appealing to consistency through simulation.
\Cref{thm:utm_ensemble} presents a unique universal prior, consistent with our theoretical framework, for any ensemble of languages.
The proof immediately follows.}
\begin{theorem}
\label{thm:utm_ensemble}
\tAdd{\thmTitle{Universality of Consistent Belief in Turing-Complete Ensembles.}
Given an ensemble of Turing-complete interpreters, $\mPhi = \left\{ \vPhi_i \mid i \in [n]\right\}$,
we may consider an arbitrary state of prior belief, $\pP(\vPhi_i)$ over $i \in [n]$,
and apply \Cref{cor:probability_from_length} to obtain the joint probability of one interpreter simulating another
\submitBlank
\begin{align*}
\pP(\vPhi_i, \vPhi_j) = \pP(\vPhi_i \mid \vPhi_j) \pP(\vPhi_j)\quad\text{for all}\quad i,j \in [n].
\end{align*}
There exists a unique prior for which the marginalized simulation probability recovers the prior,
thus consistently accounting for simulation complexity within prior belief.}
\end{theorem}
\begin{proof}\textbf{of \Cref{thm:utm_ensemble}.}
\tAdd{A simulator $\vPsi_{ij}$ is a code sequence that may be prepended to any valid code for $\vPhi_i$ and allow it to run on $\vPhi_j$.
Let the set $\mPsi_{ij} = \left\{ \vPsi_{ij} \right\}$ include all simulators for language pairs indexed $i, j \in [n]$.
Applying \Cref{cor:probability_from_length} yields the hyperprior transition matrix, expressed elementwise by row $i$ and column $j$ as
\submitBlank
\begin{align*}
\pP(\vPhi_i \mid \vPhi_j) = \left(\sum_k \sum_{\vPsi_{kj} \in \mPsi_{kj}} 2^{-\fLen(\vPsi_{kj})} \right)^{-1} \sum_{\vPsi_{ij} \in \mPsi_{ij}} 2^{-\fLen(\vPsi_{ij})},
\end{align*}
where each column is normalized over the languages in the ensemble.
A consistent prior must satisfy $\pP(\vPhi_i) = \sum_j \pP(\vPhi_i \mid \vPhi_j) \pP(\vPhi_j)$ for all $i \in [n]$.
Because all Turing-complete languages can simulate one another, $\mPsi_{ij}$ is nonempty for all pairs.
Existence and uniqueness immediately follow by applying the Perron--Frobenius Theorem;
every square matrix with positive entries has a unique largest eigenvalue and the paired eigenvector may be constructed to have positive entries.
Since the transition matrix maps every normalized state to another normalized state, that eigenvalue must be 1 and no other eigenvectors may be coherent probabilities.}
\end{proof}
\tAdd{This result shifts remaining subjectivity to the set of interpreters we are willing to consider for comparison.
In practice, the shortest simulator in each set, say $\vPsir_{ij} \in \mPsi_{ij}$, will dominate the corresponding matrix element.
Thus, a more practical approximation of the hyperprior transition matrix is
\begin{align*}
\pP(\vPhi_i \mid \vPhi_j) \approx \left(\sum_k 2^{-\fLen(\vPsir_{kj})}\right)^{-1} 2^{-\fLen(\vPsir_{ij})}.
\end{align*}
Note $2^{-\fLen(\vPsir_{ii})}=1$ for all $i \in [n]$, since the shortest self-simulator is trivial in each language, $\vPsir_{ii} = \emptyset$.
We find this result intuitive because it suppresses interpreters that require excessive complexity to simulate.
In the absence of such a computation, we may only conclude that we should prefer interpreters that appear to be simple.}
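The following Python sketch (ours) illustrates this approximation: given assumed shortest simulator lengths, it forms the column-normalized transition matrix and recovers the unique consistent prior by power iteration.

```python
def consistent_prior(sim_lengths, iters=200):
    """Power iteration for the unique prior fixed by the simulator-length
    transition matrix P(Phi_i | Phi_j) proportional to 2^(-len) with
    normalized columns. sim_lengths[i][j] is the shortest simulator length
    for running language i on language j (zero on the diagonal)."""
    n = len(sim_lengths)
    cols = [sum(2.0 ** -sim_lengths[i][j] for i in range(n)) for j in range(n)]
    T = [[2.0 ** -sim_lengths[i][j] / cols[j] for j in range(n)] for i in range(n)]
    p = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        p = [sum(T[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p
```

For two languages where the second requires a longer simulator, the consistent prior shifts mass toward the language that is cheap to simulate, matching the intuition above.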
\subsection{Integrating Additional Beliefs}
Although this work is motivated by the difficulty of expressing prior belief over abstract models,
when we have access to additional information that could constrain prior beliefs, that information may be impactful.
Therefore, we should be able to integrate other prior beliefs within the general complexity framework.
Let our complexity-based prior belief be denoted as $\pP(\vObj \mid \mathcal{C}) = 2^{-\fLen(\vPsi(\vObj))}$.
If we also have other prior beliefs, $\pP(\vObj \mid \mathcal{B})$, we can form the composite prior $\pP(\vObj \mid \mathcal{B}, \mathcal{C})$.
For example, $\mathcal{B}$ may express physical laws or previously observed data.
One approach would be to use an interpreter that implicitly embeds $\mathcal{B}$ within viable encodings so that $\pP(\vObj \mid \mathcal{B}, \mathcal{C}) = 2^{-\fLen(\vPsi_{\mathcal{B}}(\vObj))}$.
Alternatively, if we assume that belief derived from $\mathcal{B}$ is conditionally independent of our complexity-based belief $\mathcal{C}$, then we have
\submitBlank
\begin{align*}
\pP(\vObj \mid \mathcal{B}, \mathcal{C}) &= \frac{\pP(\mathcal{B} \mid \vObj, \mathcal{C}) \pP(\vObj \mid \mathcal{C})}{\pP(\mathcal{B} \mid \mathcal{C})}
=\frac{\pP(\mathcal{B} \mid \vObj ) \pP(\vObj \mid \mathcal{C})}{\pP(\mathcal{B} \mid \mathcal{C})} \\
&=\frac{\pP(\vObj \mid \mathcal{B}) \pP(\mathcal{B}) \pP(\vObj \mid \mathcal{C})}{\pP(\vObj) \pP(\mathcal{B} \mid \mathcal{C})}
\propto \pP(\vObj \mid \mathcal{B}) \pP(\vObj \mid \mathcal{C}),
\end{align*}
where $\pP(\vObj)$ must be a constant for all $\vObj$ since both $\mathcal{B}$ and $\mathcal{C}$ have been constructed to capture all our beliefs.
Thus, the composite prior is easily formed up to a constant of proportionality.
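As a small numerical illustration (ours, over a finite set of states rather than full descriptions), the composite prior reduces to an elementwise product followed by normalization:

```python
def composite_prior(prior_b, prior_c):
    """Combine two priors over the same finite set of states, assuming
    conditional independence: P(m | B, C) proportional to P(m | B) P(m | C)."""
    unnorm = [b * c for b, c in zip(prior_b, prior_c)]
    z = sum(unnorm)  # the constant of proportionality
    return [u / z for u in unnorm]
```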
\subsection{The Imperative of Utility}
\label{sec:computability}
It is not useful to consider models that, in order to provide a substantial contribution to predictions, would require more evidence than we anticipate having.
Likewise, it is not useful to consider models that would require either more computational energy, communication capacity, or time to evaluate than we can afford.
Practical models must be discoverable, and predictions must be computable.
The Kolmogorov complexity is well-known to be uncomputable, thus raising a natural concern that generalizing prior belief to arbitrary descriptions only exacerbates the problem.
Yet, the primary purpose of \Cref{thm:parsimony_optimization} is to show how information theory allows us to restrict our attention to feasible manifolds of belief,
while simultaneously allowing us to compare outcomes from different choices of restriction.
Because long descriptions are already exponentially suppressed \`a priori, the information we generate by refusing to consider long descriptions becomes small as the descriptions we drop become long.
Even so, it is instructive to examine uncomputability more carefully, as it motivates future directions.
Suppose we had an oracle $\Omega$ that would determine whether or not a program $\vObj$ is capable of reproducing a mapping $(\vX, \vY)$ in a finite amount of time
\submitBlank
\begin{align*}
\Omega(\vPhi, \vObj, \vX, \vY) = \begin{cases}
\text{true} & \vY = \vPhi(\vObj, \vX) \\
\text{false} & \text{otherwise}.
\end{cases}
\end{align*}
The existence of such an oracle would allow us to determine the Kolmogorov complexity by brute force, generating and checking programs in order of increasing length until the oracle returns true.
Further, it would be a trivial matter to write another brute force subroutine to identify the first sequence $\vY$ with Kolmogorov complexity above an arbitrarily high limit $\fKol_{\vPhi}(\emptyset, \vY) > \tau$.
By setting $\tau$ to exceed the combined lengths of the oracle and brute force subroutines, we would have succeeded in writing a program that contradicts the Kolmogorov complexity.
The core problem with this thought experiment is the arbitrarily large amount of memory and elementary operations that would be required to run the program.
Disregarding the halting problem, the brute force search would need to generate full programs in memory, while only incurring the cost of encoding a counter.
We may conclude that problems associated with computability will be alleviated if we simply include memory operations,
every symbol generated or transmitted between slow and fast levels of memory, in the definition of model \textit{evaluation length}.
\citet{Lempel1976} present a related framework to measure sequence production complexity as the minimum number of steps required to build a sequence from a production process to construct a hierarchy of subsequences.
\citet{Speidel2008} provides additional discussion of recent work by \citet{Titchener1998}.
\tAdd{Speed priors \citep{Schmidhuber2002} and related work by \citet{Filan2016} develop these approaches to articulate prior beliefs that prefer efficient algorithms in the context of binary UTMs.}
In this view, prior belief becomes an expression for the degree of utility considering a model would contribute to obtaining feasible predictions.
Building on these approaches will allow us to restrict our attention to models that can be evaluated with limited resources.
For example, randomized algorithms such as Randomized QR with Column Pivoting (RQRCP) \citep{Duersch2020b} would gain plausibility by having reduced slow communication bottlenecks.
In order for machine learning to be capable of providing discoverable, computationally feasible, and useful models, we cannot avoid limiting our attention accordingly.
\subsection{Relationship to other Bayesian Methods}
\label{sec:ardprior}
Our hyperprior provides a principled foundation to derive results that are similar in function to several well-known methods for specific Bayesian inference problems.
Notable comparisons include sparsity-inducing priors, like Automatic Relevance Determination (ARD), for regression problems with continuous coefficients.
The ARD prior is a hyperprior over parameters that is intended to identify critical parameters and drive the remaining parameters towards zero.
The ARD prior is implemented as:
\submitBlank
\begin{align*}
\pP(\vTh) = \pN(\vTh \mid 0, \vSigma)
\end{align*}
where we need to specify $\pP(\vSigma)$.
In the original work introducing the ARD prior, $\pP(\vSigma)$ is a gamma distribution in the precision $\tau = \sigma^{-2}$ with a small shape parameter.
This closely corresponds to the improper Jeffrey's prior $\pP(\vSigma) \propto \frac{1}{\vSigma}$, often used in practice for unknown scalar covariances \uAdd{because it is scaling invariant}.
If we partition the potential values of $\vSigma$ into the intervals $(0, a)$, $[a, b]$, and $(b, \infty)$, where $a < b$ are any positive real numbers,
the cumulative probability diverges both below $a$ and above $b$, thus dominating over the finite contribution within $[a,b]$.
It follows that sampling the Jeffrey's prior would yield outcomes either very close to zero or diverging towards infinity.
Within this formulation, if $\vTh$ has little relevance to the likelihood, then probability is maximized when $\vTh$ and $\vSigma$ approach zero.
Otherwise, a large enough $\vSigma$ will be found to allow $\vTh$ to take moderate nonzero values with the Jeffrey's prior introducing only a slight penalty as $\vSigma$ increases.
Therefore, it can be interpreted as making a binary choice between very large or very small $\vSigma$.
More generally, a gamma distribution allows, indeed requires, the relative probability of these two outcomes to be tuned.
If we uniformly discretize the possible values of $\vSigma$ as $\vSigma_i = i \vSigma_1$ and assign them probabilities according to $\pP(\vSigma_i) \propto \frac{1}{\vSigma_i}$
\uAdd{up to a maximum value $i \in [M]$,}
we can equate this prior with the complexity prior $\pP(\vSigma_i) = 2^{-\fLen(\vSigma_i)}$ to obtain
\submitBlank
\begin{align*}
\fLen(\vSigma_i) = \log_2(\vSigma_i) + c = \log_2(i) + \log_2(\vSigma_1) + c = \log_2(i) + \fLen(\vSigma_1),
\end{align*}
where the constant $c$ gives the normalization.
Since the complexity of $\vSigma_i$ increases logarithmically in $i$,
we see that Jeffrey's prior is the continuous limit of the number of bits required to express \uAdd{an integer multiple of $\vSigma_1$.
We may interpret the fixed offset, $\fLen(\vSigma_1)$, as the contribution of a single symbol that determines the number of bits to read.}
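This correspondence is easy to verify numerically. The following minimal sketch (our own illustration, not part of the formal development) checks that the code lengths induced by the discretized prior $\pP(\vSigma_i) \propto 1/\vSigma_i$ grow as $\log_2(i)$ plus a fixed offset:

```python
import math

# Discretize sigma_i = i * sigma_1 for i = 1..M and assign P(sigma_i)
# proportional to 1/sigma_i (a finite stand-in for Jeffrey's prior;
# M is an arbitrary cutoff of our choosing).
M = 1000
weights = [1.0 / i for i in range(1, M + 1)]
Z = sum(weights)  # normalization constant
lengths = [-math.log2(w / Z) for w in weights]  # code length in bits, -log2 P

# The length grows logarithmically: len(sigma_i) - len(sigma_1) = log2(i).
for i in (2, 10, 100):
    assert abs((lengths[i - 1] - lengths[0]) - math.log2(i)) < 1e-9
```

Note that the normalization constant $Z$ only shifts every code length by the same amount, which is why the fixed offset $\fLen(\vSigma_1)$ absorbs it.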
While sparsity is a useful notion of complexity for many problems, it is not universal.
Sparsity regards a continuous parameter as either complex (nonzero) or not complex (zero).
While sparsity-inducing priors, like ARD, can compel continuous parameters to zero if they do not provide enough benefit to predictions,
they have no affordance to suppress other forms of complexity.
For example, there is no compelling notion of sparsity within the construction of decision trees.
Moreover, when we need to encode constants within prior descriptions, our theory supports consistent distinctions in complexity among potential constants.
\subsection{Relationship to Minimum Description Length}
\label{sub:mdl}
Rissanen's Minimum Description Length (MDL) shares many similarities with our theory, but it is not motivated by the philosophical foundations of reason that drive the Bayesian paradigm.
Rather, MDL views inference as finding an optimal compressed representation of a dataset and probability as a way of developing efficient codes.
MDL representations contain both the model used to construct an efficient code and the compressed form of the data that follows.
The length of the data representation $\fLen(\sData)$ is the sum of the number of bits needed to describe the model $\fLen(\vObj)$ and the number of bits needed to describe the residual data $\fLen(\sData \mid \vObj)$.
In its simplest form, the inference problem for identifying a hypothesis or program $\vObj \in \sObj$ is
\submitBlank
\begin{align*}
\fLen( \sData) = \min_{\vObj \in \sObj} \;\; \fLen(\vObj ) + \fLen(\sData \mid \vObj),
\end{align*}
requiring a specific discretization and encoding for a hypothesis space $\sObj$.
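As an illustration of the two-part code, the sketch below selects a polynomial degree by minimizing $\fLen(\vObj) + \fLen(\sData \mid \vObj)$, with a deliberately crude encoding of our own choosing: a fixed number of bits per coefficient for the model, and a Gaussian code length for the residuals. The data, constants, and helper names are all hypothetical.

```python
import math
import random

random.seed(0)

# Toy data from a quadratic with additive noise (constants are illustrative).
xs = [i / 10 for i in range(20)]
ys = [1.0 - 2.0 * x + 0.5 * x * x + random.gauss(0, 0.05) for x in xs]

def fit_poly(xs, ys, k):
    """Least-squares polynomial of degree k via the normal equations."""
    m = k + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):  # Gaussian elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def description_length(xs, ys, k, bits_per_coef=16):
    """Two-part code: L(model) + L(data | model), both in bits (crude encoding)."""
    coef = fit_poly(xs, ys, k)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    model_bits = (k + 1) * bits_per_coef                  # encode the coefficients
    data_bits = 0.5 * len(xs) * math.log2(rss / len(xs))  # Gaussian residual code
    return model_bits + data_bits

# The minimum of the total description length recovers the generating degree.
best_k = min(range(6), key=lambda k: description_length(xs, ys, k))
```

Higher degrees keep shrinking the residual term, but the per-coefficient model cost makes the total description length turn back up past the generating degree.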
To address the arbitrariness of designing a hypothesis space encoding, MDL proposes a minimax optimization over universal codes,
minimizing the worst-case regret associated with arbitrary data.
This simple form of MDL can also be refined to compare and optimize hypothesis classes instead of individual hypotheses, which corresponds to the Bayesian model-class selection problem.
\citet{Grunwald2007} provide an in-depth exposition.
\tAdd{While there is some similarity between our approach and that of MDL, our theory is driven by a comprehensive treatment of information, and a consistent derivation of prior belief from complexity.
MDL is also consistent with some objectivist Bayesian formulations, such as Jeffrey's priors; however, the philosophical motivation is quite different.
Although optimizing MDL encoding length drives at a notion of simplicity, it is not framed within an extended logic to update beliefs from prior or intermediate results, subjective or otherwise.
Likewise, Minimum Message Length (MML) \citep{Wallace1968,Boulton1975,Wallace1987} is a Bayesian framework that is similar to MDL.
Instead of optimizing hypothesis encodings to minimize worst-case regret, MML minimizes expected code length, which depends on a subjective prior over the attributes a code describes.
We assert that a consistent treatment of both information and prior complexity is critical in the abstract setting of machine learning.}
Further, we highlight the deeper understanding of optimal representations that our theory provides compared to MDL and MML.
Optimization is fundamentally inconsistent with Bayesian probability theory;
inference compels posterior belief from a prior state and the result expresses our rational belief in possible models.
Yet, we recognize that a choice must be made to simplify this process so that problems can be solved on machines with finite resources.
This is why we must distinguish rational choice from rational belief.
Rational choices are informed by rational belief, but also require a utility function.
Building on Bernardo's work \citep{Bernardo1979}, \Cref{cor:model_inference,cor:hyper_inference,cor:properutility,cor:perturbations}
show the variety of circumstances in which information is a proper utility function that serves to guide well-posed optimization for rational choices.
The rational choice becomes the representation we use to approximate posterior belief for future predictions.
While a rational choice could be a single model, as in MDL and MML,
other representations, such as the ensembles in our experiments, have greater utility and provide better prediction uncertainty quantification.
\subsection{Summary and Conclusion}
We proposed Parsimonious Inference, a complete theory of learning based on an information-theoretic formulation of Bayesian inference that quantifies and suppresses a general notion of explanatory complexity.
We showed how our information-theoretic objective allows us to understand the relationship between model complexity and increased agreement between predictions and data labels.
Within the Bayesian perspective, once the prior, the likelihood, and the data are specified, the posterior inexorably follows.
Yet, when we consider the infinite varieties of algorithms that may be developed in machine learning, we find that any universal prior that reserves some degree of plausibility for an arbitrary algorithm becomes uncomputable in practice.
Our framework allows us to resolve the imperative of utility by quantifying the value of a choice,
wherein we only consider a feasible set of prior beliefs and posterior approximations.
By accounting for model complexity from first principles, we can evaluate the utility of such restrictions within a single framework to obtain well-justified predictions within a practical computational budget.
A central aspect of our framework is the distinction between the intrinsic meaning of a potential state of belief and an efficient encoding of that state.
Encoding complexity provides a critical missing component that is needed to measure the complexity of arbitrary inference architectures and naturally associate complexity with plausibility.
Our formulation of generalized length allows us to assign length to a wide variety of codes, beyond binary codes that are typically associated with program length.
We examined some elementary codes to express integers and fractions on the open interval, which can be mapped to a broad class of numbers that may prove useful to represent prior beliefs.
We showed how feasibility-constrained optimizers satisfy quantifiable memorization bounds in comparison to models that may produce better adherence to training data,
but at the cost of increased description length, increased inference information, or information generated by an approximating distribution proposed to generate predictions.
Our experimental results show how our hyperposterior ensembles avoid developing artifacts that artificially hew to seen data within the predictive structure.
Moreover, accounting for multiple explanations by hyperposterior sampling allows us to compute extrapolation uncertainty from first principles as the input domain deviates from past observations.
These experimental results demonstrate how our theory allows us to obtain predictions from extremely small datasets without cross-validation.
Our theory solves critical challenges in understanding how to efficiently learn from data, obtain well-grounded justification for uncertainty in predictions,
and anticipate extrapolation regimes where additional data would prove most beneficial, thus opening a new domain of predictive capabilities.
This work also provides a principled foundation to address the challenge of feasible learning in the face of high dimensionality.
\section*{Acknowledgements}
We would like to extend our earnest appreciation to Jaideep Ray, Justin Jacobs, and Philip Kegelmeyer for several helpful discussions on this topic.
We also acknowledge and appreciate a conversation with Andrew Charman that clarified our view of the distinction between rational belief and the inherently restrictive nature of a choice.
We sincerely appreciate reviewer feedback that helped us improve this work.
\noindent\textbf{Funding:} This work was funded, in part, by the U.S.~Department of Energy.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC.,
a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract
DE-NA-0003525. This paper describes objective technical results and analysis.
Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy
or the United States Government.
\section{Introduction}
\label{s:Introduction}
In the explorations of the phase diagram of the strong nuclear force, some of the most intriguing questions are associated with the phase dubbed Quark-Gluon Plasma (QGP), in which under large temperatures and/or baryon densities quarks and gluons are deconfined. Properties of this extreme state of matter have been investigated in high-energy physics in the past 20 years with a plethora of different observables and across different collision systems, at the SPS, RHIC and LHC experiments. One of the most important programs in these studies was the analysis of the anisotropic flow phenomenon~\cite{Ollitrault:1992bk,Voloshin:2008dg}, which has primarily been carried out with the two- and multi-particle correlation techniques. The anisotropic flow measurements proved to be particularly successful in the studies of transport properties of the QGP. For instance, they were used to constrain the ratio of shear viscosity to entropy density ($\eta/s$) of the QGP to very small values and therefore helped to establish the perfect liquid paradigm of QGP properties~\cite{Braun-Munzinger:2015hba}.
Traditionally, anisotropic flow is quantified with the amplitudes $v_n$ and symmetry planes $\Psi_n$ in the Fourier series parametrization of anisotropic particle emission as a function of azimuthal angle $\varphi$ in the plane transverse to the beam direction~\cite{Voloshin:1994mz}:
\begin{equation}
f(\varphi) = \frac{1}{2\pi}\left[1+2\sum_{n=1}^\infty v_n\cos[n(\varphi-\Psi_n)]\right]\,.
\label{eq:FourierSeries_vn_psin}
\end{equation}
Correlation techniques have been utilized in the past to provide estimates for the average values of various moments of individual flow amplitudes, $\left<v_n^k\right>$ ($k > 1$), where each moment by definition carries a piece of independent information about the event-by-event fluctuations, i.e. the stochastic nature, of anisotropic flow. Flow fluctuations are unavoidable in ultrarelativistic collisions as they originate from the non-negligible fluctuations of the positions of the nucleons inside the colliding nuclei, as well as from quantum fluctuations of the quark and gluon fields inside those nucleons. As a consequence, even nuclear collisions with the same impact parameter (the vector connecting the centers of the two colliding nuclei) can generate different anisotropic flow event-by-event. Therefore, a great deal of additional insight about QGP properties can be extracted from flow fluctuations. It was demonstrated recently that more detailed information on QGP properties, like the temperature dependence of $\eta/s$, cannot be constrained with measurements of individual flow amplitudes due to their insensitivity. Instead, a set of more sensitive flow observables, focusing on correlations of fluctuations of two different flow amplitudes $v_n$ and $v_m$, has been proposed~\cite{Niemi:2012aj,Bilandzic:2013kga}. In this paper, we generalize this idea and introduce a new set of independent flow observables, which quantify the correlations of flow fluctuations involving more than two flow amplitudes.
By using solely the orthogonality relations of trigonometric functions, one can derive from Eq.~(\ref{eq:FourierSeries_vn_psin}) that $v_n = \left<\cos[n(\varphi\!-\!\Psi_n)]\right>$, where the average $\left<\cdots\right>$ goes over all azimuthal angles $\varphi$ of particles reconstructed in an event. However, this result has little importance in the measurements of flow amplitudes $v_n$, since the symmetry planes $\Psi_n$ cannot be measured reliably in each event. Since azimuthal angles $\varphi$, on the other hand, can be measured with great precision, we can instead estimate the flow amplitudes $v_n$ and the symmetry planes $\Psi_n$ by using correlation techniques~\cite{Wang:1991qh,Jiang:1992bw}. The cornerstone of this approach is the following analytic result derived recently~\cite{Bhalerao:2011yg}
\begin{equation}
\left<e^{i(n_1\varphi_1+\cdots+n_k\varphi_k)}\right> = v_{n_1}\cdots v_{n_k}e^{i(n_1\Psi_{n_1}+\cdots+n_k\Psi_{n_k})}\,,
\label{eq:generalResult}
\end{equation}
where the true value of the average $\left<\cdots\right>$ can be estimated experimentally by averaging out all distinct tuples of $k$ different azimuthal angles $\varphi$ reconstructed in the same event. With the advent of a generic framework for flow analyses with multi-particle azimuthal correlations~\cite{Bilandzic:2013kga}, the LHS of Eq.~(\ref{eq:generalResult}) can be evaluated fast and exactly for any number of particles $k$, and for any choice of non-zero integers for harmonics $n_1,\ldots, n_k$. This new technology was successfully used in the first measurements of Symmetric Cumulants (SC) in~\cite{ALICE:2016kpq} and can be used also for their generalized version, which we present in this paper.
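As a quick numerical sanity check of Eq.~(\ref{eq:generalResult}) in its single-particle form, the sketch below samples azimuthal angles from the distribution in Eq.~(\ref{eq:FourierSeries_vn_psin}) with arbitrarily chosen values of $v_2$, $v_3$, $\Psi_2$, $\Psi_3$ (the values are ours, purely for illustration) and recovers $v_n e^{in\Psi_n}$; for independently sampled angles, the multi-particle version then follows by factorization of the averages.

```python
import cmath
import math
import random

random.seed(1)

# Sample azimuthal angles from the Fourier distribution with two harmonics;
# the amplitudes and symmetry planes below are arbitrary choices for the check.
v = {2: 0.10, 3: 0.05}
psi = {2: 0.3, 3: 1.1}

def pdf(phi):
    return (1.0 + sum(2.0 * v[n] * math.cos(n * (phi - psi[n]))
                      for n in v)) / (2.0 * math.pi)

fmax = (1.0 + 2.0 * sum(v.values())) / (2.0 * math.pi)  # rejection-sampling bound

def sample():
    while True:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.uniform(0.0, fmax) <= pdf(phi):
            return phi

angles = [sample() for _ in range(200_000)]

# Single-particle case: <exp(i n phi)> = v_n exp(i n Psi_n).
for n in v:
    est = sum(cmath.exp(1j * n * phi) for phi in angles) / len(angles)
    assert abs(est - v[n] * cmath.exp(1j * n * psi[n])) < 0.015
```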
Before elaborating further on all the technical points which are relevant for the generalization of SC observables, we briefly review the existing studies of correlated fluctuations of different flow amplitudes, with a special focus on the ones performed with SC, both by theorists and experimentalists. We trace the first theoretical study in which correlations of amplitude fluctuations of two different flow harmonics were used to extract independent information about the QGP properties back to~\cite{Niemi:2012aj}. The observables utilized in that work are available only when it is feasible to estimate flow amplitudes event-by-event, which is the case only in theoretical studies. Alternatively, experimentalists use correlation techniques to estimate the correlated fluctuations of amplitudes of two different flow harmonics, via the SC observables, in the way it was first proposed in Sec.~IVC of~\cite{Bilandzic:2013kga}. Complementarily, such correlated fluctuations can also be probed with the Event Shape Engineering (ESE) technique~\cite{Schukraft:2012ah}.
Even though the SC are relatively novel observables, a lot of theoretical studies utilizing them have already been performed: the standard centrality dependence with state-of-the-art models was obtained in \cite{Bhalerao:2014xra,Zhou:2015eya,Gardim:2016nrr,Zhao:2017yhj,Eskola:2017imo,Alba:2017hhe,Gardim:2017ruc,Giacalone:2018wpp,Bernhard:2018hnz,Moreland:2018gsh,Schenke:2019ruo,Moreland:2019szz,Wen:2020amn}; the relation between SC and symmetry plane correlations was studied in~\cite{Giacalone:2016afq}; more differential studies (including transverse momentum and pseudorapidity dependence, or using subevents) were performed in~\cite{Zhu:2016puf,Ke:2016jrd,Nasim:2016rfv,Jia:2017hbm}; the use of SC to constrain the details of energy density fluctuations in heavy-ion collisions was recently discussed in~\cite{Bhalerao:2019fzp}; an extensive coordinate space study was reported in~\cite{Broniowski:2016pvx}; studies in collisions of smaller systems were carried out in~\cite{Dusling:2017dqg,Dusling:2017aot,Albacete:2017ajt,Nie:2018xog,Sievert:2019zjr,Rybczynski:2019adt,Zhao:2020pty}. Complementary theoretical studies of $v_n-v_m$ correlations, without using SC, have been performed in~\cite{Qian:2016pau,Qian:2017ier}. The nontrivial technical details about the implementation of multi-particle cumulants, with special mention of SC, were recently rediscussed from scratch in~\cite{DiFrancesco:2016srj}. How SC can emerge in the broader context of flow fluctuation studies was briefly mentioned in~\cite{Mehrabpour:2018kjs}.
Measurements of correlations between the amplitude fluctuations of two different flow harmonics in heavy-ion collisions with the ESE technique were first reported by ATLAS in~\cite{Jia:2014jca,Aad:2015lwa,Radhakrishnan:2016tsc}. On the other hand, analogous measurements using the SC observables were first reported by ALICE in~\cite{Zhou:2015slf,ALICE:2016kpq,Kim:2017rfn}. After these initial studies, the measurements of SC have been successfully extended to different collision systems and energies in~\cite{Guilbaud:2017alf,Sirunyan:2017uyl,STAR:2018fpo,Aaboud:2018syf,Gajdosova:2018pqo,Acharya:2019vdf,Aaboud:2019sma,Sirunyan:2019sef}. The detailed differential analyses and the extension to correlations of subleading flow harmonics were published in~\cite{Acharya:2017gsw}. The feasibility of measurements of SC in future experiments was recently addressed in~\cite{Citron:2018lsq}.
The rest of the paper is organized as follows. In Sec.~\ref{s:Generalization-To-Higher-Orders} the generalization of SC observables is motivated and presented. In Sec.~\ref{s:Predictions-from-realistic-Monte-Carlo-studies} the detailed Monte Carlo studies for the higher order SC are presented, and first predictions provided for heavy-ion collisions at LHC energies. We have summarized our findings in Sec.~\ref{s:Summary}. In a series of appendices, we have placed all technical steps which were omitted in the main part of the paper.
\section{Generalization to higher orders}
\label{s:Generalization-To-Higher-Orders}
The possibility of higher order SC was first mentioned in Table 1 of~\cite{Jia:2014jca}, while the first systematic studies of their properties were performed independently in Sec.~3.2 of~\cite{Deniz:2017}. In this section, we elaborate on the generalization of SC observables for the concrete case of three flow amplitudes, but our results and conclusions can be straightforwardly generalized to any number of amplitudes by following the detailed list of requirements presented in Appendix~\ref{a:List-of-requirments}. We have outlined a few final solutions for generalized SC involving more than three amplitudes in Appendix~\ref{a:Definitions-for-higher-order-Symmetric-Cumulants}. We have cross-checked all our statements about higher order SC with a set of carefully designed Toy Monte Carlo studies in Appendix~\ref{a:Checking-the-other-requirements-for-the-Symmetric-Cumulants}. In Appendix~\ref{a:Comment-on-Parseval-theorem} we have outlined the argument that there are no built-in mathematical correlations in SC observables which might originate as a consequence of the Parseval theorem. In all our derivations, we use as the starting point the general mathematical formalism of cumulants published in~\cite{Kubo}.
\subsection{Physics motivation}
\label{ss:Physics-motivation}
We now briefly motivate this generalization with the following simple observation: in mid-central heavy-ion collisions the correlated fluctuations of the even amplitudes $v_2, v_4, v_6, \ldots$ can originate solely from the fluctuations of the magnitude of the ellipsoidal shape~\cite{Kolb:2003zi}. This sort of fluctuations will contribute only to SC(2,4), SC(2,4,6), etc., but not to SC(2,3,4). The genuine correlation among the three amplitudes $v_2$, $v_3$ and $v_4$ can develop only due to fluctuations in the elliptical shape itself. A non-vanishing result for SC(2,3,4) implies that there are additional sources of fluctuations in the system which couple all three amplitudes $v_2, v_3, v_4$. In this sense, SC(2,3,4) can separate the following two sources of fluctuations: a) fluctuations in the ellipsoidal shape itself; b) magnitude fluctuations of the persistent ellipsoidal shape. On the other hand, SC(2,4) cannot disentangle these two different sources of fluctuations. This argument is supported by a concrete calculation in the simple mathematical model presented in Appendix~\ref{a:ellipse-like-distributions}.
\subsection{Technical details}
\label{ss:Technical-details}
Before going to the generalization of SC, we recall here a few important points about the use of multi-particle azimuthal correlations in flow analyses. In general, these correlations are susceptible to non-trivial collective phenomena (e.g. anisotropic flow), but also to phenomena involving only a few particles (e.g. momentum conservation, jet fragmentation, resonance decays, etc.). The latter, called nonflow, is considered a systematic bias in flow analyses. One way to separate the two contributions is to compute the genuine multi-particle correlations, or cumulants.
In the general mathematical formalism of cumulants~\cite{Kubo}, for any set of $k$ ($k>1$) stochastic observables, $X_1,\ldots,X_k$, there exists a unique term, the $k$-particle cumulant, sensitive only to the genuine $k$-particle correlation among all the observables. This mathematical formalism has been introduced for anisotropic flow analyses in~\cite{Borghini:2000sa,Borghini:2001vi}, and further improved and generalized in~\cite{Bilandzic:2010jr,Bilandzic:2013kga}. As one has the freedom to choose the experimental observables of interest (e.g. azimuthal angles, flow amplitudes $v_n$, multiplicities of particles in a given momentum bin~\cite{DiFrancesco:2016srj}, etc.), different variants of cumulants can appear in practice. In flow analyses, the traditional choice is to identify $X_i \equiv e^{in_{i}\varphi_i}$, with $\varphi_i$ the azimuthal angles of reconstructed particles and $n_i$ the non-zero flow harmonics, as the observable of interest. This specific choice allows the identification of the averages in the cumulant expansion (e.g. Eq.~(2.8) in~\cite{Kubo}) with the single-event averages of azimuthal correlators (Eq.~(\ref{eq:generalResult})). If the detector has a uniform azimuthal acceptance, only the isotropic correlators with $\sum_{i} n_i = 0$ are not averaged to zero in the generalization of the single-event averages $\left<\cdots\right>$ to all-event averages $\left<\left<\cdots\right>\right>$. Finally, in the resulting expression one groups together all terms which differ only by the re-labelling of azimuthal angles $\varphi_i$~\cite{Borghini:2000sa,Borghini:2001vi}.
For instance, the flow amplitude $v_n$ estimated with four-particle cumulant, $v_{n}\{4\}$, can be obtained by identifying $X_1 \equiv e^{in\varphi_1}$, $X_2 \equiv e^{in\varphi_2}$, $X_3 \equiv e^{-in\varphi_3}$ and $X_4 \equiv e^{-in\varphi_4}$. Following the steps described above, one obtains
\begin{eqnarray}
\left<\left<\cos[n(\varphi_1\!+\!\varphi_2\!-\!\varphi_3-\!\varphi_4)]\right>\right>_c &=& \left<\left<\cos[n(\varphi_1\!+\!\varphi_2\!-\!\varphi_3-\!\varphi_4)]\right>\right>\nonumber\\
&&{}-2\left<\left<\cos[n(\varphi_1\!-\!\varphi_2)]\right>\right>^2
\label{eq:4p_cumulant}\,.
\end{eqnarray}
A generalization of this idea for the case of non-identical harmonics leads to the definition of new observables dubbed SC~\cite{Bilandzic:2013kga}. With the more general choice $X_1 \equiv e^{in\varphi_1}$, $X_2 \equiv e^{im\varphi_2}$, $X_3 \equiv e^{-in\varphi_3}$, and $X_4 \equiv e^{-im\varphi_4}$, where $n$ and $m$ are two different positive integers, the general formalism of four-particle cumulants from \cite{Kubo} translates now into:
\begin{eqnarray}
\left<\left<\cos(m\varphi_1\!+\!n\varphi_2\!-\!m\varphi_3\!-\!n\varphi_4)\right>\right>_c &=& \left<\left<\cos(m\varphi_1\!+\!n\varphi_2\!-\!m\varphi_3\!-\!n\varphi_4)\right>\right>\nonumber\\
&&{}-\left<\left<\cos[m(\varphi_1\!-\!\varphi_2)]\right>\right>\left<\left<\cos[n(\varphi_1\!-\!\varphi_2)]\right>\right>
\label{eq:4p_sc_cumulant}\,.
\end{eqnarray}
It follows that, in both Eqs.~(\ref{eq:4p_cumulant}) and (\ref{eq:4p_sc_cumulant}) above, the final expressions for the cumulants depend only on the flow amplitudes, since by using Eq.~(\ref{eq:generalResult}) we obtain immediately:
\begin{eqnarray}
\left<\left<\cos[n(\varphi_1\!+\!\varphi_2\!-\!\varphi_3-\!\varphi_4)]\right>\right>_c &=& \left<v_n^4\right> - 2 \left<v_n^2\right>^2 \equiv - v_n\{4\}^4\nonumber\,,\\
\left<\left<\cos(m\varphi_1\!+\!n\varphi_2\!-\!m\varphi_3\!-\!n\varphi_4)\right>\right>_c &=& \left<v_{m}^2v_{n}^2\right>-\left<v_{m}^2\right>\left<v_{n}^2\right> \equiv \mathrm{SC}(m,n)\,.
\label{eq:cumulantsInTermsOfHarmonics}
\end{eqnarray}
The success of the observables $v_n\{4\}$ and SC$(m,n)$ lies in the fact that they suppress the unwanted nonflow correlations much better than, for instance, the starting azimuthal correlators $\left<\left<\cos[n(\varphi_1\!+\!\varphi_2\!-\!\varphi_3-\!\varphi_4)]\right>\right>$ and $\left<\left<\cos(m\varphi_1\!+\!n\varphi_2\!-\!m\varphi_3\!-\!n\varphi_4)\right>\right>$, and they are therefore much more reliable estimators of anisotropic flow properties. We will elaborate more on systematic biases due to nonflow later in the paper.
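The defining property of SC$(m,n)$ in Eq.~(\ref{eq:cumulantsInTermsOfHarmonics}) can be illustrated with a toy event-by-event calculation (the Gaussian amplitude fluctuations below are our own illustrative choice): SC$(m,n)$ is consistent with zero for independently fluctuating amplitudes and positive when both amplitudes share a common fluctuation.

```python
import random

random.seed(2)

def sc2(pairs):
    """SC(m,n) from event-by-event amplitudes: <v_m^2 v_n^2> - <v_m^2><v_n^2>."""
    n = len(pairs)
    vm2 = sum(a * a for a, _ in pairs) / n
    vn2 = sum(b * b for _, b in pairs) / n
    cross = sum(a * a * b * b for a, b in pairs) / n
    return cross - vm2 * vn2

events = 100_000
# Independent amplitude fluctuations: SC is consistent with zero.
indep = [(random.gauss(0.10, 0.02), random.gauss(0.05, 0.02))
         for _ in range(events)]
# Correlated fluctuations: both amplitudes scale with a common random factor s.
corr = []
for _ in range(events):
    s = random.gauss(1.0, 0.4)
    corr.append((0.10 * s + random.gauss(0.0, 0.005),
                 0.05 * s + random.gauss(0.0, 0.005)))

assert abs(sc2(indep)) < 3e-6   # no genuine correlation of the fluctuations
assert sc2(corr) > 5e-6         # positive correlation of the fluctuations
```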
After these general considerations, to which we refer from this point onward as traditional or standard cumulant expansion in flow analyses, we proceed with the generalization of SC. As a concrete example, we provide the generalization of SC observables for the case of three flow amplitudes, and discuss in detail all consequences. As opposed to the traditional cumulant expansion in Eqs.~(\ref{eq:4p_cumulant}) and (\ref{eq:4p_sc_cumulant}), we now directly define SC$(k,l,m)$ by using the general cumulant expansion for three different observables (Eq.~(2.8) in~\cite{Kubo}), in which as fundamental observables, instead of azimuthal angles, we choose directly the amplitudes of flow harmonics. This difference in the choice of fundamental observables on which the cumulant expansion is performed is the landmark of our new approach. It follows immediately:
\begin{equation}
\mathrm{SC}(k,l,m) \equiv \left<v_k^2v_l^2v_m^2\right>
- \left<v_k^2v_l^2\right>\left<v_m^2\right>
- \left<v_k^2v_m^2\right>\left<v_l^2\right>
- \left<v_l^2v_m^2\right>\left<v_k^2\right>
+ 2 \left<v_k^2\right>\left<v_l^2\right>\left<v_m^2\right>\,.
\label{SC(k,l,m)_flowHarmonics}
\end{equation}
In theoretical studies, in which flow amplitudes can be computed on an event-by-event basis, this expression suffices. In Appendix~\ref{a:List-of-requirments} we demonstrate that SC$(k,l,m)$ satisfies the fundamental cumulant properties, while the remaining requirements needed for generalization are cross-checked in Appendix~\ref{a:Checking-the-other-requirements-for-the-Symmetric-Cumulants}.
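The cumulant property of Eq.~(\ref{SC(k,l,m)_flowHarmonics}) can be checked with a toy model (the amplitude values and fluctuation strengths below are ours, chosen only for illustration): if only two of the three amplitudes are coupled, SC$(k,l,m)$ vanishes, while a common fluctuation shared by all three produces a genuine three-amplitude correlation.

```python
import random

random.seed(3)

def sc3(triples):
    """SC(k,l,m) in terms of flow amplitudes, from event-by-event values."""
    n = len(triples)
    sq = [(a * a, b * b, c * c) for a, b, c in triples]
    m1 = [sum(t[i] for t in sq) / n for i in range(3)]
    m2 = {(i, j): sum(t[i] * t[j] for t in sq) / n
          for i in range(3) for j in range(3) if i < j}
    m3 = sum(t[0] * t[1] * t[2] for t in sq) / n
    return (m3
            - m2[(0, 1)] * m1[2] - m2[(0, 2)] * m1[1] - m2[(1, 2)] * m1[0]
            + 2.0 * m1[0] * m1[1] * m1[2])

events = 500_000
two_coupled = []    # v_k, v_l share a common fluctuation s; v_m is independent
three_coupled = []  # all three amplitudes share the common fluctuation s
for _ in range(events):
    s = random.gauss(1.0, 0.5)
    two_coupled.append((0.10 * s, 0.07 * s, random.gauss(0.05, 0.02)))
    three_coupled.append((0.10 * s, 0.07 * s, 0.05 * s))

assert abs(sc3(two_coupled)) < 5e-8  # cumulant vanishes: no genuine 3-correlation
assert sc3(three_coupled) > 1e-7     # genuine correlation among all three
```

The first assertion is the key feature: the two-amplitude coupling between $v_k$ and $v_l$ alone does not generate a non-zero SC$(k,l,m)$.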
In an experimental analysis, it is impossible to estimate reliably flow amplitudes event-by-event, and all-event averages of azimuthal correlators need to be used to estimate them. This connection is provided by the analytic result in Eq.~(\ref{eq:generalResult}). Solely by using that result, it follows immediately that the SC$(k,l,m)$ observable defined in Eq.~(\ref{SC(k,l,m)_flowHarmonics}) can be estimated experimentally with:
\begin{eqnarray}
\mathrm{SC}(k,l,m) &=&\left<\left<\cos[k\varphi_1\!+\!l\varphi_2\!+\!m\varphi_3\!-\!k\varphi_4\!-\!l\varphi_5\!-\!m\varphi_6]\right>\right>\nonumber\\
&-&\left<\left<\cos[k\varphi_1\!+\!l\varphi_2\!-\!k\varphi_3\!-\!l\varphi_4]\right>\right>\left<\left<\cos[m(\varphi_5\!-\!\varphi_6)]\right>\right>\nonumber\\
&-&\left<\left<\cos[k\varphi_1\!+\!m\varphi_2\!-\!k\varphi_5\!-\!m\varphi_6]\right>\right>\left<\left<\cos[l(\varphi_3\!-\!\varphi_4)]\right>\right>\nonumber\\
&-&\left<\left<\cos[l\varphi_3\!+\!m\varphi_4\!-\!l\varphi_5\!-\!m\varphi_6]\right>\right>\left<\left<\cos[k(\varphi_1\!-\!\varphi_2)]\right>\right>\nonumber\\
&+&2\left<\left<\cos[k(\varphi_1\!-\!\varphi_2)]\right>\right>\left<\left<\cos[l(\varphi_3\!-\!\varphi_4)]\right>\right>\left<\left<\cos[m(\varphi_5\!-\!\varphi_6)]\right>\right>\,.
\label{eq:3pSC}
\end{eqnarray}
Since each harmonic in each azimuthal correlator above appears with an equal number of positive and negative signs, any dependence on the symmetry planes $\Psi_n$ is canceled out by definition in each of the ten correlators (Requirement~4 in Appendix~\ref{a:List-of-requirments}, and proof therein). All correlators in Eq.~(\ref{eq:3pSC}) are invariant under the shift $\varphi_i \rightarrow \varphi_i + \alpha$ of all azimuthal angles, where $\alpha$ is arbitrary, which proves their isotropy (Requirement~5 in Appendix~\ref{a:List-of-requirments}).
We have arrived at the final result in Eq.~(\ref{eq:3pSC}) by following our new approach in which the cumulant expansion has been performed directly in Eq.~(\ref{SC(k,l,m)_flowHarmonics}) on flow amplitudes. While we have started with the cumulant expansion of order 3 in Eq.~(\ref{SC(k,l,m)_flowHarmonics}), the final expression in Eq.~(\ref{eq:3pSC}) depends on 6 azimuthal angles. We now make a comparison with the traditional approach in which the cumulant expansion is performed directly on 6 azimuthal angles $\varphi$, and discuss the differences.
\subsection{Difference between old and new cumulant expansion in flow analyses}
\label{ss:Difference-between-old-and-new-cumulant-expansion-in-flow-analyses}
We now scrutinize the cumulant expansion for one concrete example of six-particle azimuthal correlators, by following the traditional procedure, in which the starting observables are azimuthal angles. We start with the analytic expression (derived solely by using Eq.~(\ref{eq:generalResult})):
\begin{equation}
\left<\cos[n(3\varphi_1\!+\!2\varphi_2\!+\!\varphi_3-\!3\varphi_4-\!2\varphi_5-\!\varphi_6)]\right> = v_{n}^2v_{2n}^2v_{3n}^2\,.
\label{eq:second6pExample}
\end{equation}
The corresponding cumulant in the traditional approach is:
\begin{eqnarray}
\left<\cos[n(3\varphi_1\!+\!2\varphi_2\!+\!\varphi_3-\!3\varphi_4-\!2\varphi_5-\!\varphi_6)]\right>_c &=& \left<\left<\cos[n(3\varphi_1\!+\!2\varphi_2\!+\!\varphi_3-\!3\varphi_4-\!2\varphi_5-\!\varphi_6)]\right>\right> \\
&-& \left<\left<\cos[n(2\varphi_1\!+\!\varphi_2\!-\!2\varphi_3-\!\varphi_4)]\right>\right>
\left<\left<\cos[3n(\varphi_1\!-\!\varphi_2\!)]\right>\right>\nonumber\\
&-& \left<\left<\cos[n(3\varphi_1\!+\!\varphi_2\!-\!3\varphi_3-\!\varphi_4)]\right>\right>
\left<\left<\cos[2n(\varphi_1\!-\!\varphi_2\!)]\right>\right>\nonumber\\
&-& \left<\left<\cos[n(3\varphi_1\!+\!2\varphi_2\!-\!3\varphi_3-\!2\varphi_4)]\right>\right>
\left<\left<\cos[n(\varphi_1\!-\!\varphi_2\!)]\right>\right>\nonumber\\
&-& \left<\left<\cos[n(3\varphi_1\!-\!2\varphi_2\!-\!\varphi_3)]\right>\right>^2
-\left<\left<\sin[n(3\varphi_1\!-\!2\varphi_2\!-\!\varphi_3)]\right>\right>^2\nonumber\\
&+&2 \left<\left<\cos[3n(\varphi_1\!-\!\varphi_2\!)]\right>\right>
\left<\left<\cos[2n(\varphi_1\!-\!\varphi_2\!)]\right>\right>\left<\left<\cos[n(\varphi_1\!-\!\varphi_2\!)]\right>\right>\nonumber\,.
\label{eq:six3n2n1n3n2n1n}
\end{eqnarray}
In terms of flow amplitudes, this expression evaluates into:
\begin{eqnarray}
\left<v_{n}^2v_{2n}^2v_{3n}^2\right>_c &=& \left<v_{n}^2v_{2n}^2v_{3n}^2\right>
- \left<v_{n}^2v_{2n}^2\right>\left<v_{3n}^2\right>
- \left<v_{n}^2v_{3n}^2\right>\left<v_{2n}^2\right>
- \left<v_{2n}^2v_{3n}^2\right>\left<v_{n}^2\right>\nonumber\\
&-& \left<v_{n}v_{2n}v_{3n}\cos[n(3\Psi_{3n}\!-\!2\Psi_{2n}\!-\!\Psi_{n})]\right>^2
-\left<v_{n}v_{2n}v_{3n}\sin[n(3\Psi_{3n}\!-\!2\Psi_{2n}\!-\!\Psi_{n})]\right>^2\nonumber\\
&+& 2\left<v_{n}^2\right>\left<v_{2n}^2\right>\left<v_{3n}^2\right>\,.
\label{eq:six3n2n1n3n2n1n_inTermsOfFlow}
\end{eqnarray}
The result in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) is not a valid cumulant of the flow amplitudes $v_{n}$, $v_{2n}$ and $v_{3n}$, since it contains the extra terms in the 2nd line, which also depend on the symmetry planes. Because of this contribution, the expression does not reduce to 0 for three independent flow amplitudes $v_{n}$, $v_{2n}$ and $v_{3n}$, which violates the elementary Theorem~1 on multivariate cumulants from~\cite{Kubo}.
Based on this concrete example, we conclude that one cannot in general calculate cumulants for one set of observables (e.g. azimuthal angles $\varphi_1,\varphi_2,\ldots$) and then interpret the final results as a cumulant of some other observables (e.g. $v_n$'s and/or $\Psi_n$'s). At best one can state that the cumulants of the multi-variate distribution of azimuthal angles $P(\varphi_1,\varphi_2,\ldots)$ can be parameterized with $v_n$'s and $\Psi_n$'s (since they are related via the Fourier series expansion in Eq.~(\ref{eq:FourierSeries_vn_psin}) and the analytic expression in Eq.~(\ref{eq:generalResult})), but one cannot state that the final results are direct cumulants of $v_n$'s and/or $\Psi_n$'s. In particular, the cumulants of the flow degrees of freedom $v_n$'s and $\Psi_n$'s can be obtained only from the underlying p.d.f. $P(v_1,v_2,\ldots,\Psi_1,\Psi_2,\ldots)$, which governs the stochastic nature of the $v_n$'s and $\Psi_n$'s, and not from $P(\varphi_1,\varphi_2,\ldots)$, which governs the stochastic nature of particle azimuthal angles.
In our new approach, we use azimuthal angles merely to estimate flow amplitudes via the mathematical identity in Eq.~(\ref{eq:generalResult}), but only after the cumulant expansion has been already performed directly on flow amplitudes $v_m,v_n,\ldots$. We do not perform the cumulant expansion on azimuthal angles directly, as it was done in flow analyses using cumulants so far (e.g. in Sec.~II of~\cite{Borghini:2001vi}). This subtle difference is one of the main points in our paper. This is a necessary change if one wants to estimate reliably genuine correlations of flow amplitudes, with the measured multi-particle azimuthal correlators.
This change also implies different expressions for cumulants of just one flow amplitude. For instance, the variance (second cumulant) of $v_n^2$ is
$\left<v_n^4\right>-\left<v_n^2\right>^2$, which is not the same as the usual 4-particle cumulant which gives $\left<v_n^4\right>-2\left<v_n^2\right>^2$.
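This difference can be made concrete with a minimal numerical check (an illustrative sketch, not part of the original analysis, using a hypothetical ensemble in which $v_n$ takes the same value in every event): the second cumulant of $v_n^2$ then vanishes identically, while the traditional 4-particle cumulant reduces to $-v_n^4$.

```python
import numpy as np

# Hypothetical ensemble: v_n is constant (no flow fluctuations) in every event.
v_n = np.full(10000, 0.05)

mean_v2 = np.mean(v_n**2)
mean_v4 = np.mean(v_n**4)

variance_new = mean_v4 - mean_v2**2      # second cumulant of v_n^2 -> 0
cumulant_old = mean_v4 - 2 * mean_v2**2  # traditional 4-particle form -> -v_n^4

print(variance_new, cumulant_old)
```

For a fluctuation-free ensemble only the first expression vanishes, which is the behavior expected of a genuine second cumulant.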
We now support our conclusions further by presenting the Monte Carlo studies.
\subsection{Comparison of old and new cumulant expansion with Monte Carlo studies}
\label{a:Monte-Carlo-studies-Appendix-A}
To compare results for cumulants calculated by treating azimuthal angles as fundamental observables (tagged `old' in this section) and calculated with our new approach by treating flow amplitudes as fundamental observables (tagged `new'), we use as a concrete example the respective results for the higher order SC built out of the flow amplitudes $v_1, v_2$ and $v_3$. In the traditional approach, we use the result in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) by setting $n=1$, and in the new approach we use Eq.~(\ref{SC(k,l,m)_flowHarmonics}) by setting $k=1, l=2, m=3$. Despite this specific choice, the outcome of this study is more general, given the generic nature of Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}), which covers all cases in which the order of one harmonic is the sum of the orders of the two remaining harmonics.
The comparison of the centrality dependence of these two expressions for the realistic VISHNU model (the details of VISHNU setup are presented later in Sec.~\ref{s:Predictions-from-realistic-Monte-Carlo-studies}) is presented in Fig.~\ref{fig:VishnuOLDvsNEW}(a), while in Fig.~\ref{fig:VishnuOLDvsNEW}(b) we have presented only the centrality dependence of the 2nd line in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}), i.e. the difference between `new' and `old' approach. Clearly, in this concrete and realistic example, this difference is not negligible in mid-central and in peripheral collisions.
\begin{figure}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{oldVsNew_20191128.pdf} &
\includegraphics[scale=0.43]{diff_20191128.pdf}
\end{tabular}
\caption{(a) Comparison of centrality dependence of `new' (i.e. SC($k,l,m$) as defined in Eq.~(\ref{SC(k,l,m)_flowHarmonics})) and `old' (Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow})) approaches to calculate cumulants of three flow amplitudes $v_1$, $v_2$ and $v_3$; (b) Centrality dependence of the difference between `new' and `old' approach.}
\label{fig:VishnuOLDvsNEW}
\end{figure}
\begin{figure}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{averages_20191202.pdf} &
\includegraphics[scale=0.43]{cumulants_20191202.pdf}
\end{tabular}
\caption{(a) Average values of all individual correlators which appear in the definitions in Eqs.~(\ref{SC(k,l,m)_flowHarmonics}) and (\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) in the simple Toy Monte Carlo study described in the main text. In order to bring all the values onto the same scale and improve the visibility in this plot, the values of two-harmonic correlators were multiplied by 100, and the values of three-harmonic correlators by 10000 (as indicated in the bin labels). (b) Cumulants calculated from the averages presented in panel (a), by using the `old' approach in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) (with $n=1$) and the `new' approach in Eq.~(\ref{SC(k,l,m)_flowHarmonics}) (with $k=1, l=2, m=3$). Only the cumulant calculated in the `new' approach is consistent with 0, as it should be for three independently sampled flow magnitudes $v_1$, $v_2$ and $v_3$.}
\label{fig:averagesAndcumulants}
\end{figure}
The remaining question is to demonstrate unambiguously which of the two expressions above is a valid cumulant of flow amplitudes. To answer this question we have set up the following clear-cut Toy Monte Carlo study, which demonstrates that only Eq.~(\ref{SC(k,l,m)_flowHarmonics}) is a valid cumulant of the three flow amplitudes $v_1$, $v_2$ and $v_3$: a) The amplitudes $v_1$, $v_2$ and $v_3$ are sampled in each event uniformly and independently in the intervals (0.03,0.1), (0.04,0.1), and (0.05,0.1), respectively; b) The symmetry planes $\Psi_1$ and $\Psi_2$ are sampled in each event uniformly and independently in the interval (0,2$\pi$);
c) The symmetry plane $\Psi_3$ is determined in each event as: $\Psi_3 \equiv \frac{1}{3}(\frac{\pi}{4} + 2\Psi_2 + \Psi_1)$. With such a setup, in each event $\cos(3\Psi_3 - 2\Psi_2 - \Psi_1) = \cos\frac{\pi}{4} = \frac{\sqrt{2}}{2}$ (and similarly for the sine term), and therefore the 2nd line in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) is non-vanishing, which means that in this study Eqs.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) and (\ref{SC(k,l,m)_flowHarmonics}) will yield systematically different results. However, since the three flow amplitudes $v_1$, $v_2$ and $v_3$ are sampled independently, the corresponding cumulant must be 0 (Theorem~1 in \cite{Kubo}). After we have carried out this study for 10000 events, we have obtained the results presented in Fig.~\ref{fig:averagesAndcumulants}.
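The sampling steps a)--c) above can be sketched as follows (a minimal illustration working directly with the flow-level values of the correlators, rather than estimating them from sampled particle angles; the event count is increased here for statistical precision):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # more events than in the text, for a cleaner statistical test

# a) independent flow amplitudes
v1 = rng.uniform(0.03, 0.1, N)
v2 = rng.uniform(0.04, 0.1, N)
v3 = rng.uniform(0.05, 0.1, N)

# b), c) symmetry planes with a built-in three-plane correlation
psi1 = rng.uniform(0, 2 * np.pi, N)
psi2 = rng.uniform(0, 2 * np.pi, N)
psi3 = (np.pi / 4 + 2 * psi2 + psi1) / 3

# 'new' cumulant: expansion performed directly on flow amplitudes
c_new = (np.mean(v1**2 * v2**2 * v3**2)
         - np.mean(v1**2 * v2**2) * np.mean(v3**2)
         - np.mean(v1**2 * v3**2) * np.mean(v2**2)
         - np.mean(v2**2 * v3**2) * np.mean(v1**2)
         + 2 * np.mean(v1**2) * np.mean(v2**2) * np.mean(v3**2))

# 'old' result: the extra symmetry-plane terms from the 2nd line survive
arg = 3 * psi3 - 2 * psi2 - psi1
c_old = (c_new
         - np.mean(v1 * v2 * v3 * np.cos(arg))**2
         - np.mean(v1 * v2 * v3 * np.sin(arg))**2)

print(c_new, c_old)
```

With independently sampled amplitudes, `c_new` fluctuates around 0, while `c_old` is shifted by the non-vanishing symmetry-plane terms.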
Since only $\left<v_{1}^2v_{2}^2v_{3}^2\right>_{c,{\rm new}}$ is consistent with 0 (see Fig.~\ref{fig:averagesAndcumulants}(b)), we conclude that this observable is a valid cumulant of the three flow amplitudes $v_1$, $v_2$ and $v_3$. This simple Toy Monte Carlo study also demonstrates that the other observable, $\left<v_{1}^2v_{2}^2v_{3}^2\right>_{c,{\rm old}}$, is not a valid cumulant of the three flow amplitudes $v_1$, $v_2$ and $v_3$, due to the presence of the 2nd line in Eq.~(\ref{eq:six3n2n1n3n2n1n_inTermsOfFlow}) involving symmetry planes, which spoils the cumulant property.
Before presenting our first predictions for higher order SC in heavy-ion collisions at LHC energies, we discuss the systematic bias from nonflow contribution.
\subsection{Nonflow estimation with Toy Monte Carlo studies}
\label{ss:Nonflow-estimation-with-Toy-MonteCarlo-studies}
In this section, we discuss the generic scaling of the nonflow contribution in higher order SC. We begin with the implementation of the Fourier distribution $f(\varphi)$ parameterized as in Eq.~(\ref{eq:FourierSeries_vn_psin}), which we use to sample the azimuthal angle of each simulated particle. For simplicity, we define $f(\varphi)$ with three independent parameters, the flow amplitudes $v_2$, $v_3$ and $v_4$:
\begin{equation}
f(\varphi) = \frac{1}{2\pi} \left[ 1 + 2 v_2 \cos \left( 2\varphi \right) + 2 v_3 \cos \left( 3\varphi \right) + 2 v_4 \cos \left( 4\varphi \right) \right].
\label{sect-ToyMC_eq-Fourier}
\end{equation}
The setup of our Toy Monte Carlo simulations goes as follows: for each of the $N$ input events, we set the values of the multiplicity $M$ and the flow harmonics $v_2$, $v_3$ and $v_4$ according to the requirements of the current analysis. Examples of such conditions are that these input parameters are kept constant for all events, or that they are sampled uniformly event-by-event in given ranges. We indicate this second case with the notation ($\cdot$,$\cdot$). After insertion of the harmonics, Eq.~\eqref{sect-ToyMC_eq-Fourier} is used to sample the azimuthal angles of the $M$ particles. We then compute all the needed azimuthal correlators with the generic framework introduced in~\cite{Bilandzic:2013kga}, with the possibility left open to choose which event weight to use during the transition from single- to all-event averages. Finally, our SC observable is obtained using Eq.~\eqref{eq:3pSC}.
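The sampling and correlator steps of this setup can be sketched as follows (a minimal illustration with hypothetical parameter values; accept-reject sampling is one possible implementation, not necessarily the one used in our actual simulations):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_angles(M, v2=0.1, v3=0.05, v4=0.02):
    """Accept-reject sampling of M azimuthal angles from the Fourier p.d.f."""
    fmax = 1 + 2 * (v2 + v3 + v4)  # upper bound of 2*pi*f(phi)
    angles = np.empty(M)
    n = 0
    while n < M:
        phi = rng.uniform(0, 2 * np.pi, 4 * M)
        f = (1 + 2 * v2 * np.cos(2 * phi)
               + 2 * v3 * np.cos(3 * phi)
               + 2 * v4 * np.cos(4 * phi))
        keep = phi[rng.uniform(0, fmax, phi.size) < f]
        take = min(M - n, keep.size)
        angles[n:n + take] = keep[:take]
        n += take
    return angles

def two_particle(phi, n):
    """Single-event correlator <2> from the Q-vector, self-correlations removed."""
    Q = np.sum(np.exp(1j * n * phi))
    M = phi.size
    return (np.abs(Q)**2 - M) / (M * (M - 1))

v2_sq_est = np.mean([two_particle(sample_angles(200), 2) for _ in range(500)])
print(v2_sq_est)  # close to the input v2^2 = 0.01
```

The event-averaged two-particle correlator recovers the input $v_2^2$, since the other harmonics are orthogonal to it.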
Now that our Toy Monte Carlo setup is in place, we can use it to check whether our SC observable has the needed properties. We simulate one example of simple nonflow correlations and look at the scaling of our expression as a function of the multiplicity. The nonflow is introduced as follows: for each of the $N = 10^8$ events we generate, we sample a fixed initial number of particles $M_{\textrm{initial}}$ from the possibilities: 25, 38, 50, 75, 100, 150, 200, 250 and 500. The flow harmonics are all set to zero. We then introduce strong two-particle correlations by taking each particle two times in the computation of the two- and multi-particle correlations. This means our final multiplicity is given by
\begin{equation}
M_{\textrm{final}} = 2 M_{\textrm{initial}}.
\label{sect-ToyMC_eq-MultiNonFlow}
\end{equation}
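The effect of this doubling on a plain two-particle correlator can be checked directly: for $M_{\textrm{initial}}$ uniformly random angles each taken twice, $\mathrm{E}|Q_n|^2 = 4M_{\textrm{initial}} = 2M_{\textrm{final}}$, so the Q-vector estimate of $\left<2\right>$ converges to $1/(M_{\textrm{final}}-1)$, a pure nonflow contribution scaling as $\sim 1/M$. A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)
M_initial = 50
M_final = 2 * M_initial

vals = []
for _ in range(2000):
    phi = rng.uniform(0, 2 * np.pi, M_initial)
    phi = np.concatenate([phi, phi])      # each particle taken twice
    Q = np.sum(np.exp(2j * phi))          # Q-vector for n = 2
    vals.append((np.abs(Q)**2 - M_final) / (M_final * (M_final - 1)))

delta2 = np.mean(vals)
print(delta2)  # near 1/(M_final - 1) ~ 0.0101, despite zero input flow
```

This is the two-particle analogue of the multiplicity scaling probed below for the full six-particle SC.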
As detailed in the list of requirements in Appendix~\ref{a:List-of-requirments}, the nonflow scaling of our three-harmonic SC can be described with the following expression:
\begin{equation}
\delta^{\rm SC}_3 = \frac{\alpha}{M^5} + \frac{\beta}{M^4} + \frac{\gamma}{M^3},
\label{sect_ToyMC_eq_fitNonFlow}
\end{equation}
where $M$ corresponds to the final multiplicity introduced in Eq.~\eqref{sect-ToyMC_eq-MultiNonFlow}. The fit is done over all simulated multiplicities and can be found in Fig.~\ref{sect_ToyMC_fig_nonflow}. The obtained fit parameters are as follows:
\begin{gather}
\begin{split}
\alpha & = 27.01 \pm 9.79, \\
\beta & = - 0.0947 \pm 0.1743, \\
\gamma & = (- 1.383 \pm 5.876)\cdot 10 ^{-4},
\end{split}
\end{gather}
with a goodness-of-fit of $\chi ^2/\mathrm{ndf} = 0.7$. We see that the results are well described by the chosen fit. The parameters $\beta$ and $\gamma$ are both consistent with zero, meaning the dominant contribution to the nonflow comes from the six-particle correlator. This is the same leading order behavior as in the corresponding traditional cumulant expansion for this particular SC. Therefore, SC($k$,$l$,$m$) is not sensitive to lower order correlations.
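Since the model in Eq.~\eqref{sect_ToyMC_eq_fitNonFlow} is linear in $(\alpha,\beta,\gamma)$, the fit amounts to linear least squares. A minimal sketch with synthetic, noiseless data (illustrative multiplicities and a pure $1/M^5$ input; this is not our actual fitting code):

```python
import numpy as np

# final multiplicities (2x the initial ones listed in the text)
M = np.array([50., 76., 100., 150., 200., 300., 400., 500., 1000.])
y = 27.0 / M**5                       # synthetic, noiseless 1/M^5 nonflow signal

# design matrix for delta = alpha/M^5 + beta/M^4 + gamma/M^3
A = np.column_stack([1.0 / M**5, 1.0 / M**4, 1.0 / M**3])
alpha, beta, gamma = np.linalg.lstsq(A, y, rcond=None)[0]
print(alpha, beta, gamma)
```

With noiseless input the fit recovers $\alpha \approx 27$ and $\beta, \gamma \approx 0$, mirroring the pattern seen in the actual fit parameters above.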
\begin{figure}
\begin{tabular}{c}
\includegraphics[scale=0.43]{20200416_ToyMC_SC234_nonflow_usual.pdf}
\end{tabular}
\caption{Values of SC(2,3,4) obtained in the case of strong two-particle nonflow. The theoretical value of SC(2,3,4) is zero in this example.}
\label{sect_ToyMC_fig_nonflow}
\end{figure}
This simulation has been done in a totally controlled environment which is not a true representation of what happens in a heavy-ion collision. We check now SC properties in realistic Monte Carlo simulations in the next section, and provide first predictions.
\section{Predictions from realistic Monte Carlo studies}
\label{s:Predictions-from-realistic-Monte-Carlo-studies}
In this section we discuss the physics of our new observables. We provide studies and obtain separate predictions both for the coordinate and momentum space, by using two realistic Monte Carlo models: iEBE-VISHNU and HIJING.
\subsection{Predictions from iEBE-VISHNU}
\label{ss:Realistic_Monte_Carlo_studies}
To inspect to what extent the generalized SC can capture the collective behavior of the heavy-ion collision evolution, we use iEBE-VISHNU~\cite{Shen:2014vra}, a heavy-ion collision event generator based on hydrodynamic calculations. In this event generator, after preparing the initial state using a Monte Carlo approach, the evolution of the energy density is calculated via 2+1 dimensional causal hydrodynamics, together with an equation of state obtained from a combination of lattice QCD and a hadronic resonance gas model (s95p-v1~\cite{Huovinen:2009yb}). After the hydrodynamic evolution, the Cooper-Frye formula~\cite{Cooper:1974mv} is used to convert each fluid patch into a distribution of hadrons. Using iEBE-VISHNU, one can study the hadronic gas evolution after hadronization by using the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport model~\cite{Schenke:2012wb}, in which the evolution stops when no further collisions or decays happen in the system. In the present study, we have not used UrQMD, in order to decrease the simulation time.
In this paper, we study Pb--Pb collisions with a center-of-mass energy per nucleon pair of $\sqrt{s_{\rm NN}}=2.76\,$TeV. For the initial state, MC-Glauber with a wounded nucleon/binary collision mixing ratio of $0.118$ is used. For the hydrodynamic evolution, the DNMR formalism~\cite{Denicol:2010xn,Denicol:2012cn} with a fixed shear viscosity over entropy density ($\eta/s=0.08$) and zero bulk viscosity is exploited. The hydrodynamic initial time is set to $0.6\;$fm/$c$. The events are divided into 16 equally sized centrality classes between 0--80\%. For each centrality, we have generated 14000 events. Let us point out that in the present study we have taken into account the particles $\pi^{\pm}$, $K^{\pm}$, $p$ and $\bar{p}$ in the final particle distribution, since they are the most abundant species in the final distribution.
In iEBE-VISHNU, the reaction plane angle, i.e. the angle between the impact parameter vector and a fixed reference direction, is set to zero for all events. One notes that here we are dealing with a 2+1 dimensional hydrodynamic calculation, in which boost invariance is assumed in the third (longitudinal) direction. For that reason, there is no pseudorapidity dependence in the present simulation.
It is worth mentioning that the aim of the simulations in this paper is not to reproduce the experimental observations precisely. Rather, we aim to demonstrate that in the presence of flow the generalized SC have non-trivial values, and that their measurements are feasible for Pb--Pb collisions at LHC energies in terms of the required statistics. Nevertheless, to show that our simulations can be considered as at least a qualitative prediction for future experimental observations, we present our Monte Carlo simulation results for a few well-studied two-harmonic SC and compare them to the published data from ALICE in Appendix~\ref{ss:AppendixD}.
Regarding the $p_T$ range, the range $0.28 < p_T < 4$~GeV has been chosen in the present study. In ALICE, $p_T$ is in the range $0.2 < p_T < 5$~GeV. It turns out that SC is very sensitive to the lower limit of the $p_T$ range, where we expect to have a considerable amount of particles. For the range $0.28 < p_T < 4$~GeV, which is closer to the ALICE $p_T$ range, we have a qualitative agreement between simulation and data for the SC of two flow amplitudes (see Appendix~\ref{ss:AppendixD}). The reason we have used $0.28$ (not $0.3$ or $0.2$) for the lower $p_T$ limit is that the $p_T$ dependent output of VISHNU is reported on a fixed discrete grid in the range $0 < p_T <4$~GeV. We should point out that our computations with the lower limit $p_T=0.28$~GeV show a reasonable agreement with Ref.~\cite{Zhu:2016puf}, in which a VISHNU simulation with the MC-Glauber model for the initial state and constant $\eta/s=0.08$ has been studied.
According to the collective picture of the produced matter in heavy-ion collisions, the anisotropic flow corresponds to the anisotropy in the initial state, quantified by the eccentricities $\epsilon_n e^{in\Phi_n}$. In fact, the eccentricities are defined as the moments of the initial energy density~\cite{Teaney:2010vd},
\begin{equation}
\epsilon_n e^{in\Phi_n} \equiv-\frac{\int r^n e^{in\varphi} \rho(r,\varphi)rdrd\varphi}{\int r^n \rho(r,\varphi)rdrd\varphi},\qquad n=2,3,\ldots,
\end{equation}
where $(r,\varphi)$ are the polar coordinates in the two-dimensional transverse space, and $\rho(r,\varphi)$ is the energy density in this space. Similarly to the anisotropic flow, the SC of the eccentricity distribution, denoted by $\text{SC}_{\epsilon}(n,m,p)$, can be studied by replacing $v_n$ with $\epsilon_n$ in Eq.~(\ref{eq:cumulantsInTermsOfHarmonics}).
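For a discrete set of initial-state sources of equal weight (e.g. wounded nucleons), the integrals above reduce to sums over the source positions, recentered so that the first moment vanishes. A minimal sketch (hypothetical elliptic Gaussian source distribution, for which $\epsilon_2 = (\sigma_y^2-\sigma_x^2)/(\sigma_x^2+\sigma_y^2)$ is known analytically):

```python
import numpy as np

rng = np.random.default_rng(4)

def eccentricity(x, y, n):
    """epsilon_n * exp(i n Phi_n) for point-like sources of equal weight."""
    x = x - x.mean()                 # recenter the distribution
    y = y - y.mean()
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    return -np.mean(r**n * np.exp(1j * n * phi)) / np.mean(r**n)

# elliptic Gaussian: analytically eps_2 = (4 - 1)/(4 + 1) = 0.6
x = rng.normal(0, 1.0, 200_000)
y = rng.normal(0, 2.0, 200_000)
eps2 = np.abs(eccentricity(x, y, 2))
print(eps2)
```

The sampled value agrees with the analytic one up to statistical fluctuations.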
In addition, the flow fluctuations are a manifestation of the initial state fluctuations after the collective evolution. One notes that the difference between initial state models (Glauber, CGC, TrENTo, etc.) is encoded not only in different cumulants of the eccentricities $\epsilon_2$ or $\epsilon_3$, but also in the other details of the initial state fluctuations, for instance in the correlations between eccentricities. In other words, to discriminate between different models, we need to study different aspects of the eccentricity fluctuations and their correlations. The higher order SC observables introduced in this paper should be considered as an example of such a study. Modeling the collective evolution as a linear/non-linear response to the initial state helps our understanding of both the initial state and the collective evolution models. Using it, one can explain the experimentally observed flow fluctuations, in the sense that one can qualitatively inspect how much of the observed value is a manifestation of the initial state and how much of it is a consequence of the collective evolution. Beyond that, it is also possible to examine the strength (couplings) of the linear/non-linear terms with this method. A more detailed investigation in this context is postponed to future studies.
\begin{figure}[t!]
\begin{center}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{Fig_SCV234.pdf} &
\includegraphics[scale=0.43]{Fig_SCV235.pdf} \\
\includegraphics[scale=0.43]{Fig_SCV345.pdf} &
\includegraphics[scale=0.43]{Fig_SCV246.pdf}
\end{tabular}
\caption{ Four different generalized SC obtained from iEBE-VISHNU. }
\label{SCVmnp}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{Fig_SCeps234.pdf} &
\includegraphics[scale=0.43]{Fig_SCeps235.pdf} \\
\includegraphics[scale=0.43]{Fig_SCeps345.pdf} &
\includegraphics[scale=0.43]{Fig_SCeps246.pdf}
\end{tabular}
\caption{ Four different generalized SC obtained from MC-Glauber. }
\label{SCepsmnp}
\end{center}
\end{figure}
Using our Monte Carlo simulations, we show SC(2,3,4), SC(2,3,5), SC(3,4,5) and SC(2,4,6) in Fig.~\ref{SCVmnp}. The same quantities obtained from the initial state are presented in Fig.~\ref{SCepsmnp}.
One can see from Figs.~\ref{SCVmnp} and \ref{SCepsmnp} that, except for SC(3,4,5) in the final state (Fig.~\ref{SCVmnp}(c)), all the other presented cumulants show significant non-vanishing values. Moreover, one can observe that the cumulants obtained from the initial state (Fig.~\ref{SCepsmnp}) are monotonically increasing, while the same cumulants for the distribution after the hydrodynamic evolution change slope after the centrality classes $45\%$ to $55\%$ and approach zero. This can be due to the fact that, in the more peripheral collisions, the duration of the collective evolution is shorter, and the small system size cannot transport the initially produced correlation into the final momentum distribution. Moreover, one can see a sign flip in $\text{SC}(2,3,4)$ and a suppression in $\text{SC}(3,4,5)$ in Fig.~\ref{SCVmnp}, compared to the initial state generalized SC in Fig.~\ref{SCepsmnp}. We will return to these points after defining the normalized generalized SC in the following.
As can be seen from Figs.~\ref{SCVmnp} and \ref{SCepsmnp}, due to the different scales of the initial and final distributions, we are not able to compare $\text{SC}(n,m,p)$ and $\text{SC}_{\epsilon}(n,m,p)$ directly. In order to clarify to what extent the observed values of SC are related to the correlations in the initial state, we exploit the normalized generalized SC,
\begin{eqnarray}\label{NGSC}
\text{NSC}(n,m,p)=\frac{\text{SC}(n,m,p)}{\langle v_n^2 \rangle \langle v_m^2 \rangle \langle v_p^2\rangle},\qquad \qquad \text{NSC}_{\epsilon}(n,m,p)=\frac{\text{SC}_{\epsilon}(n,m,p)}{\langle \epsilon_n^2 \rangle \langle \epsilon_m^2 \rangle \langle \epsilon_p^2\rangle}.
\label{NSCeq}
\end{eqnarray}
With such a study, for instance, one can quantify the influence of the collective stage on the heavy-ion evolution. If the NSC calculated in terms of $\epsilon_n$ and in terms of $v_n$ are the same, this means that the anisotropies in the initial state are the dominant source of anisotropies in the final state, and therefore NSC can be used to tune the details of the initial conditions only. Using normalized (generalized) SC, we also avoid the sensitivity to the $p_T$ range in the final particle distribution. In fact, NSC eliminates the dependence on the magnitudes of the individual flow amplitudes, which can be obtained independently from correlators involving only the same harmonic. Therefore, the independent information contained only in the correlations is best extracted from the normalized SC. While this is rather straightforward to achieve in models having only flow correlations, it is much more of a challenge in experimental analyses, due to difficulties in suppressing the nonflow contribution in the denominator in Eq.~\eqref{NSCeq}. In \cite{ALICE:2016kpq}, where normalized SC were measured for the first time, the nonflow in the denominator was suppressed by introducing pseudorapidity gaps in two-particle correlations.
The generalized NSC are depicted in Fig.~\ref{NSCmnp}. The similarities and discrepancies between $\text{NSC}(n,m,p)$ and $\text{NSC}_{\epsilon}(n,m,p)$ can be explained qualitatively by considering the linear and non-linear hydrodynamic response. As a matter of fact, the linear response holds approximately for $\epsilon_2$ and $\epsilon_3$ in central and mid-central collisions \cite{Gardim:2011xv}. It means that an event with larger ellipticity $\epsilon_2$ has a larger elliptic flow $v_2$, and the dependence is approximately linear. The same relation holds between the triangularity $\epsilon_3$ and the triangular flow $v_3$. However, this is not the case for higher harmonics. For instance, it has been shown that for $v_4 e^{4i\Psi_4}$ and $v_5e^{5i\Psi_5}$ we have the following relations \cite{Gardim:2011xv,Teaney:2012ke,Teaney:2013dta,Giacalone:2018syb},
\begin{equation}\label{nonlinear}
\begin{aligned}
v_4 e^{4i\Psi_4} &= k_4 \epsilon_4 e^{i4\Phi_4}+k'_4 \epsilon_2^2 e^{i4\Phi_2}\,, \\
v_5 e^{5i\Psi_5} &= k_5 \epsilon_5 e^{i5\Phi_5}+k'_5 \epsilon_2 \epsilon_3 e^{i(2\Phi_2+3\Phi_3)}\,,
\end{aligned}
\end{equation}
where $k_n$ and $k'_n$ are coefficients related to the hydrodynamic response.
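The qualitative role of the non-linear terms can be illustrated with a toy calculation (arbitrary, illustrative response coefficients, and symmetry-plane phases ignored for simplicity): even when $\epsilon_4$ is sampled independently of $\epsilon_2$, the $\epsilon_2^2$ term in the response makes $v_4$ correlated with $\epsilon_2$.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

eps2 = rng.uniform(0.0, 0.5, N)
eps4 = rng.uniform(0.0, 0.3, N)       # independent of eps2 by construction

k4, k4p = 1.0, 0.5                    # illustrative response coefficients
v4 = k4 * eps4 + k4p * eps2**2        # phases ignored for simplicity

# covariance of v4^2 and eps2^2: nonzero purely through the non-linear term
cov = np.mean(v4**2 * eps2**2) - np.mean(v4**2) * np.mean(eps2**2)
print(cov)  # positive, although eps4 and eps2 are independent
```

Setting `k4p = 0` removes the correlation entirely, which is the mechanism invoked in the discussion below.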
Interestingly, compared to $\text{NSC}_{\epsilon}(2,3,4)$, a sign change can be seen in $\text{NSC(2,3,4)}$ (Fig.~\ref{NSCmnp}(a)), which is similar to what has been observed for $\text{NSC(3,4)}$ and $\text{NSC}_{\epsilon}(3,4)$ in Fig.~\ref{NSCmn}(c) in Appendix~\ref{ss:AppendixD}. In the present case, we are dealing with a genuine three-harmonic observable. However, the main difference between the generalized SC of the initial state and of the final state comes from the contribution of the non-linear term $\epsilon_2^2$ in $v_4$. In fact, the term $\epsilon_2^2$ and its anti-correlation with $\epsilon_3$ should be responsible for this sign change, similarly to the $\text{NSC(3,4)}$ and $\text{NSC}_{\epsilon}(3,4)$ case (see Appendix~\ref{ss:AppendixD}). The same logic can explain the suppression of the cumulant $\text{NSC}(3,4,5)$ in Fig.~\ref{NSCmnp}(c) too. We know that there is a non-linear contribution with the term $\epsilon_2 \epsilon_3$ in $v_5$ (see Eq.~\eqref{nonlinear}).
As a result, the terms $\epsilon_2^2$ and $\epsilon_2 \epsilon_3$ can explain the small value of $\text{SC(3,4,5)}$ (or the suppression of $\text{NSC}(3,4,5)$ in comparison with $\text{NSC}_{\epsilon}(3,4,5)$). However, in $\text{NSC}(2,3,5)$ only the term $\epsilon_2\epsilon_3$ plays a role. As can be seen in Fig.~\ref{NSCmnp}(b), the effect of the term $\epsilon_2\epsilon_3$ is small in $\text{NSC}(2,3,5)$.
It is worth noting that for the even simpler cumulants $\text{NSC}(2,3)$ and $\text{NSC}_{\epsilon}(2,3)$ (see Fig.~\ref{NSCmn}(b) in Appendix~\ref{ss:AppendixD}) we do not have such an agreement in centrality classes above $40\%$. We think a more complex mechanism, such as the presence of extra non-linear terms, must be behind the approximate agreement between $\text{NSC}(2,3,5)$ and $\text{NSC}_{\epsilon}(2,3,5)$ in a wide range of centrality classes. Finally, compared to $\text{NSC}_{\epsilon}(2,4,6)$, we observe a considerable enhancement in $\text{NSC}(2,4,6)$ in Fig.~\ref{NSCmnp}(d). This enhancement is even larger than what has been observed in Fig.~\ref{NSCmn}(b) for $\text{SC(2,4)}$. The likely reason is that in $\text{NSC}(2,4,6)$ the term $\epsilon_2^2$ in $v_4$ and the term $\epsilon_2^3$ in $v_6$ are responsible for this enhancement. It seems the term $\epsilon_3^2$ in $v_6$ and its anti-correlation with $\epsilon_2$ do not have enough power to compete with the trivial correlations among $\epsilon_2$, $\epsilon_2^2$ and $\epsilon_2^3$. It is important to point out that we have explained the generalized SC from the initial state and the hydrodynamic response only qualitatively. In this context, a further rigorous study needs to be done in the future.
\begin{figure}[t!]
\begin{center}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{Fig_NSC234.pdf} &
\includegraphics[scale=0.43]{Fig_NSC235.pdf} \\
\includegraphics[scale=0.43]{Fig_NSC345.pdf} &
\includegraphics[scale=0.43]{Fig_NSC246.pdf}
\end{tabular}
\caption{ Comparison between four different normalized generalized SC obtained from MC-Glauber and iEBE-VISHNU. }
\label{NSCmnp}
\end{center}
\end{figure}
\subsection{Estimating nonflow contribution with HIJING}
\label{ss:Estimating-nonflow-contribution-with-HIJING}
As detailed in Appendix~\ref{a:List-of-requirments}, one of the requirements for our observables to be called SC is that they should be robust against nonflow. We have already studied in a Toy Monte Carlo simulation the case of strong two-particle correlations for different multiplicities in Sec.~\ref{ss:Nonflow-estimation-with-Toy-MonteCarlo-studies}. We now investigate the nonflow contribution further and introduce HIJING, which stands for Heavy-Ion Jet INteraction Generator~\cite{Gyulassy:1994ew}. It is a Monte Carlo model used to study particle and jet production in nuclear collisions. It contains models to describe mechanisms like jet production, jet fragmentation, nuclear shadowing, etc. The correlations these mechanisms introduce generally involve only a few particles and should not be included in the analysis of collective effects like anisotropic flow. Since HIJING has all the phenomena produced in a heavy-ion collision except flow itself, we can use it to test the robustness of our SC observable against few-particle nonflow correlations.
In general, when one uses an azimuthal correlator in an expression like
\begin{equation}
\left<e^{in(\varphi_1\!-\!\varphi_2)}\right> = \left<\cos[n(\varphi_1\!-\!\varphi_2)]\right> = v_n^2\,,
\end{equation}
one assumes in the derivation that the underlying two-variate p.d.f. $f(\varphi_1,\varphi_2)$ fully factorizes into the product of two marginal p.d.f.'s, i.e.:
\begin{equation}
f(\varphi_1,\varphi_2) = f_{\varphi_1}(\varphi_1)f_{\varphi_2}(\varphi_2)\,.
\end{equation}
\begin{figure}
\begin{tabular}{c c}
\includegraphics[scale=0.43]{20200410_SC234_Final_00-50} &
\includegraphics[scale=0.43]{20200410_SC235_Final_00-50}
\end{tabular}
\caption{Predictions for the centrality dependence of SC(2,3,4) (a) and SC(2,3,5) (b) from the HIJING and iEBE-VISHNU models. The $p_{T}$ range of the iEBE-VISHNU simulations (0.28 $< p_{T} <$ 4 GeV) have been chosen to be close to the HIJING one (0.2 $< p_{T} <$ 5 GeV).}
\label{sect_Hijing_fig_HijingVishnu}
\end{figure}
If all azimuthal correlations are induced solely by anisotropic flow, such factorization is exactly satisfied. In the case of few-particle nonflow correlations the above factorization is broken, and flow amplitudes $v_n$ are now systematically biased when estimated with azimuthal correlators, i.e. at leading order we expect to have:
\begin{equation}
\left<e^{in(\varphi_1\!-\!\varphi_2)}\right> = v_n^2 + \delta_2\,,
\end{equation}
where $\delta_2$ denotes the systematic bias in 2-particle azimuthal correlations due to nonflow. We have written the above argument for the two-particle azimuthal correlations, but the argument can be trivially generalized for any larger number of particles (see the discussion on nonflow in Appendix~\ref{a:List-of-requirments}). It is impossible in practice to calculate and to quantify the systematic bias $\delta_2$ stemming from vastly different sources of nonflow correlations. Since HIJING has all relevant sources of correlations, except anisotropic flow, it is a great model to estimate solely the value of this systematic bias $\delta_2$, i.e. in HIJING we expect the following relation to hold:
\begin{equation}
\left<e^{in(\varphi_1\!-\!\varphi_2)}\right> = \delta_2\,,
\end{equation}
and analogously for higher order azimuthal correlators. If a given multi-particle azimuthal correlator, or a compound observable like SC estimated from azimuthal correlators, is consistent with 0 when measured in the HIJING dataset, then the nonflow contribution to that observable is negligible, and it can therefore be used as a reliable estimator of anisotropic flow properties.
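As an illustration of this argument, a toy Monte Carlo (a sketch, not the analysis code used in this work; the event count, multiplicity and $v_2 = 0.1$ are illustrative choices) shows that, for a purely flow-like p.d.f. with a random symmetry plane in each event, the standard single-event estimator $(|Q_2|^2 - M)/(M(M-1))$, with $Q_2 = \sum_j e^{i2\varphi_j}$, averages to $v_2^2$, while for isotropic emission it is consistent with 0:

```python
import cmath, math, random

def simulate(n_events, mult, v2, seed=1):
    """Average 2-particle correlator <<cos 2(phi1-phi2)>> over events,
    computed per event from the Q-vector: (|Q2|^2 - M) / (M (M-1))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        psi = rng.uniform(0.0, 2.0 * math.pi)      # random symmetry plane per event
        q2, m = 0.0 + 0.0j, 0
        while m < mult:
            phi = rng.uniform(0.0, 2.0 * math.pi)
            # accept-reject sampling of f(phi) ~ 1 + 2 v2 cos[2(phi - psi)]
            if rng.random() * (1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - psi)):
                q2 += cmath.exp(2j * phi)
                m += 1
        total += (abs(q2) ** 2 - mult) / (mult * (mult - 1))
    return total / n_events

flow = simulate(2000, 100, 0.10)   # flow only: expect ~ v2^2 = 0.01
iso = simulate(2000, 100, 0.00)    # isotropic emission: expect ~ 0
```

Injecting nonflow (e.g. duplicated tracks) instead of flow would shift the isotropic result away from 0 by the bias $\delta_2$, which is exactly what the HIJING study quantifies.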
In this section we show the predictions from HIJING for the centrality dependence of two different combinations of harmonics: SC(2,3,4) and SC(2,3,5). The data used here correspond to Pb--Pb collisions at the center-of-mass energy of $\sqrt{s_{\text{NN}}}$ = 2.76 TeV. Two kinematic cuts have been applied: 0.2 $< p_{T} <$ 5 GeV and -0.8 $< \eta <$ 0.8. The results obtained with HIJING are shown in Fig.~\ref{sect_Hijing_fig_HijingVishnu} (a) for SC(2,3,4) and in Fig.~\ref{sect_Hijing_fig_HijingVishnu} (b) for SC(2,3,5), alongside the iEBE-VISHNU predictions for the same combinations of harmonics. We can see that in both cases our new SC observable in HIJING is compatible with 0 for head-on and mid-central collisions, meaning that the observable is robust against nonflow. The comparison also shows that flow generally makes a larger contribution than nonflow. This means that any observation of nonzero values of SC in real data could be attributed to collective effects.
\section{Summary}
\label{s:Summary}
In summary, we have presented the generalization of recently introduced flow observables, dubbed Symmetric Cumulants. We have presented the unique way in which the genuine multi-harmonic correlations can be estimated with multi-particle azimuthal correlators. All desired properties of higher order Symmetric Cumulants were tested with carefully designed Toy Monte Carlo studies. By using the realistic iEBE-VISHNU model, we have demonstrated that their measurements are feasible and we have provided the first predictions for their centrality dependence in Pb--Pb collisions at LHC energies. A separate study has been presented for their values in coordinate space. A study based on HIJING demonstrated that these new observables are robust against systematic biases due to nonflow correlations. These generalized, higher order observables can therefore be used to reliably extract new and independent information about the initial conditions and the properties of QGP in high-energy nuclear collisions.
\acknowledgments{
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 759257).
This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure of the Universe.''
}
\section{Introduction}\label{sec:introduction}
{Models of solar flares and coronal heating mechanisms require the build-up and storage of magnetic energy in the coronal magnetic field. This build-up of magnetic energy is frequently modelled by imposing slow photospheric motions that gently stress the coronal field. The common assumption, valid when the driving velocities are very small compared with the coronal Alfv\'en speed, is that the magnetic field will simply pass through a sequence of equilibrium states until the critical conditions, for either an instability or non-equilibrium, are reached and the magnetic energy is subsequently released.
}
{
Ideally one would like to model this evolution through the full time-dependent non-linear MHD (magnetohydrodynamic) equations. This requires the adoption of a computational approach but, at present, limitations on resources make it difficult to follow the slow evolution over long times. Instead, a variety of approximate approaches that treat the coronal magnetic field in a simplified way have been used. These make different assumptions in order to achieve tractability, and it is important to understand how these approaches compare with each other and, especially, how they compare with a full MHD treatment. This has not been carried out before and is the purpose of this paper.}
{To do this, we consider an idealised problem of the shearing of an initially uniform magnetic field in a straightened coronal loop (with the photosphere modelled as two parallel boundaries). Four approximate methods are used, two that consider quasi-static evolution and calculate equilibrium fields and two that consider the time evolution of the field. Note that in the former category, one can calculate a sequence of equilibria in response to footpoint motions, but the intermediate time evolution is lost. The success or otherwise of these approximate models is benchmarked against solutions of the full MHD equations using the Lare computational method
(\cite{arber01}).}
{The first quasi-static methodology considered is the relaxation or magneto-frictional method
(\cite{yang86,klimchuk92}) which, together with a flux transport model
(\cite{mackay06a,mackay06b}), can be used to track the long time evolution of the force-free, coronal magnetic field from days to years. How the field reaches equilibrium is not considered in this approach, but the relaxed state, for the given time evolution of the photospheric magnetic field, is the main goal. This is discussed in Section \ref{subsec:relax}. If there is no equilibrium, for example if a Coronal Mass Ejection (CME) occurs, the relaxation code fails to converge. }
{The second method is based on the well-known idea that two-dimensional equilibria satisfy the Grad-Shafranov equation for the magnetic flux function, $A$ (see Section \ref{subsec:1D}), but, in general, it is difficult to
determine, for specified footpoint displacements, the unknown functional dependencies of the gas pressure
and the shear component of the magnetic field on $A$. However,
\cite{lothian89} and \cite{browning89}
used the fact that there is a narrow boundary layer through which the various variables rapidly change from their boundary values to coronal values and that the coronal values depend on only one coordinate. Thus, the two-dimensional approach can be reduced to a one-dimensional problem (in the case when the length of the coronal loop is much larger than the scale of variation of the footpoint motions).
}
{With time-dependent methods, the simplest and most common way to study the evolution is to linearise the MHD equations about a simple uniform initial state, as described by \cite{rosner82}. While linearised MHD is straightforward, the possible complexities for this class of problem can be demonstrated by taking the expansion procedure to a higher order. Thus, we can study weakly non-linear effects, due to the non-linear back reaction of the linear solution. The solutions, described in detail in Section \ref{subsec:linear} and the Appendix, will also reveal features that help to justify the use of the one-dimensional solution mentioned above.
}
{Finally, time-dependent non-linear evolution may also be described by the Reduced MHD (RMHD) equations. By eliminating the fast magnetoacoustic waves and utilising the difference in horizontal and parallel length scales, a set of simpler equations can be obtained. RMHD was introduced for laboratory fusion plasma by, for
example,
\cite{kadomtsev74,strauss76,zank92}, and used for coronal plasmas by, for example, \cite{scheper99} and \cite{rappazzo10,rappazzo13}.
A recent review by \cite{oughton17} discusses the validity of the RMHD equations.
}
{There are a few similar investigations for other situations: for example, \cite{pagano13}, who compared the relaxation method with an MHD simulation for the onset of a CME; \cite{dmitruk05}, who compare Reduced MHD with MHD for the case of turbulence; and \cite{schrijver06}, who test force-free extrapolations against a known solution. Examples of footpoint-driven simulations include \cite{murawski94,meyer11,meyer12,meyer13}.}
{
Section \ref{sec:methods} describes the simple footpoint shearing experiment, and outlines the details of the four approximate models we examine. Section \ref{sec:results} contains a comparison between these models and benchmarks them against solutions to the full MHD equations. We will find that some methods perform quite well, even when their basic assumptions are not necessarily satisfied. A discussion of the results and possible future benchmarking exercises are presented in Section \ref{sec:conclusions}.}
\section{MHD Equations and Solution Methods}\label{sec:methods}
\subsection{MHD: Basic Equations}
The time evolution of our simple experiment, outlined below, is determined by solving the viscous, ideal MHD equations. The full set of equations are expressed as
\begin{flalign}
&\rho \frac{\partial {\bf v}}{\partial t} + \rho ({\bf v}\cdot \nabla){\bf v} = - \nabla p + {\bf j} \times {\bf B} + \nabla \cdot {\bf S},\label{eq:motion}\\
&\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho {\bf v}) = 0,\label{eq:continuity}\\
&\frac{\partial {\bf B}}{\partial t} = \nabla \times ({\bf v}\times{\bf B}),\label{eq:induction}\\
&\frac{\partial}{\partial t}\left (\frac{p}{\gamma -1}\right ) + {\bf v}\cdot \nabla \left (\frac{p}{\gamma -1}\right ) = - \frac{\gamma p}{\gamma -1} \nabla \cdot {\bf v} + \epsilon_{ij}S_{ij}, \label{eq:energy}
\end{flalign}
together with
\begin{displaymath}
{\bf j} = \frac{\nabla \times {\bf B}}{\mu},\hbox{ and } \nabla \cdot {\bf B} = 0\; .
\end{displaymath}
${\bf v}$ is the plasma velocity, $\rho$ the mass density, $p$ the gas pressure, ${\bf B}$ the magnetic field and ${\bf j}$ the current density. Gravity is neglected.
The viscous stress tensor is given by
\begin{displaymath}
S_{ij} = 2\rho \nu \left (\epsilon_{ij} - \frac{1}{3}\delta_{ij}\nabla \cdot {\bf v}\right ),
\end{displaymath}
where $\nu$ is the viscosity and the strain rate is
\begin{displaymath}
\epsilon_{ij} = \frac{1}{2} \left (\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right ).
\end{displaymath}
Equations (\ref{eq:motion}) - (\ref{eq:energy}) conserve the total energy,
$E = \frac{1}{2}\rho v^2 + \frac{B^2}{2\mu} + \frac{p}{\gamma -1}$, so that the dissipation of kinetic energy must go either
into an increase in magnetic energy or an increase in internal energy (i.e. the gas pressure) {defined as $e=\frac{p}{\gamma-1}$}.
The form of the viscous stress tensor does not include the anisotropies introduced by the magnetic field. However {for this experiment}, the main role of the viscosity is to damp out the waves generated by the boundary motions and to allow the
field and plasma to evolve through sequences of equilibrium states so its exact form is not essential. Resistivity is not included as, in general, it will decrease the magnetic energy. \textbf{It has been confirmed that numerical resistivity is negligible, as the energy injected at the boundaries equals the energy in the system to within $\sim 1\%$.} The aim is to
follow a sequence of magnetostatic equilibria.
It is normal to express the variables in the MHD equations
in terms of non-dimensional ones and look for dimensionless parameters in the system. Then, it may be possible to use the fact that these parameters are either very large or very small to determine approximate
solutions. Hence, we define a length scale, $R$, a density, $\rho_{0}$, and a magnetic field strength, $B_0$. Speeds are expressed in units of the Alfv\'en speed, $V_A= B_0/\sqrt{\mu \rho_0}$, and time in terms
of the Alfv\'en travel time, $t_0= R/V_A$. Hence, we set
\begin{eqnarray}
&&\left ( x, y, z\right ) = R \left (\tilde{x}, \tilde{y}, \tilde{z}\right )\; , \quad t = \frac{R}{V_A} \tilde{t}\; , \quad {\bf B} = B_0 \tilde{{\bf B}}\; ,\nonumber\\
&& p = \frac{B_0^2}{\mu} \tilde{p}\; , \quad {\bf v} = V_A\tilde{{\bf v}}\; , \quad \rho = \rho_0
\tilde{\rho}\; . \label{eq:normalisation}
\end{eqnarray}
Substituting these expressions into equations (\ref{eq:motion}) - (\ref{eq:energy}) and dropping the tildes, the equations remain exactly the same, except that $\mu=1$ and $\nu$ is a dimensionless viscosity that is
the inverse of the Reynolds number. For the values $R = 2\times 10^7$m, $\rho_0 = 1.67 \times 10^{-12}$kg m$^{-3}$ and $B_0 = 10^{-3}$ tesla, the Alfv\'en speed is $V_A = 690$km s$^{-1}$ and the Alfv\'en travel time is
$t_0 = 29$s.
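These reference values can be checked directly (a sketch in SI units; $\mu_0 = 4\pi\times 10^{-7}$ H m$^{-1}$ is the vacuum permeability):

```python
import math

# Reference values quoted in the text.
mu0 = 4.0e-7 * math.pi                   # vacuum permeability (SI)
R, rho0, B0 = 2.0e7, 1.67e-12, 1.0e-3    # m, kg m^-3, tesla

VA = B0 / math.sqrt(mu0 * rho0)          # Alfven speed, m/s  (~690 km/s)
t0 = R / VA                              # Alfven travel time, s  (~29 s)
```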
\subsection{Experiment description}\label{sec:expdesc}
Consider a computational box $-l \le x \le l$, and $-L \le y \le L$ and an initial, uniform magnetic field, ${\bf B} = B_0 \hat{y}$, uniform density, $\rho_0$ and uniform pressure, $p_0$. This can be thought of as a coronal
loop of length $2L$ and width $2l$ with a dimensionless plasma $\beta$ equal to $2p/B^2$ and we will use the term \lq loop\rq\ though the results are generic.
In our dimensionless variables, $B_0 = 1$,
$\rho_0 = 1$ and $p_0$ is a constant related to the initial plasma beta, $\beta_0$, {by $\beta_0=2p_0$ and to the initial internal energy by $e_{0}=\frac{p_0}{\gamma-1}$}.\\
Now impose a shearing velocity in the $z$ direction at the two photospheric ends ($y= \pm L$). $z$ is chosen to be
an ignorable coordinate so that the MHD equations will reduce to the appropriate 2.5D form. For the driving motions, we select
\begin{equation}
v_z (x, \pm L, t) = \pm F(t) \sin kx,
\label{eq:shear}
\end{equation}
where $k = \pi/l$ and $v_z(\pm l, y, t) = 0$. The time variation of the shearing velocity is taken as
\begin{equation}
F(t) = \frac{V_0}{2}\left \{\tanh \left (\frac{t- t_1}{\tau_0}\right ) + 1 \right \},
\label{eq:shearvy}
\end{equation}
where $t_1 > \tau_0$ is the switch-on time. We use $t_1 = 6$ and $\tau_0= 2$.
If the parameter $\tau_0$ is small, then $F(t)$ can be approximated by
\begin{equation}
F(t) = \left \{\begin{array}{cc}
0\;, & t < t_1\;, \\
V_0\; , & t_1 \le t\; .
\end{array}\right .
\end{equation}
We can also switch the driving off by using a similar function to ramp down the velocity.
{This form of the velocity on the boundary will cause the photospheric footpoints to be displaced by a distance $d(x)=D\sin kx$. The maximum footpoint displacement, $D$, can be calculated by integrating the velocity amplitude in time as }
\begin{equation}
D = \int_0^{t} F(t^\prime)\, dt^\prime = \frac{V_0 \tau_0}{2} \left(\log \left \{\frac{\cosh \left ((t - t_1)/\tau_0\right )}{\cosh \left (t_1/\tau_0\right )}\right \} + \frac{t}{\tau_0}\right ) \approx V_0 (t - t_1)\; ,
\label{eq:displacement}
\end{equation}
for times greater than $t_1$. {Thus, we have three distinct lengths in this problem: the half-length of the loop, $L$; the half-width of the loop, $l$; and the photospheric footpoint displacement, $d(x)$,
from its initial position. In all
cases, we take $L=3$ and $l = 0.3$ so that $l/L =0.1 \ll 1$. However, we allow $D/L$ to vary from small to large values.}
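As a quick sketch (using the values $V_0 = 0.02$, $t_1 = 6$ and $\tau_0 = 2$ adopted here), the displacement in Equation (\ref{eq:displacement}) can be recovered by integrating the ramp profile $F(t)$ numerically:

```python
import math

V0, t1, tau0 = 0.02, 6.0, 2.0

def F(t):
    # Shearing-velocity amplitude, Equation (eq:shearvy).
    return 0.5 * V0 * (math.tanh((t - t1) / tau0) + 1.0)

def displacement(t, n=20000):
    # D(t) = int_0^t F(t') dt' by the composite trapezoidal rule.
    h = t / n
    return h * (0.5 * F(0.0) + sum(F(i * h) for i in range(1, n)) + 0.5 * F(t))

D = displacement(156.0)   # expect ~ V0 (t - t1) = 0.02 * 150 = 3.0
```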
{
Next, we consider the various speeds in our system. These are: the Alfv\'en speed, $V_A$; sound speed, $c_s=\sqrt{\gamma p_0/\rho_0}$ ($\gamma
=5/3$ is the ratio of specific heats); the speed of the driving motions at the photospheric ends, $V_0$; and a diffusion speed, $V_{visc}=\nu/l$, based on the horizontal lengthscales.
Typically we take $\nu = 10^{-3}$ so that $V_{visc} \approx 3 \times 10^{-3}$. A smaller value of
$\nu$ could be used but too small a value results in numerical diffusion being more important than the specified value.
In order to pass through sequences of equilibria, we require
\begin{equation}
V_{visc} \ll V_0 \ll c_s. \label{eq:vineq}
\end{equation}
The driving speed is also slow and sub-Alfv\'enic if $V_0 \ll 1$ from Equation (\ref{eq:normalisation}). Accordingly we choose $V_0$ as 0.02.
Equation (\ref{eq:vineq}) then requires that the pressure is larger than a minimum value of
$p_0 \gg 2.4 \times 10^{-4}$. We consider the range
$10^{-3} < p_0 < 1.0$. Equivalently this can be written in terms of the initial plasma $\beta_0$ as $2\times 10^{-3}<\beta_0 <2.0$ or in terms of the initial internal energy as $\frac{3\times 10^{-3}}{2}<e_{0}<\frac{3}{2}$.}
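The ordering of characteristic speeds in Equation (\ref{eq:vineq}) can be verified for the chosen parameters (a sketch in dimensionless units, where $V_A = 1$):

```python
import math

gamma, nu, l, V0 = 5.0 / 3.0, 1.0e-3, 0.3, 0.02

V_visc = nu / l                        # diffusion speed based on the loop half-width (~3.3e-3)
cs_min = math.sqrt(gamma * 1.0e-3)     # sound speed at the smallest p0 considered
cs_max = math.sqrt(gamma * 1.0)        # sound speed at the largest p0 considered
```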
\subsection{Relaxation}\label{subsec:relax}
Magneto-frictional relaxation methods solve the induction equation with the velocity given by {the unbalanced Lorentz force} {(see \cite{mackay06a, mackay06b})}. This approach
has had great success in modelling the long term evolution of the global coronal field and in predicting the onset of coronal mass ejections.
To ensure that ${\nabla}{\cdot}{\bf{B}}{=}{0}$, we express the
magnetic field in terms of a vector magnetic potential, ${\bf A} = ( A_x(x,y), A_y(x,y), A(x,y))$, so that
\begin{equation}
{\bf{B}} = \nabla \times {\bf{A}} =\left (\frac{\partial A}{\partial y}, - \frac{\partial A}{\partial x}, \frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y}\right )\; .
\end{equation}
The equations to be solved are
\begin{flalign}
&{\bf v} = \lambda \frac{{\bf j} \times {\bf B}}{B^2},\label{eq:relaxv}&\\
&\frac{\partial {\bf A}}{\partial t}= {\bf v}\times{\bf B},\label{eq:relaxa}&
\end{flalign}
{where $\lambda=0.3$ is the magneto-frictional constant (see \cite{mackay06a,mackay06b} for details).}
The time evolution is not {physically realistic (it is best regarded as a function of the footpoint displacement)}, but it leads to an end state in which the magnetic field has relaxed to a force-free equilibrium,
with the $B_z$ imposed by
our shearing displacement.
{Hence the magnetic energy can be calculated for a given displacement, $D$.
However, since the velocity is not a realistic quantity the kinetic energy cannot be calculated. }
Once the relaxation process is complete and since the resulting equilibrium is independent of the coordinate $z$,
the $z$ component of ${\bf A}(x,y)$ is a flux function and the relaxed $z$ component of the magnetic field, $B_z = \partial A_y/\partial x - \partial A_x/\partial y$,
will be a function of the flux function $A(x,y)$, i.e. $B_z = B_z(A)$. The boundary conditions for the vector potential are
\begin{equation}
A_x(x, \pm L) = \mp B_0 D \sin (\pi x/l)\; \hbox{ and } A(x, \pm L) = - B_0 x\; .
\label{eq:potentialbc}
\end{equation}
Without loss of generality, the gauge function is chosen so that $A_y(x, \pm L) = 0$ and, once the field has relaxed, this implies that $A_y(x,y) = 0$.
We select a physical time, $t$, and use Equation (\ref{eq:displacement}) to
determine the maximum footpoint displacement, $D$. {Note that while solving the Grad-Shafranov equation, Equation (\ref{eq:gradshafranov}) below, for a final force-free equilibrium state, involves only $A$, the evolution
towards such an equilibrium, described by Equation (\ref{eq:relaxa}),} requires calculation of $A_x$ also.
Given a value of $D$, the magneto-frictional method of \cite{mackay06a,mackay06b}
determines the equilibrium force-free field. For illustration only, we choose $D = 3.0$ (equivalent to $t = 156$) so that $D$ is equal to the half-length $L$. The relaxed state for $B_y$ is shown in Figure \ref{fig:relax}. A more detailed comparison with the other methods is presented in Section \ref{sec:results}.
\begin{figure*}[ht]
\includegraphics[width=0.95\textwidth]{{by_relax_3.0_y}.png}
\caption{{Using the magneto-frictional relaxation method} $B_y/B_0$ is plotted as a function of $y$ for the loop axes $x=0$ (upper) and $x=l/2$ (lower). The horizontal scale is expanded at the two ends to illustrate the resolved boundary layers at $y = \pm L$
and compressed in the middle to demonstrate that there is no variation with $y$ there.}
\label{fig:relax}
\end{figure*}
There are two important points. Firstly, there are sharp boundary layers at the photospheric ends of the field, {which in this case have a width of $y/R=0.1$, equal to $l/L$. Different values of this ratio (0.05, 0.2 and 0.3) have been tested, and it is concluded that the
width of these boundary layers is controlled by the width-to-length ratio, $l/L$. This is also given by the linearised MHD method in Section \ref{subsec:linear}.}
This shows that $B_y$ rapidly changes from the imposed constant boundary
value of $B_0$ over a short distance that is comparable to the half-width, $l$. Hence, the derivatives with respect to $y$ of several variables, not just $B_y$, are large in the boundary layers.
The width of the boundary layer is \textit{not} dependent on the value of $D/L$ and we use this fact in the next {section when discussing the one-dimensional approach}.
Secondly,
in the middle of the loop, away from the boundaries, $B_y$ is almost independent of $y$, but it does vary with $x$ as $\cos(2kx)$ when $D/L$ is small.
Thus, although the dominant $y$ component of the field started out uniform,
when the footpoint displacement is comparable to the length $L$, the variations in $B_y$ are of the order of 10\%. Thus, the magneto-frictional method predicts what
will turn out to be a generic property of relaxed states.
\subsection{1D Equilibrium}\label{subsec:1D}
When $l/L\ll 1$ a simple estimate of the final equilibrium state is possible, even when the footpoint displacement, $D$, is larger than
the half-length $L$, {$D/L \ge 1$, by solving} the 1D form of the Grad-Shafranov equation. Following the approach of \cite{lothian89, browning89}
and \cite{mellor05}, we can use the fact that the 2D equilibrium can be expressed in terms of the flux function $A(x,y)$ that satisfies the Grad-Shafranov equation
\begin{equation}
\nabla^2 A + \frac{d}{dA}\left (\mu p(A) + \frac{1}{2}B_z^2(A)\right ) = 0.
\label{eq:gradshafranov}
\end{equation}
The pressure is a function of $A$ that is determined by the energy equation and $B_z$ is determined by the shearing introduced by the footpoint displacement.
For the shearing motions defined in Equations (\ref{eq:shear}) - (\ref{eq:displacement}),
the photospheric footpoint displacement is given by integrating {a fieldline from its initial position, $(x_0,y_0)$, to its final one at $(x,y)$. }
Hence, it is a function of the flux function and is given by
\begin{eqnarray}
D(A(x,L)) &&=\int_{y=0}^{y=+L} \left (\frac{B_z(A)}{B_y}\right )_{A=const} dy \nonumber\\
&&= B_z(A)\int_{y=0}^{y=+L}\left ( \frac{1}{-\partial A/\partial x}\right )_{A=const} dy.
\label{eq:dofA}
\end{eqnarray}
As shown in the above papers and from the magneto-frictional relaxation results, away from the boundaries
we can ignore the boundary layers and assume that the field lines are essentially straight over most of the loop. $l/L \ll 1$ is always assumed.
Away from the boundary layers $A$ is independent
of $y$ and this implies that the integrand is independent of $y$. Therefore, we can determine $B_z(A)$ in terms of the footpoint displacement. Following \cite{mellor05}, we have
\begin{equation}
B_z(A) = -\frac{d(A)}{L}\frac{dA}{dx}.
\label{eq:ByofA}
\end{equation}
For the shearing motion used above, we have on $y=L$, $d(A) = V_0 (t-t_1) \sin kx$, where $k = \pi/l$ and $A(x,L) = - B_0 x$. Hence, $d(A) = - V_0 (t - t_1)\sin (kx) = - D\sin (kA/B_0)$, where $D = V_0 (t - t_1)$ is the
maximum footpoint displacement.
The simple 1D approximation can be modified to include the gas pressure. Conservation of flux and mass between any two fieldlines implies that
\begin{equation}
\frac{B_y}{\rho} = \frac{B_0}{\rho_0}\; , \label{eq:Byrho}
\end{equation}
where $B_0$ and $\rho_0$ are the initial unsheared values.
Next, if the effect of viscous heating is small, the entropy remains constant between any two fieldlines so that
\begin{equation}
\frac{p}{\rho^\gamma} = \frac{p_0}{\rho_0^\gamma}\; .\label{eq:prho}
\end{equation}
Rearranging the last two equations gives the pressure in terms of $B_y$ as
\begin{equation}
p = \frac{p_0}{B_0^\gamma}B_y^\gamma = \frac{p_0}{B_0^\gamma}\left (-\frac{dA}{dx}\right )^\gamma\; ,
\end{equation}
where $-\partial A/\partial x > 0$.
Hence, the Grad-Shafranov equations reduces to a 1D pressure balance equation of the form
\begin{equation}
\frac{d}{dx}\left (B_y^2 + \left (\frac{D}{L}\right )^2 \sin^2(kA/B_0) B_y^2 + 2 p\right ) = 0\; .
\label{eq:1Deq}
\end{equation}
This implies that the total pressure is constant away from the boundary layers and there is no magnetic tension force.
Computationally, it is easier to express all variables in terms of the flux function, $A$, and solve
\begin{eqnarray}
&&\frac{d^2A}{dx^2}\left (1 + \left (\frac{D}{L}\right )^2 \sin^2 (kA/B_0) +\frac{ \gamma p_0}{B_0^\gamma} \left (- \frac{dA}{dx}\right )^{\gamma -2} \right ) \nonumber\\
&&= - \frac{k}{2B_0} \left (\frac{D}{L}\right)^2 \sin (2kA/B_0)\left (\frac{dA}{dx}\right )^2\; ,
\label{eq:1D1}
\end{eqnarray}
subject to $A(\pm l) = \mp B_0 l$. The value of the constant total pressure is determined as part of the solution. As shown in Section \ref{sec:results}, this approach provides
an excellent approximation to the full MHD results for both small and large values of $D/L$.
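Equation (\ref{eq:1Deq}) provides a first integral that makes the numerical solution straightforward: for a trial total pressure $C$, invert $B_y^2\left(1+(D/L)^2\sin^2(kA/B_0)\right)+2p_0(B_y/B_0)^\gamma=C$ for $B_y(A)$ pointwise, then adjust $C$ by bisection until $\int \mathrm{d}A/B_y$ matches the box width $2l$. The following is a sketch of this procedure (the paper does not specify its numerical implementation, and $p_0 = 0.01$ is an illustrative value):

```python
import math

B0, l = 1.0, 0.3
k = math.pi / l
gamma, p0 = 5.0 / 3.0, 0.01

def By_of_A(A, C, shear):
    # Invert By^2 (1 + (D/L)^2 sin^2(kA/B0)) + 2 p0 (By/B0)^gamma = C for By > 0.
    f = 1.0 + shear ** 2 * math.sin(k * A / B0) ** 2
    lo, hi = 1.0e-9, math.sqrt(C / f) + 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if f * mid ** 2 + 2.0 * p0 * (mid / B0) ** gamma > C:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def width(C, shear, n=240):
    # Box width implied by C: trapezoidal integral of dx = dA/By over -B0 l <= A <= B0 l.
    h = 2.0 * B0 * l / n
    s = 0.5 / By_of_A(-B0 * l, C, shear) + 0.5 / By_of_A(B0 * l, C, shear)
    s += sum(1.0 / By_of_A(-B0 * l + i * h, C, shear) for i in range(1, n))
    return s * h

def total_pressure(shear):
    # Bisect on C until the implied width equals 2l (the width decreases as C grows).
    lo, hi = 2.0 * p0 + 1.0e-9, 10.0 * (1.0 + shear ** 2)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if width(mid, shear) > 2.0 * l:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

C0 = total_pressure(0.0)                                      # no shear: expect C = B0^2 + 2 p0
By_mid = By_of_A(0.5 * B0 * l, total_pressure(10.0), 10.0)    # strong shear, at sin^2 = 1
```

For $D/L = 10$, the value of $B_y$ at the field-line where $\sin^2(kA/B_0)=1$ is close to the large-shear limit $2B_0/\pi \approx 0.64$ derived below.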
We can now investigate analytic solutions to Equation (\ref{eq:1D1}) in the extreme cases of small and large $D/L$.
For small shear, $D/L \ll 1$, the solution to Equation (\ref{eq:1D1}) is
\begin{eqnarray}
A &=& - B_0 \left ( x + \left (\frac{D}{L}\right )^2 \frac{\sin (2kx)}{8 k (1 + c_s^2/V_A^2)} \right ) + O\left(\frac{D^4}{L^4}\right )\; ,\\
B_y &=& B_0 \left ( 1 + \left (\frac{D}{L}\right )^2 \frac{\cos (2kx)}{4 (1 + c_s^2/V_A^2)} \right ) + O\left(\frac{D^4}{L^4}\right )\; .
\end{eqnarray}
Hence, the correction to $B_y$ is small (of order $(D/L)^2$).
For large shear, $D/L \gg 1$, Equation (\ref{eq:1Deq}) is dominated by the middle term, away from $x=0$ and $x=\pm l$. In this case,
\begin{eqnarray}
&&A = -\frac{B_0}{k} \cos^{-1}\left (1 - \frac{2 |x|}{l}\right )\; , \quad B_z = B_0\frac{D}{L} \frac{2}{\pi}\frac{\sin (kA/B_0)}{|\sin(kA/B_0)|} \qquad
\nonumber\\
&& \hbox{and}\qquad B_y =B_0 \frac{2}{\pi}\frac{1}{|\sin(kA/B_0)|}\; .
\end{eqnarray}
$B_z$ has the form of a square wave with value $B_0 (2/\pi) (D/L)$. The minimum value of $B_y$ is $B_0 (2/\pi)$. The variation of the axial field with $x$ is discussed
along with the other approaches in Section \ref{sec:results}.
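The small-shear solution above can be checked numerically against the first integral of Equation (\ref{eq:1Deq}): substituting it into the total pressure, the $O(D^2/L^2)$ variations of the three terms cancel, leaving only a higher-order residual. A sketch of this check (with illustrative values $D/L = 0.1$ and $p_0 = 0.01$):

```python
import math

B0, l = 1.0, 0.3
k = math.pi / l
gamma, p0 = 5.0 / 3.0, 0.01
DoL = 0.1                                 # D/L, small shear
cs2_va2 = gamma * p0 / B0 ** 2            # c_s^2 / V_A^2

def total_pressure(x):
    # Small-shear expressions for A(x) and By(x), then the total pressure of (eq:1Deq).
    A = -B0 * (x + DoL ** 2 * math.sin(2 * k * x) / (8 * k * (1 + cs2_va2)))
    By = B0 * (1 + DoL ** 2 * math.cos(2 * k * x) / (4 * (1 + cs2_va2)))
    p = p0 * (By / B0) ** gamma
    return By ** 2 * (1 + DoL ** 2 * math.sin(k * A / B0) ** 2) + 2 * p

P = [total_pressure(-l + 2 * l * i / 200) for i in range(201)]
# Each term varies at O((D/L)^2) ~ 5e-3, but the sum varies only at O((D/L)^4).
spread = max(P) - min(P)
```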
\subsection{Time-dependent MHD: 1 Linear and weakly non-linear expansions}\label{subsec:linear}
A simple way to understand some of the properties of the solutions determined above is to linearise the MHD equations about the initial equilibrium state.
We assume that the uniform background magnetic field dominates and we consider small perturbations to this state.
The expansion is for the case $B_\perp \ll B_0$, which we expect to be valid when $D/L \ll 1$ and which will be checked a posteriori.
Thus, we set the form of the expansion as
\begin{eqnarray}
{\bf B} &=& B_0 \hat{\bf y} + B_{1z}(y,t)\sin kx \hat{\bf z} + \left ( B_{2x}(x,y,t) \hat{\bf x} + B_{2y} (x,y,t)\hat{\bf y}\right )\nonumber \\ &+& \cdots ,\label{eq:Bexp}\\
{\bf v} &=& V_{1z}(y,t)\sin kx \hat{\bf z} + \left ( V_{2x}(x,y,t) \hat{\bf x} + V_{2y}(x,y,t) \hat{\bf y}\right ) + \cdots ,\label{eq:vexp}\\
p &=& p_0 + p_2(x,y,t) + \cdots ,\label{eq:pexp}\\
\rho &=& \rho_0 + \rho_2(x,y,t) + \cdots \; , \label{eq:rhoexp}
\end{eqnarray}
where $B_0$, $p_0$ and $\rho_0$ are the constant initial state quantities.
The subscript \lq 1\rq\ denotes first order terms. Since, {in general, incompressible shearing motions initially only produce Alfv\'en waves,} there is no first order variation in $\rho$
and $p$. The subscript \lq 2\rq\ indicates terms that are second order in magnitude and driven by products of the first order terms, {and are thus weakly non-linear}. The higher order corrections
to the Alfv\'en wave terms will come in at third order.
The expansions break down if the magnitude of the second order terms become as large as the first order terms or if the first order terms are as large
as the background values. Then, full non-linear MHD must be used.
The MHD equations can now be expanded. To first order, we have the damped Alfv\'en wave equation
\begin{flalign}
&\rho_0 \frac{\partial V_{1z}}{\partial t} = B_0 \frac{\partial B_{1z}}{\partial y} + \rho_0 \nu \nabla^2 V_{1z}, \label{eq:motioneps} &\\
&\frac{\partial B_{1z}}{\partial t} = B_0 \frac{\partial V_{1z}}{\partial y}. \label{eq:inductioneps} &
\end{flalign}
The second order, {weakly non-linear}, equations are
\begin{flalign}
&\rho_0 \frac{\partial v_{2x}}{\partial t} = - \frac{\partial}{\partial x} \left ( p_2 + B_0 B_{2y} + \frac{1}{2}B_{1z}^2 \sin^2 kx\right ) +
B_0\frac{\partial B_{2x}}{\partial y} \nonumber &\\
& + \rho_0 \nu \left(\frac{\partial^2 v_{2x}}{\partial x^2}+\frac{\partial^2 v_{2x}}{\partial y^2}
+\frac{1}{3}\frac{\partial}{\partial x}\left ( \frac{\partial v_{2x}}{\partial x}+ \frac{\partial v_{2y}}{\partial y}\right )\right),\label{eq:motionxeps2}&\\
&\rho_0 \frac{\partial v_{2y}}{\partial t} = - \frac{\partial}{\partial y} \left ( p_2 + \frac{1}{2}B_{1z}^2 \sin^2 kx\right ) \nonumber&\\
& + \rho_0 \nu \left( \frac{\partial^2 v_{2y}}{\partial x^2}+\frac{\partial^2 v_{2y}}{\partial y^2}+\frac{1}{3}\frac{\partial}{\partial y}\left ( \frac{\partial v_{2x}}{\partial x}+ \frac{\partial v_{2y}}{\partial y}\right )\right), \label{eq:motionzeps2}&\\
&\frac{\partial \rho_{2}}{\partial t} = - \rho_{0}\left ( \frac{\partial v_{2x}}{\partial x}+ \frac{\partial v_{2y}}{\partial y}\right ),\label{eq:continuityeps2}&\\
&\frac{\partial B_{2x}}{\partial t} = B_0 \frac{\partial v_{2x}}{\partial y}, \label{eq:inductioneps2x}&\\
&\frac{\partial B_{2y}}{\partial t} = - B_0 \frac{\partial v_{2x}}{\partial x}, \label{eq:inductioneps2y}&\\
&\frac{\partial p_2}{\partial t} = - \gamma p_0 \left (\frac{\partial v_{2x}}{\partial x} + \frac{\partial v_{2y}}{\partial y}\right ) \nonumber &\\
& + (\gamma -1)\rho_0 \nu \left (k^2 V_{1z}^2 \cos^2 kx
+\left (\frac{\partial V_{1z}}{\partial y} \right)^2\sin^2 kx \right ).\label{eq:energyeps2}
\end{flalign}
In Equations (\ref{eq:motionxeps2}), (\ref{eq:motionzeps2}) and (\ref{eq:energyeps2}), the linear Alfv\'en wave terms appear as quadratic sources for the second order terms.
\subsubsection{First order solution}
Once the shearing motion starts, an Alfv\'en wave is excited. However, the small viscosity damps this wave and a steady state is reached. To illustrate the ideas for a rapid switch-on (small $\tau_0$),
the solutions to Equations (\ref{eq:motioneps}) and (\ref{eq:inductioneps}) are given by a steady-state solution and a Fourier series
representation of a damped standing Alfv\'en wave. The steady-state solution is given by
\begin{displaymath}
V_{1z }= \left \{
\begin{array}{l l}
0\; , & t < t_1\; ,\\
\frac{V_0y}{L} \sin kx\; , & t_1 < t\; ,
\end{array}\right.\
\end{displaymath}
and
\begin{equation}
B_{1z} = \left \{
\begin{array}{l l}
0\; , & t < t_1\; , \\
B_0\left (\frac{V_0 (t - t_1)}{L} + \frac{\nu k^2L V_0}{2 V_{A}^2}\left (\frac{y^2}{L^2} - 1\right )\right )\sin kx\; , & t_1 < t \; ,
\end{array}\right. \label{eq:BzSteady}
\end{equation}
as can be seen by direct substitution into Equations (\ref{eq:motioneps}) and (\ref{eq:inductioneps}).
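This substitution can also be checked numerically with finite differences: in the steady state the right-hand side of Equation (\ref{eq:motioneps}) vanishes, while Equation (\ref{eq:inductioneps}) gives $\partial B_{1z}/\partial t = B_0 V_0 \sin(kx)/L$. A sketch (illustrative parameter values; $t$ below is measured from $t_1$):

```python
import math

B0, rho0, nu = 1.0, 1.0, 1.0e-3
V0, L, l = 0.02, 3.0, 0.3
k = math.pi / l
VA2 = B0 ** 2 / rho0

def V1z(x, y):
    return V0 * y / L * math.sin(k * x)

def B1z(x, y, t):
    return B0 * (V0 * t / L
                 + nu * k ** 2 * L * V0 / (2.0 * VA2) * (y ** 2 / L ** 2 - 1.0)) * math.sin(k * x)

h = 1.0e-4  # finite-difference step

def momentum_residual(x, y, t):
    # B0 dB1z/dy + rho0 nu Laplacian(V1z) = rho0 dV1z/dt = 0 in the steady state.
    dB_dy = (B1z(x, y + h, t) - B1z(x, y - h, t)) / (2.0 * h)
    lap_V = ((V1z(x + h, y) - 2.0 * V1z(x, y) + V1z(x - h, y)) / h ** 2
             + (V1z(x, y + h) - 2.0 * V1z(x, y) + V1z(x, y - h)) / h ** 2)
    return B0 * dB_dy + rho0 * nu * lap_V

def induction_residual(x, y):
    # B0 dV1z/dy should equal the secular growth rate B0 V0 sin(kx)/L of B1z.
    dV_dy = (V1z(x, y + h) - V1z(x, y - h)) / (2.0 * h)
    return B0 * dV_dy - B0 * V0 * math.sin(k * x) / L

pts = [(-0.25, -2.5), (-0.1, -1.0), (0.05, 0.3), (0.2, 2.0)]
r1 = max(abs(momentum_residual(x, y, 10.0)) for x, y in pts)
r2 = max(abs(induction_residual(x, y)) for x, y in pts)
```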
While the solution for $V_{1z}$
remains valid for all time, the solution for $B_{1z}$ will be modified once the non-linearities develop. From the maximum
values of our 1D method and Equation (\ref{eq:BzSteady}), we expect the maximum value of $B_z$, denoted $B_{max}$, to lie between
\begin{equation}
B_0\frac{2}{\pi}\frac{D}{L} \le B_{max} \le B_0 \frac{D}{L}\; .
\end{equation}
In addition, there are large currents near the photospheric boundaries and numerical resistivity results in field-line slippage (see \cite{bowness13}
and their Equation (24) and Figure 1).
A damped standing wave is required to satisfy the initial conditions, at $t = t_1$, that $V_{1z }= B_{1z} = 0$ for all $y$. The solution for $V_{1z}$ is of the form
\begin{displaymath}
\left ( \frac{V_0y}{L} + \sum_{n=1}^{\infty} \alpha_n \sin (n\pi y/L) e^{i\omega (t-t_1)}\right ) \sin kx\; ,
\end{displaymath}
where $\omega$ satisfies the appropriate dispersion relation.
Due to viscosity, $\omega$ is complex and the Fourier series terms decay to zero for large time leaving the steady state solution for $V_{1z}$.
The final steady state for $B_{1z}$ is given in Equation (\ref{eq:BzSteady}). The first term
depends on the footpoint displacement, $D=V_0 (t - t_1)$.
The only restrictions
on the maximum speed of the shearing motion of the footpoints are given above {in Section \ref{sec:expdesc}}, namely $V_0$ is greater than the diffusion speed and smaller than the sound and Alfv\'en speeds.
However, the driving time must be longer than the viscosity damping time
in order to reach a genuine steady state solution. The second term in Equation
(\ref{eq:BzSteady}) is due to viscosity and is independent of time. In a viscous fluid, if the ends
of the magnetic field are being moved at a speed $V_0$, the central part will lag behind. Hence, $B_{z}$ is smaller in magnitude at $y=0$. This
term will decay after the driving has stopped. This term does, however, produce a gradient in the $y$ direction of
the magnetic pressure associated with $B_z$ and, although small, it will contribute to a steady flow along the $y$ direction. This is
discussed later.
Using the first order solution, we can calculate the leading order integrated kinetic energy per unit width as a function of time. It is given by
\begin{equation}
\int_{x=-l}^{l} \int_{y=-L}^{L} \frac{1}{2} \rho_0 V_{1z}^2 dy dx = \frac{1}{3} \rho_0 V_0^2 l L\; .
\label{eq:keintegrated}
\end{equation}
This will be used when interpreting the full MHD, numerical solutions below.
The leading order change to the integrated magnetic energy, however, requires knowledge of second order variables and is discussed below.
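As a check on Equation (\ref{eq:keintegrated}), the integral of $\frac{1}{2}\rho_0 V_{1z}^2$ can be evaluated numerically for the steady-state profile $V_{1z}=(V_0 y/L)\sin kx$. The short Python sketch below is illustrative only (it is not the paper's code); the parameter values match the simulations described later, and $k=\pi/2l$ is an assumed wavenumber for which $\int_{-l}^{l}\sin^2 kx\,dx = l$.

```python
import math

# Parameter values from the simulations in the Results section;
# k = pi/(2l) is an assumed wavenumber (makes the sin^2 integral exact).
rho0, V0, l, L = 1.0, 0.02, 0.3, 3.0
k = math.pi / (2.0 * l)

def V1z(x, y):
    # Steady-state first-order velocity: V1z = (V0 y / L) sin(kx)
    return V0 * y / L * math.sin(k * x)

# Composite midpoint rule over the box [-l, l] x [-L, L]
nx, ny = 400, 400
dx, dy = 2 * l / nx, 2 * L / ny
ke = sum(0.5 * rho0 * V1z(-l + (i + 0.5) * dx, -L + (j + 0.5) * dy) ** 2 * dx * dy
         for i in range(nx) for j in range(ny))

exact = rho0 * V0 ** 2 * l * L / 3.0   # Equation (keintegrated)
print(ke, exact)
```

The two values agree to the accuracy of the quadrature, supporting the analytic result.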
\subsubsection{Second order solutions}
Now that the first order steady state solutions are known, the second order equations can be calculated. The terms are complicated, although the calculations to generate them, while tedious, are straightforward.
The details are shown in the Appendix. The basic form of the solutions are given by
\begin{flalign}
&v_{2x}(x,y,t)=(B(y)(t-t_1)+C(y))\sin(2kx)\; ,&\\
&v_{2y}(x,y,t)=(F(y)(t-t_1)+E(y))\cos(2kx)+G(y)\; ,&\\
&B_{2x}(x,y,t)=B_0 \left (B^\prime(y)\frac{(t-t_1)^2}{2} + C^\prime(y) (t-t_1)\right ) \sin (2 k x)\; ,&\\
&B_{2y}(x,y,t)=- 2 k B_0 \left (B(y)\frac{(t-t_1)^2}{2} + C(y) (t-t_1)\right ) \cos(2 k x)\; ,&\\
&\frac{\rho_2(x,y,t)}{\rho_0}= - G^\prime(y) (t-t_1) &\\
& + \left ( [2 k B(y) + D^\prime(y)]\frac{(t-t_1)^2}{2} +(2 k C(y) + E^\prime(y))(t-t_1)\right )\cos(2 k x) \; ,\nonumber& \\
&p_2(x,y,t)= \frac{\gamma p_0}{\rho_0}\rho_2\nonumber&\\
& + (\gamma -1)\rho_0 \nu (t-t_1) \left (k^2 V_{1z}^2 \cos^2 kx
+\left (\frac{\partial V_{1z}}{\partial y} \right)^2\sin^2 kx \right )\; .&
\end{flalign}
Here $^{\prime}$ denotes a derivative with respect to $y$.
The functions $G(y)$, $B(y)$, $C(y)$, $F(y)$ and $E(y)$ are determined in the Appendix. A key point to note is that $v_{2y}$, when averaged over
$x$, has a variation in $y$, namely $G(y)$, where
\begin{equation}
G(y) = \frac{\nu k^2 V_0^2 (2 \gamma -1 )}{12 c_s^2} y \left ( \frac{y^2}{L^2} - 1\right )\; .
\label{eq:v2yA}
\end{equation}
Note that for a fixed value of the viscosity $\nu$, this term increases in magnitude if the initial pressure, $p_0$, is reduced. Because of $G(y)$, there is
a change to the density that is independent of $x$, namely
\begin{equation}
-\rho_0 G^\prime (y) (t-t_1) = \rho_0 \frac{\nu k^2 V_0^2(2\gamma-1)}{12 c_s^2}\left ( 1 - 3 \frac{y^2}{L^2}\right )(t-t_1)\; .
\label{eq:rhoeps2}
\end{equation}
Integrating $\rho_2(x,y,t)$ over $x$ and $y$,
we can show that mass is conserved. So the variations of $\rho$ from its uniform initial state are simply a redistribution of the mass through
not only the compression and expansion of the field (variations in $B_y$) but also the flow along fieldlines ($G(y)$). From Equation (\ref{eq:rhoeps2}), the magnitude of this term depends
on the ratio of two lengthscales and two velocities. Defining a diffusion length as $l_d = \sqrt{\nu (t - t_1)}$, the change in density depends on
\begin{equation}
\pi^2\left (\frac{ l_d}{l}\right )^2 \frac{V_0^2}{c_s^2}.
\end{equation}
As $l_d$ increases with time, $G(y)$ will eventually become important. In addition, it becomes more important for larger $V_0$ and/or lower sound speed, $c_s$.
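The profile in Equation (\ref{eq:rhoeps2}) follows from differentiating $G(y)$ in Equation (\ref{eq:v2yA}), and it integrates to zero over $-L\le y\le L$, consistent with the mass conservation noted below. A quick finite-difference sketch confirms both points; the parameter values here are illustrative assumptions, not taken from the simulations.

```python
import math

# Illustrative parameter values (assumptions, not from the simulations)
nu, k, V0, cs, L, gamma = 1e-3, math.pi / 0.6, 0.02, 1.0, 3.0, 5.0 / 3.0
coef = nu * k**2 * V0**2 * (2 * gamma - 1) / (12 * cs**2)

def G(y):
    # Equation (v2yA): G(y) = coef * y * (y^2/L^2 - 1)
    return coef * y * (y**2 / L**2 - 1.0)

def minus_Gprime(y):
    # Density profile of Equation (rhoeps2) per unit (t - t1):
    # -G'(y) = coef * (1 - 3 y^2 / L^2)
    return coef * (1.0 - 3.0 * y**2 / L**2)

# Central difference of -G'(y) matches the analytic profile
h = 1e-6
max_err = max(abs(-(G(y + h) - G(y - h)) / (2 * h) - minus_Gprime(y))
              for y in (-2.5, -1.0, 0.0, 0.7, 2.9))

# The x-independent density change integrates to zero over [-L, L],
# consistent with mass conservation.
n = 200000
dy = 2 * L / n
total = sum(minus_Gprime(-L + (i + 0.5) * dy) * dy for i in range(n))
print(max_err, total)   # both ~ 0
```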
\subsubsection{Second order solutions: Neglect viscosity}
The expressions for the second order terms are complicated and, for illustration, we simplify them by neglecting viscosity. Setting $\nu = 0$,
\begin{flalign}
&G(y)= C(y) = E(y) = 0\; ,&\\
&B(y) = \frac{\delta}{4k}\left (\frac{\cosh(2ky)}{\cosh(2kL)} - 1\right )\; , &\\
&F(y) = \frac{\delta}{4k}\left ( \tanh(2kL)\frac{y}{L} - \frac{\sinh(2ky)}{\cosh(2kL)}\right )\; , &\\
&\delta = \frac{V_0^2}{L^2}\frac{1}{1 + c_s^2/V_A^2 (1 - \tanh(2kL)/2kL)}\; .&
\end{flalign}
The nature of the boundary layers is clear from the terms, $\cosh(2 k y)/\cosh(2 kL)$ and $\sinh( 2ky)/\cosh(2kL)$, in $B(y)$ and $F(y)$. The width of the boundary layer is controlled by the magnitude
of $2kL$. Hence, the ratio of the half-width to half-length, $l/L$, is important for the size of the boundary layer, as mentioned in Section \ref{subsec:relax}.
Away from the boundary layers, namely for $2kL \gg 1$, $B(y)\approx -\delta/4k$, $F(y)\approx O(1/2kL)$ and $(1 + c_s^2/V_A^2)\delta \approx (V_0^2/L^2)$
and so the second order solutions can be expressed as
\begin{flalign}
&v_{2x}=-\frac{D}{L}\frac{V_0}{4 kL(1+c_s^2/V_A^2)}\sin(2kx)\; ,&\\
&v_{2y}= \frac{D}{L}\frac{V_0}{4 kL(1+c_s^2/V_A^2)} \frac{y}{L}\cos(2kx)\; ,&\\
&B_{2y}= \frac{D^2}{L^2}\frac{B_0\cos(2 k x)}{4 (1 + c_s^2/V_A^2)}\; , \quad B_{2x}= 0 \; ,\label{eq:B2y} &\\
&\rho_2= \rho_0 \frac{B_{2y}}{B_0}\; ,\quad p_2= c_s^2\rho_2 = \frac{c_s^2}{V_A^2}B_0 B_{2y}\; .\label{eq:rho2}&
\end{flalign}
Note that Equations (\ref{eq:B2y}) and (\ref{eq:rho2}) agree with the linearised forms of Equations (\ref{eq:Byrho}) and (\ref{eq:prho}) from the 1D equilibrium method. In addition, the second order total pressure,
$p_2 + B_{1z}^2/2 + B_0 B_{2y}$
is independent of $x$ and equals $(D^2/L^2)(B_0^2/4)$.
From the first order and second order
magnetic field components, Equations (\ref{eq:BzSteady})
and (\ref{eq:B2y}), the magnitudes of these terms are in powers of $D/L$, making this the appropriate expansion parameter. Hence, these solutions are only strictly
valid provided $D/L \ll 1$. When viscosity is included,
from Equation (\ref{eq:BzSteady}) the ordering of the terms remains the same provided $\nu < (2 V_A^2/k^2 L V_0)(D/L)$.
The leading order change in the integrated magnetic energy, including the viscosity terms, at second order is given by
\begin{flalign}
&\int_{x=-l}^{l} \int_{y=-L}^{L} \frac{1}{2} B_{1z}^2 dy dx \nonumber&\\
&= B_0^2 l L \left (\frac{V_0^2 (t - t_1)^2}{L^2} -
\frac{2}{3}\frac{k^2 \nu V_0^2}{V_A^2} (t - t_1) \right.
\left.+ \frac{2}{15}\frac{k^4 \nu^2V_0^2 L^2}{V_A^4}\right ),\; \nonumber&\\
& \approx B_0^2 l L \left (\frac{D^2}{L^2} -
\frac{2}{3}\frac{D}{L}\frac{k^2 \nu V_0 L}{V_A^2} + \frac{2}{15}\left (\frac{k^2 \nu V_0 L}{V_A^2}\right )^2\right ) \; , &
\label{eq:beintegrated}
\end{flalign}
since the contribution from $B_0 B_{2y}$ integrates to zero. For large $D/L$ or equivalently large time, the magnetic energy is proportional to $(D/L)^2$.
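The time-dependent expression in Equation (\ref{eq:beintegrated}) can likewise be checked by integrating $\frac{1}{2}B_{1z}^2$ numerically with the steady-state profile of Equation (\ref{eq:BzSteady}). In the sketch below the parameter values are illustrative, and $k=\pi/2l$ is again an assumption that makes the $x$ integral exact.

```python
import math

# Illustrative parameters (assumptions); tau = t - t1
B0, V0, l, L, VA2, nu, tau = 1.0, 0.02, 0.3, 3.0, 1.0, 1e-3, 194.0
k = math.pi / (2 * l)

def B1z(x, y):
    # Steady-state sheared field, Equation (BzSteady)
    return B0 * (V0 * tau / L
                 + nu * k**2 * L * V0 / (2 * VA2) * (y**2 / L**2 - 1)) \
           * math.sin(k * x)

nx, ny = 200, 400
dx, dy = 2 * l / nx, 2 * L / ny
energy = sum(0.5 * B1z(-l + (i + 0.5) * dx, -L + (j + 0.5) * dy) ** 2 * dx * dy
             for i in range(nx) for j in range(ny))

# Equation (beintegrated), term by term
a = V0**2 * tau**2 / L**2
b = (2.0 / 3.0) * k**2 * nu * V0**2 / VA2 * tau
c = (2.0 / 15.0) * k**4 * nu**2 * V0**2 * L**2 / VA2**2
exact = B0**2 * l * L * (a - b + c)
print(energy, exact)
```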
\subsection{Reduced MHD}\label{subsec:rmhd}
Using the Reduced MHD equations and notation quoted in \cite{rappazzo10, rappazzo13} and \cite{oughton17} and assuming that there are no variations in the $z$ direction,
we can express them as
\begin{flalign}
&\rho_0 \frac{\partial u_x}{\partial t} + \rho_0 u_x\frac{\partial u_x}{\partial x}=-\frac{\partial}{\partial x}\left (p + \frac{b_x^2}{2} + \frac{b_z^2}{2} \right ) +
B_0\frac{\partial b_x}{\partial y} \nonumber&\\
& + \rho_0\nu \frac{\partial^2 u_x}{\partial x^2},&\\
&\rho_0 \frac{\partial u_z}{\partial t} + \rho_0 u_x\frac{\partial u_z}{\partial x}= b_x\frac{\partial b_z}{\partial x} + B_0\frac{\partial b_z}{\partial y} +
\rho_0\nu \frac{\partial^2 u_z}{\partial x^2},&\\
&\frac{\partial b_x}{\partial t} + u_x\frac{\partial b_x}{\partial x}=b_x\frac{\partial u_x}{\partial x} + B_0\frac{\partial u_x}{\partial y},&\\
&\frac{\partial b_z}{\partial t} + u_x\frac{\partial b_z}{\partial x}=b_x\frac{\partial u_z}{\partial x} + B_0\frac{\partial u_z}{\partial y},&\\
&\frac{\partial u_x}{\partial x} = 0, \quad \quad \frac{\partial b_x}{\partial x} = 0.\label{eq:RMHDdiv}&
\end{flalign}
Here we have only included viscosity and, in keeping with the linearised MHD results presented above, we neglect resistivity. The only horizontal derivative included is with respect to $x$.
One consequence of the invariance in the $z$ direction is the prevention of the development of any tearing modes,
which may assist in the creation of short lengths in $z$. ${\bf{B}}_0 = B_0 \hat{\bf{y}}$ is the initial uniform magnetic field
and ${\bf b}$ is the magnetic field created by the boundary motions. \cite{rappazzo10} consider a very similar set-up to this paper.
\cite{oughton17} describe the three main assumptions required for the use of RMHD. These are: (i) the magnetic energy associated with $\bf{B}_0$ is much larger than the magnetic energy associated with $\bf{b}$;
(ii) the derivatives along $\bf{B}_0$ are much smaller than the perpendicular derivatives; and (iii) there are no parallel perturbations so
that $\bf{B}_0\cdot {\bf b} = 0$ and $\bf{B}_0\cdot {\bf v} = 0$.
Obviously, assumption (i) will fail before the footpoint displacement becomes comparable to the length, $L$, along the initial field. Assumption (ii)
will hold everywhere, except in the boundary
layers at the two photospheric ends of the field. \cite{scheper99} have outlined an asymptotic matching procedure to deal with boundary layers in RMHD. They do allow for a
variation in the dominant field component at second order in an expansion in powers of $l/L$. However, they do not allow for the propagation of the Alfv\'en waves produced by the shearing motions. In
fact, their equations are extremely similar to the magneto-frictional relaxation method described above.
Assumption (iii) will fail before the magnetic pressure variations due to the sheared magnetic field component, $ b_z$,
becomes comparable to $B_0$. Again, this is when the distance the footpoints are moved is the order of $L$. These assumptions are not used
by the methods described above. It is possible that RMHD may be inappropriate because the derivatives in the $z$ direction are in fact
smaller than the $y$ derivatives.
From Equation (\ref{eq:RMHDdiv}), the incompressible and solenoidal conditions
simply reduce to $u_x = 0 $ and $b_x = 0$ and not just that they are independent of $x$. The density is assumed to remain constant
and equal to its initial uniform value.
Using Equation (\ref{eq:RMHDdiv}), the above equations simplify to
\begin{flalign}
&0 =-\frac{\partial}{\partial x}\left (p + \frac{b_z^2}{2} \right )\; ,\label{eq:RMHDx}&\\
&\rho_0 \frac{\partial u_z}{\partial t} = B_0\frac{\partial b_z}{\partial y} + \rho_0 \nu \frac{\partial^2 u_z}{\partial x^2}\; ,\label{eq:RMHDAlfven1}&\\
&\frac{\partial b_z}{\partial t} = B_0\frac{\partial u_z}{\partial y}\; . \label{eq:RMHDAlfven2}&
\end{flalign}
Equations (\ref{eq:RMHDAlfven1}) and (\ref{eq:RMHDAlfven2}) are similar to Equations (\ref{eq:motioneps}) and (\ref{eq:inductioneps}) in linear MHD
and describe the propagation of damped Alfv\'en waves.
Once the Alfv\'en waves introduced by the shearing motions have damped, the field passes through
sequences of steady state solutions that are the same as described by the first order linear MHD solutions.
{In fact the first order linear MHD solutions are exact solutions of the RMHD equations.}
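This statement can be verified by direct substitution. A finite-difference sketch (illustrative parameter values, with $\rho_0=1$ so that $V_A^2=B_0^2/\rho_0$) checks that the steady-state first-order solutions satisfy Equations (\ref{eq:RMHDAlfven1}) and (\ref{eq:RMHDAlfven2}) at an arbitrary interior point.

```python
import math

# Illustrative parameters (assumptions); rho0 = 1, VA^2 = B0^2 / rho0
rho0, B0, V0, L, nu, tau = 1.0, 1.0, 0.02, 3.0, 1e-3, 100.0
VA2 = B0**2 / rho0
k = math.pi / 0.6
h = 1e-5

def uz(x, y):
    return V0 * y / L * math.sin(k * x)

def bz(x, y, t):
    return B0 * (V0 * t / L
                 + nu * k**2 * L * V0 / (2 * VA2) * (y**2 / L**2 - 1)) \
           * math.sin(k * x)

x, y = 0.11, 1.7   # arbitrary interior point

# d(bz)/dt = B0 d(uz)/dy  (Equation RMHDAlfven2)
lhs2 = (bz(x, y, tau + h) - bz(x, y, tau - h)) / (2 * h)
rhs2 = B0 * (uz(x, y + h) - uz(x, y - h)) / (2 * h)

# 0 = B0 d(bz)/dy + rho0 nu d2(uz)/dx2  (Equation RMHDAlfven1, steady state)
dbz_dy = (bz(x, y + h, tau) - bz(x, y - h, tau)) / (2 * h)
d2uz_dx2 = (uz(x + h, y) - 2 * uz(x, y) + uz(x - h, y)) / h**2
residual = B0 * dbz_dy + rho0 * nu * d2uz_dx2
print(lhs2 - rhs2, residual)   # both ~ 0
```

The magnetic tension from the $y$ variation of $b_z$ exactly balances the viscous force, as required for a steady state.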
From Equation (\ref{eq:RMHDx}), $p + b_z^2/2$ is constant in the horizontal direction, $x$.{ However, this total pressure is only constant in space and will still} depend on time, as in the 1D method
presented above.
Hence, the gas pressure must
balance the $x$ variations in $b_z^2/2$. Such a large gas pressure may not be compatible with a low $\beta_0$ plasma.
The 1D approach and
second order solutions, discussed above, include both the gas pressure and the magnetic pressure due to the
modification of $B_y$, namely $B_0 + b_y$. This is a second order change to the uniform magnetic field.
Thus, assumption (iii), that the axial field does not change, must be dropped when the footpoint displacement is sufficiently large.
Instead, it is the total pressure to second order that is constant in $x$, namely
\begin{displaymath}
p + \frac{B_0^2}{2} + B_0 b_y + \frac{b_z^2}{2} = {C}(t)\; .
\end{displaymath}
The constant $C(t)$ must be derived from conservation of flux through the mid-plane. \cite{rappazzo10} do not include $b_y$, so that the plasma
forces and evolution depend on the gas pressure gradients rather than on the current due to variations in $b_{y}$. {In some of their cases, however, $b_z$ is very small compared to our values.}
Because $b_y$ is no longer constant, there is compression and expansion. Hence, mass conservation implies that the density
must also change. In a low $\beta_0$ plasma, this is similar to our 1D solution. However,
in the 1D approach, the shear component, $b_z$, is determined by linking the boundary conditions and the footpoint
displacement, through the boundary layers via the flux function $A$.
There is no mention of this in most reduced MHD papers, presumably due to
assumption (ii) that all $y$ derivatives are small compared to the horizontal derivatives. Yet, we know from the relaxation method and,
as will be shown in the full MHD results below, that there can be boundary layers where the $y$ and $x$ derivatives are comparable.
The solution for $u_z$ is constant in time and has a linear profile between the driving velocity on the lower boundary and the upper boundary. The solution for $b_z$
has two parts to it. The first part is the linear increase in time of the shearing field component while the second part is due to the viscosity term. This is in agreement
with the linearised, first-order solution.
{In summary, care needs to be taken in relating quantities on the boundary to quantities away from
the boundary layers. Many quantities are not the same away from the boundary as they are on the boundary due to the expansion and
contraction of the magnetic fieldlines.} Hence, it is important when using RMHD, particularly for
simulations in which the boundary footpoints have moved a significant distance in comparison to the length of the field, to check that
the assumptions in \cite{oughton17} and listed above are indeed satisfied.
\section{Results}\label{sec:results}
Now we briefly summarise each method and clearly distinguish between the many related parameters, $p_0$, $\beta_0$, $e_{0}$, $D$ and $t$, before comparing the results.
For full MHD,
we solve Equations (\ref{eq:motion}) - (\ref{eq:energy}) using the MHD code, Lare2D (see \cite{arber01}), in 2D ($\partial /\partial z = 0$)
for the system described in Section \ref{sec:expdesc} with the driven boundary condition in equations (\ref{eq:shear}) and (\ref{eq:shearvy}).
The width and length of the loop are $l = 0.3$, $L=3$.
The photospheric driving speed $V_0 = 0.02$ and the switch on time $t_1 = 6$.
Viscosity and resistivity are
$\nu = 10^{-3}$
and $\eta=0$.
{The driving velocity satisfies Equation (\ref{eq:vineq}) so the magnetic
field should pass through a sequence of equilibria.
This choice means that $V_0$ is slower than the Alfv\'en speed and sound speed, when neglecting slow waves and shocks, but faster than any diffusion speed, as discussed in Section \ref{sec:expdesc}. }
{We have done four simulations, each with a different value of $\beta_0$ or, equivalently, of $p_0$ or $e_{0}$. In order to distinguish these related quantities, their values are shown in table \ref{tab1}. In the following, simulation 1 is referred to as high $\beta_0$ and simulation 3 as low $\beta_0$, unless otherwise stated. This choice has been made for the majority of the results, since the other two simulations are qualitatively the same and agree with our understanding in relation to their initial conditions. }
\begin{table}
\caption{The initial internal energy, $e_{0}$ and $\beta_0$ for our four full MHD simulations.}\label{tab1}
\begin{tabular}{m{1.5cm}m{1.5cm}m{1.75cm}}
\hline Simulation & $\beta_0=2p_0$ & $e_{0}=3p_0/2$ \\
\hline 1&$4/3$ & 1.0 \\
\hline 2& $4/30$& 0.1 \\
\hline 3& $4/300$&0.01\\
\hline 4& $4/3000$&0.001\\
\hline
\end{tabular}
\end{table}
{The maximum displacement, $D$, is related to time, $t$, by Equation (\ref{eq:displacement})
\begin{equation}
D=V_0(t-t_1)\label{eq:D}.
\end{equation}
We choose various times (or equivalently footpoint displacements using Equation (\ref{eq:D}))
but the times chosen must still be long enough that fast waves
have propagated and equalised the total pressure across the field lines. We present results for cases where the footpoint displacement, $D$, is
both smaller and larger than $L$, such that $0.29\lesssim D/L\lesssim 2.63$.
The Lare2D results are taken to be the ``exact'' solutions.}
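For reference, the conversion between the output times quoted below and the footpoint displacement follows directly from Equation (\ref{eq:D}) with the simulation values $V_0=0.02$, $t_1=6$ and $L=3$:

```python
V0, t1, L = 0.02, 6.0, 3.0
for t in (50, 100, 200, 400):
    D = V0 * (t - t1)          # Equation (D)
    print(t, round(D, 2), round(D / L, 2))
# t=50 -> D=0.88, D/L~0.29;  t=100 -> D=1.88, D/L~0.63;
# t=200 -> D=3.88, D/L~1.29; t=400 -> D=7.88, D/L~2.63
```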
\paragraph{Relaxation:}
\begin{itemize}
\item {As described in Section \ref{subsec:relax}, Equations (\ref{eq:relaxv}) and (\ref{eq:relaxa}) are solved
to evolve the vector potential $\bf{A}$ from an initial state perturbed by the footpoint displacement on the boundaries to a force-free equilibrium.}
\item {Since the time evolution of this method is not physical, only the magnetic field components of the final state can be compared; hence there are no quantities as functions of time, such as the kinetic energy.}
\item { The perturbation, Equation (\ref{eq:potentialbc}), is determined by the maximum displacement, $D$.}
\end{itemize}
\paragraph{1D Equilibrium Approach:}
\begin{itemize}
\item {The 1D equilibrium approach, described in Section \ref{subsec:1D}, involves solving Equation (\ref{eq:1D1}) for the flux function $A(x,y)$. }
\item {Equation (\ref{eq:1D1}) is determined by the maximum displacement, $D$, and initial pressure, $p_0$. }
\item {This approach gives results for $B_y$, $B_z$, $p$, $j_y$, $j_z$ and $\rho$ as functions of $x$.}
\end{itemize}
\paragraph{Linearisation:}
\begin{itemize}
\item {The first and second order equations and their analytic solution of each variable are described in detail in Section \ref{subsec:linear} and in the Appendix.}
\item {These expressions are dependent on time, $t$, and the initial pressure, $p_0$.}
\item {The solution for each variable consists of the linear and second order terms in order to take into account weakly non-linear effects. These results from linearisation are denoted by ``linear'' in the results section.}
\end{itemize}
\paragraph{RMHD:}
\begin{itemize}
\item {As discussed {in Section \ref{subsec:rmhd}} RMHD is not applicable to this problem
but it does agree with the first order terms in linear MHD. }
\item {The first order linear terms are an exact solution to the RMHD equations, Equations (\ref{eq:RMHDAlfven1}) and (\ref{eq:RMHDAlfven2}).}
\end{itemize}
\subsection{Comparison with Lare2D Results}\label{subsec:Lare2Dresults}
We compare all the methods, apart from Reduced MHD, with the full MHD results from Lare2D for the quantities: $B_z$, $B_y$, kinetic and magnetic energy, $\rho$ and $j_y$.
\subsubsection{Comparison of $B_z$}\label{subsubsec:Bz}
Firstly we consider the magnetic field component, $B_z$, introduced by the shearing motion. Figure \ref{fig:BzCompare} shows how $B_z$ varies with the horizontal
coordinate, $x$, at the mid-line at $y=0$ (left) and its variation in $y$ at $x=-l/2$ (right) at $t=50$, corresponding to $D/L\approx 0.29$ using Equation (\ref{eq:D}). This is for simulation 2 in table \ref{tab1}, which has a reasonably small plasma $\beta_0$ and
the resulting magnetic field will be approximately force-free.
All of the approximations are shown in Figure
\ref{fig:BzCompare}. In fact, the agreement of the $x$ dependence (left part of Figure \ref{fig:BzCompare}) between the methods is remarkably good. This is surprising since
$D/L = 0.29$ is not particularly small. Hence, one would expect the non-linear terms to be important and the first and second order linear MHD to fail.
All the methods give good agreement with Lare2D for this value of the plasma $\beta_0$. In the right part of Figure \ref{fig:BzCompare}, the variation with $y$ is shown at $x = -l/2$. As predicted by the linearised MHD expressions
above, there is a slight variation of $B_{z}$ with $y$ that agrees with the Lare2D results. However, the linear results do not include the slight slippage of $B_{z}$
at the photospheric boundaries due to the
strong boundary layer currents, and so the two curves are slightly displaced. This $y$ variation is not predicted by the 1D and relaxation methods, either
because they do not include viscosity or because it enters in a different form.
\begin{figure*}[ht]
\includegraphics[width=0.45\textwidth]{{{bz_0.9_0.1_x}.png}}
\includegraphics[width=0.45\textwidth]{{{bz_0.9_0.1_y}.png}}
\caption{The sheared magnetic field $B_z$ as a function of $x$ at a midpoint in $y$ (left) and as a function of $y$ for $x=-0.15$ (right) for $\beta_0$ of 4/30 at $t=50$.
The footpoint displacement is $D/L \approx 0.29$. Solid black curve is for the Lare2D results, triple dot-dashed blue for the relaxation method, dot-dashed green for 1D approximation and
dashed turquoise for linearised MHD results.}
\label{fig:BzCompare}
\end{figure*}
When the footpoint displacement is larger than $L$, the shape of the $B_z$ profile changes due to non-linear effects and it takes on an almost square wave structure. This is
shown in Figure~\ref{fig:BzCompare1} for $D/L \approx 3.9/3.0 = 1.3$ ($t = 200$). The large gradients near $x=0$
correspond to an enhanced current component, $j_y$, there (shown in figure \ref{fig:currenty} and Section \ref{subsubsec:current}). The left figure is for high $\beta_0$
and, for such a large plasma $\beta_0$, the relaxation method has a slight difference {compared to the Lare2D results}. However, this discrepancy is not present in the right figure which is for
low $\beta_0$. In both figures, the linear
approximation is still remarkably good, while the 1D approximation and relaxation sit on top of the Lare2D results. The maximum value of $B_z$ is now about
unity for both values of the initial internal energy and so it is definitely comparable in
magnitude to the initial background field strength. The RMHD results are not included but they are the same as the linear MHD results.
\begin{figure*}[ht]
\includegraphics[width=0.45\textwidth]{{{bz_3.9_1.0_x}.png}}
\includegraphics[width=0.45\textwidth]{{{bz_3.9_0.01_x}.png}}
\caption{Plots of $B_z$ against $x$ at the mid-line $y=0$ for each method. The time $t= 200$ and the footpoint displacement is $D\approx 3.9$.
Left: $\beta_0$ of 4/3 and right: $\beta_0$ of 4/300. }
\label{fig:BzCompare1}
\end{figure*}
\subsubsection{Comparison of $B_y$}\label{subsubsec:By}
$B_y$ is initially the only magnetic field component.
Figure \ref{fig:ByCompare} shows $B_y$ as a function of $x$ at the mid-line at $y=0$ for $D/L \approx 0.63$ ($t=100$) in the top row
and $D/L \approx 1.3$ ($t=200$) in the bottom row corresponding to high $\beta_0$ in the left column and low $\beta_0$ in the right column. The other
parameters are the same as above.
{For the Lare2D results with small $D/L\approx 0.63$ the maximum value of $B_y$ is about 5\% larger than the initial value for high $\beta_0$ and 10\% for low $\beta_0$, where non-linear effects are becoming important.
Hence, for footpoint displacements smaller than the loop length the variations in $B_y$
are not too significant. For the case of large $D/L\approx 1.3$ the maximum of $B_y$ is about 20\% larger for high $\beta_0$ and 30\% for low $\beta_0$. It can be concluded that for large values of $D$ any assumption that the horizontal variations in the background field are small is not valid.}
{For the large plasma $\beta_0$ case,
(left column), only the relaxation results are significantly different from the others for both small and large $D/L$, as expected, since this method assumes the field is force-free.
As for $B_z$, in the low $\beta_0$ regime in the right column, the relaxation method agrees with both Lare2D and the 1D approach regardless of the value of the footpoint displacement, $D$. Interestingly, the
approximation for $B_z$, the shear component, is consistently better than the $B_y$ component, whereas one may expect the same accuracy for both components.}
{The Lare2D and 1D approach both agree with each other extremely well for $4/3000 < \beta_0 < 4/3$ and for $D/L < 2.6$, the largest value
tested.}
{The first and second order linearised MHD agrees reasonably well with Lare2D for small $D/L\approx 0.63$ and high $\beta_0$. For low $\beta_0$ the linear MHD results show a more noticeable discrepancy for small displacement.
For large footpoint displacements, $D/L \approx 1.3$ ($t=200$) in the bottom row, the second order linearised MHD results predict a minimum value of $B_y$ that is too small by
about 10\% for high $\beta_0$ and 25\% for low $\beta_0$ as the non-linear terms become more important. }
{ For Reduced MHD, this component is assumed to remain unchanged during the shearing motion. However, we have shown in the other methods this is not the case and variations become significant after a short time.}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.45\textwidth]{{{by_1.9_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{by_1.9_0.01}.png}}
\includegraphics[width=0.45\textwidth]{{{by_3.9_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{by_3.9_0.01}.png}}
\caption{Plots of $B_y$ against $x$ in the midpoint in $y$ for $\beta_0=4/3$ (left column) and 4/300 (right column) for
$D\approx 1.9$ at $t=100$ (top row)
and $D\approx 3.9$ at $t=200$ (bottom row).}
\label{fig:ByCompare}
\end{figure*}
\subsubsection{Comparison of Integrated Energies}\label{subsubsec:kinetic}
The integrated magnetic energy is shown in Figure \ref{fig:Benergy} as a function of time for high plasma $\beta_0$ ($\beta_0 = 4/3$) and low $\beta_0$
($\beta_0 = 4/300$). The Poynting flux associated
with the shearing motion results in the magnetic energy increasing nearly quadratically in time for both values of $\beta_0$.
{The relaxation approach does not directly give quantities as functions of time. In order to calculate and compare the magnetic energy, the magnetic field needs to be relaxed for every value of the displacement. This is limited by computational resources, so the magnetic energy is only calculated for a few values of $D$, shown as symbols in Figure \ref{fig:Benergy}. These data points agree well with the Lare2D results. As noted for the other quantities, there is a marginal discrepancy for high $\beta_0$ which is not present for low $\beta_0$.}
It is interesting to note that the 1D approach correctly matches the results from
Lare2D for all times, even when the footpoint displacement is
larger than the half-length, $L$, for example, at $t=400$, $D/L \approx 2.6$ using Equation (\ref{eq:D}).
The analytical estimate from the linearised MHD equations,
given in Equation (\ref{eq:beintegrated}),
shows very good agreement up to $t=200$, $D/L\approx 1.3$ when the footpoint displacement is about equal to the loop length and is only in error by 10\% at $t=400$, $D/L\approx 2.6$.
Thus comparing with Lare2D, we can conclude that the slow magnetic field evolution is correctly modelled by the relaxation and 1D approach
for all times, provided the width to length ratio, $l/L$, is small, and by the linearised MHD method until the footpoint displacement becomes comparable to the loop length, regardless of the size of the plasma $\beta_0$.
{This is notable since once $D\sim L$ one might not have expected the linearisation approach to be valid. }
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{{{eb_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{eb_0.01}.png}
\caption{{The integrated magnetic energy as a function of time, $t$. $\beta_0$: left 4/3, right 4/300.}}
\label{fig:Benergy}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.45\textwidth]{{{ke_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{ke_0.1}.png}}
\includegraphics[width=0.45\textwidth]{{{ke_0.01}.png}}
\includegraphics[width=0.45\textwidth]{{{ke_0.001}.png}}
\caption{The integrated kinetic energy as a function of time, $t$. $\beta_0$: top left 4/3, right 4/30, bottom left 4/300, right 4/3000}
\label{fig:keintegrated}
\end{figure*}
The integrated kinetic energy is shown as a function of time for each of the four different values of the initial plasma $\beta_0$ given in table \ref{tab1} in Figure {\ref{fig:keintegrated}}. The dashed lines are the kinetic energy
estimates given by the first and second order linearised MHD from Equation (\ref{eq:keintegrated}). There are no estimates from either the relaxation method or the 1D approach, as they are assumed to be in equilibrium.
The constant value is only obtained when the Alfv\'en waves, those excited by the switch-on of the boundary driving velocities, are
dissipated. Because the driving velocities are slow, the integrated kinetic energy is five orders of magnitude smaller than the magnetic energy.
What is surprising,
at first sight, is that the Lare2D results only really match the prediction from Equation (\ref{eq:keintegrated}) for an initial high $\beta_0$ plasma.
As $\beta_0$ is reduced, the departure
from the constant kinetic energy is much more significant. This departure is due to the flow along the initial magnetic field direction, $v_{y}$ (as shown analytically by the linearised MHD method in Section \ref{subsec:linear}), that is a consequence of
the magnetic pressure gradient in $y$ due to the $y$ variation in $B_z$ (see Equation (\ref{eq:BzSteady})). The size of the constant flow, $G(y)$, in the second order solution,
is proportional to $(l_d/l)^2(V_0/c_s)^2$, where the diffusion lengthscale, $l_d$, is defined above and $c_s^2 = \gamma p_0/\rho_0$ is proportional to the initial gas pressure. The viscosity may
be either real or due to numerical dissipation. In both cases, the viscosity damps out both the fast and Alfv\'en waves generated by the switch on of the driving. Once these waves are damped,
the plasma can pass through sequences of equilibria. Although $\nu$ is small, $l_d$ will eventually become large and so $p_0$ cannot be too small, or else this change in density will occur sooner.
This steady flow is due to the magnetic pressure gradients introduced by viscosity in the shearing component of the magnetic field, $B_{1z}$. Although the magnitude of this flow is small, it is constant in
time and eventually it will modify the plasma density (see Section \ref{subsubsec:rho} and the second order Equation (\ref{eq:rhoeps2})). In turn, the change in the density will influence the integrated kinetic energy.
\subsubsection{Comparison with $\rho$}\label{subsubsec:rho}
{The comparison of the plasma density between the Lare2D results, the linearised MHD method and 1D approach is shown in Figure \ref{fig:rhocompare} at the midpoint
in $y$
for high plasma $\beta_0$ (left column), low $\beta_0$ (right column) and footpoint displacement of $D/L \approx 0.63$ (top row) and $D/L\approx 1.3$ (bottom row).
The relaxation and
RMHD methods are not considered as they do not account for variations in density.
For high $\beta_0$ and small $D/L$ the agreement
between the three methods is very good.
The density variations in the $x$ direction are of the order of 4\% and all three methods give essentially the same results.
However, when the plasma $\beta_0$ is small (right column), the density variations
are now between 10\% and 20\% of the initial uniform value, with Lare2D having a general increase in the average value at $y=0$. This is due to the variation in $y$ of $B_z$. These large variations show that non-linear effects are already becoming important.}
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\textwidth]{{{rho_1.9_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{rho_1.9_0.01}.png}}
\includegraphics[width=0.45\textwidth]{{{rho_3.9_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{rho_3.9_0.01}.png}}
\hfill
\caption{Plots of $\rho$ against $x$ at the midpoint in $y$ for $\beta_0$ of 4/3 (left) and 4/300 (right) for $D\approx 1.9$ at time $t= 100$ (top)
and $D\approx 3.9$ for $t=200$ (bottom).}
\label{fig:rhocompare}
\end{figure*}
{ For high $\beta_0$ and larger footpoint displacement of $D/L \approx 1.3$ (bottom row) the variations are similar to the
low $\beta_0$ case for small $D/L$. This shows that the high $\beta_0$ plasma will eventually evolve in the same way but over a much longer time.
Once the footpoint displacement has become large, the variations in $\rho$ for low $\beta_0$ are nearly 60\% of the initial uniform value of 1.0 and are thus very significant.}
{The 1D approach agrees with Lare2D for high $\beta_0$ for both small and large $D/L$.
In the case of low $\beta_0$ this method predicts the same variation as full MHD but is displaced slightly which is due to the fact that velocity effects are not included in this approximation.}
{The first and second order linearised MHD results agree reasonably well with Lare2D for small displacements for both high and low $\beta_0$.
In the case of larger $D/L$ the linear results show a difference with the Lare2D results for both large and small
$\beta_0$ as non-linear effects become important.}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.45\textwidth]{{{rhosurf_040_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{rhosurf_040_0.01}.png}}
\hfill
\caption{Surfaces of density at $t=400$ ($D \approx 7.9$) for $\beta_0$ 4/3 (left) and 4/300 (right).}
\label{fig:rhosurface}
\end{figure*}
{The density dependence on the $y$ coordinate was predicted by the second order solution in Equation (\ref{eq:rhoeps2}). This variation is clearly seen
in the results of Lare2D and this is shown as a 2D surface of $\rho$ in Figure \ref{fig:rhosurface}, at $t=400$ ($D/L\approx 2.6$), for the high $\beta_0$ (left) and the low $\beta_0$ case (right). The maximum variation in density increases almost linearly in time
and by $t=400$ there is a 15\% difference between the maximum and minimum values at $x=0$. This $y$ variation is not the same as the rapid boundary layer behaviour
seen previously. On the other hand, $\rho$ has almost no $y$ dependence for high $\beta_0$ values (left part of Figure \ref{fig:rhosurface}). This clearly illustrates the large variation in density at $y=0$ as shown in Figure \ref{fig:rhocompare}.
This is what causes the kinetic energy to decrease as discussed in Section \ref{subsubsec:kinetic}.}
The variations in $\rho$ and $B_y$ will modify the Alfv\'en speed and this can affect the propagation of MHD waves in this plasma.
\subsubsection{Comparison with $j_y$}\label{subsubsec:current}
The current density is an important quantity to determine correctly, not only for force balance but also for ohmic heating, $\eta j^2$. The dominant component of the current density is the $j_{y}$ component
given by $j_{y} = - (\partial B_z/\partial x)$. The results of $j_y$ for Lare2D, the 1D approach and the linearisation are shown { in Figure \ref{fig:currenty} for $D/L\approx 1.3$ for the case of high $\beta_0$ (left) and low $\beta_0$ (right). The current could be obtained from the relaxation method but has not been done here.}
It is clear that the 1D approach matches the Lare2D results and that the magnitude of the current values exceeds the linear MHD (and also the RMHD)
estimate by almost a factor of 2 (right part of Figure \ref{fig:currenty}) for low $\beta_0$ values. In general, the magnitude of the current increases as $\beta_0$ decreases.
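Numerically, $j_y$ is recovered from a discrete $B_z$ profile by differencing. The sketch below is illustrative only: the grid and the $B_z$ amplitude are hypothetical, chosen to mimic the $\sin(kx)$ shape of the linear solution, and a second order centred difference stands in for whatever scheme the code actually uses.

```python
import numpy as np

# Hypothetical uniform grid and B_z profile mimicking the linear solution
L = 3.0
k = np.pi / (2.0 * L)
x = np.linspace(-L, L, 401)
Bz = 0.1 * np.sin(k * x)          # made-up amplitude

# j_y = -dB_z/dx via a second order centred difference (one-sided at the ends)
jy = -np.gradient(Bz, x)

# Away from the end points this should match the analytic derivative
jy_exact = -0.1 * k * np.cos(k * x)
assert np.allclose(jy[1:-1], jy_exact[1:-1], atol=1e-4)
```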
\begin{figure*}[h]
\includegraphics[width=0.45\textwidth]{{{jy_3.9_1.0}.png}}
\includegraphics[width=0.45\textwidth]{{{jy_3.9_0.01}.png}}
\caption{Comparison of $j_y$ against $x$ at the midpoint in $y$ for $\beta_0$ 4/3 (left) and 4/300 (right) for $D \approx 3.9$ ($t = 200$).}
\label{fig:currenty}
\end{figure*}
The other component of the current, $j_z = (\partial B_y/\partial x)$, is smaller in magnitude than $j_{y}$ and again both
Lare2D and the 1D approach agree.
RMHD does not predict any value for $j_z$.
\section{Conclusions}\label{sec:conclusions}
A simple footpoint shearing experiment has been investigated in order to test four different approximate methods against the full MHD results of Lare2D (\cite{arber01}). This is the first detailed comparison between the different methods, although \cite{pagano13} have compared
the relaxation method with an MHD simulation for the onset of a CME {and \cite{dmitruk05} have compared Reduced MHD and full MHD in the case of turbulence.}
Two methods assume that the magnetic field passes through a sequence of equilibria, namely the magneto-frictional relaxation and 1D methods. The
relaxation method, in the present form, only studies force-free fields and it provides an excellent match to the Lare2D results for $\bf{B}$ for low $\beta_0$, regardless of the footpoint displacement. The inclusion of the gas pressure and
plasma density is possible (see \cite{hesse93}) but has not been done here.
The second equilibrium method is the 1D approach, which assumes the boundary layers at the photospheric footpoints are narrow and so reduces the
Grad-Shafranov equation to a simple 1D equation for the flux function.
Solutions to the resulting equation give outstanding agreement with the Lare2D results for $B_y$, $B_z$, $p$, $\rho$, $j_y$ and $j_z$ for all footpoint displacements and values of $\beta_0$.
The 1D approach is, of course, derived with this specific experiment in mind. It has
been used for the twisting of coronal loops with cylindrical symmetry (see \cite{lothian89, browning89}). The flux function, in this case, is a function of radius alone.
Unlike the relaxation method, it is not readily extendable to more complex photospheric footpoint displacements but it does do exceptionally well for this particular problem.
{The simplest dynamical approach is to expand the MHD equations in powers of $D/L$, the ratio of the maximum footpoint displacement to the loop half-length.
In principle, this should
only be valid for $D\ll L$. Surprisingly, it has been found that this method provides good agreement for $D/L\lesssim 1$.
One strength of this model is that it can provide useful insight into the system.
}
Next, we consider Reduced MHD. In general, RMHD is identical to the first order linear MHD results and thus is not capable of reproducing the results from Lare2D. This is primarily because its main assumptions do not hold in this situation.
While RMHD has the same current component, $j_y$,
as linear MHD, it does not provide any information about $j_z\approx (\partial B_y/\partial x)$, {since there can be no change to $B_y$. }
Hence, force balance can only be maintained by balancing the magnetic pressure due to $B_{1z}^2/2$ by the gas pressure,
instead of through the change to $B_y$.
{There are many possible choices to extend this investigation to more complex systems and to explore the dependencies of this system in more detail.
One question is whether this system is dependent on the form of the internal energy equation.} The gas pressure and density structures produced by the steady shearing motions result in temperature variations. If thermal conduction is included and the boundary conditions keep the temperature
fixed at its initial value, then the temperature will relax towards an isothermal state. The assumption of an isothermal plasma can be included in the 1D method very easily by setting $\gamma = 1$. There is still a variation
in pressure and density. So the inhomogeneous nature of the resulting plasma is not dependent on the exact form of the internal energy equation.
{Further work on the validity of Reduced MHD is required.
For this experiment, we can neglect the variations parallel to the initial field whenever the horizontal lengthscales are much shorter than the parallel ones.
However, whether one can use RMHD or not depends on the final footpoint location, the total displacement and how the field lines got there.
On the one hand, a simple shear followed by the opposite shear brings
the footpoints back to their initial locations but the field will remain potential. On the other hand, a complete rotation also brings the footpoints to
their initial locations but
this time the field is not potential. It is how the field gets to the final location and the total Poynting flux that is injected into the corona that is important.}
{The message from this work is that one needs to take care when implementing a method without checking whether its assumptions are valid. The four approximate methods have been used for a particularly simple shearing experiment. For example, the simple 1D method is inappropriate for more complex and realistic photospheric footpoint motions. However, the magneto-frictional relaxation method is still applicable provided the displacement of the footpoints is small from the previous equilibrium state. Hence, a simple rotation of the footpoint through 360 degrees can be achieved by splitting the rotation up into smaller angles and relaxing before taking the next small rotation. For small angular motions, the relaxation method will quickly reach the nearby equilibrium state. This is then repeated until the complete revolution is achieved. See \cite{meyer11, meyer12,meyer13} for the application of the relaxation approach to the velocities derived from the magnetic carpet. The linearisation of the MHD equations can always be undertaken but the derivation of an analytical expression for the linear solution with more complex boundary conditions is not certain. Without an expression for the linear steady state, it will be difficult to determine the modifications to the density and main axial field in response to the non-linear driving by the linear steady state. Reduced MHD can certainly be applied to more complex photospheric motions but we would expect that the quadratic terms, due to the linear terms, will invalidate some of the main assumptions stated in \cite{oughton17}. Solving the full MHD equations remains the preferred approach, provided sufficient computing resources are available to generate the long time evolution of the magnetic field.}
\begin{acknowledgements}
The authors thank the referee for extremely useful comments.
AWH and EEG thank Duncan Mackay for useful discussions on the magneto-frictional method.
AWH acknowledges the financial support of STFC through the Consolidated grant, ST/N000609/1, to the University of St Andrews and EEG acknowledges the STFC studentship,
ST/I505999/1. This work used the DIRAC 1, UKMHD Consortium machine at the
University of St Andrews and the DiRAC Data Centric system at Durham University,
operated by the Institute for Computational Cosmology on behalf of the
STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by a BIS National
E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/K00087X/1,
DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the
National E-Infrastructure.
\end{acknowledgements}
\begin{appendix}
\section{Second order solutions}\label{sec:app}
Now we can solve the second order Equations (\ref{eq:motionxeps2}) - (\ref{eq:energyeps2}) for $v_{2x}$ and $v_{2y}$. Then we can determine the other variables.
We will include the viscous heating and dissipation terms.
Taking the time derivative of Equations (\ref{eq:motionxeps2}) and (\ref{eq:motionzeps2}) and using Equations (\ref{eq:inductioneps2y}) and (\ref{eq:energyeps2}), we have
\begin{flalign}
\frac{\partial^2 v_{2x}}{\partial t^2}=&-\frac{\partial}{\partial x}\left( - c_s^2 \left (\frac{\partial v_{2x}}{\partial x} + \frac{\partial v_{2y}}{\partial y}\right )\right.\nonumber&\\
&\left.+(\gamma-1)\nu \left(\frac{k^2V_0^2y^2}{L^2}\cos^2(kx)+\frac{V_0^2}{L^2}\sin^2(kx)\right)\right )\nonumber &\\
&-\frac{\partial}{\partial x}\left (\frac{V_A^2}{2}\left(\frac{2V_0^2\tau}{L^2}+\frac{\nu k^2V^2_0 }{V_A^2}\left(\frac{y^2}{L^2}-1\right)\right) \sin^2(kx)\right)\nonumber&\\
& +V_A^2\left(\frac{\partial^2 v_{2x}}{\partial x^2}+\frac{\partial^2v_{2x}}{\partial y^2}\right)\nonumber &\\
& +\nu \frac{\partial}{\partial t}\left(\frac{4}{3}\frac{\partial ^2v_{2x}}{\partial x^2}+\frac{\partial ^2v_{2x}}{\partial y^2}+\frac{1}{3}\frac{\partial^2v_{2y}}{\partial x\partial y}\right)\; ,&
\label{eq:fastvxeps2}
\end{flalign}
and
\begin{flalign}
\frac{\partial^2 v_{2y}}{\partial t^2} =& -\frac{\partial}{\partial y}\left(- c_s^2 \left (\frac{\partial v_{2x}}{\partial x} + \frac{\partial v_{2y}}{\partial y}\right )\right. \nonumber&\\
& \left.+(\gamma-1)\nu \left(\frac{k^2V_0^2y^2}{L^2}\cos^2(kx)+\frac{V_0^2}{L^2}\sin^2(kx)\right)\right )\nonumber &\\
& -\frac{\partial}{\partial y}\left (\frac{V_A^2}{2}\left(\frac{2V_0^2\tau}{L^2}+\frac{\nu k^2V^2_0 }{V_A^2}\left(\frac{y^2}{L^2}-1\right)\right) \sin^2(kx)\right) \nonumber &\\
&+\nu \frac{\partial}{\partial t}\left(\frac{\partial^2v_{2y}}{\partial x^2}+\frac{4}{3}\frac{\partial ^2v_{2y}}{\partial y^2}+\frac{1}{3}\frac{\partial^2v_{2x}}{\partial y\partial x}\right)\; ,&
\label{eq:fastvzeps2}
\end{flalign}
where $\tau = t- t_1$.
These two equations can be solved by taking
\begin{flalign}
&v_{2x}(x,y,t)=(B(y)\tau+C(y))\sin(2kx),\label{eq:v2xdriven}&\\
&v_{2y}(x,y,t)=(F(y)\tau+E(y))\cos(2kx)+G(y),&\\
&B(y) = \frac{\delta}{4k}\left (\frac{\cosh(2ky)}{\cosh(2kL)} - 1\right )\; , &\\
&G(y)=\nu \frac{(2 \gamma -1)k^2 L^2}{12c_s^2}\left (\frac{V_0^2}{L^2}\right ) y \left (1 - \frac{y^2}{L^2}\right )\; ,&\\
&C(y)=\frac{\nu}{2kV_A^2}\left(\alpha+\frac{\delta}{2}\right)\left(\frac{\cosh(2ky)}{\cosh(2kL)}\left(2k^2L^2+1\right)-\left(2k^2y^2+1\right)\right)\nonumber &\\
&+ \frac{\nu}{4kV_A^2}\left(2c_s^2\kappa-\delta+\frac{2}{3}\alpha\right)\left(\frac{\cosh(2ky)}{\cosh(2kL)}-1\right)\; ,&\\
&F(y) = \frac{\delta}{4k}\left ( \tanh(2kL)\frac{y}{L} - \frac{\sinh(2ky)}{\cosh(2kL)}\right )\; , &\\
&E(y)=\frac{2\nu k^2}{3}\left(\left(\alpha+\frac{\delta}{2}\right)\left(\frac{1}{V_A^2}+\frac{1}{c_s^2}\right)+\frac{V_0^2\left(\gamma-\frac{3}{2}\right)}{4L^2c_s^2}\right)y^3
\nonumber &\\
&+\nu \left(\frac{ k^2V_0^2}{4c_s^2}+\left(\frac{4\alpha}{3V_A^2}-\frac{V_0^2(\gamma-1)}{2L^2c_s^2}\right)+\kappa\left(\frac{c_s^2}{V_A^2}+1\right)\right)y\nonumber&\\
&-\nu \left(L^2 \left(\alpha+\frac{\delta}{2}\right)k^2+\frac{c_s^2\kappa}{2}+\frac{2 \alpha}{3}\right)\frac{\sinh(2ky)}{V_A^2k\cosh(2kL)}\; ,&
\end{flalign}
where $\alpha,\ \delta$ and $\kappa$ are constants, chosen to satisfy the boundary conditions, namely
\begin{eqnarray}
\alpha&=&\frac{\delta}{4kL}\tanh(2kL)-\frac{\delta}{2}\; ,\\
\delta&=&\frac{V_0^2/L^2}{1 + (c_s^2/V_A^2)\left ( 1 - \tanh(2kL)/2kL\right )}\; ,\\
\kappa&=& \frac{( V_1 + V_2)}{c_s^2L\left(kL\left(c_s^2+V_A^2\right)-\frac{1}{2}c_s^2\tanh(2kL)\right)}\; ,
\end{eqnarray}
and
\begin{eqnarray}
V_1 &=& c_s^2L\left(k^2\left(\alpha+\frac{\delta}{2}\right)L^2+\frac{2}{3}\alpha\right)\tanh(2kL)\; ,\\
V_2 &=&-\frac{2}{3}k\left(\left(c_s^2+V_A^2\right)\left(\alpha+\frac{\delta}{2}\right)k^2L^4+\left(\frac{V_0^2\gamma V_A^2k^2}{4}+2\alpha c_s^2\right)L^2\right. \nonumber \\
& &\left.-\frac{3V_0^2V_A^2(\gamma-1)}{4}\right)\; .
\end{eqnarray}
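For orientation, these constants are easily evaluated numerically. In the sketch below the parameter values are purely illustrative (they are not the values used in the simulation runs); it simply transcribes the closed-form expressions for $\delta$, $\alpha$, $V_1$, $V_2$ and $\kappa$.

```python
import math

gamma = 5.0 / 3.0          # ratio of specific heats
V0, L = 0.05, 3.0          # driving speed and loop half-length (made up)
k = math.pi / (2.0 * L)
VA2 = 1.0                  # Alfven speed squared (made up)
cs2 = 0.01                 # sound speed squared, a low beta_0 case (made up)

th = math.tanh(2.0 * k * L)
delta = (V0**2 / L**2) / (1.0 + (cs2 / VA2) * (1.0 - th / (2.0 * k * L)))
alpha = delta * th / (4.0 * k * L) - delta / 2.0
V1 = cs2 * L * (k**2 * (alpha + delta / 2.0) * L**2 + 2.0 * alpha / 3.0) * th
V2 = -(2.0 / 3.0) * k * ((cs2 + VA2) * (alpha + delta / 2.0) * k**2 * L**4
                         + (V0**2 * gamma * VA2 * k**2 / 4.0
                            + 2.0 * alpha * cs2) * L**2
                         - 3.0 * V0**2 * VA2 * (gamma - 1.0) / 4.0)
kappa = (V1 + V2) / (cs2 * L * (k * L * (cs2 + VA2) - 0.5 * cs2 * th))

# The finite sound speed reduces delta below the cold-plasma value V0^2/L^2
assert 0.0 < delta < V0**2 / L**2
```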
From the expressions for $v_{2x}$ and $v_{2y}$, we can calculate the other variables as
\begin{flalign}
&B_{2x}(x,y,t)=B_0 \left (B^\prime(y)\frac{\tau^2}{2} + C^\prime(y) \tau\right ) \sin (2 k x)\; ,&\\
&B_{2y}(x,y,t)=- 2 k B_0 \left (B(y)\frac{\tau^2}{2} + C(y) \tau\right ) \cos(2 k x)\; , &\\
& \frac{\rho_2(x,y,t)}{\rho_0}= - G^\prime(y) \tau \nonumber&\\
&+ \left ( [2 k B(y) + F^\prime(y)]\frac{\tau^2}{2} +(2 k C(y) + E^\prime(y))\tau\right )\cos(2 k x) \; ,&\\
&p_2(x,y,t)= \frac{\gamma p_0}{\rho_0}\rho_2\nonumber&\\
& + (\gamma -1)\rho_0 \nu \tau \left (k^2 V_{1z}^2 \cos^2 kx
+\left (\frac{\partial V_{1z}}{\partial y} \right)^2\sin^2 kx \right )\; .&
\end{flalign}
Notice that $B_{2y}$ and $B_{2x}$ also have boundary layers and that
\begin{equation}
B_{2x} \approx 0\; , \quad B_{2y} \approx B_0 \left(\frac{\delta \tau^2}{4}+\frac{\nu c_s^2\kappa \tau}{V_A^2}\right)2\left (\frac{1}{2} - \sin^2(kx)\right )\; ,
\label{eq:Bzeps2approx}
\end{equation}
in the central part of the field away from the boundary layers.
From Equation (\ref{eq:v2xdriven}), $v_{2x}$ does remain small but it is essential in allowing the axial field to adjust its value.
Calculating the magnetic pressure to second order we find that
\begin{eqnarray}
\frac{B_{1z}^2 + (B_0 + B_{2y})^2}{2} &=& \frac{B_{1z}^2}{2} + \frac{B_0^2}{2} + {B_0 B_{2y}}, \nonumber \\
&=& \frac{B_0^2}{2 } \left ( 1 + \frac{\delta \tau^2}{2}+\frac{2\nu c_s^2 \kappa \tau}{V_A^2}+\left[\tau^2\left (\frac{V_0^2}{L^2}-\delta \right) \right.\right. \nonumber\\
&+&\left.\left. \nu \tau\left(\frac{ k^2V_0^2}{V_A^2}\left(\frac{y^2}{L^2}-1\right)-\frac{4c_s^2}{V_A^2}\kappa\right)\right]\sin^2kx\right ).
\label{eq:Bpressure2}
\end{eqnarray}
The magnetic pressure is growing quadratically in time and is dependent on $x$ and $\delta$. The neglected term is the square of the viscous part of $B_{1z}$.
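This expansion is simple to verify symbolically. The sketch below (assuming the sympy library) takes $B_{1z}$ with its $\sin(kx)$ dependence written out and the central-region form of $B_{2y}$ from Equation (\ref{eq:Bzeps2approx}), and reproduces Equation (\ref{eq:Bpressure2}) after discarding the $\nu^2$ term:

```python
import sympy as sp

B0, V0, L, VA, cs, k, x, y, tau, nu, delta, kappa = sp.symbols(
    'B_0 V_0 L V_A c_s k x y tau nu delta kappa', positive=True)

# First order shearing field, with the sin(kx) dependence made explicit
B1z = B0 * (V0 * tau / L
            + nu * k**2 * L * V0 / (2 * VA**2) * (y**2 / L**2 - 1)) * sp.sin(k * x)

# Second order axial field away from the boundary layers
B2y = B0 * (delta * tau**2 / 4
            + nu * cs**2 * kappa * tau / VA**2) * 2 * (sp.Rational(1, 2)
                                                       - sp.sin(k * x)**2)

# Magnetic pressure to second order; drop the square of the viscous part, O(nu^2)
pressure = sp.expand(B1z**2 / 2 + B0**2 / 2 + B0 * B2y)
pressure = pressure.series(nu, 0, 2).removeO()

target = B0**2 / 2 * (1 + delta * tau**2 / 2 + 2 * nu * cs**2 * kappa * tau / VA**2
                      + (tau**2 * (V0**2 / L**2 - delta)
                         + nu * tau * (k**2 * V0**2 / VA**2 * (y**2 / L**2 - 1)
                                       - 4 * cs**2 * kappa / VA**2))
                      * sp.sin(k * x)**2)

assert sp.simplify(pressure - target) == 0
```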
Including the second order gas pressure gives
\begin{flalign}
p_2+\frac{B_{1z}^2}{2} + \frac{B_0^2}{2} + {B_0 B_{2y}}
=&\frac{B_0^2}{2}+\frac{B_0^2V_0^2\tau^2}{4L^2}\nonumber\\
&+\frac{\rho_0\nu \left(k^2L^2(\gamma-2)+3(\gamma-1)\right)}{6}\frac{V_0^2}{L^2}\tau.
\label{eq:totalpressure2}
\end{flalign}
This removes the dependence on $x$ and thus it can be concluded that the \textit{total}
pressure is independent of $x$ but is increasing in time.
We can also calculate the first and second order current
\begin{eqnarray}
j_{1x}&=&\frac{\partial B_{1z}}{\partial y}\sin(kx)
={B_0}\frac{\nu k^2V_0}{V_A^2L}y\sin(kx), \; \\
j_{1y}&=&-{k}B_{1z}\cos(kx),\nonumber\\
&=&-{kB_0}\left(\frac{V_0 \tau}{L}+\frac{\nu k^2LV_0}{2V_A^2}\left(\frac{y^2}{L^2}-1\right)\right) \cos(kx)\: ,\\
j_{2z}&=&-2k B_{2y}\sin(2kx)=-2k B_0 \left(\frac{\delta \tau^2}{4}+\frac{\nu c_s^2\kappa \tau}{V_A^2}\right)\sin(2kx)\; .
\end{eqnarray}
\end{appendix}
\bibliographystyle{aa}
\section{Introduction}
Given a Kähler manifold $(X, \omega)$ one may define in a natural way a bundle $p\colon Z_X\rightarrow X$ of affine spaces called a \emph{canonical extension} of $X$. One possible way to define $Z_X$ is as the universal complex manifold on which the cohomology class $[p^*\omega]=0$ vanishes.
Canonical extensions were introduced by \cite{donaldson_SmoothnessOfMongeAmpereFlow} to prove regularity properties of solutions to the Monge-Ampère equation. They have subsequently also seen some uses related to K-stability and the existence of Kähler-Einstein metrics on Fano manifolds, see for example \cite{tian_KStabilityStrongly} or \cite{gkp_CanExtandKSTability}.
Recently, in \cite{greb_CanonicalExtensions} another point of view on canonical extensions was suggested. Namely, besides discussing some relations to the existence of complexifications, they suggested the following question:
\begin{Question}
Let $X$ be a compact Kähler manifold. Is it true, that the tangent bundle of $X$ is nef if and only if some (resp.\ any) canonical extension of $X$ is a Stein manifold?\label{IntroQ:GrebWong}
\end{Question}
The structure of compact Kähler manifolds possessing a nef tangent bundle is well-understood and by now classical (see \textbf{\cref{sec:StructureManifoldsWithNefTangent}}). However, specifically in the Fano case some very interesting questions such as the conjecture of Campana-Peternell remain open. Thus, \cref{IntroQ:GrebWong} is interesting as it suggests a possibly more geometric point of view on these problems.
In \textbf{\cref{sec:canExtGeneralManifoldsWithNefTangent}} we give the following partial answer to \Cref{IntroQ:GrebWong}:
\begin{Theorem}
Let $X$ be a compact Kähler manifold with nef tangent bundle. If the (weak) Campana-Peternell conjecture \cref{weakCampanaPeternellConjecture} holds true then any canonical extension $Z_X$ of $X$ is a Stein manifold.\label{IntroCanExtStein}
\end{Theorem}
The idea for the proof is quite simple: By a well-known result of Demailly-Peternell-Schneider, Cao (cf.\ \cref{flatness_of_Albanese}) some finite étale cover of $X$ fibres over a complex torus $T$. Now, any canonical extension of a torus is Stein as remarked in \cite[Proposition 2.13.]{greb_CanonicalExtensions} Moreover, assuming a weak form of the Campana-Peternell conjecture (see \cref{weakCampanaPeternellConjecture}) any canonical extension of any fibre $F$ is Stein as well. This was proved in \cite[Theorem 1.2.]{HP_Stein_complements}. In conclusion, it remains to \emph{put both cases together}. To this end, we understand canonical extensions of $X$ as fibre bundles in terms of the canonical extensions of $F$ and $T$ respectively.
In the converse direction, namely what can be said about manifolds whose canonical extension is Stein, little is known (see \cite{HP_Stein_complements} for some partial results). In fact, even for projective surfaces \cref{IntroQ:GrebWong} is not completely settled yet, although it is known to hold in most cases by the work of \cite[Theorem 1.13.]{HP_Stein_complements}. In \textbf{\cref{sec:CanExtSurfaces}} we partially complement their results by ruling out the case of ruled surfaces over higher genus curves as well:
\begin{Lemma}
Let $X = \mathds{P}(\mathcal{E}) \rightarrow C$ be a ruled surface over a curve of genus $g(C)\geq 2$ defined by a semi-stable vector bundle $\mathcal{E}$. Then, no canonical extension of $X$ is Stein.
\end{Lemma}
This only leaves the case of unstable ruled surfaces over elliptic curves.
Finally, in a short \textbf{Appendix} we provide some clarifications on our convention regarding integration along the fibres and some (maybe not so standard) formulae from multi-linear algebra used throughout the text.
\subsection*{Acknowledgements}
The author would like to express his gratitude towards Mr.\ Daniel Greb who suggested the main question that is answered in this publication as a master's thesis topic. Moreover, the author is incredibly thankful for Mr.\ Greb's very valuable advice.
\section{Canonical extensions of complex manifolds}
\subsection{A variety of constructions}
\noindent
In the following we provide a short overview of a general approach to constructing bundles of affine spaces over complex manifolds. This theory is necessary for the definition of canonical extensions in the next subsection.
\begin{Reminder}
Let $\mathcal{F}$ be a coherent sheaf on a complex analytic variety $X$. Recall that the elements of the cohomology group $\textmd{Ext}^1_{\mathcal{O}}(\mathcal{O}_X, \mathcal{F})$ are in one-to-one correspondence with isomorphism classes of extensions $0 \rightarrow \mathcal{F} \rightarrow \mathcal{G} \rightarrow \mathcal{O}_X \rightarrow 0$ of $\mathcal{O}_X$ by $\mathcal{F}$. Here, $\textmd{Ext}_{\mathcal{O}}(\mathcal{O}_X, -)$ coincides by definition with the right-derived functor
\begin{align*}
\textmd{Ext}_{\mathcal{O}}(\mathcal{O}_X, -) := R\textmd{Hom}_{\mathcal{O}}(\mathcal{O}_X, -) = R\Gamma(X, -).
\end{align*}
In other words, isomorphism classes of extensions $0 \rightarrow \mathcal{F} \rightarrow \mathcal{G} \rightarrow \mathcal{O}_X \rightarrow 0$ are in one-to-one correspondence with the elements of $H^1(X, \mathcal{F})$.
\end{Reminder}
Now, fix a complex manifold $X$, a holomorphic vector bundle $\mathcal{E}$ on $X$ and any cohomology class $a\in H^1(X, \mathcal{E})$. In the following we describe three equivalent ways of constructing affine bundles over $X$ from the data $(\mathcal{E}, a)$:
\begin{Construction}
(\textbf{as torsors})
\noindent
As we saw above, to $a\in H^1(X, \mathcal{E})$ we can associate an extension
\begin{align}
0 \rightarrow \mathcal{E} \rightarrow \mathcal{V}_a \overset{p}{\rightarrow} \mathcal{O}_X \rightarrow 0\label{eqs:CanExt1}
\end{align}
of holomorphic vector bundles on $X$. Consider the sub sheaf $\mathcal{Z}_a := p^{-1}(1) \subsetneq \mathcal{V}_a$ of sections of $\mathcal{V}_a$ mapping under $p$ to the constant function $1$. Note that $\mathcal{Z}_a$ is \emph{not} a sheaf of $\mathcal{O}_X$-modules. However, it comes with a natural action of $\mathcal{E}$ by translations making $\mathcal{Z}_a$ into an $\mathcal{E}$-torsor. In this sense, $\mathcal{Z}_a$ is an \emph{affine bundle}: Its underlying total space $Z_a := |\mathcal{Z}_a|\rightarrow X$ is a fibre bundle over $X$ and the fibre $Z_a|_x$ over any point $x$ is in a natural way an affine vector space with group of translations $\mathcal{E}|_x$. In the following, we will call the total space $Z_a := |\mathcal{Z}_a|\rightarrow X$ (an) \emph{extension} of $X$. Sometimes we may also denote it by $Z_{\mathcal{E}, a}$ if we want to make explicit the dependence on the bundle $\mathcal{E}$.
A similar way to construct $Z_a$ is as follows: We may consider $p$ as a holomorphic map between the underlying total spaces $|p|\colon |\mathcal{E}|\rightarrow |\mathcal{O}_X| = X\times\mathds{C}$. Then, $Z_a$ may be naturally identified with the pre-image
\begin{align*}
Z_a = |p|^{-1}(X\times \{ 1 \}).
\end{align*}
Since $p$ is a surjective morphism of vector bundles, $|p|$ is a submersion. In particular, we see from this that $Z_a$ is indeed a manifold and we also see that we may view the affine space structure on the fibres $Z_a|_x$ as arising from the embedding $Z_a|_x \subsetneq \mathcal{V}_a|_x$. This is the definition of $Z_a$ used in \cite{greb_CanonicalExtensions}.\label{con:CanonicalExtensions}
\end{Construction}
\begin{Construction}
(\textbf{as complements of a hypersurface})
\noindent
A second, possibly more geometric construction of $Z_a$ is as follows: Dualising the short exact sequence \cref{eqs:CanExt1} we find the short exact sequence
\begin{align*}
0 \rightarrow \mathcal{O}_X \rightarrow (\mathcal{V}_a)^* \rightarrow \mathcal{E}^* \rightarrow 0
\end{align*}
which defines an embedding
\begin{align*}
\mathds{P}(\mathcal{E}^*) \hookrightarrow \mathds{P}(\mathcal{V}_a^*).
\end{align*}
Here, throughout this paper we will always use the convention that $\mathds{P}(\mathcal{E})$ denotes the projective bundle of linear \emph{hyperplanes} in $\mathcal{E}$.
We claim that there exists a natural identification of the affine bundle $Z_a$ with the complement $\mathds{P}(\mathcal{V}_a^*)\setminus \mathds{P}(\mathcal{E}^*)$. Indeed, for any $x\in X$ the fibre $\mathds{P}(\mathcal{V}_a^*)|_x$ is just the space of lines in $\mathcal{V}_a|_x$ passing through the origin. Now, of course any point in the affine space $Z_a|_x \subsetneq \mathcal{V}_a|_x$ defines a unique line passing through itself and the origin (here, we use that $0\notin Z_a|_x = p_x^{-1}(1)$) and so $Z_a|_x \subset \mathds{P}(\mathcal{V}_a^*)|_x$ in a natural way. Moreover, the set $ \mathds{P}(\mathcal{V}_a^*)|_x\setminus Z_a|_x$ consists precisely of those lines which are parallel to $Z_a|_x$, i.e.\ contained in $\mathds{P}(\mathcal{E}^*)|_x$. This concludes the proof of the claim.
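To illustrate this identification in the simplest possible case, let $X$ be a point. Then necessarily $a = 0$, the extension splits as $\mathcal{V}_a = \mathcal{E} \oplus \mathds{C} \cong \mathds{C}^{n+1}$ (with $n = \operatorname{rk} \mathcal{E}$) and $Z_a = p^{-1}(1) \cong \mathds{C}^n$. The identification above then recovers the familiar description of affine space as the complement of the hyperplane at infinity,
\begin{align*}
Z_a = \mathds{P}(\mathcal{V}_a^*) \setminus \mathds{P}(\mathcal{E}^*) = \mathds{P}^n \setminus \mathds{P}^{n-1} = \mathds{C}^n.
\end{align*}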
Note that by construction $\mathds{P}(\mathcal{E}^*)$ is embedded as a smooth hypersurface in the linear series of $\mathcal{O}_{\mathds{P}(\mathcal{V}_a^*)}(1)$. In particular, its normal bundle is given by
\begin{align*}
\mathcal{N}_{\mathds{P}(\mathcal{E}^*) / \mathds{P}(\mathcal{V}_a^*)}
= \mathcal{O}_{\mathds{P}(\mathcal{E}^*)}(1).
\end{align*}\label{rem:ExtensionsNormalBundle}
This is the preferred point of view in \cite{HP_Stein_complements}.
\end{Construction}
\begin{Construction}
(\textbf{via a universal property})
\noindent
Finally, $Z_a \overset{p}{\rightarrow} X$ enjoys the following universal property (which of course determines it uniquely): Let $Y$ be any complex manifold and let $h\colon Y \rightarrow X$ be any holomorphic map such that the cohomology class $h^*a = 0\in H^1(Y, h^*\mathcal{E})$ vanishes. Then, $h$ factors uniquely (up to translation by an element of $H^0(Y, h^*\mathcal{E})$) through $Z_a\overset{p}{\rightarrow} X$. In this sense, $Z_a\rightarrow X$ is the universal manifold on which the cohomology class $a$ vanishes. A more precise version of this statement may be found in \cite[Lemma 1.16.(c)]{greb_CanonicalExtensions}.\label{universal_property_can_extensions}
\end{Construction}
\begin{Corollary}\emph{(cf.\ \cite[Remark 2.4.]{greb_CanonicalExtensions})}
\noindent
Let $X$ be a complex manifold, let $\mathcal{E}$ be a holomorphic vector bundle on $X$ and fix a cohomology class $a\in H^1(X, \mathcal{E})$. Then, for any $\lambda\in \mathds{C}^\times$ there exists an isomorphism of affine bundles
\begin{align*}
Z_{a} = Z_{\lambda \cdot a},
\end{align*}
which is canonical up to translation.\label{scaling_invariance_canExt}
\end{Corollary}
\begin{Proof}
Both bundles share the same universal property described in \cref{universal_property_can_extensions}, hence are canonically isomorphic. Compare also \cite[Remark 2.4.]{greb_CanonicalExtensions}.
\end{Proof}
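In terms of local splittings, the isomorphism can also be made explicit: if on a trivialising cover the transition maps of $\mathcal{V}_a$ are given by $(e, f) \mapsto (g_{ij}e + f\, a_{ij}, f)$ for a cocycle $(a_{ij})$ representing $a$, then the chart-wise map $(e, f) \mapsto (\lambda e, f)$ is compatible with the transition maps of $\mathcal{V}_{\lambda a}$ and hence glues to a global isomorphism $\mathcal{V}_a \rightarrow \mathcal{V}_{\lambda a}$ commuting with the projections to $\mathcal{O}_X$; restricting to $p^{-1}(1)$ yields the isomorphism $Z_a \cong Z_{\lambda a}$. A different choice of local splittings changes this map by a translation.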
The construction of extensions is clearly functorial:
\begin{Proposition}\emph{(see also \cite[Lemma 1.16(b)]{greb_CanonicalExtensions})}
\noindent
Let $f\colon X \rightarrow T$ be a holomorphic map between complex manifolds. Let $\mathcal{E}$ be a holomorphic vector bundle on $T$ and fix any cohomology class $a \in H^1(T, \mathcal{E})$.
There exists a natural isomorphism of affine bundles
\begin{align*}
Z_{f^*\mathcal{E}, f^*a} \cong f^*Z_{\mathcal{E}, a} = Z_{\mathcal{E}, a} \times_T X.
\end{align*}
We will denote the induced map $Z_{f^*\mathcal{E}, f^*a} \rightarrow Z_{\mathcal{E}, a}$ by $Z_f$.\label{pullback_canonical_extensions}
\end{Proposition}
\subsection{Canonical extensions and positivity of curvature}
\label{subsec:CanExtKahlerMfds}
\begin{Definition}
Let $(X, \omega)$ be a complex Kähler manifold. Then, $\omega$ is a $\bar{\partial}$-closed form and hence defines a cohomology class $[\omega] \in H^1(X, \Omega_X^1)$. The associated extension $Z_{[\omega]}$ is called (a) \emph{canonical extension} of $X$. Alternatively, we also write $Z_{X, [\omega]}$ if we want to stress the dependence on $X$ or simply $Z_X$ if the dependence on $[\omega]\in H^1(X, \Omega_X^1)$ is not important in that situation.
\end{Definition}
In the preceding subsection we have seen three equivalent constructions for $Z_{[\omega]}$:
\begin{itemize}
\item[(1)] As a bundle of affine spaces over $X$ (more precisely: as a $\Omega^1_X$-torsor).
\item[(2)] As the complement $Z_{[\omega]} = \mathds{P}(\mathcal{V}_{[\omega]}^*)\setminus \mathds{P}(\mathcal{T}_X)$ of the smooth hypersurface $\mathds{P}(\mathcal{T}_X)$ which is an element in the linear series of $\mathcal{O}_{\mathds{P}(\mathcal{V}^*)}(1)$. The normal bundle of $\mathds{P}(\mathcal{T}_X)$ is given by $\mathcal{N}_{\mathds{P}(\mathcal{T}_X)/ \mathds{P}(\mathcal{V}^*)} = \mathcal{O}_{\mathds{P}(\mathcal{T}_X)}(1)$ (this was part of \cref{rem:ExtensionsNormalBundle}).
\item[(3)] As the universal manifold on which the cohomology class $[\omega]$ vanishes.
\end{itemize}
In the following we are going to use all of these constructions interchangeably.
The following conjecture arose out of the work of \cite{greb_CanonicalExtensions} and \cite{HP_Stein_complements} on canonical extensions:
\begin{Conjecture}\emph{(Greb-Wong, Höring-Peternell)}
\noindent
Let $X$ be a compact Kähler manifold. Then, the tangent bundle $\mathcal{T}_X$ is nef (respectively big and nef) if and only if some canonical extension $Z_X$ of $X$ is Stein (respectively affine).\label{con:GrebWongCanExt}
\end{Conjecture}
In this context, recall that a vector bundle $\mathcal{E}$ on a complex manifold is said to be nef (resp.\ big) if and only if the tautological bundle $\mathcal{O}_{\mathds{P}(\mathcal{E})}(1)$ is nef in the sense of \cite[Definition 1.2.]{DPS_ManifoldsWithNefTangentBundle} (resp.\ big in the classical sense, see \cite[Definition 2.2.1.]{lazarsfeld_PositivityI}). \cref{con:GrebWongCanExt} is interesting as it promises a possibly more geometric way to study manifolds of positive curvature. See \cref{sec:StructureManifoldsWithNefTangent} below for an overview of the (expected) structure theory of manifolds with a nef tangent bundle.
Let us quickly summarise what is known thus far about \cref{con:GrebWongCanExt} in general:
\begin{itemize}
\item The conjecture is known to hold for curves and (most) projective surfaces by \cite[Theorem 1.13.]{HP_Stein_complements}. More details on the cases left open will be provided in \cref{sec:CanExtSurfaces}.
\item If $X$ is projective and some canonical extension of $X$ is affine, then the tangent bundle of $X$ is big by \cite[Corollary 4.4.]{greb_CanonicalExtensions}. Conversely, if $\mathcal{T}_X$ is nef and big, then all canonical extensions of $X$ are affine. The latter result is due to \cite[Theorem 1.2.]{HP_Stein_complements}. Thus, (at least modulo the nef case) the big case is settled.
\item Building on the work of \cite{greb_CanonicalExtensions} and \cite{HP_Stein_complements}, in \cref{sec:canExtGeneralManifoldsWithNefTangent} we are going to prove that if the tangent bundle of $X$ is nef (and if the weak form of the conjecture of Campana and Peternell holds true, cf.\ \cref{weakCampanaPeternellConjecture}), then the canonical extensions of $X$ are always Stein.
\item The remaining case is to prove that if a canonical extension of $X$ is Stein then the tangent bundle of $X$ is nef. This problem is still almost completely open.
\end{itemize}
Let us end this section by stating the following basic fact which will be useful in \cref{sec:canExtGeneralManifoldsWithNefTangent}:
\begin{Proposition}
Let $\pi\colon X' \rightarrow X$ be an étale cover between Kähler manifolds. Then, for any Kähler form $\omega_X$ on $X$ there exists a natural isomorphism of affine bundles
\begin{align}
Z_{X', [\pi^*\omega_X]}
\cong \pi^*Z_{X, [\omega_X]}
:= Z_{X, [\omega_X]} \times_X X'.\label{eqs:CanExtEtaleCovers}
\end{align}
Moreover, if $\pi$ is finite then $Z_{X, [\omega_X]}$ is Stein if and only if $Z_{X', [\pi^*\omega_X]}$ is so.
\label{canonicalExtEtaleCov}
\end{Proposition}
\begin{Proof}
First of all, it follows from \cref{pullback_canonical_extensions} that
\begin{align*}
Z_{X, [\omega_X]} \times_X X'
= \pi^*Z_{X, [\omega_X]}
\cong Z_{\pi^*\mathcal{T}_X, [\pi^*\omega_X]}.
\end{align*}
Since $\pi$ is étale, the natural morphism $d\pi\colon \mathcal{T}_{X'} \rightarrow \pi^*\mathcal{T}_X$ is an isomorphism. Thus, \cref{eqs:CanExtEtaleCovers} is proved. Regarding the second assertion, the identification
\begin{align*}
Z_{X', [\pi^*\omega_X]}
\cong Z_{X, [\omega_X]} \times_X X'
\end{align*}
which we just proved shows that, since $\pi\colon X' \rightarrow X$ is a finite étale cover, so is the induced holomorphic map $Z_\pi\colon Z_{X'} \rightarrow Z_{X}$. But in general, if $Z' \rightarrow Z$ is any finite map between complex manifolds, then $Z$ is Stein if and only if $Z'$ is Stein (see e.g.\ \cite[Lemma 2.]{narasimhan_SteinSpaces}).
\end{Proof}
\section{Structure theory of manifolds with nef tangent bundle}
\label{sec:StructureManifoldsWithNefTangent}
For the convenience of the reader, in this section we want to provide a short summary of the (conjectural) structure theory of manifolds with a nef tangent bundle. The following result summarises the successive work of \cite{campanaPeternell_Conjecture}, \cite{DPS_ManifoldsWithNefRicciClass}, \cite{DPS_ManifoldsWithNefTangentBundle} and \cite{cao_PhDThesis}:
\begin{Theorem}\emph{(Cao, Demailly-Peternell-Schneider)}
\noindent
Let $X$ be a compact Kähler manifold possessing a nef tangent bundle. There exists a finite étale cover $X' \rightarrow X$ such that the Albanese map $\alpha\colon X' \rightarrow \Alb(X')$ is a locally constant analytic fibre bundle. The typical fibre is a Fano manifold with a nef tangent bundle. \label{flatness_of_Albanese}
\end{Theorem}
Here, a fibre bundle is said to be \emph{locally constant} if it satisfies one of the following equivalent characterisations:
\begin{Lemma}
Let $\alpha\colon X\rightarrow T$ be a proper holomorphic fibre bundle with fibre $F$. Let $\widetilde{T} \rightarrow T$ denote the universal cover of $T$. Then, the following assertions are equivalent:
\begin{itemize}
\item[(1)] The transition functions of $\alpha\colon X\rightarrow T$ may be chosen to be locally constant.
\item[(2)] There exists a representation $\rho\colon \pi_1(T) \rightarrow \Aut(F)$ and a biholomorphism of fibre bundles
\begin{align*}
(\widetilde{T}\times F)/\pi_1(T) \cong X.
\end{align*}
Here, $\pi_1(T)$ acts on $\widetilde{T}$ in the natural way and on $F$ through $\rho$.
\item[(3)] The short exact sequence $0 \rightarrow \mathcal{T}_{X/T} \rightarrow \mathcal{T}_{X} \rightarrow \alpha^*\mathcal{T}_{T}
\rightarrow 0$ admits a global holomorphic splitting establishing $\alpha^*\mathcal{T}_{T}$ as an \textmd{integrable} sub bundle of $\mathcal{T}_{X}$.
\end{itemize}\label{locallyConstantFibrations}
\end{Lemma}
\begin{Proof}
The equivalence of $(1)$ and $(2)$ is clear. Moreover, if $(2)$ is satisfied, then the holomorphic vector bundle $pr_{\widetilde{T}}^*\mathcal{T}_{\widetilde{T}}$ on $\widetilde{T}\times F$ is clearly $\pi_1(T)$-invariant and, hence, descends to a holomorphic vector bundle on $X\cong (\widetilde{T}\times F)/\pi_1(T) $ providing an integrable splitting of the short exact sequence $0 \rightarrow \mathcal{T}_{X/T} \rightarrow \mathcal{T}_{X} \rightarrow \alpha^*\mathcal{T}_{T}
\rightarrow 0$.
Finally, assume $(3)$ to hold true and fix an integrable, holomorphic sub bundle $\mathcal{T}\subseteq \mathcal{T}_X$ splitting the sequence $0 \rightarrow \mathcal{T}_{X/T} \rightarrow \mathcal{T}_{X} \rightarrow \alpha^*\mathcal{T}_{T} \rightarrow 0$. Then, any local holomorphic frame $V_1,\dots, V_m$ of $\mathcal{T}_T$ (locally) admits a unique lift to a (local, holomorphic) frame $\widetilde{V}_1,\dots, \widetilde{V}_m$ for $\mathcal{T}$. Since $\alpha$ is proper, the flows $\phi^1, \dots, \phi^m\colon X\times D\rightarrow X$ of $\widetilde{V}_1,\dots, \widetilde{V}_m$ are well-defined local automorphisms of $X$. Here, $D\subseteq \mathds{C}$ is a sufficiently small open disc. Then, the map $\psi\colon F\times D^m \rightarrow X, (y, z_1, \dots, z_m) \mapsto \phi^1_{z_1}(\dots(\phi^m_{z_m}(y)))$ gives a local trivialisation of $X$. Since $\mathcal{T}$ was chosen to be integrable, this trivialisation respects $\mathcal{T}$ in the sense that $d\psi(\mathcal{T}_{D^m}) = \mathcal{T}$. In particular, if $\psi_1, \psi_2$ are any two such trivialisations, then $d(\psi_2^{-1}\circ\psi_1)(\mathcal{T}_{D^m}) = \mathcal{T}_{D^m}$, i.e.\ $\psi_1, \psi_2$ differ by a locally constant transition function.
\end{Proof}
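As an immediate illustration of characterisation $(2)$: if the base $T$ is simply connected, then $\pi_1(T)$ is trivial and every locally constant fibre bundle over $T$ is globally trivial,
\begin{align*}
X \cong (\widetilde{T}\times F)/\pi_1(T) = T \times F.
\end{align*}
In other words, over a simply connected base a proper fibre bundle is locally constant if and only if it is a product.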
It is conjectured that in the situation of \cref{flatness_of_Albanese} much more can be said about the fibre of $\alpha$:
\begin{Conjecture}\emph{(Campana-Peternell, \cite{campanaPeternell_Conjecture})}
\noindent Every Fano manifold with a nef tangent bundle is homogeneous.\label{con:Campana-Peternell}
\end{Conjecture}
As is well-known, the group of holomorphic automorphisms of a compact complex manifold is always a complex Lie group and its Lie algebra may be identified with $H^0(F, \mathcal{T}_F)$. In particular, $F$ is homogeneous if and only if its tangent bundle is globally generated.
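For instance, for $F = \mathds{P}^n$ the global generation of the tangent bundle may be read off from the Euler sequence
\begin{align*}
0 \rightarrow \mathcal{O}_{\mathds{P}^n} \rightarrow \mathcal{O}_{\mathds{P}^n}(1)^{\oplus(n+1)} \rightarrow \mathcal{T}_{\mathds{P}^n} \rightarrow 0,
\end{align*}
which exhibits $\mathcal{T}_{\mathds{P}^n}$ as a quotient of the globally generated bundle $\mathcal{O}_{\mathds{P}^n}(1)^{\oplus(n+1)}$; the resulting global vector fields are exactly the infinitesimal automorphisms of the $\textmd{PGL}_{n+1}$-action.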
\cref{con:Campana-Peternell} has received attention from quite a number of authors and is by now verified for manifolds of dimension at most five by \cite{kanemitsu_Fano5foldsNefTangentBundle} (see also the introduction thereof for a short summary of contributions to this problem or alternatively \cite{munoz_SurveyCPConjecture} for a survey on the topic). In full generality, however, it has not even been proved that the tangent bundle must be semi ample. For now, we only have the following characterisation:
\begin{Lemma}
\noindent
Let $F$ be a Fano manifold with nef tangent bundle.
\begin{itemize}
\item[(1)] If the tangent bundle $\mathcal{T}_F$ is generated by global sections, then $\mathcal{T}_F$ is also big.
\item[(2)] If the tangent bundle $\mathcal{T}_F$ is big, then it is also semi ample (in the sense that $\mathcal{O}_{\mathds{P}(\mathcal{T}_F)}(1)$ is semi ample).
\end{itemize}\label{characterisation_Big/Semiample_Tangent_Bundle}
\end{Lemma}
\begin{Proof}
A proof of $(1)$ using the theory of canonical extensions may be found in \cite[Corollary 4.4.]{greb_CanonicalExtensions}. Alternatively, a more general argument is provided in \cite[Corollary 1.3.]{hsiao_BignessTangentBundleFlagVariety}.
The second statement seems to be a well-known consequence of the basepoint-free theorem, cf.\ \cite[Proposition 5.5.]{munoz_SurveyCPConjecture}.
\end{Proof}
In this sense, we will record the following weak version of \cref{con:Campana-Peternell}:
\begin{Conjecture}\emph{(weak Campana-Peternell conjecture)}
\noindent
If the tangent bundle of a Fano manifold is nef then it is also big.\label{weakCampanaPeternellConjecture}
\end{Conjecture}
\section{Canonical extensions of manifolds with nef tangent bundle}
\label{sec:canExtGeneralManifoldsWithNefTangent}
In this section we want to give a proof of \cref{IntroCanExtStein}. To this end, let $X$ be a compact Kähler manifold with nef tangent bundle. According to \cref{flatness_of_Albanese} there exists a finite étale cover $X'$ of $X$ whose Albanese map $\alpha\colon X' \rightarrow \Alb(X') =: T$ is a locally constant fibre bundle. Moreover, the typical fibre $F$ of $\alpha$ is (assuming the weak Campana-Peternell conjecture) a Fano manifold with big and nef tangent bundle. Now, in the extremal cases $X=T$ and $X = F$ the result is already known:
\subsection{Summary of known results}
\begin{Theorem}\emph{(Greb-Wong, \cite[Proposition 2.13.]{greb_CanonicalExtensions})}
\noindent
Let $T = \mathds{C}^q/\Gamma$ be a complex torus. Fix any Kähler form $\omega_T$ on $T$. Then the canonical extension of $T$ with respect to $\omega_T$ is a Stein manifold. In fact, there exists a biholomorphism
\begin{align*}
Z_{T, [\omega_T]} \cong (\mathds{C}^\times)^{2q}.
\end{align*}\label{canExtComplexTori}
\end{Theorem}
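For $q = 1$, i.e.\ for an elliptic curve $E$, this statement is particularly noteworthy: here $\Omega_E^1 \cong \mathcal{O}_E$, so $\mathcal{V}$ is the unique non-split extension of $\mathcal{O}_E$ by $\mathcal{O}_E$, and $Z_E$ is the complement of the canonical section in the associated ruled surface,
\begin{align*}
Z_{E, [\omega_E]} \cong \mathds{P}(\mathcal{V}^*)\setminus \mathds{P}(\mathcal{T}_E) \cong (\mathds{C}^\times)^2.
\end{align*}
Algebraically, this complement is precisely Serre's classical example of a quasi-projective surface which is Stein but not affine, so the biholomorphism with $(\mathds{C}^\times)^2$ cannot be algebraic. This fits nicely with \cref{con:GrebWongCanExt}: the trivial bundle $\mathcal{T}_E$ is nef but not big.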
The proof of \cref{canExtComplexTori} uses that on a torus any Kähler class contains a unique constant Kähler metric; for such a metric, the extension may be computed explicitly. Moreover,
\begin{Theorem}\emph{(Höring-Peternell, \cite[Theorem 1.2.]{HP_Stein_complements})}
\noindent
Let $F$ be a Fano manifold with big and nef tangent bundle. Then, any canonical extension of $F$ is affine and, hence, Stein.\label{canExtFanosBigandNefTangent}
\end{Theorem}
The proof of \cref{canExtFanosBigandNefTangent} uses some basic birational geometry.
\begin{Remark}
Assuming the Campana-Peternell conjecture, another proof of \cref{canExtFanosBigandNefTangent} is given in \cite[Proposition 2.23.]{greb_CanonicalExtensions}: Therein, the authors provide an explicit description of the canonical extension of a homogeneous Fano manifold $F$ from which it follows that $Z_F$ is affine. For concreteness, let us only make this explicit in case $F = \mathds{P}^n$. To this end, let us abbreviate $G:= \textmd{PGL}_{n+1} = \Aut(\mathds{P}^n)$. Then, we may identify $\mathds{P}^n = G/P$, where $P := \{ A \in G \mid Ae_1 = \lambda e_1 \text{ for some } \lambda \}$. Let $L = (\mathds{C}^\times \times \GL_n)/\mathds{C}^\times \subset P$ be the subgroup of block diagonal matrices (note that $L$ is a Levi subgroup of $P$). Then, the bundle $Z_{\mathds{P}^n}\rightarrow \mathds{P}^n$ may naturally be identified with $G/L \rightarrow G/P$.\label{rem:CanExtHomFanos}
\end{Remark}
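In the special case $n = 1$ this description becomes completely explicit. On $\mathds{P}^1$ the non-split extension $\mathcal{V}$ of $\mathcal{O}$ by $\Omega^1$ is, up to the scaling invariance of \cref{scaling_invariance_canExt}, the twisted Euler bundle $\mathcal{O}_{\mathds{P}^1}(-1)^{\oplus 2}$, so that
\begin{align*}
Z_{\mathds{P}^1} \cong \mathds{P}(\mathcal{V}^*)\setminus \mathds{P}(\mathcal{T}_{\mathds{P}^1}) \cong \left(\mathds{P}^1\times\mathds{P}^1\right)\setminus \Delta,
\end{align*}
the space of ordered pairs of distinct points on $\mathds{P}^1$, on which $\Aut(\mathds{P}^1)$ visibly acts transitively with stabiliser a maximal torus. In particular, $Z_{\mathds{P}^1}$ is affine, being the complement of the ample diagonal divisor $\Delta$.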
\subsection{The general case}
In this subsection we are going to prove that (in the notation at the beginning of this section) any canonical extension $Z_{X'}$ of $X'$ may be viewed in a natural way as a fibre bundle over a canonical extension $Z_T$ of $T$ and with fibre $Z_F$ a canonical extension of $F$. This will immediately imply that all canonical extensions of $X$ are Stein, thus partially confirming \cref{con:GrebWongCanExt}.
To explain the existence of the fibre bundle structure on $Z_X$ we need the following technical result which may be found in \cite{HP_Stein_complements}:
\begin{Proposition}\emph{(Höring-Peternell, \cite[Lemma 5.5]{HP_Stein_complements})}
\noindent
Let $(X,\omega_X)$ be a Kähler manifold. Assume that one may decompose $\mathcal{T}_X = \mathcal{E} \oplus \mathcal{F}$
into holomorphic sub bundles. Let $[\omega_X] = [\omega_{\mathcal{E}}] + [\omega_{\mathcal{F}}]$ be the induced decomposition in
\begin{align*}
\Ext1\left(\mathcal{O}_X, \Omega_X^1\right) = \Ext1\left(\mathcal{O}_X, \mathcal{E}^*\right) \oplus \Ext1\left(\mathcal{O}_X, \mathcal{F}^*\right).
\end{align*}
Then, there exists a natural isomorphism of affine bundles over $X$
\begin{align*}
Z_{[\omega_X]} \cong Z_{[\omega_{\mathcal{E}}]} \times_X Z_{[\omega_{\mathcal{F}}]}.
\label{splitting_canonical_extension}
\end{align*}
\end{Proposition}
\begin{Corollary}
Let $(X, \omega_X)$ be a compact Kähler manifold with nef tangent bundle. Assume the Albanese morphism $\alpha\colon X \rightarrow \Alb(X) =:T$ is a locally constant holomorphic fibre bundle. Then, there exists a natural isomorphism of affine bundles
\begin{align*}
Z_{\mathcal{T}_X, [\omega_X]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_X Z_{\alpha^*\mathcal{T}_T}.
\end{align*}
Here, by $[\omega_{X/T}]$ we denote the image of $[\omega_X]$ under the natural homomorphism
\begin{align*}
H^1\left(X, \Omega_X^1\right) \rightarrow H^1\left(X, \Omega_{X/T}^1\right).
\end{align*}\label{splitting_canonical_extension_2}
\end{Corollary}
\begin{Remark}
Within the statement of \cref{splitting_canonical_extension_2} above, we leave the extension class that $Z_{\alpha^*\mathcal{T}_T}$ is built from ambiguous on purpose. Indeed, the proof below will implicitly determine this class but the given description is not all that useful for us. Our next order of business will thus be to have a closer look at this class and to give a more explicit description of it.\label{rem:ClassDefiningCanExt}
\end{Remark}
\begin{Proofnlb}(of \Cref{splitting_canonical_extension_2})
\noindent
Since $\alpha$ is a locally constant bundle the short exact sequence $0 \rightarrow \mathcal{T}_{X/T} \rightarrow \mathcal{T}_{X} \rightarrow \alpha^*\mathcal{T}_{T}
\rightarrow 0$ admits a global holomorphic splitting (we may even assume that $\alpha^*\mathcal{T}_{T} \subseteq \mathcal{T}_{X}$ is integrable; see \cref{locallyConstantFibrations}). Hence,
\begin{align*}
Z_{\mathcal{T}_X, [\omega_X]}
\cong Z_{\mathcal{T}_{X/T}} \times_X Z_{\alpha^*\mathcal{T}_T}
\end{align*}
according to \cref{splitting_canonical_extension} above.
Here, the class defining the affine bundle $Z_{\mathcal{T}_{X/T}}$ is the image of $[\omega_X]$ under the induced map
\begin{align*}
\Ext1\left(\mathcal{O}_X, \Omega_X^1\right) \rightarrow \Ext1\left(\mathcal{O}_X, \Omega_{X/T}^1\right).
\end{align*}
Modulo the identification $\Ext1(\mathcal{O}_X, - ) = H^1(X, -)$ this is the claimed class.
\end{Proofnlb}
As explained in \cref{rem:ClassDefiningCanExt} our next goal is to give an explicit description of the cohomology class defining the extension $Z_{\alpha^*\mathcal{T}_T}$ in \cref{splitting_canonical_extension_2} above. To this end, we will require some auxiliary results.
\begin{Proposition}
Let $f\colon X \rightarrow T$ be a holomorphic submersion of relative dimension $m$ between compact Kähler manifolds. Let us denote by $F_t$ the fibres of $f$ and fix a Kähler form $\omega_X$ on $X$. Then, the function
\begin{align*}
\vol(F_t, \omega_X|_{F_t}) := \frac{1}{m!} \int_{F_t} \left(\omega_X|_{F_t}\right)^m
\end{align*}
is constant (i.e.\ does not depend on $t$).\label{volumeFibres}
\end{Proposition}
\begin{Proof}
Note that by definition
\begin{align*}
\vol(F_t, \omega_X|_{F_t}) = \frac{1}{m!} \hspace{0.1cm} f_* \left(\omega_X^m\right)\big|_t,
\end{align*}
where $f_*$ denotes the \emph{integration along the fibres} (cf.\ \cref{def:integrationAlongFibres}). In particular, since $f_*$ commutes with the exterior derivative (\cref{integrationAlongFibresProperties}) and since $\omega_X$ is $d$-closed ($X$ being Kähler) also the function $t \mapsto \vol(F_t)$ is $d$-closed, i.e.\ constant.
\end{Proof}
\begin{Corollary}
Let $f\colon X \rightarrow T$ be a holomorphic submersion between compact Kähler manifolds. Suppose that every fibre $F_t$ of $f$ is Fano and denote $m := \dim F_t$. Fix a Kähler form $\omega_X$ on $X$ and recall that by \cref{volumeFibres} the volume $\vol(F_t)$ of any fibre is the same. Then, the composition
\begin{align*}
P\colon H^q\left(X, f^*\Omega_{T}^p\right) \overset{i_*}{\longrightarrow} H^q\left(X, \Omega_X^p\right) \xrightarrow{\wedge\frac{\omega^m}{m!}} H^{q+m}\left(X, \Omega_X^{p+m}\right) \overset{f_*}{\longrightarrow} H^q\left(T, \Omega_T^p\right)
\end{align*}
is an isomorphism for all $p,q$. In fact, the inverse is given (up to a factor of $\frac{1}{\vol(F)}$) by the natural map
\begin{align*}
f^*\colon H^q\left(T, \Omega_T^p\right) \rightarrow H^q\left(X, f^*\Omega_{T}^p\right).
\end{align*}\label{integration_along_fibres}
\end{Corollary}
\begin{Proof}
First, let us prove that $P\circ f^* = \vol(F)\cdot \id$ using Dolbeault representatives: Fix any integers $p,q$ and any closed differentiable $(p,q)$-form $\eta$ on $T$. Using the properties of the push forward we compute
\begin{align}
P(f^*([\eta]))
&=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=: \frac{1}{m!} \hspace{0.1cm} \left[f_*\big(f^*\eta\wedge \omega_X^m \big)\right] \nonumber \\
&\overset{\textmd{\cref{integrationAlongFibresProperties}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=}
\frac{1}{m!} \hspace{0.1cm} \left[ \eta \wedge f_*(\omega_X^m)\right] \nonumber \\
&=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=: [\eta] \cdot \vol(F) \label{eqs:ClassDefiningCanExt}
\end{align}
so that indeed $P\circ f^* = \vol(F)\cdot \id$. Since both $H^q(T, \Omega_T^p)$ and $H^q(X, f^*\Omega_{T}^p)$ are finite dimensional vector spaces, to complete the proof of our result it suffices to prove that $f^*$ is an isomorphism. Then, (modulo a scalar factor) $P$ will automatically be its inverse and, hence, an isomorphism itself.
But indeed, since every fibre $F_t$ is Fano the relative Kodaira vanishing theorem yields
\begin{align*}
R^jf_*\mathcal{O}_X = R^jf_*\mathcal{O}_X(-K_X+K_X) = 0, \quad \forall j>0.
\end{align*}
It follows from the projection formula that also $R^jf_*f^*\Omega_T^p = \Omega_{T}^p\otimes R^jf_*\mathcal{O}_X$ vanishes for all $j>0$ and so $f^*\colon H^q(T, \Omega_T^p) \rightarrow H^q(X, f^*\Omega_{T}^p)$ is an isomorphism, as follows from the Leray spectral sequence. Combining this with \cref{eqs:ClassDefiningCanExt} we are done.
\end{Proof}
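As a trivial consistency check, consider the case $p = q = 0$: a constant function $c$ on $T$ pulls back to the constant function $c$ on $X$, and
\begin{align*}
P(c) = f_*\left(c\cdot\frac{\omega_X^m}{m!}\right) = c\cdot \vol(F),
\end{align*}
so that $P\circ f^* = \vol(F)\cdot\id$ indeed holds on $H^0(T, \mathcal{O}_T)$.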
\begin{Proposition}
Let $f\colon X \rightarrow T$ be a holomorphic submersion of relative dimension $m$ between compact Kähler manifolds. Assume that the natural short exact sequence
\begin{align*}
0 \rightarrow f^*\Omega_{T}^1 \rightarrow \Omega_X^1 \rightarrow \Omega_{X/T}^1 \rightarrow 0
\end{align*}
admits a global holomorphic splitting $s\colon \Omega_X^1\rightarrow f^*\Omega_{T}^1$ (recall that this is always true provided that $f$ is a flat fibre bundle).
Fix a Kähler form $\omega_X$ on $X$, consider the decomposition
\begin{align*}
[\omega_X] = [\omega_{X/T}] + a_T \in H^1\left(X, \Omega_X^1\right) = H^1\left(X, \Omega_{X/T}^1\right) \oplus H^1\left(X, f^*\Omega_{T}^1\right)
\end{align*}
according to the splitting $s$ (i.e.\ $a_T = H^1(s)([\omega_X])$) and let $\omega_T := f_*(\omega_X^{m+1})$ denote the Kähler form on $T$ obtained from $\omega_X$ by integration along the fibres. Then,
\begin{align}
a_T = \frac{1}{(m+1)!\cdot \vol(F)} \cdot [f^*\omega_T] \in H^1\left(X, \Omega_X^1\right).\label{eqs:ClassDefiningCanExt_2}
\end{align}\label{determination_of_pushdown_class}
\end{Proposition}
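Before putting \cref{determination_of_pushdown_class} to use, it is instructive to verify \cref{eqs:ClassDefiningCanExt_2} in the simplest possible case, namely for a product: take $X = T\times F$ with split Kähler form $\omega_X = pr_T^*\omega_T' + pr_F^*\omega_F$ and the obvious splitting $s$, so that $a_T = [pr_T^*\omega_T']$. Expanding $\omega_X^{m+1}$ binomially, the only term of fibre degree $2m$ is $\binom{m+1}{1}\, pr_T^*\omega_T' \wedge pr_F^*\omega_F^{m}$, whence
\begin{align*}
f_*\left(\omega_X^{m+1}\right)
= (m+1)\cdot \left(\int_F \omega_F^m\right)\cdot \omega_T'
= (m+1)!\cdot \vol(F)\cdot \omega_T'.
\end{align*}
Thus $\omega_T = (m+1)!\cdot\vol(F)\cdot\omega_T'$ and indeed $a_T = \frac{1}{(m+1)!\cdot \vol(F)}\cdot[f^*\omega_T]$, as claimed.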
\begin{Corollary}
Let $(X, \omega_X)$ be a compact Kähler manifold with nef tangent bundle and assume that its Albanese $\alpha\colon X \rightarrow \Alb(X) =:T$ is a locally constant holomorphic fibre bundle.
Then, there exists a natural isomorphism of affine bundles
\begin{align*}
Z_{\mathcal{T}_X, [\omega_X]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_X Z_{\alpha^*\mathcal{T}_T, [\alpha^*\omega_T]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_T Z_{\mathcal{T}_T, [\omega_T]}.
\end{align*}
Here, by $[\omega_{X/T}]$ we denote the image of $[\omega_X]$ under the natural homomorphism
\begin{align*}
H^1\left(X, \Omega_X^1\right) \rightarrow H^1\left(X, \Omega_{X/T}^1\right)
\end{align*}
and we denote $\omega_T :=\alpha_*(\omega_X^{m+1})$, where $m:=\dim F$.\label{fibreBundleStructureOnZX}
\end{Corollary}
\begin{Proof}
Since $\alpha$ is locally constant the short exact sequence $0 \rightarrow \alpha^*\Omega_{T}^1 \rightarrow \Omega_X^1 \rightarrow \Omega_{X/T}^1 \rightarrow 0$ splits. According to \cref{determination_of_pushdown_class} above, the decomposition of the cohomology class $[\omega_X]$ according to this splitting is given by
\begin{align*}
\left[\omega_X\right] = \left[\omega_{X/T}\right] + \lambda \cdot \left[\alpha^*\omega_T\right] \in \Ext1\left(\mathcal{O}_X, \Omega_X^1\right) = \Ext1\left(\mathcal{O}_X, \Omega_{X/T}^1\right)\oplus \Ext1\left(\mathcal{O}_X, \alpha^*\Omega_{T}^1\right),
\end{align*}
where $\lambda := \frac{1}{(m+1)!\cdot \vol(F)} >0$ is some positive real number. In effect, an application of \cref{splitting_canonical_extension} yields
\begin{align*}
Z_{\mathcal{T}_X, [\omega_X]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_X Z_{ \alpha^*\mathcal{T}_T, \lambda \cdot [\alpha^*\omega_T]}.
\end{align*}
Since extensions only depend on their defining cohomology class up to scaling by \cref{scaling_invariance_canExt} it follows that
\begin{align*}
Z_{\mathcal{T}_X, [\omega_X]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_X Z_{ \alpha^*\mathcal{T}_T, [\alpha^*\omega_T]}
\cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_T Z_{\mathcal{T}_T, [\omega_T]}.
\end{align*}
Here, in the last step we used that we know from \cref{pullback_canonical_extensions} that there exists a natural identification $Z_{ \alpha^*\mathcal{T}_T, [\alpha^*\omega_T]} \cong Z_{\mathcal{T}_T, [\omega_T]}\times_T X$. This concludes the proof.
\end{Proof}
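As a sanity check, consider once more a product $X = T\times F$ endowed with a split Kähler form $\omega_X = pr_T^*\omega_T' + pr_F^*\omega_F$. Then $[\omega_{X/T}] = pr_F^*[\omega_F]$, so \cref{pullback_canonical_extensions} identifies $Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \cong T \times Z_{F, [\omega_F]}$, and \cref{fibreBundleStructureOnZX} (combined with the scaling invariance of \cref{scaling_invariance_canExt}) reduces to the expected compatibility with products:
\begin{align*}
Z_{T\times F} \cong Z_{F, [\omega_F]} \times Z_{T, [\omega_T']}.
\end{align*}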
\begin{Proof}[of \cref{determination_of_pushdown_class}]
\noindent
We will verify \cref{eqs:ClassDefiningCanExt_2} by an explicit calculation using Dolbeault representatives. To this end, recall that $s\colon \Omega_X^1\rightarrow f^*\Omega_{T}^1$ induces a map of sections $s^{(0,1)}\colon\mathcal{A}^{0,1}( \Omega_X^1) \rightarrow \mathcal{A}^{0,1}(f^*\Omega_{T}^1)$ and the class
\begin{align}
i_*(a_T) = i_*\left(H^1(s)\big([\omega_X]\big)\right) \in H^1\left(X, f^*\Omega_T^1\right) \overset{i_*}{\hookrightarrow} H^1\left(X, \Omega^1_X\right) \label{eqs:ClassDefiningCanExt_2.5}
\end{align}
is represented by the form $\zeta := i_*(s^{(0,1)}(\omega_X))$. Below, we will show that
\begin{align}
f_*(\zeta\wedge\omega_X^m) = \frac{f_*(\omega_X^{m+1})}{m+1}. \label{eqs:ClassDefiningCanExt_3}
\end{align}
This will immediately yield the result because assuming \cref{eqs:ClassDefiningCanExt_3} we compute
\begin{align}
i_*(a_T) =: \left[\zeta\right]
&\overset{\textmd{\cref{integration_along_fibres}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} \frac{1}{\vol(F)} \cdot i_*\left[f^*f_*\left(\zeta\wedge\frac{\omega_X^m}{m!}\right)\right] \nonumber \\
&\overset{\textmd{\cref{eqs:ClassDefiningCanExt_3}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} \frac{1}{\vol(F)} \cdot \frac{1}{(m+1)!} \cdot i_*\left[f^*f_*\big(\omega_X^{m+1}\big)\right]\nonumber \\
&=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=: \frac{1}{\vol(F) \cdot (m+1)!} \cdot i_*\left[f^*\omega_T\right].\label{eqs:ClassDefiningCanExt_3.5}
\end{align}
which, using that by \cref{integration_along_fibres} $i_*$ is injective, is the equation to prove. In conclusion, it remains to verify \cref{eqs:ClassDefiningCanExt_3}. To this end, fix a point $t\in T $ and vectors $v \in T_t^{(1,0)}T$, $w \in T_t^{(0,1)}T$. Let $\widetilde{V} := s^*(v), \widetilde{W} := s^*(w)$ be the differentiable vector fields along $F_t$ induced by the dual splitting $s^*\colon f^*\mathcal{T}_T\hookrightarrow \mathcal{T}_X$. Then, $\widetilde{V}, \widetilde{W}$ are of type $(1,0)$ (respectively $(0,1)$) and lift $v, w$, i.e.\
\begin{align*}
df(\widetilde{V}|_x) = v, \quad df(\widetilde{W}|_x) = w, \quad \forall x\in F_t.
\end{align*}
By definition it holds that
\begin{align}
\left(f_*\left( \zeta\wedge\omega_X^m \right)\right) (v, w) &= \int_{F_t} \iota_{\widetilde{V}, \widetilde{W}}\left(\zeta\wedge\omega_X^m\right),\label{eqs:ClassDefiningCanExt_4}\\
\left(f_*\omega_X^{m+1}\right) (v, w) &= \int_{F_t} \iota_{\widetilde{V}, \widetilde{W}}\left( \omega_X^{m+1} \right)\label{eqs:ClassDefiningCanExt_5}
\end{align}
and we need to prove the equality of both expressions (modulo a scalar factor). Clearly it suffices to prove equality of the integrands (as differential forms) and this is what we will do: Fix a point $x\in F_t$ and denote $\widetilde{v} := \widetilde{V}|_x, \widetilde{w} := \widetilde{W}|_x$.
\begin{Blank}
\emph{Step 1: For all tangent vectors $v'\in T^{1,0}_xX, w'\in T^{0,1}_xX$ it holds that}
\begin{align*}
\zeta(v', w') \overset{\textmd{\cref{eqs:ClassDefiningCanExt_2.5}}}{:=\joinrel=\joinrel=\joinrel=\joinrel=}
i_*\left(s^{(0,1)}\left(\omega_X\right)\right)(v', w')
= \omega_X\left(s^*\left(df(v')\right), w'\right)
\end{align*}
\end{Blank}
Indeed, if more generally $\phi\colon \mathcal{E} \rightarrow \mathcal{F}$ is any morphism between holomorphic vector bundles, then the induced map $\phi^{(0,1)}\colon\mathcal{A}^{0,1}(\mathcal{E}) \rightarrow \mathcal{A}^{0,1}(\mathcal{F})$ is determined by the rule $\phi^{(0,1)}(\sigma\otimes d\bar{z}) = \phi(\sigma) \otimes d\bar{z}$. Accordingly, if $(z^j)$ are some local coordinates centred at $x\in F_t$ and if with respect to these coordinates $\omega_X = \sum h_{k, \ell}\hspace{0.1cm} dz^k\wedge d\bar{z}^{\ell}$, then $s^{(0,1)}(\omega_X)$ is locally given by the expression
\begin{align*}
s^{(0,1)}(\omega_X) = s^{(0,1)}\left(\sum h_{k, \ell}\hspace{0.1cm} dz^k\wedge d\bar{z}^{\ell}\right) = \sum h_{k, \ell} \hspace{0.1cm} s\left(dz^k\right) \otimes d\bar{z}^{\ell}.
\end{align*}
Similarly, $i_*\colon \mathcal{A}^{0,1}(f^*\Omega_T^1) \hookrightarrow \mathcal{A}^{0,1}(\Omega_X^1)$ is by construction the map induced by the bundle morphism $(df)^*\colon f^*\Omega^1_T \hookrightarrow \Omega_X^1$. In other words,
\begin{align*}
i_*\left(s^{(0,1)}\left(\omega_X\right)\right)(v', w')
: &= \left( \sum h_{k, \ell} \hspace{0.1cm} df^*\big(s\left(dz^k\right)\big) \otimes d\bar{z}^{\ell} \right)(v', w')\\
&= \sum h_{k, \ell } \hspace{0.1cm} \big((df^*\circ s)(dz^k)\big)(v') \otimes d\bar{z}^{\ell}(w') \\
&= \sum h_{k, \ell } \hspace{0.1cm} dz^k\big(s^*(df(v'))\big) \otimes d\bar{z}^{\ell}(w') \\
&= \left( \sum h_{k, \ell} \hspace{0.1cm} dz^k \otimes d\bar{z}^{\ell} \right)\left(s^*(df(v')), w'\right)
= \omega_X\left(s^*(df(v')), w'\right).
\end{align*}
\begin{Blank}
\emph{Step 2: The following identity holds true:}
\begin{align*}
\iota_{\widetilde{v}, \widetilde{w}}(\zeta\wedge \omega_X^m)\big|_{F_t} = \big(\omega_X(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m - \iota_{\widetilde{v}}(\omega_X) \wedge \iota_{\widetilde{w}}(\omega_X^m) \big)\big|_{F_t}.
\end{align*}
\end{Blank}
Using the formula in \cref{formulaeWedgeProduct} regarding contractions by vectors of wedge products we compute
\begin{align}
\iota_{\widetilde{w}}\iota_{\widetilde{v}}(\zeta\wedge \omega_X^m)
&= \iota_{\widetilde{w}}\left( \iota_{\widetilde{v}}(\zeta)\wedge \omega_X^m + (-1)^2 \hspace{0.1cm} \zeta\wedge \iota_{\widetilde{v}}(\omega_X^m) \right)\nonumber\\
&= \zeta(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m + (-1) \hspace{0.1cm} \iota_{\widetilde{v}}(\zeta)\wedge \iota_{\widetilde{w}}(\omega_X^m)\nonumber\\
&\qquad + (-1)^2\hspace{0.1cm} \iota_{\widetilde{w}}(\zeta)\wedge \iota_{\widetilde{v}}(\omega_X^m)
+ (-1)^4\hspace{0.1cm} \zeta \wedge \iota_{\widetilde{v}, \widetilde{w}}(\omega_X^m).\label{eqs:ClassDefiningCanExt_10}
\end{align}
Now, according to \emph{Step 1} it holds that
\begin{align}
\zeta(v', -) = \omega_X(s^*(df(v')), -), \quad \forall v'\in T^{1,0}_xX.\label{eqs:ClassDefiningCanExt_11}
\end{align}
In particular, if $v'$ is tangent along the fibres, then $df(v') = 0$ and so $\iota_{v'}\zeta = 0$. This immediately implies that
\begin{align}
\iota_{\widetilde{w}}(\zeta)|_{F_t} = \zeta|_{F_t} = 0.\label{eqs:ClassDefiningCanExt_12}
\end{align}
On the other hand, consider the case $v' = \widetilde{v}$ in \cref{eqs:ClassDefiningCanExt_11} above. Then,
\begin{align*}
s^*(df(\widetilde{v})) \overset{df(\widetilde{v}) = v}{=\joinrel=\joinrel=\joinrel=\joinrel=} s^*(v) =: \widetilde{v}
\end{align*}
by definition of $\widetilde{v}$. In view of \cref{eqs:ClassDefiningCanExt_11} this implies that
\begin{align}
\zeta(\widetilde{v}, \widetilde{w}) = \omega_X(\widetilde{v}, \widetilde{w}), \quad \iota_{\widetilde{v}}(\zeta) = \iota_{\widetilde{v}}(\omega_X).\label{eqs:ClassDefiningCanExt_13}
\end{align}
Substituting the terms in \cref{eqs:ClassDefiningCanExt_10} above using \cref{eqs:ClassDefiningCanExt_12} and \cref{eqs:ClassDefiningCanExt_13} we find
\begin{align*}
\iota_{\widetilde{v}, \widetilde{w}}(\zeta\wedge \omega_X^m)\big|_{F_t} = \big(\omega_X(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m - \iota_{\widetilde{v}}(\omega_X) \wedge \iota_{\widetilde{w}}(\omega_X^m) + 0 \big)\big|_{F_t},
\end{align*}
which is the identity in question.
\begin{Blank}
\emph{Step 3: It holds that $\iota_{\widetilde{v}, \widetilde{w}}(\omega_X^{m+1}) =(m+1) \left(\omega_X(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m - \iota_{\widetilde{v}}(\omega_X) \wedge \iota_{\widetilde{w}}(\omega_X^m) \right) $.}
\end{Blank}
Using again \cref{formulaeWedgeProduct} we compute
\begin{align*}
\iota_{\widetilde{v}, \widetilde{w}}(\omega_X^{m+1})
&\overset{\textmd{\cref{formulaeWedgeProduct}(iii)}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} (m+1) \cdot \omega_X(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m \\
&\qquad \qquad \qquad \qquad - m(m+1) \cdot \iota_{\widetilde{v}}(\omega_X)\wedge \iota_{\widetilde{w}}(\omega_X)\wedge \omega_X^{m-1}\\
&\overset{\textmd{\cref{formulaeWedgeProduct}(ii)}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} (m+1) \cdot \left( \omega_X(\widetilde{v}, \widetilde{w}) \cdot \omega_X^m - \iota_{\widetilde{v}}(\omega_X) \wedge \iota_{\widetilde{w}}(\omega_X^m) \right).
\end{align*}
This finishes the proof of \emph{Step 3}.
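Alternatively, for $m = 1$ the identity of \emph{Step 3} may be checked by hand: since $\omega_X$ is a $2$-form,
\begin{align*}
\iota_{\widetilde{w}}\iota_{\widetilde{v}}(\omega_X^{2})
= \iota_{\widetilde{w}}\big(2\, \iota_{\widetilde{v}}(\omega_X)\wedge \omega_X\big)
= 2\big( \omega_X(\widetilde{v}, \widetilde{w})\cdot \omega_X - \iota_{\widetilde{v}}(\omega_X)\wedge \iota_{\widetilde{w}}(\omega_X)\big),
\end{align*}
where the sign in the second step comes from moving $\iota_{\widetilde{w}}$ past the $1$-form $\iota_{\widetilde{v}}(\omega_X)$. This is the asserted formula with $m+1 = 2$.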
\begin{Blank}
\emph{Step 4: Conclusion.}
\end{Blank}
Combining the results of \emph{Step 2} and \emph{Step 3} we find that
\begin{align*}
\iota_{\widetilde{v}, \widetilde{w}}(\zeta\wedge \omega_X^m)\big|_{F_t} = \frac{1}{m+1} \cdot \iota_{\widetilde{v}, \widetilde{w}}(\omega_X^{m+1})\big|_{F_t}.
\end{align*}
Thus, the integrands in \cref{eqs:ClassDefiningCanExt_4} and \cref{eqs:ClassDefiningCanExt_5} above agree (up to scaling) and, hence,
\begin{align*}
f_*\left( \zeta\wedge\omega_X^m\right) (v, w)
= \frac{(f_*\omega_X^{m+1})(v, w)}{m+1}, \quad \forall v \in T^{(1,0)}T, \forall w \in T^{(0,1)}T.
\end{align*}
This proves \cref{eqs:ClassDefiningCanExt_3} and, as discussed above in \cref{eqs:ClassDefiningCanExt_3.5}, the result immediately follows.
\end{Proof}
\cref{fibreBundleStructureOnZX} yields a splitting $Z_X \cong Z_{X/T} \times_T Z_T$. Our next goal is to prove that the induced map $Z_X \rightarrow Z_T$ makes $Z_X$ into a holomorphic fibre bundle with typical fibre $Z_F$. To this end, we first need to take a closer look at $Z_F$:
\begin{Proposition}
Let $(F, \omega_F)$ be a compact Kähler manifold and denote $G := \Aut^0(F)$. It is well-known that $G$ is a complex Lie group (see for example \cite[Section 2.3.]{akhiezer_AutsOfComplexMfds}). Moreover,
\begin{itemize}
\item[(1)] the natural action of $G$ on $H^*(F, \mathds{R})$ is trivial.
\item[(2)] If $H^1(F, \mathds{R}) = 0$, then the action of $G$ on $F$ extends naturally to an action by automorphisms of affine bundles on $Z_{[\omega_F ]}$.
\end{itemize}
\label{lifting_action_canonical_extension}
\end{Proposition}
\begin{Proof}
Regarding the first statement, since $G$ is a Lie group, $G=\Aut^0(F)$ is not only the connected component of the identity in $\Aut(F)$ but also the path-connected component. Thus, for any $g \in G$ there exists a (smooth) path from $\id_F$ to $g$ in $G$. But such a path is nothing but a (smooth) homotopy between $\id_F$ and $g$, i.e.\ every map in $G$ is homotopic to the identity. In particular, all of them induce the identity map on de Rham cohomology.
For the second statement, note that any element $g \in G$ naturally induces an isomorphism of affine bundles
\begin{align*}
g\colon Z_{[\omega_F]} \rightarrow g^*Z_{[\omega_F]} = Z_{[g^*\omega_F]}.
\end{align*}
Since the action of $G$ on $H^*(F, \mathds{R})$ is trivial by item $(1)$, in particular $[g^*\omega_F] = [\omega_F]$ for all $g\in G$. Hence, there exists \emph{an} isomorphism of affine bundles $Z_{[g^*\omega_F]} \cong Z_{[\omega_F]}$. We claim that, in fact, there exists only one such isomorphism. In particular, we may identify $Z_{[g^*\omega_F]}$ and $Z_{[\omega_F]}$ in a natural way and so the action of $G$ on $F$ lifts to $Z_F$ as required.
Regarding the claim, by construction any isomorphism as above is induced by an isomorphism of extensions or, in other words, by a commutative diagram as below:
\begin{center}
\includegraphics{./Diagramme/Iso_of_Extensions.pdf}
\end{center}
It is now easily verified by a diagram chase that any morphism $\phi$ making the above diagram commute is of the form $\phi = \id + p \cdot \eta$, where
\begin{align*}
\eta \in \Hom(\mathcal{O}_F, \Omega_F^1) = H^0(F, \Omega_F^1),
\end{align*}
where, as before, $V\overset{p}{\rightarrow} \mathcal{O}_X$ denotes the projection. But $2\dim_\mathds{C} H^0(F, \Omega^1_F) = \dim_\mathds{R} H^1(F, \mathds{R}) = 0$ by the Hodge decomposition. Thus, there is only one isomorphism of affine bundles $Z_{[g^*\omega_F]} \cong Z_{[\omega_F]}$ and we are done.
\end{Proof}
\begin{Lemma}
Let $f\colon X \rightarrow T$ be a holomorphic fibre bundle with structure group $G$ and with typical fibre $F$. Suppose that $X$ and $T$ are compact Kähler and fix a Kähler metric $\omega_X$ on $X$. Suppose moreover that $G\subseteq\Aut^0(F)$ and that $H^1(F, \mathds{C}) = 0$. Then, also
\begin{align*}
f\circ p \colon Z_{X/T} := Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]}\rightarrow X \rightarrow T
\end{align*}
is a holomorphic fibre bundle. Its typical fibre is $Z_{\mathcal{T}_F, [\omega_X|_F]}$ and the structure group may be chosen to be $G$.\label{extension=fibre_bundle}
\end{Lemma}
Note that $G$ indeed acts on $Z_{\mathcal{T}_F, [\omega_X|_F]}$ by \cref{lifting_action_canonical_extension} so that the assertion about the structure group of the bundle makes sense.
\begin{Proof}
Since both $f\colon X\rightarrow T$ and $p\colon Z_{X/T} \rightarrow X$ are holomorphic fibre bundles, $f\circ p$ is at least a surjective holomorphic submersion. Moreover, it follows from the functoriality of the construction of $Z_{-}$ (see \cref{pullback_canonical_extensions}) that the fibre of $f\circ p$ over $t \in T$ is given by
\begin{align*}
(f\circ p)^{-1}(t) = p^{-1}(F_t) = Z_{X/T}\times_X F_t \overset{\textmd{\cref{pullback_canonical_extensions}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} Z_{\mathcal{T}_{X/T}|_{F_t}, \left[\omega_X|_{F_t}\right]} = Z_{\mathcal{T}_{F_t}, [\omega_X|_{F_t}]}.
\end{align*}
Now, fix $t\in T$, denote $F := f^{-1}(t)$ and choose a sufficiently small open polydisc $t\in U\subset T$ so that $f^{-1}(U) \cong U\times F$ is trivial. We want to show that there exists an isomorphism of fibre bundles
\begin{align}
Z_{X/T}|_U \cong U \times Z_{\mathcal{T}_{F}, [\omega_X|_{F}]}\label{eqs:BundleStructureCanExt_2}
\end{align}
respecting the affine bundle structure on both sides. Indeed, since $U$ is a polydisc it holds that $H^j(U, \mathds{C}) = 0$ for all $j>0$. Thus, according to the classical Künneth formula the map
\begin{align*}
pr_F^*\colon H^*(F, \mathds{C}) \rightarrow H^*(U\times F, \hspace{0.1cm} \mathds{C})
\end{align*}
is an isomorphism. Note that an inverse is clearly provided by the restriction map
\begin{align*}
\cdot|_{ \{ t \} \times F}\colon H^*(U\times F, \hspace{0.1cm} \mathds{C}) \rightarrow H^*(F, \mathds{C}).
\end{align*}
In particular, we find that
\begin{align}
[\omega_X|_{U\times F}] = pr_{F}^*[\omega_X|_{F}].\label{eqs:BundleStructureCanExt}
\end{align}
Using again the functoriality of extensions and the fact that $\mathcal{T}_{U\times F/U} = pr_F^*\mathcal{T}_F$ we compute
\begin{align*}
\left. Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]}\right|_U
= Z_{\mathcal{T}_{U\times F/U}, [\omega_{X/T}]}
&\overset{\textmd{\cref{eqs:BundleStructureCanExt}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} Z_{pr_F^*\mathcal{T}_F, pr_F^*[\omega_F]}\\ &\overset{\textmd{\cref{pullback_canonical_extensions}}}{=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=\joinrel=} pr_F^*Z_{\mathcal{T}_F, [\omega_F]} := U \times Z_{\mathcal{T}_F, [\omega_F]}.
\end{align*}
This proves \cref{eqs:BundleStructureCanExt_2} and, hence, that $f\circ p$ is a holomorphic fibre bundle with fibre $Z_F$.
The assertion about the structure group being $G$ is clear, because we already saw as part of the proof of \cref{lifting_action_canonical_extension} that given any $g\in G$, there is one and only one identification of $Z_F$ and $g^*Z_F$ as affine bundles. Hence, both $f\colon X\rightarrow T$ and $f\circ p\colon Z_{X/T} \rightarrow T$ are constructed using the same transition functions.
\end{Proof}
\begin{Remark}
Record for later reference that both the bundles $f\colon X\rightarrow T$ and $f\circ p\colon Z_{X/T} \rightarrow T$ are constructed using the same transition functions. In particular, the former is locally constant if and only if the latter is.\label{rem:CanExtFibreBunldeFlat}
\end{Remark}
\begin{Corollary}
Let $f\colon X \rightarrow T$ be a holomorphic fibre bundle. Assume that $X$ and $T$ are compact Kähler, fix a Kähler form $\omega_X$ on $X$ and suppose that the typical fibre $F$ of $f$ is a Fano manifold. Suppose moreover that the structure group $G$ of $f$ is contained in $\Aut^0(F)$ and that the short exact sequence
\begin{align*}
0 \rightarrow \mathcal{T}_{X/T} \rightarrow \mathcal{T}_{X} \rightarrow f^*\mathcal{T}_{T}
\rightarrow 0
\end{align*}
admits a global holomorphic splitting (which is satisfied if for example $f$ is locally constant).
Then, there exists an isomorphism of affine bundles
\begin{align}
Z_{\mathcal{T}_X, [\omega_X]} \cong Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \times_T Z_{\mathcal{T}_T, [\omega_T]}\label{eqs:SplittingCanExt}.
\end{align}
Here, $\omega_T := f_*(\omega_X^{m+1})$ is the Kähler form on $T$ obtained from $\omega_X$ by integration along the fibres. Moreover, the projection map
\begin{align*}
\bar{f}\colon Z_{\mathcal{T}_X, [\omega_X]} \rightarrow Z_{\mathcal{T}_T, [\omega_T]}
\end{align*}
makes $Z_X$ into a (flat if $f$ is flat) holomorphic fibre bundle over $Z_T$ with fibre $Z_{F, [\omega_X|_F]}$ and structure group $G$.\label{canExtFibreBundleSTructure}
\end{Corollary}
\begin{Proof}
First of all, \cref{eqs:SplittingCanExt} has already been verified in \cref{fibreBundleStructureOnZX}. Regarding the second assertion, note that $H^1(F, \mathds{C}) = 0$ as $F$ is Fano. Thus, \cref{extension=fibre_bundle} above applies and yields that
\begin{align*}
Z_{\mathcal{T}_{X/T}, [\omega_{X/T}]} \rightarrow T
\end{align*}
is a (flat; see \cref{rem:CanExtFibreBunldeFlat}) holomorphic fibre bundle with structure group $G$ and fibre $Z_F$. But \cref{eqs:SplittingCanExt} just says that
\begin{align*}
\bar{f}\colon Z_{X, [\omega_X]}
\rightarrow Z_{T, [\omega_T]}
\end{align*}
is the pull back along $Z_T \rightarrow T$ of the bundle $Z_{X/T} \rightarrow T$. Hence, along with $Z_{X/T} \rightarrow T$ also $\bar{f}$ is a (flat) holomorphic fibre bundle with structure group $G$ and fibre $Z_F$.
\end{Proof}
The following trick may be used to show that the condition $G \subseteq\Aut^0(F)$ in \cref{canExtFibreBundleSTructure} above is essentially superfluous.
\begin{Proposition}
Let $f\colon X \rightarrow T$ be a holomorphic fibre bundle with typical fibre $F$, where both $X$ and $T$ are compact complex manifolds. Suppose that the group $\Aut(F)/\Aut^0(F)$ is finite (by \cite[Corollary 2.17.]{brion_AutomorphismGroups} this is satisfied for example if $F$ is Fano). Then, there exists a finite étale cover $T'\rightarrow T$ such that the structure group of the holomorphic fibre bundle $X\times_T T' \rightarrow T'$ may be chosen to be contained in $\Aut^0(F)$.\label{covering_trick}
\end{Proposition}
\begin{Proof}
Let us abbreviate $G:= \Aut(F)$ and $G^0:=\Aut^0(F)$. Since $G =\Aut(F)$ acts effectively on $F$, there exists a unique holomorphic principal $G$-bundle $\mathcal{G}\overset{\pi}{\rightarrow} T$ such that $X\overset{f}{\rightarrow}T$ is the associated bundle with typical fibre $F$. Then,
\begin{align*}
T' := \mathcal{G}/G^0\rightarrow T
\end{align*}
is a finite étale cover of $T$ (since $G/G^0$ is finite by assumption) and by construction the structure group of the principal $G$-bundle $\mathcal{G}\times_T T' \rightarrow T'$ may be reduced to $G^0$. In effect, the same is true of the associated bundle $X\times_T T' \rightarrow T'$ and so we are done.
\end{Proof}
We are now finally ready to prove the main result of this chapter:
\begin{Theorem}
Let $(X, \omega_X)$ be a compact Kähler manifold with nef tangent bundle. If the weak Campana-Peternell conjecture \cref{weakCampanaPeternellConjecture} holds true, then the canonical extension
\begin{align*}
Z_{X, [\omega_X]}
\end{align*}
is a Stein manifold.
\end{Theorem}
\begin{Proof}
According to \cref{flatness_of_Albanese} there exists a finite étale cover $\pi\colon X'\rightarrow X$ such that the Albanese $\alpha\colon X' \rightarrow \Alb(X') =: T$ is a locally constant holomorphic fibre bundle. Its fibres are Fano manifolds with nef (and, hence, assuming \cref{weakCampanaPeternellConjecture}, also big) tangent bundle. Possibly replacing $X'$ by another finite étale cover we may moreover assume by \cref{covering_trick} above that the structure group $G$ of $\alpha$ is contained in $\Aut^0(F)$. But in this situation \cref{canExtFibreBundleSTructure} applies to the compact Kähler manifold $(X', \pi^*\omega_X)$ and shows that there exists a natural map
\begin{align}
\bar{\alpha}\colon Z_{X', [\pi^*\omega_X]} \rightarrow Z_{T, [\omega_T]}\label{eqs:CanExtKahlerNefTangent}
\end{align}
making $Z_{X'}$ into a flat holomorphic fibre bundle with structure group $G\subseteq \Aut^0(F)$ and fibre
\begin{align*}
Z_{F, [\pi^*\omega_X|_F]}.
\end{align*}
Here, $\omega_T$ in \cref{eqs:CanExtKahlerNefTangent} above is some (explicitly determined) Kähler form on $T$. Note that by \cref{lifting_action_canonical_extension} $\Aut^0(F)$ acts on $Z_F$ so that we may as well assume the structure group of $\bar{\alpha}$ to be $\Aut^0(F)$.
Note moreover that we already proved in \cref{canExtComplexTori} that $Z_T$ must be Stein as a canonical extension of a complex torus and we showed in \cref{canExtFanosBigandNefTangent} that $Z_F$ must be Stein as a canonical extension of a Fano manifold with big and nef tangent bundle.
In summary, $Z_{X'}$ is naturally a holomorphic fibre bundle over the Stein manifold $Z_T$. The typical fibre of this bundle is $Z_F$, a Stein manifold, and the structure group of the bundle may be chosen to be the connected group $\Aut^0(F)$. But it is a classical theorem by \cite[Théorème 6.]{matsushima_SteinFibreBundles} that in this situation also the total space
\begin{align*}
Z_{X', [\pi^*\omega_X]}
\end{align*}
of the bundle is Stein. Finally, since $\pi\colon X'\rightarrow X$ is finite étale \cref{canonicalExtEtaleCov} yields that also $Z_{X, [\omega_X]}$ is Stein and so we are done.
\end{Proof}
\section{The special case of surfaces}
\label{sec:CanExtSurfaces}
As was already mentioned above, little is known regarding the converse question, namely whether a manifold admitting a Stein canonical extension necessarily has nef tangent bundle. In \cite[Corollary 1.7.]{HP_Stein_complements} it is proved that the tangent bundle must at least be pseudo-effective (in the weak sense, i.e.\ $\mathcal{O}_{\mathds{P}(\mathcal{T}_X)}(1)$ must be pseudo-effective) but this is far less than the expected nefness. As this question seems very difficult, it is natural to concentrate on the low-dimensional cases first. Indeed, in \cite{HP_Stein_complements} it is proved that:
\begin{Theorem}\emph{(Höring-Peternell, \cite[Theorem 1.13.]{HP_Stein_complements})}
\noindent
Let $X$ be a smooth projective surface. Assume that there exists some Kähler class $\omega_X$ on $X$ whose canonical extension is Stein. Then, one of the following holds true:
\begin{itemize}
\item[(1)] $X$ is an étale quotient of a complex torus.
\item[(2)] $X$ is a homogeneous Fano surface, i.e.\ either $X= \mathds{P}^2$ or $X=\mathds{P}^1\times\mathds{P}^1$.
\item[(3)] $X= \mathds{P}(\mathcal{E})\overset{\pi}{\rightarrow} C$ is a ruled surface over a curve of genus $g(C)\geq 1$. Moreover, if $g(C)\geq 2$ then $\mathcal{E}$ must be semi-stable.
\end{itemize}\label{surfaceSteinCanonicalExtension}
\end{Theorem}
Note that item $(3)$ is not quite what we expect: First of all, if $g(C)\geq 2$, then the tangent bundle of $X$ cannot be nef and so we would not expect any canonical extension to be Stein. Here, the reason for the first assertion is the relative tangent bundle sequence $0 \rightarrow \mathcal{T}_{X/C} \rightarrow \mathcal{T}_{X} \rightarrow \pi^*\mathcal{T}_{C} \rightarrow 0$: if $\mathcal{T}_X$ were nef, then so would be its quotient $\pi^*\mathcal{T}_C$ and, hence, $\mathcal{T}_C$ itself, which is impossible since $g(C)\geq 2$.
Moreover, it is well-known that the tangent bundle of a ruled surface $X=\mathds{P}(\mathcal{E})$ over an elliptic curve is nef if and only if the defining bundle $\mathcal{E}$ is semi-stable (cf.\ \cite[Theorem 6.1.]{DPS_ManifoldsWithNefTangentBundle}).
This raises the question of what is true in the remaining cases. Indeed, we are able to rule out the higher genus case as well; to this end, we need the following auxiliary result:
\begin{Proposition}
Let $X = \mathds{P}(\mathcal{E}) \overset{\pi}{\rightarrow} C$ be a ruled surface. If $\mathcal{E}$ is semi-stable, then $\pi$ is a locally constant fibre bundle.\label{flatnessRuledSurface}
\end{Proposition}
\begin{Proof}
This fact is rather well-known, see for example \cite[Theorem 1.5, Proposition 1.7.]{jahnke_SemistableRuledSurface}.
\end{Proof}
\begin{Lemma}
Let $X = \mathds{P}(\mathcal{E}) \overset{\pi}{\rightarrow} C$ be a ruled surface over a curve of genus $g(C)\geq 2$ defined by a semi-stable vector bundle $\mathcal{E}$. Then, no canonical extension of $X$ is Stein.
\end{Lemma}
\begin{Proof}
Assume to the contrary that there exists a Kähler metric $\omega_X$ on $X$ whose canonical extension $Z_X$ is Stein.
By \cref{flatnessRuledSurface} $\pi\colon X \rightarrow C$ is a locally constant fibre bundle. In other words, if we denote by $\widetilde{C} \overset{p}{\rightarrow} C$ the universal cover of $C$, then there exists a group homomorphism $\rho\colon \pi_1(C) \rightarrow \Aut(\mathds{P}^1)=:G$ such that
\begin{align*}
X \cong \pi_1(C)\backslash(\widetilde{C}\times \mathds{P}^1).
\end{align*}
Here, the reason for exceptionally denoting the quotient as one from the left is that shortly we will introduce a second action of a group. It will be crucial below that both of these groups will act from different sides so that the actions commute.
In any case, as $\pi\colon X \rightarrow C$ is a locally constant fibre bundle with fibre $\mathds{P}^1$ (a Fano manifold with connected automorphism group), \cref{canExtFibreBundleSTructure} applies and shows that we may also consider $Z_X$ as a flat fibre bundle over $Z_C$ with typical fibre $Z_{\mathds{P}^1}$ and with the same transition functions as $X\rightarrow C$. Here, for the latter assertion we use \cref{rem:CanExtFibreBunldeFlat} and the fact that by \cref{lifting_action_canonical_extension} the action of $\Aut(\mathds{P}^1)$ on $\mathds{P}^1$ lifts uniquely to $Z_{\mathds{P}^1}$. In summary,
we may identify
\begin{align}
Z_{X, [\omega_X]}
\cong \pi_1(C)\backslash \left( Z_{\widetilde{C}, [p^*\omega_C]} \times Z_{\mathds{P}^1, [\omega_X|_{\mathds{P}^1}]}\right)
= \pi_1(C)\backslash \left( Z_{\widetilde{C}, [p^*\omega_C]} \times G/L \right) .\label{eqs:CanExtSurf1}
\end{align}
Here, $\omega_C := \pi_*(\omega_X\wedge\omega_X)$ is the induced Kähler form on $C$. Moreover, we used that according to \cref{rem:CanExtHomFanos} there exists a canonical $G$-equivariant identification of canonical extensions
\begin{align*}
\left( Z_{\mathds{P}^1} \rightarrow \mathds{P}^1 \right) = \left( G/L \rightarrow G/P \right).
\end{align*}
The precise definition of the groups $L\subsetneq P \subsetneq G$ is contained in \cref{rem:CanExtHomFanos}; we will only use the fact that $L\cong \mathds{G}_m$ is connected and Stein.
Now, let us consider the manifold
\begin{align}
\mathcal{G}:= \pi_1(C)\backslash \left( Z_{\widetilde{C}, [p^*\omega_C]} \times G\right).\label{eqs:CanExtSurf2}
\end{align}
The natural projection $\mathcal{G}\rightarrow Z_C$ makes it into a (right) principal $G = \Aut(\mathds{P}^1)$-bundle. Then, clearly combining \cref{eqs:CanExtSurf2} with \cref{eqs:CanExtSurf1} we deduce that
\begin{align*}
Z_{X, [\omega_X]}
\cong \pi_1(C) \backslash \left( Z_{\widetilde{C}, [p^*\omega_C]} \times G/L\right)
\cong \pi_1(C) \backslash \left( Z_{\widetilde{C}, [p^*\omega_C]} \times G\right)/L
= \mathcal{G}/L.
\end{align*}
In other words, $\mathcal{G} \rightarrow Z_X$ is naturally a (right) principal $L$-bundle. Note that $Z_X$ is Stein by assumption and that $L$ is connected and Stein (cf.\ \cref{rem:CanExtHomFanos}). Therefore, \cite[Théorème 6.]{matsushima_SteinFibreBundles} again applies and proves that also $\mathcal{G}$ is Stein. On the other hand, $\mathcal{G}\rightarrow Z_C$ is naturally a (right) $G=\Aut(\mathds{P}^1)$-bundle. Since quotients of Stein spaces by reductive groups are again Stein by \cite{snow_reductiveQuotientSteinSpaces}, we infer that also $Z_{C, [\omega_C]} = \mathcal{G}/G$ is Stein. But this contradicts \cite[Example 3.6.]{greb_CanonicalExtensions} as $g(C)\geq2$. Thus, $Z_X$ cannot be Stein after all and we are done.
\end{Proof}
\begin{Remark}
Note that essentially verbatim the same argument also yields the following: Let $f\colon X\rightarrow Y$ be a locally constant fibration with fibre $F$ and assume that $F= G/P$ is a homogeneous Fano manifold. If there exists a Kähler form $\omega_X$ on $X$ such that the canonical extension $Z_{X, [\omega_X]}$ is Stein, then there exists a Kähler form $\omega_Y$ on $Y$ (in fact, $\omega_Y = f_*(\omega_X^{m+1})$ does the job) so that also $Z_{Y, [\omega_Y]}$ is Stein.
\end{Remark}
The case of unstable ruled surfaces over elliptic curves however is still completely open:
\begin{Question}
Let $X = \mathds{P}(\mathcal{E}) \rightarrow E$ be a ruled surface over an elliptic curve defined by an unstable bundle $\mathcal{E}$ (so that $\mathcal{T}_X$ is not nef). Is it true that no canonical extension of $X$ is Stein?\label{q:CanExtSurfaces}
\end{Question}
This question is interesting because such surfaces lie on the boundary of what is known: One can show that they belong to the very restricted class of surfaces whose tangent bundle is (strongly) pseudo-effective but not nef (compare the discussion in \cite{hosonoIwaiMatsumura_PsefTangentBundle}). Thus, an affirmative answer to \cref{q:CanExtSurfaces} would provide a serious indication towards the correctness of \cref{con:GrebWongCanExt}. On the other hand, it seems very much possible that the answer to \cref{q:CanExtSurfaces} may turn out to be negative. In this case, it would of course be interesting to see how much positivity exactly one can infer from the Steinness of canonical extensions.
\section{Appendix}
\subsection{Integration along fibres}
Since there is no universally agreed upon convention regarding the definition of integration along fibres, let us quickly state below the one we use:
\begin{Definition}
Let $f\colon X \rightarrow T$ be a proper holomorphic submersion with fibres $F_t$. Denote by $m$ the complex dimension of the fibres. Given any differentiable $k$-form $\eta \in \mathcal{A}_X^k$ on $X$, we define the $(k-2m)$-form $f_*\eta$ on $T$ by the rule
\begin{align*}
\big(f_*\eta\big)(V_1, ..., V_{k-2m})|_t := \int_{F_t} \eta \left(\widetilde{V}_1, ..., \widetilde{V}_{k-2m}, -\right), \quad \forall V_1, ..., V_{k-2m}\in T^{\mathds{C}}T,
\end{align*}
where $\widetilde{V}_1, ..., \widetilde{V}_{k-2m}$ are any locally defined lifts of $ V_1, ..., V_{k-2m}$ to $X$.
We call $f_*\eta$ the form obtained by \emph{integrating} $\eta$ \emph{along the fibres} or the \emph{push forward} of $\eta$ by $f$.\label{def:integrationAlongFibres}
\end{Definition}
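To illustrate the convention, let us record a simple sanity check (a standard computation, not needed in the sequel): for a product $X = T\times F$ with $f = pr_T$ and a form $\eta = pr_T^*\alpha \wedge pr_F^*\beta$, where $\alpha \in \mathcal{A}_T^{k-2m}$ and $\beta\in\mathcal{A}_F^{2m}$, we may choose the lifts $\widetilde{V}_i := (V_i, 0)$. Since $pr_T^*\alpha$ vanishes upon insertion of vertical vectors and $pr_F^*\beta$ vanishes upon insertion of the lifts, only one shuffle survives in the wedge product and we obtain
\begin{align*}
\big(f_*\eta\big)(V_1, ..., V_{k-2m})\big|_t = \alpha(V_1, ..., V_{k-2m})\big|_t \cdot \int_{F} \beta, \quad \text{i.e.}\quad f_*\eta = \Big(\int_F \beta\Big)\cdot \alpha.
\end{align*}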
With this convention, the following properties of the push-forward are straightforward to verify:
\begin{Proposition}
Integration along the fibres induces well-defined $\mathds{C}$-linear maps
\begin{align*}
f_*\colon \mathcal{A}_X^{k} \rightarrow \mathcal{A}_T^{k-2m}.
\end{align*}
Moreover, it satisfies the following formulae:
\begin{itemize}
\item[(1)] Push forward preserves type: If $\eta\in \mathcal{A}_X^{p,q}$, then $f_*\eta\in\mathcal{A}_T^{p-m, q-m}$.
\item[(2)] Push forward commutes with the exterior derivative: $d\circ f_* = f_* \circ d$. In particular, $f_*$ induces morphisms
\begin{align*}
f_*\colon H^{k}(X, \mathds{C}) \rightarrow H^{k-2m}(T, \mathds{C}).
\end{align*}
Similarly, $f_*$ commutes also with $\partial, \bar{\partial}$.
\item[(3)] Push forward satisfies the \emph{projection formula}: For all differential forms $\zeta$ on $T$ and $\eta$ on $X$ it holds that
\begin{align*}
f_*(f^*\zeta \wedge \eta) = \zeta \wedge f_*\eta.
\end{align*}
\item[(4)] The push forward of a (strictly) positive form on $X$ is a (strictly) positive form on $T$.
\end{itemize}
In particular, if $\omega_X$ is a Kähler form on $X$, then $f_*(\omega_X^{m+1})$ is a strictly positive closed $(1,1)$-form on $T$, i.e.\ a Kähler form.\label{integrationAlongFibresProperties}
\end{Proposition}
\subsection{Some formulae from multi-linear algebra}
While we are at it, let us state the following formulae used in the main text:
\begin{Proposition}
Let $V$ be a complex vector space and let $\varphi\in \bigwedge^kV^*$, $\psi\in \bigwedge^\ell V^*$ and $\omega\in \bigwedge^{2j} V^*$ be skew-symmetric forms on $V$ of the indicated degree. Then, for all vectors $v, w\in V$ the following identities are satisfied:
\begin{align*}
\iota_v(\varphi\wedge \psi) &= \iota_v(\varphi)\wedge \psi + (-1)^k \varphi\wedge \iota_v(\psi),\\
\iota_v(\omega^{m}) &= m \cdot \iota_v(\omega) \wedge \omega^{m-1},\\
\iota_w\iota_v(\omega^{m}) &= m\cdot \iota_w\iota_v(\omega) \wedge \omega^{m-1} - m(m-1) \iota_v(\omega)\wedge\iota_w(\omega) \wedge \omega^{m-2}.
\end{align*}
Here, as usual, $\iota_v$ is the contraction by $v$: $\iota_v\varphi = \varphi(v, -)$.\label{formulaeWedgeProduct}
\end{Proposition}
\begin{Proof}
The first identity is proved in \cite[Lemma 14.13.]{lee_SmoothManifolds}. The second formula clearly follows from the first one by an induction argument (note that we assumed $\omega$ to be of even degree to avoid worries about the correct signs). Finally, the third one is obtained by applying the first identity to the second one.
\end{Proof}
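For the reader's convenience, let us spell out the computation behind the third identity (recall that $\omega$ has even degree, so that $\iota_v(\omega)$ has odd degree): combining the first two identities,
\begin{align*}
\iota_w\iota_v(\omega^{m})
= m\cdot \iota_w\big(\iota_v(\omega)\wedge \omega^{m-1}\big)
= m\cdot \iota_w\iota_v(\omega)\wedge\omega^{m-1} - m\cdot \iota_v(\omega)\wedge \iota_w(\omega^{m-1}),
\end{align*}
and expanding $\iota_w(\omega^{m-1}) = (m-1)\cdot\iota_w(\omega)\wedge\omega^{m-2}$ via the second identity yields $m\cdot \iota_w\iota_v(\omega)\wedge\omega^{m-1} - m(m-1)\cdot \iota_v(\omega)\wedge\iota_w(\omega)\wedge\omega^{m-2}$.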
\printbibliography
\end{document}
\section{Introduction}\label{introduction}}
This paper investigates the notion of homotopy of walks to study an equivalence between two definitions of embeddings of connected and locally finite directed multigraphs in the sphere. The constructions
are proof-relevant and constructive, powered by homotopy type theory
(HoTT) as the chosen mathematical foundation
\citep{hottbook, Escardo2019}.
The topological graph theory approach inspires our definition of a
combinatorial notion of embedding/map in the sphere for graphs
\citep{planarityHoTT}, referred to as \textit{spherical maps} in this
paper, see \Cref{def:spherical-map}. A graph map can be described by
the graph itself and the circular ordering of the edges incident to
each vertex \citep[§3]{gross}. Using this description, a graph is
understood to be embedded in the sphere if the walks with the same
endpoints are \emph{walk-homotopic}, similar to the topological
concept of a connected closed and simply connected space. We propose a
more pragmatic characterisation of spherical maps, using the fact that
cycles/loops in the graph are walk-homotopic to a point in the sphere.
To prove a map is spherical for a graph with a discrete node set, it
is unnecessary to consider the infinite collection of walks. The set
of walks without inner loops suffices, as we proved in
\Cref{lem:two-spherical-map-definition-are-equivalent}.
To demonstrate our main results, we introduce a reduction relation and
the notion of quasi-simple walks in
\Cref{def:loop-reduction-relation,def:simple-walk}, respectively.
Using this reduction relation, as stated in \Cref{thm:normalisation},
it is possible to define a normal form for walks and prove that every
walk always has a normal form under certain conditions. Additionally,
suppose a spherical map is given for a graph with a discrete node set.
In this case, we provide a normalisation theorem to state that any
walk is merely walk-homotopic to a normal form, see the details in
\Cref{thm:hom-normalisation}.
\paragraph{Outline}
The terminology and notation used throughout the paper is presented in
\Cref{sec:background}. Readers familiar with HoTT may want to skip
this section. The type of graphs discussed in this paper is defined in
\Cref{sec:graph-background}. In \Cref{sec:walks}, we define the type
of walks and the type of quasi-simple walks to introduce the normal
form of a walk in \Cref{sec:loop-reduction-relation}. In
\Cref{sec:homotopy-normalisation}, a normalisation theorem for walks
is given. Related work is reviewed in \Cref{sec:related-work}, and
finally, conclusions are drawn and future work outlined in
\Cref{sec:conclusions}.
\paragraph{Computer Formalisation}
One advantage of using dependent type theories, as in this paper, is
checking the correctness of the mathematical constructions using
computer assistance. A proof assistant is a system with support to
write such programs/proofs. The results in this document were
formalised in the proof assistant Agda v(2.6.2), in a fully
self-contained development, which does not depend on any library. The
digital version of this document contains links to the Agda terms for
some definitions, lemmas, and proofs. For example, we have made
clickable the QED symbol
(\href{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html}{$\square$})
at the end of a proof.
In the implementation, the formalisation is type-checked using the
flag \texttt{without-K} for compatibility with HoTT \citep{COCKX2016}.
Also, the flag \texttt{exact-split} was used to ensure that all
clauses in a definition are definitional equalities. In our Agda
library, to support this development, we required only a postulate for
function extensionality and the corresponding postulates related to
propositional truncation.
\section{Mathematical Foundation}
\label{sec:background}
Homotopy type theory (HoTT) is an intensional Martin-Löf type theory
(MLTT) \citep{hottbook, Awodey2012} containing Voevodsky's
\emph{Univalence
axiom} \citep{voevodsky2014equivalence} and some higher inductive
types, such as propositional truncation.
As the formalisation revealed, only a subset of HoTT is required for the results of this work. Precisely, we only need MLTT
with universes, function extensionality and propositional truncation.
However, since this work is part of a more ambitious project in which
the whole theory is used, let us say that HoTT is our mathematical
foundation for studying graph theory. This approach gives us, for
example, the correct encoding of the equality between graphs, in the
sense of the identity type, coinciding with the notion of graph
isomorphism.
In HoTT, there is a natural correspondence between homotopy theory and
the higher structure of the identity type of intensional MLTT. A space
is a type where points are terms of the corresponding type, and paths from \(a\) to \(b\) are terms of the identity type between \(a\) and
\(b\). By such a correspondence, one can, for example, study synthetic
homotopy theory, as presented in the HoTT Book \citep[§8]{hottbook}.
An informal type theoretical notation derived from the HoTT book
\citep{hottbook} and the formal system Agda \citep{norell2007} is used
throughout the paper. Definitions are introduced by (\(:≡\)) while
judgmental equalities use (\(≡\)). The identity type is denoted by
(\(=\)). The universe is denoted by \(\mathcal{U}\). The notation \(A : \mathcal{U}\)
indicates that \(A\) is a type. To state that \(a\) is of type \(A\)
we write \(a : A\). The universe \(\mathcal{U}\) is closed under the following
type formers. The coproduct of two types, \(A\) and \(B\), is denoted
by \(A + B\). The corresponding data constructors are the functions
\(\mathsf{inl} : A \to A + B\) and \(\mathsf{inr}: B \to A+B\). The
dependent sum type (Σ-type) is denoted by \(Σ_{x:A}B(x)\). The
dependent product type (Π-type) is denoted by \(Π_{x:A}B(x)\). The
empty type and unit type are denoted by \(\mathbb{0}\) and
\(\mathbb{1}\), respectively. The type \(x \neq y\) denotes the
function type \((x = y) \to \mathbb{0}\). Natural numbers are of type
\(\mathbb{N}\), with zero denoted by \(0 : \mathbb{N}\). The successor of \(n : \mathbb{N}\) is denoted by \(S(n)\)
or \(n+1\). Given \(n : \mathbb{N}\), the type with \(n\) elements is denoted
by \(⟦n⟧\) and is defined inductively by setting
\(⟦0⟧ :\equiv \mathbb{0}\), \(⟦1⟧ :\equiv \mathbb{1}\) and
\(⟦n+1⟧ :\equiv ⟦n⟧ + \mathbb{1}\). To define some inductive types, we
adopt a similar notation as in Agda, including the keyword
\(\mathsf{data}\) and the curly braces for implicit arguments,
e.g.~\(\{a : A\}\) denotes \(a\) is of type \(A\), and it is an
implicit variable. The type may be omitted in the former notation, as
it can usually be inferred from the context.
We follow the HoTT Book, with slight changes in notation, for
definitions such as embeddings, equivalence of types denoted by
\((\simeq)\), propositional truncation of type \(A\) denoted by
\(\|A\|\), and \(n\)-types, e.g.~contractible types, propositions, and
sets, with their corresponding predicate, \(\mathsf{isContr}\),
\(\mathsf{isProp}\), and \(\mathsf{isSet}\).
\begin{theorem}[Hedberg's theorem]\label{lem:hedberg}
A type $A$ with decidable equality, i.e.\ $x = y$ or $x \neq y$ for all $x,y : A$, forms a set; such a type is referred to below as \emph{discrete}.
\end{theorem}
It remains to define two fundamental notions towards studying the
combinatorics of graphs, namely the type of finite sets and cyclic
sets.
\begin{definition}\label{def:finite-type}
Given $X : \mathcal{U}$, let $\mathsf{isFinite}(X) : \mathcal{U}$ be given by
\begin{equation}\label[type]{def:finite}
\mathsf{isFinite}(X) :≡ \sum_{(n~:~\mathbb{N})} \left\| X \simeq \llbracket n \rrbracket \right \|.
\end{equation}
\end{definition}
The finiteness of a type \(X\) is the existence of a bijection between
\(X\) and the type \(⟦n⟧\) for some \(n:\mathbb{N}\). One can prove
that \Cref{def:finite} is a proposition. A type \(X\) is called
\emph{finite} if \(\mathsf{isFinite}(X)\) holds. The corresponding
natural number \(n\) is referred to as the cardinal number of \(X\). Any
property on \(⟦n⟧\), for example, \say{being a set} and
\say{being discrete}, can be transported to any finite type.
\begin{lemma}\label{lem:finiteness-closure-property} Finite sets are
closed under (co)products, type equivalences, $\Sigma$-types and
$\Pi$-types.
\end{lemma}
For example, if \(A\) is a finite set and \(B : A \to \mathcal{U}\) is a type
family such that for each \(a:A\) the type \(B(a)\) is a finite set,
one can conclude that the type \(\Pi_{x:A}\,B(x)\) is a finite set.
The formal proof of \Cref{lem:finiteness-closure-property} and other
\href{http://hott.github.io/HoTT/coqdoc-html/HoTT.Spaces.Finite.Finite.html}{related lemmas}
can be found in the Coq-HoTT library \citep{HoTTCoq}. For example, one
such lemma, used to demonstrate \Cref{lem:lemma0}, states that the
cardinality of \(X\) is less than or equal to the cardinality of \(Y\)
if there exists an embedding from \(X\) to \(Y\).
As the very first examples of finite sets, we have the empty type,
unit type, decidable propositions and the family of types \(⟦n⟧\) for
every \(n:\mathbb{N}\). To prove the finiteness of other types, as in
\Cref{thm:finite-simple-walks}, we use
\Cref{lem:decidable-implies-finite-path}, a direct consequence of
Hedberg's theorem and finiteness of the empty and unit type.
\begin{lemma}\label{lem:decidable-implies-finite-path} If $A$ is
discrete, then the identity type $x = y$ is a finite set for all
$x,y:A$.
\end{lemma}
We now present a definition of cyclic types, used later to define the
combinatorial characterisation of graphs embedded in a surface in
\Cref{def:graph-map}. Being cyclic for a type is a structure, not a
property, given by preserving the structure of cyclic subgroups of
permutations on \(⟦n⟧\). To endow a type with such a cyclic structure,
let \(\mathsf{pred}\) be the predecessor function of type
\(⟦n⟧ → ⟦n⟧\), defined by the mapping \(0↦ (n-1)\) and
\((m+1) \mapsto m\) for \(m+1 < n\).
\begin{definition}\label{def:cyclic-type}
Given $A : \mathcal{U}$, we define the type of \emph{cyclic structures}
on $A$, $\mathsf{Cyclic}(A)$, as follows.
\begin{equation*}
\mathsf{Cyclic}(A) :≡ \sum_{(\varphi~:~A → A)}\sum_{(n~:~\mathbb{N})} \| ∑_{(e~:~A\,≃\,⟦n⟧)}
(e ∘ \varphi = \mathsf{pred} ∘ e) \|.
\end{equation*}
A cyclic structure is denoted by a tuple $⟨\varphi, n⟩$ where
$(\varphi, n, p)$ is of type $\mathsf{Cyclic}(A)$. One may omit $n$
for brevity if no confusion arises. A type $A$ with a cyclic structure
$⟨\varphi, n⟩$ is referred to as an $n$-cyclic type or simply as a
\emph{cyclic set} with $n$ elements.
\end{definition}
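Informally, a cyclic structure makes $\varphi$ act on $A$ exactly as the predecessor map acts on $⟦n⟧$, up to the relabelling given by $e$. The following Python sketch (an informal illustration, not part of the formalisation; the function names are our own) tests this behaviour on small finite sets by checking that iterating $\varphi$ from one element visits every element exactly once before returning to the start.

```python
def pred(n, m):
    # the predecessor map on [n] from the text: 0 |-> n-1, (m+1) |-> m
    return (m - 1) % n

def has_cyclic_structure(elems, phi):
    # A hypothetical finite-set check: phi equips `elems` with a cyclic
    # structure exactly when phi is an n-cycle, i.e. iterating phi from
    # any element visits every element once and returns to the start
    # (matching e o phi = pred o e up to relabelling by e).
    n = len(elems)
    x, seen = elems[0], set()
    for _ in range(n):
        seen.add(x)
        x = phi(x)
    return x == elems[0] and len(seen) == n

# Example: the rotation on [5] is 5-cyclic; a transposition on [4] is not.
assert has_cyclic_structure(list(range(5)), lambda m: pred(5, m))
assert not has_cyclic_structure(list(range(4)),
                                lambda m: {0: 1, 1: 0, 2: 3, 3: 2}[m])
```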
\section{The Type of Graphs}\label{sec:graph-background}
A \emph{graph} is a term of the type in \Cref{def:graph}. The
corresponding data consist of a set of \emph{nodes} and, for each pair
of nodes, a set of \emph{edges}.
\begin{definition}\label{def:graph} A directed multigraph is of the following
type.
\begin{equation*}
\ensuremath{\mathsf{Graph}} :≡ \hspace{-2mm}\sum_{(\ensuremath{\mathsf{N}}~:~\mathcal{U})}\sum_{(\ensuremath{\mathsf{E}}~:~\ensuremath{\mathsf{N}} → \ensuremath{\mathsf{N}} →
\mathcal{U})}\hspace{-2mm}\isSet{\ensuremath{\mathsf{N}}} × \prod_{(x,y~:~\ensuremath{\mathsf{N}})} \isSet{\ensuremath{\mathsf{E}}(x,y)}.
\end{equation*}
\end{definition}
Given a graph \(G\), the set of nodes is denoted by \(\ensuremath{\mathsf{N}}_{G}\).
Given two nodes \(x\) and \(y\), the edges between them form a set
denoted by \(\ensuremath{\mathsf{E}}_{G}(x,y)\). If \(e\) is an edge from \(x\) to
\(y\), we denote by \(\mathsf{source}(e)\) the node \(x\) and by
\(\mathsf{target}(e)\) the node \(y\). A \emph{finite graph} is a
graph whose node set is finite and whose edge sets
\(\ensuremath{\mathsf{E}}_{G}(x,y)\) are all finite. One can prove that the type of graphs in
\Cref{def:graph} forms a homotopy groupoid and is also a univalent
category \citep{hottbook}. The proof of these facts and
\href{https://jonaprieto.github.io/synthetic-graph-theory/lib.graph-definitions.Graph.EquivalencePrinciple.html}{related lemmas}
will be omitted as it is not essential for our work here. The
interested reader can check the formalisation in Agda for the
respective proofs \citep{agdaformalisation}. In the upcoming sections,
unless stated otherwise, we will denote \(G\) to be a graph, and
\(x,y\), and \(z\) to be variables for nodes in \(G\).
\section{Walks in a Graph}\label{sec:walks}
The notion of a walk plays an essential role in graph theory. Many of
the algorithms using graph data structures are based on this object.
One may be interested in finding the \say{distance between two nodes}
in a graph, the shortest walk, and several other variation problems
related to walking in the graph.
\begin{definition}\label{def:walk} A \emph{walk} in $G$ from $x$ to
$y$ is a sequence of connected edges that we construct using the
following inductive data type:
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{W}\,~:~\, \ensuremath{\mathsf{N}}_{G} \to \ensuremath{\mathsf{N}}_{G} \to \mathcal{U}\\
& \langle\_\rangle \,:\, (x~:~\ensuremath{\mathsf{N}}_{G}) \to \mathsf{W}_{G}(x,x)\\
& (\_\hspace{-1mm}\odot\hspace{-1mm}\_) \,:\, \Pi\,\{x\,y\,z~:~\ensuremath{\mathsf{N}}_{G}\}\,.\,(e~:~\ensuremath{\mathsf{E}}_{G}(x,y))\\
& \hspace{8mm} \to (w~:~\mathsf{W}_{G}(y,z))\\
& \hspace{8mm} \to \mathsf{W}_{G}(x,z)
\end{aligned}
\end{equation*}
Let $w$ be a walk from $x$ to $y$, i.e. of type $\mathsf{W}_{G}(x,y)$.
We will denote by $x$ the \emph{head} of $w$ and by $y$ the \emph{end}
of $w$. If $w$ is $\langle x\rangle$ then we refer to $w$ as
\emph{trivial} or \emph{one-point} walk. If $w$ is of the form $(e ⊙
\langle y \rangle)$ for an edge $e : \ensuremath{\mathsf{E}}_{G}(x,y)$, then $w$ is the \emph{one-edge} walk $e$.
Nontrivial walks are of the form $(e⊙w)$ and a \emph{loop} is a walk
with the same head and end.
\end{definition}
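The inductive type above admits an informal computational reading, sketched below in Python (this is an illustration only, not part of the formalisation; the tagged-tuple encoding and all names are our own choices). Edges are source-target pairs, and the constructor for $(e ⊙ w)$ checks that the edge and the walk connect.

```python
# A minimal sketch of the walk type: walks are either a one-point walk
# <x> or an edge prepended to a walk, (e . w).

def trivial(x):
    # the one-point walk <x> at node x
    return ("point", x)

def step(e, w):
    # (e . w): the target of e must be the head of w
    assert e[1] == head(w), "edge and walk do not connect"
    return ("step", e, w)

def head(w):
    # the first node of the walk
    return w[1] if w[0] == "point" else w[1][0]

def end(w):
    # the last node of the walk
    return w[1] if w[0] == "point" else end(w[2])

# a loop: a walk whose head and end coincide
w = step(("x", "y"), step(("y", "x"), trivial("x")))
assert head(w) == end(w) == "x"
```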
\hypertarget{structural-induction-for-walks}{%
\subsection{Structural Induction for
Walks}\label{structural-induction-for-walks}}
By \emph{structural induction} or \emph{pattern matching} on a walk,
we will refer to the elimination principle of the inductive type in
\Cref{def:walk}. An induction principle allows us to define outgoing
functions from a type to a type family. For instance, if we want to
use the induction principle to inhabit a predicate on the type of
walks, \(P:\Pi\{x\,y:\ensuremath{\mathsf{N}}_{G}\}.\mathsf{W}_{G}(x,y) \to \mathcal{U}\), one
can inhabit \Cref{eq:structural-induction}. Given a walk
\(w:\mathsf{W}_{G}(x,y)\), to construct a term of type \(P(w)\), the
base case must first be constructed, i.e.~give a term of type
\(P(\langle x \rangle)\), for every \(x~:~\ensuremath{\mathsf{N}}_{G}\). Subsequently,
we must prove the case for composite walks, i.e.~\(P(e ⊙ w)\). To show
this, \(P(w)\) is assumed for any walk \(w\), and we construct a term
of type \(P(e ⊙ w)\) from this assumption. Thus, one gets \(P(w)\) for
any walk \(w\). Another induction principle for walks is stated in
\Cref{thm:walk-induction-by-length}.
\begin{equation}\label[type]{eq:structural-induction}
\begin{split}
& \hspace{3mm}{\prod_{(x~:~\ensuremath{\mathsf{N}}_{G})}\,P(\langle x \rangle)}\, \\
& {\times \prod_{(x, y ,z~:~\ensuremath{\mathsf{N}}_{G})} \prod_{(e~:~\ensuremath{\mathsf{E}}_{G}(x,y))}
\prod_{(w~:~\mathsf{W}_{G}(y,z))} P(w) \to P(e ⊙ w)} \\
&{\to \prod_{(x,y~:~\ensuremath{\mathsf{N}}_{G})} \prod_{(w~:~\mathsf{W}_{G}(x,y))} P(w)}.
\end{split}
\end{equation}
The \emph{composition}, also called concatenation, of walks is an
associative binary operation on walks defined by structural induction
on its left argument. Given walks \(p : \mathsf{W}_{G}(x,y)\) and
\(q : \mathsf{W}_{G}(y,z)\), we refer to their composition as the
\emph{composite} denoted by \(p \cdot q\). The node \(y\) is called
the \emph{joint} of the composition. The \emph{length} of the walk
\(w\) is denoted by \(\mathsf{length}(w)\) and represents the number
of edges used to construct \(w\). A trivial walk has length zero,
whilst a walk \((e ⊙ w)\) has length one greater than \(w\). We depict
trivial walks as a point and walks of positive length as an arrow, as
illustrated in \Cref{fig:simple-walk}.
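The structurally recursive definitions of length and composition can be read as the following informal Python sketch (not part of the formalisation; the encoding of walks as tagged tuples is our own choice), where composition recurses on its left argument exactly as described in the text.

```python
def trivial(x):
    return ("point", x)

def step(e, w):
    return ("step", e, w)

def length(w):
    # a trivial walk has length zero; (e . w) has length 1 + length(w)
    return 0 if w[0] == "point" else 1 + length(w[2])

def compose(p, q):
    # structural induction on the left argument p
    if p[0] == "point":
        return q
    return ("step", p[1], compose(p[2], q))

p = step(("x", "y"), trivial("y"))
q = step(("y", "z"), trivial("z"))
assert length(compose(p, q)) == length(p) + length(q) == 2
```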
\begin{lemma}\label{lem:walk-is-set} The type of walks forms a set.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1201}
One can show that the type $\mathsf{W}(x,y)$ is
\href{https://jonaprieto.github.io/synthetic-graph-theory/lib.graph-walks.Walk.SigmaWalks.html}{equivalent}
to $\Sigma_{n :\mathbb{N}}\,\hat{W}(n,x,y)$ with $\hat{W}$
defined as follows.
\begin{subequations}\label{def:walk-2}
\begin{align}
&\hat{W}~:~\mathbb{N} \to \ensuremath{\mathsf{N}}_{G} \to \ensuremath{\mathsf{N}}_{G} \to \mathcal{U}\\
&\hat{W}(0,x,y) :\equiv (x = y),\label{eq:w2-zero}\\
&\hat{W}(S(n),x,y) :\equiv \sum_{(k~:~\ensuremath{\mathsf{N}}_{G})}\,\ensuremath{\mathsf{E}}_{G}(x,k) \times \hat{W}(n,k,y) \label{eq:w2-sn}.
\end{align}
\end{subequations}
It suffices to show that the type $\hat{W}(n,x,y)$ forms a set for
$n:\mathbb{N}$, which will be proven by induction on $n$. If $n=0$, one
obtains the identity type $x = y$, which is a proposition, hence a set,
since $\ensuremath{\mathsf{N}}_{G}$ is a set.
Consequently, we must now show that the type in \Cref{eq:w2-sn} is a
set. By the graph definition, the base type $\ensuremath{\mathsf{N}}_{G}$ and
$\ensuremath{\mathsf{E}}_{G}$ are both sets. Thus, one only requires that $\hat{W}(n,
k,y)$ forms a set, which is precisely the induction hypothesis.
\end{linkproof}
Although it is not included in the formalisation of this work, one can
show that the type of walks forms a category. Let \(\mathsf{Graph}\) be
the category of graphs using \Cref{def:graph} and \(\mathcal{C}\) be
the category of small categories. There is a functor
\(R~:~\mathsf{Graph} \to \mathcal{C}\) mapping every graph \(G\) to
its \emph{free} pre-category. The object set of \(R(G)\) is
\(\ensuremath{\mathsf{N}}_{G}\), and the morphisms correspond to the collection of all
possible walks in \(G\). By \cref{lem:walk-is-set}, it follows that
\(R(G)\) is a small category. Let \(L\) be the forgetful functor from
\(\mathcal{C}\) to \(\mathsf{Graph}\). Then, \(R\) is the left adjoint
of \(L\). The \emph{graph of
walks} \(W(G)\) of \(G\) is given by the endofunctor
\(W~:~\mathsf{Graph} \to \mathsf{Graph}\), the monad from the
composite \(L \circ R\).
\subsection{A Well-Founded Order for Walks}\label{sec:well-founded-walks}
Structural induction is a particular case of a more general induction
principle to define recursive programs called \emph{well-founded} or
Noetherian induction. Recall that for the structural induction
principle, one must guarantee that each recursive call is made on an
argument structurally smaller than the input. However, there is no
reason to believe this will always be the case.
In constructive mathematics, a binary relation \(R\) on a set \(A\) is
\emph{well-founded} if every element of \(A\) is \emph{accessible}. An
element \(a:A\) is accessible by \(R\) if every \(b:A\) with \(bRa\) is
accessible \citep[§10.3]{Nordstrm1988, hottbook}. In particular, if
there is no \(b\) such that \(bRa\), then \(a\) is vacuously
accessible. If \((<)\) denotes the strict \emph{less-than} relation on
the natural numbers, then zero is vacuously accessible by \((<)\) on
\(\mathbb{N}\).
Let us define a well-founded order for walks in a graph by considering
their lengths, from where the well-founded induction for walks
follows, see \Cref{thm:walk-induction-by-length}.
\begin{definition}\label{def:walk-order} Given $p,q :
\mathsf{W}_{G}(x,y)$ for $x,y:\ensuremath{\mathsf{N}}_{G}$, the relation
$(\preccurlyeq)$ states that $p \preccurlyeq q$ when
$\mathsf{length}(p)\leq \mathsf{length}(q)$.
\end{definition}
\begin{lemma}\label{lem:well-founded-walk-relation} The relation
($\preccurlyeq$) on $\Sigma_{x,y~:~\ensuremath{\mathsf{N}}_{G}} \mathsf{W}_{G}(x,y)$ is well-founded.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1278}
It follows from the fact that the poset $(\mathbb{N}, \leq)$ is well-founded.
\end{linkproof}
We refer to the following
\href{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1298}{lemma}
as the \emph{well-founded induction principle for walks} induced by
\Cref{def:walk-order}.
\begin{theorem}
\label{thm:walk-induction-by-length}
Suppose the following is given,
\begin{enumerate}
\item
a predicate $P$ of type $\Sigma_{x,y~:~\ensuremath{\mathsf{N}}_{G}} \mathsf{W}_{G}(x,y) \to \mathcal{U}$ such that,
\item
given $(a,b,q)$ of type $\Sigma_{x,y~:~\ensuremath{\mathsf{N}}_{G}} \mathsf{W}_{G}(x,y)$, if
$P(x',y',p)$ holds for each walk $p~:~\mathsf{W}_{G}(x',y')$ with $x',y': \ensuremath{\mathsf{N}}_{G}$ and $p
\preccurlyeq q$, then $P(a,b,q)$.
\end{enumerate}
Then, given any walk $w : \mathsf{W}_{G}(x,y)$ and $x,y : \ensuremath{\mathsf{N}}_{G}$, we
have $P(x,y,w)$.
\end{theorem}
\begin{remark} The induction principle stated in
\Cref{thm:walk-induction-by-length} using
\Cref{lem:well-founded-walk-relation} is equivalent to performing
induction on the length of the walk.
\end{remark}
\Cref{thm:normalisation,thm:hom-normalisation} define algorithms for
which many of their recursive calls are on subwalks of the input walk.
A \emph{subwalk} of a walk \(w\) is a contiguous subsequence of edges
in \(w\). In general, subwalks are not structurally smaller than their
corresponding walk; exceptions are, for example, the subwalks \(w\) and
\(e\) of the composite walk \((e ⊙ w)\). Excluding such cases,
to deal with other subwalk cases, we can use the \emph{well-founded}
induction principle given in \Cref{thm:walk-induction-by-length}.
\subsection{Quasi-Simple Walks}\label{sec:quasi-simple}
In this subsection, we characterise walks with shapes as in
\Cref{fig:simple-walk} and refer to such as \emph{quasi-simple} walks
in \Cref{def:simple-walk}.
\begin{figure}[!ht]
\centering
\begin{equation*}
\begin{tikzcd}
& && \\
\bullet_{x} &\bullet_{x} &\bullet_{y} &\bullet_{x} & \bullet_{x} & \bullet_{y}
\arrow["w_1", from=2-2, to=2-3]
\arrow["w_2"{description}, from=2-4, to=2-4, loop]
\arrow["w_3", from=2-5, to=2-6]
\arrow["w_4"{description},from=2-6, to=2-6, loop]
\end{tikzcd}
\end{equation*}
\caption{The arrows in the picture can represent edges or walks of a
positive length. In the sense of \Cref{def:simple-walk}, a
quasi-simple walk can only be one of these kinds: i) one-point walk
ii) path iii) loop without inner node repetitions, or iv) composite
walk between a path and a quasi-simple walk of kind iii. The walks
$w_3$ and $w_4$ only share the node $y$.}
\label{fig:simple-walk}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\begin{tikzcd}
\\
\bullet_{x}
\arrow["w_1"{description},from=2-1, to=2-1, loop]
\arrow["w_2",from=2-1, to=2-1, loop right]
\end{tikzcd}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\begin{tikzcd}
&\\
\bullet_{x} & \bullet_{y}
\arrow["w_3"{description},from=2-1, to=2-1, loop]
\arrow["w_4",from=2-1, to=2-2]
\end{tikzcd}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\begin{tikzcd}
& &\\
\bullet_{x} &\bullet_{y} &\bullet_{z}
\arrow["w_5",from=2-1, to=2-2]
\arrow["w_6"{description},from=2-2, to=2-2, loop]
\arrow["w_7",from=2-2, to=2-3]
\end{tikzcd}
\end{subfigure}
\caption{These are three examples of walks that are not quasi-simple
in the sense of \Cref{def:simple-walk}. The walks $w_1$ and $w_2$ only
share the node $x$, and the same happens with the walks $w_3$ and
$w_4$. The walks $w_5,w_6$ and $w_7$ only share the node $y$.
The walks $w_i$ for $i$ from $1$ to $7$ are nontrivial walks.
}
\label{fig:simple-walk-2}
\end{figure}
The notion of a quasi-simple walk will be used to introduce a
reduction relation on the set of walks to remove their inner loops,
see \Cref{def:loop-reduction-relation}. A related notion to the
quasi-simple walk definition is that of a path \citep{diestel}. The
usual graph-theoretical notion of a \emph{path} is a walk with no
repeated nodes. Here, quasi-simple walks are introduced since paths
are not suitable in our description of graph maps in
\Cref{sec:homotopy-walks-in-sphere}. There, the totality of walks is
considered, which includes closed walks, also called loops. For graph
maps in the sphere, we found out that the type of walks can be
replaced by the type of quasi-simple walks under certain conditions.
Quasi-simple walks are conveniently defined in a way that permits their
end node to appear at most twice in the walk.
To define quasi-simpleness for walks, we introduce an unconventional
relation, denoted by \((x \in w)\), meaning that the node \(x\) occurs
in the walk \(w\) at a position other than the last, see
\Cref{def:node-membership}. The type \((x \in w)\) is decidable
whenever the walk belongs to a graph with a discrete node set. Consequently,
\Cref{lem:being-simple-is-prop} shows that being quasi-simple is also
a decidable proposition on the same kind of graphs. Quasi-simple walks
play a relevant role in this work. They are required to give an
alternative definition of graph maps in the sphere, as stated in
\Cref{def:spherical-map-simple}.
\begin{definition}\label{def:node-membership} Let $x,y,z:\ensuremath{\mathsf{N}}_{G}$
and $w: \mathsf{W}_{G}(x,z)$. The relation $(\in)$ holds for a node
$y$ and a walk $w$ when $y$ occurs in $w$ at a position other than the
last node $z$, i.e. whenever the type $(y \in w)$ defined below is inhabited.
\begin{enumerate}
\item $y \in ⟨z⟩ :\equiv \mathbb{0}$.
\item $y \in (e ⊙ w) :\equiv (y = \mathsf{source}(e)) + (y \in w)$.
\end{enumerate}
\end{definition}
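This definition admits a computational reading: rather than a mere witness, we can list the positions at which $y$ occurs as the source of an edge in $w$, so that $\mathsf{isProp}(y \in w)$ corresponds to such a list having at most one entry. The Python sketch below is an informal illustration only (not part of the formalisation; the tuple encoding of walks is our own choice).

```python
def occurrences(y, w):
    # positions at which y occurs as a source in w, mirroring:
    #   y in <z>     := empty
    #   y in (e . w) := (y = source(e)) + (y in w)
    if w[0] == "point":
        return []
    _, e, rest = w
    here = [0] if y == e[0] else []                       # (y = source(e))
    return here + [i + 1 for i in occurrences(y, rest)]  # + (y in w)

point = lambda x: ("point", x)
step = lambda e, w: ("step", e, w)

# a loop at x: the end node x does not count as a member, as in the text
loop = step(("x", "y"), step(("y", "x"), point("x")))
assert occurrences("x", loop) == [0]   # only the first position
assert occurrences("y", loop) == [1]
```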
\begin{lemma} If the node set of the graph $G$ is discrete, then the
type $(x ∈ w)$ is decidable for any node $x$ and walk $w$
in $G$.
\end{lemma}
\begin{definition}\label{def:simple-walk} Given $x,y: \ensuremath{\mathsf{N}}_{G}$, a walk $w$ in $G$ from $x$ to $y$ is \emph{quasi-simple} if $\mathsf{isQuasi}(w)$ holds.
\begin{equation}\label{eq:simple-walk}
\mathsf{isQuasi}(w) :≡ \prod_{(z~:~\ensuremath{\mathsf{N}}_{G})} \mathsf{isProp}(z \in w).
\end{equation}
\end{definition}
\begin{lemma}\label{lem:simple-is-prop}
Being quasi-simple is a proposition.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1423}
It follows since $\mathsf{isProp}(z \in w)$ is a proposition and
propositions are closed under $\Pi$-types.
\end{linkproof}
Thus, \Cref{def:simple-walk} presents a quasi-simple walk as a path
where the end could only be present at most twice. Examples of walks
that are not quasi-simple are illustrated in \Cref{fig:simple-walk-2}.
\begin{lemma}\label{lem:e-simple-is-simple} Given $x,y,z : \ensuremath{\mathsf{N}}_{G}$,
$e : \ensuremath{\mathsf{E}}_{G}(x,y)$ and a quasi-simple walk $w : \mathsf{W}_{G}(y,z)$, if
$x~\not \in~w$ then the walk $(e ⊙ w)$ is quasi-simple.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1439}
Given a node $r$, we must show that $r \in (e ⊙ w)$ is a
proposition. That is equivalent to showing that the type
$(r = x) + (r \in w)$ is a proposition. The coproduct of mutually
exclusive propositions is a proposition. The type $r = x$ is a
proposition, since the node set forms a set, and the type $(r \in w)$
is also a proposition, since the walk $w$ is quasi-simple by
hypothesis. Thus, it remains to show that there is no pair $(p, q)$
where $p: (r = x)$ and $ q : (r \in w) $. A contradiction arises: by
hypothesis $x~\not\in~w$, yet transporting $q$ along $p$ yields
$\mathsf{tr}^{λ z \to z \in w}(p)(q) : x \in w$.
\end{linkproof}
\begin{lemma}\label{lem:conservation-simple-walks} Given $x,y,z :
\ensuremath{\mathsf{N}}_{G}$, $e:\ensuremath{\mathsf{E}}_{G}(x,y)$, and a walk $w:\mathsf{W}_{G}(y,z)$, if
the walk $(e~⊙~w)$ is a quasi-simple walk then $w$ is also a quasi-simple walk.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1481}
Given any node $u~:~\ensuremath{\mathsf{N}}_{G}$ and two proofs $p,q~:~u \in w$, we must
show that $p=q$. By definition, $\mathsf{inr}(p)$ and $\mathsf{inr}(q)
$ are proofs that $u \in (e~⊙~w)$. Because $(e~⊙~w)$ is a quasi-simple walk,
the equality $\mathsf{inr}(p) = \mathsf{inr}(q)$ holds. The
constructor $\mathsf{inr}$ is an injective function, and one therefore
obtains $p=q$ as required.\qedhere
\end{linkproof}
\begin{corollary}\label{lem:basic-simple-walks} Trivial and one-edge
walks are quasi-simple walks.
\end{corollary}
\begin{lemma}\label{lem:being-simple-is-prop}
If the node set of the
graph is discrete, then being quasi-simple for a walk is a decidable
proposition.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1604}
Let $x,z: \ensuremath{\mathsf{N}}_{G}$ and $w : \mathsf{W}_{G}(x,z)$, we want to show
that $\mathsf{isQuasi}(w)$ is decidable. The proof is by induction on
the structure of $w$.
\begin{enumerate}
\item If $w$ is trivial then, by \Cref{lem:basic-simple-walks},
the walk $w$ is quasi-simple.
\item If $w$ is the composite walk $(e ⊙ w')$ for $e : \ensuremath{\mathsf{E}}_{G}(x,y)$
and $w'~:~\mathsf{W}_{G}(y,z)$, we recursively ask whether the walk
$w'$ is quasi-simple or not.
\begin{enumerate}
\item If $w'$ is not quasi-simple, then $w$ is not quasi-simple by
the contrapositive of \Cref{lem:conservation-simple-walks}.
\item If $w'$ is quasi-simple, then we ask whether $x \in w'$. If so, then $w$ is
not quasi-simple, as the node $x$ would appear twice in $w$,
contradicting the definition of quasi-simpleness. If instead $x \not \in w'$,
one obtains that $w$ is quasi-simple by \cref{lem:e-simple-is-simple}. \qedhere
\end{enumerate}
\end{enumerate}
\end{linkproof}
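The case analysis in this proof is effectively a recursive decision procedure. The following Python sketch (an informal illustration, not part of the formalisation; the tuple encoding of walks is our own choice) mirrors it: a walk is quasi-simple precisely when no node occurs twice as the source of one of its edges.

```python
def member(y, w):
    # the boolean reading of (y in w): y occurs as a source in w
    if w[0] == "point":
        return False
    _, e, rest = w
    return y == e[0] or member(y, rest)

def is_quasi_simple(w):
    if w[0] == "point":
        return True                   # trivial walks are quasi-simple
    _, e, rest = w
    if not is_quasi_simple(rest):     # contrapositive of conservation
        return False
    return not member(e[0], rest)     # x in w' would repeat the node x

point = lambda x: ("point", x)
step = lambda e, w: ("step", e, w)

# a loop at the end is allowed; revisiting an inner node is not
ok = step(("x", "y"), step(("y", "x"), point("x")))
bad = step(("y", "x"), step(("x", "y"), step(("y", "z"), point("z"))))
assert is_quasi_simple(ok)
assert not is_quasi_simple(bad)
```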
\hypertarget{a-finiteness-property}{%
\subsection{A Finiteness Property}\label{a-finiteness-property}}
The goal in this subsection is to prove that the collection of
quasi-simple walks in a finite graph \(G\) forms a finite set, as
stated in \Cref{thm:finite-simple-walks}. To show this, we prove the
finiteness of a type equivalent to \Cref{def:simple-walk-collection}.
To establish this equivalence, see \Cref{lem:lemma1}, we first need to
demonstrate some intermediate results, as follows.
\begin{equation}\label[type]{def:simple-walk-collection}
\sum_{(w~:~\mathsf{W}_{G}(x,y))} \mathsf{isQuasi}(w).
\end{equation}
\begin{lemma}\label{lem:number-of-nodes-in-walk}
Given any walk $w : \mathsf{W}_{G}(x,z)$ of length $n$, then
\begin{equation}\label[equiv]{eq:number-of-nodes-in-walk}
⟦ n ⟧ \simeq \sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y ∈ w).
\end{equation}
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1651}
By induction on the structure of $w$.
\begin{enumerate}
\item If the walk is trivial, the required equivalence follows
from the type equivalence between $\mathbb{0}$ and $\Sigma_{z
:\ensuremath{\mathsf{N}}_{G}}\mathbb{0}$.
\item If the walk is $(e ⊙ w)$ for $e~:~\ensuremath{\mathsf{E}}_{G}(x,y)$ and $w :
\mathsf{W}_{G}(y,z)$, the equivalence is established by the
following calculation. Let $n$ be the length of $w$.
\begin{subequations}
\begin{align}
\sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y \in (e ⊙ w )) &\equiv \sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y = x) + (y \in w ) \label[equiv]{eq:number-nodes-1}\\
&\simeq \sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y = x) + \sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y \in w ) \label[equiv]{eq:number-nodes-2}\\
&\simeq \mathbb{1} + \sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} (y \in w ) \label[equiv]{eq:number-nodes-3}\\
&\simeq \mathbb{1} + ⟦ n ⟧ \label[equiv]{eq:number-nodes-4} \\
&\simeq ⟦ n + 1 ⟧. \label[equiv]{eq:number-nodes-5}
\end{align}
\end{subequations}
\Cref{eq:number-nodes-1} holds by
\Cref{def:node-membership}. $\Sigma$-types distribute over coproducts,
as in \Cref{eq:number-nodes-2}. We can simplify in
\Cref{eq:number-nodes-3} because the type $\Sigma_{y:\ensuremath{\mathsf{N}}_{G}} (y
= x)$ is contractible and therefore equivalent to the unit
type. \Cref{eq:number-nodes-4} is by
the induction hypothesis applied to $w$. \Cref{eq:number-nodes-5}
is accomplished by the definition of $⟦ n ⟧$ using the coproduct
definition. \qedhere
\end{enumerate}
\end{linkproof}
\begin{lemma}\label{lem:inw-is-finite} Given $x,y,z : \ensuremath{\mathsf{N}}_{G}$, and
$w : \mathsf{W}_{G}(x,y)$, the type $(z ∈ w)$ is a finite set if
the node set of $G$ is discrete.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1680}
By induction on the structure of $w$: in case the walk is trivial, the
type in question is finite as it is equal to the empty type by
definition. In the composite walk case, $z \in (e ⊙ w)$, we must prove
that the type $(z = x) + (z \in w)$ is finite. Note that the former is
finite by \Cref{lem:decidable-implies-finite-path}. By the induction
hypothesis: the type $z \in w$ is finite. The required conclusion
then follows since finite sets are closed under coproducts. \qedhere
\end{linkproof}
We can now prove that for finite graphs there exists a finiteness
property for the collection of all quasi-simple walks, derived from
the finiteness of the set of quasi-simple walks of a fixed length
\(n\) for \(n:\mathbb{N}\).
\begin{definition}\label{def:finite-simple-walks}
Given $x,y : \ensuremath{\mathsf{N}}_{G}$ and $n:\mathbb{N}$, the
type $\mathsf{qswalk}$ collects all quasi-simple walks of a
fixed length $n$.
\begin{equation*}
\mathsf{qswalk}(n,x,y):\equiv\hspace{-3mm} \sum_{(w~:~\mathsf{W}_{G}(x,y))}
\hspace{-3mm}\mathsf{isQuasi}(w)~\times~(\mathsf{length}(w)=n).
\end{equation*}
\end{definition}
\begin{lemma}\label{lem:equiv-type-fswalk} Given a graph $G$, $n~:~\mathbb{N}$, and
$x, z~:~\ensuremath{\mathsf{N}}_{G} $, the following equivalence holds.
\begin{equation}\label[equiv]{eq:equiv-type-fswalk}
\mathsf{qswalk}(S(n), x, z)
\simeq\hspace{-3mm}\sum_{(y~:~\ensuremath{\mathsf{N}}_{G})}
\sum_{(e~:~\ensuremath{\mathsf{E}}_{G}(x,y))}\sum_{(w~:~\mathsf{qswalk}(n, y, z))}\hspace{-3mm} (x \not \in w).
\end{equation}
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1746}
The back-and-forth functions are extensions of the functions derived
from \Cref{lem:e-simple-is-simple,lem:conservation-simple-walks}. \qedhere
\end{linkproof}
\begin{lemma}\label{lem:lemma2} Given a finite graph, $x, y :
\ensuremath{\mathsf{N}}_{G}$ and $n~:~\mathbb{N}$, the type $\mathsf{qswalk}(n,x,y)$
in \Cref{def:finite-simple-walks} is a finite set.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1777}
It suffices to show that the type $\mathsf{qswalk}(n,x,y)$ is finite.
The proof is by induction on $n$.
\begin{enumerate}
\item If $n = 0$, the type defined by
$\mathsf{qswalk}(0,x,y)$ is equivalent to the identity type $x=y$,
as the only walks of length zero are the trivial walks. Given that
the node set is discrete, the path space $x=y$ is
finite by \Cref{lem:decidable-implies-finite-path}.
\item Otherwise, given $x,z:\ensuremath{\mathsf{N}}_{G}$, we must prove that the
type $\mathsf{qswalk}(S(n),x,z)$ is finite, for $n :
\mathbb{N}$, assuming that $\mathsf{qswalk}(n,x,z)$ is
finite. This is equivalent to showing that the equivalent
type given by \Cref{eq:equiv-type-fswalk} is finite. The
required conclusion follows by
\Cref{lem:finiteness-closure-property}, as each type of the
$\Sigma$-type in the right-hand side of the equivalence in
\Cref{eq:equiv-type-fswalk} is finite. The set $\ensuremath{\mathsf{N}}_{G}$
and the sets by $\ensuremath{\mathsf{E}}_{G}$ are each finite, as $G$ is a
finite graph. The type $\mathsf{qswalk}(n, y, z)$ is finite
by induction hypothesis. Lastly, any decidable proposition is finite
i.e. $(x \not \in w)$ is finite.\qedhere
\end{enumerate}
\end{linkproof}
\Cref{lem:lemma0,lem:lemma1} prove the fact mentioned earlier on the
node repetition condition in a quasi-simple walk. A node can only
appear once in a quasi-simple walk, unless the node is the end of the
walk. From now on, unless stated otherwise, we will refer to \(n\) as
the cardinality of \(\ensuremath{\mathsf{N}}_{G}\) whenever the node set of the graph
\(G\) is finite. The number of nodes in any quasi-simple walk is
bounded by \(n+1\).
\begin{lemma}\label{lem:lemma1b} Let $G$ be a finite graph. Then
\Cref{eq:lemma1b} is a finite set.
\begin{equation}\label[type]{eq:lemma1b}
\sum_{(x,y~:~\ensuremath{\mathsf{N}}_{G})}\sum_{(m~:~ ⟦ n + 1 ⟧ )} \mathsf{qswalk}(m,x,y).
\end{equation}
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1803}
The conclusion follows since finite sets are closed under
$\Sigma$-types. $\ensuremath{\mathsf{N}}_G$ is finite since $G$ is a finite graph. $⟦ n
+ 1 ⟧$ is finite. The type $\mathsf{qswalk}(m,x,y)$ is finite by
\Cref{lem:lemma2}.\qedhere
\end{linkproof}
\begin{lemma}\label{lem:lemma0} Given a graph $G$ with finite node set
of cardinality $n$, $x,y:\ensuremath{\mathsf{N}}_{G}$ and a quasi-simple walk $w :
\mathsf{W}_{G}(x,y)$ of length $m$, then it holds that $m \leq n$.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1845}
It suffices to generate an embedding between the finite set $⟦m⟧$ and
the finite node set in $G$. Such an embedding is the projection
function $\pi_1~:~\Sigma_{x :\ensuremath{\mathsf{N}}_{G}} (x \in w) \to \ensuremath{\mathsf{N}}_{G}$.
Recall that the domain of the function $\pi_1$ is equivalent to $⟦m⟧$
by \Cref{lem:number-of-nodes-in-walk}.\qedhere
\end{linkproof}
Now, even though the type of walks may form an infinite set, thanks to
\Cref{lem:lemma0,thm:finite-simple-walks}, we will be able to prove
that for any nodes \(x\) and \(y\), the collection of quasi-simple
walks from \(x\) to \(y\) forms a finite set as long as the graph is
finite.
\begin{lemma}\label{lem:lemma1}
Given a graph $G$ with finite node set of cardinality $n$ and $x, y~:~\ensuremath{\mathsf{N}}_{G}$,
the following equivalence holds.
\begin{equation}\label[equiv]{eq:lemma1}
\sum_{(w~:~\mathsf{W}_{G}(x,y))} \mathsf{isQuasi}(w)
\simeq \sum_{(m~:~⟦ n + 1 ⟧)} \mathsf{qswalk}(m,x,y).
\end{equation}
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1883}
Apply \Cref{lem:lemma0}.\qedhere
\end{linkproof}
It is not immediately clear that the quasi-simple walks form a finite
set, even when the graph is finite. A quasi-simple walk can contain a
loop at its terminal node, so one might suspect that arbitrarily many
walks arise by traversing such loops. However, it is precisely the
constraint of quasi-simpleness that rules this out and yields the
finiteness property.
\begin{theorem}\label{thm:finite-simple-walks}
The quasi-simple walks of a finite graph $G$ form a finite set, i.e.
\Cref{eq:finite-simple-walks} is inhabited.
\vspace{1mm}
\begin{equation}\label[type]{eq:finite-simple-walks}
\mathsf{isFinite}\left(\sum_{(x , y~:~\ensuremath{\mathsf{N}}_{G})} \sum_{(w~:~\mathsf{W}(x,y))} \mathsf{isQuasi}(w)\right).
\end{equation}
\end{theorem}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#1927}
The conclusion clearly follows from \Cref{lem:lemma1,lem:lemma1b}, since
finite sets are closed under type equivalences and $\Sigma$-types
by \Cref{lem:finiteness-closure-property}. \qedhere
\end{linkproof}
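The finiteness property can be checked by brute force on small examples. The following Python sketch (an informal illustration, not part of the formalisation; the edge-list encoding is our own choice) enumerates the quasi-simple walks between two nodes, pruning any extension that would repeat a source node, and uses the length bound $m \leq n$ of \Cref{lem:lemma0} to terminate.

```python
def quasi_simple_walks(edges, x, y, n):
    # All quasi-simple walks from x to y in a graph with n nodes, as
    # tuples of (source, target) edges. Sources never repeat, so by the
    # length bound every such walk has at most n edges.
    walks = [()] if x == y else []     # the trivial walk, when x = y

    def extend(z, walk, sources):
        for (s, t) in edges:
            if s == z and s not in sources:
                w = walk + ((s, t),)
                if t == y:
                    walks.append(w)
                if len(w) < n:
                    extend(t, w, sources | {s})

    extend(x, (), frozenset())
    return walks

# a directed triangle with nodes a, b, c
edges = [("a", "b"), ("b", "c"), ("c", "a")]
ws = quasi_simple_walks(edges, "a", "a", 3)
# exactly two: the trivial walk and the full cycle a -> b -> c -> a
assert len(ws) == 2
assert max(len(w) for w in ws) <= 3
```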
\hypertarget{walk-splitting}{%
\subsection{Walk Splitting}\label{walk-splitting}}
In this subsection, a function to split/divide a walk \(w\) from \(x\)
to \(z\) into subwalks, \(w_1\) and \(w_2\), is given. Such a division
of \(w\), of type \Cref{type:walk-division}, is handy e.g.~for proving
statements where the induction is not on the structure but on the
length of the walk.
\vspace{1mm}
\begin{equation}\label[type]{type:walk-division}
\sum_{(y~:~\ensuremath{\mathsf{N}}_{G})} \sum_{(w_1~:~\mathsf{W}_{G}(x,y))}\sum_{(w_2~:~\mathsf{W}_{G}(y,z))} (w = w_1 \cdot w_2).
\end{equation} \vspace{1mm}
Let \(x,y,z\) be variables for nodes in \(G\) and let \(w\) be a walk
from \(x\) to \(z\), unless stated otherwise. We refer to the walk
\(w_1\) in \Cref{type:walk-division} as a prefix of \(w\) and \(w_2\)
as the corresponding suffix given \(w_1\).
\begin{definition}\label{def:prefixes}
Given two walks $p$ and $q$ with the same head, one says that $p$ is a
\emph{prefix} of $q$ if the type $\mathsf{Prefix}(p,q)$ is inhabited.
\vspace{1mm}
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{Prefix}~: \Pi\,\{x,y,z\}\,.\,\mathsf{W}_{G}(x,y) \to \mathsf{W}_{G}(x,z) → \mathcal{U} \; \mathsf{ } \\
& \mathsf{head}~:~\Pi\,\{x\,y\}\, .\,\Pi\,\{w~:~\mathsf{W}_{G}(x,y)\}\,.\,\mathsf{Prefix}(⟨x⟩,w)\\
& \mathsf{by\mbox{-}edge}
: \Pi\,\{x\,y\,z\,k\}\,.\,\Pi\,\{e~:~\ensuremath{\mathsf{E}}_{G}(x,y)\} \\
&\hspace{12.5mm} .\ \Pi\,\{p~:~\mathsf{W}_{G}(y,z)\}\,.\,\Pi\,\{q~:~\mathsf{W}_{G}(y,k)\} \\
&\hspace{12mm} \to \mathsf{Prefix}(p, q) \to \mathsf{Prefix}(e ⊙ p, e ⊙ q)\\
\end{aligned}
\end{equation*}
\vspace{.5mm}
\end{definition}
\begin{lemma}\label{lem:find-suffix} Given a prefix $w_1$ for a walk
$w$, witnessed by $t~:~\mathsf{Prefix}(w_1, w)$, there is a term of
type \Cref{type:suffix} named $\mathsf{suffix}(w_1, w, t)$, referred
to as the suffix of $w$ given $w_1$.
\begin{equation}\label[type]{type:suffix}
\sum_{(w_2~:~\mathsf{W}_{G}(y,z))}\, (w = w_1 \cdot w_2) .
\end{equation}
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2022}
For brevity, we skip the trivial cases for $w_1$ and $w$. The
remaining cases are proved by induction; first, on $w_1$, and secondly
on $w$. The resulting nontrivial case occurs when $w_1 = e ⊙ p$, $w =
e ⊙ q$ and $t:\mathsf{Prefix}(p,q)$ for two walks $p$ and $q$. By
the induction hypothesis applied to $p,q$, and $t$, the term
$\mathsf{suffix}(p,q,t)$ is obtained, from which one gets the suffix
walk $w_2$ along with a proof $i~:~q = p \cdot w_2$. Thus, the
required term is the pair $(w_2, \mathsf{ap}(e ⊙\mbox{-},i))$.\qedhere
\end{linkproof}
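As an informal computational illustration of \Cref{lem:find-suffix} (outside the Agda development linked above), one can model a walk as the list of its visited nodes, so that $\langle x\rangle$ becomes \texttt{['x']}; the encoding and the function names below are ours, not the paper's.

```python
# Illustrative model only: a walk is the list of its visited nodes,
# so ['x'] is the trivial walk at x, and composition glues two walks
# at their shared endpoint.

def compose(w1, w2):
    """Walk composition w1 · w2; requires end(w1) == head(w2)."""
    assert w1[-1] == w2[0]
    return w1 + w2[1:]

def suffix(w1, w):
    """Given a prefix w1 of w, return w2 such that w == w1 · w2."""
    assert w[:len(w1)] == w1, "w1 must be a prefix of w"
    return w[len(w1) - 1:]

w = ['x', 'y', 'z', 'k']
w1 = ['x', 'y']
w2 = suffix(w1, w)
assert w2 == ['y', 'z', 'k']
assert compose(w1, w2) == w       # the equation w = w1 · w2
```

Note that a trivial prefix yields the whole walk as its suffix, matching the $\mathsf{head}$ case of \Cref{def:prefixes}.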
We now encode the case where the walk \(w\) is divided at the first
occurrence of the node \(y\), using the type family
\(\mathsf{SplitAt}(w,y)\) defined in \Cref{def:type-splitat}. The
corresponding method to inhabit the type \(\mathsf{SplitAt}(w, y)\) is
the function given in \Cref{def:view-splitat}, assuming the node set
in the graph is discrete. This walk splitting encoding is implicitly
used in several parts of the proof of \Cref{thm:hom-normalisation}.
\begin{definition}\label{def:type-splitat} The type
$\mathsf{SplitAt}(w, y)$ is the inductive type defined as:
\vspace{.5mm}
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{SplitAt}\ \{x\,z\} (w~:~\mathsf{W}_{G}(x,z))\,(y~:~\ensuremath{\mathsf{N}}_{G}) :\; \mathcal{U} \; \mathsf{ } \\
& \mathsf{nothing}~:~\Pi\,\{x\,y\}\,.\,\Pi\,\{w~:~\mathsf{W}_{G}(x,y)\}\\
&\hspace{12mm} \to (y \not \in w)\\
&\hspace{12mm} \to \mathsf{SplitAt}(w,y)\\
& \mathsf{just} : \Pi\,\{x\,y\}\,.\,\Pi\,\{w~:~\mathsf{W}_{G}(x,y)\}\\
&\hspace{5.5mm} \to (p : \mathsf{W}_{G}(x,y)) \\
&\hspace{5.5mm} \to \mathsf{Prefix}(p, w) \to (y \not \in p)\\
&\hspace{5.5mm}\to \mathsf{SplitAt}(w,y)\\
\end{aligned}
\end{equation*}
\vspace{.5mm}
\end{definition}
\begin{lemma}\label{def:view-splitat} The type $\mathsf{SplitAt}(w,y)$
is inhabited if the node set of the graph is discrete.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2064}
By induction on the structure of the walk.
\begin{enumerate}
\item If the walk is trivial, then the required term is
$\mathsf{nothing}(\mathsf{id})$, since, by definition, membership
in a trivial walk is the empty type $\mathbb{0}$.
\item If the walk is the composite $(e ⊙ w)$ with $e :
\ensuremath{\mathsf{E}}_{G}(x,y')$ and $w~:~\mathsf{W}_{G}(y',z)$, we ask whether
$y$ is equal to $x$ or not.
\begin{enumerate}
\item If $y = x$ then the required term is
$\mathsf{just}(\langle y
\rangle,\mathsf{head},\mathsf{id})$.
\item If $y \neq x$ then by the induction hypothesis on $w$ and
$y$, the following cases need to be considered.
\begin{enumerate}
\item If the case is $\mathsf{nothing}$, then there is enough evidence
that $y\not \in w$ and we use for the required term
the $\mathsf{nothing}$ constructor.
\item Otherwise, there is a prefix $w_1$ for $w$ with evidence
$p~:~\mathsf{Prefix}(w_1, w)$ and a proof $r : y \not \in w_1$.
Using $r$ and the fact $x \neq y$, we can construct
$r' : y \not \in (e ⊙ w_1)$. Then, the term that we are
looking for is $\mathsf{just}(e ⊙ w_1,
\mathsf{by\mbox{-}edge}(p), r')$ of type
$\mathsf{SplitAt}(e ⊙ w, y)$, as required in the
conclusion. \qedhere
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{linkproof}
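On the same list encoding of walks (ours, not the formalisation's), the splitting function of \Cref{def:view-splitat} can be sketched as follows; \texttt{None} plays the role of the $\mathsf{nothing}$ constructor and a pair plays the role of $\mathsf{just}$.

```python
def split_at(w, y):
    """Split walk w (a list of nodes) at the first occurrence of y,
    mirroring SplitAt: None encodes `nothing`, a (prefix, suffix)
    pair encodes `just`.  Membership follows the paper's convention
    that a node is in a walk when it is the source of one of its
    edges, so a trivial walk contains no nodes."""
    if y not in w[:-1]:
        return None
    i = w.index(y)                 # first occurrence of y
    return w[:i + 1], w[i:]

pre, suf = split_at(['x', 'a', 'y', 'b', 'y', 'z'], 'y')
assert pre == ['x', 'a', 'y'] and suf == ['y', 'b', 'y', 'z']
assert 'y' not in pre[:-1]         # first occurrence: y is not in the prefix
assert split_at(['x', 'z'], 'q') is None
```

The prefix returned is $y$-free, matching the evidence $y \not\in p$ demanded by the $\mathsf{just}$ constructor of \Cref{def:type-splitat}.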
\subsection{Normal Forms for Walks}\label{sec:loop-reduction-relation}
In this subsection, a reduction relation in
\Cref{def:loop-reduction-relation} is established on the set of walks
of equal endpoints. Some cases considered by such a relation are
illustrated in \Cref{fig:loop-reduction}. This relation provides a way
to remove loops from walks in a graph with a discrete set of nodes.
The notion of normal form for walks presented in this work is based on
the loop reduction relation in \Cref{def:normal-form}.
\begin{figure}
\centering
\begin{tikzcd}
& {} & \bullet_{x} &[-2em] {} &[3em] {} &[-3em]\bullet_{x} &{} &{}\\
\bullet_{x} &[2em] \bullet_{y} & \bullet_{z} & {} &{} &\bullet_{y} &\bullet_{z}
\arrow[from=1-3, to=1-3, loop]
\arrow["e"{description}, from=2-1, to=2-2]
\arrow["p", curve={height=-18pt}, from=2-2, to=2-1]
\arrow["q", from=2-2, to=2-3]
\arrow["q", from=2-6, to=2-7]
\arrow["{\xi_1}", maps to, squiggly, from=1-4, to=1-5]
\arrow["{\xi_3}", maps to, squiggly, from=2-4, to=2-5]
\end{tikzcd}
\caption{The rules $\xi_1$ and $\xi_3$ of the loop-reduction relation
in \Cref{eq:reduction-relation}.}
\label{fig:loop-reduction}
\end{figure}
The following definitions establish a few type families to encode
walks of a certain basic structure---for example, nontrivial walks and
loops---necessary for the formalisation.
\begin{definition}
Let $x,y~:~\ensuremath{\mathsf{N}}_{G}$ and $w :\mathsf{W}_{G}(x,y)$.
\begin{enumerate}
\item The walk $w$ is a loop whenever the head is equal to the end, i.e.
$\mathsf{Loop}(w)$.
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{Loop}~:~\, \Pi\,\{x,y\}\,.\,\mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
& \mathsf{is\mbox{-}loop}: \,\Pi \{x\,y\}.\,\Pi\{w~:~\mathsf{W}_{G}(x,y)\}\\
&\hspace{10mm} → x = y \to \mathsf{Loop}(w)
\end{aligned}
\end{equation*}
\item The walk $w$ is trivial if its length is zero, i.e. $\mathsf{Trivial}(w)$.%
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{Trivial}~:~\, \Pi\,\{x,y\}\,.\,\mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
& \mathsf{is\mbox{-}trivial}: \,\Pi \{x\,y\}.\,\Pi\{w : \mathsf{W}_{G}(x,y)\}\\
&\hspace{12.5mm} → \mathsf{length}(w) = 0 \to \mathsf{Trivial}(w)
\end{aligned}
\end{equation*}
\item A walk $w$ is \emph{nontrivial} if it has at least one edge, i.e.
$\mathsf{NonTrivial}(w)$.
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{NonTrivial} : \, \Pi\,\{x,y\}\,.\,\mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
& \mathsf{has\mbox{-}edge}\,:\,\Pi \{x\,y\,z\}.\,\Pi\{w : \mathsf{W}_{G}(y,z)\}\\
&\hspace{14mm}\to(e : \ensuremath{\mathsf{E}}_{G}(x,y)) → \mathsf{NonTrivial}(e ⊙ w).
\end{aligned}
\end{equation*}
\item A walk $w$ does not \emph{reduce} if $\mathsf{NoReduce}(w)$.
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{NoReduce} : \, \Pi\,\{x,y\}\,.\,\mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
& \mathsf{is}\mbox{-}\mathsf{dot} : \Pi\{x\}\,.\,\mathsf{NoReduce}(⟨ x ⟩) \\
& \mathsf{is}\mbox{-}\mathsf{edge} : \Pi\{x\,y\}.\,\Pi\,\{e : \ensuremath{\mathsf{E}}_{G}(x,y)\} \\
&\hspace{10.5mm} → (x ≠ y) → \mathsf{NoReduce}(e ⊙ ⟨ y ⟩)
\end{aligned}
\end{equation*}
\item A walk $w$ is a \emph{nontrivial loop} if $\mathsf{NonTrivialLoop}(w)$.
\end{enumerate}
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{NonTrivialLoop} : \, \Pi\,\{x,y\}\,.\,\mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
&\mathsf{is}\mbox{-}\mathsf{loop} : \Pi\{x\,y\,z\}\,.\{e : \ensuremath{\mathsf{E}}_{G}(x,y)\}\\
&\hspace{10mm} → (p : x = z )\,→\,(w : \mathsf{W}_{G}(y,z))\\
&\hspace{10mm} → \,\mathsf{NonTrivialLoop}(e ⊙ w)
\end{aligned}
\end{equation*}
\end{definition}
\begin{lemma} Given $x,y : \ensuremath{\mathsf{N}}_{G}$ and $u: \mathsf{W}_{G}(x,y)$, the
following claims hold.
\begin{enumerate}
\item If $x \neq y$ then $\mathsf{NonTrivial}(u)$.
\item If $\mathsf{NonTrivial}(u)$ then $x \in u$.
\item Given $z : \ensuremath{\mathsf{N}}_{G}$, if $\mathsf{NonTrivial}(u)$ and
$v : \mathsf{W}_{G}(y,z)$ then $\mathsf{NonTrivial}(u \cdot v)$.
\end{enumerate}
\end{lemma}
\noindent Remember that a reduction relation \(R\) on a set \(M\) is
an irreflexive binary relation on \(M\). If \(R\) is a reduction
relation, we use \(xRy\) to refer to the pair \((x,y)\) in \(R\). If
\(xRy\) then one says that \(x\) \emph{reduces} to \(y\) or simply
\(x\) \emph{reduces}.
\begin{definition}\label{def:loop-reduction-relation} The
\href{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2358}{\emph{loop-reduction}
relation} ($\rightsquigarrow$) on walks is
\Cref{eq:reduction-relation}.
\begin{equation}\label[type]{eq:reduction-relation}
\begin{aligned}
\mathsf{\textbf{data}} & \;(\rightsquigarrow) : \, \Pi\,\{x,y : \ensuremath{\mathsf{N}}_{G}\}.\mathsf{W}_{G}(x,y) \to \mathsf{W}_{G}(x,y) → \mathcal{U} \; \mathsf{ } \\
& ξ₁ : \Pi\,\{x\,y\}\,.\,(p : \mathsf{W}_{G}(x,y))\,(q : \mathsf{W}_{G}(x,y)) \\
&\hspace{3mm} → \mathsf{NonTrivialLoop}(p) → \mathsf{Trivial}(q)
\\
&\hspace{3mm} → p \rightsquigarrow q \\
& ξ₂ : \Pi\,\{x\,y\,z\}\,.\,(e : \ensuremath{\mathsf{E}}_{G}(x,y))\,(p, q : \mathsf{W}_{G}(y,z))
\\
&\hspace{3mm} → ¬\,\mathsf{Loop}(e ⊙ p) → x ≠ y\\
&\hspace{3mm}
→ (p \rightsquigarrow q)
→ (e ⊙ p) \rightsquigarrow (e ⊙ q)
\\
& ξ₃ : \Pi\,\{x\,y\,z\}\,.\, (e : \ensuremath{\mathsf{E}}_{G}(x,y))\,(p : \mathsf{W}_{G}(y,x))\\
&\hspace{3mm} → (q : \mathsf{W}_{G}(x,z)) \\
&\hspace{3mm} → ¬\,\mathsf{Loop} ((e ⊙ p) \cdot q)
→ \mathsf{Loop} (e ⊙ p)\\
&\hspace{3mm} → \mathsf{NonTrivial}(q)
\\
&\hspace{3mm} → (w : \mathsf{W}_{G}(x,z))
→ w = (e ⊙ p) \cdot q \\
&\hspace{3mm}
→ w \rightsquigarrow q
\end{aligned}
\end{equation}
The following gives the intuition behind each of the data
constructors above.
\begin{enumerate}
\item The rule ξ₁ is \say{a nontrivial loop reduces to the trivial
walk of its endpoint}.
\item The rule ξ₂ is \say{the relation ($\rightsquigarrow$) is right
compatible with edge concatenation}.
\item The rule ξ₃ is \say{the relation ($\rightsquigarrow$) removes
right attached loops}.
\end{enumerate}
\end{definition}
\begin{remark} The data constructors in \Cref{eq:reduction-relation}
follow a design principle to avoid certain unification problems
occurring in dependently typed programs \citep{conorgreenslime, plfa}.
\end{remark}
\begin{definition}
The relation $(\rightsquigarrow^{*})$ is the reflexive and transitive
closure of the relation $(\rightsquigarrow)$ in
\Cref{def:loop-reduction-relation}.
\end{definition}
\begin{lemma}\label{lem:nf} Given $x,y : \ensuremath{\mathsf{N}}_{G}$ and $p,q :
\mathsf{W}_{G}(x,y)$, the following claims hold:
\begin{enumerate}
\item\label{lem:nf-positive} If $x ∈ q$ and $p
\rightsquigarrow^{*} q$ then $x ∈ p$.
\item\label{lem:nf-length} If $p \rightsquigarrow q$ then
$\mathsf{length}(q) < \mathsf{length}(p)$.
\end{enumerate}
\end{lemma}
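On the list encoding of walks used in the earlier sketches (again, an informal analogue rather than the Agda development), a single reduction step cuts the loop between the first repeated occurrence of a node; the strict length decrease of \Cref{lem:nf-length} is then immediate.

```python
def reduce_step(w):
    """One loop-reduction step on a walk given as a node list: cut
    the loop between the first repeated node occurrence (a list-level
    analogue of the ξ-rules); return None when no node repeats, i.e.
    when no rule applies."""
    seen = {}
    for i, v in enumerate(w):
        if v in seen:
            return w[:seen[v]] + w[i:]   # drop the loop seen[v] .. i
        seen[v] = i
    return None

p = ['x', 'y', 'x', 'z']         # starts with the loop x → y → x
q = reduce_step(p)
assert q == ['x', 'z']
assert len(q) < len(p)           # the length strictly decreases
assert reduce_step(['x', 'x']) == ['x']   # a loop edge reduces to ⟨x⟩
assert reduce_step(['x', 'z']) is None    # already loop-free
```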
One can prove that our reduction relation in
\Cref{def:loop-reduction-relation} satisfies the progress property,
similarly to the proof for the simply typed lambda calculus in Agda
\citep[§2]{plfa}. The evidence that a walk reduces is encoded using
the following predicate.
\begin{definition}\label{def:Reduce}
Given a walk $p : \mathsf{W}_{G}(x,y)$,
\hspace*{5mm}
\begin{equation*}
\mathsf{Reduce}(p) :\equiv \sum_{(q~:~\mathsf{W}_{G}(x,y))} (p \rightsquigarrow q).
\end{equation*}
\end{definition}
The predicate \(\mathsf{Normal}\) defined in \Cref{def:normal-form} is
the evidence that a walk is a quasi-simple walk that can no longer
reduce.
\begin{definition}\label{def:normal-form} Given a walk $p$, one states
that $p$ is in \emph{normal form} if $\mathsf{Normal}(p)$. If
$p\rightsquigarrow^{*} q$ and $q$ is in normal form, we refer to $q$ as
the normal form of $p$.
\begin{equation*}\label{eq:normal-form}
\mathsf{Normal}(p) :\equiv \mathsf{isQuasi}(p) \times ¬\,\mathsf{Reduce}(p).
\end{equation*}
\end{definition}
\begin{lemma}
Being in normal form for a walk is a proposition.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2479}
It follows from \Cref{lem:finiteness-closure-property,lem:being-simple-is-prop}.\qedhere
\end{linkproof}
\begin{example} \label{No-reduce-no-step} The most basic normal forms
for walks are the trivial walks and the one-edge walks with
different endpoints. Given a walk $w$ and a term of
$\mathsf{NoReduce}(w)$, one can easily show that the walk $w$ is
in normal form.
\end{example}
\begin{definition}\label{def:progress} Given nodes $x$ and $y$ in a graph
$G$, we encode the fact that a walk either reduces or is in normal form
by using the inductive data type $\mathsf{Progress}$.\\[2mm]
\begin{equation*}
\begin{aligned}
\mathsf{\textbf{data}} & \; \mathsf{Progress}\,\{x\,y\}\;(p : \mathsf{W}_{G}(x,y)) : \, \mathcal{U} \; \mathsf{ } \\
& \mathsf{step}\,:\,\mathsf{Reduce}(p) → \mathsf{Progress}(p)\\
& \mathsf{done}\,:\,\mathsf{Normal}(p) → \mathsf{Progress}(p)
\end{aligned}
\end{equation*}
\vspace{.5mm}
\end{definition}
\begin{theorem}
\label{thm:normalisation}
Given a graph $G$ with a discrete node set, there
exists a reduction for each walk to one of its normal forms, i.e.
\Cref{eq:predicate-P} is inhabited for all $w :
\mathsf{W}_{G}(x,z)$.
\begin{equation}\label[type]{eq:predicate-P}
\sum_{(v~:~\mathsf{W}_{G}(x,z))} (w \rightsquigarrow^{*} v) × \mathsf{Normal}(v).
\end{equation}\vspace*{-2mm}
\end{theorem}
\begin{remark} The reduction relation $(\rightsquigarrow)$ has the
termination property: there is no infinite reduction sequence, since
the length of each walk in a chain $w_1\rightsquigarrow
w_2\rightsquigarrow w_3\rightsquigarrow \cdots$ decreases at each
reduction step. See also \Cref{lem:well-founded-walk-relation}.
\end{remark}
\begin{corollary}
\label{thm:progress} Given a graph $G$ with a discrete node set,
and a walk $w$ of type $\mathsf{W}_{G}(x,y)$ for two $x,y :
\ensuremath{\mathsf{N}}_{G}$, the following claims hold.
\begin{enumerate}
\item The type $\mathsf{Reduce}(w)$ is decidable.
\item The proposition $\mathsf{Normal}(w)$ is decidable.
\item The walk $w$ progresses in the sense of \Cref{def:progress}.
\end{enumerate}
\end{corollary}
For simplicity, the proofs of \Cref{thm:normalisation,thm:progress}
are omitted. Neither of them requires the law of excluded middle.
However, if we want to construct the normal form for a walk, the node
set of the graph has to be discrete. In the case of
\Cref{thm:normalisation}, its proof can use the same reasoning given
for the proof of \Cref{thm:hom-normalisation}.
\section{The Notion of Walk Homotopy}\label{sec:homotopy-normalisation}
\begin{figure*}[!htb]
\includegraphics[width=0.9\textwidth]{ipe-images/walk-homotopy.pdf}
\caption{Three homotopies between two walks from $x$ to
$y$ in a graph embedded in the sphere. In each case, the arrow
$(\Downarrow)$ indicates the face and the direction in which the
corresponding walk deformation is performed. Composing, from left to
right, the homotopies from each figure yields a homotopy between the
two highlighted walks, $w_1$ and $w_2$.}
\label{fig:walk-homotopies}
\end{figure*}
This section introduces the notion of homotopy for walks, denoted by
\((\sim_{\mathcal{M}})\). We define such a relation in
\Cref{def:congruence-relation} as a congruence relation on the
category induced by the endofunctor $\mathsf{W}$ on the corresponding
graph. Because homotopy for walks depends on the surface in which the
graph is embedded, it is necessary to first define an embedding of
graphs in a surface.
A map/embedding of a graph is a cellular decomposition of the surface
where the graph is embedded. This topological definition also requires
defining what a surface is. To avoid this, we consider instead a
combinatorial approach in \Cref{def:graph-map} based on the work by
Edmonds and Tutte \citep{Tutte1960, Tutte1963}. A more complete
description of graph maps can be found in \citep[§3]{gross}.
Given a graph \(G\), the graph formed by taking the same node set of
\(G\) and the edge set as the type \(\ensuremath{\mathsf{E}}_{G}(x,y) + \ensuremath{\mathsf{E}}_{G}(y,x)\)
for \(x,y:\ensuremath{\mathsf{N}}_{G}\) is denoted by \(U(G)\) and referred to as the
\emph{symmetrisation} of \(G\).
\begin{definition}\label{def:graph-map} A map for a graph $G$
of type $\mathsf{Map}(G)$ is a local \emph{rotation system} at each
node in $U(G)$.
\begin{align*}
\mathsf{Map}(G) &:≡ \prod_{(x~:~\ensuremath{\mathsf{N}}_{G})} \mathsf{Cyclic}\left(\sum_{(y~:~\ensuremath{\mathsf{N}}_{G})}\ensuremath{\mathsf{E}}_{U(G)}(x,y) \right).
\end{align*}
\end{definition}
Given a map \(\mathcal{M}\), the \emph{faces} of \(\mathcal{M}\) are
the regions obtained by the cellular decomposition of the
corresponding surface by \(\mathcal{M}\). We omit
\href{https://jonaprieto.github.io/synthetic-graph-theory/lib.graph-embeddings.Map.Face.html\#2355}{the
formal type} of faces herein, so as not to distract the reader from
the goals of this paper. The type of faces requires proper attention
\citep{planarityHoTT}. Put briefly, a face is a cyclic walk in the
embedded graph without repeating nodes and without edges inside
\citep{gross}. The corresponding data of a face is a cyclic subgraph
\(A\) in \(U(G)\) and a function \(f : A \to N_{G}\) that picks nodes
in \(A\). Consequently, for each face \(\mathcal{F}\) given by
\(\langle A, f\rangle\), there are at least two quasi-simple walks in
\(U(G)\) associated with \(\mathcal{F}\) for every node-pair. Given
\(x,y:\ensuremath{\mathsf{N}}_{G}\), the corresponding walks given by \(\mathcal{F}\)
are, namely, the clockwise and counter-clockwise closed walks in
\(U(G)\), denoted by \(\mathsf{cw}_{A}(x,y)\) and
\(\mathsf{ccw}_{A}(x,y)\), respectively. If the endpoints are equal,
the trivial walk \(\langle x\rangle\) must also be considered.
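As a concrete illustration of rotation systems and their faces (under our own dictionary encoding, not the formal type of faces omitted above), the classical face-tracing procedure recovers the face cycles of a combinatorial map.

```python
def trace_faces(rotation):
    """Enumerate the face cycles of a rotation system: from the dart
    (u, v), the next dart on the same face is (v, w), where w follows
    u in the cyclic order at v (one of the two standard conventions)."""
    darts = {(u, v) for u in rotation for v in rotation[u]}
    faces, seen = [], set()
    for d in sorted(darts):
        if d in seen:
            continue
        face, (u, v) = [], d
        while (u, v) not in seen:
            seen.add((u, v))
            face.append((u, v))
            ring = rotation[v]
            u, v = v, ring[(ring.index(u) + 1) % len(ring)]
        faces.append(face)
    return faces

# A triangle drawn in the plane/sphere: an inner and an outer face.
triangle = {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': ['a', 'b']}
assert len(trace_faces(triangle)) == 2   # V - E + F = 3 - 3 + 2 = 2
```

Each traced face is a closed walk in the symmetrised graph, in line with the description of faces as cyclic walks given above.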
\begin{figure}[!htb]
\centering
\[\begin{tikzcd}[column sep=normal]
{\bullet_{x}} & {\bullet_{f(a)}} && {\bullet_{f(b)}} & {\bullet_{y}}
\arrow["{w_1}", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{\mathsf{ccw}_\mathcal{F}(a,b)}"', curve={height=18pt}, from=1-2, to=1-4]
\arrow["{w_2}", from=1-4, to=1-5]
\arrow[""{name=1, anchor=center, inner sep=0}, "{\mathsf{cw}_\mathcal{F}(a,b)}", curve={height=-18pt}, from=1-2, to=1-4]
\arrow[shorten <=5pt, shorten >=5pt, Rightarrow, from=1, to=0]
\end{tikzcd}\] \caption{Given a face $\mathcal{F}$ of the map
$\mathcal{M}$, we illustrate here $\mathsf{hcollapse}$, one of the
four constructors of the homotopy relation on walks in
\Cref{def:congruence-relation}. The arrow $(\Downarrow)$ represents a
homotopy of walks.}
\label{fig:constructors-for-homotopic-walks}
\end{figure}
\subsection{Homotopy of Walks}\label{sec:homotopy-of-walks}
\begin{definition}\label{def:congruence-relation} Let $w₁,w₂$ be two
walks from $x$ to $y$ in $U(G)$. The expression
$\HomWalk{\mathcal{M}}{w₁}{w₂}$ denotes that one can \emph{deform}
$w_1$ into $w_2$ along the faces of $\mathcal{M}$, as illustrated in
\Cref{fig:walk-homotopies}. We regard the evidence of such a
deformation as a walk homotopy between $w_1$ and $w_2$, of type
$\HomWalk{\mathcal{M}}{w₁}{w₂}$. The relation $(\sim_{\mathcal{M}})$
has four constructors. The first three, $\mathsf{hrefl}$,
$\mathsf{hsym}$, and $\mathsf{htrans}$, state that homotopy of walks
is an equivalence relation. The fourth constructor, illustrated in
\Cref{fig:constructors-for-homotopic-walks}, is the
$\mathsf{hcollapse}$ function that establishes the walk homotopy:
\begin{equation*}\label{eq:collapse}
\HomWalk{\mathcal{M}}{(w₁ \cdot
\mathsf{ccw}_{\mathcal{F}}(a,b) \cdot w₂)} {(w₁ \cdot
\mathsf{cw}_{\mathcal{F}}(a,b) \cdot w₂)},
\end{equation*}
supposing one has the following,
\begin{itemize}
\item[(i)] a face $\mathcal{F}$ given by $\langle A, f \rangle$ of the
map $\mathcal{M}$,
\item[(ii)] a walk $w₁$ of type $\mathsf{W}_{U(G)}(x,f(a))$ for a node $x$ in $G$
with a node $a$ in $A$, and
\item[(iii)] a walk $w₂$ of type $\mathsf{W}_{U(G)}(f(b),y)$ for a node $b$ in $A$
with a node $y$ in $G$.
\end{itemize}
\end{definition}
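The effect of $\mathsf{hcollapse}$ can be illustrated on the list encoding of walks used earlier: one boundary of a face is replaced by the other inside a walk. Representing the face boundaries as explicit node segments is our simplification, not part of the formal definition.

```python
def hcollapse(walk, ccw, cw):
    """Replace one occurrence of the face boundary `ccw` (a node
    segment) by its clockwise counterpart `cw`, an informal analogue
    of deforming w1 · ccw · w2 into w1 · cw · w2."""
    assert ccw[0] == cw[0] and ccw[-1] == cw[-1]   # shared endpoints
    n = len(ccw)
    for i in range(len(walk) - n + 1):
        if walk[i:i + n] == ccw:
            return walk[:i] + cw + walk[i + n:]
    return walk                  # no occurrence: walk is unchanged

w = ['x', 'a', 'm', 'b', 'y']
assert hcollapse(w, ['a', 'm', 'b'], ['a', 'n', 'b']) \
    == ['x', 'a', 'n', 'b', 'y']
```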
The following lemma shows how to compose walk homotopies horizontally
and vertically. We consider a map \(\mathcal{M}\) for a graph \(G\)
and nodes \(x\), \(y\), and \(z\), where \(w\), \(w_1\),
and \(w_2\) are walks from \(x\) to \(y\).
\begin{lemma} \label{lem:whiskering}
\hspace*{5cm}
\begin{enumerate}
\item (Right whiskering) Let $w_3$ be a walk of type
$\mathsf{W}_{U(G)}(y,z)$. If $\HomWalk{\mathcal{M}}{w₁}{w₂}$ then $\HomWalk{\mathcal{M}}{(w₁
\cdot w₃)}{(w₂ \cdot w₃)}$.
\begin{center}
\begin{tikzcd}
\bullet_{x} & \bullet_{y} & \bullet_{z} & [-5mm]{\color{darkblue} \to} &[-3mm]\bullet_{x} & \bullet_{z}
\arrow[""{name=0, anchor=center, inner sep=0}, "{w_1}", curve={height=-12pt}, from=1-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{w_2}"', curve={height=12pt}, from=1-1, to=1-2]
\arrow["{w_3}", from=1-2, to=1-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{w_1 \cdot w_3}", curve={height=-12pt}, from=1-5, to=1-6]
\arrow[""{name=3, anchor=center, inner sep=0}, "{w_2 \cdot w_3}"', curve={height=12pt}, from=1-5, to=1-6]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=2, to=3]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=0, to=1]
\end{tikzcd}
\end{center}
\item (Left whiskering) Let $p₁,p₂$ be walks of type
$\mathsf{W}_{U(G)}(y,z)$. If $\HomWalk{\mathcal{M}}{p₁}{p₂}$ then $\HomWalk{\mathcal{M}}{(w
\cdot p₁)}{(w \cdot p₂)}$.
\begin{center}
\begin{tikzcd}
\bullet_{x} & \bullet_{y} & \bullet_{z} &[-5mm]{\color{darkblue} \to} &[-3mm]\bullet_{x} & \bullet_{z}
\arrow["w", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{p_1}", curve={height=-12pt}, from=1-2, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{w\cdot p_1}", curve={height=-12pt}, from=1-5, to=1-6]
\arrow[""{name=2, anchor=center, inner sep=0}, "{w \cdot p_2}"', curve={height=12pt}, from=1-5, to=1-6]
\arrow[""{name=3, anchor=center, inner sep=0}, "{p_2}"', curve={height=12pt}, from=1-2, to=1-3]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=1, to=2]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=0, to=3]
\end{tikzcd}
\end{center}
\item (Full whiskering) Let $p₁,p₂$ be walks of type
$\mathsf{W}_{U(G)}(y,z)$. If $\HomWalk{\mathcal{M}}{w₁}{w₂}$ and
$\HomWalk{\mathcal{M}}{p₁}{p₂}$, then $\HomWalk{\mathcal{M}}{(w₁ \cdot p₁)}{(w₂ \cdot
p₂)}$.
\begin{center}
\begin{tikzcd}
\bullet_{x} & \bullet_{y} & \bullet_{z} & [-5mm]{\color{darkblue} \to} &[-3mm]\bullet_{x} & \bullet_{z}
\arrow[""{name=0, anchor=center, inner sep=0}, "{w_1}", curve={height=-12pt}, from=1-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{w_2}"', curve={height=12pt}, from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{p_1}", curve={height=-12pt}, from=1-2, to=1-3]
\arrow[""{name=3, anchor=center, inner sep=0}, "{p_2}"', curve={height=12pt}, from=1-2, to=1-3]
\arrow[""{name=4, anchor=center, inner sep=0}, "{w_1 \cdot p_1}", curve={height=-12pt}, from=1-5, to=1-6]
\arrow[""{name=5, anchor=center, inner sep=0}, "{w_2 \cdot p_2}"', curve={height=12pt}, from=1-5, to=1-6]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=2, to=3]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=4, to=5]
\arrow[shorten <=3pt, shorten >=3pt, Rightarrow, from=0, to=1]
\end{tikzcd}
\end{center}
\end{enumerate}
\end{lemma}
\subsection{Homotopy Walks in the Sphere}\label{sec:homotopy-walks-in-sphere}
In topology, the sphere is simply connected: any walk on the sphere
can be freely deformed/contracted into any other walk sharing the
same endpoints. This topological property of the sphere motivates the
predicate in \Cref{def:spherical-map}, which establishes the
conditions necessary for embedding a graph into a sphere. Later, we
show an alternative definition for graphs with a discrete node set in
\Cref{def:spherical-map-simple}. Given a distinguished face in a
connected graph, being spherical for a graph embedding serves to
establish elementary planarity criteria for graphs
\citep{planarityHoTT}.
\begin{definition}\label{def:spherical-map} Given a graph $G$, a map
$\mathcal{M}$ for $G$ is \emph{traditionally} spherical if
\Cref{eq:traditional-spherical} is inhabited.
\begin{equation}\label[type]{eq:traditional-spherical}
\prod_{(x,y~:~\ensuremath{\mathsf{N}}_{G})} \prod_{(w₁, w₂~:~\mathsf{W}_{U(G)}(x,y))}
∥\HomWalk{\mathcal{M}}{w₁}{w₂} ∥.
\end{equation}
\end{definition}
To prove a given map is spherical following \Cref{def:spherical-map},
one must consider the set of all possible walk-pairs for each
node-pair. This is not easy, since the type of walks forms an
infinite set, unless the walks satisfy some additional property. We
therefore propose an alternative formulation for spherical maps based
on \Cref{def:loop-reduction-relation}. Any walk is homotopic to its
normal form, and only quasi-simple walks can be in normal form. By
removing the \say{redundancy} created by loops in the graph, a more
convenient definition of spherical maps is obtained for graphs with a
discrete node set, see \Cref{def:spherical-map-simple}. Furthermore,
using \Cref{thm:hom-normalisation}, we show in
\Cref{lem:two-spherical-map-definition-are-equivalent} that both
definitions are equivalent for graphs with a discrete node set.
\begin{definition}\label{def:spherical-map-simple} Given a graph $G$,
a map $\mathcal{M}$ for $G$ is spherical if the type
\Cref{eq:quasi-spherical} is inhabited.
\begin{equation}\label[type]{eq:quasi-spherical}
\begin{split}
\prod_{(x,y~:~\ensuremath{\mathsf{N}}_{G})}\prod_{(w₁, w₂~:~\mathsf{W}_{U(G)}(x,y))}\,
\mathsf{isQuasi}(w₁)\,&\times\,\mathsf{isQuasi}(w₂)\\[-5mm]
&\to\, ∥ \HomWalk{\mathcal{M}}{w₁}{w₂} ∥.
\end{split}
\end{equation}
\end{definition}
We will only refer to spherical maps as maps that follow
\Cref{def:spherical-map-simple}, unless stated otherwise. It is
straightforward to prove that loops are homotopic to the corresponding
trivial walk if a spherical map is given.
\begin{lemma}
\label{lem:loop-edges-homotopic-to-trivial-walks} Given a
graph $G$, a spherical map $\mathcal{M}$ and $x : \ensuremath{\mathsf{N}}_{G}$, it
follows that $\| (e ⊙ ⟨ x ⟩) \sim_{\mathcal{M}} ⟨x⟩ \|$ for all
$e : \ensuremath{\mathsf{E}}_{U(G)}(x,x)$.
\end{lemma}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2923}
Apply $\mathcal{M}$ to the walks $(e ⊙ ⟨ x ⟩)$ and $⟨x⟩$.\qedhere
\end{linkproof}
\begin{theorem}
\label{thm:hom-normalisation} Given a graph $G$ with a
spherical map $\mathcal{M}$ and discrete set of nodes, for any
walk $p : \mathsf{W}_{U(G)}(x,z)$, there exists a normal form of
$p$, denoted by $\mathsf{nf}(p)$, such that $p$ is merely
homotopic to $\mathsf{nf}(p)$, in the sense of
\Cref{def:congruence-relation}. \end{theorem}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2957}
\label{proof:thm:hom-normalisation}
Given a walk $p$ in $U(G)$ from $x$ to $z$ of length $n$, we will
construct a term of type $Q(\mathcal{M},x,z,p)$ defined as follows.%
\begin{equation*}\label{eq:predicate-Q}
Q(\mathcal{M}, x,z,w) :\equiv \hspace{-7mm}\sum_{(v~:~\mathsf{W}_{U(G)}(x,z))}
\hspace{-3mm}(w \rightsquigarrow^{*} v)\,\times\,\mathsf{Normal}(v)\,\times\,\| w\sim_{\mathcal{M}}v \|.
\end{equation*}%
The proof is done by using strong induction on $n$.
\begin{itemize}
\item Case $n$ equals zero. The walk $p$ is the trivial walk
$\langle x \rangle$, and it is then in normal form and also, by
$\mathsf{hrefl}$, homotopic to itself.
\item Case $n$ equals one. The walk $p$ is a one-edge walk. We
then ask if $x = z$.
\begin{enumerate}
\item If $x = z$, the walk $p$ reduces to the trivial walk
$\langle x\rangle$ by $\xi_1$. Applying $\mathcal{M}$, one
obtains evidence of a homotopy between $p$ and $\langle x
\rangle$, as the two walks are quasi-simple.
\item If $x \neq z$, the one-edge walk $p$ is its own normal form
and homotopic to itself by $\mathsf{hrefl}$.
\end{enumerate}
\item Assuming that $Q(\mathcal{M}, x', z', w)$ holds for any walk
$w$ from $x'$ to $z'$ of length $k \leq n$, we must prove
$Q(\mathcal{M}, x, z, p)$ when the length of $p$ is $n + 1$.
\item Therefore, let $p$ be a walk $(e~⊙~w)$ where $e~:~\ensuremath{\mathsf{E}}_{U(G)}(x,y)$ and the walk $w~:~\mathsf{W}_{U(G)}(y,z)$ is of length
$n$. The following cases need to be considered concerning the
equality $x = y$.
\begin{enumerate}
\item If $x = y$ then by the induction hypothesis applied to $w$,
one obtains the normal form $\mathsf{nf}(w)$ of the walk $w$,
along with $r : w \rightsquigarrow^{*} \mathsf{nf}(w)$ and $h_1 : \| w
\sim_{\mathcal{M}} \mathsf{nf}(w)\| $. We ask if $x = z$ to see if
$p$ is a loop.
\begin{enumerate}
\item \label{item-x-is-z} If $x = z$ then the walk $p$ reduces
to the trivial walk $\langle x \rangle$ by $\xi_1$. By
applying $\mathcal{M}$ to the quasi-simple walk
$\mathsf{nf}(w)$ and $\langle x \rangle$, $h_2~:~\|~\mathsf{nf}(w)~\sim_{\mathcal{M}}~\langle~z~\rangle~\|$ is
obtained. It remains to show that $p$ is homotopic to $\langle
x \rangle$. Because being homotopic is a proposition, the
propositional truncation in $h_1$ and $h_2$ can be eliminated
to get access to the corresponding homotopies. The required
walk homotopy is as follows.
\begin{equation*}
\begin{array}{ll}
p \equiv (e ⊙ w) &\\
{\color{white} p}\sim_{\mathcal{M}} e ⊙ \mathsf{nf}(w) &(\mbox{By \Cref{lem:whiskering} and }h_1)\\
{\color{white} p}\sim_{\mathcal{M}} e ⊙ \langle z \rangle &(\mbox{By \Cref{lem:whiskering} and }h_2)\\
{\color{white} p}\sim_{\mathcal{M}} \langle x \rangle &(\mbox{By \Cref{lem:loop-edges-homotopic-to-trivial-walks} applied to }\mathcal{M}).\\
\end{array}
\end{equation*}
\item If $x \neq z$ then the walk $p$ reduces to
$\mathsf{nf}(w)$ by the following calculation using
$h_1$.
\begin{equation*}
\begin{array}{ll}
p \equiv (e ⊙ w) &\\
{\color{white} p}\equiv (e ⊙ ⟨ x ⟩) \cdot w &(\mbox{By def. of walk composition})\\
{\color{white} p}\rightsquigarrow^{*} w &(\mbox{By }\xi_{3})\\
{\color{white} p}\rightsquigarrow^{*} \mathsf{nf}(w) &(\mbox{By }r\mbox{}).\\
\end{array}
\end{equation*}
\end{enumerate}
\item
\label[proof]{proof:split-at-in:thm:hom-normalisation} If $x \neq
y$, then we split $w$ at $x$ using \Cref{def:view-splitat}. Hence,
two cases have to be considered: whether $x$ is in $w$ or not, see
\Cref{def:type-splitat}.
\begin{enumerate}
\item If $x \in w$, then there is a node $k$ in $G$ and walks
$w_1~:~\mathsf{W}_{U(G)}(y,k)$ and
$w_2 : \mathsf{W}_{U(G)}(k,z)$ such that
$\gamma : w = w_1 \cdot w_2$, along with evidence that $x \not
\in w_1$ by \Cref{def:view-splitat}. By the induction
hypothesis applied to $w_1$ and to $w_2$, we obtain the normal
forms $\mathsf{nf}(w_1)$ and $\mathsf{nf}(w_2)$, and the terms
$r_i : w_i \rightsquigarrow^{*} \mathsf{nf}(w_i)$ and
$h_i : \| w_i \sim_{\mathcal{M}} \mathsf{nf}(w_i) \|$ for
$i=1,2$. The following cases concern whether $x = z$ or
not.
\begin{enumerate}
\item If $x = z$, the walk $p$ reduces to $\langle x
\rangle$ by the rule $\xi_1$. To show that $p$ is
homotopic to $\langle x \rangle$, let $s_1$ and $s_2$ of
type, respectively, $\| p \sim_{\mathcal{M}} \mathsf{nf}(w_2) \|$ and
$\|~\mathsf{nf}(w_2) \sim_{\mathcal{M}} ⟨ x ⟩ \|$, as given below.
Assuming one has the terms $s_1$ and $s_2$, by elimination of
the propositional truncation and the transitivity property
of walk homotopy with $s_1$ and $s_2$, the required
conclusion follows. The walk homotopy $s_1$ is as follows.
\begin{equation*}\label{eq:suble-case}
\begin{array}{ll}
p \equiv \ (e ⊙ w) &\\
{\color{white} p} \sim_{\mathcal{M}} e ⊙ (w_1 \cdot w_2) &(\mbox{By the equality }\gamma) \\
{\color{white} p} \sim_{\mathcal{M}} (e ⊙ w_1) \cdot w_2 &(\mbox{By assoc. property of }(\cdot))\\
{\color{white} p} \sim_{\mathcal{M}} (e ⊙ \mathsf{nf}(w_1)) \cdot \mathsf{nf}(w_2) &(\mbox{By \Cref{lem:whiskering}, } h_1\mbox{, and }h_2)\\
{\color{white} p} \sim_{\mathcal{M}} \langle x \rangle \cdot \mathsf{nf}(w_2) &(\mbox{By the homotopy from }h_4)\\
{\color{white} p} \sim_{\mathcal{M}} \mathsf{nf}(w_2) &(\mbox{By definition}),\\
\end{array}
\end{equation*}
where $h_4 : \| (e ⊙ \mathsf{nf}(w_1)) \sim_{\mathcal{M}} \langle x \rangle\|$
is given by applying the map $\mathcal{M}$ to
the quasi-simple walks, $(e~⊙~\mathsf{nf}(w_1))$ and
$\langle x \rangle$. The walk $(e~⊙~\mathsf{nf}(w_1))$ is
quasi-simple by \Cref{lem:e-simple-is-simple}. Also, note that
$x~\not \in~\mathsf{nf}(w_1)$ by \Cref{lem:nf} and the
assumption $x~\not \in w_1$. Finally, the remaining walk homotopy
$s_2$ is obtained by applying $\mathcal{M}$ to the quasi-simple
walks, $\mathsf{nf}(w_2)$ and the trivial walk at $x$.
\item If $x \neq z$, then the walk $p$ reduces to
$\mathsf{nf}(w_2)$ by the reduction reasoning in
\Cref{eq:p-reduces-w2}. As the walk $\mathsf{nf}(w_2)$ is
in normal form, it remains to show that $p$ is homotopic
to $\mathsf{nf}(w_2)$. However, the reasoning is similar to
that in \Cref{eq:suble-case}.
\begin{equation}\label{eq:p-reduces-w2}
\begin{array}{ll}
p \equiv \ (e ⊙ w) &\\
{\color{white} p} \rightsquigarrow^{*} e ⊙ (w_1 \cdot w_2) &(\mbox{By splitting }w\mbox{ using \Cref{def:view-splitat}}) \\
{\color{white} p} \rightsquigarrow^{*} (e ⊙ w_1) \cdot w_2 &(\mbox{By assoc. property of }(\cdot))\\
{\color{white} p} \rightsquigarrow^{*} \langle x \rangle \cdot w_2 &(\mbox{By }\xi_{2}\mbox{ applied to the loop }(e ⊙ w_1))\\
{\color{white} p} \rightsquigarrow^{*} w_2 &(\mbox{By definition of walk composition})\\
{\color{white} p} \rightsquigarrow^{*} \mathsf{nf}(w_2) &(\mbox{By the induction hypothesis}).\\
\end{array}
\end{equation}
\end{enumerate}
\item Otherwise, there is evidence that $x~\not \in w$. By the
induction hypothesis applied to $w$, the walk $\mathsf{nf}(w)$
is obtained, along with a reduction
$r~:~w~\rightsquigarrow~\mathsf{nf}(w)$ and evidence
$h~:~\|~w~\sim_{\mathcal{M}}~\mathsf{nf}(w)~\|$. The proof is
by structural induction on the walk $\mathsf{nf}(w)$.
\begin{enumerate}
\item If $\mathsf{nf}(w)$ is the trivial walk
$\langle y \rangle$, then the walk $p$ reduces either to
$\langle x \rangle$, if $x~=~z$, or to the walk
$(e ⊙ \langle z \rangle)$, if $x \neq z$. Either way, it is
possible to construct the corresponding homotopies, similarly
to \Cref{item-x-is-z}.
\item If the walk $\mathsf{nf}(w)$ is the composite walk $(u~⊙~v)$ for
$u : \ensuremath{\mathsf{E}}_{U(G)}(y, y')$, $v : \mathsf{W}_{U(G)}(y',z)$ and nodes
$y',z~:~\ensuremath{\mathsf{N}}_{G}$, then we ask if $x~=~z$.
\begin{itemize}
\item If $x=z$ then the walk $p$ reduces to the trivial walk
$\langle x \rangle$ by $\xi_1$. It remains to show that the walk
$(e~⊙~\mathsf{nf}(w))$ is homotopic to $\langle x \rangle$. To see
this, the spherical property of the map $\mathcal{M}$ is applied.
Note that the walk $(e~⊙~\mathsf{nf}(w))$ is quasi-simple by
\Cref{lem:e-simple-is-simple}, as $x~\not \in~\mathsf{nf}(w)$ by
\Cref{lem:nf} applied to the assumption $x \not \in w$.
\item If $x\neq z$ then the walk $p$ reduces to the walk
$(e~⊙~\mathsf{nf}(w))$ by $\xi_2$. By the propositional truncation
elimination applied to the evidence of \Cref{lem:whiskering} and
to the homotopy $h$, one can obtain evidence that the walk
$(e ⊙ w)$ is homotopic to $(e ⊙ \mathsf{nf}(w))$. It remains to show that the composite walk
$(e ⊙ \mathsf{nf}(w))$ is in normal form. By \Cref{lem:e-simple-is-simple},
this walk is quasi-simple. By case
analysis on the possible reductions using \Cref{def:loop-reduction-relation}, one
proves that this walk does not reduce. Therefore, $(e ⊙ \mathsf{nf}(w))$ is in normal form.
{\qedhere}
\end{itemize}
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{itemize}
\end{linkproof}
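The normalisation argument above works by repeatedly cutting inner loops out of a walk. Ignoring the homotopy evidence that the type-theoretic proof carries along, the underlying set-level computation can be sketched as follows; the list representation of a walk by its visited nodes and the function name are ours, not part of the Agda development.

```python
def nf(walk):
    """Remove inner loops from a walk, given as the list of its visited nodes.

    Whenever a node reappears later in the walk, the segment between the two
    occurrences is a loop and is cut out (as the rules xi_1/xi_2 collapse
    loops to trivial walks). The result visits each node at most once,
    mirroring the quasi-simple normal forms used in the paper.
    """
    out = []
    for node in walk:
        if node in out:
            # cut the inner loop: keep only the prefix up to the first occurrence
            out = out[:out.index(node) + 1]
        else:
            out.append(node)
    return out
```

For instance, the walk `['x', 'y', 'x', 'z']` normalises to `['x', 'z']`, and the loop `['x', 'y', 'x']` collapses to the trivial walk `['x']`.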
\begin{corollary}\label{lem:two-spherical-map-definition-are-equivalent}
The two spherical map definitions, \Cref{def:spherical-map} and
\Cref{def:spherical-map-simple}, are equivalent when considering
graphs with discrete set of nodes.
\end{corollary}
\begin{linkproof}[]{https://jonaprieto.github.io/synthetic-graph-theory/CPP2022-paper.html\#2999}
The definitions in question are propositions. Thus, it is only
necessary to show that they are logically equivalent.
\begin{enumerate}
\item Every spherical map by \Cref{def:spherical-map-simple} is a
spherical map with additional data in the sense of
\Cref{def:spherical-map}.
\item Let $\mathcal{M}$ be a spherical map by
\Cref{def:spherical-map-simple}. To see $\mathcal{M}$ also satisfies
\Cref{def:spherical-map}, let $w_1$ and $w_2$ be two quasi-simple
walks from $x$ to $y$. We must now exhibit evidence that $w_1$ is
homotopic to $w_2$. By \Cref{thm:hom-normalisation}, a walk homotopy
$h_1$ between $w_1$ and the normal form $\mathsf{nf}(w_1)$ exists.
Similarly, one can obtain a term $h_2$ of type $\| w_2 \sim_{\mathcal{M}} \mathsf{nf}(w_2) \|$.
\begin{equation}\label{eq:eq-spherical-notions}
\begin{array}{ll}
w_{1} \sim_{\mathcal{M}} \mathsf{nf}(w_{1}) &(\mbox{By } h_1\mbox{ from \Cref{thm:hom-normalisation}}) \\
{\color{white} w_{1} } \sim_{\mathcal{M}} \mathsf{nf}(w_{2}) &(\mbox{By } h_3\mbox{ from \Cref{def:spherical-map-simple}})\\
{\color{white} w_{1} } \sim_{\mathcal{M}} w_{2} &(\mbox{By } h_2\mbox{ from \Cref{thm:hom-normalisation}}).\\
\end{array}
\end{equation}
On the other hand, recall that walks in normal form are
quasi-simple walks by definition. Therefore, it is possible to get
$h_3 : \| \mathsf{nf}(w_1) \sim_{\mathcal{M}}\mathsf{nf}(w_2) \|$ by
applying the spherical property of the map $\mathcal{M}$ to
$\mathsf{nf}(w_1)$ and $\mathsf{nf}(w_2)$. By the elimination of the
propositional truncation applied to $h_1$, $h_2$, and $h_3$, the
required evidence of a homotopy between $w_1$ and $w_2$ can be
obtained, as stated in \Cref{eq:eq-spherical-notions}.\qedhere
\end{enumerate}
\end{linkproof}
\section{Related Work}\label{sec:related-work}
Outside type theory, the idea of considering homotopy for
graph-theoretical concepts is not new. There are several proposals of
a concept of homotopy for graphs using discrete categorical
constructions \citep{Grigoryan2014}.
Many of these constructions use the \(\times\mbox{-}\text{homotopy}\)
notion, defined as a relation based on the categorical product of
graphs in the Cartesian closed category of undirected graphs. Since a
walk of length \(n\) in a graph \(G\) is simply a morphism between a
path graph \(P_n\) into \(G\), the notion of homotopy for walks is
there defined as homotopy between graph homomorphisms. The looped path
graph \(I_n\) is used to define the homotopy of these morphisms---in a
manner similar to the interval \([0,1]\) for the concept of homotopy
between functions in homotopy theory. As a source of more results, it
is possible to endow the category of undirected graphs with a
\(2\)-category structure by considering homotopies of walks as
\(2\)-cells, as described by Chih and Scull \citep{Chih2020}.
On the reduction relation on walks and spherical maps, this work is
related to polygraphs used in the context of higher-dimensional
rewriting systems. Recent works by Kraus and von Raumer
\citep{Kraus2021Rewriting, Kraus2020} use ideas in graph theory,
higher categories, and abstract rewriting systems to approximate a
series of open problems in HoTT. In the same vein, the internalisation
of rewriting systems and the implementation of polygraphs in Coq by
Lucas \citep{lucas2020abstract, lucas:hal-02385110} was found to be
related to Kraus and von Raumer's approach. One fundamental object in
the work by the authors mentioned above is that of an \(n\)-polygraph,
also called \emph{computad}.
An \(n\)-polygraph is a (higher-dimensional) structure that can serve,
for example, to analyse the reduction of terms to normal forms and to
compare reduction sequences in abstract term rewriting systems. The following
is a possible correspondence to relate these ideas within the context
of our work. The notion of a \(1\)-polygraph
\citep[§2]{Kraus2021Rewriting}---which is given by two sets
\(\Sigma_0\) and \(\Sigma_1\), and two functions
\(s_0, t_0 : \Sigma_1 \to \Sigma_0\)---is
\href{https://jonaprieto.github.io/synthetic-graph-theory/lib.graph-definitions.Alternative-definition-is-equiv.html}{equivalent}
to the type of graphs in \Cref{def:graph}. An \emph{object} is a node,
a \emph{reduction step} is an edge, and a \emph{reduction sequence}
\(a \rightsquigarrow^* b\) is a walk between nodes \(a\) to \(b\). A
(closed) \emph{zig-zag} is a (cycle) walk in the symmetrisation of the
graph representing the reduction relation. A (generalised)
\(2\)-polygraph \citep[Def. 25]{Kraus2021Rewriting} consists of a type
\(A\), a set of reduction steps on \(A\), and all rewriting steps
between zig-zags. Then, the notion of \(2\)-polygraph on \(A\) will
correspond to a graph \(G\) representing the type \(A\) with the set
of all walks in \(G\) and the collection of walk homotopies in the
symmetrisation \(U(G)\) for a given combinatorial map.
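The stated equivalence between 1-polygraphs and graphs can be spelled out concretely. A minimal sketch (the representation and names are illustrative, not taken from the cited formalisations):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class OnePolygraph:
    """A 1-polygraph: objects sigma0, reduction steps sigma1, and
    source/target maps s0, t0 : sigma1 -> sigma0."""
    sigma0: frozenset
    sigma1: frozenset
    s0: Callable
    t0: Callable


def edges(p: OnePolygraph, a, b):
    """Read the polygraph as a graph: the edges from node a to node b
    are the reduction steps e with s0(e) = a and t0(e) = b."""
    return {e for e in p.sigma1 if p.s0(e) == a and p.t0(e) == b}
```

Under this reading, a reduction sequence \(a \rightsquigarrow^* b\) is precisely a walk from \(a\) to \(b\) in the corresponding graph.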
Using the previous interpretation for polygraphs, one may state that a
graph with a spherical map satisfies properties such as being
\emph{terminating}, \emph{closed under congruence}, and \emph{cancelling
inverses}, and that it has a \emph{Winkler-Buchberger structure} \citep[Eq.
32-35]{Kraus2021Rewriting}. The related concept of
\emph{homotopy basis} of a \(2\)-polygraph \citep[Def.
28]{Kraus2021Rewriting} may be seen as the set obtained from
\Cref{def:spherical-map} without using propositional truncation in the
corresponding type.
On the other hand, \emph{Noetherian induction for closed zig-zags}
\citep[§ 3.5]{Kraus2021Rewriting} addresses an issue similar to the one we
investigated herein. In this work, we found that to prove certain
properties, as the normalisation theorem in
\Cref{thm:hom-normalisation} for graphs with a spherical map and a
discrete set, it was only necessary to consider (cycle) walks without
inner loops. One can prove other properties related to walk homotopies
for graphs with spherical maps by considering not only cycle walks but
arbitrary walks. This approach relies on the machinery of
quasi-simple walks in \Cref{sec:quasi-simple} and the loop reduction
relation on walks in \Cref{sec:loop-reduction-relation}. Our
loop-reduction relation is likely \emph{locally confluent} \citep[§
3.3]{Kraus2021Rewriting}, but without uniqueness of normal forms. We
leave the proof of these properties as future work because they were
not required here. We will also investigate in-depth the extent to
which the constructions given by Kraus and von Raumer, as well as by
Lucas, are not only related but applicable to our main project of
graph theory in HoTT \citep{agdaformalisation}.
Finally, on the computer formalisation side, the use of formal systems
to formalise graph-theoretical results on the computer is not a
novelty. The proof of the four-colour theorem (FCT) in Coq by Gonthier
\citep{Gonthier2008} is one famous example that works with
hypermaps---a similar notion to combinatorial maps, as defined in
\Cref{def:graph-map}. However, both the type theory and the goal of
the constructions are substantially different from our exposition.
There are other relevant projects in the field and extensive libraries
of graph theory in Coq \citep{doczkal2020}, Isabelle/HOL
\citep{Noschinski2015}, and Lean \citep{halltheoremLean}. However, to
the best of our knowledge, few efforts use a proof-relevant dependent
type theory like HoTT and a proof assistant like Agda. We find only
the work mentioned earlier by Kraus and von Raumer
\citep{Kraus2021Rewriting, Kraus2020} to be related to our Agda
development; their work contains a
\href{https://gitlab.com/fplab/freealgstr}{formalisation} of their
results in a version of the proof assistant Lean compatible with HoTT.
In other formal developments like
\href{https://unimath.github.io/doc/UniMath/d4de26f//UniMath.Combinatorics.Graph.html}{the
HoTT Coq Library} \citep{HoTTCoq},
\href{https://unimath.github.io/doc/UniMath/d4de26f//UniMath.Combinatorics.Graph.html}{the
UniMath Library} \citep{UniMath}, and
\href{https://github.com/agda/cubical/tree/9bedca9b94d41b8efe9fb541ceb0d158464e9497/Cubical/Data/Graph}
{the Standard Library of Cubical Agda}, only the basic definitions are
available (e.g.~the type of graphs, graph homomorphisms, and
diagrams). Future work might involve porting our development into one
of these libraries.
\section{Concluding Remarks}\label{sec:conclusions}
This work proves some non-trivial results for directed multigraphs
using a proof-relevant approach in the language of homotopy type
theory. This work supports an ongoing project to define planarity
criteria and other concepts of graph theory in HoTT
\citep{types2019abstract, planarityHoTT} formalised in Agda
\citep{agdaformalisation}.
In our formalisation, each definition and theorem presented herein is
related to a term in the proof assistant Agda. This approach was
helpful to reveal and confirm that only a subset of HoTT was necessary
to perform all the proofs in this development. Precisely, we only need
the intensional Martin--Löf type theory equipped with universes,
function extensionality, and propositional truncation. No other higher
inductive type is required. It is worth noting that without
considering propositional truncation, it would not have been possible
to define our main theorems. The propositional truncation allows us to
correctly model the mere existence of an object in the theory.
This work's primary contributions are \Cref{thm:hom-normalisation},
and especially \Cref{lem:two-spherical-map-definition-are-equivalent}.
In summary, \Cref{thm:hom-normalisation} states that we can
normalise any walk to a normal form that is walk-homotopic to it
whenever the graph has a discrete node set and is embedded in the
sphere. On the other hand,
\Cref{lem:two-spherical-map-definition-are-equivalent} establishes an
equivalence between two definitions of embeddings in the sphere for
graphs with a discrete node set. Except for this last result, the
machinery shown in this paper was initially unexpected and was developed
solely to find evidence for our initial conjecture. For characterising
embeddings of finite graphs in the sphere, one needs to consider only
the finite set of walks without internal loops. Using the results
given herein, one can devise a (brute-force) algorithm to determine
whether an embedding is spherical or not. Future work will be devoted
to implementing this algorithm. To the best of our knowledge, we
provided the minimum to demonstrate
\Cref{thm:hom-normalisation,thm:normalisation,lem:two-spherical-map-definition-are-equivalent}.
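The brute-force algorithm alluded to above only needs to range over the finite set of walks without internal loops between each pair of nodes of a finite graph. A sketch of that enumeration (the adjacency-dict representation is ours; the full algorithm would additionally compare pairs of such walks for homotopy under the candidate map):

```python
def loop_free_walks(adj, start, end):
    """Enumerate all walks from start to end that visit no node twice,
    in a graph given as an adjacency dict. For a finite graph this set
    is finite, so conditions quantified over such walks can be checked
    exhaustively."""
    walks = []

    def extend(walk):
        node = walk[-1]
        if node == end:
            walks.append(list(walk))
            return
        for nxt in adj[node]:
            if nxt not in walk:  # forbid internal loops
                walk.append(nxt)
                extend(walk)
                walk.pop()

    extend([start])
    return walks
```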
\begin{acks}
The author thanks Håkon R. Gylterud for very helpful discussions on
various issues related to this paper. Thanks to Marc Bezem and the
anonymous reviewers for the comments, references and suggestions that
improved this document. Thanks to the Department of Informatics at the
University of Bergen for funding this research. Last but not least,
thanks to the Agda developer team for providing and maintaining the
proof assistant used to check the results of this work.
\end{acks}
\section{Introduction}
The region encompassing the Herbig-Haro (HH) object 30 and the stars
HL/XZ Tau, in the northeastern part of the L1551 dark cloud, lies at a
distance of 140 pc (e.g., Kenyon et al.\ 1994). It is particularly rich in
HH jets, being one of the regions where this phenomenon was first
identified (Mundt \& Fried 1983).
The HH~30 outflow is considered a prototypical jet/disk system. It presents a
clear jet/counterjet structure, which has been described, e.~g., by Mundt et
al.\ (1987, 1988) and by Graham \& Heyer (1990). The HH~30 exciting source is
an optically invisible star (Vrba, Rydgren, \& Zak 1985) highly extincted by an
edge-on disk (Burrows et al.\ 1996, Stapelfeldt et al. 1999), that extends up
to a radius of $\sim250$ AU perpendicularly to the jet, and divides the
surrounding reflection nebulosity into two lobes. Kenyon et al.\ (1998) propose
a spectral type M0 for the HH~30 star, and Cotera et al.\ (2001) estimate a
bolometric luminosity of 0.2--0.9 $L_\odot$. L\'opez et al.\ (1995, 1996) argue
that a number of knots located to the northeast of the HH~30 object are
possibly also part of the same flow, resulting in a total angular size of
$\sim7'$ for the whole outflow. Several studies have explored the spatial
morphology (both along and across the symmetry axis; Mundt et al.\ 1991, Ray et
al.\ 1996), line ratios (Mundt et al.\ 1990; Bacciotti, Eisl\"offel, \& Ray
1999), radial velocities (Raga et al.\ 1997), and proper motions (Mundt et al.\
1990; Burrows et al.\ 1996; L\'opez et al.\ 1996) of the HH~30 flow.
HL Tau, located $\sim1\rlap.'5$ north from the HH~30 source, has been one
of the most intensively studied T Tauri stars. Since the first proposal
that this star is associated with a nearly edge-on circumstellar disk
(Cohen 1983), numerous studies have been carried out in order to image the
proposed disk (e.g., Sargent \& Beckwith 1987, 1991; Lay et al.\ 1994;
Lay, Carlstrom, \& Hills 1997; Rodr\'\i guez et al.\ 1994; Mundy et al.\
1996; Wilner, Ho, \& Rodr\'\i guez 1996; Looney, Mundy \& Welch 2000),
although recent studies suggest that the flattened molecular structure
around HL Tau is part of a larger molecular shell-like structure (Welch et
al. 2000). This star has been proposed as the source of a molecular
outflow (Calvet, Cant\'o \& Rodr\'\i guez 1983; Torrelles et al.\ 1987;
Monin, Pudritz, \& Lazareff 1996). HL Tau is also the source of a
collimated jet/counterjet system, that has been extensively studied (e.g.,
Mundt et al.\ 1987, 1988, 1990, 1991, Rodr\'\i guez et al.\ 1994). L\'opez
et al. (1995, 1996), on the basis of geometrical alignment and proper
motion measurements, propose that HH 266, an extended structure $\sim4'$
to the northeast of HL Tau, might correspond to the head of the HL Tau
jet.
XZ Tau, located $\sim25''$ to the east of HL Tau, is a close binary system
composed of a T Tauri star and a cool companion separated by $0\rlap.''3$
(Haas, Leinert, \& Zinnecker 1990). XZ Tau is also the source of an
optical outflow, as revealed, e.g., by the studies of Mundt et al.\ (1988,
1990), and more recently by direct evidence of the expansion of the
nebular emission, moving away from XZ Tau, in the spectacular sequence of
{\em HST} images of Krist et al.\ (1999). XZ Tau has also been proposed as an
alternative driving source for the molecular high-velocity gas observed
towards the HL/XZ Tau complex (Torrelles et al.\ 1987), and is located at
the center of the ring- or shell-shaped molecular structure imaged by
Welch et al.\ (2000).
Despite the numerous studies carried out in this region, proper motions
have only been measured for a relatively small number of knots of the HH
jets present in this region, using images obtained under quite unequal
conditions (different filters, sensitivities or telescopes). Mundt et al.\
(1990) measured proper motions for one knot of the HH~30 jet, and five
additional knots near HL/XZ Tau by comparing a Gunn $r$ filter image with
a [SII]+H$\alpha$ image, separated by an interval of four years; Burrows
et al.\ (1996) measured proper motions for the small scale structure of
knots in the HH~30 jet/counterjet from two {\em HST} F675W images
(including [SII], H$\alpha$, and [OI] lines) separated by one year, but
because of differences in sensitivity between both images, proper motion
measurements were restricted to the region within $5''$ from the star;
finally, L\'opez et al.\ (1996) obtained proper motions for five knots of
the HH~30 jet, and for HH 266, from two [SII] images separated by about
one year, but obtained with different instrumentation.
In the present paper, we used sensitive CCD frames obtained at two epochs
with the same instrument, in order to make a detailed study of the proper
motions along the region. Our observations are complementary to those
carried out with the {\em HST} telescope, since the {\em HST} covers with
very high angular resolution the brightest region near (a few arcsec) the
exciting sources, while our observations cover a much more extended, low
brightness region, up to $\sim5'$ from the exciting sources. We measured
proper motions for a number of knots much larger than in previous studies,
in order to obtain the full kinematics of the region. Given the proximity
of the region ($D=140$ pc), a time span of one year between the two images
allowed us to measure the proper motions with enough accuracy, and made
easier the identification of the knots. We compared our proper motion
results with those obtained in previous studies in order to obtain the
time evolution of the jet structures.
For the HH~30 jet, we have also carried out high resolution spectroscopy to
measure the radial velocity along the jet, in order to obtain the full
kinematics of this object.
The paper is organized as follows. The observations and procedures used to
measure the proper motions are described in \S\ 2. In \S\ 3 we present
the results we obtained from our proper motion and spectroscopic measurements
and we compare them with those of previous observations reported in the
literature. In \S\ 4 we discuss the origin of the HH~30 jet wiggling
structure in terms of orbital motions of the jet source and precession of the
jet axis; we also discuss the large-scale structure of the HH 30 jet.
In \S\ 5 we give our conclusions.
\section{Observations and Data Analysis}
\subsection{CCD Imaging}
CCD images of the HL Tauri region, including the HH~30 jet and the HH~30-N
and HH 266 emission structures, were obtained at two different epochs
(1998 November 20 and 1999 November 8). The observations were made with
the 2.5~m Nordic Optical Telescope (NOT) at the Observatorio del Roque de
los Muchachos (La Palma, Spain). The same setup was used for the two
runs. The images were obtained with the Andalucia Faint Object
Spectrograph and Camera (ALFOSC), equipped with a Ford-Loral CCD with
$2048\times2048$ pixels with an image scale of $0\farcs188$~pixel$^{-1}$.
A square filter, with a central wavelength $\lambda=6720$~\AA\ and
bandpass $\Delta\lambda=56$~\AA\ (that includes the red [SII] 6717,
6731~\AA\ lines) was used for these observations. On the NOT, the nominal
ALFOSC field is $6\rlap.'5\times6\rlap.'5$. However, because of the square
geometry of the [SII] filter used and the misalignment with the CCD axes,
the effective field sampled through the filter was only $\sim5'\times5'$.
The images were processed with the standard tasks of the
IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatories, which are operated by the Association of Universities for
Research in Astronomy, Inc., under cooperative agreement with the National
Science Foundation.} reduction package. Individual frames were recentered
using the reference positions of several field stars, in order to correct
for the misalignments among them.
The ``first epoch'' NOT image was obtained by combining five frames of
1800~s exposure each, in order to get a deep [SII] image of 2.5 h of
integration time. Typical values of seeing for individual frames were
$1''$--$1\farcs2$, resulting in a seeing of $1''$ for the final ``first
epoch'' image. The ``second epoch'' NOT image was obtained by combining
eight frames of 1800~s exposure each in order to get a final image with a
total integration time of 4 h. For this second epoch, the typical values
of seeing for individual frames were $0\farcs7$--$1''$, resulting in a
seeing of $0\farcs7$ for the final ``second epoch'' image.
Images were not flux-calibrated. However, we have performed aperture
photometry of common field stars on the images of the two epochs, and we
estimate that the depth of the two images differs by less than 0.5 mag
in the filter used.
\subsection{Measurement of Proper Motions
\label{smpm}}
To measure the proper motions, the first- and second-epoch [SII] CCD final
images were converted into a common reference system by using the
positions of eleven field stars plus the position of the HH~30 star. The
$y$ axis is oriented at a position angle (P.A.) of $30\rlap.^{\circ}6$,
roughly coincident with the HH~30 jet axis. The GEOMAP and GEOTRAN tasks
of IRAF were used to perform a linear transformation with six free
parameters taking into account relative translation, rotations and
magnifications between frames. After the transformation, the average and
rms of the difference in position for the reference stars in the two
images was $-0.08\pm0.34$ pixels in the $x$ coordinate and $-0.18\pm0.21$
pixels in the $y$ coordinate.
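The six-parameter linear transformation fitted by GEOMAP amounts to an affine map, which can be recovered by least squares from the reference-star positions. A generic numerical sketch of such a fit (not the IRAF implementation):

```python
import numpy as np


def fit_affine(xy1, xy2):
    """Least-squares affine map sending points xy1 (N, 2) onto xy2 (N, 2):
    xy2 ~ A @ xy1 + b, with six free parameters in total (relative
    translation, rotation, and magnification along each axis)."""
    xy1 = np.asarray(xy1, float)
    xy2 = np.asarray(xy2, float)
    # design matrix [x, y, 1] for each reference star
    M = np.column_stack([xy1, np.ones(len(xy1))])
    coef, *_ = np.linalg.lstsq(M, xy2, rcond=None)  # shape (3, 2)
    A, b = coef[:2].T, coef[2]
    return A, b
```

The residuals of the reference stars under the fitted transformation give the alignment errors quoted above.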
In order to improve the signal-to-noise ratio of the diffuse knot
structures, the aligned images were convolved with Gaussian filters using
the GAUSS IRAF task. For the knots nearest to the HH~30 star (i.e., knots
A--D and counterjet), a Gaussian filter with a FWHM of $0\farcs36$ was
applied to both images. For the rest of the knots the Gaussian filter
applied to the images had a FWHM of $0\farcs63$.
We defined boxes that included the emission of the individual
condensations in each epoch, we computed the two-dimensional
cross-correlation function of the emission within the boxes, and finally
we determined the proper motion through a parabolic fit to the peak of the
cross-correlation function (see the description of this method in Reipurth
et al.\ 1992, and L\'opez et al.\ 1996). The uncertainty in the position
of the correlation peak has been estimated through the scatter of the
correlation peak positions obtained from boxes differing from the nominal
one by 0 or $\pm2$ pixels ($0\farcs38$) on any of its four sides, making a
total of $3^4=81$ different boxes for each knot. The error adopted has been
twice the rms deviation of position, for each coordinate, added
quadratically to the rms alignment error.
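A minimal numerical sketch of this measurement, using an FFT-based circular cross-correlation with a one-dimensional parabolic refinement of the peak along each axis (function names are ours, not from the reduction software), is:

```python
import numpy as np


def _parabolic(cm, c0, cp):
    """Vertex offset of the parabola through three equally spaced samples."""
    denom = cm - 2.0 * c0 + cp
    return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom


def shift_from_xcorr(box1, box2):
    """Sub-pixel (dy, dx) shift of box2 relative to box1, from the peak
    of their 2-D circular cross-correlation."""
    f = np.fft.fft2
    c = np.fft.fftshift(np.fft.ifft2(f(box2) * np.conj(f(box1))).real)
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    dy = iy - box1.shape[0] // 2 + _parabolic(c[iy - 1, ix], c[iy, ix], c[iy + 1, ix])
    dx = ix - box1.shape[1] // 2 + _parabolic(c[iy, ix - 1], c[iy, ix], c[iy, ix + 1])
    return dy, dx
```

The measured shift, multiplied by the pixel scale and divided by the epoch separation, yields the proper motion.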
For the knots near the HH~30 source (knots A--D and counterjet), the
intensity gradient in the jet direction made the use of the
correlation method unreliable. For this region, we integrated the emission across the
jet over a width of 11 pixels ($2\farcs1$) and identified the position of
each knot along the jet through a parabolic fit after baseline removal.\ A
similar procedure, but integrating the emission along the jet for a
typical width of 6 pixels ($1\farcs1$) at the position of each knot,
allowed us to measure the $x$ coordinate of the knots. For these knots we
estimated that the major source of error was the residual misalignment of
the two images, and the uncertainty adopted for the proper motions was
$\sqrt{2}$ times the rms alignment error for each coordinate. However, we
cannot rule out that residual contamination from continuum emission
constitutes an additional source of uncertainty in our proper motion
measurements of the knots nearest to the HH~30 source.
\subsection{Optical Spectroscopy}
Optical spectroscopy of the HH~30 jet and HH~30-N was acquired during 1998
December 11 and 12 using the red arm of the double-armed spectrograph ISIS
(Carter et al.\ 1994), and a Tektronics CCD detector of $1024\times1024$
pixels, at the Cassegrain focus of the 4.2~m William Herschel Telescope
(WHT) at the Observatorio del Roque de los Muchachos (La Palma, Spain).
The high resolution grating R1200R (dispersion of 0.41~\AA~pixel$^{-1}$),
centered at 6600~\AA\ and covering a wavelength range of 420~\AA\ (that
includes the H$\alpha$ and [SII] 6717, 6731 \AA\ lines), was employed. The
effective spectral resolution achieved was 0.7~\AA\
($\sim32$~km~s$^{-1}$). The angular scale was $0\farcs36$~pixel$^{-1}$.
In order to obtain the spectrum of the HH~30 jet, the $3\farcm7$ long slit
was centered on the HH~30 star with a P.A. of $30^{\circ}$. One exposure
of 600 s of the HH~30 jet was obtained on 1998 December 12, with a slit
width of $1\farcs5$.
Four exposures of 1800 s each, with a total integration time of 2~h, were
obtained on 1998 December 11 through HH~30-N. The slit was centered on
the position of knot NE (L\'opez et al.\ 1996), at a P.A. of $30^{\circ}$,
covering the emission from knots NA to NH.
The data were reduced using the standard procedures for the long-slit
spectroscopy within the IRAF package.
The spectra were not flux-calibrated. The line-of-sight velocity as a
function of position along the HH~30 jet and HH~30-N has been obtained by
fitting multiple Gaussians to the observed [SII] 6717, 6731~\AA\ and
H$\alpha$ emission line profiles, using the SPLOT task of IRAF. The
Gaussian profiles are described in terms of line center, which is
transformed into heliocentric velocity, and line width, given as the full
width at half maximum (FWHM).
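The conversion from a fitted line centre to a line-of-sight velocity is the standard Doppler relation. A minimal sketch (non-relativistic, and omitting the heliocentric frame correction, which depends on the observing date and direction):

```python
C_KMS = 299792.458  # speed of light in km/s


def doppler_velocity(lam_obs, lam_rest):
    """Non-relativistic line-of-sight velocity in km/s from the observed
    and rest wavelengths of a line (e.g. [SII] 6717, 6731 A, H-alpha)."""
    return C_KMS * (lam_obs - lam_rest) / lam_rest
```

At the 0.7~\AA\ spectral resolution quoted above, this relation gives the $\sim32$~km~s$^{-1}$ velocity resolution of the setup.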
\section{Results}
\subsection{Overall Description}
In Figure \ref{figcromo} we show the [SII] image of the overall region,
about $5'$ in size, corresponding to the first epoch. The field includes
the HH~30 source, near the southern edge of the image, which appears at
the vertex of a cone of nebulosity that apparently extends along several
tens of arcsec. The HH~30 jet crosses the image from southwest to
northeast, and its wiggling as it propagates away from its exciting source
is clearly visible. The image also includes the HH~30-N knots, near the
northern edge of the image, which have been proposed to also belong to the
HH~30 jet (L\'opez et al.\ 1995). Unfortunately only the very first knots
of the HH~30 counterjet fall inside the field.
The bright stars HL and XZ Tau and their associated jets are also included in
the observed field. A large fraction of the HL Tau counterjet is visible in the
southwestern part of the image, while the HH~266 object, proposed to be
also associated with HL Tau (L\'opez et al.\ 1995) falls near the northeastern
corner of the image. The young stellar object LkH$\alpha$~358 is also visible
near the western edge of the image.
Faint knots, corresponding to the emission-line knots K1-K3 identified by
Mundt et al.\ (1988) (see their Fig.~1), can be seen in our images
$\sim30''$ to the southeast of XZ Tau. Additional knots, extending
$\sim40''$ to the east of K1-K3 can also be seen in our images. These
knots are connected by diffuse emission that extends further away, forming
an apparent ``ring'' with a radius of $\sim1\rlap.'5$, surrounding the
darker region observed towards the center of the frame. We detected these
knots and the diffuse emission in our images at the two epochs, but we did
not find evidence for systematic proper motions. Thus, this apparent ring
probably arises as a result of an increase in extinction towards the
center of the field, because of a foreground clump. It is interesting to
note that a molecular ring, centered near XZ Tau, has been mapped in
$^{13}$CO by Welch et al.\ (2000). The dark region at the center of our
image coincides with the eastern side of the molecular ring, suggesting
that this dark region corresponds to the increase of extinction because of
this foreground molecular cloud, with the diffuse optical emission
probably originating at the edges of this molecular structure. HL~Tau
falls close to the western side of the molecular ring and, as noted by
Welch et al.\ (2000), the large arc of scattered light that appears to
pass through HL~Tau, apparently corresponds to the western side of the
molecular structure.
\subsection{HH~30}
\subsubsection{Proper Motions}
In Table \ref{tabpm} and Figure \ref{fignot99} we give the proper motion
results obtained for the HH~30 jet using the procedures described in \S
\ref{smpm}. The proper motion velocity has been calculated assuming a
distance of 140~pc. In the proper motion calculations, the $y$ axis is at
a position angle of $30\rlap.\degr6$, in the approximate direction of the
jet. We have set the zero point of our coordinate system on the bright
knot closest to the HH 30 star, labeled as A0. We identified this knot
with the knot 95-01N of the {\em HST} image of Burrows et al.\ (1996).
These authors estimate from a fit of the {\em HST} image to a flared disk
using scattered-light models that the position of the exciting source is
shifted from 95-01N, corresponding to a position $y=-0\farcs51$.
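The conversion from measured angular displacement to tangential velocity at the adopted distance uses the standard relation $v_t = 4.74\,\mu D$, with $\mu$ in arcsec~yr$^{-1}$ and $D$ in pc (4.74~km~s$^{-1}$ corresponds to 1~AU~yr$^{-1}$). As a sketch:

```python
def tangential_velocity(mu_arcsec_per_yr, distance_pc):
    """Tangential velocity in km/s from the proper motion (arcsec/yr)
    and distance (pc); 4.74 km/s = 1 AU/yr."""
    return 4.74 * mu_arcsec_per_yr * distance_pc
```

At 140~pc, a knot displacing by $0\farcs3$~yr$^{-1}$ moves at about 200~km~s$^{-1}$, of the order of the velocities measured along the jet.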
The nomenclature used for the knots maintains the single letter names used
by Mundt et al.\ (1990) and L\'opez et al.\ (1995). However, the number
after the letter indicates only an order of distance from the HH~30 star
and no attempt has been made to be consistent with previous observations.
Thus, for instance, the group of knots B is roughly at the same distance
from the HH~30 source in the images of Mundt et al.\ (1990), L\'opez et
al.\ (1995), and in this paper; however, knots B1, B2, and B3 identified
in the present images do not have a clear correspondence with knots B1 and
B2 of Mundt et al.\ (1990) (see the discussion below).
The proper motion velocities obtained along the HH~30 jet range from
$\sim100$ to $\sim300$~km~s$^{-1}$, in a direction close to the jet axis.
Near the source the velocities appear to be higher, with velocities of
$\sim200$ to $\sim300$~km~s$^{-1}$ for knots A to D (corresponding to
distances from $3''$ to $20''$ from the source; first panel of Fig.\
\ref{fignot99}), except for the edges of the B condensation (knots B1 and
B3), where the velocities are lower ($\sim150$~km~s$^{-1}$). The velocity
of the central knot (B2) is similar to the other knots in the A--D region.
The velocity decreases beyond knot E1 (at $35''$, with a velocity of
$\sim200$~km~s$^{-1}$), reaching values of $\sim150$~km~s$^{-1}$ for knots
E2 to E4 (from $40''$ to $50''$), and $\sim100$~km~s$^{-1}$ for knots G
(at $75''$). Finally, the velocity increases slightly up to
$\sim150$~km~s$^{-1}$ for knots H to I (from $90''$ to $135''$). It should
be noted that up to knot E3, the direction of the velocity is in general
quite close to the jet axis ($|\Delta\mathrm{P.A.}|\la10\arcdeg$), while
beyond knot E3 some of the knots present a significant velocity component
westwards, perpendicularly to the jet axis
($|\Delta\mathrm{P.A.}|\simeq30\arcdeg$--$40\arcdeg$), so that the
decrease in velocity is still more noticeable for the velocity component
along the jet axis.
We have only been able to measure the proper motions of two knots of the
counterjet. For knot Z2 (at $4.5''$ from the source) the velocity is
$\sim250$~km~s$^{-1}$, similar to that of knot A2 in the jet. However, we
measured a significantly lower velocity, of $\sim70$~km~s$^{-1}$, for knot Z1
(at $2.5''$).
We think that this abnormally low value of the velocity, as compared with
the values measured for the remaining knots of the jet and the counterjet,
is likely caused by contamination by scattered light and might not
represent a measure of a true acceleration in the counterjet.
The proper motions measured for the HH~30-N knots are on the average
aligned with the direction of the HH~30 jet. However, the values obtained
both for the velocity and the position angle show a dispersion
considerably larger than in the knots of the HH~30 jet (see last panel in
Fig.\ \ref{fignot99} and Table \ref{tabpm}). This larger dispersion in the
measured values can be due to the fact that in the HH~30-N knots the
emission is fainter than in the HH~30 jet, but could also be a consequence
of the interaction between the head of the jet and its surroundings. The
global proper motion velocity of the HH~30-N structure is
$\sim120$~km~s$^{-1}$ with $\mathrm{P.A.}\simeq30\arcdeg$, similar to the
direction of the HH~30 jet, thus supporting the hypothesis that this group
of knots corresponds to the head of the HH~30 jet (L\'opez et al.\ 1996).
The velocity values for individual knots range from $\sim50$ to
$\sim300$~km~s$^{-1}$. The largest velocity is measured for knot NC, but
most of the knots have velocity values of $\sim50$~km~s$^{-1}$. It should
be noted that knot NF, which was identified as an HH knot in previous
works, appears very circular and compact in the higher quality images
presented here (see Fig.\ \ref{fignot99}); since its proper motion is
compatible with zero ($v_t=34\pm30$~km~s$^{-1}$), we conclude that it is
most probably a field star.
\paragraph{Comparison with Previous Observations}
L\'opez et al.\ (1996) measure proper motions for knots C/D and E of the
HH~30 jet. The directions of the proper motions we measured are roughly
consistent with those derived by L\'opez et al.\ (1996). For the HH~30 jet
we obtained a similar velocity for knot E. For knot C/D where L\'opez et
al.\ (1996) note that they obtain an absurdly large velocity of
$\sim700$~km~s$^{-1}$, we now obtain a more reasonable value of
$\sim300$~km~s$^{-1}$. Mundt et al.\ (1990) report a still lower proper
motion velocity of $\sim150$~km~s$^{-1}$ for knot C (the only knot for
which they measure the proper motion), but since they use images of two
epochs obtained through different filters, we think that our value is more
reliable. Burrows et al.\ (1996), from {\em HST} observations, measure proper
motion velocities of $\sim100$--$250$~km~s$^{-1}$ in the inner $\sim5''$ of
the jet and velocities of $\sim250$~km~s$^{-1}$ for distances of $\la1''$
in the counterjet. These values are roughly in agreement with the
velocities we measured for knots A2, A3 in the jet and Z2 in the
counterjet.
Regarding HH~30-N, the only previous available proper motion measurements
are those of L\'opez et al.\ (1996) for knots ND, NE, and NF. Our results
confirm that the direction of the proper motion velocities of HH~30-N is
consistent with HH~30-N belonging to the HH~30 jet, thus supporting the
claim of L\'opez et al.\ (1996). However, we obtained significantly lower
values for the velocities. In particular, we conclude that knot NF is a
field star (see discussion above). Since the knots of HH~30-N are weak and
diffuse, we attribute these discrepancies to the lower sensitivity and
angular resolution of the images used by L\'opez et al.\ (1996).
In order to better illustrate the comparison with previous observations of
the positions and proper motions of the HH~30 jet knots, we plotted in
Figure \ref{figcpm} their positions as a function of time. The vertical
axis is the distance along the jet, at a position angle of
$30\rlap.\degr6$. The five vertical lines of each panel mark the epoch of
different observations and the circles indicate the position of the knots.
Knots are labeled with the last two digits of the year of the observation,
followed by the name used by each author. The vertical line with knots
labeled ``87'' corresponds to the observation carried out in 1987 January
by Mundt et al.\ (1990). Knots labeled ``93'' correspond to the
observation carried out in 1993 December by L\'opez et al.\ (1995). Knots
labeled ``95'' were observed with the Hubble Space Telescope in 1995
January by Burrows et al.\ (1996). {\em HST} observations carried out in
1994 February and 1995 March reported in Burrows et al.\ (1996) and Ray et
al.\ (1996) have not been included for clarity of the figure. The two
vertical lines labeled ``98'' and ``99'' correspond to the present
observations with the NOT telescope. The dotted lines indicate the proper
motions of the knots, as measured from the 1998 and 1999 observations,
taken from Table~\ref{tabpm}. The shaded area along the proper motion line
of knot 99-C indicates the formal uncertainty of the proper motion
measurement for this knot, as it propagates with time. The uncertainties
for other knots are similar to this case, shown here as an example. We
have not included the data of the observations described in L\'opez et
al.\ (1996), observed in 1995 February since they are of much lower
quality.
The extrapolation of the proper motions measured from our NOT 1998-1999
data to the epoch of the previous observations intersects the vertical
lines at points that, in general, are in agreement with the positions of
observed knots. However, this correspondence is better for knots located
farther away from the exciting source, while for knots closer to the
source the correspondence is more complex. As the right panel of Figure
\ref{figcpm} illustrates, a backward extrapolation of our proper motion
measurements for knots E1-E4 and G1-G2 (at distances of $\sim30''$ to
$\sim70''$ from the source) provides a good agreement with the positions
of the knots E and G actually observed in both the 1987 and 1993 images.
The extrapolated position of knot 99-D1 falls very close to the position
of knot 95-14N in the {\em HST} image, while the extrapolated position of
knot 99-D2 falls outside the region where the {\em HST} observations are
sensitive enough to detect the jet emission. In the 1993 image, the
extrapolation of the proper motions of knots 99-D1 and 99-D2 falls in a
region where it is difficult to separate the emission into several knots,
and it is designated as 93-D1, while the D2 knot in the 1993 image would
probably be displaced too far away (and, thus, too faint) in the 1998-1999
images to be detected. Interestingly, our proper motion extrapolation
suggests that knots D (at $\sim18''$ from the source) in the 1998-1999
images arise from knot C (at $\sim14''$ from the source) in the 1987
image, while knots D in that image would have faded away in our 1998-1999
images. Knot C in our 1998-99 images appears to correspond to knot 93-C,
and perhaps to knot 95-13N. For knots B1--B3 in the 1998-1999 images, the
extrapolation suggests a correspondence with knots 11N and 12N in the 1995
image, and knots B1 and B2 in the 1987 image. As we noted previously, we
derived abnormally low values for the proper motions of knots 99-B1 and
99-B3; an extrapolation of their positions using the proper motion value
obtained for B2 would result in a one-to-one correspondence between knots
99-B1, 99-B2 and 99-B3 with knots 95-10N, 95-11N, and 95-12N,
respectively. Also, within current uncertainties, knot 87-B2 could
correspond either to the extrapolation of knot 99-C or knot 99-B3.
For knots within $5''$ from the source, the {\em HST} image provides much
more detail than our ground based 1998-99 images, thus making the backward
extrapolation of the NOT proper motions less useful, since each 1998 and
1999 knot corresponds to more than one 1995 knot. However, from the inspection of
Figure \ref{figcpm}, and using the proper motions reported in Burrows et
al.\ (1996), it is clear that at least some of the 1995 {\em HST} knots do
appear to correspond to structures observed in our ground based images.
For example, both the proper motion velocity of 170~km~s$^{-1}$ reported
by Burrows et al.\ (1996) for knot 95-06N\,+\,95-07N, and our proper
motion velocity of $213\pm43$~km~s$^{-1}$ obtained for knot 99-A3,
suggest that their extrapolated positions coincide. Also the proper
motion velocity of 260~km~s$^{-1}$ reported for knot 95-02N, leads its
extrapolated position to coincide with that of knot 99-A2, for which we
obtained the same value of the proper motion velocity. Knot 99-A1, which
only appears in our 1999 NOT image, had probably still not emerged at the
epoch of the {\em HST} observations. The extrapolated position for the
counterjet knot 95-02S with a proper motion velocity of 280~km~s$^{-1}$
reported by Burrows et al.\ (1996) appears to correspond with the position
of knot 99-Z2, while it is not clear if knot 95-01S corresponds to knot
99-Z1, given its proximity to the source that makes difficult its proper
identification in our NOT images.
Globally, it seems that near the source the interaction of the knots with
the medium is stronger and their fading is more important; thus, the
changes in the intensity and shape of the structures make the
cross-identification of knots between different epochs more difficult.
\subsubsection{Radial Velocity\label{rvel}}
Since close to the HH~30 star the [SII] emission of the jet is stronger
than the H$\alpha$ emission, being also less contaminated by the emission
of the star, the heliocentric radial velocities along the HH~30 jet have
been derived from the [SII] line profiles. The [SII] 6717, 6731~\AA\
spectra were averaged over regions comparable to the size of the knots and
the heliocentric radial velocities obtained for each averaged region (from
knot F3 to I1) are listed in Table \ref{tabrjet} and shown in Figure
\ref{figdens}a. The lines are unresolved with our spectral resolution of
32~km~s$^{-1}$.
The heliocentric velocity for the region within $40''$ of the source
(knots F3 to E2+E3) remains almost constant, with a value of
$\sim16$~km~s$^{-1}$, similar to that of the surrounding cloud
(19~km~s$^{-1}$; see Mundt et al.\ 1990). The heliocentric velocity
increases with distance from 16~km~s$^{-1}$ at $40''$ (knots E2+E3) to
47~km~s$^{-1}$ at $125''$ from the source (knot I1), defining a radial
velocity gradient of $\sim0.4$~km~s$^{-1}$~arcsec$^{-1}$ between knots G
and I1. These values are consistent with the preliminary results of Raga
et al.\ (1997), although for the knots close to the source our velocities
are $\sim10$--$15$~km~s$^{-1}$ lower. Using a value of $v_t=200$~km~s$^{-1}$
for the proper motions, we derive that the inclination angle, $\phi$, of
the jet with respect to the plane of the sky ranges from $4^\circ$ to
$9^\circ$ for distances to the source from $70''$ to $120''$. At smaller
distances from the source, the jet lies essentially in the plane of the
sky ($\phi\simeq0\arcdeg$). Burrows et al.\ (1996) and Wood et al.\
(1998), from a fit of the {\em HST} images to a flared disk using
scattered-light models, derive a value of $\sim8\arcdeg$ for the
inclination angle of the disk axis with respect to the plane of the sky.
However, it should be noted that, although the values obtained for the
inclination angle are similar, the derived axis of the jet points away
from the observer, while the derived axis of the disk points towards the
observer.
Since the [SII] lines are fainter than H$\alpha$ in HH~30-N (only knot NC
has been detected in [SII]), the radial velocities of HH~30-N have been
determined from the H$\alpha$ line profiles. After averaging the spectra
over regions comparable to the size of the knots, the heliocentric radial
velocity and the FWHM for each region were obtained (see Table
\ref{tabr30n}). The velocity dispersion in HH 30-N is much larger than in
the HH 30 jet, with values of the FWHM of the lines of typically
$\sim100$~km~s$^{-1}$. The values obtained for the heliocentric velocity
are blueshifted with respect to the ambient cloud, being in agreement with
our previous results (Raga et al.\ 1997). Using a value of $v_t\simeq120$
km s$^{-1}$ for the proper motions in HH30-N, we derive that the inclination
angle of the jet with respect to the plane of the sky is $\sim40\arcdeg$
(with the jet pointing towards the observer).
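The trigonometry used above to infer the inclination angles can be cross-checked with a minimal Python sketch (illustrative only, not part of the original analysis); the representative velocity values are those quoted in the text, with the jet value taken at the position of knot I1:

```python
import math

def inclination_deg(v_radial, v_tangential):
    """Inclination of the flow with respect to the plane of the sky,
    from the line-of-sight and proper motion velocity components:
    tan(phi) = |v_r| / v_t."""
    return math.degrees(math.atan2(abs(v_radial), v_tangential))

# HH 30 jet at knot I1: v_hel ~ 47 km/s, cloud at ~19 km/s, v_t ~ 200 km/s
phi_jet = inclination_deg(47.0 - 19.0, 200.0)   # ~8 deg

# HH 30-N: |v_r| ~ 100 km/s relative to the cloud, v_t ~ 120 km/s
phi_n = inclination_deg(100.0, 120.0)           # ~40 deg
```

The results reproduce the $\sim$4--9$\arcdeg$ range quoted for the outer jet and the $\sim40\arcdeg$ inclination inferred for HH~30-N.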
In order to study at higher spatial resolution the kinematical pattern we
also performed Gaussian fits at different positions inside the knots of
HH~30-N. The results are shown in Figure \ref{radial}, where the
heliocentric radial velocity is plotted as a function of projected
distance to the HH~30 star. A more detailed inspection of
Figure~\ref{radial} reveals that along knot NA the value of the
heliocentric radial velocity progressively decreases (i.e., the absolute
value of the velocity relative to the ambient cloud increases) from south
to north, with values ranging from $\sim-70$~km~s$^{-1}$ to
$\sim-90$~km~s$^{-1}$. Towards knot NC there is no clear velocity
gradient ($v_\mathrm{hel}\simeq-100$~km~s$^{-1}$), while the velocity
increases from $\sim-100$~km~s$^{-1}$ to $\sim-50$~km~s$^{-1}$ from the
southern edge of NE to the northern edge of NH. Globally, over the HH~30-N
structure, the velocity increases from south to north with an overall
gradient of $0.76$~km~s$^{-1}$~arcsec$^{-1}$. The FWHM decreases slightly
towards knot NH.
\subsubsection{Electron Density}
In addition to the kinematical information, our spectra also provide
information on the electron densities, derived from the [SII] 6717, 6731
\AA\ line ratio. Densities have been obtained assuming a temperature in
the [SII] emitting zone of $10^4$~K. Table \ref{tabrjet} and Figure
\ref{figdens}b show the electron densities in the knots of the HH~30 jet.
A high electron density has been derived for the central region (within a
few arcsec of the source), followed by a rapid decrease for the outer
knots.
Previous determinations of the electron density in HH~30 (Mundt et al.\
1990; Bacciotti et al.\ 1999) traced only the inner region ($\sim10''$) of
the jet. Our results cover a much more extended region of the jet, up to
$\sim120''$ from the source. The overall behavior of the electron
density, strongly decreasing with distance to the source, agrees with the
results of Mundt et al.\ (1990) and Bacciotti et al.\ (1999). These
authors, with higher angular resolution, measure a strong decrease of
electron density from $\sim10^4$~cm$^{-3}$ to $\sim10^3$~cm$^{-3}$ in the
inner $5''$ of the jet and counterjet; this region approximately
corresponds to the region binned for our measurement of
$3.7\times10^3$~cm$^{-3}$ at zero position offset. For the region of the
jet between $\sim5''$ and $\sim10''$ from the source, Mundt et al.\ (1990)
give a density of $\sim1000$~cm$^{-3}$ while Bacciotti et al.\ (1999)
measure a decrease from $\sim1100$~cm$^{-3}$ to $\sim370$~cm$^{-3}$,
consistent with our value of 430~cm$^{-3}$ measured at $11''$
from the source. At distances larger than $\sim20''$ the density falls to
values below $\sim100$~cm$^{-3}$.
In the spectrum of the HH~30-N region the [SII] 6717, 6731~\AA\ emission
lines were only detected at knot NC, for which an electron density of
$\sim380$~cm$^{-3}$ was derived. This increase in the electron density may
be produced by the interaction of the jet with an ambient medium of
locally enhanced density.
\subsection{HL/XZ Tau and HH 266}
In Table \ref{tabpmhl} and in Figure~\ref{fighltau} we show the proper
motion results obtained for the knots near HL/XZ~Tau and for HH~266. The
nomenclature used for the knots near HL/XZ~Tau is consistent with that
used by Mundt et al.\ (1990). These authors consider that knots HL-A and
HL-E belong to the HL Tau jet, and HL-B to HL-G belong to the HL Tau
counterjet. According to Mundt et al.\ (1990), knots A--D define another
jet emanating from a source, VLA 1, with a counterjet defined by knot J.
However, the existence of the source VLA 1 is highly doubtful, as shown by
more sensitive observations carried out by Rodr\'{\i}guez et al.\ (1994).
Our results for the proper motions show no significant differences in the
values of the velocity and position angle for knots HL-A, HL-E, A, B, and
D. Thus, our results are consistent with all these knots belonging to the
same jet, emanating from HL~Tau. Typical values are $\sim120$~km~s$^{-1}$
for the velocity and $\sim45\arcdeg$ for the P.A., similar to the position
angle of the jet axis. Proper motion measurements are more difficult for
the knots of the counterjet because many of them are weak and appear split
into several subcondensations; in fact, our results are the first
determination of proper motions in the HL~Tau counterjet. For all the
counterjet knots the proper motion velocities are pointing away from
HL~Tau. For those with a better determination of their proper motion, HL-C
and HL-G, the velocity is $\sim120$~km~s$^{-1}$, similar to the values
found for the jet knots, and the position angle is $\sim-110\arcdeg$. For
knot J we measure a very small proper motion, which does not support the
claim of Mundt et al.\ (1990) that it belongs to the VLA~1 counterjet.
Instead, the proper motion of knot J appears to point away from the
position of HL~Tau, as do those of the nearby H$\alpha$B--C knots. Perhaps this
group of knots could constitute a low velocity, poorly collimated ejection
from HL~Tau (at $\mathrm{P.A.}\simeq170\arcdeg$), although given their
very low velocities we do not discard that they constitute a nearly static
condensation.
For HH~266 we measured a proper motion velocity of $\sim100$~km~s$^{-1}$
with a P.A. of $43\arcdeg$. L\'opez et al.\ (1996) note that HH 266 lies
at a position that is aligned within only a few degrees with the direction
of the HL Tau jet ($\mathrm{P.A.}=45\arcdeg$), suggesting that HH 266
constitutes the head of the HL Tau jet. Our proper motion results give
support to this suggestion: the position angle of the HH~266 proper motion
points away from HL Tau, and the value of the velocity is similar to the
velocities measured for the knots of the HL Tau jet. However, the uncertainty
in the P.A. of the proper motion is not small enough to discard that
another object nearby to HL Tau (such as XZ Tau or even the HH~30 star)
was the driving source for HH 266.
Mundt et al.\ (1990) measure proper motions for several knots. In Table
\ref{tabhlxz} we list the values obtained by these authors. As can be seen
in the table, the direction of the proper motion velocities we have
measured for the HL Tau jet is similar to that found by Mundt et al.\
(1990). However, the values obtained by these authors are significantly
higher than ours. We attribute this discrepancy to the fact that Mundt et
al.\ (1990) derive their proper motions using images obtained with
different filters; so, we think that our values are more reliable. In
order to obtain an estimate of the proper motions with a longer time
baseline, we used the [SII] images published in Mundt et al.\ (1990) to
obtain 1987 epoch positions for several of the knots. By comparing these
positions with our NOT 1999 positions we derived proper motions with a
time baseline of 12.9 years. The results are given in Table \ref{tabhlxz}.
As can be seen in the table, the proper motions obtained in this way are
in better agreement with our NOT 1998-1999 measurements.
Since the radial velocity of the HL Tau jet with respect to the ambient
cloud is $\sim-200$ km s$^{-1}$ (Mundt et al.\ 1990), and proper motions
are $\sim120$ km s$^{-1}$, we infer that the total velocity of the HL Tau
jet is $\sim230$~km~s$^{-1}$, with the jet at an angle of $\sim60\arcdeg$
with respect to the plane of the sky (towards the observer).
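The total velocity and inclination quoted for the HL Tau jet follow directly from combining the two velocity components; a minimal Python sketch (illustrative, using the values given in the text):

```python
import math

v_r = 200.0  # km/s, |radial velocity| of the HL Tau jet w.r.t. the cloud
v_t = 120.0  # km/s, measured proper motion velocity

v_total = math.hypot(v_r, v_t)            # total flow speed, ~230 km/s
phi = math.degrees(math.atan2(v_r, v_t))  # inclination to plane of sky, ~60 deg
```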
The radial velocity of the counterjet measured by Mundt et al.\ (1990) is
$\sim100$ km s$^{-1}$, while the proper motions are similar to those of
the jet. This results in a significant difference of $\sim20\arcdeg$
between the inclination of the jet and the counterjet with respect to the
plane of the sky, at scales $\ga10''$. Pyo et al.\ (2006) also noticed a
slight asymmetry of the same sign in the velocities of the jet and the
counterjet, resulting in a difference of $\sim4\arcdeg$ between the
inclination of the jet and the counterjet, at scales of less than $2''$.
\section{Discussion on the HH~30 Jet Structure}
\subsection{On the Origin of the Wiggling of the Jet
\label{swiggling}}
As noted before, the HH~30 jet presents a wiggling morphology that is
particularly evident in the group of knots E1--E4 (see Fig.\
\ref{fignot99}b). If the wiggling was produced as the result of a true
helical trajectory of the knots as they move away from the exciting source
(e.g., by a deflection, as the jet propagates, because of the encounter
with a set of high density clumps), one would expect to see changes in the
direction of their proper motions, following a velocity pattern tangent to
the helical trajectory (e.g., one would expect knot E1 to move to the
right of the jet axis, while knots E2 and E3 to move to the left). Despite
the uncertainties in the $x$ component of the velocities, we do not see
evidence for such a systematic pattern; rather, the direction of the
proper motion vectors appears to be quite close to that of the axis of the
jet. Furthermore, when comparing, for example, the E1--E4 structure in our
images with the previous images of Mundt et al.\ (1990) and L\'opez et
al.\ (1995), a displacement can be seen of the whole knot structure,
keeping the same morphology, while a change in this morphology would be
expected if the knots were following a helical trajectory. This is
further illustrated in Figure \ref{figcpm}, where it can be seen that the
group of knots E1--E4 moves essentially as a whole from one epoch to
another, maintaining the same relative distance between knots. Thus, we
conclude that the motion of the knots is essentially ballistic and that
the observed wiggling of the jet structure is most likely produced by
variations in the direction of the ejection at the origin of the jet.
\subsubsection{Orbital Motion of the Jet Source\label{orbital}}
We will test the possibility that the observed wiggling in the HH~30 jet
results mainly from the orbital motion of the jet source around a binary
companion. Following the formulation given by Masciadri \& Raga (2002), we
will consider a ballistic jet (i.e., a jet where the fluid parcels
preserve the velocity with which they are ejected) from a star in a
circular orbit, and we will further assume that the ejection velocity
(measured in a frame moving with the outflow source) is time independent
and parallel to the orbital axis.
Let $m_1$ be the mass of the jet source, $m_2$ the mass of the companion,
and $m=m_1+m_2$ the total mass of the system. We will call $\mu$ the mass
of the companion relative to the total mass, so that
\begin{eqnarray}
m_1&=&(1-\mu)\,m, \nonumber\\
m_2&=&\mu\,m.
\label{m1m2}
\end{eqnarray}
Let $a$ be the binary separation (i.e., the radius of the relative
orbit). Therefore, the orbital radius of the jet source with respect to
the binary's center of mass (i.e., the radius of the jet source
absolute orbit) is
\begin{equation}
r_o=\mu\,a,
\label{mua}
\end{equation}
and the orbital velocity of the jet source is given by
\begin{equation}
v_o={2\pi{}r_o\over\tau_o},
\label{eqvo}
\end{equation}
where $\tau_o$ is the orbital period.
We will use a Cartesian coordinate system $(x',y',z')$, where $(x',z')$ are in
the orbital plane, with the $x'$ axis being the intersection of the orbital
plane with the plane of the sky. The $y'$ axis coincides with the orbital
axis, at an angle $\phi$ with respect to the plane of the sky. The ejection
velocity of the jet will have a component in the orbital plane due to the
orbital motion, $v_o$, and a component perpendicular to this plane,
$v_j$, assumed to be constant. In this coordinate system the
shape of the jet is given by
\begin{equation}
\frac{x'}{r_o}= \kappa\frac{|y'|}{r_o}
\sin\left(\kappa\frac{|y'|}{r_o}-\psi\right)+
\cos\left(\kappa\frac{|y'|}{r_o}-\psi\right),
\label{eq8raga}
\end{equation}
where $\kappa\equiv v_o/v_j$, and $\psi$ is the orbital
phase angle (with respect to the $x'$ axis) at the epoch of observation.
The equation for the $z'$ coordinate is obtained by substituting $\psi$ by
$\psi+\pi/2$ in Eq.\ \ref{eq8raga}. Note that the jet ($y'>0$) and
counterjet ($y'<0$) shapes will have a reflection symmetry with respect to
the orbital plane.
If $D$ is the distance from the source to the observer,
the positions $(x,y)$ measured on the observed images (i.e., in the
plane of the sky) are given by
\begin{eqnarray}
x &=& \frac{x'}{D} , \nonumber\\
y &=& \frac{y'\cos\phi-z'\sin\phi}{D}\simeq \frac{y'\cos\phi}{D},
\end{eqnarray}
the last approximation being valid for a collimated jet with a small inclination
$\phi$ at distances large
enough from the source ($y'\gg z'$). The parameters of the model are directly
related to the observables: $\kappa$ is related to the half-opening angle,
$\alpha$, of the jet cone measured in the plane of the sky,
\begin{equation}
\kappa =\tan\alpha\,\cos\phi,
\label{eqkappa}
\end{equation}
and the orbital radius $r_o$ is related to the observed
period $\lambda_y$ of the wiggles (i.e.\ the angular distance in the plane of
the sky between the positions of two successive maximum elongations),
\begin{equation}
r_o={\lambda_y\tan\alpha\over2\pi}D.
\label{eqro}
\end{equation}
In terms of these parameters, Eq.\ \ref{eq8raga} becomes
\begin{equation}
x=|y|\tan\alpha\,\sin\left[\frac{2\pi}{\lambda_y}(|y|-y_0)\right]+
\frac{\lambda_y}{2\pi}\tan\alpha\,
\cos\left[\frac{2\pi}{\lambda_y}(|y|-y_0)\right],
\end{equation}
where $y_0=\lambda_y\,\psi/2\pi$ is the offset from the origin, in the plane of
the sky, of the knot ejected when the source was at the $\psi=0$ position.
Wiggling in the HH~30 jet is most evident in the group of knots B, C, D,
and E. Therefore, we used knots A1 to E4 to determine the parameters of
the jet, and we found a very good match of the predicted shape with the
observed image of the jet for
$\alpha=1\rlap.\arcdeg43\pm0\rlap.\arcdeg12$, $\lambda_y=16''\pm1''$
(corresponding to $2240\pm140$~AU at a distance $D=140$ pc), and
$y_0=4''\pm2''$ (corresponding to $560\pm280$ AU). The result is shown in
Figure \ref{figwiggle}a. If this jet shape is translated to the epoch of
the {\em HST} observations reported by Burrows et al.\ (1996) it is also
consistent with the observed positions of the knots, thus confirming the
validity of the fit.
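The fitted jet shape in plane-of-the-sky coordinates can be evaluated with a short Python sketch (illustrative; the parameter values $\alpha=1\rlap.\arcdeg43$, $\lambda_y=16''$, and $y_0=4''$ are the fit results quoted above):

```python
import math

def jet_shape_arcsec(y, alpha_deg=1.43, lambda_y=16.0, y0=4.0):
    """Plane-of-the-sky jet shape x(y) (both in arcsec) for the orbital
    wiggling model, written in terms of the fitted observables."""
    tan_a = math.tan(math.radians(alpha_deg))
    phase = 2.0 * math.pi * (abs(y) - y0) / lambda_y
    return (abs(y) * tan_a * math.sin(phase)
            + lambda_y / (2.0 * math.pi) * tan_a * math.cos(phase))

# At |y| = y0 the phase vanishes and x reduces to (lambda_y / 2 pi) tan(alpha),
# which is the orbital radius r_o projected on the sky (~0.064 arcsec)
x_at_y0 = jet_shape_arcsec(4.0)
```

This limiting value provides an internal consistency check: the transverse offset of the jet at the phase origin is just the angular orbital radius of the source.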
Using Eq.~\ref{eqro} and the values obtained for $\alpha$ and $\lambda_y$
we derived an orbital radius $r_o=8.9\pm0.9$~AU (corresponding to
$0\rlap.''064\pm0\rlap.''006$). In addition, the fit allowed us to obtain
a more accurate value for the position angle of the axis of the jet,
$\mathrm{P.A.}=31\rlap.\arcdeg6$, in the range of offsets from the source
of $0''$ to $50''$.
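The orbital radius follows from Eq.~\ref{eqro}; a minimal numerical check (illustrative, using the fitted values quoted above, and the convention that $\lambda_y$ in arcsec times $D$ in pc gives the linear period in AU):

```python
import math

alpha = math.radians(1.43)  # half-opening angle of the jet cone
lambda_y = 16.0             # arcsec, observed period of the wiggles
D = 140.0                   # pc, distance to HH 30

# r_o = lambda_y * tan(alpha) * D / (2*pi)
r_o_au = lambda_y * math.tan(alpha) * D / (2.0 * math.pi)  # ~8.9 AU
r_o_arcsec = r_o_au / D                                    # ~0.064 arcsec
```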
Since the observed ratio of radial to proper motion velocities in the
HH~30 jet is very small (see Tables \ref{tabpm} and \ref{tabrjet}), we
infer that the inclination angle should be also very small,
$\phi\la5\arcdeg$ (\S~\ref{rvel}). Then, from the proper motion
measurements (Table
\ref{tabpm}) we obtained an estimate of the ejection velocity,
$v_j=v_t/\cos\phi\simeq200\pm50$~km~s$^{-1}$, and we
derived the remaining orbital parameters. Using Eq.~\ref{eqkappa} we
obtained the orbital velocity, $v_o=5.0\pm1.3$~km~s$^{-1}$, and
using Eq.\ \ref{eqvo} we obtained the orbital period,
$\tau_o=53\pm15$~yr. The line-of-sight component of the orbital
velocity will produce an oscillation of the radial velocity along the path
of the jet with a peak-to-peak amplitude of
$2v_o\cos\phi\simeq10.0$~km~s$^{-1}$. This value is small
compared to the spectral resolution of our observations ($\sim32$
km~s$^{-1}$), so it is not expected to produce significant oscillations in
the observed radial velocity, consistent with what is observed (see
Table~\ref{tabrjet} and Fig.~\ref{figdens}). In the case of the HH 43 jet,
noticeable radial velocity oscillations with a peak-to-peak amplitude of
16~km~s$^{-1}$ were observed with higher spectral resolution by Schwartz
\& Greene (1999).
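The orbital velocity and period quoted above follow from Eqs.~\ref{eqkappa} and \ref{eqvo}; a short Python cross-check (illustrative, adopting $\phi\simeq0$ and the values derived in the text):

```python
import math

alpha = math.radians(1.43)   # half-opening angle (plane of the sky)
phi = 0.0                    # inclination; ~0 for the HH 30 jet
v_j = 200.0                  # km/s, estimated ejection velocity
r_o_km = 8.9 * 1.495979e8    # orbital radius, AU -> km

kappa = math.tan(alpha) * math.cos(phi)   # kappa = v_o / v_j
v_o = kappa * v_j                         # orbital velocity, ~5.0 km/s
tau_o_s = 2.0 * math.pi * r_o_km / v_o    # orbital period, s
tau_o_yr = tau_o_s / 3.1557e7             # ~53 yr
```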
The total mass of the binary system is given by
\begin{equation}
\left(\frac{m}{M_\sun}\right)=
\mu^{-3}
\left(\frac{r_o}{\mathrm{AU}}\right)^3
\left(\frac{\tau_o}{\mathrm{yr}}\right)^{-2},
\label{mbin}
\end{equation}
corresponding to $m=0.25\,\mu^{-3}\,M_\sun$, for the values of
$r_o$ and $\tau_o$ derived above. For a system with two
stars of the same mass ($\mu=0.5$), each component will have a mass of
$1.0\pm0.3~M_\sun$, and the separation between the two components would
be $a=r_o/\mu=17.8$~AU ($0\rlap.''128$). Values of
$\mu<0.5$ (corresponding to the jet source being the primary) would result
in $m>2~M_\sun$, and appear unlikely, given the estimated low bolometric
luminosity of the system ($L\simeq0.2$--$0.9\,L_\sun$, which, according to
D'Antona \& Mazzitelli 1997, would correspond to stellar masses roughly in
the range 0.1--1~$M_\sun$). Smaller masses would be obtained if $\mu>0.5$
(corresponding to the jet source being the secondary), with a lower limit
of $m_2=0.25~M_\sun$ ($\mu=1$), and a separation $a=r_o$. Thus,
under this scenario we expect that the exciting source of the HH~30 jet
belongs to a close binary system, with the two components separated by
$0\rlap.''064$--$0\rlap.''128$, and the total mass of the system in the
range 0.25--2~$M_\sun$.
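The binary masses discussed above can be verified numerically from Eq.~\ref{mbin}; a minimal Python sketch (illustrative, with $r_o$ and $\tau_o$ as derived in the text):

```python
def total_mass_msun(r_o_au, tau_o_yr, mu):
    """Total binary mass (solar masses) from Kepler's third law with
    a = r_o / mu:  m = mu**-3 * (r_o/AU)**3 * (tau_o/yr)**-2."""
    return mu**-3 * r_o_au**3 / tau_o_yr**2

m_equal = total_mass_msun(8.9, 53.0, 0.5)  # equal-mass binary, ~2 Msun total
m_limit = total_mass_msun(8.9, 53.0, 1.0)  # mu = 1 lower limit, ~0.25 Msun
a_equal = 8.9 / 0.5                        # separation for mu = 0.5, ~17.8 AU
```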
\subsubsection{Precession of the Jet Axis}
An alternative possibility is that the observed wiggling of the HH~30 jet
is due to precession of the ejection axis of the jet, being driven by tidal
interactions between the disk from which the jet originates and a
companion star in a non coplanar orbit. For this model we will neglect the
orbital motion of the jet source, and we will assume that the wiggling of
the jet is the result of the changing direction of ejection of the jet.
Masciadri \& Raga (2002) show that the shape of the jet is given by
\begin{equation}
x'=y'\,\tan\beta\,
\cos\left(\frac{2\pi}{\tau_p}
\frac{|y'|}{v_j\cos\beta}-\psi\right),
\label{eq19raga}
\end{equation}
where $\beta$ is the angle between the central flow axis and the line of
maximum deviation of the flow from this axis, and $\tau_p$ is the
precession period. Note that in this case the jet ($y'>0$) and counterjet
($y'<0$) shapes will have a point symmetry with respect to the jet source.
The precession angle, $\beta$, is related to the observables $\alpha$ (the
half-opening angle of the jet cone measured in the plane of the sky) and
$\phi$ (the inclination angle of the jet axis with respect to the plane of
the sky) as,
\begin{equation}
\tan \beta = \tan \alpha \cos \phi,
\label{tanbeta}
\end{equation}
and the precession period $\tau_p$ is related to $\lambda_y$ (the
observed angular period of the wiggles) and $v_t$ (the measured proper
motion velocity, where $v_t=v_j \cos \phi$) as,
\begin{equation}
\tau_p = \frac{\lambda_y D}{v_t \cos \beta}.
\label{taup}
\end{equation}
Therefore, Eq.~\ref{eq19raga} can be rewritten in angular coordinates in
the plane of the sky and in terms of observable parameters as,
\begin{equation}
x=y\,\tan\alpha\,\cos\left[\frac{2\pi}{\lambda_y}(|y|-y_0)\right].
\end{equation}
An approximate expression relating the orbital and precession periods can
be derived from Eq.\ 24 of Terquem (1998), valid for a disk precessing as a
rigid body, by assuming that the disk surface density is uniform and that
the rotation is Keplerian,
\begin{equation}
\frac{\tau_o}{\tau_p}=
\frac{15}{32}\,\frac{\mu}{(1-\mu)^{1/2}}\,{\sigma^{3/2}\cos\beta},
\label{tauo}
\end{equation}
where $\mu=m_2/m$ is the ratio between the mass of the companion and
the total mass of the system, $\beta$ is the inclination of the orbital
plane with respect to the plane of the disk (coincident with half the
opening angle of the precession cone), and $\sigma=r_d/a$ is the
ratio of disk radius to binary separation.
From the fit to the observed shape of the jet for knots A--E we obtained
the values of $\alpha$ and $\lambda_y$ (\S~\ref{orbital}), and using
Eqs.~\ref{tanbeta} and \ref{taup} we obtain a precession angle
$\beta=1\rlap.\arcdeg42\pm0\rlap.\arcdeg12$ and a precession period
$\tau_p=53\pm15$~yr, for $v_j=200$~km~s$^{-1}$
(\S~\ref{orbital}). The expected peak-to-peak oscillation of the observed
radial velocity corresponding to this precession motion is
$2\,v_j \sin\beta \cos\phi=9.9$~km~s$^{-1}$, a value similar to
that expected in the case of pure orbital motion, and that is too small to
be detectable, given the spectral resolution of our observations. Unlike
the case of orbital motion, in the case of precession, the observables do
not tightly constrain the orbital parameters, and a number of additional
assumptions are required to infer their values. Since it is expected that
the size of the disk is truncated by tidal interaction with the companion
star in such a way that $1/4\le\sigma\le1/2$ (Lin \& Papaloizou 1993;
Artymowicz \& Lubow 1994; Larwood et al.\ 1996; Terquem et al.\ 1999), we
will adopt a value of $\sigma=1/3$, so that using Eq.\ \ref{tauo} the
orbital period can be obtained from the observables as a function of only
the parameter $\mu$. An additional constraint comes from the observed
luminosity of the source that, according to D'Antona \& Mazzitelli (1997),
suggests that the mass of the more massive of the two components should
fall in the range 0.1--1~$M_\odot$. Finally, the hypothesis that the
observed wiggling in the jet is mainly due to precession implies that the
effect on the jet opening angle produced by the orbital motion of the jet
source should be smaller than the precession angle
($v_o/v_j<\tan\beta$). According to Eq.\ \ref{eqkappa},
the orbital velocity of the jet source should be
$v_o<5$~km~s$^{-1}$ in order to fulfill this condition.
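These numbers can be reproduced with a short script. The sketch below assumes $\alpha=1\rlap.^\circ43$, $\lambda_y=16''$, $v_j=200$~km~s$^{-1}$, $\phi\simeq0$, and a distance $D=140$~pc (the usual Taurus distance, which is not quoted in this excerpt), and evaluates $\tau_p=\lambda_y D/(v_t\cos\beta)$.

```python
import math

# Numerical check of the precession-model relations, assuming
# alpha = 1.43 deg, lambda_y = 16 arcsec, v_j = 200 km/s, phi ~ 0,
# and D = 140 pc (assumed Taurus distance, not quoted in this excerpt).

AU_KM = 1.496e8                     # km per AU
YR_S = 3.156e7                      # s per yr

alpha = math.radians(1.43)          # half-opening angle in the plane of the sky
phi = 0.0                           # inclination to the plane of the sky
v_j = 200.0                         # jet velocity, km/s
v_t = v_j * math.cos(phi)           # proper-motion velocity, km/s

beta = math.atan(math.tan(alpha) * math.cos(phi))   # tan(beta) = tan(alpha) cos(phi)

lambda_y, D = 16.0, 140.0           # arcsec, pc
wavelength_au = lambda_y * D        # small angle: arcsec * pc = AU
tau_p = wavelength_au * AU_KM / (v_t * math.cos(beta)) / YR_S   # yr

dv_r = 2 * v_j * math.sin(beta) * math.cos(phi)     # peak-to-peak radial velocity

print(f"beta  = {math.degrees(beta):.2f} deg")      # close to the quoted 1.42 deg
print(f"tau_p = {tau_p:.0f} yr")                    # close to the quoted 53 yr
print(f"dv_r  = {dv_r:.1f} km/s")                   # close to the quoted 9.9 km/s
```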
Taking into account these constraints, we investigated the parameter space for
different values of $\mu$. For a given value of $\mu$ we calculated the orbital
period using Eq.~\ref{tauo}. Then, for each value of $v_o$ we derived
the value for $r_o$ by using Eq.~\ref{eqvo}, and using Eq.~\ref{mua}
the corresponding separation between the two stars. The mass of the binary
system was calculated using Eq.~\ref{mbin}, and the corresponding masses of the
two components were calculated using Eq.~\ref{m1m2}.
Following this procedure we found that the orbital velocity and
luminosity constraints lead to the result that the more massive of the two
components (with a mass between 0.1 and 1~$M_\odot$) should be the jet
source. The maximum orbital velocity of 5 km s$^{-1}$ gives the largest
values of the mass of the companion ($m_2=0.17~M_\odot$). Specifically,
for $v_o=5$~km~s$^{-1}$, the mass of the jet source should be in
the range $0.1<m_1<1~M_\odot$, resulting in a mass of the companion
$0.07<m_2<0.17~M_\odot$, a binary separation $1.04>a>0.86$ AU (corresponding to
an angular separation between
$0\rlap.''007$ and $0\rlap.''006$), and an orbital period
$2.5>\tau_o>0.8$ yr, respectively. The actual value of the
orbital velocity should be significantly lower than 5~km~s$^{-1}$,
resulting in very low values for the mass of the companion and the binary
separation. For instance, for $v_o=2$~km~s$^{-1}$, the mass of
the companion is $0.01<m_2<0.04~M_\odot$, the binary separation is
$a=0.33$ AU ($0\rlap.''002$), and the orbital period is
$0.6>\tau_o>0.2$ yr. Therefore, in the precession scenario
the companion is expected to be a brown dwarf or even a giant
exoplanet.
\subsubsection{Evidence for a Binary Exciting Source}
After the discussion in the previous section we conclude that both the orbital
motion and the precession models are feasible. In the first case, the orbital
period would be 53 yr, the expected angular separation would be
$0\rlap.''064$--$0\rlap.''128$ (9--18 AU), and the jet source is expected to be
the secondary, while the mass of the primary is expected to fall in the range
0.25--1~$M_\odot$. In the case of precession, in order to fulfill the
observational constraints, the jet source should be the primary, with a mass in
the range $\sim0.1$--1~$M_\odot$, resulting in much smaller values for the
derived parameters: the orbital period is expected to be less than $\sim1$ yr,
the mass of the companion less than a few times $\sim0.01~M_\sun$, and the
angular separation $<0\rlap.''007$ ($<1$ AU).
We want to emphasize that both scenarios are consistent with the current
observational data, and that both imply that the exciting source of the
HH~30 jet should be a close binary (separation $<0\rlap.''1$). We take the
very good agreement between the predicted and observed wiggling of the
HH~30 jet as strong (although indirect) evidence for the existence of
such a binary system. Direct evidence would require resolving the HH~30
exciting source with an angular resolution better than $\sim0\rlap.''1$ in
the case of orbital motions and better than $\sim0\rlap.''01$ in the case
that the wiggling originates from precession. Given the strong extinction
towards the source and the high angular resolution required, observations
at centimeter, millimeter, or submillimeter wavelengths will be necessary.
In the first scenario, the required angular resolution can be currently
achieved with the VLA although, unfortunately, the source appears to be
weak at centimeter wavelengths, and has not been detected yet
(Carrasco-Gonz\'alez et al. 2007). In the precession scenario, the
angular resolution required is near the limits of the expected
capabilities of ALMA.
In both scenarios the range of values inferred for the mass of the system
is consistent with the estimate for the stellar mass of
$0.45\pm0.04~M_\sun$ obtained from IRAM Plateau de Bure $^{13}$CO
observations of the disk (Pety et al.\ 2006). We also note that
Stapelfeldt et al.\ (1999) find variability in the asymmetry of the disk
suggesting a characteristic time scale of 3 yr or less, which is of the
order of the values of the orbital period derived in the precession
scenario. This coincidence should be expected if the variability of the
illumination pattern were produced by an eclipsing binary system with this
orbital period.
The two proposed scenarios could be discriminated by taking into account
that in the first case mirror symmetry between the jet and counterjet is
expected, while in the second case point symmetry is expected to be found.
Unfortunately, our images do not cover the counterjet and we cannot
discriminate between the two possibilities. We expect that future
observations will allow us to discriminate between the two scenarios.
We also note that, in both scenarios, the expected separation between the
two components of the binary system is $<18$ AU, a value much smaller than
the radius of the disk nearly perpendicular to the HH~30 jet observed with
the {\em HST} (Burrows et al.\ 1996, Stapelfeldt et al.\ 1999), which is
$\sim250$ AU. Given that the radius of any circumstellar disk associated
with the jet source should be smaller than the binary separation, this
implies that the {\em HST} disk should be a circumbinary disk and not a
circumstellar disk. Also, since the scale of the jet collimating
mechanism should be much smaller than that of the mechanism that drives
the jet wiggling, which is of the order of the binary separation ($<18$
AU), we conclude that the $\sim$250 AU disk observed with the {\em HST} is
unlikely to have a relevant role in the jet collimation, contrary to what
has been thought up to now. These results suggest that the search for
the true collimating agent of the HH~30 jet (likely a circumstellar disk
associated with the jet source) should be done at very small angular
scales, $<0\rlap.''13$ ($<18$ AU).
\subsection{The Large-Scale Structure of the HH~30 Jet}
As a general trend, we observe that the direction of the proper motions
measured for the knots of the HH~30 jet approximately coincides with that
of the geometrical axis of the jet (see Fig.\ \ref{fignot99} and Table
\ref{tabpm}). There is, however, some indication of a systematic velocity
component perpendicular to the axis of the jet, so that the resulting
proper motion velocities deviate to the right of the jet axis (i.e.,
the velocities are orientated at a P.A. that, in general, is smaller than
that of the jet axis, whose P.A. is $\sim30^\circ$). The presence of this
velocity component perpendicular to the jet axis is in agreement with the
suggestion of L\'opez et al.\ (1995) that the observed ``axial rotation''
effect in the HH~30 jet (i.e., the direction of the axis of the jet and
counterjet curves westward as one moves away from the exciting source) is
a consequence of the relative motion between the source and the
environment. Such a scenario has been modeled in detail by Cant\'o \& Raga
(1995). The proximity of the powerful L1551-IRS5 molecular outflow, to the
southeast of the HH~30 jet, could also contribute to the velocity
component perpendicular to the jet axis.
The wiggling-model fit obtained for the jet at distances smaller than
$\sim50''$ (see \S\ \ref{swiggling}) is also essentially valid for larger
scales. In Figures \ref{figcromo} and \ref{figwiggle}b we show the
resulting structure of the jet up to distances of $\sim300''$. In this fit
we introduced a slight change of $-5^\circ$ in the P.A., at a distance of
$72''$ from the source, in order to reproduce the bending of the jet (see
discussion above). As can be seen in the Figure, in this way the overall
structure of the jet is reproduced quite well, including the width of the
jet up to the distance of HH~30-N. We take this as an additional proof
that the HH~30-N knots do belong to the HH~30 jet (and that HH 266 does
not belong to the HH~30 jet). Also, we take this good agreement as
additional evidence for the presence of a binary system in the exciting
source of the HH~30 jet.
We note that the detailed oscillations of the jet are not well reproduced
for distances greater than $\sim50''$. This may be due to slight variations
in the ejection velocity (in fact, proper motions are not constant) that
would result in increasingly larger deviations of the periodic pattern as
the distance from the source increases. In fact, a change in the radial
velocity is observed at a distance of $\sim70''$ from the source,
coinciding with the $\Delta\mathrm{P.A.}=-5^\circ$ of the jet.
Finally, we point out that if one wants to explain the knot/inter-knot
pattern of the HH 30 jet as the result of source variability with a
multiple mode model (as derived by Raga \& Noriega-Crespo 1998 for the
HH~34 jet), we would need at least four modes. One of these modes would
have a $\tau_1\simeq2.5$~yr period, as derived by Burrows et al.\ (1996)
for the {\em HST} knots 02--04N. We would then need a second
$\tau_2\simeq30$--40 yr period to explain the separation between features
such as the knot that corresponds to the {\em HST} knots 06+07N and the
condensation composed by the {\em HST} knots 02--04N (see Fig.\
\ref{figcpm}). A similar period is found from the velocities and
separation between knots B and C, and knots E1 and E2 (see Figs.\
\ref{fignot99} and \ref{figcpm}). A third, $\tau_3\simeq150$ yr period is
necessary to produce the separations between knots B/C, E, G and H.
Finally, in order to produce the NA-NH knot structure (see Figs.\
\ref{figcromo} and \ref{fignot99}), one would need a fourth source
variability mode. The dynamical timescale
$\tau_\mathrm{dyn}\simeq1500~\mathrm{yr}\simeq\tau_4$ of the NA--NH knots
indicates the order of magnitude of the period of this fourth mode.
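Each of these modes implies a characteristic knot spacing $\Delta y\approx v_j\tau_i$. The sketch below converts the quoted periods into angular spacings, assuming $D=140$~pc and representative knot speeds taken from the proper motions above (both are illustrative assumptions).

```python
# Knot spacing implied by a source-variability period: Delta_y ~ v_j * tau.
# Assumes D = 140 pc and representative knot speeds (illustrative values);
# the periods tau_1..tau_4 are those quoted in the text.

AU_KM, YR_S, D_PC = 1.496e8, 3.156e7, 140.0

def knot_spacing_arcsec(v_kms, tau_yr, d_pc=D_PC):
    """Angular spacing (arcsec) of knots ejected every tau_yr years
    at v_kms km/s, seen from d_pc parsecs."""
    spacing_au = v_kms * tau_yr * YR_S / AU_KM
    return spacing_au / d_pc          # small angle: arcsec = AU / pc

for tau, v in [(2.5, 250.0), (35.0, 200.0), (150.0, 150.0), (1500.0, 150.0)]:
    print(f"tau = {tau:6.1f} yr, v = {v:.0f} km/s -> "
          f"{knot_spacing_arcsec(v, tau):6.1f} arcsec")
```

The resulting spacings range from under an arcsecond (the {\em HST} knot pattern) up to a few hundred arcseconds (the NA--NH structure), consistent with the hierarchy of scales described above.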
Even though there is little doubt that the {\em HST} knot spacing pattern is
generated very close to the outflow source (see Burrows et al.\ 1996), it
is clearly possible that the larger scale knot patterns could originate at
larger distances from the source (e.~g., through instabilities developed
along the jet beam). The question of whether or not these larger scale
knot patterns are also generated as a consequence of source variability
should be settled in the future through high angular resolution monitoring
of the HH~30 jet during the following $\sim30$ years.
\section{Conclusions}
Using the NOT we obtained [S~II] CCD frames at two epochs with a time span
of one year of the region enclosing HH 30, HH 30-N, HH 266, as well as
HL/XZ Tau. We also obtained high-resolution optical spectroscopy of the HH
30 jet (including HH 30-N) using the WHT. The main conclusions from the
analysis of our results can be summarized as follows:
\begin{enumerate}
\item
We measured proper motions in the HH 30 jet, with velocities ranging from
$\sim100$ to $\sim300$ km~s$^{-1}$. We found the highest values of the velocity
(200--300 km~s$^{-1}$) near the driving source ($<3000$ AU), decreasing to
$\sim150$ km~s$^{-1}$ at distances of $\ga5000$ AU.
\item
Although the jet shows a wiggling morphology, the proper motions of the
jet knots are roughly parallel to the jet axis, with no signs of a pattern of
changes in the direction as would be expected for a true helical motion of the
knots. This suggests that the motion of the knots is essentially ballistic and
that the observed wiggling is most likely produced by variations in the
direction of the velocity at the origin of the jet. Nevertheless, there is a
small but systematic drift of the velocities westwards, which could be due to
the effect of a side wind.
\item
We have been able to measure reliable proper motions only for one of the
knots of the counterjet, obtaining a velocity of $\sim250$ km~s$^{-1}$,
which is similar to the velocities measured in the jet at similar distances.
\item
The proper motions measured for the HH 30-N knots are, on the average,
aligned with the direction of the HH 30 jet, thus supporting the
hypothesis that this group of knots corresponds to the head of the HH~30
jet (L\'opez et al.\ 1996). However, the values obtained for both
magnitude and position angle of the velocity show a dispersion
considerably larger than in the HH 30 jet, which could be a consequence of
the interaction between the head of the jet and its surroundings.
Knot NF, which was previously identified as an HH knot, appears very circular
and compact in our higher quality images, and has a proper motion
compatible with it being stationary. We thus conclude that it is most probably a
field star.
\item
The values we obtained for the proper motions along the HH 30 jet are similar
to those derived from {\em HST} data by Burrows et al.\ (1996) for the inner
($<5''$) region of the jet. We found a good agreement between the direction of
our proper motions and that obtained by L\'opez et al.\ (1996), although we
found discrepancies with the values of the velocity derived by these authors
(and by Mundt et al.\ 1990) that we attribute to the poorer quality of their
data.
\item
In general, we found good correspondence between the extrapolation
back in time of our proper motion estimates for the HH 30 knots and the
positions of the knots identified in previous observations. This result
indicates that most of the knots probably consist of persisting outflowing
structures. However, in some parts of the jet, particularly near the source
($\la20''$), where the interaction of the jet with the medium appears to be
stronger and fading is more noticeable than at larger distances, the knot
structure shows indications for an additional static pattern, that could arise
from the interaction with the ambient cloud.
\item
Our spectroscopic observations show that the radial velocity of the jet is
similar to the systemic velocity of the cloud.
From the ratio of the radial to proper motion velocities we inferred that the
jet lies essentially in the plane of the sky ($\phi\simeq0^\circ$) for
distances to the source $<40''$. For distances to the source
$70''<y<120''$ the jet is redshifted with respect to the ambient cloud, with an
inclination angle with respect to the plane of the sky $4^\circ<\phi<9^\circ$.
For the HH 30-N structure, the radial velocity is blueshifted with respect to
the ambient cloud, with an inclination angle with respect to the plane of the
sky $\phi\simeq40^\circ$, suggesting a bending of the direction of the jet
propagation, as previously proposed by Raga et al.\ (1997).
\item
We estimated the electron density of the HH 30 jet knots up to
$\sim120''$ from the source, and in HH 30-N, covering a region much more
extended than in previous studies. The density in the HH 30 jet decreases
with distance to the source, remaining below $\sim 100$~cm$^{-3}$ for
distances $\sim20''$-$\sim120''$. The density increases to
$\sim400$~cm$^{-3}$ in HH 30-N, suggesting an interaction of the jet with
an ambient medium of locally enhanced density.
\item
Our images reveal a clear wiggling of the HH 30 jet knots, with a spatial
periodicity of $16''\pm1''$ ($2240\pm140$ AU). The width of the jet beam
increases with distance to the source, with a half-opening angle in the
plane of the sky of $1\rlap.^\circ43\pm0\rlap.^\circ12$. We found that the
wiggling structure of the HH~30 jet can be accounted for either by the orbital
motion of the jet source or by the precession of the jet axis.
In the first case the orbital period of the binary system would be 53 yr,
the expected angular separation of the two components would be
$0\rlap.''064$--$0\rlap.''128$ (9--18 AU), and the jet source is expected
to be the secondary, while the mass of the primary is expected to fall in
the range 0.25--1~$M_\odot$. In the case of precession the jet source
should be the primary, with a mass in the range $\sim0.1$--1~$M_\odot$.
The orbital period should be less than 1 yr, the mass of the companion
less than a few times $0.01~M_\odot$, and the angular separation
$<0\rlap.''007$ ($<1$ AU). Therefore, it is feasible that the secondary is
a substellar object, or even a giant exoplanet.
\item
We take the very good agreement between the predicted and observed wiggling of
the HH~30 jet as strong (although indirect) evidence for the existence of a
binary system. The angular separation between the two components is very small
($\la0\rlap.''1$ in the case of orbital motion and $\la0\rlap.''01$ in the case
of precession), and would require the use of VLA or ALMA to resolve the
system.
In either case, the separation between the two components of the binary is
well below the size ($\sim450$ AU) of the observed disk perpendicular to
the jet, indicating that this disk should be a circumbinary disk instead
of a circumstellar disk, contrary to what has been thought up to now. This
leaves unclear the role of the observed disk in the jet collimation,
suggesting that the search for a circumstellar disk (likely the true
collimating agent of the HH~30 jet) should be carried out at very small
scales ($<0\rlap.''1$).
\item
Our fit of the observed knot structure of the jet allowed us to refine the
value of the position angle of the axis of the jet, obtaining a value of
$\mathrm{P.A.}=31\rlap.^\circ6$. In fitting the large scale structure (up to
$300''$) we need to introduce a change in the direction of the jet,
$\Delta\mathrm{P.A.}=-5^\circ$ at a distance $\sim 70''$. This change in the
direction occurs roughly at the same position where there is a change
in the radial velocities observed, suggesting a change in the
inclination angle with respect to the plane of the sky.
\item
We obtained a more accurate estimate of the proper motions of the HL Tau jet,
which are of the order of $\sim120$ km~s$^{-1}$, a value significantly lower
than previous estimates. From our proper motions and using the radial velocity
measurements of Mundt et al.\ (1990), we estimated that the inclination angle
of the HL Tau jet with respect to the plane of the sky is $\sim60^\circ$. We
measured for the first time the proper motions in the HL Tau counterjet,
obtaining values similar to those of the jet.
\end{enumerate}
\acknowledgments
G. A., R. L., R. E., and A. R. are supported by the MEC AYA2005-05823-C03
grant (co-funded with FEDER funds).
G.A. and J.M. acknowledge support from Junta de Andaluc\'{\i}a.
The work of A. C. R. was supported by the CONACyT.
We thank Luis F. Miranda for his valuable comments and his help in
preparing Figure 1.
We thank an anonymous referee for helpful comments.
The data presented here were taken using ALFOSC, which is owned by the
Instituto de Astrof\'{\i}sica de Andaluc\'{\i}a (IAA) and operated at the
Nordic Optical Telescope under agreement between the IAA and the NBIfA of the
Astronomical Observatory of Copenhagen.
\clearpage
\section{Introduction}
By installing hundreds of antennas at the \emph{base station} (BS), \emph{large-scale antenna} (LSA) systems can significantly improve the performance of cellular networks \cite{EGLarsson,FRusek}. Even though LSA can be regarded as an extension of traditional \emph{multiple-input multiple-output} (MIMO) systems, which have been widely studied during the last couple of decades \cite{GJFoschini}, many special properties of LSA due to the extremely large number of antennas make it a promising technique for future wireless systems, and it has thus gained considerable attention recently.\par
When the antenna number is sufficiently large, the performance in an LSA system becomes deterministic \cite{HQNgo}. From the power scaling law for LSA \cite{HQNgo}, the transmit power of each user is inversely proportional to the antenna number or the square root of the antenna number, depending on whether accurate channel state information is available or not. For downlink transmission with multiple users, precoding techniques are required at the BS to achieve the system capacity \cite[Ch. 10]{DTse}. When the antenna number is large enough and the channels corresponding to different antennas or users are independent, the channel vectors for different users are asymptotically orthogonal. If the user number is much smaller than the antenna number which is always true in LSA systems, the \emph{matched filter} (MF) will perform as well as the typical linear precoders, such as \emph{zero-forcing }(ZF) or \emph{minimum mean-square-error} (MMSE). Therefore, the complexity can be greatly reduced since no matrix inversion is required for precoding\cite{LLu,EGLarsson}.\par
Similar to the philosophy of \emph{orthogonal frequency division multiplexing} (OFDM) \cite{LJCimini} or MIMO-OFDM \cite{YLi_99}, LSA can also be combined with OFDM to deal with frequency selectivity in wireless channels. Although straightforward, such a combination suffers from substantially increased complexity.\par
First, the precoding is conducted in the frequency domain for traditional MIMO-OFDM \cite{3GPP_series}. In this case, each antenna at the BS requires an \emph{inverse fast Fourier transform} (IFFT) for OFDM modulation and the number of IFFTs is equal to the antenna number. Therefore, the number of IFFTs will increase substantially as the antenna number rises in LSA systems, leading to a huge computational burden.\par
Second, \emph{zero-forcing} (ZF) precoding is required to support more users in LSA systems. As indicated in \cite{EGLarsson,FRusek}, the MF precoding can perform as well as the ZF precoding in LSA systems because the \emph{inter-user-interference} (IUI) can be suppressed asymptotically through the MF precoding if the antenna number is large enough and the channels at different antennas and different users are independent. In practical systems, however, the antenna number is always finite. Moreover, the channels at different antennas will be correlated when placing so many antennas in a small area. In this sense, there will be residual IUI for the MF precoding, and the ZF precoding is thus still required \cite{JHoydis}. As a result, the matrix inversion of the ZF precoding will substantially increase the complexity, especially when the user number is large.\par
To address the issues above, we propose a low-complexity recursive convolutional precoding for LSA-OFDM in this paper.\par
First, a convolutional precoding filter in the time domain is used to replace the traditional precoding in the frequency domain. In this way, only one IFFT is required for each user no matter how many antennas there are. Meanwhile, by exploiting the frequency-domain correlation of the traditional precoding coefficients, the length of the precoding filter can be much smaller than the FFT size. As a result, the complexity can be greatly reduced, especially when the antenna number is large. Even though the convolutional precoding has been studied in \cite{YWLiang} for traditional MIMO-OFDM systems, its advantage is not as significant as in LSA systems. In this paper, we highlight that such an advantage becomes remarkable when the antenna number is large, and thus it is more suitable to adopt the convolutional precoding rather than the traditional frequency-domain precoding for the transceiver design in LSA-OFDM systems.
Second, based on the order recursion of Taylor expansion, the convolutional precoding filter works recursively in this paper such that we can not only avoid direct matrix inverse of traditional ZF precoding but also provide a way to implement the traditional ZF precoding through the convolutional precoding filter with low complexity. Taylor expansion has already been used for \emph{Truncated polynomial expansion} (TPE) in \cite{AMuller,AKammoun,NShariati,GMASessler}. In \cite{AMuller}, it is used to approximate the matrix inverse in ZF precoding. The precoding can be conducted iteratively so that the matrix inverse can be avoided. A similar approach is adopted in \cite{AKammoun} where the TPE is based on Cayley-Hamilton theorem and Taylor expansion is used for optimization of polynomial coefficients. The order recursion of Taylor expansion has also been used in \cite{NShariati,GMASessler} for channel estimation and multiuser detection. Different from the existing works that are based on a matrix form Taylor expansion in the frequency domain, the recursive ZF precoding in this paper is implemented through the recursive filter in the time domain such that it can be naturally combined with the convolutional precoding. Moreover, the order recursion is converted to a time recursion in this paper so that the proposed approach can track the time-variation of channels. Based on the time recursion, the tracking property is further analyzed for large-scale regime, resulting in new theoretical insights for the behaviors of time recursion in LSA systems that are not revealed before.\par
The rest of this paper is organized as follows. The system model is introduced in Section II. The proposed approach is derived in Section III, and its performance is analyzed in Section IV. Simulation results are presented in Section V. Finally, conclusions are drawn in Section VI.
\section{System Model}
Consider downlink transmission in an LSA-OFDM system where a BS employs $M$ antennas to serve $P$ users, each with one antenna, simultaneously at the same frequency band. As in \cite{EGLarsson}, we assume $M\gg P$.\par
Denote $x_p[n,k]$ with $\mathrm{E}(|x_p[n,k]|^2)=E_s$ to be the transmit symbol for the $p$-th user at the $k$-th subcarrier of the $n$-th OFDM block. In an LSA-OFDM based on traditional OFDM implementation, the precoding is carried out in the frequency domain, and therefore the transmit signal at the $l$-th sample of the $n$-th OFDM block at the $m$-th antenna for the $p$-th user is
\begin{align}
s_{m,p}[n,l]=\frac{1}{\sqrt{K}}\sum_{k=0}^{K-1}u_{m,p}[n,k]x_p[n,k]e^{j\frac{2\pi kl}{K}},
\end{align}
where $K$ denotes the subcarrier number for the OFDM modulation and $u_{m,p}[n,k]$ denotes the precoding coefficient for the $k$-th subcarrier of the $n$-th OFDM block at the $m$-th antenna for the $p$-th user. A \emph{cyclic prefix }(CP) will be added in front of the transmit signal to deal with the delay spread of wireless channels.\par
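Concretely, the modulation above is a $K$-point IFFT of the precoded symbols $u_{m,p}[n,k]x_p[n,k]$ for each antenna--user pair. The numpy sketch below verifies this equivalence; the FFT size and the 16-sample CP length are illustrative, and the $\sqrt{K}$ rescaling matches the $1/\sqrt{K}$ convention of the equation (numpy's \texttt{ifft} applies a $1/K$ factor).

```python
import numpy as np

# Per-antenna OFDM modulation with frequency-domain precoding:
#   s_{m,p}[l] = (1/sqrt(K)) * sum_k u_{m,p}[k] x_p[k] e^{j 2 pi k l / K}.
# Sizes and the CP length are illustrative choices, not values from the text.

rng = np.random.default_rng(0)
K = 64                                                     # FFT size
u = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # u_{m,p}[n,k]
x = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # x_p[n,k]

# Direct evaluation of the sum over subcarriers
k = np.arange(K)
s_direct = np.array([(u * x * np.exp(2j * np.pi * k * l / K)).sum()
                     for l in range(K)]) / np.sqrt(K)

# Same signal via a single K-point IFFT (sqrt(K) compensates ifft's 1/K)
s_ifft = np.sqrt(K) * np.fft.ifft(u * x)
assert np.allclose(s_direct, s_ifft)

s_cp = np.concatenate([s_ifft[-16:], s_ifft])              # prepend cyclic prefix
```

Since one such IFFT is needed per antenna, the cost of this stage scales linearly with $M$, which is the burden the convolutional precoding of Section III avoids.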
After removing the CP and OFDM demodulation, the received signal at the $p$-th user can be expressed as
\begin{align}\label{2-2}
y_p[n,k]=\sum_{m=1}^Mh_{p,m}[n,k]\left(\sum_{p'=1}^Pu_{m,p'}[n,k]x_{p'}[n,k]\right)+z_p[n,k],
\end{align}
where $z_p[n,k]$ is the additive white noise with $\mathrm{E}(|z_p[n,k]|^2)=N_0$, and $h_{p,m}[n,k]$ is the \emph{channel frequency response} (CFR) corresponding to the $k$-th subcarrier of the $n$-th block at the $m$-th antenna for the $p$-th user, which can be expressed as
\begin{align}
h_{p,m}[n,k]=\sum_{l=0}^{L-1}c_{p,m}[n,l]e^{-j\frac{2\pi lk}{K}},
\end{align}
where $c_{p,m}[n,l]$ is the \emph{channel impulse response} (CIR) and $L$ denotes the channel length which is usually much smaller than the FFT size. The CFR is assumed to be complex Gaussian distributed with zero mean and $\mathrm{E}\{h_{p,m}[n,k]h_{p_1,m_1}^*[n,k]\}=g_p\rho[m - m_1]\delta[p-p_1]$, where $g_p$ denotes the square of the large-scale fading coefficient for the $p$-th user, $\rho[\cdot]$ denotes the correlation function of the channels at different antennas for the same user, and $\delta[\cdot]$ denotes the Kronecker delta function. It means the CFRs have been assumed to be independent for different users while they depend on the correlation function, $\rho[\cdot]$, for different antennas. In particular, we have $\rho[\cdot]=\delta[\cdot]$ when the CFRs at different antennas are independent.\par
From (\ref{2-2}), the received signal vector corresponding to the $k$-th subcarrier of the $n$-th OFDM block for all users can be expressed as
\begin{align}\label{2-1}
\mathbf{y}[n,k]&\triangleq(y_1[n,k],\cdots,y_P[n,k])^{\mathrm{T}}\nonumber\\
&=\mathbf{H}[n,k]\mathbf{U}[n,k]\mathbf{x}[n,k]+\mathbf{z}[n,k],
\end{align}
where
\begin{align}
\mathbf{x}[n,k]&=(x_1[n,k],\cdots,x_P[n,k])^{\mathrm{T}},\nonumber\\
\mathbf{z}[n,k]&=(z_1[n,k],\cdots,z_P[n,k])^{\mathrm{T}},\nonumber\\
\mathbf{U}[n,k]&=\{u_{m,p}[n,k]\}_{m,p=1}^{M,P}=(\mathbf{u}_1[n,k],\cdots,\mathbf{u}_P[n,k]),\nonumber\\
\mathbf{H}[n,k]&=\{h_{p,m}[n,k]\}_{p,m=1}^{P,M}=\left(\mathbf{h}_1[n,k],\cdots,\mathbf{h}_P[n,k]\right)^{\mathrm{T}},\nonumber
\end{align}
with $\mathbf{u}_p[n,k]=(u_{1,p}[n,k],\cdots,u_{M,p}[n,k])^{\mathrm{T}}$ being the corresponding precoding vector of the $p$-th user and $\mathbf{h}_p[n,k]=\left(h_{p,1}[n,k],\cdots,h_{p,M}[n,k]\right)^{\mathrm{T}}$ being the CFR vector for the $p$-th user with correlation matrix $\mathrm{E}\{\mathbf{h}_p[n,k]\mathbf{h}_p^{\mathrm{H}}[n,k]\}\triangleq g_p\mathbf{R}$ where $\{\mathbf{R}\}_{(m,m_1)}=\rho[m-m_1]$.\par
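For simulations, CFR vectors with this correlation structure can be drawn as $\mathbf{h}_p=\sqrt{g_p}\,\mathbf{R}^{1/2}\mathbf{w}$ with $\mathbf{w}\sim\mathcal{CN}(\mathbf{0},\mathbf{I})$. The sketch below uses an exponential profile $\rho[d]=r^{|d|}$ purely as an illustrative choice; the model above does not assume any particular $\rho[\cdot]$.

```python
import numpy as np

# Draw CFR vectors with E{h h^H} = g_p * R, {R}_{m,m1} = rho[m - m1],
# via h = sqrt(g_p) * R^{1/2} w with w ~ CN(0, I). The exponential
# profile rho[d] = r**|d| is an illustrative assumption only.

rng = np.random.default_rng(2)
M, g_p, r = 64, 1.0, 0.7
R = np.array([[r ** abs(m - m1) for m1 in range(M)] for m in range(M)])
R_sqrt = np.linalg.cholesky(R)               # any square root of R works

w = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h = np.sqrt(g_p) * R_sqrt @ w                # one realization of h_p[n,k]

# Empirical check of the correlation over many realizations
N = 5000
W = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
H = np.sqrt(g_p) * R_sqrt @ W
R_hat = (H @ H.conj().T).real / N
print("max correlation error:", np.abs(R_hat - g_p * R).max())
```

Setting $r=0$ recovers the independent-antenna case $\mathbf{R}=\mathbf{I}$ discussed above.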
\section{Low-Complexity Recursive Convolutional Precoding}
In this section, we will first present recursive updating of precoding matrices, then derive the low-complexity convolutional precoding, and discuss its complexity at the end of this section.
\subsection{Recursive Updating}
The ZF precoding is considered in this paper, although the proposed approach can also be used for other precoding schemes, such as the MMSE precoding. Assuming the downlink channels are known at the BS, the desired precoding matrix can be expressed as
\begin{align}\label{3-1}
\mathbf{U}_o[n,k]=\mathbf{H}^{\mathrm{H}}[n,k]\left(\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]\right)^{-1}.
\end{align}\par
Using Taylor expansion in Appendix A, the matrix inverse in (\ref{3-1}) can be substituted by an order-recursive relation as
\begin{align}\label{3-2}
&\mathbf{U}^{(Q+1)}[n,k]=\mathbf{U}^{(Q)}[n,k]+\nonumber\\
&\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}(\mathbf{I}-\mathbf{H}[n,k]\mathbf{U}^{(Q)}[n,k]),
\end{align}
where $\mathbf{G}=\mathrm{diag}\{g_p\}_{p=1}^P$, $\mathbf{U}^{(Q)}[n,k]$ denotes the precoding matrix corresponding to the $Q$-th order expansion, and $\mu$ is a step size that affects the convergence, as will be discussed in Section IV. The order-recursive relation in (\ref{3-2}) can also be rewritten in a vector form as
\begin{align}\label{3-3}
&\mathbf{u}_p^{(Q+1)}[n,k]=\mathbf{u}_p^{(Q)}[n,k]+\nonumber\\
&\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}\mathbf{h}_i^*[n,k](\delta[i-p]-\mathbf{h}_i^{\mathrm{T}}[n,k]\mathbf{u}_p^{(Q)}[n,k]),
\end{align}
where $\mathbf{u}_p^{(Q)}[n,k]$ denotes the $p$-th column of $\mathbf{U}^{(Q)}[n,k]$.\par
In (\ref{3-3}), the order-recursive updating is driven by the expansion order, $Q$. Mathematically, the expansion order in (\ref{3-3}) can be viewed as a \emph{recursion counter}, which increases as the recursion proceeds. In this sense, the OFDM block index can also serve as that \emph{recursion counter}. In other words, (\ref{3-3}) can also be driven by the OFDM block index by replacing the expansion order, $Q$, with the OFDM block index, $n$, that is
\begin{align}\label{3-4}
&\mathbf{u}_p[n+1,k]=\mathbf{u}_p[n,k]+\nonumber\\
&\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}\mathbf{h}_i^*[n,k]({\delta[i-p]-\mathbf{h}_i^{\mathrm{T}}[n,k]\mathbf{u}_p[n,k]}).
\end{align}
As a result, the order recursion in (\ref{3-3}) is converted to the time recursion in (\ref{3-4}). This conversion is possible simply because the two recursions have the same expression except that one is driven by $Q$ and the other by $n$. Using the time recursion in (\ref{3-4}), the actual calculation can be conducted in the time domain even though the principle for avoiding the matrix inverse is based on the order recursion in (\ref{3-3}). In this way, we not only reduce the complexity, since there is no need to repeat the order recursions from the zeroth order for each OFDM block, but can also track the time-varying channels as long as the channel changes slowly. Strictly speaking, the conversion is only valid when the channel is time invariant; in that case, (\ref{3-3}) and (\ref{3-4}) have exactly the same expression except for different \emph{recursion counter}s. In practice, the time recursion in (\ref{3-4}) still works as long as the channel varies slowly. Our analysis in Section IV shows that the time recursion can track the time variation of the channels when the Doppler frequency is small, but the performance degrades as the Doppler frequency rises.
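To make the time recursion concrete, the following sketch runs the update (\ref{3-4}) in matrix form over a static channel and checks that the precoding matrix approaches the ZF solution (\ref{3-1}). The dimensions, random seed, and step size are illustrative assumptions, with $g_p=1$ so that $\mathbf{G}=\mathbf{I}$.

```python
import numpy as np

# Minimal sketch of the time recursion (3-4) over a static channel,
# with G = I. M, P, mu, and the seed are illustrative assumptions.
rng = np.random.default_rng(1)
M, P, mu = 64, 4, 1.0
H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)

U = np.zeros((M, P), dtype=complex)      # zero initialization
for n in range(30):                      # one update per OFDM block
    E = np.eye(P) - H @ U                # per-user estimation error
    U = U + (mu / M) * H.conj().T @ E    # update (3-4) in matrix form

U_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # desired ZF matrix
print(np.linalg.norm(U - U_zf))          # small after the recursion
```

Starting from the all-zero matrix, each OFDM block applies one update; for a slowly varying channel the same update tracks the ZF solution.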
\begin{figure}
\center
\includegraphics[width=3in]{hybrid.eps}\\
\caption{The order recursion is used for initialization and the time recursion is used for updating the coefficients of the sequent OFDM blocks.}\label{hybrid}
\end{figure}
Actually, the order recursion and the time recursion can be used in a hybrid manner as in Fig.~\ref{hybrid}: the order recursion is used for initialization and the time recursion for tracking. Once the expansion order for initialization is large enough to achieve satisfactory performance, the time recursion takes over to update the precoding coefficients in the subsequent OFDM blocks. In this way, the complexity is reduced since only one recursion is needed to update the coefficients during the tracking stage.
\begin{figure*}
\centering
\includegraphics[angle=90,width=3.5in]{figure_conv.eps}
\caption{Recursive convolutional precoding with convolutional precoding, recursive coefficient updating, and estimation error calculation.}\label{structure}
\end{figure*}
\subsection{Convolutional Precoding}
Although the matrix inverse is avoided through (\ref{3-4}), the precoding is still conducted in the frequency domain. In this subsection, we will convert it into the time-domain convolutional precoding by exploiting the frequency-domain correlation of the precoding matrices. Denote $\mathbf{u}_{m,p}[n]=(u_{m,p}[n,0],\cdots,u_{m,p}[n,K-1])^{\mathrm{T}}$, which contains the precoding coefficients from all subcarriers of the $n$-th OFDM block at the $m$-th antenna for the $p$-th user. Then, (\ref{3-4}) can be rewritten as
\begin{align}\label{3-5}
&\mathbf{u}_{m,p}[n+1]=\mathbf{u}_{m,p}[n]+\nonumber\\
&\frac{\mu}{M}\sum_{i=1}^{P}g_i^{-1}\left({\delta[i-p]\mathbf{I}}-\mathbf{D}_{i,p}[n]\right)\mathbf{h}^*_{i,m}[n],
\end{align}
where $\mathbf{h}_{p,m}[n]=(h_{p,m}[n,0],\cdots,h_{p,m}[n,K-1])^{\mathrm{T}}$ is the corresponding CFR vector from the $m$-th antenna to the $p$-th user, and $\mathbf{D}_{i,p}[n]$ is a $K\times K$ diagonal matrix with the $(k,k)$-th element given by
\begin{align}
\{\mathbf{D}_{i,p}[n]\}_{(k,k)}=\sum_{m=1}^M{h}_{i,m}[n,k]u_{m,p}[n,k].
\end{align}
Denote $w_{m,p}[n,l]$ to be the coefficient for the $l$-th tap of the precoding filter at the $m$-th antenna for the $p$-th user corresponding to the $n$-th OFDM block. Then, we have
\begin{align}
\mathbf{w}_{m,p}[n]&\triangleq(w_{m,p}[n,0],\cdots,w_{m,p}[n,K-1])^{\mathrm{T}}\nonumber\\
&=\frac{1}{K}\mathbf{F}^{\mathrm{H}}\mathbf{u}_{m,p}[n],
\end{align}
where $\mathbf{w}_{m,p}[n]$ is the corresponding precoding vector, and $\mathbf{F}$ is the \emph{discrete Fourier transform} (DFT) matrix with the $(m,n)$-th element given by
\begin{align}\label{B2}
\{\mathbf{F}\}_{(m,n)}=e^{-j\frac{2\pi m n}{K}},~~~~m,n\in[0,K-1].
\end{align}
From Appendix B, by taking the inverse DFT of (\ref{3-5}), we can obtain the coefficients for the time-domain convolutional precoding filter as
\begin{align}\label{3-6}
w_{m,p}[n+1,l]=w_{m,p}[n,l]+\frac{\mu}{M}\sum_{i=1}^{P}g_i^{-1}c_{i,m}^*[n,-l] \circledast e_{i,p}[n,l],
\end{align}
where $\circledast$ denotes the circular convolution of length $K$ and $e_{i,p}[n,l]$ is the estimation error given by
\begin{align}
e_{i,p}[n,l]=\delta[i-p]\delta[l]-\sum_{m_0=1}^Mc_{i,m_0}[n,l] \circledast w_{m_0,p}[n,l].
\end{align}\par
The resulting recursive convolutional precoding is shown in Fig.~\ref{structure}, where the large-scale fading is omitted by setting $g_p=1$. The precoding is carried out in the time domain via the precoding filter. In this case, only one IFFT is required for each user no matter how many antennas there are at the BS. Therefore, the number of IFFTs is equal to the number of users, which is much smaller than the antenna number in LSA systems. By exploiting the correlation of the frequency-domain precoding coefficients, the coefficients of the precoding filter are sparse and can thus be truncated. For the single-user case, the precoding filter is exactly the conjugate of the CIR, and thus $0\leq l\leq L-1$. In the multiuser case, as a rule of thumb, we use one more tap for the positive taps and another $L$ taps to include the significant coefficients on the negative taps. As a result, $w_{m,p}[n,l]$ can be truncated within the range $-L\leq l\leq L$ (modulo $K$). Following the order-recursion based initialization, the coefficients of the precoding filter can be updated recursively.\par
Note that the transmit signal after the IFFT should be circularly extended before being sent to the precoding filter so that the signal can be circularly convolved with the precoding filter, because the product in the frequency domain corresponds to the circular convolution in the time domain \cite{AVOppenheim}.
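The equivalence exploited here, namely that a per-subcarrier product in the frequency domain equals a circular convolution in the time domain, can be checked numerically. The sketch below compares the two implementations for one antenna-user pair; the size $K$ is an illustrative assumption, and the filter is kept at full length rather than truncated to $2L+1$ taps.

```python
import numpy as np

# Frequency-domain precoding u[k] * x[k] versus time-domain circular
# convolution of w = IDFT(u) with s = IDFT(x). K is illustrative.
rng = np.random.default_rng(2)
K = 64

u = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # u_{m,p}[n,k]
x = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # user symbols

w = np.fft.ifft(u)                       # filter taps w = (1/K) F^H u
s = np.fft.ifft(x)                       # time-domain signal after IFFT

# Circular convolution w (*) s, as done by the precoding filter on the
# circularly extended signal
y_time = np.array([sum(w[l] * s[(t - l) % K] for l in range(K))
                   for t in range(K)])

# Per-subcarrier frequency-domain precoding gives the same samples
y_freq = np.fft.ifft(u * x)
print(np.max(np.abs(y_time - y_freq)))   # zero up to round-off
```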
\subsection{Complexity}
\begin{table*}
\caption{Comparison of complexities for the proposed convolutional precoding, the traditional ZF precoding, and the TPE precoding.}\centering
\begin{tabular}{|c||c|c|c|}
\hline
\multirow{2}{*}{\backslashbox{Complexity}{Approaches}} & \multirow{2}{*}{Proposed} & \multirow{2}{*}{Traditional ZF} & \multirow{2}{*}{TPE} \\
& & & \\
\hline
\multirow{2}{*}{IFFT} & \multirow{2}{*}{${\frac{1}{2}}PK\log_2K$} & \multirow{2}{*}{${\frac{1}{2}}MK\log_2K$} & \multirow{2}{*}{${\frac{1}{2}}MK\log_2K$}\\
& & & \\
\hline
\multirow{2}{*}{Precoding operation} & \multirow{2}{*}{$PM(2L+1)$} & \multirow{2}{*}{$PMK$} & \multirow{2}{*}{$PMK(2Q-1)$}\\
& & &\\
\hline
\multirow{2}{*}{Coefficient calculation} & \multirow{2}{*}{$2P^2ML$} & \multirow{2}{*}{$\frac{1}{B}(2P^2MK+\mathcal{O}(P^3)K)$} & \multirow{2}{*}{$-$} \\
& & &\\
\hline
\end{tabular}\label{tab}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=3.5in]{complexity1.eps}\\
\caption{An example for the complexity comparison for $P=8$ user case.}\label{complexity}
\end{figure}
In Tab.~\ref{tab}, the complexity is evaluated in terms of the number of \emph{complex multiplications} (CMs) required by the IFFT, the actual precoding operation, and the coefficient calculation for the precoding \cite[Ch. 2]{TSauer}. As comparisons, the complexities of the traditional ZF precoding and the TPE precoding in \cite{AMuller} are also included in the table. For the traditional ZF precoding, $B$ consecutive subcarriers ($B=12$ in \emph{long-term evolution }(LTE)) can share the same precoding coefficients by exploiting the frequency-domain correlation of the precoding coefficients. For the TPE precoding, it requires $Q-1$ iterations for each OFDM block because the iterations are repeated from the zeroth order for each OFDM block.\par
We have the following observations from the table. First, the number of IFFTs is equal to the antenna number for the traditional ZF precoding and the TPE precoding, and thus the number of IFFTs for the proposed approach is greatly reduced since the user number is much smaller than the antenna number in LSA systems. Second, the precoding filter length for the convolutional precoding is much smaller than the FFT size, while the precoding operation has to be conducted on each subcarrier individually for the traditional ZF precoding and the TPE precoding. Third, the number of CMs can be reduced for the proposed approach because the coefficient calculation is conducted recursively, while the traditional ZF precoding can also reduce the number of CMs since $B$ consecutive subcarriers can have the same precoding coefficients.\par
As an example, Fig.~\ref{complexity} presents the CMs required by the proposed approach, the traditional ZF precoding, and the TPE precoding for the typical $5$ MHz bandwidth in LTE, where the size of the FFT is $K=512$ \cite{3GPP}. For a typical \emph{extended typical urban} (ETU) channel whose maximum delay is $\tau_{\mathrm{max}} = 5\mu\text{s}$, a channel length $L=38$ is enough to contain most of the channel power. As expected, the complexity of the convolutional precoding is substantially reduced compared with the existing approaches when the antenna number is large. When the antenna number is small, however, the complexity reduction is less significant. The traditional ZF or TPE precoding may even require fewer CMs than the proposed approach for larger $B$ or smaller $Q$, at the cost of performance degradation, as will be shown in Section V. In fact, the advantage of the convolutional precoding can hardly be observed in traditional systems since the antenna number there is small; it only becomes remarkable when the antenna number is very large. Therefore, it is more suitable to adopt the convolutional precoding rather than the traditional frequency-domain precoding for the transceiver design in LSA-OFDM systems. Note that the convolutional precoding will cause some delay of the signal transmission. However, the complexity reduction is favorable if the delay due to the convolution is tolerable.
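The counts in Tab.~\ref{tab} can be evaluated directly. The sketch below plugs in the sample configuration discussed above ($K=512$, $L=38$, $B=12$); counting the $\mathcal{O}(P^3)$ matrix-inverse term of the traditional ZF precoding simply as $P^3$ is an assumption made here for concreteness.

```python
import math

# Complex-multiplication counts from Table I: IFFTs, precoding
# operation, and coefficient calculation. The O(P^3) inversion term
# for traditional ZF is approximated as P**3 (an assumption).
def cm_proposed(M, P, K, L):
    return 0.5 * P * K * math.log2(K) + P * M * (2 * L + 1) + 2 * P**2 * M * L

def cm_zf(M, P, K, B):
    return (0.5 * M * K * math.log2(K) + P * M * K
            + (2 * P**2 * M * K + P**3 * K) / B)

M, P, K, L, B = 100, 8, 512, 38, 12
print(cm_proposed(M, P, K, L), cm_zf(M, P, K, B))
```

For this configuration the proposed approach needs roughly half the CMs of the traditional ZF precoding, and the gap widens as $M$ grows.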
\section{Performance Analysis}
In this section, we will first analyze the convergence performances of initialization and tracking, respectively, and then discuss the impacts of imperfect channels. Since the time-domain convolutional precoding is equivalent to the frequency-domain precoding, the performance analysis is conducted in the frequency domain for simplicity.
\subsection{Initialization}
We focus on the OFDM block with $n=0$ where the order-recursion is used for initialization. Define $\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\triangleq \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}-\mathbf{U}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}$ to be the normalized precoding matrix error for initialization, where the large-scale fading effect has been taken into account. Then it is shown in Appendix C that
\begin{align}\label{3-6_1}
\|\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2=\frac{1}{M}\sum_{p=1}^P\lambda_p^{-1}(1-\mu\lambda_p)^{2(Q+1)},
\end{align}
where $\lambda_p$ is the $p$-th eigenvalue of $\frac{1}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}\mathbf{H}[0,k]$ or $\frac{1}{M}\mathbf{G}^{-\frac{1}{2}}\mathbf{H}[0,k]\mathbf{H}[0,k]^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}$.\par
Denote $\lambda_{\mathrm{max}}$ and $\lambda_{\mathrm{min}}$ to be the maximum and the minimum eigenvalues of $\frac{1}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}\mathbf{H}[0,k]$, respectively. From (\ref{3-6_1}), the convergence can be achieved as long as $0<\mu<2/\lambda_{\mathrm{max}}$, and the optimal step size for the fastest convergence will be $\mu_0=2/(\lambda_{\mathrm{max}}+\lambda_{\mathrm{min}})$ \cite{SHaykin}. Depending on whether the channels at different antennas are independent or not, we have the following discussions:
\begin{itemize}
\item
If the CFRs corresponding to different antennas are independent, we have $\lambda_p\approx 1$ for $p=1,2,\cdots,P$ \cite[Ch. 1]{ZBai}. In this case, fast convergence can be achieved by setting $\mu_0= 1$, and the convergence can be almost achieved within only one recursion, as we can see from the simulation results in the next section.
\item
If the CFRs corresponding to different antennas are correlated, the maximum and the minimum eigenvalues will rely on $\mathbf{G}^{-\frac{1}{2}}\mathbf{H}[0,k]\mathbf{H}[0,k]^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}$. Inspired by $\mathrm{E}\{\mathbf{G}^{-\frac{1}{2}}\mathbf{H}[0,k]\mathbf{H}[0,k]^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}\}=P\mathbf{R}$, we let $\lambda_{\mathrm{max}}=\lambda_{\mathrm{max}}{(\mathbf{R})}$ and $\lambda_{\mathrm{min}}=\lambda_{\mathrm{min}}{(\mathbf{R})}$ for simplicity, and thus $\mu_0=2/[\lambda_{\mathrm{max}}(\mathbf{R}) + \lambda_{\mathrm{min}}(\mathbf{R})]$ in this case. Obviously, such a step size covers the case where the channels at different antennas are independent, because $\mathbf{R}=\mathbf{I}$ in that situation. Simulation results in Section V show that such a step size works well for the proposed approach.
\end{itemize}
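The error expression (\ref{3-6_1}) can be verified numerically. The sketch below runs the order recursion for a random channel with $\mathbf{G}=\mathbf{I}$ and compares the resulting squared Frobenius-norm error with the eigenvalue formula; the sizes, seed, and step size are illustrative assumptions.

```python
import numpy as np

# Check of (3-6_1): after reaching order Q, the squared error equals
# (1/M) sum_p lambda_p^{-1} (1 - mu*lambda_p)^{2(Q+1)}.  G = I here.
rng = np.random.default_rng(3)
M, P, mu, Q = 64, 4, 1.0, 4
H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)

lam = np.linalg.eigvalsh(H @ H.conj().T / M)        # eigenvalues lambda_p

U_o = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # desired ZF matrix
U = np.zeros((M, P), dtype=complex)
for _ in range(Q + 1):                              # recursion up to order Q
    U = U + (mu / M) * H.conj().T @ (np.eye(P) - H @ U)

err = np.linalg.norm(U_o - U, 'fro')**2
pred = np.sum((1 / lam) * (1 - mu * lam)**(2 * (Q + 1))) / M
print(err, pred)                                    # the two agree
```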
\subsection{Tracking}
When the channel is static, the performance of tracking will be the same as that in (\ref{3-6_1}) except that the expansion order, $Q$, is replaced by the block index, $n$. On the other hand, if the channel is time-varying, the variation of the desired precoding matrix is given, from (\ref{3-1}), by
\begin{align}\label{3-8_1}
\mathbf{\Phi}[n,k]&\triangleq\mathbf{U}_o[n+1,k]-\mathbf{U}_o[n,k]\nonumber\\
&=\mathbf{H}^{\mathrm{H}}[n+1,k]\left(\mathbf{H}[n+1,k]\mathbf{H}^{\mathrm{H}}[n+1,k]\right)^{-1}-\nonumber\\
&~~~~\mathbf{H}^{\mathrm{H}}[n,k]\left(\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]\right)^{-1}.
\end{align}\par
Exact analysis based on (\ref{3-8_1}) is difficult. To gain analytical insights, we assume the channels corresponding to different antennas and different users are independent. In that case, $\mathbf{\Phi}[n,k]$ can be approximated by
\begin{align}
\mathbf{\Phi}[n,k]\approx\frac{1}{M}\left(\mathbf{H}^{\mathrm{H}}[n+1,k]-\mathbf{H}^{\mathrm{H}}[n,k]\right)\mathbf{G}^{-1}.
\end{align}
Furthermore, we assume that the expansion order for initialization is sufficiently large so that $\mathbf{U}[0,k]=\mathbf{U}_o[0,k]$.\par
Define $\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}\triangleq\mathbf{U}_o[n,k]\mathbf{G}^{\frac{1}{2}}-\mathbf{U}[n,k]\mathbf{G}^{\frac{1}{2}}$ to be the normalized precoding matrix error for tracking, where the large-scale fading effect has been taken into account. When the Doppler frequency, $f_d$, is small, then it is shown in Appendix D that the \emph{mean-square-error } (MSE) can be expressed by
\begin{align}\label{3-8_3}
\mathrm{MSE}_n(M,P)&\triangleq\mathrm{E}\{\|\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\}\nonumber\\
&=\frac{2\pi^2 f_d^2 T^2 M}{P}\left[1-\left(1-\frac{P}{M}\right)^n\right]^2,
\end{align}
where $T$ denotes the OFDM symbol duration. From (\ref{3-8_3}), we have the following observations:
\begin{itemize}
\item The MSE of tracking depends on the antenna and user numbers only through their ratio. As the antenna number is much larger than the user number in an LSA system, we have
\begin{align}
\mathrm{MSE}_n(M,P)\approx 2\pi^2f_d^2T^2n^2\frac{P}{M}.
\end{align}
\item The MSE of tracking increases with the OFDM block index. It means that the performance degrades as the time recursion proceeds, which is also confirmed by our simulation results.
\item The MSE of tracking increases with the Doppler frequency, that is, the performance degrades as the Doppler frequency rises, which also coincides with our intuition.
\end{itemize}
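A quick numerical check of the large-$M$ approximation in the first bullet; the values of $f_d$, $T$, $M$, $P$, and $n$ below are illustrative assumptions.

```python
import math

# Large-M approximation of the tracking MSE (3-8_3):
# (M/P) * [1 - (1 - P/M)^n]^2  ->  n^2 * P / M  when  M >> P.
f_d, T = 30.0, 66.7e-6          # Doppler (Hz), OFDM symbol duration (s)
M, P, n = 1000, 10, 5

scale = 2 * math.pi**2 * f_d**2 * T**2
exact = scale * (M / P) * (1 - (1 - P / M)**n)**2
approx = scale * n**2 * (P / M)
print(exact, approx)            # close, with exact slightly smaller
```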
\subsection{Impact of Imperfect Channel}
In the above, we have assumed that the accurate downlink channel is known at the BS. In practical systems, the downlink channel at the BS can be obtained by estimating the uplink channel due to the reciprocity in time-division duplexing systems \cite{survey}. In either case, only an imperfect channel estimate is available at the BS.\par
To analyze the impacts of channel estimation error, denote the imperfect channel to be
\begin{align}\label{3-9}
\widehat{\mathbf{H}}[n,k]=\mathbf{H}[n,k]+\widetilde{\mathbf{H}}[n,k],
\end{align}
where $\widetilde{\mathbf{H}}[n,k]=\{\widetilde{h}_{p,m}[n,k]\}_{p,m=1}^{P,M}$ denotes the channel estimation error with $\mathrm{E}\{\widetilde{h}_{p,m}[n,k]\widetilde{h}_{p_1,m_1}^*[n,k]\}=g_p\sigma_h^2\delta[p-p_1]\delta[m-m_1]$, where $\sigma_h^2$ is the variance of the error when $g_p=1$. Assuming the CFRs and the channel errors are independent, we can obtain, when the antenna number is large enough, that
\begin{align}\label{3-9_0}
\frac{1}{M}\widehat{\mathbf{H}}[n,k]\widehat{\mathbf{H}}^{\mathrm{H}}[n,k]\approx \frac{1}{M}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]+\sigma_h^2\mathbf{G}.
\end{align}
From (\ref{3-9_0}), we have $\widehat{\lambda}_{p}=\lambda_p+\sigma_h^2$, where $\widehat{\lambda}_p$ denotes the $p$-th eigenvalue of $\frac{1}{M}\widehat{\mathbf{H}}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\widehat{\mathbf{H}}[n,k]$ or $\frac{1}{M}\mathbf{G}^{-\frac{1}{2}}\widehat{\mathbf{H}}[n,k]\widehat{\mathbf{H}}[n,k]^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}$. For simplicity, we will only focus on the initialization in the remainder of this subsection, although our results also apply to the tracking stage.\par
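The eigenvalue shift $\widehat{\lambda}_p=\lambda_p+\sigma_h^2$ implied by (\ref{3-9_0}) can be observed numerically. In the sketch below, with $\mathbf{G}=\mathbf{I}$ and illustrative sizes and error variance, the sorted eigenvalues of the two Gram matrices differ by roughly $\sigma_h^2$.

```python
import numpy as np

# Eigenvalue shift under i.i.d. channel estimation error of variance
# var_h, for large M.  M, P, var_h are illustrative assumptions.
rng = np.random.default_rng(4)
M, P, var_h = 4096, 4, 0.1
H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)
E = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) \
    * np.sqrt(var_h / 2)
H_hat = H + E

lam = np.sort(np.linalg.eigvalsh(H @ H.conj().T / M))
lam_hat = np.sort(np.linalg.eigvalsh(H_hat @ H_hat.conj().T / M))
print(lam_hat - lam)     # each entry close to var_h = 0.1
```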
In the presence of the channel estimation error, the order recursion for initialization can be rewritten by
\begin{align}\label{3-9_1}
&\widehat{\mathbf{U}}^{(Q+1)}[0,k]=\widehat{\mathbf{U}}^{(Q)}[0,k]+\nonumber\\
&\frac{\mu}{M}\widehat{\mathbf{H}}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}(\mathbf{I}-\widehat{\mathbf{H}}[0,k]\widehat{\mathbf{U}}^{(Q)}[0,k]),
\end{align}
where $\widehat{\mathbf{U}}^{(Q)}[0,k]$ denotes the precoding coefficients with the imperfect channel. Correspondingly, the normalized precoding matrix error is $\Delta{\widehat{\mathbf{U}}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\triangleq\widehat{\mathbf{U}}_o[0,k]\mathbf{G}^{\frac{1}{2}}-\widehat{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}$, where $\widehat{\mathbf{U}}_o[0,k]=\widehat{\mathbf{H}}^{\mathrm{H}}[0,k](\widehat{\mathbf{H}}[0,k]\widehat{\mathbf{H}}^{\mathrm{H}}[0,k])^{-1}$ indicates the desired precoding matrix with the imperfect channel. Following the same analysis as in Section IV.A, the convergence of (\ref{3-9_1}) can be achieved by choosing $\mu_0=1/(1+\sigma_h^2)$ when the channels at different antennas are independent, or $\mu_0=2/[\lambda_{\mathrm{max}}(\mathbf{R})+\lambda_{\mathrm{min}}(\mathbf{R}) + 2\sigma_h^2]$ when they are correlated. We have $\|\Delta{\widehat{\mathbf{U}}}^{(\infty)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2=0$ and thus ${\widehat{\mathbf{U}}}^{(\infty)}[0,k]=\widehat{\mathbf{U}}_o[0,k]$ when $Q\rightarrow\infty$.\par
In addition to changing the step size, the channel estimation error will also cause the performance degradation when the convergence has been achieved. Denote $\Delta \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}\triangleq \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}-\widehat{\mathbf{U}}_o[0,k]\mathbf{G}^{\frac{1}{2}}$ to be the error for the desired precoding matrix due to the channel estimation error. To gain analytical insights, we assume the channels corresponding to different antennas and different users are independent. In that case,
\begin{align}\label{3-9_2}
\Delta \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}&\approx\frac{1}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-\frac{1}{2}}-\frac{1}{M(1+\sigma_h^2)}\widehat{\mathbf{H}}^{\mathrm{H}}[0,k]\mathbf{G}^{-\frac{1}{2}}\nonumber\\
&=\frac{1}{M(1+\sigma_h^2)}(\sigma_h^2\mathbf{H}^{\mathrm{H}}[0,k]-\widetilde{\mathbf{H}}^{\mathrm{H}}[0,k])\mathbf{G}^{-\frac{1}{2}}.
\end{align}
With the assumption that the CFRs and the channel errors are independent, the MSE can be expressed by
\begin{align}\label{3-9_3}
\mathrm{E}\{\|\Delta \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\}=\frac{P\sigma_h^2}{M(1+\sigma_h^2)}.
\end{align}
From (\ref{3-9_3}), we have the following observations:
\begin{itemize}
\item If assuming $\sigma_h^2$ is very small, we have
\begin{align}
\mathrm{E}\{\|\Delta \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\}\approx \frac{P\sigma_h^2}{M},
\end{align}
which is approximately proportional to the variance of the channel estimation error.
\item By increasing the antenna number or reducing the user number, the impact of the channel estimation error can be mitigated. In the extreme case where $M\rightarrow\infty$, we have $\mathrm{E}\{\|\Delta \mathbf{U}_o[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\}=0$, which means the impact of the channel estimation error vanishes when the antenna number is very large.
\end{itemize}
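The MSE expression (\ref{3-9_3}) can also be checked by Monte Carlo simulation. The sketch below averages the exact precoding-matrix error over random channels with $\mathbf{G}=\mathbf{I}$ and compares it with $P\sigma_h^2/[M(1+\sigma_h^2)]$; the sizes, error variance, and trial count are illustrative assumptions, and the match is only approximate for finite $M$.

```python
import numpy as np

# Monte Carlo check of (3-9_3) with G = I.  M, P, var_h, and the
# number of trials are illustrative assumptions.
rng = np.random.default_rng(5)
M, P, var_h, trials = 512, 4, 0.05, 200

mse = 0.0
for _ in range(trials):
    H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)
    E = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) \
        * np.sqrt(var_h / 2)
    H_hat = H + E
    U_o = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    U_hat = H_hat.conj().T @ np.linalg.inv(H_hat @ H_hat.conj().T)
    mse += np.linalg.norm(U_o - U_hat, 'fro')**2 / trials

pred = P * var_h / (M * (1 + var_h))
print(mse, pred)         # close for large M
```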
\section{Simulation Results}
\begin{figure}
\centering
\subfigure{
\includegraphics[width=3.5in]{MSE1.eps}
}\\
(a)\\
\subfigure{
\includegraphics[width=3.5in]{SER1.eps}
}\\
(b)\\
\caption{Performances of initialization (a) MSE (b) SER.}\label{initialization}
\end{figure}
\begin{figure}
\centering
\subfigure{
\includegraphics[width=3.5in]{MSE2.eps}
}\\
(a)\\
\subfigure{
\includegraphics[width=3.5in]{SER2.eps}
}\\
(b)\\
\caption{Performances of tracking (a) MSE (b) SER.}\label{tracking}
\end{figure}
\begin{figure}
\centering
\subfigure{
\includegraphics[width=3.5in]{MSEvsChanError.eps}
}\\
(a)\\
\subfigure{
\includegraphics[width=3.5in]{SERvsChanError.eps}
}\\
(b)\\
\caption{Impacts of imperfect channel information for (a) MSE (b) SER.}\label{chan_error}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.5in]{SERvsSNR.eps}\\
\caption{SER versus $Es/N_0$.}\label{SERvsSNR}
\end{figure}
In this section, we evaluate the proposed approach via computer simulations. We consider a BS equipped with $M=100$ antennas and $P=10$ users in the system. A \emph{quadrature-phase-shift-keying} (QPSK) modulated OFDM signal is used, where the subcarrier spacing is $15$ kHz, corresponding to an OFDM symbol duration of about $66.7\mu\text{s}$. For a typical $5$ MHz channel, the size of the FFT is $512$, with $300$ subcarriers used for data transmission and the others used as a guard band as in LTE \cite{3GPP}. Each frame consists of $14$ OFDM symbols. A normalized ETU channel model is used, which has $9$ taps and a maximum delay of $\tau_{\mathrm{max}} = 5\mu\text{s}$. The channels at different antennas can be independent or correlated. For the latter, a \emph{uniform-linear-array} (ULA) is used where the antennas are placed along a straight line \cite{YLiu2}. In this case, the correlation of the channels at the $m$-th antenna and the $m_1$-th antenna is $\rho[m-m_1]=J_0[2\pi (m - m_1) D/(M-1)]$, where $D$ is the array size normalized by the wavelength. Apparently, the channels at different antennas will be more correlated for smaller $D$. Without loss of generality, we assume $g_p=1$ for all users.\par
Fig.~\ref{initialization} shows the MSE and \emph{symbol-error-ratio} (SER) for the initialization of the proposed approach. From Fig.~\ref{initialization} (a), the MSE reduces as the order recursion proceeds. However, the MSEs for the correlated channels do not decrease as fast as those for the independent channels. It means that more order recursions are required to achieve satisfactory initialization performance when the channels at different antennas are correlated. This coincides with the observation in Fig.~\ref{initialization} (b). From Fig.~\ref{initialization} (b), the SER improves as the order recursion proceeds. When the channels at different antennas are independent, the proposed approach can achieve the same SER as the ZF precoding within only two recursions. However, more recursions are required when the channels at different antennas are correlated.\par
Fig.~\ref{tracking} shows the MSE and SER for the tracking of the proposed approach with different Doppler frequencies. We assume the expansion order for initialization is large enough such that $\mathbf{U}[0,k]=\mathbf{U}_o[0,k]$. From Fig.~\ref{tracking} (a), the channel correlation has a smaller impact on the tracking MSE than on the initialization MSE. From Fig.~\ref{tracking} (b), the time-varying channels can be efficiently tracked when the Doppler frequency is small, and therefore the SERs over different OFDM blocks are almost the same. On the other hand, it becomes difficult to track the channel time variation as the Doppler frequency increases, and thus the SERs for the OFDM blocks at the end of the frame get worse. This problem can be easily addressed by re-initialization when the precoding coefficients drift far from the desired ones.\par
Fig.~\ref{chan_error} shows the impacts of the channel estimation error. From Fig.~\ref{chan_error} (a), the MSE is approximately proportional to the variance of the channel estimation error when the latter is small, which coincides with our analysis in Section IV. Fig.~\ref{chan_error} (b) shows that the channel estimation error has little effect on the SER when $\sigma_h^2<-15$ dB. Otherwise, the SER performance is seriously degraded as the channel estimation error increases.\par
Fig.~\ref{SERvsSNR} shows the SER versus $Es/N_0$ with different Doppler frequencies. For the proposed approach, we also assume the expansion order is large enough for initialization such that $\mathbf{U}[0,k]=\mathbf{U}_o[0,k]$. As the Doppler frequency increases, the SER performance degrades because the channels cannot be efficiently tracked when the Doppler frequency is large. As comparisons, the MF precoding and the traditional ZF precoding with $B=1,6,12$ are also included. Since the ZF and MF precodings are conducted for each OFDM block individually, their SER performances are the same for different Doppler frequencies. When the Doppler frequency is small, the proposed approach can achieve the same SER as the traditional ZF precoding with $B=1$. As $B$ increases, the performance of the ZF precoding degrades although the complexity can be reduced. Meanwhile, the proposed approach can significantly outperform the MF precoding since the latter cannot completely remove the IUI.
\section{Conclusions}
In this paper, a low-complexity recursive convolutional precoding has been proposed for the precoder design in an LSA-OFDM system. The traditional frequency-domain precoding has been converted into a time-domain convolutional precoding so that the number of IFFTs is substantially reduced. Moreover, based on the order recursion of the Taylor expansion, the convolutional precoding filter works recursively, so that we not only avoid the direct matrix inverse of the traditional ZF precoding but also provide a low-complexity way to implement it through the convolutional precoding filter. Our results have shown that it is more suitable to adopt the convolutional precoding rather than the traditional frequency-domain precoding for the transceiver design in LSA-OFDM systems.
\appendices
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of (\ref{3-2})}
When the antenna number is sufficiently large and the CFRs corresponding to different users and different antennas are independent, the CFR vectors for different users are asymptotically orthogonal and therefore we have
\begin{align}
\frac{1}{M}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]=\mathbf{G}.
\end{align}
In practical systems, however, the antenna number is always finite, and the channels at different antennas can be correlated when placing so many antennas in a small area. In such case,
\begin{align}\label{A1}
\frac{1}{M}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]=\mathbf{G}-\mathbf{\Delta}[n,k],
\end{align}
where $\mathbf{\Delta}[n,k]$ can be viewed as a perturbation matrix. When scaled by a factor $\mu$, we have
\begin{align}\label{A2}
\frac{\mu}{M}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}=\mathbf{I}-\mathbf{\Lambda}[n,k],
\end{align}
where $\mathbf{\Lambda}[n,k]=(1-\mu)\mathbf{I}+\mu\mathbf{\Delta}[n,k]\mathbf{G}^{-1}$. Using the Taylor expansion, the inverse of (\ref{A2}) can be approximated by the truncated series
\begin{align}\label{A3}
\mathbf{P}^{(Q)}[n,k]\triangleq\sum_{q=0}^Q\mathbf{\Lambda}^q[n,k]\approx\left(\mathbf{I}-\mathbf{\Lambda}[n,k]\right)^{-1}.
\end{align}
Substituting (\ref{A3}) into (\ref{3-1}), we can obtain
\begin{align}\label{A4}
\mathbf{U}^{(Q)}[n,k]=\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{P}^{(Q)}[n,k],
\end{align}
where $\mathbf{U}^{(Q)}[n,k]$ denotes the precoding matrix with the $Q$-th order Taylor expansion. Exploiting the relation between consecutive expansion orders, we have
\begin{align}\label{A5}
\mathbf{P}^{(Q+1)}[n,k]=\mathbf{I}+\mathbf{\Lambda}[n,k]\mathbf{P}^{(Q)}[n,k].
\end{align}
Substituting (\ref{A5}) into (\ref{A4}),
\begin{align}
&~~~~\mathbf{U}^{(Q+1)}[n,k]\nonumber\\
&=\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}+\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{\Lambda}[n,k]\mathbf{P}^{(Q)}[n,k]\nonumber\\
&=\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}+\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\cdot\nonumber\\
&~~~~\left(\mathbf{I}-\frac{\mu}{M}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\right)\mathbf{P}^{(Q)}[n,k]\nonumber\\
&=\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}+\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{P}^{(Q)}[n,k]-\nonumber\\
&~~~~\frac{\mu^2}{M^2}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{H}[n,k]\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{P}^{(Q)}[n,k]\nonumber\\
&=\mathbf{U}^{(Q)}[n,k]+\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\left(\mathbf{I}-\mathbf{H}[n,k]\mathbf{U}^{(Q)}[n,k]\right).
\end{align}
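The chain of equalities above can be verified numerically: the order recursion reproduces the truncated Taylor series (\ref{A3})-(\ref{A4}). The sketch below uses $\mathbf{G}=\mathbf{I}$ and illustrative sizes and step size.

```python
import numpy as np

# The recursion U^(Q+1) = U^(Q) + (mu/M) H^H (I - H U^(Q)) reproduces
# the truncated series U^(Q) = (mu/M) H^H sum_{q<=Q} Lambda^q, G = I.
rng = np.random.default_rng(6)
M, P, mu, Q = 32, 3, 0.8, 6
H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)

Lam = np.eye(P) - (mu / M) * H @ H.conj().T      # Lambda[n,k] with G = I

# Truncated Taylor series (A.3)-(A.4)
Pq = sum(np.linalg.matrix_power(Lam, q) for q in range(Q + 1))
U_series = (mu / M) * H.conj().T @ Pq

# Order recursion, starting from U^(0) = (mu/M) H^H (i.e. P^(0) = I)
U = (mu / M) * H.conj().T
for _ in range(Q):
    U = U + (mu / M) * H.conj().T @ (np.eye(P) - H @ U)

print(np.max(np.abs(U - U_series)))   # agree to round-off
```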
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of (\ref{3-6})}
Taking the inverse DFT on both sides of (\ref{3-5}), we have
\begin{align}\label{B3}
&~~~~\mathbf{w}_{m,p}[n+1]\nonumber\\
&=\mathbf{w}_{m,p}[n]+\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}\frac{\mathbf{F}^{\mathrm{H}}}{\sqrt{K}}\cdot\nonumber\\
&~~~~\left({\delta[i-p]\mathbf{I}}-\mathbf{D}_{i,p}[n]\right)\frac{\mathbf{F}}{\sqrt{K}}\frac{1}{K}\mathbf{F}^{\mathrm{H}}\mathbf{h}_{i,m}^*[n]\nonumber\\
&=\mathbf{w}_{m,p}[n]+\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}\frac{\mathbf{F}^{\mathrm{H}}}{\sqrt{K}}\cdot\nonumber\\
&~~~~\left({\delta[i-p]\mathbf{I}}-\mathbf{D}_{i,p}[n]\right)\frac{\mathbf{F}}{\sqrt{K}}
\left(\begin{array}{c}
c_{i,m}^*[n,0] \\
\vdots \\
c_{i,m}^*[n,-(K-1)]
\end{array}
\right).
\end{align}
To proceed, we can derive that
\begin{align}\label{B4}
&~~~~\frac{\mathbf{F}^{\mathrm{H}}}{\sqrt{K}}\mathbf{D}_{i,p}[n]\frac{\mathbf{F}}{\sqrt{K}}\nonumber\\
&=\sum_{m_0=1}^M\frac{\mathbf{F}^{\mathrm{H}}}{\sqrt{K}}\mathrm{diag}\{{h}_{i,m_0}[n,k]u_{m_0,p}[n,k]\}\frac{\mathbf{F}}{\sqrt{K}}\nonumber\\
&=\sum_{m_0=1}^M\mathrm{circ}\{c_{i,m_0}[n,l]\circledast w_{m_0,p}[n,l]\},
\end{align}
where $\circledast$ denotes the circular convolution and $\mathrm{circ}\{a_0,a_1,\cdots,a_{K-1}\}$ denotes a circular matrix constructed using $a_0,a_1,\cdots,a_{K-1}$. Substituting (\ref{B4}) into (\ref{B3}), we have
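The key identity behind (\ref{B4}) — that $\frac{1}{K}\mathbf{F}^{\mathrm{H}}\,\mathrm{diag}\{\cdot\}\,\mathbf{F}$ with a diagonal of DFT coefficients is a circulant matrix, and that a product of two such diagonals yields the circulant of a circular convolution — can be checked with a small sketch (the FFT size and random sequences are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import dft, circulant

K = 8                                   # illustrative FFT size (assumption)
rng = np.random.default_rng(2)
c = rng.standard_normal(K) + 1j * rng.standard_normal(K)
w = rng.standard_normal(K) + 1j * rng.standard_normal(K)
F = dft(K)                              # DFT matrix: F @ x == np.fft.fft(x)

# (F^H/sqrt(K)) diag{fft(c)} (F/sqrt(K)) equals the circulant matrix with first column c
lhs = (F.conj().T / np.sqrt(K)) @ np.diag(np.fft.fft(c)) @ (F / np.sqrt(K))
err1 = np.abs(lhs - circulant(c)).max()

# With a product of spectra, the result is the circulant of the circular convolution
cw = np.fft.ifft(np.fft.fft(c) * np.fft.fft(w))        # c circularly convolved with w
lhs2 = (F.conj().T / np.sqrt(K)) @ np.diag(np.fft.fft(c) * np.fft.fft(w)) @ (F / np.sqrt(K))
err2 = np.abs(lhs2 - circulant(cw)).max()
print(err1, err2)
```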
\begin{align}\label{B5}
&~~~~\mathbf{w}_{m,p}[n+1]\nonumber\\
&=\mathbf{w}_{m,p}[n]+\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}\left[\delta[i-p]\left(\begin{array}{c}
c_{i,m}^*[n,0] \\
\vdots \\
c_{i,m}^*[n,-(K-1)]
\end{array}
\right)-\right.\nonumber\\
&~~\left.\sum_{m_0=1}^M\mathrm{circ}\{c_{i,m_0}[n,l]\circledast w_{m_0,p}[n,l]\}
\left(\begin{array}{c}
c_{i,m}^*[n,0] \\
\vdots \\
c_{i,m}^*[n,-(K-1)]
\end{array}
\right)\right],
\end{align}
which can be rewritten in a scalar form as
\begin{align}\label{B7}
w_{m,p}[n+1,l]&=w_{m,p}[n,l]+\frac{\mu}{M}\sum_{i=1}^Pg_i^{-1}c_{i,m}^*[n,-l]\circledast\nonumber\\
&~~\left(\delta[i-p]\delta[l]-\sum_{m_0=1}^Mc_{i,m_0}[n,l] \circledast w_{m_0,p}[n,l]\right).
\end{align}\par
In general, the channel length, $L$, is much smaller than the FFT size. In other words, the power of CIR, $c_{i,m}[n,l]$, may concentrate only on the taps at the beginning and the others are small enough and thus can be omitted. This is also the case for the precoding coefficients, $w_{m,p}[n,l]$, due to the correlation of frequency-domain precoding matrices. As a result, the circular convolution in (\ref{B7}) can be replaced by the linear convolution, leading exactly to (\ref{3-6}).
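The circular-to-linear convolution replacement can be illustrated numerically: when the supports are short relative to the FFT size ($2L-1\le K$), the circular convolution has no wrap-around and coincides with the linear one. All sizes below are assumptions chosen for the demonstration.

```python
import numpy as np

K, L = 64, 6                 # FFT size and channel length, L << K (assumption)
rng = np.random.default_rng(3)
c = np.zeros(K); c[:L] = rng.standard_normal(L)   # CIR energy only on the first taps
w = np.zeros(K); w[:L] = rng.standard_normal(L)   # same for the precoding taps

circ = np.fft.ifft(np.fft.fft(c) * np.fft.fft(w)).real   # circular convolution
lin = np.convolve(c[:L], w[:L])                           # linear convolution, length 2L-1
err = np.abs(circ[:2 * L - 1] - lin).max()
print(err)   # zero up to round-off: no wrap-around since 2L-1 <= K
```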
\renewcommand{\theequation}{C.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of (\ref{3-6_1})}
By subtracting $\mathbf{U}_o[0,k]$ on both sides of (\ref{3-2}) and then multiplying $\mathbf{G}^{\frac{1}{2}}$, we obtain
\begin{align}\label{C1}
\Delta{\mathbf{U}}^{(Q+1)}[0,k]\mathbf{G}^{\frac{1}{2}}=&\left(\mathbf{I}-\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}\mathbf{H}[0,k]\right)\nonumber\\
&\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}.
\end{align}
Denote
\begin{align}\label{C2}
\frac{1}{\sqrt{M}}\mathbf{G}^{-\frac{1}{2}}\mathbf{H}[0,k]=\mathbf{W}\mathbf{\Sigma}\mathbf{V}^{\mathrm{H}}=\mathbf{W}\mathbf{\Sigma}_0\mathbf{V}_0^{\mathrm{H}},
\end{align}
where $\mathbf{W}$ is a $P\times P$ unitary matrix, $\mathbf{\Sigma}=\left(\mathbf{\Sigma}_0,\mathbf{0}\right)$ with $\mathbf{\Sigma}_0=\mathrm{diag}\{{\lambda_p}^{\frac{1}{2}}\}_{p=1}^P$ being a $P\times P$ diagonal matrix, and $\mathbf{V}=\left(\mathbf{V}_0,\mathbf{V}_1\right)$ is an $M\times M$ unitary matrix where $\mathbf{V}_0$ includes the first $P$ columns and $\mathbf{V}_1$ includes the last $M-P$ columns. Then, we have
\begin{align}\label{C3}
\frac{1}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}\mathbf{H}[0,k]=\mathbf{V}\mathbf{\Sigma}^{\mathrm{H}}\mathbf{\Sigma}\mathbf{V}^{\mathrm{H}}=\mathbf{V}_0\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0\mathbf{V}_0^{\mathrm{H}}.
\end{align}
Substituting (\ref{C3}) into (\ref{C1}), we can obtain
\begin{align}\label{C4}
\Delta{\mathbf{U}}^{(Q+1)}[0,k]\mathbf{G}^{\frac{1}{2}}=\mathbf{V}\left(\begin{array}{cc}
\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0 & \\
& \mathbf{I}
\end{array}
\right)\mathbf{V}^{\mathrm{H}}\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}.
\end{align}
Using the recursive relation in (\ref{C4}), we can derive that
\begin{align}\label{C5}
&~~~~\mathbf{V}^{\mathrm{H}}\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\nonumber\\
&=\left(\begin{array}{cc}
\left(\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0\right)^{Q} & \\
& \mathbf{I}
\end{array}
\right)\left(\begin{array}{c}
\mathbf{V}_0^{\mathrm{H}} \\
\mathbf{V}_1^{\mathrm{H}}
\end{array}
\right)\Delta{\mathbf{U}}^{(0)}[0,k]\mathbf{G}^{\frac{1}{2}}.
\end{align}\par
Recall that
\begin{align}\label{C6}
\mathbf{U}_0[0,k]&=\mathbf{H}^{\mathrm{H}}[0,k]\left(\mathbf{H}[0,k]\mathbf{H}^{\mathrm{H}}[0,k]\right)^{-1}\nonumber\\
&=\frac{1}{\sqrt{M}}\mathbf{V}_0\mathbf{\Sigma}_0^{-1}\mathbf{W}^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}},\\
\mathbf{U}^{(0)}[0,k]&=\frac{\mu}{M}\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}=\frac{\mu}{\sqrt{M}}\mathbf{V}_0\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{W}^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}.
\end{align}
Therefore,
\begin{align}\label{C8}
\Delta{\mathbf{U}}^{(0)}[0,k]&=\mathbf{U}_0[0,k]-\mathbf{U}^{(0)}[0,k]\nonumber\\
&=\frac{1}{\sqrt{M}}\mathbf{V}_0\mathbf{\Sigma}_0^{-1}(\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0)\mathbf{W}^{\mathrm{H}}\mathbf{G}^{-\frac{1}{2}}.
\end{align}
As a result, we have $\mathbf{V}_1^{\mathrm{H}}\Delta{\mathbf{U}}^{(0)}[0,k]=\mathbf{0}$ since $\mathbf{V}_1^{\mathrm{H}}\mathbf{V}_0=\mathbf{0}$. Using this relation, (\ref{C5}) can be simplified as
\begin{align}\label{C9}
\mathbf{V}_0^{\mathrm{H}}\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}&=\left(\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0\right)^{Q}
\mathbf{V}_0^{\mathrm{H}}\Delta{\mathbf{U}}^{(0)}[0,k]\mathbf{G}^{\frac{1}{2}}\nonumber\\
&=\frac{1}{\sqrt{M}}\mathbf{\Sigma}_0^{-1}\left(\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0\right)^{Q+1}\mathbf{W}^{\mathrm{H}}.
\end{align}
Therefore, $\|\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2$ can be expressed by
\begin{align}\label{C10}
\|\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2&=\|\mathbf{V}^{\mathrm{H}}\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\nonumber\\
&=\|\mathbf{V}_0^{\mathrm{H}}\Delta{\mathbf{U}}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\nonumber\\
&=\left\|\frac{1}{\sqrt{M}}\mathbf{\Sigma}_0^{-1}\left(\mathbf{I}-\mu\mathbf{\Sigma}_0^{\mathrm{H}}\mathbf{\Sigma}_0\right)^{Q+1}\mathbf{W}^{\mathrm{H}}\right\|_{\mathrm{F}}^2\nonumber\\
&=\frac{1}{M}\sum_{p=1}^P\lambda_p^{-1}(1-\mu\lambda_p)^{2(Q+1)},
\end{align}
where we have used the fact that $\lambda_p$ is a real number, since $\mathbf{H}^{\mathrm{H}}[0,k]\mathbf{G}^{-1}\mathbf{H}[0,k]$ is a Hermitian matrix.
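A direct numerical check of (\ref{C10}) — running the recursive update with the channel frozen at $\mathbf{H}[0,k]$ and comparing $\|\Delta\mathbf{U}^{(Q)}[0,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2$ against the eigenvalue expression — can be sketched as follows (all sizes, $\mu$ and the diagonal $\mathbf{G}$ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
P, M, mu, Q = 3, 16, 0.5, 5         # illustrative values (assumptions)
H = (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))) / np.sqrt(2)
g = rng.uniform(0.5, 1.5, P)        # an assumed positive diagonal G
Ginv, Gh, Gmh = np.diag(1 / g), np.diag(np.sqrt(g)), np.diag(1 / np.sqrt(g))

# U^(Q) via the recursive update, with the channel frozen at H[0,k]
U = (mu / M) * H.conj().T @ Ginv
for _ in range(Q):
    U = U + (mu / M) * H.conj().T @ Ginv @ (np.eye(P) - H @ U)
Uo = H.conj().T @ np.linalg.inv(H @ H.conj().T)     # zero-forcing target, as in (C6)

lhs = np.linalg.norm((Uo - U) @ Gh, 'fro') ** 2
lam = np.linalg.eigvalsh((1 / M) * Gmh @ H @ H.conj().T @ Gmh)   # the lambda_p of (C2)
rhs = np.sum(lam ** -1.0 * (1 - mu * lam) ** (2 * (Q + 1))) / M
print(lhs, rhs)   # the two agree to machine precision
```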
\renewcommand{\theequation}{D.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of (\ref{3-8_3})}
To analyze the tracking performance, rewrite (\ref{3-4}) in a matrix form as
\begin{align}\label{D1}
\mathbf{U}[n+1,k]=\mathbf{U}[n,k]+\frac{1}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}(\mathbf{I}-\mathbf{H}[n,k]\mathbf{U}[n,k]),
\end{align}
where $\mu_o=1$ has been used. By subtracting $\mathbf{U}_o[n+1,k]$ from both sides of (\ref{D1}) and multiplying by $\mathbf{G}^{\frac{1}{2}}$, we have
\begin{align}\label{D2}
\Delta{\mathbf{U}}[n+1,k]\mathbf{G}^{\frac{1}{2}}=\left(\mathbf{I}-\frac{1}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{H}[n,k]\right)\cdot\nonumber\\
\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}+\mathbf{\Phi}[n,k]\mathbf{G}^{\frac{1}{2}},
\end{align}
which is a stochastic difference equation whose system matrix is $\mathbf{I}-\frac{1}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{H}[n,k]$ \cite[Ch. 5]{SHaykin}. Due to the low-pass filtering effect of the \emph{least-mean-square} (LMS) filter, we can adopt the direct-averaging method so that the instantaneous system matrix can be replaced by its average \cite{HJKushner},
\begin{align}\label{DC3}
\mathrm{E}\left\{\mathbf{I}-\frac{1}{M}\mathbf{H}^{\mathrm{H}}[n,k]\mathbf{G}^{-1}\mathbf{H}[n,k]\right\}=\left(1-\frac{P}{M}\right)\mathbf{I}.
\end{align}
In other words, the solution of (\ref{D2}) can be approximated by the solution of the following difference equation
\begin{align}\label{D4}
\Delta{\mathbf{U}}[n+1,k]\mathbf{G}^{\frac{1}{2}}=\left(1-\frac{P}{M}\right)\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}+\mathbf{\Phi}[n,k]\mathbf{G}^{\frac{1}{2}}.
\end{align}
Direct calculation of (\ref{D4}) yields that
\begin{align}\label{D5}
\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}=&\left(1-\frac{P}{M}\right)^n\Delta{\mathbf{U}}[0,k]\mathbf{G}^{\frac{1}{2}}+\nonumber\\
&\sum_{i=0}^{n-1}\left(1-\frac{P}{M}\right)^{n-1-i}\mathbf{\Phi}[i,k]\mathbf{G}^{\frac{1}{2}},
\end{align}
where the first term is the natural component and the second term is the forced component. Since the expansion order used for initialization is assumed to be large enough that $\Delta{\mathbf{U}}[0,k]=\mathbf{0}$, (\ref{D5}) reduces to
\begin{align}\label{D6}
\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}=\sum_{i=0}^{n-1}\left(1-\frac{P}{M}\right)^{n-1-i}\mathbf{\Phi}[i,k]\mathbf{G}^{\frac{1}{2}},
\end{align}
or equivalently in a scalar form as
\begin{align}\label{D7}
&~~~~\sqrt{g_p}\Delta{u}_{p,m}[n,k]\nonumber\\
&=\sum_{i=0}^{n-1}\left(1-\frac{P}{M}\right)^{n-1-i}\frac{h_{p,m}^*[i+1,k]-h_{p,m}^*[i,k]}{M\sqrt{g_p}}.
\end{align}
Therefore, the MSE can be obtained as
\begin{align}\label{D8}
&\mathrm{E}\{|\sqrt{g_p}\Delta{u}_{p,m}[n,k]|^2\}=\frac{1}{M^2}\sum_{i_1=0}^{n-1}\sum_{i_2=0}^{n-1}\left(1-\frac{P}{M}\right)^{2(n-1)-(i_1+i_2)}\cdot\nonumber\\
&\left\{2J_0\left[2\pi f_d(i_1-i_2)T\right] - J_0\left[2\pi f_d(i_1-i_2+1)T\right] - \right.\nonumber\\
&\left.J_0\left[2\pi f_d(i_1-i_2-1)T\right]\right\},
\end{align}
where $J_0(\cdot)$ is the zeroth order Bessel function of the first kind. When the Doppler frequency, $f_d$, is small, we have
\begin{align}\label{D9}
J_0(2\pi f_d nT)\approx 1- \pi^2 f_d^2 n^2 T^2.
\end{align}
By substituting (\ref{D9}) into (\ref{D8}), we can obtain
\begin{align}\label{D10}
\mathrm{E}\{|\sqrt{g_p}\Delta{u}_{p,m}[n,k]|^2\}=\frac{2\pi^2f_d^2T^2}{P^2}\left[1-\left(1-\frac{P}{M}\right)^n\right]^2.
\end{align}
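To see the step from (\ref{D8}) to (\ref{D10}): under (\ref{D9}) the curly bracket in (\ref{D8}) collapses to the constant $-\pi^2f_d^2T^2\left[2d^2-(d+1)^2-(d-1)^2\right]=2\pi^2f_d^2T^2$ (with $d=i_1-i_2$), so the double sum factorizes into a squared geometric series. A short numerical check (all parameter values are assumptions):

```python
import numpy as np

fd, T, M, P, n = 10.0, 1e-4, 16, 4, 30     # illustrative values (assumptions)
r = 1 - P / M
bracket = 2 * np.pi**2 * fd**2 * T**2      # constant value of the curly bracket under (D9)

# Direct evaluation of the double sum in (D8) after the expansion
i1, i2 = np.meshgrid(np.arange(n), np.arange(n))
direct = (bracket / M**2) * np.sum(r ** (2 * (n - 1) - (i1 + i2)))

# Closed form (D10)
closed = (2 * np.pi**2 * fd**2 * T**2 / P**2) * (1 - r**n) ** 2
print(direct, closed)
```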
As a result, we have
\begin{align}
\mathrm{E}\{\|\Delta{\mathbf{U}}[n,k]\mathbf{G}^{\frac{1}{2}}\|_{\mathrm{F}}^2\}&=\sum_{m=1}^M\sum_{p=1}^P\mathrm{E}\{|\sqrt{g_p}\Delta{u}_{p,m}[n,k]|^2\}\nonumber\\
&=\frac{2\pi^2f_d^2T^2M}{P}\left[1-\left(1-\frac{P}{M}\right)^n\right]^2,
\end{align}
which is exactly (\ref{3-8_3}).
\bibliographystyle{IEEEtran}
\section{Introduction}
We focus on recovering a sparse piece-wise smooth
two-dimensional (2D) signal (an image) ${\bf X}$ from 1-bit measurements,
\begin{equation}\label{1bitcsnoisy}
{\bf Y}=\mbox{sign}\left({\bf A}{\bf X} + {\bf W}\right),
\end{equation}
where ${\bf Y}\in\{-1,1\}^{M \times L}$ is the measurement matrix, $\mbox{sign}$ is the element-wise
sign function that returns $+1$ for positive arguments and $-1$ otherwise, ${\bf A}\in\mathbb{R}^{M\times N}$ is
the known sensing matrix, ${\bf X} \in {\mathbb R}^{N \times L}$ is the original 2D signal, and ${\bf W} $ is additive noise.
Unlike in conventional {\it compressive sensing} (CS), 1-bit measurements lose all information about the magnitude of
the original signal ${\bf X}$. The goal is then to recover ${\bf X}$, but only up to an unknown and unrecoverable magnitude
\cite{boufounos20081}, \cite{jacques2011robust}.
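A minimal sketch of generating the measurements in (\ref{1bitcsnoisy}) is given below; all sizes, the sparsity pattern, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 40, 10, 20                 # illustrative sizes (assumptions)
X = np.zeros((N, L)); X[5:8, :] = 1.0   # a sparse, piece-wise smooth image
X /= np.linalg.norm(X)               # magnitude is lost anyway; normalize for reference
A = rng.standard_normal((M, N))
W = 0.1 * rng.standard_normal((M, L))

Y = np.sign(A @ X + W)
Y[Y == 0] = -1                       # sign convention: +1 for positive arguments, -1 otherwise
print(Y.shape, np.unique(Y))
```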
Our innovations, with respect to 1-bit CS as proposed in \cite{boufounos20081}, are twofold:
a) we address the 2D case; b) more importantly, we introduce a new regularizer favoring both sparsity and
piece-wise smoothness, which can be seen as a modified 2D version of {\it fused lasso} \cite{tibshirani2004sparsity}.
This new regularizer is the indicator of a union of convex subsets (total-variation balls) of the canonical subspaces,
simultaneously enforcing sparsity and smoothness within each connected subset of
non-zero elements. The rationale is that, when imposing smoothness and sparseness, smoothness should not interfere with
sparsity, {\it i.e.}, it should not be imposed across the transitions from zero to non-zero elements.
The proposed regularizer promotes sparseness and smoothness
and (although it is non-convex) has a computationally feasible projection,
based on which we propose a modified version of the {\it binary iterative hard
thresholding} (BIHT) algorithm \cite{jacques2011robust}.
\section{2D Binary Iterative Hard Thresholding (2DBIHT)}
To recover ${\bf X}$ from ${\bf Y}$, we first consider a 2D version of the
criterion proposed by Jacques {\it et al} \cite{jacques2011robust}
\begin{equation}
\label{2DBIHT_model}
\begin{split}
& \min_{\bf X} f({\bf Y}\odot {\bf A}{\bf X}) + \iota_{\Sigma_K}({\bf X})\\
& \mbox{subject\; to }\; \left\|{\bf X} \right\|_2 = 1,
\end{split}
\end{equation}
where: the operation ``$\odot$" denotes element-wise (Hadamard) product;
$\iota_C\left({\bf X}\right)$ denotes the indicator function of set $C$,
\begin{equation}
\iota_C\left({\bf X}\right) = \left\{
\begin{array} {ll}
0, & {\bf X} \in C \\
+\infty, & {\bf X} \not\in C;
\end{array}\right.
\label{indicator}
\end{equation}
${\Sigma_K} = \left\{{\bf X}\in \mathbb{R}^{N \times L} : \left\|{\bf X}\right\|_0 \leq K \right\}$ (with
$\|{\bf V}\|_0$ denoting the number of non-zero components in ${\bf V}$)
is the set of $K$-sparse $N \times L$ images; $\|{\bf X}\|_2 = \bigl(\sum_{ij} X_{(i,j)}^2\bigr)^{\frac{1}{2}} $ is the Euclidean norm,
and $f$ is one of the penalty functions defined next.
To penalize linearly the violations of the sign consistency between the observations and the estimate
\cite{jacques2011robust}, the barrier function is chosen as $f({\bf Z}) = 2\left\|{\bf \left[Z\right]}_{-}\right\|_1$,
where ${\bf \left[Z\right]}_{-} = \min \left({\bf Z}, 0\right)$ (with the minimum
applied entry-wise and the factor 2 included for later convenience) and $\|{\bf V}\|_1 = \sum_{ij} |V_{(i,j)}|$ is the
$\ell_1$ norm of ${\bf V}$. A quadratic barrier for sign violations (see \cite{boufounos20081})
is achieved by using $f({\bf Z})=\frac{1}{2} \left\|{\bf \left[Z\right]}_{-}\right\|_2^2 $,
where the factor $1/2$ is also included for convenience.
The iterative hard thresholding (IHT) \cite{blumensath2009iterative} algorithm applied to \eqref{2DBIHT_model}
(ignoring the norm constraint during the iterations) leads to the 2DBIHT algorithm, which is a 2D version of the {\it binary iterative hard thresholding} (BIHT) \cite{jacques2011robust}:
\begin{algorithm}{2DBIHT}{
\label{alg:2DBIHT}}
set $k =0, \tau >0, {\bf X}_0$ and $K$ \\
\qrepeat\\
${\bf V}_{k+1} = {\bf X}_{k} - \tau {\partial} f\left({\bf Y}\odot \left({\bf A}{\bf X}_k\right)\right)$\\
${\bf X}_{k+1} = {\mathcal P}_{\Sigma_K} \left( {\bf V}_{k+1}\right)$\\
$ k \leftarrow k+ 1$
\quntil some stopping criterion is satisfied.\\
\qreturn ${\bf X}_k/\left\|{\bf X}_k\right\|_2$
\end{algorithm}
In this algorithm, ${\partial} f$
denotes the subgradient of the objective (see \cite{jacques2011robust}, for details),
which is given by
\begin{equation} \label{subgradient}
{\partial} f\left({\bf Y}\odot \left({\bf A}{\bf X} \right)\right) = \left\{
\begin{array} {ll}
{\bf A}^T\left(\mbox{sign}\left({\bf A}{\bf X} \right) - {\bf Y}\right), & \ell_1 \;\mbox{barrier,} \\
{\bf A}^T \left( {\bf Y}\odot\left[{\bf Y}\odot \left({\bf A}{\bf X} \right)\right]_- \right), & \ell_2 \;\mbox{barrier.}
\end{array}\right.
\end{equation}
Step 3 corresponds to a sub-gradient descent step (with step-size $\tau$), while
Step 4 performs the projection onto the non-convex set ${\Sigma_K}$, which
corresponds to computing the best $K$-term approximation of ${\bf V}$, that is,
keeping the $K$ largest components in magnitude and setting the others to zero. Finally, the
returned solution is projected onto the unit sphere to satisfy the constraint $\left\|{\bf X} \right\|_2 = 1$ in (\ref{2DBIHT_model}).
The versions of BIHT with $\ell_1$ and $\ell_2$ objectives are referred to as 2DBIHT-$\ell_1$ and 2DBIHT-$\ell_2$, respectively.
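A compact sketch of 2DBIHT with the $\ell_1$ barrier is given below; the problem sizes, step size, iteration count, and synthetic data are illustrative assumptions, and recovery is assessed by the correlation with the normalized ground truth, since the magnitude is unrecoverable.

```python
import numpy as np

def hard_threshold(V, K):
    """Projection onto Sigma_K: keep the K largest-magnitude entries of V."""
    X = np.zeros_like(V)
    idx = np.unravel_index(np.argsort(np.abs(V), axis=None)[-K:], V.shape)
    X[idx] = V[idx]
    return X

def biht_2d(Y, A, K, tau=1.0, iters=300):
    """2DBIHT, l1 barrier: subgradient step followed by K-term projection."""
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = hard_threshold(X - tau * (A.T @ (np.sign(A @ X) - Y)), K)
    n = np.linalg.norm(X)
    return X / n if n > 0 else X

# Tiny synthetic demo (all sizes and the support pattern are assumptions)
rng = np.random.default_rng(1)
N, L, M, K = 30, 8, 60, 24
Xtrue = np.zeros((N, L)); Xtrue[10:13, :] = 1.0
Xtrue /= np.linalg.norm(Xtrue)
A = rng.standard_normal((M, N))
Y = np.sign(A @ Xtrue); Y[Y == 0] = -1

Xhat = biht_2d(Y, A, K)
corr = abs(np.sum(Xhat * Xtrue))       # recovery is only up to magnitude
print(corr)
```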
\section{2D Fused Binary Compressive Sensing (2DFBCS)}
The proposed formulation essentially adds a new constraint of
low (modified) total variation to the criterion of 2DBIHT \eqref{2DBIHT_model}, which encourages
4-neighbor elements to be similar, justifying the term ``fused".
\subsection{2DFBCS with Total Variation}
We first propose the following model:
\begin{equation}\label{2DBFCS_model1}
\begin{split}
& \min_{\bf X} f\left({\bf Y}\odot {\bf A}{\bf X}\right) + \iota_{\Sigma_K} \left({\bf X}\right) + \iota_{T_{\epsilon}} \left({\bf X}\right)\\
& \mbox{subject to} \; \left\|{\bf X} \right\|_2 = 1,
\end{split}
\end{equation}
where $T_{\epsilon}= \left\{{\bf X}\in \mathbb{R}^{N \times L} :\; \mbox{TV} \left({\bf X}\right) \leq \epsilon \right\}$, with
$\mbox{TV} \left({\bf x}\right) $ denoting the total variation (TV), which in the two-dimensional (2D) case is defined as
\begin{equation} \label{conv_TV}
\mbox{TV}({\bf X}) = \sum_{i=1}^{N-1}\sum_{j=1}^{L-1} \left(|X_{(i+1,j)} - X_{(i,j)}| + |X_{(i,j+1)} -X_{(i,j)}|\right),
\end{equation}
and $\epsilon$ is a positive parameter. In the same vein as 2DBIHT, the proposed algorithm is as follows:
{\begin{algorithm}{2DFBCS-TV}{
\label{alg:2DFBCS_TV}}
Set $\tau >0, \epsilon>0, K$, and ${\bf X}_0$ \\
\qrepeat\\
${\bf V}_{k+1} = {\bf X}_k - \tau \,{\partial} f\left({\bf Y}\odot \left({\bf A}{\bf X}_k\right)\right)$\\
${\bf X}_{k+1} = {\mathcal P}_{\Sigma_K} \bigl( {\mathcal P}_{T_{\epsilon}} \bigl({\bf V}_{k+1}\bigr)\bigr) $\\
$ k \leftarrow k+ 1$
\quntil some stopping criterion is satisfied.\\
\qreturn ${\bf X}/\left\|{\bf X}\right\|_2$
\end{algorithm}}
where line 4 computes the projection onto $T_{\epsilon}$, which can be obtained
using the algorithm proposed by Fadili and Peyr\'e \cite{fadili2011total}. The versions of the
2DFBCS-TV algorithm with $\ell_1$ and $\ell_2$ objectives are referred to as
2DFBCS-TV-$\ell_1$ and 2DFBCS-TV-$\ell_2$, respectively.
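For reference, the anisotropic TV in (\ref{conv_TV}) — the quantity defining membership in $T_{\epsilon}$ — can be implemented literally as follows (the example images are assumptions):

```python
import numpy as np

def tv2d(X):
    """Anisotropic 2D TV as written in the text:
    both difference terms are summed over i = 1..N-1 and j = 1..L-1."""
    return (np.abs(np.diff(X, axis=0)[:, :-1]).sum()
            + np.abs(np.diff(X, axis=1)[:-1, :]).sum())

def in_T_eps(X, eps):
    """Membership test for the TV ball T_eps."""
    return tv2d(X) <= eps

# A piece-wise constant image has low TV; a random one has high TV.
rng = np.random.default_rng(2)
flat = np.ones((8, 8))
step = np.zeros((8, 8)); step[4:, :] = 1.0   # one horizontal jump of height 1
noisy = rng.standard_normal((8, 8))
print(tv2d(flat), tv2d(step), tv2d(noisy))
```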
\subsection{2DFBCS with Modified Total Variation}
We propose the following problem formulation of 2DFBCS:
\begin{equation} \label{2DBFCS_model2}
\begin{split}
& \min_{{\bf X}\in \mathbb{R}^{N\times L}}
f\bigl({\bf Y}\odot \left({\bf A}{\bf X}\right)\bigr) + \iota_{\mathcal{F}^K_{\varepsilon}} \left({\bf X}\right)\\
& \mbox{subject\; to } \left\|{\bf X} \right\|_2 = 1,
\end{split}
\end{equation}
where the set $\mathcal{F}^K_{\varepsilon}$ requires a more careful explanation. As usual, define
${\Sigma_K} = \left\{{\bf X}\in \mathbb{R}^{N \times L} : \left\|{\bf X}\right\|_0 \leq K \right\}$ (with
$\|{\bf X}\|_0$ denoting the number of non-zeros in ${\bf X}$) as the set of $K$-sparse $N\times L$ images.
Consider the undirected 4-nearest-neighbors graph on the sites of $N\times L$ images, {\it i.e.},
$\mathcal{G}=(\mathcal{N},\mathcal{E})$, where $\mathcal{N} = \{(i,j),\, i=1,...,N, \, j=1,...,L\}$ and
$[(i,j),(k,l)]\in \mathcal{E}$ $\Leftrightarrow$
$\bigl((i=k)\wedge(|j-l|=1)\bigr)\vee \bigl((|i-k|=1)\wedge (j=l)\bigr)$. Given some ${\bf V} \in \mathbb{R}^{N \times L}$,
let $\widetilde{\mathcal{G}}({\bf V})=(\widetilde{\mathcal{N}}({\bf V}),\widetilde{\mathcal{E}}({\bf V}))$ be the
subgraph of $\mathcal{G}$ obtained by removing all the nodes corresponding to zero
elements of ${\bf V}$ (that is, $(i,j) \in \widetilde{\mathcal{N}}({\bf V}) \Leftrightarrow V_{(i,j)}\neq 0$), as well as
the corresponding edges. Naturally, $\widetilde{\mathcal{G}}({\bf V})$ may not be a connected graph; define
$\{\mathcal{G}_1({\bf V}) ,...,\mathcal{G}_{\mathcal{K}({\bf V})}({\bf V}) \}$ as the set of the
$\mathcal{K}({\bf V})$ connected subgraphs of
$\widetilde{\mathcal{G}}({\bf V})$, where $\mathcal{G}_k({\bf V}) = \bigl( \mathcal{N}_k ({\bf V}),\mathcal{E}_k({\bf V})\bigr)$.
Define the normalized TV of the sub-image of ${\bf V}$ corresponding to each of these connected
subgraphs as
\begin{equation} \label{normalizedTV}
\overline{\mbox{TV}}({ {\bf V}_{\mathcal{G}_k({\bf V})}}) =
|\mathcal{E}_k({\bf V})|^{-1} \sum_{\left[(i,j), (k,l)\right] \in \mathcal{E}_k({\bf V})} |V_{(i,j)} - V_{(k,l)}|
\end{equation}
(assuming $|\mathcal{E}_k({\bf V})|>0$), where ${ {\bf V}_{\mathcal{G}_k({\bf V})}}$ is the restriction of ${\bf V}$ to the subgraph $\mathcal{G}_k({\bf V})$.
Finally, the set $\mathcal{F}^K_{\varepsilon} \subseteq \Sigma_K $ is defined as
\begin{equation} \label{F_K}
\mathcal{F}^K_{\varepsilon} = \bigl\{ {\bf X} \in \Sigma_K:\overline{\mbox{TV}}\bigl( {\bf X}_{{\mathcal{G}}_k({\bf X})} \bigr) \leq \varepsilon,\,
k=1,...,\mathcal{K}({\bf X}) \bigr\}
\end{equation}
In short, $\mathcal{F}^K_{\varepsilon}$ is the set of
$K$-sparse images such that the normalized TV of each of its connected blocks of non-zeros
doesn't exceed $\varepsilon$. Notice that this is different from the intersection of a
TV ball with $\Sigma_K$, as considered in \cite{kyrillidis2012hard}.
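The set $\mathcal{F}^K_{\varepsilon}$ can be made concrete with a small sketch that labels the 4-connected blocks of non-zeros and evaluates the normalized TV (\ref{normalizedTV}) of each; the example image is an assumption, and SciPy's labeling with its default cross-shaped structuring element gives exactly the 4-connectivity used here.

```python
import numpy as np
from scipy.ndimage import label

def normalized_tv_per_component(V):
    """Normalized TV of each 4-connected block of non-zeros of V."""
    comp, ncomp = label(V != 0)          # default structure = 4-connectivity
    out = []
    for k in range(1, ncomp + 1):
        mask = comp == k
        edges, tv = 0, 0.0
        for i, j in np.argwhere(mask):
            for di, dj in ((1, 0), (0, 1)):   # each undirected edge counted once
                ni, nj = i + di, j + dj
                if ni < V.shape[0] and nj < V.shape[1] and mask[ni, nj]:
                    edges += 1
                    tv += abs(V[i, j] - V[ni, nj])
        out.append(tv / edges if edges > 0 else 0.0)
    return out

# A 4x6 example with two separate non-zero blocks (values are illustrative)
V = np.array([[1., 1., 0., 0., 2., 4.],
              [1., 1., 0., 0., 2., 4.],
              [0., 0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 0., 0.]])
print(normalized_tv_per_component(V))   # constant block -> 0; the other -> 4/4 = 1
```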
In the same vein as 2DBIHT, we propose the following BIHT-type algorithm to solve (\ref{2DBFCS_model2}):
\begin{algorithm}{2DFBCS-MTV}{
\label{alg:2DFBCS_MTV}}
Set $\tau >0, \varepsilon>0, K$, and ${\bf X}_0$ \\
\qrepeat\\
${\bf V}_{k+1} = {\bf X}_k - \tau \,{\partial} f\left({\bf Y}\odot \left({\bf A}{\bf X}_k\right)\right)$\\
${\bf X}_{k+1} = {\mathcal P}_{\mathcal{F}^K_{\varepsilon}} \bigl({\bf V}_{k+1}\bigr) $\\
$ k \leftarrow k+ 1$
\quntil some stopping criterion is satisfied.\\
\qreturn ${\bf X}/\left\|{\bf X}\right\|_2$
\end{algorithm}
In this algorithm, line 3 is also a sub-gradient descent step, where
${\partial} f$ is defined as \eqref{subgradient}
while line 4 performs the projection onto $\mathcal{F}^K_{\varepsilon}$. Although $\mathcal{F}^K_{\varepsilon}$
is non-convex, here we can briefly show that ${\mathcal P}_{\mathcal{F}^K_{\varepsilon}}$
can be computed as follows (the details of computing ${\mathcal P}_{\mathcal{F}^K_{\varepsilon}}$
are shown in Appendix): first, project onto $\Sigma_K$, {\it i.e.},
${\bf U} = \mathcal{P}_{\Sigma_K} ({\bf V}) $; then, ${\bf X} = {\mathcal P}_{\mathcal{F}^K_{\varepsilon}} \bigl({\bf V}\bigr)$
is obtained by projecting every connected group of non-zeros in ${\bf U}$ onto
the $\varepsilon$-radius normalized TV ball $\mathcal{B}_{\varepsilon}^{k}$:
\begin{equation} \label{B_TVbar}
\mathcal{B}_{\varepsilon}^k =
\bigl\{ {\bf X}_{{{\mathcal{G}}_k({\bf X})}} \in \mathbb{R}^{{\mathcal{G}}_k({\bf X})}:
\overline{\mbox{TV}}\bigl( {\bf X}_{{\mathcal{G}}_k({\bf X})} \bigr) \leq \varepsilon \bigr\}
\end{equation}
for $k=1,...,\mathcal{K}({\bf X})$, {\it i.e.},
${\bf X}_{{\mathcal{G}}_k({\bf U})} = \mathcal{P}_{\mathcal{B}_{\varepsilon}^k}
\bigl({\bf U}_{{\mathcal{G}}_k({\bf U})}\bigr)$, for $k=1,...,\mathcal{K}({\bf U})$,
and keeping the zeros of ${\bf U}$, {\it i.e.}, ${\bf X}_{\mathcal{G}-\widetilde{\mathcal{G}}({\bf U})} =
{\bf U}_{\mathcal{G}-\widetilde{\mathcal{G}}({\bf U})}$.
Finally, as in \cite{jacques2011robust}, the projection onto the unit sphere in the final (return) step satisfies the
constraint in (\ref{2DBFCS_model2}). The versions of the
2DFBCS-MTV algorithm with $\ell_1$ and $\ell_2$ objectives are referred to as
2DFBCS-MTV-$\ell_1$ and 2DFBCS-MTV-$\ell_2$, respectively.
Of course, the problems in \eqref{2DBFCS_model1} and \eqref{2DBFCS_model2} are not convex
(since neither $\Sigma_K$ nor the set $\{{\bf X}\in \mathbb{R}^{N \times L}: \; \|{\bf X}\|_2=1\}$ is convex),
thus there is no guarantee that the algorithms find a global
minimum. If the original signal is known to be non-negative,
then the algorithm should include a projection onto $\mathbb{R}_+^{N \times L}$ in each iteration.
\section{Experiments}
In this section, we report results of experiments aimed at comparing the performance of 2DBFCS with that of 2DBIHT.
Without loss of generality, we assume the original group-sparse image
${\bf X} \in \mathbb{R}^{400 \times 100}$, in which 10
line-groups are randomly generated, each line-group having
9 elements valued $10$ or $-10$; the image is then normalized as ${\bf X} = {\bf X}/\left\|{\bf X}\right\|_2$.
The sensing matrix ${\bf A}$ is a $200 \times 400$ matrix whose components are sampled from the standard normal distribution,
and the variance of the white Gaussian noise ${\bf W} \in \mathbb{R}^{200 \times 100}$ is $0.01$. The observations ${\bf Y}$ are then obtained by (\ref{1bitcsnoisy}).
We run the aforementioned six algorithms; the stepsizes of 2DBIHT-$\ell_1$ and 2DBIHT-$\ell_2$ are set to $\tau =1$ and $1/M$, respectively, and the
parameters of 2DFBCS-TV-$\ell_1$, 2DFBCS-TV-$\ell_2$, 2DFBCS-MTV-$\ell_1$ and 2DFBCS-MTV-$\ell_2$ are hand-tuned for the best improvement in signal-to-noise ratio.
The recovered signals are shown in Figure \ref{fig:2DFBCS_10groups}, from which
we can see that the proposed 2DFBCS generally performs better than 2DBIHT. Moreover, the algorithms with the $\ell_2$ barrier outperform
those with the $\ell_1$ barrier. In particular, 2DFBCS-MTV-$\ell_2$ shows its superiority
over the other algorithms, while 2DFBCS-TV-$\ell_2$ is also effective at recovering
sparse piece-wise smooth images.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/2DFBCS_10groups.eps}
\caption{Recovered images by different algorithms}
\label{fig:2DFBCS_10groups}
\end{figure}
\section{Conclusions}
We have proposed {\it 2D fused binary compressive sensing} (2DFBCS) to recover 2D sparse piece-wise smooth signals
from 2D 1-bit compressive measurements. We have shown that if the original signals
are in fact sparse and piece-wise smooth, the proposed method
is able to take advantage of the piece-wise smoothness of the original signal,
and outperforms (under several accuracy measures) the 2D version of the previous method
{\it binary iterative hard thresholding} (termed 2DBIHT),
which relies only on the sparsity of the original signal.
Future work will involve using the
technique of detecting sign flips to obtain a robust version of 2DBFCS.
\section{Conclusions}
In summary, we developed a finite-field approach to compute density response functions ($\chi_0$, $\chi_{\text{RPA}}$ and $\chi$) for molecules and materials. The approach is non-perturbative and can be used in a straightforward manner with both semilocal and orbital-dependent functionals. Using this approach, we computed the exchange-correlation kernel $f_{\text{xc}}$ and performed $GW$ calculations using dielectric responses evaluated beyond the RPA.
We evaluated quasiparticle energies for molecules and solids and compared results obtained within and beyond the RPA, and using DFT calculations with semilocal and hybrid functionals as input. We found that the effect of vertex corrections on quasiparticle energies is more notable when using input wavefunctions and single-particle energies from hybrid functionals calculations. For the small molecules in the GW100 set, $G_0W_0^{f_\text{xc}}$ calculations yielded higher VIP compared to $G_0W_0^{\text{RPA}}$ results, leading to a better agreement with experimental and high-level quantum chemistry results when using LDA and PBE starting points, and to a slight overestimate of VIP when using PBE0 as the starting point. $G_0W_0\Gamma_0$ calculations instead yielded a systematic underestimate of VIP of molecules. VEA of molecules were found to be less sensitive to vertex corrections compared to VIP. In the case of solids, the energy of the VBM and CBM shifts in the same direction, relative to RPA results, when vertex corrections are included, and overall the band gaps were found to be rather insensitive to the choice of the $GW$ approximation.
In addition, we reported a scheme to renormalize $f_{\text{xc}}$, which is built on previous work \cite{Schmidt2017} using the LDA functional. The scheme is general and applicable to any exchange-correlation functional and to inhomogeneous systems including molecules and solids. Using the renormalized $\tilde{f}_{\text{xc}}$, the basis set convergence of $G_0W_0\Gamma_0$ results was significantly improved.
Overall, the method introduced in our work represents a substantial progress towards efficient computations of dielectric screening and large-scale $G_0W_0$ calculations for molecules and materials beyond the random phase approximation.
\section{The finite-field approach}
We first describe the FF approach for iterative diagonalization of density response functions and we then discuss its robustness and accuracy.
\subsection{Formalism}
Our $G_0W_0$ calculations are based on DFT single-particle energies and wavefunctions, obtained by solving the Kohn-Sham (KS) equations:
\begin{equation} \label{KS}
H_{\text{KS}} \psi_{m}(\bm{r}) = \varepsilon_{m} \psi_{m}(\bm{r}),
\end{equation}
where the KS Hamiltonian $H_{\text{KS}} = T + V_{\text{SCF}} = T + V_{\text{ion}} + V_{\text{H}} + V_{\text{xc}}$. $T$ is the kinetic energy operator; $V_{\text{SCF}}$ is the KS potential that includes the ionic $V_{\text{ion}}$, the Hartree $V_{\text{H}}$ and the exchange-correlation potential $V_{\text{xc}}$. The charge density is given by $n(\bm{r}) = \sum_m^{\text{occ.}} \left| \psi_{m}(\bm{r}) \right|^2$. For simplicity we omitted the spin index.
We consider the density response function (polarizability) of the KS system $\chi_0(\bm{r}, \bm{r}')$ and that of the physical system $\chi(\bm{r}, \bm{r}')$; the latter is denoted as $\chi_{\text{RPA}}(\bm{r}, \bm{r}')$ when the random phase approximation (RPA) is used. The variation of the charge density due to either a variation of the KS potential $\delta V_{\text{SCF}}$ or the external potential $\delta V_{\text{ext}}$ is given by:
\begin{equation} \label{kernel}
\delta n(\bm{r}) = \int K(\bm{r}, \bm{r}') \delta V(\bm{r}') d\bm{r}',
\end{equation}
where $K = \chi_0(\bm{r}, \bm{r}')$ if $\delta V(\bm{r}') = \delta V_{\text{SCF}}(\bm{r}')$ and $K = \chi(\bm{r}, \bm {r'})$ if $\delta V(\bm{r}') = \delta V_{\text{ext}}(\bm{r}')$. The density response functions of the KS and physical system are related by a Dyson-like equation:
\begin{equation} \label{Dyson}
\chi(\bm{r}, \bm{r}') = \chi_{0}(\bm{r}, \bm{r}') + \int d\bm{r}'' \int d\bm{r}''' \chi_{0}(\bm{r}, \bm{r}'') \left[v_{\text{c}}(\bm{r}'', \bm{r}''') + f_{\text{xc}}(\bm{r}'', \bm{r}''')\right] \chi(\bm{r}''', \bm{r}')
\end{equation}
where $v_{\text{c}}(\bm{r}, \bm{r}') = \frac{1}{|\bm{r} - \bm{r}'|}$ is the Coulomb kernel and $f_{\text{xc}}(\bm{r}, \bm{r}') = \frac{\delta V_{\text{xc}}(\bm{r})}{\delta n(\bm{r}')}$ is the exchange-correlation kernel.
Within the RPA, $f_{\text{xc}}$ is neglected and $\chi(\bm{r}, \bm{r}')$ is approximated by:
\begin{equation} \label{DysonRPA}
\chi_{\text{RPA}}(\bm{r}, \bm{r}') = \chi_{0}(\bm{r}, \bm{r}') + \int d\bm{r}'' \int d\bm{r}''' \chi_{0}(\bm{r}, \bm{r}'') v_{\text{c}}(\bm{r}'', \bm{r}''') \chi_{\text{RPA}}(\bm{r}''', \bm{r}').
\end{equation}
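The finite-field idea behind eq \ref{kernel} — obtaining response functions from density differences under explicit potential perturbations rather than from perturbation theory — can be illustrated on a toy independent-particle model (a 1D tight-binding chain; all model parameters below are assumptions), where the finite-difference $\chi_0$ is cross-checked against the textbook sum-over-states expression:

```python
import numpy as np

# Toy independent-particle model: 1D tight-binding chain, N sites, Nocc occupied
# orbitals. chi0[i, j] = d n_i / d V_j is computed non-perturbatively by a symmetric
# finite difference in the potential (the finite-field idea).
N, Nocc, h = 10, 3, 1e-4
rng = np.random.default_rng(9)
V0 = rng.standard_normal(N)          # assumed on-site disorder (breaks degeneracies)

def density(V):
    H = -np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(V)   # hopping + on-site potential
    w, U = np.linalg.eigh(H)
    return (U[:, :Nocc] ** 2).sum(axis=1)

chi0_ff = np.zeros((N, N))
for j in range(N):
    dV = np.zeros(N); dV[j] = h
    chi0_ff[:, j] = (density(V0 + dV) - density(V0 - dV)) / (2 * h)

# Cross-check against first-order perturbation theory (sum over states)
H0 = -np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(V0)
w, U = np.linalg.eigh(H0)
chi0_pt = np.zeros((N, N))
for m in range(Nocc):
    for c in range(Nocc, N):
        pair = U[:, m] * U[:, c]
        chi0_pt += 2.0 * np.outer(pair, pair) / (w[m] - w[c])
print(np.abs(chi0_ff - chi0_pt).max())
```

Note that each column of the finite-difference $\chi_0$ sums to zero, reflecting particle-number conservation.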
In the plane-wave representation (for simplicity we only focus on the $\Gamma$ point of the Brillouin zone), $v_{\text{c}}(\bm{G}, \bm{G}') = \frac{4\pi \delta(\bm{G}, \bm{G}')}{|\bm{G}|^2}$ (abbreviated as $v_{\text{c}}(\bm{G}) = \frac{4\pi}{|\bm{G}|^2}$). We use $K(\bm{G}, \bm{G}')$ to denote a general response function ($K \in \{ \chi_0, \chi_\text{RPA}, \chi \}$), and define the dimensionless response function $\tilde{K}(\bm{G}, \bm{G}')$ ($\tilde{K} \in \{ \tilde{\chi}_0, \tilde{\chi}_\text{RPA}, \tilde{\chi} \}$) by symmetrizing $K(\bm{G}, \bm{G}')$ with respect to $v_{\text{c}}$:
\begin{equation}
\tilde{K}(\bm{G}, \bm{G}') = v_{\text{c}}^{\frac{1}{2}}(\bm{G}) K(\bm{G}, \bm{G}') v_{\text{c}}^{\frac{1}{2}}(\bm{G'}).
\end{equation}
The dimensionless response functions $\tilde{\chi}_{\text{RPA}}$ and $\tilde{\chi}_0$ (see eq \ref{DysonRPA}) have the same eigenvectors, and their eigenvalues are related by:
\begin{equation} \label{lambdarpa}
\lambda_i^{\text{RPA}} = \frac{\lambda_i^0}{1-\lambda_i^0}
\end{equation}
where $\lambda_i^{\text{RPA}}$ and $\lambda_i^0$ are eigenvalues of $\tilde{\chi}_{\text{RPA}}$ and $\tilde{\chi}_0$, respectively. In general the eigenvalues and eigenvectors of $\tilde{\chi}_{\text{RPA}}$ are different from those of $\tilde{\chi}$ due to the presence of $f_{\text{xc}}$ in eq \ref{Dyson}.
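The relation in eq \ref{lambdarpa} can be checked numerically: in symmetrized matrix form eq \ref{DysonRPA} reads $\tilde{\chi}_{\text{RPA}} = (1 - \tilde{\chi}_0)^{-1}\tilde{\chi}_0$, so the two matrices share eigenvectors and their eigenvalues map as above. A sketch with a placeholder $\tilde{\chi}_0$:

```python
import numpy as np

# Placeholder symmetrized KS response: symmetric with negative eigenvalues.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
chi0 = -(A @ A.T) / 10.0

# RPA Dyson equation in matrix form: chi_rpa = (1 - chi0)^{-1} chi0
chi_rpa = np.linalg.solve(np.eye(6) - chi0, chi0)

lam0 = np.linalg.eigvalsh(chi0)
lam_rpa = np.linalg.eigvalsh(chi_rpa)

# lambda_RPA = lambda0 / (1 - lambda0); the map is monotonic for lambda0 < 1,
# so the sorted spectra can be compared directly.
assert np.allclose(np.sort(lam_rpa), np.sort(lam0 / (1.0 - lam0)))
```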
In our \textit{GW} calculations we use a low rank decomposition of $\tilde{K}$:
\begin{equation} \label{pdep}
\tilde{K} = \sum_i^{N_{\text{PDEP}}} \lambda_i \ket{\xi_i} \bra{\xi_i}
\end{equation}
where $\lambda$ and $\ket{\xi}$ denote the eigenvalues and eigenvectors of $\tilde{K}$, respectively. The set of $\xi$ constitutes a projective dielectric eigenpotential (PDEP) basis \cite{Nguyen2012, Pham2013, Govoni2015}, and the accuracy of the low rank decomposition is controlled by $N_{\text{PDEP}}$, the size of the basis. In the limit of $N_{\text{PDEP}} = N_{\text{PW}}$ (the number of plane waves), the PDEP basis and the plane wave basis are related by a unitary transformation. In practical calculations it was shown \cite{Nguyen2012, Pham2013} that one only needs $N_{\text{PDEP}} \ll N_{\text{PW}}$ to converge the computed quasiparticle energies. To obtain the PDEP basis, an iterative diagonalization is performed for $\tilde{K}$, e.g. with the Davidson algorithm \cite{Davidson1975}. The iterative diagonalization requires evaluating the action of $\tilde{K}$ on an arbitrary trial function $\xi$:
\begin{equation} \label{operation}
\begin{split}
(\tilde{K}\xi)(\bm{G})
&= \sum_{\bm{G}'} v_{\text{c}}^{\frac{1}{2}}(\bm{G}) K(\bm{G}, \bm{G}') v_{\text{c}}^{\frac{1}{2}}(\bm{G}') \xi(\bm{G}') \\
&= v_{\text{c}}^{\frac{1}{2}}(\bm{G}) \mathcal{FT} \left\{ \int K(\bm{r}, \bm{r'}) \left( \mathcal{FT}^{-1} \left[ v_{\text{c}}^{\frac{1}{2}}(\bm{G}') \xi(\bm{G}') \right] \right) (\bm{r}') d\bm{r}' \right\}(\bm{G})
\end{split}
\end{equation}
where $\mathcal{FT}$ and $\mathcal{FT}^{-1}$ denote forward and inverse Fourier transforms respectively. By using eq \ref{operation} we cast the evaluation of $\tilde{K}\xi$ to an integral in real space.
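The FFT pipeline above can be sketched on a 1D periodic grid using a purely local placeholder kernel $K(\bm{r}, \bm{r}') = k(\bm{r})\,\delta(\bm{r} - \bm{r}')$, so that the real-space integral reduces to a multiplication; in the actual implementation this step is replaced by the finite-field evaluation of $\delta n$:

```python
import numpy as np

# 1D periodic grid; K(r, r') = k(r) delta(r - r') is a toy local kernel that
# stands in for the response function (in practice this step is a DFT run).
n = 32
r = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
G = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)   # wavevectors on the FFT grid
G2 = G**2
G2[0] = np.inf                                           # drop the divergent G = 0 term
sqrt_vc = np.sqrt(4.0 * np.pi / G2)                      # v_c^{1/2}(G)
k_loc = -0.1 * (1.0 + 0.3 * np.cos(r))                   # placeholder k(r)

xi = np.fft.fft(np.cos(2.0 * r))                         # trial function in G space

# (K~ xi)(G) = v_c^{1/2}(G) FT{ k(r) [FT^{-1}(v_c^{1/2} xi)](r) }(G)
dV = np.fft.ifft(sqrt_vc * xi)                           # perturbation in real space
dn = k_loc * dV                                          # action of the local kernel
K_xi = sqrt_vc * np.fft.fft(dn)
```

The pipeline is linear in $\xi$, and the $\bm{G} = 0$ component is projected out by the vanishing $v_{\text{c}}^{1/2}$ factor.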
Defining a perturbation $\delta V(\bm{G}')$ = $v_{\text{c}}^{\frac{1}{2}}(\bm{G'}) \xi(\bm{G}')$, the calculation of the real space integral in eq \ref{operation} is equivalent to solving for the variation of the charge density $\delta n$ due to $\delta V$:
\begin{equation} \label{koperation}
\int K(\bm{r}, \bm{r'}) \left( \mathcal{FT}^{-1} \left[ v_{\text{c}}^{\frac{1}{2}}(\bm{G}') \xi(\bm{G}') \right] \right) (\bm{r}') d\bm{r}' = \int K(\bm{r}, \bm{r'}) \delta V(\bm{r}') d\bm{r}' \equiv \delta n(\bm{r}).
\end{equation}
In previous works $\delta n(\bm{r})$ was obtained using DFPT for the case of $K = \chi_0$ \cite{Govoni2015}. In this work we solve eq \ref{koperation} by a finite-field approach. In particular, we perform two SCF calculations under the action of the potentials $\pm \delta V$:
\begin{equation} \label{perturbedKS}
(H_{\text{KS}} \pm \delta V) \psi_{m}^{\pm}(\bm{r}) = \varepsilon_{m}^{\pm} \psi_{m}^{\pm}(\bm{r}),
\end{equation}
and $\delta n(\bm{r})$ is computed through a finite difference:
\begin{equation} \label{finitediff}
\delta{n(\bm{r})} = \frac{1}{2} \left[ \sum_m^{\text{occ.}} \left| \psi_{m}^{+}(\bm{r}) \right|^2 - \sum_m^{\text{occ.}} \left| \psi_{m}^{-}(\bm{r}) \right|^2\right]
\end{equation}
In eq \ref{finitediff} we use a central difference instead of forward/backward difference to increase the numerical accuracy of the computed $\delta{n(\bm{r})}$.
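The accuracy gain can be seen on a toy scalar model in which the density responds nonlinearly to the perturbation strength; the quadratic term cancels exactly in the central difference (all coefficients below are placeholders):

```python
import numpy as np

# Toy model: n(dV) = n0 + chi*dV + c*dV^2; the quadratic term mimics
# beyond-linear-response contamination.
n0, chi, c = 1.0, -0.3, 0.05
n = lambda dV: n0 + chi * dV + c * dV**2

dV = 0.1
forward = n(dV) - n(0.0)            # biased by the c*dV^2 term
central = 0.5 * (n(dV) - n(-dV))    # even-order terms cancel

assert abs(central - chi * dV) < 1e-14
assert abs(forward - chi * dV) > 1e-4
```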
If in the SCF procedure adopted in eq \ref{perturbedKS} all potential terms in the KS Hamiltonian are computed self-consistently, then $\delta n$ computed from eq \ref{finitediff} yields $K = \chi$ (see eq \ref{koperation}). If $V_{\text{xc}}$ is evaluated for the initial charge density (i.e. $V_{\text{xc}} = V_{\text{xc}}[n_0]$) and kept fixed during the SCF iterations, then eq \ref{finitediff} yields $K = \chi_{\text{RPA}}$. If both $V_{\text{xc}}$ and $V_{\text{H}}$ are kept fixed, eq \ref{finitediff} yields $K = \chi_0$.
Unlike DFPT, the finite-field approach adopted here allows for the straightforward calculation of response functions beyond the RPA (i.e. for the calculation of $\chi$ instead of $\chi_0$ or $\chi_{\text{RPA}}$), and it can be readily applied to hybrid functionals for which analytical expressions of $f_{\text{xc}}$ are not available. We note that finite-field calculations with hybrid functionals can easily benefit from any methodological development that reduces the computational complexity of evaluating exact exchange potentials \cite{Gygi2009, Gygi2013, Dawson2015}.
Once the PDEP basis is obtained by iterative diagonalization of $\tilde{\chi}_{0}$ \bibnote{Here, we defined the PDEP basis to be the eigenvectors of $\tilde{\chi}_{0}$. Alternatively, one may first iteratively diagonalize $\tilde{\chi}$ and define its eigenvectors as the PDEP basis. Then $\tilde{\chi}_0$ and $\tilde{f}_{\text{xc}}$ can be evaluated in the space of the eigenvectors of $\tilde{\chi}$. This choice is not further discussed in the paper; we only mention that some comparisons for the quasiparticle energies (at the $G_0W_0^{f_\text{xc}}$ level, see Section 3) of selected molecules obtained using either $\tilde{\chi}_0$ or $\tilde{\chi}$ eigenvectors as the PDEP basis are identical within 0.01 (0.005) eV for the HOMO (LUMO) state.}, the projection of $\tilde{\chi}$ on the PDEP basis can also be performed using the finite-field approach. Then the symmetrized exchange-correlation kernel $\tilde{f}_{\text{xc}} = v_{\text{c}}^{-\frac{1}{2}} f_{\text{xc}} v_{\text{c}}^{-\frac{1}{2}}$ can be computed by inverting the Dyson-like equation (eq \ref{Dyson}):
\begin{equation} \label{fxcmatrix}
\tilde{f}_{\text{xc}} = {\tilde{\chi}_{0}}^{-1} - \tilde{\chi}^{-1} - 1.
\end{equation}
On the right hand side of eq \ref{fxcmatrix} all matrices are $N_{\text{PDEP}} \times N_{\text{PDEP}}$ and therefore the resulting $\tilde{f}_{\text{xc}}$ is also defined on the PDEP basis.
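A round-trip check of eq \ref{fxcmatrix} on model matrices (placeholder values, with $v_{\text{c}} \to 1$ in symmetrized units): build $\tilde{\chi}$ from $\tilde{\chi}_0$ and a known $\tilde{f}_{\text{xc}}$ through the Dyson equation, then recover the kernel by inversion:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
chi0 = -(A @ A.T) / 10.0              # placeholder symmetrized KS response
B = rng.standard_normal((n, n))
fxc = -(B @ B.T) / 20.0               # placeholder symmetrized xc kernel
I = np.eye(n)

# Dyson equation in matrix form: chi = [1 - chi0 (1 + fxc)]^{-1} chi0
chi = np.linalg.solve(I - chi0 @ (I + fxc), chi0)

# Inversion (eq fxcmatrix): fxc = chi0^{-1} - chi^{-1} - 1
fxc_rec = np.linalg.inv(chi0) - np.linalg.inv(chi) - I
assert np.allclose(fxc_rec, fxc)
```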
When using orbital-dependent functionals such as meta-GGA and hybrid functionals, the $\tilde{f}_{\text{xc}}$ computed from eq \ref{fxcmatrix} should be interpreted with caution. In this case, DFT calculations for $H_{\text{KS}} \pm \delta V$ can be performed using either the optimized effective potential (OEP) or the generalized Kohn-Sham (GKS) scheme. In the OEP scheme, $V_{\text{xc}}$ is local in space and $f_{\text{xc}}(\bm{r}, \bm{r}') = \frac{\delta V_{\text{xc}}(\bm{r})}{\delta n(\bm{r}')}$ depends on $\bm{r}$ and $\bm{r'}$, as in the case of semilocal functionals. In the GKS scheme, $V_{\text{xc}}$ is non-local and $f_{\text{xc}}(\bm{r}, \bm{r}'; \bm{r}'') = \frac{\delta V_{\text{xc}}(\bm{r}, \bm{r}')}{\delta n(\bm{r}'')}$ depends on three position vectors. We expect $\delta n$ to be almost independent of the chosen scheme, whether GKS or OEP, since both methods yield the same result within first order in the charge density \cite{Kummel2008}. We conducted hybrid functional calculations within the GKS scheme, assuming that for every GKS calculation an OEP can be defined yielding the same charge density; with this assumption the $f_{\text{xc}}$ from eq \ref{fxcmatrix} is well defined within the OEP formalism.
\subsection{Implementation and Verification}
We implemented the finite-field algorithm described above by coupling the WEST \cite{Govoni2015} and Qbox \cite{Gygi2008} \bibnote{Qbox. http://www.qboxcode.org (accessed Aug. 1, 2018).} codes in client-server mode, using the workflow summarized in Figure \ref{qbox_west_coupling}. In particular, in our implementation the WEST code performs an iterative diagonalization of $\tilde{K}$ by outsourcing the evaluation of the action of $\tilde{K}$ on an arbitrary function to Qbox, which performs DFT calculations in finite field. The two codes communicate through the filesystem.
\begin{figure}[H]
\includegraphics[width=7in]{figures/qbox_west_coupling.png}
\caption{Workflow of finite-field calculations. The WEST code performs an iterative diagonalization of $\tilde{K}$ ($\tilde{\chi}_0$, $\tilde{\chi}_{\text{RPA}}$, $\tilde{\chi}$). In $GW$ calculations beyond the RPA, $\tilde{f}_{\text{xc}}$ is computed from eq \ref{fxcmatrix}, which requires computing the spectral decomposition of $\tilde{\chi}_0$ and evaluating $\tilde{\chi}$ in the space of $\tilde{\chi}_0$ eigenvectors. Finite-field calculations are carried out by the Qbox code. If the Hartree ($V_{\text{H}}$) and exchange-correlation ($V_{\text{xc}}$) potentials are updated self-consistently when solving eq \ref{perturbedKS}, one obtains $K = \chi$; if $V_{\text{xc}}$ is evaluated at the initial charge density $n_0$ and kept fixed during the SCF procedure, one obtains $K = \chi_{\text{RPA}}$; if both $V_{\text{xc}}$ and $V_{\text{H}}$ are evaluated for $n_0$ and kept fixed, one obtains $K = \chi_0$. The communication of $\delta n$ and $\delta V$ between WEST and Qbox is carried out through the filesystem.}
\label{qbox_west_coupling}
\end{figure}
To verify the correctness of our implementation, we computed $\tilde{\chi}_0$, $\tilde{\chi}_{\text{RPA}}$, $\tilde{\chi}$ for selected molecules in the GW100 set and we compared the results to those obtained with DFPT. Section 1 of the SI summarizes the parameters used, including the plane wave cutoff $E_\text{cut}$, $N_{\text{PDEP}}$ and the size of the simulation cell. In finite-field calculations we optimized the ground state wavefunction using a preconditioned steepest descent algorithm with Anderson acceleration \cite{Anderson1965}. The magnitude of $\delta V$ was chosen to ensure that calculations were performed within the linear response regime (see Section 2 of the SI). All calculations presented in this section were performed with the PBE functional unless otherwise specified.
Figure \ref{ff_vs_dfpt}a shows the eigenvalues of $\tilde{\chi}_{\text{RPA}}$ for a few molecules obtained with three approaches: iterative diagonalization of $\tilde{\chi}_{\text{RPA}}$ with the finite-field approach; iterative diagonalization of $\tilde{\chi}_0$ with either the finite-field approach or with DFPT, followed by a transformation of eigenvalues as in eq \ref{lambdarpa}. The three approaches yield almost identical eigenvalues.
\begin{figure}[H]
\includegraphics[width=7in]{figures/ff_vs_dfpt_a.png}
\includegraphics[width=7in]{figures/ff_vs_dfpt_b.png}
\caption{Comparison of the eigenvalues (a) and eigenfunctions (b) of $\tilde{\chi}_{\text{RPA}}$ obtained from density functional perturbation theory (DFPT) and finite-field (FF) calculations. Three approaches are used: diagonalization of $\tilde{\chi}_0$ by DFPT, diagonalization of $\tilde{\chi}_0$ by FF (denoted by FF(0)) and diagonalization of $\tilde{\chi}_{\text{RPA}}$ by FF (denoted by FF(RPA)). In the case of DFPT and FF(0), eq \ref{lambdarpa} was used to obtain the eigenvalues of $\tilde{\chi}_{\text{RPA}}$ from those of $\tilde{\chi}_0$. In (b) we show the first $32 \times 32$ elements of the $\braket{\xi^{\text{DFPT}} | \xi^{\text{FF(0)}}}$ and $\braket{\xi^{\text{DFPT}} | \xi^{\text{FF(RPA)}}}$ matrices (see eq \ref{pdep}).}
\label{ff_vs_dfpt}
\end{figure}
The eigenvectors of the response functions are shown in Figure \ref{ff_vs_dfpt}b, where we report elements of the matrices defined by the overlap between finite-field and DFPT eigenvectors. The inner product matrices are block-diagonal, with blocks corresponding to the presence of degenerate eigenvalues. The agreement between eigenvalues and eigenvectors shown in Figure \ref{ff_vs_dfpt} verifies the accuracy and robustness of finite-field calculations.
Figure \ref{chi_vs_chirpa} shows the eigendecomposition of $\tilde{\chi}$ compared to that of $\tilde{\chi}_{\text{RPA}}$.
\begin{figure}[H]
\includegraphics[width=7in]{figures/chi_vs_chirpa_a.png}
\includegraphics[width=7in]{figures/chi_vs_chirpa_b.png}
\caption{Comparison of eigenvalues (a) and eigenfunctions (b) of $\tilde{\chi}$ and $\tilde{\chi}_{\text{RPA}}$ obtained from finite-field calculations. In (b), the first $32 \times 32$ elements of the $\braket{\xi^{\text{RPA}} | \xi^{\text{full}}}$ matrices are presented.}
\label{chi_vs_chirpa}
\end{figure}
As indicated by Figure \ref{chi_vs_chirpa}a, including $f_{\text{xc}}$ in the evaluation of $\chi$ results in a stronger screening. The eigenvalues of $\tilde{\chi}$ are systematically more negative than those of $\tilde{\chi}_{\text{RPA}}$, though they asymptotically converge to zero in the same manner. While the eigenvalues are different, the eigenvectors (eigenspaces in the case of degenerate eigenvalues) are almost identical, as indicated by the block-diagonal form of the eigenvector overlap matrices (see Figure \ref{chi_vs_chirpa}b).
Finally, $\tilde{f}_{\text{xc}}$ can be computed from $\tilde{\chi}$ and $\tilde{\chi}_0$ according to eq \ref{fxcmatrix}. Due to the similarity of the eigenvectors of $\tilde{\chi}$ and $\tilde{\chi}_{\text{RPA}}$ (identical to that of $\tilde{\chi}_0$), the $\tilde{f}_{\text{xc}}$ matrix is almost diagonal. In Section 3 of the SI we show the $\tilde{f}_{\text{xc}}$ matrix in the PDEP basis for a few systems. To verify the accuracy of $\tilde{f}_{\text{xc}}$ obtained by the finite-field approach, we performed calculations with the LDA functional, for which $f_{\text{xc}}$ can be computed analytically. In Figure \ref{fxc_lda} we present for a number of systems the average relative difference of the diagonal terms of the $\tilde{f}_{\text{xc}}$ matrices obtained analytically and through finite-field (FF) calculations. We define $\Delta f_{\text{xc}}$ as
\begin{equation} \label{deltafxc}
\Delta f_{\text{xc}} = \frac{1}{N_{\text{PDEP}}} \sum_i^{N_{\text{PDEP}}} \frac{\left| \mel{\xi_i}{\tilde{f}_{\text{xc}}^{\text{FF}}}{\xi_i} - \mel{\xi_i}{\tilde{f}_{\text{xc}}^{\text{analytical}}}{\xi_i} \right|}{\left| \mel{\xi_i}{\tilde{f}_{\text{xc}}^{\text{analytical}}}{\xi_i} \right|} .
\end{equation}
As shown in Figure \ref{fxc_lda}, $\Delta f_{\text{xc}}$ is smaller than a few percent for all systems studied here. To further quantify the effect of the small difference found for the $\tilde{f}_{\text{xc}}$ matrices on $GW$ quasiparticle energies, we performed $G_0W_0^{f_\text{xc}}@\text{LDA}$ calculations for all the systems shown in Figure \ref{fxc_lda}, using either the analytical $f_{\text{xc}}$ or the $f_{\text{xc}}$ computed from finite-field calculations. The two approaches yielded almost identical quasiparticle energies, with mean absolute deviations of 0.04 and 0.004 eV for HOMO and LUMO levels, respectively.
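The metric of eq \ref{deltafxc} can be sketched as follows, with placeholder matrices standing in for the analytical and finite-field kernels:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
# Placeholder "analytical" kernel (diagonal in the PDEP basis) and a
# "finite-field" version perturbed by ~1% numerical noise.
f_ana = np.diag(-1.0 - rng.random(n))
f_ff = f_ana + 0.01 * rng.standard_normal((n, n))

# Average relative difference of the diagonal elements (cf. eq deltafxc).
d_ana, d_ff = np.diag(f_ana), np.diag(f_ff)
delta_fxc = np.mean(np.abs(d_ff - d_ana) / np.abs(d_ana))
assert delta_fxc < 0.05    # "a few percent", as in Figure fxc_lda
```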
\begin{figure}[H]
\includegraphics[width=3.33in]{figures/fxc_lda.png}
\caption{Average relative differences $\Delta f_{\text{xc}}$ (see eq \ref{deltafxc}) between diagonal elements of the $\tilde{f}_{\text{xc}}$ matrices computed analytically and numerically with the finite-field approach. Calculations were performed with the LDA functional.}
\label{fxc_lda}
\end{figure}
\section{$GW$ calculations}
\subsection{Formalism}
In this section we discuss $GW$ calculations within and beyond the RPA, utilizing $f_{\text{xc}}$ computed with the finite-field approach. In the following equations we use 1, 2, ... as shorthand notations for $(\bm{r}_1, t_1)$, $(\bm{r}_2, t_2)$, ... Indices with bars are integrated over. When no indices are shown, the equation is a matrix equation in reciprocal space or in the PDEP basis. The following discussion focuses on finite systems; for periodic systems a special treatment of the long-range limit of $\chi$ is required and relevant formulae are presented in Section 4 of the SI.
Based on a KS reference system, the Hedin equations \cite{Hedin1965} relate the exchange-correlation self-energy $\Sigma_{\text{xc}}$ (abbreviated as $\Sigma$), Green's function $G$, the screened Coulomb interaction $W$, the vertex $\Gamma$ and the irreducible polarizability $P$:
\begin{equation} \label{Hedin1}
\Sigma(1, 2) = i G(1, \bar4) W(1^+, \bar3) \Gamma(\bar4, 2; \bar3),
\end{equation}
\begin{equation} \label{Hedin2}
W(1, 2) = v_{\text{c}}(1, 2) + v_{\text{c}}(1, \bar3) P(\bar3, \bar4) W(\bar4, 2),
\end{equation}
\begin{equation} \label{Hedin3}
P(1, 2) = -i G(1, \bar3) G(\bar4, 1) \Gamma(\bar3, \bar4; 2),
\end{equation}
\begin{equation} \label{Hedin4}
\Gamma(1, 2; 3) = \delta(1, 2) \delta(1, 3) + \frac{\delta \Sigma(1, 2)}{\delta G(\bar4, \bar5)} G(\bar4, \bar6) G(\bar7, \bar5) \Gamma(\bar6, \bar7; 3),
\end{equation}
\begin{equation} \label{Hedin5}
G(1, 2) = G^0(1, 2) + G^0(1, \bar3) \Sigma(\bar3, \bar4) G(\bar4, 2).
\end{equation}
We consider three different $G_0W_0$ approximations: the first is the common $G_0W_0$ formulation within the RPA, here denoted as $G_0W_0^{\text{RPA}}$, where $\Gamma(1, 2; 3) = \delta(1, 2) \delta(1, 3)$ and $\Sigma$ is given by:
\begin{equation}
\Sigma(1, 2) = i G(1, 2) W_{\text{RPA}}(1^+, 2),
\end{equation}
where
\begin{equation} \label{wrpa}
W_{\text{RPA}}(1, 2) = v_{\text{c}}(1, 2) + v_{\text{c}}(1, \bar3) \chi_{\text{RPA}}(\bar3, \bar4) v_{\text{c}}(\bar4, 2),
\end{equation}
and
\begin{equation} \label{chirpa}
\chi_{\text{RPA}} = (1 - \chi_0 v_{\text{c}})^{-1} \chi_0.
\end{equation}
The second approximation, denoted as $G_0W_0^{f_\text{xc}}$, includes $f_{\text{xc}}$ in the definition of $W$. Specifically, $\chi$ is computed from $\chi_0$ and $f_{\text{xc}}$ with eq \ref{Dyson}:
\begin{equation} \label{chifxc}
\chi = (1 - \chi_0 (v_{\text{c}} + f_{\text{xc}}))^{-1} \chi_0,
\end{equation}
and is used to construct the screened Coulomb interaction beyond the RPA:
\begin{equation} \label{wfxc}
W_{f_\text{xc}} = v_{\text{c}}(1, 2) + v_{\text{c}}(1, \bar3) \chi(\bar3, \bar4) v_{\text{c}}(\bar4, 2).
\end{equation}
The third approximation, denoted as $G_0W_0\Gamma_0$, includes $f_{\text{xc}}$ in both $W$ and $\Sigma$. In particular, an initial guess for $\Sigma$ is constructed from $V_{\text{xc}}$:
\begin{equation} \label{sigmaxc}
\Sigma_0(1, 2) = \delta(1, 2) V_{\text{xc}}(1)
\end{equation}
from which one can obtain a zeroth order vertex function by iterating Hedin's equations once \cite{DelSole1994}:
\begin{equation}
\begin{split}
\Gamma_0(1, 2; 3) = \delta(1, 2) (1 - f_{\text{xc}}\chi_0)^{-1}(1, 3).
\end{split}
\end{equation}
Then the self-energy $\Sigma$ is constructed using $G$, $W_{f_\text{xc}}$ and $\Gamma_0$:
\begin{equation} \label{Hedin1vertex}
\begin{split}
\Sigma(1, 2)
& = i G(1, \bar4) W_{f_\text{xc}}(1^+, \bar3) \Gamma_0(\bar4, 2; \bar3) \\
& = i G(1, 2) W_{\Gamma}(1^+, 2)
\end{split}
\end{equation}
where we defined an effective screened Coulomb interaction\bibnote{One may note that $\tilde{\chi}_{\Gamma}$ is not symmetric with respect to its two indices, and it can be symmetrized by using $\tilde{\chi}_0 \tilde{f}_{\text{xc}} \rightarrow \frac{1}{2}(\tilde{\chi}_0 \tilde{f}_{\text{xc}} + \tilde{f}_{\text{xc}} \tilde{\chi}_0 )$ in eq \ref{chitildegamma}. We found that the symmetrization has negligible effects on quasiparticle energies. We performed $G_0W_0\Gamma_0$ calculations for the systems shown in Figure \ref{fxc_lda} with either symmetrized or unsymmetrized $\tilde{\chi}_{\Gamma}$; the mean absolute deviations for HOMO and LUMO quasiparticle energies are 0.006 and 0.001 eV, respectively.}
\begin{equation} \label{wgamma}
W_{\Gamma} = v_{\text{c}}(1, 2) + v_{\text{c}}(1, \bar3) \chi_{\Gamma}(\bar3, \bar4) v_{\text{c}}(\bar4, 2),
\end{equation}
\begin{equation} \label{chigamma}
\chi_{\Gamma} = [v_{\text{c}} - v_{\text{c}} \chi_0 (v_{\text{c}} + f_{\text{xc}})]^{-1} - v_{\text{c}}^{-1}.
\end{equation}
The symmetrized forms of the three different density response functions (reducible polarizabilities) defined in eq \ref{chirpa}, \ref{chifxc}, \ref{chigamma} are:
\begin{equation} \label{chitilderpa}
\tilde{\chi}_{\text{RPA}} = [1 - \tilde{\chi}_0]^{-1} \tilde{\chi}_0
\end{equation}
\begin{equation} \label{chitildefxc}
\tilde{\chi} = [1 - \tilde{\chi}_0 (1 + \tilde{f}_{\text{xc}})]^{-1} \tilde{\chi}_0
\end{equation}
\begin{equation} \label{chitildegamma}
\tilde{\chi}_{\Gamma} = [1 - \tilde{\chi}_0 (1 + \tilde{f}_{\text{xc}})]^{-1} - 1
\end{equation}
Eqs. \ref{chitilderpa}-\ref{chitildegamma} have been implemented in the WEST code \cite{Govoni2015}.
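On the PDEP basis these are small dense-matrix operations. A numpy sketch with placeholder $\tilde{\chi}_0$ and $\tilde{f}_{\text{xc}}$ matrices (not from an actual calculation) also makes explicit the identity $\tilde{\chi}_{\Gamma} = \tilde{\chi}\,(1 + \tilde{f}_{\text{xc}})$ relating eqs \ref{chitildefxc} and \ref{chitildegamma}:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n)); chi0 = -(A @ A.T) / 10.0   # placeholder chi~_0
B = rng.standard_normal((n, n)); fxc = -(B @ B.T) / 40.0    # placeholder f~_xc
I = np.eye(n)

chi_rpa = np.linalg.solve(I - chi0, chi0)                   # eq chitilderpa
chi_full = np.linalg.solve(I - chi0 @ (I + fxc), chi0)      # eq chitildefxc
chi_gamma = np.linalg.inv(I - chi0 @ (I + fxc)) - I         # eq chitildegamma

# [1 - chi0(1+fxc)]^{-1} - 1 = [1 - chi0(1+fxc)]^{-1} chi0 (1+fxc)
assert np.allclose(chi_gamma, chi_full @ (I + fxc))
```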
We note that finite-field calculations yield $\tilde{f}_{\text{xc}}$ matrices at zero frequency. Hence the results presented here correspond to calculations performed within the adiabatic approximation, as they neglect the frequency dependence of $\tilde{f}_{\text{xc}}$. An interesting future direction would be to compute frequency-dependent $\tilde{f}_{\text{xc}}$ by performing finite-field calculations using real-time time-dependent DFT (RT-TDDFT).
When using the $G_0W_0\Gamma_0$ formalism, the convergence of quasiparticle energies with respect to $N_\text{PDEP}$ turned out to be extremely challenging. As discussed in ref \citenum{Schmidt2017} the convergence problem originates from the incorrect short-range behavior of $\tilde{f}_{\text{xc}}$. In Section 3.2 below we describe a renormalization scheme of $\tilde{f}_{\text{xc}}$ that improves the convergence of $G_0W_0\Gamma_0$ results.
\subsection{Renormalization of $f_{\text{xc}}$}
Thygesen and co-workers \cite{Schmidt2017} showed that $G_0W_0\Gamma_0@\text{LDA}$ calculations with $f_{\text{xc}}$ computed at the LDA level exhibit poor convergence with respect to the number of unoccupied states and plane wave cutoff. We observed related convergence problems of $G_0W_0\Gamma_0$ quasiparticle energies as a function of $N_\text{PDEP}$, the size of the basis set used here to represent response functions (see Section 5 of the SI). In this section we describe a generalization of the $f_{\text{xc}}$ renormalization scheme proposed by Thygesen and co-workers \cite{Olsen2012, Olsen2013, Patrick2015} to overcome convergence issues.
The approach of ref \citenum{Schmidt2017} is based on the properties of the homogeneous electron gas (HEG). For an HEG with density $n$, $f^{\text{HEG}}_{\text{xc}}[n] (\bm{r}, \bm{r}')$ depends only on $(\bm{r} - \bm{r}')$ due to translational invariance, and therefore $f^{\text{HEG}}_{\text{xc}}[n]_{\bm{G}\bm{G}'}(\bm{q})$ is diagonal in reciprocal space. We denote the diagonal elements of $f^{\text{HEG}}_{\text{xc}}[n]_{\bm{G}\bm{G}'}(\bm{q})$ as $f^{\text{HEG}}_{\text{xc}}[n] (\bm{k})$, where $\bm{k} = \bm{q} + \bm{G}$. When using the LDA functional, the exchange kernel $f_{\text{x}}$ exactly cancels the Coulomb interaction $v_{\text{c}}$ at wavevector $k = 2k_F$ (the correlation kernel $f_{\text{c}}$ is small compared to $f_{\text{x}}$ for $k \geq 2k_F$), where $k_F$ is the Fermi wavevector. For $k \geq 2k_F$, $f^{\text{HEG-LDA}}_{\text{xc}}$ shows an incorrect asymptotic behavior, leading to an unphysical correlation hole \cite{Olsen2012, Olsen2013}. Hence Thygesen and co-workers introduced a renormalized LDA kernel $f^{\text{HEG-rLDA}}_{\text{xc}} (k)$ by setting $f^{\text{HEG-rLDA}}_{\text{xc}} (k) = f^{\text{HEG-LDA}}_{\text{xc}} (k)$ for $k \leq 2k_F$ and $f^{\text{HEG-rLDA}}_{\text{xc}} (k) = - v_{\text{c}} (k)$ for $k > 2k_F$. They demonstrated that the renormalized $f_\text{xc}$ improves the description of the short-range correlation hole as well as the correlation energy, and when applied to $GW$ calculations substantially accelerates the basis set convergence of $G_0W_0\Gamma_0$ quasiparticle energies.
While within LDA $f_\text{xc}$ can be computed analytically and $ v_c + f_\text{x} = 0$ at exactly $k = 2k_F$, for a general functional it is not known \textit{a priori} at which $k$ this condition is satisfied. In addition, for inhomogenous systems such as molecules and solids the $f_{\text{xc}}$ matrix is not diagonal in reciprocal space. The authors of Ref \citenum{Schmidt2017} used a wavevector symmetrization approach to evaluate $f^{\text{HEG-rLDA}}_{\text{xc}}$ for inhomogenous systems, which is not easily generalizable to the formalism adopted in this work, where $f_{\text{xc}}$ is represented in the PDEP basis.
To overcome these difficulties, here we first diagonalize the $\tilde{f}_{\text{xc}}$ matrix in the PDEP basis:
\begin{equation} \label{fxcraw}
\tilde{f}_{\text{xc}} = \sum_i^{N_\text{PDEP}} f_i \ket{\zeta_i}\bra{\zeta_i},
\end{equation}
where $f$ and $\zeta$ are eigenvalues and eigenvectors of $\tilde{f}_{\text{xc}}$. Then we define a renormalized $\tilde{f}_{\text{xc}}$ as:
\begin{equation} \label{fxcrenormalized}
\tilde{f}_{\text{xc}}^r = \sum_i^{N_\text{PDEP}} \max(f_i, -1) \ket{\zeta_i}\bra{\zeta_i}.
\end{equation}
Note that $\tilde{f}_{\text{xc}} = -1$ corresponds to $f_{\text{xc}} = -v_{\text{c}}$; therefore $f_{\text{xc}}^r$ is greater than or equal to $-v_{\text{c}}$. When applied to the HEG, $f_{\text{xc}}^r@\text{LDA}$ is equivalent to $f^{\text{HEG-rLDA}}_{\text{xc}}$ in the limit $N_{\text{PDEP}} \rightarrow \infty$, where the PDEP and plane-wave bases are related by a unitary transformation. Thus, eq \ref{fxcrenormalized} represents a generalization of the scheme of Thygesen \textit{et al.} to any functional and to inhomogeneous electron gases. When using $f_{\text{xc}}^r$, we observed a faster basis set convergence of $G_0W_0\Gamma_0$ results than of $G_0W_0^\text{RPA}$ results, consistent with ref \citenum{Schmidt2017}. In Section 5 of the SI we discuss in detail the effect of the $f_{\text{xc}}$ renormalization on the description of the density response functions $\chi$ and $\chi_{\Gamma}$, and we rationalize why the renormalization improves the convergence of $G_0W_0\Gamma_0$ results. Here we only mention that the response function $\tilde{\chi}_\Gamma$ may possess positive eigenvalues for large PDEP indices. When the renormalized $f_{\text{xc}}$ is used, the eigenvalues of $\tilde{\chi}_\Gamma$ are guaranteed to be nonpositive and decay rapidly toward zero as the PDEP index increases, which explains the improved convergence of $G_0W_0\Gamma_0$ quasiparticle energies.
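The renormalization in eqs \ref{fxcraw}-\ref{fxcrenormalized} reduces to an eigenvalue clamp; a sketch with a placeholder spectrum:

```python
import numpy as np

# Placeholder symmetrized kernel whose lowest eigenvalues violate f >= -1.
rng = np.random.default_rng(6)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal eigenvectors
f = np.linspace(-1.8, -0.2, n)
fxc = (Q * f) @ Q.T

# Renormalize: clamp eigenvalues at -1 (i.e. enforce f_xc >= -v_c).
lam, zeta = np.linalg.eigh(fxc)
fxc_r = (zeta * np.maximum(lam, -1.0)) @ zeta.T

assert np.min(np.linalg.eigvalsh(fxc_r)) >= -1.0 - 1e-12
```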
All $G_0W_0\Gamma_0$ results shown in Section 3.3 were obtained with renormalized $f_\text{xc}$ matrices, while $G_0W_0^{f_\text{xc}}$ calculations were performed without renormalizing $f_\text{xc}$, since we found that the renormalization had a negligible effect on $G_0W_0^{f_\text{xc}}$ quasiparticle energies (see SI Section 5).
\subsection{Results}
In this section we report $GW$ quasiparticle energies for molecules in the GW100 set \cite{vanSetten2015} and for several solids. Calculations are performed at $G_0W_0^{\text{RPA}}$, $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ levels of theory and with semilocal and hybrid functionals. Computational parameters including $E_\text{cut}$ and $N_\text{PDEP}$ for all calculations are summarized in Section 1 of the SI. A discussion of the convergence of $G_0W_0^{\text{RPA}}$ quasiparticle energies with respect to these parameters can be found in ref \citenum{Govoni2018}.
We computed the vertical ionization potential (VIP), vertical electron affinity (VEA) and fundamental gaps for molecules with LDA, PBE and PBE0 functionals. VIP and VEA are defined as $\text{VIP} = \varepsilon^{\text{vac}} - \varepsilon^{\text{HOMO}}$ and $\text{VEA} = \varepsilon^{\text{vac}} - \varepsilon^{\text{LUMO}}$ respectively, where $\varepsilon^{\text{vac}}$ is the vacuum level estimated with the Makov-Payne method \cite{Makov1995}; $\varepsilon^{\text{HOMO}}$ and $\varepsilon^{\text{LUMO}}$ are HOMO and LUMO $GW$ quasiparticle energies, respectively. The results are summarized in Figure \ref{gw100_gw_compare}, where VIP and VEA computed at $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ levels are compared to results obtained at the $G_0W_0^{\text{RPA}}$ level \bibnote{For \ce{KH} molecule, $G_0W_0^{f_\text{xc}}@\text{PBE}$ calculation for the HOMO converged to a satellite instead of the quasiparticle peak. The spectral function of \ce{KH} is plotted and discussed in SI Section 6 and the correct quasiparticle energy is used here.}.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{figures/gw100_gw_compare.png}
\caption{Difference ($\Delta E$) between vertical ionization potential (VIP) and vertical electron affinity (VEA) of molecules in the GW100 set computed at the $G_0W_0^{f_\text{xc}}$/$G_0W_0\Gamma_0$ level and corresponding $G_0W_0^{\text{RPA}}$ results. Mean deviations (MD) in eV are shown in brackets and represented with black dashed lines. Results are presented for three different functionals (LDA, PBE and PBE0) in the top, middle and bottom panel, respectively. }
\label{gw100_gw_compare}
\end{figure}
Compared to $G_0W_0^{\text{RPA}}$ results, the VIPs computed at the $G_0W_0^{f_\text{xc}}$/$G_0W_0\Gamma_0$ level are systematically higher/lower, and the deviation of $G_0W_0\Gamma_0$ from $G_0W_0^{\text{RPA}}$ results is more than twice as large as that of $G_0W_0^{f_\text{xc}}$ results. The differences reported in Figure \ref{gw100_gw_compare} are more significant with a hybrid functional starting point, as indicated by the large mean deviations (MD) for $G_0W_0^{f_\text{xc}}$/$G_0W_0\Gamma_0$ results obtained with the PBE0 functional (0.58/-1.25 eV) compared to the MDs obtained with semilocal functionals (0.30/-0.74 eV for LDA and 0.31/-0.76 eV for PBE). In contrast to the VIP, the VEA appears to be less affected by vertex corrections. $G_0W_0^{f_\text{xc}}$ does not systematically shift the VEA from $G_0W_0^{\text{RPA}}$ results. $G_0W_0\Gamma_0$ calculations result in systematically lower VEA than those obtained at the $G_0W_0^{\text{RPA}}$ level by about 0.3 eV with all DFT starting points, but overall the deviations are much smaller than for the VIP.
In Figure \ref{gw100_ref_compare} we compare $GW$ results with experiments \bibnote{WEST GW100 data collection. http://www.west-code.org/database (accessed Aug. 1, 2018).} and quantum chemistry CCSD(T) results \cite{Krause2015}. The corresponding MD and mean absolute deviations (MAD) are summarized in Table \ref{gw100_ref_deviation}. At the $G_0W_0^{\text{RPA}}@\text{PBE}$ level, the MAD for the computed VIP values compared to CCSD(T) and experimental results are 0.50 and 0.55 eV respectively, and the MAD for the computed VEA compared to experiments is 0.46 eV. These MAD values (0.50/0.55/0.46 eV) are comparable to previous benchmark studies on the GW100 set using the FHI-aims (0.41/0.46/0.45 eV) \cite{vanSetten2015}, VASP (0.44/0.49/0.42 eV) \cite{Maggio2017} and WEST codes (0.42/0.46/0.42 eV) \cite{Govoni2018}, although in this work we did not extrapolate our results with respect to the basis set due to the high computational cost.
Compared to experiments and CCSD(T) results, $G_0W_0^{f_\text{xc}}$ improves over $G_0W_0^{\text{RPA}}$ for the calculation of VIP when semilocal functional starting points (LDA, PBE) are used, as indicated by the values of MD and MAD of $G_0W_0^{f_\text{xc}}@\text{LDA}/\text{PBE}$ results compared to those of $G_0W_0^{\text{RPA}}@\text{LDA}/\text{PBE}$. When using the PBE0 functional as starting point, $G_0W_0^{f_\text{xc}}$ leads to an overestimation of VIP by 0.53 eV on average. $G_0W_0\Gamma_0$ calculations underestimate VIP by about 1 eV with all functionals tested here. For the calculation of VEA, $G_0W_0^{f_\text{xc}}$ performs similarly to $G_0W_0^{\text{RPA}}$ as discussed above, and $G_0W_0\Gamma_0$ yields an underestimation of 0.25/0.43/0.64 eV on average with LDA/PBE/PBE0 starting points compared to experiments.
\begin{figure}[H]
\centering
\includegraphics[width=7in]{figures/gw100_ref_compare.png}
\caption{Vertical ionization potential (VIP), vertical electron affinity (VEA) and electronic gap of molecules in the GW100 set computed at $G_0W_0^{\text{RPA}}$, $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ levels of theory, compared to experimental and CCSD(T) results (black dashed lines). }
\label{gw100_ref_compare}
\end{figure}
\begin{table}[H]
\caption{Mean deviation and mean absolute deviation (in brackets) for $GW$ results compared to experimental results and CCSD(T) calculations. We report vertical ionization potentials (VIP), vertical electron affinities (VEA) and the fundamental electronic gaps. All values are given in eV.}
\begin{tabular}{lrrrr}
\hline
{} & CCSD(T) VIP & Exp. VIP & Exp. VEA & Exp. Gap \\
\hline
$G_0W_0^{\mathrm{RPA}}@\mathrm{LDA}$ & -0.23 (0.34) & -0.19 (0.43) & 0.04 (0.45) & 0.21 (0.56) \\
$G_0W_0^{f_\mathrm{xc}}@\mathrm{LDA}$ & 0.06 (0.29) & 0.11 (0.37) & 0.03 (0.48) & -0.10 (0.49) \\
$G_0W_0\Gamma_0@\mathrm{LDA}$ & -0.97 (0.98) & -0.93 (0.95) & -0.25 (0.41) & 0.59 (0.75) \\
\hline
$G_0W_0^{\mathrm{RPA}}@\mathrm{PBE}$ & -0.43 (0.50) & -0.39 (0.55) & -0.09 (0.46) & 0.28 (0.57) \\
$G_0W_0^{f_\mathrm{xc}}@\mathrm{PBE}$ & -0.12 (0.32) & -0.07 (0.43) & -0.10 (0.49) & -0.05 (0.46) \\
$G_0W_0\Gamma_0@\mathrm{PBE}$ & -1.19 (1.20) & -1.15 (1.16) & -0.43 (0.53) & 0.64 (0.79) \\
\hline
$G_0W_0^{\mathrm{RPA}}@\mathrm{PBE0}$ & -0.05 (0.20) & -0.01 (0.34) & -0.26 (0.41) & -0.26 (0.47) \\
$G_0W_0^{f_\mathrm{xc}}@\mathrm{PBE0}$ & 0.53 (0.57) & 0.57 (0.65) & -0.27 (0.50) & -0.83 (0.83) \\
$G_0W_0\Gamma_0@\mathrm{PBE0}$ & -1.30 (1.30) & -1.26 (1.26) & -0.64 (0.68) & 0.50 (0.72) \\
\hline
\end{tabular}
\label{gw100_ref_deviation}
\end{table}
Finally, we report $G_0W_0^{\text{RPA}}$, $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ results for several solids: \ce{Si}, \ce{SiC} (4H), \ce{C} (diamond), \ce{AlN}, \ce{WO3} (monoclinic), \ce{Si3N4} (amorphous). We performed calculations starting with LDA and PBE functionals for all solids, and for Si we also performed calculations with a dielectric-dependent hybrid (DDH) functional \cite{Skone2014}. All solids are represented by supercells with 64-96 atoms (see Section 1 of the SI) and only the $\Gamma$-point is used to sample the Brillouin zone. In Table \ref{bandgaps} we present the band gaps computed with different $GW$ approximations and functionals. Note that the supercells used here do not yield fully converged results as a function of supercell size (or k-point sampling); however, the comparisons between different $GW$ calculations are sound and represent the main result of this section.
\begin{table}[H]
\caption{Band gaps (eV) for solids computed by different $GW$ approximations and exchange-correlation (XC) functionals (see text). All calculations are performed at the $\Gamma$-point of supercells with 64-96 atoms (see Section 1 of the SI for details). }
\begin{tabular}{llrrrr}
\hline
& & DFT & $G_0W_0^{\mathrm{RPA}}$ & $G_0W_0^{f_\mathrm{xc}}$ & $G_0W_0\Gamma_0$ \\
System & XC & & & & \\
\hline
\ce{Si} & LDA & 0.55 & 1.35 & 1.33 & 1.24 \\
& PBE & 0.73 & 1.39 & 1.37 & 1.28 \\
& DDH & 1.19 & 1.57 & 1.50 & 1.48 \\
\hline
\ce{C} (diamond) & LDA & 4.28 & 5.99 & 6.00 & 5.89 \\
& PBE & 4.46 & 6.05 & 6.06 & 5.95 \\
\hline
\ce{SiC} (4H) & LDA & 2.03 & 3.27 & 3.23 & 3.26 \\
& PBE & 2.21 & 3.28 & 3.23 & 3.28 \\
\hline
\ce{AlN} & LDA & 3.85 & 5.67 & 5.72 & 5.66 \\
& PBE & 4.04 & 5.67 & 5.74 & 5.68 \\
\hline
\ce{WO3} (monoclinic) & LDA & 1.68 & 3.10 & 3.07 & 3.15 \\
& PBE & 1.78 & 2.97 & 2.87 & 3.03 \\
\hline
\ce{Si3N4} (amorphous) & LDA & 3.04 & 4.84 & 4.92 & 4.81 \\
& PBE & 3.19 & 4.86 & 4.96 & 4.83 \\
\hline
\end{tabular}
\label{bandgaps}
\end{table}
Overall, band gaps obtained with different $GW$ approximations are rather similar, with differences much smaller than those observed for molecules. To further investigate the positions of the band edges obtained from different $GW$ approximations, we plotted in Figure \ref{bandedges_solids} the $GW$ quasiparticle corrections to VBM and CBM, defined as $\Delta_{\mathrm{VBM/CBM}} = \varepsilon^{\text{GW}}_{\text{VBM/CBM}} - \varepsilon^{\text{DFT}}_{\text{VBM/CBM}}$ where $\varepsilon^{\text{GW}}_{\text{VBM/CBM}}$ and $\varepsilon^{\text{DFT}}_{\text{VBM/CBM}}$ are the $GW$ quasiparticle energy and the Kohn-Sham eigenvalue corresponding to the VBM/CBM, respectively.
\begin{figure}[H]
\centering
\includegraphics[width=3.33in]{figures/bandedge_solids.png}
\caption{$GW$ quasiparticle corrections to the valence band maximum (VBM) and the conduction band minimum (CBM). Circles, squares and triangles are $G_0W_0^{\text{RPA}}$, $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ results respectively; red, blue and green markers correspond to calculations with LDA, PBE and DDH functionals.}
\label{bandedges_solids}
\end{figure}
Compared to $G_0W_0^{\text{RPA}}$, VBM and CBM computed at the $G_0W_0^{f_\text{xc}}$ level are slightly lower, while VBM and CBM computed at the $G_0W_0\Gamma_0$ level are significantly higher. For Si, $\Delta_{\mathrm{VBM/CBM}}$ obtained with LDA starting points are -0.75/0.06 ($G_0W_0^{\text{RPA}}$), -0.86/-0.08 ($G_0W_0^{f_\text{xc}}$), -0.21/0.49 ($G_0W_0\Gamma_0$) eV, respectively, showing a trend in agreement with the results reported by Del Sole \textit{et al.} (-0.36/0.27, -0.44/0.14, 0.01/0.67 eV) \cite{DelSole1994}, but with an overall overestimate of the band gap due to a lack of convergence in our Brillouin zone sampling. The difference between band edge energies computed by different $GW$ approximations is larger with the DDH functional than with semilocal functionals. Overall, the trends observed for solids are consistent with those found for molecules, except that for solids the shift of the CBM resembles that of the VBM when vertex corrections are included, while for molecules VEA is less sensitive to vertex corrections.
\section{Introduction}
Accurate, first principles predictions of the electronic structure of molecules and materials are important goals in chemistry, condensed matter physics and materials science \cite{Onida2002}. In the past three decades, density functional theory (DFT) \cite{Hohenberg1964, Kohn1965} has been successfully adopted to predict numerous properties of molecules and materials \cite{Becke2014}. In principle, any ground or excited state properties can be formulated as functionals of the ground state charge density. In practical calculations, the ground state charge density is determined by solving the Kohn-Sham (KS) equations with approximate exchange-correlation functionals, and many important excited state properties are not directly accessible from the solution of the KS equations. The time-dependent formulation of DFT (TDDFT) \cite{Runge1984} in the frequency domain \cite{Casida1995} provides a computationally tractable method to compute excitation energies and absorption spectra. However, using the common adiabatic approximation to the exchange-correlation functional, TDDFT is often not sufficiently accurate to describe certain types of excited states such as Rydberg and charge transfer states \cite{Casida2012}, especially when semilocal functionals are used.
A promising approach to predict excited state properties of molecules and materials is many-body perturbation theory (MBPT) \cite{Hedin1965, Hybertson1986, Martin2016}. Within MBPT, the $GW$ approximation can be used to compute quasiparticle energies that correspond to photoemission and inverse photoemission measurements; furthermore, by solving the Bethe-Salpeter equation (BSE), one can obtain neutral excitation energies corresponding to optical spectra. For many years since the first applications of MBPT \cite{Hybertson1986}, its use has been hindered by its high computational cost. In the last decade, several advances have been proposed to improve the efficiency of MBPT calculations \cite{Umari2009, Neuhauser2014, Liu2016}, which are now applicable to simulations of relatively large and complex systems, including nanostructures and heterogeneous interfaces \cite{Ping2013, Pham2017, Leng2016}. In particular, $GW$ and BSE calculations can be performed using a low rank representation of density response functions \cite{Nguyen2012, Pham2013, Govoni2015, Govoni2018}, whose spectral decomposition is obtained through iterative diagonalization using density functional perturbation theory (DFPT) \cite{Baroni1987, Baroni2001}. This method does not require the explicit calculation of empty electronic states and avoids the inversion or storage of large dielectric matrices. The resulting implementation in the WEST code \bibnote{WEST. http://www.west-code.org/ (accessed Aug. 1, 2018).} has been successfully applied to investigate numerous systems including defects in semiconductors \cite{Seo2016, Seo2017}, nanoparticles \cite{Scherpelz2016}, aqueous solutions \cite{Gaiduk2016,Pham2017,Gaiduk2018}, and solid/liquid interfaces \cite{Govoni2015,Gerosa2018}.
In this work, we developed a finite-field (FF) approach to evaluate density response functions entering the definition of the screened Coulomb interaction $W$. The FF approach can be used as an alternative to DFPT, and presents the additional advantage of being applicable, in a straightforward manner, to both semilocal and hybrid functionals. In addition, FF calculations allow for the direct evaluation of density response functions beyond the random phase approximation (RPA).
Here we first benchmark the accuracy of the FF approach for the calculation of several density response functions, from which one can obtain the exchange correlation kernel ($f_{\text{xc}}$), defined as the functional derivative of the exchange-correlation potential with respect to the charge density. Then we discuss $G_0W_0$ calculations for various molecules and solids, carried out with either semilocal or hybrid functionals, and by adopting different approximations to include vertex corrections in the self-energy. In the last two decades a variety of methods \cite{DelSole1994, Fleszar1997, Schindlmayr1998, Marini2004, Bruneval2005, Tiago2006, Morris2007, Shishkin2007, Shaltaf2008, Romaniello2009, Gruneis2014, Chen2015, Kutepov2016, Kutepov2017, Maggio2017} \bibnote{Lewis, A. M.; Berkelbach, T. C. Vertex corrections to the polarizability do not improve the GW approximation for molecules. 2004, arXiv:1810.00456. arXiv.org ePrint archive. http://arxiv.org/abs/1810.00456 (accessed Oct 1, 2018).} has been proposed to carry out vertex-corrected $GW$ calculations, with different approximations to the vertex function $\Gamma$ and including various levels of self-consistency between $G$, $W$ and $\Gamma$. Here we focus on two formulations that are computationally tractable also for relatively large systems, denoted as $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$. In $G_0W_0^{f_\text{xc}}$, $f_{\text{xc}}$ is included in the evaluation of the screened Coulomb interaction $W$; in $G_0W_0\Gamma_0$, $f_{\text{xc}}$ is included in the calculation of both $W$ and the self-energy $\Sigma$ through the definition of a local vertex function. Most previous $G_0W_0^{f_\text{xc}}$ and $G_0W_0\Gamma_0$ calculations were restricted to the use of the LDA functional \cite{DelSole1994, Fleszar1997, Tiago2006, Morris2007}, for which an analytical expression of $f_{\text{xc}}$ is available. 
Paier \textit{et al.} \cite{Paier2008} reported $GW_0^{f_\text{xc}}$ results for solids obtained with the HSE03 range-separated hybrid functional \cite{Heyd2003}, where the exact exchange part of $f_{\text{xc}}$ was defined using the nanoquanta kernel \cite{Reining2002, Marini2003, Sottile2003, Bruneval2005}. In this work, semilocal and hybrid functionals are treated on equal footing, and we present calculations using LDA \cite{Perdew1981}, PBE \cite{Perdew1997} and PBE0 \cite{Perdew1996} functionals, as well as a dielectric-dependent hybrid (DDH) functional for solids \cite{Skone2014}.
A recent study of Thygesen and co-workers \cite{Schmidt2017} reported basis set convergence issues when performing $G_0W_0\Gamma_0@\text{LDA}$ calculations, which could be overcome by applying a proper renormalization to the short-range component of $f_{\text{xc}}$ \cite{Olsen2012, Olsen2013, Patrick2015}. In our work we generalized the renormalization scheme of Thygesen \textit{et al.} to functionals other than LDA, and we show that the convergence of $G_0W_0\Gamma_0$ quasiparticle energies is significantly improved using the renormalized $f_\text{xc}$.
The rest of the paper is organized as follows. In Section 2 we describe the finite-field approach and benchmark its accuracy. In Section 3 we describe the formalism used to perform $GW$ calculations beyond the RPA, including a renormalization scheme for $f_{\text{xc}}$, and we compare the quasiparticle energies obtained from different $GW$ approximations (RPA or vertex-corrected) for molecules in the GW100 test set \cite{vanSetten2015} and for several solids. Finally, we summarize our results in Section 4.
\section{Introduction}
The Korteweg--de Vries equation
\begin{align}\label{KdV}\tag{KdV}
\frac{d\ }{dt} q = - q''' + 6qq'
\end{align}
was derived in \cite{KdV1895} to explain the observation of solitary waves in a shallow channel of water. Specifically, they sought to definitively settle (to use their words) the debate over whether such solitary waves are consistent with the mathematical theory of a frictionless fluid, or whether wave fronts must necessarily steepen. The equation itself, however, appears earlier; see \cite[p. 77]{Boo}. The term \emph{solitary wave} has now been supplanted by \emph{soliton}, a name coined in \cite{KruskalZubusky} and inspired by the particle-like interactions they observed between solitary waves in their numerical simulations of \eqref{KdV}.
In a series of papers, researchers at Princeton's Plasma Physics Laboratory demonstrated that equation \eqref{KdV} exhibits a wealth of novel features, including the existence of infinitely many conservation laws \cite{MR0252826} and the connection to the scattering problem for one-dimensional Schr\"odinger equations \cite{GGKM}. Nowadays, we say that \eqref{KdV} is a completely integrable system (cf. \cite{MR0303132}).
Although we shall focus on mathematical matters here, \eqref{KdV} continues to be an important effective model for a diverse range of physical phenomena; see, for example, the review \cite{MR1329553} occasioned by the centenary of \cite{KdV1895}.
One of the most basic mathematical questions one may ask of \eqref{KdV} is whether it is well-posed. This is the question of the existence and uniqueness of solutions, together with the requirement that the solution depends continuously on time and the initial data. As we shall discuss, this topic has attracted several generations of researchers who have successively enlarged the class of initial data for which well-posedness can be shown. Our principal contribution is the following:
\begin{theorem}[Global well-posedness]\label{T:main}
The equation \eqref{KdV} is globally well-posed for initial data in $H^{-1}({\mathbb{R}})$ or $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ in the following sense: In each geometry, the solution map extends uniquely from Schwartz space to a jointly continuous map $\Phi:{\mathbb{R}}\times H^{-1}\to H^{-1}$. Moreover, for each initial data $q\in H^{-1}$, the orbit $\{\Phi(t,q) : t\in{\mathbb{R}}\}$ is uniformly bounded and equicontinuous in $H^{-1}$.
\end{theorem}
On the circle ${\mathbb{R}}/{\mathbb{Z}}$, Schwartz space is coincident with $C^\infty({\mathbb{R}}/{\mathbb{Z}})$; on the line, it is comprised of those $C^\infty({\mathbb{R}})$ functions that decay (along with their derivatives) faster than any polynomial as $|x|\to\infty$. For the definition of $H^{-1}({\mathbb{R}})$ and $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$, see subsection~\ref{S:1.1}; informally, they are comprised of those tempered distributions that are derivatives of $L^2$ functions. For the precise definition of equicontinuity in the $H^s$ setting, see \eqref{E:equi1}.
The fact that Schwartz-space initial data leads to unique global solutions to \eqref{KdV} that remain in Schwartz class has been known for some time; see, for example, \cite{MR0385355,MR0759907,Sj,MR0410135,MR0261183}. Indeed, in this class, the solution map is known not only to be continuous, but infinitely differentiable in both variables. We shall rely on this result in what follows.
In the case of \eqref{KdV} posed on the torus (or equivalently for periodic initial data), Theorem~\ref{T:main} reproduces the principal results of \cite{MR2267286}. Note that because the circle is compact, uniform boundedness and equicontinuity of the orbit is equivalent to it being pre-compact. In the line case, one does not expect orbits to be pre-compact; both solitons and radiation preclude tightness from holding globally in time.
The papers \cite{MR2830706,MR2927357} show that well-posedness cannot persist (in either geometry) in $H^{s}$ for any $s<-1$. In this sense, Theorem~\ref{T:main} is sharp. On the other hand, one may consider well-posedness at higher regularity $s>-1$. Existence and uniqueness are immediate from the case $s=-1$; the key is to demonstrate that continuous dependence remains valid in this stronger topology. In this paper, we will settle the cases left open by prior work, namely, the well-posedness of \eqref{KdV} in $H^{s}({\mathbb{R}})$ with $-1\leq s< -\frac34$. (On the circle, all $s\geq -1$ were treated already in \cite{MR2267286}.) In fact, the proof of Corollary~\ref{C:2} provides a simple uniform treatment of all $-1\leq s<0$ and adapts trivially to the case of the circle also.
The notion of solution used here (unique limits of Schwartz solutions) coincides with that in \cite{MR2267286} and is informed by several important considerations. Firstly, as the notion of a solution in the case of Schwartz initial data is firmly settled, any notion of a solution to \eqref{KdV} leading to well-posedness in $H^{-1}$ must produce solutions identical to those given by Theorem~\ref{T:main}.
Secondly, for functions that are merely $C_t H^{-1}_x$ it is not possible to make sense of the nonlinearity in \eqref{KdV} as a space-time distribution in either geometry. While the local smoothing effect (see subsection~\ref{SS:ls}) provides a potential resolution of this problem in the line setting, there is no natural alternative notion of a weak solution in the circle geometry. Any methodology that purports to apply in wide generality must adopt a notion of solution that applies in wide generality.
A wider notion of solution was considered in \cite{Christ'05}, namely, limits of smooth solutions in the presence of smooth asymptotically vanishing forcing. That paper shows (see \cite[\S2.7]{Christ'05}) that with this wider notion of solution, uniqueness cannot be guaranteed for $C_t H^{s}({\mathbb{R}}/{\mathbb{Z}})$ solutions to \eqref{KdV} already for $s<0$.
From Theorem~\ref{T:main}, we see that the map $\Phi$ is continuous, as was also shown in \cite{MR2267286} for the circle case. It is natural to ask if this continuity may be expressed more quantitatively. In some sense, the answer is no: it is shown in \cite{MR2018661} that the data to solution map cannot be uniformly continuous on bounded sets when $s<-\frac34$ in the line case or when $s<-\frac12$ in the circle case. Nevertheless, the arguments presented here are sufficiently transparent that one may readily obtain information on the modulus of continuity of $q\mapsto\Phi(t,q)$. Specifically, we find that the key determiners of the modulus of continuity at an initial datum $q\in H^{-1}({\mathbb{R}})$ are the time $t$ in question and the rate at which
$$
\int \frac{|\hat q(\xi)|^2\,d\xi}{\xi^2+4\kappa^2} \to 0\qtq{as} \kappa\to\infty.
$$
Evidently, this integral does not converge to zero uniformly on any open set of initial data.
Let us now turn our attention to a discussion of prior work proving well-posedness for \eqref{KdV}. Discussion of weak solutions (without uniqueness) is postponed until subsection~\ref{SS:ls}. Our discussion will not be exhaustive; the body of literature on \eqref{KdV} is simply immense. Nor will we insist on a strict chronology.
Early work on the local and global well-posedness of \eqref{KdV} treated it as a quasi-linear hyperbolic problem. The appearance of the derivative in the nonlinearity prohibits simple contraction mapping arguments from closing. The principal methods employed were (i) compactness and uniqueness arguments (e.g. \cite{MR0454425,MR0261183}) combined with parabolic regularization, (ii) convergence of Picard iterates with (e.g. \cite{MR0312097}) or without (e.g. \cite{MR0407477}) parabolic regularization, and (iii) approximation by the Benjamin--Bona--Mahoney (BBM) equation \cite{MR0393887,MR0385355}.
The BBM equation was introduced in \cite{MR0427868}; this equation has a much more regular nonlinearity and global well-posedness was shown there by simple contraction mapping arguments. The BBM equation has the same Hamiltonian as \eqref{KdV}, namely,
\begin{equation}\label{I HKdV}
H_\text{KdV} (q) := \int \tfrac12 q'(x)^2 + q(x)^3\,dx;
\end{equation}
however the underlying symplectic structure is different. Of all the prior approaches we know of, the one we follow here is closest in spirit to that of Bona--Smith \cite{MR0385355}, since both we and they employ the idea of approximating the full flow by another Hamiltonian evolution that is more readily controlled.
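Indeed, one may check directly that \eqref{KdV} is the Hamiltonian flow of \eqref{I HKdV} with respect to the Poisson structure associated to $\partial_x$: since
\begin{align*}
dH_\text{KdV}(q)[\delta q] = \int q'\,(\delta q)' + 3q^2\,\delta q\,dx = \int \bigl(-q'' + 3q^2\bigr)\,\delta q\,dx,
\end{align*}
we have $\frac{\delta H_\text{KdV}}{\delta q} = -q'' + 3q^2$ and so
\begin{align*}
\partial_x \frac{\delta H_\text{KdV}}{\delta q} = -q''' + 6qq',
\end{align*}
which is precisely the right-hand side of \eqref{KdV}.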
Incidentally, the problem of local well-posedness of \eqref{KdV} in Schwartz space, which we shall take for granted here, is rather easier than the works just cited, because one may safely lose regularity in proving continuous dependence of the solution on the initial data.
While multiple authors sought to obtain well-posedness in $H^{s}$ for $s$ as small as possible, these early attempts did not succeed in proving local well-posedness beyond the regime $s>3/2$. Most significantly, this does not reach the level $s=1$ at which one may upgrade local to global well-posedness by exploiting conservation of the Hamiltonian. Nevertheless, global well-posedness was obtained at this time for $s\geq 2$ by using the conserved quantities at such higher regularity discovered in \cite{MR0252826}.
To progress further in this vein, the key has been to exploit the dispersive property of \eqref{KdV}. Global well-posedness for finite energy initial data on the line was first proved in \cite{MR1086966}, by utilizing local smoothing and maximal function estimates. The paper actually proves local well-posedness in $H^s({\mathbb{R}})$ for $s>3/4$; the global $H^1({\mathbb{R}})$ result follows trivially from this and conservation of the Hamiltonian.
The next conspicuous benchmark for the well-posedness theory was the treatment of initial data with finite momentum
\begin{equation}\label{I P}
P(q) := \int \tfrac12 q(x)^2 \,dx,
\end{equation}
that is, data in $L^2$. \emph{Momentum} is the appropriate term here; this quantity is the generator of translations with respect to the standard symplectic structure. Moreover, this quantity is conserved under the KdV flow. The \emph{mass} of a wave is given by $\int q(x)\,dx$, which is also conserved, and represents the total deficit (or surplus) of water relative to $q\equiv 0$.
Well-posedness of \eqref{KdV} in $L^2$ was proved both on the line and on the circle by Bourgain in \cite{MR1215780}. At the heart of this work is the use of $X^{s,b}$ spaces, which efficiently capture the dispersive nature of the equation and effectively control the deviation of the KdV dynamics from solutions to the linear equation $\partial_t q = - q'''$. After developing suitable estimates in these spaces, the proof proceeds by contraction mapping arguments; thus the solutions constructed depend analytically on the initial data.
Further development and refinement of the methods employed in \cite{MR1215780} ultimately led to a proof of local well-posedness for \eqref{KdV} in $H^{s}({\mathbb{R}})$ for $s\geq -3/4$ and in $H^s({\mathbb{R}}/{\mathbb{Z}})$ for $s\geq -1/2$. Excepting the endpoints, this was proved by Kenig--Ponce--Vega in \cite{MR1329387}. For a discussion of the endpoints, see \cite{MR2018661} and \cite{MR1969209,MR2054622,MR2233689}. These ranges of $s$ are sharp if one requires the data to solution map to be uniformly continuous on bounded sets; see \cite{MR2018661}.
These local well-posedness results were made global in time in \cite{MR1969209}, excepting the endpoint case $H^{-3/4}({\mathbb{R}})$, which was proved later in \cite{MR2531556,MR2501679}. At that time, no exact conservation laws were known that were adapted to negative regularity. To obtain such global results, these authors constructed almost conserved quantities, whose growth in time they were able to control.
While it is true that the conspicuous manifestations of complete integrability of KdV played no particular role in the series of works we have just described, it is difficult to completely decouple these successes from the exact structure of the KdV equation. In the first place, many of these arguments rely on the absence of unfavorable resonances. This appears in the multilinear $X^{s,b}$ estimates and (rather more explicitly) in the construction of almost conserved quantities in \cite{MR1969209}. This is akin to the construction of Birkhoff normal form, which may fail due to resonances, but which does succeed in completely integrable systems (cf. \cite{MR0501141,MR2150385}). As we will discuss below, we now know that KdV admits exact conservation laws adapted to every regularity $s\geq -1$; this offers a rather transparent explanation for the otherwise startling success of \cite{MR1969209} in constructing almost conserved quantities.
The Miura map \cite{MR0252825} implements a first iteration toward the construction of Birkhoff normal form by converting the KdV equation to the mKdV equation, which has a nonlinearity that is one degree higher. This transformation was one of the first indications that there was something peculiar about \eqref{KdV}. Moreover, a one-parameter generalization of this transformation, due to Gardner, led to the first proof of the existence of infinitely many polynomial conservation laws; see \cite{MR0252826}. The Miura map has been very popular in the study of KdV at low regularity. Most particularly, it allows one to work at positive regularity, where many nonlinear transformations (e.g., pointwise products) are much better behaved.
The breakdown of traditional PDE techniques ultimately stems from a high-high-low frequency interaction that makes it impossible to approximate the KdV flow by a linear evolution even locally in time. This particular frequency interaction appears in many fluid models, due to the ubiquity of the advection nonlinearity $(u\cdot\nabla) u$, and is exploited crucially in the construction of solutions exhibiting energy growth.
It is worth noting that among the family of monomial gKdV equations, namely, those of the form $\partial_t q = -\partial_x^3 q \pm \partial_x (q^k)$, only for the completely integrable models (i.e., $k=2,3$) does the local well-posedness threshold deviate from scaling. Indeed, the completely integrable models are \emph{less} well-posed relative to scaling than those with $k\geq 4$. Ultimately, we see that complete integrability does not completely ameliorate the severity of this nonlinearity when acting on solutions of low regularity.
In this vein, we contend that the complete integrability of a system is not divorced from the class of initial data on which it is studied. The PDE $\partial_t q = \partial_x q$ posed on the line might immediately be classed as completely integrable; it even belongs to the KdV hierarchy. However, when the initial data is white-noise, we see that the dynamics is mixing! On the basis of the results of this paper, we may say that the term \emph{completely integrable} continues to apply to \eqref{KdV} in the class $H^s({\mathbb{R}})$ when $s\geq -1$.
We have not yet explained in what sense \eqref{KdV} can be regarded as completely integrable. The most common definition applied in finite-dimensional mechanics is that the system has sufficiently many Poisson commuting, functionally independent, conserved quantities. Here, sufficiently many means half the dimension of the ambient symplectic manifold. As noted earlier, the fact that \eqref{KdV} admits infinitely many independent conserved quantities was first proved in \cite{MR0252826}. We have already seen three: the mass, momentum, and energy. In the original paper, the conservation laws were presented in a microscopic form, that is, as
\begin{equation}\label{micro law}
\partial_t \rho(t,x) + \partial_x j(t,x) = 0,
\end{equation}
where the densities $\rho$ and the currents $j$ are given by particular polynomials in $q$ and its derivatives. The (macroscopic) conserved quantities are then obtained by integrating $\rho$ over the whole line or circle, as appropriate.
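To give one example beyond the mass (where $\rho = q$ and $j = q'' - 3q^2$), the momentum admits the microscopic representation
\begin{align*}
\partial_t\bigl(\tfrac12 q^2\bigr) = q\bigl(-q''' + 6qq'\bigr) = -\partial_x\Bigl( q q'' - \tfrac12 (q')^2 - 2q^3\Bigr),
\end{align*}
that is, \eqref{micro law} holds with $\rho = \tfrac12 q^2$ and $j = q q'' - \tfrac12 (q')^2 - 2q^3$.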
The polynomial nature of these conserved quantities is such that, except in the case of $H^\infty$ data (i.e. all derivatives square integrable), all but finitely many of them are infinite. Moreover, it is also not immediately clear whether these constitute a sufficient number of conserved quantities to call the system completely integrable, even in Schwartz space. These concerns turn out to be unwarranted. To explain, we begin with an innovation of Lax \cite{MR0235310}, namely, the introduction of the Lax pair: Defining
\begin{align*}
L(t) &:= -\partial_x^2 + q(t,x) \qtq{and} P(t) := - 4 \partial_x^3 + 3\bigl(\partial_x q(t,x) + q(t,x) \partial_x \bigr)
\end{align*}
it is easy to verify that
\begin{align*}
\text{$q(t)$ solves \eqref{KdV}} \iff \frac{d\ }{dt} L(t) = [P(t),\, L(t)].
\end{align*}
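Concretely, writing $\partial := \partial_x$ and computing each commutator as an operator on smooth functions,
\begin{align*}
[P(t),\,L(t)] &= \bigl[-4\partial^3,\,q\bigr] + 3\bigl[q\partial+\partial q,\,-\partial^2\bigr] + 3\bigl[q\partial+\partial q,\,q\bigr] \\
&= \bigl(-4q''' - 12q''\partial - 12q'\partial^2\bigr) + \bigl(3q''' + 12q''\partial + 12q'\partial^2\bigr) + 6qq' \\
&= -q''' + 6qq',
\end{align*}
which is a multiplication operator; since $\frac{d\ }{dt}L(t)$ is multiplication by $\partial_t q(t,x)$, the asserted equivalence follows.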
As $P(t)$ is always anti-self-adjoint, this shows that at each time slice, the Schr\"odinger operator with potential $q(t,x)$ is unitarily equivalent to that built from the initial data $q(0,x)$. Speaking loosely, we may say that all spectral properties of $L(t)$ are conserved under the KdV flow.
One of the beauties of the Lax pair is that it works equally well in both geometries. However, once we try to speak more precisely about which spectral properties are conserved, this unity quickly dissolves. We will first discuss the periodic case where related ideas have been most successful in tackling the well-posedness problem.
The Schr\"odinger operator on the circle with (periodic) potential $q$ has purely discrete spectrum. This remains true for potentials that are merely $H^{-1}$ because such perturbations of $-\partial_x^2$ are relatively compact. The Lax pair shows that these (periodic) eigenvalues are then conserved under the flow and so we obtain an infinite sequence of conserved quantities that extend to the case of very low regularity. There is a direct connection between these eigenvalues and the polynomial conservation laws mentioned earlier; see, for example, \cite[\S3]{MR0397076}.
As it turns out, these eigenvalues are not the most convenient objects for further development. Rather, one should consider the spectrum of the Schr\"odinger operator associated to the $1$-periodic potential, acting on the whole line. This set is wholly determined by the periodic eigenvalues; see \cite{MR0749109}. Nevertheless, this new perspective suggests an alternate set of conserved quantities, namely, the lengths of the gaps in the spectrum. The virtue of these new quantities can be seen already in the fact that these numbers effectively capture the $H^s$ norms of the potential, at least if $s\geq0$; see \cite{MR0409965}. While such a priori bounds are useful for well-posedness questions (particularly, to extend solutions globally in time), they do not suffice.
For the purposes of well-posedness, there is no better expression of complete integrability than the existence of action-angle coordinates. Such coordinates are now known to exist for \eqref{KdV} with data in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ and this result was decisive in the proof of global well-posedness in \cite{MR2267286}. A key step down this path was the discovery that one should adopt the Dirichlet spectrum (together with the gap lengths) to form a complete set of coordinates and secondly, that these points (which lie in the gaps) should properly be interpreted as lying on the Riemann surface obtained by gluing together two copies of the complex plane cut along the spectrum. These considerations lead to the definition of angle variables (cf. \cite{MR0427869,MR0427731}) and thence to associated actions \cite{MR0403368}. A very pedagogical account of these constructions can be found in \cite{MR1997070}; moreover, this monograph culminates in a proof that these variables define \emph{global} action-angle coordinates on each symplectic leaf in the phase-space $L^2({\mathbb{R}}/{\mathbb{Z}})$.
The proof in \cite{MR2267286} of global well-posedness in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ required two more steps. The first, carried out in \cite{MR2179653}, was the extension of these coordinates (as a global analytic symplectic diffeomorphism) to $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. The second was to gain adequate control of the frequencies (i.e., the time derivatives of the angles). Usually, these frequencies are computed as the derivatives of the Hamiltonian with respect to the corresponding actions. However, the Hamiltonian $H_\text{KdV}$ does not make sense as a function on $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$!
Let us now turn our attention to the case of \eqref{KdV} posed on the line. We begin by describing a system of coordinates discovered already in \cite{GGKM} that linearize the flow (at least for a suitable class of data). While not action-angle variables themselves, such variables can be readily expressed in terms of them; see \cite{MR0303132}.
To the best of our knowledge, the broadest class in which the construction that follows has been successfully completed is $\mathcal L^1_1:= \{q\in L^1({\mathbb{R}}): x q(x) \in L^1({\mathbb{R}})\}$; see \cite{MR0897106,MR0792566}. As we will discuss later, there are compelling reasons to doubt that this construction can be taken much further without substantial new ideas.
Given $q\in \mathcal L^1_1$ and $k\in{\mathbb{C}}$ with $\Im k\geq 0$, there are unique solutions $f_\pm(x;k)$ to
$$
-f''(x) + q(x)f(x) = k^2 f(x) \qtq{satisfying} f_\pm(x) = e^{\pm ik x} +o(1) \quad\text{as $x\to\pm\infty$.}
$$
These are known as Jost solutions and depend analytically on $k$. For $k\in{\mathbb{R}}\setminus\{0\}$, $f_+(x;\pm k)$ are linearly independent solutions to our ODE. Thus we may define connection coefficients, say $a(k)$ and $b(k)$, so that
\begin{equation}\label{a and b}
f_-(x;k) = a(k) f_+(x;-k) + b(k) f_+(x;k).
\end{equation}
Note that $a(k)$ extends analytically to the upper half-plane, since it can be expressed through the Wronskian of $f_+$ and $f_-$. There is no such extension of $b(k)$. The relation to the Wronskian also shows that $a(k)$ has zeros in the upper half-plane precisely at those points $i\kappa_n$ for which $-\kappa_n^2$ is an eigenvalue of the Schr\"odinger operator.
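To make this explicit, recall the Wronskian convention $W(f,g)=fg'-f'g$, whose value here is independent of $x$. Evaluating as $x\to+\infty$ gives $W\bigl(f_+(\cdot\,;k),f_+(\cdot\,;-k)\bigr)=-2ik$, and so taking Wronskians of \eqref{a and b} with $f_+(\cdot\,;\pm k)$ yields
$$
a(k) = \frac{W\bigl(f_-(\cdot\,;k),\,f_+(\cdot\,;k)\bigr)}{2ik}
\qtq{and}
b(k) = -\frac{W\bigl(f_-(\cdot\,;k),\,f_+(\cdot\,;-k)\bigr)}{2ik}.
$$
The first formula extends to the upper half-plane because both Jost solutions do; the second does not, since $f_+(\cdot\,;-k)$ is only defined for $\Im k\leq 0$.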
The objects introduced so far do not uniquely characterize the potential $q$. To do so, one must also consider \emph{norming constants}, $c_n>0$, associated to each eigenvalue $-\kappa_n^2$. These describe the large-$x$ asymptotics of the $L^2$-normalized eigenfunction $\psi_n(x)$; specifically, $e^{\kappa_n x} |\psi_n(x)| \to c_n$ as $x\to+\infty$.
As shown already in \cite{GGKM}, the objects just described evolve very simply under \eqref{KdV}: $a(k)$, $|b(k)|$, and the eigenvalues remain constant, while $\arg( b(k))$ and $\log(c_n)$ evolve linearly. As the forward and inverse scattering problems have been shown to be well-posed in the class $\mathcal L^1_1$, this yields a proof of well-posedness of \eqref{KdV} in this class. This is the natural analogue of the argument that has proven so successful in the circle geometry.
Unfortunately, well-posedness of the forward/inverse scattering problems (as they are currently understood) begins to break down under very mild relaxations of the condition $q\in \mathcal L^1_1$. For example, the scattering data fails to determine $q$ already for potentials that are bounded and $O(x^{-2})$ at infinity, due to the presence of zero-energy eigenvalues; see \cite{MR0875319}. Relaxing our decay restrictions on $q$ to merely $O(|x|^{-1})$ at infinity gives rise to further problems: positive energy (embedded) eigenvalues may occur (cf. \cite[\S XIII.13]{MR0493421}); moreover, Jost solutions may fail to exist (without WKB correction) at every positive energy.
In \cite{MR2138138}, it is shown that embedded singular continuous spectrum can occur as soon as one passes beyond $O(|x|^{-1})$ decay, even in the slightest. Moreover, potentials $q\in L^2({\mathbb{R}})$ can yield essentially arbitrary embedded singular spectrum; see \cite{MR2552106}. The appearance of such exotic spectra leads us to believe that seeking a solution to the well-posedness problem for \eqref{KdV} in $H^{-1}({\mathbb{R}})$ through the inverse scattering methodology has little chance of success at this time. In particular, we are not aware of any proposal for action-angle variables in such a scenario. This raises the following question: What other manifestation of complete integrability may hold the key to further progress on the well-posedness problem?
Our answer, in this paper, is the existence of a wealth of commuting flows. As we will see, the method we propose does not completely supplant PDE techniques, but rather, like the Miura map, provides a new avenue for their application to the KdV problem. As the existence of an abundance of commuting flows is a necessary (but not sufficient) condition for a system to be considered completely integrable, the method has a good chance of being applicable to any PDE that is considered completely integrable.
The commuting flows associated to the traditional sequence of conserved quantities (based on polynomials in $q$ and its derivatives) are not what we have in mind. Their well-posedness is at least as difficult as for \eqref{KdV} itself. Moreover, there is no sense in which they approximate the KdV flow; they are better considered as flowing in orthogonal directions. Rather, we begin our discussion with
\begin{equation}\label{Intro renorm}
\alpha(\kappa;q) := - \log[a(i\kappa;q)] + \tfrac{1}{2\kappa}\int q(x)\,dx,
\end{equation}
where $a(k;q)$ denotes the coefficient $a(k)$ from \eqref{a and b} associated to the potential $q$. As noted previously, both $a(k;q)$ and $\int q$ are conserved under the KdV flow; thus one should expect $\alpha(\kappa)$ to also be conserved whenever it is defined.
Unaware that the same idea had already been implemented by Rybkin in \cite{MR2683250}, the authors together with X.~Zhang showed in \cite{KVZ} that $\alpha(\kappa;q)$ is a real-analytic function of $q\in H^{-1}({\mathbb{R}})$, provided $\kappa\geq 1 + 45\|q\|_{H^{-1}}^2$. We also gave a direct proof that it is conserved for Schwartz initial data. In these arguments, both we and Rybkin use the fact that $a(k;q)$ can also be written as a Fredholm determinant; see \eqref{O37} below. That such a determinant representation of this scattering coefficient is possible was first noticed in the setting of three-dimensional central potentials in \cite{MR0044404}. (See \cite[Proposition~5.4]{MR2154153} or \cite[Lemma~2.8]{MR2310217} for simple proofs in one dimension.)
The renormalization of $\log|a(k)|$ appearing in \eqref{Intro renorm} is essential for considering $q\in H^{-1}({\mathbb{R}})$; without it, one would need to restrict to potentials that are at least conditionally integrable. Incidentally, this renormalizing term can also be predicted as the leading behaviour of the phase shift via WKB theory.
The goal of the paper \cite{KVZ} was the construction of a variety of low-regularity conservation laws for KdV both on the line and on the circle. (NLS and complex mKdV were also treated there by the same method.) In the case of $H^{-1}({\mathbb{R}})$ bounds for KdV, our argument is essentially that of Rybkin \cite{MR2683250}, who obtained the same result. Another proof (also independent of Rybkin) can be found in \cite{MR3400442}. In the line setting, general $H^s({\mathbb{R}})$ bounds for KdV, NLS, and mKdV were obtained, independently, by Koch and Tataru \cite{KT}. For an earlier partial result, see also \cite{MR3292346}. In the circle setting, bounds of this type were obtained considerably earlier; see \cite{MR2179653}.
In this paper, we will not rely on the results of \cite{MR3400442,MR2179653,KVZ,KT,MR2683250}. In fact, the proof of Theorem~\ref{T:ls conv} below relies on our development of an alternate argument, which also yields the global $H^{-1}$ bound. Specifically, we will develop a microscopic version (cf. \eqref{micro law}) of the macroscopic conservation law from \cite{KVZ,MR2683250}.
A priori bounds of the type just described do not in themselves yield well-posedness. Indeed, conservation of momentum was known already to Korteweg and de~Vries, yet the corresponding well-posedness result did not appear until \cite{MR1215780}. The key obstacle is always to control differences of solutions. While individual solutions admit infinitely many conservation laws, the difference of two solutions need not have any.
As discussed previously, the map $q\mapsto \alpha(\kappa;q)$ is analytic; therefore, its derivative (with respect to $q$) is represented (in the sense \eqref{derivative}) by an analytic $H^1$-valued function of $q\in H^{-1}$. Thus, when we consider the Hamiltonian evolution induced by this functional, namely,
$$
\frac{d\ }{dt} q(t) = \partial_x \frac{\delta \alpha}{\delta q},
$$
we see that the right-hand side is a Lipschitz function on $H^{-1}$ and so well-posedness of this equation follows by the standard ODE argument. Our ambition (and this appears to be a new idea) is to approximate \eqref{KdV} by this flow. It turns out that this is possible after one further renormalization, as we will now explain.
It was observed already in \cite{MR0303132} that $\log[a(i\kappa)]$ acts as a generating function for the polynomial conserved quantities; in particular, this yields the asymptotic expansion
$$
\alpha(\kappa;q) = \tfrac{1}{4\kappa^3} P(q) - \tfrac{1}{16\kappa^5} H_\text{KdV}(q) + O(\kappa^{-7}),
$$
using the notations \eqref{I HKdV} and \eqref{I P}. Inspired by this, one may then postulate that the Hamiltonian
$$
H_\kappa := - 16 \kappa^5 \alpha(\kappa;q) + 4 \kappa^2 P(q)
$$
provides a good approximation to the KdV Hamiltonian for $\kappa$ large. More ambitiously, one may hope that the KdV flow is well approximated by the flow under $H_\kappa$. Verifying this and so deducing Theorem~\ref{T:main} occupies the central portion of the paper, namely, Sections~3--5.
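As a first consistency check, at a purely formal level (say, for Schwartz solutions), substituting the asymptotic expansion above into the definition of $H_\kappa$ gives
$$
H_\kappa = -16\kappa^5\Bigl[\tfrac{1}{4\kappa^3} P(q) - \tfrac{1}{16\kappa^5} H_\text{KdV}(q) + O(\kappa^{-7})\Bigr] + 4\kappa^2 P(q) = H_\text{KdV}(q) + O(\kappa^{-2});
$$
in particular, the role of the momentum term in $H_\kappa$ is precisely to cancel the $-4\kappa^2 P(q)$ contribution of $-16\kappa^5\alpha(\kappa;q)$.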
Several observations are in order. Firstly, while $\alpha(\kappa;q)$ is an analytic function on $H^{-1}$, the approximate Hamiltonian $H_\kappa$ is not, because momentum is not. Nevertheless, well-posedness of the resulting flow is still elementary; see Proposition~\ref{P:H kappa}.
The problem of estimating the discrepancy between the $H_\kappa$ flow and the full KdV flow is much simplified by the fact that the two flows commute. Indeed, it reduces the question of such an approximation to showing that the flow induced by the difference $H_\text{KdV} - H_\kappa$ is close to the identity for $\kappa$ large and bounded time intervals.
Naturally, one needs to show that this flow is close to the identity in the $H^{-1}$ metric; however, this follows from proximity in much weaker norms, say $H^{-3}$. The central point here is equicontinuity or, equivalently, tightness on the Fourier side; see Lemma~\ref{L:equi 1}. The equicontinuity of orbits under the flows of interest to us follows from the fact that they all conserve $\alpha(\kappa;q)$; see Lemma~\ref{L:equi 2}. Indeed, from \eqref{alpha as I2}, we see that this functional effectively captures how much of the $H^{-1}$ norm of $q$ lives at frequencies $\xi$ with $|\xi|\gtrsim \kappa$.
One further innovation informs our implementation of the program laid out above, namely, the adoption of the `good unknown' $x\mapsto\kappa - \tfrac{1}{2g(x)}$. Here $g(x):=g(x;\kappa,q)$ denotes the diagonal of the Green's function associated to the potential $q$ at energy $-\kappa^2$. For a discussion of this object, see Section~\ref{S:2}. In particular, it is shown there that the map from $q(x)$ to $\kappa - \tfrac{1}{2g(x)}$ is a real-analytic diffeomorphism, thus, justifying the notion that $g(x)$ may effectively replace the traditional unknown $q(x)$.
Both the diagonal Green's function and its reciprocal appear naturally in several places in our argument, including in the conserved density $\rho$ introduced in \eqref{E:rho defn} and in the dynamics associated to the Hamiltonian $H_\kappa$; see \eqref{H kappa flow q}. Although our embrace of $g(x)$ is certainly responsible for the simplicity of many of our estimates and, concomitantly, for the brevity of the paper, we caution the reader that it is not in itself the key to overcoming the fundamental obstacle confounding previous investigators, namely, the problem of estimating differences between two solutions.
We are not aware of any obstruction to extending the method employed here to a wide range of integrable systems, including those in the AKNS family. As evidence in favour of this assertion, we demonstrate in Section~\ref{S:periodic} how our method applies in the setting of KdV on the circle. In Appendix~\ref{S:A} we apply it to the next equation in the KdV hierarchy, following up on an enquiry of a referee. Regarding models in the AKNS family, we note that the functional $\alpha(\kappa,q)$ discussed in \cite{KVZ} is easily seen to have several of the favorable properties needed for our arguments, such as providing global norm control, yielding equicontinuity, and inducing a well-posed Hamiltonian flow.
We do not consider what our results may imply for (real) mKdV via the Miura map. Rather, it is our hope that our method may soon be adapted
to give an \emph{intrinsic} treatment of the more general \emph{complex} mKdV, which fits within the AKNS family of integrable systems.
Finally, while the ideas presented here are rooted in the complete integrability of KdV, we believe they may prove fruitful beyond this realm. Specifically, we envision the $H_\kappa$ flow being used as a leading approximation for KdV-like equations in much the same way as the Airy equation, $\partial_t q = -q'''$, has been used as an approximation of KdV itself.
\subsection{Local smoothing}\label{SS:ls}
The local smoothing effect is observed for a wide range of dispersive equations in Euclidean space, both linear and nonlinear. The underlying physical principle is that when high-frequency components of a wave travel very quickly, they must spend little time in any fixed finite region of space. Thus, one should expect a gain in regularity locally in space on average in time. This phenomenon seems to have been first appreciated by Kato, both for linear \cite{MR0190801,MR0234314} and nonlinear \cite{MR0759907} problems. In \cite{MR0759907}, it is shown that for Schwartz solutions to \eqref{KdV} one has
$$
\int_{-1}^1 \int_{-1}^1 |q'(t,x)|^2 \,dx\,dt \lesssim \| q(0) \|_{L^2}^2 + \| q(0) \|_{L^2}^6.
$$
This is then used to prove the existence of global weak solutions to \eqref{KdV} for initial data in $L^2({\mathbb{R}})$. Prior to this, existence of global weak solutions (in either geometry) was known only for data in $H^1$; see \cite{MR0261183}.
In \cite{MR3400442}, Buckmaster and Koch proved the existence of an analogous a priori local-smoothing estimate one degree lower in regularity (on both sides). This is achieved by using a Miura-type map and adapting Kato's local smoothing estimate for mKdV to the presence of a kink. This technology is then used to prove the existence of global weak/distributional solutions to \eqref{KdV} with initial data in $H^{-1}({\mathbb{R}})$. (The nonlinearity may now be interpreted distributionally, because local smoothing guarantees that $q(t,x)$ is locally square integrable in space-time.) As is usual with the construction of weak solutions, the arguments do not yield uniqueness and continuity in time is only shown with respect to the weak topology. (Continuous dependence on the initial data is hopeless without first knowing uniqueness.) For a restricted class of $H^{-1}$ initial data (namely, that in the range of the traditional Miura map), the existence of weak solutions was shown earlier in \cite{MR2189502}; see also \cite{MR0990865}.
In Section~\ref{S:7} we will give a new derivation of the a priori local smoothing bound of \cite{MR3400442}. Our argument is based on the discovery of a new microscopic conservation law \eqref{E:l5.1h} adapted to regularity $H^{-1}({\mathbb{R}})$, which is then integrated against a suitably chosen weight function. It is not difficult to extend the a priori bound to the full class of solutions constructed in Theorem~\ref{T:main}. However, we are able to take the argument one step further and show the following (cf. Proposition~\ref{P:loc smoothing}):
\begin{theorem}\label{T:ls conv} Let $q$ and $\{q_n:n\in{\mathbb{N}}\}$ be solutions to \eqref{KdV} on the line in the sense of Theorem~\ref{T:main}. If the initial data obey $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$, then
\begin{equation}
\iint_K \bigl| q(t,x) - q_n(t,x)\bigr|^2\,dx\,dt \to 0\qtq{as} n\to\infty
\end{equation}
for every compact set $K\subset {\mathbb{R}}\times{\mathbb{R}}$.
\end{theorem}
It follows immediately from this result that the solutions we construct are indeed distributional solutions in the line case.
\subsection*{Acknowledgements} R. K. was supported, in part, by NSF grant DMS-1600942 and M. V. by grant DMS-1500707. We would also like to thank the referee, whose comments and questions led to the inclusion of Appendix~\ref{S:A}.
\subsection{Notation and Preliminaries}\label{S:1.1}
Many of the functions considered in this paper have numerous arguments. For example, the diagonal Green's function ultimately depends on the location in space $x$, an energy parameter $\kappa$, and the wave profile $q$, which itself depends on time. We find it advantageous for readability to suppress some of these dependencies from time to time.
We use prime solely to indicate derivatives in $x$; thus $f' =\partial_x f$.
Our conventions for the Fourier transform are as follows:
\begin{align*}
\hat f(\xi) = \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-i\xi x} f(x)\,dx \qtq{so} f(x) = \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x} \hat f(\xi)\,d\xi
\end{align*}
for functions on the line and
\begin{align*}
\hat f(\xi) = \int_0^1 e^{- i\xi x} f(x)\,dx \qtq{so} f(x) = \sum_{\xi\in 2\pi{\mathbb{Z}}} \hat f(\xi) e^{i\xi x}
\end{align*}
for functions on the circle ${\mathbb{R}}/{\mathbb{Z}}$. Concomitant with this, we define
\begin{align*}
\| f\|_{H^{s}({\mathbb{R}})}^2 = \int_{\mathbb{R}} |\hat f(\xi)|^2 (4+|\xi|^2)^s \,d\xi
\qtq{and}
\| f\|_{H^{s}({\mathbb{R}}/{\mathbb{Z}})}^2 = \sum_{\xi\in 2\pi{\mathbb{Z}}} (4+\xi^2)^s |\hat f(\xi)|^2 .
\end{align*}
The use of the number $4$ here rather than the more traditional $1$ has no meaningful effect on these Hilbert spaces (the norms are equivalent); however, this definition simplifies our exposition by making certain key relations exact identities. More generally, we define
\begin{align*}
\| f\|_{H^{s}_\kappa({\mathbb{R}})}^2 = \int_{\mathbb{R}} |\hat f(\xi)|^2 (4\kappa^2+|\xi|^2)^s \,d\xi
\qtq{and}
\| f\|_{H^{s}_\kappa({\mathbb{R}}/{\mathbb{Z}})}^2 = \sum_{\xi\in 2\pi{\mathbb{Z}}} (4\kappa^2+\xi^2)^s |\hat f(\xi)|^2 .
\end{align*}
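To give one example of such an exact identity: with these conventions, the key Hilbert--Schmidt computation \eqref{R I2} below takes the form
$$
\Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{{\mathfrak{I}}_2({\mathbb{R}})} = \tfrac{1}{\kappa}\, \| q \|_{H^{-1}_\kappa({\mathbb{R}})}^2,
$$
with no extraneous constants; here $R_0=(\kappa^2-\partial_x^2)^{-1}$ as in \eqref{R resolvent}.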
Note that $H^{1}_\kappa$ is an algebra in either geometry. Indeed, one readily sees that
$$
\| f g \|_{H^{1}_\kappa} \lesssim \| f \|_{H^{1}} \| g \|_{H^{1}_\kappa} \leq \| f \|_{H^{1}_\kappa} \| g \|_{H^{1}_\kappa} \quad\text{uniformly for $\kappa\geq 1$.}
$$
By duality, this implies that
$$
\| f h \|_{H^{-1}_\kappa} \lesssim \| f \|_{H^{\vphantom{+}1}_{\vphantom{\kappa}}} \| h \|_{H^{-1}_\kappa} \quad\text{uniformly for $\kappa\geq 1$.}
$$
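Indeed, the duality step amounts to writing the $H^{-1}_\kappa$ norm via the $L^2$ pairing and applying the algebra estimate just displayed:
$$
\| f h \|_{H^{-1}_\kappa} = \sup_{\|g\|_{H^{1}_\kappa}\leq 1} \Bigl|\int f(x)\, h(x)\, g(x)\,dx\Bigr|
\leq \sup_{\|g\|_{H^{1}_\kappa}\leq 1} \| h \|_{H^{-1}_\kappa}\, \| f g \|_{H^{1}_\kappa}
\lesssim \| f \|_{H^{1}}\, \| h \|_{H^{-1}_\kappa}.
$$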
Throughout the paper, we will employ the $L^2$ pairing. This informs our identification of $H^{-1}$ and $H^1$ as dual spaces and our notation for functional derivatives:
\begin{equation}\label{derivative}
\frac{d\ }{ds}\biggr|_{s=0} F(q+sf) = dF\bigl|_q (f) = \int \frac{\delta F}{\delta q}(x) f(x)\,dx .
\end{equation}
We write ${\mathfrak{I}}_p$ for the Schatten class of compact operators whose singular values are $\ell^p$ summable. In truth, we shall use the Hilbert--Schmidt class ${\mathfrak{I}}_2$ almost exclusively. When we do use ${\mathfrak{I}}_1$, it will only be as a notation for products of Hilbert--Schmidt operators; see \eqref{I1 from I2}. Let us quickly recall several facts about the class ${\mathfrak{I}}_2$ that we will use repeatedly: An operator $A$ on $L^2({\mathbb{R}})$ is Hilbert--Schmidt if and only if it admits an integral kernel $a(x,y)\in L^2({\mathbb{R}}\times{\mathbb{R}})$; moreover,
\begin{align*}
\| A \|_{L^2\to L^2} \leq \| A \|_{{\mathfrak{I}}_2} \qtq{and} \| A \|_{{\mathfrak{I}}_2}^2 = \iint |a(x,y)|^2\,dx\,dy.
\end{align*}
The product of two Hilbert--Schmidt operators is trace class; moreover,
\begin{align*}
\tr(AB) := \iint a(x,y)b(y,x)\,dy\,dx = \tr(BA) \qtq{and} |\tr(AB)| \leq \| A \|_{{\mathfrak{I}}_2} \| B \|_{{\mathfrak{I}}_2}.
\end{align*}
Lastly, Hilbert--Schmidt operators form a two-sided ideal in the algebra of bounded operators; indeed,
\begin{align*}
\| B A C \|_{{\mathfrak{I}}_2} \leq \| B \|_{L^2\to L^2} \| A \|_{{\mathfrak{I}}_2}\| C \|_{L^2\to L^2}.
\end{align*}
All of this (and much more) is explained very clearly in \cite{MR2154153}.
For the arguments presented here, the problem \eqref{KdV} posed on the circle is more favorably interpreted as a problem on the whole line with periodic initial data. Correspondingly, even in this case, we will be dealing primarily with operators on the whole line, albeit with periodic coefficients. When we do need to discuss operators acting on the circle ${\mathbb{R}}/{\mathbb{Z}}$ in connection with prior work, these will be distinguished by the use of calligraphic font.
\section{Diagonal Green's function}\label{S:2}
The goal of this section is to discuss the Green's function $G(x,y)$ associated to the whole-line Schr\"odinger operator
$$
L := -\partial_x^2 + q
$$
for potentials
\begin{align}\label{B delta}
q \in B_\delta := \{ q \in H^{-1}({\mathbb{R}}) : \| q \|_{H^{-1}({\mathbb{R}})} \leq \delta\}
\end{align}
and $\delta$ small. Particular attention will be paid to the diagonal $g(x):=G(x,x)$ and its reciprocal $1/g(x)$; the latter appears in the energy density associated to the key microscopic conservation law for KdV.
Let us briefly recall one key fact associated to the Schr\"odinger operator with $q\equiv 0$: The resolvent
\begin{align}\label{R resolvent}
R_0(\kappa) = (-\partial^2_x + \kappa^2)^{-1} \qtq{has integral kernel} G_0(x,y;\kappa) = \tfrac{1}{2\kappa} e^{-\kappa|x-y|}
\end{align}
for all $\kappa>0$.
\begin{prop}\label{P:sa L}
Given $q\in H^{-1}({\mathbb{R}})$, there is a unique self-adjoint operator $L$ associated to the quadratic form
$$
\psi \mapsto \int |\psi'(x)|^2 + q(x) |\psi(x)|^2\,dx \qtq{with domain} H^1({\mathbb{R}}).
$$
It is semi-bounded. Moreover, for $\delta\leq \frac12$ and $q\in B_\delta$, the resolvent is given by the norm-convergent series
\begin{align}\label{E:R series}
R := (L+\kappa^2)^{-1} = \sum_{\ell=0}^\infty (-1)^\ell \sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0}
\end{align}
for all $\kappa\geq 1$.
\end{prop}
\begin{proof}
The key estimate on which all rests is the following:
\begin{align}\label{R I2}
\Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{L^2\to L^2} \leq \Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{\mathfrak I_2({\mathbb{R}})} &= \frac1\kappa \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi .
\end{align}
For $q\in \mathcal S({\mathbb{R}})$ the Hilbert--Schmidt norm can be evaluated directly using \eqref{R resolvent}:
\begin{align*}
\Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|^2_{\mathfrak I_2({\mathbb{R}})} &= \frac1{4\kappa^2} \iint q(x) e^{-2\kappa|x-y|}q(y)\,dx\,dy = \text{RHS\eqref{R I2}}.
\end{align*}
This then extends to all $q\in H^{-1}({\mathbb{R}})$ by approximation.
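The evaluation above rests on the elementary Fourier identity
$$
\int_{\mathbb{R}} e^{-2\kappa|x|}\, e^{-i\xi x}\,dx = \frac{4\kappa}{\xi^2+4\kappa^2},
$$
combined with Plancherel's theorem; indeed, $\tfrac{1}{4\kappa^2}\cdot\tfrac{4\kappa}{\xi^2+4\kappa^2} = \tfrac{1}{\kappa(\xi^2+4\kappa^2)}$, which reproduces RHS\eqref{R I2} exactly.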
From \eqref{R I2}, we see that
\begin{align*}
\Bigl| \int q(x)|\psi(x)|^2 \,dx \Bigr| \leq \kappa^{-1/2} \|q\|_{H^{-1}} \int |\psi'(x)|^2 + \kappa^2 |\psi(x)|^2\,dx \qtq{for all} \psi \in H^1({\mathbb{R}}),
\end{align*}
at least for all $\kappa\geq 1$. (Note that the LHS here should be interpreted via the natural pairing between $H^{-1}$, which contains $q$, and $H^1$, which contains $|\psi|^2$.) This estimate shows that
$q$ is an infinitesimally form-bounded perturbation of the case $q\equiv 0$ and so the existence and uniqueness of $L$ follows from \cite[Theorem X.17]{MR0493420}.
In view of \eqref{R I2}, the series \eqref{E:R series} converges in operator norm whenever $\delta<1$; in particular, this holds for $\delta\leq\frac12$.
\end{proof}
\begin{prop}[Diffeomorphism property]\label{P:diffeo} There exists $\delta>0$ so that the following are true for all $\kappa\geq 1${\upshape:}\\
(i) For each $q\in B_\delta$, the resolvent $R$ admits a continuous integral kernel $G(x,y;\kappa,q)${\upshape;} thus, we may unambiguously define
\begin{align}\label{g defn}
g(x;\kappa,q):=G(x,x;\kappa,q).
\end{align}
(ii) The mappings
\begin{align}\label{diffeos}
q\mapsto g-\tfrac1{2\kappa} \qtq{and} q\mapsto \kappa-\tfrac1{2g}
\end{align}
are (real analytic) diffeomorphisms of $B_\delta$ into $H^1({\mathbb{R}})$.\\
(iii) If $q(x)$ is Schwartz, then so are $g(x)- \tfrac1{2\kappa}$ and $\kappa-\tfrac1{2g(x)}$. Indeed,
\begin{align}\label{g stronger mapping}
\|g'(x)\|_{H^{s}} \lesssim_s \| q \|_{H^{s-1}} \qtq{and} \| \langle x\rangle ^s g'(x) \|_{L^2} \lesssim_s \| \langle x\rangle ^s q \|_{H^{-1}}
\end{align}
for every integer $s\geq 0$.
\end{prop}
\begin{remark}
The diffeomorphism property is necessarily restricted to a neighborhood of the origin because for $q$ large, $-\kappa^2$ may belong to the spectrum of $L$.
\end{remark}
\begin{proof}
Initially, we ask that $\delta\leq \frac12$; later, we will add further restrictions.
From \eqref{E:R series} and \eqref{R I2}, we see that
\begin{align*}
\Bigl\| \sqrt{\kappa^2-\partial_x^2} \bigl(R - R_0\bigr) \sqrt{\kappa^2-\partial_x^2} \Bigr\|_{{\mathfrak{I}}_2} < \infty \qtq{for all} q\in B_\delta \qtq{and all} \kappa\geq 1.
\end{align*}
Consequently, $G-G_0$ exists as an element of $H^1({\mathbb{R}})\otimes H^1({\mathbb{R}})$. Here we mean tensor product in the Hilbert-space sense (cf. \cite{MR0493419}); note that $H^1({\mathbb{R}})\otimes H^1({\mathbb{R}})$ consists of those $f\in H^1({\mathbb{R}}^2)$ for which $\partial_x\partial_yf \in L^2({\mathbb{R}}^2)$. It follows that $G(x,y;\kappa,q)$ is a continuous function of $x$ and $y$ and we may define
\begin{align}\label{E:g series}
g(x) = g(x;\kappa,q) = \tfrac1{2\kappa} + \sum_{\ell=1}^\infty (-1)^\ell \Bigl\langle\sqrt{R_0}\delta_x,\ \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0} \delta_x\Bigr\rangle,
\end{align}
where inner products are taken in $L^2({\mathbb{R}})$. This settles (i).
Next we observe that by \eqref{E:R series} and \eqref{R I2},
\begin{align*}
\Bigl| \int f(x) \bigl[g(x) -\tfrac1{2\kappa}\bigr]\,dx \Bigr| &\leq \sum_{\ell=1}^\infty \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}\Bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}^\ell \\
&\leq 2\delta\kappa^{-1} \|f\|_{H^{-1}({\mathbb{R}})}
\end{align*}
for any Schwartz function $f$. Thus $g-\frac1{2\kappa}\in H^1({\mathbb{R}})$; indeed,
\begin{align}\label{g H1 bound}
\bigl\| g - \tfrac1{2\kappa} \bigr\|_{H^1({\mathbb{R}})} \leq 2\delta\kappa^{-1}.
\end{align}
Moreover, this argument shows that the series \eqref{E:g series} converges in $H^1({\mathbb{R}})$ and, consequently, that the mapping from $q\in B_\delta$ to $g-\frac1{2\kappa}\in H^{1}({\mathbb{R}})$ is real analytic.
Given $f\in H^{-1}({\mathbb{R}})$, the resolvent identity implies
\begin{align}\label{O35pp}
\frac{d\ }{ds}\biggr|_{s=0} g(x;q+sf) = - \int G(x,y)f(y)G(y,x)\,dy .
\end{align}
In particular, by \eqref{R resolvent},
$$
dg\bigr|_{q\equiv 0} = - \kappa^{-1} R_0(2\kappa),
$$
which is an isomorphism of $H^{-1}_\kappa$ onto $H^1_\kappa$, with condition number equal to $1$. Moreover, by \eqref{R I2}, \eqref{E:R series}, and duality,
\begin{align}\label{inverse input}
\bigl\| dg\bigr|_{q\equiv 0} - dg\bigr|_q \bigr\|_{H^{-1}_\kappa \to H^{1\vphantom{+}}_\kappa} \lesssim \kappa^{-1} \|q\|_{H^{-1}_\kappa}
\lesssim \delta \Bigl\| \bigl(dg\bigr|_{q\equiv 0} \bigr)^{-1} \Bigr\|_{H^{1\vphantom{+}}_\kappa \to H^{-1}_\kappa}^{-1} .
\end{align}
Thus choosing $\delta$ sufficiently small, the inverse function theorem guarantees that
\begin{align}\label{g is diffeo}
q \mapsto g - \tfrac{1}{2\kappa} \quad\text{is a diffeomorphism of $\{ q : \|q\|_{H^{-1}_\kappa} \leq \delta\}$ into $H^{1}_\kappa$}.
\end{align}
Note that \eqref{inverse input} combined with the standard contraction-mapping proof of the implicit function theorem guarantees that $\delta$ can be chosen independently of $\kappa$. The claimed $H^{-1}\to H^1$ diffeomorphism property of this map then follows since
$$
\|q\|_{H^{-1}_\kappa} \leq \|q\|_{H^{-1}} \qtq{and} \| f\|_{H^1_\kappa} \lesssim_\kappa \|f\|_{H^1}.
$$
Choosing $\delta$ even smaller if necessary, \eqref{g H1 bound} together with the embedding $H^1\hookrightarrow L^\infty$ guarantees that
$$
\tfrac1{4\kappa} \leq g(x) \leq \tfrac{3}{4\kappa} \quad\text{for all $q\in B_\delta$.}
$$
Consequently, the second mapping in \eqref{diffeos} is also real-analytic. To prove that it is a diffeomorphism (for some $\kappa$-independent choice of $\delta$), we simply note that
$$
f \mapsto \frac{f}{1+f}
$$
is a diffeomorphism from a neighbourhood of zero in $H^1({\mathbb{R}})$ into $H^1({\mathbb{R}})$, write
$$
\kappa-\tfrac1{2g} = \kappa \frac{2\kappa(g-\frac1{2\kappa} )}{1+2\kappa(g-\frac1{2\kappa} )},
$$
and use \eqref{g H1 bound} together with \eqref{g is diffeo}.
We now turn our attention to part (iii). The Green's function associated to a translated potential is simply the translation of the original Green's function. Correspondingly,
\begin{align}\label{translation identity}
g(x+h;q) = g\bigl(x; q(\cdot+h)\bigr) \qquad\text{for all $h\in{\mathbb{R}}$.}
\end{align}
Differentiating $s$ times with respect to $h$ at $h=0$ and invoking \eqref{E:g series} yields
\begin{align*}
\Bigl|\int \bigl[\partial_x^s g(x)\bigr] f(x)\,dx\Bigr| &\leq \sum_{\ell=1}^\infty \sum_{\sigma} \binom{s}{\sigma} \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}
\prod_{k=1}^\ell \Bigl\| \sqrt{R_0}\, q^{(\sigma_k)}\, \sqrt{R_0} \Bigr\|_{\mathfrak I_2({\mathbb{R}})}.
\end{align*}
Here, the inner sum extends over multi-indices $\sigma=(\sigma_1,\ldots,\sigma_\ell)$ with $|\sigma|=s$. Maximizing over unit vectors $f\in H^{-1}$, exploiting \eqref{R I2}, and using
$$
\prod_{k=1}^\ell \bigl\| q^{(\sigma_k)} \bigr\|_{H^{-1}} \leq \bigl\| q^{(s)} \bigr\|_{H^{-1}} \bigl\| q \bigr\|_{H^{-1}}^{\ell-1},
$$
which is merely an application of H\"older's inequality in Fourier variables, this yields
\begin{align*}
\bigl\|\partial_x^s g(x)\bigr\|_{H^{1}} &\leq \sum_{\ell=1}^\infty \ell^s \bigl\| q^{(s)} \bigr\|_{H^{-1}} \delta^{\ell-1} \lesssim_s \bigl\| q\bigr\|_{H^{s-1}}.
\end{align*}
Thus we have verified the first claim in \eqref{g stronger mapping}.
To address the second assertion in \eqref{g stronger mapping}, we first make the following claim: For every integer $s\geq 0$,
\begin{align}\label{R langle commutator}
\langle x\rangle^s R_0 = \sum_{r=0}^s \sqrt{R_0}\, A_{r,s} \sqrt{R_0}\, \langle x\rangle^r
\qtq{with operators} \| A_{r,s} \|_{L^2\to L^2}\lesssim_s 1.
\end{align}
This is easily verified recursively, by repeatedly using the following commutators:
\begin{equation}\label{basic commutators}
\begin{aligned}
\bigl[ \langle x\rangle,\ R_0\bigr] = R_0 \bigl[ -\partial_x^2 +\kappa^2 ,\ \langle x\rangle \bigr] R_0 &= -R_0 \bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr) R_0 \\
\bigl[ \tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle},\ \langle x\rangle\bigr] &= 2\tfrac{x^2}{\langle x\rangle^2}.
\end{aligned}
\end{equation}
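To illustrate the start of the recursion, the case $s=1$ of \eqref{R langle commutator} follows from the first identity in \eqref{basic commutators}:
$$
\langle x\rangle R_0 = R_0\langle x\rangle - R_0 \bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr) R_0
= \sqrt{R_0}\, A_{1,1} \sqrt{R_0}\, \langle x\rangle + \sqrt{R_0}\, A_{0,1} \sqrt{R_0}
$$
with $A_{1,1}=\mathrm{Id}$ and $A_{0,1} = -\sqrt{R_0}\bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr)\sqrt{R_0}$, which is bounded uniformly for $\kappa\geq 1$ because $\|\partial_x \sqrt{R_0}\,\|_{L^2\to L^2}\leq 1$ and $\bigl|\tfrac{x}{\langle x\rangle}\bigr|\leq 1$.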
In connection with \eqref{R langle commutator}, let us also pause to note that
\begin{align}\label{E:weight change}
\bigl\| \langle x\rangle^r q \bigr\|_{H^{-1}} \lesssim_{s} \bigl\| \langle x\rangle^s q \bigr\|_{H^{-1}} \quad\text{for any pair of integers $0\leq r\leq s$},
\end{align}
since $\langle x\rangle^{-1}\in H^1({\mathbb{R}})$, which is an algebra.
By applying \eqref{E:g series}, \eqref{R I2}, \eqref{R langle commutator}, and \eqref{E:weight change}, we deduce that
\begin{align*}
\int f(x) & \langle x\rangle^s \bigl[g(x) -\tfrac{1}{2\kappa} \bigr]\,dx \\
&\lesssim_s \sum_{\ell=1}^\infty \sum_{r=0}^s \Bigl\| \sqrt{R_0}\, f\, \sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \Bigl\| \sqrt{R_0}\, \langle x\rangle^r q\, \sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \delta^{\ell-1} \\
&\lesssim_s \bigl\| f \bigr\|_{H^{-1}} \bigl\| \langle x\rangle^s q \bigr\|_{H^{-1}}.
\end{align*}
Optimizing over $f\in H^{-1}({\mathbb{R}})$, it then follows that
$$
\| \langle x\rangle^s g'(x) \|_{L^2({\mathbb{R}})} \lesssim_s \bigl\| \langle x\rangle^s \bigl[g(x) -\tfrac{1}{2\kappa} \bigr] \bigr\|_{H^1({\mathbb{R}})} \lesssim_s \| \langle x\rangle ^s q \|_{H^{-1}},
$$
thereby completing the proof of \eqref{g stronger mapping} and so the proof of the proposition.
\end{proof}
\begin{prop}[Elliptic PDE]\label{P:elliptic}
The diagonal Green's function obeys
\begin{align}
g'''(x) &= 2 \bigl[q(x) g(x) \bigr]' + 2 q(x) g'(x) + 4\kappa^2 g'(x) .\label{E:l5.1a}
\end{align}
\end{prop}
\begin{proof}
By virtue of being the Green's function,
\begin{align*}
\bigl( -\partial_x^2 + q(x) \bigr) G(x,y) = -\kappa^2 G(x,y) + \delta(x-y) = \bigl( -\partial_y^2 + q(y) \bigr) G(x,y)
\end{align*}
and consequently,
\begin{align*}
\bigl( \partial_x + \partial_y\bigr)^3 G(x,y) &= \bigl(q'(x)+q'(y)\bigr)G(x,y) + 2\bigl(q(x)+q(y)\bigr) \bigl( \partial_x + \partial_y\bigr) G(x,y) \\
&\qquad - \bigl(q(x)-q(y)\bigr) \bigl( \partial_x - \partial_y\bigr) G(x,y) +4\kappa^2\bigl(\partial_x + \partial_y\bigr) G(x,y).
\end{align*}
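Before setting $y=x$, let us record why this is permissible: while $\partial_x G$ and $\partial_y G$ individually jump across the diagonal, these jumps cancel in the combination $\partial_x+\partial_y$, so that $g^{(k)}(x) = \bigl[(\partial_x+\partial_y)^k G\bigr](x,x)$. This can be seen, for Schwartz $q$, from the representation \eqref{G from psi} proved below: in terms of the Weyl solutions $\psi_\pm$,
\begin{align*}
\bigl[(\partial_x+\partial_y) G\bigr](x,y) = \psi_+'(x\vee y)\,\psi_-(x\wedge y) + \psi_+(x\vee y)\,\psi_-'(x\wedge y),
\end{align*}
which is continuous across the diagonal; higher powers of $\partial_x+\partial_y$ are handled in the same way, and the general case then follows by the analyticity argument used in the proof of Lemma~\ref{L:D 1/g}.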
Thus specializing to $y=x$, we deduce that
$$
g'''(x) = 2 q'(x) g(x) + 4 q(x) g'(x) + 4 \kappa^2 g'(x),
$$
which agrees with \eqref{E:l5.1a} after regrouping terms.
\end{proof}
\begin{remark}
As will be discussed in the proof of Lemma~\ref{L:D 1/g}, the Green's function can be expressed in terms of two solutions $\psi_\pm(x)$ to the Sturm--Liouville equation (the Weyl solutions); see \eqref{G from psi}.
In this sense, $g(x)=\psi_+(x)\psi_-(x)$ was seen to obey \eqref{E:l5.1a} already in \cite{Appell}.
\end{remark}
\begin{prop}[Introducing $\rho$]\label{P:Intro rho}
There exists $\delta>0$ so that
\begin{align}\label{E:rho defn}
\rho(x;\kappa,q) := \kappa - \tfrac{1}{2g(x)} + \tfrac12\int e^{-2\kappa|x-y|} q(y)\,dy
\end{align}
belongs to $L^1({\mathbb{R}}) \cap H^1({\mathbb{R}})$ for all $q\in B_\delta$ and $\kappa\geq 1$. Moreover, fixing $x\in{\mathbb{R}}$, the map $q\mapsto \rho(x)$ is non-negative and convex. Additionally,
\begin{align}\label{E:alpha defn}
\alpha(\kappa;q) := \int_{\mathbb{R}} \rho(x)\,dx
\end{align}
defines a non-negative, real-analytic, strictly convex function of $q\in B_\delta$, and satisfies
\begin{align}\label{alpha as I2}
\alpha(\kappa;q) \approx \frac{1}{\kappa} \int_{\mathbb{R}} \frac{|\hat q(\xi)|^2\,d\xi}{\xi^2+4\kappa^2},
\end{align}
uniformly for $q\in B_\delta$ and $\kappa\geq 1$. Lastly,
\begin{align}\label{O37}
\alpha(\kappa;q) = - \log\det_2\left( 1+ \sqrt{R_0}\, q \, \sqrt{R_0} \right).
\end{align}
\end{prop}
\begin{remarks}
1. Although we shall have no use for the strict convexity of $q\mapsto\alpha(\kappa;q)$ in this paper, it does have important consequences. Most notably, by the Radon--Riesz argument, it shows that weakly continuous solutions conserving $\alpha(\kappa)$ are automatically norm-continuous.
2. As noted in the Introduction (see \eqref{Intro renorm} and subsequent discussion), the quantity $\alpha(\kappa;q)$ is essentially the logarithm of the transmission coefficient and so well-studied. Nevertheless, none of the literature we have studied contains the representation \eqref{E:alpha defn} in terms of the reciprocal of the Green's function. Rather, prior works employ an integral representation based on the logarithmic derivative of one of the Jost solutions; see \eqref{E:a from Weyl}. To the best of our knowledge, this approach originates in \cite[\S3]{MR0303132}, where it was shown to be an effective tool for deriving polynomial conservation laws and for demonstrating that these polynomial conservation laws appear as coefficients in the asymptotic expansion of the logarithm of the transmission coefficient as $\kappa\to\infty$.
\end{remarks}
Before turning to the proof of Proposition~\ref{P:Intro rho}, we first explain the meaning of RHS\eqref{O37} and then present two lemmas that we shall need.
The symbol $\det_2$ denotes the renormalized Fredholm determinant introduced by Hilbert in \cite{Hilbert}; see \cite{MR2154153} for a more up-to-date exposition. In the context of Proposition~\ref{P:Intro rho}, our choice of $\delta$ guarantees that the operator
$$
A = \sqrt{R_0}\, q \, \sqrt{R_0} \qtq{obeys} \| A \|_{{\mathfrak{I}}_2} < 1.
$$
Consequently, it suffices for what follows to exploit only the notion of the trace of an operator (rather than determinant) thanks to the identity
\begin{align}\label{det series}
-\log \det_2 \bigl(1 + A \bigr) = \tr\bigl( A - \log(1+A) \bigr) = \sum_{\ell=2}^\infty \frac{(-1)^\ell}{\ell} \tr\bigl( A^\ell \bigr).
\end{align}
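In particular, as $q$ is real-valued, the operator $A$ is self-adjoint and the leading $\ell=2$ term of this series is
\begin{align*}
\tfrac12 \tr\bigl( A^2 \bigr) = \tfrac12 \bigl\| \sqrt{R_0}\, q\, \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2}^2,
\end{align*}
while the terms with $\ell\geq 3$ are smaller by factors of $\|A\|_{{\mathfrak{I}}_2}\lesssim\delta$. In view of \eqref{R I2}, this makes the equivalence \eqref{alpha as I2} transparent, granted the identity \eqref{O37}.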
We shall not delve deeply into such matters here, since \eqref{O37} has no bearing on the proof of well-posedness for KdV; indeed, our only reason for verifying this identity is to make the link to the prior works \cite{KVZ,MR2683250}, which might otherwise seem unrelated.
\begin{lemma}\label{L:D 1/g}
There exists $\delta>0$ so that
\begin{align}\label{GgG identity}
\int \frac{G(x,y;\kappa,q)G(y,x;\kappa,q)}{2g(y;\kappa,q)^2}\,dy = g(x;\kappa,q)
\end{align}
for all $q\in B_\delta$ and all $\kappa\geq 1$.
\end{lemma}
\begin{remark}
Augmenting the proof below with the results of \cite[\S8.3]{MR0069338} shows that \eqref{GgG identity} holds also in the case of $q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$ and $\kappa\geq 1$ that obey \eqref{periodic smallness}. As below, one first uses analyticity to reduce to a case where one may apply ODE techniques, more specifically, to the case of small smooth periodic potentials.
\end{remark}
\begin{proof}
We choose $\delta>0$ as needed for Proposition~\ref{P:diffeo}. In this case, both sides of \eqref{GgG identity} are analytic functions of $q$. Consequently, it suffices to prove the result under the additional hypotheses that $q$ is Schwartz and $\|q\|_{L^\infty}<1$.
Techniques in Sturm--Liouville theory (cf. \cite[\S3.8]{MR0069338}) show that there are solutions $\psi_\pm(x)$ to
\begin{align}\label{ODE}
-\psi'' + q \psi = -\kappa^2 \psi
\end{align}
that decay (along with derivatives) exponentially as $x\to\pm\infty$ and grow exponentially as $x\to\mp\infty$. Constancy of the Wronskian guarantees that these Weyl solutions (as they are known) are unique up to scalar multiples; we (partially) normalize them by requiring the Wronskian relation
\begin{equation}\label{E:Wron}
\psi_+(x) \psi_-'(x) - \psi_+'(x)\psi_-(x) = 1
\end{equation}
and that $\psi_\pm(x) >0$. Note that the Sturm oscillation theorem guarantees that neither solution may change sign.
Using the Weyl solutions, we may write the Green's function as
\begin{align}\label{G from psi}
G(x,y) = \psi_+(x\vee y) \psi_-(x\wedge y).
\end{align}
In this way, the proof of the lemma reduces to showing that
\begin{align}\label{pre FTC}
\tfrac12 \int_{-\infty}^x \Bigl[\tfrac{\psi_+(x)}{\psi_+(y)}\Bigr]^2 \,dy + \tfrac12 \int_x^\infty \Bigl[\tfrac{\psi_-(x)}{\psi_-(y)}\Bigr]^2 \,dy = \psi_+(x)\psi_-(x).
\end{align}
However, by \eqref{E:Wron}, we have
\begin{align*}
\tfrac{d\ }{dy} \tfrac{\psi_-(y)}{\psi_+(y)} = \tfrac{1}{\psi_+(y)^2} \qtq{and} \tfrac{d\ }{dy} \tfrac{\psi_+(y)}{\psi_-(y)} = - \tfrac{1}{\psi_-(y)^2}.
\end{align*}
Thus \eqref{pre FTC} follows by the fundamental theorem of calculus and the exponential behavior of $\psi_\pm(y)$, as $|y|\to \infty$.
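Explicitly, since $\psi_-/\psi_+\to 0$ as $y\to-\infty$ and $\psi_+/\psi_-\to 0$ as $y\to+\infty$, the fundamental theorem of calculus yields
\begin{align*}
\tfrac12 \int_{-\infty}^x \Bigl[\tfrac{\psi_+(x)}{\psi_+(y)}\Bigr]^2 dy = \tfrac12\, \psi_+(x)^2\, \tfrac{\psi_-(x)}{\psi_+(x)} = \tfrac12\psi_+(x)\psi_-(x),
\end{align*}
and likewise for the integral over $(x,\infty)$; summing the two contributions gives \eqref{pre FTC}.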
\end{proof}
\begin{remark}
As mentioned above, there is an alternate integral representation of $\alpha(\kappa;q)$ introduced much earlier. The proof of Lemma~\ref{L:D 1/g} provides the requisite vocabulary to explain what that is:
\begin{align}\label{E:a from Weyl}
\log[a(i\kappa)] = - \int \tfrac{\psi_+'(y)}{\psi_+(y)} + \kappa \,dy = \int \tfrac{\psi_-'(y)}{\psi_-(y)} - \kappa \,dy.
\end{align}
Here $\psi_\pm$ represent the Weyl solutions; however, the formula applies equally well using the Jost solutions, since they differ only in normalization. It is in this equivalent form that the first identity appears in \cite[\S3]{MR0303132}.
Averaging these two representations and invoking \eqref{E:Wron} and then \eqref{G from psi} yields
\begin{align}\label{E:a from little g}
\log[a(i\kappa)] = \int \tfrac{1}{2\psi_-(y)\psi_+(y)} - \kappa \,dy = \int \tfrac{1}{2g(y)} - \kappa\,dy,
\end{align}
which is readily seen to be equivalent to \eqref{E:alpha defn}. One easy way to distinguish these three representations is the fact that $\psi_+(y)$ depends only on the values of $q$ on the interval $[y,\infty)$, while $\psi_-(y)$ is determined by $q$ on the interval $(-\infty,y]$; on the other hand, $g(y)$ depends on the values of $q$ throughout the real line.
\end{remark}
The following identity will be used not only in the proof of Proposition~\ref{P:Intro rho}, but also in Section~\ref{S:3}.
\begin{lemma}\label{L:G ibp} Given Schwartz functions $f$ and $q$,
\begin{align*}
& \int G(x,y;\kappa,q) \bigl[ - f'''(y) + 2q(y)f'(y) + 2\bigl(q(y)f(y)\bigr)'+4\kappa^2 f'(y)\bigr]G(y,x;\kappa,q)\,dy \\
&= 2 f'(x)g(x;\kappa,q) - 2f(x)g'(x;\kappa,q).
\end{align*}
This identity also holds if merely $f(x)-c$ is Schwartz for some constant $c$.
\end{lemma}
\begin{proof}
The argument that follows applies equally well irrespective of the presence/absence of the constant $c$. Alternately, as both sides of the identity are linear in $f$, the cases $f$ Schwartz and $f$ constant can be treated separately. However, when $f$ is constant the identity can be obtained more swiftly by other means; see \eqref{translation identity'}.
The most elementary proof proceeds from the defining property of $G$, namely,
$$
\bigl(-\partial_y^2 + q(y) + \kappa^2\bigr)G(y,x) = \bigl(-\partial_y^2 + q(y) + \kappa^2\bigr)G(x,y) = \delta(x-y)
$$
and integration by parts. However, we find the argument more palatable when presented in terms of operator identities. Specifically, from the operator identity
\begin{align*}
- f''' &= (-\partial^2+\kappa^2)f' + f' (-\partial^2+\kappa^2) - 2(-\partial^2+\kappa^2)f\partial + 2\partial f(-\partial^2+\kappa^2) - 4\kappa^2f',
\end{align*}
it follows that
\begin{align*}
- R f''' R = f'R - 2 R qf' R + Rf' - 2f\partial R - 2R[\partial,qf]R + 2 R\partial f - 4\kappa^2 Rf'R.
\end{align*}
Noting, for example, that
$$
g'(x) = \langle\delta_x, [\partial,R]\delta_x\rangle,
$$
the lemma then follows by considering the diagonal of the associated integral kernel.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:Intro rho}]
By \eqref{R resolvent},
$$
\tfrac12\int e^{-2\kappa|x-y|} q(y)\,dy = 2\kappa [R_0(2\kappa) q](x).
$$
Combined with Proposition~\ref{P:diffeo}, this shows $\rho\in H^1({\mathbb{R}})$. Next we write
$$
\rho(x) = 2\kappa^2 \bigl[g - \tfrac1{2\kappa} + \tfrac1{\kappa} R_0(2\kappa) q\bigr](x) - \tfrac{2\kappa^2}{g(x)}[g(x)-\tfrac1{2\kappa}]^2 .
$$
The second summand belongs to $L^1({\mathbb{R}})$ by Proposition~\ref{P:diffeo}; thus it remains to consider the first summand. To this end, we use \eqref{E:g series} and \eqref{R I2} to obtain
\begin{align}
\int \bigl[g - \tfrac1{2\kappa} + \tfrac1{\kappa} R_0(2\kappa) q\bigr](x) f(x)\,dx &= \sum_{\ell=2}^\infty (-1)^\ell \tr\Bigl\{\sqrt{R_0} f \sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \Bigr\} \notag\\
&\leq \| f \|_{L^\infty} \Bigl\|\sqrt{R_0}\Bigr\|_{op}^2 \Bigl\| \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr\|^2_{{\mathfrak{I}}_2} \sum_{\ell=2}^\infty \delta^{\ell-2},\label{E:L1 est}
\end{align}
from which we may conclude that $\rho\in L^1({\mathbb{R}})$. Note that the arguments just presented actually show that $q\mapsto\rho$ is real analytic as a mapping of $B_\delta$ into $L^1\cap H^1$.
To show convexity at fixed $x$, we compute derivatives. As in \eqref{O35pp}, the resolvent identity guarantees that
\begin{align}\label{drho}
d[\rho(x)]\bigr|_q (f) = \tfrac{-1}{2g(x)^2} \int G(x,y) f(y) G(y,x) \,dy + \tfrac12 [e^{-2\kappa|\cdot|} * f ](x)
\end{align}
and thence
\begin{align}\label{ddrho}
d^2[\rho(x)]\bigr|_q (f,h) = {}&{}\tfrac{-1}{g(x)^3} \iint G(x,y) f(y) G(y,x) G(x,z) h(z) G(z,x) \,dy\,dz \\
& + \tfrac{1}{g(x)^2} \iint G(x,y) f(y) G(y,z) h(z) G(z,x) \,dy\,dz. \notag
\end{align}
Multiplying through by $g(x)^3>0$ we then see that the convexity of $\rho(x)$ is reduced to the assertion that
$$
\bigl\langle \sqrt{R} \delta_x,\sqrt{R} \delta_x\bigr\rangle \bigl\langle \sqrt{R} \delta_x,\sqrt{R}fRf\sqrt{R}\,\sqrt{R} \delta_x\bigr\rangle
- \bigl\langle \sqrt{R} \delta_x,\sqrt{R}f\sqrt{R}\, \sqrt{R} \delta_x\bigr\rangle^2 \geq 0
$$
for all $f\in H^{-1}({\mathbb{R}})$. (Here inner-products are taken in $L^2({\mathbb{R}})$ which contains $\sqrt{R} \delta_x$.) The veracity of this assertion now follows immediately from the Cauchy--Schwarz inequality.
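In concrete terms, writing $u := \sqrt{R}\,\delta_x$ and $B := \sqrt{R}\, f \sqrt{R}$, which is self-adjoint for real-valued $f$, the quantity above is exactly
\begin{align*}
\| u \|^2 \, \| B u \|^2 - \langle u, B u\rangle^2,
\end{align*}
whose non-negativity is precisely the Cauchy--Schwarz inequality applied to the vectors $u$ and $Bu$.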
Specializing \eqref{drho} to $q\equiv 0$ and substituting \eqref{R resolvent} shows
\begin{align}\label{d rho 0}
\frac{\delta\rho(x)}{\delta q}\biggr|_{q\equiv0} = 0.
\end{align}
Note also that $\rho(x)\equiv0$ when $q\equiv 0$. In this way the convexity of $q\mapsto \rho(x)$ guarantees its positivity.
Let us now turn our attention to $\alpha(\kappa;q)$. In view of the preceding, we already know that this is a non-negative, convex, and real-analytic function of $q\in B_\delta$. It remains to show strict convexity, \eqref{alpha as I2}, and \eqref{O37}.
As we have already noted, $\rho(x)\equiv 0$ when $q\equiv 0$. Thus \eqref{O37} holds trivially in this case. In general \eqref{O37} follows easily from
\begin{align}\label{delta alpha}
\frac{\delta \alpha}{\delta q} = \tfrac{1}{2\kappa} - g(x) = \frac{\delta\ }{\delta q}\Bigl[ - \log\det_2\left( 1+ \sqrt{R_0}\, q \, \sqrt{R_0} \right)\Bigr],
\end{align}
which we will now verify.
From \eqref{drho} and Lemma~\ref{L:D 1/g},
\begin{align*}
\frac{d\ }{ds}\biggr|_{s=0} \alpha(\kappa; q+sf) &= - \iint \frac{G(y,x)G(x,y)}{2g(x)^2} f(y)\,dx \,dy+ \tfrac1{2\kappa}\int f(y)\,dy \\
&= \int \Bigl[\tfrac{1}{2\kappa} - g(x)\Bigr] f(x)\,dx,
\end{align*}
at least for Schwartz functions $f$. This proves the first equality in \eqref{delta alpha}.
From \eqref{det series} and \eqref{E:g series}, we have
\begin{align*}
\frac{d\ }{ds}\biggr|_{s=0} - \log&\det_2\left( 1+ \sqrt{R_0}\, (q+sf) \, \sqrt{R_0} \right) \\
&= \sum_{\ell=2}^\infty (-1)^\ell\tr\Bigl\{ \Bigl(\sqrt{R_0}\, q \, \sqrt{R_0} \Bigr)^{\ell-1} \sqrt{R_0}\, f \, \sqrt{R_0} \Bigr\} \\
&= \int \Bigl[\tfrac{1}{2\kappa} - g(x)\Bigr] f(x)\,dx.
\end{align*}
This verifies the second equality in \eqref{delta alpha} and so finishes the proof of \eqref{O37}.
Toward verifying strict convexity and \eqref{alpha as I2}, let us first compute the Hessian of $\alpha(\kappa)$ at $q\equiv 0$. From \eqref{ddrho} and \eqref{R resolvent}, we have
\begin{align}
d^2\alpha\bigr|_{q\equiv 0} (f,f) &= -\tfrac{1}{2\kappa} \iiint e^{-2\kappa|x-y| - 2\kappa|x-z|} f(y)f(z) \,dx\,dy\,dz \notag\\
&\quad + \tfrac{1}{2\kappa} \iiint e^{-\kappa|x-y| - \kappa|y-z| - \kappa|z-x|} f(y) f(z) \,dx\,dy\,dz \label{delta2alpha}\\
&= \tfrac{1}{4\kappa^2} \iint e^{-2\kappa|y-z|} f(y) f(z) \,dy\,dz = \tfrac{1}{\kappa} \int \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi. \notag
\end{align}
As $\alpha(\kappa)$ is real analytic, this immediately shows strict convexity and \eqref{alpha as I2} in some neighbourhood of $q\equiv 0$; however, to verify that the size $\delta$ of this neighbourhood may
be taken independent of $\kappa$, we must adequately control the modulus of continuity of the Hessian. From \eqref{inverse input} and the first identity in \eqref{delta alpha}, we have
$$
\Bigl|\Bigl( d^2\alpha\bigr|_{q\equiv 0} - d^2\alpha\bigr|_{q}\Bigr)(f,f)\Bigr| \lesssim \delta\kappa^{-1} \int \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi,
$$
thereby settling the matter.
\end{proof}
\section{Dynamics}\label{S:3}
The natural Poisson structure on $\mathcal S({\mathbb{R}})$ or $C^\infty({\mathbb{R}}/{\mathbb{Z}})$ associated to the KdV equation is
\begin{align}\label{3.0}
\{ F, G \} = \int \frac{\delta F}{\delta q}(x) \biggl(\frac{\delta G}{\delta q}\biggr) '(x) \,dx .
\end{align}
This structure is degenerate: $q\mapsto \int q$ is a Casimir (i.e. Poisson commutes with everything). It is common practice to say that this is the
Poisson bracket associated to the (degenerate) almost complex structure $J=\partial_x$ and the $L^2$ inner product. We shall not need such notions; however, they do suggest a very convenient notation for the time-$t$ flow under the Hamiltonian $H$:
$$
q(t) = e^{t J\nabla\! H} q(0).
$$
Note that under our sign conventions,
$$
\frac{d\ }{dt}\ F\circ e^{t J\nabla\! H} = \{ F, H \} \circ e^{t J\nabla\! H}.
$$
As two simple examples, we note that for
$$
P:=\int \tfrac12 |q(x)|^2\,dx \qtq{and} H_\text{KdV} := \int \tfrac12 |q'(x)|^2 + q(x)^3 \,dx,
$$
we have
\begin{align}\label{trivial delta}
\frac{\delta P}{\delta q}(x) = q(x) \qtq{and} \frac{\delta H_\text{KdV}}{\delta q}(x) = -q''(x) + 3q(x)^2.
\end{align}
Thus, the flow associated to $P$ is precisely $\partial_t q = \partial_x q$, which is to say, $P$ represents momentum (= generator of translations); the flow associated to $H_\text{KdV}$ is precisely the KdV equation.
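Indeed, taking $F$ to be evaluation at a point in \eqref{3.0} shows that the flow generated by a Hamiltonian $H$ is $\tfrac{d\ }{dt}\, q = \bigl(\tfrac{\delta H}{\delta q}\bigr)'$, so that \eqref{trivial delta} gives
\begin{align*}
\tfrac{d\ }{dt}\, q = q' \qtq{and} \tfrac{d\ }{dt}\, q = \bigl(-q'' + 3q^2\bigr)' = -q''' + 6qq',
\end{align*}
respectively, the latter being \eqref{KdV}.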
Note that $H_\text{KdV}$ and $P$ Poisson commute:
$$
\{H_\text{KdV},P\}=\int \bigl(-q''(x)+3q(x)^2\bigr) q'(x)\,dx = \int \bigl(-\tfrac12 q'(x)^2 + q(x)^3\bigr)' \,dx =0.
$$
This simultaneously expresses that the KdV flow conserves $P$ and that $H_\text{KdV}$ is conserved under translations. Moreover, the two flows commute:
$$
e^{s J\nabla\! P} \circ e^{t J\nabla\! H_\text{KdV}} = e^{t J\nabla\! H_\text{KdV}} \circ e^{s J\nabla\! P} \qtq{for all} s,t\in{\mathbb{R}},
$$
at least as mappings of Schwartz space. The claim that the KdV flow commutes with translations is without controversy; nonetheless, it is important for what follows to see that it stems precisely from the vanishing of the Poisson bracket. Fortunately, by restricting our attention to Schwartz-space solutions, we may simply apply the standard arguments from differential geometry; see, for example, \cite[\S39]{MR0997295}.
We will also consider one more Hamiltonian, namely,
\begin{align}\label{H kappa defn}
H_\kappa := - 16 \kappa^5 \alpha(\kappa) + 2 \kappa^2 \int q(x)^2\,dx
\end{align}
which, formally at least, converges to
$$
H_\text{KdV} = \int \tfrac12 |q'(x)|^2 + q(x)^3 \,dx
$$
as $\kappa\to\infty$. In due course, we will see that $H_\kappa$ leads to a well-posed flow on $H^{-1}$ and that it Poisson commutes with both $P$ and $H_\text{KdV}$, at least as a functional on Schwartz space. For the moment, however, let us describe the evolution of the diagonal Green's function under the KdV flow.
\begin{prop}\label{L:5.2}
Given $\delta>0$, there is a $\delta_0>0$ so that for every Schwartz solution $q(t)$ to KdV with initial data
$q(0)\in B_{\delta_0}$, we have
\begin{align}\label{prop small}
\sup_{t\in{\mathbb{R}}}\| q(t)\|_{H^{-1}({\mathbb{R}})} \leq \delta.
\end{align}
Moreover, for each $\kappa\geq 1$, the quantities $g(t,x)=g(x;\kappa,q(t))$, $\rho(t,x)=\rho(x;\kappa,q(t))$, and $\alpha(\kappa;q(t))$ obey
\begin{gather}
\tfrac{d\ }{dt}\, g(t,x) = -2 q'(t,x) g(t,x) + 2 q(t,x) g'\!(t,x) - 4\kappa^2 g'\!(t,x) \label{E:l5.1c}\\
\tfrac{d\ }{dt} \, \tfrac{1}{2g(t,x)} = \Bigl( \tfrac{q(t,x)}{g(t,x)} - \tfrac{2\kappa^2}{g(t,x)} + 4\kappa^3\Bigr)' \label{E:l5.1g} \\
\tfrac{d\ }{dt} \rho(t,x) = \Bigl(\tfrac32 \bigl[e^{-2\kappa|\cdot|}* q^2\bigr](t,x) + 2q(t,x)\bigl[ \kappa - \tfrac{1}{2g(t,x)}\bigr] - 4\kappa^2\rho(t,x) \Bigr)' \label{E:l5.1h} \\
\tfrac{d\ }{dt} \alpha(\kappa;q(t)) = 0. \label{E:l5.1z}
\end{gather}
\end{prop}
\begin{proof}
Without loss of generality, we may take $\delta$ as small as we wish; in particular, we shall assume that $\delta$ meets the requirements of Propositions~\ref{P:diffeo},~\ref{P:elliptic}, and~\ref{P:Intro rho}. As an initial choice, we then set $\delta_0=\tfrac12\delta$. This guarantees that these propositions are all applicable to $q(t)$ for some open interval of times containing $t=0$. (Schwartz solutions are necessarily continuous in $H^{-1}({\mathbb{R}})$.) We will show below that equations \eqref{E:l5.1c}--\eqref{E:l5.1z} are valid on this time interval. But then, choosing $\kappa=1$ in \eqref{alpha as I2} and \eqref{E:l5.1z}, we obtain
$$
\| q(t) \|_{H^{-1}} \lesssim \| q(0) \|_{H^{-1}}
$$
on this interval. Thus we see that \eqref{prop small} holds globally in time, after updating our choice of $\delta_0$, if necessary.
It remains to show that the stated differential equations apply to Schwartz solutions whose $H^{-1}$ norm is small enough that the results of Section~\ref{S:2} apply. We begin the proof in earnest after one minor preliminary: by taking an $h$ derivative in \eqref{translation identity} and using the resolvent identity, we have
\begin{align}\label{translation identity'}
g'(x; q) = - \int G(x,y) q'(y) G(y,x)\,dy.
\end{align}
By the resolvent identity, then Lemma~\ref{L:G ibp}, and then \eqref{translation identity'},
\begin{align*}
\frac{d\ }{dt} g(&x; q(t)) = - \int G(x,y) \bigl[ -q'''(t,y) + 6q(t,y)q'(t,y) \bigr] G(y,x)\,dy \\
&= - 2 q'(t,x)g(x; q(t)) + 2 q(t,x) g'(x; q(t)) + 4\kappa^2 \int G(x,y) q'(t,y) G(y,x)\,dy \\
&= - 2 q'(t,x)g(x; q(t)) + 2 q(t,x) g'(x; q(t)) - 4\kappa^2 g'(x;q(t)).
\end{align*}
This proves \eqref{E:l5.1c}. Alternately, \eqref{E:l5.1c} can be derived from the Lax pair formulation of KdV; specifically,
\begin{align*}
\frac{d\ }{dt} \bigl(L(t)+\kappa^2\bigr)^{-1} = \bigl[ P(t), \bigl(L(t)+\kappa^2\bigr)^{-1} \bigr] .
\end{align*}
We leave the details to the interested reader.
Equation \eqref{E:l5.1g} follows immediately from \eqref{E:l5.1c} and the chain rule, while \eqref{E:l5.1h} is simply a combination of \eqref{E:l5.1g} and \eqref{KdV}. Lastly, \eqref{E:l5.1z} follows from integrating \eqref{E:l5.1h} in $x$ over the whole line.
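For the reader's convenience, the chain-rule computation behind \eqref{E:l5.1g} is
\begin{align*}
\tfrac{d\ }{dt}\, \tfrac{1}{2g} = -\tfrac{\dot g}{2g^2} = \tfrac{q'g - qg'}{g^2} + \tfrac{2\kappa^2 g'}{g^2} = \Bigl( \tfrac{q}{g} - \tfrac{2\kappa^2}{g} + 4\kappa^3 \Bigr)';
\end{align*}
the constant $4\kappa^3$ contributes nothing to the derivative and is included so that the quantity in parentheses vanishes as $|x|\to\infty$, where $g\to\tfrac{1}{2\kappa}$ and $q\to 0$.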
\end{proof}
\begin{remark} Combining \eqref{E:l5.1a} and \eqref{E:l5.1c} yields
\begin{align}
\tfrac{d\ }{dt}\, g(x) &= \Bigl( 2g''(x) -6q(x)g(x) - 12\kappa^2g(x) + 6\kappa \Bigr)' , \label{E:l5.1c'}
\end{align}
from which we see that there is also a microscopic conservation law for the KdV flow associated to $g(x)$. Ultimately, however, this turns out to be a consequence of the conservation of $\alpha(\kappa)$; specifically, we have
$$
\frac{d\ }{d\kappa} \alpha(\kappa) = - 2\kappa \int g(x) - \tfrac{1}{2\kappa} + \tfrac{1}{4\kappa^3} q(x) \,dx.
$$
\end{remark}
\begin{prop}\label{P:H kappa}
Fix $\kappa\geq 1$. The Hamiltonian evolution induced by $H_\kappa$ is
\begin{align}\label{H kappa flow q}
\tfrac{d\ }{dt} q(x) = 16\kappa^5 g'(x;\kappa) + 4\kappa^2 q'(x).
\end{align}
This flow is globally well-posed for initial data in $B_\delta$, for $\delta>0$ small enough (independent of $\kappa$), and conserves $\alpha(\varkappa)$ for any $\varkappa\geq 1$. Moreover, in the case of Schwartz-class initial data, the solution is Schwartz-class for all time, the associated diagonal Green's function evolves according to
\begin{align}\label{H kappa flow g}
\tfrac{d\ }{dt} \, \tfrac{1}{2g(x;\varkappa)} &= - \tfrac{4\kappa^5}{\kappa^2-\varkappa^2} \Bigl( \tfrac{g(x;\kappa)}{g(x;\varkappa)} -\tfrac{\varkappa}{\kappa} \Bigr)' + 4\kappa^2 \Bigl( \tfrac{1}{2g(x;\varkappa)} - \varkappa \Bigr)'
\quad\text{if $\varkappa\neq \kappa$},
\end{align}
and the flow commutes with that of $H_\text{KdV}$.
\end{prop}
\begin{proof}
From \eqref{delta alpha} and \eqref{trivial delta} we see that
$$
\frac{\delta H_\kappa}{\delta q} = - 16 \kappa^5\bigl[\tfrac{1}{2\kappa} - g(x;\kappa,q)\bigr] + 4 \kappa^2 q(x) ,
$$
from which \eqref{H kappa flow q} immediately follows.
Rewriting \eqref{H kappa flow q} as the integral equation
$$
q(t,x) = q(0,x+4\kappa^2 t) + \int_0^t 16\kappa^5 g'\bigl(x+4\kappa^2(t-s);\kappa,q(s)\bigr) \,ds ,
$$
we see that local well-posedness follows by Picard iteration and the estimate
$$
\bigl\| g'(x;q) - g'(x;\tilde q) \bigr\|_{H^{-1}} \lesssim \bigl\| g(x;q) - g(x;\tilde q) \bigr\|_{H^{1}} \lesssim \| q -\tilde q \|_{H^{-1}},
$$
which in turn follows from the diffeomorphism property.
Global well-posedness follows from local well-posedness, once we prove that $\alpha(\varkappa)$ is conserved, since we may then use \eqref{alpha as I2} to guarantee that the solution remains small in $H^{-1}$. (This argument appeared already in Proposition~\ref{L:5.2}.) Moreover, because the problem is $H^{-1}$-locally well-posed, it suffices to verify conservation of $\alpha(\varkappa)$ just in the case of Schwartz initial data. Note that \eqref{g stronger mapping} shows that solutions with Schwartz initial data remain in Schwartz class. So let us consider a Schwartz solution $q(t)$ to \eqref{H kappa flow q} and endeavor to prove conservation of $\alpha(\varkappa)$. Actually, it suffices to prove \eqref{H kappa flow g}, because conservation of $\alpha(\varkappa)$ follows from this and \eqref{H kappa flow q}.
By the resolvent identity and \eqref{H kappa flow q},
\begin{align*}
\tfrac{d\ }{dt} \tfrac{1}{2 g(t,x;\varkappa)} ={} & \tfrac{8\kappa^5}{g(t,x;\varkappa)^2} \int G(x,y;\varkappa,q(t)) g'(t,y;\kappa) G(y,x;\varkappa,q(t))\,dy \\
& + \tfrac{2\kappa^2}{g(t,x;\varkappa)^2} \int G(x,y;\varkappa,q(t)) q'(t,y) G(y,x;\varkappa,q(t))\,dy.
\end{align*}
From here we substitute the following rewriting of \eqref{E:l5.1a}
$$
4(\kappa^2-\varkappa^2) g'(y;\kappa) = -\bigl[ - g'''(y;\kappa) + 2\bigl(q(y)g(y;\kappa)\bigr)' + 2 q(y)g'(y;\kappa) + 4\varkappa^2g'(y;\kappa)\bigr]
$$
into the first term and use Lemma~\ref{L:G ibp}, while for the second term we employ \eqref{translation identity'}. In this way, we deduce that
\begin{align*}
\tfrac{d\ }{dt} \tfrac{1}{2 g(t,x;\varkappa)} &= - \tfrac{4\kappa^5}{(\kappa^2-\varkappa^2) g(t,x;\varkappa)^2} \bigl[ g'(t,x;\kappa)g(t,x;\varkappa) - g(t,x;\kappa)g'(t,x;\varkappa) \bigr] \\
&\qquad - \tfrac{2\kappa^2}{g(t,x;\varkappa)^2} g'(t,x;\varkappa),
\end{align*}
which agrees with \eqref{H kappa flow g}.
Lastly, by \eqref{H kappa defn} and Proposition~\ref{L:5.2},
\begin{align*}
\{ H_\kappa, H_\text{KdV} \} = - 16 \kappa^5 \{ \alpha(\kappa), H_\text{KdV} \} + 4 \kappa^2 \{ P, H_\text{KdV} \} = 0,
\end{align*}
which shows that the $H_\kappa$ and $H_\text{KdV}$ flows commute, at least as mappings on Schwartz space.
\end{proof}
\section{Equicontinuity}\label{S:4}
Let us first recall the meaning of equicontinuity:
\begin{definition}
A subset $Q$ of $H^s$ is said to be \emph{equicontinuous} if
\begin{gather}
q(x+h) \to q(x) \quad\text{in $H^s$ as $h\to 0$, uniformly for $q\in Q$.} \label{E:equi1}
\end{gather}
\end{definition}
This definition works in great generality. For $H^s$ spaces, it is also common to define equicontinuity as tightness of the Fourier transform. The two approaches are easily reconciled, as our next lemma shows.
\begin{lemma}\label{L:equi 1}
Fix $-\infty < \sigma < s <\infty$. Then:\\
(i) A bounded subset $Q$ of $H^s({\mathbb{R}})$ is equicontinuous in $H^s({\mathbb{R}})$ if and only if
\begin{gather}
\int_{|\xi|\geq \kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi \to 0 \qtq{as $\kappa\to \infty$, uniformly for $q\in Q$.} \label{E:equi2}
\end{gather}
(ii) A sequence $q_n$ is convergent in $H^s({\mathbb{R}})$ if and only if it is convergent in $H^\sigma({\mathbb{R}})$ and equicontinuous in $H^s({\mathbb{R}})$.
\end{lemma}
\begin{proof}
As $Q$ is bounded and
\begin{align*}
\int |e^{i\xi h}-1|^2 |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi &\lesssim \kappa^2 h^2 \int |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi \\
&\qquad + \int_{|\xi|>\kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi,
\end{align*}
we see that \eqref{E:equi2} implies \eqref{E:equi1}. To prove the converse, we note that
\begin{align*}
\int |e^{i\xi h}-1|^2\, \kappa e^{-2\kappa|h|}\,dh &= \tfrac{2\xi^2}{\xi^2+4\kappa^2} \gtrsim 1 - \chi_{[-\kappa,\kappa]}(\xi)
\end{align*}
and hence
\begin{align*}
\int\, \| q(x+h) - q(x) \|_{H^s({\mathbb{R}})}^2 \kappa e^{-2\kappa|h|}\,dh \gtrsim \int_{|\xi|>\kappa} |\hat q(\xi)|^2 (\xi^2+4)^s \,d\xi.
\end{align*}
Let us now turn attention to (ii). As the forward implication is trivial, we need only consider sequences $q_n$ that are convergent in $H^\sigma({\mathbb{R}})$ and equicontinuous in $H^s({\mathbb{R}})$. But then writing
\begin{align*}
\int |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^s \,d\xi &\leq (\kappa^2+4)^{s-\sigma} \int |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^\sigma \,d\xi \\
&\qquad + \int_{|\xi|>\kappa} |\hat q_n(\xi) - \hat q_m(\xi)|^2 (\xi^2+4)^s \,d\xi
\end{align*}
and employing \eqref{E:equi2}, we see that the sequence is Cauchy in $H^{s}({\mathbb{R}})$ and so convergent there also.
\end{proof}
It is now easy to see that equicontinuity in $H^{-1}({\mathbb{R}})$ is readily accessible through the conserved quantity $\alpha(\kappa;q)$:
\begin{lemma}\label{L:equi 2}
A subset $Q$ of $B_\delta$ is equicontinuous in $H^{-1}({\mathbb{R}})$ if and only if
\begin{gather}
\kappa \alpha(\kappa;q) \to 0 \quad\text{as $\kappa\to \infty$, uniformly for $q\in Q$.} \label{E:equi3}
\end{gather}
\end{lemma}
\begin{proof}
By virtue of \eqref{alpha as I2}, it suffices to show that $Q$ is equicontinuous in $H^{-1}({\mathbb{R}})$ if and only if
\begin{gather}
\lim_{\kappa\to\infty} \ \sup_{q\in Q}\ \int_{{\mathbb{R}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi = 0. \label{E:equi3'}
\end{gather}
That \eqref{E:equi3'} implies \eqref{E:equi2} and hence equicontinuity follows immediately from
\begin{align*}
\int_{|\xi|\geq \kappa} \frac{|\hat q(\xi)|^2}{\xi^2+4} \,d\xi \lesssim \int_{{\mathbb{R}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi .
\end{align*}
On the other hand, \eqref{E:equi2} implies \eqref{E:equi3'} by virtue of the boundedness of $Q$ and
\begin{align*}
\int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2} \,d\xi &\lesssim \tfrac{\varkappa^2}{\kappa^2} \int \frac{|\hat q(\xi)|^2}{\xi^2+4} \,d\xi + \int_{|\xi|>\varkappa} \frac{|\hat q(\xi)|^2 \,d\xi}{\xi^2+4}.
\qedhere
\end{align*}
\end{proof}
From the preceding lemma and the conservation of $\alpha(\kappa)$ we readily deduce the following:
\begin{prop}\label{P:equi}
Let $Q\subset B_\delta$ be a set of Schwartz functions that is equicontinuous in $H^{-1}({\mathbb{R}})$. Then
\begin{align}\label{Q star}
Q^* = \bigl\{ e^{J\nabla(t H_\text{KdV} + s H_\kappa)} q : q\in Q,\ t,s\in {\mathbb{R}},\text{ and } \kappa\geq 1 \bigr\}
\end{align}
is equicontinuous in $H^{-1}({\mathbb{R}})$. By virtue of this,
\begin{align}\label{uniform to q}
4\kappa^3\bigl[ \tfrac1{2\kappa} - g(x;\kappa,q) \bigr] \to q \quad\text{in $H^{-1}({\mathbb{R}})$ as $\kappa\to\infty$},
\end{align}
uniformly for $q\in Q^*$.
\end{prop}
\begin{proof}
By Lemma~\ref{L:equi 2} and \eqref{alpha as I2}, the boundedness and equicontinuity of $Q$ guarantees that $\alpha(\kappa;q)$ is uniformly bounded on $Q$ and that
$$
\lim_{\kappa\to\infty} \kappa \alpha(\kappa;q) = 0 \quad\text{uniformly for $q\in Q$.}
$$
But then since $\alpha(\kappa;q)$ is conserved under these flows, we may reverse this reasoning to deduce that $Q^*$ is equicontinuous as well.
Looking back to \eqref{E:L1 est}, \eqref{R I2}, and \eqref{alpha as I2}, we have
$$
\kappa^3 \bigl\| \tfrac1{2\kappa} - g(\,\cdot\;;\kappa,q) - \tfrac1{\kappa} R_0(2\kappa) q\bigr\|_{L^1} \lesssim \kappa \alpha(\kappa;q),
$$
which converges to zero as $\kappa\to\infty$ uniformly for $q\in Q^*$ by \eqref{E:equi3}. In this way, the proof of \eqref{uniform to q} is reduced to the simple calculation
$$
\| 4\kappa^2 R_0(2\kappa)q - q \|_{H^{-1}}^2 = \int \frac{\xi^4 |\hat q(\xi)|^2}{(\xi^2+4\kappa^2)^2}\,\frac{d\xi}{\xi^2+4} \leq \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi
$$
and \eqref{E:equi3'}.
\end{proof}
\section{Well-posedness}\label{S:5}
\begin{theorem}\label{T:converge}
Let $q_n(t)$ be a sequence of Schwartz solutions to \eqref{KdV} on the line and fix $T>0$. If $q_n(0)$ converges in $H^{-1}({\mathbb{R}})$ then so does $q_n(t)$, uniformly for $t\in[-T,T]$.
\end{theorem}
\begin{proof}
Let us first reduce to the case $q_n(0)\in B_\delta$ for any fixed $\delta>0$, which is required in order to apply many of the results of the previous sections. This is easily handled by a simple scaling argument:
if $q(t,x)$ is a Schwartz solution to \eqref{KdV}, then so is
\begin{align}\label{q scaling}
q_\lambda(t,x) = \lambda^2 q(\lambda^3 t, \lambda x)
\end{align}
for any $\lambda>0$; moreover,
\begin{align}\label{H-1 scaling}
\| q_\lambda(0) \|_{H^{-1}({\mathbb{R}})}^2 = \lambda \int \frac{|\hat q(0,\xi)|^2\,d\xi}{\xi^2+4\lambda^{-2}},
\end{align}
which converges to zero as $\lambda\to 0$. Although it is incidental to the current proof, let us note here that
\begin{equation}\label{G scaling}
\begin{gathered}
G(x,y;\kappa,q_\lambda) = \lambda^{-1} G(\lambda x,\lambda y;\lambda^{-1}\kappa,q),\\
\rho(x;\kappa,q_\lambda) =\lambda\rho(\lambda x;\lambda^{-1}\kappa,q),\qtq{and} \alpha(\kappa;q_\lambda) = \alpha(\lambda^{-1}\kappa;q).
\end{gathered}
\end{equation}
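For the reader's convenience, let us record the elementary computation behind \eqref{H-1 scaling}: under the scaling \eqref{q scaling}, $\widehat{q_\lambda}(0,\xi) = \lambda\, \hat q(0,\xi/\lambda)$ and so, substituting $\xi=\lambda\eta$,
$$
\| q_\lambda(0) \|_{H^{-1}({\mathbb{R}})}^2 = \int \frac{\lambda^2\, |\hat q(0,\xi/\lambda)|^2}{\xi^2+4}\,d\xi
= \lambda^3 \int \frac{|\hat q(0,\eta)|^2\,d\eta}{\lambda^2\eta^2+4}
= \lambda \int \frac{|\hat q(0,\eta)|^2\,d\eta}{\eta^2+4\lambda^{-2}} .
$$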
By commutativity of the flows, we have
\begin{align*}
q_n(t) = e^{t J\nabla(H_\text{KdV} - H_\kappa)} \circ e^{t J\nabla H_\kappa} q_n(0).
\end{align*}
Thus, setting $Q=\{q_n(0)\}$ and defining $Q^*$ as in \eqref{Q star}, we have
\begin{align}
\sup_{|t|\leq T} \| q_n(t) - q_m(t) \|_{H^{-1}} &\leq \sup_{|t|\leq T} \| e^{t J\nabla H_\kappa} q_n(0) - e^{t J\nabla H_\kappa} q_m(0) \|_{H^{-1}} \label{diff 1}\\
&\qquad + 2 \sup_{q\in Q^*} \sup_{|t|\leq T} \| e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q - q \|_{H^{-1}}. \notag
\end{align}
Note that $Q^*$ is equicontinuous in $H^{-1}({\mathbb{R}})$; this follows from Proposition~\ref{P:equi}.
For fixed $\kappa$, the first term in RHS\eqref{diff 1} converges to zero as $n,m\to\infty$ due to the well-posedness of the $H_\kappa$ flow; see Proposition~\ref{P:H kappa}. Thus, it remains to prove that
\begin{align}\label{56}
\lim_{\kappa\to\infty} \sup_{q\in Q^*} \ \sup_{|t|\leq T}\ \| e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q - q \|_{H^{-1}} =0.
\end{align}
We prove \eqref{56} by considering the reciprocal of the diagonal Green's function at some fixed energy. To this end, we fix $\varkappa\geq 1$ and adopt the following notation: given $q\in Q^*$ and $\kappa\geq \varkappa +1$,
$$
q(t) := e^{tJ\nabla (H_\text{KdV} - H_\kappa)} q \qtq{and} g(t,x;\varkappa) := g(x;\varkappa,q(t)).
$$
Note that $q(t)\in (Q^*)^*=Q^*$ for any $t\in{\mathbb{R}}$.
Combining \eqref{E:l5.1g} and \eqref{H kappa flow g}, we obtain
\begin{align*}
\tfrac{d\ }{dt} \tfrac{1}{2g(t,x;\varkappa)} &= \Bigl\{ \tfrac{1}{g(t,x;\varkappa)} \Bigl( q(t,x) + \tfrac{4\kappa^5}{\kappa^2-\varkappa^2}\bigl[g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr] - \tfrac{4\varkappa^5}{\kappa^2-\varkappa^2} \bigl[ g(t,x;\varkappa) -\tfrac{1}{2\varkappa}\bigr]\Bigr)\Bigr\}'
\end{align*}
and thence
\begin{align*}
\bigl\| \tfrac{d\ }{dt} \bigl(\varkappa - \tfrac{1}{2g(t;\varkappa)}\bigr) \bigr\|_{H^{-2}}
&\lesssim \bigl\| q(t,x) + 4\kappa^3\bigl[g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr]\bigr\|_{H^{-1}} \\
&\quad \ {} + \kappa \bigl\| g(t,x;\kappa)-\tfrac{1}{2\kappa}\bigr\|_{H^{-1}} + \kappa^{-2} \bigl\| g(t,x;\varkappa)-\tfrac{1}{2\varkappa}\bigr\|_{H^{-1}}
\end{align*}
uniformly for $q\in Q^*$ and $\kappa\geq \varkappa+1$. (The implicit constants here depend on $\varkappa$.) But then, by the fundamental theorem of calculus and Proposition~\ref{P:equi},
\begin{align}
\lim_{\kappa\to\infty}\; \sup_{q\in Q^*}\ \sup_{|t|\leq T}\ \bigl\| \tfrac{1}{2g(t;\varkappa)} - \tfrac{1}{2g(0;\varkappa)} \bigr\|_{H^{-2}} =0.
\end{align}
In view of Lemma~\ref{L:equi 1}(ii), we may upgrade this convergence to
\begin{align}\label{60}
\lim_{\kappa\to\infty} \; \sup_{q\in Q^*} \ \sup_{|t|\leq T}\ \bigl\| \tfrac{1}{2g(t;\varkappa)} - \tfrac{1}{2g(0;\varkappa)} \bigr\|_{H^{1}} =0,
\end{align}
due to the equicontinuity of the set
$$
E:= \Bigl\{ \varkappa - \tfrac{1}{2g(x;\varkappa,q(t))} \in H^1({\mathbb{R}}) : q\in Q^* \text{ and } t\in{\mathbb{R}}\Bigr\}
$$
in $H^1({\mathbb{R}})$. This property of $E$ holds because, by the diffeomorphism property and the relation \eqref{translation identity}, it is equivalent to equicontinuity of $Q^*$.
Lastly, the diffeomorphism property shows that \eqref{60} implies \eqref{56} and so completes the proof of Theorem~\ref{T:converge}.
\end{proof}
The line case of Theorem~\ref{T:main} follows from the next corollary; we then extend this to higher values of $s$ in Corollary~\ref{C:2}.
\begin{corollary}\label{C:1}
The KdV equation is globally well-posed in $H^{-1}({\mathbb{R}})$ in the following sense: The solution map extends (uniquely) from Schwartz space to a jointly continuous map
$$
\Phi:{\mathbb{R}}\times H^{-1}({\mathbb{R}})\to H^{-1}({\mathbb{R}}).
$$
In particular, $\Phi$ has the group property: $\Phi(t+s)=\Phi(t)\circ \Phi(s)$. Moreover, each orbit $\{\Phi(t,q) : t\in{\mathbb{R}}\}$ is bounded and equicontinuous in $H^{-1}({\mathbb{R}})$. Concretely,
\begin{align}\label{global bound}
\sup_t \| q(t) \|_{H^{-1}({\mathbb{R}})} \lesssim \| q(0) \|_{H^{-1}({\mathbb{R}})} + \| q(0) \|_{H^{-1}({\mathbb{R}})}^3 .
\end{align}
\end{corollary}
\begin{proof}
Given $q\in H^{-1}({\mathbb{R}})$, we may define $\Phi(t,q)$ by choosing some sequence of Schwartz solutions $q_n(t)$ with $q_n(0)\to q$ in $H^{-1}({\mathbb{R}})$ and then setting
$$
\Phi(t,q)= \lim_{n\to\infty} q_n(t).
$$
By virtue of Theorem~\ref{T:converge}, this limit exists in $H^{-1}({\mathbb{R}})$, it is independent of the sequence $q_n$, and the convergence is uniform on compact intervals of time.
Now consider a sequence $q_n\to q \in H^{-1}({\mathbb{R}})$ and fix $T>0$. Theorem~\ref{T:converge} guarantees that there is a sequence of Schwartz solutions $\tilde q_n$ so that
$$
\sup_{|t|\leq T} \| \tilde q_n(t) - \Phi(t,q_n) \|_{H^{-1}} \to 0 \qtq{as $n\to\infty$.}
$$
But then $\tilde q_n(0) \to q$ in $H^{-1}$ and so Theorem~\ref{T:converge} implies
$$
\sup_{|t|\leq T} \| \tilde q_n(t) - \Phi(t,q) \|_{H^{-1}} \to 0 \qtq{as $n\to\infty$.}
$$
As each $\tilde q_n(t)$ is itself $H^{-1}({\mathbb{R}})$-continuous in time, this proves joint continuity of $\Phi$.
As $\Phi$ is continuous, the group property on $H^{-1}({\mathbb{R}})$ is inherited from that on Schwartz space.
For small initial data, boundedness and equicontinuity of orbits follow from conservation of $\alpha(\kappa)$, \eqref{alpha as I2}, and Lemma~\ref{L:equi 2}. In fact, this argument shows that
$$
\sup_t \| q(t) \|_{H^{-1}({\mathbb{R}})} \lesssim \| q(0) \|_{H^{-1}({\mathbb{R}})} \qtq{for} q(0)\in B_\delta
$$
and $\delta>0$ sufficiently small. Equicontinuity and \eqref{global bound} for large data then follow from the scaling transformation \eqref{q scaling}.
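To spell out the role of scaling here, one may argue as follows (a sketch; the implicit constants depend on $\delta$, which is fixed): given initial data $q(0)$, choose $\lambda = \min\bigl\{1,\ \delta^2 \|q(0)\|_{H^{-1}({\mathbb{R}})}^{-2}\bigr\}$, so that \eqref{H-1 scaling} guarantees $q_\lambda(0)\in B_\delta$. As $\xi^2+4\lambda^2 \geq \lambda^2(\xi^2+4)$ for $\lambda\leq 1$, undoing the scaling costs at most a factor $\lambda^{-3/2}$ and so
$$
\sup_t\, \| q(t) \|_{H^{-1}({\mathbb{R}})} \leq \lambda^{-3/2} \sup_t\, \| q_\lambda(t) \|_{H^{-1}({\mathbb{R}})}
\lesssim \lambda^{-3/2} \| q_\lambda(0) \|_{H^{-1}({\mathbb{R}})} \leq \lambda^{-1} \| q(0) \|_{H^{-1}({\mathbb{R}})},
$$
which yields \eqref{global bound} for this choice of $\lambda$.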
\end{proof}
\begin{corollary}\label{C:2}
The KdV equation is globally well-posed in $H^{s}({\mathbb{R}})$ for all $s\geq -1$.
\end{corollary}
\begin{proof}
In view of the preceding, it suffices to prove an analogue of Theorem~\ref{T:converge} in $H^s({\mathbb{R}})$. We shall content ourselves with the treatment of $s\in(-1,0)$ here, since we may do so in a simple and uniform manner; moreover, together with Corollary~\ref{C:1}, this covers all cases not previously known, namely, $s\in[-1,-\tfrac34)$.
Given Schwartz solutions $q_n(t)$ to \eqref{KdV} with $q_n(0)$ convergent in $H^{s}({\mathbb{R}})$ and $T>0$, we may apply Theorem~\ref{T:converge} to obtain convergence of $q_n(t)$ in $H^{-1}({\mathbb{R}})$, uniformly for $t\in[-T,T]$. The goal is to upgrade this to uniform convergence in $H^s({\mathbb{R}})$. In view of Lemma~\ref{L:equi 1}, this amounts to demonstrating $H^s({\mathbb{R}})$-equicontinuity of the set $\{q_n(t):n\in{\mathbb{N}}\text{ and } t\in[-T,T]\}$.
To prove equicontinuity, we employ a trick we used already in \cite{KVZ}: Integrating both sides of \eqref{alpha as I2} against the measure $\kappa^{2+2s} \,d\kappa$ over the interval $[\kappa_0,\infty)$, we obtain
\begin{align}\label{int alpha}
\int_{\kappa_0}^\infty \alpha(\kappa;q) \kappa^{2+2s} \,d\kappa \approx \int |\hat q(\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi,
\end{align}
where the implicit constants depend only on $s$. Notice that LHS\eqref{int alpha} is conserved by the flow, and so it follows that
\begin{align}\label{int alpha;}
\int |\hat q_n(t,\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi \approx \int |\hat q_n(0,\xi)|^2 (\xi^2+4\kappa_0^2)^s \,d\xi
\end{align}
uniformly for $n\in\mathbb{N}$ and $t\in{\mathbb{R}}$. As the initial data $q_n(0)$ are $H^s({\mathbb{R}})$-convergent, they are $H^s({\mathbb{R}})$-equicontinuous and so RHS\eqref{int alpha;} converges to zero as $\kappa_0\to\infty$ uniformly in $n$. But then LHS\eqref{int alpha;} converges to zero as $\kappa_0\to\infty$ uniformly in $n$, thus proving equicontinuity of $\{q_n(t) : n\in\mathbb{N}\text{ and } t\in{\mathbb{R}}\}$.
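For completeness, let us note the elementary estimate underlying \eqref{int alpha}: by \eqref{alpha as I2} and the Fubini--Tonelli theorem, it suffices to check that
$$
\int_{\kappa_0}^\infty \frac{\kappa^{1+2s}\,d\kappa}{\xi^2+4\kappa^2} \approx_s \bigl(\xi^2+4\kappa_0^2\bigr)^{s}
\qtq{for} s\in(-1,0),
$$
which follows by splitting the integral at $\kappa=\kappa_0+|\xi|$ and using $1+2s>-1$ on the inner region.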
\end{proof}
\section{The periodic case}\label{S:periodic}
With the exception of Section~\ref{S:2}, very little of substance changes in carrying over the arguments presented so far to the case of KdV with initial data in $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. Nevertheless, there are several reasons why we chose not to present both geometries simultaneously (as in our prior work \cite{KVZ}). First and foremost, we avoid the necessity of continually interrupting the principal line of reasoning to discuss minor changes (often notational) associated to the two geometries.
Secondly, in the non-periodic setting, the scaling transformation \eqref{q scaling} allows us effortlessly to focus attention on small solutions, which manifests in the appearance of $\delta$ throughout our arguments thus far. To overcome the lack of scaling-invariance in the periodic case, we follow the approach we used in \cite{KVZ}. Although we still maintain that this is the best solution, we can attest that it burdens the exposition considerably. Looking to \eqref{H-1 scaling} and \eqref{G scaling}, we see that rescaling $q$ transforms the parameter $\kappa$. Correspondingly, the smallness condition for $q$ can be replaced by a relation involving $\kappa$ and $q$. On the other hand, many formulas become tremendously ugly if we do not employ the simplifications made possible by requiring $\kappa\geq 1$. This reasoning leads to the following \emph{coupled} conditions on $q$ and $\kappa$ that we shall impose:
\begin{equation}\label{periodic smallness}
\kappa\geq 1 \qtq{and} \kappa^{-1/2} \| q \|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \leq \delta.
\end{equation}
Here $\delta>0$ remains our over-arching smallness parameter, whose value will be allowed to shrink as the argument progresses. Concomitant with this, for fixed $\kappa\geq 1$, we define
\begin{equation}\label{periodic B delta}
B_{\delta,\kappa} := \bigl\{ q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}}) : \kappa^{-1/2} \| q \|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \leq \delta\bigr\}.
\end{equation}
To make the arguments as parallel as possible, we shall insist on working with a Lax operator
$$
L = -\partial_x^2 + q(x)
$$
(and its resolvent) acting on $L^2({\mathbb{R}})$ with periodic coefficients and \emph{not} as an operator on $L^2({\mathbb{R}}/{\mathbb{Z}})$. This deviates from our treatment in \cite{KVZ}.
In light of our convention, $L$ is no longer a relatively Hilbert--Schmidt (or even relatively compact) perturbation of the case $q\equiv 0$. As many arguments in Section~\ref{S:2} were founded on \eqref{R I2}, which is the quantitative expression of this, those arguments do not automatically carry over to the periodic case. In Lemma~\ref{L:A.1} we obtain the key substitute for \eqref{R I2}. We will then show how to use this to obtain the analogues of the results from Section~\ref{S:2} in the periodic setting.
By comparison, Section~\ref{S:3} is almost devoid of estimates (and those that do appear are easily adapted). Rather, it is preoccupied with identities that hold pointwise in space and so are immune to the ambient geometry.
Once we have proved \eqref{periodic alpha} below, everything in Section~\ref{S:4} carries over by simply replacing every instance of integration with respect to $\xi$ by summation over $\xi\in2\pi{\mathbb{Z}}$.
The principal difficulty in transferring the proof of Theorem~\ref{T:converge} to the circle case is the absence of the scaling symmetry \eqref{q scaling}; we have already explained how this can be avoided. The only change needed for the treatment of the remaining results in Section~\ref{S:5}, namely, Corollaries~\ref{C:1} and~\ref{C:2}, is to employ \eqref{periodic alpha} whenever the original argument calls on \eqref{alpha as I2}.
Let us turn now to the central matter at hand, namely, obtaining analogues of the principal results of Section~\ref{S:2} in the periodic setting.
\begin{lemma}\label{L:A.1}
Fix $\psi\in C^\infty_c({\mathbb{R}})$. If $q,f\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$, then
\begin{align}
\bigl\| \sqrt{R_0}\,q \sqrt{R_0} \bigr\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}^2 &\lesssim \kappa^{-1} \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2},\label{A.1.1} \\
\bigl\| \sqrt{R_0}\, f\psi R_0 q \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_1(L^2({\mathbb{R}}))} &\lesssim \kappa^{-1} \| f \|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})} \| q \|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})},\label{A.1.2}
\end{align}
both uniformly for $\kappa\geq 1$.
\end{lemma}
Note that ${\mathfrak{I}}_1(L^2({\mathbb{R}}))$ denotes the ideal of trace-class operators acting on the Hilbert space $L^2({\mathbb{R}})$. Here and below, we simply use trace-class as a notational convenience for denoting operators representable as a product of Hilbert--Schmidt operators:
\begin{align}\label{I1 from I2}
\| B \|_{{\mathfrak{I}}_1} = \inf \bigl\{ \|B_1\|_{{\mathfrak{I}}_2}\|B_2\|_{{\mathfrak{I}}_2} : B = B_1 B_2 \bigr\} .
\end{align}
For a proper discussion of trace-class, including the veracity of \eqref{I1 from I2}, see \cite{MR2154153}.
Before beginning the proof of Lemma~\ref{L:A.1}, we describe one more preliminary: Given $f\in L^2({\mathbb{R}})$ and $\theta\in[0,2\pi]$, we define
$$
f_\theta(x) = \sum_{\xi\in2\pi{\mathbb{Z}}} \hat f(\xi+\theta) e^{ix(\xi+\theta)},
$$
which may be regarded as a jointly square-integrable function of $x\in[0,1]$ and $\theta\in[0,2\pi]$. Indeed,
$$
\int_{{\mathbb{R}}} |f(x)|^2\,dx = \int_0^{2\pi} \| f_\theta\|_{L^2([0,1])}^2\,d\theta.
$$
Moreover, any (pseudo)differential operator $L$ with $1$-periodic coefficients acts fibre-wise, which is to say it commutes with multiplication by any function of $\theta$. Note that what we describe here is simply the standard direct integral representation of a periodic operator (cf. \cite[\S XIII.16]{MR0493421}).
\begin{proof}[Proof of Lemma~\ref{L:A.1}]
As the operator appearing in \eqref{A.1.1} is self-adjoint, it suffices to take $f\in L^2({\mathbb{R}})$ and consider
\begin{align*}
\langle f, \sqrt{R_0} q \sqrt{R_0} \,f\rangle_{L^2} = \int_0^{2\pi} \langle f_\theta, {\mathcal M}_\theta q {\mathcal M}_\theta f_\theta \rangle\,d\theta,
\end{align*}
where ${\mathcal M}_\theta :L^2([0,1])\to L^2([0,1])$ is defined via
$$
{\mathcal M}_\theta : \sum_{\xi\in2\pi{\mathbb{Z}}} c_\xi e^{ix(\xi+\theta)} \mapsto \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{c_\xi e^{ix(\xi+\theta)}}{\sqrt{(\xi+\theta)^2+\kappa^2}}.
$$
In this way, we see that
$$
\| \sqrt{R_0} q \sqrt{R_0} \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} = \bigl\| \| {\mathcal M}_\theta q {\mathcal M}_\theta \|_{L^2([0,1])\to L^2([0,1])} \bigr\|_{L^\infty_\theta}.
$$
The estimate \eqref{A.1.1} now follows by bounding operator norms by Hilbert--Schmidt norms and the equivalence
\begin{align}\label{mcM norm}
\| {\mathcal M}_\theta q {\mathcal M}_\theta \|_{{\mathfrak{I}}_2(L^2([0,1]))}^2 \approx \kappa^{-1} \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2},
\end{align}
which is valid uniformly for $\theta\in[0,2\pi)$.
We turn now to \eqref{A.1.2}. From \eqref{basic commutators}, we find
\begin{align*}
\sqrt{R_0}\, f\psi R_0 q \sqrt{R_0}\, &= \sqrt{R_0}\, f\psi\langle x\rangle R_0 \langle x\rangle^{-1} q \sqrt{R_0}
+ \sqrt{R_0}\, f\psi \sqrt{R_0}\, A \sqrt{R_0}\, \langle x\rangle^{-1} q \sqrt{R_0} \\
\text{with}\quad A &= \sqrt{R_0}\, \bigl(\tfrac{x}{\langle x\rangle} \partial_x + \partial_x\tfrac{x}{\langle x\rangle}\bigr) \sqrt{R_0}\,.
\end{align*}
Evidently, $A$ is an $L^2({\mathbb{R}})$-bounded operator. From \eqref{R I2} we obtain
\begin{align*}
\bigl\| \sqrt{R_0}\, \langle x\rangle^{-1} q \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))}
&\lesssim \kappa^{-1/2} \bigl\| \langle x\rangle^{-1} q \bigr\|_{H^{-1}_\kappa({\mathbb{R}})} \lesssim \kappa^{-1/2} \bigl\| q \bigr\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}
\end{align*}
and similarly,
\begin{align*}
\bigl\| \sqrt{R_0}\, f\psi \sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))} + \bigl\| \sqrt{R_0}\, f\psi\langle x\rangle\sqrt{R_0} \bigr\|_{{\mathfrak{I}}_2(L^2({\mathbb{R}}))}
\lesssim \kappa^{-1/2} \bigl\| f \bigr\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}.
\end{align*}
Combining the preceding immediately yields \eqref{A.1.2}.
\end{proof}
\begin{prop}\label{P:periodic 1}
Let $q\in H^{-1}({\mathbb{R}}/{\mathbb{Z}})$. There is a unique self-adjoint operator $L$ acting on $L^2({\mathbb{R}})$ associated to the semi-bounded quadratic form
$$
\psi \mapsto \int_{{\mathbb{R}}} |\psi'(x)|^2 + q(x) |\psi(x)|^2\,dx .
$$
Furthermore, there exists $\delta>0$, so that if $q$ and $\kappa$ obey \eqref{periodic smallness}, then the resolvent $R:=(L+\kappa^2)^{-1}$ admits a continuous integral kernel $G(x,y;\kappa,q)$ given by the uniformly convergent series
\begin{align}\label{E:periodic G}
G(x,y;\kappa, q) = \tfrac{1}{2\kappa} e^{-\kappa|x-y|} + \sum_{\ell=1}^\infty (-1)^\ell \Bigl\langle \sqrt{R_0}\, \delta_x, \Bigl(\!\sqrt{R_0}\,q \sqrt{R_0}\Bigr)^\ell \sqrt{R_0}\,\delta_y\Bigr\rangle.
\end{align}
\end{prop}
\begin{proof}
Regarding the existence and uniqueness of $L$, we note that \eqref{A.1.1} guarantees that $q$ is an infinitesimally form bounded perturbation and then apply \cite[Theorem~X.17]{MR0493420}. This is the same argument used in the proof of Proposition~\ref{P:sa L}.
Using Plancherel, it is easy to check that $x\mapsto \sqrt{R_0}\delta_x$ is H\"older-continuous as a map from ${\mathbb{R}}$ to $L^2({\mathbb{R}})$. Thus, convergence of the series \eqref{E:periodic G} and continuity of the result follows whenever
$\sqrt{R_0}\,q \sqrt{R_0}$ is a contraction; this in turn follows from \eqref{periodic smallness} and \eqref{A.1.1} when $\delta$ is small enough.
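For the reader's convenience, the H\"older continuity invoked here can be quantified: by Plancherel (up to the constant in our choice of Fourier transform),
$$
\bigl\| \sqrt{R_0}\,(\delta_x - \delta_y) \bigr\|_{L^2({\mathbb{R}})}^2
\approx \int \frac{|e^{ix\xi} - e^{iy\xi}|^2}{\xi^2+\kappa^2}\,d\xi
\lesssim \int \frac{\min\bigl\{4,\ |x-y|^2\xi^2\bigr\}}{\xi^2+\kappa^2}\,d\xi
\lesssim |x-y|,
$$
uniformly for $\kappa\geq 1$; thus $x\mapsto \sqrt{R_0}\,\delta_x$ is H\"older continuous of order $\tfrac12$.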
\end{proof}
We define $g(x;\kappa,q)$ and $\rho(x;\kappa,q)$ exactly as in Section~\ref{S:2}; see \eqref{g defn} and \eqref{E:rho defn}. Let us now demonstrate their basic properties:
\begin{prop}\label{P:periodic 2}
There exists $\delta>0$, so that the following are true for all $\kappa\geq 1:$\\
(i) The mappings
\begin{align}\label{periodic diffeos}
q\mapsto g-\tfrac1{2\kappa} \qtq{and} q\mapsto \kappa-\tfrac1{2g}
\end{align}
are (real analytic) diffeomorphisms of $B_{\delta,\kappa}$ into $H^1({\mathbb{R}}/{\mathbb{Z}})$.\\
(ii) For every $q\in B_{\delta,\kappa}$ and every integer $s\geq 0$,
\begin{align}\label{periodic stronger}
\|g'(x)\|_{H^{s}({\mathbb{R}}/{\mathbb{Z}})} \lesssim_s \| q \|_{H^{s-1}({\mathbb{R}}/{\mathbb{Z}})} .
\end{align}
(iii) For every $q\in B_{\delta,\kappa}$, $\rho(x;q)$ is non-negative and in $H^1({\mathbb{R}}/{\mathbb{Z}})$. Moreover,
\begin{align}\label{periodic alpha}
\alpha(\kappa;q):=\int_0^1 \rho(x)\,dx \approx \kappa^{-1} \sum_{\xi\in 2\pi{\mathbb{Z}}} \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2},
\end{align}
uniformly for $\kappa\geq 1$ and $q\in B_{\delta,\kappa}$.
\end{prop}
\begin{proof}
First we show that $g\in H^{1}({\mathbb{R}}/{\mathbb{Z}})$. That it is periodic is self-evident from \eqref{E:periodic G}. To estimate its norm, we pick $\psi\in C^\infty_c({\mathbb{R}})$ so that
$$
\sum_{k\in{\mathbb{Z}}} \psi(x-k) \equiv 1.
$$
The utility of this partition of unity for us stems from the duality relation
$$
\| h \|_{H^{1}_\kappa({\mathbb{R}}/{\mathbb{Z}})} = \sup \biggl\{ \int_{\mathbb{R}} h(x) \psi(x) f(x)\,dx : f\in C^\infty({\mathbb{R}}/{\mathbb{Z}}) \text{ and }\|f\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}\leq 1\biggr\}.
$$
Now given $f\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$, from \eqref{E:periodic G} we obtain that
\begin{align}\label{dual g}
\int \bigl[g(x) -\tfrac1{2\kappa}\bigr] \psi(x) f(x) \,dx = \sum_{\ell=1}^\infty (-1)^\ell \tr\Bigl\{ \!\sqrt{R_0}\,f\psi \sqrt{R_0} \Bigl(\!\sqrt{R_0}\,q \sqrt{R_0}\Bigr)^\ell \Bigr\}
\end{align}
and thence, using \eqref{A.1.2}, \eqref{A.1.1}, and \eqref{periodic B delta},
\begin{align}\label{periodic g in H1}
\bigl\|g(x) -\tfrac1{2\kappa}\bigr\|_{H^1({\mathbb{R}}/{\mathbb{Z}})} \lesssim \kappa^{-1} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})},
\end{align}
provided $\delta$ is chosen sufficiently small. Moreover, this argument shows that the first mapping in \eqref{periodic diffeos} is real-analytic by directly proving convergence of the power series.
When combined with \eqref{translation identity}, the estimates just presented also lead to a proof of \eqref{periodic stronger}; for further details see the proof of Proposition~\ref{P:diffeo}.
We consider now the inverse mapping. As in Section~\ref{S:2},
$$
\kappa \cdot dg\bigr|_{q\equiv 0} = - R_0(2\kappa)
$$
is a unitary map of $H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})$ onto $H^1_\kappa({\mathbb{R}}/{\mathbb{Z}})$. Thus, by the inverse function theorem, $q\mapsto g-\tfrac1{2\kappa}$ is a diffeomorphism in some neighbourhood of zero. We must verify that the inverse mapping extends to the whole of $B_{\delta,\kappa}$. Differentiating \eqref{dual g} with respect to $q$ and applying \eqref{A.1.1} and \eqref{A.1.2} yields
\begin{align}\label{periodic dg in H1}
\Bigl\| dg -dg\bigr|_{q\equiv0} \Bigr\|_{H^{-1}_\kappa\to H^{\vphantom{+}1}_\kappa} \lesssim \kappa^{-3/2} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})},
\end{align}
which suffices for this task.
The diffeomorphism property extends from the first map in \eqref{periodic diffeos} to the second by exactly the same argument presented in the proof of Proposition~\ref{P:diffeo}.
We turn now to part (iii), focussing on \eqref{periodic alpha}. As previously, we proceed by computing derivatives, beginning with
\begin{align}\label{peri red}
\frac{d\ }{ds}\biggr|_{s=0} \int_0^1 \frac{dy}{2g(y;q+sf)} = \int_0^1 g(x) f(x)\,dx.
\end{align}
This may be proved as follows: By the resolvent identity, periodicity, and Lemma~\ref{L:D 1/g},
\begin{align*}
\text{LHS\eqref{peri red}} &= \int_0^1 \int_{\mathbb{R}} \frac{G(y,x)f(x)G(x,y)}{2g(y)^2}\,dx\,dy \\
&= \sum_{k\in{\mathbb{Z}}} \int_0^1 \int_0^1 \frac{G(y,x+k)f(x+k)G(x+k,y)}{2g(y)^2}\,dx\,dy \\
&= \sum_{k\in{\mathbb{Z}}} \int_0^1 \int_0^1 \frac{G(y-k,x)f(x)G(x,y-k)}{2g(y-k)^2}\,dx\,dy \\
&= \int_0^1 \int_{\mathbb{R}} \frac{G(y,x)G(x,y)}{2g(y)^2}\,dy\,f(x)\,dx = \text{RHS\eqref{peri red}}.
\end{align*}
Beginning with \eqref{peri red} and using \eqref{O35pp} shows
\begin{align*}
d^2\alpha\bigr|_{q\equiv 0} (f,f) &= \tfrac{1}{4\kappa^2} \int_0^1 \!\! \int_{\mathbb{R}} e^{-2\kappa|y-z|} f(y) f(z) \,dy\,dz
= \tfrac{1}{\kappa} \sum_{\xi\in 2\pi{\mathbb{Z}}} \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}.
\end{align*}
Relying also on \eqref{periodic dg in H1} yields
\begin{align*}
\Bigl| d^2\alpha(f,f) - d^2\alpha\bigr|_{q\equiv 0} (f,f)\Bigr| & \lesssim \kappa^{-3/2} \|q\|_{H^{-1}({\mathbb{R}}/{\mathbb{Z}})} \|f\|_{H^{-1}_\kappa({\mathbb{R}}/{\mathbb{Z}})}^2.
\end{align*}
In this way, we see that the power series expansion of $\alpha$ as a function of $q$ is dominated by its quadratic term throughout $B_{\delta,\kappa}$, thus proving \eqref{periodic alpha}.
\end{proof}
Proposition~\ref{P:periodic 2} contains no analogue of \eqref{O37}. Unlike in the decaying case, the quantity defined in \eqref{periodic alpha} does not coincide precisely with the renormalized perturbation determinant considered in \cite{KVZ}. To describe the connection, we must introduce several new objects. Henceforth, we consider only $q\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$. Once we have established suitable identities in this setting, they may be extended via analyticity to $q$ that are merely $H^{-1}({\mathbb{R}}/{\mathbb{Z}})$.
Let ${\mathcal R}_0$ denote the resolvent associated to the Laplacian on $[0,1]$ with periodic boundary conditions; concretely,
$$
{\mathcal R}_0 : \sum_{\xi\in2\pi{\mathbb{Z}}} c_\xi e^{i\xi x} \mapsto \sum_{\xi\in2\pi{\mathbb{Z}}} \frac{c_\xi e^{i\xi x}}{\xi^2 + \kappa^2}.
$$
Note that this coincides with ${\mathcal M}_0^2$ where ${\mathcal M}_0$ is as in the proof of \eqref{A.1.1}. Using \eqref{mcM norm}, we see that the resolvent of the operator $\mathcal{L}=-\partial_x^2+q$, acting on $L^2([0,1])$ with periodic boundary conditions, can be expanded in a convergent series
$$
{\mathcal R} = {\mathcal R}_0 + \sum_{\ell=1}^\infty (-1)^\ell \sqrt{{\mathcal R}_0} \left(\sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0}\right)^\ell \sqrt{{\mathcal R}_0}
$$
whenever $\kappa$ and $q$ satisfy \eqref{periodic smallness} for suitable $\delta>0$. Moreover, the kernels of these operators can be found by the method of images:
\begin{align}\label{mcR0 kernel}
\langle \delta_x, {\mathcal R}_0\delta_y\rangle = \sum_{k\in{\mathbb{Z}}} \langle \delta_x, R_0\delta_{y+k}\rangle = \tfrac{1}{2\kappa(1-e^{-\kappa})} \bigl[ e^{-\kappa\|x-y\|} + e^{-\kappa(1-\|x-y\|)} \bigr],
\end{align}
where $\|x-y\|=\dist(x-y,{\mathbb{Z}})$, and similarly,
\begin{align}\label{mcR kernel}
\mathcal G(x,y) := \langle \delta_x, {\mathcal R} \delta_y\rangle = \sum_{k\in{\mathbb{Z}}} G(x,y+k).
\end{align}
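Concretely, the sum in \eqref{mcR0 kernel} reduces to a pair of geometric series: writing $u = x-y \in [0,1)$,
$$
\sum_{k\in{\mathbb{Z}}} e^{-\kappa|u+k|} = \sum_{k\geq 0} e^{-\kappa(u+k)} + \sum_{m\geq 1} e^{-\kappa(m-u)}
= \frac{e^{-\kappa u} + e^{-\kappa(1-u)}}{1-e^{-\kappa}} .
$$
As the numerator is unchanged under $u\mapsto 1-u$, it may be expressed via $\|x-y\|$ as in \eqref{mcR0 kernel}.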
The last object we need to define is the Lyapunov exponent. Let $\psi_\pm(x;\kappa)$ be the Weyl solutions introduced earlier. Due to the periodicity of $q$, we see that $x\mapsto\psi_+(x+1)$ and $x\mapsto\psi_-(x+1)$ constitute equally good Weyl solutions and so must differ from the originals by numerical constants. Noting the constancy of the Wronskian as well as the square-integrability constraint, we see
that there is a $\gamma=\gamma(\kappa)>0$ so that
$$
\psi_+(x+1;\kappa) = e^{-\gamma(\kappa)} \psi_+(x;\kappa) \qtq{and} \psi_-(x+1;\kappa) = e^{+\gamma(\kappa)} \psi_-(x;\kappa).
$$
This quantity $\gamma$ is known as the Lyapunov exponent. Employing these relations to sum in \eqref{mcR kernel}, we deduce that
\begin{align}\label{mcR kernel'}
\mathcal G(x,x) = \frac{1+e^{-\gamma}}{1-e^{-\gamma}} G(x,x).
\end{align}
\begin{prop} For $q\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$ and $\kappa$ satisfying \eqref{periodic smallness}, we have
\begin{align}\label{Lyapunov formula}
\gamma(\kappa) = \int_0^1 \frac{dx}{2g(x)},
\end{align}
\begin{align}\label{periodic trace}
\tr\left(\sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right) = \tfrac{1+e^{-\kappa}}{1-e^{-\kappa}} \int_0^1 \bigl[\tfrac{1}{2} e^{-2\kappa|\cdot|}*q\bigr](x) \,dx
= \tfrac1{2\kappa}\,\tfrac{1+e^{-\kappa}}{1-e^{-\kappa}} \int_0^1 q(x) \,dx,
\end{align}
which is a Casimir, and
\begin{align}\label{periodic determinant}
\log\det\left( 1 + \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right) = \log\bigl(e^{\gamma} - 2 + e^{-\gamma} \bigr) - \log\bigl(e^{\kappa} - 2 + e^{-\kappa} \bigr).
\end{align}
Here the trace and determinant are with respect to the Hilbert space $L^2({\mathbb{R}}/{\mathbb{Z}})$.
\end{prop}
\begin{proof}
The proof of \eqref{Lyapunov formula} is very simple: combining $g(x)=\psi_+(x)\psi_-(x)$ with the Wronskian relation \eqref{E:Wron}, we have
\begin{align*}
\int_0^1 \frac{dx}{2g(x)} = \tfrac12 \int_0^1 \frac{d }{dx} \log\Bigl[\tfrac{\psi_-(x)}{\psi_+(x)}\Bigr]\,dx = \tfrac12 \log\Bigl[\tfrac{\psi_-(x+1)\psi_+(x)}{\psi_-(x)\psi_+(x+1)}\Bigr] = \gamma.
\end{align*}
By \eqref{mcR0 kernel}, we have
$$
\tr\Bigl\{ \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \Bigr\} = \frac{(1+e^{-\kappa})}{2\kappa(1-e^{-\kappa})} \int_0^1 q(x)\,dx,
$$
while
\begin{align*}
\int_0^1\int_{\mathbb{R}} \tfrac{1}{2} e^{-2\kappa|x-y|} q(y)\,dy\,dx &= \sum_{k\in{\mathbb{Z}}} \int_0^1\int_0^1 \tfrac{1}{2} e^{-2\kappa|x-k-y|} q(y)\,dy\,dx \\
&= \int_0^1\int_{\mathbb{R}} \tfrac{1}{2} e^{-2\kappa|x-y|} q(y) \,dx\,dy = \tfrac{1}{2\kappa} \int_0^1 q(y)\,dy.
\end{align*}
This proves \eqref{periodic trace}.
The identity \eqref{periodic determinant} can be readily deduced from \cite[Theorem~2.9]{MR0559928}, which is a recapitulation of venerable results of Hill and of Whittaker and Watson. For completeness, we give an alternate proof paralleling our arguments from the rapidly decreasing case.
By \eqref{Lyapunov formula}, we see that \eqref{periodic determinant} holds in the case $q\equiv 0$. Moreover, arguing as in the decaying case, we find that for any $f\in C^\infty({\mathbb{R}}/{\mathbb{Z}})$,
$$
\frac{d\ }{ds}\biggr|_{s=0} \log\det\left( 1 + \sqrt{{\mathcal R}_0}\, (q+sf) \, \sqrt{{\mathcal R}_0} \right) = \int_0^1 \mathcal G(x,x) f(x)\,dx.
$$
On the other hand, by \eqref{Lyapunov formula} and \eqref{peri red},
$$
\frac{d\ }{ds}\biggr|_{s=0} \log\bigl(e^{\gamma(\kappa;q+sf)} - 2 + e^{-\gamma(\kappa;q+sf)} \bigr) = \frac{e^\gamma-e^{-\gamma}}{(e^{\gamma} - 2 + e^{-\gamma})}\int_0^1 g(x) f(x)\,dx.
$$
In view of \eqref{mcR kernel'}, these two derivatives agree. Thus equality in \eqref{periodic determinant} extends to all $q\in B_{\delta,\kappa}$.
\end{proof}
\begin{corollary} For smooth initial data, the conservation of
$$
\int_0^1 \rho(x)\,dx
$$
under the KdV flow, which follows from \eqref{E:l5.1h}, is equivalent to conservation of
$$
-\log\det_2\left( 1 + \sqrt{{\mathcal R}_0}\, q \, \sqrt{{\mathcal R}_0} \right),
$$
which was proved in \cite{KVZ}.
\end{corollary}
\section{Local smoothing}\label{S:7}
Our first goal in this section is to derive a local smoothing result for $H^{-1}$-solutions to \eqref{KdV} on the line. A similar a priori estimate was obtained by Buckmaster and Koch in \cite{MR3400442} via the Miura map.
\begin{lemma}[Local smoothing]\label{L:loc smoothing}
There exists $\delta>0$ so that for every $H^{-1}({\mathbb{R}})$-solution $q(t)$ to \eqref{KdV}, in the sense of Corollary~\ref{C:1}, with initial data $q(0)\in B_\delta$,
\begin{equation}\label{E:loc smoothing *}
\sup_{t_0,x_0\in{\mathbb{R}}} \ \int_{0}^{1} \!\! \int_{0}^{1} |q(t-t_0,x-x_0)|^2 \,dx\,dt \lesssim \delta^2.
\end{equation}
\end{lemma}
\begin{proof}
As noted already in Proposition~\ref{L:5.2}, conservation of $\alpha(\kappa=1)$ guarantees that
\begin{align*}
\| q \|_{L^\infty_t H^{-1}_x} ^2 \lesssim \delta^2.
\end{align*}
This allows us to choose $\delta$ sufficiently small that all results from Section~\ref{S:3} can be applied at all times $t\in{\mathbb{R}}$.
It also means that it suffices to prove \eqref{E:loc smoothing *} with $t_0=x_0=0$.
Let us now fix a smooth function $\phi$ whose derivative $\phi'$ is positive and Schwartz and define
$$
\psi(x) = \tfrac32 \int_{\mathbb{R}} e^{-2|x-y|} \phi'(y) \,dy,
$$
which is positive everywhere.
Suppose first that $q(t)$ is a Schwartz solution to \eqref{KdV}. Setting $\kappa=1$ in \eqref{E:l5.1h}, we obtain
\begin{align*}
\tfrac{d\ }{dt} \rho(t,x) &= \Bigl( \tfrac32 \bigl[e^{-2|\cdot|}* q(t)^2\bigr](x) + 2q(t,x)\bigl[ 1 - \tfrac{1}{2g(t,x)}\bigr] - 4\rho(t,x) \Bigr)'.
\end{align*}
Integrating this against $\phi(x)$ and integrating by parts yields
\begin{align}
\int_0^1 \! \int_{\mathbb{R}} |q(t,x)|^2 \psi(x) \,dx\,dt &= \int_{\mathbb{R}} [\rho(0,x)-\rho(1,x)] \phi(x) \,dx \notag\\
&\quad - 2 \int_0^1 \! \int_{\mathbb{R}} q(t,x)\bigl[ 1 - \tfrac{1}{2g(t,x)}\bigr] \phi'(x) \,dx\,dt \label{64}\\
&\quad + 4 \int_0^1 \! \int_{\mathbb{R}} \rho(t,x) \phi'(x) \,dx\,dt . \notag
\end{align}
But by the results of Section~\ref{S:3}, the right-hand side is bounded uniformly; thus \eqref{E:loc smoothing *} follows for Schwartz solutions.
Next we allow $q(t)$ to be a general (not Schwartz) solution to \eqref{KdV} and suppose $q_n(t)$ is a sequence of Schwartz solutions with $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$. As Theorem~\ref{T:converge} guarantees convergence of $q_n(t)$ to $q(t)$ in $H^{-1}({\mathbb{R}})$, and hence in the sense of distributions, weak lower-semicontinuity of the $L^2$-norm yields
\begin{equation*}
\int_{0}^{1} \int_{0}^{1} |q(t,x)|^2 \,dx\,dt \leq \liminf_{n\to\infty} \int_{0}^{1} \int_0^1 |q_n(t,x)|^2 \,dx\,dt.
\end{equation*}
Thus, \eqref{E:loc smoothing *} for such general solutions $q(t)$ follows from the Schwartz-class case already proven.
\end{proof}
Using this a priori bound as a stepping stone, we will now show that solutions whose initial data converge in $H^{-1}({\mathbb{R}})$ actually converge in the local smoothing norm as claimed in Theorem~\ref{T:ls conv}. In fact, the following proposition is strictly stronger than this theorem because of the additional uniformity in $x_0$.
\begin{prop}\label{P:loc smoothing}
Let $q(t)$ and $q_n(t)$ be $H^{-1}({\mathbb{R}})$-solutions to \eqref{KdV}, in the sense of Corollary~\ref{C:1}, with initial data $q_n(0)\to q(0)$ in $H^{-1}({\mathbb{R}})$. Then for every $T>0$,
\begin{equation}\label{E:loc smoothing n}
\lim_{n\to\infty} \ \sup_{x_0\in{\mathbb{R}}} \ \int_{-T}^T \int_0^1 |q(t,x-x_0)-q_n(t,x-x_0)|^2 \,dx\,dt = 0.
\end{equation}
In particular, solutions in the sense of Corollary~\ref{C:1} are distributional solutions.
\end{prop}
A major part of the argument leading to Proposition~\ref{P:loc smoothing} is a refinement of the proof of Lemma~\ref{L:loc smoothing}. The key improvement stems from analyzing the behavior of the various terms in \eqref{64} as $\kappa\to\infty$, rather than simply setting $\kappa=1$. We begin with the following preliminary estimates:
\begin{lemma}
Fix $\psi\in C^\infty_c({\mathbb{R}})$ with $\supp(\psi)\subset (0,1)$. There exists $\delta>0$ so that
\begin{gather}
\bigl\| \psi(x)\bigl[g(x)-\tfrac1{2\kappa}\bigr] + \kappa^{-1}[R_0(2\kappa)(q\psi)](x) \bigr\|_{L^2({\mathbb{R}})}^2
\lesssim \kappa^{-7} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi g}\\
\bigl\| \psi(x)\bigl[\kappa -\tfrac1{2g(x)}\bigr] + 2\kappa [R_0(2\kappa)(q\psi)](x) \bigr\|_{L^2({\mathbb{R}})}^2
\lesssim \kappa^{-3} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi 1/g}\\
\ \ \biggl| \int \rho(x) \psi(x)^2 \,dx - \tfrac{1}{2\kappa} \int \frac{|\widehat{q\psi} (\xi)|^2\,d\xi}{\xi^2+4\kappa^2} \biggr|
\lesssim \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \label{E:psi rho}\\
\biggl| \iint q(x)^2 \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dx\,dy - \int q(x)^2 \psi(x)^2 \,dx \biggr|
\lesssim \int_{\mathbb{R}} \frac{|q(x)|^2\,dx}{\kappa(1+x^2)} \label{E:psi dumb}
\end{gather}
for every $q\in B_\delta$ and $\kappa\geq 1$. (Note that the implicit constants depend on $\psi$.)
\end{lemma}
\begin{proof}
We begin with a commutator calculation:
\begin{align*}
[\psi(x),R_0] &= R_0 \bigl(-2\partial_x \psi'(x) + \psi''(x)\bigr) R_0 \\
&= R_0 \bigl(-2\partial_x\bigr) [\psi'(x),R_0] + R_0 \bigl(-2\partial_x\bigr)R_0\psi'(x) + R_0\psi''(x) R_0.
\end{align*}
This shows that for $\kappa\geq 1$, we can write
\begin{equation}\label{psi comm bound}
\begin{gathered}\relax
[\psi(x),R_0] = \sqrt{R_0} A \sqrt{R_0} = \sqrt{R_0} B \sqrt{R_0} + \sqrt{R_0} C \sqrt{R_0} \psi'(x) \\
\text{with} \quad \|A\|_{L^2\to L^2} + \|C\|_{L^2\to L^2} \lesssim \kappa^{-1} \qtq{and} \|B\|_{L^2\to L^2} \lesssim \kappa^{-2}.
\end{gathered}
\end{equation}
From the series \eqref{E:g series}, we have
\begin{align}
&\int_{\mathbb{R}}\bigl\{ \psi(x)\bigl[g(x)-\tfrac1{2\kappa}\bigr] + \kappa^{-1}[R_0(2\kappa)(q\psi)](x) \bigr\} f(x)\,dx \label{dual psi g}\\
={}& \sum_{\ell\geq 2} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} \sqrt{R_0}\,\psi q\,\sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-1} \Bigr\} \notag\\
&\ +\sum_{\ell\geq 1} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} B \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \Bigr\} \notag\\
&\ +\sum_{\ell\geq 1} (-1)^\ell \tr\Bigl\{ \sqrt{R_0}\,f\,\sqrt{R_0} C \sqrt{R_0}\,\psi' q\,\sqrt{R_0} \Bigl( \sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-1} \Bigr\}. \notag
\end{align}
Using
\begin{align}\label{I2 S6}
\Bigl\| \sqrt{R_0}\,h\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \lesssim \kappa^{-3/2} \| h \|_{L^2}
\qtq{and} \Bigl\| \sqrt{R_0}\,q\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2} \lesssim \kappa^{-1/2} \| q \|_{H^{-1}},
\end{align}
which follow from \eqref{R I2}, we then deduce that
\begin{align*}
\bigl|\text{LHS\eqref{dual psi g}}\bigr| \lesssim \kappa^{-7/2} \|f\|_{L^2} \Bigl\{ \|\psi q\|_{L^2} + 1 + \|\psi' q\|_{L^2}\Bigr\},
\end{align*}
provided, say, $\delta\leq\frac12$. This proves \eqref{E:psi g}. For future use, we also note that with the aid of \eqref{E:psi g}, one may readily show
\begin{align}\label{step to rho2}
\int \bigl(g(x) - \tfrac1{2\kappa}\bigr)^2\psi(x)^2\,dx + \bigl\| \kappa^{-1} R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2
\lesssim \kappa^{-6} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] .
\end{align}
Next we prove \eqref{E:psi 1/g}. This almost follows from \eqref{E:psi g}; indeed, writing
\begin{align}\label{1/g expansion}
\kappa -\tfrac1{2g(x)} = 2\kappa^2\bigl( g(x) - \tfrac1{2\kappa}\bigr) - \tfrac{2\kappa^2}{g(x)} \bigl( g(x) - \tfrac1{2\kappa}\bigr)^2
\end{align}
and invoking \eqref{E:psi g}, we are left only to prove
\begin{align}\label{psi/g left}
\int \tfrac{2\kappa^2}{g(x)} \bigl( g(x) - \tfrac1{2\kappa}\bigr)^2 \psi(x)^2\,dx \lesssim \kappa^{-3} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr].
\end{align}
This then follows from \eqref{step to rho2}.
We begin the proof of \eqref{E:psi rho} by expanding one step further than \eqref{1/g expansion} to write
\begin{equation*}
\begin{gathered}
\rho(x) =\sum_{i=1}^3 \rho_i(x) \qtq{with} \rho_1(x):= 2\kappa^2\bigl\{g(x) - \tfrac1{2\kappa} + \tfrac1\kappa [R_0(2\kappa)q](x)\bigr\},\\
\rho_2(x):= - 4\kappa^3\bigl(g(x) - \tfrac1{2\kappa}\bigr)^2 \qtq{and} \rho_3(x):= 4\kappa^3\bigl(g(x) - \tfrac1{2\kappa}\bigr)^3 / g(x).
\end{gathered}
\end{equation*}
Let us begin our analysis with the contribution of $\rho_1$. From \eqref{E:g series}, we have
\begin{align*}
\int \rho_1(x)\psi(x)^2\,dx = 2\kappa^2 \sum_{\ell\geq 2} (-1)^\ell \tr\Bigl\{ \psi(x)^2 \sqrt{R_0}\Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0}\Bigr\}.
\end{align*}
Now for any $\ell\geq2$, we have from \eqref{I2 S6}, \eqref{psi comm bound} and its adjoint that
\begin{align*}
&\tr \Bigl\{ \psi(x)^2 \sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^\ell \sqrt{R_0} \Bigr\} \\
&\ \ = \tr \Bigl\{ \sqrt{R_0}\,\psi q R_0^2 \psi q\,\sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-2} \Bigr\}
+ O\biggl( \kappa^{-6} \delta^{\ell-2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) .
\end{align*}
The error term here sums acceptably over $\ell\geq 2$. The contribution of the first term is also acceptable provided we restrict to $\ell\geq 3$; indeed,
$$
\Bigl| \tr \Bigl\{ \sqrt{R_0}\,\psi q R_0^2 \psi q\,\sqrt{R_0} \Bigl(\sqrt{R_0}\,q\,\sqrt{R_0}\Bigr)^{\ell-2} \Bigr\} \Bigr|
\lesssim \kappa^{-5-\frac{\ell-2}2} \delta^{\ell-2} \int_0^1 |q(x)|^2\,dx.
$$
Combining all of this, we deduce that
\begin{align*}
\int \rho_1(x)\psi(x)^2\,dx = 2\kappa^2 \tr \Bigl\{ R_0\psi q R_0 \psi q R_0 \Bigr\}
+ O\biggl( \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) .
\end{align*}
We turn now to $\rho_2$. Combining \eqref{step to rho2} and \eqref{E:psi g} gives
\begin{align*}
\biggl| \int \bigl(g(x) - \tfrac1{2\kappa}\bigr)^2\psi(x)^2\,dx - \kappa^{-2} \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2 \biggr|
\lesssim \kappa^{-13/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr].
\end{align*}
Therefore,
\begin{align*}
\int \rho_2(x)\psi(x)^2\,dx = - 4 \kappa \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2
+ O\biggl( \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] \biggr) .
\end{align*}
We now consider $\rho_3$. From \eqref{R I2} we have
\begin{align*}
\Bigl\| \sqrt{R_0}\,h\,\sqrt{R_0} \Bigr\|_{{\mathfrak{I}}_2}^2 \lesssim \kappa^{-1} \| h \|_{L^1}^2 \int\frac{d\xi}{\xi^2+4\kappa^2} \lesssim \kappa^{-2} \| h \|_{L^1}^2 .
\end{align*}
Employing this to estimate the series \eqref{E:g series} via duality, we obtain
$$
\bigl\| g - \tfrac1{2\kappa} \bigr\|_{L^\infty} \lesssim \kappa^{-3/2} \delta .
$$
Combining this with \eqref{step to rho2} yields
\begin{align*}
\int \rho_3(x)\psi(x)^2\,dx \lesssim \kappa^{-7/2} \biggl[ 1+ \int_0^1 |q(x)|^2\,dx \biggr] .
\end{align*}
To derive the claim \eqref{E:psi rho} by combining our results on each part of $\rho$, we need one additional identity, namely,
$$
2\kappa^2 \tr \Bigl\{ R_0\psi q R_0 \psi q R_0 \Bigr\} - 4 \kappa \bigl\| R_0(2\kappa)(q\psi) \bigr\|_{L^2({\mathbb{R}})}^2
= \tfrac{1}{2\kappa} \int \frac{|\widehat{q\psi} (\xi)|^2\,d\xi}{\xi^2+4\kappa^2} .
$$
That this equality holds follows from the same calculation we carried out in \eqref{delta2alpha}.
The last estimate \eqref{E:psi dumb} is relatively trivial. As $\int \kappa e^{-2\kappa|x-y|}\,dy =1$,
\begin{align*}
\biggl| \psi(x)^2 - \int \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dy \biggr| \lesssim \int \kappa e^{-2\kappa|y-x|} |x-y| \,dy \lesssim \kappa^{-1},
\end{align*}
which settles the case $|x|\leq 10$. For $|x|\geq 10$, we have
\begin{align*}
\biggl| \psi(x)^2 - \int \kappa e^{-2\kappa|x-y|} \psi(y)^2 \,dy \biggr| = \int_0^1 \kappa e^{-2\kappa|y-x|} \psi(y)^2 \,dy \lesssim \kappa e^{-\kappa|x|} ,
\end{align*}
which offers more than enough decay in both $x$ and $\kappa$.
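For completeness, the elementary kernel computation behind the $|x|\leq 10$ case is
\[
\int_{\mathbb{R}} \kappa e^{-2\kappa|x-y|}\, |x-y| \,dy
= 2\int_0^\infty \kappa u\, e^{-2\kappa u} \,du
= \tfrac{1}{2\kappa} .
\]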
\end{proof}
\begin{lemma}\label{L:kappa ls}
There is a $\delta>0$ so that the following is true: Let $Q$ be a family of Schwartz solutions to \eqref{KdV} on the line such that $\{q(0):q\in Q\}$ is an equicontinuous subset of $B_\delta$. Then, for any $\psi\in C^\infty_c({\mathbb{R}})$ and any $T>0$, we have
\begin{equation}\label{E:loc smoothing hi}
\lim_{\kappa\to\infty} \ \sup_{q\in Q} \ \int_{-T}^T \int \frac{\xi^2 |\widehat{q\psi} (t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt = 0.
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that $\supp(\psi)\subset (0,1)$. Throughout the proof, we regard $\psi$ and $T$ as fixed and implicit constants may depend on them. Multiplying \eqref{E:l5.1h} by $\kappa$ and integrating against
$$
\phi(x) = \int_{-\infty}^x \psi(y)^2\,dy,
$$
we obtain
\begin{align}
\int \kappa [\rho(T,x)-\rho(-T,x)] & \phi(x) \,dx \label{lsk1}\\
&= -\tfrac32 \int_{-T}^T \! \iint |q(t,x)|^2 \kappa e^{-2\kappa|x-y|}\psi(y)^2 \,dx\,dy\,dt \label{lsk2}\\
&\quad - 2\kappa \int_{-T}^T \! \int q(t,x)\bigl[ \kappa - \tfrac{1}{2g(t,x)}\bigr] \psi(x)^2 \,dx\,dt \label{lsk3}\\
&\quad + 4\kappa^3 \int_{-T}^T \! \int \rho(t,x) \psi(x)^2 \,dx\,dt.\label{lsk4}
\end{align}
We will discuss these terms one at a time.
From Proposition~\ref{P:Intro rho}, we have
$$
\bigl| \text{\eqref{lsk1}} \bigr| \lesssim \kappa \alpha(\kappa;q),
$$
which converges to zero as $\kappa\to\infty$ uniformly for $q\in Q$; see Lemma~\ref{L:equi 2}.
Combining \eqref{E:psi dumb} and Lemma~\ref{L:loc smoothing} yields
\begin{align*}
\biggl| \eqref{lsk2} + \tfrac32 \int_{-T}^T \! \int |q(t,x)|^2 \psi(x)^2 \,dx\,dt \biggr| \lesssim \kappa^{-1},
\end{align*}
or equivalently, by Plancherel,
\begin{align*}
\biggl| \eqref{lsk2} + \tfrac32 \int_{-T}^T \! \int |\widehat{\psi q}(t,\xi)|^2 \,d\xi\,dt \biggr| \lesssim \kappa^{-1}.
\end{align*}
From \eqref{E:psi 1/g} and Lemma~\ref{L:loc smoothing}, we have
\begin{align*}
\biggl| \eqref{lsk3} - \int_{-T}^T \! \iint \psi(x)q(t,x) \kappa e^{-2\kappa|x-y|} q(t,y)\psi(y) \,dx\,dy\,dt \biggr| \lesssim \kappa^{-1/2},
\end{align*}
or equivalently (see \eqref{R resolvent}),
\begin{align*}
\biggl| \eqref{lsk3} - \int_{-T}^T \! \int \frac{4\kappa^2|\widehat{\psi q}(t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt \biggr| \lesssim \kappa^{-1/2}.
\end{align*}
From \eqref{E:psi rho} and Lemma~\ref{L:loc smoothing},
$$
\biggl| \eqref{lsk4} - \tfrac{1}{2} \int_{-T}^T \! \int \frac{4\kappa^2|\widehat{\psi q}(t,\xi)|^2}{\xi^2+4\kappa^2} \,d\xi\,dt \biggr|
\lesssim \kappa^{-1/2}.
$$
The claim \eqref{E:loc smoothing hi} now follows from recombining \eqref{lsk1}--\eqref{lsk4}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:loc smoothing}]
By the same scaling argument as in Theorem~\ref{T:converge}, it suffices to prove \eqref{E:loc smoothing n} for solutions that are small in $L^\infty_t H^{-1}_x$. Moreover, we may assume that $q_n$ are Schwartz solutions, since Proposition~\ref{P:loc smoothing} in this reduced generality provides precisely the tool necessary to obtain the full version by approximation.
Let us fix $\psi\in C^\infty_c({\mathbb{R}})$, not identically zero, with $\supp(\psi)\subseteq(0,1)$. By a simple covering argument, it suffices to show that
\begin{align}\label{E:ls 1}
\lim_{n\to\infty} \ \sup_{x_0\in{\mathbb{R}}} \ \int_{-T}^T \Bigl\|\bigl[q_n(t,x)-q(t,x)\bigr] \psi(x+x_0) \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0.
\end{align}
We will do this by breaking into high- and low-frequency components, using a refined local smoothing argument to handle the former, and applying Theorem~\ref{T:converge} to handle the latter. The frequency decomposition is based on the multipliers
$$
m_\text{hi}(\xi) = \frac{|\xi|}{\sqrt{\xi^2+4\kappa^2}} \qtq{and} m_\text{lo}(\xi) = \sqrt{1 - m_\text{hi}(\xi)^2} = \frac{2\kappa}{\sqrt{\xi^2+4\kappa^2}}.
$$
We begin with the low frequencies. For $\kappa$ fixed, Theorem~\ref{T:converge} implies
\begin{align}
\lim_{n\to\infty} &\ \sup_{x_0\in{\mathbb{R}}}\ \int_{-T}^T \; \Bigl\|m_\text{lo}(-i\partial_x) \Bigl(\bigl[q_n(t)-q(t)\bigr] \psi(\cdot + x_0) \Bigr)\Bigr\|_{L^2({\mathbb{R}})}^2 \,dt\notag\\
&\lesssim \kappa T \lim_{n\to\infty}\ \sup_{x_0\in{\mathbb{R}}}\ \Bigl\| \bigl[q_n(t,x)-q(t,x)\bigr] \psi(x+x_0) \Bigr\|_{L^\infty_t H^{-1}_x ([-T,T]\times{\mathbb{R}})}\label{ls low} \\
&\lesssim \kappa T \|\psi\|_{H^1({\mathbb{R}})} \lim_{n\to\infty} \bigl\| q_n(t,x)-q(t,x) \bigr\|_{L^\infty_t H^{-1}_x ([-T,T]\times{\mathbb{R}})} =0.\notag
\end{align}
We turn now to the high-frequency part. As the sequence $q_n(0)$ is convergent in $H^{-1}({\mathbb{R}})$, it is equicontinuous there. Proposition~\ref{P:equi} then guarantees that $\{q_n(t) : t\in{\mathbb{R}}\text{ and } n\in{\mathbb{N}}\}$ is also $H^{-1}({\mathbb{R}})$-equicontinuous. Thus Lemma~\ref{L:kappa ls} implies
\begin{equation}\label{ls n high}
\lim_{\kappa\to\infty} \ \sup_{n} \int_{-T}^T \; \Bigl\|m_\text{hi}(-i\partial_x) [q_n(t)\psi(\cdot+x_0)] \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0.
\end{equation}
Note that by Theorem~\ref{T:converge} and weak lower-semicontinuity, it then follows that
\begin{equation}\label{ls q high}
\lim_{\kappa\to\infty} \int_{-T}^T \; \Bigl\|m_\text{hi}(-i\partial_x) [q(t)\psi(\cdot+x_0)] \Bigr\|_{L^2({\mathbb{R}})}^2 \,dt = 0.
\end{equation}
We are now ready to put the pieces together. From \eqref{ls n high} and \eqref{ls q high}, we see that we can make the high-frequency contribution to LHS\eqref{E:ls 1} small, uniformly in $n$, by choosing $\kappa$ sufficiently large. But then by \eqref{ls low}, we may make the low-frequency contribution as small as we wish by choosing $n$ sufficiently large. This proves \eqref{E:ls 1} and so \eqref{E:loc smoothing n}.
Lastly, integration by parts shows that Schwartz solutions are distributional solutions of the initial-value problem, which is to say
\begin{align*}
\int h(0,x)q(0,x)\,dx + \int_0^\infty \!&\! \int [\partial_t h](t,x) q(t,x) \,dx\,dt \\
& = \int_0^\infty \!\! \int - [\partial_x^3 h](t,x) q(t,x) + 3[\partial_x h](t,x) q(t,x)^2 \,dx\,dt
\end{align*}
for every $h\in C^\infty_c({\mathbb{R}}\times{\mathbb{R}})$ (as well as the analogous statement backwards in time). This now extends to $H^{-1}$ solutions via Corollary~\ref{C:1} and Proposition~\ref{P:loc smoothing}.
\end{proof}
\section{Introduction}
\textbf{Wearable technologies} are rapidly advancing thanks to the developments in consumer electronics, with activity trackers leading the way. However, these devices have yet to fulfill their promise of revolutionizing the way we live. The abandonment rate is relatively high as well. There are many hypotheses out there for why this could be, from perceived ``ugliness'' of the device design to lack of features \cite{Lazar:2015:WWU:2750858.2804288}. Our research agenda is to tackle both of these aspects, and in this paper we focus on the latter.
We are specifically interested in detecting \textbf{players' mental state} by using a wearable device. Our interviews with 6 professional coaches and 20 professional players show that: (1) players do not need/want wearable devices to track their fitness level since they are already self-aware in this respect, and (2) their biggest concern is tracking and learning to regulate their \textbf{mental states} \cite{havlucu2017understanding}.
In this study, we investigate whether wearable devices, specifically a commercially available activity tracker, can be used to detect more than an activity, such as a psychological state. To reach this goal, we set out to detect the \textbf{flow state}, the mental state of optimal performance of players as they play the game using wearables as a first step towards this end. Csikszentmihalyi \cite{csikszentmihalyi1990flow} defines the flow state as ``\textit{putting oneself in a state of optimal experience, the state in which people are so involved in an activity that nothing else seems to matter}''. The coaches we interviewed claim to be able to observe whether their players are in the flow state or not. This suggests that flow, or what coaches call being \textit{in-the-zone} and \textit{fall}, can be detected, at least for tennis.
Motivated by this, we performed a study involving an experienced coach working with professional tennis players and two of his students. Each player wore a wearable device which recorded data while the coach indicated when the players were in flow or not. Using the coach's labels as targets and the recorded data as inputs, we trained multiple \textbf{machine learning} models. We reached around 98\%
testing accuracy using \textbf{deep neural networks} for a variety of conditions involving multiple data combinations. Our results show that the \textbf{flow state} can be detected using wearables data from an \textbf{Inertial Measuring Unit (IMU)}. To the best of our knowledge, this has never been demonstrated before.
\subsection{Related Work}
Existing work on wearables data in sports mostly concentrates on activity recognition. Um et al.\ use \textit{deep learning} to classify exercise motion from large-scale wearable sensor data, achieving 92.14\% accuracy with a 3-layered Convolutional Neural Network (CNN) \cite{um2016exercise}. In \cite{chernbumroong2011activity}, 5 activities including sitting, standing, lying, walking, and running are classified using Decision Trees and Artificial Neural Networks with a wrist-worn accelerometer. The authors use 4 separate feature sets from the time and frequency domains, achieving 94.13\% accuracy with their best models. In \cite{connaghan2011multi}, the authors classify tennis strokes - forehand, backhand, and serves - of the players using an IMU equipped with accelerometer, gyroscope and magnetometer sensors. They achieved 90\% accuracy using the fusion of the accelerometer, gyroscope and magnetometer sensors. There are other studies that take advantage of IMU sensors in wearable devices. In \cite{spriggs2009temporal}, the authors use a camera and IMU for temporally segmenting human motion into primitive actions. Our work focuses on detecting the flow state as opposed to a specific activity.
Existing literature about flow state detection includes different sensors and are concerned with different tasks. In \cite{bian2016framework}, the heart rate, interbeat interval, heart rate variability (HRV), high-frequency HRV (HF-HRV), and respiratory rate are argued to be effective indicators of flow. In \cite{de2010psychophysiology}, the authors try to find a relationship between subjective flow and psychophysiological measures while playing piano. They measure arterial pulse pressure, respiration, head movements (via a 3-axis accelerometer) and certain facial muscle activity. They did not find any significant relationship between flow and the head movements. In \cite{nacke2008flow}, the authors use electroencephalography, electrocardiography, electromyography, galvanic skin response, and eye tracking equipment to detect the flow state of participants playing a video game. Our approach of detecting flow in a sports application using IMUs has not been done before.
\begin{table}[t]
\caption{The collected motion data and their respective ranges.}
\label{tab:rawrange}
\centering
\begin{tabular}{r c c}
& \multicolumn{2}{c}{\small{\textbf{Intervals}}} \\
\cmidrule(r){2-3}
& {\small \textbf{Min}}
& {\small \textbf{Max}} \\
\toprule
{\small GravityX} & -1.0 & 1.0 \\
{\small GravityY} & -1.0 & 1.0 \\
{\small GravityZ} & -1.0 & 1.0 \\
\midrule
{\small AccelerationX} & -17.1031 & 7.0596 \\
{\small AccelerationY} & -16.2396 & 16.6477 \\
{\small AccelerationZ} & -16.3296 & 16.8778 \\
\midrule
{\small RotationRateX} & -26.3145 & 40.8265 \\
{\small RotationRateY} & -39.8416 & 32.0937 \\
{\small RotationRateZ} & -35.2566 & 25.3197 \\
\midrule
{\small AttitudeYAW} & $-\pi$ & $\pi$ \\
{\small AttitudeROLL} & $-\pi$ & $\pi$ \\
{\small AttitudePITCH} & $-\pi/2$ & $\pi/2$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Method}
\subsection{Data Collection}
The data was collected using two Apple Watches (Series 2) linked to two Apple iPhones, worn by two tennis players on their racket holding wrists during a match. The flow labels were recorded separately by the players' coach as a binary variable. The duration of the match was 74 minutes.
The devices start recording before the match begins. In order to capture the data, we use a self-developed application that collects the raw data. The players locate themselves in a corner of the field and raise their hand for 3 seconds. The players then move down the line to the other corner and raise their hand for another 3 seconds. This procedure is done to synchronize the recordings. The motion data was collected for each player for every decisecond (i.e. with 10 Hz) throughout the match. Table \ref{tab:rawrange} summarizes the data and their respective ranges in the recorded data.
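The paper does not give the alignment code; one simple realization of the raised-hand synchronization is to pick the lag that maximizes the cross-correlation of a channel responding strongly to the gesture (e.g. vertical acceleration). The sketch below is our own hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def sync_offset(sig_a, sig_b, max_lag=600):
    """Estimate the lag (in samples) that best aligns two recordings.

    Returns the integer `lag` such that sig_b[t] best matches sig_a[t + lag],
    found by brute-force maximization of the dot-product overlap.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = sig_a[lag:], sig_b          # compare sig_a[lag+i] with sig_b[i]
        else:
            a, b = sig_a, sig_b[-lag:]         # compare sig_a[i] with sig_b[i-lag]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

At 10 Hz, a 3-second raised hand gives a 30-sample plateau in the gesture channel, which makes this maximum well defined in practice.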
The flow labels are recorded on a separate iPhone via another application. The coach observes the match and uses volume up and volume down keys to capture the flow state while speaking out the labels. The flow labels are also written down by one of the researchers next to the coach. This is done to cross-validate the recorded labels from the app in case the coach forgets or mis-presses the buttons on the device.
As our wearables data, we use motion data captured by an IMU sensor including gravity relative coordinate axes (3D), acceleration along these axes (3D), rotation rate (angular velocity) about these axes (3D), and attitude relative to the magnetic north reference frame (YAW, ROLL, PITCH). The players' heart rates and GPS locations were also recorded to help with flow detection, but large chunks of missing data and the poor accuracy of GPS rendered these useless.
\subsection{Data Cleaning and Preprocessing}
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{figures/flows}
\caption{Flow states of the players throughout the match. 1 denotes \textit{flow} and -1 denotes \textit{fall}. In the bottommost plot P2 has been shifted by 0.05 for visualization purposes.(P1: Player 1 - P2: Player 2)}~\label{fig:flows}
\end{figure}
The motion data was pre-processed before being used. The duplicated entries were removed. Next, the data was averaged using a sliding window of size 5. Then, the entries were scaled between -1 and 1.
The working data for the models contain 44,516 entries for each player, amounting to around 74 minutes. Players 1 and 2 are in flow 51.11\% and 49.95\% of the time, respectively. Figure \ref{fig:flows} shows the flow states of the players during the match using the pre-processed data. 1 denotes \textit{flow} and -1 denotes \textit{fall}.
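The cleaning steps above can be sketched in a few lines of NumPy (our illustration; the paper does not give its implementation, and we assume the scaling is applied per channel):

```python
import numpy as np

def preprocess(raw):
    """Deduplicate, smooth with a width-5 sliding average, scale to [-1, 1].

    raw: array of shape (T, 12) -- one row per decisecond, 12 IMU channels.
    """
    # Remove exact duplicate rows, keeping first occurrences in time order.
    _, keep = np.unique(raw, axis=0, return_index=True)
    data = raw[np.sort(keep)]

    # Moving average over a sliding window of size 5, per channel.
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, data)

    # Rescale every channel linearly into [-1, 1] (per-channel min/max).
    lo = smoothed.min(axis=0)
    hi = smoothed.max(axis=0)
    return 2.0 * (smoothed - lo) / (hi - lo) - 1.0
```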
\section{Learning}
\begin{table}[b]
\caption{Train/test split combinations used in models. (B: Both Players - P1: Player 1 - P2: Player 2)}
\label{tab:datacombination}
\centering
\begin{tabular}{r c c}
& \multicolumn{2}{c}{\small{\textbf{Splits}}} \\
\cmidrule(r){2-3}
& {\small \textbf{Train}}
& {\small \textbf{Test}} \\
\toprule
{\small B-B} & 0.9(P1+P2) & 0.1(P1+P2) \\
{\small B-P1} & 0.9P1+1.0P2 & 0.1P1 \\
{\small B-P2} & 1.0P1+0.9P2 & 0.1P2 \\
{\small P1-P1} & 0.9P1 & 0.1P1 \\
{\small P1-P2} & 1.0P1 & 1.0P2 \\
{\small P2-P1} & 1.0P2 & 1.0P1 \\
{\small P2-P2} & 0.9P2 & 0.1P2 \\
\bottomrule
\end{tabular}
\end{table}
The pre-processed data along with coach labels were used to train multiple binary-classifiers. We use 7 different data combinations and corresponding train-test splits to evaluate our models. These combinations are presented in Table~\ref{tab:datacombination}.
Conventional methods other than k-Nearest Neighbors (kNN) and random forests performed poorly, barely beating random choice (50\% accuracy), as shown in Figure~\ref{fig:modelsum}.
Due to the poor performance of the conventional approaches, we used convolutional neural networks (CNNs) and recurrent neural networks (RNNs). To further capture the sequential nature of the data, the inputs to these models are formed by combining 10 sequential data points in a sliding-window fashion, resulting in an input dimensionality of $10\times 12$.
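Concretely, the windowing can be implemented as follows (a hypothetical helper of ours; the paper does not specify which label a window receives, so here we take the last time step's label):

```python
import numpy as np

def make_windows(data, labels, width=10):
    """Stack `width` consecutive rows into one sample, sliding by one step.

    data: (T, 12) preprocessed IMU matrix; labels: (T,) flow labels.
    Returns inputs of shape (T - width + 1, width, 12) together with the
    label of each window's last time step.
    """
    n = data.shape[0] - width + 1
    windows = np.stack([data[i:i + width] for i in range(n)])
    return windows, labels[width - 1:]
```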
The CNN model has three 2D convolutional layers. The first layer has an output size of 128 with a kernel size of $1\times 12$. The second and third layers have size 256 and 512 respectively with kernel size of 1, which results in weight sharing between each time step.
The activation function of each hidden unit is ReLU, and batch normalization is applied after each convolutional layer. The last convolutional layer is followed by a fully connected layer of size 128 with a ReLU activation function and an output layer of size 2. The softmax function is applied to the output to get flow-state probabilities.
The RNN model has a single Long Short-Term Memory (LSTM) layer with a hidden size of 512, attached to a fully connected layer of size 128 with a ReLU activation function and an output layer of size 2. The softmax function is applied to the output to get flow-state probabilities.
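A PyTorch sketch of the CNN follows. This is our reconstruction from the description above, not the authors' code; in particular, the framework used and the exact ordering of batch normalization and ReLU are not specified in the paper:

```python
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    """3-layer CNN over (batch, 1, 10, 12) IMU windows, per the text above."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Layer 1: 128 maps, 1x12 kernel mixes the 12 channels per step.
            nn.Conv2d(1, 128, kernel_size=(1, 12)),
            nn.BatchNorm2d(128), nn.ReLU(),
            # Layers 2-3: 1x1 kernels, i.e. weights shared across time steps.
            nn.Conv2d(128, 256, kernel_size=1),
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.Dropout(0.25),  # dropout after the second convolutional layer
            nn.Conv2d(256, 512, kernel_size=1),
            nn.BatchNorm2d(512), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # 512 * 10 * 1 = 5120 features
            nn.Linear(512 * 10, 128), nn.ReLU(),
            nn.Dropout(0.5),                   # dropout before the output layer
            nn.Linear(128, 2),                 # softmax applied via the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

The LSTM variant replaces `self.features` with a single `nn.LSTM(12, 512)` over the 10 time steps, feeding its final hidden state into the same classifier head.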
To train both models, we used the Adam optimizer with a learning rate of 0.001 (no decay) and a mini-batch size of 64 to minimize the cross-entropy loss. We included a dropout rate of 0.5 before the output layer in both models. We further included a dropout rate of 0.25 after the second convolutional layer in the CNN-based model.
The kNN model uses 1 neighbor and the SVM model uses RBF kernel with $\gamma = 1/12$ and soft-margin cost of $C=1000$ (the latter selected via cross-validation). Figure \ref{fig:modelsum} illustrates the testing accuracy of the models for the data combinations depicted in Table~\ref{tab:datacombination}.
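As a baseline reference, the 1-nearest-neighbor classifier can be written in a few lines (illustrative only, with Euclidean distance assumed):

```python
import numpy as np

def knn1_predict(train_X, train_y, test_X):
    """1-nearest-neighbor prediction with Euclidean distance.

    train_X: (N, d) flattened windows; train_y: (N,) labels; test_X: (M, d).
    """
    # Squared distances via |a-b|^2 = |a|^2 - 2 a.b + |b|^2 (vectorized).
    d2 = (np.sum(test_X**2, axis=1)[:, None]
          - 2.0 * test_X @ train_X.T
          + np.sum(train_X**2, axis=1)[None, :])
    return train_y[np.argmin(d2, axis=1)]
```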
\begin{figure}
\includegraphics[width=1\columnwidth]{figures/model_sum}
\caption{Results of models - the horizontal line represents 50\% accuracy i.e. random chance. (B: Both Players - P1: Player 1 - P2: Player 2)}~\label{fig:modelsum}
\end{figure}
\section{Results}
\textit{Coach's perception of flow can be detected from IMU data in tennis with high accuracy.} The CNN and LSTM models reach around 98\% testing accuracy. Even with a simple approach like kNN, we get around 75\% accuracy.
\textit{Flow cannot be generalized from a single player.} All the methods are at around 50\% accuracy when we look at the P1-P2 and P2-P1 parts of Figure~\ref{fig:modelsum}. This shows that data from just one player cannot be used to detect flow in others.
\textit{Flow may be generalized with more player data.} When we look at the B-B, B-P1 and B-P2 parts of Figure~\ref{fig:modelsum}, we can see that the results are as good as or better than in the P1-P1 and P2-P2 cases. This shows that using more player data may improve flow detection accuracy. It also suggests that with more data, we may be able to detect flow in players we have not trained with, but this needs further study.
\textit{Deep neural network based models outperform conventional methods.} The models based on CNN and LSTM have the best results, with CNN slightly better than LSTM for combinations other than P1-P2 and P2-P1. SVM performs very poorly, barely beating the random chance of 50\% accuracy. The kNN approach with one neighbor is more successful, but it is still not competitive\footnote{The story is similar with other conventional methods.}. One reason is that these methods do not account for the sequential nature of the data; utilizing methods such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) may help. However, our preliminary trials with augmented states (concatenated multiple time steps) and HMMs lagged behind the deep models.
We think that the flow signal is in the IMU data but we need sophisticated models with lots of data to detect it.
\section{Implications and Future Work}
Our end goal is to be able to detect the flow state in professional tennis players. The novel results presented in the previous section strongly support this aim. There are two main directions for this study: to verify and advance flow state detection, and to use the successful detection results to develop devices and interaction paradigms to be used in training to regulate the flow state.
Even though the results are highly encouraging, there are still certain challenges to be addressed. These are:
(1) do we need training data for a player to be able to detect his/her flow state, or can we collect enough data to generalize across professional tennis players? In other words, can we detect flow in a player we have no training data for? (2) do the movements of professional tennis players change over time and with training in a way that affects flow detection in the future? In other words, could the data we collect now be used to detect flow in a player in the future as well?
(3) is what the coach perceives as flow really flow, and does this matter?
To address challenges (1) and (2) we need to conduct further studies and collect more data. To address (2) specifically, we need to collect data from the same players over time. Collecting more data is our immediate future work. To address the first half of challenge (3), we need to be able to measure flow directly and to see whether the coach labels are correlated with the measurements. There is no easy way to do this with the current body of flow state knowledge. To address the second half of this challenge, we are planning to follow the first research direction and develop a wearable device for training and see if it works.
A problem that tennis players face is that tennis is a lonely sport and it is hard for them to recover after they lose concentration \cite{havlucu2017understanding}. It is important for these players to train mentally to be able to cope with such difficulties. Not all players get to train with capable coaches or get enough individual training time with them. A wearable device that can help with such mental training would be invaluable. Detecting whether the player is in flow state or not is the first step towards this end. For example, if the device detects that the player goes out-of-flow, it can interact with the user or provide feedback - which is necessary to maintain flow - to help the player get back in flow. We are going to conduct user studies to validate the device and our approach in general.
\section{Conclusion}
In this study, we concentrate on the flow state, mental state of optimal performance, in tennis. We collect flow labels from a professional coach during a tennis match between two of his players and IMU data from the players themselves. We then train several models using this data.
Our findings show that flow, or what the coach perceived as flow, can be detected from IMU data. The most successful methods, two deep learning models, reach around 98\% testing accuracy in a variety of data combinations. The results are the same or better if we have both players' data in the training set. However, one player's data cannot be used to detect flow in the other player. These findings about flow state detection are a first in the field.
There are two immediate directions for this study. First, to address data collection and generalization challenges in flow state detection; and second, to develop devices and interaction paradigms to help professional tennis players to train to regulate their flow. We are interested in pursuing both of these directions simultaneously.
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bf}}
\makeatother
\makeatletter
\renewcommand\subsection{\@startsection {subsection}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\normalsize\bf}}
\makeatother
\makeatletter
\renewcommand\subsubsection{\@startsection {subsubsection}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\normalsize\it}}
\makeatother
\usepackage[%
bookmarks=true,%
bookmarksdepth=3,%
bookmarksnumbered=true,%
setpagesize=false,%
pdftitle={Sample path behaviors of L\'{e}vy processes %
conditioned to avoid zero},%
pdfauthor={Shosei Takeda},%
pdfkeywords={one-dimensional L\'{e}vy process; %
conditioning; sample path properties; limit theorem}%
]{hyperref}
\title{\Large\textbf{Sample path behaviors of L\'{e}vy processes
conditioned to avoid zero}}
\author{Shosei Takeda\footnote{Rakunan High School,
Kyoto,
Japan.}
\footnote{The research of this author was supported by
JSPS Open Partnership Joint Research Projects grant no. JPJSBP120209921.}}
\date{}
\begin{document}
\maketitle
\begin{abstract}
Takeda--Yano~\cite{me} determined
the limit of L\'{e}vy processes conditioned to avoid zero via various random clocks
in terms of Doob's \(h\)-transform, where
the limit processes may differ according to the choice of random clocks.
The purpose of this paper is to investigate sample path behaviors
of the limit processes in long time and in short time.
\end{abstract}
{\small Keywords and phrases: one-dimensional L\'{e}vy process;
conditioning; sample path property; limit theorem \\
MSC 2020 subject classifications: 60G17 (60F05; 60G51; 60J25)}
\section{Introduction}
For a measure \(\mu\) and for a non-negative measurable
or an integrable function \(f\),
we write \(\mu\sbra{f}\) for the integral \(\int f\,\mathrm{d}\mu\) for simplicity.
Let \(\cbra{\ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\, +}\colon x\ge 0}\)
(resp. \(\cbra{\ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\,-}\colon x\le 0}\))
denote the law of (resp.\ the negative of) the three-dimensional
Bessel process, starting from \(x\). For \(x\in\ensuremath{\mathbb{R}}\),
let \(\ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}\) denote
the law of the symmetrized three-dimensional Bessel process
starting from \(x\), i.e., it holds that
\(\ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}=\ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\, +}\) for \(x>0\),
\(\ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}=\ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\, -}\) for \(x<0\)
and \(\ensuremath{\mathbb{P}}_0^{\mathrm{sBes}}=\frac{1}{2}\ensuremath{\mathbb{P}}_0^{\mathrm{Bes}\, +}
+\frac{1}{2}\ensuremath{\mathbb{P}}_0^{\mathrm{Bes}\, -}\).
Let \(x\in \ensuremath{\mathbb{R}} \setminus \cbra{0}\) and
let \((X=(X_t,t\ge 0),\ensuremath{\mathbb{P}}_x^B)\)
be the canonical representation of a standard Brownian motion starting from \(x\)
and let \((\ensuremath{\mathcal{F}}_t)\) denote the right-continuous
enlargement of the natural filtration of \(X\).
We denote by \(T_0=\inf\cbra{t>0\colon X_t=0}\) the first hitting time of the origin.
Then
we have the following conditioning limit theorem:
for any bounded \(\ensuremath{\mathcal{F}}_t\)-measurable functional \(F_t\),
it holds that
\begin{align}\label{eq:intro}
\lim_{s\to\infty}\ensuremath{\mathbb{P}}_x^B\sbra{F_t| T_0>s} = \ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}\sbra{F_t}.
\end{align}
This means that the Brownian motion conditioned to avoid zero
up to time \(t\) converges in law to the symmetrized three-dimensional Bessel process.
The left-hand side of~\eqref{eq:intro} can be regarded as the
\textit{Brownian motion conditioned to avoid zero}.
We remark that the three-dimensional Bessel process is transient
and never hits the origin.
For \(x\in\ensuremath{\mathbb{R}}\setminus\cbra{0}\),
the process \(\ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}\) can be written
via Doob's \(h\)-transform with respect to
the non-negative harmonic function \(h(x)=\abs{x}\)
of the killed Brownian motion \(\cbra{\ensuremath{\mathbb{P}}_x^{B,0}\colon x\in \ensuremath{\mathbb{R}}\setminus\cbra{0}}\)
as follows:
\begin{align}
\ensuremath{\mathbb{P}}_x^{\mathrm{sBes}}|_{\ensuremath{\mathcal{F}}_t}
= \frac{\abs{X_t}}{\abs{x}}\ensuremath{\mathbb{P}}_x^{B,0}|_{\ensuremath{\mathcal{F}}_t},\quad t>0.
\end{align}
Let \(n^B\) stand for the Brownian excursion measure.
Then \(\ensuremath{\mathbb{P}}_0^{\mathrm{sBes}}\) can also be written as
\begin{align}
\ensuremath{\mathbb{P}}_0^{\mathrm{sBes}}|_{\ensuremath{\mathcal{F}}_t}
= \frac{\abs{X_t}}{n^B\sbra{\abs{X_t}}}\cdot n^B|_{\ensuremath{\mathcal{F}}_t},
\quad t>0.
\end{align}
These results for Brownian motions were generalized to
one-dimensional L\'{e}vy processes.
Yano~\cite{MR2603019} constructed and investigated
one-dimensional L\'{e}vy processes conditioned to avoid zero
under the conditions that the process is symmetric and has no Gaussian part.
He also investigated path behaviors of the process.
Yano~\cite{MR3072331} extended these results
to asymmetric L\'{e}vy processes.
He also showed the existence of a non-negative harmonic function
for asymmetric killed L\'{e}vy processes
under some technical conditions.
Pant\'{\i}~\cite{MR3689384} and Tsukada~\cite{MR3838874}
showed the existence of the harmonic function
under more general conditions
and Pant\'{\i}~\cite{MR3689384} investigated asymmetric
L\'{e}vy processes conditioned to avoid zero using
\(h\)-transform with respect to its harmonic function.
Recently, Takeda--Yano~\cite{me} obtained
a family of harmonic functions \(h^{(\gamma)}\) parametrized by \(-1\le\gamma\le 1\)
for the killed L\'{e}vy process
under more general conditions.
They also constructed the L\'{e}vy process conditioned to avoid zero,
using the \(h^{(\gamma)}\)-transform.
In this paper, we investigate the path behaviors of
the L\'{e}vy processes conditioned to avoid zero that are
constructed in Takeda--Yano~\cite{me}.
\subsection{L\'{e}vy processes conditioned to avoid zero}
We shall recall the construction of L\'{e}vy processes
conditioned to avoid zero in Takeda--Yano~\cite{me}.
For more details of the notation of this section,
see Section~\ref{Sec:preliminaries}.
Let \((X=(X_t,t\ge 0),(\ensuremath{\mathbb{P}}_x)_{x\in\ensuremath{\mathbb{R}}})\) denote the canonical representation
of a one-dimensional L\'{e}vy process
and we write \(\ensuremath{\mathbb{P}}=\ensuremath{\mathbb{P}}_0\).
Throughout this paper, we always assume
the following condition~\ref{item:assumption}:
\begin{enumerate}[label=\textbf{(\Alph*)}]
\item The process \((X,\ensuremath{\mathbb{P}})\) is recurrent and, for each \(q > 0\), it holds that
\begin{align}
\int_0^\infty
\abs*{\frac{1}{q+\varPsi(\lambda)}}
\, \mathrm{d} \lambda < \infty,
\end{align}
where \(\varPsi(\lambda)\) denotes the characteristic exponent
given by \(\ensuremath{\mathbb{P}}\sbra{\mathrm{e}^{\mathrm{i} \lambda X_t}}
=\mathrm{e}^{-t\varPsi(\lambda)}\).\label{item:assumption}
\end{enumerate}
Let \(T_A=\inf\cbra{t>0\colon X_t\in A}\) stand for the hitting time of
a set \(A\subset\ensuremath{\mathbb{R}}\) and
we simply write \(T_a\coloneqq T_{\cbra{a}}\)
for the hitting time of a point \(a\in\ensuremath{\mathbb{R}}\).
The condition~\ref{item:assumption} implies that
\(\ensuremath{\mathbb{P}}(T_0=0)=1\) and \((X,\ensuremath{\mathbb{P}})\) is not
compound Poisson. In addition, there exists
the \(q\)-resolvent density \(r_q\) for \(q>0\).
For \(x\in\ensuremath{\mathbb{R}}\), we define
\(h_q(x)=r_q(0)-r_q(-x)\ge 0\) and
\begin{align}
h(x)= \lim_{q\to 0+}h_q(x), \quad x\in \ensuremath{\mathbb{R}},
\end{align}
which is called the \emph{renormalized zero resolvent};
see (\ref{Lem-item:h-exist}) of Lemma~\ref{Lem:h}.
The function \(h\) is subadditive;
see (\ref{Lem-item:h-subadditive}) of Lemma~\ref{Lem:h}.
We denote the second
moment of \(X_1\) by
\begin{align}
m^2=\ensuremath{\mathbb{P}}\sbra{X_1^2}\in (0,\infty].
\end{align}
For \(-1\le \gamma\le 1\), define
\begin{align}
h^{(\gamma)}(x)=h(x)+\frac{\gamma}{m^2}x,\quad x\in\ensuremath{\mathbb{R}}.
\end{align}
If \(m^2=\infty\), the functions \(h^{(\gamma)}\)
coincide with \(h\) for all \(-1\le\gamma\le 1\).
The function \(h^{(\gamma)}\) is non-negative
(see~\eqref{eq:h-g-plus}) and subadditive.
Let \(\ensuremath{\mathbb{P}}_x^0\) denote the law under \(\ensuremath{\mathbb{P}}_x\) of the killed process
\begin{align}
X^0_t = \begin{cases}
X_t & \text{if \(t< T_0\)}, \\
\Delta & \text{if \(t\ge T_0\)},
\end{cases}
\end{align}
where \(\Delta\) stands for a cemetery point.
Let \(n\) denote It\^{o}'s excursion measure
normalized by the equation
\begin{align}\label{eq:n-normalize}
n\sbra{1-\mathrm{e}^{-qT_0}}=\frac{1}{r_q(0)},\quad q>0;
\end{align}
see Section~\ref{Sec:preliminaries}.
The next lemma says \(h^{(\gamma)}\) is harmonic for the killed process.
\begin{Lem}[\cite{me}]\label{Lem:harmonic}
Assume the condition~\ref{item:assumption} is satisfied.
For \(-1\le \gamma\le 1\) and \(x\in\ensuremath{\mathbb{R}}\), it holds that
\begin{align}\label{eq:harmonic}
\ensuremath{\mathbb{P}}^0_x\sbra{h^{(\gamma)}(X_t)} = h^{(\gamma)}(x)
\quad\text{and}\quad
n\sbra{h^{(\gamma)}(X_t)} = 1,\quad t>0.
\end{align}
In particular, the process \((h^{(\gamma)}(X_t),t> 0)\) is
a non-negative \(\ensuremath{\mathbb{P}}^0_x\)-martingale.
\end{Lem}
The proof of Lemma~\ref{Lem:harmonic} can be found in~\cite[Theorem 8.1]{me}
and~\cite[(iii) of Theorem 2.2]{MR3689384}.
For \(-1\le\gamma\le 1\),
define \(\ensuremath{\mathcal{H}}^{(\gamma)} = \cbra{x\in\ensuremath{\mathbb{R}}\colon h^{(\gamma)}(x)>0 }\) and
\(\ensuremath{\mathcal{H}}^{(\gamma)}_0 = \ensuremath{\mathcal{H}}^{(\gamma)} \cup \cbra{0}\).
If \(m^2=\infty\), we have \(\ensuremath{\mathcal{H}}^{(\gamma)}=\ensuremath{\mathcal{H}}^{(0)}\).
If \(m^2<\infty\), we have \(\ensuremath{\mathcal{H}}^{(1)}_0\cap \ensuremath{\mathcal{H}}^{(-1)}_0
\subset \ensuremath{\mathcal{H}}^{(\gamma)}_0=\ensuremath{\mathbb{R}}\) for \(-1<\gamma<1\) by~\eqref{eq:h-g-plus}.
Adopting Doob's \(h\)-transform approach,
we construct the \(h^{(\gamma)}\)-transform by
\begin{align}\label{eq:def-h-trans}
\ensuremath{\mathbb{P}}_x^{(\gamma)}|_{\ensuremath{\mathcal{F}}_t}
= \begin{dcases}
\frac{h^{(\gamma)}(X_t)}{h^{(\gamma)}(x)}
\cdot \ensuremath{\mathbb{P}}^0_x|_{\ensuremath{\mathcal{F}}_t} & \text{if } x\in\ensuremath{\mathcal{H}}^{(\gamma)}, \\
h^{(\gamma)}(X_t) \cdot n|_{\ensuremath{\mathcal{F}}_t} & \text{if } x = 0.
\end{dcases}
\end{align}
Note that, if \(m^2=\infty\), we have \(\ensuremath{\mathbb{P}}_x^{(\gamma)}=\ensuremath{\mathbb{P}}_x^{(0)}\)
for all \(-1\le \gamma\le 1\).
By Lemma~\ref{Lem:harmonic}, we see that
\(\ensuremath{\mathbb{P}}_x^{(\gamma)}|_{\ensuremath{\mathcal{F}}_t}\) is consistent in \(t>0\)
and thus \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\) is well-defined
and is a probability measure on \(\ensuremath{\mathcal{F}}_\infty\);
for more details, see Yano~\cite[Theorem 9.1]{yano2021universality}.
We can see that \(\ensuremath{\mathbb{P}}^{(\gamma)}_x(T_{\ensuremath{\mathbb{R}}\setminus\ensuremath{\mathcal{H}}^{(\gamma)}}\le t)=0\) for all \(t>0\),
and consequently it holds that
\(\ensuremath{\mathbb{P}}^{(\gamma)}_x(T_{\ensuremath{\mathbb{R}}\setminus\ensuremath{\mathcal{H}}^{(\gamma)}}=\infty)=1\).
Hence the process \((X, \ensuremath{\mathbb{P}}^{(\gamma)}_x)\) never hits zero.
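The fact that the process \((X,\ensuremath{\mathbb{P}}^{(\gamma)}_x)\) stays in \(\ensuremath{\mathcal{H}}^{(\gamma)}\)
can be sketched as follows, writing \(T\coloneqq T_{\ensuremath{\mathbb{R}}\setminus\ensuremath{\mathcal{H}}^{(\gamma)}}\)
and assuming that optional stopping applies and that \(h^{(\gamma)}(X_T)=0\)
on \(\cbra{T\le t}\): for \(x\in\ensuremath{\mathcal{H}}^{(\gamma)}\),
\begin{align}
\ensuremath{\mathbb{P}}^{(\gamma)}_x(T\le t)
= \frac{\ensuremath{\mathbb{P}}^0_x\sbra{h^{(\gamma)}(X_t)1_{\cbra{T\le t}}}}{h^{(\gamma)}(x)}
= \frac{\ensuremath{\mathbb{P}}^0_x\sbra{h^{(\gamma)}(X_T)1_{\cbra{T\le t}}}}{h^{(\gamma)}(x)}
= 0,
\end{align}
where the middle equality follows from the martingale property of
\((h^{(\gamma)}(X_t),t>0)\) in Lemma~\ref{Lem:harmonic};
the case \(x=0\) is analogous with \(n\) in place of \(h^{(\gamma)}(x)^{-1}\ensuremath{\mathbb{P}}^0_x\).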
We remark that, for \(x\in\ensuremath{\mathcal{H}}^{(\gamma)}\), the measure
\(\ensuremath{\mathbb{P}}^{(\gamma)}_x\) is absolutely continuous on \(\ensuremath{\mathcal{F}}_t\)
with respect to \(\ensuremath{\mathbb{P}}_x\), but is singular on \(\ensuremath{\mathcal{F}}_\infty\) to \(\ensuremath{\mathbb{P}}_x\)
since
\(\ensuremath{\mathbb{P}}_x^{(\gamma)}(T_0=\infty) = \ensuremath{\mathbb{P}}_x(T_0<\infty)=1\).
The next theorem shows that for \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\),
the measure \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\) can be obtained
as the limit measure of the L\'{e}vy process conditioned to avoid zero
via a random clock, i.e., a certain parametrized family of random times, going
to infinity.
Let \(\bm{e}\) be an independent exponential time with mean
\(1\) and we write \(\bm{e}_q=\bm{e}/q\).
\begin{Thm}[\cite{me}]\label{Thm:limit-meas}
Assume the condition~\ref{item:assumption} is satisfied.
Let \(t>0\) and \(F_t\) be a bounded \(\ensuremath{\mathcal{F}}_t\)-measurable functional.
Then the following assertions hold:
\begin{enumerate}
\item
\(\displaystyle
\lim_{q\to 0+}
\ensuremath{\mathbb{P}}_x\sbra{F_t|T_0>\bm{e}_q}=\ensuremath{\mathbb{P}}_x^{(0)}\sbra{F_t}
\), for \(x\in \ensuremath{\mathcal{H}}^{(0)}\);\label{Thm-item:limit-meas-exp}
\item
\(\displaystyle
\lim_{a\to\pm\infty}
\ensuremath{\mathbb{P}}_x\sbra{F_t|T_0>T_a}=\ensuremath{\mathbb{P}}_x^{(\pm 1)}\sbra{F_t}
\), for \(x\in \ensuremath{\mathcal{H}}^{(\pm 1)}\);
\item
\(\displaystyle
\lim_{\substack{a\to\infty,\,b\to\infty,\\ \frac{a-b}{a+b}\to \gamma}}
\ensuremath{\mathbb{P}}_x\sbra{F_t|T_0>T_{\cbra{a,-b}}}=\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra{F_t}
\), for \(-1\le \gamma\le 1\)
and \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\).\label{Thm-item:limit-meas-twohitting}
\end{enumerate}
\end{Thm}
The proof of Theorem~\ref{Thm:limit-meas} can be found in Corollary 8.2 of~\cite{me}.
Claim (\ref{Thm-item:limit-meas-exp}) of Theorem~\ref{Thm:limit-meas}
is also proved in Pant\'{\i}~\cite[Theorem 2.7]{MR3689384}.
If \(m^2<\infty\), then the limit measure differs
according to the random clock.
We remark that the limit
\( \lim_{s\to\infty}
\ensuremath{\mathbb{P}}_x\sbra{F_t|T_0>s}\) via the constant clock is determined in the symmetric stable case
(see Yano--Yano--Yor~\cite{MR2552915}),
but the limit remains an open problem in the general L\'{e}vy case.
The left-hand side
of each of~(\ref{Thm-item:limit-meas-exp})--(\ref{Thm-item:limit-meas-twohitting})
of Theorem~\ref{Thm:limit-meas} can be regarded as
\emph{L\'{e}vy processes conditioned to avoid zero}
although the resulting processes may differ according to the choice of the clocks.
We remark that the resulting processes are characterized via Doob's \(h\)-transform.
For related studies, see
Chaumont~\cite{MR1419491} and
Chaumont--Doney~\cite{MR2164035} for
L\'{e}vy processes conditioned to stay positive,
and Yano--Yano~\cite{MR3444297} for diffusions conditioned to avoid zero.
Let \(\ensuremath{\mathcal{D}}\) denote the space of c\`{a}dl\`{a}g paths
\(\omega\colon [0,\infty)\to \ensuremath{\mathbb{R}}\cup \cbra{\Delta}\).
We denote by \(\theta\) the shift operator and by \(k\) the killing operator, i.e.,
we define, for \(\omega\in \ensuremath{\mathcal{D}}\) and \(t\ge 0\), \(\theta_t\omega(s)=\omega(s+t),\;
s\ge 0\), and define
\begin{align}
k_t \omega(s)=\begin{cases}
\omega(s) & \text{if \(s<t\),} \\
\Delta & \text{if \(s\ge t\).}
\end{cases}
\end{align}
For \(s>0\), we denote by \(g_s=\sup\cbra{u\le s\colon X_u=0}\)
the last hitting time of the origin up to time \(s\).
Then we have, for \(\tau>0\),
\begin{align}
k_{\tau-g_\tau}\circ \theta_{g_\tau} \omega(s)=
\begin{cases}
\omega(g_\tau +s) & \text{if \(0\le s<\tau-g_\tau\)}, \\
\Delta & \text{if \(s\ge \tau-g_{\tau}\)}.
\end{cases}\end{align}
The next theorem shows that for \(x=0\),
the measure \(\ensuremath{\mathbb{P}}_x^{(\gamma)}=\ensuremath{\mathbb{P}}_0^{(\gamma)}\) can be obtained
as the limit, via a random clock, of a measure similar to
the L\'{e}vy meander.
\begin{Thm}\label{Thm:limit-P0}
Assume the condition~\ref{item:assumption} is satisfied.
Let \(t>0\) and \(F_t\) be a bounded \(\ensuremath{\mathcal{F}}_t\)-measurable functional.
Then the following assertions hold:
\begin{enumerate}
\item
\(\displaystyle
\lim_{q\to 0+}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
=\ensuremath{\mathbb{P}}_0^{(0)}\sbra{F_t}
\);\label{Thm-item:limit-P0-exp}
\item
\(\displaystyle
\lim_{a\to\pm\infty}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_a}-g_{{T_a}}}\circ \theta_{g_{{T_a}}}}
=\ensuremath{\mathbb{P}}_0^{(\pm 1)}\sbra{F_t}
\);\label{Thm-item:limit-P0-hitting}
\item
\(\displaystyle
\lim_{\substack{a\to\infty,\,b\to\infty,\\ \frac{a-b}{a+b}\to \gamma}}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_{\cbra{a,-b}}}-g_{T_{\cbra{a,-b}}}}
\circ \theta_{g_{T_{\cbra{a,-b}}}}}
=\ensuremath{\mathbb{P}}_0^{(\gamma)}\sbra{F_t}
\), for \(-1\le \gamma\le 1\).\label{Thm-item:limit-P0-twohitting}
\end{enumerate}
\end{Thm}
The proof of Theorem~\ref{Thm:limit-P0} will be given in Section~\ref{Sec:pf-meander}.
Claim (\ref{Thm-item:limit-P0-exp}) of Theorem~\ref{Thm:limit-P0} is also proved
in Pant\'{\i}~\cite[Theorem 2.8]{MR3689384}.
\subsection{Main results}\label{Subsec:result}
Recall that we always assume the condition~\ref{item:assumption}.
\subsubsection{Long-time behaviors of the process \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\)}
The proofs of the following Theorems~\ref{Thm:p-gamma-abs-infty},~\ref{Thm:P-g-equi}
and~\ref{Thm:h-gamma-process-infty} will be given in Section~\ref{Sec:pf}.
\begin{Thm}\label{Thm:p-gamma-abs-infty}
Let \(-1\le \gamma\le 1\) and \(x\in \ensuremath{\mathcal{H}}_0^{(\gamma)}\). Then it holds that
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}\rbra*{\lim_{t\to\infty} \abs{X_t}=\infty} =1.
\end{align}
Consequently, the process \((X, \ensuremath{\mathbb{P}}_x^{(\gamma)})\) is transient.
\end{Thm}
We discuss the result
when \(m^2<\infty\).
In this case, recall that, by~\eqref{eq:h-g-plus}, we have \(\ensuremath{\mathcal{H}}^{(\gamma)}_0=\ensuremath{\mathbb{R}}\)
for \(-1<\gamma<1\).
\begin{Thm}\label{Thm:P-g-equi}
Assume \(m^2<\infty\).
Then, for \(x\in \ensuremath{\mathbb{R}}\) and \(-1<\gamma<1\),
the measure \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\) is
equivalent to \(\ensuremath{\mathbb{P}}_x^{(0)}\).
Moreover, for \(x\in \ensuremath{\mathcal{H}}_0^{(\pm 1)}\), the measure
\(\ensuremath{\mathbb{P}}_x^{(\pm 1)}\)
is absolutely continuous
with respect to \(\ensuremath{\mathbb{P}}_x^{(0)}\).
\end{Thm}
We discuss long-time behaviors of the process \((X,\ensuremath{\mathbb{P}}^{(\gamma)}_x)\)
in the case \(m^2<\infty\).
Define
\begin{align}
\Omega^{+}_\infty & = \cbra*{\lim_{t\to\infty}X_t=\infty}, \\
\Omega^{-}_\infty & = \cbra*{\lim_{t\to\infty}X_t=-\infty}, \\
\Omega^{+,-}_\infty & = \cbra*{\limsup_{t\to\infty}X_t=-\liminf_{t\to\infty}X_t=\infty}.
\end{align}
Then the sets \( \Omega^{+}_\infty\), \(\Omega^{-}_\infty \)
and \(\Omega^{+,-}_\infty\) are mutually disjoint and
\(\cbra{\lim_{t\to\infty}\abs{X_t}=\infty}
\subset \Omega^{+}_\infty\cup\Omega^{-}_\infty \cup \Omega^{+,-}_\infty\).
Hence by Theorem~\ref{Thm:p-gamma-abs-infty}, it holds that
\begin{align}\label{eq:Omega-union}
\ensuremath{\mathbb{P}}^{(\gamma)}_x(\Omega^{+}_\infty\cup \Omega^{-}_\infty\cup \Omega^{+,-}_\infty)=1.
\end{align}
If \(m^2<\infty\), the process \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\) drifts to
\(+\infty\) or to \(-\infty\),
each with an explicit probability.
\begin{Thm}\label{Thm:h-gamma-process-infty}
Assume \(m^2<\infty\).
Then, for \(-1\le \gamma\le 1\), it holds that
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}\rbra{\Omega^{\pm}_\infty}
= \begin{dcases}
\frac{(1\pm\gamma)}{2}\frac{h^{(\pm 1)}(x)}{h^{(\gamma)}(x)}
& \text{if \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\),} \\
\frac{1\pm\gamma}{2}
& \text{if \(x=0\).}
\end{dcases}
\end{align}
Consequently, for \(x\in\ensuremath{\mathcal{H}}^{(1)}_0\cap\ensuremath{\mathcal{H}}^{(-1)}_0 \),
it holds that
\begin{gather}
\ensuremath{\mathbb{P}}_x^{(\gamma)}
\rbra{\Omega^{+}_\infty \cup \Omega^{-}_\infty}
=
\ensuremath{\mathbb{P}}_x^{(1)}\rbra{\Omega^{+}_\infty}
= \ensuremath{\mathbb{P}}_x^{(-1)}\rbra{\Omega^{-}_\infty}
=1,
\end{gather}
which implies that \(\ensuremath{\mathbb{P}}_x^{(1)}\) and \(\ensuremath{\mathbb{P}}_x^{(-1)}\)
are mutually singular on \(\ensuremath{\mathcal{F}}_{\infty}\).
\end{Thm}
Note that,
if \(m^2=\infty\), the process \(X\) can oscillate
under \(\ensuremath{\mathbb{P}}_x^{(\gamma)}=\ensuremath{\mathbb{P}}_x^{(0)}\);
see, e.g., Theorem~\ref{Thm:stable-oscillate}.
\subsubsection{Short-time behaviors of the process \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\)}
The proofs of the
following Theorems~\ref{Thm:h^S/x},~\ref{Thm:gaussian-entrance},~\ref{Thm:entrance-one}
and~\ref{Thm:feller}
will be given in Section~\ref{Sec:short-time}.
We first deal with the differentiability of \(h\) at \(0\), which
is used in the discussion of short-time behaviors.
Since \(h\) and \(h^{(\gamma)}\) are subadditive, it holds that
\begin{align}
h'(0\pm) & \coloneqq
\lim_{x\to 0\pm}\frac{h(x)}{x}
=\pm\sup_{x> 0}\frac{h(\pm x)}{x} \\
{h^{(\gamma)\prime}}(0\pm) & \coloneqq
\lim_{x\to 0\pm}\frac{h^{(\gamma)}(x)}{x}
=\pm\sup_{x> 0}\frac{h^{(\gamma)}(\pm x)}{x} =
h'(0\pm)\pm \frac{\gamma}{m^2}.
\end{align}
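The identification of the one-sided derivatives with suprema is the standard
consequence of subadditivity; here is a minimal sketch for \(h\) as \(x\to 0+\),
assuming in addition the continuity of \(h\).
For \(y>0\) and \(n\in\ensuremath{\mathbb{N}}\), subadditivity and \(h(0)=0\) yield
\begin{align}
h(y)\le n\, h\rbra*{\frac{y}{n}},
\quad\text{i.e.,}\quad
\frac{h(y)}{y}\le \frac{h(y/n)}{y/n},
\end{align}
so that \(h(x)/x\) increases to \(\sup_{x>0}h(x)/x\) along \(x=y/n\to 0+\);
the continuity of \(h\) then upgrades this to the full limit \(x\to 0+\).
The limit \(x\to 0-\) and the case of \(h^{(\gamma)}\) are analogous.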
By~(\ref{Lem-item:h/x-infty})
of Lemma~\ref{Lem:h}, we have
\begin{align}
\abs{h'(0\pm)} & =\pm h'(0\pm) \in \sbra*{\frac{1}{m^2},\infty}, \\
\abs{h^{(\gamma)\prime}(0\pm)} & =\pm h^{(\gamma)\prime}(0\pm)
\in\sbra*{\frac{1\pm\gamma}{m^2},\infty}.
\end{align}
\begin{Thm}\label{Thm:h^S/x}
It holds that
\begin{align}\label{eq:h^S'}
{h}'(0+)+\abs{{h}'(0-)}
=\lim_{x\to 0+}\frac{h(x)+h(-x)}{x}=\frac{2}{\sigma^2}\in (0,\infty].
\end{align}
Consequently, for \(-1\le \gamma\le 1\), it holds that
\begin{align}
{h^{(\gamma)\prime}}(0+)+\abs{{h^{(\gamma)\prime}}(0-)}
=\lim_{x\to 0+}\frac{h^{(\gamma)}(x)+h^{(\gamma)}(-x)}{x}
=\frac{2}{\sigma^2}\in (0,\infty].
\end{align}
\end{Thm}
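The second assertion of the theorem follows from the first by the displays
preceding Theorem~\ref{Thm:h^S/x}: interpreting the sums in \([0,\infty]\), we have
\begin{align}
{h^{(\gamma)\prime}}(0+)+\abs{{h^{(\gamma)\prime}}(0-)}
=\rbra*{h'(0+)+\frac{\gamma}{m^2}}+\rbra*{\abs{h'(0-)}-\frac{\gamma}{m^2}}
={h}'(0+)+\abs{{h}'(0-)}.
\end{align}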
We remark that
Winkel~\cite[Lemma 1]{MR1894112} already showed that
\begin{align}
\lim_{x\to 0+}\frac{h_q(x)+h_q(-x)}{x} = \frac{2}{\sigma^2}, \quad q > 0.
\end{align}
By Theorem~\ref{Thm:h^S/x}, we see that
\(\sigma^2>0\) implies \(\abs{{h^{\prime}}(0\pm)}\le 2/\sigma^2 <\infty\)
and that \(\sigma^2=0\) implies \({{h^{\prime}}(0+)}=\infty\)
or \(-{{h^{\prime}}(0-)}=\infty\).
Define
\begin{align}
\Omega^{+}_0
& = \cbra{\exists t_0>0 \text{ such that } X_t>0 \text{ for all } t\in(0,t_0)}, \\
\Omega^{-}_0
& = \cbra{\exists t_0>0 \text{ such that } X_t<0 \text{ for all } t\in(0,t_0)}, \\
\Omega^{+,-}_0
& = \cbra{\exists\cbra{t_n} \text{ with } t_n\to {0+} \text{ such that
} \forall n,\,
X_{t_n}X_{t_{n+1}}<0}.
\end{align}
Then \(\Omega^{+}_0\), \(\Omega^{-}_0\) and \(\Omega^{+,-}_0\) are
mutually disjoint and
we have \(\Omega^{+}_0\cup \Omega^{-}_0\cup \Omega^{+,-}_0 = \ensuremath{\mathcal{D}}\).
\begin{Thm}\label{Thm:gaussian-entrance}
Assume \(m^2<\infty\) and \(\sigma^2>0\).
Then, for \(-1\le\gamma\le 1\), it holds that
\(\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{+,-}_0)=0\),
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{+}_0) = \frac{\sigma^2}{2}{h^{(\gamma)\prime}}(0+)
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{-}_0) = \frac{\sigma^2}{2}\abs{{h^{(\gamma)\prime}}(0-)}.
\end{align}
\end{Thm}
\begin{Thm}\label{Thm:entrance-one}
Assume \(m^2<\infty\).
If
\(h'(0+)=\infty\) and \(\abs{h'(0-)}<\infty\),
then \(\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{+}_0)=1\) for \(-1\le\gamma\le 1\).
If \(h'(0+)<\infty\) and \(\abs{h'(0-)}=\infty\),
then \(\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{-}_0)=1\) for \(-1\le\gamma\le 1\).
\end{Thm}
In the case \(h^{\prime}(0+)=\abs{h^{\prime}(0-)}=\infty\),
we do not obtain general properties;
we obtain an oscillating short-time behavior
under some technical assumptions.
\begin{Thm}\label{Thm:feller}
Let \(-1\le\gamma\le 1\). Assume the following four assertions hold:
\begin{enumerate}
\item \(\displaystyle
\liminf_{x\to\infty}h^{(\gamma)}(x)>0\)
and \(\displaystyle
\liminf_{x\to-\infty}h^{(\gamma)}(x)>0\);\label{item:cond:h-gamma-infty}
\item \(\displaystyle\lim_{x\to
0}\frac{h(x)}{\abs{x}}=\infty\),
i.e., \(h'(0+)=-h'(0-)=\infty\);\label{item:cond:h/x-infty}
\item \(\displaystyle\lim_{x\to 0}\frac{h_q(x+y)-h_q(y)}{h(x)}=1_{\cbra{y=0}}\) for
all \(q>0\);\label{item:cond:h_q/h}
\item \(\displaystyle0<\liminf_{x\to 0}\frac{h(-x)}{h(x)}\le \limsup_{x\to
0}\frac{h(-x)}{h(x)}<\infty\).\label{item:cond:h-/h}
\end{enumerate}
Then it holds that \(\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{+,-}_0)=1\).
\end{Thm}
Note that,
if \(m^2<\infty\), (\ref{Lem-item:h/x-infty}) of Lemma~\ref{Lem:h}
implies that the condition~(\ref{item:cond:h-gamma-infty}) of Theorem~\ref{Thm:feller}
always holds for \(-1<\gamma<1\).
\subsection{Examples}
Before proceeding to the proofs of the results, we introduce some examples.
\subsubsection{Brownian motions}
Assume \((X,\ensuremath{\mathbb{P}})\) is a standard Brownian motion.
Then \(\sigma^2=m^2=1\) and \(h(x)=\abs{x}\).
By Theorems~\ref{Thm:h-gamma-process-infty} and~\ref{Thm:gaussian-entrance},
it holds that, for \(-1\le \gamma\le 1\),
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_\infty^+) & =
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^+)=\frac{1+\gamma}{2}, \\
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_\infty^-) & = \ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^-)=\frac{1-\gamma}{2}.
\end{align}
Since the Brownian motion has no jumps, the process \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\)
also has no jumps. Thus the process \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\) conditioned to
avoid zero never changes sign. In fact, we have
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}= \frac{1+\gamma}{2}\ensuremath{\mathbb{P}}_0^{\mathrm{Bes}\, +}
+\frac{1-\gamma}{2}\ensuremath{\mathbb{P}}_0^{\mathrm{Bes}\, -}.
\end{align}
Moreover, by Theorem~\ref{Thm:h-gamma-process-infty}, it holds that,
for \(-1\le \gamma\le 1\),
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega_\infty^+)=1_{\cbra{x>0}},
\quad
\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega_\infty^-)=1_{\cbra{x<0}},
\quad x\ne 0.
\end{align}
In fact, we have \(\ensuremath{\mathbb{P}}_x^{(\gamma)} = \ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\, +}\) for \(x>0\)
and \(\ensuremath{\mathbb{P}}_x^{(\gamma)} = \ensuremath{\mathbb{P}}_x^{\mathrm{Bes}\, -}\) for \(x<0\).
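For illustration, these identities can be checked directly from
Theorem~\ref{Thm:h-gamma-process-infty}: since \(h(x)=\abs{x}\) and \(m^2=1\),
we have \(h^{(\gamma)}(x)=\abs{x}+\gamma x\), so that, for \(x>0\) and \(-1<\gamma\le 1\),
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega_\infty^{+})
=\frac{1+\gamma}{2}\cdot\frac{h^{(1)}(x)}{h^{(\gamma)}(x)}
=\frac{1+\gamma}{2}\cdot\frac{2x}{(1+\gamma)x}
=1,
\end{align}
and symmetrically \(\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega_\infty^{-})=1\)
for \(x<0\) and \(-1\le\gamma<1\).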
\subsubsection{Stable processes}
Assume \((X,\ensuremath{\mathbb{P}})\) is a strictly stable process of index \(1<\alpha<2\).
Then \(m^2=\infty\) and the L\'{e}vy measure \(\nu\) can be written as
\begin{align}
\nu(\mathrm{d} x)=\begin{cases}
c_{+} x^{-1-\alpha}\, \mathrm{d} x & \text{for \(x\in (0,\infty)\)}, \\
c_{-} \abs{x}^{-1-\alpha}\,\mathrm{d} x & \text{for \(x\in (-\infty, 0)\)},
\end{cases}
\end{align}
where \(c_+\) and \(c_-\) are non-negative constants such that
\(c_++c_->0\).
The characteristic exponent \(\varPsi\) can be expressed as
\begin{align}
\varPsi(\lambda)= c\abs{\lambda}^\alpha
\rbra*{1-\mathrm{i} \beta\sgn(\lambda)\tan \frac{\alpha\pi}{2}},
\end{align}
where \(c=-(c_++c_-)\Gamma(-\alpha)\cos(\pi\alpha/2)\)
and \(\beta = (c_+-c_-)/(c_++c_-)\);
see, e.g.,~\cite[Section 1.2]{MR3155252}.
We write \(c'=-c\beta \tan (\alpha\pi/2)\).
Then as a special case of~\eqref{eq:h_q}, it holds that
\begin{align}
h_q(x)=\frac{1}{\pi}\int_0^\infty
\Re \rbra*{\frac{1-e^{\mathrm{i} \lambda x}}{q+(c+\mathrm{i} c')\abs{\lambda}^\alpha}}
\,\mathrm{d} \lambda.
\end{align}
The dominated convergence theorem
implies that
\begin{align}\label{eq:stable-h-inte}
h(x)=\frac{1}{\pi}\int_0^\infty
\Re \rbra*{\frac{1-e^{\mathrm{i} \lambda x}}{(c+\mathrm{i} c')\abs{\lambda}^\alpha}}
\,\mathrm{d} \lambda.
\end{align}
Thus we have
\begin{align}\label{eq:h-repre-stable}
h(x)-h_q(x)
= \frac{1}{\pi} \int_0^\infty
\Re\rbra*{\frac{1-e^{\mathrm{i} \lambda x}}{(c+\mathrm{i} c')\abs{\lambda}^\alpha}
\frac{q}{q+(c+\mathrm{i} c')\abs{\lambda}^\alpha}}
\, \mathrm{d} \lambda.
\end{align}
Hence, differentiating~\eqref{eq:h-repre-stable}, we see that
\begin{align}
h_q'(x)=h'(x)+v(x), \quad x\in\ensuremath{\mathbb{R}}\setminus\cbra{0},
\end{align}
where \(v(x)\) is a bounded continuous function.
Yano~\cite{MR3072331} calculated~\eqref{eq:stable-h-inte}
and obtained
\begin{align}\label{eq:stable-h-Yano}
h(x)=\frac{\alpha\Gamma(-\alpha)\sin(\pi\alpha/2)(1-\beta\sgn(x))}
{\pi c(1+\beta^2\tan^2(\pi\alpha/2))}
\abs{x}^{\alpha-1};
\end{align}
see also Pant\'{\i}~\cite[Example 5.1]{MR3689384}.
Assume \((X,\ensuremath{\mathbb{P}})\) is spectrally positive (resp.\ negative), i.e.,
\(\beta=1\) (resp. \(\beta=-1\)). Then by~\eqref{eq:stable-h-Yano},
we have \(\ensuremath{\mathcal{H}}_0^{(0)}=(-\infty, 0]\)
(resp. \(\ensuremath{\mathcal{H}}_0^{(0)}=[0,\infty)\)).
On the other hand, assume \((X,\ensuremath{\mathbb{P}})\) is not spectrally one-sided, i.e., \(-1<\beta<1\).
Then the functions \(h\) and \(h_q\) satisfy the assumptions
of Theorem~\ref{Thm:feller}. Hence we obtain the following theorem:
\begin{Thm}
Assume \((X,\ensuremath{\mathbb{P}})\) is a strictly stable process
of index \(1<\alpha<2\).
If \((X,\ensuremath{\mathbb{P}})\) is spectrally positive (resp.\ negative), it holds that
\begin{align}
\ensuremath{\mathbb{P}}^{(0)}_0(\Omega_0^{-})=1 \quad \text{(resp. \( \ensuremath{\mathbb{P}}^{(0)}_0(\Omega_0^{+})=1 \)).}
\end{align}
If \((X,\ensuremath{\mathbb{P}})\) is not spectrally one-sided, it holds that
\begin{align}
\ensuremath{\mathbb{P}}^{(0)}_0(\Omega_0^{+,-})=1.
\end{align}
\end{Thm}
Furthermore,
we obtain the following long-time behavior:
\begin{Thm}\label{Thm:stable-oscillate}
Assume \((X,\ensuremath{\mathbb{P}})\) is a strictly stable process
of index \(1<\alpha<2\).
If \((X,\ensuremath{\mathbb{P}})\) is spectrally positive (resp.\ negative), it holds that
\begin{align}
\ensuremath{\mathbb{P}}^{(0)}_0(\Omega_\infty^{-})=1 \quad
\text{(resp. \( \ensuremath{\mathbb{P}}^{(0)}_0(\Omega_\infty^{+})=1 \)).}
\end{align}
If \((X,\ensuremath{\mathbb{P}})\) is not spectrally one-sided, it holds that
\begin{align}\label{eq:stable-oscillate}
\ensuremath{\mathbb{P}}^{(0)}_0(\Omega_\infty^{+,-})=1.
\end{align}
\end{Thm}
To prove~\eqref{eq:stable-oscillate}, we use the same argument as in
the proof of~\cite[Corollary 1.4]{MR2603019}.
\subsubsection{Recurrent spectrally negative processes}
Let \((X,\ensuremath{\mathbb{P}})\) be a spectrally negative L\'{e}vy process, i.e.,
\(\nu(0,\infty)=0\), satisfying the assumption~\ref{item:assumption}.
Then,~\cite[Example 5.2]{MR3689384} says that
\begin{align}
h(x) = W(x)-\frac{x}{m^2},
\end{align}
where \(W(x)\) stands for the scale function of \(X\).
Since \(W(x)=0\) for \(x\le 0\), we have
\(h(x)=\abs{x}/{m^2}\in [0,\infty)\) for \(x\le 0\).
If \(m^2=\infty\), we have \(\ensuremath{\mathcal{H}}_0^{(0)}=[0,\infty)\)
and hence it holds that
\begin{align}
\ensuremath{\mathbb{P}}^{(0)}_x(\Omega_\infty^{+})=\ensuremath{\mathbb{P}}^{(0)}_0(\Omega_0^{+})=1,\quad
x\in [0,\infty).
\end{align}
\subsubsection{Symmetric processes}
We consider the case \((X,\ensuremath{\mathbb{P}})\) is symmetric and satisfies
the condition~\ref{item:assumption}.
Then \(h(x)=h(-x)\).
If \(\sigma^2>0\), we have
\(h'(0+)=-h'(0-)=1/\sigma^2\)
and hence, by Theorem~\ref{Thm:gaussian-entrance},
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^{+})=\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^{-})=\frac{1}{2},
\quad -1\le \gamma\le 1.
\end{align}
On the other hand, assume \(\sigma^2=0\)
and that \(\lambda \mapsto \Re \varPsi(\lambda)\) is eventually non-decreasing.
Then, Theorem~\ref{Thm:h^S/x} implies that
the function \(h\) satisfies
the assumption
(\ref{item:cond:h/x-infty})
of Theorem~\ref{Thm:feller}.
Moreover,
Lemma 4.4 and (i) of Lemma 6.2 of Yano~\cite{MR2603019} state that
\(h\) and \(h_q\) satisfy the
assumption~(\ref{item:cond:h_q/h})
of Theorem~\ref{Thm:feller}.
The function \(h\) obviously satisfies the
assumption~(\ref{item:cond:h-/h})
of Theorem~\ref{Thm:feller}.
Hence, if~(\ref{item:cond:h-gamma-infty}) of Theorem~\ref{Thm:feller}
also holds, we obtain
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^{+,-})=1,\quad -1\le\gamma\le 1.
\end{align}
\subsection{Outline of the paper}
The paper is organized as follows.
In Section~\ref{Sec:preliminaries}, we prepare
general properties of L\'{e}vy processes and
some preliminary facts of the renormalized zero resolvent \(h\).
In Sections~\ref{Sec:pf} and~\ref{Sec:short-time},
we prove the main results for long-time behaviors and
short-time behaviors, respectively.
In Section~\ref{Sec:resolvent} as an appendix, we investigate
the resolvent density under \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\).
In Section~\ref{Sec:pf-meander} as another appendix,
we will give the proof of Theorem~\ref{Thm:limit-P0}.
\textbf{Acknowledgements.} The author would like
to express his deep gratitude to Professor Kouji Yano
for his helpful advice and encouragement.
\section{Preliminaries}\label{Sec:preliminaries}
\subsection{General properties of L\'{e}vy processes}
Let \((X=(X_t,t\ge 0),\ensuremath{\mathbb{P}}_x)\) denote the canonical representation
of a one-dimensional L\'{e}vy process starting from \(x\in\ensuremath{\mathbb{R}}\)
on the c\`{a}dl\`{a}g path space \(\ensuremath{\mathcal{D}}\)
and we write \(\ensuremath{\mathbb{P}}=\ensuremath{\mathbb{P}}_0\).
For \(t\ge 0\), we denote by \(\ensuremath{\mathcal{F}}_t^X = \sigma(X_s, 0\le s\le t)\) the natural
filtration of \(X\) and we write \(\ensuremath{\mathcal{F}}_t = \bigcap_{s>t} \ensuremath{\mathcal{F}}_s^X\)
and \(\ensuremath{\mathcal{F}}_\infty = \sigma(\bigcup_{t>0}\ensuremath{\mathcal{F}}_t)\).
It is well-known that we have
\begin{align}
\ensuremath{\mathbb{P}}\sbra{\mathrm{e}^{\mathrm{i}\lambda X_t}}=\mathrm{e}^{-t\varPsi(\lambda)},
\quad \text{for \(t\ge 0\) and \(\lambda\in\ensuremath{\mathbb{R}}\),}
\end{align}
where \(\varPsi(\lambda)\) denotes the characteristic exponent
of \(X\) given by the L\'{e}vy--Khintchine formula
\begin{align}
\varPsi(\lambda)
= \mathrm{i} v \lambda
+ \frac{1}{2} \sigma^2 \lambda^2
+ \int_\ensuremath{\mathbb{R}} \rbra*{1 - \mathrm{e}^{\mathrm{i} \lambda x} + \mathrm{i} \lambda x 1_{\cbra{\abs{x}< 1}}}
\nu(\mathrm{d} x)
\end{align}
for some constants \(v \in \ensuremath{\mathbb{R}}\) and \(\sigma^2 \ge 0\)
and a characteristic measure \(\nu\) on \(\ensuremath{\mathbb{R}}\)
which satisfies \(\nu(\cbra{0})=0\) and
\begin{align}
\int_\ensuremath{\mathbb{R}} \rbra*{x^2 \wedge 1} \nu(\mathrm{d} x) < \infty.
\end{align}
The measure \(\nu\) is called
a L\'{e}vy measure.
See, e.g.,~\cite{MR1406564,MR3155252}.
For a Borel set \(A\subset\ensuremath{\mathbb{R}}\), we denote by \(T_A=\inf\cbra{t>0\colon X_t\in A}\)
the first hitting time of \(A\) and
we simply write \(T_a\coloneqq T_{\cbra{a}}\)
for the hitting time of a point \(a\in\ensuremath{\mathbb{R}}\).
We consider the following four conditions:
\begin{enumerate}[label=\textbf{(A\arabic*)}]
\item The process \((X,\ensuremath{\mathbb{P}})\) is not a compound Poisson process;\label{item:cond-not-CPP}
\item \(0\) is regular for itself, i.e., \(\ensuremath{\mathbb{P}}(T_0=0)=1\);\label{item:cond-regular}
\item \(\displaystyle \int_\ensuremath{\mathbb{R}} \Re\rbra*{\frac{1}{q+\varPsi(\lambda)}}\,\mathrm{d}\lambda<\infty\)
for all \(q>0\);\label{item:cond-Reinte}
\item We have either \(\sigma^2>0\) or
\(\displaystyle \int_{(-1,1)}\abs{x}\nu(\mathrm{d} x)=\infty\).\label{item:cond-nu}
\end{enumerate}
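For instance, the standard Brownian motion, for which \(\varPsi(\lambda)=\lambda^2/2\),
\(\sigma^2=1\) and \(\nu=0\), satisfies all four conditions:
the conditions~\ref{item:cond-not-CPP}, \ref{item:cond-regular}
and~\ref{item:cond-nu} are immediate, while
the condition~\ref{item:cond-Reinte} follows from the elementary computation
\begin{align}
\int_\ensuremath{\mathbb{R}} \Re\rbra*{\frac{1}{q+\lambda^2/2}} \, \mathrm{d}\lambda
= \int_\ensuremath{\mathbb{R}} \frac{\mathrm{d}\lambda}{q+\lambda^2/2}
= \frac{\sqrt{2}\,\pi}{\sqrt{q}} < \infty, \quad q>0.
\end{align}
By contrast, a compound Poisson process fails the
conditions~\ref{item:cond-not-CPP} and~\ref{item:cond-regular}.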
Then the following lemma is well-known:
\begin{Lem}\label{Lem:equi-four-assump}
The following three assertions hold:
\begin{enumerate}
\item The conditions~\ref{item:cond-not-CPP} and~\ref{item:cond-regular} hold
if and only if the conditions~\ref{item:cond-Reinte}
and~\ref{item:cond-nu} hold;\label{Lem-item:equi-1234}
\item Under the condition~\ref{item:cond-Reinte},
the condition~\ref{item:cond-regular}
holds if and only if the
condition~\ref{item:cond-nu} holds;\label{Lem-item:equi-24}
\item The condition~\ref{item:cond-Reinte} holds if and only if
\((X,\ensuremath{\mathbb{P}})\) has the bounded \(q\)-resolvent density \(r_q\),
which satisfies
\begin{align}
\int_\ensuremath{\mathbb{R}} f(x) r_q(x) \, \mathrm{d} x
= \ensuremath{\mathbb{P}}\sbra*{\int_0^\infty \mathrm{e}^{-qt}f(X_t) \, \mathrm{d} t},\quad q>0,
\end{align}
for all non-negative measurable functions \(f\).
Moreover, under the condition~\ref{item:cond-Reinte},
the condition~\ref{item:cond-regular} holds if and only if
\(x \mapsto r_q(x)\) is continuous.\label{Lem-item:equi-resol}
\end{enumerate}
\end{Lem}
For the proofs of~(\ref{Lem-item:equi-1234}) and~(\ref{Lem-item:equi-24})
of Lemma~\ref{Lem:equi-four-assump}, see Kesten~\cite{MR0272059} and
Bretagnolle~\cite{MR0368175}.
For the proof of~(\ref{Lem-item:equi-resol}) of Lemma~\ref{Lem:equi-four-assump},
see Theorems II.16 and II.19 of Bertoin~\cite{MR1406564}.
Throughout this paper, we always assume the condition~\ref{item:assumption}.
This implies that \((X,\ensuremath{\mathbb{P}})\) has the bounded continuous resolvent density
which is given by
\begin{align}
r_q(x)
=\frac{1}{\pi}\int_0^\infty
\Re\rbra*{\frac{\mathrm{e}^{-\mathrm{i}\lambda x}}{q+\varPsi(\lambda)}}
\, \mathrm{d} \lambda
\end{align}
for all \(q > 0\) and \(x\in \ensuremath{\mathbb{R}}\);
see, e.g., Winkel~\cite[Lemma 2]{MR1894112}
and Tsukada~\cite[Corollary 15.1]{MR3838874}.
Combining this and Lemma~\ref{Lem:equi-four-assump},
we see that the condition~\ref{item:assumption}
implies~\ref{item:cond-not-CPP}--\ref{item:cond-nu}.
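As a concrete sanity check (outside the paper's argument): for the standard Brownian motion, \(\varPsi(\lambda)=\lambda^2/2\), and the inversion formula above recovers the classical resolvent density \(r_q(x)=\mathrm{e}^{-\sqrt{2q}\,\abs{x}}/\sqrt{2q}\). The following Python sketch compares a truncated version of the integral with this closed form; the truncation level \(T\) and the grid size are ad hoc choices, and the neglected tail of the integral is of order \(1/T\).

```python
import math

def r_q_numeric(q, x, T=300.0, n=300_000):
    """Truncated inversion formula for the resolvent density of standard
    Brownian motion (Psi(lambda) = lambda**2 / 2):
        r_q(x) ~ (1/pi) * int_0^T cos(lambda * x) / (q + lambda**2 / 2) dlambda,
    computed with the trapezoidal rule; the neglected tail is O(1/T)."""
    def f(lam):
        return math.cos(lam * x) / (q + 0.5 * lam * lam)
    step = T / n
    s = 0.5 * (f(0.0) + f(T))
    for k in range(1, n):
        s += f(k * step)
    return s * step / math.pi

def r_q_closed(q, x):
    """Classical closed form r_q(x) = exp(-sqrt(2 q) |x|) / sqrt(2 q)."""
    return math.exp(-math.sqrt(2.0 * q) * abs(x)) / math.sqrt(2.0 * q)
```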
Tsukada~\cite[Lemma 15.5]{MR3838874} also proved that
the condition~\ref{item:assumption} implies
\begin{align}\label{eq:inte-of-varPsi}
\int_0^\infty \abs*{\frac{1\wedge \lambda^2}{\varPsi(\lambda)}}
\, \mathrm{d} \lambda < \infty;
\end{align}
see also~\cite[Lemma 2.4]{me}.
Under the condition~\ref{item:assumption}, we denote
by \(L=(L_t, t\ge 0)\) local time at \(0\) normalized by
the equation
\begin{align}\label{eq:regularity-of-L}
\ensuremath{\mathbb{P}}_x\sbra*{\int_0^\infty \mathrm{e}^{-qt} \mathrm{d} L_t} = r_q(-x),
\quad x \in \ensuremath{\mathbb{R}};
\end{align}
see, e.g.,~\cite[Section V]{MR1406564}.
Let \(n\) denote the characteristic measure of
excursions away from \(0\), called It\^{o}'s excursion measure
(see, e.g.,~\cite[Section IV.4]{MR1406564}).
Then the equation~\eqref{eq:n-normalize} holds.
\subsection{The renormalized zero resolvent}
We define
\begin{align}\label{eq:h_q}
h_q(x)=r_q(0)-r_q(-x)
=\frac{1}{\pi}\int_0^\infty
\Re\rbra*{\frac{1-\mathrm{e}^{\mathrm{i} \lambda x}}{q+\varPsi(\lambda)}} \, \mathrm{d} \lambda.
\end{align}
Since we have
\begin{align}\label{eq:-qT_0}
\ensuremath{\mathbb{P}}_x\sbra{\mathrm{e}^{-qT_0}}=\frac{r_q(-x)}{r_q(0)} \ge 0
\end{align}
(see, e.g., Bertoin~\cite[Corollary II.18]{MR1406564}),
the function \(h_q\) is non-negative.
In addition, \(h_q\) is subadditive, i.e., \(h_q(x+y)\le h_q(x)+h_q(y)\)
for \(x,y\in \ensuremath{\mathbb{R}}\); see, e.g., the proof of
Lemma 3.3 in~\cite{MR3689384} and
the proofs of (ii) and (iii) of Theorem 1.1 in~\cite{me}.
We denote the second moment of \(X_1\) by
\begin{align}
m^2 = \ensuremath{\mathbb{P}}\sbra{X_1^2}=\sigma^2+\int_\ensuremath{\mathbb{R}} x^2\nu(\mathrm{d} x)\in (0,\infty].
\end{align}
\begin{Lem}[The renormalized zero resolvent]\label{Lem:h}
Assume the condition~\ref{item:assumption} is satisfied.
Then the following assertions hold:
\begin{enumerate}
\item for \(x\in\ensuremath{\mathbb{R}}\), the limit \( h(x)\coloneqq \lim_{q\to 0+} h_q(x)\) exists and is finite,
which is called the \emph{renormalized zero resolvent};\label{Lem-item:h-exist}
\item \(h\) is non-negative, continuous and subadditive \((h(x+y)\le h(x)+h(y)\) for \(x,y\in\ensuremath{\mathbb{R}})\)
and \(h(0)=0\);\label{Lem-item:h-subadditive}
\item \(\displaystyle h(x)+h(-x)
=\frac{2}{\pi}\int_0^\infty
\Re\rbra*{\frac{1-\cos \lambda x}{\varPsi(\lambda)}} \, \mathrm{d} \lambda\),
for \(x\in\ensuremath{\mathbb{R}}\);\label{Lem-item:h^S-repre}
\item if \(m^2<\infty\), it holds that
\(\displaystyle h(x)=\frac{1}{\pi}\int_0^\infty
\Re\rbra*{\frac{1-\mathrm{e}^{\mathrm{i} \lambda x}}{\varPsi(\lambda)}} \, \mathrm{d} \lambda\),
for \(x\in\ensuremath{\mathbb{R}}\);
\item \(\displaystyle \lim_{x\to\infty}\frac{h(x)}{\abs{x}}=\frac{1}{m^2}
\in [0,\infty)\);\label{Lem-item:h/x-infty}
\item \(\displaystyle \lim_{y\to\pm\infty}
\cbra{h(x+y)-h(y)}=\pm \frac{x}{m^2}\in\ensuremath{\mathbb{R}}\).\label{Lem-item:h-diff-infty}
\end{enumerate}
\end{Lem}
The proof of Lemma~\ref{Lem:h} can be found in
Theorems 1.1 and 1.2 and Lemma 3.3 of~\cite{me}.
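To illustrate Lemma~\ref{Lem:h} on a concrete example: for the standard Brownian motion, \(m^2=1<\infty\) and the representation in item (4) of Lemma~\ref{Lem:h} reads \(h(x)=\frac{2}{\pi}\int_0^\infty\frac{1-\cos\lambda x}{\lambda^2}\,\mathrm{d}\lambda=\abs{x}\), which is consistent with (\ref{Lem-item:h-subadditive}), (\ref{Lem-item:h/x-infty}) and (\ref{Lem-item:h-diff-infty}). A numerical sketch in Python (outside the paper's formalism; the truncation level and grid are ad hoc choices):

```python
import math

def h_bm(x, T=300.0, n=300_000):
    """h(x) = (2/pi) * int_0^inf (1 - cos(lambda * x)) / lambda**2 dlambda
    for standard Brownian motion (Psi(lambda) = lambda**2 / 2).
    Trapezoidal rule on [0, T] plus the analytic tail approximation
    int_T^inf lambda**-2 dlambda = 1/T (the oscillatory part of the tail
    is of smaller order)."""
    def f(lam):
        if lam == 0.0:
            return 0.5 * x * x  # limit of (1 - cos(lam * x)) / lam**2
        return (1.0 - math.cos(lam * x)) / (lam * lam)
    step = T / n
    s = 0.5 * (f(0.0) + f(T))
    for k in range(1, n):
        s += f(k * step)
    return (2.0 / math.pi) * (s * step + 1.0 / T)
```

The computed values reproduce \(h(x)=\abs{x}\) up to the discretization error, and in particular exhibit the symmetry and subadditivity asserted in the lemma.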
We define, for \(-1\le \gamma\le 1\),
\begin{align}\label{eq:def-h-g}
h^{(\gamma)}(x)=h(x)+\frac{\gamma}{m^2}x,\quad x\in\ensuremath{\mathbb{R}}.
\end{align}
By Lemma~\ref{Lem:h}, the function
\(h^{(\gamma)}\) is subadditive, \(h^{(\gamma)}(0)=0\)
and
\begin{gather}\label{eq:h-g/x-infty}
\lim_{x\to\pm\infty} \frac{h^{(\gamma)}(x)}{\abs{x}} = \frac{1\pm\gamma}{m^2}.
\end{gather}
By subadditivity of \(h^{(\gamma)}\) and by~\eqref{eq:h-g/x-infty}, we also have
\begin{align}\label{eq:h-g-plus}
h^{(\gamma)}(\pm x)\ge \frac{(1\pm \gamma)x}{m^2} \ge 0, \quad \text{for all \(x\ge 0\)}.
\end{align}
For \(-1\le\gamma\le 1\),
we define \(\ensuremath{\mathcal{H}}^{(\gamma)} = \cbra{x\in\ensuremath{\mathbb{R}}\colon h^{(\gamma)}(x)>0 }\) and
\(\ensuremath{\mathcal{H}}^{(\gamma)}_0 = \ensuremath{\mathcal{H}}^{(\gamma)} \cup \cbra{0}\).
By recurrence of \(X\), continuity and subadditivity of \(h\) and~\eqref{eq:harmonic},
\(\ensuremath{\mathcal{H}}^{(\gamma)}_0\) is either \(\ensuremath{\mathbb{R}}\), \([0,\infty)\) or \((-\infty,0]\).
In addition, if \(m^2<\infty\) and \(-1<\gamma<1\), then~\eqref{eq:h-g-plus}
implies that \(\ensuremath{\mathcal{H}}^{(\gamma)}_0=\ensuremath{\mathbb{R}}\).
Then we can define the \(h^{(\gamma)}\)-transformed process
given by~\eqref{eq:def-h-trans}.
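For example, for the standard Brownian motion a classical computation gives
\(h(x)=\abs{x}\) and \(m^2=1\), so that
\begin{align}
h^{(\gamma)}(x)=\abs{x}+\gamma x, \quad
h^{(1)}(x)=2\max(x,0), \quad
h^{(-1)}(x)=2\max(-x,0),
\end{align}
and hence \(\ensuremath{\mathcal{H}}^{(1)}_0=[0,\infty)\),
\(\ensuremath{\mathcal{H}}^{(-1)}_0=(-\infty,0]\) and
\(\ensuremath{\mathcal{H}}^{(\gamma)}_0=\ensuremath{\mathbb{R}}\) for \(-1<\gamma<1\);
all three cases of the trichotomy above already occur in this example.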
\section{The long-time behaviors}\label{Sec:pf}
We prepare some important \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-martingales
which are used for investigating the path behaviors of the process.
Recall that we always assume the condition~\ref{item:assumption}.
\begin{Lem}\label{Lem:P-g-mart}
Let \(-1\le \gamma\le 1\) and \(x\in \ensuremath{\mathcal{H}}_0^{(\gamma)}\).
Then \((\frac{1}{h^{(\gamma)}(X_t)},t> 0)\)
is a non-negative \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-supermartingale.
Moreover, if \(m^2<\infty\), then,
for \(\gamma_1, \gamma_2\in [-1,1]\),
the process
\((\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)},t>0)\)
is a non-negative \(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\)-martingale,
and its mean is \(\frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\)
if \(x\in \ensuremath{\mathcal{H}}^{(\gamma_1)}\)
and is \(1\) if \(x=0\).
\end{Lem}
\begin{proof}
Note that since \(\ensuremath{\mathbb{P}}_x^{(\gamma)}(T_{\ensuremath{\mathbb{R}}\setminus \ensuremath{\mathcal{H}}^{(\gamma)}}=\infty)=1\)
for \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}_0\), we have \(h^{(\gamma)}(X_t)\ne 0\),
\(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-a.s.
We first assume \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\).
Let \(0<s<t\) and let \(F_s\) be a non-negative bounded \(\ensuremath{\mathcal{F}}_s\)-measurable functional.
Then we have
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra*{\frac{1}{h^{(\gamma)}(X_t)}F_s}
= \frac{1}{h^{(\gamma)}(x)}\ensuremath{\mathbb{P}}_x\sbra{F_s; T_0 > t}
\le\frac{1}{h^{(\gamma)}(x)} \ensuremath{\mathbb{P}}_x\sbra{F_s; T_0 > s}
= \ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra*{\frac{1}{h^{(\gamma)}(X_s)}F_s},
\end{align}
which implies that \((\frac{1}{h^{(\gamma)}(X_t)}, t>0)\) is a
non-negative
\(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-supermartingale.
Suppose \(m^2<\infty\). Then,
by Lemma~\ref{Lem:harmonic}, we have,
for \(\gamma_1,\gamma_2\in [-1,1]\)
and \(x\in \ensuremath{\mathcal{H}}^{(\gamma_1)}\),
\begin{align}\label{eq:h/h-gamma-mart}
\ensuremath{\mathbb{P}}_x^{(\gamma_1)}
\sbra*{\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}F_s}
& =\frac{1}{h^{(\gamma_1)}(x)}\ensuremath{\mathbb{P}}_x\sbra*{h^{(\gamma_2)}(X_t)F_s;T_0>t} \\
& =\frac{1}{h^{(\gamma_1)}(x)}\ensuremath{\mathbb{P}}_x\sbra*{h^{(\gamma_2)}(X_s)F_s;T_0>s} \\
& = \ensuremath{\mathbb{P}}_x^{(\gamma_1)}
\sbra*{\frac{h^{(\gamma_2)}(X_s)}{h^{(\gamma_1)}(X_s)}F_s},
\end{align}
which implies that \((\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}, t> 0)\) is
a non-negative \(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\)-martingale with mean
\(\frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\).
We next assume \(x=0\). Then we have
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}\sbra*{\frac{1}{h^{(\gamma)}(X_t)}F_s}
= n\sbra{F_s; T_0> t }
\le n\sbra{F_s; T_0 > s}
= \ensuremath{\mathbb{P}}_0^{(\gamma)}\sbra*{\frac{1}{h^{(\gamma)}(X_s)}F_s},
\end{align}
which implies that \((\frac{1}{h^{(\gamma)}(X_t)}, t>0)\) is a
non-negative
\(\ensuremath{\mathbb{P}}_0^{(\gamma)}\)-supermartingale.
Suppose \(m^2<\infty\).
Then, by Lemma~\ref{Lem:harmonic}, we have
\begin{align}\label{eq:h/h-gamma-mart-n}
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}
\sbra*{\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}F_s}
& =n\sbra*{h^{(\gamma_2)}(X_t)F_s;T_0>t}
= n\sbra*{h^{(\gamma_2)}(X_s)F_s;T_0>s}
= \ensuremath{\mathbb{P}}_0^{(\gamma_1)}
\sbra*{\frac{h^{(\gamma_2)}(X_s)}{h^{(\gamma_1)}(X_s)}F_s},
\end{align}
which implies that \((\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}, t> 0)\) is
a non-negative \(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}\)-martingale with mean \(1\).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:p-gamma-abs-infty}]
We first prove Theorem~\ref{Thm:p-gamma-abs-infty}
in the case \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\).
Let \(F_s\) be a
non-negative bounded \(\ensuremath{\mathcal{F}}_s\)-measurable functional. Then,
since \((\frac{1}{h^{(\gamma)}(X_t)},t> 0)\) is a non-negative
\(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-supermartingale
by Lemma~\ref{Lem:P-g-mart}, we see that
\( \lim_{t\to\infty}\frac{1}{h^{(\gamma)}(X_t)}\)
exists and is non-negative \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-a.s.
By Fatou's lemma, we obtain
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra*{ \lim_{t\to\infty}\frac{1}{h^{(\gamma)}(X_t)}}
\le \liminf_{t\to\infty}\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra*{\frac{1}{h^{(\gamma)}(X_t)}}
=\frac{1}{h^{(\gamma)}(x)}\liminf_{t\to\infty}
\ensuremath{\mathbb{P}}_x\rbra{T_0>t}
= 0,
\end{align}
where the last equality follows from the fact that \((X,\ensuremath{\mathbb{P}}_x)\) is recurrent.
Hence it holds that
\(\lim_{t\to\infty}\frac{1}{h^{(\gamma)}(X_t)} = 0\), \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-a.s.
This implies \(\lim_{t\to\infty}\abs{X_t} = \infty\), \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)-a.s.
The proof in the case \(x=0\) is similar. Hence we omit it.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:P-g-equi}]
We first consider the case \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\).
Let \(F_t\) be
a non-negative bounded \(\ensuremath{\mathcal{F}}_t\)-measurable
functional. Then we have
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra{F_t}
= \ensuremath{\mathbb{P}}_x\sbra*{\frac{h^{(\gamma)}(X_t)}{h^{(\gamma)}(x)}F_t; T_0>t}
= \frac{h(x)}{h^{(\gamma)}(x)}
\ensuremath{\mathbb{P}}_x^{(0)}\sbra*{\frac{h^{(\gamma)}(X_t)}{h(X_t)}F_t}.
\end{align}
Since \(h(x)\ge \abs{x}/m^2\) and
\(h^{(\gamma)}(x) = h(x) + \gamma x / m^2\), it holds that
\begin{align}
1- \abs{\gamma}
\le \frac{h^{(\gamma)}(X_t)}{h(X_t)}
\le 1 + \abs{\gamma}.
\end{align}
Thus, we have
\begin{align}
(1-\abs{\gamma}) \frac{h(x)}{h^{(\gamma)}(x)} \ensuremath{\mathbb{P}}_x^{(0)}\sbra{F_t}
\le \ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra{F_t}
\le (1+\abs{\gamma}) \frac{h(x)}{h^{(\gamma)}(x)}\ensuremath{\mathbb{P}}_x^{(0)}\sbra{F_t}.
\end{align}
By the extension theorem, it holds that
\begin{align}
(1-\abs{\gamma}) \frac{h(x)}{h^{(\gamma)}(x)} \ensuremath{\mathbb{P}}_x^{(0)}
\le \ensuremath{\mathbb{P}}_x^{(\gamma)}
\le (1+\abs{\gamma}) \frac{h(x)}{h^{(\gamma)}(x)}\ensuremath{\mathbb{P}}_x^{(0)},
\quad \text{on \(\ensuremath{\mathcal{F}}_\infty\).}
\end{align}
By a similar argument, we also have
\begin{align}
(1-\abs{\gamma}) \ensuremath{\mathbb{P}}_0^{(0)}
\le \ensuremath{\mathbb{P}}_0^{(\gamma)}
\le (1+\abs{\gamma})\ensuremath{\mathbb{P}}_0^{(0)},
\quad \text{on \(\ensuremath{\mathcal{F}}_\infty\).}
\end{align}
Therefore we obtain the desired result.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:h-gamma-process-infty}]
We first consider the case \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\).
Let \(\gamma_1,\gamma_2 \in [-1, 1]\) be
different constants.
Since \((\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}, t> 0)\) is
a non-negative \(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\)-martingale by Lemma~\ref{Lem:P-g-mart},
the limit \(\lim_{t\to\infty}\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}\)
exists and is finite
\(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\)-a.s.
By
(\ref{Lem-item:h/x-infty}) of Lemma~\ref{Lem:h},
we see
\begin{align}\label{eq:lim-hg'/hg}
\lim_{x\to\pm\infty} \frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}
= \frac{1\pm \gamma_2}{1\pm \gamma_1}
\in [0, \infty].
\end{align}
Hence the limits
\(\lim_{x\to\infty} \frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\)
and \(\lim_{x\to -\infty} \frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\)
are different.
Combining this and~\eqref{eq:Omega-union}, we have
\(\ensuremath{\mathbb{P}}^{(\gamma_1)}_x(\Omega^{+}_\infty\cup \Omega^{-}_\infty)=1\).
Since \(\lim_{t\to\infty}\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}\)
is finite \(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\)-a.s., we have
\(\ensuremath{\mathbb{P}}^{(1)}_x(\Omega^{+}_\infty)=\ensuremath{\mathbb{P}}^{(-1)}_x(\Omega^{-}_\infty)=1\).
Suppose \(-1<\gamma_1<1\).
Then, since
\(\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)} \le \frac{1+\abs{\gamma_2}}{1-\abs{\gamma_1}}\),
we may apply the dominated convergence theorem to obtain
\begin{align}\label{eq:lim-inf-mart}
\ensuremath{\mathbb{P}}_x^{(\gamma_1)}\sbra*{\lim_{t\to\infty}\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}}
= \lim_{t\to\infty} \ensuremath{\mathbb{P}}_x^{(\gamma_1)}\sbra*{\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}}
= \frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}.
\end{align}
By~\eqref{eq:lim-hg'/hg} and~\eqref{eq:lim-inf-mart},
we have
\begin{align}\label{eq:p-g1-p-g2}
\frac{1+\gamma_2}{1+\gamma_1}\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{+}_\infty)
+ \frac{1-\gamma_2}{1-\gamma_1}\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{-}_\infty)
= \frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}.
\end{align}
Since \(\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{+}_\infty)+
\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{-}_\infty)=1\),~\eqref{eq:p-g1-p-g2} implies that
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{+}_\infty) = \frac{1+\gamma_1}{2}\frac{h^{(1)}(x)}{h^{(\gamma_1)}(x)}
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_x^{(\gamma_1)}(\Omega^{-}_\infty) = \frac{1-\gamma_1}{2}\frac{h^{(-1)}(x)}{h^{(\gamma_1)}(x)}.
\end{align}
The proof in the case \(x=0\) is similar. So we omit it.
\end{proof}
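As an illustration of these formulas, consider the standard Brownian motion,
for which \(h(x)=\abs{x}\) and \(m^2=1\).
For \(-1<\gamma<1\) and \(x>0\) we obtain
\begin{align}
\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega^{+}_\infty)
= \frac{1+\gamma}{2}\cdot\frac{2x}{(1+\gamma)x} = 1
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_x^{(\gamma)}(\Omega^{-}_\infty) = 0,
\end{align}
so the side of the eventual escape is determined by the starting point,
whereas the same argument started at \(x=0\), where the martingale has mean \(1\),
yields \(\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega^{\pm}_\infty)=\frac{1\pm\gamma}{2}\).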
\section{The short-time behaviors}\label{Sec:short-time}
First, we offer the proof of Theorem~\ref{Thm:h^S/x}.
\begin{proof}[Proof of Theorem~\ref{Thm:h^S/x}]
We write \(h^S(x)=h(x)+h(-x)\).
Since we have
\begin{align}
\Re \varPsi(\lambda) = \frac{1}{2} \sigma^2 \lambda^2
+ \int_\ensuremath{\mathbb{R}} \rbra*{1 - \cos \lambda x} \nu(\mathrm{d} x)\ge 0,\label{eq:theta}
\end{align}
it holds that \(\Re(\frac{1}{\varPsi(\lambda)})\ge 0\).
In addition, it holds that
\(\lim_{x\to 0+}x^2 \varPsi\rbra{\lambda/x} =\sigma^2\lambda^2/2\).
By (\ref{Lem-item:h^S-repre}) of Lemma~\ref{Lem:h}, we have
\begin{align}
\frac{h^S(x)}{x}
& = \frac{2}{\pi x} \int_0^\infty
\Re\rbra*{\frac{1-\cos\lambda x}{\varPsi(\lambda)}}\, \mathrm{d} \lambda
= \frac{2}{\pi} \int_0^\infty
\Re\rbra*{\frac{1-\cos\xi}{x^2\varPsi(\xi/x)}}\, \mathrm{d}\xi.
\end{align}
We first assume \(\sigma^2=0\). By Fatou's lemma, we obtain
\begin{align}
\liminf_{x\to 0+}\frac{h^S(x)}{x}
& = \liminf_{x\to 0+} \frac{2}{\pi} \int_0^\infty
\Re\rbra*{\frac{1-\cos\xi}{x^2\varPsi(\xi/x)}}\, \mathrm{d}\xi \\
& \ge \frac{2}{\pi} \int_0^\infty\liminf_{x\to 0+}
\Re\rbra*{\frac{1-\cos\xi}{x^2\varPsi(\xi/x)}}\, \mathrm{d}\xi \\
& =\infty,
\end{align}
which implies~\eqref{eq:h^S'}.
We next assume \(\sigma^2>0\).
Since \(\abs{x^2\varPsi(\xi/x)}\ge \abs{\Re (x^2\varPsi(\xi/x))}\ge \sigma^2\xi^2/2\),
we have
\begin{align}
\abs*{\Re\rbra*{\frac{1-\cos\xi}{x^2\varPsi(\xi/x)}}}
\le \abs*{\frac{1-\cos\xi}{x^2\varPsi(\xi/x)}}
\le \frac{2(1\wedge\xi^2)}{ \sigma^2\xi^2},
\end{align}
which is integrable in \(\xi>0\).
Hence we may apply the dominated convergence theorem to obtain
\begin{align}
\lim_{x\to 0+}\frac{h^S(x)}{x}
=\frac{4}{\pi\sigma^2} \int_0^\infty
\frac{1-\cos\xi}{\xi^2}\, \mathrm{d}\xi=\frac{2}{\sigma^2},
\end{align}
which implies~\eqref{eq:h^S'}.
\end{proof}
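For example, for the standard Brownian motion (\(\sigma^2=1\)) we have
\(h^S(x)=2\abs{x}\), in accordance with the limit \(2/\sigma^2\) in~\eqref{eq:h^S'}.
On the other hand, for a recurrent symmetric \(\alpha\)-stable process with
\(\varPsi(\lambda)=\abs{\lambda}^\alpha\) and \(1<\alpha<2\)
(so that \(\sigma^2=0\)), the change of variables \(\xi=\lambda x\)
in (\ref{Lem-item:h^S-repre}) of Lemma~\ref{Lem:h} gives
\begin{align}
h^S(x) = \frac{2}{\pi}\rbra*{\int_0^\infty \frac{1-\cos\xi}{\xi^{\alpha}} \, \mathrm{d}\xi} \abs{x}^{\alpha-1},
\end{align}
and hence \(h^S(x)/x\to\infty\) as \(x\to 0+\).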
\begin{proof}[Proof of Theorem~\ref{Thm:gaussian-entrance}]
Let \(\gamma_1,\gamma_2 \in [-1, 1]\)
be different constants.
Then, since \(\rbra{\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}, t> 0}\)
is a non-negative \(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}\)-martingale
by Lemma~\ref{Lem:P-g-mart}, the limit
\(\lim_{t\to 0+}\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}\) exists
\(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}\)-a.s.
We have
\begin{align}
\lim_{x\to 0\pm}\frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}
= \frac{\abs{h^{(\gamma_2)\prime}(0\pm)}}{\abs{h^{(\gamma_1)\prime}(0\pm)}}
=\frac{\abs{h'(0\pm)}\pm \gamma_2/m^2}{\abs{h'(0\pm)}\pm \gamma_1/m^2}
\in [0,\infty].
\end{align}
Consequently, the limits
\( \lim_{x\to 0+}\frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\)
and \( \lim_{x\to 0-}\frac{h^{(\gamma_2)}(x)}{h^{(\gamma_1)}(x)}\)
are different,
which yields that \(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^{+,-})=0\).
Since the \(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}\)-martingale
\(\rbra{\frac{h^{(\gamma_2)}(X_t)}{h^{(\gamma_1)}(X_t)}, t>0}\)
has mean \(1\),
it holds that
\begin{align}\label{eq:zero-mean1}
\frac{{h^{(\gamma_2)\prime}}(0+)}{{h^{(\gamma_1)\prime}}(0+)}
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^+)
+ \frac{\abs{h^{(\gamma_2)\prime}(0-)}}{\abs{h^{(\gamma_1)\prime}(0-)}}
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^-)=1.
\end{align}
Combining this with \(\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^+)+ \ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^-)=1\)
and Theorem~\ref{Thm:h^S/x}, we obtain
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^+)
= \frac{\sigma^2}{2}{h^{(\gamma_1)\prime}}(0+)
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^-)=\frac{\sigma^2}{2}\abs{{h^{(\gamma_1)\prime}}(0-)}.
\end{align}
Hence the proof is complete.
\end{proof}
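For the standard Brownian motion we have \(h(x)=\abs{x}\), \(\sigma^2=m^2=1\)
and \(h^{(\gamma)\prime}(0\pm)=\pm 1+\gamma\), so the formulas above specialize to
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^+) = \frac{1+\gamma}{2}
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^-) = \frac{1-\gamma}{2};
\end{align}
the parameter \(\gamma\) biases the sign of the initial excursion
of the conditioned process.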
\begin{proof}[Proof of Theorem~\ref{Thm:entrance-one}]
Assume \(h'(0+)=\infty\) and \(\abs{h'(0-)}<\infty\).
Let \(\gamma_1,\gamma_2 \in [-1, 1]\)
be different constants.
By the same discussion as the proof of Theorem~\ref{Thm:gaussian-entrance},
we obtain~\eqref{eq:zero-mean1}. By the assumption, we also have
\(\frac{h^{(\gamma_2)\prime}(0+)}{h^{(\gamma_1)\prime}(0+)}=1\)
for all \(-1\le\gamma_2\le 1\). Thus we have
\begin{align}
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^+)
= 1
\quad\text{and}\quad
\ensuremath{\mathbb{P}}_0^{(\gamma_1)}(\Omega_0^-)=0.
\end{align}
The proof in the case \(h'(0+)<\infty\) and \(\abs{h'(0-)}=\infty\)
is similar. So we omit it.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:feller}]
Ikeda--Watanabe~\cite[Theorem 3.3]{MR0451425} proved that
\begin{align}
\ensuremath{\mathbb{P}}_x\rbra{\Omega_1^{+,-}| T_0<\infty} = 1,
\quad x\in \ensuremath{\mathbb{R}}\setminus\cbra{0},
\end{align}
where
\begin{align}
\Omega_1^{+,-}\coloneqq
\cbra*{\text{\(\exists \cbra{t_n}\)
with \(t_n\to {T_0-}\) such that \(\forall n,\, X_{t_n}X_{t_{n+1}}<0\)}}.
\end{align}
This implies that
\begin{align}
n((\Omega_1^{+,-})^c\cap \cbra{T_0<\infty})=0.
\end{align}
By the time-reversal property of excursion paths (see~\cite[Lemma 5.2]{MR2397787}),
it holds that
\begin{align}
n((\Omega_0^{+,-})^c\cap \cbra{T_0<\infty})=0.
\end{align}
Since \((X,\ensuremath{\mathbb{P}})\) is recurrent,
it holds that
\(n(\cbra{T_0<\infty}^c)=0\). Thus we have \(n((\Omega_0^{+,-})^c)=0\),
which implies that
\( \ensuremath{\mathbb{P}}_0^{(\gamma)}(\Omega_0^{+,-})=1\).
\end{proof}
\section{Appendix A: Resolvent density under \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)}\label{Sec:resolvent}
We calculate the resolvent density under \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\)
and show some Feller property.
Recall that we always assume the condition~\ref{item:assumption}.
Let \(p_t(\mathrm{d} x)\) denote the transition law of \(X_t\) under \(\ensuremath{\mathbb{P}}\)
and let \(p_t^0(x, \mathrm{d} y)\) denote
the transition law of \(X_t\) under \(\ensuremath{\mathbb{P}}_x^0\).
By the Markov property,
we have, for \(x, y\in \ensuremath{\mathbb{R}}\setminus\cbra{0}\),
\begin{align}\label{eq:p_t^0}
p_t^0(x, \mathrm{d} y)= p_t(\mathrm{d} y-x) -\int_{[0,t]} \ensuremath{\mathbb{P}}_x(T_0\in \mathrm{d} s)p_{t-s}(\mathrm{d} y).
\end{align}
For \(t,q>0\) and \(x,y \in \ensuremath{\mathbb{R}}\setminus\cbra{0}\),
we denote the \(q\)-resolvent for killed process by
\begin{align}
r_q^0(x,y) & = \int_0^\infty \mathrm{e}^{-qt}p_t^0(x,\mathrm{d} y)\,\mathrm{d} t/\mathrm{d} y \\
& = r_q(y-x)-\frac{r_q(-x)r_q(y)}{r_q(0)} \\
& = h_q(x)+h_q(-y) - h_q(x-y) - \frac{h_q(x)h_q(-y)}{r_q(0)}.
\end{align}
Note that the second identity follows from~\eqref{eq:p_t^0}
and~\eqref{eq:-qT_0}.
This implies that the killed process
\((X,\ensuremath{\mathbb{P}}_x^0)\) has the continuous \(q\)-resolvent density.
Let \(-1\le \gamma \le 1\).
For \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}_0\) and
\(y\in \ensuremath{\mathcal{H}}^{(\gamma)}\), we denote the transition law of \(X_t\) under
\(\ensuremath{\mathbb{P}}^{(\gamma)}_x\)
by
\begin{align}
p_t^{(\gamma)}(x,\mathrm{d} y) =
\begin{dcases}
\frac{h^{(\gamma)}(y)}{h^{(\gamma)}(x)}
p_t^0(x,\mathrm{d} y) & x\in\ensuremath{\mathcal{H}}^{(\gamma)}, \\
h^{(\gamma)}(y) n(X_t\in\mathrm{d} y) & x = 0.
\end{dcases}
\end{align}
Then the \(q\)-resolvent density
\(r_q^{(\gamma)}(x,y)\) of \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\) can be expressed as follows:
if \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\),
\begin{align}
r_q^{(\gamma)}(x,y)
& = \int_0^\infty \mathrm{e}^{-qt} p_t^{(\gamma)}(x,\mathrm{d} y) \, \mathrm{d} t /\mathrm{d} y \\
& = \frac{h^{(\gamma)}(y)}{h^{(\gamma)}(x)}
\rbra*{h_q(x)+h_q(-y)-h_q(x-y)-\frac{h_q(x)h_q(-y)}{r_q(0)}},
\end{align}
and
\begin{align}
r_q^{(\gamma)}(0, y)
& = \int_0^\infty \mathrm{e}^{-qt} p_t^{(\gamma)}(0,\mathrm{d} y) \, \mathrm{d} t /\mathrm{d} y
= \frac{h^{(\gamma)}(y)r_q(y)}{r_q(0)}
= h^{(\gamma)}(y)\rbra*{1-\frac{h_q(-y)}{r_q(0)}},
\end{align}
where the second identity follows from the formula:
for any non-negative measurable function \(f\), it holds that
\begin{align}\label{eq:formula-duality}
\int_0^\infty \mathrm{e}^{-qt} n\sbra{f(X_t)}\, \mathrm{d} t
= \int_\ensuremath{\mathbb{R}} f(x) \widehat{\ensuremath{\mathbb{P}}}_x\sbra{\mathrm{e}^{-qT_0}}\, \mathrm{d} x,
\end{align}
where \(\ensuremath{\mathbb{P}}_x\) and \(\widehat{\ensuremath{\mathbb{P}}}_x\) are in weak duality, i.e.,
the probability measure \(\widehat{\ensuremath{\mathbb{P}}}_x\) denotes the law of \((-X_t, t\ge 0)\)
under \(\ensuremath{\mathbb{P}}_{-x}\).
For more details, see Chen--Fukushima--Ying~\cite{MR2397787}
and Fitzsimmons--Getoor~\cite{MR2247835}.
See also Yano--Yano--Yor~\cite[Theorem 3.3]{MR2599211}.
Summarizing the above, we obtain the following result:
\begin{Prop}\label{Prop:gamma-q-resolvent}
Let \(q>0\) and \(-1\le\gamma\le 1\). Let \(r_q^{(\gamma)}(x,y)\) denote the \(q\)-resolvent
density of \((X,\ensuremath{\mathbb{P}}_x^{(\gamma)})\). Then,
for \(y\in \ensuremath{\mathcal{H}}^{(\gamma)}\), it holds that
\begin{align}\label{eq:r_q^g}
r_q^{(\gamma)}(x,y)
=\begin{dcases}
\frac{h^{(\gamma)}(y)}{h^{(\gamma)}(x)}
\rbra*{h_q(x)+h_q(-y)-h_q(x-y)-\frac{h_q(x)h_q(-y)}{r_q(0)}}
& \text{if \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\)}, \\
h^{(\gamma)}(y)\rbra*{1-\frac{h_q(-y)}{r_q(0)}}
& \text{if \(x=0\).}
\end{dcases}
\end{align}
\end{Prop}
Letting \(q\to 0+\) in~\eqref{eq:r_q^g},
we obtain the zero resolvent
\begin{align}
r_0^{(\gamma)}(x,y)\coloneqq \lim_{q\to 0+} r_q^{(\gamma)}(x,y).
\end{align}
Note that it holds that \(\lim_{q\to 0+}\frac{1}{r_q(0)}=0\)
if \((X,\ensuremath{\mathbb{P}})\) is recurrent;
see, e.g.,~\cite[Theorem I.17]{MR1406564}
and~\cite[Theorem 37.5]{MR1739520}.
(Recall that we always assume \((X,\ensuremath{\mathbb{P}})\) is recurrent.)
\begin{Cor}
For \(y\in \ensuremath{\mathcal{H}}^{(\gamma)}\), it holds that
\begin{align}
r_0^{(\gamma)}(x,y)
=\begin{dcases}
\frac{h^{(\gamma)}(y)}{h^{(\gamma)}(x)}
\rbra*{h(x)+h(-y)-h(x-y)}
& \text{if \(x\in \ensuremath{\mathcal{H}}^{(\gamma)}\),} \\
h^{(\gamma)}(y) & \text{if \(x=0\).}
\end{dcases}
\end{align}
\end{Cor}
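For example, for the standard Brownian motion with \(\gamma=0\)
we have \(h(x)=\abs{x}\), and the corollary yields, for \(x,y>0\),
\begin{align}
r_0^{(0)}(x,y)
= \frac{y}{x}\rbra*{x+y-\abs{x-y}}
= \frac{2y^2}{x\vee y},
\end{align}
which is the classical Green function of the three-dimensional Bessel process,
as one expects since Brownian motion conditioned to avoid \(0\)
and started at \(x>0\) is a Bessel(3) process.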
Set
\begin{align}
T_t^{(\gamma)}f(x)= \ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra{f(X_t)},
\quad t\ge 0,\; f\in \ensuremath{\mathcal{B}}_{+, b}(\ensuremath{\mathcal{H}}_0^{(\gamma)}),
\end{align}
where \(\ensuremath{\mathcal{B}}_{+, b}(\ensuremath{\mathcal{H}}_0^{(\gamma)})\) denotes
the set of non-negative bounded measurable functions.
Then the family \(\cbra{T_t^{(\gamma)},t\ge 0}\)
forms a transition semigroup.
We define the resolvent operator of the semigroup \(T_t^{(\gamma)}\) as
\begin{align}
R_q^{(\gamma)}f(x) = \int_0^\infty \mathrm{e}^{-qt}T_t^{(\gamma)}f(x)\, \mathrm{d} t,\quad
q>0,\; f\in \ensuremath{\mathcal{B}}_{+, b}(\ensuremath{\mathcal{H}}_0^{(\gamma)}).
\end{align}
For Theorem~\ref{Thm:fellerp}, we are inspired by
Yano~\cite[Theorem 1.5]{MR2603019}.
\begin{Thm}\label{Thm:fellerp}
Assume the conditions (\ref{item:cond:h-gamma-infty})--(\ref{item:cond:h-/h})
of Theorem~\ref{Thm:feller} hold.
(Note that~(\ref{item:cond:h-gamma-infty}) of Theorem~\ref{Thm:feller}
and subadditivity of \(h^{(\gamma)}\)
imply that \(\ensuremath{\mathcal{H}}_0^{(\gamma)}=\ensuremath{\mathbb{R}}\).)
Then the semigroup \(\rbra{T_t^{(\gamma)}}_{t\ge 0}\) enjoys the Feller property, i.e.,
\begin{enumerate}[label=\textbf{(F\arabic*)},series=feller]
\item \(T^{(\gamma)}_tC_0(\ensuremath{\mathbb{R}})\subset C_0(\ensuremath{\mathbb{R}})\);\label{item:feller1}
\item \(\norm{T_t^{(\gamma)}f - f}\to 0\) as \(t\to 0+\) for
all \(f\in C_0(\ensuremath{\mathbb{R}})\),\label{item:feller2}
\end{enumerate}
where \(C_0(\ensuremath{\mathbb{R}})\) stands for the class of
continuous functions vanishing at infinity.
\end{Thm}
\begin{proof}
Note that the condition~(\ref{item:cond:h/x-infty}) implies that
\begin{align}
\lim_{x\to 0}\frac{h^{(\gamma)}(x)}{\abs{x}}= \infty
\quad\text{and}\quad
\lim_{x\to 0}\frac{h^{(\gamma)}(x)}{h(x)}= 1
\end{align}
for \(-1\le\gamma \le 1\).
To show the Feller property,
it is sufficient to show that
\begin{enumerate}[resume*=feller]
\item \(T_t^{(\gamma)}f(x)\to f(x)\) as \(t\to 0+\) for all
\(x\in\ensuremath{\mathbb{R}},\; f\in C_0(\ensuremath{\mathbb{R}})\),\label{item:feller3}
\item \(R_q^{(\gamma)}C_0(\ensuremath{\mathbb{R}})\subset C_0(\ensuremath{\mathbb{R}})\).\label{item:feller4}
\end{enumerate}
For more details see~\cite[Proposition III.2.4]{RevuzYor}.
Since \(T_t^{(\gamma)}f(x)=\ensuremath{\mathbb{P}}_x^{(\gamma)}\sbra{f(X_t)}\)
and since \(\ensuremath{\mathbb{P}}_x^{(\gamma)}\) is a probability measure on the
c\`{a}dl\`{a}g space,~\ref{item:feller3} is obvious.
We proceed to the proof of~\ref{item:feller4}.
Let \(C_c(\ensuremath{\mathbb{R}})\) stand for the set of continuous functions with compact support
on \(\ensuremath{\mathbb{R}}\).
Since \(\norm{qR_q^{(\gamma)}f}\le
\norm{f}\)
and since the closure of \({C_c(\ensuremath{\mathbb{R}})}\) is \(C_0(\ensuremath{\mathbb{R}}) \),
it is sufficient to show that \(R_q^{(\gamma)}C_c(\ensuremath{\mathbb{R}})\subset C_0(\ensuremath{\mathbb{R}})\).
Recall that, for \(x\in\ensuremath{\mathbb{R}}\),
\begin{align}
R_q^{(\gamma)}f(x)
= \int_{\ensuremath{\mathcal{H}}^{(\gamma)}} f(y)r_q^{(\gamma)}(x,y)\, \mathrm{d} y,
\quad f\in C_c(\ensuremath{\mathbb{R}}).
\end{align}
Since \(f\) has compact support and is
continuous, and \(r_q^{(\gamma)}\) is continuous in
\((x,y)\in \ensuremath{\mathcal{H}}^{(\gamma)}\times \ensuremath{\mathcal{H}}^{(\gamma)}\),
the function \(R_q^{(\gamma)}f(x)\) is continuous
in \(x\in\ensuremath{\mathcal{H}}^{(\gamma)}\).
Let the set \(A\subset\ensuremath{\mathbb{R}}\) stand for the support of \(f\).
Since \(r_q^{(\gamma)}(x,y)=\frac{h^{(\gamma)}(y)}{h^{(\gamma)}(x)}r_q^0(x,y)\),
it holds that
\begin{align}
R_q^{(\gamma)}f(x)
& = \frac{1}{h^{(\gamma)}(x)}\int_{\ensuremath{\mathcal{H}}^{(\gamma)}} f(y)h^{(\gamma)}(y) r_q^0(x,y)
\, \mathrm{d} y \\
& \le \sup_{y\in A} h^{(\gamma)}(y) \frac{1}{h^{(\gamma)}(x)}
\int_{\ensuremath{\mathcal{H}}^{(\gamma)}} f(y) r_q^0(x,y)\, \mathrm{d} y \\
& \le \sup_{y\in A} h^{(\gamma)}(y) \frac{\norm{f}}{q
h^{(\gamma)}(x)}\ensuremath{\mathbb{P}}_x\sbra{\mathrm{e}^{-qT_A}} \\
& \to 0 \quad \text{as \(x\to\pm\infty\).}
\end{align}
Here we used the assumption~(\ref{item:cond:h-gamma-infty}).
Hence \(R_q^{(\gamma)}f(x)\) vanishes at infinity.
It remains to prove that \(R_q^{(\gamma)}f(x)\) is continuous at \(x=0\).
By Proposition~\ref{Prop:gamma-q-resolvent}, and
assumptions~(\ref{item:cond:h/x-infty})
and~(\ref{item:cond:h_q/h}),
we have \(r_q(x,y)\to r_q(0,y)\) as \(x\to 0\).
Moreover, since \(h_q\) is subadditive,
it holds that, for \(x,y\in \ensuremath{\mathcal{H}}^{(\gamma)}\),
\begin{align}
r_q^{(\gamma)}(x,y)\le h^{(\gamma)}(y)
\frac{h_q(x)+h_q(-x)}{h^{(\gamma)}(x)}.
\label{eq:r_q-bdd}
\end{align}
The conditions~(\ref{item:cond:h_q/h}) and~(\ref{item:cond:h-/h}) imply that
the right-hand side of~\eqref{eq:r_q-bdd} is bounded for \(x\) near \(0\)
and \(y\in A\).
Thus we may apply the dominated convergence theorem to deduce that
\(R_q^{(\gamma)}f(x)\) is also continuous at \(x=0\).
Hence \((T_t^{(\gamma)})\) has Feller property.
\end{proof}
\section{Appendix B: Proof of Theorem~\ref{Thm:limit-P0}}\label{Sec:pf-meander}
Recall that assumption~\ref{item:assumption} is always in force.
\begin{Lem}\label{Lem:integ-h-}
For any \(t>0\), it holds that \(n\sbra{h(-X_t)}<\infty\).
\end{Lem}
Recall that \(n\sbra{h(X_t)}=1\) for all \(t>0\); see
Lemma~\ref{Lem:harmonic}.
\begin{proof}[Proof of Lemma~\ref{Lem:integ-h-}]
We write \(\hat{h}(x)=h(-x)\).
By the formula~\eqref{eq:formula-duality}
and by~\eqref{eq:-qT_0},
we have
\begin{align}
\int_0^\infty \mathrm{e}^{-qt} n\sbra{\hat{h}(X_t)}\,\mathrm{d} t
& = \int_\ensuremath{\mathbb{R}} \hat{h}(x)\widehat{\ensuremath{\mathbb{P}}}_x\sbra{\mathrm{e}^{-qT_0}}\,\mathrm{d} x \\
& = \int_\ensuremath{\mathbb{R}} \hat{h}(x)\frac{r_q(x)}{r_q(0)}\,\mathrm{d} x.\label{eq:h-hat-inte}
\end{align}
By~\cite[(3.20)]{MR3689384}, the quantity in~\eqref{eq:h-hat-inte} is finite.
(Note that the assumptions in~\cite{MR3689384} are stronger,
but the conclusion remains valid here, since the proof there only requires Lemma~\ref{Lem:h}.)
Hence, for almost every \(t>0\),
it holds that \(n\sbra{\hat{h}(X_t)}<\infty\).
Thus for any \(t>0\), there exists \(0<s<t\) such that
\(n\sbra{\hat{h}(X_s)}<\infty\).
By the Markov property of the excursion measure \(n\), we have
\begin{align}
n\sbra{\hat{h}(X_t)}
& = n\sbra{\ensuremath{\mathbb{P}}_{X_s}\sbra{\hat{h}(X_{t-s}); T_0>t-s}} \\
& \le n\sbra{\ensuremath{\mathbb{P}}_{X_s}\sbra{\hat{h}(X_{t-s})}} \\
& = n\sbra{\widetilde{\ensuremath{\mathbb{P}}}_{0}\sbra{\hat{h}(X_s+\widetilde{X}_{t-s})}},
\end{align}
where the symbol \(\widetilde{\hspace{12pt}}\) indicates an independent copy of the process.
Since \(\hat{h}\) is subadditive,
we obtain
\begin{align}
n\sbra{\hat{h}(X_t)}\le
n\sbra{\hat{h}(X_s)}+\ensuremath{\mathbb{P}}_{0}\sbra{\hat{h}(X_{t-s})}.
\end{align}
By Tsukada~\cite[Proof of Theorem 15.2]{MR3838874}
(see also Takeda--Yano~\cite[Lemma 4.3]{me}),
we have \(\ensuremath{\mathbb{P}}_{0}\sbra{\hat{h}(X_{t-s})}<\infty\).
Consequently, it holds that
\(n\sbra{\hat{h}(X_t)}<\infty\) for all \(t>0\).
\end{proof}
For the proof of Theorem~\ref{Thm:limit-P0},
we will use the following lemma, whose proof
can be found in~\cite[Lemmas 3.4 and 6.2]{me}.
\begin{Lem}[\cite{me}]\label{Lem:hitting-h-repre}
For \(a,b\in \ensuremath{\mathbb{R}}\setminus\cbra{0}\) and \(a\ne b\), it holds that
\begin{align}
& h^B(a)\coloneqq \ensuremath{\mathbb{P}}_0\sbra{L_{T_a}}
= h(a)+h(-a), \\
& h^B(a)\ensuremath{\mathbb{P}}_x(T_a<T_0)
= h(x)+h(-a)-h(x-a), \\
& \ensuremath{\mathbb{P}}_0\sbra{L_{T_{\cbra{a,-b}}}}\ensuremath{\mathbb{P}}_x(T_{\cbra{a,-b}}<T_0) \\
& =h(x) + \frac{1}{h^B(a+b)}
\cbra*{\begin{multlined}
\rbra[\big]{h(-a)-h(x-a)}h(a+b)
+ \rbra[\big]{h(b)-h(x+b)}h(-a-b)\\
-\rbra[\big]{h(a)-h(-b)}\rbra[\big]{h(-a)-h(x-a)-h(b)+h(x+b)}
\end{multlined}}.
\end{align}
\end{Lem}
\begin{proof}[Proof of Theorem~\ref{Thm:limit-P0}]
For \(s>0\),
we define \(d_s=\inf\cbra{u>s\colon X_u=0}\).
We also define \(G=\cbra{g_s\colon g_s\ne d_s, s>0}\).
\noindent (\ref{Thm-item:limit-P0-exp})
For any \(q>0\), we have
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
& =\ensuremath{\mathbb{P}}_0\sbra*{\int_0^\infty q\mathrm{e}^{-qu}F_t\circ k_{u-{g_u}}\circ \theta_{g_u}\,
\mathrm{d} u} \\
& = \ensuremath{\mathbb{P}}_0\sbra*{\sum_{s\in G}\mathrm{e}^{-qs} \int_s^{d_s} q\mathrm{e}^{-q(u-s)}F_t
\circ k_{u-s}\circ \theta_{s}\, \mathrm{d} u}.
\end{align}
Using the compensation formula in excursion theory (see, e.g.,
Bertoin~\cite[Corollary IV.11]{MR1406564}), we obtain
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
=\ensuremath{\mathbb{P}}_0 \sbra*{\int_0^\infty \mathrm{e}^{-qs}\, \mathrm{d} L_s}
n\sbra*{\int_0^{T_0} q\mathrm{e}^{-qu} F_t 1_{\cbra{u>t}}\, \mathrm{d} u}.
\end{align}
By~\eqref{eq:regularity-of-L} and the Markov property of the excursion measure \(n\),
it holds that
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
& = r_q(0) n\sbra{F_t; t<\bm{e}_q<T_0} \\
& = r_q(0)\mathrm{e}^{-qt}n\sbra{F_t \ensuremath{\mathbb{P}}_{X_t}\sbra{T_0>\bm{e}_q}}.
\end{align}
It follows from~\eqref{eq:-qT_0} that
\begin{align}
\ensuremath{\mathbb{P}}_{X_t}\sbra{T_0>\bm{e}_q}=1-\ensuremath{\mathbb{P}}_{X_t}\sbra{\mathrm{e}^{-qT_0}}
= 1-\frac{r_q(-X_t)}{r_q(0)}=\frac{h_q(X_t)}{r_q(0)}.
\end{align}
Hence we have
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
= \mathrm{e}^{-qt}n\sbra{F_t h_q(X_t)}.
\end{align}
By~\eqref{eq:h_q},~\eqref{eq:theta} and (\ref{Lem-item:h^S-repre}) of
Lemma~\ref{Lem:h},
it holds that
\begin{align}
h_q(X_t) & \le h_q(X_t)+h_q(-X_t)
=\frac{2}{\pi}\int_0^\infty
\Re\rbra*{\frac{1-\cos \lambda X_t}{q+\varPsi(\lambda)}} \, \mathrm{d} \lambda \\
& \le \frac{2}{\pi}\int_0^\infty
\Re\rbra*{\frac{1-\cos \lambda X_t}{\varPsi(\lambda)}} \, \mathrm{d} \lambda
= h(X_t)+h(-X_t).
\end{align}
By Lemmas~\ref{Lem:harmonic} and~\ref{Lem:integ-h-},
the function
\(h(X_t)+h(-X_t)\) is integrable with respect to the measure \(n\).
Thus we may apply the dominated convergence theorem
to deduce
\begin{align}
\lim_{q\to 0+}\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{\bm{e}_q-g_{\bm{e}_q}}\circ \theta_{g_{\bm{e}_q}}}
= \lim_{q\to 0+}\mathrm{e}^{-qt}n\sbra{F_t h_q(X_t)}
=n\sbra{F_t h(X_t)}
= \ensuremath{\mathbb{P}}_0^{(0)}\sbra{F_t}.
\end{align}
\noindent (\ref{Thm-item:limit-P0-hitting})
For \(a\in \ensuremath{\mathbb{R}}\setminus\cbra{0}\), we have
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_a}-g_{{T_a}}}\circ \theta_{g_{{T_a}}}}
=\ensuremath{\mathbb{P}}_0\sbra*{\sum_{s\in G} 1_{\cbra{s<T_a<d_s}}
F_t\circ k_{{T_a}-s}\circ \theta_{s}}.
\end{align}
Using the compensation formula in excursion theory,
the Markov property of the excursion measure \(n\)
and Lemma~\ref{Lem:hitting-h-repre},
it holds that
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_a}-g_{{T_a}}}\circ \theta_{g_{{T_a}}}}
& =\ensuremath{\mathbb{P}}_0\sbra*{\int_0^{T_a}\, \mathrm{d} L_s}
n\sbra{F_t; t<T_a<T_0} \\
& = \ensuremath{\mathbb{P}}_0\sbra{L_{T_a}}n\sbra{F_t\ensuremath{\mathbb{P}}_{X_t}(T_a<T_0);t<T_a} \\
& = n\sbra{F_t ( h(-a)+h(X_t)-h(X_t-a)); t<T_a}.
\end{align}
Since \(h\) is subadditive, we have
\(h(-a)=h((X_t-a)+(-X_t))\le h(X_t-a)+h(-X_t)\), and hence
\begin{align}
h(-a)+h(X_t)-h(X_t-a)\le h(X_t)+h(-X_t),
\end{align}
which is integrable with respect to the measure \(n\).
Thus we may apply the dominated convergence theorem to deduce
\begin{align}
\lim_{a\to\pm\infty}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_a}-g_{{T_a}}}\circ \theta_{g_{{T_a}}}}
= n\sbra{F_t h^{(\pm 1)}(X_t)}
=\ensuremath{\mathbb{P}}_0^{(\pm 1)}\sbra{F_t},
\end{align}
where we used (\ref{Lem-item:h-diff-infty}) of Lemma~\ref{Lem:h}.
\noindent (\ref{Thm-item:limit-meas-twohitting})
By the same discussion as the proof of (\ref{Thm-item:limit-P0-hitting}),
it holds that
\begin{align}
\ensuremath{\mathbb{P}}_0\sbra{F_t\circ k_{{T_{\cbra{a,-b}}}-g_{T_{\cbra{a,-b}}}}
\circ \theta_{g_{T_{\cbra{a,-b}}}}}
= \ensuremath{\mathbb{P}}_0\sbra{L_{T_{\cbra{a,-b}}}}n\sbra{F_t\ensuremath{\mathbb{P}}_{X_t}(T_{\cbra{a,-b}}<T_0)
;t<T_{\cbra{a,-b}}}.
\end{align}
By Lemma~\ref{Lem:hitting-h-repre} and by the dominated convergence theorem,
we obtain the desired result. (We omit the details.)
\end{proof}
\section{Introduction}
This template is for papers of VGTC-sponsored conferences which are \emph{\textbf{not}} published in a special issue of TVCG.
\section{Using the Style Template}
\begin{itemize}
\item If you receive compilation errors along the lines of ``\texttt{Package ifpdf Error: Name clash, \textbackslash ifpdf is already defined}'' then please add a new line ``\texttt{\textbackslash let\textbackslash ifpdf\textbackslash relax}'' right after the ``\texttt{\textbackslash documentclass[journal]\{vgtc\}}'' call. Note that your error is due to packages you use that define ``\texttt{\textbackslash ifpdf}'' which is obsolete (the result is that \texttt{\textbackslash ifpdf} is defined twice); these packages should be changed to use ifpdf package instead.
\item The style uses the hyperref package, thus turns references into internal links. We thus recommend to make use of the ``\texttt{\textbackslash autoref\{reference\}}'' call (instead of ``\texttt{Figure\~{}\textbackslash ref\{reference\}}'' or similar) since ``\texttt{\textbackslash autoref\{reference\}}'' turns the entire reference into an internal link, not just the number. Examples: \autoref{fig:sample} and \autoref{tab:vis_papers}.
\item The style automatically looks for image files with the correct extension (eps for regular \LaTeX; pdf, png, and jpg for pdf\LaTeX), in a set of given subfolders (figures/, pictures/, images/). It is thus sufficient to use ``\texttt{\textbackslash includegraphics\{CypressView\}}'' (instead of ``\texttt{\textbackslash includegraphics\{pictures/CypressView.jpg\}}'').
\item For adding hyperlinks and DOIs to the list of references, you can use ``\texttt{\textbackslash bibliographystyle\{abbrv-doi-hyperref-narrow\}}'' (instead of ``\texttt{\textbackslash bibliographystyle\{abbrv\}}''). It uses the doi and url fields in a bib\TeX\ entry and turns the entire reference into a link, giving priority to the doi. The doi can be entered with or without the ``\texttt{http://dx.doi.org/}'' url part. See the examples in the bib\TeX\ file and the bibliography at the end of this template.\\[1em]
\textbf{Note 1:} occasionally (for some \LaTeX\ distributions) this hyper-linked bib\TeX\ style may lead to \textbf{compilation errors} (``\texttt{pdfendlink ended up in different nesting level ...}'') if a reference entry is broken across two pages (due to a bug in hyperref). In this case make sure you have the latest version of the hyperref package (i.\,e., update your \LaTeX\ installation/packages) or, alternatively, revert back to ``\texttt{\textbackslash bibliographystyle\{abbrv-doi-narrow\}}'' (at the expense of removing hyperlinks from the bibliography) and try ``\texttt{\textbackslash bibliographystyle\{abbrv-doi-hyperref-narrow\}}'' again after some more editing.\\[1em]
\textbf{Note 2:} the ``\texttt{-narrow}'' versions of the bibliography style use the font ``PTSansNarrow-TLF'' for typesetting the DOIs in a compact way. This font needs to be available on your \LaTeX\ system. It is part of the \href{https://www.ctan.org/pkg/paratype}{``paratype'' package}, and many distributions (such as MikTeX) have it automatically installed. If you do not have this package yet and want to use a ``\texttt{-narrow}'' bibliography style then use your \LaTeX\ system's package installer to add it. If this is not possible you can also revert to the respective bibliography styles without the ``\texttt{-narrow}'' in the file name.\\[1em]
DVI-based processes to compile the template apparently cannot handle the different font so, by default, the template file uses the \texttt{abbrv-doi} bibliography style but the compiled PDF shows you the effect of the \texttt{abbrv-doi-hyperref-narrow} style.
\end{itemize}
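The \texttt{\textbackslash ifpdf} workaround and the \texttt{\textbackslash autoref} call described above fit together as in the following minimal preamble sketch (the figure label is taken from this template's sample figure):

```latex
% Minimal sketch combining the two recommendations above.
% The \let line is only needed if a loaded package clashes with the
% obsolete \ifpdf definition; it must come right after \documentclass.
\documentclass[journal]{vgtc}
\let\ifpdf\relax
\begin{document}
% \autoref turns the entire reference into an internal link,
% not just the number:
As \autoref{fig:sample} shows, \dots
\end{document}
```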
\section{Bibliography Instructions}
\begin{itemize}
\item Sort all bibliographic entries alphabetically by the last name of the first author. This \LaTeX/bib\TeX\ template takes care of this sorting automatically.
\item Merge multiple references into one; e.\,g., use \cite{Max:1995:OMF,Kitware:2003} (not \cite{Kitware:2003}\cite{Max:1995:OMF}). Within each set of multiple references, the references should be sorted in ascending order. This \LaTeX/bib\TeX\ template takes care of both the merging and the sorting automatically.
\item Verify all data obtained from digital libraries; even ACM's DL, IEEE Xplore, etc.\ are sometimes wrong or incomplete.
\item Do not trust bibliographic data from other services such as Mendeley.com, Google Scholar, or similar; these are even more likely to be incorrect or incomplete.
\item Articles in journal---items to include:
\begin{itemize}
\item author names
\item title
\item journal name
\item year
\item volume
\item number
\item month of publication as variable name (i.\,e., \{jan\} for January, etc.; month ranges using \{jan \#\{/\}\# feb\} or \{jan \#\{-{}-\}\# feb\})
\end{itemize}
\item use journal names in proper style: correct: ``IEEE Transactions on Visualization and Computer Graphics'', incorrect: ``Visualization and Computer Graphics, IEEE Transactions on''
\item Papers in proceedings---items to include:
\begin{itemize}
\item author names
\item title
\item abbreviated proceedings name: e.\,g., ``Proc.\textbackslash{} CONF\_ACRONYM'' without the year; example: ``Proc.\textbackslash{} CHI'', ``Proc.\textbackslash{} 3DUI'', ``Proc.\textbackslash{} Eurographics'', ``Proc.\textbackslash{} EuroVis''
\item year
\item publisher
\item town with country of publisher (the town can be abbreviated for well-known towns such as New York or Berlin)
\end{itemize}
\item article/paper title convention: refrain from using curly brackets, except for acronyms/proper names/words following dashes/question marks etc.; example:
\begin{itemize}
\item paper ``Marching Cubes: A High Resolution 3D Surface Construction Algorithm''
\item should be entered as ``\{M\}arching \{C\}ubes: A High Resolution \{3D\} Surface Construction Algorithm'' or ``\{M\}arching \{C\}ubes: A high resolution \{3D\} surface construction algorithm''
\item will be typeset as ``Marching Cubes: A high resolution 3D surface construction algorithm''
\end{itemize}
\item for all entries
\begin{itemize}
\item DOI can be entered in the DOI field as plain DOI number or as DOI url; alternative: a url in the URL field
\item provide full page ranges AA-{}-BB
\end{itemize}
\item when citing references, do not use the reference as a sentence object; e.\,g., wrong: ``In \cite{Lorensen:1987:MCA} the authors describe \dots'', correct: ``Lorensen and Cline \cite{Lorensen:1987:MCA} describe \dots''
\end{itemize}
\section{Example Section}
Lorem\marginpar{\small You can use the margins for comments while editing the submission, but please remove the marginpar comments for submission.} ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam
voluptua~\cite{Kitware:2003,Max:1995:OMF}. At vero eos et accusam et
justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit
amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor
invidunt ut labore et dolore magna aliquyam erat, sed diam
voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est.
\section{Exposition}
Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse
molestie consequat, vel illum dolore eu feugiat nulla facilisis at
vero eros et accumsan et iusto odio dignissim qui blandit praesent
luptatum zzril delenit augue duis dolore te feugait nulla
facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit,
sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna
aliquam erat volutpat~\cite{Kindlmann:1999:SAG}.
\begin{equation}
\sum_{j=1}^{z} j = \frac{z(z+1)}{2}
\end{equation}
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam
et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet.
\subsection{Lorem ipsum}
Lorem ipsum dolor sit amet (see \autoref{tab:vis_papers}), consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam
et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit
amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor
invidunt ut labore et dolore magna aliquyam erat, sed diam
voluptua. At vero eos et accusam et justo duo dolores et ea
rebum.
\begin{table}[tb]
\caption{VIS/VisWeek accepted/presented papers: 1990--2016.}
\label{tab:vis_papers}
\scriptsize%
\centering%
\begin{tabu}{%
r%
*{7}{c}%
*{2}{r}%
}
\toprule
year & \rotatebox{90}{Vis/SciVis} & \rotatebox{90}{SciVis conf} & \rotatebox{90}{InfoVis} & \rotatebox{90}{VAST} & \rotatebox{90}{VAST conf} & \rotatebox{90}{TVCG @ VIS} & \rotatebox{90}{CG\&A @ VIS} & \rotatebox{90}{VIS/VisWeek} \rotatebox{90}{incl. TVCG/CG\&A} & \rotatebox{90}{VIS/VisWeek} \rotatebox{90}{w/o TVCG/CG\&A} \\
\midrule
2016 & 30 & & 37 & 33 & 15 & 23 & 10 & 148 & 115 \\
2015 & 33 & 9 & 38 & 33 & 14 & 17 & 15 & 159 & 127 \\
2014 & 34 & & 45 & 33 & 21 & 20 & & 153 & 133 \\
2013 & 31 & & 38 & 32 & & 20 & & 121 & 101 \\
2012 & 42 & & 44 & 30 & & 23 & & 139 & 116 \\
2011 & 49 & & 44 & 26 & & 20 & & 139 & 119 \\
2010 & 48 & & 35 & 26 & & & & 109 & 109 \\
2009 & 54 & & 37 & 26 & & & & 117 & 117 \\
2008 & 50 & & 28 & 21 & & & & 99 & 99 \\
2007 & 56 & & 27 & 24 & & & & 107 & 107 \\
2006 & 63 & & 24 & 26 & & & & 113 & 113 \\
2005 & 88 & & 31 & & & & & 119 & 119 \\
2004 & 70 & & 27 & & & & & 97 & 97 \\
2003 & 74 & & 29 & & & & & 103 & 103 \\
2002 & 78 & & 23 & & & & & 101 & 101 \\
2001 & 74 & & 22 & & & & & 96 & 96 \\
2000 & 73 & & 20 & & & & & 93 & 93 \\
1999 & 69 & & 19 & & & & & 88 & 88 \\
1998 & 72 & & 18 & & & & & 90 & 90 \\
1997 & 72 & & 16 & & & & & 88 & 88 \\
1996 & 65 & & 12 & & & & & 77 & 77 \\
1995 & 56 & & 18 & & & & & 74 & 74 \\
1994 & 53 & & & & & & & 53 & 53 \\
1993 & 55 & & & & & & & 55 & 55 \\
1992 & 53 & & & & & & & 53 & 53 \\
1991 & 50 & & & & & & & 50 & 50 \\
1990 & 53 & & & & & & & 53 & 53 \\
\midrule
\textbf{sum} & \textbf{1545} & \textbf{9} & \textbf{632} & \textbf{310} & \textbf{50} & \textbf{123} & \textbf{25} & \textbf{2694} & \textbf{2546} \\
\bottomrule
\end{tabu}%
\end{table}
\subsection{Mezcal Head}
Lorem ipsum dolor sit amet (see \autoref{fig:sample}), consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam
et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet.
\subsubsection{Duis Autem}
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam
et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit
amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor
invidunt ut labore et dolore magna aliquyam erat, sed diam
voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est. Lorem
ipsum dolor sit amet.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{paper-count-w-2015-new}
\caption{A visualization of the 1990--2015 data from \autoref{tab:vis_papers}. The image is from \cite{Isenberg:2017:VMC} and is in the public domain.}
\label{fig:sample}
\end{figure}
\subsubsection{Ejector Seat Reservation}
Duis autem~\cite{Lorensen:1987:MCA}\footnote{The algorithm behind
Marching Cubes \cite{Lorensen:1987:MCA} had already been
described by Wyvill et al. \cite{Wyvill:1986:DSS} a year
earlier.} vel eum iriure dolor in hendrerit
in vulputate velit esse molestie consequat,\footnote{Footnotes
appear at the bottom of the column.} vel illum dolore eu
feugiat nulla facilisis at vero eros et accumsan et iusto odio
dignissim qui blandit praesent luptatum zzril delenit augue duis
dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet,
consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt
ut laoreet dolore magna aliquam erat volutpat.
\paragraph{Confirmed Ejector Seat Reservation}
Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper
suscipit lobortis nisl ut aliquip ex ea commodo
consequat~\cite{Nielson:1991:TAD}. Duis autem vel eum iriure dolor in
hendrerit in vulputate velit esse molestie consequat, vel illum dolore
eu feugiat nulla facilisis at vero eros et accumsan et iusto odio
dignissim qui blandit praesent luptatum zzril delenit augue duis
dolore te feugait nulla facilisi.
\paragraph{Rejected Ejector Seat Reservation}
Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper
suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem
vel eum iriure dolor in hendrerit in vulputate velit esse molestie
\section{Conclusion}
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur
sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam
et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit
amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor
invidunt ut labore et dolore magna aliquyam erat, sed diam
voluptua. At vero eos et accusam et justo duo dolores et ea
rebum.
\acknowledgments{
The authors wish to thank A, B, and C. This work was supported in part by
a grant from XYZ.}
\bibliographystyle{abbrv-doi}
\section{Introduction}
Virtual reality (VR) technology has a wide variety of applications (e.g., education, physical fitness, rehabilitation, entertainment). However, VR causes gait (i.e., walking patterns) disturbance in most users, which limits the usability and benefits of VR \cite{agrawal2009disorders,ferdous2018investigating,guo2013effects}. This problem is especially severe for persons with mobility impairments (MI) as these populations have functional gait disorders, and further gait disturbance in VR makes it increasingly difficult for them to use VR technologies. For example, persons with mobility impairments may find it very difficult to perform various locomotor movements in VR without the risk of falling or injury. However, minimal research has been conducted to mitigate these challenges.
Outside of VR research, the field of assistive technologies has shown that some multimodal feedback techniques \cite{franco2012ibalance,sienko2017role} can improve gait and balance and support individuals with MI during daily activities. For example, assistive technology based on vibrotactile \cite{mahmud2022vibrotactile} and visual feedback has been applied in studies aimed at improving balance and gait for persons with disabilities \cite{velazquez2010wearable,thikey2011need,vcakrt2010exercise,sutbeyaz2007mirror}. Similarly, auditory feedback in a non-VR environment has also improved gait in some prior studies. For example, Baram et al. \cite{baram2007auditory} reported that walking velocity and stride length improved significantly in a non-VR environment using auditory feedback compared to baseline for participants with Multiple Sclerosis (MS). However, the application of auditory feedback on gait and balance performance in VR has not been thoroughly explored.
To investigate solutions to the gait disturbance issue, we conducted empirical studies applying various auditory feedback techniques (e.g., spatial, static rest frame, and rhythmic audio) in VR for participants with and without MI. Participants performed a timed walking task on GAITRite, a pressure-sensitive walkway for quantitative gait analysis. In our study, all auditory conditions improved gait parameters significantly, with spatial audio outperforming the others. The purpose of this study was to make immersive VR more accessible using auditory feedback and to analyze its influence on gait performance while in VR. However, we did not measure post-study effects on gait performance.
\section{Background and Related Work}
\subsection{Gait Disturbance in VR}
VR has been shown to induce instability and gait disturbance in prior studies. Research published in 2001 reported that balance in VR was impaired \cite{takahashi2001change}. Individuals using HMDs can lose stability due to end-to-end latency and illusory impressions of body movement generated by VR because HMDs obstruct visual feedback from the real world \cite{soltani2020influence, martinez2018analysing}. Prolonged engagement in VR also resulted in postural instability~\cite{murata2004effects}. These postural instabilities caused by VR can induce gait instability while walking in a virtual environment (VE) \cite{hollman2007does}. Riem et al. \cite{riem2020effect} also reported significant disturbance of step length (\textit{p} $<$ .05) in VR compared to baseline. Other studies have also explored the impact of imbalance on gait disturbance \cite{sondell2005altered}. Horsak et al. recruited 21 participants (male: 9, female: 12, age: 37.62 ± 8.55 years) to see if walking in an HMD-based VE has a significant impact on gait~\cite{horsak2021overground}. Walking speed was reduced by 7.3\% in the HMD-based VE in their study. Canessa et al. also investigated the difference between real-world walking and immersive VR walking using an HMD~\cite{canessa2019comparing}. They reported that walking velocity decreased significantly (\textit{p} $<$ .05) in the immersive VE compared to real-world walking. Also, most prior studies concentrated on participants without MI \cite{lott2003effect,epure2014effect,robert2016effect,horlings2009influence,samaraweera2015applying}. For example, Martelli et al.~\cite{martelli2019gait} investigated whether gaits change during overground walking in a VE while using a VR HMD with continuous multidirectional visual field perturbations. In four different settings, 12 healthy young adults walked for six minutes on a pathway. Reduced stride length, greater stride width, and higher stride variability were observed when the visual field was perturbed.
However, there have been very few attempts in the past to address gait disturbance in VR. This inspired us to investigate the gait disturbance issue to make walking in VR more accessible.
\subsection{Gait Improvement After VR Intervention}
Although the focus of our research is VR accessibility and gait improvement while in VR, it is important to review how VR rehabilitation applications have previously facilitated balance and gait improvement that persists after the VR experience is over \cite{de2016effect,meldrum2012effectiveness,park2015effects,cho2016treadmill,duque2013effects, bergeron2015use}.
For example, Walker et al. \cite{walker2010virtual} recruited seven post-stroke patients with MI to investigate the improvement in walking and balance abilities using a low-cost VR system. They designed the VE to provide participants the sensation of walking along a city street, which was displayed via a television screen in front of a treadmill. They collected postural feedback via a head-mounted position sensor. An overhead suspension harness supported all participants. Six participants (mean age 53.5y, range 49–62y) completed the study. Results suggested significant improvement (\textit{p} $<$ .05) in post-study balance, walking speed, and gait functionality. Berg Balance Scale (BBS) score improved by 10\%, walking speed improved by 38\%, and Functional Gait Assessment (FGA) score increased by 30\% compared to baseline in their study. However, the majority of the prior work in VR gait rehabilitation did not use HMDs. We used HMDs to render the VEs in our study.
\subsection{The Effects of HMDs on Persons With Gait Disturbance}
Winter et al. \cite{winter2021immersive} recruited 36 (Male: 10, Female: 26) participants without MI and 14 participants with MI (MS: 10, Stroke: 4) to investigate the effect of an immersive, semi-immersive, and no VR environment on gait during treadmill training. First, participants completed the treadmill training without VR. Then, they experienced a virtual walking path displayed via a monitor in the semi-immersive VR condition. They experienced the same VR scenario via HMD in the immersive VR condition. Experimental results showed that immersive VR during gait rehabilitation increased walking speed more significantly (\textit{p} $<$ .001) than semi-immersive and no VR conditions for participants with and without MI. Participants did not experience cybersickness or a significant increase in heart rate after the VR conditions.
Janeh et al. recruited 15 male patients with Parkinson's disease to investigate a VR-based gait manipulation approach aimed at achieving gait symmetry by adjusting step length \cite{janeh2019gait}. They compared natural gait with walking circumstances during VR-based gait manipulation activities utilizing visual or proprioceptive signals. VR manipulation activities enhanced step width and swing time as compared to natural gait. Janeh et al. also reported VR as a promising and potentially beneficial tool for improving the gait of persons with neurological disorders after VR experience. They stressed the significance of using virtual walking approaches in rehabilitation\cite{janeh2021review}.
Also, Guo et al. \cite{guo2015mobility}
investigated the effect of VEs on gait for both participants with and without MI. They reported that MI participants responded differently in terms of walking velocity, step length, and stride length. However, there was no significant difference for other gait parameters between participants with and without MI.
Ferdous et al. \cite{ferdous2018investigating} investigated the effect of HMDs and visual components on postural stability in VR for participants with MS. However, they did not investigate the effect on gait, which is the case of most prior studies in immersive VR with HMDs. As a result, the impacts of immersive VR with HMDs on gait parameters have not received enough attention, prompting us to look into the effect on gait in VEs with HMDs for people with and without disabilities.
\subsection{Non-VR Assistive Technology: Auditory Feedback for Gait and Balance Improvement in Real World}
Prior research in non-VR environments found that auditory feedback can greatly improve postural control in the real world, although it is considered less effective than visual feedback techniques \cite{gandemer2016sound}.
Auditory feedback based on the user's lateral trunk lean helped to maintain postural stability \cite{chiari2005audio}. \textit{Spatial audio} - audio that is localized in 3D by the user - was effective in preserving postural stability \cite{stevens2016auditory,gandemer2017spatial}. \textit{Static rest frame audio} - white noise that is uniform and continuous - was found to reduce postural instability in older adults \cite{ross2016auditory,cornwell2020walking}. People with mobility impairments (e.g., people with multiple sclerosis or Parkinson's disease) and the elderly improved gait using \textit{rhythmic audio} (hearing a consistent beat) \cite{ghai2018effect}. Maculewicz et al. \cite{maculewicz2015effects} also investigated different rhythmic auditory feedback patterns and reported a significant increase in walking speed (\textit{p} $<$ .001) with rhythmic auditory feedback compared to a no-feedback condition. Spatial audio was employed more frequently than other auditory approaches since it is claimed to be more natural and realistic \cite{chong2020audio, pinkl2020spatialized}. However, these studies investigated auditory feedback in non-VR settings, whereas we investigated the effect of auditory feedback on gait improvement in VR settings.
\subsection{Auditory Feedback in VR and the Effect on Gait}
Limited research has investigated the effect of auditory feedback on gait and balance in VR. Mahmud et al. \cite{mahmud2022auditory} investigated different auditory feedback techniques to mitigate imbalance issues in VR. They observed that all auditory feedback conditions improved balance significantly for participants both with and without MS, with spatial audio outperforming the others. Gandemer et al. studied persons with low vision and found that spatial audio in an immersive VE enhanced gait and balance \cite{gandemer2017spatial}. In most cases, spatial audio was favored for use in VR because it provided more immersion \cite{mahmud2022auditory, wenzel2017perception, naef2002spatialized}.
However, the effect of various auditory feedback on VR walking has been minimally studied \cite{nilsson2018natural}. This inspired us to explore the impact of spatial, static rest frame, and rhythmic auditory feedback on gait in VR with a specific focus on persons with gait-related disabilities.
\section{Methods}
\subsection{Hypotheses}
This study explored the impact of three auditory techniques (spatial, static rest frame, and rhythmic) on gait parameters in a VR environment. Spatial \cite{mahmud2022auditory, gandemer2017spatial}, static rest frame \cite{ross2016auditory,cornwell2020walking}, and rhythmic \cite{ghai2018effect} auditory feedback were found to be effective in previous literature in VR and non-VR settings, which motivated us to choose these auditory feedback conditions. Our hypotheses were largely motivated by the literature on gait disturbances in VR and on non-VR auditory techniques to improve gait in the real world (see Background and Related Work).
H1: Gait disturbances will happen in VR baseline without auditory techniques as compared to the non-VR baseline without auditory techniques.
H2: Three VR-based auditory techniques (spatial, static rest frame, and rhythmic) will improve gait parameters more than the no-audio in VR condition.
H3: The spatial audio technique will improve gait parameters more than the static rest frame and rhythmic audio techniques.
H4: Gait improvement (e.g., velocity) might be more apparent in participants with MI than participants without MI.
\subsection{Participants, Selection Criteria, and Screening Process}
We recruited 39 participants (Male: 9, Female: 30) from various multiple sclerosis support groups and the local community using a matched-comparison group design to investigate gait improvement using auditory feedback in VR. Of these, eighteen participants (Male: 5, Female: 13) had MI due to multiple sclerosis. 52.4\% of the participants with MI identified as White, 28.6\% identified as Hispanic, and 23.8\% identified as African American. In addition, we recruited a group of twenty-one participants without MI (Male: 4, Female: 17). 19\% of the participants without MI were White, 52.4\% were Hispanic, 28.6\% were African American, 9.5\% were American Indian, and 9.5\% were Asian. Both participant groups were statistically comparable in age, height, and weight. Table 1 displays participants' mean (SD) age, height, weight, and gender characteristics for both participant groups. We excluded participants with cognitive impairments, severely low vision, cardiovascular or respiratory conditions, or the inability to walk without assistance. The main challenge for this study was to recruit participants with MS.
\begin{table}[ht!]
\caption{Descriptive statistics for participants}
\label{tab:participants}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}}Group\\ Name\end{tabular} & \begin{tabular}[c]{@{}l@{}}No. \\ of \\ Male\end{tabular} & \begin{tabular}[c]{@{}l@{}}No.\\ of\\ Female\end{tabular} & \begin{tabular}[c]{@{}l@{}}Age \\ (Years)\\ Mean \\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Height \\ (cm)\\ Mean \\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Weight \\ (Kg)\\ Mean \\ (SD)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}} Participants \\ with MI\end{tabular} & 5 & 13 & \begin{tabular}[c]{@{}l@{}}44.8 \\ (13.2)\end{tabular} & \begin{tabular}[c]{@{}l@{}}163.32 \\ (12.64)\end{tabular} & \begin{tabular}[c]{@{}l@{}}81.87 \\ (23.63)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}} Participants \\ without MI\end{tabular} & 4 & 17 & \begin{tabular}[c]{@{}l@{}}43.2 \\ (12.6)\end{tabular} & \begin{tabular}[c]{@{}l@{}}164.33 \\ (12.7)\end{tabular} & \begin{tabular}[c]{@{}l@{}}85.25 \\ (17.96)\end{tabular} \\ \hline
\end{tabular}
\end{table}
\textbf{\textit{Screening Process:}}
Participants were recruited through telephone calls, email lists, and flyers. Pre-screening was conducted over the telephone to determine participants' eligibility. We inquired about general demographic information, health, and medical history to determine participant inclusion or exclusion from the study. For example, we confirmed each individual's ability to visit the on-campus lab and participate for the duration of the study. We also assessed participants' history of MI and their ability to walk without assistance. We minimized participant characteristic imbalances and ensured age, height, and weight were proportionally similar between both participant groups.
\subsection{System Description}
The following equipment was used in the study for participants' safety and data collection.
\textbf{\textit{Computers, VR Equipment, and Software:}}
The VEs were developed using Unity3D software. We used an HTC Vive wireless HMD which has a pixel resolution of 2160 x 1200, 90 Hz refresh rate, and a 110-degree field of view. We used the integrated HMD headphones to apply the auditory feedback techniques. We used a computer to render the VE with specifications including a Windows 10 operating system, an Intel Core i7 Processor (4.20 GHz), 32GB DDR3 RAM, and an NVIDIA GeForce RTX 2080 graphics card.
\textbf{\textit{Safety Equipment:}}
We used a Kaye Products Inc. suspension walking system which consisted of a body harness, thigh cuffs, and suspension walker for the safety of the participants during the study.
\textbf{\textit{Gait Analysis:}}
A GAITRite walkway system was used to collect participants' gait parameters. The GAITRite walkway system is a portable 12 ft. pressure sensor pad capable of providing spatial and temporal gait parameters of participants during a walking task.
\textbf{\textit{Environment:}}
The study was conducted in a controlled environment ($>$ 600 sq. ft.). We conducted each study with only the participant and researcher in the room in order to minimize ambient noise or other disturbances. Fig. 1 shows the comparison between the real-world environment and the virtual environment for the timed walking task.
\begin{figure}[h!]
\centering
\includegraphics[width=0.20\textwidth, height=8.79cm, angle=270]{figures/gaitrite-real.jpg}
\includegraphics[width=0.495\textwidth,height=4cm]{figures/gaitrite-virtual.jpg}
\caption{ Comparison between real environment (top) and virtual environment (bottom) for timed walking task}
\end{figure}
\subsection{Study Conditions}
We applied three VR-based auditory feedback techniques and one condition with no audio in order to observe the effect of these feedback techniques on an individual's gait performance. We played these feedback conditions at the start of the tasks through the HMD's integrated headphones. We employed white noise for auditory feedback because it had been found to improve gait and balance performance due to stochastic resonance phenomena \cite{helps2014different}. In prior research in the real world, auditory white noise was also found to be beneficial in minimizing postural instability \cite{cornwell2020walking, harry2005balancing, sacco2018effects,zhou2021effects,ross2015auditory}. The study conditions were:
\subsubsection{Non-VR Baseline}
We measured the baseline data while participants performed the timed walking task using the GAITRite system without any auditory feedback.
\subsubsection{VR Baseline}
We performed the same timed walking task in VR but with no auditory feedback to establish a VR baseline measurement for participants. Participants performed this condition while wearing the HMD and integrated headphones.
\subsubsection{Auditory VR Feedback}
We applied the following three auditory conditions in VR.
\textbf{\textit{Spatial Audio:}}
This was 3D auditory feedback relative to the participant's physical position in the lab. It was simulated spatial audio (rather than recorded ambisonic audio). In particular, we played spatialized white noise from Unity3D such that when the user rotated their head, the noise played at varying levels in each ear to imitate a stationary sound source. We used the Google Resonance Audio SDK for this because the plugin utilizes head-related transfer functions (HRTFs) to simulate 3D sound more accurately than Unity's default \cite{WinNT1}. The 3D audio source in the VE had X, Y, and Z coordinates of 0, 0, and 0, respectively.
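The head-relative level differences that make the source feel stationary can be illustrated with a minimal constant-power panning sketch in Python. This is a hypothetical simplification for illustration only (the function name and angle convention are ours); the study itself used Google Resonance Audio's HRTF rendering, which models far more than interaural level differences:

```python
import math

def pan_gains(head_yaw_deg, source_azimuth_deg=0.0):
    """Constant-power stereo gains for a fixed source, given the listener's
    head yaw. Hypothetical simplification: real HRTF rendering models
    spectral and timing cues, not just interaural level differences."""
    # Source direction relative to where the head is pointing.
    rel = math.radians(source_azimuth_deg - head_yaw_deg)
    # Map relative azimuth onto a pan position in [0, 1] (0 = hard left).
    pan = (math.sin(rel) + 1.0) / 2.0
    # Constant-power law keeps total energy steady while panning.
    return math.cos(pan * math.pi / 2.0), math.sin(pan * math.pi / 2.0)
```

Facing the source (`pan_gains(0.0)`) yields equal gains in both ears; turning the head 90 degrees to the right moves the full signal to the left ear, the kind of level cue that lets a listener treat the white-noise source as stationary while walking.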
\textbf{\textit{Static Rest Frame Audio:}}
This auditory feedback was continuous white noise that was not relative to the participant's position in the lab. Participants heard the noise at the same intensity in both ears throughout this condition, similar to a "mono" sample with no panning. In previous non-VR research, this strategy was also shown to enhance adult participants' balance \cite{ross2016auditory, cornwell2020walking}.
\textbf{\textit{Rhythmic Audio:}}
This condition used white noise similar to the static rest frame condition, but played as a one-second white noise clip repeated at one-second intervals. Previous research revealed that hearing a constant rhythm may enhance balance and walking in persons with neurological disorders and in adults in non-VR settings \cite{ghai2018effect}.
\subsection{Auditory Feedback Design}
In Unity3D, we connected the audio to sound sources and modified them to fit our research needs. We had a separate Unity scene for each auditory feedback condition. When the participant was ready, we loaded the relevant scene to start the auditory feedback. We presented the conditions to all participants in counterbalanced order, as counterbalancing reduces carryover and fatigue effects \cite{WinNT5,WinNT6,WinNT4}. The audio was delivered over the wireless HMD's embedded headphones at the start of the task, and its loudness was adjusted to the participant's satisfaction.
\subsection{Study Procedure}
The flowchart in Fig. 2 represents the whole study procedure.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.47\textwidth,height=10cm]{figures/finalflow.png}
\caption{Study procedure}
\end{figure}
First, we sanitized all lab equipment used in the study (e.g., HMD, controllers, balance board, safety harness, and suspension system). We recorded participants' temperature upon entry to the lab and completed a COVID-19 symptom screening questionnaire form. We then informed participants of the study procedures and documented formal participant consent. Participants completed an Activities-specific Balance Confidence (ABC) \cite{powell1995activities} form and a Simulator Sickness Questionnaire (SSQ) form \cite{kennedy1993simulator} at the beginning of the study. Participants were asked to remove footwear that would interfere with the GAITRite. The Institutional Review Board (IRB) at the University of Texas at San Antonio approved our study protocols.
\subsubsection{Real World Walking}
We used the GAITRite to measure gait parameters in the study. Participants were securely fastened to the safety harness and suspension walker to prevent fall-related injuries. We instructed the participants to walk at a comfortable speed on the GAITRite and to complete 180-degree turns at both ends. Participants stepped off the GAITRite while turning - the GAITRite software requires participants to step off between trials, and the GAITRite cannot technically assess turns. Participants performed three timed walking tasks \cite{steffen2002age} while we timed them using a stopwatch and collected their gait data with the GAITRite. Fig. 3 (left) shows an example of a participant's timed walking task in the real-world environment in our study.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.47\textwidth,height= 6cm]{figures/real-virtual-walking.png}
\caption{Participants were supported by a harness while they performed the timed walking task using the GAITRite system in real world (left, face blurred) and virtual environment (right)}
\end{figure}
\subsubsection{Virtual Environment Walking}
This was a replication of the real-world task, except it was performed in a VE with various auditory feedback conditions. We used the same harness and suspension system as in real-world walking to prevent sudden falls. Participants were told to walk on a virtual GAITRite overlaid on top of the real GAITRite. They used an HMD to observe the VE and the HMD's integrated headphones to hear the auditory feedback. Participants performed three timed walking tasks in VR for each auditory condition (spatial, static rest frame, and rhythmic) and for the no-audio in VR condition. The four conditions were applied in counterbalanced order for all participants. Fig. 3 (right) shows an example of a participant's timed walking task in the virtual environment in our study.
\subsubsection{Post-Study Questionnaires}
Participants completed a post-study SSQ form and a demographic questionnaire. Finally, each participant received compensation of \$30/Hr and a parking validation ticket at the end of the study.
\section{Metrics}
\subsection{Gait Metrics}
We investigated the following gait metrics in our study.\\
\textit{- Walking Velocity}: The distance traveled (cm) divided by ambulation time (sec).\\
\textit{- Cadence}: The number of footsteps per minute.\\
\textit{- Step Time (Left/Right)}: The time (sec) between the initial contact points of two consecutive footsteps of opposing feet.\\
\textit{- Step Length (Left/Right)}: The distance (cm) between heel centers of two consecutive steps of opposing feet.\\
\textit{- Cycle Time (Left/Right)}: The time (sec) between the initial contact points of two consecutive steps of the same foot.\\
\textit{- Stride Length (Left/Right)}: The distance (cm) between the steps of the same foot.\\
\textit{- Swing Time (Left/Right)}: The time (sec) between the final contact point of a foot and the initial contact point of the same foot.\\
\textit{- Stance Time (Left/Right)}: The time (sec) between the initial contact point and the final contact point of the same footstep.\\
\textit{- Single Support Time (Left/Right)}: This is the time (sec) between the current footfall's last contact and the first contact of the next footfall of the same foot. The single support time is equal to the opposing foot's swing time.\\
\textit{- Double Support Time (Left/Right)}: The time (sec) that both feet are in contact with the ground.\\
\textit{- Base of Support (Left/Right)}: The width between one foot and the line of progression of the opposite footstep. The line of progression is a line that connects the heels of two footsteps of the same foot.\\
\textit{- Toe-In/Toe-Out (Left/Right)}: The angle (degrees) between the line of progression and the center-line of a footprint. Toe-in indicates the center-line of the footprint is inside the line of progression. Toe-out indicates the center-line of the footprint is outside the line of progression.\\
More information on gait parameters can be found in the GAITRite manual \cite{WinNT3}.
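As a rough illustration of how the spatial and temporal parameters above relate to raw footfall data, the following Python sketch derives walking velocity, cadence, and mean step/stride measures from a hypothetical sequence of alternating heel strikes. The data format and function name are invented for illustration; the GAITRite computes these parameters internally from its pressure-sensor pad:

```python
def gait_summary(heel_strikes):
    """Derive basic gait parameters from a list of (time_sec, position_cm)
    heel strikes of alternating feet along the line of progression.
    Hypothetical illustration only."""
    times = [t for t, _ in heel_strikes]
    positions = [x for _, x in heel_strikes]
    duration = times[-1] - times[0]
    distance = positions[-1] - positions[0]
    n_steps = len(heel_strikes) - 1
    velocity = distance / duration                   # cm/sec
    cadence = n_steps / duration * 60.0              # footsteps per minute
    step_times = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    step_lengths = [x1 - x0 for x0, x1 in zip(positions, positions[1:])]
    # A stride spans two consecutive steps of the same foot.
    stride_lengths = [positions[i + 2] - positions[i]
                      for i in range(len(positions) - 2)]
    return {
        "velocity": velocity,
        "cadence": cadence,
        "mean_step_time": sum(step_times) / n_steps,
        "mean_step_length": sum(step_lengths) / n_steps,
        "mean_stride_length": sum(stride_lengths) / len(stride_lengths),
    }
```

For example, heel strikes every 0.6 sec spaced 50 cm apart give a velocity of about 83.3 cm/sec and a cadence of 100 footsteps per minute.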
\subsection{Activities-specific Balance Confidence (ABC) Scale}
The Activities-specific Balance Confidence (ABC) Scale is an outcome measure questionnaire used to assess participant balance, mobility, and physical functioning. The questionnaire uses 16 items to measure an individual’s confidence while performing everyday activities without losing balance \cite{powell1995activities}. Participants are asked to rate their confidence in each specific activity on a scale of 0\% (not confident) to 100\% (most confident). The ABC Scale score is calculated by the sum of the ratings (0-1600), divided by 16. A low level of functioning is indicated by a total ABC score below 50. A moderate level of functioning is indicated by a total ABC score between 50-80, and a high level of functioning is indicated by a total score above 80.
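The scoring rule above can be expressed directly; this Python sketch (function name ours) sums the 16 ratings, divides by 16, and applies the stated cut-offs:

```python
def abc_score(ratings):
    """Score the 16-item ABC Scale: each rating is 0-100 (% confidence);
    the total score is the sum of ratings divided by 16."""
    assert len(ratings) == 16
    assert all(0 <= r <= 100 for r in ratings)
    score = sum(ratings) / 16.0
    if score < 50:
        level = "low"          # low level of functioning
    elif score <= 80:
        level = "moderate"     # moderate level of functioning
    else:
        level = "high"         # high level of functioning
    return score, level
```

Because every rating is divided by the number of items, the total score is simply the mean confidence rating across activities.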
\subsection{Simulator Sickness Questionnaire}
The Simulator Sickness Questionnaire (SSQ) is used to measure the severity of cybersickness produced by exposure to virtual environments. The SSQ assesses participant physiological discomfort due to cybersickness using 16 symptoms in three different categories \cite{kennedy1993simulator}. The categories include nausea, oculomotor disturbance, and disorientation.
\section{Statistical Analysis}
A Shapiro-Wilk test was applied to test data normality for each gait parameter separately. Results indicated a normal distribution of data (\textit{p} $>$ .05) for all gait parameters for both groups of participants. To discover any significant difference among study conditions, we used a 2$\times$5 mixed-model ANOVA with Bonferroni correction, with one between-subject factor with two levels (participants with MI and participants without MI) and one within-subject factor with five levels (five study conditions: baseline, spatial, static, rhythmic, and no audio). When we found a significant difference, we used post hoc two-tailed paired sample t-tests to obtain the difference between two particular study conditions for within-subject comparisons and to investigate hypotheses H1, H2, and H3. To investigate hypothesis H4, we also used post hoc two-tailed t-tests between the two groups for between-subject comparisons. To assess the difference in physical ability, we used post hoc two-tailed t-tests between the ABC scores of both groups of participants. We also performed two-tailed paired sample t-tests between pre-session and post-session SSQ scores for both groups of participants separately to analyze cybersickness. Bonferroni correction was used for all tests that included multiple comparisons.
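For readers who wish to reproduce the post hoc comparisons, the paired t statistic, a paired-design Cohen's d, and the Bonferroni-adjusted significance threshold can be sketched as follows. This is a simplified Python illustration using one common convention for paired Cohen's d (mean difference divided by the standard deviation of the differences); the exact software and d convention used in the study are not specified here:

```python
from statistics import mean, stdev

def paired_t_and_d(x, y):
    """Paired-sample t statistic and Cohen's d for two within-subject
    conditions (equal-length score lists, one pair per participant).
    p-values would come from the t distribution with len(x)-1 degrees
    of freedom (e.g., scipy.stats.t.sf)."""
    assert len(x) == len(y)
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    sd = stdev(diffs)                  # sample SD of the differences
    t = mean(diffs) / (sd / n ** 0.5)  # paired t statistic
    d = mean(diffs) / sd               # one paired-design Cohen's d
    return t, d

def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-corrected per-test significance threshold."""
    return alpha / n_comparisons
```

With a familywise alpha of .05 and ten pairwise comparisons, each individual test would be evaluated against a threshold of .005.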
\section{Results}
Among the twelve investigated gait parameters, we found significant improvement in seven (walking velocity, cadence, step length, stride length, step time, cycle time, and swing time) under the audio conditions; the improvement in the other five gait parameters was not significant for either group of participants. Gait improvement also differed significantly depending on the auditory feedback condition. We computed the gait parameters from the beginning to the end of the trials (not any specific portion of the trials). We also analyzed both left- and right-leg data; however, there was no significant difference between them, so we report the averaged left- and right-leg data for all gait parameters for simplicity.
We found a significant difference in walking velocity after conducting the mixed-model ANOVA test for all participants, \textit{F}(1,123) = 71.6, \textit{p} $<$ .001; effect size, $\eta^{2}$ = 0.09. We also found significant differences (\textit{p} $<$ .001) in cadence, step length, stride length, step time, cycle time, and swing time. Next, we conducted the following post hoc two-tailed t-tests for within-group and between-group comparisons to find differences between particular study conditions.
\subsection{Within-Group Comparisons}
\subsubsection{Non-VR Baseline vs. VR Baseline}
For participants with MI, there was a significant decrease in walking velocity in VR baseline without audio condition (Mean, \textit{M} = 60.09, Standard Deviation, \textit{SD} = 18.97) as compared to non-VR baseline without audio condition (\textit{M} = 62.49, \textit{SD} = 18.81); \textit{t}(17) = 3.94, \textit{p} $<$ .001; and effect size, Cohen's \textit{d} = 0.12.
For participants without MI, there was also a significant decrease in walking velocity in VR baseline without audio condition (\textit{M} = 78.79, \textit{SD} = 16.31), as compared to the non-VR baseline without audio condition (\textit{M} = 80.96, \textit{SD} = 14.73); \textit{t}(20) = 0.64, \textit{p} $<$ .001, \textit{d} = 0.13.
For both groups of participants, we also observed a significant decrease (\textit{p} $<$ .001) in cadence, step length, and stride length for VR baseline without audio condition whereas step time, cycle time, and swing time were significantly increased (\textit{p} $<$ .001) in VR baseline without audio condition as compared to non-VR baseline without audio condition. This result indicated gait disturbance in VR environments for both participants with and without MI.
\subsubsection{Spatial Audio vs. VR Baseline}
For participants with MI, experimental results revealed that walking velocity increased significantly more in the spatial audio condition (\textit{M} = 75.16, \textit{SD} = 21.3) as compared to VR baseline without audio condition (\textit{M} = 60.09, \textit{SD} = 18.97); \textit{t}(17) = 7.33, \textit{p} $<$ .001, \textit{d} = 0.75.
For participants without MI, walking velocity increased significantly more in the spatial audio condition (\textit{M} = 90.35, \textit{SD} = 15.11) as compared to VR baseline without audio condition (\textit{M} = 78.79, \textit{SD} = 16.31); \textit{t}(20) = 4.72, \textit{p} $<$ .001, \textit{d} = 0.74.
For both participants with and without MI, we observed a significant increase (\textit{p} $<$ .001) in cadence, step length, and stride length in the spatial audio condition as compared to VR baseline without audio condition. There was also a significant decrease (\textit{p} $<$ .001) in step time, cycle time, and swing time in the spatial audio condition as compared to VR baseline without audio condition for both groups. This result substantiated that spatial audio improved gait parameters over the VR baseline without audio condition for both groups of participants.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.49\textwidth,height=7 cm]{figures/mi-last.png}
\caption{Walking velocity comparison between study conditions for participants with MI.}
\end{figure}
\begin{table}[ht]
\caption{Gait parameters in five conditions for participants with MI}
\label{tab:gait-mi}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}}Gait \\ Metrics\end{tabular} & \begin{tabular}[c]{@{}l@{}}Non-VR \\ baseline\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}VR\\ base-\\ line\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spatial\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rhy-\\ thmic\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Static\\ Rest\\ Frame\\ \\ Mean\\ (SD)\end{tabular} \\ \hline
Cadence & \begin{tabular}[c]{@{}l@{}}74.91\\ (16.17)\end{tabular} & \begin{tabular}[c]{@{}l@{}}66.25\\ (19.6)\end{tabular} & \begin{tabular}[c]{@{}l@{}}96.99\\ (17.19)\end{tabular} & \begin{tabular}[c]{@{}l@{}}86.43\\ (17.81)\end{tabular} & \begin{tabular}[c]{@{}l@{}}89.23\\ (16.68)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Step\\ Length\end{tabular} & \begin{tabular}[c]{@{}l@{}}39.03\\ (7.5)\end{tabular} & \begin{tabular}[c]{@{}l@{}}33.53\\ (6.46)\end{tabular} & \begin{tabular}[c]{@{}l@{}}53.41\\ (6.99)\end{tabular} & \begin{tabular}[c]{@{}l@{}}46.04\\ (7.19)\end{tabular} & \begin{tabular}[c]{@{}l@{}}46.24\\ (6.69)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Stride\\ Length\end{tabular} & \begin{tabular}[c]{@{}l@{}}83.31\\ (14.92)\end{tabular} & \begin{tabular}[c]{@{}l@{}}77.9\\ (12.07)\end{tabular} & \begin{tabular}[c]{@{}l@{}}101.60\\ (14.00)\end{tabular} & \begin{tabular}[c]{@{}l@{}}92.35\\ (14.15)\end{tabular} & \begin{tabular}[c]{@{}l@{}}92.99\\ (12.83)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Step \\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.85\\ (0.38)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.93\\ (0.22)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.52\\ (0.13)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ (0.18)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ (0.2)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Cycle\\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.58\\ (0.33)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.89\\ (0.47)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.04\\ (0.26)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.44\\ (0.33)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.39\\ (0.30)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Swing \\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.49\\ (0.06)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.56\\ (0.08)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.33\\ (0.06)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.45\\ (0.07)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.43\\ (0.06)\end{tabular} \\ \hline
\end{tabular}
\end{table}
\subsubsection{Spatial Audio vs. Static Rest Frame Audio}
For participants with MI, walking velocity increased significantly more in the spatial audio condition (\textit{M} = 75.16, \textit{SD} = 21.3) as compared to the static rest frame audio condition (\textit{M} = 70.07, \textit{SD} = 18.5); \textit{t}(17) = 2.93, \textit{p} $<$ .001, \textit{d} = 0.26.
For participants without MI, walking velocity increased significantly more in spatial audio condition (\textit{M} = 90.35, \textit{SD} = 15.11) as compared to static rest frame audio condition (\textit{M} = 84.76, \textit{SD} = 13.78); \textit{t}(20) = 3.61, \textit{p} $<$ .001, \textit{d} = 0.39.
For both participants with and without MI, we found a significant increase (\textit{p} $<$ .001) in cadence, step length, and stride length in the spatial audio condition compared to the static rest frame audio condition. Also, step time, cycle time, and swing time were significantly decreased (\textit{p} $<$ .001) in the spatial audio condition as compared to the static rest frame audio condition for both groups. Thus, the spatial audio condition yielded better gait performance than the static rest frame audio condition for participants with and without MI.
\subsubsection{Spatial Audio vs. Rhythmic Audio}
For participants with MI, walking velocity increased significantly more in spatial audio condition (\textit{M} = 75.16, \textit{SD} = 21.3) as compared to rhythmic audio condition (\textit{M} = 67.5, \textit{SD} = 20.63); \textit{t}(17) = 4.9, \textit{p} $<$ .001, \textit{d} = 0.37.
For participants without MI, walking velocity increased significantly more in the spatial audio condition (\textit{M} = 90.35, \textit{SD} = 15.11) as compared to the rhythmic audio condition (\textit{M} = 82.66, \textit{SD} = 15.69); \textit{t}(20) = 3.29, \textit{p} $<$ .001, \textit{d} = 0.5.
For both groups of participants, there was a significant increase (\textit{p} $<$ .001) in cadence, step length, and stride length in the spatial audio condition as compared to the rhythmic audio condition, whereas step time, cycle time, and swing time were significantly decreased (\textit{p} $<$ .001) in the spatial audio condition. These results suggest that spatial audio may be more effective than rhythmic audio for gait performance.
\begin{table}[ht]
\caption{Gait parameters in five conditions for participants without MI}
\label{tab:gait-nomi}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}}Gait \\ Metrics\end{tabular} & \begin{tabular}[c]{@{}l@{}}Non-VR \\ baseline\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}VR\\ base-\\ line\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spatial\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rhy-\\ thmic\\ \\ Mean\\ (SD)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Static\\ Rest\\ Frame\\ \\ Mean\\ (SD)\end{tabular} \\ \hline
Cadence & \begin{tabular}[c]{@{}l@{}}90.45\\ (11.39)\end{tabular} & \begin{tabular}[c]{@{}l@{}}84.86\\ (13.49)\end{tabular} & \begin{tabular}[c]{@{}l@{}}102.79\\ (13.72)\end{tabular} & \begin{tabular}[c]{@{}l@{}}93.42\\ (11.15)\end{tabular} & \begin{tabular}[c]{@{}l@{}}93.04\\ (11.8)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Step\\ Length\end{tabular} & \begin{tabular}[c]{@{}l@{}}48.55\\ (6.33)\end{tabular} & \begin{tabular}[c]{@{}l@{}}39.46\\ (7.13)\end{tabular} & \begin{tabular}[c]{@{}l@{}}68.96\\ (6.3)\end{tabular} & \begin{tabular}[c]{@{}l@{}}59.48\\ (7.71)\end{tabular} & \begin{tabular}[c]{@{}l@{}}59.26\\ (6.98)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Stride\\ Length\end{tabular} & \begin{tabular}[c]{@{}l@{}}88.69\\ (12.67)\end{tabular} & \begin{tabular}[c]{@{}l@{}}83.98\\ (14.76)\end{tabular} & \begin{tabular}[c]{@{}l@{}}103.38\\ (12.68)\end{tabular} & \begin{tabular}[c]{@{}l@{}}99.4\\ (14.78)\end{tabular} & \begin{tabular}[c]{@{}l@{}}99.1\\ (13.76)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Step \\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66\\ (0.08)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.68\\ (0.08)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.55\\ (0.1)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ (0.07)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.64\\ (0.08)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Cycle\\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.2\\ (0.24)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.45\\ (0.23)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.98\\ (0.26)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.16\\ (0.22)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.16\\ (0.23)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Swing \\ Time\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.47\\ (0.05)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.52\\ (0.06)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.37\\ (0.07)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.42\\ (0.05)\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.42\\ (0.06)\end{tabular} \\ \hline
\end{tabular}
\end{table}
\subsubsection{Static Rest Frame Audio vs. VR Baseline}
For participants with MI, we observed a significant increase in walking velocity in the static rest frame audio condition (\textit{M} = 70.07, \textit{SD} = 18.5) as compared to VR baseline without audio condition (\textit{M} = 60.09, \textit{SD} = 18.97); \textit{t}(17) = 8.2, \textit{p} $<$ .001, \textit{d} = 0.57.
For participants without MI, there was a significant increase in walking velocity in static rest frame audio condition (\textit{M} = 84.76, \textit{SD} = 13.78) as compared to VR baseline without audio condition (\textit{M} = 78.79, \textit{SD} = 16.31); \textit{t}(20) = 3.89, \textit{p} $<$ .001, \textit{d} = 0.4.
For both participants with and without MI, there was a significant increase (\textit{p} $<$ .001) in cadence, step length, and stride length as compared to the VR baseline without audio condition. However, step time, cycle time, and swing time were significantly decreased (\textit{p} $<$ .001) in the static rest frame audio condition as compared to the VR baseline without audio condition for both groups. As a result, the static rest frame audio condition provided better gait performance than the VR baseline without audio.
\subsubsection{Static Rest Frame Audio vs. Rhythmic Audio}
For participants with MI, there was no significant difference in walking velocity between the static rest frame audio condition (\textit{M} = 70.07, \textit{SD} = 18.5) and the rhythmic audio condition (\textit{M} = 67.5, \textit{SD} = 20.63); \textit{t}(17) = 1.89, \textit{p} $=$ .06, \textit{d} = 0.13 in a post-hoc two-tailed paired t-test.
For participants without MI, we did not observe a significant difference in walking velocity between static rest frame audio condition (\textit{M} = 84.76, \textit{SD} = 13.78) and rhythmic audio condition (\textit{M} = 82.66, \textit{SD} = 15.69); \textit{t}(20) = 1.11, \textit{p} $=$ .138, \textit{d} = 0.14.
Similarly, for both participants with and without MI, we did not observe any significant differences for the other gait parameters between the static rest frame audio condition and the rhythmic audio condition. Hence, it was inconclusive whether rhythmic or static rest frame audio is preferable for increasing gait performance.
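The statistics reported throughout this section are post-hoc two-tailed paired t-tests with Cohen's \textit{d} effect sizes. A minimal sketch of how such values can be computed (assuming \textit{d} is defined as the mean difference over the pooled condition standard deviation; the paper does not specify the exact variant used, and the function name here is ours):

```python
import numpy as np

def paired_t_and_d(cond_a, cond_b):
    """Two-tailed paired t statistic and a Cohen's d effect size for
    per-participant measurements under two conditions.

    d is taken as the mean difference over the pooled standard
    deviation of the two conditions -- one common convention; other
    variants (e.g. d_z) divide by the SD of the differences instead.
    """
    a = np.asarray(cond_a, dtype=float)
    b = np.asarray(cond_b, dtype=float)
    diff = b - a
    n = diff.size
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))  # df = n - 1
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = diff.mean() / pooled_sd
    return t, d
```

For example, `paired_t_and_d(vr_baseline, rhythmic_audio)` with one walking velocity per participant; the p-value then follows from the t distribution with $n-1$ degrees of freedom.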
\subsubsection{Rhythmic Audio vs. VR Baseline}
For participants with MI, tests revealed that walking velocity increased significantly more in rhythmic audio condition (\textit{M} = 67.5, \textit{SD} = 20.63) as compared to VR baseline without audio condition (\textit{M} = 60.09, \textit{SD} = 18.97); \textit{t}(17) = 5.63, \textit{p} $<$ .001, \textit{d} = 0.37.
For participants without MI, results indicated a significant difference in walking velocity between rhythmic audio condition (\textit{M} = 82.66, \textit{SD} = 15.69) and VR baseline without audio condition (\textit{M} = 78.79, \textit{SD} = 16.31); \textit{t}(20) = 2.01, \textit{p} $<$ .001, \textit{d} = 0.24.
For both groups, cadence, step length, and stride length significantly increased (\textit{p} $<$ .001) in rhythmic audio condition as compared to VR baseline without audio condition. However, step time, cycle time, and swing time were significantly decreased (\textit{p} $<$ .001) in rhythmic audio condition as compared to VR baseline without audio condition for both groups. Thus, rhythmic auditory condition surpassed VR baseline condition for gait improvement.
The comparisons of walking velocity between five different study conditions have been shown in Figure 4 (participants with MI) and Figure 5 (participants without MI). The other gait parameters which resulted in significant improvement have been shown in Table 2 (participants with MI) and Table 3 (participants without MI) with their respective mean and standard deviation (SD) in the five study conditions. The comparisons of effect size (Cohen's \textit{d}) for walking velocity between different study conditions have been shown in Figure 6.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.48\textwidth,height=7cm]{figures/healthy-last.png}
\caption{Walking velocity comparison between study conditions for participants without MI.}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.47\textwidth,height= 6cm]{figures/effect.png}
\caption{Comparisons of effect size for walking velocity between study conditions for participants with and without MI.}
\end{figure}
\subsection{Between-Group Comparisons}
We found a significant decrease in walking velocity for participants with MI compared to participants without MI for non-VR baseline condition; \textit{t}(37) = 3.37, \textit{p} $=$ .002; \textit{d} = 1.1 and for VR baseline condition; \textit{t}(37) = 3.27, \textit{p} $=$ .003; \textit{d} = 1.06.
Results also revealed a significant decrease in walking velocity for participants with MI as compared to participants without MI for all VR-based auditory feedback conditions: spatial audio (\textit{t}(37) = 2.53, \textit{p} $=$ .02; \textit{d} = 0.83), static rest frame audio (\textit{t}(37) = 2.77, \textit{p} $=$ .009; \textit{d} = 0.91), and for rhythmic audio (\textit{t}(37) = 2.55, \textit{p} $=$ .01; \textit{d} = 0.84).
For all conditions (non-VR baseline, VR baseline, spatial audio, static rest frame audio, and rhythmic audio), we also obtained a significant decrease (\textit{p} $<$ .05) in cadence, step length, and stride length for participants with MI compared to participants without MI.
\subsection{Activities-specific Balance Confidence (ABC) Scale}
We conducted a two-tailed t-test based on responses from the Activities-specific Balance Confidence (ABC) Scale between participants with MI ($M$ = 70.83, $SD$ = 24.83) and participants without MI ($M$ = 91.76, $SD$ = 13.71), \textit{t}(37) = 3.38, \textit{p} $<$ .001, \textit{d} = 1.04. The calculated mean ABC Scale score for participants with MI was 70.83\%, which indicated a moderate level of functioning. However, the calculated mean ABC score was 91.76\% for participants without MI. This indicated a high level of functioning. These scores represented a significant difference in physical functioning between participants with and without MI.
\subsection{Simulator Sickness Questionnaire}
We conducted a two-tailed t-test between pre-study SSQ scores and post-study SSQ scores for both participant groups. We did not observe a significant increase in SSQ scores in either group. We obtained \textit{t}(17) = 1.71, \textit{p} = .07, \textit{d} = 0.2 for participants with MI and \textit{t}(20) = 1.72, \textit{p} = .06, \textit{d} = 0.1 for participants without MI.
\section{Discussion}
\subsection{Gait disturbance in VR Without Audio}
Based on the results from Mixed ANOVA and post hoc t-tests for both groups of participants, we found that participants' walking velocity, step length, stride length, cadence, step time, cycle time, and swing time were significantly affected in VR-baseline without audio condition compared to the non-VR baseline without audio condition. Therefore, we noticed that gait disturbance happened in VR conditions for all participants when there was no auditory feedback, which supported our hypothesis H1. Prior works also reported that VR might cause postural instability, which could lead to gait disturbance \cite{hollman2007does,riem2020effect,sondell2005altered}.
\subsection{ Gait Improvement in VR-based Auditory Conditions }
In all VR-based auditory conditions, results showed that cadence, step length, and stride length increased, whereas step time, cycle time, and swing time decreased. That is, participants took more and bigger steps in a shorter amount of time in the VR-based auditory conditions as compared to the VR no-audio baseline. Thus, walking velocity increased in all VR-based auditory conditions (Figure 4 and Figure 5) significantly more (\textit{p} $<$ .001) than in the VR baseline condition for both participants with and without MI, which validated our hypothesis H2. Also, the value of the effect size (Cohen's \textit{d} = 0.5) indicated that spatial audio had a medium effect on both groups of participants. Static rest frame audio had a medium effect on participants with MI but a small effect on participants without MI. Rhythmic audio had a small effect on both groups of participants. The gait parameters improved because of the auditory feedback effects. Previous findings substantiated these results, reporting that auditory white noise \cite{sacco2018effects,zhou2021effects,ross2015auditory,harry2005balancing}, spatial \cite{stevens2016auditory,gandemer2017spatial}, static rest frame \cite{ross2016auditory}, CoP \cite{hasegawa2017learning}, and rhythmic audio \cite{ghai2018effect} improved gait and reduced postural sway in real-world environments. However, most of the previous works were performed in non-VR settings, whereas we investigated the effect of auditory feedback in VR.
Among the VR-based auditory conditions, spatial audio outperformed (\textit{p} $<$ .001) other conditions, which supported our hypothesis H3. Also, spatial audio had a greater effect size compared to other auditory conditions (Fig. 6). This was also mentioned in previous studies where spatial audio was reported to be effective in improving gait and postural stability because it offered better fidelity \cite{chong2020audio, pinkl2020spatialized} and immersion \cite{wenzel2017perception,naef2002spatialized}. However, these studies only investigated spatial audio, whereas we compared three different kinds of auditory feedback in VR in this study.
\subsection{Gait Similarities and Dissimilarities Between Participants With and Without MI}
We found significant differences in walking velocity, cadence, step length, and stride length between the participants with and without MI for all five study conditions. However, we did not find any significant differences for other gait parameters between participants with and without MI. Therefore, we found that a few gait parameters (e.g., velocity, cadence, step length, stride length) were affected differently for participants with MI and participants without MI, whereas other gait parameters were affected in a similar way for both groups of participants, which supported our hypothesis H4. These results partially matched the previous work of Guo et al. \cite{guo2015mobility}, which investigated the gait parameters of participants with and without MI in a VE. They found significant differences in walking velocity, step length, and stride length between participants with MI and participants without MI, whereas there were no significant differences in the other gait parameters of the two groups of participants.
To figure out how the groups differed in gait improvement from the audio conditions, we first subtracted baseline data from all conditions. Then, ANOVA and post hoc two-tailed t-tests between the two groups revealed that gait improvement for participants with MI was significantly (\textit{p} $<$ .001) greater than for the participants without MI. The effect size, Cohen's \textit{d} = 0.9, also indicated a larger effect for participants with MI. We hypothesized that, as the participants with MI had less gait functionality, there might have been more room for improvement than for the participants without MI.
\subsection{Cybersickness}
Previous research observed that VR users exposed to virtual environments for more than 10 minutes could begin to experience the onset of cybersickness \cite{chang2020virtual,kim2021clinical}. Our study required participants to wear the HMD for around 45 minutes under several conditions, which increased the chance of developing cybersickness symptoms. However, we designed the virtual environment with no illusory self-motion to minimize the possibility of participants developing cybersickness \cite{mccauley1992cybersickness}. We learned from the post-study verbal conversation with the participants that a few experienced mild cybersickness after the study, which only raised their post SSQ scores slightly. Moreover, there was no significant difference between pre-SSQ and post-SSQ scores. This suggests that cybersickness was negligible and did not alter the participants' gait performance.
\section{Limitations}
Both participant groups used a suspension walking system consisting of a body harness and thigh cuffs for the duration of the timed walking task. The walking system was used in the baseline condition and all auditory feedback conditions in VR by every participant to maintain study procedure consistency. Although participants were instructed to walk at a comfortable speed, the heavy suspension walking system could have reduced participants' normal walking speed. Previous studies also reported that wearing a safety harness caused a significant decrease in walking speed \cite{decker2012wearing}. Due to this intervention, studies that do not require a safety harness may observe different outcomes.
The duration of the study was lengthy and required participants to complete multiple time-consuming trials on the GAITRite which could sometimes produce symptoms of fatigue in participants. To mitigate the fatigue effect, participants were able to rest and remove the HMD between trials and conditions if needed. This rest and removal of the HMD may have allowed them to regain spatial orientation of the room setup and possibly skewed data results.
In our study, we applied the four auditory feedback conditions in counterbalanced order, which reduces carryover or practice effects \cite{WinNT5, WinNT4, WinNT6}. Counterbalancing was also reported to be effective in much prior research \cite{millner2020suicide,plechata2019age,sheskin2018thechildlab}. Alternatively, we could have applied the auditory feedback in randomized order to reduce bias. However, our study included participants with MS, who are very prone to fatigue and cybersickness. Thus, we were more concerned about carryover fatigue and cybersickness effects on the results. As counterbalancing also reduces fatigue and cybersickness effects \cite{WinNT6}, we preferred counterbalancing over randomization.
We applied the "rhythmic" auditory feedback in every one-second interval. However, we did not investigate this feedback condition for other time intervals (e.g., two-second). Therefore, studies that would apply "rhythmic" auditory feedback for different time intervals might find slightly different results for this specific condition.
For the static rest frame audio condition, the continuously played white noise could have had a fatigue effect on participants. However, we did not measure the fatigue effect for this condition.
In our study, the non-VR baseline was always done first, which might have influenced the walking speed for this condition. However, we wanted to have enough baseline tasks before starting the VR conditions to reduce the learning effect of VR conditions.
Our research focus was to investigate solutions to solve the gait disturbance issues in VR. So, we did not consider non-VR audio conditions. Also, adding the three auditory feedback conditions in non-VR would result in three additional study conditions, which would make the study significantly longer. As our study included participants with MI due to MS who had less physical ability and were prone to fatigue, we tried to keep the study time shorter.
We had more female participants than males in our study. This is because we recruited from the population with MS, which is statistically more common in females \cite{WinNT}. Many previous studies reported no significant effect of gender on balance \cite{kahraman2018gender, faraldo2012influence,schedler2019age}. However, we plan to investigate the gender effect on balance in VR in our future work.
We measured gait performance during VR intervention. We did not measure the post-study effects on gait. Our motivation here was accessibility rather than rehabilitation, and thus we only investigated gait outside of VR as a baseline and during VR immersion.
We had five different study conditions. We performed three trials for each study condition, which resulted in fifteen trials for each participant. We collected separate data files for each trial. Therefore, we performed a total of 585 trials for our 39 participants. The HMD display stopped working during four trials of four participants (MI group: 3, healthy group: 1). Restarting the "Vive wireless app" solved the issue each time. We repeated those four incomplete trials and omitted the four incomplete data files.
We encountered challenges with participant recruitment due to the COVID-19 pandemic. Many of our participants with MI had multiple sclerosis and were therefore immunosuppressed. This placed them in the “high-risk” category for contracting COVID-19 and deterred them from joining the study. We would have been able to recruit a larger participant group and provide additional gait performance results if the study was conducted at a period outside of the pandemic.
\section{Conclusion}
We found significant evidence that spatial, rhythmic, and static rest frame auditory feedback conditions resulted in the improvement of gait performance in both participant groups while immersed in VR. Spatial audio improved gait parameters significantly more than rhythmic and static rest frame audio conditions. Also, improvements of gait parameters were significantly greater in participants with mobility impairments than the participants without mobility impairments. The results from this study will provide guidance to researchers to better understand the implications of assistive technologies based on auditory feedback for improving gait performance in HMD-based VEs. Furthermore, these results suggest that auditory feedback should be considered more in the future development of VR experiences to improve usability and accessibility, especially for persons with mobility impairments.
\acknowledgments{
The US National Science Foundation provided funding for this project (IIS 2007041). We also express our gratitude to all of our research participants.
}
\bibliographystyle{abbrv-doi}
\section{Introduction}
\label{Sec:Intro}
Many statistical procedures rely on the eigendecomposition of a matrix. Examples include principal components analysis and its cousin sparse principal components analysis \citep{Zouetal2006}, factor analysis, high-dimensional covariance matrix estimation \citep{Fanetal2013} and spectral clustering for community detection with network data \citep{DonathHoffman1973}. In these and most other related statistical applications, the matrix involved is real and symmetric, e.g. a covariance or correlation matrix, or a graph Laplacian or adjacency matrix in the case of spectral clustering.
In the theoretical analysis of such methods, it is frequently desirable to be able to argue that if a sample version of this matrix is close to its population counterpart, and provided certain relevant eigenvalues are well-separated in a sense to be made precise below, then a population eigenvector should be well approximated by a corresponding sample eigenvector. A quantitative version of such a result is provided by the Davis--Kahan `$\sin \theta$' theorem \citep{DavisKahan1970}. This is a deep theorem from operator theory, involving operators acting on Hilbert spaces, though as remarked by \citet{StewartSun1990}, its `content more than justifies its impenetrability'. In statistical applications, we typically do not require this full generality; we state below a version in a form typically used in the statistical literature. We write $\|\cdot\|$ and $\|\cdot\|_{\mathrm{F}}$ respectively for the Euclidean norm of a vector and the Frobenius norm of a matrix. Recall that if $V, \hat{V} \in \mathbb{R}^{p \times d}$ both have orthonormal columns, then the vector of $d$ principal angles between their column spaces is given by $(\cos^{-1} \sigma_1,\ldots,\cos^{-1}\sigma_d)^T$, where $\sigma_1 \geq \ldots \geq \sigma_d$ are the singular values of $\hat{V}^T V$. Let $\Theta(\hat{V},V)$ denote the $d \times d$ diagonal matrix whose $j$th diagonal entry is the $j$th principal angle, and let $\sin \Theta(\hat{V},V)$ be defined entrywise.
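This recipe for the principal angles is straightforward to compute in practice. As an illustrative sketch (in Python/NumPy; not part of the paper), assuming both inputs have orthonormal columns:

```python
import numpy as np

def sin_principal_angles(V_hat, V):
    """Sines of the principal angles between the column spaces of
    V_hat and V (both p x d with orthonormal columns).

    The principal angles are arccos of the singular values of
    V_hat^T V, so their sines are sqrt(1 - sigma_j^2)."""
    s = np.linalg.svd(V_hat.T @ V, compute_uv=False)  # sigma_1 >= ... >= sigma_d
    s = np.clip(s, 0.0, 1.0)                          # guard against rounding above 1
    return np.sqrt(1.0 - s**2)
```

Then `np.linalg.norm(sin_principal_angles(V_hat, V))` gives $\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}}$.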
\begin{thm}[Davis--Kahan $\sin \theta$ theorem]
\label{Thm:DKSinTheta}
Let $\Sigma,\hat{\Sigma} \in \mathbb{R}^{p \times p}$ be symmetric, with eigenvalues $\lambda_1 \geq \ldots \geq \lambda_p$ and $\hat{\lambda}_1 \geq \ldots \geq \hat{\lambda}_p$ respectively. Fix $1 \leq r \leq s \leq p$, let $d := s - r + 1$, and let $V = (v_r,v_{r+1},\ldots,v_s) \in \mathbb{R}^{p \times d}$ and $\hat{V} = (\hat{v}_r,\hat{v}_{r+1},\ldots,\hat{v}_s) \in \mathbb{R}^{p \times d}$ have orthonormal columns satisfying $\Sigma v_j = \lambda_j v_j$ and $\hat{\Sigma}\hat{v}_j = \hat{\lambda}_j \hat{v}_j$ for $j=r,r+1,\ldots,s$. If $\delta := \inf\{|\hat{\lambda} - \lambda|: \lambda \in [\lambda_s,\lambda_r],\hat{\lambda} \in (-\infty,\hat{\lambda}_{s-1}] \cup [\hat{\lambda}_{r+1}, \infty)\} > 0$, where $\hat{\lambda}_0 := -\infty$ and $\hat{\lambda}_{p+1} := \infty$, then
\begin{equation}
\label{Eq:DKSinTheta}
\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}} \leq \frac{\|\hat{\Sigma} - \Sigma\|_{\mathrm{F}}}{\delta}.
\end{equation}
\end{thm}
In fact, both occurrences of the Frobenius norm in~(\ref{Eq:DKSinTheta}) can be replaced with the operator norm $\|\cdot\|_{\mathrm{op}}$, or any other orthogonally invariant norm. Frequently in applications, we have $r = s = j$, say, in which case we can conclude that
\[
\sin \Theta(\hat{v}_j,v_j) \leq \frac{\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}}{\min(|\hat{\lambda}_{j-1} - \lambda_j|,|\hat{\lambda}_{j+1} - \lambda_j|)}.
\]
Since we may reverse the sign of $\hat{v}_j$ if necessary, there is a choice of orientation of $\hat{v}_j$ for which $\hat{v}_j^T v_j \geq 0$. For this choice, we can also deduce that $\|\hat{v}_j - v_j\| \leq \sqrt{2}\sin \Theta(\hat{v}_j,v_j)$.
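The last step uses $\|\hat{v}_j - v_j\|^2 = 2 - 2\hat{v}_j^T v_j \leq 2\sin^2\Theta(\hat{v}_j,v_j)$ whenever $\hat{v}_j^T v_j \geq 0$. A quick numerical sanity check of this relation (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=5)
v /= np.linalg.norm(v)
v_hat = v + 0.3 * rng.normal(size=5)   # a perturbed unit vector
v_hat /= np.linalg.norm(v_hat)
if v_hat @ v < 0:                      # pick the orientation with v_hat^T v >= 0
    v_hat = -v_hat

cos_t = np.clip(v_hat @ v, -1.0, 1.0)
sin_t = np.sqrt(1.0 - cos_t**2)
# ||v_hat - v||^2 = 2 - 2 cos(theta) <= 2(1 - cos^2(theta)) when cos(theta) >= 0
assert np.linalg.norm(v_hat - v) <= np.sqrt(2.0) * sin_t + 1e-12
```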
This theorem is then used to show that $\hat{v}_j$ is close to $v_j$ as follows: first, we argue that $\hat{\Sigma}$ is close to $\Sigma$. This is often straightforward; for instance, when $\Sigma$ is a population covariance matrix, it may be that $\hat{\Sigma}$ is just an empirical average of independent and identically distributed random matrices. Then we argue, e.g. using Weyl's inequality, that with high probability, $|\hat{\lambda}_{j-1} - \lambda_j| \geq (\lambda_{j-1} - \lambda_j)/2$ and $|\hat{\lambda}_{j+1} - \lambda_j| \geq (\lambda_j - \lambda_{j+1})/2$, so on these events $\|\hat{v}_j - v_j\|$ is small provided we are willing to assume an eigenvalue separation, or eigen-gap, condition on the population eigenvalues.
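The Weyl step relies on the fact that, for symmetric matrices, $|\hat{\lambda}_j - \lambda_j| \leq \|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}$ for every $j$, so eigen-gaps degrade by at most $2\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}$. A small synthetic illustration (not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
A = rng.normal(size=(p, p)); Sigma = (A + A.T) / 2   # symmetric "population" matrix
E = rng.normal(size=(p, p)); E = 0.1 * (E + E.T)     # symmetric perturbation
Sigma_hat = Sigma + E

lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]       # descending eigenvalues
lam_hat = np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]

# Weyl's inequality: each sample eigenvalue lies within ||E||_op of its
# population counterpart.
assert np.max(np.abs(lam_hat - lam)) <= np.linalg.norm(E, 2) + 1e-10
```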
The main contribution of this paper is to give a variant of the Davis--Kahan theorem in Theorem~\ref{Thm:Main} in Section~\ref{Sec:Main} below, where the only eigen-gap condition is on the population eigenvalues, by contrast with the definition of $\delta$ in Theorem~\ref{Thm:DKSinTheta} above. Similarly, only population eigenvalues appear in the denominator of the bounds. This means there is no need for the statistician to worry about the event where $|\hat{\lambda}_{j+1} - \lambda_{j+1}|$ or $|\hat{\lambda}_{j-1} - \lambda_{j-1}|$ is small. In Section~\ref{Sec:Examples}, we give a selection of several examples where the Davis--Kahan theorem has been used in the statistical literature, and where our results could be applied directly to allow those authors to assume more natural conditions, to simplify proofs, and in some cases, to improve bounds.
Singular value decomposition, which may be regarded as a generalisation of eigendecomposition, but which exists even when a matrix is not square, also plays an important role in many modern algorithms in Statistics and machine learning. Examples include matrix completion \citep{CandesRecht2009}, robust principal components analysis \citep{Candesetal2009} and motion analysis \citep{Kukushetal2002}, among many others. \citet{Wedin1972} provided the analogue of the Davis--Kahan theorem for such general real matrices, working with singular vectors rather than eigenvectors, but with conditions and bounds that mix sample and population singular values. In Section~\ref{Sec:SVD}, we extend the results of Section~\ref{Sec:Main} to such settings; again our results depend only on a condition on the population singular values. Proofs are deferred to the Appendix.
\section{Main results}
\label{Sec:Main}
\begin{thm}
\label{Thm:Main}
Let $\Sigma,\hat{\Sigma} \in \mathbb{R}^{p \times p}$ be symmetric, with eigenvalues $\lambda_1 \geq \ldots \geq \lambda_p$ and $\hat{\lambda}_1 \geq \ldots \geq \hat{\lambda}_p$ respectively. Fix $1 \leq r \leq s \leq p$ and assume that $\min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1}) > 0$, where $\lambda_0 := \infty$ and $\lambda_{p+1} := -\infty$. Let $d := s - r + 1$, and let $V = (v_r,v_{r+1},\ldots,v_s) \in \mathbb{R}^{p \times d}$ and $\hat{V} = (\hat{v}_r,\hat{v}_{r+1},\ldots,\hat{v}_s) \in \mathbb{R}^{p \times d}$ have orthonormal columns satisfying $\Sigma v_j = \lambda_j v_j$ and $\hat{\Sigma}\hat{v}_j = \hat{\lambda}_j \hat{v}_j$ for $j= r,r+1,\ldots,s$. Then
\begin{equation}
\label{Eq:OurSinTheta}
\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}} \leq \frac{2\min(d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}},\|\hat{\Sigma} - \Sigma\|_\mathrm{F})}{\min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})}.
\end{equation}
Moreover, there exists an orthogonal matrix $\hat{O} \in \mathbb{R}^{d \times d}$ such that
\begin{equation}
\label{Eq:OurDifference}
\|\hat{V}\hat{O} - V\|_{\mathrm{F}} \leq \frac{2^{3/2}\min(d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}},\|\hat{\Sigma} - \Sigma\|_\mathrm{F})}{\min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})}.
\end{equation}
\end{thm}
Apart from the fact that we only impose a population eigen-gap condition, the main difference between this result and that given in~Theorem~\ref{Thm:DKSinTheta} is in the $\min(d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}},\|\hat{\Sigma} - \Sigma\|_\mathrm{F})$ term in the numerator of the bounds. In fact, the original statement of the Davis--Kahan $\sin \theta$ theorem has a numerator of $\|V\Lambda - \hat{\Sigma}V\|_{\mathrm{F}}$ in our notation, where $\Lambda := \mathrm{diag}(\lambda_r,\lambda_{r+1},\ldots,\lambda_s)$. However, in order to apply that theorem in practice, statisticians have bounded this expression by $\|\hat{\Sigma} - \Sigma\|_{\mathrm{F}}$, yielding the bound in Theorem~\ref{Thm:DKSinTheta}. When $p$ is large, though, one would often anticipate that $\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}$, which is the $\ell_\infty$ norm of the vector of eigenvalues of $\hat{\Sigma} - \Sigma$, may well be much smaller than $\|\hat{\Sigma} - \Sigma\|_{\mathrm{F}}$, which is the $\ell_2$ norm of this vector of eigenvalues. Thus when $d \ll p$, as will often be the case in practice, the minimum in the numerator may well be attained by the first term. It is immediately apparent from~(\ref{Eq:FirstLower}) and~(\ref{Eq:SecondLower}) in our proof that the smaller numerator $\|\hat{V}\Lambda - \Sigma \hat{V}\|_{\mathrm{F}}$ could also be used in our bound for $\|\sin \Theta(\hat{V},V)\|_\mathrm{F}$ in Theorem~\ref{Thm:Main}, while $2^{1/2}\|\hat{V}\Lambda - \Sigma\hat{V}\|_{\mathrm{F}}$ could be used in our bound for $\|\hat{V}\hat{O} - V\|_{\mathrm{F}}$. Our reason for presenting the weaker bound in Theorem~\ref{Thm:Main} is to aid direct applicability; see Section~\ref{Sec:Examples} for several examples.
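As an illustration (a synthetic check, not part of the paper's argument), the bound~(\ref{Eq:OurSinTheta}) can be verified numerically for a random perturbation of a matrix with a clear gap after its top $d$ eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
p, d = 8, 2
lam = np.array([10.0, 9.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5])  # gap after the top d
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))
Sigma = Q @ np.diag(lam) @ Q.T
E = rng.normal(size=(p, p)); E = 0.05 * (E + E.T)           # symmetric noise
Sigma_hat = Sigma + E

# Top-d eigenvectors (eigh returns eigenvalues in ascending order).
V = np.linalg.eigh(Sigma)[1][:, ::-1][:, :d]
V_hat = np.linalg.eigh(Sigma_hat)[1][:, ::-1][:, :d]

s = np.linalg.svd(V_hat.T @ V, compute_uv=False)
sin_theta_fro = np.linalg.norm(np.sqrt(np.clip(1.0 - s**2, 0.0, None)))

gap = lam[d - 1] - lam[d]       # lambda_s - lambda_{s+1}; lambda_0 = +infinity here
numer = 2.0 * min(np.sqrt(d) * np.linalg.norm(E, 2), np.linalg.norm(E, "fro"))
assert sin_theta_fro <= numer / gap
```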
The constants presented in Theorem~\ref{Thm:Main} are sharp, as the following example illustrates. Let $\Sigma = \mathrm{diag}(\lambda_1,\ldots,\lambda_p)$ and $\hat{\Sigma} = \mathrm{diag}(\hat{\lambda}_1,\ldots,\hat{\lambda}_p)$, where $\lambda_1 = \ldots = \lambda_d = 3$, $\lambda_{d+1} = \ldots = \lambda_p = 1$ and $\hat{\lambda}_1 = \ldots = \hat{\lambda}_{p-d} = 2-\epsilon$, $\hat{\lambda}_{p-d+1} = \ldots = \hat{\lambda}_p = 2$, where $\epsilon > 0$ and where $d \in \{1,\ldots,\lfloor p/2 \rfloor\}$. If we are interested in the eigenvectors corresponding to the largest $d$ eigenvalues, then for every orthogonal matrix $\hat{O} \in \mathbb{R}^{d \times d}$,
\[
\|\hat{V}\hat{O} - V\|_{\mathrm{F}} = 2^{1/2}\|\sin \Theta(\hat{V}, V)\|_{\mathrm{F}} = (2d)^{1/2} \leq (2d)^{1/2}(1+\epsilon) = \frac{2^{3/2}d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}}{\lambda_d - \lambda_{d+1}}.
\]
In this example, the column spaces of $V$ and $\hat{V}$ were orthogonal. However, even when these column spaces are close, our bound~(\ref{Eq:OurSinTheta}) is tight up to a factor of 2, while our bound~(\ref{Eq:OurDifference}) is tight up to a factor of $2^{3/2}$. To see this, suppose that $\Sigma = \mathrm{diag}(3,1)$ while $\hat{\Sigma} = \hat{V}\mathrm{diag}(3,1)\hat{V}^T$, where $\hat{V} = \begin{pmatrix} (1-\epsilon^2)^{1/2} & -\epsilon \\ \epsilon & (1-\epsilon^2)^{1/2}\end{pmatrix}$ for some $\epsilon > 0$. If $v = (1,0)^T$ and $\hat{v} = \bigl((1-\epsilon^2)^{1/2},\epsilon\bigr)^T$ denote the top eigenvectors of $\Sigma$ and $\hat{\Sigma}$ respectively, then
\[
\sin \Theta(\hat{v},v) = \epsilon, \quad \|\hat{v} - v\|^2 = 2 - 2(1-\epsilon^2)^{1/2}, \quad \text{and} \quad \frac{2\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}}{3-1} = 2\epsilon.
\]
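These closed-form values are easy to confirm numerically (an illustrative check of the $2 \times 2$ example, with $\epsilon = 0.1$):

```python
import numpy as np

eps = 0.1
c = np.sqrt(1 - eps**2)
Sigma = np.diag([3.0, 1.0])
V_hat = np.array([[c, -eps],
                  [eps,  c]])                  # rotation by the angle arcsin(eps)
Sigma_hat = V_hat @ np.diag([3.0, 1.0]) @ V_hat.T

v, v_hat = np.array([1.0, 0.0]), V_hat[:, 0]   # top eigenvectors
sin_t = np.sqrt(1.0 - (v_hat @ v)**2)

assert abs(sin_t - eps) < 1e-12                                    # sin(theta) = eps
assert abs(np.linalg.norm(v_hat - v)**2 - (2 - 2 * c)) < 1e-12     # ||v_hat - v||^2
assert abs(np.linalg.norm(Sigma_hat - Sigma, 2) - 2 * eps) < 1e-12 # op norm = 2*eps
```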
It is also worth mentioning that there is another theorem in the \citet{DavisKahan1970} paper, the so-called `$\sin 2\theta$' theorem, which provides a bound for $\|\sin 2\Theta(\hat{V},V)\|_\mathrm{F}$ assuming only a population eigen-gap condition. In the case $d=1$, this quantity can be related to the square of the length of the difference between the sample and population eigenvectors $\hat{v}$ and $v$ as follows:
\begin{equation}
\label{Eq:Sin2theta}
\sin^2 2\Theta(\hat{v},v) = (2\hat{v}^Tv)^2\{1-(\hat{v}^Tv)^2\} = \frac{1}{4}\|\hat{v} - v\|^2(2 - \|\hat{v} - v\|^2)^2(4 -\|\hat{v} - v\|^2).
\end{equation}
Equation~(\ref{Eq:Sin2theta}) reveals, however, that $\|\sin 2\Theta(\hat{V},V)\|_\mathrm{F}$ is unlikely to be of immediate interest to statisticians, and in fact we are not aware of applications of the Davis--Kahan $\sin 2\theta$ theorem in Statistics. No general bound for $\|\sin \Theta(\hat{V},V)\|_\mathrm{F}$ or $\|\hat{V}\hat{O} - V\|_\mathrm{F}$ can be derived from the Davis--Kahan $\sin 2\theta$ theorem since we would require further information such as $\hat{v}^Tv \geq 1/2^{1/2}$ when $d=1$, and such information would typically be unavailable. The utility of our bound comes from the fact that it provides direct control of the main quantities of interest to statisticians.
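Writing $c = \hat{v}^Tv$, so that $\|\hat{v} - v\|^2 = 2 - 2c$ for unit vectors, the expansion $\sin^2 2\Theta(\hat{v},v) = 4c^2(1-c^2) = \tfrac{1}{4}\|\hat{v}-v\|^2(2-\|\hat{v}-v\|^2)^2(4-\|\hat{v}-v\|^2)$ can be checked numerically (an illustrative sanity check of the algebra):

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.normal(size=4);     v /= np.linalg.norm(v)
v_hat = rng.normal(size=4); v_hat /= np.linalg.norm(v_hat)
if v_hat @ v < 0:                        # orientation with v_hat^T v >= 0
    v_hat = -v_hat

c = v_hat @ v
a = np.linalg.norm(v_hat - v)**2         # a = 2 - 2c for unit vectors
lhs = 4 * c**2 * (1 - c**2)              # sin^2(2*theta)
rhs = 0.25 * a * (2 - a)**2 * (4 - a)
assert abs(a - (2 - 2 * c)) < 1e-12
assert abs(lhs - rhs) < 1e-12
```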
Many if not most applications of this result will only need $s=r$, i.e. $d=1$. In that case, the statement simplifies a little; for ease of reference, we state it as a corollary:
\begin{corollary}
Let $\Sigma,\hat{\Sigma} \in \mathbb{R}^{p \times p}$ be symmetric, with eigenvalues $\lambda_1 \geq \ldots \geq \lambda_p$ and $\hat{\lambda}_1 \geq \ldots \geq \hat{\lambda}_p$ respectively. Fix $j \in \{1,\ldots,p\}$, and assume that $\min(\lambda_{j-1} - \lambda_j,\lambda_j - \lambda_{j+1}) > 0$, where $\lambda_0 := \infty$ and $\lambda_{p+1} := -\infty$. If $v, \hat{v} \in \mathbb{R}^p$ satisfy $\Sigma v = \lambda_j v$ and $\hat{\Sigma} \hat{v} = \hat{\lambda}_j \hat{v}$, then
\[
\sin \Theta(\hat{v},v) \leq \frac{2\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}}{\min(\lambda_{j-1} - \lambda_j,\lambda_j - \lambda_{j+1})}.
\]
Moreover, if $\hat{v}^T v \geq 0$, then
\[
\|\hat{v} - v\| \leq
\frac{2^{3/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}}{\min(\lambda_{j-1} - \lambda_j,\lambda_j - \lambda_{j+1})}.
\]
\end{corollary}
\section{Applications of the Davis--Kahan theorem in statistical contexts}
\label{Sec:Examples}
In this section, we give several examples of ways in which the Davis--Kahan $\sin \theta$ theorem has been applied in the statistical literature. Our selection is by no means exhaustive -- indeed there are many others of a similar flavour -- but it does illustrate a range of applications. In fact, we also found some instances in the literature where a version of the Davis--Kahan theorem with a population eigen-gap condition was used without justification. In all of the examples below, our results can be applied directly to impose more natural conditions, to simplify the proofs and, in some cases, to improve the bounds.
\citet{Fanetal2013} study large covariance matrix estimation problems where the population covariance matrix can be represented as the sum of a low rank matrix and a sparse matrix. Their Proposition~2 uses the operator norm version of Theorem~\ref{Thm:DKSinTheta} with $d=1$. They then use a further bound from Weyl's inequality and a population eigen-gap condition as outlined in the introduction to control the norm of the difference between the leading sample and population eigenvectors. \citet{MitraZhang2014} apply the theorem in a very similar way, but for general $d$ and for large correlation matrices as opposed to covariance matrices. Again in the same spirit, \citet{FanHan2013} apply the result with $d=1$ to the problem of estimating the false discovery proportion in large-scale multiple testing with highly correlated test statistics. Other similar applications include \citet{Elkaroui2008}, who derives consistency of sparse covariance matrix estimators, \citet{Caietal2013}, who study sparse principal component estimation, and \citet{WangNyquist1991}, who consider how eigenstructure is altered by deleting an observation.
\citet{vonluxburg2007}, \citet{Roheetal2011}, \citet{Aminietal2013} and \citet{BhattacharyyaBickel2014} use the Davis--Kahan $\sin \theta$ theorem as a way of providing theoretical justification for spectral clustering in community detection with network data. Here, the matrices of interest include graph Laplacians and adjacency matrices, both of which may or may not be normalised. In these works, the statement of the Davis--Kahan theorem given is a slight variant of Theorem~\ref{Thm:DKSinTheta}, and it may appear from, e.g. Proposition~B.1 of \citet{Roheetal2011}, that only a population eigen-gap condition is assumed. However, careful inspection reveals that $\Sigma$ and $\hat{\Sigma}$ must have the same number of eigenvalues in the interval of interest, so that their condition is essentially the same as that in Theorem~\ref{Thm:DKSinTheta}.
\section{Extension to general real matrices}
\label{Sec:SVD}
We now describe how the results of Section~\ref{Sec:Main} can be extended to situations where the matrices under study may not be symmetric and may not even be square, and where interest is in controlling the principal angles between corresponding singular vectors.
\begin{thm}
\label{Thm:SVD}
Let $A, \hat{A} \in \mathbb{R}^{p \times q}$ have singular values $\sigma_1 \geq \ldots \geq \sigma_{\min(p,q)}$ and $\hat{\sigma}_1 \geq \ldots \geq \hat{\sigma}_{\min(p,q)}$ respectively. Fix $1 \leq r \leq s \leq \mathrm{rank}(A)$ and assume that $\min(\sigma_{r-1}^2 - \sigma_r^2,\sigma_s^2 - \sigma_{s+1}^2) > 0$, where $\sigma_0^2 := \infty$ and $\sigma_{\mathrm{rank}(A)+1}^2 := -\infty$. Let $d := s - r + 1$, and let $V = (v_r,v_{r+1},\ldots,v_s) \in \mathbb{R}^{q \times d}$ and $\hat{V} = (\hat{v}_r,\hat{v}_{r+1},\ldots,\hat{v}_s) \in \mathbb{R}^{q \times d}$ have orthonormal columns satisfying $A v_j = \sigma_j u_j$ and $\hat{A}\hat{v}_j = \hat{\sigma}_j \hat{u}_j$ for $j= r,r+1,\ldots,s$. Then
\[
\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}} \leq \frac{2(2\sigma_1 + \|\hat{A} - A\|_{\mathrm{op}})\min(d^{1/2}\|\hat{A} - A\|_{\mathrm{op}},\|\hat{A} - A\|_\mathrm{F})}{\min(\sigma_{r-1}^2 - \sigma_r^2,\sigma_s^2 - \sigma_{s+1}^2)}.
\]
Moreover, there exists an orthogonal matrix $\hat{O} \in \mathbb{R}^{d \times d}$ such that
\[
\|\hat{V}\hat{O} - V\|_{\mathrm{F}} \leq \frac{2^{3/2}(2\sigma_1 + \|\hat{A} - A\|_{\mathrm{op}})\min(d^{1/2}\|\hat{A} - A\|_{\mathrm{op}},\|\hat{A} - A\|_\mathrm{F})}{\min(\sigma_{r-1}^2 - \sigma_r^2,\sigma_s^2 - \sigma_{s+1}^2)}.
\]
\end{thm}
Theorem~\ref{Thm:SVD} gives bounds on the proximity of the right singular vectors of $\Sigma$ and $\hat{\Sigma}$. Identical bounds also hold if $V$ and $\hat{V}$ are replaced with the matrices of left singular vectors $U$ and $\hat{U}$, where $U = (u_r,u_{r+1},\ldots,u_s) \in \mathbb{R}^{p \times d}$ and $\hat{U} = (\hat{u}_r,\hat{u}_{r+1},\ldots,\hat{u}_s) \in \mathbb{R}^{p \times d}$ have orthonormal columns satisfying $A^T u_j = \sigma_j v_j$ and $\hat{A}^T\hat{u}_j = \hat{\sigma}_j \hat{v}_j$ for $j= r,r+1,\ldots,s$.
As mentioned in the introduction, Theorem~\ref{Thm:SVD} can be viewed as a variant of the `generalized $\sin \theta$' theorem of \citet{Wedin1972}. Again, the main difference is that our condition only requires a gap between the relevant population singular values.
Similar to the situation for symmetric matrices, there are many places in the statistical literature where Wedin's result has been used, but where we argue that Theorem~\ref{Thm:SVD} above would be a more natural result to which to appeal. Examples include the papers of \citet{VanHuffelVandewalle1989} on the accuracy of least squares techniques, \citet{Anandkumaretal2014} on tensor decompositions for learning latent variable models, \citet{ShabalinNobel2013} on recovering a low rank matrix from a noisy version and \citet{SunZhang2012} on matrix completion.

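A numerical check of Theorem~\ref{Thm:SVD} in the same spirit (a synthetic sketch with arbitrary sizes, singular spectrum and perturbation scale, using the identity $\|\sin\Theta(\hat{V},V)\|_{\mathrm{F}}^2 = d - \|\hat{V}^TV\|_{\mathrm{F}}^2$):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, r, s = 8, 5, 1, 2
d = s - r + 1

# build A with a prescribed, well-separated singular spectrum
G = rng.standard_normal((p, q))
U0, _, Vt0 = np.linalg.svd(G, full_matrices=False)
sv = np.array([9.0, 7.0, 3.0, 2.0, 1.0])
A = U0 @ np.diag(sv) @ Vt0

E = 0.05 * rng.standard_normal((p, q))   # perturbation A_hat - A
A_hat = A + E

def right_vecs(M, r, s):
    """Right singular vectors v_r, ..., v_s as columns."""
    _, _, Wt = np.linalg.svd(M, full_matrices=False)
    return Wt.T[:, r - 1:s]

V, V_hat = right_vecs(A, r, s), right_vecs(A_hat, r, s)

# ||sin Theta(V_hat, V)||_F^2 = d - ||V_hat^T V||_F^2
sin_theta = np.sqrt(max(d - np.linalg.norm(V_hat.T @ V) ** 2, 0.0))

# here r = 1, so sigma_0^2 = inf and only the lower gap matters
gap = sv[s - 1] ** 2 - sv[s] ** 2
dA_op, dA_F = np.linalg.norm(E, 2), np.linalg.norm(E)
bound = 2 * (2 * sv[0] + dA_op) * min(np.sqrt(d) * dA_op, dA_F) / gap
```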
\section*{Acknowledgements}
The first and third authors are supported by the third author's Engineering and Physical Sciences Research Council Early Career Fellowship EP/J017213/1. The second author is supported by a Benefactors' scholarship from St John's College, Cambridge.
\section*{Appendix}
We first state an elementary lemma that will be useful in several places.
\begin{lemma}
\label{Lemma:Orthogonal}
Let $A \in \mathbb{R}^{m \times n}$, and let $U \in \mathbb{R}^{m \times p}$ and $W \in \mathbb{R}^{n \times q}$ both have orthonormal columns. Then
\[
\|U^TAW\|_\mathrm{F} \leq \|A\|_\mathrm{F}.
\]
If instead, $U \in \mathbb{R}^{m \times p}$ and $W \in \mathbb{R}^{n \times q}$ both have orthonormal rows, then
\[
\|U^TAW\|_\mathrm{F} = \|A\|_\mathrm{F}.
\]
\end{lemma}
\begin{proof}
For the first claim, find a matrix $U_1 \in \mathbb{R}^{m \times (m-p)}$ such that $\begin{pmatrix} U & U_1 \end{pmatrix}$ is orthogonal, and a matrix $W_1 \in \mathbb{R}^{n \times (n-q)}$ such that $\begin{pmatrix} W & W_1 \end{pmatrix}$ is orthogonal. Then
\[
\|A\|_\mathrm{F} = \Biggl\|\begin{pmatrix}U^T \\ U_1^T\end{pmatrix}A \begin{pmatrix}W & W_1\end{pmatrix}\Biggr\|_\mathrm{F} \geq \Biggl\|\begin{pmatrix}U^T \\ U_1^T\end{pmatrix}AW\Biggr\|_\mathrm{F} \geq \|U^TAW\|_\mathrm{F}.
\]
For the second claim, observe that
\[
\|U^TAW\|_\mathrm{F}^2 = \mathrm{tr}(U^TAWW^TA^TU) = \mathrm{tr}(AA^TUU^T) = \mathrm{tr}(AA^T) = \|A\|_\mathrm{F}^2.
\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:Main}]
Let $\Lambda := \mathrm{diag}(\lambda_r,\lambda_{r+1},\ldots,\lambda_s)$ and $\hat{\Lambda} := \mathrm{diag}(\hat{\lambda}_r,\hat{\lambda}_{r+1},\ldots,\hat{\lambda}_s)$. Then
\[
0 = \hat{\Sigma}\hat{V} - \hat{V}\hat{\Lambda} = \Sigma\hat{V} - \hat{V}\Lambda + (\hat{\Sigma} - \Sigma)\hat{V} - \hat{V}(\hat{\Lambda} - \Lambda).
\]
Hence
\begin{align}
\|\hat{V}\Lambda - \Sigma\hat{V}\|_{\mathrm{F}} &\leq \|(\hat{\Sigma} - \Sigma)\hat{V}\|_{\mathrm{F}} + \|\hat{V}(\hat{\Lambda} - \Lambda)\|_{\mathrm{F}} \nonumber \\
&\leq d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}} + \|\hat{\Lambda} - \Lambda\|_{\mathrm{F}} \leq 2d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}}, \label{Eq:Eigenvalue2}
\end{align}
where we have used Lemma~\ref{Lemma:Orthogonal} in the second inequality and Weyl's inequality \citep[e.g.][Corollary~4.9]{StewartSun1990} for the final bound. Alternatively, we can argue that
\begin{align}
\label{Eq:Upper2}
\|\hat{V}\Lambda - \Sigma\hat{V}\|_{\mathrm{F}} &\leq \|(\hat{\Sigma} - \Sigma)\hat{V}\|_{\mathrm{F}} + \|\hat{V}(\hat{\Lambda} - \Lambda)\|_{\mathrm{F}} \nonumber \\
&\leq \|\hat{\Sigma} - \Sigma\|_{\mathrm{F}} + \|\hat{\Lambda} - \Lambda\|_{\mathrm{F}} \leq 2\|\hat{\Sigma} - \Sigma\|_{\mathrm{F}},
\end{align}
where the second inequality follows from two applications of Lemma~\ref{Lemma:Orthogonal}, and the final inequality follows from the Wielandt--Hoffman theorem \citep[e.g.][pp.~104--108]{Wilkinson1965}.
Let $\Lambda_1 := \mathrm{diag}(\lambda_1,\ldots,\lambda_{r-1},\lambda_{s+1},\ldots,\lambda_p)$, and let $V_1$ be a $p \times (p-d)$ matrix such that $P := \begin{pmatrix}V & V_1\end{pmatrix}$ is orthogonal and such that $P^T \Sigma P = \begin{pmatrix} \Lambda & 0 \\ 0 & \Lambda_1 \end{pmatrix}$. Then
\begin{align}
\label{Eq:FirstLower}
\|\hat{V}\Lambda - \Sigma\hat{V}\|_{\mathrm{F}} &= \|VV^T\hat{V}\Lambda + V_1V_1^T \hat{V} \Lambda - V\Lambda V^T\hat{V} - V_1 \Lambda_1 V_1^T\hat{V}\|_{\mathrm{F}} \nonumber \\
&\geq \|V_1V_1^T\hat{V}\Lambda - V_1\Lambda_1 V_1^T\hat{V}\|_{\mathrm{F}} \geq \|V_1^T\hat{V}\Lambda - \Lambda_1 V_1^T\hat{V}\|_{\mathrm{F}},
\end{align}
where the first inequality follows because $V^TV_1 = 0$, and the second from another application of Lemma~\ref{Lemma:Orthogonal}. For real matrices $A$ and $B$, we write $A \otimes B$ for their Kronecker product \citep[e.g.][p.~30]{StewartSun1990} and $\mathrm{vec}(A)$ for the vectorisation of $A$, i.e. the vector formed by stacking its columns. We recall the standard identity $\mathrm{vec}(ABC) = (C^T \otimes A)\mathrm{vec}(B)$, which holds whenever the dimensions of the matrices are such that the matrix multiplication is well-defined. We also write $I_m$ for the $m$-dimensional identity matrix. Then
\begin{align}
\label{Eq:SecondLower}
\|V_1^T\hat{V}\Lambda - \Lambda_1 V_1^T\hat{V}\|_{\mathrm{F}} &= \|(\Lambda \otimes I_{p-d} - I_d \otimes \Lambda_1)\mathrm{vec}(V_1^T\hat{V})\| \nonumber \\
&\geq \min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})\|\mathrm{vec}(V_1^T\hat{V})\| \nonumber \\
&= \min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})\|\sin \Theta(\hat{V},V)\|_\mathrm{F},
\end{align}
since
\[
\|\mathrm{vec}(V_1^T\hat{V})\|^2 = \mathrm{tr}(\hat{V}^TV_1V_1^T\hat{V}) = \mathrm{tr}\bigl((I_p - VV^T)\hat{V}\hat{V}^T\bigr) = d - \|\hat{V}^TV\|_\mathrm{F}^2 = \|\sin \Theta(\hat{V},V)\|_\mathrm{F}^2.
\]
We deduce from~(\ref{Eq:SecondLower}), (\ref{Eq:FirstLower}), (\ref{Eq:Upper2}) and~(\ref{Eq:Eigenvalue2}) that
\[
\|\sin \Theta(\hat{V},V)\|_\mathrm{F} \leq \frac{\|V_1^T\hat{V}\Lambda - \Lambda_1 V_1^T\hat{V}\|_{\mathrm{F}}}{\min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})} \leq \frac{2\min(d^{1/2}\|\hat{\Sigma} - \Sigma\|_{\mathrm{op}},\|\hat{\Sigma} - \Sigma\|_\mathrm{F})}{\min(\lambda_{r-1} - \lambda_r,\lambda_s - \lambda_{s+1})},
\]
as required.
For the second conclusion, by a singular value decomposition, we can find orthogonal matrices $\hat{O}_1, \hat{O}_2 \in \mathbb{R}^{d \times d}$ such that $\hat{O}_1^T\hat{V}^TV\hat{O}_2 = \mathrm{diag}(\cos \theta_1,\ldots,\cos \theta_d)$, where $\theta_1,\ldots,\theta_d$ are the principal angles between the column spaces of $V$ and $\hat{V}$. Setting $\hat{O} := \hat{O}_1 \hat{O}_2^T$, we have
\begin{align}
\label{Eq:Intermediate}
\|\hat{V}\hat{O} - V\|_{\mathrm{F}}^2 &= \mathrm{tr}\bigl((\hat{V}\hat{O} - V)^T(\hat{V}\hat{O} - V)\bigr) = 2d - 2\mathrm{tr}(\hat{O}_2\hat{O}_1^T\hat{V}^TV) \nonumber \\
&= 2d - 2\sum_{j=1}^d \cos \theta_j \leq 2d - 2\sum_{j=1}^d \cos^2 \theta_j = 2\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}}^2.
\end{align}
The result now follows from our first conclusion.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:SVD}]
Note that $A^TA, \hat{A}^T\hat{A} \in \mathbb{R}^{q \times q}$ are symmetric, with eigenvalues $\sigma_1^2 \geq \ldots \geq \sigma_q^2$ and $\hat{\sigma}_1^2 \geq \ldots \geq \hat{\sigma}_q^2$ respectively. Moreover, we have $A^TAv_j = \sigma_j^2 v_j$ and $\hat{A}^T\hat{A}\hat{v}_j = \hat{\sigma}_j^2 \hat{v}_j$ for $j=r,r+1,\ldots,s$. We deduce from Theorem~\ref{Thm:Main} that
\begin{equation}
\label{Eq:SVDsintheta}
\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}} \leq \frac{2\min(d^{1/2}\|\hat{A}^T\hat{A} - A^TA\|_{\mathrm{op}},\|\hat{A}^T\hat{A} - A^TA\|_\mathrm{F})}{\min(\sigma_{r-1}^2 - \sigma_r^2,\sigma_s^2 - \sigma_{s+1}^2)}.
\end{equation}
Now, by the submultiplicativity of the operator norm,
\begin{align}
\label{Eq:SVDop}
\|\hat{A}^T\hat{A} - A^TA\|_{\mathrm{op}} = \|(\hat{A}-A)^T\hat{A} + A^T(\hat{A}-A)\|_{\mathrm{op}} &\leq (\|\hat{A}\|_{\mathrm{op}} + \|A\|_{\mathrm{op}})\|\hat{A}-A\|_{\mathrm{op}} \nonumber \\
&\leq (2\sigma_1 + \|\hat{A}-A\|_{\mathrm{op}})\|\hat{A}-A\|_{\mathrm{op}}.
\end{align}
On the other hand,
\begin{align}
\label{Eq:SVDFrobenius}
\|\hat{A}^T\hat{A} - A^TA\|_{\mathrm{F}} &= \|(\hat{A}-A)^T\hat{A} + A^T(\hat{A}-A)\|_{\mathrm{F}} \nonumber\\
&\leq \|(\hat{A}^T \otimes I_q)\mathrm{vec}\bigl((\hat{A}-A)^T\bigr)\| + \|(I_p \otimes A^T)\mathrm{vec}(\hat{A}-A)\| \nonumber \\
&\leq (\|\hat{A}^T \otimes I_q\|_\mathrm{op} + \|I_p \otimes A^T\|_\mathrm{op})\|\hat{A}-A\|_{\mathrm{F}} \nonumber \\
&\leq (2\sigma_1 + \|\hat{A}-A\|_{\mathrm{op}})\|\hat{A}-A\|_{\mathrm{F}}.
\end{align}
We deduce from~(\ref{Eq:SVDsintheta}), (\ref{Eq:SVDop}) and~(\ref{Eq:SVDFrobenius}) that
\[
\|\sin \Theta(\hat{V},V)\|_{\mathrm{F}} \leq \frac{2(2\sigma_1 + \|\hat{A} - A\|_{\mathrm{op}})\min(d^{1/2}\|\hat{A} - A\|_{\mathrm{op}},\|\hat{A} - A\|_\mathrm{F})}{\min(\sigma_{r-1}^2 - \sigma_r^2,\sigma_s^2 - \sigma_{s+1}^2)}.
\]
The bound for $\|\hat{V}\hat{O}-V\|_{\mathrm{F}}$ now follows immediately from this and~(\ref{Eq:Intermediate}).
\end{proof}
\section{Introduction}
Neutrinos define the only sector of the standard model (SM)
where some basic
questions have no answer yet. We do not know, for example,
whether they are Dirac or Majorana spinors, or whether
the sector includes additional sterile modes.
Although neutrinos are related by the gauge symmetry
to the electron and the other charged leptons,
the absence of electric charge makes them a very {\em different}
particle. From an experimental point of view their invisibility
is an obvious challenge that, at the same time,
provides unexpected opportunities
in the search for new physics. Like protons or photons, neutrinos
are produced with very high energies in astrophysical processes; unlike
these particles, they may cross large distances and reach with
no energy loss the center of a neutrino telescope like IceCube.
Once there, the relative frequency $\omega_{\rm NP}$
of neutrino interactions involving new physics will be
enhanced by their small SM cross section:
\begin{equation}
\omega_{\rm NP}\approx {\sigma^{\nu N}_{\rm NP}\over \sigma^{\nu N}_{\rm SM}}\,.
\end{equation}
As we will see, the large target mass in a clean environment
(only contaminated by atmospheric muons) at telescopes defines
the ideal ground to probe a class of ultraviolet (UV) completions
of the SM.
In this article we will be interested in the 37 events
of energy above 30 TeV observed between the years 2010 and
2013 by IceCube \cite{Aartsen:2013jdh,Aartsen:2014gkd}.
Their analysis has shown that these events
can {\em not} be explained with standard interactions of atmospheric
neutrinos, even if the lepton flux from charmed hadron decays
were anomalously high. In the next section we review the IceCube
analysis and their interpretation, namely, that the origin of
these events is a diffuse flux of cosmic neutrinos with a
$\propto E^{-2}$ spectrum. We will argue that the data admits other
interpretations, and in Section 3 we describe a new physics
scenario that does the work. In Section 4 we show that very
{\em soft} collisions of cosmogenic
neutrinos (with energy around $10^9$~GeV) mediated by this
new physics would provide an excellent fit to the data, and that
an increased statistics could clearly discriminate this hypothesis
from the standard one.
\section{IceCube data}
The IceCube analysis isolates neutrino events of energy
$\gsim 30$~TeV coming from any direction.
Depending on whether the events include the characteristic track
of a muon, they are divided into {\em tracks} and {\em showers}.
The directionality in track events is very good, whereas
the pointlike topology of the showers introduces a $\pm 15^\circ$
uncertainty.
The analysis tries to eliminate muon tracks
entering the detector from outside. This also reduces by
a factor of $\approx 0.5$ the number of atmospheric neutrino events
from downgoing directions. An expected muon background
of $8.4\pm 4.2$ events remains, which seems consistent with the
5 events (one of them containing two coincident
muons from unrelated air showers) where the
muon track starts near the detector boundary. We will in
principle exclude\footnote{We think that these ambiguous events
could be excluded just by increasing the thresholds in IceTop
and the veto region.}
events number 3, 8, 18, 28, 32 together with the $8.4\pm 4.2$
background from our analysis, assuming that we are then left with
32 {\em genuine} neutrino interactions inside the IceCube detector,
and we will comment on how the inclusion of these events would affect
our results.
\begin{figure}
\begin{center}
\begin{tabular}{ll}
(a) & (b) \\[-4ex]
\includegraphics{f1a.pdf} & \includegraphics{f1b.pdf}
\end{tabular}
\end{center}
\caption{\it
(a)
Probability $P_{\rm surv}$ that a neutrino reaches IceCube from
a zenith angle $\theta_z$ for several energies $E_\nu$ (we have used
the $\nu N$ cross section in \cite{Connolly:2011vc}).
(b)
Atmospheric \cite{Illana:2010gh} and cosmogenic \cite{Kotera:2010yn}
neutrino fluxes integrated over all directions and including all flavors.
\label{fig:fig1}
}
\end{figure}
We define two energy bins (30 -- 300~TeV and 300 -- 3000~TeV)
and three direction bins: {\em downgoing}, which
includes
declinations $-90^\circ \le \delta < -20^\circ$ ($\delta=\theta_z-90^\circ$),
{\em near-horizontal} ($-20^\circ \le \delta < +20^\circ$) and
{\em upgoing} ($+20^\circ \le \delta < +90^\circ$).
The Earth is unable to
absorb neutrinos from downgoing and near-horizontal
directions at all the energies of interest, but it becomes opaque
from upgoing directions (see Fig.~\ref{fig:fig1}a), especially in the high
energy bin.
For example, a 100~TeV (1~PeV) neutrino has only
a 58\% (21\%) probability
to reach IceCube from the $+20^\circ \le \delta < +90^\circ$ bin.
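The attenuation in Fig.~\ref{fig:fig1}a can be reproduced at the order-of-magnitude level with a column-depth exponential, $P_{\rm surv}=e^{-\sigma N_A X(\theta_z)}$. The two-shell density profile and the sample cross sections below are assumptions of this sketch, not the inputs used for the figure:

```python
import numpy as np

# Crude two-shell Earth model (assumed densities, order of magnitude only)
R_E, R_C = 6371e5, 3480e5         # Earth and core radii in cm
RHO_MANTLE, RHO_CORE = 4.5, 11.0  # g/cm^3
N_A = 6.022e23                    # nucleons per gram

def column_depth(theta_z, n=20000):
    """Column depth (g/cm^2) from the surface to a surface detector,
    along the arrival direction with zenith angle theta_z (radians)."""
    c = np.cos(theta_z)
    if c >= 0:
        return 0.0                # downgoing: overburden neglected here
    L = -2.0 * R_E * c            # chord length through the Earth
    t = np.linspace(0.0, L, n)    # distance from the detector, backwards
    r = np.sqrt(R_E**2 + t**2 + 2.0 * R_E * t * c)
    rho = np.where(r < R_C, RHO_CORE, RHO_MANTLE)
    return float(rho.sum() * (L / n))  # simple Riemann sum

def p_surv(theta_z, sigma_cm2):
    """Probability that the neutrino reaches the detector uninteracted."""
    return float(np.exp(-sigma_cm2 * N_A * column_depth(theta_z)))

# Illustrative total nu-N cross sections in cm^2 (assumed values)
p_100TeV = p_surv(np.radians(160.0), 0.6e-33)
p_1PeV   = p_surv(np.radians(160.0), 1.5e-33)
```

As in the figure, absorption sets in quickly with energy for Earth-crossing trajectories.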
To estimate the number of atmospheric events we will use the
fluxes in Fig.~\ref{fig:fig1}b.
We have separated the neutrino flux into the standard component
from pion and kaon decays plus another component from charmed hadron
decays. The first one has a strong dependence
on the zenith angle (it is larger from horizontal directions)
and is dominated (in an approximate 17:1 ratio)
by the muon over the electron
neutrino flavor. The charm component is isotropic and contains
both flavors with the same frequency, together with
a 2\% $\nu_{\tau}$ component.
\begin{table}
\begin{center}
\scalebox{1.}{
\begin{tabular}{r|c|c|c||c|c|c|c}
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{Data}
& \multicolumn{1}{c}{$\;$Atm$\;$}
& \multicolumn{1}{c}{$\;E^{-2}\;\;$}
& \multicolumn{1}{c}{Data}
& \multicolumn{1}{c}{$\;$Atm$\;$}
& \multicolumn{1}{c}{$\;E^{-2}\;\;$} & \\
\multicolumn{8}{c}{}\\ [-3ex]
\cline{2-7}
Tracks & 2 & 0.8 & $\;$0.6$\;$ & 0 & 0.0 & 0.1 & UPGOING \\ [0.5ex]
\cline{2-7}
Showers & 5 & 2.7 & 3.6 & 0 & 0.0 & 0.7 & ($+20^\circ<\delta<+90^\circ$) \\
\cline{2-7}
\multicolumn{8}{c}{}\\[-2ex]
\cline{2-7}
Tracks & 2 & 3.5 & 1.5 & 0 & 0.0 & 0.5 & NEAR HORIZONTAL \\
\cline{2-7}
Showers & 8 & 5.9 & 6.4 & 1 & 0.2 & 2.6 & ($-20^\circ<\delta<+20^\circ$) \\
\cline{2-7}
\multicolumn{8}{c}{}\\[-2ex]
\cline{2-7}
Tracks & 0 & 0.2 & 1.6 & 0 & 0.0 & 0.6 & DOWNGOING \\
\cline{2-7}
Showers & 11 & 0.6 & 6.5 & 3 & 0.0 & 2.9 & ($-90^\circ<\delta<-20^\circ$) \\
\cline{2-7}
\multicolumn{8}{c}{}\\[-2.5ex]
\multicolumn{1}{c}{}
&\multicolumn{3}{c}{30 -- 300~TeV} & \multicolumn{3}{c}{300 -- 3000~TeV}
\end{tabular}
}
\end{center}
\vspace{-0.5cm}
\caption{\it Data, atmospheric background, and best fit of the excess
with a $E^{-2}$ diffuse flux at IceCube in 988 days.
\label{tab:events1}
}
\end{table}
The 32 neutrino events and our estimate for the atmospheric
background can be found in Table~\ref{tab:events1}. An inspection of the data
reveals two clear features:
\begin{enumerate}
\item The number and distribution of tracks
is well explained by atmospheric neutrinos. In the low-energy
bin there are 4 tracks
from upgoing and near-horizontal directions for an expected
background of 4.3, whereas at higher energies there are
no events but just 0.06 tracks expected. If we added the 5
downgoing tracks excluded in our analysis together with the
$8.4\pm 4.2$ muon background, we would expect a total of 12.9 track
events and find only 9 in the data: again, no need for extra tracks.
\item There is an excess of showers that is
especially significant from downgoing directions. At low
energies we find 11 events for 0.6 expected, and in the
300 -- 3000~TeV bin there are 3 showers for a 0.04
background. If we include near-horizontal directions we
obtain a total of 23 events for just 6.7 expected.
\end{enumerate}
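A back-of-the-envelope Poisson computation (ours, not the collaboration's likelihood analysis; it ignores background uncertainties and trial factors) quantifies these two features:

```python
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

# tracks: 4 observed (upgoing + near-horizontal, low energy) vs 4.3 expected
p_tracks = poisson_tail(4, 4.3)    # fully compatible with background

# showers: 11 observed (downgoing, low energy) vs 0.6 expected
p_showers = poisson_tail(11, 0.6)  # a very strong excess
```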
IceCube then proposes a fit to the excess
using a diffuse flux of
astrophysical neutrinos with spectrum proportional to $E^{-2}$
(also in Table~\ref{tab:events1}). We find
that this $E^{-2}$ hypothesis has two generic implications. First,
it gives around 4.5 showers per track. Second, it implies
a very similar number of downgoing and near-horizontal
events (see Table~\ref{tab:events1}).
To compare it with the data we just
subtract the atmospheric background. We
obtain:
\begin{itemize}
\item
An excess of 18.6 showers
(28 observed, 9.4 expected) but no excess of tracks (4 observed, 4.5
expected). The IceCube hypothesis introduces 18.4 showers
and 4.2 tracks.
\item An excess of 13.2 downgoing events
but just 1.4 extra events from near-horizontal directions.
The $E^{-2}$ diffuse flux proposed by IceCube predicts,
respectively, 11.6 and 11.0 events.
\end{itemize}
Therefore, although the statistical significance of these
deviations is not conclusive yet \cite{Chen:2013dza}, it is apparent that
other possibilities may give a better fit.
In particular, we will define a new physics scenario
that only introduces near-horizontal and downgoing showers
(in a 1:2 ratio) with no new muon tracks from any directions.
\section{A consistent model of TeV gravity}
Consider a model of gravity \cite{ArkaniHamed:1998rs}
with one flat extra dimension $y$
of radius $R$ and a fundamental
scale\footnote{We will follow the notation in \cite{Giudice:2001ce}:
$\bar M_P= M_P /\sqrt{8\pi}$ and $\bar M_D= M_D / (2\pi)^{n/(2+n)}$,
where $n=D-4$ is the number of extra dimensions. Using this
notation $G_D=G_N(2\pi R)^n =1/(8\pi \bar M_D^{2+n})$ for any
value of $n$, including $n=0$.}
$\bar M_5\approx 1$~TeV. Since the generalized Newton's
constant in $D$ dimensions is $G_D=V_nG_N$, this setup
requires a very large extra dimension:
\begin{equation}
V_1=2\pi R= {\bar M_P^2\over \bar M_5^3}
\end{equation}
{\em i.e.}, $R \approx (10^{-27}\;{\rm GeV})^{-1}\approx 1$ AU.
A change from $1/r$ to $1/r^2$ in the gravitational potential
at such large distances would of course have been observed.
The model is also excluded by astrophysical \cite{Hannestad:2003yd}
and cosmological \cite{Hannestad:2001nq}
bounds. This can be understood in terms of the
Kaluza-Klein (KK) modes, of mass $m_n=nm_c$ with $m_c=1/R$.
Although each excitation couples
very weakly ($\propto \bar M_P^{-1}$) to matter, the large multiplicity
of light states during primordial nucleosynthesis or
supernova explosions
would introduce unacceptable changes in the dynamics.
We intend to solve these problems while keeping the main features
of the model. In particular,
\begin{enumerate}
\item
We will keep the same
$\bar M_5\approx 1$~TeV. $M_5$ is the scale ($\mu$)
where gravity becomes strong: the number of light KK
modes ($2 \mu/m_c$) times their coupling squared
to matter ($\mu^2/\bar M_P^2$) gives an amplitude of
order 1 at $\mu=M_5$.
\item
In order to avoid astrophysical and cosmological bounds,
we will increase the mass of the first KK mode and the
mass gap between excitations from $m_c=1/R$
to $m_c\ge 50$~MeV.
Obviously, since now there are {\em fewer} KK gravitons,
consistency with the previous point will require that
the coupling squared of each mode is increased by a factor of
$m_c \bar M_P^2/M_5^3$.
\end{enumerate}
Notice that, after these changes, the gravitational potential at
distances $r<1/m_c$ (approximately 4~fm for
$m_c=50$~MeV) will be exactly the same as in the case
of one very large compact dimension: the smaller
density of KK modes is exactly compensated by their larger
coupling. The main difference is that now gravity
becomes 4-dimensional at distances much shorter than
before, $r>1/m_c$ instead of $r>1$ AU.
The framework just outlined would be an explicit realization
of the UV completion by classicalization discussed
in \cite{Dvali:2010jz,Dvali:2012mx}, and it has been defined
by Giudice, Plehn and Strumia in \cite{Giudice:2004mg} as follows
(see also \cite{Borunda:2009wd}). Let us deform the
flat circle described above to an orbifold by identifying
$y\to -y$, and let us place 4-dim
branes at $y=0$ (IR brane) and $y=\pi R$ (UV brane). We will
also introduce a (slight) warping along the
extra dimension:
\begin{equation}
{\rm d} s^2 = e^{2\sigma(y)} \eta_{\mu\nu}\, {\rm d}x^\mu {\rm d}x^{\nu}
+ {\rm d} y^2\,,\;\;\; \sigma(y)\equiv k \, |y|\,.
\end{equation}
The 4-dim Planck mass is then given by
\begin{equation}
\bar M_P^2 = {\bar M_5^3\over k} \left( e^{2 k \pi R} -1\right)\,.
\label{warp}
\end{equation}
If the 5-dim curvature is $k\ll 1/R$ we recover in Eq.~(\ref{warp})
the flat case,
\begin{equation}
\bar M_P^2 \approx \bar M_5^3 \, 2 \pi R\,,
\end{equation}
together with
a tower of KK gravitons with mass $m_n=n/R$ and
coupling\footnote{Notice that the orbifolding
projects out half of the KK modes but also increases by a factor of
$\sqrt{2}$ their coupling to matter.} $\approx
\sqrt{2} \mu/\bar M_P$. We will take, however, the
opposite limit: $k$ larger than $R^{-1}$ but still much smaller than
$\bar M_5$. For example, we obtain $\bar M_5=1$ TeV in Eq.~(\ref{warp})
for $k=50$ MeV and $R=(5$ MeV$)^{-1}=40$ fm.
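A rough numerical check of Eq.~(\ref{warp}) in natural units (GeV; the value of the reduced Planck mass is an input of this sketch) inverts the relation for $\bar M_5$:

```python
import math

Mp_bar = 2.435e18        # reduced 4-dim Planck mass, GeV (assumed input)
k = 0.05                 # curvature, GeV (= 50 MeV)
R = 1.0 / 0.005          # radius, GeV^-1 (= (5 MeV)^-1, about 40 fm)

# invert  Mp_bar^2 = (M5_bar^3 / k) * (exp(2*pi*k*R) - 1)  for M5_bar
M5_bar = (Mp_bar**2 * k / (math.exp(2.0 * math.pi * k * R) - 1.0)) ** (1.0 / 3.0)
# M5_bar comes out of order a TeV; the exponent 2*pi*k*R ~ 63 makes the
# result exponentially sensitive to k*R, so only the order matters
```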
The curvature has then two main effects on the
KK gravitons \cite{Giudice:2004mg}: their masses
become proportional to $\pi k\equiv m_c$,
\begin{equation}
m_n\approx \left( n+{1\over 4} \right) k \pi
= \left( n+{1\over 4} \right) m_c\,,
\end{equation}
and their 5-dim wave function is {\em pushed} towards the
IR brane.
Assuming that quarks and neutrinos are located there,
this will translate into a larger coupling of all the gravitons
to matter,
\begin{equation}
{\sqrt{2} \over \bar M_P}\to \sqrt{k\over \bar M_5^3}
\approx \sqrt{2 m_c\over M_5^3}\,.
\end{equation}
This is exactly
the factor discussed above. In short, this
TeV gravity model has just one extra dimension, a low fundamental
scale $M_5\approx 1$~TeV, and an arbitrary mass $m_c\ge 50$~MeV
for the first KK mode. Given the (approximately) constant
mass gap between resonances and their enhanced coupling to
matter, the model gives at distances $r<m_c^{-1}$
the same gravitational potential as a model with one flat extra
dimension of length $L\approx 1$ AU,
while at $r>m_c^{-1}$ it implies Newton's 4-dim gravity.
Once the setup has been justified, we can consider
graviton-mediated collisions
at center of mass energies $s > M_5^2$, {\em i.e.}, in the
transplanckian regime \cite{Emparan:2001kf}.
In particular, we will be interested
in scatterings with large impact parameter: distances longer
than the typical ones to form a
black hole (and thus with a larger cross section) but
still shorter than $1/m_c$,
so that gravity is still purely 5-dimensional. In these processes
the incident neutrino interacts with a
parton in the target nucleon,
transfers a small fraction\footnote{We use the same symbol
$y$ for the inelasticity and the label of the extra dimension
hoping that it does not mislead the reader.}
$y=(E_\nu-E'_\nu)/E_\nu$
of its energy and keeps going with almost the same energy.
Using the eikonal approximation the amplitude for this process
can be calculated in impact parameter space
as a sum of ladder and cross ladder diagrams. It turns out
that \cite{Giudice:2001ce,Illana:2005pu}
\begin{equation}
{\cal A}_{\rm eik}(\hat s,q)=4\pi \hat s b_c^2\; F_1(b_c q)\;,
\label{eikonal}
\end{equation}
where $\hat s$ and $\hat t$ refer to the Mandelstam
variables at the parton level, $y=-t/s$, $q=\sqrt{-\hat t}$,
$b_c= \hat s/(4\,M^3_5)$ and
\begin{equation}
F_n(u)=-i
\int_0^\infty {\rm d}v\;v\; J_0(uv)
\left( e^{iv^{-n}} -1 \right)\;.
\label{f5}
\end{equation}
\begin{figure}
\begin{center}
\begin{tabular}{ll}
(a) & (b) \\[-4ex]
\includegraphics{f2a.pdf} & \includegraphics{f2b.pdf}
\end{tabular}
\end{center}
\vspace{-0.5cm}
\caption{\it
(a)
$\nu N$ cross sections for processes mediated
by TeV-gravity and by $W$ exchange.
(b)
Differential cross sections
$y\, {\rm d} \sigma/{\rm d} y$ for $E_\nu=10^9$~GeV.
In both panels $M_5=1.7$~TeV and $m_c=5$~GeV (solid), 50~MeV (dashed).
\label{fig:fig2}
}
\end{figure}
The differential $\nu N$ cross section that we propose
is then
\begin{equation}
\frac{{\rm d}\sigma^{\nu N}_{\rm eik}}{{\rm d}y}=\int^1_{M^2_5/s} \!\! {\rm d}x\ xs\
\pi b^4_c \left|F_1(b_cq)\right|^2 \ e^{-2m_c/\mu}
\sum_{i=q,\bar{q},g}f_i(x,\mu)\,,
\end{equation}
where $\mu=1/b_c$ if $q<b_c^{-1}$ or
$\mu=\sqrt{q/b_c}$ otherwise
is the typical inverse distance in the collision and
we have included a Yukawa suppression at distances larger than $m_c$
(a numerical fit gives
$\left|F_1(u)\right|^2 = 1/ (1.57 u^3 + u^2)$).
Fig.~\ref{fig:fig2} summarizes its main features. At low energies
the new physics is negligible, and neutrinos interact with matter
only through $W$ and $Z$ exchange. Above an energy threshold
$E_\nu=M_5^2/(2m_N)\approx 10^6$~GeV the gravitational
cross section grows fast, and it becomes much larger than
the standard one at $E_\nu\approx 10^8$~GeV.
This large cross section, however, is very soft
(see Fig.~\ref{fig:fig2}b): the neutrino mean free path in
ice becomes short ($\approx 10$ km at $10^9$~GeV), but the
fraction of energy deposited in each interaction is
small ($\langle y \rangle \approx 10^{-5}$). Notice that, in addition to
$W$-mediated collisions, only the short distance
interactions of $\langle y\rangle\approx 1$ or those resulting into
a mini-black hole (see our estimate in Fig.~\ref{fig:fig2}a)
are able to {\em stop} the neutrino when it propagates
through matter.
The low-$y$ end of the differential cross section
in Fig.~\ref{fig:fig2}b is regulated
by the arbitrary parameter $m_c$. If the mass of the lightest
KK graviton is around $50$~MeV, then a $10^{10}$~GeV neutrino
would have several TeV energy depositions inside a km of ice,
whereas values $m_c\approx 5$~GeV prevent the total
cross section from reaching very large values.
Let us finally mention that, although our framework is unconstrained
by astrophysics and cosmology, collider bounds would
be similar to the ones obtained in any TeV gravity model. If a black
hole is created one expects a high-multiplicity event with some jets
and leptons of large $E_T$. These bounds, however, are weak for
a low number of extra dimensions \cite{Aad:2014gka} (just one in
our case) and very model dependent (for example, on the angular momentum
of the black hole or on the minimum mass --in units of $M_D$-- that
it should have). Other experimental
constraints could be obtained from the missing $E_T$
associated to the production of the massive gravitons. In particular,
an analysis of LEP data suggests bounds between 1.5
and 2.4 TeV on $M_5$ \cite{Giudice:2004mg}. These bounds are also quite
model dependent: they become weaker if, for example, we
{\it hide} the right-handed electron in the UV brane. Notice that
the particular model with just one extra dimension under study
has not attracted the interest of the experimentalists at the LHC, as
they may consider it excluded by astrophysical observations.
In any case, we would like to emphasize that the very soft
collisions that we propose, with the incident particle
losing a very small fraction of energy, are invisible in colliders:
they imply new ultraforward physics, at rapidities out of reach there.
The ideal place to test this type of cross section is not a collider;
it is IceCube.
\section{Fit of the IceCube data}
To fit the IceCube data we will use the cosmogenic
neutrino flux in Fig.~\ref{fig:fig1}b \cite{Kotera:2010yn} and the
eikonal collisions discussed in the previous section.
The cosmogenic flux is mostly produced in collisions
of cosmic rays with the CMB radiation, and it
consists of a few hundred
neutrinos of energy between $10^8$ and $10^{10}$~GeV per km$^2$
and year.
Cosmogenic neutrinos can reach the center of IceCube from
zenith angles $\theta_z\le 90^\circ$ and deposit there a small fraction
of energy through an eikonal scattering. Notice that these
soft collisions do not {\em destroy} the incident
neutrino, which could actually interact once or several times
in the ice before reaching the detector. However,
short distance (both standard and gravitational) interactions
will always prevent cosmogenic neutrinos from reaching IceCube
from high inclinations ({\em i.e.}, upgoing directions).
For example, a $10^9$~GeV neutrino has a cross section
$\sigma_{\nu N}^{CC}\approx 10$ nb for $W$ exchange
with a nucleon, or $\sigma_{\nu N}^{BH}\approx 8$ nb
to produce a black hole\footnote{We take $M_5=1.7$~TeV and a
geometrical cross section to produce a mini-black hole.}
through short distance gravitational interactions. However,
the cross section for an eikonal interaction is much larger,
$\sigma_{\nu N}^{eik}\approx 1$ $\mu$b.
Therefore, soft (long-distance)
gravitational collisions would introduce in IceCube an excess
of downgoing and near-horizontal showers only.
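As a back-of-envelope sketch (not part of the original analysis), the cross sections quoted above can be converted into interaction lengths in ice; the nucleon density of ice used below is an assumed input:

```python
# Back-of-envelope sketch (not from the paper): convert the cross sections
# quoted in the text into interaction lengths in ice.
# The ice density below is an assumed value.
N_A = 6.022e23            # nucleons per gram (approximately Avogadro's number)
RHO_ICE = 0.92            # g/cm^3, assumed density of glacial ice
n = N_A * RHO_ICE         # nucleon number density in cm^-3

def mean_free_path_km(sigma_barn):
    """Interaction length 1/(n*sigma), in km, for a cross section in barns."""
    sigma_cm2 = sigma_barn * 1e-24       # 1 barn = 1e-24 cm^2
    return 1.0 / (n * sigma_cm2) / 1e5   # cm -> km

l_cc  = mean_free_path_km(10e-9)   # sigma_CC  ~ 10 nb  -> ~1.8e3 km
l_eik = mean_free_path_km(1e-6)    # sigma_eik ~  1 ub  -> ~18 km
```

The eikonal interaction length of order 20 km is comparable to near-horizontal path lengths through the ice, consistent with the statement that a cosmogenic neutrino may scatter softly once or several times without being destroyed, while charged-current absorption only matters for much longer chords.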
In Table~\ref{tab:events2} we give the number of eikonal events for the diffuse
cosmogenic flux in Fig.~1 that corresponds to
$M_5=1.7$~TeV, $m_c=1$~GeV in a 3 year period.
For comparison, we include our estimate
using the diffuse $E^{-2}$ flux proposed
by IceCube.
\begin{table}
\begin{center}
\begin{tabular}{r|c|c|c|c||c|c|c|c|c}
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{Data}
& \multicolumn{1}{c}{$\;$Atm$\;$}
& \multicolumn{1}{c}{$\;E^{-2}\;$}
& \multicolumn{1}{c}{$\;\;$NP$\;\;$}
& \multicolumn{1}{c}{Data}
& \multicolumn{1}{c}{$\;$Atm$\;$}
& \multicolumn{1}{c}{$\;E^{-2}\;$}
& \multicolumn{1}{c}{$\;\;$NP$\;\;$} & \\
\multicolumn{10}{c}{}\\[-3ex]
\cline{2-9}
Tracks & 2 & 0.8 & 0.6 & 0.0 & 0 & 0.0 & 0.1 & 0.0 & UPGOING \\
\cline{2-9}
Showers & 5 & 2.7 & 3.6 & 0.0 & 0 & 0.0 & 0.7 & 0.0 &
($+20^\circ<\delta<+90^\circ$) \\
\cline{2-9}
\multicolumn{8}{c}{}\\[-2ex]
\cline{2-9}
Tracks & 2 & 3.5 & 1.5 & 0.0 & 0 & 0.0 & 0.5 & 0.0 & NEAR HORIZONTAL \\
\cline{2-9}
Showers & 8 & 5.9 & 6.4 & 4.2 & 1 & 0.2 & 2.6 & 1.9 &
($-20^\circ<\delta<+20^\circ$) \\
\cline{2-9}
\multicolumn{8}{c}{}\\[-2ex]
\cline{2-9}
Tracks & 0 & 0.2 & 1.6 & 0.0 & 0 & 0.0 & 0.6 & 0.0 & DOWNGOING \\
\cline{2-9}
Showers & 11 & 0.6 & 6.5 & 8.0 & 3 & 0.0 & 2.9 & 3.5 &
($-90^\circ<\delta<-20^\circ$) \\
\cline{2-9}
\multicolumn{10}{c}{}\\[-2.5ex]
\multicolumn{1}{c}{}
&\multicolumn{4}{c}{30 -- 300~TeV} & \multicolumn{4}{c}{300 -- 3000~TeV}
\end{tabular}
\end{center}
\vspace{-0.5cm}
\caption{\it Data, atmospheric background, excess from a
$E^{-2}$ diffuse flux, and excess from eikonal collisions
of cosmogenic neutrinos ($M_5 = 1.7$~TeV, $m_c = 1$~GeV) in 988 days.
\label{tab:events2}}
\end{table}
It is apparent that the sum of the atmospheric background and
our hypothesis provides the most accurate fit of the data.
In particular, the likelihood ratio $\lambda$ \cite{Agashe:2014kda}
\begin{equation}
-2\ln \lambda = \sum_{i=1}^N 2 \left( E_i - X_i + X_i\,\ln {X_i\over E_i}
\right)\,,
\end{equation}
where $E_i$ is the prediction, $X_i$
the data and $N$ the number of bins,
gives a significant difference between both hypotheses:
\begin{equation}
-2\ln \lambda^{NP}= 5.9\;,\qquad -2\ln \lambda^{E^{-2}}= 15.4\;.
\end{equation}
If the 5 ambiguous tracks were included in the analysis,
we would obtain similar values:
\begin{equation}
-2\ln \lambda^{NP}= 7.3\;,\qquad -2\ln \lambda^{E^{-2}}= 15.1\;.
\end{equation}
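The statistic above is straightforward to evaluate; the sketch below implements it, with the usual convention that the $X_i\ln(X_i/E_i)$ term vanishes for empty bins. Bin contents would be read off Table~\ref{tab:events2}; the exact binning behind the quoted values is not reproduced here.

```python
import math

def neg2_log_lambda(E, X):
    """Poisson likelihood-ratio statistic of the equation above:
    -2 ln(lambda) = sum_i 2*(E_i - X_i + X_i*ln(X_i/E_i)),
    with the X_i*ln(X_i/E_i) term taken to vanish for empty bins."""
    total = 0.0
    for e, x in zip(E, X):
        total += 2.0 * (e - x + (x * math.log(x / e) if x > 0 else 0.0))
    return total

# Sanity checks: a perfect prediction gives 0, an empty bin contributes 2*E_i.
assert neg2_log_lambda([2.0], [2]) == 0.0
assert abs(neg2_log_lambda([1.5], [0]) - 3.0) < 1e-12
```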
\section{Summary and discussion}
The observation by IceCube of 37 events with
energy above 30~TeV during the past 3 years is without doubt
a very remarkable and interesting result. Their analysis has
shown (and ours confirms) that atmospheric neutrinos are
unable to explain the data. Therefore, IceCube has most
certainly discovered a neutrino flux of different origin.
We think, however, that the determination of the
nature and the possible origin of this flux is
still work in progress.
The events observed do not exhibit a clear
preference for the galactic disc and/or
the galactic center. The best fit of the data by IceCube
has been obtained using a diffuse cosmic
flux with a spectrum proportional to $E^{-2}$. Since neutrinos can
propagate without significant energy losses
from very distant sources, an isotropic diffuse flux generated
by the ensemble of all extragalactic sources in the universe
is indeed expected.
This hypothesis, in principle, implies equipartition between
the 3 neutrino flavors and a given distribution of
zenith angles. Regarding the first point,
it gives around 1 muon event per 4.5 showers, whereas
the excess that we find in the data is around 18.6 showers
and no muons (4 observed, 4.5 atmospheric events expected; if
the muon background were included, we would
observe 9 events but 12.9 expected). The expected number of
tracks could be smaller if the efficiency to
detect charged current $\nu_\mu$ interactions in IceCube (the effective
IceCube mass for these processes described
in \cite{Aartsen:2013jdh}) were lower
than assumed (see the discussion in \cite{Aartsen:2014muf}).
In any case, it seems clear that
the uncertainties and the low statistics still available
make the muon count compatible both with IceCube's $E^{-2}$ hypothesis
and also with the basic result in \cite{Mena:2014sja} (that we
subscribe): muon topologies are well
explained by the atmospheric flux, the only significant
excess appears in the number of showers.
As for the zenith angle distribution, we have distinguished 3 regions
of similar angular size: downgoing, near-horizontal and upgoing
directions. At PeV energies the Earth is (partially)
opaque only to upgoing
neutrinos ($+20^\circ<\delta<+90^\circ$). Therefore,
IceCube's diffuse-flux hypothesis implies a similar number of events
in the downgoing and horizontal bins. The data, however,
reveals an excess of 13.4 shower events in the first bin but just
2.9 from horizontal directions. Of course, the low statistics
gives little significance to these
discrepancies\footnote{The main difference between IceCube's
analysis and ours is that they seem to treat the atmospheric
neutrino flux from the prompt decay of charmed hadrons as
an error bar, whereas in our case it dominates over the
flux from $\pi$ and $K$ decays at
$E>10^{5.5}$~GeV (see Fig.~1).}, but
it also leaves plenty of room for alternative explanations.
We have proposed a scenario where the IceCube excess appears only
in showers (no muon topologies) from downgoing and near-horizontal
directions (no upgoing events) in a 2:1 ratio. It seems to provide
a more accurate fit of the data than the $E^{-2}$ flux hypothesis.
The excess events are caused by {\it exotic} very-soft interactions of
cosmogenic neutrinos, whose flux can be
estimated with some accuracy assuming that the $10^{10}$--$10^{11}$~GeV
cosmic rays observed by AUGER \cite{Settimo:2012zz} are
protons\footnote{Notice also that these
soft interactions experienced by
cosmogenic neutrinos are unconstrained by
AUGER \cite{Abreu:2013zbq}, since the energy deposited
in the atmosphere, of order
$yE\approx 10^6$~GeV, is below threshold.}. The much
larger energy of these neutrinos, around $10^9$~GeV, prevents
them from reaching IceCube from below, suppressing the flux
by $\approx 50\%$ already at horizontal directions.
We have defined a TeV gravity model that provides a neutrino-nucleon
cross section with the precise features that are required. It
should be considered as a particular realization of the generic type of
models \cite{Dvali:2010jz,Dvali:2012mx} where UV physics is dominated by
long-wavelength degrees of freedom.
We think that an increased statistics at IceCube will establish
whether new physics (see also \cite{Esmaili:2013gha})
is necessary in order to interpret the data.
\section*{Acknowledgments}
We would like to thank Carlos P\'erez de los Heros and
Monica Verducci for useful discussions.
This work has
been supported by MICINN of Spain (FPA2010-16802, FPA2013-47836, and
Consolider-Ingenio {\bf Multidark} CSD2009-00064), by Junta de
Andaluc\'\i a (FQM101,3048,6552),
and by MIUR of Italy under the program Futuro in Ricerca 2010
(RBFR10O36O).
\section{Introduction}
The complex Chern-Simons theory was studied by embedding it into string theory in \cite{equivariant}, and the starting point is the following configuration of M-theory fivebranes that is often used to study the 3d-3d correspondence~\cite{DGH, Terashima:2011qi, Dimofte:2011ju, Dimofte:2011py,Cecotti:2011iy}:
\begin{equation}\begin{aligned}
\begin{matrix}
{\mbox{\rm space-time:}} & \qquad & L(k,1)_b & \times & T^* M_3 & \times & {\mathbb R}^2\\
& \qquad & & & \cup \\
N~{\mbox{\rm fivebranes:}} & \qquad & L(k,1)_b & \times & M_3
\end{matrix}
\label{3d3d1}
\end{aligned}\end{equation}
If one reduces along the squashed Lens space $L(k,1)_b$, one obtains complex Chern-Simons theory at level $k$ on $M_3$ \cite{Cordova:2013cea}. Even in the simple case where $M_3$ is the product of a Riemann surface $\Sigma$ with a circle $S^1$, this system is extremely interesting and can be used to gain a lot of insight into complex Chern-Simons theory. For example, the partition function of the 6d $(2,0)$-theory on this geometry gives the ``equivariant Verlinde formula'', which can be identified with the dimension of the Hilbert space of the complex Chern-Simons theory at level $k$ on $\Sigma$:
\begin{equation}\label{EVF}
Z_{\text{M5}}(L(k,1)\times \Sigma \times S^1, \beta)=\dim_\beta {\mathcal H}_{\text{CS}}(\Sigma,k).
\end{equation}
Here $\beta$ is an ``equivariant parameter'' associated with a geometric $U(1)_\beta$ action whose precise definition will be reviewed in section \ref{sec:EVACBI}. The left-hand side of \eqref{EVF} has been computed in several ways in \cite{equivariant} and \cite{appetizer}, each of which gives unique insight into the equivariant Verlinde formula, the complex Chern-Simons theory and the 3d-3d correspondence in general. In this paper, we will add to the list yet another method of computing the partition function of the system of M5-branes by relating it to superconformal indices of class ${\mathcal S}$ theories.
The starting point is the following observation. For $M_3=\Sigma\times S^1$, the setup \eqref{3d3d1} looks like:
\begin{equation}\begin{aligned}
\begin{matrix}
{\mbox{$N$ fivebranes:}} & \qquad & L(k,1)_b & \times & \Sigma & \times &S^1 \\
& \qquad & & & \cap \\
{\mbox{space-time:}} & \qquad & L(k,1)_b & \times & T^* \Sigma & \times & S^1 & \times & {\mathbb R}^3
\end{matrix}
\label{3d3d2}
\end{aligned}\end{equation}
and it is already very reminiscent of the setting of Lens space superconformal indices of class ${\mathcal S}$ theories \cite{Romelsberger:2005eg, Kinney:2005ej, Gadde:2011uv, Alday:2013rs, Razamat:2013jxa}:
\begin{equation}\begin{aligned}
\begin{matrix}
{\mbox {\textrm{$N$ fivebranes:}}} & \qquad & L(k,1)\times S^1& \times & \Sigma & & \\
& \qquad & & & \cap \\
{\mbox{\rm space-time:}} & \qquad & L(k,1)\times S^1 & \times & T^* \Sigma & & & \times & {\mathbb R}^3\\
& \qquad & \!\!\!\!\!\!\!\!\!\!\!\!\circlearrowright & &\!\!\!\!\!\!\! \circlearrowright & & & & \circlearrowright \\
{\mbox{\rm symmetries:}} & \qquad & \!\!\!\!\!\! SO(4)_E & & \!\!\!\! U(1)_N & & & & SU(2)_R
\end{matrix}.
\label{IndGeo}
\end{aligned}\end{equation}
In this geometry, one can turn on holonomies of the symmetries along the $S^1$ circle in a supersymmetric way and introduce three ``universal fugacities'' $(p,q,t)$. Then the partition function of M5-branes in this geometry is the Lens space superconformal index of the 4d ${\mathcal N}=2$ theory $T[\Sigma]$ of class ${\mathcal S}$:
\begin{equation}\label{Index}
Z_{\text{M5}}(L(k,1)\times S^1\times \Sigma, p,q,t)={{\mathcal I}}(T[\Sigma],p,q,t),
\end{equation}
where we have adopted the following convention for the index\footnote{In the literature there are several other conventions in use. The other two most commonly used conventions for universal fugacities are $(\rho, \sigma, \tau)$ which are related to our convention via $p = \sigma \tau, q = \rho \tau, t = \tau^2$, and $(t,y,v)$ with $t = \sigma^{\frac{1}{6}}\rho^{\frac{1}{6}}\tau^{\frac{1}{3}}, y = \sigma^{\frac{1}{2}} \rho^{-\frac{1}{2}}, v = \sigma^{\frac{2}{3}}\rho^{\frac{2}{3}}\tau^{-\frac{2}{3}}$.}
\begin{equation}
{{\mathcal I}} (p,q,t) = {\rm{Tr}} (-1)^F p^{\frac{1}{2} \delta_{1+}}q^{\frac{1}{2} \delta_{1-}}t^{R+r} e^{-\beta'' {\widetilde \delta}_{1\dot{-}}}.
\label{4d index}
\end{equation}
As the left-hand sides of \eqref{EVF} and \eqref{Index} are closely related, it is very tempting to ask whether the equivariant Verlinde formula for a Riemann surface $\Sigma$, parametrized by $\beta\in{\mathbb R}$, can actually be embedded as a one-parameter family inside the three-parameter space of superconformal indices of the theory $T[\Sigma]$.
The goal of this paper is to give strong evidence for the following proposal
\begin{equation}\label{Statement}
\boxed{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{ at level $k$ on $\Sigma$ for group $G$}}} \quad = \quad \boxed{\genfrac{}{}{0pt}{}{\text{Coulomb branch index}}{\text{of $T[\Sigma,^L\!G]$ on $L(k,1)\times S^1$}}}\, ,
\end{equation}
where the Coulomb branch index is the one-parameter family obtained by taking $p,q,t\rightarrow 0$ while keeping $\ft=pq/t$ fixed.
To clarify the proposed relation \eqref{Statement}, we first give a few remarks:
\begin{enumerate}
\item When we fix $\Sigma$, $G$ and $k\in{\mathbb Z}$, both sides depend on a real parameter, and the identification between them is given by $\ft=e^{-\beta}$.
\item We will assume $\mathfrak{g}=\mathrm{Lie}\,G$ is of type ADE (modulo possible abelian factors), as $T[\Sigma,^L\!G]$, with $^L\!G$ being the Langlands dual group of $G$, is not yet defined in the literature when $\mathfrak{g}$ is not simply-laced. Then we have $\mathfrak{g}=^L\!\mathfrak{g}$.
\item When $G$ is simple but not simply-connected, the left-hand side of \eqref{Statement} is only defined when $k$ annihilates $\pi_1(G)$ (under the natural ${\mathbb Z}$-action on this abelian group), and the proposal is meant for these values of $k$.
\item When $^L\!G$ is simple but not simply-connected, the theory $T[\Sigma,^L\!G]$ is not yet defined. Denote the universal cover of $^L\!G$ (which equals the universal cover of $G$ as $\mathfrak{g}$ is of type ADE) as $\widetilde{G}$. We will interpret the Coulomb index of $T[\Sigma,^L\!G]$ as a summation of indices of $T[\Sigma,\widetilde{G}]$ with insertion of all possible 't Hooft fluxes valued in $\pi_1(^L\!G)$. The insertion is along the 2d surface $S^1\times S^1_{\text{Hopf}}\subset S^1\times L(k,1)$, where $S^1_{\text{Hopf}}$ is the Hopf fiber of the Lens space $L(k,1)$.\footnote{Another natural definition of the partition function of $T[\Sigma,^L\!G]$ is as the summation over only fluxes valued in $H^2(L(k,1),\pi_1(^L\!G))={\mathbb Z}_k\otimes\pi_1(^L\!G)$, which is a subgroup of $\pi_1(^L\!G)$. If one takes this as the definition, then \eqref{Statement} is correct when $k$ also annihilates $\pi_1(^L\!G)$.} We will give a concrete argument in Section~\ref{fluxargument} using string theory for the $A_{N-1}$ series by starting with $\mathfrak{g}=\mathfrak{u}(N)$, and show that this summation naturally arises when we decouple the abelian $\mathfrak{u}(1)$ factor.
\item Conceptually, the reason why $G$ appears on the left of \eqref{Statement} while $^L\!G$ appears on the right can be understood as follows. The left-hand side of \eqref{Statement} can be viewed as certain B-model partition function of the Hitchin moduli space ${\mathcal M}_H(\Sigma,G)$ \cite{Hitchin:1986vp}. Mirror symmetry will produce the Hitchin moduli space associated with the dual group ${\mathcal M}_H(\Sigma,^L\!G)$ \cite{hausel2003mirror, Donagi:2006cr}, and as we will argue in later sections, the corresponding A-model partition function of ${\mathcal M}_H(\Sigma,^L\!G)$ can be identified with the right-hand side of \eqref{Statement}.
\end{enumerate}
To further illustrate \eqref{Statement}, we will present the simplest example where $k=1$ and $G$ is simply connected. The equivariant Verlinde formula can be obtained using the TQFT structure studied in \cite{Andersen:2016hoj}
\begin{equation}\label{EVFk=1}
\dim_\beta {\mathcal H}_{\text{CS}}(\Sigma,G_{\mathbb{C}},k=1)=\frac{|{\mathcal Z}(G)|^g}{\left[\prod_{i=1}^{\mathrm{rank}\,G}(1-\ft^{d_i})^{h_i}\right]^{g-1}} \ ,
\end{equation}
where $|{\mathcal Z}(G)|$ is the order of the center of group $G$, $d_i$'s are degrees of the fundamental invariants of $\mathfrak{g}=\mathrm{Lie}\,G$, and $h_i$'s are the dimension of the space of $d_i$-differentials on $\Sigma$. The reader may have already recognized that \eqref{EVFk=1} is exactly the Coulomb branch index of $T[\Sigma,G]$ on $L(k=1,1)=S^3$ times $|{\mathcal Z}(G)|^g$. As we will explain in great detail later, the $|{\mathcal Z}(G)|^g$ factor comes from summation over 't Hooft fluxes, which are labeled precisely by elements in ${\mathcal Z}(G)\simeq \pi_1(^L\!G)$. The $g$ power morally originates from the fact that there are $g$ ``independent gauge nodes'' in the theory $T[\Sigma,G]$ (\textit{i.e.}~one copy of $G$ for each handle of $\Sigma$). So \eqref{EVFk=1} agrees with the Coulomb index of $T[\Sigma,^L\!G]$.
For $k>1$, the relation \eqref{Statement} becomes more non-trivial, and each flux sector generally gives a different contribution. Even if one sets $\ft=0$, the identification of the Verlinde algebra with the algebra of allowed 't Hooft fluxes in $T[\Sigma,G]$ is novel.
This paper is organized as follows. In section \ref{sec:EVACBI}, we examine more closely the two fivebranes systems \eqref{3d3d1} and \eqref{IndGeo}, and give arguments supporting the relation \eqref{Statement} between the equivariant Verlinde formula and the Coulomb branch index. In section \ref{sec:SU2}, after reviewing basic facts and ingredients of the index, we verify our proposals by reproducing the already known $SU(2)$ equivariant Verlinde algebra from the Coulomb branch indices of class ${\mathcal S}$ theories on the Lens space. We will see that after an appropriate normalization, the TQFT algebras on both sides are exactly identical, and so are the partition functions. In section \ref{sec:SU3}, we will use the proposed relation \eqref{Statement} to derive the $SU(3)$ equivariant Verlinde algebra from the index of $T[\Sigma,SU(3)]$ computed via the Argyres-Seiberg duality. Careful analysis of the results reveals interesting geometry of the Hitchin moduli space ${\mathcal M}_H(\Sigma, SU(3))$.
\section{Equivariant Verlinde algebra and Coulomb branch index}\label{sec:EVACBI}
One obvious difference between the two brane systems \eqref{3d3d1} and \eqref{IndGeo} is that the $S^1$ factor appears on different sides of the correspondence. From the geometry of \eqref{3d3d1}, one would expect that
\begin{equation}\label{Statement2}
{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{ at level $k$ on $\Sigma$}}} \quad = \quad {\genfrac{}{}{0pt}{}{\text{Partition function of}}{\text{$T[\Sigma\times S^1]$ on $L(k,1)$}}}\,.
\end{equation}
In particular, there should be no dependence on the size of the $S^1$, so it is more natural to use ``3d variables'':
\begin{equation}\label{4d3dConvert}
t=e^{L\beta-(b+b^{-1})L/r},\quad p=e^{-bL/r},\quad q=e^{-b^{-1}L/r}.
\end{equation}
Here, $L$ is the size of the $S^1$ circle, $b$ is the squashing parameter of $L(k,1)_b$, $r$ measures the size of the Seifert base $S^2$, and $\beta$ parametrizes the ``canonical mass deformation'' of the 3d ${\mathcal N}=4$ theory (in our case $T[\Sigma\times S^1]$) into 3d ${\mathcal N}=2$. The latter is defined as follows on flat space. The 3d ${\mathcal N}=4$ theory has R-symmetry $SU(2)_N\times SU(2)_R$ and we can view it as a 3d ${\mathcal N}=2$ theory with the R-symmetry group being the diagonal subgroup $U(1)_{N+R}\subset U(1)_N\times U(1)_R$ with $U(1)_N$ and $U(1)_R$ being the Cartans of $SU(2)_N$ and $SU(2)_R$ respectively. The difference $U(1)_{N-R} = U(1)_N - U(1)_R$ of the original R-symmetry group is now a flavor symmetry $U(1)_\beta$ and we can weakly gauge it to introduce real masses proportional to $\beta$. This is exactly how the ``equivariant parameter'' in \cite{equivariant}, denoted by the same letter $\beta$, is defined.\footnote{More precisely, the dimensionless combination $\beta L$ is used. From now on, we will rename $\beta_{\text{new}}=\beta_{\text{old}} L$ and $r_{\text{new}}=r_{\text{old}}/L$ to make all 3d variables dimensionless.}
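That the squashing $b$ and the radius $r$ drop out of the combination $pq/t$, leaving only $e^{-\beta L}$, follows directly from \eqref{4d3dConvert}; a quick numerical check:

```python
import math

def fugacities(beta, b, r, L=1.0):
    """The 3d variables defined in the equation above."""
    t = math.exp(L * beta - (b + 1.0 / b) * L / r)
    p = math.exp(-b * L / r)
    q = math.exp(-L / (b * r))
    return p, q, t

# The squashing parameter b and the radius r drop out of p*q/t,
# leaving exp(-beta*L), i.e. e^{-beta} in the dimensionless variables:
for b, r in [(1.0, 0.3), (2.5, 0.7), (0.4, 1.9)]:
    p, q, t = fugacities(beta=0.8, b=b, r=r)
    assert abs(p * q / t - math.exp(-0.8)) < 1e-12
```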
In \cite{equivariant}, it was observed that much could be learned about the brane system \eqref{3d3d1} and the Hilbert space of complex Chern-Simons theory by preserving supersymmetry along the Lens space $L(k,1)$ in a different way, namely by doing partial topological twist instead of deforming the supersymmetry algebra. Geometrically, this corresponds to combining the last ${\mathbb R}^3$ factor in \eqref{3d3d2} with $L(k,1)$ to form $T^*L(k,1)$ regarded as a local Calabi-Yau 3-fold with $L(k,1)_b$ being a special Lagrangian submanifold:
\begin{equation}\begin{aligned}
\begin{matrix}
{\mbox{\textrm{$N$ fivebranes:}}} & \qquad & L(k,1)_b & \times & \Sigma & \times &S^1 \\
& \qquad & \cap & & \cap \\
{\mbox{\rm space-time:}} & \qquad & T^*L(k,1)_b & \times & T^* \Sigma & \times & S^1 \\
& \qquad & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\circlearrowright & &\!\!\!\!\!\!\! \circlearrowright \\
{\mbox{\rm symmetries:}} & \qquad & \!\!\!\!\!\!\!\!\!\!\!\!\!\! U(1)_R & & \!\!\!\! U(1)_N&.
\end{matrix}
\label{3d3dTwist}
\end{aligned}\end{equation}
In this geometry, $U(1)_N$ acts by rotating the cotangent fiber of $\Sigma$, while $U(1)_R$ rotates the cotangent fiber of the Seifert base $S^2$ of the Lens space.\footnote{Note, $U(1)_N$ is always an isometry of the system whereas the $U(1)_R$ is only an isometry in certain limits where the metric on $L(k,1)$ is singular ({\it e.g.}~when $L(k,1)$ is viewed a small torus fibered over a long interval). However, if we are only interested in questions that have no dependence on the metric on $L(k,1)$, we can always assume the $U(1)_R$ symmetry to exist. For example, the theory $T[L(k,1)]$, or in general $T[M_3]$ for any Seifert manifolds $M_3$ should enjoy an extra flavor symmetry $U(1)_\beta=U(1)_N-U(1)_R$.} This point of view enables one to derive the equivariant Verlinde formula as it is now the partition function of the {\it supersymmetric} theory $T[L(k,1),\beta]$ on $\Sigma\times S^1$.
Although the geometric setting \eqref{3d3dTwist} appears to be different from the original one \eqref{3d3d1}, there is substantial evidence that they are related. For example, the equivariant Verlinde formula can be defined and computed on both sides and they agree. Namely, the partition function in the twisted background \eqref{3d3dTwist} is given by the partition function of $T[L(k,1)]$ on $\Sigma$, while the partition function under the background \eqref{3d3d1} is given by an equivariant integral over the Hitchin moduli space, and they are proven to be equal in \cite{Andersen:2016hoj}. Moreover, the modern viewpoint on supersymmetry in curved backgrounds is that the deformed supersymmetry is an extension of topological twisting, see {\textit{e.g.}}~\cite{Closset:2014uda}. Therefore, one should expect that the equivariant Verlinde formula at level $k$ could be identified with a particular slice of the four-parameter family of 4d indices $(k,p,q,t)$ (or in 3d variables $(k,\beta,b,r)$). And this particular slice should have the property that the index has no dependence on the geometry of $L(k,1)_b$. Since $T[L(k,1)]$ is derived in the limit where $L(k,1)$ shrinks, one should naturally take the $r\rightarrow 0$ limit for the superconformal index. In terms of the 4d parameters, that corresponds to
\begin{equation}\label{CBI0}
p,q,t\rightarrow 0.
\end{equation}
This is known as the Coulomb branch limit. In this particular limit, the only combination of $(k,p,q,t)$ independent of $b$ and $r$ that one could possibly construct is
\begin{equation}\label{CBI1}
\ft=\frac{pq}{t}=e^{-\beta},
\end{equation}
and this is precisely the parameter used in the Coulomb branch index. Therefore, one arrives at the following proposal:
\begin{equation}\label{StatementUN}
\boxed{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{of $U(N)_k$ on $\Sigma$}}} \quad = \quad \boxed{\genfrac{}{}{0pt}{}{\text{Coulomb branch index}}{\text{of $T[\Sigma,U(N)]$ on $L(k,1)\times S^1$}}}\ .
\end{equation}
This relation should be more accurately viewed as the natural isomorphism between two TQFT functors
\begin{equation}
Z_{\text{EV}}=Z_{\text{CB}}.
\end{equation}
At the level of partition function on a closed Riemann surface $\Sigma$, it is the equality between the equivariant Verlinde formula and the Coulomb index of $T[\Sigma]$
\begin{equation}
Z_{\text{EV}}(\Sigma)=Z_{\text{CB}}(\Sigma).
\end{equation}
Going one dimension lower, we also have an isomorphism between the Hilbert spaces of the two TQFTs on a circle:
\begin{equation}
{\mathcal H}_{\text{EV}}=Z_{\text{EV}}(S^1)={\mathcal H}_{\text{CB}}=Z_{\text{CB}}(S^1).
\end{equation}
As these underlying vector spaces set the stage for any interesting TQFT algebra, the equality above is the most fundamental one and needs to be established first. We now show how one can canonically identify the two seemingly different Hilbert spaces ${\mathcal H}_{\text{EV}}$ and ${\mathcal H}_{\text{CB}}$.
\subsection{${\mathcal H}_{\text{EV}}$ vs.~${\mathcal H}_{\text{CB}}$}
In the equivariant Verlinde TQFT, operator-state correspondence tells us that states in ${\mathcal H}_{\text{EV}}$ are in one-to-one correspondence with local operators. Since these local operators come from codimension-2 ``monodromy defects'' \cite{Gukov:2006jk} (see also \cite{Gang:2015wya} in the context of 3d-3d correspondence) in $T[L(k,1)]$ supported on the circle fibers of $\Sigma\times S^1$, they are labeled by
\begin{equation}
\mathbf{a}=\mathrm{diag}\{a_1,a_2,a_3,\ldots,a_N\}\in \mathfrak{u}(N)
\end{equation}
together with a compatible choice of Levi subgroup $\mathfrak{L}\subset U(N)$. In the equivariant Verlinde TQFT, one only needs to consider maximal defects with $\mathfrak{L}=U(1)^N$ as they are enough to span the finite-dimensional ${\mathcal H}_{\text{EV}}$. The set of continuous parameters $\mathbf{a}$ is acted upon by the affine Weyl group $W_{\text{aff}}$ and therefore can be chosen to live in the Weyl alcove:
\begin{equation}
1> a_1\geq a_2\geq\ldots\geq a_N\geq 0.
\end{equation}
In the presence of a Chern-Simons term at level $k$, gauge invariance imposes the following integrality condition
\begin{equation}\label{Integrality}
e^{2\pi i k\,\mathbf{a}}=\mathbf{1}.
\end{equation}
We can then define
\begin{equation}
\mathbf{h}=k\mathbf{a}
\end{equation}
whose elements are now integers in the range $[0,k)$. The condition \eqref{Integrality} is also the condition for the adjoint orbit
\begin{equation}
{\mathcal O}_{\mathbf{h}}=\{ghg^{-1}|g\in U(N)\}
\end{equation}
to be quantizable. Via the Borel-Weil-Bott theorem, quantizing ${\mathcal O}_{\mathbf{h}}$ gives a representation of $U(N)$ labeled by a Young tableau $\vec{h}=(h_1,h_2,\ldots,h_N)$. So, we can also label the states in ${\mathcal H}_{\text{EV}}(S^1)$ by representations of $U(N)$ or, more precisely, integrable representations of the loop group of $U(N)$ at level $k$. In other words, the Hilbert space of the equivariant Verlinde TQFT is the same as that of the usual Verlinde TQFT (better known as the $G/G$ gauged WZW model). This is, of course, what one expects as the Verlinde algebra corresponds to the $\ft=0$ limit of the equivariant Verlinde algebra, and the effect of $\ft$ is to modify the algebra structure without changing ${\mathcal H}_{\text{EV}}$. In particular, the dimension of ${\mathcal H}_{\text{EV}}$ is independent of the value of $\ft$.
One could also use the local operators from the dimensional reduction of Wilson loops as the basis for ${\mathcal H}_{\text{EV}}(S^1)$. In pure Chern-Simons theory, the monodromy defects are the same as Wilson loops. In $T[L(k,1),\beta]$ with $\beta$ turned on, these two types of defects are still linearly related by a transformation matrix, which is no longer diagonal. One of the many reasons that we prefer the maximal monodromy defects is because, under the correspondence, they are mapped to more familiar objects on the Coulomb index side. To see this, we first notice that the following brane system
\begin{equation}\begin{aligned}
\begin{matrix}
{\mbox{\textrm{ $N$ fivebranes:}}}& L(k,1)_b & \times & \Sigma & \times &S^1 \\
& & & \cap \\
{\mbox{\rm space-time:}}& L(k,1)_b & \times & T^* \Sigma & \times & S^1 & \times & {\mathbb R}^3\\
& & & \cup \\
{\mbox{\textrm{$n\times N$ ``defect'' fivebranes:}}} & L(k,1)_b &\times& T^*|_{p_i}\Sigma & \times & S^1
\end{matrix}
\label{3d3dDefects}
\end{aligned}\end{equation}
gives $n$ maximal monodromy defects at $(p_1,p_2,\ldots,p_n)\in\Sigma$. If one first compactifies the brane system above on $\Sigma$, one obtains the 4d ${\mathcal N}=2$ class ${\mathcal S}$ theory $T[\Sigma_{g,n}]$ on $L(k,1)_b\times S^1$. This theory has flavor symmetry $U(N)^n$ and one can consider sectors of the theory with non-trivial flavor holonomies $\{\exp[\mathbf{a}_i],i=1,2,\ldots,n\}$ of $U(N)^n$ along the Hopf fiber. The $L(k,1)$-Coulomb branch index of $T[\Sigma_{g,n}]$ depends only on $\{\mathbf{a}_i,i=1,2,\ldots,n\}$ and therefore states in the Hilbert space ${\mathcal H}_{\text{CB}}$ of the Coulomb branch index TQFT associated to a puncture on $\Sigma$ are labeled by a $U(N)$ holonomy $\mathbf{a}$. (Notice that, for other types of indices, the states are in general also labeled by a continuous parameter corresponding to the holonomy along the $S^1$ circle and the 2d TQFT for them is in general infinite-dimensional). As the Hopf fiber is the generator of $\pi_1(L(k,1))={\mathbb Z}_k$, one has
\begin{equation}\label{Integrality2}
e^{2\pi i k\mathbf{a}}=\mathrm{Id}.
\end{equation}
This is exactly the same as the condition \eqref{Integrality}. In fact, we have even used the same letter $\mathbf{a}$ in both equations, anticipating the connection between the two. What we have found is the canonical way of identifying the two sets of basis vectors in the two Hilbert spaces
\begin{equation}\label{StatementStates}
\begin{matrix}
{\mathcal H}_{\text{EV}}^{\otimes n}& & & &{\mathcal H}_{\text{CB}}^{\otimes n}\\
\rotatebox[origin=c]{90}{$\in$}& & & &\rotatebox[origin=c]{90}{$\in$}\\
\boxed{\genfrac{}{}{0pt}{}{\text{Monodromy defects on $\Sigma_{g,n}\times S^1$}}{\text{in $GL(N,{\mathbb C})_k$ complex Chern-Simons theory}}}& \quad &=& \quad &\boxed{\genfrac{}{}{0pt}{}{\text{Flavor holonomy sectors}}{\text{of $T[\Sigma_{g,n}\times S^1,U(N)]$ on $L(k,1)$}}}\end{matrix}\ .
\end{equation}
And, of course, this relation is expected as both sides are labeled by flat connections of the Chan-Paton bundle associated to the coincident $N$ ``defect'' M5-branes in \eqref{3d3dDefects}. Using the relation \eqref{StatementStates}, henceforth we identify ${\mathcal H}_{\text{EV}}$ and ${\mathcal H}_{\text{CB}}$.
\subsection{The statement for a general group}
The proposed relation \eqref{Statement} between the $U(N)$ equivariant Verlinde formula and the Coulomb branch index for $T[\Sigma,U(N)]$ can be generalized to other groups. First, one could consider decoupling the center of mass degree of freedom for all coincident stacks of M5-branes. However, there are at least two different ways of achieving this. Namely, one could get rid of the $\mathfrak{u}(1)$ part of $\mathbf{a}$ by either
\begin{enumerate}
\item subtracting the trace part from $\mathbf{a}$:
\begin{equation}
\mathbf{a}_{\text{SU}}=\mathbf{a}-\frac{1}{N}\mathrm{tr}\, \mathbf{a},
\end{equation}
\item or forcing $\mathbf{a}$ to be traceless by imposing
\begin{equation}
a_N=-\sum_{i=1}^{N-1}a_i
\end{equation}
to get
\begin{equation}
\mathbf{a}_{\text{PSU}}=\mathrm{diag}(a_1,a_2,\ldots,a_{N-1},-\sum_{i=1}^{N-1}a_i).
\end{equation}
\end{enumerate}
Naively, one may expect the two different approaches to be equivalent. However, as we are considering Lens space index, the global structure of the group comes into play. Indeed, the integrality condition \eqref{Integrality} becomes different:
\begin{equation}\label{IntegralPSU}
e^{2\pi i k\cdot \mathbf{a}_{\text{SU}}}\in {\mathbb Z}_N={\mathcal Z}(SU(N))
\end{equation}
while
\begin{equation}\label{IntegralSU}
e^{2\pi i k\cdot \mathbf{a}_{\text{PSU}}}=\mathbf{1}={\mathcal Z}(PSU(N)).
\end{equation}
Here $PSU(N)=SU(N)/{\mathbb Z}_N$ has trivial center but a non-trivial fundamental group. As a consequence of having different integrality conditions, one can get either the $SU(N)$ or the $PSU(N)$ Verlinde formula. In the first case, the claim is
\begin{equation}\label{StatementSU}
\boxed{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{of $SU(N)_k$ on $\Sigma$}}} \quad = \quad \boxed{\genfrac{}{}{0pt}{}{\text{Coulomb branch index}}{\text{of $T[\Sigma,PSU(N)]$ on $L(k,1)\times S^1$ }}}\ .
\end{equation}
The meaning of $T[\Sigma,PSU(N)]$ and the way to compute its Coulomb branch index will be discussed shortly. On the other hand, if one employs the second method to decouple the $U(1)$ factor, one finds a similar relation with the roles of $SU(N)$ and $PSU(N)$ reversed:
\begin{equation}\label{StatementPSU}
\boxed{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{of $PSU(N)_k$ on $\Sigma$}}} \quad = \quad \boxed{\genfrac{}{}{0pt}{}{\text{Coulomb branch index}}{\text{of $T[\Sigma,SU(N)]$ on $L(k,1)\times S^1$}}}\ .
\end{equation}
Before deriving these statements, we first remark that they are all compatible with \eqref{Statement} for general $G$,
which we record again below:
\begin{equation}\label{StatementG}
\boxed{\genfrac{}{}{0pt}{}{\text{Equivariant Verlinde formula}}{\text{of $G_k$ on $\Sigma$}}} \quad = \quad \boxed{\genfrac{}{}{0pt}{}{\text{Coulomb branch index}}{\text{of $T[\Sigma,^L\! G]$ on $L(k,1)\times S^1$}}}\ ,
\end{equation}
since $^LU(N)=U(N)$ and $^LSU(N)=PSU(N)$. This general proposal also gives a geometric/physical interpretation of the Coulomb index of $T[\Sigma,G]$ on $L(k,1)$ by relating it to the quantization of the Hitchin moduli space ${\mathcal M}_H(\Sigma,^L\!\! G)$. In fact, one can make an even more general conjecture for all 4d ${\mathcal N}=2$ superconformal theories (not necessarily of class ${\mathcal S}$):
\begin{equation}\label{StatementTheory}
\boxed{\genfrac{}{}{0pt}{}{\text{$L(k,1)$ Coulomb index}}{\text{of a 4d ${\mathcal N}=2$ superconformal theory ${\mathcal T}$}}} \quad \overset{?}{=} \quad \boxed{\genfrac{}{}{0pt}{}{\text{Graded dimension of Hilbert space}}{\text{from quantization of $(\widetilde{{\mathcal M}}_{{\mathcal T}},k\omega_I)$}}}\ .
\end{equation}
Here, $\widetilde{{\mathcal M}}_{{\mathcal T}}$ is the SYZ mirror \cite{Strominger:1996it} of the Coulomb branch ${\mathcal M}_{{\mathcal T}}$ of ${\mathcal T}$ on ${\mathbb R}^3\times S^1$.
Indeed, ${\mathcal M}_{{\mathcal T}}$ has the structure of a torus fibration:
\begin{equation}
\begin{matrix}
\mathbf{T}^{2d} & \hookrightarrow & {\mathcal M}_{{\mathcal T}} \\
& & \downarrow \\
& & {\mathcal B} \end{matrix}.
\end{equation}
Here ${\mathcal B}$ is the $d$-(complex-)dimensional Coulomb branch of ${\mathcal T}$ on ${\mathbb R}^4$, and $\mathbf{T}^{2d}$ is the $2d$-torus parametrized by the holonomies of the low energy $U(1)^d$ gauge group along the spatial circle $S^1$ and the expectation values of $d$ dual photons. One can perform T-duality on $\mathbf{T}^{2d}$ to obtain the mirror manifold\footnote{In many cases, the mirror manifold $\widetilde{{\mathcal M}}_{{\mathcal T}}={\mathcal M}_{{\mathcal T}'}$ is also the 3d Coulomb branch of a theory ${\mathcal T}'$ obtained by replacing the gauge group of ${\mathcal T}$ with its Langlands dual. One can easily see that ${\mathcal T}'$ obtained this way always has the same 4d Coulomb branch ${\mathcal B}$ as ${\mathcal T}$.} $\widetilde{{\mathcal M}}_{{\mathcal T}}$
\begin{equation}
\begin{matrix}
\widetilde{\mathbf{T}}^{2d} & \hookrightarrow & \widetilde{{\mathcal M}}_{{\mathcal T}} \\
& & \downarrow \\
& & {\mathcal B} \end{matrix}.
\end{equation}
The dual torus $\widetilde{\mathbf{T}}^{2d}$ is a K\"ahler manifold equipped with a K\"ahler form $\omega$, which extends to $\omega_I$, one of the three K\"ahler forms $(\omega_I,\omega_J,\omega_K)$ of the hyper-K\"ahler manifold $\widetilde{{\mathcal M}}_{{\mathcal T}}$. Part of the R-symmetry that corresponds to the $U(1)_N-U(1)_R$ subgroup inside the $SU(2)_R\times U(1)_N$ R-symmetry group of ${\mathcal T}$ becomes a $U(1)_{\beta}$ symmetry of $\widetilde{{\mathcal M}}_{{\mathcal T}}$.
Quantizing $\widetilde{{\mathcal M}}_{{\mathcal T}}$ with respect to the symplectic form $k\omega_I$ yields a Hilbert space ${\mathcal H}({\mathcal T},k)$. Because $\widetilde{{\mathcal M}}_{{\mathcal T}}$ is non-compact, the resulting Hilbert space ${\mathcal H}({\mathcal T},k)$ is infinite-dimensional. However, because the fixed point set of $U(1)_\beta$ is compact and is contained in the nilpotent cone (= the fiber of $\widetilde{{\mathcal M}}_{{\mathcal T}}$ at the origin of ${\mathcal B}$), the following graded dimension is free of any divergences and can be computed with the help of the equivariant index theorem
\begin{equation}\label{CBIQuantum}
\dim_\beta{\mathcal H}({\mathcal T},k)=\sum_{m=0}^\infty \ft^m \dim {\mathcal H}^{m}({\mathcal T},k)=\int_{\widetilde{{\mathcal M}}_{{\mathcal T}}}\mathrm{ch}({\mathcal L}^{\otimes k},\beta)\wedge\mathrm{Td}(\widetilde{{\mathcal M}}_{{\mathcal T}},\beta).
\end{equation}
Here $\ft=e^{-\beta}$ is identified with the parameter of the Coulomb branch index, ${\mathcal L}$ is a line bundle whose curvature is $\omega_I$, and ${\mathcal H}^{m}({\mathcal T},k)$ is the weight-$m$ component of ${\mathcal H}({\mathcal T},k)$ with respect to the $U(1)_\beta$ action. In obtaining \eqref{CBIQuantum}, we have used the identification ${\mathcal H}({\mathcal T},k)=H^*(\widetilde{{\mathcal M}}_{{\mathcal T}},{\mathcal L}^{\otimes k})$ from geometric quantization.\footnote{One expects the higher cohomology groups to vanish, since ${\mathcal L}$ is ample on each generic fiber $\widetilde{\mathbf{T}}^{2d}$. For Hitchin moduli space, the vanishing of higher cohomology for ${\mathcal L}^{\otimes k}$ is proven in \cite{2016arXiv160801754H,Andersen:2016hoj}.}
Now let us give a heuristic argument for why \eqref{CBIQuantum} computes the Coulomb branch index.
The Lens space $L(k,1)$ can be viewed as a torus fibered over an interval. Following \cite{Gukov:2008ve, Gukov:2010sw, Nekrasov:2010ka} and \cite{Dimofte:2011jd}, one can identify the Coulomb branch index with the partition function of a topological A-model living on a strip, with ${\mathcal M}_{{\mathcal T}}$ as the target space. The boundary condition at each end of the strip gives a certain brane in ${\mathcal M}_{{\mathcal T}}$. One can then apply mirror symmetry and turn the system into a B-model with $\widetilde{{\mathcal M}}_{{\mathcal T}}$ as the target space. Inside $\widetilde{{\mathcal M}}_{{\mathcal T}}$, there are two branes $\mathfrak{B}_1$ and $\mathfrak{B}_2$ specifying the boundary conditions at the two endpoints of the spatial interval. The partition function for this B-model computes the dimension of the $\mathrm{Hom}$-space between the two branes:
\begin{equation}
Z_{\text{B-model}}=\dim \mathrm{Hom}(\mathfrak{B}_1,\mathfrak{B}_2).
\end{equation}
Now $\mathfrak{B}_1$ and $\mathfrak{B}_2$ are objects in the derived category of coherent sheaves on $\widetilde{{\mathcal M}}_{{\mathcal T}}$ and the quantity above can be computed using the index theorem. The equivariant version is
\begin{equation}
Z_{\text{B-model},\beta}=\dim_\beta \mathrm{Hom}(\mathfrak{B}_1,\mathfrak{B}_2)=\int_{\widetilde{{\mathcal M}}_{{\mathcal T}}}\mathrm{ch}(\mathfrak{B}_1^*,\beta)\wedge\mathrm{ch}(\mathfrak{B}_2,\beta)\wedge\mathrm{Td}(\widetilde{{\mathcal M}}_{{\mathcal T}},\beta).
\end{equation}
We can choose the duality frame such that $\mathfrak{B}_1={\mathcal O}$ is the structure sheaf. Then $\mathfrak{B}_2$ is obtained by acting $T^k\in SL(2,{\mathbb Z})$ on $\mathfrak{B}_1$. A simple calculation shows $\mathfrak{B}_2={\mathcal L}^{\otimes k}$. So the Coulomb branch index indeed equals \eqref{CBIQuantum}, confirming the proposed relation \eqref{StatementTheory} (see also \cite{Fredrickson:2017yka} for a test of this relation for many Argyres-Douglas theories).
\subsubsection{$SU(N)$ vs.~$PSU(N)$}
Now let us explain why \eqref{StatementSU} and \eqref{StatementPSU} are expected. Both orbits, ${\mathcal O}_{\mathbf{a_{\text{SU}}}}$ and ${\mathcal O}_{\mathbf{a_{\text{PSU}}}}$, are quantizable and give rise to representations of $\mathfrak{su}(N)$. However, as the integrality conditions are different, there is a crucial difference between the two classes of representations that one can obtain from $\mathbf{a}_{\text{SU}}$ and $\mathbf{a}_{\text{PSU}}$. Namely, one can get all representations of $SU(N)_k$ from ${\mathcal O}_{\mathbf{a_{\text{SU}}}}$ but only representations\footnote{In our conventions, representations of $PSU(N)_k$ are those representations of $SU(N)_k$ invariant under the action of the center. There exist different conventions in the literature and one is related to ours by $k'=\lfloor k/N\rfloor$. Strictly speaking, when $N \nmid k$, the 3d Chern-Simons theory is not invariant under large gauge transformation and doesn't exist. Nonetheless, the 2d equivariant Verlinde algebra is still well defined and matches the algebra from the Coulomb index side.} of $PSU(N)_k$ from ${\mathcal O}_{\mathbf{a_{\text{PSU}}}}$. This can be directly verified as follows.
For either $\mathbf{a}_{\text{SU}}$ or $\mathbf{a}_{\text{PSU}}$, quantizing ${\mathcal O}_{\mathbf{a}}$ gives a representation of $SU(N)$ with the highest weight\footnote{Sometimes it is more convenient to use a different convention for the highest weight
\begin{equation}
\vec{\lambda}=(h_1-h_2,h_2-h_3,\ldots,h_{N-1}-h_N)\equiv k\cdot (a_1-a_2,a_2-a_3,\ldots,a_{N-1}-a_N)\pmod N.
\end{equation}
}
\begin{equation}
\vec{\mu}=(h_1-h_N,h_2-h_N,\ldots,h_{N-1}-h_N)\equiv k(a_1-a_N,a_2-a_N,\ldots,a_{N-1}-a_N) \pmod N.
\label{repMu}
\end{equation}
The corresponding Young tableau consists of $N-1$ rows with $h_i-h_N$ boxes in the $i$-th row. The integrality condition \eqref{IntegralPSU} simply says that $\vec{\mu}$ is integral. With no other constraints imposed, one can get all representations of $SU(N)$ from $\mathbf{a}_{\text{SU}}$. On the other hand, the condition \eqref{IntegralSU} requires the total number of boxes to be a multiple of $N$,
\begin{equation}
\sum_{i=1}^{N-1} \mu_i=N\cdot\sum_{i=1}^{N-1}a_i \equiv 0 \pmod N,
\end{equation}
restricting us to those representations of $SU(N)$ on which the center ${\mathbb Z}_N$ acts trivially. These are precisely the representations of $PSU(N)$.
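As a quick illustration of the box-counting criterion (our own sketch, with $\vec{\mu}$ written in the partition-type basis of \eqref{repMu}, so the $N$-ality is just the total box count mod $N$):

```python
def n_ality(mu, N):
    """Total number of Young-tableau boxes mod N; the center Z_N acts on the
    SU(N) irrep with highest weight mu by the phase exp(2*pi*1j*n_ality/N)."""
    return sum(mu) % N

def is_psu_rep(mu, N):
    """PSU(N) = SU(N)/Z_N representations: those on which Z_N acts trivially."""
    return n_ality(mu, N) == 0

N = 3
fundamental = [1, 0]   # one box:     not a PSU(3) representation
antifund    = [1, 1]   # two boxes:   not a PSU(3) representation
adjoint     = [2, 1]   # three boxes: a PSU(3) representation

assert not is_psu_rep(fundamental, N) and not is_psu_rep(antifund, N)
assert is_psu_rep(adjoint, N)
```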
What we have seen is that in the first way of decoupling $U(1)$, one arrives at the equivariant Verlinde algebra for $SU(N)_k$, while the second option leads to the $PSU(N)_k$ algebra. Then, what happens on the Lens space side?
\subsubsection{$T[\Sigma,SU(N)]$ vs.~$T[\Sigma,PSU(N)]$ }\label{fluxargument}
In the second approach of removing the center, the flavor $U(N)$-bundles become well-defined $SU(N)$-bundles on $L(k,1)$ and decoupling all the central $U(1)$'s on the Lens space side simply means computing the Lens space Coulomb branch index of $T[\Sigma,SU(N)]$. So we arrive at the equivalence \eqref{StatementPSU} between $PSU(N)_k$ equivariant Verlinde algebra and the algebra of the Coulomb index TQFT for $SU(N)$. On the other hand, in the first way of decoupling the $U(1)$, the integrality condition
\begin{equation}
e^{2\pi i k\cdot \mathbf{a}}=1
\end{equation}
is not satisfied for $\mathbf{a}_{\text{SU}}$. And as in \eqref{IntegralPSU}, the right-hand side can be an arbitrary element in the center ${\mathbb Z}_N$ of $SU(N)$. In other words, after using the first method of decoupling the central $U(1)$, the $U(N)$-bundle over $L(k,1)$ becomes a $PSU(N)=SU(N)/{\mathbb Z}_N$-bundle. Another way to see this is by noticing that for $\exp[2\pi i\mathbf{a}]\in{\mathcal Z}(SU(N))$,
\begin{equation}
\mathbf{a}_{\text{SU}}=\mathbf{a}-\frac{1}{N}\mathrm{tr}\, \mathbf{a}= 0.
\end{equation}
This tells us that the $U(1)$ quotient done in this way has collapsed the ${\mathbb Z}_N$ center of $U(N)$, giving us not a well-defined $SU(N)$-bundle but a $PSU(N)$-bundle. Therefore, it is very natural to give the name ``$T[\Sigma,PSU(N)]$'' to the resulting theory living on $L(k,1)\times S^1$, as the class ${\mathcal S}$ theory $T[\Sigma,G]$ currently doesn't have a proper definition in the literature if $G$ is not simply-connected.
For a general group $G$, one natural definition of the path integral of $T[\Sigma,G]$ on $L(k,1)\times S^1$ is as the path integral of $T[\Sigma,\widetilde{G}]$ with a summation over all possible 't Hooft fluxes labeled by $\pi_1(G)\subset{\mathcal Z}(\widetilde{G})$ along $L(k,1)$, where $\widetilde{G}$ is the universal cover of $G$ (see {\textit{e.g.}}~\cite[Section 4.1]{Witten:2009at} for a nice explanation from the 6d viewpoint). This amounts to summing over different topological types of $G$-bundles over $L(k,1)$, classified by $H^2(L(k,1),\pi_1(G))=\pi_1(G)\otimes{\mathbb Z}_k$.
Although this is a valid definition, it is not the right one for \eqref{Statement} to work for general $k$. This is clear from the quantization condition \eqref{IntegralPSU}, which tells us that, in order to get the $SU(N)$ Verlinde algebra, the Lens index of $T[\Sigma,PSU(N)]$ should be interpreted in the following way: in the process of assembling $\Sigma$ from pairs of pants and cylinders, we should sum over 't Hooft fluxes valued in the \emph{full} fundamental group $\pi_1(PSU(N))={\mathbb Z}_N$, as opposed to ${\mathbb Z}_N\otimes{\mathbb Z}_k$, for each gauge group of the $T[\Sigma,SU(N)]$ theory associated with a cylinder. But in general ${\mathbb Z}_N\otimes{\mathbb Z}_k$ is only a proper subgroup of ${\mathbb Z}_N$, unless $N$ divides $k$.
However, general flux backgrounds can be realized by inserting surface operators (which we will refer to as ``flux tubes'') with central monodromy whose Levi subgroup is the entire group \cite{Gukov:2006jk}. In the spatial directions, the flux tube lives on an $S^1\subset L(k,1)$ that has linking number 1 with the Hopf fiber. So we can choose this $S^1$ to be a particular Hopf fiber $S^1_{\text{Hopf}}$. The amount of flux is labeled by an element in $\pi_1(G)\subset{\mathcal Z}(\widetilde{G})$. Geometrically, this construction amounts to removing a single Hopf fiber from $L(k,1)$, leading to compactly supported cohomology $H^2_c(L(k,1)\backslash S^1_{\text{Hopf}},{\mathbb Z})={\mathbb Z}$ that is freely generated. Then $H^2_c\left(L(k,1)\backslash S^1_{\text{Hopf}},\pi_1(G)\right)=\pi_1(G)$, and the flux can take values in the whole $\pi_1(G)$.
When $G$ is a group of adjoint type (\textit{i.e.}~${\mathcal Z}(G)$ is trivial), we will call the index of $T[\Sigma,G]$ defined this way the ``full Coulomb branch index'' of $T[\Sigma, \widetilde{G}]$, which sums over \textit{all} elements of $\pi_1(G)={\mathcal Z}(\widetilde{G})$. As it contains the most information about the field theory, it is also the most interesting in the whole family associated to the Lie algebra $\mathfrak{g}$. This is not at all surprising as on the other side of the duality, the $\widetilde{G}$ equivariant Verlinde algebra involves all representations of $\mathfrak{g}$ and is the most interesting one among its cousins.
As for the $A_{N-1}$ series that we will focus on in the rest of this paper, we will be studying the correspondence \eqref{StatementSU} between the $SU(N)$ equivariant Verlinde algebra and the Coulomb index of $T[\Sigma, PSU(N)]$. But before going any further, we will first address a common concern that the reader may have. Namely, charge quantization appears to be violated in the presence of these non-integral $SU(N)$ holonomies. Shouldn't this suggest that the index is just zero with a non-trivial flux background? Indeed, for a state transforming under the fundamental representation of $SU(N)$, translation along the Hopf fiber of $L(k,1)$ $k$ times gives a non-abelian Aharonov-Bohm phase
\begin{equation}\label{ABPhase}
e^{2\pi i k \mathbf{a}_{\text{SU}}}.
\end{equation}
Since the loop is trivial in $\pi_1(L(k,1))$, one would expect this phase to be trivial. However, in the presence of a non-trivial 't Hooft flux, \eqref{ABPhase} is a non-trivial element in the center of $SU(N)$. Then the partition function with an insertion of such an 't Hooft operator is automatically zero. But this is exactly what one needs in order to recover even the usual Verlinde formula in the $\ft=0$ limit. As we will explain next, the observation above, specialized to $SU(2)$, is simply the ``selection rule'' saying that in the decomposition of a tensor product
\begin{equation}
\text{(half integer spin)}\otimes\text{(integer spin)}\otimes\ldots\otimes\text{(integer spin)}
\end{equation}
there is no representation with integer spins! What we will do next is to use Dirac quantization conditions in $T[\Sigma, PSU(N)]$ to derive the selection rule above and analogous rules for the $SU(N)$ Verlinde algebra.
\subsection{Verlinde algebra and Dirac quantization}
The Verlinde formula associates to a pair of pants a fusion coefficient $f_{abc}$ which tells us how to decompose a tensor product of representations:
\begin{equation}
R_a\otimes R_b=\bigoplus_{c}f_{ab}^{\phantom{ab}c}R_c.
\end{equation}
Equivalently, this coefficient gives the dimension of the invariant subspace of three-fold tensor products
\begin{equation}
\dim\mathrm{Inv}(R_a\otimes R_b\otimes R_c)=f_{abc}.
\end{equation}
Here, upper and lower indices are related by the ``metric''
\begin{equation}
\eta_{ab}=\dim\mathrm{Inv}(R_a\otimes R_b)=\delta_{a\overline{b}},
\end{equation}
which is what the TQFT associates to a cylinder.
In the case of $SU(N)$, the fusion coefficients $f_{abc}$ are zero whenever a selection rule is not satisfied. For three representations labeled by the highest weights $\vec{\mu}^{(1)},\vec{\mu}^{(2)},\vec{\mu}^{(3)}$ in \eqref{repMu} the selection rule is
\begin{equation}\label{SelectionSUN}
\sum_{i=1}^{N-1}(\mu^{(1)}_i+\mu^{(2)}_i+\mu^{(3)}_i)\equiv 0 \pmod N.
\end{equation}
This is equivalent to the condition that ${\mathbb Z}_N$ acts trivially on $R_a\otimes R_b\otimes R_c$. Of course, when this action is non-trivial, it is easy to see that there can't be any invariant subspace.
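For $SU(2)$ this selection rule is already visible in classical Clebsch-Gordan theory. A minimal sketch (our own illustration; it uses the classical $k\to\infty$ invariant count, while the level-$k$ Verlinde coefficient would additionally impose $j_1+j_2+j_3\le k$):

```python
def dim_inv_su2(j1, j2, j3):
    """dim Inv(V_j1 x V_j2 x V_j3) for SU(2) spins (classical, no level cutoff):
    1 iff the triangle inequality holds and j1 + j2 + j3 is an integer."""
    total = j1 + j2 + j3
    triangle = abs(j1 - j2) <= j3 <= j1 + j2
    return 1 if (total == int(total) and triangle) else 0

# Z_2 selection rule: an odd number of half-integer spins kills the invariant.
assert dim_inv_su2(0.5, 0.5, 1) == 1    # 2 x 2 contains the spin-1 rep
assert dim_inv_su2(0.5, 0.5, 0) == 1    # 2 x 2 contains the singlet
assert dim_inv_su2(0.5, 1, 1) == 0      # half-integer total spin: no invariant
```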
Our job now is to reproduce this rule on the Coulomb index side via Dirac quantization. We start with the familiar case of $SU(2)$. The theory $T_2=T[\Sigma_{0,3},SU(2)]$ consists of eight 4d ${\mathcal N}=2$ half-hypermultiplets transforming in the tri-fundamental of the $SU(2)_a\times SU(2)_b\times SU(2)_c$ flavor symmetry. The holonomy $(H_a,H_b,H_c)\in U(1)^3$ of this flavor symmetry along the Hopf fiber is given by a triple $(m_a,m_b,m_c)$ with
\begin{equation}
H_I=e^{2\pi i m_I/k}, \quad I=a,b,c.
\end{equation}
The Dirac quantization requires that the Aharonov-Bohm phase associated with a trivial loop must be trivial. So, in the presence of the non-trivial holonomy along the Hopf fiber, a physical state with charge $(e_a,e_b,e_c)$ needs to satisfy
\begin{equation}
H_a^{ke_a}H_b^{ke_b}H_c^{ke_c}=e^{2\pi i \sum_{I=a,b,c} e_I m_I}=1,
\end{equation}
or, equivalently,
\begin{equation}
\sum_{I=a,b,c} e_I m_I \in {\mathbb Z}.
\end{equation}
When decomposed into representations of $U(1)^3$, the tri-fundamental hypermultiplet splits into eight components:
\begin{equation}
(\mathbf{2},\mathbf{2},\mathbf{2})\rightarrow \bigoplus_{\text{All $\pm$}}(\pm 1,\pm 1,\pm 1).
\end{equation}
Therefore, one needs to satisfy eight equations
\begin{equation}
\pm m_a \pm m_b \pm m_c \in {\mathbb Z}.
\end{equation}
For individual $m_I$, the condition is
\begin{equation}\label{IntSU2}
m_I\in \frac{{\mathbb Z}}{2},
\end{equation}
which is the same as the relaxed integrality condition \eqref{IntegralPSU} for $SU(2)$. This already suggests that the condition \eqref{IntegralPSU} is the most general one and there is no need to relax it further. Indeed, $m_I$ is the ``spin'' of the corresponding $SU(2)$ representation, and we know that all allowed values for it are integers and half-integers.
Besides the individual constraint \eqref{IntSU2}, there is an additional one:
\begin{equation}
m_a+m_b+m_c\in {\mathbb Z},
\end{equation}
which is precisely the ``selection rule'' we mentioned before. Only when this rule is satisfied can $R_{m_c}$ appear in the decomposition of $R_{m_a}\otimes R_{m_b}$.
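The equivalence between the eight Aharonov-Bohm constraints and the pair of conditions just stated can be confirmed by brute force over a grid of rational holonomies, using exact rational arithmetic (a hypothetical check of ours, not taken from elsewhere):

```python
from fractions import Fraction
from itertools import product

def is_int(x):
    return x.denominator == 1

def dirac_ok(ma, mb, mc):
    """All eight Aharonov-Bohm constraints: +-m_a +- m_b +- m_c in Z."""
    return all(is_int(sa * ma + sb * mb + sc * mc)
               for sa, sb, sc in product((1, -1), repeat=3))

def derived_ok(ma, mb, mc):
    """Claimed equivalent form: each m in Z/2 and the total sum in Z."""
    return all(is_int(2 * m) for m in (ma, mb, mc)) and is_int(ma + mb + mc)

# Brute-force the equivalence over a grid of quarter-integer holonomies.
grid = [Fraction(n, 4) for n in range(-8, 9)]
assert all(dirac_ok(*m) == derived_ok(*m) for m in product(grid, repeat=3))
```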
We then proceed to the case of $SU(N)$. When $N=3$ the theory $T_3$ doesn't have a Lagrangian description but is conjectured to have $E_6$ global symmetry \cite{Minahan:1996fg}. And the matter fields transform in the 78-dimensional adjoint representation of $E_6$ \cite{Gaiotto:2008nz, Gadde:2010te,Argyres:2007cn} which decomposes into $SU(3)^3$ representations as follows
\begin{equation}
\mathbf{78}=(\mathbf{3},\mathbf{3},\mathbf{3})\oplus(\overline{\mathbf{3}},\overline{\mathbf{3}},\overline{\mathbf{3}})\oplus(\mathbf{8},\mathbf{1},\mathbf{1})\oplus(\mathbf{1},\mathbf{8},\mathbf{1})\oplus(\mathbf{1},\mathbf{1},\mathbf{8}).
\end{equation}
The $\mathbf{8}$ is the adjoint representation of $\mathfrak{su}(3)$ and, being a representation of both $SU(3)$ and $PSU(3)$, imposes no additional restriction on 't Hooft fluxes. So we only need to understand the quantization condition in the presence of tri-fundamental matter $(\mathbf{3},\mathbf{3},\mathbf{3})$. A natural question, then, is whether the same reduction happens for general $N$, {\it i.e.},
\begin{equation}\label{QuantSUN}
\genfrac{}{}{0pt}{}{\text{Dirac quantization condition}}{\text{ for the $T_N$ theory}} \quad = \quad \genfrac{}{}{0pt}{}{\text{Dirac quantization condition}}{\text{for a tri-fundamental matter.}}
\end{equation}
This imposes on the $T_N$ theory an interesting condition, which is expected to be true as it turns out to give the correct selection rule for $SU(N)$ Verlinde algebra.
Now, we proceed to determine the quantization condition for the tri-fundamental of $SU(N)^3$. We assume the holonomy in $SU(N)^3$ to be
\begin{equation}
(H_a,H_b,H_c),
\end{equation}
where
\begin{equation}
H_I=\exp\left[\frac{2\pi i}{k}\mathrm{diag}\{m_{I1},m_{I2},\ldots,m_{IN}\}\right].
\end{equation}
The tracelessness condition reads
\begin{equation}\label{Traceless}
\sum_{j=1}^N m_{Ij}=0 \quad\text{for all $I=a,b,c.$}
\end{equation}
We now have $N^3$ constraints given by
\begin{equation}
m_{aj_1}+m_{bj_2}+m_{cj_3}\in {\mathbb Z} \quad\text{for all choices of $j_1,j_2$ and $j_3$}.
\end{equation}
Using \eqref{Traceless}, one can derive the individual constraint for each $I=a,b,c$:\footnote{In this paper, bold letters like $\mathbf{m}$ are used to denote an element in the Cartan subalgebra of $\mathfrak{g}$. They are sometimes viewed as a diagonal matrix and sometimes as a multi-component vector. The interpretation should be clear from the context.}
\begin{equation}\label{Indiv}
\mathbf{m}_I \equiv \left(\frac{1}{N},\frac{1}{N},\frac{1}{N},\ldots,\frac{1}{N}\right)\cdot{\mathbb Z} \pmod {\mathbb Z}.
\end{equation}
This is exactly the same as \eqref{IntegralPSU}. There is only one additional ``selection rule'' that needs to be satisfied:
\begin{equation}\label{Selec}
\sum_{I=a,b,c}\sum_{j=1}^{N-1} (m_{Ij}-{m_{IN}})\equiv 0 \pmod N,
\end{equation}
which coincides with \eqref{SelectionSUN}. Therefore, we have demonstrated the equivalence between the Dirac quantization condition of the tri-fundamental and the selection rules in the $SU(N)$ Verlinde algebra. Since the argument is independent of the value of $\ft$, the same set of selection rules also applies to the equivariant Verlinde algebra.
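For small $N$ this derivation can again be checked by brute force. The sketch below (ours, purely illustrative) verifies for $N=3$ that, on traceless holonomy vectors, the $N^3$ Dirac constraints hold exactly when the individual constraint \eqref{Indiv} and the selection rule \eqref{Selec} both hold:

```python
from fractions import Fraction
from itertools import product

N = 3

def is_int(x):
    return x.denominator == 1

# traceless holonomy vectors with entries on a coarse rational grid
vals = [Fraction(0), Fraction(1, 2), Fraction(1, 3), Fraction(2, 3)]
vecs = [(x, y, -x - y) for x, y in product(vals, repeat=2)]

def dirac_ok(ma, mb, mc):
    """All N^3 constraints m_a[j1] + m_b[j2] + m_c[j3] in Z."""
    return all(is_int(ma[j1] + mb[j2] + mc[j3])
               for j1, j2, j3 in product(range(N), repeat=3))

def individual_ok(m):
    """Components all equal mod Z and multiples of 1/N mod Z."""
    return all(is_int(N * x) for x in m) and all(is_int(x - m[0]) for x in m)

def selection_ok(ma, mb, mc):
    """sum_I sum_{j<N} (m_Ij - m_IN) = 0 mod N."""
    s = sum(m[j] - m[N - 1] for m in (ma, mb, mc) for j in range(N - 1))
    return is_int(s / N)

# Dirac quantization  <=>  individual constraints + one selection rule
assert all(dirac_ok(*t) == (all(individual_ok(m) for m in t) and selection_ok(*t))
           for t in product(vecs, repeat=3))
```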
Besides pairs of pants, one needs one more ingredient to build a 2d TQFT---the cylinder. It can be used to glue punctures together to build general Riemann surfaces. Each cylinder corresponds to a free 4d ${\mathcal N}=2$ vector multiplet. Since all of its components transform in the adjoint representation, it does not alter the individual constraints \eqref{Indiv}. However, the holonomies associated with the two punctures need to be the inverse of each other, as the two flavor symmetries are identified and gauged. So the index of $T[\Sigma_{0,2},SU(N)]$ gives a diagonal ``metric''
\begin{equation}
\eta_{ab}\sim \delta_{a\overline{b}}.
\end{equation}
The proportionality constant is $\ft$-dependent and will be determined in later sections.
We can also derive the Dirac quantization condition for $T[\Sigma_{g,n},PSU(N)]$. We use $m_{Ij}$ to label the $j$-th component of the $U(1)^N$ holonomy associated to the $I$-th puncture. Then the index, or any kind of partition function, of $T[\Sigma_{g,n},SU(N)]$ is zero unless
\begin{enumerate}
\item each $\vec{m}_{I}$ satisfies the individual constraint \eqref{Indiv}, and
\item an additional constraint analogous to \eqref{Selec},
\begin{equation}\label{SelecGeneral}
\sum_{I=1}^n\sum_{j=1}^{N-1} (m_{Ij}-{m_{IN}})\equiv 0 \pmod N,
\end{equation}
is also satisfied.
\end{enumerate}
To end this section, we will explain how the additional numerical factor in \eqref{EVFk=1} in the introduction arises from non-trivial 't Hooft fluxes. For $G=SU(N)$, one has
\begin{equation}
Z_{\text{EV}}(\Sigma,k=1,\ft)=N^g\cdot\left[\frac{1}{\prod_{i=1}^{\mathrm{rank}\,G}(1-\ft^{i+1})^{2i+1}}\right]^{g-1}.
\end{equation}
Here we are only concerned with the first factor $N^g$, which is the $k=1$ Verlinde formula for $SU(N)$
\begin{equation}
Z_{\text{EV}}(\Sigma,k=1,\ft=0)=N^g.
\end{equation}
We now derive this result on the index side.
Consider the twice-punctured torus, obtained by gluing two pairs of pants. Let $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)\in{\mathbb Z}_N^3$ label the 't Hooft fluxes corresponding to all six punctures. We glue $a_2$ with $b_2$, $a_3$ with $b_3$ to get $\Sigma_{1,2}$. Then we have the following set of constraints:
\begin{equation}
a_2 b_2=1,\;a_3 b_3=1,
\end{equation}
and
\begin{equation}
a_1a_2a_3=1,\; b_1b_2b_3=1.
\end{equation}
From these constraints, we can first confirm that
\begin{equation}
a_1b_1=1,
\end{equation}
which is what the selection rule \eqref{SelecGeneral} predicts. Then there is a free parameter $a_2$ that can take arbitrary values in ${\mathbb Z}_N$. So in the $\ft=0$ limit, the Coulomb index TQFT associates to $\Sigma_{1,2}$
\begin{equation}
Z_{\text{CB}}(\Sigma_{1,2},SU(N),\ft=0)=N\delta_{a_1,\overline{b_1}}.
\end{equation}
We can now glue $g-1$ twice-punctured tori to get
\begin{equation}
Z_{\text{CB}}(\Sigma_{g-1,2},SU(N),\ft=0)=N^{g-1}\delta_{a_1,\overline{b}_{g-1}}.
\end{equation}
Taking the trace of this gives\footnote{What we have verified is basically that the algebra of ${\mathbb Z}_N$ 't Hooft fluxes gives the $SU(N)$ Verlinde algebra at level $k=1$, which is isomorphic to the group algebra of ${\mathbb Z}_N$. Another TQFT whose Frobenius algebra is also related to the group algebra of ${\mathbb Z}_N$ is the 2d ${\mathbb Z}_N$ Dijkgraaf-Witten theory \cite{dijkgraaf1990}. However, the normalizations of the trace operator are different, so the partition functions are also different.}
\begin{equation}
Z_{\text{CB}}(\Sigma_{g,0},SU(N),\ft=0)=N^{g}.
\end{equation}
Combining this with the $\ft$ dependent part of \eqref{EVFk=1}, we have proved that, for $k=1$, the equivariant Verlinde formula is the same as the full Coulomb branch index.
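The gluing bookkeeping above is easy to automate. The following small script (ours, purely illustrative) encodes the $\ft=0$ flux TQFT additively, with pair of pants $f_{abc}=\delta(a+b+c \bmod N)$ and a sign flip at each gluing, builds the twice-punctured torus, chains $g-1$ copies, and traces:

```python
import numpy as np

def Z_closed(N, g):
    """t=0 Coulomb-index flux TQFT: Z(Sigma_{g,0}) for SU(N), g >= 2.
    Fluxes live in Z_N (written additively); gluing sends c -> -c."""
    f = np.zeros((N, N, N), dtype=int)
    for a in range(N):
        for b in range(N):
            f[a, b, (-a - b) % N] = 1               # pair of pants delta(a+b+c)
    P = np.array([[1 if (a + b) % N == 0 else 0     # gluing matrix: c -> -c
                   for b in range(N)] for a in range(N)])
    # twice-punctured torus: T[a,b] = sum f[a,c,d] P[c,e] P[d,f'] f[b,e,f']
    T = np.einsum('acd,ce,df,bef->ab', f, P, P, f)  # equals N * delta(a+b)
    M = T
    for _ in range(g - 2):                          # chain g-1 copies of Sigma_{1,2}
        M = M @ P @ T
    return int(np.trace(M @ P))                     # close up the two punctures

assert Z_closed(2, 2) == 4 and Z_closed(3, 2) == 9
assert Z_closed(3, 3) == 27 and Z_closed(5, 4) == 625   # N^g in general
```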
We will now move on to cases with more general $k$ to perform stronger checks.
\section{A check of the proposal}
\label{sec:SU2}
In this section, we perform an explicit computation of the Coulomb branch index for the theory $T[\Sigma_{g,n}, PSU(2)]$ in the presence of 't Hooft fluxes (or half-integral flavor holonomies). We will see that, after taking into account a proper normalization, the full Coulomb branch index nicely reproduces the known $SU(2)$ equivariant Verlinde algebra. First, we introduce the necessary ingredients of the 4d ${{\mathcal N}}=2$ superconformal index on $S^1 \times L(k,1)$ for a theory with a Lagrangian description.
\subsection{The Lens space index and its Coulomb branch limit}
The Lens space index of 4d ${{\mathcal N}} = 2$ theories is a generalization of the ordinary superconformal index on $S^1 \times S^3$, as $S^3 = L(1,1)$ \cite{Benini:2011nc}. For $k>1$, $L(k,1)$ has a nontrivial fundamental group $\mathbb{Z}_k$, and a supersymmetric theory on $L(k,1)$ tends to have a set of degenerate vacua labeled by holonomies along the Hopf fiber. This feature renders the Lens space index a refined tool to study the BPS spectra of the superconformal theory; for instance it can distinguish between theories with gauge groups that have the same Lie algebra but different topologies ({\textit{e.g.}}~$SU(2)$ versus $SO(3)$ \cite{Razamat:2013opa}). Moreover, as it involves not only continuous fugacities but also discrete holonomies, Lens space indices of class ${\mathcal S}$ theories lead to a very large family of interesting and exotic 2d TQFTs \cite{Benini:2011nc,Alday:2013rs,Razamat:2013jxa}.
The basic ingredients of the Lens space index are indices of free supermultiplets, each of which can be conveniently expressed as an integral over the gauge group of the plethystic exponential of the ``single-letter index'', endowed with gauge and flavor fugacities. This procedure corresponds to constructing all possible gauge-invariant multi-trace operators that are short with respect to the superconformal algebra.
In particular, for a gauge vector multiplet the single-letter index is
\begin{equation}\label{singleV}
f^V(p,q,t,m,k) =\frac{1}{1-p q} \pbra{\frac{p^m}{1-p^k}+\frac{q^{k-m}}{1-q^k}}(pq+\frac{pq}{t}- 1 -t) + \delta_{m,0},
\end{equation}
where $m$ will be related to holonomies of gauge symmetries. For a half-hypermultiplet, one has
\begin{equation}\label{singleH}
f^{H/2}(p,q,t,m,k) = \frac{1}{1-p q} \pbra{\frac{p^m}{1-p^k}+\frac{q^{k-m}}{1-q^k}}(\sqrt{t} - \frac{pq}{\sqrt{t}}).
\end{equation}
In addition, there is also a ``zero point energy'' contribution for each type of field. For a vector multiplet and a half hypermultiplet, they are given by
\begin{equation}
\begin{aligned}
I_V^0(p,q,t,{\bf m}, k) & = \prod_{\alpha \in \Delta^+} \pbra{\frac{pq}{t}}^{- [\![ \alpha({\mathbf m}) ]\!]_k + \frac{1}{k} [\![ \alpha({\mathbf m}) ]\!]_k^2},\\[0.5em]
I_{H/2}^0(p,q,t,{\mathbf m}, {\mathbf{\widetilde{m}}}, k) & = \prod_{\rho \in \mathfrak{R}} \pbra{\frac{pq}{t}}^{\frac{1}{4}\pbra{[\![ \rho({\mathbf m}, {\mathbf{ \widetilde{ m}}}) ]\!]_k - \frac{1}{k} [\![ \rho({\mathbf m}, {\mathbf{\widetilde {m}}}) ]\!]_k^2}},
\end{aligned}
\end{equation}
where $\dbra{x}_k$ denotes the remainder of $x$ divided by $k$. The boldface letters ${\bf m}$ and ${\mathbf{ \widetilde{ m}}}$ label holonomies for, respectively, gauge symmetries and flavor symmetries\footnote{As before, the holonomies are given by $e^{2\pi i\mathbf{m}/k}$.}; they are chosen to live in the Weyl alcove and can be viewed as a collection of integers $m_1 \geq m_2 \geq \dots \geq m_r$.
Now the full index can be written as
\begin{equation}
\begin{aligned}
{{\mathcal I}} = \sum_{\bf m} & I_V^0(p,q,t,{\bf m}) I_{H/2}^0(p,q,t,{\bf m},{\mathbf{\widetilde{m}}}) \int \prod_i \frac{dz_i}{2\pi i z_i} \Delta(z)_{\bf m}\\[0.5em]
& \times \exp \pbra{\sum_{n=1}^{+\infty}\sum_{\alpha,\rho}\frac{1}{n} \left[ f^V(p^n,q^n,t^n, \alpha({\bf m})) \alpha(z) + f^{H/2} (p^n,q^n,t^n, \rho({\bf m},{\mathbf{\widetilde{m}}})) \rho(z, F) \right]}.
\label{LensIndex}
\end{aligned}
\end{equation}
Here, to avoid clutter, we only include one vector multiplet and one half-hypermultiplet. Of course, in general one should remember to include the entire field contents of the theory. Here, $F$ stands for the continuous flavor fugacities and the $z_i$'s are the gauge fugacities; for $SU(N)$ theories one should impose the condition $z_1 z_2 \dots z_N = 1$. The additional summation in the plethystic exponential is over all the weights in the relevant representations. The integration measure is determined by ${\bf m}$:
\begin{equation}
\Delta_{\bf m}(z_i) = \prod_{i,j; m_i = m_j }\left(1-\frac{z_i}{z_j}\right),
\end{equation}
since a nonzero holonomy would break the gauge group into its stabilizer.
In this paper we are particularly interested in the Coulomb branch limit, {\textit{i.e.}}~\eqref{CBI0} and \eqref{CBI1}. From the single-letter indices \eqref{singleV} and \eqref{singleH} we immediately conclude that $f^{H/2} = 0$ identically, so the hypermultiplets contribute to the index only through the zero point energy. As for $f^V$, the vector multiplet gives a non-zero contribution $pq/t = \ft$ for each root $\alpha$ that has $\alpha({\bf m}) = 0$. So the zero roots (Cartan generators) always contribute, and non-zero roots can only contribute when the gauge symmetry is enhanced from $U(1)^r$, {\textit{i.e.}}~when $\mathbf{m}$ is at the boundary of the Weyl alcove. This closely resembles the behavior of the ``metric'' of the equivariant Verlinde algebra, as we will see shortly.
More explicitly, for $SU(2)$ theory, the index of a vector multiplet in the Coulomb branch limit is
\begin{equation}\begin{aligned}
{I}_V (\ft, m, k) = \ft^{- [\![ 2m ]\!]_k + \frac{1}{k} [\![ 2m ]\!]_k^2} \pbra{\frac{1}{1-\ft}}\pbra{\frac{1}{1+\ft}}^{\delta_{\dbra{2m}_k,0}},
\label{SU2CBIV}
\end{aligned}\end{equation}
while for tri-fundamental hypermultiplet the contribution is
\begin{equation}\begin{aligned}
{I}_{H/2}(\ft, m_1,m_2,m_3,k) = \prod_{s_i = \pm} \pbra{\ft}^{\frac{1}{4}\sum_{i=1}^3 \pbra{[\![ m_i s_i ]\!]_k - \frac{1}{k} [\![ m_i s_i ]\!]_k^2}},
\end{aligned}\end{equation}
where all holonomies take values from $\{0,1/2,1,3/2,\dots k/2\}$.
Unsurprisingly, this limit fits the name of the ``Coulomb branch index.'' Indeed, in the case of $k=1$, the index receives only contributions from the Coulomb branch operators, {\it i.e.}~a collection of ``Casimir operators'' for the theory \cite{Gadde:2011uv} ({\textit{e.g.}}~${\rm{Tr}}\phi^2, \, {\rm{Tr}}\phi^3, \, \dots ,\, {\rm{Tr}}\phi^N$ for $SU(N)$, where $\phi$ is the scalar in the ${\cal N}=2$ vector multiplet). We see here that a general Lens space index also counts the Coulomb branch operators, but the contribution from each operator is modified according to the background holonomies.
Another interesting feature of the Coulomb branch index is the complete disappearance of continuous fugacities of flavor symmetries. Punctures are now only parametrized by discrete holonomies along the Hopf fiber of $L(k,1)$. This property ensures that we will obtain a \textit{finite-dimensional} algebra.
Then, to make sure that the algebra defines a TQFT, one needs to check associativity, especially because the non-integral holonomies considered here are novel and may cause subtleties. We have checked by explicit computation in $\ft$ that the structure constants and metric defined by the Lens space index do satisfy associativity, confirming that the ``Coulomb branch index TQFT'' is indeed well-defined. In fact, even with all $p, q, t$ turned on, the associativity still holds order by order in the expansion in terms of fugacities.
\subsection{Equivariant Verlinde algebra from Hitchin moduli space}
As explained in greater detail in \cite{equivariant}, the equivariant Verlinde TQFT computes an equivariant integral over ${\mathcal M}_H$, the moduli space of Higgs bundles. In the case of $SU(2)$, the relevant moduli spaces are simple enough and one can deduce the TQFT algebra from the geometry of ${\mathcal M}_H$. For example, one can obtain the fusion coefficients from ${\mathcal M}_H(\Sigma_{0,3}, \alpha_1, \alpha_2, \alpha_3; SU(2))$. Here the $\alpha_i$'s are the ramification data specifying the monodromies of the gauge field \cite{Gukov:2006jk} and take discrete values in the presence of a level $k$ Chern-Simons term. Since in this case the moduli space is just a point or empty, one can directly evaluate the integral. The result is as follows.
Define $\lambda = 2 k \alpha$ whose value is quantized to be $0, 1, \dots, k$. Let
\begin{equation}\begin{aligned}
d_0 & = \lambda_1+\lambda_2+\lambda_3 - 2k,\\
d_1 & = \lambda_1 - \lambda_2 - \lambda_3,\\
d_2 & = \lambda_2 - \lambda_3 - \lambda_1,\\
d_3 & = \lambda_3 - \lambda_1 - \lambda_2,
\end{aligned}\end{equation}
and moreover
\begin{equation}\begin{aligned}
\Delta \lambda = \max(d_0, d_1, d_2, d_3),
\end{aligned}\end{equation}
then
\begin{equation}\begin{aligned}
f_{\lambda_1 \lambda_2 \lambda_3} = \begin{cases}
1\ \ \ \ & {\rm{if}}\ \lambda_1+\lambda_2+\lambda_3\ \text{is even and}\ \Delta \lambda \leq 0,\\[0.5em]
\ft^{\Delta \lambda/2}& {\rm{if}}\ \lambda_1+\lambda_2+\lambda_3\ \text{is even and}\ \Delta \lambda > 0,\\[0.5em]
0 \ & {\rm{if}}\ \lambda_1+\lambda_2+\lambda_3\ \text{is odd}.
\end{cases}
\label{POPVerlinde}
\end{aligned}\end{equation}
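The piecewise rule \eqref{POPVerlinde} is straightforward to implement. In the sketch below (the helper name \texttt{fusion\_exponent} is ours) we record only the exponent $e$ with $f_{\lambda_1\lambda_2\lambda_3} = \ft^{\,e}$; note the exponent $\Delta\lambda/2$ carries a positive sign, as required for the coefficient to vanish at $\ft = 0$ whenever $\Delta\lambda > 0$:

```python
def fusion_exponent(l1, l2, l3, k):
    """Power e with f_{l1 l2 l3} = t^e, or None when the coefficient vanishes."""
    if (l1 + l2 + l3) % 2 == 1:
        return None  # odd total weight: fusion coefficient is zero
    delta = max(l1 + l2 + l3 - 2 * k,   # d_0
                l1 - l2 - l3,           # d_1
                l2 - l3 - l1,           # d_2
                l3 - l1 - l2)           # d_3
    return max(delta, 0) // 2  # delta is even whenever the total weight is even

# spot checks at level k = 10
assert fusion_exponent(3, 3, 0, 10) == 0    # classically allowed fusion
assert fusion_exponent(3, 3, 8, 10) == 1    # suppressed by one power of t
assert fusion_exponent(1, 1, 1, 10) is None
```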
On the other hand, the cylinder gives the trace form (or ``metric'') of the algebra, which is diagonal in this basis:
\begin{equation}\begin{aligned}
\eta_{\lambda\mu} = \delta_{\lambda\mu}\,\eta_\lambda, \qquad (\eta_0, \eta_1, \dots, \eta_k) = \{ 1-\ft^2, 1-\ft, \dots, 1-\ft, 1-\ft^2 \}.
\label{CVerlinde}
\end{aligned}\end{equation}
Via cutting-and-gluing, we can compute the partition function of the TQFT on a general Riemann surface $\Sigma_{g,n}$.
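Both statements---associativity of the structure constants and the cutting-and-gluing rule---can be spot-checked numerically from \eqref{POPVerlinde} and \eqref{CVerlinde}. The following sketch (helper names ours, evaluated at a sample value of $\ft$) raises one index of $f$ with the diagonal metric, verifies associativity at $k=4$, and then reassembles the genus-2 partition function at $k=1$ from two pairs of pants glued along three tubes:

```python
def f(l1, l2, l3, k, t):
    """Pair-of-pants amplitude, with the exponent of t taken positive."""
    if (l1 + l2 + l3) % 2 == 1:
        return 0.0
    delta = max(l1 + l2 + l3 - 2*k, l1 - l2 - l3, l2 - l3 - l1, l3 - l1 - l2)
    return t ** (delta // 2) if delta > 0 else 1.0

def eta(l, k, t):
    """Diagonal cylinder amplitude: 1 - t^2 at the alcove endpoints, 1 - t inside."""
    return 1 - t**2 if l in (0, k) else 1 - t

def N(l, m, n, k, t):
    """Structure constant: coefficient of |n> in |l> x |m>, index raised with eta."""
    return f(l, m, n, k, t) / eta(n, k, t)

k, t = 4, 0.2
for a in range(k + 1):
    for b in range(k + 1):
        for c in range(k + 1):
            for d in range(k + 1):
                lhs = sum(N(a, b, s, k, t) * N(s, c, d, k, t) for s in range(k + 1))
                rhs = sum(N(b, c, s, k, t) * N(a, s, d, k, t) for s in range(k + 1))
                assert abs(lhs - rhs) < 1e-9  # associativity

# genus-2 partition function at k = 1: two pairs of pants glued along three tubes
Z2 = sum(f(a, b, c, 1, t) ** 2 / (eta(a, 1, t) * eta(b, 1, t) * eta(c, 1, t))
         for a in range(2) for b in range(2) for c in range(2))
assert abs(Z2 - 4 / (1 - t**2) ** 3) < 1e-9  # the k = 1, g = 2 entry of table \ref{SU2Partition}
```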
\subsection{Matching two TQFTs}
So far we have introduced two TQFTs: the first one is given by equivariant integration over Hitchin moduli space ${{\mathcal M}}_H$, the second one is given by the $L(k,1)$ Coulomb branch index of the theory $T[\Sigma, PSU(2)]$. It is easy to see that the underlying vector spaces of the two TQFTs are the same, confirming in the $SU(2)$ case the more general result we obtained previously:
\begin{equation}
Z_{\text{EV}}(S^1)=Z_{\text{CB}}(S^1).
\end{equation}
We can freely switch between two different descriptions of the same set of basis vectors, by either viewing them as integrable highest weight representations of $\widehat{su}(2)_k$ or $SU(2)$ holonomies along the Hopf fiber. In this section, we only use highest weights $\lambda$ as the labels for puncture data, and one can easily translate them into holonomies via $\lambda = 2m$.
Then, one needs to compare the algebraic structure of the two TQFTs and may notice that there are apparent differences. Namely, if one compares $I_V$ and $I_{H/2}$ with $\eta$ and $f$ in \eqref{POPVerlinde} and \eqref{CVerlinde}, there are additional factors coming from the zero point energy in the expressions on the index side. However, one can simply rescale states in the Hilbert space on the Coulomb index side to absorb them.
The scaling required is
\begin{equation}\begin{aligned}
|\lambda \rangle = \ft^{\frac{1}{2} \left( [\![ \lambda ]\!]_k - \frac{1}{k} [\![ \lambda ]\!]_k^2\right)} | \lambda \rangle'.
\end{aligned}\end{equation}
This makes $I_V$ exactly the same as $\eta^{\lambda \mu}$. After rescaling, the index of the half-hypermultiplet becomes
\begin{equation}\begin{aligned}
I_{H/2}\Rightarrow f'_{\lambda_1 \lambda_2 \lambda_3} = \ft^{-\frac{1}{2} \sum_{i=1}^3 \left( [\![ \lambda_i ]\!]_k - \frac{1}{k} [\![ \lambda_i ]\!]_k^2\right)} {I}_{H/2} (\ft, \lambda_1,\lambda_2,\lambda_3,k),
\end{aligned}\end{equation}
and this is indeed identical to the fusion coefficient $f_{\lambda\mu\nu}$ of the equivariant Verlinde algebra, which we show as follows. If we define
\begin{equation}\begin{aligned}
g_0 & = m_1+m_2+m_3 = \frac{1}{2} \pbra{\lambda_1+\lambda_2+\lambda_3},\\[0.5em]
g_1 & = m_1 - m_2 -m_3 = \frac{1}{2} \pbra{\lambda_1-\lambda_2-\lambda_3},\\[0.5em]
g_2 & = m_2 - m_1 - m_3 = \frac{1}{2} \pbra{\lambda_2-\lambda_1-\lambda_3},\\[0.5em]
g_3 & = m_3 - m_1 - m_2 = \frac{1}{2} \pbra{\lambda_3-\lambda_1-\lambda_2},
\end{aligned}\end{equation}
then our pair of pants can be written as
\begin{equation}\begin{aligned}
f'_{\lambda_1 \lambda_2 \lambda_3} = & \ft^{\frac{1}{2k}\pbra{\dbra{g_0}_k\dbra{-g_0}_k+\dbra{g_1}_k\dbra{-g_1}_k+\dbra{g_2}_k\dbra{-g_2}_k+\dbra{g_3}_k\dbra{-g_3}_k}}\\[0.5em]
& \times \ft^{-\frac{1}{2k} \pbra{\lambda_1(k-\lambda_1)+\lambda_2(k-\lambda_2)+\lambda_3(k-\lambda_3)}}.
\end{aligned}\end{equation}
Now we can simplify the above expression further under various assumptions on the $g_i$. For instance, if $0< g_0 < k$ and $g_i<0$ for $i=1,2,3$, then
\begin{equation}\begin{aligned}
f'_{\lambda_1 \lambda_2 \lambda_3} = 1.
\end{aligned}\end{equation}
If on the other hand, $g_0 > k$ and $g_i<0$ for $i=1,2,3$, which means $\max(g_0 - k, g_1, g_2, g_3) = g_0 - k$, then
\begin{equation}\begin{aligned}
f'_{\lambda_1 \lambda_2 \lambda_3} = \ft^{g_0-k},
\end{aligned}\end{equation}
which is precisely what we obtained in \eqref{POPVerlinde}.
Therefore, we have shown that the building blocks of the two TQFTs are the same, and by the TQFT axioms this proves that the two TQFTs are isomorphic. For example, they both give a $\ft$-deformation of the $\widehat{su}(2)_k$ representation ring; at level $k=10$ a typical example is
\begin{equation}\begin{aligned}
|3\rangle \otimes |3 \rangle = \frac{1}{1-\ft^2} |0\rangle \oplus \frac{1}{1-\ft} |2\rangle \oplus \frac{1}{1-\ft} |4\rangle \oplus \frac{1}{1-\ft} |6\rangle \oplus \frac{\ft}{1-\ft} |8\rangle \oplus \frac{\ft^2}{1-\ft^2} |10\rangle.
\end{aligned}\end{equation}
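These coefficients follow mechanically from \eqref{POPVerlinde} and \eqref{CVerlinde}: the coefficient of $|\nu\rangle$ is $f_{\lambda\mu\nu}/\eta_\nu$. A short numerical check of the decomposition above (helper names ours, at a sample value of $\ft$):

```python
def f(l1, l2, l3, k, t):
    """Pair-of-pants amplitude, with the exponent of t taken positive."""
    if (l1 + l2 + l3) % 2 == 1:
        return 0.0
    delta = max(l1 + l2 + l3 - 2*k, l1 - l2 - l3, l2 - l3 - l1, l3 - l1 - l2)
    return t ** (delta // 2) if delta > 0 else 1.0

def eta(l, k, t):
    """Diagonal cylinder amplitude."""
    return 1 - t**2 if l in (0, k) else 1 - t

k, t = 10, 0.1
# coefficient of |n> in |3> x |3>
coeffs = {n: f(3, 3, n, k, t) / eta(n, k, t) for n in range(k + 1)}
expected = {0: 1/(1 - t**2), 2: 1/(1 - t), 4: 1/(1 - t), 6: 1/(1 - t),
            8: t/(1 - t), 10: t**2/(1 - t**2)}
for n in range(k + 1):
    assert abs(coeffs[n] - expected.get(n, 0.0)) < 1e-12
```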
For closed Riemann surfaces, we list partition functions for several low genera and levels in table \ref{SU2Partition}. And this concludes our discussion of the $SU(2)$ case.
\setlength\extrarowheight{8pt}
\begin{table}
\begin{adjustwidth}{-2.1cm}{}
\begin{tabular}{ | x{1.0cm} | c | c | c | c |}
\hline
& $k=1$ & $k=2$ & $k=3$ & $k =4$ \\[0.5em] \hline
$g=2$ & $\frac{4}{(1-t^2)^3}$ & $\frac{2}{(1-t^2)^3}(5t^2+6t+5)$ & $\frac{4}{(1-t^2)^3}(4t^3+9t^2+9t+5)$ & \begin{tabular}[c]{@{}l@{}} $ \frac{1}{\left(1-t^2\right)^3} \left(16 t^4+49 t^3\right.$\\ \ \ \ $\left.+81 t^2+75 t+35\right)$ \end{tabular}\\[0.5em] \hline
$g =3$ & $\frac{8}{(1-t^2)^6}$ &\begin{tabular}[c]{@{}l@{}} $ \frac{4}{\left(1-t^2\right)^6} \left(9 t^4+28 t^3\right.$\\ $\left.+54 t^2+28 t+9\right) $\end{tabular} & \begin{tabular}[c]{@{}l@{}} $\frac{8}{\left(1-t^2\right)^6}\left(8 t^6+54 t^5+159 t^4\right.$\\ $\left.+238 t^3+183 t^2+72 t+15\right) $ \end{tabular}&\begin{tabular}[c]{@{}l@{}} $\frac{1}{\left(1-t^2\right)^6} \left(64 t^8+384 t^7+1793 t^6\right.$\\ $\left.+5250 t^5+8823 t^4+8828 t^3\right.$\\ $\left.+5407 t^2+1890 t+329 \right)$ \end{tabular} \\[0.7em] \hline
$\forall g$ & $2\left(\frac{2}{(1-t^2)^{3}}\right)^{g-1}$ & \begin{tabular}[c]{@{}l@{}} $\pbra{\frac{2 (1-t)^{2}}{(1-t^2)^{3}}}^{g-1}$\\ $+ 2\pbra{\frac{2 (1+t)^{2}}{(1-t^2)^{3}}}^{g-1}$ \end{tabular} &\begin{tabular}[c]{@{}l@{}} $ 2 \left(\frac{5+9t+9t^2+4t^3-\sqrt{5+4t}(1+5t+t^2)}{(1-t^2)^3} \right)^{g-1} + $\\ $ 2\left(\frac{5+9t+9t^2+4t^3+\sqrt{5+4t}(1+5t+t^2)}{(1-t^2)^3} \right)^{g-1}$ \end{tabular}& \begin{tabular}[c]{@{}l@{}} $\pbra{\frac{(3+t)(1-t)^2}{(1-t^2)^3}}^{g-1} + 2\pbra{\frac{4}{1-t^2}}^{g-1}$\\ $ +\pbra{\frac{4(3+t)(1+t)^3}{(1-t^2)^3}}^{g-1}$ \end{tabular} \\[0.8cm] \hline
\end{tabular}
\end{adjustwidth}
\caption{The partition function $Z_{\rm{EV}}(T[L(k,1),SU(2)] , \ft) = Z_{\rm{CB}}(T[\Sigma_g,PSU(2)] , \ft)$ for genus $g=2,3$ and level $k=1,2,3,4$.} \label{SU2Partition}
\end{table}
\section{$SU(3)$ equivariant Verlinde algebra from the Argyres-Seiberg duality}\label{sec:SU3}
In the last section, we tested the proposal about the equivalence between the equivariant Verlinde algebra and the algebra obtained from the Coulomb branch index of class ${\mathcal S}$ theories. One would then ask whether one can do more with such a correspondence and what its applications are. For example, can one use the Coulomb branch index as a tool to access geometric and topological information about Hitchin moduli spaces? Indeed, the study of the moduli space of Higgs bundles poses many interesting and challenging problems. In particular, doing the equivariant integral directly on ${{\mathcal M}}_H$ quickly becomes impractical as one increases the rank of the gauge group. However, our proposal states that the equivariant integral can be computed in a completely different way by looking at the superconformal index of familiar SCFTs! This is exactly what we will do in this section---we will put the correspondence to good use and probe the geometry of ${{\mathcal M}}_H(\Sigma, SU(3))$ with superconformal indices.
The natural starting point is still a pair of pants or, more precisely, a sphere with three ``maximal'' punctures (for mathematicians, three punctures with full-flag parabolic structure). The 4d theory $T[\Sigma_{0,3}, SU(3)]$ is known as the $T_3$ theory \cite{Tachikawa:2015bga}, which was first identified as an ${{\mathcal N}}=2$ strongly coupled rank-$1$ SCFT with a global $E_6$ symmetry\footnote{In the following we will use the names ``$T_3$ theory" and ``$E_6$ SCFT" interchangeably.} \cite{Minahan:1996fg}. In light of the proposed correspondence, one expects that the Coulomb branch index of the $T_3$ theory gives the fusion coefficients $f_{\lambda_1\lambda_2\lambda_3}$ of the $SU(3)$ equivariant Verlinde algebra.
\subsection{Argyres-Seiberg duality and Coulomb branch index of $T_3$ theory}
\subsubsection{A short review}
As the $T_3$ theory is an isolated SCFT, there is no Lagrangian description, and currently no method of direct computation of its index is known in the literature. However, there is a powerful duality proposed by Argyres and Seiberg \cite{Argyres:2007cn}, which relates a superconformal theory with a Lagrangian description at infinite coupling to a weakly coupled gauge theory obtained by gauging an $SU(2)$ subgroup of the $E_6$ flavor symmetry of the $T_3$ SCFT.
To be more precise, one starts with an $SU(3)$ theory with six hypermultiplets (call it theory A) in the fundamental representation $3\Box \oplus 3{\overline \Box}$ of the gauge group. Unlike its $SU(2)$ counterpart, the $SU(3)$ theory has the electric-magnetic duality group $\Gamma^0(2)$, a subgroup of $SL(2, {\mathbb Z})$. As a consequence, the fundamental domain of the gauge coupling $\tau$ has a cusp and the theory has an infinite-coupling limit. Through direct analysis of the Seiberg-Witten curve at strong coupling, Argyres and Seiberg showed that the theory can be naturally identified with another theory B, obtained by weakly gauging an $SU(2)$ subgroup of the flavor symmetry of the $E_6$ SCFT coupled to an additional hypermultiplet in the fundamental representation of $SU(2)$. There is much evidence supporting this duality. For instance, the $E_6$ SCFT has a Coulomb branch operator of dimension $3$, which can be identified with the cubic Casimir operator ${\rm{Tr}}\,\phi^3$ of the dual $SU(3)$ gauge group. The $E_6$ theory has a Higgs branch of $\dim_{\mathbb{C}} {{\mathcal H}} = 22$ parametrized by an operator $\mathbb{X}$ in the adjoint representation of $E_6$ subject to the Joseph relation \cite{Gaiotto:2008nz}; after gauging the $SU(2)$ subgroup, two complex dimensions are removed, leaving the correct dimension of the Higgs branch for theory A. Finally, Higgsing this $SU(2)$ leaves an $SU(6) \times U(1)$ subgroup of $E_6$, which is the same as the $U(6) = SU(6) \times U(1)$ flavor symmetry in the A frame.
In \cite{Gaiotto:2009we}, the Argyres-Seiberg duality is given a nice geometric interpretation. To obtain theory A, one starts with a 2-sphere with two $SU(3)$ maximal punctures and two $U(1)$ simple punctures, corresponding to the global symmetry $SU(3)_a \times SU(3)_b \times U(1)_a \times U(1)_b$, where the two $U(1)$ factors are baryonic symmetries. In this setup, the Argyres-Seiberg duality relates different degeneration limits of this Riemann surface, see figures \ref{AS3} and \ref{GeneralAS1}.
\begin{figure*}[htbp]
\begin{adjustwidth}{-2.0cm}{}
\centering
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TheoryA.png}
\caption*{(a)}
\end{subfigure}
\hspace{1.5cm}
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TheoryB.png}
\caption*{(b)}
\end{subfigure}
\end{adjustwidth}
\caption[Illustration of Argyres-Seiberg duality.]{Illustration of Argyres-Seiberg duality. (a) The theory A, which is an $SU(3)$ superconformal gauge theory with six hypermultiplets, with the $SU(3)_a \times U(1)_a \times SU(3)_b \times U(1)_b$ subgroup of the global $U(6)$ flavor symmetry. (b) The theory B, obtained by gauging an $SU(2)$ subgroup of the $E_6$ symmetry of $T_3$. Note in the geometric realization the cylinder connecting both sides has a regular puncture $R$ on the left and an irregular puncture $IR$ on the right.}
\label{AS3}
\end{figure*}
\begin{figure*}[htbp]
\begin{adjustwidth}{-2.0cm}{}
\centering
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TNA.png}
\caption*{(a)}
\end{subfigure}
\hspace{1.5cm}
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TNB.png}
\caption*{(b)}
\end{subfigure}
\end{adjustwidth}
\caption[Illustration of geometric realization of Argyres-Seiberg duality for $T_3$ theory.]{Illustration of geometric realization of Argyres-Seiberg duality for $T_3$ theory. The dots represent simple punctures while circles are maximal punctures. (a) The theory A, which is an $SU(3)$ superconformal gauge theory with six hypermultiplets, is pictured as two spheres connected by a long tube. Each of them has two maximal and one simple punctures. (b) The theory B, which is obtained by gauging an $SU(2)$ subgroup of the flavor symmetry of the theory $T_3$. This gauge group connects a regular puncture and an irregular puncture.}
\label{GeneralAS1}
\end{figure*}
The Argyres-Seiberg duality gives access to the superconformal index for the $E_6$ SCFT \cite{Gadde:2010te}. The basic idea is to start with the index of theory A and, with the aid of the inversion formula of elliptic beta integrals, one identifies two sets of flavor fugacities and extracts the $E_6$ SCFT index by integrating over a carefully chosen kernel. It was later realized that the above procedure has a physical interpretation, namely the $E_6$ SCFT can be obtained by flowing to the IR from an ${{\mathcal N}}=1$ theory which has Lagrangian description \cite{Gadde:2015xta}. The index computation of the ${{\mathcal N}}=1$ theory reproduces that of \cite{Gadde:2010te}, and the authors also compute the Coulomb branch index in the large $k$ limit.
Here we would like to obtain the index for general $k$. In principle, we could start with the ${{\mathcal N}}=1$ theory described in \cite{Gadde:2015xta} and compute the Coulomb branch index on the Lens space directly. However, a direct inversion is more intuitive here due to the simplicity of the Coulomb branch limit, and can be generalized to arbitrary $T_N$ theories. In the next subsection we outline the general procedure of computing the Coulomb branch index of $T_3$.
\subsubsection{Computation of the index}
To obtain a complete basis of the TQFT Hilbert space, we need to turn on all possible flavor holonomies and determine when they correspond to a weight in the Weyl alcove. For the $T_3$ theory each puncture has $SU(3)$ flavor symmetry, so we can turn on holonomies as ${\bf h}^*=(h^*_1, h^*_2, h^*_3)$ for $* = a, b, c$ with constraints $h^*_1 + h^*_2 + h^*_3 = 0$. The Dirac quantization condition tells us that
\begin{equation}\begin{aligned}
h^r_i + h^s_j + h^t_k \in {\mathbb Z}
\end{aligned}\end{equation}
for arbitrary $r,s,t \in \{a,b,c\}$ and $i,j,k = 1,2,3$. This means there are only three classes of choices modulo ${\mathbb Z}$, namely
\begin{equation}\begin{aligned}
\pbra{\frac{1}{3}, \frac{1}{3}, -\frac{2}{3}}, \ \ {\rm{or}}\ \ \pbra{\frac{2}{3}, -\frac{1}{3}, -\frac{1}{3}},\ \ {\rm{or}}\ \ \pbra{0,0,0}\ \ \pmod{\mathbb{Z}}.
\end{aligned}\end{equation}
Furthermore, the three punctures either belong to the same class (for instance, all are $(1/3,1/3,-2/3) \pmod {\mathbb Z}$) or to three distinct classes. Recall that the range of the holonomy variables is also constrained by the level $k$, so we pick out the Weyl alcove as follows:
\begin{equation}\begin{aligned}\label{fundDomain}
D(k) = \{ (h_1,h_2, h_3) | h_1 \geq h_2, h_1 \geq -2h_2, 2h_1 + h_2 \leq k \},
\end{aligned}\end{equation}
with a pictorial illustration in figure \ref{WeylAlcove}.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{WeylAlcove1.png}
\caption{The Weyl alcove for the choice of holonomy variables at level $k=3$. The red markers represent the allowed points. The coordinates beside each point denote the corresponding highest weight representation. The transformation between flavor holonomies and highest weight is given by \eqref{highestWeight}.}
\label{WeylAlcove}
\end{figure}
As we will later identify each holonomy as an integrable highest weight representation for the affine Lie algebra $\widehat{su}(3)_k$, it is more convenient to use the label $(\lambda_1, \lambda_2)$ defined as
\begin{equation}\begin{aligned}
\lambda_1 = h_2 - h_3, \ \ \ \lambda_2 = h_1 - h_2.
\label{highestWeight}
\end{aligned}\end{equation}
They are integers with $\lambda_1 + \lambda_2 \leq k$ and $(\lambda_1,\lambda_2)$ lives on the weight lattice of $su(3)$. The dimension of the representation with the highest weight $(\lambda_1, \lambda_2)$ is
\begin{equation}\begin{aligned}
\dim R_{(\lambda_1,\lambda_2)} = \frac{1}{2}(\lambda_1+1)(\lambda_2+1)(\lambda_1+\lambda_2+2).
\end{aligned}\end{equation}
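As a cross-check of the parametrization, one can enumerate $D(k)$ directly by inverting \eqref{highestWeight}: with $h_1+h_2+h_3=0$ one finds $h_1 = (\lambda_1+2\lambda_2)/3$ and $h_2 = (\lambda_1-\lambda_2)/3$, and the count of alcove points should equal the number $(k+1)(k+2)/2$ of integrable weights of $\widehat{su}(3)_k$. A sketch (helper names ours):

```python
from fractions import Fraction as F

def alcove_points(k):
    """Holonomies (h1, h2, h3) in D(k), built from highest weights (l1, l2)."""
    pts = []
    for l1 in range(k + 1):
        for l2 in range(k + 1 - l1):
            # invert l1 = h2 - h3, l2 = h1 - h2 with h1 + h2 + h3 = 0
            h1, h2 = F(l1 + 2 * l2, 3), F(l1 - l2, 3)
            h3 = -h1 - h2
            # the three conditions defining D(k)
            assert h1 >= h2 and h1 >= -2 * h2 and 2 * h1 + h2 <= k
            pts.append((h1, h2, h3))
    return pts

def dim_rep(l1, l2):
    """Dimension of the SU(3) irrep with highest weight (l1, l2)."""
    return (l1 + 1) * (l2 + 1) * (l1 + l2 + 2) // 2

assert len(alcove_points(3)) == 10  # the ten allowed points at k = 3
assert [dim_rep(*w) for w in [(0, 0), (1, 0), (1, 1), (2, 0)]] == [1, 3, 8, 6]
```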
Next we proceed to compute the index in the Coulomb branch limit. As taking the Coulomb branch limit simplifies the index computation dramatically, one can easily write down the index for theory A\footnote{In \cite{Gadde:2015xta} the authors try to compensate for the non-integral holonomies of $n_a$ and $n_b$ by shifting the gauge holonomies ${\bf m}$. In contrast, our approach is free from such subtleties because we allow non-integral holonomies for all flavor symmetries as long as the Dirac quantization condition is obeyed.}:
\begin{equation}
\begin{aligned}
{{\mathcal I}}_A& (\ft, {\mathbf{\widetilde{m}}}_a, {\mathbf{\widetilde{m}}}_b, n_a, n_b) \\[0.5em]
& = \sum_{\bf m} I_{H/2} (\ft, {\bf m},{\mathbf{\widetilde{m}}}_a, n_a) \int \prod_{i=1}^2 \frac{dz_i}{2\pi i z_i} \Delta(z)_{\bf m} I_V(\ft, z, {\bf m}) I_{H/2} (\ft, -{\bf m}, {\mathbf{\widetilde{m}}}_b, n_b),
\end{aligned}
\end{equation}
where ${\mathbf{\widetilde{m}}}_a, {\mathbf{\widetilde{m}}}_b$ and $n_a, n_b$ denote the flavor holonomies for $SU(3)_{a,b}$ and $U(1)_{a,b}$ respectively. It is illustrative to write down what the gauge integrals look like:
\begin{equation}
I_V(\ft, {\bf m}) =\int \prod_{i=1}^2 \frac{dz_i}{2\pi i z_i} \Delta(z)_{\bf m} I_V(\ft, z, {\bf m}) = I_V^0(\ft, {\bf m}) \times \begin{cases} \frac{1}{(1-\ft^2)(1-\ft^3)}, \ \ m_1 \equiv m_2 \equiv m_3\ \pmod k,\\[0.5em]
\frac{1}{(1-\ft)(1-\ft^2)}, \ \ m_i \equiv m_j \not\equiv m_k\ \pmod k,\\[0.5em]
\frac{1}{(1-\ft)^2},\ \ m_1, m_2, m_3\ \text{pairwise distinct} \pmod k.
\end{cases}
\end{equation}
Except for the zero-point energy $I_V^0(\ft, {\bf m})$, this looks very much like our ``metric" for the $SU(3)$ equivariant Verlinde TQFT. Moreover,
\begin{equation}
I_{H/2} ({\bf m}, {\mathbf{\widetilde{m}}}_a, n_a)= \prod_{\psi \in R_{\Phi}} \ft^{\frac{1}{4} \pbra{ [\![ \psi({\bf m},{\mathbf{\widetilde{m}}}_a,n_a) ]\!]_k - \frac{1}{k} [\![ \psi({\bf m},{\mathbf{\widetilde{m}}}_a,n_a) ]\!]_k^2}},
\end{equation}
where for a half-hypermultiplet in the fundamental representation of $SU(3) \times SU(3)_a$ with positive $U(1)_a$ charge we have
\begin{equation}
\psi_{ij}( {\bf m}, {\mathbf{\widetilde{m}}}_a, n_a) = {\bf m}_i + {\mathbf{\widetilde{m}}}_{a,j} + n_a.
\end{equation}
Now we write down the index for theory B. Take the $SU(3)_a \times SU(3)_b \times SU(3)_c$ maximal subgroup of $E_6$ and gauge an $SU(2)$ subgroup of the $SU(3)_c$ flavor symmetry. This leads to the replacement
\begin{equation}
\{h_{c,1}, h_{c,2}, h_{c,3} \} \rightarrow \{ w + n_y, n_y - w, -2 n_y \},
\label{E6 gauge}
\end{equation}
where $n_y$ denotes the holonomy for the remaining $U(1)_y$ symmetry, and $n_s$ is the holonomy for the $U(1)_s$ flavor symmetry rotating the single hypermultiplet. We then write down the index of theory B as
\begin{equation}\begin{aligned}
{{\mathcal I}}_B (\ft, {\bf h}_a, {\bf h}_b, n_y, n_s) = \sum_w C^{E_6} ({\bf h}_a, {\bf h}_b, w, n_y) I_V (\ft, w) I_{H/2} (-w, n_s),
\label{theoryBindex}
\end{aligned}\end{equation}
where $I_V (\ft, w)$ is given by \eqref{SU2CBIV} with substitution $m \rightarrow w$, and $w = 0, 1/2, \dots, k/2$. Argyres-Seiberg duality tells us that
\begin{equation}\begin{aligned}
{{\mathcal I}}_A& (\ft, {\mathbf{\widetilde{m}}}_a, {\mathbf{\widetilde m}}_b, n_a, n_b) = {{\mathcal I}}_B (\ft, {\bf h}_a, {\bf h}_b, n_y, n_s),
\end{aligned}\end{equation}
with the following identification of the holonomy variables:
\begin{equation}\begin{aligned}
{\mathbf{\widetilde{m}}_a} & = {\bf h}_a, \ \ {\mathbf{\widetilde{m}}_b} = {\bf h}_b; \\[0.5em]
n_a & = \frac{1}{3}n_s - n_y, \ \ n_b = - \frac{1}{3}n_s - n_y.
\label{ChangeHolonomy}
\end{aligned}\end{equation}
On the right-hand side of the expression \eqref{theoryBindex} we can view the summation as a matrix multiplication with $w$ and $n_s$ being the row and column indices respectively. Then we can take the inverse of the matrix $I_{H/2} (-w, n_s)$, $I^{-1}_{H/2} (n_s, w')$, by restricting the range\footnote{As long as it satisfies the Dirac quantization condition, we do not have to know what the range of $n_s$ should be. For example, $n_s = 0,1/2,\dots, k/2$ is a valid choice.} of $n_s$ to be the same as $w$ and multiply it to both sides of \eqref{theoryBindex}. This moves the summation to the other side of the equation and gives:
\begin{equation}\begin{aligned}
\boxed{C^{E_6} (\ft, {\bf h}_a, {\bf h}_b, w, n_y,k) =\sum_{n_s} \frac{1}{ I_V (\ft, w)}{{\mathcal I}}_A (\ft, {\bf h}_a, {\bf h}_b, n_a, n_b,k) I_{H/2}^{-1} (n_s, w) }\ .
\end{aligned}\end{equation}
We now regard $C^{E_6} (\ft, {\bf h}_a, {\bf h}_b, {\bf h}_c,k)$ as the fusion coefficient of the 2d equivariant Verlinde algebra, and we have checked its associativity. Moreover, let us confirm that the index obtained in this way is symmetric under permutations of the three $SU(3)$ flavor fugacities, and that the flavor symmetry group is indeed enhanced to $E_6$. First of all, we have permutation symmetry for the three $SU(3)$ factors at, for instance, level $k=2$:
\begin{equation}\begin{aligned}
C^{E_6}\pbra{\frac{2}{3}, \frac{2}{3}, 0, 0, \frac{4}{3}, -\frac{2}{3}} = C^{E_6}\pbra{\frac{2}{3}, \frac{2}{3}, \frac{4}{3}, -\frac{2}{3}, 0, 0} = \dots = C^{E_6}\pbra{ \frac{4}{3}, -\frac{2}{3},\frac{2}{3}, \frac{2}{3}, 0, 0} = \frac{1+\ft^4}{1-\ft^3}.
\end{aligned}\end{equation}
To show that the index $C^{E_6}$ is invariant under the full $E_6$ symmetry, one needs to show that the two $SU(3)$ factors, combined with the $U(1)_y$ symmetry, enhance to an $SU(6)$ symmetry. The five Cartan elements of this $SU(6)$ group can be expressed as the combination of the fluxes \cite{Gadde:2015xta}:
\begin{equation}\begin{aligned}
\pbra{h_1^a - n_y, h_2^a - n_y, -h_1^a- h_2^a - n_y, h^b_1+n_y, h^b_2+n_y}.
\end{aligned}\end{equation}
Then the index should be invariant under permutations of the five Cartans. Note that the computation is almost the same as in \cite{Gadde:2015xta} except that not all permutations are allowed---an allowed permutation must satisfy the charge quantization condition. Excluding the disallowed permutations, we have verified that the global symmetry is enlarged to $E_6$.
Finally, at large $k$ our results reproduce those of \cite{Gadde:2015xta}, as can be checked by analyzing the large-$k$ limit of the matrix $ I_{H/2}^{-1} (n_s, w) $. Indeed, at large $k$ the matrix $I_{H/2}(w, n_s)$ simplifies to
\begin{equation}\begin{aligned}
I_{H/2} = \ft^{\frac{1}{2}(|w+n_s| + |-w+n_s|)} = \left(
\begin{array}{ccccccc} 1 & 0 & \ft & 0 & \ft^2 & 0 & \dots \\
0 & \sqrt{\ft} & 0 & \ft^{\frac{3}{2}} & 0 & \ft^{\frac{5}{2}} & \\
\ft & 0 & \ft & 0 & \ft^2 & 0 & \\
0 & \ft^{\frac{3}{2}} & 0 & \ft^{\frac{3}{2}} & 0 & \ft^{\frac{5}{2}} \\
\ft^2 & 0 & \ft^2 & 0 & \ft^2 & 0 &\\
0 & \ft^{\frac{5}{2}} & 0 & \ft^{\frac{5}{2}} & 0 & \ft^{\frac{5}{2}}\\
\vdots & & & & & & \ddots
\end{array}
\right).
\end{aligned}\end{equation}
Upon inversion it gives
\begin{equation}\begin{aligned}
I^{-1}_{H/2} = \left( \begin{array}{ccccccc} \frac{1}{1-\ft} & 0 & -\frac{1}{1-\ft} & 0 & 0 & 0 & \dots \\
0 & \frac{1}{\sqrt{\ft}(1-\ft)} & 0 & -\frac{1}{\sqrt{\ft}(1-\ft)} & 0 & 0 & \\
-\frac{1}{1-\ft} & 0 & \frac{1+\ft}{\ft(1-\ft)} & 0 & -\frac{1}{\ft(1-\ft)} & 0 & \\
0 & -\frac{1}{\sqrt{\ft}(1-\ft)} & 0 & \frac{1+\ft}{\ft^{\frac{3}{2}}(1-\ft)} & 0 & -\frac{1}{\ft^{\frac{3}{2}}(1-\ft)} \\
0 & 0 & -\frac{1}{\ft(1-\ft)} & 0 & \frac{1+\ft}{\ft^2(1-\ft)} & 0 &\\
0 & 0 & 0 & -\frac{1}{\ft^{\frac{3}{2}}(1-\ft)} & 0 & \frac{1+\ft}{\ft^{\frac{5}{2}}(1-\ft)}\\
\vdots & & & & & & \ddots
\end{array} \right).
\end{aligned}\end{equation}
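The structure of this inverse is easy to verify exactly: for $w, n_s \geq 0$ one has $\tfrac{1}{2}(|w+n_s|+|n_s-w|) = \max(w,n_s)$, so each parity block of $I_{H/2}$ is the matrix $\ft^{\max(i,j)}$, whose inverse is tridiagonal. The sketch below (helper names ours) inverts a truncated integer-$w$ block over exact rationals; in line with the footnote below, only rows away from the truncation boundary are compared:

```python
from fractions import Fraction as F

def invert(M):
    """Gauss-Jordan inversion of a square matrix over exact rationals."""
    n = len(M)
    A = [list(row) + [F(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # pivot search
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]                # normalize pivot row
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

t, n = F(1, 3), 6
# integer-holonomy block of I_{H/2}: entry t^{max(w, n_s)} for w, n_s = 0..n-1
M = [[t ** max(i, j) for j in range(n)] for i in range(n)]
B = invert(M)
# interior entries match the displayed tridiagonal inverse exactly
assert B[0][0] == 1 / (1 - t) and B[0][1] == -1 / (1 - t) and B[0][2] == 0
assert B[1][1] == (1 + t) / (t * (1 - t)) and B[1][2] == -1 / (t * (1 - t))
```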
Here $w$ runs over $0, 1/2, 1, 3/2, \cdots$. For a generic value of $w$ only three elements in a single column can contribute to the index\footnote{By ``generic" we mean the first and the second column are not reliable due to our choice of domain for $w$. It is imaginable that if we take $w$ to be a half-integer in $(-\infty, +\infty)$, then such ``boundary ambiguity'' can be removed. But we refrain from doing this to keep the weights in the Weyl alcove.}. For large $k$ the index of the vector multiplet becomes
\begin{equation}\begin{aligned}
I_V(w) = \ft^{-2w} \pbra{\frac{1}{1-\ft}},
\end{aligned}\end{equation}
and we get
\begin{equation}\begin{aligned}
C^{E_6} (\ft, {\bf h}_a, {\bf h}_b, w, n_y) = \ft^w & \left[(1+\ft){{\mathcal I}}_A (\ft, {\bf h}_a, {\bf h}_b, n_y, w,k) \right.\\[0.5em]
& \left.- \ft~ {{\mathcal I}}_A (\ft, {\bf h}_a, {\bf h}_b, n_y, w-1,k) - {{\mathcal I}}_A (\ft, {\bf h}_a, {\bf h}_b, n_y, w+1,k)\right],
\end{aligned}\end{equation}
which exactly agrees with \cite{Gadde:2015xta}.
\subsection{$SU(3)$ equivariant Verlinde algebra}
Now with all the basic building blocks of the 2d TQFT at our disposal, we assemble the pieces and see what interesting information can be extracted.
The metric of the TQFT is given by the Coulomb branch index of an $SU(3)$ vector multiplet, with a possible normalization factor. Note that conjugation of representations acts on a highest weight state $(\lambda_1,\lambda_2)$ via
\begin{equation}\begin{aligned}
\overline{ (\lambda_1,\lambda_2)} = (\lambda_2, \lambda_1),
\end{aligned}\end{equation}
and the metric $\eta^{\lambda\mu}$ is non-vanishing if and only if $\mu = \overline \lambda$. Let
\begin{equation}\begin{aligned}
N(\lambda_1, \lambda_2, k) = \ft^{-\frac{1}{k}\left(\dbra{\lambda_1}_k\dbra{-\lambda_1}_k+\dbra{\lambda_2}_k\dbra{-\lambda_2}_k+\dbra{\lambda_1+\lambda_2}_k\dbra{-\lambda_1-\lambda_2}_k\right)},
\end{aligned}\end{equation}
and we rescale our TQFT states as
\begin{equation}
( \lambda_1, \lambda_2 )' = N(\lambda_1, \lambda_2, k)^{-\frac{1}{2}} (\lambda_1, \lambda_2).
\end{equation}
Then the metric $\eta$ takes a simple form (here we define $\lambda_3 = \lambda_1 + \lambda_2$):
\begin{equation}
\eta^{(\lambda_1, \lambda_2)\overline{(\lambda_1, \lambda_2)}} = \begin{cases} \frac{1}{(1-\ft^2)(1-\ft^3)}, \ \ \text{ if}\ \ \dbra{\lambda_1}_k = \dbra{\lambda_2}_k = 0,\\[0.3em]
\frac{1}{(1-\ft)(1-\ft^2)}, \ \ \ \text{ if only one}\ \ \dbra{\lambda_i}_k = 0\ \ \text{for}\ \ i = 1,2,3,\\[0.3em]
\frac{1}{(1-\ft)^2}, \ \ \ \ \ \ \ \ \ \text{ if all}\ \ \dbra{\lambda_i}_k \neq 0.
\end{cases}
\label{SU(3)C}
\end{equation}
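In code, the three cases are distinguished by how many of $\dbra{\lambda_1}_k, \dbra{\lambda_2}_k, \dbra{\lambda_3}_k$ vanish; note that if two of them vanish, so does the third, so only $0$, $1$ or $3$ can occur. A sketch (helper name ours, evaluated at numerical $\ft$):

```python
def eta_su3(l1, l2, k, t):
    """Diagonal metric entry for the state (l1, l2) at level k."""
    zeros = sum(1 for l in (l1, l2, l1 + l2) if l % k == 0)
    if zeros == 3:  # unbroken SU(3): both [[l1]]_k and [[l2]]_k vanish
        return 1.0 / ((1 - t**2) * (1 - t**3))
    if zeros == 1:  # enhancement to SU(2) x U(1): exactly one [[l_i]]_k vanishes
        return 1.0 / ((1 - t) * (1 - t**2))
    return 1.0 / (1 - t)**2  # generic point: U(1) x U(1)

t = 0.5
assert abs(eta_su3(0, 0, 3, t) - 1 / ((1 - t**2) * (1 - t**3))) < 1e-12
assert abs(eta_su3(1, 2, 3, t) - 1 / ((1 - t) * (1 - t**2))) < 1e-12  # only [[l3]]_k = 0
assert abs(eta_su3(1, 1, 3, t) - 1 / (1 - t)**2) < 1e-12
```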
Next we find the ``pair of pants'' $f_{\pbra{\lambda_1,\lambda_2}\pbra{\mu_1,\mu_2}\pbra{\nu_1,\nu_2}}$, from the normalized Coulomb branch index of $E_6$ SCFT:
\begin{equation}\begin{aligned}
f_{\pbra{\lambda_1,\lambda_2}\pbra{\mu_1,\mu_2}\pbra{\nu_1,\nu_2}} = \pbra{N(\lambda_1, \lambda_2, k)N(\mu_1, \mu_2, k)N(\nu_1, \nu_2, k)}^{\frac{1}{2}}C^{E_6} (\ft, \lambda_1,\lambda_2; \mu_1,\mu_2; \nu_1,\nu_2; k).
\label{SU(3)PoP}
\end{aligned}\end{equation}
Along with the metric we already have, they define a $\ft$-deformation of the $\widehat{su}(3)_k$ fusion algebra. For instance we could write down at level $k=3$:
\begin{equation}\begin{aligned}
(1,0) \otimes (1,0) & = \frac{1+\ft+\ft^3}{(1-\ft)(1-\ft^2)(1-\ft^3)}(0,1)\oplus\frac{1+2\ft^2}{(1-\ft) (1-\ft^2) (1-\ft^3)}(2,0)\\[0.5em]
& \oplus \frac{\ft(2+\ft)}{(1-\ft)(1-\ft^2)(1-\ft^3)}(1,2).
\end{aligned}\end{equation}
Using dimensions to denote representations, the above reads
\begin{equation}\begin{aligned}
{\bf 3} \times {\bf 3} = & \frac{1+\ft+\ft^3}{(1-\ft)(1-\ft^2)(1-\ft^3)}{\mathbf{\overline 3}}+\frac{1+2\ft^2}{(1-\ft) (1-\ft^2) (1-\ft^3)}{\bf 6}\\[0.5em]
& + \frac{\ft(2+\ft)}{(1-\ft)(1-\ft^2)(1-\ft^3)}{\mathbf{ \overline{15}}}.
\end{aligned}\end{equation}
When $\ft = 0$, it reproduces the fusion rules of the affine $\widehat{su}(3)_k$ algebra, and $f_{\lambda\mu\nu}$ becomes the fusion coefficient $N_{\lambda \mu \nu}^{(k)}$. These fusion coefficients are worked out combinatorially in \cite{Gepner:1986wi, Kirillov:1992np,Begin:1992rt}. We review details of the results in appendix \ref{sec: FusionCoeff}.
With pairs of pants and cylinders, one can glue them together to get the partition function on a closed Riemann surface, which gives the $SU(3)$ equivariant Verlinde formula: a $\ft$-deformation of the $SU(3)$ Verlinde formula. For genus $g=2$, at large $k$, one can obtain
\begin{equation}\begin{aligned}
& \dim_{\beta} {{\mathcal H}}_{{\rm CS}} (\Sigma_{2,0}; SL(3,{\mathbb{C}}), k)\\[0.5em]
& = \frac{1}{20160}k^8 + \frac{1}{840} k^7 + \frac{7}{480}k^6 + \frac{9}{80}k^5 + \frac{529}{960}k^4 + \frac{133}{80}k^3 + \frac{14789}{5040}k^2 + \frac{572}{210} k + 1\\[0.5em]
& + \pbra{\frac{1}{2520}k^8 + \frac{1}{84} k^7 + \frac{17}{120}k^6 + \frac{17}{20}k^5 + \frac{319}{120}k^4 + \frac{15}{4}k^3 + \frac{503}{2520}k^2 - \frac{1937}{420} k - 3}\ft\\[0.5em]
& + \pbra{\frac{1}{560}k^8 + \frac{9}{140} k^7 + \frac{31}{40}k^6 + \frac{39}{10}k^5 + \frac{727}{80}k^4 + \frac{183}{20}k^3 + \frac{369}{140}k^2 - \frac{27}{70} k + 1}\ft^2\\[0.5em]
& + \dots,
\end{aligned}\end{equation}
and the reader can check that the degree zero piece in $\ft$ is the usual $SU(3)$ Verlinde formula for $g=2$ \cite{guha2009witten}:
\begin{equation}\begin{aligned}
\dim & {{\mathcal H}}(\Sigma_{g,0}; SU(3), k)\\[0.5em]
& = \frac{(k+3)^{2g-2} 6^{g-1}}{2^{7g-7}} \sum_{\lambda_1, \lambda_2} \pbra{\sin \frac{\pi(\lambda_1+1)}{k+3}\sin \frac{\pi (\lambda_2+1)}{k+3}\sin \frac{\pi(\lambda_1+\lambda_2+2)}{k+3}}^{2-2g},
\end{aligned}\end{equation}
expressed as a polynomial in $k$.
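This agreement can be checked numerically: the degree-zero polynomial quoted above matches a direct evaluation of the Verlinde sum for small $k$ (our own sketch; the range of $k$ tested is an illustrative choice):

```python
import math
from fractions import Fraction

def verlinde_su3(k, g=2):
    """Direct numerical evaluation of the SU(3) Verlinde formula for genus g."""
    pref = (k + 3) ** (2 * g - 2) * 6 ** (g - 1) / 2 ** (7 * g - 7)
    total = 0.0
    for l1 in range(k + 1):
        for l2 in range(k + 1 - l1):
            s = (math.sin(math.pi * (l1 + 1) / (k + 3))
                 * math.sin(math.pi * (l2 + 1) / (k + 3))
                 * math.sin(math.pi * (l1 + l2 + 2) / (k + 3)))
            total += s ** (2 - 2 * g)
    return pref * total

def ft0_piece(k):
    """The t^0 coefficient of the g = 2 equivariant formula quoted above."""
    coeffs = [Fraction(1, 20160), Fraction(1, 840), Fraction(7, 480),
              Fraction(9, 80), Fraction(529, 960), Fraction(133, 80),
              Fraction(14789, 5040), Fraction(572, 210), Fraction(1)]
    return sum(c * k ** (8 - i) for i, c in enumerate(coeffs))

print(all(round(verlinde_su3(k)) == ft0_piece(k) for k in range(6)))  # True
```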
For a 2d TQFT, the state associated with the ``cap" contains interesting information, namely the ``cap state" tells us how to close a puncture. Moreover, there are many close cousins of the cap. There is one type which we call the ``central cap'' that has a defect with central monodromy with the Levi subgroup being the entire gauge group (there is no reduction of the gauge group when we approach the singularity). For $SU(3)$ equivariant Verlinde algebra, besides the ``identity-cap'' the central cap also includes ``$\omega$-cap'' and ``$\omega^2$-cap,'' and the corresponding TQFT states are denoted by $|\phi \rangle_1, |\phi \rangle_\omega$ and $| \phi \rangle_{\omega^2}$. One can also insert on the cap a minimal puncture (gauge group only reduces to $SU(2)\times U(1)$ as opposed to $U(1)^3$ for maximal punctures) and the corresponding states can be expressed as linear combinations of the maximal puncture states which we use as the basis vectors of the TQFT Hilbert space.
The cap state can be deduced from $f$ and $\eta$ written in \eqref{SU(3)PoP} and \eqref{SU(3)C}, since closing a puncture on a three-punctured sphere gives a cylinder. In algebraic language,
\begin{equation}\begin{aligned}
f_{\lambda \mu \phi} = \eta_{\lambda\mu}.
\end{aligned}\end{equation}
One can easily solve this equation, obtaining
\begin{equation}\begin{aligned}
| \phi \rangle_1 = |0, 0 \rangle - \ft(1+\ft) | 1,1 \rangle + \ft^2 |0,3 \rangle + \ft^2 |3,0 \rangle - \ft^3 |2,2 \rangle.
\label{1cap}
\end{aligned}\end{equation}
For the two remaining caps, by multiplying\footnote{More precisely, we multiply holonomies with these central elements and translate the new holonomies back to weights.} $\omega$ and $\omega^2$ on the above equation \eqref{1cap}, we obtain
\begin{equation}\begin{aligned}\label{CentralCap}
| \phi \rangle_{\omega} & = |k, 0 \rangle - \ft(1+\ft) | k-2,1 \rangle + \ft^2 |k-3,0 \rangle + \ft^2 |k-3,3 \rangle - \ft^3 |k-4,2 \rangle,\\[0.5em]
| \phi \rangle_{\omega^2} & = |0, k \rangle - \ft(1+\ft) | 1,k-2 \rangle + \ft^2 |0,k-3 \rangle + \ft^2 |3,k-3 \rangle - \ft^3 |2,k-4 \rangle.
\end{aligned}\end{equation}
When closing a maximal puncture using $| \phi \rangle_{\omega}$, we have a ``twisted metric'' $\eta'_{\lambda \mu}$ which is non-zero if and only if $(\mu_1,\mu_2) = (\lambda_1, k-\lambda_1 - \lambda_2)$. When closing a maximal puncture using $| \phi \rangle_{\omega^2}$, we have another twisted metric $\eta''_{\lambda \mu}$ which is non-zero if and only if $(\mu_1,\mu_2) = (k-\lambda_1 - \lambda_2, \lambda_2)$. When there are insertions of central monodromies on the Riemann surface, it is easier to incorporate them into twisted metrics instead of using the expansion \eqref{CentralCap}.
For minimal punctures, the holonomy is of the form $(u,u,-2u)$, modulo the action of the affine Weyl group, where $u$ takes value $0, 1/3, 2/3, \dots, k - 2/3, k-1/3$. We can use index computation to expand the corresponding state $|u\rangle_{U(1)}$ in terms of maximal punctures. After scaling by a normalization constant
\begin{equation}\begin{aligned}
\ft^{\frac{1}{2} \pbra{\dbra{3u}_k - \frac{1}{k}\dbra{3u}^2_k}},
\end{aligned}\end{equation}
the decomposition is given by the following:
\begin{enumerate}
\item[(1).] $\langle 0, 0 \rangle - \ft^2 \langle 1,1 \rangle$, if $k = u$ or $u=0$;
\item[(2).] $\langle 3u, 0 \rangle - \ft \langle 3u-1,2 \rangle$, if $k > 3u >0$;
\item[(3).] $\langle 3u, 0 \rangle - \ft^2 \langle 3u-2,1 \rangle$, if $k = 3u$;
\item[(4).] $\langle 2k-3u, 3u-k \rangle - \ft \langle 2k-3u-1,3u-k-1 \rangle$, if $3u/2 < k < 3u$;
\item[(5).] $\langle 0, 3u/2 \rangle - \ft^2 \langle 1, 3u/2-2 \rangle$, if $k = 3u/2$;
\item[(6).] $\langle 0, 3k-3u \rangle - \ft \langle 2, 3k-3u-1 \rangle$, if $u < k < 3u/2$.
\end{enumerate}
The above formulae have a natural ${\mathbb{Z}}_2$-symmetry of the form ${\mathcal{C}} \circ \psi$, where
\begin{equation}\begin{aligned}
\psi: (u,k) \rightarrow (k-u,k),
\end{aligned}\end{equation}
and ${\mathcal C}$ is the conjugation operator that acts linearly on Hilbert space:
\begin{equation}\begin{aligned}
{{\mathcal C}}: (\lambda_1, \lambda_2) \rightarrow (\lambda_2, \lambda_1), \ \ \ (\lambda_1,\lambda_2) \in {{\mathcal H}}.
\end{aligned}\end{equation}
This $\mathbb{Z}_2$ action sends each state in the above list to itself. Moreover, it is interesting to observe that when $\ft = 0$, increasing $u$ from $0$ to $k$ corresponds to moving along the edges of the Weyl alcove ($c.f.$ figure \ref{WeylAlcove}) a full cycle. This may not be a surprise because closing a maximal puncture actually implies that one only considers states whose $SU(3)$ holonomy $(h_1, h_2, h_3)$ preserves at least $SU(2) \subset SU(3)$ symmetry, which are precisely the states lying on the edges of the Weyl alcove.
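The $\mathbb{Z}_2$ statement can be verified mechanically. The sketch below is our own encoding: writing $m = 3u$, it records each state as a map from weights to powers of $\ft$ (the overall minus sign on the second term is left implicit) and checks that ${\mathcal C} \circ \psi$ fixes every entry of the list:

```python
def state(m, k):
    """Minimal-puncture state with u = m/3, as {(l1, l2): power of t};
    the second basis vector always enters with a minus sign."""
    if m == 0 or m == 3 * k:                 # case (1): u = 0 or u = k
        return {(0, 0): 0, (1, 1): 2}
    if m < k:                                # case (2): k > 3u > 0
        return {(m, 0): 0, (m - 1, 2): 1}
    if m == k:                               # case (3): k = 3u
        return {(m, 0): 0, (m - 2, 1): 2}
    if m < 2 * k:                            # case (4): 3u/2 < k < 3u
        return {(2 * k - m, m - k): 0, (2 * k - m - 1, m - k - 1): 1}
    if m == 2 * k:                           # case (5): k = 3u/2
        return {(0, k): 0, (1, k - 2): 2}
    return {(0, 3 * k - m): 0, (2, 3 * k - m - 1): 1}   # case (6)

def conj(s):
    # charge conjugation C: (l1, l2) -> (l2, l1)
    return {(l2, l1): e for (l1, l2), e in s.items()}

k = 7
for m in range(3 * k + 1):
    # psi: u -> k - u, i.e. m -> 3k - m; C o psi fixes each state
    assert conj(state(3 * k - m, k)) == state(m, k)
print("Z2 check passed")
```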
\subsection{From algebra to geometry}
This TQFT structure reveals a lot of interesting geometric properties of moduli spaces of rank $3$ Higgs bundles. But as the current paper is a physics paper, we only look at one example---but arguably the most interesting one---the moduli space ${{\mathcal M}}_H(\Sigma_{0,3}, SU(3))$. In particular, this moduli space was studied in \cite{Gothen1994,garcia2004betti} and \cite{boalch2012hyperkahler} from the point of view of differential equations. Here, from index computation, we can recover some of the results in the mathematical literature and reveal some new features for this moduli space. In particular, we propose the following formula for the fusion coefficient $f_{\lambda\mu\nu}$:
\begin{equation}
f_{\pbra{\lambda_1,\lambda_2}\pbra{\mu_1,\mu_2}\pbra{\nu_1,\nu_2}} = \ft^{k\eta_0} \pbra{\frac{k{\rm{Vol}}({\mathcal M}) + 1}{1-\ft} + \frac{2\ft}{(1-\ft)^2} }+ \frac{Q_1(\ft)}{(1-\ft^{-1})(1-\ft^2)} + \frac{Q_2(\ft)}{(1-\ft^{-2})(1-\ft^3)}.
\label{SU(3)Equiv}
\end{equation}
This ansatz comes from Atiyah-Bott localization of the equivariant integral done in similar fashion as in \cite{equivariant}. The localization formula enables us to write the fusion coefficient $f$ in \eqref{SU(3)PoP} as a summation over fixed points of the $U(1)_H$ Hitchin action. In \eqref{SU(3)Equiv}, $\eta_0$ is the moment map\footnote{Recall the $U(1)_H$ Hitchin action is generated by a Hamiltonian, which we call $\eta$---not to be confused with the metric, which will make no appearance from now on. $\eta$ is also the norm squared of the Higgs field.} for the lowest critical manifold ${\mathcal M}$. When the undeformed fusion coefficients $N_{\lambda\mu\nu}^{(k)} \neq 0$, one has
\begin{equation}\begin{aligned}
k{\rm{Vol}}({\mathcal M}) + 1 = N_{\lambda\mu\nu}^{(k)}, \ \ \eta _0 = 0.
\end{aligned}\end{equation}
Numerical computation shows that $Q_{1,2}(\ft)$ are individually a sum of three terms of the form
\begin{equation}\begin{aligned}
Q_1(\ft) = \sum_{i=1}^3 \ft^{k\eta_i}, \ \ \ \ Q_2(\ft) = \sum_{j=4}^6 \ft^{k\eta_j},
\end{aligned}\end{equation}
where $\eta_i$ are interpreted as the moment maps at each of the six higher fixed points of $U(1)_H$.
The moduli space ${\mathcal M}$ of $SU(3)$ flat connections on $\Sigma_{0,3}$ is either empty, a point or ${\mathbb{C}}{\mathbf{P}}^1$ depending on the choice of $(\lambda,\mu,\nu)$ \cite{hayashi1999moduli}, and when it is empty, the lowest critical manifold of $\eta$ is a ${\mathbb{C}}{\mathbf{P}}^1$ with $\eta_0>0$ and we will still use ${\mathcal M}$ to denote it. The fixed loci of ${\mathcal M}_H(\Sigma_{0,3}, SU(3))$ under $U(1)$ action consist of ${\mathcal M}$ and the six additional points, and there are Morse flow lines traveling between them. The downward Morse flow coincides with the nilpotent cone \cite{Biswas}---the singular fiber of the Hitchin fibration, and its geometry is depicted in figure \ref{SingularFiber}. The Morse flow carves out six spheres that can be divided into two classes. Intersections of $D^{(1)}_i \bigcap D^{(2)}_i$ are denoted as $P^{(1)}_{1,2,3}$, and at the top of these $D^{(2)}_i$'s there are $P^{(2)}_{1,2,3}$. We also use $P_1,\ldots,P_6$ and $D_1, \ldots, D_6$ sometimes to avoid clutter. The nilpotent cone can be decomposed into
\begin{equation}
{{\mathcal N}} = {{\mathcal M}} \cup D_i^{(1)} \cup D_j^{(2)},
\end{equation}
which gives an affine $E_6$ singularity (IV$^*$ in Kodaira's classification) of the Hitchin fibration. Knowing the singular fiber structure, we can immediately read off the Poincar\'{e} polynomial for ${{\mathcal M}}_H(\Sigma_{0,3}, SU(3))$:
\begin{equation}\begin{aligned}
{\mathcal P}_{{\mathfrak r}} = 1 + 7 {\mathfrak r}^2,
\end{aligned}\end{equation}
which is the same as that given in \cite{garcia2004betti}.
To use the Atiyah-Bott localization formula, we also need to understand the normal bundle to the critical manifolds. For the base, the normal bundle is the cotangent bundle with $U(1)_H$ weight 1. Its contribution to the fusion coefficient is given by
\begin{equation}
\ft^{k\eta_0}\int_{{\mathcal M}}\frac{\mathrm{Td}({\mathbb{C}}{\mathbf{P}}^1)\wedge e^{k\omega}}{1-e^{-\beta+2\omega'}}= \ft^{k\eta_0} \pbra{\frac{k{\rm{Vol}}({\mathcal M}) + 1}{1-\ft} + \frac{2\ft}{(1-\ft)^2}}.
\end{equation}
For the higher fixed points, the first class $P^{(1)}$ has normal bundle ${\mathbb C}[-1]\oplus{\mathbb C}[2]$ with respect to $U(1)_H$, which gives a factor
\begin{equation}
\frac{1}{(1-\ft^{-1})(1-\ft^2)}
\end{equation}
multiplying $\ft^{k\eta_{1,2,3}}$. For the second class $P^{(2)}$, the normal bundle is ${\mathbb C}[-2]\oplus{\mathbb C}[3]$ and we instead have a factor
\begin{equation}
\frac{1}{(1-\ft^{-2})(1-\ft^3)}.
\end{equation}
In this paper, we won't give the analytic expression for the seven moment maps and will leave \eqref{SU(3)Equiv} as it is. Instead, we will give a relation between them:
\begin{equation}\begin{aligned}
2k & = 6 ( N^{(k)}_{\lambda\mu\nu} - 1) + 3k(\eta_1+\eta_2 + \eta_3) + k(\eta_4+\eta_5+\eta_6)\\[0.5em]
& = 6 k {\rm{Vol}}({\mathcal M}) + 3k(\eta_1+\eta_2 + \eta_3) + k(\eta_4+\eta_5+\eta_6).
\end{aligned}\end{equation}
This is verified numerically and can be explained from geometry, by noticing that the moment maps are related to the volumes of the $D$'s:
\begin{equation}\begin{aligned}
{\rm{Vol}}(D_1) & = \eta_1,\ \ {\rm Vol}(D_2) = \eta_2, \ \ {\rm Vol}(D_3) = \eta_3,\\[0.5em]
{\rm{Vol}}(D_4) & =\frac{ \eta_4-\eta_1}{2},\ \ {\rm Vol}(D_5) = \frac{ \eta_5-\eta_2}{2}, \ \ {\rm{Vol}}(D_6) = \frac{ \eta_6-\eta_3}{2}.
\label{ModuliVolume}
\end{aligned}\end{equation}
The factor $2$ in the second line of \eqref{ModuliVolume} is related to the fact that $U(1)_H$ rotates the $D^{(2)}$'s twice as fast as it rotates the $D^{(1)}$'s. Then we get the following relation between the volume of the components of ${\mathcal N}$:
\begin{equation}\begin{aligned}
{\rm{Vol}}(\mathbf{F}) = 6 {\rm Vol}({\mathcal M}) + 4\sum_{i=1}^3 {\rm Vol}(D_i) + 2\sum_{i=4}^6{\rm Vol}(D_i).
\label{Vrelation}
\end{aligned}\end{equation}
Here $\mathbf{F}$ is a generic fiber of the Hitchin fibration and has volume
\begin{equation}\begin{aligned}
{\rm{Vol}}(\mathbf{F}) = 2.
\end{aligned}\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=11cm]{SingularFiber.png}
\caption[The illustration of the nilpotent cone in ${{\mathcal M}}_H(\Sigma_{0,3}, SU(3))$]{The illustration of the nilpotent cone in ${{\mathcal M}}_H(\Sigma_{0,3}, SU(3))$. Here ${\mathcal M}$ is the base ${\mathbb{C}}{\mathbf{P}}^1$, $D_{1,2,3}$ consist of downward Morse flows from $P_{1,2,3}$ to the base, while $D_{4,5,6}$ include the flows from $P_{4,5,6}$ to $P_{1,2,3}$.}
\label{SingularFiber}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{E6.png}
\caption{The affine $\widehat{E}_6$ extended Dynkin diagram. The Dynkin label gives the multiplicity of each node in the decomposition of the null vector.}
\label{E6Diag}
\end{figure}
The intersection form of different components in the nilpotent cone gives the Cartan matrix of affine $E_6$. Figure \ref{E6Diag} is the Dynkin diagram of $\widehat{E}_6$, and coefficients in \eqref{Vrelation} are Dynkin labels on the corresponding node. These numbers tell us the combination of $D$'s and ${\mathcal M}$ that give a null vector $\mathbf{F}$ of $\widehat{E}_6$.
\subsection{Comments on $T_N$ theories}
The above procedure can be generalized to arbitrary rank, for all $T_N$ theories, if we employ the generalized Argyres-Seiberg dualities.
There are in fact several ways to generalize Argyres-Seiberg duality \cite{Tachikawa:2015bga, Gaiotto:2009we, Chacaltana:2010ks}. For our purposes, we want no punctures of the $T_N$ theory to be closed under dualities, so we need the following setup \cite{Gaiotto:2009we}.
We start with a linear quiver gauge theory A' with $N-2$ nodes of $SU(N)$ gauge groups, and at each end of the quiver we associate $N$ hypermultiplets in the fundamental representation of $SU(N)$. One sees immediately that each gauge node is automatically superconformal. Geometrically, we actually start with a punctured Riemann sphere with two full $SU(N)$ punctures and $N-1$ simple punctures. Then, the $N-1$ simple punctures are brought together and a hidden $SU(N-1)$ gauge group becomes very weak. In our original quiver diagram, such a procedure of colliding $N-1$ simple punctures corresponds to attaching a quiver tail of the form $SU(N-1) - SU(N-2) - \cdots - SU(2)$ with a single hypermultiplet attached to the last $SU(2)$ node. See figure \ref{TN_AS} for the quiver diagrams and figure \ref{TN_ASGeo} for the geometric realization.
\begin{figure*}[htbp]
\begin{adjustwidth}{-2.0cm}{}
\centering
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TN_Linear.png}
\caption*{(a)}
\end{subfigure}
\hspace{1.5cm}
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=8cm]{TNdual.png}
\caption*{(b)}
\end{subfigure}
\end{adjustwidth}
\caption[Illustration of generalized Argyres-Seiberg duality for the $T_N$ theories.]{Illustration of generalized Argyres-Seiberg duality for the $T_N$ theories. (a) The theory A', which is a linear quiver gauge theory with $N-2$ $SU(N)$ vector multiplets. Between each gauge node there is a bi-fundamental hypermultiplet, and at each end of the quiver there are $N$ fundamental hypermultiplets. In the quiver diagram we omit the $U(1)^{N-1}$ baryonic symmetries. (b) The theory B' is obtained by gauging an $SU(N-1)$ subgroup of the $SU(N)^3$ flavor symmetry of $T_N$, giving rise to a quiver tail. Again the $U(1)$ symmetries are implicit in the diagram.}
\label{TN_AS}
\end{figure*}
\begin{figure*}[htbp]
\begin{adjustwidth}{-2.0cm}{}
\centering
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=9cm]{TNGeo.png}
\caption*{(a)}
\end{subfigure}
\hspace{1.5cm}
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=9cm]{TNdualGeo.png}
\caption*{(b)}
\end{subfigure}
\end{adjustwidth}
\caption[Illustration of the geometric realization of generalized Argyres-Seiberg duality for $T_N$ theories.]{Illustration of the geometric realization of generalized Argyres-Seiberg duality for $T_N$ theories. (a) The theory A' is obtained by compactifying 6d $(2,0)$ theory on a Riemann sphere with two maximal $SU(N)$ punctures and $N-1$ simple punctures. (b) The theory B', obtained by colliding $N-1$ simple punctures, is then the theory that arises from gauging an $SU(N-1)$ flavor subgroup of $T_N$ by a quiver tail.}
\label{TN_ASGeo}
\end{figure*}
Here we summarize briefly how to obtain the Lens space Coulomb index of $T_N$. Let ${{\mathcal I}}^N_{A'}$ be the index of the linear quiver theory, which depends on two $SU(N)$ flavor holonomies ${\bf h}_a$ and ${\bf h}_b$ (here we use the same notation as that of $SU(3)$) and $N-1$ $U(1)$-holonomies $n_i$ where $i=1,2,\dots, N-1$. In the infinite coupling limit, the dual weakly coupled theory B' emerges. One first splits the $SU(N)_c$ subgroup of the full $SU(N)^3$ flavor symmetry group into $SU(N-1)\times U(1)$ and then gauges the $SU(N-1)$ part with the first gauge node in the quiver tail. As in the $T_3$ case there is a transformation:
\begin{equation}\begin{aligned}
\pbra{h^c_1, h^c_2, \cdots, h^c_N} \rightarrow \pbra{w_1, w_2, \cdots w_{N-2}, {\widetilde n}_0}.
\end{aligned}\end{equation}
After the $SU(N-1)$ node, there are $N-2$ more $U(1)$ symmetries; we call the associated holonomies ${\widetilde n}_j$ with $j=1,2,\dots, N-2$. Again there exists a correspondence as in the $T_3$ case:
\begin{equation}\begin{aligned}
\pbra{n_1, n_2, \dots, n_{N-1}} \rightarrow \pbra{{\widetilde n}_0, {\widetilde n}_1, \dots, {\widetilde n}_{N-2}}.
\end{aligned}\end{equation}
Then the Coulomb branch index of the theory B' is
\begin{equation}\begin{aligned}
{{\mathcal I}}^N_{B'} ({\bf h}^a, {\bf h}^b, {\widetilde n}_0, {\widetilde n}_1, \dots, {\widetilde n}_{N-2}) = \sum_{\{w_i\}} C^{T_N}({\bf h}^a, {\bf h}^b, w_1, w_2, \cdots w_{N-2}, {\widetilde n}_0) {{\mathcal I}}_{T}(w_i; {\widetilde n}_1, \dots, {\widetilde n}_{N-2}),
\end{aligned}\end{equation}
where ${{\mathcal I}}_T$ is the index of the quiver tail:
\begin{equation}\begin{aligned}
{{\mathcal I}}_T (w_i; {\widetilde n}_1, \dots, {\widetilde n}_{N-2}) = & \sum_{\{w^{(N-2)}_i\}} \sum_{\{w^{(N-3)}_i\}} \dots \sum_{\{w^{(2)}_i\}}{I}^V_{N-1} (w_i) {I}^H_{N-1,N-2} (w_i, w^{(N-2)}_j, {\widetilde n}_1){I}^V_{N-2} (w^{(N-2)}_i)\\[0.5em]
& \times {I}^H_{N-2,N-3} (w^{(N-2)}_i, w^{(N-3)}_j, {\widetilde n}_2) {I}^V_{N-3} (w^{(N-3)}_i) \times \dots \\[0.5em]
& \times {I}^V_{2} (w^{(2)}_i) {I}^H_{2,1} (w^{(2)}_i, {\widetilde n}_{N-2}).
\end{aligned}\end{equation}
Now we can view ${{\mathcal I}}_T$ as a large matrix $\mathfrak{M}_{\{w_i\}, \{{\widetilde n}_j\}}$, and in fact it is a square matrix. Although the set $\{{\widetilde n}_j\}$ appears to be bigger, there is an affine Weyl group $\widehat{A}_{N-2}$ acting on it. From the geometric picture, one can directly see the $A_{N-2}=S_{N-2}$ permuting the $N-2$ holonomies ${\widetilde n}_j$; and the shift $n_i\rightarrow n_i+k$, which gives the same holonomy in $U(1)_i$, enlarges the symmetry to that of $\widehat{A}_{N-2}$. After taking quotient by this symmetry, one requires $\{{\widetilde n}_j\}$ to live in the Weyl alcove of $\mathfrak{su}(N-1)$, reducing the cardinality of the set $\{{\widetilde n}_j\}$ to that of $\{w_i\}$. Then one can invert the matrix $\mathfrak{M}_{\{w_i\}, \{{\widetilde n}_j\}}$ and obtain the index $C^{T_N}$, which in turn gives the fusion coefficients and the algebra structure of the $SU(N)$ equivariant TQFT.
The metric of the TQFT coming from the cylinder is also straightforward even in the $SU(N)$ case. It is always diagonal and only depends on the symmetry preserved by the holonomy labeled by the highest weight $\lambda$. For instance, if the holonomy is such that $SU(N) \rightarrow U(1)^n \times SU(N_1) \times SU(N_2) \times \cdots \times SU(N_l)$, we have
\begin{equation}\begin{aligned}
\eta^{\lambda {\overline \lambda}} = \frac{1}{(1-\ft)^n} \prod_{j=1}^l \frac{1}{(1-\ft^2)(1-\ft^3)\dots(1-\ft^{N_j})}.
\end{aligned}\end{equation}
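A direct transcription of this rule (our own code; exact arithmetic with `fractions` is a convenience), checked against the two $SU(3)$ cases quoted in \eqref{SU(3)C}:

```python
from fractions import Fraction

def metric_norm(t, n, blocks):
    """eta^{lambda lambdabar} for stabilizer U(1)^n x SU(N_1) x ... x SU(N_l):
    1/(1-t)^n times prod_j 1/((1-t^2)(1-t^3)...(1-t^{N_j}))."""
    val = Fraction(1) / (1 - t) ** n
    for N in blocks:
        for m in range(2, N + 1):
            val /= 1 - t ** m
    return val

t = Fraction(1, 10)
# SU(3), all [lambda_i]_k nonzero: stabilizer U(1)^2
assert metric_norm(t, 2, []) == 1 / (1 - t) ** 2
# SU(3), exactly one [lambda_i]_k = 0: stabilizer U(1) x SU(2)
assert metric_norm(t, 1, [2]) == 1 / ((1 - t) * (1 - t ** 2))
```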
This can be generalized to an arbitrary group $G$. If the holonomy given by $\lambda$ has stabilizer $G'\subset G$, the norm squared of $\lambda$ in the $G_k$ equivariant Verlinde algebra is
\begin{equation}
\eta^{\lambda {\overline \lambda}} = P(BG',\ft).
\end{equation}
Here $P(BG',\ft)$ is the Poincar\'e polynomial\footnote{More precisely, it is the Poincar\'e polynomial in variable $\ft^{1/2}$. But as $H^*(BG,{\mathbb C})$ is zero in odd degrees, this Poincar\'e polynomial is also a series in $\ft$ with integer powers.} of the infinite-dimensional classifying space of $G'$. In the ``maximal'' case of $G'=U(1)^r$, we indeed get
\begin{equation}
P\left(BU(1)^r,\ft\right)=P\left(\left({\mathbb{C}}{\mathbf{P}}^{\infty}\right)^r,\ft\right)=\frac{1}{(1-\ft)^r}.
\end{equation}
\section{Introduction}
Many edge-based AI applications have emerged in recent years, where various edge systems (e.g., PCs, smart phones, IoT devices) collect local data, collaboratively train a ML model, and use the model for AI-driven services. For example, smart cameras are deployed in surveillance systems \cite{ai_solution_edge,park2018wireless}, which capture local images/videos and jointly train a global face recognition model. In Industry AI Operations (AIOps) \cite{qu2017next}, chillers in a building or an area collect temperature and electricity consumption data in the households, and derive a global COP (Coefficient of Performance) prediction model \cite{chen2019data}.
A straightforward way of training a global model with data collected from multiple edge systems is to send all data to a central venue, e.g., a cloud data center, and train the datasets using an ML framework such as TensorFlow \cite{2016abadi-tensorflow}, MXNet \cite{2015chen-mxnet} or Caffe2 \cite{hazelwood2018applied}. Such a `data aggregation $\rightarrow$ training' approach may well incur large network bandwidth cost, due to the large volume and continuous generation of the data, as well as data security and privacy concerns. To alleviate these issues, collaborative, distributed training among edge systems has been advocated \cite{tao2018esgd}, where each edge system locally trains the dataset it collects, and exchanges model parameter {\em updates} (i.e., gradients) with the others through parameter servers \cite{wang2018adaptive_federated,konevcny2016federated,park2018wireless} (a.k.a., geo-distributed data parallel training).
Edge systems are intrinsically heterogeneous: their hardware configurations can be vastly different, leading to different computation and communication capacities. This raises significant new issues for parameter synchronization among the edge workers. In a data center environment, synchronous training (i.e., Bulk Synchronous Parallel (BSP) \cite{2013ssp} \cite{li2013parameter} \cite{low2012distributed}) is adopted by the majority of production ML jobs (based on our exchanges with large AI cloud operators), given the largely homogeneous worker configuration: each worker trains a mini-batch of input data and commits computed gradients to the PS; the PS updates the global model after receiving commits from all workers, and then dispatches updated model parameters to all workers, before each worker can continue training its next mini-batch. In the edge environment, the vastly different training speeds among edge devices call for a more asynchronous parameter synchronization model, to expedite ML model convergence.
{\em Stale Synchronous Parallel (SSP)} \cite{2013ssp} and {\em Totally Asynchronous Parallel (TAP)} \cite{2017hsieh-gaia} are representative asynchronous synchronization models. With TAP, the PS updates the global model upon commit from each individual worker, and dispatches updated model immediately to the respective worker; it has been proven that such complete asynchrony cannot ensure model convergence \cite{2017hsieh-gaia}. SSP enforces bounded asynchronization: fast workers wait for slow workers for a bounded difference in their training progress, in order to ensure model convergence. A few recent approaches have been proposed to further improve convergence speed of asynchronous training\cite{hadjis2016omnivore,wang2018adaptive} (see more in Sec.~\ref{sec:relatedwork}).
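For concreteness, SSP's bounded-staleness barrier can be sketched as follows (our own minimal sketch; the threshold semantics follow the description above):

```python
def ssp_must_wait(worker_clock, slowest_clock, staleness_bound):
    """Under SSP, a fast worker blocks once its training progress is more
    than `staleness_bound` steps ahead of the slowest worker."""
    return worker_clock - slowest_clock > staleness_bound

assert ssp_must_wait(12, 5, 4)       # 7 steps ahead with bound 4 -> must wait
assert not ssp_must_wait(8, 5, 4)    # 3 steps ahead -> keeps training
```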
We investigate how existing parameter synchronization models work in a heterogeneous edge environment with testbed experiments (Sec.~\ref{motivation}), and show that the waiting time (overall model training time minus gradient computation time) is still more than 50\% of the total training time with the representative synchronization models.
Aiming at minimizing the waiting time and optimizing computing resource utilization, we propose {\em ADSP} (ADaptive Synchronous Parallel), a new parameter synchronization model for distributed ML with heterogeneous edge systems. Our core idea is to let faster workers continue with their mini-batch training all the time, while enabling all workers to commit their model updates at the same strategically decided intervals, to ensure not only model convergence but also faster convergence. The highlights of {\em ADSP} are summarized as follows:
$\triangleright$ {\em ADSP} is tailored for distributed training in heterogeneous edge systems, which fully exploits individual workers' processing capacities by eliminating the waiting time.
$\triangleright$ {\em ADSP} actively controls the parameter update rate from each worker to the PS, to ensure that the total number of commits from each worker to the PS is roughly the same over time, no matter how fast or slow each worker performs local model training. Our algorithm exploits a momentum-based online search approach to identify the best cumulative commit number across all workers, and computes the commit rates of individual workers accordingly. ADSP is proven to converge after a sufficient number of training iterations.
$\triangleright$ We have done a full-fledged implementation of {\em ADSP} and evaluated it with real-world edge ML applications. Evaluation results show that it outperforms representative parameter synchronization models significantly in terms of model convergence time, scalability and adaptability to large heterogeneity.
\section{Background and Motivation}
\label{sec:relatedwork}
\subsection{SGD in PS Architecture}
Stochastic Gradient Descent (SGD) is the widely used algorithm for training neural networks \cite{hadjis2016omnivore,2016abadi-tensorflow}. Let $W_t$ be the set of global parameters of the ML model at $t$. A common model update method with SGD is:
\begin{equation}\label{eq:sgdMomentum}
\begin{aligned}
W_{t+1} = W_t - \eta \nabla \ell(W_t)+ \mu (W_{t} - W_{t-1})
\end{aligned}
\end{equation}
\noindent where $\nabla \ell(W_t)$ is the gradient, $\eta$ is the learning rate, and $\mu \in [0, 1]$ is the {\em momentum} introduced to accelerate the training process, since it accumulates gradients in the right direction to the optimal point \cite{polyak1964some,sutskever2013importance}.
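To illustrate Eqn.~(\ref{eq:sgdMomentum}) concretely, the sketch below applies the momentum update to a toy one-dimensional quadratic loss (the loss function, step count and hyperparameter values are our own illustrative choices):

```python
def momentum_step(w, w_prev, grad, eta=0.1, mu=0.9):
    """Eqn. (1): W_{t+1} = W_t - eta * grad + mu * (W_t - W_{t-1})."""
    return w - eta * grad + mu * (w - w_prev)

# minimize l(w) = w^2 (gradient 2w), starting from w = 1
w_prev = w = 1.0
for _ in range(200):
    w, w_prev = momentum_step(w, w_prev, 2 * w), w
assert abs(w) < 1e-3   # the iterate has converged near the minimum w = 0
```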
In widely adopted data-parallel training with the parameter server (PS) architecture \cite{2014chilimbi-Adam}, the SGD update rule can be applied at both the workers and the PS \cite{jiang2017heterogeneity}. Each worker holds a local copy of the ML model, its local dataset is divided into mini-batches, and the worker trains its model in an iterative fashion: in each step, the worker calculates gradients of model parameters using one mini-batch of its data, and may commit its gradients to the PS and pull the newest global model parameters from the PS. The PS updates the global model using Eqn.~(\ref{eq:sgdMomentum}) with gradients received from the workers and a \textit{global learning rate} $\eta$. In the case that a worker does not synchronize model parameters with the PS per step, the worker may carry out local model updates using computed gradients according to Eqn.~(\ref{eq:sgdMomentum}), where the gradients are multiplied by a \textit{local learning rate} $\eta^{\prime}$.
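In code, the two places where the update rule is applied might look as follows (a schematic sketch, not the papers' implementation; scalar parameters and the specific learning-rate values are illustrative):

```python
class ParameterServer:
    """Holds the global model; applies Eqn. (1) with the global rate eta."""
    def __init__(self, w0, eta=0.1, mu=0.9):
        self.w, self.w_prev, self.eta, self.mu = w0, w0, eta, mu

    def commit(self, grad):
        # global update with a gradient committed by a worker
        new_w = self.w - self.eta * grad + self.mu * (self.w - self.w_prev)
        self.w_prev, self.w = self.w, new_w
        return self.w                 # dispatched back to the worker

def local_update(w, w_prev, grad, eta_local=0.05, mu=0.9):
    """A worker step that does not synchronize: same rule, local rate eta'."""
    return w - eta_local * grad + mu * (w - w_prev)

ps = ParameterServer(w0=1.0)
w = ps.commit(grad=2.0)               # worker commits, pulls the new model
```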
\subsection{Existing Parameter Synchronization Models}
A parameter synchronization model specifies when each worker commits its gradients to the PS and whether it should be synchronized with updates from other workers; it critically affects convergence speed of model training. Three representative synchronization models, BSP, SSP and TAP, have been compared in \cite{2017hsieh-gaia}, which proves that BSP and SSP guarantee model convergence whereas TAP does not. Training convergence with BSP is significantly slower than SSP \cite{2013ssp}, due to BSP's strict synchronization barriers. Based on the three synchronization models, many studies have followed, aiming to reduce the convergence time by reducing communication contention or overhead \cite{chenround,lin2017deep,sun2016timed}, adjusting the learning rate \cite{jiang2017heterogeneity}, and others \cite{zhang2018stay}.
ADACOMM \cite{wang2018adaptive} allows accumulating local updates before committing to the PS, and adopts BSP-style synchronization model, i.e., all workers run $\tau$ training steps before synchronizing with the PS. It also suggests reducing the commit rate periodically according to model loss; however, the instability in loss values during training and the rapidly declining commit rate are not ideal for expediting training (according to our experiments).
Aiming at minimizing waiting time among heterogeneous workers, our synchronization model, ADSP, employs an online search algorithm to automatically find the optimal/near-optimal update commit rate for each worker to adopt.
\subsection{Impact of Waiting} \label{motivation}
\begin{figure}[!th]
\begin{center}
\includegraphics[width = .95\columnwidth]{waiting_time.pdf}
\caption{Training time breakdown with different parameter synchronization models.}
\label{figure:waiting_time}
\end{center}
\end{figure}
We divide the time a worker spends in each training step into two parts: (i) the {\em computation time}, to carry out backward propagation to compute gradients/apply model updates and forward propagation to produce output \cite{2015chen-mxnet}; and (ii) the \textit{waiting time}, including the time for exchanging gradients/parameters with the PS and the blocked time due to synchronization barrier (i.e., the time when the worker is not doing computation nor communication).
We experiment with representative synchronization models to investigate their waiting time incurred. We train a convolutional neural network (CNN) model on the Cifar10 dataset \cite{cifar10} with 1 PS and 3 workers with heterogeneous computation capacities (time ratio to train one mini-batch is 1:1:3). Fig.~\ref{figure:waiting_time} shows the convergence time (overall training time to model convergence) and the average time spent per training step, incurred with BSP, SSP, and ADACOMM (See Sec.~\ref{sec:evaluation} for their details). TAP is not compared as it has no convergence guarantee. The computation/waiting time is averaged among all workers. We see that with heterogeneous workers, the waiting time dominates the overall training time with BSP and SSP, and their overall convergence time and time spent per training step are long. With ADACOMM, the waiting time and overall training time are much shorter. Nevertheless, its waiting time is still close to half of the total training time, i.e., the effective time used for model training is only around 50\%, due to its relative conservative approach on local model updates.
Our key question is: what is the limit that we can further reduce the waiting time to, such that time is spent most efficiently on model computation and convergence can be achieved in the most expedited fashion? Our answer, ADSP, allows fast workers to keep training while maintaining approximately the same gradient commit rates among all workers. Fig.~\ref{figure:waiting_time} shows that the waiting time is minimized to a negligible level with ADSP, as compared to the computation time. As such, almost all training time is effectively used for model computation and fast model convergence is achieved.
\section{ADSP Overview}\label{section:overview}
We consider a set of heterogeneous edge systems and a parameter server (PS) located in a datacenter, which together carry out SGD-based distributed training to learn an ML model. ADSP (ADaptive Synchronous Parallel) is a new parameter synchronization model for this distributed ML system. The design of ADSP targets the following goals:
(i) make full use of the computation capacity of each worker;
(ii) choose a proper commit rate to balance the tradeoff between hardware efficiency (utilization of worker computing capacity) and statistical efficiency (i.e., reduction of loss per training step), in order to minimize the overall time taken to achieve model convergence;
(iii) ensure model convergence under various training speeds and bandwidth situations at different workers.
\begin{figure}[!t]
\begin{center}
\includegraphics[width = .95\columnwidth]{STrain_architecture.pdf}
\caption{ADSP workflow.}
\label{figure:STrain_arc}
\end{center}
\end{figure}
With ADSP, time is divided into equal-sized slots of duration $\Gamma>0$: $0, \Gamma, 2\Gamma, \ldots$, which we call \textit{check periods}. We refer to time points $\Gamma, 2\Gamma, \ldots, p\Gamma, \ldots$, as {\em checkpoints}. More precisely, we define the process of a worker sending computed gradients to the PS as a \textit{commit}, and the number of commits from worker $i$ during a check period as its \textit{commit rate} $\Delta C_{target}^{i}$. ADSP consists of two modules: 1) a novel synchronization model, which allows faster edge systems to perform more training before each update to the PS, and ensures that the commit rates of all workers are the same; 2) a global commit rate search algorithm, which selects an appropriate commit rate for all workers to pursue, in order to achieve fast convergence.
Let $c_i$ denote the total number of commits from worker $i$ to the PS since the very beginning. At each checkpoint, we compute the target total number of commits that each worker is expected to have submitted by the next checkpoint, $C_{target}$, and adjust the commit rate of each worker $i$ in the next check period as $\Delta C_{target}^{i} = C_{target} - c_i$.
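For concreteness, this checkpoint-time adjustment can be sketched in a few lines (illustrative Python; the function name is ours, and the default target $\max_i c_i + 1$ is the initial value used in Alg.~\ref{algorithm:scheduler}):

```python
def next_commit_rates(commits, C_target=None):
    """Per-worker commit rates for the next check period.

    commits[i] is worker i's total commit count c_i so far; the rate
    is Delta C_target^i = C_target - c_i. The default target lets
    every worker commit at least once in the period, mirroring the
    scheduler's initial value max_i(c_i) + 1.
    """
    if C_target is None:
        C_target = max(commits) + 1
    return [C_target - c for c in commits]
```

Workers that have committed less so far receive larger per-period targets, which is what keeps the total commit counts approximately equal across workers.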
Fig.~\ref{figure:STrain_arc} shows the workflow of our ADSP model. The data produced/collected at each edge system/worker is stored into training datasets. For each mini-batch in its dataset, an edge system computes a \textit{local update} of model parameters, i.e., gradients, using examples in this mini-batch. After training one mini-batch, it moves on to train the next mini-batch and derives another local update. Worker $i$ pushes its \textit{accumulative update} (i.e., sum of all gradients it has produced since last commit multiplied with the local learning rate) according to the commit rate $\Delta C_{target}^{i}$. A scheduler adjusts and informs each end system of the target commit rate $\Delta C_{target}^i$ over time. Upon receiving a commit from worker $i$, the PS multiplies the accumulated update with the \textit{global learning rate} \cite{jiang2017heterogeneity} and then updates the global model with it; worker $i$ then pulls updated parameters from the PS and continues training the next mini-batch.
\section{ADSP Algorithms and Analysis}
It is common to have large heterogeneity among edge systems, including different computation power and network delays to the datacenter hosting the PS. Our core idea in designing ADSP is to {\em adapt} to the heterogeneity, i.e., to transform training in heterogeneous settings into homogeneous settings using a {\em no-waiting} strategy: \textit{we allow different workers to process different numbers of mini-batches between two commits according to their training speed, while ensuring that the numbers of commits of all the workers are approximately equal at periodic checkpoints.} To achieve this, we mainly control the hyper-parameter {\em commit rate}, making faster workers accumulate more local updates before committing them, so as to eliminate the waiting time. By enforcing approximately equal numbers of commits from all the workers over time, we can ensure model convergence.
\subsection{The Impact of $C_{target}$ on Convergence}
The target total number of commits to be achieved by each worker by the next checkpoint, $C_{target}$, decides the commit rate of each worker $i$ within the next check period, as $\Delta C_{target}^i = C_{target} - c_i$ ($c_i$ is $i$'s current total commit number). The commit rate has a significant impact on the training progress: if $\Delta C_{target}^i$ is large, a slow worker may fail to achieve that many commits in the next period, due to its limited compute capacity; even if it can hit the target, too many commits may incur high communication overhead, which in turn slows down the training process. On the other hand, if the number of target commits is too small, implying that each end system commits its gradients only after many steps of mini-batch training on local parameters, large differences and significant staleness exist among the model copies at different workers, which may adversely influence model convergence as well.
\begin{figure}
\begin{center}
\includegraphics[width = .95\columnwidth]{vary_commit_rate_mu.pdf}
\caption{(a) the impact of $C_{target}$ on convergence time; (b) an illustration of $\mu_{implicit}$ ; (c) convergence time with different $\mu_{implicit}$ values.}
\label{figure:vary_commit_rate_mu}
\end{center}
\end{figure}
To illustrate this, we train a CNN model on the Cifar10 dataset \cite{cifar10} with 1 PS and 3 workers (time ratio to train one mini-batch is 1:1:3), where all workers keep training their mini-batches and commit gradients to the PS at the same commit rate $\Delta C_{target}$ over time. We vary the value of $\Delta C_{target}$ in different runs of the experiment. Fig.~\ref{figure:vary_commit_rate_mu}(a) shows that with the increase of $\Delta C_{target}$, the model convergence time becomes smaller at first and then increases. This is consistent with our discussions above.
We next quantify the effect of the commit rate $\Delta C_{target}$ on model convergence. Suppose that all $m$ workers communicate with the PS independently. Let $U(W_t)$ denote the accumulative local updates that a worker commits when the global model is $W_t$, and $v_i$ denote the number of steps that worker $i$ can train per unit time. We have the following theorem.
\begin{theorem}\label{local_model}
Set the momentum $\mu$ in the SGD update formula (\ref{eq:sgdMomentum}) to zero. The expected SGD update on the global model is equivalent to
\begin{eqnarray}
\mathbb{E}(W_{t+1}\! - W_{t})\! = \!(1-p)\mathbb{E}(W_{t}\! - W_{t-1}) - p \eta \mathbb{E} U(W_t) \label{eq:theorem1} \\
\mbox{where } ~ p = 1/(1 + (1 - 1/m)\sum_{i=1}^{m}\frac{\Gamma}{\Delta C_{target}^i v_i}) \label{eq:theorem1_p}
\end{eqnarray}
\end{theorem}
\noindent The detailed proof is given in the supplemental file. Compared to the SGD update formula in Eqn.~(\ref{eq:sgdMomentum}), the result is interesting: with our ADSP model, the staleness induced by cumulative local updates can be viewed as an extra momentum term (i.e., $1-p$) in the SGD update equation. To distinguish this term from the original momentum $\mu$ in Eqn.~(\ref{eq:sgdMomentum}), we refer to it as the \textit{implicit momentum}, denoted by $\mu_{implicit}=1-p$. As we increase $\Delta C_{target}^i$, the implicit momentum becomes smaller according to Eqn.~(\ref{eq:theorem1_p}).
With the same CNN training experiments as above, Fig.~\ref{figure:vary_commit_rate_mu}(b) illustrates how $1-p$ varies with $\Delta C_{target}$ (according to Eqn.~(\ref{eq:theorem1_p})). The optimal momentum is derived based on Fig.~\ref{figure:vary_commit_rate_mu}(c), where we vary the value of $\mu_{implicit}$ in Eqn.~(\ref{eq:theorem1}) in our experiments and show how the time taken for model convergence varies with different $\mu_{implicit}$ values. Inspired by these observations, we seek to identify the best commit rate $\Delta C_{target}$ for the workers, which decides the best $\mu_{implicit}$, to achieve the shortest convergence time.
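As a sanity check of this relationship, $p$ and $\mu_{implicit}=1-p$ from Eqn.~(\ref{eq:theorem1_p}) can be evaluated directly (illustrative Python; the helper name is ours):

```python
def implicit_momentum(delta_C, speeds, period):
    """mu_implicit = 1 - p, with p from Theorem 1:
    p = 1 / (1 + (1 - 1/m) * sum_i period / (delta_C[i] * speeds[i])).

    delta_C[i]: commit rate of worker i within the check period.
    speeds[i]:  steps worker i can train per unit time (v_i).
    period:     check period length Gamma.
    """
    m = len(delta_C)
    s = sum(period / (dc * v) for dc, v in zip(delta_C, speeds))
    return 1.0 - 1.0 / (1.0 + (1.0 - 1.0 / m) * s)
```

Doubling every $\Delta C_{target}^i$ halves each term of the sum, so $\mu_{implicit}$ shrinks, matching the trend in Fig.~\ref{figure:vary_commit_rate_mu}(b).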
\subsection{The Commit Rate Search Algorithm}
\label{section:search}
We propose a local search method to identify a near-optimal commit rate to achieve the fastest convergence, exploiting the observations that the staleness induced by local updates can be converted to an implicit momentum term in SGD update and the implicit momentum decreases as we increase the commit rate. The algorithm is given in Alg.~\ref{algorithm:scheduler}, which is executed by the scheduler (Fig.~\ref{figure:STrain_arc}).
In the algorithm, an {\em epoch} is a time interval containing multiple check periods, for commit rate adjustment. At the beginning of each \textit{epoch} (e.g., 1 hour), the scheduler searches for the optimal commit rates of workers in this epoch. We start with a small target total commit number $C_{target}$, allowing each worker to commit at least once in each check period; in this case, the commit rates $\Delta C_{target}^i$ are small, the asynchrony-induced implicit momentum is large, and the corresponding point in Fig.~\ref{figure:vary_commit_rate_mu}(b) is located to the left of the optimal momentum. Then the scheduler evaluates the training performance (i.e., loss decrease speed, detailed in Sec.~\ref{section:onlineSearch}) induced by $C_{target}$ and $C_{target}+1$, by running the system using commit rates computed from the two values for a specific period of time (e.g., 1 minute). If $C_{target}+1$ leads to better performance, the scheduler repeats the search, further comparing the performance achieved by $C_{target}+1$ and $C_{target}+2$; otherwise, the search stops and the commit rates $\Delta C_{target}^i$ decided by the current $C_{target}$ are used for the rest of this epoch. The rationale behind this is that the optimal $C_{target}$ for each epoch is larger than the initial value ($\max_{i=1,\ldots,M}{c_i+1}$), so we only need to determine whether to increase it or not.
\renewcommand{\algorithmicrequire}{\textbf{Scheduler:}}
\renewcommand{\algorithmicensure}{\textbf{Workers:}}
\begin{algorithm}[!th]
\caption{Commit Rate Adjustment at the Scheduler}
\label{algorithm:scheduler}
\begin{algorithmic}[1]
\Function {\textsc{MainFunction}}{}
\For {epoch e = 1, 2, \ldots}
\State $C_{target}=\max_{i=1,\ldots,M}{c_i+1}$
\State $C_{target} \leftarrow $ \textsc{DecideCommitRate}($C_{target}$)
\State run \textsc{ParameterServer} and \textsc{Workers} for the remaining time
\EndFor
\EndFunction
\Function {\textsc{DecideCommitRate}}{$C_{target}$}
\State $r_1 \leftarrow$ \textsc{OnlineEvaluate}($C_{target}$)
\State $r_2 \leftarrow$ \textsc{OnlineEvaluate}($C_{target} + 1$)
\If {$r_2 > r_1$}
\State \Return \textsc{DecideCommitRate}($C_{target} + 1$).
\Else
\State \Return $C_{target}$
\EndIf
\EndFunction
\Function {\textsc{OnlineEvaluate}}{$C_{target}$}
\For {$i$ = 1, 2, \ldots, M}
\State $\Delta C_{target}^i = C_{target} - c_i$
\State Send $\Delta C_{target}^i$ to worker $i$
\EndFor
\State Training for 1 minute
\State \Return reward $r$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsubsection{Online Search and Reward Design.} \label{section:onlineSearch}
Traditional search methods are usually offline \cite{hadjis2016omnivore}, blocking the whole system while trying out a specific set of variable values, with each configuration started from the same system state. With an offline search method, one can select the best configuration by comparing the final loss achieved after running different configurations for the same length of time. However, such a search process introduces significant extra delay into the training progress and hence significantly slows down model convergence. In Alg.~\ref{algorithm:scheduler}, we instead adopt an {\em online search} method (in \textsc{DecideCommitRate}()): we consecutively run each configuration for a specific time (e.g., 1 minute) without blocking the training process.
To compare the performance of the configurations when they do not start with the same system state, we define a \textit{reward} as follows. The loss convergence curve of SGD training usually follows the form of $O(1/t)$ \cite{peng2018optimus}. We collect a few (time $t$, loss $\ell$) pairs when the system is running with a particular configuration, e.g., at the start, middle and end of the 1 minute period, and use them to fit the following formula on the left:
$$
\ell = \frac{1}{a_1^2 t + a_2} + a_3 \quad \Rightarrow \quad r = \frac{a_1^2}{\frac{1}{\ell - a_3} - a_2}
$$
\noindent where $a_1, a_2, a_3$ are parameters. We then obtain the reward $r$ as the loss decrease speed, by setting $\ell$ to a constant and calculating the reciprocal of the corresponding $t$. The target of the online search algorithm is to find the commit rate that reaches the maximum reward, i.e., the minimum time to converge to a certain loss.
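One way to carry out this fit without a nonlinear solver (a sketch under our own assumptions, not necessarily the authors' implementation) is to note that for a fixed $a_3$ the model is linear in $t$ after the substitution $y = 1/(\ell - a_3)$, so a small grid search over $a_3$ combined with linear least squares suffices:

```python
import numpy as np

def fit_loss_curve(ts, losses, n_grid=200):
    """Fit loss = 1/(a1^2 * t + a2) + a3 to positive (t, loss) samples.

    For fixed a3, 1/(loss - a3) = a1^2 * t + a2 is linear in t, so we
    scan candidate a3 values below min(loss) and solve each linear
    system, keeping the candidate with the smallest squared error.
    Returns (a1_sq, a2, a3) with a1_sq = a1^2 > 0.
    """
    A = np.stack([ts, np.ones_like(ts)], axis=1)
    best = None
    for a3 in np.linspace(0.0, losses.min() - 1e-6, n_grid):
        y = 1.0 / (losses - a3)
        (a1_sq, a2), *_ = np.linalg.lstsq(A, y, rcond=None)
        if a1_sq <= 0:
            continue
        pred = 1.0 / (a1_sq * ts + a2) + a3
        err = float(np.mean((pred - losses) ** 2))
        if not np.isfinite(err):
            continue
        if best is None or err < best[0]:
            best = (err, float(a1_sq), float(a2), float(a3))
    return best[1], best[2], best[3]

def reward(a1_sq, a2, a3, target_loss):
    """Loss-decrease speed: reciprocal of the fitted time to reach
    target_loss, r = a1^2 / (1/(target_loss - a3) - a2)."""
    return a1_sq / (1.0 / (target_loss - a3) - a2)
```

A coarser grid or fewer $(t, \ell)$ samples trades fit accuracy for evaluation speed, which matters since this fit runs inside the online search loop.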
\renewcommand{\algorithmicrequire}{\textbf{Parameter Server:}}
\renewcommand{\algorithmicensure}{\textbf{End System:}}
\begin{algorithm}[!t]
\caption{ADSP: Worker and PS Procedures}
\label{algorithm:design}
\begin{algorithmic}[1]
\Ensure $i$ = 1, 2, \ldots, m
\Function {\textsc{Worker}}{}
\For {epoch $e = 1, 2, \ldots$}
\State {receive $\Delta C_{target}^i$ from the scheduler}
\State set a timer with a timeout of $\frac{\Gamma}{\Delta C_{target}^i} - \mathcal{O}_i$ that invokes \textsc{TimeOut}() upon expiry
\While {model not converged}
\State train a minibatch to obtain gradient $g_{i}$
\State accumulated gradient $U_{i} \leftarrow U_{i} + \eta^{\prime} g_{i}$ ($\eta^{\prime}$ is the local learning rate)
\EndWhile
\EndFor
\EndFunction
\Function {\textsc{TimeOut}}{}
\State commit $U_{i}$ to the PS
\State receive updated global model parameters from the PS and update local model accordingly
\State restart the timer with timeout of $\frac{\Gamma }{\Delta C_{target}^i} - \mathcal{O}_i$
\EndFunction
\end{algorithmic}
\begin{algorithmic}[1]
\Require
\hspace*{0.05in}
\Function {\textsc{ParameterServer}}{}
\While {model not converged}
\If {receive commit $U_{i}$ from worker $i$}
\State $W \leftarrow W - \eta U_{i}$
\State Send $W$ to worker $i$
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Worker and PS Procedures} \label{section:c2D_formula}
The procedures at each end system (i.e., worker) and at the PS with ADSP are summarized in Alg.~\ref{algorithm:design}, where $\mathcal{O}_i$ represents the communication time for worker $i$ to commit an update to the PS and pull the updated parameters back. At each worker, we use a timer to trigger the commit of the local accumulative model update to the PS asynchronously, once every $\frac{\Gamma}{\Delta C_{target}^i} - \mathcal{O}_i$ time.
\subsection{Convergence Analysis} \label{section:proof}
We show that ADSP in Alg.~\ref{algorithm:design} ensures model convergence. We define $f_t(W)$ as the objective loss function at step $t$ with global parameter state $W$, where $t$ is the global number of steps (i.e., cumulative number of training steps carried out by all workers). Let $\tilde{W}_t$ be the set of global parameters obtained by ADSP right after step \textit{t}, and $W^{*}$ denote the optimal model parameters that minimize the loss function. We make the following assumptions on the loss function and the learning rate, which are needed for our convergence proof, but are not followed in our experimental settings.
\noindent \textbf{Assumptions:}
\textit{
\begin{enumerate}[label=(\arabic*), leftmargin=20pt]
\item $f_t(W)$ is convex
\item $f_t(W)$ is $L$-Lipschitz, i.e., $\left\|\nabla f_{t}\right\| \leqslant L$
\item The learning rate decreases as $\eta_t = \frac{\eta}{\sqrt{t}}$, $t = 1, 2, \ldots$, where $\eta$ is a constant.
\end{enumerate}
}
Based on the assumptions, we have the following theorem on training convergence of ADSP.
\begin{theorem}[Convergence]\label{convergence1}
ADSP ensures that by each checkpoint, the numbers of update commits submitted by any two different workers $i_1$ and $i_2$ are roughly equal, i.e., $c_{i_1} \approx c_{i_2}$. The regret $R = \sum_{t = 1}^{T}f_t(\tilde{W}_t) - f(W^{*})$ is upper-bounded by $O(\sqrt{T})$, when $T \to +\infty$.
\end{theorem}
The {\em regret} is the accumulative difference between the loss achieved by ADSP and the optimal loss over the training course. When the accumulative difference is bounded sub-linearly in $T$ (where $T$ is the total number of parameter update steps at the PS), we have $f_t(\tilde{W}_t)\to f(W^{*})$ when $t$ is large. Then $R/T \to 0$ as $T \to +\infty$, showing that our ADSP model converges to the optimal loss. The detailed proof is given in the supplemental file.
\section{Performance Evaluation}
\label{sec:evaluation}
We implement ADSP as a ready-to-use Python library based on TensorFlow \cite{2016abadi-tensorflow}, and evaluate its performance with testbed experiments.
\subsection{Experiment Setup}
\noindent \textbf{Testbed.} We emulate heterogeneous edge systems following the distribution of hardware configurations of edge devices in a survey \cite{smartphone2018}, using 19 Amazon EC2 instances \cite{amazon}: 7 $\mathtt{t2.large}$ instances, 5 $\mathtt{t2.xlarge}$ instances, 4 $\mathtt{t2.2xlarge}$ instances and 2 $\mathtt{t3.xlarge}$ instances as workers, and 1 $\mathtt{t3.2xlarge}$ instance as the PS.
\noindent \textbf{Applications.}
We evaluate ADSP with three distributed ML applications: (i) image classification on Cifar-10 \cite{cifar10} using a CNN model from the TensorFlow tutorial \cite{tf_cifar10_tutorial}; (ii) fatigue life prediction of bogies on high-speed trains, training a recurrent neural network (RNN) model with a dataset collected from the China high-speed rail system; (iii) Coefficient of Performance (COP) prediction of chillers, training a global linear SVM model with a chiller dataset.
\noindent \textbf{Baselines.} (1) \textit{SSP} \cite{2013ssp}, which allows the fastest worker to run ahead of the slowest worker by up to $s$ steps; (2) \textit{BSP} \cite{1990-bridging}, where the PS strictly synchronizes among all workers such that they always perform the same number of training steps. (3) {\em ADACOMM} \cite{wang2018adaptive}, which allows all workers to accumulate $\tau$ updates before synchronizing with the PS and reduces $\tau$ periodically. (4) {\em Fixed ADACOMM}, a variant of ADACOMM with $\tau$ fixed for all workers.
\noindent \textbf{Default Settings.}
By default, each mini-batch in our model training includes 128 examples. The check period of ADSP is 60 seconds, and each epoch is 20 minutes long. The global learning rate is $1/M$ (which we find works well through experiments). The local learning rate is initialized to 0.1 and decays exponentially over time.
\begin{figure}[!t]
\begin{center}
\includegraphics[width = .95\columnwidth]{cifar_rst.pdf}
\caption{Comparison of ADSP with baselines in training efficiency: training CNN on the Cifar-10 dataset.}
\label{figure:strain_eval}
\end{center}
\end{figure}
\subsection{Experiment Results}
All results given in the following are based on CNN training on the Cifar-10 dataset. More experiment results on fatigue life prediction and COP prediction are given in the supplemental file.
\subsubsection{Performance of ADSP.} \label{section:eval_adsp}
We compare ADSP with the baselines in terms of the wall-clock time and the number of training steps needed to reach model convergence, to validate the effectiveness of ADSP's no-waiting training. In Fig.~\ref{figure:strain_eval}, the global loss is the loss evaluated on the global model on the PS, and the number of steps is the cumulative number of steps trained at all workers. We stop training, i.e., decide that the model has converged, when the loss variance is smaller than a small enough value for 10 steps. Fig.~\ref{figure:strain_eval}(a) plots the loss curves and Fig.~\ref{figure:strain_eval}(b) correspondingly shows the convergence time with each method. We see that ADSP achieves the fastest convergence: $80\%$ acceleration as compared to BSP, $53\%$ to SSP, and $33\%$ to Fixed ADACOMM. For ADACOMM, although we have used the optimal hyper-parameters as in \cite{wang2018adaptive}, it converges quite slowly, which could be due to its instability in tuning $\tau$: $\tau$ is tuned periodically based on the current loss; if the loss does not decrease, it simply multiplies $\tau$ by a constant. In Fig.~\ref{figure:strain_eval}(c), we see that ADSP carries out many more training steps within its short convergence time, which may raise a concern about its training efficiency. Fig.~\ref{figure:strain_eval}(d) further reveals that the per-training-step loss decrease achieved by ADSP is slightly lower than that of Fixed ADACOMM, and better than those of the other baselines. The spike in the ADSP curve at the beginning stage is due to the small commit rates that our search algorithm derives, which make the loss fluctuate significantly. However, with ADSP, the model eventually converges to a smaller loss than the losses that the other baselines converge to.
\begin{figure}[!t]
\begin{center}
\includegraphics[width = .95\columnwidth]{vary_Hete_degree_size.pdf}
\caption{Comparison of ADSP with Fixed ADACOMM at different degrees of heterogeneity and system scales.}
\label{figure:vary_heterogeneity}
\end{center}
\end{figure}
\subsubsection{Adaptability to Heterogeneity.}
We next evaluate ADSP's adaptability to different levels of end-system heterogeneity. Besides the hardware configuration differences among the workers, we further enable each worker to sleep for a specific short time after each step of training one mini-batch, and tune the sleep time to adjust the training speeds of workers. We define the heterogeneity degree among the workers as follows:
$$
H = \frac{\frac{1}{M}\sum_{i=1}^{M} v_i}{\min_{i=1,\ldots,M} v_i}
$$
where $v_i$ is the number of mini-batches that worker $i$ can process per unit time. The discussion of the heterogeneity degree considering communication overhead is given in our supplemental file.
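The heterogeneity degree is straightforward to compute; the following sketch (hypothetical helper name) reproduces, for instance, $H = 7/3 \approx 2.33$ for workers whose training-time ratio is 1:1:3 (speeds $3, 3, 1$):

```python
def heterogeneity_degree(speeds):
    """H = (average of v_i) / (min of v_i), where speeds[i] is the
    number of mini-batches worker i can process per unit time."""
    return (sum(speeds) / len(speeds)) / min(speeds)
```

A homogeneous cluster gives $H = 1$, and $H$ grows as the slowest worker falls further behind the average.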
Since BSP, SSP and ADACOMM are significantly slower than ADSP in training convergence, here we only compare ADSP with Fixed ADACOMM. Fig.~\ref{figure:vary_heterogeneity}(a)-(d) show that ADSP achieves faster convergence than Fixed ADACOMM (though with more spikes) at different heterogeneity levels. The corresponding convergence times are summarized in Fig.~\ref{figure:vary_heterogeneity}(e), which shows that the gap between ADSP and Fixed ADACOMM becomes larger when the workers differ more in training speeds. ADSP achieves a $62.4\%$ convergence speedup as compared to Fixed ADACOMM when $H=3.2$. The reason is that Fixed ADACOMM still forces faster workers to stop and wait for the slower workers to finish $\tau$ local updates, so the convergence is significantly influenced by the slowest worker. With ADSP, the heterogeneity degree hardly affects the convergence time, due to its no-waiting strategy. Therefore, ADSP adapts well to heterogeneity in end systems.
\subsubsection{System Scalability.}\label{sec:scaling}
We further evaluate ADSP with 36 workers used for model training, whose hardware configurations follow the same distribution as in the 18-worker case. Fig.~\ref{figure:vary_heterogeneity}(f) shows that when the worker number is larger, both ADACOMM and ADSP become slower, and ADSP still converges faster than Fixed ADACOMM (even more evidently than with fewer workers). Intuitively, when the scale of the system becomes larger, the chances increase for workers to wait for slower ones to catch up, so more time is wasted with Fixed ADACOMM; ADSP can use this time to do more training, and is hence a more scalable solution for large ML training jobs.
\subsubsection{The Impact of Network Latency.}\label{section:latency}
Edge systems usually have relatively poor network connectivity \cite{konevcny2016federated}; the communication time for each commit is not negligible, and can even exceed the processing time of each step. Fig.~\ref{figure:delay} presents the convergence curve of each method as we add different extra delays to the communication module. When we increase the communication delay, the speed-up ratios of {\em ADSP}, {\em ADACOMM} and {\em Fixed ADACOMM}, as compared to {\em BSP} and {\em SSP}, become larger. This is because the first three models allow local updates and commit to the PS less frequently, and are consequently less affected by the communication delay than the last two methods. Among the first three models, ADSP still performs the best in terms of convergence speed, regardless of the communication delay.
The rationale behind this is that we can count the communication time when evaluating a worker's `\textit{processing capacity}': for worker $i$, the average processing time per training step is $t_i + \mathcal{O}_i/\tau_i$, where $t_i$ is the time to train a mini-batch, $\mathcal{O}_i$ is the communication time for each commit, and $\tau_i$ is the number of local updates between two commits. Therefore, we can extend the scope of heterogeneity in processing capacity to include the heterogeneity of communication time as well. ADSP only needs to ensure that the commit rates of all workers are consistent, and can inherently handle this generalized heterogeneity regardless of which components cause it.
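This amortized per-step cost is simple to make concrete (illustrative Python; the function name is ours):

```python
def effective_step_time(train_time, commit_overhead, local_updates):
    """Average time per training step for one worker:
    t_i + O_i / tau_i, amortizing each commit's communication time
    O_i over the tau_i local updates performed between commits."""
    return train_time + commit_overhead / local_updates
```

Increasing the number of local updates between commits drives the communication term toward zero, which is why infrequent committers are less sensitive to network delay.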
\begin{figure}
\begin{center}
\includegraphics[width = .95\columnwidth]{vary_network_delay.pdf}
\caption{Comparison of ADSP with baselines with different network delays.}
\label{figure:delay}
\end{center}
\end{figure}
\section{Concluding Remarks}
This paper presents ADSP, a new parameter synchronization model for distributed ML with heterogeneous edge systems. ADSP allows workers to keep training with minimum waiting and enforces approximately equal numbers of commits from all workers to ensure training convergence. An online search algorithm is carefully devised to identify the near-optimal global commit rate. ADSP maximally exploits computation resources at heterogeneous workers, targeting training convergence in the most expedited fashion. Our testbed experiments show that ADSP achieves up to $62.4\%$ convergence acceleration as compared to most of the state-of-the-art parameter synchronization models. ADSP is also well adapted to different degrees of heterogeneity and large-scale ML applications.
\section{Acknowledgements}
This work was supported in part by grants from Hong Kong RGC under the contracts HKU 17204715, 17225516, C7036-15G (CRF), C5026-18G (CRF), in part by WHU-Xiaomi AI Lab, and in part by GRF PolyU 15210119, ITF UIM/363, CRF C5026-18G, PolyU 1-ZVPZ, and a Huawei Collaborative Grant.
\bibliographystyle{aaai}
\section{Introduction}
One of the grand challenges of reinforcement learning (RL), and of decision-making in general, is the ability to generalize to new tasks. RL agents have shown incredible performance on single task settings \citep{berner2019dota, lillicrap2015continuous, mnih2013playing}, yet frequently stumble when presented with unseen challenges. Single-task RL agents are largely overfit on the tasks they are trained on \citep{kirk2021survey}, limiting their practical use. In contrast, a general agent, which can robustly perform well on a wide range of novel tasks, can then be adapted to solve downstream tasks and unseen challenges.
General agents greatly depend on a diverse set of tasks to train on. Recent progress in deep learning has shown that as the amount of data increases, so do generalization capabilities of trained models \citep{brown2020language, ramesh2021zero, bommasani2021opportunities, radford2021learning}. Agents trained on environments with domain randomization or procedural generation capabilities transfer better to unseen test tasks \cite{cobbe2020leveraging, tobin2017domain, risi2020increasing, khalifa2020pcgrl}. However, as creating training tasks is expensive and challenging, most standard environments are inherently over-specific or limited by their focus on a single task type, e.g. robotic control or gridworld movement.
As the need to study the relationships between training tasks and generalization increases, the RL community would benefit greatly from a `foundation environment' supporting diverse tasks arising from the same core rules. The benefits of expansive task spaces have been showcased in Unsupervised Environment Design \citep{wang2019paired, dennis2020emergent, jiang2021prioritized, parker2022evolving}, but gridworld domains fail to display how such methods scale up. Previous works have proposed specialized task distributions for multi-task training \citep{samvelyan2021minihack, suarez2019neural, fan2022minedojo, team2021open}, each focusing on a specific decision-making problem. To further investigate generalization, it is beneficial to have an environment where many variations of training tasks can easily be compared.
As a step toward lightweight yet expressive environments, this paper presents Powderworld, a simulation environment geared to support procedural data generation, agent learning, and multi-task generalization. Powderworld aims to efficiently provide environment dynamics by running directly on the GPU. Elements (e.g. sand, water, fire) interact in a modular manner within local neighborhoods, allowing for efficient runtime. The free-form nature of Powderworld enables construction of tasks ranging from simple manipulation objectives to complex multi-step goals. Powderworld aims to 1) be modular and supportive of emergent interactions, 2) allow for expressive design capability, and 3) support efficient runtime and representations.
Additionally presented are two motivating frameworks for defining world-modelling and reinforcement learning tasks within Powderworld. World models trained on increasingly complex environments show superior transfer performance. In addition, models trained over more element types show stronger fine-tuning on novel rulesets, demonstrating that a robust representation has been learned. In the reinforcement learning case, increases in task complexity benefit generalization up to a task-specific inflection point, at which performance decreases. This point may mark when variance in the resulting reward signal becomes too high, inhibiting learning. These findings provide a starting point for future directions in studying generalization using Powderworld as a foundation.
\section{Related Work}
\textbf{Task Distributions for RL.}
Video games are a popular setting for studying multi-task RL, and environments have been built off NetHack \citep{samvelyan2021minihack, kuttler2020nethack}, Minecraft \citep{fan2022minedojo, johnson2016malmo, guss2019minerl}, Doom \citep{kempka2016vizdoom}, and Atari \citep{bellemare2013arcade}. \cite{team2021open, yu2020meta, cobbe2020leveraging} describe task distributions focused on meta-learning, and \cite{fan2022minedojo, suarez2019neural, hafner2021benchmarking, perez2016general} detail more open-ended environments containing multiple task types. Most similar to this work may be ProcGen \citep{cobbe2020leveraging}, a platform that supports infinite procedurally generated environments. However, while ProcGen games each have their own rulesets, Powderworld aims to share core rules across all tasks. Powderworld focuses specifically on runtime and expressivity, taking inspiration from online ``powder games" where players build ranges of creations out of simple elements \citep{ball, powdertoy, bittker}.
\textbf{Generalization in RL.} Multi-task reinforcement learning agents are generally valued for their ability to perform well on unseen tasks \citep{packer2018assessing, kirk2021survey}. The sim2real problem requires agents to generalize to out-of-distribution real-world domains \citep{tobin2017domain, sadeghi2016cad2rl}. The platforms cited above also target generalization, often within the context of solving unseen levels within a game. This work aims to study generalization within a physics-inspired simulated setting, and creates out-of-distribution challenges by hand-designing a set of unseen test tasks.
\section{Powderworld Environment}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=\textwidth]{imgs/tasks.png}
\end{tabular}
\vskip -0.1in
\caption{\textbf{Examples of tasks created in the Powderworld engine.} Powderworld provides a physics-inspired simulation over which many distributions of tasks can be defined. Pictured above are human-designed challenges where a player must construct unstable arches, transport sand through a tunnel, freeze water to create a bridge, and draw a path with plants. Tasks in Powderworld create challenges from a set of core rules, allowing agents to learn generalizable knowledge.
\textbf{Try an interactive Powderworld simulation at} \href{http://kvfrans.com/static/powder}{kvfrans.com/static/powder}}
\label{fig:tasks}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=\textwidth]{imgs/scaling.png}
\end{tabular}
\vskip -0.1in
\caption{\textbf{Powderworld runs on the GPU and can simulate many worlds in parallel.} GPU simulation provides a significant speedup and allows simulation time to scale with batch size. Simulation speed is guaranteed to remain constant regardless of how many elements are present in the world.}
\label{fig:scaling}
\end{figure}
The main contribution of this work is an environment specifically for training generalizable agents over easily customizable distributions of tasks. Powderworld is designed to feature:
\begin{itemize}
\item \textbf{Modularity and support for emergent phenomena.} The core of Powderworld is a set of fundamental rules defining how two neighboring elements interact. The consistent nature of these rules is key to agent generalization; e.g. fire will always burn wood, and agents can learn these inherent properties of the environment. Furthermore, local interactions can build up to form emergent wider-scale phenomena, e.g. fire spreading throughout the world. This capacity for emergence enables tasks to be diverse yet share consistent properties. Thus, fundamental Powderworld priors exist that agents can take advantage of to generalize.
\item \textbf{Expressive task design capability.} A major challenge in the study of RL generalization is that tasks are often nonadjustable. Instead, an ideal environment should present an explorable space of tasks, capable of representing interesting challenges, goals, and constraints. Tasks should be parametrized to allow for automated design and interpretable control. Powderworld represents each task as a 2D array of elements, enabling a variety of procedural generation methods. Many ways exist to test a specific agent capability, e.g. ``burn plants to create a gap'', increasing the chance that agents encounter these challenges.
\item \textbf{Fast runtime and representation.} As multi-task learning can be computationally expensive, it is important that the underlying environment runs efficiently. Powderworld is designed to run on the GPU, enabling large batches of simulation to be run in parallel. Additionally, Powderworld employs a neural-network-friendly matrix representation for both task design and agent observations. To simplify the training of decision-making agents, the Powderworld representation is fully-observable and runs on a discrete timescale (but partial-observability is an easy modification if desired).
\end{itemize}
\subsection{Engine}
Described below is an overview of the engine used for the Powderworld simulator. Additional technical details can be found in the Appendix.
\textbf{World matrix.} The core structure of Powderworld is a matrix of elements $W$ representing the world. Each location $W_{x,y}$ holds a vector of information representing that location in the world. Namely, each vector contains a one-hot encoding of the occupying element, plus additional values indicating gravity, density, and velocity. The $W$ matrix is a Markovian state of the world, and thus past $W$ matrices are unnecessary for state transitions. Every timestep, a new $W$ matrix is generated via a stochastic update function, as described below.
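The world matrix described above can be sketched as follows. This is a minimal illustration, not Powderworld's actual API: the exact channel layout (14 element channels plus gravity, density, and velocity) is an assumption made for concreteness.

```python
import numpy as np

# Assumed channel layout: one-hot element encoding, then gravity flag,
# density, and two velocity components per cell.
NUM_ELEMENTS = 14
CH_GRAVITY = NUM_ELEMENTS        # IsGravity flag
CH_DENSITY = NUM_ELEMENTS + 1    # density used by the gravity rule
CH_VX, CH_VY = NUM_ELEMENTS + 2, NUM_ELEMENTS + 3
NUM_CHANNELS = NUM_ELEMENTS + 4

def empty_world(h=64, w=64):
    """All cells start as element 0 ('empty'), with no gravity or velocity."""
    W = np.zeros((h, w, NUM_CHANNELS), dtype=np.float32)
    W[:, :, 0] = 1.0  # one-hot: element 0 everywhere
    return W

def set_element(W, x, y, elem, is_gravity=0.0, density=0.0):
    """Overwrite the one-hot element at (x, y) and its physical properties."""
    W[y, x, :NUM_ELEMENTS] = 0.0
    W[y, x, elem] = 1.0
    W[y, x, CH_GRAVITY] = is_gravity
    W[y, x, CH_DENSITY] = density

W = empty_world()
set_element(W, 3, 5, elem=2, is_gravity=1.0, density=2.0)  # e.g. a sand cell
```

Because the matrix is Markovian, a stochastic update function mapping $W$ to the next $W$ is all the simulator needs each timestep.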
\textbf{Gravity.} Certain elements are affected by gravity, as noted by the IsGravity flag in Figure \ref{fig:elements}. Each gravity-affected element also holds a density value, which determines the element's priority during the gravity calculation. Every timestep, each element checks with its neighbor below. If both elements are gravity-affected, and the neighbor below has a lower density, then the two elements swap positions. This interaction functions as a core rule in the Powderworld simulation and allows elements to stack, displace, and block each other.
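The gravity rule above can be sketched with a simplified scalar grid of element ids (the full channel matrix is collapsed for brevity; element ids, the density table, and treating empty space as a zero-density gravity-affected element are illustrative assumptions):

```python
import numpy as np

EMPTY, WALL, SAND, WATER = 0, 1, 2, 3
# Empty is modeled as a gravity-affected element of density 0, so denser
# elements sink through it; walls are not gravity-affected and block motion.
IS_GRAVITY = {EMPTY: True, WALL: False, SAND: True, WATER: True}
DENSITY = {EMPTY: 0, WALL: 99, SAND: 2, WATER: 1}

def gravity_step(world):
    """One timestep of the swap rule: each cell checks the neighbor below."""
    out = world.copy()
    h, w = out.shape
    for y in range(h - 2, -1, -1):  # scan bottom-up: a grain falls one cell per step
        for x in range(w):
            a, b = out[y, x], out[y + 1, x]
            if IS_GRAVITY[a] and IS_GRAVITY[b] and DENSITY[b] < DENSITY[a]:
                out[y, x], out[y + 1, x] = b, a  # swap positions
    return out

# Sand above empty space falls; sand above a wall is blocked.
world = np.array([[SAND, SAND],
                  [EMPTY, WALL]])
world = gravity_step(world)
```

Note how the density ordering makes sand sink through water, and how walls block and stacks displace, as described in the text.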
\textbf{Element-specific reactions.} The behavior of Powderworld arises from a set of modular, local element reactions. Element reactions can occur either within a single element, or as a reaction when two elements are neighbors to each other. These reactions are designed to facilitate larger-scale behaviors; e.g. the sand element falls to neighboring locations, thus areas of sand form pyramid-like structures. Elements such as water, gas, and lava are fluids, and move horizontally to occupy available space. Finally, pairwise reactions provide interactions between specific elements, e.g. fire spreads to flammable elements, and plants grow when water is nearby. See Figure \ref{fig:elements} for a description of the Powderworld reactions, and full documentation is given in the appendix and code.
\textbf{Velocity system.} Another interaction method is applying movement through the velocity system. Certain reactions, such as fire burning or dust exploding, add to the velocity field. Velocity is represented via a two-component $V_{x,y}$ vector at each world location. If the magnitude of the velocity field at a location is greater than a threshold, elements are moved in one of eight cardinal directions, depending on the velocity angle. Velocity naturally diffuses and spreads in its own direction, thus a velocity difference will spread outwards before fading away. Walls are immune to velocity effects. Additionally, the velocity field can be directly manipulated by an interacting agent.
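The threshold-and-direction rule can be sketched as below; the threshold value is an illustrative assumption, and rounding the cosine and sine of the velocity angle snaps movement to one of the eight cardinal/diagonal directions:

```python
import numpy as np

THRESHOLD = 1.0  # assumed magnitude threshold for movement

def moved_offset(vx, vy):
    """Return the (dx, dy) cell offset induced by a velocity vector (vx, vy)."""
    if np.hypot(vx, vy) <= THRESHOLD:
        return (0, 0)                      # below threshold: element stays put
    angle = np.arctan2(vy, vx)
    dx = int(np.round(np.cos(angle)))      # rounding snaps the angle to one
    dy = int(np.round(np.sin(angle)))      # of eight cardinal directions
    return (dx, dy)
```

For example, a strong rightward wind yields `(1, 0)`, a diagonal one `(1, 1)`, and a sub-threshold vector no movement at all.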
All operators are local and translation equivariant, yielding a simple implementation in terms of (nonlinear) convolutional kernels. To exploit GPU-optimized operators, Powderworld is implemented in Pytorch~\citep{NEURIPS2019_9015}, and performance scales with GPU capacity (Figure \ref{fig:scaling}).
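The local, translation-equivariant structure can be seen in a single pairwise reaction. The sketch below expresses "fire ignites adjacent wood" as a shift-and-combine update, the same pattern a convolutional kernel implements; element ids and the specific rule are illustrative assumptions rather than Powderworld's exact implementation.

```python
import numpy as np

EMPTY, WOOD, FIRE = 0, 1, 2

def fire_spread_step(world):
    """Wood ignites if any of its four cardinal neighbors is on fire."""
    fire = (world == FIRE)
    nb = np.zeros_like(fire)
    nb[1:, :] |= fire[:-1, :]   # fire above
    nb[:-1, :] |= fire[1:, :]   # fire below
    nb[:, 1:] |= fire[:, :-1]   # fire to the left
    nb[:, :-1] |= fire[:, 1:]   # fire to the right
    out = world.copy()
    out[(world == WOOD) & nb] = FIRE
    return out

row = np.array([[FIRE, WOOD, WOOD, EMPTY]])
row = fire_spread_step(row)  # fire advances one cell per step
```

Since the same stencil is applied at every location, batching many worlds and moving the shifts onto the GPU (as Powderworld does via PyTorch) is straightforward.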
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=\textwidth]{imgs/elements.png}
\end{tabular}
\vskip -0.1in
\caption{\textbf{A list of elements and reactions in the Powderworld simulation.} Elements each contain gravity and density information. A set of element-specific reactions dictates how each element behaves and reacts to neighbors. Certain reactions manipulate the world's velocity field, which can push further elements away. Together, the gravity, velocity, and reaction systems create a core set of rules by which interesting simulations arise.}
\label{fig:elements}
\end{figure}
\section{Experiments}
The following section presents a series of motivating experiments showcasing task distributions within Powderworld. These tasks intend to provide two frameworks for accessing the richness of the Powderworld simulation, one through supervised learning and one through reinforcement learning. While these tasks aim to specifically highlight how Powderworld can be used to generate diverse task distributions, the presented tasks are by no means exhaustive, and future work may easily define modifications or additional task objectives as needed.
In all tasks, the model is provided the $W \in \mathbb{R}^{H\times W\times20}$ matrix as an observation, which is a Markovian state containing element, gravity, density, and velocity information. All task distributions also include a procedural generation algorithm for generating training tasks, as well as tests used to measure transfer learning.
\textit{In all experiments below, evaluation is on out-of-distribution tests which are unseen during training.}
\subsection{World Modelling Task}
\begin{figure}
\includegraphics[width=\textwidth]{imgs/dist_test.png}
\vskip -0.1in
\caption{\textbf{World modelling test states are designed to showcase specific element interactions.} Test states are out-of-distribution and unseen during training. Model generalization capability is measured by how accurate its future predictions are on all eight tests.}
\label{fig:test_tasks}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{imgs/state_gen.png}
\vskip -0.1in
\caption{\textbf{Training states are generated via a procedural content generation (PCG) algorithm followed by Powderworld simulation.} Experiments examine the effect of increasing complexity in PCG parameters.}
\label{fig:state_gen}
\end{figure}
This section examines a world-modelling objective in which a neural network is given an observation of the world, and must then predict a future observation. World models can be seen as learning how to encode an environment's dynamics, and have proven to hold great practical value in downstream decision making \citep{ha2018recurrent, hafner2019learning, hafner2019dream}. A model which can correctly predict the future of any observation can be seen as thoroughly understanding the core rules of the environment. The world-modelling task does not require reinforcement learning, and is instead solved via a supervised objective with the future state as the target.
Specifically, given an observation $W^0 \in \mathbb{R}^{H\times W\times N}$ of the world, the model is tasked with generating a $W' \in \mathbb{R}^{H\times W\times N}$ matrix of the world 8 timesteps in the future. $W'$ values correspond to logit probabilities of the $N$ different elements, and loss is computed via cross-entropy between the true and predicted world. Tasks are represented by a tuple of starting and ending observations.
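The objective can be sketched as a per-cell cross-entropy between predicted logits and the true element ids eight steps later. Shapes follow the text; the tiny grid and confident logits below are only for illustration.

```python
import numpy as np

def world_model_loss(pred_logits, true_elems):
    """pred_logits: (H, W, N) raw logits; true_elems: (H, W) int element ids."""
    # numerically stabilized log-softmax over the element dimension
    z = pred_logits - pred_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w, _ = pred_logits.shape
    # pick the log-probability assigned to the true element at each cell
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], true_elems]
    return -picked.mean()

logits = np.zeros((2, 2, 3))
logits[..., 0] = 5.0                    # model confidently predicts element 0
correct = np.zeros((2, 2), dtype=int)   # ...and element 0 is correct
wrong = np.ones((2, 2), dtype=int)      # ...vs. element 1 being correct
low = world_model_loss(logits, correct)
high = world_model_loss(logits, wrong)
```

A correct, confident prediction yields a loss near zero, while a confident miss is heavily penalized, which is what drives the model toward the true simulation dynamics.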
Training examples for the world-modelling task are created via a parametrized procedural content generation (PCG) algorithm. The algorithm synthesizes starting states by randomly selecting elements and drawing a series of lines, circles, and squares. Thus, the training distribution can be modified by specifying how many of each shape to draw, out of which elements, and how many total starting states should be generated. A set of hand-designed tests is provided as shown in Figure \ref{fig:test_tasks}, each of which measures a distinct property of Powderworld, e.g. simulate sand falling through water, fire burning a vine, or gas flowing upwards. To generate the targets, each starting state is simulated forwards for 8 timesteps, as shown in Figure \ref{fig:state_gen}.
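A PCG algorithm of the kind described can be sketched as below. Parameter names, the axis-aligned line simplification, and element id conventions are illustrative assumptions; the point is that shape counts and element choices parametrize the training distribution.

```python
import numpy as np

def generate_start_state(rng, h=64, w=64, num_elements=14,
                         num_lines=5, num_circles=5, num_squares=5):
    """Stamp random circles, squares, and lines of random elements onto a world."""
    world = np.zeros((h, w), dtype=np.int64)  # 0 = empty
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(rng.integers(0, num_circles + 1)):
        cx, cy, r = rng.integers(0, w), rng.integers(0, h), rng.integers(2, 10)
        world[(xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2] = rng.integers(1, num_elements)
    for _ in range(rng.integers(0, num_squares + 1)):
        cx, cy, r = rng.integers(0, w), rng.integers(0, h), rng.integers(2, 10)
        world[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r] = rng.integers(1, num_elements)
    for _ in range(rng.integers(0, num_lines + 1)):
        x1, x2 = sorted(rng.integers(0, w, size=2))
        y, t = rng.integers(0, h), rng.integers(1, 4)  # position and thickness
        world[y:y + t, x1:x2 + 1] = rng.integers(1, num_elements)
    return world

state = generate_start_state(np.random.default_rng(0))
```

Restricting which loops run (or which element ids may be drawn) reproduces the Line-only, Circle-only, and limited-element training distributions studied in the experiments.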
The model is a convolutional U-net network \citep{ronneberger2015u}, operating over a world size of 64x64 and 14 distinct elements. The agent network consists of three U-net blocks with 32, 64, and 128 features respectively. Each U-net block contains two convolutional kernels with a kernel size of three and ReLU activation, along with a MaxPool layer in the encoder blocks. The model is trained with Adam for 5000 iterations with a batch size of 256 and learning rate of 0.005. During training, a replay buffer of 1024*256 data points is randomly sampled to form the training batch, and the oldest data points are rotated out for fresh examples generated via the Powderworld simulator.
\subsubsection{Can world models generalize to unseen test states?}
\begin{figure}
\includegraphics[width=\textwidth]{imgs/generalization.png}
\vskip -0.1in
\caption{
\textbf{World model generalization improves as training distribution complexity is increased.} Shown are the test performances of world models trained with data from varying numbers of start states, number of lines, and types of shapes. By learning from diverse data, world models can better generalize to unseen test states. \textbf{Top-Right: World models trained on more elements can better fine-tune to novel elements.} These results show that Powderworld provides a rich enough simulation that world models learn robust representations capable of adaptation to new dynamics. \textbf{Bottom:} Examples of states generated with various PCG parameters.
}
\label{fig:wm_generalization}
\end{figure}
A starting experiment examines whether world models trained purely on simulated data can correctly generalize on hand-designed test states. The set of tests, as shown in Figure \ref{fig:test_tasks}, are out-of-distribution hand-designed worlds that do not appear in the training set. A world model must discover the core ruleset of environmental dynamics in order to successfully generalize.
Scaling laws for training large neural networks have shown that more data consistently improves performance \citep{kaplan2020scaling, zhai2022scaling}. Figure \ref{fig:wm_generalization} shows this observation to be true in Powderworld as well; world models trained on increasing amounts of start states display higher performance on test states. Each world model is trained on the same number of training examples and timesteps; the only difference is how this data is generated. The average test loss over three training runs is displayed.
Results show that the 10-state world model overfits and does not generalize to the test states. In contrast, the 100-state model achieves much higher test accuracy, and the trend continues as the number of training tasks increases. These results show that the Powderworld world-modelling task demonstrates similar scaling laws as real-world data.
\subsubsection{How do increasingly complex training tasks affect generalization?}
As training data expands to include more varieties of starting states, does world model performance over a set of test states improve? More complex training data may allow world models to learn more robust representations, but may also introduce variance which harms learning or create degenerate training examples when many elements overlap.
Figure \ref{fig:wm_generalization} displays how as additional shapes are included within the training distribution, zero-shot test performance successfully increases. World models are trained on distributions of training states characterized by which shapes are present among lines, circles, and squares. Lines are assigned a random ($X^1$,$Y^1$), ($X^2$,$Y^2$), and thickness. Circles and squares are assigned a random ($X^1$,$Y^1$) along with a radius. Each shape is filled in with a randomly selected element. Between 0 and 5 of each shape are drawn. Interestingly, training tasks with less shape variation also display higher instability, as shown in the test loss spikes for Line-only, Circle-only, and Square-only runs. Additionally, world models operating over training states with a greater number of lines display higher test performance. This behavior may indicate that models trained over more diverse training data learn representations which are more resistant to perturbations.
Results showcase how in Powderworld, as more diverse data is created from the same set of core rules, world models increase in generalization capability.
\subsubsection{Does environment richness influence transfer to novel interactions?}
While a perfect world model will always make correct predictions, there are no guarantees such models can learn new dynamics. This experiment tests the \textit{adaptability} of world models, by examining if they can quickly fine-tune on new elemental reactions.
Powderworld's ruleset is also of importance, as models will only transfer to new elements if all elements share fundamental similarities. Powderworld elements naturally share a set of behaviors, e.g. gravity, reactions-on-contact, and velocity. Thus, this experiment measures whether Powderworld presents a rich enough simulation that models can generalize to new \textit{rules} within the environment.
To run the experiment, distinct world models are trained on distributions containing a limited set of elements. The 1-element model sees only sand, the 2-element model only sand and water, the 3-element model sand, water, and wall, and so on. Worlds are generated via the same procedural generation algorithm; specifically, up to 5 lines are drawn. After training for the standard 5000 iterations, each world model is then fine-tuned for 100 iterations on a training distribution containing three held-out elements: gas, stone, and acid. The world model loss is then measured on a new environment containing only these three elements.
Figure \ref{fig:wm_generalization} (top-right) highlights how world models trained on increasing numbers of elements show greater performance when fine-tuned on a set of unseen elements. These results indicate that world models trained on richer simulations also develop more robust representations, as these representations can more easily be trained on additional information. Powderworld world models learn not only the core rules of the world, but also general features describing those rules, which can then be used to learn new rules.
\section{Reinforcement Learning Tasks}
\begin{figure}
\includegraphics[width=\textwidth]{imgs/taskdesc.png}
\vskip -0.1in
\caption{\textbf{In Powderworld RL tasks, agents must iteratively place elements (including directional wind) to transform a starting state into a goal state.} Within this framework, we present three RL tasks as shown above. Each task contains many challenges, as starting states are randomly generated for each episode. Agents are evaluated on test states that are unseen during training.}
\label{fig:taskdesc}
\end{figure}
Reinforcement learning tasks can be defined within Powderworld via a simple framework, as shown in Figure \ref{fig:taskdesc}. Agents are allowed to iteratively place elements, and must transform a starting state into a goal state. The observation space contains the Powderworld world state $W \in \mathbb{R}^{64\times 64\times 20}$, and the action space is a multi-discrete combination of $X,Y,Element,V_x,V_y$. $V_x$ and $V_y$ are only utilized if the agent is placing wind.
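Applying one such multi-discrete action can be sketched as follows. The wind element id, the separate element/velocity grids, and the exact region arithmetic are illustrative assumptions; the text specifies only that wind affects a 10x10 area and that $V_x$, $V_y$ matter only for wind.

```python
import numpy as np

WIND = 0  # assumed element id for the wind action

def apply_action(elem_grid, vel_field, action):
    """action = (x, y, element, vx, vy): stamp an element, or add wind velocity."""
    x, y, element, vx, vy = action
    if element == WIND:
        h, w = elem_grid.shape
        y0, y1 = max(y - 5, 0), min(y + 5, h)   # 10x10 region around (x, y)
        x0, x1 = max(x - 5, 0), min(x + 5, w)
        vel_field[y0:y1, x0:x1, 0] += vx
        vel_field[y0:y1, x0:x1, 1] += vy
    else:
        elem_grid[y, x] = element               # vx, vy ignored for non-wind

elems = np.zeros((64, 64), dtype=np.int64)
vel = np.zeros((64, 64, 2))
apply_action(elems, vel, (10, 20, 3, 0, 0))        # place element 3 at (10, 20)
apply_action(elems, vel, (32, 32, WIND, 1.0, 0))   # wind pushes rightward
```

An episode then alternates such agent actions with simulator steps, and the reward function of the chosen task scores the resulting world state.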
Tasks are defined by a function that generates a starting state, a goal state, and any restrictions on element placement. Note that Powderworld tasks are specifically designed to be stochastically diverse and contain randomly generated starting states. Within this framework, many task varieties can be defined. This work considers:
\begin{itemize}
\item \textbf{Sand-Pushing.} The Sand-Pushing environment is an RL environment where an agent must move sand particles into a goal slot. The agent is restricted to only placing wind, at a controllable velocity and position. By producing wind, agents interact with the velocity field, allowing them to push and move elements around. Wind affects the velocity field in a 10x10 area around the specified position. Reward equals the number of sand elements within the goal slot, and episodes are run for 64 timesteps. The Sand-Pushing task presents a sparse-reward sequential decision-making problem.
\item \textbf{Destroying.} In the Destroying task, agents are tasked with placing a limited number of elements to efficiently destroy the starting state. Agents are allowed to place elements for five timesteps, after which the world is simulated forwards another 64 timesteps, and reward is calculated as the number of empty elements. A general strategy is to place fire on flammable structures, and place acid on other elements to dissolve them away. The Destroying task presents a task where correctly parsing the given observation is crucial.
\item \textbf{Path-Building.} The Path-Building task presents a construction challenge in which agents must place or remove wall elements to route water into a goal container. An episode lasts 64 timesteps, and reward is calculated as the number of water elements in the goal. Water is continuously produced from a source formation of Cloner+Water elements. In the Path-Building challenge, agents must correctly place blocks such that water flows efficiently in the correct direction. Additionally, any obstacles present must be cleared or built around.
\end{itemize}
To learn to control in this environment, a Stable Baselines 3 PPO agent \citep{raffin2021stable, schulman2017proximal} is trained over 1,000,000 environment interactions. The agent model consists of two convolutional layers with feature sizes 32 and 64 and a kernel size of three, followed by two fully-connected layers. A learning rate of 0.0003 is used, along with a batch size of 256. An off-the-shelf RL algorithm is intentionally chosen, so experiments can focus on the impact of training tasks.
Figure \ref{fig:test_rollout} highlights agents solving the various RL tasks. Training tasks are generated using the same procedural generation algorithm as the world-modelling experiments. Task-specific structures are also placed, such as the goal slots in Sand-Pushing and Path-Building, and initial sand/water elements.
\begin{figure}
\includegraphics[width=\textwidth]{imgs/rlperformance.png}
\vskip -0.1in
\caption{\textbf{Increasing the complexity of RL training tasks helps generalization, up to a task-specific inflection point.} Shown are the test rewards of RL agents trained on tasks with increasing numbers of shapes (shown in log-scale). In Sand-Pushing, too much complexity will decrease test performance, as agents become unable to extract a sufficient reward signal. In Destroying, complexity consistently increases test performance. While increased complexity generally increases the difficulty of training tasks and reduces reward, in Path-Building certain obstacles can be used to complete the goal, improving training reward.}
\label{fig:rl_generalization}
\end{figure}
To test generalization, agents are evaluated on test tasks that are out of distribution from training. Specifically, test tasks are generated using a procedural generation algorithm that only places squares (5 for Destroying and Sand-Pushing, 10 for Path-Building). In contrast, the training tasks are generated using only lines and circles.
Figure \ref{fig:rl_generalization} showcases how training task complexity affects generalization to test tasks. Displayed rewards are averaged from five independent training runs each. Agents are trained on tasks generated with increasing numbers of lines and circles (0, 1, 2, 4 ... 32, 64). These structures serve as obstacles, and training reward generally decreases as complexity increases. One exception is in Path-Building, as certain element structures can be useful in routing water to the goal.
Different RL tasks display a different response to training task complexity. In Sand-Pushing, it is helpful to increase complexity up to 8 shapes, but further complexity harms performance. This inflection point may correspond to the point where learning signal becomes too high-variance. RL is highly dependent on early reward signal to explore and continue to improve, and training tasks that are too complex can cause agent performance to suffer.
In contrast, agents on the Destroying and Path-Building task reliably gain a benefit from increased training task complexity. On the Destroying task, increased diversity during training may help agents recognize where to place fire/acid in test states. For Path-Building, training tasks with more shapes may present more possible strategies for reaching the goal.
\begin{figure}
\includegraphics[width=\textwidth]{imgs/taskrollout.png}
\vskip -0.1in
\caption{\textbf{Agents solving the Sand-Pushing, Destroying, and Path-Building tasks.}
In the Sand-Pushing task, wind is used to push a block of sand elements between obstacles to reach the goal slot on the right. In Destroying, agents must place a limited number of elements to efficiently destroy the world. In Path-Building, agents must construct a path for water to flow from a source to a goal container. Tasks are randomly generated via a procedural algorithm.
}
\label{fig:test_rollout}
\end{figure}
The difference in how complexity affects training in Powderworld world-modelling and reinforcement learning tasks highlights a motivating platform for further investigation. While baseline RL methods may fail to scale with additional complexity and instead suffer due to variance, alternative learning techniques may better handle the learning problem and show higher generalization.
\section{Conclusion}
Generalizing to novel unseen tasks is one of the grand challenges of reinforcement learning. Consistent lessons in deep learning show that \textit{training data} is of crucial importance, which in the case of RL is training tasks. To study how and when agents generalize, the research community will benefit from more expressive foundation environments supporting many tasks arising from the same core rules.
This work introduced Powderworld, an expressive simulation environment that can generate both supervised and reinforcement learning task distributions. Powderworld's ruleset encourages modular interactions and emergent phenomena, resulting in world models which can accurately predict unseen states and even adapt to novel elemental behaviors. Experimental results show that increased task complexity helps in the supervised world-modelling setting and in certain RL scenarios. At times, complexity hampers the performance of a standard RL agent.
Powderworld is built to encourage future research endeavors, providing a rich yet computationally efficient backbone for defining tasks and challenges. The provided experiments hope to showcase how Powderworld can be used as a platform for examining task complexity and agent generalization. Future work may use Powderworld as an environment for studying open-ended agent learning, unsupervised environment design techniques, or other directions. As such, all code for Powderworld is released online in support of extensions.
The usual fundamental group functor $\pi_1\colon\mathsf{Top}_{\ast} \to\mathsf{Grp}$ which assigns to each pointed topological space $(X,x_0)\in\mathsf{Top}_{\ast}$ the group of homotopy classes of based loops and to each based continuous map $f\colon X\to Y$ the group homomorphism $[l]\mapsto [f\circ l]$ can be factorized as $\pi_1=\pcset\circ\looptop$ where $\looptop\colon\mathsf{Top}_{\ast}\to\mathsf{Top}_{\ast}$ is the loop space functor and $\pcset\colon\mathsf{Top}_{\ast}\to\mathsf{Set}_{\ast}$ is the path-component set functor. It is evident that the latter forgets the topological structure of $\looptop(X,x_0)$, thus throwing away potentially useful information encoded in it. A simple workaround to make this information available is to lift $\pcset$ to a functor $\pctop\colon\mathsf{Top}_{\ast}\to\mathsf{Top}_{\ast}$ (we keep the same symbol for economy of notation) by placing on the set of path components of a space the quotient topology induced by the natural projection that assigns to each point its path component. This was done by Daniel Biss \cite{Bis2002} and results in a topologized fundamental group functor $\biss\colon\mathsf{Top}_{\ast}\to\mathsf{QTopGrp}$, where $\mathsf{QTopGrp}$ is the category of quasitopological groups\footnote{A quasitopological group is almost like a topological group, except that we do not require multiplication to be continuous.} and continuous homomorphisms. As clarified by Paul Fabel \cite{Fab2009} and Jeremy Brazas \cite{Bra2011}, the reason why $\biss$, contrary to what is claimed in \cite{Bis2002}, does not land in $\mathsf{TopGrp}$ is intimately related to the fact that in $\mathsf{Top}$ a product of quotient maps need not be quotient. 
This shortcoming has inspired an ingenious modification \cite{Bra2013} of the quotient topology of $\biss(X,x_0)$ to obtain a genuine $\mathsf{TopGrp}$-valued functor $\brazas$, consisting in iterating (transfinitely many times, in general) the procedure that places on $\pi_1(X,x_0)$ the quotient topology with respect to the multiplication $\biss(X,x_0)\times\biss(X,x_0)\to\pi_1(X,x_0)$. The additional information encoded in $\biss$ and $\brazas$ is inconspicuous for spaces admitting universal covers, whereas it turns out to be useful to discriminate among spaces with complicated local structure, i.e.~that fail to be locally path connected or semilocally simply connected. The functor $\brazas$ allows also for a generalized theory of covering spaces \cite{Bra2012a}.
We would like to follow a different route and leave $\mathsf{Top}$ in favor of one of its better-behaved supercategories where quotient maps are product-stable, thus yielding a continuous multiplication in the fundamental group. Clearly the new functor will not land in $\mathsf{TopGrp}$ but rather in the category of group-objects of the chosen supercategory.
When substituting $\mathsf{Top}$ with a larger category there are two contrasting principles at work \cite{Her1987} summarized by the catchlines ``the ampler the better'' and ``the meagerer the better'', that is, larger extensions are needed when stronger convenience requirements are made, whereas smaller extensions generally retain more structure of the original category. In our case it turns out there are at least two suitable extensions of $\mathsf{Top}$: the smaller one is its cartesian closed topological hull $\mathsf{EpiTop}$, whose objects are the so-called epitopological spaces (also known as Antoine spaces), and the larger one is its topological universe hull $\mathsf{PsTop}$, whose objects are the so-called pseudotopological spaces.
In the spirit of the catchlines above, $\mathsf{EpiTop}$ and $\mathsf{PsTop}$ are large enough to be cartesian closed, and small enough to be still significantly related to $\mathsf{Top}$. The key fact that makes $\mathsf{EpiTop}$ and $\mathsf{PsTop}$ suited for defining our enriched fundamental groups is the validity of the pasting lemma, together with the concrete reflectivity of the embeddings $\mathsf{Top}\hookrightarrow\mathsf{EpiTop}\hookrightarrow\mathsf{PsTop}$. We shall give proper definitions in the next sections. Here we only mention that concrete reflectivity does not rule out larger constructs from the list of possibly useful extensions, whereas the validity of the pasting lemma seems to be peculiar to extensions not larger than $\mathsf{PsTop}$.
In common with $\biss$ and $\brazas$, the resulting functors
\begin{equation*}
\begin{aligned}
\piepi&\colon\mathsf{EpiTop}_{\ast}\to\mathsf{EpiTopGrp}\\
\pips&\colon\mathsf{PsTop}_{\ast}\to\mathsf{PsTopGrp}
\end{aligned}
\end{equation*}
are homotopy invariant and behave as expected under a change of basepoint. Moreover, their restrictions to $\mathsf{Top}_{\ast}$ are suitable lifts of $\biss$. This implies that all the information encoded in $\biss$ is available in both $\piepi$ and $\pips$.
A piece of evidence that convenient properties of $\mathsf{EpiTop}$ and $\mathsf{PsTop}$ translate into greater regularity of $\piepi$ and $\pips$ than $\biss$ is given by the fact that both $\piepi$ and $\pips$ preserve finite products in their respective categories ($\pips$ preserves even arbitrary ones), while $\biss$ does not. This should be compared with the fact that $\brazas$ preserves finite products (it is not known whether it preserves arbitrary ones). A thorough investigation of basic categorical properties of $\piepi$ and $\pips$ has yet to be done.
\section{\texorpdfstring{$\mathsf{PsTop}$}{PsTop} and \texorpdfstring{$\mathsf{EpiTop}$}{EpiTop}}\label{sec:presentation}
No originality is claimed for this section. The reader familiar with these categories may safely skip it. References for $\mathsf{PsTop}$ include \cite{Cho1947}, \cite{BenHerLow1991}, \cite{HerColSch1991}, \cite{Wyl1991}. References for $\mathsf{EpiTop}$ include \cite{Ant1966}, \cite{Mac1973}, \cite{Bou1975}.
The existence of two different topologized versions of $\pi_1$ as reviewed in Sec.\@ \ref{sec:overview} can be ascribed to the fact that $\mathsf{Top}$ is not cartesian closed. Indeed, if $\mathsf{Top}$ were cartesian closed then the two procedures that define $\biss$ and $\brazas$ would give the same topologization for $\pi_1(X,x_0)$. In particular, this topologization would enjoy some desirable properties associated with $\biss$ and $\brazas$ separately: it would be defined in a straightforward way internally, as in $\biss$, and it would make the corresponding functor land in the category of group-objects of $\mathsf{Top}$, as in $\brazas$. It is then natural to construct a variant of these functors on a cartesian closed category related to $\mathsf{Top}$. We might consider \emph{sub}categories of $\mathsf{Top}$ but, in order to keep all topological spaces into the picture, we consider instead its \emph{super}categories. This has the additional effect of recovering $\biss$ by reflecting back to $\mathsf{Top}$, as we shall see.
For reasons already explained in Sec.\@ \ref{sec:overview} we look for extensions that do not depart too much from $\mathsf{Top}$. A useful setting for such a task is that of topological constructs. Loosely speaking, these are categories of structured sets admitting initial and final structures, as $\mathsf{Top}$. More precisely, a topological construct is a concrete category over $\mathsf{Set}$ which has unique initial (and hence also unique final) structures with respect to the forgetful functor (see the full treatment \cite{AdaHerStr2004} for more details). Sometimes a topological construct is assumed to be well-fibered\footnote{Well-fibered means that the following two conditions hold: for each set $X$, the class of objects having $X$ as underlying set is a set; for each set $X$ with at most one element, there is exactly one object with underlying set $X$.}. All the constructs considered in these notes are well-fibered, so in order to facilitate the presentation we tacitly assume a topological construct to be such.
We now recall the notion of an exponentiable object in a topological construct. First notice that any topological construct $\mathsf{C}$ has concrete products: the product of a family of $\mathsf{C}$-objects $(X_j)_{j\in J}$ is defined as the unique initial lift of the structured source $(\operatorname{\Pi}_{k\in J} |X_k| \to X_j)_{j\in J}$. Moreover, given $\mathsf{C}$-objects $X,Y,Z$ and a $\mathsf{C}$-morphism $f\colon X\times Y\to Z$, for each point $x\in X$ the set map $f_x\colon Y\to Z$ given by $f_x(y)=f(x,y)$ is a $\mathsf{C}$-morphism\footnote{Here we used the assumption of well-fiberedness.}.
\begin{defn}
Given a topological construct $|\cdot|\colon\mathsf{C}\to\mathsf{Set}$, an object $X\in\mathsf{C}$ is said to be exponentiable if for each $Y\in\mathsf{C}$ there is a $\mathsf{C}$-object, denoted by $Y^X$, such that:
\begin{enumerate}
\item $|Y^X|=\mathsf{C}(X,Y)$,
\item the evaluation map $\ev\colon Y^X\times X\to Y$ is a $\mathsf{C}$-morphism,
\item for any $Z\in\mathsf{C}$ and any $\mathsf{C}$-morphism $h\colon Z\times X\to Y$ the corresponding set map $\hat{h}\colon Z\to Y^X$ defined by $\hat{h}(z)(x)=h(z,x)$ is a $\mathsf{C}$-morphism.
\end{enumerate}
The $\mathsf{C}$-structure defining $Y^X$ is called exponential.
\end{defn}
\begin{rmk}
Notice that for an exponentiable object $X\in\mathsf{C}$ the correspondence $\mathsf{C}({Z\times X}, Y)\to\mathsf{C}(Z,Y^X)$ given by $h\mapsto \hat{h}$ is a bijection.
\end{rmk}
A topological construct is cartesian closed iff all its objects are exponentiable (\cite{Her1974} or \cite[Thm 2.14]{HerColSch1991}). Although $\mathsf{Top}$ is not cartesian closed, the class of its exponentiable spaces is well understood \cite{EscHec2001} and contains all locally compact Hausdorff spaces. In particular, the interval $[0,1]$ with its standard topology is exponentiable. Moreover, for a locally compact Hausdorff space $X$ and any space $Y\in\mathsf{Top}$ the exponential topology defining $Y^X$ is the compact-open topology.
Before introducing pseudotopological spaces we briefly recall that, given a map $f\colon X\to Y$ between sets and a filter $\mathscr{F}$ on $X$, we can always define the pushforward filter $f_*\mathscr{F}$ on $Y$ as the filter generated by the filter base $\{f(F)\mid F\in\mathscr{F}\}$. It turns out that $f_*\mathscr{F}=\{S\subset Y\mid f^{-1}(S)\in \mathscr{F}\}$ and that for any maps $g\colon X\to Y$ and $f\colon Y\to Z$ we have $(f\circ g)_*\mathscr{F}=f_*(g_*\mathscr{F})$. Moreover, if $\mathscr{F}$ is a (principal) ultrafilter then $f_*\mathscr{F}$ is a (principal) ultrafilter. For a set $X$ and $x\in X$ we denote by $F(X)$ the set of all filters on $X$, by $U(X)$ the set of all ultrafilters on $X$ and by $\dot{x}$ the principal ultrafilter on $X$ consisting of all supersets of $\{x\}$.
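For a concrete illustration of these operations: if $f\colon X\to Y$ is constant with value $y_0$, then $f_*\mathscr{F}=\dot{y}_0$ for every filter $\mathscr{F}$ on $X$, since $f^{-1}(S)\in\mathscr{F}$ holds exactly when $y_0\in S$; similarly, for an arbitrary map $f$ one computes $f_*\dot{x}=\dot{f(x)}$, because $f^{-1}(S)\ni x$ precisely when $S\ni f(x)$.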
\begin{defn}[$\mathsf{PsTop}$]
A pseudotopological structure on a set $X$ is a relation $u\subset U(X)\times X$ such that $(\dot{x},x)\in u$ for each $x\in X$. The pair $(X,u)$ is called a pseudotopological space, or pseudospace for short. A map $f\colon X\to Y$ between pseudospaces $(X,u_X)$ and $(Y,u_Y)$ is called continuous at $x\in X$ if we have $(f_*\mathscr{U},f(x))\in u_Y$ whenever $(\mathscr{U},x)\in u_X$. A map $f\colon X\to Y$ is called continuous if it is continuous at each $x\in X$. We denote by $\mathsf{PsTop}$ the category whose objects are all pseudospaces and whose morphisms are all continuous maps between them.
\end{defn}
The set of all pseudotopological structures on a given set is a complete lattice with respect to set inclusion. The least and greatest elements are, respectively, the discrete pseudotopological structure $\{(\dot{x},x)\mid x\in X\}$ and the indiscrete pseudotopological structure $U(X)\times X$. Given a set map $f\colon X\to Y$, a pseudospace structure $u$ on $X$ induces the pseudotopological structure $\{(f_*\mathscr{U},f(x))\mid (\mathscr{U},x)\in u\}\cup\{(\dot{y},y)\mid y\in Y\}$ on $Y$ and this is the least one among all pseudotopological structures on $Y$ making $f$ continuous. It is the final pseudotopological structure with respect to the pseudospace $(X,u)$ and the set map $f$. Dually, a pseudospace structure $u$ on $Y$ induces the pseudotopological structure $\{(\mathscr{U},x)\mid(f_*\mathscr{U},f(x))\in u\}\cup\{(\dot{x},x)\mid x\in X\}$ on $X$ and this is the greatest one among all pseudotopological structures on $X$ making $f$ continuous. It is the initial pseudotopological structure with respect to the pseudospace $(Y,u)$ and the set map $f$. It follows that the category $\mathsf{PsTop}$ together with the obvious forgetful functor, defined by $|(X,u)|=X$ on objects and the identity on maps, is a topological construct. Explicitly, the initial pseudotopology $u$ on a set $X$ with respect to a collection of maps $\{f_j\colon X\to (X_j,u_j)\}_{j\in J}$ is the intersection of all the initial pseudotopological structures induced by each $f_j$. Dually, the final pseudotopology $u$ on a set $X$ with respect to a collection of maps $\{f_j\colon (X_j,u_j)\to X\}_{j\in J}$ is the union of all the final pseudotopological structures induced by each $f_j$.
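In particular, for a subset $A\subset |X|$ of a pseudospace $(X,u)$ with inclusion $\iota\colon A\hookrightarrow X$, the subspace pseudotopology on $A$ (the initial structure with respect to $\iota$) admits a simple description: for $\mathscr{U}\in U(A)$ and $x\in A$, $\mathscr{U}\to x$ in $A$ if and only if $\iota_*\mathscr{U}\to x$ in $X$. This description is used repeatedly in the proofs below.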
In the following we often use the alternative notation $\mathscr{U}\to_u x$, or more simply $\mathscr{U}\to x$, to indicate $(\mathscr{U},x)\in u$. When there is no possibility of confusion we adopt the abuse of notation of writing $X$ for the pseudospace $(X,u)$ and, accordingly, sometimes we use the notation $\mathscr{U}\to_X x$.
A pseudotopological structure $u\subset U(X)\times X$ induces a relation $u'\subset F(X)\times X$ defined by $(\mathscr{F},x)\in u'$ iff for all ultrafilters $\mathscr{U}\supset \mathscr{F}$ we have $(\mathscr{U},x)\in u$. When we write $\mathscr{F}\to x$ for a filter $\mathscr{F}$ on a pseudospace $(X,u)$ we thus mean\footnote{It is actually possible to define a pseudotopological structure directly by means of filters rather than ultrafilters, thereby using a relation $v\subset F(X)\times X$, in which case we must append to the single axiom $\dot{x}\to_v x$ the following two additional axioms:
\begin{itemize}
\item $F(X)\ni\mathscr{G}\supset\mathscr{F}\to_v x\implies\mathscr{G}\to_v x$,
\item if a filter $\mathscr{F}\not\to_v x$ then there is a filter $\mathscr{F}'\supset \mathscr{F}$ such that, for all filters $\mathscr{F}''\supset \mathscr{F}'$, $\mathscr{F}''\not\to_v x$.
\end{itemize}} $(\mathscr{F},x)\in u'$.
Not surprisingly, $\mathsf{Top}$ can be viewed as a full isomorphism-closed subcategory of $\mathsf{PsTop}$ by assigning to each topological space $X$ the pseudospace consisting of the same underlying set equipped with the pseudotopological structure given by ultrafilter convergence.
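Explicitly, for a topological space $X$ and an ultrafilter $\mathscr{U}$ on $X$, one declares $\mathscr{U}\to x$ iff $\mathscr{U}$ refines the neighborhood filter $\mathscr{N}(x)$, i.e.\@ $\mathscr{N}(x)\subset\mathscr{U}$. Fullness of the embedding amounts to the classical fact that a map between topological spaces is continuous if and only if it preserves ultrafilter convergence.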
As opposed to $\mathsf{Top}$, the construct $\mathsf{PsTop}$ is cartesian closed \cite{HerColSch1991}. For $X,Y\in\mathsf{PsTop}$ the exponential pseudotopology on $\mathsf{PsTop}(X,Y)$ is given by declaring, for a filter $\mathscr{F}$ on $\mathsf{PsTop}(X,Y)$ and $f\in\mathsf{PsTop}(X,Y)$, $\mathscr{F}\to f$ iff, whenever $F(X)\ni\mathscr{G}\to_X x$, we have $\ev_*(\mathscr{F}\times \mathscr{G})\to_Y f(x)$. Here $\mathscr{F}\times \mathscr{G}$ is the filter generated by the collection $\{F\times G\mid F\in \mathscr{F}, G\in \mathscr{G}\}$ and $\ev\colon \mathsf{PsTop}(X,Y)\times X\to Y$ is the evaluation map.
\begin{defn}[$\mathsf{EpiTop}$]
A pseudospace $X$ is called epitopological, or an epispace for short, if it is the initial pseudospace\footnote{The original definition by Antoine \cite{Ant1966} \cite{Mac1973} uses the larger category of quasitopological spaces as ``environment'' but it is not hard to prove the equivalence with ours by following a trail of theorems in \cite{Mac1973}.} with respect to a collection of maps $\left\{f_j\colon X\to {Z_j}^{Y_j}\right\}_{j\in J}$ where $Y_j,Z_j\in\mathsf{Top}$. The collection of all epispaces determines a full isomorphism-closed subcategory of $\mathsf{PsTop}$ denoted by $\mathsf{EpiTop}$.
\end{defn}
Every topological space $X\in\mathsf{Top}$ is epitopological by considering the bijection $X\to X^{\{*\}}$, $x\mapsto (*\mapsto x)$ where $\{*\}$ is a one-point topological space, therefore $\mathsf{Top}$ is a full isomorphism-closed subcategory of $\mathsf{EpiTop}$. Given $Y\in\mathsf{EpiTop}$, the exponential pseudotopology on $\mathsf{PsTop}(X,Y)$ turns out to be epitopological \cite{Mac1973} and thus every epispace is exponentiable. In other words, $\mathsf{EpiTop}$ is cartesian closed. In a precise sense, $\mathsf{EpiTop}$ is the smallest cartesian closed topological construct generated by $\mathsf{Top}$ (called the cartesian closed topological hull of $\mathsf{Top}$, see e.g.\@ \cite{LowSioVer2009} for more details).
We summarize all these results in the next proposition.
\begin{prop}\label{prop:expo}
Of the three topological constructs $$\mathsf{Top}\hookrightarrow\mathsf{EpiTop}\hookrightarrow\mathsf{PsTop}\;,$$ the last two are cartesian closed. For $X,Y\in\mathsf{PsTop}$, the exponential pseudotopology defining $Y^X$ is the one described above. Moreover:
\begin{enumerate}
\item\label{expo:itm:first} whenever $Y\in\mathsf{EpiTop}$, we have $Y^X\in\mathsf{EpiTop}$. Therefore, the exponential epitopology is just the exponential pseudotopology applied to epispaces;
\item\label{expo:itm:second} whenever $X,Y\in\mathsf{Top}$ with $X$ locally compact Hausdorff, we have $Y^X\in\mathsf{Top}$. Therefore, for a locally compact Hausdorff space $X$ the exponential topology is just the exponential pseudotopology applied to spaces.
\end{enumerate}
\end{prop}
The next proposition shows that the inclusions $\mathsf{Top}\hookrightarrow\mathsf{EpiTop}\hookrightarrow\mathsf{PsTop}$ admit left adjoints.
\begin{prop}\label{prop:reflectors}
There are concrete reflectors
$$\mathsf{Top}\xleftarrow{R_2}\mathsf{EpiTop}\xleftarrow{R_1}\mathsf{PsTop}\;.$$
This means, say for $R_1$, that for each $X\in\mathsf{PsTop}$ there is $R_1 X\in\mathsf{EpiTop}$ such that: $|X|=|R_1 X|$, the identity map $\mathrm{id}\colon X\to R_1 X$ is continuous, and for each continuous map $f\colon X\to Y$ with $Y\in\mathsf{EpiTop}$ the map $f\colon R_1 X\to Y$ is continuous as well. Analogously for $R_2$ and for the composition $R \coloneqq R_2 R_1$. Moreover, $R_2$ is the restriction of $R$ to $\mathsf{EpiTop}$, and $R$ admits the following simple description: for $X\in\mathsf{PsTop}$, a subset $S\subset |X|$ is open in $R X$ iff whenever $\mathscr{U}\to x\in S$ we have $S\in\mathscr{U}$.
\end{prop}
\begin{proof}[Sketch of proof]
We proceed backwards: it can be checked that the given description of $R$ defines a concrete reflector, so its restriction $R_2$ to $\mathsf{EpiTop}$ is also a concrete reflector. The existence of a concrete reflector $R_1$ satisfying $R = R_2 R_1$ relies on the fact that $\mathsf{EpiTop}$ is initially closed in $\mathsf{PsTop}$ \cite{Mac1973} \cite[Prop.\@ 21.31]{AdaHerStr2004} and the fact that all constructs considered here are amnestic \cite{AdaHerStr2004}.
\end{proof}
Notice that each reflector is the identity functor when restricted to its respective image category. By the general theory of topological constructs \cite{AdaHerStr2004} one of the consequences of Prop.\@ \ref{prop:reflectors} is the following.
\begin{cor}\label{cor:preservation}
The inclusions $\mathsf{Top}\hookrightarrow\mathsf{EpiTop}\hookrightarrow\mathsf{PsTop}$ preserve initial sources and the reflectors $\mathsf{Top}\xleftarrow{R_2}\mathsf{EpiTop}\xleftarrow{R_1}\mathsf{PsTop}$ preserve final sinks.
\end{cor}
We mention in passing that by \cite{SchWec1992} quotient maps in $\mathsf{EpiTop}$ (resp., $\mathsf{PsTop}$) between topological spaces correspond exactly to product-stable (resp., pullback-stable) quotient maps in $\mathsf{Top}$.
\section{A pasting lemma in \texorpdfstring{$\mathsf{PsTop}$}{PsTop} and \texorpdfstring{$\mathsf{EpiTop}$}{EpiTop}}\label{sec:pasting}
In this section we state and prove a generalization to $\mathsf{PsTop}$ (and hence also to $\mathsf{EpiTop}$) of the pasting lemma in $\mathsf{Top}$ which we now recall (see e.g.~\cite[Chap~III, Thm~9.4]{Dug1966}).
\begin{lemma}[Pasting Lemma in $\mathsf{Top}$]\label{lemma:pasting_top}
Let $X$ be a topological space and $\{X_j\}_{j\in J}$ a cover of $X$ such that either
\begin{enumerate}
\item all $X_j$ are open, or
\item all $X_j$ are closed, and form a locally finite family\footnote{We recall that a family of subsets of a topological space is locally finite if each $x\in X$ has a neighborhood intersecting only finitely many of them.}.
\end{enumerate}
If $Y$ is a topological space and $f\colon X\to Y$ is a function such that each restriction $f_j\colon X_j\to Y$ is continuous, where each $X_j$ carries the subspace topology, then $f$ is continuous.
\end{lemma}
\begin{lemma}[Pasting Lemma in $\mathsf{PsTop}$]\label{lemma:pasting_pstop}
Let $X$ be a pseudospace and $\{X_j\}_{j\in J}$ a cover of $X$ such that either
\begin{enumerate}
\item \label{pasting:itm:first} all $X_j$ are open in the reflected topological space $R X$, or
\item \label{pasting:itm:second} all $X_j$ are closed in $R X$, and form a locally finite family in $R X$.
\end{enumerate}
If $Y$ is a pseudospace and $f\colon X\to Y$ is a function such that each restriction $f_j\colon X_j\to Y$ is continuous, where each $X_j$ carries the subspace pseudotopology, then $f$ is continuous.
\end{lemma}
\begin{rmk}\label{rmk:pasting_epi}
Lemma \ref{lemma:pasting_pstop} applies verbatim to $\mathsf{EpiTop}$ by Cor.\@ \ref{cor:preservation}. When the pseudotopologies of $X$ and $Y$ are topological, we recover Lemma \ref{lemma:pasting_top}.
\end{rmk}
Before proving Lemma \ref{lemma:pasting_pstop} we need to recall the notion of pullback for filters. In contrast to pushforwards, the pullback of a filter does not always exist. However, we have the following.
\begin{defn}
Let $f\colon X\to Y$ be a map between sets and let $\mathscr{F}$ be a filter on $Y$. If $f^{-1}(F)\neq\emptyset$ for each $F\in\mathscr{F}$ then we define $f^*\mathscr{F}$ as the filter generated by the filter base $\{f^{-1}(F)\mid F\in\mathscr{F}\}$.
\end{defn}
\begin{lemma}\label{lemma:pullback}
When defined, the filter $f^*\mathscr{F}$ satisfies the formula $\mathscr{F}\subset f_*f^*\mathscr{F}$. In particular, if $\mathscr{F}$ is an ultrafilter and $f^*\mathscr{F}$ is defined then $f_*f^*\mathscr{F}=\mathscr{F}$, which implies $f(X)\in\mathscr{F}$.
\end{lemma}
\begin{proof}
We compute: $f_*f^*\mathscr{F}=\{S\subset Y\mid f^{-1}(S)\supset f^{-1}(F)\text{ for some }F\in\mathscr{F}\}$. Obviously $\mathscr{F}\subset f_*f^*\mathscr{F}$. Notice that $f(X)\in f_*f^*\mathscr{F}$ since $f^{-1}f(X)=X$. When $\mathscr{F}$ is an ultrafilter, by the maximality of $\mathscr{F}$ we have equality.
\end{proof}
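For instance, if $\iota\colon A\hookrightarrow X$ is an inclusion and $\mathscr{U}$ is an ultrafilter on $X$ with $A\in\mathscr{U}$, then every $U\in\mathscr{U}$ meets $A$ (as $U\cap A\in\mathscr{U}$ is non-empty), so $\iota^*\mathscr{U}$ is defined and $\iota_*\iota^*\mathscr{U}=\mathscr{U}$. This is exactly the situation occurring in the proof of Lemma \ref{lemma:pasting_pstop} below.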
\begin{proof}[Proof of Lemma \ref{lemma:pasting_pstop}]
We first assume that all $X_j$ are open in $R X$. Take $\mathscr{U}\to x$ with $x\in X_j$ for some $j$, and let $\iota\colon X_j\hookrightarrow X$ denote the inclusion. By openness we have $X_j\in\mathscr{U}$, so every $U\in\mathscr{U}$ has non-empty intersection with $X_j$ and by Lemma \ref{lemma:pullback} applied to $\iota$ we have $\iota_* \iota^*\mathscr{U}=\mathscr{U}$. Therefore $\iota^*\mathscr{U}\to x$ in $X_j$ by the definition of subspace pseudotopology, and by the continuity of $f_j$ we have ${f_j}_*(\iota^*\mathscr{U})=f_*(\iota_* \iota^*\mathscr{U})=f_*(\mathscr{U})\to f(x)$, which proves that $f$ is continuous. This proves the lemma for case \eqref{pasting:itm:first}.

Now we assume case \eqref{pasting:itm:second}, that is, all $X_j$ are closed in $R X$ and form a locally finite family in $R X$. Take $\mathscr{U}\to x$. By local finiteness there is an open set $V$ of $R X$ containing $x$ that meets only finitely many of the $X_j$, say $X_{j_1}, X_{j_2}, \dots, X_{j_n}$; since the $X_j$ cover $X$, these finitely many sets cover $V$. We now show by contradiction that at least one among $X_{j_1}, \dots, X_{j_n}$ intersects each $U\in\mathscr{U}$: indeed, suppose that for each $i=1,\dots, n$ there is $U_i\in\mathscr{U}$ such that $X_{j_i}\cap U_i=\emptyset$. Then $(\cap_{i=1}^n U_i)\cap (\cup_{i=1}^n X_{j_i})=\emptyset$, which implies $(\cap_{i=1}^n U_i)\cap V=\emptyset$ since $V$ is covered by $\cup_{i=1}^n X_{j_i}$. But $\cap_{i=1}^n U_i$ belongs to $\mathscr{U}$ as a finite intersection of elements of $\mathscr{U}$, and $V$ belongs to $\mathscr{U}$ because $\mathscr{U}\to x\in V$ with $V$ open, therefore $(\cap_{i=1}^n U_i)\cap V\neq\emptyset$ and we have the desired contradiction. To fix notation, say $X_k$ has non-empty intersection with all $U\in\mathscr{U}$, and let $\kappa\colon X_k\hookrightarrow X$ denote the inclusion. By Lemma \ref{lemma:pullback} we have $\kappa_* \kappa^*\mathscr{U}=\mathscr{U}$ and $X_k\in\mathscr{U}$. By closedness $x\in X_k$, so $\kappa^*\mathscr{U}\to x$ in $X_k$. Finally, by the continuity of $f_k$ we have ${f_k}_*(\kappa^*\mathscr{U})=f_*(\kappa_* \kappa^*\mathscr{U})=f_*(\mathscr{U})\to f(x)$, which proves that $f$ is continuous.
\end{proof}
\begin{rmk}
One is tempted to extend Lemma \ref{lemma:pasting_pstop} to larger cartesian closed topological constructs such as\footnote{These are constructs consisting of sets equipped with a choice of filters for each point, much like the formulation of $\mathsf{PsTop}$ in terms of filters except that weaker axioms are assumed for the choice of filters, see e.g.\@ \cite{ColLow2001}.} $\mathsf{Lim}$ or $\mathsf{Conv}$, although our proof seems to suggest that the fact that a pseudotopology can be specified in terms of \emph{ultra}filters is crucial. In \cite[App.\@ A.2]{Pre2002} a sort of pasting lemma is proved for functions from a pretopological space to a limit space, allowing for a definition of the fundamental group in $\mathsf{Lim}$ (and, by recurrence, of all higher homotopy groups). However, that lemma does not ensure the continuity of loop concatenation in a loop limit space (for that we would need a pasting lemma for functions between limit spaces) and thus it does not lead to a notion of a fundamental group enriched over $\mathsf{Lim}$. Whether such a notion exists remains an open problem. On the other hand, the same appendix offers a counterexample to the possibility of extending any form of pasting lemma to $\mathsf{Conv}$ (called $\mathsf{KConv}$ in \cite{Pre2002}) by exhibiting a convergence space $X$ for which the path concatenation of two ``path composable'' continuous maps $[0,1]\to X$ is not continuous. In other words, in $\mathsf{Conv}$ the unit interval is not suitable to define a reasonable notion of fundamental group.
\end{rmk}
\begin{problem}
Extending Lemma \ref{lemma:pasting_pstop} to $\mathsf{Lim}$, or finding a counterexample to such an extension.
\end{problem}
\section{Path-component functors in \texorpdfstring{$\mathsf{PsTop}$}{PsTop} and \texorpdfstring{$\mathsf{EpiTop}$}{EpiTop}}
We briefly recall that in $\mathsf{Top}$ one can define the path-component endofunctor $\pctop\colon\mathsf{Top}\to\mathsf{Top}$ which assigns:
\begin{itemize}
\item to each space $X$ the quotient space $\pctop X$ of its path components, where the quotient topology is taken with respect to the natural projection $\quot{X}{}$ of $X$ onto the set of its path components;
\item to each continuous map $f\colon X\to Y$ between spaces the continuous map $\pctop f\colon\pctop X\to\pctop Y$ defined by $(\pctop f)(\quot{X}{}(x))\coloneqq \quot{Y}{}(f(x))$.
\end{itemize}
For each continuous map $f\colon X\to Y$ between spaces we thus have the commutative diagram
\begin{center}
\begin{tikzcd}
X\arrow[r,"f"] \arrow[d,"\quot{X}{}"]& Y\arrow[d,"\quot{Y}{}"]\\
\pctop X \arrow[r,"\pctop f"] & \pctop Y
\end{tikzcd}
\end{center}
where $\quot{X}{}$ and $\quot{Y}{}$ are quotient maps in $\mathsf{Top}$. Given a product $\operatorname{\Pi}_j X_j$ of spaces, there is a natural continuous bijection $\pctop \operatorname{\Pi}_j X_j\to \operatorname{\Pi}_j {\pctop X_j}$ which is a homeomorphism iff the product of quotient maps $\operatorname{\Pi}_j \quot{X_j}{}\colon \operatorname{\Pi}_j X_j\to \operatorname{\Pi}_j \pctop X_j$ is quotient. Since in $\mathsf{Top}$ products of quotient maps need not be quotient, it follows that $\pctop$ does not preserve products (not even finite ones).
See \cite{Bra2012} for more details.
Substituting $\mathsf{Top}$ with $\mathsf{EpiTop}$ and $\mathsf{PsTop}$ in the construction above, we obtain analogous endofunctors
\begin{equation*}
\begin{aligned}
\pcepi&\colon\mathsf{EpiTop}\to\mathsf{EpiTop}\\
\pcps&\colon\mathsf{PsTop}\to\mathsf{PsTop}
\end{aligned}
\end{equation*}
with corresponding commutative diagrams
\begin{center}
\begin{tikzcd}
X\arrow[r,"f"] \arrow[d,"\quot{X}{epi}"]& Y\arrow[d,"\quot{Y}{epi}"]&&Z\arrow[r,"g"] \arrow[d,"\quot{Z}{ps}"]& W\arrow[d,"\quot{W}{ps}"]\\
\pcepi X \arrow[r,"\pcepi f"] & \pcepi Y&&\pcps Z \arrow[r,"\pcps g"] & \pcps W
\end{tikzcd}
\end{center}
where on the left-hand side $f\colon X\to Y$ is a continuous map between epispaces and $\quot{X}{epi}$ and $\quot{Y}{epi}$ are quotient maps in $\mathsf{EpiTop}$, while on the right-hand side $g\colon Z\to W$ is a continuous map between pseudospaces and $\quot{Z}{ps}$ and $\quot{W}{ps}$ are quotient maps in $\mathsf{PsTop}$.
The ``convenient'' properties of $\mathsf{EpiTop}$ and $\mathsf{PsTop}$ induce better-behaved path-component functors at least with respect to products, as the next proposition shows.
\begin{prop}\label{prop:pc_products}
$\pcps$ preserves products and $\pcepi$ preserves finite products.
\end{prop}
\begin{proof}
In $\mathsf{PsTop}$ products of quotient maps are quotient, see \cite[Theorem~31]{BenHerLow1991}. In $\mathsf{EpiTop}$ finite products of quotient maps are quotient: to prove it use the characterization of cartesian closedness for topological constructs given in \cite{Her1974} or \cite[Thm 2.14]{HerColSch1991}, together with the fact that $\mathsf{EpiTop}$ is cartesian closed.
\end{proof}
Let $X$ be a pseudospace and $\gamma\colon[0,1]\to X$ be a path in $X$. Then $R_1\gamma$ is a path in the reflected epispace $R_1 X$ and $R\gamma$ is a path in the reflected space $R X$. By Cor.\@ \ref{cor:preservation} we obtain natural continuous surjections $\quott{X}{ps}\colon R_1 \pcps X \twoheadrightarrow\pcepi R_1 X$ and $\quott{X'}{epi} \colon R_2 \pcepi X'\twoheadrightarrow \pctop R_2 X'$ for each $X\in\mathsf{PsTop}$ and each $X'\in\mathsf{EpiTop}$. The commutative diagram in Fig.\@ \ref{fig:wardrobe} illustrates the mutual relations among $\pcps$, $\pcepi$ and $\pctop$.
\begin{center}
\begin{figure}
\begin{tikzcd}[row sep={2em,between origins}, column sep={2em,between origins}, text height=1.5ex, text depth=0.25ex, nodes in empty cells]
X \arrow[rrrrrr,"f"] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd,swap,"\quot{X}{ps}",->>] &&&&&& Y \arrow[dddd,swap,"\quot{Y}{ps}",densely dotted,->>] \arrow[dddrr,"\mathrm{id}",sloped] \\
\\
\\
&& R_1 X \arrow[rrrrrr,crossing over,"f" near start] &&&&&& R_1 Y \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd,"\quot{Y}{ps}",densely dotted,->>] \\
\pcps X \arrow[rrrrrr, "\pcps f" near end,densely dotted,swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddddddrr,bend right=12,"\quott{X}{ps}",swap,->>]&&&&&& \pcps Y \arrow[dddddddrr,bend right=12,"\quott{Y}{ps}",swap,densely dotted,->>]\arrow[dddrr, "\mathrm{id}" near start,densely dotted,sloped]\\
\\
&&&& R X \arrow[uuull,<-,crossing over,"\mathrm{id}",swap,sloped] \arrow[rrrrrr,crossing over,"f" very near start] &&&&&& R Y \arrow[dddd,"\quot{Y}{ps}",->>]\\
&& R_1\pcps X \arrow[uuuu,<<-,crossing over,"\quot{X}{ps}",swap] \arrow[rrrrrr,crossing over,"\pcps f",densely dotted,swap] \arrow[dddrr,"\mathrm{id}" near start,sloped] \arrow[dddd,"\quott{X}{ps}",->>] &&&&&& R_1 \pcps Y \arrow[dddrr,"\mathrm{id}",densely dotted,sloped] \arrow[dddd,"\quott{Y}{ps}",densely dotted,->>] \\
\\
\\
&&&& R \pcps X \arrow[uuuu,<<-,crossing over,"\quot{X}{ps}",swap] \arrow[rrrrrr,crossing over,"\pcps f" near start] &&&&&& R \pcps Y \arrow[dddd,"\quott{Y}{ps}",->>] \\
&& \pcepi R_1 X \arrow[rrrrrr,"\pcepi f",densely dotted,swap] \arrow[dddrr,"\mathrm{id}" near start,sloped] \arrow[dddddddrr,bend right=12,"\quott{X}{epi}",swap,->>]&&&&&& \pcepi R_1 Y \arrow[dddrr,"\mathrm{id}",densely dotted,sloped] \arrow[dddddddrr,bend right=12,"\quott{Y}{epi}",swap,densely dotted,->>]\\
\\
\\
&&&& R_2 \pcepi R_1 X \arrow[rrrrrr,crossing over,"\pcepi f"] \arrow[uuuu,<<-,crossing over,"\quott{X}{ps}",swap] \arrow[dddd,"\quott{X}{epi}",->>] &&&&&& R_2 \pcepi R_1 Y \arrow[dddd,"\quott{Y}{epi}",->>]\\
\\
\\
\\
&&&& \pctop R X \arrow[rrrrrr,"\pctop f"]&&&&&& \pctop R Y
\end{tikzcd}
\caption{Mutual relations among $\pcps$, $\pcepi$ and $\pctop$.}\label{fig:wardrobe}
\end{figure}
\end{center}
\begin{problem}
Understanding for which $X\in\mathsf{PsTop}$ the map $\quott{X}{ps}$ is quotient in $\mathsf{EpiTop}$, and analogously for which $X'\in\mathsf{EpiTop}$ the map $\quott{X'}{epi}$ is quotient in $\mathsf{Top}$.
\end{problem}
The next proposition shows that $\pcps$ and $\pcepi$ are lifts of $\pctop$ to $\mathsf{PsTop}$ and to $\mathsf{EpiTop}$, respectively.
\begin{prop}
The following diagram of functors is commutative:
\begin{center}
\begin{tikzcd}
\mathsf{PsTop}\arrow[r,"\pcps"] & \mathsf{PsTop}\arrow[d,"R_1"]\\
\mathsf{EpiTop} \arrow[u,hook] \arrow[r,"\pcepi"] & \mathsf{EpiTop}\arrow[d,"R_2"]\\
\mathsf{Top} \arrow[u,hook] \arrow[r,"\pctop"] & \mathsf{Top}
\end{tikzcd}
\end{center}
\label{prop:lift}
\end{prop}
\begin{proof}
When $X\in\mathsf{Top}$ the map $\quott{X}{epi}$ becomes the identity and we obtain the identification $R_2 \pcepi X = \pctop X$. Analogously, when $X'\in\mathsf{EpiTop}$ we obtain $R_1 \pcps X' = \pcepi X'$.
\end{proof}
It is natural to ask on which $X\in\mathsf{Top}$ these new functors coincide with the usual $\pctop$ (or, what is the same by Prop.\@ \ref{prop:lift}, on which $X\in\mathsf{Top}$ these new functors take value in $\mathsf{Top}$). Notice that, by Prop.\@ \ref{prop:lift}, $\pcps X\in\mathsf{Top}$ implies $\pcepi X\in\mathsf{Top}$. For $\pcps$ there is an interesting characterization.
\begin{prop}
For each $X\in\mathsf{Top}$ the following are equivalent:
\begin{enumerate}
\item \label{pc:itm:first} $\pcps X\in\mathsf{Top}$,
\item \label{pc:itm:second} $\pcps X = \pctop X$,
\item \label{pc:itm:third} the projection $\quot{X}{}\colon X\to\pctop X$ is biquotient.
\end{enumerate}
\end{prop}
\begin{proof}
$\eqref{pc:itm:first} \iff \eqref{pc:itm:second}$ is clear. For $\eqref{pc:itm:second} \iff \eqref{pc:itm:third}$, observe that the equation $\pcps X=\pctop X$ means that on the set of path components of $X$ the quotient pseudotopology and the quotient topology coincide. By a result of Kent \cite[Theorem 5]{Ken1969} this is equivalent to the projection $\quot{X}{}\colon X\to \pctop X$ being biquotient.
\end{proof}
We recall that a continuous surjection $f\colon X\to Y$ between spaces is said to be biquotient if, whenever $y\in Y$ and $\mathcal{O}$ is a covering of $f^{-1}(y)$ by open sets of $X$, then finitely many $f(O)$, with $O\in \mathcal{O}$, cover some neighborhood of $y$ in $Y$. Spaces for which $\quot{X}{}$ is biquotient include semilocally 0-connected spaces (each path component is open) and totally path disconnected spaces (each path component is a singleton). However, these examples are not exhaustive: the topologist's sine curve has biquotient projection but is neither semilocally 0-connected nor totally path disconnected.
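We note in passing that every open continuous surjection is biquotient: given $y\in Y$ and a covering $\mathcal{O}$ of $f^{-1}(y)$ by open sets, any single $O\in\mathcal{O}$ meeting $f^{-1}(y)$ already yields an open neighborhood $f(O)$ of $y$. This covers the semilocally 0-connected case, where each fiber $\quot{X}{}^{-1}(\quot{X}{}(U))$ of an open set $U$ is a union of open path components, so the projection is open.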
\begin{problem}
Finding a topological characterization of those spaces $X$ for which the projection $\quot{X}{}\colon X\to \pctop X$ is biquotient.
\end{problem}
\begin{problem}
Characterizing those spaces $X$ for which $\pcepi X = \pctop X$.
\end{problem}
\begin{prop}\label{prop:cont_mult}
Let $X\in\mathsf{PsTop}$ (resp., $X\in\mathsf{EpiTop}$) and $m\colon X\times X\to X$ be a continuous map. Then the induced map $\mu\colon \pcps X\times \pcps X\to \pcps X$ (resp., $\mu\colon\pcepi X\times\pcepi X\to\pcepi X$) is continuous as well.
\end{prop}
\begin{proof}
It follows from the fact that in both $\mathsf{PsTop}$ and $\mathsf{EpiTop}$ the product of two quotient maps is quotient.
\end{proof}
\begin{rmk}
In $\mathsf{Top}$ the above result does not hold and this is the reason why $\biss$ takes value in the category of quasitopological groups, as opposed to topological groups. See the works of Brazas, e.g.\@ \cite{Bra2012}.
\end{rmk}
It is clear that the above treatment holds for the pointed versions of $\pcps$, $\pcepi$ and $\pctop$ as well.
\section{Loop functors in \texorpdfstring{$\mathsf{PsTop}_{\ast}$}{PsTop*} and \texorpdfstring{$\mathsf{EpiTop}_{\ast}$}{EpiTop*}}
We briefly recall that in the category $\mathsf{Top}_{\ast}$ of pointed topological spaces and based continuous maps one can define the loop space endofunctor $\looptop\colon\mathsf{Top}_{\ast}\to\mathsf{Top}_{\ast}$ which assigns:
\begin{itemize}
\item to each pointed space\footnote{When there is no danger of confusion, for ease of notation we avoid to write the basepoint and thus indicate $(X,x_0)$ simply by $X$.} $(X,x_0)\in\mathsf{Top}_{\ast}$ the pointed space of based loops in $X$ equipped with the subspace topology inherited from the compact-open topology on the set $\mathsf{Top}(S^1,X)$ of all free loops (the basepoint of $\looptop (X,x_0)$ is the constant loop based at $x_0$);
\item to each based continuous map $f\colon (X,x_0)\to (Y,y_0)$ between pointed spaces the based continuous map $\looptop f\colon\looptop (X,x_0)\to\looptop (Y,y_0)$ defined by $(\looptop f)(l)\coloneqq f\circ l$.\end{itemize}
Concatenation and inversion of loops give $\looptop (X,x_0)$ a natural H-group structure, that is, each loop space is a group up to homotopy. More precisely, we adopt the following definition of an H-group.
\begin{defn}[H-group]\label{defn:Hgroup}
An H-group structure on a pointed topological space $(X,x_0)$ consists of pointed continuous maps $\wedge\colon X\times X\to X$ and $\sigma\colon X\to X$ such that:
\begin{enumerate}
\item the maps $x\mapsto x\wedge x_0$ and $x\mapsto x_0\wedge x$ are pointed homotopic to the identity map $\mathrm{id}\colon X\to X$;
\item the maps $x\mapsto x\wedge \sigma(x)$ and $x\mapsto \sigma(x)\wedge x$ are pointed homotopic to the constant map $x\mapsto x_0$;
\item the map $(x,x',x'')\mapsto (x\wedge x')\wedge x''$ is pointed homotopic to the map $(x,x',x'')\mapsto x\wedge (x'\wedge x'')$.
\end{enumerate}
\end{defn}
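A familiar example: every topological group $G$, pointed at its identity element, is an H-group with $\wedge$ the group multiplication and $\sigma$ the inversion, all three homotopies being constant. Loop spaces are the motivating examples in which the H-group axioms hold only up to genuinely non-constant homotopies.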
The construction of the loop space functor and the definition of an H-group make sense in any cartesian closed topological construct containing $\mathsf{Top}$. In particular, we obtain functors
\begin{equation*}
\begin{aligned}
\loopepi&\colon\mathsf{EpiTop}_{\ast}\to\mathsf{EpiTop}_{\ast}\\
\loopps&\colon\mathsf{PsTop}_{\ast}\to\mathsf{PsTop}_{\ast}
\end{aligned}
\end{equation*}
and we can ask whether each loop epi/pseudospace is an H-group. The answer is affirmative, as the next proposition shows. We first record two easy results we shall need.
\begin{lemma}\label{lemma:eval}
For each $X\in\mathsf{PsTop}_{\ast}$ the evaluation map ${\loopps X}\times [0,1]\to X$ is continuous. Analogously for $\mathsf{EpiTop}_{\ast}$ and $\mathsf{Top}_{\ast}$.
\end{lemma}
\begin{proof}
Write it as ${\loopps X}\times[0,1]\hookrightarrow X^{[0,1]}\times[0,1]\xrightarrow{\ev}X$.
\end{proof}
\begin{lemma}\label{lemma:dual}
For each $X,Y\in\mathsf{PsTop}_{\ast}$, all set-theoretic maps $h\colon X\times[0,1]\to Y$ such that, for each $x\in X$, $h(x,0)=h(x,1)=y_0$ correspond bijectively to all set-theoretic maps $\hat{h}\colon X\to \loopps(Y,y_0)$. Moreover, $h$ is continuous iff $\hat{h}$ is. Analogously for $\mathsf{EpiTop}_{\ast}$ and $\mathsf{Top}_{\ast}$.
\end{lemma}
\begin{proof}
Just observe that for a continuous map $X\to Y^{[0,1]}$ with image in $\loopps(Y)$ the restriction of the codomain to $\loopps(Y)$ gives a continuous map $X\to\loopps(Y)$ (we use the fact that the subspace pseudotopology is initial with respect to the inclusion map).
\end{proof}
\begin{prop}\label{prop:Hgroup}
For each $(X,x_0)\in\mathsf{PsTop}_{\ast}$ and each $(X',x'_0)\in\mathsf{EpiTop}_{\ast}$, concatenation and inversion of loops give $\loopps(X,x_0)$ and $\loopepi(X',x'_0)$ the structure of H-groups.
\end{prop}
\begin{proof}
The proof is a simple adaptation of the usual proof \cite[Chap.\@ IV]{Ser1951} in $\mathsf{Top}$, with some care needed when proving continuity of loop concatenation, but for clarity we write it in its entirety. We use symbols $l\cdot l'$ and $l^{-1}$ for loop concatenation and inversion, respectively, and for ease of notation we omit writing the basepoint. We consider $\mathsf{PsTop}$ only, the case $\mathsf{EpiTop}$ being perfectly analogous. To prove the continuity of loop concatenation let us consider $\Phi\colon({\loopps X})^2\times[0,1]\to X$, $\Phi(l,l',t)\coloneqq(l\cdot l')(t)$. Put momentarily $T\coloneqq ({\loopps X})^2$ to save typographic space. The map $\Phi$ restricted to sets $T\times[0,\frac{1}{2}]$ and $T\times[\frac{1}{2},1]$ becomes essentially (up to a continuous rescaling and an uninfluential ${\loopps X}$ factor) the evaluation map ${\loopps X}\times[0,1]\to X$ which is continuous by Lemma~\ref{lemma:eval}. Moreover, $T\times[0,\frac{1}{2}]$ and $T\times[\frac{1}{2},1]$ are closed in $R(T)\times[0,1]$, and the identity map $R(T\times[0,1])\to R(T)\times[0,1]$ is continuous. Therefore $T\times[0,\frac{1}{2}]$ and $T\times[\frac{1}{2},1]$ are closed in $R(T\times[0,1])$ as well and by the pasting lemma in $\mathsf{PsTop}$, Lemma \ref{lemma:pasting_pstop}, we deduce that $\Phi$ is continuous. Now observe that $\Phi(l,l',0)=l(0)=x_0=l'(1)=\Phi(l,l',1)$ so we can apply Lemma~\ref{lemma:dual} and conclude that loop concatenation $\hat{\Phi}\colon({\loopps X})^2\to {\loopps X}$ is continuous. The proof for loop inversion is similar (but it does not use the pasting lemma). We now prove the remaining properties in the definition of an H-group. Let us call $e$ the constant loop at $x_0$.
\begin{enumerate}
\item to prove that the map $l\mapsto l\cdot e$ is pointed homotopic to the identity map, consider the continuous map $\phi\colon[0,1]^2\to[0,1]$, $\phi(s,t)\coloneqq(1-s)\min(2t,1)+st$, then take $\mathrm{id}\times\phi\colon{\loopps X}\times[0,1]^2\to {\loopps X}\times[0,1]$ and compose it with the evaluation obtaining the continuous map $\sigma\colon{\loopps X}\times[0,1]^2\to X$, $\sigma(l,s,t)\coloneqq l(\phi(s,t))$. Observe that $\sigma(l,s,0)=l(0)=x_0=l(1)=\sigma(l,s,1)$ so that by Lemma~\ref{lemma:dual} we get a continuous map $\hat{\sigma}\colon{\loopps X}\times[0,1]\to{\loopps X}$. Finally observe that $\hat{\sigma}(l,0)=l\cdot e$, $\hat{\sigma}(l,1)=l$ and $\hat{\sigma}(e,s)=e$. Substituting $\min$ with $\max$ proves the analogous statement for the map $l\mapsto e\cdot l$;
\item to prove that the map $l\mapsto l\cdot l^{-1}$ is pointed homotopic to the constant map $l\mapsto e$, proceed as above with the continuous map $\psi\colon[0,1]^2\to[0,1]$, $\psi(s,t)\coloneqq 2(1-s)\min(t,1-t)$; substituting $\min$ with $\max$ proves the analogous statement for the map $l\mapsto l^{-1}\cdot l$;
\item to prove that the map $(l,l',l'')\mapsto (l\cdot l')\cdot l''$ is pointed homotopic to the map $(l,l',l'')\mapsto l\cdot (l'\cdot l'')$, proceed as above with the continuous map $\chi\colon[0,1]^2\to[0,1]$ given by
\begin{equation*}
\chi(s,t)=\begin{cases}
\frac{t}{1+s} & 0\leq t\leq\frac{1+s}{4}\\
t-\frac{s}{4} & \frac{1+s}{4}\leq t\leq\frac{2+s}{4}\\
\frac{1}{2}+\frac{4t-2-s}{4-2s} & \frac{2+s}{4}\leq t\leq 1
\end{cases}
\end{equation*}
(notice that this time we shall need the previously established continuity of $\cdot$ to let the argument go through).
\end{enumerate}
\end{proof}
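The three explicit reparametrizations used in this proof are elementary but easy to mistype. The following Python sketch (an informal numerical check external to the paper, not part of the categorical argument) verifies that $\phi$, $\psi$ and $\chi$ preserve endpoints, that $\phi(1,\cdot)$ and $\chi(0,\cdot)$ are the identity, that $\psi(1,\cdot)$ is constant, and that $\chi$ is continuous across its two breakpoints.

```python
# Informal numerical check of the reparametrizations phi, psi, chi
# used above; this is an illustration, not part of the proof.

def phi(s, t):
    # homotopy between l -> l . e (at s = 0) and the identity (at s = 1)
    return (1 - s) * min(2 * t, 1) + s * t

def psi(s, t):
    # homotopy between l -> l . l^{-1} (at s = 0) and the constant loop (s = 1)
    return 2 * (1 - s) * min(t, 1 - t)

def chi(s, t):
    # homotopy between (l . l') . l'' and l . (l' . l'')
    if t <= (1 + s) / 4:
        return t / (1 + s)
    if t <= (2 + s) / 4:
        return t - s / 4
    return 0.5 + (4 * t - 2 - s) / (4 - 2 * s)

ts = [i / 1000 for i in range(1001)]
ss = [j / 10 for j in range(11)]

# phi(1, .) and chi(0, .) are the identity; psi(1, .) is constant at 0
assert all(abs(phi(1, t) - t) < 1e-12 for t in ts)
assert all(abs(chi(0, t) - t) < 1e-12 for t in ts)
assert all(psi(1, t) == 0 for t in ts)

# endpoints are preserved, so every stage of each homotopy is again a loop
assert all(abs(phi(s, 0)) < 1e-12 and abs(phi(s, 1) - 1) < 1e-12 for s in ss)
assert all(abs(psi(s, 0)) < 1e-12 and abs(psi(s, 1)) < 1e-12 for s in ss)
assert all(abs(chi(s, 0)) < 1e-12 and abs(chi(s, 1) - 1) < 1e-12 for s in ss)

# chi is continuous across its two breakpoints for every s
for s in ss:
    for b in [(1 + s) / 4, (2 + s) / 4]:
        assert abs(chi(s, b - 1e-9) - chi(s, b + 1e-9)) < 1e-6
print("all reparametrization checks passed")
```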
\begin{rmk}
The above proof shows that, given a cartesian closed topological construct $\mathsf{C}$ containing $\mathsf{Top}$ as a subconstruct, for each $X\in\mathsf{C}_{\ast}$ the loop object ${\loopC X}$ satisfies all the properties of an H-group except possibly continuity of loop concatenation and homotopy associativity.
\end{rmk}
In the next proposition we record a useful property of $\loopepi$ and $\loopps$ that can be proven in exactly the same way as for $\looptop$.
\begin{prop}\label{prop:loop_products}
Both $\loopepi$ and $\loopps$ preserve arbitrary products.
\end{prop}
Let $X$ be a pseudospace and $l\colon[0,1]\to X$ be a loop in $X$. Then $R_1 l$ is a loop in the reflected epispace $R_1 X$ and $R l$ is a loop in the reflected space $R X$. Given that the reflectors do not modify the underlying set-theoretic maps we can identify ${\loopps X}$ with a subset of ${\loopepi X}$ and analogously ${\loopepi X}\hookrightarrow{\looptop X}$. These inclusions are continuous by Prop.\@ \ref{prop:expo}. The commutative diagram in Fig.\@ \ref{fig:wardrobe2} illustrates the mutual relations among $\loopps$, $\loopepi$ and $\looptop$.
\begin{center}
\begin{figure}
\begin{tikzcd}[row sep={2em,between origins}, column sep={2em,between origins}, text height=1.5ex, text depth=0.25ex, nodes in empty cells]
X \arrow[rrrrrr,"f"] \arrow[dddrr,"\mathrm{id}",sloped] &&&&&& Y \arrow[dddrr,"\mathrm{id}",sloped] \\
\\
\\
&& R_1 X \arrow[rrrrrr,crossing over,"f" near start] &&&&&& R_1 Y \arrow[dddrr,"\mathrm{id}",sloped] \\
\loopps X \arrow[rrrrrr, "\loopps f" near end,densely dotted,swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddddddrr,bend right=12,swap,hook]&&&&&& \loopps Y \arrow[dddddddrr,bend right=12,swap,densely dotted,hook]\arrow[dddrr, "\mathrm{id}" near start,densely dotted,sloped]\\
\\
&&&& R X \arrow[uuull,<-,crossing over,"\mathrm{id}",sloped,near start] \arrow[rrrrrr,crossing over,"f" very near start] &&&&&& R Y \\
&& R_1\loopps X \arrow[rrrrrr,crossing over,"\loopps f",swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd,hook] &&&&&& R_1 \loopps Y \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd,hook,densely dotted] \\
\\
\\
&&&& R \loopps X \arrow[rrrrrr,crossing over,"\loopps f" near start] &&&&&& R \loopps Y \arrow[dddd,hook] \\
&& \loopepi R_1 X \arrow[rrrrrr,"\loopepi f",densely dotted,swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddddddrr,bend right=12,swap,hook]&&&&&& \loopepi R_1 Y \arrow[dddrr,"\mathrm{id}",densely dotted,sloped] \arrow[dddddddrr,bend right=12,swap,densely dotted,hook]\\
\\
\\
&&&& R_2 \loopepi R_1 X \arrow[rrrrrr,crossing over,"\loopepi f"] \arrow[uuuu,hookleftarrow,crossing over,swap] \arrow[dddd,hook] &&&&&& R_2 \loopepi R_1 Y \arrow[dddd,hook]\\
\\
\\
\\
&&&& \looptop R X \arrow[rrrrrr,"\looptop f"]&&&&&& \looptop R Y
\end{tikzcd}
\caption{Mutual relations among $\loopps$, $\loopepi$ and $\looptop$.}\label{fig:wardrobe2}
\end{figure}
\end{center}
\begin{problem}
Understanding for which $(X,x_0)\in\mathsf{PsTop}_{\ast}$ the inclusion
$$R_1 \loopps(X,x_0)\hookrightarrow \loopepi(R_1 X,x_0)$$
is initial in $\mathsf{EpiTop}$, that is, for which pointed pseudospaces $(X,x_0)$ the epispace $R_1 \loopps(X,x_0)$ is an epitopological subspace of $\loopepi(R_1 X,x_0)$. Analogous question for the inclusion $R_2 \loopepi(X',x'_0)\hookrightarrow \looptop(R_2 X',x'_0)$ where $(X',x'_0)\in\mathsf{EpiTop}_{\ast}$.
\end{problem}
By the above discussion and by Prop.\@ \ref{prop:expo} it is clear that $\loopps$ (resp., $\loopepi$) is an extension of $\looptop$ to $\mathsf{PsTop}_{\ast}$ (resp., to $\mathsf{EpiTop}_{\ast}$). We record this fact in the next proposition.
\begin{prop}\label{prop:extension}
If $(X,x_0)\in\mathsf{EpiTop}_{\ast}$ then $\loopps(X,x_0)=\loopepi(X,x_0)$. If $(X',x'_0)\in\mathsf{Top}_{\ast}$ then $\loopepi(X',x'_0)=\looptop(X',x'_0)$.
\end{prop}
\section{Epitopological and pseudotopological fundamental groups}
We now put together the previous constructions to obtain epi- and pseudo-topologizations of the fundamental group of a pointed epi- or pseudospace (in particular, of a pointed topological space). By Prop.\@ \ref{prop:Hgroup} the set of path components of a loop epi/pseudospace is a group (with multiplication and inversion induced by loop concatenation and loop inversion). On the other hand, the epi/pseudo-topologized path component functors $\pcepi$ and $\pcps$ endow this group with an epi/pseudotopology and by functoriality the induced inversion map is continuous. The continuity of the multiplication follows from Prop.\@ \ref{prop:cont_mult}.
\begin{defn}
A pseudotopological group is a group equipped with a pseudotopology making the inverse map and the multiplication map continuous. The category of all pseudotopological groups and continuous group homomorphisms between them is denoted by $\mathsf{PsTopGrp}$. The category $\mathsf{EpiTopGrp}$ is defined analogously.
\end{defn}
\begin{rmk}
The obvious embeddings $\mathsf{TopGrp}\hookrightarrow\mathsf{EpiTopGrp}\hookrightarrow\mathsf{PsTopGrp}$ are full, and each category in this chain of inclusions is a topological concrete category over $\mathsf{Grp}$. Moreover, $\mathsf{PsTopGrp}$ (resp., $\mathsf{EpiTopGrp}$, $\mathsf{TopGrp}$) is the category of group-objects of $\mathsf{PsTop}$ (resp., $\mathsf{EpiTop}$, $\mathsf{Top}$). On the other hand, the category $\mathsf{QTopGrp}$ of quasitopological groups is contained neither in $\mathsf{EpiTopGrp}$ nor in $\mathsf{PsTopGrp}$. Rather, if we consider all these categories as subcategories of the category of groups with pseudotopology (no requirements on the continuity of operations) and continuous homomorphisms, we have:
$$\mathsf{QTopGrp}\cap\mathsf{PsTopGrp}=\mathsf{QTopGrp}\cap\mathsf{EpiTopGrp}=\mathsf{TopGrp}\;.$$
\end{rmk}
The above discussion together with Prop.\@ \ref{prop:Hgroup} implies that, for each $X\in\mathsf{PsTop}_{\ast}$ and each $X'\in\mathsf{EpiTop}_{\ast}$, $\pcps\loopps X$ is a pseudotopological group and $\pcepi\loopepi X$ is an epitopological group. We record this fact in the next definition.
\begin{defn}
The epitopological and pseudotopological fundamental group functors are the composed functors
\begin{equation*}
\begin{aligned}
\piepi=\pcepi\loopepi&\colon\mathsf{EpiTop}_{\ast}\to\mathsf{EpiTopGrp}\\
\pips=\pcps\loopps&\colon\mathsf{PsTop}_{\ast}\to\mathsf{PsTopGrp}\;.\\
\end{aligned}
\end{equation*}
\end{defn}
The mutual relations among these functors are illustrated by the diagram in Fig.~\ref{fig:wardrobe3}, constructed by taking into account the results of the previous sections.
\begin{center}
\begin{figure}
\begin{tikzcd}[row sep={2em,between origins}, column sep={2em,between origins}, text height=1.5ex, text depth=0.25ex, nodes in empty cells]
X \arrow[rrrrrr,"f"] \arrow[dddrr,"\mathrm{id}",sloped] &&&&&& Y \arrow[dddrr,"\mathrm{id}",sloped] \\
\\
\\
&& R_1 X \arrow[rrrrrr,crossing over,"f" near start] &&&&&& R_1 Y \arrow[dddrr,"\mathrm{id}",sloped] \\
\pips X \arrow[rrrrrr, "\pips f" near end,densely dotted,swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddddddrr,bend right=12,swap]&&&&&& \pips Y \arrow[dddddddrr,bend right=12,swap,densely dotted]\arrow[dddrr, "\mathrm{id}" near start,densely dotted,sloped]\\
\\
&&&& R X \arrow[uuull,<-,crossing over,"\mathrm{id}",near start,sloped] \arrow[rrrrrr,crossing over,"f" very near start] &&&&&& R Y \\
&& R_1\pips X \arrow[rrrrrr,crossing over,"\pips f",swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd] &&&&&& R_1 \pips Y \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddd,densely dotted] \\
\\
\\
&&&& R \pips X \arrow[rrrrrr,crossing over,"\pips f" near start] &&&&&& R \pips Y \arrow[dddd] \\
&& \piepi R_1 X \arrow[rrrrrr,"\piepi f",densely dotted,swap] \arrow[dddrr,"\mathrm{id}",sloped] \arrow[dddddddrr,bend right=12,swap]&&&&&& \piepi R_1 Y \arrow[dddrr,"\mathrm{id}",densely dotted,sloped] \arrow[dddddddrr,bend right=12,swap,densely dotted]\\
\\
\\
&&&& R_2 \piepi R_1 X \arrow[rrrrrr,crossing over,"\piepi f"] \arrow[uuuu,leftarrow,crossing over,swap] \arrow[dddd] &&&&&& R_2 \piepi R_1 Y \arrow[dddd]\\
\\
\\
\\
&&&& \biss R X \arrow[rrrrrr,"\biss f"]&&&&&& \biss R Y
\end{tikzcd}
\caption{Mutual relations among $\pips$, $\piepi$ and $\biss$.}\label{fig:wardrobe3}
\end{figure}
\end{center}
By combining Prop.\@ \ref{prop:pc_products} and Prop.\@ \ref{prop:loop_products} we get at once the following result.
\begin{prop}
$\piepi$ preserves finite products and $\pips$ preserves arbitrary products.
\end{prop}
It is also not hard to check that both $\piepi$ and $\pips$ are homotopy invariant, that is, given two pointed-homotopic continuous maps $f,g\colon X\to Y$ between pointed epispaces (resp., pseudospaces) the induced maps are equal, $\piepi f=\piepi g$ (resp., $\pips f=\pips g$). Analogously, one can check that a change of basepoint induces an isomorphism of epitopological groups (resp., pseudotopological groups).
Much like the path component functors $\pcps$ and $\pcepi$ are lifts of $\pctop$, the functors $\pips$ and $\piepi$ are lifts of $\biss$ to $\mathsf{PsTop}$ and to $\mathsf{EpiTop}$, respectively.
\begin{prop}\label{prop:lift2}
The following diagrams of functors are commutative:
\begin{center}
\begin{tikzcd}
\mathsf{EpiTop}_{\ast}\arrow[r,"\piepi"] & \mathsf{EpiTopGrp}\arrow[d,"R_2"]&&\mathsf{PsTop}_{\ast}\arrow[r,"\pips"] & \mathsf{PsTopGrp}\arrow[d,"R"]\\
\mathsf{Top}_{\ast} \arrow[u,hook] \arrow[r,"\biss"] & \mathsf{QTopGrp}&&\mathsf{Top}_{\ast} \arrow[u,hook] \arrow[r,"\biss"] & \mathsf{QTopGrp}\;.
\end{tikzcd}
\end{center}
\end{prop}
\begin{proof}
It follows from the corresponding results established previously for the path component and loop functors.
\end{proof}
By the above proposition, the restrictions of $\piepi$ and $\pips$ to $\mathsf{Top}_{\ast}$ contain no less information than $\biss$. The next problem asks whether they in fact contain more.
\begin{problem}
Determining whether there are non-homeomorphic spaces $X\not\simeq Y$ for which ${\biss X}\simeq {\biss Y}$, ${\brazas X}\simeq{\brazas Y}$ but ${\piepi X}\not\simeq{\piepi Y}$ or ${\pips X}\not\simeq{\pips Y}$.
\end{problem}
\begin{prop}\label{prop:compare}
Let $X\in\mathsf{Top}_{\ast}$. Then the statements contained in each sublist are equivalent:
\begin{enumerate}
\item \begin{enumerate}
\item ${\pips X}\in\mathsf{EpiTopGrp}$,
\item ${\pips X}={\piepi X}$,
\item on $|{\pctop{\looptop X}}|$ the quotient pseudotopology and the quotient epitopology coincide\footnote{Here and in the following the notation $|X|$ indicates the underlying set of $X$.}.
\end{enumerate}
\item \begin{enumerate}
\item ${\piepi X}\in\mathsf{QTopGrp}$,
\item ${\piepi X}\in\mathsf{TopGrp}$,
\item ${\piepi X}={\biss X}$,
\item ${\piepi X}={\brazas X}$,
\item on $|{\pctop{\looptop X}}|$ the quotient epitopology and the quotient topology coincide.
\end{enumerate}
\item \begin{enumerate}
\item ${\pips X}\in\mathsf{QTopGrp}$,
\item ${\pips X}\in\mathsf{TopGrp}$,
\item ${\pips X}={\biss X}$,
\item ${\pips X}={\brazas X}$,
\item on $|{\pctop{\looptop X}}|$ the quotient pseudotopology and the quotient topology coincide,
\item ${\looptop X}\to {\pctop{\looptop X}}$ is biquotient.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}[Sketch of proof]
Use the previous results together with the fact that ${\biss X}={\brazas X}$ iff ${\biss X}\in\mathsf{TopGrp}$ (see \cite[Prop.~3.3]{Bra2013}).
\end{proof}
The above proposition does not clarify whether ${\biss X}={\brazas X}$ implies ${\piepi X}\in\mathsf{QTopGrp}$ or ${\pips X}\in\mathsf{QTopGrp}$. If either of these implications is true then ${\biss X}$ is a topological group iff $\quot{\looptop X}{}\times\quot{\looptop X}{}$ is quotient, where $\quot{\looptop X}{}\colon \looptop X\to {\pctop{\looptop X}}$ is the natural quotient map in $\mathsf{Top}$ (thereby settling a question in \cite[p.17]{BraFab2013}).
\begin{rmk}
Let $X\in\mathsf{Top}_{\ast}$ be such that ${\looptop X}$ is totally path disconnected. By Prop.\@ \ref{prop:compare}, ${\biss X}={\brazas X}={\pips X}={\piepi X}\simeq{\looptop X}$.
\end{rmk}
Many questions remain to be addressed. We mention the following.
\begin{problem}
Understanding whether $\piepi$ and $\pips$ are essentially surjective, that is, whether for each $G\in\mathsf{EpiTopGrp}$ and each $G'\in\mathsf{PsTopGrp}$ there are an epispace $X$ and a pseudospace $X'$ such that $\piepi X\simeq G$ and $\pips X'\simeq G'$.
\end{problem}
\begin{problem}
Understanding what is the image of $\mathsf{Top}_{\ast}$ under $\piepi$ and under $\pips$.
\end{problem}
\section{Conclusions}
We identified two supercategories of $\mathsf{Top}$, namely $\mathsf{EpiTop}$ and $\mathsf{PsTop}$, where it is possible to define well-behaved enriched fundamental group functors, $\piepi$ and $\pips$, by replicating the quotient construction specified in \cite{Bis2002}. One of the key points, which is a new result to the best of the author's knowledge, is the existence of a pasting lemma within these constructs. The product-stability of quotient maps in these larger categories guarantees that $\piepi$ and $\pips$ take values in the categories of their group-objects, thus avoiding the pitfall of the original construction $\biss$ \cite{Bis2002}. It then turns out that $\piepi$ and $\pips$ are suitable lifts of $\biss$ under the natural inclusions $\mathsf{Top}\hookrightarrow\mathsf{EpiTop}\hookrightarrow\mathsf{PsTop}$ and the corresponding reflectors $\mathsf{Top}\xleftarrow{R_2}\mathsf{EpiTop}$ and $\mathsf{Top}\xleftarrow{R}\mathsf{PsTop}$, as shown by the following diagrams:
\begin{center}
\begin{tikzcd}
\mathsf{EpiTop}_{\ast}\arrow[r,"\piepi"] & \mathsf{EpiTopGrp}\arrow[d,"R_2"]&&\mathsf{PsTop}_{\ast}\arrow[r,"\pips"] & \mathsf{PsTopGrp}\arrow[d,"R"]\\
\mathsf{Top}_{\ast} \arrow[u,hook] \arrow[r,"\biss"] & \mathsf{QTopGrp}&&\mathsf{Top}_{\ast} \arrow[u,hook] \arrow[r,"\biss"] & \mathsf{QTopGrp}\;.
\end{tikzcd}
\end{center}
These two functors should be compared with the topologized fundamental group introduced and studied in \cite{Bra2013}. In particular, it should be possible to devise a covering epispace theory and a covering pseudospace theory along the lines of \cite{Bra2012a}, as well as constructing groupoid versions of $\piepi$ and $\pips$.
Obviously it is possible to recursively define all higher homotopy group functors by putting $\pihepi\coloneqq \pcepi(\loopepi)^n$ and $\pihps\coloneqq \pcps(\loopps)^n$. In a broader perspective, it would be nice to investigate how much of classical homotopy theory carries over to this new realm and what, if any, new insight the richer structure of these functors might offer.
\bibliographystyle{alpha}
\section{Introduction}
\label{section:introduction}
Stars form in giant molecular clouds (GMCs) whose primary constituent
is molecular hydrogen, \htwo. Because \htwo \ lacks a permanent
dipole moment, and the lowest lying excited state capable of
quadrupole emission requires temperatures $\sim 500$ \ K to be excited,
the physical conditions in the cold ($\sim 10$ K) molecular gas are
typically probed via tracer molecules, rather than by direct detection
of \htwo.
Carbon monoxide ($^{12}$C$^{16}$O; hereafter CO) is the second most
abundant molecule in GMCs. Because the J=1-0 rotational transition of
CO lies only $\sim5$ K above ground, has a relatively low effective
density ($\sim 10^{2-3}$\cmthree) for excitation \citep{eva99},
and has a wavelength of $\sim3$ mm which is readily observable from
the ground, CO (J=1-0) has historically been one of the most commonly
used tracers of physical conditions in the molecular ISM.
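The two numbers quoted above can be reproduced directly from the CO (J=1-0) rest frequency. In the short Python check below, the frequency $\nu_{10}\approx 115.27$ GHz is an outside input (a standard laboratory value, not derived in this paper):

```python
# Back-of-the-envelope check of the CO (J=1-0) numbers quoted in the text.
# The rest frequency 115.271 GHz is a standard laboratory value, taken
# here as an input rather than derived.

h = 6.626e-34   # Planck constant [J s]
k = 1.381e-23   # Boltzmann constant [J / K]
c = 2.998e8     # speed of light [m / s]

nu_10 = 115.271e9                 # CO J=1-0 rest frequency [Hz]

E_upper_K = h * nu_10 / k         # J=1 level above ground, in kelvin
wavelength_mm = c / nu_10 * 1e3   # line wavelength, in millimetres

print(f"E(J=1)/k = {E_upper_K:.2f} K")        # ~5.5 K: 'lies only ~5 K above ground'
print(f"lambda    = {wavelength_mm:.2f} mm")  # ~2.6 mm: 'wavelength of ~3 mm'
```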
A large uncertainty in using CO to trace \htwo \ gas is relating the
observed CO line luminosity to the underlying \htwo \ column density.
However, despite the fact that CO/\htwo \ abundances vary strongly
within GMCs \citep[e.g. ][]{ste95,lee96,hol99,glo10,glo11}, a
multitude of observations suggests that the conversion factor
between CO and \htwo \ is reasonably constant in Galactic GMCs,
following the relation:
\begin{equation}
\label{eq:xco_definition}
\xco = 2\mbox{--}4 \times 10^{20}\ {\rm cm}^{-2}\,({\rm K}\ \kms)^{-1}
\end{equation}
where \xco \ is the CO-\htwo \ conversion factor in units of \htwo
\ column density divided by velocity-integrated CO line
intensity\footnote{\xco \ is sometimes referred to in the literature
as the ``$X$-factor''. We will use \xco \ and $X$-factor
interchangeably.}. Lines of evidence for a relatively constant \xco
\ include comparisons between CO luminosities and molecular column
densities determined via a variety of techniques, including dust
extinction \citep{dic75}, $\gamma$-ray emission
\citep{blo86,str96,abd10} and thermal dust emission
\citep{dam01,dra07b}.
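Operationally, Eq.~(\ref{eq:xco_definition}) is applied as a linear conversion from the velocity-integrated CO brightness $W_{\rm CO}$ (in K \kms) to an \htwo\ column density, $N({\rm H}_2) = \xco\, W_{\rm CO}$. A minimal sketch of this bookkeeping follows; the integrated intensity used below is a purely illustrative value, not a measurement.

```python
# Sketch of the X-factor bookkeeping: N(H2) = X_CO * W_CO, with X_CO in
# cm^-2 (K km/s)^-1.  The W_CO value below is purely illustrative.

X_CO_LOW, X_CO_HIGH = 2e20, 4e20     # Galactic range quoted in the text

def h2_column_density(w_co, x_co=X_CO_LOW):
    """H2 column density [cm^-2] from integrated CO intensity [K km/s]."""
    return x_co * w_co

w_co = 10.0  # K km/s, hypothetical line of sight
print(f"N(H2) = {h2_column_density(w_co):.1e} cm^-2 (X_CO = 2e20)")
print(f"N(H2) = {h2_column_density(w_co, X_CO_HIGH):.1e} cm^-2 (X_CO = 4e20)")
```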
\section{Introduction}
Modeling galaxy formation and evolution requires
a solid cosmological theoretical framework as much as an adequate
model to describe the large-scale star formation (SF) cycle and its
interplay with the interstellar medium (ISM). The dissipative
properties of the ISM play a crucial role in this latter process.
Globally, stellar radiation is mainly responsible for maintaining
the temperature of the ISM in its various phases.
However, thermal pressure is thought to be
negligible for the global disk gas dynamics (e.g., V\'azquez-Semadeni et al. 1999; Cox, these proceedings).
Nevertheless, stars are also sources of kinetic energy ($E_k$)
deposition into the ISM, and as hydrodynamical
simulations have shown, the dynamics of the gas in this case is
deeply affected (e.g., Navarro \& White 1993). Due to the large $E_k$
input and the high Reynolds number of the ISM plasma, turbulence is
expected to develop, its pressure and dissipation
being key ingredients in the ``metabolism'' of the
disk stellar-gas system.
Several disk galaxy evolution models (e.g., Firmani, Hern\'andez,
\& Gallagher 1996) are based on the idea that
the intrinsic SF rate (SFR) is controlled by a balance
within the vertical {\it disk} gas between the turbulent energy input
rate due to SF and the dissipation rate. The crucial parameter
for the SFR and disk height is the turbulent dissipation timescale.
The self-regulating SF mechanism has been also used in
models of galaxy formation within the context of the
hierarchical CDM-based scenario, but in this case it was
applied to the large cosmological halo (White
\& Frenk 1991; Kauffmann, White, \& Guiderdoni 1993; Cole et
al. 1994; Somerville \& Primack 1999; van den Bosch 1999).
In these models the feedback of the stars is assumed to
efficiently reheat and drive back the disk gas into the
dark matter halo, in such a
way that the SFR efficiency is a strong function of the
halo mass. Thus, a crucial question is whether the energy
released by SNe and stars is able to not only maintain
the warm and hot phases and the stirring of the ISM, but
also to sustain a huge hot corona in quasi-hydrostatic
equilibrium with the cosmological halo.
It should be emphasized that the observed medium around
the disks ---diffuse ionized and
high-velocity-dispersion HI gas, usually called the
halo--- is much more local than the hypothetical
gas in virial equilibrium with the huge dark halo. We shall
refer to the former as the {\bf extraplanar medium}, and
to the latter as the {\bf intrahalo medium}.
It is still not at all clear how to explain the observed
extraplanar medium, particularly the ionized gas (see a
recent review by Mac Low 1999). This calls into
question the possibility that ionizing sources from the
disk (mainly massive OB stars) are able to sustain the
extended intrahalo medium. A possibility is that this gas
is heated by turbulence from the disk. However, this
question again depends on the ability of the turbulent
ISM to dissipate its $E_k$. Avila-Reese \& V\'azquez-Semadeni
(in preparation; hereafter AV) have studied the
dissipative properties of compressible MHD fluids that
resemble the ISM. Here we briefly report their main results
and comment on the implications for the aforementioned questions.
\section{The method}
AV have used numerical 2D MHD simulations of self-gravitating
turbulent compressible fluids that include terms for radiative
cooling, heating, rotation and stellar energy injection
(V\'azquez-Semadeni, Passot, \& Pouquet 1995,1996; Passot,
V\'azquez-Semadeni, \& Pouquet 1996). The parameters were chosen
in such a way that the simulations resemble the ISM in the plane of
the Galaxy at the {\bf 1 kpc scale}.
Previous simulations on dissipation in compressible MHD fluids
were focused on molecular clouds (Mac Low et al.
1998; Stone, Ostriker, \& Gammie 1998; Padoan \& Nordlund 1999;
Mac Low 1999). In those works where the forced case was
studied, the turbulence was driven in Fourier space by large-scale
random velocity perturbations whose
amplitudes are selected as to maintain $E_k$ constant in time.
As a result, $E_k$ is injected everywhere in space
(a ``ubiquitous'' injection). Instead, in the ISM the
stellar input sources are pointlike and their spheres of
direct influence are comparatively small w.r.t. typical
scales of the global ISM. In the simulations of AV, an
``energy input source'' is turned on at grid point $\vec x$
whenever $\rho (\vec x)>\rho _c$, and $\vec \nabla \cdot
\vec u(\vec x)<0$. Once SF has turned on at a given grid point,
it stays on for a time interval $\Delta t_s$, during
which the gas receives an
acceleration $\vec a$ directed radially away from this point.
The input sources are spatially extended by convolving their spatial
distribution with a Gaussian of width $\lambda _f$. For
the turbulent fluid, $\lambda _f$ is the forcing scale. At
this scale the acceleration $\vec a$ produces a velocity difference
around the ``star'' $\upsilon _f\approx 2\vec {a}\Delta t_s$ (the
velocity at which turbulence is forced at the $\lambda _f$ scale).
Both $\lambda _f$ and $\upsilon _f$ are free parameters.
\section{Dissipation in driven and decaying regimes}
For ISM simulations ($128^2$) with driven
turbulence, AV find that the behaviour with time of the $E_k$
dissipation rate, $E_k^d$, is similar to that of the $E_k$ injection
rate, $\dot{E}_k$. This means that $E_k$ is dissipated locally, near
the input sources. For various simulations varying $\lambda _f$ and
$\upsilon _f$, it was found that the dissipation timescale is given by
$t_d\approx 1.5\mbox{--}3.0\times 10^7\left(\frac{\lambda _f/30\,{\rm pc}}{\upsilon _f/30\,{\rm km\,s^{-1}}}\right)$ yr,
i.e., $t_d$ is proportional to $\lambda _f/\upsilon _f$.
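The fitted scaling above can be evaluated directly for any choice of the forcing parameters. The Python sketch below does so; the parameter values are illustrative inputs, not results of particular runs.

```python
# Evaluate the fitted dissipation timescale
#   t_d ~ (1.5-3.0) x 10^7 yr x (lambda_f / 30 pc) / (v_f / 30 km/s).
# The parameter values below are illustrative, not from specific runs.

def t_d_range(lambda_f_pc, v_f_kms):
    """Return the (low, high) dissipation timescale in years."""
    scale = (lambda_f_pc / 30.0) / (v_f_kms / 30.0)
    return 1.5e7 * scale, 3.0e7 * scale

lo, hi = t_d_range(lambda_f_pc=30.0, v_f_kms=30.0)
print(f"t_d in [{lo:.1e}, {hi:.1e}] yr")  # a few 10^7 yr at fiducial values

# doubling lambda_f at fixed v_f doubles t_d: t_d is proportional to lambda_f / v_f
lo2, hi2 = t_d_range(60.0, 30.0)
assert abs(lo2 - 2 * lo) < 1.0 and abs(hi2 - 2 * hi) < 1.0
```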
Due to the locality and discreteness of the energy input
sources, most of the volume actually is occupied by a turbulent
flow in a decaying regime. Thus, one may say that in the same fluid
``active'' turbulent regions, where the turbulence
driven by small non-ubiquitous input sources
is {\it locally} dissipated, coexist with extended regions with a
``residual'' turbulence in decaying regime and
characterized by $\upsilon _{\rm rms}$, where
$\upsilon _{\rm rms}\ll \upsilon _f$.
In order to study the decaying regime, SF was turned off
in the simulations after some time $t\gg t_{d}$. It was found
that $E_k$ decays as $(1+t)^{-n}$ with $n\sim 0.8$, in good
agreement with previous studies for isothermal fluids
(Mac Low et al. 1998, Stone et al. 1998). A typical
decaying timescale, $t_{\rm dec}$, may be defined as the time at which
the initial $E_k$ has decreased by a factor 2.
From our simulations, $t_{\rm dec}\approx 1.7\times 10^7$ years,
which is in agreement with $t_d$ in the driven turbulence. With these
timescales, ``residual'' turbulent motions propagating at roughly 10 km/s
would attain typical distances of approximately 200 pc. It was also
suggested, on dimensional arguments, that $E_k$
and $\upsilon _{\rm rms}$ will decay with distance $\ell$
as $\ell^{-2m}$ and $\ell^{-m }$, respectively, with $m=n/(2-n)$.
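The $\sim 200$ pc figure quoted above is pure unit bookkeeping: residual motions at $\upsilon_{\rm rms}\approx 10$ km/s travelling for $t_{\rm dec}\approx 1.7\times 10^7$ yr. The snippet below redoes this arithmetic, using only quantities already stated in the text:

```python
# Distance reached by residual turbulent motions, d = v_rms * t_dec,
# using only the numbers quoted in the text.

YR_IN_S = 3.156e7       # seconds per year
KM_PER_PC = 3.086e13    # kilometres per parsec

v_rms = 10.0            # km/s, residual velocity dispersion
t_dec = 1.7e7           # yr, half-energy decay time from the simulations

d_pc = v_rms * t_dec * YR_IN_S / KM_PER_PC
print(f"d = {d_pc:.0f} pc")   # ~170-180 pc, i.e. 'approximately 200 pc'

# half-energy time implied by E_k(t) ~ (1 + t)^(-n), in code time units
# (the mapping from code units to years is set by the simulation, not here)
n = 0.8
t_half_code = 2 ** (1 / n) - 1
print(f"t_half = {t_half_code:.2f} code units")
```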
\section{Conclusions and implications}
$\bullet$ Localized, discrete forcing at small
scales gives rise to the coexistence of both forced and
decaying turbulence regimes in the same flow (ISM).
$\bullet$ The turbulent $E_k$ near the ``active'' turbulent
regions is dissipated locally and efficiently. The global dissipation
timescale $t_d$ is proportional to $\lambda _f/\upsilon _f$. For reasonable
values of $\lambda _f$ and $\upsilon _f$ (which produce $\upsilon _{\rm
rms}\sim 10$ km/s), $t_d$ is of the order of a few $10^7$ years. Far from
the sources, the ``residual'' ISM turbulence decays as $E_k(t)\propto
(1+t)^{-0.8}$. The characteristic decay time is again a few $10^7$
years, and for $\upsilon _{\rm rms}\sim 10$ km/s, the
turbulent motions reach distances of $\sim 200$ pc.
$\bullet$ Turbulent motions produced in
the disk plane will propagate up to distances of the order of
the gaseous disk height. Therefore, models of
galaxy evolution where this height is
determined by an energy balance that self-regulates
SF in the ISM appear viable. However, our results pose a serious
difficulty for models of galaxy
formation where the turbulent $E_k$ injected by SNe is
thought to be able to reheat and drive back the gas from the disk
into the intrahalo medium in such a way that the
SF is self-regulated at the level of the cosmological halo.
Nevertheless, for non-stationary runaway SF
(starbursts), most of the superbubbles might be able to
blow out of the disk, as required for expelling large amounts
of gas and energy into the dark matter halo.
\section{Introduction: motivations and overview of the new results}
\label{sec:intro}
Topological Data Analysis (TDA) is now expanding towards machine learning and statistics due to the stability that was proved in a very general form by Chazal et al. \cite{chazal2016structure}.
The key idea of TDA is to view a given cloud of points across all scales $s$, e.g. by blurring given points to balls of a variable radius $s$.
The resulting evolution of topological shapes is summarized by a persistence diagram.
\medskip
\begin{exa}
\label{exa:5-point_line}
Fig.~\ref{fig:5-point_line} illustrates the key concepts (before formal definitions) for the point set $A = \{0,4,6,9,10\}$ in the real line $\mathbb{R}$.
Imagine that we gradually blur original data points by growing balls of the same radius $s$ around the given points.
The balls of the closest points $9,10$ start overlapping at the scale $s=0.5$ when these points merge into one cluster $\{9,10\}$.
This merger is shown by blue arcs joining at the node at $s=0.5$ in the single-linkage dendrogram, see the bottom left picture in Fig.~\ref{fig:5-point_line} and more details in Definition~\ref{dfn:sl_clustering}.
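For a finite set on the real line, every merge scale can be read off from the gaps between consecutive points: two adjacent clusters merge exactly when the growing balls of radius $s$ touch, i.e. at $s=\mathrm{gap}/2$. The plain-Python sketch below (an informal illustration, outside the paper's formal development) recovers the node heights of the dendrogram in Fig.~\ref{fig:5-point_line} for $A=\{0,4,6,9,10\}$.

```python
# Single-linkage merge scales for a finite point set on the real line.
# Adjacent clusters merge when balls of radius s around their closest
# points touch, i.e. at s = gap / 2 for each consecutive gap; sorting
# the half-gaps lists the merge events in the order they happen.

def merge_scales(points):
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    return sorted(g / 2 for g in gaps)

A = [0, 4, 6, 9, 10]
print(merge_scales(A))   # [0.5, 1.0, 1.5, 2.0] -- node heights in the dendrogram
```

In 0D persistence each such merge kills one component born at $s=0$, so these four numbers are exactly the finite death values in the persistence diagram of Fig.~\ref{fig:5-point_line}; the remaining component never dies.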
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 1.1]
\draw[->] (-1,0) -- (11,0) node[right]{} ;
\foreach \x/\xtext in {0, 4, 6, 9, 10}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\filldraw (0,0) circle (2pt);
\filldraw (4,0) circle (2pt);
\filldraw (6,0) circle (2pt);
\filldraw (9,0) circle (2pt);
\filldraw (10,0) circle (2pt);
\end{tikzpicture}
\begin{tikzpicture}[scale = 0.52][sloped]
\draw[style=help lines,step = 1] (-1,0) grid (10.4,4.4);
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\foreach \i in {0,0.5,...,2}{ \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.3) {0};
\node (b) at (4,-0.3) {4};
\node (c) at (6,-0.3) {6};
\node (d) at (9,-0.3) {9};
\node (e) at (10,-0.3) {10};
\node (x) at (5,5) {};
\node (de) at (9.5,1){};
\node (bc) at (5.0,2){};
\node (bcde) at (8.0,3){};
\node (all) at (5.0,4){};
\draw [line width=0.5mm, blue ] (a) |- (all.center);
\draw [line width=0.5mm, blue ] (b) |- (bc.center);
\draw [line width=0.5mm, blue ] (c) |- (bc.center);
\draw [line width=0.5mm, blue ] (d) |- (de.center);
\draw [line width=0.5mm, blue ] (e) |- (de.center);
\draw [line width=0.5mm, blue ] (de.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bc.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bcde.center) |- (all.center);
\draw [line width=0.5mm, blue ] [->] (all.center) -> (x.center);
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (0.5,2.4);
\draw[->] (-0.2,0) -- (0.8,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (1,1) node[right]{};
\foreach \x/\xtext in {0.5/0.5}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0,1.5) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,0.5) circle (2pt);
\filldraw [fill = red] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.0,2) circle (2pt);
\filldraw [fill=blue] (0.5,1.5) circle (2pt);
\filldraw [fill=blue] (1.0,1.5) circle (2pt);
\filldraw [fill=blue] (1.5, 2.0) circle (2pt);
\filldraw [fill=blue] (2, 2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Top}: the 5-point cloud $A = \{0,4,6,9,10\}\subset\mathbb{R}$.
\textbf{Bottom} from left to right: single-linkage dendrogram $\Delta_{SL}(A)$ from Definition~\ref{dfn:sl_clustering}, the 0D persistence diagram $\mathrm{PD}$ from Definition~\ref{dfn:persistence_diagram} and the new mergegram $\mathrm{MG}$ from Definition~\ref{dfn:mergegram}, where the red color shows dots of multiplicity 2.}
\label{fig:5-point_line}
\end{figure}
The persistence diagram $\mathrm{PD}$ in the bottom middle picture of Fig.~\ref{fig:5-point_line} represents this merger by the dot $(0,0.5)$ meaning that a singleton cluster of (say) point $9$ was born at the scale $s=0$ and then died later at $s=0.5$ (by merging into another cluster of point 10), see details in Definition~\ref{dfn:sl_clustering}.
When two clusters $\{4,6\}$ and $\{9,10\}$ merge at $s=1.5$, this event was previously encoded in the persistence diagram by the single dot $(0,1.5)$ meaning that one cluster inherited from (say) point 10 was born at $s=0$ and has died at $s=1.5$.
\medskip
For the same merger, the new mergegram in the bottom right picture of Fig.~\ref{fig:5-point_line} associates the following two dots.
The dot $(0.5,1.5)$ means that the cluster $\{9,10\}$ merged at the current scale $s=1.5$ was previously formed at the smaller scale $s=0.5$.
The dot $(1,1.5)$ means that another cluster $\{4,6\}$ merged at the current scale $s=1.5$ was formed at $s=1$.
\medskip
Every arc in the single-linkage dendrogram between nodes at scales $b$ and $d$ contributes one dot $(b,d)$ to the mergegram, e.g. both singleton sets $\{9\}$, $\{10\}$ merging at $s=0.5$ contribute two dots $(0,0.5)$ or one dot of multiplicity 2 shown in red, see Fig.~\ref{fig:5-point_line}.
\end{exa}
Example~\ref{exa:5-point_line} shows that the mergegram $\mathrm{MG}$ retains more geometric information about a set $A$ than the persistence diagram $\mathrm{PD}$.
It turns out that this new intermediate object (larger than $\mathrm{PD}$ and smaller than a full dendrogram) enjoys the stability of persistence, which makes $\mathrm{MG}$ useful for analysing noisy data in all cases when distance-based 0D persistence is used.
\medskip
Here is the summary of new contributions to Topological Data Analysis.
\smallskip
\noindent
$\bullet$
Definition~\ref{dfn:mergegram} introduces the concept of a mergegram for any dendrogram of clustering.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:0D_persistence_mergegram} and Example~\ref{exa:mergegram_stronger} justify that the mergegram of a single-linkage dendrogram is strictly stronger than the 0D persistence of a distance-based filtration of sublevel sets.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:stability_mergegram} proves that the mergegram of any single-linkage dendrogram is stable in the bottleneck distance under perturbations of a finite set in the Hausdorff distance.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:complexity} shows that the mergegram can be computed in near-linear time.
\section{Related work on hierarchical clustering and deep neural networks}
\label{sec:review}
The aim of clustering is to split a given set of points into clusters such that points within one cluster are more similar to each other than points from different clusters.
\medskip
A clustering problem can be made exact by specifying a distance between given points and restrictions on outputs, e.g. a number of clusters or a cost function to minimize.
\medskip
Hierarchical clustering algorithms output a hierarchy of clusters, usually drawn as a dendrogram visualizing mergers of clusters as explained later in Definition~\ref{dfn:dendrogram}.
Here we introduce only the simplest single-linkage clustering, which plays the central role in the paper.
\begin{dfn}[single-linkage clustering]
\label{dfn:sl_clustering}
Let $A$ be a finite set in a metric space $X$ with a distance $d:X\times X\to[0,+\infty)$.
Given a distance threshold, which will be called a scale $s$, any points $a,b\in A$ should belong to one \emph{SL cluster} if and only if there is a finite sequence $a=a_1,\dots,a_m=b\in A$ such that any two successive points have a distance at most $2s$, i.e. $d(a_i,a_{i+1})\leq 2s$ for $i=1,\dots,m-1$.
Let $\Delta_{SL}(A;s)$ denote the collection of SL clusters at the scale $s$.
For $s=0$, any point $a\in A$ forms a singleton cluster $\{a\}$.
Representing each cluster from $\Delta_{SL}(A;s)$ over all $s\geq 0$ by one point, we get the \emph{single-linkage dendrogram} $\Delta_{SL}(A)$ visualizing how clusters merge, see the first bottom picture in Fig.~\ref{fig:5-point_line}.
\hfill $\blacksquare$
\end{dfn}
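In the ball-growing picture of Fig.~\ref{fig:5-point_line}, two points at distance $d$ merge exactly at the scale $s=d/2$, so the SL clusters at a scale $s$ are the connected components of the graph joining points at distance at most $2s$. The following sketch is our own illustration (the helper name \texttt{sl\_clusters} is an assumption, not code from the paper), computing these components by union-find.

```python
# Single-linkage clusters at a fixed scale s: connected components of the
# graph whose edges join points at distance at most 2*s (so that balls of
# radius s around the points overlap).  Illustrative sketch via union-find.

def sl_clusters(points, dist, s):
    """Return the partition Delta_SL(A; s) as a sorted list of sorted lists."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            if dist(points[i], points[j]) <= 2 * s:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(points[i])
    return sorted(sorted(c) for c in clusters.values())

A = [0, 4, 6, 9, 10]
d = lambda a, b: abs(a - b)
print(sl_clusters(A, d, 0.5))   # [[0], [4], [6], [9, 10]]
print(sl_clusters(A, d, 1))     # [[0], [4, 6], [9, 10]]
```

At $s=1.5$ the clusters $\{4,6\}$ and $\{9,10\}$ also merge, since $d(6,9)=3\leq 2s$, matching the dendrogram in Fig.~\ref{fig:5-point_line}.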
Another way to visualize SL clusters is to build a Minimum Spanning Tree below.
\begin{dfn}[Minimum Spanning Tree $\mathrm{MST}(A)$]
\label{dfn:mst}
The \emph{Minimum Spanning Tree} $\mathrm{MST}(A)$ of a finite set $A$ in a metric space $X$ with a distance $d$ is a tree (a connected graph without cycles) that has the vertex set $A$ and the minimum total length of edges.
We assume that the length of any edge between vertices $a,b\in A$ is measured as $d(a,b)$.
\hfill $\blacksquare$
\end{dfn}
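Definition~\ref{dfn:mst} does not prescribe an algorithm; any standard MST construction applies. Below is a minimal sketch using Prim's algorithm (the function name \texttt{mst\_edge\_lengths} is our assumption) that returns the sorted edge-lengths; half of each length is the scale at which the clusters of the edge's endpoints merge in the ball-growing picture.

```python
# Minimum Spanning Tree of a finite metric space by Prim's algorithm
# (a standard sketch; the paper only defines the MST, not an algorithm).

def mst_edge_lengths(points, dist):
    """Return the sorted edge-lengths l_1 <= ... <= l_(n-1) of an MST."""
    n = len(points)
    in_tree = [False] * n
    best = [float("inf")] * n     # cheapest connection of each vertex to the tree
    best[0] = 0.0
    lengths = []
    for _ in range(n):
        v = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[v] = True
        if best[v] > 0:           # the first vertex joins the tree for free
            lengths.append(best[v])
        for u in range(n):
            if not in_tree[u]:
                best[u] = min(best[u], dist(points[v], points[u]))
    return sorted(lengths)

A = [0, 4, 6, 9, 10]
print(mst_edge_lengths(A, lambda a, b: abs(a - b)))  # [1, 2, 3, 4]
```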
A review of the relevant past work on persistence diagrams is postponed to section~\ref{sec:persistence_modules}, which introduces more auxiliary notions.
A persistence diagram consists of dots $(b,d)\in\mathbb{R}^2$ whose birth/death coordinates represent a life interval $[b,d)$ of a homology class, e.g. a connected component in a Vietoris-Rips filtration, see the bottom middle picture in Fig.~\ref{fig:5-point_line}.
\medskip
Persistence diagrams are isometry invariants that are stable under noise in the sense that a topological space and its noisy point sample have close persistence diagrams.
This stability under noise allows us to classify continuous shapes by using only their discrete samples.
\medskip
Imagine that several rigid shapes are sparsely represented by a few salient points, e.g. corners or local maxima of a distance function.
Translations and rotations of these point clouds do not change the underlying shapes.
Hence clouds should be classified modulo isometries that preserve distances between points.
An important problem is to recognize a shape, e.g. within a given set of representatives, from its sparse point sample with noise.
This paper solves the problem by computing isometry invariants, namely the new mergegram, the 0D persistence and the pair-set of distances to two nearest neighbors for each point.
\medskip
Since all dots in a persistence diagram are unordered, our experimental section~\ref{sec:experiments} uses a neural network whose output is invariant under permutations of input points by construction.
PersLay \cite{carriere2019perslay} is a collection of permutation-invariant neural network layers, i.e. functions on sets of points in $\mathbb{R}^n$ that give the same output regardless of the order in which the points are inserted.
\medskip
PersLay extends the neural network layers introduced in Deep Sets \cite{zaheer2017deep}.
PersLay introduces new layers designed to handle persistence diagrams, as well as a new form of representing such layers.
Each layer is a combination of a coefficient layer $\omega:\mathbb{R}^n \rightarrow \mathbb{R}$, a point transformation $\phi:\mathbb{R}^n \rightarrow \mathbb{R}^q$ and a permutation-invariant layer $\text{op}$ that retrieves the final output
$$\text{PersLay}(\text{diagram}) = \text{op}(\{\omega(p)\phi(p)\}), \text{ where } p \in \text{diagram (any set of points in } \mathbb{R}^n).$$
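The formula above can be sketched with toy choices of its three components; the concrete $\omega$, $\phi$ and $\text{op}$ below are illustrative assumptions, not the trained layers used in the experiments.

```python
# A permutation-invariant layer in the PersLay form op({omega(p) * phi(p)}).
# The concrete omega, phi, op below are toy assumptions for illustration only.
from math import exp

def omega(p):                       # coefficient: weight a dot by its lifespan
    b, d = p
    return d - b

def phi(p, centers=(0.0, 1.0, 2.0)):
    b, d = p                        # point transformation into R^3:
    m = (b + d) / 2                 # Gaussian bumps around fixed centers
    return [exp(-(m - c) ** 2) for c in centers]

def perslay(diagram):               # op: coordinate-wise sum over all dots
    out = [0.0, 0.0, 0.0]
    for p in diagram:
        w = omega(p)
        for k, x in enumerate(phi(p)):
            out[k] += w * x
    return out

# The output does not depend on the order of the input dots:
D = [(0, 0.5), (0, 1), (0, 1.5)]
a, b = perslay(D), perslay(list(reversed(D)))
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))   # True
```

Because $\text{op}$ is a sum, permuting the input dots leaves the output unchanged up to rounding, which is exactly the invariance required for diagrams.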
\section{The merge module and mergegram of a dendrogram}
\label{sec:mergegram}
The section introduces a merge module (a family of vector spaces with consistent linear maps) and a mergegram (a diagram of points in $\mathbb{R}^2$ representing a merge module).
\begin{dfn}[partition set $\mathbb{P}(A)$]
\label{dfn:partition}
For any set $A$, a \emph{partition} of $A$ is a finite collection of non-empty disjoint subsets $A_1,\dots,A_k\subset A$ whose union is $A$.
The \emph{single-block} partition of $A$ consists of the set $A$ itself.
The \emph{partition set} $\mathbb{P}(A)$ consists of all partitions of $A$.
\hfill $\blacksquare$
\end{dfn}
If $A=\{1,2,3\}$, then $(\{1,2\},\{3\})$ is a partition of $A$, but
$(\{1\},\{2\})$ and $(\{1,2\},\{1,3\})$ are not.
In this case the partition set $\mathbb{P}(A)$ consists of 5 partitions
$$(\{1\},\{2\},\{3\}),\quad
(\{1,2\},\{3\}),\quad
(\{1,3\},\{2\}),\quad
(\{2,3\},\{1\}),\quad
(\{1,2,3\}).$$
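The five partitions above can be verified by a short enumeration; the recursive helper \texttt{partitions} below is our own sketch, not part of the paper.

```python
# Enumerate the partition set P(A) of a small finite set.

def partitions(elements):
    """Yield every partition of a list as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in partitions(rest):
        # insert `first` into each existing block in turn ...
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        # ... or let `first` form its own singleton block
        yield [[first]] + smaller

print(sum(1 for _ in partitions([1, 2, 3])))   # 5, as listed above
```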
Definition~\ref{dfn:dendrogram} below extends the concept of a dendrogram from \cite[section~3.1]{carlsson2010characterization} to arbitrary (possibly, infinite) sets $A$.
Since every partition of $A$ is finite by Definition~\ref{dfn:partition}, we do not need to assume separately that an initial partition of $A$ is finite.
The initial partition is now also allowed to contain non-singleton sets.
\begin{dfn}[dendrogram of merge sets]
\label{dfn:dendrogram}
A \emph{dendrogram} over any set $A$ is a function $\Delta:[0,\infty)\to\mathbb{P}(A)$ of a scale $s\geq 0$ satisfying the following conditions.
\smallskip
\noindent
(\ref{dfn:dendrogram}a)
There exists a scale $r\geq 0$ such that $\Delta(A;s)$ is the single-block partition for all $s\geq r$.
\smallskip
\noindent
(\ref{dfn:dendrogram}b)
If $s\leq t$, then $\Delta(A;s)$ \emph{refines} $\Delta(A;t)$, i.e. any set from $\Delta(A;s)$ is a subset of some set from $\Delta(A;t)$.
These inclusions of subsets of $A$ induce the natural map $\Delta_s^t:\Delta(A;s)\to\Delta(A;t)$.
\smallskip
\noindent
(\ref{dfn:dendrogram}c)
There are finitely many \emph{merge scales} $s_i$ such that $$s_0 = 0 \text{ and } s_{i+1} = \sup\{s \mid \text{the map } \Delta_{s_i}^{s'} \text{ is the identity for all } s' \in [s_i,s)\},\quad i=0,\dots,m-1.$$
\noindent
Since $\Delta(A;s_{i})\to\Delta(A;s_{i+1})$ is not an identity map, there is a subset $B\in\Delta(A;s_{i+1})$ whose preimage consists of at least two subsets from $\Delta(A;s_{i})$.
This subset $B\subset A$ is called a \emph{merge} set and its \emph{birth} scale is $b=s_{i+1}$.
All sets of $\Delta(A;0)$ are merge sets at the birth scale 0.
The \emph{life} of $B$ is the interval $\mathrm{life}(B)=[b,t)$ from its birth scale $b$ to its \emph{death} scale $t=\sup\{s \mid \Delta_{b}^s(B)=B\}$.
\hfill $\blacksquare$
\end{dfn}
Dendrograms are usually represented as trees whose nodes correspond to all sets from the partitions $\Delta(A;s_i)$ at merge scales.
Edges of such a tree connect any set $B\in\Delta(A;s_{i+1})$ with its preimages under the map $\Delta(A;s_{i})\to\Delta(A;s_{i+1})$.
Fig.~\ref{fig:3-point_dendrogram} shows the dendrogram on $A=\{1,2,3\}$.
\medskip
\begin{figure}
\parbox{100mm}{
\begin{tabular}{lccccc}
partition $\Delta(A;2)$ at scale $s_2=2$ & & & $\{1,2,3\}$ & & \\
map $\Delta_1^2:\Delta(A;1)\to\Delta(A;2)$ & & & $\uparrow$ & $\nwarrow$ & \\
partition $\Delta(A;1)$ at scale $s_1=1$ & & & \{1, 2\} & & \{3\} \\
map $\Delta_0^1:\Delta(A;0)\to\Delta(A;1)$ & & $\nearrow$ & $\uparrow$ & & $\uparrow$ \\
partition $\Delta(A;0)$ at scale $s_0=0$ & $\{1\}$ & & $\{2\}$ & & \{3\}
\end{tabular}}
\parbox{35mm}{
\begin{tikzpicture}[scale = 0.9]
\draw[style=help lines,step = 1] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {1/1, 2/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {1/1, 2/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,1) circle (2pt);
\filldraw [fill = red] (0,1) circle (2pt);
\filldraw [fill = blue] (0,2) circle (2pt);
\filldraw [fill = blue] (1,2) circle (2pt);
\filldraw [fill = blue] (2,2.7) circle (2pt);
\end{tikzpicture}}
\caption{The dendrogram $\Delta$ on $A=\{1,2,3\}$ and its mergegram $\mathrm{MG}(\Delta)$ from Definition~\ref{dfn:mergegram}.}
\label{fig:3-point_dendrogram}
\end{figure}
In the dendrogram above, the partition $\Delta(A;1)$ consists of $\{1,2\}$ and $\{3\}$.
The maps $\Delta_s^t$ induced by inclusions respect the compositions in the sense that $\Delta_s^t\circ\Delta_r^s=\Delta_r^t$ for any $r\leq s\leq t$.
For example, $\Delta_0^1(\{1\})=\{1,2\}=\Delta_0^1(\{2\})$ and $\Delta_0^1(\{3\})=\{3\}$, so $\Delta_0^1$ is a well-defined map from the partition $\Delta(A;0)$ of three singleton sets to $\Delta(A;1)$, but is not an identity.
\medskip
At the scale $s_0=0$ the merge sets $\{1\},\{2\}$ have $\mathrm{life}=[0,1)$, while the merge set $\{3\}$ has $\mathrm{life}=[0,2)$.
At the scale $s_1=1$ the only merge set $\{1,2\}$ has $\mathrm{life}=[1,2)$.
At the scale $s_2=2$ the only merge set $\{1,2,3\}$ has $\mathrm{life}=[2,+\infty)$.
The notation $\Delta$ is motivated as the first (Greek) letter in the word dendrogram and by a $\Delta$-shape of a typical tree above.
\medskip
Condition~(\ref{dfn:dendrogram}a) means that
the partition of $A$ becomes trivial (a single block) for all large scales $s$.
Condition~(\ref{dfn:dendrogram}b) says that when the scale $s$ is increasing, sets from a partition $\Delta(A;s)$ can only merge with each other, but cannot split.
Condition~(\ref{dfn:dendrogram}c) implies that there are only finitely many mergers, when two or more subsets of $A$ merge into a larger merge set.
\medskip
\begin{lem}[single-linkage dendrogram]
\label{lem:sl_clustering}
Given a metric space $(X,d)$ and a finite set $A\subset X$, the single-linkage dendrogram $\Delta_{SL}(A)$ from Definition~\ref{dfn:sl_clustering} satisfies Definition~\ref{dfn:dendrogram}.
\end{lem}
\begin{proof}
Since $A$ is finite, there are only finitely many inter-point distances within $A$, which implies conditions (\ref{dfn:dendrogram}a,c).
Let $f:X\to\mathbb{R}$ be the distance to (the closest point of) $A$, i.e. $f(p)=d(p,A)$ for any $p\in X$.
Condition (\ref{dfn:dendrogram}b) follows from the inclusions $f^{-1}[0,s) \subseteq f^{-1}[0,t)$ for $s\leq t$.
\end{proof}
A \emph{mergegram} represents lives of merge sets by dots with two coordinates (birth,death).
\begin{dfn}[mergegram $\mathrm{MG}(\Delta)$]
\label{dfn:mergegram}
The \emph{mergegram} of a dendrogram $\Delta$ from Definition~\ref{dfn:dendrogram} has the dot $(\mathrm{birth},\mathrm{death})$ in $\mathbb{R}^2$ for each merge set $B$ of $\Delta$ with $\mathrm{life}(B)=[\mathrm{birth},\mathrm{death})$.
If any life interval appears $k$ times, the dot (birth,death) has the multiplicity $k$ in $\mathrm{MG}(\Delta)$.
\hfill $\blacksquare$
\end{dfn}
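For the single-linkage dendrogram of a finite metric space, the mergegram can be computed by Kruskal-style unions over the pairwise distances, recording the birth scale of every cluster; merge scales are half the connecting distances, matching the ball-growing picture of Fig.~\ref{fig:5-point_line}. The sketch below is our own illustration (the function name \texttt{mergegram} is an assumption) and reproduces the nine dots of that example.

```python
# Mergegram of the single-linkage dendrogram of a finite metric space:
# process pairwise distances in increasing order (Kruskal-style unions);
# a pair at distance d merges its two clusters at the scale s = d/2.

def mergegram(points, dist):
    """Return the multiset of dots (birth, death) as a sorted list of pairs."""
    n = len(points)
    parent = list(range(n))
    birth = [0.0] * n              # birth scale of the cluster at each root

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    dots = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        s = d / 2                  # merge scale of this union
        for r in (ri, rj):
            # a cluster born at this very scale belongs to the same merge
            # event (three or more sets merging at once), so it emits no dot
            if birth[r] < s:
                dots.append((birth[r], s))
        parent[ri] = rj
        birth[rj] = s
    dots.append((birth[find(0)], float("inf")))   # the final merge set lives forever
    return sorted(dots)

A = [0, 4, 6, 9, 10]
print(mergegram(A, lambda a, b: abs(a - b)))   # the 9 dots of the 5-point example
```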
For simplicity, this paper considers vector spaces with coefficients (of linear combinations of vectors) only in $\mathbb{Z}_2=\{0,1\}$, which can be replaced by any field.
\begin{dfn}[merge module $M(\Delta)$]
\label{dfn:merge_module}
For any dendrogram $\Delta$ on a set $X$ from Definition~\ref{dfn:dendrogram},
the \emph{merge module} $M(\Delta)$ consists of the vector spaces $M_s(\Delta)$, $s\in\mathbb{R}$, and linear maps $m_s^t:M_s(\Delta)\to M_t(\Delta)$, $s\leq t$.
For any $s\in\mathbb{R}$, the space $M_s(\Delta)$ has one generator (basis vector) $[A]\in M_s(\Delta)$ for each set $A\in\Delta(s)$.
For $s<t$ and any set $A\in\Delta(s)$,
if the image of $A$ under $\Delta_s^t$ coincides with $A\subset X$, i.e. $\Delta_s^t(A)=A$, then $m_s^t([A])=[A]$, else $m_s^t([A])=0$.
\hfill $\blacksquare$
\end{dfn}
\begin{figure}[h]
\begin{tabular}{lccccccccc}
scale $s_3=+\infty$ & 0 & & & & & 0 \\
map $m_2^{+\infty}$ & $\uparrow$ & & & & & $\uparrow$\\
scale $s_2=2$ & $\mathbb{Z}_2$ & & & 0 & 0 & [\{1,2,3\}]\\
map $m_1^2$ & $\uparrow$ & & & $\uparrow$ & $\uparrow$\\
scale $s_1=1$ & $\mathbb{Z}_2\oplus\mathbb{Z}_2$ & 0 & 0 & [\{3\}] & [\{1,2\}] \\
map $m_0^1$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ \\
scale $s_0=0$ & $\mathbb{Z}_2\oplus\mathbb{Z}_2\oplus\mathbb{Z}_2$ & [\{1\}] & [\{2\}] & [\{3\}] &
\end{tabular}
\caption{The merge module $M(\Delta)$ of the dendrogram $\Delta$ on the set $X=\{1,2,3\}$ in Fig.~\ref{fig:3-point_dendrogram}.}
\label{fig:3-point_module}
\end{figure}
\begin{exa}
\label{exa:5-point_set}
Fig.~\ref{fig:5-point_set} shows the metric space $X=\{a,b,c,p,q\}$ with distances defined by the shortest-path metric induced by the specified edge-lengths (through the auxiliary vertices $x$ and $y$), see the distance matrix.
\begin{figure}[H]
\parbox{80mm}{
\begin{tikzpicture}[scale = 0.75][sloped]
\node (x) at (5,3) {x};
\node (a) at (1,1) {a};
\draw (a) -- node[above]{5} ++ (x);
\node (b) at (3.5,4.0) {b};
\draw (b) -- node[above]{1} ++ (x);
\node (c) at (7,1) {c};
\draw (c) -- node[below]{2} ++ (x);
\node (y) at (8,3) {y};
\draw (x) -- node[above]{2} ++ (y);
\node (d) at (10,5){p};
\node (e) at (10,1){q};
\draw (y) -- node[below]{2} ++ (d);
\draw (y) -- node[below]{2} ++ (e);
\end{tikzpicture}}
\parbox{40mm}{
\begin{tabular}{c|ccccc}
& a & b & c & p & q \\
\hline
a & 0 & 6 & 7 & 9 & 9 \\
b & 6 & 0 & 3 & 5 & 5 \\
c & 7 & 3 & 0 & 6 & 6 \\
p & 9 & 5 & 6 & 0 & 4 \\
q & 9 & 5 & 6 & 4 & 0
\end{tabular}}
\caption{The set $X=\{a,b,c,p,q\}$ has the distance matrix defined by the shortest-path metric.}
\label{fig:5-point_set}
\end{figure}
\begin{figure}[H]
\begin{tikzpicture}[scale = 0.6][sloped]
\draw[style=help lines,step = 1] (-1,0) grid (10.4,6.3);
\foreach \i in {0,0.5,...,3.0} { \node at (-1.4,2*\i) {\i}; }
\node (a) at (0,-0.3) {a};
\node (b) at (4,-0.3) {b};
\node (c) at (6,-0.3) {c};
\node (d) at (8,-0.3) {p};
\node (e) at (10,-0.3) {q};
\node (x) at (5,6.75) {};
\node (de) at (9,4){};
\node (bc) at (5.0,3){};
\node (bcde) at (7.0,5){};
\node (all) at (5.0,6){};
\draw [line width=0.5mm, blue ] (a) |- (all.center);
\draw [line width=0.5mm, blue ] (b) |- (bc.center);
\draw [line width=0.5mm, blue ] (c) |- (bc.center);
\draw [line width=0.5mm, blue ] (d) |- (de.center);
\draw [line width=0.5mm, blue ] (e) |- (de.center);
\draw [line width=0.5mm, blue ] (de.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bc.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bcde.center) |- (all.center);
\draw [line width=0.5mm, blue] [->] (all.center) -> (x.center);
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}[scale = 1.1]
\draw[style=help lines,step = 0.5] (0,0) grid (3.4,3.4);
\draw[->] (-0.2,0) -- (3.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,3.4) node[above] {death};
\draw[-] (0,0) -- (3.4,3.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2, 2.5/2.5, 3.0/3}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2, 2.5/2.5, 3.0/3}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw[fill=red] (0,1.5) circle (2pt);
\filldraw [fill = red] (0.0,2.0) circle (2pt);
\filldraw [fill = blue] (0.0,3) circle (2pt);
\filldraw [fill = blue] (1.5,2.5) circle (2pt);
\filldraw [fill = blue] (2.0,2.5) circle (2pt);
\filldraw [fill = blue] (2.5, 3.0) circle (2pt);
\filldraw [fill = blue] (3, 3.7) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: the dendrogram $\Delta$ for the single-linkage clustering of the 5-point set $X=\{a,b,c,p,q\}$ in Fig.~\ref{fig:5-point_set}.
\textbf{Right}: the mergegram $\mathrm{MG}(\Delta)$, red dots have multiplicity 2.}
\label{fig:5-point_set_mergegram}
\end{figure}
The dendrogram $\Delta$ in the first picture of Fig.~\ref{fig:5-point_set_mergegram} generates the mergegram as follows:
\begin{itemize}
\item
each of the singleton sets $\{b\}$ and $\{c\}$ gives the dot $(0,1.5)$, so this dot has multiplicity 2;
\item
each of the singleton sets $\{p\}$ and $\{q\}$ gives the dot $(0,2)$, so this dot has multiplicity 2;
\item
the singleton set $\{a\}$ has the dot $(0,3)$;
the merge set $\{b,c\}$ has the dot (1.5,2.5);
\item
the merge set $\{p,q\}$ has the dot (2,2.5);
the merge set $\{b,c,p,q\}$ has the dot (2.5,3);
\item
the merge set $\{a,b,c,p,q\}$ has the dot $(3,+\infty)$.
\end{itemize}
\end{exa}
\section{Background on persistence modules and diagrams}
\label{sec:persistence_modules}
This section introduces the key concepts from the thorough review by Chazal et al. \cite{chazal2016structure}.
As will become clear soon, the merge module of any dendrogram belongs to a wider class below.
\begin{dfn}[persistence module $\mathbb{V}$]
\label{dfn:persistence_module}
A \emph{persistence module} $\mathbb{V}$ over the real numbers $\mathbb{R}$ is a family of vector spaces $V_t$, $t\in \mathbb{R}$ with linear maps $v^t_s:V_s \rightarrow V_t$, $s\leq t$ such that $v^t_t$ is the identity map on $V_t$ and the composition is respected: $v^t_s \circ v^s_r = v^t_r$ for any $r \leq s \leq t$.
\hfill $\blacksquare$
\end{dfn}
The set of real numbers can be considered as a category $\mathbb{R}$ in the following sense.
The objects of $\mathbb{R}$ are all real numbers.
Any two real numbers such that $a\leq b$ define a single morphism $a\to b$.
The composition of morphisms $a\to b$ and $b \to c$ is the morphism $a \to c$.
In this language, a persistence module is a functor from $\mathbb{R}$ to the category of vector spaces.
\medskip
A basic example of $\mathbb{V}$ is an interval module.
An interval $J$ between points $p<q$ in the line $\mathbb{R}$ can be one of the following types: closed $[p,q]$, open $(p,q)$ and half-open or half-closed $[p,q)$ and $(p,q]$.
It is convenient to encode types of endpoints by $\pm$ superscripts as follows:
$$[p^-,q^+]:=[p,q],\quad
[p^+,q^-]:=(p,q),\quad
[p^+,q^+]:=(p,q],\quad
[p^-,q^-]:=[p,q).$$
The endpoints $p,q$ can also take the infinite values $\pm\infty$, but without superscripts.
\begin{exa}[interval module $\mathbb{I}(J)$]
\label{exa:interval_module}
For any interval $J\subset\mathbb{R}$, the \emph{interval module} $\mathbb{I}(J)$ is the persistence module defined by the following vector spaces $I_s$ and linear maps $i_s^t:I_s\to I_t$
$$I_s=\left\{ \begin{array}{ll}
\mathbb{Z}_2, & \mbox{ for } s\in J, \\
0, & \mbox{ otherwise };
\end{array} \right.\qquad
i_s^t=\left\{ \begin{array}{ll}
\mathrm{id}, & \mbox{ for } s,t\in J, \\
0, & \mbox{ otherwise }
\end{array} \right.\mbox{ for any }s\leq t.$$
\end{exa}
\medskip
The direct sum $\mathbb{W}=\mathbb{U}\oplus\mathbb{V}$ of persistence modules $\mathbb{U},\mathbb{V}$ is defined as the persistence module with the vector spaces $W_s=U_s\oplus V_s$ and linear maps $w_s^t=u_s^t\oplus v_s^t$.
\medskip
We illustrate the abstract concepts above using geometric constructions of Topological Data Analysis.
Let $f:X\to\mathbb{R}$ be a continuous function on a topological space.
Its \emph{sublevel} sets $X_s^f=f^{-1}((-\infty,s])$ form nested subspaces $X_s^f\subset X_t^f$ for any $s\leq t$.
The inclusions of the sublevel sets respect compositions similarly to a dendrogram $\Delta$ in Definition~\ref{dfn:dendrogram}.
\medskip
On a metric space $X$ with a distance function $d:X\times X\to[0,+\infty)$, a typical example of a function $f:X\to\mathbb{R}$ is the distance to a finite set of points $A\subset X$.
More specifically, for any point $p\in X$, let $f(p)$ be the distance from $p$ to (a closest point of) $A$.
For any $r\geq 0$, the preimage $X_r^f=f^{-1}((-\infty,r])=\{q\in X \mid d(q,A)\leq r\}$ is the union of closed balls that have the radius $r$ and centers at all points $p\in A$.
For example, $X_0^f=f^{-1}((-\infty,0])=A$ and $X_{+\infty}^f=f^{-1}(\mathbb{R})=X$.
\medskip
If we consider any continuous function $f:X\to\mathbb{R}$, we have the inclusion $X_s^f\subset X_r^f$ for any $s\leq r$.
Hence all sublevel sets $X_s^f$ form a nested sequence of subspaces within $X$.
The above construction of a \emph{filtration} $\{X_s^f\}$ can be considered as a functor from $\mathbb{R}$ to the category of topological spaces.
Below we discuss the most practically used case of dimension 0.
\begin{exa}[persistent homology]
\label{exa:persistent_homology}
For any topological space $X$, the 0-dimensional \emph{homology} $H_0(X)$ is the vector space (with coefficients $\mathbb{Z}_2$) generated by all connected components of $X$.
Let $\{X_s\}$ be any \emph{filtration} of nested spaces, e.g. sublevel sets $X_s^f$ based on a continuous function $f:X\to\mathbb{R}$.
The inclusions $X_s\subset X_r$ for $s\leq r$ induce the linear maps between homology groups $H_0(X_s)\to H_0(X_r)$ and define the \emph{persistent homology} $\{H_0(X_s)\}$, which satisfies the conditions of a persistence module from Definition~\ref{dfn:persistence_module}.
\hfill $\blacksquare$
\end{exa}
\medskip
If $X$ is a finite set of $m$ points, then $H_0(X)$ is the direct sum $\mathbb{Z}_2^m$ of $m$ copies of $\mathbb{Z}_2$.
\medskip
The persistence modules that can be decomposed as direct sums of interval modules can be described in a very simple combinatorial way by persistence diagrams of dots in $\mathbb{R}^2$.
\begin{dfn}[persistence diagram $\mathrm{PD}(\mathbb{V})$]
\label{dfn:persistence_diagram}
Let a persistence module $\mathbb{V}$ be decomposed as a direct sum of interval modules from Example~\ref{exa:interval_module}: $\mathbb{V}\cong\bigoplus\limits_{l \in L}\mathbb{I}[p^{*}_l,q^{*}_l]$, where each $*$ is $+$ or $-$.
The \emph{persistence diagram} $\mathrm{PD}(\mathbb{V})$ is the multiset
$\mathrm{PD}(\mathbb{V}) = \{(p_l,q_l) \mid l \in L \} \setminus \{p=q\}\subset\mathbb{R}^2$.
\hfill $\blacksquare$
\end{dfn}
\medskip
The 0-dimensional persistent homology of a space $X$ with a continuous function $f:X\to\mathbb{R}$ will be denoted by $\mathrm{PD}\{H_0(X_s^f)\}$.
Lemma~\ref{lem:merge_module_decomposition} will prove that the merge module $M(\Delta)$ of any dendrogram $\Delta$ is also decomposable into interval modules.
Hence the mergegram $\mathrm{MG}(\Delta)$ from Definition~\ref{dfn:mergegram} can be interpreted as the persistence diagram of the merge module $M(\Delta)$.
\section{The mergegram is stronger than the 0-dimensional persistence}
\label{sec:mergegram_stronger}
Let $f:X\to\mathbb{R}$ be the distance function to a finite subset $A$ of a metric space $(X,d)$.
The persistent homology $\{H_k(X_s^f)\}$ in any dimension $k$ is invariant under isometries of $X$.
\medskip
Moreover, the persistence diagrams of very different shapes, e.g. topological spaces and their discrete samples, can be easily compared by the bottleneck distance in Definition~\ref{dfn:bottleneck_distance}.
\medskip
Practical applications of persistence are justified by Stability Theorem~\ref{thm:stability_persistence} saying that the persistence diagram continuously changes under perturbations of a given filtration or an initial point set.
A similar stability of mergegrams will be proved in Theorem~\ref{thm:stability_mergegram}.
\medskip
This section shows that the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ has more isometry information about the subset $A\subset X$ than the 0-dimensional persistent homology $\{H_0(X_s^f)\}$.
\medskip
Theorem~\ref{thm:0D_persistence_mergegram} shows how to obtain the 0D persistence $\mathrm{PD}\{H_0(X_s^f)\}$ from $\mathrm{MG}(\Delta_{SL}(A))$, where $f:X\to\mathbb{R}$ is the distance to a finite subset $A\subset X$.
Example~\ref{exa:mergegram_stronger} builds two 4-point sets in $\mathbb{R}$ whose persistence diagrams are identical, but their mergegrams are different.
\medskip
We start from folklore Claims~\ref{claim:0D_persistence_SL}-\ref{claim:0D_persistence_MST}, which interpret the 0D persistence $\mathrm{PD}\{H_0(X_s^f)\}$ using the classical concepts of the single-linkage dendrogram and Minimum Spanning Tree.
\begin{myclaim}[0D persistence from $\Delta_{SL}$]
\label{claim:0D_persistence_SL}
For a finite set $A$ in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
In the single-linkage dendrogram $\Delta_{SL}(A)$, let $0<s_1<\dots<s_m<s_{m+1}=+\infty$ be all distinct merge scales.
If $k\geq 2$ subsets of $A$ merge into a larger subset of $A$ at a scale $s_i$, the multiplicity of $s_i$ is $\mu_i=k-1$.
Then the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ consists of the dots $(0,s_i)$ with multiplicities $\mu_i$, $i=1,\dots,m+1$.
\hfill $\blacksquare$
\end{myclaim}
\begin{myclaim}[0D persistence from MST]
\label{claim:0D_persistence_MST}
For a set $A$ of $n$ points in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
Let a Minimum Spanning Tree $\mathrm{MST}(A)$ have edge-lengths $l_1\leq\dots\leq l_{n-1}$.
The persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ consists of the $n-1$ dots $(0,0.5l_i)$ counted with multiplicities if some edge-lengths are equal, plus the infinite dot $(0,+\infty)$.
\hfill $\blacksquare$
\end{myclaim}
\begin{thm}[0D persistence from a mergegram]
\label{thm:0D_persistence_mergegram}
For a finite set $A$ in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
Let the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ be a multiset $\{(b_i,d_i)\}_{i=1}^k$, where some dots can be repeated.
Then the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ is the difference of the multisets $\{(0,d_i)\}_{i=1}^{k}-\{(0,b_i)\}_{i=1}^{k}$ containing each dot $(0,s)$ exactly $\#d-\#b$ times, where $\#d$ is the number of deaths $d_i=s$ and $\#b$ is the number of births $b_i=s$.
All trivial dots $(0,0)$ are ignored; alternatively, we take $\{(0,d_i)\}_{i=1}^{k}$ only with $d_i>0$.
\hfill $\blacksquare$
\end{thm}
\begin{proof}
Suppose that, in the language of Claim~\ref{claim:0D_persistence_SL}, at a scale $s>0$ of multiplicity $\mu$ exactly $\mu+1$ subsets merge into a set $B\in\Delta_{SL}(A;s)$.
By Claim~\ref{claim:0D_persistence_SL} this set $B$ contributes $\mu$ dots $(0,s)$ to the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$.
By Definition~\ref{dfn:mergegram} the same set $B$ contributes $\mu+1$ dots of the form $(b_i,s)$, $i=1,\dots,\mu+1$, corresponding to the $\mu+1$ sets that merge into $B$ at the scale $s$.
Moreover, the set $B$ itself will merge later into a larger set, which creates one extra dot $(s,d)\in\mathrm{MG}(\Delta_{SL}(A))$.
The exceptional case $B=A$ corresponds to $d=+\infty$.
\smallskip
The difference $\{(0,d_i)\}_{i=1}^{k}-\{(0,b_i)\}_{i=1}^{k}$ of multisets removes one dot $(0,s)$, coming from the birth of $B$ at the scale $s$, from the $\mu+1$ dots counted above, so we get exactly $\mu$ dots $(0,s)\in\mathrm{PD}\{H_0(X_s^f)\}$.
The required formula has been proved for contributions of any merge set $B\subset A$.
\end{proof}
In Example~\ref{exa:5-point_line} the mergegram in the last picture of Fig.~\ref{fig:5-point_line} is the multiset of 9 dots:
$$\mathrm{MG}(\Delta_{SL}(A))=\{(0,0.5),(0,0.5),(0,1),(0,1),(0.5,1.5),(1,1.5),(0,2),(1.5,2),(2,+\infty)\}.$$
Taking the difference of multisets and ignoring trivial dots $(0,0)$, we get \\
$\mathrm{PD}\{H_0(X_s^f)\}=\{(0,0.5),(0,0.5),(0,1),(0,1),(0,1.5),(0,1.5),(0,2),(0,2),(0,+\infty)\}$ \\
$-\{(0,0.5),(0,1),(0,2)\}=\{(0,0.5),(0,1),(0,1.5),(0,2),(0,+\infty)\}
\mbox{ as in Fig.~\ref{fig:5-point_line}}.$
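The multiset arithmetic above is easy to automate. The short Python sketch below (our illustration, not part of the paper's code) uses collections.Counter as a multiset to recover the 0D persistence diagram from a mergegram exactly as in the theorem above.

```python
from collections import Counter

def pd_from_mergegram(mg):
    """0D persistence diagram from a mergegram: the multiset difference
    {(0, d_i)} - {(0, b_i)}, ignoring trivial dots on the diagonal."""
    deaths = Counter(d for b, d in mg if d > 0)
    births = Counter(b for b, d in mg if b > 0)
    diff = deaths - births  # Counter subtraction keeps only positive counts
    return sorted((0, s) for s, mult in diff.items() for _ in range(mult))
```

On the 9-dot mergegram of the 5-point example it returns exactly the five dots listed above.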
\begin{exa}[the mergegram is stronger than 0D persistence]
\label{exa:mergegram_stronger}
Fig.~\ref{fig:4-point_set1} and~\ref{fig:4-point_set2} show the dendrograms, identical 0D persistence diagrams and different mergegrams for the sets $A=\{0,1,3,7\}$ and $B=\{0,1,5,7\}$ in $\mathbb{R}$.
This example together with Theorem~\ref{thm:0D_persistence_mergegram} justifies that the new mergegram is strictly stronger than 0D persistence as an isometry invariant.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 0.5][sloped]
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\draw[style=help lines,step = 1] (-1,0) grid (6.4,4.4);
\foreach \i in {0,0.5,...,2} { \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.4) {0};
\node (b) at (2,-0.4) {1};
\node (c) at (4,-0.4) {3};
\node (d) at (6,-0.4) {7};
\node (x) at (4.625,5) {};
\node (y) at (4.625,4) {};
\node (ab) at (1,1) {};
\node (abc) at (2.5,2){};
\node (abce) at (3,4){};
\draw [line width=0.5mm, blue] (a) |- (ab.center);
\draw [line width=0.5mm, blue] (b) |- (ab.center);
\draw [line width=0.5mm, blue] (ab.center) |- (abc.center);
\draw [line width=0.5mm, blue] (c) |- (abc.center);
\draw [line width=0.5mm, blue] (d) |- (abce.center);
\draw [line width=0.5mm, blue] (abc.center) |- (abce.center);
\draw [line width=0.5mm, blue] [->] (y.center) -> (x.center);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0,1) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.0,2) circle (2pt);
\filldraw [fill=blue] (0.5,1.0) circle (2pt);
\filldraw [fill=blue] (1.0,2.0) circle (2pt);
\filldraw [fill=blue] (2,2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: single-linkage dendrogram $\Delta_{SL}(A)$ for $A=\{0,1,3,7\}\subset\mathbb{R}$ (the horizontal axes of the dendrograms are distorted).
\textbf{Middle}: the 0D persistence diagram for the sublevel filtration of the distance to $A$.
\textbf{Right}: mergegram $\mathrm{MG}(\Delta_{SL}(A))$. }
\label{fig:4-point_set1}
\end{figure}
\begin{figure}[H]
\begin{tikzpicture}[scale = 0.5][sloped]
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\draw[style=help lines,step = 1] (-1,0) grid (6.4,4.4);
\foreach \i in {0,0.5,...,2} { \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.4) {0};
\node (b) at (2,-0.4) {1};
\node (c) at (4,-0.4) {5};
\node (d) at (6,-0.4) {7};
\node (ab) at (1,1) {};
\node (cd) at (5,2) {};
\node (abcd) at (3,4){};
\node (x) at (3,5) {};
\node (y) at (3,4) {};
\draw [line width=0.5mm, blue] (a) |- (ab.center);
\draw [line width=0.5mm, blue] (b) |- (ab.center);
\draw [line width=0.5mm, blue] (c) |- (cd.center);
\draw [line width=0.5mm, blue] (d) |- (cd.center);
\draw [line width=0.5mm, blue] (ab.center) |- (abcd.center);
\draw [line width=0.5mm, blue] (cd.center) |- (abcd.center);
\draw [line width=0.5mm, blue] [->] (y.center) -> (x.center);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0,1) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw[fill=red] (0,0.5) circle (2pt);
\filldraw[fill = red] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.5,2.0) circle (2pt);
\filldraw [fill=blue] (1.0,2.0) circle (2pt);
\filldraw [fill=blue] (2,2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: single-linkage dendrogram $\Delta_{SL}(B)$ for $B=\{0,1,5,7\}\subset\mathbb{R}$.
\textbf{Middle}: the 0D persistence diagram for the sublevel filtration of the distance to $B$.
\textbf{Right}: mergegram $\mathrm{MG}(\Delta_{SL}(B))$. }
\label{fig:4-point_set2}
\end{figure}
\end{exa}
\section{Distances and stability of persistence modules}
\label{sec:stability_persistence}
Definition~\ref{dfn:homo_modules} introduces homomorphisms between persistence modules, which are needed to state the stability of persistence diagrams $\mathrm{PD}\{H_0(X_s^f)\}$ under perturbations of a function $f:X\to\mathbb{R}$.
This result will imply a similar stability for the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ for the dendrogram $\Delta_{SL}(A)$ of the single-linkage clustering of a set $A$ within a metric space $X$.
\begin{dfn}[a homomorphism of a degree $\delta$ between persistence modules]
\label{dfn:homo_modules}
Let $\mathbb{U}$ and $\mathbb{V}$ be persistence modules over $\mathbb{R}$.
A \emph{homomorphism} $\mathbb{U}\to\mathbb{V}$ of \emph{degree} $\delta\in\mathbb{R}$ is a collection of linear maps $\phi_t:U_t \rightarrow V_{t+\delta}$, $t \in \mathbb{R}$, such that the diagram commutes for all $s \leq t$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.0]
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
U_s & U_t \\
V_{s+\delta} & V_{t+\delta} \\};
\path[-stealth]
(m-1-1) edge node [left] {$\phi_s$} (m-2-1)
edge [-] node [above] {$u^t_s$} (m-1-2)
(m-2-1.east|-m-2-2) edge node [above] {$v^{t+\delta}_{s+\delta}$}
node [above] {} (m-2-2)
(m-1-2) edge node [right] {$\phi_t$} (m-2-2);
\end{tikzpicture}
\end{figure}
Let $\text{Hom}^\delta(\mathbb{U},\mathbb{V})$ be all homomorphisms $\mathbb{U}\rightarrow \mathbb{V}$ of degree $\delta$.
Persistence modules $\mathbb{U},\mathbb{V}$ are \emph{isomorphic} if they have inverse homomorphisms $\mathbb{U}\to\mathbb{V}$ and $\mathbb{V}\to\mathbb{U}$ of degree $\delta=0$.
\hfill $\blacksquare$
\end{dfn}
For a persistence module $\mathbb{V}$ with maps $v_s^t:V_s\to V_t$, the simplest example of a homomorphism of degree $\delta\geq 0$
is $1_{\mathbb{V}}^{\delta}:\mathbb{V}\to\mathbb{V}$ defined by the maps $v_s^{s+\delta}$, $s\in\mathbb{R}$.
So the maps $v_s^t$ defining the structure of $\mathbb{V}$ shift all vector spaces $V_s$ by the difference of scales $\delta=t-s$.
\medskip
The concept of interleaved modules below is an algebraic generalization of a geometric perturbation of a set $X$ in terms of (the homology of) its sublevel sets $X_s$.
\begin{dfn}[interleaving distance ID]
\label{dfn:interleaving_distance}
Persistence modules $\mathbb{U}$ and $\mathbb{V}$ are $\delta$-interleaved if there are homomorphisms $\phi\in \text{Hom}^\delta(\mathbb{U},\mathbb{V})$ and $\psi \in \text{Hom}^\delta(\mathbb{V},\mathbb{U})$ such that $\phi\circ\psi = 1_{\mathbb{V}}^{2\delta} \text{ and } \psi\circ\phi = 1_{\mathbb{U}}^{2\delta}$.
The \emph{interleaving distance} is
$\mathrm{ID}(\mathbb{U},\mathbb{V})=\inf\{\delta\geq 0 \mid \mathbb{U} \text{ and } \mathbb{V} \text{ are } \delta\text{-interleaved} \}$.
\hfill $\blacksquare$
\end{dfn}
If $f,g:X\to\mathbb{R}$ are continuous functions such that $||f-g||_{\infty}\leq\delta$ in the $L_{\infty}$-distance, the persistence modules $H_k\{f^{-1}(-\infty,s]\}$, $H_k\{g^{-1}(-\infty,s]\}$ are $\delta$-interleaved for any $k$ \cite{cohen2007stability}.
The last conclusion was extended to persistence diagrams in terms of the bottleneck distance below.
\begin{dfn}[bottleneck distance BD]
\label{dfn:bottleneck_distance}
Let multisets $C,D$ contain finitely many points $(p,q)\in\mathbb{R}^2$, $p<q$, of finite multiplicity and all diagonal points $(p,p)\in\mathbb{R}^2$ of infinite multiplicity.
For $\delta\geq 0$, a $\delta$-matching is a bijection $h:C\to D$ such that $|h(a)-a|_{\infty}\leq\delta$ in the $L_{\infty}$-distance on the plane for any point $a\in C$.
The \emph{bottleneck} distance between persistence modules $\mathbb{U},\mathbb{V}$ is $\mathrm{BD}(\mathbb{U},\mathbb{V}) = \text{inf}\{ \delta \mid \text{ there is a }\delta\text{-matching between } \mathrm{PD}(\mathbb{U}) \text{ and } \mathrm{PD}(\mathbb{V})\}$.
\hfill $\blacksquare$
\end{dfn}
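For tiny diagrams the bottleneck distance can be computed by brute force directly from the definition. The Python sketch below (our illustration, exponential in the number of dots and assuming finite coordinates) augments each diagram with the diagonal projections of the other's dots, so that unmatched dots may be matched to the diagonal; two dots matched diagonal-to-diagonal cost 0, since such dots can slide along the diagonal.

```python
from itertools import permutations

def bottleneck(C, D):
    """Brute-force bottleneck distance between two small diagrams of
    off-diagonal dots (b, d) with b < d; illustration only."""
    proj = lambda p: ((p[0] + p[1]) / 2.0,) * 2  # nearest diagonal point
    C2 = list(C) + [proj(q) for q in D]
    D2 = list(D) + [proj(p) for p in C]
    if not C2:
        return 0.0
    def cost(p, q):
        if p[0] == p[1] and q[0] == q[1]:
            return 0.0  # diagonal-to-diagonal pairs are free
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))  # L-infinity
    return min(max(cost(p, q) for p, q in zip(C2, perm))
               for perm in permutations(D2))
```

For example, the single dot $(0,1)$ matched against the empty diagram costs $0.5$: it is pushed onto the diagonal.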
The original stability of persistence for sublevel set filtrations was extended as Theorem~\ref{thm:stability_persistence} to $q$-tame persistence modules.
Intuitively, a persistence module $\mathbb{V}$ is $q$-tame if any non-diagonal square in the persistence diagram $\mathrm{PD}(\mathbb{V})$ contains only finitely many points, see \cite[section~2.8]{chazal2016structure}.
Any finitely decomposable persistence module is $q$-tame.
\begin{thm}[stability of persistence modules]\cite[isometry theorem~4.11]{chazal2016structure}
\label{thm:stability_persistence}
Let $\mathbb{U}$ and $\mathbb{V}$ be $q$-tame persistence modules. Then $\mathrm{ID}(\mathbb{U},\mathbb{V}) = \mathrm{BD}(\mathrm{PD}(\mathbb{U}),\mathrm{PD}(\mathbb{V}))$,
where $\mathrm{ID}$ is the interleaving distance and $\mathrm{BD}$ is the bottleneck distance between the persistence diagrams.
\hfill $\blacksquare$
\end{thm}
\section{Stability of the mergegram for any single-linkage dendrogram}
\label{sec:stability}
In a dendrogram $\Delta$ from Definition~\ref{dfn:dendrogram}, any merge set $A$ of $\Delta$ has a life interval $\mathrm{life}(A)=[b,d)$ from its birth scale $b$ to its death scale $d$.
Lemmas~\ref{lem:merge_module_decomposition} and~\ref{lem:merge_modules_interleaved} are proved in appendices.
\begin{lem}[merge module decomposition]
\label{lem:merge_module_decomposition}
For any dendrogram $\Delta$ in the sense of Definition~\ref{dfn:dendrogram}, the merge module $M(\Delta)\cong\bigoplus\limits_{A}\mathbb{I}(\mathrm{life}(A))$ decomposes over all merge sets $A$.
\hfill $\blacksquare$
\end{lem}
Lemma~\ref{lem:merge_module_decomposition} will allow us to apply the stability of persistence from Theorem~\ref{thm:stability_persistence} to merge modules, together with Lemma~\ref{lem:merge_modules_interleaved}.
Stability of the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ will be proved under perturbations of $A$ in the Hausdorff distance defined below.
\begin{dfn}[Hausdorff distance HD]
\label{dfn:Hausdorff_distance}
For any subsets $A,B$ of a metric space $(X,d)$, the \emph{Hausdorff distance} $\mathrm{HD}(A,B)$ is $\max\{\sup\limits_{a\in A}\inf\limits_{b\in B} d(a,b), \sup\limits_{b\in B}\inf\limits_{a\in A} d(a,b)\}$.
\hfill $\blacksquare$
\end{dfn}
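For finite sets the definition translates into a few lines of Python (our illustration):

```python
def hausdorff(A, B, d):
    """Hausdorff distance between non-empty finite subsets A, B
    of a metric space with metric d, straight from the definition."""
    directed = lambda S, T: max(min(d(s, t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))
```

For the sets $A=\{0,1,3,7\}$ and $B=\{0,1,5,7\}$ of Example~\ref{exa:mergegram_stronger} it gives $\mathrm{HD}(A,B)=2$.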
\begin{lem}[merge modules interleaved]
\label{lem:merge_modules_interleaved}
If any subsets $A,B$ of a metric space $(X,d)$ have $\mathrm{HD}(A,B)=\delta$, then the merge modules $M(\Delta_{SL}(A))$ and $M(\Delta_{SL}(B))$ are $\delta$-interleaved.
\hfill $\blacksquare$
\end{lem}
\begin{thm}[stability of a mergegram]
\label{thm:stability_mergegram}
Any finite subsets $A,B$ of a metric space $(X,d)$ satisfy $\mathrm{BD}(\mathrm{MG}(\Delta_{SL}(A)),\mathrm{MG}(\Delta_{SL}(B)))\leq \mathrm{HD}(A,B)$.
Hence any small perturbation of $A$ in the Hausdorff distance yields a similarly small perturbation in the bottleneck distance for its mergegram $\mathrm{MG}(\Delta_{SL}(A))$ of the single-linkage clustering dendrogram $\Delta_{SL}(A)$.
\end{thm}
\begin{proof}
The given subsets $A,B$ with $\mathrm{HD}(A,B)=\delta$ have $\delta$-interleaved merge modules by Lemma~\ref{lem:merge_modules_interleaved}, i.e. $\mathrm{ID}(M(\Delta_{SL}(A)),M(\Delta_{SL}(B)))\leq\delta$.
Since any merge module $M(\Delta)$ is finitely decomposable, hence $q$-tame, by Lemma~\ref{lem:merge_module_decomposition}, Theorem~\ref{thm:stability_persistence} applies to these merge modules, i.e.
$\mathrm{BD}(\mathrm{MG}(\Delta_{SL}(A)),\mathrm{MG}(\Delta_{SL}(B)))\leq\delta$ as required.
\end{proof}
Theorem~\ref{thm:stability_mergegram} is confirmed by the following experiment on cloud perturbations in Fig.~\ref{fig:BD-vs-noise_bound}.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = noise bound, ylabel = bottleneck distance,grid]
\addplot table [x=a, y=b, col sep=comma] {TableAvg.csv};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = noise bound,grid]
\addplot table [x=a, y=b, col sep=comma] {TableMax.csv};
\end{axis}
\end{tikzpicture}
\caption{The bottleneck distances (average on the left, maximum on the right) between mergegrams of sampled point clouds and their perturbations.
Both graphs are below the line $y=2x$. }
\label{fig:BD-vs-noise_bound}
\end{figure}
\begin{enumerate}
\item We uniformly generate $N=100$ black points in the cube $[0,100]^3\subset\mathbb{R}^3$.
\item Then we generate a random number of red points such that the $\epsilon$-ball of every black point randomly contains 1, 2 or 3 red points for a noise bound $\epsilon\in[0.1,10]$ taken with a step size 0.1.
\item Compute the bottleneck distance between the mergegrams of black and red points.
\item Repeat the experiment $K=100$ times, plot the average and maximum in Fig.~\ref{fig:BD-vs-noise_bound}.
\end{enumerate}
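Steps 1--2 of this protocol can be sketched in Python as follows (a hypothetical helper, not the authors' code; the function name and the choice of an inscribed cube for sampling inside the $\epsilon$-ball are ours):

```python
import random

def perturbation_experiment_clouds(n_black=100, eps=1.0, seed=0):
    """Sample n_black uniform points in the cube [0,100]^3, then put
    1, 2 or 3 red points inside the eps-ball of every black point."""
    rng = random.Random(seed)
    black = [tuple(rng.uniform(0, 100) for _ in range(3))
             for _ in range(n_black)]
    red = []
    shift = eps / 3 ** 0.5  # a cube of this half-side fits in the eps-ball
    for p in black:
        for _ in range(rng.randint(1, 3)):
            red.append(tuple(c + rng.uniform(-shift, shift) for c in p))
    return black, red
```

By construction every red point lies within $\epsilon$ of a black point, so the Hausdorff perturbation of the cloud is at most $\epsilon$.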
\section{Experiments on a classification of point sets and conclusions}
\label{sec:experiments}
Algorithm~\ref{alg:mergegram} computes the mergegram of the SL dendrogram for any finite set $A\subset\mathbb{R}^m$.
\begin{alg}
\begin{algorithmic}
\STATE
\STATE \textbf{Input} : a finite point cloud $A\subset\mathbb{R}^m$
\STATE Compute $\mathrm{MST}(A)$ and sort all edges of $\mathrm{MST}(A)$ in increasing order of length
\STATE Initialize a Union-Find structure $U$ over $A$. Set every point of $A$ to be its own component.
\STATE Initialize the function $\text{prev: Components}[U] \rightarrow \mathbb{R}$ by setting $\text{prev}(t) = 0$ for all $t$
\STATE Initialize the vector Output that will consist of pairs in $\mathbb{R} \times \mathbb{R}$
\FOR{Edge $e = (a,b)$ in the set of edges (increasing order)}
\STATE Find components $c_1$ and $c_2$ of $a$ and $b$ respectively in Union-Find $U$
\STATE Add pairs (prev$[c_1]$,length($e$)), (prev$[c_2]$,length($e$)) $\in \mathbb{R}^2$ to Output
\STATE Merge components $c_1$ and $c_2$ in Union-Find $U$ and denote the component by $t$
\STATE Set prev$[t]$ = length($e$)
\ENDFOR
\STATE \textbf{return} Output
\end{algorithmic}
\label{alg:mergegram}
\end{alg}
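For readers preferring executable pseudocode, the Python sketch below mirrors Algorithm~\ref{alg:mergegram} for a cloud of real numbers (the authors' released implementation is in C++). Two illustrative conventions are ours: clusters merge at the scale $s=$ half the pairwise distance, matching the sublevel-set convention of the figures, and Kruskal's loop over all sorted pairwise edges selects the MST edges implicitly.

```python
from itertools import combinations

def mergegram(points):
    """Mergegram of the single-linkage dendrogram of a non-empty
    finite list of real numbers (merge scale = distance / 2)."""
    n = len(points)
    edges = sorted((abs(points[i] - points[j]) / 2.0, i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))

    def find(x):  # Union-Find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    birth = [0.0] * n  # "prev": birth scale of each current cluster
    dots = []
    for s, i, j in edges:
        ci, cj = find(i), find(j)
        if ci == cj:
            continue  # not an MST edge
        dots.append((birth[ci], s))  # both merging clusters die at s
        dots.append((birth[cj], s))
        parent[ci] = cj
        birth[cj] = s  # the merged cluster is born at s
    dots.append((birth[find(0)], float('inf')))  # the whole cloud never dies
    # drop trivial diagonal dots that appear when >2 clusters merge at once
    return sorted(d for d in dots if d[0] < d[1])
```

Running it on $A=\{0,1,3,7\}$ and $B=\{0,1,5,7\}$ reproduces the mergegrams of Fig.~\ref{fig:4-point_set1} and Fig.~\ref{fig:4-point_set2}.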
Let $\alpha(n)$ be the inverse Ackermann function.
Other constants below are defined in \cite{march2010fast}.
\begin{thm}[a fast mergegram computation]
\label{thm:complexity}
For any cloud $A\subset\mathbb{R}^m$ of $n$ points, the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ can be computed in time $O(\max\{c^6,c_p^2c^2_l \}c^{10}n\log n\,\alpha(n))$.
\end{thm}
\begin{proof}
A Minimum Spanning Tree $\mathrm{MST}(A)$ needs $O(\max\{c^6,c_p^2c^2_l \}c^{10}n\log n\,\alpha(n))$ time by \cite[Theorem~5.1]{march2010fast}.
The rest of Algorithm~\ref{alg:mergegram} is dominated by $O(n\alpha(n))$ Union-Find operations.
Hence the full algorithm has the same computational complexity as the MST.
\end{proof}
The experiments summarized in Fig.~\ref{fig:100-point_clouds} show that the mergegram curve in blue outperforms other isometry invariants on the isometry classification by the state-of-the-art PersLay.
We generated 10 classes of 100-point clouds within the unit ball in $\mathbb{R}^m$ for $m=2,3,4,5$.
For each class, we made 100 copies of each cloud and perturbed every point by a uniform random shift in a cube of the size $2\times\epsilon$, where $\epsilon$ is called a \emph{noise bound}.
For each of 100 perturbed clouds, we added 25 points such that every new point is $\epsilon$-close to an original point.
Within each of 10 classes all 100 clouds were randomly rotated within the unit ball around the origin, see Fig.~\ref{fig:clouds}.
For each of the resulting 1000 clouds, we computed the mergegram, 0D persistence diagram and the diagram of pairs of distances to two nearest neighbors for every point.
\newcommand{44mm}{44mm}
\begin{figure}[h!]
\includegraphics[height=44mm]{images/cloud0.png}
\includegraphics[height=44mm]{images/cloud0+noise+extra.png}
\includegraphics[height=44mm]{images/cloud0+noise+extra+rotation.png}
\caption{\textbf{Left}: an initial random cloud with 100 blue points.
\textbf{Middle}: all blue points are perturbed, 25 extra orange points are added.
\textbf{Right}: a cloud is rotated through a random angle.
Can we recognize that the initial and final clouds are in the same isometry class modulo small noise?}
\label{fig:clouds}
\end{figure}
The machine learning part has used the obtained diagrams as the input data for PersLay \cite{carriere2019perslay}.
Each dataset was split into learning and test subsets in ratio 4:1.
The learning loops ran by iterating over mini-batches consisting of 128 elements and going through the full dataset for a given number of epochs.
The success rate was measured on the test subset.
\medskip
The original PersLay module was rewritten in TensorFlow v2, and an RTX 2080 graphics card was used to run the experiments.
The technical concepts of PersLay are explained in \cite{carriere2019perslay}:
\begin{itemize}
\item Adam(Epochs = 300, Learning rate = 0.01)
\item Coefficients = Linear coefficients
\item Functional layer = [PeL(dim=50), PeL(dim=50, operationalLayer=PermutationMaxLayer)].
\item Operation layer = TopK(50)
\end{itemize}
The PersLay training has used the following invariants compared in Fig.~\ref{fig:100-point_clouds}:
\begin{itemize}
\item cloud : the initial cloud $A$ of points corresponds to the baseline curve in black;
\item PD0: the 0D persistence diagram $\mathrm{PD}$ for distance-based filtrations of sublevel sets in red;
\item NN(2) brown curve: for each point $a\in A$ includes distances to two nearest neighbors;
\item the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ of the SL dendrogram has the blue curve above others.
\end{itemize}
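The NN(2) invariant in the list above is straightforward to compute; the sketch below (our illustration) returns, for every point of a cloud, the pair of distances to its two nearest neighbours:

```python
def nn2_diagram(cloud, d):
    """For each point of the cloud (at least 3 points), the pair of
    distances to its two nearest neighbours under the metric d."""
    pairs = []
    for i, p in enumerate(cloud):
        dists = sorted(d(p, q) for j, q in enumerate(cloud) if j != i)
        pairs.append((dists[0], dists[1]))
    return sorted(pairs)
```

For example, for $A=\{0,1,3,7\}\subset\mathbb{R}$ the point $7$ contributes the pair $(4,6)$ of distances to its neighbours $3$ and $1$.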
\begin{figure}[t]
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 2 dimensions; noise bound, ylabel = success rate,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{mergegram}
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{PD0}
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{NN(2)}
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{cloud}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 3 dimensions; noise bound, grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim3.csv};
\end{axis}
\end{tikzpicture}
\medskip
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 4 dimensions; noise bound, ylabel = success rate,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim4.csv};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 5 dimensions; noise bound,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim5Cor.csv};
\end{axis}
\end{tikzpicture}
\caption{Success rates of PersLay in identifying isometry classes of 100-point clouds uniformly sampled in a unit ball, averaged over 5 different clouds and 5 cross-validations with 20/80 splits.
}
\label{fig:100-point_clouds}
\end{figure}
Fig.~\ref{fig:100-point_clouds} shows that the new mergegram has outperformed all other invariants on the isometry classification problem.
The 0D persistence turned out to be weaker than the pairs of distances to two neighbors.
The topological persistence has found applications in data skeletonization with theoretical guarantees \cite{kurlin2015homologically,kalisnik2019higher}.
We are planning to extend the experiments in section~\ref{sec:experiments} for classifying rigid shapes by combining the new mergegram with the 1D persistence, which can be computed in $O(n\log n)$ time for any 2D cloud of $n$ points \cite{kurlin2014fast, kurlin2014auto}.
\medskip
In conclusion, the paper has extended the 0D persistence to a stronger isometry invariant, which keeps the celebrated stability under noise that is important for applications to noisy data.
The initial C++ code for the mergegram is at https://github.com/YuryUoL/Mergegram and will be updated.
We thank all the reviewers for their valuable time and helpful suggestions.
\bibliographystyle{plainurl}
\section{Introduction}
Right-handed neutrinos ($\nu_{R}$) are one of the most intriguing
pieces to be added to the Standard Model (SM). Not only can they resolve
several problems of the SM including neutrino masses, dark matter,
and baryon asymmetry of the universe,\footnote{See, e.g., the so-called $\nu$MSM~\cite{Asaka:2005pn,Asaka:2005an}
which extends the SM by $\nu_{R}$ to incorporate neutrino masses,
dark matter, and leptogenesis simultaneously.} their singlet nature under the SM gauge symmetry also allows for
couplings to hidden or dark sectors, a feature known as the neutrino
portal to physics beyond the SM.
Among various new physics extensions built on $\nu_{R}$, a gauge
singlet scalar $\phi$ coupled exclusively to $\nu_{R}$, referred
to as the $\nu_{R}$-philic scalar, is arguably the simplest.\footnote{It has recently been shown that the $\nu_{R}$-philic scalar could
assist low-scale leptogenesis~\cite{Alanne:2018brf}.} At tree level, the $\nu_{R}$-philic scalar does not interact directly
with normal matter that consists of electrons and quarks, which implies
that it might have been well hidden from low-energy laboratory searches.
At the one-loop level, there are loop-induced couplings of $\phi$
to charged fermions, which are suppressed by neutrino masses ($m_{\nu}$)
in the framework of Type I seesaw~\cite{Minkowski:1977sc,yanagida1979proceedings,GellMann:1980vs, glashow1979future,mohapatra1980neutrino}.
The suppression can be understood from the fact that in the limit of vanishing neutrino
masses, which corresponds to vanishing couplings of the SM Higgs to
$\nu_{R}$ and left-handed neutrinos ($\nu_{L}$), the $\nu_{R}$
sector would be entirely decoupled from the SM content. As we will
show, for electrons, the loop-induced effective Yukawa coupling is
of the order of
\begin{equation}
\frac{G_{F}m_{e}m_{\nu}}{16\pi^{2}}\sim{\cal O}\left(10^{-21}\right),\label{eq:m-83}
\end{equation}
where $G_{F}$ is the Fermi constant and $m_{e}$ is the electron
mass.
Despite the small value of the loop-induced coupling, the magnitude
coincides with the sensitivity of current precision tests of gravity.
For long-range forces mediated by ultra-light bosons coupled to electrons
or quarks, experimental tests of the strong (based on the lunar laser-ranging
technology~\cite{Turyshev:2006gm}) and weak (e.g., torsion-balance
experiments~\cite{Schlamminger:2007ht,Heckel:2008hw}) equivalence
principles are sensitive to Yukawa/gauge couplings spanning from
$10^{-20}$ to $10^{-24}$. Very recently, gravitational waves from
black hole (BH) and neutron star (NS) binary mergers have been detected
by the LIGO/VIRGO collaboration~\cite{TheLIGOScientific:2017qsa,Abbott:2020khf},
providing novel methods to test theories of gravity as well as other
long-range forces~\cite{Croon:2017zcu,Baryakhtar:2017ngi,Sagunski:2017nzb,Hook:2017psm,Huang:2018pbu, Kopp:2018jom,Alexander:2018qzg,Choi:2018axi,Fabbrichesi:2019ema,Seymour:2019tir,Dror:2019uea}.
For instance, the process of BH superradiance can be used to exclude
a wide range of ultra-light boson masses~\cite{Baryakhtar:2017ngi}.
The sizable abundance of muons in NS binary systems allows us to
probe muonic forces as they could modify the orbital dynamics. It
is expected that~\cite{Dror:2019uea} current and future observations
of NS binaries are sensitive to muonic Yukawa/gauge couplings ranging
from $10^{-18}$ to $10^{-22}$ which, again, coincidentally covers
the theoretical expectation of the loop-induced coupling for muons,
$G_{F}m_{\mu}m_{\nu}/(16\pi^{2})\sim10^{-19}$.
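Both order-of-magnitude estimates are quick to verify numerically. The snippet below (our illustration in natural units, with the neutrino mass scale $m_{\nu}\sim0.1$ eV as an assumption) reproduces $\sim10^{-21}$ for electrons and $\sim10^{-19}$ for muons.

```python
import math

# Loop-induced Yukawa couplings G_F * m_l * m_nu / (16 pi^2) in GeV units
G_F = 1.166e-5            # Fermi constant, GeV^-2
m_e, m_mu = 0.511e-3, 0.1057  # charged-lepton masses, GeV
m_nu = 0.1e-9             # illustrative neutrino mass 0.1 eV in GeV

y_e = G_F * m_e * m_nu / (16 * math.pi ** 2)    # ~ 4e-21
y_mu = G_F * m_mu * m_nu / (16 * math.pi ** 2)  # ~ 8e-19
```

Both values fall squarely into the sensitivity windows of the gravity tests quoted above.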
In light of the frontiers of precision and novel tests of gravity
and gravity-like forces, it is important to perform an in-depth study
on the loop-induced interactions of the $\nu_{R}$-philic scalar,
which is the main goal of this work. We note here that in the seminal
work on majorons~\cite{Chikashige:1980ui}, similar loop-induced interactions
were computed and confronted with experimental limits in the
1980s. More recently, Ref.~\cite{Garcia-Cely:2017oco} studied majoron
decay caused by the loop-induced couplings to charged fermions. In
addition, majoron decay to photons is also possible at two-loop level~\cite{Heeck:2019guh}.
While the majoron considered in Refs.~\cite{Chikashige:1980ui, Garcia-Cely:2017oco,Heeck:2019guh}
is a pseudo-scalar boson, in this work we compute loop-induced interactions
for a generic scalar and take three lepton flavors into account, with
loop calculation details presented. The loop-induced interactions
computed in this work could be of importance in phenomenological studies
of long-range forces~\cite{Joshipura:2003jh,Grifols:2003gy,Bandyopadhyay:2006uh, GonzalezGarcia:2006vp, Nelson:2007yq,GonzalezGarcia:2008wk,Samanta:2010zh,Heeck:2010pg, Davoudiasl:2011sz, Heeck:2014zfa, Chatterjee:2015gta, Bustamante:2018mzu,Khatun:2018lzs,Wise:2018rnb, Krnjaic:2017zlz,Berlin:2016woy,Brdar:2017kbt,Smirnov:2019cae,Babu:2019iml}.
The paper is organized as follows. In Sec.~\ref{sec:basic}, we briefly
review the Type I seesaw extended by a gauge singlet scalar, and derive
the tree-level interactions for later use. In Sec.~\ref{sec:Loop},
we compute the loop-induced interactions of $\phi$ with charged fermions.
The calculation, for simplicity, is first performed assuming only
one generation of leptons and then generalized to three flavors in
Sec.~\ref{sec:3nu-1}. In Sec.~\ref{sec:Phenomenology}, we confront
the theoretical predictions to experimental limits including searches
for long-range forces of normal matter and the LIGO observations of
NS events which are sensitive to muonic couplings. We conclude in
Sec.~\ref{sec:Conclusion} and delegate some details of our calculations
to the appendix.
\section{The model\label{sec:basic}}
\subsection{Notations\label{sub:Notations}}
Throughout this paper, Weyl spinors are frequently used in our discussions
for simplicity. On the other hand, for Feynman diagram calculations,
Dirac or Majorana spinors are more convenient due to a variety of
techniques and especially many modern computation packages that have
been developed. As both will be used in this paper, it is necessary to
clarify our notations regarding Weyl spinors versus Dirac/Majorana
spinors.
All four-component Dirac/Majorana spinors in this paper are denoted
by $\psi_{X}$ with some interpretative subscripts $X$. Otherwise,
they are Weyl spinors. For instance, $\nu_{L}$ and $\ell_{R}$ are
Weyl spinors of a left-handed neutrino and a right-handed charged
lepton, respectively. In contrast to that, $\psi_{\ell}$ is a Dirac
spinor of a charged lepton containing both left- and right-handed
components.
For Weyl spinors, our notation follows the convention in Ref.~\cite{Dreiner:2008tw}.
For example, the mass and kinetic terms of $\nu_{R}$ are
\begin{equation}
M_{R}\nu_{R}\nu_{R}\equiv M_{R}\left(\nu_{R}\right)^{\alpha}\left(\nu_{R}\right)_{\alpha},\ \ \nu_{R}^{\dagger}\overline{\sigma}^{\mu}i\partial_{\mu}\nu_{R}\equiv\left(\nu_{R}^{\dagger}\right)_{\dot{\alpha}}\left(\overline{\sigma}^{\mu}\right)^{\dot{\alpha}\beta}i\partial_{\mu}\left(\nu_{R}\right)_{\beta}.\label{eq:m-13}
\end{equation}
Here and henceforth, the Weyl spinor indices $\alpha$, $\dot{\alpha}$,
$\beta$ will be suppressed.
Dirac and Majorana spinors can be built from Weyl spinors. Hence the
Dirac spinors of charged leptons and neutrinos can be written as
\begin{equation}
\psi_{\ell}=\left(\begin{array}{c}
\ell_{L}\\[2mm]
\ell_{R}^{\dagger}
\end{array}\right),\ \psi_{\nu}=\left(\begin{array}{c}
\nu_{L}\\[2mm]
\nu_{R}^{\dagger}
\end{array}\right).\label{eq:m-14}
\end{equation}
The Majorana spinor of a neutrino mass eigenstate $\nu_{i}$ (where
$i=1$, 2, 3, $\cdots$) is defined as
\begin{equation}
\psi_{i}\equiv\left(\begin{array}{c}
\nu_{i}\\[2mm]
\nu_{i}^{\dagger}
\end{array}\right).\label{eq:m-15}
\end{equation}
Note that it is self-conjugate: $\psi_{i}^{c}=\psi_{i}$. For
later convenience, some identities are listed below to convert Weyl
spinors into Dirac/Majorana spinors:
\begin{equation}
\nu_{i}\nu_{j}=\nu_{j}\nu_{i}=\overline{\psi_{i}}P_{L}\psi_{j},\ \ \nu_{i}^{\dagger}\nu_{j}^{\dagger}=\nu_{j}^{\dagger}\nu_{i}^{\dagger}=\overline{\psi_{i}}P_{R}\psi_{j},\ \ \nu_{i}^{\dagger}\overline{\sigma}^{\mu}\nu_{j}=\overline{\psi_{i}}\gamma^{\mu}P_{L}\psi_{j},\label{eq:m-18}
\end{equation}
\begin{equation}
\ell_{L}\nu_{i}=\nu_{i}\ell_{L}=\overline{\psi_{i}}P_{L}\psi_{\ell},\ \ \ell_{R}\nu_{i}=\nu_{i}\ell_{R}=\overline{\psi_{\ell}}P_{L}\psi_{i},\ \ \ell_{L}^{\dagger}\overline{\sigma}^{\mu}\nu_{i}=\overline{\psi_{\ell}}\gamma^{\mu}P_{L}\psi_{i},\label{eq:m-19}
\end{equation}
where $P_{L/R}\equiv(1\mp\gamma^{5})/2$ and $\gamma_{L}^{\mu}\equiv\gamma^{\mu}P_{L}$.
\subsection{Lagrangian}
We consider the SM extended by several right-handed neutrinos $\nu_{R}$
and a singlet scalar $\phi$. In Type I seesaw, the number of $\nu_{R}$
needs to be $\geq2$ in order to accommodate the observed neutrino
oscillation data. Let us start with one generation of leptons and
ignore the flavor structure (for the realistic case including three
generations, see Sec.~\ref{sec:3nu-1}). The Lagrangian of $\nu_{R}$
and $\phi$ reads:
\begin{equation}
{\cal L}\supset\nu_{R}^{\dagger}\overline{\sigma}^{\mu}i\partial_{\mu}\nu_{R}+\frac{1}{2}(\partial\phi)^{2}-\frac{1}{2}m_{\phi}^{2}\phi^{2}+\left[\frac{M_{R}}{2}\thinspace\nu_{R}\nu_{R}+\frac{y_{R}}{2}\thinspace\nu_{R}\nu_{R}\phi+{\rm h.c.}\right].\label{eq:m}
\end{equation}
Here we assume $\phi$ is a real scalar or pseudo-scalar field. If
it is a complex field, one can decompose it as $\phi=\phi_{r}+i\phi_{i}$
with $\phi_{r}$ and $\phi_{i}$ being real scalar and pseudo-scalar
fields respectively. To make our calculation applicable to both scalar
and pseudo-scalar cases, we allow $y_{R}$ to be a complex coupling.
The Dirac masses of leptons are generated by
\begin{equation}
{\cal L}\supset y_{\nu}\widetilde{H}^{\dagger}L\nu_{R}+y_{\ell}H^{\dagger}L\ell_{R}+{\rm h.c.},\label{eq:m-1}
\end{equation}
where $H$ is the SM Higgs doublet ($\widetilde{H}\equiv i\sigma_{2}H^{*}$),
$L=(\nu_{L},\ \ell_{L})^{T}$ is a left-handed lepton doublet, and
$\ell_{R}$ is a right-handed charged lepton. After electroweak symmetry
breaking, $\langle H\rangle=(0,\ v)^{T}/\sqrt{2}$, Eq.~(\ref{eq:m-1})
leads to the following mass terms:
\begin{equation}
{\cal L}\supset m_{D}\nu_{L}\nu_{R}+m_{\ell}\ell_{L}\ell_{R}+{\rm h.c.},\label{eq:m-2}
\end{equation}
where
\begin{equation}
m_{D}\equiv y_{\nu}\frac{v}{\sqrt{2}},\ m_{\ell}\equiv y_{\ell}\frac{v}{\sqrt{2}}.\label{eq:m-3}
\end{equation}
The Dirac and Majorana mass terms of neutrinos can be formulated as
\begin{equation}
{\cal L}_{\nu\thinspace{\rm mass}}=\frac{1}{2}(\nu_{L},\ \nu_{R})\left(\begin{array}{cc}
0 & m_{D}\\
m_{D} & M_{R}
\end{array}\right)\left(\begin{array}{c}
\nu_{L}\\
\nu_{R}
\end{array}\right),\label{eq:m-4}
\end{equation}
which then can be diagonalized by
\begin{equation}
\left(\begin{array}{c}
\nu_{L}\\
\nu_{R}
\end{array}\right)=U\left(\begin{array}{c}
\nu_{1}\\
\nu_{4}
\end{array}\right),\ U^{T}\left(\begin{array}{cc}
0 & m_{D}\\
m_{D} & M_{R}
\end{array}\right)U=\left(\begin{array}{cc}
m_{1}\\
& m_{4}
\end{array}\right).\label{eq:m-5}
\end{equation}
Here $\nu_{1}$ and $\nu_{4}$ are the light and heavy mass eigenstates
with their masses determined by
\begin{equation}
m_{1}=\frac{1}{2}\left(\sqrt{4m_{D}^{2}+M_{R}^{2}}-M_{R}\right),\ m_{4}=\frac{1}{2}\left(\sqrt{4m_{D}^{2}+M_{R}^{2}}+M_{R}\right).\label{eq:m-17}
\end{equation}
The unitary matrix $U$ is parametrized as
\begin{equation}
U=\left(\begin{array}{cc}
-i\thinspace c_{\theta} & s_{\theta}\\
i\thinspace s_{\theta} & c_{\theta}
\end{array}\right),\label{eq:m-6}
\end{equation}
where $c_{\theta}\equiv\cos\theta$, $s_{\theta}\equiv\sin\theta$,
and
\begin{equation}
\theta=\arctan\sqrt{m_{1}/m_{4}}.\label{eq:m-21}
\end{equation}
Eq.~(\ref{eq:m-6}) has been parametrized in such a way that $m_{D}$,
$M_{R}$, $m_{1}$ and $m_{4}$ are all positive numbers.
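As a quick numerical cross-check (not part of the derivation), the following sketch verifies that $U$ of Eq.~(\ref{eq:m-6}), with the masses of Eq.~(\ref{eq:m-17}) and the angle of Eq.~(\ref{eq:m-21}), indeed diagonalizes the seesaw matrix as in Eq.~(\ref{eq:m-5}); the values of $m_{D}$ and $M_{R}$ are illustrative placeholders.

```python
import math

# Numerical check of Eqs. (m-5), (m-6), (m-17) and (m-21) for one
# generation. The inputs mD and MR are illustrative only.
mD, MR = 0.1, 100.0  # Dirac and Majorana masses (same arbitrary unit)

root = math.sqrt(4 * mD**2 + MR**2)
m1, m4 = 0.5 * (root - MR), 0.5 * (root + MR)  # Eq. (m-17)

theta = math.atan(math.sqrt(m1 / m4))          # Eq. (m-21)
c, s = math.cos(theta), math.sin(theta)

U = [[-1j * c, s],                             # Eq. (m-6)
     [1j * s, c]]
M = [[0.0, mD],
     [mD, MR]]

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

UT = [[U[j][i] for j in range(2)] for i in range(2)]  # transpose, no conjugation
D = matmul(matmul(UT, M), U)

# U^T M U should equal diag(m1, m4), with both entries real and positive.
assert abs(D[0][0] - m1) < 1e-9 and abs(D[1][1] - m4) < 1e-9
assert abs(D[0][1]) < 1e-9 and abs(D[1][0]) < 1e-9
```

The factor of $i$ in the first column of $U$ is what renders both eigenvalues positive.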
\subsection{Interactions in the mass basis}
Since $\nu_{L}$ and $\nu_{R}$ are not mass eigenstates, we need
to reformulate neutrino interactions in the mass basis, i.e., the
basis of $\nu_{1}$ and $\nu_{4}$. The two bases are related by
\begin{eqnarray}
\nu_{L} & = & -i\thinspace c_{\theta}\thinspace\nu_{1}+s_{\theta}\thinspace\nu_{4}\thinspace,\label{eq:m-8}\\
\nu_{R} & = & i\thinspace s_{\theta}\thinspace\nu_{1}+c_{\theta}\thinspace\nu_{4}\thinspace.\label{eq:m-9}
\end{eqnarray}
Neutrino interactions in the original basis (chiral basis) include
gauge interactions and Yukawa interactions, summarized as follows:
\begin{equation}
{\cal L}\supset\frac{g}{2c_{W}}Z_{\mu}\nu_{L}^{\dagger}\overline{\sigma}^{\mu}\nu_{L}+\left[\frac{g}{\sqrt{2}}W_{\mu}^{-}\ell_{L}^{\dagger}\overline{\sigma}^{\mu}\nu_{L}-y_{\nu}H^{+}\ell_{L}\nu_{R}+y_{\ell}H^{-}\nu_{L}\ell_{R}+\frac{y_{R}}{2}\thinspace\nu_{R}\nu_{R}\phi+{\rm h.c.}\right],\label{eq:m-10}
\end{equation}
where $g$ is the gauge coupling of $SU(2)_{L}$ in the SM, $c_{W}$
is the cosine of the Weinberg angle, and $H^{\pm}$ is the charged
component of $H$, i.e., the Goldstone boson associated with $W^{\pm}$.
Now applying the basis transformation in Eqs.~(\ref{eq:m-8}) and
(\ref{eq:m-9}) to Eq.~(\ref{eq:m-10}), we get
\begin{equation}
{\cal L}\supset g_{Z}^{ij}Z_{\mu}\nu_{i}^{\dagger}\overline{\sigma}^{\mu}\nu_{j}+\left[g_{W}^{i}W_{\mu}^{-}\ell_{L}^{\dagger}\overline{\sigma}^{\mu}\nu_{i}-y_{\nu}^{i}H^{+}\ell_{L}\nu_{i}+y_{\ell}^{i}H^{-}\nu_{i}\ell_{R}+\frac{y_{R}^{ij}}{2}\thinspace\nu_{i}\nu_{j}\phi+{\rm h.c.}\right].\label{eq:m-10-2}
\end{equation}
Here $i$ and $j$ take either 1 or 4. The couplings $g_{Z}^{ij}$,
$g_{W}^{i}$, $y_{\nu}^{i}$, $y_{\ell}^{i}$, $y_{R}^{ij}$ are given
by the following matrices or vectors:
\begin{eqnarray}
g_{Z}^{ij} & = & \frac{g}{2c_{W}}\left(\begin{array}{cc}
c_{\theta}^{2} & ic_{\theta}s_{\theta}\\
-ic_{\theta}s_{\theta} & s_{\theta}^{2}
\end{array}\right),\ \ y_{R}^{ij}=y_{R}\left(\begin{array}{cc}
-s_{\theta}^{2} & ic_{\theta}s_{\theta}\\
ic_{\theta}s_{\theta} & c_{\theta}^{2}
\end{array}\right),\label{eq:m-11}\\
g_{W}^{i} & = & \frac{g}{\sqrt{2}}(-ic_{\theta},\ s_{\theta}),\ \ y_{\nu}^{i}=y_{\nu}(is_{\theta},\ c_{\theta}),\ \ y_{\ell}^{i}=y_{\ell}(-ic_{\theta},\ s_{\theta}).\label{eq:m-12}
\end{eqnarray}
Eq.~(\ref{eq:m-10-2}) can be straightforwardly expressed in terms
of Dirac and Majorana spinors according to Eqs.~(\ref{eq:m-18})
and (\ref{eq:m-19}):
\begin{equation}
{\cal L}\supset g_{Z}^{ij}Z_{\mu}\overline{\psi_{i}}\gamma_{L}^{\mu}\psi_{j}+\left[g_{W}^{i}W_{\mu}^{-}\overline{\psi_{\ell}}\gamma_{L}^{\mu}\psi_{i}+H^{-}\overline{\psi_{\ell}}(y_{\ell}^{i}P_{L}-y_{\nu}^{i*}P_{R})\psi_{i}+\frac{y_{R}^{ij}}{2}\thinspace\overline{\psi_{i}}P_{L}\psi_{j}\phi+{\rm h.c.}\right].\label{eq:m-20}
\end{equation}
Note that in the mass basis, $\phi$ couples to both heavy and light
neutrinos but the coupling of the latter is suppressed by $s_{\theta}$.
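The rotated coupling matrices can also be checked numerically. The sketch below (with illustrative inputs) verifies that the mass-basis Yukawa matrix of Eq.~(\ref{eq:m-11}) equals $[U^{T}{\rm diag}(0,\ y_{R})U]_{ij}$, as follows from substituting Eqs.~(\ref{eq:m-8}) and (\ref{eq:m-9}) into the $\nu_{R}\nu_{R}\phi$ term.

```python
import math

# Check that the mass-basis Yukawa matrix of Eq. (m-11) follows from
# rotating the chiral-basis coupling diag(0, y_R) with U of Eq. (m-6):
# y_R^{ij} = [U^T diag(0, y_R) U]_{ij}. Inputs are illustrative.
yR, theta = 0.3 + 0.1j, 0.02
c, s = math.cos(theta), math.sin(theta)
U = [[-1j * c, s], [1j * s, c]]

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

UT = [[U[j][i] for j in range(2)] for i in range(2)]  # transpose only
YR = matmul(matmul(UT, [[0.0, 0.0], [0.0, yR]]), U)

expected = [[-yR * s**2, 1j * yR * c * s],   # Eq. (m-11)
            [1j * yR * c * s, yR * c**2]]
for i in range(2):
    for j in range(2):
        assert abs(YR[i][j] - expected[i][j]) < 1e-12
# The vectors g_W^i and y_nu^i of Eq. (m-12) arise analogously from
# the first and second rows of U, respectively.
```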
\section{Loop-induced interactions of $\phi$ with charged leptons\label{sec:Loop}}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig/WZ_diagram}
\caption{One-loop diagrams that give rise to effective couplings of $\phi$
with charged leptons ($\ell$) or quarks ($u$, $d$). The left diagram
is computed in Eqs.~(\ref{eq:m-29})-(\ref{eq:F2}), and the right
diagram leads to a pseudo-scalar coupling (with $\gamma^{5}$), the
effect of which however is suppressed in unpolarized matter. The
diagrams are presented in the mass basis ($\nu_{i}$ and $\nu_{j}$
are mass eigenstates). For an equivalent description in the chiral
basis, see Fig.~\ref{fig:W-chiral}. \label{fig:WZ}}
\includegraphics[width=0.5\textwidth]{fig/W_diagram_chiral}
\caption{The $W^{\pm}$-mediated loop diagram in the chiral basis, which is
equivalent to the left diagram in Fig.~\ref{fig:WZ} in the mass
basis. It shows explicitly how chirality changes in the process. Since
in the chiral basis $W^{\pm}$ only couples to left-handed leptons
and $\phi$ only to $\nu_{R}$, we need two mass insertions of $m_{D}$
to connect $\nu_{L}$ and $\nu_{R}$. Two other mass insertions, $M_{R}$
and $m_{\ell}$, are also necessary due to additional requirements---see
discussions in the text. \label{fig:W-chiral}}
\end{figure}
As shown in the previous section, at tree level the scalar singlet
$\phi$ only couples to neutrinos, including light and heavy ones
in the mass basis. It does not interact with other fermions directly.
In this section, we show that one-loop corrections lead to effective
interactions of $\phi$ with charged leptons.
From Eq.~(\ref{eq:m-10-2}), it is straightforward to check that at
the one-loop level, in the unitarity gauge (which means Goldstone
boson interactions can be ignored), there are only two possible diagrams
that can connect $\phi$ to charged leptons or quarks, as shown in
Fig.~\ref{fig:WZ}. The second diagram involving the $Z$ boson actually
leads to a pseudo-scalar coupling (see calculations later on). In
unpolarized matter, pseudo-scalar interactions cannot cause significant
long-range forces~\cite{Wilczek:1982rv,Moody:1984ba} because the
Yukawa potential between two fermions is spin dependent. When averaged
over the spins, the effect of pseudo-scalar interactions
vanishes. Therefore, we will focus our discussions on the first diagram
where the external fermion lines have to be charged leptons.
The diagrams in Fig.~\ref{fig:WZ} are in the mass basis which is
technically convenient for evaluation. Nonetheless it is illuminating
to show Fig.~\ref{fig:W-chiral}, another diagram in the chiral basis
which explicitly shows how chirality changes in the process. The physical
results should be basis independent.
Fig.~\ref{fig:W-chiral} follows directly from Eq.~(\ref{eq:m-10}),
which suggests that $\phi$ only couples to $\nu_{R}$ while $W^{\pm}$
interacts with $\nu_{L}$. Therefore, two Dirac mass insertions ($m_{D}\nu_{L}\nu_{R}$
and $m_{D}\nu_{L}^{\dagger}\nu_{R}^{\dagger}$) must be introduced
to connect $\nu_{R}$ and $\nu_{L}$, or $\nu_{R}^{\dagger}$ and
$\nu_{L}^{\dagger}$. Note that the two $W^{\pm}$ vertices have to
be conjugate to each other, which implies that from the $W^{\pm}$
side, a pair of $\nu_{L}$ and $\nu_{L}^{\dagger}$ is provided. On
the other hand, the Yukawa vertex couples $\phi$ to two $\nu_{R}$'s
rather than a pair of $\nu_{R}$ and $\nu_{R}^{\dagger}$. So a Majorana
mass insertion is required to flip the lepton number and convert one
of them to $\nu_{R}^{\dagger}$. The direction of lepton-number flow
in this diagram is represented by the arrows. Note that according
to the conventions in Sec.~\ref{sub:Notations}, $\nu_{L}$ and $\nu_{R}$
have opposite lepton numbers. So for $\nu_{R}\nu_{R}\phi$, the arrow
of $\nu_{R}$ should be outgoing. In contrast to that, the arrow
of $\nu_{L}$ in the $W_{\mu}^{-}\ell_{L}^{\dagger}\overline{\sigma}^{\mu}\nu_{L}$
vertex goes inwardly. Finally, there should be a mass insertion of
$m_{\ell}\ell_{L}\ell_{R}$ on one of the external fermion lines because
it is impossible to write down an effective operator that consists
of $\phi$ and two $\ell_{L}$'s\,---\,the operator $\phi\ell_{L}\ell_{L}$
is not allowed due to electric charge conservation.
The chirality analysis in Fig.~\ref{fig:W-chiral} indicates that
the diagram would be proportional to $m_{D}{}^{2}M_{R}m_{\ell}$ if
all these masses are sufficiently small. If $M_{R}$ is much larger
than the typical scale of the loop momentum, then the propagators
of $\nu_{R}$ also contribute an additional factor of $M_{R}^{-2}$.
In this case, the diagram is expected to be proportional to $m_{D}{}^{2}M_{R}^{-1}m_{\ell}\sim m_{\nu}m_{\ell}$
where $m_{\nu}$ is the light neutrino mass.
Now let us compute the loop diagrams explicitly. Using the Dirac/Majorana
spinor representation in Eq.~(\ref{eq:m-20}), we can write down the
amplitudes of the two diagrams in Fig.~\ref{fig:WZ}:
\begin{eqnarray}
i{\cal M}_{W} & = & (i)^{3}\int\frac{d^{4}k}{(2\pi)^{4}}\overline{u(p_{2})}g_{W}^{j}\gamma_{L}^{\mu}\Delta_{j}(p_{j})\frac{y_{R}^{ji}P_{L}+y_{R}^{ji*}P_{R}}{2}\Delta_{i}(p_{i})g_{W}^{i*}\gamma_{L}^{\nu}u(p_{1})\Delta_{\mu\nu}^{W}(k),\label{eq:m-25}\\
i{\cal M}_{Z} & = & (i)^{3}\int\frac{d^{4}p_{i}}{(2\pi)^{4}}\overline{u(p_{2})}g_{Z}^{(\ell)}\gamma_{L}^{\mu}u(p_{1}){\rm tr}\left[-g_{Z}^{ij}\gamma_{L}^{\nu}\Delta_{j}(p_{j})\frac{y_{R}^{ji}P_{L}+y_{R}^{ji*}P_{R}}{2}\Delta_{i}(p_{i})\right]\Delta_{\mu\nu}^{Z}(q),\label{eq:m-26}
\end{eqnarray}
where $(i)^{3}$ comes from three vertices; $p_{1}$ and $p_{2}$
are the momenta of the upper and lower external fermion lines; $p_{i}$
and $p_{j}$ are the momenta of $\nu_{i}$ and $\nu_{j}$; $q=p_{2}-p_{1}=p_{j}-p_{i}$;
$k$ is the momentum of $W$ propagator; and $g_{Z}^{(\ell)}$ is
the gauge coupling of $Z$ to the charged fermion $\ell$. The symbol
$\Delta$ denotes propagators. For Majorana spinors in the mass basis,
their propagators have the same form as Dirac propagators:
\begin{equation}
\Delta_{i}(p)=\frac{i}{\slashed{p}-m_{i}}.\label{eq:m-24}
\end{equation}
The propagators of $W^{\pm}$ and $Z$ are gauge dependent. Most generally,
in $R_{\xi}$ gauges, they are:
\begin{eqnarray}
\Delta_{\mu\nu}^{W}(k) & = & \frac{-i}{k^{2}-m_{W}^{2}}\left[g_{\mu\nu}-\frac{k_{\mu}k_{\nu}}{k^{2}-\xi m_{W}^{2}}(1-\xi)\right],\label{eq:m-22}\\
\Delta_{\mu\nu}^{Z}(k) & = & \frac{-i}{k^{2}-m_{Z}^{2}}\left[g_{\mu\nu}-\frac{k_{\mu}k_{\nu}}{k^{2}-\xi m_{Z}^{2}}(1-\xi)\right].\label{eq:m-23}
\end{eqnarray}
The unitarity gauge corresponds to $\xi\rightarrow\infty$. Except
for the unitarity gauge, other gauges with finite $\xi$, e.g., the
Feynman-'t Hooft gauge ($\xi=1$) and the Landau gauge ($\xi=0$),
require the inclusion of Goldstone boson diagrams. The unitarity gauge,
albeit involving fewer diagrams by virtue of infinitely large masses
of Goldstone boson propagators, has a disadvantage in that the cancellation
of UV divergences is less obvious---see discussions in Sec.~\ref{sub:Cancellation-of-UV}.
Nonetheless, it is straightforward to compute $i{\cal M}_{W}$ and
$i{\cal M}_{Z}$ for general values of $\xi$.
First, let us inspect the $i{\cal M}_{Z}$ amplitude. The loop integral
of the trace part gives rise to a quantity proportional to $q^{\nu}$:
\begin{equation}
\int\frac{d^{4}p_{i}}{(2\pi)^{4}}{\rm tr}\left[\gamma_{L}^{\nu}\Delta_{j}(p_{j})P_{L/R}\Delta_{i}(p_{i})\right]\propto q^{\nu},\label{eq:m-28}
\end{equation}
which can be expected from Lorentz invariance, explained as follows.
On the left-hand side of Eq.~(\ref{eq:m-28}) there are only two independent
momenta $p_{j}=p_{i}+q$ and $p_{i}$. After $p_{i}$ is integrated
out, the only quantity that can carry a Lorentz index is $q$ so the
result is proportional to $q^{\nu}$. Now plugging this in Eq.~(\ref{eq:m-26}),
we can immediately get a $\gamma^{5}$ sandwiched between $\overline{u(p_{2})}$
and $u(p_{1})$:
\begin{eqnarray}
\overline{u(p_{2})}\slashed{q}P_{L}u(p_{1}) & = & \overline{u(p_{2})}(\slashed{p}_{2}P_{L}-P_{R}\slashed{p}_{1})u(p_{1})\nonumber \\
& = & m_{\ell}\overline{u(p_{2})}(P_{L}-P_{R})u(p_{1})\nonumber \\
& = & -m_{\ell}\overline{u(p_{2})}\gamma^{5}u(p_{1}).\label{eq:m-37}
\end{eqnarray}
Therefore, the $Z$-mediated diagram induces a pseudo-scalar coupling,
which is computed in Appendix~\ref{sec:gamma5}.
The $i{\cal M}_{W}$ amplitude can be computed by splitting the $W^{\pm}$
propagator in Eq.~(\ref{eq:m-22}) to two parts:
\begin{equation}
\Delta_{\mu\nu}^{W}(k)=-i\frac{g_{\mu\nu}-k_{\mu}k_{\nu}/m_{W}^{2}}{k^{2}-m_{W}^{2}}-i\frac{k_{\mu}k_{\nu}/m_{W}^{2}}{k^{2}-\xi m_{W}^{2}},\label{eq:m-27}
\end{equation}
where the first part does not contain $\xi$ and the second part is
important for cancellation of UV divergences. Note that when computing
Eq.~(\ref{eq:m-25}), because of the chiral projectors in $y_{R}^{ji}P_{L}+y_{R}^{ji*}P_{R}$,
the product of Dirac matrices gives
\begin{equation}
\gamma_{L}^{\mu}\frac{\slashed{p}_{j}+m_{j}}{p_{j}^{2}-m_{j}^{2}}\left[y_{R}^{ji}P_{L}+y_{R}^{ji*}P_{R}\right]\thinspace\frac{\slashed{p}_{i}+m_{i}}{p_{i}^{2}-m_{i}^{2}}\gamma_{L}^{\nu}=\gamma_{L}^{\mu}\frac{\slashed{p}_{j}m_{i}y_{R}^{ji*}+y_{R}^{ji}m_{j}\slashed{p}_{i}}{(p_{j}^{2}-m_{j}^{2})(p_{i}^{2}-m_{i}^{2})}\gamma_{L}^{\nu}.\label{eq:m-36}
\end{equation}
It implies that if $m_{i}\rightarrow0$ and $m_{j}\rightarrow0$,
the result would be zero, which agrees with our analysis in the chiral
basis.
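The split in Eq.~(\ref{eq:m-27}) is an exact algebraic identity, which can be confirmed numerically by comparing the coefficients of $g_{\mu\nu}$ and $k_{\mu}k_{\nu}$ for arbitrary sample values of $k^{2}$, $m_{W}^{2}$ and $\xi$ (illustrative numbers below):

```python
# Numerical check that the split in Eq. (m-27) reproduces the R_xi
# propagator of Eq. (m-22). We compare the coefficients of g_{mu nu}
# and of k_mu k_nu separately; the overall factor -i is dropped.
k2, mW2, xi = 3.7, 0.9, 2.5   # sample values of k^2, m_W^2 and xi

# Eq. (m-22): coefficients of g_{mu nu} and of k_mu k_nu.
full_g = 1.0 / (k2 - mW2)
full_kk = -(1.0 - xi) / ((k2 - mW2) * (k2 - xi * mW2))

# Eq. (m-27): xi-independent part plus the xi-dependent remainder.
split_g = 1.0 / (k2 - mW2)
split_kk = -1.0 / (mW2 * (k2 - mW2)) + 1.0 / (mW2 * (k2 - xi * mW2))

assert abs(full_g - split_g) < 1e-12
assert abs(full_kk - split_kk) < 1e-12
```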
With the above details being noted, we compute\footnote{We use {\tt Package-X} \cite{Patel:2015tea} to compute loop integrals
analytically and our code is available from [\url{https://github.com/xunjiexu/vR_loop}].} Eq.~(\ref{eq:m-25}) in the soft scattering limit ($q\rightarrow0$)
with the approximation of $m_{\ell}\ll m_{W}$ and obtain:
\begin{equation}
i{\cal M}_{W}=i\frac{m_{\ell}G^{ij}}{256\pi^{2}m_{W}^{2}}\left[F_{1}(m_{i},\ m_{j})+F_{2}(m_{i},\ m_{j})\right]\overline{u(p_{2})}u(p_{1})+i\lambda_{\phi\ell\ell}^{(W)}\overline{u(p_{2})}i\gamma^{5}u(p_{1}),\label{eq:m-29}
\end{equation}
where
\begin{equation}
G^{ij}\equiv g_{W}^{i*}g_{W}^{j}(m_{j}y_{R}^{ij}+m_{i}y_{R}^{ij*})=\frac{g^{2}c_{\theta}^{2}s_{\theta}^{2}}{2}\left[\begin{array}{cc}
-m_{1}(y_{R}+y_{R}^{*}) & m_{1}y_{R}^{*}-m_{4}y_{R}\\
m_{1}y_{R}-m_{4}y_{R}^{*} & m_{4}(y_{R}+y_{R}^{*})
\end{array}\right],\label{eq:m-48}
\end{equation}
and $F_{1}$ and $F_{2}$ correspond to the contributions of the first
and second parts of the $W^{\pm}$ propagator in Eq.~(\ref{eq:m-27}),
respectively. Their explicit forms are given in Appendix~\ref{sec:FF}.
The second term of Eq.~(\ref{eq:m-29}) leads to pseudo-scalar couplings
which cannot cause significant effect in unpolarized matter. Nevertheless,
we compute the loop-induced pseudo-scalar couplings in Appendix~\ref{sec:gamma5}.
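The explicit matrix form of $G^{ij}$ in Eq.~(\ref{eq:m-48}) can be checked against its definition using the couplings of Eqs.~(\ref{eq:m-11}) and (\ref{eq:m-12}); the numerical inputs in the sketch below are illustrative placeholders.

```python
import math

# Check the explicit matrix in Eq. (m-48) against the definition
# G^{ij} = g_W^{i*} g_W^{j} (m_j y_R^{ij} + m_i y_R^{ij*}).
# All numerical inputs are illustrative placeholders.
g, yR, theta = 0.65, 0.3 + 0.1j, 0.02
m = [1.0e-4, 100.0]                      # (m_1, m_4)
c, s = math.cos(theta), math.sin(theta)

gW = [g / math.sqrt(2) * (-1j * c), g / math.sqrt(2) * s]   # Eq. (m-12)
YR = [[-yR * s**2, 1j * yR * c * s],                        # Eq. (m-11)
      [1j * yR * c * s, yR * c**2]]

pref = g**2 * c**2 * s**2 / 2
expected = [[-pref * m[0] * (yR + yR.conjugate()),          # Eq. (m-48)
             pref * (m[0] * yR.conjugate() - m[1] * yR)],
            [pref * (m[0] * yR - m[1] * yR.conjugate()),
             pref * m[1] * (yR + yR.conjugate())]]

for i in range(2):
    for j in range(2):
        Gij = gW[i].conjugate() * gW[j] * (m[j] * YR[i][j]
                                           + m[i] * YR[i][j].conjugate())
        assert abs(Gij - expected[i][j]) < 1e-12
```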
We need to sum over $i$ and $j$ in Eq.~(\ref{eq:m-29}) to get a
finite and gauge independent result. There are several cancellations
involved in the summation, which are discussed in detail in Appendix~\ref{sec:Some-Cancellations}.
After a careful treatment of these cancellations, we obtain:
\begin{equation}
i{\cal M}_{W}\approx i\overline{u(p_{2})}y_{\phi\ell\ell}u(p_{1}),\label{eq:m-54}
\end{equation}
with
\begin{equation}
y_{\phi\ell\ell}=-\frac{3G_{F}m_{1}m_{\ell}{\rm Re}(y_{R})}{16\sqrt{2}\pi^{2}}.\label{eq:m-55}
\end{equation}
It implies that the loop diagram generates the effective interaction
\begin{equation}
{\cal L}\supset y_{\phi\ell\ell}\phi\overline{\psi_{\ell}}\psi_{\ell},\label{eq:m-56}
\end{equation}
where the effective coupling $y_{\phi\ell\ell}$, given in Eq.~(\ref{eq:m-55}),
is suppressed by the neutrino mass $m_{\nu}$ and the charged lepton
mass $m_{\ell}$.
\section{Generalization to three flavors\label{sec:3nu-1}}
So far we have only considered leptons of a single flavor for which
we have computed the loop-induced coupling $y_{\phi\ell\ell}$, as
given in Eq.~(\ref{eq:m-55}). Now we would like to generalize it
to the realistic scenario with three flavors.
Assuming there are three generations of $\nu_{L}$ and $\nu_{R}$,
we can express the neutrino mass terms in a similar way to Eq.~(\ref{eq:m-4})
except that now the mass matrix is interpreted as a $6\times6$
matrix:
\begin{equation}
M_{6\nu}=\left[\begin{array}{cc}
0 & m_{D}\\
m_{D}^{T} & M_{R}
\end{array}\right]_{6\times6},\label{eq:m-57-1}
\end{equation}
where $m_{D}$ and $M_{R}$ are $3\times3$ Dirac and Majorana mass
matrices respectively. In principle, the number of right-handed neutrinos
does not have to be three. It can be two or more. But to make it concrete,
let us concentrate on the case with three $\nu_{L}$ plus three $\nu_{R}$.
The neutrino mass terms and Yukawa terms are formulated as:
\begin{equation}
{\cal L}\supset\frac{1}{2}(\nu_{L}^{T},\ \nu_{R}^{T})M_{6\nu}\left(\begin{array}{c}
\nu_{L}\\
\nu_{R}
\end{array}\right)+\frac{1}{2}\phi\nu_{R}^{T}Y_{R}^{0}\nu_{R}+{\rm h.c.},\label{eq:m-84-1}
\end{equation}
where $Y_{R}^{0}$ is a $3\times3$ Yukawa coupling matrix.
A detailed analysis of this scenario is delegated to Appendix~\ref{sec:3nu-app}.
Here we simply summarize the results. In general, without any requirements
of $m_{D}$, $M_{R}$ and $Y_{R}^{0}$, the loop-induced coupling
can be numerically obtained from
\begin{equation}
y_{\phi\ell\ell}=\frac{G_{F}m_{\ell}}{64\sqrt{2}\pi^{2}}\sum_{i,\thinspace j}U_{\ell i}^{*}U_{\ell j}\left(Y_{R}M_{d}+M_{d}Y_{R}^{\dagger}\right)_{ij}F_{12}(m_{i},\ m_{j}),\label{eq:m-63}
\end{equation}
where $F_{12}$ can be computed using Eq.~(\ref{eq:m-43}), $U$ is
the full $6\times6$ mixing matrix that can diagonalize $M_{6\nu}$,
$M_{d}\equiv U^{T}M_{6\nu}U={\rm diag}(m_{1},\ m_{2},\ \cdots,\ m_{6})$
is the diagonalized form of $M_{6\nu}$, and $Y_{R}$ is the mass-basis
form of $Y_{R}^{0}$:
\begin{equation}
Y_{R}\equiv U^{T}{\rm diag}(0_{3\times3},\ Y_{R}^{0})U.\label{eq:m-91}
\end{equation}
If $M_{R}$ and $Y_{R}^{0}$ can be simultaneously diagonalized\footnote{Such a feature could arise from flavor symmetries, see models in Refs.~\cite{Smirnov:2018luj,Rodejohann:2017lre,Rodejohann:2015hka}.},
then without loss of generality, we can assume $M_{R}$ and $Y_{R}^{0}$
are diagonal. Under this assumption, the result can be further simplified
to
\begin{equation}
y_{\phi\ell\ell}\approx-\frac{3G_{F}m_{\ell}}{32\sqrt{2}\pi^{2}}\left[m_{D}(Y_{R}^{0}+Y_{R}^{0\dagger})M_{R}^{-1}m_{D}^{\dagger}\right]_{\ell\ell}.\label{eq:m-90}
\end{equation}
Eq.~(\ref{eq:m-90}) can also be expressed in the Casas-Ibarra parametrization~\cite{Casas:2001sr}:
\begin{equation}
y_{\phi\ell\ell}\approx-\frac{3G_{F}m_{\ell}}{32\sqrt{2}\pi^{2}}\left[U_{L}^{*}\sqrt{m_{\nu}^{d}}R^{T}(Y_{R}^{0}+Y_{R}^{0\dagger})R^{*}\sqrt{m_{\nu}^{d}}U_{L}^{T}\right]_{\ell\ell},\label{eq:m-92}
\end{equation}
where $U_{L}$ is the PMNS matrix, $m_{\nu}^{d}={\rm diag}(m_{1},\ m_{2},\ m_{3})$,
and $R$ is a complex orthogonal matrix ($RR^{T}=1$), which is determined
by $m_{D}=iU_{L}^{*}\sqrt{m_{\nu}^{d}}R^{T}\sqrt{M_{R}}$ in
the Casas-Ibarra parametrization. Note that our convention of $U_{L}$
is chosen in the way that $U_{L}^{T}m_{\nu}U_{L}=m_{\nu}^{d}$ for
$m_{\nu}\equiv-m_{D}M_{R}^{-1}m_{D}^{T}$.
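As a minimal consistency check of these conventions, the following sketch verifies in one generation (where $R$ reduces to a sign and $U_{L}$ to a phase) that the Casas-Ibarra form of $m_{D}$ reproduces $U_{L}^{T}m_{\nu}U_{L}=m_{\nu}^{d}$ for $m_{\nu}=-m_{D}M_{R}^{-1}m_{D}^{T}$; the numerical inputs are illustrative only.

```python
import math

# One-generation consistency check of the seesaw convention
# m_nu = -m_D M_R^{-1} m_D^T together with U_L^T m_nu U_L = m_nu^d.
# In one generation the Casas-Ibarra relation reduces to
# m_D = i U_L^* sqrt(m1) R sqrt(M_R) with R = +/-1.
MR, m1 = 100.0, 1.0e-4
UL = -1j          # cf. the (1,1) entry of U in Eq. (m-6) as theta -> 0

for R in (+1.0, -1.0):
    mD = 1j * UL.conjugate() * math.sqrt(m1) * R * math.sqrt(MR)
    m_nu = -mD * mD / MR                     # one-generation seesaw
    assert abs(UL * m_nu * UL - m1) < 1e-12  # U_L^T m_nu U_L = m_nu^d > 0
```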
\section{Phenomenology\label{sec:Phenomenology}}
The loop-induced interaction of $\phi$ with electrons leads to a
Yukawa potential between two objects containing $N_{1}$ and $N_{2}$
electrons,
\begin{equation}
V(r)=-\frac{y_{\phi ee}^{2}N_{1}N_{2}}{4\pi r}e^{-m_{\phi}r}.\label{eq:m-76}
\end{equation}
The effective Yukawa coupling $y_{\phi ee}$ is of order $G_{F}m_{e}m_{\nu}/(16\pi^{2})\sim{\cal O}(10^{-21})$,
which reaches the current sensitivity of long-range force searches.
If we replace electrons with muons, the effective coupling is generally
two orders of magnitude larger because $m_{\mu}/m_{e}\approx200$.
The muonic long-range force can be tested in binary systems of neutron
stars (NS), in which muons make up ${\cal O}(0.1\sim1)\%$ of the total
mass~\cite{Pearson:2018tkr}. In particular, the recent gravitational
wave observations of NS binary mergers by the LIGO collaboration~\cite{TheLIGOScientific:2017qsa,Abbott:2020khf}
are able to test the muonic force with unprecedented sensitivity.
As indicated by Eq.~(\ref{eq:m-63}), the value of $y_{\phi\ell\ell}$
depends on neutrino masses and the Yukawa couplings of $\phi$ to
$\nu_{R}$. Since there are many free parameters in $Y_{R}$ and
$M_{\nu}$ (where Majorana phases, the Dirac CP phase, the lightest
neutrino mass are still unknown), we would like to simply parametrize
$y_{\phi\ell\ell}$ as follows:
\begin{eqnarray}
y_{\phi ee} & = & \frac{3G_{F}m_{e}Y_{R}^{(e)}m_{\nu}^{(e)}}{16\sqrt{2}\pi^{2}}\approx8.0\times10^{-22}\thinspace Y_{R}^{(e)}\left(\frac{m_{\nu}^{(e)}}{0.01\ {\rm eV}}\right),\label{eq:m-81}\\
y_{\phi\mu\mu} & = & \frac{3G_{F}m_{\mu}Y_{R}^{(\mu)}m_{\nu}^{(\mu)}}{16\sqrt{2}\pi^{2}}\approx5.0\times10^{-19}\thinspace Y_{R}^{(\mu)}\left(\frac{m_{\nu}^{(\mu)}}{0.03\ {\rm eV}}\right),\label{eq:m-80}
\end{eqnarray}
where $Y_{R}^{(e)}$ and $Y_{R}^{(\mu)}$ account for the suppression
caused by the original Yukawa couplings if they are not of ${\cal O}(1)$,
while $m_{\nu}^{(e)}$ and $m_{\nu}^{(\mu)}$ account for the suppression
due to neutrino masses. In the limit of $Y_{R1}=Y_{R2}=Y_{R3}$ and
$U_{{\rm PMNS}}^{*}=U_{{\rm PMNS}}$, $m_{\nu}^{(e)}$ would be identical
to the neutrino mass matrix element responsible for neutrinoless double
beta decay (often denoted as $m_{ee}$ in the literature). But in
general, they are different. Since $Y_{R}^{(e)}m_{\nu}^{(e)}$ and
$Y_{R}^{(\mu)}m_{\nu}^{(\mu)}$ depend on a lot of unknown fundamental
parameters, it is possible that the Majorana phases and other free
parameters conspire in such a way that $Y_{R}^{(e)}m_{\nu}^{(e)}=0$
while $Y_{R}^{(\mu)}m_{\nu}^{(\mu)}$ is not suppressed or vice versa,
analogous to the well-known fact that $m_{ee}$ for neutrinoless double
beta decay can vanish in the normal mass ordering.
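The numerical prefactors quoted in Eqs.~(\ref{eq:m-81}) and (\ref{eq:m-80}) follow from the coefficient $3G_{F}m_{\ell}m_{\nu}/(16\sqrt{2}\pi^{2})$ of Eq.~(\ref{eq:m-55}), as the sketch below reproduces using standard values of $G_{F}$ and the lepton masses:

```python
import math

# Reproduce the numerical prefactors of Eqs. (m-81) and (m-80) from
# y = 3 G_F m_ell m_nu / (16 sqrt(2) pi^2), cf. Eq. (m-55).
GF = 1.1663787e-5                    # Fermi constant in GeV^-2
me, mmu = 0.51099895e-3, 0.1056584   # lepton masses in GeV

def y_eff(m_ell, m_nu_eV):
    m_nu = m_nu_eV * 1e-9            # eV -> GeV
    return 3 * GF * m_ell * m_nu / (16 * math.sqrt(2) * math.pi**2)

assert abs(y_eff(me, 0.01) / 8.0e-22 - 1) < 0.02   # Eq. (m-81)
assert abs(y_eff(mmu, 0.03) / 5.0e-19 - 1) < 0.02  # Eq. (m-80)
```

The factor $\approx200^{2}/3$ between the two prefactors reflects the ratio $m_{\mu}m_{\nu}^{(\mu)}/(m_{e}m_{\nu}^{(e)})$.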
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig/y_ee}
\caption{The effective Yukawa coupling of $\phi$ to $e$, compared with
experimental limits. The predictions of our model (red) are evaluated
according to Eq.~(\ref{eq:m-81}) with $m_{\nu}^{(e)}=0.01$ eV. The
experimental limits come from the E\"ot-Wash torsion-balance tests
of the equivalence principle (blue)~\cite{Wagner:2012ui}, tests
of gravitational inverse-square law (orange)~\cite{Adelberger:2009zz},
lunar laser-ranging (LLR, green) measurements~\cite{Wagner:2012ui,Turyshev:2006gm},
and black hole superradiance (hatched bands)~\cite{Baryakhtar:2017ngi}.\label{fig:Yee}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig/y_mumu}
\caption{The effective Yukawa coupling of $\phi$ to $\mu$, compared with
experimental limits. The predictions of our model (red) are evaluated
according to Eq.~(\ref{eq:m-80}) with $m_{\nu}^{(\mu)}=0.03$ eV.
The muonic force could be probed in binary systems of neutron stars
(NS) due to the considerable abundance of muons. The blue and green
curves represent current sensitivity of the LIGO observations of GW170817
(NS-NS merger) and GW190814 (NS-BH merger) events, respectively. Solid
(dashed) curves take conservative (optimistic) estimates of the muon
abundance~\cite{Dror:2019uea}. In addition, precision measurements
of binary pulsar systems are also sensitive to the muonic force (orange
curves)~\cite{Dror:2019uea}. \label{fig:Ymumu}}
\end{figure}
Next, we shall confront the theoretical predictions with experimental
limits, as shown in Figs.~\ref{fig:Yee} and \ref{fig:Ymumu} for
$y_{\phi ee}$ and $y_{\phi\mu\mu}$ respectively.
For $y_{\phi ee}$, current limits come from long-range force searches
of normal matter, which have long been investigated in precision
tests of gravity, in particular, in tests of the equivalence principle.
The Yukawa force mediated by $\phi$ can affect the former by
contributing an exponential term to the total force and affect the
latter due to its leptophilic coupling, which causes differential
free-fall accelerations for different materials. So far, the E\"ot-Wash
torsion-balance experiment has performed tests of the weak equivalence
principle with the highest precision~\cite{Schlamminger:2007ht,Heckel:2008hw},
leading to the most stringent constraint on $y_{\phi ee}$ in the
regime of very small $m_{\phi}$. In addition, the lunar laser-ranging
(LLR) technology which is able to measure the varying distance between
the moon and the earth to high precision using laser pulses is also
sensitive to new long-range forces~\cite{Turyshev:2006gm}. These
two bounds, reviewed in Ref.~\cite{Wagner:2012ui}, are presented
in Fig.~\ref{fig:Yee} and overlap with the theoretically most favored
region (red lines).
For larger masses, $y_{\phi ee}$ is constrained by tests of the inverse-square
law of gravity~\cite{Adelberger:2006dh,Adelberger:2009zz}, the Casimir
effect~\cite{Bordag:2001qi}, stellar cooling processes~\cite{Davidson:2000hf,Redondo:2013lna},
$N_{{\rm eff}}$ in cosmology~\cite{Boehm:2012gr, Kamada:2015era, Huang:2017egl,Kamada:2018zxi,Luo:2020sho},
supernovae~\cite{Choi:1987sd,Choi:1989hi,Kachelriess:2000qc,Hannestad:2002ff,Farzan:2002wx,Dent:2012mx,Dreiner:2013mua},
neutrino scattering~\cite{Bilmis:2015lja,Lindner:2016wff,Farzan:2018gtr,Lindner:2018kjo,Khan:2019jvr,Link:2019pbm},
etc. But all these bounds are significantly higher than the largest
expected values of $y_{\phi ee}$---see Ref.~\cite{Heeck:2014zfa}
for a recent compilation of these bounds.
In Fig.~\ref{fig:Yee} (also Fig.~\ref{fig:Ymumu}), we add hatched
bands to represent the constraint from black hole superradiance~\cite{Baryakhtar:2017ngi},
which is independent of the Yukawa couplings because the effect is
caused by $\phi$ coupling to the spacetime.
For $y_{\phi\mu\mu}$, the aforementioned laboratory constraints do
not apply since normal matter does not contain muons. Neutron stars,
however, can be a powerful probe of muonic forces due to a significant
abundance of muons in them, which is expected when the Fermi energy
exceeds the muon mass. According to the calculations in Refs.~\cite{Pearson:2018tkr,Dror:2019uea},
the number density of muons is typically ${\cal O}(1\sim10)\%$
of the total number density, which is lower than but still comparable
to the electron number density\footnote{See Fig.~23 in Ref.~\cite{Pearson:2018tkr} and Fig.~3 in Ref.~\cite{Dror:2019uea}.
In the former, the number densities of protons and electrons are presented.
Assuming charge neutrality of the NS, the difference between proton
and electron number densities is approximately the muon number density.
The latter needs to be converted from mass ratios to number density
ratios by multiplying a factor of $m_{\mu}/m_{n}$ where $m_{n}$
is the neutron mass.}.
In fact, since the electron and muon number densities are of the same
order of magnitude while $y_{\phi\ell\ell}\propto m_{\ell}$, for
NS binaries we have
\begin{equation}
F^{(\mu)}\sim\left(\frac{m_{\mu}}{m_{e}}\right)^{2}F^{(e)}\gg F^{(e)},\label{eq:m-82}
\end{equation}
where $F^{(\mu)}$ and $F^{(e)}$ are the forces caused by muons and
electrons respectively.
The recent observations of NS-NS and NS-BH mergers by the LIGO collaboration
provide very promising data to probe the muonic force in this model.
For a NS-NS merger, the effect of $\phi$ is two-fold~\cite{Kopp:2018jom}.
First, the attractive force affects the orbital dynamics in a classical
way, i.e., modifying Kepler's law when $r\sim m_{\phi}^{-1}$.
Second, since $\phi$ is an ultra-light boson, there is radiation
of $\phi$ due to the rotating dipole, which causes extra energy
loss. For a NS-BH merger, only the effect of $\phi$ radiation is
relevant. An in-depth analysis of the sensitivity to muonic forces
based on the recent two events GW170817 (NS-NS merger) and GW190814
(NS-BH merger) has been performed in Ref.~\cite{Dror:2019uea}. Their
results have been incorporated in Fig.~\ref{fig:Ymumu}, where solid
(dashed) curves are derived using a conservative (optimistic) estimate
of the muon abundance. For GW170817, the sensitivity curves of the
two effects are evaluated and presented separately. The first effect
(orbital dynamics) is more sensitive than the second when $m_{\phi}$
is in the large-mass ($10^{-12}\sim10^{-10}$ eV) regime.
In addition to binary mergers, precision measurements of binary
pulsars can also be sensitive to muonic forces~\cite{Poddar:2019wvu,Dror:2019uea}.
As shown in Fig.~\ref{fig:Ymumu}, the LIGO curves cross the red
lines of $Y_{R}^{(\mu)}=10^{-3}\sim1$, which implies that the loop-induced
muonic force in this model could be probed in the theoretically most
favored regime. Future experiments such as the Einstein Telescope\footnote{See the ET conceptual design document:~\url{https://tds.virgo-gw.eu/?call_file=ET-0106C-10.pdf}.}
and Cosmic Explorer~\cite{Reitze:2019iox} can substantially improve
the sensitivity to muonic forces and thus have great potential of
probing this scenario.
\section{Conclusions and Discussions\label{sec:Conclusion}}
The $\nu_{R}$-philic scalar model naturally gives rise to extremely
small couplings of charged leptons to a long-range force mediator
via loop-level processes. The small values of the loop-induced couplings
coincidentally meet the current sensitivity of long-range force searches
in laboratories and in astrophysical observations such as the recent
detection of GW from NS mergers by LIGO, as we have shown in Figs.~\ref{fig:Yee}
and \ref{fig:Ymumu}.
In this model, loop-induced couplings to quarks also exist, due to
the $Z$-mediated diagram in Fig.~\ref{fig:WZ}. However, our calculation
shows that only pseudo-scalar couplings are generated in this case,
the effect of which is suppressed in unpolarized matter.
Our loop calculation result for the most general three-flavor case
is given by Eq.~(\ref{eq:m-63}) which, though involving diagonalization
of the full $6\times6$ mass matrix, can be numerically evaluated.
For the special case where $M_{R}$ and $Y_{R}^{0}$ can be simultaneously
diagonalized, the result can be further simplified to Eq.~(\ref{eq:m-92}),
where the dependence on the PMNS matrix is manifestly extracted.
Our results can also be used to obtain loop-induced interactions for
other similar models that contain the diagrams in Fig.~\ref{fig:WZ},
via proper replacements of the couplings in vertices and masses in
propagators. However, one caveat should be noted here that incomplete
models where the tree-level couplings of $\phi$ to light neutrino
mass eigenstates are not governed by the active-sterile neutrino mixing
would lead to gauge dependent results.
\begin{acknowledgments}
We thank Andreas Trautner and Toby Opferkuch for useful discussions.
\end{acknowledgments}
\section{Introduction}
Lars Hedin's $GW$ method~\cite{Hedin} is an approximate
treatment of the propagation of electrons in condensed matter
where an electron interacts with itself via a Coulomb
interaction that is screened by virtual
electron-hole pairs.
In periodic semiconductors, the $GW$ approximation is known to
lead to surprisingly accurate gaps \cite{Schilfgaarde-Kotani-Faleev:2006},
while for finite clusters and molecules it provides
qualitatively correct values of ionization
energies and electron affinities \cite{LouieRohlfing}.
Hedin's $GW$ approximation is also needed, as a first step,
when using the Bethe-Salpeter equation to find the optical properties
of systems in which the Coulomb interaction is only weakly screened.
The present work is motivated by the rapid progress, during
the last decade, in the field of organic semiconductors, especially
in organic photovoltaics
and organic luminescent diodes~\cite{OrganicsReview}.
To optimize such systems, it would be useful to know the key
properties of their molecular constituents
before actually synthesizing them.
In order to make such predictions it is necessary to develop algorithms with
a favorable complexity scaling, since many of the technologically relevant
molecules are fairly large.
The method presented here is a step forward along this direction.
Its $O(N^3)$ scaling, with $N$ the number of atoms in the molecule,
is an improvement over most existing methodologies.
While computational techniques for treating the $GW$
approximation for clusters and molecules have become
sophisticated enough for treating molecules of interest in
photovoltaics \cite{Blase} or in the physiology of vision
\cite{RohlfingRetinol}, such calculations remain computationally expensive.
The scaling of these recent calculations with the number of atoms
has not been published. However, in many cases it is unlikely to
be better than $O(N^4)$~\cite{private-comm-scaling}.
A recently published method for computing total energies of molecules that uses
the random phase approximation also has $O(N^4)$ scaling~\cite{Furche}.
Actually, at this point it is difficult for us to envisage
a scaling exponent less than three because the construction
of the screened Coulomb interaction ---
the central element of the $GW$ approach --- requires
inverting a matrix of size $O(N)$ which, in general,
takes $O(N^3)$ operations.
The algorithm described in this paper
is based on two main ingredients:
\textit{i}) respecting the locality of the underlying
interactions and, \textit{ii}) the use of
spectral functions to describe the frequency/time
dependence of the correlators. The latter ingredient allows
for the use of the fast Fourier transform (FFT) to accelerate
the calculations, while the former idea of respecting locality
has also been at the heart of other efficient $GW$ methods, like
the successful ``space-time approach'' for periodic systems~\cite{Godby}.
Our method is based upon the use of spatially localized basis
sets to describe the electronic states within
the linear combination of atomic orbitals (LCAO) technique.
In particular, we have implemented our method as a post-processing
tool of the SIESTA code~\cite{siesta}, although
interfaces with other LCAO codes should be simple to construct.
The precision of the LCAO
approach is difficult to control and to improve,
but a basis of atom-centered local orbitals is useful for systems
that are too large to be
treated by plane-wave methods~\cite{reviewPayne}.
In order to solve Hedin's equations we construct a basis set
that gets rid of the overcompleteness of the orbital products
while keeping locality.
In molecular computations this is frequently done
through a fitting procedure (using Gaussians or
other localized functions). We use an alternative
mathematical procedure~\cite{DF,DF+PK} that dispenses
with this fitting and defines a basis of \textit{dominant products}.
The basis of dominant products was
instrumental to develop an efficient linear response code
for molecular absorption spectra \cite{PK+DF+OC,Licence}.
In the present paper we have developed
an additional, non local
compression technique that further reduces
the size of the product basis. The compression makes it possible to store the whole
matrix representation of the screened Coulomb interaction at all times/frequencies
while needing much less memory.
Moreover, the compression strongly accelerates the
calculation of the screened Coulomb interaction
because it involves a matrix inversion. This leads
to a gain in computational efficiency
which is even more important than that associated
with the reduction of the needed memory.
Of course there are other methods that use a localized basis different
from LCAO and, thus, equally appropriate for dealing with clusters and
molecules while exploiting locality. One method uses a lattice in real
space~\cite{RealSpaceChelikowsky}. Another method uses wavelets that
represent a useful compromise between localized and extended
states~\cite{BigDFT}. Localized Wannier orbitals obtained from
transforming plane waves~\cite{Wannier} have also been used in
$GW$ calculations~\cite{Baroni$GW$,DanishWannierMolecules}.
In this paper we use a basis of
dominant functions to span the space of products of atomic orbitals \cite{DF}
and we use a compression scheme to deal with
the screened interaction.
It is clear, however, that some of the ideas and techniques of the present
paper can be combined with the alternative approaches quoted above.
The actual implementation of the algorithm that we report in this
paper can be considered a ``proof of principle'' only, and
its prefactor leaves room for further improvement.
Therefore, we validate our method with molecules of moderate size
(we consider molecules of only up to three aromatic rings: benzene,
naphthalene and anthracene), leaving further improvements and
applications to molecules of larger sizes for a future publication.
This paper is organized as follows.
In section \ref{s:elementary-aspects-$GW$}
we recall the equations of the $GW$ approximation.
In section \ref{s:tensor-form} we rewrite the $GW$ approximation
for molecules in tensorial form.
Section \ref{s:instant-scr-inter} describes the instantaneous component of the self-energy,
while in section \ref{s:sf-techniques}
we describe a spectral function technique for solving these tensorial equations.
Section \ref{s:results-1} describes our $GW$ results for benzene, a typical small molecule.
Section \ref{s:compression} describes our algorithm for the compression of the screened Coulomb interaction
that is needed to treat larger molecules.
Section \ref{s:maintain-n3} explains how $O(N^{3})$ scaling can be maintained for
large molecules by alternatively compressing and decompressing the Coulomb interaction.
Section \ref{s:algorithm-summary} presents a summary of the entire algorithm for performing $GW$ calculations.
In section \ref{s:results-2} we test our method on naphthalene and anthracene, and our conclusions are presented
in section \ref{s:conclusion}.
\section{Elementary aspects of Hedin's $GW$ approximation}
\label{s:elementary-aspects-$GW$}
The one-electron Green's function of a many-body system has proved to be
a very useful concept in condensed matter theory. It allows one to compute
the total energy, the electronic density and other quantities arising from one-particle operators.
The one-electron Green's function $G(\bm{r},\bm{r}',t)$
has twice as many spatial arguments as the electronic density but it remains
a far less complex object than the many-body wave function.
Furthermore, Hedin has found an exact set of equations for a finite
set of correlation functions of which the one-electron Green's function
is the simplest element. This set of equations has not been solved so far for
any system whatsoever.
However, as a zeroth order starting point to his coupled equations,
Hedin suggested the very successful $GW$ approximation for the
self-energy
$\Sigma(\bm{r},\bm{r}',t)$. This approximation describes the change
of the non interacting electron propagator $G_0(\bm{r},\bm{r}',t)$ due
to interactions among the electrons. With the help of a self-energy, one can
find the interacting Green's function from Dyson's equation
\begin{equation}
G=G_0+G_0\Sigma G_0+G_0\Sigma G_0\Sigma G_{0}+\ldots=\frac{1}{G_0^{-1}-\Sigma}, \label{Dyson}
\end{equation}
where products and inversions in the equation must be understood in an operator sense as
required in many-body perturbation theory.
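Although eq (\ref{Dyson}) is an operator statement, in any finite basis it reduces to ordinary matrix algebra. The following minimal sketch (the $2\times 2$ matrices are arbitrary stand-ins, not taken from any calculation in this paper) checks numerically that summing the geometric series reproduces the closed form $(G_0^{-1}-\Sigma)^{-1}$:

```python
import numpy as np

# Arbitrary 2x2 stand-ins for G0 and Sigma; any pair with the spectral
# radius of (Sigma G0) below one makes the series converge.
G0 = np.array([[0.5, 0.1], [0.1, 0.4]], dtype=complex)
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]], dtype=complex)

# Closed form of Dyson's equation: G = (G0^{-1} - Sigma)^{-1}.
G_closed = np.linalg.inv(np.linalg.inv(G0) - Sigma)

# Partial sums of the series G0 + G0 Sigma G0 + G0 Sigma G0 Sigma G0 + ...
G_series = np.zeros_like(G0)
term = G0.copy()
for _ in range(200):
    G_series += term
    term = term @ Sigma @ G0
```

Note that convergence of the series requires a weak enough self-energy, while the closed form on the right-hand side of eq (\ref{Dyson}) does not.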
In Hedin's $GW$ approximation, the interaction of electrons with themselves
is taken into account by the following self-energy
\begin{equation}
\Sigma (\bm{r},\bm{r}',t)=\mathrm{i}G_{0}(\bm{r},\bm{r}',t)W(\bm{r},\bm{r}',t),
\label{self-energy}
\end{equation}%
where $W(\bm{r},\bm{r}',t)$ is a screened Coulomb interaction.
The key idea of Hedin's $GW$ approximation \cite{Hedin} is to incorporate
the screening of the Coulomb interaction from the very beginning in
a zeroth order approximation.
Let $v(\bm{r},\bm{r}')=|\bm{r}-\bm{r}'|^{-1}$ be the bare Coulomb interaction and let
$\chi_0(\bm{r},\bm{r}',t-t')=\frac{\delta n(\bm{r},t)}{\delta V(\bm{r}',t')}$
be the density response $\delta n(\bm{r},t)$ of non interacting electrons with
respect to a change of the external potential $\delta V(\bm{r}',t')$.
Hedin then replaces the original Coulomb interaction $v(\bm{r},\bm{r}')$
by the screened Coulomb interaction $W(\bm{r},\bm{r}', \omega)$ within
the random phase approximation (RPA)~\cite{RPA}
\begin{equation}
W(\bm{r},\bm{r}',\omega)=\frac{1}{\delta(\bm{r}-\bm{r}''')-
v(\bm{r},\bm{r}'')\chi_0(\bm{r}'',\bm{r}''',\omega)}v(\bm{r}''',\bm{r}'),
\label{Coulombscreening}
\end{equation}
where, here and in the following, we assume integration over
repeated spatial coordinates (in our case $\bm{r}''$ and $\bm{r}'''$) on the right hand
side of an equation
if they do not appear on its left hand side. This convention makes our equations more
transparent without introducing ambiguities and it is analogous to the familiar
Einstein convention of summing over repeated indices.
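In a finite basis, eq (\ref{Coulombscreening}) becomes a single linear solve. The sketch below uses random symmetric stand-ins for the matrices of $v$ and $\chi_0$ (none of the values are physical) and checks two exact consequences of the formula: the symmetry of $W$ and the identity $W = v + v\chi_0 W$, obtained by multiplying eq (\ref{Coulombscreening}) through by $1 - v\chi_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

v = rng.normal(size=(n, n))
v = v @ v.T                      # symmetric positive stand-in for v
A = rng.normal(size=(n, n))
chi0 = 0.001 * (A + A.T)         # weak symmetric stand-in for chi0(omega)

# W = (1 - v chi0)^{-1} v, the matrix form of the RPA screening.
W = np.linalg.solve(np.eye(n) - v @ chi0, v)
```

The linear solve, rather than an explicit inverse, is the numerically preferred route; either way the cost is the $O(N^3)$ inversion mentioned in the introduction.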
We can justify the expression (\ref{Coulombscreening}) by considering an internal screening
field $\delta V_{\mathrm{induced}}(\bm{r},\omega)$ that is generated by an extra external field
$\delta V_{\mathrm{external}}(\bm{r},\omega)$
\begin{equation}
\delta V_{\mathrm{total}}(\bm{r},\omega) =\delta V_{\mathrm{external}}(\bm{r},\omega)+\delta V_{\mathrm{induced}}(\bm{r},\omega),
\label{screening}
\end{equation}%
where
\begin{equation}
\delta V_{\mathrm{induced}}(\bm{r},\omega) =v(\bm{r},\bm{r}'')\delta n_{\mathrm{induced}}(\bm{r}'',\omega)
=v(\bm{r},\bm{r}'')\chi_0(\bm{r}'',\bm{r}''',\omega)\delta V_{\mathrm{total}}(\bm{r}''',\omega).
\nonumber
\end{equation}%
As a consequence we obtain a frequency dependent change of the total potential
\begin{equation}
\delta V_{\mathrm{total}}(\bm{r},\omega)=\frac{1}{\delta(\bm{r}-\bm{r}''')-
v(\bm{r},\bm{r}'')\chi_0(\bm{r}'',\bm{r'''},\omega)}\delta V_{\mathrm{external}}(\bm{r}''',\omega).
\label{RPA}
\end{equation}
If we assume that large fields are screened the same way as small field changes, then
we may replace $\delta V_{\mathrm{external}}(\bm{r},\omega)$
by the singular Coulomb interaction $v(\bm{r},\bm{r}')$
and we obtain the screened counterpart $W(\bm{r},\bm{r}',\omega)$ of the original
bare Coulomb interaction as in eq (\ref{Coulombscreening}).
Because of the relation
\begin{equation}
\mathrm{i}\chi_0(\bm{r},\bm{r}',t)=2G_{0}(\bm{r},\bm{r}',t)G_{0}(\bm{r}',\bm{r},-t)
\label{more_screening}
\end{equation}
the screening in eq (\ref{Coulombscreening}) may be interpreted as being due to
the creation of virtual electron-hole pairs.
The screening by virtual electron-hole
pairs is the quantum analogue of classical Debye
screening in polarizable media~\cite{Debyemodel}.
The factor of $2$ in eq~(\ref{more_screening}) takes
into account the summation over spins.
Many body theory uses Feynman-Dyson perturbation theory \cite{ManyBodyText}
and the latter is formulated in terms of time ordered correlators. For instance,
a Green's function is represented as a time ordered correlator
of electron creation $\psi^{+}(\bm{r},t)$ and annihilation $\psi(\bm{r},t)$ operators
\begin{equation}
\mathrm{i}G(\bm{r},\bm{r}',t-t') =
\theta (t-t')\langle 0|\psi(\bm{r},t)\psi^{+}(\bm{r}',t')|0\rangle-
\theta (t'-t)\langle 0|\psi^{+}(\bm{r}',t')\psi (\bm{r},t)|0\rangle,
\label{gf-via-creation-annihilation}
\end{equation}%
where the minus sign is due to Fermi statistics, $|0\rangle$ denotes
the electronic ground state, and where
$\theta(t)$ denotes Heaviside's step function. This completes
our formal description of Hedin's $GW$ approximation.
In practice, Hedin's equations are solved ``on top'' of
a density functional or Hartree-Fock calculation.
The framework of density functional theory (DFT)~\cite{HK,KS}
already includes electron correlations at the mean-field
level via the exchange correlation energy
$E_{\mathrm{\mathrm{xc}}}[n(\bm{r})]$, where $[n]$ denotes the functional
dependence of $E_{\mathrm{\mathrm{xc}}}$ on the electron density.
DFT calculations are usually performed using the
Kohn-Sham scheme~\cite{KS}, in which electrons move
as independent particles
in an effective potential. The Kohn-Sham Hamiltonian $H_{\mathrm{KS}}$ reads
\begin{align}
H_{\mathrm{\mathrm{KS}}} &=-\frac{1}{2}\nabla^2+V_{\mathrm{KS}}, \\
V_{\mathrm{KS}} &=V_{\mathrm{ext}}+V_{\mathrm{Hartree}}+V_{\mathrm{xc}}
\text{, where }
V_{\mathrm{xc}}(\bm{r})=\frac{\delta E_{\mathrm{xc}}}{\delta n(\bm{r})}. \notag
\end{align}%
To avoid including the interaction twice, the exchange correlation potential
$V_{\mathrm{xc}}(\bm{r})$
must be subtracted from $\Sigma(\bm{r},\bm{r}',t)$ in eq (\ref{self-energy})
when using the output of a DFT calculation as input for a $GW$ calculation.
This is done by making the replacement
\begin{equation}
\Sigma(\bm{r},\bm{r}',t) \rightarrow \Sigma(\bm{r},\bm{r}',t) -
\delta(\bm{r}-\bm{r}')\delta(t) V_{\mathrm{xc}}(\bm{r})
\end{equation}%
in Dyson's equation (\ref{Dyson}).
Our aim is to compute the electronic density of states (DOS)
that is defined as the trace of the
imaginary part of the electron propagator
\begin{equation}
\rho(\omega +\mathrm{i}\varepsilon )=-\frac{1}{\pi}
\mathrm{Im}\int G(\omega +\mathrm{i}\varepsilon,\bm{r},\bm{r})d^3r.
\end{equation}%
The electronic DOS
$\rho (\omega +\mathrm{i}\varepsilon )$ can be compared with experimental
data from direct and inverse photo-emission~\cite{PhotoEmissionExperiments}.
From it, we can read off the energy position of the highest occupied and
the lowest unoccupied molecular orbitals (HOMO and LUMO)
or, alternatively, the ionization energy and the electron affinity.
Finally, let us list the equations that define the $GW$
approximation
\begin{equation}
\begin{aligned}
\mathrm{i}\chi_0(\bm{r},\bm{r}',t) &=
2 G_0(\bm{r},\bm{r}',t)G_0(\bm{r}',\bm{r},-t); & \text{free electron response} \\
W(\bm{r},\bm{r}',\omega ) &=\left[\delta(\bm{r}-\bm{r}''')
-v(\bm{r},\bm{r}'')\chi_0(\bm{r}'',\bm{r}''',\omega )\right]^{-1}
v(\bm{r}''',\bm{r}'); & \text{RPA screening} \\
\Sigma (\bm{r},\bm{r}',t) &=
\mathrm{i}G_0(\bm{r},\bm{r}',t)W(\bm{r},\bm{r}',t); &\text{$GW$ self-energy} \\
G^{-1}(\bm{r},\bm{r}',\omega +\mathrm{i}\varepsilon ) &=G_0^{-1}(\bm{r},\bm{r}',
\omega +\mathrm{i}\varepsilon )-\Sigma (\bm{r},\bm{r}',\omega +\mathrm{i}\varepsilon ).
&\text{Dyson equation}
\end{aligned}
\label{all_GW_equations}
\end{equation}%
The next two sections describe
the tensor form of equations (\ref{all_GW_equations}) as well as
the main ingredients of our
implementation of the $GW$ approximation as embodied
in equations (\ref{all_GW_equations}) for
the case of small molecules. Later sections will describe the compression/decompression
of the Coulomb interaction that is needed for treating
large molecules without overflowing the computer memory.
\section{Tensor form of Hedin's equations}
\label{s:tensor-form}
In order to compute the non interacting Green's function (\ref{gf-via-creation-annihilation}),
we will use the LCAO method
where one expresses the electron operator in terms of a set of
fermions $c_{a}(t)$ that belong to localized atomic orbitals \cite{Fulde}
\begin{equation}
\psi(\bm{r},t)\sim \sum_{a}f^{a}(\bm{r})c_{a}(t). \label{Fulde}
\end{equation}%
Such a parametrization is parsimonious in the number of degrees of freedom,
although its quality is difficult to control and to improve in a systematic way.
The output of a DFT calculation (that serves as input for the $GW$ calculation)
is the Kohn-Sham Hamiltonian $H^{ab}$ and the overlap matrix $S^{ab}$ of
the LCAO basis functions $f^{a}(\bm{r})$ \cite{Martin}.
One may use the eigenvectors $\{X_{a}^{E}\}$ of the Kohn-Sham Hamiltonian
\begin{equation}
H^{ab}X_{b}^{E}=ES^{ab}X_{b}^{E} \label{KS_equation}
\end{equation}%
to express the (time-ordered) propagation of electrons between localized atomic orbitals
\begin{equation}
G_{ab}^0(\omega \pm \mathrm{i} \varepsilon)= \sum_E \frac{X_a^E X_b^E}
{\omega \pm \mathrm{i} \varepsilon -E}. \label{local_propagator}
\end{equation}
In this paper we measure energies relative to a Fermi energy, so that
$E<0$ $(E>0)$ refers to occupied (empty) states, respectively,
and the infinitesimal constant $\varepsilon$ shifts the poles of the Green's function
away from the real axis. Moreover, to avoid cluttering up the notation, we will often use
Einstein's convention of summing over repeated indices, as in eq (\ref{KS_equation}).
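As an illustration of eq (\ref{local_propagator}), the sketch below builds $G^0_{ab}(\omega+\mathrm{i}\varepsilon)$ from the eigenpairs of the generalized problem (\ref{KS_equation}), using small random symmetric stand-ins for $H^{ab}$ and $S^{ab}$. The L\"owdin orthogonalization used to solve the generalized eigenproblem is our choice for this sketch, not necessarily how any particular code proceeds:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)                       # stand-in Kohn-Sham Hamiltonian H^{ab}
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)               # positive-definite stand-in overlap S^{ab}

# Solve H X = E S X via Loewdin orthogonalization S^{-1/2} H S^{-1/2}.
s_eig, U = np.linalg.eigh(S)
S_half_inv = U @ np.diag(1.0 / np.sqrt(s_eig)) @ U.T
E, Y = np.linalg.eigh(S_half_inv @ H @ S_half_inv)
X = S_half_inv @ Y                        # columns obey X^T S X = 1

def G0(omega, eps=0.05):
    """Eq (local_propagator): sum of X_a^E X_b^E / (omega + i*eps - E)."""
    return (X / (omega + 1j * eps - E)) @ X.T
```

A quick consistency check is that $G^0$ is the resolvent of the pencil $(H,S)$, i.e. $(zS-H)\,G^0=1$ at $z=\omega+\mathrm{i}\varepsilon$.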
The set of equations (\ref{all_GW_equations}) contains correlation functions,
such as the
density response function $\chi(\bm{r}, \bm{r}', t)$
that must be represented in a basis of \textit{products of atomic orbitals}
\cite{ManyBodyText}
\begin{equation}
\mathrm{i}\chi(\bm{r},\bm{r}',t-t') =\theta(t-t')
\langle 0|n(\bm{r},t)n(\bm{r}',t')|0\rangle
+\theta(t'-t)\langle 0|n(\bm{r}',t')n(\bm{r},t)|0\rangle.
\end{equation}
Indeed, by virtue of eq (\ref{Fulde}), the electronic density
$n(\bm{r},t)=\psi^{+}(\bm{r},t)\psi(\bm{r},t)$ involves products
of atomic orbitals
\begin{equation*}
n(\bm{r},t)=\psi^{+}(\bm{r},t)\psi(\bm{r},t)=
\sum_{a,b}f^a(\bm{r})f^b(\bm{r})c_a^{+}(t)c_b(t).
\end{equation*}
The set of products $\{f^a(\bm{r})f^b(\bm{r})\}$ is well known to be strongly
linearly dependent \cite{LinearDependenceOldpaper}. As an improved solution
of this very old technical difficulty, we previously
developed an algorithm to construct a local basis of ``dominant products''
$F^{\mu}(\bm{r})$ that
\textit{i}) spans the space of orbital products with exponential accuracy and which
\textit{ii}) respects the locality of the original atomic orbitals \cite{DF}. Moreover, the products
of atomic orbitals $f^a(\bm{r})f^b(\bm{r})$ relate to dominant products
$F^{\mu}(\bm{r})$ via a \textit{product vertex} $V_{\mu }^{ab}$
\begin{equation}
f^a(\bm{r})f^b(\bm{r})=\sum_{\mu }V_{\mu }^{ab}F^{\mu }(\bm{r}).
\label{VertexDefinition}
\end{equation}%
Because the dominant products $F^{\mu}(\bm{r})$ are themselves special linear
combinations of the original products, arbitrary extra fitting functions do
not enter into this scheme. In order to respect the principle of locality,
the above decomposition is carried out separately for each pair of atoms,
the orbitals of which overlap.
By their construction, the set of coefficients $V_{\mu }^{ab}$
is sparse in the sense that $V _{\mu }^{ab}\neq
0$ only if $a,b,\mu $ all reside on the same atom pair \cite{DF}.
In the construction of the dominant product basis,
we made use of Talman's algorithms and computer codes for the expansion of products of
orbitals about an arbitrary center and we also used his fast Bessel transform~\cite{Talman}.
To rewrite the defining equations of the $GW$ approximation (\ref{all_GW_equations})
in our basis, we expand both $G(\bm{r},\bm{r}',t-t')$
and $\Sigma (\bm{r},\bm{r}',t-t')$ in atomic orbitals
$f^a(\bm{r})$ \cite{PK+DF+OC}
\begin{equation}
\begin{aligned}
G(\bm{r},\bm{r}',t-t') &= G_{ab}(t-t')f^a(\bm{r})f^b(\bm{r}'); \\
\Sigma (\bm{r},\bm{r}',t-t') &= \Sigma_{ab}(t-t')f^a(\bm{r})f^b(\bm{r}').
\end{aligned}
\label{expansion1}
\end{equation}
We also develop the screened Coulomb interaction in dominant products
\begin{equation}
W^{\mu \nu }(t-t')=\int d^3r d^3r'F^{\mu }(\bm{r})W(\bm{r},\bm{r'},t-t')F^{\nu }(\bm{r}').
\label{expansion2}
\end{equation}%
Using eqs (\ref{VertexDefinition}, \ref{expansion1}, \ref{expansion2})
it is easy to show \cite{PK+DF+OC}
that Hedin's equations (\ref{all_GW_equations}) take the following tensorial form in our basis
\begin{align}
\mathrm{i}\chi_{\mu\nu}^0(t) &= 2 V _{\mu }^{aa'}G_{ab}^0(t)V _{\nu }^{bb'}G_{a'b'}^0(-t);
&\text{free electron response} \label{tensorform-response} \\
W^{\mu \nu }(\omega ) &=\frac{1}{\delta_{\alpha }^{\mu }-v^{\mu \beta }
\chi_{\beta\alpha}^{0}(\omega)}v^{\alpha\nu};
&\text{RPA screening} \label{tensorform-scr-inter} \\
\Sigma^{ab}(t) &=\mathrm{i}V_{\mu}^{aa'}G_{a'b'}^0(t)V_{\nu}^{b'b} W^{\mu \nu }(t);
&\text{$GW$ approximation} \label{tensorform-self-energy} \\
G_{ab}^{-1}(\omega +\mathrm{i}\varepsilon ) &=
G_{0ab}^{-1}(\omega +\mathrm{i}\varepsilon )-\Sigma_{ab}(\omega +\mathrm{i}\varepsilon).
&\text{Dyson's equation}
\label{tensorform-dyson}
\end{align}%
Here $v^{\mu \nu }$ denotes the Coulomb interaction
$v^{\mu \nu }=\int d^3r d^3r'\, F^{\mu }(\bm{r})\frac{1}{|\bm{r}-\bm{r}'|}F^{\nu }(\bm{r}')$
which, due to its positivity and symmetry, we also refer to as a ``Coulomb metric''.
Indices are raised or lowered using either the overlaps of the dominant
functions $F^{\mu }(\bm{r})$ or the overlaps of the
atomic orbitals $f^a(\bm{r})$,
which are defined as follows
\begin{equation}
O^{\mu \nu }=\int d^3r\, F^{\mu }(\bm{r})F^{\nu }(\bm{r}), \
S^{ab}=\int d^3r\, f^{a}(\bm{r})f^{b}(\bm{r}).
\end{equation}%
\begin{figure}
\centerline{\includegraphics[width=7cm, angle=0,clip]{mass-operator-graph.pdf}}
\caption{Feynman diagram for the $GW$ self-energy
expressed in our local LCAO and dominant products basis.
\label{f:self-energy-diagram}}
\end{figure}
Figure~\ref{f:self-energy-diagram} shows the Feynman diagram corresponding
to eq~(\ref{tensorform-self-energy}). The local character of the product
vertex $V_{\mu}^{aa'}$ is emphasized in this figure.
\section{The instantaneous part of the self-energy }
\label{s:instant-scr-inter}
When the screened Coulomb interaction $W^{\mu \nu }$ in
eq (\ref{tensorform-scr-inter}) is expanded as a function of
$v \chi^0$, its first term is the bare
Coulomb interaction $v^{\mu \nu}\delta (t-t')$
and the corresponding self-energy in eq~(\ref{tensorform-self-energy}) is
frequency independent. In textbook treatments of the
theory of the electron gas, it is explained \cite{ManyBodyText}
that the Green's function $G_{ab}(t-t')$ of the electron gas
must be defined, at $t=t'$, by setting $t-t'=0^{-}$ or by first annihilating
and then creating electrons. Using this prescription and eqs
(\ref{tensorform-scr-inter}, \ref{tensorform-self-energy}) one finds
the following result for the frequency-independent self-energy
that corresponds to the exchange operator
\begin{equation}
\Sigma_{\mathrm{x}}^{ab}(t-t')=
\mathrm{i}V _{\mu }^{aa'}G_{a'b'}^0(0^{-})V_{\nu }^{b'b} \delta(t-t') v^{\mu\nu} \nonumber.
\end{equation}
In the frequency domain, this operator becomes a frequency-independent matrix
\begin{equation}
\Sigma_{\mathrm{x}}^{ab}= V_{\mu}^{aa'} \sum_{E<0} X^{E}_{a'} X^{E}_{b'} V_{\nu}^{b'b} v^{\mu\nu},
\label{instantaneous}
\end{equation}%
which can be computed in $O(N^3)$ operations by using the sparsity
of the product vertex $V_{\mu}^{aa'}$.
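The contractions in eq (\ref{instantaneous}) can be spelled out directly; the sketch below evaluates them with small dense random stand-in arrays and \texttt{einsum}. Note that it does not exploit the sparsity of $V_{\mu}^{aa'}$, which is what brings the cost down to $O(N^3)$ in the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_orb, n_prod, n_occ = 4, 6, 2

# Product vertex V_mu^{ab}, symmetric in the orbital indices a, b.
V = rng.normal(size=(n_prod, n_orb, n_orb))
V = 0.5 * (V + V.transpose(0, 2, 1))

v = rng.normal(size=(n_prod, n_prod))
v = v @ v.T                           # positive stand-in for the Coulomb metric v^{mu nu}
X = rng.normal(size=(n_orb, n_occ))   # stand-in occupied eigenvectors X^E

D = X @ X.T                           # density matrix sum_{E<0} X^E X^E
# Sigma_x^{ab} = V_mu^{aa'} D_{a'b'} V_nu^{b'b} v^{mu nu}
Sigma_x = np.einsum('mac,cd,ndb,mn->ab', V, D, V, v)
```

With $V$ symmetric in its orbital indices and $v$ symmetric, the resulting exchange matrix is symmetric, as it must be for a Hermitian operator in a real basis.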
For small molecules and clusters, the instantaneous self-energy
that incorporates the effects of electron exchange may dominate over
the remaining frequency dependent self-energy. If this is the case,
we may substitute $\Sigma_{\mathrm{x}}^{ab}$ into
eq (\ref{tensorform-dyson}) and finish the
calculation by computing the DOS
$$
\rho(\omega +\mathrm{i}\varepsilon ) =
-\frac{1}{\pi } S^{ab}\,\mathrm{Im}G_{ba}(\omega +\mathrm{i}\varepsilon ),
$$
where we have emphasized the non-orthogonality of the basis orbitals
by the explicit inclusion of the overlap $S^{ab}$.
However, the frequency dependent part of the self-energy contains correlation
effects that significantly improve the calculation
quantitatively and qualitatively as we demonstrate in sections
\ref{s:results-1} and \ref{s:results-2}.
Therefore, we shall present our approach for the frequency dependent part
of the self-energy in the next section.
\section{Using spectral functions to compute the self-energy}
\label{s:sf-techniques}
One might want to solve eqs (\ref{tensorform-scr-inter}) and (\ref{tensorform-dyson})
directly as matrix valued equations in time $t$ and to use FFTs
\cite{NumericalRecipesFFT} to shuttle back and forth between the time and
frequency domains. Unfortunately, however, this direct approach is
doomed to fail---the functions $\{G_{ab}(t), \Sigma ^{ab}(t), W^{\mu\nu}(t)\}$
are too singular at $t=0$ to be multiplied together.
We will now show how spectral functions come to the rescue and allow us to
\textit{i}) respect locality in our calculations and to
\textit{ii}) accelerate our calculation
by means of FFT.
Let us consider the energy dependent density matrix
\begin{equation}
\rho _{ab}(\omega )=\sum_{E}X_{a}^{E}X_{b}^{E}\delta (\omega -E)
\end{equation}%
and rewrite the electronic propagator (\ref{local_propagator}) with its help:
\begin{equation}
G_{ab}^{0}(\omega \pm \mathrm{i}\varepsilon )=
\int_{-\infty }^{\infty }ds\,\frac{\rho _{ab}(s)}{\omega \pm \mathrm{i}\varepsilon - s}.
\label{gf-via-sf-freq}
\end{equation}%
Integral representations such as these are very useful, even in finite
systems where the spectral weight is concentrated at isolated
frequencies~\cite{DF+PK,PK+DF+OC}.
Because a spectral function is broadened by the
experimental resolution $\varepsilon$, it can be
represented on a discrete mesh of
frequencies, with the distance between mesh points
somewhat smaller than $\varepsilon$.
All the response functions considered in the present paper
have a spectral representation because their retarded and advanced parts
taken together define a single analytic function in the complex frequency
plane with a cut on the real axis.
A spectral representation is merely a rather thinly disguised Cauchy
integral as we can see by considering the Cauchy integral representation of the
electronic Green's function%
\begin{equation}
G_{ab}(z)=\frac{1}{2\pi \mathrm{i}}\oint_{C}\frac{G_{ab}(\xi)d\xi }{\xi -z},
\end{equation}%
where $C$ is a path surrounding the point $z$ with $\mathrm{Im}z>0$
in an anti-clockwise direction.
If the point $z=\infty $ is regular, the complex
plane may be treated like a sphere and we may deform
the contour on this sphere in such a way that it wraps
around the cut on the real axis in a clockwise direction.
Finally, because Green's functions take mutually hermitian conjugate values
$G_{ab}(z^{\ast })=G^{\ast }_{ba}(z)$ across
the cut on the real axis, the above integral can then be rewritten as
\begin{equation}
G_{ab}(z)=\int ds\frac{\rho _{ab}(s)}{z-s}\text{, }
\rho _{ab}(z)=-\frac{1}{\pi }\mathrm{Im} G_{ab}(z)
\end{equation}%
with $z$ on the upper branch of the cut.
In writing the preceding equation we have used the simplifying feature that
the electronic Green's function is
a symmetric matrix in our real representation of angular momenta (the same is
true for the screened Coulomb interaction).
In the following, we will always reconstruct correlation functions such as
$\{G_{ab}(\omega ),\,\Sigma ^{ab}(\omega ),\,W^{\mu \nu }(\omega )\}$ from their
imaginary part or from their spectral functions. The time ordered version of such
correlators is determined above (below) the real axis for positive
(negative) frequencies, respectively.
\subsection{The spectral function of a product of two correlators}
The well known convolution theorem \cite{NumericalRecipesFFT} tells us
that the spectral content of a product of two signals is the convolution
of the spectral contents of its factors. The situation is entirely analogous
for Green's functions and the other correlators considered here
and their spectral functions.
To see this, we use the spectral representations of the time ordered factors
$G_{ab}(t)$,
$\Sigma ^{ab}(t)$,
$W^{\mu \nu }(t)$
(the quantities in eqs (\ref{tensorform-response}--\ref{tensorform-dyson})
are time ordered)
\begin{equation}
\begin{aligned}
G_{ab}(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty }ds\,\rho_{ab}^{+}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty }^{0}ds\,\rho_{ab}^{-}(s)e^{-\mathrm{i}st}; \\
\Sigma ^{ab}(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty }ds\,\sigma_{+}^{ab}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty }^{0}ds\sigma_{-}^{ab}(s)e^{-\mathrm{i}st}; \\
W^{\mu \nu }(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty }ds\,\gamma_{+}^{\mu \nu}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty }^{0}ds\gamma_{-}^{\mu \nu}(s)e^{-\mathrm{i}st},
\end{aligned}
\label{spectral_1}
\end{equation}
where ``positive'' and ``negative'' spectral functions
define the whole spectral function by means of Heaviside functions. For instance,
the spectral function of the electronic Green's function reads
$$
\rho_{ab}(s)=\theta(s)\rho^{+}_{ab}(s)+\theta (-s)\rho^{-}_{ab}(s).
$$
These representations can be checked by transforming (for example)
the representation of $G_{ab}(t)$ into the frequency domain
and by comparing with the known expression (\ref{gf-via-sf-freq}).
We then compute $\Sigma^{ab}(t)$ from eq (\ref{tensorform-self-energy}) and
compare the result with the second of eqs (\ref{spectral_1}).
The spectral function of the self-energy is seen to have the expected convolution form
\begin{align}
\sigma_{+}^{ab}(s) &=\int_{0}^{\infty }\,\int_{0}^{\infty }
\delta (s_{1}+s_{2}-s)\,V_{\mu }^{aa'}\rho _{a'b'}^{+}(s_{1})
V_{\nu }^{b'b}\gamma_{+}^{\mu \nu }(s_{2})ds_1 ds_2, \label{spectral_3} \\
\sigma_{-}^{ab}(s) &=-\int_{-\infty }^{0}\,\int_{-\infty}^{0}
\delta (s_{1}+s_{2}-s)V _{\mu }^{aa'}\rho _{a'b'}^{-}(s_{1})
V_{\nu }^{b'b}\gamma _{-}^{\mu \nu}(s_{2})ds_1 ds_2. \notag
\end{align}%
Note that, as commented above, the $V_{\mu }^{aa'}$ matrices are sparse
and respect spatial locality.
Finally, we can easily construct the full self-energy from its spectral functions
$\sigma_{\pm }^{ab}(s)$ by a Cauchy type integral
\begin{equation}
\Sigma ^{ab}(\omega \pm \mathrm{i}\varepsilon )=\int_{-\infty }^{\infty }\frac{\sigma ^{ab}(s)ds}{%
\omega \pm \mathrm{i}\varepsilon -s}. \label{Cauchy1}
\end{equation}%
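The Cauchy integral (\ref{Cauchy1}) is straightforward to discretize. In the toy model below (the poles and weights are arbitrary, not taken from any calculation in this paper), the spectral function is sampled with broadening $\varepsilon$ on a uniform mesh, and applying the Cauchy kernel at $\omega+\mathrm{i}\varepsilon'$ recovers the correlator at the combined broadening $\varepsilon+\varepsilon'$:

```python
import numpy as np

poles = np.array([-1.0, 0.5, 2.0])     # arbitrary toy poles of Sigma
weights = np.array([0.4, 0.3, 0.3])    # their spectral weights (summing to one)

def sigma_exact(z):
    """Pole representation of the toy self-energy at complex frequency z."""
    return np.sum(weights / (z - poles))

# Sample the spectral function sigma(s) = -(1/pi) Im Sigma(s + i*eps) on a mesh.
eps, ds = 0.1, 0.01
s = np.arange(-30.0, 30.0, ds)
spec = -((weights / (s[:, None] + 1j * eps - poles)).sum(axis=1)).imag / np.pi

# Discretized Cauchy integral: rebuild Sigma at omega + i*eps2 from the mesh.
omega, eps2 = 0.8, 0.2
recon = np.sum(spec / (omega + 1j * eps2 - s)) * ds
```

This also illustrates why a mesh spacing somewhat smaller than the broadening suffices: the sampled spectral function is smooth on the scale of $\varepsilon$.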
By entirely analogous arguments, we can find the spectral function of the non
interacting response $\chi_{\mu \nu }^{0}$ from eq (\ref{tensorform-response})
\begin{equation}
\begin{aligned}
a_{\mu \nu }(s) &=\int_{0}^{\infty } \int_{0}^{\infty }%
V _{\mu }^{ab}\rho _{bc}^{+}(s_{1})V _{\nu }^{cd}
\rho_{da}^{-}(-s_{2})\delta (s_{1}+s_{2}-s)ds_{1}ds_{2}, &\text{for }s>0;
\\
a_{\mu \nu }(-s) &= -a_{\mu \nu }(s), &\text{for all } s; \\
\chi _{\mu \nu }^{0}(\omega +\mathrm{i}\varepsilon ) &=\int_{-\infty }^{\infty }ds\,%
\frac{a_{\mu \nu }(s)}{\omega +\mathrm{i}\varepsilon -s}, & \text{for }\omega >0.
\end{aligned}%
\label{chi_by_spectra}
\end{equation}
We implemented the convolutions in eqs (\ref{spectral_3}, \ref{chi_by_spectra})
conveniently by FFT without encountering any singularities.
Please observe that analytic continuations are not needed in our approach.
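On a uniform frequency mesh, the convolutions in eqs (\ref{spectral_3}, \ref{chi_by_spectra}) reduce to discrete convolutions, which zero-padded FFTs evaluate in $O(N\log N)$. The sketch below, with arbitrary Gaussian stand-ins for the positive spectral functions, confirms that the FFT route agrees with the direct $O(N^2)$ sum:

```python
import numpy as np

ds, n = 0.05, 256
s = np.arange(n) * ds
rho = np.exp(-(s - 2.0) ** 2)          # Gaussian stand-in for rho^+(s)
gam = np.exp(-(s - 4.0) ** 2 / 2.0)    # Gaussian stand-in for gamma^+(s)

# Direct O(N^2) evaluation of sigma(s) = int rho(s1) gam(s - s1) ds1.
direct = np.convolve(rho, gam)[:n] * ds

# Same convolution with zero-padded FFTs, O(N log N).
m = 2 * n
fft_conv = np.fft.irfft(np.fft.rfft(rho, m) * np.fft.rfft(gam, m), m)[:n] * ds
```

Zero-padding to twice the mesh length turns the circular convolution of the FFT into the linear convolution that the spectral integrals require.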
We have seen in this subsection that the locality of the expressions for
$\Sigma ^{ab}(t)$ and $\chi_{\mu \nu }^{0}(t)$ in eqs
(\ref{tensorform-response}) and (\ref{tensorform-self-energy})
can be taken into account without multiplying singular Green's functions and
by focusing instead on the spectral functions of their products.
\subsection{ The second window technique}
\label{s:two-window-technique}
Although we only need results in a suitable low energy window
$-\lambda \leq \omega \leq \lambda $ of a few electron volts,
eqs (\ref{Cauchy1}, \ref{chi_by_spectra}) show that high energy processes
at $|\omega |>|\lambda |$ influence
quantities at low energies, such as,
for example, the self-energy. Therefore, these
high energy processes cannot be ignored and we need the imaginary part
of the screened Coulomb interaction $W^{\mu \nu }$
not only for small $|\omega |\leq \lambda $ but also for larger frequencies.
To find the imaginary part of $W^{\mu \nu }$, we also need, in view of
eq (\ref{tensorform-self-energy}), the non interacting response
$\chi_{\mu\nu}^{0}$ both at small and at large frequencies.
Let us see, in the case of the density response, how the necessary spectral
information can be obtained from two separate calculations in two distinct
frequency windows \cite{DF+PK}. In the large spectral window
$-\Lambda \leq \omega \leq \Lambda $ a low resolution calculation with a
large broadening
(and, therefore, a coarse grid of frequencies)
is sufficient to find $\chi _{\mu \nu }^{0}$ at large
energies $|\omega |>|\lambda |$
\begin{equation}
\chi _{\mu \nu }^{0}(\omega +\mathrm{i}\varepsilon _{\text{large}})=\int_{-\Lambda
}^{\Lambda }ds\frac{a_{\mu \nu }(s)}{\omega +\mathrm{i}\varepsilon_{\text{large}}-s}.
\end{equation}%
To get correct results in the low energy window
$-\lambda \leq \omega \leq \lambda $ we must take into account the spectral
weight in this window
\begin{equation}
\begin{aligned}
\chi _{\mu \nu }^0(\omega +\mathrm{i}\varepsilon _{\text{small}}) &=\int_{-\lambda
}^{\lambda }ds\frac{a_{\mu \nu }(s)}{\omega +\mathrm{i}\varepsilon _{\text{small}}-s}%
\ +\left( \int_{-\Lambda }^{-\lambda }+\int_{\lambda }^{\Lambda }\right) ds%
\frac{a_{\mu \nu }(s)}{\omega +\mathrm{i}\varepsilon _{\text{large}}-s} \\
&=\chi _{\mu \nu }^{\text{small window}}(\omega +\mathrm{i}\varepsilon _{\text{%
small}})+\left[ \chi _{\mu \nu }^{\text{large window}}(\omega +\mathrm{i}\varepsilon
_{\text{large}})\right] _{\text{truncated spectral function}}.
\end{aligned}
\label{win2-for-response}
\end{equation}
Instead of doing the second Cauchy integral directly, we construct
$\chi_{\mu \nu }^{\text{large window}}$ from the large spectral window, using
spectral data that are truncated for $|s|<\lambda $ to avoid counting
the spectral weight in $-\lambda \leq \omega \leq \lambda $ twice. Moreover,
the broadening constant $\varepsilon$ is set differently in the two
windows and corresponds to the spectral resolution in each of them:
it is chosen automatically by setting $\varepsilon=1.5 \Delta \omega$,
where $\Delta \omega$ is the distance between two points on the
corresponding frequency grid.
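In a scalar toy model, the combination of the two windows in eq (\ref{win2-for-response}) can be sketched as follows. The grids, broadenings and the model spectral function below are hypothetical and serve only for illustration:

```python
import numpy as np

def cauchy(s, a, omega, eps):
    ds = s[1] - s[0]
    return (a[None, :] / (omega[:, None] + 1j * eps - s[None, :])).sum(axis=1) * ds

# toy spectral function with weight well below and well above lambda
spectral = lambda s: np.exp(-(s - 0.5)**2 / (2 * 0.3**2)) + 0.5 * np.exp(-(s - 8.0)**2 / 4.0)
lam, Lam = 2.0, 20.0

# fine grid and small broadening inside the low-energy window |s| <= lambda
s_small = np.linspace(-lam, lam, 801)
eps_small = 1.5 * (s_small[1] - s_small[0])
# coarse grid and large broadening, truncated to |s| >= lambda (no double counting)
s_large = np.linspace(-Lam, Lam, 801)
eps_large = 1.5 * (s_large[1] - s_large[0])
a_large = np.where(np.abs(s_large) >= lam, spectral(s_large), 0.0)

omega = np.linspace(-1.5, 1.5, 7)
chi_two_windows = (cauchy(s_small, spectral(s_small), omega, eps_small)
                   + cauchy(s_large, a_large, omega, eps_large))

# reference: a single fine grid with small broadening over the full range
s_ref = np.linspace(-Lam, Lam, 8001)
chi_one_window = cauchy(s_ref, spectral(s_ref), omega, eps_small)
```

At low frequencies the two-window result agrees closely with the single fine-grid calculation over the full range, while using far fewer points per window.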
We use the second window technique (presented in eq (\ref{win2-for-response})
for the case of the density response) again in the calculation of the
self-energy $\Sigma^{ab}(\omega)$, where we also need the screened interaction
in two windows. We combine the spectral functions of the self-energy in
exactly the same way as for the density response.
For the cases considered here, computations
using two spectral windows were up to one order of magnitude
faster than computations using a single spectral window.
\section{Testing our implementation of $GW$ on a small molecule \label{s:results-1}}
The methods presented above are sufficient to compute the self-energy
(\ref{self-energy}) of small molecules~\cite{comp-details}.
As a test, we will compute the interacting
Green's function by solving Dyson's equation (\ref{Dyson}).
From this Green's function we can obtain the DOS and estimate
the positions of the HOMO and LUMO levels.
Here we illustrate this procedure in the case of benzene.
This molecule has been chosen because extensive theoretical results
and experimental data are available for it.
Our calculations show a considerable improvement
using the $GW$ approximation
as compared to the results obtained with plain DFT calculations using local or
semi-local functionals.
In general, for small molecules we find a reasonable agreement with
experimental data and previous $GW$ calculations
of the ionization potentials and electron affinities.
The input for our $GW$ method has
been obtained from calculations using the local density approximation (LDA)
and the SIESTA package~\cite{siesta}.
SIESTA uses a basis set of strictly confined numerical atomic orbitals.
The extension of these orbitals is
consistently determined by
an \textit{energy shift} parameter. In general, the smaller the energy shift
the larger the extension of the orbitals, although the procedure
results in different cutoff radii
for each multiplet of orbitals~\cite{Artacho-about-Energy-Shift}.
In the present calculations we have used the default
double-$\zeta$ polarized (DZP) basis, along with
the Perdew-Zunger LDA exchange-correlation functional~\cite{Perdew-Zunger}
and pseudo-potentials
of the Troullier-Martins type~\cite{Troullier-Martins}.
Our calculations indicate (see table~\ref{t:ip-ea-benzene}) that it is necessary to
use rather extended orbitals to obtain converged
results for the HOMO and LUMO levels.
For the most extended basis used here
(determined from an energy shift of 3~meV) all the
orbitals in benzene have a non-zero overlap and, in principle, the number
of products of orbitals is 108(108+1)/2=5886. This number is
reduced
using the algorithm described in Ref.~\cite{DF}, and the
dominant product basis (see eq~(\ref{VertexDefinition}))
only contains 2325 functions.
The spectral functions have been discretized
using a grid with $N_{\omega}=1024$ points
in the range from $-80$ eV to $80$ eV. The broadening constant has been
set automatically to $\varepsilon=1.5\Delta \omega=0.234375$ eV.
The frequency range was chosen manually by inspecting the
non interacting absorption spectrum. The results of the calculation
depend only weakly on the frequency range.
Figure \ref{f:benzene-dos} shows the DOS
calculated with different Green's functions.
As one can see, the input Green's function $G_0$
from a DFT-LDA calculation has
a very small HOMO-LUMO gap. The Green's function $G$ obtained
with the instantaneous part of
the self-energy (see eq~\ref{instantaneous}) opens the HOMO-LUMO gap.
This part of the self-energy $\Sigma_{\mathrm{x}}$ incorporates the effect
of exchange and is very important for small molecules. However,
the gap is over-estimated as one can already anticipate from typical
mean-field Hartree-Fock calculations.
Correlation effects are partially taken into account by
the dynamical part of the $GW$ self-energy. This brings the
HOMO-LUMO gap closer to the experimental value.
Our results stay also in agreement with other works using
similar approximations ($G_0W_0$ on top of DFT-LDA)
\cite{Baroni$GW$,Tiago-Chelikowsky-and-Frauenheim}.
\begin{figure}[htb]
\centerline{
\begin{tabular}{m{7cm}m{0.1cm}m{5cm}}
\centerline{
\includegraphics[width=7cm, viewport=50 50 410 300, angle=0,clip]{dos-benzene-003-dft-x-xc.pdf}} & &
\centerline{\includegraphics[width=4.5cm,angle=0,clip]{benzene.pdf}} \\[-0.2cm]
\centerline{a)} & & \centerline{b)}
\end{tabular}}
\caption{a) Density of states of benzene computed from different
Green's functions using as an input the
results of a DFT-LDA calculation performed with the SIESTA
package. A DZP basis set, with orbital radii determined
using a value of the energy shift parameter of 3~meV, has been used.
The results shown in this figure are obtained with a single energy
window. $GW_{\mathrm{x}}$ refers to the results obtained with only
the instantaneous part of the self-energy (only exchange),
while $GW_{\mathrm{xc}}$ labels the results obtained with the
whole self-energy (incorporating additional correlation effects).
b) Ball and stick model of benzene produced with the
XCrysDen package \cite{xcrysden}.
\label{f:benzene-dos}}
\end{figure}
Apart from the $GW$ approximations to the self-energy (\ref{self-energy}),
our numerical method is controlled by precision parameters
of a more technical nature.
Table~\ref{t:ip-ea-benzene} presents the results for the ionization potential
(IP) and the electron affinity
(EA) as a function of the
extension of the atomic orbitals in the original LDA calculation
as determined from the energy shift
parameter~\cite{Artacho-about-Energy-Shift}.
An energy shift of 150~meV is usually sufficient to have
an appropriate description of the ground-state properties
of the molecules~\cite{siesta}. However, we can see
that our $GW$ calculation requires more extended (smaller energy shift)
orbitals.
The slow convergence of the ionization potential of benzene
with respect to the quality/completeness of the basis set
has also been observed in the plane-wave calculations (using
Wannier functions) of Ref.~\cite{Baroni$GW$}.
Table \ref{t:ip-ea-benzene}
also shows the results of calculations using
one and two energy windows.
The former calculation is more straightforward but requires
the same density of frequency points at higher energies as in the region
of interest near the HOMO and LUMO levels.
The latter calculation uses two separate frequency grids:
as described in section \ref{s:two-window-technique},
a lower resolution and a larger imaginary part of the energy are
used for the whole spectral range, while
high resolution and a small width are used
in the low energy range to resolve the HOMO and LUMO levels.
Thus, the second window technique
requires the computation of both the response
function and screened Coulomb interaction at far fewer frequencies than
the one-window calculation.
For instance, the one-window results presented above have been
obtained with $N_{\omega}=1024$ frequencies, while the two-window
results used only $N_{\omega}=192$ frequencies in both windows,
implying a gain of a factor 2.7 in speed and in memory.
The first and second windows extend to 12.58 eV and 80 eV, respectively.
The first window is chosen as
$2.5 (E_{\text{DFT LUMO}}-E_{\text{DFT HOMO}})$.
The computational result depends only
weakly on the extension of the first window. The second window is chosen
manually as in the one-window calculation above.
The broadening constant $\varepsilon$ has been set separately for
each spectral window, using the default value $\varepsilon=1.5\Delta \omega$.
The lower number of frequencies obviously accelerates the calculation
and saves memory, while introducing very small inaccuracies in the
low frequency region. According to table~\ref{t:ip-ea-benzene} the positions
of HOMO and LUMO agree within $0.1$~eV.
Figure~\ref{f:benzene-dos-win1-win12} shows that the second window
leads to changes that are small, both in the HOMO and LUMO positions,
and in the density of states at low energies.
\begin{table}\centerline{
\begin{tabular}{|c|c|c||c|c|}
\hline
Energy-shift, & \multicolumn{2}{c||}{One window} & \multicolumn{2}{c|}{Two windows} \\
meV & IP, eV & EA, eV & IP, eV & EA, eV \\
\hline
150 & 8.48 & -1.89 & 8.48 & -2.01 \\
30 & 8.71 & -1.45 & 8.72 & -1.57 \\
3 & 8.76 & -1.29 & 8.78 & -1.41 \\
\hline
Experiment & 9.25 & -1.12 & 9.25 & -1.12 \\
\hline
\end{tabular}}
\caption{
Ionization potentials and electron affinities for
benzene versus the extension of the basis functions. The extension
of the atomic orbitals is determined using the energy shift parameter
of the SIESTA method~\cite{Artacho-about-Energy-Shift}.
Note that rather extended orbitals are necessary to achieve converged
results. Differences associated with the use of the
second window
technique introduced in section (\ref{s:two-window-technique}) are of the
order of 0.1~eV. The experimental ionization potential
is taken from the NIST server \cite{NIST}.
The electron affinity of benzene is taken from \cite{Rienstra-Kiracofe:2001}.
\label{t:ip-ea-benzene}}
\end{table}
\begin{figure}[htb]
\centerline{\includegraphics[width=7cm,viewport=50 50 410 300, angle=0,clip]{dos-benzene-003-gw-win1-win12.pdf}}
\caption{The density of states of benzene computed with a uniformly discretized
spectral function and using the second window technique. The peak positions are
only very weakly perturbed by the use of two windows. The parameters of the
calculation are identical to those of figure \ref{f:benzene-dos}.
The second window technique allowed us to reduce the number of frequency
points from $N_{\omega}=1024$ to $N_{\omega}=192$.
\label{f:benzene-dos-win1-win12}}
\end{figure}
The calculations presented in this section needed a fairly
large amount of random access memory (RAM). The amount of RAM
increases as $N^2$ with $N$ the number of dominant products,
which prohibits the treatment of larger molecules
using the methods described above in a straightforward manner.
However, as we will see in the next section,
we can use a compression method that dramatically reduces the
required memory.
\section{Compression of the Coulomb interaction}
\label{s:compression}
As shown in Ref.~\cite{PK+DF+OC}
it is possible to solve the Petersilka-Gossmann-Gross
equations~\cite{Petersilka} for time-dependent density functional theory (TDDFT)
using a Lanczos type approach if, for example, we are only interested in the
polarizability tensor of the system. In this way, we avoid keeping
the entire linear response matrix $\chi _{\mu \nu }^{0}(\omega )$ in
the computer memory.
Unfortunately, we were unable to find an analogous Lanczos type procedure
for the self-energy matrix.
However, we have found an alternative solution to this problem.
It consists of taking into account the electron dynamics
and keeping preferentially those dominant products that are necessary
to describe $\chi^{0}_{\mu\nu}$ in the relevant range of frequencies.
\subsection{Defining a subspace within the space of products}
Consider the following closed form expression of the non interacting response
$\chi_{\mu \nu }^{0}(\omega )$ of eq (\ref{tensorform-response})
\begin{equation}
\chi _{\mu \nu }^{0}(\omega +i\varepsilon ) = 2\sum_{E,F}
V _{\mu}^{EF}\frac{n_{F}-n_{E}}{\omega +i\varepsilon -(E-F)}V_{\nu}^{EF}%
\text{, where }V_{\mu }^{EF}=X_{a}^{E}V_{\mu }^{ab}X_{b}^{F}.
\end{equation}
This is a well known expression, but rewritten in the basis of dominant products \cite{DF+PK}.
It must be emphasized that we do not use this equation to compute
$\chi_{\mu \nu }^{0}(\omega)$
(it would require $O(N^4)$ operations) but this explicit representation
of the exact non interacting response is nonetheless crucial
for motivating our method of compression.
Clearly $\chi_{\mu \nu }^{0}(\omega )$ is built up from $O(N^2)$ vectors $V_{\mu }^{EF}$.
On the other hand, the entire space of products is, by construction, of only $O(N)$ dimensions.
Therefore, there must be a significant amount of collinearity in the
set of vectors $V_{\mu}^{EF}$ and a much smaller subset of such
vectors should span the space where $\chi_{\mu\nu}^{0}(\omega)$
acts. As candidates for the generators of this subspace, we
sort the vectors $\{V_{\mu }^{EF}$ $\}$ according to $|E-F|$
up to a certain \textit{rank} $N_{\mathrm{rank}}$
\begin{equation}
\{X_{\mu }^{n}\}\equiv \text{subset of }\{V_{\mu }^{EF}\}
\text{ limited according to } |E-F|<E_{\mathrm{threshold}},
n=1\ldots N_{\mathrm{rank}}.
\label{subsetof_vef}
\end{equation}%
Here we treat $\{E,F\}$ as electron-hole pairs, i.e. $E<0$ and $F>0$.
As a first test of whether the subspace carries
enough information, we define a projector onto it
\begin{align}
g^{mn} &=X_{\mu }^{m} v^{\mu\nu} X_{\nu }^{n}; \label{projector0} \\
P_{\mu \nu } &=X_{\mu }^m g_{mn} X_{\nu }^{n}, \text{ where }
g_{mn}=\left( g^{mn}\right) ^{-1}; \notag \\
P_{\nu }^{\mu } &=v^{\mu\mu'}P_{\mu' \nu }. \notag
\end{align}%
It can be shown without difficulty that $P_{\nu }^{\mu }$ is indeed
a projector in the sense of $P^{2}=P$. We can use it to project
the screened Coulomb interaction onto the subspace generated by
the set $\{X_{\mu}^{n}\}_{n=1..N_{\mathrm{rank}}}$
\begin{equation}
W_{\mathrm{projected}}^{\mu \nu }(\omega )=
P_{\mu'}^{\mu }P_{\nu'}^{\nu }W^{\mu'\nu'}(\omega).
\label{proj-scr-inter}
\end{equation}%
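The idempotency $P^{2}=P$ asserted above is easy to verify numerically. A minimal sketch with random stand-ins for the metric $v^{\mu\nu}$ and the vectors $X_{\mu}^{n}$ follows; the matrices are hypothetical and chosen only to have the right structure:

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_rank = 8, 3                       # dim. of product space, number of selected vectors
A = rng.standard_normal((N, N))
v = A @ A.T + N * np.eye(N)            # positive-definite stand-in for the metric v^{mu nu}
X = rng.standard_normal((N_rank, N))   # rows play the role of the vectors X^n_mu

g = X @ v @ X.T                        # g^{mn} = X^m_mu v^{mu nu} X^n_nu
P_low = X.T @ np.linalg.inv(g) @ X     # P_{mu nu} = X_mu^m g_{mn} X_nu^n
P = v @ P_low                          # P^mu_nu = v^{mu mu'} P_{mu' nu}
```

Besides $P^{2}=P$, the trace of the mixed-index projector equals $N_{\mathrm{rank}}$, the number of selected vectors.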
We must choose $N_{\mathrm{rank}}$ large enough so that the
trace of the projected spectral density
$W_{\mathrm{projected}}^{\mu \nu }(\omega )$ is sufficiently close
to the original one. We checked that this works even for
$N_{\mathrm{rank}}$ considerably
smaller than the original dimension of the space of products.
We can go further and reduce the dimension of the subspace by
eliminating collinear vectors from it. We do this by diagonalizing
the matrix $g^{mn}$ in eq (\ref{projector0}) and by defining
new vectors $Z_{\mu}^{\lambda }$\cite{note1}
\begin{align}
g^{mn}\xi_{n}^{\lambda } &=\lambda \xi _{m}^{\lambda }, \notag \\
Z_{\mu}^{\lambda } & \equiv X_{\mu }^{m}\xi _{m}^{\lambda }/\sqrt{\lambda}.
\label{subspace}
\end{align}%
To define the vector space $\{Z_{\mu}^{\lambda }\}$, we first
discard the eigenvectors $\xi _{m}^{\lambda }$ that correspond
to eigenvalues $\lambda$ smaller than a threshold with respect to the Coulomb
metric $v^{\mu\nu}$ and we then normalize the remaining vectors, for simplicity.
As a result of this procedure, we obtain a smaller set of vectors that we
denote again by $\{Z_{n \mu}\}$ with
$n=1\ldots N_{\mathrm{subrank}}$,
where $N_{\mathrm{subrank}}\leq N_{\mathrm{rank}}$,
and with the additional property that they are orthonormal with
respect to the Coulomb metric $v^{\mu\nu}$
\begin{equation}
Z_{m}^{\mu }Z_{n \mu}=\delta _{mn}\text{, where }Z_{m}^{\mu }=
v^{\mu\nu }Z_{m \nu}, \text{ for } m,n=1\ldots N_{\mathrm{subrank}}.
\label{global-basis-orthogonality}
\end{equation}%
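The elimination of collinear vectors in eqs (\ref{subspace})-(\ref{global-basis-orthogonality}) can be sketched as follows. The random data are hypothetical, and one deliberately collinear vector is included to show that it is discarded:

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_rank = 10, 6
A = rng.standard_normal((N, N))
v = A @ A.T + N * np.eye(N)              # positive-definite stand-in Coulomb metric
X = rng.standard_normal((N_rank, N))
X[-1] = X[0]                             # a deliberately collinear vector

g = X @ v @ X.T                          # overlap of the X vectors in the Coulomb metric
lam, xi = np.linalg.eigh(g)
keep = lam > 1e-8 * lam.max()            # discard eigenvalues below a threshold
Z = (xi[:, keep] / np.sqrt(lam[keep])).T @ X   # Z^n_mu = X^m_mu xi^n_m / sqrt(lambda_n)
N_subrank = Z.shape[0]
```

The surviving vectors are orthonormal with respect to the metric, $Z v Z^{T}=1$, and the collinear direction has been removed, $N_{\mathrm{subrank}}=N_{\mathrm{rank}}-1$.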
\subsection{ Construction of the screened interaction
from the action of the response function in the subspace}
From the preceding discussion we know that $\chi _{\mu \nu }^{0}$ can be adequately
represented in the previously constructed subspace $\{Z_{\mu}^{n}\}$
in the sense of
\begin{equation}
\chi _{\mu \nu }^{0}=Z_{m \mu }\chi _{mn}^{0}Z_{n \nu },\text{ with }%
\chi _{mn}^{0}=Z_{m}^{\mu }\chi _{\mu \nu }^{0}Z_{n}^{\nu }.
\label{subspace_chi0}
\end{equation}%
To see which form the screened Coulomb interaction (\ref{Coulombscreening})
takes for such a density response $\chi_{\mu \nu }^{0}$, we write
it as a series
\begin{equation}
W^{\mu \nu }=\left( \frac{1}{v^{-1}-\chi^0}\right)^{\mu \nu }=v^{\mu \nu}
+v^{\mu \mu '}\chi_{\mu '\nu '}^{0} v^{\nu'\nu }+v^{\mu \mu '}
\chi _{\mu'\nu'}^0 v^{\nu'\mu''}\chi_{\mu'' \nu''}^0 v^{\nu''\nu}+\cdots
\label{W-series}
\end{equation}%
Because $\chi ^{0}$ acts --- by hypothesis --- only in the subspace,
the series may be simplified. Let us insert the representation of
the response function $\chi_{\mu \nu }^{0}$ of eq (\ref{subspace_chi0})
into the series (\ref{W-series})
\begin{equation}
W^{\mu \nu }=v^{\mu \nu }+v^{\mu \mu '}
\left[ Z_{m \mu'}\chi _{mn}^{0}Z_{n \nu'}\right]v^{\nu '\nu}+
v^{\mu \mu '}\left[ Z_{m \mu'}\chi_{mn}^{0}Z_{n \nu'}\right]v^{\nu '\mu''}
\left[ Z_{m \mu''}\chi _{mn}^{0}Z_{n \nu''}\right] v^{\nu''\nu}+\cdots,\notag
\end{equation}%
then use the orthogonality property of the basis vectors
$Z_{m \mu}$ (\ref{global-basis-orthogonality}) and find
\begin{align}
W^{\mu \nu } &=v^{\mu \nu }+Z_{m}^{\mu}\chi _{mr}^{0}
\left[ \delta_{rn}+\chi_{rn}^0
+\chi _{rs}^0\chi_{sn}^0+\cdots\right] Z_{n}^{\nu } = \label{chi_RPA} \\
&=v^{\mu \nu }+Z_{m}^{\mu }\chi _{mn}^{\mathrm{RPA}}Z_{n}^{\nu }. \notag
\end{align}%
Here we introduced the new response function
$\chi_{mn}^{\mathrm{RPA}} \equiv \left( \delta_{mk}-\chi_{mk}^{0}\right)^{-1}\chi_{kn}^{0}$.
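That the subspace formula reproduces the direct inversion $W=(v^{-1}-\chi^{0})^{-1}$ whenever $\chi^{0}$ is supported on the subspace can be checked numerically. A sketch with random stand-ins (hypothetical matrices, frequency dependence suppressed):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 12, 4                                   # product-space and subspace dimensions
A = rng.standard_normal((N, N))
v = A @ A.T + N * np.eye(N)                    # stand-in Coulomb metric
Y = rng.standard_normal((k, N))
L = np.linalg.cholesky(Y @ v @ Y.T)
Z = np.linalg.solve(L, Y)                      # rows obey Z v Z^T = 1 (Coulomb metric)

B = rng.standard_normal((k, k))
chi0_sub = 0.05 * (B + B.T)                    # small subspace response chi^0_{mn}
chi0 = Z.T @ chi0_sub @ Z                      # chi^0_{mu nu} = Z_{m mu} chi^0_{mn} Z_{n nu}

W_direct = np.linalg.inv(np.linalg.inv(v) - chi0)          # W = (v^{-1} - chi^0)^{-1}
chi_rpa = np.linalg.solve(np.eye(k) - chi0_sub, chi0_sub)  # (1 - chi^0)^{-1} chi^0
W_subspace = v + (v @ Z.T) @ chi_rpa @ (Z @ v)             # eq (chi_RPA)
```

The inversion in the $k$-dimensional subspace replaces the inversion in the $N$-dimensional product space, which is the source of the acceleration discussed above.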
From the preceding arguments we conclude that the dynamically
screened Coulomb interaction $W^{\mu \nu }$
can be computed in terms of the response function $\chi_{mn}^{\mathrm{RPA}}$
within the previously constructed subspace and the matrix inversion in this
smaller space is of course much cheaper than in the original space.
This is a welcome feature --- the number of operations for matrix inversion
scales with the cube of the dimension and
a compression by a factor 10 will lead to a 1000 fold acceleration
of this part of the computation.
It is important to note that, although an energy cut-off $E_{\mathrm{threshold}}$
is used to choose the relevant $\{V_{\mu}^{EF}\}$ vectors,
high-frequency components of the
response and of the screened interaction are explicitly
calculated.
$E_{\mathrm{threshold}}$ only
serves to construct the frequency independent basis vectors $\{Z_{n}^{\nu}\}$
according to eq (\ref{subspace}). This basis is later used
to calculate the response function $\chi^0_{\mu\nu}(\omega)$
in the whole frequency range (see eq (\ref{subspace_chi0})).
Of course, we can expect that, if $E_{\mathrm{threshold}}$
is chosen too small, the ability of the compressed
basis to represent
the high energy components of the response
will eventually deteriorate. However,
we are interested in the low energy excitations of the system
and, as we will show in the next subsection, those can be
accurately described using values of $E_{\mathrm{threshold}}$
that allow for a large reduction in the size of the product
basis. Furthermore, it is also important to note that
the instantaneous self-energy $\Sigma_x$, for which
a compression criterion based on our definition of $E_{\mathrm{threshold}}$
is dubious, is calculated within the original dominant product basis,
i.e. before this non local compression is performed.
\subsection{The compression in the case of benzene }
\label{ss:compression-benzene}
The non local compression depends on two parameters:
i) the maximum energy $E_{\text{threshold}}$
of the Kohn-Sham electron-hole pairs
$\{V_{\mu}^{EF}\}$ in eq (\ref{subsetof_vef}),
and ii) the eigenvalue threshold $\lambda$ for identifying
the important basis vectors
$\{Z_{n}^{\nu}\}$ in eq (\ref{subspace}).
Table \ref{t:ea-benzene}
shows the electron affinity of benzene
as a function of $E_{\text{threshold}}$ and $\lambda$.
The computational parameters have been chosen as in section \ref{s:results-1}
and the energy shift to define the extension of the orbitals is 3~meV.
Table \ref{t:ea-benzene} illustrates a general feature that we have found in
many tests for several systems:
$N_{\text{rank}}$ can be chosen of the order of the number of
atomic orbitals $N_{\text{orb}}$. We have found that
$N_{\text{rank}}\approx 5 N_{\text{orb}}$
usually guarantees a converged result for the HOMO and LUMO
levels. In any case, the number of
relevant linear combinations $N_{\text{subrank}}$ was always
much smaller than the number of dominant
functions, with a typical compression ratio of ten or more.
\begin{table}
\centerline{
\begin{tabular}{|c|c|c|c|}
\hline
& $\lambda=10^{-2}$ & $\lambda=10^{-3}$ & $\lambda=10^{-4}$ \\
\hline
$E_{\text{threshold}}=10$ eV & 2.48 (33) & 2.46 (37) & 2.45 (39) \\
$E_{\text{threshold}}=20$ eV & 1.38 (96) & 1.39 (133) & 1.40 (171) \\
$E_{\text{threshold}}=40$ eV & 1.41 (132) & 1.41 (192) & 1.41 (279) \\
\hline
\end{tabular}}
\caption{
Electron affinities for benzene versus the compression parameters $E_{\text{threshold}}$ and
the eigenvalue cutoff $\lambda$ in eq (\ref{subspace}). The dimension of the compressed
subspace, $N_{\text{subrank}}$, is given in brackets. The dimension $N_{\text{rank}}$ is governed by $E_{\text{threshold}}$
and was $39$, $297$ and $765$ for $E_{\text{threshold}}=10$ eV, $E_{\text{threshold}}=20$ eV and
$E_{\text{threshold}}=40$ eV, respectively. The number of atomic orbitals is $N_{\text{orb}}=108$, while
the number of dominant products is $N_{\text{prod}}=2325$.
\label{t:ea-benzene}}
\end{table}
To further illustrate the quality of the basis, we will compare the trace
of the original screened interaction with the trace of the projected screened
interaction for the benzene molecule. The result of the comparison can be seen
in figure \ref{f:sf-scr-inter-benzene}.
In this test calculation, the dominant product basis consists of 921 functions,
while the compressed basis contains only 248 functions. Examples of compression for
larger molecules will be presented in section \ref{s:results-2}.
\begin{figure}[htb]
\begin{tabular}{m{7cm}m{0.1cm}m{5cm}}
\includegraphics[width=7cm,viewport=50 50 410 300, angle=0,clip]{sf-scr-inter-benzene-272.pdf} &&
\includegraphics[width=7cm,viewport=50 50 410 300, angle=0,clip]{sf-scr-inter-benzene-272-error.pdf} \\
\centerline{a)} && \centerline{b)}
\end{tabular}
\caption{
a) Comparison of the screened interaction calculated
for benzene using our original dominant product basis and
the screened interaction projected to a compressed product basis
(see eq~(\ref{proj-scr-inter})). We plot the sum of all the
matrix elements of the imaginary part of the screened
interaction.
b) A plot of the difference of the functions represented in panel a.
The change in spectral weight of
the screened interaction due to compressing the space
of dominant products is seen to be small. Please notice
the different scales of the y-axis in both panels.
\label{f:sf-scr-inter-benzene}}
\end{figure}
The examples presented in this section show that the screened Coulomb interaction
can be effectively compressed. A practical algorithm that uses the non local compression
and maintains the $O(N^3)$ complexity scaling of the calculation will be presented
in the next section.
\section{Maintaining $O(N^3)$ complexity scaling by compressing / decompressing}
\label{s:maintain-n3}
The favorable $O(N^{2})$ scaling of the construction of the uncompressed
non interacting response $\chi^{0}_{\mu\nu}$
is due to its locality. On the other hand,
we need compression for $\chi^{0}_{\mu\nu}$ to fit into the computer memory and
the compressed $\chi^{0}_{mn}$ is no longer local. To satisfy the
two mutually antagonistic criteria of
\textit{i}) locality (for computational speed) and
\textit{ii}) small dimension (to fit into the computer memory) we shuttle back and
forth as needed between the uncompressed/local and the compressed/non local
representations of the response $\chi^{0}$ and of the screened interaction $W$.
Both compression and decompression are matrix operations that scale as
$O(N^{3})$ and this, along with the matrix inversion in eq (\ref{chi_RPA})
in the computation of the screened Coulomb interaction, and the computation
of the spectral densities $\rho_{ab}^{\pm}(s)$, is the reason why our
implementation of $GW$ scales as $O(N^{3})$.
\subsection{A construction of the subspace response in $O(N^{3})$ operations}
Let us describe an efficient construction of the response $\chi_{\mu \nu}^{0}$
and its compressed counterpart $\chi_{mn}^{0}$
that, in addition, gives us an opportunity to describe our
use of frequency and time domains during the calculation.
Consider eq (\ref{chi_by_spectra}) that involves convolutions
of the spectral functions $\rho_{bc}^{+}(\omega)$ and
$\overline{\rho}^{-}_{ad}(\omega)\equiv\rho^{-}_{ad}(-\omega)$.
To make use of the convolution theorem, we will first compute the
spectral function of the non interacting response $a_{\mu \nu}(s)$ in the time domain
\begin{align}
a_{\mu \nu }(t) &=\int \frac{ds}{2\pi }
a_{\mu \nu }(s)e^{\mathrm{i}st}=2\pi
\int_{0}^{\infty }V _{\mu }^{ab}\rho_{bc}^{+}(s_{1})
e^{\mathrm{i}s_1t}\frac{ds_{1}}{2\pi }\cdot %
\int_{0}^{\infty }V _{\nu }^{cd}\rho
_{ad}^{-}(-s_{2})e^{\mathrm{i}s_2 t }\frac{ds_{2}}{2\pi } \notag \\
&=2\pi V _{\mu }^{ab}\rho _{bc}^{+}(t)V _{\nu }^{cd}
\overline{\rho}_{ad}^{-}(t). \label{sf-of-response-time}
\end{align}%
In other words, we prepare the use of the FFT driven convolution
by first computing the Fourier transforms of the electronic
spectral densities $\rho^{\pm }$ and once $a_{\mu \nu }(t)$ is computed, we
return to $a_{\mu \nu }(s)$ by an inverse Fourier transformation.
This is nothing else but the fast convolution method with the
Fourier transform of the spectral densities $\rho_{bc}^{+}(\omega)$ and
$\overline{\rho}_{ad}^{-}(\omega)$ carried out prior to the tensor operations in
eq (\ref{sf-of-response-time}).
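The FFT-driven convolution used here can be sketched in a few lines: with zero padding to avoid circular wrap-around, the convolution theorem reproduces the direct evaluation of the convolution integral. The spectral functions below are toy data:

```python
import numpy as np

n, ds = 256, 0.1
s = np.arange(n) * ds
f = np.exp(-(s - 3.0) ** 2)                   # toy one-sided spectral functions
g = np.exp(-(s - 5.0) ** 2 / 2.0)

# direct evaluation of a(s) = int ds1 ds2 delta(s1 + s2 - s) f(s1) g(s2)
a_direct = np.convolve(f, g)[:n] * ds

# the same via the convolution theorem; zero padding to 2n avoids wrap-around
m = 2 * n
a_fft = np.fft.ifft(np.fft.fft(f, m) * np.fft.fft(g, m)).real[:n] * ds
```

The FFT route costs $O(N_{\omega}\log N_{\omega})$ per matrix element instead of $O(N_{\omega}^{2})$ for the direct sum.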
Above we saw how to compute the spectral function of the non interacting density response.
However, as we mentioned before,
we cannot easily store this information in the memory of the computer and we must therefore compress
this quantity as soon as it is found, to avoid exhausting the computer memory. An efficient way to do this
is to compute $a_{\mu \nu }(t)$, the spectral function of the non interacting response in the time domain,
in a ``time by time'' fashion, with the time variable $t$
in the outer loop. Fortunately, the compression of the
response $\chi^0_{\mu\nu}$ according to equation (\ref{subspace_chi0})
can be done on the level of its spectral function $a_{\mu \nu }(t)$
separately for each time $t$.
\subsection{A construction of the self-energy in $O(N^{3})$ operations}
Although we use the spectral function given by eq (\ref{spectral_3}) to compute
the self-energy in the second of eqs (\ref{spectral_1}),
it is useful to think also
of eq (\ref{tensorform-self-energy}) that corresponds to the
Feynman diagram of Figure \ref{f:self-energy-diagram} and
which has the same locality properties.
Please recall that the product vertex in eq (\ref{VertexDefinition}) is sparse and local
and that the indices $\{a,a',\mu \}$ and $\{b,b',\nu\}$ must each reside on
a single pair of overlapping atoms. Once the indices $a,b$ of the self-energy
are specified, there are only $O(N^{0})$ possibilities of choosing the remaining
indices.
Therefore, the calculation of $\Sigma ^{ab}(t)$ requires
asymptotically $O(N^{2})$ operations
provided that the screened Coulomb interaction $W^{\mu\nu}$
in a basis of localized functions is known.
However, the local screened Coulomb interaction $W^{\mu \nu }$ in the original
space of dominant products does not fit into the computer memory as opposed
to the compressed, but non local, response $\chi_{mn}^{\mathrm{RPA}}$ that we store
(see eq (\ref{chi_RPA})).
We may, however, regain locality by decompressing $\chi_{mn}^{\mathrm{RPA}}$
at the cost of $O(N^{3})$ operations, using the identity
$W_{\mathrm{dynamical}}^{\mu \nu }=Z_{m}^{\mu}\chi_{mn}^{\mathrm{RPA}}Z_{n}^{\nu}$
in eq (\ref{chi_RPA}). As we cannot keep $W_{\mathrm{dynamical}}^{\mu \nu }$ in
the computer memory, we must try to ``decompress $\chi_{mn}^{\mathrm{RPA}}$ on the fly''.
To do this, let us transform the first of eqs (\ref{spectral_3}) into
the time domain.
For instance, for the positive part of the spectral density
$\sigma_{+}^{ab}(t)$ of the self-energy, we find
\begin{equation}
\sigma_{+}^{ab}(t)=2\pi V_{\mu }^{aa'}\rho _{a'b'}^{+}(t)
V_{\nu }^{b'b}\gamma_{+}^{\mu \nu }(t). \notag
\end{equation}%
Again, the representation in time of $\rho_{a'b'}^{+}(t)$ is prepared only once.
However, the transform
$\gamma_{+}^{\mu \nu}(t)=-\frac{1}{\pi }
Z_{m}^{\mu} \mathrm{Im}\chi_{mn}^{\mathrm{RPA}}(t)Z_{n}^{\nu }$
for all times does not fit into the computer memory.
Therefore we also decompress $\gamma_{+}^{\mu \nu }(t)$ time by time by letting
the time $t$ run in the outer loop, by computing $\gamma _{+}^{\mu \nu }(t)$
via decompression for a single time, and by storing only the result
$\sigma _{\pm }^{ab}(t)$ for each time. Once we have computed
$\sigma _{\pm }^{ab}(t)$ for all times, we can find $\sigma_{\pm }^{ab}(s)$
from it.
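Schematically, the time-by-time decompression may be organized as below. The array sizes are hypothetical; the vertex is treated as dense for brevity, while the actual code exploits its sparsity, and constant factors such as $2\pi$ are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, Norb, Nprod, k = 16, 6, 20, 4    # times, orbitals, dominant products, subspace dim.
V = rng.standard_normal((Nprod, Norb, Norb))   # vertex V^{a a'}_mu (dense stand-in)
Z = rng.standard_normal((k, Nprod))            # compressed basis (metric factors absorbed)
rho_t = rng.standard_normal((Nt, Norb, Norb))  # electronic spectral density rho^+(t)
chi_t = rng.standard_normal((Nt, k, k))        # stored compressed response at each time

sigma_t = np.empty((Nt, Norb, Norb))
for it in range(Nt):                 # time in the outer loop
    gamma = Z.T @ chi_t[it] @ Z      # decompress a single time slice only
    # sigma^{ab}(t) = V^{aa'}_mu rho_{a'b'}(t) V^{b'b}_nu gamma^{mu nu}(t)
    sigma_t[it] = np.einsum('mac,cd,ndb,mn->ab', V, rho_t[it], V, gamma)
    # gamma is discarded here; only sigma(t), of size Norb x Norb, is kept
```

Only one decompressed matrix $\gamma^{\mu\nu}(t)$ is ever held in memory, while the stored quantities remain the compressed response and the small $\sigma^{ab}(t)$.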
\section{A summary of the complete algorithm}
\label{s:algorithm-summary}
At this point, it is useful to briefly recapitulate the different
steps of our implementation of Hedin's $GW$ approximation.
It consists of the following steps:
\begin{enumerate}
\item Export the results of a DFT code that uses numerical
local atomic orbitals as a basis set.
Here we use the SIESTA code~\cite{siesta},
but other codes like the FHI-AIMS code~\cite{aims}
could also be used.
\item Set up a basis of dominant products in $O(N)$ operations. Here we
use the method of Ref.~\cite{DF}.
\item Set up a space of reduced dimension in which the
screened Coulomb interaction acts, exploiting the low effective
rank of the set of electron-hole pair vectors. Such a subspace is
determined by a set of $N_{\mathrm{rank}}$ vectors
$V _{\mu }^{EF}$ that correspond to
electron-hole pairs with a predetermined maximum value of $|E-F|$.
Further compression is obtained by diagonalizing the Coulomb metric projected
onto this subspace. This step requires $O(N^3)$ operations.
\item Choose low and high energy spectral windows and a frequency grid.
Prepare the electronic spectral density $\rho_{ab}(s)$ in these two windows from
the output of the DFT calculation.
\item Find $\chi_{mn}^{\mathrm{RPA}}$ by constructing and compressing
the local $\chi_{\mu \nu }^{0}$ ``on the fly'' in $O(N^{3})$ operations
and by solving for $\chi_{mn}^{\mathrm{RPA}}$
for all frequencies in $O(N^{3})$
operations. The construction must be done in two frequency windows.
Truncate the spectral
data where needed in order to avoid double counting and store
$\chi_{mn}^{\mathrm{RPA}}$ in the two windows.
\item Find the spectral function of the self-energy by decompressing
$\chi_{mn}^{\mathrm{RPA}}\rightarrow W^{\mu \nu }$ ``on the fly'' in $O(N^3)$ operations
and by convolving it with the electronic spectral function. Again this must
be done in two frequency windows and the results must be combined consistently.
\item Construct the self-energy from its spectral representation.
\item Solve Dyson's equation and find the density of states
from the interacting
Green's function. Obtain the desired spectroscopic information from
the density of states.
\end{enumerate}
Results obtained with the above algorithm will be discussed in the next section.
\section{Tests for molecules of intermediate size \label{s:results-2}}
The compression technique has been carefully tested in the case
of the benzene molecule. The tests show excellent agreement of the density
of states computed with and without compression. In this section, we will
consider larger molecules such as the
hydrocarbons naphthalene and anthracene~\cite{comp-details}.
These molecules are well known to differ in their
character as electron acceptors:
naphthalene, like benzene, has a
negative electron affinity,
while anthracene is an electron acceptor with
positive electron affinity.
A compression of the dominant product basis is necessary
to treat the molecules considered in this section. These molecules
are too large for a calculation without compression
on ordinary desktop machines because of memory requirements.
For naphthalene the dominant product basis contained 4003 functions,
which were reduced to 433 functions after compression.
In the case of anthracene, the dominant product basis contained
5796 functions, while the compressed basis had only 598 functions.
\begin{table}\centerline{
\begin{tabular}{|c|c|c||c|c||c|c|}
\hline
& \multicolumn{4}{c||}{Naphthalene} & \multicolumn{2}{c|}{Anthracene} \\
\hline
Energy-shift, & \multicolumn{2}{c||}{One window}
& \multicolumn{2}{c||}{Two windows}
& \multicolumn{2}{c|}{Two windows}\\
meV & IP, eV & EA, eV & IP, eV & EA, eV & IP, eV & EA, eV \\
\hline
200 & 7.24 & -0.68 & 7.27 & -0.79 & 6.44 & 0.20 \\
20 & 7.61 & -0.083 & 7.67 & -0.18 & 6.89 & 0.77 \\
\hline
Experiment & 8.14 & -0.191 & 8.14 & -0.191 & 7.439 & 0.530 \\
\hline
\end{tabular}}
\caption{
Ionization potentials and electron affinities for naphthalene and anthracene
and their dependence on the extension of the atomic orbitals.
For naphthalene we compared the results obtained with
spectral functions discretized in one or two windows.
The experimental data has been taken from the NIST server~\cite{NIST}.
For naphthalene and anthracene, vertical ionization potentials are
not available at the NIST database. Therefore we give experimental
ionization energies including effects of geometry relaxation.
\label{t:ip-ea-naphthalene-anthracene}}
\end{table}
\begin{figure}[htb]
\centerline{
\begin{tabular}{m{7cm}m{0.1cm}m{5cm}}
\centerline{\includegraphics[width=7cm,viewport=50 50 410 300, angle=0,clip]{dos-naphthalene-200-gw-win1-win12.pdf}} &&
\centerline{\includegraphics[width=5cm, angle=0,clip]{naphthalene.pdf}} \\[-0.2cm]
\centerline{a)} && \centerline{b)}
\end{tabular}}
\caption{a) Density of states for naphthalene.
The results
have been obtained with our most extended basis orbitals (corresponding
to an energy shift of 20~meV~\cite{Artacho-about-Energy-Shift}).
We can appreciate the accuracy of the two-window technique.
b) Ball and stick model of naphthalene produced with the
XCrysDen package \cite{xcrysden}.
\label{f:dos-naphthalene}}
\end{figure}
Table \ref{t:ip-ea-naphthalene-anthracene}
shows our results for naphthalene and anthracene.
The computational details were similar to those used for benzene
and already described in section~\ref{s:results-1}.
The two-window results were obtained with frequency grids of only
128 points for naphthalene (in the ranges $\pm$~8.32 eV and $\pm$~80 eV),
and yet they provide an accuracy at the 0.1 eV level,
while the one-window calculation was done again with
1024 frequencies (in the range of $\pm$~80 eV).
In the case of anthracene, results using frequency grids
of 256 points (in the ranges $\pm$~16 eV and $\pm$~90 eV) are presented.
Again we find a large improvement over the position
of the Kohn-Sham levels in a DFT-LDA calculation.
The agreement with the experimental data
is certainly improved, although there are still significant
deviations, particularly with respect to the reported ionization
potentials. Interestingly, however, our calculations
recover the important
qualitative feature of anthracene being an electron acceptor.
In the case of anthracene, the results obtained with our most
extended basis orbitals (energy shift of 20~meV) are
in excellent agreement with the recent calculations
by Blase \textit{et al.}~\cite{Blase}. In the case
of naphthalene we can see that the two frequency windows
technique introduces only tiny differences (below 0.1~eV)
in the positions of the HOMO and LUMO levels.
The corresponding DOS is shown in
figures \ref{f:dos-naphthalene} and \ref{f:dos-anthracene}.
One can see in figure \ref{f:dos-anthracene} that
it is the dynamical part of the self-energy, including
correlation effects, that turns our theoretical
anthracene into an acceptor, while including only the
instantaneous self-energy predicts anthracene to be a donor.
\begin{figure}[htb]
\centerline{
\begin{tabular}{m{7cm}m{0.1cm}m{7cm}}
\centerline{\includegraphics[width=7cm,viewport=50 50 410 300, angle=0,clip]{dos-anthracene-020.pdf}} &&
\centerline{\includegraphics[width=7cm, angle=0,clip]{anthracene.pdf}} \\[-0.2cm]
\centerline{a)} && \centerline{b)}
\end{tabular}}
\caption{
\label{f:dos-anthracene}
a) Density of states for anthracene.
The results have been obtained using
the extended basis orbitals corresponding
to an energy shift of $20$~meV~\cite{Artacho-about-Energy-Shift}.
Here we compare calculations using
the instantaneous (exchange-only) self-energy and
the full self-energy (including correlation effects).
The correlation component of the self-energy is crucial
to reproduce the experimental observation that anthracene
is an acceptor. In contrast, the exchange-only calculation locates
the LUMO level above the vacuum level.
b) Ball and stick model of anthracene produced with the
XCrysDen package \cite{xcrysden}.
}
\end{figure}
These results for molecules of modest size are just a
first application of our algorithm. With its favorable scaling,
our method aims at $GW$ calculations
for larger molecules of the type used in organic semiconductors.
However, before carrying out such studies, we should reduce the initial
number of dominant products, i.~e. the size of the product basis before any compression is applied to it.
\section{Conclusions and outlook}
\label{s:conclusion}
In the present paper we have described our approach to
Hedin's $GW$ approximation for finite systems. This approach
provides results for densities of states and gaps that are
in reasonable agreement with experiment and it requires only
modest computer resources~\cite{comp-details} for the systems
presented here. The complexity of our
algorithm scales asymptotically as the third power of the number of atoms,
while the needed memory grows with the second power of the number of atoms.
We hope that these features, along with a further reduction
of the size of the basis describing the
products of localized
orbitals, will allow us to apply our method to describe
the electronic structure of large molecules
and contribute to an ab-initio design of organic semiconductors
for technological applications.
The algorithm described here is built upon the LCAO technique~\cite{siesta}
and uses a previously constructed basis in the space of orbital products
that preserves their locality and avoids fitting procedures~\cite{DF}.
Moreover, a (non local) compression technique has been developed to
reduce the size of this basis. This allows us to store the whole
matrix representation of the screened Coulomb interaction at
all time/frequencies
in random access memory
while significantly reducing the computational time.
The time (and frequency) dependence
of observables is treated with the help of spectral functions. This avoids
analytical continuations and allows for operations to be
accelerated by the use of FFTs.
As a useful byproduct of our focus on spectral functions we obtain,
as a primary result,
an electronic spectral function of the type observed in photo-emission,
from which we
then read off the HOMO and LUMO levels.
We have applied our method to benzene, naphthalene and anthracene.
As expected, we find that our estimations of the HOMO and LUMO positions
and the corresponding gaps are significantly improved over the results
obtained from the Kohn-Sham eigenvalues in a plain DFT-LDA calculation.
Our results approach the experimental data but, as observed
by other authors~\cite{Blase}, these ``single-shot'' $GW$-LDA calculations
(or $G_0W_0$-LDA using a more
standard terminology)
still present sizeable deviations from the measured ionization
potentials and electron affinities.
In general, our results are in good agreement with previous
$G_0W_0$-LDA calculations for similar
systems~\cite{Blase,BaroniGW,Tiago-Chelikowsky-and-Frauenheim}.
Thus, we expect further improvements
by iterating our procedure until self-consistency or, as suggested
by other authors in the case of relatively small
molecules~\cite{Hahn2005,Blase,DanishWannierMolecules},
by using Hartree-Fock results as an input for our $G_0W_0$ calculations.
For periodic systems it is well known that $G_0W_0$-LDA systematically
underestimates the size of the gaps of semiconductors. The best results
so far were found using the so-called ``improved quasi-particle method''
\cite{Schilfgaarde-Kotani-Faleev:2006, chinese-impl-of-kotani}.
A realization of this method in our framework should also improve
the precision of our results.
The method presented in this paper depends crucially
on the quality and size of the original LCAO basis.
A possible limitation is that
the typical LCAO bases used in electronic structure calculations
are constructed and optimized in order to describe ground-state
properties~\cite{Artacho-about-Energy-Shift}. However,
it is possible to optimize an LCAO basis, for example
using a technique similar to that described in Ref.~\cite{Daniel},
to represent electronically excited states. This will increase
the accuracy and applicability of the method and could even
allow us to reduce the size of the original LCAO basis used to
represent the electronic states.
Moreover, a comparison of our basis with that of other authors
indicates that the (local) basis
of dominant products used in this paper
can be reduced in dimension without changing
the physical results~\cite{Blase}.
Such a reduction should lead to an important improvement
of the prefactor in our implementation
of $GW$ and, as a side effect, introduce a similar
acceleration in our published TDDFT algorithm \cite{PK+DF+OC} that is already
competitive, in its present form, with other TDDFT codes.
The quantities calculated in the presented algorithm can be
useful in other branches of many-body perturbation theory.
For instance, the screened Coulomb interaction is
a crucial ingredient of the Bethe-Salpeter technique that
is needed to study excitons and the optical response of excitonic systems.
In this context it is interesting to note
\cite{Benedict} that the solution of the Bethe-Salpeter equation scales
as $O(N^{3})$ for clusters of size $N$, at least when suppressing
the dynamic part of
the fermion self-energy and the dynamic part of the screening of
the Coulomb interaction. Calculations of
the transport properties of molecular junctions~\cite{Brandbyge2002}
are another possible application of the $GW$ approach described here.
\section*{Acknowledgments}
We thank Olivier Coulaud for useful advice on computing, and Isabelle Baraille and Ross Brown
for discussions on chemistry, both in the context of the ANR project ``NOSSI''.
James Talman has kindly provided essential computer algorithms and codes,
and we thank him, furthermore, for inspiring discussions and
for correspondence. We are indebted to the organizers
of the ETSF2010 meeting at Berlin
for feedback and perspective on the ideas of this paper.
Arno Schindlmayr, Xavier Blase and Michael Rohlfing helped with extensive
correspondence on various aspects of the $GW$ method.
DSP and PK acknowledge financial support from
the Consejo Superior de Investigaciones Cient\'{\i}ficas (CSIC),
the Basque Departamento de Educaci\'on, UPV/EHU (Grant No. IT-366-07),
the Spanish Ministerio de Ciencia e Innovaci\'on
(Grants No. FIS2007-6671-C02-02 and FIS2010-19609-C02-02) and,
the ETORTEK program funded by the Basque Departamento de Industria and the
Diputaci\'on Foral de Guipuzcoa.
\section{Introduction}
This paper builds on an existing notion of group responsibility in \cite{coalitional} and proposes two ways to define the degree of group responsibility: \emph{structural} and \emph{functional} degrees of responsibility. These notions measure the potential responsibilities of (agent) groups for avoiding a state of affairs. According to these notions, a degree of responsibility for a state of affairs can be assigned to a group of agents if, and to the extent that, the group has the potential to preclude the state of affairs.
\vspace{-.3cm}
\section {Preliminaries}
In this work, the behaviour of the multi-agent system is modelled as a \emph{Concurrent Game Structure} (CGS) \cite{ATL}, which is a tuple $M= (N,Q,Act, d, o)$, where $N=\{1, \dots ,k\}$ is a set of agents, $Q$ is a set of states, $Act$ is a set of actions, the function $d:N \times Q \to \mathcal{P}(Act)$ identifies the set of available actions for each agent in $N$ at each state $q\in Q$, and $o$ is a transition function that assigns a state $q'=o(q,\alpha_1, \dots ,\alpha_k)$ to a state $q$ and an action profile $(\alpha_1, \dots ,\alpha_k)$ in which each agent $i\in N$ chooses the action $\alpha_i$. Finally, a \emph{state of affairs} refers to a set $S \subseteq{Q}$, and $\bar{S}$ denotes the set $Q\setminus S$. In the rest of this paper, we say $C\subseteq{N}$ is (weakly) $q$-responsible for $S$ iff it can preclude $S$ in $q$ (see \cite{coalitional} for formal details).
Let $M$ be a multi-agent system, $S$ a state of affairs in $M$, $C\subseteq{N}$ an arbitrary group, and $\hat{C}$ a (weakly) $q$-responsible group for $S$ in $M$.
\begin{mydef}[Power measures]\label{def:one}
We say that the structural power difference of $C$ and $\hat{C}$ in $q \in Q$ with respect to $S$ in $M$, denoted by $\Theta_q^{S,M}(\hat{C},C)$, is equal to the cardinality of $\hat{C}\setminus C$. Moreover, we say that $C$ has a power acquisition sequence $\langle \bar{\alpha_1}, \dots ,\bar{\alpha_n} \rangle$ in $q\in Q$ for $S$ in $M$ iff for $q_i \in Q$, $o(q_i,\bar{\alpha_i})=q_{i+1}$ for $1\leq i \leq n$ such that $q=q_1$ and $q_{n+1}=q'$ and $C$ is (weakly) $q'$-responsible for $S$ in $M$.
\end{mydef}
\vspace{-.3cm}
\section{Structural Degree of Responsibility}
In our conception of \emph{Structural Degree of Responsibility} ($\mathcal{SDR}$), we say that any (agent) group that shares members with the responsible groups, should be assigned a degree of responsibility that reflects its proportional contribution to the responsible groups. Accordingly, the relative size of a group and its share in the responsible groups for the state of affairs are substantial parameters in our formulation of the structural responsibility degree. We would like to emphasize that this concept of responsibility degree is supported by the fact that beneficiary parties, e.g., lobbyists in the political context, do proportionally invest their limited resources on the groups that can play a role in some key decisions.
\begin{mydef}[Structural degree of responsibility] \label{def:SDR}
Let $\mathbb{W}_q^{S,M}$ denote the set of all (weakly) $q$-responsible groups for state of affairs $S$ in multi-agent system $M$, and $C\subseteq{N}$ be an arbitrary group. In case $\mathbb{W}_q^{S,M}=\varnothing$, the structural degree of $q$-responsibility of any $C$ for $S$ in $M$ is undefined; otherwise, the structural degree of $q$-responsibility of $C$ for $S$ in $M$ denoted $\mathcal{SDR}_q^{S,M}(C)$, is defined as follows:
$$\mathcal{SDR}_q^{S,M}(C)=\max\limits_{\hat{C}\in \mathbb{W}_q^{S,M}}(\{i \mid i=1-\frac{\Theta_q^{S,M}(\hat{C},C)}{\mid \hat{C} \mid}\})$$
\end{mydef}
Intuitively, $\mathcal{SDR}_q^{S,M}(C)$ measures the highest contribution of a group $C$ to a (weakly) $q$-responsible group $\hat{C}$ for $S$. Hence, the structural degree of responsibility lies in the range $[0,1]$.
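Since $\Theta_q^{S,M}(\hat{C},C)=|\hat{C}\setminus C|$ implies $1-\Theta_q^{S,M}(\hat{C},C)/|\hat{C}|=|\hat{C}\cap C|/|\hat{C}|$, Definition \ref{def:SDR} admits a direct prototype (a minimal sketch, assuming the family $\mathbb{W}_q^{S,M}$ of responsible groups has already been computed):

```python
def sdr(C, responsible_groups):
    """Structural degree of responsibility of group C, given the
    precomputed family of (weakly) q-responsible groups."""
    if not responsible_groups:
        return None  # undefined when no group can preclude S
    # 1 - |R \ C| / |R|  equals  |R & C| / |R|
    return max(len(R & C) / len(R) for R in responsible_groups)
```

For example, with responsible groups $\{1,2\}$ and $\{2,3,4\}$, the group $\{1,3\}$ receives degree $\max(1/2,1/3)=1/2$.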
\vspace{-.3cm}
\section{Functional Degree of Responsibility}
\emph{Functional Degree of Responsibility} ($\mathcal{FDR}$) addresses the dynamics of preclusive power of a group of agents (in the sense of \cite{power}) with respect to a given state of affairs. We deem that a reasonable differentiation could be made between the groups which do have the chance of acquiring the preclusive power and those that do not have any chance of power acquisition. This notion addresses the eventuality of a state in which a group possesses the preclusive power regarding the state of affairs. This degree is formulated based on the notion of \emph{power acquisition sequence} (Definition \ref{def:one}) by tracing the number of necessary state transitions from a source state, in order to reach a state in which the group in question is responsible for the state of affairs.
\begin{mydef}[Functional degree of responsibility]\label{def:fdr}
Let $\mathbb{P}_q^{S,M}(C)$ denote the set of all power acquisition sequences of $C\subseteq{N}$ in $q$ for $S$ in $M$. Let also $\ell$ = $\min\limits_{k \in \mathbb{P}_q^{S,M}(C)} (\{ i \mid i=length(k) \})$ be the length of a shortest power acquisition sequence. The functional degree of $q$-responsibility of $C$ for $S$ in $M$, denoted by $\mathcal{FDR}_q^{S,M}(C)$, is defined as follows:
\[ \mathcal{FDR}_q^{S,M}(C) = \left\{
\begin{array}{l l}
0 & \quad \text{if }\ \mathbb{P}_q^{S,M}(C)=\varnothing\\
\frac{1}{(\ell+1)} & \quad \text{otherwise}
\end{array} \right.\]
\end{mydef}
The notion of $\mathcal{FDR}_q^{S,M}(C)$ is formulated based on the minimum length of power acquisition sequences, which is taken to be $0$ if $C$ is (weakly) $q$-responsible for $S$. Hence, the functional degree of $q$-responsibility of such a $C$ for $S$ is equal to $1$. If there exists no power acquisition sequence for $C$, then the minimum length of a power acquisition sequence is taken to be $\infty$ and the functional degree of $q$-responsibility of $C$ for $S$ becomes $0$. In other cases, $\mathcal{FDR}_q^{S,M}(C)$ is strictly between $0$ and $1$.
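Operationally, computing $\mathcal{FDR}$ reduces to a shortest-path search over the transition graph of $M$: a breadth-first search from $q$ finds the length $\ell$ of a shortest power acquisition sequence. A minimal sketch (assuming the successor relation and the set of states in which $C$ is weakly responsible are given explicitly):

```python
from collections import deque

def fdr(q, successors, responsible_states):
    """Functional degree of q-responsibility: 1/(l+1), where l is the
    length of a shortest power acquisition sequence; 0 if none exists."""
    dist = {q: 0}
    queue = deque([q])
    while queue:
        state = queue.popleft()
        if state in responsible_states:
            return 1.0 / (dist[state] + 1)
        for nxt in successors.get(state, ()):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return 0.0  # no power acquisition sequence exists
```

If $q$ itself is responsible, $\ell=0$ and the degree equals $1$; a responsible state two transitions away yields $1/3$.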
\vspace{-.3cm}
\section{Conclusion}\label{Sec:Conc}
The proposed notions can be used as a tool for analyzing the potential responsibility of agent groups towards a state of affairs. In our approach, the \emph{structural degree of responsibility} captures the responsibility of an agent group based on the accumulated preclusive power of the included agents while the \emph{functional degree of responsibility} captures the responsibility of a group of agents due to the potentiality of reaching a state in which it has the preclusive power. In the full version of the paper, we specify pertinent properties of the notions and consider additional semantics.
\vspace{-.3cm}
\bibliographystyle{plain}
\section{Introduction}
The possible relevance of non-Archimedean geometry and $p$-adic number theory within different contexts of theoretical physics has been discussed for more than thirty years. Originally, $p$-adic concepts were introduced in string theory \cite{Volovich:1987wu,Volovich:1987nq,Freund:1987kt,Brekke:1988dg,Frampton:1987sp,Frampton:1988kr} as a model of spacetime beyond the Planck distance, where one should not expect the Archimedean axiom to hold true. The concept of ultrametric spaces and the corresponding mathematical machinery percolated into other fields of knowledge from high-energy \cite{Manin,IA-padic,Arefeva:2004qqr} to condensed matter physics \cite{parisi,kozyrev,Bentsen:2019rlr,Zharkov:2012qea} and biology \cite{bio1, bio2}, since they were appreciated for being a natural language to describe hierarchical systems \cite{VVZ,Dragovich:2017kge}.
Recently, interest in $p$-adic quantum field theory has revived due to the possible relations between geometry of non-Archimedean number fields and the holographic correspondence \cite{Gubser2016}. The $p$-adic version of the $AdS/CFT$ duality with bulk geometry represented by Bruhat-Tits tree has been introduced \cite{Gubser2016,Gubser:2016htz,Heydeman:2016ldy,Stoica:2018zmi}. Since the boundary quantum field theory in this case is defined over a $p$-adic number field (either $\mathbb{Q}_p$ or its unramified extension $\mathbb{Q}_{p^n}$), further development of non-Archimedean holographic duality requires a deeper insight into the structure of $p$-adic field theory. Among other things, the Wilson renormalization group has been studied perturbatively, and critical exponents were computed in hierarchical bosonic and fermionic models \cite{Lerner1,Lerner:1991cd,Lerner:1994th}, for large-N models \cite{GubserON}, and for scalar field theory defined over mixed ($p$-adic/real) number fields \cite{Gubser:2018ath}. In this note, we attempt to make a step further in this direction and construct a non-Archimedean analogue of the Coleman-Weinberg effective potential \cite{ColemanWeinberg}. The Coleman-Weinberg potential is an important and illustrative concept which allows one to incorporate quantum effects at the level of the theory action, and provides a natural language to speak of symmetry breaking in interacting field theory \cite{Dolan:1973qd}. The renormalization group flow is convenient to represent in terms of the effective potential as well. In ${\mathbb R}^n$ scalar field theory, one can derive the renormalization group by computing scattering amplitudes or integrating out UV momentum shells in the Wilsonian approach. In $p$-adic field theory, coordinates/momenta and wave functions take values in different number fields, making certain constructions normally used to describe RG flows (e.g. the Callan-Symanzik equation) tricky to define. 
This gives an additional motivation to study the non-Archimedean effective potential.
A critical issue that one almost unavoidably encounters when trying to construct a quantum field theory over $p$-adic numbers is the lack of a well-defined space-time signature, which makes the very concept of Lorentzian or Euclidean symmetry poorly defined. Pragmatically speaking, it means that Wick rotation cannot be used to bypass difficulties emerging in the Lorentzian case by performing analytical continuation to Euclidean time. For non-Archimedean $AdS_2/CFT_1$ holography, this problem has been addressed in \cite{Stoica-Lorentz}, where a possible approach to defining spacelike and timelike geodesics in the $p$-adic bulk was proposed: one constructs the quadratic extension $\mathbb{Q}_p\left[\sqrt{\tau}\right]$ of the number field and expands the original ``spacelike'' Bruhat-Tits tree with a set of branches that are postulated to be timelike. However, to a large extent this problem remains unresolved, especially outside of the holographic context, and one has to cope with it to compute observables in $p$-adic quantum field theory.
The Coleman-Weinberg potential is usually computed in Euclidean signature, while its Lorentzian treatment leads to the appearance of certain pathological structures such as logarithmic divergences and imaginary terms in the potential already in field theories defined over ${\mathbb R}^n$.
Since in the $p$-adic case, there is no way to do analytical continuation, we take the measure of the path integral $e^{-S}$ rather than $e^{iS}$ as a starting point of our consideration.
We compute the one-loop effective potential of a real-valued $\lambda\phi^4$ scalar field theory with quadratic dispersion defined on $\mathbb{Q}_{p^n}$ space, mainly focusing on the $n=1$ (``$p$-adic quantum mechanics'' \cite{VVZ}), $n=2$ and $n=4$ cases. The quantum corrections to the potential are given by integrals over ${\mathbb Q}_{p^n}$ that can be expressed as infinite (divergent) series. We find a tractable approximation that allows us to evaluate them and, after renormalization, obtain an explicit expression for the effective potential. In all considered dimensions, the resulting potentials have a structure very similar to that of their real analogues. Moreover, in the formal $p\rightarrow 1$ limit, an exact matching occurs. A very peculiar behavior is observed in the opposite, $p \rightarrow\infty$ limit, where the potential acquires a logarithmic term $\ln\left(1+\lambda \phi_b^2/2\right)$.
However, given the fact that this takes place in any dimension including $n=1$, which is suspicious even in the rather exotic non-Archimedean setting, we think that this could either be an artifact of the one-loop approximation or should be cured with renormalization group transformation of the potential.
The paper has the following structure. In Sec. \ref{sec:main_section}, we define the model, obtain a formal expression for the Coleman-Weinberg potential, and perform its renormalization for three particular cases, $n=1, \,2,\, 4$. In Sec. \ref{sec:pto1}, we define the formal $p\rightarrow 1$ limit, and use it to relate the p-adic effective potential to its real cousin. In Sec. \ref{sec:ptoinfty}, we consider the $p\rightarrow\infty$ limit. In Sec. \ref{sec:EM_est}, an alternative approach to computing the effective potential via the Euler-Maclaurin formula is proposed, and its validity limits are discussed. Sec. \ref{sec:summary} briefly summarizes the obtained results. App. A contains the definition of the unramified extension $\mathbb{Q}_{p^n}$. App. B proves an identity relating integrals over $\mathbb{Q}_{p^n}$ and $\mathbb{R}^n$. App. C reviews the standard Coleman-Weinberg calculation in real field theory.
\section{Coleman-Weinberg potential in $p$-adic field theory} \label{sec:main_section}
We shall focus on the real-valued scalar field theory defined over the unramified extension $\mathbb{Q}_{p^n}$ of $p$-adic number field:
\begin{equation}\label{eq:main}
S = \int\limits_{{\mathbb Q}_{p^n}}dk \widetilde{\phi}(-k) ( |k|^{s}) \widetilde{\phi}( k)+\frac{\lambda}{4!} \int\limits_{\mathbb{Q}_{p^n}}dx \phi(x)^4, \,\,\,\, x \in {\mathbb{Q}}_{p^n}
\end{equation}
Here $|\,.\,|=|\,.\,|_{p^n}=|\,.\,|_p$ is the norm on $\mathbb{Q}_{p^n}$, $k$ is the $p$-adic ``momentum'', and $\widetilde\phi$ is the Fourier transform of $\phi$:
\be
\phi(x)=\int\limits_{\mathbb{Q}_{p^n}} dk\, \chi(k x)\widetilde\phi(k),
\ee
where $\chi(x)=\exp{2\pi i \left\{x\right\}}$
is the additive character on $\cQ_{p^n}$. Dispersion $s$ corresponds to the Vladimirov derivative ``power'' in the configuration space:
\be
D^s\phi(x)=\frac{1}{\Gamma_p(-s)}\int dy \frac{\phi(y)-\phi(x)}{|y-x|^{1+s}}.
\ee
Our aim is to compute the one-loop effective potential for the theory given by \eqref{eq:main} with $p, s$ and $n$ fixed. Here $n$ plays the role of space-time ``dimension'' as explained in \cite{Gubser2016}, so one can think of the $n=1$ case as of $p$-adic quantum mechanics, and $n=4$ corresponds to a four-dimensional scalar field. Since $\phi$ is real-valued, the derivation of the effective potential follows the strategy of the conventional Coleman-Weinberg calculation, but with a different propagator:
\be
G(k)=\frac{1}{|k|^s}.
\ee
As usual, we split $\phi$ into the background field $\phi_b$ and the dynamical field (see App. \ref{rvCW} for the outline of the conventional calculation), and sum all one-loop diagrams having $2m$ background field external legs each. Assigning a $(- \lambda\phi_b^2/2)$ factor to each vertex and taking into account the symmetry factor $1/2m$, we write
\be
\Delta \Gamma(\phi_b)= V_{p^n}\underset{m}{\sum}\int \frac{1}{2m}\Big( \frac{-\lambda \phi_b^2}{2|k|^s} \Big)^m dk=-\frac{V_{p^n}}{2}\int \ln{\Big(1+\frac{\lambda \phi_b^2}{2|k|^s} \Big)} dk,
\ee
where $V_{p^n}$ is the (infinite) normalization constant corresponding to the volume of ${\mathbb{Q}}_{p^n}$.
Thus the one-loop correction to the effective potential is given by
\be
\Delta V = - \frac{\Delta \Gamma(\phi_b)}{V_{p^n}}=\frac{1}{2} \int\limits_{\cQ_{p^n}} dk \ln\Big(1+\frac{\lambda \phi_b^2}{2|k|^s}\Big). \label{eq:dGamma_starting}
\ee
This is a direct analogue of the conventional expression for the Coleman-Weinberg potential.
We should remark that here we used the $(- \lambda\phi_b^2/2)$ vertex factor from the very beginning instead of taking $ i\lambda\phi_b^2/2$ and performing analytical continuation to Euclidean signature later on. The reason for doing this is that, in the non-Archimedean case, the notion of space-time signature is not well-defined, and the Wick rotation cannot be performed to eliminate the logarithmic singularity and imaginary contributions at small $|k|$. Thus we mimic the Euclidean signature by using a prescription for the vertex that corresponds to a real measure in the path integral of the theory.
Since the integrand in \eqref{eq:dGamma_starting} depends only on the ${\mathbb Q}_{p^n}$ norm, one can use the formula (see \eqref{eq:appendix_basic_integral}):
\begin{equation}\label{eq:Qp_integration}
\int\limits_{{\mathbb Q}_{p^{n}}}f(|x|)dx=(1-p^{-n})\sum^{\infty}_{i=-\infty}p^{ni} f(p^i),
\end{equation}
which leads to the formal expression for the one-loop correction to the effective potential:
\begin{equation}
\Delta V (\phi_b)= \frac12 (1-p^{-n})\sum^{\infty}_{i=-\infty}p^{ni}\ln(1+\frac{\lambda\phi_b^2}{2p^{si}})
\label{eq:potential_Qp^n}.
\end{equation}
This series is the starting point of our analysis. In the general case, it is divergent, so we have to regularize it by imposing a finite-scale cut-off by analogy with the real case:
\begin{equation}
\Delta V(\alpha)=\frac12(1-p^{-n})\sum^{M}_{i=-\infty}p^{n i}\ln(1+\alpha p^{-s i})
\label{eq:potential_Qp^n_reg},
\end{equation}
where $\alpha=\lambda\phi^2_b/2$. One can think of the number $M$ as the logarithm of the corresponding ultraviolet momentum scale $|k|_{UV}=\Lambda=p^M$.
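The regularized sum above is straightforward to evaluate numerically for given $(p,n,s,M)$: the $i\to-\infty$ tail converges, since $p^{ni}\ln(1+\alpha p^{-si})\sim s\,|i|\,p^{ni}\ln p\to 0$, and for $n<s$ the sum also saturates as $M$ grows. A minimal numerical sketch:

```python
import math

def delta_V(alpha, p, n, s, M, tail=200):
    """0.5*(1 - p**-n) * sum_{i=-tail}^{M} p^{n i} * ln(1 + alpha * p^{-s i})."""
    pref = 0.5 * (1.0 - p**(-n))
    return pref * sum(float(p)**(n * i) * math.log1p(alpha * float(p)**(-s * i))
                      for i in range(-tail, M + 1))
```

For instance, with $p=2$, $n=1$, $s=2$ (so that $n<s$), increasing the cutoff $M$ beyond a few tens of shells changes the result only at the $10^{-6}$ level.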
A more subtle feature of $\Delta V$ is that, as a function of the background field $\phi_b$, it contains a set of logarithmic singularities at the points $\phi_b^2=-2p^{si}/\lambda$ if $\lambda<0$.
These points accumulate around $\phi_b=0$ and $\phi_b=+\infty$. Since we deal with finite $\phi_b$, only the divergences around $\phi_b=0$ matter.
These singularities arise only in the tachyonic case $\lambda<0$, while the stable case $\lambda>0$ leads to a non-singular expression.
To proceed further, we split the sum \eqref{eq:potential_Qp^n} into two parts. For that, we introduce the index $I_{\alpha}$ as
\be
I_{\alpha}=\left[\ln|\alpha|/(s\ln p)\right],
\ee
where the brackets denote the integer part.
If $i>I_{\alpha}$ and $|\alpha| p^{-si}<1$, the logarithm in \eqref{eq:potential_Qp^n} can be expanded as a convergent Taylor series, $\ln(1+ x)= x-x^2/2+\ldots$
For $i<I_{\alpha}$, we rewrite and expand the logarithm as $\ln(1+ x)=\ln( x)+\ln(1+ 1/x)=\ln( x)+ 1/x-1/(2x^2)+\ldots$
Summing these two parts after expansion, we obtain an expression that is valid everywhere except for the aforementioned singular points $\alpha = -p^{si}$:
\begin{gather}\label{eq:firstest}
2\Delta V(\alpha)=-(1-p^{-n})\sum^{M}_{i=I_{\alpha}}p^{n i}\sum^{\infty}_{l=1}\frac{(-\alpha)^l p^{-sli}}{l}+(1-p^{-n})\sum^{I_{\alpha}-1}_{i=-\infty}p^{n i}\ln(\alpha p^{-s i})-\\
(1-p^{-n})\sum^{I_{\alpha}-1}_{i=-\infty}p^{n i}\sum^{\infty}_{l=1}\frac{p^{sli}}{l(-\alpha)^l }= \nonumber \\ \label{eq:double_expansion}
-(1-p^{-n})\sum^{\infty}_{l=1}\frac{(-\alpha)^l (p^{(M+1)(n-sl)}-p^{I_{\alpha}(n-sl)})}{l(p^{n-sl}-1)}+(1-p^{-n})\left[- \frac{p^{n(I_{\alpha}-1)}}{p^{-n}-1}\ln(\alpha) -\right.\\ \left.
p^{n(I_{\alpha}-2)}\frac{(I_{\alpha}-1)p^{n}-I_{\alpha}}{(1-p^{-n})^2}s\ln p \right]
-(1-p^{-n})\sum^{\infty}_{l=1}\frac{ p^{I_{\alpha}(n+sl)}}{l(-\alpha)^l(p^{n+sl}-1)}\nonumber
\end{gather}
If $\alpha>0$, the series converges and no issues arise.
If $\alpha<0$, the series in the first line of \eqref{eq:double_expansion} diverges; this will be cured by the renormalization procedure.
This sum can be approximated by neglecting the integer-part operation in $I_\alpha$ and taking $I_{\alpha}=\ln|\alpha|/(s\ln p)$, so that $p^{I_{\alpha}}=|\alpha|^{\frac{1}{s}}$. We obtain:
\begin{gather}
2\Delta V(\alpha)=-(1-p^{-n})\sum^{l_0}_{l=1}\frac{(-\alpha)^l \Lambda^{n-sl}}{l(1-p^{sl-n})}+\label{eq:dGamma_no_log}\\ p^{-n}|\alpha|^{\frac{n}{s}}\ln(\sign\alpha)
+\frac{p^{-n}}{1-p^{-n}}|\alpha|^{\frac{n}{s}}s\ln p+\nonumber \\
(1-p^{-n})|\alpha|^{\frac{n}{s}}\sum^{\infty}_{l=1}\frac{(-\sign\alpha)^l}{l}\Big( \frac{1}{p^{n-sl}-1}-\frac{1}{p^{n+sl}-1}\Big). \nonumber
\end{gather}
Here we introduced $l_0=[n/s]$ to separate the terms that diverge as $\Lambda\rightarrow\infty$. If $l_0=n/s$, a logarithmic term emerges:
\begin{gather}\label{eq:dGamma_log}
2\Delta V(\alpha)=-(1-p^{-n})\sum^{l_0-1}_{l=1}\frac{(-\alpha)^l \Lambda^{n-sl}}{l(1-p^{sl-n})}-(1-p^{-n})\frac{(-\alpha)^{\frac{n}{s}}}{l_0 s\ln p}\ln\frac{\Lambda^s}{-\alpha}+\\ p^{-n}|\alpha|^{\frac{n}{s}}\ln(\sign\alpha)
+\frac{p^{-n}}{1-p^{-n}}|\alpha|^{\frac{n}{s}}s\ln p+\nonumber \\
(1-p^{-n})|\alpha|^{\frac{n}{s}}\sum^{\infty}_{l=1, l\neq l_0}\frac{(-\sign\alpha)^l}{l} \frac{1}{p^{n-sl}-1}-(1-p^{-n})|\alpha|^{\frac{n}{s}}\sum^{\infty}_{l=1}\frac{(-\sign\alpha)^l}{l}\frac{1}{p^{n+sl}-1}. \nonumber
\end{gather}
The singular terms can be removed by means of the standard renormalization protocol. For $\alpha>0$, we need to take care only of the terms dependent on $\Lambda$. For $\alpha<0$, we also need to remove the divergent series (the last sum in \eqref{eq:dGamma_log}).
In both cases, the renormalization conditions are\footnote{Note that in the case $n=2, s=2$ the renormalization conditions are slightly different, see Sec.~\ref{sec:n2s2}.}:
\begin{gather} \label{eq:renormalization_conditions}
V^{(4)}_{\phi_b}(\phi_0)=\lambda,\\
V^{''}_{\phi_b}(0)=m_R^2=0,\nonumber
\end{gather}
where we introduced additional scale $\phi_0$ to step away from the logarithmic singularity.
Let us now perform the renormalization procedure for three concrete choices of $n$ and $s$.
\subsection{Case $n=1$, $s=2$}
Series \eqref{eq:dGamma_no_log} converges for positive $\lambda$ and contains no terms dependent on $\Lambda$ since $l_0=0$, so we can readily evaluate it without the need to renormalize:
\begin{gather} \label{eq:case_n1_s2}
\Delta V=\frac{1}{2}\sqrt{\frac{|\lambda|}{2}}|\phi_b|\left[
\frac{p^{-1}}{1-p^{-1}}2\ln p+ \right. \left.
(1-p^{-1})\sum^{\infty}_{l=1}\frac{(-1)^l}{l}\left( \frac{1}{p^{1-2l}-1}-\frac{1}{p^{1+2l}-1}\right) \right]
\end{gather}
Convergence of the latter sum follows from the Leibniz criterion.
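As an independent numerical sanity check (ours, not part of the original derivation), the closed form \eqref{eq:case_n1_s2} can be compared with a direct evaluation of the regularized sum \eqref{eq:potential_Qp^n}, choosing $\alpha=\lambda\phi_b^2/2$ so that $I_\alpha$ is an exact integer (where no approximation of the integer part is involved); a minimal sketch:

```python
import math

p = 3.0
alpha = p**2      # alpha = lambda*phi_b^2/2 > 0, chosen so that I_alpha = 1 exactly

# Direct evaluation of the regularized sum
# Delta V = (1/2)(1 - 1/p) * sum_i p^i ln(1 + alpha p^{-2i});
# both tails decay geometrically, so a finite window suffices.
direct = 0.5 * (1 - 1/p) * sum(
    p**i * math.log(1 + alpha * p**(-2 * i)) for i in range(-300, 301))

def inv(e):
    """1/(p^e - 1), returning 0 when p^e would overflow a double."""
    return 0.0 if e * math.log(p) > 700 else 1.0 / (p**e - 1.0)

# Closed form for n = 1, s = 2: Delta V = (1/2) sqrt(alpha) * N(p),
# where N(p) is the bracket of the series-summation result above.
N = (1/p) / (1 - 1/p) * 2 * math.log(p) + (1 - 1/p) * sum(
    (-1)**l / l * (inv(1 - 2*l) - inv(1 + 2*l)) for l in range(1, 200_000))
closed = 0.5 * math.sqrt(alpha) * N

assert abs(direct - closed) < 1e-3
```

The alternating series is only conditionally convergent, so a large cutoff is kept; the two evaluations agree to well within the stated tolerance.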
If $\lambda<0$, we add a counterterm $A|\phi_b|$ to \eqref{eq:dGamma_no_log} and impose conditions\footnote{Although two renormalization conditions for one counterterm seem like an overdetermined problem, they can be consistently resolved.} \eqref{eq:renormalization_conditions}. Then the counterterm exactly cancels the bare terms, and $\Delta V$ becomes trivial. This makes the cases of $\lambda>0$ and $\lambda<0$ qualitatively different. If $\lambda>0$, a term $\sim |\phi_b|$ adds to the effective potential, while for $\lambda<0$ the effective potential does not receive any one-loop corrections.
\subsection{Case $n=2$, $s=2$}\label{sec:n2s2}
To perform renormalization, we need to modify conditions \eqref{eq:renormalization_conditions} by shifting the mass renormalization condition to the scale $\phi_0$ as well:
\be \label{n2s2}
V''_{\phi_b}(\phi_0)=0.
\ee
Solving the resulting equations for the $A\phi_b^2$ and $B\phi_b^4$ counterterms, we arrive at:
\begin{equation} \label{eq:case_n2_s2}
\Delta V = -\frac{\lambda \phi_b^4}{4t^2}+\frac{(1-p^{-2})\lambda\phi_b^2}{2\log p}\left(-1 +\frac{t^2}{24} +\frac{\log t^2 }{4}\right),\,\,t=\phi_b/\phi_0.
\end{equation}
This flow is the most non-trivial one among the considered cases. Depending on the dimensionless parameter $t$, the effective potential can take different forms, with the renormalized coupling constant $\lambda_R = \lambda \left(1-6/t^2\right)$ and a mass term acquiring both positive and negative values.
\subsection{Case $n=4$, $s=2$}
Adding counterterms $A\phi_b^2$ and $B\phi_b^4$ and solving \eqref{eq:renormalization_conditions}, we obtain renormalized one-loop correction to the effective potential of the following form:
\begin{equation}\label{eq:case_n4_s2}
\Delta V= \frac{\lambda^2\phi_b^4}{32\log p} \left(1-\frac{1}{p^4}\right)\left(\log \frac{\phi_b^2}{\phi_0^2}-\frac{25}{6}\right).\end{equation}
\vskip20pt
\noindent
It is interesting to note that the $p$-adic one-loop corrections to the effective potential have a structure very similar to that of their ${\mathbb R}^n$ cousins. Moreover, as we will show in the next section, the Archimedean case can be reproduced from the non-Archimedean one in the formal limit $p\rightarrow 1$.
\section{$p\to1$ limit}\label{sec:pto1}
One of the important reasons why physical theories defined over $p$-adic number fields attract attention is their possible connection to real-domain theories. There are different ways to relate Archimedean and non-Archimedean physical models. The most canonical approach is via adelic formulas, where observables in the real theory are decomposed into products over their $p$-adic analogues at all possible values of $p$ \cite{Freund:1987ck,Arefeva:1988kr}. Recently, a construction employing Berkovich spaces was suggested to relate energy spectra of $p$-adic and real quantum mechanics \cite{Huang:2020vwx}. A less widely discussed but elegant approach is based on the $p\rightarrow 1$ limit \cite{Spokoiny:1988zk,Gerasimov:2000zp,Bocardo-Gaspar:2017atv}. To proceed along this line, one first obtains an explicit $p$-dependent expression (e.g., some observable) in the non-Archimedean theory and then takes the formal limit $p\to 1$, treating $p$ as a real number.
The approach was taken in \cite{Gerasimov:2000zp} to relate $p$-adic string theory to conventional string field theory.
To rigorously justify this limit, or to even explain why it provides a connection to real space theories, might require quite some effort \cite{Bocardo-Gaspar:2017atv}. However, in our particular case the reason why this limit could lead to meaningful results is rather transparent. Quantum corrections to the effective potential in $p$-adic field theory are given by integrals of the form \eqref{eq:Qp_integration}. For that kind of expression, the following identity can be proven\footnote{To the best of our knowledge, for $n=1$ it was first derived in \cite{Spokoiny:1988zk}.} (see App. \ref{sec:Apppto1}):
\begin{gather}\label{eq:lim_pto1}
\lim_{p\to1}\int_{\mathbb{Q}_{p^n}}f(|x|)dx=\frac{n\Gamma(n/2)}{2\pi^{n/2}}\cdot\int_{\mathbb{R}^n}f(|x|)dx,\,\,\,n>1, \\
\lim_{p\to1}\int_{\mathbb{Q}_{p}}f(|x|)dx=\int_{\mathbb{R}}f(|x|)dx,\,\,\,n=1,
\end{gather}
where the r.h.s. integral is exactly what defines corrections to the effective potential in real space field theory, modulo the overall volume factor, see e.g. \eqref{eq:real_correction_integral}. This gives us an exact relation between the effective potentials of $p$-adic and real field theories for arbitrary $s$ and $n$.
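The identity \eqref{eq:lim_pto1} is easy to probe numerically (our illustration, with a Gaussian test function): over $\mathbb{Q}_{p^n}$ the sphere $|x|=p^i$ carries Haar measure $p^{ni}(1-p^{-n})$, so the l.h.s. reduces to a one-dimensional sum, which for $p$ close to $1$ should approach the r.h.s. For $n=2$ and $f(t)=e^{-t^2}$ the r.h.s. equals $1$:

```python
import math

def Qpn_integral(f, p, n, i_min=-40_000, i_max=4_000):
    # integral of f(|x|) over Q_{p^n}: the sphere |x| = p^i has measure p^{ni}(1 - p^{-n})
    return (1 - p**(-n)) * sum(p**(n * i) * f(p**i) for i in range(i_min, i_max))

p, n = 1.001, 2
gauss = lambda t: math.exp(-t * t)
lhs = Qpn_integral(gauss, p, n)
# r.h.s.: (n Gamma(n/2) / (2 pi^{n/2})) * int_{R^2} e^{-|x|^2} d^2x; the Gaussian
# integral over R^2 equals pi, so the whole r.h.s. is exactly 1 for n = 2
rhs = n * math.gamma(n / 2) / (2 * math.pi ** (n / 2)) * math.pi
assert abs(lhs - rhs) < 5e-3
```

The residual deviation is of order $p-1$, consistent with the sum being a geometric-grid Riemann approximation of the radial integral.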
To illustrate this statement, we shall go through the three particular cases.
First of all, let us make a comment on the validity of Eqs. \eqref{eq:case_n1_s2}-\eqref{eq:case_n4_s2}. Those were derived from \eqref{eq:double_expansion} under the assumption that $[\ln |\alpha| / s \ln p] \simeq \ln |\alpha| / s \ln p$, which is exact at the points $|\alpha|p^{-si}=1, \,\, i \in {\mathbb Z}$. In the limit $p\rightarrow 1$, such points form a dense set, and approximation \eqref{eq:dGamma_no_log}-\eqref{eq:dGamma_log} becomes exact, which means that we can just take the $p\rightarrow 1$ limit of \eqref{eq:case_n1_s2}-\eqref{eq:case_n4_s2}.
From that we readily obtain
\begin{itemize}
\item for $n=1, s=2$:
\begin{gather}
\Delta V^{(1,2)}_{p\rightarrow 1} = \frac{1}{2}\theta(\alpha) \lim\limits_{p\rightarrow 1}\left[\frac{p^{-1}}{1-p^{-1}}|\alpha|^{\frac{1}{2}}2\ln p
+ \right. \\ \left. (1-p^{-1})|\alpha|^{\frac{1}{2}}\sum^{\infty}_{l=1}\frac{(-1)^l}{l}\left( \frac{1}{p^{1-2l}-1}-\frac{1}{p^{1+2l}-1}\right) \right] = \nonumber \\
\frac{1}{2}\left(2 |\alpha|^{\frac12}+
|\alpha|^{\frac12}\sum^{\infty}_{l=1}\frac{4(-1)^l}{1-4l^2}\right)\theta(\alpha) = \frac{1}{2}\pi |\alpha|^{\frac12}\theta(\alpha)=\frac{1}{2}\pi \sqrt{\frac{|\lambda|}{2}}|\phi_b| \theta(\lambda),\nonumber
\end{gather}
where we introduced the Heaviside $\theta$-function to highlight that the one-loop correction is trivial for $\lambda <0$.
\item For $n=2, s=2$:
\begin{equation}
\Delta V^{(2,2)}_{p\rightarrow 1} = -\frac{\lambda \phi_b^4}{4t^2}+\lambda\phi_b^2\left(-1 +\frac{t^2}{24} +\frac{\log t^2 }{4}\right),\,\,t=\phi_b/\phi_0.
\end{equation}
\item For $n=4, s=2$:
\begin{equation}
\Delta V^{(4,2)}_{p\rightarrow 1}=
\frac{\lambda^2\phi_b^4}{8}\left(\log \frac{\phi_b^2}{\phi_0^2}-\frac{25}{6}\right).
\end{equation}
\end{itemize}
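The value of the alternating series appearing in the $n=1$ case above can be checked directly (our check): $\sum_{l\ge 1} 4(-1)^l/(1-4l^2)=\pi-2$, so the limit of the bracket is $2+(\pi-2)=\pi$, reproducing the quoted $\frac{\pi}{2}\sqrt{|\lambda|/2}\,|\phi_b|\,\theta(\lambda)$ once the overall $\frac12$ is included:

```python
import math

# tail terms fall off as 1/l^2, so a modest cutoff gives high accuracy
s = sum(4 * (-1)**l / (1 - 4 * l * l) for l in range(1, 100_000))
assert abs(s - (math.pi - 2)) < 1e-8
```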
Computing the corresponding one-loop corrections in real space field theory, for $n=1$ and $n=4$ we conclude:
\begin{gather}
\Delta V^{(1,2)}_{\mathbb R} =\frac{1}{4}\sqrt{\frac{|\lambda|}{2}}|\phi_b|= \frac{1}{2\pi}\Delta V^{(1,2)}_{p\rightarrow 1},\\
\Delta V^{(4,2)}_{\mathbb R} =\frac{\lambda^2\phi_b^4}{256\pi^2}\left(\log \frac{\phi_b^2}{\phi_0^2}-\frac{25}{6}\right)= \frac{1}{32\pi^2}\Delta V^{(4,2)}_{p\rightarrow 1}=\frac{2\pi^2}{(2\pi)^4 4 \Gamma(2)} \Delta V^{(4,2)}_{p\rightarrow 1}. \nonumber
\end{gather}
This looks like nice evidence supporting our claim. At the same time, the $n=2$ case is more subtle. Before renormalization, the real Coleman-Weinberg potential perfectly matches its $p$-adic cousin obtained by means of {\it the Euler-Maclaurin integral approximation} (see Sec. \ref{sec:EM_est} for details, and Eq. \eqref{eq:EM_n2s2_nonrenorm} in particular):
\begin{equation}
\Delta \widetilde{V}^{(2,2)}_{\mathbb R} = -\frac{\lambda\phi_b^2}{16\pi}\left(1+\ln(\frac{2\Lambda^2}{\lambda\phi^2_b})-\ln(-1)\right) = \frac{1}{4\pi} \Delta \widetilde{V}^{(2,2)}_{p \rightarrow 1} =\frac{2\pi}{2(2\pi)^2\Gamma(1)} \Delta \widetilde{V}^{(2,2)}_{p \rightarrow 1},
\end{equation}
where we use the tilde to stress that those are the potentials before renormalization.
After renormalization a mismatch occurs. The reason is that in two dimensions, we have to impose the mass renormalization condition at some scale $\phi_0\neq 0$. This leads to the appearance of a $\sim 1/t^2$ term in the renormalized potential, which comes with different relative coefficients in $p$-adic and in real field theories:
\begin{gather}
\Delta V^{(2,2)}_{p\rightarrow 1} = -\frac{\lambda \phi_b^4}{4t^2}+\lambda\phi_b^2\left(-1 +\frac{t^2}{24} +\frac{\log t^2 }{4}\right),\\
\Delta V^{(2,2)}_{\mathbb R} = -\frac{\lambda \phi_b^4}{4t^2}+\frac{\lambda\phi_b^2}{4\pi}\left(-1 +\frac{t^2}{24} +\frac{\log t^2 }{4}\right). \nonumber
\end{gather}
If we assume $\phi_b\gg\phi_0$, this term becomes negligible, and the matching is restored.
\section{$p\rightarrow\infty$ limit}
\label{sec:ptoinfty}
Another limit which is instructive to consider is $p\rightarrow\infty$. It seems to exhibit very different behavior from what one can see for any fixed finite $p$. In this case, $I_{\alpha}=[\ln |\alpha|/s\ln p]=0$, and only the first sum in \eqref{eq:double_expansion} survives:
\begin{gather}
2\Delta V(\alpha)=-(1-p^{-n})\sum^{\infty}_{l=1}\frac{(-\alpha)^l (p^{(M+1)(n-sl)}-1)}{l(p^{n-sl}-1)}
\end{gather}
As before, introducing $l_0=[n/s]$ to separate UV-divergent terms from the rest, we can write
\begin{equation}
\sum^{\infty}_{l=1}\frac{(-\alpha)^l}{l}\frac{p^{(M+1)(n-sl)}-1}{p^{n-sl}-1}\simeq \sum^{l_0}_{l=1}\frac{(-\alpha)^l}{l}[p^{M(n-sl)}-1]-\ln(1+\alpha).
\end{equation}
Note that in contrast with Sec. \ref{sec:main_section}, here we restrict our considerations to $|\alpha|=|\lambda|\phi_b^2/2 <1$. Restoring $\Lambda=p^M$ notation, for the one-loop correction we obtain:
\begin{equation}\label{eq:DeltaGamma_infp_nonlog}
2\Delta V(\phi_b)=-\sum_{l=1}^{l_0}\frac{(-\lambda \phi^2_b)^l}{l2^l}[\Lambda^{n-sl}-1]
+\ln(1+\frac{\lambda\phi^2_b}{2}).
\end{equation}
If $n=s l_0$, it instead acquires the form:
\begin{equation}\label{eq:DeltaGamma_infp_log}
2\Delta V(\phi_b)=-\sum^{l_0-1}_{l=1}\frac{(-\lambda \phi^2_b)^l}{l2^l}[\Lambda^{n-sl}-1]-\frac{(-\lambda \phi^2_b)^{l_0}}{ l_0 2^{l_0}}\left[\frac{\ln \Lambda}{\ln p}-1\right]
+\ln(1+\frac{\lambda\phi^2_b}{2}).
\end{equation}
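For the simplest case $n=1$, $s=2$, where $l_0=0$ and \eqref{eq:DeltaGamma_infp_nonlog} collapses to $2\Delta V=\ln(1+\lambda\phi_b^2/2)$, this can be verified directly against the regularized sum at large $p$ (our numerical sketch, respecting the restriction $|\alpha|<1$):

```python
import math

p, M = 1e6, 3
alpha = 0.4        # |alpha| = |lambda| phi_b^2 / 2 < 1, as assumed in this section
two_dV = (1 - 1/p) * sum(
    p**i * math.log(1 + alpha * p**(-2 * i)) for i in range(-25, M + 1))
# for n = 1, s = 2 we have l_0 = 0, so the whole correction collapses to the log term
assert abs(two_dV - math.log(1 + alpha)) < 1e-3
```

The leading deviation comes from the $i=\pm 1$ spheres and scales as $\ln p/p$, so it vanishes as $p\rightarrow\infty$.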
Now we shall consider the three cases of interest discussed before.
If $n=1, s=2$, \eqref{eq:DeltaGamma_infp_nonlog} contains no divergent terms, and in the $p\rightarrow \infty$ limit the effective potential reduces to
\begin{equation}
V^{(1,2)}_{\mbox{eff}} = \frac{\lambda}{4!} \phi_b^4 + \frac{1}{2}\ln \left(1+\frac{\lambda\phi^2_b}{2} \right).
\end{equation}
If $n=2, s=2$, there is a logarithmic term we need to renormalize. In that case, we do not need to make a shift to some scale $\phi_0$, and the renormalization conditions \eqref{eq:renormalization_conditions} can be imposed at $\phi_b=0$. Adding $A\phi_b^2$ and $B\phi_b^4$ counterterms, we obtain
\begin{equation}
V^{(2,2)}_{\mbox{eff}} = \frac{\lambda\phi_b^4}{4!}(1+\frac{3}{2}\lambda)-\frac{\lambda \phi_b^2}{4}+\frac{1}{2}\ln \left(1+\frac{\lambda\phi^2_b}{2} \right).
\end{equation}
If $n=4, s=2$, there are both logarithmic and $\Lambda^2$ terms. However, after renormalization we obtain exactly the same result:
\begin{equation}
V^{(4,2)}_{\mbox{eff}} = \frac{\lambda\phi_b^4}{4!}(1+\frac{3}{2}\lambda)-\frac{\lambda \phi_b^2}{4}+\frac{1}{2}\ln \left(1+\frac{\lambda\phi^2_b}{2} \right).
\end{equation}
A peculiar feature of the $p\rightarrow\infty$ limit is the logarithmic term in the effective potential for all ``space-time'' dimensions.
It does not have an analogue in conventional real field theory, but it does not lead to any pathological behavior, causing neither symmetry breaking nor singularities in the potential if $\lambda >0$.
\section{Euler-Maclaurin estimate of the effective potential}\label{sec:EM_est}
While we managed to compute the effective potential by evaluating series \eqref{eq:dGamma_no_log}-\eqref{eq:dGamma_log}, relying on the assumption that $[\ln |\alpha| / s\ln p]\simeq \ln |\alpha| / s\ln p$, it is instructive to discuss another possible approach. Naively, a sum of that kind can be approximated by a continuous integral:
\begin{equation}
\sum _{j=-\infty}^{M}f(j)\simeq \int _{-\infty}^{M}f(x)\,dx.
\end{equation}
That would be possible if the Euler--Maclaurin formula for infinitely differentiable functions were valid:
\begin{equation}
\label{eq:EMsumformula_inf}
\sum _{i=m}^{M}f(i)=\int _{m}^{M}f(x)\,dx+{\frac {f(M)+f(m)}{2}}+\sum _{k=1}^{\infty}{\frac {B_{2k}}{(2k)!}}(f^{(2k-1)}(M)-f^{(2k-1)}(m)),
\end{equation}
and the residual term were small enough.
To put it mildly, the applicability of this formula in our case is questionable. However, we can plainly compute the integral estimate and attempt to relate the outcome to the previously obtained results.
If $n=1, s=2$, the integral converges as $M\rightarrow \infty$ for $\lambda >0$ (the other case becomes trivial after renormalization), and we get:
\be
\Delta V=\frac12 (1-p^{-1})\int^{+\infty}_{-\infty} p^x \ln(1+\frac{\lambda\phi^2_b}{2p^{2x}})dx=(1-p^{-1})|\phi_b|\frac{\sqrt{|\lambda|}\pi}{2\sqrt{2}\ln p}
\ee
versus the result of series summation \eqref{eq:case_n1_s2}:
\begin{gather}
\Delta V=\frac{\sqrt{|\lambda|}}{2\sqrt{2}}|\phi_b| N(p), \\
N(p)=\frac{p^{-1}}{1-p^{-1}}2\ln p+ (1-p^{-1})\sum^{\infty}_{l=1}\frac{(-1)^l}{l}\left( \frac{1}{p^{1-2l}-1}-\frac{1}{p^{1+2l}-1}\right). \nonumber
\end{gather}
There is a clear discrepancy between these two expressions for large values of $p$ since:
\begin{gather}
\lim\limits_{p\rightarrow \infty} N(p) = \ln 2, \nonumber \\
\lim\limits_{p\rightarrow \infty} \frac{\pi(1-p^{-1})}{\ln p} = 0.\nonumber
\end{gather}
On the other hand, for small $p$ the Euler-Maclaurin estimate has surprisingly good accuracy. For example, for $p=7$:
\begin{gather}
N(7) \simeq 1.387, \nonumber \\ \frac{\pi(1-7^{-1})}{\ln 7} \simeq 1.384. \nonumber
\end{gather}
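Both the Euler-Maclaurin integral and these numbers are easy to reproduce numerically (our check; the integral is evaluated by a simple trapezoidal rule):

```python
import math

p = 7.0

# trapezoidal evaluation of the Euler-Maclaurin integral
# int p^x ln(1 + a p^{-2x}) dx = pi sqrt(a) / ln p,   a = |lambda| phi_b^2 / 2
a, h, lo, hi = 2.5, 1e-3, -40.0, 40.0
g = [p**(lo + k*h) * math.log(1 + a * p**(-2 * (lo + k*h)))
     for k in range(int((hi - lo) / h) + 1)]
integral = h * (sum(g) - 0.5 * (g[0] + g[-1]))
assert abs(integral - math.pi * math.sqrt(a) / math.log(p)) < 1e-4

def inv(e):
    # 1/(p^e - 1); returns 0 when p^e would overflow a double
    return 0.0 if e * math.log(p) > 700 else 1.0 / (p**e - 1.0)

# N(7) from the series summation vs the Euler-Maclaurin factor pi(1 - 1/p)/ln p
N = (1/p) / (1 - 1/p) * 2 * math.log(p) + (1 - 1/p) * sum(
    (-1)**l / l * (inv(1 - 2*l) - inv(1 + 2*l)) for l in range(1, 100_000))
assert abs(N - 1.387) < 2e-3
assert abs(math.pi * (1 - 1/p) / math.log(p) - 1.384) < 2e-3
```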
If $n=2, s=2$, the integral approximation gives:
\begin{equation}
\Delta V=-\frac{(1-p^{-2})\lambda\phi_b^2}{8\ln p}\left(1+\ln(\frac{2\Lambda^2}{\lambda\phi^2_b})\right), \label{eq:EM_n2s2_nonrenorm}
\end{equation}
which after renormalization with conditions $V''(\phi_0)=0$, $V^{(4)}(\phi_0)=\lambda$, becomes
\begin{equation} \label{eq:case_n2_s2_EM}
\Delta V = -\frac{\lambda \phi_b^4}{4t^2}+\frac{(1-p^{-2})\lambda\phi_b^2}{2\log p}\left(-1 +\frac{t^2}{24} +\frac{\log t^2 }{4}\right),\,\,t=\phi_b/\phi_0.
\end{equation}
Finally, for $n=4, s=2$:
\begin{gather}
\Delta V=\frac12 \left( 1-p^{-4}\right) \int^{M}_{-\infty} p^{4x}\ln\left(1+\frac{\lambda \phi^2_b}{2p^{2x}}\right)\,dx =\label{eq:Euler_n4s2}\\
\frac{1-p^{-4}}{8\ln p}\left(\Lambda^4\ln\left(1+\frac{\lambda\phi^2_b}{2\Lambda^2}\right)+\frac{\lambda\phi^2_b}{2}\Lambda^{2}-\frac{\lambda^2\phi^4_b}{4}\ln\left(1+\frac{2\Lambda^2}{\lambda\phi^2_b}\right)\right) \simeq \nonumber \\
\frac{1-p^{-4}}{8\ln p}\left(\lambda \phi_b^2 \Lambda^2+\frac{\lambda \phi_b^2}{2}-\frac{\lambda^2\phi^4_b}{4}\left(\ln\frac{2\Lambda^2}{-\lambda\phi^2_b}\right)\right),\,\,\,\Lambda \rightarrow \infty \nonumber
\end{gather}
where we restored $\Lambda=p^{M}$ notation. Renormalization of \eqref{eq:Euler_n4s2} with conditions \eqref{eq:renormalization_conditions} leads to
\begin{equation}
\Delta V=\frac{\lambda^2\phi_b^4}{32\log p} \left(1-\frac{1}{p^4}\right)\left(\log \frac{\phi_b^2}{\phi_0^2}-\frac{25}{6}\right).
\end{equation}
In contrast to the $n=1$ case, for $n=2$ and $n=4$ there is no difference between the Euler-Maclaurin estimate and the discrete sum.
Technically this happens because the coefficients in front of logarithmically divergent terms $\ln (2\Lambda^2/\lambda \phi_b^2)$ are the same in sum \eqref{eq:dGamma_log} and in the Euler-Maclaurin estimate \eqref{eq:Euler_n4s2}. These terms define the form of renormalized potential, while the terms that do not depend on $\Lambda$ and the divergences of higher order are eliminated by renormalization completely. It can be shown that this kind of perfect matching between the Euler-Maclaurin and series expressions for the effective potential always takes place if $n/s\in {\mathbb N}$.
\section{Summary and discussion}
\label{sec:summary}
We have studied the one-loop effective potential in real-valued scalar field theory over the unramified extension $\mathbb Q_{p^n}$ of the $p$-adic numbers. Typically, by computing the effective potential one can easily gain information on the quantum behavior of a field theory, since it provides a transparent representation of such concepts as symmetry breaking and renormalization group flow.
In the conventional textbook case, the Feynman diagrams contributing to the effective potential are usually computed in Euclidean signature. In $p$-adic field theory, the Wick rotation is not well defined, hence we have to change the measure of the path integral, simulating Euclidean behavior of the partition function.
For arbitrary fixed $p$, the effective potential is given by a formal series that can be evaluated approximately. In all studied dimensions ($n=1, 2, 4$), the analytical structure of the potential is very similar to that in the Archimedean theory, and all the results regarding vacuum stability in conventional $\lambda\phi^4$ theory hold true in the non-Archimedean case. Moreover, in the $p \rightarrow 1$ limit, the effective potential of real field theory can be exactly reproduced from the $p$-adic one. At first glance, this correspondence seems surprising, given the huge difference between real and $p$-adic geometries, and deserves a more detailed discussion. First, an interesting analogy can be drawn with deformations of quantum mechanics. As shown in \cite{Arefeva1991}, in some cases $q$-deformation of quantum mechanics is related to $p$-adic quantum mechanics with $p=q^{-1}$. It is possible that the observed similarity between real and $p$-adic effective potentials indicates that quantum field theory over a $p$-adic field can be interpreted as a deformation of the Archimedean one. At the same time, it might well be that the similarity fades away once higher-loop corrections are taken into account. At the one-loop level, the Coleman-Weinberg potential is given by an effectively one-dimensional integral \eqref{eq:Qp_integration_app} that can be matched with its real analogue. If there is more than one momentum running in the loops, the real/$p$-adic correspondence can be destroyed, and the $p$-adic theory starts qualitatively deviating from its ${\mathbb R}^n$ cousin. We find this aspect interesting and important to investigate.
Another limit we have considered is $p\to\infty$. In contrast with the finite-$p$ and $p\rightarrow 1$ cases, it leads to a totally different vacuum structure than in real field theory. An unusual logarithmic term emerges that survives the renormalization procedure. If $\lambda<0$, the potential has a singular minimum at $\phi_b = \sqrt{2/|\lambda|}$. However, there is a high chance that it is an artifact of either the one-loop approximation or the fact that this limit is singular (i.e. cannot be smoothly derived from the finite-$p$ effective potential expression).
Our study leaves a number of open questions. Hopefully, some of them can be answered by computing the next-order quantum corrections, which could shed light on what the main difference between $p$-adic and real field theories is. Apart from that, it seems essential to proceed further along the line of studying objects that can potentially be used to clarify the structure of RG flows in $p$-adic theories, where one has to deal with scaling transformations in two different number fields. We hope to address these issues in the future.
\section*{Acknowledgements}
We would like to thank Irina Aref'eva and Igor Volovich for useful discussions.
This work was performed at the Steklov International Mathematical Center and supported by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2019-1614) and Dutch Science Foundation NWO/FOM under
Grant No. 16PR1024.
\section{Introduction}
LoRa/LoRaWAN is considered one of the Low Power Wide Area Networks \cite{raza2017low} that promise to connect massive numbers of low-cost wireless devices/nodes\footnote{We use device and node interchangeably.}, thousands per cell, in a simple star topology. Nodes operate with low energy consumption and data can be transmitted over long distances, e.g. many kilometers.
The wide coverage area of LoRaWAN\footnote{From here on we use LoRaWAN to refer to the whole stack and network architecture that uses LoRa modulation as defined by the LoRa Alliance.} is due to its unique modulation, Long Range (\textit{LoRa}) modulation (subsection~\ref{lora}), which has a large link budget.
LoRa provides multiple transmission parameters: Spreading Factor $SF$, Bandwidth $BW$, Coding Rate $CR$ and Transmission Power $TP$ that can be tuned to trade data rate for range, power consumption, or sensitivity.
Spreading codes associated with $SFs$ in LoRa are pseudo-orthogonal, thus LoRa can support simultaneous transmissions using different $SFs$ as long as none is received with significantly higher power than the others \cite{mikhaylov2017scalability} \cite{goursaud2015dedicated}, as otherwise the strongest signal suppresses weaker signals. Also, when multiple simultaneously transmitted signals have the same $SF$, the strongest signal will suppress the weaker signals if the power difference is sufficiently high \cite{bor2016lora}. This is known as the capture effect.
In our earlier work in \cite{abdelfadeel2018fadr} we showed that the capture effect and especially the imperfect-orthogonality of $SFs$ can make LoRaWAN an unfair system because of the near-far problem. Transmissions from nodes that are far from the gateway are not received when colliding with transmissions from nodes closer to the gateway that have significantly higher received power. This effect is magnified by LoRaWAN's large link budget leading to large power difference between transmissions from far and near nodes. Therefore, controlling the received signal power of all nodes is important to achieve fairness.
Another source of unfairness is the data rate assigned to a node. Each data rate, defined through the combination of $SF$, $BW$ and $CR$, experiences different airtime, thus different collision probability. The collision probability is higher when using slow data rate combinations and low when using fast combinations. Following these considerations, we propose a data rate allocation and $TP$ control algorithm, called FADR, to achieve a fair data rate for all nodes within a LoRaWAN cell while at the same time being energy efficient.
The contributions of this paper are as follows: we first formulate the general fairest data rate distribution to achieve a fair collision probability among all deployed data rates in a LoRaWAN cell. Based on this distribution, we then propose FADR, a data rate allocation and $TP$ control algorithm, to achieve a fair data rate independent of distance from the gateway while avoiding excessively high $TPs$ in order to reduce energy consumption. We provide detailed results, comparisons and discussions to show and explain how FADR performs under various network configurations. Overall, simulations show that FADR outperforms the state of the art in almost all network configurations.
The remainder of this paper is organized as follows: Section~\ref{literaturereview} provides an overview of LoRa/LoRaWAN and highlights related work. Section~\ref{proposedapproach} describes FADR in detail. We present a detailed evaluation and discussion of FADR and comparison to the state-of-the-art in Section~\ref{evaluationanddiscussion}. Finally, Section~\ref{conclusion} presents the conclusions.
\begin{figure*}
\vspace{-2em}
\centering
\subfloat[Airtime]{
\includegraphics[width=0.6\columnwidth]{figures/airtime}\label{fig:airtime_sf}
}
\subfloat[Energy]{
\includegraphics[width=0.6\columnwidth]{figures/energy}\label{fig:energy_sf}
}
\vspace{-0.5em}
\caption{Effect of Spreading Factor on Airtime and Energy}
\label{airtimeandenergy}
\vspace{-0.5em}
\end{figure*}
\section{Literature Review} \label{literaturereview}
Here we provide an overview of the LoRaWAN protocol stack and highlight related work in the LoRaWAN domain.
\subsection{Long Range (LoRa)} \label{lora}
LoRa is a proprietary low-cost implementation of Chirp Spread Spectrum (CSS) modulation by Semtech that provides long range wireless communication with low power characteristics \cite{semtech2015lora} and represents the physical layer of the LoRaWAN stack. CSS uses wideband linear frequency modulated pulses, called \textit{chirps} to encode symbols. A LoRa symbol covers the entire bandwidth, making the modulation robust to channel noise and insensitive to frequency shifts.
LoRa modulation is defined by two main parameters: Spreading Factor $sf \in SFs (7,...,12)$, which affects the number of bits encoded per symbol, and Bandwidth $bw \in BWs (125,250,500) KHz$, which is the spectrum occupied by a symbol. A LoRa symbol consists of $2^{sf}$ chirps, in which the chirp rate equals the bandwidth.
LoRa supports forward error correction code rates $cr$ equal to $4/(4+n)$ where $n$ ranges from 1 to 4 to increase resilience. The theoretical bit rate $R_{b}$ of LoRa is shown in Eq.~\ref{eq1} \cite{semtech2015lora}.
\begin{equation}\label{eq1}
R_{b} = sf*\dfrac{bw}{2^{sf}}*cr \hspace{1cm}bits/s
\end{equation}
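Eq.~\ref{eq1} can be checked against the indicative rates of Table~\ref{tab1} (our sketch; note that the specification's values are rounded, and the slowest rates are quoted lower than the raw formula, presumably due to low-data-rate optimization):

```python
def lora_bitrate(sf, bw_hz, cr):
    """Theoretical LoRa bit rate R_b = SF * (BW / 2^SF) * CR, Eq. (1)."""
    return sf * bw_hz / 2**sf * cr

# SF8 at 125 kHz with CR = 4/5 gives 3125 bit/s, exactly DR4 of Table 1
assert abs(lora_bitrate(8, 125_000, 4/5) - 3125) < 1e-6
# SF7 at 125 kHz is within rounding of the indicative 5470 bit/s of DR5
assert abs(lora_bitrate(7, 125_000, 4/5) - 5470) < 5
# each SF step costs roughly half the bit rate
assert 1.5 < lora_bitrate(7, 125_000, 4/5) / lora_bitrate(8, 125_000, 4/5) < 2.0
```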
Moreover, a LoRa transceiver allows adjusting the Transmission Power $TP$. Due to hardware limitations, the adjustment range is limited from $2dBm$ to $14dBm$ in $1dB$ steps.
A LoRa packet can be transmitted using a constant combination of $SF$, $BW$, $CR$ and $TP$, resulting in 936 possible combinations. Tuning these parameters has a direct effect on the bit rate and hence the airtime, affecting reliability and energy consumption. Each increase in $SF$ nearly halves the bit rate and doubles the airtime and energy consumption but enhances the link reliability as it slows the transmission. Whereas each increase in the $BW$ doubles the bit rate and halves the airtime and energy consumption but reduces the link reliability as it adds more noise.
The airtime of a LoRa packet can be precisely calculated by the LoRa airtime calculator \cite{lora2013calculator}. Fig.~\ref{fig:airtime_sf} shows the effect of $SFs$ and $BWs$ at code rate $CR=4/5$ on the airtime needed to transmit an 80-byte packet. As shown, the fastest combination uses the lowest $SF$ with the highest $BW$, whereas the highest $SF$ with the lowest $BW$ yields the slowest combination. Fig.~\ref{fig:energy_sf} shows the energy consumption for combinations of $SFs$ and $TPs$ at $CR=4/5$ and $BW=500KHz$ to transmit an 80-byte packet. As shown, the $SF$ has a much higher impact than the $TP$ on the energy consumption, e.g. increasing the $SF$ consumes more energy than increasing the $TP$, especially for large $SFs$.
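The airtime trends of Fig.~\ref{fig:airtime_sf} can be reproduced with the standard packet-airtime formula from Semtech's LoRa design guide (AN1200.13); the following is our sketch, assuming the usual defaults of an 8-symbol preamble, explicit header, CRC on, and no low-data-rate optimization:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz, cr_denom=5, n_preamble=8,
                    explicit_header=True, crc=True, low_dr_opt=False):
    """Packet airtime following Semtech's design-guide formula (AN1200.13);
    cr_denom is the denominator of the 4/x coding rate (5 for CR = 4/5)."""
    t_sym = 2**sf / bw_hz * 1000.0               # symbol duration in ms
    de = 1 if low_dr_opt else 0
    ih = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + (16 if crc else 0) - 20 * ih)
                  / (4 * (sf - 2 * de))) * cr_denom, 0)
    return (n_preamble + 4.25 + n_payload) * t_sym

t7 = lora_airtime_ms(80, 7, 125_000)   # ~144 ms for an 80-byte packet at SF7/125 kHz
t8 = lora_airtime_ms(80, 8, 125_000)
# each SF increase roughly doubles the airtime, as in Fig. 1(a)
assert 1.5 < t8 / t7 < 2.2
assert abs(t7 - 143.6) < 2
```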
LoRa modulation can enable concurrent transmissions, exploiting the pseudo-orthogonality of $SFs$, as long as none of the simultaneous transmissions is received with significantly higher power than the others \cite{goursaud2015dedicated}. Otherwise, the strongest transmission suppresses weaker transmissions if the power difference is higher than the Co-channel Interference Rejection (CIR) of the weaker $SFs$. In the case of the same $SF$, all simultaneous transmissions are lost, unless one of the transmissions is received with higher power than the CIR of the $SF$. This suppression of weaker signals by the strongest signal is called the \textit{capture effect} \cite{bor2016lora}. The CIR of all $SF$ pairs has been calculated using simulations in \cite{goursaud2015dedicated} and validated by real LoRa link measurements in \cite{croce2017impact}.
\subsection{LoRaWAN} \label{lorawan}
LoRaWAN \cite{lorawan2017specs} is an open-source Medium Access Control (MAC) layer, system architecture and regional specifications using the LoRa modulation.
LoRaWAN MAC is based on simple Aloha, where a LoRa radio can transmit at any time as long as it respects the spectrum regulation. LoRaWAN operates in the Industrial Scientific and Medical (ISM) frequency band (868 MHz in Europe), which imposes a duty cycle of not more than 1\% on radios that do not adopt Listen-Before-Talk (LBT). The LoRaWAN system architecture is a simple star-of-stars topology where nodes communicate directly to one or more gateways which connect to a common network server.
A LoRaWAN gateway is usually equipped with multiple LoRa transceivers, thus is able to receive multiple transmissions on all transmission parameter combinations at the same time. Therefore, a LoRa device can transmit data to a network server with any transmission parameter combination without any prior configuration.
\begin{table}
\centering
\caption{LoRaWAN Data Rates in Europe \cite{lorawan2017regionalparameters}} \label{tab1}
\begin{tabular}{ccc}
Data Rates & Parameter Combination & Indicative physical bit rate [bit/s]\\
\hline
0 & SF12 / 125 kHz & 250\\
1 & SF11 / 125 kHz & 440\\
2 & SF10 / 125 kHz & 980\\
3 & SF9 / 125 kHz & 1760\\
4 & SF8 / 125 kHz & 3125\\
5 & SF7 / 125 kHz & 5470\\
6 & SF7 / 250 kHz & 11000\\
\hline
\end{tabular}
\end{table}
LoRaWAN defines an Adaptive Data Rate (ADR) scheme to control the uplink transmission parameters of LoRa devices. A LoRa device expresses an interest in using the ADR scheme by setting the \textit{ADR} flag in any uplink MAC header. When the ADR scheme is enabled, the network server can control the transmission parameters of a LoRa device using \textit{LinkADRReq} MAC commands. Typically, the network server collects the 20 most recent transmissions from a node, including the Signal-to-Noise Ratio (SNR) and the number of gateways that received each transmission. Based on this history, the network server assigns transmission parameters that are more airtime and energy efficient. To keep the \textit{LinkADRReq} command short, not all transmission parameters are available, but only a subset of 7 ($SF$, $BW$) combinations, as shown in table~\ref{tab1}, and 5 $TPs$ (2, 5, 8, 11, or 14 dBm) \cite{lorawan2017specs}.
\subsection{Related Work}
Recent research on LoRa/LoRaWAN has mainly focused on LoRa performance evaluation in terms of coverage, capacity, scalability and lifetime. The studies have been carried out using real deployments in \cite{oliveira2017longrange} and \cite{juha2017performance}, mathematical models in \cite{bankov2017mathematical} and \cite{georgiou2017lpwan}, or computer simulations in \cite{bor2016lora} and \cite{margin2017performance}. Almost all these works have assumed perfectly orthogonal $SFs$ although it has been shown in \cite{mikhaylov2017scalability} and \cite{croce2017impact} that this is not a valid assumption.
Furthermore, recent work has proposed transmission parameter allocation approaches for LoRaWAN with different objectives. For example, authors in \cite{bor2017transmissionparameter} proposed a transmission parameter selection approach for LoRa to achieve low energy consumption at a specific link reliability. Here a LoRa node probes a link using a transmission parameter combination to determine the link reliability. It then chooses the next probe combination based on whether the new combination achieves lower energy consumption while maintaining at least the same link reliability. Finally, the approach terminates when reaching the optimal combination from an energy consumption perspective.
Authors in \cite{cuomo2017explora} proposed two $SF$ allocation approaches, namely EXP-SF and EXP-AT, to help LoRaWAN achieve a high overall data rate. EXP-SF equally allocates $SFs$ to $N$ nodes based on the Received Signal Strength Indicator (RSSI), where the first $N/6$ nodes with the highest RSSI get $SF7$ assigned and then the next $N/6$ nodes $SF8$ and so on. EXP-AT is more dynamic than EXP-SF, where the $SF$ allocation theoretically equalizes the airtime of nodes. The two aforementioned works \cite{bor2017transmissionparameter} and \cite{cuomo2017explora} assumed perfectly orthogonal $SFs$, which leads to a higher overall data rate than in reality.
In the context of our work presented here, allocating data rates and $TPs$ to achieve data rate fairness in LoRaWAN is not well investigated, with the exception of \cite{reynders2017power}, where authors proposed a power and spreading factor control approach to achieve fairness within a LoRaWAN cell. We provide an overview of \cite{reynders2017power} and a detailed comparison with our proposal in Section~\ref{evaluationanddiscussion}. While in general data rate and power control approaches have been well studied for cellular systems and WiFi \cite{yates1995uplinkpower} \cite{subramanian2005joint}, we argue that these solutions are not suitable for constrained systems like LoRaWAN. The reason is that cellular based approaches require fast feedback and high data rates to work, which are not available in LoRaWAN.
Finally, interesting work has been done to ensure interoperability between LoRaWAN and the native IoT stack, i.e. IPv6/UDP/CoAP, at the device level. Interoperability was achieved by adopting legacy solutions such as 6LoWPAN over LoRaWAN \cite{weber2017ipv6} or by developing a new header compression technique better suited to the constraints of LoRaWAN \cite{abdelfadeel2017lschc}.
\begin{figure*}
\vspace{-2em}
\centering
\subfloat[Fairness Index]{
\includegraphics[width=0.6\columnwidth]{figures/SFA_fairness}\label{fig:sfa_fairness}
}
\subfloat[DER]{
\includegraphics[width=0.6\columnwidth]{figures/SFA_der}\label{fig:sfa_der}
}
\subfloat[DER vs SFs (4000 nodes)]{
\includegraphics[width=0.6\columnwidth]{figures/SFA_sfder}\label{fig:sfa_sfder}
}
\vspace{-0.5em}
\caption{Different SF Allocations Study}\label{fig:sfastudy}
\vspace{-0.5em}
\end{figure*}
\section{FADR Algorithm} \label{proposedapproach}
In the following we present our fair data rate allocation and power control proposal, which we call FADR, to achieve data rate fairness among nodes in a LoRaWAN cell. Firstly, we derive the fair data rate distribution in subsection~\ref{datarateallocation}, which tries to achieve an equal collision probability for all deployed data rates, then we provide our $TP$ control algorithm proposal in subsection~\ref{transmissionpowerallocation}, aimed at mitigating the capture and $SFs$ non-orthogonality effects.
\subsection{FADR - Data Rate Allocation} \label{datarateallocation}
Each transmission parameter combination ($SF$, $BW$, and $CR$) leads to a different data rate and thus airtime, which causes different collision probabilities, resulting in unfairness among nodes within a cell. Finding the fair data rate deployment ratios within a cell is therefore crucial.
$SF$ fair distribution ratios were derived in \cite{reynders2017power} as follows:
\begin{equation} \label{eq2}
p_{sf} = \dfrac{sf/2^{sf}}{\sum_{i=7}^{12} i/2^{i}} \hspace{1cm} \forall sf \in SFs,
\end{equation}
where $p_{sf}$ indicates the fraction of nodes using a specific $SF$. Eq.~\ref{eq2} has been derived by equalizing the collision probability of each $SF$, taking into account the constraint that the sum of all probabilities must be unity, $\sum_{sf=7}^{12} p_{sf} = 1$.
However, Eq.~\ref{eq2} does not consider the impact of $BW$ and $CR$ on the collision probability. Assuming all $SFs$ will be deployed with the same $BW$ and $CR$ may not always be the case as the network operator may consider assigning different $BW$ and $CR$ to the same $SF$ in order to achieve a different data rate, reliability, or sensitivity. Therefore, we extend Eq.~\ref{eq2} into Eq.~\ref{eq3} to consider the impact of $BW$ in addition to $SF$ as follows:
\begin{equation} \label{eq3}
p_{sf,bw} = \dfrac{p_{sf}*bw}{\sum_{i \in BWs} i} \hspace{0.5cm} \forall sf \in SFs \hspace{0.1cm}\&\hspace{0.1cm} bw \in BWs,
\end{equation}
where $p_{sf,bw}$ indicates the fraction of nodes using a specific $SF$ and $BW$ combination. Eq.~\ref{eq3} is derived with respect to the constraint $\sum_{i \in BWs} p_{sf,bw} = p_{sf}$. In order to also consider the impact of $CR$, Eq.~\ref{eq3} is finally extended to Eq.~\ref{eq4} as follows:
\begin{equation} \label{eq4}
p_{sf,bw,cr} = \dfrac{p_{sf,bw}*cr}{\sum_{i \in CRs} i} \hspace{0.3cm} \forall sf \in SFs \hspace{0.1cm}\&\hspace{0.1cm} bw \in BWs \hspace{0.1cm}\&\hspace{0.1cm} cr \in CRs,
\end{equation}
where $p_{sf,bw,cr}$ indicates the fraction of nodes using a specific $SF$, $BW$ and $CR$ combination. Eq.~\ref{eq4} is also derived with respect to the constraint $\sum_{i \in CRs} p_{sf,bw,cr} = p_{sf,bw}$.
Eq.~\ref{eq4} is the generalized form of Eq.~\ref{eq2}: in case all $SFs$ are deployed with the same $BW$ and $CR$, the values expressed by Eq.~\ref{eq4} equal the values derived with Eq.~\ref{eq2}. Hence, the fair ratios for the potential LoRaWAN data rates as per table~\ref{tab1}, without considering $CR$, are: $p_{0}=0.024$, $p_{1}=0.044$, $p_{2}=0.08$, $p_{3}=0.144$, $p_{4}=0.257$, $p_{5}=0.0898$, and $p_{6}=0.3592$.
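As a sanity check, the ratios for data rates 0--4, where each $SF$ is paired with a single $BW$ so that the generalized formula reduces to Eq.~\ref{eq2}, can be recomputed directly. The sketch below is a minimal reimplementation of Eq.~\ref{eq2}, not the paper's actual code:

```python
def fair_sf_ratios():
    # Eq. (2): p_sf is proportional to sf / 2^sf, normalized over SF7..SF12.
    weights = {sf: sf / 2.0 ** sf for sf in range(7, 13)}
    total = sum(weights.values())
    return {sf: w / total for sf, w in weights.items()}

p = fair_sf_ratios()
# DR0..DR4 correspond to SF12..SF8, each deployed with a single bandwidth,
# so their fair shares equal p_sf directly (matching the quoted values).
for sf, quoted in [(12, 0.024), (11, 0.044), (10, 0.08), (9, 0.144), (8, 0.257)]:
    assert abs(p[sf] - quoted) < 1e-3
```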
Allocating data rates within a LoRaWAN cell according to Eq.~\ref{eq4} ensures that each node has the same probability of collision. However, this leaves the question of how data rates should be allocated over RSSI within a cell. $BWs$ and $CRs$ are perfectly orthogonal, while the same is not true for $SFs$, whose orthogonality depends on the received power. We propose the region concept as a way of allocating $SFs$ within a LoRaWAN cell. A cell is divided into regions, where each region consists of a number of nodes assigned to it based on their RSSI. Nodes per region are then allocated using the fair $SF$ ratios from Eq.~\ref{eq2}. We recommend that the smallest region should be large enough to represent the smallest fair ratio, which for $SF12$ equals 2\%, so that all ratios are represented within a region. Thus, the smallest region size should be 50 nodes, which means $SF12$ is used by exactly one node in such a region. We investigate the impact of the region size in Section~\ref{evaluationanddiscussion}.
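The region concept can be sketched as follows. The function and its rounding scheme are an illustrative assumption rather than the exact implementation: nodes, pre-sorted by RSSI, are cut into regions of 50 nodes, and within each region $SFs$ are assigned according to the cumulative Eq.~\ref{eq2} ratios.

```python
def fair_sf_ratios():
    # Eq. (2) ratios, normalized over SF7..SF12.
    weights = {sf: sf / 2.0 ** sf for sf in range(7, 13)}
    total = sum(weights.values())
    return {sf: w / total for sf, w in weights.items()}

def allocate_sfs_per_region(num_nodes, region_size=50):
    """Return one SF per node; nodes are assumed pre-sorted by RSSI
    (strongest first), so low SFs go to strong nodes inside each region."""
    ratios = fair_sf_ratios()
    bounds, cum = [], 0.0
    for sf in range(7, 13):
        cum += ratios[sf]
        bounds.append((cum, sf))
    sfs = []
    for start in range(0, num_nodes, region_size):
        n = min(region_size, num_nodes - start)
        for k in range(n):
            frac = (k + 0.5) / n  # node's relative position within its region
            sfs.append(next(s for c, s in bounds if frac <= c + 1e-12))
    return sfs

sfs = allocate_sfs_per_region(50)  # one region of the recommended size
assert sfs.count(12) == 1          # exactly one SF12 node per 50-node region
assert sfs == sorted(sfs)          # low SFs to strong nodes, high SFs to weak
```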
To verify the impact of the fair data rate distribution, we compared the fair data rates, assuming all nodes are within a single region, versus equal $SF$ allocation across nodes, which has been considered in \cite{croce2017impact} and \cite{cuomo2017explora}, versus the proposed allocation in \cite{adelantado2017understanding}, where authors showed 28\% of nodes should use $SF12$. The fairness is calculated using Jain's fairness index \cite{jain1984quantitative}:
\begin{equation} \label{eq5}
\zeta = \frac{(\sum_{i=1}^N DER_i)^2}{N\sum_{i=1}^N DER_i^2},
\end{equation}
where $DER_i$ denotes the Data Extraction Rate (DER) of a node $i$ in a cell with $N$ nodes. The DER metric was introduced in \cite{bor2016lora} as the ratio of received packets to transmitted packets over a period of time. The fairness index varies from zero to one, where a higher index indicates a higher fairness. The results are shown in Fig.~\ref{fig:sfastudy} for different numbers of nodes, assuming perfectly orthogonal $SFs$ and neglecting the capture effect. This provides an insight into the fairness within a cell regardless of the assigned $TPs$.
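Eq.~\ref{eq5} can be computed directly from the per-node DER values; a minimal sketch:

```python
def jain_fairness(ders):
    # Eq. (5): (sum x)^2 / (N * sum x^2); 1.0 means perfectly equal DERs,
    # and the worst case (one node takes everything) gives 1/N.
    n = len(ders)
    s = sum(ders)
    sq = sum(d * d for d in ders)
    return (s * s) / (n * sq) if sq > 0 else 0.0
```

For instance, four nodes with identical DERs yield an index of 1, while one node with all the throughput among four yields 0.25.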
Fig.~\ref{fig:sfa_fairness} shows the fairness index, where the fair allocation is almost one regardless of the number of nodes. However, it dramatically degrades in the other allocations with increasing number of nodes due to increasing collisions. The impact of allocation is clear in Fig.~\ref{fig:sfa_sfder}, which shows DER versus $SFs$ for a cell of 4000 nodes. The DER for nodes using a low $SF$ is higher than for those using a high $SF$ with equal $SF$ allocation and with $SF$ allocation as per \cite{adelantado2017understanding}. DER is almost zero for $SF10$, $SF11$ and $SF12$, which represent half of the nodes for the equal $SF$ allocation. This means, half of nodes cannot deliver any packets due to collisions, whereas the DER for the fair allocation is nearly equal for all $SFs$ and around the random access limit (see Fig.~\ref{fig:sfa_der}). The overall DER at the fair allocation outperforms the other two allocations up to about 3250 nodes. After that the overall number of collisions becomes higher, which means a lower DER than the other two allocations for the sake of equalizing the DER per $SF$ as shown in Fig.~\ref{fig:sfa_sfder}.
\begin{algorithm}[!htp]
\caption{FADR - $TP$ Control Algorithm}\label{alg:powercontrol}
\begin{algorithmic}[1]
\small
\STATE \textbf{Input} List of nodes \textbf{N}, corresponding \textbf{RSSI}, power levels \textbf{PowLevels}, matrix of \textbf{CIR}
\STATE \textbf{Output} $\forall n \in \textbf{N}, \textbf{P}[n] \in \textbf{PowLevels}$
\STATE Sort \textbf{N} by \textbf{RSSI}
\STATE \# Calculate MinRSSI, MaxRSSI, MinCIR
\STATE $MinRSSI = min(\textbf{RSSI})$, $MaxRSSI = max(\textbf{RSSI})$, $MinCIR=min(\textbf{CIR})$
\STATE $\textbf{PowLevels}.pop(0)$
\FORALL {$i \in \textbf{PowLevels}$}
\STATE $MaxPower = i$
\IF {$|MaxRSSI+MinPower-MinRSSI-MaxPower| <= MinCIR$}
\STATE $\textbf{PowLevels} = \textbf{PowLevels}[0:\textbf{PowLevels}.index(i)]$
\STATE $break$
\ELSIF {$i == max(\textbf{PowLevels})$}
\STATE $\textbf{PowLevels}.pop()$
\ENDIF
\ENDFOR
\STATE \# Recalculate the minimum and the maximum of \textbf{RSSI}
\STATE $TempRSSI = MinRSSI + MaxPower$
\STATE $MinRSSI = min[TempRSSI, MaxRSSI + MinPower]$, $MaxRSSI = max[TempRSSI, MaxRSSI + MinPower]$
\STATE \# Assign the minimum power and save the MinPowIndex
\FORALL {$i \in range(0, len(\textbf{N}), 1)$}
\IF {$|\textbf{RSSI}[i]+MinPower| > |MinRSSI|$}
\STATE $MinPowIndex = i-1$
\STATE $break$
\ELSE
\STATE $\textbf{P}[i] = MinPower$
\ENDIF
\ENDFOR
\STATE \# Assign the maximum power and save the MaxPowIndex
\FORALL {$i \in range(len(\textbf{N})-1, MinPowIndex, -1)$}
\IF {$|\textbf{RSSI}[i]+MaxPower-MinRSSI| > MinCIR$}
\STATE $MaxPowIndex = i-1$
\STATE $break$
\ELSE
\STATE $\textbf{P}[i] = MaxPower$
\ENDIF
\ENDFOR
\STATE \# Assign the nodes in between with the remaining power levels
\STATE $TempIndex = MinPowIndex$
\FORALL {$i \in \textbf{PowLevels}$}
\IF {$(|\textbf{RSSI}[TempIndex]+i-MinRSSI| <= MinCIR)$\hspace{0.5cm}\AND$(|\textbf{RSSI}[TempIndex]+i-\textbf{RSSI}[MaxPowIndex]-MaxPower| <= MinCIR)$}
\FORALL {$j \in range(TempIndex, MaxPowIndex, 1)$}
\IF {$|\textbf{RSSI}[j]+i-\textbf{RSSI}[MaxPowIndex]-MaxPower| > MinCIR$}
\STATE $TempIndex = j-1$
\STATE $break$
\ELSE
\STATE $\textbf{P}[j] = i$
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{FADR - Transmission Power Allocation} \label{transmissionpowerallocation}
The other aspect that creates unfairness in LoRaWAN is the near-far problem, which influences the capture effect, especially with not perfectly orthogonal $SFs$. These characteristics favor near nodes because of their higher received power than far nodes. Therefore, balancing the received powers of all nodes is required in order to achieve a fair data rate among all nodes regardless of their distance from the gateway.
Our proposed $TP$ control algorithm is shown in Algorithm~\ref{alg:powercontrol}. The algorithm requires a list of nodes (N) with a list of corresponding RSSIs, a list of available $TP$ levels (PowLevels) that can be assigned, and a matrix of CIR of all $SF$ pairs as inputs (line 1). To avoid RSSI instability, the algorithm is run after a certain number of packets have been collected by the network server in order to calculate the average RSSI. RSSI stability has been investigated in \cite{aref2014free}, which showed that the RSSI standard deviation of nodes close to the gateway is less than $3dBm$, however, the deviation increases to $20dBm$ for far nodes. Algorithm~\ref{alg:powercontrol} does not make assumptions on the initial $TP$ assignment of the collected packets, but recommends that nodes are initiated with the same $TP$ before running the algorithm to have RSSIs with a common reference.
Algorithm~\ref{alg:powercontrol} allocates a $TP$ to each node as output (line 2). The algorithm starts by sorting the nodes by their RSSI (line 3); next, it calculates the maximum (MaxRSSI) and the minimum (MinRSSI) values of the measured RSSIs in addition to the minimum value of CIR (MinCIR), which represents the safe margin of all $SFs$ (line 5). Subsequently, the algorithm finds the maximum $TP$ (MaxPower) that can reduce the difference between the RSSI extremes to below the safe margin (lines 6-13), where the minimum $TP$ (MinPower) is the minimum of PowLevels. In case MaxPower is less than the maximum of PowLevels, the higher values are removed from the list because they will not be used (line 10). This reduces the energy consumption and thus extends the nodes' lifetime. Next, the algorithm assigns MinPower to the node with MaxRSSI and MaxPower to the node with MinRSSI, then recalculates MinRSSI and MaxRSSI accordingly (lines 15-16). Subsequently, the algorithm allocates the $TPs$ in three stages. Firstly, it allocates MinPower to high RSSI nodes as long as the new RSSI is not lower than MinRSSI (lines 18-23); the index of the last node that complies with this rule is saved in MinPowIndex. Secondly, it allocates MaxPower to low RSSI nodes as long as the new RSSI is not higher than MinRSSI plus the safe margin (lines 25-30); the index of the last node that complies with this rule is saved in MaxPowIndex. Finally, the algorithm assigns the remaining $TPs$, from low to high, to the nodes between MinPowIndex and MaxPowIndex, as long as each node's new RSSI stays within the safe margin of both MinRSSI and the strongest node, i.e. the node at MaxPowIndex using MaxPower (lines 33-40).
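The core of Algorithm~\ref{alg:powercontrol} can be condensed into the following sketch. It is a simplified reimplementation, not the exact algorithm: for brevity, the third stage only checks each node against the cell minimum, omitting the additional check against the strongest MaxPower node.

```python
def fadr_tp_control(rssi, pow_levels, min_cir):
    """Simplified sketch of the FADR TP control (Algorithm 1).

    rssi: per-node received power (dBm) measured at a common reference TP,
          sorted in descending order (strongest node first).
    Returns one TP level per node.
    """
    levels = sorted(pow_levels)
    min_p = levels[0]
    max_rssi, min_rssi = rssi[0], rssi[-1]

    # Stage 0: smallest MaxPower bringing the RSSI extremes within the
    # safe margin (MinCIR); higher, unused levels are dropped.
    max_p = levels[-1]
    for p in levels[1:]:
        if abs(max_rssi + min_p - min_rssi - p) <= min_cir:
            max_p = p
            break
    mid_levels = [p for p in levels[1:] if p < max_p]

    # New received-power minimum after the boundary assignment.
    lo = min(min_rssi + max_p, max_rssi + min_p)

    tp = [None] * len(rssi)
    # Stage 1: strongest nodes get MinPower while staying above the minimum.
    i = 0
    while i < len(rssi) and rssi[i] + min_p >= lo:
        tp[i] = min_p
        i += 1
    min_pow_index = i - 1

    # Stage 2: weakest nodes get MaxPower while staying within the margin.
    j = len(rssi) - 1
    while j > min_pow_index and rssi[j] + max_p - lo <= min_cir:
        tp[j] = max_p
        j -= 1

    # Stage 3: in-between nodes get the remaining levels, low to high.
    k = min_pow_index + 1
    for p in mid_levels:
        while k <= j and abs(rssi[k] + p - lo) <= min_cir:
            tp[k] = p
            k += 1
    return tp
```

For six nodes with RSSIs spread over $10dBm$ and the LoRaWAN levels $\{2,5,8,11,14\}dBm$, the sketch assigns $2dBm$ to the three strongest and $8dBm$ to the three weakest nodes, balancing the received powers within a $6dB$ margin.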
\begin{figure}
\vspace{-2em}
\centering
\includegraphics[width=0.9\columnwidth]{figures/FADR_power}
\vspace{-0.5em}
\caption{FADR Power Allocation}\label{fig:fadr_powerallocation}
\vspace{-0.5em}
\end{figure}
Fig.~\ref{fig:fadr_powerallocation} provides a visual example of how Algorithm~\ref{alg:powercontrol} works. In Fig.~\ref{fig:fadr_powerallocation}, MinPower is $2dBm$ and MaxPower is $14dBm$ (the maximum $TP$ for LoRaWAN) since the difference between RSSIs ($\sim50dBm$ in this example) is higher than the difference between $TPs$ ($12dBm$). Algorithm~\ref{alg:powercontrol} iterates forward to allocate MinPower until MinPowIndex, then iterates backward to allocate MaxPower until MaxPowIndex, and finally iterates in between to assign the remaining $TPs$. We note that the suppression of weaker signals by stronger signals cannot be totally eliminated in all cases due to the limited, discrete $TP$ levels of LoRaWAN. However, Algorithm~\ref{alg:powercontrol} minimizes this effect as much as possible.
The run time of our algorithm is linear, $O(N)$, where $N$ is the number of nodes per cell, since the algorithm iterates over all nodes just once. This is an important property because LoRaWAN potentially supports a massive number of nodes per cell: the running time of our algorithm grows only linearly with the number of nodes.
\begin{figure*}
\vspace{-2em}
\centering
\subfloat[Fairness Index]{
\includegraphics[width=0.6\columnwidth]{figures/MC_fairness}\label{fig:mc_fairness}
}
\subfloat[Overall DER]{
\includegraphics[width=0.6\columnwidth]{figures/MC_der}\label{fig:mc_der}
}
\subfloat[Overall Energy]{
\includegraphics[width=0.6\columnwidth]{figures/MC_energy}\label{fig:mc_energy}
}
\vspace{-0.5em}
\caption{Main Comparison Results}\label{fig:maincomprsion}
\vspace{-0.5em}
\end{figure*}
\section{Evaluation and Discussion} \label{evaluationanddiscussion}
To evaluate our ideas, we implemented FADR in LoRaSim \cite{bor2016lora}. LoRaSim is an open-source LoRa simulator that takes into account the capture effect only within the same $SF$, but otherwise assumes perfectly orthogonal $SFs$. In order to model collisions more comprehensively, we extended LoRaSim to include the non-perfect orthogonality property of $SFs$ based on the work in \cite{croce2017impact}, which adds a conservative $6dB$ CIR threshold to all $SF$ pairs. We compared FADR to the state-of-the-art approaches \cite{reynders2017power} and \cite{bor2016lora} by conducting multiple experiments that examine almost all factors that affect the algorithm. All experiments were run for a simulated real time of one day and repeated 10 times with different random seeds.
\subsection{State-of-the-art}
Authors of \cite{reynders2017power} present an $SF$ and $TP$ control algorithm to optimize the packet error rate fairness of LoRaWAN. The $SFs$ are allocated by sorting nodes first by their path loss and then applying the optimal distribution ratios of Eq.~\ref{eq2}, where nodes with the lowest path loss get the lowest $SF$. The $TP$ control of \cite{reynders2017power} is based on the observation that nodes with high path loss and $SF8$ are the nodes with the highest packet error rate. Therefore, the algorithm assigns a high enough $TP$ to these nodes and allocates $SF7$ and $TP=2dBm$, i.e. short airtime and low $TP$, to all nodes that can corrupt these nodes' packets. The algorithm then iterates again over all nodes to allocate enough $TP$ to all remaining nodes. We argue that this observation depends on the node distribution around the gateway, as these nodes may have lower or higher path loss depending on their locations relative to the gateway. We show later when this assumption is valid.
Authors of \cite{bor2016lora} show in their $SN^{5}$ experiment a way of allocating data rate and $TP$ in which each node chooses its transmission parameter combination locally, minimizing the airtime first and maximizing the lifetime second. A node uses a combination that ensures its packets are received by the gateway while at the same time consuming less energy.
\subsection{Cell lay-out}
In this work, we consider a LoRaWAN cell that consists of one gateway located in the cell center and $N$ nodes placed randomly around the gateway. We investigated various cell radii $R$ and various number of nodes that are placed in the cell using different node distributions. Nodes generate data packets of length $L$ using transmission rate $\lambda$. A gateway is able to receive a configurable number of concurrent signals $MaxRecv$, based on its number of LoRa transceivers, on the same carrier frequency $CF$ as long as concurrent transmissions use different $SFs$ and are within the safe margin. For a given combination of $SF$ and $BW$, packets are only decoded by the gateway if their RSSI is higher than the corresponding sensitivity.
We used LoRaSim's propagation model, which is based on the log-distance propagation model, to calculate the RSSI of a node transmitting with a given $TP$. The same propagation model is used in \cite{croce2017impact} and \cite{margin2017performance}. Authors of \cite{croce2017impact} assume that any node, using any transmission parameter combination, is able to reach the gateway regardless of its distance from the gateway. To replicate this assumption, the minimum sensitivity of all $SF$ and $BW$ combinations in LoRaSim was lowered to $-155dBm$, so that all nodes can reach the gateway with all combinations. Simulation parameters are shown in table~\ref{tab:simulationparameters}.
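The log-distance model takes the form $RSSI = TP - \overline{PL}(d_0) - 10\gamma\log_{10}(d/d_0)$. The sketch below uses the reference values reported for LoRaSim's built-in model ($\overline{PL}(d_0)=127.41$ dB at $d_0=40$ m, $\gamma=2.08$); these parameters should be treated as assumptions here.

```python
import math

def rssi_dbm(tp_dbm, d_m, d0=40.0, pl_d0=127.41, gamma=2.08):
    """Log-distance path loss model, no shadowing variance term.

    The reference values (d0, pl_d0, gamma) are the ones reported for
    LoRaSim's built-in model and are assumptions in this sketch.
    """
    return tp_dbm - pl_d0 - 10.0 * gamma * math.log10(d_m / d0)

# At the reference distance the path loss is exactly pl_d0.
assert abs(rssi_dbm(14, 40.0) - (14 - 127.41)) < 1e-9
# RSSI decreases monotonically with distance.
assert rssi_dbm(14, 1000.0) > rssi_dbm(14, 3000.0)
```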
\begin{table}
\centering
\caption{Simulation Parameters} \label{tab:simulationparameters}
\begin{tabular}{*{15}{c}}
Parameter & Value & Unit\\
\hline
Nodes [$N$] & 100-4000 & \\
Packet Length [$L$] & 80 & byte \\
Transmission Rate [$\lambda$] & 60 & sec \\
Max Reception [$MaxRecv$] & 8 & \\
Cell Radius [$R$] & 100-3200 & m \\
Channel Number & 1 & \\
Channel Frequency [$CF$] & 868 & MHz \\
Simulation Time & 86400 & sec \\
Random Seeds & 10 & \\
\hline
\end{tabular}
\end{table}
\subsection{Evaluation Experiments}
We conducted various experiments to show the performance evaluation of FADR versus the state-of-the-art. We firstly present the main performance evaluation in Sec.~\ref{maincomparison}. Then, the results are discussed in depth in Sec.~\ref{versusdistance}. Subsequently, the impact of the cell size is shown in Sec.~\ref{cellsize} and finally the impact of the node distribution is shown in Sec.~\ref{nodedistribution}.
\begin{figure*}
\vspace{-2em}
\centering
\subfloat[DER vs Distance]{
\includegraphics[width=0.6\columnwidth]{figures/VD_der}\label{fig:vd_der}
}
\subfloat[DER vs SFs]{
\includegraphics[width=0.6\columnwidth]{figures/VD_sfder}\label{fig:vd_sfder}
}
\subfloat[Transmission Power]{
\includegraphics[width=0.6\columnwidth]{figures/VD_txs}\label{fig:vd_txs}
}
\vspace{-1em}
\\
\subfloat[Spreading Factor]{
\includegraphics[width=0.6\columnwidth]{figures/VD_sfdist}\label{fig:vd_sfdist}
}
\subfloat[RSSI vs Distance]{
\includegraphics[width=0.6\columnwidth]{figures/VD_rssi}\label{fig:vd_rssi}
}
\vspace{-0.5em}
\caption{Distance Study}\label{fig:vdstudy1}
\vspace{-0.5em}
\end{figure*}
\subsubsection{Main Comparison}\label{maincomparison}
Fig.~\ref{fig:maincomprsion} shows the overall results of this study. Fig.~\ref{fig:mc_fairness} shows the fairness index using Eq.~\ref{eq5}, Fig.~\ref{fig:mc_der} shows the overall DER, and Fig.~\ref{fig:mc_energy} shows the overall energy consumption. We evaluate FADR with two region configurations.
First, in FADR-One Region, the entire cell is considered a single region. The second approach, where nodes are sorted according to their RSSI and then divided into groups of 50 nodes, was proposed in \cite{abdelfadeel2018fadr}. The data rate \textit{per region} is allocated based on Eq.~\ref{eq4} and $TP$ allocation is based on Algorithm~\ref{alg:powercontrol}.
Overall, both FADR region size approaches surpass the other approaches in terms of fairness without sacrificing the overall DER compared to \cite{reynders2017power} and with a remarkable improvement compared to $SN^{5}$ in \cite{bor2016lora}. On the other hand, both FADR region size approaches consume overall less energy than the approach in \cite{reynders2017power} but a higher energy than $SN^{5}$ in \cite{bor2016lora}, where all nodes choose to transmit using the lowest $TP$.
The low fairness and DER performance of $SN^{5}$ in \cite{bor2016lora} is due to the fact that data rate and $TP$ allocation was not studied at the cell level. Rather, nodes choose their transmission parameters locally, which leads to all nodes choosing the same transmission combination that achieves the lowest airtime and using the lowest $TP$, which achieves the lowest energy consumption, regardless of the cell status. This degrades the cell performance by increasing the number of collisions within the cell and leads to aggressive unfairness for far nodes, especially when the number of nodes increases. In the following, we focus on the impact of the region size and analyze the performance of FADR versus \cite{reynders2017power} in the next subsection.
The region size has a notable impact on fairness and overall energy consumption, but almost no impact on the overall DER. Decreasing the region size, on the one hand, mixes all $SFs$ within a small variance of RSSI; on the other hand, $SFs$ are distributed everywhere in the cell, not just in contiguous areas as in the single-region deployment, which allocates low $SFs$ to high RSSIs and high $SFs$ to low RSSIs. Therefore, small regions serve high $SFs$ better, at the expense of lower $SFs$, especially $SF7$, by decreasing the imperfect-orthogonality effect of $SF7$ on high $SFs$. However, small-region deployment increases the impact of the capture effect, especially for $SF7$, because nodes with the same $SF$ now have a high variance in their RSSIs. As nodes with $SF7$ represent the majority of nodes in a cell, small-region deployment leads to a lower overall fairness index, as shown in Fig.~\ref{fig:mc_fairness}. However, excluding $SF7$ from the analysis and recalculating the fairness index shows that small-region deployment achieves higher fairness than single-region deployment.
In terms of energy consumption, the FADR $TP$ control algorithm assigns high $TPs$ to low RSSI and vice versa, which leads to single region deployment consuming higher energy than small region deployments. The reason for this is that the nodes with low RSSI, in single region deployment, are allocated with high $SFs$, i.e. high airtime, and transmit using high $TPs$, but in small region deployments $SFs$ are distributed over the whole cell, thus, airtimes are distributed over $TPs$ as well.
\subsubsection{Distance Study}\label{versusdistance}
Fig.~\ref{fig:vdstudy1} shows DER (Fig.~\ref{fig:vd_der}), transmission powers (Fig.~\ref{fig:vd_txs}), $SF$ distribution (Fig.~\ref{fig:vd_sfdist}), and RSSIs (Fig.~\ref{fig:vd_rssi}) versus distance, in addition to DER per $SF$ (Fig.~\ref{fig:vd_sfder}). These figures provide insights as to why FADR outperforms the approach published in \cite{reynders2017power}. The results of this study were collected from a cell with 1000 nodes, but we performed the same experiment with a larger number of nodes and observed the same behavior.
FADR's advantage over \cite{reynders2017power} is shown in Fig.~\ref{fig:vd_der} in which FADR achieves roughly the same DER for a larger proportion of the network compared to \cite{reynders2017power} making FADR fairer. Between 400-700m, Reynders' approach \cite{reynders2017power} experiences high variation in the DER, corresponding to nodes using $SF7$ and low RSSIs. These nodes suffer from an aggressive capture effect by other nodes using $SF7$ and higher RSSIs and at the same time suffer from a capture effect due to the non-orthogonality of $SFs$ from nodes using different $SFs$ and higher RSSIs as shown in Fig.~\ref{fig:vd_rssi} because they do not get enough $TP$ as shown in Fig.~\ref{fig:vd_txs}.
It seems that the $TP$ control algorithm in \cite{reynders2017power} provides a $TP$ boost to nodes with $SF8-12$ over nodes with $SF7$ and low RSSIs as shown in Fig.~\ref{fig:vd_txs}. This boost yields an advantage to nodes with high $SFs$ over low RSSI nodes with $SF7$ (these low RSSI nodes suffer from low DER as shown in Fig.~\ref{fig:vd_der}) by reducing their non-orthogonality impact on high $SFs$. However, this boost creates a non-orthogonality impact from $SF8-9$ over higher $SFs$ if their RSSIs surpass RSSI of nodes using higher $SFs$ by the safe margin as shown in Fig.~\ref{fig:vd_sfder}. Because fewer nodes use $SF10-12$ than use $SF8-9$, \cite{reynders2017power} has slightly higher overall DER, but lower fairness than FADR. This $TP$ boost is the reason for a higher energy consumption compared to FADR.
On the other hand, our FADR $TP$ control algorithm increases the $TP$ gradually and within the safe margin after reaching the minimum limit of using the minimum $TP$ independently of the $SF$. This ensures that a large proportion of distances around the gateway have a balanced RSSI within the safe margin. With FADR, the nodes close to the gateway have an equal impact over the rest of the cell's nodes. This leads to a slight reduction in the overall DER. However, the DER will be more uniform over distance leading to higher fairness as shown in Fig.~\ref{fig:vd_der} and Fig.~\ref{fig:vd_sfder}.
\begin{figure}
\vspace{-2em}
\centering
\includegraphics[width=0.6\columnwidth]{figures/CS_fairness}
\vspace{-0.5em}
\caption{Cell Size Study}\label{fig:cs_fairness}
\vspace{-0.5em}
\end{figure}
\subsubsection{Cell Size Study}\label{cellsize}
We investigated the impact of the cell radius on fairness while keeping the number of nodes constant, with results shown in Fig.~\ref{fig:cs_fairness}. Increasing the cell radius increases the spread of the nodes' RSSIs at the gateway. The results shown were collected from a cell with 1000 nodes; however, the behavior is identical in scenarios with other numbers of nodes. As shown, the cell radius does not have any impact on the fairness of either algorithm, where the difference is always the same. The reason is the slow increase in path loss when moving further away. For example, the RSSI spread experienced at a $1Km$ cell radius is ca. $62dBm$ and ca. $72dBm$ for a $3Km$ cell radius. The $10dBm$ difference between the two cell radii can be handled well within the safe margin of either algorithm.
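The quoted $\sim10dBm$ gap follows directly from the log-distance model: tripling the cell radius adds $10\gamma\log_{10}(3)$ of path loss at the edge, which for $\gamma = 2.08$ (LoRaSim's path loss exponent, an assumption here) is about $9.9dB$.

```python
import math

gamma = 2.08  # assumed path loss exponent (LoRaSim's reference value)
# Extra edge path loss when the cell radius grows from 1 km to 3 km.
extra_loss = 10.0 * gamma * math.log10(3000.0 / 1000.0)
assert 9.5 < extra_loss < 10.5  # consistent with the ~10 dBm gap quoted above
```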
\begin{figure*}
\vspace{-2em}
\centering
\subfloat[Overall DER]{
\includegraphics[width=0.6\columnwidth]{figures/der_comparison}\label{fig:distr_der}
}
\subfloat[Fairness Index]{
\includegraphics[width=0.6\columnwidth]{figures/fair_comparison}\label{fig:distr_fairness}
}
\subfloat[Energy Consumption]{
\includegraphics[width=0.6\columnwidth]{figures/energy_comparison}\label{fig:distr_energy}
}
\vspace{-0.5em}
\caption{Node Distribution study}\label{fig:nodediststudy}
\vspace{-0.5em}
\end{figure*}
\subsubsection{Node Distribution Study}\label{nodedistribution}
We used the node distribution implemented in LoRaSim in all aforementioned studies, which randomly distributes the nodes around the gateway. Changing the node distribution affects the performance of either algorithm. In the case of \cite{reynders2017power}, it changes the location of the nodes with $SF8$, which are the reference for this approach. In the case of FADR, it changes the distribution of collisions between nodes. Therefore, to investigate the impact of different node distributions, the cell is divided into three areas (inner, middle and outer), each spanning 0.33 of the cell radius. The distribution of the nodes was then adjusted to allocate 66.6\% of the nodes to one area, with the rest uniformly distributed over the other two areas. The inner distribution, for example, thus has 66.6\% of the nodes in the inner 33\% of the cell radius. Fig.~\ref{fig:nodediststudy} shows the DER (Fig.~\ref{fig:distr_der}), fairness (Fig.~\ref{fig:distr_fairness}), and energy consumption (Fig.~\ref{fig:distr_energy}) for the different node distributions. The results shown were collected from a cell with 4000 nodes in each distribution.
Overall, the results validate the observations made so far: the approach in \cite{reynders2017power} achieves higher DER, but also higher energy consumption and lower fairness than FADR, with the exception of the inner distribution, where FADR achieves lower fairness. Most of the unfairness in Reynders' approach \cite{reynders2017power} comes from the impact of the non-orthogonality of low $SFs$ on high $SFs$, from which \cite{reynders2017power} suffers more collisions than FADR. From an overall point of view, FADR is therefore more suitable for high $SFs$, i.e. edge nodes, than \cite{reynders2017power}. However, this comes at the expense of DER in the area close to the gateway, where \cite{reynders2017power} achieves higher DER than FADR.
The inner distribution case is stressful for both approaches because most of the nodes are placed in the region of high path loss around the gateway, which affects the remaining 33\% of nodes in the rest of the network. The unfairness again stems mostly from the impact of non-orthogonality, from which \cite{reynders2017power} has 2.6 times more affected packets than FADR. Those packets are concentrated in nodes with $SF10-12$ in the outer region of the network, because the power boost for nodes with $SF8-9$ is now applied closer to the gateway, creating an RSSI difference larger than the CIR threshold for the nodes with $SF10-12$. Therefore, \cite{reynders2017power} achieves slightly higher DER for nodes with $SF8-9$, but \textit{zero} DER for nodes with $SF10-12$, whereas FADR achieves a uniformly distributed, albeit slightly lower, DER over all those nodes. As $SF8-9$ are used by more nodes than $SF10-12$, \cite{reynders2017power} achieves slightly higher fairness than FADR. However, if the nodes with $SF7$ are not considered, which have a much higher DER than all remaining $SFs$, FADR achieves 76\% fairness, whereas \cite{reynders2017power} achieves only 64\%.
\subsection{Discussion} \label{discussions}
\subsubsection{Scalability of Fairness} \label{fairnessscability}
From the above studies, it should be noted that increasing the number of nodes, i.e. increasing the number of collisions, has a negative impact on cell fairness. As LoRaWAN has a discrete, limited set of $TPs$ ($2-14dBm$), a cell cannot eliminate all collisions through a $TP$ control mechanism alone. Collisions are therefore not uniformly distributed over distance, but concentrated in certain areas; we saw this in the impact of the region of steep path loss increase near the gateway on the rest of the cell. Increasing the number of collisions magnifies this non-uniformity and thus amplifies the unfairness within a cell. Since the transmission rate and the packet length also influence the number of collisions, they affect the fairness as well. While the results in this work are based on 80 byte long packets generated once per minute, we found that increasing the transmission rate or the packet length for the same number of nodes degrades the fairness.
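The fairness index used throughout is assumed here to be Jain's index, the standard choice in LoRaWAN studies. A tiny example shows how concentrating losses on a few nodes lowers the index even when the mean DER is unchanged:

```python
def jain_index(values):
    """Jain's fairness index: 1 for perfectly equal values,
    approaching 1/n when a single node dominates."""
    n = len(values)
    s, sq = sum(values), sum(v * v for v in values)
    return s * s / (n * sq) if sq else 0.0

# Same mean DER of 0.8, but non-uniform collisions concentrate the
# losses on two nodes and drag the index down:
uniform = [0.8] * 10
skewed = [1.0] * 8 + [0.0] * 2
print(jain_index(uniform), jain_index(skewed))  # 1.0 vs 0.8
```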
\subsubsection{Real World Considerations} \label{realconsideration}
The effectiveness of FADR $TP$ control in a real world implementation is affected by the variability of the RSSI, which is not totally stable over time \cite{aref2014free}. Therefore, to mitigate RSSI instability, the algorithm is run only after a certain number of packets has been collected by the network server, so that it can average over RSSI samples. How many packets the algorithm should consider before running is left for future work. Furthermore, it is known that RSSI values are highly correlated with the propagation model. We used the same log-distance propagation model as the state-of-the-art work we compared our approach to. However, we argue that the propagation model should not strongly affect FADR's behavior, because FADR does not depend on absolute RSSI values but on the differences between RSSI values, making FADR more relevant for real world implementations than approaches that depend on path-loss estimation. Nevertheless, as future work we plan to test FADR's behavior with different propagation models and in real world deployments.
\section{Conclusion} \label{conclusion}
We proposed FADR to achieve a fair data extraction rate in LoRaWAN cells by deploying the fairest data rate ratios that achieve equal collision probability and by controlling transmission power such that it balances the nodes' received power within a safe margin, thus mitigating the capture effect. FADR achieves an almost uniform data extraction rate for all nodes regardless of their distances from the gateway and maintains the nodes' lifetime by not using excessively high transmission power levels. We implemented and compared FADR to other relevant state-of-art work for various network configurations, which showed FADR's advantages.
\section*{Acknowledgment}
This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) and is co-funded under the European Regional Development Fund under Grant Number 13/RC/2077.
\bibliographystyle{IEEEtran}
\section{Introduction\label{s1}}
Recently the bound state problem for strained graphene with a dipole
became of interest in the physics literature (De Martino et al
\cite{Martinoetal2014}). This is described effectively by a
two-dimensional massive Dirac operator $D_\phi$ with a dipole
potential $\phi$, i.e.,
\begin{equation}
\label{eq:dirac}
D_\phi:=\sigma\cdot p +\sigma_3 - \phi
\end{equation}
with $\sigma:=(\sigma_1, \sigma_2)$ (the first two Pauli matrices),
$p:=(1/\mathrm{i})(\partial_1,\partial_2)$, and real valued potential
\begin{equation}
\label{eq:dipol0}
\phi := d + s
\end{equation}
where
$$d(x):=
\begin{cases}
\mathfrak{d}\cdot \tfrac x{|x|} |x|^{-2}& |x|>1\\
0&|x|\leq1
\end{cases}
$$
with $\mathfrak{d}\in\rz^2$ is the potential of a pure point dipole at the
origin outside the ball of radius one around
the origin. Without loss of generality, we can -- and will from now on
-- pick $\mathfrak{d}:=(b,0)$ with $b>0$, i.e., a multiple of the unit vector
along the first coordinate axis. The potential $s$ will be the --
possibly -- singular part of the potential that is short range in the
sense that $-\Delta -s$ has only finitely many bound states.
It is folklore that the discrete spectrum of $D_\phi$ would be
infinite, if $\phi$ had a non-vanishing Coulomb tail. This is also
true for its three dimensional analogue. However, dimension two and
three differ in the case of a dipole potential: whereas in three
dimensions there are -- for small coupling constant -- only finitely
many eigenvalues in the gap \cite{AbramovKomarov1972},
$\sigma_\mathrm{d}(D_\phi)$ is always infinite in two dimensions. This
has been predicted by De Martino et al \cite{Martinoetal2014} and
proved in \cite{CueninSiedentop2014}. In fact, De Martino et al even
derived a formula for the accumulation rate of the eigenvalues at the
band edge. The purpose of this paper is to prove their formula. To
formulate our result we need some notation:
\begin{definition}
We write
\begin{itemize}
\item \label{eq:N} $N_I(A)$ for the number of eigenvalues
of a linear operator $A$ in $I\subset\cz$ -- counting
multiplicity,
\item $M_b$ for the -- rescaled -- Mathieu operator with
periodic boundary conditions at $0$ and $2\pi$ defined by
\begin{equation}
\label{eq:mathieu}
(M_bg)(\varphi) = -g''(\varphi)- b\cos(\varphi)g(\varphi),
\end{equation}
\item $B_{\mathrm{i}\nu}$ for the Bessel operator with imaginary
order defined by
\begin{equation}
\label{eq:bessel}
(B_{\mathrm{i}\nu}f)(z)= -f''(z)-\frac1zf'(z)-{\nu^2\over z^2}f(z).
\end{equation}
\end{itemize}
\end{definition}
\begin{remark}
The lowest eigenvalue $a_0$ of the rescaled Mathieu operator $M_b$
fulfills the transcendental equation (McLachlan
\cite[3.11.(8)]{McLachlan1947})
\begin{equation}
a_0
= \tfrac{1}{4}\left(- \frac{\tfrac 12 \left(\tfrac b4 \right)^2}{1-\tfrac 14\tfrac{a_0}{4}} - \frac{\tfrac {1}{64} \left(\tfrac b4\right)^2 }{1-\tfrac {1}{16}\tfrac{a_0}{4}} - \frac{\tfrac {1}{576} \left(\tfrac b4\right)^2 }{1-\tfrac {1}{36}\tfrac{a_0}{4}}- ...\right),
\end{equation}
in particular $a_0$ is negative for all $b$ (McLachlan
\cite[3.25. diagram]{McLachlan1947}).
\end{remark}
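This transcendental equation can be iterated numerically. The sketch below keeps only the three terms displayed (so it is a small-$b$ approximation) and confirms that $a_0<0$ for $b>0$:

```python
def mathieu_a0(b, iters=200):
    """Fixed-point iteration of the transcendental equation for the
    lowest eigenvalue a0 of the rescaled Mathieu operator M_b,
    truncated at the three terms displayed above (small-b sketch)."""
    q2 = (b / 4.0) ** 2
    a0 = 0.0
    for _ in range(iters):
        a0 = 0.25 * (-(q2 / 2) / (1 - a0 / 16)
                     - (q2 / 64) / (1 - a0 / 64)
                     - (q2 / 576) / (1 - a0 / 144))
    return a0

for b in (0.5, 1.0, 2.0):
    print(b, mathieu_a0(b))  # negative for every b > 0
```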
The above notation allows us to formulate our main result.
\begin{theorem} \label{Satz1} Assume that $\phi=d+s$ is real valued
with $d$ as in \eqref{eq:dipol0}, and that the singular part $s$ is
relatively compact with respect to $D_0$ and its negative part $s_-$ even fulfills
$$\int_{\rz^2}s_-(x)\log(2+|x|)\mathrm{d} x +\int_0^1s_-^*(\pi t) |\log(t)| \mathrm{d} t<\infty$$
and also
$$\int_{\rz^2}s^2(x)\log(2+|x|)\mathrm{d} x +\int_0^1(s^2)^*(\pi t) |\log(t)| \mathrm{d} t<\infty.$$
Then
\begin{equation}
\label{eq:formel}
\lim_{E\nearrow1} {N_{(-E,E)}(D_\phi)\over|\log(1-E)|}
= \frac1\pi \mathrm{tr}\sqrt{{M_{2b}}_-}.
\end{equation}
\end{theorem}
Before we embark on the proof we make a few comments:
\begin{description}
\item[Domain] Our condition on the potential $\phi$ assures that $D_\phi$ is
self-adjoint on $H^1(\rz^2:\cz^2)$ and that the essential spectrum
of $D_\phi$ is $\rz\setminus(-1,1)$.
\item[Electric Potentials] One possible realization of $\phi$ is to
think of it as the electric potential of some sufficiently smooth
and localized charge density $\rho$, i.e.,
$$\phi(x) = \int_{\rz^3}{\rho(\mathfrak{y})\mathrm{d}\mathfrak{y}\over |(x,0)-\mathfrak{y}|}$$
with vanishing monopole moment, i.e., $\int_{\rz^3}\rho=0$ and to
assume that the dipole moment $\int_{\rz^3}\mathfrak{y}\rho(\mathfrak{y})\mathrm{d} \mathfrak{y}$ of
$\rho$ points in the direction of the first coordinate axis. (See,
e.g., Jackson \cite[Chapter 4]{Jackson1965} for a discussion of
multipole expansions of potentials. Note that we extend the vector
$x\in\rz^2$ by zero to a vector $\mathfrak{x}=(x,0)\in\rz^3$.)
Instead of three dimensional densities, we could also allow for
densities $\rho(\mathfrak{x})=\rho^{(2)}(x)\delta(x_3)$ that are confined to
the electron plane.
\item[Point Charges] Our hypothesis excludes point charges located
directly in the graphene sheet, i.e., the plane in which the
electrons move. Although it is certainly possible to treat a finite
number of such singularities with subcritical coupling constants,
i.e., less than $1/2$, this would require an analysis of the
compactness properties of $D_\phi^2$ when restricted to functions in
a ball containing the singularities of $\phi$. We refrain from
embarking on this subtlety, since it is irrelevant for the long
range behavior of the potential which determines the asymptotic
behavior of the eigenvalues.
\end{description}
\section{Proof of the Main Theorem\label{s2}}
Intuitively, the long range behavior determines the asymptotic behavior
of the eigenvalues. This motivates proving the following lemma, which
covers pure dipole potentials.
\begin{lemma}
For real $a,b\in\rz$, $R\in\rz_+$, $A:=\{x\in\rz^2|\ |x|>1\}$. Set
$$H_{a,b}:= -\Delta - {a+b x_1/|x|\over|x|^2}
$$
on $H^2_0(A)$ or $H^2(A)$, i.e., the operator with Dirichlet
respectively Neumann boundary conditions on the boundary of $A$. Then
\begin{equation}
\label{hilf}
\lim_{E\nearrow0}{N_{(-\infty, E)}(H_{a,b})\over|\log(|E|)|}
= \frac1{2\pi}\mathrm{tr}\sqrt{(M_b-a)_-}.
\end{equation}
\end{lemma}
Note that the lemma is only nontrivial if the sum of the lowest
Mathieu eigenvalue $-\mu$ and $-a$ is negative. This, however, is the
case for $a=0$, since the first Mathieu eigenvalue is always negative
(McLachlan \cite{McLachlan1947}).
We know of two ways of proving the lemma. The first relies on
Dirichlet-Neumann bracketing; it adapts an argument that Kirsch and
Simon \cite{KirschSimon1988} developed for direction independent pure
$1/r^2$-potentials to the case of dipole potentials (direction
dependent decay (!)). (See Rademacher \cite{Rademacher2015} for
details.) The method presented here is somewhat different. As we will
see in the proof, the essential ingredient is the short range behavior
of modified Bessel functions with imaginary order which -- we feel --
is more direct. Moreover it is closer to the derivation by De Martino
et al \cite{Martinoetal2014}.
We begin with the proof of the Lemma.
\begin{proof}
We solve $H_{a,b}\psi = -\lambda \psi$ by separating variables,
i.e., we make the ansatz $\psi(x)= f(r)g(\varphi)$ in spherical
coordinates $x_1=r\cos(\varphi)$ and $x_2:=r\sin(\varphi)$ and
require periodic boundary conditions for $g$, i.e.,
$g(\varphi)=g(\varphi+2\pi)$. Then $g$ is an eigenfunction of the
Mathieu operator, i.e., $M_bg= -\mu g$, and
$f$ is an eigenfunction
of the Bessel operator $B_{\mathrm{i}\sqrt{a+\mu}}f=-\lambda f$ which we
need to solve with Dirichlet respectively Neumann boundary condition
at $1$
and Dirichlet condition at infinity, depending on whether we
solve the problem in $H^2_0(A)$ or $H^2(A)$.
As mentioned above, the Mathieu operator $M_b$ has for all positive
$b$ at least one negative eigenvalue. Because of the boundary
condition at infinity we have that the only solutions of
\eqref{eq:bessel} are
\begin{equation}
\label{eq:loesung}
f(r) = K_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}r).
\end{equation}
(Note that the functions $I_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}\cdot)$
and $K_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}\cdot)$ are two linearly
independent solutions of Bessel's equation.
However, $I_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}\cdot)$ is
excluded because of its exponential blow-up at infinity; see Watson
\cite[Chapter 7.23, Formula (2)]{Watson1922}.)
Next we note that, since the change from Dirichlet to Neumann
boundary condition at $1$ is a perturbation of rank one for each
fixed eigenvalue of the Mathieu equation, we merely need to consider
the Dirichlet case. Thus we are interested in finding the maximal
number of nodes that a function
$K_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}\cdot)$ can have for fixed
$\mu+a$, assuming that $K_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda})=0$ and
any $-\lambda\leq E$, i.e.,
\begin{equation}
\label{eq:n}
\left|\{r\geq 1|K_{\mathrm{i}\sqrt{\mu+a}}(\sqrt{\lambda}r)=0, -\lambda\leq E\}\right|.
\end{equation}
If $a+\mu\leq0$, the operator $B_{\sqrt{|a+\mu|}}$ has no eigenfunctions and
the claim is trivially true, i.e., we may assume that $a+\mu>0$.
Thus, we need to find the maximal $n$ such that
\begin{equation}
\label{eq:bed}
\sqrt{-E}\leq k_{\sqrt{\mu+a},n}
= O(\exp(-(n\pi-\phi_{\sqrt{\mu+a}})/\sqrt{\mu+a}))
\end{equation}
where $k_{\sqrt{\mu+a},n}$ denotes the $n$-th zero of
$K_{\mathrm{i}\sqrt{\mu+a}}$, using the asymptotic expansion of these zeros (see
\eqref{eq:b1}). Taking the logarithm and dividing by $|\log(-E)|$ yields
\begin{equation}
\label{eq:r}
n/|\log(-E)| \to {\sqrt{\mu+a}\over2\pi}
\end{equation}
as $E\nearrow0$. Since the lower bound can deviate by at most one,
this is covered as well.
\end{proof}
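The geometric accumulation of the zeros of $K_{\mathrm{i}\nu}$ at the origin, which drives the counting argument above, can be checked numerically: mpmath evaluates Bessel functions of imaginary order. The following sketch (with an illustrative value of $\nu$ standing in for $\sqrt{\mu+a}$) locates the small zeros and verifies that consecutive zeros approach the ratio $e^{\pi/\nu}$, the exponential law used in the proof:

```python
import math
from mpmath import mp, mpf, mpc, besselk

mp.dps = 30
nu = 2.0  # stands in for sqrt(mu + a); the value is illustrative

def K_im(x):
    # K_{i*nu}(x) is real for real x > 0; discard the zero imaginary part
    return besselk(mpc(0, nu), x).real

def refine(a, b, steps=80):
    # plain bisection; K_im changes sign on [a, b]
    fa = K_im(a)
    for _ in range(steps):
        m = (a + b) / 2
        fm = K_im(m)
        if fm * fa > 0:
            a, fa = m, fm
        else:
            b = m
    return (a + b) / 2

# bracket sign changes of K_{i*nu} on a logarithmic grid near 0
zeros, prev = [], None
for k in range(800):
    x = mpf('1e-6') * mp.exp(k * mpf('0.02'))
    fx = K_im(x)
    if prev is not None and fx * prev[1] < 0:
        zeros.append(refine(prev[0], x))
    prev = (x, fx)

# consecutive zeros accumulate geometrically at 0: x_{n+1}/x_n -> e^{pi/nu}
print(float(zeros[1] / zeros[0]), math.exp(math.pi / nu))
```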
Next we turn to the proof of the theorem:
\begin{proof}
We begin by noting that the operator $D_\phi$ is a relatively compact
perturbation of the free Dirac operator $D_0$. This implies that
$D_\phi$ has only eigenvalues of finite multiplicity in $(-1,1)$,
the spectral gap of $D_0$. Furthermore, those eigenvalues can only
accumulate at $-1$ or $1$. Thus, by the spectral theorem
\begin{equation}
\label{eq:quadrat}
N_{(-1,1)}(D_\phi) = N_{(-\infty,0)}(D_\phi^2-1),
\end{equation}
which reduces the problem to the study of the negative eigenvalues of a
relatively compact perturbation of the Laplacian, since
\begin{equation}
\label{eq:quadrat2}
D_\phi^2 -1 = -\Delta +\phi^2 +(\sigma\cdot p) \phi +\phi(\sigma\cdot p) - 2\sigma_3\phi.
\end{equation}
The Schwarz inequality -- followed by the geometric-arithmetic mean
inequality -- yields for any positive $\epsilon$
\begin{equation}
\label{eq:schwarz}
|2\Re(\psi,\epsilon (\sigma\cdot p)(\epsilon^{-1}\phi)\psi)| \leq \epsilon^2\|p\psi\|^2 +\epsilon^{-2}\|\phi\psi\|^2.
\end{equation}
Thus
\begin{equation}
\label{eq:oben}
D_\phi^2 -1 \leq (1+\epsilon^2)p^2 +(1+\epsilon^{-2})\phi^2-2\sigma_3\phi
\end{equation}
and
\begin{equation}
\label{eq:unten}
D_\phi^2 -1 \geq (1-\epsilon^2)p^2 +(1-\epsilon^{-2})\phi^2-2\sigma_3\phi.
\end{equation}
Note that the lower bound \eqref{eq:unten} is bounded from below for
$\epsilon\in(0,1)$, since both $\phi$ and $\phi^2$ are relatively compact
perturbations of $p^2$.
Both right hand sides of \eqref{eq:oben} and \eqref{eq:unten} separate
into two independent one-component operators, since $\sigma_3$ is
diagonal. We shall focus on the first component. (As the proof shows,
the second component will give the same answer because of the symmetry
of the pure dipole part.) We write
\begin{equation}
\label{eq:pm}
(1\pm\epsilon^2)h_\pm
:= (1\pm\epsilon^2)\left(p^2 +\epsilon^{-2}\phi^2-{2\over (1\pm\epsilon^2)}\phi\right)
\end{equation}
for the first components of the right hand side of $\eqref{eq:oben}$
and $\eqref{eq:unten}$. The task is now to estimate
$N_{(-\infty,E)}(h_+)$ from below and $N_{(-\infty,E)}(h_-)$ from
above as $E\nearrow0$. We begin with the lower bound on
$N_{(-\infty,E)}(h_+)$ and write in the spirit of \eqref{eq:sum3}
\begin{align}
\label{h} {h_\pm} &= -\Delta + V_\pm + W_\pm,\\
\label{V}
V_\pm &:= -2(1\pm\epsilon^2)^{-1}d,\\
\label{W}
W_\pm &:= \epsilon^{-2}\phi^2-2(1\pm\epsilon^2)^{-1}s.
\end{align}
(Note that the indices $\pm$ at $h$, $V$, and $W$, are just indices
motivated by the signs in \eqref{eq:oben} and \eqref{eq:unten} and not
to the positive part and negative part of an operator as elsewhere in
the paper.) Thus, by \eqref{eq:sum3}
\begin{equation}
\label{oben}
\limsup_{E\nearrow0}{N_{(-\infty,E)}(h_+)\over|\log(|E|)|}
\geq \limsup_{E\nearrow0}{N_{(-\infty,E)}(-\Delta-(1-\epsilon) V_+)\over|\log(|E|)|},
\end{equation}
since $N_{(-\infty,E)}(-\Delta+(1-\epsilon)\epsilon^{-1}W_+)\leq
N_{(-\infty,0)}(-\Delta+(1-\epsilon)\epsilon^{-1}W_+)<\infty$ by
\eqref{eq:shargo1}, and this term thus vanishes when divided by $|\log(|E|)|$ as
$E\nearrow0$. Finally, we take $\epsilon$ to zero and get
\begin{multline}
\label{eq:epsilon=0}
\limsup_{E\nearrow0}{N_{(-\infty,E)}(h_+)\over|\log(|E|)|}\geq \limsup_{E\nearrow0}{N_{(-\infty,E)}(-\Delta-2d)\over|\log(|E|)|}\\
\geq
\limsup_{E\nearrow0}{N_{(-\infty,E)}\left((-\Delta-2d)|_{H_0^2(\{x\in\rz^2:|x|>1\})}\right)\over|\log(|E|)|}
={1\over2\pi}\mathrm{tr}\sqrt{{M_{2b}}_-}
\end{multline}
using \eqref{hilf} in the last step. Repeating the argument for the
second component yields the same result. Adding the results for both
components gives the claimed lower bound.
We now turn to the upper bound. We use \eqref{eq:unten} to estimate
the operator from below and thus, the number of eigenvalues from
above. Again the operator decouples into two one-component operators
and we are left with the task to compute twice the number of
eigenvalues of $h_-$ below $-E$ as remarked already above. Next we use
\eqref{eq:sum2} to estimate from above and note -- similarly to the
lower bound -- that the $W_-$ part does not contribute. As above we now get
\begin{multline}
\label{eq:epsilon=0o}
\limsup_{E\nearrow0}{N_{(-\infty,E)}(h_-)\over|\log(|E|)|}\leq \limsup_{E\nearrow0}{N_{(-\infty,E)}(-\Delta-2d)\over|\log(|E|)|}\\
\leq
\limsup_{E\nearrow0}{N_{(-\infty,E)}\left((-\Delta-2d)|_{H^2(\{x\in\rz^2:|x|>1\})}\right)\over|\log(|E|)|}
={1\over2\pi}\mathrm{tr}\sqrt{{M_{2b}}_-}
\end{multline}
where we estimated by the Neumann operator and used \eqref{hilf}
again. Doubling the bound because of the two components gives the
desired result.
\end{proof}
\textsc{Acknowledgment:} Thanks go to Sergey Morozov for directing our
attention to \cite{Shargorodsky2013}. We acknowledge partial support
of the Deutsche Forschungsgemeinschaft through its TR-SFB 12
(Symmetrien und Universalit\"at in mesoskopischen Systemen).
\section{Introduction}\label{sec:introduction}
Over the past two decades, designing energy-efficient communication terminals has become an important issue. This is not surprising for terminals which have to be autonomous as far as energy is concerned, such as cellular phones, unplugged laptops, wireless sensors, and mobile robots. More surprisingly, energy consumption has also become a critical issue for the fixed infrastructure of wireless networks. For instance, Vodafone's global energy consumption for 2007-2008 was about 3000 GWh \cite{genref1}, which corresponds to emitting 1.45 million tons of CO2 and represents a monetary cost of a few hundred million Euros. This context explains, in part, why concepts like ``green communications'' have emerged, as seen from \cite{palicot,genref2} and \cite{genref3}. Using large multiple antennas, virtual multiple input multiple output (MIMO) systems, and small cells is envisioned to be one way of contributing to reducing energy consumption drastically. The work reported in this paper concerns point-to-point MIMO systems in which communication links evolve in a quasi-static manner; these channels are referred to as MIMO slow fading channels. The performance metric considered for measuring the energy-efficiency of a MIMO communication corresponds to a trade-off between the net transmission rate (transmission benefit) and the consumed power (transmission cost).
The ultimate goal pursued in this paper is a relatively important problem in signal processing for communications. It consists of tuning the covariance matrix of the transmitted signal (called the pre-coding matrix) optimally. But, in contrast with the vast literature initiated by \cite{telatar} in which the transmission rate is of prime interest, the present paper aims at optimizing the pre-coding matrix in the sense of energy-efficiency as stated in \cite{veronica2}. Interestingly, in \cite{veronica2} the authors bridge a gap between the pioneering work by Verd\'{u} on the capacity per unit cost for static channels \cite{verdu} and the more pragmatic definition of energy-efficiency proposed by \cite{goodman} for quasi-static single input single output (SISO) channels. Indeed, in \cite{veronica2}, energy-efficiency is defined as the ratio of the probability that the channel mutual information is greater than a given threshold to the used transmit power. Assuming perfect channel state information at the receiver (CSIR) and the knowledge of the channel distribution at the transmitter, the pre-coding matrix is then optimized for several special cases. While \cite{veronica2} provides interesting insights into how to allocate and control power at the transmitter, a critical issue is left unanswered; to what extent do the conclusions of \cite{veronica2} hold in more practical scenarios such as those involving imperfect CSI? Answering this question was one of the motivations for the work reported here. Below, the main differences between the approach used in this work and several existing relevant works are reviewed.
In the proposed approach, the goal pursued is to maximize the number of information bits transmitted successfully per Joule consumed at the transmitter. This is different from the most conventional approach which consists in minimizing the transmit power under a transmission rate constraint: \cite{cui} perfectly represents this body of literature. In the latter and related works, efficiency is not the main motivation. \cite{tse} provides a good motivation as to how energy-efficiency can be more relevant than minimizing power under a rate constraint. Indeed, in a communication system without delay constraints, rate constraints are generally irrelevant whereas the way energy is used to transmit the (sporadic) packets is of prime interest. Rather, our approach follows the original works on energy-efficiency which includes \cite{shah,goodman,saraydar,similar,buzzi}. The current state of the art indicates that, since \cite{veronica2}, there have been no works where the MIMO case is treated by exploiting the cumulative distribution of the channel mutual information (i.e., the outage probability) at the numerator of the performance metric. As explained below, our analysis goes much further than \cite{veronica2} by considering effects such as channel estimation error effects. In the latter respect, several works address the issue of power allocation for outage probability minimization \cite{yoo-goldsmith-2007,giuseppe-it-2010,cesar-08} under imperfect channel state information. The latter will serve as a basis for the analysis conducted in the present paper. At this point, it is possible to state the contributions of the present work.
In comparison to \cite{veronica2}, which is the closest related work, the main contributions of the paper can be summarized as follows:
\begin{itemize}
\item One of the scenarios under investigation concerns the case where CSI is also available at the transmitter (only the case with CSIR and CSI distribution knowledge at the transmitter is studied in \cite{veronica2}).
\item The assumption of perfect CSI is relaxed. In Sec. III, it is assumed that only imperfect CSIT and imperfect CSIR are available. Sec. IV considers the case with no CSIT and imperfect CSIR. In particular, this leads us to the problem of tuning the fraction of training time optimally. Exploiting existing works on the transmission rate analysis \cite{mimo} and \cite{mimot}, it is shown that this problem can also be treated for energy-efficiency.
\item The realistic assumption of finite block length is made. This is particularly relevant, since block finiteness is also a source of outage and therefore impacts energy-efficiency. Note that recent works on transmission under the finite length regime such as \cite{polynaskiy} provide a powerful theoretical background for possible extensions of this paper.
\item Instead of considering the radiated power only for the cost of transmitting, the total power consumed by the transmitter is accounted for. Based on works such as \cite{richter}, an affine relation between the two is assumed. Although more advanced models can be assumed, this change is sufficient to show that the behavior of energy-efficiency is also modified.
\end{itemize}
The paper is therefore structured as follows. Sec. II describes the proposed framework to tackle the aforementioned issues. Sec. III and IV treat the case with and without CSIT respectively. They are followed by a section dedicated to numerical results (Sec. V) whereas Sec. \ref{sec:conclusion} concludes the paper with the main messages of this paper and some relevant extensions.
\section{System model}\label{sec:system-model}
A point-to-point multiple input and multiple output communication unit is studied in this work. In this paper, the dimensionality of the input and output is given by the numbers of antennas but the analysis holds for other scenarios such as virtual MIMO systems \cite{vmimo}. If the total transmit power is given as $P$, the average SNR is given by~:
\begin{equation}\label{eq:def-snr}
\rho = \frac{P}{\sigma^2}
\end{equation}
where $\sigma^2$ is the reception noise variance. The signal at the receiver is modeled by~:
\begin{equation}
\label{eq:system-model-mimo}
\underline{y}= \sqrt{\frac{\rho}{M}} \mathbf{H} \underline{s}
+ \underline{z}
\end{equation}
where $\mathbf{H}$ is the $N \times M$ channel transfer matrix and $M$ (resp. $N$) the number of transmit (resp. receive) antennas. The entries of $\mathbf{H}$ are i.i.d. zero-mean unit-variance complex Gaussian random variables. The vector $\underline{s}$ is the $M$-dimensional column vector of transmitted symbols and follows a complex normal distribution, and $\underline{z}$ is an $N$-dimensional complex white Gaussian noise vector distributed as $\mathcal{N}(\underline{0}, \mathbf{I})$. The input covariance matrix (called the pre-coding matrix) $\mathbf{Q} = \mathbb{E}[\underline{s}\underline{s}^H]$ satisfies
\begin{equation}
\label{eq:pre-coding} \frac{1}{M} \mathrm{Tr}(\mathbf{Q}) = 1
\end{equation}
where $\mathrm{Tr}$ stands for the trace operator. The power constraint is expressed as~:
\begin{equation}
\label{eq:power-contraint} P \leq P_{\max}
\end{equation}
where $P_{\max}$ is the maximum available power at the transmitter.
The channel matrix $\mathbf{H}$ is assumed to evolve in a quasi-static manner~: the channel is constant for some time interval, after which it changes to an independent value that it holds for the next interval \cite{mimo}. This model is appropriate for the slow-fading case where the time over which ${\bf H}$ changes is much larger than the symbol duration.
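A minimal numerical sketch of the model \eqref{eq:system-model-mimo} for one quasi-static block, together with the resulting mutual information $\log_2\det(\mathbf{I}_N + \frac{\rho}{M}\mathbf{H}\mathbf{Q}\mathbf{H}^H)$; the antenna numbers, SNR, and block length below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, rho = 4, 4, 10.0  # antennas and average SNR (illustrative)

# i.i.d. CN(0,1) channel, fixed for the whole block (quasi-static)
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
Q = np.eye(M)           # uniform power allocation, (1/M) Tr(Q) = 1

# received block y = sqrt(rho/M) H s + z, for Ls symbols
Ls = 6
s = (rng.standard_normal((M, Ls)) + 1j * rng.standard_normal((M, Ls))) / np.sqrt(2)
z = (rng.standard_normal((N, Ls)) + 1j * rng.standard_normal((N, Ls))) / np.sqrt(2)
y = np.sqrt(rho / M) * (H @ s) + z

# mutual information of this channel draw (Gaussian input with covariance Q)
I = np.log2(np.linalg.det(np.eye(N) + (rho / M) * H @ Q @ H.conj().T)).real
print(f"I = {I:.2f} bit/s/Hz")
```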
\subsection{Defining the energy efficiency metric}
In this section, we introduce and justify the proposed definition of energy-efficiency for a communication system that has multiple input and output antennas and experiences slow fading.
In \cite{goodman}, the authors study multiple access channels with SISO links and use the properties of the energy efficiency function defined as $\frac{f(\rho)}{P}$ to establish a relation between the channel state (channel complex gain) ($h$) and the optimal power ($P^*$). This can be written as:
\begin{equation}
P^*=\frac{\mathrm{SNR}^* \sigma^2}{|h|^2}
\end{equation}
where $\mathrm{SNR}^*$ is the optimal SNR for any channel state and (when $f$ is a sigmoidal/S-shaped function, i.e., initially convex and concave after some point) is the unique strictly positive
solution of
\begin{equation}
x f'(x) - f(x)=0
\end{equation}
where $1-f(.)$ is the outage probability. Formulating this problem for MIMO channels is non-trivial, as both the total transmit power and the power allocation have to be chosen.
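For a concrete sigmoidal $f$, the root of $x f'(x) - f(x) = 0$ is easy to obtain by bisection. The efficiency function below, $f(x)=(1-e^{-x})^L$, is a classic example from the early energy-efficiency literature; it is an assumption for illustration, not necessarily the $f(.)$ of the cited works:

```python
import math

L = 100  # illustrative block length

# Assumed sigmoidal efficiency function: f(x) = (1 - e^{-x})^L,
# the success probability of L uncoded symbols at SNR x.
def f(x):  return (1.0 - math.exp(-x)) ** L
def fp(x): return L * math.exp(-x) * (1.0 - math.exp(-x)) ** (L - 1)

def snr_star(lo=0.1, hi=50.0, tol=1e-10):
    """Bisection on g(x) = x f'(x) - f(x): g > 0 left of the root and
    g < 0 right of it, so the root is the unique maximizer of f(x)/x."""
    g = lambda x: x * fp(x) - f(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"SNR* = {snr_star():.3f}")  # ~6.47 for L = 100
```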
When the same (imperfect) CSI is available at the transmitter and receiver, by estimating the channel for $t$ time, and sending the information to the transmitter for $t_f$ time, the energy-efficiency $\nu_T$ is defined as:
\begin{equation}\label{eq:def-ee-csi}
\nu_T(P,{\bf Q},\hat{{\bf H}}) = \frac{R \left(1 - \frac{t+t_f}{T}\right) F_{L} \left[ I_{\mathrm{ICSITR}}(P,{\bf Q},\hat{{\bf H}})-\frac{R}{R_0} \right] } {a P +b}
\end{equation}
where $R$ is the transmission rate in bit/s, $T$ is the block duration in s, $R_0$ is a parameter which has unit Hz (e.g., the system bandwidth), and $a>0$, $b\geq0$ are parameters relating the transmitter radiated power to its total consumed power; we define $\xi = \frac{R}{R_0}$ as the spectral efficiency. $I_{\mathrm{ICSITR}}(P,{\bf Q},\hat{{\bf H}})$ denotes the mutual information with imperfect CSITR (the receiver also has the exact same CSI as the transmitter). This form of the energy-efficiency is inspired by early definitions provided in works like \cite{goodman} and captures the gain in data rate with respect to the cost, which is the power consumed. The numerator represents the benefit associated with transmitting, namely the net transmission rate (called the goodput in \cite{goodput}) of the communication, measured in bit/s. The goodput comprises a term $1 - \frac{t+t_f}{T}$ which represents the loss in information rate due to the presence of a training and feedback mechanism (of duration $t$ seconds and $t_f$ seconds resp. in a $T$\,s long block) \footnote{In this case, we assume that the feedback mechanism is sufficient to result in perfect knowledge of $\hat{{\bf H}}$ at the transmitter. This is done because assuming an imperfect CSI at the transmitter different from that at the receiver creates too much complexity; this problem is beyond the scope of a single paper.}. The denominator of (\ref{eq:def-ee-csi}) represents the cost of transmission in terms of power. The proposed form for the denominator of (\ref{eq:def-ee-csi}) is inspired by \cite{richter}, where the authors propose to relate the average power consumption of a transmitter (base stations in their case) to the average radiated or radio-frequency power by an affine model.
The term $F_L(.)$ represents the transmission success probability. The quantity $F_L(.)$ gives the probability that the ``information'' $\hat{I}$ (as defined in \cite{Buckingham-08}) is greater than or equal to the coding rate ($\xi$), i.e., it is the complementary cumulative distribution function of the information $\hat{I}$, $\mathrm{Prob}( \hat{I} \geq \xi)$. Formally, $\hat{I}$ is defined as $\hat{I}=\log \frac{\mathrm{PDF}_{X,Y}(x,y)}{\mathrm{PDF}_{X}(x)\mathrm{PDF}_{Y}(y)}$, where PDF$_{X,Y}$ and PDF$_{X}$ represent the joint and marginal probability distribution functions, and $x$ and $y$ are samples of the processes $X$ and $Y$, which in this case represent the transmitted and received signals. The average mutual information $I = E(\hat{I})$ is used to calculate this probability and $F_L(.)$ depends on the difference between $I$ and $\xi$. $F_L(.)$ can be verified to be sigmoidal (it is the cumulative distribution function of a variable with a single-peaked probability density function) and $F_L(0)=0.5$ (if $\xi=I$, $F_L(.)$ is the probability that a random variable is equal to or larger than its mean). When CSIT is available, it is possible to ensure that the data transmission rate is just below the channel capacity. If this is done, then there is no possibility of outage when the block length is infinite \cite{shannon}. However, in most practical cases the block length is finite, and this creates an outage effect which depends on the block length $L$ \cite{Buckingham-08}.
The bounds on $F_L$ can be expressed as $F_L(I_{\mathrm{ICSITR}}(0,0,{\bf H})-\xi)=0$ (no reliable communication when the transmit power is zero) and $F_L \to 1$ as $P \to \infty$. The proposed form of this function, $F_L(I_{\mathrm{ICSITR}}(P,{\bf Q},\hat{{\bf H}})-\xi)$, is supported by works like \cite{Buckingham-08} and \cite{hoydis-2012}. An approximation of this function based on the automatic repeat request protocol \cite{arq} is $F_L(x)=Q_{func}(-Tx)$, where $Q_{func}$ is the tail probability of the standard normal distribution.
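The qualitative properties above ($F_L(0)=0.5$, the limits $0$ and $1$, and the sigmoidal shape) are easy to check numerically with the $Q_{func}$-based approximation. A minimal Python sketch (the scale constant, written $T$ as in the approximation above, is set to an arbitrary illustrative value):

```python
import math

def Q(x):
    # Tail probability of the standard normal distribution.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def F_L(x, T=100.0):
    # Sigmoidal success-probability model F_L(x) = Q_func(-T*x),
    # where x = I - xi (mutual information minus coding rate) and
    # T is a positive scale constant (illustrative value here).
    return Q(-T * x)
```

As expected, $F_L$ is increasing, equals $0.5$ at $x=0$, and saturates at $0$ and $1$ for large negative and positive gaps $I-\xi$.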
Therefore, in the presence of CSI at the transmitter, outage occurs even when the mutual information is more than the targeted rate due to the noise and finite code-lengths. In this scenario, the energy-efficiency is maximized when the parameters ${\bf Q}$ and $P$ are optimized.
In the absence of CSI at the transmitter, the earlier definition of energy-efficiency is not suitable: since ${\bf H}$ is random, $\nu_{T}$ is also a random quantity. Additionally, in this case, it is impossible to know whether the data transmission rate is lower than the instantaneous channel capacity, as the channel varies from block to block. Therefore, the source of outage is primarily the variation of the channel \cite{ozarow-1994}, and using (\ref{eq:def-ee-csi}) directly is not suitable. As the channel information is unavailable at the transmitter, we define $\mathbf{Q} = \frac{\mathbf{I}_M}{M}$, meaning that the transmit power is allocated uniformly over the transmit antennas; in Sec. \ref{sec:subsec-opt-M}, we comment further on this assumption. Under this assumption, the average energy-efficiency can be calculated as the expectation of the instantaneous energy-efficiency over all possible channel realizations. This can be written as:
\begin{equation}
\nu_R(P,t) = \frac{R \left(1 - \frac{t}{T}\right)
\mathbb{E}_{{\bf H}}\left( F_L\left[ I_{\mathrm{ICSIR}}(P,{\bf Q},\hat{{\bf H}})-\frac{R}{R_0} \right] \right) }{a P +b}.
\end{equation}
For large $L$, it has been shown in \cite{ozarow-1994} (and later used in other works like \cite{veronica2}) that the above expression can be well approximated by:
\begin{equation}\label{eq:def-ee-nocsit}
\nu_R(P,t) = \frac{R \left(1 - \frac{t}{T}\right)\mathrm{Pr}_{{\bf H}} \left[ I_{\mathrm{ICSIR}}(P,t, \hat{{\bf H}}) \geq \xi \right] }{a P +b}
\end{equation}
where $\mathrm{Pr}_{{\bf H}}$ represents the probability evaluated over the realizations of the random variable ${\bf H}$. Here, $I_{\mathrm{ICSIR}}$ represents the mutual information of the channel with imperfect CSI at the receiver. Let us comment on this definition of energy-efficiency. It is similar to the earlier definition in almost all respects. Here, the parameter $t$ represents the length of the training sequence used to learn the channel at the receiver\footnote{In this case, the optimization is done over $P$ and $t$ assuming imperfect CSI at the receiver. A parameter not explicitly stated here, but present nevertheless, is $M$, since the number of transmit antennas affects the effectiveness of training.}. The major difference is that the success rate is now the probability that the associated mutual information is above a certain threshold. This definition of the outage is shown to be appropriate and compatible with the earlier definition when only statistical knowledge of the channel is available \cite{ozarow-1994}.
Although very simple, these models allow one, in particular, to study two regimes of interest.
\begin{itemize}
\item The regime where $\frac{b}{a}$ is small allows one to study not only communication systems where the power consumed by the transmitter is determined by the radiated power, but also those which have to be green in terms of electromagnetic pollution or due to wireless signal restrictions (see e.g., \cite{wifir}).
\item The regime where $\frac{b}{a}$ is large allows one to study not only communication systems where the consumed power is almost independent of the radiated power but also those where the performance criterion is the goodput.
\end{itemize}
Note that when $b=0$, $t \rightarrow +\infty$, $T \rightarrow +\infty$, and $\frac{t}{T}
\rightarrow 0$, equation (\ref{eq:def-ee-nocsit}) boils down to the performance
metric investigated in \cite{veronica2}.
\subsection{Modeling channel estimation noise}
Each transmitted block of data is assumed to comprise a training sequence in order for the receiver to be able to estimate the channel; the training sequence length in symbols is denoted by $t_s$ and the block length in symbols by $T_s$. Continuous counterparts of the latter quantities are defined by $t= t_s S_d$ and $T = T_s S_d$, where $S_d$ is the symbol duration in seconds. In the training phase, all $M$ transmitting antennas broadcast orthogonal sequences of known pilot/training symbols of equal power on all antennas. The receiver estimates the channel, based on the observation of the training sequence, as $\widehat{{\bf H}}$ and the error in estimation is given as $\Delta {\bf H} = {\bf H} - \widehat{{\bf H}}$. Concerning the number of observations needed to estimate the channel, note that typical channel estimators generally require at least as many measurements as unknowns \cite{mimot}, that is to say $N t_s \geq N M$ or more simply
\begin{equation}\label{eq:condition-ts-M}
t_s \geq M.
\end{equation}
The channel estimate normalized to unit variance is denoted by $\widetilde{{\bf H}}$. From \cite{mimot} we know that the mutual information is the lowest when the estimation noise is Gaussian. Taking the worst case noise, it has been shown in \cite{mimo} that the following observation equation
\begin{equation}\label{eq:cmodel2n}
\widetilde{\underline{y}} = \sqrt{\frac{\rho_{\mathrm{eff}}(\rho,t)}{M}}\widetilde{{\bf H}}
\underline{s} + \widetilde{\underline{z}}
\end{equation}
perfectly translates the loss in terms of mutual information\footnote{It is implicitly assumed that the mutual information is taken between the system input and output; this quantity is known to be very relevant to characterize the transmission quality of a communication system (see e.g. \cite{cover} for a definition).} due to channel estimation provided that the effective SNR $\rho_{\mathrm{eff}}(\rho,t)$ and equivalent observation noise $\widetilde{\underline{z}}$ are defined properly namely,
\begin{equation}
\label{eq:rhofun}
\left\{
\begin{array}{ccc}
\widetilde{\underline{z}} & = & \sqrt{\frac{\rho}{M}} \Delta {\bf H} \underline{s} +\underline{z}\\
\rho_{\mathrm{eff}}(\rho,t) & = & \frac{\frac{t}{M S_d}\rho^2}{1+\rho + \rho \frac{t}{M S_d}}
\end{array}
.
\right.
\end{equation}
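A direct numerical transcription of the effective SNR in (\ref{eq:rhofun}) makes its behavior easy to check; a sketch (the function name and symbol names are ours):

```python
def rho_eff(rho, t, M, S_d=1.0):
    # Effective SNR from (eq:rhofun): rho_eff = tau*rho^2 / (1 + rho + rho*tau),
    # with tau = t/(M*S_d) the number of training symbols per transmit antenna.
    tau = t / (M * S_d)
    return tau * rho ** 2 / (1.0 + rho + rho * tau)
```

As expected from the model, the effective SNR always satisfies $\rho_{\mathrm{eff}} < \rho$ (a training penalty) and $\rho_{\mathrm{eff}} \to \rho$ as the training length grows.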
As the worst-case scenario for the estimation noise is assumed, all formulas derived in the following sections give lower bounds on the mutual information and success rates. Note that the lower bound is tight (in fact, equal to the actual mutual information) when the estimation noise is Gaussian, which is true in practical cases of channel estimation. The effectiveness of this model will not be discussed here but has been confirmed in many other works of practical interest (see e.g., \cite{samson}). Note that the above equation can be utilized for the case of imperfect CSITR and CSIR as well as the case of imperfect CSIR with no CSITR. This is because in both cases, the outage is determined by calculating the mutual information $I_{\mathrm{ICSITR}}$ or $I_{\mathrm{ICSIR}}$ respectively.
\section{Optimizing energy-efficiency with imperfect CSITR available}
\label{sec:optee-csit}
When perfect CSITR or CSIR is available, the mutual information of a MIMO system, with a pre-coding scheme ${\bf Q}$ and channel matrix ${\bf H}$ can be expressed as:
\begin{equation}\label{eq:def-cappcsit}
I_{\mathrm{CSITR}}(P,{\bf Q},{\bf H}) = \log \left| \mathbf{I}_M + \frac{P}{M\sigma^2} {\bf H} {\bf Q}
{\bf H}^H \right|
\end{equation}
The notation $|\mathbf{A}|$ denotes the determinant of the (square) matrix $\mathbf{A}$.
With imperfect CSIT, which is exactly the same as the CSIR (i.e., both the transmitter and the receiver have the same channel estimate $\hat{{\bf H}}$), a lower bound on the mutual information can be found in several works like \cite{yoo-goldsmith-2007,cesar-08}. We use this lower bound for $I_{\mathrm{ICSITR}}$, which is expressed as:
\begin{equation}\label{eq:def-capipcsit-1}
I_{\mathrm{ICSITR}}(P,{\bf Q},\hat{{\bf H}}) = \log \left| \mathbf{I}_M + \hat{{\bf H}} \frac{P}{M\sigma^2(1+\rho \sigma^2_E) } {\bf Q} \hat{{\bf H}}^H \right|
\end{equation}
where $\hat{{\bf H}}$ is the estimated channel and $1-\sigma^2_E$ is the variance of $\hat{{\bf H}}$. Considering the block fading channel model, from \cite{yoo-goldsmith-2007} and \cite{mimot} we conclude that $\sigma^2_E=\frac{1}{1+\rho \frac{t}{M}}$. Simplifying:
\begin{equation}\label{eq:def-capipcsit}
I_{\mathrm{ICSITR}}(P,{\bf Q},\widehat{{\bf H}}) = \log \left| \mathbf{I}_M + \frac{\rho_{\text{eff}}}{M} \widehat{{\bf H}} {\bf Q}
\widehat{{\bf H}}^H \right|.
\end{equation}
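For concreteness, the lower bound in (\ref{eq:def-capipcsit}) can be evaluated directly for the $2 \times 2$ case. A toy transcription (our own function name; base-2 logarithm and uniform precoding ${\bf Q} = \mathbf{I}_2$ are assumptions made here for simplicity):

```python
import math

def mutual_info_2x2(H, rho_eff):
    # log2 det(I_2 + (rho_eff/M) H H^H) for M = 2 transmit antennas and
    # uniform precoding Q = I_2; the 2x2 determinant is expanded by hand.
    c = rho_eff / 2.0
    g00 = sum(abs(H[0][k]) ** 2 for k in range(2))   # (H H^H)_{00}
    g11 = sum(abs(H[1][k]) ** 2 for k in range(2))   # (H H^H)_{11}
    g01 = sum(H[0][k] * H[1][k].conjugate() for k in range(2))
    # det(I + c*G) for the 2x2 Hermitian matrix G = H H^H
    det = (1.0 + c * g00) * (1.0 + c * g11) - (c * abs(g01)) ** 2
    return math.log2(det)
```

For $\hat{{\bf H}} = \mathbf{I}_2$ this reduces to $2\log_2(1+\rho_{\mathrm{eff}}/2)$, as expected from the determinant of a diagonal matrix.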
Having defined the mutual information to be used for (\ref{eq:def-ee-csi}), we proceed with optimizing $\nu_T$.
\subsection{Optimizing the pre-coding matrix ${\bf Q}$}
Studying (\ref{eq:def-ee-csi}) and (\ref{eq:def-capipcsit}), we see that varying the power allocation (or the corresponding pre-coding matrix) ${\bf Q}$ affects only the success rate $F_L(.)$, and the total power $P$ is the only term present outside $F_L(.)$. As $F_L(.)$ is known to be an increasing function, if the total power is a constant, optimizing the energy-efficiency $\nu_T$ amounts to simply maximizing the mutual information $I_{\mathrm{ICSITR}}(P,{\bf Q},\hat{{\bf H}})$. This is a well-documented problem and it gives a ``water-filling'' type of solution \cite{love-2008}. Rewriting (\ref{eq:def-capipcsit}) as
\begin{equation}\label{eq:def-cappcsitmod}
I_{\mathrm{ICSITR}}(P,{\bf Q},\widehat{{\bf H}}) = \log \left| \mathbf{I}_M + \frac{\rho_{\mathrm{eff}}}{M} {\bf D} {\bf S}
{\bf D}^H \right|
\end{equation}
where the optimal covariance matrix ${\bf Q}={\bf V}{\bf S}{\bf V}^H$ is achieved through the singular value decomposition of the channel matrix $\widehat{{\bf H}}={\bf U}{\bf D}{\bf V}^H$ and an optimal diagonal covariance matrix ${\bf S}={\hbox{diag}}[s_1, \dots, s_{\min(M,N)},0, \dots,0]$. The water-filling algorithm can be performed by solving:
\begin{align} \label{eq:wf}
& s_i = \left( \mu - \frac{1}{\rho \|d_i\|^2} \right)^{+} ,\:\mathrm{ for }\: i=1,2,\cdots,{\min(M,N)}
\end{align}
where $d_i$ are the diagonal elements of ${\bf D}$ and $\mu$ is selected such that $\sum_{i=1}^{\min(M,N)} s_i =M$. Here $(x)^+ = \max(0,x)$, which implies that $s_i$ can never be negative. The actual number of non-vanishing entries in ${\bf S}$ depends on the values of $d_i$ as well as on $\rho$ (and thus $P$). Examining (\ref{eq:wf}), we can see that when $\rho \to 0$, the water-filling algorithm leads to choosing $s_j=M$ and $s_i=0$ for all $i \neq j$, where $j$ is chosen such that $d_j=\max_i(d_i)$ (beamforming). Similarly, for $\rho \to \infty$, $s_i=\frac{M}{\min(M,N)}$ (uniform power allocation).
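The iterative water-filling procedure for (\ref{eq:wf}) can be sketched as follows (a sketch; the water level $\mu$ over the currently active eigenmodes is computed in closed form from the sum constraint, and the inverse-gain constant $1/(\rho\|d_i\|^2)$ follows the equation above):

```python
def water_filling(d, rho, M):
    # Solve s_i = (mu - 1/(rho*d_i^2))^+ subject to sum(s_i) = M by
    # iteratively dropping eigenmodes whose water level would be negative.
    inv = [1.0 / (rho * di ** 2) for di in d]
    active = list(range(len(d)))
    while True:
        mu = (M + sum(inv[i] for i in active)) / len(active)
        negative = [i for i in active if mu - inv[i] < 0.0]
        if not negative:
            break
        active = [i for i in active if i not in negative]
    return [mu - inv[i] if i in active else 0.0 for i in range(len(d))]
```

Consistently with the limits discussed above, at low SNR this returns the beamforming allocation (all power on the strongest eigenmode), while at high SNR the allocation becomes nearly uniform.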
\subsection{Determining the optimal total power}
${\bf Q}$ has been optimized in the previous section. From (\ref{eq:def-ee-csi}), we see that the parameters that can be optimized in order to maximize the energy-efficiency are ${\bf Q}$ and $P$. Therefore, in this section, we try to optimize $P$, the total power. Note that for every different $P$, the optimal power allocation ${\bf Q}$ changes according to (\ref{eq:wf}), as $\rho$ is directly proportional to $P$. Therefore, optimizing this parameter is not a trivial exercise. Practically, $P$ represents the total radio power, that is, the total power transmitted by the antennas. This power determines the total consumed power $b+aP$ of base stations or mobile terminals, and so optimizing this power is of great importance.
In this section, a theorem on the properties of $\nu_T(P,{\bf Q}_{WF(P)},\widehat{{\bf H}})$ is provided, where ${\bf Q}_{WF(P)}$ is the power allocation obtained by using the water-filling algorithm and iteratively solving (\ref{eq:wf}) with power $P$. This procedure is said to be ``iterative'' because, after solving (\ref{eq:wf}), if any $s_j<0$, then we set $s_j=0$ and the equation is solved again until all solutions are non-negative. For optimization, desirable properties of $\nu_T(P,{\bf Q}_{WF(P)},\widehat{{\bf H}})$ are differentiability, quasi-concavity and the existence of a maximum. The following theorem states that these properties are in fact satisfied by $\nu_T$.
\begin{theorem}
\label{th:qcwf}
The energy-efficiency function $\nu_T(P,{\bf Q}_{WF(P)},\widehat{{\bf H}}) $ is quasi-concave with respect to $P$ and has a unique maximum $\nu_T(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})$, where $P^*$ satisfies the following equation~:
\begin{eqnarray}\label{eq:optp-csit}
&\frac{ \partial F_L[I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})-\xi]}{\partial P}\left(P^*+\frac{b}{a}\right) &\\ \nonumber
&-F_L[I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})-\xi]&=0
\end{eqnarray}where $\frac{\partial}{\partial P} $ is the partial derivative.
\end{theorem}
The proof of this theorem can be found in Appendix \ref{sec:proofqcwf}. From the above theorem and equation, we can conclude that the optimal transmit power for imperfect CSITR depends on several factors like
\begin{itemize}
\item the channel estimate $\widehat{{\bf H}}$,
\item the target spectral efficiency $\xi$,
\item the ratio of the constant power consumption to the radio-frequency (RF) power efficiency $\frac{b}{a}$,
\item the channel training time $t$ and
\item the noise level $\sigma^2$.
\end{itemize}
Note that in this model, we always assume the CSI at the transmitter to be exactly identical to the CSI at the receiver. Because of this, we take the feedback mechanism to be perfect and to occupy a constant time $t_f$. Although in practice $t_f$ plays a role in determining the efficiency and the optimal power, in our model $t_f$ is a constant and does not appear in the equation for $P^*$. In our numerical results we focus on the impact of $\widehat{{\bf H}}$, $\xi$ and $\frac{b}{a}$ on $P^*$ and $\nu^*$. The impact of $t$ is not considered for this case, but is instead studied in the case with no CSITR and imperfect CSIR; this choice helps in making the presented results easier to interpret and understand.
\subsection{An illustrative special case~: SISO channels}
Energy-efficiency in SISO systems has been studied in many works like \cite{goodman} and \cite{verdu}. However, the approach used in this paper is quite novel even for the SISO case and presents some insights that have not appeared before. For the SISO case, the pre-coding matrix is a scalar and ${\bf Q}=1$. The optimal power can be determined by solving (\ref{eq:optp-csit}). For a SISO system with perfect CSITR and CSIR, $F_L$ can be expressed as
\begin{eqnarray}
&F_L[I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})-\xi]= &\nonumber \\
& Q_{func}\left(L (1+\|h\|^2 \rho) \frac{\xi-\log(1+\|h\|^2 \rho )}{ \|h\|^2 \rho}\right)&
\end{eqnarray}
from \cite{Buckingham-08}. Using this expression, we can find $P^*$ maximizing $\nu_T$.
In the case of high SNR (and high $\xi$), a solution to this problem can be found by using the limit
\begin{equation}
\lim_{\rho \to \infty} F_L(P,1,h)= Q_{func}\left( L \left[\xi-\log(1+\|h\|^2 \rho)\right] \right).
\end{equation}
Solving (\ref{eq:optp-csit}) then gives
\begin{eqnarray}
&\frac{L \|h\|^2 }{ \sqrt{\pi}(1+\|h\|^2 \rho^*)} \exp\left( - L^2 \left[ \xi-\log(1+\|h\|^2 \rho^* )\right]^2 \right) \left(\rho^*+\frac{b}{a\sigma^2}\right) &\nonumber\\
&- Q_{func}\left( L \left[\xi-\log(1+\|h\|^2 \rho^* )\right] \right) =0.&
\end{eqnarray}
From the above equation it can be deduced that if $b=0$, for large $\xi$, $\log(1+\rho^*) \approx \xi$. While for low SNR, $\lim_{\rho \to 0} F_L(P,1,h) = Q_{func}\left( L \frac{\xi-\log(1+\|h\|^2 \rho )}{\|h\|^2 \rho}\right)$ and so, if $b=0$,
\begin{eqnarray}
&\frac{1 }{ \sqrt{\pi}}\left[ L + L\frac{\xi-\|h\|^2 \rho^*}{ \|h\|^2 \rho^*} \right] \exp\left( \frac{-1}{2}\left[ L\frac{\xi-\|h\|^2 \rho^* }{\|h\|^2 \rho^*}\right]^2 \right)& \nonumber \\
&- Q_{func}\left( L \frac{\xi-\|h\|^2 \rho^* }{\|h\|^2 \rho^*}\right) = 0&
\end{eqnarray}
Substituting $x= L \frac{\xi-\|h\|^2 \rho^* }{\|h\|^2 \rho^*}$, we have
\begin{equation}
\frac{1 }{ \sqrt{\pi}}\left[ L + x \right] \exp\left( \frac{-1}{2} x^2 \right) - Q_{func}\left( x \right)=0.
\end{equation}
As seen from the above equation, the value of $x$ depends only on the block length $L$. For example, if $L=10$, we get $x \approx -1.3$, so $\rho^* = 1.14 \frac{\xi}{\|h\|^2}$; if $L=100$, we get $\rho^* = 1.02 \frac{\xi}{\|h\|^2}$. Note that these calculations hold only for $\xi \to 0$, so that $\rho \to 0$ is satisfied.
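The scalar equation in $x$ above is straightforward to solve numerically; a bisection sketch is given below (we implement $Q_{func}(x)=\tfrac{1}{2}\mathrm{erfc}(x/\sqrt{2})$; note that the numerical root depends on the exact normalization adopted for $Q_{func}$, so the values quoted in the text may differ slightly from those produced by this particular convention):

```python
import math

def g(x, L):
    # LHS of the equation above: (1/sqrt(pi))*(L + x)*exp(-x^2/2) - Q_func(x),
    # with Q_func(x) = 0.5*erfc(x/sqrt(2)).
    Q = 0.5 * math.erfc(x / math.sqrt(2.0))
    return (L + x) / math.sqrt(math.pi) * math.exp(-0.5 * x ** 2) - Q

def solve_x(L, lo=-6.0, hi=0.0):
    # Bisection: g(lo, L) < 0 < g(hi, L) for moderate L.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid, L) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Once $x$ is known, $\rho^*$ follows from the substitution above as $\|h\|^2\rho^* = \frac{L\xi}{L+x}$.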
The above equations signify that for finite block lengths, the energy-efficiency at $\xi \to 0$ is lower than the value calculated in \cite{verdu} (of course, a direct comparison does not make sense, as infinite block lengths are assumed in \cite{verdu}). This suggests that a non-zero value of $\xi$ might optimize the energy-efficiency. This value is evaluated in our numerical section, where we find that the energy-efficiency is indeed optimized at a non-zero power.
\subsection{Special Case: Infinite code-length and perfect CSITR}
When a very large block length is used, the achievable rate approaches the mutual information \cite{shannon}, i.e., $\lim_{L \to \infty , I_{\mathrm{CSITR}}-\xi \to 0^+} F_L(I_{\mathrm{CSITR}}-\xi) =1$. Therefore, in this limit, we can simplify (\ref{eq:def-ee-csi}) to:
\begin{equation}
\nu_T(P,{\bf Q},\widehat{{\bf H}}) = \frac{R_0 \left(1 - \frac{t+t_f}{T}\right) I_{\mathrm{CSITR}}(P,{\bf Q},\widehat{{\bf H}}) } {a P +b}.
\end{equation}
Here we replace $\xi$ with $I_{\mathrm{CSITR}}$ because $F_L$ is $0$ when $I_{\mathrm{CSITR}} < \xi$, so choosing $\xi \to I_{\mathrm{CSITR}}$ maximizes the efficiency. Water-filling optimizes the efficiency in this situation as well, and so we use ${\bf Q}={\bf Q}_{WF(P)}$. It can be easily verified that for $b \to 0$, $\nu_T$ is maximized for $P \to 0$; in this case, water-filling also implies that only the antenna with the best channel is used to transmit. Interestingly, in the domain of finite code-lengths, our simulations indicate that there is a non-zero rate and power that optimizes the energy-efficiency function.
For general $b$, $\nu_T$ is optimized for $P^*$ satisfying:
\begin{equation}\label{eq:ee-infcl}
\frac{\partial I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},\widehat{{\bf H}}) }{\partial P} \left(P + \frac{b}{a}\right) - I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},\widehat{{\bf H}}) =0.
\end{equation}
The above equation admits a unique solution because $I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},\widehat{{\bf H}})$ is a concave function of $P$ (as can be seen from Appendix \ref{sec:proofqcwf}). In the limit $b \to 0$ we have $P^* \to 0$, and as $\frac{b}{a}$ increases, $P^*$ also increases. A special case of this, with $b=0$ and perfect CSITR, has been studied for a SISO channel in \cite{verdu}.
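For a SISO channel with $I_{\mathrm{CSITR}}(P) = \log(1+P/\sigma^2)$, the first-order condition reduces to a scalar equation. A numerical sketch (we write the condition in the normalized form $I'(P)(P+b/a)=I(P)$, the same normalization as in Theorem \ref{th:qcwf}; for $\sigma^2=b/a=1$ it has the closed-form solution $P^*=e-1$):

```python
import math

def I(P, sigma2=1.0):
    # SISO mutual information log(1 + P/sigma^2).
    return math.log(1.0 + P / sigma2)

def dI(P, sigma2=1.0):
    # Derivative of I with respect to P.
    return 1.0 / (sigma2 + P)

def optimal_power(b_over_a=1.0, sigma2=1.0):
    # Bisection on g(P) = I'(P)*(P + b/a) - I(P): positive near P = 0,
    # negative for large P, with a unique sign change.
    g = lambda P: dI(P, sigma2) * (P + b_over_a) - I(P, sigma2)
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

As stated above, the returned $P^*$ grows with $\frac{b}{a}$ and shrinks towards $0$ as $\frac{b}{a} \to 0$.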
\section{Optimizing energy-efficiency with no CSIT and imperfect CSIR}\label{sec:optee-nocsit}
This problem has already been well analyzed in \cite{veronica2} when perfect CSI is available at the receiver and $b=0$. So, in this paper we focus on the case when imperfect CSI is available and is obtained through channel training. For $I_{\mathrm{ICSIR}}(P,t,\hat{{\bf H}})$, we use a lower bound on the mutual information obtained from the equivalent observation equation (\ref{eq:cmodel2n}), derived in \cite{mimot}:
\begin{equation}\label{eq:def-capt}
I_{\mathrm{ICSIR}}(P,t,\hat{{\bf H}})=\log \left| \mathbf{I}_M + \frac{1}{M} \rho_{\mathrm{eff}}\left(\frac{L P}{\sigma^2} ,t\right)
\hat{\mathbf{H}} \hat{\mathbf{H}}^H \right|
\end{equation}
Note that ${\bf Q}=\frac{\mathbf{I}_M}{M}$ is used here, which has been shown to be optimal in \cite{veronica2}. In this section, our focus is to generalize \cite{veronica2} to a more realistic scenario where the total power consumed by the transmitter (instead of the radiated power only) and imperfect channel knowledge are accounted for.
\subsection{Optimal transmit power}
\label{sec:subsec-opt-P}
By inspecting (\ref{eq:def-ee-nocsit}) and (\ref{eq:def-capt}), we see that using all the available transmit power can be suboptimal. For instance, if the available power is large and all of it is used, then $\nu_R(P,t)$ tends to zero. Since $\nu_R(P,t)$ also tends to zero when $P$ goes to zero (see \cite{veronica2}), there must be at least one maximum at which the energy-efficiency is maximized, showing the importance of using the optimal fraction of the available power in certain regimes. The objective of this section is to study those aspects, namely to show that $\nu_R$ has a unique maximum for a fixed training time length and to provide the equation determining the optimum value of the transmit power.
From \cite{rodriguez} we know that a sufficient condition for the function $\frac{f(x)}{x}$ to have a unique maximum is that the function $f(x)$ be sigmoidal. To apply this result in our context, one can define the function $f$ by
\begin{equation}
f(\rho_{\mathrm{eff}}) = \mathrm{Pr} \left[ \log \left| \mathbf{I}_M + \frac{1}{M} \rho_{\mathrm{eff}} \mathbf{H} \mathbf{H}^H \right| \geq \xi \right].
\end{equation}
For the SISO case, for a channel with $h$ following a complex normal distribution, it can be derived that $f(\rho)= \exp\left(- \frac{2^{\xi}-1}{\rho} \right)$, which is sigmoidal. It turns out that proving that $f$ is sigmoidal in the general MIMO case is a non-trivial problem, as advocated by the current state of the relevant literature \cite{veronica2,jorsweik,eigenrmt}. In \cite{veronica2}, $\nu_R(P)$ under perfect CSIR was conjectured to be quasi-concave for general MIMO, and proven to be quasi-concave in the following special cases:
\begin{description}
\item[(a)] $M\geq1$, $N=1$;
\item[(b)] $M\rightarrow +\infty$, $N < +\infty$, $\lim_{M \to \infty} \frac{N}{M}=0$;
\item[(c)] $M < +\infty$, $N \rightarrow +\infty$, $\lim_{N \to \infty} \frac{M}{N}=0$;
\item[(d)] $M\rightarrow +\infty$, $N \rightarrow + \infty$, $\displaystyle{\lim_{M\rightarrow+\infty, N\rightarrow+\infty} \frac{M}{N}} = \ell < +\infty$;
\item[(e)] $\sigma^2 \rightarrow 0$;
\item[(f)] $\sigma^2 \rightarrow + \infty$;
\end{description}
In the following proposition, we give a sufficient condition to ensure that $\nu_R(P,t)$ is quasi-concave w.r.t $P$.
\begin{proposition} [Optimization of $\nu_R(P,t)$ w.r.t $P$]\label{prop:qcp} If $\nu_R(P)$ with perfect CSIR is quasi-concave w.r.t $P$, then $\nu_R(P,t)$ is a quasi-concave function with respect to $P$, and has a unique maximum.
\label{tr:propforP}
\end{proposition}
This proposition is proved in Appendix \ref{sec:appproofqcp}. The above proposition makes characterizing the unique solution of $\frac{\partial \nu_R}{\partial P}(P,t) = 0$ relevant. This solution can be obtained through the root $\rho_{\mathrm{eff}}^*$ (which is unique because of \cite{rodriguez}) of:
\begin{equation}\label{eq:determine-P}
\frac{L}{\sigma^2}\left(P + \frac{b}{a}\right)
\frac{\tau \rho \left[(\tau+1) \rho +2 \right]}{\left[(\tau+1) \rho +1 \right]^2} f'(\rho_{\mathrm{eff}}) - f(\rho_{\mathrm{eff}}) =0
\end{equation}
with $\tau=\frac{t_s}{M}$. Note that $P$ is related to $\rho$ through $P = \sigma^2 \rho$ and $\rho$ is related to $\rho_{\mathrm{eff}}$ through (\ref{eq:rhofun}) and can be expressed as
\begin{equation}\label{eq:rho-vs-rho_eff}
\rho = \frac{\rho_{\mathrm{eff}}}{2 \tau} \left[\left(1+\tau\right)+\sqrt{\left(1+\tau\right)^2+\frac{4 \tau}{\rho_{\mathrm{eff}}}}\right].
\end{equation}
Therefore, (\ref{eq:determine-P}) can be expressed as a function of $\rho_{\mathrm{eff}}$ and solved numerically; once $\rho_{\mathrm{eff}}^*$ has been determined, $\rho^*$ follows from (\ref{eq:rho-vs-rho_eff}), and eventually $P^*$ follows from (\ref{eq:def-snr}). As a special case, we have the scenario where $b=0$ and $\tau\rightarrow +\infty$; this case is solved by finding the unique root of $\rho^* f'(\rho^*) - f(\rho^*)=0$, which corresponds to the optimal operating SNR in terms of energy-efficiency of a channel with perfect CSI (as the training time is infinite). Note that this equation is identical to that in \cite{goodman}; in this work, we provide additional insights into the form of the function $f(.)$.
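Since $\rho_{\mathrm{eff}}$ is related to $\rho$ by the quadratic relation implied by (\ref{eq:rhofun}), the inversion in (\ref{eq:rho-vs-rho_eff}) is the positive root of that quadratic. A numerical round-trip check (a sketch; function names are ours):

```python
import math

def rho_eff_of_rho(rho, tau):
    # rho_eff = tau*rho^2 / (1 + rho + rho*tau), with tau = t_s/M.
    return tau * rho ** 2 / (1.0 + rho + rho * tau)

def rho_of_rho_eff(re, tau):
    # Positive root of the quadratic tau*rho^2 - re*(1+tau)*rho - re = 0.
    return (re / (2.0 * tau)) * ((1.0 + tau)
            + math.sqrt((1.0 + tau) ** 2 + 4.0 * tau / re))
```

Composing the two maps returns the original $\rho$, confirming that the inversion is exact.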
Quasi-concavity is an attractive property for the energy-efficiency as quasi-concave functions can be easily optimized numerically. Additionally, this property can also be used in multi-user scenarios for optimization and for proving the existence of a Nash Equilibrium in energy-efficient power control games
\cite{goodman,lasaulce-book,lasaulce-spm}.
\subsection{Optimal fraction of training time}
\label{sec:subsec-opt-t}
The expression of $\nu_R(P,t)$ shows that only the numerator depends on the fraction of training time. Choosing $t=0$ maximizes $1-\frac{t}{T}$ but makes the block success rate vanish. Choosing $t=T$ maximizes the latter but makes the former go to zero. Again, there is an optimal trade-off to be found. Interestingly, it is possible to show that the function $\nu_R(P^*,t)$ is strictly concave w.r.t. $t$ for any MIMO channel in terms of $(M,N)$, where $P^*$ is a maximum of $\nu_R$ w.r.t. $P$. This property can be useful when performing a joint optimization of $\nu_R$ with respect to both $P$ and $t$. This is what the following proposition states.
\begin{proposition}[Maximization of $\nu(P^*(t),t)$ w.r.t $t$]\label{prop:tcon}
The energy-efficiency function $\nu_R(P^*(t),t)$ is a strictly concave function with respect to $t$ for any $P^*(t)$ satisfying $\frac{\partial \nu_R}{\partial P}(P^*,t) =0$ and $\frac{\partial^2 \nu_R}{\partial P^2}(P^*,t) <0$, i.e, at the maximum of $\nu_R$ w.r.t. $P$.
\label{tr:propfortcc}
\end{proposition}
The proof of this proposition is provided in Appendix \ref{sec:appproofct}. The parameter space of $\nu_R$ is two-dimensional and continuous, as both $P$ and $t$ are continuous; thus the function $\nu_R(P^*(t),t)$ is also continuous and the proposition is mathematically sound. The proposition assures that the energy-efficiency can be maximized w.r.t. the transmit power and the training time jointly, provided $\nu_R(P,t)$ is quasi-concave w.r.t. $P$ for all $t$. Based on this, the optimal fraction of training time is obtained by setting $\frac{\partial \nu_R}{\partial t}(P,t)$ to zero, which can be written as:
\begin{equation}\label{eq:optimal-t}
\left(\frac{T_s}{M} - \tau \right) \frac{\rho^2 (\rho+1)}{\left[\tau \rho + \rho+1 \right]^2} f'(\rho_{\mathrm{eff}}) - f(\rho_{\mathrm{eff}}) = 0
\end{equation}
again with $\tau = \frac{t_s}{M}$. In this case, following the same reasoning as for optimizing the $\nu_R$ w.r.t. $P$, it is possible to solve numerically the equation w.r.t. $\rho_{\mathrm{eff}}$ and find the optimal $t_{s}$, which is denoted by $t^*_s$.
Note that the energy-efficiency function is shown to be concave only when it has already been optimized w.r.t. $P$. The optimization problem studied here is basically a joint optimization problem, and we show that once $\nu_R(P,t)$ is maximized w.r.t. $P$ for all $t$, $\nu_R(P^*(t),t)$ is concave w.r.t. $t$. A solution to (\ref{eq:optimal-t}) exists only if $\nu_R$ has been optimized w.r.t. $P$. However, in many practical situations, this optimization problem might not be readily solved, as the optimization w.r.t. $P$ for all $t$ has to be carried out first.
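The joint behavior described in this subsection can be illustrated for the SISO Rayleigh case, with $f(\rho_{\mathrm{eff}}) = \exp(-(2^\xi-1)/\rho_{\mathrm{eff}})$ as before. The toy grid search below shows that $\nu_R(P,t)$ is maximized at an interior point in both variables (all numerical parameter values are illustrative assumptions, not values used in the paper's simulations):

```python
import math

def nu_R(P, t, T=100.0, xi=2.0, a=1.0, b=1.0, sigma2=1.0, M=1, S_d=1.0):
    # Toy SISO energy-efficiency: goodput factor times success rate,
    # divided by the consumed power a*P + b.
    tau = t / (M * S_d)
    rho = P / sigma2
    rho_eff = tau * rho ** 2 / (1.0 + rho + rho * tau)
    f = math.exp(-(2.0 ** xi - 1.0) / rho_eff) if rho_eff > 0.0 else 0.0
    return (1.0 - t / T) * f / (a * P + b)

# joint grid search over transmit power and training time
best = max(((P, t) for P in [0.25 * k for k in range(1, 200)]
                   for t in range(1, 100)),
           key=lambda pt: nu_R(*pt))
```

The maximizer lies strictly inside the grid: too little training or power kills the success rate, while too much kills the goodput factor or inflates the consumed power, in line with the trade-offs discussed above.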
The following proposition describes how the optimal training time behaves as the transmit power is very large:
\begin{proposition}[Optimal $t$ in the high SNR regime]
\label{prop:optt} We have that:
$ \displaystyle{ \lim_{P \rightarrow +\infty} t^*_s} = M $
for all MIMO systems.
\label{tr:propforthighsnr}
\end{proposition}
The proof for this can be found in Appendix \ref{sec:appproofoptt}.
\subsection{Optimal number of antennas}
\label{sec:subsec-opt-M}
So far we have always been assuming that the pre-coding matrix was chosen to be the identity matrix, i.e., $\mathbf{Q} = \mathbf{I}_M$. Clearly, if nothing is known about the channel, the choice $\mathbf{Q} = \mathbf{I}_M$ is relevant (and may be shown to be optimal by formulating the problem as an inference problem). On the other hand, if some information about the channel is available (the channel statistics as far as this paper is concerned), it is possible to find a better pre-coding matrix. As conjectured in \cite{telatar} and proved in some special cases (see e.g., \cite{jorsweik}), the outage probability is minimized by choosing a diagonal pre-coding matrix with a certain number of 1's on the diagonal. The position of the 1's on the diagonal does not matter since channel matrices with i.i.d. entries are assumed. However, the optimal number of 1's depends on the operating SNR. The knowledge of the channel statistics can be used to compare the operating SNR with some thresholds and lead to this optimal number. Although we consider (\ref{eq:def-ee-nocsit}) as a performance metric instead of the outage probability, we are in a situation similar to that of \cite{veronica2}, meaning that the optimal pre-coding matrix in terms of energy-efficiency is conjectured to have the same form and that the number of antennas used has to be optimized. In the setting of this paper, as the channel is estimated, an additional constraint has to be taken into account, namely that the number of transmit antennas used, $M$, cannot exceed the number of training symbols $t_s$. This leads us to the following conjecture.
\begin{conjecture}[Optimal number of antennas] For a given coherence time $T_s$, $\nu_R$ is maximized for $M^*=1$ in the limit of $P \to 0$. As $P$ increases, $M^*$ also increases monotonically until some $P_+$ after which, $M^*$ and $t_s^*$ decreases. Asymptotically, as $P \to \infty$, $M^*=t_s^*=1$.
\label{tr:conforM}
\end{conjecture}
This conjecture can be understood intuitively by noting that the only influence of $M$ on $\nu_R$ is through the success rate. Therefore, optimizing $M$ for any given $P$ and $t$ amounts to minimizing outage. In \cite{telatar}, it is conjectured that the covariance matrices minimizing the outage probability for MIMO channels with Gaussian fading are diagonal with either zeros or constant values on the diagonal. Since this has been proven for the special case of MISO in \cite{jorsweik}, we can conclude that the optimal number of antennas is one in the very low SNR regime and that it increases as the SNR increases. However, the effective SNR decreases when $M$ increases (as seen from the expressions of $\rho_{\mathrm{eff}}$ and $\tau$), so the optimal $M$ for each $P$ with training is lower than or equal to the optimal $M$ obtained with perfect CSI. Concerning special cases, it can be easily verified that the optimal number of antennas is $1$ at very low and very high SNR.
Finally, we would like to mention a possible refinement of the definition in (\ref{eq:def-ee-nocsit}) regarding $M$. Indeed, by creating a dependency of the parameter $b$ on $M$, one can better model the energy consumption of a wireless device. For instance, if the transmitter architecture is such that one radio-frequency transmitter is used per antenna, then each antenna will contribute a separate fixed cost. In such a situation the total power can be written as $aP+Mb_0$, where $b_0$ is the fixed energy consumption per antenna. It can be trivially seen that this does not affect the goodput in any manner and only brings a constant change to the total power as long as $M$ is kept constant. Therefore, the optimization w.r.t. $P$ and $t$ will not change, but it will have a significant impact on the optimal number of antennas to use.
\section{Numerical results and interpretations}\label{sec:numerical-study}
We present several simulations that support our conjectures as well as expand on our analytical results. All results are obtained through Monte-Carlo simulations, as there is no closed-form expression available for the outage of a general MIMO system.
\subsection{With imperfect CSITR available}
The $F_L$ we use here is based on the results in \cite{Buckingham-08}: $F_L= Q_{func}(\frac{\xi-I_{\mathrm{ICSITR}}(P,{\bf Q}_{WF},{\bf H})}{\sqrt{\frac{2\rho}{(1+\rho)L} } })$, $L$ being the code-length. This Gaussian approximation is very accurate for $L$ large enough, and from simulations we observe that it is already quite accurate for $L \geq 10$.
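This approximation is simple to evaluate numerically. The sketch below assumes a generic mutual-information function \texttt{mutual\_info(P)} (a placeholder for $I_{\mathrm{ICSITR}}$, which in general also depends on ${\bf Q}_{WF}$ and ${\bf H}$) and uses $Q(x)=\frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$:

```python
import math

def success_rate(P, xi, mutual_info, rho, L):
    """Gaussian approximation of the block success rate,
    F_L = Q((xi - I(P)) / sqrt(2*rho / ((1+rho)*L))),
    where Q is the standard normal tail, Q(x) = 0.5*erfc(x/sqrt(2)).
    mutual_info(P) is a placeholder for the mutual information at power P."""
    sigma = math.sqrt(2.0 * rho / ((1.0 + rho) * L))
    x = (xi - mutual_info(P)) / sigma
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

For instance, with a toy single-stream $I(P)=\log_2(1+P)$, the success rate equals $Q(0)=1/2$ exactly when $I(P)=\xi$, and approaches a step function in $\xi-I(P)$ as $L$ grows.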
First of all, numerical results are presented that support and illustrate our analytical results through figures. The first two figures shown assume imperfect CSITR obtained through training and use a $2 \times 2$ MIMO system. The quasi-concavity of the energy-efficiency function w.r.t the transmit power is shown in Figure \ref{fig:eff_rho_csit} for $\xi=1$ and $\xi=4$, and $t_s=2$ and $t_s=10$. This figure shows that for a higher target rate, a longer training time yields a better energy-efficiency. We also observe that using a higher $\xi$ can result in a better energy-efficiency, as in this figure. This motivates us to numerically investigate if there is also an optimal spectral efficiency to use, given a certain $T_s$, $\frac{b}{a}$ and $L$. Figures \ref{fig:eff_xi_b0} and \ref{fig:eff_xi_b1} present the results of this study.
Surprisingly, we observe that our plots are quasi-concave and so there is an optimal target rate to use for each channel condition and code-length. In Figure \ref{fig:eff_xi_b1}, $\nu_T$ is always optimized over $P$ and ${\bf Q}$. Observe that $\nu_T^*(\xi)$ is also quasi-concave and has a unique maximum for each value of $d_i$ and $t_s$ (representing the channel eigenvalues as from equations (\ref{eq:def-cappcsitmod}), (\ref{eq:wf}), and the training time lengths). The $d_i$ are ordered in ascending order, i.e.\ in this case $d_1^2 \leq d_2^2$. The parameters used are: $M=N=2$, $R_0=1$bps, $T_s=100$, $L=100$ and $\frac{b}{a}=1$ mW with $t_s=2,10$ and $20$ for $d_1^2=1$, $d_2^2=3$, and $t_s=2$ for $d_1^2=d_2^2=1$. This figure also implies that the training time and target rate can be optimized to yield the maximum energy-efficiency for a given coherence-time and channel fading. For infinite code-length the plot is maximized at the solution of (\ref{eq:ee-infcl}). While for Figure \ref{fig:eff_xi_b0}, perfect CSIT is assumed with $b=0$; at infinite block length, the optimal transmit rate/power is zero as expected (also seen from (\ref{eq:ee-infcl})). However, remarkably, for finite code-lengths there is a non-zero optimal rate and corresponding optimal power as seen from the figure.
Finally, in Figure \ref{fig:eff_comp_pa}, we compare our energy efficiency function that uses optimized power allocation to uniform power allocation, and present the gain from having CSIT. In both cases, the training time and the transmit power are optimized and we plot the optimized energy efficiency vs. $P_{\max}$. Note that the optimized PA always yields a better performance when compared to UPA, and at low power, UPA has almost zero efficiency while the optimal PA yields a finite efficiency. The gain observed can be considered as the major justification for using non-uniform power allocation and sending the channel state information to the transmitter. However, when the block length is small, imperfect CSIT results in a smaller gain, as seen from the relatively larger gap between $T_s=100$ and $T_s=10000$ when compared to the size of the gap in UPA.
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{rvseffcsit.eps}
\end{center}
\caption{Energy efficiency ($\nu_T$) in bits/J vs. transmit power (P) in dBm for a MIMO system with imperfect CSITR, $M=N=2$, $R_0=1$bps, $T_s=100$, $\frac{b}{a}=10$ mW for certain values of $\xi$ and $t_s$.}
\label{fig:eff_rho_csit}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{rvsxicsit.eps}
\end{center}
\caption{Optimal energy-efficiency ($\nu_T(P^*,{\bf Q}_{WF})$) in bits/J vs. spectral efficiency ($\xi$) for a MIMO system with imperfect CSITR, $M=N=2$, $R_0=1$bps, $T_s=100$, $L=100$ and $\frac{b}{a}=1$ mW.}
\label{fig:eff_xi_b1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{2x2_Rvs_eff,comL,b=0.eps}
\end{center}
\caption{Optimal energy-efficiency ($\nu_T(P^*,{\bf Q}_{WF})$) vs. spectral efficiency ($\xi$) for a MIMO system with perfect CSITR, $M=N=2$, $R_0=1$bps and $\frac{b}{a}=0$.}
\label{fig:eff_xi_b0}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{2x2R=1comparison.eps}
\end{center}
\caption{Optimal energy-efficiency ($\nu_R(P^*)$) in bits/J vs. available transmit power ($\sup(P)$) for a MIMO system with imperfect CSITR, $M=N=2$, $R_0=1$bps, $R=1$bps and $\frac{b}{a}=1$ mW.}
\label{fig:eff_comp_pa}
\end{figure}
\subsection{With no CSIT}
We start off by confirming our conjecture that for a general MIMO system, $\nu_R(P,t)$ has a unique maximum w.r.t $P$. We also confirm that optimal values of training lengths and transmit antennas represented by $t^*_s$ and $M^*$ are as conjectured.\\
Once the analytical results have been established, we explore further and find out the optimal number of antennas and training time when $\nu_R$ has been optimized w.r.t $P$. For this we use the optimized energy efficiency defined as $\nu^*(P,t)=\max\{\nu_R(p,t) \mid p \in [0,P] \}$. As we know $\nu_R$ to be quasi-concave w.r.t $P$ and having a unique maximum, this newly defined $\nu^*$ will indicate what is the best energy efficiency achievable given a certain amount of transmit power $P$. Hence, plotting $\nu^*$ against $P$ for various values of $M$ or $t_{s}$ can be useful to determine the optimal number of antennas and training time while using the optimal power.
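Since $\nu_R$ is quasi-concave in $p$ with a unique maximum, $\nu^*(P,t)$ can be computed by ternary search over $[0,P]$. The sketch below uses a toy sigmoidal success rate (an illustrative assumption, not the paper's exact $F_L$):

```python
import math

def optimized_efficiency(nu, P, iters=80):
    """nu*(P) = max over p in [0, P] of nu(p), computed by ternary search.
    Valid because nu is quasi-concave (unimodal) with a unique maximum."""
    lo, hi = 0.0, P
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if nu(m1) < nu(m2):
            lo = m1
        else:
            hi = m2
    return nu(0.5 * (lo + hi))

def nu_toy(p, b_over_a=0.5):
    """Toy efficiency: an 'S'-shaped success rate over total power p + b/a
    (illustrative form only)."""
    f = (1.0 - math.exp(-0.5 * p)) ** 10   # sigmoidal success rate
    return f / (p + b_over_a)
```

As expected from the definition, $\nu^*(P)$ is non-decreasing in $P$ and saturates once the interior maximizer of $\nu_R$ falls inside $[0,P]$.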
In the following plots we take $\frac{\sigma^2}{L}=1$ mW so that $P$ can be expressed in dBm easily. Also note that $\frac{b}{a}$ has the unit of power and is expressed in watts (W). We also use $S_d = 15$ $\mu$s from LTE standards \cite{lte}.
Figure \ref{fig:eff_rho} studies the energy efficiency as a function of the transmit power ($P$) for different values of $\frac{b}{a}$ and illustrates the quasi-concavity of the energy efficiency function w.r.t $P$. The parameters used are $R=1600$, $\xi=\frac{R}{R_0}=16$, $T_s=55$ and $M=N=t=4$.
Figure \ref{fig:effz} studies the optimized energy efficiency $\nu^*$ as a function of the transmit power with various values of $t_{s}$. The figure illustrates that beyond a certain threshold on the available transmit power, there is an optimal training sequence length that has to be used to maximize the efficiency once the optimization w.r.t $P$ has been done, which has been proven analytically in proposition \ref{prop:tcon}. The parameters are $R=1$Mbps, $\xi=16$, $M=N=4$, $\frac{b}{a}=0$ and $T_s=55$.
Figure \ref{fig:eff_t} studies the optimal training sequence length $t_{s}$ as a function of the transmit power $P$. Note that in this case, we are not optimizing the efficiency with respect to $P$ and so this figure illustrates proposition \ref{tr:propforthighsnr}. With $P$ large enough $t_{s}=M$ becomes the optimal training time and for $P$ small enough $t_{s}=T_s-1$ as seen from the figure. The parameters are $R=1600$, $\frac{b}{a}=0$ W, $\xi=16$ and $T_s=10$. (We use $T_s=10$, as if the coherence time is too large, the outage probabilities for low powers that maximize the training time, such that $t^*_s=T_s-1$, become too small for any realistic computation.)
Figure \ref{fig:eff_fixt} studies the optimal number of antennas $M^*$ as a function of the transmit power $P$ with the training time optimized jointly with $M$. With $P$ large enough $M=t_{s}=1$ becomes the optimal number of antennas and for $P$ small enough $M=1$ as seen from the figure. This figure illustrates conjecture \ref{tr:conforM}. The parameters are $R_0=1$Mbps, $\frac{b}{a}=10$ mW and $T_s=100$.
From all of our theoretical and numerical results so far, we can conclude that given a target spectral efficiency $\xi$, a coherence block length $T_s$ and number of receive antennas, there is an optimal transmit power $P^*$, transmit antennas $M^*$ and training time $t^*_s$ to use that optimizes the energy efficiency.
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{f1_qc.eps}
\end{center}
\caption{Energy efficiency ($\nu_R$) vs. transmit power (P) with $t_{s}=M=N=4$, $R=1600$ bps, $\xi=\frac{R}{R_0}=16$ and $T_s=55$ symbols.}
\label{fig:eff_rho}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{f6_opteffz_t.eps}
\end{center}
\caption{Optimized efficiency ($\nu^*$) vs. maximum transmit power ($P$) for a MIMO system with $M=N=4$, $R = 1$ Mbps, $\xi=\frac{R}{R_0}=16$, $T_s=55$ and $\frac{b}{a}=0$W.}
\label{fig:effz}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{f2_optt.eps}
\end{center}
\caption{Optimal training sequence length ($t_{s}$) vs. transmit power (P) for a MIMO system with $\xi=\frac{R}{R_0}=16$, $R=1$ Mbps, $N=4$, $T_s=10$ symbols. The discontinuity is due to the discreteness of $t_s$.}
\label{fig:eff_t}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=180mm]{f3_optm.eps}
\end{center}
\caption{Optimal number of antennas ($M^*$) vs. Transmit Power (P) in dBm for a MIMO system with $R_0=1$Mbps, $T_s=100$, for certain values of $\xi$ and $N$, and $t_s$ optimized jointly with $M$. The discontinuity is due to the discreteness of $M$.}
\label{fig:eff_fixt}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
This paper proposes a framework for studying the problem of energy-efficient pre-coding (which includes the problem of power allocation and control) over MIMO channels under imperfect channel state information and the regime of finite block length. As in \cite{goodman}, energy-efficiency is defined as the ratio of the block success rate to the transmit power. But, in contrast with \cite{goodman} and the vast majority of works originating from it, we do not assume an empirical choice for the success rate such as taking $f(x) = (1 - e^{-x})^L$, where $L$ is the block length. Instead, the numerator of the proposed performance metric is built from the notion of information, and more precisely from the average information (resp. mutual information) in the case where CSIT is available (resp. not available). This choice, in addition to giving a more fundamental interpretation to the metric introduced in \cite{goodman}, allows one to take into account in a relatively simple manner effects of practical interest such as channel estimation error and block length finiteness. Both in the case where (imperfect) CSIT is available and not available, it is shown that using all the available transmit power is not optimal. When CSIT is available, whereas determining the optimal power allocation scheme is a well-known result (water-filling), finding the optimal total amount of power to be effectively used is a non-trivial choice. Interestingly, the corresponding optimization problem can be shown to be quasi-convex and have a unique solution, the latter being characterized by an equation which is easy to solve numerically. When CSIT is not available, solving the pre-coding problem in the general case amounts to solving Telatar's conjecture. Therefore, a new conjecture is proposed and shown to become a theorem in several special cases. Interestingly, in this scenario, it is possible to provide a simple equation characterizing the optimal fraction of training time.
Numerical results are provided to sustain the proposed analytical framework, from which interesting observations can be made, which include~: block length finiteness gives birth to the existence of a non-trivial trade-off between spectral efficiency and energy efficiency~; using optimal power allocation brings a large gain in terms of energy-efficiency only when the channel has a large enough coherence time, demonstrating the value of CSIT and channel training.
The proposed framework is useful for engineers since it provides considerable insights into designing the physical layer of MIMO systems under several assumptions on CSI. The proposed framework also opens some interesting research problems related to MIMO transmission, which includes~: finding the optimal pre-coding matrix for the general case of i.i.d. channel matrices under no CSIT. Even in the case of large MIMO systems, this problem is not solved~; extending the proposed approach to the case of Rician channels with spatial correlations~; tackling the important case of multiuser MIMO channels~; considering the problem of distributed energy-efficient pre-coding.
\appendices
\section{Proof of theorem \ref{th:qcwf}}
\label{sec:proofqcwf}
In order to prove that $\nu_T(P,{\bf Q}_{WF(P)},{\bf H}) $ is quasi-concave with respect to $P$ and has a unique maximum $\nu_T(P^*,{\bf Q}_{WF(P^*)},{\bf H})$, we exploit the result in \cite{rodriguez} which states that if $f(x)$ is an ``S"-shaped or sigmoidal function, then $\frac{f(x)}{x}$ is a quasi-concave function with a unique maximum. An ``S"-shaped or sigmoidal function has been defined in \cite{rodriguez} in the following manner. A function $f$ is ``S" shaped, if it satisfies the following properties:
\begin{enumerate}
\item Its domain is the interval $[0, \infty)$.
\item Its range is the interval $[0,1)$.
\item It is increasing.
\item (``Initial convexity") It is strictly convex over the interval $[0, x_f ]$, with $x_f$ a positive number.
\item (``Eventual concavity") It is strictly concave over any interval of the form $[x_f, L]$, where $x_f<L$.
\item It has a continuous derivative.
\end{enumerate}
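The result of \cite{rodriguez} can be illustrated numerically: for a toy ``S"-shaped function such as $f(x)=x^2/(1+x^2)$ (increasing, range $[0,1)$, convex up to $1/\sqrt{3}$ and concave afterwards), the ratio $f(x)/x$ is unimodal with a unique maximum (here at $x=1$):

```python
def sigmoid(x):
    """A toy 'S'-shaped function: domain [0, inf), range [0, 1),
    increasing, convex on [0, 1/sqrt(3)], concave afterwards."""
    return x * x / (1.0 + x * x)

def is_unimodal(values, tol=1e-12):
    """True if the sequence rises (weakly) to a single peak then falls:
    once a decrease occurs, no later strict increase is allowed."""
    falling = False
    for a, b in zip(values, values[1:]):
        d = b - a
        if d < -tol:
            falling = True
        elif d > tol and falling:
            return False
    return True

xs = [0.01 * k for k in range(1, 2001)]   # grid on (0, 20]
ratio = [sigmoid(x) / x for x in xs]      # f(x)/x = x/(1+x^2)
```

Here $f(x)/x = x/(1+x^2)$, which peaks at $x=1$ with value $1/2$, consistent with the quasi-concavity claim.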
Considering the non-constant terms in $\nu_T$, we see that what we have to show is that $F_L(I_{\mathrm{ICSITR}}(P,{\bf Q}_{WF(P)},{\bf H})-\xi)$ is ``S"-shaped w.r.t $P$. We already have that $F_L(x)$ is sigmoidal, therefore all we have to show is that $F_L(g(P))$ is also sigmoidal where $g(P)= I_{\mathrm{ICSITR}}(P,{\bf Q}_{WF(P)},{\bf H})-\xi$. Trivially, when $P=0$, $F_L(I_{\mathrm{ICSITR}}(P))=0$ and $\lim_{P \to \infty} F_L=1$. The rest can be proved using the following arguments:
\begin{itemize}
\item $g(P)$ is continuous: As $P$ varies, ${\bf Q}_{WF(P)}$ also is modified according to the iterative water-filling algorithm. This results in using one antenna for low $\rho$ to all antennas for high values of $\rho$.
There exist certain ``threshold" points of the total power, $P^{\mathrm{th}}_{i}, i=\{ 1,\dots,M \}$, at which the number of antennas used changes. The convention being, for $P^{\mathrm{th}}_{i-1} \leq P \leq P^{\mathrm{th}}_{i}$, $i$ antennas are used (the $s_i$ for the rest are set to zero), with $P^{\mathrm{th}}_{0}=0$ and $P^{\mathrm{th}}_{M}=\infty$. If $I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},{\bf H})$ is continuous at these points, then $g(P)$ is continuous. It can also be observed that in all other points, $I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},{\bf H})$ can be expressed as $\Sigma_{i=1}^{J} \log(1+\alpha_i+\beta_i s_i), J \leq \min(M,N)$ (the $\alpha_i$ and $\beta_i$ are obtained from solving (\ref{eq:wf})).
A ``threshold" point occurs when $P=P_j^{\mathrm{th}}$, at which $s_j=0$ is obtained by solving (\ref{eq:wf}). In the left-hand limit, $j-1$ antennas are used and so $I_{\mathrm{CSITR}}(P,{\bf Q}_{WF(P)},{\bf H})=\Sigma_{i=1}^{j-1}\log\left(1+\alpha_i+\beta_i s_i\right)$. The right-hand limit is obtained by solving (\ref{eq:wf}) with $s_1 \to 0$ (assuming without loss of generality that $d_1^2$ is the smallest). This yields a solution which can easily be seen to be the same as the left-hand limit as $p_1 \to 0$.
\item We have shown $g(P)$ to be a finite sum of logarithms of a monomial expansion of $P$ in certain intervals (marked by $P_i^{\mathrm{th}}$). For each interval it is trivial to see that $F_L(g(P))$ is also ``S"-shaped. As $g(P)$ is continuous, $F_L(g(P))$ is ``S"-shaped for all $P$.
\item From Lemma \ref{aux:sshaped} proved in Appendix \ref{sec:appproofqcp} we can show that $\frac{F_L(g(P))}{aP+b}$ is also ``S"-shaped by a simple change of variable $x=aP+b$. Thus, we have $\nu_T(P,{\bf Q}_{WF(P)},{\bf H})$ as a quasi-concave function with a unique maximum.
\item With imperfect CSI, the only change is in $I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})$ now given from (\ref{eq:def-capipcsit}). The water-filling algorithm now replaces ${\bf H}$ with $\widehat{{\bf H}}$ and so on. This maintains the continuity of $g(P)$. However we now have $I_{\mathrm{ICSITR}}(P,{\bf Q}_{WF(P)},\widehat{{\bf H}})= \Sigma_{i=1}^{j}\log\left(1+\frac{\alpha_i+\beta_i s_i}{1+\rho \sigma_E^2}\right) $. From \cite{mimot} we have $\frac{1}{1+\rho \sigma_E^2}$ as a concave function and so even in this case, we have $F_L(g(P))$ as a sigmoidal function and $\nu_{T}(P,{\bf Q}_{WF(P)},\widehat{{\bf H}}) $ as quasi-concave with a unique maximum.
As it is continuous and differentiable, the maximum can be found as the unique solution to the equation:
\begin{eqnarray}
&\frac{ \partial F_L[I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},\widehat{{\bf H}})-\xi]}{\partial P}\left(P^*+\frac{b}{a}\right) & \\ \nonumber
&-F_L[I_{\mathrm{ICSITR}}(P^*,{\bf Q}_{WF(P^*)},{\bf H})-\xi] & =0
\end{eqnarray}
where $\frac{\partial}{\partial P} $ is the partial derivative.
\end{itemize}
QED
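Numerically, the unique maximizer characterized by the above equation can be located by bisection: writing $h(P)=F'(P)(P+b/a)-F(P)$ for the composite success rate $F(P)=F_L(g(P))$, sigmoidality makes $h$ positive for small $P$ and negative for large $P$. The sketch below uses a toy sigmoidal $F$ (an illustrative assumption, not the paper's exact $F_L\circ g$) and checks the root against a grid search on $F(P)/(P+b/a)$:

```python
import math

def F(P):
    """Toy 'S'-shaped success rate as a function of transmit power
    (assumed form, for illustration only)."""
    return (1.0 - math.exp(-P)) ** 5

def dF(P, h=1e-6):
    """Central-difference derivative of F."""
    return (F(P + h) - F(P - h)) / (2.0 * h)

def optimal_power(b_over_a=1.0, lo=1e-3, hi=50.0, iters=100):
    """Solve F'(P)(P + b/a) - F(P) = 0 by bisection. Since F is sigmoidal,
    h(P) is positive near lo and negative near hi, with a single crossing
    at the maximizer of F(P)/(P + b/a)."""
    def h(P):
        return dF(P) * (P + b_over_a) - F(P)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The root coincides with the maximizer of the efficiency ratio $F(P)/(P+b/a)$, as the first-order condition requires.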
\section{Proof of proposition \ref{prop:qcp}}
\label{sec:appproofqcp}
An ``S"-shaped function has been defined in \cite{rodriguez} in the following manner. A function $f$ is ``S" shaped, if it satisfies the properties as mentioned in Appendix \ref{sec:proofqcwf}.
\begin{Lemma}
\label{aux:sshaped}
If $f$ is a ``S" shaped function, the composite function $f\circ g (x)$ is also ``S" shaped if $g$ satisfies the following properties:
\begin{enumerate}
\item $g$ also satisfies conditions 1, 3, 4 and 6 but with $g(0)=b, b>0$.
\item $\displaystyle{\lim_{x \to \infty}} f'(x)g'(x)=0$.
\item $g''(x)$ is a decreasing function such that $\displaystyle{\lim_{x \to \infty}} g''(x) = 0$.
\end{enumerate}
\end{Lemma}
The proof for the above Lemma is at the end of this section.
In \cite{veronica2}, the authors prove that the energy efficiency function with perfect CSI, defined as the ratio of the goodput to the transmitted RF signal power, is a quasi-concave function by showing that the success rate function $f(\rho)$ is ``S" shaped for the following cases:
\begin{description}
\item[(a)] $M\geq1$, $N=1$;
\item[(b)] $M\rightarrow +\infty$, $N < +\infty$;
\item[(c)] $M < +\infty$, $N \rightarrow +\infty$;
\item[(d)] $M\rightarrow +\infty$, $N \rightarrow + \infty$, $\displaystyle{\lim_{M\rightarrow+\infty, N\rightarrow+\infty} \frac{M}{N}} = \ell < +\infty$;
\item[(e)] $\sigma^2 \rightarrow 0$;
\item[(f)] $\sigma^2 \rightarrow + \infty$;
\end{description}
So, if we can show that the success rate function in our situation is also ``S" shaped, our proof is complete for all the cases mentioned above. From (\ref{eq:cmodel2n}) we know that the worst case mutual information in the case of imperfect CSI with training is mathematically equivalent to that of perfect CSI but with $\rho$ replaced by $\rho_{\mathrm{eff}}$. Thus it is possible to replace $f(\rho)$, in the case of perfect CSI, by $f(\rho_{\mathrm{eff}})$, when we study the case of imperfect CSI, and so we can study the energy efficiency function given by:
\begin{eqnarray}
\nu_R(P,t) &=& R\zeta\frac{f(\rho_{\mathrm{eff}}(p(x)))} {x}
\label{eq:fofnu}
\end{eqnarray}
where $x$ is a new variable representing the total consumed power and $p(x) = \frac{L(x-b)}{a\sigma^2}$. We have $p'(x)>0$, $p''(x)=0$, $\rho_{\mathrm{eff}}'(\rho)>0$, and $\displaystyle{\lim_{\rho \to \infty}} \rho_{\mathrm{eff}}''(\rho)=0$. Thus $\rho_{\mathrm{eff}}$ and $p$ satisfy the conditions on $g$ detailed in Lemma \ref{aux:sshaped}. Hence we have proven that the numerator is ``S" shaped with respect to $x$, and it then immediately follows from the results in \cite{rodriguez} that $\nu_R$ has a unique maximum and is quasi-concave for all the specified cases.
\bf Proof of Lemma \ref{aux:sshaped} \normalfont
Here we show that $f \circ g$ also satisfies all the properties of the ``S" function as described in \cite{rodriguez}.
\begin{enumerate}
\item Its domain is the domain of $g$ which is clearly the non-negative part of the real line; that is, the interval $[0,\infty)$.
\item Its range is the range of $f$, the interval $[0,1)$.
\item It is increasing as both $f$ and $g$ are increasing.
\item (``Initial convexity"). Note that $f(g(x))''=f''(y)g'(x)^2+g''(x)f'(y)$, with $y=g(x)$. As all terms in this expansion are positive in the interval $[0, x_f ]$, $f\circ g$ is also convex in this interval. Also note that as $g'$ and $f'$ are strictly positive and $g''$ is decreasing, for $y>x_f$, once $f(g(x))''<0$ it stays negative till infinity. This implies that if there is an inflexion point, it is unique.
\item (``Eventual concavity") Consider $h(x)=f(g(x))'=f'(y)g'(x)$; due to the initial convexity and increasing nature of $h$, $h(x_f)=k$ with $k>0$, while $\displaystyle{\lim_{x \to \infty}} f(g(x))'=0$. As $h$ is continuous, the mean value theorem imposes $h'(x)<0$ at some point. This implies that there exists some point $x_d > 0$ such that $f\circ g$ is concave in the interval $[x_d, \infty)$ and convex before it.
\item It has a continuous derivative (as all the functions used here have continuous derivatives).
\end{enumerate}
Hence, $f\circ g$ is ``S" shaped.
QED
\section{Proof of proposition \ref{prop:tcon}}
\label{sec:appproofct}
Let us consider the second partial derivative of $\nu_R$ with respect to $t$. (Note that this is possible as $t$ is a real number with the unit of time, while $t_{s}$ is a natural number.) From (\ref{eq:fofnu}), $\nu_R(P,t)=K^{-1} (1-\frac{t}{T})f(\rho_{\mathrm{eff}})$, with $K^{-1}=\frac{R}{x}$ a constant if $P$ is held constant.
\begin{eqnarray}
\label{eq:nutty}
K\frac{\partial^2 \nu}{\partial t^2} = & (1-\frac{t}{T})f''(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}}'(t)^2 +(1-\frac{t}{T})f'(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}}''(t) & \nonumber\\
&- \frac{2}{T}f'(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}}'(t) &
\end{eqnarray}
In the above sum, it can be easily verified that the terms $f'(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}} '(t)$ and $f'(\rho_{\mathrm{eff}})$ are positive and that $\rho_{\mathrm{eff}} ''(t)<0$. Thus if we have $f''(\rho_{\mathrm{eff}}) <0$, then $\nu_R(t)$ is strictly concave.
The only way $\nu_R$ depends on $P$ is through $\frac{f(\rho_{\mathrm{eff}})}{aP+b}$, and $R\zeta$ stays constant if only $P$ changes. So, using the fact that we are working at a maximum of $\nu_R$ with respect to $\rho$, i.e.\ $\frac{\partial \nu}{\partial \rho}=0$ and $\frac{\partial^2 \nu}{\partial \rho ^2}<0$, setting $\frac{\partial \nu}{\partial \rho}=0$ gives:
\begin{equation}
\label{eq:nudrho1}
0= f'(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}}'(\rho)\rho^{-1} - f(\rho_{\mathrm{eff}})\rho^{-2}
\end{equation}
Substituting (\ref{eq:nudrho1}) into $\frac{\partial^2 \nu}{\partial \rho^2}<0$ yields:
\begin{eqnarray}
f''(\rho_{\mathrm{eff}})(\rho_{\mathrm{eff}}')^2\rho^{-1}-2f'(\rho_{\mathrm{eff}})\rho_{\mathrm{eff}}'\rho^{-2} + 2f(\rho_{\mathrm{eff}})\rho^{-3} &<0& \nonumber \\
f''(\rho_{\mathrm{eff}})(\rho_{\mathrm{eff}}')^2\rho^{-1} + \frac{2}{\rho}(0) &<0 & \nonumber \\
f''(\rho_{\mathrm{eff}}) < 0& &
\label{eq:nudrho2}
\end{eqnarray}
Thus, using (\ref{eq:nudrho2}) in (\ref{eq:nutty}), we have the result that $\nu_R(P^*,t)$ is strictly concave w.r.t $t$.
QED \\
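The strict concavity in $t$ can also be checked numerically on a toy instance, taking $\rho_{\mathrm{eff}}(t)=\rho t/(1+t)$ (increasing and concave, matching the high-SNR form $t\rho(1+t)^{-1}$ used in the next appendix) and a concave success rate with $f''<0$, as holds at the optimal power; both functional forms are illustrative assumptions:

```python
import math

def nu_of_t(t, T=55.0, rho=2.0):
    """nu_R(P*, t) up to a positive constant: (1 - t/T) * f(rho_eff(t)),
    with rho_eff(t) = rho*t/(1+t) (increasing, concave in t) and a concave
    success rate f(x) = 1 - exp(-x) (f'' < 0, as at the optimal power).
    Both functional forms are illustrative assumptions."""
    rho_eff = rho * t / (1.0 + t)
    f = 1.0 - math.exp(-rho_eff)
    return (1.0 - t / T) * f

ts = [0.1 * k for k in range(1, 501)]          # t in (0, 50]
vals = [nu_of_t(t) for t in ts]
# Discrete second differences; all should be negative for a concave function.
second_diffs = [vals[i + 1] - 2.0 * vals[i] + vals[i - 1]
                for i in range(1, len(vals) - 1)]
```

On this instance every second difference is negative and the maximum is attained at an interior $t$, consistent with the proposition.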
\section{Proof of proposition \ref{prop:optt}}
\label{sec:appproofoptt}
The equation that describes the optimal training time $t^*_s$ can be written as:
\begin{equation}
\label{eq:nuttymax}
(T-t^*_s)\, f'(\rho_{\mathrm{eff}})\,\rho_{\mathrm{eff}}'(t)\big|_{t=S_d t^*_s} - f(\rho_{\mathrm{eff}})=0
\end{equation}
Now, let us study the optimal training time $t^*_s$ in the very high SNR regime, i.e.\ when $\rho \to \infty$. (Note that $P \to \infty$ is equivalent to $\rho \to \infty$.)
{\it High SNR regime:} Applying the limit of $\rho \to \infty$ in (\ref{eq:nuttymax}), we get that
\begin{equation}
\label{eq:nuttyinf}
T_s-t^*_s = \displaystyle{\lim_{\rho \to \infty}} \frac{f(t\rho(1+t)^{-1})}{\rho f'(t\rho(1+t)^{-1})}
\end{equation}
We know from various works including \cite{veronica2} that $\displaystyle{\lim_{\rho \to \infty}} f(t\rho(1+t)^{-1})=1$. Now let us consider $f'(\frac{t}{1+t} \rho)$.
For a MISO system we know from \cite{chis} that:
\begin{eqnarray}
f_{MISO}(\rho_{\mathrm{eff}}) &= & \frac{ { \it \gamma}\left( M, \frac{2^\xi-1}{\sqrt{\rho_{\mathrm{eff}}}} \right) } {\Gamma(M)}
\label{eq:misoouts}
\end{eqnarray}
where $\Gamma$ is the Gamma function and $\gamma$ is the lower incomplete Gamma function. Now we can use the special property of the incomplete Gamma function, $\displaystyle{\lim_{x \to 0}} \frac{\gamma(M,x)}{x^M} = 1/M$, detailed in \cite{gamma}, to determine that $\rho f'(\rho_{\mathrm{eff}}) \propto \rho_{\mathrm{eff}}^{-M/2}$ as $\rho \to \infty$. Plugging this into (\ref{eq:nuttyinf}) we have:
\begin{equation}
\label{eq:nuttyinf2}
\displaystyle{\lim_{\rho \to \infty}} T_s-t^*_s = \frac{1+t}{t}\rho_{\mathrm{eff}}^{M/2}
\end{equation}
Thus $\nu_R$ is optimized when $t_s \to - \infty$, but as $M \leq t_s <T_s$, we have $\displaystyle{\lim_{\rho \to \infty}} t^*_s=M$ for all MISO systems.
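For integer $M$, the expression in (\ref{eq:misoouts}) can be evaluated without special-function libraries, using the closed form of the regularized lower incomplete Gamma function, $\gamma(M,x)/\Gamma(M) = 1 - e^{-x}\sum_{k=0}^{M-1} x^k/k!$. The sketch below evaluates the formula exactly as written in (\ref{eq:misoouts}):

```python
import math

def f_miso(rho_eff, xi, M):
    """Evaluate gamma(M, x)/Gamma(M) with x = (2^xi - 1)/sqrt(rho_eff),
    i.e. the expression f_MISO of (eq:misoouts), via the closed form of the
    regularized lower incomplete Gamma for integer M:
    P(M, x) = 1 - exp(-x) * sum_{k=0}^{M-1} x^k / k!."""
    x = (2.0 ** xi - 1.0) / math.sqrt(rho_eff)
    s = sum(x ** k / math.factorial(k) for k in range(M))
    return 1.0 - math.exp(-x) * s
```

For example, for $M=1$ this reduces to $1-e^{-x}$, and for $M=2$, $x=1$ it equals $1-2e^{-1}$, matching tabulated values of the regularized incomplete Gamma function.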
Now for any MIMO system in general, using the eigenvalue decomposition of ${\bf H} {\bf H}^H$ we have $\log_2| { I_M} + \frac{\rho}{M}{\bf H} {\bf H}^H|=\log_2(\Pi_{i=1}^L(1+\frac{\rho}{M}\lambda_i))$, where $L=\min(M,N)$ and $\lambda_i$ are the eigenvalues of ${\bf H} {\bf H}^H$. Applying the limit on $\rho$ and ignoring lower order terms we have
\begin{equation}
\displaystyle{\lim_{\rho \to \infty}} f(\rho_{\mathrm{eff}}) = \mathrm{Pr} \left[\Pi_{i=1}^L\lambda_i \geq \frac{2^\xi}{\rho_{\mathrm{eff}}^L} \right].
\end{equation}
We can observe that the above expression is a cumulative distribution function of $\Pi_{i=1}^L\lambda_i$, and so its derivative is simply the PDF of $\Pi_{i=1}^L\lambda_i$. For $\rho \to \infty$, $f'(\rho_{\mathrm{eff}})$ is thus the PDF of $\Pi_{i=1}^L\lambda_i$ evaluated at $\frac{2^\xi}{\rho_{\mathrm{eff}}^L}$. In general, if the number of transmit antennas is the same, $\mathrm{Pr} [\lambda_{MISO} > x] \leq \mathrm{Pr} [\Pi_{i=1}^L\lambda_i > x]$ for any $x>0$ \cite{eigenrmt}. Thus $f_{MIMO}'(\rho_{\mathrm{eff}})<f_{MISO}'(\rho_{\mathrm{eff}})$, implying that for all MIMO systems $\displaystyle{\lim_{\rho \to \infty}} t^*_s=M$ from (\ref{eq:nuttyinf}) and (\ref{eq:nuttyinf2}).
\label{sec:introduction}
Data integration and access to legacy data sources using end user-oriented languages are increasingly challenging contemporary organizations. In the whole spectrum of data integration and access solutions, the approach based on \emph{Virtual Knowledge Graphs (VKG)} is gaining momentum, especially when the underlying data sources to be integrated come in the form of relational databases (DBs)~\cite{XDCC19}. VKGs replace the rigid structure of tables with the flexibility of a graph that incorporates domain knowledge and is kept virtual, eliminating duplications and redundancies. A VKG specification consists of three main components:
\begin{inparaenum}[\it (i)]
\item \emph{data sources} (in the context of this paper, constituted by relational DBs) where the actual data are stored;
\item a domain \emph{ontology}, capturing the relevant concepts, relations, and constraints of the domain of interest;
\item a set of \emph{mappings} linking the data sources to the ontology.
\end{inparaenum}
One of the most critical bottlenecks towards the adoption of the VKG approach, especially in complex, enterprise scenarios, is the definition and management of mappings. These mappings play a central role in a variety of data management tasks, within both the semantic web and the DB communities. For example, in schema matching, mappings (typically referred to as \emph{matches}) aim at expressing correspondences between atomic, constitutive elements of two different relational schemas, such as attributes and relation names~\cite{rahm2001survey}. This simple type of mappings led to a plethora of very sophisticated (semi-)automated techniques to bootstrap mappings without prior knowledge on the two schemata~\cite{do2002coma,chen2018biggorilla,shragaadnev}.
A similar setting arises in the context of ontology matching (also referred to as ontology alignment), where the atomic elements to be put in correspondence are classes and properties~\cite{EUZENAT2007a}. Just like with schema matching, a huge body of applied research has led to effective (semi-)automatic techniques for establishing mappings~\cite{ivanova2017alignment,kolyvakis2018deepalignment}.
In data exchange, instead, more complex mapping specifications (like the well-known formalism of TGDs) are needed so as to express how data extracted from a source DB schema should be used to populate a target DB schema \cite{Lenz02}. Due to the complex nature of these mappings, research in this field has been mainly foundational, with some notable exceptions \cite{FHHM*09,CKQT18}.
The VKG approach appears to be the one that poses the most advanced challenges when it comes to mapping specification, debugging, and maintenance. Indeed, on the one hand, VKG mappings are inherently more sophisticated than those used in schema and ontology matching. On the other hand, while they appear to resemble those typically used in data exchange, they need to overcome the abstraction mismatch between the relational schema of the underlying data storage, and the target ontology; consequently, they are required to explicitly handle how (tuples of) data values extracted from the DB lead to the creation of corresponding objects in the ontology.
As a consequence, management of VKG mappings throughout
their entire lifecycle is currently a labor-intensive, essentially manual
effort, which requires highly-skilled professionals~\cite{spanos2012bringing} that, at once:
\begin{inparaenum}[\itshape (i)]
\item have in-depth knowledge of the domain of discourse and how it can be represented using structural conceptual models (such as UML class diagrams) and ontologies;
\item possess the ability to understand and query the logical and physical structure of the DB; and
\item master languages, methodologies, and technologies for representing the ontology and the mapping using standard frameworks from semantic web (such as OWL and R2RML).
\end{inparaenum}
Even in the presence of all these skills, writing mappings is demanding and
poses a number of challenges related to semantics, correctness, and
performance. More concretely, no comprehensive approach currently exists to
support ontology engineers in the creation of VKG mappings, exploiting all the
involved information artifacts to their full potential: the relational schema
with its constraints and the extensional data stored in the DB, the ontology axioms, and a conceptual schema that lies, explicitly or implicitly, at the basis of the relational schema.
Bootstrapping techniques~\cite{JKZH*15,KHSB*17} have been developed to relieve the ontology engineer from the ``blank paper syndrome.'' However, they are typically adopted in scenarios where neither the ontology nor the mappings are initially available, and where various assumptions are posed over the schema of the DB (e.g., in terms of normalization). Hence, they essentially bootstrap at once the ontology, as a lossless mirror of the DB, and corresponding one-to-one mappings. This explains why bootstrapping techniques cannot properly handle the relevant, practical cases where the relational schema is poorly structured (e.g., a denormalized, legacy DB), and/or where the ontology is already given and presents a true abstraction mismatch with the DB.
These recurring scenarios typically emerge in the common situation where both
the ontology and the DB schema are derived from a conceptual analysis of
the domain of interest. The resulting knowledge may stay implicit, or may lead
to an explicit representation in the form of a structural conceptual model,
which can be represented using well-established notations such as UML, ORM, or
E-R. On the one hand, this conceptual model provides the basis for creating a
corresponding domain ontology through a series of semantic-preserving
transformation steps. On the other hand, it can trigger the design process that
finally leads to the deployment of an actual DB. This is done via a
series of restructuring and adaptation steps, considering a number of aspects
that go beyond pure conceptualization, such as query load, performance, volume,
and taking into account the abstraction gap that exists between the conceptual
and logical/physical layers. It is precisely the reconciliation of these two
transformation chains (resp., from the conceptual model to the ontology, and
from the conceptual model to the DB) that is reflected in the VKG
mappings.
In this work, we build on this key observation and propose a catalog of mapping patterns that emerge when linking DBs to ontologies.
To do so, we build on well-established methodologies and patterns studied in
data management (such as W3C direct mappings -- W3C-DM\xspace \cite{W3Crec-RDB-Direct-Mapping} -- and their extensions), data analysis (such as algorithms for discovering dependencies), and conceptual modeling (such as relational mapping techniques).
These are suitably
extended and refined, by considering the inherent impedance mismatch between
data sources and ontologies, which requires to handle how objects are built
starting from DB values, and by analyzing the concrete mapping strategies
arising from six VKG benchmarks and real-world use cases, covering a variety of different application domains.
The resulting patterns do not simplistically consider single elements from the
different information artifacts, but rather tackle more complex structures
arising from their combination, and potentially from the cascaded application
of other patterns.
Exploiting this holistic approach, we then discuss how the proposed patterns
can be employed in a variety of VKG design scenarios, depending on which
information artifacts are available, and which ones have to be produced.
Finally, we go back to the concrete VKG scenarios and benchmarks, and report on
the coverage of mappings appearing therein in terms of our patterns, as well as
on how many times the same pattern recurs. This also gives an interesting
indication on which patterns are more pervasively used in practice.
\endinput
\section{Preliminaries}
\label{sec:preliminaries}
In this work, we use the \textbf{bold} font to denote tuples, e.g., $\vec{x}$,
$\vec{y}$ are tuples. When convenient, we treat tuples as sets and allow the
use of set operators on them.
We rely on the VKG\xspace framework of \cite{PLCD*08}, which we formalize here
through the notion of \emph{VKG\xspace specification}, which is a triple
$\S=\tup{\mathcal{T},\mathcal{M},\Sigma}$ where $\mathcal{T}$ is an \emph{ontology TBox}, $\mathcal{M}$ is a set of
\emph{mappings}, and $\Sigma$ is the schema of a DB. The
ontology $\mathcal{T}$ is formulated in \textsc{OWL\,2\,QL}\xspace~\cite{W3Crec-OWL2-Profiles}, whose formal
counterpart is the description logic \textit{DL-Lite}$_{\altmathcal{R}}$\xspace~\cite{CDLLR07}, and for
conciseness we actually adopt the DL notation. Consider four mutually disjoint
sets \textbf{NI}\xspace of \emph{individuals}, \textbf{NC}\xspace of \emph{class names}, \textbf{NP}\xspace of \emph{object
property names}, and \textbf{ND}\xspace of \emph{data property names}. Then an \textsc{OWL\,2\,QL}\xspace
\emph{TBox} $\mathcal{T}$ is a finite set of axioms of the form $B \sqsubseteq C$ or
$r_1 \sqsubseteq r_2$, where $B, C$ are \emph{classes} and $r_1$, $r_2$ are
\emph{object properties}, according to the following grammar, where $A \in
\textbf{NC}\xspace$, $d \in \textbf{ND}\xspace$, and $p \in \textbf{NP}\xspace$:
\[
B ~\rightarrow~ A ~\mid~ \exists r ~\mid~ \exists d
\qquad\qquad
C ~\rightarrow~ B ~\mid~ \lnot B
\qquad\qquad
r ~\rightarrow~ p ~\mid~ p^-
\]
Observe that for simplicity of presentation we do not
consider here datatypes, which are also part of the \textsc{OWL\,2\,QL}\xspace standard.
Mappings specify how to populate classes and properties of the ontology with
individuals and values constructed starting from the data in the underlying
DB. In VKGs\xspace, the adopted standard language for mappings is
R2RML~\cite{W3Crec-R2RML}, but for conciseness we use here a more convenient
abstract notation: A \emph{mapping} $m$ is an expression of the form\\
\centerline{
$\begin{array}{l}
s: Q(\vec{x}) \\
t: L(\vec{\mathfrak{t}}(\vec{x}))
\end{array}$
}
\noindent where $Q(\vec{x})$ is a SQL query over the DB schema $\Sigma$, called \emph{source query}, and $L(\vec{\mathfrak{t}}(\vec{x}))$ is a \emph{target atom} of the form
\begin{inparablank}
\item $C(\mathfrak{t}_1(\vec{x_1}))$,
\item $p(\mathfrak{t}_1(\vec{x_1}), \mathfrak{t}_2(\vec{x_2}))$, or
\item $d(\mathfrak{t}_1(\vec{x_1}), \mathfrak{t}_2(\vec{x_2}))$,
\end{inparablank}
where $\mathfrak{t}_1(\vec{x_1})$ and $\mathfrak{t}_2(\vec{x_2})$ are terms that we call
\emph{templates}. In this work we express source queries using the notation of
\emph{relational algebra} and actually omit answer variables, assuming that
they coincide with the variables used in the target atom. Intuitively, a
template $\mathfrak{t}(\vec{x})$ in the target atom of a mapping corresponds to an
\emph{R2RML template}, and is used to generate object \emph{IRIs} (i.e.,
Internationalized Resource Identifiers) or (RDF) \emph{literals}, starting from
DB values retrieved by the source query in that mapping.
As for the \emph{semantics} of VKG mappings, we illustrate it by means of an
example, and refer, e.g., to~\cite{PLCD*08} for more details.
In our examples, we use the concrete syntax adopted in the Ontop\xspace VKG\xspace
system~\cite{CCKK*17}, in which the answer variables of the source query are
indicated in the target atom by enclosing them in \verb|{|\,$\cdots$\verb|}|,
and in which each mapping is identified by an Id. The following is an example
mapping expressed in such syntax:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mPerson
source SELECT "ssn" FROM "person_info"
target :person/{ssn} a :Person .
\end{lstlisting}
The effect of such mapping, when applied to a DB instance $\mathcal{D}$ for
$\Sigma$, is to populate the class \texttt{:Person} with IRIs constructed by
replacing the answer variable \texttt{ssn} occurring in the template in the
target atom with the corresponding assignments for that variable in the answers
to the source query evaluated over $\mathcal{D}$.
\endinput
\subsection{Schema-driven Patterns}
\label{sec:mappings}
\input{tables/schema-driven.tex}
Next we briefly comment on schema-driven patterns, shown in Table~\ref{tab:schema-driven-patterns}.
\mypar{Schema Entity (SE)} %
This fundamental pattern considers a single table $T_E$ with primary key $\setsym{K}$
and other attributes $\setsym{A}$. The pattern captures how $T_E$ is mapped into a
corresponding class $C_E$. The primary key of $T_E$ is employed to construct
the objects that are instances of $C_E$, using a template $\mathfrak{t}_E$ specific for
that class. Each relevant attribute of $T_E$ is mapped to a data property of
$C_E$.
\\
\textsl{Example}: A client registry table containing SSNs of clients, together
with their name as an additional attribute, is mapped to a \ex{Client} class
using the SSN to construct its objects. In addition, the SSN and name are
mapped to two corresponding data properties.
\\
\textsl{References}: This pattern is widespread, and it is already mentioned in
the W3C-DM\xspace.
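For illustration, the client registry example could be rendered by a mapping in the concrete Ontop\xspace syntax of Section~\ref{sec:preliminaries}; table and attribute names here are purely illustrative:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mClient
source SELECT "ssn", "name" FROM "client"
target :client/{ssn} a :Client ; :ssn {ssn} ; :name {name} .
\end{lstlisting}
Here \texttt{:client/\{ssn\}} plays the role of the class-specific template $\mathfrak{t}_E$.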
\mypar{Schema Relationship (SR)} %
This pattern considers three tables $T_R$, $T_E$, and $T_F$, in which the
primary key of $T_R$ is partitioned into two parts $\setsym{K}_{RE}$ and $\setsym{K}_{RF}$
that are foreign keys to $T_E$ and $T_F$, respectively. $T_R$ has no
additional attributes. The pattern captures how $T_R$ is mapped to
an object property $p_R$, using the two parts $\setsym{K}_{RE}$ and $\setsym{K}_{RF}$ of the
primary key to construct respectively the subject and the object of the triples
in $p_R$.
\\
\textsl{Example}: An additional table in the client registry stores the
addresses of each client, and has a foreign key to a table with locations.
The former table is mapped to an \ex{address} object property, for which the ontology asserts that the domain is the class \ex{Person} and the range an additional class \ex{Location}, which corresponds to the latter table.
\\
\textsl{References}: This pattern is widespread. For instance, it is described
both in BootOX~\cite{JKZH*15} and in Mirror~\cite{mirror}.
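A hypothetical mapping realizing the address example could look as follows, where the two parts of the primary key of the relationship table provide the subject and the object of the generated triples (all names are illustrative):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mAddress
source SELECT "client_ssn", "loc_id" FROM "client_address"
target :client/{client_ssn} :address :location/{loc_id} .
\end{lstlisting}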
\mypar{Schema Relationship with Identifier Alignment (SRa)}%
This pattern is pattern \pat{SR} plus a \emph{modifier \pat{a}}, indicating that the pattern can be applied after the identifiers involved in the relationship have been \emph{aligned}. The alignment is necessary because now the foreign key in $T_R$ does not point to the primary key $\setsym{K}_F$ of $T_F$, but to an additional key $\setsym{U}_F$. Since the instances of the class $C_F$ corresponding to $T_F$ are constructed using the primary key $\setsym{K}_F$ of $T_F$ (cf.\ pattern \pat{SE}), also the pairs that populate $p_R$ should refer in their object position to that primary key, which can only be retrieved by a join between $T_R$ and $T_F$ on the additional key.
Note that alignment variants can be defined in a straightforward way for other
patterns involving relationships. For conciseness, we prune these variants from
our catalog.
\\
\textsl{Example}: The primary key of the table with locations is not given by
the city and street, which are used in the table that relates clients to their
addresses, but is given by the latitude and longitude of locations.
\\
\textsl{References}: This pattern is widespread. In particular, the alignment
of identifiers is mentioned in the W3C-DM\xspace.
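With hypothetical table and attribute names, the required join on the additional key could be expressed directly in the source query:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mAddressAligned
source SELECT a."client_ssn", l."lat", l."long"
       FROM "client_address" a, "location" l
       WHERE a."city" = l."city" AND a."street" = l."street"
target :client/{client_ssn} :address :location/{lat}/{long} .
\end{lstlisting}
The object position now uses the template built from the primary key (latitude and longitude), retrieved through the join.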
\mypar{Schema Relationship with Merging (SRm)} %
This pattern considers a table $T_E$ in which the foreign key
$\setsym{K}_{EF}$ to a table $T_F$ is disjoint from its primary key $\setsym{K}_E$. The
table $T_E$ is mapped to an object property, whose subject and object are
derived respectively from $\setsym{K}_E$ and $\setsym{K}_{EF}$.
\\
\textsl{Example}: The relationship between a client and its \emph{unique} billing
address has been merged into the client table. In the ontology, a
\ex{billingAddress} object property relates the \ex{Client} class
to the \ex{Location} class, and is populated via a mapping from the client table.
\\
\textsl{References}: This pattern is widespread, and is one of the basic
patterns described in the W3C-DM\xspace.
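Since the relationship is merged into $T_E$, a single projection over the client table suffices, as in the following sketch (names are illustrative):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mBillingAddress
source SELECT "ssn", "loc_id" FROM "client"
target :client/{ssn} :billingAddress :location/{loc_id} .
\end{lstlisting}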
\mypar{Schema Reified Relationship (SRR)} %
This pattern considers a table $T_R$ whose primary key is partitioned into at
least three parts $\setsym{K}_{RE}$, $\setsym{K}_{RF}$, and $\setsym{K}_{RG}$, which are foreign keys
to three additional tables; or whose primary key is partitioned into at least
two such parts, but with additional attributes in $T_R$.
Such a table naturally corresponds to an $n$-ary relationship $R$
with $n > 2$ (or with attributes), and to represent it at the
ontology level we require a class $C_R$, which reifies $R$,
whose instances are built
from the primary key of $T_R$. The mapping
accounts for the fact that the components of the $n$-ary relationship
have to be represented by suitable object properties, one for each such
component, and that the tuples that instantiate these object properties can all
be derived from $T_R$ alone.
\\
\textsl{Example}: A table containing information about university exams, which
involve a student, a course, and a professor teaching that course. This
information is represented by a relationship that is inherently ternary.
The ontology should contain a class corresponding to the reified relationship, e.g., a class \ex{Exam}.
\\
\textsl{References}: This pattern, which corresponds to \emph{reification} in ontological and conceptual modeling~\cite{CaLN99,BeCD05}, is one of the basic patterns described in the W3C-DM\xspace. A variant of it is also present in Mirror where, however, reification is required in the data.
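A possible mapping for the exam example, in which the reified class and all component object properties are derived from $T_R$ alone (table, template, and property names are hypothetical):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mExam
source SELECT "student_id", "course_id", "prof_id" FROM "exam"
target :exam/{student_id}/{course_id}/{prof_id} a :Exam ; :examStudent :student/{student_id} ; :examCourse :course/{course_id} ; :examProfessor :professor/{prof_id} .
\end{lstlisting}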
\mypar{Schema Hierarchy (SH)}
This pattern considers a table $T_F$ whose primary key is a foreign key to a table $T_E$. Then, $T_F$ is mapped to a class $C_F$ in the ontology that is a sub-class of the class $C_E$ to which $T_E$ is mapped. Hence, $C_F$ ``inherits'' the template $\mathfrak{t}_E$ of $C_E$, so that the instances of the two classes are ``compatible''.
\\
\textsl{Example}: An entity Student in an ISA relation with an entity Person.
\\
\textsl{References}: This pattern goes beyond W3C-DM\xspace, and is first discussed in Mirror (but not in the form presented here) and BootOX.
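The inheritance of the template can be seen in a hypothetical mapping for the student example, which reuses the \texttt{:person/\{ssn\}} template of the parent class:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mStudent
source SELECT "ssn" FROM "student"
target :person/{ssn} a :Student .
\end{lstlisting}
In this way, each generated student IRI coincides with the IRI of the corresponding person.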
\mypar{Schema Hierarchy with Identifier Alignment (SHa)}
This pattern is like \pat{SH}, but the foreign key in $T_F$ is over a key $\setsym{U}_F$ that is not primary. The objects for $C_F$ have to be built out of $\setsym{U}_F$, rather than out of its primary key. For this purpose, the pattern creates a view $V_F$ in which $\setsym{U}_F$ is the primary key, and the foreign key relations are preserved.
\\
\textsl{Example}: An ISA relation between entities Student and Person. Students are identified by their matriculation number, whereas persons are identified by their SSN.
\\
\textsl{References}: We are not aware of works formalizing, or identifying, this pattern.
\endinput
\subsection{Data Driven Mapping Patterns}
\label{sec:mapping-data}
Data-driven patterns are mapping patterns that depend both on the schema and on the actual data in the DB. They are not limited to the variants corresponding to the schema-driven patterns, but also comprise specific patterns that do not have a corresponding schema version, e.g., due to \emph{denormalized tables}. Such patterns, for which we provide a detailed description below, are shown in Table~\ref{tab:data-driven-patterns}.
\mypar{Data Entity with Merged 1-N Relationship and Entity with Attributes (\pat{DR1Nm})}
\noindent This pattern considers a table $T_E$ that has, besides its primary key $\setsym{K}_E$, also attributes $\setsym{K}_F$ that functionally determine attributes $\setsym{A}_F$. Observe that the latter condition is not possible if the DB schema is in normal form.
When this pattern is applied, the key $\setsym{K}_F$ and the attributes $\setsym{A}_F$ that
go along with it can be projected out from $T_E$, resulting in a view $V_F$
to which further patterns can be applied, including \pat{DR1Nm} itself on
additional attributes. Two additional views $V_E$ and $V_R$ are created,
representing the tables corresponding to the entities $E$ and $R$,
respectively.
\\
\textsl{Example}: A single students table containing information about students
and attended courses, e.g., the course identifier and the course name. The
course identifier, which is not a key for students, uniquely determines course
names. The ontology defines both a \ex{Student} and a \ex{Course} class. The
course identifier is used to build instances of \ex{Course}, and course name is
mapped to a data property that has as domain the \ex{Course} class.
\\
\textsl{References}: We are not aware of works formalizing, or identifying,
this pattern.
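With hypothetical attribute names, the course-related information could be extracted from the denormalized students table as follows, where \texttt{DISTINCT} realizes the projection that defines the view $V_F$:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mCourse
source SELECT DISTINCT "course_id", "course_name" FROM "student"
target :course/{course_id} a :Course ; :courseName {course_name} .
\end{lstlisting}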
\mypar{Data 1-1 Relationship with Merging (DR11m)} %
This pattern can be applied when a table $T_E$ has, besides its primary key
$\setsym{K}_E$, also an additional key $\setsym{K}_F$, and domain knowledge or the ontology
indicate that objects whose IRI is constructed from $\setsym{K}_F$ are relevant in the
domain, and that they have data properties that correspond to the attributes
$\setsym{A}_F$ of $T_E$.
When this pattern is applied, the key $\setsym{K}_F$ and the attributes $\setsym{A}_F$ that
go along with it can be projected out from $T_E$, resulting in a view $V_E$
to which further patterns can be applied, including \pat{DR11m} itself on
additional attributes.
\\
\textsl{Example}: A single table containing the information about universities,
and the information about their rector. The ontology contains
both a \ex{University} and a \ex{Rector} class. The attribute SSN,
identifying rectors, is used to build instances of
\ex{Rector}, and additional attributes that intuitively belong to the rector
(such as his name) are mapped to data properties that have
as domain the \ex{Rector} class (as opposed to the \ex{University} class).
Notice that domain knowledge is required to apply this pattern. E.g., if the
table contains an attribute for the salary of the rector, this could either be
considered a property of the university (e.g., if the rector salary is
determined by some regulation), or of the rector (e.g., if the rector salary is
negotiated individually).
\\
\textsl{References}: We are not aware of works formalizing, or identifying,
this pattern.
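A possible mapping for the rector example, with hypothetical attribute names, projects the rector-related columns out of the university table:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mRector
source SELECT "rector_ssn", "rector_name" FROM "university"
target :rector/{rector_ssn} a :Rector ; :name {rector_name} .
\end{lstlisting}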
\input{tables/data-driven.tex}
\mypar{Data Entity with Optional Participation in a Relationship (DH01)}%
This pattern is characterized by a table $T_E$ that represents the merge of a child entity $E_R$ into a parent entity $E$, where $E_R$ has a mandatory participation in a relationship $R$. The join between the tables $T_R$ and $T_E$ identifies the objects of $E$ that are instances of $E_R$, and is used in a mapping to create instances of the class $C_{E_R}$, as well as the object property $R$ connecting $E_R$ to $F$. This pattern produces a view $V_{E_R}$ to which further patterns can be applied.
\\
\textsl{Example}: A students table and a table connecting students to undergraduate courses. Each student participating in such a relationship is an undergraduate student.
\\
\textsl{References}: To the best of our knowledge, this pattern was first described in BootOX, which also provides techniques to automatically discover and use it to generate mappings.
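The join characterizing this pattern could be realized as follows (table and attribute names are hypothetical):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mUndergraduate
source SELECT s."id" FROM "student" s, "attends_ug_course" r
       WHERE s."id" = r."student_id"
target :student/{id} a :UndergraduateStudent .
\end{lstlisting}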
\mypar{Clustering Entity to Class/Data Property/Object Property
\pat{(CE2C/CE2D/CE2O)}}\\
Such patterns are characterized by an entity $E$ and a \emph{derivation rule} defining sub-entities of $E$ according to the values for attributes $\setsym{B}$ in $E$. Instances in these sub-entities can be mapped to objects in the subclasses $C_E^\setsym{v}$ of the ontology (\pat{CE2C}), to objects connected through a data property to some literal constructed through a \emph{value invention} function $\xi$ applied on $\setsym{v}$ (\pat{CE2D}), or to objects connected through an object property to some IRI built from $\setsym{v}$ (\pat{CE2O}). The definition of $\xi$ depends on the actual ontology. As with other patterns, this one produces views according to the possible values $\setsym{v}$ of $\setsym{B}$.
\\
\textsl{Example}: A table contains people with an attribute defining their
gender and ranging over '\texttt{F}' or '\texttt{M}'. The ontology defines a
data property \ex{hasGender}, ranging over the two RDF literals \texttt{"Male"}
and \texttt{"Female"}. Then, pattern \pat{CE2D} clusters the table according to
the gender attribute, so as to obtain objects to be linked to either of the two
RDF literals.
\\
\textsl{References}: For what concerns the \pat{CE2C} variant, the clustering pattern is widespread, and it is mentioned in several works on bootstrapping like BootOX. We are not aware of mentions regarding the \pat{CE2D} and \pat{CE2O} variants.
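For the gender example, pattern \pat{CE2D} gives rise to one mapping per cluster, where the selection condition in the source query identifies the cluster and the constant literal is the result of the value invention function $\xi$ (table and attribute names are hypothetical):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mGenderF
source SELECT "ssn" FROM "person" WHERE "gender" = 'F'
target :person/{ssn} :hasGender "Female" .

mappingId mGenderM
source SELECT "ssn" FROM "person" WHERE "gender" = 'M'
target :person/{ssn} :hasGender "Male" .
\end{lstlisting}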
\mypar{Clustering Relationship to Object Property \pat{CR2O}}%
This pattern is similar to the previous clustering patterns, but the table being clustered corresponds to a relationship $R$, and the result of the clustering is a set of sub-properties of the object property relative to $R$.
\\
\textsl{Example}: A table relating professors to courses. Lecturers are identified by a multi-attribute key in which one attribute discriminates between full or associate professors. Courses are identified by a multi-attribute key in which one attribute discriminates between undergraduate or graduate courses. Such table is mapped to four object properties in the ontology, one for each combination of type of lecturer and type of course (e.g., an undergraduate course taught by a full professor).
\\
\textsl{References}: We are not aware of works formalizing, or identifying, this pattern.
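One of the four resulting sub-properties in the teaching example could be populated by a mapping of this form, where the discriminating key attributes appear in the selection condition (all names are hypothetical):
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
mappingId mTeachesUGFull
source SELECT "prof_id", "course_id" FROM "teaching"
       WHERE "prof_type" = 'full' AND "course_type" = 'ug'
target :professor/{prof_id} :teachesUGCourseAsFull :course/{course_id} .
\end{lstlisting}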
\endinput
\mypar{Data Entity with Merged 1-N Relationship and Entity with Relationship (\pat{DR1NRm})}\\
This pattern is similar to \pat{DR1Nm}, but with the difference that table $T_F$ does not have attributes $\setsym{A}_F$. To distinguish $F$ from $E$, this time we exploit the role of $F$ in the relationship $S$ to the entity $G$.%
\\
\textsl{Example}: The same example as for pattern \pat{DR1Nm}, but with the difference that the course name is \emph{not} present in students, and with the presence of a table relating the taught courses to their lecturers. Such a table is rendered as an object property in the ontology, and relates the classes \ex{Course} and \ex{Lecturers}.
\\
\textsl{References}: We are not aware of works formalizing, or identifying, this pattern.
\subsection{Variations and Combinations}
More complex patterns arise from the combination of the patterns described so far. For instance, recall the example we discussed for pattern \pat{DH01}. Undergraduate students, which are a by-product of the application of that pattern, might be in a relationship with an entity \ex{Graduation}. The object property capturing the relationship might be created by applying pattern \pat{DR}. In our analysis, we have observed that combinations are quite common in VKG\xspace specifications where the DB has been created independently from the ontology.
Another important variation is the one introduced by modifiers, such as \emph{value invention or combination}, in which DB values are used and combined to get RDF literals, typically by relying on R2RML \emph{templates}. We have already encountered an instance of value invention, specifically when we introduced the \pat{CE2D} pattern.
\endinput
\subsection{A Thought on Automatic Discovery of Patterns}
When it comes to discovering data-driven patterns, our methodology may benefit from techniques that were developed in the research discipline of schema matching~\cite{rahm2001survey}. Over the years, the proposed methods were shown to serve as a solid basis to handle small-scale schemata, typically encountered as a part of a mapping process~\cite{shragaadnev}.
Schema matching comes in handy when a pattern involves two (or more) under-specified schemata. By way of motivation, consider the case of an implicit relationship (pattern \pat{DR}). In such a case, the mapping may consider several relation pair candidates that may be semantically interpreted as representing a missing relationship. Let $T_E$ and $T_F$, with primary keys $\pk{\setsym{K}_E} = \pk{\setsym{K}_E}_{1}, \ldots, \pk{\setsym{K}_E}_{n}$ and $\pk{\setsym{K}_F} = \pk{\setsym{K}_F}_{1}, \ldots, \pk{\setsym{K}_F}_{m}$, respectively, be a candidate relation pair. A matching process between $\pk{\setsym{K}_E}$ and $\pk{\setsym{K}_F}$ aligns their attributes using {\em matchers} that utilize matching cues such as attribute names, instance data, schema structure, etc. Accordingly, the matching process yields similarity values (typically a real number in $[0,1]$) between $\pk{\setsym{K}_E}_{i}\in\pk{\setsym{K}_E}$ and $\pk{\setsym{K}_F}_{j}\in\pk{\setsym{K}_F}$. These values are then used to deduce a {\em match} $\sigma(\pk{\setsym{K}_E}, \pk{\setsym{K}_F})$. Such a match may comply with different constraints as set by the environment, e.g., a one-to-one matching. For example, the similarity values may be assigned using the string similarity matcher $\frac{len(\pk{\setsym{K}_E}_{i}.name\cap \pk{\setsym{K}_F}_{j}.name)}{max(len(\pk{\setsym{K}_E}_{i}.name), len(\pk{\setsym{K}_F}_{j}.name))}$~\cite{gal2011uncertain}, and a match may be inferred using a threshold selection rule~\cite{do2002coma}. Once a match is obtained, it may serve as a realization of a schema relationship mapping pattern. Obviously, not all tables have relationships. Thus, one should decide which of the generated relationships should be included in the final mapping. To do so, we should assess the quality of the matching outcome, allowing us to rank the matches~\cite{gal2019learning} (e.g., selecting matches with high similarity values to be included in the final mapping).
Match quality may also be learned from domain-specific input, introducing further quality measures for the usefulness of a match.
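To make the matcher concrete, consider two hypothetical attribute names, \texttt{name} and \texttt{fname}, and read the intersection in the formula above over the sets of characters of the two names (one possible reading among several):
\[
\frac{len(\mathtt{name} \cap \mathtt{fname})}{max(len(\mathtt{name}), len(\mathtt{fname}))}
= \frac{|\{\mathtt{n},\mathtt{a},\mathtt{m},\mathtt{e}\}|}{max(4,5)} = \frac{4}{5} = 0.8,
\]
so with a threshold of, say, $0.5$, the two attributes would be matched.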
The schema matching literature offers a mechanism to map multiple schemata to a global schema, which bears similarity to the \pat{DR11m} pattern. \emph{Schema cover}~\cite{gal2013completeness} matches parts of schemata (subschemata) with concepts, using schema matching techniques, aiming at covering all attributes of the global schema with a minimum number of overlaps between the subschemata. Using a similar approach, one can ``cover'' a schema by using multiple ontology concepts to generate a \pat{DR11m} pattern. Recalling the example we discussed for \pat{DR11m}, using the schema cover methodology, we can cover the university table using the properties relative to universities defined in the ontology, and the data property salary from the class \ex{Rector} (assuming salary is an attribute of rectors).
\section{Mapping Patterns}
\label{sec:patterns}
We now enter into the first contribution of this paper, namely a catalog of \emph{mapping patterns}. In specifying each pattern, we consider not only the three main components of a VKG specification -- namely the relevant portions of the DB schema, the ontology, and the mapping between the two -- but also the conceptual schema of the domain of interest and the underlying data, when available. As pointed out in Section~\ref{sec:introduction}, we do not fix which of these information artifacts are given, and which are produced as output, but we simply describe how they relate to each other, on a per-pattern basis.
To present each pattern, we describe the complete set of attributes of each
table. However, these have to be understood as only those attributes that are
relevant for the considered portion of the application domain.
We show the fragment of the conceptual schema that is affected by the pattern
in E-R notation (adopting the original notation by Chen) -- but any structural
conceptual modeling language, such as UML or ORM, would work as well. To
compactly represent sets of attributes, we use a small diamond in place of the
small circle used for single attributes in Chen notation. In the DB schema, we
use $T(\pk{\setsym{K}},\setsym{A})$ to denote a \emph{table} with name $T$, \emph{primary
key} consisting of the attributes $\setsym{K}$, and additional attributes
$\setsym{A}$. Given a set $\setsym{U}$ of attributes in $T$, we denote by $\key[T]{\setsym{U}}$ the
fact that $\setsym{U}$ form a \emph{key} for $T$. Referential integrity constraints
(like, e.g., foreign keys) are denoted with edges, pointing from the
referencing attribute(s) to the referenced one(s). For readability, we denote
sets of the form $\{o \mid \mathit{condition}\}$ as
$\{o\}_{\mathit{condition}}$.
Formally, a mapping pattern is a quadruple $(\mathcal{C},\S,\mathcal{M},\O)$ where $\mathcal{C}$ is a conceptual schema, $\S$ is a database schema with a distinguished table (called \emph{pattern main table}), $\mathcal{M}$ is a set of mappings, and $\O$ is an (\textsc{OWL\,2\,QL}\xspace) ontology.
In such a pattern, the pair $(\mathcal{C}, \S)$ is the \emph{input}, putting a conceptual representation into correspondence with one of its (many) admissible (i.e., formally sound) database schemata. Such variants are due to differences in the applied methodology, as well as to considerations about efficiency, performance optimization, and space consumption of the final database.
The pair $(\mathcal{M}, \O)$, instead, is the \emph{output}, where the \emph{database schema ontology} $\O$ is the \textsc{OWL\,2\,QL}\xspace encoding of the conceptual schema $\mathcal{C}$, and the set $\mathcal{M}$ of \emph{database schema mappings} provides the link between $\S$ and $\O$.
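Purely as an illustration, the quadruple above can be rendered as a plain record; the field names and types below are our own choices and not part of the formalization.

```python
from dataclasses import dataclass, field

@dataclass
class MappingPattern:
    """A mapping pattern (C, S, M, O): the input pair (C, S) is given,
    the output pair (M, O) is what applying the pattern produces."""
    conceptual_schema: str              # C: fragment of the conceptual schema
    db_schema: dict                     # S: table name -> list of attributes
    main_table: str                     # the distinguished pattern main table
    mappings: list = field(default_factory=list)   # M: mapping assertions
    ontology: list = field(default_factory=list)   # O: OWL 2 QL axioms

# Input pair (C, S) only; M and O start empty and are filled by the pattern.
p = MappingPattern(conceptual_schema="entity University",
                   db_schema={"university": ["id", "name"]},
                   main_table="university")
```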
We organize patterns in two major groups: \emph{schema-driven patterns}, shaped
by the structure of the DB schema and its explicit constraints, and
\emph{data-driven patterns}, which in addition consider constraints emerging
from specific configurations of the data in the DB.
Observe that, for each schema-driven pattern, we actually identify a
data-driven version in which the constraints over the schema are not
explicitly specified, but hold in the data. We denote such a pattern as its
schema-driven counterpart, but with a leading ``D'' in place of ``S'' (e.g., in
Table~\ref{tab:schema-driven-patterns}, DE is the data-driven version of
SE). The two types of patterns can be used in combination with additional
semantic information from the ontology, for instance on how the data values
from the DB translate into RDF literals. These considerations lead us to
introduce, where necessary, \emph{pattern modifiers}.
It is important to note that some of the patterns come with accessory views defined over the DB schema. The purpose of these views is to make explicit the presence of specific structures over the DB schema that are revealed through the application of the pattern itself. Such views can themselves be used, together with the original DB schema, to identify the applicability of further patterns.
\endinput
\section{Usage Scenarios for VKG\xspace Patterns}
\label{sec:discussion}
We now comment on how having a catalog of patterns for VKG\xspace specifications like the one introduced in Section~\ref{sec:patterns} is instrumental in a number of usage scenarios.
\mypar{Debugging of a VKG\xspace Specification} This scenario arises when a full VKG\xspace specification is already in place and must be debugged. Here, each component of the specification can be checked for compliance against the patterns.
\mypar{Conceptual Schema Reverse Engineering} Another relevant scenario arising when a full VKG\xspace specification is given, is that of inferring a conceptual schema of the DB that represents the domain of interest by reflecting the content of the VKG\xspace specification. Here the ontology provides the main source to reconstruct entities, attributes, and relationships, while the DB and the mappings provide the basis to ground the conceptual model in the actual DB, and to infer additional constraints that are not captured by the ontology (e.g., for limited expressivity of \textsc{OWL\,2\,QL}\xspace).
\mypar{Mapping Bootstrapping} In this scenario, the DB and the ontology are given, but mappings relating them are not. Patterns can be used to (semi-)automatically bootstrap an initial set of mappings, which can then be further refined and extended manually, possibly exploiting again the patterns. Schema patterns are the most suitable ones to automatically guide the bootstrapping process. When patterns contain tables that merge multiple entities/relationships, the presence of a conceptual schema becomes crucial to configure the left-hand side of bootstrapped mappings. This is, e.g., the case for \pat{DR1Nm} and \pat{DR11m}, and patterns based on clustering.
If the conceptual schema is not available in this tricky case, bootstrapping can still be attempted by relying on schema matching techniques~\cite{rahm2001survey}, as done in BootOX.
Specifically, schema matching comes in handy when a pattern involves two (or more) under-specified schemata. For instance, in the case of pattern \pat{DR}, \emph{pair candidates} between primary keys can be \emph{matched} in order to make implicit relationships explicit. This can be done through matchers (such as string similarity matchers~\cite{gal2011uncertain}) that employ attribute names, instance data, schema structure, etc. To separate genuine relationships from false positives generated by poor matchers, ranking techniques have to be employed~\cite{gal2019learning}.
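A minimal sketch of such a name-based matcher over two candidate composite primary keys might look as follows; the character-overlap similarity and the 0.7 threshold are toy choices of ours, far cruder than the matchers cited above.

```python
# Toy name-based matcher for primary-key attributes (illustrative only):
# the similarity measure and threshold are our own simplistic choices.
def name_similarity(a: str, b: str) -> float:
    """Overlap of the character sets of two attribute names, in [0, 1]."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / max(len(sa), len(sb))

def match_keys(keys_e, keys_f, threshold=0.7):
    """Pair key attributes whose name similarity clears the threshold."""
    return [(ke, kf) for ke in keys_e for kf in keys_f
            if name_similarity(ke, kf) >= threshold]

# Two composite keys that plausibly encode the same implicit relationship:
pairs = match_keys(["studentID", "deptID", "uniID"],
                   ["studID", "deptID", "univID"])
```

On this example the matcher pairs the three key attributes correctly; note, however, that lowering the threshold to 0.6 would already pair \ex{deptID} with \ex{studID}, a false positive of the kind that ranking techniques are meant to filter out.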
\mypar{Ontology+Mapping Bootstrapping} Here, neither the ontology nor the mappings are given as input, and have to be synthesized. This scenario can be reduced to the one of mapping bootstrapping by first inducing a baseline ontology mirroring the structure of the DB schema. This ontology is typically at a much lower level of abstraction than the one expected by domain experts. In fact, this problem can be tackled in a much more effective way in the case where an explicit conceptual schema is provided. In this case, standard techniques to encode conceptual schemas into corresponding ontology axioms (e.g.,~\cite{BoBr03}) can be readily applied.
\mypar{VKG\xspace Bootstrapping} In this scenario, we just have a conceptual schema of the domain, and the goal is to set up a VKG\xspace specification.
The conceptual schema can be then transformed into a normalized DB schema using well-established \emph{relational mapping} techniques (e.g.,~\cite{HaMo10}). At the same time, as pointed above, a direct encoding into ontology axioms can be applied to bootstrap the ontology. The generation of mappings becomes then a quite trivial task, considering that the induced DB and ontology are very close in terms of abstraction.
This setting resembles, in spirit, that of \emph{object-relational mapping}, used in software engineering to instrument a DB and corresponding access mechanisms starting from classes written in object-oriented code.
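A toy rendering of this bootstrapping chain is sketched below: one entity of the conceptual schema yields a table definition, a class axiom, and one direct mapping. The output strings are deliberately simplified stand-ins for SQL DDL, OWL 2 QL, and the mapping language; nothing here mimics a specific tool.

```python
# Toy sketch: conceptual entity -> (table DDL, class axiom, direct mapping).
# The function and the output formats are illustrative assumptions.
def bootstrap(entity, attributes):
    table = (f"CREATE TABLE {entity.lower()} (id INT PRIMARY KEY, "
             + ", ".join(f"{a} TEXT" for a in attributes) + ")")
    axiom = f":{entity} rdf:type owl:Class ."
    mapping = (f":{entity.lower()}/{{id}} a :{entity} .",                    # target
               f"SELECT id, {', '.join(attributes)} FROM {entity.lower()}")  # source
    return table, axiom, mapping

table, axiom, mapping = bootstrap("Municipality", ["name_i", "name_d"])
```

Since the induced DB schema and ontology mirror each other, the mapping is a one-liner per entity, which is precisely why this scenario is the least problematic one.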
\endinput
\section{Analysis of Scenarios}
\label{sec:scenarios}
In this section we look at a number of VKG\xspace scenarios in order to understand how patterns occur in practice, and with which frequency. To this end, we have gathered 6 different scenarios, coming either from the literature on VKGs\xspace or from actual real-world applications. Table~\ref{t:scenarios} shows the results of our analysis: for each pattern/scenario cell, it reports the number of applications of that pattern over that scenario (leftmost number in the cell) and the number of mappings involved (rightmost number in the cell). The last column in the table reports total numbers. We have manually classified a total of $1559$ mapping assertions, falling in $407$ applications of the described patterns. Of these applications, about $52.8\%$ involve schema-driven patterns, $44.7\%$ data-driven patterns, and $2.5\%$ patterns falling outside of our categorization. In the remainder of this section we describe the detailed results for each scenario.
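The aggregate figures above can be recomputed from the per-pattern application totals in the last column of Table~\ref{t:scenarios} (leftmost number of each cell); treating the clustering patterns as data-driven is our reading of the grouping, not stated explicitly in the table.

```python
# Cross-check of the quoted aggregates against the per-pattern totals.
schema_driven = {"SE": 108, "SR": 5, "SRm": 97, "SRR": 2, "SH": 3}
data_driven = {"DE": 7, "DRm": 63, "DH": 5, "DRR": 2, "DR1Nm": 33,
               "DR11m": 11, "DH0N": 1,
               # assumption: clustering patterns counted as data-driven
               "CE2C": 23, "CE2D": 23, "CE2O": 13, "CR2O": 1}
unknown = {"UNKNOWN": 10}

total = (sum(schema_driven.values()) + sum(data_driven.values())
         + sum(unknown.values()))

def pct(group):
    """Share of all pattern applications in this group, in percent."""
    return round(100 * sum(group.values()) / total, 1)

print(total, pct(schema_driven), pct(data_driven), pct(unknown))
# -> 407 52.8 44.7 2.5
```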
\mypar{Berlin SPARQL Benchmark (BSBM\xspace)~\cite{BiSc09}}
This scenario is built around an e-commerce use case in which products are offered by vendors and consumers review them. This benchmark does not natively come with mappings, but these have been created in different works belonging to the VKG\xspace literature; we analyzed those in~\cite{SWJ-2019}. The ontology in BSBM\xspace reflects quite precisely the actual organization of data in the DB. Due to this, each mapping falls into one of the patterns we identified. Notably, in the DB foreign key constraints are not specified. Therefore, we notice a number of applications of data-driven patterns, which cannot be captured by simple approaches based on W3C-DM\xspace.
\mypar{NPD Benchmark (NPD\xspace)~\cite{LRXC15}}
This scenario is built around the domain of oil and gas extraction. It presents the highest number of mappings ($>$1k). The majority of these were automatically generated, and fall under W3C-DM\xspace or schema-driven patterns. There are, however, numerous exceptions.
Most notably, a few denormalized tables require the use of the \pat{DR1Nm} pattern, as in the following mapping:
\begin{lstlisting}
mappingId Mapping:00877:Table:Extra:ex5:npdv:Quadrant
target npd:quadrant/{wlbNamePart1} a npdv:Quadrant .
source SELECT "wlbNamePart1" FROM "wellbore_development_all"
\end{lstlisting}
A quadrant is not an entity in the DB schema (because \ex{wlbNamePart1} is not a key
of \ex{wellbore\_development\_all}), but it is represented as a class in the ontology. Moreover, quadrants have their own data and object properties, triggering the application of other patterns in composition with \pat{DR1Nm}.
\mypar{University Ontology Benchmark (UOBM)~\cite{uobm}}
This scenario is built around the academic domain. This benchmark provides a tool to automatically generate OWL ontologies, but includes neither mappings nor a DB instance. These two have been manually crafted in~\cite{BCSS*16}, by reverse-engineering the ontology. The mappings in this setting are quite interesting, and are mostly data-driven, as witnessed by the many applications of the clustering patterns. One critical aspect of these mappings is the use of a sophisticated version of the identifier alignment pattern modifier. Specifically, the table \ex{People} has the following primary key:
\begin{lstlisting}
PRIMARY KEY (ID,deptID,uniID,role)
\end{lstlisting}
Table \ex{GraduateStudent}, which at the conceptual level corresponds to a subclass of the class \ex{People}, has the following key, which is incompatible with that of the superclass:
\begin{lstlisting}
PRIMARY KEY (studentID,deptID,uniID)
\end{lstlisting}
The subclass relation between \ex{People} and \ex{GraduateStudents} requires the two keys to be aligned. This is done ``artificially'', in the sense that the missing field \ex{role} is created on-the-fly by the mapping:
\begin{lstlisting}
mappingId Graduate Student
target <http://www.Dept{deptID}.Univ{univID}.edu/{role}{studID}> a :GraduateStudent .
source SELECT deptID, univID, studID, 'GraduateStudent' as role FROM GraduateStudents
\end{lstlisting}
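To see why the trick works, note that once \ex{role} is synthesized, rows of \ex{GraduateStudents} instantiate the very same four-field IRI template used for \ex{People}. A hypothetical sketch follows; the helper name and the sample field values are invented for illustration.

```python
# IRI template of the mapping above; helper name and values are invented.
def person_iri(dept_id, univ_id, role, local_id):
    return f"http://www.Dept{dept_id}.Univ{univ_id}.edu/{role}{local_id}"

# A GraduateStudents row, with role = 'GraduateStudent' invented on the fly
# by the SQL query, yields an IRI aligned with the People template:
iri = person_iri(3, 1, "GraduateStudent", 42)
# -> http://www.Dept3.Univ1.edu/GraduateStudent42
```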
\mypar{Suedtirol OpenData (ST-OD\xspace)\footnote{\url{https://github.com/dinglinfang/suedTirolOpenDataOBDA}}} This is an application scenario coming from the tourism domain. The ontology has been created independently from the DB. Moreover, the DB is itself highly denormalized, since it is essentially a relational rendering of a JSON file. These aspects have a direct impact on the patterns we observed. In particular, we identified several applications of the \pat{DR11m} pattern, which, as we discussed, poses a huge challenge to the automatic generation of mappings. Further complications arise from a number of applications of the value invention pattern modifier, which appears quite often in the form, for instance, of language tags:
\begin{lstlisting}
mappingId municipality
target :mun/mun={istat_code} a :Municipality ; rdfs:label {name_i}@it, {name_d}@de .
source SELECT istat_code, name_i, name_d FROM municipalities
\end{lstlisting}
\mypar{Open Data Hub VKG (ODH\xspace)\footnote{\url{https://sparql.opendatahub.bz.it/}}} This setting is the one behind the SPARQL endpoint located at the Open Data Hub portal from the Province of Bozen-Bolzano (Italy). This setting is also a denormalized one, and the same considerations we made for ST-OD\xspace apply to this setting as well.
\mypar{Cordis\footnote{\url{https://www.sirisacademic.com/wb/}}}
This setting is provided by SIRIS Academic S.L., a consultancy company specialized in higher education and research, and is designed around the domain of competitive research projects. As opposed to the previous two scenarios, this one comes with a well-structured relational schema, which is reflected in a number of applications of schema-driven patterns. Although in this scenario we have DB views, such views have explicit constraints defined on them (such as \texttt{UNIQUE} constraints in SQL) that allow for the application of schema-driven patterns.
\begin{table}[t!]
\caption{\label{t:scenarios}Occurrences of Mapping Patterns over the Considered Scenarios.}
\centering
\resizebox{.73\textwidth}{!}{%
\setlength{\tabcolsep}{5.5pt}
\begin{tabular}{l|rrrrrrrrrrrrrr}
\toprule
& \multicolumn{2}{c}{BSBM\xspace} & \multicolumn{2}{c}{NPD\xspace} & \multicolumn{2}{c}{UOBM\xspace} & \multicolumn{2}{c}{ODH\xspace} & \multicolumn{2}{c}{ST-OD\xspace} & \multicolumn{2}{c}{Cordis\xspace} & \multicolumn{2}{c}{Total} \\
& & & & & & & & & & & & & & \\
\midrule
\pat{SE} & 8 & 52 & 61 & 454 & 8 & 16 & 10 & 43 & 8 & 37 & 13 & 60 & 108 & 662 \\
\pat{SR} & -- & -- & -- & -- & 2 & 2 & -- & -- & -- & -- & 3 & 3 & 5 & 5 \\
\pat{SRm} & 8 & 8 & 74 & 74 & 5 & 5 & -- & -- & 7 & 7 & 10 & 10 & 97 & 97 \\
\pat{SRR} & -- & -- & 1 & 12 & -- & -- & -- & -- & -- & -- & 1 & 16 & 2 & 28 \\
\pat{SH} & -- & -- & 3 & 132 & -- & -- & -- & -- & -- & -- & -- & -- & 3 & 132 \\
\pat{DE} & -- & -- & -- & -- & -- & -- & -- & -- & 3 & 7 & 4 & 9 & 7 & 16 \\
\pat{DRm} & 5 & 5 & 17 & 17 & 36 & 36 & 2 & 2 & 1 & 1 & 2 & 2 & 63 & 63 \\
\pat{DH} & -- & -- & -- & -- & 5 & 9 & -- & -- & -- & -- & -- & -- & 5 & 9 \\
\pat{DRR} & 2 & 2 & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 2 & 2 \\
\pat{DR1Nm} & 4 & 4 & 19 & 54 & -- & -- & -- & -- & 9 & 15 & 1 & 1 & 33 & 74 \\
\pat{DR11m} & -- & -- & -- & -- & -- & -- & 6 & 78 & 5 & 14 & -- & -- & 11 & 92 \\
\pat{DH0N} & -- & -- & -- & -- & -- & -- & -- & -- & -- & -- & 1 & 2 & 1 & 2 \\
\pat{CE2C} & -- & -- & 11 & 82 & 6 & 19 & 5 & 23 & -- & -- & 1 & 12 & 23 & 136 \\
\pat{CE2D} & -- & -- & 23 & 49 & -- & -- & -- & -- & -- & -- & -- & -- & 23 & 49 \\
\pat{CE2O} & -- & -- & 13 & 148 & -- & -- & -- & -- & -- & -- & -- & -- & 13 & 148 \\
\pat{CR2O} & -- & -- & -- & -- & 1 & 12 & -- & -- & -- & -- & 1 & 1 & 1 & 12 \\
\pat{UNKNOWN} & -- & -- & 3 & 6 & 1 & 1 & 1 & 4 & 1 & 12 & 4 & 9 & 10 & 32 \\
\bottomrule
\end{tabular}
}
\end{table}
\endinput
\section{Related Work}
\label{sec:related}
In the last two decades a plethora of tools and approaches have been developed
to bootstrap an ontology and mappings from a DB. The approaches in the
literature differ in terms of the overall \emph{purposes} of the bootstrapping
(e.g., OBDA, data integration, ontology learning, check of DB schema
constraints using ontology reasoning), the \emph{ontology and mapping
languages} in place (e.g., OWL 2 profiles or RDFS, as ontology languages, and
R2RML or custom languages, for the specification of mappings), the different
focus on \emph{direct and/or complex mappings}, and the assumed \emph{level of
automation}. The majority of the most recent approaches closely follow W3C-DM\xspace,
deriving ontologies that mirror the structure of the input DB.
Our work is no exception, in the sense that many of the patterns discussed here subsume the W3C-DM\xspace recommendation. The exceptions are those bits where the recommendation itself differs from R2RML (e.g., on the treatment of blank nodes as object identifiers).
The work in~\cite{sequeda2012} is very closely related to ours, as it also introduces a catalog of mapping patterns. However, there are major differences between that work and the present one, namely, in that work:
\begin{compactitem}
\item patterns are not formalized, and are presented in a ``by-example'' fashion following the \texttt{R2RML} syntax;
\item patterns are derived from ``commonly-occurring mapping problems'' based on the experience of the authors, whereas in this work patterns are derived from conceptual modeling and database design principles;
\item patterns are not evaluated against a number of different, complex real-world scenarios spanning heterogeneous domains and design practices, as was done here.
\end{compactitem}
To the best of our knowledge, there are no other works whose main focus is a systematic categorization of mappings in VKGs\xspace. Indeed, \cite{mirror} and~\cite{JKZH*15} provide such a categorization, but it is aimed at supporting the bootstrapping of mappings. In addition, the vast majority of the scientific contributions restrict their attention to the algorithms behind the generation of mappings, notably~\cite{pinkel2013incmap,CCKK*17,sequeda2015ultrawrap} for the R2RML language. Another notable difference between our work and mapping bootstrappers is that we provide foundations for tasks other than bootstrapping, as discussed in Section~\ref{sec:discussion}.
For the sake of completeness w.r.t.\ the existing bootstrapping approaches, we mention here some of the most prominent tools that have been recently implemented. Unfortunately, we have not been able to find, in their related literature, an explicit description of the mappings generated by the tools, and this prevented us from a deeper comparison between the mapping patterns introduced here and these approaches. In \textbf{Karma}~\cite{gupta2012karma}, (ontology) learning techniques are used to mine the source data. In the schema matching literature, simple rule-based mappings are used to create a uniform representation of the data sources to be matched, be they schemata or ontologies~\cite{aumueller2005schema,gal2004ontobuilder,pinkel2013incmap}. For example, in \textbf{COMA++}~\cite{aumueller2005schema}, class hierarchies, attributes, and relationship types are mapped into a generic model based on directed acyclic graphs. Using such mappings, schema matchers are applied to the uniform model to create a matching result. Similarly, \textbf{IncMap}~\cite{pinkel2013incmap} relies on a graph structure called IncGraph to represent schema elements from both ontologies and relational schemata in a unified way. The tools that today are by far the most popular are those based on W3C-DM\xspace, which leave the user to manually refine the extracted outputs, e.g., the \textbf{D2RQ} system~\cite{bizer2004d2rq}. In this category we also find commercial tools, notably \textbf{Ultrawrap}~\cite{sequeda2015ultrawrap} in the context of data integration.
We point also to a selection of surveys~\cite{sequeda2011survey,spanos2012bringing,Pinkel2016RODIBR} with further information about the tools and techniques mentioned here, and their performance evaluations.
We finally notice that, in our review, we did not find any study providing an in-depth analysis of existing real-world DB-to-ontology mapping scenarios, as we do in the present paper, aimed at showing that the identified categories actually reflect the real design choices and methodologies in use by mapping designers.
\endinput
\section{Conclusion and Future Work}
\label{sec:conclusions}
In this work we propose to use a number of mapping patterns to facilitate the task of linking DBs to ontologies in a typical VKG\xspace setting. We argue that such patterns enable a number of relevant tasks, beyond the classic one of bootstrapping mappings in an incomplete VKG\xspace scenario. Through a systematic analysis of various VKG\xspace scenarios, ranging from benchmarks to real-world and denormalized ones, we observed that the patterns we formalized occur in practice, and capture most cases.
This work is only a first step, with respect to both the categorization of patterns and their actual use. Regarding the former, we plan to better explore the interaction between patterns and pattern modifiers, such as value invention or identifier alignment. Regarding the latter, in this paper we have used patterns to investigate, and highlight, the specific problems to address when setting up a VKG\xspace scenario. We plan to investigate solutions to these problems, by exploiting approaches from other fields, e.g., schema matching.
\endinput
The study of quantum gravity using a holographic formulation requires
understanding how the bulk gravitational physics is described by the boundary gauge theory. Given the ubiquitous nature of Wilson loop
operators in gauge theory, it is of interest to understand how Wilson loop operators in a gauge theory participating in the holographic formulation of a bulk theory are encoded in the corresponding bulk description. Since gauge theories can be formulated using Wilson loops as the basic variables, an improved understanding of holography may arise from identifying these variables in the bulk theory.
This program has recently been completed in \cite{Gomis:2006sb} for
all half-BPS Wilson loop operators in ${\cal N}=4$ SYM (for previous
work see \cite{Drukker:2005kx,Yamaguchi:2006te}). This extends the
bulk description of a Wilson loop in the fundamental representation
in terms of a string world-sheet \cite{Rey:1998ik,Maldacena:1998im}
to all other representations. It was found that for each Wilson
loop, there is a bulk description either in terms of a configuration
of D5-branes or alternatively in terms of a configuration of
D3-branes in AdS$_5\times$S$^5$ (see \cite{Gomis:2006sb} for
details). The equations determining the supergravity background
produced by these D-branes were found in
\cite{Yamaguchi:2006te,Lunin:2006xr} (see also \cite{Gomis:2006cu}
for the closely related equations for half-BPS domain wall
operators), generalizing the supergravity solutions of LLM
\cite{Lin:2004nb} dual to half-BPS local operators to half-BPS
Wilson loop operators. All these asymptotically AdS$_5\times$ S$^5$
solutions encode in their nontrivial topology the information about
the dual operator. For other work see e.g.
\cite{Yamaguchi:2006tq,Hartnoll:2006hr,Okuyama:2006jc,Hartnoll:2006is,Giombi:2006de,Tai:2006bt,Gomis:2006im}.
A very interesting and calculable example of a holographic
correspondence is the one found by Gopakumar and Vafa
\cite{Gopakumar:1998ki} in topological string theory\footnote{See
\cite{Neitzke:2004ni} for an overview of results and the book
\cite{Marino:2005sj} by Mari\~no for a comprehensive introduction.}.
The duality states that the A-model topological string theory on the
deformed conifold $T^*{\rm S}^3$ in the presence of $N$ D-branes
wrapping ${\rm S}^3$ is dual to the A-model topological string
theory on the resolved conifold geometry, with the complexified
K\"{a}hler modulus given by $t=g_s N$, where $g_s$ is the string
coupling constant on both sides of the correspondence. As shown by
Witten \cite{Witten:1992fb}, the physics on the deformed conifold
is described by $U(N)$ Chern-Simons theory on ${\rm S}^3$.
The goal of this paper is to give the bulk description of Wilson
loop operators in Chern-Simons theory\footnote{As suggested in
\cite{Gopakumar:1998ki}, the Wilson loops can be described by string
world-sheets in the resolved conifold. We elaborate on this
description in Appendix \ref{F1}.}. As we shall see, a picture
closely related to the one for ${\cal N}=4$ SYM emerges, with the
added benefit that we have control over all quantum corrections for
the case of Chern-Simons theory. We find that Wilson loops in
Chern-Simons can be described either in terms of a configuration of
D-branes or a configuration of anti-branes in the resolved conifold
geometry. Moreover, by letting the branes undergo a geometric
transition, we find that Wilson loops can be described in terms of
bubbling Calabi-Yau geometries, with rich topology.
We first identify the holographic description of Wilson loop operators
in Chern-Simons theory on ${\rm S}^3$ -- labeled by a knot in ${\rm S}^3$ and a representation of $U(N)$ --
in terms of branes in the resolved conifold geometry.
Just like in \cite{Gomis:2006sb}, we find that there are two different brane configurations
corresponding to each Wilson loop operator.
The information about the knot $\alpha\subset {\rm S}^3$ is encoded in the choice of a Lagrangian submanifold $L$
in the resolved conifold geometry, as explained by Ooguri and Vafa
in \cite{Ooguri:1999bv} (see also e.g. \cite{Labastida:2000zp,Labastida:2000yw,Marino:2001re}).
$L$ ends on the knot $\alpha$ on the ${\rm S}^3$ at asymptotic infinity of the resolved conifold geometry,
where the holographic dual Chern-Simons theory lives\footnote{
Taubes \cite{Taubes:2001wk} proposed a procedure to construct the Lagrangian submanifolds
corresponding to the knots that are invariant under the antipodal map of ${\rm S}^3$.
The construction of the Lagrangians corresponding to arbitrary knots was achieved by Koshkin \cite{Koshkin}.
}.
We show that the information about the representation $R$ of the
Wilson loop operator -- given by a Young tableau (see Figure
\ref{young-param-1}) -- is encoded in the eigenvalues\footnote{
Geometrically, these eigenvalues correspond to the positions of the
branes up to Hamiltonian deformations (see Appendix
\ref{target-theory}).} of the holonomy matrix obtained by
integrating the gauge connection on the branes wrapping $L$ --
denoted by ${\cal A}$ -- around the non-contractible
$\beta$-cycle\footnote{ $L$ has the topology of a solid torus with a
boundary at infinity given by a $T^2$ with canonical $\alpha$ and
$\beta$ one cycles. The $\beta$ cycle is non-contractible in $L$
while $\alpha$ may or may not be contractible.} of the Lagrangian
submanifold $L$.
\begin{figure}[htbp]
\begin{center}
\psfrag{1}{$R_1$}
\psfrag{2}{$R_2$}
\psfrag{P}{$R_P$}
\psfrag{T1}{$R^T_1$}
\psfrag{T2}{$R^T_2$}
\psfrag{TM}{$R^T_M$}
\includegraphics[width=40mm]{young-diagram_7.eps}
\end{center}
\caption{A Young tableau with $P\leq N$ rows and $M$ columns labeling a representation of $U(N)$. $R_i$ is the number of boxes in the $i$-th row and satisfies $R_i\geq R_{i+1}$. $R^T$ is the tableau conjugate to $R$, obtained by exchanging rows with columns.}
\label{young-param-1}
\end{figure}
We show that a Wilson loop labeled by a knot $\alpha$ and a
representation $R$ given by Figure \ref{young-param-1} is described
either by a configuration of $M$ D-branes
$(L_{x_1},L_{x_{2}},\ldots,L_{x_M})$ or by a configuration of
$P$ anti-branes $({\bar L}_{y_1},{\bar L}_{y_{2}},\ldots,{\bar
L}_{y_P})$, where $L_{x_i}/{\bar L}_{y_i}$ denotes a
D-brane/anti-brane with world-volume $L$ and with a non-trivial
holonomy, shifted by the integral of the K\"{a}hler form ${\mathcal J}$,
\begin{eqnarray}
&&x_i\equiv \oint_{\beta=\partial D} {\cal A}_i+\int_D {\mathcal J}=g_s\left(R^T_i-i+M+\frac{1}{2}\right),~
i=1,\dots,M,\\
&& y_i\equiv \oint_{\beta=\partial D} {\cal A}_i+\int_D
{\mathcal J}=g_s\left(R_i-i+P+\frac{1}{2}\right),~ i=1,\dots,P, \end{eqnarray}
respectively\footnote{${\mathcal J}$ is integrated over a disk $D$. $D$
is a relative cycle in the resolved conifold and has $\beta\subset
L$ as its boundary.}.
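For concreteness, consider the simplest nontrivial example (a worked evaluation we add purely as an illustration): the symmetric representation with two boxes, $R=(2)$, for which $R^T=(1,1)$, $M=2$ and $P=1$. The formulas above give
\begin{eqnarray}
x_1=g_s\left(1-1+2+\frac{1}{2}\right)=\frac{5}{2}g_s,\quad
x_2=g_s\left(1-2+2+\frac{1}{2}\right)=\frac{3}{2}g_s,\quad
y_1=g_s\left(2-1+1+\frac{1}{2}\right)=\frac{5}{2}g_s,\nonumber
\end{eqnarray}
so the same Wilson loop is described either by two D-branes at $x_{1,2}$ or by a single anti-brane at $y_1$.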
For the case of the simplest knot -- the unknot -- we explicitly
check this identification. We show that this Wilson loop operator in
a representation given by Figure \ref{young-param-1} corresponds to
the configuration of $M$ D-branes $(L_{x_1},L_{x_2},\ldots,L_{x_M})$
\begin{figure}[h]
\begin{center}
\psfrag{x1}{$x_1$}
\psfrag{x2}{$x_2$}
\psfrag{xM}{$x_M$}
\includegraphics[width=60mm]{brane-inner_2.eps}
\end{center}
\caption{Brane configuration in resolved conifold describing Wilson loop in a representation given by Figure \ref{young-param-1}.}
\label{brane-inner_2}
\end{figure}
\newline
\noindent where $L_{x_i}$ denotes a D-brane wrapping a Lagrangian
submanifold $L$ sitting at the position $x_i=g_s(R^T_i-i+M+1/2)$ on
the inner edge of the toric diagram of the resolved conifold. In
particular, a single D-brane on the inner edge corresponds to a
Wilson loop in the antisymmetric representation. Since the effective
size of the inner edge of the resolved conifold is
$t_{eff}=g_s(N+1)$, we recover (from $x_1\leq t_{eff}$) the group
theory bound $R^T_1\leq N$ for the number of boxes in a column
from the compactness of the ${\bf P}^1$ of the resolved conifold.
The same Wilson loop operator can also be described by a
configuration of $P$ anti-branes
$(\bar{L}_{y_1},\bar{L}_{y_2},\ldots,\bar{L}_{y_P})$
\begin{figure}[htbp]
\begin{center}
\psfrag{y1}{$y_1$}
\psfrag{y2}{$y_2$}
\psfrag{yP}{$y_P$}
\includegraphics[width=40mm]{brane-outer_2.eps}
\end{center}
\caption{Brane configuration in resolved conifold describing Wilson loop in a representation given by Figure \ref{young-param-1}.}
\label{brane-outer_2}
\end{figure}
\newline
\noindent where $\bar{L}_{y_i}$ denotes an anti-brane wrapping a
Lagrangian submanifold $L$ sitting at the position
$y_i=g_s(R_i-i+P+1/2)$ on an outer edge of the toric diagram. In
particular, a single D-brane on the outer edge corresponds to a
Wilson loop in the symmetric representation. Since the outer edge is
non-compact, $R_1$ can be an arbitrarily large integer, as expected
from group theory. The bound on the number of anti-branes $P\leq N$
can also be understood geometrically. Once we put $P$ anti-branes in
the resolved conifold, the effective size of the space is
$t_{eff}=g_s(N-P)$, thus recovering the group theory bound $P\leq
N$.
These results are obtained by computing the A-model topological string partition function in the presence of these branes and showing that it agrees with the Chern-Simons results of Witten \cite{Witten:1992fb}.
In the crystal melting description \cite{Okounkov:2003sp,Okuda:2004mb} of these amplitudes, the Young tableau of the dual Wilson loop has a nice geometrical realization in the crystal. The corners produced in the crystal by the insertion of branes \cite{Saulina:2004da,Okuda:2004mb} have exactly the shape of the corresponding Young tableau!
We also find a purely geometric description of the unknot Wilson loop operators in terms of Calabi-Yau geometries that are asymptotic to the resolved conifold. We show that the backreaction of the brane configurations in
Figures \ref{brane-inner_2} and \ref{brane-outer_2} can be taken into account exactly, giving rise to a Calabi-Yau geometry without D-branes! The two different D-brane configurations in Figures \ref{brane-inner_2} and \ref{brane-outer_2} yield the same Calabi-Yau geometry, and the information about the representation $R$ of the Wilson loop is now encoded in the topology of the corresponding Calabi-Yau. Physically, the appearance of this rich class of Calabi-Yau geometries can be understood as arising from a new class of geometric transitions, where non-compact Lagrangian D-branes/anti-branes are replaced by non-trivial topologies supporting the ``flux'' produced by the branes.
The computation of the A-model topological string amplitude on these ``bubbling" Calabi-Yau geometries, obtained by letting the branes undergo a geometric transition, exactly reproduces the result for the expectation value of the corresponding Wilson loop obtained by Witten \cite{Witten:1992fb}. Therefore, these bubbling Calabi-Yau geometries
provide a novel representation of knot invariants of Chern-Simons theory.
The topology of the Calabi-Yau geometry corresponding to a Wilson loop is best understood by giving the following parametrization of a Young tableau\footnote{Informally, $l_{odd}$ is the number of rows
in the tableau with the same number of boxes while $l_{even}$ is the number of columns in the tableau with the same number of boxes.}
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\psfrag{N}{$N$}
\psfrag{l1}{$l_1$}
\psfrag{l2}{$l_2$}
\psfrag{l2m-1}{$l_{2m-1}$}
\psfrag{l2m}{$l_{2m}$}
\psfrag{l2m+1}{$l_{2m+1}$}
\includegraphics[width=70mm]{young-diagram_5.eps}&
\psfrag{t1}{$t_1$}
\psfrag{t2}{$t_2$}
\psfrag{t3}{$t_3$}
\psfrag{t2m}{$t_{2m}$}
\psfrag{t2m+1}{$t_{2m+1}$}
\includegraphics[width=70mm]{bubbling-geometry_2.eps}\\
(a)&(b)
\end{tabular}
\caption{
The correspondence of the Wilson loop in representation $R$ and the bubbling
geometry.
In (a) the Young tableau $R$, shown rotated, is specified by the lengths $l_i$ of all the
edges. $l_{2m+1}$ is $N$ minus the number of rows.
Equivalently, $l_i$ are the lengths of black and white regions in
the Maya diagram.
The Wilson loop in representation $R$
(a) is equivalent to the toric Calabi-Yau manifold given by the web
diagram (b).
There are a total of $2m+1$ bubbles of ${\bf P}^1$ in the geometry.
The sizes of the ${\bf P}^1$'s are given by $t_i=g_sl_i,~i=1,...,2m+1$.
}
\label{bubbling-geometry_2}
\end{figure}
The Calabi-Yau geometry dual to a Wilson loop labeled by a Young tableau given in Figure \ref{bubbling-geometry_2}(a) is
given by the toric web diagram shown in Figure \ref{bubbling-geometry_2}(b).
The corresponding Calabi-Yau geometry has $2m+1$ nontrivial ${\bf P}^1$'s. The associated complexified K\"{a}hler moduli are given by $t_i=g_sl_i$ for $i=1,\ldots, 2m+1$, where $l_i$ are the integers in Figure \ref{bubbling-geometry_2}(a) parametrizing the Young tableau.
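As a simple illustration (added here; the edge-length assignment follows the informal description in the footnote above), consider a rectangular tableau with $P$ rows of $Q$ boxes each. Reading off the edge lengths in Figure \ref{bubbling-geometry_2}(a) gives $m=1$ with $l_1=P$, $l_2=Q$ and $l_3=N-P$, so the dual geometry contains three ${\bf P}^1$'s with
\begin{eqnarray}
t_1=g_sP,\qquad t_2=g_sQ,\qquad t_3=g_s(N-P).\nonumber
\end{eqnarray}
For the trivial representation ($P=Q=0$) a single ${\bf P}^1$ of size $g_sN$ survives and the resolved conifold is recovered.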
Similarity between bubbling AdS geometries and toric geometries was noted in \cite{Lin:2004nb}.
Although the explicit quantitative checks we make are all based on
the unknot, our physical arguments apply to arbitrary knots and
links. Indeed it is possible to write the closed Gopakumar-Vafa (GV)
invariants \cite{Gopakumar:1998ki} associated to the bubbling Calabi-Yau
corresponding to an arbitrary knot in terms of the
open GV invariants \cite{Ooguri:1999bv,Labastida:2000yw} associated to the
Lagrangian branes giving rise to the bubbling Calabi-Yau after geometric transition. We plan to
report on this relation and on more general geometric transitions in
the forthcoming paper \cite{geom-trans}.
The extent of the analogy between AdS/CFT and the GV duality
is striking.
We hope that the very explicit realization of the holographic
correspondence found in this paper -- including all quantum corrections -- provides hints for a deeper understanding of
the AdS/CFT correspondence and of holography more generally.
The plan of the rest of the paper is as follows. In section $2$ we
integrate out the degrees of freedom introduced by a configuration
of non-compact branes intersecting ${\rm S}^3$ along a knot $\alpha$
and show that the net effect is to insert a Wilson loop operator for
$U(N)$ Chern-Simons theory on ${\rm S}^3$. In section $3$ we
identify the brane configurations in the resolved conifold dual to a
Wilson loop operator. The identification is explicitly checked for
the case of the unknot. In section $4$ we show that the Wilson loop
operators can be related to the closed string partition function of
bubbling Calabi-Yau's, which are interpreted as arising via a
geometric transition of the brane configuration in section $3$.
Appendix \ref{F1} revisits the description of Wilson loops in terms
of string world-sheets. Appendix \ref{crystal-section} reviews the
melting crystal models found in \cite{Okuda:2004mb}. The explicit
computations for the unknot, which support our proposal in this
paper, are performed using the crystal models. Appendix
\ref{target-theory} explains relevant details of the target space
theory of open topological strings.
\section{Wilson loops as branes in the deformed conifold}
In this section we find a brane configuration in $T^*{\rm S}^3$ whose effective description is given by a Wilson loop operator of Chern-Simons theory on ${\rm S}^3$. The basic idea is to
add extra non-compact branes beyond the $N$ D-branes wrapping the ${\rm S}^3$ that support the $U(N)$ Chern-Simons theory participating in the large $N$ duality. We show that integrating out the degrees of freedom introduced by the extra branes has the effect of inserting into the
$U(N)$ Chern-Simons path integral on ${\rm S}^3$ a Wilson loop operator in a particular representation $R$ of $U(N)$ determined by data of the brane configuration. Having identified the brane configuration in $T^*{\rm S}^3$ corresponding to a Wilson loop, in the next section we find the holographic description of Wilson loops in terms of branes in the resolved conifold.
A Wilson loop operator of Chern-Simons theory on ${\rm S}^3$ is labeled by a representation $R$ of $U(N)$ and by an embedded oriented circle $\alpha$ -- also known as a knot -- in ${\rm S}^3$.
It is given by
\begin{eqnarray}
W_R(\alpha)=\hbox{Tr}_R P \exp\oint_\alpha A,
\label{wilsonopp}
\end{eqnarray}
where $P$ is the path ordering operator.
A microscopic description of a Wilson loop operator can be given by adding new degrees of freedom localized on the knot on which the operator is supported. In the string theory realization, the new localized degrees of freedom arise microscopically by quantizing open strings between the D-branes wrapping ${\rm S}^3$ and new branes intersecting the ${\rm S}^3$ along the knot $\alpha$.
As shown by \cite{Ooguri:1999bv}, a Lagrangian submanifold $L$ in $T^*{\rm S}^3$
can be found such that it intersects ${\rm S}^3$ along an arbitrary knot $\alpha$. Given a knot $\alpha$ in ${\rm S}^3$ parametrized by $q^i(s)$, where $q^i$ are coordinates on ${\rm S}^3$, then $L$ is determined by the following non-compact Lagrangian submanifold\footnote{We recall that the K\"{a}hler form of $T^*{\rm S}^3$ is given by $\omega=\sum_{i=1}^3dp_i\wedge dq^i$.}
\begin{eqnarray}
L=\{(q^i,p_i) | \sum_{i=1}^3 p_i \frac{dq^i}{ds}=0\},
\label{lagrangian}
\end{eqnarray}
where $p_i$ are coordinates in the fiber of $T^*{\rm S}^3$. The topology of $L$ is ${\rm R}^2\times {\rm S}^1$ or more precisely that of a solid torus. The brane is non-compact and has a boundary at asymptotic infinity. The boundary is a $T^2$ with canonical $\alpha$ and $\beta$ one cycles.
In $L$,
the $\alpha$-cycle of the $T^2$ is non-contractible while the $\beta$-cycle of the $T^2$ is contractible\footnote{As we will see in the next section, the roles of $\alpha$ and $\beta$ are reversed in the corresponding Lagrangian submanifold $L$ in the resolved conifold;
$\beta$ being non-contractible in the corresponding $L$.}.
Therefore, we want to consider the gauge theory living on the brane configuration in $T^*{\rm S}^3$ given by $N$ D-branes wrapping ${\rm S}^3$ intersecting $M$ branes wrapping $L$. The action is given by \cite{Ooguri:1999bv}
\begin{eqnarray}
S=S_{CS}(A)+S_{CS}({\cal A})+\oint_\alpha \chi^\dagger (d+A+{\cal A})\chi,
\label{path}
\end{eqnarray}
where
\begin{eqnarray}
S_{CS}(A)=\frac{1}{g_s}\int_{X} \hbox{Tr}\left(A\wedge dA+\frac{2}{3} A\wedge A \wedge A \right),\cr
\end{eqnarray}
and $A$, ${\cal A}$ are the connections on $X={\rm S}^3$ and $X=L$ respectively. $\chi$ are bifundamental fields localized along the knot $\alpha$ arising from the quantization of open strings with one end on ${\rm S}^3$ and the other one on $L$. Depending on whether we wrap a D-brane or an anti-brane\footnote{In topological string theory an anti-brane is exactly the same \cite{Vafa:2001qf,Okuda:2006fb} as a ghost brane, obtained by reversing the sign of the D-brane boundary state. That's why the statistics of the open string fields is reversed when open strings are stretched between a D-brane and an anti-brane as compared to when they are stretched between two D-branes. } on $L$, $\chi$ is either a fermionic or bosonic field.
\subsection{Wilson loops as D-branes}
If D-branes wrap $L$ then $\chi$ are fermions. Integrating out $\chi$ is straightforward,
it inserts the so called Ooguri-Vafa operator \cite{Ooguri:1999bv}\footnote{
The path integral over $\chi$ reduces to that of free fermions.
This formula can then be derived by
noting that
$\prod_{i=1}^N\prod_{I=1}^M(1+ x_iy_I)=\sum_{Q} {\rm Tr}_{Q^T}X{\rm Tr}_QY$, where $X={\rm diag}(x_i),~Y={\rm diag}(y_I)$.}
\begin{eqnarray}
\sum_{Q}\hbox{Tr}_{Q^T} P \exp\oint_\alpha A\cdot \hbox{Tr}_{Q} P \exp\oint_\alpha {\cal A},
\end{eqnarray}
where $Q^T$ is the Young tableau conjugate to $Q$. We now want to integrate ${\cal A}$ out\footnote{In this analysis we omit the path integral over $A$, which is to be done at the end. In \cite{Ooguri:1999bv} the path integral over $A$ was performed while the integral over ${\cal A}$ wasn't.}
\begin{eqnarray}
e^{iS_{CS}(A)}\sum_{Q}\hbox{Tr}_{Q^T} P \exp\oint_\alpha A\cdot \int[D{\cal A}] e^{iS_{CS}({\cal A})} \hbox{Tr}_{Q} P \exp\oint_\alpha {\cal A}
\label{pathint}
\end{eqnarray}
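As an aside (not part of the original text), the character identity quoted in the footnote above, $\prod_{i=1}^N\prod_{I=1}^M(1+x_iy_I)=\sum_Q{\rm Tr}_{Q^T}X\,{\rm Tr}_QY$, can be checked symbolically at low rank, with ${\rm Tr}_Q$ the Schur polynomial $s_Q$. The following sketch verifies it for $N=M=2$, where the sum runs over the six partitions fitting in a $2\times2$ box:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

def schur2(lam, a, b):
    """Schur polynomial s_lam(a, b) in two variables (bialternant formula)."""
    l1, l2 = (tuple(lam) + (0, 0))[:2]
    num = a**(l1 + 1) * b**l2 - b**(l1 + 1) * a**l2
    return sp.cancel(num / (a - b))

# Partitions Q fitting inside a 2x2 box, paired with their conjugates Q^T.
pairs = [((), ()), ((1,), (1,)), ((2,), (1, 1)),
         ((1, 1), (2,)), ((2, 1), (2, 1)), ((2, 2), (2, 2))]

# Left-hand side: prod_{i,I} (1 + x_i y_I) for N = M = 2.
lhs = sp.expand((1 + x1*y1) * (1 + x1*y2) * (1 + x2*y1) * (1 + x2*y2))

# Right-hand side: sum_Q Tr_{Q^T} X Tr_Q Y = sum_Q s_{Q^T}(x) s_Q(y).
rhs = sp.expand(sum(schur2(qT, x1, x2) * schur2(q, y1, y2) for q, qT in pairs))

print(sp.simplify(lhs - rhs) == 0)  # True
```

Both sides agree as polynomials in $x_i,y_I$, in accordance with the dual Cauchy identity.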
In order to proceed we now briefly recall some well known facts about Chern-Simons theory. In the present case, Chern-Simons is defined on a solid torus with a boundary at infinity given by a $T^2$.
The path integral is a wave function $\langle \psi |$ for Chern-Simons theory on a $T^2$ and
depends on the boundary condition imposed at infinity.
The path integral in (\ref{pathint}) has the insertion of $\hbox{Tr}_{Q} P \exp\oint_\alpha {\cal A}$, that is a Wilson loop along the non-contractible cycle $\alpha$ of $T^2$. This creates a state of Chern-Simons theory on $T^2$ labeled by $|Q\rangle$. Therefore (\ref{pathint}) yields
\begin{eqnarray}
e^{iS_{CS}(A)}\sum_{Q}\hbox{Tr}_{Q^T} P \exp\oint_\alpha A \cdot
\langle\psi|Q\rangle. \label{pathint-brane}
\end{eqnarray}
The holonomies around the $\alpha$ and $\beta$ cycles of $T^2$ are
canonically conjugate \cite{Elitzur:1989nr}. The path integral is
defined by specifying the holonomy at infinity either around the
$\alpha$ or $\beta$ cycle. Roughly speaking, the holonomy around
$\alpha$ plays the role of the position operator while the holonomy
around the $\beta$ cycle plays the role of momentum. In the context
of Chern-Simons theory, momentum can be identified with the
highest weight vector of a representation $R$, shifted by the Weyl vector.
Since our aim is to find the Wilson loop in a particular
representation $R$ (\ref{wilsonopp}), we choose the boundary condition $\langle \psi |= \langle R^T|$. This means that we choose a boundary condition at infinity with non-trivial holonomy around the contractible cycle $\beta$ in $L$. Since the
states labeled by representations form an orthonormal basis, this
picks out the $Q=R^T$ term in (\ref{pathint-brane}). In other words,
imposing the boundary condition $\langle \psi|=\langle R^T|$ is
equivalent to focusing on the $Q=R^T$ term in (\ref{pathint-brane}).
We now give the physical interpretation of this boundary condition. In a nutshell, the holonomy around the contractible cycle $\beta$ measures fundamental string charge.
\medskip
\medskip
\medskip
\noindent {\it Open String Endpoints as Anyons }
\medskip
\medskip
\medskip
To understand the physical meaning of the boundary condition, let us
consider the effect of the Wilson loop ${\rm Tr}_{R}\exp\oint {\mathcal A}$ on
the non-compact branes.
Consider first the case when $M=1$, i.e. for a single brane on $L$.
The action of the brane in the presence of the Wilson loop (swept by an open string end point) is given by
\begin{eqnarray}
S=S_{CS}({\mathcal A})+k\oint_\alpha {\mathcal A}.
\end{eqnarray}
Therefore, the corresponding equations of motion are given by
\begin{eqnarray}
F_{z\bar z}=g_sk \delta^2(z),\label{vortex}
\end{eqnarray}
where $z$ is the complex coordinate parameterizing the ${\rm R}^2$ in $L$.
Physically, the endpoint of an open string behaves like an
anyon when viewed from the brane world-volume.
By integrating (\ref{vortex}) we conclude that having $k$ fundamental strings ending on $L$ introduces a non-trivial holonomy around the contractible cycle $\beta$ of the $T^2$ in $L$. The holonomy is given by
\begin{eqnarray}
\oint_\beta {\cal A} =g_s k.
\end{eqnarray}
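This integration step (spelled out here for illustration, suppressing the normalization of $\delta^2(z)$) is just Stokes' theorem applied to a large disk $D\subset{\rm R}^2$ with boundary $\beta$:
\begin{eqnarray}
\oint_\beta {\cal A}=\int_D F=g_sk\int_D \delta^2(z)=g_sk.\nonumber
\end{eqnarray}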
In the general case of $M$ arbitrary, the treatment of the
non-Abelian equations of motion is more subtle, as noted originally
by Witten \cite{Witten:1988hf}. Fortunately, we can borrow results
from \cite{Elitzur:1989nr}, where the insertion of the Wilson loop
in the representation $R^T$ was found to induce the holonomy \begin{eqnarray}
\oint_\beta {\mathcal A}_i=g_s\left(R^T_i-i+\frac{1}{2}
M+\frac{1}{2}\right),~i=1,2,...,M.
\label{holo} \end{eqnarray}
Therefore, the boundary condition $\langle \psi|=\langle R^T|$ is tantamount to introducing the holonomy given by (\ref{holo}) at infinity.
We denote by $(L_{ R^T_1},\ldots,L_{R^T_M})$ the
configuration of branes with this holonomy (\ref{holo}).
When we go through the transition to the resolved conifold, we will
see that there is a shift by $\frac{1}{2} g_s M$ in the effective
holonomy, coming from the backreaction of the geometry on the
branes. In Appendix C we show that the gauge invariant quantity is
given by the holonomy shifted by the integral of the K\"{a}hler form
over a disk ending on $\beta$, thus giving rise to the shift
by\footnote{The shift of the K\"{a}hler modulus due to the insertion of
branes is well known
\cite{Aganagic:2003db,Saulina:2004da,Okuda:2004mb} and it is given
by $g_sM$. Therefore, when the K\"{a}hler modulus is integrated over a
disk bounding $\beta$ -- as opposed to over ${\bf P}^1$ -- we get a
shift by $\frac{1}{2} g_s M$.
\label{footnote-shift}} $\frac{1}{2} g_s M$ in the resolved conifold.
Therefore, the D-brane configuration $(L_{ R^T_1},\ldots,L_{
R^T_M})$ we have considered in $T^*{\rm S}^3$ inserts a Wilson loop
operator along the knot $\alpha$
\begin{eqnarray}
e^{iS_{CS}(A)} \cdot \hbox{Tr}_{R} P \exp\oint_\alpha A.
\end{eqnarray}
Summarizing, we have derived the identification \begin{eqnarray} (L_{
R^T_1},\ldots,L_{ R^T_M})\leftrightarrow \hbox{Tr}_{R} P
\exp\oint_\alpha A. \end{eqnarray}
\subsection{Wilson loops as anti-branes}
In topological string theory an anti-brane has the interpretation as a ghost brane
\cite{Vafa:2001qf,Okuda:2006fb}, whose boundary state is minus that of a brane.
This sign changes the statistics of the open string fields between branes and anti-branes.
This is why the $\chi$ fields in the brane configuration
we have described in $T^*{\rm S}^3$ are bosonic when anti-branes wrap the Lagrangian submanifold $L$.
When $\chi$ are quantized as bosonic fields, integrating them out yields\footnote{
Now the path integral over $\chi$ reduces to that of free bosons. The final answer is obtained by noting that
$\prod_{i=1}^N\prod_{I=1}^M\frac{1}{1- x_iy_I}=\sum_{Q} {\rm Tr}_{Q}X{\rm Tr}_QY$.}
\begin{eqnarray}
\sum_{Q}\hbox{Tr}_{Q} P \exp\oint_\alpha A\cdot \hbox{Tr}_{Q} P \exp\oint_\alpha {\cal A}.
\end{eqnarray}
We can now easily calculate the path integral over ${\cal A}$
following our previous discussion for D-branes. If we denote by $({\bar L}_{ R_1},\ldots,{\bar L}_{ R_P})$
a configuration of $P$ anti-branes with
holonomy\footnote{As before,
there will be a shift by $\frac{1}{2} g_s P$ in the resolved conifold.}
\begin{eqnarray}
\oint_\beta {\mathcal A}_i=g_s\left(R_i-i+\frac{1}{2} P+\frac{1}{2}\right),~i=1,2,...,P,
\label{taku}
\end{eqnarray}
corresponding to the boundary condition $\langle \psi| =\langle R|$,
we are left with the insertion of a Wilson loop operator
along the knot $\alpha$ given by
\begin{eqnarray}
e^{iS_{CS}(A)} \cdot \hbox{Tr}_{R } P \exp\oint_\alpha A.
\end{eqnarray}
Therefore we arrive at the identification
\begin{eqnarray}
({\bar L}_{ R_1},\ldots,{\bar L}_{ R_P})\leftrightarrow \hbox{Tr}_{R} P \exp\oint_\alpha A.
\end{eqnarray}
We now study the bulk description of Wilson loops in Chern-Simons theory in terms of branes in the resolved conifold geometry.
\section{Wilson loops as branes in the resolved conifold} \label{branes-res}
In the previous section we have shown that a Wilson loop operator on
any knot $\alpha$ and for any representation $R$ can be obtained by
integrating out the physics of a D-brane configuration $(L_{
R^T_1},\ldots,L_{ R^T_M})$ or anti-brane configuration $(\bar{L}_{
R_1},\ldots,\bar{L}_{ R_P})$.
To obtain the resolved conifold description of a Wilson loop we follow the brane configuration through the conifold singularity.
It is possible to construct a Lagrangian submanifold $L$ explicitly for every knot
$\alpha$ in ${\rm S}^3$ \cite{Taubes:2001wk,Koshkin}. Physically, this Lagrangian submanifold in the resolved conifold is the dual description of the Lagrangian submanifold (\ref{lagrangian}) in $T^*{\rm S}^3$ described in the previous section. Topologically $L$ is also ${\rm R}^2\times {\rm S}^1$ and has
an asymptotic boundary given by a $T^2$.
The Lagrangian submanifolds constructed by Taubes
have the property that they end on a knot $\alpha$ in the ${\rm S}^3$ at infinity. From the point of view of holography, this is precisely as expected. The dual $U(N)$ Chern-Simons theory lives on the ${\rm S}^3$ at asymptotic infinity in the resolved conifold. Given that we are looking for the resolved conifold description of Wilson loops, it is expected that the bulk description is given by a bulk object which ends on a knot, thus introducing the appropriate source for a Wilson loop operator.
A crucial role in the derivation in the previous section is played by the holonomies
around the contractible cycle of the $T^2$ in $L$
\begin{eqnarray}
\oint_\beta {\cal A}_i.
\label{holoa}
\end{eqnarray}
This data has a nice geometrical interpretation in the resolved conifold. As one follows the branes through the conifold singularity, the contractible cycle $\beta$ of the Lagrangian $L$ in $T^*{\rm S}^3$ grows to become a non-contractible cycle $\beta$ of the $T^2$ on the corresponding Lagrangian $L$ of the resolved conifold.
As explained in more detail in Appendix C, the holonomy of the
complex
gauge field ${\cal A}$
has the interpretation as the modulus\footnote{In the toric diagram,
the modulus is the position of the brane.} of the brane. More
precisely, in addition to the holonomy of the gauge field, the gauge invariant modulus
has a contribution from the K\"{a}hler form ${\mathcal J}$ integrated over a
disk ending on $\beta$. This contributes $\frac{1}{2} g_s M$ to the modulus
since $\int_D {\mathcal J} =\frac{1}{2} g_s M$ (see footnote \ref{footnote-shift}).
Therefore, the Wilson loop operator described by the brane
configuration $(L_{R^T_1},\ldots,L_{R^T_M})$ in the deformed
conifold has a bulk interpretation in terms of $M$ Lagrangian D-branes
$(L_{x_1},\ldots,L_{x_M})$ in the resolved conifold. Moreover, the
modulus $x_i$ of the $i$-th brane in the resolved conifold is determined by the holonomy data
(\ref{holo}) \begin{eqnarray} x_i=\oint_{\partial D=\beta} {\mathcal A}+\int_D {\mathcal J}=g_s\left(
R^T_i-i+{M } +\frac{1}{2}\right)\qquad i=1,\ldots,M\end{eqnarray}
Similarly, the brane configuration $({\bar L}_{ R_1},\ldots,{\bar L}_{ R_P})$ in the deformed
conifold has a bulk interpretation in terms of $P$ Lagrangian anti-branes
$(\bar L_{y_1},\ldots,\bar L_{y_P})$ in the resolved conifold. Moreover, the
modulus $y_i$ of the $i$-th brane in the resolved conifold is determined by the holonomy data
(\ref{taku}) \begin{eqnarray} y_i=\oint_{\partial D=\beta} {\mathcal A}+\int_D {\mathcal J}=g_s\left(
R_i-i+{P } +\frac{1}{2}\right)\qquad i=1,\ldots,P \end{eqnarray}
We now proceed to verify this identification for the case when
the Wilson loop operator is defined on the simplest knot, the unknot.
When the deformed conifold geometry is defined by
\begin{eqnarray}
z_1 z_4-z_2 z_3=\mu, \label{def-con}
\end{eqnarray}
the unknot is parameterized by $(z_1,z_2,z_3,z_4)=(0,\sqrt \mu
e^{i\theta},-\sqrt\mu e^{-i\theta},0)$ with $0\leq \theta\leq 2\pi$.
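Indeed (a check we include for completeness), this circle lies in the deformed conifold (\ref{def-con}):
\begin{eqnarray}
z_1z_4-z_2z_3=0-\left(\sqrt{\mu}\,e^{i\theta}\right)\left(-\sqrt{\mu}\,e^{-i\theta}\right)=\mu.\nonumber
\end{eqnarray}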
For the case of the unknot, the Wilson loop operator $W_{R}$ in (\ref{wilsonopp}) is described by the configuration of D-branes in
Figure
\ref{brane-inner_2}\footnote{
Let us parameterize the resolved conifold by
$|\lambda_1|^2+|\lambda_2|^2-|\zeta_1|^2-|\zeta_2|^2={\rm Re}\hspace{.5mm}t$ up to $U(1)$
equivalence with charges $(1,1,-1,-1)$.
In Figures \ref{brane-inner_2} and \ref{brane-outer_2},
the left, right, top, and bottom regions correspond to $|\lambda_1|^2=0,
|\lambda_2|^2=0, |\zeta_1|^2=0,$ and $|\zeta_2|^2=0$, respectively.
$L$ is given by $
|\lambda_1|^2-{\rm Re}\hspace{.5mm}x=|\lambda_2|^2-{\rm Re}\hspace{.5mm}(t-x)=|\zeta_1|^2=|\zeta_2|^2,
~\arg(\zeta_1\zeta_2\lambda_1\lambda_2)=\pi$.
When the conifold is singular the $z$ and $(\lambda,\zeta)$ coordinates are related as
$
\left(
\begin{array}{cc}
z_1 & z_2 \\
z_3 & z_4 \\
\end{array}
\right)=\left(\begin{array}{c}
\zeta_1\\
\zeta_2
\end{array}\right)
\left(\begin{array}{cc}
-\lambda_2&\lambda_1
\end{array}\right).
$
\label{res-con-convention}
}.
The corresponding Lagrangian submanifolds end on the inner edge, thus accounting for the group theory
bound $R^T_1\leq N$.
This is consistent with the requirement that $\beta$ should be
non-contractible.
It was shown in \cite{Okuda:2004mb}, as reviewed in Appendix
\ref{crystal-section}, that the Wilson loop vev for the unknot can
be rewritten as \begin{eqnarray} \langle W_R\rangle=
M(q)e^{-\sum_{n=1}^\infty\frac{e^{-nt}}{n[n]^2}}
\left(\prod_{i<j}(1-e^{-(x_i-x_j)})\right) \prod_{i=1}^M \exp
\sum_{n=1}^\infty \frac{e^{-nx_i}+e^{-n({t}-x_i)}}{n[n]} ,
\label{brane-amplitude} \end{eqnarray} up to unimportant factors we suppress
here. Here $M(q)=\prod_{j=1}^\infty (1-q^j)^{-j}$ is the MacMahon function,
$t=g_s(N+M)$ and $x_i=g_s(R^T_i-i+ M+\frac{1}{2}),
~~i=1,...,M.$ This is exactly the A-model amplitude for the brane
configuration in Figure \ref{brane-inner_2}\footnote{The factor $\prod_{i<j}(1-e^{-(x_i-x_j)})$ is the
contribution from annulus diagrams between branes.
This is essentially the Vandermonde determinant, i.e. the Weyl denominator.
It demonstrates that
the branes are fermions, as shown first in the mirror B-model \cite{Aganagic:2003db}.
The rest of (\ref{brane-amplitude}),
up to $M(q)$, can be computed by the topological vertex technology \cite{Aganagic:2003db}.},
thus confirming our identification \begin{eqnarray}
\langle W_R\rangle =\langle (L_{x_1},\ldots,L_{x_M})\rangle . \end{eqnarray}
We also show in Appendix \ref{crystal-section} that the Wilson loop
vev can also be written as \begin{eqnarray} \langle W_R\rangle= M(q)
e^{-\sum_{n=1}^\infty\frac{e^{-n{t}}}{n[n]^2}} \left(\prod_{
i<j}(1-e^{-(y_i-y_j)})\right) \prod_{i=1}^P \exp
\sum_{n=1}^\infty \frac{e^{-ny_i}-e^{-n({t}+y_i)}}{n[n]} ,
\label{anti-brane-amplitude} \end{eqnarray} with
$y_i=g_s(R_i-i+P+\frac{1}{2}),~i=1,...,P,~{t}=g_s(N-P)$. This is exactly the A-model
amplitude for the brane configuration in Figure 3, thus confirming
our identification\footnote{
The same amplitude would arise if the branes ended on any of the four outer edges.
One can convince oneself that they end on the lower-left edge by the following argument.
In the convention of footnote \ref{res-con-convention}, one can show that $\alpha$ is the
trivial cycle in $L$
if $L$ ends on the lower-left or upper-right edge.
$\beta$ is trivial if $L$ ends on the lower-right or upper-left edge.
Since we have non-trivial holonomy of a flat connection along $\beta$,
$\beta$ has to be non-trivial, and thus $L$ has to end on either the lower-left or upper-right edge.
By carefully keeping track of the orientation of the cycle $\beta$, one can show that
increasing the holonomy is gauge equivalent to moving the branes toward the lower-left.
Since one can increase the holonomy without bound, the branes must end on the lower-left edge.
Such $L$ is
given by the equations $ |\lambda_1|^2=|\lambda_2|^2-{\rm
Re}\hspace{.5mm}(t+y)=|\zeta_1|^2-{\rm
Re}\hspace{.5mm}y=|\zeta_2|^2,
~\arg(\zeta_1\zeta_2\lambda_1\lambda_2)=\pi$. } \begin{eqnarray} \langle
W_R\rangle =\langle({\bar L}_{y_1},\ldots,{\bar L}_{y_P})\rangle.
\end{eqnarray}
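For concreteness, the maps from a representation $R$ to the brane positions $x_i=g_s(R^T_i-i+M+\frac{1}{2})$ and anti-brane positions $y_i=g_s(R_i-i+P+\frac{1}{2})$ can be packaged as follows. This is our own illustrative helper, not code from the paper; the minimal choices $M=$ number of columns of $R$ and $P=$ number of rows are assumed:

```python
def transpose(R):
    # R^T: column lengths of the Young tableau with row lengths R
    return [sum(1 for r in R if r > i) for i in range(R[0])]

def brane_positions(R, g_s):
    # x_i = g_s (R^T_i - i + M + 1/2), taking M = number of columns of R
    RT = transpose(R)
    M = len(RT)
    return [g_s * (RT[i] - (i + 1) + M + 0.5) for i in range(M)]

def antibrane_positions(R, g_s):
    # y_i = g_s (R_i - i + P + 1/2), with P = number of rows of R
    P = len(R)
    return [g_s * (R[i] - (i + 1) + P + 0.5) for i in range(P)]

R = [3, 1]                          # sample representation
print(transpose(R))                 # [2, 1, 1]
print(brane_positions(R, 1.0))      # [4.5, 2.5, 1.5]
print(antibrane_positions(R, 1.0))  # [4.5, 1.5]
```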
In summary, we have identified the bulk description of Wilson loop operators in Chern-Simons theory in terms of D-branes and anti-branes in the resolved conifold. Moreover, this identification has been explicitly verified for the case of the unknot.
\section{Wilson loops as bubbling Calabi-Yau's}
\label{section-bubbling}
A geometric transition is a phenomenon in which a stack of D-branes is
replaced by a new geometry with ``flux''.
More precisely, when the number of branes in the stack is large,
the system is better described by a certain geometry where the appropriate
fields that encode the charges of the branes are turned on.
In physical string theory these fields are RR fluxes originally sourced by the D-branes.
In topological string theory, the role of the fluxes is played by bulk gauge fields, namely the K\"{a}hler form and the B-field.
The change in the geometry is such that the non-trivial cycle, which originally surrounds\footnote{
Let $W$ be the world-volume of the brane, and $M$ a homologically
trivial cycle.
We say that $M$ surrounds $W$ if there is a chain $N$ such that $\partial
N=M$ and if the intersection number of $N$ and $W$ is non-zero.
}
the branes
and is homologically trivial, becomes a non-trivial cycle that supports the ``flux''.
There are by now many examples of this phenomenon.
Here we will find a new class of geometric transitions in topological string theory.
It has been argued in \cite{Yamaguchi:2006te,Lunin:2006xr} that the D-branes
realizing the half BPS straight Wilson lines
in ${\mathcal N}=4$ SYM can undergo transition
to certain {\it bubbling} geometries that asymptote
to $AdS_5\times {\rm S}^5$.
To obtain these geometries
one makes an ansatz for supergravity fields based on the knowledge of the symmetry
of the branes describing the Wilson loops.
One then imposes the BPS condition and finds that the supergravity solution is
determined by simple data, namely a black-and-white pattern on a line as in Figure \ref{bubbling-geometry_2}(a).
It is expected that the data corresponds to
the representation of the Wilson loop \cite{Yamaguchi:2006te}.
We found in section \ref{branes-res} that Wilson loop
operators in Chern-Simons theory can be realized by a configuration
of branes or anti-branes in the resolved conifold.
When the number of D-branes in a stack is large we expect that the system has
a better description in terms of pure geometry.
The D-branes wrap a Lagrangian submanifold $L$ of topology ${\rm R}^2\times {\rm S}^1$.
The neighborhood of the submanifold, modeled by the normal bundle, is locally ${\rm R}^5\times {\rm S}^1$.
A contractible ${\rm S}^2$ in the transverse ${\rm R}^3$ surrounds the branes.
The geometric transition of the D-branes makes the
${\rm S}^2$ non-trivial while making the ${\rm S}^1$ contractible.
More precisely, the topology change is described by a surgery procedure:
we cut out a tubular neighborhood of topology (3-ball)$\times {\rm R}^2\times {\rm S}^1$ from the ambient space $X$
and glue in a region of topology ${\rm S}^2 \times {\rm R}^2 \times$(disk) to get a new space $X'$.
A relative 2-cycle with boundary on $L$ combines with the disk
to become a 2-cycle in $X'$.
\begin{figure}[htbp]
\centering
\begin{tabular}{cccc}
\psfrag{M}{$m$}
\psfrag{L}{$l$}
\includegraphics[width=25mm]{box-young-diagram.eps}&
\psfrag{t1}{$t_1$}
\psfrag{t2}{$t_2$}
\psfrag{t3}{$t_3$}
\includegraphics[width=38mm]{bubbling_1.eps}&
\psfrag{l1}{$\scriptscriptstyle{g_s(N+l)}$}
\includegraphics[width=38mm]{brane-inner.eps}&
\psfrag{l2}{$\scriptscriptstyle{g_s(N-m)}$}
\includegraphics[width=38mm]{brane-outer.eps}\\
(a)&(b)&(c)&(d)
\end{tabular}
\caption{
The simplest example of the correspondence between a Wilson loop and a bubbling
geometry.
The Wilson loop along the unknot in the $U(N)$ representation specified by the Young tableau
(a) is equivalent to the toric Calabi-Yau manifold given by the web
diagram (b).
The K\"{a}hler moduli in (b) are given by $t_1=g_sm,~t_2= g_s l,~t_3=g_s
(N-m)$.
The bubbling CY manifold (b) arises from geometric transition of
$l$ branes in (c) as well as $m$ anti-branes in (d).
}
\label{bubbling-geometry_1}
\end{figure}
Let us now specialize to the Wilson loop along the unknot, in a representation
of the form Figure \ref{bubbling-geometry_1}(a).
The Wilson loop is realized by a stack of $l$ Lagrangian D-branes on the inner edge
at positions $x_i=g_s(l+m-i+1/2),~i=1,...,l$.
These branes live in the resolved conifold with K\"{a}hler modulus
$t=g_s(N+l)$.
We propose that when $l$ and $m$ are large,
these branes undergo geometric transition to the toric Calabi-Yau
manifold whose toric web diagram is shown in
Figure~\ref{bubbling-geometry_1}(b).
The three K\"{a}hler moduli are $t_1=g_s m,~t_2=g_s l, ~t_3=g_s(N-m)$.
Note that the new geometry has precisely the topology described above.
The contractible sphere that originally surrounded the branes becomes
a non-trivial cycle of size $g_s l$.
The two holomorphic disks that originally ended on the branes
become spheres of sizes $g_s m$ and $g_s (N-m)$.
Our proposal is supported by the explicit computation for the unknot.
If we substitute $x_i=g_s(l+m-i+1/2)$ into the open$+$closed string partition function
(\ref{brane-amplitude})
in the brane set up of Figure \ref{bubbling-geometry_1}(c),
it becomes\footnote{For precise agreement we should include
$\xi(q)^{l}$ in (\ref{brane-amplitude}) where $\xi(q)=\prod_{j=1}^\infty (1-q^j)^{-1}$.
This factor does not contribute to perturbative
amplitudes because of its modular property \cite{Saulina:2004da}.}
\begin{eqnarray}
M(q)^2
\exp\sum_{n=1}^\infty \frac{1}{n[n]^2}\left(
-e^{-n t_1}-e^{-n t_2}-e^{-n t_3}
+ e^{-n (t_1+t_2)} + e^{-n (t_2+t_3)}
-e^{-n (t_1+t_2+t_3)}\right)
\label{bubbling-amplitude_1}
\end{eqnarray}
This is precisely the closed string amplitude for the bubbling
Calabi-Yau in Figure \ref{bubbling-geometry_1}(b),
computed using mirror symmetry and integrality\footnote{See subsection 9.1 of \cite{Aganagic:2003db}
}!
This constitutes very strong evidence for our proposal for the geometric transition.
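The quoted agreement can be checked numerically. The following sketch is our own, with assumed conventions $q=e^{-g_s}$, $[n]=q^{n/2}-q^{-n/2}$, the $\xi(q)^l$ factor from the footnote, and illustrative sample values of $g_s, N, l, m$; it truncates the sums and compares the open- and closed-string free energies:

```python
import math

g_s, N, l, m = 0.7, 8, 3, 2     # sample values (ours, illustrative)
q = math.exp(-g_s)              # assumed convention: q = e^{-g_s}
NMAX = 400                      # truncation of n-sums and j-products

def br(n):                      # [n] = q^{n/2} - q^{-n/2}
    return q ** (n / 2) - q ** (-n / 2)

log_M = -sum(j * math.log(1 - q ** j) for j in range(1, NMAX))

# Open-string side: l branes at x_i = g_s(l+m-i+1/2) in the resolved
# conifold with t = g_s(N+l), including the xi(q)^l factor.
t = g_s * (N + l)
xs = [g_s * (l + m - i + 0.5) for i in range(1, l + 1)]
log_open = log_M
log_open += -l * sum(math.log(1 - q ** j) for j in range(1, NMAX))
log_open += -sum(math.exp(-n * t) / (n * br(n) ** 2) for n in range(1, NMAX))
log_open += sum(math.log(1 - math.exp(-(xi - xj)))       # annulus factor
                for a, xi in enumerate(xs) for xj in xs[a + 1:])
log_open += sum((math.exp(-n * x) + math.exp(-n * (t - x))) / (n * br(n))
                for x in xs for n in range(1, NMAX))

# Closed-string side: bubbling geometry with t1 = g_s m, t2 = g_s l,
# t3 = g_s (N-m), as in (bubbling-amplitude_1).
t1, t2, t3 = g_s * m, g_s * l, g_s * (N - m)
log_closed = 2 * log_M + sum(
    (-math.exp(-n * t1) - math.exp(-n * t2) - math.exp(-n * t3)
     + math.exp(-n * (t1 + t2)) + math.exp(-n * (t2 + t3))
     - math.exp(-n * (t1 + t2 + t3))) / (n * br(n) ** 2)
    for n in range(1, NMAX))

print(abs(log_open - log_closed) < 1e-6)  # True: the amplitudes agree
```

With these conventions the two free energies agree up to truncation and roundoff error, matching the analytic statement in the text.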
In section \ref{branes-res} we showed that the same Wilson loop is realized
by a stack of $m$ anti-branes on an outer edge with holonomies $y_i=g_s(l+m-i+1/2),~i=1,...,m$ turned on.
(See Figure \ref{bubbling-geometry_1}(d).)
The anti-branes live in the resolved conifold with $t=g_s(N-m)$.
These anti-branes also undergo geometric transition.
The resulting geometry is the same as for the branes on the inner edge.
In this case the 2-sphere surrounding the anti-branes acquires size $g_s m$,
and one 2-sphere of size $g_s l$ arises from a holomorphic disk ending on the anti-branes.
The open$+$closed string partition function
(\ref{anti-brane-amplitude})
for the anti-branes again becomes\footnote{
Insert $\xi(q)^{m}$ into (\ref{anti-brane-amplitude}) for precise agreement.
} the closed string partition
function (\ref{bubbling-amplitude_1}) after substituting the values for
$y_i$.
Let us now discuss the Wilson loop along the unknot in the general
representation $R$.
It is convenient to parameterize $R$ in terms of lengths $l_i$ of the edges
as in Figure \ref{bubbling-geometry_2}(a).
The open$+$closed string partition functions (\ref{brane-amplitude})
and (\ref{anti-brane-amplitude}) both become, after substituting the
values for holonomies,
\begin{eqnarray}
&&M(q)^{m+1}\exp\sum_{n=1}^\infty \frac{1}{n[n]^2}\left(
-\sum_{1\leq i\leq 2m+1} e^{-nt_i}
+\sum_{1\leq i\leq2m} e^{-n(t_i+t_{i+1})}\right.\nonumber\\
&&\left.~~
-\sum_{1\leq i\leq 2m-1} e^{-n(t_i+t_{i+1}+t_{i+2})}
\cdots
- e^{-n(t_1+\cdots+t_{2m+1})}
\right)\label{bubbling-amplitude_2}
\end{eqnarray}
We recognize this as the closed string partition function for the toric
Calabi-Yau whose toric diagram is shown in Figure
\ref{bubbling-geometry_2}(b) \cite{Aganagic:2003db}!
This complicated geometry arises via
geometric transition of branes or anti-branes.
The description in terms of geometry is more appropriate when
$l_i$, hence the cycles, are large. These Calabi-Yau manifolds provide a novel representation of knot invariants in Chern-Simons theory.
In the forthcoming paper \cite{geom-trans} we will show that
the open GV invariants for an arbitrary knot determine the closed GV
invariants of the bubbling CY geometry as well as discuss generalization of the newly found class of geometric transitions.
\section*{Acknowledgments}
We thank Vincent Bouchard, Laurent Freidel, Amihay Hanany, Amir-Kian Kashani-Poor, Hirosi Ooguri, and Johannes Walcher for discussion.
We thank the Aspen Center for Physics where this project was
initiated.
The research of T.O. is supported in part by the NSF
grants PHY-9907949 and PHY-0456556.
Research of J.G. is supported in part by funds from NSERC of Canada and by MEDT of Ontario.
\section{Introduction}
Maldacena's consistency relation~\cite{Maldacena:2002vr} has stood out as one of the key relations allowing us to test cosmic inflation~\cite{Guth:1980zm, Linde:1981mu, Starobinsky:1980te, Albrecht:1982wi, Mukhanov:1981xt}. It ties together two observables, the size of primordial local non-Gaussianity, $f_{\rm NL}$, and the power spectrum's spectral index, $n_s - 1$, in a simple relation given by:
\be
f_{\rm NL} = \frac{5}{12} (1 - n_s) . \label{s1-consistency-relation}
\ee
This relation has been shown to remain valid for all single-field attractor models of inflation, characterized by the freezing of the curvature perturbation after horizon crossing (attractor models of single field inflation are models in which every background quantity during inflation is determined by a single parameter, for instance, the value of the Hubble expansion rate $H$, regardless of the initial conditions). Moreover, eq.~(\ref{s1-consistency-relation}) is understood to be the consequence of how long wavelength modes of curvature perturbations modulate the amplitude of shorter wavelength modes~\cite{Creminelli:2004yq, Seery:2005wm, Chen:2006nt, Cheung:2007sv, Ganc:2010ff, RenauxPetel:2010ty,Kundu:2014gxa,Kundu:2015xta,Gong:2017yih}. This modulation is found to be enforced by a symmetry of the action for curvature perturbations under a transformation simultaneously involving spatial dilations and a field reparametrization.
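For scale, a minimal numerical illustration (ours, assuming the Planck central value $n_s \simeq 0.965$ for the spectral index):

```python
# Size of local non-Gaussianity implied by the consistency relation
# f_NL = (5/12)(1 - n_s), for an assumed spectral index n_s ~ 0.965.
n_s = 0.965
f_NL = 5.0 / 12.0 * (1.0 - n_s)
print(round(f_NL, 4))  # 0.0146
```

This is far below the observational sensitivity quoted later in the text, $f_{\rm NL} = 2.5 \pm 5.7$, which is why single-field attractor non-Gaussianity is usually deemed unobservable.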
Relation~(\ref{s1-consistency-relation}) is not satisfied by non-attractor models of single field inflation, for which the background depends crucially on the initial conditions~\cite{Tsamis:2003px, Kinney:2005vj, Namjoo:2012aa, Martin:2012pe, Mooij:2015yka}. For instance, in the extreme case of ultra slow-roll inflation~\cite{Tsamis:2003px, Kinney:2005vj}, where the amplitude of comoving curvature perturbations grows exponentially fast outside the horizon,\footnote{While the comoving curvature perturbation grows as $a^3$ on superhorizon scales, the non-adiabatic pressure still vanishes, providing a counterexample for the standard intuition that superhorizon freezing is caused by adiabaticity \cite{Romano:2015vxz}. See also \cite{Pajer:2017hmb}.} it has been shown~\cite{Namjoo:2012aa, Martin:2012pe, Mooij:2015yka, Bravo:2017wyw, Finelli:2017fml} that $f_{\rm NL}$ is given by:\footnote{Ultra slow-roll is not a realistic model in at least two ways: (1) The inflationary potential is exactly flat, so inflation does not have an end. (2) The value of the spectral index is essentially zero. In fact, eq.~(\ref{s1-ultra-slow-roll}) may be written as $f_{\rm NL} = 5/2 - (1 - n_s) 5/4$, but $n_s -1$ decays as $e^{- 6 N}$, where $N$ is the number of $e$-folds after horizon crossing. For these reasons, we take ultra slow-roll as a proxy model allowing for large local non-Gaussianity.}
\be
f_{\rm NL} = \frac{5}{2} . \label{s1-ultra-slow-roll}
\ee
This result has been extended to more general models of non-attractor inflation~\cite{Chen:2013aj}, for instance, realized with the help of nontrivial kinetic terms~\cite{Romano:2016gop}. Given that local non-Gaussianity is currently constrained by Planck as $f_{\rm NL} = 2.5 \pm 5.7$ (68\%CL)~\cite{Ade:2015ava}, the result shown in eq.~(\ref{s1-ultra-slow-roll}) has strengthened the importance of reaching new observational targets as a way to rule out (or to confirm) exotic mechanisms underlying the origin of primordial perturbations. Other models known for predicting potentially large local non-Gaussianity include multi-field models of inflation~\cite{Byrnes:2008wi} and curvaton models~\cite{Sasaki:2006kq}, but these scenarios require more than one scalar degree of freedom, so we will not consider them here.
Now, neither eq.~(\ref{s1-consistency-relation}) nor~(\ref{s1-ultra-slow-roll}) correspond to proper predictions for the amount of local non-Gaussianity available to inertial observers, such as us. In a cosmological setup, a physical local observer follows a geodesic path that does not necessarily coincide with that of an observer fixed at a given comoving coordinate. This is simply because inertial observers are themselves subject to the presence of perturbations. In the particular case of attractor models, when this fact is taken into account, one finds that the amount of local non-Gaussianity measured by an inertial observer is given by eq.~(\ref{s1-consistency-relation}) plus a correction of the same order. More to the point, one finds~\cite{Tanaka:2011aj, Pajer:2013ana, Dai:2015rda, Cabass:2016cgp, Tada:2016pmk}:
\bea
f^{\rm obs}_{\rm NL} = 0 . \label{s1-consistency-relation-observable}
\eea
In other words, the true prediction for observable primordial local non-Gaussianity coming from attractor models of inflation is zero.\footnote{More precisely, we should write $f^{\rm obs}_{\rm NL} = 0 + \mathcal O (k_L / k_S)^2$, where the $\mathcal O (k_L / k_S)^2$ terms are caused by non-primordial phenomena such as gravitational lensing and redshift perturbations (the so called projection effects~\cite{Baldauf:2011bh, Yoo:2009au}). See ref.~\cite{Pajer:2013ana} for more details on this.} Because of this, one may wonder about the status of the observable local non-Gaussianity in more general situations in which the value for local non-Gaussianity computed in comoving gauge is predicted to be large, just like in non-attractor models of inflation such as ultra slow-roll.
The purpose of this work is to analyze the prediction of observable local non-Gaussianity in more general contexts, beyond those covered by attractor models of single field inflation. To achieve this, we will adopt the Conformal Fermi Coordinates (CFC) formalism introduced by Pajer, Schmidt and Zaldarriaga in ref.~\cite{Pajer:2013ana} and further developed in refs.~\cite{Dai:2015rda, Cabass:2016cgp}. These coordinates are the natural generalization of the so-called Fermi normal coordinates~\cite{Manasse:1963zz}. In short, the use of CFC in a Friedmann-Robertson-Walker (FRW) spacetime allows one to describe the local environment of freely falling inertial observers up to distances much longer than the Hubble radius. This allows one to follow the fate of primordial curvature perturbations within the CFC frame during the whole relevant period of inflation, and perform the computation of gauge invariant $n$-point correlation functions. As we shall see, the expressions for $n$-point correlation functions in the CFC frame differ from those computed in comoving gauge. To understand why this is so, one has to keep in mind that while an ordinary gauge transformation (diffeomorphism) does not change physical observables, the introduction of CFC's corresponds to a change of coordinates that relate global with local coordinates.
We shall see that long wavelength perturbations not only modulate the evolution of shorter wavelength perturbations; they also affect the geodesic paths of inertial observers. These two effects combined imply that long wavelength modes cannot affect short wavelength modes in any observable way. This has been well understood in the case of attractor models. Here we extend this result to the entire family of canonical single field inflation models, regardless of whether they are attractor or non-attractor. To show this, we will compute the squeezed limit of the bispectrum in CFC coordinates, and find that it is given by
\be
B_{\rm CFC} = B_{\rm CM} + \Delta B,
\ee
where $B_{\rm CM}$ is the bispectrum computed in comoving gauge, and $\Delta B$ is the correction due to the change of coordinates. The term $B_{\rm CM}$ is dictated by the invariance of the action for curvature perturbations under a transformation involving both spatial and time dilations together with a field reparametrization~\cite{Bravo:2017wyw}. On the other hand, we will show that the transformation dictating the form of $\Delta B$ is exactly the inverse of such a transformation. As a result, we find that during inflation, after all the modes have exited the horizon, $B_{\rm CFC}$ vanishes, regardless of whether inflation is attractor or non-attractor.
We begin this work by reviewing the construction of conformal Fermi coordinates in a perturbed FRW spacetime in Section~\ref{s2:CFC}. There we also sketch the computation of the squeezed limit of the primordial non-Gaussian bispectrum leading to $f^{\rm obs}_{\rm NL}$. Then, in Section~\ref{s3:FNL} we further develop and simplify some results found in refs.~\cite{Dai:2015rda, Cabass:2016cgp}. In addition, we apply these results to show that the observable primordial local non-Gaussianity vanishes in both, attractor and non-attractor models of inflation. Finally, in Section~\ref{s4:Conclusions} we offer some concluding remarks.
\setcounter{equation}{0}
\section{Review of Conformal Fermi Coordinates}
\label{s2:CFC}
Here we offer a review on how Conformal Fermi Coordinates (CFC) are defined and used to compute correlation functions for inertial observers in terms of those valid for comoving observers. We will mostly base this discussion on refs.~\cite{Dai:2015rda, Cabass:2016cgp}, with some slight variations of the notation.
\subsection{Central geodesic}
Let us start by considering an unperturbed cosmological background described by a Friedmann-Robertson-Walker (FRW) metric in terms of conformal time $\tau$:
\be
ds_0^2 = a^2 (\tau) \eta_{\mu \nu} dx^\mu dx^\nu = a^2(\tau) \left( - d \tau^2 + d {\bf x}^2 \right) .
\ee
Here $a(\tau)$ is the scale factor and ${\bf x}$ is the position in comoving coordinates. The perturbed spacetime may be described with the help of the following metric
\be
ds^2 = g_{\mu \nu} dx^\mu dx^\nu = a^2 (\tau) \big[ \eta_{\mu \nu} + h_{\mu \nu} \big] dx^\mu dx^\nu , \label{s2-perturbed-metric}
\ee
where $h_{\mu \nu}$ parametrize deviations from the FRW background. We will later use a specific form of $h_{\mu \nu}$ in which curvature perturbations are introduced. An inertial observer will follow a geodesic motion determined by $g_{\mu \nu}$, respecting the following equation of motion
\be
\frac{d^2 x^{\mu}}{d\eta^2} + \Gamma^{\mu}_{\rho \sigma} \frac{d x^{\rho}}{d\eta} \frac{d x^{\sigma}}{d\eta} = 0 ,
\ee
where $ \Gamma^{\mu}_{\rho \sigma} = \frac{1}{2} g^{\mu \nu} (\partial_{\rho} g_{\nu \sigma}+ \partial_{\sigma} g_{\rho \nu} - \partial_{\nu} g_{\rho \sigma})$ are the usual Christoffel symbols, and $\eta$ is a given affine parameter. Let us call the resulting geodesic $G$, and take $\bar t$ to be the proper time employed by the inertial observer to parametrize time in his/her local environment. We will introduce a scale factor $a_F(\bar t)$ that parametrizes the expansion felt by the observer in his/her vicinity. The precise definition of $a_F(\bar t)$ is made in eq.~(\ref{s2-choice-a_F}). In the meantime, notice that the introduction of $a_F(\bar t)$ allows us to define a conformal time $\bar \tau$ through the following standard relation:
\be
d \bar \tau = d\bar t / a_F(\bar t) .
\ee
Because the inertial observer follows a geodesic motion that does not remain fixed to a comoving coordinate, it should be clear that $\tau$ and $\bar \tau$ will not coincide. Next, let us consider an arbitrary point $P$ along $G$ corresponding to conformal time $\bar \tau_P$. We wish to introduce a set of coordinates $x^{\bar \alpha} = \left\{ \bar \tau , x^{\bar \imath} \right\}$ in such a way that $x^{\bar \imath}$ parametrizes the 3-dimensional slices of constant $\bar \tau_P$. In these coordinates, one has $x^{\bar \alpha} (P) = \left\{ \bar \tau_P , 0 \right\}$. At this point, one may introduce a set of tetrads $e^\mu_{\bar \alpha}$ such that:
\be
g_{\mu \nu} e^\mu_{\bar \alpha} e^\nu_{\bar \beta} = \eta_{\bar \alpha \bar \beta} , \label{s2-tetrad-def}
\ee
in the vicinity of the entire geodesic. Equation~(\ref{s2-tetrad-def}) will be particularly true at the point $P$. We demand that the $\bar 0$-component $U^{\mu} \equiv e^\mu_{\bar 0}$ coincides with the normalized vector tangent to the geodesic. Then, the rest of the tetrads $e^\mu_{\bar \imath}$ correspond to the space-like vectors orthogonal to the geodesic.
In a perturbed FRW spacetime parametrized by the metric (\ref{s2-perturbed-metric}), the tetrads $e^\mu_{\bar \alpha}$ may be conveniently written as:
\bea
e^\mu_{\bar 0} &=& \frac{1}{a (\tau)} \left( 1 + \frac{1}{2} h_{00} \, , \,V^{i} \right) , \label{tetrad-pert-1} \\
e^\mu_{\bar \jmath} &=& \frac{1}{a (\tau)} \left( V_{j} + h_{0 j} \, , \, \delta^{i}_{j} - \frac{1}{2} h^{i}{}_{j} + \frac{1}{2} \varepsilon_{j}{}^{ik} \omega_k \right) \delta^j_{\bar \jmath} ,\label{tetrad-pert-2}
\eea
where $V^{i}$ parametrizes the 3-component velocity of $U^{\mu}$, and $\omega_k$ parametrizes the rotation of the spatial components of the tetrad induced by the perturbations. In these expressions, and in the rest of this article, spatial indices are raised and lowered with $\delta^{ij}$ and $\delta_{ij}$ respectively. Notice that the construction outlined in the previous section requires that the combination $e^\mu_{\bar \alpha}$ be parallel transported along the geodesic (recall that $U^{\mu} = e^\mu_{\bar 0}$ and that $e^\mu_{\bar \imath}$ are normalized vectors orthogonal to $U^{\mu}$). Then $V^i$ and $\omega_k$ satisfy the following equations
\bea
\partial_0 V^i + \mathcal H V^i &=& \frac{1}{2} \partial^i h_{00} - \partial_0 h^{i}{}_{0} - \mathcal H h^{i}{}_{0} , \label{s2-velocity-equation} \\
\partial_0 \omega^k &=& - \frac{1}{2} \varepsilon^{k i j} \left( \partial_i h_{0 j} - \partial_j h_{0 i} \right) . \label{s2-rotation-equation}
\eea
In Section~\ref{CFC-inflation} we will see that $h_{0 i} = h_{i 0}$ is given by a gradient of the curvature perturbation [see eq.~(\ref{constraint-h-2})]. This will imply that the right hand side of eq.~(\ref{s2-rotation-equation}) vanishes, and $\omega^k$ may be chosen to vanish without loss of generality.
\subsection{Construction of the CFC map} \label{sec:Construction-CFC}
Now we have the challenge to define the slice of constant $\bar \tau_P$. An arbitrary point $Q$ in the vicinity of $P$, in the same slice of constant $\bar \tau_P$, will have coordinates $x^{\bar \alpha} (Q) = \{ \bar \tau_P , x^{\bar \imath}_Q \}$. We may reach $Q$ from $P$ through certain classes of geodesics that are constructed as follows: First, we introduce the conformally flat metric $\tilde g_{\mu \nu} \equiv a_F^{-2} (\bar \tau) g_{\mu \nu}$, and then solve the geodesic equation
\be
\frac{d^2 x^{\mu}}{d\lambda^2} + \tilde \Gamma^{\mu}_{\rho \sigma} \frac{d x^{\rho}}{d\lambda} \frac{d x^{\sigma}}{d\lambda} = 0 , \label{s2-geodesic-conformal}
\ee
where $ \tilde \Gamma^{\mu}_{\rho \sigma}$ are Christoffel symbols computed out of $\tilde g_{\mu \nu}$. To solve this equation, one chooses the following initial conditions:
\be
\frac{d x^{\mu}}{d\lambda} \bigg|_{\lambda = 0} = a_F(\bar \tau_P) e_{\bar \imath}^{\mu} \Delta x^{\bar \imath}_{Q} , \label{s2-initial-condition}
\ee
where $\Delta x_Q^{\bar \imath} = x_Q^{\bar \imath} - x_P^{\bar \imath}$, and $x_Q^{\bar \imath}$ is the position of $Q$ that is reached at $\lambda = 1$. One may solve eq.~(\ref{s2-geodesic-conformal}) perturbatively by writing
\be
x^{\mu} (\lambda) = \sum_{n=0}^{\infty} \alpha_n^{\mu} \lambda^n .
\ee
Since $\lambda = 0$ corresponds to the starting point $P$, one has $ \alpha_0^{\mu} = x^{\mu} (P)$. On the other hand, the initial condition (\ref{s2-initial-condition}) implies $\alpha_1^{\mu} = e_{\bar \imath}^{\mu} \Delta x^{\bar \imath}_{Q} $. It is then possible to show that the solution to eq.~(\ref{s2-geodesic-conformal}), up to cubic order, is given by
\bea
x^{\mu} (\lambda) &=& x^{\mu} (P) + a_F (\bar \tau_P) e_{\bar \imath}^{\mu} \Big|_P \Delta x^{\bar \imath}_{Q} \, \lambda - \frac{1}{2} \tilde \Gamma_{\rho \sigma}^{\mu} a_F^{2}(\bar \tau_P) e_{\bar \imath}^{\rho} e_{\bar \jmath}^{\sigma} \Big|_P \Delta x^{\bar \imath}_{Q} \Delta x^{\bar \jmath}_{Q} \, \lambda^2 \nn \\
&& - \frac{1}{6} \left( \partial_{\nu} \tilde \Gamma^{\mu}_{\rho \sigma} - 2 \tilde \Gamma^{\mu}_{\alpha \rho} \tilde \Gamma^{\alpha}_{\sigma \nu} \right) a_F^{3}(\bar \tau_P) e_{\bar \imath}^{\rho} e_{\bar \jmath}^{\sigma} e_{\bar k}^{\nu} \Big|_P \Delta x^{\bar \imath}_{Q} \Delta x^{\bar \jmath}_{Q} \Delta x^{\bar k}_{Q} \, \lambda^3 + \cdots .
\eea
Evaluating this result at $\lambda = 1$ then gives us the position of an arbitrary point $Q$ with respect to the position of $P$, and so, one may just drop the labels $P$ and $Q$ to obtain the coordinate transformation between the sets of coordinates $x^{\mu}$ and $x^{\bar \alpha}$ up to cubic order:
\bea
\Delta x^{\mu} &=& a_F (\bar \tau_P) e_{\bar \imath}^{\mu} \Big|_P \Delta x^{\bar \imath} - \frac{1}{2} \tilde \Gamma_{\rho \sigma}^{\mu} a_F^{2}(\bar \tau_P) e_{\bar \imath}^{\rho} e_{\bar \jmath}^{\sigma} \Big|_P \Delta x^{\bar \imath} \Delta x^{\bar \jmath} \nn \\
&& - \frac{1}{6} \left( \partial_{\nu} \tilde \Gamma^{\mu}_{\rho \sigma} - 2 \tilde \Gamma^{\mu}_{\alpha \rho} \tilde \Gamma^{\alpha}_{\sigma \nu} \right) a_F^{3}(\bar \tau_P) e_{\bar \imath}^{\rho} e_{\bar \jmath}^{\sigma} e_{\bar k}^{\nu} \Big|_P \Delta x^{\bar \imath} \Delta x^{\bar \jmath} \Delta x^{\bar k} + \cdots, \label{s2:CFC-general-map}
\eea
where $\Delta x^{\mu} = x^{\mu} (Q) - x^{\mu} (P)$.
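The quadratic and cubic coefficients of this expansion can be checked directly. The following sketch is our own one-dimensional toy check (not the full perturbed FRW case), taking a sample linear Christoffel symbol $\tilde\Gamma(x)=a+bx$ about $x_P=0$ and verifying that the residual of the geodesic equation starts at $\mathcal{O}(\lambda^2)$:

```python
# Toy check that x(lam) = v*lam - (1/2)G v^2 lam^2 - (1/6)(G' - 2G^2) v^3 lam^3
# solves x'' + Gamma(x) x'^2 = 0 through O(lam), for Gamma(x) = a + b x.
a, b, v = 0.31, 0.17, 1.3            # arbitrary sample values (assumed)
G, Gp = a, b                         # Gamma and dGamma/dx at P (x_P = 0)

c1 = v
c2 = -G * v**2 / 2                   # quadratic CFC coefficient
c3 = -(Gp - 2 * G**2) * v**3 / 6     # cubic CFC coefficient

def xl(l):  return c1 * l + c2 * l**2 + c3 * l**3
def xp(l):  return c1 + 2 * c2 * l + 3 * c3 * l**2
def xpp(l): return 2 * c2 + 6 * c3 * l

def residual(l):                     # geodesic equation residual
    return xpp(l) + (a + b * xl(l)) * xp(l)**2

r1, r2 = residual(1e-3), residual(2e-3)
print(r2 / r1)   # approximately 4: residual scales as lam^2, so the
                 # ansatz is correct through cubic order
```

Doubling $\lambda$ multiplies the residual by approximately four, confirming that the error is $\mathcal{O}(\lambda^2)$ and hence that the coefficients quoted in the expansion are correct through cubic order.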
From this change of coordinates, it is now possible to deduce the form of the metric in conformal Fermi coordinates:
\be
g_{\bar \alpha \bar \beta} = \frac{\partial x^\mu}{\partial x^{\bar \alpha}} \frac{\partial x^\nu}{\partial x^{\bar \beta}} g_{\mu \nu} . \label{standard-coordinate-trans}
\ee
From this expression, one explicitly finds:
\bea
g_{\bar 0 \bar 0} &=& a_F^2 (\bar \tau) \left[ - 1 - \left( \tilde R_{\bar 0 \bar k \bar 0 \bar l} \right)_{P} \, \Delta x^{\bar k} \Delta x^{\bar l} + \mathcal O (\Delta \bar x^3) \right] , \label{CFC-metric-1} \\
g_{\bar 0 \bar \jmath} &=& a_F^2 (\bar \tau) \left[ - \frac{2}{3} \left( \tilde R_{\bar 0 \bar k \bar \jmath \bar l} \right)_{P} \, \Delta x^{\bar k} \Delta x^{\bar l} + \mathcal O (\Delta \bar x^3) \right] , \label{CFC-metric-2} \\
g_{\bar \imath \bar \jmath} &=& a_F^2 (\bar \tau) \left[ \delta_{ij} - \frac{1}{3} \left( \tilde R_{\bar \imath \bar k \bar \jmath \bar l} \right)_{P} \, \Delta x^{\bar k} \Delta x^{\bar l} + \mathcal O (\Delta \bar x^3) \right] , \label{CFC-metric-3}
\eea
where $\tilde R_{\bar \alpha \bar \beta \bar \gamma \bar \delta} $ are the components of the Riemann tensor constructed from $\tilde g_{\mu \nu}$ and projected along the CFC directions with the help of the tetrad introduced earlier:
\be
\tilde R_{\bar \alpha \bar \beta \bar \gamma \bar \delta} = a_F^{4}(\bar \tau_P) e^{\mu}_{\bar \alpha} e^{\nu}_{\bar \beta} e^{\rho}_{\bar \gamma} e^{\sigma}_{\bar \delta} \tilde R_{\mu \nu \rho \sigma} .
\ee
In the previous expressions $\mathcal O (\Delta \bar x^3)$ stands for terms of order $\big( \partial_{\bar \imath} \tilde R_{\bar \alpha \bar \jmath \bar \beta \bar k} \big)_{P} \, \Delta x^{\bar \imath} \Delta x^{\bar \jmath} \Delta x^{\bar k}$. This is in fact one of the salient points of this construction: Higher order corrections to the metric $g_{\bar \alpha \bar \beta}$ are suppressed by both spatial derivatives of the Riemann tensor and powers of $x^{\bar \imath}$. We will see how this plays a role later on, when we examine the validity of the CFC to follow the evolution of perturbations during inflation.
\subsection{Choosing the conformal scale factor $a_F$}
Notice that up to now the scale factor $a_F$ has not been properly defined. This may be done by first recalling that $U^{\mu} \equiv e^\mu_{\bar 0}$ defined earlier satisfies the geodesic equation $U^{\nu} \nabla_\nu U^{\mu} = 0$. In order to study how two nearby parallel geodesics diverge, one may introduce the velocity divergence parameter $\vartheta$ as:
\be
\vartheta \equiv \nabla_{\mu} U^{\mu} .
\ee
We then demand that the scale factor $a_F$ satisfies the following equation:
\be
H_F \equiv \frac{1}{a_F} \frac{d a_F}{d \bar t} = \frac{1}{3} \vartheta . \label{s2-choice-a_F}
\ee
One crucial reason behind this choice is that $\vartheta$ is a local observable, and so $H_F$ is the true Hubble parameter describing the local expansion of the patch surrounding the geodesic. Another consequence follows from this choice. The velocity divergence $\vartheta$ satisfies the Raychaudhuri equation:
\be
\frac{d \vartheta}{d \bar t} + \frac{1}{3} \vartheta^2= - \sigma_{\mu \nu} \sigma^{\mu \nu} + \omega_{\mu \nu} \omega^{\mu \nu} - R^{\mu}{}_{\rho \mu \sigma} U^{\rho} U^{\sigma} ,
\ee
where $\sigma_{\mu \nu}$ and $\omega_{\mu \nu}$ are the trace-free symmetric and antisymmetric contributions to $\nabla_{\mu} U_{\nu}$ respectively. It is possible to work out $R^{\mu}{}_{\rho \mu \sigma} U^{\rho} U^{\sigma} = R^{\bar \mu}{}_{\bar \rho \bar \mu \bar \sigma} U^{\bar \rho} U^{\bar \sigma}$ to show that the Raychaudhuri equation reduces to
\be
\frac{d \vartheta}{d \bar t} + \frac{1}{3} \vartheta^2= 3 ( \dot H_F + H_F^2 ) - \sigma_{\bar \imath \bar \jmath} \sigma^{\bar \imath \bar \jmath} + \omega_{\bar \imath \bar \jmath} \omega^{\bar \imath \bar \jmath} - a_F^{-2} \tilde R^{\bar \imath}{}_{\bar 0 \bar \imath \bar 0} .
\ee
But since $a_F$ has been chosen to satisfy (\ref{s2-choice-a_F}), this equation further reduces to
\be
\sigma_{\bar \imath \bar \jmath} \sigma^{\bar \imath \bar \jmath} - \omega_{\bar \imath \bar \jmath} \omega^{\bar \imath \bar \jmath} = - a_F^{-2} \tilde R^{\bar \imath}{}_{\bar 0 \bar \imath \bar 0} .
\ee
In a homogeneous background, both $\sigma$ and $\omega$ vanish. Thus, we see that the choice of eq.~(\ref{s2-choice-a_F}) implies that $\tilde R^{\bar \imath}{}_{\bar 0 \bar \imath \bar 0}$ is necessarily of second order in perturbations.
\subsection{CFC in inflation} \label{CFC-inflation}
Now that we have in our hands the notion of conformal Fermi coordinates, we may examine their form in the specific case of a perturbed FRW spacetime during inflation. First, it is convenient to consider the perturbed metric of eq.~(\ref{s2-perturbed-metric}) in comoving gauge, where the coordinates are such that the fluctuations of the fluid driving inflation vanish. In the case of single field canonical inflation, this gauge corresponds to the case in which the perturbations of the scalar field driving inflation satisfy $\delta \phi = 0$. In this gauge, it is customary to introduce the comoving curvature perturbation variable $\zeta$ through the relation:
\be
h_{ij} = \left[ e^{2 \zeta} - 1 \right] \delta_{ij} . \label{curvature-comoving-def}
\ee
Then, Einstein's equations imply constraint equations for $h_{00}$ and $h_{0i} = h_{i0}$. To linear order, the solutions of these equations are found to be given by
\bea
h_{00} &=& - \frac{2}{\mathcal H} \partial_0 \zeta , \label{constraint-h-1} \\
h_{0i} &=& - \partial_i \left[ \frac{\zeta}{\mathcal H} - \epsilon \partial^{-2} \partial_0 \zeta \right] , \label{constraint-h-2}
\eea
where $\mathcal H \equiv \partial_0 \ln a$ and $\partial_0 \equiv \partial / \partial \tau$. These solutions will change in non-canonical models of inflation. For instance, the fluid driving inflation could induce an effective sound speed $c_s$ parametrizing the speed at which curvature perturbations propagate~\cite{Cheung:2007sv}, as in the case of $P(X)$-models~\cite{Garriga:1999vw} or single-field EFTs describing multi-field models with massive fields~\cite{Achucarro:2010da}. In these cases, $c_s$ would modify the second constraint equation (\ref{constraint-h-2}).
Now, we would like to integrate eqs.~(\ref{s2-velocity-equation}) and~(\ref{s2-choice-a_F}) taking into account the introduction of the curvature perturbation $\zeta$. This will allow us to extend the CFC map (\ref{s2:CFC-general-map}) at any time $\tau \geq \tau_P$. Let us start by considering the integration of eq.~(\ref{s2-velocity-equation}) along the geodesic path. To do so, let us introduce the following combination involving $V^i$ and $h_{0i}$:
\be
\mathcal F^i = V^i + h_{0}{}^{i} . \label{V-F}
\ee
Given that $h_{0i}$ is a gradient, it should be clear that eq.~(\ref{s2-velocity-equation}) implies that the non-trivial part of $V^i$ is also given by a gradient. We therefore write $\mathcal F_i = \partial_i \mathcal F$. Then, direct integration of eq.~(\ref{s2-velocity-equation}) gives
\be
\mathcal F(\tau,{\bf x}) = e^{- \int^{\tau}_{\tau_*} ds \mathcal H (s)} \left[ \frac{1}{\mathcal H_*} C_{F} (\tau_* , {\bf x}) + \frac{1}{2} \int^{\tau}_{\tau_*} \!\!\! ds \, e^{\int^s_{\tau_*} dw \mathcal H (w)} h_{00} (s , {\bf x}) \right] , \label{F-integral-solution}
\ee
where $C_{F}(\tau_* , {\bf x}_c)$ is an integration constant defined on the geodesic path that must be taken to be linear in the perturbations. Given that we are interested in gradients of $\mathcal F$, we must allow for the existence of $\partial_i C_{F}(\tau_* , {\bf x})$ and $\partial^2 C_{F}(\tau_* , {\bf x})$. This result, together with $h_{0 i}$ found in (\ref{constraint-h-2}) gives us back $V^i$. Let us now consider the integration of eq.~(\ref{s2-choice-a_F}). By using eq.~(\ref{tetrad-pert-1}) to write $U^{\mu}$ in terms of $V^i$ and $h_{00}$, eq.~(\ref{s2-choice-a_F}) gives a first order differential equation for the combination $a_F (\bar \tau)/ a(\tau)$ which, to leading order in the perturbations yields
\be
\frac{a_F (\bar \tau)}{a (\tau)} - 1 = C_{a} (\tau_* , {\bf x}_c(\tau_*)) + \int^{\tau}_{\tau_*} d s \left[ \partial_0 \zeta (s , {\bf x}_c (s) ) + \frac{1}{3} \partial_i V^i (s , {\bf x}_c (s) ) \right] , \label{a/a-integral-solution}
\ee
where ${\bf x}_c (s)$ denotes the path of the geodesic in comoving coordinates. In the previous expression $\tau_*$ corresponds to a given initial time, and $C_{a}(\tau_* , {\bf x}_c)$ denotes an integration constant which should be considered to be of linear order in the perturbations.
We are now in a condition to deduce the form of the conformal Fermi coordinates valid at times $\tau > \tau_P$. To do so, let us consider an arbitrary point $P_2$ located on the central geodesic $G$ at a given time $\tau > \tau_P$. It is clear that $x^{\mu} (P_2 )$ will differ from the value $x^{\mu}_0 (P_2)$ that would have been obtained in an unperturbed universe. The difference is accounted by a deviation $\rho^{\mu} (\tau )$ that is at least linear in the perturbations:
\be
x^{\mu} (P_2 ) = x^{\mu}_0 (P_2) + \rho^{\mu} (\tau ) . \label{geodesic-dev}
\ee
Having this in mind, we may express the CFC map using the following ansatz for an arbitrary time $\tau > \tau_P$:
\be
x^{\mu} (\bar \tau , \bar {\bf x}) = x^{\mu}_0 (\bar \tau , \bar {\bf x}) + \rho^{\mu} (\tau) + A_{\bar \imath}^{\mu} (\tau) \Delta x^{\bar \imath} + B_{\bar \imath \bar \jmath}^{\mu} (\tau) \Delta x^{\bar \imath} \Delta x^{\bar \jmath} + C_{\bar \imath \bar \jmath \bar k}^{\mu} (\tau) \Delta x^{\bar \imath} \Delta x^{\bar \jmath} \Delta x^{\bar k} + \cdots, \label{CFC-map-inflat}
\ee
where $x^{\mu}_0 (\bar \tau , \bar {\bf x})$ is the unperturbed map, for which the comoving coordinates and conformal Fermi coordinates coincide. More precisely, $x^{\mu}_0 (\bar \tau , \bar {\bf x})$ is such that:
\be
x^{0}_0 (\bar \tau , \bar {\bf x}) = \bar \tau , \qquad x^{i}_0 (\bar \tau , \bar {\bf x}) = x^i_c + \delta^{i}_{\bar \imath}\Delta x^{\bar \imath} . \label{xmuzero}
\ee
The coefficients $ \rho^{\mu} (\tau )$, $A_{\bar \imath}^{\mu} (\tau)$, $B_{\bar \imath \bar \jmath}^{\mu} (\tau)$ and $C_{\bar \imath \bar \jmath \bar k}^{\mu} (\tau)$ are all linear in the perturbations. In Appendix~\ref{app:map-coeff} these are shown to be given by the following expressions:
\bea
\rho^{0} (\tau ) &=& \int^{\tau}_{\tau_*} \!\!\! ds \left[ \frac{a_F (\bar \tau)}{a (\tau)} (s , {\bf x}_c) - 1 + \frac{1}{2} h_{00} (s , {\bf x}_c) \right] , \label{map-coefficient-int-1} \\
\rho^{i} (\tau ) &=& \int^{\tau}_{\tau_*} \!\!\! ds \, V^{i}(s , {\bf x}_c) , \label{map-coefficient-int-2} \\
A_{\bar \imath}^{0} (\tau) &=& \delta^i_{\bar \imath} \mathcal F_i (\tau , {\bf x}_c) , \label{map-coefficient-int-3} \\
A_{\bar \imath}^{i} (\tau) &=& \left[ \frac{a_F (\bar \tau)}{a (\tau)} - 1 - \zeta (\tau , {\bf x}_c) \right] \delta^i_{\bar \imath} , \label{map-coefficient-int-4} \\
B_{\bar \imath \bar \jmath}^{\mu} (\tau) &=& - \frac{1}{2} \tilde \Gamma^{\mu}_{i j } (\tau , {\bf x}_c) \delta^{i}_{\bar \imath} \delta^{j}_{\bar \jmath}, \label{map-coefficient-int-5} \\
C_{\bar \imath \bar \jmath \bar k}^{\mu} (\tau) &=& - \frac{1}{6} \partial_{k} \tilde \Gamma^{\mu}_{i j } (\tau , {\bf x}_c) \delta^{i}_{\bar \imath} \delta^{j}_{\bar \jmath} \delta^{k}_{\bar k} . \label{map-coefficient-int-6}
\eea
The right hand sides of the previous expressions are all expanded up to first order in the perturbations (recall that $a_F (\bar \tau) / a (\tau) (s , {\bf x}_c) - 1$ is a quantity of linear order in the perturbations). Along the geodesic one has $\Delta \bar {\bf x} = 0$ and the map reduces to $x^{\mu} (\bar \tau , \bar {\bf x}_c) = x^{\mu}_0 (\bar \tau , \bar {\bf x}_c) + \rho^{\mu} (\tau )$. This implies that:
\be
\tau = \bar \tau + \rho^{0} (\tau ) , \qquad x^i = x_c^i + \rho^i .
\ee
Then $\rho^{0} (\tau ) = \tau - \bar \tau$ informs us how the perturbations shift the equal time slices parametrized by $\tau$ and $\bar \tau$. Similarly, $\rho^i$ parametrizes the spatial shift of the geodesic from the unperturbed position $x_c^i$ (that is, $x_c^i + \rho^i$ is the location of the geodesic at a time $\tau > \tau_*$).
\subsection{Computation of correlation functions with CFC's} \label{correlations-in-CFC}
Here we consider the task of computing correlation functions using the CFC map of eq.~(\ref{CFC-map-inflat}). We are particularly interested in the squeezed limit of the three point function $\langle \zeta \zeta \zeta \rangle$. In this subsection we will sketch the procedure, that will be implemented in more detail in the next section, after we have considered some further simplifications of the map (\ref{CFC-map-inflat}). The main idea is the following: We will split the curvature perturbation $\zeta$ into short and long wavelength contributions:
\be
\zeta = \zeta_{S} + \zeta_{L} .
\ee
Then, we will use $\zeta_{L}$ to find the perturbed spacetime determining the map~(\ref{CFC-map-inflat}) deduced in the previous section. In other words, in global coordinates we study a FRW metric perturbed by $\zeta_L$. The map of eq.~(\ref{CFC-map-inflat}) then shows that $\tau=\bar{\tau}+\mathcal{O}(\zeta_L)$, $x^i=\bar{x}^i+\mathcal{O}(\zeta_L)$. We then use the inverse of this map to see how local quantities (such as $\bar{\zeta}_S(\bar{x})$ and its correlation functions) can be written in terms of well-known global quantities. This will allow us to derive an expression for the short wavelength curvature perturbation $\bar \zeta_S$ (defined in the inertial frame) as a function of both the short and long wavelength curvature perturbations $\zeta_S$ and $\zeta_L$ (defined in the comoving frame):
\be
\bar \zeta_S = \zeta_S + F_{\rm S} (\zeta_S ,\zeta_L) . \label{Split-1}
\ee
Here the function $F_{\rm S} (\zeta_S ,\zeta_L)$ informs us about how the long wavelength mode $\zeta_{L}$ affects the behavior of $\bar \zeta_S$ due to the fact that this is a local quantity defined within the patch surrounding the geodesic. This function will be of the form (see next subsection)
\be
F_{\rm S} (\zeta_S ,\zeta_L) = \sum_a f_a ( \zeta_L) g_a (\zeta_S) , \label{F_S-form}
\ee
where $f_a ( \zeta_L)$ and $g_a (\zeta_S)$ are linear functions of $\zeta_L$ and $\zeta_S$ respectively (that could include spacetime derivatives acting on the perturbations). Of course, the long mode $\zeta_{L}$ does not affect itself, in the sense that the CFC map corresponds to a local small scale coordinate transformation that can only affect the short scale contribution $\bar \zeta_S$ inside the patch around the central geodesic. Therefore we effectively write:
\be
\bar \zeta_L = \zeta_L . \label{Split-2}
\ee
Having (\ref{Split-1}) and (\ref{Split-2}) then allows us to compute the squeezed limit of the three point correlation function $\langle \bar \zeta \bar \zeta \bar \zeta \rangle$. To this effect, one first considers the computation of the two point correlation function $\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle$. Because $\bar \zeta_S$ is given by (\ref{Split-1}) with $F_S$ given by (\ref{F_S-form}), this two point correlation function may be expanded to linear order in $\zeta_L$ as:
\be
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \langle \zeta_S ( x_1) \zeta_S ( x_2) \rangle + \sum_a f_{a} (\zeta_{L}) \left[ \langle \zeta_S (\bar x_1) g_a (\zeta_S (\bar x_2)) \rangle + \langle g_a (\zeta_S (\bar x_1)) \zeta_S (\bar x_2) \rangle \right] .
\ee
Then, by correlating this result with the long mode $\bar \zeta_L$ of eq.~(\ref{Split-2}), one obtains
\bea
\langle \bar \zeta_{L} (\bar x_3) \langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle \rangle = \langle \zeta_{L} ( x_3) \langle \zeta_S ( x_1) \zeta_S ( x_2) \rangle \rangle \qquad\qquad \qquad \qquad \qquad \qquad \qquad \nn \\
+ \sum_a \langle \zeta_{L} ( x_3) f_{a} (\zeta_{L}) \rangle \left[ \langle \zeta_S (\bar x_1) g_a (\zeta_S (\bar x_2)) \rangle + \langle g_a (\zeta_S (\bar x_1)) \zeta_S (\bar x_2) \rangle \right] . \label{three-point-corrected}
\eea
The squeezed limit in momentum space of this $\langle \bar \zeta_{L} (\bar x_3) \langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle \rangle$ directly gives $f_{\rm NL}^{\rm obs}$, whereas the squeezed limit of the first term of the right hand side gives the usual $f_{\rm NL}$-parameter computed in comoving coordinates.\footnote{To show that $\langle \zeta_{L} ( x_3) \langle \zeta_S ( x_1) \zeta_S ( x_2) \rangle \rangle$ gives the squeezed limit of the bispectrum, one may start from $\langle \zeta ( x_3) \zeta ( x_1) \zeta ( x_2) \rangle$ and split both the curvature perturbation and the vacuum state as $\zeta = \zeta_L + \zeta_S$ and $| 0 \rangle = | 0 \rangle_L \otimes | 0 \rangle_S$ respectively. Then one only needs to recall that, because of non-linearities, $\zeta_S$ will in general depend on $\zeta_L$.} This means that one finally arrives at an expression of the form
\be
f_{\rm NL}^{\rm obs} = f_{\rm NL} + \Delta f_{\rm NL} ,
\ee
where $\Delta f_{\rm NL}$ arises from those terms entering the second line of eq.~(\ref{three-point-corrected}), due to the CFC transformation. We will see how to perform all these steps in detail in the following section.
\subsection{Short wavelength modes in CFC} \label{sec:splitting}
To finish this section, we deduce how long wavelength modes affect short wavelength modes through the CFC transformation of eq.~(\ref{CFC-map-inflat}). We will only consider the effects of the first three terms in~(\ref{CFC-map-inflat}), involving the perturbations $\rho^{\mu} (\tau )$ and $A_{\bar \imath}^{\mu} (\tau)$. The remaining pieces involving $B_{\bar \imath \bar \jmath}^{\mu} (\tau)$ and $C_{\bar \imath \bar \jmath \bar k}^{\mu} (\tau)$ introduce the so-called projection effects, which are suppressed in the squeezed limit. We may organize the transformation as follows:
\be
x^{\mu} (\tau , \bar {\bf x}) = x^{\mu}_0 + \xi^{\mu} (\tau , \bar {\bf x}) , \label{CFC-map-inflat-needed}
\ee
where
\be
\xi^{\mu} (\tau , \bar {\bf x}) \equiv \rho^{\mu} (\tau ) + A_{\bar \imath}^{\mu} (\tau) \Delta x^{\bar \imath} . \label{def-xi-tau-x}
\ee
Now, notice that the proper definition of the curvature perturbation in the CFC frame may be written as~\cite{Rigopoulos:2003ak, Lyth:2004gb}:
\be
\bar \zeta (\bar x) = \frac{1}{6} \log \det (g_{\bar \imath \bar \jmath} / a_F^2 (\bar \tau) ) , \label{def-zeta-bar}
\ee
where the elements $g_{\bar \imath \bar \jmath}$ are given by the spatial components of eq.~(\ref{standard-coordinate-trans}) as:
\be
g_{\bar \imath \bar \jmath} = a^2 (\tau)\left[ \frac{\partial \tau}{\partial x^{\bar \imath}} \frac{\partial \tau}{\partial x^{\bar \jmath}} (-1 + h_{00})
+ \frac{\partial x^i}{\partial x^{\bar \imath}} \frac{\partial \tau}{\partial x^{\bar \jmath}} h_{i 0}
+ \frac{\partial \tau}{\partial x^{\bar \imath}} \frac{\partial x^j}{\partial x^{\bar \jmath}} h_{0 j}
+ \frac{\partial x^i}{\partial x^{\bar \imath}} \frac{\partial x^j}{\partial x^{\bar \jmath}} ( \delta_{ij} + h_{ij} ) \right] . \label{standard-coordinate-trans-spatial}
\ee
We now consider the role of the long and short modes in the splitting $\zeta = \zeta_{L} + \zeta_{S}$ in order to compute $\bar \zeta (\bar x) $ from (\ref{def-zeta-bar}). First, notice that in the previous expression, the partial derivatives $\partial \tau / \partial x^{\bar \imath}$ and $\partial x^i / \partial x^{\bar \imath}$ are determined by the CFC map of eq.~(\ref{CFC-map-inflat}), and therefore depend on $\zeta_L$ alone. On the other hand, $h_{00}$, $h_{0i} = h_{i0}$ and $h_{ij} = \left[ e^{2 \zeta} - 1 \right] \delta_{ij}$ depend on both $\zeta_{L}$ and $\zeta_{S}$. In addition, one has to keep in mind that $\zeta_S$ is evaluated at $x = x(\bar x)$, which is determined by the CFC map. Putting together all of these factors, it is possible to deduce
\be
\bar \zeta_S (\bar x) = \zeta_S (x(\bar x)) + \frac{1}{3} [h_{S}]_{0}{}^i (x(\bar x)) \partial_i \xi_L^0 (\bar x) , \label{zeta-s}
\ee
where the arguments of the short scale perturbations are evaluated at $x(\bar x)$. For example, in the case of the first term $\zeta_S(x(\bar x))$, since $x = x_0 + \xi$ (as in eq.~(\ref{CFC-map-inflat-needed})), one has
\be
\zeta_S (x(\bar x)) = \zeta_S (x_0) + \xi^\mu (\bar x) \partial_\mu \zeta_S (x_0) .
\ee
Recall that $x_0^{\mu}$ is given by eq.~(\ref{xmuzero}), and therefore $\zeta_S (x_0)$ is nothing but the comoving curvature perturbation evaluated with unperturbed comoving coordinates. For this reason, we could simply write $\zeta_S (x_0) = \zeta_S (\bar x)$.
\setcounter{equation}{0}
\section{Local non-Gaussianity in single field inflation}
\label{s3:FNL}
We now put together the results of the previous sections to compute the squeezed limit of the bispectrum in the two regimes of interest: attractor and non-attractor. To simplify matters, we will work to leading order in the slow-roll parameter $\epsilon$ and neglect any corrections that would modify the final results by terms suppressed in $\epsilon$.
\subsection{Further developments} \label{further-devs}
Let us start by obtaining explicit expressions for the integrals of eqs.~(\ref{F-integral-solution}) and~(\ref{a/a-integral-solution}), which in turn lead to simple and manageable expressions for the coefficients $\rho^{\mu} (\tau )$ and $A_{\bar \imath}^{\mu} (\tau)$ appearing in the map (\ref{CFC-map-inflat-needed}). First, notice that $\int^{\tau}_{\tau_*} \!\!\! ds \, \mathcal H (s) $ appearing in~(\ref{F-integral-solution}) may be directly integrated as:
\be
\int^{\tau}_{\tau_*} \!\!\! ds \, \mathcal H (s) = \ln \frac{a (\tau)}{a (\tau_*)} .
\ee
Then, one may re-express the integral of eq.~(\ref{F-integral-solution}) as
\be
{\mathcal F} = \frac{a(\tau_*)}{a(\tau)} \left( \frac{1}{\mathcal H_*} C_F - \left[ \frac{1}{a(\tau_*)} \int^{\tau}_{\tau_*} \!\!\! d s \, \frac{a(s)}{\mathcal H (s)} \, \frac{\partial \zeta}{\partial s} \right] \right) . \label{F-N-int}
\ee
Now, notice that $\mathcal H = - 1 / \tau + \mathcal O (\epsilon)$ and $a (\tau) / a(\tau_*) = \tau_* / \tau + \mathcal O (\epsilon)$. This allows one to integrate eq.~(\ref{F-N-int}) to obtain the following expression for $V_i$:
\bea
V_{i} &=&\frac{1}{\mathcal H_*} \frac{a(\tau_*)}{a(\tau)} \partial_i \Big( C_F + \zeta_* \Big) - \partial_i \left[ \epsilon \partial^{-2} \frac{\partial \zeta}{\partial \tau} \right] + \mathcal O (\epsilon) , \label{F-integral-solution-3}
\eea
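As a consistency check (ours, not part of the original text), the integral in eq.~(\ref{F-N-int}) can be carried out symbolically at zeroth order in $\epsilon$, using the de Sitter relations $a(\tau) = -1/(H\tau)$ and $\mathcal H = -1/\tau$; one finds $\mathcal F = \tau \, ( \zeta - \zeta_* - C_F )$, whose gradient, combined with $h_{0i}$, reproduces the first term of eq.~(\ref{F-integral-solution-3}). A sympy sketch:

```python
import sympy as sp

s = sp.Symbol('s', negative=True)          # integration variable (conformal time)
tau = sp.Symbol('tau', negative=True)
tau_star = sp.Symbol('tau_star', negative=True)
H = sp.Symbol('H', positive=True)
C_F = sp.Symbol('C_F')
zeta = sp.Function('zeta')                 # curvature perturbation along the geodesic

a = lambda x: -1 / (H * x)                 # de Sitter scale factor
cH = lambda x: -1 / x                      # conformal Hubble parameter

# eq. (F-N-int): F = (a_*/a) [ C_F/H_*  -  (1/a_*) Int a(s)/H(s) dzeta/ds ds ]
inner = sp.integrate(a(s) / cH(s) * sp.diff(zeta(s), s), (s, tau_star, tau))
F = a(tau_star) / a(tau) * (C_F / cH(tau_star) - inner / a(tau_star))

assert sp.simplify(F - tau * (zeta(tau) - zeta(tau_star) - C_F)) == 0
```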
where $\mathcal O (\epsilon)$ stands for a function of order $\epsilon$ that decays quickly on superhorizon scales. We will soon argue how to choose the integration constant $C_F$. Our choice will imply that $V_{i} $ is a function of order $\epsilon$. Irrespective of this, $V_{i}$ will contribute terms that quickly decay on superhorizon scales (for both regimes, attractor and non-attractor), and that become negligible in the computation of the bispectrum squeezed limit. Next, we move on to compute the integral of eq.~(\ref{a/a-integral-solution}). Given that $V^i$ is sub-leading, eq.~(\ref{a/a-integral-solution}) may be directly integrated, giving us back
\be
\frac{a_F (\bar \tau)}{a (\tau)} -1 = C_{a} + \zeta - \zeta_* + \mathcal O (\epsilon) , \label{a/a-integral-solution-2}
\ee
where $ \mathcal O (\epsilon)$ stands for those decaying terms of order $\epsilon$. Finally, we may use these results to rewrite the map coefficients (\ref{map-coefficient-int-1})-(\ref{map-coefficient-int-4}) that will be used to compute the squeezed limit. Using the fact that $\mathcal H = - 1/ \tau$ up to corrections of order $\epsilon$, these are found to be:
\bea
\rho^{0} (\tau ) &=& C_{a} (\tau - \tau_*) + \tau ( \zeta - \zeta_*) + \cdots , \label{map-coefficient-pre-1} \\
\rho^{i} (\tau ) &=& 0 + \cdots , \label{map-coefficient-pre-2} \\
A_{\bar \imath}^{0} (\tau) &=& \delta^i_{\bar \imath} \tau \partial_i \left[ \zeta - \zeta_* - C_{F} \right] + \cdots, \label{map-coefficient-pre-3} \\
A_{\bar \imath}^{i} (\tau) &=& \left[ C_{a} - \zeta_* \right] \delta^i_{\bar \imath} + \cdots, \label{map-coefficient-pre-4}
\eea
where the ellipses $\cdots$ denote those decaying terms of order $\epsilon$.
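The first of these coefficients can be verified directly (our check, not in the original): inserting eq.~(\ref{a/a-integral-solution-2}) and the constraint $h_{00} = -2 \partial_0 \zeta / \mathcal H$ into eq.~(\ref{map-coefficient-int-1}), the integrand becomes a total derivative once $\mathcal H = -1/\tau$ is used, which reproduces eq.~(\ref{map-coefficient-pre-1}). In sympy:

```python
import sympy as sp

s = sp.Symbol('s', negative=True)
tau = sp.Symbol('tau', negative=True)
tau_star = sp.Symbol('tau_star', negative=True)
C_a = sp.Symbol('C_a')
zeta = sp.Function('zeta')

cH = -1 / s                                   # de Sitter: curly-H(s) = -1/s
h00 = -2 * sp.diff(zeta(s), s) / cH           # comoving-gauge constraint for h_00
aF_ratio = C_a + zeta(s) - zeta(tau_star)     # a_F/a - 1, eq. (a/a-integral-solution-2)

# The integrand of rho^0 is a total derivative in s:
integrand = aF_ratio + h00 / 2
antiderivative = C_a * s + s * (zeta(s) - zeta(tau_star))
assert sp.simplify(sp.diff(antiderivative, s) - integrand) == 0

# The fundamental theorem of calculus then gives eq. (map-coefficient-pre-1):
rho0 = antiderivative.subs(s, tau) - antiderivative.subs(s, tau_star)
expected = C_a * (tau - tau_star) + tau * (zeta(tau) - zeta(tau_star))
assert sp.simplify(rho0 - expected) == 0
```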
\subsection{Initial conditions} \label{sec:initial-conditions}
As discussed in Section~\ref{correlations-in-CFC}, we are interested in understanding how long wavelength modes affect the geodesic motion of an inertial observer that has access to short wavelength perturbations. This means that the perturbed FRW spacetime considered in the previous section deviates from its unperturbed version due to long wavelength modes $\zeta_L$. We will choose $\tau_*$ at a time when all the relevant modes of $\zeta_L$ have exited the horizon. In practice, we are interested in computing the effects due to a single mode (or a small range of modes) appearing in $\zeta_L$, that will later be selected in the squeezed limit in momentum space. So we could simply say that our condition is that $\tau_*$ corresponds to a moment in time at which $\zeta_L$ has just crossed the horizon.
Given that at $\tau_*$ the perturbation $\zeta_L$ has just crossed the horizon, deviations of the geodesic path are only starting to build up. In particular, any effect of $\zeta_L$ on the velocity field $V_i$ must be negligible. We therefore choose $C_F$ in such a way that $V^i_* = 0$. To leading order this corresponds to
\be
C_F = - \zeta^*_L .
\ee
As explained, this implies that $V_i$ may be neglected, leading to $\rho^{i} (\tau ) = 0$, which was already stated in eq.~(\ref{map-coefficient-pre-2}). Next, a similar argument may be used: since $\zeta_L$ has just crossed the horizon, we require that $a (\tau_*) = a_F (\bar \tau_*)$; this corresponds to a synchronized map choice. This leads to $C_{a} = 0$, as is evident from eq.~(\ref{a/a-integral-solution-2}). Then, we finally arrive at a simple version of the map coefficients needed to connect the coordinates of inertial and comoving observers, which may be summarized as follows:
\bea
\rho^{0} (\tau ) &=& \tau ( \zeta_L - \zeta^{*}_L) + \cdots ,\label{map-coefficient-1} \\
\rho^{i} (\tau ) &=& 0 + \cdots , \label{map-coefficient-2} \\
A_{\bar \imath}^{0} (\tau) &=& \tau \partial_{\bar \imath} \zeta_L + \cdots, \label{map-coefficient-3} \\
A_{\bar \imath}^{i} (\tau) &=& - \zeta^*_L \delta^i_{\bar \imath} + \cdots. \label{map-coefficient-4}
\eea
\RED{From eq.~(\ref{map-coefficient-1}) it is clear that in attractor inflation (where $\zeta_L$ freezes) time remains synchronized, while in the non-attractor case (where $\zeta_L$ evolves) the watches of a comoving and a free-falling observer do not run at the same rate.}
We notice that the authors of ref.~\cite{Cabass:2016cgp} discuss alternative choices to fix $\tau_*$ and the associated initial conditions.
\subsection{Computation of the squeezed limit}
We are finally ready to compute the observed bispectrum's squeezed limit. This computation was already outlined in Section~\ref{correlations-in-CFC}. We start by explicitly computing the two point correlation function $\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle$ using (\ref{zeta-s}) to express $\bar \zeta_S$ in terms of $ \zeta_S$:
\be
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \left\langle \left( \zeta_S (x(\bar x_1)) + \frac{1}{3} [h_{S}]_{0}{}^i (\bar x_1) \partial_i \xi_L^0 (\bar x_1) \right) \left( \zeta_S (x(\bar x_2)) + \frac{1}{3} [h_{S}]_{0}{}^i (\bar x_2) \partial_i \xi_L^0 (\bar x_2) \right) \right\rangle .
\ee
Notice that we have kept $x(\bar x)$ in the argument of $\zeta_S$ at the right hand side, which also depends on $\zeta_L$. We shall deal with this dependence in a moment. Expanding the previous expression up to linear order in $\xi^0_L$, we have
\bea
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle &=& \langle \zeta_S (x(\bar x_1)) \zeta_S (x(\bar x_2)) \rangle \nn \\
&&+ \frac{1}{3} \left[ \left\langle \zeta_S (x( \bar x_1)) [h_{S}]_{0}{}^i (\bar x_2) \right\rangle + \left\langle [h_{S}]_{0}{}^i (\bar x_1) \zeta_S (x(\bar x_2)) \right\rangle \right] \partial_i \xi_L^0 .
\eea
It is not difficult to show that, because $[h_{S}]_{0i}$ is a gradient, the last two terms of this expression cancel each other. Then, we are left with:
\be
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \left\langle \zeta_S (x(\bar x_1)) \zeta_S (x(\bar x_2)) \right\rangle .
\ee
Next, we may expand $x(\bar x)$ appearing in the argument of $\zeta_S$ in terms of $\zeta_L$. This gives:
\be
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \left[ 1 + \xi_L^{\mu} (\tau , \bar {\bf x}_1) \partial^{(1)}_{\mu} + \xi_L^{\mu} (\tau , \bar {\bf x}_2) \partial^{(2)}_{\mu} \right] \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle , \label{two-point-00}
\ee
where $\xi^{\mu} (\tau , \bar {\bf x})$ is given in eq.~(\ref{def-xi-tau-x}), and where $\partial^{(1)}_{\mu}$ and $\partial^{(2)}_{\mu}$ are partial derivatives with respect to $\bar x_1$ and $\bar x_2$ respectively. As already explained in Section~\ref{correlations-in-CFC}, the two point correlation function $\left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle$ depends on $\zeta_L$. But given that $\xi^{\mu}$ in (\ref{two-point-00}) already depends linearly on $\zeta_L$, in order to keep the leading terms, we may re-write the previous expression as
\be
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle + \left[ \xi_L^{\mu} (\tau , \bar {\bf x}_1) \partial^{(1)}_{\mu} + \xi_L^{\mu} (\tau , \bar {\bf x}_2) \partial^{(2)}_{\mu} \right] \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0 , \label{two-point-0}
\ee
where $\left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0$ is the two point correlation function in comoving coordinates with $\zeta_L \to 0$ (that is, without a modulation coming from the long wavelength mode). Notice that $\left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0$ is nothing but the two point correlation function of $\zeta_S$ with spatial arguments given by $\bar x_1$ and $\bar x_2$ as if it was computed in comoving coordinates. The result is a function of time $\tau$, and the difference $|\bar {\bf x}_1- \bar {\bf x}_2|$:
\be
\left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0 = \left\langle \zeta_S \zeta_S \right\rangle (\tau , r) , \qquad r \equiv |\bar {\bf x}_2- \bar {\bf x}_1| . \label{spatial-homogeneity}
\ee
Using this result back into eq.~(\ref{two-point-0}) together with the map coefficients of eqs.~(\ref{map-coefficient-1})-(\ref{map-coefficient-4}), we find (see Appendix~\ref{app:two-point} for details):
\bea
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle &=& \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle \nn \\
&& + \left[ ( \zeta_L - \zeta^{*}_L) \frac{\partial}{\partial \ln \tau} + \frac{1}{2} ( x_1^{\bar \imath} + x_2^{\bar \imath}) \partial_{\bar \imath} \zeta_L \frac{\partial}{\partial \ln \tau} - \zeta^*_L \frac{\partial}{\partial \ln r} \right] \left\langle \zeta_S \zeta_S \right\rangle (\tau , r) . \qquad \label{two-point-S}
\eea
Recall that $\zeta_L$ is evaluated at ${\bf x}_c$. However, given that it is a long wavelength mode, we may as well consider it to be evaluated at $\bar {\bf x}_L = (\bar {\bf x}_1 + \bar {\bf x}_2)/2$ without modifying any conclusion. Note that the second term inside the square brackets is necessarily subleading since it involves a spatial derivative of the long-wavelength mode $\zeta_L$. For this reason, we disregard it. To continue, we may now Fourier transform this expression. First, we introduce
\be
\zeta ({\bf x}) = \frac{1}{(2 \pi)^3} \int d^3 k \zeta({\bf k}) e^{ i {\bf k} \cdot {\bf x}} ,
\ee
which implies that
\bea
\langle \bar \zeta_S (\bar x_1) \bar \zeta_S (\bar x_2) \rangle = \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \nn \\
+ \int \!\! \frac{d^3 k}{(2 \pi)^3}e^{ i {\bf k} \cdot {\bf x}_L} \left( \left[ \zeta({\bf k}) - \zeta_*({\bf k}) \right] \frac{\partial}{\partial \ln \tau} - \zeta_*({\bf k}) \frac{\partial}{\partial \ln r} \right) \left\langle \zeta_S \zeta_S \right\rangle (\tau , r). \label{zeta-L-SS-Fourier}
\eea
Then, Fourier transforming the fields $\bar \zeta_S (\bar x_1)$ and $\bar \zeta_S (\bar x_2)$, we arrive at (see Appendix~\ref{app:two-point} for details)
\bea
\langle \bar \zeta_S \bar \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) = \langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \nn \\
+ \left( \left[ \zeta({\bf k}_L) - \zeta_*({\bf k}_L) \right] \frac{\partial}{\partial \ln \tau} + \zeta_*({\bf k}_L) \left[ n_s(k_S , \tau)- 1 \right] \right) P_\zeta (\tau, k_S) , \label{CFC-effect-power}
\eea
where ${\bf k}_L = {\bf k}_1 + {\bf k}_2$ and ${\bf k}_S = ( {\bf k}_1 - {\bf k}_2 ) / 2$. In the previous expressions, the power spectrum $P_\zeta (\tau, k) $ of $\zeta ({\bf k})$ and its spectral index $n_s (k) - 1$ are defined as
\bea
P_\zeta (\tau, k) = \int d^3 r e^{ - i {\bf k} \cdot {\bf r}} \left\langle \zeta \zeta \right\rangle (\tau , r) , \\
n_s(k , \tau)- 1 = \frac{\partial }{\partial \ln k} \ln ( k^3 P_\zeta (\tau, k) ) .
\eea
Equation (\ref{CFC-effect-power}) gives the power spectrum in conformal Fermi coordinates expressed in terms of the curvature perturbations defined in comoving coordinates.
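The definition of the spectral index above can be sanity-checked (our example, not in the original) on a power-law ansatz $P_\zeta = A \, k^{n_s - 4}$, for which $k^3 P_\zeta \propto k^{n_s - 1}$ and the formula returns $n_s - 1$ exactly:

```python
import sympy as sp

k, A, n_s = sp.symbols('k A n_s', positive=True)

P = A * k**(n_s - 4)          # power-law ansatz for P_zeta(tau, k)

# n_s - 1 = d ln(k^3 P) / d ln k, with d/d ln k = k d/dk
index = sp.simplify(k * sp.diff(sp.log(k**3 * P), k))
assert sp.simplify(index - (n_s - 1)) == 0
```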
To continue, notice that since we have split the curvature perturbation into short and long wavelength modes, the two point correlation function $\langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) $ will be modulated by the long wavelength mode $\zeta_L$ in comoving coordinates. The squeezed limit of the bispectrum in comoving coordinates appears as the formal limit:
\be
\lim_{k_3 \to 0}(2 \pi)^3 \delta ({\bf k}_1 + {\bf k}_2 +{\bf k}_3) B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) = \langle \zeta_L ({\bf k}_3) \langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) \rangle .
\ee
Thus, we see that if we correlate the expression of eq.~(\ref{CFC-effect-power}) with $\zeta_L ({\bf k}_3)$ we obtain (after using eq.~(\ref{Split-2}))
\bea
\langle \bar \zeta_L ({\bf k}_3) \langle \bar \zeta_S \bar \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) \rangle &=& (2 \pi)^3 \delta ({\bf k}_1 + {\bf k}_2 +{\bf k}_3) B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) \nn \\
&& + (2 \pi)^3 \delta ({\bf k}_1 + {\bf k}_2 +{\bf k}_3) P_\zeta (\tau, k_3) \frac{\partial}{\partial \ln \tau} P_\zeta (\tau, k_S) \nn \\
&& - \langle \zeta_L ({\bf k}_3) \zeta_*({\bf k}_L) \rangle \left[ \frac{\partial}{\partial \ln \tau} - \left[ n_s(k_S , \tau)- 1 \right] \right] P_\zeta (\tau, k_S) . \label{3-point-1}
\eea
\RED{Here we still work with the notation ${\bf k}_L = {\bf k}_1 + {\bf k}_2$ and ${\bf k}_S = ( {\bf k}_1 - {\bf k}_2 ) / 2$.} Notice that this expression contains the quantity $ \langle \zeta_L ({\bf k}_3) \zeta_*({\bf k}_L) \rangle $, which correlates two $ \zeta_L $ at two different times. In what follows, we show that
\be
\langle \bar \zeta_L ({\bf k}_3) \langle \bar \zeta_S \bar \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) \rangle = 0 , \label{3-point-2}
\ee
for both attractor and non-attractor models of inflation.
\subsection{Vanishing of local non-Gaussianity} \label{sec:vanishing_of_NG}
In a recent article~\cite{Bravo:2017wyw} we have derived a generalized version of the non-Gaussian consistency relation valid for the two regimes of interest: attractor and non-attractor models. The squeezed limit of the 3-point correlation function for $\zeta$ was found to be given by:\footnote{In~\cite{Finelli:2017fml} a different expression for the generalization of the squeezed limit was obtained. The derivation of~\cite{Finelli:2017fml} is based on the use of the operator product expansion to find the squeezed limit of the three point functions for $\zeta$. The main difference with the result (\ref{power-S-corr-2}) found in~\cite{Bravo:2017wyw} consists of the presence of terms with two $\ln \tau$-derivatives, instead of one.}
\bea
\langle \zeta_L ({\bf k}_3) \langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) \rangle &=& - \langle \zeta_L ({\bf k}_3) \left[ \zeta_L ({\bf k}_L) - \zeta_L^* ({\bf k}_L )\right] \rangle \frac{d}{d \ln \tau} P_\zeta (\tau, k_S) \nn \\
&& - \langle \zeta_L ({\bf k}_3) \zeta_L^* ({\bf k}_L) \rangle \left[ n_s(k_S , \tau)- 1 \right] P_\zeta (\tau, k_S) , \label{power-S-corr-2}
\eea
where the fields $\zeta_L$ and $\zeta_S$ are evaluated at a time $\tau$, and $\zeta_L^*$ is evaluated at a reference initial time $\tau_*$. In order to derive this expression, we used the fact that the cubic action for the curvature perturbation $\zeta$ is approximately invariant under space-time reparametrizations given by
\bea
&& x \to x' = e^{ \zeta_L (\tau_*) } x , \label{rep-4} \\
&& \tau \to \tau' = e^{- \zeta_L (\tau) + \zeta_L (\tau_*) } \tau . \label{rep-5}
\eea
It may be seen how these relations resemble the coordinate transformation implied by eqs.~(\ref{map-coefficient-1})-(\ref{map-coefficient-4}), except for the signs of $\zeta_L (\tau)$ and $\zeta_L (\tau_*)$.
In the case of attractor models, one has that the comoving curvature perturbation $\zeta$ becomes constant on superhorizon scales. This implies that $\zeta_*({\bf k}_L) = \zeta_L ({\bf k}_3) $ and that $\ln \tau$-derivatives of the power spectrum vanish. Then eq.~(\ref{power-S-corr-2}) reduces to
\bea
B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) &=& - \left[ n_s(k_S , \tau)- 1 \right] P_\zeta (\tau, k_L) P_\zeta (\tau, k_S) .
\eea
This is the well known Maldacena's consistency condition. On the other hand, in non-attractor models of inflation (ultra slow-roll) the modes grow exponentially fast on superhorizon scales, and one finds a leading contribution given by (note that $n_s(k_3 , \tau)- 1 \propto \epsilon_* (\tau / \tau_*)^6$, so it may be regarded as formally zero in the long wavelength limit):
\bea
B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) &=& - P_\zeta (\tau, k_L) \frac{d}{d \ln \tau} P_\zeta (\tau, k_S) . \label{power-S-corr-2-leading}
\eea
More precisely, the superhorizon modes evolve like $\zeta ({\bf k}, \tau) = (\tau_* / \tau)^3 \zeta ({\bf k}, \tau_*) $~\cite{Kinney:2005vj, Namjoo:2012aa, Martin:2012pe, Mooij:2015yka}. For this reason, the power spectrum scales as $\tau^{-6}$:
\be
P_\zeta (\tau, k) = \left(\frac{\tau_*}{ \tau} \right)^6 P_\zeta (\tau_*, k) .
\ee
In this case, eq.~(\ref{power-S-corr-2-leading}) finally gives
\be
B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) = 6 P_\zeta (\tau , k_L ) P_\zeta (\tau, k_S),
\ee
which is the well known ultra slow-roll bispectrum~\cite{Namjoo:2012aa}.
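As a quick numerical cross-check (our illustration, not part of the original derivation), the $\tau^{-6}$ scaling above directly reproduces the coefficient $6$: since $P_\zeta \propto \tau^{-6}$ implies $d P_\zeta / d\ln\tau = -6 P_\zeta$, eq.~(\ref{power-S-corr-2-leading}) gives $B_\zeta = 6 P_\zeta(k_L) P_\zeta(k_S)$.

```python
import numpy as np

# Superhorizon ultra-slow-roll scaling quoted above: P(tau) = (tau_*/tau)^6 P(tau_*).
tau_star = -1.0        # reference conformal time (arbitrary units, tau < 0)
P_star = 2.1e-9        # illustrative amplitude at tau_star

def P(tau):
    return (tau_star / tau) ** 6 * P_star

tau = -0.3
h = 1e-6               # step in ln|tau| for a central finite difference
dP_dlntau = (P(tau * np.exp(h)) - P(tau * np.exp(-h))) / (2 * h)

# dP/dln(tau) = -6 P, so the leading squeezed bispectrum is B = 6 P(k_L) P(k_S).
assert np.isclose(dP_dlntau, -6 * P(tau), rtol=1e-8)
```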
Now, independently of the specific form of the bispectrum in these two regimes, we see that eq.~(\ref{power-S-corr-2}) implies a cancellation between $B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3)$ and the rest of the terms in eq.~(\ref{3-point-1}):
\be
B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) + \Delta B_{\zeta} ({\bf k}_1 , {\bf k}_2 , {\bf k}_3) = 0 .
\ee
Thus, we conclude that in the CFC frame the bispectrum vanishes \red{during both the attractor and non-attractor regimes. In the next section we will argue why we expect this result to survive after inflation ends.}
\subsection{On the validity of CFC for non-attractor models}
Let us briefly come back to the issue raised in Section~\ref{sec:Construction-CFC}, regarding the validity of the CFC map in the case of non-attractor models. We note that even if $\zeta$ grows as $a^3$ on superhorizon scales, during inflation it is always small, since it should reach its value observed in the CMB $\zeta_{\rm CMB} \simeq 10^{-5}$ when it stops evolving, i.e., at the end of inflation.
Therefore, the validity of the CFC transformation does not rely on the size of $\zeta$ but on the possibility that the map depends on time derivatives of $\zeta$, which are of order $\mathcal H \zeta$. To be more precise, notice from eqs.~(\ref{CFC-metric-1})-(\ref{CFC-metric-3}) that the size of the patch surrounding the central geodesic G has to be such that
\be
\left| \left( \tilde R_{\bar 0 \bar k \bar 0 \bar l} \right)_{P} \, \Delta x^{\bar k} \Delta x^{\bar l} \right| \ll 1 , \label{eq:condition-1}
\ee
where $\tilde R_{\bar 0 \bar k \bar 0 \bar l}$ are components of the Riemann tensor constructed from the conformally flat metric $\tilde g_{\mu \nu} \equiv a_F^{-2} (\bar \tau) g_{\mu \nu}$. This means that $\tilde R_{\bar 0 \bar k \bar 0 \bar l}$ is at least linear in the perturbations. Then, eq.~(\ref{eq:condition-1}) simply translates into the condition that
$|\Delta x|^2 \mathcal H^2 \zeta_L \ll 1$, where we have used that time derivatives of $\zeta$ are of order $\mathcal{H}\zeta$. This last inequality is guaranteed by our previous remark on the size of $\zeta$.
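To get a feeling for the numbers (a back-of-the-envelope illustration of ours, using the CMB-sized value $\zeta_L \sim 10^{-5}$ quoted above), the condition $|\Delta x|^2 \mathcal H^2 \zeta_L \ll 1$ allows CFC patches as large as a few hundred Hubble radii:

```python
# Back-of-the-envelope estimate (our illustration) of the CFC patch size allowed
# by the condition |Delta x|^2 H^2 zeta_L << 1 for a CMB-sized perturbation.
zeta_L = 1e-5                      # typical amplitude of the curvature perturbation
max_patch_hubble = zeta_L ** -0.5  # |Delta x| * H must stay well below this
print(f"|Delta x| H << {max_patch_hubble:.0f} Hubble radii")  # prints "... << 316 Hubble radii"
```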
Next, one could be worried about higher order time derivatives of $\zeta$ emerging in higher order terms, for instance those of order $\mathcal O (\Delta \bar x^3)$ that appear in eqs.~(\ref{CFC-metric-1})-(\ref{CFC-metric-3}). However, as noted before, higher order contributions that are linear in $\tilde R_{\bar 0 \bar k \bar 0 \bar l}$ (and therefore linear in $\zeta$) only bring in spatial derivatives with respect to $x^{\bar \imath}$. Higher order derivatives with respect to time will only come about from terms which are quadratic in $\tilde R_{\bar 0 \bar k \bar 0 \bar l}$, and are thus of order $\zeta_L^2$. Therefore, even if in ultra slow-roll $\zeta$ does grow exponentially on superhorizon scales, the convergence of both these expansions is still guaranteed.
\setcounter{equation}{0}
\section{Discussion and conclusions}
\label{s4:Conclusions}
We have studied the computation of local non-Gaussianity accessible to inertial observers in canonical models of single field inflation. It was already known~\cite{Tanaka:2011aj, Pajer:2013ana, Cabass:2016cgp, Tada:2016pmk} that observable local non-Gaussianity vanishes in the case of single field attractor models ($f_{NL}^{\rm obs} = 0$) modulo projection effects. In this work, we have extended this result to the case of non-attractor models (ultra slow-roll) in which the standard derivation gives a sizable value $f_{NL} = 5/2$. This standard result was thought to represent a gross violation of Maldacena's consistency relation. We have instead shown that for both classes of models, the consistency relation is simply:
\be
f_{\rm NL}^{\rm obs} = 0. \label{conc-main-result}
\ee
This result is noteworthy: in ultra slow-roll, comoving curvature perturbations experience an exponential superhorizon growth, and this growth was taken to be the natural explanation underlying large local non-Gaussianity. But this is certainly not the case.
Our results shed new light on the role of the squeezed limit of the bispectrum as a probe of primordial cosmology. We now know that non-Gaussianity cannot discriminate between two drastically different regimes of inflation. Instead, we are forced to think of new ways of testing the evolution of curvature perturbations in non-attractor backgrounds. This is particularly important once we face the possibility that ultra slow-roll could be representative of a momentary phase within a conventional slow-roll regime~\cite{Germani:2017bcs, Dimopoulos:2017ged}.
In order to derive (\ref{conc-main-result}), we have re-examined the use of conformal Fermi coordinates introduced in ref.~\cite{Pajer:2013ana} and perfected in refs.~\cite{Dai:2015rda, Cabass:2016cgp}. Our results complement these works. For instance, the vanishing of $f_{\rm NL}^{\rm obs}$ in the case of non-attractor models required us to consider in detail the contribution of time-displacements of the CFC map that are irrelevant in the case of attractor models.
The previous remark offers a way to understand the vanishing of local $f_{\rm NL}^{\rm obs}$ for the case of non-attractor models. To appreciate this, let us first focus on the case of attractor models. Notice that in the case of attractor models the freezing of the curvature perturbation can be absorbed at superhorizon scales through a re-scaling of the \RED{spatial} coordinates, which, to linear order in the perturbations, looks like ${\bf x} \to {\bf x}' = {\bf x} + \zeta_* {\bf x}$, where $\zeta_*$ is the value of the mode at horizon crossing. It is precisely this scaling that gives rise to the modulation of small scale perturbations by long scale perturbations in comoving gauge. The map coefficients of eq.~(\ref{map-coefficient-1}) show that in attractor inflation the local transformation corresponds to ${\bf x} \to \bar {\bf x} = {\bf x} - \zeta_* {\bf x}$. This transformation is opposite to the previous re-scaling, and therefore it cancels the effect of the modulation in comoving coordinates. \RED{Note that all these transformations act only on spatial coordinates, the time coordinate remains untouched in the attractor case.}
Now, something similar happens in the case of non-attractor models. Here, the curvature perturbation does not freeze on superhorizon scales. Instead, on superhorizon scales the mode acquires a time dependence that may be absorbed by a re-scaling of time $\tau \to \tau' = \tau - \zeta(\tau) \tau$ in the argument of the scale factor (in comoving coordinates). Similar to the case of attractor models, the map coefficients of eq.~(\ref{map-coefficient-1}) show that in the non-attractor regime the local CFC transformation corresponds to $\tau \to \bar \tau = \tau + \zeta(\tau) \tau$, which is opposite to the previous re-scaling, and so it cancels the whole modulation effect.
More generally, and independently of whether we are looking into the attractor regime or the non-attractor regime, the cancellation may be understood as follows: The squeezed limit of the 3-point correlation function of canonical models of inflation is the consequence of a symmetry of the action for $\zeta$ under the special class of space-time reparametrizations shown in eqs.~(\ref{rep-4})-(\ref{rep-5}). This symmetry is exact in the two regimes that we have studied, but approximate in intermediate regimes. In addition, this symmetry dictates the way in which long-wavelength $\zeta$-modes modulate their short wavelength counterparts. The CFC transformation is exactly the inverse of the symmetry transformation, and so the modulation deduced with the help of the symmetry is cancelled by moving into the CFC frame.
At this point, it is important to emphasize that our computation was performed during inflation. That is, we have performed the CFC transformation while inflation takes place, and the result $B + \Delta B = 0$ found in Section~\ref{sec:vanishing_of_NG} is strictly valid during inflation. The claim that the primordial contribution to $f_{\rm NL}^{\rm obs}$ vanishes for a late time observer must be a consequence of the CFC transformation, taking into account the entire cosmic history. This would require studying the transition from the non-attractor phase to the next phase, which presumably could be of the attractor class, a study already begun in \cite{Cai:2016ngx}. \red{Note that the non-attractor nature of USR inflation leads to many different ways to end this phase. However, given that in both regimes (pure ultra slow-roll and pure slow-roll inflation), we have seen that }both $B$ and $\Delta B$ found in Section~\ref{sec:vanishing_of_NG} are exactly the same (but of opposite signs) and determined by $\tau$-derivatives and $k$-derivatives of the power spectrum, we expect that the end of non-attractor inflation (which could be a transition to a slow-roll phase) will affect equally $B$ and $\Delta B$, in such a way that the net result \red{will} continue to be $B + \Delta B = 0$. Verifying this claim, which seems reasonable, is however out of the scope of the present article.\footnote{In a recent article~\cite{Cai:2017bxr} (written simultaneously to this work), Cai \emph{et al.} studied the effects on the bispectrum $B$ of a transition from a non-attractor phase to an attractor phase. They discovered that the transition can drastically change the \red{comoving} value of $f_{\rm NL}$, suppressing its value if the transition is smooth. Then, the question would be: what happens with $\Delta B$ during such transitions? \red{Does the transition from comoving to Fermi coordinates continue to cancel the squeezed bispectrum computed in comoving coordinates? 
As we explain above, we expect the answer to be affirmative.}}
\red{Our main reason for expecting that }local non-Gaussianity \red{when expressed in free-falling Fermi coordinates vanishes} in both attractor and non-attractor models of single field inflation \red{ is that} in both cases, after the inflaton scalar degree of freedom is swallowed by the gravitational field, the only dynamical scalar degree of freedom corresponds to the curvature perturbation. As a consequence, the interaction coupling together long and short wavelength modes is purely gravitational, and therefore the equivalence principle dictates that long wavelength physics cannot affect the evolution of short wavelength dynamics, implying that any observable effect must be suppressed by a ratio of scales $\mathcal O (k_L / k_S)^2$. All of this calls for a better examination of the relation between the local ansatz and the squeezed limit of the bispectrum~\cite{dePutter:2016moa}.
Our work leaves several open challenges ahead. First, we have focussed our interest in ca\-no\-ni\-cal models of inflation, namely, those in which the inflaton field is parametrized by a Lagrangian containing a canonical kinetic term. In this category, the ultra slow-roll regime is not fully realistic, and at best should be considered as a toy model allowing the study of perturbations under the extreme conditions of a non-attractor background. However, it has been shown that \RED{non}-attractor regimes may appear more realistically within non-canonical models of inflation such as $P(X)$ models. In these models one has non-gravitational interactions inducing a sound speed $c_s \neq 1$, and so we suspect that our result (\ref{conc-main-result}) will not hold in those cases. \RED{This intuition is mainly based on the fact that in non-attractor models, the squeezed limit gets an enhancement when $c_s^2 \neq 1$ as shown in Ref.~\cite{Chen:2013aj}. At any rate}, this work (together with ref.~\cite{Bravo:2017wyw}) calls for a better understanding of the non-Gaussianity predicted by non-attractor models in general.
Second, given that observable local non-Gaussianity vanishes in ultra slow-roll, in which curvature perturbations grow exponentially on superhorizon scales, one should revisit the status of other classes of inflation, such as multi-field inflation, where local non-Gaussianity may be large (a first look into this issue has already been undertaken in ref.~\cite{Tada:2016pmk}). It is quite feasible that in some models of multi-field inflation the amount of local non-Gaussianity may be understood as the consequence of a space-time symmetry dictating the way in which long-wavelength modes modulate short modes.
Third, a deeper understanding of our present result is in order. In the case of attractor models, Maldacena's consistency relation (and its vanishing) may be understood as a consequence of soft limit identities linking the non-linear interaction of long wavelength perturbations with shorter ones~\cite{Creminelli:2012ed, Hinterbichler:2012nm, Senatore:2012wy, Assassi:2012zq, Hinterbichler:2013dpa, Goldberger:2013rsa, Creminelli:2013cga}. However, there were good reasons to suspect that these relations would not hold anymore in the case of non-attractor models~\cite{Pajer:2013ana}. Our results suggest that, regardless of the background, these identities continue to be valid, and in an inertial frame the gravitational interaction cannot be responsible for making long wavelength modes affect the local behavior of short wavelength modes.
\subsection*{Acknowledgements}
We would like to thank Xingang Chen, Enrico Pajer, Dong-Gang Wang and Mat\'ias Zaldarriaga for helpful discussions and comments. GAP acknowledges support from the Fondecyt Regular project number 1171811. B.P. acknowledges the CONICYT-PFCHA Magister Nacional Scholarship 2016-22161360. RB also acknowledges the support from CONICYT-PCHA Doctorado Nacional scholarship 2016-21161504. SM is funded by the Fondecyt 2015 Postdoctoral Grant 3150126.
\begin{appendix}
\setcounter{equation}{0}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\section{Details of computations}
In this appendix we provide details of various intermediate steps performed in the bulk of the work.
\subsection{Map coefficients} \label{app:map-coeff}
Here we show how to compute the map coefficients appearing in eqs.~(\ref{map-coefficient-int-1})-(\ref{map-coefficient-int-6}). To start with, we notice that the map of eq.~(\ref{CFC-map-inflat}) implies that along the geodesic (where $\Delta x^{\bar \imath} = 0$) the following relations must be satisfied:
\bea
\frac{\partial \tau}{\partial x^{\bar \imath}} &=& A^0_{\bar \imath} , \label{coef-der-1} \\
\frac{\partial x^i}{\partial x^{\bar \imath}} &=& A^i_{\bar \imath} , \label{coef-der-2} \\
\frac{\partial x^i}{\partial \bar \tau} &=& \frac{\partial \rho^{i}}{\partial \bar \tau} . \label{coef-der-3}
\eea
In addition, it is useful to recall the transformation rule of eq.~(\ref{standard-coordinate-trans}) connecting the metric in comoving coordinates and conformal Fermi coordinates:
\be
g_{\bar \alpha \bar \beta} = \frac{\partial x^\mu}{\partial x^{\bar \alpha}} \frac{\partial x^\nu}{\partial x^{\bar \beta}} g_{\mu \nu} . \label{standard-coordinate-trans-app}
\ee
At any point on the geodesic we have that $g_{\bar \alpha \bar \beta} = a_{F}^2 (\bar \tau) \eta_{\bar \alpha \bar \beta}$. Then, given that the tetrads satisfy $g_{\mu \nu} e^\mu_{\bar \alpha} e^\nu_{\bar \beta} = \eta_{\bar \alpha \bar \beta}$, we see that eqs.~(\ref{coef-der-1}) and~(\ref{coef-der-2}) imply
\bea
A^0_{\bar \imath} &=& a_F(\bar \tau) e^{0}_{\bar \imath} , \\
A^i_{\bar \imath} &=& a_F(\bar \tau) e^{i}_{\bar \imath} .
\eea
Then, using the expression of eq.~(\ref{tetrad-pert-2}) for the tetrad $e^{\mu}_{\bar \imath}$, and keeping the leading contributions in terms of the perturbations, we find
\bea
A^0_{\bar \imath} &=& \delta^i_{\bar \imath} \partial_{i} \mathcal F, \label{A-0-i-app} \\
A^i_{\bar \imath} &=& \left[ \frac{a_F (\bar \tau)}{a (\tau)} - 1 - \zeta \right] \delta^i_{\bar \imath} ,
\eea
where we used the fact that $a_F (\bar \tau) / a (\tau) - 1$ is of linear order. These correspond to the desired expressions for the map coefficients (\ref{map-coefficient-int-3}) and (\ref{map-coefficient-int-4}).
Next, let us consider eq.~(\ref{standard-coordinate-trans-app}) with the choice $(\bar \alpha , \bar \beta) = (\bar 0 , \bar 0)$. Because along the geodesic one has $g_{\bar 0 \bar 0} = - a_F^2 (\bar \tau)$, using the form of the metric $g_{\mu \nu}$ introduced in eq.~(\ref{s2-perturbed-metric}) one arrives at
\be
\frac{a_F^2 (\bar \tau)}{a^2 (\tau)} = \left( \frac{\partial \tau}{\partial \bar \tau} \right)^2 \left[ 1 - h_{00} \right] - 2 \frac{\partial \tau }{\partial \bar \tau} \frac{\partial x^i}{\partial \bar \tau} h_{0 i} - \frac{\partial x^i}{\partial \bar \tau} \frac{\partial x^j}{\partial \bar \tau} \delta_{ij} e^{2 \zeta} .
\ee
Keeping the leading terms, the previous expression may be rewritten as:
\be
\frac{\partial \tau}{\partial \bar \tau} = \frac{a_F (\bar \tau)}{a (\tau)} + \frac{1}{2} h_{00} .
\ee
Then, integrating with respect to time we arrive at:
\be
\tau - \tau_* = \int^{\tau}_{\tau_*} ds \left[ \frac{a_F (\bar \tau)}{a (\tau)} + \frac{1}{2} h_{00} \right] . \label{app-pre-integration}
\ee
As a last step, note that $\rho^{0} = \tau - \bar \tau$ by definition. This implies that:
\be
\rho^0 = \int^{\tau}_{\tau_*} ds \left[ \frac{a_F (\bar \tau)}{a (\tau)} - 1 + \frac{1}{2} h_{00} \right] ,
\ee
which is the desired expression giving us the map coefficient of eq.~(\ref{map-coefficient-int-1}). Notice that we have imposed the condition $\rho^0 = 0$ at $\tau = \tau_*$ to synchronize the map.
To conclude, let us consider eq.~(\ref{standard-coordinate-trans-app}) one more time for the case $(\bar \alpha , \bar \beta) = (\bar 0 , \bar \imath)$:
\be
0 = \frac{\partial \tau}{\partial \bar \tau} \frac{\partial \tau}{\partial x^{\bar \imath}} \left[ - 1 + h_{00} \right] + \frac{\partial x^i }{\partial \bar \tau} \frac{\partial \tau }{\partial x^{\bar \imath}} h_{i 0}+ \frac{\partial \tau }{\partial \bar \tau} \frac{\partial x^j}{\partial x^{\bar \imath}} h_{0 j} + \frac{\partial x^i}{\partial \bar \tau} \frac{\partial x^j}{\partial x^{\bar \imath}} \delta_{ij} e^{2 \zeta} .
\ee
Keeping the leading terms in the perturbations, we obtain
\be
\frac{\partial \rho^i}{\partial \bar \tau} = A^0_{\bar \imath} \delta^{i \bar \imath} - \delta^{ij} h_{0 j} .
\ee
Then, inserting our previous result of eq.~(\ref{A-0-i-app}) and integrating with respect to time, we finally arrive at:
\be
\rho^{i} = \int^{\tau}_{\tau_*} \!\!\! ds \, V^{i} ,
\ee
which corresponds to the map coefficient of eq.~(\ref{map-coefficient-int-2}).
Finally, the map coefficients of eqs.~(\ref{map-coefficient-int-5}) and (\ref{map-coefficient-int-6}) follow directly from eq.~(\ref{s2:CFC-general-map}) after keeping the leading terms in the perturbations.
\subsection{The 2-point correlation function for short modes} \label{app:two-point}
Here we give details on how to derive eqs.~(\ref{two-point-S}) and (\ref{CFC-effect-power}). Let us start with eq.~(\ref{two-point-S}). First, notice that the second term at the right hand side of eq.~(\ref{two-point-0}) may be rewritten as
\bea
\left[ \xi_L^{\mu} (\tau , \bar {\bf x}_1) \partial^{(1)}_{\mu} + \xi_L^{\mu} (\tau , \bar {\bf x}_2) \partial^{(2)}_{\mu} \right] \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0
= \bigg[ \frac{1}{2}( \xi_L^{0} (\tau , \bar {\bf x}_2) - \xi_L^{0} (\tau , \bar {\bf x}_1) ) ( \partial^{(2)}_{0} - \partial^{(1)}_{0} ) \quad \nn \\
+ \frac{1}{2} \bigg( \xi_L^{0} (\tau , \bar {\bf x}_1) + \xi_L^{0} (\tau , \bar {\bf x}_2) \bigg) \partial_{0} + \xi_L^{i} (\tau , \bar {\bf x}_1) \partial^{(1)}_{i} + \xi_L^{i} (\tau , \bar {\bf x}_2) \partial^{(2)}_{i}\bigg] \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0 . \qquad \label{two-point-app-1}
\eea
Let us focus for a moment on the first contribution of the right hand side, which is given by
\be
\frac{1}{2}A^0_{\bar \imath} ( x_2^{\bar \imath} - x_1^{\bar \imath} ) \left\langle ( \zeta_S ( \bar x_1 ) \zeta'_S ( \bar x_2 ) - \zeta'_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) ) \right\rangle_0, \label{app:first-contrib}
\ee
where we have used $\xi_L^{0} (\tau , \bar {\bf x}_2) - \xi_L^{0} (\tau , \bar {\bf x}_1) = A^0_{\bar \imath} ( x_2^{\bar \imath} - x_1^{\bar \imath} )$. Now recall that the label $0$ reminds us that we are dealing with a comoving curvature perturbation in the absence of non-linearities. We may therefore expand it as
\be
\zeta_S ( \bar x ) = \frac{1}{(2 \pi)^3} \int d^3 k \, e^{- i {\bf k} \cdot {\bf x}} \zeta (\tau , {\bf k}) , \qquad \zeta (\tau , {\bf k}) = \zeta_{k} (\tau) a_{\bf k} + \zeta_{k}^* (\tau) a_{-\bf k}^{\dag} ,
\ee
where $\zeta_{k} (\tau)$ is the amplitude of the mode, and $a_{\bf k}^{\dag}$ and $a_{\bf k}$ are creation and annihilation operators. It is then easy to show that
\be
\left\langle \zeta (\tau , {\bf k}_1) {\zeta^*}' (\tau , {\bf k}_2) - \zeta' (\tau , {\bf k}_1) \zeta^* (\tau , {\bf k}_2) \right\rangle_0 = (2 \pi)^3 \delta^{(3)} ({\bf k}_2 - {\bf k}_1) [ \zeta_{k_1}' (\tau) \zeta_{k_1}^* (\tau) - \zeta_{k_1} (\tau) {\zeta_{k_1}^*}' (\tau)] .
\ee
Now, $[ \zeta_{k_1}' (\tau) \zeta_{k_1}^* (\tau) - \zeta_{k_1} (\tau) {\zeta_{k_1}^*}' (\tau)]$ must be such that the canonical commutation condition for $\zeta$ is satisfied. This implies that (\ref{app:first-contrib}) is given by
\be
\frac{1}{2}A^0_{\bar \imath} ( x_2^{\bar \imath} - x_1^{\bar \imath} ) \left\langle ( \zeta_S ( \bar x_1 ) \zeta'_S ( \bar x_2 ) - \zeta'_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) ) \right\rangle_0 =\frac{1}{4 \epsilon a^2}A^0_{\bar \imath} ( x_2^{\bar \imath} - x_1^{\bar \imath} ) \delta^{(3)} ({\bf x_2} - {\bf x_1}),
\ee
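As a concrete check of this normalization (our sketch; the Bunch--Davies slow-roll mode function $\zeta_k = \frac{H}{\sqrt{4\epsilon k^3}}(1+ik\tau)e^{-ik\tau}$ and the de Sitter scale factor $a = -1/(H\tau)$ are standard assumptions not spelled out in the text), one can verify symbolically that the Wronskian combination indeed scales as $1/(2\epsilon a^2)$:

```python
import sympy as sp

tau = sp.symbols('tau', negative=True)   # conformal time during inflation
k, H, eps = sp.symbols('k H epsilon', positive=True)

# Bunch-Davies slow-roll mode function (assumed standard form, not from the text)
zeta = H / sp.sqrt(4 * eps * k**3) * (1 + sp.I * k * tau) * sp.exp(-sp.I * k * tau)
zeta_c = zeta.subs(sp.I, -sp.I)          # complex conjugate (all symbols real)

wronskian = sp.simplify(sp.diff(zeta, tau) * zeta_c - zeta * sp.diff(zeta_c, tau))

a = -1 / (H * tau)                       # de Sitter scale factor, a > 0 for tau < 0
# The combination fixed by the canonical commutator scales as 1/(2*eps*a^2):
assert sp.simplify(wronskian + sp.I / (2 * eps * a**2)) == 0
```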
which gives a vanishing contribution to (\ref{two-point-app-1}). Next, using (\ref{spatial-homogeneity}) and replacing in $\xi_L^{\mu}$ the map coefficients of eqs.~(\ref{map-coefficient-int-1})-(\ref{map-coefficient-int-4}), we find that the remaining terms in eq.~(\ref{two-point-app-1}) give:
\bea
&&\left[ \xi_L^{\mu} (\tau , \bar {\bf x}_1) \partial^{(1)}_{\mu} + \xi_L^{\mu} (\tau , \bar {\bf x}_2) \partial^{(2)}_{\mu} \right] \left\langle \zeta_S ( \bar x_1 ) \zeta_S ( \bar x_2 ) \right\rangle_0
\nn \\
&& \quad = \bigg[ \frac{1}{\tau} \left( \rho^{0} + \frac{1}{2} ( x_1^i + x_2^i )A^{0}_{i} \right) \frac{\partial}{\partial \ln \tau} + A^i_{\bar \imath} ( x_2^{\bar \imath} - x_1^{\bar \imath} ) (x_2^i - x_1^i) \frac{1}{r^2} \frac{\partial}{\partial \ln r} \bigg] \left\langle \zeta_S \zeta_S \right\rangle ( \tau , r) . \qquad \label{two-point-app-2}
\eea
This result then leads directly to eq.~(\ref{two-point-S}).
Next, we wish to derive eq.~(\ref{CFC-effect-power}) from eq.~(\ref{zeta-L-SS-Fourier}). The first step is simply to multiply eq.~(\ref{zeta-L-SS-Fourier}) by $e^{ - i {\bf k}_1 \cdot \bar {\bf x}_1- i {\bf k}_2 \cdot \bar {\bf x}_2}$ and integrate over the two spatial coordinates $\bar x_1$ and $\bar x_2$. This directly gives
\bea
\langle \bar \zeta_S \bar \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) &=& \langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) + \int d^{3} \bar x_1 d^{3} \bar x_2 e^{ - i {\bf k}_1 \cdot \bar {\bf x}_1- i {\bf k}_2 \cdot \bar {\bf x}_2} \int \!\! \frac{d^3 k}{(2 \pi)^3}e^{ i {\bf k} \cdot {\bf x}_L} \nn \\
&& \times \left( \left[ \zeta({\bf k}) - \zeta_*({\bf k}) + \zeta({\bf k}) i {\bf k} \cdot {\bf x}_L \right] \frac{\partial}{\partial \ln \tau} - \zeta_*({\bf k}) \frac{\partial}{\partial \ln r} \right) \left\langle \zeta_S \zeta_S \right\rangle (\tau , r). \qquad \label{app:zeta-L-SS-Fourier-1}
\eea
We may re-express the integral in terms of $\bf r = \bf {x}_1 - \bf {x}_2$ and ${\bf x}_L = ( \bf {x}_1+ \bf {x}_2) / 2$ instead of $\bar {\bf x}_1$ and $\bar {\bf x}_2$. One has that $d^{3} \bar x_1 d^{3} \bar x_2 = d^3 r d^3 x_L$, and so one finds
\bea
\langle \bar \zeta_S \bar \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) &=& \langle \zeta_S \zeta_S \rangle ({\bf k}_1 ,{\bf k}_2) + \int d^{3} r d^{3} x_L e^{ - i {\bf k}_S \cdot {\bf r} / 2 } \int \!\! \frac{d^3 k}{(2 \pi)^3} e^{ i ( {\bf k} - {\bf k}_L) \cdot {\bf x}_L } \nn \\
&& \times \left( \left[ \zeta({\bf k}) - \zeta_*({\bf k}) + \zeta({\bf k}) i {\bf k} \cdot {\bf x}_L \right] \frac{\partial}{\partial \ln \tau} - \zeta_*({\bf k}) \frac{\partial}{\partial \ln r} \right) \left\langle \zeta_S \zeta_S \right\rangle (\tau , r), \qquad \label{app:zeta-L-SS-Fourier-2}
\eea
where we have defined ${\bf k}_L = {\bf k}_1 + {\bf k}_2$ and ${\bf k}_S = ( {\bf k}_1 - {\bf k}_2 ) / 2$. The final step consists of explicitly integrating the coordinates ${\bf x}_L$ and $\bf r$. The $r$-integral gives us the power spectrum $P_\zeta (\tau, k_S)$. On the other hand, the $x_L$-integral gives us a $\delta^{(3)} ( {\bf k} - {\bf k}_L)$ that can be used to get rid of the $k$-integral (and produces the appearance of $\nabla_{{\bf k}_L}$). All of this together gives us the final result expressed in eq.~(\ref{CFC-effect-power}), after we throw away the term suppressed by ${\bf k}_L$.
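The change of variables used in this last step has unit Jacobian, which can be checked symbolically per spatial dimension (our sketch):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
r = x1 - x2              # relative coordinate
xL = (x1 + x2) / 2       # midpoint coordinate

# Jacobian of (x1, x2) -> (r, xL); |det| = 1 in each dimension,
# hence d^3 x1 d^3 x2 = d^3 r d^3 xL in three dimensions.
J = sp.Matrix([[sp.diff(r, x1), sp.diff(r, x2)],
               [sp.diff(xL, x1), sp.diff(xL, x2)]])
assert abs(J.det()) == 1
```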
\end{appendix}
The analytic signal, which was introduced by Gabor \cite{Gabor46} and Ville \cite{Ville48}, is a powerful tool in various applications such as communication \cite{Boashash92}, radar-based object detection \cite{Levanon04} and the processing of oceanic data \cite{Lilly06}. It is the complex signal obtained by adding the Hilbert transform of a real-valued signal to the signal itself as imaginary part; equivalently, it can be constructed by suppressing all negative frequency components of the original real signal. Since the qualitative and quantitative information of a signal can be separated into the local phase and local amplitude of its analytic signal, the instantaneous amplitude and phase have been a central topic of analytic signal analysis; a critical analysis of them was carried out by Picinbono \cite{Picibono97} in 1997. During the last several decades, many new complex analytic signal theories \cite{sangwine07,Hahn11,Unser09,Said08,Felsberg01,Yang17} have been established with the development of Clifford algebras and the associated Fourier transform theory.
The linear canonical transform (LCT), a powerful tool for optics and signal processing, was first introduced in the 1970s by Collins \cite{Collins70} and Moshinsky \cite{Moshinsky71}. It is a linear integral transform with three free parameters. Many famous transforms, such as the Fourier transform (FT), the fractional Fourier transform (FRFT) and the Fresnel transform (FST), are special cases of the LCT \cite{Wolf79, Ozaktas00, Pei01}. Since it possesses more degrees of freedom while requiring a computational cost similar to that of the FT and FRFT, the LCT has many applications, such as signal synthesis, radar system analysis, filter design and pattern recognition \cite{Ozaktas00, Pei03}. Recently, with the development of the LCT, the analytic signal has been extended to the LCT domain, initially by Fu and Li \cite{Fu08} and then to the 2D LCT domain by Xu et al. \cite{Xu09}. Furthermore, Kou \cite{Kou16} generalized the analytic signal to the quaternion domain with the help of the two-sided quaternion linear canonical transform (QLCT) and obtained satisfactory results by using this method on envelope detection problems.
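For readers who want to experiment numerically, a direct-quadrature sketch of the LCT takes only a few lines. We assume here the common normalization $L_{(a,b,c,d)}f(u) = \frac{1}{\sqrt{2\pi i b}}\int_{\mathbb R} e^{\frac{i}{2b}(a t^2 - 2ut + d u^2)} f(t)\,dt$ for $b \neq 0$ (conventions vary in the literature); with $(a,b,c,d) = (0,1,-1,0)$ it reduces to the FT up to a constant phase, as the Gaussian test below confirms.

```python
import numpy as np

def lct(f, t, u, a, b, d):
    """Direct quadrature of the LCT (b != 0), with the assumed normalization
    1/sqrt(2*pi*i*b); other conventions differ by a constant phase."""
    kern = np.exp(1j / (2 * b) * (a * t[None, :]**2
                                  - 2 * u[:, None] * t[None, :]
                                  + d * u[:, None]**2))
    dt = t[1] - t[0]
    return (kern * f[None, :]).sum(axis=1) * dt / np.sqrt(2j * np.pi * b)

t = np.linspace(-20, 20, 4001)
u = np.linspace(-3, 3, 61)
f = np.exp(-t**2 / 2)                 # Gaussian test signal

# (a,b,c,d) = (0,1,-1,0): the LCT reduces to the Fourier transform up to a
# constant phase, and the Gaussian maps to itself in magnitude.
F = lct(f, t, u, a=0.0, b=1.0, d=0.0)
assert np.allclose(np.abs(F), np.exp(-u**2 / 2), atol=1e-6)
```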
In the 3-D image processing domain, such as 3-D ultrasound image registration, the analytic signal is a powerful tool. Zhang \cite{Zhang06} first used it to process 3-D ultrasound images with the help of its phase information. Following this work, the phase information of the analytic signal has been widely used in image processing \cite{Harput11,Maltaverne10,Rajpoot09,Belaid11}. Since the local amplitude is another important part of the analytic signal, Wang \cite{Wang12} introduced the 3-D Clifford biquaternionic analytic signal, which is associated with the Clifford Fourier transform. With the help of partial modules, it overcomes the information loss of the classical 1-D analytic envelope detection tools. Since the analytic signal in the LCT domain supplies better envelope detection results than that in the FT domain, as shown by Kou in \cite{Kou16}, we generalize Wang's \cite{Wang12} Clifford biquaternionic analytic signal to the LCT domain, and with the help of the local amplitude, the envelopes of 3-D images are successfully detected. Synthetic examples show that our approach presents better results than Wang's method. Furthermore, a comparison with the amplitude method of the monogenic signal, which is another generalization of the analytic signal, verifies the power of our approach.
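The classical 1-D envelope detection referred to above can be reproduced in a few lines (our illustration, using SciPy's FFT-based Hilbert transform): the local amplitude $|f + i\mathcal{H}f|$ of an amplitude-modulated carrier recovers the modulating envelope.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 2000, endpoint=False)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)   # slow modulation (the "image")
carrier = np.cos(2 * np.pi * 100 * t)              # fast carrier
signal = envelope * carrier

analytic = hilbert(signal)          # f + i*H[f], computed via the FFT
recovered = np.abs(analytic)        # local amplitude of the analytic signal

# For this band-limited periodic signal the recovery is essentially exact.
assert np.allclose(recovered, envelope, atol=1e-6)
```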
The paper is organized as follows. First, we recall basic knowledge about the Clifford biquaternion and the $n$-dimensional analytic signal in Section 2. Section 3 is dedicated to the definition and basic properties of the Clifford linear canonical transform (CLCT), which are the keys to using the generalized analytic signal to detect the envelopes of 3-D images. In Section 4, a novel approach to envelope detection based on the CLCT is presented, and synthetic examples are given to show the advantages of this method. Finally, we conclude this article in Section 5.
\section{Preliminary}
\subsection{Clifford Biquaternion}
The quaternion algebra, which was discovered by Hamilton in 1843 and is denoted by $\mathbb{H}$, is a generalization of the complex numbers. Each quaternion $\mathbf{q}$ (denoted by a bold letter in this paper) has the form $\mathbf{q}=q_0+q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$, where $q_0,q_1,q_2,q_3$ are real numbers and $\textbf{i}$, $\textbf{j}$ and $\textbf{k}$ are imaginary units which satisfy $\textbf{i}^2=\textbf{j}^2=\textbf{k}^2=-1$ and $\textbf{ij}=-\textbf{ji}=\textbf{k}$. For every quaternion $\mathbf{q}=q_0+q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$, the scalar part and the vector part are $Sc(\mathbf{q}):=q_0$ and $\underline{\mathbf{q}}:=q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$, respectively. We also use the symbols $\overline{\mathbf{q}}:=q_0-\underline{\mathbf{q}}$ and $|\mathbf{q}|:=\sqrt{\mathbf{q}\overline{\mathbf{q}}}=\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}$ to denote the conjugate and the norm of $\mathbf{q}$.
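A minimal numerical illustration of these rules (ours, with quaternions stored as 4-vectors $(q_0,q_1,q_2,q_3)$), checking the product table, conjugation and the identity $\mathbf{q}\overline{\mathbf{q}}=|\mathbf{q}|^2$:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (q0, q1, q2, q3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])

assert np.allclose(qmul(i, i), [-1, 0, 0, 0])   # i^2 = -1
assert np.allclose(qmul(i, j), k)               # ij = k
assert np.allclose(qmul(j, i), -k)              # ji = -k

q = np.array([1., 2., 3., 4.])
assert np.allclose(qmul(q, conj(q)), [np.sum(q**2), 0, 0, 0])   # q qbar = |q|^2
```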
Furthermore, for $p=1$ and $2$, the quaternion module $L^p(\mathbb{R}^n,\mathbb{H})$ is defined by
\begin{equation*}
L^p(\mathbb{R}^n,\mathbb{H}):=\{f|f:\mathbb{R}^n\rightarrow\mathbb{H},\|f\|_{L^p(\mathbb{R}^n,\mathbb{H})}:=\big(\int_{\mathbb{R}^n}|f(\mathbf{x})|^pd\mathbf{x}\big)^{\frac{1}{p}}<\infty\}.
\end{equation*}
Clifford algebra, which is a further generalization of quaternion, was introduced by William Kingdon Clifford in 1878 and is defined as follows.
\begin{defn}\label{def-clif}
Suppose that $\{\mathbf{e}_1,\mathbf{e}_2,\cdots,\mathbf{e}_n\}$ is an orthonormal basis of Euclidean space $\mathbb{R}^n$, and satisfies the relations $\mathbf{e}_i^2=-1$ for $i=1,2,\cdots,n,$ and $\mathbf{e}_i\mathbf{e}_j+\mathbf{e}_j\mathbf{e}_i=0$ for $1\leq i\neq j\leq n.$ Then the Clifford algebra $Cl(0,n)$ is an algebra constructed over these elements, i.e.,
\begin{equation}\label{CliffordDF}
Cl(0,n):=\bigg\{a=\sum\limits_S a_S\mathbf{e}_S: a_S\in \mathbb{R}, \mathbf{e}_S=\mathbf{e}_{j_1}\mathbf{e}_{j_2}\cdots \mathbf{e}_{j_k}\bigg\},
\end{equation}
where $S:=\{j_1,j_2,\cdots,j_k\}\subseteq\{1,2,\cdots,n\}$ with $1\leq j_1< j_2<\cdots< j_k\leq n$; or $S=\emptyset,$ and $\mathbf{e}_{\emptyset}:=1$.
\end{defn}
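The product of two basis blades $\mathbf{e}_S\mathbf{e}_T$ follows mechanically from these relations: concatenate the two index sequences, sort them while flipping the sign once per transposition, and cancel each repeated generator with a factor $-1$ coming from $\mathbf{e}_i^2=-1$. The following Python sketch (an added illustration only) encodes a blade as a sorted tuple of generator indices, the empty tuple being the scalar $1$:

```python
def blade_mul(s, t):
    # product of basis blades e_s * e_t in Cl(0,n); returns (sign, blade)
    idx = list(s) + list(t)
    sign = 1
    # bubble sort the concatenated indices; each transposition flips the sign
    changed = True
    while changed:
        changed = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                changed = True
    # cancel adjacent equal generators: e_i * e_i = -1
    out, i = [], 0
    while i < len(idx):
        if i + 1 < len(idx) and idx[i] == idx[i + 1]:
            sign = -sign
            i += 2
        else:
            out.append(idx[i])
            i += 1
    return sign, tuple(out)
```

For example, `blade_mul((1,), (2,))` gives $(+1,\mathbf{e}_1\mathbf{e}_2)$ while `blade_mul((2,), (1,))` gives $(-1,\mathbf{e}_1\mathbf{e}_2)$, reproducing the anticommutation rule; in $Cl(0,3)$ one checks in the same way that $(\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3)^2=+1$.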
It can easily be seen from the definition that $Cl(0,n)$ is a $2^n$-dimensional real linear vector space. Let $|S|$ be the number of elements in the set $S$. For each $k\in\{0,1,\cdots,n\},$ let
\begin{equation*}
Cl^{(k)}(0,n):=\bigg\{a\in Cl(0,n): a=\sum\limits_{|S|=k} a_S\mathbf{e}_S \bigg\},
\end{equation*}
denote the subset of $k$-vectors of $Cl(0,n)$.
Hence we have
\begin{equation*}
Cl(0,n)=\bigoplus_{k=0}^{n} Cl^{(k)}(0,n).
\end{equation*}
For any Clifford number $a=\sum\limits_{S}a_{S}\mathbf{e}_{S}$ in $Cl(0,n)$, it has a projection $[a]_k$ on $Cl^{(k)}(0,n)$, and can be represented by
\begin{equation*}
a=\sum\limits_{k=0}^{n} [a]_k.
\end{equation*}
For $k=0$, $[a]_0$ is named the scalar part of $a$. Furthermore $[a]_1$, $[a]_2$ and $[a]_n$, are the vector part, bivector part and pseudoscalar part of $a$ respectively.
Similar to quaternion numbers, we use symbols $\overline{a}:=\sum\limits_{S}a_{S}\overline{\mathbf{e}}_{S}$ and $|a|:=\left(\sum\limits_S|a_S|^2\right)^{1/2}$ to denote the conjugate and norm of $a$, where $\overline{\mathbf{e}}_{S}:=(-1)^{\frac{|S|(|S|+1)}{2}}\mathbf{e}_{S}$. Furthermore the Clifford modules $L^p(\mathbb{R}^n,Cl(0,n))$ is
\begin{equation*}
L^p(\mathbb{R}^n,Cl(0,n)):=\{f|f:\mathbb{R}^n\rightarrow Cl(0,n),\|f\|_{L^p(\mathbb{R}^n,Cl(0,n))}:=\big(\int_{\mathbb{R}^n}|f(\mathbf{x})|^pd\mathbf{x}\big)^{\frac{1}{p}}<\infty\}.
\end{equation*}
For $p=2$ and $f,g \in L^2(\mathbb{R}^n,Cl(0,n))$, an inner product can be equipped as follows
\begin{equation}\label{innerp}
(f,g)_{L^2(\mathbb{R}^n,Cl(0,n))}:=\int_{\mathbb{R}^n}f(\mathbf{x})\overline{g(\mathbf{x})}d^n\mathbf{x}.
\end{equation}
Since Clifford algebra is generalized from quaternion algebra, they have a close relationship as follows,
\begin{thm}\cite{Girard11}\label{The-clif}
If $p+q=2m$ ($m$ an integer), the Clifford algebra $Cl(p,q)$ is the tensor product of $m$ quaternion algebras. If $p+q=2m-1$, the Clifford algebra $Cl(p,q)$ is the tensor product of $m-1$ quaternion algebras and the algebra $(1,\epsilon)$, where $\epsilon$ is the product of the $2m-1$ generators ($\epsilon=\mathbf{e}_1\mathbf{e}_2\cdots\mathbf{e}_{2m-1}$).
\end{thm}
According to the above theorem, one can easily find some special examples of Clifford algebras such as: complex $\mathbb{C}$ with $p=0, q=1, m=1$ and $\mathbf{e}_1=\mathbf{i}$; quaternions $\mathbb{H}$ with $p=0, q=2, m=1$ and $\mathbf{e}_1=\mathbf{i}, \mathbf{e}_2=\mathbf{j}$.
Here we should pay particular attention to a special case of Clifford algebra, $Cl(0,3)$, which will be used in this paper to detect the envelopes of 3-D images. According to Theorem \ref{The-clif}, $Cl(0,3)$ corresponds to $p=0, q=3,$ and is the tensor product of the quaternion algebra $\mathbb{H}$ and the algebra $(1,\epsilon)$, where $\mathbf{e}_1=\epsilon\mathbf{i}, \mathbf{e}_2=\epsilon\mathbf{j},\mathbf{e}_3=\epsilon\mathbf{k}$ are the generators of $Cl(0,3)$, and $\epsilon=\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3$ commutes with $\mathbf{i,j,k}$ and satisfies $\epsilon^2=1$.
Thus each number in this algebra
$$A=a_0+a_1\mathbf{e}_1+a_2\mathbf{e}_2+a_3\mathbf{e}_3+a_4\mathbf{e}_1\mathbf{e}_2+a_5\mathbf{e}_3\mathbf{e}_1+a_6\mathbf{e}_2\mathbf{e}_3+a_7\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3,$$
can be expressed by $A=\mathbf{p}+\epsilon\mathbf{q}$, where $\mathbf{p},\mathbf{q}$ are two quaternions and
\begin{equation*}
\begin{aligned}
\mathbf{p}&=(a_0+a_6\mathbf{i}+a_5\mathbf{j}+a_4\mathbf{k}),
\\
\mathbf{q}&=(a_7+a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k}).
\end{aligned}
\end{equation*}
Here $a_0$ is the scalar part of $A$; $a_1,a_2,a_3$ correspond to the vector part; $a_4,a_5,a_6$ are the bivector part; and $a_7$ is the pseudoscalar part of $A$.
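This splitting is pure coefficient bookkeeping and is easy to mechanize. The Python sketch below (an added illustration; the coefficient ordering $(a_0,\dots,a_7)$ follows the display of $A$ above) converts between the eight real coefficients and the quaternion pair $(\mathbf{p},\mathbf{q})$ with $A=\mathbf{p}+\epsilon\mathbf{q}$, and checks that the split preserves the norm, $|A|^2=|\mathbf{p}|^2+|\mathbf{q}|^2$:

```python
def split(a):
    # a = (a0,...,a7): coefficients of 1, e1, e2, e3, e1e2, e3e1, e2e3, e1e2e3
    a0, a1, a2, a3, a4, a5, a6, a7 = a
    p = (a0, a6, a5, a4)   # p = a0 + a6*i + a5*j + a4*k
    q = (a7, a1, a2, a3)   # q = a7 + a1*i + a2*j + a3*k
    return p, q

def merge(p, q):
    # inverse map: recover (a0,...,a7) from A = p + eps*q
    return (p[0], q[1], q[2], q[3], p[3], p[2], p[1], q[0])
```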
\begin{rem}\label{clibiquaternion}
Since $Cl(0,3)$ is a special case of geometric algebra (Clifford algebra) and is isomorphic to $\mathbb{H}\bigoplus\mathbb{H}$, it is also named the Clifford biquaternion \cite{Wang12,Girard11} (clifbquat for short) by some authors.
\end{rem}
Since the Clifford biquaternion algebra $Cl(0,3)$ is a special case of geometric algebra (Clifford algebra), it inherits the reflection properties of geometric algebra \cite{Vince08}: under an orthogonal symmetry with respect to a plane perpendicular to a unit vector $a$, the reflection of a Clifford biquaternion $A=\sum\limits_{|S|=0}^3 a_S\mathbf{e}_S$ is $A'=\sum\limits_{|S|=0}^3 (-1)^{|S|+1}a_S a\mathbf{e}_Sa$.
In particular, if $a=\mathbf{e}_1$, we can get
\begin{equation}
A'=K_1(A)=(a_0+a_6\mathbf{i}-a_5\mathbf{j}-a_4\mathbf{k})+\epsilon(-a_7-a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k});
\end{equation}
if $a=\mathbf{e}_2$, it is
\begin{equation}
A'=K_2(A)=(a_0-a_6\mathbf{i}+a_5\mathbf{j}-a_4\mathbf{k})+\epsilon(-a_7+a_1\mathbf{i}-a_2\mathbf{j}+a_3\mathbf{k});
\end{equation}
if $a=\mathbf{e}_3$,
\begin{equation}
A'=K_3(A)=(a_0-a_6\mathbf{i}-a_5\mathbf{j}+a_4\mathbf{k})+\epsilon(-a_7+a_1\mathbf{i}+a_2\mathbf{j}-a_3\mathbf{k}).
\end{equation}
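On the coefficient vector $(a_0,\dots,a_7)$ each of these three reflections is a fixed pattern of sign changes, and each is therefore an involution. A small Python sketch (an added illustration; same coefficient ordering as in the display of $A$ above) encodes the maps $K_1,K_2,K_3$:

```python
# coefficient order: (a0, a1, a2, a3, a4, a5, a6, a7) for
# (1, e1, e2, e3, e1e2, e3e1, e2e3, e1e2e3)
SIGNS = {
    1: (1, -1,  1,  1, -1, -1,  1, -1),  # reflection K1 (a = e1)
    2: (1,  1, -1,  1, -1,  1, -1, -1),  # reflection K2 (a = e2)
    3: (1,  1,  1, -1,  1, -1, -1, -1),  # reflection K3 (a = e3)
}

def reflect(A, axis):
    # apply K_axis to the coefficient tuple A
    return tuple(s*c for s, c in zip(SIGNS[axis], A))
```

Applying the same reflection twice returns the original element, as an orthogonal symmetry must.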
\subsection{Analytic Signal in $n$ Dimensions}
The analytic signal is constructed by suppressing the negative frequency components of the original signal, so it is closely related to the Fourier transform of the original signal. In \cite{Girard11}, Girard generalized the analytic signal to $n$ dimensions by introducing a new Clifford Fourier transform as follows.
\begin{defn}Given a Clifford valued function $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$ with $\mathbf{x}=(x_1,x_2,\cdots,x_n)$, its Clifford Fourier transform $F(\mathbf{u})$ is defined as
\begin{equation}
F(\mathbf{u})=\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n}e^{-\mathbf{e}_k2\pi u_kx_k}d^n\mathbf{x}.
\end{equation}
\end{defn}
Furthermore, Girard supplied the inverse Clifford Fourier transform as follows
\begin{equation}
f(\mathbf{x})=\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1}e^{\mathbf{e}_{n-k}2\pi u_{n-k}x_{n-k}}d^{n}\mathbf{u}.
\end{equation}
With the help of this CFT, Girard introduced the $n$ dimensional analytic signal $f_A(\mathbf{x})$ which corresponds to $f(\mathbf{x})$ by the following steps
\begin{equation}
F_A(\mathbf{u})=\prod\limits_{k=1}^{n}[1+sign(u_k)]F(\mathbf{u}),
\end{equation}
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^n}F_A(\mathbf{u})\prod\limits_{k=0}^{n-1}e^{\mathbf{e}_{n-k}2\pi u_{n-k}x_{n-k}}d^n\mathbf{u},
\end{equation}
where $sign(u_k)$ is the classical signum function and $F(\mathbf{u})$ is the CFT of original function $f(\mathbf{x})$.
It can easily be verified that when $n=1$, the CFT degenerates to the classical FT and the corresponding analytic signal is the classical $f_A(t)$. When $n=2$, the CFT turns into the right-sided quaternion Fourier transform and the analytic signal is a quaternion-valued function $f_A(x,y)$.
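In the discrete, one-dimensional setting this construction amounts to: take a DFT, keep the DC and Nyquist bins, double the positive-frequency bins and zero the negative ones, then invert. The pure-Python sketch below (an added toy example) applies it to one period of $\cos(2\pi t/N)$ and recovers the quadrature pair $\cos+i\sin$:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n]*cmath.exp(-2j*math.pi*k*n/N) for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k]*cmath.exp(2j*math.pi*k*n/N) for k in range(N))/N for n in range(N)]

def analytic(x):
    # multiply the spectrum by 1 + sign(u): double positive bins, zero negative ones
    N = len(x)
    X = dft(x)
    XA = [X[k]*(1 if k in (0, N//2) else (2 if k < N//2 else 0)) for k in range(N)]
    return idft(XA)

N = 8
x = [math.cos(2*math.pi*n/N) for n in range(N)]
xa = analytic(x)   # complex samples of exp(i*2*pi*n/N)
```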
Since the LCT is a generalization of the FT, one can also generalize the analytic signal by replacing the FT with the LCT, as introduced by Fu in \cite{Fu08}. One can connect these two approaches to generalize the analytic signal to the $n$-dimensional LCT domain.
The monogenic signal, which was introduced by Felsberg \cite{Felsberg01}, is another generalization of the analytic signal. Since we will compare our method with that of the monogenic signal, the basic knowledge about the monogenic signal is reviewed in the following.
\begin{defn}\cite{Yang17}(Monogenic signal) For $f\in L^2(\mathbb{R}^n,Cl(0,n))$, the monogenic signal $f_M\in L^2(\mathbb{R}^n,Cl(0,n))$ is defined by
\begin{equation}\label{mono-defn}
f_M(\underline{x}):=f(\underline{x})+H[f](\underline{x}),
\end{equation}
where $H[f]$ is the isotropic Hilbert transform of $f$ defined by
\begin{equation}
\begin{aligned}
\label{hilber-defn}
H[f](\underline{x}):&=p.v.\frac{1}{\omega_n}\int_{\mathbf{R}^n}\frac{\overline{\underline{x}-\underline{t}}}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\\
&=\lim\limits_{\epsilon\rightarrow0^+}\frac{1}{\omega_n}\int_{{|\underline{x}-\underline{t}}|>\epsilon}\frac{\overline{\underline{x}-\underline{t}}}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\\
&=-\sum\limits_{j=1}^{n}R_j(f)(\underline{x})\mathbf{e}_j.
\end{aligned}
\end{equation}
Furthermore,
\begin{equation*}
R_j(f)(\underline{x}):=\lim\limits_{\epsilon\rightarrow0^+}\frac{1}{\omega_n}\int_{{|\underline{x}-\underline{t}}|>\epsilon}\frac{x_j-t_j}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\end{equation*}
and $\omega_n=\frac{2\pi^{\frac{n+1}{2}}}{\Gamma(\frac{n+1}{2})}$.
\end{defn}
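A standard fact (under the usual Fourier convention) is that the Riesz transform $R_j$ acts in the frequency domain as multiplication by $-iu_j/|\underline{u}|$, which gives a direct way to compute it on sampled data. The pure-Python sketch below (an added illustration, with that multiplier convention assumed) computes the two Riesz components of a 2-D plane wave and verifies that the monogenic amplitude $\big(f^2+R_1(f)^2+R_2(f)^2\big)^{1/2}$ is constant, as expected for a single oriented oscillation:

```python
import cmath, math

N = 8

def dft2(f):
    return [[sum(f[n1][n2]*cmath.exp(-2j*math.pi*(k1*n1 + k2*n2)/N)
                 for n1 in range(N) for n2 in range(N))
             for k2 in range(N)] for k1 in range(N)]

def idft2(F):
    return [[sum(F[k1][k2]*cmath.exp(2j*math.pi*(k1*n1 + k2*n2)/N)
                 for k1 in range(N) for k2 in range(N))/N**2
             for n2 in range(N)] for n1 in range(N)]

def freq(k):
    # signed frequency associated with DFT bin k
    return k if k <= N//2 else k - N

def riesz(f, j):
    # j-th Riesz component via the (assumed) multiplier -i*u_j/|u|
    F = dft2(f)
    G = [[0.0]*N for _ in range(N)]
    for k1 in range(N):
        for k2 in range(N):
            u = (freq(k1), freq(k2))
            r = math.hypot(u[0], u[1])
            if r > 0:
                G[k1][k2] = -1j*u[j]/r*F[k1][k2]
    return [[v.real for v in row] for row in idft2(G)]

f = [[math.cos(2*math.pi*n1/N) for n2 in range(N)] for n1 in range(N)]
r1, r2 = riesz(f, 0), riesz(f, 1)
amp = [[math.sqrt(f[a][b]**2 + r1[a][b]**2 + r2[a][b]**2)
        for b in range(N)] for a in range(N)]
```

For this wave, which oscillates along the first axis only, $R_1$ reduces to a 1-D Hilbert transform ($\cos\mapsto\sin$) and $R_2$ vanishes, so the amplitude is identically $1$.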
\section{Clifford Linear Canonical Transform}
Since the LCT is a generalization of many famous integral transforms and has more degrees of freedom, it is natural to generalize the LCT to the Clifford domain. In the last decade, many mathematicians have tried different approaches to develop this kind of generalization. In \cite{Kou13}, Kou introduced the CLCT of a function $f\in L^1(\mathbb{R}^n,Cl(0,n))$ and investigated the maxima of the energy preservation problem. Yang \cite{Yang14} paid attention to the CLCT with the kernel constituted by the complex unit $I$. Here, we will introduce a new type of generalization as follows.
\begin{defn} (Right-sided CLCT)
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) be a set of parameter matrices with $det(A_k)=1$ and $b_k\neq0$. The right-sided CLCT of a signal $f\in L^1(\mathbb{R}^n,Cl(0,n))$ is the Clifford valued function $\mathscr{L}^r_{\Lambda}(f): \mathbb{R}^n\rightarrow Cl(0,n)$ defined as follows
\begin{equation}\label{def-rCLCT}
\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x},
\end{equation}
where $K^{\mathbf{e}_k}_{A_k}(x_k,u_k)$ is the kernel of the CLCT defined as
\begin{equation}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k):=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(\frac{a_k}{2b_k}x_k^2-\frac{1}{b_k}x_ku_k+\frac{d_k}{2b_k}u_k^2)}.
\end{equation}
\end{defn}
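In the scalar case $n=1$ (with $\mathbf{e}_1$ realized as the complex unit $i$ and the principal branch of $\sqrt{i2\pi b_k}$ assumed), the kernel can be checked numerically. The Python sketch below (an added illustration) evaluates the transform of the Gaussian $e^{-x^2/2}$ for the matrix with $a=0$, $b=1$, $c=-1$, $d=0$ by simple quadrature; for this matrix the transform reduces to $\frac{1}{\sqrt{2\pi i}}$ times the classical Fourier transform, so $|\mathscr{L}(f)(u)|=e^{-u^2/2}$:

```python
import cmath, math

def lct_kernel(A, x, u):
    # K_A(x,u) = exp(i(a x^2/(2b) - xu/b + d u^2/(2b))) / sqrt(i*2*pi*b)
    a, b, c, d = A
    return cmath.exp(1j*(a*x*x/(2*b) - x*u/b + d*u*u/(2*b))) / cmath.sqrt(2j*math.pi*b)

def lct(f, A, u, L=8.0, n=1600):
    # trapezoid-rule approximation of the LCT integral over [-L, L]
    h = 2*L/n
    vals = [f(-L + i*h)*lct_kernel(A, -L + i*h, u) for i in range(n + 1)]
    return h*(sum(vals) - 0.5*(vals[0] + vals[-1]))

g = lambda x: math.exp(-x*x/2)
F0 = lct(g, (0.0, 1.0, -1.0, 0.0), 0.0)
F1 = lct(g, (0.0, 1.0, -1.0, 0.0), 1.0)
```

The computed values carry the constant phase $e^{-i\pi/4}$ coming from $1/\sqrt{2\pi i}$, while their moduli follow the Gaussian $e^{-u^2/2}$.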
Due to the non-commutativity of Clifford algebra, there is a different type of CLCT, left-sided CLCT, which can be defined as follows
\begin{defn} (Left-sided CLCT)
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) be a set of parameter matrices with $det(A_k)=1$ and $b_k\neq0$. The left-sided CLCT of a signal $f\in L^1(\mathbb{R}^n,Cl(0,n))$ is the Clifford valued function $\mathscr{L}^l_{\Lambda}(f): \mathbb{R}^n\rightarrow Cl(0,n)$ defined as follows
\begin{equation}
\mathscr{L}^l_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)f(\mathbf{x})d^n\mathbf{x}.
\end{equation}
\end{defn}
We focus on the right-sided CLCT in the following content. Furthermore, one could follow the approach Hitzer used in \cite{Hitzer14} to introduce a two-sided Clifford LCT with two square roots of $-1$ in $Cl(0,n)$, but this is out of our scope.
\begin{rem}It can easily be verified that when the dimension $n=1$, the CLCT degenerates to the classical LCT. When $n=2$, the CLCT turns into the right-sided quaternion linear canonical transform.
\end{rem}
Now let us investigate an example to see how the CLCT works:
\begin{ex}
Consider the right-sided CLCT of $f(\mathbf{x})=\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})$.
\end{ex}
Since $f(\mathbf{x})$ is a product of $n$ LCT kernel functions, we can integrate over each variable separately when taking the CLCT.
That is
\begin{equation}\label{ExamK}
\begin{split}
&\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})=\int_{\mathbb{R}^n}\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^{n-1}}\prod\limits_{k=0}^{n-2} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\int_{\mathbb{R}}K^{-\mathbf{e}_{1}}_{A_{1}}(x_{1},v_{1})K^{\mathbf{e}_{1}}_{A_{1}}(x_{1},u_{1})dx_1
\\
&\cdot\prod\limits_{k=2}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^{n-1}\mathbf{x},
\end{split}
\end{equation}
where
\begin{equation}\label{Delta}
\begin{split}
\int_{\mathbb{R}}K^{-\mathbf{e}_{1}}_{A_{1}}(x_{1},v_{1})K^{\mathbf{e}_{1}}_{A_{1}}(x_{1},u_{1})&dx_1=\int_{\mathbb{R}}\frac{1}{\sqrt{-\mathbf{e}_12\pi b_1}}e^{-\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1v_1+\frac{d_1}{2b_1}v_1^2)}
\\
&\cdot\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1u_1+\frac{d_1}{2b_1}u_1^2)}dx_1
\\
&=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-\mathbf{e}_1\frac{x_1}{b_1}(u_1-v_1)}d(\frac{x_1}{b_1})e^{-\mathbf{e}_1\frac{d_1}{2b_1}(u_1^2-v_1^2)}
\\
&=\delta(u_1-v_1).
\end{split}
\end{equation}
The last step of the above equation follows from the Fourier transform relationship $\widehat{1}(\omega)=2\pi \delta(\omega)$.
Applying result (\ref{Delta}) to the right side of equation (\ref{ExamK}), one can derive
$$\mathscr{L}^r_{\Lambda}\big(\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\big)(\mathbf{u})=\prod\limits_{k=1}^{n}\delta(u_k-v_k).$$
In the following content, we turn to investigate the main properties of CLCT.
\begin{prop} (Left linearity) Let $\alpha \in Cl(0,n), \beta \in Cl(0,n)$, for $f_1(\mathbf{x}),f_2(\mathbf{x})\in L^1(\mathbb{R}^n, Cl(0,n))$, we have $\mathscr{L}^r_{\Lambda}(\alpha f_1+\beta f_2)(\mathbf{u})=\alpha\mathscr{L}^r_{\Lambda} (f_1)(\mathbf{u})+\beta \mathscr{L}^r_{\Lambda} (f_2)(\mathbf{u})$.
\end{prop}
This proposition can be easily verified by the linearity of integral.
\begin{prop}(Translation) Let $\bm{\alpha}=(\alpha_1,0,\cdots,0,\alpha_n)$. For $f(\mathbf{x})\in L^1(\mathbb{R}^n,\mathbb{R})$, we have
$
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\mathscr{L}^r_{\Lambda}(f)(u_1-a_1\alpha_1,u_2,\cdots,u_n-a_n\alpha_n)e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}.
$
\end{prop}
\begin{proof}
According to definition of right-sided CLCT, one can get
\begin{equation}\label{trans1}
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=\int_{\mathbb{R}^n}f(x_1-\alpha_1,x_2,\cdots,x_{n-1},x_n-\alpha_n)\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{equation}
Let $t_1=x_1-\alpha_1$ and $t_n=x_n-\alpha_n$; the above integral turns into a new integral over the variables $t_1,x_2,\cdots,x_{n-1},t_n$, in which the first LCT kernel becomes
\begin{equation*}
\begin{aligned}
K^{\mathbf{e}_1}_{A_1}(x_1,u_1)&=\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}(t_1+\alpha_1)^2-\frac{1}{b_1}(t_1+\alpha_1)u_1+\frac{d_1}{2b_1}u_1^2]}
\\
&=\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}t_1^2+\frac{a_1}{b_1}t_1\alpha_1+\frac{a_1}{2b_1}\alpha_1^2-\frac{t_1u_1}{b_1}-\frac{u_1\alpha_1}{b_1}+\frac{d_1}{2b_1}u_1^2]}
\\
&=e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}t_1^2-\frac{t_1}{b_1}(u_1-a_1\alpha_1)+\frac{d_1}{2b_1}(u_1-a_1\alpha_1)^2]},
\end{aligned}
\end{equation*}
and the last LCT kernel turns to
\begin{equation*}
K^{\mathbf{e}_n}_{A_n}(x_n,u_n)=e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}\frac{1}{\sqrt{\mathbf{e}_n2\pi b_n}}e^{\mathbf{e}_n[\frac{a_n}{2b_n}t_n^2-\frac{t_n}{b_n}(u_n-a_n\alpha_n)+\frac{d_n}{2b_n}(u_n-a_n\alpha_n)^2]}.
\end{equation*}
Since $f(\mathbf{x})$ is a real valued function in this case, it can freely interchange positions with $K^{\mathbf{e}_1}_{A_1}(x_1,u_1)$ in equation (\ref{trans1}). Hence equation (\ref{trans1}) yields
\begin{equation*}
\begin{aligned}
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=&e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\mathscr{L}^r_{\Lambda}(f)(u_1-a_1\alpha_1,u_2,\cdots,u_n-a_n\alpha_n)
\\
&e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}.
\end{aligned}
\end{equation*}
\end{proof}
\begin{prop}(Scaling) Let $\sigma_1,\sigma_2,\cdots,\sigma_n>0$. For $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$, we have
$$
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})=\frac{1}{\sqrt{\prod\limits_{k=1}^{n}\sigma_k}}\mathscr{L}^r_{\Lambda'}(f)(\mathbf{u}),
$$
where $\Lambda'$: $B_k=\left(
\begin{array}{cc}
a_k/\sigma_k & b_k\sigma_k \\
c_k/\sigma_k & d_k\sigma_k \\
\end{array}
\right).$
\end{prop}
\begin{proof}According to equation (\ref{def-rCLCT}), one can find
\begin{equation}\label{Escaling}
\begin{split}
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})=&\int_{\mathbb{R}^n}f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n)
\\
&\cdot\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{split}
\end{equation}
Let $t_k=\sigma_kx_k\quad(k=1,\cdots,n)$, each kernel $K^{\mathbf{e}_k}_{A_k}(x_k,u_k)$ turns to
\begin{equation*}
\begin{aligned}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k)&=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k[\frac{a_k}{2b_k}(\frac{t_k}{\sigma_k})^2-\frac{1}{b_k}(\frac{t_k}{\sigma_k})u_k+\frac{d_k}{2b_k}u_k^2]}
\\
&=\frac{\sqrt{{\sigma_k}}}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}.
\end{aligned}
\end{equation*}
Hence, after this change of variables, equation (\ref{Escaling}) becomes
\begin{equation*}
\begin{split}
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})&=\int_{\mathbb{R}^n}f(t_1,t_2,\cdots,t_n)
\\
&\cdot\prod\limits_{k=1}^{n} \frac{\sqrt{{\sigma_k}}}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}d{\frac{t_1}{\sigma_1}}\cdots d{\frac{t_n}{\sigma_n}}
\\
&=\frac{1}{\sqrt{\prod\limits_{k=1}^{n}\sigma_k}}\int_{\mathbb{R}^n}f(t_1,t_2,\cdots,t_n)\prod\limits_{k=1}^{n} \frac{1}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}
\\
&\cdot e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}d^n\mathbf{t},
\end{split}
\end{equation*}
which is the desired result.
\end{proof}
\begin{prop}\label{Partial}(Partial derivative) For $f(\mathbf{x})$ and $\frac{\partial f(\mathbf{x})}{\partial x_1},\frac{\partial f(\mathbf{x})}{\partial x_n}\in L^1(\mathbb{R}^n,\mathbb{R})$, we have
\begin{equation}\label{Partial1}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=(a_1\frac{\partial}{\partial u_1}-\mathbf{e}_1c_1u_1)\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}),
\end{equation}
and
\begin{equation}\label{Partialn}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_n})(\mathbf{u})=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})(a_n\frac{\partial}{\partial u_n}-\mathbf{e}_nc_nu_n).
\end{equation}
\end{prop}
\begin{proof}
Since $\frac{\partial f(\mathbf{x})}{\partial x_1}\in L^1(\mathbb{R}^n,\mathbb{R})$, its CLCT can be computed as follows
\begin{equation*}
\begin{aligned}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})&=\int_{\mathbb{R}^n}\frac{\partial f(\mathbf{x})}{\partial x_1}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^{n-1}}\bigg[\int_\mathbb{R}\frac{\partial f(\mathbf{x})}{\partial x_1}K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1\bigg]\prod\limits_{k=2}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^{n-1}\mathbf{x},
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\int_\mathbb{R}\frac{\partial f(\mathbf{x})}{\partial x_1}&K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1=\big[f(\mathbf{x})K^{\mathbf{e}_1}_{A_1}(x_1,u_1)\big]\big|_{-\infty}^{+\infty}-\int_\mathbb{R} f(\mathbf{x})\frac{\partial}{\partial x_1}K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1
\\
&=-\int_\mathbb{R} f(\mathbf{x})\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1u_1+\frac{d_1}{2b_1}u_1^2)}\mathbf{e}_1(\frac{a_1}{b_1}x_1-\frac{u_1}{b_1})dx_1
\\
&=\int_\mathbb{R}(\frac{u_1}{b_1}-\frac{a_1}{b_1}x_1)f(\mathbf{x})\mathbf{e}_1K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1.
\end{aligned}
\end{equation*}
Hence
\begin{equation}\label{PartialD}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=\int_{\mathbb{R}^n}(\frac{u_1}{b_1}-\frac{a_1}{b_1}x_1)f(\mathbf{x})\mathbf{e}_1\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{equation}
While
\begin{equation*}
\begin{aligned}
a_1\frac{\partial}{\partial u_1}\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})&=a_1\frac{\partial}{\partial u_1}\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^n}(\frac{a_1d_1}{b_1}u_1-\frac{a_1x_1}{b_1})f(\mathbf{x})\mathbf{e}_1\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\mathbf{e}_1c_1u_1\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})=\int_{\mathbb{R}^n}c_1u_1\mathbf{e}_1 f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{aligned}
\end{equation*}
According to equation (\ref{PartialD}), we can easily verify that $\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=(a_1\frac{\partial}{\partial u_1}-\mathbf{e}_1c_1u_1)\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$, where the identity $a_1d_1-b_1c_1=1$ is used. The proof of equation (\ref{Partialn}) is similar to that of equation (\ref{Partial1}), and we omit it here.
\end{proof}
\begin{thm}(Plancherel) Suppose $f(\mathbf{x})$ and $g(\mathbf{x}) \in L^1\bigcap L^2(\mathbb{R}^n,Cl(0,n))$, $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ and $G(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(g)(\mathbf{u})$ are their CLCTs respectively, then we have
\begin{equation}\label{innerp1}
(f,g)_{L^2(\mathbb{R}^n,Cl(0,n))}=(F,G)_{L^2(\mathbb{R}^n,Cl(0,n))}.
\end{equation}
Furthermore, when $f=g$, this turns into
\begin{equation}\label{parseval1}
(f,f)_{L^2(\mathbb{R}^n,Cl(0,n))}=(F,F)_{L^2(\mathbb{R}^n,Cl(0,n))}.
\end{equation}
\end{thm}
\begin{proof} According to equation (\ref{innerp}),
\begin{equation}\label{Planchel1}
(F,G)_{L^2(\mathbb{R}^n,Cl(0,n))}=\int_{\mathbb{R}^n}F(\mathbf{u})\overline{G(\mathbf{u})}d^n\mathbf{u}.
\end{equation}
Substituting the integral formulas of $F(\mathbf{u})$ and $G(\mathbf{u})$ into the above equation, one can derive
\begin{equation*}
\begin{aligned}
(F,G)&=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}\bigg]\bigg[\overline{\int_{\mathbb{R}^n}g(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(y_k,u_k)d^n\mathbf{y}}\bigg]d^n\mathbf{u}
\\
&=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^{2n}}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})\overline{g(\mathbf{y})}d^n\mathbf{x}d^n\mathbf{y}\bigg]d^n\mathbf{u}
\\
&=\int_{\mathbb{R}^{2n}}f(\mathbf{x})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}\bigg]\overline{g(\mathbf{y})}d^n\mathbf{x}d^n\mathbf{y}.
\end{aligned}
\end{equation*}
The interchange of the integrals in the above equation is justified by the Fubini theorem, which is ensured by $f(\mathbf{x})$ and $g(\mathbf{x}) \in L^1\bigcap L^2(\mathbb{R}^n,Cl(0,n))$.
Furthermore, the integral in brackets can be rewritten as
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}&\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^{n-1}}\prod\limits_{k=1}^{n-1}K^{\mathbf{e}_k}_{A_k}(x_k,u_k)
\\
&\bigg[\int_{\mathbb{R}}K^{\mathbf{e}_n}_{A_n}(x_n,u_n)K^{-\mathbf{e}_n}_{A_n}(y_n,u_n)du_n\bigg]\prod\limits_{k=1}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^{n-1}\mathbf{u},
\end{aligned}
\end{equation*}
where
$\int_{\mathbb{R}}K^{\mathbf{e}_n}_{A_n}(x_n,u_n)K^{-\mathbf{e}_n}_{A_n}(y_n,u_n)du_n$
can be calculated as in equation (\ref{Delta}) and equals $\delta (x_n-y_n)$.
Hence, the above equation is
\begin{equation*}
\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}=\prod\limits_{k=1}^n\delta (x_k-y_k),
\end{equation*}
and $(F,G)$ turns to
\begin{equation*}
\begin{aligned}
(F,G)&=\int_{\mathbb{R}^n}f(\mathbf{x})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^n\delta (x_k-y_k)\overline{g(\mathbf{y})}d^n\mathbf{y}\bigg]d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^n}f(\mathbf{x})\overline{g(\mathbf{x})}d^n\mathbf{x}=(f,g).
\end{aligned}
\end{equation*}
Letting $f=g$, the Parseval theorem (\ref{parseval1}) is derived.
\end{proof}
\begin{thm}\label{inverset}(Inverse Theorem) Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)$
($k=1,2,\cdots,n$) with $det(A_k)=1$ and $b_k\neq0$, and suppose $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$. Then the CLCT of $f$, $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$, is an invertible transform and its inverse is
\begin{equation}\label{inversef}
f(\mathbf{x})=\mathscr{L}^{-1}_{\Lambda}(F)(\mathbf{x})=\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u},
\end{equation}
where $A^{-1}_k=\left(
\begin{array}{cc}
d_k & -b_k \\
-c_k & a_k \\
\end{array}
\right),$ ($k=1,2,\cdots,n$).
\end{thm}
\begin{proof}
Since $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$, by straightforward computation one can find
\begin{equation}\label{inversef1}
\begin{aligned}
\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^n}f(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)d^n\mathbf{y}\bigg]
\\
\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{u}
\\
=\int_{\mathbb{R}^{2n}}f(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{y}d^n\mathbf{u}
\\
=\int_{\mathbb{R}^n}f(\mathbf{y})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)
\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{u}\bigg]d^n\mathbf{y}
\end{aligned}
\end{equation}
where $K^{\mathbf{e}_k}_{{A^{-1}_k}}(u_k,x_k)=\frac{1}{\sqrt{-\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(-\frac{a_k}{2b_k}x_k^2+\frac{1}{b_k}x_ku_k-\frac{d_k}{2b_k}u_k^2)}$. Similar to equation (\ref{Delta}), the integral in brackets in the above equation equals $\prod\limits_{k=1}^{n}\delta(x_k-y_k)$.
Thus equation (\ref{inversef1}) becomes
\begin{equation*}
\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^n}f(\mathbf{y})\prod\limits_{k=1}^{n}\delta(x_k-y_k)d^n\mathbf{y}=f(\mathbf{x}),
\end{equation*}
which completes the proof.
\end{proof}
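The inversion formula can also be sanity-checked numerically in the scalar case (an added illustration, not part of the proof; $\mathbf{e}_1$ is realized as the complex unit $i$, principal square roots and simple quadrature are assumed). A Gaussian is transformed with the fractional-Fourier-type matrix $a=d=\frac{1}{2}$, $b=\frac{\sqrt{3}}{2}$, and then reconstructed with the kernel of $A^{-1}$:

```python
import cmath, math

a, b, d = 0.5, math.sqrt(3)/2, 0.5      # matrix A = (a, b; c, d) with det(A) = 1

def kern(x, u):
    # forward kernel K_A(x, u)
    return cmath.exp(1j*(a*x*x/(2*b) - x*u/b + d*u*u/(2*b))) / cmath.sqrt(2j*math.pi*b)

def ikern(u, x):
    # kernel of the inverse matrix A^{-1} = (d, -b; -c, a): conjugate phase and root
    return cmath.exp(-1j*(a*x*x/(2*b) - x*u/b + d*u*u/(2*b))) / cmath.sqrt(-2j*math.pi*b)

L, n = 8.0, 640
h = 2*L/n
grid = [-L + i*h for i in range(n + 1)]
f = [math.exp(-t*t/2) for t in grid]

# forward LCT sampled on the grid, then inverse LCT evaluated at x = 0 and x = 1
F = [h*sum(f[i]*kern(grid[i], u) for i in range(n + 1)) for u in grid]
rec = {x: h*sum(F[j]*ikern(grid[j], x) for j in range(n + 1)) for x in (0.0, 1.0)}
```

The two kernels multiply to $\frac{1}{2\pi b}e^{-\mathbf{e}_1 u(x'-x)/b}$-type factors whose $u$-integral produces the delta, so the reconstruction recovers $f(0)=1$ and $f(1)=e^{-1/2}$ up to quadrature error.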
\section{Analytic Signal in CLCT Domain and Envelope Detection}
\subsection{Analytic Signal in CLCT Domain}
Following the approach of generalizing the analytic signal to the LCT domain, which was introduced by Fu and Li \cite{Fu08} and extended to the quaternion algebra by Kou \cite{Kou16}, we supply the definition of the generalized analytic signal associated with the CLCT.
\begin{defn}Given a Clifford valued function $f(\mathbf{x})$ with $\mathbf{x}=(x_1,x_2,\cdots,x_n)$, its corresponding analytic signal $f_A(\mathbf{x})$ is defined as
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^n}F_A(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u},
\end{equation}
where $F_A(\mathbf{u})$ is
\begin{equation}
F_A(\mathbf{u})=\prod\limits_{k=1}^{n}[1+sign(\frac{u_k}{b_k})]F(\mathbf{u}),
\end{equation}
and $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ is the CLCT of original function $f(\mathbf{x})$, $\Lambda:$ $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) is a set of parameter matrices with $det(A_k)=1$ and $b_k\neq0$.
\end{defn}
When the dimension $n=1$, the analytic signal $f_A(t)$ is just the one Fu introduced in \cite{Fu08}. When $n=2$, this analytic signal $f_A(x,y)$ is different from Kou's generalized quaternionic analytic signal, which is associated with the two-sided QLCT.
Since we intend to investigate the envelope detection problem for 3-D images, we will focus on the case $n=3$. Due to the advantages of the Clifford algebra $Cl(0,3)$, we adopt this special Clifford algebra in the following part.
\begin{defn}
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,3$) be a set of parameter matrices with $det(A_k)=1$ and $b_k\neq0$, the right-sided CLCT of a signal $f\in L^1(\mathbb{R}^3,Cl(0,3))$ is a Clifford valued function $\mathscr{L}^r_{\Lambda}(f): \mathbb{R}^3\rightarrow Cl(0,3)$ defined as follows
\begin{equation}
\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^3}f(\mathbf{x}) K^{\mathbf{e}_1}_{A_1}(x_1,u_1)K^{\mathbf{e}_2}_{A_2}(x_2,u_2)K^{\mathbf{e}_3}_{A_3}(x_3,u_3)d^3\mathbf{x},
\end{equation}
where $K^{\mathbf{e}_1}_{A_1}(x_1,u_1)$, $K^{\mathbf{e}_2}_{A_2}(x_2,u_2)$ and $K^{\mathbf{e}_3}_{A_3}(x_3,u_3)$ are kernels of the CLCT and defined by
\begin{equation*}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k):=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(\frac{a_k}{2b_k}x_k^2-\frac{1}{b_k}x_ku_k+\frac{d_k}{2b_k}u_k^2)}.
\end{equation*}
\end{defn}
\begin{defn}\label{analyticC}
Given a Clifford biquaternion valued function $f(\mathbf{x})$ with $\mathbf{x}=(x_1,x_2,x_3)$, its corresponding Clifford biquaternion analytic signal $f_A(\mathbf{x})$ is defined as
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^3}F_A(\mathbf{u})K^{\mathbf{e}_3}_{{A^{-1}_{3}}}(u_3, x_3)K^{\mathbf{e}_2}_{{A^{-1}_{2}}}(u_2, x_2)K^{\mathbf{e}_1}_{{A^{-1}_{1}}}(u_1, x_1)du_1du_2du_3,
\end{equation}
where $F_A(\mathbf{u})$ is
\begin{equation}
F_A(\mathbf{u})=[1+sign(\frac{u_1}{b_1})][1+sign(\frac{u_2}{b_2})][1+sign(\frac{u_3}{b_3})]F(\mathbf{u}),
\end{equation}
and $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ is the CLCT of original function $f(\mathbf{x})$, $\Lambda:$ $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,3$) is a set of parameter matrices with $det(A_k)=1$ and $b_k\neq0$.
\end{defn}
The analytic signal $f_A(\mathbf{x})$ defined by Definition \ref{analyticC} is a Clifford biquaternion valued function, which can be written as
\begin{equation*}
f_A(\mathbf{x})=(p_0+\mathbf{i}p_1+\mathbf{j}p_2+\mathbf{k}p_3)+\epsilon(q_0+\mathbf{i}q_1+\mathbf{j}q_2+\mathbf{k}q_3).
\end{equation*}
From this function, one can derive three quaternions:
\begin{equation*}
f_{A_{\mathbf{e}_1\mathbf{e}_2}}=p_0+q_1\mathbf{e}_1+q_2\mathbf{e}_2+p_3\mathbf{e}_1\mathbf{e}_2,
\end{equation*}
\begin{equation*}
f_{A_{\mathbf{e}_2\mathbf{e}_3}}=p_0+q_2\mathbf{e}_2+q_3\mathbf{e}_3+p_1\mathbf{e}_2\mathbf{e}_3,
\end{equation*}
and
\begin{equation*}
f_{A_{\mathbf{e}_3\mathbf{e}_1}}=p_0+q_3\mathbf{e}_3+q_1\mathbf{e}_1+p_2\mathbf{e}_3\mathbf{e}_1.
\end{equation*}
For each of these quaternion-valued signals, one can write a polar form representation and obtain its modulus. These moduli are named the partial modules of the original Clifford biquaternion valued analytic signal $f_A(\mathbf{x})$, as follows
\begin{equation*}
mod_{\mathbf{e}_1\mathbf{e}_2}=\sqrt{f_{A_{\mathbf{e}_1\mathbf{e}_2}}\overline{(f_{A_{\mathbf{e}_1\mathbf{e}_2}})}},
\end{equation*}
\begin{equation*}
mod_{\mathbf{e}_2\mathbf{e}_3}=\sqrt{f_{A_{\mathbf{e}_2\mathbf{e}_3}}\overline{(f_{A_{\mathbf{e}_2\mathbf{e}_3}})}},
\end{equation*}
\begin{equation*}
mod_{\mathbf{e}_3\mathbf{e}_1}=\sqrt{f_{A_{\mathbf{e}_3\mathbf{e}_1}}\overline{(f_{A_{\mathbf{e}_3\mathbf{e}_1}})}}.
\end{equation*}
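As a numeric sketch of these formulas, the three partial modules can be computed directly from the eight real components of $f_A$; the helper below is our own illustrative code, not part of the cited works, and simply evaluates the three square roots above.

```python
import math

def partial_modules(p, q):
    """Partial modules of a Clifford biquaternion analytic signal
    f_A = (p0 + i p1 + j p2 + k p3) + eps (q0 + i q1 + j q2 + k q3).
    Each partial module is the quaternion norm of the corresponding
    quaternion f_{A_{e_a e_b}} built from the components of p and q."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    mod_e1e2 = math.sqrt(p0**2 + q1**2 + q2**2 + p3**2)
    mod_e2e3 = math.sqrt(p0**2 + q2**2 + q3**2 + p1**2)
    mod_e3e1 = math.sqrt(p0**2 + q3**2 + q1**2 + p2**2)
    return mod_e1e2, mod_e2e3, mod_e3e1
```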
\subsection{Envelope Detection of 3-D Images}
In \cite{Wang12}, Wang constructed an experiment platform for acquiring a radio frequency (RF) ultrasound volume with a biopsy needle in it. The volume is $128\times1280\times33$ pixels in the lateral ($x_1$ axis), elevation ($x_2$ axis) and axial ($x_3$ axis) directions, respectively. Using the classical 1-D analytic signal and the novel 3-D analytic signal approaches, different resolutions appear in ultrasound image envelope detection. Since the 3-D analytic signal takes into account the information of the neighbouring scan lines, the novel approach supplies better results than the classical 1-D analytic signal approach. In this paper, we take the same experiment platform and show the shortcoming of the analytic signal associated with the CFT. We then apply the analytic signal associated with the CLCT in the Clifford biquaternion domain to this problem and show that, by modifying the matrix parameters, one can obtain a satisfactory result.
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox1-32.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox2-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox3-64.jpg}
\end{minipage}
\caption{Original envelope of the 3-D image.}
\end{figure}
Here we modify the volume to $128\times128\times128$ pixels; in the plane $x_3=64$ there is a biopsy needle located at $x_2=64$ with $0\leq x_1\leq32$, which can easily be seen in Figure 1. The flow-process diagram of the amplitude method of analytic signals is shown in Figure 2.
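For readers who want to reproduce the geometry, a minimal synthetic stand-in for this volume can be built as follows; the array name and the unit needle intensity are our assumptions, and the real RF data of Wang's platform is not reproduced here.

```python
import numpy as np

# A 128x128x128 volume that is zero except for a bright "needle" in the
# plane x3 = 64, located at x2 = 64 with 0 <= x1 <= 32 (0-based indices).
vol = np.zeros((128, 128, 128))
vol[0:33, 64, 64] = 1.0   # 33 needle voxels along the x1 direction
```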
\begin{figure}
\begin{tikzpicture}[node distance=1.0cm]
\centering
\node[startstop](start){Input original data $f(x_1,x_2,x_3)$};
\node[process, below of = start, yshift = -0.5cm](pro1){Obtain analytic signal $f_A(x_1,x_2,x_3)$};
\node[process, below of = pro1, yshift = -0.5cm](pro2){Derive the polar form};
\node[process, below of = pro2, yshift = -0.5cm](pro3){Obtain the modulus $|f_A(x_1,x_2,x_3)|$};
\node[process, below of = pro3, yshift = -0.5cm](stop){Draw the picture of $|f_A(x_1,x_2,x_3)|$};
\coordinate (point1) at (-3cm, -6cm);
\draw [arrow] (start)--(pro1);
\draw [arrow] (pro1)--(pro2);
\draw [arrow] (pro2)--(pro3);
\draw [arrow] (pro3)--(stop);
\end{tikzpicture}
\caption{The flow-process diagram of envelope detection using the analytic signal's amplitude.}
\end{figure}
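The pipeline of Figure 2 can be sketched concretely in the classical 1-D case, where the analytic signal is obtained by suppressing the negative-frequency half of the spectrum with the factor $[1+sign(u)]$ recalled in Section 2; the function name below is hypothetical.

```python
import numpy as np

def analytic_amplitude_1d(f):
    """Amplitude of the classical 1-D analytic signal, following the flow
    diagram: input f -> analytic signal f_A -> modulus |f_A|."""
    n = len(f)
    F = np.fft.fft(f)
    u = np.fft.fftfreq(n)
    F_A = (1.0 + np.sign(u)) * F      # [1 + sign(u)] F(u)
    return np.abs(np.fft.ifft(F_A))

# Envelope of an amplitude-modulated cosine: the carrier is removed and
# the slowly varying envelope 1 + 0.5 cos(...) is recovered.
t = np.arange(1024)
env = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * t / 1024)
signal = env * np.cos(2 * np.pi * 64 * t / 1024)
```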
Figure 3 shows the result of using Wang's analytic signal to detect the envelope of the biopsy needle located in the plane $x_3=64$. Thanks to taking into account the information of the neighbouring scan lines, one can find the biopsy needle not only in the plane $x_3=64$ but also in the nearest planes, $x_3=63$ and $x_3=65$. Since converting the RF ultrasound signal to the brightness-mode (B-mode) signal induces much noise and loses information, this property ensures that one can find a clear envelope in B-mode images. Unfortunately, the envelopes in these three planes are badly distorted, being much longer than the true needle. This shortcoming is well overcome by our approach based on the analytic signal associated with the CLCT, as shown in Figure 4. The envelope can also be found in the planes $x_3=63$, $x_3=64$ and $x_3=65$, and its length is very close to that of the true needle. Here the parameter matrices are $A_1=\left(
\begin{array}{cc}
1 & 10 \\
-0.05 & 0.5 \\
\end{array}
\right)$, $A_2=\left(
\begin{array}{cc}
1 & 10 \\
-0.05 & 0.5 \\
\end{array}
\right)$ and $A_3=\left(
\begin{array}{cc}
1000 & 1 \\
-0.1 & 0.0009 \\
\end{array}
\right)$, and they were found in two steps: first, for the 2-D image in the plane $x_3=64$, we tried several parameter sets and obtained a good result; then we slightly modified the parameters of $A_3$ and found a better one. Since there are too many parameters in these three matrices, we cannot guarantee that they are optimal. However, because Wang's method is a special case of our approach, we can ensure that our approach supplies a better result once proper parameter matrices are chosen.
Furthermore, we also use the envelope method of the monogenic signal, which was introduced by Yang \cite{Yang17} and is another higher dimensional generalization of the analytic signal, to detect the envelope of this image. Figure 5 shows that it supplies an accurate envelope, but the envelope appears in every $x_3$ plane, which means that the image is badly contaminated by the background signal.
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-62.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-66.jpg}
\end{minipage}
\caption{Wang's analytic signal associated with CFT \cite{Wang12}.}
\end{figure}
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-62.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-66.jpg}
\end{minipage}
\caption{This paper's method: analytic signal associated with the CLCT, where $A_1=(1,10;-0.05,0.5)$,
$A_2=(1,10;-0.05,0.5)$ and $A_3=(1000,1;-0.1,0.0009)$.}
\end{figure}
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-1.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-66.jpg}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=2in]{x3-128.jpg}
\end{minipage}
\caption{Monogenic signal approach of \cite{Yang17}.}
\end{figure}
\section{Conclusion}
Inspired by the success of the 3-D biquaternionic analytic signal in detecting the envelopes of 3-D ultrasound images, we generalize this approach to the LCT domain. Thanks to the special structure of the Clifford biquaternion algebra, this generalized analytic signal has three partial modules, where $mod_{\mathbf{e}_1\mathbf{e}_2}(f_A)$ corresponds to the envelope of 3-D images in $x_3$ planes. A real valued synthetic 3-D image is introduced to test the validity of this novel envelope detection method. Compared with the classical 1-D analytic signal method, Wang's generalized biquaternionic analytic signal method, and even the envelope method of the monogenic signal, our approach supplies the best result. Furthermore, our method can achieve a better resolution by modifying the parameter matrices and adding post-processing such as average filtering.
\subsection*{Acknowledgment}
The first author acknowledges financial support from the PhD research startup foundation of Hubei University of Technology (No. BSQD2019052). This work was also partially supported by the Foundation for Science and Technology from the Department of Education of Hubei Province (B2019047) and by the Macao Science and Technology Development Fund (0085/2018/A2).
\section{Introduction}
The analytic signal, which was introduced by Gabor \cite{Gabor46} and Ville \cite{Ville48}, is a powerful tool in various applications such as communication \cite{Boashash92}, radar-based object detection \cite{Levanon04}, and the processing of oceanic data \cite{Lilly06}. It is a complex signal obtained by adding the Hilbert transform of the original real-valued signal to the signal itself as the imaginary part. This complex signal can also be regarded as being constructed by suppressing all negative frequency components of the original real signal. Since one can separate the qualitative and quantitative information from the local phase and local amplitude of analytic signals, the instantaneous amplitude and phase are a central topic of analytic signal analysis, and a critical analysis of them was carried out by Picinbono \cite{Picibono97} in 1997. During the last several decades, many new complex analytic signal theories \cite{sangwine07,Hahn11,Unser09,Said08,Felsberg01,Yang17} have been established with the development of Clifford algebras and the associated Fourier transform theory.
The linear canonical transform (LCT), a powerful tool for optics and signal processing, was first introduced in the 1970s by Collins \cite{Collins70} and Moshinsky \cite{Moshinsky71}. It is a linear integral transform with three free parameters. Many famous transforms such as the Fourier transform (FT), the fractional Fourier transform (FRFT), and the Fresnel transform (FST) are all special cases of the LCT \cite{Wolf79, Ozaktas00, Pei01}. Since it has more degrees of freedom while requiring a computation cost similar to that of the FT and FRFT, the LCT has many applications, for instance in signal synthesis, radar system analysis, filter design and pattern recognition \cite{Ozaktas00, Pei03}. Recently, with the development of the LCT, the analytic signal has been extended to the LCT domain, initially by Fu and Li \cite{Fu08} and then to the 2D LCT domain by Xu et al. \cite{Xu09}. Furthermore, Kou \cite{Kou16} generalized the analytic signal to the quaternion domain with the help of the two-sided quaternion linear canonical transform (QLCT) and obtained satisfactory results by applying this method to envelope detection problems.
In the 3-D image processing domain, for instance 3-D ultrasound image registration, the analytic signal is a powerful tool. Zhang \cite{Zhang06} first used it to process 3-D ultrasound images with the help of its phase information. Following this, the phase information of the analytic signal has been widely used in image processing \cite{Harput11,Maltaverne10,Rajpoot09,Belaid11}. Since the local amplitude is another important part of the analytic signal, Wang \cite{Wang12} introduced the 3-D Clifford biquaternionic analytic signal, which is associated with the Clifford Fourier transform; with the help of its partial modules, it overcomes the information loss of the classical 1-D analytic envelope detection tools. Since the analytic signal in the LCT domain supplies better results in envelope detection than that in the FT domain, as shown by Kou in \cite{Kou16}, we generalize Wang's \cite{Wang12} Clifford biquaternionic analytic signal to the LCT domain, and with the help of the local amplitude, the envelopes of 3-D images are successfully detected. Synthetic examples show that our approach presents better results than Wang's method. Furthermore, a comparison with the amplitude method of the monogenic signal, which is another generalization of the analytic signal, verifies the power of our approach.
The paper is organized as follows. First, we recall basic knowledge about the Clifford biquaternion and the $n$ dimensional analytic signal in Section 2. Section 3 is dedicated to the definition and basic properties of the CLCT, which are the keys to using the generalized analytic signal to detect the envelopes of 3-D images. In Section 4, a novel approach to envelope detection based on the CLCT is supplied, and synthetic examples are introduced to show the advantages of this method. Finally, we conclude this article in Section 5.
\section{Preliminary}
\subsection{Clifford Biquaternion}
The quaternions, which were discovered by Hamilton in 1843 and are denoted by $\mathbb{H}$, are a generalization of the complex numbers. Each quaternion $\mathbf{q}$ (denoted by a bold letter in this paper) has the form $\mathbf{q}=q_0+q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$, where $q_0,q_1,q_2,q_3$ are real numbers and $\textbf{i}, \textbf{j}$, and $\textbf{k}$ are imaginary units which satisfy $\textbf{i}^2=\textbf{j}^2=\textbf{k}^2=-1$ and $\textbf{ij}=-\textbf{ji}=\textbf{k}$. For every quaternion $\mathbf{q}=q_0+q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$, the scalar part and vector part are $Sc(\mathbf{q}):=q_0$ and $\underline{\mathbf{q}}:=q_1\textbf{i}+q_2\textbf{j}+q_3\textbf{k}$ respectively. We also use the symbols $\overline{\mathbf{q}}:=q_0-\underline{\mathbf{q}}$ and $|\mathbf{q}|:=\sqrt{\mathbf{q}\overline{\mathbf{q}}}=\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}$ to denote the conjugate and norm of $\mathbf{q}$.
Furthermore, for $p=1$ and $2$, the quaternion module $L^p(\mathbb{R}^n,\mathbb{H})$ is defined by
\begin{equation*}
L^p(\mathbb{R}^n,\mathbb{H}):=\{f|f:\mathbb{R}^n\rightarrow\mathbb{H},\|f\|_{L^p(\mathbb{R}^n,\mathbb{H})}:=\big(\int_{\mathbb{R}^n}|f(\mathbf{x})|^pd\mathbf{x}\big)^{\frac{1}{p}}<\infty\}.
\end{equation*}
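The quaternion product and the multiplicativity of the norm, $|\mathbf{p}\mathbf{q}|=|\mathbf{p}||\mathbf{q}|$, can be checked with a few lines of Python; the helper names below are our own illustrative choices.

```python
def qmul(a, b):
    """Hamilton product of quaternions a = (a0, a1, a2, a3) and b,
    using i^2 = j^2 = k^2 = -1 and ij = -ji = k."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qnorm(a):
    """Quaternion norm |a| = sqrt(a0^2 + a1^2 + a2^2 + a3^2)."""
    return sum(x * x for x in a) ** 0.5
```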
Clifford algebra, which is a further generalization of quaternion, was introduced by William Kingdon Clifford in 1878 and is defined as follows.
\begin{defn}\label{def-clif}
Suppose that $\{\mathbf{e}_1,\mathbf{e}_2,\cdots,\mathbf{e}_n\}$ is an orthonormal basis of Euclidean space $\mathbb{R}^n$, and satisfies the relations $\mathbf{e}_i^2=-1$ for $i=1,2,\cdots,n,$ and $\mathbf{e}_i\mathbf{e}_j+\mathbf{e}_j\mathbf{e}_i=0$ for $1\leq i\neq j\leq n.$ Then the Clifford algebra $Cl(0,n)$ is an algebra constructed over these elements, i.e.,
\begin{equation}\label{CliffordDF}
Cl(0,n):=\bigg\{a=\sum\limits_S a_S\mathbf{e}_S: a_S\in \mathbb{R}, \mathbf{e}_S=\mathbf{e}_{j_1}\mathbf{e}_{j_2}\cdots \mathbf{e}_{j_k}\bigg\},
\end{equation}
where $S:=\{j_1,j_2,\cdots,j_k\}\subseteq\{1,2,\cdots,n\}$ with $1\leq j_1< j_2<\cdots< j_k\leq n$; or $S=\emptyset,$ and $\mathbf{e}_{\emptyset}:=1$.
\end{defn}
It can easily be seen from the definition that $Cl(0,n)$ is a $2^n$ dimensional real linear vector space. Let $|S|$ be the number of elements in the set $S$. For each $k\in\{0,1,\cdots,n\},$ let
\begin{equation*}
Cl^{(k)}(0,n):=\bigg\{a\in Cl(0,n): a=\sum\limits_{|S|=k} a_S\mathbf{e}_S \bigg\},
\end{equation*}
denote the subspace of $k$-vectors of $Cl(0,n)$.
Hence we have
\begin{equation*}
Cl(0,n)=\bigoplus_{k=0}^{n} Cl^{(k)}(0,n).
\end{equation*}
For any Clifford number $a=\sum\limits_{S}a_{S}\mathbf{e}_{S}$ in $Cl(0,n)$, it has a projection $[a]_k$ on $Cl^{(k)}(0,n)$, and can be represented by
\begin{equation*}
a=\sum\limits_{k=0}^{n} [a]_k.
\end{equation*}
For $k=0$, $[a]_0$ is named the scalar part of $a$. Furthermore, $[a]_1$, $[a]_2$ and $[a]_n$ are the vector part, bivector part and pseudoscalar part of $a$, respectively.
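A small sketch of the multiplication rule behind Definition \ref{def-clif}: encoding a basis blade $\mathbf{e}_S$ as the sorted tuple of its indices, the product of two blades is again a signed blade. This is an illustrative implementation of ours, assuming nothing beyond the relations $\mathbf{e}_i^2=-1$ and $\mathbf{e}_i\mathbf{e}_j=-\mathbf{e}_j\mathbf{e}_i$ ($i\neq j$).

```python
import bisect

def blade_mul(S, T):
    """Product e_S e_T in Cl(0,n).  A blade e_S is encoded as the sorted
    tuple S of its generator indices; the result is (sign, blade)."""
    out = list(S)
    sign = 1
    for t in T:
        # e_t anticommutes past every generator in `out` larger than t
        swaps = sum(1 for s in out if s > t)
        sign *= (-1) ** swaps
        if t in out:
            out.remove(t)
            sign = -sign          # e_t e_t = -1
        else:
            bisect.insort(out, t)
    return sign, tuple(out)
```

The last assertion below reproduces the fact, used later for $Cl(0,3)$, that the pseudoscalar $\epsilon=\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3$ squares to $+1$.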
Similar to the quaternion case, we use the symbols $\overline{a}:=\sum\limits_{S}a_{S}\overline{\mathbf{e}}_{S}$ and $|a|:=\left(\sum\limits_S|a_S|^2\right)^{1/2}$ to denote the conjugate and norm of $a$, where $\overline{\mathbf{e}}_{S}:=(-1)^{\frac{|S|(|S|+1)}{2}}\mathbf{e}_{S}$. Furthermore, the Clifford module $L^p(\mathbb{R}^n,Cl(0,n))$ is
\begin{equation*}
L^p(\mathbb{R}^n,Cl(0,n)):=\{f|f:\mathbb{R}^n\rightarrow Cl(0,n),\|f\|_{L^p(\mathbb{R}^n,Cl(0,n))}:=\big(\int_{\mathbb{R}^n}|f(\mathbf{x})|^pd\mathbf{x}\big)^{\frac{1}{p}}<\infty\}.
\end{equation*}
For $p=2$ and $f,g \in L^2(\mathbb{R}^n,Cl(0,n))$, an inner product can be equipped as follows
\begin{equation}\label{innerp}
(f,g)_{L^2(\mathbb{R}^n,Cl(0,n))}:=\int_{\mathbb{R}^n}f(\mathbf{x})\overline{g(\mathbf{x})}d^n\mathbf{x}.
\end{equation}
Since the Clifford algebras are generalized from the quaternion algebra, the two are closely related, as follows.
\begin{thm}\cite{Girard11}\label{The-clif}
If $p+q=2m$ ($m$ an integer), the Clifford algebra $Cl(p,q)$ is the tensor product of $m$ quaternion algebras. If $p+q=2m-1$, the Clifford algebra $Cl(p,q)$ is the tensor product of $m-1$ quaternion algebras and the algebra $(1,\epsilon)$, where $\epsilon$ is the product of the $2m-1$ generators ($\epsilon=\mathbf{e}_1\mathbf{e}_2\cdots\mathbf{e}_{2m-1}$).
\end{thm}
According to the above theorem, one can easily find some special examples of Clifford algebras, such as: the complex numbers $\mathbb{C}$ with $p=0, q=1, m=1$ and $\mathbf{e}_1=\mathbf{i}$; the quaternions $\mathbb{H}$ with $p=0, q=2, m=1$ and $\mathbf{e}_1=\mathbf{i}, \mathbf{e}_2=\mathbf{j}$.
Here we should pay attention to a special case of Clifford algebra, $Cl(0,3)$, which will be used in this paper to detect the envelopes of 3-D images. According to Theorem \ref{The-clif}, $Cl(0,3)$ corresponds to $p=0, q=3$, and is a tensor product of the quaternion algebra $\mathbb{H}$ and the algebra $(1,\epsilon)$, where $\mathbf{e}_1=\epsilon\mathbf{i}, \mathbf{e}_2=\epsilon\mathbf{j},\mathbf{e}_3=\epsilon\mathbf{k}$ are the generators of $Cl(0,3)$, and $\epsilon=\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3$ commutes with $\mathbf{i,j,k}$ and satisfies $\epsilon^2=1$.
Thus each number in this algebra
$$A=a_0+a_1\mathbf{e}_1+a_2\mathbf{e}_2+a_3\mathbf{e}_3+a_4\mathbf{e}_1\mathbf{e}_2+a_5\mathbf{e}_3\mathbf{e}_1+a_6\mathbf{e}_2\mathbf{e}_3+a_7\mathbf{e}_1\mathbf{e}_2\mathbf{e}_3,$$
can be expressed by $A=\mathbf{p}+\epsilon\mathbf{q}$, where $\mathbf{p},\mathbf{q}$ are two quaternions and
\begin{equation*}
\begin{aligned}
\mathbf{p}&=(a_0+a_6\mathbf{i}+a_5\mathbf{j}+a_4\mathbf{k}),
\\
\mathbf{q}&=(a_7+a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k}).
\end{aligned}
\end{equation*}
Here $a_0$ is the scalar part of $A$, $a_1,a_2,a_3$ correspond to the vector part, and $a_4,a_5,a_6$ are the bivector parts, while $a_7$ is the pseudoscalar part of $A$.
\begin{rem}\label{clibiquaternion}
Since $Cl(0,3)$ is a special case of geometric algebra (Clifford algebra) and is isomorphic to $\mathbb{H}\bigoplus\mathbb{H}$, it is also named the Clifford biquaternion \cite{Wang12,Girard11} (clifbquat for short) by some authors.
\end{rem}
Since the Clifford biquaternion algebra $Cl(0,3)$ is a special case of geometric algebra (Clifford algebra), it obeys the reflection properties of geometric algebra \cite{Vince08}: under an orthogonal symmetry with respect to a plane perpendicular to a unit vector $a$, the reflection of a Clifford biquaternion $A=\sum\limits_{|S|=0}^3 a_S\mathbf{e}_S$ is $A'=\sum\limits_{|S|=0}^3 (-1)^{|S|+1}a_S\, a\mathbf{e}_S a$.
In particular, if $a=\mathbf{e}_1$, we can get
\begin{equation}
A'=K_1(A)=(a_0+a_6\mathbf{i}-a_5\mathbf{j}-a_4\mathbf{k})+\epsilon(-a_7-a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k});
\end{equation}
if $a=\mathbf{e}_2$, it is
\begin{equation}
A'=K_2(A)=(a_0-a_6\mathbf{i}+a_5\mathbf{j}-a_4\mathbf{k})+\epsilon(-a_7+a_1\mathbf{i}-a_2\mathbf{j}+a_3\mathbf{k});
\end{equation}
if $a=\mathbf{e}_3$,
\begin{equation}
A'=K_3(A)=(a_0-a_6\mathbf{i}-a_5\mathbf{j}+a_4\mathbf{k})+\epsilon(-a_7+a_1\mathbf{i}+a_2\mathbf{j}-a_3\mathbf{k}).
\end{equation}
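In the quaternion-pair coordinates $A=\mathbf{p}+\epsilon\mathbf{q}$ introduced above, these three reflections act componentwise; the sketch below (our notation, with $\mathbf{p}=(a_0,a_6,a_5,a_4)$ and $\mathbf{q}=(a_7,a_1,a_2,a_3)$ in the $\mathbf{i},\mathbf{j},\mathbf{k}$ basis) writes them out and makes it easy to check that each $K_a$ is an involution, as a reflection must be.

```python
def K1(p, q):
    """Reflection with respect to the plane perpendicular to e_1."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0, p1, -p2, -p3), (-q0, -q1, q2, q3)

def K2(p, q):
    """Reflection with respect to the plane perpendicular to e_2."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0, -p1, p2, -p3), (-q0, q1, -q2, q3)

def K3(p, q):
    """Reflection with respect to the plane perpendicular to e_3."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0, -p1, -p2, p3), (-q0, q1, q2, -q3)
```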
\subsection{Analytic Signal in N Dimension}
The analytic signal is constructed by suppressing the negative frequency components of the original signal, so it is closely related to the Fourier transform of the original signal. In \cite{Girard11}, Girard generalized the analytic signal to $n$ dimensions by introducing a new Clifford Fourier transform as follows.
\begin{defn}Given a Clifford valued function $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$ with $\mathbf{x}=(x_1,x_2,\cdots,x_n)$, its Clifford Fourier transform $F(\mathbf{u})$ is defined as
\begin{equation}
F(\mathbf{u})=\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n}e^{-\mathbf{e}_k2\pi u_kx_k}d^n\mathbf{x}.
\end{equation}
\end{defn}
Furthermore, Girard supplied the inverse Clifford Fourier transform as follows
\begin{equation}
f(\mathbf{x})=\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1}e^{\mathbf{e}_{n-k}2\pi u_{n-k}x_{n-k}}d^{n}\mathbf{u}.
\end{equation}
With the help of this CFT, Girard introduced the $n$ dimensional analytic signal $f_A(\mathbf{x})$ corresponding to $f(\mathbf{x})$ through the following steps
\begin{equation}
F_A(\mathbf{u})=\prod\limits_{k=1}^{n}[1+sign(u_k)]F(\mathbf{u}),
\end{equation}
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^n}F_A(\mathbf{u})\prod\limits_{k=0}^{n-1}e^{\mathbf{e}_{n-k}2\pi u_{n-k}x_{n-k}}d^n\mathbf{u},
\end{equation}
where $sign(u_k)$ is the classical signum function and $F(\mathbf{u})$ is the CFT of the original function $f(\mathbf{x})$.
It can easily be verified that when $n=1$, the CFT degenerates to the classical FT and the corresponding analytic signal is $f_A(t)$. When $n=2$, the CFT turns into the right-sided quaternion Fourier transform and the analytic signal is the quaternion-valued function $f_A(x,y)$.
Since the LCT is a generalization of the FT, one can generalize the analytic signal by replacing the FT with the LCT, as introduced by Fu in \cite{Fu08}. One can combine these two approaches to generalize the analytic signal to the $n$ dimensional LCT domain.
The monogenic signal, which was introduced by Felsberg \cite{Felsberg01}, is another generalization of the analytic signal. Since we will compare our method with that of the monogenic signal, basic knowledge about the monogenic signal is reviewed in the following.
\begin{defn}\cite{Yang17}(Monogenic signal) For $f\in L^2(\mathbb{R}^n,Cl(0,n))$, the monogenic signal $f_M\in L^2(\mathbb{R}^n,Cl(0,n))$ is defined by
\begin{equation}\label{mono-defn}
f_M(\underline{x}):=f(\underline{x})+H[f](\underline{x}),
\end{equation}
where $H[f]$ is the isotropic Hilbert transform of $f$ defined by
\begin{equation}
\begin{aligned}
\label{hilber-defn}
H[f](\underline{x}):&=p.v.\frac{1}{\omega_n}\int_{\mathbf{R}^n}\frac{\overline{\underline{x}-\underline{t}}}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\\
&=\lim\limits_{\epsilon\rightarrow0^+}\frac{1}{\omega_n}\int_{{|\underline{x}-\underline{t}}|>\epsilon}\frac{\overline{\underline{x}-\underline{t}}}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\\
&=-\sum\limits_{j=1}^{n}R_j(f)(\underline{x})\mathbf{e}_j.
\end{aligned}
\end{equation}
Furthermore,
\begin{equation*}
R_j(f)(\underline{x}):=\lim\limits_{\epsilon\rightarrow0^+}\frac{1}{\omega_n}\int_{{|\underline{x}-\underline{t}}|>\epsilon}\frac{x_j-t_j}{|\underline{x}-\underline{t}|^{n+1}}f(\underline{t})d\underline{t}
\end{equation*}
and $\omega_n=\frac{2\pi^{\frac{n+1}{2}}}{\Gamma(\frac{n+1}{2})}$.
\end{defn}
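For comparison purposes, the local amplitude $\sqrt{f^2+R_1(f)^2+R_2(f)^2}$ of the monogenic signal can be approximated on a discrete 2-D image through the Fourier multipliers $-\mathrm{i}\,u_j/|\mathbf{u}|$ of the Riesz transforms $R_j$; the following is a numerical sketch of that idea, not the exact implementation used in \cite{Felsberg01} or \cite{Yang17}.

```python
import numpy as np

def monogenic_amplitude_2d(f):
    """Local amplitude of the monogenic signal of a real 2-D image,
    with the Riesz transforms computed by FFT multipliers -i u_j/|u|."""
    F = np.fft.fft2(f)
    u1 = np.fft.fftfreq(f.shape[0])[:, None]
    u2 = np.fft.fftfreq(f.shape[1])[None, :]
    norm = np.sqrt(u1**2 + u2**2)
    norm[0, 0] = 1.0                      # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(-1j * u1 / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * u2 / norm * F))
    return np.sqrt(f**2 + r1**2 + r2**2)

# A plane wave cos(2*pi*4*x1/64) has constant local amplitude 1:
x1 = np.arange(64)[:, None] * np.ones((1, 64))
img = np.cos(2 * np.pi * 4 * x1 / 64)
```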
\section{Clifford Linear Canonical Transform}
Since the LCT is a generalization of many famous integral transforms and has more degrees of freedom, it is natural to generalize the LCT to the Clifford domain. In the last decade, many mathematicians have tried different approaches to this kind of generalization. In \cite{Kou13}, Kou introduced the CLCT of a function $f\in L^1(\mathbb{R}^n,Cl(0,n))$ and investigated the maxima-of-energy-preservation problem. Yang \cite{Yang14} paid attention to the CLCT with the kernel constituted by the complex unit $I$. Here, we introduce a new type of generalization as follows.
\begin{defn} (Right-sided CLCT)
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) be a set of parameter matrices with $\det(A_k)=1$ and $b_k\neq0$. The right-sided CLCT of a signal $f\in L^1(\mathbb{R}^n,Cl(0,n))$ is a Clifford valued function $\mathscr{L}^r_{\Lambda}(f): \mathbb{R}^n\rightarrow Cl(0,n)$ defined as follows
\begin{equation}\label{def-rCLCT}
\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x},
\end{equation}
where $K^{\mathbf{e}_k}_{A_k}(x_k,u_k)$ is the kernel of the CLCT defined as
\begin{equation}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k):=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(\frac{a_k}{2b_k}x_k^2-\frac{1}{b_k}x_ku_k+\frac{d_k}{2b_k}u_k^2)}.
\end{equation}
\end{defn}
Due to the non-commutativity of the Clifford algebra, there is a different type of CLCT, the left-sided CLCT, which can be defined as follows
\begin{defn} (Left-sided CLCT)
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) be a set of parameter matrices with $\det(A_k)=1$ and $b_k\neq0$. The left-sided CLCT of a signal $f\in L^1(\mathbb{R}^n,Cl(0,n))$ is a Clifford valued function $\mathscr{L}^l_{\Lambda}(f): \mathbb{R}^n\rightarrow Cl(0,n)$ defined as follows
\begin{equation}
\mathscr{L}^l_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)f(\mathbf{x})d^n\mathbf{x}.
\end{equation}
\end{defn}
In the following we focus on the right-sided CLCT. Furthermore, one could follow the approach Hitzer used in \cite{Hitzer14} to introduce a two-sided Clifford LCT with two square roots of $-1$ in $Cl(0,n)$, which is beyond the scope of this paper.
\begin{rem}It can be easily verified that when the dimension $n=1$, the CLCT degenerates to the classical LCT. When $n=2$, the CLCT turns to the right-sided quaternion linear canonical transform.
\end{rem}
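As a sanity check of this remark, the $n=1$ case can be evaluated by direct quadrature: with $A=(0,1;-1,0)$ the CLCT kernel reduces to $e^{-\mathbf{e}_1xu}/\sqrt{2\pi\mathbf{e}_1}$, i.e. the classical Fourier transform up to the constant $1/\sqrt{\mathbf{e}_1}$. The grid sizes, the Gaussian test function and the helper name below are our own choices for this sketch.

```python
import numpy as np

def lct_1d(f_vals, x, u, a, b, d):
    """Quadrature approximation of the 1-D LCT with matrix A = (a,b;c,d),
    det A = 1; c = (a d - 1)/b does not enter the kernel."""
    dx = x[1] - x[0]
    X, U = np.meshgrid(x, u)
    kern = np.exp(1j * (a / (2*b) * X**2 - X*U / b + d / (2*b) * U**2))
    return np.sum(f_vals[None, :] * kern, axis=1) * dx / np.sqrt(2j * np.pi * b)

# For f(x) = exp(-x^2/2) and A = (0,1;-1,0), the LCT should equal the
# Fourier transform exp(-u^2/2) divided by sqrt(i).
x = np.linspace(-20.0, 20.0, 4001)
u = np.array([0.0, 0.5, 1.0])
out = lct_1d(np.exp(-x**2 / 2), x, u, 0.0, 1.0, 0.0)
expected = np.exp(-u**2 / 2) / np.sqrt(1j)
```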
Now let us investigate an example to see how the CLCT works:
\begin{ex}
Consider the right-sided CLCT of $f(\mathbf{x})=\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})$.
\end{ex}
Since $f(\mathbf{x})$ is a product of $n$ LCT kernel functions, we can integrate the factors separately when taking the CLCT.
That is
\begin{equation}\label{ExamK}
\begin{split}
&\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})=\int_{\mathbb{R}^n}\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^{n-1}}\prod\limits_{k=0}^{n-2} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\int_{\mathbb{R}}K^{-\mathbf{e}_{1}}_{A_{1}}(x_{1},v_{1})K^{\mathbf{e}_{1}}_{A_{1}}(x_{1},u_{1})dx_1
\\
&\cdot\prod\limits_{k=2}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^{n-1}\mathbf{x},
\end{split}
\end{equation}
where
\begin{equation}\label{Delta}
\begin{split}
\int_{\mathbb{R}}K^{-\mathbf{e}_{1}}_{A_{1}}(x_{1},v_{1})K^{\mathbf{e}_{1}}_{A_{1}}(x_{1},u_{1})&dx_1=\int_{\mathbb{R}}\frac{1}{\sqrt{-\mathbf{e}_12\pi b_1}}e^{-\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1v_1+\frac{d_1}{2b_1}v_1^2)}
\\
&\cdot\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1u_1+\frac{d_1}{2b_1}u_1^2)}dx_1
\\
&=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-\mathbf{e}_1\frac{x_1}{b_1}(u_1-v_1)}d(\frac{x_1}{b_1})e^{\mathbf{e}_1\frac{d_1}{2b_1}(u_1^2-v_1^2)}
\\
&=\delta(u_1-v_1).
\end{split}
\end{equation}
The last step of the above equation follows from the Fourier transform relation $\widehat{1}(\omega)=2\pi \delta(\omega)$, together with the fact that the remaining chirp factor in $u_1^2-v_1^2$ equals $1$ on the support of $\delta(u_1-v_1)$.
Applying result (\ref{Delta}) to the right side of equation (\ref{ExamK}), one can derive
$$\mathscr{L}^r_{\Lambda}\big(\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(x_{n-k},v_{n-k})\big)(\mathbf{u})=\prod\limits_{k=1}^{n}\delta(u_k-v_k).$$
In the following content, we turn to investigate the main properties of CLCT.
\begin{prop} (Left linearity) Let $\alpha, \beta \in Cl(0,n)$. For $f_1(\mathbf{x}),f_2(\mathbf{x})\in L^1(\mathbb{R}^n, Cl(0,n))$, we have $\mathscr{L}^r_{\Lambda}(\alpha f_1+\beta f_2)(\mathbf{u})=\alpha\mathscr{L}^r_{\Lambda} (f_1)(\mathbf{u})+\beta \mathscr{L}^r_{\Lambda} (f_2)(\mathbf{u})$.
\end{prop}
This proposition can be easily verified by the linearity of integral.
\begin{prop}(Translation) Let $\bm{\alpha}=(\alpha_1,0,\cdots,0,\alpha_n)$. For $f(\mathbf{x})\in L^1(\mathbb{R}^n,\mathbb{R})$, we have
$
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\mathscr{L}^r_{\Lambda}(f)(u_1-a_1\alpha_1,u_2,\cdots,u_n-a_n\alpha_n)e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}.
$
\end{prop}
\begin{proof}
According to definition of right-sided CLCT, one can get
\begin{equation}\label{trans1}
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=\int_{\mathbb{R}^n}f(x_1-\alpha_1,x_2,\cdots,x_{n-1},x_n-\alpha_n)\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{equation}
Let $t_1=x_1-\alpha_1$ and $t_n=x_n-\alpha_n$; the above integral turns into a new integral over the variables $t_1,x_2,\cdots,x_{n-1},t_n$, where the first LCT kernel becomes
\begin{equation*}
\begin{aligned}
K^{\mathbf{e}_1}_{A_1}(x_1,u_1)&=\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}(t_1+\alpha_1)^2-\frac{1}{b_1}(t_1+\alpha_1)u_1+\frac{d_1}{2b_1}u_1^2]}
\\
&=\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}t_1^2+\frac{a_1}{b_1}t_1\alpha_1+\frac{a_1}{2b_1}\alpha_1^2-\frac{t_1u_1}{b_1}-\frac{u_1\alpha_1}{b_1}+\frac{d_1}{2b_1}u_1^2]}
\\
&=e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1[\frac{a_1}{2b_1}t_1^2-\frac{t_1}{b_1}(u_1-a_1\alpha_1)+\frac{d_1}{2b_1}(u_1-a_1\alpha_1)^2]},
\end{aligned}
\end{equation*}
and the last LCT kernel turns to
\begin{equation*}
K^{\mathbf{e}_n}_{A_n}(x_n,u_n)=e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}\frac{1}{\sqrt{\mathbf{e}_n2\pi b_n}}e^{\mathbf{e}_n[\frac{a_n}{2b_n}t_n^2-\frac{t_n}{b_n}(u_n-a_n\alpha_n)+\frac{d_n}{2b_n}(u_n-a_n\alpha_n)^2]}.
\end{equation*}
Since $f(\mathbf{x})$ is a real valued function in this case, it can interchange positions with $K^{\mathbf{e}_1}_{A_1}(x_1,u_1)$ in equation (\ref{trans1}) freely. Hence equation (\ref{trans1}) yields
\begin{equation*}
\begin{aligned}
\mathscr{L}^r_{\Lambda}(f(\mathbf{x}-\bm{\alpha}))(\mathbf{u})=&e^{\mathbf{e}_1(c_1u_1\alpha_1)}e^{-\mathbf{e}_1\frac{a_1c_1}{2}\alpha_1^2}\mathscr{L}^r_{\Lambda}(f)(u_1-a_1\alpha_1,u_2,\cdots,u_n-a_n\alpha_n)
\\
&e^{\mathbf{e}_n(c_nu_n\alpha_n)}e^{-\mathbf{e}_n\frac{a_nc_n}{2}\alpha_n^2}.
\end{aligned}
\end{equation*}
\end{proof}
\begin{prop}(Scaling) Let $\sigma_1,\sigma_2,\cdots,\sigma_n>0$. For $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$, we have
$$
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})=\frac{1}{\sqrt{\prod\limits_{k=1}^{n}\sigma_k}}\mathscr{L}^r_{\Lambda'}(f)(\mathbf{u}),
$$
where $\Lambda'$: $B_k=\left(
\begin{array}{cc}
a_k/\sigma_k & b_k\sigma_k \\
c_k/\sigma_k & d_k\sigma_k \\
\end{array}
\right).$
\end{prop}
\begin{proof}According to equation (\ref{def-rCLCT}), one can find
\begin{equation}\label{Escaling}
\begin{split}
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})=&\int_{\mathbb{R}^n}f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n)
\\
&\cdot\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{split}
\end{equation}
Let $t_k=\sigma_kx_k\quad(k=1,\cdots,n)$; then each kernel $K^{\mathbf{e}_k}_{A_k}(x_k,u_k)$ turns into
\begin{equation*}
\begin{aligned}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k)&=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k[\frac{a_k}{2b_k}(\frac{t_k}{\sigma_k})^2-\frac{1}{b_k}(\frac{t_k}{\sigma_k})u_k+\frac{d_k}{2b_k}u_k^2]}
\\
&=\frac{\sqrt{{\sigma_k}}}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}.
\end{aligned}
\end{equation*}
Hence equation (\ref{Escaling}) becomes
\begin{equation*}
\begin{split}
\mathscr{L}^r_{\Lambda}(f(\sigma_1x_1,\sigma_2x_2,\cdots,\sigma_nx_n))(\mathbf{u})&=\int_{\mathbb{R}^n}f(t_1,t_2,\cdots,t_n)
\\
&\cdot\prod\limits_{k=1}^{n} \frac{\sqrt{{\sigma_k}}}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}d{\frac{t_1}{\sigma_1}}\cdots d{\frac{t_n}{\sigma_n}}
\\
&=\frac{1}{\sqrt{\prod\limits_{k=1}^{n}\sigma_k}}\int_{\mathbb{R}^n}f(t_1,t_2,\cdots,t_n)\prod\limits_{k=1}^{n} \frac{1}{\sqrt{\mathbf{e}_k2\pi b_k{\sigma_k}}}
\\
&\cdot e^{\mathbf{e}_k[\frac{a_k/{\sigma_k}}{2b_k{\sigma_k}}t_k^2-\frac{1}{b_k{\sigma_k}}t_ku_k+\frac{d_k{\sigma_k}}{2b_k{\sigma_k}}u_k^2]}d^n\mathbf{t},
\end{split}
\end{equation*}
which is the desired result.
\end{proof}
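As a numerical sanity check of the scaling property (an illustrative addition, not part of the original argument), the sketch below works in the 1-D case, where the Clifford unit $\mathbf{e}_1$ behaves like the complex imaginary unit; the matrix entries, the scale $\sigma$, and the Gaussian test function are arbitrary choices.

```python
import numpy as np

def lct(f_vals, x, a, b, d, u):
    # 1-D LCT with e_1 -> 1j: K(x,u) = exp(1j(a x^2/(2b) - xu/b + d u^2/(2b))) / sqrt(1j*2*pi*b)
    K = np.exp(1j * (a * x**2 / (2 * b) - x * u / b + d * u**2 / (2 * b))) / np.sqrt(2j * np.pi * b)
    return np.sum(f_vals * K) * (x[1] - x[0])

a, b, c, d = 1.0, 1.0, 0.0, 1.0      # det = ad - bc = 1
sigma = 2.0
x = np.linspace(-12, 12, 200001)
f = np.exp(-x**2)                    # rapidly decaying test signal
f_scaled = np.exp(-(sigma * x)**2)   # f(sigma * x)

# scaling law: L_A(f(sigma x))(u) = sigma^{-1/2} L_{A'}(f)(u), A' = (a/sigma, b*sigma; c/sigma, d*sigma)
for u in (0.0, 0.7, 1.5):
    lhs = lct(f_scaled, x, a, b, d, u)
    rhs = lct(f, x, a / sigma, b * sigma, d * sigma, u) / np.sqrt(sigma)
    assert abs(lhs - rhs) < 1e-6
```

Note that the rescaled matrix still satisfies $\det B_k=(a/\sigma)(d\sigma)-(b\sigma)(c/\sigma)=ad-bc=1$.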
\begin{prop}\label{Partial}(Partial derivative) For $f(\mathbf{x})$, $\frac{\partial f(\mathbf{x})}{\partial x_1}$ and $\frac{\partial f(\mathbf{x})}{\partial x_n}\in L^1(\mathbb{R}^n,\mathbb{R})$, we have
\begin{equation}\label{Partial1}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=(a_1\frac{\partial}{\partial u_1}-\mathbf{e}_1c_1u_1)\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}),
\end{equation}
and
\begin{equation}\label{Partialn}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_n})(\mathbf{u})=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})(a_n\frac{\partial}{\partial u_n}-\mathbf{e}_nc_nu_n).
\end{equation}
\end{prop}
\begin{proof}
Since $\frac{\partial f(\mathbf{x})}{\partial x_1}\in L^1(\mathbb{R}^n,\mathbb{R})$, its CLCT is given by
\begin{equation*}
\begin{aligned}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})&=\int_{\mathbb{R}^n}\frac{\partial f(\mathbf{x})}{\partial x_1}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^{n-1}}\bigg[\int_\mathbb{R}\frac{\partial f(\mathbf{x})}{\partial x_1}K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1\bigg]\prod\limits_{k=2}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^{n-1}\mathbf{x},
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\int_\mathbb{R}\frac{\partial f(\mathbf{x})}{\partial x_1}&K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1=\big[f(\mathbf{x})K^{\mathbf{e}_1}_{A_1}(x_1,u_1)\big]\big|_{-\infty}^{+\infty}-\int_\mathbb{R} f(\mathbf{x})\frac{\partial}{\partial x_1}K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1
\\
&=-\int_\mathbb{R} f(\mathbf{x})\frac{1}{\sqrt{\mathbf{e}_12\pi b_1}}e^{\mathbf{e}_1(\frac{a_1}{2b_1}x_1^2-\frac{1}{b_1}x_1u_1+\frac{d_1}{2b_1}u_1^2)}\mathbf{e}_1(\frac{a_1}{b_1}x_1-\frac{u_1}{b_1})dx_1
\\
&=\int_\mathbb{R}(\frac{u_1}{b_1}-\frac{a_1}{b_1}x_1)f(\mathbf{x})\mathbf{e}_1K^{\mathbf{e}_1}_{A_1}(x_1,u_1)dx_1.
\end{aligned}
\end{equation*}
Hence
\begin{equation}\label{PartialD}
\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=\int_{\mathbb{R}^n}(\frac{u_1}{b_1}-\frac{a_1}{b_1}x_1)f(\mathbf{x})\mathbf{e}_1\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{equation}
While
\begin{equation*}
\begin{aligned}
a_1\frac{\partial}{\partial u_1}\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})&=a_1\frac{\partial}{\partial u_1}\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^n}(\frac{a_1d_1}{b_1}u_1-\frac{a_1x_1}{b_1})f(\mathbf{x})\mathbf{e}_1\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\mathbf{e}_1c_1u_1\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})=\int_{\mathbb{R}^n}c_1u_1\mathbf{e}_1 f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}.
\end{aligned}
\end{equation*}
According to equation (\ref{PartialD}), one can easily verify that $\mathscr{L}^r_{\Lambda}(\frac{\partial f(\mathbf{x})}{\partial x_1})(\mathbf{u})=(a_1\frac{\partial}{\partial u_1}-\mathbf{e}_1c_1u_1)\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$, where the property $a_1d_1-b_1c_1=1$ is used. The proof of equation (\ref{Partialn}) is similar to that of equation (\ref{Partial1}) and is omitted here.
\end{proof}
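Identity (\ref{Partial1}) can also be checked numerically in the 1-D complex model (again an illustrative addition; the matrix with $c_1\neq0$ and the Gaussian test function are arbitrary choices), approximating the $u$-derivative by a central difference:

```python
import numpy as np

def lct(f_vals, x, a, b, d, u):
    # 1-D LCT kernel with e_1 -> 1j
    K = np.exp(1j * (a * x**2 / (2 * b) - x * u / b + d * u**2 / (2 * b))) / np.sqrt(2j * np.pi * b)
    return np.sum(f_vals * K) * (x[1] - x[0])

a, b, c, d = 1.0, 1.0, 1.0, 2.0       # det = ad - bc = 1
x = np.linspace(-12, 12, 200001)
f = np.exp(-x**2)
df = -2 * x * np.exp(-x**2)           # f'(x)

h = 1e-4                              # step for the central difference in u
for u in (0.3, 1.0):
    lhs = lct(df, x, a, b, d, u)
    dF = (lct(f, x, a, b, d, u + h) - lct(f, x, a, b, d, u - h)) / (2 * h)
    rhs = a * dF - 1j * c * u * lct(f, x, a, b, d, u)
    assert abs(lhs - rhs) < 1e-5
```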
\begin{thm}(Plancherel) Suppose $f(\mathbf{x})$ and $g(\mathbf{x}) \in L^1\bigcap L^2(\mathbb{R}^n,Cl(0,n))$, $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ and $G(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(g)(\mathbf{u})$ are their CLCTs respectively, then we have
\begin{equation}\label{innerp1}
(f,g)_{L^2(\mathbb{R}^n,Cl(0,n))}=(F,G)_{L^2(\mathbb{R}^n,Cl(0,n))}.
\end{equation}
Furthermore, when $f=g$, they turn to
\begin{equation}\label{parseval1}
(f,f)_{L^2(\mathbb{R}^n,Cl(0,n))}=(F,F)_{L^2(\mathbb{R}^n,Cl(0,n))}.
\end{equation}
\end{thm}
\begin{proof} According to equation (\ref{innerp}),
\begin{equation}\label{Planchel1}
(F,G)_{L^2(\mathbb{R}^n,Cl(0,n))}=\int_{\mathbb{R}^n}F(\mathbf{u})\overline{G(\mathbf{u})}d^n\mathbf{u}.
\end{equation}
Substituting the integral formulas of $F(\mathbf{u})$ and $G(\mathbf{u})$ into the above equation, one can derive
\begin{equation*}
\begin{aligned}
(F,G)&=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^n}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)d^n\mathbf{x}\bigg]\bigg[\overline{\int_{\mathbb{R}^n}g(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(y_k,u_k)d^n\mathbf{y}}\bigg]d^n\mathbf{u}
\\
&=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^{2n}}f(\mathbf{x})\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})\overline{g(\mathbf{y})}d^n\mathbf{x}d^n\mathbf{y}\bigg]d^n\mathbf{u}
\\
&=\int_{\mathbb{R}^{2n}}f(\mathbf{x})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}\bigg]\overline{g(\mathbf{y})}d^n\mathbf{x}d^n\mathbf{y}.
\end{aligned}
\end{equation*}
The interchange of integrals in the above equation is justified by Fubini's theorem, which is ensured by $f(\mathbf{x})$ and $g(\mathbf{x}) \in L^1\bigcap L^2(\mathbb{R}^n,Cl(0,n))$.
Furthermore, the integral in brackets can be rewritten as
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}&\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^{n-1}}\prod\limits_{k=1}^{n-1}K^{\mathbf{e}_k}_{A_k}(x_k,u_k)
\\
&\bigg[\int_{\mathbb{R}}K^{\mathbf{e}_n}_{A_n}(x_n,u_n)K^{-\mathbf{e}_n}_{A_n}(y_n,u_n)du_n\bigg]\prod\limits_{k=1}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^{n-1}\mathbf{u},
\end{aligned}
\end{equation*}
where
$\int_{\mathbb{R}}K^{\mathbf{e}_n}_{A_n}(x_n,u_n)K^{-\mathbf{e}_n}_{A_n}(y_n,u_n)du_n$
can be calculated as in equation (\ref{Delta}) and equals $\delta (x_n-y_n)$.
Hence, the above equation is
\begin{equation*}
\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_k}_{A_k}(x_k,u_k)\prod\limits_{k=0}^{n-1} K^{-\mathbf{e}_{n-k}}_{A_{n-k}}(y_{n-k},u_{n-k})d^n\mathbf{u}=\prod\limits_{k=1}^n\delta (x_k-y_k),
\end{equation*}
and $(F,G)$ turns to
\begin{equation*}
\begin{aligned}
(F,G)&=\int_{\mathbb{R}^n}f(\mathbf{x})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^n\delta (x_k-y_k)\overline{g(\mathbf{y})}d^n\mathbf{y}\bigg]d^n\mathbf{x}
\\
&=\int_{\mathbb{R}^n}f(\mathbf{x})\overline{g(\mathbf{x})}d^n\mathbf{x}=(f,g).
\end{aligned}
\end{equation*}
Letting $f=g$, the Parseval identity (\ref{parseval1}) is derived.
\end{proof}
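The Parseval identity can be tested numerically in the 1-D complex model (an illustrative sketch added here; the grids and the Gaussian test signal are arbitrary choices). For $f(x)=e^{-x^2}$ one has $\int_{\mathbb{R}}|f|^2dx=\sqrt{\pi/2}$, and the energy of the discretized transform should match it:

```python
import numpy as np

a, b, d = 1.0, 1.0, 1.0               # det = 1 with c = 0
x = np.linspace(-7, 7, 1401)
u = np.linspace(-11, 11, 1101)
dx, du = x[1] - x[0], u[1] - u[0]

f = np.exp(-x**2)
# forward transform on a grid: F(u_j) = sum_i f(x_i) K(x_i, u_j) dx
K = np.exp(1j * (a * x[:, None]**2 / (2 * b) - x[:, None] * u[None, :] / b
                 + d * u[None, :]**2 / (2 * b))) / np.sqrt(2j * np.pi * b)
F = (f @ K) * dx

energy_x = np.sum(np.abs(f)**2) * dx        # ||f||^2 = sqrt(pi/2)
energy_u = np.sum(np.abs(F)**2) * du
assert abs(energy_x - np.sqrt(np.pi / 2)) < 1e-8
assert abs(energy_u - energy_x) < 1e-6
```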
\begin{thm}\label{inverset}(Inverse Theorem) Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)$
($k=1,2,\cdots,n$) and suppose $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$. Then the CLCT of $f$, $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$, is an invertible transform and its inverse is
\begin{equation}\label{inversef}
f(\mathbf{x})=\mathscr{L}^{-1}_{\Lambda}(F)(\mathbf{x})=\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u},
\end{equation}
where $A^{-1}_k=\left(
\begin{array}{cc}
d_k & -b_k \\
-c_k & a_k \\
\end{array}
\right),$ ($k=1,2,\cdots,n$).
\end{thm}
\begin{proof}
Since $f(\mathbf{x})\in L^1(\mathbb{R}^n,Cl(0,n))$, by straightforward computation one can find
\begin{equation}\label{inversef1}
\begin{aligned}
\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^n}\bigg[\int_{\mathbb{R}^n}f(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)d^n\mathbf{y}\bigg]
\\
\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{u}
\\
=\int_{\mathbb{R}^{2n}}f(\mathbf{y})\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{y}d^n\mathbf{u},
\\
=\int_{\mathbb{R}^n}f(\mathbf{y})\bigg[\int_{\mathbb{R}^n}\prod\limits_{k=1}^{n} K^{\mathbf{e}_{k}}_{A_k}(y_k, u_k)
\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k},x_{n-k})d^n\mathbf{u}\bigg]d^n\mathbf{y}
\end{aligned}
\end{equation}
where $K^{\mathbf{e}_k}_{{A^{-1}_k}}(u_k,x_k)=\frac{1}{\sqrt{-\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(-\frac{a_k}{2b_k}x_k^2+\frac{1}{b_k}x_ku_k-\frac{d_k}{2b_k}u_k^2)}$ is obtained by substituting the entries of $A^{-1}_k$ into the kernel definition.
Similarly to equation (\ref{Delta}), the integral in brackets in the above equation equals $\prod\limits_{k=1}^{n}\delta(x_k-y_k)$.
Thus equation (\ref{inversef1}) becomes
\begin{equation*}
\int_{\mathbb{R}^n}F(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u}=\int_{\mathbb{R}^n}f(\mathbf{y})\prod\limits_{k=1}^{n}\delta(x_k-y_k)d^n\mathbf{y}=f(\mathbf{x}),
\end{equation*}
which completes the proof.
\end{proof}
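The inversion formula can likewise be verified numerically in the 1-D complex model, where the inverse kernel reduces to the complex conjugate of the forward kernel (an illustrative sketch added here; the parameters and test signal are arbitrary choices):

```python
import numpy as np

a, b, d = 1.0, 1.0, 1.0                      # det = 1 with c = 0
x = np.linspace(-7, 7, 1401)
u = np.linspace(-11, 11, 1101)
dx, du = x[1] - x[0], u[1] - u[0]

f = np.exp(-x**2)
K = np.exp(1j * (a * x[:, None]**2 / (2 * b) - x[:, None] * u[None, :] / b
                 + d * u[None, :]**2 / (2 * b))) / np.sqrt(2j * np.pi * b)
F = (f @ K) * dx                             # forward transform

# inverse kernel K_{A^{-1}}(u, x): the complex conjugate of K(x, u) in this 1-D model
f_rec = (K.conj() @ F) * du
assert np.max(np.abs(f_rec - f)) < 1e-6      # f is recovered on the grid
```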
\section{Analytic Signal in CLCT Domain and Envelope Detection}
\subsection{Analytic Signal in CLCT Domain}
Following the approach of generalizing the analytic signal to the LCT domain, which was introduced by Fu and Li \cite{Fu08} and extended to the quaternion algebra by Kou \cite{Kou16}, we give the definition of the generalized analytic signal associated with the CLCT.
\begin{defn}Given a Clifford valued function $f(\mathbf{x})$ with $\mathbf{x}=(x_1,x_2,\cdots,x_n)$, its corresponding analytic signal $f_A(\mathbf{x})$ is defined as
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^n}F_A(\mathbf{u})\prod\limits_{k=0}^{n-1} K^{\mathbf{e}_{n-k}}_{{A^{-1}_{n-k}}}(u_{n-k}, x_{n-k})d^n\mathbf{u},
\end{equation}
where $F_A(\mathbf{u})$ is
\begin{equation}
F_A(\mathbf{u})=\prod\limits_{k=1}^{n}[1+sign(\frac{u_k}{b_k})]F(\mathbf{u}),
\end{equation}
and $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ is the CLCT of original function $f(\mathbf{x})$, $\Lambda:$ $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,\cdots,n$) is a set of parameter matrices with $\det(A_k)=1$ and $b_k\neq0$.
\end{defn}
When the dimension $n=1$, the analytic signal $f_A(t)$ is just the one Fu introduced in \cite{Fu08}. When $n=2$, this analytic signal $f_A(x,y)$ is different from Kou's generalized quaternionic analytic signal, which is associated with the two-sided QLCT.
Since we intend to investigate envelope detection problems for 3-D images, we will focus on the case $n=3$. Due to the advantages of the Clifford algebra $Cl(0,3)$, we will adopt this special Clifford algebra in the following part.
\begin{defn}
Let $\Lambda$: $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,3$) be a set of parameter matrices with $\det(A_k)=1$ and $b_k\neq0$. The right-sided CLCT of a signal $f\in L^1(\mathbb{R}^3,Cl(0,3))$ is the Clifford valued function $\mathscr{L}^r_{\Lambda}(f): \mathbb{R}^3\rightarrow Cl(0,3)$ defined as follows:
\begin{equation}
\mathscr{L}^r_{\Lambda}(f)(\mathbf{u}):=\int_{\mathbb{R}^3}f(\mathbf{x}) K^{\mathbf{e}_1}_{A_1}(x_1,u_1)K^{\mathbf{e}_2}_{A_2}(x_2,u_2)K^{\mathbf{e}_3}_{A_3}(x_3,u_3)d^3\mathbf{x},
\end{equation}
where $K^{\mathbf{e}_1}_{A_1}(x_1,u_1)$, $K^{\mathbf{e}_2}_{A_2}(x_2,u_2)$ and $K^{\mathbf{e}_3}_{A_3}(x_3,u_3)$ are kernels of the CLCT and defined by
\begin{equation*}
K^{\mathbf{e}_k}_{A_k}(x_k,u_k):=\frac{1}{\sqrt{\mathbf{e}_k2\pi b_k}}e^{\mathbf{e}_k(\frac{a_k}{2b_k}x_k^2-\frac{1}{b_k}x_ku_k+\frac{d_k}{2b_k}u_k^2)}.
\end{equation*}
\end{defn}
\begin{defn}\label{analyticC}
Given a Clifford biquaternion valued function $f(\mathbf{x})$ with $\mathbf{x}=(x_1,x_2,x_3)$, its corresponding Clifford biquaternion analytic signal $f_A(\mathbf{x})$ is defined as
\begin{equation}
f_A(\mathbf{x})=\int_{\mathbb{R}^3}F_A(\mathbf{u})K^{\mathbf{e}_3}_{{A^{-1}_{3}}}(u_3, x_3)K^{\mathbf{e}_2}_{{A^{-1}_{2}}}(u_2, x_2)K^{\mathbf{e}_1}_{{A^{-1}_{1}}}(u_1, x_1)du_1du_2du_3,
\end{equation}
where $F_A(\mathbf{u})$ is
\begin{equation}
F_A(\mathbf{u})=[1+sign(\frac{u_1}{b_1})][1+sign(\frac{u_2}{b_2})][1+sign(\frac{u_3}{b_3})]F(\mathbf{u}),
\end{equation}
and $F(\mathbf{u}):=\mathscr{L}^r_{\Lambda}(f)(\mathbf{u})$ is the CLCT of original function $f(\mathbf{x})$, $\Lambda:$ $A_k=\left(
\begin{array}{cc}
a_k & b_k \\
c_k & d_k \\
\end{array}
\right)\in \mathbb{R}^{2\times2}$
($k=1,2,3$) is a set of parameter matrices with $\det(A_k)=1$ and $b_k\neq0$.
\end{defn}
The analytic signal $f_A(\mathbf{x})$ defined by Definition \ref{analyticC} is a Clifford biquaternion valued function, which can be written as
\begin{equation*}
f_A(\mathbf{x})=(p_0+\mathbf{i}p_1+\mathbf{j}p_2+\mathbf{k}p_3)+\epsilon(q_0+\mathbf{i}q_1+\mathbf{j}q_2+\mathbf{k}q_3).
\end{equation*}
From this function, one can derive three quaternions:
\begin{equation*}
f_{A_{\mathbf{e}_1\mathbf{e}_2}}=p_0+q_1\mathbf{e}_1+q_2\mathbf{e}_2+p_3\mathbf{e}_1\mathbf{e}_2,
\end{equation*}
\begin{equation*}
f_{A_{\mathbf{e}_2\mathbf{e}_3}}=p_0+q_2\mathbf{e}_2+q_3\mathbf{e}_3+p_1\mathbf{e}_2\mathbf{e}_3,
\end{equation*}
and
\begin{equation*}
f_{A_{\mathbf{e}_3\mathbf{e}_1}}=p_0+q_3\mathbf{e}_3+q_1\mathbf{e}_1+p_2\mathbf{e}_3\mathbf{e}_1.
\end{equation*}
For each of these quaternion-valued signals, one can write a polar form representation and obtain its modulus. These moduli are named the partial modules of the original Clifford biquaternion valued analytic signal $f_A(\mathbf{x})$, as follows:
\begin{equation*}
mod_{\mathbf{e}_1\mathbf{e}_2}=\sqrt{f_{A_{\mathbf{e}_1\mathbf{e}_2}}\overline{(f_{A_{\mathbf{e}_1\mathbf{e}_2}})}},
\end{equation*}
\begin{equation*}
mod_{\mathbf{e}_2\mathbf{e}_3}=\sqrt{f_{A_{\mathbf{e}_2\mathbf{e}_3}}\overline{(f_{A_{\mathbf{e}_2\mathbf{e}_3}})}},
\end{equation*}
\begin{equation*}
mod_{\mathbf{e}_3\mathbf{e}_1}=\sqrt{f_{A_{\mathbf{e}_3\mathbf{e}_1}}\overline{(f_{A_{\mathbf{e}_3\mathbf{e}_1}})}}.
\end{equation*}
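Since the quaternion product $q\bar{q}$ equals the sum of the squares of the four components of $q$, each partial module is simply the Euclidean norm of the corresponding four components. A minimal sketch (an illustrative addition; the component values are arbitrary):

```python
import math

# components of f_A = (p0 + i p1 + j p2 + k p3) + eps (q0 + i q1 + j q2 + k q3)
p0, p1, p2, p3 = 1.0, -0.5, 2.0, 0.25
q0, q1, q2, q3 = 0.5, 1.5, -1.0, 3.0

# f_{A_{e1e2}} = p0 + q1 e1 + q2 e2 + p3 e1e2, etc.; each module is sqrt(q * qbar)
mod_e1e2 = math.sqrt(p0**2 + q1**2 + q2**2 + p3**2)
mod_e2e3 = math.sqrt(p0**2 + q2**2 + q3**2 + p1**2)
mod_e3e1 = math.sqrt(p0**2 + q3**2 + q1**2 + p2**2)

assert abs(mod_e1e2 - math.sqrt(1 + 2.25 + 1 + 0.0625)) < 1e-12
```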
\subsection{Envelope Detection of 3-D Images}
In \cite{Wang12}, Wang constructed an experimental platform for acquiring a radio frequency (RF) ultrasound volume with a biopsy needle in it. The volume is $128\times1280\times33$ pixels in the lateral ($x_1$ axis), elevation ($x_2$ axis) and axial ($x_3$ axis) directions, respectively. Using the classical 1-D analytic signal and the novel 3-D analytic signal approach leads to different resolutions in ultrasound image envelope detection. Since the 3-D analytic signal takes into account the information of the neighbouring scan lines, this novel approach supplies a better result than the classical 1-D analytic signal approach. In this paper, we take the same experiment platform and show the shortcoming of the analytic signal associated with the CFT. We also apply the analytic signal associated with the CLCT in the Clifford biquaternion domain to this problem and show that, by modifying the matrix parameters, one can obtain a satisfactory result.
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox1-32.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox2-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{ox3-64.jpg}
\end{minipage}
\caption{Original envelope of the 3-D image.}
\end{figure}
Here we modify the volume to $128\times128\times128$ pixels; in the plane $x_3=64$ there is a biopsy needle located at $x_2=64$ with $0\leq x_1\leq32$, which can easily be found in Figure 1. The flow-process diagram of the amplitude method of analytic signals is supplied in Figure 2.
\begin{figure}
\begin{tikzpicture}[node distance=1.0cm]
\centering
\node[startstop](start){Inputting original data $f(x_1,x_2,x_3)$};
\node[process, below of = start, yshift = -0.5cm](pro1){Obtain analytic signal $f_A(x_1,x_2,x_3)$};
\node[process, below of = pro1, yshift = -0.5cm](pro2){Derive the polar form};
\node[process, below of = pro2, yshift = -0.5cm](pro3){Obtain the modulus $|f_A(x_1,x_2,x_3)|$};
\node[process, below of = pro3, yshift = -0.5cm](stop){draw the picture of $|f_A(x_1,x_2,x_3)|$};
\coordinate (point1) at (-3cm, -6cm);
\draw [arrow] (start)--(pro1);
\draw [arrow] (pro1)--(pro2);
\draw [arrow] (pro2)--(pro3);
\draw [arrow] (pro3)--(stop);
\end{tikzpicture}
\caption{The flow-process diagram of envelope detection using the analytic signal's amplitude.}
\end{figure}
Figure 3 shows the result of using Wang's analytic signal to detect the envelope of the biopsy needle, which is located in the plane $x_3=64$. Thanks to taking the information of the neighbouring scan lines into account, one can find the biopsy needle not only in the plane $x_3=64$ but also in the nearest planes, $x_3=63$ and $x_3=65$. Since converting the RF ultrasound signal to the brightness-mode (B-mode) signal induces much noise and loses information, this property ensures that one can find a clear envelope in B-mode images. Unfortunately, the envelopes in these three planes are badly distorted: they are much longer than the true needle. This shortcoming is well overcome by our approach based on the analytic signal associated with the CLCT, as can be seen in Figure 4. The envelope can also be found in the planes $x_3=63$, $x_3=64$ and $x_3=65$, and its length is very close to that of the true needle. Here the parameter matrices are $A_1=\left(
\begin{array}{cc}
1 & 10 \\
-0.05 & 0.5 \\
\end{array}
\right)$, $A_2=\left(
\begin{array}{cc}
1 & 10 \\
-0.05 & 0.5 \\
\end{array}
\right)$ and $A_3=\left(
\begin{array}{cc}
1000 & 1 \\
-0.1 & 0.0009 \\
\end{array}
\right)$, and they were found in two steps: first, for the 2-D image in the plane $x_3=64$, we tried several parameter sets until a good result was obtained; then we slightly modified the parameters of $A_3$ and found a better result. Since there are too many parameters in these three matrices, we cannot ensure that these are the best parameters. However, since Wang's method is a special case of our approach, we can ensure that our approach supplies a better result when proper parameter matrices are chosen.
Furthermore, we also use the envelope method of the monogenic signal, which was introduced by Yang \cite{Yang17} and is another higher-dimensional generalization of the analytic signal, to detect the envelope of this image. Figure 5 shows that it supplies an accurate envelope in each plane, but the envelope can be found in every $x_3$ plane, which means that the image is badly contaminated by the background signal.
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-62.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{FTx3-66.jpg}
\end{minipage}
\caption{Wang's analytic signal associated with CFT \cite{Wang12}.}
\end{figure}
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-62.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Lctx3-66.jpg}
\end{minipage}
\caption{This paper's method: the analytic signal associated with the CLCT, where $A_1=(1,10;-0.05,0.5)$, $A_2=(1,10;-0.05,0.5)$ and $A_3=(1000,1;-0.1,0.0009)$.}
\end{figure}
\begin{figure}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-1.jpg}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-63.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-64.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-65.jpg}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{x3-66.jpg}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=2in]{x3-128.jpg}
\end{minipage}
\caption{Monogenic signal approach of \cite{Yang17}.}
\end{figure}
\section{Conclusion}
Inspired by the success of the 3-D biquaternionic analytic signal in detecting the envelopes of 3-D ultrasound images, we generalize this approach to the LCT domain. Thanks to the special structure of the Clifford biquaternion algebra, this generalized analytic signal has three partial modules, where $mod_{\mathbf{e}_1\mathbf{e}_2}(f_A)$ corresponds to the envelope of 3-D images in the $x_3$ planes. A real-valued synthetic 3-D image is introduced to test the validity of this novel envelope detection method. Compared with the classical 1-D analytic signal method, Wang's generalized biquaternionic analytic signal method, and even the envelope method of the monogenic signal, our approach supplies the best result. Furthermore, our method can achieve better resolution by modifying the parameter matrices and adding post-processing such as average filtering.
\subsection*{Acknowledgment}
The first author acknowledges financial support from the PhD research startup foundation of Hubei University of Technology, No. BSQD2019052. Partial support was provided by the Foundation for Science and Technology of the Department of Education of Hubei Province (B2019047) and by the Macao Science and Technology Development Fund (0085/2018/A2).
\section{Introduction}\label{intro}
The weighted Bergman space with exponent $p>0$ and weight $\alpha>-1$ is denoted by $A_\alpha ^p (\D)$ and is defined to be the set of all holomorphic functions $f$ on the unit disk $\mathbb{D}$ such that
\[\left\| f \right\|_{A_\alpha ^p (\D)}^p : = \int_\mathbb{D} {{{\left| {f\left( z \right)} \right|}^p}{{\left( {1 - {{\left| z \right|}^2}} \right)}^\alpha }dA\left( z \right)} < \infty,
\]
where $dA$ denotes the Lebesgue area measure on $\mathbb{D}$. The unweighted Bergman space ($\alpha=0$) is simply denoted by $A^p (\D)$ and it is known as the Bergman space with exponent $p$. Weighted Bergman spaces are related to the classical Hardy spaces. The Hardy space with exponent $p>0$ is denoted by $H^p (\D)$ and is the set of all holomorphic functions $f$ on $\D$ such that
\[\left\| f \right\|_{H^p (\D)}^p: = \sup_{0<r<1}\int_{0}^{2\pi} {{{\left| {f\left( re^{it} \right)} \right|}^p} dt} < \infty.\]
The Hardy space $H^p (\D)$ is identified with the limit space of $A_\alpha ^p (\D)$, as $\alpha\to -1^+$, in the sense that $\lim_{\alpha\to -1^+}\left\| f \right\|_{A_\alpha ^p (\D)}= \left\| f \right\|_{H ^p(\D)}$. Moreover, $H^p (\D)\subset A_{\alpha}^p (\D)$, for all $\alpha>-1$ and $p>0$ (see \cite{Zhu}). More on the theory of Bergman spaces can be found in \cite{HKZ} and \cite{DS}.
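To make the definition concrete, the following sketch (an illustration added here, not part of the original text) evaluates the unweighted Bergman norm ($\alpha=0$, $p=2$) of the monomial $f(z)=z^n$ in polar coordinates and compares it with the exact value $\|z^n\|_{A^2}^2=\pi/(n+1)$, obtained by direct integration:

```python
import numpy as np

def bergman_norm_sq(n, alpha=0.0, num_r=200001):
    # ||z^n||_{A_alpha^2}^2 = int_D |z|^{2n} (1 - |z|^2)^alpha dA
    #                       = 2*pi * int_0^1 r^{2n+1} (1 - r^2)^alpha dr  (polar coordinates)
    r = np.linspace(0.0, 1.0, num_r)
    y = r**(2 * n) * (1 - r**2)**alpha * r
    dr = r[1] - r[0]
    return 2 * np.pi * np.sum((y[1:] + y[:-1]) / 2) * dr   # composite trapezoid rule

for n in (0, 1, 3):
    assert abs(bergman_norm_sq(n) - np.pi / (n + 1)) < 1e-8
```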
A classical problem in geometric function theory is to characterize conformal mappings which are contained in such spaces. See, for example, \cite{Ba2}, \cite{Kar1}, \cite{Per}, \cite{Cor}, \cite{Cor2}. In the first part of this paper, we characterize conformal mappings which are contained in $A_\alpha^p(\D)$ or $H^p(\D)$ by using conditions involving certain harmonic measures and Euclidean areas.
First, we fix some notation. For a domain $\Omega$ in the plane, a point $z \in \Omega$ and a Borel subset $A$ of $\overline \Omega$, let ${\omega _{\Omega}}\left( {z,A} \right)$ denote the harmonic measure at $z$ of $A$ with respect to the component of $\Omega \backslash A$ containing $z$. The function ${\omega _{\Omega}}\left( { \cdot ,A} \right)$ is the solution of the generalized Dirichlet problem with boundary data $\varphi = {1_A}$. For the general theory of harmonic measure, see \cite{Gar}.
Henceforth, let $f$ be a conformal mapping on $\D$. For $r>0$, set $F_r=\{z\in\D:\ |f(z)|=r\}$. Note that $f(F_r)=f(\D)\cap\{|z|=r\}$ is the union of countably many open arcs in $f(\D)$. It follows (see \cite[Prop. 2.14]{Pom2}) that $F_r$ is the union of countably many analytic open arcs in $\D$ so that each such arc has two distinct endpoints on $\partial\D$. Moreover, it is well known (see, for example, \cite{Pom2}) that $f$ has nontangential boundary values $f(e^{it})$ for a.e. real $t$ and thus, for $r>0$, we can define the set $E_r=\{\zeta\in \partial \D:|f(\zeta)|> r\}$. See Figure \ref{fig}. In \cite{Cor} Poggi-Corradini gave necessary and sufficient conditions for $f$ to belong to $H^p (\D)$ by studying the harmonic measures $\omega_{\D}(0,F_r)$ and $\omega_{\D}(0,E_r)$. He actually proved (see also \cite{Ess2}) that, for $p>0$, $f\in H^p (\D)$ if and only if
\begin{equation}\label{poco1}
\int_{0}^{\infty} r^{p-1}\omega_{\D} (0,F_r)dr<\infty
\end{equation}
or if and only if
\begin{equation}\label{poco2}
\int_{0}^{\infty} r^{p-1}\omega_{\D} (0,E_r)dr<\infty.
\end{equation}
Furthermore, he observed that the Beurling-Nevanlinna projection theorem implies that for every $r>0$,
\begin{equation}\label{bn}
\omega_{\D}(0,F_r)\ge \frac{2}{\pi}e^{-d_{\D}(0,F_r)},
\end{equation}
where $d_{\D}(0, F_r)$ denotes the hyperbolic distance in $\D$ between $0$ and the set $F_r$, i.e., $d_{\D}(0, F_r)= \inf_{z\in F_r} d_{\D}(0,z)$. Here, $d_{\D}(0,z)$ is the hyperbolic distance between $0$ and $z$ in $\D$. This observation led him to the question whether $\omega_{\D}(0,F_r)$ and $e^{-d_{\D}(0,F_r)}$ are comparable. Moreover, he posed the question whether we could obtain a condition similar to the condition (\ref{poco1}) by replacing the harmonic measure $\omega_{\D}(0,F_r)$ with the quantity $e^{-d_{\D}(0,F_r)}$. More precisely, he asked whether $f\in H^p(\D)$ if and only if
\begin{equation}\label{kaha}
\int_{0}^{\infty} r^{p-1}e^{-d_{\D}(0,F_r)}dr<\infty.
\end{equation}
Obviously, if $\omega_{\D}(0,F_r)$ and $e^{-d_{\D}(0,F_r)}$ were comparable, the answer would be positive trivially. However, the first author proved in \cite{Kar2} that these quantities are not comparable in general. So, it was not clear whether the equivalence above is true or not. By applying different methods, the first author showed in \cite{Kar1} that $f\in H^p(\D)$ if and only if \eqref{kaha} is true.
Later, in \cite{Kar} Betsakos and the current authors generalized this condition to weighted Bergman spaces. More specifically, they proved the following theorem. Note that, henceforth, we use the convention that $A^{p}_{-1}(\D)=H^p(\D)$, for every $p>0$, because all the results we state below for weighted Bergman spaces also hold for Hardy spaces.
\begin{customthm}{A}\label{bkk}
\textit{Let $p>0$ and $\alpha\ge-1$. Suppose $f$ is a conformal mapping on $\D$ and, for $r>0$, let $F_r=\{z\in\D:\ |f(z)|=r\}$. Then $f\in A_{\alpha}^p (\D)$ if and only if
\[\int_{0}^{\infty} r^{p-1}e^{-(\alpha+2)d_{\D}\left(0, F_r\right)}dr<\infty.\]
}
\end{customthm}
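As a worked example of Theorem \ref{bkk} (an illustrative addition): for $f(z)=1/(1-z)$ the level set $F_r$ lies on the circle $|1-z|=1/r$, whose closest point to the origin has modulus $1-1/r$ when $r>1$, so $e^{-d_{\D}(0,F_r)}=1/(2r-1)$, using $d_{\D}(0,z)=\log\frac{1+|z|}{1-|z|}$. The integral in Theorem \ref{bkk} then converges exactly when $p<\alpha+2$, consistent with the known fact that $1/(1-z)$ belongs to $A_\alpha^p(\D)$ precisely when $p/(\alpha+2)<1$. The sketch below checks the distance formula numerically:

```python
import numpy as np

def hyp_dist_0(z):
    # hyperbolic distance in the unit disk from 0 to z
    m = np.abs(z)
    return np.log((1 + m) / (1 - m))

theta = np.linspace(-np.pi, np.pi, 2000001)
for r in (2.0, 5.0, 20.0):
    z = 1 - np.exp(1j * theta) / r          # the circle |1 - z| = 1/r
    z = z[np.abs(z) < 1 - 1e-12]            # keep the part lying in D: this is F_r
    d = hyp_dist_0(z).min()                 # d_D(0, F_r)
    assert abs(np.exp(-d) - 1 / (2 * r - 1)) < 1e-6
```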
Note that the case $\alpha=-1$ is (\ref{kaha}). Therefore, the remaining question is whether we can also extend the conditions (\ref{poco1}) and (\ref{poco2}) to weighted Bergman spaces. In the next section, we prove Theorem \ref{main} which shows that the answer is positive and thus we obtain necessary and sufficient conditions for conformal mappings of $\D$ to belong to $ A_{\alpha}^p (\D)$ by studying the harmonic measure.
\begin{theorem}\label{main}
Let $f$ be a conformal mapping on $\D$. For $r>0$, we set $F_r=\{z\in\D:\ |f(z)|=r\}$ and $E_r=\{\zeta\in \partial \D: |f(\zeta)|>r\}$. If $p>0$ and $\alpha \ge -1$, the following statements are equivalent.
\begin{enumerate}
\item $f\in A_{\alpha}^p (\D)$,
\item $\displaystyle \int_{0}^{\infty} r^{p-1}\omega_{\D} (0,F_r)^{\alpha+2}dr<\infty$,
\item $\displaystyle \int_{0}^{\infty} r^{p-1}\omega_{\D} (0,E_r)^{\alpha+2}dr<\infty$.
\end{enumerate}
\end{theorem}
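To illustrate condition (3) with an added example: for $f(z)=1/(1-z)$ the boundary set $E_r=\{e^{it}:|1-e^{it}|<1/r\}$ is an arc, and the harmonic measure at $0$ of a boundary arc is its normalized length, so $\omega_{\D}(0,E_r)=\frac{2}{\pi}\arcsin\frac{1}{2r}\asymp r^{-1}$; the integral in (3) then converges exactly when $p<\alpha+2$, again matching the membership of $1/(1-z)$ in $A_\alpha^p(\D)$. A quick numerical check:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 2000001, endpoint=False)
for r in (1.0, 3.0, 10.0):
    E_r = np.abs(1 - np.exp(1j * t)) < 1 / r        # boundary set {|f(e^{it})| > r}
    omega = E_r.mean()                              # normalized arc length = omega_D(0, E_r)
    assert abs(omega - (2 / np.pi) * np.arcsin(1 / (2 * r))) < 1e-5
```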
Note that the case $\alpha=-1$ is (\ref{poco1}) and (\ref{poco2}). An immediate consequence of Theorem \ref{main} is the following corollary. It is well known (see \cite[Theorem 3.16]{Dur}) for Hardy spaces that if $f$ is conformal on $\D$, then $f\in H^p(\D)$ for all $p\in (0,\frac{1}{2})$. Using Theorem \ref{main}, we can extend this result to weighted Bergman spaces.
\begin{corollary}\label{corol}
If $f$ is a conformal mapping on $\D$, then $f\in A_\alpha^p(\D)$ for every $p>0$ and $\alpha>-1$ such that $\frac{p}{\alpha+2}\in (0,\frac{1}{2})$. Moreover, the inequality $\frac{p}{\alpha+2}<\frac{1}{2}$ is sharp.
\end{corollary}
The case $\alpha=0$ of Corollary \ref{corol} appears in \cite[p. 850]{Ba2}. Combining Theorems \ref{bkk} and \ref{main} we can prove one more characterization of conformal mappings in weighted Bergman spaces (or Hardy spaces) involving this time both the harmonic measure and the hyperbolic distance.
\begin{corollary}\label{corb}
Let $p>0$ and $\alpha\ge-1$. Suppose $f$ is a conformal mapping on $\D$ and, for $r>0$, set $F_r=\{z\in\D:\ |f(z)|=r\}$. The following statements hold.
\begin{enumerate}
\item If
\begin{equation}
\int_{0}^{\infty}r^{p-1}e^{-\beta d_{\D}(0,F_r)}\omega_{\D}(0,F_r)^{\gamma}dr<\infty \nonumber
\end{equation}
for some $\beta,\gamma \geq 0$ with $\beta+\gamma=\alpha+2$, then $f\in A_{\alpha}^p(\D)$.
\item If $f\in A_{\alpha}^p(\D)$, then
\begin{equation}
\int_{0}^{\infty}r^{p-1}e^{-\beta d_{\D}(0,F_r)}\omega_{\D}(0,F_r)^{\gamma}dr<\infty \nonumber
\end{equation}
for any $\beta,\gamma \geq 0$ with $\beta+\gamma=\alpha+2$.
\end{enumerate}
\end{corollary}
With the aid of Corollary \ref{corb}, we establish a Euclidean geometric condition for conformal mappings in $A_\alpha^p(\D)$ (or $H^p(\D)$) involving the area of the set $\{z\in\D:\ |f(z)|>r\}$ for $r>0$. See Figure \ref{fig}.
\begin{theorem}\label{geom}
Let $p>0$ and $\alpha \ge -1$. Suppose that $f$ is a conformal mapping on $\D$ and, for $r>0$, set $U_r=\{z\in\D:\ |f(z)|>r\}$. Then $f\in A_\alpha^p(\D)$ if and only if
\begin{equation}\nonumber
\int_{0}^{\infty}r^{p-1}\area(U_r)^{\frac{\alpha+2}{2}}dr<\infty.
\end{equation}
\end{theorem}
An interesting consequence of Theorems \ref{bkk}, \ref{main} and \ref{geom} is the following result which, in case $\Phi (r)=\omega_{\D}(0,F_r)$, is an extension of Ess{\' e}n's main lemma in \cite{Ess} for conformal mappings to weighted Bergman spaces.
\begin{figure}
\includegraphics[width=0.98\linewidth]{FF1.png}
\caption{The set $F_r$ is the preimage, under $f$, of $f(\D)\cap\{|z|=r\}$. The shaded portion of $\D$ is the set $U_r$. It is the preimage of the portion of $f(\D)$ lying outside the red circle of radius $r$. Each component of $\partial U_r$ consists of a component of $F_r$ and a component of $E_r$.}\label{fig}
\end{figure}
\begin{corollary}\label{l2}
Let $f$ be a conformal mapping on $\D$. Let $\Phi (r)$ denote $\omega_{\D}(0,F_r)$, $\omega_{\D}(0,E_r)$, $e^{-d_{\D}(0,F_r)}$, or $(\area (U_r))^{1/2}$. If $f\in A_\alpha^p(\D)$ for some $p>0$ and $\alpha\ge-1$, then there is a positive constant $C=C(f,p,\alpha)$ which depends only on $f,p,\alpha$ such that
\[ \Phi (r)\le Cr^{-\frac{p}{\alpha+2}},\]
for every $r>0$. Moreover, if there are $p'>0$, $\alpha '\ge-1$, $C>0$, and $r_0>0$ such that
\[\Phi (r)\le Cr^{-\frac{p'}{\alpha'+2}}\]
for every $r>r_0$, then $f\in A_\alpha^p(\D)$ for all $p>0$ and $\alpha\geq -1$ such that $\frac{p}{\alpha+2}\in (0,\frac{p'}{\alpha'+2})$.
\end{corollary}
Next, we define a new notion which is the analogue of the Hardy number for weighted Bergman spaces. We call it the Bergman number. This allows us to establish new results for the Hardy number and the relation between Hardy and weighted Bergman spaces. Furthermore, it provides new ways to determine whether a conformal mapping on $\D$ belongs to $A_\alpha^p(\D)$, this time by studying limits instead of integrals.
In \cite{Han1}, Hansen studied the problem of determining the numbers $p>0$ for which a holomorphic function $f$ on $\mathbb{D}$ belongs to ${H^p}\left( \mathbb{D} \right)$ by studying $f\left( \mathbb{D} \right)$. For this purpose, he introduced a number which he called the Hardy number of a region. Since we are studying conformal mappings, we state the definition just in this case. Let $f$ be a conformal mapping on $\D$. The Hardy number of $f$ is defined by
\[h(f) = \sup \left\{ {p > 0:f \in {H^p}\left( \mathbb{D} \right)} \right\}.\]
Since $f$ is conformal, according to what we mentioned above, $h(f)$ takes values in $[1/2,\infty]$. The Hardy number has been studied extensively over the years. A classical problem is to find estimates or exact descriptions for it. See, for example, \cite{Han1}, \cite{Kar3}, \cite{Kim}, \cite{Cor} and references therein.
In a similar way, we introduce a corresponding number for weighted Bergman spaces. Let $f$ be a conformal mapping on $\D$. We define the Bergman number, $b(f)$, of $f$ as
\[b(f)=\sup\left\{ \frac{p}{\alpha+2}:\ f\in A_{\alpha}^p(\D) \right\}.\]
By Corollary \ref{corol} we deduce that $b(f)\ge 1/2$. Corollary \ref{l2} and the definition above imply the following result.
\begin{corollary}\label{bnd}
Let $p>0$, $\alpha>-1$. If $\frac{p}{\alpha+2}<b(f)$ then $f\in A_{\alpha}^p(\D)$ and if $\frac{p}{\alpha+2}>b(f)$ then $f\notin A_{\alpha}^p(\D)$.
\end{corollary}
As in the case of the Hardy number, if $\frac{p}{\alpha+2}=b(f)$, then both cases can occur (see \cite{Kar3}). Using Corollary \ref{l2} and Theorems \ref{bkk}, \ref{main} and \ref{geom}, we prove that $b(f)$ is given by the following limits.
\begin{theorem}\label{benu}
Let $f$ be a conformal mapping on $\D$. If $F_r=\{z\in \D : |f(z)|=r \}$, $E_r=\{\zeta\in \partial \D:|f(\zeta)|> r\}$ and $U_r=\{z\in\D :|f(z)|>r\}$, then
\begin{enumerate}
\item $\displaystyle b(f)=\liminf_{r\to \infty}\frac{\log\left(\omega_{\D}(0,F_r)\right)^{-1}}{\log r}$,
\item $\displaystyle b(f)=\liminf_{r\to \infty}\frac{\log\left(\omega_{\D}(0,E_r)\right)^{-1}}{\log r}$,
\item $\displaystyle b(f)=\liminf_{r\to \infty}\frac{d_{\D}(0,F_r)}{\log r}$,
\item $\displaystyle b(f)=\frac{1}{2}\liminf_{r\to \infty}\frac{\log \left(\area(U_r)\right)^{-1}}{\log r}$.
\end{enumerate}
\end{theorem}
By Lemma 3.2 in \cite{Kim} and Theorem 1.1 in \cite{Kar3}, an immediate consequence of Theorem \ref{benu} is that if $f$ is a conformal mapping on $\D$, then
\begin{equation}\label{habe}
h(f)=b(f).
\end{equation}
Thus, by Theorem \ref{benu} and (\ref{habe}) we derive new characterizations of the Hardy number in terms of the harmonic measure $\omega_{\D}(0,E_r)$ and the area of $U_r$. That is,
\[h(f)=\liminf_{r\to \infty}\frac{\log\left(\omega_{\D}(0,E_r)\right)^{-1}}{\log r}\]
and
\[h(f)=\frac{1}{2}\liminf_{r\to \infty}\frac{\log \left(\area(U_r)\right)^{-1}}{\log r}.\]
Moreover, by (\ref{habe}) and Theorem \ref{main}, we can establish the following inclusions between Hardy spaces and weighted Bergman spaces. Let $\mathcal{U}$ denote the set of all conformal mappings on $\D$. By a theorem of Hardy and Littlewood \cite{HL} (also see \cite{Ba2}), for any $p>0$, $H^p(\D)\subset A^{2p}(\D)$ ($\alpha=0$). The second part of the following corollary is an improvement of this fact for the class $\mathcal{U}$.
\begin{corollary}\label{inclu}
Let $p>0$, $\alpha>-1$. We have
\[A_{\alpha}^p(\D)\cap\mathcal{U}\subset H^{q}(\D)\cap\mathcal{U},\]
for every $q\in(0,\frac{p}{\alpha+2})$. Moreover,
\[H^{q}(\D)\cap\mathcal{U}\subset A_{\alpha}^p(\D)\cap\mathcal{U},\]
for every $p>0$, $\alpha>-1$ satisfying $\frac{p}{\alpha+2}\in (0,q]$.
\end{corollary}
In Section \ref{exa} we give an example of a conformal mapping $f$ such that $f\in A_1^3(\D)$ but $f\notin H^1(\D)$, showing that the second inclusion of Corollary \ref{inclu} is, in general, strict. Furthermore, we prove that this conformal mapping $f$ has the property that $f\in A_1^3(\D)$ but $f\notin A_0^2(\D)$. This gives a negative answer to the following natural question. If $\alpha,\alpha'>-1$ and $p,p'>0$ are such that $\frac{p}{\alpha+2}=\frac{p'}{\alpha'+2}$, is it true that
\[A_{\alpha}^p(\D)\cap\mathcal{U}=A_{\alpha'}^{p'}(\D)\cap\mathcal{U}?\]
We now discuss a different characterization of conformal mappings in weighted Bergman spaces. For $t\in [0,1)$, let $M_f(t)=M(t):=\max_{|z|=t}|f(z)|$. We have the following result.
\begin{customthm}{B}\label{bgp}
\textit{Let $p>0$, $\alpha\geq -1$ and suppose $f$ is a conformal mapping on $\D$. Then $f\in A_{\alpha}^p(\D)$ if and only if
\begin{equation}\label{growth}
\int_{0}^{1}(1-t)^{\alpha+1}M(t)^pdt<\infty.
\end{equation}
}
\end{customthm}
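For example, for $0<\beta\le 2$ the mapping $f_\beta(z)=(1-z)^{-\beta}$ is conformal on $\D$, being the composition of $z\mapsto\frac{1}{1-z}$, which maps $\D$ onto the half-plane $\{\Re w>\frac{1}{2}\}$, with $w\mapsto w^{\beta}$, and $M_{f_\beta}(t)=(1-t)^{-\beta}$. In this case \eqref{growth} becomes
\[
\int_{0}^{1}(1-t)^{\alpha+1-\beta p}dt<\infty,
\]
which holds if and only if $\beta p<\alpha+2$. Hence $f_\beta\in A_\alpha^p(\D)$ if and only if $\frac{p}{\alpha+2}<\frac{1}{\beta}$ and, in particular, $b(f_\beta)=\frac{1}{\beta}$.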
The case $\alpha=-1$ is a statement about Hardy spaces and it is due to Hardy and Littlewood \cite{HL}, Pommerenke \cite{Pom1}, and Prawitz \cite{Pra}. The case $\alpha>-1$ is due to Baernstein, Girela and Pel{\'a}ez \cite{Ba2} and also to P{\'e}rez-Gonz{\'a}lez and R{\"a}tty{\"a} \cite{Per}. Using Theorem \ref{bgp} and a geometric characterization for conformal mappings in weighted Bergman spaces which appears in \cite[relation (2.9)]{Per}, we can prove the following corollary for conformal mappings in $A_{\alpha}^p(\D)$ (or $H^p(\D)$). Let $D(0,r)$ denote the disk centered at $0$ of radius $r$. If $f$ is a conformal mapping on $\D$, we set $\psi(r)=\area\left(f(D(0,r))\right)$, for $r\in (0,1)$.
\begin{corollary}\label{g1}
Let $p>0$, $\alpha\geq-1$ and suppose $f\in A_{\alpha}^p(\D)$ is a conformal mapping. Then
\begin{enumerate}
\item $\displaystyle \lim_{r\to 1}(1-r)M(r)^{\frac{p}{\alpha+2}}=0$,
\item $\displaystyle \lim_{r\to 1}(1-r)\psi(r)^{\frac{p}{2(\alpha+2)}}=0$.
\end{enumerate}
\end{corollary}
Theorem \ref{bgp}, Corollary \ref{g1}, and the aforementioned geometric characterization in \cite{Per}, allow us to provide the following expressions for the Bergman number, and thus for the Hardy number as well, in terms of the functions $M_f$ and $\psi$.
\begin{theorem}\label{g2}
Suppose $f$ is a conformal mapping in $\D$. Then
\begin{enumerate}
\item $\displaystyle b(f)=h(f)=\liminf_{r\to 1}\frac{-\log(1-r)}{\log M(r)}$,
\item $\displaystyle b(f)=h(f)=2\liminf_{r\to 1}\frac{-\log(1-r)}{\log \psi(r)}$,
\item $\displaystyle b(f)=h(f)=\sup\{\lambda>0:\ \lim_{r\to 1}(1-r)M(r)^{\lambda}=0\}$,
\item $\displaystyle b(f)=h(f)=2\sup\{\lambda>0:\ \lim_{r\to 1}(1-r)\psi(r)^{\lambda}=0\}$.
\end{enumerate}
\end{theorem}
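For instance, for the Koebe function $K(z)=\frac{z}{(1-z)^2}$ we have $M_K(r)=\frac{r}{(1-r)^2}$, so part (1) gives
\[
b(K)=h(K)=\liminf_{r\to 1}\frac{-\log(1-r)}{\log r-2\log(1-r)}=\frac{1}{2},
\]
in agreement with the classical fact that $K\in H^p(\D)$ exactly when $p<\frac{1}{2}$.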
The proofs of all the results above follow in Section \ref{proofs}.
\section{Proofs}\label{proofs}
\subsection{Proof of Theorem \ref{main}}
The main tools we use in the proof of Theorem \ref{main} include Smith's characterization of functions belonging to weighted Bergman spaces \cite{Smith}, a potential theoretic identity of Baernstein connecting harmonic measure and the Green function \cite{Ba1} and an estimate for harmonic measure due to Poggi-Corradini \cite{Cor2}.
\proof
Let $f$ be a conformal mapping defined on $\D$ and for $r>0$, let $F_r$ and $E_r$ be as in the statement of the theorem. Set $D=f(\D)$. By comparing boundary values and using the maximum principle, we have that
\[\omega_{\D}(0,E_r)\le \omega_{\D}(0,F_r),\]
for any $r>0$. Moreover, it is proved in \cite[p. 34]{Cor} (see also \cite[Lemma 3.3(i)]{Cor2}) that there exists a constant $M>1$ such that
\begin{equation}\label{hmest}
\omega_{\D}(0,F_{Mr})\le 2\omega_{\D}(0,E_r),
\end{equation}
for any $r>|f(0)|$. Therefore, combining these two estimates and using a change of variable, we can easily show that (2) and (3) are equivalent.
Suppose now that (2) holds, i.e.,
\[\int_{0}^{\infty} r^{p-1}\omega_{\D} (0,F_r)^{\alpha+2}dr<\infty.\]
Theorem \ref{bkk} and \eqref{bn} imply directly that $f\in A_{\alpha}^p (\D)$ and thus (2) implies (1). Conversely, assume that (1) holds. Since (2) and (3) are equivalent, in order to complete the proof of the theorem, it suffices to prove that (3) also holds. Since $f\in A_\alpha^p(\D)$, it follows (see \cite[p. 2336]{Smith}) that
\begin{equation}\label{finite}
\int_{\D} {|f(z)|^{p - 2} | f '(z)|^2 \left(\log \frac{1}{|z|}\right)^{\alpha+2}dA(z)}<\infty.
\end{equation}
For the Green function (see \cite{Gar} for the definition) of the domain $D$, we set $g_D(f(0),w)=0$, for $w\notin D$. By a change of variable and the conformal invariance of the Green function, we have
\begin{align}\label{equal}
\int_{\D} |f(z)|^{p - 2} | f '(z)|^2 & \left( \log \frac{1}{|z|}\right)^{\alpha+2}dA(z) \nonumber \\
&=\int_{\D} {|f(z)|^{p - 2} | f '(z)|^2 g_{\D}(0,z)^{\alpha+2}dA(z)} \nonumber \\
&=\int_D {|w|^{p - 2} g_{\D}(0,f^{-1}(w))^{\alpha+2}dA(w)} \nonumber \\
&= \int_D {|w|^{p - 2} g_{D}(f(0),w)^{\alpha+2}dA(w)} \nonumber \\
&=\int_0^{\infty } {r ^{p - 1}\left( {\int_0^{2\pi } {{g_D}( f(0),re^{i\theta })^{\alpha+2}d\theta } } \right)dr }.
\end{align}
Since $\alpha+2>1$, by Jensen's inequality we derive that, for every $r>0$,
\begin{equation}
\left( \frac{1}{2\pi} \int_0^{2\pi} {g_D(f(0),re^{i\theta})d\theta} \right)^{\alpha+2} \le\frac{1}{2\pi} \int_0^{2\pi} {{g_D}( f(0),re^{i\theta })^{\alpha+2}d\theta}. \nonumber
\end{equation}
This in conjunction with (\ref{finite}) and (\ref{equal}) implies that
\begin{equation}\label{doubleint}
\int_0^{ \infty } {r ^{p - 1}\left( {\int_0^{2\pi } {{g_D}( f(0),re^{i\theta })d\theta } } \right)^{\alpha+2}dr }<\infty.
\end{equation}
Now we state a known relation between harmonic measure and the Green function. For the proof, see \cite[Lemma 2]{Ba1}. If $\Omega \subset \mathbb{C}$ is a simply connected domain and $a\in \Omega$ then, for $r>|a|$, it is true that
\begin{equation}\label{baernid}
\int_r^{\infty}\omega_{\Omega}\left(a,\{|z|>t\}\cap\partial\Omega\right)\frac{dt}{t}=\frac{1}{2\pi}\int_0^{2\pi} g_{\Omega}(a,re^{i\theta})d\theta.
\end{equation}
By \eqref{baernid} and the conformal invariance of the harmonic measure, for every $r>|f(0)|$,
\begin{align}\label{baern}
\int_0^{2\pi} g_D(f(0),re^{i\theta})d\theta &=2\pi\int_r^{\infty} \omega_D (f(0),f(E_t))\frac{dt}{t}=2\pi\int_r^{\infty} \omega_{\D}(0,E_t)\frac{dt}{t} \nonumber \\
&\ge 2\pi \int_r^{2r} \omega_{\D}(0,E_t)\frac{dt}{t} \ge 2\pi \log 2\, \omega_{\D}(0,E_{2r}).
\end{align}
In the last estimate of \eqref{baern}, we have used the fact that $\omega_{\D}(0,E_t)$ is decreasing for $t>0$ because of the maximum principle. Therefore, by (\ref{baern}) we have
\[ \int_{|f(0)|}^{\infty} r^{p-1} \omega_{\D}(0,E_{2r})^{\alpha+2}dr \le C_1 \int_{|f(0)|}^{\infty} r^{p-1} \left( {\int_0^{2\pi } {{g_D}( f(0),re^{i\theta })d\theta } } \right)^{\alpha+2}dr,\]
where $C_1=1/(2\pi\log 2)^{\alpha+2}$. Equivalently, we can write
\[ \int_{2|f(0)|}^{\infty} r^{p-1} \omega_{\D}(0,E_r)^{\alpha+2}dr \le C_2 \int_{|f(0)|}^{\infty} r^{p-1} \left( {\int_0^{2\pi } {{g_D}( f(0),re^{i\theta })d\theta } } \right)^{\alpha+2}dr, \]
where $C_2=2^p/(2\pi\log 2)^{\alpha+2}$. Combining this with (\ref{doubleint}), we obtain
\[\int_{2|f(0)|}^{\infty} r^{p-1} \omega_{\D}(0,E_r)^{\alpha+2}dr<\infty \]
and thus
\[\int_0^{\infty} r^{p-1} \omega_{\D}(0,E_r)^{\alpha+2}dr<\infty.\]
This shows that (3) holds and the proof is complete.
\qed
\subsection{Proof of Corollary \ref{corol}}
By the definition of $A_\alpha^p(\D)$ we can directly derive that it contains every bounded conformal mapping on $\D$, for any $\alpha>-1$ and $p>0$. Hence, it suffices to consider unbounded conformal mappings on $\D$. Let $f$ be an unbounded conformal mapping on $\D$. By \cite[Lemma 3.3(iii)]{Cor2}, there is a constant $C>0$ such that, for $r>|f(0)|$,
\[\omega_{\D}(0,F_r)\leq \frac{C}{\sqrt{r}}.\]
Therefore, we have
\[\int_{|f(0)|}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr\leq C^{\alpha+2}\int_{|f(0)|}^{\infty}r^{p-\frac{\alpha}{2}-2}dr<\infty,\]
for every $p>0$ and $\alpha>-1$ such that $\frac{p}{\alpha+2}\in(0,\frac{1}{2})$. Thus, Theorem \ref{main} implies that $f\in A_\alpha^p(\D)$ for every $p>0$ and $\alpha>-1$ such that $\frac{p}{\alpha+2}\in(0,\frac{1}{2})$.
Now, we show that the inequality $\frac{p}{\alpha+2}<\frac{1}{2}$ is sharp. Consider the Koebe function $K(z)=\frac{z}{(1-z)^2}$. Using the conformal invariance of the harmonic measure, we deduce that for $r>1/4$,
\[\omega_{\D}(0,F_r)=1-\frac{2}{\pi}\arctan\left(\sqrt{r}\left(1-\frac{1}{4r}\right)\right)\]
and hence, by elementary calculus, if $r$ is large enough, there is a constant $C>0$ such that
\[\omega_{\D}(0,F_r)\geq \frac{C}{\sqrt{r}}.\]
Therefore, if $\frac{p}{\alpha+2}=\frac{1}{2}$ and $\delta$ is sufficiently large, then
\[\int_{\delta}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr\geq C^{\alpha+2}\int_{\delta}^{\infty}r^{p-\frac{\alpha}{2}-2}dr=C^{\alpha+2} \int_{\delta}^{\infty}\frac{1}{r}dr=\infty.\]
By Theorem \ref{main} it follows that $K\notin A_\alpha^p(\D)$ and the proof is complete.
\qed
\subsection{Proof of Corollary \ref{corb}}
We first prove (1). Let $f$ be a conformal mapping on $\D$ and, for $r>0$, let $F_r$ be as in the statement of Corollary \ref{corb}. Suppose $\beta,\gamma \geq 0$ satisfy $\beta+\gamma=\alpha+2$ and
\[
\int_{0}^{\infty}r^{p-1}e^{-\beta d_{\D}(0,F_r)}\omega_{\D}(0,F_r)^{\gamma}dr<\infty.
\]
By \eqref{bn} and since $\beta+\gamma=\alpha+2$, we immediately find that
\[
\int_{0}^{\infty} r^{p-1}e^{-(\alpha+2)d_{\D}\left(0, F_r\right)}dr<\infty
\]
and thus, by Theorem \ref{bkk}, $f\in A_\alpha^p(\D)$.
We now proceed with the proof of (2). Suppose $f\in A_\alpha^p(\D)$ is a conformal mapping and let $\beta ,\gamma>0$ satisfy $\beta+\gamma=\alpha+2$. Note that the cases $(\beta,\gamma)=(0,\alpha+2)$ and $(\beta,\gamma)=(\alpha+2,0)$ are covered by Theorems \ref{main} and \ref{bkk}, respectively. If $t=\frac{\alpha+2}{\beta}$ and $s=\frac{\alpha+2}{\gamma}$, then $\frac{1}{t}+\frac{1}{s}=1$. By H{\"o}lder's inequality, Theorem \ref{bkk} and Theorem \ref{main}, we have
\begin{align*}
\int_{0}^{\infty}&r^{p-1}e^{-\beta d_{\D}(0,F_r)}\omega_{\D}(0,F_r)^{\gamma}dr\\
&=\int_{0}^{\infty}r^{(p-1)/t}e^{-\beta d_{\D}(0,F_r)}r^{(p-1)/s}\omega_{\D}(0,F_r)^{\gamma}dr\\
&\leq \left(\int_{0}^{\infty}r^{p-1}e^{-(\alpha+2)d_{\D}(0,F_r)}dr\right)^{1/t}\left(\int_{0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr\right)^{1/s} \\
&<\infty.
\end{align*}
This completes the proof.
\qed
\subsection{Proof of Theorem \ref{geom}}
For the proof of Theorem \ref{geom}, we need the following lemma. Suppose $f$ is a conformal mapping on $\D$ and, for $r>0$, let $F_r$ and $E_r$ be as defined in Section \ref{intro}. We also set $U_r=\{z\in\D:\ |f(z)|>r\}$. Note that for each $r>0$, the open set $U_r$ consists of (at most) countably many components each having the property that its boundary consists of a component of $F_r$ and a component of $E_r$. See Figure \ref{fig}.
\begin{lemma}\label{LA}
There is a universal constant $C>0$ such that if $r>0$ is sufficiently large, then
\[
\area(U_r)\geq C\omega_{\D}(0,E_{2r})^2.
\]
\end{lemma}
\begin{figure}
\includegraphics[width=0.5\linewidth]{FF2.png}
\caption{The red arc $J$ is a component of $F_r$ with endpoints $x,y$. Its length is at least as large as the length of the dotted chord $J_c$. The angle $\theta$ is equal to $2\pi\omega_{\D}(0,K)$, where $K$ is the component of $E_r$ with endpoints $x,y$.}
\end{figure}
\proof
Let $D=f(\D)$ and for $r>0$, set $I_r=\{\theta\in [0,2\pi):\ re^{i\theta}\in D\}$. Using a change of variable, we have
\begin{align*}
\area(U_r)=\int_{U_r}dA(z)&=\int_{D\cap\{|z|>r\}}\frac{dA(w)}{|f'\left(f^{-1}(w)\right)|^2}\\
&= \int_{r}^{\infty}\int_{I_t}\frac{t}{|f'\left(f^{-1}(te^{i\theta})\right)|^2}d\theta dt\\
&= \int_{r}^{\infty}\frac{1}{t}\int_{I_t}\frac{t^2}{|f'\left(f^{-1}(te^{i\theta})\right)|^2}d\theta dt.
\end{align*}
Note that for $t>0$, we may parametrize the curve $F_t$ as $F_t(\theta)=f^{-1}\left(te^{i\theta}\right)$, for $\theta\in I_t$. Using the Cauchy-Schwarz inequality, we infer that
\begin{align*}
\text{length}(F_t)^2 &=\left(\int_{F_t}|dz|\right)^2=\left(\int_{I_t}\frac{t}{|f'\left(f^{-1}(te^{i\theta})\right)|}d\theta\right)^2\\
&\leq \text{length}\left(I_t\right)\int_{I_t}\frac{t^2}{|f'\left(f^{-1}(te^{i\theta})\right)|^2}d\theta\\
&\leq 2\pi\int_{I_t}\frac{t^2}{|f'\left(f^{-1}(te^{i\theta})\right)|^2}d\theta.
\end{align*}
Therefore, by the calculation above, we obtain
\[
\area(U_r)\geq \frac{1}{2\pi}\int_{r}^{\infty}\frac{\text{length}(F_t)^2}{t}dt.
\]
If $r$ is large and $J$ denotes any component of $F_r$, then $J$ is an analytic arc in $\D$ with two distinct endpoints, $x,y$, on $\partial\D$. Let $J_c$ denote the chord of the disk connecting $x$ and $y$. If $r$ is sufficiently large, then the shorter of the two arcs of $\partial\D$ with endpoints $x,y$ is a component $K$ of $E_r$. See Figure 2. By elementary geometry and since $\frac{\sin t}{t}\ge \frac{1}{2}$ for $t>0$ sufficiently small,
\[
\text{length}(J)\geq\text{length}(J_c)=2\sin \left(\pi\omega_{\D}(0,K)\right) \ge \pi\omega_{\D}(0,K).
\]
Summing over all the components of $F_t$ and $E_t$, we deduce that
\[
\text{length}(F_t)\geq \pi\omega_{\D}(0,E_t),
\]
for all $t>r$, provided that $r$ is sufficiently large. Hence,
\begin{align*}
\area(U_r)&\geq \frac{\pi}{2}\int_{r}^{\infty}\frac{\omega_{\D}(0,E_t)^2}{t}dt
\geq\frac{\pi}{2}\int_{r}^{2r}\frac{\omega_{\D}(0,E_t)^2}{t}dt\\
&\geq \frac{\pi}{2}\omega_{\D}(0,E_{2r})^2\int_{r}^{2r}\frac{1}{t}dt
=\frac{\pi\log 2}{2}\omega_{\D}(0,E_{2r})^2,
\end{align*}
for $r$ sufficiently large and the proof is complete.
\qed\\[0.5cm]
\begin{remark}
With a slight modification of the proof of Lemma \ref{LA}, it is possible to show that for each $M>1$, there exists a constant $C=C(M)>0$ which depends only on $M$ such that for all sufficiently large $r$,
\[
\area(U_r)\geq C\omega_{\D}(0,E_{Mr})^2.
\]
However, for our purposes, the case $M=2$ suffices.
\end{remark}
We can now prove Theorem \ref{geom}.
\proof Let $p>0$, $\alpha \ge -1$ and suppose $f\in A_\alpha^p(\D)$ is a conformal mapping. There is a universal constant $C>0$ such that if $r>0$, then
\begin{equation}\label{marsmith}
\area(U_r)\leq Ce^{-d_{\D}(0,F_r)}\omega_{\D}(0,F_r).
\end{equation}
For a proof of this fact, see \cite[section 2.1]{Mar}. Applying Corollary \ref{corb} (2) with $\beta=\gamma=\frac{\alpha+2}{2}$ we have
\[
\int_{0}^{\infty}r^{p-1}e^{-\frac{\alpha+2}{2}d_{\D}(0,F_r)}\omega_{\D}(0,F_r)^\frac{\alpha+2}{2}dr<\infty.
\]
This in combination with \eqref{marsmith} implies that
\begin{equation}
\int_{0}^{\infty}r^{p-1}\area(U_r)^{\frac{\alpha+2}{2}}dr<\infty. \nonumber
\end{equation}
Conversely, let $p>0$, $\alpha \ge-1$ and assume that $f$ is a conformal mapping on $\D$ satisfying
\begin{equation}\label{area}
\int_{0}^{\infty}r^{p-1}\area(U_r)^{\frac{\alpha+2}{2}}dr<\infty.
\end{equation}
By Lemma \ref{LA}, there is some universal constant $C>0$ such that
\begin{equation}\label{el}
\area(U_r)\geq C\omega_{\D}(0,E_{2r})^2,
\end{equation}
for all $r$ sufficiently large. Let $\delta$ be large enough so that if $r>\delta$, then \eqref{el} holds. Then, by a change of variable and \eqref{el},
\begin{align*}
\int_{\delta}^{\infty}r^{p-1}\area(U_r)^{\frac{\alpha+2}{2}}dr &\geq C^{\frac{\alpha+2}{2}}\int_{\delta}^{\infty}r^{p-1}\omega_{\D}(0,E_{2r})^{\alpha+2}dr \\
&= C_1\int_{2\delta}^{\infty}r^{p-1}\omega_{\D}(0,E_r)^{\alpha+2}dr,
\end{align*}
where $C_1={C^{\frac{\alpha+2}{2}}}/{2^p}$. This in combination with (\ref{area}) shows that
\[
\int_{0}^{\infty}r^{p-1}\omega_{\D}(0,E_r)^{\alpha+2}dr<\infty,
\]
and thus by Theorem \ref{main}, $f\in A_\alpha^p(\D)$.
\qed
\subsection{Proof of Corollary \ref{l2}}
Let $f\in A_{\alpha}^p(\D)$ be conformal. Suppose that $\Phi (r)=\omega_{\D}(0,F_r)$. By Theorem \ref{main} we have
\[\int_{0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr<\infty.
\]
Since $\omega_{\D}(0,F_r)$ is a decreasing function of $r$, it follows that for $R>0$,
\begin{align}
\int_{0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr & \geq\int_{0}^{R}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr \nonumber\\
& \geq\omega_{\D}(0,F_R)^{\alpha+2}\int_{0}^{R}r^{p-1}dr=\frac{R^p}{p}\omega_{\D}(0,F_R)^{\alpha+2}. \nonumber
\end{align}
Combining the results above we infer that, for every $R>0$,
\[\omega_{\D}(0,F_R)\leq CR^{-\frac{p}{\alpha+2}},\]
where
\[C=\left(p\int_{0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr\right)^{\frac{1}{\alpha+2}}\]
is a constant which depends only on $f$, $p$, and $\alpha$. This completes the proof of the first part of the corollary.
Now, suppose there are $p'>0$, $\alpha '\geq -1$, $C>0$, and $r_0>0$ such that
\[\omega_{\D}(0,F_r)\leq Cr^{-\frac{p'}{\alpha'+2}}\]
for every $r>r_0$. If $\alpha\geq -1$ and $p>0$ satisfy $\frac{p}{\alpha+2}<\frac{p'}{\alpha'+2}$, then it follows that
\[\int_{r_0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr\le C^{\alpha+2}\int_{r_0}^{\infty}r^{p-1-\frac{p'}{\alpha'+2}(\alpha+2)}dr<\infty.\]
Therefore, Theorem \ref{main} implies that $f\in A_{\alpha}^p(\D)$ for all $p>0$ and $\alpha \geq -1$ such that $\frac{p}{\alpha+2}\in (0,\frac{p'}{\alpha'+2})$.
Since $\omega_{\D}(0,E_r)$, $e^{-d_{\D}(0,F_r)}$ and $\left(\area (U_r)\right)^{1/2}$ are decreasing functions of $r$, by Theorems \ref{main}, \ref{bkk} and \ref{geom}, respectively, we see that the proof above also works if $\Phi (r)$ is equal to $\omega_{\D}(0,E_r)$, $e^{-d_{\D}(0,F_r)}$ or $\left(\area (U_r)\right)^{1/2}$ and thus the proof is complete.
\qed
\subsection{Proof of Corollary \ref{bnd}}
Let $p>0$, $\alpha>-1$ be such that $\frac{p}{\alpha+2}<b(f)$. Then we may find $\epsilon>0$ so that $\frac{p}{\alpha+2}+\epsilon<b(f)$. Thus there exist $p'>0$ and $\alpha'>-1$ satisfying $\frac{p}{\alpha+2}+\epsilon<\frac{p'}{\alpha'+2}$ and $f\in A_{\alpha'}^{p'}(\D)$. By the first part of Corollary \ref{l2}, \[\Phi(r) \leq Cr^{-\frac{p'}{\alpha'+2}},\] for all $r>0$ and some constant $C>0$. Therefore,
\[\Phi(r)\leq Cr^{-\frac{p+\epsilon(\alpha+2)}{\alpha+2}},\] for all $r>1$. By the second part of Corollary \ref{l2}, it follows that $f\in A_{\beta}^q(\D)$, for all $q>0$, $\beta>-1$ such that $\frac{q}{\beta+2}<\frac{p+\epsilon(\alpha+2)}{\alpha+2}$. Choosing $\beta=\alpha$ and $q=p$ yields $f\in A_{\alpha}^p(\D)$. If $\frac{p}{\alpha+2}>b(f)$, then we clearly have $f\notin A_{\alpha}^p(\D)$.
\qed
\subsection{Proof of Theorem \ref{benu}}
First, we prove (1). If $f\in A_\alpha^p(\D)$, then by Corollary \ref{l2} there is a constant $C>0$ such that
\[\omega_{\D}(0,F_r)\le Cr^{-\frac{p}{\alpha+2}},\]
for every $r>0$. Equivalently, for $r>1$,
\[\frac{\log \omega_{\D}(0,F_r)^{-1}}{\log r}\ge \frac{\log C^{-1}}{\log r}+ \frac{p}{\alpha+2}.\]
Thus, taking limits as $r\to \infty$, we deduce that
\[\liminf_{r\to \infty}\frac{\log\omega_{\D}(0,F_r)^{-1}}{\log r}\ge\frac{p}{\alpha+2}.\]
This holds for any $p>0$ and $\alpha>-1$ for which $f\in A_\alpha^p(\D)$ and hence
\begin{align}\label{miaf}
\liminf_{r\to \infty}\frac{\log\omega_{\D}(0,F_r)^{-1}}{\log r}\ge b(f).
\end{align}
Now, we set
\[I:=\liminf_{r\to \infty}\frac{\log\omega_{\D}(0,F_r)^{-1}}{\log r}.\]
If $\frac{p}{\alpha+2}<I$, then there exist $\epsilon>0$ and $r_0>0$ such that for every $r>r_0$,
\[
\frac{p}{\alpha+2}+\epsilon\le\frac{\log\omega_{\D}(0,F_r)^{-1}}{\log r}
\]
or, equivalently,
\[r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}\leq r^{-1-\epsilon(\alpha+2)}.\]
Therefore, it follows that
\[
\int_{r_0}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr \le \int_{r_0}^{\infty}r^{-1-\epsilon(\alpha+2)}dr <\infty.
\]
Theorem \ref{main} implies that $f\in A_{\alpha}^p(\D)$. This shows that the interval $(0,I)$ is contained in the set
\[
\left\{\frac{p}{\alpha+2}:\ f\in A_{\alpha}^p(\D)\right\}
\]
and hence
\begin{align}\nonumber
\liminf_{r\to \infty}\frac{\log\omega_{\D}(0,F_r)^{-1}}{\log r}\le b(f).
\end{align}
This in conjunction with (\ref{miaf}) gives the desired result.
For the proof of (2), (3) and (4), it suffices to replace $\omega_{\D}(0,F_r)$ with $\omega_{\D}(0,E_r)$, $e^{-d_{\D}(0,F_r)}$ and $\left(\area (U_r)\right)^{1/2}$, use Theorems \ref{main}, \ref{bkk} and \ref{geom}, respectively, and repeat the proof of (1).
\qed
\subsection{Proof of Corollary \ref{inclu}}
Let $f\in A_{\alpha}^p(\D)\cap\mathcal{U}$ for some $p>0$ and $\alpha>-1$. Then $\frac{p}{\alpha+2}\leq b(f)$. By (\ref{habe}) we infer that $\frac{p}{\alpha+2}\leq h(f)$ and hence $f\in H^{q}(\D)$ for every $q\in(0,\frac{p}{\alpha+2})$.
Now, let $f \in H^{q}(\D)\cap\mathcal{U}$ for some $q>0$. By \eqref{poco1} it follows that
\[\int_{0}^{\infty}r^{q-1}\omega_{\D}(0,F_r)dr<\infty.\]
Moreover, by Corollary \ref{l2} (or \cite[Lemma 1]{Ess}), there is a constant $C>0$ such that
\[\omega_{\D}(0,F_r)\le Cr^{-q},\]
for every $r>0$. Therefore, if $p>0$ and $\alpha>-1$ satisfy $\frac{p}{\alpha+2}\leq q$, then
\begin{align*}
\int_{1}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+2}dr&=\int_{1}^{\infty}r^{p-1}\omega_{\D}(0,F_r)^{\alpha+1}\omega_{\D}(0,F_r)dr\\
&\le C^{\alpha+1}\int_{1}^{\infty}r^{p-1}r^{-q(\alpha+1)}\omega_{\D}(0,F_r)dr \\
&\le C^{\alpha+1}\int_{1}^{\infty}r^{q-1}\omega_{\D}(0,F_r)dr<\infty.
\end{align*}
So, Theorem \ref{main} implies that $f\in A_{\alpha}^p(\D)\cap\mathcal{U}$ for any $p>0$ and $\alpha>-1$ such that $\frac{p}{\alpha+2}\in (0,q]$.
\qed
\subsection{Proof of Corollary \ref{g1}}
Let $p>0$, $\alpha\geq -1$ and suppose that $f\in A_{\alpha}^p(\D)$ is conformal. We first prove (1). Note that $M_f$ is an increasing function of $r$. Hence, for $r\in (0,1)$,
\[
\int_{r}^{1}(1-t)^{\alpha+1}M(t)^pdt\geq M(r)^p\int_{r}^{1}(1-t)^{\alpha+1}dt=\frac{1}{\alpha+2}(1-r)^{\alpha+2}M(r)^p.
\]
By Theorem \ref{bgp}, we have
\[
\lim_{r\to 1} \int_{r}^{1}(1-t)^{\alpha+1}M(t)^pdt= 0.
\]
Combining the above, we deduce that
\[\lim_{r\to 1}(1-r)^{\alpha+2}M(r)^{p}=0\]
or, equivalently,
\[
\lim_{r\to 1}(1-r)M(r)^{\frac{p}{\alpha+2}}=0\]
which completes the proof of (1).
We now prove (2). For $r\in(0,1)$, let $\psi(r)=\area\left(f(D(0,r))\right)$, where $D(0,r)$ is the disk centered at $0$ of radius $r$. By a result which appears in \cite[(2.9) on p. 132]{Per}, we have that
\begin{equation}\label{geom29}
\int_{0}^{1}(1-t)^{\alpha+1}\psi(t)^{\frac{p}{2}}dt<\infty.
\end{equation}
Since $\psi$ is an increasing function of $r$, repeating the argument above and using \eqref{geom29} instead of Theorem \ref{bgp} we get (2).
\qed
\subsection{Proof of Theorem \ref{g2}}
Let $f$ be a conformal mapping on $\D$. We first prove (1). Set
\[
L_f:=\liminf_{r\to 1}\frac{-\log(1-r)}{\log M(r)}.
\]
Let $p>0$, $\alpha >-1$ and assume that $f\in A_{\alpha}^p(\D)$. By (1) of Corollary \ref{g1}, we have
\[
\lim_{r\to 1}(1-r)M(r)^{\frac{p}{\alpha+2}}=0,
\]
which in turn implies that, for $r$ sufficiently close to $1$,
\[(1-r)M(r)^{\frac{p}{\alpha+2}}\le 1 \]
and thus
\[
\frac{p}{\alpha+2}\leq \liminf_{r\to 1}\frac{-\log(1-r)}{\log M(r)}.
\]
Taking the supremum over all $\frac{p}{\alpha+2}$ for which $f\in A_{\alpha}^p$ yields
\begin{equation}\label{equ1}
b(f)\leq L_f.
\end{equation}
Note that by the Koebe distortion theorem, or by Corollary \ref{corol}, $L_f\geq 1/2$. Choose any $\mu\in(0,L_f)$. Then $\mu<\frac{-\log(1-r)}{\log M(r)}$, for all $r$ sufficiently close to $1$ and thus
\[
M(r)^{\mu}<\frac{1}{1-r},
\]
for $r$ close to $1$. This implies that, if $\delta \in (0,1)$ is sufficiently close to $1$, then
\[
\int_{\delta}^{1}M(r)^{\mu(1-\epsilon)}dr \le \int_{\delta}^{1} (1-r)^{\epsilon-1}dr<\infty,
\]
for all $\epsilon\in (0,1)$. Therefore, by Theorem \ref{bgp}, $f\in H^{\mu(1-\epsilon)}(\D)$ and hence
\[
h(f)\geq \mu(1-\epsilon),
\]
for all $\epsilon\in (0,1)$. Letting $\epsilon\to 0$ and using the fact that $h(f)=b(f)$ we infer that
\[
b(f)\geq \mu.
\]
Finally, letting $\mu\to L_f$ we obtain
\begin{equation}\label{equ2}
b(f)\geq L_f.
\end{equation}
By \eqref{equ1} and \eqref{equ2}, we have $b(f)=L_f$ and the proof of (1) is complete.
We now proceed with the proof of (3). Set
\[S_f:=\sup\{\lambda>0:\ \lim_{r\to 1}(1-r)M(r)^{\lambda}=0\}.\]
Let $p>0$, $\alpha>-1$ and assume that $f\in A_{\alpha}^p(\D)$ is conformal. Then by Corollary \ref{g1},
\[
\lim_{r\to 1}(1-r)M(r)^{\frac{p}{\alpha+2}}=0
\]
and thus $\frac{p}{\alpha+2}\leq S_f$. Taking the supremum, we have
\begin{equation}\label{equ3}
b(f)\leq S_f.
\end{equation}
We would now like to show that equality holds in \eqref{equ3}. Suppose this is not the case. Then there is an $\epsilon>0$ so that $b(f)+\epsilon<S_f$. By the definition of $S_f$, it follows that
\[
\lim_{r\to 1}(1-r)M(r)^{b(f)+\epsilon}=0,
\]
which in turn implies that
\[
M(r)^{b(f)+\frac{\epsilon}{2}}\leq (1-r)^{-\frac{b(f)+\frac{\epsilon}{2}}{b(f)+\epsilon}},
\]
for $r$ close to 1. Therefore,
if $\delta \in (0,1)$ is sufficiently close to $1$, then
\[
\int_{\delta}^{1}M(r)^{b(f)+\frac{\epsilon}{2}}dr<\infty.
\]
By Theorem \ref{bgp}, we deduce that $f\in H^{b(f)+\frac{\epsilon}{2}}(\D)$. Since $h(f)=b(f)$, this is a contradiction. Therefore, $b(f)=S_f$.
The proofs of (2) and (4) are carried out exactly in the same manner, the only differences being that we use (2), instead of (1), of Corollary \ref{g1} and \eqref{geom29} instead of \eqref{growth}.
\qed
\section{Example}\label{exa}
Let $\mathcal{U}$ denote the class of all conformal mappings defined on $\D$. By the second part of Corollary \ref{inclu},
\begin{equation}\label{sp}
H^q(\D)\cap\mathcal{U}\subset A_{\alpha}^p(\D)\cap\mathcal{U},
\end{equation}
for any $p>0$ and $\alpha>-1$ satisfying $0<\frac{p}{\alpha+2}\leq q$. For $\alpha=0$ and $p=2q$, we obtain the inclusion $H^q(\D)\cap\mathcal{U}\subset A^{2q}(\D)\cap\mathcal{U}$, for any $q>0$. A stronger statement is actually true, namely $H^q(\D)\subset A^{2q}(\D)$, and it follows from a theorem of Hardy and Littlewood \cite{HL}. The authors in \cite[p. 852]{Ba2} prove by means of explicit functions that, for any $q>0$, there exists a conformal mapping in $A^{2q}(\D)\setminus H^q(\D)$. Next, we exhibit a conformal mapping $f$ having the following properties:
\begin{enumerate}
\item $f\in\mathcal{U}$,
\item $f\in A_1^3(\D)$,
\item $f\notin A^2_0(\D)$,
\item $f\notin H^1(\D)$.
\end{enumerate}
Properties (2) and (4) show that the inclusion \eqref{sp} is, in general, strict. Moreover, properties (2) and (3) prove that if we consider $\alpha,\alpha'>-1$ and $p,p'>0$ such that $\frac{p}{\alpha+2}=\frac{p'}{\alpha'+2}$, then the equality
\[A_{\alpha}^p(\D)\cap\mathcal{U}=A_{\alpha'}^{p'}(\D)\cap\mathcal{U},\]
is not always true.
We note that our function is very similar to the functions considered in \cite{Ba2} and the proof is along the same lines.
\begin{proof}
Consider the function
\[
f(z)=\frac{1}{(1-z)\sqrt{\log\frac{2e}{1-z}}},\ z\in\D.
\]
The mapping $\frac{2e}{1-z}$ maps $\D$ conformally onto $\{\Re z>e\}$ and we can define $\log z$ to be analytic there by choosing the argument in $(-\pi/2,\pi/2)$. The image, under this logarithm, of $\{\Re z>e\}$ is a simply connected subregion of $\{\Re z>1\}$ and thus by choosing the argument for the square root to be in $(-\pi/2,\pi/2)$ again, we see that $f$ is well defined and analytic in $\D$.
First, we show that
\begin{equation}\label{maxim}
M_f(r)=\frac{1}{(1-r)\sqrt{\log\frac{2e}{1-r}}}.
\end{equation}
Note that the function $x\mapsto x\sqrt{\log\frac{2e}{x}}$ is increasing for $x\in(0,2)$. Therefore, for $|z|=r\in(0,1)$,
\[
\bigg\lvert(1-z)\sqrt{\log\frac{2e}{1-z}}\bigg\rvert\geq |1-z|\sqrt{\log\frac{2e}{|1-z|}}\geq (1-r)\sqrt{\log\frac{2e}{1-r}}.
\]
This implies that
\[
M_f(r)\leq \frac{1}{(1-r)\sqrt{\log\frac{2e}{1-r}}}.
\]
However, the right hand side of this estimate equals $f(r)$ and thus we have \eqref{maxim}. Suppose for a moment that (1) holds. If we choose $p>0$ and $\alpha \ge -1$ such that $\alpha+2-p=0$, then, by \eqref{maxim},
\[\int_{0}^{1} (1-r)^{\alpha+1}M_f^p(r) dr=\int_0^1 {\frac{1}{(1-r)\left( \log\frac{2e}{1-r}\right)^{{p}/{2}}}dr}=\int_{\log (2e)}^{\infty} \frac{1}{y^{p/2}}dy,\]
where we made the change of variable $\log\frac{2e}{1-r}=y$. The last integral converges if and only if $p>2$, so Theorem \ref{bgp}, applied with $(\alpha,p)=(1,3)$, $(0,2)$ and $(-1,1)$, shows that (2), (3), and (4) hold as well. Therefore, it only remains to prove (1). To do this, we need the following lemma.
\begin{lemma}\label{lemap}
For $z\in\D$,
\[
\Re\left[\left(\log\frac{2e}{1-z}\right)^{-1/2}-\frac{1}{2}\left(\log\frac{2e}{1-z}\right)^{-3/2}\right]>0.
\]
\end{lemma}
\begin{proof}
Recall that the mapping $\frac{2e}{1-z}$ maps $\D$ conformally onto $\{\Re z>e\}$ and the mapping $\log z$, as chosen above, maps $\{\Re z>e\}$ conformally onto a simply connected subregion of $\{\Re z>1\}$. It is not hard to check that the mapping $z^{-1/2}$ maps $\{\Re z>1\}$ conformally onto a simply connected subregion of the slice $\{z\in\D:\ |\arg z|<\pi/4\}$. It follows that the composition
\[
g(z)=z^{-1/2}\circ\log z\circ \frac{2e}{1-z}=\left(\log\frac{2e}{1-z}\right)^{-1/2}
\]
maps $\D$ conformally onto a simply connected domain $\Omega\subset\{z\in\D:\ |\arg z|<\pi/4\}$. Now, we show that $\Re\left(z-\frac{1}{2}z^3\right)\geq 0$, for $z\in\partial\{z\in\D:\ |\arg z|<\pi/4\}$.
If $z=re^{i\pi/4}$, $r\in[0,1]$, then
\[
\Re \left(re^{i\pi/4}-\frac{1}{2}r^3e^{3i\pi/4}\right)=\frac{\sqrt{2}}{2}r(1+\frac{r^2}{2})\geq 0.
\]
By symmetry, the same calculation is valid for $z=re^{-i\pi/4}$. If $z=e^{it}$, for $|t|<\pi/4$, then
\[
\Re\left(e^{it}-\frac{1}{2}e^{3it}\right)=\cos t-\frac{1}{2}\cos (3t)\geq \frac{\sqrt{2}}{2}-\frac{1}{2}\cos 3t>0.
\]
Hence, $\Re\left(z-\frac{1}{2}z^3\right)\geq 0$, for $z\in\partial\{z\in\D:\ |\arg z|<\pi/4\}$ and by the maximum principle, the inequality is strict for $z\in \{z\in\D:\ |\arg z|<\pi/4\}$. Since $\Omega\subset \{z\in\D:\ |\arg z|<\pi/4\}$, we have
$\Re\left(z-\frac{1}{2}z^3\right)>0$ in $\Omega$ and therefore
\[
\Re\left[\left(z-\frac{1}{2}z^3\right)\circ g(z)\right]=\Re\left[g(z)-\frac{1}{2}g(z)^3\right]>0,\ z\in\D.
\]
\end{proof}
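As a quick numerical sanity check of Lemma \ref{lemap} (an illustrative sketch, not part of the proof), one can sample the disk and evaluate the real part directly. The sampling grid is arbitrary; the principal branches used by `cmath` coincide with the branches chosen above, since $\Re\frac{2e}{1-z}>e$ and $\Re\log\frac{2e}{1-z}>1$.

```python
import cmath

def h(z):
    # L = log(2e/(1-z)); since Re(2e/(1-z)) > e > 0, the principal
    # branch of cmath.log matches the branch chosen in the text.
    L = cmath.log(2 * cmath.e / (1 - z))
    # Principal powers again agree with the branches used above,
    # because Re(L) > 1 implies |arg L| < pi/2.
    return (L ** -0.5 - 0.5 * L ** -1.5).real

# Sample the open unit disk on a polar grid.
samples = [r / 100 * cmath.exp(1j * k * cmath.pi / 50)
           for r in range(100) for k in range(100)]
assert all(h(z) > 0 for z in samples)
```

At $z=0$ one finds $h(0)\approx 0.54$, consistent with the strict positivity asserted by the lemma.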
We can now prove (1). Upon differentiating $f$, we find
\[
f'(z)=(1-z)^{-2}\left[\left(\log\frac{2e}{1-z}\right)^{-1/2}-\frac{1}{2}\left(\log\frac{2e}{1-z}\right)^{-3/2}\right].
\]
By Lemma \ref{lemap}, if $z\in\D$,
\begin{align*}
\Re (1-z)^2f'(z)&=\Re \frac{zf'(z)}{K(z)}\\
&=\Re \left[\left(\log\frac{2e}{1-z}\right)^{-1/2}-\frac{1}{2}\left(\log\frac{2e}{1-z}\right)^{-3/2}\right]>0,
\end{align*}
where $K(z)=\frac{z}{(1-z)^2}$ is the Koebe function. Using the terminology of Sections 2.2 and 2.3 of \cite{Pom0}, it follows that, since $K$ is starlike, the function $f(z)-f(0)$ is close-to-convex and thus by \cite[Theorem 2.11, p. 51]{Pom0}, $f\in\mathcal{U}$.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{Ba1}{article}{
title={The size of the set on which a univalent function is large},
author={A. Baernstein},
journal={J. d' Anal. Math.},
volume={70},
date={1996},
pages={157--173}
}
\bib{Ba2}{article}{
title={Univalent functions, Hardy spaces and spaces of Dirichlet type},
author={A. Baernstein and D. Girela and J.{\'A}. Pel{\'a}ez},
journal={Illinois J. Math.},
volume={48},
date={2004},
pages={837--859}
}
\bib{Kar}{article}{
title={Hyperbolic metric and membership of
conformal maps in the Bergman space},
author={D. Betsakos and C. Karafyllia and N. Karamanlis},
journal={Canad. Math. Bull.},
volume={64(1)},
date={2021},
pages={174--181}
}
\bib{Dur}{book}{
title={Theory of $H^p$ Spaces},
author={P. Duren},
date={1970},
publisher={Academic Press},
address={New York-London}
}
\bib{DS}{book}{
title={Bergman Spaces},
author={P. Duren and A. Schuster},
date={2004},
publisher={American Mathematical Society},
address={Providence, RI}
}
\bib{Ess}{article}{
title={On analytic functions which are in $H^p$ for some positive $p$},
author={M. Ess{\' e}n},
journal={Ark. Mat.},
volume={19},
date={1981},
pages={43--51}
}
\bib{Ess2}{article}{
title={A value distribution criterion for the class $L\log L$ and some related questions},
author={M. Ess{\' e}n and D.F. Shea and C.S. Stanton},
journal={Ann. Inst. Fourier, Grenoble},
volume={35(4)},
date={1985},
pages={127--150}
}
\bib{Gar}{book}{
title={Harmonic Measure},
author={J.B. Garnett and D.E. Marshall},
date={2005},
publisher={Cambridge University Press},
address={Cambridge}
}
\bib{Han1}{article}{
title={Hardy classes and ranges of functions},
author={L.J. Hansen},
journal={Michigan Math. J.},
volume={17},
date={1970},
pages={235--248}
}
\bib{HL}{article}{
title={Some properties of fractional integrals II},
author={G.H. Hardy and J.E. Littlewood},
journal={Math. Z.},
volume={34},
date={1932},
pages={403--439}
}
\bib{HKZ}{book}{
title={Theory of Bergman Spaces},
author={H. Hedenmalm and B. Korenblum and K. Zhu},
date={2000},
publisher={Springer-Verlag},
address={New York}
}
\bib{Kar1}{article}{
title={Hyperbolic metric and membership of
conformal maps in the Hardy space},
author={C. Karafyllia},
journal={Proc. Amer. Math. Soc.},
volume={147},
date={2019},
pages={3855--3858}
}
\bib{Kar2}{article}{
title={On a relation between harmonic measure and hyperbolic distance on planar domains},
author={C. Karafyllia},
journal={Indiana Univ. Math. J.},
volume={69},
date={2020},
pages={1785--1814}
}
\bib{Kar3}{article}{
title={On the Hardy number of a domain in terms of harmonic measure and hyperbolic distance},
author={C. Karafyllia},
journal={Ark. Mat.},
volume={58},
date={2020},
pages={307--331}
}
\bib{Kim}{article}{
title={Hardy spaces and unbounded quasidisks},
author={Y.C. Kim and T. Sugawa},
journal={Ann. Acad. Sci. Fenn. Math.},
volume={36},
date={2011},
pages={291--300}
}
\bib{Mar}{article}{
title={The angular distribution of mass by Bergman functions},
author={D.E. Marshall and W. Smith},
journal={Rev. Mat. Iberoam.},
volume={15},
date={1999},
pages={93--116}
}
\bib{Per}{article}{
title={Univalent functions in Hardy, Bergman, Bloch and related spaces},
author={F. P{\'e}rez-Gonz{\'a}lez and J. R{\"a}tty{\"a}},
journal={J. d' Anal. Math.},
volume={105},
date={2008},
pages={125--148}
}
\bib{Cor}{thesis}{
title={Geometric models, iteration and composition operators},
author={P. Poggi-Corradini},
organization={University of Washington},
type={Ph.D. Thesis},
date={1996}
}
\bib{Cor2}{article}{
title={The Hardy Class of Geometric Models and the Essential Spectral Radius of Composition Operators},
author={P. Poggi-Corradini},
journal={J. Funct. Anal.},
volume={143},
date={1997},
pages={129--156}
}
\bib{Pom0}{book}{
title={Univalent functions},
author={C. Pommerenke},
date={1975},
publisher={Vandenhoeck {\&} Ruprecht},
address={G{\"o}ttingen}
}
\bib{Pom1}{article}{
title={Schlichte Funktionen und analytische Funktionen von beschr{\"a}nkten Oszillation},
author={C. Pommerenke},
journal={Comment. Math. Helv.},
volume={52},
date={1977},
pages={591--602}
}
\bib{Pom2}{book}{
title={Boundary Behaviour of Conformal Maps},
author={C. Pommerenke},
date={1992},
publisher={Springer-Verlag},
address={Berlin}
}
\bib{Pra}{article}{
title={{\"U}ber Mittelwerte analytischer Funktionen},
author={H. Prawitz},
journal={Ark. Mat. Astr. Fys.},
volume={20},
date={1927},
pages={1--12}
}
\bib{Smith}{article}{
title={Composition operators between Bergman and Hardy spaces},
author={W. Smith},
journal={Trans. Amer. Math. Soc.},
volume={348},
date={1996},
pages={2331--2348}
}
\bib{Stein}{book}{
title={Real Analysis: Measure Theory, Integration, and Hilbert Spaces},
author={E.M. Stein and R. Shakarchi},
date={2005},
publisher={Princeton University Press},
address={Princeton, N.J. and Oxford}
}
\bib{Zhu}{article}{
title={Translating inequalities between Hardy and Bergman spaces},
author={K. Zhu},
journal={Amer. Math. Monthly},
volume={111},
date={2004},
pages={520--525}
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
The top quark as the heaviest known elementary particle plays a
fundamental role, both in the Standard Model and in new physics scenarios.
Experimental analyses of Large Hadron Collider (LHC) data collected during
run II will provide unprecedented reach at high energy and in exclusive
phase space regions with associated production of jets and vector bosons or Higgs bosons.
The production of a $t\bar t$ system in association with multiple jets plays an
especially important role as a background to new physics searches and
to various Higgs and Standard Model analyses. In particular, the precise
theoretical control of $t\bar t+$multijet backgrounds is one of the most
important prerequisites for the observation of top-quark production in
association with a Higgs boson, which would give direct access to the
top-quark Yukawa coupling.
In addition, $t\bar t+$multijet production allows for
powerful tests of perturbative QCD and is also routinely exploited
for the validation of Monte Carlo tools that are used
in a multitude of LHC studies.
All these analyses require theoretical predictions at the highest possible accuracy.
Inclusive top-quark pair production at hadron colliders has been computed
fully differentially to next-to-next-to-leading order (NNLO)
in the strong coupling expansion~\cite{Czakon:2013goa,Czakon:2015owf}.
Predictions for top-quark pair production in association with up to two jets are
available at next-to-leading order
(NLO)~\cite{Dittmaier:2007wz,Bredenstein:2009aj,Bredenstein:2010rs,Bevilacqua:2009zn,Bevilacqua:2010ve,
Bevilacqua:2011aa}, and
NLO calculations for top-quark pair production, inclusive as well as in association with one or two jets, were matched
to parton showers in order to provide predictions at the particle level~\cite{Frixione:2002ik,Frixione:2007nw,Kardos:2011qa,
Alioli:2011as,Frederix:2012ps,Kardos:2013vxa,Cascioli:2013era,Hoeche:2013mua,Hoeche:2014qda,Czakon:2015cla}.
In this letter we report on the first computation of top-quark pair production with up to
three jets at NLO QCD.
At present only few scattering processes with more than six
external legs are known at NLO~\cite{Berger:2010zx,Ita:2011wn,Bern:2013gka,Badger:2013yda,Badger:2013ava,Denner:2015yca,Bevilacqua:2015qha,Denner:2016jyo},
and the calculation at hand is the first one that deals with a
$2\to 5$ process with seven colored external particles including also heavy quarks.
Detailed predictions are presented for $pp\to t\bar t+0,1,2,3$\,jets
at 13\,TeV, both at the level of cross sections and differential distributions.
We also investigate the scaling behavior of $t\bar t+$multijet cross sections
with varying jet multiplicity.
The characteristic scales of $t\bar t+$multijet production, i.e.~the invariant mass of the
$t\bar t$ system and the transverse momentum threshold
for jet production, are typically separated by more than one order of
magnitude, while differential observables involve multiple scales,
which can be distributed over more than two orders of
magnitude. In this situation, finding renormalization and
factorization scales that ensure a decent convergence of perturbative QCD
for the widest possible range of observables is not trivial.
Moreover, in the presence of a wide spectrum of scales, the usage of
standard factor-two variations for the estimation of theoretical
uncertainties due to missing higher-order effects becomes questionable.
Motivated by these observations, to gain more insights into the scale dependence of $t\bar t+$multijet
production and related uncertainties we compare a fixed-order
calculation, with the standard scale choice $H_\rT /2$, against results based
on the M\protect\scalebox{0.8}{I}NLO\xspace method~\cite{Hamilton:2012np}.
The scale $H_\rT/2$ was found to yield stable and reliable
NLO predictions for $V+$multijet production~\cite{Berger:2009ep},
while the M\protect\scalebox{0.8}{I}NLO\xspace method is especially well suited for
multi-scale QCD processes, as it controls, through next-to-leading logarithmic (NLL) resummation,
the various higher-order logarithms
that emerge from soft and collinear effects
in the presence of widely separated scales.
The present study provides a first systematic comparison of
the two approaches.
\section{Details of the calculation}
Our calculations are performed using the event generator
S\protect\scalebox{0.8}{HERPA}\xspace~\cite{Gleisberg:2003xi,Gleisberg:2008ta} in combination with O\protect\scalebox{0.8}{PEN}L\protect\scalebox{0.8}{OOPS}\xspace~\cite{Cascioli:2011va,OLhepforge}, a
fully automated one-loop generator based on a numerical recursion that allows
the fast evaluation of scattering amplitudes with many external particles.
For the reduction to scalar integrals and for the
numerical evaluation of the latter we used C\protect\scalebox{0.8}{UT}T\protect\scalebox{0.8}{OOLS}\xspace~\cite{Ossola:2007ax} in
combination with O\protect\scalebox{0.8}{NE}LO\protect\scalebox{0.8}{OP}\xspace~\cite{vanHameren:2010cp} and, alternatively, the
C\protect\scalebox{0.8}{OLLIER}\xspace library~\cite{Denner:2016kdg}, which implements the methods
of~\cite{Denner:2002ii,Denner:2005nn,Denner:2010tr}.
Tree amplitudes are computed using C\protect\scalebox{0.8}{OMIX}\xspace~\cite{Gleisberg:2008fv},
a matrix-element generator based on the color-dressed Berends-Giele recursive
relations~\cite{Duhr:2006iq}. Infrared singularities are canceled using the
dipole subtraction method~\cite{Catani:1996vz,Catani:2002hc}, as automated in C\protect\scalebox{0.8}{OMIX}\xspace,
with the exception of K- and P-operators that are taken from the implementation
described in~\cite{Gleisberg:2007md}. C\protect\scalebox{0.8}{OMIX}\xspace is also used for the evaluation
of all phase-space integrals. Analyses are performed with the help of
R\protect\scalebox{0.8}{IVET}\xspace~\cite{Buckley:2010ar}.
\begin{table}
\begin{center}
\begin{tabular}{l|rrrr}
partonic channel \textbackslash~$N$ & 0 & 1 & 2 & 3\\
\hline
$gg\to t\bar t+N\,g$ & 47 & 630 & 9'438 & 152'070 \\
$u\bar u \to t\bar t+N\,g$ & 12 & 122 & 1'608 & 23'835\\
$u\bar u \to t\bar t u\bar u+(N-2)\,g$ & -- & -- & 506 & 6'642\\
$u\bar u \to t\bar t d\bar d+(N-2)\,g$ & -- & -- & 252 & 3'321
\end{tabular}
\end{center}
\caption{Number of one-loop Feynman diagrams in representative partonic channels in
$pp\to t\bar t+N$\,jets for $N=0,1,2,3$.}
\label{tab:diagcounting}
\end{table}
We carry out a series of $pp\to t\bar t+N$\,jet
NLO calculations with $N=0,1,2,3$, taking into account the exact dependence on the
number of colors, $N_c=3$. As an illustration of the rapid growth of complexity
at high jet multiplicity, in Table~\ref{tab:diagcounting} we list the
number of one-loop Feynman diagrams that contribute
to a few representative partonic channels.
In addition to the presence of more than $10^5$ loop diagrams
in the $gg\to t\bar t+3g$ channel, we note that
the very large number of channels not listed in
Table~\ref{tab:diagcounting} and the
computation of real contributions
pose very serious challenges in the
\mbox{$t\bar t+3$\,jet} calculation.
Proton--proton cross sections are obtained by using,
both at LO and NLO, the CT14 NLO PDF set~\cite{Dulat:2015mca}
with five active flavors, and the corresponding strong coupling.
Matrix elements are computed with massless $b$-quarks,
and top-quarks are kept stable. Hence,
our results can be compared to data only upon
reconstruction of the $t\bar t$ system and extrapolation
of fiducial measurements to the full phase space.
However, we expect the main features shown in our analysis to be
present also in computations including top-quark decays
and acceptance cuts.
The latter
will undoubtedly play a role,
but the reduction of scale uncertainties is generic as long as
the radiative phase space is not heavily restricted by experimental cuts. Apart from
performing a direct analysis, we also provide ROOT Ntuples~\cite{Bern:2013zja} that can be
used in the future for more detailed studies including top-quark decays and matching
to parton showers.
In our standard perturbative calculations we employ renormalization
and factorization scales
defined as $\mu_{\mathrm{R}}=\mu_{\mathrm{F}}=H_\rT/2$, where
$H_\rT=\sum_i \sqrt{p_{\mathrm{T},i}^2+m_i^2}$, with the
sum running over all (anti)top quarks and light partons, including also real radiation at NLO.
Results generated in this manner are compared to alternative computations based
on the M\protect\scalebox{0.8}{I}NLO\xspace procedure~\cite{Hamilton:2012np}.
To this end, we have realized
a fully automated implementation of the
M\protect\scalebox{0.8}{I}NLO\xspace method in S\protect\scalebox{0.8}{HERPA}\xspace.
\section{M\protect\scalebox{0.8}{I}NLO\xspace method and implementation}
The M\protect\scalebox{0.8}{I}NLO\xspace method can be regarded as a generalized scale setting approach
that guarantees a decent perturbative
convergence
for differential multi-jet cross sections. This is achieved via
appropriate scale choices~\cite{Amati:1980ch} and Sudakov form factors~\cite{Catani:1991hj}
that resum NLL enhancements in the soft and collinear regions of phase space.
To this end, in the case of $t\bar t+$multijet production,
LO partonic events of type $ab\to t\bar t+N$\,partons
are recursively clustered back to a core process $\tilde{a}\tilde{b}\to t\bar t$
by means of a $k_\mathrm{T}$ jet algorithm~\cite{Catani:1993hr}.
The resulting clustering history is interpreted as an event topology,
where the $N$-jet final state emerges from the core process
through a sequence of successive branchings that
take place at the scales
$q_N,\ldots,q_2,q_1$
and are connected by propagators.
The nodal scales $q_i$ correspond to the $k_\mathrm{T}$ measure of the jet algorithm,
and only $1\to 2$ branchings consistent with the QCD interaction vertices are
allowed.
In our implementation of the
$k_\mathrm{T}$ jet algorithm we use the definition of $\Delta R$
given in Eq.~(11) of~\cite{Catani:1993hr}
and we set $\Delta R=0.4$.
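For illustration, the $k_\mathrm{T}$ measure driving the clustering can be sketched in a few lines, assuming the usual rapidity--azimuth definition of $\Delta R$ (the particle content and kinematic values below are made up):

```python
import math

R = 0.4  # jet-resolution parameter, Delta R = 0.4 as in the text

def kt_distances(particles):
    """particles: list of (pt, y, phi). Returns (d_ij dict, d_iB list)."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    dij = {}
    for i in range(len(particles)):
        for j in range(i + 1, len(particles)):
            pti, yi, phii = particles[i]
            ptj, yj, phij = particles[j]
            dr2 = (yi - yj) ** 2 + dphi(phii, phij) ** 2
            dij[(i, j)] = min(pti, ptj) ** 2 * dr2 / R ** 2
    diB = [pt ** 2 for pt, _, _ in particles]
    return dij, diB

# Two nearby partons and one well-separated one (toy values):
partons = [(30.0, 0.0, 0.0), (40.0, 0.1, 0.1), (50.0, 2.0, 3.0)]
dij, diB = kt_distances(partons)
# The nearby pair (0, 1) gives the smallest measure, so it is
# clustered first; its scale q = sqrt(d_01) becomes a nodal scale.
assert min(dij, key=dij.get) == (0, 1)
assert dij[(0, 1)] < min(diB)
```

Iterating this pairing, always undoing the smallest remaining distance, produces the sequence of nodal scales $q_N,\ldots,q_1$ used above.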
Typically, the $k_\mathrm{T}$ algorithm gives rise
to ordered branching histories with
$q_1<\dots<q_N<\mu_{\mathrm{core}}$, where $\mu_{\mathrm{core}}$ is the characteristic hard scale of the
core process. However, also unordered branchings can occur.
For instance, this can happen in the presence of jets with transverse momenta above
$\mu_{\mathrm{core}}$.
Since soft-collinear resummation does not make sense for such hard emissions,
in our M\protect\scalebox{0.8}{I}NLO\xspace implementation
possible unordered clusterings are undone and alternative
ordered configurations are considered.
At the end, the branching history is restricted to
ordered branchings $q_1<\dots<q_{\tilde{N}}<\mu_{\mathrm{core}}$, where
$\tilde{N}=N-M$. The remaining $M$ jets that can not be clustered
in an ordered way are treated as part of the core process,
and $\mu_{\mathrm{core}}$ is evaluated according to the kinematics of the
corresponding $t\bar t+M$\,jet hard event.
At LO, the renormalization scale $\mu_{\mathrm{R}}$ is chosen according to
the event branching history in such a way that
\begin{equation}\label{eq:muR}
\left[\alpha_{\mathrm{s}}(\mu_{\mathrm{R}})\right]^{N+2} = \left[\alpha_{\mathrm{s}}(\mu_{\mathrm{core}})\right]^{2+M} \prod_{i=1}^{\tilde{N}} \alpha_{\mathrm{s}}(q_i),
\end{equation}
and in our calculation we set $\mu_{\mathrm{core}}=H_\rT/2$.
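With a one-loop running coupling, Eq.~\eqref{eq:muR} can be solved for $\mu_{\mathrm{R}}$ in closed form; the toy $\Lambda$ and the one-loop form are illustrative assumptions (the actual calculation uses the coupling of the CT14 set):

```python
import math

LAMBDA = 0.2  # toy QCD scale in GeV (assumption)
B0 = (33 - 2 * 5) / (12 * math.pi)  # one-loop beta coefficient, nf = 5

def alpha_s(mu):
    return 1.0 / (B0 * math.log(mu ** 2 / LAMBDA ** 2))

def mu_r_minlo(mu_core, branching_scales, n_jets_in_core=0):
    """Solve [a_s(mu_R)]^(N+2) = [a_s(mu_core)]^(2+M) * prod a_s(q_i),
    with N = len(branching_scales) + M (M = jets kept in the core)."""
    n = len(branching_scales) + n_jets_in_core
    rhs = alpha_s(mu_core) ** (2 + n_jets_in_core)
    for q in branching_scales:
        rhs *= alpha_s(q)
    a_target = rhs ** (1.0 / (n + 2))
    # Invert the one-loop coupling: mu = Lambda * exp(1/(2 b0 a_s)).
    return LAMBDA * math.exp(1.0 / (2 * B0 * a_target))

# Consistency check: if every branching scale equals mu_core,
# the geometric mean gives back mu_R = mu_core.
assert abs(mu_r_minlo(100.0, [100.0, 100.0]) - 100.0) < 1e-6
```

For mixed scales, $\mu_{\mathrm{R}}$ lands between the softest branching scale and $\mu_{\mathrm{core}}$, as expected of a geometric mean in the coupling.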
The resummation of soft and collinear logarithms is achieved by dressing external and internal
lines of the event topology by Sudakov form factors.
At variance with the original formulation of M\protect\scalebox{0.8}{I}NLO\xspace~\cite{Hamilton:2012np},
in our implementation we employ the symmetry of the LO DGLAP splitting
functions, $P_{ab}(z)$, to define physical Sudakov form factors
\begin{equation}\label{eq:def_nll_sudakov}
\begin{split}
&\Delta_a(Q_0,Q)=\exp\left\{-\int_{Q_0}^Q\frac{{\rm d} q}{q}
\frac{\alpha_{\mathrm{s}}(q)}{\pi}\sum_{b=q,g}\right.\\
&\quad\left.\int_0^{1-q/Q}{\rm d}z
\left(z\,P_{ab}(z)+\delta_{ab}\frac{\alpha_{\mathrm{s}}(q)}{2\pi}\frac{2C_a}{1-z}K\right)\right\}\;,
\end{split}
\end{equation}
where~\cite{Catani:1990rr}
\begin{equation}
K=\left(\frac{67}{18}-\frac{\pi^2}{6}\right)C_A-\frac{10}{9}T_R\,n_f\;,
\end{equation}
and $a=g,q$ corresponds to massless gluons and quarks, respectively.
The representation~\eqref{eq:def_nll_sudakov}
allows the interpretation of $\Delta_a(Q_0,Q)$ in terms of no-branching probabilities
between the scales $Q_0$ and $Q$.
Given a LO event topology
with $\tilde{N}$ ordered branchings,
the lowest branching scale,
$q_{\mathrm{min}}=q_1$, is identified as resolution scale,
and
the $\tilde{N}$ emissions are supplemented by Sudakov form factors
that render them exclusive w.r.t.~any extra emissions above $q_{\mathrm{min}}$.
This is achieved by dressing each external line of flavor $a=q,g$
connected with the $i$-th branching
by a form factor $\Delta_{a}(q_{\mathrm{min}},q_i)$, while internal lines that
connect successive branchings $k<l$ are dressed by factors
$\Delta_{a}(q_{\mathrm{min}},q_l)/\Delta_{a}(q_{\mathrm{min}},q_k)$, which correspond to
no-branching probabilities between $q_k$ and $q_l$ at resolution scale $q_{\mathrm{min}}$.
For internal lines that connect branchings at $q_k$ to the core process
analogous no-branching probabilities
between $q_k$ and $\mu_{\mathrm{core}}$ are applied.
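Schematically, for a chain topology with ordered nodal scales $q_1<\dots<q_{\tilde N}<\mu_{\mathrm{core}}$, the Sudakov dressing described above multiplies the event weight as follows; the toy form factor, the chain topology, and one external line per branching are illustrative assumptions:

```python
import math

def delta(q0, q):
    # Toy no-branching probability with the right qualitative behavior:
    # decreasing in q at fixed q0, and delta(q0, q0) = 1 (assumption).
    return math.exp(-0.1 * math.log(q / q0) ** 2)

def minlo_weight(nodal_scales, mu_core):
    """nodal_scales: ordered q_1 < ... < q_N < mu_core (chain topology)."""
    q = sorted(nodal_scales)
    qmin = q[0]
    w = 1.0
    for qi in q:
        w *= delta(qmin, qi)                    # external line at branching i
    for qk, ql in zip(q, q[1:]):
        w *= delta(qmin, ql) / delta(qmin, qk)  # internal propagator k -> l
    w *= delta(qmin, mu_core) / delta(qmin, q[-1])  # line into the core
    return w

w = minlo_weight([20.0, 50.0, 80.0], 250.0)
assert 0.0 < w <= 1.0  # every factor is a (ratio of) probabilities <= 1
```

Each ratio is a no-branching probability between successive scales at resolution $q_{\mathrm{min}}$, so the total weight stays in $(0,1]$.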
Sudakov form factors along the incoming lines
provide a NLL resummation
that corresponds to the evolution of PDFs
from the resolution scale $q_{\mathrm{min}}$
to the hard scale of the core process.
Therefore, for consistency, PDFs are evaluated at the factorization scale $\mu_{\mathrm{F}}=q_{\mathrm{min}}$.
The generalization to NLO requires only two straightforward modifications of the LO
algorithm. First, for what concerns the scale setting and
Sudakov form factors, the contributions that live in the $N$-parton phase space, i.e.~Born and one-loop
contributions as well as all IR-subtraction terms, are handled exactly as in LO.
Instead, real-emission events that lead to histories with
$\tilde{N}+1\le N+1$ ordered branchings
at scales $q_0<q_1<\dots <q_{\tilde{N}}$ are handled as Born-like $\tilde{N}$-parton
events with resolution scale
$q_{\mathrm{min}}=q_1$, i.e.~the softest branching at the scale $q_0$ is considered as unresolved and is simply
excluded from the M\protect\scalebox{0.8}{I}NLO\xspace procedure. In other words, the softest emission at NLO is not
dressed with Sudakov form factors and does not enter the definitions of
$\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$.
Second, appropriate counterterms are introduced in order to subtract the
overall $\mathcal{O}(\alpha_{\mathrm{s}})$ contribution from Sudakov form factors,
such as to avoid double counting of NLO effects.
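The NLO bookkeeping for real-emission events then amounts to dropping the softest branching before the LO machinery is applied (a sketch; the clustering producing the ordered scales is assumed done upstream):

```python
def minlo_scales_real(branching_scales):
    """Real-emission event with ordered scales q_0 < q_1 < ... < q_N:
    the softest branching q_0 is treated as unresolved, so the
    Sudakov/scale machinery sees only q_1, ..., q_N with q_min = q_1."""
    q = sorted(branching_scales)
    resolved = q[1:]
    return resolved, resolved[0]  # (scales entering MINLO, q_min)

resolved, qmin = minlo_scales_real([12.0, 35.0, 60.0, 90.0])
assert resolved == [35.0, 60.0, 90.0] and qmin == 35.0
```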
Concerning the treatment of top quarks a few extra comments are in order.
Given the low rate at which top quarks radiate jets, such emissions are
simply neglected in our implementation of the M\protect\scalebox{0.8}{I}NLO\xspace procedure
by excluding top quarks from the clustering algorithm.
To quantify the uncertainty arising from this approach,
we implemented an alternative algorithm that allows the combination of top quarks with other
final-state partons in the massive Durham scheme~\cite{Krauss:2003cr,Rodrigo:2003ws}.
The difference between the two procedures is found to be about 10\% at leading order and
5\% at next-to-leading order for the observables studied here, and it is therefore smaller
than the renormalization and factorization scale uncertainties.
Finally, also the top quarks that enter the core process are dressed with
Sudakov form factors $\Delta_t(q_{\mathrm{min}},\mu_{\mathrm{core}})$,
which render them exclusive w.r.t.~emissions above
$q_{\mathrm{min}}$.
To compute the Sudakov form factors $\Delta_t$,
we include quark masses in the splitting functions,
according to the method
described in~\cite{Krauss:2003cr,Rodrigo:2003ws}, using the corresponding extension
of Eq.~\eqref{eq:def_nll_sudakov}. This means
in particular that we use the massive splitting functions from~\cite{Catani:2000ef},
the propagator corrections listed in~\cite{Krauss:2003cr,Rodrigo:2003ws},
and we replace the two-loop cusp term $K\,2C_F/(1-z)$ by $K\,C_F\left(2/(1-z)-m^2/(p_i\cdot p_j)\right)$
in the case of massive quark splittings $\widetilde{\imath\jmath}\to i,j$.
Scale uncertainties in the M\protect\scalebox{0.8}{I}NLO\xspace framework are assessed through standard
factor-two variations of $\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$.
The renormalization scale is kept fixed in the
Sudakov form factors but is varied as usual in the rest of the (N)LO
cross section, including the counterterms that subtract the
$\mathcal{O}(\alpha_{\mathrm{s}})$ parts of the Sudakov form factors at NLO.
Variations $\mu_{\mathrm{F}}\to \xi_{\mathrm{F}}\,\mu_{\mathrm{F}}$ of the factorization scale
are more subtle.
They have to be applied at the level of PDFs and related NLO counterterms, as well
as in the Sudakov form factors that depend on $q_{\mathrm{min}}=\mu_{\mathrm{F}}$. More precisely,
$q_{\mathrm{min}}\to \xi_{\mathrm{F}}\,q_{\mathrm{min}}$ variations are applied only to Sudakov form factors associated with external and internal
initial-state lines,
and Sudakov form factors $\Delta_a(\xi_{\mathrm{F}}\,q_{\mathrm{min}},q_k)$ are set to one
when $\xi_{\mathrm{F}}\,q_{\mathrm{min}}$ exceeds $q_k$.
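The factorization-scale variation just described can be summarized in a few lines; the stand-in form factor `delta` represents any of the initial-state Sudakov factors (an assumption for illustration):

```python
import math

def delta(q0, q):
    # Stand-in no-branching probability (assumption), decreasing in q.
    return math.exp(-0.1 * math.log(q / q0) ** 2) if q > q0 else 1.0

def varied_is_sudakov(qmin, qk, xi_f):
    """Initial-state Sudakov with mu_F -> xi_F * mu_F variation:
    set to one when the varied resolution scale exceeds q_k."""
    if xi_f * qmin >= qk:
        return 1.0
    return delta(xi_f * qmin, qk)

assert varied_is_sudakov(20.0, 80.0, 1.0) < 1.0
assert varied_is_sudakov(20.0, 30.0, 2.0) == 1.0  # 2 * 20 > 30
```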
\section{Predictions for the 13\,TeV LHC}
In the following we present selected predictions
for $pp\to t\bar t+0,1,2,3$\,jets at 13\,TeV.
We construct jets by clustering light partons with the anti-$k_t$
algorithm~\cite{Cacciari:2008gp} at $R=0.4$,
and by default we select jets with pseudorapidity \mbox{$|\eta_{\mathrm{jet}}|<2.5$}
and a jet-$p_{\mathrm{T}}$ threshold of $25$\,GeV.
Unless stated otherwise, depending on the minimum number $N$ of jets that is required by the
observable at hand,
inclusive (N)LO or MI(N)LO calculations
with $N$ jets are used.
\begin{figure}[]
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/main/jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/ratioNLO/jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/ratioMINLO/jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/jet_multi_25}
\end{center}
\end{minipage}
\caption{\label{fig:njet_bsp}
Inclusive $t\bar t+$multijet cross sections with a minimum number
$N=0,1,2,3$ of jets at $p_{\rT,\mathrm{jet}}\ge$25\,GeV. See the main text for details.}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/main/BSP_jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/ratioNLO/BSP_jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 25 10 0,clip]{plots/ratioMINLO/BSP_jet_multi_25}\\[-4pt]
\includegraphics[scale=0.62,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/BSP_jet_multi_25}
\end{center}
\end{minipage}
\caption{\label{fig:njet_ratios}
Ratios of $t\bar t+N$\,jet
over $t\bar t+(N-1)$\,jet inclusive cross sections
for $N=1,2,3$ and $p_{\rT,\mathrm{jet}}\ge$25\,GeV.
}
\end{figure}
The jet multiplicity distribution is presented in~\reffi{fig:njet_bsp}.
The top panel displays four predictions, stemming from fixed-order LO and NLO calculations,
and from M\protect\scalebox{0.8}{I}NLO\xspace computations
at LO and NLO (labeled `MILO' and `MINLO'). The second panel shows the ratio between
LO and NLO predictions at fixed order,
while the third panel
shows the ratio between MILO and MINLO predictions.
The last panel
shows the ratio between MINLO and NLO.
The bands illustrate scale uncertainties
estimated through independent factor-two rescaling of
$\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$ excluding antipodal variations.
Fixed-order predictions feature rather large NLO corrections of about $+50\%$ for all
jet multiplicities, while M\protect\scalebox{0.8}{I}NLO\xspace results feature steadily decreasing
corrections for increasing $N_{\mathrm{jets}}$. In both cases, LO scale uncertainties tend to grow
by more than $10\%$ at each extra jet emission, while
(MI)\-NLO scale uncertainties are significantly reduced and the total width of
the (MI)NLO variation bands is about 20--25\% for all considered $N_{\mathrm{jets}}$ values.
Comparing fixed-order NLO and MINLO predictions
we observe a remarkable agreement at the level of 4--8\%.
This supports NLO and MINLO scale-uncertainty estimates based on factor-two variations and
encourages the usage of either
of the two calculations (NLO and MINLO) in practical applications.
\begin{table*}[h]
\include{table_INC_all}
\caption{\label{tab:incxsptall}
Inclusive ($N_{\mathrm{jets}}\ge n$) and exclusive ($N_{\mathrm{jets}}=n$) cross sections with $n=0,1,2,3$ jets
and different transverse momentum thresholds, $p_{\rT,\mathrm{jet}}\ge 25, 40, 60, 80$ GeV.
Uncertainties represent the envelope of the independent $\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$
variations around the central value (antipodal variations excluded).}
\end{table*}
As demonstrated in Table~\ref{tab:incxsptall},
the good agreement between fixed-order NLO and MINLO results and the
consistency of the observed NLO--MINLO differences with factor-two scale variations
persist also for a range of other commonly used $p_{\rT,\mathrm{jet}}$-thresholds~\cite{ATLAS-CONF-2015-065}.
More precisely, for
inclusive $t\bar t+N$\,jet cross sections with jet-$p_{\mathrm{T}}$ thresholds of 25, 40, 60 and \mbox{80\,GeV},
MINLO predictions lie between
5\% and 19\% above NLO ones. The largest differences are observed at large jet multiplicity and for
large $p_{\mathrm{T}}$-thresholds, in which case M\protect\scalebox{0.8}{I}NLO\xspace cross sections feature significantly better
perturbative convergence and smaller scale uncertainties as compared to fixed-order ones.
In Table~\ref{tab:incxsptall} also exclusive cross sections with exactly $N$ jets are presented.
In that case, the difference between MINLO and
NLO predictions varies between -7\% and +11\%. Apart from the zero-jet case, where the M\protect\scalebox{0.8}{I}NLO\xspace
approach is not well motivated, the MINLO/NLO ratio is almost independent of the number of jets
and grows from 0.95 to 1.10 when the $p_{\mathrm{T}}$-threshold increases from 25 to 80\,GeV.
Similarly as in the inclusive case, at $p_{\mathrm{T}}$-thresholds above 40\,GeV
MINLO predictions for exclusive $N$-jet cross sections with $N\ge 2$
feature much better convergence and
smaller scale uncertainties w.r.t.~fixed order.
However, for lower $p_{\mathrm{T}}$-thresholds the opposite is observed, and
in the three-jet case the MINLO scale uncertainty becomes twice as large as the NLO one.
This can be attributed to the fact that Sudakov logarithms related to the vetoing of
NLO radiation are not resummed in the M\protect\scalebox{0.8}{I}NLO\xspace approach.
In spite of this caveat, the general agreement of fixed-order NLO and
M\protect\scalebox{0.8}{I}NLO\xspace results remains remarkably good for all considered observables.
Figure~\ref{fig:njet_ratios} shows ratios of
inclusive $t\bar t+N$\,jet cross sections for successive jet multiplicities.
Due to the cancellation of various sources of experimental and theoretical uncertainties,
such ratios are ideally suited for precision tests of QCD.
Corresponding ratios have been widely studied in
vector-boson plus multi-jet production~\cite{Bern:2014fea,Bern:2014voa},
where a striking scaling behavior was observed at
high jet multiplicity.
In the case of \mbox{$t\bar t$+multijet} ratios involving up to three jets
we find a moderate dependence on the number of jets but
no clear scaling.
This behavior is rather similar to scaling violations in $V+$\,multijet
production at lower multiplicity and, as for $V+$\,multijets,
can be attributed to the suppression of important partonic channels in the zero-jet process at LO.
In fact, quark--gluon channels are not active in $t\bar{t}$ production at LO.
In addition, at LHC energies the gluonic initial state is strongly favored due to the
parton luminosity and the $t$-channel enhancement of the $gg\to t\bar t$ cross section, such that
the situation becomes similar to vector boson production, except for the difference of
quark versus gluon initial states at LO.
When adding additional jets, firstly quark--gluon
initial states and secondly quark--quark initial states (including $t$-channel top-quark diagrams)
are added, which contribute sizably to the cross section at larger invariant mass and/or
transverse momentum.
In order to test scaling hypotheses, it would therefore ultimately
be necessary to compute the $t\bar{t}+4$ jet over $t\bar{t}+3$ jet ratio, and eventually
the $t\bar{t}+5$ jet over $t\bar{t}+4$ jet ratio. This is out of reach of present technology,
therefore we do not investigate the scaling behavior in more detail.
Nevertheless, given the excellent agreement between
M\protect\scalebox{0.8}{I}NLO\xspace and NLO predictions up to three jets, the ratios in \reffi{fig:njet_ratios}
can be regarded as optimal benchmarks for precision tests.
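As a minimal illustration of how such multiplicity ratios are formed (the cross-section values below are hypothetical placeholders, not results of this work):

```python
def njet_ratios(xsecs):
    """Ratios sigma(t tbar + >= N+1 jets) / sigma(t tbar + >= N jets)
    from a list of inclusive cross sections ordered by N = 0, 1, 2, ..."""
    return [xsecs[n + 1] / xsecs[n] for n in range(len(xsecs) - 1)]

# Hypothetical inclusive cross sections in pb (placeholders only):
sigma = [800.0, 480.0, 220.0, 85.0]
ratios = njet_ratios(sigma)
# A constant ratio would indicate staircase scaling; here the ratios drift
# with N, i.e. no clear scaling, as discussed in the text.
```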
\begin{figure*}[h]
\begin{center}\hskip 2mm
\includegraphics[scale=0.3999,trim=0 25 10 0,clip]{plots/main/0j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/main/1j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/main/2j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/main/3j_25_top_pt}\\[-3pt]\hskip 2mm
\includegraphics[scale=0.3999,trim=0 25 10 0,clip]{plots/ratioNLO/0j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioNLO/1j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioNLO/2j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioNLO/3j_25_top_pt}\\[-3pt]\hskip 2mm
\includegraphics[scale=0.3999,trim=0 25 10 0,clip]{plots/ratioMINLO/0j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioMINLO/1j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioMINLO/2j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 25 10 0,clip]{plots/ratioMINLO/3j_25_top_pt}\\[-3pt]\hskip 2mm
\includegraphics[scale=0.3999,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/0j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/1j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/2j_25_top_pt}\hskip 1pt
\includegraphics[scale=0.3999,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/3j_25_top_pt}\\[3mm]
%
\end{center}\vskip -2mm
\caption{\label{fig:top_pt}
Distribution in the top-quark $p_{\mathrm{T}}$ for $pp\to t\bar t+0,1,2,3$\,jets with $p_{\rT,\mathrm{jet}}\ge 25$\,GeV.}
\end{figure*}
\begin{figure*}[h]
\begin{center}
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/main/1j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/2j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/3j_25_ttbar_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioNLO/1j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/2j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/3j_25_ttbar_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioMINLO/1j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/2j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/3j_25_ttbar_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/1j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/2j_25_ttbar_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/3j_25_ttbar_pt}
\end{center}\vskip -2mm
\caption{\label{fig:ttbar_pt}
Distribution in the $p_{\mathrm{T}}$ of the $t\bar t$ system for $pp\to t\bar t+1,2,3$\,jets with $p_{\rT,\mathrm{jet}}\ge 25$\,GeV.}
\end{figure*}
Figure~\ref{fig:top_pt} shows the transverse momentum spectrum of the top quark for varying
jet multiplicities. From low to very high $p_{\mathrm{T}}$, NLO scale uncertainties remain at a similarly
small level as for integrated cross sections. For $N_{\mathrm{jets}}\ge 1$, we observe
significant shape corrections, which tend to decrease at high jet multiplicity in M\protect\scalebox{0.8}{I}NLO\xspace,
while at fixed order they remain important.
We also observe a shape difference between
fixed-order and M\protect\scalebox{0.8}{I}NLO\xspace predictions, which tends to increase with increasing jet multiplicity
but is clearly reduced at NLO.
The overall agreement
between fixed-order NLO and M\protect\scalebox{0.8}{I}NLO\xspace
results is quite good, both in shape and normalization, with differences that lie within the individual scale
uncertainties.
Figure~\ref{fig:ttbar_pt} shows the top-quark pair transverse momentum spectrum in 1-, 2-
and 3-jet samples.
We observe a large increase in the cross section between LO and NLO in the one-jet case,
where the effect of additional radiation not modeled by the LO calculation is largest.
At higher jet multiplicities correction effects tend to decrease.
Fixed-order NLO uncertainties are as small as in~\reffi{fig:top_pt}, while M\protect\scalebox{0.8}{I}NLO\xspace scale uncertainties
tend to be more pronounced in the tails.
However, we find very good overall agreement between fixed-order NLO
and M\protect\scalebox{0.8}{I}NLO\xspace predictions, especially for $N_{\mathrm{jets}}=2$ and 3.
The jet transverse momentum spectrum of the first, second and third
jet, as predicted by $t\bar t+N$\,jet calculations of corresponding jet
multiplicity, is displayed in~\reffi{fig:njet_pt}. In general we observe
approximately constant NLO $K$-factors over the entire range of transverse
momenta analyzed here, but in terms of perturbative convergence and
scale uncertainties at NLO we find that the M\protect\scalebox{0.8}{I}NLO\xspace approach performs better
than fixed order. Comparing fixed-order and M\protect\scalebox{0.8}{I}NLO\xspace results, at LO we find
significant deviations that grow with $N_{\mathrm{jets}}$ and can reach 60\% in the
tails. Such differences are largely reduced by the transition to NLO.
The fairly decent agreement between fixed-order NLO and M\protect\scalebox{0.8}{I}NLO\xspace results
exemplifies nicely how the convergence of the perturbative series leads to a
reduced dependence not only on constant scale variations, but also on the
functional form of the scale.
Figure~\ref{fig:njet_jetpt_ht} shows inclusive
\mbox{$t\bar t+1,2,3$\,jet} predictions for the total light-jet transverse energy,
which is defined as \mbox{$H_\rT^{\mathrm{jets}}=\sum_{j} |p_{\mathrm{T},j}|$}, with the sum running over
all reconstructed jets within acceptance.
This observable is typically badly described by LO calculations, as a sizable
fraction of events, especially at large $H_\rT^{\mathrm{jets}}$,
contains additional jets originating in initial-state
radiation~\cite{Rubin:2010xp}. Correspondingly we observe a very large increase in the cross section
between LO and NLO in the one-jet samples, where the effect of additional radiation not modeled by
the calculation is largest. At higher jet multiplicities, the increase is smaller, but well visible.
In M\protect\scalebox{0.8}{I}NLO\xspace it tends to be more pronounced than at fixed order, and for $N_{\mathrm{jets}}\ge 3$ the M\protect\scalebox{0.8}{I}NLO\xspace
uncertainties are also larger than the NLO ones.
Nevertheless, we find good overall agreement between fixed-order NLO and M\protect\scalebox{0.8}{I}NLO\xspace predictions, independent of the
jet multiplicity.
However, given the strong sensitivity of $H_\rT^{\mathrm{jets}}$ to multi-jet emissions, NLO or M\protect\scalebox{0.8}{I}NLO\xspace calculations
with fixed jet multiplicity might significantly underestimate the effect of additional QCD radiation,
and an approach like multijet merging at NLO~\cite{Hoeche:2014qda} would be more appropriate for this particular observable.
Studying differential distributions in several angular variables we did not
find any sizable shape effect. We thus refrain from showing corresponding
plots.
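The observable $H_\rT^{\mathrm{jets}}$ used above is straightforward to compute from a reconstructed jet list. A minimal sketch, with the jet selection reduced to a bare $p_{\mathrm{T}}$ threshold for illustration (acceptance cuts omitted):

```python
def ht_jets(jet_pts, pt_min=25.0):
    """Total light-jet transverse energy H_T^jets = sum of jet pT over
    reconstructed jets passing the pT threshold (all values in GeV)."""
    return sum(pt for pt in jet_pts if pt >= pt_min)

# Example: jets at 60, 40 and 20 GeV with a 25 GeV threshold contribute
# H_T^jets = 100 GeV (the 20 GeV jet is not counted).
```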
\begin{figure*}[p]
\begin{center}
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/main/25_ljet_1_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/25_ljet_2_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/25_ljet_3_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioNLO/25_ljet_1_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/25_ljet_2_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/25_ljet_3_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioMINLO/25_ljet_1_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/25_ljet_2_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/25_ljet_3_pt}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/25_ljet_1_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/25_ljet_2_pt}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/25_ljet_3_pt}\\[3mm]
%
\end{center}\vskip -2mm
\caption{\label{fig:njet_pt}
Distribution in the $p_{\mathrm{T}}$ of the $n$-th jet for $pp\to t\bar t+n$\,jets with $p_{\rT,\mathrm{jet}}\ge 25$\,GeV and $n=1,2,3$.}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/main/1j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/2j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/main/3j_25_htl}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioNLO/1j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/2j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioNLO/3j_25_htl}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 25 10 0,clip]{plots/ratioMINLO/1j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/2j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 25 10 0,clip]{plots/ratioMINLO/3j_25_htl}\\[-3.675pt]
\includegraphics[scale=0.525,trim=0 0 10 0,clip]{plots/ratioNLOvsMINLO/1j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/2j_25_htl}\hskip 2pt
\includegraphics[scale=0.525,trim=40 0 10 0,clip]{plots/ratioNLOvsMINLO/3j_25_htl}
\end{center}\vskip -2mm
\caption{\label{fig:njet_jetpt_ht}
Distribution in the total transverse energy of light jets for $pp\to t\bar t+1,2,3$\,jets with $p_{\rT,\mathrm{jet}}\ge 25$\,GeV.}
\end{figure*}
\section{Conclusions}
We have computed predictions for top-quark pair production with up to three
additional jets at the next-to-leading order in perturbative QCD using the
automated programs O\protect\scalebox{0.8}{PEN}L\protect\scalebox{0.8}{OOPS}\xspace and S\protect\scalebox{0.8}{HERPA}\xspace. This is the first calculation of
this complexity involving massive QCD partons in the final state.
Given the multi-scale nature of $t\bar t$+multijet production, finding a scale
that guarantees optimal perturbative convergence is not trivial.
Moreover, standard factor-two scale variations might not provide a correct
estimate of theoretical uncertainties related to missing higher-order effects.
These issues have been addressed by comparing predictions obtained at fixed order
using the scale $H_\rT/2$ and, alternatively, with the M\protect\scalebox{0.8}{I}NLO\xspace method.
The hard scale $H_\rT/2$ is known to yield good perturbative
convergence for a large class of processes, while the
M\protect\scalebox{0.8}{I}NLO\xspace approach is more favorable from
the theoretical point of view, as it implements NLL resummation
for soft and collinear logarithms that emerge
in the presence of large ratios of scales.
For a rather wide range of observables at the 13\,TeV LHC, we find
very good agreement between the predictions generated
at fixed order and with the M\protect\scalebox{0.8}{I}NLO\xspace method.
The differences turn out to be well consistent with factor-two scale variations
of the respective predictions, which are typically at the 10\% level.
These observations suggest that the fixed-order NLO and M\protect\scalebox{0.8}{I}NLO\xspace approaches
can---to a large extent---be used interchangeably.
Moreover, and most importantly, they significantly consolidate the picture of theoretical uncertainties
that results from standard scale variations alone.
\acknowledgement{
We are grateful to A.~Denner, S.~Dittmaier and L.~Hofer for providing us with pre-release
versions of the one-loop tensor-integral library C\protect\scalebox{0.8}{OLLIER}\xspace.
This research was supported by the US Department of Energy under contract
DE--AC02--76SF00515, by the Swiss National Science Foundation under
contracts BSCGI0-157722 and PP00P2-153027, by the Research Executive Agency
of the European Union under the Grant Agreement PITN--GA--2012--316704~({\it
HiggsTools}),
by the Ka\-vli Institute for Theoretical Physics through
the National Science Foundation's Grant No. NSF PHY11-25915
and by the German Research Foundation (DFG) under grant No.\ SI 2009/1-1.
We used resources of the National Energy Research Scientific Computing
Center, which is supported by the Office of Science of the U.S. Department
of Energy under Contract No.~DE--AC02--05CH11231.}
\bibliographystyle{amsunsrt_mod}
\section{Introduction}
Precision measurements of the CKM matrix put the Standard Model to a
stringent test and constrain possible physics beyond it. Using the
measured frequency of $B_{q} - \overline{B}_{q}, \; q \in \{d,s\}$
oscillations to determine the CKM matrix elements $|V_{tq}|$
requires a reliable lattice calculation of the non-perturbative $B_q
- \overline{B}_q$ mixing matrix elements $\frac{8}{3}m^2_{B_q}
f^{\,2}_{B_q} B_{B_q}$. A $2+1$ flavor, unquenched calculation of
$f_{B_q}$ and $B_{B_q}$ has been carried out by the RBC-UKQCD
collaboration in the infinite heavy quark mass limit using light
domain-wall fermions on a $(2 \; \rm{fm})^3$ spatial volume
\cite{Wennekers:2007,AokiRBC:2007}; this is currently being extended
to a $(3 \; \rm{fm})^3$ spatial volume and towards physical light
quark masses \cite{Aoki:2007}. In the following, we discuss the
perturbative lattice-continuum matching of the operators relevant
for the RBC-UKQCD calculation, following in part the detailed
discussion in Refs.~\cite{AokiRBC:2007,Loktik:2006kz}. We also point
out the subtle degeneracy of heavy-light meson ground states, and
discuss its implications for the extraction of $f_B$ and $B_B$ from
lattice correlation functions.
\section{Action and Feynman Rules}
\label{sec:action}
\paragraph{}
The heavy $b$ quark is described by an improved lattice version of
the static limit of heavy quark effective theory with smeared,
SU(3)-projected gauge links $\overline{V}_0(\vec{x},t)$ to reduce
noise:
\begin{equation}\label{eq:hqa_imp}
S_{\text{static}} = \sum_{\vec{x}, \, t} \; \overline{h}(\vec{x},t +
a) \left[ h(\vec{x},t + a) -
\overline{V}^{\dagger}_0(\vec{x},t)h(\vec{x},t) \right].
\end{equation}
\noindent The SU(3) projection (discussed in
Ref.~\cite{AokiRBC:2007}) simplifies perturbative calculations by
allowing the smeared gauge links to be expanded in terms of an
effective gauge field $B^{\,a}_0(\vec{x}, t)$; in momentum space
$B^{\,a}_0(q) = h_\mu(q) A^{\,a}_{\mu}(q)$, where $A^{\,a}_{\mu}(q)$
is the physical gauge field and $h_\mu(q)$ is a form factor
depending on the smearing scheme. We focus on one of the two schemes
used in the RBC-UKQCD calculation (one-level APE blocking with
parameter $\alpha = 1$), resulting in a heavy quark gluon vertex
\begin{equation}
Y^a_{\mu}(k,k') = -ig_0T^a \delta_{\mu0} e^{-i(k_0+k_0')/2}
\rightarrow \overline{Y}^{\,a}_{\mu}(k,k') = -ig_0T^a h_{\mu}(q)
e^{-i(k_0+k_0')/2},
\end{equation}
\noindent where $g_0$ is the bare lattice coupling, $q$ is the gluon
momentum, and $h_{\mu}(q)$ is given by
\begin{equation}
\label{eq:formfactor} h_{\mu}(q) = (h_0(q), \, h_j(q)) = \left(1 -
\frac{2}{3}\sum_{l = 1}^3 \sin^{2}\left(\frac{q_l}{2}\right), \,
\frac{2}{3}\sin\left(\frac{q_0}{2}\right)
\sin\left(\frac{q_j}{2}\right)\right).
\end{equation}
\noindent The heavy quark two-gluon vertex and the heavy quark
propagator are given in Ref.~\cite{Loktik:2006kz}.
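For concreteness, the smearing form factor in Eq.~(\ref{eq:formfactor}) can be evaluated numerically. A short sketch (momenta in lattice units; the function name is ours):

```python
import math

def h_form_factor(q):
    """Smearing form factor h_mu(q) for one-level APE blocking with
    alpha = 1, as defined in the text; q = (q0, q1, q2, q3) in lattice units."""
    q0, qs = q[0], q[1:]
    # Temporal component: 1 - (2/3) * sum_l sin^2(q_l / 2) over spatial l.
    h0 = 1.0 - (2.0 / 3.0) * sum(math.sin(ql / 2.0) ** 2 for ql in qs)
    # Spatial components: (2/3) * sin(q0/2) * sin(q_j/2).
    hj = [(2.0 / 3.0) * math.sin(q0 / 2.0) * math.sin(qj / 2.0) for qj in qs]
    return [h0] + hj

# At zero gluon momentum the smeared vertex reduces to the unsmeared one:
# h(0) = (1, 0, 0, 0).
```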
\paragraph{}
The light quarks are described by the domain-wall fermion action.
Each light flavor is represented by a $(4+1)$-dimensional
Wilson-style fermion field $\psi_s(\vec{x}, t)$ where $1 \leq s \leq
N$ labels the coordinate in the fifth dimension. The physical quark
field $q(\vec{x}, t)$ is constructed from chiral surface states at
$s = 1$ and $s = N$ via $q(\vec{x}, t) = P_R \psi_1(\vec{x}, t) +
P_{L} \psi_N(\vec{x}, t)$. The domain-wall height $M_5$ is a fixed
parameter of the theory; we set $M_5 = 1.8$ to match the RBC-UKQCD
calculation. A detailed description of domain-wall fermions and
their perturbative treatment for our choice of gauge action is given
in Ref.~\cite{Loktik:2006kz} and references therein, especially
Ref.~\cite{Aoki:2002iq}. In the perturbative calculation the light
quark masses were set to zero and the size $N$ of the fifth
dimension was taken to be large, resulting in an exact chiral
symmetry as $N \rightarrow \infty$. The gluons were described by the
Iwasaki gauge action, whose Feynman rules are given in
Ref.~\cite{Loktik:2006kz}.
\section{Perturbative Lattice-Continuum Matching at One-Loop}
The full QCD operators relevant for the extraction of $f_B$ and
$B_B$, defined in $\overline{\text{MS}}$(NDR) at the scale $\mu_b =
m_b$ of the $b$ quark mass, are the axial vector current $A_\rho =
\overline b \gamma_\rho \gamma_5 q$ and the parity-even part of the
$\Delta B = 2$ vector-axial four-quark operator:
\begin{equation}
\left[\,\overline b \gamma^{\,\rho}(1-\gamma_5)q\right]
\left[\,\overline b \gamma_\rho(1-\gamma_5)q\right] \rightarrow O_{VV+AA} =
\left(\overline b\gamma^{\,\rho} q\right)\left(\overline b \gamma_\rho q\right) +
\left(\overline b\gamma^{\,\rho}\gamma_5q\right)
\left(\overline b\gamma_\rho\gamma_5q\right).
\end{equation}
\noindent We match these operators at the scale $\mu_b$ to lattice
operators in the static effective theory (described in
Sec.~\ref{sec:action}) at the lattice scale $a^{-1}$ via the
continuum version of the static effective theory renormalized at a
scale $\mu$. Throughout our one-loop calculation we choose to set
$\mu = a^{-1}$; in the RBC-UKQCD calculation, the lattice scale is
given by $a^{-1} = 1.62$ GeV. The full QCD operators are related to
continuum static operators by
\begin{equation}
\label{eq:full_A_matching} A_\rho(\mu_b) =
C_A(\mu_b,\mu) \widetilde A_\rho(\mu) +
\mathcal O(\Lambda_\mathrm{QCD}/\mu_b),
\end{equation}
\begin{equation}
\label{eq:full_vv+aa_matching} O_{VV+AA}(\mu_b) =
Z_1(\mu_b,\mu) \widetilde{O}_{VV+AA}(\mu) +
Z_2(\mu_b,\mu) \widetilde{O}_{SS+PP}(\mu) +
\mathcal O(\Lambda_\mathrm{QCD}/\mu_b).
\end{equation}
\noindent In terms of the static quark and antiquark fields
$h^{(\pm)}(x) = e^{\pm i m_b v\cdot x}(1 \pm \slashed{v}) b(x) / 2$
and for $m_b \rightarrow \infty$,
\begin{equation}
\widetilde A_\rho = \overline h^{(+)} \gamma_\rho\gamma_5 q,
\end{equation}
\begin{equation}
\label{eq:statOVVAA} \widetilde O_{VV+AA} =
2\left(\overline h^{(+)}\gamma^{\,\rho} q\right)
\left(\overline h^{(-)}\gamma_\rho q\right) +
2\left(\overline h^{(+)}\gamma^{\,\rho}\gamma_5q\right)
\left(\overline h^{(-)}\gamma_\rho\gamma_5q\right),
\end{equation}
\begin{equation}
\label{eq:statOSSPP} \widetilde O_{SS+PP} =
2\left(\overline h^{(+)}q \right)\left(\overline h^{(-)}q\right) +
2\left(\overline h^{(+)}\gamma_5q\right)\left(\overline
h^{(-)}\gamma_5q\right).
\end{equation}
\noindent The static effective action discussed in
Sec.~\ref{sec:action} describes $h^{(+)}$ with $v = (1, \vec{0})$,
corresponding to a stationary meson.
The constants $C_A(\mu_b,\mu)$ and $Z_{1,2}(\mu_b,\mu)$ are known at
one-loop; they are summarized in Ref.~\cite{AokiRBC:2007}. Using the
latest PDG values for $\alpha^{\overline{\text{MS}}}_s(m_Z)$ and
$m_b$, and running the coupling down at four-loops with the physical
number of flavors to determine
$\alpha^{\overline{\text{MS}}}_s(\mu_b)$ and
$\alpha^{\overline{\text{MS}}}_s(\mu)$ we obtain $C_A = 1.057, \;
Z_1 = 0.934 , \; Z_2 = -0.151$.
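The matching constants above rely on four-loop running of the coupling with flavor thresholds. As a rough sketch of the procedure only, a one-loop evolution (illustrative numbers, not those used in the text) looks like:

```python
import math

def alpha_s_one_loop(mu, alpha_ref, mu_ref, nf):
    """One-loop running of the strong coupling with nf active flavors.
    (Illustration only: the matching in the text uses four-loop running.)"""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

# Running down from alpha_s(mZ) ~ 0.118 to mu_b ~ 4.2 GeV with nf = 5;
# the coupling grows towards the infrared.
a_mb = alpha_s_one_loop(4.2, 0.118, 91.19, 5)
```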
\paragraph{}
We now describe the matching $\widetilde A_\rho(\mu) =
\widetilde{C}_{A}(\mu,a^{-1}) a^{-3}A^{\text{lat}}_\rho$ of the
heavy-light axial currents $\widetilde A_\rho(\mu)$ and
$A^{\text{lat}}_\rho$ (which is dimensionless) in the continuum and
lattice versions of the static effective theory. Results for the
four-fermion operators are summarized at the end of this section. We
compare the correlation function $\langle (\overline{h}(x)\Gamma
q(x))h(y)\overline{q}(z)\rangle$ in both theories; in this
discussion only one heavy quark field $h^{(+)} \equiv h$ enters. For
the axial current $\Gamma = \gamma_\rho\gamma_5$, but the light
quark chiral symmetry and the heavy quark spin symmetry $h
\rightarrow e^{-i\phi_j \epsilon_{jkl} \sigma_{kl}}h$ of both the
continuum and the lattice theory render the matching
$\Gamma$-independent. At one-loop and for small external quark
momenta $p \simeq 0$ the continuum and lattice correlation functions
are
\begin{equation}
\langle(\overline{h}\Gamma q)h\overline{q}\rangle = \frac{Z_h}{i
p_0}\Gamma(1 + \delta V) \frac{Z_2}{i \slashed{p}}, \quad
\label{eq:compgreenfunc} \langle(\overline{h}\Gamma
q)h\overline{q}\rangle_{\text{lat}} = \frac{Z^{\text{lat}}_h}{i
p_0}\Gamma(1 + \delta
V^{\text{lat}})\frac{(1-w^2_0)Z_w\,Z^{\text{lat}}_2}{i \slashed{p}},
\end{equation}
\noindent where the Feynman diagrams contributing at one-loop are
shown in Fig.~\ref{fig:heavylight}. All $Z$-factors have values $1 +
{\mathcal O}(\alpha_s)$, and the vertex corrections $\delta V,
\delta V^{\text{lat}}$ are ${\mathcal O}(\alpha_s)$ and
$\Gamma$-independent as noted above.
\begin{wrapfigure}{r}{6.0cm}
\begin{center}
\includegraphics{heavylight_nlo.jpg}
\end{center}
\caption{\label{fig:heavylight} One-Loop Corrections to the
Heavy-Light Axial Current}
\end{wrapfigure}
\paragraph{}
The continuum quantities are known \cite{Loktik:2006kz}; we focus on
the lattice correlation function: $w_0 = 1-M_5$ is a domain-wall
fermion specific constant, and an overlap factor $1-w^2_0$
connecting the five-dimensional and physical quark fields is present
even at tree level. The light quark wavefunction renormalization
$Z_w Z^{\text{lat}}_2$ due to Fig.~\ref{fig:heavylight} (a) and (b)
was calculated in Ref.~\cite{Aoki:2002iq}. $Z^{\text{lat}}_2$ can be
viewed as the four-dimensional wavefunction renormalization, while
$Z_w$ renormalizes the overlap factor $1-w^2_0$. Due to tadpoles,
the one-loop correction to $Z_w$ is enormous. As described in
Ref.~\cite{Aoki:2002iq}, this is remedied by reorganizing the
perturbation series according to the mean-field approach, resulting
in the prescriptions $M_5 \rightarrow \widetilde{M}_5 = M_5 -
4(1-u)$, $w_0 \rightarrow w_0^{\text{MF}} = 1 - \widetilde{M}_5$ and
$q^{\text{lat}} \rightarrow q^{\text{lat, MF}} = u^{-1/2}
q^{\text{lat}}$ to be made throughout the calculation; here $u =
P^{1/4}$ where $P$ is the measured average plaquette (for the
RBC-UKQCD calculation $u = 0.8757$) and the superscript \lq MF\rq~
identifies mean-field improved quantities. We calculate the matching
factor $\widetilde{C}_{A}(\mu,a^{-1})$ using
\begin{wrapfigure}{l}{5.5cm}
\begin{center}
\includegraphics{vertex_nlo.jpg}
\end{center}
\caption{\label{fig:heavylightvertex} One-Loop Vertex Correction to
the Heavy-Light Axial Current}
\end{wrapfigure}
\noindent both the usual continuum $\overline{\text{MS}}$ coupling
and a mean-field improved version, enabling an estimate of
${\mathcal O}(\alpha_s^2)$ corrections.
$\alpha_s^{\overline{\text{MS}}}(\mu)$ was obtained by running down
to the $c$ quark mass with the physical number of flavors and back
up to $\mu$ using only three dynamical flavors to match the
RBC-UKQCD $2+1$ flavor calculation:
$\alpha_s^{\overline{\text{MS}}}(\mu) = 0.326$ and
$\alpha_s^{\text{MF}}(\mu) = 0.177$. The calculation of the vertex
correction $\delta V^{\text{lat}}$ in Fig.~\ref{fig:heavylight} (c)
and the heavy quark wavefunction renormalization $Z_h^{\text{lat}}$
in Fig.~\ref{fig:heavylight} (d) and (e) is straightforward
\cite{AokiRBC:2007,Loktik:2006kz}. Infrared divergences only occur
in QED-like diagrams and are regulated by a gluon mass $\lambda$
which cancels from the matching factor. Furthermore, only the
unsmeared $\delta_{\mu0}$ part of $h_\mu(q)$ in
Eq.~(\ref{eq:formfactor}) gives rise to infrared divergences; the
sine functions in the smeared part of $h_\mu(q)$ cancel all infrared
divergent loop propagators. A generic feature of domain-wall fermion
perturbation theory is the appearance of correlation functions
$\langle q(-p) \overline{\psi}_s(p)\rangle$, $\langle
\psi_s(-p)\overline{q}(p)\rangle$ connecting external
four-dimensional quarks to five-dimensional quarks propagating in
loops, as shown in Fig.~\ref{fig:heavylightvertex}. A subtlety
pointed out in Ref.~\cite{Boucaud:1992nf} is that the correct
renormalization prescription for $Z^{\text{lat,\,MF}}_h$ includes
the linearly divergent heavy quark mass renormalization:
\begin{equation}
Z^{\text{lat,\,MF}}_h = 1 -i\frac{\partial \Sigma(p_0)}{\partial
p_0}\Big|_{p_0 = 0} + \Sigma(p_0 = 0),
\end{equation}
\noindent where the heavy quark self energy $\Sigma(p_0)$ itself is
not affected by mean-field improvement. Comparing the correlation
functions in Eq.~(\ref{eq:compgreenfunc}) after mean-field
improvement gives a matching factor
\begin{equation}
\widetilde{C}_{A}(\mu,a^{-1}) = \frac{\sqrt u}{\sqrt{(1-(w^{\rm
MF}_0)^2)Z^{\rm MF}_w}}\,
Z^\mathrm{MF}_A(\mu,a^{-1}), \quad Z_A^{\text{MF}}(\mu,a^{-1}) = 1 +
\frac{\alpha_s}{3\pi}(-1.584).
\end{equation}
\noindent The overall factor $Z_{\Phi}(\mu_b,a^{-1}) = C_{A}(\mu_b,
\mu)\widetilde{C}_{A}(\mu,a^{-1})$ relating the axial currents in
full QCD and the lattice static effective theory, computed using
both $\alpha_s^{\overline{\text{MS}}}(\mu)$ and
$\alpha_s^{\text{MF}}(\mu)$, is
$Z_{\Phi}^{\overline{\text{MS}}}(\mu_b,a^{-1}) = 0.902, \;
Z_{\Phi}^{\text{MF}}(\mu_b,a^{-1}) = 0.961$. While the one-loop
result is small and reliable, the large difference between
$\alpha_s^{\overline{\text{MS}}}(\mu)$ and
$\alpha_s^{\text{MF}}(\mu)$ induces a $\sim 7\%$ systematic error
ultimately warranting nonperturbative renormalization.
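The sensitivity of the matching to the choice of coupling can be checked directly from the one-loop expression for $Z_A^{\text{MF}}$. A minimal numerical sketch using the two couplings quoted above:

```python
import math

def z_axial(alpha_s):
    """One-loop matching factor Z_A^MF = 1 + alpha_s/(3 pi) * (-1.584)."""
    return 1.0 + alpha_s / (3.0 * math.pi) * (-1.584)

z_msbar = z_axial(0.326)   # MS-bar coupling at mu = 1/a
z_mf    = z_axial(0.177)   # mean-field improved coupling
# The spread between the two illustrates the coupling-choice sensitivity
# that feeds into the quoted ~7% systematic error on Z_Phi.
```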
\paragraph{}
For completeness, we quote the lattice-continuum matching constants
(calculated in Refs.~\cite{AokiRBC:2007,Loktik:2006kz}) for the
four-fermion operators in Eqs.~(\ref{eq:statOVVAA}) and
(\ref{eq:statOSSPP}). For $i \in \{VV + AA, SS + PP\}$ and at
one-loop
\begin{equation}
\label{eq:fourfermionlattcont} \widetilde O_{i}(\mu) = \frac
u{(1-(w^{\rm MF}_0)^2)Z^{\rm MF}_w}\,
Z^\mathrm{MF}_{i}(\mu,a^{-1}) a^{-6}O^{\rm lat}_{i}, \quad Z_{VV + AA}^{\text{MF}} = 1 +
\frac{\alpha_s}{4\pi} (-4.462), \quad Z_{SS + PP}^{\text{MF}}
= 1,
\end{equation}
\noindent where the $O_i^{\text{lat}}$ are dimensionless. Since the
coefficient $Z_2$ of $\widetilde O_{SS+PP}$ in
Eq.~(\ref{eq:full_vv+aa_matching}) is ${\mathcal O}(\alpha_s)$, only
the domain-wall overlap factors contribute to the lattice-continuum
matching for this operator. While formally inconsistent, we use the
one-loop mean-field improved values of the overlap factors
throughout to ensure tadpole-safety. Combining
Eqs.~(\ref{eq:full_vv+aa_matching}) and
(\ref{eq:fourfermionlattcont}) we get:
\begin{equation}
\label{eq:fullfourfermionmatching} O_{VV+AA} =
Z_{VA}(\mu_b,a^{-1}) a^{-6}O^{\text{lat}}_{VV+AA} +
Z_{SP}(\mu_b,a^{-1}) a^{-6}O^{\text{lat}}_{SS+PP},
\end{equation}
\begin{equation}
Z_{VA}^{\overline{\text{MS}}} = 0.902, \quad Z_{VA}^{\text{MF}} =
0.769, \quad Z_{SP}^{\overline{\text{MS}}} = -0.123, \quad
Z_{SP}^{\text{MF}} = -0.133.
\end{equation}
\section{Ground State Degeneracies of Static-Light Mesons and $f_B$, $B_B$ on the Lattice}
\paragraph{}
Let $H$ be the Hamiltonian corresponding to the full lattice action
in Sec.~\ref{sec:action}. For any $t$, the heavy quark action in
Eq.~(\ref{eq:hqa_imp}) is invariant under $h(\vec{x}) \rightarrow
e^{i\theta(\vec{x})}h(\vec{x})$ for a set of $V/a^3$ parameters
$\theta(\vec{x})$, where $V = L^3$ is the spatial lattice volume. If
$\Theta(\vec{x})$ is the generator corresponding to
$\theta(\vec{x})$ then
\begin{equation}
\label{eq:ThetaHComm} \left[\Theta(\vec{x}), h(\vec{y})\right] =
h(\vec{x}) \delta_{\vec{x}\vec{y}}, \quad \left[\Theta(\vec{x}),
\overline{h}(\vec{y})\right] = -\overline{h}(\vec{x})
\delta_{\vec{x}\vec{y}}, \quad
\left[\Theta(\vec{x}),\Theta(\vec{y})\right] = 0,\quad
\left[\Theta(\vec{x}), H\right] = 0.
\end{equation}
\noindent Simultaneously diagonalize $H$ and all $\Theta(\vec{x})$.
Since Eq.~(\ref{eq:ThetaHComm}) implies that $h(\vec{x})$ and
$\overline{h}(\vec{x})$ raise and lower the eigenvalues of
$\Theta(\vec{x})$ by $1$, and the charge conjugation invariance of
QCD implies $\Theta(\vec{x})|0\rangle = 0$, the spectrum of
$\Theta(\vec{x})$ contains $\mathbb{Z}$. Define the unit-norm state
$|B(\vec{x})\rangle$ to be the lowest energy state with the quantum
numbers of a $B$ meson which also satisfies
$\Theta(\vec{y})|B(\vec{x})\rangle =
\delta_{\vec{x}\vec{y}}|B(\vec{x})\rangle$. Thus $\langle B(\vec{x})
| B(\vec{y}) \rangle = \delta_{\vec{x}\vec{y}}$, and we can
interpret these states as having the heavy quark localized at a
fixed lattice site with the light quark smeared out around it. Since
$T(\hat{i}\,) \Theta(\vec{x}) T(\hat{i}\,)^{-1} = \Theta(\vec{x} +
\hat{i}\,)$, where $T(\hat{i}\,)$ is a lattice translation by $a$ in
the spatial direction $\hat{i}$, all $B$ meson ground states
$|B(\vec{x})\rangle$ are degenerate. We also define total spatial
momentum eigenstates $|\widetilde{B}(\vec{k}_l)\rangle$, where $l_i
\in \mathbb{Z} \;\; (i = 1,2,3)$:
\begin{equation}
|\widetilde{B}(\vec{k}_l)\rangle = \sqrt{2a^3} \sum_{\vec{x}}
e^{-i\,\vec{k}_l\cdot\vec{x}}|B(\vec{x})\rangle, \;\; \vec{k}_l =
\frac{2\pi}{L}(l_1,l_2,l_3), \;\; -\frac{L}{2a} < l_i \leq
\frac{L}{2a}, \;\; \langle \widetilde{B}(\vec{k}_{l'}) |
\widetilde{B}(\vec{k}_{l})\rangle = 2V\delta_{l'l}.
\end{equation}
\noindent As $a\rightarrow 0, V \rightarrow \infty$, these states
reduce to continuum momentum eigenstates
$|\widetilde{B}(\vec{p})\rangle^{\text{c}}$ with conventional static
effective theory normalization
$^{\text{c}}\langle\widetilde{B}(\vec{p}\,')|\widetilde{B}(\vec{p})\rangle^{\text{c}}
= 2 (2\pi)^3 \delta^{(3)}(\vec{p}\,' - \vec{p})$. In the $m_b
\rightarrow \infty$ limit, these states only differ from the
corresponding full QCD states by a factor of $\sqrt{m_B}$. Thus:
\begin{align}
f_B\sqrt{m_B} \equiv \langle 0 |
A_{0}(\vec{0},0)|\widetilde{B}(\vec{p} = \vec{0})\rangle^{\text{c}}
&= Z_{\Phi}^{\text{MF}}a^{-3}\langle 0 |
A_{0}^{\text{lat}}(\vec{0},0)\left(\sqrt{2a^3} \sum_{\vec{x}}
|B(\vec{x})\rangle \right) \nonumber \\ &=
\sqrt{2}Z_{\Phi}^{\text{MF}}a^{-3/2}\langle
0|A_0^{\text{lat}}(\vec{0},0)|B(\vec{0})\rangle \equiv
\sqrt{2}Z_{\Phi}^{\text{MF}}a^{-3/2}\Phi_B^{\text{lat}}.
\end{align}
\noindent In complete analogy to the above, we can construct
$\overline{B}$ meson ground states
$|\overline{B}(\vec{x})\rangle$. Using these and
Eq.~(\ref{eq:fullfourfermionmatching}), the calculation of the $B -
\overline{B}$ mixing matrix element $\frac{8}{3}\,m^2_B \,f_B^{\,2}
\,B_B = \langle \, \overline{B}|O_{VV+AA}|B\rangle$ is reduced to
the calculation of the lattice quantities $\langle
\overline{B}(\vec{0})
|O^{\text{lat}}_{i}(\vec{0},0)|B(\vec{0})\rangle, \; i \in \{VV +
AA, SS + PP\}$.
\paragraph{}
The degeneracy of the states $|B(\vec{x})\rangle$ complicates the
extraction of $\Phi_B^{\text{lat}}$ and $\langle\,
\overline{B}(\vec{0})
|O^{\text{lat}}_{i}(\vec{0})|B(\vec{0})\rangle$, since even a large
time separation of source and sink may not project onto a unique $B$
meson ground state: different combinations of the
$|B(\vec{x})\rangle$ may enter the correlation functions used for
calculating the matrix elements and those used for normalization. To
see this, consider the extraction of $\Phi_B^{\text{lat}}$; we now
work exclusively in the lattice theory. Define local and smeared $B$
meson interpolation operators $A^{L}_{0}(\vec x, t) =
\overline{h}(\vec x, t)\gamma_{0}\gamma_{5}q(\vec x, t) $,
$A^{S}_{0}(t) = \sum_{\vec y \in \Delta V} \sum_{\vec z \in \Delta
V}
\overline{h}(\vec y,t)\gamma_0 \gamma_5 q(\vec z,t)$, where
$\Delta V$ is a fixed subvolume of $V$ and the smeared operators are
Coulomb gauge fixed. From experience, local-local correlation
functions in the static effective theory are prohibitively noisy;
instead we calculate the local-smeared and smeared-smeared correlation
functions. Inserting a complete set of states $\sum_{\vec{w}}
|B(\vec{w})\rangle\langle B(\vec{w})| + (\text{higher energy
states})$ with the correct quantum numbers, we have as $t
\rightarrow \infty$:
\begin{equation}
\label{eq:LScorrelator} \mathcal C^{LS}(t) \equiv \sum_{\vec x \in
V}
\langle0|A^L_0(\vec x,t)A^S_0(0)^\dagger|0\rangle = \Phi_B^{\rm{lat}} e^{-m_B^{*}\,t}
\left(\sum_{\vec{w} \in V} \langle B(\vec{w})|\sum_{\vec{y} \in
\Delta V}\sum_{\vec{z} \in \Delta V}\overline{q}(\vec{y},0) \gamma_0
\gamma_5 h(\vec{z},0)|0\rangle\right),
\end{equation}
\begin{equation}
\mathcal C^{SS}(t) \equiv
\langle0|A^S_0(t)A^S_0(0)^{\dagger}|0\rangle = e^{-m_B^{*}\,t} \left( \sum_{\vec{w} \in
V}\Big|\langle B(\vec{w})|\sum_{\vec{y} \in \Delta V}\sum_{\vec{z}
\in \Delta V}\overline{q}(\vec{y},0) \gamma_0 \gamma_5
h(\vec{z},0)|0\rangle\Big|^2\right).
\end{equation}
\noindent where $m_B^{*}$ is the unphysical mass of the lattice $B$
meson. Since ${\mathcal C}^{SS}(t)$ contains a sum over squares, the
use of a naive ratio $\sim {\mathcal C}^{LS}(t)/\sqrt{{\mathcal
C}^{SS}(t)}$ requires a translationally invariant wall source
$\Delta V = V$ to project onto the unique state of zero-momentum. In
this case the sums over $\vec{w}$ only give a factor of $V/a^3$ and
$\Phi_B^{\text{lat}} = {\mathcal C}^{LS}(t)/\sqrt{{\mathcal
C}^{SS}(t)e^{-m_B^{*}\,t}\,V/a^3}$. To remedy the poor overlap of
the wall source with the $B$ meson ground state, especially on
large lattices, consider a fixed box source and a series of box
sinks summed over an entire timeslice to project onto zero momentum;
this approach also allows more general types of smearing, such as
the use of an atomic wavefunction. Let
$\widetilde{A}^{S}_{0}(\vec{w}, t) = \sum_{\vec y \in \Delta
V_{\vec{w}}} \sum_{\vec z \in \Delta V_{\vec{w}}}
\overline{h}(\vec y,t)\gamma_0 \gamma_5 q(\vec z,t)$ where
$\Delta V_{\vec{w}}$ is a box of fixed size located at
$\vec{w}$ and $\Delta V_{\vec{0}} = \Delta V$, $\widetilde{A}^{S}_{0}(\vec{0}, t) = A^S_0(t)$. Define a corresponding smeared-smeared correlation
function and insert a complete set of momentum eigenstates $\frac{1}{2V} \sum_{\vec{k}_l}
|\widetilde{B}(\vec{k}_l)\rangle \langle \widetilde{B}(\vec{k}_l)| +
(\text{higher energy states})$; then as $t \rightarrow \infty$,
\begin{equation}
\mathcal C^{\widetilde{S}\widetilde{S}}(t) \equiv
\sum_{\vec{w}}\langle0|\widetilde{A}^S_0(\vec{w},t)A^S_0(0)^{\dagger}|0\rangle = \frac{e^{-m_B^{*}\, t}}{2V}\sum_{\vec{w}}\langle 0 |
\widetilde{A}^S_0(\vec{w},t) | \widetilde{B}(\vec{0}) \rangle \langle
\widetilde{B}(\vec{0}) | A^S_0(0)^{\dagger}|0\rangle =
\frac{e^{-m_B^{*} \, t}}{2a^3}\Big | \langle
\widetilde{B}(\vec{0}) | A^S_0(0)^{\dagger}|0\rangle \Big |^2.
\end{equation}
\noindent Since $| \widetilde{B}( \vec{0} ) \rangle = (2 a^3)^{1/2}
\sum_{\vec{w}} | B ( \vec{ w } ) \rangle$, we can rewrite the right
side of Eq.~(\ref{eq:LScorrelator}) and obtain another ratio for
$\Phi_B^{\rm{lat}}$ which reaches a plateau more quickly due to the
improved ground state overlap:
\begin{equation}
\label{eq:boxphibcalc} \mathcal C^{\,LS}(t)e^{m_B^{*} \, t/2}\Big
/{\sqrt{\mathcal C^{\widetilde{S}\widetilde{S}}(t)}} =
\Phi_B^{\rm{lat}}\langle
\widetilde{B}(\vec{0}) | \,A^S_0(0)^{\dagger}|0\rangle\Big / \sqrt{\Big | \langle
\widetilde{B}(\vec{0}) | A^S_0(0)^{\dagger}|0\rangle \Big |^2} =
\Phi_B^{\rm{lat}}.
\end{equation}
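As a numerical sanity check of this ratio, the sketch below builds synthetic single-ground-state correlators (the amplitude, overlap, and mass values are invented for illustration, and overall constant factors are absorbed into the overlap) and confirms that the ratio is flat in $t$ and equal to $\Phi_B^{\rm lat}$:

```python
import numpy as np

# Invented parameters for a single ground state (illustration only):
phi = 0.7        # plays the role of Phi_B^lat
overlap = 3.2    # plays the role of <Btilde(0)|A^S(0)^dagger|0>, constants absorbed
m = 1.1          # plays the role of m_B^*

t = np.arange(1, 12)
C_LS = phi * overlap * np.exp(-m * t)        # local-smeared, ground state only
C_SS_tilde = overlap**2 * np.exp(-m * t)     # smeared-smeared with box sinks

ratio = C_LS * np.exp(m * t / 2) / np.sqrt(C_SS_tilde)
# 'ratio' is t-independent and equals phi, i.e. the plateau is immediate
# once excited-state contamination (absent in this toy model) has died out.
```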
\noindent The calculation of $\langle \overline{B}(\vec{0})
|O^{\text{lat}}_{i}(\vec{0})|B(\vec{0})\rangle$, $i \in \{VV + AA,
SS + PP\}$ is considerably simpler. Define
\begin{equation}
\mathcal C_{O_i}(T,t) \equiv \sum_{\vec x \in V}\langle 0| \,
\overline{A}^{\,S}_{0}(T)O^{\, \rm lat}_{i}(\vec x,t) \,
A^{S}_{0}(0)^{\dagger}|0\rangle,
\end{equation}
\noindent where $\overline{A}^{\,S}_{0}(T) = \sum_{\vec y \in \Delta
V} \sum_{\vec z \in \Delta V}
\overline{q}(\vec y,T)\gamma_0 \gamma_5 h(\vec z,T)$. Proceeding as above, we have as $\,t, \; T - t \rightarrow \infty$:
\begin{equation}
\label{eq:boxmixcalc} \langle \, \overline{B}(\vec{0}) |
O^{\text{lat}}_i(\vec{0},0)|B(\vec{0})\rangle = \mathcal
C_{O_i}(T,t)\Big / \mathcal C^{SS}(T) = \mathcal C_{O_i}(T,t)
e^{m_B^{*}T/2} \Big / \sqrt{\mathcal C^{SS}(T - t)\mathcal
C^{SS}(t)}.
\end{equation}
\noindent Here no zero momentum projection is necessary; the use of
$\mathcal C^{SS}$ for smaller time separations simply reduces noise.
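A sketch analogous to the $\Phi_B^{\rm lat}$ check: with synthetic single-state two- and three-point functions (all parameters invented, constant factors absorbed into the overlap), both ratios in the equation above plateau at the same matrix element:

```python
import numpy as np

elem = 0.42     # plays the role of <Bbar|O_i^lat|B>  (invented)
overlap = 3.2   # source/sink overlap, constants absorbed (invented)
m = 1.1         # plays the role of m_B^*

T = 12
t = np.arange(1, T)
# Three-point function saturated by the ground state on both sides:
C_O = elem * overlap**2 * np.exp(-m * (T - t)) * np.exp(-m * t)

def C_SS(s):
    """Smeared-smeared two-point function, ground state only."""
    return overlap**2 * np.exp(-m * s)

est1 = C_O / C_SS(T)                                        # first form
est2 = C_O * np.exp(m * T / 2) / np.sqrt(C_SS(T - t) * C_SS(t))  # second form
```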
Using Eqs.~(\ref{eq:boxphibcalc}) and (\ref{eq:boxmixcalc}) we can
thus calculate $f_B$ and $B_B$ using only box sources and sinks.
These are preferable to wall sources, whose poor ground state
overlap led to late plateaus in the $V = (2 \; \rm{fm})^3$ RBC-UKQCD
calculation and would present an even bigger problem for the ongoing
extension to $V =(3 \; \rm{fm})^3$. It is worth emphasizing that
this simple method relies on the particular properties of the static
effective theory, and further such improvements might be possible.
\acknowledgments
We thank our RBC-UKQCD collaborators C. Albertus, Y. Aoki, P. A.
Boyle, L. Del Debbio, J. M. Flynn, C. T. Sachrajda, A. Soni, and J.
Wennekers. We gratefully acknowledge the support of BNL, Columbia
University, the University of Edinburgh, PPARC, RIKEN, and the U.S.
DOE.
\section{Introduction}
Many practical problems involve signals that can be modeled or approximated by a superposition of a few complex exponential functions. In particular, if we choose the exponential functions to be complex sinusoids, this model covers signals in accelerated medical imaging \cite{LDP:MRM:07}, analog-to-digital conversion \cite{TLD:TIT:10}, inverse scattering in seismic imaging \cite{BPTP:IP:02}, etc. Time domain signals in nuclear magnetic resonance (NMR) spectroscopy, which are widely used to analyze compounds in chemistry and protein structures in biology, are another type of signal that can be modeled or approximated by a superposition of complex exponential functions \cite{QMCC:ACIE:15}. How to recover such superpositions of complex exponential functions is of primary importance in these applications.
In this paper, we will consider how to recover those complex exponentials from linear measurements of their superposition. More specifically, let $\hat{\bm{x}}\in\mathbb{C}^{2N-1}$ be a vector satisfying
\begin{equation}\label{eq:hatx}
\hat{x}_j=\sum_{k=1}^{R}c_k z_k^j, \qquad j=0,1,\ldots,2N-2,
\end{equation}
where $z_k\in\mathbb{C}$, $k=1,\ldots,R$, are some unknown complex numbers. In other words, $\hat{\bm{x}}$ is a superposition of $R$ exponential functions. We assume $R\ll 2N-1$. When $|z_k|=1$, $k=1,\ldots,R$, $\hat{\bm{x}}$ is a superposition of complex sinusoids. When $z_k=e^{-\tau_k}e^{2\pi\imath f_k}$, $k=1,\ldots,R$, $\hat{\bm{x}}$ models the signal in NMR spectroscopy.
Since $R\ll 2N-1$, the degree of freedom to determine $\hat{\bm{x}}$ is much less than the ambient dimension $2N-1$. Therefore, it is possible to recover $\hat{\bm{x}}$ from its under-sampling \cite{CLMW:JACM:11,CP:ProcIEEE:09,CRT:TIT:06,Don:TIT:06}.
In particular, we consider to recover $\hat{\bm{x}}$ from its linear measurement
\begin{equation}\label{eq:linmea}
\bm{b}=\mathcal{A}\hat{\bm{x}},
\end{equation}
where $\mathcal{A}\in\mathbb{C}^{M\times (2N-1)}$ with $M\ll 2N-1$.
We will use a Hankel structure to reconstruct the signal of interest $\hat{\bm{x}}$. The Hankel structure originates from the matrix pencil method \cite{HS:TASSP:90} for harmonic retrieval of complex sinusoids. The conventional matrix pencil method assumes a fully observed $\hat{\bm{x}}$ as well as the model order $R$, both of which are unknown here. Following the ideas of the matrix pencil method in \cite{HS:TASSP:90} and enhanced matrix completion (EMaC) in \cite{CC:TIT:14}, we construct a Hankel matrix based on the signal $\hat{\bm{x}}$. More specifically, define the Hankel matrix $\hat{\bm{H}}\in\mathbb{C}^{N\times N}$ by
\begin{equation}\label{eq:Hankel}
\hat{H}_{jk}=\hat{x}_{j+k},\qquad j,k=0,1,\ldots,N-1.
\end{equation}
Throughout this paper, indices of all vectors and matrices start from $0$, instead of the conventional $1$. It can be shown that $\hat{\bm{H}}$ is a matrix of rank $R$. Instead of reconstructing $\hat{\bm{x}}$ directly, we reconstruct the rank-$R$ Hankel matrix $\hat{\bm{H}}$, subject to the constraint that \eqref{eq:linmea} is satisfied.
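To make the rank claim concrete, a small sketch (sizes and parameters chosen arbitrarily for illustration) builds $\hat{\bm{H}}$ from a superposition of $R=3$ exponentials and checks its rank:

```python
import numpy as np

def hankel_matrix(x):
    """H[j, k] = x[j + k] for a signal x of length 2N - 1."""
    N = (len(x) + 1) // 2
    return np.array([[x[j + k] for k in range(N)] for j in range(N)])

N, R = 16, 3
z = np.array([0.9 * np.exp(2j * np.pi * 0.10),   # damped complex exponential
              1.0 * np.exp(2j * np.pi * 0.33),   # pure complex sinusoid
              0.95])                             # pure decay
c = np.array([1.0, 2.0, -0.5])
x_hat = np.array([np.sum(c * z**j) for j in range(2 * N - 1)])

H_hat = hankel_matrix(x_hat)   # N x N, rank R by the Vandermonde factorization
```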
Low rank matrix recovery has been widely studied \cite{CCS:SIOPT:10,CP:ProcIEEE:09,CR:FoCM:09,RFP:SIREV:10}. It is well known that minimizing the nuclear norm tends to lead to a solution of low-rank matrices. Therefore, a nuclear norm minimization problem subject to the constraint \eqref{eq:linmea} is proposed. More specifically,
for any given $\bm{x}\in{\mathbb{C}^{2N-1}}$, let $\bm{H}(\bm{x})\in\mathbb{C}^{N\times N}$ be the Hankel matrix whose first row and last column are given by $\bm{x}$, i.e., $[\bm{H}(\bm{x})]_{jk}=x_{j+k}$. We propose to solve
\begin{equation}\label{eq:min}
\min_{\bm{x}}\|\bm{H}(\bm{x})\|_*,\qquad\mbox{subject to}\quad \mathcal{A}\bm{x}=\bm{b},
\end{equation}
where $\|\cdot\|_*$ is the nuclear norm function (the sum of all singular values), and $\mathcal{A}$ and $\bm{b}$ are from the linear measurement \eqref{eq:linmea}. When there is noise contained in the observation, i.e.,
$$
\bm{b}=\mathcal{A}\hat{\bm{x}}+\bm{\eta},
$$
we solve
\begin{equation}\label{eq:minnoise}
\min_{\bm{x}}\|\bm{H}(\bm{x})\|_*,\qquad\mbox{subject to}\quad \|\mathcal{A}\bm{x}-\bm{b}\|_2\leq\delta,
\end{equation}
where $\delta=\|\bm{\eta}\|_2$ is the noise level.
An important theoretical question is how many measurements are required to get a robust reconstruction of $\hat{\bm{H}}$ via \eqref{eq:min} or \eqref{eq:minnoise}. For a generic unstructured $N\times N$ matrix of rank $R$, standard theory \cite{CR:FoCM:09,CT:TIT:10,CRPW:FoCM:12,RFP:SIREV:10} indicates that $O(NR\cdot\mathrm{poly}(\log N))$ measurements are needed for a robust reconstruction by nuclear norm minimization. This result, however, is unacceptable here since the number of parameters of $\hat{\bm{H}}$ is only $2N-1$. The main contribution of this paper is then to prove that \eqref{eq:min} and \eqref{eq:minnoise} give a robust recovery of $\hat{\bm{H}}$ (hence $\hat{\bm{x}}$) as soon as the number of projections exceeds $O(R\ln^2N)$, if we choose the linear operator $\mathcal{A}$ to be a suitably scaled random Gaussian projection. This result is further extended to the robust reconstruction of low-rank Hankel or Toeplitz matrices from a few Gaussian random projections.
Our result can be applied to various signals of superposition of complex exponentials, including, but not limited to, signals of complex sinusoids and signals in accelerated NMR spectroscopy. When applied to complex sinusoids, our result here does not need any separation condition on the frequencies, while achieving better or comparable bounds on the number of required measurements.
Furthermore,
our theoretical result provides some guidance on how many samples to choose for the model proposed in \cite{QMCC:ACIE:15} to recover NMR spectroscopy.
\begin{itemize}
\item {\bf Complex sinusoids.} When $|z_k|=1$ for $k=1,\ldots,R$, we must have $z_k=e^{2\pi \imath f_k}$ for some frequency $f_k$. In this case, $\hat{\bm{x}}$ is a superposition of complex sinusoids, for examples, in the analog-to-digital conversion of radio signals \cite{TLD:TIT:10}. The problem on recovering $\hat{\bm{x}}$ from its as few as possible linear measurements \eqref{eq:linmea} may be solved using compressed sensing (CS)\cite{CRT:TIT:06}. One can discretize the domain of frequencies $f_k$ by a uniform grid. When the frequencies $f_k$ indeed fall on the grid, $\hat{\bm{x}}$ is sparse in the discrete Fourier transform domain, and CS theory \cite{CRT:TIT:06,Don:TIT:06} suggests that it is possible to reconstruct $\hat{\bm{x}}$ from its very few samples via $\ell_1$-norm minimization, provided that $R\ll 2N-1$. Nevertheless, the frequencies $f_k$ in our setting usually do not exactly fall on a grid. The basis mismatch between the true parameters and the grid based on discretization degenerates the performance of conventional compressed sensing \cite{CLPCR:SP:11}.
To overcome this, \cite{CF:CPAM:12,TBSR:TIT:13} proposed to recover off-the-grid complex sinusoid frequencies using total variation minimization or atomic norm \cite{CRPW:FoCM:12} minimization. They proved that the total variation minimization or atomic norm minimization can have a robust reconstruction of $\hat{\bm{x}}$ from a nonuniform sampling of very few entries of $\hat{\bm{x}}$, provided that the frequencies $f_k$, $k=1,\ldots,R$, has a good separation. Another method for recovering off-the-grid frequencies is enhanced matrix completion (EMaC) proposed by Chen et al \cite{CC:TIT:14}, where the Hankel structure plays a central role similar to our model. The main result in \cite{CC:TIT:14} is that the complex sinusoids $\bm{\hat{x}}$ can be robustly reconstructed via EMaC from its very few nonuniformly sampled entries. Again, the EMaC requires a separation of the frequencies, described implicitly by an incoherence condition.
When applied to complex sinusoids, compared to the aforementioned existing results, our result here does not need any separation condition on the frequencies, while achieving better or comparable bound of number of measurements.
\item {\bf Accelerated NMR spectroscopy.} When $z_k=e^{-\tau_k}e^{2\pi\imath f_k}$, $k=1,\ldots,R$, $\hat{\bm{x}}$ models the signal in NMR spectroscopy, which arises frequently in studying short-lived molecular systems, monitoring chemical reactions in real time, high-throughput applications, etc. Recently, Qu et al.\ \cite{QMCC:ACIE:15} proposed an algorithm based on low-rank Hankel matrices. In this specific application, $\mathcal{A}$ is a matrix that denotes the under-sampling of NMR signals in the time domain. Numerical results in \cite{QMCC:ACIE:15} show the efficiency of this approach, but a theoretical explanation has been lacking. Such a theory is vital, since it gives guidance on how many samples should be chosen to guarantee robust recovery. Though the result in \cite{CC:TIT:14} applies to this problem, it needs an incoherence condition, which remains uncertain for diverse chemical and biological samples. Our result in this paper does not require any incoherence condition. Moreover, our bound is better than that in \cite{CC:TIT:14}.
\end{itemize}
The rest of this paper is organized as follows. We begin with our model and our main results in Section \ref{secMMR}. Proofs for the main result are given in Section \ref{secProof}. Then, in Section \ref{secMatrices}, we extend the main result to the reconstruction of generic low-rank Hankel or Toeplitz matrices. Finally, the performance of our algorithm is demonstrated by numerical experiments in Section \ref{secNum}.
\section{Model and Main Results}\label{secMMR}
Our approach is based on the observation that the Hankel matrix whose first row and last column consist of entries of $\hat{\bm{x}}$ has rank $R$. Let $\hat{\bm{H}}$ be the Hankel matrix defined by \eqref{eq:Hankel}. Eq. \eqref{eq:hatx} leads to a decomposition
$$
\hat{\bm{H}}=
\left[
\begin{matrix}
1&\ldots&1\cr
z_1&\ldots&z_R\cr
\vdots&&\vdots\cr
z_1^{N-1}&\ldots&z_R^{N-1}\cr
\end{matrix}
\right]
\left[\begin{matrix}
c_1\cr &\ddots\cr&&c_R
\end{matrix}
\right]
\left[
\begin{matrix}
1&z_1&\ldots&z_1^{N-1}\cr
\vdots&\vdots&&\vdots\cr
1&z_R&\ldots&z_R^{N-1}\cr
\end{matrix}
\right].
$$
Therefore, the rank of $\hat{\bm{H}}$ is $R$, provided that the $z_k$ are distinct and all $c_k\neq 0$. Similar to Enhanced Matrix Completion (EMaC) in \cite{CC:TIT:14}, in order to reconstruct $\hat{\bm{x}}$, we first reconstruct the rank-$R$ Hankel matrix $\hat{\bm{H}}$, subject to the constraint that \eqref{eq:linmea} is satisfied. Then, $\hat{\bm{x}}$ is read off directly from the first row and last column of $\hat{\bm{H}}$. More specifically,
for any given $\bm{x}\in{\mathbb{C}^{2N-1}}$, let $\bm{H}(\bm{x})\in\mathbb{C}^{N\times N}$ be the Hankel matrix whose first row and last column are given by $\bm{x}$, i.e., $[\bm{H}(\bm{x})]_{jk}=x_{j+k}$. We propose to solve
\begin{equation}\label{eq:minrank}
\min_{\bm{x}}\mathrm{rank}(\bm{H}(\bm{x})),\qquad\mbox{subject to}\quad \mathcal{A}\bm{x}=\bm{b},
\end{equation}
where $\mathrm{rank}(\bm{H}(\bm{x}))$ denotes the rank of $\bm{H}(\bm{x})$, and $\mathcal{A}$ and $\bm{b}$ are from the linear measurement \eqref{eq:linmea}. When there is noise contained in the observation, i.e., $\bm{b}=\mathcal{A}\hat{\bm{x}}+\bm{\eta}$, we correspondingly solve
\begin{equation}\label{eq:minranknoise}
\min_{\bm{x}}\mathrm{rank}(\bm{H}(\bm{x})),\qquad\mbox{subject to}\quad \|\mathcal{A}\bm{x}-\bm{b}\|_2\leq \delta,
\end{equation}
where $\delta=\|\bm{\eta}\|_2$ is the noise level.
These two problems are both NP-hard and not easy to solve. Following the ideas of matrix completion and low-rank matrix recovery \cite{CR:FoCM:09,CT:TIT:10,CRPW:FoCM:12,RFP:SIREV:10}, it is possible to exactly recover the low-rank Hankel matrix via nuclear norm minimization. Therefore, it is reasonable to use nuclear norm minimization for our problem, which leads to the models in \eqref{eq:min} and \eqref{eq:minnoise}.
Intuitively, our model is reasonable and likely to work, but theoretical guarantees are desirable. The results in \cite{CR:FoCM:09,CT:TIT:10,CRPW:FoCM:12,RFP:SIREV:10} do not consider the Hankel structure. For a generic $N\times N$ rank-$R$ matrix, they require $O(NR\cdot\mathrm{poly}(\log N))$ measurements for robust recovery, which is too many since there are only $2N-1$ degrees of freedom in $\bm{H}(\bm{x})$. The theorems proposed in \cite{TBSR:TIT:13} work only for the special case where signals of interest are superpositions of complex sinusoids, which excludes, e.g., the signals in NMR spectroscopy. While the results from \cite{CC:TIT:14} extend to complex exponentials, the performance guarantees in \cite{TBSR:TIT:13,CC:TIT:14,CF:CPAM:12} require incoherence conditions, implying knowledge of the frequency intervals in spectroscopy, which is not available before actually sampling diverse chemical or biological samples. This limits the applicability of these theories.
It is challenging to provide a theorem guaranteeing exact recovery for model \eqref{eq:min} with an arbitrary linear measurement operator $\mathcal{A}$. In this paper, we provide a theoretical result ensuring exact recovery when $\mathcal{A}$ is a scaled random Gaussian matrix. Our result does not assume any incoherence conditions on the original signal.
\begin{theorem}\label{thm:main}
Let $\mathcal{A}=\mathcal{B}\mathcal{D}\in\mathbb{C}^{M\times (2N-1)}$, where $\mathcal{B}\in\mathbb{C}^{M\times (2N-1)}$ is a random matrix whose real and imaginary parts are i.i.d. Gaussian with mean $0$ and variance $1$, and $\mathcal{D}\in\mathbb{R}^{(2N-1)\times (2N-1)}$ is the diagonal matrix whose $j$-th diagonal entry is $\sqrt{j+1}$ if $j\leq N-1$ and $\sqrt{2N-1-j}$ otherwise. Then, there exists a universal constant $C_1>0$ such that, for an arbitrary $\epsilon>0$, if
$$
M \geq (C_1\sqrt{R}\ln N+\sqrt{2}\epsilon)^2+1,
$$
then, with probability at least $1-2e^{-\frac{M-1}{8}}$, we have
\begin{enumerate}
\item[(a)]
$\tilde{\bm{x}}=\hat{\bm{x}}$, where $\tilde{\bm{x}}$ is the unique solution of \eqref{eq:min} with $\bm{b}=\mathcal{A}\hat{\bm{x}}$;
\item[(b)]
$\|\mathcal{D}(\tilde{\bm{x}}-\hat{\bm{x}})\|_2\leq 2\delta/\epsilon$, where $\tilde{\bm{x}}$ is the unique solution of \eqref{eq:minnoise} with $\|\bm{b}-\mathcal{A}\hat{\bm{x}}\|_2\leq\delta$.
\end{enumerate}
\end{theorem}
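A sketch of the measurement ensemble in Theorem \ref{thm:main} (sizes illustrative): $\mathcal{B}$ has i.i.d. complex Gaussian entries and $\mathcal{D}$ carries the weights $\sqrt{K_j}$, with $K_j=j+1$ for $j\le N-1$ and $K_j=2N-1-j$ otherwise:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 20                                   # illustrative sizes
K = np.array([j + 1 if j <= N - 1 else 2 * N - 1 - j
              for j in range(2 * N - 1)], dtype=float)
D = np.diag(np.sqrt(K))                        # j-th diagonal entry = sqrt(K_j)
B = rng.standard_normal((M, 2 * N - 1)) + 1j * rng.standard_normal((M, 2 * N - 1))
A = B @ D                                      # scaled Gaussian measurement matrix

x_hat = rng.standard_normal(2 * N - 1) + 1j * rng.standard_normal(2 * N - 1)
b = A @ x_hat                                  # noise-free measurements b = A x_hat
```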
The number of measurements required is $O(R\ln^2N)$, which is reasonably small compared with the number of parameters in $\bm{H}(\bm{x})$. Furthermore, there is a parameter $\epsilon$ in Theorem \ref{thm:main}. For the noise-free case (a), the best choice of $\epsilon$ is obviously a number that is very close to $0$. For the noisy case (b), we can balance the error bound and the number of measurements to get an optimal $\epsilon$. On the one hand, according to the result in (b), in order to make the error in the noisy case as small as possible, we would like $\epsilon$ to be as large as possible. On the other hand, we would like to keep the number of measurements $M$ of the order of $R\ln^2N$. Therefore, a seemingly optimal choice of $\epsilon$ is $\epsilon=O(\sqrt{R}\ln N)$. With this choice of $\epsilon$, the number of measurements is $M=O(R\ln^2N)$ and the error satisfies $\|\mathcal{D}(\tilde{\bm{x}}-\hat{\bm{x}})\|_2\leq O\left(\frac{\delta}{\sqrt{M}}\right)$.
\section{Proof of Theorem \ref{thm:main}} \label{secProof}
In this section, we prove the main result Theorem \ref{thm:main}.
\subsection{Orthonormal Basis of the $N\times N$ Hankel Matrices Subspace}
In this subsection, we introduce an orthonormal basis of the subspace of $N\times N$ Hankel matrices and use it to define a projection from $\mathbb{C}^{N\times N}$ to the subspace of all $N\times N$ Hankel matrices.
Let $\bm{E}_j\in\mathbb{C}^{N\times N}$, $j=0,1\ldots,2N-2$, be the Hankel matrix satisfying
\begin{equation}\label{eq:Ej}
[\bm{E}_j]_{k l}=
\begin{cases}
1/\sqrt{K_j},&\mbox{if }k+l=j,\cr
0,&\mbox{otherwise,}
\end{cases}
\qquad
k,l=0,\ldots,N-1,
\end{equation}
where $K_j=j+1$ for $j\leq N-1$ and $K_j=2N-1-j$ for $j\geq N-1$ is the number of non-zeros in $\bm{E}_j$. Then, it is easy to check that $\{\bm{E}_j\}_{j=0}^{2N-2}$ forms an orthonormal basis of the subspace of all $N\times N$ Hankel matrices, under the standard inner product in $\mathbb{C}^{N\times N}$.
Define a linear operator
\begin{equation}\label{def:G}
\mathcal{G}~:~\bm{x}\in\mathbb{C}^{2N-1}\mapsto\mathcal{G}\bm{x}=\sum_{j=0}^{2N-2}x_j\bm{E}_j\in\mathbb{C}^{N\times N}.
\end{equation}
The adjoint $\mathcal{G}^*$ of $\mathcal{G}$ is
$$
\mathcal{G}^*~:~\bm{X}\in\mathbb{C}^{N\times N}\mapsto
\mathcal{G}^*\bm{X}\in\mathbb{C}^{2N-1},\qquad [\mathcal{G}^*\bm{X}]_j=\langle\bm{X},\bm{E}_j\rangle.
$$
Obviously, $\mathcal{G}^*\mathcal{G}$ is the identity operator in $\mathbb{C}^{2N-1}$, and $\mathcal{G}\mathcal{G}^*$ is the orthogonal projector onto the subspace of all Hankel matrices.
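A direct numerical check of these claims (small $N$ for illustration): the $\bm{E}_j$ are orthonormal, and $\mathcal{G}^*\mathcal{G}$ acts as the identity on $\mathbb{C}^{2N-1}$:

```python
import numpy as np

def E_basis(j, N):
    """Orthonormal Hankel basis element E_j of Eq. (eq:Ej)."""
    K = j + 1 if j <= N - 1 else 2 * N - 1 - j
    Ej = np.zeros((N, N)) 
    for k in range(N):
        if 0 <= j - k < N:
            Ej[k, j - k] = 1.0 / np.sqrt(K)   # 1/sqrt(K_j) on antidiagonal k+l=j
    return Ej

N = 5
basis = [E_basis(j, N) for j in range(2 * N - 1)]

# Gram matrix under the standard inner product <X, Y> = trace(X^H Y):
gram = np.array([[np.trace(Ei.conj().T @ Ej) for Ej in basis] for Ei in basis])

# Apply G, then G*, to an arbitrary complex vector x:
x = np.arange(2 * N - 1) + 1j * np.ones(2 * N - 1)
Gx = sum(x[j] * basis[j] for j in range(2 * N - 1))
GstarGx = np.array([np.trace(Ej.conj().T @ Gx) for Ej in basis])
```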
\subsection{Recovery condition based on restricted minimum gain condition}
First of all, let us simplify the minimization problem \eqref{eq:min} by introducing $\mathcal{D}\in\mathbb{R}^{(2N-1)\times (2N-1)}$, the diagonal matrix with $j$-th diagonal entry $\sqrt{K_j}$.
Then, by letting $\bm{y}=\mathcal{D}\bm{x}$, \eqref{eq:min} is rewritten as,
\begin{equation}\label{eq:mincomplex}
\min_{\bm{y}}\|\mathcal{G}\bm{y}\|_*\qquad\mbox{subject to}\quad
\mathcal{B}\bm{y}=\bm{b},
\end{equation}
where $\mathcal{B}=\mathcal{A}\mathcal{D}^{-1}$. Similarly, for the noisy case, \eqref{eq:minnoise} is rearranged to
\begin{equation}\label{eq:mincomplexnoisy}
\min_{\bm{y}}\|\mathcal{G}\bm{y}\|_*\qquad\mbox{subject to}\quad
\|\mathcal{B}\bm{y}-\bm{b}\|_2\leq\delta.
\end{equation}
By our assumption in Theorem \ref{thm:main}, $\mathcal{B}\in\mathbb{C}^{M\times (2N-1)}$ is a random matrix whose real and imaginary parts are both real-valued random matrices with i.i.d. Gaussian entries of mean $0$ and variance $1$. Writing $\hat{\bm{y}}=\mathcal{D}\hat{\bm{x}}$, we will prove that $\tilde{\bm{y}}=\hat{\bm{y}}$ (respectively $\|\tilde{\bm{y}}-\hat{\bm{y}}\|_2\leq 2\delta/\epsilon$) holds with overwhelming probability for problem \eqref{eq:mincomplex} in the noise-free case (respectively \eqref{eq:mincomplexnoisy} in the noisy case).
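The substitution $\bm{y}=\mathcal{D}\bm{x}$ works because $\mathcal{G}(\mathcal{D}\bm{x})=\bm{H}(\bm{x})$: the factor $\sqrt{K_j}$ in $\mathcal{D}$ cancels the normalization $1/\sqrt{K_j}$ of $\bm{E}_j$. A quick numerical confirmation (small $N$, arbitrary signal):

```python
import numpy as np

N = 4
x = np.arange(1, 2 * N) + 1j * np.arange(2 * N - 1)   # arbitrary complex signal
K = np.array([j + 1 if j <= N - 1 else 2 * N - 1 - j
              for j in range(2 * N - 1)], dtype=float)
y = np.sqrt(K) * x                                    # y = D x

H = np.array([[x[j + k] for k in range(N)] for j in range(N)])  # H(x)

Gy = np.zeros((N, N), dtype=complex)                  # G y = sum_j y_j E_j
for j in range(2 * N - 1):
    for k in range(N):
        if 0 <= j - k < N:
            Gy[k, j - k] += y[j] / np.sqrt(K[j])
```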
Let the descent cone of $\|\mathcal{G}\cdot\|_*$ at $\hat{\bm{y}}$ be
\begin{equation}\label{eq:tconecomplex}
\mathfrak{T}(\hat{\bm{y}})=\{\lambda\bm{z}~|~\lambda\geq 0, \|\mathcal{G}(\hat{\bm{y}}+\bm{z})\|_*\leq\|\mathcal{G}\hat{\bm{y}}\|_*\}.
\end{equation}
To characterize the recovery condition, we need the minimum value of $\frac{\|\mathcal{B}\bm{z}\|_2}{\|\bm{z}\|_2}$ over nonzero $\bm{z}\in\mathfrak{T}(\hat{\bm{y}})$. This quantity is commonly called the {\it minimum gain} of the measurement operator $\mathcal{B}$ restricted to $\mathfrak{T}(\hat{\bm{y}})$ \cite{CRPW:FoCM:12}. In particular, if the minimum gain is bounded away from zero, then exact recovery (respectively approximate recovery) holds for problem \eqref{eq:mincomplex} (respectively \eqref{eq:mincomplexnoisy}).
\begin{lemma}\label{lem:tangentcone}
Let $\mathfrak{T}(\hat{\bm{y}})$ be defined by \eqref{eq:tconecomplex}.
Assume
\begin{equation}\label{eq:NullSpace}
\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})}\frac{\|\mathcal{B}\bm{z}\|_2}{\|\bm{z}\|_2}\geq\epsilon.
\end{equation}
\begin{enumerate}
\item[(a)]
Let $\tilde{\bm{y}}$ be the solution of \eqref{eq:mincomplex} with $\bm{b}=\mathcal{B}\hat{\bm{y}}$.
Then $\tilde{\bm{y}}=\hat{\bm{y}}$.
\item[(b)]
Let $\tilde{\bm{y}}$ be the solution of \eqref{eq:mincomplexnoisy} with $\|\bm{b}-\mathcal{B}\hat{\bm{y}}\|_2\leq\delta$.
Then $\|\tilde{\bm{y}}-\hat{\bm{y}}\|_2\leq 2\delta/\epsilon$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since (a) is a special case of (b) with $\delta=0$, we prove (b) only. The optimality of $\tilde{\bm{y}}$ implies $\tilde{\bm{y}}-\hat{\bm{y}}\in\mathfrak{T}(\hat{\bm{y}})$. By \eqref{eq:NullSpace}, we have
$$
\|\tilde{\bm{y}}-\hat{\bm{y}}\|_2\leq \frac{1}{\epsilon}\|\mathcal{B}(\tilde{\bm{y}}-\hat{\bm{y}})\|_2
\leq \frac{1}{\epsilon}(\|\mathcal{B}\tilde{\bm{y}}-\bm{b}\|_2+\|\mathcal{B}\hat{\bm{y}}-\bm{b}\|_2)
\leq 2\delta/\epsilon.
$$
\end{proof}
The minimum gain condition is a powerful concept and has been employed in recent recovery results for $\ell_1$-norm minimization, block-sparse vector recovery, low-rank matrix reconstruction, and other atomic norm problems \cite{CRPW:FoCM:12}.
\subsection{Bound of minimum gain via Gaussian width}
Lemma \ref{lem:tangentcone} requires a lower bound on $\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})}\frac{\|\mathcal{B}\bm{z}\|_2}{\|\bm{z}\|_2}$. Gordon \cite{Gor:GAFA:88} provided such a bound in terms of the Gaussian width of a set \cite{CRPW:FoCM:12}.
\begin{definition}
The Gaussian width of a set $S\subset \mathbb{R}^p$ is defined as:
$$w(S):=\mathsf{E}_{\bm{\xi}}\left[\sup_{\bm{\gamma}\in S}{\bm{\gamma}^T\bm{\xi}}\right],$$
where $\bm{\xi}\in\mathbb{R}^{p}$ is a random vector of independent zero-mean unit-variance Gaussians.
\end{definition}
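For intuition, the Gaussian width of the unit sphere $\mathbb{S}^{p-1}$ is $\mathsf{E}\|\bm{\xi}\|_2$, since the supremum of $\bm{\gamma}^T\bm{\xi}$ over the sphere is attained at $\bm{\gamma}=\bm{\xi}/\|\bm{\xi}\|_2$. A Monte Carlo sketch (sample size chosen arbitrarily):

```python
import numpy as np

def gaussian_width_mc(sup_fn, p, trials=4000, seed=0):
    """Monte Carlo estimate of w(S) = E[ sup_{gamma in S} <gamma, xi> ]."""
    rng = np.random.default_rng(seed)
    return float(np.mean([sup_fn(rng.standard_normal(p)) for _ in range(trials)]))

p = 100
# For the unit sphere, sup_{||gamma||_2 = 1} <gamma, xi> = ||xi||_2:
w_sphere = gaussian_width_mc(lambda xi: np.linalg.norm(xi), p)
# w_sphere approximates lambda_p, with p/sqrt(p+1) <= lambda_p <= sqrt(p).
```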
Let $\lambda_n$ denote the expected length of an $n$-dimensional Gaussian random vector. Then $\lambda_n=\sqrt{2}\,\Gamma(\frac{n+1}{2})/\Gamma(\frac{n}{2})$, and it can be tightly bounded as $\frac{n}{\sqrt{n+1}}\leq\lambda_n\leq\sqrt{n}$ \cite{CRPW:FoCM:12}. The following theorem, given as Corollary 1.2 in \cite{Gor:GAFA:88}, provides a bound on the minimum gain of a random map $\bm{\Pi}:\mathbb{R}^p\mapsto \mathbb{R}^n$.
\begin{theorem}[Corollary 1.2 in \cite{Gor:GAFA:88}]\label{thm:restrictedeigenvalue}
Let $\Omega$ be a closed subset of $\{\bm{x}\in\mathbb{R}^p|\|\bm{x}\|_2=1\}$. Let $\bm{\Pi}\in\mathbb{R}^{n\times p}$ be a random matrix with i.i.d. Gaussian entries with mean $0$ and variance $1$. Then, for any $\epsilon>0$,
$$
\mathsf{P}\left(\min_{\bm{z}\in\Omega}\|\bm{\Pi}\bm{z}\|_2\geq \epsilon\right)
\geq 1-e^{-\frac12\left(\lambda_n-w(\Omega)-\epsilon\right)^2},
$$
provided $\lambda_n-w(\Omega)-\epsilon\geq 0$. Here $\frac{n}{\sqrt{n+1}}\leq\lambda_n\leq\sqrt{n}$, and $w(\Omega)$ is the Gaussian width of $\Omega$.
\end{theorem}
By converting the complex setting in our problem to the real setting and using Theorem \ref{thm:restrictedeigenvalue}, we can obtain the bound \eqref{eq:NullSpace} in terms of the Gaussian width of $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap \mathbb{S}_{\mathbb{R}}^{4N-3}$, where $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})$ is a cone in $\mathbb{R}^{4N-2}$ defined by
\begin{equation}\label{eq:tconereal}
\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})=\left\{\left[\begin{matrix}\bm{\alpha}\cr\bm{\beta}\end{matrix}\right]\Big|~\bm{\alpha}+\imath\bm{\beta}\in\mathfrak{T}(\hat{\bm{y}})\right\}.
\end{equation}
\begin{lemma}\label{lem:Gaussianwidth}
Let the real and imaginary parts of entries of $\mathcal{B}\in\mathbb{C}^{M\times (2N-1)}$ be i.i.d. Gaussian with mean $0$ and variance $1$. Let $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})$ be defined by \eqref{eq:tconereal} and $\mathbb{S}_c^{2N-2}$ be the unit sphere in $\mathbb{C}^{2N-1}$. Then for any $\epsilon>0$,
$$
\mathsf{P}\left(\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}\|\mathcal{B}\bm{z}\|_2\geq\epsilon\right)\geq 1-2e^{-\frac12\left(\lambda_{M}-w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})-\frac{\epsilon}{\sqrt{2}}\right)^2},
$$
where $\mathbb{S}_{\mathbb{R}}^{4N-3}$ is the unit sphere in $\mathbb{R}^{4N-2}$.
\end{lemma}
\begin{proof}
In order to use Theorem \ref{thm:restrictedeigenvalue}, we convert the complex setting in our problem to the real setting in Theorem \ref{thm:restrictedeigenvalue}. We will use Roman letters for vectors and matrices in complex-valued spaces, and Greek letters for real valued ones. Let $\mathcal{B}=\bm{\Phi}+\imath\bm{\Psi}\in\mathbb{C}^{M\times (2N-1)}$, where both $\bm{\Phi}\in\mathbb{R}^{M\times (2N-1)}$ and $\bm{\Psi}\in\mathbb{R}^{M\times (2N-1)}$ are real-valued random matrices whose entries are i.i.d. mean-$0$ variance-$1$ Gaussian. Then, for any $\bm{z}=\bm{\alpha}+\imath\bm{\beta}\in\mathbb{C}^{2N-1}$ with $\bm{\alpha},\bm{\beta}\in\mathbb{R}^{2N-1}$,
\begin{equation*}
\begin{split}
\|\mathcal{B}\bm{z}\|_2&=\|(\bm{\Phi}+\imath\bm{\Psi})(\bm{\alpha}+\imath\bm{\beta})\|_2
=\left\|(\bm{\Phi}\bm{\alpha}-\bm{\Psi}\bm{\beta})+\imath(\bm{\Psi}\bm{\alpha}+\bm{\Phi}\bm{\beta})\right\|_2\cr
&=\left(\left\|\left[\begin{matrix}\bm{\Phi}&-\bm{\Psi}\end{matrix}\right]
\left[\begin{matrix}\bm{\alpha}\cr \bm{\beta}\end{matrix}\right]\right\|_2^2
+\left\|\left[\begin{matrix}\bm{\Psi}&\bm{\Phi}\end{matrix}\right]
\left[\begin{matrix}\bm{\alpha}\cr \bm{\beta}\end{matrix}\right]\right\|_2^2\right)^{1/2}
\end{split}
\end{equation*}
Then
\begin{equation}\label{eq:event1}
\min_{\bm{z}=\bm{\alpha}+\imath\bm{\beta}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}
\left\|\left[\begin{matrix}\bm{\Phi}&-\bm{\Psi}\end{matrix}\right]
\left[\begin{matrix}\bm{\alpha}\cr \bm{\beta}\end{matrix}\right]\right\|_2\geq \epsilon/\sqrt{2},\quad\mbox{and}
\min_{\bm{z}=\bm{\alpha}+\imath\bm{\beta}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}
\left\|\left[\begin{matrix}\bm{\Psi}&\bm{\Phi}\end{matrix}\right]
\left[\begin{matrix}\bm{\alpha}\cr \bm{\beta}\end{matrix}\right]\right\|_2\geq \epsilon/\sqrt{2}
\end{equation}
implies
$$
\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}\|\mathcal{B}\bm{z}\|_2
\geq \epsilon.
$$
Therefore,
$$
\mathsf{P}\left(\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}\|\mathcal{B}\bm{z}\|_2\geq\epsilon\right)
\geq
\mathsf{P}\left(\mbox{\eqref{eq:event1} holds true}\right).
$$
It is easy to see that both $\left[\begin{matrix}\bm{\Phi}&-\bm{\Psi}\end{matrix}\right]$ and $\left[\begin{matrix}\bm{\Psi}&\bm{\Phi}\end{matrix}\right]$ are real-valued random matrices with i.i.d. Gaussian entries of mean $0$ and variance $1$. By Theorem \ref{thm:restrictedeigenvalue},
$$
\mathsf{P}\left(\mbox{\eqref{eq:event1} holds true}\right)
\geq 1-2e^{-\frac12\left(\lambda_{M}-w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})-\frac{\epsilon}{\sqrt{2}}\right)^2},
$$ and therefore we get the desired result.
\end{proof}
\subsection{Estimation of Gaussian width $w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})$}
Let $\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})$ denote the polar cone of $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\subset\mathbb{R}^{4N-2}$, i.e.,
\begin{equation}\label{def:polar}
\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})
=\{\bm{\delta}\in\mathbb{R}^{4N-2}~|~\bm{\gamma}^T\bm{\delta}\leq 0,~\forall\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\}.
\end{equation}
Following the arguments in Proposition 3.6 in \cite{CRPW:FoCM:12}, we obtain
\begin{equation}\label{eq:widthest1}
w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})
=\mathsf{E}\left(\sup_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3}}\bm{\xi}^T\bm{\gamma}\right)
\leq \mathsf{E}\left(\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2\right),
\end{equation}
where $\bm{\xi}\in\mathbb{R}^{4N-2}$ is a random vector of i.i.d. Gaussian entries of mean $0$ and variance $1$.
Hence, instead of estimating the Gaussian width $w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})$ directly, we bound $\mathsf{E}\left(\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2\right)$.
For this purpose, let $\mathcal{F}~:~\mathbb{R}^{4N-2}\mapsto\mathbb{R}$ be defined by
\begin{equation}\label{def:F}
\mathcal{F}\left(\left[\begin{matrix}\bm{\alpha}\cr\bm{\beta}\end{matrix}\right]\right)=\|\mathcal{G}(\bm{\alpha}+\imath\bm{\beta})\|_*.
\end{equation}
The following lemma gives us a characterization of $\mathsf{E}\left(\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2\right)$ in terms of the subdifferential $\partial\mathcal{F}$ of $\mathcal{F}$.
\begin{lemma}\label{lemma:polar}
Let $\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})$ and $\mathcal{F}$ be defined by \eqref{def:polar} and \eqref{def:F} respectively. Let $\hat{\bm{\omega}}_1, \hat{\bm{\omega}}_2\in\mathbb{R}^{2N-1}$ be the real and imaginary parts of $\hat{\bm{y}}$ respectively and denote $\hat{\bm{\omega}}=\left[\begin{matrix}\hat{\bm{\omega}}_1\cr\hat{\bm{\omega}}_2\end{matrix}\right]$. Then
\begin{equation}\label{eq:dualcone}
\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})
=\mathrm{cone}\left(\partial \mathcal{F}\left(\hat{\bm{\omega}}\right)\right)=\left\{\lambda\bm{\delta}~|~\lambda\geq0,~\mathcal{F}\left(\bm{\gamma}+\hat{\bm{\omega}}\right)
\geq \mathcal{F}\left(\hat{\bm{\omega}}\right)
+\bm{\gamma}^T\bm{\delta},~\forall \bm{\gamma}\in\mathbb{R}^{4N-2}
\right\}.
\end{equation}
\end{lemma}
\begin{proof}
It is observed that $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})$ in \eqref{eq:tconereal} is the descent cone of the function $\mathcal{F}$:
$$
\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})=\left\{\delta\bm{\gamma}~|~\delta\geq0,~\mathcal{F}\left(\bm{\gamma}+\hat{\bm{\omega}}\right)
\leq \mathcal{F}\left(\hat{\bm{\omega}}\right)\right\}.
$$
According to Theorem 23.4 in \cite{Roc:BOOK:97}, the polar of the descent cone is the conic hull of the subdifferential, which is exactly \eqref{eq:dualcone}.
\end{proof}
The following lemma gives us an estimation of Gaussian width $w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})$ in terms of $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$.
\begin{lemma} \label{lem:boundGaussianWidth}Let $\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})$ and $\mathcal{G}$ be defined by \eqref{eq:tconereal} and \eqref{def:G} respectively. Then
$$
w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})
\leq 3\sqrt{R}\cdot\mathsf{E}(\|\mathcal{G}\bm{g}\|_2),
$$
where $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$ is the expectation with respect to $\bm{g}\in\mathbb{C}^{2N-1}$. Here $\bm{g}$ is a random vector whose real and imaginary parts are i.i.d. mean-0 and variance-1 Gaussian entries.
\end{lemma}
\begin{proof}
By using \eqref{eq:widthest1} and Lemma \ref{lemma:polar}, we need to find $\partial \mathcal{F}\left(\hat{\bm{\omega}}\right)$ and thus $\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})$. Let $\hat{\bm{\Omega}}_1=\mathcal{G}\hat{\bm{\omega}}_1$ and $\hat{\bm{\Omega}}_2=\mathcal{G}\hat{\bm{\omega}}_2$. Then $\mathcal{G}\hat{\bm{y}}=\hat{\bm{\Omega}}_1+\imath\hat{\bm{\Omega}}_2$. Let a singular value decomposition of the rank-$R$ matrix $\mathcal{G}\hat{\bm{y}}$ be
\begin{equation}\label{eq:SVDcomplexGy}
\mathcal{G}\hat{\bm{y}}=\bm{U}\bm{\Sigma}\bm{V}^*,
\qquad\mbox{with}\quad
\bm{U} = \bm{\Theta}_1+\imath\bm{\Theta}_2,~~\bm{V}=\bm{\Xi}_1+\imath\bm{\Xi}_2,
\end{equation}
where $\bm{\Theta}_1,\bm{\Theta}_2,\bm{\Xi}_1,\bm{\Xi}_2\in\mathbb{R}^{N\times R}$ and $\bm{\Sigma}\in\mathbb{R}^{R\times R}$, and $\bm{U}\in\mathbb{C}^{N\times R}$ and $\bm{V}\in\mathbb{C}^{N\times R}$ satisfy $\bm{U}^*\bm{U}=\bm{V}^*\bm{V}=\bm{I}$. Then, by direct calculation,
\begin{equation}\label{eq:ThetaXi}
\bm{\Theta}\equiv\left[\begin{matrix}
\bm{\Theta}_1 & -\bm{\Theta}_2\cr \bm{\Theta}_2 & \bm{\Theta}_1
\end{matrix}\right]\in\mathbb{R}^{2N\times (2R)}, \qquad
\bm{\Xi}\equiv\left[\begin{matrix}
\bm{\Xi}_1 & -\bm{\Xi}_2\cr \bm{\Xi}_2 & \bm{\Xi}_1
\end{matrix}\right]\in\mathbb{R}^{2N\times (2R)}
\end{equation}
satisfy $\bm{\Theta}^T\bm{\Theta}=\bm{\Xi}^T\bm{\Xi}=\bm{I}$. Moreover, if we define
$\hat{\bm{\Omega}}=\left[\begin{matrix}
\hat{\bm{\Omega}}_1 & -\hat{\bm{\Omega}}_2\cr \hat{\bm{\Omega}}_2 & \hat{\bm{\Omega}}_1
\end{matrix}\right]$, then
\begin{equation}\label{eq:SVDrealOmega}
\hat{\bm{\Omega}}=
\bm{\Theta}\left[\begin{matrix}
\bm{\Sigma} & \cr & \bm{\Sigma}
\end{matrix}\right]
\bm{\Xi}^T
\end{equation}
is a singular value decomposition of the real matrix $\hat{\bm{\Omega}}$, and the singular values of $\hat{\bm{\Omega}}$ are those of $\mathcal{G}\hat{\bm{y}}$, each repeated twice. Therefore,
\begin{equation}\label{eq:Freal}
\mathcal{F}\left(\hat{\bm{\omega}}\right)
=\|\mathcal{G}\hat{\bm{y}}\|_*=\|\bm{\Sigma}\|_*=\frac12\|\hat{\bm{\Omega}}\|_*.
\end{equation}
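The doubling of singular values under the real embedding can be checked numerically; in the sketch below, a random complex matrix stands in for $\mathcal{G}\hat{\bm{y}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))
Mc = A + 1j * B                        # stand-in for G y_hat

Omega = np.block([[A, -B], [B, A]])    # real 2N x 2N embedding

s_c = np.linalg.svd(Mc, compute_uv=False)
s_r = np.linalg.svd(Omega, compute_uv=False)

# Singular values of Omega are those of Mc, each repeated twice,
# so the nuclear norms satisfy ||Omega||_* = 2 ||Mc||_*
assert np.allclose(np.sort(s_r), np.sort(np.repeat(s_c, 2)))
assert np.isclose(s_r.sum(), 2 * s_c.sum())
```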
Define a linear operator $\mathcal{E}~:~\mathbb{R}^{4N-2}\mapsto\mathbb{R}^{2N\times 2N}$ by
$$
\mathcal{E}\left(\left[\begin{matrix}\bm{\alpha}\cr\bm{\beta}\end{matrix}\right]\right)
=\left[\begin{matrix}\mathcal{G}\bm{\alpha}&-\mathcal{G}\bm{\beta}\cr
\mathcal{G}\bm{\beta}&\mathcal{G}\bm{\alpha}\end{matrix}\right],
\quad\mbox{with}\quad\bm{\alpha},\bm{\beta}\in\mathbb{R}^{2N-1}.
$$
By \eqref{eq:Freal} and the definition of $\hat{\bm{\Omega}}$, we obtain $\mathcal{F}(\hat{\bm{\omega}})=\frac12\|\mathcal{E}\hat{\bm{\omega}}\|_*$. By the chain rule of convex analysis and $\hat{\bm{\Omega}}=\mathcal{E}\hat{\bm{\omega}}$, the subdifferential of $\mathcal{F}$ is given by
\begin{equation}\label{eq:subdiffF}
\partial\mathcal{F}(\hat{\bm{\omega}})=\frac12\mathcal{E}^*\partial\|\hat{\bm{\Omega}}\|_*.
\end{equation}
On the one hand, the adjoint $\mathcal{E}^*$ is given by, for any $\bm{\Delta}=\left[\begin{matrix}\bm{\Delta}_{11}&\bm{\Delta}_{12}\cr\bm{\Delta}_{21}&\bm{\Delta}_{22}\end{matrix}\right]\in\mathbb{R}^{2N\times 2N}$ with each block in $\mathbb{R}^{N\times N}$,
\begin{equation}\label{eq:adjE}
\mathcal{E}^*\bm{\Delta}=\left[\begin{matrix}\mathcal{G}^*(\bm{\Delta}_{11}+\bm{\Delta}_{22})\cr\mathcal{G}^*(\bm{\Delta}_{21}-\bm{\Delta}_{12})\end{matrix}\right].
\end{equation}
On the other hand, since \eqref{eq:SVDrealOmega} provides a singular value decomposition of $\hat{\bm{\Omega}}$,
\begin{equation}\label{eq:subdiffnuc}
\partial\|\hat{\bm{\Omega}}\|_*=\left\{\bm{\Theta}\bm{\Xi}^T+\bm{\Delta}~|~
\bm{\Theta}^T\bm{\Delta}=\bm{0},~\bm{\Delta}\bm{\Xi}=\bm{0},~\|\bm{\Delta}\|_2\leq1\right\}.
\end{equation}
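As a sketch (with a random rank-$r$ matrix standing in for $\hat{\bm{\Omega}}$, and arbitrary small dimensions), one can verify numerically that elements of the set in \eqref{eq:subdiffnuc} satisfy the subgradient inequality for the nuclear norm:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 2
# Random rank-r matrix standing in for Omega-hat, and its SVD factors
Om = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
U, _, Vt = np.linalg.svd(Om)
Th, Xi = U[:, :r], Vt[:r, :].T          # Theta, Xi: top-r singular vectors

# Build Delta orthogonal to the column/row spaces, with spectral norm <= 1
Pu = np.eye(n) - Th @ Th.T
Pv = np.eye(n) - Xi @ Xi.T
D = Pu @ rng.standard_normal((n, n)) @ Pv
D /= max(np.linalg.norm(D, 2), 1.0)
G = Th @ Xi.T + D                       # candidate subgradient

nuc = lambda X: np.linalg.norm(X, 'nuc')
for _ in range(20):                     # subgradient inequality on random directions
    Gam = rng.standard_normal((n, n))
    assert nuc(Om + Gam) >= nuc(Om) + np.sum(Gam * G) - 1e-8
```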
Combining \eqref{eq:subdiffF}, \eqref{eq:adjE}, \eqref{eq:subdiffnuc}, and \eqref{eq:ThetaXi} yields the subdifferential of $\mathcal{F}$ at $\hat{\bm{\omega}}$:
$$
\partial\mathcal{F}(\hat{\bm{\omega}})=\left\{\left[\begin{matrix}
\mathcal{G}^*\left(\bm{\Theta}_1\bm{\Xi}_1^T+\bm{\Theta}_2\bm{\Xi}_2^T+\frac{\bm{\Delta}_{11}+\bm{\Delta}_{22}}{2}\right)\cr
\mathcal{G}^*\left(\bm{\Theta}_2\bm{\Xi}_1^T-\bm{\Theta}_1\bm{\Xi}_2^T+\frac{\bm{\Delta}_{21}-\bm{\Delta}_{12}}{2}\right)
\end{matrix}\right]~\Big|~
\bm{\Delta}=\left[\begin{matrix}\bm{\Delta}_{11}&\bm{\Delta}_{12}\cr\bm{\Delta}_{21}&\bm{\Delta}_{22}\end{matrix}\right],~\bm{\Theta}^T\bm{\Delta}=\bm{0},~\bm{\Delta}\bm{\Xi}=\bm{0},~\|\bm{\Delta}\|_2\leq1
\right\}.
$$
We are now ready to estimate the Gaussian width. Define the set $\mathfrak{S}$ of complex-valued vectors
\begin{equation}\label{eq:setS}
\mathfrak{S}=\left\{\mathcal{G}^*(\bm{U}\bm{V}^*+\bm{W})~|~\bm{U}^*\bm{W}=\bm{0},~\bm{W}\bm{V}=\bm{0},
~\|\bm{W}\|_2\leq 1\right\},
\end{equation}
where $\bm{U},\bm{V}$ are in \eqref{eq:SVDcomplexGy}. Then, it can be checked that
\begin{equation}\label{eq:GsubsetsubdiffF}
\mathfrak{H}\equiv \left\{\left[\begin{matrix}\bm{\alpha}\cr\bm{\beta}\end{matrix}\right]~\Big|~
\bm{\alpha}+\imath\bm{\beta}\in\mathfrak{S}\right\}
\subset \partial\mathcal{F}(\hat{\bm{\omega}}).
\end{equation}
Indeed, for any $\bm{W}=\bm{\Delta}_1+\imath\bm{\Delta}_2$ satisfying $\bm{U}^*\bm{W}=\bm{0}$, $\bm{W}\bm{V}=\bm{0}$, and $\|\bm{W}\|_2\leq 1$, we choose $\bm{\Delta}=\left[\begin{matrix}\bm{\Delta}_1&-\bm{\Delta}_2\cr\bm{\Delta}_2&\bm{\Delta}_1\end{matrix}\right]$. This choice of $\bm{\Delta}$ satisfies the constraints on $\bm{\Delta}$ in $\partial\mathcal{F}(\hat{\bm{\omega}})$. Furthermore, $\bm{U}\bm{V}^*+\bm{W}=(\bm{\Theta}_1\bm{\Xi}_1^T+\bm{\Theta}_2\bm{\Xi}_2^T+\bm{\Delta}_1) +\imath(\bm{\Theta}_2\bm{\Xi}_1^T-\bm{\Theta}_1\bm{\Xi}_2^T+\bm{\Delta}_2)$. Therefore, \eqref{eq:GsubsetsubdiffF} holds.
With the help of \eqref{eq:GsubsetsubdiffF}, we get
\begin{equation}\label{eq:widthest1.5}
\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2
=\min_{\lambda\geq 0}\min_{\bm{\gamma}\in\partial\mathcal{F}(\hat{\bm{\omega}})}\|\bm{\xi}-\lambda\bm{\gamma}\|_2
\leq\min_{\lambda\geq 0}\min_{\bm{\gamma}\in\mathfrak{H}}\|\bm{\xi}-\lambda\bm{\gamma}\|_2.
\end{equation}
We then convert the real-valued vectors to complex-valued vectors by letting $\bm{g}=\bm{\xi}_1+\imath\bm{\xi}_2$ and $\bm{c}=\bm{\gamma}_1+\imath\bm{\gamma}_2$, where $\bm{\xi}_1$ and $\bm{\xi}_2$ are the first and second halves of $\bm{\xi}$, respectively, and similarly for $\bm{\gamma}_1$ and $\bm{\gamma}_2$. This leads to
$$
\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2
\leq\min_{\lambda\geq 0}\min_{\bm{\gamma}\in\mathfrak{H}}\|\bm{\xi}-\lambda\bm{\gamma}\|_2
=\min_{\lambda\geq 0}\min_{\bm{c}\in\mathfrak{S}}\|\bm{g}-\lambda\bm{c}\|_2.
$$
Since $\mathcal{G}^*\mathcal{G}$ is the identity operator and $\mathcal{G}\mathcal{G}^*$ is an orthogonal projector, for any $\lambda\geq0$ and $\bm{c}\in\mathfrak{S}$,
\begin{equation}\label{eq:widthest2}
\begin{split}
\|\bm{g}-\lambda\bm{c}\|_2&=\|\mathcal{G}\bm{g}-\lambda\mathcal{G}\bm{c}\|_F
=\|\mathcal{G}\bm{g}-\lambda\mathcal{G}\mathcal{G}^*(\bm{U}\bm{V}^*+\bm{W})\|_F\cr
&=\left(\|\mathcal{G}\bm{g}-\lambda(\bm{U}\bm{V}^*+\bm{W})\|_F^2-\|\lambda(\mathcal{I}-\mathcal{G}\mathcal{G}^*)(\bm{U}\bm{V}^*+\bm{W})\|_F^2\right)^{1/2}\cr
&\leq\|\mathcal{G}\bm{g}-\lambda(\bm{U}\bm{V}^*+\bm{W})\|_F,
\end{split}
\end{equation}
where $\bm{W}$ satisfies the conditions in the definition of $\mathfrak{S}$ in \eqref{eq:setS}. Define two orthogonal projectors $\mathcal{P}_1$ and $\mathcal{P}_2$ in $\mathbb{C}^{N\times N}$ by
$$
\mathcal{P}_1\bm{X}=\bm{U}\bm{U}^*\bm{X}+\bm{X}\bm{V}\bm{V}^*-\bm{U}\bm{U}^*\bm{X}\bm{V}\bm{V}^*,\qquad
\mathcal{P}_2\bm{X}=(\bm{I}-\bm{U}\bm{U}^*)\bm{X}(\bm{I}-\bm{V}\bm{V}^*).
$$
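As a numerical aside (not part of the proof; random $\bm{U},\bm{V}$ with orthonormal columns stand in for the SVD factors), the projector identities stated next can be verified:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 5, 2
# Random U, V with orthonormal columns (stand-ins for the SVD factors)
U, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
Pu, Pv = U @ U.conj().T, V @ V.conj().T
I = np.eye(n)

P1 = lambda X: Pu @ X + X @ Pv - Pu @ X @ Pv
P2 = lambda X: (I - Pu) @ X @ (I - Pv)

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.allclose(P1(X) + P2(X), X)                     # decomposition
assert np.allclose(P1(P1(X)), P1(X))                     # idempotent
assert np.isclose(np.vdot(P1(X), P2(X)), 0)              # orthogonal parts
UV = U @ V.conj().T
assert np.allclose(P1(UV), UV) and np.allclose(P2(UV), 0)
```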
Then, it can be easily checked that: $\mathcal{P}_1\bm{X}$ and $\mathcal{P}_2\bm{X}$ are orthogonal, $\bm{X}=\mathcal{P}_1\bm{X}+\mathcal{P}_2\bm{X}$, and
\begin{equation}\label{eq:P1P2}
\mathcal{P}_1\bm{U}\bm{V}^*=\bm{U}\bm{V}^*,\quad \mathcal{P}_2\bm{U}\bm{V}^*=\bm{0},\quad
\mathcal{P}_1\bm{W}=\bm{0},\quad \mathcal{P}_2\bm{W}=\bm{W},
\end{equation}
where $\bm{U},\bm{V},\bm{W}$ are the same as those in \eqref{eq:setS}. We choose
$$
\lambda=\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2,\qquad
\bm{W}=\frac{1}{\lambda}\mathcal{P}_2(\mathcal{G}\bm{g}).
$$
Then, $\bm{W}$ satisfies the constraints in \eqref{eq:setS}. This, together with \eqref{eq:widthest1.5}, \eqref{eq:widthest2}, and \eqref{eq:P1P2}, implies
\begin{equation*}
\begin{split}
\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2
&\leq \big\|\mathcal{G}\bm{g}-\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2\bm{U}\bm{V}^*-\mathcal{P}_2(\mathcal{G}\bm{g})\big\|_F
=\big\|\mathcal{P}_1(\mathcal{G}\bm{g})-\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2\bm{U}\bm{V}^*\big\|_F\cr
&\leq\|\mathcal{P}_1(\mathcal{G}\bm{g})\|_F+\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2\|\bm{U}\bm{V}^*\|_F
=\|\mathcal{P}_1(\mathcal{G}\bm{g})\|_F+\sqrt{R}\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2.
\end{split}
\end{equation*}
We will estimate both $\|\mathcal{P}_1(\mathcal{G}\bm{g})\|_F$ and $\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2$. For $\|\mathcal{P}_1(\mathcal{G}\bm{g})\|_F$, we have
\begin{equation*}
\begin{split}
\|\mathcal{P}_1(\mathcal{G}\bm{g})\|_F&=\|\bm{U}\bm{U}^*(\mathcal{G}\bm{g})+(\mathcal{G}\bm{g})\bm{V}\bm{V}^*-\bm{U}\bm{U}^*(\mathcal{G}\bm{g})\bm{V}\bm{V}^*\|_F
=\|\bm{U}\bm{U}^*(\mathcal{G}\bm{g})+(\bm{I}-\bm{U}\bm{U}^*)(\mathcal{G}\bm{g})\bm{V}\bm{V}^*\|_F\cr
&\leq \|\bm{U}\bm{U}^*(\mathcal{G}\bm{g})\|_F+\|(\bm{I}-\bm{U}\bm{U}^*)(\mathcal{G}\bm{g})\bm{V}\bm{V}^*\|_F
\leq \|\bm{U}\bm{U}^*(\mathcal{G}\bm{g})\|_F+\|(\mathcal{G}\bm{g})\bm{V}\bm{V}^*\|_F\cr
&\leq 2\sqrt{R}\|\mathcal{G}\bm{g}\|_2,
\end{split}
\end{equation*}
where in the last line we have used the inequality
$$
\|\bm{U}\bm{U}^*(\mathcal{G}\bm{g})\|_F\leq\|\bm{U}\bm{U}^*\|_F\|\mathcal{G}\bm{g}\|_2\leq \sqrt{R}\|\mathcal{G}\bm{g}\|_2
$$
and similarly $\|(\mathcal{G}\bm{g})\bm{V}\bm{V}^*\|_F\leq \sqrt{R}\|\mathcal{G}\bm{g}\|_2$.
For $\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2$,
$$
\|\mathcal{P}_2(\mathcal{G}\bm{g})\|_2=\|(\bm{I}-\bm{U}\bm{U}^*)(\mathcal{G}\bm{g})(\bm{I}-\bm{V}\bm{V}^*)\|_2\leq\|\bm{I}-\bm{U}\bm{U}^*\|_2\|\mathcal{G}\bm{g}\|_2\|\bm{I}-\bm{V}\bm{V}^*\|_2
\leq\|\mathcal{G}\bm{g}\|_2.
$$
Altogether, we obtain
$$
\min_{\bm{\gamma}\in\mathfrak{T}_{\mathbb{R}}^{*}(\hat{\bm{y}})}\|\bm{\xi}-\bm{\gamma}\|_2
\leq 3\sqrt{R}\|\mathcal{G}\bm{g}\|_2,
$$
which together with \eqref{eq:widthest1} gives
$$
w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})
\leq 3\sqrt{R}\cdot\mathsf{E}(\|\mathcal{G}\bm{g}\|_2).
$$
\end{proof}
\subsection{Bound of $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$}
The estimation of $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$ plays an important role in proving Theorem \ref{thm:main}, since it is needed to obtain a tight bound on the Gaussian width $w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})$. The following theorem gives a bound for $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$.
\begin{theorem}\label{lem:randomHankel}
Let $\bm{g}\in\mathbb{R}^{2N-1}$ be a random vector whose entries are i.i.d. Gaussian random variables with mean $0$ and variance $1$, or $\bm{g}\in\mathbb{C}^{2N-1}$ a random vector whose real part and imaginary part have i.i.d. Gaussian random entries with mean $0$ and variance $1$. Then,
$$
\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)\leq C_1\ln N,
$$
where $C_1$ is a positive universal constant.
\end{theorem}
The proof of Theorem \ref{lem:randomHankel} is relatively involved. To help the reader follow it, we begin with the real case and first introduce some ideas and lemmas. Assume $\bm{g}\in\mathbb{R}^{2N-1}$ has i.i.d. standard Gaussian entries with mean $0$ and variance $1$. Notice that $\mathcal{G}\bm{g}$ is symmetric. Therefore, for any even integer $k$, $(\mathrm{tr}((\mathcal{G}\bm{g})^k))^{1/k}$ is the $k$-norm of the vector of singular values, which implies $\|\mathcal{G}\bm{g}\|_2\leq(\mathrm{tr}((\mathcal{G}\bm{g})^k))^{1/k}$. This, together with Jensen's inequality, gives
\begin{equation}\label{bound1}
\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)\leq \mathsf{E}\left(\left(\mathrm{tr}\left((\mathcal{G}\bm{g})^k\right)\right)^{1/k}\right)\leq
\left(\mathsf{E}\left(\mathrm{tr}\left((\mathcal{G}\bm{g})^k\right)\right)\right)^{1/k}.
\end{equation}
Thus, in order to get an upper bound of $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$, we estimate $\mathsf{E}\left(\mathrm{tr}\left(\left(\mathcal{G}\bm{g}\right)^k\right)\right)$. Denote $\bm{M}=\mathcal{G}\bm{g}$. It is easy to see that
\begin{equation}\label{eq:EMk}
\mathsf{E}(\mathrm{tr}(\bm{M}^k))=\sum_{0\leq i_1,i_2,\ldots,i_k\leq N-1} \mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1}).
\end{equation}
Therefore, we only need to estimate $\sum_{0\leq i_1,i_2,\ldots,i_k\leq N-1} \mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})$.
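The trace expansion \eqref{eq:EMk} can be checked directly on a small symmetric matrix (a sketch with arbitrary small dimensions):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, k = 3, 4
M = rng.standard_normal((n, n))
M = (M + M.T) / 2                      # symmetric, as G g is

trace_direct = np.trace(np.linalg.matrix_power(M, k))
trace_sum = sum(M[i1, i2] * M[i2, i3] * M[i3, i4] * M[i4, i1]
                for i1, i2, i3, i4 in product(range(n), repeat=k))
assert np.isclose(trace_direct, trace_sum)
```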
To simplify the notation, we denote $i_{k+1}=i_1$. Notice that $M_{ij}=\frac{g_{i+j}}{\sqrt{K_{i+j}}}$, where $g_{i+j}$ is a Gaussian random variable and $K_j$ is defined in \eqref{eq:Ej}. Hence, $M_{i_\ell,i_{\ell+1}}=M_{i_{\ell'},i_{\ell'+1}}$ if and only if $i_\ell+i_{\ell+1}=i_{\ell'}+i_{\ell'+1}$. To exploit this property, we introduce a graph for any given indices $i_1,i_2,\ldots,i_k$ and group its edges into equivalence classes. More specifically, we construct a graph $\mathfrak{F}_{i_1,i_2,\ldots,i_k}$ whose nodes are $i_1,i_2,\ldots,i_k$ and whose edges are $(i_1,i_2), (i_2,i_3),\ldots,(i_{k-1},i_k),(i_k,i_1)$. Let the weight of the edge $(i_{\ell},i_{\ell+1})$ be $i_{\ell}+i_{\ell+1}$. Edges with the same weight form an equivalence class. Obviously, $M_{i_{\ell},i_{\ell+1}}=M_{i_{\ell'},i_{\ell'+1}}$ if and only if $(i_{\ell},i_{\ell+1})$ and $(i_{\ell'},i_{\ell'+1})$ are in the same equivalence class. Assume there are $p$ equivalence classes of edges of $\mathfrak{F}_{i_1,i_2,\ldots,i_k}$. These classes are indexed by $1,2,\ldots,p$ according to their order of first appearance in the traversal $i_1\to i_2\to\ldots\to i_k\to i_1$. We associate with the graph $\mathfrak{F}_{i_1,i_2,\ldots,i_k}$ a sequence $c_1c_2\ldots c_k$, where $c_j$ is the index of the equivalence class of the edge $(i_j,i_{j+1})$. We call $c_1c_2\ldots c_k$ the label of the equivalence classes of the graph $\mathfrak{F}_{i_1,i_2,\ldots,i_k}$.
This label plays an important role in bounding $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$. To help the reader understand the concept, we give two specific examples. For $N=6$, $k=6$, $i_1=1,i_2=4,i_3=1,i_4=3,i_5=1,i_6=4$, the label of the equivalence classes of the corresponding graph is $112211$. For $N=6$, $k=6$, $i_1=2,i_2=3,i_3=2,i_4=4,i_5=2,i_6=3$, the label is $112211$ as well. Therefore, several different index sequences $i_1i_2\ldots i_k$ may correspond to the same label. Let $\mathfrak{A}_{c_1c_2\ldots c_k}$ be the set of index sequences whose corresponding graph has label $c_1c_2\ldots c_k$, i.e.,
\begin{equation}\label{eq:frakA}
\mathfrak{A}_{c_1c_2\ldots c_k}=\{i_1i_2\ldots i_k~|~\mbox{the label of the equivalence classes of the graph } \mathfrak{F}_{i_1,i_2,\ldots,i_k} \mbox{ is } c_1c_2\ldots c_k\}.
\end{equation}
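The two worked examples above can be reproduced in code (a minimal sketch; the label is built from the cyclic edge weights $i_\ell+i_{\ell+1}$, with classes numbered by first appearance):

```python
def class_label(idx):
    """Label of the equivalence classes: edges (i_l, i_{l+1}) share a class
    iff they share the weight i_l + i_{l+1}; classes are numbered by first
    appearance along the cyclic traversal."""
    k = len(idx)
    weights = [idx[l] + idx[(l + 1) % k] for l in range(k)]
    first_seen = {}
    for w in weights:
        first_seen.setdefault(w, len(first_seen) + 1)
    return ''.join(str(first_seen[w]) for w in weights)

# The two examples from the text
assert class_label([1, 4, 1, 3, 1, 4]) == '112211'
assert class_label([2, 3, 2, 4, 2, 3]) == '112211'
```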
For a given $c_1c_2\ldots c_k$, $\mathfrak{A}_{c_1c_2\ldots c_k}$ is a subset of $\{i_1i_2\ldots i_k~|~i_j\in\{0,1,\ldots,N-1\},~\forall~j=1,\ldots,k\}$. The following lemma gives an upper bound on $\sum_{i_1i_2\ldots i_k\in\mathfrak{A}_{c_1c_2\ldots c_k}} \mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})$.
\begin{lemma} Let $\zeta$ be the Riemann zeta function and $\mathfrak{A}_{c_1c_2\ldots c_k}$ be defined in \eqref{eq:frakA}. Define $B(s)=\ln (N+1)$ if $s=2$ and $B(s)=\zeta(s/2)\leq \pi^2/6$ for $s\geq 4$.
Then
\begin{equation}\label{eq:ECs1sp}
\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})
\leq N \prod_{\ell=1}^{p}B(s_\ell)(s_{\ell}-1)!!,
\end{equation}
where $p$ is the number of equivalence classes in $c_1c_2\ldots c_k$, and $s_{\ell}$, $\ell=1,\ldots,p$, is the number of occurrences of $\ell$ in $c_1c_2\ldots c_k$.
\end{lemma}
\begin{proof}
We begin by identifying free indices for any $i_1,i_2,\ldots,i_k$ in the set $\mathfrak{A}_{c_1c_2\ldots c_k}$. Let $(j_1,j_2)$ be the first edge of class $1$, so the weight of the first class is $j_1+j_2$. For convenience, we define $k_1(j_1)=j_1$. The first edge of class $2$ must have one vertex $k_2(j_1,j_2)$, depending on $j_1$ and $j_2$, and one free vertex, denoted by $j_3$; the weight of the second class is $k_2(j_1,j_2)+j_3$. Similarly, the first edge in class $3$ has a vertex $k_3(j_1,j_2,j_3)$ and a free vertex $j_4$, with weight $k_3(j_1,j_2,j_3)+j_4$, and so on. Finally, the first edge in class $p$ has a vertex $k_{p}(j_1,j_2,\ldots,j_p)$ and a free vertex $j_{p+1}$, with weight $k_{p}(j_1,j_2,\ldots,j_p)+j_{p+1}$. Recall that the entry $M_{ij}$ is $\frac{g_{i+j}}{\sqrt{K_{i+j}}}$, where $g_{i+j}$ is a Gaussian random variable. Therefore, for any $i_1i_2\ldots i_k\in\mathfrak{A}_{c_1c_2\ldots c_k}$,
\begin{equation}\label{eq:Ei1ik2}
\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})
=\prod_{\ell=1}^{p} \frac{1}{K_{m_{\ell}}^{s_{\ell}/2}}\mathsf{E}\left(g_{m_\ell}^{s_{\ell}}\right),
\end{equation}
where $m_{\ell}=k_{\ell}(j_1,j_2,\ldots,j_{\ell})+j_{\ell+1}$. The expectation is therefore non-vanishing if and only if $s_1,s_2,\ldots,s_p$ are all even. In this case,
\begin{equation}\label{eq:Ei1ik}
\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})=\prod_{\ell=1}^{p} \frac{(s_{\ell}-1)!!}{K_{m_{\ell}}^{s_{\ell}/2}}.
\end{equation}
Summing \eqref{eq:Ei1ik} over $\mathfrak{A}_{c_1c_2\ldots c_k}$, we obtain
\begin{equation*}
\begin{split}
&\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})
\leq\sum_{j_1=0}^{N-1}\sum_{j_2=0}^{N-1}\ldots\sum_{j_p=0}^{N-1}\sum_{j_{p+1}=0}^{N-1}\prod_{\ell=1}^{p} \frac{(s_{\ell}-1)!!}{K_{m_{\ell}}^{s_{\ell}/2}}\cr
&=\sum_{j_1=0}^{N-1}\sum_{j_2=0}^{N-1}\left(\frac{(s_{1}-1)!!}{K_{k_1(j_1)+j_2}^{s_{1}/2}}
\sum_{j_3=0}^{N-1}\left(\frac{(s_{2}-1)!!}{K_{k_2(j_1,j_2)+j_3}^{s_{2}/2}}
\sum_{j_4=0}^{N-1}\left(\ldots\sum_{j_{p+1}=0}^{N-1}\frac{(s_{p}-1)!!}{K_{k_{p}(j_1,\ldots,j_{p})+j_{p+1}}^{s_{p}/2}}\right)\ldots\right)\right)
\end{split}
\end{equation*}
For any $0\leq c\leq N-1$,
$$
\sum_{\ell=0}^{N-1}\frac{1}{K_{c+\ell}^{s/2}}\leq
\begin{cases}
1+1/2+1/3+\ldots+1/N\leq \ln (N+1)& s=2,\cr
1+1/2^{s/2}+\ldots+1/N^{s/2}\leq \zeta(s/2)& s=4,6,\ldots,
\end{cases}
$$
where $\zeta$ is the Riemann zeta function. Recalling that $B(s)=\ln (N+1)$ if $s=2$ and $B(s)=\zeta(s/2)\leq \pi^2/6$ for $s\geq 4$, the desired result follows by applying this bound to the nested sums above, from the innermost sum outward.
\end{proof}
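The elementary sums used in the proof can be checked numerically. Note that the harmonic sum $1+1/2+\ldots+1/N$ is in fact bounded by $1+\ln N$, which matches $\ln(N+1)$ up to an absolute constant absorbed in the final universal constant; the check below uses the former, true bound:

```python
import math

# Harmonic sum: 1 + 1/2 + ... + 1/N <= 1 + ln N
for N in (1, 10, 100, 10000):
    H = sum(1.0 / j for j in range(1, N + 1))
    assert H <= 1 + math.log(N)

# Partial sums 1 + 1/2^{s/2} + ... are bounded by zeta(s/2) <= pi^2/6 for s >= 4
for s in (4, 6, 8):
    partial = sum(1.0 / n ** (s / 2) for n in range(1, 100000))
    assert partial <= math.pi ** 2 / 6
```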
The desired bound for $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$ can be obtained once we know how many different sets $\mathfrak{A}_{c_1c_2\ldots c_k}$ are available in the set $\{i_1i_2\ldots i_k~|~i_j\in\{0,1,\ldots,N-1\},~\forall~j=1,\ldots,k\}$. Let $\mathfrak{B}_{s_1s_2\ldots s_p}$ be the set of all labels with $p$ equivalence classes in which the $\ell$-th class contains $s_{\ell}$ edges, i.e.,
\begin{equation}
\mathfrak{B}_{s_1s_2\ldots s_p}=\left\{c_1c_2\ldots c_k~\Big|~\begin{array}{ll}&c_1c_2\ldots c_k \hbox{ is a valid label of equivalence classes of a graph } \mathfrak{F}_{i_1,i_2,\ldots,i_k} \\&\hbox{and there are } s_{\ell} \hbox{ $\ell$'s in the label } c_1c_2\ldots c_k \end{array}\right\}.
\end{equation}
Let $\mathfrak{C}_p$ be the set of all possible choices of $p$ positive even numbers $s_1,\ldots,s_p$ satisfying $s_1+s_2+\ldots+s_p=k$. Then
\begin{equation}\label{eq:Esumall}
\begin{split}
\mathsf{E}(\mathrm{tr}(\bm{M}^k))&=\sum_{0\leq i_1,i_2,\ldots,i_k\leq N-1} \mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
&\leq \sum_{p=1}^{k/2}\sum_{s_1\ldots s_p\in\mathfrak{C}_{p}}\sum_{c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}}\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1}).
\end{split}
\end{equation}
By bounding the cardinalities of $\mathfrak{B}_{s_1s_2\ldots s_p}$ and $\mathfrak{C}_p$, we can derive a bound on $\mathsf{E}(\mathrm{tr}(\bm{M}^k))$ and hence on $\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)$ for the real case. The complex case then follows directly from the results for the real case. Now, we are in a position to prove Theorem \ref{lem:randomHankel}.
\begin{proof}[Proof of Theorem \ref{lem:randomHankel}]
Following \eqref{eq:Esumall}, we need to count the cardinality of $\mathfrak{B}_{s_1s_2\ldots s_p}$. For any $c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}$, we must have $c_1=1$. Therefore, there are ${k-1 \choose s_1-1}$ choices for the positions of the remaining $1$'s in $c_1c_2\ldots c_k$. Once the positions of the $1$'s are fixed, the position of the first $2$ has to be the first available slot, so there are ${k-s_1-1 \choose s_2-1}$ choices for the positions of the remaining $2$'s, and so on. Thus,
\begin{equation*}
\begin{split}
|\mathfrak{B}_{s_1s_2\ldots s_p}|&\leq{k-1 \choose s_1-1}\cdot{k-s_1-1 \choose s_2-1}\cdot\ldots\cdot
{k-s_1-\ldots-s_{p-1}-1 \choose s_p-1}\cr
&=\frac{(k-1)(k-2)\ldots(k-s_1+1)}{(s_1-1)!}\frac{(k-s_1-1)\ldots(k-s_1-s_2+1)}{(s_2-1)!}
\ldots 1\cr
&=\frac{(k-1)!}{\prod_{\ell=1}^{p}(s_{\ell}-1)!\prod_{\ell=1}^{p-1}(k-s_1-\ldots-s_{\ell})},
\end{split}
\end{equation*}
which together with \eqref{eq:ECs1sp} implies, for any $s_1s_2\ldots s_p\in\mathfrak{C}_p$,
\begin{equation}\label{eq:EBA}
\begin{split}
&\sum_{c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}}\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
\leq& N \frac{(k-1)!}{\prod_{\ell=1}^{p}(s_\ell-2)!!\prod_{\ell=1}^{p-1}(k-s_1-\ldots-s_{\ell})}\prod_{\ell=1}^{p}B(s_\ell)
\end{split}
\end{equation}
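The closed form for the product of binomial coefficients used above can be verified for a concrete choice of $k$ and $s_1,\ldots,s_p$ (values below chosen arbitrarily):

```python
from math import comb, factorial

k, s = 12, (4, 2, 6)      # even parts summing to k
p = len(s)

# Product of binomials from the counting argument
lhs, remaining = 1, k
for sl in s:
    lhs *= comb(remaining - 1, sl - 1)
    remaining -= sl

# Closed form (k-1)! / (prod (s_l - 1)! * prod (k - s_1 - ... - s_l))
partial = [k - sum(s[:l + 1]) for l in range(p - 1)]
rhs = factorial(k - 1)
for sl in s:
    rhs //= factorial(sl - 1)
for t in partial:
    rhs //= t

assert lhs == rhs
```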
Summing \eqref{eq:EBA} over $\mathfrak{C}_{p}$ yields
\begin{equation}\label{eq:EBAC}
\begin{split}
&\sum_{s_1\ldots s_p\in\mathfrak{C}_{p}}\sum_{c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}}\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
\leq& N(k-1)! \sum_{{s_1\ldots s_p\in \mathfrak{C}_{p}}}\frac{\prod_{\ell=1}^{p}B(s_\ell)}{\prod_{\ell=1}^{p}(s_\ell-2)!!\prod_{\ell=1}^{p-1}(k-s_1-\ldots-s_{\ell})}
\end{split}
\end{equation}
Let us estimate the sum in the last line. Let $s$ be the number of $2$'s in $s_1s_2\ldots s_p$. Then, \begin{equation}\label{eq:prodBs}
\prod_{\ell=1}^{p}B(s_\ell)\leq \ln^s(N+1)\left(\frac{\pi^2}{6}\right)^{p-s}.
\end{equation}
Since each $s_1,\ldots,s_p\geq 2$ and $p-s$ of them are at least $4$, we have
\begin{equation}\label{eq:prodBs1}
\prod_{\ell=1}^{p}(s_\ell-2)!!\geq 2^{p-s}
\end{equation}
and $k-s_1-\ldots-s_{\ell}=s_{\ell+1}+\ldots+s_p\geq2(p-\ell)$, which implies
\begin{equation}\label{eq:prodBs2}
\prod_{\ell=1}^{p-1}(k-s_1-\ldots-s_{\ell})\geq\prod_{\ell=1}^{p-1}2(p-\ell)=2^{p-1}(p-1)!.
\end{equation}
There are ${p\choose s}$ choices for the positions of the $s$ $2$'s. Moreover, once the $s$ $2$'s in $s_1s_2\ldots s_p$ are chosen, there are at most
$$
\left(\frac{k}{2}-s\right)\cdot\left(\frac{k}{2}-s-1\right)\cdot\ldots\cdot
\left(\frac{k}{2}-s-(p-s-1)\right)\leq\left(\frac{k}{2}\right)^{p-s}
$$
choices for the remaining $p-s$ values $s_j$. Altogether,
\begin{equation}\label{eq:EBAC2}
\begin{split}
&\sum_{s_1\ldots s_p\in\mathfrak{C}_{p}}\sum_{c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}}\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
\leq& N(k-1)!\sum_{s=0}^{p}{p\choose s}\left(\frac{k}{2}\right)^{p-s}\ln^s(N+1)\left(\frac{\pi^2}{6}\right)^{p-s}\frac{1}{2^{p-s}2^{p-1}(p-1)!}\cr
=&2N(k-1)!\frac{1}{(p-1)!}\sum_{s=0}^{p}{p\choose s}\left(\frac{k}{2}\right)^{p-s}\ln^s(N+1)\left(\frac{\pi^2}{6}\right)^{p-s}\frac{1}{4^{p-s}2^s}\cr
=&\frac{2N(k-1)!}{(p-1)!}\sum_{s=0}^{p}{p\choose s}\left(\frac{\pi^2k}{48}\right)^{p-s}\left(\frac{\ln (N+1)}{2}\right)^s\cr
=&2N(k-1)!\frac{\left(\frac{\pi^2}{48}k+\frac{\ln (N+1)}{2}\right)^{p}}{(p-1)!}
\end{split}
\end{equation}
Finally, summing \eqref{eq:EBAC2} over all possible $p$, we obtain
\begin{equation}\label{eq:Esumal2}
\begin{split}
\mathsf{E}(\mathrm{tr}(\bm{M}^k))&=\sum_{i_1,i_2,\ldots,i_k} \mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
&\leq \sum_{p=1}^{k/2}\sum_{s_1\ldots s_p\in\mathfrak{C}_{p}}\sum_{c_1c_2\ldots c_k\in\mathfrak{B}_{s_1s_2\ldots s_p}}\sum_{i_1i_2\ldots i_k\in \mathfrak{A}_{c_1c_2\ldots c_k}}\mathsf{E}(M_{i_1i_2}M_{i_2i_3}\ldots M_{i_{k-1}i_k}M_{i_ki_1})\cr
&\leq 2N(k-1)!\sum_{p=1}^{k/2}\frac{\left(\frac{\pi^2}{48}k+\frac{\ln (N+1)}{2}\right)^{p}}{(p-1)!}
\end{split}
\end{equation}
By using the fact that, for any $A>0$,
$$
\sum_{p=1}^{k/2}\frac{A^{p}}{(p-1)!}
=A\left(1+A+\frac{A^2}{2!}+\ldots+\frac{A^{k/2-1}}{(k/2-1)!}\right)\leq A e^A,
$$
we can rearrange \eqref{eq:Esumal2} into
\begin{equation*}
\begin{split}
\mathsf{E}(\mathrm{tr}(\bm{M}^k))&\leq 2N(k-1)!\left(\frac{\pi^2}{48}k+\frac{\ln (N+1)}{2}\right)e^{\frac{\pi^2}{48}k+\frac{\ln (N+1)}{2}}
=2N\sqrt{N+1}(k-1)!\left(\frac{\pi^2}{48}k+\frac{\ln (N+1)}{2}\right)e^{\frac{\pi^2}{48}k}\cr
&\leq 2(N+1)^{\frac32}k^k\left(\frac{\pi^2}{48}+\frac{\ln (N+1)}{2k}\right)e^{\frac{\pi^2}{48}k}.
\end{split}
\end{equation*}
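The elementary inequality $\sum_{p=1}^{k/2}A^p/(p-1)!\leq Ae^A$ invoked above can be checked numerically for a few values of $A$ and $k$:

```python
import math

for A in (0.5, 1.0, 3.7, 10.0):
    for k in (2, 6, 20):
        # sum_{p=1}^{k/2} A^p / (p-1)! = A * (partial exponential series) <= A e^A
        total = sum(A ** p / math.factorial(p - 1) for p in range(1, k // 2 + 1))
        assert total <= A * math.exp(A)
```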
Let $k$ be the smallest even integer greater than $\frac{24}{\pi^2}\ln (N+1)$. Then using $\|\bm{M}\|_2\leq(\mathrm{tr}(\bm{M}^k))^{1/k}$ leads to
\begin{equation*}
\begin{split}
\mathsf{E}(\|\bm{M}\|_2)&\leq \mathsf{E}((\mathrm{tr}(\bm{M}^k))^{1/k})\leq \left(\mathsf{E}(\mathrm{tr}(\bm{M}^k))\right)^{1/k}
\leq (2(N+1)^{\frac32})^{1/k}k\left(\frac{\pi^2}{48}+\frac{\ln (N+1)}{2k}\right)^{1/k}e^{\frac{\pi^2}{48}}
\cr
&\leq 2^{\frac{\pi^2}{24\ln (N+1)}}\cdot e^{\frac{\pi^2}{16}}\cdot\frac{24}{\pi^2}\ln (N+1)
\cdot \left(\frac{\pi^2}{24}\right)^{\frac{\pi^2}{24\ln (N+1)}}\cdot e^{\frac{\pi^2}{48}}
\leq C_1\ln N,
\end{split}
\end{equation*}
where $C_1$ is a universal constant.
Next, we treat the complex case. In this case, $\bm{g}\in\mathbb{C}^{2N-1}$, where both its real part and imaginary part have i.i.d. Gaussian entries. Write $\bm{g}=\bm{\xi}+\imath\bm{\eta}$, where $\bm{\xi},\bm{\eta}\in\mathbb{R}^{2N-1}$ are real-valued Gaussian random vectors. From the real-valued case above, we derive
$$
\mathsf{E}(\|\mathcal{G}\bm{\xi}\|_2)\leq C_1\ln N,\quad
\mathsf{E}(\|\mathcal{G}\bm{\eta}\|_2)\leq C_1\ln N.
$$
Therefore,
$$
\mathsf{E}(\|\mathcal{G}\bm{g}\|_2)=\mathsf{E}(\|\mathcal{G}\bm{\xi}+\imath\mathcal{G}\bm{\eta}\|_2)
\leq \mathsf{E}(\|\mathcal{G}\bm{\xi}\|_2)+\mathsf{E}(\|\mathcal{G}\bm{\eta}\|_2)
\leq 2C_1\ln N.
$$
This gives the bound stated in the theorem after renaming the constant $2C_1$ as $C_1$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:main}}
With Lemmas \ref{lem:tangentcone}, \ref{lem:Gaussianwidth}, \ref{lem:boundGaussianWidth}, and Theorem \ref{lem:randomHankel} in hand, we are now in a position to prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Since \eqref{eq:mincomplex} is equivalent to \eqref{eq:min} via the relation $\bm{y}=\mathcal{D}\bm{x}$, we only need to prove that $\hat{\bm{y}}=\tilde{\bm{y}}$ for noise-free data (and $\|\hat{\bm{y}}-\tilde{\bm{y}}\|_2\leq 2\delta/\epsilon$ for noisy data) with dominant probability. According to Lemma \ref{lem:tangentcone}, it suffices to prove \eqref{eq:NullSpace}. By Lemma \ref{lem:Gaussianwidth},
$$
\mathsf{P}\left(\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}\|\mathcal{B}\bm{z}\|_2\geq\epsilon\right)\geq 1-2e^{-\frac12\left(\lambda_{M}-w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})-\frac{\epsilon}{\sqrt{2}}\right)^2}.
$$
Lemma \ref{lem:boundGaussianWidth}, Theorem \ref{lem:randomHankel}, and the inequality $\lambda_M\geq \frac{M}{\sqrt{M+1}}$ imply that
$$\lambda_{M}-w(\mathfrak{T}_{\mathbb{R}}(\hat{\bm{y}})\cap\mathbb{S}_{\mathbb{R}}^{4N-3})-\frac{\epsilon}{\sqrt{2}}\geq \frac{M}{\sqrt{M+1}}-3C_1\sqrt{R}\ln N-\frac{\epsilon}{\sqrt{2}}\geq \sqrt{M-1}-3C_1\sqrt{R}\ln N-\frac{\epsilon}{\sqrt{2}},$$ where the last step uses $\frac{M}{\sqrt{M+1}}\geq\sqrt{M-1}$, i.e., $M^2\geq (M-1)(M+1)$.
When $M \geq (6C_1\sqrt{R}\ln N+\sqrt{2}\epsilon)^2+1$, we obtain $\mathsf{P}\left(\min_{\bm{z}\in\mathfrak{T}(\hat{\bm{y}})\cap\mathbb{S}_c^{2N-2}}\|\mathcal{B}\bm{z}\|_2\geq\epsilon\right)\geq 1-2e^{-\frac{M-1}{8}}$, which gives the desired result.
\end{proof}
\section{Extension to Structured Low-Rank Matrix Reconstruction}\label{secMatrices}
In this section, we extend our results to low-rank Hankel matrix reconstruction and low-rank Toeplitz matrix reconstruction from their Gaussian measurements.
Since the proof of Theorem \ref{thm:main} does not use the specific property that $\hat{\bm{y}}$ is an exponential signal, Theorem \ref{thm:main} holds true for any low-rank Hankel matrix. We have the following corollary, which states that any Hankel matrix of size $N\times N$ and rank $R$ can be recovered exactly from $O(R\ln^2N)$ of its Gaussian measurements, and that this reconstruction is robust to noise.
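The rank structure invoked below can be checked numerically. The following Python sketch (illustrative only; the dimensions, frequencies, and helper name are our own choices, not part of the paper) builds the Hankel matrix $\bm{H}(\bm{x})$ with $H_{ij}=x_{i+j}$ and verifies that a superposition of $R$ complex exponentials produces a rank-$R$ Hankel matrix.

```python
import numpy as np

def hankel_from_vector(x):
    """Build the N x N Hankel matrix H(x) with H[i, j] = x[i + j],
    where x has length 2N - 1."""
    N = (len(x) + 1) // 2
    i, j = np.indices((N, N))
    return x[i + j]

# A superposition of R complex exponentials yields a rank-R Hankel matrix.
N, R = 8, 3
t = np.arange(2 * N - 1)
freqs = [0.11, 0.37, 0.62]          # R distinct frequencies (arbitrary)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
H = hankel_from_vector(x)
print(H.shape, np.linalg.matrix_rank(H))   # (8, 8) 3
```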
\begin{corollary}[Low-Rank Hankel Matrix Reconstruction]
Let $\hat{\bm{H}}\in\mathbb{C}^{N\times N}$ be a given Hankel matrix with rank $R$. Let $\hat{\bm{x}}\in\mathbb{C}^{2N-1}$ be the vector satisfying $\hat{x}_{i+j}=\hat{H}_{ij}$ for $0\leq i,j\leq N-1$. Let $\mathcal{A}=\mathcal{B}\mathcal{D}\in\mathbb{C}^{M\times (2N-1)}$, where $\mathcal{B}\in\mathbb{C}^{M\times (2N-1)}$ is a random matrix whose real and imaginary parts are i.i.d. Gaussian with mean $0$ and variance $1$, and $\mathcal{D}\in\mathbb{R}^{(2N-1)\times (2N-1)}$ is the same as defined in Theorem \ref{thm:main}. Then, there exists a universal constant $C_1>0$ such that, for any $\epsilon>0$, if
$$
M \geq (C_1\sqrt{R}\ln N+\sqrt{2}\epsilon)^2+1,
$$
then, with probability at least $1-2e^{-\frac{M-1}{8}}$, we have
\begin{enumerate}
\item[(a)]
$\bm{H}(\tilde{\bm{x}})=\hat{\bm{H}}$, where $\tilde{\bm{x}}$ is the unique solution of
$$
\min_{\bm{x}}\|\bm{H}(\bm{x})\|_*\quad\mbox{subject to}\quad \mathcal{A}\bm{x}=\bm{b}
$$
with $\bm{b}=\mathcal{A}\hat{\bm{x}}$;
\item[(b)]
$\|\bm{H}(\tilde{\bm{x}})-\hat{\bm{H}}\|_F\leq 2\delta/\epsilon$, where $\tilde{\bm{x}}$ is the unique solution of
$$
\min_{\bm{x}}\|\bm{H}(\bm{x})\|_*\quad\mbox{subject to}\quad \|\mathcal{A}\bm{x}-\bm{b}\|_2\leq\delta
$$
with $\|\bm{b}-\mathcal{A}\hat{\bm{x}}\|_2\leq\delta$.
\end{enumerate}
\end{corollary}
Moreover, Theorem \ref{thm:main} can be extended to the reconstruction of a low-rank Toeplitz matrix from its Gaussian measurements. Let $\hat{\bm{T}}\in\mathbb{C}^{N\times N}$ be a Toeplitz matrix. Let $\hat{\bm{x}}\in\mathbb{C}^{2N-1}$ be the vector satisfying $\hat{x}_{N-1+(i-j)}=\hat{T}_{i,j}$ for $0\leq i,j\leq N-1$. Let $\bm{P}\in\mathbb{C}^{N\times N}$ be the anti-diagonal matrix with ones on its anti-diagonal. Then, it is easy to check that $\hat{\bm{T}}=\bm{H}(\hat{\bm{x}})\bm{P}$. Thus, we define a linear operator $\bm{T}$ that maps a vector in $\mathbb{C}^{2N-1}$ to an $N\times N$ Toeplitz matrix by $\bm{T}(\bm{x})=\bm{H}(\bm{x})\bm{P}$. Since $\bm{P}$ is a unitary matrix, one has $\|\bm{T}(\bm{x})\|_*=\|\bm{H}(\bm{x})\bm{P}\|_*=\|\bm{H}(\bm{x})\|_*$. Therefore, the above corollary can be adapted to low-rank Toeplitz matrices. We obtain the following corollary, which states that any Toeplitz matrix of size $N\times N$ and rank $R$ can be recovered exactly from its $O(R\ln^2N)$ Gaussian measurements, and this reconstruction is robust to noise.
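The identity $\bm{T}(\bm{x})=\bm{H}(\bm{x})\bm{P}$ and the resulting equality of nuclear norms can be verified directly. A small Python sketch (illustrative only; the test vector is random and the helper names are ours):

```python
import numpy as np

def hankel_op(x):
    """H(x)[i, j] = x[i + j] for an N x N Hankel matrix."""
    N = (len(x) + 1) // 2
    i, j = np.indices((N, N))
    return x[i + j]

def toeplitz_op(x):
    """T(x)[i, j] = x[N - 1 + (i - j)] for an N x N Toeplitz matrix."""
    N = (len(x) + 1) // 2
    i, j = np.indices((N, N))
    return x[N - 1 + i - j]

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

N = 6
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * N - 1) + 1j * rng.standard_normal(2 * N - 1)
P = np.fliplr(np.eye(N))   # anti-diagonal matrix with ones on the anti-diagonal
assert np.allclose(toeplitz_op(x), hankel_op(x) @ P)   # T(x) = H(x) P
assert np.isclose(nuclear_norm(toeplitz_op(x)), nuclear_norm(hankel_op(x)))
```

Since $\bm{P}$ is unitary, right-multiplication leaves the singular values, and hence the nuclear norm, unchanged.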
\begin{corollary}[Low-Rank Toeplitz Matrix Reconstruction]
Let $\hat{\bm{T}}\in\mathbb{C}^{N\times N}$ be a given Toeplitz matrix with rank $R$. Let $\hat{\bm{x}}\in\mathbb{C}^{2N-1}$ be the vector satisfying $\hat{x}_{N-1+(i-j)}=\hat{T}_{i,j}$ for $0\leq i,j\leq N-1$. Let $\mathcal{A}=\mathcal{B}\mathcal{D}\in\mathbb{C}^{M\times (2N-1)}$, where $\mathcal{B}\in\mathbb{C}^{M\times (2N-1)}$ is a random matrix whose real and imaginary parts are i.i.d. Gaussian with mean $0$ and variance $1$, and $\mathcal{D}\in\mathbb{R}^{(2N-1)\times (2N-1)}$ is the same as defined in Theorem \ref{thm:main}. Then, there exists a universal constant $C_1>0$ such that, for any $\epsilon>0$, if
$$
M \geq (C_1\sqrt{R}\ln N+\sqrt{2}\epsilon)^2+1,
$$
then, with probability at least $1-2e^{-\frac{M-1}{8}}$, we have
\begin{enumerate}
\item[(a)]
$\bm{T}(\tilde{\bm{x}})=\hat{\bm{T}}$, where $\tilde{\bm{x}}$ is the unique solution of
$$
\min_{\bm{x}}\|\bm{T}(\bm{x})\|_*\quad\mbox{subject to}\quad \mathcal{A}\bm{x}=\bm{b}
$$
with $\bm{b}=\mathcal{A}\hat{\bm{x}}$;
\item[(b)]
$\|\bm{T}(\tilde{\bm{x}})-\hat{\bm{T}}\|_F\leq 2\delta/\epsilon$, where $\tilde{\bm{x}}$ is the unique solution of
$$
\min_{\bm{x}}\|\bm{T}(\bm{x})\|_*\quad\mbox{subject to}\quad \|\mathcal{A}\bm{x}-\bm{b}\|_2\leq\delta
$$
with $\|\bm{b}-\mathcal{A}\hat{\bm{x}}\|_2\leq\delta$.
\end{enumerate}
\end{corollary}
\section{Numerical Experiments}\label{secNum}
In this section, we use numerical experiments to demonstrate our result and its performance improvement over the results in \cite{TBSR:TIT:13,CC:TIT:14}. In the numerical experiments, we use superpositions of complex sinusoids as test signals. Note that the application of our result is not limited to such signals but extends to any signal that is a superposition of complex exponentials.
The true signal $\hat{\bm{x}}$ is generated as follows. We choose $N=64$, i.e., the dimension of $\hat{\bm{x}}$ is $127$. The frequencies $f_k$, $k=1,\ldots,R$, are drawn uniformly at random from the interval $[0,1]$. The arguments of the coefficients $c_k$, $k=1,\ldots,R$, are drawn uniformly at random from the interval $[0,2\pi]$, and their amplitudes are generated by $|c_k|=1+10^{0.5m_k}$, where $m_k$ follows the uniform distribution on $[0,1]$. Then, we synthesize the true signal $\hat{\bm{x}}$ by $\hat{x}_t=\sum_{k=1}^{R}c_ke^{\imath 2\pi f_k t}$ for $t=0,1,\ldots,126$. For each fixed $M$ and $R$, we perform $100$ trials. We plot in Fig. \ref{fig:experiments}(a) the rate of successful reconstruction by \eqref{eq:min}, which is solved by the alternating direction method of multipliers (ADMM). We see from Fig. \ref{fig:experiments}(a) that the phase transition of our method is very sharp.
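For reproducibility, the signal-generation recipe above can be sketched as follows in Python (the random seed, default $R$, and helper name are our own choices; the paper does not specify an implementation):

```python
import numpy as np

def make_test_signal(N=64, R=4, seed=0):
    """Superposition of R complex sinusoids of length 2N - 1, following
    the recipe described above (seed and default R are our own choices)."""
    rng = np.random.default_rng(seed)
    f = rng.uniform(0.0, 1.0, R)                  # frequencies in [0, 1]
    phase = rng.uniform(0.0, 2.0 * np.pi, R)      # arguments of the c_k
    amp = 1.0 + 10.0 ** (0.5 * rng.uniform(0.0, 1.0, R))
    c = amp * np.exp(1j * phase)
    t = np.arange(2 * N - 1)                      # t = 0, ..., 126 for N = 64
    return (c[None, :] * np.exp(2j * np.pi * np.outer(t, f))).sum(axis=1)

x_hat = make_test_signal()
print(x_hat.shape)   # (127,)
```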
For comparison, we plot the phase transitions of off-the-grid CS \cite{TBSR:TIT:13} and EMaC \cite{CC:TIT:14} in Figs. \ref{fig:experiments}(b) and \ref{fig:experiments}(c), respectively. These figures are from \cite{CC:TIT:14}, obtained under the same setting as ours.
We observe that, for the same $R$, our method generally needs a smaller $M$ than off-the-grid CS and EMaC to achieve a high successful reconstruction rate. This illustrates that empirically our method requires fewer measurements than both off-the-grid CS and EMaC for the exact reconstruction of complex sinusoid signals. Finally and importantly, our method does not need a separation condition on the frequencies to guarantee a successful recovery.
\begin{figure}
\begin{center}
\subfigure[Our method: Hankel nuclear norm minimization with random Gaussian projections.]%
{\includegraphics[width=.33\textwidth]{ours.eps}}
\subfigure[Off-the-grid CS \cite{TBSR:TIT:13}: Atomic norm minimization with non-uniform sampling of entries.]%
{\includegraphics[width=.31\textwidth]{Atomic.eps}}
\subfigure[EMaC \cite{CC:TIT:14}: Hankel nuclear norm minimization with non-uniform sampling of entries.]%
{\includegraphics[width=.31\textwidth]{EMaC.eps}}
\end{center}
\caption{Numerical Results.}\label{fig:experiments}
\end{figure}
\section{Introduction}
The intersection of magnetism with topological electronic states has become an exciting area in condensed matter physics. A variety of exotic quantum states have been predicted to emerge, such as the quantum anomalous Hall effect, Weyl semimetals, and axion insulators, although only a few experimental realizations have been found to date. For example, the introduction of bulk ferromagnetic (FM) order in a topological insulator (TI) has been shown to induce the quantum anomalous Hall effect at very low temperatures ($\approx$10\,mK) in thin films of a TI with dilute magnetic doping, such as (Bi$_{1-y}$Sb$_y$)$_{2-x}$Cr$_x$Te$_3$.\cite{chang2013experimental} In this situation, FM order preserves the bulk electronic gap (a FM insulator) and also gaps the spin-momentum locked Dirac-like surface states, producing dissipationless edge modes in the absence of an applied magnetic field. FM order can also close the bulk gap in a TI through exchange coupling, inducing a gapless Weyl semimetal with topologically protected bulk chiral electronic states. In both cases, the breaking of time-reversal symmetry by the magnetic order is key to the unusual topological properties.
Another interesting approach is to consider the effect of antiferromagnetic (AFM) order in topological materials. In this case, time-reversal symmetry is broken, but the combination of time reversal and a half-lattice translation is not, which leads to a Z$_2$ topological classification. Such AFM-TIs are predicted to host unusual quantum axion electrodynamics at the surface.\cite{mong2010antiferromagnetic} However, it is extremely rare to find naturally grown multilayers in which magnetic (either ferromagnetic or antiferromagnetic) and topological phases coexist and are intimately coupled to each other.
It has recently been proposed that MnBi$_2$Te$_4$ may be the first example of an AFM-TI.\cite{otrokov2017highly,zhang2018topological,otrokov2018prediction} MnBi$_2$Te$_4$ is based on the Bi$_2$Te$_3$ tetradymite structure common to the well-known topological insulators. The tetradymite structure is rhombohedral and consists of van der Waals bonded quintuple layers with a Te-Bi-Te-Bi-Te sequence. In MnBi$_2$Te$_4$, an additional Mn-Te layer is inserted, Te-Bi-Te-Mn-Te-Bi-Te, forming a septuple layer. Magnetic measurements confirm that the Mn ions adopt a high-spin S=5/2 state of the 2+ valence with a large magnetic moment of $\sim$5\,$\mu_B$ and also indicate an AFM transition at 24\,K.\cite{otrokov2018prediction} Therefore, MnBi$_2$Te$_4$ offers a unique natural heterostructure of antiferromagnetic planes intergrown with layers of topological insulators. First-principles calculations, magnetic measurements, and X-ray magnetic circular dichroism measurements predict an A-type magnetic structure with FM hexagonal layers coupled antiferromagnetically along the \textit{c}-axis. However, a confirmation of the magnetic structure by neutron diffraction is still absent, possibly due to the difficulty of synthesizing polycrystalline samples and growing sizable single crystals.\cite{zeugner2018chemical}
In this work, we report the growth of sizable single crystals of MnBi$_2$Te$_4$ out of a Bi-Te flux. The as-grown MnBi$_2$Te$_4$ crystals have an electron concentration of 5.3$\times$10$^{20}$cm$^{-3}$ at room temperature and exhibit antiferromagnetic order at T$_N$=24\,K. Our neutron powder and single crystal diffraction measurements confirm the previously proposed A-type antiferromagnetic order with ferromagnetic planes coupled antiferromagnetically along the \textit{c}-axis. The ordered moment is 4.04(13)$\mu_{B}$/Mn at 10\,K and aligned along the crystallographic \textit{c}-axis. The magnetic order affects both the electrical and thermal conductivity. However, we observed no anomaly in the temperature dependence of thermopower around T$_N$.
\begin{figure} \centering \includegraphics [width = 0.40\textwidth] {Yan-948D.jpg}
\caption{(color online) Photograph of a single crystal of MnBi$_2$Te$_4$ on a mm grid.}
\label{picture-1}
\end{figure}
\section{Experimental details}
MnBi$_2$Te$_4$ single crystals were grown by a flux method. Mixtures of Mn (Alfa Aesar, 99.99\%), Bi pieces (Alfa Aesar, 99.999\%), and Te shot (Alfa Aesar, 99.9999\%) in the molar ratio of 1:10:16 (MnTe:Mn$_2$Te$_3$=1:5) were placed in a 2\,ml alumina growth crucible of a Canfield crucible set,\cite{canfield2016use} heated to 900$^\circ$C, and held for 12\,h. After slowly cooling across a $\approx$10 degree window below 600$^\circ$C over two weeks, the excess flux was removed by centrifugation above the melting temperature of Bi$_2$Te$_3$ (585$^\circ$C). We also tested other ratios of starting materials and found that MnTe:Mn$_2$Te$_3$=1:5 gives large crystals and a reasonable yield. Crystals produced by this flux method were typically a few mm on a side and often grew in thick, blocklike forms with thicknesses up to 2\,mm, but are easily delaminated into thin sheets. Figure\,\ref{picture-1} shows a photograph of one crystal. Most crystals weigh about 20\,mg per piece, but the thick crystals can be over 50\,mg.
We performed elemental analysis on both the as-grown and freshly cleaved surfaces using a Hitachi TM-3000 tabletop electron microscope equipped with a Bruker Quantax 70 energy dispersive x-ray system. The elemental analysis confirmed that the Mn:Bi:Te ratio is 14.4:28.6:57.0 in the crystals. Magnetic properties were measured with a Quantum Design (QD) Magnetic Property Measurement System in the temperature range 2.0\,K$\leq$T$\leq$\,300\,K. The temperature- and field-dependent electrical resistivity data were collected using a 9\,T QD Physical Property Measurement System (PPMS). One single crystal with dimensions of 1.20\,mm\,$\times$\,0.75\,mm\,$\times$\,7\,mm was selected for the thermal conductivity measurement using the thermal transport option (TTO) of the 9\,T PPMS. Silver epoxy (H20E Epo-Tek) was utilized to provide mechanical and thermal contacts during the thermal transport measurements. The thermal conductivity measurement was performed with the heat flow in the \textit{ab}-plane.
The scanning tunneling microscopy (STM) and spectroscopy (STS) measurements were performed at 4.5\,K in an Omicron UHV-LT-STM with a base pressure of 1$\times$10$^{-11}$\,mbar. Electrochemically etched tungsten tips were characterized on a Au(111) surface. The MnBi$_2$Te$_4$ single crystals were cleaved in situ at room temperature and then transferred immediately into the cold STM head for measurements. The dI/dV spectra were measured with the standard lock-in technique with a modulation frequency f = 455\,Hz and a modulation amplitude V$_{\mathrm{mod}}$ = 20\,mV.
Neutron powder diffraction was performed on the time-of-flight (TOF) powder diffractometer,
POWGEN, located at the Spallation Neutron Source at Oak
Ridge National Laboratory. The powder sample used for the neutron diffraction measurements was synthesized by annealing at 585$^\circ$C for a week the homogeneous stoichiometric mixture of the elements quenched from 900$^\circ$C.\cite{zeugner2018chemical} X-ray powder diffraction performed on a PANalytical X'Pert Pro MPD powder X-ray diffractometer using Cu K$_{\alpha1}$ radiation found weak reflections from MnTe$_2$. The room temperature lattice parameters are a=4.3243(2)\,${\AA}$, c=40.888(2)\,${\AA}$, consistent with previous reports.\cite{lee2013crystal,zeugner2018chemical} Magnetic measurement confirmed the polycrystalline sample orders antiferromagnetically at T$_N$=24\,K. Around 2.1 g
of powder was loaded in a vanadium container, and the POWGEN Automatic Changer was used to access the temperature region of 10$-$300\,K. The data were collected with neutrons of a central wavelength of 1.5\,$\AA{}$. Symmetry-allowed magnetic structures were analyzed using the Bilbao crystallographic server.\cite{aroyo2006bilbao,gallego2012magnetic} All of the neutron diffraction data were analyzed using the Rietveld refinement program of the FULLPROF suite.\cite{rodriguez1993recent}
Single-crystal neutron diffraction experiments were carried out on the triple-axis spectrometer (TRIAX) located at the University of Missouri Research Reactor (MURR). The TRIAX measurements utilized an incident energy of $E_i=14.7$ meV using a pyrolytic graphite (PG) monochromator system and a PG analyzer stage. PG filters were placed before and after the second monochromator to reduce higher-order contamination in the incident beam, achieving a ratio $I_{\lambda/2}:I_{\lambda}\approx 10^{-4}$. The beam divergence was defined by collimators of 60'-60'-40'-80' between the reactor source and monochromator, monochromator and sample, sample and analyzer, and analyzer and detector, respectively. A 14 mg MnBi$_2$Te$_4$ crystal was attached to the cold tip of an Advanced Research Systems closed-cycle refrigerator and cooled to a base temperature of 6.7 K. The sample was mounted with the (1,0,L) plane in the neutron scattering plane, with lattice parameters $a=4.303$\,{\AA} and $c=40.231$\,{\AA} at base temperature. The crystal that we examined shows some twinning, with two inequivalent domains that are rotated by 120$^\circ$. These twinned grains are identified by Bragg peaks along (H0L) that do not conform to the reflection condition -H+L=3n for the R-3m space group, but are rather (0KL) reflections from the twin, which obeys the K+L=3n condition.
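The reflection conditions used here to identify the twin domains both follow from the general rhombohedral-centering rule $-H+K+L=3n$ (obverse setting), which reduces to $-H+L=3n$ on (H0L) and to $K+L=3n$ on (0KL). A minimal sketch (the specific indices below are our own illustrative choices):

```python
def r_allowed(h, k, l):
    """General reflection condition for the obverse setting of R-3m:
    -h + k + l = 3n."""
    return (-h + k + l) % 3 == 0

# On (H0L) this reduces to -H + L = 3n; (104) is allowed, (105) is not:
assert r_allowed(1, 0, 4) and not r_allowed(1, 0, 5)
# The nominal (105)/(108) peaks are (015)/(018) of the twin, obeying K + L = 3n:
assert r_allowed(0, 1, 5) and r_allowed(0, 1, 8)
```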
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {contamination.pdf}
\caption{(color online) Temperature dependence of magnetization measured in a magnetic field of 1\,kOe perpendicular to the crystallographic $c$-axis. The curves are shifted for clarity. The ferromagnetic signal disappears after cutting off the edges where the magnetic impurity Bi$_{2-x}$Mn$_x$Te$_3$ tends to stay.}
\label{Contamination}
\end{figure}
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {Mag.jpg}
\caption{(color online) (a,b) Temperature dependence of magnetization in various magnetic fields up to 70\,kOe perpendicular (H//ab) and parallel (H//c) to the crystallographic $c$ axis, respectively. (c) Suppression of T$_N$ with increasing magnetic fields. It should be noted that a metamagnetic transition occurs around 35\,kOe when the magnetic field is applied along the crystallographic \textit{c}-axis. The solid curves are a guide to the eye. (d) The field dependence of magnetization at different temperatures.}
\label{Mag}
\end{figure}
\section{Results and discussion}
\subsection{Magnetic and transport properties}
As reported previously,\cite{zeugner2018chemical,lee2013crystal} MnBi$_2$Te$_4$ can be synthesized only in a narrow temperature window around 600$^\circ$C, which makes the crystal growth rather challenging. Our growth strategy takes advantage of the low melting temperature of the Bi-Te mixture. The melting temperature of Bi$_2$Te$_3$ is 585$^\circ$C, which is in the temperature range where MnBi$_2$Te$_4$ is stable and makes possible the separation of crystals from flux by decanting. One concern with this growth strategy is that Bi$_{2-x}$Mn$_x$Te$_3$ melt might stay on the surface of the crystals and contribute a weak ferromagnetic signal at low temperatures. Fig.\,\ref{Contamination} shows the temperature dependence of the magnetic susceptibility of one MnBi$_2$Te$_4$ crystal cleaned in different ways, as described below. The measurement was performed in a field of 1\,kOe applied perpendicular to the crystallographic \textit{c}-axis. For the as-grown crystal, there is a weak ferromagnetic signal below T$_c\sim$13\,K, which coincides with the ferromagnetic ordering temperature of Bi$_{2-x}$Mn$_x$Te$_3$.\cite{hor2010development} The measurement was then performed on the same piece of crystal after peeling off the surface layers. The persistence of the low-temperature ferromagnetism suggests a negligible amount of Bi$_{2-x}$Mn$_x$Te$_3$ on the surfaces of the MnBi$_2$Te$_4$ crystals. We further cleaned the crystals by cutting off the edges using a sharp surgical blade. The resulting nearly temperature-independent magnetic susceptibility below T$_N$=24\,K suggests that the magnetic impurity Bi$_{2-x}$Mn$_x$Te$_3$ tends to stay on the edges of the crystals. This is similar to the contamination of NdFeAsO single crystals by NdAs that we observed before.\cite{yan2011contamination} Therefore, before magnetic measurements, we carefully cleaned the crystals by removing the edges using a surgical blade.
Figures\,\ref{Mag}(a,b) show the temperature dependence of the magnetic susceptibility measured in various magnetic fields applied perpendicular (labelled H//ab) and parallel to the crystallographic \textit{c}-axis, respectively. The anisotropic temperature dependence agrees with a previous report\cite{otrokov2018prediction} and suggests antiferromagnetic order at T$_N$=24\,K. With increasing magnetic fields, the magnetic order is suppressed to lower temperatures. The suppression of T$_N$ with increasing magnetic fields is summarized in Fig.\,\ref{Mag}(c). A spin-flop transition occurs around 35\,kOe when the field is applied along the crystallographic \textit{c}-axis. This is better illustrated by the M(H) curves shown in Fig.\,\ref{Mag}(d). A linear field dependence at all measured temperatures was observed when the magnetic field is applied perpendicular to the crystallographic \textit{c}-axis. The data collected at 2\,K are shown as an example. When the magnetic field is applied parallel to the \textit{c}-axis, a metamagnetic transition is observed when the field is larger than 35\,kOe. At 20\,K, the metamagnetic transition occurs in a wider field range around 25\,kOe. The metamagnetic transition disappears when the measurement is performed at a temperature above T$_N$. The observed metamagnetic transition suggests that the magnetic moments are aligned along the crystallographic \textit{c}-axis, which agrees with the temperature dependence of the magnetic susceptibility and the magnetic structure revealed by neutron diffraction.
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {RT.jpg}
\caption{(color online) Field dependence of resistivity at 2\,K. Around 32\,kOe where a metamagnetic transition occurs, a sharp drop was observed in both $\rho_{xx}$ and $\rho_{xy}$. A weak anomaly was observed around 78\,kOe above which the Mn spins are fully polarized. }
\label{RT}
\end{figure}
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {kappa.jpg}
\caption{(color online) Temperature dependence of thermal conductivity. The electronic thermal conductivity, $\kappa_e$, was estimated from the electrical resistivity data using the Wiedemann-Franz law. The lattice thermal conductivity, $\kappa_{ph}$, was obtained by subtracting $\kappa_e$ from the total thermal conductivity. Inset highlights the details of $\kappa_{ph}$ around T$_N$. The solid curves in the inset are a guide to the eye highlighting the critical scattering.}
\label{kappa}
\end{figure}
The temperature and field dependence of the electrical resistivity was measured in the temperature range 2\,K$\leq$T$\leq$300\,K and in magnetic fields up to 90\,kOe. The temperature and field dependence agrees with previous reports.\cite{lee2018spin,zeugner2018chemical,otrokov2018prediction} From the Hall coefficient at room temperature, the electron density is about 5.3$\times$10$^{20}$cm$^{-3}$, assuming a single carrier band. Figure\,\ref{RT} shows the field dependence of the in-plane electrical and Hall resistivity at 2\,K with the magnetic field applied parallel to the crystallographic \textit{c}-axis. The electrical and Hall resistivity drop sharply around 32\,kOe, where the metamagnetic transition occurs. Around 78\,kOe, a weak anomaly was observed in both curves, above which the Mn spins are fully polarized. The critical fields and the large anomalous Hall effect agree with those reported by Lee et al.\cite{lee2018spin}
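In the single-band picture, the carrier density follows from the Hall coefficient as $n=1/(e|R_H|)$. A short sketch (the input $R_H$ below is a hypothetical value chosen only to be consistent with the quoted density, not a measured number):

```python
e = 1.602176634e-19   # elementary charge, C

def carrier_density(R_H):
    """Single-band electron density n = 1 / (e |R_H|), with R_H in m^3/C."""
    return 1.0 / (e * abs(R_H))

# Hypothetical Hall coefficient consistent with n ~ 5.3e20 cm^-3:
n = carrier_density(-1.18e-8)     # in m^-3
print(n / 1.0e6)                  # ~5.3e20 cm^-3
```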
Figure \ref{kappa} shows the temperature dependence of the thermal conductivity, $\kappa$(T), in the temperature range 2\,K$\leq$T$\leq$300\,K. The thermal conductivity is low in the whole temperature range and weakly temperature dependent. The room-temperature value of $\sim$3\,W/(K\,m) is comparable to that of a typical n-type Bi$_2$Te$_3$. The low thermal conductivity signals strong scattering from electrons, magnetic fluctuations, and lattice defects. As presented later, our STM measurement found about 3\% Mn$_{Bi}$ antisite defects, which might serve as effective phonon scatterers due to the large mass difference between Mn and Bi. Without considering possible heat conduction by magnetic excitations, the lattice thermal conductivity, $\kappa_{ph}$, can be estimated by subtracting the electronic thermal conductivity from the total thermal conductivity. The electronic thermal conductivity, $\kappa_e$, can be estimated from the electrical resistivity data assuming the Wiedemann-Franz law is valid: $\kappa_e$=LT/$\rho$, where L is the Lorenz constant taken to be 2.44$\times$10$^{-8}$\,V$^2$/K$^2$, T is the absolute temperature, and $\rho$ is the electrical resistivity. $\kappa_e$ is small and decreases on cooling in the whole temperature range studied. $\kappa_{ph}$ follows the temperature dependence of the total thermal conductivity. A critical scattering behavior is observed around T$_N$=24\,K. The dip-like feature is highlighted in the inset of Fig.\,\ref{kappa}. Thermal conductivity studies of antiferromagnetic insulators have demonstrated a critical scattering effect that induces a dip in $\kappa$(T) with a minimum in the region of the magnetic transition.\cite{slack1961thermal,slack1958thermal,lewis1973thermal} NiO and CoO are two typical examples showing this critical scattering effect in the $\kappa$(T) curve. As pointed out by Carruthers,\cite{carruthers1961theory} the anomalous dip in $\kappa$(T) is not a general phenomenon for all antiferromagnets due to the varied lattice dynamics in different materials. For example, the perovskite KCoF$_3$ shows a dip anomaly in $\kappa$(T) at T$_N$, whereas KMnF$_3$ shows a minimum at T$_N$ and a glass-like thermal conductivity above T$_N$.\cite{suemune1964thermal} However, the critical scattering illustrated in the inset of Fig.\,\ref{kappa} suggests that the dominant role of the spin system is as an additional phonon scattering mechanism and that the spin-lattice coupling in MnBi$_2$Te$_4$ is strong.
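The Wiedemann-Franz estimate of $\kappa_e$ is a one-line computation. A sketch (the resistivity value below is hypothetical, for illustration only, since the measured $\rho$(T) data are not tabulated here):

```python
# Electronic thermal conductivity via the Wiedemann-Franz law,
# kappa_e = L * T / rho, with the Lorenz constant used in the text.
L = 2.44e-8                      # Lorenz constant, V^2 / K^2

def kappa_e(T, rho):
    """T in K, rho in Ohm*m; returns kappa_e in W / (K m)."""
    return L * T / rho

# rho = 1e-5 Ohm*m is a hypothetical value for illustration only:
print(round(kappa_e(300.0, 1.0e-5), 3))   # ~0.732 W / (K m)
```

The lattice contribution then follows as $\kappa_{ph}=\kappa-\kappa_e$ at each temperature.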
Magnetic excitations can also carry heat, especially in low dimensional systems. A famous example is the spin-ladder compound Ca$_9$La$_5$Cu$_{24}$O$_{41}$ in which the thermal conductivity parallel to the ladder direction is nearly two orders of magnitude higher than the thermal conductivity perpendicular to the ladder direction.\cite{hess2001magnon} Similar magnetic heat transport has also been reported in (quasi-)2-dimensional materials such as La$_2$CuO$_4$.\cite{yan2003thermal} Considering the 2-dimensional arrangement of Mn-sublattice, heat transport by magnetic excitations is likely and this deserves further careful investigation.
Figure\,\ref{Thermopower} shows the evolution of the thermopower, $\alpha$(T), with temperature. At room temperature, $\alpha$(T) has a value of -16\,$\mu$V/K. The negative sign of $\alpha$(T) signals electron-dominated charge transport, and the small absolute value signals a high electron concentration, consistent with the Hall data and the STS results presented later. The absolute value of $\alpha$(T) decreases linearly upon cooling from room temperature to 2\,K. This linear temperature dependence corresponds to the characteristic diffusion thermopower of a metal. A careful measurement around T$_N$ (see inset of Figure\,\ref{Thermopower}) found no response of $\alpha$(T) to the magnetic order. $\alpha$(T) was also measured in magnetic fields up to 90\,kOe applied perpendicular to the crystallographic \textit{c}-axis. However, no magnetothermopower was observed in the temperature range 2\,K$\leq$T$\leq$80\,K. The thermopower is proportional to the logarithmic derivative of the density of states (DOS) with respect to energy at the Fermi level, and it is sensitive to the asymmetry of the DOS near the Fermi level. The absence of any anomaly in $\alpha$(T) across T$_N$ suggests that the A-type antiferromagnetic order either does not modify the electronic band structure or that the asymmetry of the DOS is maintained even though the band structure is changed across the magnetic order. In the latter case, the magnetic ordering would have a larger effect on the electrical resistivity beyond the reduction of spin-disorder scattering. It would be interesting to measure the thermal conductivity and thermopower in magnetic fields parallel to the \textit{c}-axis to probe the effects of canted magnetism on the bulk properties.
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {Thermopower.jpg}
\caption{(color online) Temperature dependence of thermopower. Inset shows a more careful measurement around T$_N$ which shows no anomaly.}
\label{Thermopower}
\end{figure}
\subsection{STM/STS}
Fig.\,\ref{STM-1} shows an STM image of a cleaved MnBi$_2$Te$_4$ single crystal terminating with the Te surface. Two types of defects can be observed on the surface: bright circular protrusions and dark clover-shaped depressions. Presumably, they are, respectively, Bi$_{Te}$ antisites in the first layer and Mn occupying Bi sites (Mn$_{Bi}$) in the second layer, as assigned by previous STM work on the topological insulator Bi$_2$Se$_3$ \cite{dai2016toward} and on Mn-doped Bi$_2$Te$_3$ \cite{hor2010development}. By counting the number of Mn$_{Bi}$ defects, it is estimated that Mn occupies about 3\% of the Bi sites in the second layer. In Mn-doped Bi$_2$Te$_3$, 1\% of Mn doping is sufficient to generate ferromagnetism.\cite{lee2014ferromagnetism} However, we did not notice any anomaly in the temperature dependence of the magnetic susceptibility of a well-cleaned crystal. The hexagonal Bragg peaks in the Fourier transformation of the STM image reveal the hexagonal lattice formed by the Te atoms, from which the lattice constant is estimated to be 4.3\,${\AA}$. The local density of states (LDOS) is measured by the spatially averaged conductance spectrum. The valence band maximum (VBM) and the conduction band minimum (CBM) are located at around -0.5 and -0.2 eV, respectively. This is consistent with the recent ARPES results [4]. The finite LDOS inside the band gap indicates the possible existence of topological surface states.
\begin{figure} \centering \includegraphics [width = 0.47\textwidth] {STM.pdf}
\caption{(color online) (a) the STM image of the cleaved MnBi$_2$Te$_4$ terminating with the Te surface. Inset shows the Fourier transformation of the STM image. (b) The local density of states (LDOS) is measured by the spatially averaged conductance spectrum.}
\label{STM-1}
\end{figure}
\subsection{Neutron diffraction}
In order to determine the magnetic structure and ordered moment, we first performed neutron powder diffraction. The diffraction patterns at 100 K and 10 K are shown in Fig.\,\ref{fig:Neutron} (a) and
(b), respectively. Rietveld analysis confirms the trigonal structure with space group \textit{R-3m}
(No.\,166), consistent with a previous report.\cite{lee2013crystal} About 5\,wt\% of MnTe$_{2}$ was identified in the sample.
Our neutron diffraction results show no change in the crystal structure of this compound
down to 10 K. The refined atomic positions and lattice constants at
100 K and 10 K are summarized in Table I.
At 10\,K (see Fig.\,\ref{fig:Neutron}(b)), neutron diffraction reveals additional reflections that are absent at 100\,K and are of magnetic origin. These magnetic reflections can be indexed with a propagation vector \textit{\textbf{k}} =(0,0,1/2). Symmetry-allowed magnetic space groups are analyzed by the Bilbao crystallographic server to create
the PCR file for refinement. We used one magnetic unit cell ($a\times b \times 2c$) to refine both nuclear
and magnetic peaks to obtain the lattice information and magnetic structure
simultaneously. The refinement confirms an A-type antiferromagnetic order consisting of ferromagnetic layers coupled antiferromagnetically along the \textit{c}-axis with the magnetic space group P$_c$-3c1 (No.\,165.96). Refinements at 10\,K find an ordered moment of 4.04(13)$\mu_{B}$/Mn that is aligned along the crystallographic \textit{c}-axis. The determined magnetic structure is displayed in Fig.\,\ref{fig:Neutron}(c), consistent with previous theoretical predictions by density functional theory.\cite{eremeev2017competing, otrokov2018prediction} It is worth mentioning that symmetry analysis of the magnetic cell allows for different ordered moments at the two Mn sites at (0,0,0) and (0.6667,0.3333,0.1667), although our refinement suggests the same ordered moment. We also considered other possible magnetic structures, for example, an AFM order with an up-up-down-down stacking of ferromagnetic planes along the \textit{c}-axis, an A-type AFM order with moments in the \textit{ab}-plane, and a G-type AFM order. None of these models provides a reasonable refinement.
\begin{table}
\caption{Refined atomic positions and lattice constants at
T\,=\,100\,K and 10\,K for MnBi$_{2}$Te$_{4}$ with space group $R-3m$ (No.\,166).
Mn: 3\textit{a }(0, 0, 0); Bi: 6\textit{c} (0, 0, z); Te1: 6\textit{c} (0, 0, z); Te2: 6\textit{c} (0, 0, z)
}
\label{tab:lattice}
\begin{tabular}{lllll}
\hline\hline
T & Atom & Atomic position & \textit{a} (\AA{}) & \textit{c} (\AA{}) \\
\hline
100 K & Bi & z = 0.4245(6) & 4.314(6) & 40.741(4) \\
 & Te1 & z = 0.1332(4) & & \\
 & Te2 & z = 0.2940(8) & & \\
10 K & Bi & z = 0.4247(6) & 4.309(7) & 40.679(5) \\
 & Te1 & z = 0.1324(7) & & \\
 & Te2 & z = 0.2943(8) & & \\
\hline\hline
\end{tabular}
\end{table}
\begin{figure}
\centering \includegraphics[width=1\linewidth]{MagneticStructure.pdf} \caption{
(color online) Rietveld refinement fits to neutron diffraction patterns of MnBi$_{2}$Te$_{4}$ at (a) 100 K, and (b) 10 K. The observed
data and the fit are indicated by the open circles and solid lines, respectively. The difference curve is shown at the bottom.
The vertical bars mark the positions of Bragg peaks (nuclear peaks in (a);
both nuclear and magnetic peaks in (b)) for MnBi$_{2}$Te$_{4}$ (top) and
impurity phase MnTe$_{2}$ (bottom). (c) The determined magnetic structure of MnBi$_{2}$Te$_{4}$, with two
coordinates Mn1 (0,0,0) and Mn2 (0.6667,0.3333,0.1667) in one magnetic cell. }
\label{fig:Neutron}
\end{figure}
\begin{figure}
\includegraphics[width=3. in]{n0L-scans.pdf}
\caption{(Color online) (a) Neutron diffraction from single crystal along the (10L) above (square symbols) and below (diamond symbols) T$_N$. We note that peaks at the nominal (105) and (108) indicate that the crystal is twinned, namely these peaks are (015) and (018) from the other domain. (b) (20L) scan at base temperature. Note the extra peaks from the other (02L) domain are marked with asterisk.}
\label{Fig:noL}
\end{figure}
The magnetic structure shown in Fig.\,\ref{fig:Neutron}(c) is further confirmed by neutron single crystal diffraction measurements. Our measurements observed no weak reflections that could not be indexed by the A-type AFM structure described above. Figure\,\ref{Fig:noL}(a) shows diffraction patterns from the single crystal along the (10L) direction (using hexagonal indexing) above (square symbols) and below (diamond symbols) T$_N$, showing emerging half-integer L magnetic Bragg reflections at base temperature, consistent with the powder diffraction results. Similarly, Figure\,\ref{Fig:noL}(b) shows emerging half-integer reflections along (20L). These half-integer reflections and the absence of extra reflections along (00L) at low temperature confirm the doubling of the chemical unit cell due to the antiparallel arrangement of adjacent ferromagnetic Mn planes, where the magnetic moment in each basal plane is along the \textit{c}-axis.
We note that peaks at the nominal (105) and (108) in Fig.\,\ref{Fig:noL}(a) indicate that the crystal is twinned, namely these peaks can be indexed as (015) and (018) reflections from the other domain. Consistent with that, the (20L) scans in Fig.\,\ref{Fig:noL}(b) also show extra peaks from the other domain at (021), (024), and (027) (marked with asterisk). Figure\,\ref{Fig:OP} shows the temperature dependence of the integrated intensity of the magnetic (1 0 2.5) Bragg reflection. A fit to a power law $I\propto(1-T/T_{N})^{2\beta}$ yields T$_N$=24.1(2)\,K and $\beta$=0.35(2). The N\'{e}el temperature agrees well with that determined from magnetic and transport measurements.
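The power-law fit quoted above is a standard nonlinear least-squares problem. The sketch below is illustrative only: it uses synthetic intensities standing in for the measured (1 0 2.5) integrated intensities (which are not tabulated in the text) and shows how such a fit recovers $T_N$ and $\beta$:

```python
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, I0, TN, beta):
    """Magnetic Bragg intensity I = I0 * (1 - T/TN)^(2*beta) below TN."""
    t = np.clip(1.0 - T / TN, 0.0, None)   # zero above TN
    return I0 * t ** (2.0 * beta)

# Synthetic data mimicking the quoted result (TN = 24.1 K, beta = 0.35);
# the real integrated intensities are those of the order-parameter figure.
rng = np.random.default_rng(0)
T = np.linspace(2.0, 23.0, 40)
I = order_parameter(T, 100.0, 24.1, 0.35) * (1.0 + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(order_parameter, T, I, p0=(80.0, 25.0, 0.4))
I0_fit, TN_fit, beta_fit = popt   # close to (100, 24.1, 0.35)
```

Fitting only data safely below the transition, as here, avoids the region where critical fluctuations make the mean-field-like power law a poor description.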
\begin{figure}
\includegraphics[width=3.2 in]{OP-v2.pdf}
\caption{(Color online) (left) Intensity versus temperature of the magnetic (1 0 2.5) including a fit to a power law $I \propto (1-T/T_N)^{2\beta}$ (solid line) that yields $T_{\rm N}= 24.1(2)$ K and $\beta = 0.35(2)$. Inset shows intensity of magnetic Bragg reflections versus momentum transfer below and above $T_{\rm N}$. }
\label{Fig:OP}
\end{figure}
\section{Summary}
In summary, we have successfully grown sizable single crystals of MnBi$_2$Te$_4$ out of a Bi-Te flux. The large crystals make possible the investigation of the intrinsic properties of MnBi$_2$Te$_4$ using various techniques including neutron diffraction and thermal conductivity measurements. Hall and STS measurements suggest the crystals are n-type with a carrier concentration of 5.3$\times$10$^{20}$\,cm$^{-3}$ at room temperature. MnBi$_2$Te$_4$ orders antiferromagnetically at T$_N$=24\,K. Our neutron powder and single crystal diffraction measurements confirm the proposed A-type antiferromagnetic order with ferromagnetic planes coupled antiferromagnetically along the \textit{c}-axis. The ordered moment is 4.04(13)$\mu_{B}$/Mn at 10\,K and aligned along the crystallographic \textit{c}-axis. The electrical resistivity drops upon cooling across T$_N$ due to the reduced scattering. The long range magnetic order also induces a critical scattering effect around T$_N$ in the temperature dependence of thermal conductivity. These changes suggest that the Mn spins are effective scatterers affecting the electrical and thermal transport. No anomaly in thermopower was observed across T$_N$, which indicates that the A-type antiferromagnetic order has negligible effect on the electronic band structure. However, the sharp change of electrical and Hall resistivity when crossing the metamagnetic transition signals strong coupling between the canted magnetism and the bulk band structure. Fine tuning of the magnetism and/or electronic band structure is needed for the proposed topological properties of this compound. The growth protocol reported in this work provides a convenient route to high quality crystals where the electronic band structure and magnetism can be finely tuned by chemical substitutions.
\section{Acknowledgment}
Work at ORNL and Ames Laboratory was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. The STM/STS work is supported by NSF grant DMR-1506618. A portion of this research used resources at Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
This manuscript has been authored by UT-Battelle, LLC, under Contract No.
DE-AC0500OR22725 with the U.S. Department of Energy. The United States
Government retains and the publisher, by accepting the article for publication,
acknowledges that the United States Government retains a non-exclusive, paid-up,
irrevocable, world-wide license to publish or reproduce the published form of this
manuscript, or allow others to do so, for the United States Government purposes.
The Department of Energy will provide public access to these results of federally
sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/
downloads/doe-public-access-plan).
\section{References}
\bibliographystyle{apsrev4-1}
\section{Introduction}
\label{sec:intro}
Neutron stars or black-holes in high-mass X-ray binaries (HMXBs) accrete gas from the stellar wind of their massive, OB-type, stellar companions. A fraction of the gravitational potential energy is converted into X-rays, ionizing and heating the surrounding gas. The X-ray emission can be used to investigate the structure of the stellar wind \emph{in situ} \citep{Walter07Winds}.
Vela X-1 (=4U 0900$-$40) is a classical persistent and eclipsing super-giant High Mass X-ray Binary (sgHMXB).
The system consists of an evolved B 0.5 Ib supergiant (HD 77581) and of a massive neutron star
\citep[M$_{NS}=1.86$ M$_{\odot}$;][]{Quaintrell_et_al03}.
The neutron star orbits its massive companion with a period of about 8.9 days,
in a nearly circular orbit \citep[$e\approx 0.09$;][]{1997ApJS..113..367B} with a radius of $\alpha=1.76\,R_{*}$. The stellar wind is characterized by a mass-loss rate of $\sim 4\times10^{-6}$ M$_{\odot}$ yr$^{-1}$ \citep{1986PASJ...38..547N} and a wind terminal velocity of $\upsilon_{\infty}\approx 1700$ km s$^{-1}$ \citep{1980ApJ...238..969D}. The X-ray luminosity is typically $\sim 4 \times 10^{36}$ erg s$^{-1}$, although high variability can be observed \citep{1999A&A...341..141K}.
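As a quick consistency check, the quoted orbital period, separation, and component masses (donor parameters as listed in Table \ref{tab:VELAparams}) satisfy Kepler's third law. A minimal sketch, assuming standard cgs constants:

```python
import math

G     = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
R_SUN = 6.957e10    # solar radius, cm

m_total = (23.1 + 1.86) * M_SUN   # donor + neutron star (Table 1)
a = 1.76 * 30.0 * R_SUN           # orbital separation: 1.76 R* with R* = 30 R_sun

# Kepler's third law: P = 2*pi*sqrt(a^3 / (G * M_total))
P_days = 2.0 * math.pi * math.sqrt(a ** 3 / (G * m_total)) / 86400.0   # ~8.9 d
```

The result reproduces the $\approx 8.9$-day orbital period quoted above.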
Recent studies on the hard X-ray variability of Vela X-1 have revealed a rich phenomenology including flares and short off-states \citep{Kreykenbohm+08}.
Both flaring activity and off-states were interpreted as the effect of a strongly structured wind. \citet{Furstetal10} characterized the X-ray variability of Vela X-1 with a log-normal distribution, interpreted in the context of a clumpy stellar wind. Off-states have been interpreted \citep{Kreykenbohm+08} as an evidence for the propeller effect \citep{1975A&A....39..185I}, possibly accompanied by leakage through the magnetosphere \citep{2011A&A...529A..52D}. The quasi-spherical subsonic accretion model \citep{2012MNRAS.420..216S,2013MNRAS.428..670S} is an alternative, predicting that the repeatedly observed ‘off-states’ in Vela X-1 are the result of a transition from Compton to radiative cooling (higher and lower luminosity, respectively).
In this paper we present new results from 2-D hydrodynamic simulations of Vela X-1 and conclude
that the observed phenomenology can be explained qualitatively without intrinsic clumping or the propeller effect. The code and the simulations are described in Sect. \ref{sec:hydro}. The simulation results are presented and compared to observations in Sect. \ref{sec:results}, and discussed in Sect. \ref{sec:discussion} and \ref{sec:conclude}.
\section{Hydrodynamic Simulations}
\label{sec:hydro}
\subsection{The hydrodynamic code}
The motion of the fluid is described by the Euler equations assuming mass, momentum, and energy conservation. The internal energy in each cell is described by the first law of thermodynamics. The following set of equations are therefore solved in a fixed, non-uniform mesh:
\begin{equation}
\partial_{t}\rho +\nabla \cdot (\rho \mathbf{u})=0
\end{equation}
\begin{equation}
\partial_{t}(\rho \mathbf{u} )+ \nabla \cdot (\rho \mathbf{u} \mathbf{u} )
+\nabla P = \mathbf{F}
\end{equation}
\begin{equation}
\partial_{t} E +\nabla \cdot (
E \mathbf{u} + P \mathbf{u} ) = \mathbf{u} \cdot \mathbf{F}
\end{equation}
where we used $\partial_{t}=\partial/ \partial t$.
The primary variables of the simulations (the mass density $\rho$, gas pressure $P$, and fluid velocity $\mathbf{u}$) are
completed by the total energy $E=\frac{1}{2} \rho \mathbf{u}^{2}+\rho e$ and the force $\mathbf{F}$, accounting for the Roche potential and the line driven force (see Sect. \ref{sec:sw}). The specific internal energy ($e$) is related to the pressure through the equation of state, $P=\rho e (\gamma - 1)$, where $\gamma=5/3$ is the ratio of specific heats.
The VH-1\footnote{http://wonka.physics.ncsu.edu/pub/VH-1/} hydrodynamic code is described in detail in \citet{Blondin90,Blondin91}. The simulations of Vela X-1 take into account the gravity of the primary and of the neutron star, the radiative acceleration \citep[CAK hereafter]{CAKwind} of the stellar wind of the donor star, and the suppression of the stellar wind acceleration due to high ionization within the Str\"{o}mgren sphere of the neutron star. The parameters are listed in Table \ref{tab:VELAparams}.
The equations are solved in the orbital plane of the co-rotating reference system, where the lateral extent ($\theta$ component) of the spherical mesh is
only one cell.
We have also assumed a circular orbit and a synchronous rotation. The code uses the piecewise parabolic method for shock hydrodynamics developed by \citet{PPMCW}.
A computational mesh of 900 radial by 347 angular zones, extending from 1 to $\sim$ 25 R$_{*}$ and in
angle from $-\pi$ to $+\pi$, has been employed. The grid is non-uniform, allowing for higher resolution (down to $\sim 10^{9}$ cm) towards the neutron star, and is centered on the center of mass.
The initial wind density, velocity, pressure and CAK parameters of the 2D simulation are set by the results of a 1D CAK/Sobolev simulation (also extending up to 25 R$_{*}$) of a single star and resulting in the standard $\beta\approx 0.8$ velocity law. In the binary potential, the wind takes a few days of simulation to relax and find a new equilibrium. The first 3 days of the simulations are therefore excluded from the variability analysis.
The wind reaching the radial outermost part of the mesh is characterized as an outflow boundary condition i.e. it leaves the simulation domain. We also assume that the wind entering the cell representing the surface of the neutron star is completely accreted, leaving an (almost) zero density and pressure. The matter falling from the surrounding cells is therefore in free fall \citep{1971MNRAS.154..141H,2009ApJ...700...95B}.
The time step of the simulations is $\sim 1/10$ sec and the variables of each cell are stored for each step. The code also calculates the ionization parameter \citep[$\xi= L_{X}/n r_{ns}^2$, where $n$ is the number density at the distance $r_{ns}$ from the neutron star;][]{1969ApJ...156..943T} and the instantaneous mass and angular momentum accreted ($\dot{M}_{acc}$) onto the neutron star. The code was run for about 30 days, i.e. more than three orbits. This is enough for the wind to reach a stable configuration and to study its short time scale variability.
\subsection{Stellar wind acceleration}
\label{sec:sw}
The winds of hot massive stars are characterized observationally by the wind terminal velocity and the mass-loss rate. The velocity is described by the $\beta$-velocity law, $\upsilon=\upsilon_{\infty}(1-R_{*}/r)^{\,\beta}$, where $\upsilon_{\infty}$ is the terminal velocity and $\beta$ is the gradient of the velocity field.
For supergiant stars, wind terminal velocities and mass-loss rates are in the range $\upsilon_{\infty}\sim 1500-3000$ km s$^{-1}$ and $\dot{M}_{\rm w}\sim 10^{-7}$--$10^{-6}$ M$_{\odot}$ yr$^{-1}$, respectively \citep{winds_from_hot_stars}.
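The $\beta$-velocity law is straightforward to evaluate. The sketch below uses the Vela X-1 values adopted in this paper ($\upsilon_{\infty}=1700$ km s$^{-1}$, $\beta=0.8$) and evaluates the unperturbed wind speed at the neutron-star orbital distance ($r=1.76\,R_{*}$):

```python
def wind_speed(r_over_rstar, v_inf=1700.0, beta=0.8):
    """Beta-velocity law v = v_inf * (1 - R*/r)^beta, in km/s."""
    return v_inf * (1.0 - 1.0 / r_over_rstar) ** beta

v_ns  = wind_speed(1.76)    # ~870 km/s at the neutron-star orbit
v_far = wind_speed(20.0)    # approaches v_inf far from the donor
```

This illustrates why the relative wind speed at the orbit is well below $\upsilon_{\infty}$, which in turn sets the size of the accretion radius.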
The stellar winds of massive supergiant stars are radiatively driven by absorbing ultraviolet photons from the underlying photosphere. To properly simulate the radiation force driving the stellar wind, we used the CAK/Sobolev approximation:
\begin{equation}
\label{eq:radforce}
F_{\rm rad}=\frac{\sigma_{e}L_{*}}{4\pi c R^{2}} k K_{FDC} \Big(
\frac{1}{\sigma_{e} \rho u_{th}} \frac{du}{dR}
\Big)^{\alpha},
\end{equation}
where $L_{*}$ is the stellar luminosity, $\sigma_{e}$ is the electron scattering
coefficient ($\approx 0.33$ cm$^{2}$g$^{-1}$) and $u_{th}$ is the thermal velocity of the gas. The parameters CAK-$k$ and CAK-$\alpha$ are constants and correspond to the number and strength of the absorption lines, respectively \citep{CAKwind}.
The effect of finite disk correction \citep{1986ApJ...311..701F} has been accounted through the factor
\begin{equation}
K_{\rm FDC}=\frac{(1+\sigma)^{1+\alpha}-(1+\sigma\mu^{2})^{1+\alpha}}
{(1+\alpha)(1-\mu^{2})\sigma(1+\sigma)^{\alpha}},
\end{equation}
where $\sigma=\frac{d\ln u}{d\ln R}-1$ and $\mu=\big(1-\frac{R_{*}^{2}}{R^{2}}\big)^{1/2}$,
with $R$ and $u$ the radial distance and velocity, respectively.
The finite disk correction produces a shallow $\beta\approx0.8$ velocity law, consistent with observations, rather than a steeper $\beta\approx 0.5$. The wind parameters, including the density at the bottom of the donor star atmosphere ($\rho_{0}$), and the resulting mass-loss rate and wind terminal velocity are listed in Table \ref{tab:VELAparams}.
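For reference, the finite-disk correction factor can be evaluated numerically once $\sigma$ is supplied by a velocity law. The sketch below assumes the $\beta=0.8$ law, for which $d\ln u/d\ln R = \beta/(R/R_{*}-1)$; this is an illustrative stand-in for the self-consistent $\sigma$ computed by the hydrodynamic code:

```python
def k_fdc(x, beta=0.8, alpha=0.58):
    """Finite-disk correction factor K_FDC at R = x * R_star.

    sigma = dln(u)/dln(R) - 1; for u = v_inf*(1 - 1/x)^beta this is
    beta/(x - 1) - 1.  The sigma -> 0 point (x = 1 + beta) is a
    removable singularity where K_FDC -> 1, so avoid evaluating there.
    """
    sigma = beta / (x - 1.0) - 1.0
    mu2 = 1.0 - 1.0 / x ** 2                       # mu^2 = 1 - (R*/R)^2
    num = (1.0 + sigma) ** (1.0 + alpha) - (1.0 + sigma * mu2) ** (1.0 + alpha)
    den = (1.0 + alpha) * (1.0 - mu2) * sigma * (1.0 + sigma) ** alpha
    return num / den

k_near = k_fdc(1.1)   # < 1: force reduced close to the photosphere
k_far  = k_fdc(10.0)  # ~ 1: point-source limit recovered at large radii
```

The suppression of the driving force near the photosphere ($K_{\rm FDC}<1$) is what flattens the velocity law from $\beta\approx0.5$ to $\beta\approx0.8$.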
The radiation force is known to be unstable \citep{1984ApJ...284..337O}, generating inhomogeneities and clumps \citep{Owocki+88,2013MNRAS.428.1837S}.
In a close binary system the neutron star is a driver of the hydrodynamics and models combining binarity and intrinsic instabilities are still to be developed.
\begin{table}[h]
\caption{Parameters of the simulations.}
\label{tab:VELAparams}
\centering
\begin{tabular}{p{6cm} l c }
\hline
\hline
Parameter & Value\rule{0pt}{2.6ex} \\
\hline
\hline
\emph{Donor star Parameters\rule{0pt}{2.6ex}} & \\
M$_{*}$ & 23.1 M$_{\odot}$ \\
R$_{*}$ & 30 R$_{\odot}$ \\
L$_{*}$ & $2.5\times 10^{5}$ L$_{\odot}$\\
T$_{*}$ & 40000 K \\
\hline
\emph{Binary parameters\rule{0pt}{2.6ex}} & \\
M$_{NS}$ & 1.86 M$_{\odot}$ \\
$\alpha$ & 1.76 R$_{*}$ \\
L$_{X}$ & $4\times 10^{36}$ erg s$^{-1}$ \\
\hline
\hline
\emph{CAK parameters\rule{0pt}{2.6ex}} & \\
CAK-$\alpha$ & 0.58 \\
CAK-$k$ & 0.80 \\
$\rho_{0}$ & $10^{-11}$ g cm$^{-3}$\\
\hline
\emph{Wind Parameters\rule{0pt}{2.6ex}} & \\
$\dot{\rm{M}}_{W}$& $4\times 10^{-6}\, {\rm M}_{\odot}$ yr$^{-1}$ \\
$\upsilon_{\infty}$ & 1700 km s$^{-1}$ \\
\hline
\end{tabular} \newline
\end{table}
The radiative acceleration of the wind is suppressed in case of X-ray photo-ionization. The effects of the X-ray radiation on the radiative acceleration force are complicated due to the large number of ions and line transitions contributing to the opacity \citep{1982ApJ...259..282A,1990ApJ...365..321S}. Detailed NLTE wind models of the envelope of Vela X-1 show the existence of a photoionized `bubble' around the neutron star filled with stagnating flow \citep{2012ApJ...757..162K}.
Assuming that the gas is in ionization equilibrium, the ionization state can be estimated with the $\xi$ parameter \citep{FarnssonFabian1980,Blondin90}. In our simulations we have defined a critical ionization parameter $(\xi>10^{2.5}$ erg cm sec$^{-1})$ above which most of the elements responsible for the wind acceleration (e.g., C, N, O) are fully ionized \citep{Kallman82} and the radiative force becomes negligible ($F_{\rm rad}=0$ in Eq. \ref{eq:radforce}).
The main effect of the ionization is the reduction of the wind velocity in the vicinity of the neutron star and therefore the enhancement of the mass accretion rate onto the compact object (further increasing the effect). The outcomes of our simulations are not significantly affected by small variations ($\sim$ 20\%) of the critical ionization parameter. However, larger variations (order of magnitude) significantly change the hydrodynamics. The suppression of the acceleration also triggers the formation of a dense wake at the rim of the Str\"{o}mgren zone \citep{FarnssonFabian1980}, and has an impact on the absorption at late orbital phases. X-ray ionization can also affect the thermal state of the wind through X-ray heating and radiative cooling. Such effects are not included in our simulations.
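The ionization switch used in the simulations reduces to a simple threshold test. A minimal sketch (the densities below are illustrative round numbers, not simulation values):

```python
XI_CRIT = 10.0 ** 2.5   # erg cm s^-1: C, N, O essentially fully ionized above this

def ionization_parameter(L_x, n, r_ns):
    """xi = L_X / (n * r_ns^2), in cgs units (Tarter et al. 1969)."""
    return L_x / (n * r_ns ** 2)

def cak_force_suppressed(L_x, n, r_ns):
    """True where the radiative (CAK) force is switched off (F_rad = 0)."""
    return ionization_parameter(L_x, n, r_ns) > XI_CRIT

# illustrative numbers: L_X = 4e36 erg/s, n = 1e9 cm^-3 at r_ns = 1e11 cm
suppressed = cak_force_suppressed(4e36, 1e9, 1e11)   # xi = 4e5 >> 10^2.5
```

Because the suppression raises the local accretion rate and hence $L_X$, the switch is inherently a positive-feedback element of the model.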
\section{Results}
\label{sec:results}
The observed hard X-ray light-curves of Vela X-1 were obtained from the INTEGRAL \citep{integral-ref} soft $\gamma$-ray imager ISGRI \citep{isgri-ref} in the 20-60 keV energy band and from the PCA \citep{PCA-ref2} detector on board RXTE in the 10-50 keV energy band (lower energies were excluded not to be affected by the variable absorption). The light-curves were obtained using the HEAVENS\footnote{http://www.isdc.unige.ch/heavens} interface \citep{heavens-ref}. The temporal resolutions of the ISGRI and PCA lightcurves are $\sim$ 40 mins and 1 min, respectively. We excluded data obtained during eclipses, using the orbital solution derived from \citet{Kreykenbohm+08}.
The simulated X-ray lightcurve has been obtained from the instantaneous mass accretion rate (L$_{acc}=\eta \dot{\rm M}_{acc}$ c$^{2}$), using a radiative efficiency of $\eta \approx 0.1$. Figure \ref{fig:velaLC} shows a fraction of the simulated light-curve. A number of off-states can be observed as well as flares reaching $\ga 10^{37}$ erg s$^{-1}$.
Figure \ref{fig:offstates} shows a zoom on one of the off-states, where the light-curve has been convolved with a sinusoid of period 283 sec to account
for the spin of the Vela X-1 pulsar.
The properties of the simulated lightcurve and its comparison with the observations are described in the next subsections.
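The conversion from instantaneous mass-accretion rate to X-ray luminosity is a one-line formula; a sketch, also inverting it for the mean luminosity quoted in the text:

```python
C = 2.998e10   # speed of light, cm/s

def accretion_luminosity(mdot, eta=0.1):
    """L_acc = eta * Mdot_acc * c^2 (erg/s for Mdot in g/s)."""
    return eta * mdot * C ** 2

# mass-accretion rate implied by the mean luminosity of ~4e36 erg/s
mdot_mean = 4e36 / (0.1 * C ** 2)   # ~4.5e16 g/s
```

At this rate, a 25-minute flare releasing $\sim10^{39}$ erg corresponds to the accretion of roughly $10^{19}$ g, the figure quoted later in the discussion.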
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,angle=0]{figures/LC_HR2TR5_LINEAR_PART.pdf}
\caption{ A fraction of the simulated light-curve, spanning 2.5 days (about 30\% of an orbit).}
\label{fig:velaLC}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,angle=0]{figures/PDF_lognorm_all_NOTfit_CENTERPEAK.pdf}
\caption{ The X-ray luminosity distribution of Vela X-1 from RXTE (10-50 keV; red) and INTEGRAL (20-60 keV; blue) observations together with that derived from the simulations (black). The simulated lightcurve was re-binned to 30 minutes bin size to match the INTEGRAL data.}
\label{fig:lognorm}
\end{figure}
\subsection{Luminosity distributions}
We have constructed histograms of the observed and simulated luminosities using the complete sample available (excluding the first 3 days of the simulations and the eclipses from the observations). The observed luminosities were derived from the X-ray count rates and responses assuming a distance of $1.9$ kpc \citep{1989PASJ...41....1N},
while $\eta$ was adjusted to the value of 0.107 in order to match the averaged luminosity of the simulation to the observed ones. The histograms are characterized by a peak at $\sim 4\times10^{36}$ erg s$^{-1}$ (Fig. \ref{fig:lognorm}) and normalized to the same maximum amplitude for comparison.
The two histograms derived from the observations are very similar and shaped as a log-normal distribution with a low-luminosity tail. The log-normal standard deviation is $\sigma\approx0.23$. The histogram of the 30-minute binned simulated light-curve is characterized by a narrower distribution $(\sigma=0.18)$; however, more realistic simulations (e.g., including 3D, heating, cooling and clumping effects) may generate more turbulence and match the observations better.
Note that the histogram of the un-binned simulated lightcurve shows increased excess (by 25\%) at low luminosity (Log L$_{X}\sim 35.5$).
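The quoted widths are standard deviations of the luminosity logarithm. The sketch below (a synthetic log-normal sample, and assuming base-10 logarithms) illustrates how such a width is measured from a luminosity sample:

```python
import numpy as np

def lognormal_width(lum):
    """Standard deviation of log10(L), i.e. the log-normal 'sigma'."""
    return np.log10(lum).std()

# illustrative sample drawn from a log-normal centred on 4e36 erg/s
# with the width quoted for the observations (sigma ~ 0.23)
rng = np.random.default_rng(1)
lum = 10.0 ** rng.normal(np.log10(4e36), 0.23, size=20000)
sigma = lognormal_width(lum)      # recovers ~0.23
```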
\subsection{Off states}
\label{sec:offstate}
The simulated light-curves feature a large number of off-states. We defined an off-state when the instantaneous X-ray luminosity dropped below 1/10 of the average luminosity, i.e. $L_{off}\la 4\cdot 10^{35}$ erg s$^{-1} \approx 0.1 \langle L_{X}\rangle$, which is approximately the sensitivity limit of ISGRI in 20 sec.
Figure \ref{fig:map_offstate} shows the density maps and the velocity contours before and during an off-state. The velocity contours are shown for the radial and angular velocities (upper and lower panels, respectively). As the bow shock on the left of the neutron star expands, the mass-accretion rate decreases and an off-state occurs.
The typical size of the bubble sustained by the shock is of the order of $10^{11}$ cm. Inside the bubble, the density drops by at least a factor of $\sim 10$ compared with the time-averaged density. This reduced density cuts off the X-ray emission. The duration of the off-state varies with the size of the bubble.
Figure \ref{fig:duroffstate} shows the distribution of the duration of the off-states in the simulated light-curve.
The typical duration of most of the off-states is about 30 minutes, ranging from 10 minutes to about two hours. Although the number of observed off-states is small,
all of them last between 5 and 30 minutes, in good agreement with the simulations.
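The off-state definition above translates directly into a run-detection routine on a binned lightcurve. A minimal sketch, applied here to a synthetic 20-s binned series rather than the simulation output:

```python
import numpy as np

def off_state_durations(lum, dt, frac=0.1):
    """Durations (sec) of contiguous runs with lum < frac * mean(lum)."""
    below = lum < frac * lum.mean()
    padded = np.concatenate(([False], below, [False])).astype(np.int8)
    edges = np.flatnonzero(np.diff(padded))   # run starts at even, ends at odd
    starts, ends = edges[::2], edges[1::2]
    return (ends - starts) * dt

# synthetic 20-s binned lightcurve: ~4e36 erg/s with one 30-min dip
lum = np.full(1000, 4e36)
lum[400:490] = 1e35               # 90 bins x 20 s = 1800 s off-state
durations = off_state_durations(lum, dt=20.0)
```

Applying such a routine to the full simulated lightcurve yields the duration histogram of Fig. \ref{fig:duroffstate}.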
\begin{figure}
\includegraphics[width=0.45\textwidth]{figures/OffstateFlare.pdf} \\
\caption{ A portion of the simulated light-curve with 20 sec time bins.
The average luminosity of $\sim 4 \times 10^{36}$ erg s$^{-1}$ corresponds to $\sim 70$ cps. This figure
can be directly compared to Fig. 6 in \citet{Kreykenbohm+08}. }
\label{fig:offstates}
\end{figure}
\begin{figure*}
\centering
\hspace{-1cm}
\includegraphics[width=0.5\textwidth,angle=0]{figures/before_v1_r1_t3.pdf}
\includegraphics[width=0.5\textwidth,angle=0]{figures/offstate_v1_r1_t3.pdf}
\\
\vspace{+0.15cm} \hspace{-0.9cm}
\includegraphics[width=0.5\textwidth,angle=0]{figures/before_v2_r1_t3.pdf}
\includegraphics[width=0.5\textwidth,angle=0]{figures/offstate_v2_r1_t3.pdf}
\\
\vspace{+0.15cm}
\includegraphics[width=0.175\textwidth,angle=0]{figures/rho_legend.pdf} \hspace{2cm}
\includegraphics[width=0.155\textwidth,angle=0]{figures/v_legend.pdf}
\caption{Density distribution (in g cm$^{-3}$) before (left columns) and during (right columns) the off-state. The upper and lower panels show the radial and angular velocity contours, respectively. The position of the neutron star is indicated by the black arrow.}
\label{fig:map_offstate}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,angle=0]{figures/duroffstate.pdf}
\caption{Histogram of the duration of the off-states in the complete simulated lightcurve.}
\label{fig:duroffstate}
\end{figure}
\subsection{Flaring activity}
In addition to the off-states we identified prominent flares during the simulations. The flares reach luminosities of $L_{flare}\ga 10^{37}$ erg s$^{-1}$ for durations of $\sim 5-30$ minutes. Figure \ref{fig:offstates} shows the brightest flare reaching $\sim 1.2\times 10^{37}$ erg s$^{-1}\approx 3 \langle L_{X} \rangle$ and lasting for $\sim 25$ minutes. The energy released during this flare is $\sim 10^{39}$ ergs. These flares are compatible, in terms of dynamical range, with the flares of Vela X-1 observed by INTEGRAL \citep{Kreykenbohm+08}; however, they are usually shorter, explaining why the luminosity distribution is narrower (see Fig. \ref{fig:lognorm}). We identified 8 such flares (L$_{X}>10^{37}$ erg s$^{-1}$) in the 30-day simulation while \cite{Kreykenbohm+08} detected 5 flares during an observation of about two weeks.
\subsection{Quasi-Periodicity}
Although the simulated light-curve lacks any periodic signal, we can identify a likely quasi-periodic behaviour related to the spacing of off-states. Figure \ref{fig:LS6820} shows the histogram of the time intervals between successive off-states. The distribution peaks in the range 6500-7000 sec, close to the transient period of $\sim$ 6800 sec detected with INTEGRAL by \citet{Kreykenbohm+08} during $\sim$ 10 hours.
The off-states labelled 1,\, 2,\, 3 and 5 in \citet{Kreykenbohm+08}, which reach less than 10 ct/s, are within $\sim$ 0.1 in phase of the minima of
the extension of the modulation mentioned earlier. It is therefore plausible that the observed off-states and modulations are the signature of a single physical mechanism
driving the variability. In our simulations, these modulations last typically for $8 - 16$ hours and repeat every few days.
Some signal is also detected at multiples of that period suggesting that the density of the bubbles does not always reach the threshold we have defined for an off-state.
A section of the simulated lightcurve featuring a very good coherence with the transient period of 6820 sec is shown in Fig. \ref{fig:vela6820modulo}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,angle=0]{figures/Offstates_delays_new.pdf}
\caption{Histogram of delay between two subsequent off-states using all the available simulated data. A total of 201 off-states have been identified. The dashed vertical line indicates the 6800 sec quasi-periodicity detected in the observations.}
\label{fig:LS6820}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth,angle=0]{figures/LC_6820period1.pdf}
\caption{ A section of the simulated light-curves of Vela X-1 together with a
sine wave of period $6820$ sec (red dashed line).}
\label{fig:vela6820modulo}
\end{figure}
\section{Discussion}
\label{sec:discussion}
We have simulated the accretion flow in Vela X-1 with high spatial and temporal resolutions around the neutron star and found flares and off-states qualitatively in agreement with the observed ones. In particular off-states are regularly produced corresponding to an instability of the bow shock surrounding the neutron star.
In wind-fed HMXB systems \citep{1988ApJ...331L.117T} the accretion flows are complex and characterized by episodic accretion \citep{1990ApJ...358..545S,1991ApJ...376..750S} with an accreted angular momentum varying in direction and close to zero, on average. This is known as the `flip-flop' instability \citep[e.g.][]{1987MNRAS.226..785M,1988ApJ...327L..73T}, interpreted by the formation of transient accretion discs \citep{2009ApJ...700...95B}. We also observed the accreted angular momentum to change sign regularly in our simulations between flares and off-states but cannot characterize them with the usual `flip-flop' instability as the characteristic duration of the variability is much shorter in our case and not linked to the formation of accretion discs.
Our simulations predict the formation of low-density bubbles behind the bow shock, around the neutron star, resulting in the X-ray off-states. These bubbles are $\sim$ 10 times larger than the Bondi-Hoyle-Lyttleton \citep[BHL; see e.g.,][]{2004NewAR..48..843E} accretion radius ($\sim 10^{10}$ cm). The bow shock is not stable: it appears close to the neutron star and moves away up to a distance ($\ga 10^{11}$ cm; see right panel of Fig. \ref{fig:map_offstate}) before gradually
falling back, when a stream of gas eventually reaches the neutron star producing a new rise of the X-ray flux. The accretion stream can move either left- or right-handed.
This `breathing' behaviour is neither perfectly periodic nor continuous, but can be observed most of the time in the simulation and is at the origin of the off-states.
As the internal energy (pressure) of the low density bubble is a fraction ($\sim$ 1/10) of the gravitational potential at the bow shock,
the modulation period ($\sim 6500 - 7000$ sec) is comparable to the free-fall time ($\sim$ 2000 sec) at this position.
Neither the free-fall time from the BHL accretion radius ($t_{ff}\sim 2$ minutes) nor that from the magnetospheric radius ($t_{ff}\sim 3$ sec) is consistent with the observed
modulation time-scale.
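These timescale comparisons are easy to verify with the standard free-fall time from rest, $t_{ff}=(\pi/2)\sqrt{r^{3}/(2GM)}$. A quick numerical check with $M_{NS}=1.86\,M_{\odot}$ (the exact prefactor convention may differ slightly from the one used in the text):

```python
import math

G, M_SUN = 6.674e-8, 1.989e33    # cgs
M_NS = 1.86 * M_SUN              # neutron-star mass (Table 1)

def free_fall_time(r):
    """Free-fall time from rest at radius r (cm) onto M_NS, in sec."""
    return (math.pi / 2.0) * math.sqrt(r ** 3 / (2.0 * G * M_NS))

t_bubble = free_fall_time(1e11)   # ~2.2e3 s at the bow-shock / bubble scale
t_bhl    = free_fall_time(1e10)   # of order a minute at the BHL accretion radius
```

Only the bow-shock scale ($\sim10^{11}$ cm) reproduces the $\sim2000$ s timescale comparable to the observed modulation period.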
Modulations are also produced by recent idealized 3D BHL accretion simulations \citep{2012ApJ...752...30B}, although weaker than in the 2D case \citep{1994ApJ...427..342R,1997A&A...317..793R,1999A&A...346..861R}.
The time-scale of these modulations is related to the accretion radius and their amplitude is weaker than what we are observing.
\citet{2006ApJ...638..369K} showed that the Bondi-Hoyle accretion of super-sonic turbulent gas has significant density enhancement when compared to the simple BHL formulation. They obtained a log-normal distribution of mass accretion rate, however much wider than what we found. The source of the turbulence is completely different in their case. The simulated log normal distribution features more low luminosity excess than observed. \citet{1998PhRvE..58.4501P} found similar excesses in one dimensional simulations of highly compressible gas. It is unclear if the additional excesses have a physical meaning.
Reality is certainly more complex; indeed massive stars are known to have clumpy winds \citep{Owocki+88,2003A&A...406L...1D,2006MNRAS.372..313O,2013MNRAS.428.1837S} and a rich phenomenology might be produced \citep{Walter07Winds}. Clumpy wind scenarios can enhance the instabilities and reduce the X-ray luminosity for extended periods of time, as in the class of Supergiant Fast X-ray Transients \citep[SFXT,][]{2006ESASP.604..165N}. One-dimensional studies of the influence of strong density and velocity fluctuations of the wind resulted in episodic X-ray variability, however too large to describe the observations \citep{2012MNRAS.421.2820O}. The lack of multi-scale and multi-dimensional hydrodynamic simulations of macroscopic clumpy winds in a binary system does not yet allow us to understand the interplay between intrinsic clumping and the various effects of the neutron star.
\section{Conclusion}
\label{sec:conclude}
We compared hard X-ray light-curves of Vela X-1 obtained with RXTE and INTEGRAL with the predictions of hydrodynamic simulations.
The simulated light-curve is highly variable (see Fig. \ref{fig:velaLC}) as observed \citep{Kreykenbohm+08}. The dynamical range of variability is of the order of $\sim 10^{3}$
between off-states and the brightest flares. The X-ray luminosity is a direct probe of the instantaneous mass-accretion rate and of the density fluctuations around the neutron star. The duration of the off-states in our simulation (1 to 5 ksec) corresponds to the free fall time of low density bubbles building up behind the bow shock surrounding the neutron star.
The log-normal flux distribution, obtained from both the observations and the simulation, is a characteristic result of self-organised criticality \citep{1988PhRvA..38..364B,Crow1988,2005MNRAS.359..345U}. In our case the criticality condition is probably related to the direction of the bow shock and accretion stream, which can lead or trail the neutron star. Oscillations between these positions lead to the succession of off-states and flares, and more generally to the near-log-normal flux distribution. The flares correspond to the accretion of a mass of $\sim 10^{19}$~g, much smaller than inferred for the clumps in the case of supergiant fast X-ray transients \citep[SFXTs,][]{2011A&A...531A.130B,Walter07Winds}, because the variability is driven by small-scale instabilities in Vela X-1.
Our hydrodynamic simulations are sufficient to explain the observed behaviour without the need for an intrinsically clumpy stellar wind or for high magnetic fields and gating mechanisms. More advanced and realistic simulations, including such phenomena, are needed to understand their interplay with the hydrodynamic effects of the neutron star and to reveal the full accretion phenomenology in classical sgHMXBs.
\begin{acknowledgements}
AM would like to thank Prof. J. Blondin for fruitful
discussions and hospitality at the NCSU, as well as acknowledge support
by the Polish NCN grant 2012/04/M/ST9/00780.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Convection occurs in the interiors of many astrophysical bodies and must be sustained against viscous and ohmic dissipation. This dissipation is often neglected in astrophysical models, e.g., in standard stellar 1D evolution codes \citep[e.g.,][]{ChabrierBaraffe1997,Paxtonetal2011} though its effects have lately been considered in a few specific contexts \citep[e.g.,][]{BatyginStevenson2010,Browningetal2016}.
Astrophysical convection often occurs over many scale heights. While for incompressible fluids the contribution of dissipative heating to the internal energy budget is negligible \citep{Kundu1990}, \citet{Hewittetal1975} (hereafter HMW) showed that in strongly stratified systems it is theoretically possible for the rate of dissipative heating to exceed the luminosity. This was supported numerically by \citet{JarvisMcKenzie1980} for the case of a compressible liquid at infinite Prandtl number $Pr$ (the ratio of viscous to thermal diffusivities), appropriate for models of the Earth's interior.
In this study we aim to establish the magnitude of dissipation for conditions more akin to those encountered in stellar interiors. Specifically, we consider dissipation in a stratified gas at finite $Pr$, and examine how the total heating changes as system parameters are varied. To begin, we briefly review some relevant thermodynamic considerations that underpin our work.
\subsection{Thermodynamic constraints on dissipative heating}\label{Hewitt}
For a volume $V$ of convecting fluid enclosed by a surface $S$ with associated magnetic field $\mathbf{B}$, in which the normal component of the fluid velocity $\mathbf{u}$ vanishes on the surface, and either all components of $\mathbf{u}$, or the tangential stress, also vanish on the surface, local conservation of energy gives that the rate of change of total energy is equal to the sum of the net inward flux of energy and the rate of internal heat generation (e.g., by radioactivity or nuclear reactions). This implies
\begin{align}\label{consofE}\frac{\partial}{\partial{t}}\left(\rho{e}+\frac{1}{2}\rho{u}^2\right.&\left.+\frac{B^2}{2\mu_0}-\rho\Psi\right)=-\nabla\cdot\left(\rho\left(e+\frac{1}{2}u^2-\Psi\right)\mathbf{u}\right.\nonumber\\&\left.+\frac{(\mathbf{E}\times\mathbf{B})}{\mu_0}+P\mathbf{u}-\bm\tau\cdot\mathbf{u}-k\nabla{T}\right)+H\end{align}
where $\rho$ is the fluid density, $e$ is the internal energy of the fluid, $\Psi$ is the gravitational potential that satisfies $\mathbf{g}=\nabla\Psi$, $P$ is the pressure, $\tau_{ij}$ is the contribution to the total stress tensor from irreversible processes, $k$ is the thermal conductivity, $T$ is the temperature, $H$ is the rate of internal heat generation, and $\frac{\mathbf{E}\times\mathbf B}{\mu_0}$ is the Poynting flux ($\mathbf{E}$ is the electric field and $\mu_0$ is the permeability of free space).
Integrating (\ref{consofE}) over $V$ gives the global relation
\begin{equation}\label{Fbal}
\int_Sk\frac{\partial{T}}{\partial{x_i}}\,dS_i+\int_VH\,dV=0,
\end{equation}
assuming both a steady state and that the electric current, $\mathbf{j}$, vanishes everywhere outside $V$. Equation (\ref{Fbal}) implies that the net flux out of $V$ is equal to the total rate of internal heating. Viscous and ohmic heating do not contribute to the overall heat flux: dissipative heating terms do not appear in equation (\ref{Fbal}).
To examine dissipative heating, we consider the internal energy equation:
\begin{equation}\label{internal}
\rho\left(\frac{\partial{e}}{\partial{t}}+(\mathbf{u}\cdot\nabla)e\right)=\nabla(k\nabla{T})-P(\nabla\cdot\mathbf{u})+\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+\frac{j^2}{\sigma}+H
\end{equation}
where $\sigma$ is the conductivity of the fluid.
Integrating over $V$, and assuming a steady state, (\ref{internal}) becomes
\begin{equation}\label{Phibal}
\int_V(\mathbf{u}\cdot\nabla)P\,dV+\Phi =0.
\end{equation}
Here
\begin{equation}\label{Phi}
\Phi=\int_V\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+\frac{j^2}{\sigma}\,dV
\end{equation}
is the total dissipative heating rate including viscous and ohmic heating terms.
Equation (\ref{Phibal}) implies that the global rate of dissipative heating is cancelled by the work done against the pressure gradient. Equation (\ref{Phibal}) is only equivalent to HMW's equation (22) when considering an ideal gas (so that $\alpha{T}=1$, where $\alpha$ is the coefficient of thermal expansion); however, in arriving at (\ref{Phibal}), we made no assumption about the fluid being a gas. \citet{AlboussiereRicard2013,AlboussiereRicard2014} note that this inconsistency arises because HMW assume $c_p$ to be constant in their derivation, which is not valid when $\alpha T\neq1$.
Alternatively, from the first law of thermodynamics, we have
\begin{equation}
Tds=de-\frac{P}{\rho^2}d\rho
\end{equation}
where $s$ is the specific entropy, so (\ref{Phibal}) can also be written as
\begin{equation}\label{Phi2}
\Phi=\int_V\rho{T}(\mathbf{u}\cdot\nabla)s\,dV=-\int_V\rho{s}(\mathbf{u}\cdot\nabla)T\,dV
\end{equation}
where we have invoked mass continuity in a steady state ($\nabla\cdot(\rho\mathbf{u})=0$).
Hence the global dissipation rate can also be thought of as being balanced by the work done against buoyancy \citep{JonesKuzanyan2009}.
HMW used the entropy equation to derive an upper bound for the dissipative heating rate in a steadily convecting fluid that is valid for any equation of state or stress-strain relationship.
For the case of convection in a plane layer, that upper bound is
\begin{equation}\label{bound}
\frac{\Phi}{L_u}<\frac{T_{max}-T_u}{T_u}
\end{equation}
where $L_u$ is the luminosity at the upper boundary, $T_{max}$ is the maximum temperature and $T_u$ is the temperature on the upper boundary.
One consequence of this bound is that, for large enough thermal gradients, the dissipative heating rate may exceed the heat flux through the layer; this is perhaps counter-intuitive, but is thermodynamically permitted, essentially because the dissipative heating remains in the system's internal energy \citep[see e.g.,][]{Backus1975}.
The above considerations should hold for both ohmic and viscous dissipation. However, HMW further considered the simple case of viscous heating in a liquid (neglecting magnetism) and showed that the viscous dissipation rate is not only bounded by (\ref{bound}) but that
\begin{equation}\label{Hewittliq}
E\equiv\frac{\Phi}{L_u}=\frac{d}{H_T}\left(1-\frac{\mu}{2}\right)
\end{equation}
where $d$ is the height of the convective layer, $H_T$ is the (constant) thermal scale height and $0\leq\mu\leq1$ is the fraction of internal heat generation.
Interestingly, the theoretical expression (\ref{Hewittliq}) is dependent only on the ratio of the layer depth to the thermal scale height and the fraction of internal heat generation.
As expected, (\ref{Hewittliq}) implies that the dissipative heating rate is negligible when compared with the heat flux in cases where the Boussinesq approximation is valid (i.e., when the scale heights of the system are large compared to the depth of the motion).
But it follows from (\ref{Hewittliq}) that $\Phi$ is significant compared to $L_u$ if $d$ is comparable to $H_T$, i.e., if the system has significant thermal stratification. Stellar convection often lies in this regime, so it is not clear that dissipative heating can be ignored.
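As a quick numerical illustration of (\ref{Hewittliq}) — a minimal sketch with illustrative parameter values, not values taken from HMW — the two regimes can be made explicit:

```python
# Illustrative evaluation of the HMW liquid result E = (d/H_T) * (1 - mu/2),
# where d/H_T is the layer depth in thermal scale heights and mu is the
# fraction of internal heat generation.  Parameter values are illustrative.

def dissipation_ratio(d_over_HT, mu):
    """Ratio E = Phi / L_u for a liquid with constant H_T (HMW)."""
    if not 0.0 <= mu <= 1.0:
        raise ValueError("mu must lie in [0, 1]")
    return d_over_HT * (1.0 - 0.5 * mu)

# Boussinesq-like layer: depth much less than a scale height -> E negligible.
print(dissipation_ratio(0.01, 0.0))   # 0.01

# Strongly stratified layer without internal heating: E exceeds unity,
# i.e. dissipative heating exceeds the emergent luminosity.
print(dissipation_ratio(2.5, 0.0))    # 2.5
```

The crossover $E=1$ occurs at $d/H_T = (1-\mu/2)^{-1}$, i.e. at roughly one to two thermal scale heights.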
This paper explores these theoretical predictions using simulations of stratified convection under conditions akin to those encountered in stellar interiors. Previous numerical simulations conducted by HMW considered only 2D Boussinesq convection and neglected inertial forces (infinite $Pr$ approximation); later work by \citet{JarvisMcKenzie1980} within the so-called anelastic liquid approximation considered stronger stratifications but likewise assumed a liquid at infinite $Pr$. We extend these by considering an ideal gas (so that $\alpha{T}=1$) at finite $Pr$, so inertial effects are important and compressibility is not negligible.
In section \ref{model}, we describe the model setup before presenting results from numerical simulations. In section \ref{discussion} we offer a discussion of the most significant results that emerge before providing conclusions.
\section{Simulations of dissipative convection}\label{model}
\subsection{Model setup}\label{modelsec}
We consider a layer of convecting fluid lying between impermeable boundaries at $z=0$ and $z=d$. We assume thermodynamic quantities to be comprised of a background, time-independent, reference state and perturbations to this reference state. The reference state is taken to be a polytropic, ideal gas with polytropic index $m$ given by
\begin{equation}\label{refstate}
\bar{T}=T_0(1-\beta z),\,\bar\rho=\rho_0(1-\beta z)^m,\,\bar{p}=\mathcal{R}\rho_0T_0(1-\beta z)^{m+1},
\end{equation}
where $\beta=\frac{g}{c_{p,0}T_0}$. Here, $g$ is the acceleration due to gravity, $c_p$ is the specific heat capacity at constant pressure, $\mathcal{R}$ is the ideal gas constant and a subscript $0$ represents the value of that quantity on the bottom boundary. $\beta$ is equivalent to the inverse temperature scale height and so is a measure of the stratification of the layer, although we shall use the more conventional
\begin{equation}
N_{\rho}=-m\ln(1-\beta d)
\end{equation}
to quantify the stratification, with $N_{\rho}$ the number of density scale heights across the layer. We assume a monatomic, adiabatic, ideal gas, and therefore $m=1.5$. Here we consider only the hydrodynamic problem, i.e., all dissipation is viscous.
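The reference state (\ref{refstate}) and the stratification measure $N_{\rho}$ can be sketched as follows; the values of $\beta d$ below are illustrative and are not entries of Table \ref{table1}:

```python
import math

# Sketch of the polytropic reference state: T, rho and p profiles, and the
# number of density scale heights N_rho = -m ln(1 - beta d).  The chosen
# beta*d is illustrative.

def reference_state(z_over_d, beta_d, m, T0=1.0, rho0=1.0, R=1.0):
    """Return (T, rho, p) of the polytrope at height z/d."""
    x = 1.0 - beta_d * z_over_d          # (1 - beta z), with z scaled by d
    return T0 * x, rho0 * x**m, R * rho0 * T0 * x**(m + 1)

def n_rho(beta_d, m):
    """Density scale heights across the layer."""
    return -m * math.log(1.0 - beta_d)

m = 1.5                                   # monatomic adiabatic ideal gas
beta_d = 0.75                             # illustrative stratification
T_top, rho_top, p_top = reference_state(1.0, beta_d, m)

# Consistency check: N_rho recovered from the density contrast itself.
assert abs(n_rho(beta_d, m) - math.log(1.0 / rho_top)) < 1e-12
print(n_rho(beta_d, m))                   # ~2.079
```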
We use anelastic equations under the Lantz-Braginsky-Roberts (LBR) approximation \citep{Lantz1992,BraginskyRoberts1995}; these are valid when the reference state is nearly adiabatic and when the flows are subsonic \citep{OguraPhillips1962,Gough1969,LantzFan1999}, as they are here.
The governing equations are then
\begin{align}\frac{\partial\mathbf u}{\partial{t}}&+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla\tilde{p}+\frac{gs}{c_p}\hat{\mathbf{e_z}}\nonumber\\&+\nu\left[\frac{1}{\bar\rho}\frac{\partial}{\partial{x_j}}\left(\bar\rho\left(\frac{\partial{u_i}}{\partial{x_j}}+\frac{\partial{u_j}}{\partial{x_i}}\right)\right)-\frac{2}{3\bar\rho}\frac{\partial}{\partial{x_i}}\left(\bar\rho\frac{\partial{u_j}}{\partial{x_j}}\right)\right]\end{align}
\begin{equation}
\nabla\cdot(\bar\rho\mathbf u)=0
\end{equation}
\begin{equation}\label{energyeq}
\bar\rho\bar{T}\left(\frac{\partial{s}}{\partial{t}}+(\mathbf{u}\cdot\nabla)s\right)=\nabla\cdot(\kappa\bar\rho\bar{T}\nabla{s})+\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+H,
\end{equation}
where $\mathbf{u}$ is the fluid velocity, $\tilde{p}=\frac{p}{\bar\rho}$ is a modified pressure and $\nu$ is the kinematic viscosity. The specific entropy, $s$, is related to pressure and density by
\begin{equation}
s=c_v\ln{p}-c_p\ln\rho.
\end{equation}
We assume the perturbation of the thermodynamic quantities to be small compared with their reference state value. Therefore the entropy is obtained from
\begin{equation}
s=c_v\frac{p}{\bar{p}}-c_p\frac{\rho}{\bar\rho}
\end{equation}
and the linearised equation of state is
\begin{equation}
\frac{p}{\bar{p}}=\frac{T}{\bar{T}}+\frac{\rho}{\bar\rho}.
\end{equation}
In (\ref{energyeq}) $\kappa$ is the thermal diffusivity and \begin{equation}\label{tau}
\tau_{ij}=\nu\bar\rho\left(\frac{\partial{u_i}}{\partial{x_j}}+\frac{\partial{u_j}}{\partial{x_i}}-\frac{2}{3}\delta_{ij}\nabla\cdot\mathbf{u}\right)
\end{equation}
is the viscous stress tensor ($\delta_{ij}$ is the Kronecker delta).
Here, we only consider cases with $H=0$ (i.e., no internal heat generation), and instead impose a flux ($F$) at the bottom boundary.
Note the LBR approximation diffuses entropy (not temperature); see \cite{Lecoanetetal2014} for a discussion of the differences.
We assume a constant $\nu$ and $\kappa$.
We solve these equations using the Dedalus pseudo-spectral code \citep{dedalus} with fixed flux on the lower boundary and fixed entropy on the upper boundary.
We assume these boundaries to be impermeable and stress-free. We employ a sin/cosine decomposition in the horizontal, ensuring there is no lateral heat flux.
We employ the semi-implicit Crank-Nicolson Adams-Bashforth numerical scheme and typically use 192 grid points in each direction with dealiasing (so that 128 modes are used). In some cases, 384 (256) grid points (modes) were used to ensure adequate resolution of the solutions.
For simplicity, and to compare our results with those of HMW, we consider 2D solutions so that $\mathbf{u}=(u,0,w)$ and $\frac{\partial}{\partial{y}}\equiv0$. This also allows us to reach higher supercriticalities and $N_{\rho}$ with relative ease.
As we neglect magnetism, the total dissipation rate, $\Phi$, is given by (\ref{Phi}) with $\mathbf j=0$ and $\tau_{ij}$ as given by (\ref{tau}).
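To make the integrand of (\ref{Phi}) concrete, the pointwise viscous heating $\tau_{ij}\frac{\partial u_i}{\partial x_j}$ for a 2D field can be evaluated as below. This is a finite-difference sketch for illustration only; it is unrelated to the pseudo-spectral implementation actually used, and the test velocity field is an arbitrary single-roll pattern:

```python
import numpy as np

# Pointwise viscous heating tau_ij du_i/dx_j for the stress tensor (19),
# for a 2D velocity field (u, w) on a uniform grid, using centred finite
# differences (numpy.gradient).

def viscous_heating(u, w, rho_bar, nu, dx, dz):
    """nu*rho_bar*[2 u_x^2 + 2 w_z^2 + (u_z + w_x)^2 - (2/3)(div u)^2]."""
    dudx = np.gradient(u, dx, axis=0)
    dudz = np.gradient(u, dz, axis=1)
    dwdx = np.gradient(w, dx, axis=0)
    dwdz = np.gradient(w, dz, axis=1)
    div = dudx + dwdz
    return nu * rho_bar * (2.0 * dudx**2 + 2.0 * dwdz**2
                           + (dudz + dwdx)**2 - (2.0 / 3.0) * div**2)

# Illustrative incompressible roll: u = sin(x)cos(z), w = -cos(x)sin(z),
# for which the heating is analytically 4*nu*rho*cos^2(x)cos^2(z).
x = np.linspace(0.0, np.pi, 128)
z = np.linspace(0.0, np.pi, 128)
X, Z = np.meshgrid(x, z, indexing="ij")
u = np.sin(X) * np.cos(Z)
w = -np.cos(X) * np.sin(Z)
q = viscous_heating(u, w, rho_bar=1.0, nu=1.0,
                    dx=x[1] - x[0], dz=z[1] - z[0])

assert (q >= -1e-5).all()      # dissipation heats; it never cools locally
assert np.allclose(q, 4.0 * np.cos(X)**2 * np.cos(Z)**2, atol=1e-2)
```

The non-negativity of this quantity (up to discretisation error) is what makes $L_{diss}$ in the decompositions below a definite heating term.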
An appropriate non-dimensionalisation of the system allows the parameter space to be collapsed such that the dimensionless solutions (in particular $E$) are fully specified by $m$, $N_{\rho}$, $Pr$, together with $\hat{F_0}= \frac{Fd}{\kappa{c_{p,0}}\rho_0T_0}$ (a dimensionless measure of the flux applied at the lower boundary) and a flux-based Rayleigh number \citep[e.g.,][]{Duarteetal2016}
\begin{equation}\label{Ra}
Ra=\frac{gd^4F_{u}}{\nu\kappa^2\rho_0c_{p,0}T_0}.
\end{equation}
The parameters used in our simulations are given in Table \ref{table1}.
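The two control parameters can be evaluated from dimensional inputs as in the sketch below; all numerical values are illustrative and are not the entries of Table \ref{table1}:

```python
# Dimensionless control parameters: the flux-based Rayleigh number (16)
# and the dimensionless bottom flux F0_hat = F d / (kappa c_p0 rho0 T0).
# All input values below are illustrative.

def control_parameters(g, d, F_bottom, F_u, nu, kappa, rho0, cp0, T0):
    """Return (Ra, F0_hat) as defined in the text."""
    Ra = g * d**4 * F_u / (nu * kappa**2 * rho0 * cp0 * T0)
    F0_hat = F_bottom * d / (kappa * cp0 * rho0 * T0)
    return Ra, F0_hat

Ra, F0_hat = control_parameters(g=1.0, d=1.0, F_bottom=0.14, F_u=0.14,
                                nu=1e-2, kappa=1e-2,
                                rho0=1.0, cp0=1.0, T0=1.0)
print(Ra, F0_hat)   # ~1.4e5, ~14
```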
In a steady state, an expression for the luminosity $L$ at each depth $z=z'$ can be obtained by integrating the internal energy equation (\ref{energyeq}) over the volume contained between the bottom of the layer and the depth $z=z'$:
\begin{align}L=&FA=\int_{V_{z'}}\nabla\cdot(\bar\rho\bar{T}s\mathbf{u})\,dV+\int_{V_{z'}}-\nabla\cdot(\kappa\bar\rho\bar{T}\nabla{s})\,dV\nonumber\\&+\int_{V_{z'}}-s\bar\rho(\mathbf{u}\cdot\nabla)\bar{T}\,dV+\int_{V_{z'}}-\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}\,dV,\label{Feqpre}\end{align}
where $A$ is the surface area.
The divergence theorem allows the first two integrals to be transformed into surface integrals giving
\begin{align}L=&FA=\underbrace{\int_{S_{z'}}\bar\rho\bar{T}sw\,dS}_\text{$L_{conv}=AF_{conv}$}+\underbrace{\int_{S_{z'}}-\kappa\bar\rho\bar{T}\frac{\partial{s}}{\partial{z}}\,dS}_\text{$L_{cond}=AF_{cond}$}\nonumber\\&+\underbrace{\int_{V_{z'}}-s\bar\rho(\mathbf{u}\cdot\nabla)\bar{T}\,dV}_\text{$L_{buoy}=A\int_0^{z'}Q_{buoy}\,dz$}+\underbrace{\int_{V_{z'}}-\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}\,dV}_\text{$L_{diss}=A\int_0^{z'}Q_{diss}\,dz$},\label{Feq}\end{align}
where the surface integrals are over the surface at height $z=z'$.
The first and second terms define the horizontally-averaged heat fluxes associated with convection ($F_{conv}$) and conduction ($F_{cond}$) respectively, along with associated luminosities.
The third and fourth terms define additional sources of heating and cooling ($Q_{diss}$ and $Q_{buoy}$) associated with viscous dissipation and with work done against the background stratification, respectively. These two terms must cancel in a global sense i.e., when integrating from $z=0$ to $z=d$, but they do not necessarily cancel at each layer depth.
An alternative view of the heat transport may be derived by considering the total energy equation (\ref{consofE}), which includes both internal and mechanical energy.
In a steady state (with entropy diffusion), the local balance gives
\begin{equation}
\nabla\cdot\left(\bar\rho\left(e+\frac{1}{2}u^2-\Psi\right)\mathbf{u}+p\mathbf{u}-\bm\tau\cdot\mathbf{u}-\kappa\bar\rho\bar{T}\nabla{s}\right)=H
\end{equation}
which when integrated over the volume for an ideal gas gives \citep[see e.g.,][]{Vialletetal2013}
\begin{align}L=&FA=\underbrace{\int_{S_{z'}}\bar\rho{c_p}wT'\,dS}_\text{$L_e=AF_{e}$}+\underbrace{\int_{S_{z'}}-\kappa\bar\rho\bar{T}\frac{\partial{s}}{\partial{z}}\,dS}_\text{$L_{cond}=AF_{cond}$}\nonumber\\&+\underbrace{\int_{S_{z'}}\frac{1}{2}\bar\rho|u^2|w\,dS}_\text{$L_{KE}=AF_{KE}$}+\underbrace{\int_{S_{z'}}-(\tau_{ij}{u_i})\cdot{\mathbf{\hat{e}_z}}\,dS}_\text{$L_{visc}=AF_{visc}$},\label{FHeq}\end{align}
defining the horizontally-averaged enthalpy flux ($F_e$), kinetic energy flux ($F_{KE}$) and viscous flux ($F_{visc}$).
Note that (\ref{Feq}) and (\ref{FHeq}) are equivalent; whether decomposed in the manner of (\ref{Feq}) or the complementary fashion of (\ref{FHeq}), the transport terms must sum to the total luminosity $L$.
$L_{visc}$ represents the total work done by surface forces, whereas $L_{diss}$ represents only the (negative-definite) portion of this that goes into deforming a fluid parcel and hence into heating.
\subsection{Relations between global dissipation rate and convective flux}
For the model described in section \ref{modelsec}, equation (\ref{Phi2}) becomes
\begin{align}\Phi=&-\int_V\bar\rho{s}(\mathbf{u}\cdot\nabla)\bar{T}\,dV\nonumber\\
=&\frac{g}{c_{p,0}}\int_Vs\bar\rho{w}\,dV=\frac{gA}{c_{p,0}}\int_{0}^{d}\frac{F_{conv}}{\bar{T}}\,dz.\label{phiFconv}\end{align}
Often it is assumed that in the bulk of the convection zone, the total heat flux is just equal to the convective flux as defined above (i.e., $F_{conv}\approx{F}$). We show later that this is a poor assumption in strongly stratified cases, but it is reasonable for approximately Boussinesq systems. In the case $F_{conv}\approx{F}$, (\ref{phiFconv}) becomes
\begin{equation}
\Phi=\frac{gAF}{c_{p,0}T_0}\int_0^d\frac{1}{1-\beta{z}}\,dz=-L_u\ln(1-\beta{d})
\end{equation}
and
\begin{equation}\label{lower}
E=-\ln(1-\beta{d})=\beta{d}+\ldots\approx\frac{d}{H_{T,0}}.
\end{equation}
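The estimate (\ref{lower}) can be checked directly by evaluating the integral in (\ref{phiFconv}) with $F_{conv}=F$ — a minimal numerical sketch (not the simulation code), using simple trapezoidal quadrature with illustrative values of $\beta d$:

```python
import math

# With F_conv = F, E = beta * int_0^d dz / (1 - beta z), which should equal
# -ln(1 - beta d).  Trapezoidal quadrature; z is scaled by the depth d.

def E_quadrature(beta_d, n=50000):
    """E from the convective-flux integral with F_conv = F."""
    h = 1.0 / n
    f = lambda s: beta_d / (1.0 - beta_d * s)
    total = 0.5 * (f(0.0) + f(1.0)) + sum(f(i * h) for i in range(1, n))
    return total * h

for beta_d in (0.05, 0.5, 0.9):
    exact = -math.log(1.0 - beta_d)
    assert abs(E_quadrature(beta_d) - exact) < 1e-6
    # The leading-order (Boussinesq) estimate beta*d = d/H_{T,0} is only
    # accurate when the layer is thin compared to the thermal scale height.
    print(beta_d, exact)
```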
However, in strongly stratified cases $F\approx{F_{conv}}+F_{other}$ where $F_{other}=\int_0^{z'}(Q_{buoy}+Q_{diss})\,dz$ from (\ref{Feq}), or alternatively, $F_{other}=F_{p}+F_{KE}+F_{visc}$ from (\ref{FHeq}) (the conductive flux is small in the bulk convection zone). Here $F_{p}=\frac{1}{A}\int_{S_{z'}}wp\,dS$ is the difference between the enthalpy flux $F_e$ and the convective flux $F_{conv}$. Physically, $F_{other}$ is equivalent to the steady-state transport associated with processes other than the convective flux as defined above. In this case, (\ref{phiFconv}) becomes
\begin{equation}\label{phiFother}
\Phi=\frac{gAF}{c_{p,0}}\int_0^d(1-\frac{F_{other}}{F})\frac{1}{\bar{T}}\,dz,
\end{equation}
where we note that in general $F_{other}$ is a function of depth and $(1-\frac{F_{other}}{F})\geq1$.
A complete theory of convection would specify $F_{other}$ a priori, and thereby constrain the dissipative heating everywhere.
In the absence of such a theory, we turn to numerical simulations to determine the magnitude of $\Phi$ for strong stratifications.
\subsection{Dissipation in simulations: determined by stratification}\label{res1}
We examine the steady-state magnitude of $\Phi$ for different values of $N_{\rho}$ and $Ra$.
Figure \ref{fig1} shows the ratio of the global dissipation rate to the luminosity through the layer, $E=\frac{\Phi}{L_u}$, for varying stratifications. First, we highlight the difference between simulations in which the dissipative heating terms were included (red squares) and those where they were not (black circles). At weak stratification,
there is not much difference in the dissipative heating rate between these cases, but differences become apparent as $N_{\rho}$ is increased. Including the heating terms in a self-consistent calculation leads to a much larger value of $E$ than if $\Phi$ is only calculated after the simulation has run (i.e., if heating is not allowed to feedback on the system). When heating terms are included, the global dissipative heating rate exceeds the flux passing through the system (i.e., $E>1$) when $N_{\rho}>1.22$.
As expected, the expression for $E$, in the Boussinesq limit, given by (\ref{lower}), is a good approximation to $E$ for small $N_{\rho}$, but vastly underestimates $E$ at large $N_{\rho}$ (see Figure \ref{fig1}, dash-dot line).
In the cases where the heating terms are not included, $E$ cannot exceed unity for all $N_{\rho}$.
This might have been expected, since in this case none of the dissipated heat is returned to the internal energy of the system; instead, the dissipated energy is simply lost (i.e., energy is not conserved). This has the practical consequence that the flux emerging from the top of the layer is less than that input at the bottom.
In these cases $E$ is very well described by the dashed line which is given by $\frac{d}{H_{{T},0}}$, the leading order term from the expression for $E$ in (\ref{lower}).
The theoretical upper bound derived by HMW is shown on Figure \ref{fig1} by the solid black line. It is clear that all of our cases fit well within this upper bound, even at strong stratifications. This upper bound is equivalent to $\frac{d}{H_{{T},u}}$ in this system, where $H_{{T},u}$ is the value of $H_{T}$ on the upper boundary.
Cases in which the heating terms were included are well described by
\begin{equation}\label{myE}
E=\frac{d}{\tilde{H_T}},
\end{equation}
where
\begin{equation}\label{htdef}
\tilde{H_T} = \frac{H_{T,0}H_{T,u}}{H_{T,z^*}}
\end{equation}
is a modified thermal scale height involving $H_T$ at the top, bottom and at a height $z^*$, defined such that half the fluid (by mass) lies below $z^*$ and half sits above; for a uniform density fluid, $z^*=\frac{d}{2}$. This expression resembles that originally proposed by HMW, on heuristic grounds, for a gas ($E\approx\frac{d}{H_T}$); in our case $H_T$ is not constant across the layer and we find that the combination $\tilde{H_T}$ is the appropriate ``scale height'' instead. Like HMW's suggestion, it depends only on the layer depth and temperature scale heights of the system.
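For the polytropic reference state, $H_T(z)=(1-\beta z)/\beta$ and the mass-median height $z^*$ is available in closed form, so (\ref{myE})--(\ref{htdef}) can be evaluated explicitly and compared with the estimate (\ref{lower}) and the bound (\ref{bound}). The sketch below uses $\beta d$ and $m$ values chosen to mimic the strongest stratification shown in Figure \ref{fig2}, as an illustration:

```python
import math

# E = d / H_tilde with H_tilde = H_{T,0} H_{T,u} / H_{T,z*}, for the
# polytrope: H_T(z) = (1 - beta z)/beta, and the cumulative mass gives
# (1 - beta z*)^(m+1) = (1 + (1 - beta d)^(m+1)) / 2.

def E_predicted(beta_d, m):
    """Predicted dissipation ratio E = d/H_tilde for the polytrope."""
    x_u = 1.0 - beta_d                                  # (1 - beta d)
    x_star = ((1.0 + x_u**(m + 1)) / 2.0)**(1.0 / (m + 1))
    # d * H_{T,z*} / (H_{T,0} * H_{T,u}) simplifies to:
    return beta_d * x_star / x_u

m, beta_d = 1.5, 0.75                                   # N_rho ~ 2.08
E = E_predicted(beta_d, m)
lower = -math.log(1.0 - beta_d)                         # estimate with F_conv = F
upper = beta_d / (1.0 - beta_d)                         # HMW bound, = d/H_{T,u}
assert lower < E < upper
print(lower, E, upper)   # ~1.386, ~2.302, 3.0
```

As required, the prediction sits between the Boussinesq-limit estimate and the HMW upper bound, and it reduces to $\beta d$ for weak stratification.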
For 2D convection, at $Pr=1$ and the $Ra$ considered here, the solutions are steady (time-independent) \citep{VincentYuen1999}; the convection takes the form of a single stationary cell occupying the layer. To assess if the same behaviour occurs for chaotic (time-dependent) solutions, we have included some cases at $Pr=10$ (orange triangles), since then the flow is unsteady. In the cases included here, this unsteady flow is characterised by the breakup of the single coherent convection cell (seen at $Pr=1$); these time-dependent solutions seem also to be well described by the line given by (\ref{myE}). This behaviour is sampled in Figure \ref{figA1}, Supplementary Material, which shows the velocity and entropy fields in a simulation with $Pr=10$, $N_{\rho}=1.31$, $Ra=4.13\times10^8$ and $\hat F_0=0.14$.
At higher $Ra$, the solutions transition to turbulence \citep[see visualisations in e.g.,][]{Rogersetal2003}.
\begin{figure}
\includegraphics[scale=1.03]{f1.eps}
\caption{$E$ (global dissipative heating rate normalised by the luminosity) against $N_{\rho}$ for $Pr=1$ (red squares) and $Pr=10$ (orange triangles). Cases in which the dissipative heating terms were not included in equation (\ref{energyeq}) are denoted by black circles. The dash-dot line shows the expression given by (\ref{lower}) and the dotted line shows the leading order term of this expression. The solid black line shows the upper bound given by (\ref{bound}) and the dashed red line shows the expression given by (\ref{myE}). The cases with heating agree well with the dashed red line and the cases without heating agree with the dotted black line.}\label{fig1}
\end{figure}
\subsection{Dissipation in simulations: independent of diffusivities}\label{2p4}
The results of section \ref{res1}, specifically equation (\ref{myE}), suggest that the amount of dissipative heating is determined by the stratification, not by other parameters such as $Ra$. To probe this further, we consider how/if $E$ changes as $Ra$ is varied. Figure \ref{fig2} shows the results for three different stratifications. For $N_{\rho}\approx0.1$, the fluid is close to being Boussinesq and it is clear that $E$ remains constant (and equal to the value given by (\ref{myE})) for many decades increase in $Ra$. This result complements that of HMW obtained from Boussinesq simulations at infinite $Pr$. For increasing $N_{\rho}$, we find that for large enough $Ra$, $E$ approaches the constant given by (\ref{myE}). That $E$ becomes independent of $Ra$ at large enough $Ra$ for all $N_{\rho}$ was also found by \citet{JarvisMcKenzie1980}, albeit for liquids at infinite $Pr$.
Figure \ref{fig2} indicates that the solutions have to be sufficiently supercritical in order for the theory to be valid. It also suggests that stronger stratifications require simulations to be more supercritical in order to reach the asymptotic regime. (All the simulations displayed in Figure \ref{fig1} approach this asymptotic regime, \emph{except} possibly the uppermost point at $N_{\rho}=2.8$. That simulation has $Ra/Ra_c \approx 9 \times10^{5}$, but it is likely that still higher $Ra$ would yield somewhat greater values of $E$ at this stratification.)
\begin{figure}
\includegraphics[scale=1.03]{f2.eps}
\caption{$E$ as a function of $\frac{Ra}{Ra_c}$ (where $Ra_c$ is the value of $Ra$ at which convection onsets) for $N_{\rho}=0.105$ (circles), $N_{\rho}=0.706$ (triangles) and $N_{\rho}=2.085$ (squares). In each case, for large enough $Ra$ the value of $E$ asymptotes to the value given by (\ref{myE}), indicated for each $N_{\rho}$ by the horizontal lines. The level of stratification (given by $N_{\rho}$), rather than the diffusion, determines the magnitude of the dissipative heating rate compared to the flux through the layer.}\label{fig2}
\end{figure}
\section{Discussion and conclusion}\label{discussion}
We have demonstrated explicitly that the amount of dissipative heating in a convective gaseous layer can, for strong stratifications, equal or exceed the luminosity through the layer.
A principal conclusion is that the ratio of the global viscous heating rate to the emergent luminosity is approximated by a theoretical expression dependent only on the depth of the layer and its thermal scale heights. This ratio, akin to one originally derived for a simpler system by HMW, is given (for the cases studied here) by (\ref{myE}). Interestingly, this relation does not depend on other parameters such as the Rayleigh number. Our simulations confirm that this expression holds for 2D convection in an anelastic gas, provided the convection is sufficiently supercritical. This regime is attainable in our 2D simulations, and is surely reached in real astrophysical objects, but may be more challenging to obtain in (for example) 3D global calculations \citep[e.g.,][]{FeatherstoneHindman2016,Aubertetal2017}.
The dissipative heating appears in the local internal energy (or entropy) equation, in the same way as heating by fusion or radioactive decay. Where it is large, we therefore expect it to modify the thermal structure, just as including a new source of heating or cooling would have done. It must be reiterated, though, that this heating is balanced globally by equivalent cooling terms; i.e., $L_{diss}$ and $L_{buoy}$ in equation (\ref{Feq}) cancel when integrated over the layer, and no additional flux emerges from the upper boundary. Stars are not brighter because of viscous dissipation. Locally, however, these terms do \emph{not} necessarily cancel, as explored in Figure \ref{fig3}. There we show the net heating and cooling at each depth in two simulations; in Figure \ref{fig3}$a$, the fluid is weakly stratified, and in (b) it has a stratification given by $N_{\rho}=2.08$. In both cases the sum of the terms must be zero at the top and bottom of the layer, but not in between.
Furthermore, in (a) the terms are small compared to the flux through the layer (typically a few \%) but in the strongly stratified case, the local heating and cooling become comparable to the overall luminosity.
In general, stronger stratifications lead to stronger local heating and cooling in the fluid.
\begin{figure}
\includegraphics[scale=1,trim = {0mm 0mm 0mm 0mm}, clip]{f3.eps}
\caption{Local heating and cooling. $F_{other}$ as a fraction of the total flux through the layer as a function of layer depth for $N_{\rho}=0.1$ in (a) and $N_{\rho}=2.08$ in (b). In (a) the local heating and cooling is only a few percent of the total flux whereas in (b) the local heating and cooling is comparable to the flux through the layer in some parts.}\label{fig3}
\end{figure}
In a steady state the imbalance between this local heating and cooling is equivalent to certain transport terms as discussed in section \ref{modelsec}; these are assessed for our simulations in Figure \ref{fig4}, where the terms are plotted as luminosities and labelled correspondingly. Turning first to Figure \ref{fig4}$a$, we show the components of the total flux of thermal energy (as described by (\ref{Feq})), namely $L_{conv}$, $L_{cond}$, $L_{buoy}$ and $L_{diss}$. The conductive flux is small throughout the domain except in thin boundary layers and the dissipative heating ($L_{diss}$) is comparable to the convective flux ($L_{conv}$) throughout the domain. The sum of the four transport terms is shown as the black line ($L$) and is constant across the layer depth, indicating thermal balance.
Figure \ref{fig4}$b$ assesses the total energy transport using the complementary analysis of (\ref{FHeq}), using $L_{KE}$, $L_{cond}$, $L_e$ and $L_{visc}$. The primary balance is between the positive $L_e$ and the negative $L_{KE}$. Viewed in this way, the viscous flux ($L_{visc}$) is small except near the lower boundary, but (as discussed in section \ref{modelsec}) this does not necessarily mean the effect of viscous dissipation is also small.
In Figure \ref{fig4}$c$ we highlight the equivalence of some transport terms, by showing the term $AF_{other}$ together with its different constituent terms from either the total or thermal energy equations. As expected, $AF_{other}$ is the same in both cases; it is the sum of $L_{diss}$ and $L_{buoy}$, or equivalently, it is the sum of $L_{p}$, $L_{KE}$ and $L_{visc}$. That is, changes in the dissipative heating are reflected not just in $Q_{diss}$ (if analysing internal energy) or $F_{visc}$ (if analysing total energy); the other transport terms ($F_{KE}$, $F_p$, $F_e$, $F_{conv}$, $Q_{buoy}$) also change in response.
To emphasise the importance of dissipative heating in modifying the transport terms, we show in Figure \ref{fig4}$d$ the quantities $L_{KE}^{nh}$, $L_{e}^{nh}$, $L_{cond}^{nh}$ and $L_{visc}^{nh}$, i.e., the kinetic energy, enthalpy, conductive and viscous fluxes (expressed as luminosities), respectively, for the case where heating terms were not included. It is clear that these are much smaller than in the equivalent simulation with heating (Figure \ref{fig4}$b$), demonstrating explicitly that the inclusion of dissipative heating influences the other transport terms.
In particular, the maximum value of the kinetic energy flux is 3.2 times larger when the heating terms are included. The black line in Figure \ref{fig4}$d$ shows that when heating is not included the flux emerging at the upper boundary is smaller than the flux imposed at the lower boundary; in this case it is approximately $27\%$ of $L$.
The local heating and cooling (or, equivalently, the transport term $F_{other}$ that must arise from them in a steady state) described above are not included in standard 1D stellar evolution models, and we do not yet know what effects (if any) would arise from their inclusion.
In some contexts they may be negligible; the total internal energy of a star is enormously greater than its luminosity $L_\star$, so even internal heating that exceeds $L_\star$ may not have a noticeable effect on the gross structure. If, however, this heating is concentrated in certain regions (e.g., because of spatially varying conductivity) or occurs in places with lower heat capacity, its impact may be more significant.
\begin{figure*}
\includegraphics[scale=1,trim = {0mm 0mm 0mm 0mm}, clip ]{f4.eps}
\caption{(a) Luminosities $L_i$ defined in (\ref{Feq}) and their sum normalised by the total luminosity $L$. (b) Luminosities $L_i$ defined in (\ref{FHeq}) and their sum normalised by the total luminosity $L$. (c) The constituents of $L_{other}=AF_{other}=A\int_0^{z'}(Q_{buoy}+Q_{diss})\,dz=A(F_p+F_{KE}+F_{visc})$. (d) Luminosities $L_i$ defined in (\ref{FHeq}) and their sum normalised by the total luminosity at the bottom boundary $L_0$ in the case where heating terms are not included. The luminosities in (d) are significantly smaller than the equivalent ones when heating terms were included (see (b)).}\label{fig4}
\end{figure*}
If the results explored here also apply to the full 3D problem with rotation and magnetism -- which clearly must be checked by future calculation -- then the total dissipative heating is determined non-locally, dependent as it is on the total layer depth. Simple modifications to the mixing-length theory (which is determined locally) may not then suffice to capture it. We have begun to explore these issues by modification of a suitable 1D stellar evolution code, and will report on this in future work.
\acknowledgments
We acknowledge support from the European Research Council under ERC grant agreement No. 337705 (CHASM). The simulations here were carried out on the University of Exeter supercomputer, a DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS and the University of Exeter. We also acknowledge PRACE for awarding us access to computational resources MareNostrum based in Spain at the Barcelona Supercomputing Center, and Fermi and Marconi based in Italy at Cineca. We thank the referee for a thoughtful review that helped to improve the manuscript.
\section{Introduction}
Various unconventional superconductors have proliferated experimentally over recent decades, although the origins of the superconductivity (SC) in cuprates and heavy-fermion materials remain theoretically controversial \cite{Bednorz1986,Steglich1979,Lee2006}. The quasi-two-dimensional
(2D) iron pnictides have triggered a new boom of the SC investigation
\cite{Hosono2008}. In particular, search efforts for compounds
with geometrical and electronic structures similar to those of cuprates,
which hold the highest critical temperatures at ambient conditions,
are growing both experimentally and theoretically. Isostructural compounds,
e.g. Sr$_{2}$RuO$_{4}$, Sr$_{2}$IrO$_{4}$ and LaNiO$_{2}$ as
well as some artificial heterostructure have been extensively proposed,
and synthesized \cite{maeno1994,li2019,Anisimov1999,Lee2004,yan2015,kim2016,chaloupka2008,schwingenschlogl2009stripe,hansmann2009,ikeda2016,kawai2009,kaneko2009,Ryee2019,Hirsch2019,Botana2019,hayward1999}.
These unremitting pursuits have now been rewarded, despite the absence
so far of evidence of SC in some of these analogs.
Recently, the exciting discovery of superconductivity in the hole-doped
infinite-layer nickelate Nd$_{1-x}$Sr$_{x}$NiO$_{2}$ has redrawn
strong attention to unconventional SC \cite{li2019,Hepting2019,Sakakibara2019,Gao2019,bernardini2020magnetic,Jiang2019electronic}.
The quasi-2D Ni-O plane is geometrically analogous to the Cu-O plane
in cuprates. The $d_{x^{2}-y^{2}}$ orbital of each Ni$^{1+}$ ion
is also half-filled, with an effective spin-1/2 on each site. However,
the differences from cuprates are notably striking. In the parent
compounds, there is no sign of long-range magnetic orders in the measured
temperature range \cite{hayward2003}. Possibly due to self-doping effects,
the electrons of the rare-earth Nd between Ni-O planes form a 3D weakly-interacting
$5d$ metallic state with an electronic Fermi surface \cite{HZhang2019,Wu2019,Hepting2019,Normura2019}.
Intriguingly, the resistivity exhibits metallic temperature dependence
down to 60 K, and then shows an insulating upturn at lower temperatures,
which could be the results of weak localization effects, Kondo effects
or temperature driven intra-band transitions \cite{Singh2019,li2019,choi2020role}.
Upon chemical doping, additional holes dominantly enter the $d$ orbitals
of the Ni ions rather than O orbitals as in cuprates since the O $2p$
states are far away from the Fermi level in nickelates \cite{Lee2004,Jiang2019,HZhang2019,YHZhang2019,Gao2019}.
The sign change of the Hall coefficient at low temperature indicates
that both electrons and holes may contribute to the transport and
thermodynamic properties \cite{li2019}. Moreover, it is debating
whether the doped hole forms a spin singlet or triplet doublon with
the original hole on a Ni ion \cite{Jiang2019,Hu2019,YHZhang2019,Werner2019,GMZhang2019}.
Several microscopic models have been proposed, such as the $t$-$J$
models, the metallic gas coupled to a 2D Hubbard model and the spin
freezing model \cite{GMZhang2019,HZhang2019,fu2019corelevel,Wu2019,Hu2019,YHZhang2019,Hepting2019,Werner2019}.
More surprisingly, the absence of superconductivity was recently claimed
both in bulk nickelates and in films prepared on various oxide substrates
other than SrTiO$_3$. It was suggested that the absence possibly
results from hydrogen intercalation \cite{qLi2019,xZhou2019,si2019topotactic}.
Owing to these controversies, more insight into the microscopic mechanism
in nickelates is imperative.
In this paper, we investigate the nickelate SC based on the analysis
of the transport experiments. Considering the positive Hall coefficient
and the suppressed self-doping effects at low temperature, we suggest
that since the Ni $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$
orbital is close to the Fermi level, the doped holes may go to these orbitals and establish a conducting
band \cite{Lee2004,lechermann2019late,Sakakibara2019,Gao2019}. The
onsite Hund interaction couples the conducting band with the localized
$d_{x^{2}-y^{2}}$ orbital band together. The correlation between
the sparse carriers, as well as the kinetic energy of the localized holes,
could then be ignored. Thus, the two-band model is simplified into
a Hund-Heisenberg model. We show that both the non-Fermi liquid in the normal
state and the superconductivity are determined by the spin fluctuations
of the localized holes. This SC mechanism could be realized in multi-orbital
strongly correlated systems with both Hund and Heisenberg interactions \cite{Lee2018,Georges2013,Haule2009,Werner2008}.
\section{Microscopic Hamiltonian}
We first analyze the electronic properties in the normal state based on
the transport experiments \cite{li2019}. In the parent compounds,
both the resistivity and Hall-effect measurements show Kondo effects, with a logarithmic temperature dependence from tens of kelvin down to a few
kelvin. The Kondo effects were attributed to hybridization between
the Nd $5d$ states and the Nd $4f$ or Ni $3d$ states as in rare-earth
heavy-fermion compounds, although the \textit{ab initio} study
suggests the hybridization between the Ni $3d$ state and the Nd $5d$
states is negligible, and the $4f$ electron spin fluctuation should
be weak due to the large magnetic moment and the energy far away from
Fermi energy level \cite{GMZhang2019,HZhang2019,Normura2019,Wu2019}.
In Nd$_{0.8}$Sr$_{0.2}$NiO$_{2}$, above 60 K, the negative Hall
coefficient indicates that the Nd $5d$ electrons dominate the transport
and thermodynamic properties. With decreasing temperature,
the self-doping effect is reduced as in semiconductors and the Hall
coefficient also changes its sign from negative to positive. This
means that the doped holes take over the dominant role in the transport
and thermodynamics at low temperature. In addition, the Hund coupling
between the $3d$ doped holes and localized holes is around an order
of magnitude stronger than the Kondo coupling between the $5d$ electron
and the $3d$ magnetic moments. Therefore, we ignore the $5d$ electrons
in our model, and in our discussion section we show that they only
give only a negligible contribution to the superconductivity and non-Fermi-liquid
behavior in the normal state. Whether the doped hole forms a high
spin triplet or a low spin singlet doublon with the original hole
on the $d_{x^{2}-y^{2}}$ orbital is still controversial \cite{Jiang2019,Hu2019,YHZhang2019,Werner2019,GMZhang2019}.
In fact, a Ni$^{2+}$ ion with a $d^{8}$ configuration often has a
high spin $S=1$ in common nickel oxides as a result of the Hund coupling.
According to first-principles calculations, the energy of the triplet
state is around 1 eV lower than that of the singlet, and the top of
the Ni $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbital band is also close
to the Fermi energy \cite{Lee2004,lechermann2019late,Sakakibara2019,Gao2019}.
Moreover, the holes doped on the $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$
orbitals could move freely, agreeing well with the positive Hall
coefficient at low temperature. The delocalization of the doped holes
on $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbitals is attributed to
the fact that the doped hole concentration is dilute, and under the
short-range antiferromagnetic (AF) correlation background, a hole can
hop freely to its next nearest neighbor sites without energy cost
as long as these sites are not occupied by another doped hole. In
contrast, the doped holes on the already half-filled $d_{x^{2}-y^{2}}$
orbitals tend to be localized at low temperature, otherwise the hopping
disturbs the magnetic configurations of short-range AF correlations
as in cuprates \cite{Lee2006}.
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{fig1.pdf} \caption{ Schematic spin configuration on the Ni-O planes of the hole-doped
nickelates. The thicker red arrows denote the spins of the localized holes
in the $d_{x^{2}-y^{2}}$ orbital band. The local magnetic moments
interact with each other through the Heisenberg interaction. The blue
arrows denote the spins of the doped holes as carriers in the conducting
$d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbital band. Their spins are
parallel with the local magnetic moments due to the strong Hund coupling.
In normal state, the scattering by the spin fluctuations of the localized
holes transforms the carrier Fermi gas into a non-Fermi liquid. A carrier
could hop to its next nearest neighbor sites without any energy cost
as long as these sites are not occupied by other doped holes. Thus
the doped holes remain itinerant on the lattice even at very low temperature
without formation of a pseudogap. While in superconducting phase,
two neighboring carrier particles are mediated into a Cooper pair
by the spin fluctuations of the local magnetic moments. \label{lattice} }
\end{figure}
Based on the aforementioned analysis, we confine our study to the
Ni-O planes and assume that the holes doped on the $d_{xy}$ or/and
$d_{3z^{2}-r^{2}}$ orbitals form a conducting band, coexisting with
the localized $d_{x^{2}-y^{2}}$ orbital band. $c_{i}=(c_{i\uparrow},c_{i\downarrow})^{T}$
is introduced as the annihilation operator of the bare carrier particles
on the $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbitals of the $i$th
Ni site, and $d_{i}=(d_{i\uparrow},d_{i\downarrow})^{T}$ is the real
space annihilation operator of the localized holes on the Ni $d_{x^{2}-y^{2}}$
orbitals. The Hamiltonian is written as
\begin{eqnarray}
H & = & H_{c}+H_{d}+U_{c}\sum_{i}n_{ci\uparrow}n_{ci\downarrow}+U_{d}\sum_{i}n_{di\uparrow}n_{di\downarrow}\nonumber \\
& & +U_{cd}\sum_{i}n_{ci}n_{di}-J_{h}\sum_{i}\mathbf{S}_{ci}\cdot\mathbf{S}_{di},
\end{eqnarray}
with
\begin{eqnarray}
H_{c}=\varepsilon_{c0}\sum_{i}c_{i}^{\dagger}c_{i}-\sum_{i,j}t_{cij}c_{i}^{\dagger}c_{j}\label{H0-2}
\end{eqnarray}
and
\begin{eqnarray}
H_{d}=\varepsilon_{d0}\sum_{i}d_{i}^{\dagger}d_{i}-\sum_{i,j}t_{dij}d_{i}^{\dagger}d_{j},\label{H0-2-1}
\end{eqnarray}
where $t_{cij}$ and $t_{dij}$ are the hopping integrals of the carriers
in conducting band and the localized holes in $d_{x^{2}-y^{2}}$ band,
respectively. $n_{ci}=c_{i}^{\dagger}c_{i}=n_{ci\uparrow}+n_{ci\downarrow}$
is the occupancy of the carriers with spin up and down on the $i$th
site, and $n_{di}$ is the occupancy of the $d_{x^{2}-y^{2}}$ orbital.
$J_{h}$ is the Hund coupling between the carriers and the localized
holes on the same site. $U_{c}$ is the onsite Coulomb repulsion between
carriers, $U_{d}$ is the interaction between the localized holes,
and $U_{cd}$ is the inter-orbital Coulomb repulsion between the carrier
and localized hole on the same site.
To further simplify the Hamiltonian, noting that the hopping
term $t_{dij}$ is much smaller than the large onsite Coulomb interaction
$U_{d}$, we treat $t_{dij}$ as a perturbation; the Hubbard
model for the localized $d_{x^{2}-y^{2}}$ orbital band then reduces
to a Heisenberg model. In addition, the large onsite Coulomb interaction
strongly suppresses the particle number fluctuation on the half-filled
$d_{x^{2}-y^{2}}$ orbitals, so that the Coulomb interaction acting
on the carriers by the localized holes could be renormalized into
the chemical potential within the singly occupied approximation $\left\langle n_{di}\right\rangle =1$.
Furthermore, in consideration of the delocalization and the sparse
concentration $\left\langle n_{ci}\right\rangle $ of the doped holes
in conducting band, we can safely ignore the correlation between the
carriers as in common metals. Likewise, the magnetic coupling
term $\mathbf{S}_{ci}\cdot\mathbf{S}_{cj}$ is ignored because the
magnetization of the carriers is much weaker than that of the localized holes.
Finally, we arrive at a Hund-Heisenberg model,
\begin{eqnarray}
H=H_{c}-J_{h}\sum_{i}\mathbf{S}_{ci}\cdot\mathbf{S}_{di}+J_{H}\sum_{\left\langle i,j\right\rangle }\mathbf{S}_{di}\cdot\mathbf{S}_{dj}\label{H-1}
\end{eqnarray}
where $\mathbf{S}_{ci}=c_{i}^{\dagger}\boldsymbol{\sigma}c_{i}/2$
and $\mathbf{S}_{di}=d_{i}^{\dagger}\boldsymbol{\sigma}d_{i}/2$ with
the Pauli vector $\bm{\sigma}$. $J_{H}$ is the Heisenberg interaction
between the $d_{x^{2}-y^{2}}$ orbital holes on the Ni square lattice.
Here, we have ignored the kinetic energy of the localized holes, and
the carrier chemical potential $\varepsilon_{c0}$ has been replaced
by $\varepsilon_{c}=\varepsilon_{c0}+U_{cd}$ in $H_{c}$ to include
the energy renormalization from the Coulomb interaction of the holes
on the $d_{x^{2}-y^{2}}$ orbitals.
This model is formally similar to the Kondo-Heisenberg model on a
2D square lattice \cite{Chang2017,Zaanen1988}. The difference is that
the conducting carriers are ferromagnetically coupled to the localized
magnetic moments rather than antiferromagnetically. In addition, the
onsite ferromagnetic Hund coupling favors the delocalization
of the doped holes under the antiferromagnetic background even at
very low temperature, and thus it is difficult to form a pseudogap.
In the following, we show that the spin fluctuations of the localized
$d_{x^{2}-y^{2}}$ states not only act as the pairing `glue' in the superconducting
state, but also result in a non-Fermi liquid in the normal state.
\section{Normal state}
The bare doped holes are assumed to form a dilute Fermi gas with
the retarded Green's function $G_{c}^{0}(\mathbf{k},\omega)$, bathed
in the Heisenberg antiferromagnets of the localized holes. At low
temperature, the transport and thermodynamic properties are determined
by the imaginary part of the carrier self-energy. However, different
from Fermi liquids, the dominant contribution to the self-energy is from the interaction between the carriers and the
localized holes rather than the carrier-carrier
interaction, because the concentration of the carriers is much lower than that of the localized holes, even though the two kinds of interactions are of comparable strength. Therefore, we ignore
the self-energy correction by the carrier-carrier correlation interaction.
Within the Born approximation, the imaginary part of the self-energy
correction by the renormalized spin fluctuation $\chi_{d}(\mathbf{q},\omega)$
of the localized $d_{x^{2}-y^{2}}$ holes reads \cite{Chang2017}
\begin{eqnarray*}
& & \mbox{Im}\Sigma_{c}(\mathbf{k},\omega)\sim J_{h}^{2}\int_{-\omega_{c}}^{\omega_{c}}dv\left[n_{B}(v)+n_{F}(\omega+v)\right]I(\mathbf{k},\omega,v),
\end{eqnarray*}
where $n_{B}$ and $n_{F}$ are the Bose and Fermi functions, $\omega_{c}$
is the upper cutoff frequency of magnetic fluctuations and
\begin{eqnarray}
I(\mathbf{k},\omega,v)=\int\frac{d^{2}q}{4\pi^{2}}\mbox{Im}\chi_{d}(\mathbf{q},v)\mbox{Im}G_{c}^{0}(\mathbf{k+q},\omega+v).\label{Ikwv}
\end{eqnarray}
It is worth noting that the neglect of the higher-order self-energy
corrections is based on the fact that the carriers are only weakly magnetized
by the local magnetic moments or $\left|\left\langle \mathbf{S}_{ci}\right\rangle \right|\ll1/2$.
Therefore, the Hund coupling in the effective Hamiltonian only gives
perturbation correction to the carrier self-energy despite the large
Hund coupling constant in Eq. (\ref{H-1}). However, the approximation of neglecting the higher-order corrections may be questionable if
the $\mbox{Im}\chi_{d}(\mathbf{q},v)\sim v^s$ with $s\leq 0$ at low frequency. For instance, the perturbation-related coupling constant $\lambda_\mathbf{q} =2\int_0^{\infty}dv\alpha_\mathbf{q}^2(v)\mbox{Im}\chi_{d}(\mathbf{q},v)/v$ then diverges, where $\alpha_\mathbf{q}^2(v)\mbox{Im}\chi_{d}(\mathbf{q},v)$ is the generalized McMillan carrier-boson coupling function.
Integrating over the momentum $\mathbf{k}$, one finds the momentum-integral
imaginary part of the Fermi gas self-energy
\begin{eqnarray*}
& & \mbox{Im}\Sigma_{c}(\omega)\sim J_{h}^{2}\int_{-\omega_{c}}^{\omega_{c}}dv\left[n_{B}(v)+n_{F}(\omega+v)\right]\mbox{Im}\chi_{d}(v)\rho_{c}(\omega+v)
\end{eqnarray*}
with the aid of
\begin{eqnarray}
\int\frac{d^{2}k}{4\pi^{2}}I(\mathbf{k},\omega,v)=-\pi\rho_{c}(\omega+v)\mbox{Im}\chi_{d}(v)\label{I2}
\end{eqnarray}
where the magnetic fluctuations $\chi_{d}(v)\equiv\int d^{2}q\chi_{d}(\mathbf{q},v)/4\pi^{2}$.
$\rho_{c}$ is the density of states of the carriers. Since the absolute
value of $n_{B}(v)+n_{F}(\omega+v)$ exponentially decreases to zero
with increasing $|v|$, the energy range of integration can be extended
from the cutoff $\omega_{c}$ to infinity, and $\rho_{c}(\omega+v)$
is approximately substituted by $\rho_{c}^{0}$, the carrier density
of states at Fermi energy level. One has
\begin{eqnarray*}
& & \mbox{Im}\Sigma_{c}(\omega)\sim J_{h}^{2}\rho_{c}^{0}\int_{-\infty}^{\infty}dv\left[n_{B}(v)+n_{F}(\omega+v)\right]\mbox{Im}\chi_{d}(v).
\end{eqnarray*}
Since no experimental or theoretical results on $\mbox{Im}\chi_{d}(v)$
are presently available, we assume that the momentum-integral spin-fluctuation
spectrum of the half-filled $d_{x^{2}-y^{2}}$ holes takes a form similar
to that of the underdoped cuprates \cite{Hayden1991,Keimer1991,Tranquada1992},
e.g. $\mbox{Im}\chi_{d}(v)\sim\mbox{tanh}\left(v/2T\right)$. Using
the analytical frequency integral equation\cite{Chang2017}
\begin{eqnarray}
\int_{-\infty}^{\infty}dv\left[n_{B}(v)+n_{F}(\omega+v)\right]\mbox{tanh}\left(\frac{v}{2T}\right)\nonumber \\
=2T\left[1+\frac{\omega}{2T}\mbox{tanh}\left(\frac{\omega}{2T}\right)\right],
\end{eqnarray}
the carriers have the marginal Fermi liquid-like self-energy
\begin{eqnarray}
\mbox{Im}\Sigma_{c}(\omega,T) & \sim & \pi\rho_{c}^{0}J_{h}^{2}T\left[1+\frac{\omega}{2T}\mbox{tanh}\left(\frac{\omega}{2T}\right)\right]\nonumber \\
& \sim & \max{(|\omega|,T)}.\label{selfenergy2}
\end{eqnarray}
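The limiting behaviors behind this $\max(|\omega|,T)$ scaling can be checked numerically. The sketch below is illustrative only (it is not from the paper); it assumes $k_B=1$, measures $\omega$ and $T$ in the same arbitrary energy units, and evaluates the frequency integral directly with $\mbox{Im}\chi_{d}(v)=\tanh(v/2T)$, confirming that it approaches $2T$ at $\omega=0$ and grows linearly with unit slope for $\omega\gg T$:

```python
import numpy as np

# Numerical check of the limiting behaviour of the frequency integral
# behind the marginal-Fermi-liquid self-energy, with
# Im chi_d(v) = tanh(v/2T).  Units: k_B = 1, omega and T in the same
# (arbitrary) energy scale.
def n_B(v, T):
    return 1.0 / (np.exp(v / T) - 1.0)

def n_F(w, T):
    return 1.0 / (np.exp(w / T) + 1.0)

def freq_integral(omega, T, vmax=60.0, n=600000):
    # An even number of grid points keeps v = 0 off the grid; the integrand
    # is finite there anyway (the n_B divergence is cancelled by tanh -> 0).
    v = np.linspace(-vmax, vmax, n)
    f = (n_B(v, T) + n_F(omega + v, T)) * np.tanh(v / (2.0 * T))
    return np.sum(f) * (v[1] - v[0])

# Marginal-Fermi-liquid limits: the integral tends to 2T at omega = 0 and
# becomes linear in omega with unit slope for omega >> T, i.e. the
# resulting Im Sigma ~ max(|omega|, T).
```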
Then the linear temperature dependence of electrical resistivity observed
in the experiments could be explained \cite{li2019}, and some other
anomalous transport properties are expected to be experimentally verified.
In cuprates, since the doped holes are antiferromagnetically coupled
to the localized holes, the hopping disturbs the original magnetic
configuration, and thus the doped holes tend to be localized at low
doping or at low temperature \cite{Lee2006}. On the contrary, in
nickelates, the doped holes, which are ferromagnetically coupled to
localized holes, could hop over the Ni-O plane without affecting the
magnetic background so that they are not easy to be trapped around
the local magnetic moments even at low temperature. In addition, a carrier could hop to its next nearest neighbor sites without any energy cost as long as these sites are not occupied by other doped holes. Therefore, we
propose that it is almost impossible to observe a pseudogap in nickelates.
\section{Superconductivity}
In the normal state, the carriers are scattered by the short-range spin
fluctuations, just as a metallic electron gas is scattered by phonons. In the superconducting
state, the carrier pairing is mediated by the spin fluctuations, in analogy
with the phonon-mediated pairing of the BCS mechanism. It is worth noting
that we assume that the conducting carriers only partially screen
the local moments without formation of localized triplets or singlets,
and then the Heisenberg interaction between the screened moments and
their surroundings could survive. Thus, the carriers on unit cell
$i$ and $j$ could interact with each other by exchange of the spin
fluctuation in terms of a four point vertex, written in real space
as
\begin{eqnarray}
\Gamma_{\alpha\beta,\gamma\delta}(i,j,\omega)=-\frac{J_{h}^{2}}{4}\chi_{d}(i,j,\omega)\sigma_{\alpha\beta}\sigma_{\gamma\delta}.\label{4vertex}
\end{eqnarray}
We assume that the AF spin fluctuation binds into stable Cooper pairs only those itinerant holes whose separation lies within the AF correlation length. The correlation length in nickelates is assumed to be on the same scale as that in cuprates at low temperature, around two times the lattice constant.
Thus, we only take the nearest-neighbor $\chi_{d}(\left\langle i,j\right\rangle ,\omega)$
into account. Then, the interaction Hamiltonian of the carriers can
be written in the coordinate representation as \cite{Chang2017}
\begin{eqnarray}
H_{sc}=J_{h}^{2}\chi_{d}(\left\langle i,j\right\rangle ,\omega)\sum_{\left\langle i,j\right\rangle }\mathbf{S}_{ci}\cdot\mathbf{S}_{cj},\label{Hsc-1}
\end{eqnarray}
where the nearest neighbor $\chi_{d}(\left\langle i,j\right\rangle ,\omega)$
is assumed to be space independent. For a local pair, the energy of
a spin-triplet is about $J_{h}^{2}\chi_{d}(\left\langle i,j\right\rangle ,\omega)$
higher than that of a spin-singlet, and thus the antiferromagnetic
spin fluctuations favor spin-singlet pairing. Combining with $H_{c}$
in Eq.~(\ref{H0-2}), a $t$-$J$-like model is reached. Interestingly,
despite the formal similarity with the conventional $t$-$J$ model
\cite{Zhang1988}, here the spin-like operator $\mathbf{S}_{c}$ is associated
with the carriers rather than the localized holes.
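The singlet-triplet splitting quoted above can be verified by exact diagonalization of a single bond. A minimal illustrative sketch (here $g$ stands for $J_{h}^{2}\chi_{d}(\left\langle i,j\right\rangle ,\omega)$ and is set to 1 for convenience):

```python
import numpy as np

# Exact diagonalization of one bond of the carrier pairing term,
# H = g S_c1 . S_c2, for two spin-1/2 carriers; g stands for
# J_h^2 chi_d(<i,j>, omega) and its value here is purely illustrative.
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sy = 0.5 * np.array([[0.0, -1.0j], [1.0j, 0.0]])
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

def bond_hamiltonian(g=1.0):
    h = np.zeros((4, 4), dtype=complex)
    for s in (sx, sy, sz):
        h += g * np.kron(s, s)  # g * (S1x S2x + S1y S2y + S1z S2z)
    return h

levels = np.sort(np.linalg.eigvalsh(bond_hamiltonian(1.0)).round(12))
# One singlet at -3g/4 and a threefold-degenerate triplet at +g/4,
# so the triplet lies exactly g above the singlet.
```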
After transforming to momentum space, the Hamiltonian on the square
lattice becomes
\begin{eqnarray}
H_{sc}=\int\frac{d^{2}kd^{2}k'}{(2\pi)^{4}}J\left({\rm \mathbf{k}}-{\rm \mathbf{k}}'\right)c_{{\rm \mathbf{k}}\uparrow}^{\dagger}c_{-{\rm \mathbf{k}}\downarrow}^{\dagger}c_{-{\rm \mathbf{k}}'\downarrow}c_{{\rm \mathbf{k}}'\uparrow},\label{hint}
\end{eqnarray}
with
\begin{eqnarray}
J\left({\rm \mathbf{k}}-{\rm \mathbf{k}}'\right)=-2g\left[\cos\left(k_{x}-k'_{x}\right)+\cos\left(k_{y}-k'_{y}\right)\right]
\end{eqnarray}
and the effective coupling between the carriers
\begin{eqnarray}
g\equiv\frac{3}{4}J_{h}^{2}\chi_{d}(\left\langle i,j\right\rangle ,\omega).
\end{eqnarray}
The Cooper pairing potentials are symmetrized with $J({\rm \mathbf{k}}-{\rm \mathbf{k}}')$
and $J({\rm \mathbf{k}}+{\rm \mathbf{k}}')$ in the singlet channel
as \cite{Coleman2007}
\begin{eqnarray}
V_{{\rm \mathbf{k}},{\rm \mathbf{k}}'}=\frac{J\left({\rm \mathbf{k}}-{\rm \mathbf{k}}'\right)+J\left({\rm \mathbf{k}}+{\rm \mathbf{k}}'\right)}{2},
\end{eqnarray}
i.e.
\begin{eqnarray}
V_{{\rm \mathbf{k}},{\rm \mathbf{k}}'} & = & -2g\left[\cos k_{x}\cos k'_{x}+\cos k_{y}\cos k'_{y}\right].
\end{eqnarray}
The pairing interaction can be further decoupled into $d$-wave and
$s$-wave components,
\begin{eqnarray}
2\cos k_{x}\cos k'_{x}+2\cos k_{y}\cos k'_{y}=\gamma_{k}\gamma_{k'}+\gamma_{k}^{s}\gamma_{k'}^{s},
\end{eqnarray}
with the $d$-wave gap function $\gamma_{k}=\cos k_{x}-\cos k_{y}$,
and the extended $s$-wave gap function $\gamma_{k}^{s}=\cos k_{x}+\cos k_{y}$.
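This separability is an exact trigonometric identity, and is easy to verify numerically; a quick illustrative sketch:

```python
import numpy as np

# Check of the separable decomposition of the pairing kernel: for random
# momenta, 2 cos kx cos kx' + 2 cos ky cos ky' equals
# gamma_k gamma_k' + gamma^s_k gamma^s_k', with
# gamma_k = cos kx - cos ky and gamma^s_k = cos kx + cos ky.
rng = np.random.default_rng(1)
k = rng.uniform(-np.pi, np.pi, size=(1000, 2))
kp = rng.uniform(-np.pi, np.pi, size=(1000, 2))

kernel = 2.0 * (np.cos(k[:, 0]) * np.cos(kp[:, 0])
                + np.cos(k[:, 1]) * np.cos(kp[:, 1]))
gam_d = np.cos(k[:, 0]) - np.cos(k[:, 1])
gam_dp = np.cos(kp[:, 0]) - np.cos(kp[:, 1])
gam_s = np.cos(k[:, 0]) + np.cos(k[:, 1])
gam_sp = np.cos(kp[:, 0]) + np.cos(kp[:, 1])
decomposed = gam_d * gam_dp + gam_s * gam_sp
```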
For $s$-wave superconductivity, the pairing interaction is expected
to be negative and nearly isotropic. However, the interaction $V_{{\rm \mathbf{k}},{\rm \mathbf{k}}'}$
is positive at $\mathbf{k-k'\sim Q}$, and the AF spin fluctuation
is strongly momentum dependent, i.e. peaked at or near the AF wave vector $\mathbf{Q}$ \cite{Wu2019}. Therefore, only the $d$-wave pairing
channel is favored in the spin-fluctuation SC mechanism \cite{Scalapino1986,Bickers1987,Inui1988,Dong1988,Kotliar1988,Monthoux1991,Moriya1990,Millis1990,Chubukov2008}
and the attractive pairing interaction dominantly mediates the carriers
on the nearest-neighbor unit cells \cite{Scalapino2012}. Thus, the
pairing interaction is $V_{{\rm \mathbf{k}},{\rm \mathbf{k}}'}^{d}=-g\gamma_{k}\gamma_{k'}$
in the $d_{x^{2}-y^{2}}$ channel, which could be detected in phase sensitive interference measurements \cite{bker2020phasesensitive}. Consequently, in momentum space,
a weak-coupling BCS interaction can be written as
\begin{eqnarray}
H_{scd}=-g\int\frac{d^{2}k}{4\pi^{2}}\gamma_{\mathbf{k}}c_{\mathbf{k}\uparrow}^{\dagger}c_{-\mathbf{k}\downarrow}^{\dagger}\int\frac{d^{2}k'}{4\pi^{2}}\gamma_{\mathbf{k'}}c_{\mathbf{k'}\uparrow}c_{-\mathbf{k'}\downarrow},
\end{eqnarray}
The superconductivity order parameter $\gamma_{\mathbf{k}}\Delta_{sc}$
is introduced in the mean field method with the BCS gap equation
\begin{eqnarray}
\Delta_{sc}=-g\int\frac{d^{2}k}{4\pi^{2}}\gamma_{\mathbf{k}}<c_{\mathbf{k}\uparrow}^{\dagger}c_{-\mathbf{k}\downarrow}^{\dagger}>,
\end{eqnarray}
where $c_{\mathbf{k}\uparrow}^{\dagger}c_{-\mathbf{k}\downarrow}^{\dagger}$
is the Cooper pair operator denoting a bond state of two carriers
with opposite momentum and spin.
Solving the gap equation in the limit $\Delta_{sc}(T\rightarrow T_{c})\rightarrow0$,
the SC transition temperature $T_{c}\sim\omega_{c}e^{-1/\lambda}$
with $\lambda=g\rho_{c}^{0}/2$ for weak coupling $d$-wave superconductors.
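As a numerical illustration of this BCS-type estimate, the sketch below evaluates $T_{c}\sim\omega_{c}e^{-1/\lambda}$ with $\lambda=g\rho_{c}^{0}/2$. All parameter values are assumptions chosen only to display the exponential sensitivity of $T_c$ to the coupling; they are not fits to the nickelates:

```python
import numpy as np

# Illustrative evaluation of the weak-coupling estimate
# T_c ~ omega_c * exp(-1/lambda), with lambda = g * rho_c0 / 2.
# All parameter values below are assumptions, used only to show the
# exponential sensitivity of T_c; they are not fits to Nd(1-x)Sr(x)NiO2.
def t_c(omega_c, g, rho_c0):
    lam = 0.5 * g * rho_c0
    return omega_c * np.exp(-1.0 / lam)

# A smaller cutoff omega_c (weaker AF correlation, smaller J_H) lowers T_c
# linearly, while a smaller coupling g ~ J_h^2 chi_d lowers it exponentially,
# consistent with the comparison to cuprates made in the text.
```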
Since the AF correlation in nickelates is weaker than that in cuprates,
the spin fluctuation cutoff energy $\hbar\omega_{c}\sim J_H$ should be lower
than that in cuprates. Moreover, the Hund coupling $J_{h}$ is smaller
than the magnetic coupling $J_{K}$ between the O carriers and Cu
local moments in cuprates. Therefore, the lower critical temperature
$T_{c}$ in nickelates could be understood.
\section{Discussion and Conclusion}
Actually, we cannot exclude the possibility that the doped holes
go to the Ni $d_{x^{2}-y^{2}}$ orbitals although the strong onsite
Coulomb repulsion pushes the $d_{x^{2}-y^{2}}$ lower Hubbard band
away from the Fermi level \cite{Lee2004,gu2019hybridization}. Nevertheless,
if the doped holes form onsite spin singlets on the $d_{x^{2}-y^{2}}$
orbital with the original localized hole, the already weak AF coupling
is further suppressed and hence the superconductivity is weakened. The
critical temperature should also be sensitive to the doping level.
In addition, a pseudogap should emerge at low temperature as in cuprates.
On the contrary, the doped holes on the $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$
orbital may enhance the original AF couplings although the average
spin of a carrier is very weak. In addition, the SC critical temperature,
which depends on the Hund coupling $J_{h}$ and the spin fluctuation $\chi_{d}$,
is not directly related to the doping level unless the carrier
density is too low. A pseudogap is unlikely to form, owing to
the ferromagnetic coupling between the carriers and the local magnetic
moments in our model. We expect more experiments to check these differences.
The Nd $5d$ electrons have been ignored in our model because the doped
holes dominate the transport and thermodynamics at low temperature
in the highly doped samples. Nevertheless, in the recent paper \cite{li2020superconducting},
it was found that the Hall coefficients become negative at the doping
$x$ below 0.175, which means that electrons may be the dominant carriers
at low doping. The $5d$ electron as carrier could couple to the Ni
localized $d_{x^{2}-y^{2}}$ magnetic moments via the Kondo interaction\cite{GMZhang2019}.
Thus, the interaction between the $5d$ electrons and spin fluctuation
in nickelates could be described by the Kondo-Heisenberg model\cite{Chang2017}.
However, since the Kondo coupling ($J_{K}\sim$0.1 eV) is around ten
times smaller than the Hund coupling ($J_{h}\sim$ 1 eV), the $5d$
electrons give a much weaker contribution to the superconductivity,
and their self-energy renormalization in the normal state is also much
weaker than that of the $3d$ holes, namely, $T_{c}\sim\omega_{c}e^{-1/\lambda}$
with $\lambda\sim J_{K}^{2}$, $J_{h}^{2}$ and $\mbox{Im}\Sigma(\omega,T)\sim J_{K}^{2}$, $J_{h}^{2}$, respectively.
We have assumed that the momentum-integral spin-fluctuation spectra
of the localized Ni $d_{x^{2}-y^{2}}$ holes on the Ni-O planes
take a form similar to that of the underdoped cuprates, and then the marginal
Fermi liquid-like self-energy is obtained. We expect that the neutron
scattering and more transport experiments could be conducted to verify
our assumption. Moreover, if the doped holes enter the $d_{xy}$ or/and
$d_{3z^{2}-r^{2}}$ orbital then the doping does not suppress the
AF fluctuations. This also could be judged by the neutron scattering
measurements. In addition, to experimentally determine if the doped
holes enter the $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbital, one
way is to apply Ni L-edge polarized x-ray absorption near edge structure
(XANES) to single crystals to study the distribution of holes
in the Ni $3d$ orbitals \cite{kaindl1989correlation}.
In conclusion, we have proposed a Hund-Heisenberg model to investigate
the unconventional SC in the infinite-layer nickelates superconductor.
By analyzing the transport experiments, we suggest that the doped
holes enter the Ni $d_{xy}$ or/and $d_{3z^{2}-r^{2}}$ orbitals,
and form a conducting band. The doped holes interact with the localized
holes on the $d_{x^{2}-y^{2}}$ orbital through the onsite Hund coupling.
We show that the non-Fermi liquid state in normal phase results from
the carrier gas interacting with the spin fluctuations of the localized
holes. In the superconducting phase, it is still the short-range spin
fluctuations that mediate the carriers into Cooper pairs and lead
to $d$-wave superconductivity. We expect experiments to check our
predictions that the doped holes slightly enhance the spin fluctuations
and a pseudogap hardly forms in nickelates. We have provided a new
SC mechanism for multi-orbital strongly correlated systems, e.g., iron pnictides, and it
should aid in probing or synthesizing new superconductors in transition
or rare-earth metal oxides.
\textit{Acknowledgements}$-$ We are thankful to Xun-Wang Yan, Yuehua Su and Myung Joon Han for
fruitful discussions. This work is supported by the National Natural Science Foundation
of China (91750111, 11874188, U1930401 and 11874075), and National Key Research and Development Program of China (2018YFA0305703) and Science Challenge Project (TZ2016001).
\section{Introduction}
We consider the fundamental mechanism design problem of {\em
approximate social welfare maximization} under {\em general cardinal
preferences} and {\em without money}. In this setting, there is a
finite set of agents (or {\em voters}) $N = \{1,\ldots,n\}$ and a finite set of
alternatives (or {\em candidates}) $M = \{1,\ldots,m\}$. Each voter $i$ has a private
valuation function $u_i: M \rightarrow {\mathbb{R}}$ that can be
arbitrary, except that we require\footnote{We make this requirement primarily for convenience; to avoid having to qualify in technically annoying ways a number of definitions and statements of this paper as well as definitions and statements of previous ones.} that it is injective, i.e., we insist that it induces a total order on candidates. Standardly, the function $u_i$ is considered well-defined only up to positive affine
transformations. That is, we consider $x \rightarrow a u_i(x) + b$, for
$a > 0$ and any $b$, to be a different representation of
$u_i$. Given this, we fix the representative $u_i$ that maps the
least preferred candidate of voter $i$ to $0$ and the most preferred
candidate to $1$ as the canonical representation of $u_i$ and we
shall assume that all $u_i$ are thus canonically represented throughout
this paper. In particular, we shall let $\valset{m}$
denote the set of all such functions.
We shall be interested in
{\em direct revelation mechanisms without money} that elicit the {\em valuation profile} ${\bf u} = (u_1,
u_2, \ldots, u_n)$ from the voters and based on this elect a candidate $J({\bf u}) \in
M$. We shall allow mechanisms to be randomized and $J({\bf u})$ is
therefore in general a random map. In fact, we shall define a mechanism simply to be a random map $J: {\valset{m}}^n \rightarrow M$. We prefer mechanisms that
are {\em truthful-in-expectation}, by which we mean that the following condition is satisfied: For each voter $i$, and all ${\bf u} = (u_i, u_{-i}) \in
{\valset{m}}^n$ and $\tilde u_i \in \valset{m}$, we have
$E[u_i(J(u_i,u_{-i}))] \geq E[u_i(J(\tilde u_i, u_{-i}))]$.
That is, if voters are assumed to be expected utility maximizers, the
optimal behavior of each voter is always to reveal their true valuation
function to the mechanism. As truthfulness-in-expectation is the only notion of truthfulness of interest to us in this paper, we shall use ``truthful'' as a synonym for ``truthful-in-expectation'' from now on. Furthermore, we are interested in
mechanisms for which the expected {\em social welfare}, i.e.,
$E[\sum_{i=1}^n u_i(J({\bf u}))]$, is as high as possible, and we shall in
particular be interested in the {\em approximation ratio} $\mbox{\rm ratio}(J)$ of the
mechanism, defined by
\[ \mbox{\rm ratio}(J) = \inf_{{\bf u} \in {\valset{m}}^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\max_{j \in M}\sum_{i=1}^n u_i(j)}, \]
trying to achieve mechanisms with as high an approximation ratio as possible.
Note that for $m=2$, the problem is easy; a majority vote is a truthful mechanism that achieves optimal social welfare, i.e., it has approximation ratio 1, so we only consider the problem for $m \geq 3$.
A mechanism without money for general cardinal preferences can be
naturally interpreted as a {\em cardinal voting scheme} in which each
voter provides a {\em ballot} giving each candidate $j \in M$ a
numerical score between 0 and 1. A winning candidate is then determined
based on the set of ballots. With this interpretation, the well-known
{\em range voting scheme} is simply the deterministic mechanism that elects the
socially optimal candidate $\mbox{\rm argmax}_{j \in M}\sum_{i=1}^n u_i(j)$, or, more precisely, elects this candidate {\em if} the ballots reflect the true
valuation functions $u_i$. In particular, range voting has by construction an approximation ratio of 1. However, range voting is not a
truthful mechanism.
Before stating our results, we mention for comparison the approximation ratio of some simple truthful mechanisms. Let {\em random-candidate} be the mechanism that elects a candidate uniformly at random, without looking at the ballots. Let {\em random-favorite} be the mechanism that picks a voter uniformly at random and elects his favorite candidate; i.e., the (unique) candidate to which he assigns valuation $1$. Let {\em random-majority} be the mechanism that picks two candidates uniformly at random and elects one of them by a majority vote. It is not difficult to see that as a function of $m$ and assuming that $n$ is sufficiently large, {\em random-candidate} as well as {\em random-favorite} have approximation ratios $\Theta(m^{-1})$, so this is the trivial bound we want to beat. Interestingly, {\em random-majority} performs even worse, with an approximation ratio of $\Theta(m^{-2})$.
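For concreteness, the three baseline mechanisms can be prototyped as follows. This is an illustrative sketch rather than part of the formal development: ballots are represented as canonical valuation vectors indexed by candidate, the function names are ours, and {\em random-majority} is assumed to break a pairwise tie with a fair coin.

```python
import itertools

def random_candidate(profile, m):
    # Elect each candidate with probability 1/m, ignoring the ballots.
    return [1.0 / m] * m

def random_favorite(profile, m):
    # Pick a voter uniformly at random; elect his top candidate (valuation 1).
    p = [0.0] * m
    for u in profile:
        top = max(range(m), key=lambda j: u[j])
        p[top] += 1.0 / len(profile)
    return p

def random_majority(profile, m):
    # Pick two candidates uniformly at random; elect the pairwise-majority
    # winner, breaking an exact tie with a fair coin.
    p = [0.0] * m
    pairs = list(itertools.combinations(range(m), 2))
    for a, b in pairs:
        votes_a = sum(1 for u in profile if u[a] > u[b])
        votes_b = len(profile) - votes_a
        if votes_a > votes_b:
            winners = {a: 1.0}
        elif votes_b > votes_a:
            winners = {b: 1.0}
        else:
            winners = {a: 0.5, b: 0.5}
        for j, w in winners.items():
            p[j] += w / len(pairs)
    return p

def expected_welfare(p, profile, m):
    # E[sum_i u_i(J(u))] for an election distribution p.
    return sum(p[j] * sum(u[j] for u in profile) for j in range(m))

def opt_welfare(profile, m):
    # max_j sum_i u_i(j), the denominator of the approximation ratio.
    return max(sum(u[j] for u in profile) for j in range(m))
```

On any given profile, the expected welfare of each baseline can then be compared directly against the optimum.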
As our first main result, we exhibit a randomized
truthful mechanism with an approximation ratio of $0.37
m^{-3/4}$. The mechanism is the following very simple one: {\em With
probability $3/4$, pick a candidate uniformly at random. With
probability $1/4$, pick a random voter, and pick a candidate uniformly
at random from his $\lfloor m^{1/2} \rfloor$ most preferred
candidates.} Note that this mechanism is {\em ordinal}: Its behavior
depends only on the {\em rankings} of the candidates on the ballots,
not on their numerical scores. We know of no asymptotically better
truthful mechanism, even if we allow general (cardinal)
mechanisms, i.e., mechanisms that can depend on the numerical scores
in other ways. We also show a negative result: For sufficiently many
voters and any truthful ordinal mechanism, there is a
valuation profile where the mechanism achieves at most an
$O(m^{-2/3})$ fraction of the optimal social welfare in expectation.
The negative result also holds for non-ordinal mechanisms that are {\em mixed-unilateral}, by which we mean mechanisms that elect a candidate based on the ballot of a single randomly chosen voter.
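The mechanism described above is simple enough to state in a few lines of code. The sketch below is our own illustration (ballots as canonical valuation vectors, output as an election distribution), not part of the formal development.

```python
import math

def sqrt_mixture(profile, m):
    # With probability 3/4, elect a uniformly random candidate.
    # With probability 1/4, pick a uniformly random voter and elect a
    # uniformly random candidate among his floor(sqrt(m)) most preferred.
    q = math.isqrt(m)  # floor(m^{1/2})
    n = len(profile)
    p = [3.0 / (4.0 * m)] * m
    for u in profile:
        ranked = sorted(range(m), key=lambda j: -u[j])
        for j in ranked[:q]:
            p[j] += 1.0 / (4.0 * n * q)
    return p
```

Note that for $m = 3$ we have $\lfloor m^{1/2} \rfloor = 1$, so the second branch reduces to {\em random-favorite}.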
We get tighter bounds for the natural case of $m = 3$ candidates and
for this case, we also obtain separation results concerning the
approximation ratios achievable by natural restricted classes of
truthful mechanisms. Again, we first state the
performance of the simple mechanisms defined above for comparison: For
the case of $m=3$, {\em random-favorite} and {\em random-majority}
both have approximation ratios $1/2 + o(1)$ while {\em
random-candidate} has an approximation ratio of $1/3$. We show that
for $m=3$ and large $n$, the best mechanism that is {\em ordinal} as
well as {\em mixed-unilateral} has an approximation ratio between 0.610\
and 0.611. The best {\em ordinal} mechanism has an approximation ratio
between 0.616\ and 0.641. Finally, the best {\em mixed-unilateral}
mechanism has an approximation ratio larger than 0.660. In particular,
the best mixed-unilateral mechanism strictly outperforms all
ordinal ones, even the non-unilateral ordinal ones. The
mixed-unilateral mechanism that establishes this is a convex
combination of {\em quadratic-lottery}, a mechanism of Feige and Tennenholtz \cite{Feige10},
and {\em random-favorite}, defined above.
\subsection{Background, related research and discussion}
Characterizing strategy-proof social choice functions (a.k.a.,
truthful direct revelation mechanisms without money) under general
preferences is a classical topic of mechanism design and social choice
theory. The celebrated Gibbard-Satterthwaite theorem
\cite{Gibbard73,Satterthwaite75} states that when the number $m$ of candidates is at least 3, any {\em deterministic} and {\em onto} truthful mechanism\footnote{Even though the theorem is usually stated for ordinal mechanisms, it is easy to see that it holds even without assuming that the mechanism is ordinal.} must be a {\em dictatorship},
i.e., it is a function of the ballot of a single distinguished voter
only, and outputs the favorite (i.e., top ranking) candidate of that
voter. Gibbard \cite{Gibbard77} extended the Gibbard-Satterthwaite theorem to the case
of randomized ordinal mechanisms, and we shall heavily use his theorem when proving our negative results on ordinal mechanisms:
\begin{theorem}\cite{Gibbard77}\label{thm-Gibbard77}
The ordinal mechanisms without money that are truthful under general cardinal preferences\footnote{without ties, i.e., valuation functions must be injective, as we require throughout this paper, except in Theorem \ref{thm:anyupper}. If ties were allowed, the characterization would be much more complicated.} are exactly the convex combinations of truthful {\em unilateral} ordinal mechanisms and truthful {\em duple} mechanisms.
\end{theorem}
Here, a {\em unilateral} mechanism is a randomized mechanism whose
(random) output depends on the ballot of a single distinguished voter
$i^*$ only. Note that a unilateral truthful mechanism does not have to be a
dictatorship. For instance, the mechanism that elects with probability
$\frac{1}{2}$ each of the two top candidates according to the ballot of
voter $i^*$ is a unilateral truthful mechanism.
A {\em duple} mechanism is an ordinal mechanism for which there are two distinguished candidates so that all other candidates are elected with probability 0, for all valuation profiles.
An optimistic interpretation of Gibbard's 1977 result as opposed to
his 1973 result that was suggested, e.g., by Barbera \cite{Barbera79}, is that
the class of randomized truthful mechanisms is quite rich and contains
many arguably ``reasonable'' mechanisms--in contrast to dictatorships,
which are
clearly ``unreasonable''. However, we are not aware of any suggestions
in the social choice literature of any well-defined quality measures
that would enable us to rigorously compare these mechanisms and in particular
find the best. Fortunately, one of the main conceptual
contributions from computer science to mechanism design in general is
the suggestion of one such measure, namely the notion of worst case
approximation ratio relative to some objective function. Indeed, a
large part of the computer science literature on mechanism design
(with or without money) is the construction and analysis of
approximation mechanisms, following the agenda set by the seminal papers by Nisan and Ronen \cite{NisanRonen} for the case of mechanisms with money and Procaccia and Tennenholtz \cite{PT} for the case of mechanisms without money (i.e., social choice functions). Following this research program, and using
Gibbard's characterization, Procaccia \cite{Procaccia10} gave, in a paper conceptually
very closely related to the present one, upper and lower bounds on the
approximation ratio achievable by ordinal mechanisms for
various objective functions under general preferences. However, he only considered objective
functions that can be defined ordinally (such as, e.g., Borda count),
and did in particular not consider approximating the optimal social
welfare, as we do in the present paper.
The (approximate) optimization of social welfare (i.e. sum of valuations) is indeed a very
standard objective in mechanism design. In particular, in the setting
of mechanisms {\em with} money and agents with quasi-linear utilities,
the celebrated class of Vickrey-Clarke-Groves (VCG) mechanisms exactly
optimize social welfare, while classical negative results such as
Roberts' theorem, state that under general cardinal preferences (and
subject to some qualifications), weighted social welfare
is the only objective one can maximize exactly
truthfully, even with money (see Nisan \cite{Nisan07} for an exposition of all these results). It therefore seems to us extremely natural to try to
understand how well one can approximate this objective truthfully
without money under general cardinal preferences.
One possible reason that the problem was not considered
prior to this paper (to the best of our knowledge) is that,
arguably, social welfare is a somewhat less natural objective function
without the assumption of quasi-linearity of utilities made in
the setting of mechanisms with money. Indeed, assuming quasi-linearity
essentially means forcing the valuations of all agents to be in the unit of
dollars, making it natural to subsequently add them up. On the other hand, in the
setting of social choice theory, the valuation functions are to be
interpreted as von Neumann-Morgenstern utilities (i.e, they are meant
to encode orderings on lotteries), and in particular are only
well-defined up to affine transformations. In this setting, the social
welfare has to be defined as above, as the result of adding up the valuations of all
players, {\em after} these are normalized by scaling to, say, the interval [0,1]. While this is arguably {\em ad hoc}, we note again that optimizing social welfare in this sense is in fact the intended (hoping for truthful ballots) outcome of the well known {\em range voting scheme} (
\verb=http://en.wikipedia.org/wiki/Range_voting=) which is a good piece of evidence for its naturalness.\footnote{It was pointed out to us that it is not completely clear that it is part of the range voting scheme that voters are asked to calibrate their scores so that 0 is the score of their least preferred candidate and 1 is the score of their most preferred candidate. However, without {\em some} calibration instructions, the statement "Score the candidates on a scale from 0 to 1" simply does not make sense and we believe that the present calibration instructions are the most natural ones imaginable.}
As already noted, social welfare is a cardinal objective, i.e., it depends on
the actual numerical valuations of the voters; not just their rankings
of the candidates. While it makes perfect sense to measure how well
ordinal mechanisms can approximate a cardinal objective, such as
social welfare, it certainly also makes sense to see if improvements
to the approximation of the optimal social welfare can be made by mechanisms that actually look at
the numerical scores on the ballots and not just the rankings, i.e.,
cardinal mechanisms. The limitations of ordinal mechanisms were considered recently by Boutilier {\em et al.} \cite{Boutilier} in a very interesting paper closely related to the present one, but crucially, their work did not consider incentives, i.e., they did not require truthfulness of the mechanisms in their investigations. On the other hand, truthfulness is the pivotal property in our approach.
The characterization of truthful mechanisms of Theorem \ref{thm-Gibbard77} does not apply to cardinal
mechanisms. But noting that the definition of ``unilateral'' is not
restricted to ordinal mechanisms, one might naturally suspect that a
similar characterization would also apply to cardinal mechanisms. In a follow-up paper, Gibbard \cite{Gibbard78} indeed proved a theorem along those lines,
but interestingly, his result does {\em not} apply to truthful {\em
direct} revelation mechanisms (i.e., strategy-proof social choice
functions), which is the topic of the present paper, but only to {\em
indirect} revelation mechanisms with finite strategy space. Also, the
restriction to finite strategy space (which is in direct contradiction
to direct revelation) is crucial for the proof. Somewhat
surprisingly, to this date, a characterization for the cardinal case
is still an open problem! For a discussion of this situation and for interesting counterexamples to tempting characterization attempts similar to the characterization for the ordinal case of Gibbard \cite{Gibbard77}, see
\cite{Barbera98,Barbera10}, the bottom line being that we at the moment do not have a good understanding of what can be done with cardinal truthful mechanisms for general preferences.
Concrete examples of cardinal mechanisms for general
preferences were given in a number of papers in the economics and social choice
literature \cite{Zeckhauser73,Freixas84,Barbera98} and the computer
science literature \cite{Feige10}. It is interesting that while the
social choice literature gives examples suggesting that the space of
cardinal mechanisms is rich and even examples of instances where a cardinal
mechanism for voting can yield a (Pareto) better result than all
ordinal mechanisms \cite{Freixas84}, there was apparently no systematic
investigation into constructing ``good'' cardinal mechanisms for
unrestricted preferences. Here, as in the ordinal case, we suggest that the notion of
approximation ratio provides a meaningful measure of quality that
makes such investigations possible, and indeed, our present paper is meant to start such investigations. Our investigations are very much helped by
the work of Feige and Tennenholtz \cite{Feige10} who considered and characterized the {\em
strongly truthful, continuous, unilateral} cardinal mechanisms.
While their agenda was mechanisms for which the objective is information
elicitation itself rather than mechanisms for approximate optimization
of an objective function, the mechanisms they suggest still turn out to be useful for social welfare optimization. In particular, our construction establishing the gap between the approximation ratios for cardinal and ordinal mechanisms for three candidates is based on their {\em quadratic lottery}.
\subsection{Organization of paper}
In Section \ref{sec-prel} we give formal definitions of the concepts
informally discussed above, and state and prove some useful lemmas. In
Section \ref{sec-many}, we present our results for an
arbitrary number of candidates $m$. In Section \ref{sec-few}, we present our results for $m=3$. We conclude with a discussion of open problems in Section \ref{sec-conc}.
\section{Preliminaries}\label{sec-prel}
We let $\valset{m}$ denote the set of canonically represented valuation functions on $M = \{1,2,\ldots,m\}$. That is, $\valset{m}$ is
the set of injective functions $u: M \rightarrow [0,1]$
with the property that $0$ as well as $1$ are contained in the image
of $u$.
We let $\mech{m}{n}$ denote the set of truthful
mechanisms for $n$ voters and $m$ candidates. That is, $\mech{m}{n}$
is the set of random maps $J: {\valset{m}}^n \rightarrow M$ with the property that for voter $i \in \{1,\ldots,n\}$, and all ${\bf u} = (u_i, u_{-i}) \in
{\valset{m}}^n$ and $\tilde u_i \in \valset{m}$, we have
$E[u_i(J(u_i,u_{-i}))] \geq E[u_i(J(\tilde u_i, u_{-i}))]$. Alternatively, instead of viewing a mechanism as a random map, we can view it as a map from ${\valset{m}}^n$ to $\Delta_m$, the set of probability density functions on $\{1,\ldots,m\}$. With this interpretation, note that $\mech{m}{n}$ is a convex subset of the vector space of all maps from ${\valset{m}}^n$ to ${\mathbb{R}}^m$.
We shall be interested in certain special classes of
mechanisms. In the following definitions, we throughout view a mechanism $J$ as
a map from
${\valset{m}}^n$ to $\Delta_m$.
An {\em ordinal} mechanism $J$
is a mechanism with the following property:
$J(u_i, u_{-i}) = J(u'_i, u_{-i})$, for any voter $i$, any preference
profile ${\bf u} = (u_i, u_{-i})$, and any valuation function $u'_i$
with the property that for all pairs of candidates $j, j'$, it is the
case that $u_i(j) < u_i(j')$ if and only if $u'_i(j) <
u'_i(j')$. Informally, the behavior of an ordinal mechanism only depends on the
ranking of candidates on each ballot; not on the numerical valuations.
We let $\rmech{O}{m}{n}$ denote those mechanisms in $\mech{m}{n}$ that are ordinal.
Following Barbera \cite{Barbera79}, we define an {\em anonymous}
mechanism $J$ as one that does not depend on the names of
voters. Formally, given any permutation $\pi$ on $N$,
and any ${\bf u} \in (\valset{m})^n$, we have
$J({\bf u}) = J(\pi \cdot {\bf u})$, where $\pi \cdot {\bf u}$
denotes the vector $(u_{\pi(i)})_{i=1}^n$.
Similarly following Barbera \cite{Barbera79}, we define a {\em neutral}
mechanism $J$ as one that does not depend on the names of
candidates. Formally, given any permutation $\sigma$ on $M$, any
${\bf u} \in (\valset{m})^n$, and any candidate $j$, we have $J({\bf u})_{\sigma(j)} = J(u_1 \circ \sigma, u_2 \circ \sigma, \ldots, u_n \circ \sigma)_j$.
Following \cite{Gibbard77,Barbera98}, a {\em unilateral} mechanism is a mechanism for which there
exists a single voter $i^*$ so that for all valuation profiles
$(u_{i^*}, u_{-i^*})$ and any alternative valuation profile $u'_{-i^*}$
for the voters except $i^*$, we have $J(u_{i^*}, u_{-i^*}) = J(u_{i^*}, u'_{-i^*})$.
Note that $i^*$ is {\em not} allowed to be chosen at random in the
definition of a unilateral mechanism. In this paper, we shall say that
a mechanism is {\em mixed-unilateral} if it is
a convex combination of unilateral truthful
mechanisms. Mixed-unilateral mechanisms are quite attractive
seen through the ``computer science lens'': They are mechanisms of
{\em low query complexity}; consulting only a single randomly chosen
voter, and therefore deserve special attention in their own right.
We let $\rmech{U}{m}{n}$ denote those mechanisms in $\mech{m}{n}$ that
are mixed-unilateral. Also, we let $\rmech{OU}{m}{n}$ denote
those mechanisms in $\mech{m}{n}$ that are ordinal as well as mixed-unilateral.
Following Gibbard \cite{Gibbard77}, a {\em duple} mechanism
$J$ is an ordinal\footnote{Barbera {\em et al.} \cite{Barbera98} gave a much more general definition of duple mechanism; their duple mechanisms are not restricted to be ordinal. In this paper, ``duple'' refers exclusively to Gibbard's original notion.}
mechanism for which there exist two candidates $j^*_1$ and
$j^*_2$ so that for all valuation profiles, $J$ elects all other candidates with probability $0$.
We next give names to some specific important mechanisms.
We let $\unilat{m}{n}{q} \in \rmech{OU}{m}{n}$ be the mechanism for $m$ candidates and $n$ voters that picks a voter uniformly at random, and elects uniformly at random a candidate among his $q$ most preferred candidates.
We let {\em random-favorite} be a nickname for $\unilat{m}{n}{1}$ and {\em random-candidate} be a nickname for $\unilat{m}{n}{m}$.
We let $\duple{m}{n}{q} \in \rmech{O}{m}{n}$, for $\lfloor n/2 \rfloor + 1 \leq q \leq n + 1$, be the mechanism for $m$ candidates and $n$ voters that picks two candidates
uniformly at random and eliminates all other candidates.
It then checks for each voter which of the two candidates
he prefers and gives that candidate a ``vote''. If a candidate gets at least $q$ votes, she is elected. Otherwise, a coin is flipped to
decide which of the two candidates is elected. We let {\em random-majority} be a nickname for
$\duple{m}{n}{\lfloor n/2 \rfloor + 1}$. Note also that $\duple{m}{n}{n+1}$ is just another name for {\em random-candidate}.
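The parameterized families $\unilat{m}{n}{q}$ and $\duple{m}{n}{q}$ can be sketched as follows. This is our own illustrative code: ballots are canonical valuation vectors, and each function returns the election distribution over candidates.

```python
from itertools import combinations

def unilat(profile, m, q):
    # U_{m,n,q}: uniform random voter, then a uniform random candidate
    # among his q most preferred candidates.
    n = len(profile)
    p = [0.0] * m
    for u in profile:
        ranked = sorted(range(m), key=lambda j: -u[j])
        for j in ranked[:q]:
            p[j] += 1.0 / (n * q)
    return p

def duple(profile, m, q):
    # D_{m,n,q}: two distinct random candidates; a candidate preferred by at
    # least q voters is elected outright, otherwise a fair coin decides.
    n = len(profile)
    p = [0.0] * m
    pairs = list(combinations(range(m), 2))
    for a, b in pairs:
        votes_a = sum(1 for u in profile if u[a] > u[b])
        votes_b = n - votes_a
        if votes_a >= q:
            winners = {a: 1.0}
        elif votes_b >= q:
            winners = {b: 1.0}
        else:
            winners = {a: 0.5, b: 0.5}
        for j, w in winners.items():
            p[j] += w / len(pairs)
    return p
```

By construction, `unilat(profile, m, 1)` is {\em random-favorite}, `unilat(profile, m, m)` is {\em random-candidate}, and `duple(profile, m, n + 1)` flips a coin on every pair and hence also elects a uniformly random candidate.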
Finally, we shall be interested in the following mechanism $Q_n$ for three candidates shown to be in $\rmech{U}{3}{n}$ by Feige and Tennenholtz \cite{Feige10}: Select a voter uniformly at random, and let $\alpha$ be the valuation of his second most preferred candidate. Elect his most preferred candidate with probability $(4-\alpha^2)/6$, his second most preferred candidate with probability $(1+2 \alpha)/6$ and his least preferred candidate with probability $(1-2 \alpha + \alpha^2)/6$. We let {\em quadratic-lottery} be a nickname for $Q_n$. Note that {\em quadratic-lottery} is not ordinal. Feige and Tennenholtz \cite{Feige10} in fact presented several explicitly given non-ordinal one-voter truthful mechanisms, but {\em quadratic-lottery} is particularly amenable to an approximation ratio analysis due to the fact that the election probabilities are quadratic polynomials.
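A sketch of {\em quadratic-lottery} as an election distribution (our own code, not Feige and Tennenholtz's; ballots are canonical valuation vectors, so for $m=3$ each ballot is a permutation of $\{0,\alpha,1\}$):

```python
def quadratic_lottery(profile):
    # Q_n for m = 3: pick a uniform random voter; if alpha is the valuation
    # of his middle candidate, elect top/middle/bottom with probabilities
    # (4 - a^2)/6, (1 + 2a)/6 and (1 - 2a + a^2)/6, which sum to 1.
    n = len(profile)
    p = [0.0, 0.0, 0.0]
    for u in profile:
        top, mid, bot = sorted(range(3), key=lambda j: -u[j])
        a = u[mid]
        p[top] += (4 - a * a) / (6 * n)
        p[mid] += (1 + 2 * a) / (6 * n)
        p[bot] += (1 - 2 * a + a * a) / (6 * n)
    return p
```

Truthfulness of the one-voter lottery against same-ranking deviations can be seen directly: a voter whose true middle valuation is $a$ and who reports middle valuation $b$ gets expected utility $\left((4-b^2) + a(1+2b)\right)/6$, a quadratic in $b$ maximized at $b=a$.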
We let $\mbox{\rm ratio}(J)$ denote the approximation ratio of a mechanism $J \in \mech{m}{n}$, when the objective is social welfare. That is,
\[ \mbox{\rm ratio}(J) = \inf_{{\bf u} \in {\valset{m}}^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\max_{j \in M}\sum_{i=1}^n u_i(j)}. \]
We let $\ratio{m}{n}$ denote the best possible approximation ratio when there are $n$ voters and $m$ candidates. That is,
$\ratio{m}{n} = \sup_{J \in \mbox{\small \rm Mech}_{m,n}} \mbox{\rm ratio}(J)$.
Similarly, we let $\gratio{C}{m}{n} = \sup_{J \in
\mbox{\small \rm Mech}^{\bf C}_{m,n}} \mbox{\rm ratio}(J),$
for {\bf C} being either {\bf O}, {\bf U} or {\bf OU}.
We let $\aratio{m}$ denote the asymptotically best possible approximation ratio when the number of voters approaches infinity. That is, $\aratio{m} = \liminf_{n \rightarrow \infty} \ratio{m}{n}$,
and we also extend this notation to the restricted classes of mechanisms with the obvious notation $\agratio{O}{m}, \agratio{U}{m}$ and $\agratio{OU}{m}$.
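Since $\mbox{\rm ratio}(J)$ is an infimum over profiles, sampling profiles can only overestimate it; the following Monte-Carlo sketch (our own code, with hypothetical function names) makes this concrete and is convenient for sanity-checking candidate mechanisms.

```python
import random

def empirical_ratio(J, m, n, trials=2000, seed=0):
    # Monte-Carlo *upper bound* on ratio(J): the infimum is over all
    # profiles, so the minimum over sampled profiles is only an upper bound.
    rng = random.Random(seed)
    worst = 1.0
    for _ in range(trials):
        profile = []
        for _ in range(n):
            # canonical ballot: values 0 and 1 plus m-2 random middle values,
            # assigned to candidates via a random permutation
            mids = sorted(rng.random() for _ in range(m - 2))
            scores = [0.0] + mids + [1.0]
            perm = list(range(m))
            rng.shuffle(perm)
            u = [0.0] * m
            for j, s in zip(perm, scores):
                u[j] = s
            profile.append(u)
        p = J(profile, m)
        ew = sum(p[j] * sum(u[j] for u in profile) for j in range(m))
        opt = max(sum(u[j] for u in profile) for j in range(m))
        worst = min(worst, ew / opt)
    return worst
```

For instance, applied to {\em random-candidate} it always returns a value in $[1/m, 1]$, consistent with that mechanism's approximation ratio of $\Theta(m^{-1})$.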
The importance of neutral and anonymous mechanisms is apparent from the following simple lemma:
\begin{lemma}\label{lem-naenough}
For all $J \in \mech{m}{n}$, there is a $J' \in \mech{m}{n}$ so that $J'$ is anonymous and neutral and so that $\mbox{\rm ratio}(J') \geq \mbox{\rm ratio}(J)$. Similarly,
for all $J \in \rmech{C}{m}{n}$, there is $J' \in \rmech{C}{m}{n}$ so that $J'$ is anonymous and neutral and so that $\mbox{\rm ratio}(J') \geq \mbox{\rm ratio}(J)$, for {\bf C} being either {\bf O}, {\bf U} or {\bf OU}.
\end{lemma}
\begin{proof}
Given any mechanism $J$, we can ``anonymize'' and ``neutralize'' $J$
by applying a uniformly chosen random permutation to the set of
candidates and an independent uniformly chosen random permutation to
the set of voters before applying $J$. This yields an anonymous and
neutral mechanism $J'$ with at least as good an approximation ratio as
$J$. Also, if $J$ is ordinal and/or mixed-unilateral, then so is $J'$.
\end{proof}
Lemma \ref{lem-naenough} makes the characterizations of the following theorem very useful.
\begin{theorem}\label{thm-na}
The set of anonymous and neutral mechanisms in $\rmech{OU}{m}{n}$ is equal to the set of convex combinations of the mechanisms
$\unilat{m}{n}{q}$, for $q \in \{1,\ldots,m\}$.
Also, the set of anonymous and neutral mechanisms in $\mech{m}{n}$ that can be obtained as convex combinations of duple mechanisms is equal to the set of convex combinations of the mechanisms
$\duple{m}{n}{q}$, for $q \in
\{\lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots,
n, n+1\}$.
\end{theorem}
\begin{proof}
A very closely related statement was shown by Barbera \cite{Barbera78}. We sketch how to derive the theorem from that statement.
Barbera (in \cite{Barbera78}, as summarized in the proof of Theorem 1
in \cite{Barbera79}) showed that the anonymous, neutral mechanisms in
$\rmech{OU}{m}{n}$ are exactly the {\em point voting schemes}
and that the anonymous, neutral mechanisms that are convex combinations of duple mechanisms are exactly {\em
supporting size schemes}.
A point voting scheme is given by $m$ real
numbers $(a_j)_{j=1}^m$ summing to $1$, with $a_1 \geq a_2 \geq \cdots
\geq a_m \geq 0$. It picks a voter uniformly at random, and elects the
candidate he ranks $k$th with probability $a_k$, for $k=1,\ldots,m$.
It is easy to see that
the point voting schemes are exactly the convex combinations of
$\unilat{m}{n}{q}$, for $q \in \{1,\ldots,m\}$.
A supporting size
scheme is given by $n+1$ real numbers $(b_i)_{i=0}^{n}$ with $b_n \geq
b_{n-1} \geq \cdots \geq b_0 \geq 0$, and $b_i + b_{n-i} = 1$ for $i \leq
n/2$. It picks two different candidates $j_1, j_2$ uniformly at random
and elects candidate $j_k, k=1,2$ with probability $b_{s_k}$ where
$s_k$ is the number of voters that rank $j_k$ higher than
$j_{3-k}$. It is easy to see that the supporting size schemes are
exactly the convex combinations of $\duple{m}{n}{q}$, for $q \in
\{\lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n + 1\}$.
\end{proof}
The following corollary is immediate from Theorem \ref{thm-Gibbard77} and Theorem \ref{thm-na}.
\begin{corollary}\label{cor-ud} The ordinal, anonymous and neutral mechanisms in
$\mech{m}{n}$ are exactly the convex combinations of the mechanisms
$\unilat{m}{n}{q}$, for $q \in \{1,\ldots,m\}$ and $\duple{m}{n}{q}$,
for $q \in \{\lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots,
n\}$.
\end{corollary}
We next present some lemmas that allow us to understand the asymptotic behavior of $\ratio{m}{n}$ and $\gratio{C}{m}{n}$ for fixed $m$ and large $n$, for {\bf C} being either {\bf O}, {\bf U} or {\bf OU}.
\begin{lemma}\label{lem-scale}
For any positive integers $n,m,k$, we have
$\ratio{m}{kn} \leq
\ratio{m}{n}$ and $\gratio{C}{m}{kn} \leq \gratio{C}{m}{n}$, for {\bf C} being either {\bf O}, {\bf U} or {\bf OU}.
\end{lemma}
\begin{proof}
Suppose
we are given any mechanism $J$ in $\mech{m}{kn}$ with approximation ratio
$\alpha$. We will convert it to a mechanism $J'$ in $\mech{m}{n}$
with the same approximation ratio, hence proving $\ratio{m}{kn} \leq
\ratio{m}{n}$. The natural idea is to let $J'$ simulate $J$ on the profile where we simply make $k$ copies of each of the $n$ ballots. More specifically, let $\mathbf{u'}=\left(u'_1,\ldots,u'_n\right)$ be a valuation profile with $n$ voters and $\mathbf{u}=\left(u_1,\ldots,u_{kn}\right)$ be a valuation profile with $kn$ voters, such that $u_{ik+1}=u_{ik+2}=\ldots=u_{(i+1)k}=u'_{i
+1}$, for $i=0,\ldots,n-1$, where ``$=$'' denotes component-wise equality. Then let $J'(\mathbf{u'})=J(\mathbf{u})$. To complete the proof, we need to prove that if $J$ is truthful, $J'$ is truthful as well.
Let $\mathbf{u}=\left(u_1,\ldots,u_{kn}\right)$ be the profile defined above for $kn$ agents and let $\mathbf{u'}$ be the corresponding $n$ agent profile. We will consider deviations of agents with the same valuation functions to the same misreported valuation vector $\hat{u}$; without loss of generality, we can assume that these are agents $1,\ldots,k$. For ease of notation, let $\mathbf{u_{i+1}}=\left(u_{ik+1},u_{ik+2},\ldots,u_{(i+1)k}\right)$ be a \emph{block} of valuation functions, for $i=0,\ldots,n-1$, and note that given this notation, we can write $\mathbf{u}=\left(\mathbf{u_1},\mathbf{u_2},\ldots,\mathbf{u_n}\right)=\left(u_1,\ldots,u_k,\mathbf{u_2},\ldots,\mathbf{u_n}\right)$. Let $v^{*}=E[u_1(J(\mathbf{u}))]$. Now consider the profile $\left(\hat{u},u_2,\ldots,u_k,\mathbf{u_2},\ldots,\mathbf{u_n}\right)$. By truthfulness, agent $1$'s expected utility in the new profile (with respect to $u_1$) is at most $v^{*}$. Next, consider the profile $\left(\hat{u},\hat{u},u_3,\ldots,u_k,\mathbf{u_2},\ldots,\mathbf{u_n}\right)$ and observe that agent $2$'s utility from misreporting is at most her utility before misreporting, which is at most $v^{*}$. Continuing like this, we obtain the valuation profile $\left(\hat{u},\hat{u},\ldots,\hat{u},\mathbf{u_2},\ldots,\mathbf{u_n}\right)$ in which the expected utility of agents $1,\ldots,k$ is at most $v^{*}$, and hence no deviating agent gains from misreporting. Now observe that this profile corresponds to the $n$-agent profile $\left(\hat{u},u'_2,\ldots,u'_n\right)$, which is obtained from $\mathbf{u'}$ by a single misreport of agent $1$. By the discussion above and the way $J'$ was constructed, agent $1$ does not benefit from this misreport, and since the misreported valuation function was arbitrary, $J'$ is truthful.
The same proof works for $\gratio{C}{m}{kn} \leq \gratio{C}{m}{n}$, for {\bf C} being either {\bf O}, {\bf U} or {\bf OU}.
\end{proof}
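The replication construction in the proof is easy to make concrete. In the sketch below (our own code), `shrink` builds $J'$ from $J$ by copying each of the $n$ ballots $k$ times, and a toy anonymous mechanism illustrates that the construction preserves behavior on replicated profiles.

```python
def replicate(profile, k):
    # k copies of each ballot, in blocks: (u1,...,u1, u2,...,u2, ...)
    return [u for u in profile for _ in range(k)]

def shrink(J, k):
    # J' in Mech_{m,n} from J in Mech_{m,kn}: J'(u') = J(replicate(u', k))
    return lambda profile, m: J(replicate(profile, k), m)

def plurality_shares(profile, m):
    # toy anonymous mechanism: election probability proportional to the
    # number of first-place votes
    p = [0.0] * m
    for u in profile:
        p[max(range(m), key=lambda j: u[j])] += 1.0 / len(profile)
    return p
```

Here `plurality_shares` is anonymous and invariant under replication, so `shrink(plurality_shares, k)` coincides with it; for a general truthful $J$, the proof above shows that $J'$ inherits both truthfulness and the approximation ratio.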
\begin{lemma}\label{lem-kill}
For any $n,m$ and $k < n$, we have
$\ratio{m}{n} \geq \ratio{m}{n-k} - \frac{km}{n}.$
Also,
$\gratio{C}{m}{n} \geq \gratio{C}{m}{n-k} - \frac{km}{n},$
for {\bf C} being either {\bf O}, {\bf U}, or {\bf OU}.
\end{lemma}
\begin{proof}
We construct a mechanism $J'$ in $\mech{m}{n}$ from a mechanism $J$ in
$\mech{m}{n-k}$. The mechanism $J'$ simply simulates $J$ after removing
$k$ voters, chosen uniformly at random, and randomly mapping the remaining voters to $\{1,\ldots,n-k\}$.
In particular, if $J$ is ordinal (or mixed-unilateral, or both)
then so is $J'$. Suppose $J$ has approximation ratio
$\alpha$. Consider running $J'$ on any profile where the socially
optimal candidate has social welfare $w^*$. Note that $w^* \geq n/m$, since each voter assigns valuation $1$ to some candidate. Ignoring $k$ voters
reduces the social welfare of any candidate by at most $k$, so $J'$ is
guaranteed to return a candidate with expected social welfare at least
$\alpha (w^* - k)$. This is an $\alpha (1 - k/w^*) \geq \alpha - \frac{km}{n}$ fraction of $w^*$. Since the profile was arbitrary, we are done.
\end{proof}
\begin{lemma}\label{lem-limit}For any $m, n \geq 2, \epsilon>0$ and all $n' \geq (n-1)m/\epsilon$, we have $\ratio{m}{n'} \leq \ratio{m}{n} + \epsilon$ and $\gratio{C}{m}{n'} \leq \gratio{C}{m}{n} + \epsilon$,
for {\bf C} being either {\bf O}, {\bf U}, or {\bf OU}.
\end{lemma}
\begin{proof}
If $n$ divides $n'$, the statement follows from Lemma \ref{lem-scale}. Otherwise, let $n^*$ be the smallest number larger than $n'$ divisible by $n$; we have $n^* < n' + n$. By Lemma \ref{lem-scale}, we have
$\ratio{m}{n^*} = \ratio{m}{n}$. By Lemma \ref{lem-kill}, we have
$\ratio{m}{n^*} \geq \ratio{m}{n'} - \frac{(n-1)m}{n^*}$.
Therefore, $\ratio{m}{n'} \leq \ratio{m}{n} + \frac{(n-1)m}{n^*} \leq \ratio{m}{n} + \frac{(n-1)m}{n'}$. The same arguments work for proving $\gratio{C}{m}{n'} \leq \gratio{C}{m}{n} + \epsilon$,
for {\bf C} being either {\bf O}, {\bf U}, or {\bf OU}.
\end{proof}
In particular, Lemma \ref{lem-limit} implies that $\ratio{m}{n}$ converges to a limit as $n \rightarrow \infty$.
\subsection{Quasi-combinatorial valuation profiles}
It will sometimes be useful to restrict the set of valuation functions to a certain finite domain $\rvalset{m}{k}$ for an integer parameter $k \geq m$. Specifically, we define:
\[ \rvalset{m}{k} = \left\{ u \in \valset{m} | \Im(u) \subseteq \{0, \frac{1}{k}, \frac{2}{k}, \ldots, \frac{k-1}{k}, 1\}\right\} \]
where $\Im(u)$ denotes the image of $u$.
Given a valuation function $u \in \rvalset{m}{k}$, we define its {\em alternation number} $a(u)$ as
\[ a(u) = \# \{j \in \{0,\ldots,k-1\} | [\frac{j}{k} \in \Im(u)] \oplus [\frac{j+1}{k} \in \Im(u)] \}, \]
where $\oplus$ denotes exclusive-or. That is, the alternation number of $u$ is the number of indices $j$ for which exactly one of $j/k$ and $(j+1)/k$ is in the image of $u$. Since $k \geq m$ and $\{0,1\} \subseteq \Im(u)$, we have that the alternation number of $u$ is at least $2$. We shall be interested in the class of valuation functions $\combset{m}{k}$ with minimal alternation number. Specifically, we define:
\[ \combset{m}{k} = \{ u \in \rvalset{m}{k} | a(u) = 2 \} \]
and shall refer to such valuation functions as {\em quasi-combinatorial valuation functions}. Informally, the quasi-combinatorial valuation functions have all valuations as close to 0 or 1 as possible.
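As a concrete illustration of these definitions (a sketch of ours with hypothetical helper names; a valuation is represented by its image as a set of \texttt{Fraction}s), the alternation number and the quasi-combinatorial property can be computed directly:

```python
from fractions import Fraction

def alternation_number(image, k):
    # a(u) = #{ j in {0,...,k-1} : exactly one of j/k and (j+1)/k lies in Im(u) }
    present = [Fraction(j, k) in image for j in range(k + 1)]
    return sum(present[j] != present[j + 1] for j in range(k))

def is_quasi_combinatorial(image, k):
    # u is quasi-combinatorial iff its alternation number is minimal, i.e. 2
    return alternation_number(image, k) == 2
```

For $k=10$, the image $\{0,\frac{1}{10},\frac{9}{10},1\}$ has alternation number $2$ and is hence quasi-combinatorial, while $\{0,\frac{3}{10},1\}$ has alternation number $4$.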
The following lemma will be very useful in later sections. It states that in order to analyse the approximation ratio of an ordinal and neutral mechanism, it is sufficient to understand its performance on quasi-combinatorial valuation profiles.
\begin{lemma}\label{lem-quasicomb}
Let $J \in \mech{m}{n}$ be ordinal and neutral. Then
\[ \mbox{\rm ratio}(J) = \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\combset{m}{k})^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\sum_{i=1}^n u_i(1)}. \]
\end{lemma}
\begin{proof}
For a valuation profile ${\bf u} = (u_i)$, define
$g({\bf u}) = \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\sum_{i=1}^n u_i(1)}.$
We show the following equations:
\begin{eqnarray}
\mbox{\rm ratio}(J)
& = & \inf_{{\bf u} \in \valset{m}^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\max_{j \in M}\sum_{i=1}^n u_i(j)} \\
& = & \inf_{{\bf u} \in \valset{m}^n} g({\bf u}) \label{eq-fixed-cand}\\
& = & \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\rvalset{m}{k})^n} g({\bf u}) \label{eq-discretized}\\
& = & \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\combset{m}{k})^n} g({\bf u}) \label{eq-quasicomb}
\end{eqnarray}
Equation (\ref{eq-fixed-cand}) follows from the fact that since $J$ is neutral, it is invariant under permutations of the set of candidates, so there is a worst case instance (with respect to approximation ratio) where the socially optimal candidate is candidate 1.
Equation (\ref{eq-discretized}) follows from the facts that
(a) each ${\bf u} \in (\valset{m})^n$ can be written as ${\bf u} = \lim_{k \rightarrow \infty} {\bf v}_k$ where $({\bf v}_k)$ is a sequence so that ${\bf v}_k \in (\rvalset{m}{k})^n$ and where the limit is with respect to the usual Euclidean topology (with the set of valuation functions being considered as a subset of a finite-dimensional Euclidean space), and
(b) the map $g$ is continuous in this topology (to see this, observe that the denominator in the formula for $g$ is bounded away from 0).
Finally, equation (\ref{eq-quasicomb}) follows from the following claim:
\[ \forall {\bf u} \in (\rvalset{m}{k})^n\ \exists {\bf u}' \in (\combset{m}{k})^n : g({\bf u}') \leq g({\bf u}). \]
With ${\bf u} = (u_1, \ldots, u_n)$, we shall prove this claim by induction in $\sum_i a(u_i)$ (recall that $a(u_i)$ is the alternation number of $u_i$).
For the induction basis, the smallest possible value of $\sum_i a(u_i)$ is $2n$, corresponding to all $u_i$ being quasi-combinatorial. For this case, we let ${\bf u}'={\bf u}$.
For the induction step, consider a valuation profile ${\bf u}$ with $\sum_i a(u_i) > 2n$. Then, there must be an $i$ so that the alternation number $a(u_i)$ of $u_i$ is strictly larger than 2 (and therefore at least 4, since alternation numbers are easily seen to be even numbers). Then, there must be $r, s \in \{2,3,\ldots,k-2\}$, so that $r \leq s$, $\frac{r-1}{k} \not \in \Im(u_i)$,
$\{\frac{r}{k}, \frac{r+1}{k}, \ldots, \frac{s-1}{k}, \frac{s}{k} \} \subseteq \Im(u_i)$ and $\frac{s+1}{k} \not \in \Im(u_i)$. Let $\tilde{r}$ be the largest number strictly smaller than $r$ for which $\frac{\tilde{r}}{k} \in \Im(u_i)$; this number exists since $0 \in \Im(u_i)$. Similarly, let $\tilde{s}$ be the smallest number strictly larger than $s$ for which $\frac{\tilde{s}}{k} \in \Im(u_i)$; this number exists since $1 \in \Im(u_i)$. We now define a valuation function $u^x \in \valset{m}$ for any $x \in [\tilde{r}-r+1; \tilde{s}-s-1]$, as follows: $u^x$ agrees with $u_i$ on all candidates $j$ {\em not} in $u_i^{-1}(\{\frac{r}{k}, \frac{r+1}{k}, \ldots, \frac{s-1}{k}, \frac{s}{k} \})$,
while for candidates
$j \in u_i^{-1}(\{\frac{r}{k}, \frac{r+1}{k}, \ldots, \frac{s-1}{k}, \frac{s}{k} \})$
, we let
$u^x(j) = u_i(j) + \frac{x}{k}$. Now consider the function $h: x \rightarrow g((u^x, u_{-i}))$, where
$(u^x, u_{-i})$ denotes the result of replacing $u_i$ with $u^x$ in the profile ${\bf u}$. Since $J$ is ordinal, we see by inspection of the definition of the function $g$, that $h$ on the domain $[\tilde{r}-r+1; \tilde{s}-s-1]$ is a fractional linear function $x \rightarrow (ax + b)/(cx + d)$ for some $a,b,c,d \in {\mathbb R}$. As $h$ is defined on the entire interval $[\tilde{r}-r+1; \tilde{s}-s-1]$, we therefore have that $h$ is either monotonely decreasing or monotonely increasing in this interval, or possibly constant. If $h$ is monotonely increasing, we let $\tilde{\bf u} = (u^{\tilde{r}-r+1}, u_{-i})$, and apply the induction hypothesis on $\tilde{\bf u}$. If $h$ is monotonely decreasing, we let $\tilde {\bf u} = (u^{\tilde{s}-s-1}, u_{-i})$, and apply the induction hypothesis on $\tilde{\bf u}$. If $h$ is constant on the interval, either choice works. This completes the proof.
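The shift $u^x$ used in this induction step can be sketched as follows (our own illustration, not part of the formal argument; a valuation is a dictionary from candidates to \texttt{Fraction}s, and the block $\{\frac{r}{k},\ldots,\frac{s}{k}\}$ is as in the proof):

```python
from fractions import Fraction

def shift_block(u, k, r, s, x):
    # Shift the whole value block {r/k, ..., s/k} of u by x/k, leaving all
    # other valuations untouched, exactly as in the induction step above.
    lo, hi = Fraction(r, k), Fraction(s, k)
    return {j: v + Fraction(x, k) if lo <= v <= hi else v for j, v in u.items()}
```

E.g., with $k=10$, $r=4$, $s=7$ and $x=2$, a candidate valued $\frac{4}{10}$ is moved to $\frac{6}{10}$, while candidates valued $0$ or $1$ stay fixed.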
\begin{figure}
\label{fig-quasicomb}
\begin{tikzpicture}
\usetikzlibrary{arrows}
\usetikzlibrary{shapes}
\tikzstyle{every node}=[draw=black, inner sep=1pt]
\draw[|->] (0,0)--(17,0);
\node [draw=white,inner sep=0pt,label=below:{$R_{m,k}$}] at (17,0) {};
\node [circle,label=below:{$0$}] at (1,0) {};
\node [circle,label=below:{$\frac{1}{10}$}] at (2.5,0) {};
\node [circle,label=below:{$\frac{2}{10}$}] at (4,0) {};
\node [circle,label=below:{$\frac{3}{10}$}] at (5.5,0) {};
\node [circle,label=below:{$\frac{4}{10}$}] at (7,0) {};
\node [circle,label=below:{$\frac{5}{10}$}] at (8.5,0) {};
\node [circle,label=below:{$\frac{6}{10}$}] at (10,0) {};
\node [circle,label=below:{$\frac{7}{10}$}] at (11.5,0) {};
\node [circle,label=below:{$\frac{8}{10}$}] at (13,0) {};
\node [circle,label=below:{$\frac{9}{10}$}] at (14.5,0) {};
\node [circle,label=below:{$1$}] at (16,0) {};
\node [draw=white,inner sep=0pt,label=below:{$\in \Im(u_i)$}] at (14,1.5) {};
\node [draw=white,inner sep=0pt,label=below:{$\notin \Im(u_i)$}] at (14,-0.7) {};
\draw[thick,dashed,fill=gray,opacity=0.2] (0.2,-1.5) to [out=-15, in=115] (3,0.5) to [out=-75,in=-115] (4.5,-1) to [out=75,in=120] (6,0.5) to [out=-75,in=-115] (12.9,0.5) to [out=75,in=115] (15.5,-0.5) to [out=-75,in=0] (17,-1.5);
\draw[|->] (0,-4)--(17,-4);
\node [draw=white,inner sep=0pt,label=below:{$R_{m,k}$}] at (17,-4) {};
\node [circle,label=below:{$0$}] at (1,-4) {};
\node [circle,label=below:{$\frac{1}{10}$}] at (2.5,-4) {};
\node [circle,label=below:{$\frac{2}{10}$}] at (4,-4) {};
\node [circle,label=below:{$\frac{3}{10}$}] at (5.5,-4) {};
\node [circle,label=below:{$\frac{4}{10}$}] at (7,-4) {};
\node [circle,label=below:{$\frac{5}{10}$}] at (8.5,-4) {};
\node [circle,label=below:{$\frac{6}{10}$}] at (10,-4) {};
\node [circle,label=below:{$\frac{7}{10}$}] at (11.5,-4) {};
\node [circle,label=below:{$\frac{8}{10}$}] at (13,-4) {};
\node [circle,label=below:{$\frac{9}{10}$}] at (14.5,-4) {};
\node [circle,label=below:{$1$}] at (16,-4) {};
\node [draw=white,inner sep=0pt,label=below:{$\in \Im(u_i)$}] at (14,-2.5) {};
\node [draw=white,inner sep=0pt,label=below:{$\notin \Im(u_i)$}] at (14,-4.9) {};
\draw[thick,dashed,fill=gray,opacity=0.2] (0.2,-5.5) to [out=-15, in=115] (3,-3.5) to [out=-75,in=-115] (4.5,-5) to [out=75,in=120] (9,-4) to [out=-70,in=120] (17,-5.7);
\end{tikzpicture}
\caption{Example of the induction step of the proof of Lemma \ref{lem-quasicomb} for $m=7$ and $k=10$. Here, $r=4$, $s=7$, $\tilde{r}=2$ and $\tilde{s}=10$ and hence $x \in \left[-1,2\right]$. The bottom figure depicts the induced profile when $h(x)$ is monotonely decreasing in $[-1,2]$.}
\end{figure}
\end{proof}
\section{Mechanisms and negative results for the case of many candidates}\label{sec-many}
We can now analyze the approximation ratio of the mechanism $J \in \rmech{OU}{m}{n}$ that with probability
$3/4$ elects a uniformly random candidate and with probability
$1/4$ uniformly at random picks a voter and elects a candidate uniformly
at random from the set of his $\lfloor{m^{1/2}}\rfloor$ most
preferred candidates.
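Since this mechanism is ordinal, its election probabilities are determined by the ballots alone; the following sketch (our own formulation, not part of the formal development) computes them exactly, with candidates numbered $1,\ldots,m$ and each ballot listing a voter's candidates from most to least preferred:

```python
from fractions import Fraction
import math

def election_probabilities(ballots, m):
    # With probability 3/4: a uniformly random candidate; with probability 1/4:
    # a uniformly random voter, then a uniformly random candidate among the
    # floor(m^{1/2}) top-ranked candidates on that voter's ballot.
    n, t = len(ballots), math.isqrt(m)
    p = {j: Fraction(3, 4 * m) for j in range(1, m + 1)}
    for ballot in ballots:
        for j in ballot[:t]:
            p[j] += Fraction(1, 4 * n * t)
    return p
```

By construction the probabilities sum to $1$, and every candidate is elected with probability at least $\frac{3}{4m}$.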
\begin{theorem}
Let $n \geq 2, m \geq 3$.
Let $J = \frac{3}{4} \unilat{m}{n}{m} + \frac{1}{4}
\unilat{m}{n}{\lfloor {m^{1/2}}\rfloor}$. Then,
$\mbox{\rm ratio}(J) \geq 0.37 m^{-3/4}$.
\end{theorem}
\begin{proof}
For a valuation profile ${\bf u} = (u_i)$, we define
$g({\bf u}) = \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\sum_{i=1}^n u_i(1)}.$
By Lemma \ref{lem-quasicomb}, since $J$ is ordinal and neutral, it is enough to bound from below $g({\bf u})$ for all ${\bf u} \in (\combset{m}{k})^n$ with $k \geq 1000(n m)^2$. Let $\epsilon = 1/k$ and $\delta = m \epsilon$. Note that every valuation function in ${\bf u}$ maps each candidate either to a valuation smaller than $\delta$ or to a valuation larger than $1-\delta$.
Since each voter assigns valuation $1$ to at least one candidate, and since $J$ with probability $3/4$ picks a candidate uniformly at random from the set of all candidates, we have $E[\sum_{i=1}^n u_i(J({\bf u}))] \geq 3n/(4m)$. Suppose $\sum_{i=1}^n u_i(1) \leq 2 m^{-1/4} n$. Then $g({\bf u}) \geq \frac{3}{8} m^{-3/4}$, and we are done. So we shall assume from now on that
\begin{equation}
\sum_{i=1}^n u_i(1) > 2 m^{-1/4} n. \label{eq-many-votes-for-one}
\end{equation}
Obviously, $\sum_{i=1}^n u_i(1) \leq n$. Since $J$ with probability $3/4$ picks a candidate uniformly at random from the set of all candidates, we have that $E[\sum_{i=1}^n u_i(J({\bf u}))] \geq \frac{3}{4m} \sum_{i,j} u_i(j)$.
So if $\sum_{i,j} u_i(j) \geq \frac{1}{2} n m^{1/4}$, we have $g({\bf u}) \geq \frac{3}{8} m^{-3/4}$, and we are done. So we shall assume from now on that
\begin{equation}
\sum_{i,j} u_i(j) < \frac{1}{2} n m^{1/4} \label{eq-nongen}.
\end{equation}
Still looking at the fixed quasi-combinatorial ${\bf u}$, let a voter $i$ be called {\em
generous} if his $\lfloor m^{1/2} \rfloor + 1$ most preferred candidates
are all assigned valuation greater than $1 - \delta$.
Also, let a voter $i$ be called {\em friendly} if he has candidate
$1$ among his $\lfloor m^{1/2} \rfloor$ most preferred candidates. Note that if a voter
is neither generous nor friendly, he assigns to candidate $1$ valuation at most $\delta$. This means that the total contribution
to $\sum_{i=1}^n u_i(1)$ from such voters is less than $n \delta < 0.001/m$.
Therefore, by equation (\ref{eq-many-votes-for-one}), the union of
friendly and generous voters must be a set of size at least $1.99
m^{-1/4} n$.
If we let $n_g$ denote the number of generous voters, we have $\sum_{i,j} u_i(j) \geq n_g m^{1/2} (1 - \delta) \geq 0.999 n_g m^{1/2}$, so by equation (\ref{eq-nongen}), we have that $0.999 n_g m^{1/2} < \frac{1}{2} n m^{1/4}$. In particular $n_g < 0.51 m^{-1/4} n$. So since the union of friendly and generous voters must be a set of size at least $1.99 m^{-1/4} n$, we conclude that there are at least $1.48 m^{-1/4} n$ friendly voters, i.e., the friendly voters constitute at least a $1.48 m^{-1/4}$ fraction of the set of all voters.
But this ensures that $\unilat{m}{n}{\lfloor {m^{1/2}}\rfloor}$ elects candidate $1$ with probability at least $1.48 m^{-1/4}/m^{1/2} \geq 1.48 m^{-3/4}$. Then, $J$ elects candidate $1$ with probability at least $0.37 m^{-3/4}$ which means that $g({\bf u}) \geq 0.37 m^{-3/4}$, as desired. This completes the proof.
\end{proof}
We next show our negative result. We show that any convex combination of (not necessarily ordinal) unilateral and duple mechanisms performs poorly.
\begin{theorem}\label{thm-neg}
Let $m \geq 20$ and let $n = m-1+g$ where $g = \lfloor m^{2/3} \rfloor$. For any mechanism $J$ that is a convex combination of unilateral and duple mechanisms in $\mech{m}{n}$, we have $\mbox{\rm ratio}(J) \leq 5 m^{-2/3}$.
\end{theorem}
\begin{proof}
Let $k= \lfloor m^{1/3} \rfloor$. By applying the same proof technique as in the proof of Lemma \ref{lem-naenough}, we can assume that $J$ can be decomposed into a convex combination of mechanisms $J_\ell$, with each $J_\ell$ being anonymous as well as neutral, and each $J_\ell$ either being a mechanism of the form $\duple{m}{n}{q}$ for some $q$ (by Theorem \ref{thm-na}), or a mechanism that applies a truthful one-voter neutral mechanism $U$ to a voter chosen uniformly at random.
We now describe a single profile for which any such mechanism $J_\ell$ performs
badly. Let $M_1,\ldots,M_g$ be a partition of $\{1,\ldots,kg\}$ with $k$ candidates in each set.
The bad profile has the following voters:
\begin{itemize}
\item{}For each $i \in \{1,\ldots,m-1\}$ a voter that assigns $1$ to
candidate $i$, $0$ to candidate $m$ and valuations smaller than
$1/m^2$ to the rest.
\item{}For each $j \in \{1,\ldots,g\}$ a voter that assigns valuations strictly bigger than $1-1/m^2$ to members of $M_j$, valuation $1-1/m^2$ to candidate $m$, and valuations smaller than $1/m^2$ to the rest.
\end{itemize}
Note that the social welfare of candidate $m$ is $(1-1/m^2) g$ while the
social welfares of the other candidates are all smaller than $2+{1/m}$. Thus,
the conditional expected approximation ratio given that the mechanism
does not elect $m$ is at most $(2+1/m)/((1-1/m^2)g) \leq 3 m^{-2/3}$.
We therefore only need to estimate the probability that candidate $m$ is elected.
For a mechanism of the form $\duple{m}{n}{q}$, candidate $m$ is chosen with probability at most $2/m$, since such a mechanism first eliminates all candidates but two and these two are chosen uniformly at random.
For a mechanism that picks a voter uniformly at random and applies a truthful one-agent neutral mechanism $U$ to the ballot of this voter, we make the following claim: Conditioned on a particular voter $i^*$ being picked, the conditional probability that $m$ is chosen is at most $1/(r+1)$, where $r$ is the number of candidates that outrank $m$ on the ballot of voter $i^*$. Indeed, if candidate $m$ were chosen with conditional probability strictly bigger than $1/(r+1)$, she would be chosen with strictly higher probability than some other candidate $j^*$ who outranks $m$ on the ballot of voter $i^*$ (since the conditional election probabilities of $m$ and of the $r$ candidates outranking her sum to at most $1$, some such candidate is elected with conditional probability less than $1/(r+1)$). But if so, since $U$ is neutral, voter $i^*$ would increase his utility by switching $j^*$ and $m$ on his ballot, as this would switch the election probabilities of $j^*$ and $m$ while leaving all other election probabilities the same. This contradicts that $U$ is truthful. Therefore, our claim is correct. This means that candidate $m$ is chosen with probability at most $1/m + (g/m)\cdot(1/k) \leq 1/m + m^{2/3}/(m(m^{1/3}-1)) \leq 2 m^{-2/3}$, since $m \geq 20$.
We conclude that on the bad profile, the expected approximation ratio of any mechanism $J_\ell$ in the decomposition is at most $3 m^{-2/3} + 2 m^{-2/3} = 5 m^{-2/3}$. Therefore, the expected approximation ratio of $J$ on the bad profile is also at most $5 m^{-2/3}$.
\end{proof}
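The arithmetic behind the conditional bound $3m^{-2/3}$ in this proof can be checked mechanically; the sketch below (ours, using floating-point rather than exact arithmetic) evaluates the claimed quantity on the bad profile:

```python
import math

def conditional_ratio_bound(m):
    # On the bad profile, candidate m has social welfare (1 - 1/m^2) * g while
    # every other candidate has social welfare below 2 + 1/m, so the
    # conditional approximation ratio when m is not elected is below this value.
    g = math.floor(m ** (2 / 3))
    return (2 + 1 / m) / ((1 - 1 / m ** 2) * g)
```

For every $m \geq 20$ (we checked up to $m = 500$) the returned value is indeed at most $3m^{-2/3}$.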
\begin{corollary}
For all $m$, and all sufficiently large $n$ compared to $m$, any mechanism $J$ in $\rmech{O}{m}{n} \cup \rmech{U}{m}{n}$ has approximation ratio $O(m^{-2/3})$.
\end{corollary}
\begin{proof}
Combine Theorem \ref{thm-Gibbard77}, Lemma \ref{lem-limit} and Theorem \ref{thm-neg}.
\end{proof}
As followup work to the present paper, in a working manuscript, Lee \cite{Lee14} states a lower bound of $\Omega(m^{-2/3})$ that closes the gap between our upper and lower bounds. The mechanism achieving this bound is a convex combination of random-favorite and the mixed unilateral mechanism that uniformly at random elects one of the $m^{1/3}$ most preferred candidates of a uniformly chosen voter. The main question that we would like to answer is how well one can do with (general) cardinal mechanisms. The next theorem provides a weak upper bound.
\begin{theorem}\label{thm-dm}
All mechanisms $J \in \mech{m}{n}$ for $m,n \geq 3$ have $\mbox{\rm ratio}(J) < 0.94$.
\end{theorem}
\begin{proof}
We will prove the theorem for mechanisms in $\mech{3}{3}$. By simply adding alternatives for which every agent has valuation almost $0$ and then applying Lemma \ref{lem-limit}, the theorem holds for any $m, n \geq 3$.
Assume for contradiction that there exists a mechanism $J \in
\mech{3}{3}$, with $\mbox{\rm ratio}(J) \geq 0.94$. Consider the valuation profile
${\bf u}$ with three voters $\{1,2,3\}$, three candidates $\{A,B,C\}$,
and valuations $u_1(B) = u_2(B) = u_3(C) = 1$, $u_1(C) = u_2(C) =
u_3(B) = 0$, $u_1(A) = 0.7$ and $u_2(A) = u_3(A) = 0.8$. The social
optimum on profile ${\bf u}$ is candidate $A$, with social welfare
$w_A=2.3$, while $w_B=2$ and $w_C=1$. Since $J$'s expected social
welfare is at least a $0.94$ fraction of $w_A$, i.e. $2.162$, the
probability of $A$ being elected is at least $0.54$, as otherwise the
expected social welfare would be smaller than $0.54 \cdot 2.3 + 0.46 \cdot
2 = 2.162$. The expected utility $\tilde u$ of voter 1 in that case is
at most $0.54 \cdot 0.7+ 0.46 \cdot 1 = 0.838$.
Next, consider the profile ${\bf u}'$ identical to ${\bf u}$ except
that $u'_1(A) = 0.0001$. Let $p_A,p_B,p_C$ be the probabilities of
candidates $A,B$ and $C$ being elected on this profile,
respectively. The social optimum is $B$ with social welfare $2$. It
must be that $0.7 p_A+ p_B \leq \tilde u$, otherwise on profile ${\bf u}$,
voter 1 would have an incentive to misreport $u_1(A)$ as $0.0001$. Also,
since $J$ has an approximation ratio of at least $0.94$, it must be
the case that $1.6001 \cdot p_A + 2 \cdot p_B + p_C \geq 1.88$. By
those two inequalities, we have $0.9001p_A+p_B+p_C \geq 1.88 - \tilde u$, and hence
$0.9001(p_A+p_B+p_C)+0.0999(p_B+p_C) \geq 1.88 - \tilde u$. Since $p_A+p_B+p_C \leq 1$
and $\tilde u \leq 0.838$, this yields $0.0999(p_B+p_C) \geq 0.1419$, i.e.,
$p_B+p_C \geq 1.42$, which is not possible. Hence, it
cannot be that $\mbox{\rm ratio}(J) \geq 0.94$.
\end{proof}
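The contradiction at the end of the proof can also be verified by direct maximization: subject to the truthfulness constraint $0.7p_A + p_B \leq 0.838$, the largest expected social welfare achievable on ${\bf u}'$ is $1.838 < 1.88$. The sketch below (our own verification, not part of the proof) sets $p_C = 1 - p_A - p_B$, which is optimal since the objective's coefficient on $p_C$ is positive, and evaluates the objective at the vertices of the resulting polygon:

```python
def max_welfare_on_u_prime():
    # Maximize 1.6001 p_A + 2 p_B + p_C subject to 0.7 p_A + p_B <= 0.838,
    # p_A + p_B + p_C <= 1 and p >= 0. With p_C = 1 - p_A - p_B, the feasible
    # (p_A, p_B) region is a polygon; (0.54, 0.46) is the intersection of
    # 0.7 p_A + p_B = 0.838 with p_A + p_B = 1.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.54, 0.46), (0.0, 0.838)]
    return max(1.6001 * pa + 2 * pb + (1 - pa - pb) for pa, pb in vertices)
```

A linear objective over a polygon attains its maximum at a vertex, so checking the four vertices suffices.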
Recall that in the definition of valuation functions $u_i$, we required $u_i$ to be injective, i.e., ties are not allowed in the image of the function. If we actually allow $u_i$ to assign the same real number to different candidates (with $0$ and $1$ still in the image of the function), we can prove a much stronger upper bound on the approximation ratio of any truthful mechanism. The proof is based on a bound proven by Filos-Ratsikas et al. \cite{Aris14} for the \emph{one-sided matching problem}. There are some interesting technical difficulties in adapting their proof to work for the setting without ties. As we do not want to declare either the ``ties'' or the ``no ties'' model the ``right one'', we want all positive results (mechanisms) to be proven for the setting with ties and all negative ones (upper bounds on approximation ratio) to be proven for the setting without ties. The proof of the following theorem is the only one of the paper which isn't easily modified to work for both settings.
\begin{theorem}\label{thm:anyupper}
Let $J'$ be any voting mechanism for $n$ agents and $m$ alternatives, with $m \geq n^{\lfloor\sqrt{n}\rfloor +2}$, in the setting with ties. The approximation ratio of $J'$ is $O(\log\log m/\log m)$.
\end{theorem}
\begin{proof}
Filos-Ratsikas et al.~\cite{Aris14} proved a related upper bound for the one-sided matching problem.\footnote{Their proof is actually for the setting of $n$ agents and $n$ items but it can be easily adapted to work when the number of items is $\lfloor \sqrt{n} \rfloor+2$.} The bound corresponds to an upper bound on the approximation ratio of any truthful mechanism $J$ in the general setting with ties. This is because there is a reduction from the one-sided matching problem to the general setting with ties.
In the one-sided matching problem, there is a set of $n$ agents and a set of $k$ items and each agent $i$ has a valuation function $v_i:[k] \rightarrow [0,1]$ mapping items to real values in the unit interval. Similarly to our definitions, these functions are injective and both $0$ and $1$ are in their image. A mechanism $J$ on input a valuation profile $\mathbf{v}=(v_1,...,v_n)$ outputs a \emph{matching} $J(\mathbf{v})$, i.e. an allocation of items to agents such that each agent receives at most one item. Let $J(\mathbf{v})_i$ be the item allocated to agent $i$. For convenience, we will refer to this problem as the \emph{matching setting} and to our problem as the \emph{general setting}.
The reduction works as follows. Let $\mathbf{v}=(v_1,...,v_n)$ be a valuation profile of the matching setting. We will construct a valuation profile $\mathbf{u}=(u_1,...,u_n)$ of the general setting that will correspond to $\mathbf{v}$. Let each outcome of the matching setting correspond to a candidate in the general setting. For every agent $i$ and every item $j$ let $u_i(A)=v_i(j)$ for each candidate $A \in M$ that corresponds to a matching in which item $j$ is allocated to agent $i$. Note that the number of candidates is $n^k$ and a bound for the matching setting implies a bound for the general setting. Specifically, the $O(1/\sqrt{n})$ bound proved in \cite{Aris14} translates to a $O(\log\log m/\log m)$ upper bound.
\end{proof}
\section{Mechanisms and negative results for the case of three candidates}\label{sec-few}
In this section, we consider the special case of three candidates $m=3$. To improve readability, we shall denote the three candidates by $A, B$ and $C$, rather than by 1,2 and 3.
When the number of candidates $m$ as well as the number of voters $n$ are small constants, the exact values of $\gratio{O}{m}{n}$ and $\gratio{OU}{m}{n}$ can be determined. We first give a clean example, and then describe a general method.
\begin{proposition}\label{prop-three}
For all $J \in \rmech{O}{3}{3}$, we have $\mbox{\rm ratio}(J) \leq 2/3$.
\end{proposition}
\begin{proof}
By Lemma \ref{lem-naenough}, we can assume that $J$ is anonymous and neutral.
Let $A >_i B$ denote the fact that voter $i$ ranks candidate $A$ higher than $B$ in his ballot. Let a {\em Condorcet} profile be any valuation profile with $A>_{1} B>_{1} C$, $B >_{2} C >_{2} A$ and $C >_{3} A >_{3} B$. Since $J$ is ordinal, anonymous and neutral, by symmetry, $J$ elects each candidate with probability $1/3$. Now, for some small $\epsilon > 0$, consider the Condorcet profile where $u_1(B)= \epsilon$, $u_2(C)=\epsilon$ and $u_3(A)=1-\epsilon$. The socially optimal choice is candidate $A$ with social welfare $2-\epsilon$, while the other candidates have social welfare $1+\epsilon$. Since each candidate is elected with probability $1/3$, the expected social welfare is $(4+\epsilon)/3$. By making $\epsilon$ sufficiently small, the approximation ratio on the profile is arbitrarily close to $2/3$.
\end{proof}
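The computation in this proof can be checked numerically (a sketch of ours; the symbols match the proof above):

```python
def condorcet_ratio(eps):
    # Welfares on the Condorcet profile with u_1(B) = eps, u_2(C) = eps and
    # u_3(A) = 1 - eps; each candidate is elected with probability 1/3.
    wA = 1 + 0 + (1 - eps)      # contributions of voters 1, 2, 3
    wB = eps + 1 + 0
    wC = 0 + eps + 1
    return ((wA + wB + wC) / 3) / max(wA, wB, wC)
```

As $\epsilon \rightarrow 0$ the ratio $(4+\epsilon)/(3(2-\epsilon))$ tends to $2/3$.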
With a case analysis and some pain, it can be proved by hand that
\emph{random-majority} achieves an approximation ratio of at least
$2/3$ on any profile with three voters and three candidates.
Together with Proposition \ref{prop-three}, this implies that
$\gratio{O}{3}{3} = \frac{2}{3}$. Rather than presenting the case
analysis, we describe a general method for how to exactly and
mechanically compute
$\gratio{O}{m}{n}$ and $\gratio{OU}{m}{n}$ and the associated optimal
mechanisms for small values of $m$ and $n$. The key is to apply {\em
Yao's principle} \cite{Yao77} and view the construction of a
randomized mechanism as devising a strategy for Player I in a
two-player zero-sum game $G$ played between Player I, the mechanism
designer, who picks a mechanism $J$ and Player II, the adversary, who
picks an input profile ${\bf u}$ for the mechanism, i.e., an element of
$(\valset{m})^n$. The payoff to Player I is the approximation ratio of
$J$ on ${\bf u}$. Then, the value of $G$ is exactly the
approximation ratio of the best possible randomized mechanism. In order
to apply the principle, the computation of the value of $G$ has to be
tractable. In our case, Theorem \ref{thm-na} allows us to reduce the
strategy set of Player I to be finite while Lemma \ref{lem-quasicomb}
allows us to reduce the strategy set of Player II to be finite. This
makes the game into a matrix game, which can be solved to optimality
using linear programming. The details follow.
For fixed $m, n$ and $k > 2m$, recall that the set of quasi-combinatorial valuation functions $\combset{m}{k}$
is the set of valuation functions $u$ for which there is a $j$ so that
$\Im(u) = \{0, \frac{1}{k}, \frac{2}{k}, \ldots, \frac{m-j-1}{k} \} \cup \{\frac{k-j+1}{k}, \frac{k-j+2}{k}, \ldots, \frac{k-1}{k}, 1\}$.
Note that a quasi-combinatorial valuation function $u$ is fully
described by the value of $k$, together with a partition of $M$ into
two sets $M_0$ and $M_1$, with $M_0$ being those candidates close to $0$ and
$M_1$ being those candidates close to $1$, together with a ranking of the candidates (i.e., a total ordering $<$
on $M$), so that all elements of $M_1$ are greater than all elements of $M_0$ in this ordering. Let the {\em type} of a quasi-combinatorial valuation function
be the partition and the total ordering $(M_0, M_1, <)$. Then, a
quasi-combinatorial valuation function is given by its type and the
value of $k$. For instance, if $m=3$, one possible type is
$(\{B\},\{A,C\}, \{B < A < C\})$, and the quasi-combinatorial valuation
function $u$ corresponding to this type for $k=1000$ is $u(A) =
0.999$, $u(B)=0$, $u(C) = 1$. We see that for any fixed value of $m$,
there is a finite set $T_m$ of possible types. In particular, we have
$|T_3| = 12$. Let $\eta: T_m \times {\mathbb N} \rightarrow
\combset{m}{k}$ be the map that maps a type and an integer $k$ into
the corresponding quasi-combinatorial valuation function.
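For small $m$ the type set $T_m$ can be enumerated directly; the sketch below (our own helper) represents a type as an ordering of the candidates together with a split point and confirms $|T_3| = 12$:

```python
from itertools import permutations

def enumerate_types(m):
    # A type (M_0, M_1, <) is a total order of the candidates plus a split
    # point; candidates above the split form M_1 (valuations near 1), those
    # below form M_0 (valuations near 0). Both parts are nonempty because 0
    # and 1 always lie in the image of a valuation function.
    types = []
    for order in permutations(range(1, m + 1)):
        for split in range(1, m):
            types.append((frozenset(order[:split]), frozenset(order[split:]), order))
    return types
```

In general $|T_m| = m!\,(m-1)$; for $m=3$ this is $6 \cdot 2 = 12$.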
For fixed $m, n$, consider the following matrices $G$ and $H$. The
matrix $G$ has a row for each of the mechanisms $\unilat{m}{n}{q}$ for
$q = 1,\ldots,m$, while the matrix $H$ has a row for each of the
mechanisms $\unilat{m}{n}{q}$ for $q = 1,\ldots,m$ as well as for each of the mechanisms
$\duple{m}{n}{q}$, for $q = \lfloor n/2 \rfloor + 1, \lfloor
n/2 \rfloor + 2, \ldots, n$. Both matrices have a column for each element of $(T_m)^n$. The entries of the matrices are as follows: Each entry is indexed by a mechanism $J \in \mech{m}{n}$ (the row index) and by a type profile ${\bf t} \in (T_m)^n$ (the column index). We let that entry be
\[ c_{J,{\bf t}} = \lim_{k \rightarrow \infty} \frac{E[\sum_{i=1}^n u_i(J({\bf u}^k))]}{\max_{j \in M}\sum_{i=1}^n u^k_i(j)}, \]
where $u^k_i = \eta(t_i, k)$.
Informally, we let the entry be the approximation ratio of the mechanism on the quasi-combinatorial profile of the type profile indicated in the column and with $1/k$ being ``infinitesimally small''. Note that for the mechanisms at hand, despite the fact that the entries are defined as a limit, it is straightforward to compute the entries symbolically, and they are rational numbers.
We now have
\begin{lemma}\label{lem-matrix}
The value of $G$, viewed as a matrix game with the row player being
the maximizer, is equal to $\gratio{OU}{m}{n}$. The value of $H$ is
equal to $\gratio{O}{m}{n}$.
Also, the optimal strategies for the row players in
the two matrices, viewed as convex combinations of the mechanisms
corresponding to the rows, achieve those ratios.
\end{lemma}
\begin{proof}
We only show the statement for $\gratio{O}{m}{n}$, the other proof
being analogous. For fixed $k$, consider the matrix $H^k$ defined
similarly to $H$, but with entries $c_{J,{\bf t}} = \frac{E[\sum_{i=1}^n
u_i(J({\bf u}^k))]}{\max_{j \in M}\sum_{i=1}^n u^k_i(j)}$, where
$u^k_i = \eta(t_i, k)$. Viewing $H^k$ as a matrix game, a mixed
strategy of the row player can be interpreted as a convex combination
of the mechanisms corresponding to the rows, and the expected payoff
when the column player responds with a particular column ${\bf t}$ is equal to the
approximation ratio of $J$ on the valuation profile
$(\eta(t_i,k))_i$. Therefore, the value of the game is the worst case
approximation ratio of the best convex combination, among profiles of
the form $(\eta(t_i,k))_i$ for a type profile ${\bf t}$. By Lemma
\ref{lem-naenough}, $\gratio{O}{m}{n}$ is determined by the best available
anonymous and neutral ordinal mechanism. By Corollary \ref{cor-ud},
the anonymous and neutral ordinal mechanisms are exactly the convex
combinations of the $\unilat{m}{n}{q}$ and the $\duple{m}{n}{q}$
mechanisms for various $q$. Given any particular convex combination
yielding a mechanism $K$, by Lemma \ref{lem-quasicomb}, its worst case
approximation ratio is given by $\liminf_{k \rightarrow \infty}
\min_{{\bf u} \in (\combset{m}{k})^n} \frac{E[\sum_{i=1}^n u_i(K({\bf
u}))]}{\sum_{i=1}^n u_i(A)}$ which is equal to $\liminf_{k \rightarrow
\infty} \min_{{\bf u} \in (\combset{m}{k})^n} \frac{E[\sum_{i=1}^n
u_i(K({\bf u}^k))]}{\max_{j \in M}\sum_{i=1}^n u^k_i(j)}$, since $K$
is neutral. This means that no mechanism can have an approximation ratio better than the limit of the values of the games $H^k$ as $k$ approaches
infinity. By continuity of the value of a matrix game as a function of its entries, this is equal to the value of $H$. Therefore, $\gratio{O}{m}{n}$ is at most the value of $H$.
Now consider the mechanism $J$ defined by the optimal strategy for the row player in the
matrix game $H$. As the entries of $H^k$
converge to the entries of $H$ as $k \rightarrow \infty$, we have that
for any $\epsilon > 0$, and sufficiently large $k$, the strategy is
also an $\epsilon$-optimal strategy for $H^k$. Since $\epsilon$ is
arbitrary, we have that $\mbox{\rm ratio}(J)$ is at least the value of $H$, completing the proof.
\end{proof}
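For the full matrices $G$ and $H$ the value is computed with a linear-programming solver, but the mechanics can be illustrated on a game where the row player has just two pure strategies (a sketch of ours): the payoff guaranteed by the mixture $(t,1-t)$ is a concave piecewise-linear function of $t$, so its maximum is attained at $t \in \{0,1\}$ or where two column lines cross.

```python
def two_row_game_value(A):
    # Value of the zero-sum matrix game with payoff rows A[0], A[1], the row
    # player maximizing. Mixture (t, 1-t) guarantees
    # f(t) = min_j (t*A[0][j] + (1-t)*A[1][j]), a concave piecewise-linear
    # function; maximize f over the endpoints and all pairwise line crossings.
    cols = list(zip(A[0], A[1]))
    candidates = [0.0, 1.0]
    for i, (a0, b0) in enumerate(cols):
        for a1, b1 in cols[i + 1:]:
            d = (a0 - b0) - (a1 - b1)
            if d != 0:
                t = (b1 - b0) / d
                if 0 <= t <= 1:
                    candidates.append(t)
    def payoff(t):
        return min(t * a + (1 - t) * b for a, b in cols)
    return max(payoff(t) for t in candidates)
```

E.g., the $2 \times 2$ game with rows $(2,0)$ and $(0,1)$ has value $2/3$, attained by the mixture $(\frac{1}{3},\frac{2}{3})$.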
When
applying Lemma \ref{lem-matrix} for concrete values of $m,n$, one can take
advantage of the fact that all mechanisms corresponding to rows are
anonymous and neutral. This means that two different columns will have
identical entries if they correspond to two type profiles that can be
obtained from one another by permuting voters and/or candidates. This
makes it possible to reduce the number of columns drastically. After
such reduction, we have applied the lemma to $m=3$ and $n=2,3,4$ and
$5$, computing the corresponding optimal approximation ratios and optimal mechanisms. The ratios are given in Table \ref{aratios}.
The mechanisms achieving the ratios are shown in Table \ref{probmix} and
Table \ref{probmixuni}. These mechanisms are in general not unique. Note in particular that a different approximation-optimal mechanism than {\em random-majority} was found in $\rmech{O}{3}{3}$.
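The column reduction by voter and candidate symmetry can be implemented by canonicalizing each type profile. The sketch below illustrates the idea on plain rankings of three candidates rather than on the full set of 12 quasi-combinatorial types, so the class counts differ from the actual computation:

```python
from itertools import permutations

CANDIDATES = "ABC"

def canonical(profile):
    """Canonical representative of a type profile under voter and
    candidate permutations.

    profile: tuple of rankings, e.g. ('ABC', 'BAC'); each ranking lists
    the candidates from best to worst.  Sorting the profile quotients
    out voter permutations; minimizing over candidate relabelings
    quotients out candidate permutations.
    """
    best = None
    for perm in permutations(CANDIDATES):
        relabel = dict(zip(CANDIDATES, perm))
        relabeled = tuple(sorted("".join(relabel[c] for c in r) for r in profile))
        if best is None or relabeled < best:
            best = relabeled
    return best

# Two columns coincide iff their profiles share a canonical form:
rankings = ["".join(p) for p in permutations(CANDIDATES)]
classes = {canonical((r1, r2)) for r1 in rankings for r2 in rankings}
# The 36 ordered two-voter ranking profiles collapse to 5 classes.
```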
\begin{table}
\caption{Approximation ratios for $n$ voters.\label{aratios}}
\begin{center}
\begin{tabular}{ccc}
\textbf{$\mathbf{n}$}/\textbf{Approximation ratio} & $\gratio{O}{3}{n}$ & $\gratio{OU}{3}{n}$ \\ \hline
\noalign{\smallskip}
2 & 2/3 & 2/3 \\
3 & 2/3 & 105/171 \\
4 & 2/3 & 5/8 \\
5 & 6407/9899 & 34/55 \\
\end{tabular}
\hfill
\end{center}
\end{table}
\begin{table}
\caption{Mixed-unilateral ordinal mechanisms for $n$ voters. \label{probmixuni}}\begin{center}
\begin{tabular}{cccc}
\textbf{$\mathbf{n}$}/\textbf{Mechanism} & $\unilat{3}{n}{1}$ & $\unilat{3}{n}{2}$ & $\unilat{3}{n}{3}$ \\ \hline
\noalign{\smallskip}
2 & 1/3 & 2/3 & 0 \\
3 & 9/19 & 10/19 & 0 \\
4 & 1/2 & 1/2 & 0 \\
5 & 5/11 & 6/11 & 0 \\
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Ordinal mechanisms for $n$ voters. \label{probmix}}
\begin{center}
\begin{tabular}{ccccccc}
\textbf{$\mathbf{n}$}/\textbf{Mechanism} & $\unilat{3}{n}{1}$ & $\unilat{3}{n}{2}$ & $\unilat{3}{n}{3}$ & $\duple{3}{n}{\lfloor n/2\rfloor +1}$ & $\duple{3}{n}{\lfloor n/2\rfloor +2}$ & $\duple{3}{n}{\lfloor n/2\rfloor +3}$ \\ \hline
\noalign{\smallskip}
2 & 4/100 & 8/100 & 0 & 88/100 & | & | \\
3 & 47/100 & 0 & 0 & 53/100 & 0 & | \\
4 & 0 & 0 & 0 & 1 & 0 & | \\
5 & 3035/9899 & 0 & 0 & 3552/9899 & 3312/9899 & 0 \\
\end{tabular}
\end{center}
\end{table}
We now turn our attention to the case of three candidates and
arbitrarily many voters. In particular, we shall be interested in
$\agratio{O}{3} = \liminf_{n \rightarrow \infty} \gratio{O}{3}{n}$ and
$\agratio{OU}{3} = \liminf_{n \rightarrow \infty}
\gratio{OU}{3}{n}$. By Lemma \ref{lem-limit}, we in fact have
$\agratio{O}{3} = \lim_{n \rightarrow \infty} \gratio{O}{3}{n}$ and
$\agratio{OU}{3} = \lim_{n \rightarrow \infty} \gratio{OU}{3}{n}$.
We present a family of ordinal and mixed-unilateral mechanisms $J_n$ with $\mbox{\rm ratio}(J_n) > 0.610$. In particular, $\agratio{OU}{3} > 0.610$. The coefficients $c_1$ and $c_2$ were found by trial-and-error; we present more information about how they were found later.
\begin{theorem}\label{thm-oul} Let $c_1 = \frac{77066611}{157737759}\approx 0.489$ and $c_2 = \frac{80671148}{157737759}\approx 0.511$. Let $J_n = c_1 \cdot \unilat{3}{n}{1} + c_2 \cdot \unilat{3}{n}{2}$. For all $n$, we have $\mbox{\rm ratio}(J_n) > 0.610$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem-quasicomb}, we have that $\mbox{\rm ratio}(J_n) = \liminf_{k
\rightarrow \infty} \min_{{\bf u} \in (\combset{3}{k})^n}
\frac{E[\sum_{i=1}^n u_i(J_n({\bf u}))]}{\sum_{i=1}^n u_i(A)}.$ Recall
the definition of the set of {\em types} $T_3$ of quasi-combinatorial
valuation functions on three candidates and the definition of $\eta$
preceding the proof of Lemma \ref{lem-matrix}. From that discussion, we have
$\liminf_{k
\rightarrow \infty} \min_{{\bf u} \in (\combset{m}{k})^n}
\frac{E[\sum_{i=1}^n u_i(J_n({\bf u}))]}{\sum_{i=1}^n u_i(A)}
= \min_{{\bf t} \in (T_3)^n} \liminf_{k
\rightarrow \infty} \frac{E[\sum_{i=1}^n u_i(J_n({\bf u}))]}{\sum_{i=1}^n u_i(A)}$, where $u_i = \eta(t_i, k)$.
Also recall that $|T_3|
= 12$. Since $J_n$ is anonymous, to determine the approximation ratio
of $J_n$ on ${\bf u} \in (\combset{m}{k})^n$, we observe that we only
need to know the value of $k$ and the {\em fraction} of voters of each
of the possible 12 types. In particular, fixing a type profile ${\bf
t} \in (T_3)^n$, for each type $s \in T_3$, let $x_s$ be
the fraction of voters in ${\bf t}$ of type $s$. For convenience of
notation, we identify $T_3$ with $\{1,2,\ldots,12\}$ using the scheme
depicted in Table \ref{variables}. Let $w_j = \lim_{k \rightarrow
\infty} \sum_{i=1}^n u_i(j)$, where $u_i = \eta(t_i,k)$, and let $p_j = \lim_{k \rightarrow \infty} \Pr[E_j]$, where $E_j$ is the event that candidate $j$ is
elected by $J_n$ in an election with valuation profile ${\bf u}$ where $u_i = \eta(t_i, k)$. We then have
$\liminf_{k
\rightarrow \infty} \frac{E[\sum_{i=1}^n u_i(J_n({\bf u}))]}{\sum_{i=1}^n u_i(A)} = (p_A \cdot w_A + p_B \cdot w_B + p_C \cdot w_C)/w_A.$
Also, from Table \ref{variables} and the definition of $J_n$, we see:
\begin{eqnarray*}
w_A & = & n (x_1 + x_2 + x_3 + x_4 + x_5 + x_9) \\
w_B & = & n (x_1 + x_5 + x_6 + x_7 + x_8 + x_{11}) \\
w_C & = & n (x_4 + x_7 + x_9 + x_{10} + x_{11}+ x_{12}) \\
p_A & = & (c_1 + c_2/2) (x_1 + x_2 + x_3 + x_4) + (c_2/2)(x_5 + x_6 + x_9 + x_{10}) \\
p_B & = & (c_1 + c_2/2) (x_5 + x_6 + x_7 + x_8) + (c_2/2)(x_1 + x_2 + x_{11}+x_{12}) \\
p_C & = & (c_1 + c_2/2) (x_9 + x_{10} + x_{11} + x_{12}) + (c_2/2)(x_3 + x_4 + x_7 + x_8)
\end{eqnarray*}
Thus we can establish that $\mbox{\rm ratio}(J_n) > 0.610$ for all $n$, by
showing that the {\em quadratic program} ``{\em Minimize $(p_A \cdot w_A +
p_B \cdot w_B + p_C \cdot w_C) - 0.610 w_A$ subject to $x_1 + x_2 +
\cdots + x_{12} = 1, x_1, x_2, \ldots, x_{12} \geq 0$}'', where $w_A, w_B,
w_C, p_A, p_B, p_C$ have been replaced with the above formulae using the
variables $x_i$, has a strictly positive minimum (note that the parameter $n$ appears as a multiplicative constant in the objective function and can be removed, so there is only one program, not one for each $n$). This was
established rigorously by solving the program symbolically in Maple by a facet
enumeration approach (the program being non-convex), which is easily feasible for quadratic programs of this relatively small size.
\end{proof}
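The quadratic program of the preceding proof can be sanity-checked numerically by evaluating its objective on points of the simplex. The sketch below hard-codes the coefficient tables from the proof; it only samples the simplex rather than solving the program exactly, so it is a plausibility check, not a substitute for the symbolic Maple solution:

```python
import numpy as np

C1 = 77066611 / 157737759
C2 = 80671148 / 157737759
HI, LO = C1 + C2 / 2, C2 / 2    # probability weights from the proof

# 0-based index sets of the 12 types contributing to each quantity.
W = {"A": [0, 1, 2, 3, 4, 8], "B": [0, 4, 5, 6, 7, 10], "C": [3, 6, 8, 9, 10, 11]}
P_HI = {"A": [0, 1, 2, 3], "B": [4, 5, 6, 7], "C": [8, 9, 10, 11]}
P_LO = {"A": [4, 5, 8, 9], "B": [0, 1, 10, 11], "C": [2, 3, 6, 7]}

def objective(x):
    """(p_A w_A + p_B w_B + p_C w_C) - 0.610 w_A with the factor n removed."""
    w = {j: sum(x[i] for i in W[j]) for j in "ABC"}
    p = {j: HI * sum(x[i] for i in P_HI[j]) + LO * sum(x[i] for i in P_LO[j])
         for j in "ABC"}
    return sum(p[j] * w[j] for j in "ABC") - 0.610 * w["A"]

rng = np.random.default_rng(0)
samples = [np.eye(12)[i] for i in range(12)]             # simplex vertices
samples += list(rng.dirichlet(np.ones(12), size=2000))   # random interior points
# The objective stays strictly positive on every sample, consistent
# with ratio(J_n) > 0.610.
```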
We next present a family of ordinal mechanisms $J'_n$ with $\mbox{\rm ratio}(J'_n) > 0.616$. In particular, $\agratio{O}{3} > 0.616$. The coefficients $c'_1$, $c'_2$ and $d$ defining the mechanism were again found by trial-and-error; we present more information about how they were found later.
\begin{theorem}\label{thm-ogl}
Let $c'_1 = 0.476, c'_2=0.467$ and $d = 0.057$ and let $J'_n = c'_1 \cdot \unilat{3}{n}{1} + c'_2 \cdot \unilat{3}{n}{2} + d \cdot \duple{3}{n}{\lfloor{n/2}\rfloor + 1}$. Then $\mbox{\rm ratio}(J'_n) > 0.616$ for all $n$.
\end{theorem}
\begin{proof}
The proof idea is the same as in the proof of Theorem
\ref{thm-oul}. In particular, we want to reduce proving the theorem to
solving quadratic programs. The fact that we have to deal with the
$\duple{3}{n}{\lfloor{n/2}\rfloor + 1}$, i.e., {\em random-majority}, makes this task slightly more
involved. In particular, we have to solve many programs rather than just one. We only provide a sketch, showing how to modify the proof of
Theorem \ref{thm-oul}.
As in the proof of Theorem \ref{thm-oul}, we let $w_j = \lim_{k \rightarrow
\infty} \sum_{i=1}^n u_i(j)$, where $u_i = \eta(t_i,k)$. The expressions for $w_A, w_B, w_C$ as functions of the variables $x_i$ remain the same as in that proof.
Also, we let $p_j = \lim_{k \rightarrow \infty} \Pr[E_j]$, where $E_j$ is the event that candidate $j$ is elected by $J'_n$ in an election with valuation profile ${\bf u}$ where $u_i = \eta(t_i, k)$. We then have
\begin{eqnarray*}
p_A & = & (c'_1 + c'_2/2) (x_1 + x_2 + x_3 + x_4) + (c'_2/2)(x_5 + x_6 + x_9 + x_{10}) + d\cdot q_A({\bf t}) \\
p_B & = & (c'_1 + c'_2/2) (x_5 + x_6 + x_7 + x_8) + (c'_2/2)(x_1 + x_2 + x_{11}+x_{12}) + d\cdot q_B({\bf t}) \\
p_C & = & (c'_1 + c'_2/2) (x_9 + x_{10} + x_{11} + x_{12}) + (c'_2/2)(x_3 + x_4 + x_7 + x_8) + d\cdot q_C({\bf t})
\end{eqnarray*}
where $q_j({\bf t})$ is the probability that {\em random-majority} elects candidate $j$ when the type profile is ${\bf t}.$ Unfortunately, this quantity is not a linear combination of the $x_i$ variables, so we do not immediately arrive at a quadratic program.
However, we can observe that the values of $q_j({\bf t}), j=A,B,C$
depend only on the outcome of the three pairwise majority votes
between $A,B$ and $C$, where the majority vote between, say, $A$ and
$B$ has three possible outcomes: A wins, B wins, or there is a tie. In
particular, there are 27 possible outcomes of the three pairwise
majority votes. To show that $\min_{{\bf t} \in (T_3)^n} \liminf_{k
\rightarrow \infty} \frac{E[\sum_{i=1}^n u_i(J'_n({\bf
u}))]}{\sum_{i=1}^n u_i(A)} > 0.616$, where $u_i = \eta(t_i, k)$, we
partition $(T_3)^n$ into 27 sets according to the outcomes of the
three majority votes of an election with type profile ${\bf t}$ and
show that the inequality holds on all 27 sets in the partition. We
claim that on each of the 27 sets, the inequality is equivalent to a
quadratic program. Indeed, each $q_j({\bf t})$ is now a constant, and
the constraint that the outcome is as specified can be expressed as a
linear constraint in the $x_i$'s and added to the program. For
instance, the condition that $A$ beats $B$ in a majority vote can be
expressed as $x_1 + x_2 + x_3 + x_4 + x_9 + x_{10} > 1/2$ while $A$ ties
$C$ can be expressed as $x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 1/2$.
Except for the fact that these constraints are added, the program is
now constructed exactly as in the proof of Theorem
\ref{thm-oul}. Solving\footnote{To make the program amenable to
standard facet enumeration methods of quadratic programming, we changed the sharp inequalities $>$ expressing the
majority vote constraints into weak inequalities $\geq$. Note that
this cannot increase the cost of the optimal solution, so the resulting lower bound remains valid.} the programs
confirms the statement of the theorem.
\end{proof}
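The 27-case partition used above can be enumerated mechanically. The sketch below assumes the standard reading of {\em random-majority}, namely that a uniformly random pair of candidates is chosen and the majority winner of that pair is elected, with a tied pair resolved by a fair coin:

```python
from itertools import product

PAIRS = [("A", "B"), ("A", "C"), ("B", "C")]

def rm_probs(outcome):
    """Election probabilities q_A, q_B, q_C of random-majority, given
    the outcomes of the three pairwise majority votes.

    outcome: one entry per pair in PAIRS, either the winning
    candidate's name or 'tie'.
    """
    q = {"A": 0.0, "B": 0.0, "C": 0.0}
    for (x, y), res in zip(PAIRS, outcome):
        if res == "tie":            # tied pair: coin flip between x and y
            q[x] += 1 / 6
            q[y] += 1 / 6
        else:                       # majority winner of the chosen pair
            q[res] += 1 / 3
    return q

cases = list(product(*[[x, y, "tie"] for x, y in PAIRS]))
# 27 cases in total; within each, the q_j are constants, so the bound
# becomes a quadratic program with the majority constraints added.
```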
\begin{table}
\caption{Variables for types of quasi-combinatorial valuation functions with $\epsilon$ denoting $1/k$ \label{variables}}
\begin{tabular}{ccccccccccccc}
\textbf{Candidate}/\textbf{Variable} & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_7$ & $x_8$ & $x_9$ & $x_{10}$ & $x_{11}$ & $x_{12}$ \\ \hline
\noalign{\smallskip}
A & $1$ & $1$ & $1$ & $1$ & $1-\epsilon$ & $\epsilon$ & $0$ & $0$ & $1-\epsilon$ & $\epsilon$ & $0$ & $0$ \\
B & $1-\epsilon$ & $\epsilon$ & $0$ & $0$ & $1$ & $1$ & $1$ & $1$ & $0$ & $0$ & $1-\epsilon$ & $\epsilon$ \\
C & $0$ & $0$ & $\epsilon$ & $1-\epsilon$ & $0$ & $0$ & $1-\epsilon$ & $\epsilon$ & $1$ & $1$ & $1$ & $1$ \\
\end{tabular}
\end{table}
We next show that $\agratio{OU}{3} \leq 0.611$ and $\agratio{O}{3} \leq 0.641$. By Lemma \ref{lem-limit}, it is enough to show that
$\gratio{OU}{3}{n^*} \leq 0.611$ and $\gratio{O}{3}{n^*} \leq 0.641$ for some fixed $n^*$.
Therefore, the statements follow from the following theorem.
\begin{theorem}\label{thm-ou}
$\gratio{OU}{3}{23000} \leq \frac{32093343}{52579253} < 0.611$ and $\gratio{O}{3}{23000} \leq \frac{41}{64} < 0.641$.
\end{theorem}
\begin{proof}
Lemma \ref{lem-matrix} states that the two upper bounds can be proven
by showing that the values of two certain matrix games $G$ and $H$ are
smaller than the stated figures. While the two games have a reasonable
number of rows, the number of columns is astronomical, so we cannot
solve the games exactly. However, we can prove upper bounds on the
values of the games by restricting the strategy space of the column
player. Note that this corresponds to selecting a number of {\em bad
type profiles}. We have constructed a catalogue of just 5 type profiles, each with
23000 voters. Using the ``fraction
encoding'' of profiles suggested in the proof of Theorem
\ref{thm-oul}, the profiles are:
\begin{itemize}
\item{}$x_2 = 14398/23000, x_5 = 2185/23000, x_{11}=6417/23000$.
\item{}$x_2 = 6000/23000, x_5 = 8000/23000, x_{12} = 9000/23000$.
\item{}$x_1 = 11500/23000, x_{11}=11500/23000$.
\item{}$x_2 = 9200/23000, x_5 = 4600/23000, x_{12} = 9200/23000$.
\item{}$x_2 = 13800/23000, x_{12} = 9200/23000$.
\end{itemize}
Solving the corresponding matrix games yields the
stated upper bound.
\end{proof}
While the catalogue of bad type profiles of the proof of Theorem \ref{thm-ou} suffices to prove Theorem \ref{thm-ou}, we should discuss how we arrived at this particular ``magic'' catalogue. This discussion also explains how we arrived at the ``magic'' coefficients in Theorems \ref{thm-oul} and \ref{thm-ogl}. In fact, we arrived at the catalogue and the coefficients iteratively in a joint local search process (or ``co-evolution'' process).
To get an initial catalogue, we used the fact that we had already solved the matrix games yielding the values of $\gratio{OU}{3}{n}$ and $\gratio{O}{3}{n}$, for $n = 2,3,5$. By the theorem of Shapley and Snow \cite{Shapley50}, these matrix games have optimal strategies for the column player with support size at most the number of rows of the matrices. One can think of these supports as a small set of bad type profiles for $2,3$ and $5$ voters. Utilizing that $2,3$ and $5$ all divide 1000, we scaled all these {\em up} to 1000 voters. Also, we had solved the quadratic programs of the proofs of Theorem \ref{thm-oul} and Theorem \ref{thm-ogl}, but with inferior coefficients and resulting bounds to the ones stated in this paper. The quadratic programs obtained their minima at certain type profiles. We added these entries to the catalogue, and scaled all profiles to their least common multiple, i.e. 23000.
Solving the linear programs of the proof of Theorem \ref{thm-ou} now gave not only an upper bound on the approximation ratio, but the optimal strategy of Player I in the games also suggested reasonable mixtures of the $\unilat{3}{n}{q}$ (in the unilateral case) and of the $\unilat{3}{n}{q}$ and {\em random-majority} (all $\duple{3}{n}{q}$ mechanisms except {\em random-majority} were assigned zero weight) to use for large $n$, making us update the coefficients and bounds of Theorem \ref{thm-oul} and \ref{thm-ogl}, with new bad type profiles being a side product. We also added by hand some bad type profiles along the way, and iterated the procedure until no further improvement was found. In the end we pruned the catalogue into a set of five, giving the same upper bound as we had already obtained.
We finally show that
$\agratio{U}{3}$ is between $0.660$ and $0.750$. The upper bound follows from the following proposition and Lemma \ref{lem-limit}.
\begin{proposition}
$\gratio{U}{3}{2} \leq 0.75$.
\end{proposition}
\begin{proof}
Suppose $J \in \rmech{U}{3}{2}$ has $\mbox{\rm ratio}(J) > 0.75$. By Lemma \ref{lem-naenough}, we can assume $J$ is neutral. For some $\epsilon > 0$, consider the valuation profile with $u_1(A) = u_2(A)= 1-\epsilon$, $u_1(B) = u_2(C) = 0$, and $u_1(C) = u_2(B) = 1$. As in the proof of Theorem \ref{thm-neg}, by neutrality, we must have that the probability of $A$ being elected is at most $\frac{1}{2}$. The statement follows by considering a sufficiently small $\epsilon$.
\end{proof}
The lower bound follows from an analysis of the {\em quadratic-lottery} of Feige and Tennenholtz \cite{Feige10}. The main reason that we focus on this particular cardinal mechanism is given by the following lemma.
\begin{lemma}\label{lem-qquasicomb}
Let $J \in \mech{3}{n}$ be a convex combination of $Q_n$ and any ordinal and neutral mechanism. Then
\[ \mbox{\rm ratio}(J) = \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\combset{m}{k})^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\sum_{i=1}^n u_i(1)}. \]
\end{lemma}
\begin{proof}
The proof is a simple modification of the proof of Lemma \ref{lem-quasicomb}. As in that proof, for a valuation profile ${\bf u} = (u_i)$, define
$g({\bf u}) = \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\sum_{i=1}^n u_i(1)}.$
We show the following equations:
\begin{eqnarray}
\mbox{\rm ratio}(J)
& = & \inf_{{\bf u} \in \valset{3}^n} \frac{E[\sum_{i=1}^n u_i(J({\bf u}))]}{\max_{j \in M}\sum_{i=1}^n u_i(j)} \\
& = & \inf_{{\bf u} \in \valset{3}^n} g({\bf u}) \label{eq-fixed-cand2}\\
& = & \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\rvalset{3}{k})^n} g({\bf u}) \label{eq-discretized2}\\
& = & \liminf_{k \rightarrow \infty} \min_{{\bf u} \in (\combset{3}{k})^n} g({\bf u}) \label{eq-quasicomb2}
\end{eqnarray}
Equations (\ref{eq-fixed-cand2}) and (\ref{eq-discretized2}) follow as in the proof of Lemma \ref{lem-quasicomb}.
Equation (\ref{eq-quasicomb2}) follows from the following argument.
For a profile ${\bf u} = (u_i)\in (\rvalset{3}{k})^n$, let $c_{\bf u}$ denote the number of pairs $(i,j)$ with $i$ being a voter and $j$ a candidate, for which $u_i(j)-{1/k}$ and $u_i(j) + {1/k}$ are both in $[0,1]$ and both {\em not} in
the image of $u_i$. Then, $(\combset{3}{k})^n$ consists of exactly those ${\bf u}$ in $(\rvalset{3}{k})^n$ for which $c_{\bf u} = 0$. To establish equation (\ref{eq-quasicomb2}), we merely have to show that for any ${\bf u} \in (\rvalset{3}{k})^n$ for which $c_{\bf u} > 0$, there is a ${\bf u}' \in (\rvalset{3}{k})^n$ for which
$g({\bf u}') \leq g({\bf u})$ and $c_{{\bf u}'} < c_{\bf u}$.
We will now construct such ${\bf u}'$. Since $c_{\bf u} > 0$, there is a pair $(i,j)$ so that $u_i(j)-{1/k}$ and $u_i(j) + {1/k}$ are both in $[0,1]$ and both not in the image of $u_i$. Let $\ell_-$ be the largest integer value so that
$u_i(j) - {\ell/k}$ is in $[0,1]$ and not in the image of $u_i$, for every integer $\ell \in \{1, \ldots, \ell_-\}$. Let $\ell_+$ be the largest integer value so that
$u_i(j) + {\ell/k}$ is in $[0,1]$ and not in the image of $u_i$, for every integer $\ell \in \{1, \ldots, \ell_+\}$. We can define a valuation function $u^x \in \valset{3}$ for any $x \in [-\ell_-/k; \ell_+/k]$
as follows: $u^x$ agrees with $u_i$ except on $j$, where we let $u^x(j) = u_i(j) + x$. Let ${\bf u}^x = (u^x, u_{-i})$. Now consider the function $h: x \rightarrow g({\bf u}^x)$. Since $J$ is a convex combination of {\em quadratic-lottery} and a neutral ordinal mechanism, we see by inspection of the definition of the function $g$, that $h$ on the domain $[-\ell_-/k; \ell_+/k]$ is the quotient of two quadratic polynomials where the numerator has
second derivative being a negative constant and the denominator is positive throughout the interval.
This means that $h$ attains its minimum at either $-\ell_-/k$ or at $\ell_+/k$.
In the first case, we let ${\bf u}' = {\bf u}^{-\ell_-/k}$ and in the second, we let ${\bf u}' = {\bf u}^{\ell_+/k}$. This completes the proof.
\end{proof}
\begin{theorem}\label{thm-cardthree}
The limit of the approximation ratio of $Q_n$ as $n$ approaches infinity is exactly the reciprocal of the golden ratio, i.e., $(\sqrt{5}-1)/2 \approx 0.618$. Also, let $J_n$ be the mechanism for $n$ voters that selects {\em random-favorite} with probability $29/100$ and {\em quadratic-lottery} with probability $71/100$. Then, $\mbox{\rm ratio}(J_n) > \frac{33}{50}=0.660$.
\end{theorem}
\begin{proof} (sketch)
Lemma \ref{lem-qquasicomb} allows us to proceed completely as in the proof of Theorem \ref{thm-oul}, by constructing and solving appropriate quadratic programs. As the proof is a straightforward adaptation, we leave out the details.
\end{proof}
Mechanism $J_n$ of Theorem \ref{thm-cardthree} achieves an approximation ratio strictly better than $0.641$, the upper bound for ordinal mechanisms from Theorem \ref{thm-ou}. In other words, the best truthful cardinal mechanism for three candidates strictly outperforms all ordinal ones.
\section{Conclusion}
\label{sec-conc}
By a result of Lee \cite{Lee14}, mixed-unilateral mechanisms are asymptotically no better than ordinal mechanisms. Can a cardinal mechanism that is not mixed-unilateral beat this approximation barrier? Getting upper bounds on the performance of general cardinal mechanisms is hampered by the lack of a
characterization of cardinal mechanisms \`{a} la Gibbard's. Can we adapt the proof of Theorem \ref{thm:anyupper} to work in the general setting without ties?
For the case of $m=3$, can we close the gaps for ordinal mechanisms and for mixed-unilateral mechanisms? How well can cardinal mechanisms do for $m=3$? Theorem \ref{thm-dm} holds for $m=3$ as well, but perhaps we could prove a tighter upper bound for cardinal mechanisms in this case.
\bibliographystyle{plain}
|
\section{Introduction}
Recently, unmanned aerial vehicle (UAV)-assisted cellular networks have been deployed as a promising alternative to handle out-of-coverage issues. These UAVs work as flying base stations (BSs) that offer computing and communication facilities for Internet of Things (IoT) devices and applications, e.g., disaster and rescue, autonomous control, military operations, and smart farming \cite{mozaffari2019tutorial, tun2020energy}. However, UAVs are energy-constrained and must delicately allocate their available resources while concurrently serving as local multi-access edge computing (MEC) infrastructure. Besides, to fully reap the benefits of UAVs, it is essential to integrate them efficiently with the 5G New Radio (NR) standards to deliver latency-sensitive data packets.
5G NR standardization has been a significant paradigm shift toward realizing full-fledged communication networks for next-generation applications. The Third Generation Partnership Project (3GPP) Release 15 5G-NR \cite{3gpp} supports connectivity for massive device densities, high data rates, and ultra-reliable low-latency communication (URLLC) services. URLLC supports highly reliable, low-latency services, i.e., a latency of less than 1~\textit{ms} while guaranteeing a packet error rate (PER) on the order of $10^{-5}$ \cite{3gpp}. This mandates immediate transmission of critical URLLC packets over the short transmission time interval (sTTI) \cite{3gpp} to meet the stringent latency-reliability requirements. Hence, recent studies focus on the resource block\footnote{A resource block (RB) is the smallest unit of bandwidth resources defined in Long Term Evolution (LTE) \cite{3gpp}.} (RB) allocation problem to deliver URLLC services \cite{anand2020joint,bennis2018ultrareliable, alsenwi2019embb, huang2020deep, kasgari2019model, alsenwi2021intelligent}. However, the solution to overcome the challenges due to the dynamic nature of URLLC traffic is non-trivial.
In line with \cite{alsenwi2019embb}, several approaches have been recently proposed to optimize resource allocation \cite{anand2020joint, huang2020deep}, considering a general arrival process to capture dynamic URLLC traffic and perform RB allocation. The authors in \cite{kasgari2019model} proposed a model-free approach to guarantee end-to-end latency and reliability by imposing latency constraints in the optimization problem. However, the authors in \cite{kasgari2019model, alsenwi2019embb, anand2020joint, huang2020deep} consider a typical network infrastructure with a fixed BS, having no energy restrictions, and mobile users deployed randomly under its coverage area. Furthermore, the challenges of integrating 5G features into energy-constrained UAV systems, particularly when serving out-of-coverage users, are still overlooked in recent studies \cite{mozaffari2019tutorial, khawaja2019survey, tun2020energy}.
In this work, we first leverage the benefits of UAVs to ensure latency-sensitive data transmission and offer 5G services in an out-of-coverage area with unmanned aerial systems. In particular, we adopt a Gaussian Process Regression (GPR) \cite{williams1998prediction} approach to capture the network dynamics and predict the URLLC traffic online for executing an efficient resource allocation (i.e., RBs and power) and an optimal deployment strategy of the UAV. GPR, a flexible and robust active learning approach that shows merit in tackling the issues of parametric models \cite{williams1998prediction} and in capturing uncertainty in time-varying processes, allows UAVs to predict latency-sensitive data packets before serving the remote IoT devices. Hence, leveraging the methodological advantages of a GPR-based approach, which offers better functional performance and analytical properties, we can predict the dynamic URLLC traffic in an online fashion. Moreover, unlike existing prediction algorithms such as Long Short-Term Memory (LSTM), a GPR mechanism works well on small datasets and efficiently handles random components while making predictions.
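As a concrete illustration of the GPR machinery we rely on, the following is a minimal from-scratch sketch of GP posterior prediction with an RBF kernel; the traffic trace and hyperparameters are synthetic placeholders, not the model used in our evaluation:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, sf2=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, length=1.0, sf2=1.0, noise=1e-2):
    """GP posterior mean and variance at x_test given noisy observations."""
    K = rbf_kernel(x_train, x_train, length, sf2) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test, length, sf2)
    Kss = rbf_kernel(x_test, x_test, length, sf2)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Synthetic URLLC arrival trace: predict the next slot from past slots.
t = np.arange(20, dtype=float)
arrivals = 5.0 + 2.0 * np.sin(0.5 * t)          # placeholder traffic model
mean, var = gp_predict(t, arrivals, np.array([20.0]), length=2.0, sf2=4.0)
# mean[0] estimates the next arrival rate; var[0] quantifies uncertainty.
```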
In summary, the main contribution of this paper is a novel framework to deliver critical URLLC services in an out-of-coverage area by deploying UAVs. In doing so, we develop a practical integration of 5G features with UAV networks and leverage GPR to appropriately characterize and predict the dynamic URLLC traffic online. We later fuse this prediction information to optimize the radio resources, i.e., RBs and transmit power, and the deployment strategy of the UAV. Then, we formulate a joint optimization problem for maximizing the average sum-rate while minimizing the transmit power of the UAV, with constraints to satisfy the stringent URLLC requirements. The formulated problem is shown to be a mixed-integer nonlinear program (MINLP), which is NP-hard due to the binary constraint. Hence, we relax the binary constraints and decompose the proposed problem into three sub-problems, which are solved using a low-complexity, near-optimal successive minimization algorithm. Simulation results show that the proposed approach achieves a performance gain of up to 24.2\% as compared with the baseline while satisfying the reliability constraints.
To our best knowledge, this is the first work that adopts GPR for performing dynamic URLLC traffic prediction and resource optimization to guarantee maximum average sum-rate and minimum transmit power, jointly, in a UAV-assisted 5G network.
\begin{figure}[t!]
\centering
\captionsetup{justification = centering}
\includegraphics[width=4.5in]{Pandey_WCL2021_0146_R1_fig1.eps}
\caption{Illustration of our system model.}
\label{fig:sys_model}
\end{figure}
\section{System Model and Problem Formulation}
\subsection{System Model}
In this work, as depicted in Fig.~\ref{fig:sys_model}, we consider a wireless network system where a single UAV is deployed in an out-of-coverage area to provide wireless communication services to a set of URLLC users\footnote{We will use the term ``users" to denote URLLC users henceforth.} $\mathcal{U}$ of $|\mathcal{U}|=U$ over time slots in the set $\mathcal{T}$ of $|\mathcal{T}|=T$. We fix the location of the UAV at altitude $H$, and the horizontal coordinates are initialized at $(x,y)$; thus, the position of the UAV is $\boldsymbol{c}=[x, y, H]$. Similarly, the location of URLLC user $u$ is $\boldsymbol{o}_u=[x_u, y_u], \forall u \in \mathcal{U}$. The total available system bandwidth is divided into a set of RBs $\mathcal{B}$ of $|\mathcal{B}|=B$. We consider that a line-of-sight (LoS) link is available, and the orthogonal frequency division multiple access (OFDMA) scheme is adopted to share radio resources amongst the URLLC users. Let $a_{u}^{b} (t) \in \{0, 1\}$ be the RB assignment variable at time slot $t$ defined as
\begin{equation}
a_{u}^{b} (t) =
\begin{cases}
1, \; \; \text{ if user $u$ is assigned to RB $b$ at time slot $t$},\\
0, \; \; \text{otherwise}.
\end{cases}
\end{equation}
Next, considering a general channel fading model \cite{zhan2017energy}, we can define the channel coefficient between the user $u$ assigned to RB $b$ and UAV at time slot $t$ as $h_u^b(t)= \sqrt{\gamma_u^b (t)}\rho_u^b(t)$, where $\rho_u^b(t)$ is a small-scale fading coefficient and $\gamma_u^b (t)$ is the attenuation factor depending upon the distance between the user and UAV. Formally, we define $\gamma_u^b (t)$ as
\begin{align}
\gamma_u^b (t) &= \gamma_0 d_u^{-\theta}, \forall u\in \mathcal{U}, \forall b \in \mathcal{B}, \forall t \in \mathcal{T}, \label{eq:attenuation}
\end{align}
where, respectively, $d_u$ is the distance between the UAV and user $u$ defined as $d_u = \sqrt { H^2 + || \boldsymbol{c} - \boldsymbol{o}_u||^2 }$, $\theta$ is the path loss exponent, and $\gamma_0$ is the channel power gain at the reference distance $d_0 =1$ m. Then, considering the LoS path, the small-scale fading can be modeled by the Rician fading with $\mathbb{E}|\rho_u^b(t)|^2 =1$ as
\begin{align} \label{eq:rician}
\rho_u^b(t) &= \sqrt{\frac{\Tilde{K}_u^b(t)}{\Tilde{K}_u^b(t)+1}}\rho + \sqrt{\frac{1}{\Tilde{K}_u^b(t)+1}}\Tilde{\rho},
\end{align}
where $\rho$ and $\Tilde{\rho}$, respectively, denote the deterministic LoS channel component with $|\rho| =1$ and the random scattered component, which is a circularly symmetric complex Gaussian
(CSCG) random variable with zero mean and unit variance, and $\Tilde{K}_u^b(t)$ is the Rician factor of the channel between the user $u$ assigned to RB $b$ and the UAV at time slot $t$. Therefore, the achievable rate over RB $b$ for user $u \in \mathcal{U}$ at time slot $t$ of a block fading channel in the finite-blocklength regime \cite{polyanskiy2010channel, she2021tutorial} is defined as
\begin{align} \label{SINR}
r_u^b (t) &= a_{u}^{b} (t)\Bigg[\omega^b \log_2\left(1 + \dfrac{p_u^b (t)|h_u^b (t)|^2}{n_0 \omega^b}\right) - \sqrt\frac{V_u^b(t)}{n_u^b(t)}Q^{-1}(\Theta)\Bigg], \forall u\in \mathcal{U}, \forall b \in \mathcal{B}, \forall t \in \mathcal{T},
\end{align}
where $\omega^b$ is the bandwidth of each RB, $n_0$ is the additive white Gaussian noise power density, and $p_u^b (t)$ is the transmit power of the UAV over RB $b$ at time slot $t$; $V_u^b(t)$ is the channel dispersion\footnote{It captures the stochastic variability of the channel of user $u$ at time slot $t$.} given by $V_u^b(t)=1-\frac{1}{\left(1+\frac{p_u^b(t)|h_u^b(t)|^2}{n_0\omega^b}\right)}$, $n_u^b(t)$ is the blocklength, and $Q^{-1}(\cdot)$ is the inverse of the Gaussian Q-function evaluated at the decoding error probability $\Theta >0$.
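The rate expression above can be evaluated directly. The sketch below follows the formula as stated in the text (with the bandwidth normalized to $\omega^b = 1$, so the rate is per channel use) and computes $Q^{-1}$ from the standard Gaussian quantile; all numeric values are illustrative:

```python
import math
from statistics import NormalDist

def finite_blocklength_rate(snr, blocklength, theta, bandwidth=1.0):
    """Achievable rate with the finite-blocklength penalty term.

    Follows the paper's expression: bandwidth * log2(1 + snr) minus
    sqrt(V / n) * Q^{-1}(theta), with the channel dispersion
    V = 1 - 1 / (1 + snr) as defined in the text.
    """
    shannon = bandwidth * math.log2(1.0 + snr)
    dispersion = 1.0 - 1.0 / (1.0 + snr)
    q_inv = NormalDist().inv_cdf(1.0 - theta)   # Q^{-1}(theta)
    return shannon - math.sqrt(dispersion / blocklength) * q_inv

r = finite_blocklength_rate(snr=100.0, blocklength=500, theta=1e-5)
# r is strictly below the Shannon rate log2(101) and approaches it as
# the blocklength grows.
```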
Let $L_u(t)$ denote the random URLLC traffic arrival at time slot $t$ of user $u$. Therefore, the system reliability constraint can be formally defined as
\begin{equation}
\mathsf{Pr} \left[ \sum_{b=1}^{B}r_u^b(t) \leq \beta L_u(t) \right] \leq \epsilon , \forall u \in \mathcal{U}, \forall t \in \mathcal{T},
\end{equation}
where $\beta$ denotes the URLLC packet size, and $\epsilon$ is a small outage threshold value. Then, by using Markov's inequality \cite{alsenwi2019embb}, we can upper-bound the outage probability in (4) as follows:
\begin{equation}
\mathsf{Pr} \left[ \sum_{b=1}^{B} r_u^b (t)\leq \beta L_u(t) \right]\leq \frac{\beta \mathbb{E}[L_u(t)]}{\sum_{b=1}^{B} r_u^b (t)}, \forall u \in
\mathcal{U}, \forall t \in \mathcal{T}.
\label{eq:markov}
\end{equation}
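The Markov bound above can be checked empirically under any traffic model. The sketch below uses Poisson arrivals purely as an illustrative example (the analysis itself assumes only a general arrival process), with hypothetical numeric values:

```python
import numpy as np

rng = np.random.default_rng(1)

rate_sum = 500.0          # allocated sum-rate over the user's RBs (bits/slot)
beta = 32.0               # URLLC packet size (bits), illustrative
mean_arrivals = 10.0      # E[L_u(t)], illustrative Poisson traffic model

L = rng.poisson(mean_arrivals, size=200_000)
outage_empirical = np.mean(beta * L >= rate_sum)
outage_markov = beta * mean_arrivals / rate_sum
# Markov's inequality: Pr[beta * L >= rate_sum] <= beta * E[L] / rate_sum,
# so the empirical outage never exceeds the (loose) Markov bound.
```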
\subsection{Problem Formulation}
In order to satisfy the stringent latency requirements of URLLC traffic, we need to consider the limited resource capacity, i.e., transmit power and RBs, at the UAV and its optimal deployment strategy. Moreover, requiring the upper bound in \eqref{eq:markov} to be at most the outage threshold $\epsilon$ yields the linear reliability constraint \eqref{cons1:reliablity}. Therefore, we formulate our optimization problem to jointly maximize the average sum-rate and minimize the transmit power of the UAV, while ensuring the URLLC constraints, as follows:
\begin{maxi!}[2]
{\boldsymbol{a, p, c}}
{ \frac{1}{T}\sum_{t=1}^{T}\bigg( \sum_{u=1}^{U} \sum_{b=1}^{B} r_u^b (t) - \zeta\sum_{u=1}^{U} \sum_{b=1}^{B} p_u^b (t) \bigg)}{\label{opt:P1}}{\textbf{P:}}
\addConstraint{ \sum_{b=1}^{B}r_u^b (t)\geq \frac{\beta\mathbb{E}[L_u(t)]}{\epsilon},}{\; \; \forall u \in \mathcal{U}, \forall t \in \mathcal{T}, \label{cons1:reliablity}}
\addConstraint{\sum_{u=1}^{U} a_{u}^{b} (t) \leq 1,}{\; \; \forall b \in \mathcal{B}, \forall t \in \mathcal{T}, \label{cons2:association}}
\addConstraint{a_{u}^{b} (t) \in \{0,1\},}{\; \; \forall u \in \mathcal{U}, b \in \mathcal{B}, \forall t \in \mathcal{T}, \label{con3:association_variable}}
\addConstraint{\sum_{u=1}^{U} \sum_{b=1}^{B} a_{u}^{b} (t) p_{u}^{b} (t)\leq P^{\mathsf{max}}, \forall t \in \mathcal{T},}{ \label{con4:powerbudget}}
\addConstraint{0 \leq p_{u}^{b} (t) \leq P^{\mathsf{max}},}{\forall u \in \mathcal{U}, \forall b \in \mathcal{B}, \forall t \in \mathcal{T}, \label{con5:powerbudget_variable}}
\end{maxi!}
where $\boldsymbol{a}$ and $\boldsymbol{p}$ are, respectively, the RB-allocation and transmit-power matrices, $\boldsymbol{c}$ is the location of the UAV, and $\zeta > 0$ is a scaling constant. Here, $\boldsymbol{a}$ characterizes the mapping between the RBs in $\mathcal{B}$ and the users in $\mathcal{U}$. \eqref{cons1:reliablity} is the URLLC reliability constraint, and constraints \eqref{cons2:association} and \eqref{con3:association_variable} ensure that each RB is assigned to at most one user. Constraints \eqref{con4:powerbudget} and \eqref{con5:powerbudget_variable} ensure that the total transmit power of the UAV over all RBs is bounded by the system power budget $P^{\mathsf{max}}$.
\section{Proposed Solution Approach}
The formulated problem in \eqref{opt:P1} is an MINLP, which may require exponential complexity to solve. To solve \eqref{opt:P1} efficiently, albeit sub-optimally, we decompose it into three sub-problems: (i) \textit{RB allocation}, (ii) \textit{transmit power allocation}, and (iii) \textit{UAV location optimization}.
\subsection{RB Allocation Problem for a Given Power Allocation and UAV Location}
For a given power allocation and UAV location, we can relax the binary constraint \eqref{con3:association_variable} and recast the integer programming problem \eqref{opt:P1} as an RB allocation problem. The fractional solution is then rounded to obtain a solution to the original integer problem, following the threshold rounding technique described in \cite{alsenwi2021intelligent}. Hence, we pose \eqref{opt:P1} as
\begin{maxi!}[2]
{\hat{\boldsymbol{a}}}
{\frac{1}{T}\sum_{t=1}^{T}\sum_{u=1}^{U} \sum_{b=1}^{B} r_u^b (t) }{\label{opt:P2}}{\textbf{P1: }}
\addConstraint{\eqref{cons1:reliablity}, \eqref{cons2:association}, \eqref{con4:powerbudget},}
\addConstraint{\hat{a}_{u}^{b} (t) \in [0,1],}{\; \; \forall u \in \mathcal{U}, b \in \mathcal{B}, \forall t \in \mathcal{T}. \label{con3_P2:association_variable}}
\end{maxi!}
The above problem is a maximization of a concave objective subject to linear constraints; hence, it is a convex optimization problem that can be solved efficiently using the ECOS solver in the CVXPY toolkit.
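After solving the relaxed problem, the fractional allocation must be converted back to a binary one. The snippet below is a simplified reading of the threshold rounding idea (assign each RB to its largest fractional user when that value clears a threshold); the exact rule in \cite{alsenwi2021intelligent} may differ, so treat this as an illustrative sketch.

```python
# Sketch of threshold rounding: a fractional allocation a_hat in [0,1]^{U x B}
# is converted to a binary allocation that respects "at most one user per RB".
# The threshold value 0.5 is an illustrative choice.

def threshold_round(a_hat, threshold=0.5):
    U, B = len(a_hat), len(a_hat[0])
    a = [[0] * B for _ in range(U)]
    for b in range(B):
        # candidate user with the largest fractional share of RB b
        best_u = max(range(U), key=lambda u: a_hat[u][b])
        if a_hat[best_u][b] >= threshold:
            a[best_u][b] = 1
    return a
```

By construction each column (RB) of the output has at most one nonzero entry, so constraints \eqref{cons2:association}--\eqref{con3:association_variable} hold.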
\subsection{Transmit Power Allocation Problem for a Given RB Allocation and UAV Location}
For a given RB allocation and UAV location, we can recast the integer programming problem \eqref{opt:P1} as a transmit power allocation problem:
\begin{maxi!}[2]
{\boldsymbol{p}}
{\frac{1}{T}\sum_{t=1}^{T} \bigg(\sum_{u=1}^{U} \sum_{b=1}^{B} r_u^b (t) - \zeta\sum_{u=1}^{U} \sum_{b=1}^{B} p_u^b (t) \bigg)}{\label{opt:P3}}{\textbf{P2: }}
\addConstraint{\eqref{cons1:reliablity}, \eqref{con4:powerbudget}, \eqref{con5:powerbudget_variable}.}
\end{maxi!}
For any given RB allocation, the above problem is a
convex optimization problem which can be solved efficiently by the UAV.
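To see why \textbf{P2} is tractable, note that for a per-RB rate of the form $\log_2(1+g_b p_b)$ (a simplification of the rate expression $r_u^b$ defined earlier in the paper), the KKT conditions yield a water-filling-type solution. The sketch below handles only the power budget and ignores the reliability constraint \eqref{cons1:reliablity}, so it illustrates the structure rather than the full solver; the effective gains $g_b$ are assumed inputs.

```python
import math

def waterfill(g, zeta, P_max, iters=100):
    """Maximize sum_b log2(1 + g[b]*p[b]) - zeta*sum_b p[b]
    subject to sum_b p[b] <= P_max and p >= 0.
    KKT: p[b] = max(0, 1/((zeta+lam)*ln2) - 1/g[b]); bisect on lam >= 0."""
    def alloc(lam):
        level = 1.0 / ((zeta + lam) * math.log(2))
        return [max(0.0, level - 1.0 / gb) for gb in g]
    p = alloc(0.0)
    if sum(p) <= P_max:            # budget inactive: lam = 0 is optimal
        return p
    lo, hi = 0.0, 1.0
    while sum(alloc(hi)) > P_max:  # grow hi until it brackets the multiplier
        hi *= 2.0
    for _ in range(iters):         # bisection on the budget multiplier
        mid = 0.5 * (lo + hi)
        if sum(alloc(mid)) > P_max:
            lo = mid
        else:
            hi = mid
    return alloc(hi)
```

As expected from water-filling, RBs with larger effective gain receive more power, and when the penalty $\zeta$ is large the solver simply switches power off.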
\subsection{UAV Location Optimization for a Given Power and RB Allocation}
For a given RB allocation from \textbf{P1} and power allocation from \textbf{P2}, the location optimization problem can be formulated as follows:
\begin{maxi!}[2]
{\boldsymbol{c}}
{\frac{1}{T}\sum_{t=1}^{T} \bigg(\sum_{u=1}^{U} \sum_{b=1}^{B} r_u^b (t) - \zeta\sum_{u=1}^{U} \sum_{b=1}^{B} p_u^b (t) \bigg)}{\label{opt:P4}}{\textbf{P3: }}
\addConstraint{ \sum_{b=1}^{B}r_u^b (t)\geq \frac{\beta\mathbb{E}[L_u(t)]}{\epsilon},}{\; \; \forall u \in \mathcal{U}, \forall t \in \mathcal{T}, \label{cons1_P4:reliablity}}
\end{maxi!}
The formulated problem is a convex optimization problem in $x$ and $y$, which can be shown by following the sequence of deductions in the proofs given in [\citenum{xu2020joint}, Appendix]. In particular, the Hessian of the inverse of the objective function of \textbf{P3} is shown to be positive semi-definite, i.e., the inverse is convex, with respect to the UAV location. Then, by the composition rules that preserve convexity, the objective function is concave and positive, so its maximization is a convex problem. Therefore, we can solve \textbf{P3} using the ECOS solver in the CVXPY toolkit.
However, to solve \textbf{P1}, \textbf{P2}, and \textbf{P3}, we first need to efficiently predict the expected random URLLC traffic load $L_u(t), \forall u$, at time $t$. A naive approach is to model $L_u(t), \forall u$, as a random variable with some known distribution \cite{alsenwi2019embb}; however, it may result in poor performance when making online scheduling decisions for URLLC traffic placement. Hence, we resort to a GPR approach, which is a flexible and robust mechanism to capture the network dynamics and provide online URLLC traffic prediction with minimal errors.
Algorithm~\ref{alg:profit} summarizes the procedure to solve \textbf{P}, which converges because the overall problem is multi-convex; hence, solving the convex sub-problems in an iterative manner ensures convergence \cite{tun2020energy, xu2020joint, alsenwi2021intelligent}.
\begin{algorithm}[t!]
\caption{\strut Iterative solution approach for the relaxed problem}
\label{alg:profit}
\begin{algorithmic}[1]
\STATE{\textbf{Initialization:} Set $k=0$ and initial solutions $(\boldsymbol{a}^{(0)} (t), \boldsymbol{p}^{(0)} (t), \boldsymbol{c}^{(0)} (t))$;}
\STATE Obtain URLLC traffic prediction $L_u(t)$ from \eqref{channel_prediction};
\REPEAT
\STATE{Compute $\hat{\boldsymbol{a}}^{(k+1)}(t)$ from (P1) at given $ \boldsymbol{p}^k(t)$, $\boldsymbol{c}^{(k)}(t)$};
\STATE{Compute $\boldsymbol{p}^{(k+1)}(t)$ from (P2) at given $ \hat{\boldsymbol{a}}^{(k+1)}(t)$, $\boldsymbol{c}^{(k)}(t)$};
\STATE{Compute $\boldsymbol{c}^{(k+1)}(t)$ from (P3) at given $ \hat{\boldsymbol{a}}^{(k+1)}(t)$, $ \boldsymbol{p}^{(k+1)}(t)$};
\STATE{$k = k + 1$};
\UNTIL{objective function converges.}
\STATE{Recover a binary solution $\boldsymbol{a}^{(k+1)}(t)$ from $\hat{\boldsymbol{a}}^{(k+1)}(t)$ using the threshold rounding technique \cite{alsenwi2021intelligent}.}
\STATE{Then, set $\big(\boldsymbol{a}^{(k+1)}(t)$, $\boldsymbol{p}^{(k+1)}(t), \boldsymbol{c}^{(k+1)}(t) \big)$ as the desired solutions}.
\end{algorithmic}
\label{Algorithm}
\end{algorithm}
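The convergence behavior of the alternating updates in Algorithm 1 can be illustrated on a toy two-block problem: exact minimization over one block at a time never increases the objective, and for this multi-convex instance the iterates converge to the joint optimum. The function and starting point below are purely illustrative.

```python
# Toy block-coordinate descent: f is convex in x for fixed y and in y for
# fixed x, so alternating exact block updates monotonically decrease f.

def f(x, y):
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + (x - y) ** 2

x, y = 10.0, -10.0
values = [f(x, y)]
for _ in range(50):
    x = (1.0 + y) / 2.0   # exact minimizer of f(., y)
    y = (2.0 + x) / 2.0   # exact minimizer of f(x, .)
    values.append(f(x, y))
# The iterates converge to the joint minimizer (x, y) = (4/3, 5/3).
```

The same monotone-improvement argument underlies the convergence claim for the sub-problems \textbf{P1}--\textbf{P3}, with maximization in place of minimization.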
\subsection{GPR-based URLLC Traffic Prediction}
Our aim is to perform an online prediction of the incoming URLLC traffic load of the next time slot, $\hat{L}_u(t+1)$, at each time slot $t$. To achieve this, we update the learning parameters over a moving window. Let $N$ be the window size, i.e., the window is composed of the last $N$ time slots, indexed by the set $\mathcal{N}$ with $|\mathcal{N}|=N$. The model parameters are trained on the data, i.e., the URLLC traffic, inside the window. The trained parameters are then used to predict the URLLC traffic load of the next time slot.
In this view, for a finite data set $(t_n, L_u(t_n)), \;\forall n\in\mathcal{N}$, a general GPR-based prediction model \cite{williams1998prediction} can be modified as
\begin{equation}
\hat{L}_u(t+1)=f(L_u(t))+\varepsilon, \; \forall u\in\mathcal{U},
\end{equation}
where $f(\cdot)$ is the regression function, modeled as a Gaussian process whose mean function is set to zero in the absence of prior knowledge, and $\varepsilon$ is an independent noise term, modeled as a zero-mean Gaussian random variable with variance $\sigma^2_{\varepsilon}$,
with the kernel function $g(\cdot)$ defined as
\begin{equation}
\begin{split}
g\big(L(t-m), L(t-n), \boldsymbol{\theta}\big)=\exp\bigg(\frac{-1}{\theta_1}\sin^2\Big(\frac{\pi}{\theta_2}\big(L(t-m)-L(t-n)\big)\Big)\bigg),
\end{split}
\end{equation}
where $m, n\in\{0, 1, 2, \dots, N\}$, and $\boldsymbol{\theta} =[\theta_1, \theta_2]$ is the vector of length-scale and period hyper-parameters, respectively. Accordingly, the URLLC traffic load prediction at time slot $t+1$ is given as
\begin{equation}
\hat{L}_u(t+1)=g^{\dagger}(t)\boldsymbol{G}^{-1}[L_u(t-N), L_u(t-N+1), \dots, L_u(t)],
\label{channel_prediction}
\end{equation}
where $\boldsymbol{G}=[g(t-m, t-n)]$, and $g(t)=[g(t, t-n)], \; \forall m, n\in\mathcal{N}$. Moreover, the variance (uncertainty) on the predicted value is given by
\begin{equation}
\textsf{Var}\big(\hat{L}_u(t+1)\big)=g(t, t)-g^{\dagger}(t)\boldsymbol{G}^{-1}g(t).
\label{variance}
\end{equation}
The traffic prediction is obtained from \eqref{channel_prediction}, and exploring highly uncertain traffic, as quantified by \eqref{variance}, provides further insight.
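A minimal sketch of the GPR predictor follows, using the periodic kernel above with assumed hyper-parameters $\theta_1, \theta_2$ and a small jitter term added to the diagonal of $\boldsymbol{G}$ for numerical stability (an implementation detail, not part of the model).

```python
import math

def kernel(a, b, theta1=1.0, theta2=7.0):
    # Exp-sine-squared (periodic) kernel as in the text; theta values assumed.
    return math.exp(-math.sin(math.pi * (a - b) / theta2) ** 2 / theta1)

def solve(A, b):
    # Small dense linear solve via Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gpr_predict(ts, ys, t_next, jitter=1e-10):
    """Posterior mean and variance, as in the prediction and variance
    equations above, evaluated at time t_next."""
    n = len(ts)
    G = [[kernel(ts[i], ts[j]) + (jitter if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    g = [kernel(t_next, ts[j]) for j in range(n)]
    alpha = solve(G, ys)                       # G^{-1} y
    mean = sum(g[j] * alpha[j] for j in range(n))
    beta = solve(G, g)                         # G^{-1} g
    var = kernel(t_next, t_next) - sum(g[j] * beta[j] for j in range(n))
    return mean, var
```

Querying a point inside the training window reproduces the observed load with near-zero variance, consistent with \eqref{variance}.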
\begin{figure*}[t!]
\centering
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=\linewidth]{Pandey_WCL2021_0146_R1_fig2.eps}
\caption{}
\label{fig:a}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=\linewidth]{Pandey_WCL2021_0146_R1_fig3.eps}
\caption{}
\label{fig:b}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=\linewidth]{Pandey_WCL2021_0146_R1_fig4.eps}
\caption{}
\label{fig:rate}
\end{subfigure}
\caption{The tradeoff between UAV transmission energy and the system bandwidth varying (a) the outage threshold $\epsilon$, and (b) the network density $U$. Fig.~\ref{fig:rate} shows the impact of the number of users on the average per-user data rate for different channel bandwidths.}
\setlength{\belowcaptionskip}{-10pt}
\end{figure*}
\section{Performance Evaluation}
In our simulation, we consider the UAV deployed at a fixed height in the range $H = [100, 150]$\,m with a coverage area of $(250\times250)$\,m$^2$. We set the number of users in the range $[5, 20]$, positioned randomly in the UAV's coverage area. We consider the bandwidth of each RB to be $\omega=180$ kHz. The total available transmit power is set as $P^{\mathsf{max}}=10$ Watts, the noise power density is $n_0=-174$ dBm/Hz, and the channel gain at the reference distance is $\gamma_0 = -30$ dBm. We consider a URLLC packet size of $\beta=32$ bytes, and a window size of $N=600$ time slots for traffic prediction. Due to the absence of real URLLC traffic datasets, we adopt real-world stock market datasets\footnote{https://www.kaggle.com/szrlee/stock-time-series-20050101-to-20171231.} to replicate and characterize the URLLC traffic load dynamics, and hence evaluate the performance of the proposed algorithms. Moreover, we show the performance evaluation of the proposed approach in terms of overall transmission energy, which captures both the transmit power and the average transmission rate.
Fig.~\ref{fig:a} shows the impact of the outage threshold $\epsilon$ on the overall UAV transmission energy for different system bandwidth configurations with $U=20$. In this figure, decreasing $\epsilon$ corresponds to higher URLLC reliability. We compare the performance of the proposed approach with two intuitive baselines: \textbf{Maximum power}, the worst-case scenario for satisfying the reliability constraints, and \textbf{Random}, which considers a random placement of the UAV. For a given system bandwidth, the UAV increases its transmit power, and hence its transmission energy, to improve the average sum-rate required for higher reliability (i.e., smaller $\epsilon$); even so, the proposed approach achieves a performance gain of up to 24.2\% compared with Maximum Power and around 23\% compared with Random. On the other hand, we observe that the UAV significantly lowers its transmit power when the available system bandwidth is high, resulting in a low transmission energy. This is expected, as a lower transmit power is sufficient to maximize the average sum-rate for URLLC users while guaranteeing their reliability requirements. Thus, we observe the tradeoff between the overall transmission power and the average sum-rate, as defined in \eqref{opt:P1}. Moreover, the results are obtained after performing Monte-Carlo simulations to capture the variations in the UAV transmission energy when increasing the available system bandwidth. Such variations are particularly due to the random dynamics of the wireless channel and the uncertainty in the arrival of the URLLC traffic load.
Fig.~\ref{fig:b} demonstrates the impact of the network density, i.e., the number of URLLC users, on the UAV transmission energy for $\epsilon=0.1$. For a given available system bandwidth, it is shown that the UAV requires higher transmission energy to satisfy the stringent requirements of a larger number of users. Moreover, with the increase in the available bandwidth, the UAV can reduce its transmission energy consumption (lower transmit power) without compromising the achievable average sum-rate. Similarly, Fig.~\ref{fig:rate} evaluates the impact of the number of users on the average per-user data rate. In particular, we observe a sub-linear increase in the average per-user rate with the available system bandwidth. We also notice a negative impact of the network density on the per-user rate, i.e., with an increasing number of users, the per-user rate drops. This is intuitive, as the radio resources are shared amongst a larger number of users.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\linewidth]{Pandey_WCL2021_0146_R1_fig5.eps}
\caption{The performance of URLLC traffic load prediction model for N = 600.}
\label{fig:gpr}
\end{figure}
Finally, Fig.~\ref{fig:gpr} captures the real trend of the normalized URLLC traffic load and the propagating uncertainty over time using the GPR prediction approach, as compared to a two-layer LSTM network with the ``\textit{rmsprop}'' optimizer. In particular, it is observed that the prediction accuracy of GPR surpasses $99\%$ with a mean squared error (MSE) of 0.00288 when trained on 600 data samples over a window of 600 time slots\footnote{In our formulation, we consider the worst-case scenario using the chance constraint to ensure the reliability requirement of URLLC under such minimal errors.}, whereas the LSTM attains an MSE of 0.015. Consequently, a better margin of performance gain is obtained when performing RB and transmit power optimization to satisfy the URLLC requirements.
\section{Conclusions}
In this letter, we have studied the problem of practically integrating 5G features with a resource-constrained UAV to deliver URLLC services in an out-of-coverage area. In doing so, we have first exploited a GPR approach to capture the real trend of URLLC traffic using real-world datasets. We have then formulated a joint optimization problem that incorporates the optimal deployment strategy of the UAV to maximize the average sum-rate while minimizing its transmit power, subject to the stringent URLLC requirements. The formulated problem is an MINLP, which is challenging to solve directly using conventional optimization techniques. To tackle this issue, we have introduced a low-complexity, near-optimal successive optimization algorithm. Finally, we have presented numerical results to validate the efficiency of the proposed solution approach, which outperforms the other baselines.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\vspace{-0.3cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
The study of random graphs dates back to the work of Erd\H{o}s and R\'enyi in the late 1950s \cite{erdHos1960evolution,erdHos1959random}.
In particular, the transition of a random graph as a composition of mostly small components to one with a ``giant component'' to a connected graph has been studied extensively. These structural changes are known as phase transitions, and the two-step process above is sometimes referred to as the ``double-jump'' in a random graph. Beginning with the revolutionary work of Erd\H{o}s and R\'{e}nyi, phase transitions have been studied in a multitude of different settings (see, for example, \cite{bollobas2007phase, bollobas2012simple,ding2011anatomy,janson1993birth,janson2012phase, spencer2010giant}, among many others).
Among the first works in the subject, Erd\H{o}s and R\'{e}nyi published an analysis of component sizes and phase transitions in random graphs
\cite{erdHos1960evolution,erdHos1959random}.
The first of the commonly studied random graphs is known as the Erd\H{o}s-R\'{e}nyi model. In this model, the number of vertices is denoted $n$ and the probability that two vertices are adjacent is denoted $p$, where each edge is included independently. Herein we use the notation $G(n, p)$ for this graph. Phase transitions have been studied extensively in $G(n, p)$. For $np \lesssim c<1$, this model contains only small components of size at most $\bigOh{\log n}$ asymptotically almost surely. For $1<c\lesssim np \lesssim \log n$, there is a giant component of size on the order of $n$ asymptotically almost surely, and for $np \gtrsim \log n$, the graph is connected asymptotically almost surely (see, for example, \cite{AlonandSpencer} for an analysis of component sizes in $G(n, p)$).
In this work, we study random graphs for which the vertices are embedded in a metric space, and edges are chosen based upon the distance between these vertices. Examples of graphs of this type abound in the literature, such as random geometric graphs (see, for example, \cite{penrose2003random,balister2008percolation,mahadev1995threshold}), geographical threshold graphs (see, for example, \cite{bradonjic2007wireless,masuda2005geographical,bradonjic2007giant}), the Kleinberg small world model (see, for example, \cite{garfield1979its,kleinberg2000navigation}), Waxman models (see, for example, \cite{waxman1988routing,van2001paths,naldi2005connectivity}), among others. See \cite{avin2008distance} for a description of some of these types of graphs.
The ``randomness'' in such graphs is generally presented in one of two ways: either vertices are chosen randomly or the edges are chosen randomly. For random geometric graphs and geographical threshold graphs, the vertices are chosen randomly from an underlying metric space, and then a rule is devised to determine their adjacency; typically the adjacency is deterministic once the vertex set has been chosen. Often the metric space in question here is $\mathbb{R}^d$, although that is not strictly necessary. On the other hand, for the Kleinberg small world model, the vertices are fixed in the metric space, but the presence of edges is chosen randomly as a decreasing function of the distance between nodes. The Waxman model, seemingly uniquely among graphs of this type, chooses both the position of the vertices in the metric space and the edges randomly.
In this work, we propose a graph model similar to a Kleinberg model or Waxman model. We consider a sequence of random graphs defined as follows. First, fix some metric space $(X, d)$, and take $X_1\subset X_2\subset X_3\subset\dots\subset X$ to be a sequence of sub-metric spaces of $X$, in which $|X_j|$ is finite for all $j$. For each $j$, let $f_j:[0, \DIAM {X_j})\to [0, 1]$ be a decreasing function. The graph $G(X_j, f_j)$ is defined by $V(G(X_j, f_j))=X_j$ and for any $u, v\in X_j$, $\mathbb{P}(u\sim v) = f_j(d(u, v))$. We refer to such a model in this work as a {\it random distance graph}.
To distinguish this model from the existing literature, we include an analysis of connectivity and other structures in the traditional Waxman model in Section \ref{S:Waxman}. Given a metric space $(X, d)$, together with a probability distribution $\mu$ over $X$, let $W(X, n, f)$ denote the traditional Waxman model over $X$ with $n$ vertices, wherein each vertex is embedded randomly in $X$ according to $\mu$, and the probability that two vertices are adjacent is given by $\mathbb{P}(u\sim v)=f(d(u, v))$. We prove the following.
\begin{theorem}\label{fixedms}
Let $X$ be a connected metric space with finite diameter $d_X$ and let $\mu$ be a probability distribution over $X$. Let $G=W(X, n, f_n)$. If there exists $\epsilon>0$ such that the functions $f_n$ satisfy the condition
\[
\frac{n}{\log n} f_n(d_X) > 1+\epsilon
\]
for $n$ sufficiently large, then the graph $G$ is connected a.a.s.
\end{theorem}
We note that in practice, Waxman models are typically used with $X$ a finite volume subset of $\mathbb{R}^k$, and $f=f_n= \alpha e^{-\frac{d}{\beta}}$ with $\alpha, \beta \in (0,1]$. Note that an immediate corollary to the above theorem is that all traditional Waxman graphs are connected asymptotically almost surely.
In contrast, we have the following theorems regarding connectivity in random distance graphs.
\begin{theorem}\label{T:conn}
Let $G_n=G(X_n,f_n)$. If there exists $\epsilon>0$ such that \[f_n(\DIAM X_n )>\frac{(1+\epsilon) \ln |X_n|}{|X_n|},\] then $G_n$ is connected a.a.s..
\end{theorem}
\begin{theorem}\label{T:disconn}
Let $(X_n)$ be a sequence of nested finite metric spaces with metric $\rho$, and let $\rho_n=\DIAM X_n$. Let $G_n=G(X_n,f_n)$. For each $n, d>0$, define \[a_n(d) = \sup_{v\in X_n}\left|\left\{u\in X_n\ | \ \rho(u, v)=d\right\}\right|.\] If there exists $\alpha>0$ such that
\[
f_n(d)\leq\frac{|X_n|^{-\alpha}}{a_n(d)\rho_n}
\]
for all $d>0$ and $n$ sufficiently large, then there exists $\epsilon>0$ such that with probability at least $1-|X_n|^{-\epsilon}$, $G_n$ has $|X_n|(1-o(1))$ isolated vertices.
\end{theorem}
We note that there is a bit of a gap between the sizes of $f_n$ in Theorems \ref{T:conn} and \ref{T:disconn}. However, we have a precise threshold in the case that $(X, d)$ is the $r$-dimensional integer lattice under the $\ell_1$ metric, and
\[ X_n = \{(a_1, a_2, \dots, a_r)\in X\ | \ 0\leq a_i\leq n-1 \hbox{ for all }i\};\]that is, $X_n$ is an $n\times n\times\dots\times n$-sized subset of $\mathbb{Z}^r$. We typically write $L_n^r$ to denote this metric space.
Using this metric space, we can view adjacency between nodes of $X_n$ as a function of the difference between corresponding coordinates in the vector representing the node. Examples of graphs defined in a similar way include stochastic Kronecker graphs (see, for example, \cite{mahdian2007stochastic,leskovec2010kronecker,radcliffe2013connectivity}), multiplicative attribute graphs (see, for example, \cite{kim2011modeling,kim2012multiplicative}), and random dot product graphs (see, for example, \cite{scheinerman2010modeling, nickel2007random, young2007random}). However, in these cases, the specific values of the coordinates are taken into account in determining the probability that two nodes are adjacent, whereas in this model, the only determining factor is the difference between corresponding coordinates.
In this particular case, we obtain the following result, which closes the gap between Theorems \ref{T:conn} and \ref{T:disconn}.
\begin{theorem}\label{T:main}
Let $f_n(d) = \frac{1}{n^\beta d}$, where $\beta\in \mathbb{R}$. Fix an integer $r\geq 1$, and let $G=G(L_n^r, f_n)$. Then
\begin{enumerate}
\item if $\beta<r-1$, then $G$ is connected a.a.s., and
\item if $\beta>r-1$, then $G$ is disconnected a.a.s., and, moreover, $G$ has $n^r(1-\lilOh{1})$ isolated vertices.
\end{enumerate}
\end{theorem}
This behavior is striking in that we do not see the typical ``double-jump'' between a giant component and a connected graph when $\beta$ is constant. Instead, the graph goes from having no large component directly to being connected. It seems likely that a more nuanced approach to choosing $\beta$ as a function of $n$ may identify a double jump here.
The approach to the proof of Theorem \ref{T:main} involves an approximation of the expected degree of each vertex, obtained by first expanding the vertex set to the infinite lattice $\mathbb{Z}^r$, and then considering an appropriately chosen subset. We use an approximation for the expected degree, but also include a proof of the precise expected degree in dimension $2$ for comparison in Section \ref{S:degreetwo}.
We note also that this choice of $f_n$ is designed to keep the graph sparse even as $n\to \infty$, in keeping with expectations for small world models (see, for example, \cite{humphries2008network, easley2010networks}). We shall see as a corollary that the density of the graph in fact tends to 0 as $n\to\infty$.
\section{Tools and notation}\label{S:notation}
Throughout this paper, we use standard graph theoretic notation and terminology. Our primary language is defined below; we refer the reader to \cite{bollobas1982graph} for any terminology not herein defined.
Let $G=(V, E)$ be a graph. Given a vertex $v\in V$, define $\deg_G(v)$ to be the degree of $v$ in $G$. If $G$ is understood, we shall write $\deg(v)$ for brevity. We write $u\sim v$ to indicate that $u$ is adjacent to $v$. A graph $G$ is connected if for any two vertices $u, v\in V$, there is a path between $u$ and $v$. A maximal connected subgraph of $G$ is called a connected component, or simply a component of $G$. A graph family $G_n$ is said to have a giant component if there exists a connected component containing $\bigTheta{|V(G_n)|}$ vertices of $G_n$ for each $n$.
A graph $G$ is called {\it vertex transitive} if for every $u, v\in V(G)$, there exists a function $\phi:V(G)\to V(G)$ such that $\phi(u)=v$, and $x\sim y$ if and only if $\phi(x)\sim \phi(y)$; that is, there is a homomorphism that sends any vertex in $G$ to any other.
Throughout, we shall focus on graphs with a fixed vertex set and randomly generated edges, where the set of edges are mutually independent. Given a sequence of random graphs $\{G_n\}$, we say that $G_n$ has a property P asymptotically almost surely (a.a.s.) if $\mathbb{P}(G_n\hbox{ has P})\to 1$ as $n\to \infty$. A graph property P is called {\it monotonic} if, whenever $H=(V, E')$, $E'\subset E$, has P and $H$ is a subgraph of $G$, then $G$ also has P; that is, the property is preserved if additional edges are added to the graph.
Let $G_1$ and $G_2$ be random graphs with $V(G_1)=V(G_2)=V$. We say that $G_2$ {\it dominates} $G_1$ if for all $u, v\in V$, $\mathbb{P}_{G_2}(u\sim v)\geq \mathbb{P}_{G_1}(u\sim v)$. We note the following lemma for monotone graph properties, which is a standard exercise in random graphs (see, for example, \cite{friedgut1996every, erdHos1960evolution}):
\begin{lemma}\label{dominated graphs and connectivity}
Let $G_{1}, G_{2}$ be random graphs with the same vertex set such that $G_2$ dominates $G_1$. If P is a monotone graph property, and $G_1$ has P a.a.s., then $G_2$ also has P a.a.s..
\end{lemma}
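The coupling argument behind Lemma \ref{dominated graphs and connectivity} can be made concrete: draw a single uniform random variable per vertex pair and threshold it at both probabilities, so the realization of $G_1$ is always a subgraph of the realization of $G_2$. A sketch:

```python
import random

# Coupling: draw one uniform U_e per pair e; G1 keeps e iff U_e < p1(e),
# G2 keeps e iff U_e < p2(e). When p2 >= p1 pointwise, every edge of the
# realized G1 is also an edge of G2, so any monotone property that holds
# for G1 also holds for G2.

def coupled_graphs(n, p1, p2, rng):
    E1, E2 = set(), set()
    for u in range(n):
        for v in range(u + 1, n):
            U = rng.random()
            if U < p1(u, v):
                E1.add((u, v))
            if U < p2(u, v):
                E2.add((u, v))
    return E1, E2
```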
We denote by $G(m,p)$ the Erd\H{o}s-R\'{e}nyi graph with $m$ vertices, such that the probability that any two vertices are adjacent is $p$. We recall the following classical result on connectivity in $G(m, p)$ (see, for example, \cite{AlonandSpencer}).
\begin{theorem}\label{T:Gnpconn}
Let $G=G(m, p)$. If there exists $\epsilon>0$ such that $p>\frac{(1+\epsilon)\ln m}{m}$, then $G$ is connected a.a.s.. On the other hand, if there exists $\epsilon>0$ such that $p<\frac{(1-\epsilon)\ln m}{m}$, then $G$ is disconnected a.a.s..
\end{theorem}
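The threshold in Theorem \ref{T:Gnpconn} is easy to observe experimentally; the sketch below builds $G(m,p)$ on either side of $\frac{\ln m}{m}$ and checks connectivity with a graph search. The constants $3$ and $0.3$, the size $m=300$, and the fixed seed are arbitrary illustrative choices well above and below the threshold.

```python
import math
import random

def er_graph(m, p, rng):
    """Sample G(m, p) as an adjacency list, edges included independently."""
    adj = [[] for _ in range(m)]
    for u in range(m):
        for v in range(u + 1, m):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def is_connected(adj):
    """Depth-first search from vertex 0."""
    m = len(adj)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == m

rng = random.Random(1)
m = 300
p_above = 3.0 * math.log(m) / m   # well above the connectivity threshold
p_below = 0.3 * math.log(m) / m   # well below; isolated vertices expected
```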
For our purposes, the monotone graph property of greatest interest is the property of a graph being connected a.a.s.. By combining Lemma \ref{dominated graphs and connectivity} with Theorem \ref{T:Gnpconn}, we have the following immediate result:
\begin{lemma}\label{T:domconn}
Let $G$ be a random graph on $m$ vertices. If there exists $\epsilon>0$ such that for all $u, v\in V(G)$, $\mathbb{P}(u\sim v)> \frac{(1+\epsilon)\ln m}{m}$, then $G$ is connected a.a.s..
\end{lemma}
As above, for a metric space $(X, d)$ and a function $f:\mathbb{R}^+\to [0, 1]$, let $G=G(X, f)$ be the random graph with $V(G)=X$ and $\mathbb{P}(u\sim v) = f(d(u, v))$ for all $u, v\in V$. We refer to this graph as a {\it random distance graph}. We shall use the notation $\mathbb{Z}^r$ and $L_n^r$ as defined in the introduction.
As for probabilistic tools, we shall require nothing more complex than Markov's Inequality, included here for completeness.
\begin{theorem}[Markov's Inequality]
Let $X$ be a random variable with $X\geq 0$. Then for all $a>0$, we have
\[\mathbb{P}(X>a)\leq \frac{\e{X}}{a}.\]
\end{theorem}
Throughout, we shall use the standard asymptotic notations of $\bigOh{\cdot}$, $\lilOh{\cdot}$, $\ll$, $\gg$, etc. We refer the reader to \cite{cormen2009introduction} for a full formal definition of these notations. Asymptotics will always be considered with respect to $n$; for example, if we write $f(n, r)=\bigOh{g(n, r)}$, it is implied that $r$ is to be held constant and the limit to be considered as $n\to\infty$.
\section{Connectivity in a traditional Waxman model}\label{S:Waxman}
Recall the definition of $W(X, n, f)$ as given in the introduction to be a Waxman graph over a metric space $(X, d)$, where vertices are embedded randomly in $X$ according to some probability distribution $\mu$, and $\mathbb{P}(u\sim v)=f(u, v)$. Waxman graphs have been used to generate models of random networks for modeling systems such as the Internet graph and various biological networks \cite{faloutsos1999power,calvert1997modeling}. The most traditional version of a Waxman graph is formed by taking the underlying metric space as $X=[0,1]^r$, a subset of $\mathbb{R}^r$ under the $\ell^2$ metric, and the distribution $\mu$ to be uniform over $X$. The function $f_n$ is typically chosen to be constant with respect to $n$, with $f_n(d)=f(d)=\alpha e^{-\frac{d}{\beta}}$ with $\alpha, \beta \in (0,1]$. It is commonly known, though no formal proof has been presented to our knowledge, that the graph $W(X, n, f)$ is connected a.a.s.. In fact, we can extend this even to the case that $f_n$ is not constant with respect to $n$, as follows.
\begin{theorem}[Restatement of Theorem \ref{fixedms}]
Let $X$ be a connected metric space with finite diameter $\rho_X$ and let $\mu$ be a probability distribution over $X$. Let $G=W(X, n, f_n)$. If there exists $\epsilon>0$ such that the functions $f_n$ satisfy the condition
\[
\frac{n}{\log n} f_n(\rho_X) > 1+\epsilon
\]
for $n$ sufficiently large, then the graph $G$ is connected a.a.s.
\end{theorem}
\begin{proof}
For all vertices $x,y$ in $X$, we see that $\mathbb{P}(x\sim y) \geq f_n(\rho_X)$, since $d(x,y)\leq \rho_X$ and $f_n$ is decreasing. By hypothesis, $\frac{n}{\log n} f_n(\rho_X) > 1+\epsilon$ for $n$ sufficiently large, so by Lemma \ref{T:domconn}, $G$ is connected a.a.s.
\end{proof}
We note that the above theorem applies to traditional Waxman models, in which $f_n=f$ is fixed, since then $f_n(\rho_X)=f(\rho_X)$ is constant. Beyond this, the theorem has no further consequence for traditional Waxman models. Indeed, the traditional Waxman model becomes locally quite dense as $n$ grows, and it is for this reason that we depart from it, allowing the functions $f_n$ and the metric spaces themselves to change with $n$.
\section{Connectivity in $G(X_n, f_n)$}
In this section, we prove Theorems \ref{T:conn}, \ref{T:disconn}, and \ref{T:main}, regarding the a.a.s.\ connectivity of random distance graphs having as their vertex sets nested connected finite metric spaces $(X_n,\rho)$ with finite diameter.
We begin with the two most general theorems, namely, Theorems \ref{T:conn} and \ref{T:disconn}. We note that Theorem \ref{T:conn} relies almost entirely on Lemma \ref{T:domconn}. Throughout this section, we assume that $f_n$ is a monotonically decreasing function for each $n$.
\begin{proof}[Proof of Theorem \ref{T:conn}]
Let $H=G(X_n,f_n(\DIAM X_n))$. Then, as $f_n$ is monotonically decreasing, $H$ is an Erd\H{o}s-R\'enyi graph that is dominated by $G_n$ and is a.a.s.\ connected by Theorem \ref{T:Gnpconn}. Therefore $G_n$ is a.a.s.\ connected by Lemma \ref{dominated graphs and connectivity}.
\end{proof}
We now turn to the proof of Theorem \ref{T:disconn}. The argument here is a simplified version of the one that will be used in the case $X_n=L^r_n$.
\begin{proof}[Proof of Theorem \ref{T:disconn}]
Let $v\in V(G_n)$. We note that as $|\{u\in V(G_n)\ | \ \rho(u, v)=d\}|\leq a_n(d)$ and $f_n(d)\leq\frac{|X_n|^{-\alpha}}{a_n(d)\rho_n}$ for all $d>0$, we thus have
\[
\mathbb{E}[\deg_{G_n}(v)]\leq \sum_{d=1}^{\rho_n}a_n(d)f_n(d)\leq\sum_{d=1}^{\rho_n}\frac{|X_n|^{-\alpha}}{\rho_n}=|X_n|^{-\alpha}.
\]
By Markov's inequality,
\[
\mathbb{P}(v\,\textrm{not isolated in}\, G_n)=\mathbb{P}\left(\deg_{G_n} (v)>\frac{1}{2}\right)\leq 2|X_n|^{-\alpha}.
\]
Therefore, the expected number of nonisolated vertices is at most $2|X_n|^{1-\alpha}$. Let $\delta\in(0,1)$ with $1-\delta<\alpha$. By Markov's inequality again,
\[
\mathbb{P}(G_n \,\textrm{has at least}\, |X_n|^\delta\,\textrm{nonisolated vertices})\leq 2|X_n|^{1-\alpha-\delta}.
\]
Therefore, with probability at least $1-2|X_n|^{1-\alpha-\delta}=1-\lilOh{1}$, $G_n$ has at least $|X_n|-|X_n|^\delta=|X_n|(1-|X_n|^{\delta-1})=|X_n|(1-\lilOh{1})$ isolated vertices.
\end{proof}
As noted in the introduction, there is some difference in the size of the two bounds on $f_n$ in these two theorems. It seems likely that tighter restrictions on $a_n(d)$ could improve the second theorem substantially, if one controls the type of metric space permitted. We note also that straightforward generalizations of these theorems can be derived in the case that $X_n$ is not a finite metric space, but instead we take a Waxman-like approach, and choose finitely many vertices from a single metric space with a finite diameter.
\subsection{Proof of Theorem \ref{T:main}}\label{S:mainthm}
In this section we focus our analysis on Theorem \ref{T:main}; that is, the case that $X_n=L_n^r$, the $r$-dimensional integer lattice of width $n$ in each dimension under the $\ell^1$ metric, which we shall denote by $\rho$, and $f_n(d) = \frac{1}{n^\beta d}$ for some $\beta>0$. As the proofs of the two parts of the theorem are substantially different in character, we write them as two separate theorems below. Throughout this section, we shall use the following notation.
Let $\mathbb{Z}^{r}$ denote the $r$-dimensional integer lattice. Write $L_{n}^{r}=\{a \in \mathbb{Z}^r\ | \ 0\leq a_i\leq n-1\hbox{ for all } i\}\subset \mathbb{Z}^r$, the $n\times n\times\dots\times n$ integer lattice in $r$ dimensions. We call $L_{n}^{r}$ the $r$-dimensional lattice of size $n$, and when context makes $n,r$ clear, we write $L=L_{n}^{r}$ and $\mathbb{L}=\mathbb{Z}^r$. Our primary focus will be on the graph $G(L, f_n)$, where $f_n(d) = \frac{1}{n^\beta d}$. We begin with the first statement in Theorem \ref{T:main}, restated below for convenience, whose proof mirrors that of Theorem \ref{T:conn}.
\setcounter{theorem}{3}
\begin{theorem}[Part 1]
Let $L=L_n^r$, and $G=G(L, f_n)$, where $\beta<r-1$. Then $G$ is connected a.a.s.
\end{theorem}
\begin{proof}
Let $u, v\in L$. Note by definition that $\rho(u, v)\leq r(n-1)<rn$, and hence $\mathbb{P}(u\sim v)\geq f_n(rn) = \frac{1}{rn^{1+\beta}}=:p$.
Moreover, $G$ has $n^r$ vertices. Note that $n^rp=n^{r}\frac{1}{rn^{1+\beta}}=\frac{1}{r}n^{r-1-\beta}\gg\log(n^{r})$ when $\beta<r-1$. But then by Lemma \ref{T:domconn}, we immediately have that $G$ is connected a.a.s.
\end{proof}
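As a quick numerical illustration of the growth comparison used in this proof (a sketch, not part of the original argument; the function names are ours):

```python
import math

def edge_prob_lower_bound(n, r, beta):
    # Uniform lower bound p = f_n(rn) = 1/(r n^(1+beta)) on every edge probability
    return 1.0 / (r * n ** (1 + beta))

def growth_ratio(n, r, beta):
    # Ratio (n^r p) / log(n^r); the proof needs this to tend to infinity
    p = edge_prob_lower_bound(n, r, beta)
    return (n ** r) * p / (r * math.log(n))

# With beta < r - 1 the ratio diverges; e.g. r = 3, beta = 1.5 gives r - 1 - beta = 0.5 > 0
ratios = [growth_ratio(n, 3, 1.5) for n in (10, 100, 1000, 10000)]
assert all(b > a for a, b in zip(ratios, ratios[1:]))  # strictly increasing in n
```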
We now turn our attention to the proof of the second half of Theorem \ref{T:main}. Note that it is sufficient to prove that $G$ has $n^r(1-\lilOh{1})$ isolated vertices whenever $\beta>r-1$. To do so, we shall view $G(L, f_n)$ as a subgraph of the infinite graph $G(\mathbb{L}, g_n)$, where \begin{equation}\label{E:gdef}g_n(d)=\left\{\begin{array}{ll}f_n(d) & \hbox{ if }d\leq r(n-1)\\0&\hbox{ otherwise}\end{array}\right. .\end{equation} The structure of the proof is similar to that of the proof of Theorem \ref{T:disconn}; however, we shall be able to develop much more precise estimates on $a_n(d)$ in this case.
To begin, note that $G(\mathbb{L}, g_n)$ is vertex transitive, and hence the expected degree is the same for every vertex. Fix a vertex $v\in L^r_n$, and define $a^{(v)}_{r}(d)$ to be the number of vertices $u$ in $L^r_n$ such that $\rho(u, v)=d$. For $d\leq r(n-1)$, let $a_{r}(d)$ denote the number of vertices $u$ in $\mathbb{Z}^r$ with $\rho(u, v)=d$, and define $a_{r}(d)=0$ for $d>r(n-1)$. By definition, we have $a^{(v)}_r(d)\leq a_r(d)$, and hence we note the following simple observations:
\begin{equation}\label{E:degininfL}\e{\deg_{\mathbb{Z}^r}(v)} = \sum_{d=0}^{r(n-1)} a_r(d)g_n(d),\end{equation}
and\begin{equation}\label{E:deginL} \e{\deg_{L_n^r}(v)} = \sum_{d=0}^{r(n-1)} a^{(v)}_r(d)f_n(d)\leq \e{\deg_{\mathbb{Z}^r}(v)}.\end{equation}
As $a_r(d)$ is independent of the chosen vertex $v$, we thus have a uniform bound on the expected degree of any vertex in $G(L_n^r, f_n)$. In order to make this bound useful, we shall use the following recursive formula for $a_r(d)$. We note that in this formula, we shall take $a_r(0)=1$, as a vertex has exactly one vertex at distance 0 from it, namely, itself.
\begin{lemma}\label{L:ard}
For $d\leq 2n-2$, $a_{2}(d)=4d$, and for $d\leq r(n-1)$,
\[
a_{r+1}(d)=2\left(\sum_{k=0}^{d-1}a_{r}(k)\right)+a_{r}(d).
\]
\end{lemma}
\begin{proof} As noted above, $a_r(d)$ is independent of the vertex $v$; let us suppose that $v=\mathbf{0}$. We take $(b_1, b_2,\ldots, b_{r+1})$ to be a point in $\mathbb{Z}^{r+1}$. Let us consider, then
\begin{eqnarray*}
a_{r+1}(d) &=& \left| \left\{(b_1, b_2, \dots, b_{r+1})\ \vert \ \sum |b_i| = d\right\} \right|\\
& = & \sum_{k=0}^d \left| \left\{(b_1, b_2, \dots, b_{r+1})\ | \ \sum |b_i|=d \hbox{ and }|b_{r+1}|=k\right\}\right|.
\end{eqnarray*}
That is to say, we can view $\mathbb{Z}^{r+1}$ as an infinite stack of copies of $\mathbb{Z}^r$, arrayed along the $(r+1)^{\textrm {st}}$ axis. To calculate $a_{r+1}(d)$, we then simply add up the values of $a_{r}(k)$ contributed from each copy. Note that if $|b_{r+1}|=k\neq 0$, we have
\[\left|\left\{(b_1, b_2, \dots, b_{r+1})\ | \ \sum |b_i|=d \hbox{ and }|b_{r+1}|=k\right\}\right|= 2\left|\left\{(b_1, b_2, \dots, b_r)\ | \ \sum |b_i|=d-k\right\}\right|,\]
where the 2 is to accommodate the duplication for $b_{r+1}=\pm k$. The case that $k=0$ is identical, without the factor of two.
Together with the above, we thus obtain
\begin{eqnarray*}
a_{r+1}(d) & = & \sum_{k=0}^d \left| \left\{(b_1, b_2, \dots, b_{r+1})\ | \ \sum |b_i|=d \hbox{ and }|b_{r+1}|=k\right\}\right|\\
& = & \sum_{k=1}^d 2\left|\left\{(b_1, b_2, \dots, b_r)\ | \ \sum |b_i|=d-k\right\}\right| + \left|\left\{(b_1, b_2, \dots, b_r)\ | \ \sum |b_i|=d\right\}\right|\\
& = & \sum_{k=1}^d 2a_r(d-k) + a_r(d).
\end{eqnarray*}
Reindexing this sum yields the stated result.
For the case that $r=2$, note that we can apply the above calculation to obtain that for $d\leq 2(n-1)=2n-2$,
\[ a_2(d) = 2\sum_{k=0}^{d-1} a_1(k) + a_1(d).\]
Note that in dimension 1, there are precisely two vertices at distance $k$ for any positive $k$, and one vertex at distance 0. Hence, we have
\[a_2(d) = 2(1 + 2(d-1))+2 = 4d.\]
\end{proof}
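The closed form $a_2(d)=4d$ and the recursion above can be cross-checked against a brute-force count of lattice points (a small verification script; the function names are ours):

```python
from functools import lru_cache
from itertools import product

def a_bruteforce(r, d):
    # Count points of Z^r at l^1-distance exactly d from the origin
    if d == 0:
        return 1
    return sum(1 for b in product(range(-d, d + 1), repeat=r)
               if sum(abs(x) for x in b) == d)

@lru_cache(maxsize=None)
def a(r, d):
    # a_r(d) via the recursion of Lemma L:ard, seeded with the r = 1 count
    if d == 0:
        return 1
    if r == 1:
        return 2
    return 2 * sum(a(r - 1, k) for k in range(d)) + a(r - 1, d)

assert all(a(2, d) == 4 * d for d in range(1, 10))
assert all(a(r, d) == a_bruteforce(r, d) for r in range(1, 5) for d in range(7))
```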
\begin{lemma}\label{expecteddegree}
Let $H_n^{r}=G(\mathbb{Z}^{r},g_n)$, where $g_n$ is as in Equation \eqref{E:gdef}. Fix $v\in V({H_n^r})$. Then for all $r\geq 2$,
\[
\e{\deg_{H_n^{r}}(v)} = \bigOh{n^{r-1-\beta}}.
\]
\end{lemma}
\begin{proof}
We work by induction on $r$. For simplicity of notation, we write $H_n^r$ as $H$ or $H^r$ when $n$ is clear.
First, suppose $r=2$. By Lemma \ref{L:ard} and Equation \eqref{E:degininfL}, we thus have
\[ \e{\deg_{H}(v)} = \sum_{d=1}^{2(n-1)}a_2(d)g_n(d) = \sum_{d=1}^{2(n-1)}4d\left(\frac{1}{n^\beta d}\right) = 4(2n-2)\frac{1}{n^\beta}<8n^{1-\beta}.\]
Hence, the case that $r=2$ is established. Now, for induction, suppose that the result holds for $r$. Note by Lemma \ref{L:ard} that for any vertex $v\in V(H^{r+1})$, we have
\begin{align*}
\e{\deg_{H^{r+1}}(v)} &= \sum_{d=1}^{(r+1)(n-1)} a_{r+1}(d)g_n(d)\\
&= \frac{1}{n^\beta} \sum_{d=1}^{(r+1)(n-1)} \frac{2\left( \sum_{k=0}^{d-1} a_r(k) \right) + a_r(d)}{d}\\
&= \frac{1}{n^\beta} \sum_{d=1}^{(r+1)(n-1)} \frac{a_r(d)}{d} + \frac{1}{n^\beta} \sum_{d=1}^{(r+1)(n-1)} \frac{2}{d} \sum_{k=0}^{d-1} a_r(k).
\end{align*}
For the first term, notice that $a_r(d)=0$ by definition if $d>r(n-1)$, and hence
\begin{equation}\label{E:firstterm} \frac{1}{n^\beta}\sum_{d=1}^{(r+1)(n-1)} \frac{a_r(d)}{d} = \frac{1}{n^\beta} \sum_{d=1}^{r(n-1)} \frac{a_r(d)}{d} = \e{\deg_{H^r}(v)}= \bigOh{n^{r-1-\beta}}, \end{equation}
by the inductive hypothesis.
For the second term, we may change the order of summation to obtain
\[ \frac{1}{n^\beta} \sum_{d=1}^{(r+1)(n-1)} \frac{2}{d} \sum_{k=0}^{d-1} a_r(k) = \frac{1}{n^\beta}\left(2\sum_{d=1}^{(r+1)(n-1)}\frac{1}{d}+ \sum_{k=1}^{(r+1)(n-1)-1} \frac{a_r(k)}{k} \sum_{d=k+1}^{(r+1)(n-1)} \frac{2k}{d}\right).\]
The first term corresponds to the case that $k=0$, the second to all other values of $k$. Note that for the first term, we have
\begin{equation*}
\frac{1}{n^\beta}\sum_{d=1}^{(r+1)(n-1)}\frac{2}{d}\leq \frac{1}{n^\beta}2(r+1)(n-1) = \bigOh{n^{1-\beta}} = \bigOh{n^{r-1-\beta}},
\end{equation*}
since $r\geq 2$.
For the second term, we have $\frac{2k}{d}\leq \frac{2k}{k+1}\leq 2$ for all $k>0$. Further, we can apply the property that $a_r(k)=0$ if $k>r(n-1)$, and we thus have
\begin{eqnarray} \nonumber\frac{1}{n^\beta} \sum_{k=1}^{(r+1)(n-1)-1} \frac{a_r(k)}{k} \sum_{d=k+1}^{(r+1)(n-1)} \frac{2k}{d} &\leq& \frac{1}{n^\beta}\sum_{k=1}^{r(n-1)}\frac{a_r(k)}{k}\sum_{d={k+1}}^{(r+1)(n-1)}2\\
\nonumber & \leq & 2((r+1)(n-1)-1)\frac{1}{n^\beta}\sum_{k=1}^{r(n-1)}\frac{a_r(k)}{k}\\
\nonumber & = & 2(n(r+1)-(r+2))\bigOh{n^{r-1-\beta}}\\
\label{E:secondterm}& = & \bigOh{n^{r-\beta}}.
\end{eqnarray}
Taking Equations \eqref{E:firstterm} and \eqref{E:secondterm} together, we obtain
\[\e{\deg_{H^{r+1}}(v)} = \bigOh{n^{r-1-\beta}} +\bigOh{n^{r-\beta}} =\bigOh{n^{r-\beta}},\] as desired.
\end{proof}
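Using the recursion of Lemma \ref{L:ard}, the exact expected degree can be computed and compared against the claimed $\bigOh{n^{r-1-\beta}}$ growth (a numerical sanity check, not part of the proof; the function names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(r, d):
    # a_r(d): points of Z^r at l^1-distance d from a fixed vertex (Lemma L:ard)
    if d == 0:
        return 1
    if r == 1:
        return 2
    return 2 * sum(a(r - 1, k) for k in range(d)) + a(r - 1, d)

def expected_degree(r, n, beta):
    # Exact E[deg_{H_n^r}(v)] = sum_{d=1}^{r(n-1)} a_r(d) * g_n(d), g_n(d) = 1/(n^beta d)
    return sum(a(r, d) / (n ** beta * d) for d in range(1, r * (n - 1) + 1))

# The ratio E[deg] / n^(r-1-beta) should remain bounded as n grows
for r, beta in [(2, 1.0), (3, 1.0), (3, 2.5)]:
    ratios = [expected_degree(r, n, beta) / n ** (r - 1 - beta) for n in (8, 16, 32, 64)]
    assert max(ratios) < 10 * r ** 2
```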
\setcounter{theorem}{3}
\begin{theorem}[Part 2]
Let $G=G(L_n^r,f_n)$ with $r$ fixed and $f_n(d)=\frac{1}{n^{\beta}d}$. If $\beta>r-1$, then there exists $\epsilon>0$ such that with probability at least $1- n^{-\epsilon}$, $G$ has $n^r(1-\lilOh{1})$ isolated vertices.
\end{theorem}
\setcounter{theorem}{7}
\begin{proof}
Let $G=G(L_n^r, f_n)$, where $\beta>r-1$ and $r\geq 2$, and let $H=G(\mathbb{Z}^r, g_n)$. By Lemma \ref{expecteddegree}, we thus have that there exists some constant $c$ such that, for any vertex $v\in V(G)$,
\[\e{\deg_G(v)} \leq \e{\deg_H(v)}\leq cn^{r-1-\beta}.\]
Thus, by Markov's Inequality, we have that
\[\mathbb{P}(v\hbox{ is not isolated in }G ) =\mathbb{P}\left(\deg_G(v)>\frac{1}{2}\right)\leq 2cn^{r-1-\beta}.\] Hence, the expected number of nonisolated vertices in $G$ is at most $n^r(2cn^{r-1-\beta})=2cn^{2r-1-\beta}$. By Markov's inequality again, for any $\epsilon>0$, we have
\[\mathbb{P}\left(G\hbox{ has at least }2cn^{2r-1-\beta+\epsilon}\hbox{ nonisolated vertices}\right)\leq n^{-\epsilon}=\lilOh{1}.\]
Take $\epsilon=\frac{-r+\beta+1}{2}>0$. Note then that as $r-\beta-1<0$, we have $r-\beta-1+\epsilon=\frac{r-\beta-1}{2}<0$. Thus, we have that with probability at least $1-n^{-\epsilon}=1-\lilOh{1}$, $G$ has at least
\[n^r-2cn^{2r-1-\beta+\epsilon}=n^r\left(1-2cn^{\frac{r-\beta-1}{2}}\right) = n^r(1-\lilOh{1})\]
isolated vertices, as desired.
\end{proof}
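The exponent arithmetic in the final step can be verified symbolically (a sympy-based check of ours, not part of the proof):

```python
import sympy as sp

r, beta = sp.symbols('r beta', positive=True)
eps = (beta + 1 - r) / 2  # the epsilon chosen in the proof

# Exponent of n in the nonisolated-vertex bound after factoring out n^r
leftover = (2 * r - 1 - beta + eps) - r
assert sp.simplify(leftover - (r - beta - 1) / 2) == 0

# The leftover exponent is negative precisely when beta > r - 1
assert leftover.subs({r: 3, beta: sp.Rational(5, 2)}) == -sp.Rational(1, 4)
```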
\section{Expected degree in $G(L_n^2, f_n)$}\label{S:degreetwo}
For completeness we include an exact analysis of the expected degree of a vertex in $G=G(L_n^2,f_n)$ and determine $a_2^{(v)}(d)$ exactly. We do so by explicitly counting the number of vertices at distance $d$ from a given vertex $v=(d_1, d_2)\in L_n^2$.
Throughout this section, we shall keep $v$ fixed as $(d_1, d_2)$, and hence we will suppress the superscript $(v)$ and simply write $a_2(d)$ in place of $a_2^{(v)}(d)$. Likewise, we shall restrict to working in the $n\times n$ integer lattice, which we shall denote simply by $L$.
Fix a distance $d\leq 2(n-1)$. Recall that from Lemma \ref{L:ard}, in $\mathbb{Z}^2$, there are $4d$ vertices at distance $d$ from $v$. Hence, we need only determine how many of these vertices are in fact members of $L$.
Let $S_d(v)$ be a square of side length $2d$ centered at $v$. We note that not all vertices in $S_d(v)$ will be within distance $d$ of $v$; however, all vertices at distance precisely $d$ from $v$ are contained in $S_d(v)$. By considering the corners of the square $S_d(v)$, we thus have that if \begin{equation}\label{Conditions}d_1-d\geq 0, d_2-d\geq 0, d_1+d\leq n-1, \hbox{ and }d_2+d\leq n-1,\end{equation} then $a_2(d)=4d$.
If these four conditions are not all met, then we have that $S_d(v) \cap (\mathbb{Z}^2\backslash L)\neq \emptyset$; our main task then is to count how many vertices at distance $d$ from $v$ lie outside of $L$.
First, consider the case that only one of these inequalities fails; without loss of generality, suppose that $d_1-d<0$. This case is illustrated in Figure \ref{firstcase}. Let $c=(-1, d_2)$, and note that $\rho(c, v)=d_1+1$. Note that if $u=(x, y)$ is a vertex at distance $d$ from $v$, with $u\notin L$, then we have $x<0$, so that $u$ is obtained from $c$ by taking $k$ steps left and $d-k-d_1-1$ steps either up or down. Hence, there will be $1+2(d-d_1-1)$ such vertices, where we obtain 1 vertex for the case that $k=d-d_1-1$ and 2 vertices (corresponding to steps up or down) in all other cases.
\begin{figure}[htp]
\includegraphics[width=.4\textwidth]{Figure1.pdf}
\caption{An illustration of the case that $d_1-d<0$, but all other conditions in \eqref{Conditions} are met. Here, the shaded region represents $S_d(v)$, and we see that $S_d(v)$ intersects $\mathbb{Z}^2\backslash L$ only on one of the four sides.
}\label{firstcase}
\end{figure}
Hence, in the case that $d>d_1$, and all other conditions of \eqref{Conditions} are met, we have that $a_2(d) = 4d-2(d-d_1-1)-1=2d+2d_1+1$.
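This single-boundary count is easy to confirm by brute force (a quick check; the helper name is ours). The exhaustive count of in-lattice vertices at distance $d$ comes out to $2d+2d_1+1$:

```python
def count_at_distance(n, v, d):
    # Brute-force count of u in the n x n lattice at l^1-distance exactly d from v
    d1, d2 = v
    return sum(1 for x in range(n) for y in range(n)
               if abs(x - d1) + abs(y - d2) == d)

# Cases where only the condition d1 - d >= 0 fails: the count is 2d + 2*d1 + 1
n = 20
for d1, d2, d in [(2, 10, 5), (0, 9, 4), (3, 8, 6)]:
    assert d > d1 and d2 - d >= 0 and d1 + d <= n - 1 and d2 + d <= n - 1
    assert count_at_distance(n, (d1, d2), d) == 2 * d + 2 * d1 + 1
```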
In all other cases, we shall apply the same technique. We thus need only determine the number of vertices that are double counted by this technique. Without loss of generality, we shall consider this double count only for the case that $d>d_1$ and $d>d_2$; all other cases will be symmetric. This situation is illustrated in Figure \ref{secondcase}.
\begin{figure}[htp]
\includegraphics[width=.4\textwidth]{Figure2.pdf}
\caption{An illustration of the case that $d_1-d<0$ and $d_2-d<0$, but all other conditions in \eqref{Conditions} are met. Here, the shaded regions represent $S_d(v)$, and we see that $S_d(v)$ intersects $\mathbb{Z}^2\backslash L$ on two of the four sides. The blue shaded region represents vertices that will be double counted by the technique used in the first case.
}\label{secondcase}
\end{figure}
Notice that here we need to count the number of vertices $u=(x, y)$ such that $\rho(u, v)=d$ and $x<0$, $y<0$. Notice that $\rho(\mathbf{0}, v)=d_1+d_2$, and hence any such vertex $u$ has $\rho(\mathbf{0}, u)=d-d_1-d_2$ (we note also here that if $d_1+d_2\geq d$, there is nothing to count). Note that by symmetry, exactly $\frac{1}{4}$ of the vertices at this distance from the origin, excluding the axes, shall occur in the blue shaded region shown in Figure \ref{secondcase}. Excluding vertices on the axes, there are $4(d-d_1-d_2-1)$ such vertices; hence the number of such vertices with both $x<0$ and $y<0$ is precisely $d-d_1-d_2-1$.
Combining these results and applying symmetry, we thus obtain the following theorem.
\begin{theorem}
Let $\delta_x = 1$ if $x<0$ and $0$ otherwise. Then
\begin{eqnarray*}
a_2(d)&=&4d-\delta_{d_1-d}(2(d-d_1)-1) - \delta_{d_2-d}(2(d-d_2)-1) - \delta_{n-1-d_1-d}(2(d_1+d-n+1)-1)\\ &&- \delta_{n-1-d_2-d}(2(d_2+d-n+1)-1) + \delta_{d_1+d_2-d}(d-d_1-d_2-1) + \delta_{d_1+n-1-d_2-d}(d-d_1-(n-1)+d_2-1)\\
&& + \delta_{d_2+n-1-d_1-d}(d-d_2-(n-1)+d_1-1)+ \delta_{2(n-1)-d_1-d_2-d}(d-2(n-1)+d_1+d_2-1).
\end{eqnarray*}
\end{theorem}
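The formula can be validated exhaustively on small lattices (a brute-force check; the helper names are ours, and the boundary corrections are taken with constant term $-1$, which is what the exhaustive count supports):

```python
def count_at_distance(n, v, d):
    # Brute-force count of lattice points of L at l^1-distance d from v
    d1, d2 = v
    return sum(1 for x in range(n) for y in range(n)
               if abs(x - d1) + abs(y - d2) == d)

def a2_formula(n, v, d):
    # a_2(d) for v = (d1, d2) in the n x n lattice, per the theorem above
    d1, d2 = v
    delta = lambda x: 1 if x < 0 else 0
    m = n - 1
    return (4 * d
            - delta(d1 - d) * (2 * (d - d1) - 1)
            - delta(d2 - d) * (2 * (d - d2) - 1)
            - delta(m - d1 - d) * (2 * (d1 + d - m) - 1)
            - delta(m - d2 - d) * (2 * (d2 + d - m) - 1)
            + delta(d1 + d2 - d) * (d - d1 - d2 - 1)
            + delta(d1 + m - d2 - d) * (d - d1 - m + d2 - 1)
            + delta(d2 + m - d1 - d) * (d - d2 - m + d1 - 1)
            + delta(2 * m - d1 - d2 - d) * (d - 2 * m + d1 + d2 - 1))

for n in (4, 5, 7):
    for d1 in range(n):
        for d2 in range(n):
            for d in range(1, 2 * (n - 1) + 1):
                assert a2_formula(n, (d1, d2), d) == count_at_distance(n, (d1, d2), d)
```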
\section{Conclusions}
Although the traditional Waxman graph has been widely used in some areas of social science, its mathematical features have to date not been studied in detail. Here, we find that as a network model, the Waxman graph has some deficiencies, particularly in its connectivity structure, and hence it may be more reasonable, and perhaps not more difficult, to replace this model with a model as proposed herein.
In addition, further study on the structure of a random distance graph as described herein, for which vertices are chosen randomly from the underlying metric space $X$, would be an interesting future direction for this research. Moreover, in the specific case studied in Theorem \ref{T:main}, it would be interesting to determine if a more nuanced choice of $\beta$, perhaps dependent on $n$, might yield the typical ``double-jump'' behavior for random graphs.
\section{Acknowledgements}
The authors are grateful to Ryan Dingman for his contribution to the initial stages of development of this project and its results, and to Toby Johnson for some useful discussion on the traditional Waxman model.
\bibliographystyle{siam}
\section{#1} \setcounter{subsection}{1}}
\renewcommand{\thesection}{\arabic{section}}
\renewcommand{\thesubsection}{\thesection.\arabic{subsection}}
\setcounter{secnumdepth}{1}
\date \today
\begin{document}
\title{Quantization of general linear electrodynamics}
\author{Sergio Rivera}
\author{Frederic P. Schuller}
\address{Albert Einstein Institute,\\ Max Planck Institute for Gravitational Physics, Am M\"uhlenberg 1, 14476 Potsdam, Germany}
\begin{abstract}
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength two-form and induction two-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a bi-refringent linear optical medium.
\end{abstract}
\maketitle
\section{Introduction}
Classical electromagnetism can be formulated on much more general optical backgrounds than the familiar ones described in terms of Lorentzian manifolds. From the point of view of electrodynamics, this is because one merely needs a constitutive law that links the electromagnetic field strength two-form $F$ with the induction two-form density $H$,
and thus closes the relations
\begin{equation}
(d F)_{\alpha \beta \gamma} = 0, \qquad (d H)_{\alpha \beta \gamma} = \epsilon_{\alpha \beta \gamma \delta} j^\delta\,,
\end{equation}
which any electromagnetic theory featuring charge conservation and no magnetic monopoles in four dimensions must satisfy in the presence of a current vector field density $j$. This point has been made most prominently and lucidly by \cite{hehl2003}. Even if one restricts attention to linear constitutive laws \footnote{Born-Infeld theory presents the most prominent example of electrodynamics with a non-linear constitutive relation between the induction and field strength.},
the resulting electrodynamic theories will generically feature birefringence, meaning that distinguished polarizations of light will travel at different speeds. Now the most general action for an electromagnetic gauge potential that results in a linear constitutive law, and which we will carefully quantize in this paper, is
\begin{equation}
\label{areametricaction}
S[A,G]=-\frac{1}{8}\int \, d^4x\, \omega_G \left[ F_{\alpha \beta} F_{ \gamma \delta}\, G^{\alpha \beta \gamma \delta} + j^\alpha A_\alpha\right]\,,
\end{equation}
where $G$ is a smooth covariant rank four tensor field with the symmetries $G_{\alpha \beta \gamma \delta} = G_{\gamma \delta \alpha \beta}$ and $G_{\alpha \beta \gamma \delta} = -G_{\beta \alpha \gamma \delta}$, and which is invertible in the sense that there is a smooth contravariant tensor field $G^{\alpha \beta \gamma \delta}$
so that $G^{\alpha \beta \rho \sigma} G_{\rho \sigma \gamma \delta} = 2(\delta ^\alpha _\gamma \delta ^\beta _\delta - \delta^\alpha_\delta \delta ^\beta _\gamma )$ and there is a well-defined volume form $\omega_G$ for such area metric tensors \cite{punzi2007}. The birefringence of such general linear electrodynamics is encoded in its dispersion relation, or equivalently the causal structure, of the associated field equations \cite{raetzel2010}. This dispersion relation is known \cite{Hehl2002} to be of higher polynomial order, and indeed the central challenge faced in this paper is to properly deal with this fact, both in the classical and quantum analysis.
The importance of understanding Maxwell theory on such general linear backgrounds is that the latter comprehensively describe all linear optical backgrounds ranging from fundamental spacetime geometries beyond Lorentzian geometry \cite{schuller2005,schuller2006,punzi2009,schuller2010,raetzel2010} over the effective spacetime structure seen by photons to first order quantum corrections in a curved Lorentzian spacetime \cite{drummond1980} to all non-dissipative linear optical media available in the laboratory \cite{schuller2010}.
The present article develops the canonical quantization of these most general linear electrodynamics from first principles, and arrives at an explicit calculation of the quantum vacuum of the theory. We show that the related Casimir effect detects deviations from a non-birefringent background with an amplification which in principle is limited only by technological constraints.
Arriving at these results requires special care when obtaining the Hamiltonian formulation of the classical theory that precedes the actual quantization. While quite generally the Dirac-Bergmann quantization procedure of course also applies to these gauge field dynamics, the key issue is the question of which hypersurfaces provide viable initial data surfaces on which the canonical phase space variables can be defined and evolved by the Hamiltonian. It is precisely this question that makes the problem of quantization of the dynamics (\ref{areametricaction}) so subtle, and requires the conceptually robust understanding of its causal structure developed in \cite{raetzel2010} and concisely summarized in section \ref{causal_structure}. Only when using the insights gained there, can the formulation of the Hamiltonian picture in section \ref{sec_hamiltonian} and the canonical quantization in section \ref{sec_quantization} proceed as usual, based on the derivation of the Dirac brackets in section \ref{sec_dirac} and the diagonalization of the Hamiltonian in section \ref{sec_diagonalization}, which is particularly simple for the area metrics in a neighborhood of Lorentzian metric geometries, as shown in section \ref{sec_classI}. However, having gone through the laborious quantization procedure, one is rewarded in section \ref{sec_casimir} with the said method to measure deviations from a metric-induced background through the Casimir effect in particular, and a demonstration of how to quantize field theories with higher-order polynomial dispersion relations \cite{ling2006,barcelo2007,rinaldi2007,garattini2010,bazo2009,gregg2009,sindoni2008,sindoni2009,sindoni2009-2,chang2008,laemmerzahl2009,laemmerzahl2005,liberati2001,perlick2010,gibbons2007,lukierski1995,glikman2001,girelli2007} in general.
For simplicity, we restrict attention to flat area metric manifolds throughout the paper. Analogous to any other geometric structure on a smooth manifold, an area metric is called flat if there exists a set of charts covering the underlying smooth manifold such that the components of the area metric tensor are constant within each such chart.
\section{Causal structure of linear electrodynamics}
\label{causal_structure}
The Hamiltonian formulation of the dynamics (\ref{areametricaction}), on which the canonical quantization will be built, hinges on several key results of the associated causality theory. Here we summarize the central results of practical importance. For a detailed derivation of these results we refer the reader to \cite{raetzel2010}. A necessary condition for Maxwell theory on a four-dimensional area metric background to be predictive is that the following polynomial \cite{Hehl2002} on covectors $k$,
\begin{equation}
\label{fresnel}
P(k)=-\frac{1}{24} \epsilon_{\rho \sigma \tau \epsilon} \epsilon_{\mu \nu \omega \vartheta} G^{\rho \sigma \mu \alpha} G^{\beta \tau \nu \gamma} G^{\delta \epsilon \omega \vartheta }k_\alpha k_\beta k_\gamma k_\delta\,,
\end{equation}
is hyperbolic. This means that there is at least one covector $h$ with $P(h)\neq 0$ such that for every covector $q$ the polynomials
\begin{equation}
P_{q,h}(\lambda) = P(q + \lambda h)
\end{equation}
have only real roots $\lambda$, in which case $h$ is said to be a hyperbolic covector with respect to $P$.
That hyperbolicity is a necessary criterion for a well-posed initial value problem is a central result of the theory of partial differential equations \cite{garding1964,atiyah1970}. For the flat area metric manifolds discussed here, the hyperbolicity of $P$ is even sufficient for the predictivity of the theory \cite{garding1964}. Initial data, given on a hypersurface whose normal covectors are all hyperbolic with respect to $P$, are then uniquely evolved away from the hypersurface. Thus a Hamiltonian formulation of the dynamics, which deals precisely with the evolution between initial data surfaces, must be based on a foliation $\{t,x^a\}$ of the manifold whose leaves $t=\textrm{const}$ are hypersurfaces with hyperbolic conormal.
However, this requirement needs to be sharpened if one requires that the actual initial data can be collected by observers. The definition of observers now hinges on the so-called dual polynomial $P^\#$ \cite{hassett2007}, which for those $P$ that arise from area metrics by virtue of (\ref{fresnel}) and which admit hyperbolic covectors, can be calculated explicitly and takes the deceptively simple form
\begin{equation}
\label{dualfresnel}
P^\#(X)=-\frac{1}{24}\epsilon^{\rho \sigma \tau \epsilon} \epsilon^{\mu \nu \omega \vartheta} G_{\rho \sigma \mu \alpha} G_{\beta \tau \nu \gamma} G_{\delta \epsilon \omega \vartheta } X^\alpha X^\beta X^\gamma X^\delta\,,
\end{equation}
which sends any tangent vector $X$ to a real number.
That the dual polynomial $P^\#$ can be calculated analytically at all, and takes such a comparatively simple form, is only due to an interplay of the area metric structure underlying it and the necessary hyperbolicity of $P$.
While the hyperbolic covectors of $P$ distinguish admissible initial data surfaces, admissible observers are distinguished by their worldline tangent vectors being hyperbolic vectors of $P^\#$. In other words, the very existence of observers restricts the admissible area metric geometries further to those where also $P^\#$ is hyperbolic.
But exactly this hyperbolicity of $P^\#$ then allows to make a choice of time-orientation, which in turn implies a choice of positive energy. More precisely, a time-orientation is chosen by picking one connected set of all hyperbolic tangent vectors, a so-called hyperbolicity cone $C^\#$, out of the several such connected components defined by $P^\#$. But then the covectors $q$ for which all future-directed observers measure positive energy, $q(X)>0$ for all $X\in C^\#$, themselves constitute a cone $(C^\#)^+$ in cotangent space, which thus deserves to be called the positive energy cone with respect to the chosen time-orientation.
The latter, in turn, selects the (open and convex) cone $C$ of hyperbolic covectors of $P$ that lie within the positive energy cone $(C^\#)^+$. For technical convenience we require, without loss of generality, that $P$ be positive on all of $C$; indeed, from (\ref{fresnel}) it is clear that this always can be arranged for by switching the overall sign of $G$.
Besides the hyperbolicity of $P$ and $P^\#$, one finally needs to require that there exists a time orientation such that any non-zero $P$-null covector lies either in $(C^\#)^+$ or $-(C^\#)^+$. In other words, the energy of any massless momentum is to have a definite sign upon which all observers agree. If and only if these bi-hyperbolicity and energy-distinguishing properties are met is it justified to call the underlying area metric manifold an area metric {\it spacetime}, and we will consider only such manifolds. For an illustration in a typical case, see figure \ref{fig_cones}, and for a detailed exposition of these concepts, see \cite{raetzel2010}.
\begin{figure}
\includegraphics[width=15cm,angle=0]{casimircones.pdf}
\caption{\label{fig_cones} Causal structure of a typical bi-hyperbolic and energy-distinguishing polynomial dispersion relation. Dotted surfaces indicate sets of covectors and vectors that are null with respect to the polynomial $P$ and its dual $P^\#$, respectively. The cone $C$ of hyperbolic covectors and the cone $C^\#$ of observers both arise as hyperbolicity cones. Purely spatial directions, as seen by an observer with worldline tangent $T$, are those vectors annihilated by the preimage of $T$ under the Legendre map $L$.}
\end{figure}
The final piece of technology concerns the duality map between covectors and vectors in an area metric spacetime. The map
\begin{equation}
L: C \to L(C)\,,\qquad L(q) = \frac{P(q,q,q,\cdot)}{P(q,q,q,q)}
\end{equation}
is shown in \cite{raetzel2010} to be a well-defined and invertible Legendre map precisely because $P$ is assumed to be bi-hyperbolic and energy-distinguishing. Spacelike hypersurfaces are meaningfully defined as those having tangent directions that are purely spatial with respect to some observer. More precisely, the spacelike hypersurfaces are those whose conormals lie in $L^{-1}(C^\#)$. But since it can be shown that $L^{-1}(C^\#)$ always lies within $C$, the condition that a hypersurface be spacelike (and thus initial data on it accessible by local observers) further sharpens the condition for a feasible initial data surface for the dynamics (\ref{areametricaction}) we identified before. Thus only a foliation $(t, x^1, x^2, x^3)$ of an area metric spacetime into spacelike hypersurfaces $t=\textrm{const}$ (which then contain all vectors that are annihilated by the covector $L^{-1}(\partial/\partial t)$, see figure \ref{fig_cones}) provides an appropriate temporal-spatial split for the Hamiltonian formulation. For the flat area metric spacetimes considered here, one may further choose the coordinates such that $L^{-1}(\partial/\partial t)$ is conormal to the spacelike hypersurfaces. In other words, one can choose trivial shift and lapse in the flat case.
The reader may find it helpful to get a feel for these seemingly abstract conditions for the special case where the area metric is induced by a metric $g$ by virtue of $G_{\alpha \beta \gamma \delta} = g_{\alpha \gamma}g_{\beta \delta}-g_{\alpha \delta}g_{\beta \gamma}$. Precisely the same conceptual steps force one then to take the metric $g$ to be of Lorentzian signature (otherwise Maxwell theory would not be well-posed). Since in this metric-induced case $P(k)=(g^{\alpha \beta}k_\alpha k_\beta)^2$ as usual, we have $L(C)=C^\#$, and thus one recovers the standard Lorentzian notions of observers and spacelike hypersurfaces. However, the general construction presented before does not derive its justification from this reduction to the metric case. The general treatment rather demonstrates the appropriateness and consistency of the standard Lorentzian definitions from a conceptual point of view.
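The metric-induced reduction of the Fresnel polynomial can be checked numerically from (\ref{fresnel}) (a Python/numpy sketch of ours, not part of the paper; it verifies proportionality of $P(k)$ to the squared light cone rather than any particular normalization):

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four dimensions, epsilon_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    eps[p] = -1.0 if inv % 2 else 1.0

# Area metric induced by the (inverse) Minkowski metric
g = np.diag([1.0, -1.0, -1.0, -1.0])
G = np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g)

# Quartic coefficient tensor of the Fresnel polynomial P(k)
Q = -(1.0 / 24.0) * np.einsum('rste,mnwq,rsma,btnc,dewq->abcd',
                              eps, eps, G, G, G, optimize=True)

def P(k):
    return np.einsum('abcd,a,b,c,d->', Q, k, k, k, k)

def light_cone_sq(k):
    return float(k @ g @ k) ** 2

# P(k) should be proportional to (g^{ab} k_a k_b)^2; fit the constant at one point
c = P(np.array([1.0, 0.0, 0.0, 0.0]))
assert abs(c) > 1e-6
rng = np.random.default_rng(0)
for _ in range(20):
    k = rng.normal(size=4)
    assert abs(P(k) - c * light_cone_sq(k)) < 1e-8 * (1.0 + abs(P(k)))
```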
\section{Hamiltonian formulation}
\label{sec_hamiltonian}
With the appropriate foliation $(t,x^1, x^2, x^3)$ of the area metric spacetime with spacelike leaves for constant time $t$ and conormals given by $L^{-1}(\partial/\partial t)$, as constructed in the previous section, we are now in a position to develop the Hamiltonian formulation of the dynamics encoded in the action (\ref{areametricaction}). For a flat area metric spacetime, one can choose coordinates not only such that the area metric has constant components throughout those charts, but also that additionally the components of the volume form $\omega_G$ featuring in the action are numerically identical to those of the totally antisymmetric Levi-Civita symbol $\epsilon_{\alpha \beta \gamma \delta}$ defined by $\epsilon_{0123}=+1$. In such a coordinate system, we obtain the canonical momenta associated with the field variables $(A_0,A_i)$ from the Lagrangian density $\mathcal{L}$ of the action (\ref{areametricaction}) as
\begin{eqnarray}
\label{canonicalmomenta}
\pi^{0}=\frac{\delta \mathcal L}{\delta (\partial_0 A_0)}&=&0,\\
\nonumber
\pi ^{i} = \frac{\delta \mathcal L}{\delta (\partial_0 A_i)} &=& -G^{0i0j}\partial_0 A_j-G^{0ij0}\partial_j A_0-G^{0ijk}\partial_j A_k\,,
\end{eqnarray}
where here, and for the remainder of the paper, Latin indices range from 1 to 3, while Greek indices continue to range from 0 to 3.
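The computation of the canonical momenta can be verified symbolically for a generic constitutive tensor with the stated symmetries (a sympy sketch of ours, not part of the paper; the source term is dropped since it carries no time derivatives):

```python
import itertools
import sympy as sp

def make_area_metric():
    # Generic G[abcd], antisymmetric within each index pair, symmetric under
    # pair exchange; each independent component gets a fresh symbol
    pairs = [(a, b) for a in range(4) for b in range(4) if a < b]
    comp = {}
    for i, p in enumerate(pairs):
        for j, q in enumerate(pairs):
            if i <= j:
                comp[(p, q)] = sp.Symbol(f'G{p[0]}{p[1]}{q[0]}{q[1]}')
    G = {}
    for a, b, c, d in itertools.product(range(4), repeat=4):
        if a == b or c == d:
            G[a, b, c, d] = sp.Integer(0)
            continue
        sign, p, q = 1, (a, b), (c, d)
        if p[0] > p[1]:
            p, sign = (p[1], p[0]), -sign
        if q[0] > q[1]:
            q, sign = (q[1], q[0]), -sign
        key = (p, q) if (p, q) in comp else (q, p)
        G[a, b, c, d] = sign * comp[key]
    return G

G = make_area_metric()
dA = [[sp.Symbol(f'dA{m}{n}') for n in range(4)] for m in range(4)]  # dA[m][n] ~ partial_m A_n
F = {(a, b): dA[a][b] - dA[b][a] for a in range(4) for b in range(4)}

L = -sp.Rational(1, 8) * sum(F[a, b] * F[c, d] * G[a, b, c, d]
                             for a, b, c, d in itertools.product(range(4), repeat=4))

for i in range(1, 4):
    pi_i = sp.diff(L, dA[0][i])
    claimed = (-sum(G[0, i, 0, j] * dA[0][j] for j in range(1, 4))
               - sum(G[0, i, j, 0] * dA[j][0] for j in range(1, 4))
               - sum(G[0, i, j, k] * dA[j][k] for j in range(1, 4) for k in range(1, 4)))
    assert sp.expand(pi_i - claimed) == 0
assert sp.diff(L, dA[0][0]) == 0  # pi^0 = 0: the primary constraint
```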
In the language of the theory for constrained systems \cite{hanson1976,sundermeyer1982}, we thus identify $\phi_1=\pi^{0}\approx 0$ as a primary constraint of the dynamics. Defining the matrix $M_{ij}$ with the property $M_{ij} G^{0j0k}=\delta^k_i$, whose existence is guaranteed if the differential equations coming from (\ref{areametricaction}) are hyperbolic (see appendix \ref{appendixone}), and using (\ref{canonicalmomenta}) to express $\partial_0 A_i$ in terms of the canonical momenta $\pi^{i}$, we find the total Hamiltonian density
\begin{eqnarray}
\label{ext_ham_1}
\mathcal H&=&-\frac{1}{2}M_{js}\pi^j \pi^s-A_0 \partial_j \pi^j-\pi^j M_{ij}G^{0irk}\partial_r A_k\\ \nonumber
& &+\frac{1}{2} G^{ijkr}\partial_i A_j \partial_k A_r-\frac{1}{2} M_{ir}G^{0ijk} G^{0rmn} \partial_m A_n \partial_j A_k + u_1(x) \pi^0(x).
\end{eqnarray}
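The role of $M$ as the inverse of the principal block $G^{0i0j}$ can be illustrated by a small numerical sketch (not part of the derivation); the sample values for $\alpha,\beta,\gamma$ below are hypothetical placeholders motivated by the normal form discussed in section \ref{sec_classI}:

```python
import numpy as np

# Hypothetical sample for the principal block G^{0i0j}; in the class-I
# normal form discussed later this block is -diag(alpha, beta, gamma)
# with positive alpha, beta, gamma.
alpha, beta, gamma = 1.2, 0.9, 1.5
G_block = -np.diag([alpha, beta, gamma])

# M is defined as the inverse of this block; its existence is guaranteed
# by hyperbolicity of the field equations.
M = np.linalg.inv(G_block)

# Defining property M_{ki} G^{0i0j} = delta_k^j
assert np.allclose(M @ G_block, np.eye(3))
```

For a diagonal block, $M$ is simply $-\mathrm{diag}(1/\alpha,1/\beta,1/\gamma)$; the check goes through unchanged for any invertible block.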
Following the Dirac-Bergmann algorithm \cite{sundermeyer1982} for obtaining the Hamiltonian formulation of systems with constraints, we now compute the commutator $ \{ \pi ^0,\mathcal H\}$. If this commutator is not zero, we need to impose $ \{ \pi ^0,\mathcal H\}\approx 0$ as a secondary constraint, in order to ensure that the primary constraint $\phi_1\approx 0$ is preserved under time evolution. Indeed, one obtains $ \{ \pi ^0,\mathcal H\}=-\partial_j \pi^j$. Thus we impose $\phi_2=\partial_j\pi^j\approx 0$ as a secondary constraint, which must be added to (\ref{ext_ham_1}) with a corresponding Lagrange multiplier. The total Hamiltonian now reads
\begin{equation}
\mathcal H=\mathcal H_0+u_1(x) \pi^0(x) +(u_2(x)-A_0)\partial_j\pi^j,
\end{equation}
with
\begin{equation}
\label{classicalhamiltonian}
\mathcal H_0=-\frac{1}{2}M_{js}\pi^j \pi^s-\pi^j M_{ij}G^{0irk}\partial_r A_k+\frac{1}{2}( G^{mnjk}- M_{ir}G^{0ijk} G^{0rmn} )\partial_m A_n \partial_j A_k.
\end{equation}
Now we find $\{ \phi_2,\mathcal H\}=0$, so that the Dirac-Bergmann algorithm ends here and $\phi_1\approx 0$ and $\phi_2\approx 0$ exhaust the constraints. However, $\{\phi_1(t,\vec x),\phi_2(t,\vec y)\}=0$, so that $\phi_1$ and $\phi_2$ are first class constraints, implying that the multipliers $u_1(x)$ and $u_2(x)$ are completely undetermined. The infinitesimal gauge transformations induced by $(\phi_1,\phi_2)$ on the canonical variables $(A_\alpha,\pi^\alpha)$ are
\begin{eqnarray}
\label{gaugetransformations}
\delta A_\alpha (t,\vec x)&=&\int d^3 y\, \epsilon^I(t,\vec y)\{A_\alpha(t,\vec x),\phi_I(t,\vec y)\}=\epsilon^1(t,\vec x)\delta^0_\alpha-\delta^i_\alpha \partial_i \epsilon^2(t,\vec x)\,,\\
\delta \pi^\alpha (t,\vec x)&=&\int d^3 y\, \epsilon^I(t,\vec y)\{\pi^\alpha(t,\vec x),\phi_I(t,\vec y)\}=0,
\end{eqnarray}
with $I=1,2$ and $\epsilon^I(t,\vec x)$ being the infinitesimal parameters of the transformations. Knowledge of these generators of gauge transformations allows us to identify classical observables of the theory as those functionals that are invariant under gauge transformations. Equivalently, observables commute with the constraints $\{O,\phi_{I}\}\approx 0$. In the present case, it can be checked that the electromagnetic inductions
\begin{eqnarray}
D^a &=& -G^{0a0b} F_{0b}-\frac{1}{2} G^{0abk}F_{bk} \\ \nonumber
&=& -G^{0a0b}\partial_0 A_b-G^{0ab0}\partial_b A_0-G^{0abk}\partial_b A_k\,,\\
H_a&=&-\frac{1}{2} \epsilon_{0abc}\left[ G^{bcm0} F_{m0}+\frac{1}{2} G^{bcmn} F_{mn} \right] \\ \nonumber
& =& -\frac{1}{2} \epsilon_{0abc}\left[ G^{bcm0} (\partial_mA_0-\partial_0A_m)+G^{bcmn}\partial_m A_n\right]\,,
\end{eqnarray}
defined with respect to the chosen foliation of spacetime into spacelike hypersurfaces, indeed commute with the constraints, so that they can be used as observables. Thus we are finally able to write the Hamiltonian (\ref{classicalhamiltonian}) for our system in terms of gauge-invariant observables $D^a$ and $H_a$ as
\begin{equation}
\mathcal H_0=\frac{1}{2} U_{al} D^a D^l+ \frac{1}{2} V^{al} H_a H_l,
\end{equation}
where the matrices $U$ and $V$ are given as
\begin{eqnarray}
U_{al}&=&-M_{al}+\frac{1}{8} T_{pqjk}G^{0sjk} G^{0tpq}M_{s(l}M_{a)t}\\
V^{al}&=&-\frac{1}{8} \epsilon^{0jk(a} \epsilon^{|0|l)pq} T_{pqjk},
\end{eqnarray}
with $T_{pqjk}$ defined such that
\begin{equation}
\label{observ}
(G^{pqmn}-G^{0rpq}G^{0amn}M_{ar})T_{pqtu}=-8 \,\delta^m_{[t} \delta^n_{u]}\,.
\end{equation}
The existence of $T$ is guaranteed due to the invertibility properties of area metrics; indeed it can be written explicitly in terms of the block matrices constituting the area metric tensor, see again appendix \ref{appendixone}.
\section{Gauge fixing and Dirac brackets}\label{sec_dirac}
In order to determine the Dirac brackets associated with our system, one needs to remove the indeterminacy in the Lagrange multipliers by fixing a gauge. This is achieved here by manually imposing two further constraints $\phi_3\approx 0, \phi_4\approx 0$ such that $\textup{det} \{\phi_I(\vec x),\phi_J(\vec y)\} \neq 0$, with $I,J=1,\dots,4$, so that the new set of constraints $\phi_I$ is now of second class.
In our case, the Euler-Lagrange equations for the gauge field $A$ obtained from the action (\ref{areametricaction}) are given by
\begin{equation}
G^{\alpha \beta \gamma \delta}\,\partial_{\beta}\,\partial_{\delta}\, A_\gamma=0,
\end{equation}
which is conveniently split into one temporal equation
\begin{equation}
\label{temporalequation}
G^{0a0b}\,\partial_a\,\partial_b A_0+\left [G^{0abc}\partial_a \, \partial_c-G^{0a0b}\partial_0 \,\partial_a\right]A_b = 0
\end{equation}
and three spatial equations
\begin{equation}
\label{spatialequation}
\left[ G^{0bla}\,\partial_a\,\partial_b - G^{0l0m}\,\partial_0\,\partial_m \right]A_0 +\left[G^{0l0m}\partial_0^2 -2 G^{0(lm)a}\partial_0 \,\partial_a +G^{lamd} \partial_a\,\partial_d\right]A_m =0\,.
\end{equation}
As the third constraint we impose the Glauber gauge
\begin{equation}
\label{glauber_gauge}
\phi_3=A_0(\vec x)-\int d^3 x'\,G(\vec x,\vec x')G^{0abc}\partial'_a\,\partial'_c A_b(\vec x')\approx 0
\end{equation}
with $-G^{0a0b}\,\partial_a\,\partial_b\,G(\vec x,\vec x')=\delta(\vec x-\vec x' )$, or more explicitly,
\begin{equation}
G(\vec x,\vec x')= -\frac{1}{4\pi \sqrt{-M_{ab}(x^a-x'^a)(x^b-x'^b)}}.
\end{equation}
The expression under the square root is non-negative ultimately due to the energy distinguishing property (see appendix \ref{appendixone}). Consistency of the gauge (\ref{glauber_gauge}) with the temporal equation (\ref{temporalequation}) requires imposing the last constraint
\begin{equation}
\phi_4=G^{0a0b}\,\partial_a A_b\approx 0.
\end{equation}
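As a symbolic cross-check (a sketch, not part of the text's derivation), one can verify that the Green function $G(\vec x,\vec x')$ introduced above satisfies the homogeneous equation $-G^{0a0b}\partial_a\partial_b G=0$ away from the source point; the block $G^{0a0b}=-\mathrm{diag}(\alpha,\beta,\gamma)$ is an assumed sample form, for which $M_{ab}=-\mathrm{diag}(1/\alpha,1/\beta,1/\gamma)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
alpha, beta, gamma = sp.symbols('alpha beta gamma', positive=True)

# With G^{0a0b} = -diag(alpha, beta, gamma) (an assumed sample form),
# M_{ab} = -diag(1/alpha, 1/beta, 1/gamma), so the quantity under the
# square root is -M_ab dx^a dx^b = x^2/alpha + y^2/beta + z^2/gamma >= 0.
r2 = x**2/alpha + y**2/beta + z**2/gamma
G = -1/(4*sp.pi*sp.sqrt(r2))

# -G^{0a0b} d_a d_b G = (alpha d_x^2 + beta d_y^2 + gamma d_z^2) G
lhs = alpha*sp.diff(G, x, 2) + beta*sp.diff(G, y, 2) + gamma*sp.diff(G, z, 2)
assert sp.simplify(lhs) == 0   # vanishes away from x = x'
```

The rescaling $x^a \to \sqrt{\alpha_a}\,x^a$ reduces this to the familiar Coulomb Green function of the flat Laplacian.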
In summary, our constraints $\phi_I$ are given by
\begin{equation}
\label{gaugefixedconstraints}
\begin{array}{ll}
\phi_1=\pi^0\approx 0,\quad \quad \quad &\phi_3=A_0(\vec x)-\int d^3 x'\,G(\vec x,\vec x')G^{0abc}\partial'_a\,\partial'_c A_b(\vec x')\approx0,\\
\phi_2=\partial_a \pi^a\approx 0,\quad \quad \quad & \phi_4=G^{0a0b}\,\partial_a\,A_b\approx 0,\\
\end{array}
\end{equation}
and satisfy
\begin{eqnarray}
\{\phi_I(t,\vec x),\phi_J(t,\vec y)\}
&=&\int \dfrac{d^3k}{(2\pi)^3}\left[
\begin{array}{cccc}
0 & 0 & -1 & 0\\
0 & 0 & 0 & -G^{0a0b}\,k_a k_b\\
1 & 0 & 0 & 0\\
0 & G^{0a0b} k_a k_b & 0 &0\\
\end{array}
\right]e^{i \vec k.(\vec x-\vec y)}.
\end{eqnarray}
The matrix above $\{\phi_I(t,\vec x),\phi_J(t,\vec y)\}$ is invertible, so that the constraints $\phi_I$ are now of second class and the gauge freedom is gone. Its inverse $(\{\phi(\vec x),\phi(\vec y)\}^{-1})^{IJ}$, defined through
\begin{equation}
\int\,d^3 y\,\{\phi_I(\vec x),\phi_J(\vec y)\}\, (\{\phi(\vec y),\phi(\vec z)\}^{-1})^{JM}=\delta_I^M\delta(\vec x-\vec z),
\end{equation}
is simply given as
\begin{equation}
\label{inverse_constraint_matrix}
\{\phi_I(t,\vec x),\phi_J(t,\vec y)\}^{-1}=\int \dfrac{d^3k}{(2\pi)^3}
\left[
\begin{array}{cccc}
0 & 0 & 1 & 0\\
0 & 0 & 0 & \dfrac{1}{G^{0a0b}\,k_a k_b}\\
-1 & 0 & 0 & 0\\
0 & -\dfrac{1}{G^{0a0b} k_a k_b} & 0 &0\\
\end{array}
\right]\,e^{i \vec k.(\vec x-\vec y)}.
\end{equation}
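The stated inverse is immediate to verify mode by mode in Fourier space, where, stripping the common kernel $e^{i\vec k.(\vec x-\vec y)}$, the bracket reduces to a plain $4\times 4$ matrix. A minimal numerical sketch, with $g$ standing for the scalar $G^{0a0b}k_ak_b$ (nonzero for $\vec k\neq 0$ by hyperbolicity):

```python
import numpy as np

# Fourier-space kernel of {phi_I, phi_J}; g stands for G^{0a0b} k_a k_b.
g = -2.7   # any nonzero sample value

C = np.array([[0, 0, -1,  0],
              [0, 0,  0, -g],
              [1, 0,  0,  0],
              [0, g,  0,  0]], dtype=float)

# Claimed inverse from the text
Cinv = np.array([[ 0,    0,  1,   0],
                 [ 0,    0,  0, 1/g],
                 [-1,    0,  0,   0],
                 [ 0, -1/g,  0,   0]], dtype=float)

assert np.allclose(C @ Cinv, np.eye(4))
```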
Equipped with equation (\ref{inverse_constraint_matrix}) we can now follow Dirac's procedure and replace the standard Poisson bracket $\{,\}$ by the Dirac bracket $\{,\}_D$, which is defined as
\begin{equation}
\label{diracbracket}
\{A(\vec x),B(\vec y)\}_D=\{A(\vec x),B(\vec y)\}-\int\, d^3 z \,d^3 w\,\{A(\vec x),\phi_I(\vec z)\}(\{\phi(\vec z),\phi(\vec w)\}^{-1})^{IJ}\{\phi_J(\vec w),B(\vec y)\}.
\end{equation}
Thus we arrive at the fundamental Dirac brackets of our system, with respect to which the theory must be quantized
\begin{eqnarray}
\label{fuldameltal_dirac_conmutators}\nonumber
\{A_\alpha(t,\vec x),\pi^\beta(t,\vec y)\}_D&=&\int\,\dfrac{d^3 k}{(2\pi)^3}\left[\delta^\beta_\alpha-\delta^0_\alpha \delta^\beta_0-\dfrac{\delta^m_\alpha \delta^\beta_n \, k_a\,k_m\,G^{0a0n}}{G^{0p0q}k_p\,k_q}-\dfrac{\delta^0_\alpha \delta^\beta_b G^{0abc}\,k_a\, k_c}{G^{0p0q}k_p\,k_q}\right]\,e^{i\vec k.(\vec x-\vec y)}\,,\\
\{A_\alpha(t,\vec x),A_\beta(t,\vec y)\}_D&=&0\,,\\\nonumber
\{\pi^\alpha(t,\vec x),\pi^\beta(t,\vec y)\}_D&=&0\,,
\end{eqnarray}
and the dynamics of the system is simply generated by the Hamilton equations
\begin{eqnarray}\label{weak_dynamics}
\partial_t A_\alpha(t,\vec x)&\approx &\int d^3 y\, \{A_\alpha(t,\vec x),\mathcal H_0(\vec y) \}_D\,,\\\nonumber
\partial_t \pi^\alpha(t,\vec x)&\approx &\int d^3 y\, \{\pi^\alpha(t,\vec x),\mathcal H_0(\vec y) \}_D\,,
\end{eqnarray}
where, due to the use of Dirac brackets, only $\mathcal H_0$ is involved.
\section{Bihyperbolic area metrics close to Lorentzian metrics}\label{sec_classI}
The preceding Hamiltonian analysis and calculation of Dirac brackets made only implicit use of the requirement that the area metric background be bi-hyperbolic and energy-distinguishing, namely in the abstract constructions underlying the definition of spacetime foliations into spacelike leaves. But now we need to explicitly solve the field equations (\ref{spatialequation}) with the gauge imposed by (\ref{glauber_gauge}), and this requires restricting attention to concrete bi-hyperbolic and energy-distinguishing area metric backgrounds. Moreover, for actual calculations it is most convenient to choose a coordinate frame in which the area metric takes a simple normal form. The normal form theory of area metrics in four dimensions has been developed in \cite{schuller2010}, and used in \cite{raetzel2010} to show that the area metric cannot be bi-hyperbolic unless the endomorphism $J$ on the space of two-forms defined through
\begin{equation}
J_{\gamma \delta}{}^{\alpha \beta} = G^{\gamma \delta \mu \nu} \omega_{\mu \nu \alpha \beta}
\end{equation}
has a complex eigenvalue structure (Segr\'e type) of the form $[1\bar 1 1 \bar 1 1 \bar 1]$, $[2\bar 2 1 \bar 1]$, $[3 \bar 3]$, $[1 \bar 1 1 \bar 1 1 1]$, $[2 \bar 2 1 1]$, $[1 \bar 1 11 11]$ or $[11 11 11]$. However, four-dimensional area metrics that are induced by a Lorentzian metric automatically lie in the first class, $[1 \bar 1 1 \bar 1 1 \bar 1]$, and moreover the continuous dependence of the eigenvalues of an endomorphism on the components of a representing matrix implies that any area metric in the neighborhood of such a metric-induced area metric is equally of class $[1 \bar 1 1 \bar 1 1 \bar 1]$. Thus area metrics of immediate phenomenological relevance are clearly those of this first class, and it can be shown that by $GL(4)$ frame transformations these can always be brought to the form
\begin{equation}
\label{classInormalform}
G^{[ab][cd]}=
\left[\begin{array}{cccccc}
-\alpha & 0 & 0 & \rho & 0 & 0\\
0 & -\beta & 0 & 0 & \sigma & 0\\
0 & 0 &-\gamma & 0 & 0 & \tau \\
\rho & 0 & 0 & \alpha \,\,\, & 0 & 0\\
0 & \sigma & 0 & 0 & \beta \,\,\, & 0\\
0 & 0 & \tau & 0 & 0 & \gamma \,\,\,\\
\end{array}\right]\qquad \begin{array}{l}\textrm{for real } \rho, \sigma, \tau\, \textrm{ and } \\ \textrm{real positive } \alpha, \,\beta,\,\gamma\,,\end{array}
\end{equation}
where for notational purposes, $G$ is considered here as a bilinear form on the space of two-forms for convenience, and the representing matrix shown above is with respect to the obvious induced basis in the order $[01], [02], [03], [23], [31], [12]$. The positivity of $\alpha\,,\beta\,,\gamma$ follows from our convention that $P$ is positive on the hyperbolicity cone $C$. If and only if the area metric is induced by a Lorentzian metric, do the real scalars assume the values $\alpha=\beta=\gamma=1$ and $\rho=
\sigma=\tau=0$. So any finite (but not too large) deviation from a Lorentzian metric is encoded in these scalars and the frame that brings about this normal form.
It is straightforward to show that if one chooses $\rho=\sigma=\tau$, the polynomial
\begin{eqnarray}\label{classI_pol}
P(q) &=& \alpha\beta\gamma (q_0^4 + q_1^4 + q_2^4 + q_3^4)\nonumber\\
& & + \alpha(\beta^2+\gamma^2)(q_2^2 q_3^2 - q_0^2 q_1^2)\nonumber\\
& & + \beta(\alpha^2+\gamma^2)(q_1^2q_3^2-q_0^2 q_2^2)\nonumber\\
& & + \gamma(\alpha^2+\beta^2)(q_1^2q_2^2-q_0^2 q_3^2)
\end{eqnarray}
associated with an area metric of this class is hyperbolic with respect to $h=L^{-1}(\partial/\partial t)$. This is most efficiently verified in the normal frame by observing that for $h=(1,0,0,0)$, the real symmetric Hankel matrix $H_1(P_{q,h})$ associated with the polynomial $P_{q,h}$ is positive definite for any covector $q$, which implies that $P$ is hyperbolic \cite{gantmacher1959,basu2006}. The dual polynomial $P^\#$ takes precisely the same shape in the normal form frame employed here, and thus is also seen to be hyperbolic. Finally, the energy-distinguishing property is also easily checked.
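The hyperbolicity claim can also be probed numerically: viewed as a quadratic in $\omega^2$, the quartic (\ref{classI_pol}) should yield only real frequency roots for arbitrary real spatial momenta. The following sketch (with arbitrary sample values for $\alpha,\beta,\gamma$) checks this for a batch of random momenta:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 1.3, 0.8, 1.1          # sample values for alpha, beta, gamma > 0

def P_coeffs(p1, p2, p3):
    """Coefficients A, B, C of P(w, p) = A w^4 + B w^2 + C, read off (classI_pol)."""
    A = a*b*c
    B = -(a*(b**2 + c**2)*p1**2 + b*(a**2 + c**2)*p2**2 + c*(a**2 + b**2)*p3**2)
    C = (a*b*c*(p1**4 + p2**4 + p3**4) + a*(b**2 + c**2)*p2**2*p3**2
         + b*(a**2 + c**2)*p1**2*p3**2 + c*(a**2 + b**2)*p1**2*p2**2)
    return A, B, C

for _ in range(1000):
    p1, p2, p3 = rng.normal(size=3)
    w2 = np.roots(P_coeffs(p1, p2, p3))   # the two roots in w**2
    # hyperbolicity: all four frequencies +/- sqrt(w2) must be real,
    # i.e. both roots in w**2 are real and non-negative
    assert np.max(np.abs(w2.imag)) < 1e-8
    assert np.all(w2.real >= -1e-9)
```

Note that the roots automatically come in pairs $\pm\omega$ and depend only on the squares of the momentum components, as used in the mode expansion below.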
Finally, note that for area metrics with polynomial (\ref{classI_pol}), we have
\begin{equation}
\label{specialchoice}
G^{0abc} = \rho\, \epsilon^{0abc}
\end{equation}
in this normal form frame, which significantly simplifies the field equations (\ref{temporalequation}) and (\ref{spatialequation}) whose solutions we will now be able to obtain, orthogonalize appropriately, and thus obtain a diagonalization of the Hamiltonian.
It is worth noting that the hyperbolic polynomial (\ref{classI_pol}) only factorizes if at least two of the scalars $\alpha,\, \beta,\,\gamma$ coincide, so that area metrics with a bi-metric dispersion relation represent merely a subset of measure zero within the set of area metrics neighboring Lorentzian metrics. Indeed, for the generic case of mutually different scalars, the polynomial $P$ is irreducible. Thus theories trying to account for birefringence in linear electrodynamics by some sort of bi-metric geometry fail to parametrize almost all relevant geometries near Lorentzian metric ones.
\section{Diagonalization of the Hamiltonian}\label{sec_diagonalization}
In order to diagonalize the Hamiltonian (\ref{classicalhamiltonian}) for bi-hyperbolic and energy-distinguishing general linear electrodynamics with a higher-order polynomial dispersion relation given by (\ref{classI_pol}), we first need to find the solutions of the classical field equations (\ref{temporalequation}) and (\ref{spatialequation}). After choosing the Glauber gauge (\ref{gaugefixedconstraints}), the first equation is trivially satisfied, and the second one reduces to
\begin{equation}
\label{final_field_eq}
\left[G^{0l0m}\partial_0^2+G^{lamd} \partial_a\,\partial_d\right]A_m(t,\vec x) =0\,,
\end{equation}
due to (\ref{specialchoice}). Moreover, these field equations are completely equivalent to the field equations arising from (\ref{weak_dynamics}). Specifically, we look for plane wave solutions
\begin{equation}
\label{ansatz}
A_a(t,\vec x)=\int \frac{d^3 p}{(2\pi)^3} e^{-i(\omega t+\vec p.\vec x)}f_a(\vec p),
\end{equation}
so that introducing (\ref{ansatz}) into (\ref{final_field_eq}) we observe that the equation
\begin{equation}\label{eigenvalueequation}
\left[G^{0l0m}(\omega)^2+G^{lamd} p_a\,p_d\right]f_m(\vec p) =0
\end{equation}
must be satisfied if (\ref{ansatz}) is indeed a solution. Equation (\ref{eigenvalueequation}) has non-trivial solutions only if \begin{equation}
\label{polyn_def}
\textup{det}\left(G^{0l0m}(\omega)^2+G^{lamd} p_a\,p_d\right)=0\,.
\end{equation}
The frequencies $\omega$ for which this is the case are precisely the solutions of $P(\omega,\vec p)=0$, compare (\ref{fresnel_det}). From the energy distinguishing condition of an area metric spacetime it follows that these frequencies are non-zero unless $\vec p=0$, and they are real because of the hyperbolicity of $P$.
It is then further immediate from (\ref{classI_pol}) that if some (without loss of generality positive) $\omega(\vec{p})$ is a solution for some given $\vec{p}$ in our normal frame, then so is $-\omega(\vec{p})$, and that $\omega(\vec{p})=\omega(-\vec{p})$. Thus we have four non-zero energy solutions $\pm\omega^I(\vec{p})$ labeled by $I=1,2$, two positive and two negative ones, for each spatial momentum $\vec{p}$.
Therefore any solution of the field equations for the real gauge potential $A$ can be expanded as
\begin{equation}
\label{generalsolution}
A_a(t,\vec x)= \sum_{I=1,2} \int_{N^{\textup{smooth}}} \frac{d^3 p}{(2\pi)^3}\left( e^{-i(\omega^I(\vec p) t+\vec p.\vec x)}f^I_a(\vec p)+e^{i(\omega^I(\vec p) t+\vec p.\vec x)}f^{*I}_a(\vec p)\right)\,,
\end{equation}
where strictly speaking, the integral is to be taken only over spatial momenta $\vec{p}$ for which the roots $\omega$ of $P(\omega,\vec{p})$ are non-degenerate, so that the elementary plane wave solutions are linearly independent. However, the set of covectors for which these zeros are degenerate is of measure zero \cite{raetzel2010}, so that this restriction of the integral domain can be technically disregarded. It may be worth emphasizing that the standard appearance of this expansion is somewhat deceptive, since the $\omega^I$ appearing here are solutions of (\ref{polyn_def}), rather than the standard Lorentzian dispersion relation.
Having obtained a basis of solutions of the classical field equations, we now identify an inner product that is preserved under time evolution and positive definite for positive energy solutions. To this end, consider solutions $A_a(\vec p)(t,\vec x)$ and $\tilde A_a(\vec q)(t,\vec x)$ of the field equation for specific spatial covectors $\vec p$ and $\vec q$, respectively. Using the field equation (\ref{final_field_eq}), it can be shown that the continuity equation
\begin{equation}
\label{continuityequation}
\partial_0\left(G^{0a0b}A_a^*(\vec p)\overleftrightarrow{\partial}_0 \tilde A_b(\vec q)\right)+
\partial_m\left(-G^{a(mn)b} A_a^*(\vec p)\overleftrightarrow{\partial}_n \tilde A_b(\vec q) \right)=0
\end{equation}
is satisfied. This implies that we have a conserved charge $Q$ given by
\begin{equation}
Q=\int d^3x \,G^{0a0b}A_a^*(\vec p)\overleftrightarrow{\partial}_0 \tilde A_b(\vec q).
\end{equation}
The charge $Q$ defined above can be used to define a scalar product on the space of solutions, $(A(\vec p),\tilde A(\vec q))=-i\,Q$, which by construction is conserved under time evolution. It satisfies the following properties
\begin{eqnarray}\label{scalar_product}
\nonumber
(A(\vec p),\lambda \tilde A(\vec q))&=&\lambda(A(\vec p),\tilde A(\vec q))\\
(\lambda A(\vec p),\tilde A(\vec q))&=&\lambda^*(A(\vec p),\tilde A(\vec q))\\\nonumber
(A(\vec p),\tilde A(\vec q))&=&( A(\vec p),\tilde A(\vec q))^*=-( A^*(\vec p),\tilde A^*(\vec q)).
\end{eqnarray}
Hence, if we define for our different frequency solutions
\begin{equation}
F^I_{a}(\vec p)(t,\vec x)=e^{-i(\omega^I(\vec p) t+\vec p.\vec x)}f^I_a(\vec p),
\end{equation}
we find that $(F^I(\vec p),F^{*J}(\vec q))=0$ and
\begin{equation}
\label{positivesolutions}
(F^I(\vec p),F^{J}(\vec q))=-(F^{*I}(\vec p),F^{*J}(\vec q))=-2\omega^I(\vec p) G^{0a0b} f^{I*}_a(\vec p) f^{I}_b(\vec p) \delta^{IJ}\delta(\vec p-\vec q).
\end{equation}
In the derivation of the above results we used charge conservation to find that for $I\neq J$
\begin{equation}
\label{ortho_id}
G^{0a0b} f^{I*}_a(\vec p)f^{J*}_b(-\vec p)=G^{0a0b} f^{I*}_a(\vec p)f^{J}_b(\vec p)=0.
\end{equation}
Moreover, since $G^{0a0b}$ is negative definite due to (\ref{classInormalform}), equation (\ref{positivesolutions}) shows that the positive energy solutions can be positively normalized, implying in turn that the negative energy solutions are negatively normalized. This indefiniteness of the scalar product is responsible for creation and annihilation processes. Choosing, without loss of generality, $f^{I}_a(\vec p)=\frac{a^I_a(\vec p)}{\sqrt{2\omega^I(\vec p)}}$, we finally have
\begin{equation}
(F^I(\vec p),F^{J}(\vec q))=-(F^{*I}(\vec p),F^{*J}(\vec q))=- G^{0a0b} a^{I*}_a(\vec p) a^{I}_b(\vec p) \delta^{IJ}\delta(\vec p-\vec q),
\end{equation}
and our general solution reads
\begin{eqnarray}
\label{finalsolution}
A_a(t,\vec x)&=&\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega^I(\vec p)}}\left( e^{-i(\omega^I(\vec p) t+\vec p.\vec x)}a^I_a(\vec p)+e^{i(\omega^I(\vec p) t+\vec p.\vec x)}a^{*I}_a(\vec p)\right).
\end{eqnarray}
Now that we have the general solution (\ref{finalsolution}), we can use it to write the Hamiltonian evaluated at a solution in diagonal form,
\begin{eqnarray}
\label{class_Ham}
{H}_0=\int d^3x\,\mathcal H_0(\vec x)&=&-\frac{1}{2}\int d^3 x G^{0a0b}\left( \partial_0 A_a \partial_0 A_b-A_a\partial_0^2 A_b\right)\\\nonumber
&=&\frac{1}{2}\sum_{I,J} \int \frac{d^3 p}{(2\pi)^3}\frac{d^3 q}{(2\pi)^3} \omega^J(\vec p)\left[ (F^I(\vec p),F^J(\vec q))+(F^J(\vec p),F^I(\vec q))\right]\\\nonumber
&=&-\frac{1}{2}\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3} \omega^I(\vec p) G^{0a0b}\left[ a^{*I}_a(\vec p)a^I_b(\vec p)+a^I_a(\vec p) a^{*I}_b(\vec p)\right].
\end{eqnarray}
The last expression shows that the classical Hamiltonian is positive because $G^{0a0b}$ is negative definite.
\section{Quantization}
\label{sec_quantization}
Equipped with the results developed so far, we are now ready to quantize the electromagnetic field. First, notice that if we multiply equation (\ref{eigenvalueequation}) by $p_l$ then the amplitude eigenvectors $a^I_b(\vec p)$ satisfy
\begin{equation}
\label{orthogonality}
G^{0a0b}p_a a^I_b(\vec p)=0,
\end{equation}
such that the constraints $G^{0a0b}\partial_a A_b\approx 0$ and $\partial_a \pi^a\approx 0$ are satisfied. Now it can be shown \cite{raetzel2010} that for almost all spatial momenta $\vec{p}$, the two associated positive energies do not coincide, $\omega^{I=1}(\vec{p}) \neq \omega^{I=2}(\vec{p})$, so that the covectors $a^{I=1}_b(\vec p)$ and $a^{I=2}_b(\vec{p})$ are determined up to scale and linearly independent, thus forming a basis for the space of all purely spatial covectors $v$ for which $G^{0a0b}p_a v_b=0$, and diagonalizing the Hamiltonian (\ref{class_Ham}). Thus the only freedom we have left is a choice of normalization, which we fix such that any solution $a^I_b(\vec p)$ is expressed as $a^I_b(\vec p)=a^I(\vec p) \epsilon^I_b(\vec p)$ with the covectors $\epsilon^I_b(\vec p)$ normalized with respect to our scalar product, i.e.,
\begin{equation}
-G^{0a0b} \epsilon^{I*}_a(\vec p) \epsilon^I_b(\vec p)=1,
\end{equation}
where there is no summation over $I$. Furthermore, $p_a$ and any $a^I_b(\vec p)$ are clearly linearly independent, such that the set of covectors
\begin{equation}
\left\lbrace \epsilon^{I=1}_b(\vec p),\epsilon^{I=2}_b(\vec p),\frac{\vec p}{\sqrt{-G^{0a0b}p_a p_b}}\right\rbrace
\end{equation}
constitute a basis for the space of purely spatial covectors, orthonormalized with respect to the scalar product (\ref{scalar_product}). Hence, they satisfy the completeness relation
\begin{equation}
\label{completeness}
-G^{0i0j}\sum_{I=1,2}\epsilon^{I*}_j(\vec p) \epsilon^{I}_b(\vec p)=\delta^i_b-\frac{p_m p_b G^{0m0i}}{G^{0r0s}p_r p_s}.
\end{equation}
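As a numerical illustration of this completeness relation (a sketch with an assumed diagonal sample block $G^{0i0j}=-\mathrm{diag}(\alpha,\beta,\gamma)$), one can construct the two polarization covectors by Gram-Schmidt orthonormalization with respect to the inner product $-G^{0a0b}u_av_b$ on the subspace transversal to $\vec p$, and then verify the relation directly:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, gamma = 1.3, 0.8, 1.1
G = -np.diag([alpha, beta, gamma])   # sample block G^{0i0j}
W = -G                               # positive definite inner product -G^{0i0j}

p = rng.normal(size=3)               # generic spatial momentum

def w_dot(u, v):
    return u @ W @ v

# Gram-Schmidt: project standard basis vectors W-orthogonally to p and to
# the previously found polarization covectors, keeping the nonzero results.
basis = []
for trial in np.eye(3):
    v = trial - w_dot(p, trial)/w_dot(p, p) * p
    for e in basis:
        v = v - w_dot(e, v) * e
    if w_dot(v, v) > 1e-10:
        basis.append(v/np.sqrt(w_dot(v, v)))
eps1, eps2 = basis[:2]

# Completeness relation: -G^{0i0j} sum_I eps^I_j eps^I_b
lhs = W @ (np.outer(eps1, eps1) + np.outer(eps2, eps2))
rhs = np.eye(3) - (G @ np.outer(p, p)) / (p @ G @ p)
assert np.allclose(lhs, rhs)
```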
Notice that the normalized covectors $\epsilon^I_b(\vec p)$ satisfy the orthogonality identities (\ref{ortho_id}). Now the general solution (\ref{finalsolution}) takes the form
\begin{eqnarray}
\label{finalsolution2}
A_a(t,\vec x)&=&\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega^I(\vec p)}}\left( e^{-i(\omega^I(\vec p) t+\vec p.\vec x)}a^I(\vec p)\epsilon^I_a(\vec p)+e^{i(\omega^I(\vec p) t+\vec p.\vec x)}a^{I*}(\vec p)\epsilon^{*I}_a(\vec p)\right)\,,
\end{eqnarray}
where the coefficients $a^{I}(\vec p)$ correspond to the amplitudes of the solutions and depend on the initial values that one considers for a specific problem in the classical approach. At the quantum level, these amplitudes are precisely the mathematical objects that should be promoted to operators, such that the corresponding quantum field reads
\begin{eqnarray}
\label{quantumsolution}
\hat A_a(t,\vec x)&=&\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega^I(\vec p)}}\left( e^{-i(\omega^I(\vec p) t+\vec p.\vec x)}\hat a^I(\vec p)\epsilon^I_a(\vec p)+e^{i(\omega^I(\vec p) t+\vec p.\vec x)}\hat a^{I \dagger}(\vec p)\epsilon^{*I}_a(\vec p)\right).
\end{eqnarray}
Using this quantum solution and the expressions for the energy and spatial momentum (which can be obtained by calculating the energy-momentum tensor) we find that the quantum Hamiltonian and quantum spatial momentum operators are given by
\begin{eqnarray}
\label{quantumhamiltonian}
{\hat H}_0&=&\frac{1}{2}\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3} \omega^I(\vec p) \left[ \hat a^{I}(\vec p)\hat a^{I\dagger}(\vec p)+\hat a^{I\dagger}(\vec p)\hat a^{I}(\vec p)\right],\\
{\hat P}_i&=&\frac{1}{2}\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3}\, p_i\, \left[ \hat a^{I}(\vec p)\hat a^{I\dagger}(\vec p)+\hat a^{I\dagger}(\vec p)\hat a^{I}(\vec p)\right].
\end{eqnarray}
Hence, if we identify the operators $\hat a^I(\vec p),\,\hat a^{I\dagger}(\vec p)$ with annihilation and creation operators respectively, a condition for the Hamiltonian to be positive definite is that these operators obey the bosonic commutation relations
\begin{eqnarray}
\label{crea_anh_comm}
[\hat a^I(\vec p), \hat a^{J\dagger}(\vec q)]&=&(2\pi)^3\delta^{IJ}\delta(\vec p-\vec q)\,,\\\nonumber
[\hat a^I(\vec p),\, \hat a^{J}(\vec q)\,] &=& [\hat a^{I\dagger}(\vec p), \hat a^{J\dagger}(\vec q)]=0.
\end{eqnarray}
Hence, the quantum Hamiltonian operator can be written as
\begin{eqnarray}
\label{quantumhamiltonian_normal}
{\hat H}_0&=&\sum_{I=1,2} \int \frac{d^3 p}{(2\pi)^3} \omega^I(\vec p) \hat a^{I\dagger}(\vec p) \hat a^{I}(\vec p) +\sum_{I=1,2} \frac{1}{2}\int d^3 p \,\omega^I(\vec p)\delta(0)\,,
\end{eqnarray}
from which expression we identify the energy of the electromagnetic vacuum, which was calculated here for plane wave solutions without any boundary conditions, as
\begin{equation}
\label{zeropointenergy}
E_{\textrm{vac}}(\textrm{no boundaries})=\sum_{I=1,2} \frac{1}{2}\int d^3 p \,\omega^I(\vec p)\delta(0)\,.
\end{equation}
In the next section, we will calculate how this expression changes if one imposes boundary conditions. Finally, by using the completeness relation (\ref{completeness}) one confirms that
\begin{equation}
\left[ \hat A_a(t,\vec x), \hat \pi^b(t,\vec y)\right] = i\int\,\dfrac{d^3 p}{(2\pi)^3}\left[ \delta^b_a-\frac{p_m p_a G^{0m0b}}{G^{0r0s}p_r p_s} \right]\,e^{i\vec p.(\vec x-\vec y)},
\end{equation}
which shows the consistency of the quantization procedure with the Dirac brackets (\ref{fuldameltal_dirac_conmutators}), since the latter reduce to the above form due to (\ref{specialchoice}).
\section{Application: Casimir effect in a birefringent linear optical medium}
\label{sec_casimir}
\new{The Hamiltonian (\ref{quantumhamiltonian_normal}) shows that the quantization of general linear electrodynamics leads to a modified quantum vacuum compared to standard non-birefringent Maxwell theory. In fact, local physical phenomena which only depend on the quantum vacuum can be used to test and bound the non-metricity of spacetime. In this section we analyze one such phenomenon, namely the Casimir effect; similar studies can be conducted for the Unruh effect and spontaneous emission.
The Casimir effect \cite{casimir1948} arises because of the energy cost incurred by imposing boundary conditions on the electromagnetic field strength. Physically, such boundary conditions arise for instance by introducing perfectly conducting metal plates into the spacetime. For two infinitely extended plates parallel to the 1-2-plane, and this is the configuration we will study here for general linear electrodynamics, the electromagnetic field strength must satisfy the boundary conditions \begin{equation}\label{platebdy}
\left.F_{01}\right|_{\textrm{plates}} = \left.F_{02}\right|_{\textrm{plates}}=\left.F_{12}\right|_{\textrm{plates}} =0
\end{equation}
everywhere on either plate; this follows, by Stokes' theorem and thus independently of the geometric background, from the physical assumption that the plates are ideal conductors inside of which the field strength must vanish.
Now the key point is that having, or not having, boundary conditions for the vacuum amounts to an energy difference, the so-called Casimir energy
\begin{equation}
E_\textrm{Casimir} = E_\textrm{vac}(\textrm{plate boundaries}) - E_\textrm{vac}(\textrm{no boundaries})\,.
\end{equation}
But both energies on the right hand side diverge and need to be regularized such that their difference is independent of the regulator.
This is most easily achieved by first considering boundary conditions analogous to (\ref{platebdy}), but for all six faces of a finite rectangular box with faces parallel to the coordinate planes, and separated by coordinate distances $L_1, L_2, L_3$. In a second step we will then push all faces a very large coordinate distance $L$ apart in order to obtain an expression for $E_\textrm{vac}(\textrm{no boundaries})$ regularized by $L$, and similarly push all but two faces in order to obtain a corresponding regularized expression for $E_\textrm{vac}(\textrm{plate boundaries})$. The difference of these two regulated quantities will indeed turn out to be finite per unit area and be independent of the regulator $L$.
Now more precisely, a basis of solutions of general linear electrodynamics satisfying the box boundary conditions is labeled by a triple $(n_1,n_2,n_3)$ of non-negative integers and a polarization $I=1,2$ and takes the form
\begin{eqnarray}
\label{metric_solution}\nonumber
A_x(\vec x)&=&a^I_x(n_1,n_2,n_3) \, \textup{cos}(n_1 \pi \frac{x}{L_1})\, \textup{sin}(n_2 \pi \frac{y}{L_2})\, \textup{sin}(n_3 \pi \frac{z}{L_3})\,,\\
A_y(\vec x)&=&a^I_y(n_1,n_2,n_3) \, \textup{sin}(n_1 \pi \frac{x}{L_1}) \, \textup{cos}(n_2 \pi \frac{y}{L_2}) \, \textup{sin}(n_3 \pi \frac{z}{L_3})\,,\\\nonumber
A_z(\vec x)&=&a^I_z(n_1,n_2,n_3) \, \textup{sin}(n_1 \pi \frac{x}{L_1}) \, \textup{sin}(n_2 \pi \frac{y}{L_2})\, \textup{cos}(n_3 \pi \frac{z}{L_3})\,,
\end{eqnarray}
where the $a^I_m(n_1,n_2,n_3)$ are solutions to equation (\ref{eigenvalueequation}) for $\omega^I(n_1 \pi/L_1, n_2 \pi/L_2, n_3 \pi/L_3)$, which always exist if the dispersion relation is bi-hyperbolic and energy distinguishing. The vacuum energy in the presence of the box boundary conditions is thus given by the discrete sum
\begin{equation}
\label{ape_discretized}
E_{\textrm{vac}}(\textrm{box boundaries})=\frac{1}{2}\sum_{\vec n=0}^{\infty} \sum_{I=1,2}\omega^I(\pi \frac{n_1}{L_1}, \pi \frac{n_2}{L_2}, \pi\frac{n_3}{L_3}).
\end{equation}
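One readily checks symbolically that the mode functions (\ref{metric_solution}) implement the conductor boundary conditions; for instance, $F_{12}$ vanishes on the faces $z=0$ and $z=L_3$, and the remaining components and faces work analogously. A small sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
L1, L2, L3 = sp.symbols('L_1 L_2 L_3', positive=True)
n1, n2, n3 = sp.symbols('n_1 n_2 n_3', integer=True)
ax, ay = sp.symbols('a_x a_y')   # mode amplitudes (polarization label suppressed)

A_x = ax*sp.cos(n1*sp.pi*x/L1)*sp.sin(n2*sp.pi*y/L2)*sp.sin(n3*sp.pi*z/L3)
A_y = ay*sp.sin(n1*sp.pi*x/L1)*sp.cos(n2*sp.pi*y/L2)*sp.sin(n3*sp.pi*z/L3)

# F_12 = d_x A_y - d_y A_x must vanish on the faces z = 0 and z = L3,
# as required by the conductor boundary conditions
F12 = sp.diff(A_y, x) - sp.diff(A_x, y)
assert F12.subs(z, 0) == 0      # sin(0) = 0
assert F12.subs(z, L3) == 0     # sin(n3*pi) = 0 for integer n3
```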
Removing appropriate faces to a coordinate distance $L$ one finds from this, in the very large $L$ limit, the $L$-regularized expression for the vacuum energy without boundary conditions
\begin{equation}
\label{ape_continuum}
E^L_{\textrm{vac}}(\textrm{no boundaries})= \frac{L^3}{2 \pi^3}\sum_{I=1,2}\int_{0}^{\infty}d^3 p\, \omega^I(\vec p),
\end{equation}
and the $L$-regularized expression for the vacuum energy in the presence of two plates parallel to the 1-2-plane and separated by a coordinate distance $d$
\begin{equation}
E^L_{\textrm{vac}}(\textrm{plate boundaries})=\frac{L^2}{2\pi^2} \sum_{I=1,2} \sum_n{}'\int_0 ^\infty dp_x dp_y \,\omega^I\left(p_x^2,p_y^2,(\frac{n\pi}{d})^2\right)\,,
\end{equation}
where the prime on the sum over $n$ indicates that a factor $1/2$ is to be inserted when $n=0$, for then we have just one independent polarization. Hence we find for the physical vacuum Casimir energy $U(d)=(E_\textrm{vac}(\textrm{plate boundaries}) - E_\textrm{vac}(\textrm{no boundaries}))/L^2$ per unit area
\begin{equation}
\label{casimir_energy}
U(d) = \frac{1}{2\pi^2} \sum_{I=1,2} \left[ \sum_n{}'\int_0^\infty dp_x dp_y \,\omega^I\left(p_x^2,p_y^2,(\frac{n\pi}{d})^2\right)-\frac{d}{\pi} \int_0^\infty dp_x dp_y dp_z\,\omega^I(p_x^2,p_y^2,p_z^2)\right].
\end{equation}
In principle, the execution of the above integrals can proceed as in the standard case. However, with the frequencies $\omega^I$ now being solutions to a quartic, rather than quadratic, dispersion relation, these integrals are much harder, particularly due to the absence of rotational invariance.
Fortunately, the fact that contributions from the two different polarizations $I=1,2$ are simply added in the above expression allows for an analytic study of the case where the polynomial $P$ is reducible. In terms of the scalars $\alpha, \beta, \gamma, \rho$ defining the area metric in a normal form frame, this is the case if and only if two of the scalars $\alpha, \beta, \gamma$ coincide, and we may take $\alpha=\beta$, for instance. Even in this simplest of non-trivial cases, the Casimir energy crucially depends on the birefringence properties of the underlying general linear electrodynamics. More precisely, the polynomial in (\ref{classI_pol}) factorizes into two Lorentzian metrics,
\begin{equation}
\label{classIbimetricpolynomial}
P(p)=\alpha(\alpha p_0^2-\alpha p_3^2-\gamma (p_1^2+p_2^2))(\gamma p_0^2-\gamma p_3^2-\alpha (p_1^2+p_2^2))\,,
\end{equation}
so that we immediately obtain the positive energy solutions
\begin{equation}
\label{bimetric_energies}
\omega^{I=1}=\left[\frac{1}{\alpha}\left(\alpha p_3^2+\gamma p_1^2 + \gamma p_2^2\right)\right]^{1/2}\qquad \textrm{and}\qquad
\omega^{I=2}=\left[\frac{1}{\gamma}\left( \gamma p_3^2+\alpha p_1^2 + \alpha p_2^2\right)\right]^{1/2}\,,
\end{equation}
turning (\ref{casimir_energy}) into a sum of integrals as they appear in the standard Casimir problem on a Lorentzian background. Thus from here on the standard calculation of the Casimir effect \cite{milonni1994} can be followed for each of these integrals separately, and one finally obtains the Casimir energy (\ref{casimir_energy})
\begin{equation}
U(d)=-\frac{1}{2}\left( \frac{\alpha}{\gamma}+\frac{\gamma}{\alpha} \right) \frac{\pi^2}{720 d^3}\,.
\end{equation}
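The origin of the prefactor $(\alpha/\gamma+\gamma/\alpha)/2$ can be seen from a simple rescaling argument; the following sketch is added here for clarity and is not part of the original derivation, with $q_i$ denoting rescaled transverse momenta.

```latex
% In the I=1 integrals substitute p_i = (\alpha/\gamma)^{1/2} q_i for i=1,2;
% \omega^{I=1} becomes the standard Lorentzian frequency and the transverse
% measure picks up a Jacobian \alpha/\gamma. The I=2 integrals analogously
% acquire a factor \gamma/\alpha:
$$
\omega^{I=1}\,dp_1\,dp_2={\alpha\over\gamma}\,
\sqrt{p_3^2+q_1^2+q_2^2}\;dq_1\,dq_2\,,\qquad
\omega^{I=2}\,dp_1\,dp_2={\gamma\over\alpha}\,
\sqrt{p_3^2+q_1^2+q_2^2}\;dq_1\,dq_2\,.
$$
% Each polarization therefore contributes its Jacobian times the standard
% single-polarization result $-\pi^2/(1440\,d^3)$, so that
$$
U(d)=\left({\alpha\over\gamma}+{\gamma\over\alpha}\right)
\left(-{\pi^2\over 1440\,d^3}\right)
=-{1\over 2}\left({\alpha\over\gamma}+{\gamma\over\alpha}\right)
{\pi^2\over 720\,d^3}\,.
$$
```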
This energy difference of course results in a Casimir force
\begin{equation}
\label{casimir_force}
F(d)=-U'(d)=-\frac{1}{2}\left( \frac{\alpha}{\gamma}+\frac{\gamma}{\alpha} \right) \frac{\pi^2}{240 d^4}
\end{equation}
between the plates. The standard Casimir force is recovered if and only if $\alpha=\beta=\gamma$, irrespective of the value of the scalar $\rho$. This in turn is equivalent to the absence of classical bi-refringence \cite{favaro2010}. Note that the amplification of any bi-refringence is limited only by the technological constraint of how small the separation $d$ between the plates can be made in any realistic set-up. In contrast to classical bi-refringence tests, which usually require cumulative effects over large distances (with all the uncertainties present in such non-local measurements), one sees here that the Casimir force allows for a detection of bi-refringence by way of a highly local measurement.
Conversely, of course, experimental measurements of the Casimir force agreeing with the standard prediction within the given technological constraints can be used to put stringent bounds on the non-metricity of the spacetime region where the measurement is conducted.}
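As a quick numerical illustration (a sketch added here, not part of the original text), the force (\ref{casimir_force}) differs from the standard Casimir force only by the dimensionless factor $(\alpha/\gamma+\gamma/\alpha)/2$, which equals unity precisely in the non-birefringent case $\alpha=\gamma$:

```python
def casimir_ratio(alpha, gamma):
    """Ratio of the force in equation (casimir_force) to the standard
    Casimir force -pi^2/(240 d^4); the d-dependence cancels."""
    return 0.5 * (alpha / gamma + gamma / alpha)

# the non-birefringent case alpha = gamma reproduces the standard force
assert casimir_ratio(1.0, 1.0) == 1.0
# any birefringence amplifies the attraction (arithmetic-geometric mean)
assert casimir_ratio(2.0, 1.0) == 1.25
assert all(casimir_ratio(x, 1.0) >= 1.0 for x in (0.1, 0.5, 3.0, 10.0))
```

A measured force ratio above unity at fixed plate separation would thus directly signal birefringence, in line with the discussion above.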
\section{Conclusions}
The canonical quantization of general linear electrodynamics, as undertaken in this article, required the solution of several, and in themselves challenging, questions.
First, from the classical field theory point of view, it had to be clarified which general linear electrodynamics are predictive on the one hand and physically interpretable in terms of quantities measurable by observers on the other hand. The answer to both questions is encoded in the polynomial dispersion relation of the field theory, and amounts to the simple algebraic conditions that the latter be bi-hyperbolic and energy-distinguishing. Further down the road, these conditions turned out to be crucial in ensuring the existence of a Glauber gauge, which made it possible to define a time-conserved scalar product in the space of classical solutions, on which all further developments were based.
Second, and closely related, is the construction of a Hamiltonian formulation of
general linear electrodynamics. The causal structure encoded in the higher-order polynomial dispersion relation of this theory required a revision of the construction of suitable spacetime foliations that underlie a Hamiltonian formulation. The key point here was that the leaves of the foliation must be such that initial data provided on them are causally evolved by the field equations and at the same time are accessible to observers. It turned out that bi-hyperbolic and energy-distinguishing area metric manifolds provide precisely the structure to ensure both, and ultimately render the classical Hamiltonian positive.
Third, the quantum Hamiltonian operator is positive definite. For a theory with a higher-order polynomial dispersion relation this is far from trivial, and again holds only due to bi-hyperbolicity and the energy-distinguishing property. The positive definiteness of the quantum Hamiltonian operator is inherited from the positivity of the classical Hamiltonian because the positive energy solutions have positive norm with respect to the scalar product identified before. This is of course synonymous with the stability of the quantum vacuum, and thus the physical relevance of the Casimir effect we derived from it.
The wider lesson learnt from our study is that it provides a prototypical, and rather non-trivial, example for the quantization of a field theory with a modified dispersion relation. Such theories are discussed extensively throughout the literature with a number of motivations, but usually disregarding the fundamental consistency conditions that were instrumental in this work. In particular, the classically inevitable condition that the dispersion relation be given by a bi-hyperbolic and energy distinguishing polynomial proved indispensable also at virtually every step of the quantization process.
Actual calculations were made tractable by employing the fact that the dispersion relation of general linear electrodynamics is ultimately determined by a fourth rank area metric tensor for which a complete algebraic classification and associated normal forms are available for the phenomenologically directly relevant case of four spacetime dimensions. This normal form theory was also used to ensure that the birefringent optical backgrounds for which we calculated the Casimir effect (and which owe their physical relevance to their parametrizing the neighborhood of non-birefringent optical media) indeed are bi-hyperbolic and energy-distinguishing. While it is possible to directly exclude 16 out of a total of 23 algebraic classes of four-dimensional area metrics as admissible spacetime structures, a complete and simple characterization of all area metric manifolds that {\it are} bi-hyperbolic and energy-distinguishing remains an open problem. The high interest that would attach to a comprehensive solution of this problem is clearly underlined by the pivotal role we saw these conditions play for the classical and quantum theory alike.
Another open, albeit well-defined, problem is the coupling of fermions to general linear electrodynamics. The issue is the very definition of spinors in the presence of a higher-degree polynomial dispersion relation, rather than one given by a Lorentzian metric. For rather than satisfying the standard binary Dirac algebra, generalized Dirac matrices that intertwine spacetime and spinor indices must now satisfy a quaternary algebra determined by the fourth-degree polynomial associated with a four-dimensional area metric spacetime structure. Even employing the normal form theory, representations of this quaternary algebra appear hard to find in any but the case of a reducible dispersion relation satisfying the relevant conditions (which then leads to a sixteen-dimensional spinor representation with an associated refined Dirac equation for this special bi-metric case). Once a representation in the general case is obtained, the canonical quantization can proceed exactly along the now clearly defined path for such theories, and complete a full theory of general linear quantum electrodynamics including charges.
Concluding, we see that the results of this article open up the arena for comprehensive, and above all conceptually watertight, studies of quantum effects brought about by birefringence. Indeed, beyond the Casimir force we calculated here explicitly, any other effect rooting in the quantum vacuum of electrodynamics can be directly calculated now on the basis of the technical findings of this paper. This includes for instance the Unruh effect or the spontaneous emission of photons from quantized point particles. Once spinor fields are included, the range of effects of course extends to the full spectrum of processes discussed in standard quantum electrodynamics with charged fermions.
Far from being merely academic musings, however interesting, these findings are of immediate relevance to physicists with interests ranging from fundamental theory to material science. Indeed, while on the one hand directly testable in birefringent optical media in laboratory experiments \footnote{For not truly continuous optical media, such as any material in the laboratory, the calculations made in this paper will only apply to the same approximation to which the medium can be modeled as an area metric spacetime. This will be the case for wavelengths that are well above the average distance between the atoms constituting the material but also well below wavelengths of the order of the separation between the plates. More sophisticated calculations taking into account these cut-offs, as well as finitely extended plates or different geometric configurations should now be feasible for the interested specialist, based on the results derived in this paper.}, the constructions of this paper on the other hand also put phenomenological studies of modified dispersion relations \cite{ling2006,barcelo2007,rinaldi2007,garattini2010,bazo2009,gregg2009,sindoni2008,sindoni2009,sindoni2009-2,chang2008,laemmerzahl2009,laemmerzahl2005,liberati2001,perlick2010,gibbons2007,lukierski1995,glikman2001,girelli2007} (or, equivalently, Lorentz-violating spacetime structures \cite{kostelecky1989,gambini1998,alfaro1999,sudarsky2002,myers2003,magueijo2002,mavromatos2010,bojowald2004,jacobson2006,hossenfelder2010}), as they now abound in the literature, on a solid theoretical footing.
\section{Introduction}
It was realized long ago (Cowsik \& Lee 1982 and references therein) that the
divergence of the velocity field in astrophysical flows can provide a very
efficient mechanism to transfer energy from the fluid to particles (photons,
neutrinos, cosmic rays) diffusing through the medium, even in the absence of
shocks. In the case of photons undergoing multiple scatterings off cold
electrons, this effect is germane to thermal Comptonization, with the
flow velocity $v$ playing the role of the
thermal velocity, and is sometimes
referred to as dynamical Comptonization. In a series of papers
Blandford \& Payne (1981a,b; Payne \& Blandford 1981, PB in the following)
were the first to emphasize
the importance of repeated scatterings in a steady, spherical
flow of depth $\tau\gg 1$. They have shown that monochromatic photons injected
in a region where $\tau v/c\sim 1$ always gain energy, because of adiabatic
compression, and emerge with a broad
distribution which exhibits a distinctive power--law, high--energy tail.
Under the assumption that $v\propto r^\beta$, the spectral index depends only
on $\beta$.
Cowsik \& Lee (1982), and later Schneider \& Bogdan (1989), stressed that
Blandford
\& Payne diffusion equation for the photon occupation number could be
regarded as a particular case of the standard cosmic--ray transport
equation in which the diffusion coefficient, $\kappa \propto r^\alpha
\nu^\gamma$, does not depend on the photon energy ($\gamma=0$) and $\alpha
-\beta = 2$. Starting from this result, Schneider \& Bogdan were
able to generalize PB analysis to include the transition between Thomson
($\gamma =0$) and Klein--Nishina ($\gamma =1$) scattering cross--sections.
The competitive role of dynamical and thermal Comptonization in accretion
flows with a non--zero electron temperature was studied by Colpi (1988).
More recently, Mastichiadis \& Kylafis (1992) investigated the effects of
dynamical Comptonization in near--critical accretion onto a neutron star. Their
approach is very similar to PB, but the presence of a perfectly reflecting
inner boundary (either the NS surface or the magnetosphere) was taken into
account. They have shown that, in this case, the emergent spectrum is much
harder than in PB and that the spectral index depends both on $\beta$ and
on the depth at the inner boundary.
Even when dealing with black hole accretion, all previous analyses neglected
both special and general relativity. The presence of an event horizon was not
considered and only terms up to first order in $v/c$ were retained.
In this paper we extend PB calculations to account for relativistic effects
on radiative transfer which arise when the flow velocity approaches the
speed of light in the vicinity of the hole horizon; as in PB, we assume that
scattering is elastic in the electron rest frame.
The motivation for
this work is twofold, much like in Mastichiadis \& Kylafis. First: to
investigate the properties of the emergent spectrum in a more realistic
accretion scenario, in which
the optical depth near the horizon is not so large as to prevent photons from
escaping. Second: to check by means of an analytical calculation the results
obtained with a numerical code recently developed for solving the complete
transfer problem in spherical flows (Zane {\it et al.\/} 1996). Computed
spectra show, in fact, a power--law, high--energy tail but the spectral index
depends on
the optical depth at the horizon, even for fixed $\beta=-1/2$, and it is always
smaller than $2$ (PB result). Here we show that advection/aberration effects
in the high--speed flow near the horizon, due to the finiteness of the depth
there, produce a power--law tail flatter than that of PB and enable photons
to drift also towards energies lower than the injection energy.
In addition, we present an analysis of dynamical Comptonization in an
expanding atmosphere, using the same assumptions as PB. We find that the
solution for the emergent flux mirrors that of PB.
Adiabatic expansion produces a drift of injected monochromatic photons to
lower energies and the formation of low--energy, power--law tail.
In this case, however, the spectral index is independent of the
velocity gradient and turns out to be always equal to $-3$.
\section{Radiative transfer in a converging flow}
In this and in the following sections we deal with the transfer of radiation
through a scattering, steady, spherical flow, characterized by a power--law
velocity
profile $v \propto r^{\beta}$. Under these assumptions the rest--mass
conservation yields immediately a density profile
$\varrho \propto r^{-2-\beta}$
from which it follows that the electron--scattering optical depth is
$\tau = \kappa_{es}\varrho r/(1+\beta) \propto r^{-1-\beta}$.
The parameter $\beta$ is positive for outflows and it has to be $\beta > -1$
for the optical depth to decrease with increasing $r$. Units in which
$c=G=1$ are used unless explicitly stated.
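These scalings are easily verified numerically; the sketch below (added here for illustration, with an arbitrary opacity normalization $\kappa_{es}=1$) checks that $\varrho v r^2$ is constant and that $\tau=\kappa_{es}\varrho r/(1+\beta)$ is indeed the optical depth, i.e. that $d\tau/dr=-\kappa_{es}\varrho$:

```python
beta = -0.5            # free fall
kappa = 1.0            # arbitrary normalization of the scattering opacity

v = lambda r: r**beta                             # velocity profile
rho = lambda r: r**(-2 - beta)                    # from rest-mass conservation
tau = lambda r: kappa * rho(r) * r / (1 + beta)   # depth, scales as r**(-1-beta)

# the mass flux rho*v*r^2 is independent of radius
flux = [rho(r) * v(r) * r**2 for r in (1.0, 10.0, 100.0)]
assert max(flux) - min(flux) < 1e-12

# tau is the scattering depth: dtau/dr = -kappa*rho (central differences)
for r in (2.0, 5.0, 50.0):
    h = 1e-6 * r
    dtau = (tau(r + h) - tau(r - h)) / (2 * h)
    assert abs(dtau + kappa * rho(r)) < 1e-8 * kappa * rho(r)
```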
Blandford \& Payne (1981a, b) and PB restricted their analysis to
converging flows with
a non--relativistic bulk velocity and to conservative and isotropic scattering
in the electron rest frame.
Combining the first two moment equations, written in an inertial frame, they
found that, in diffusion approximation, the (angle--averaged)
photon occupation number $n$ obeys a Fokker--Planck equation. Defining
$\tau^* = 3 \tau v $ and looking for separable solutions of the form
$$
n\left ( \nu, \tau^* \right ) = f \left ( \tau^* \right )
{\tau^*}^{3 +\beta} \nu^{-\lambda} \, ,
\eqno (1) $$
the resulting second order ordinary
differential equation for $f$ reduces to a confluent
hypergeometric equation
(here the sign of $\beta$ is opposite to that in PB,
e.g. $\beta = -1/2$ for free--fall, in accordance with the assumptions at the
beginning of this section).
As discussed by PB,
the solution corresponding to a constant radiative flux at infinity and
to adiabatic compression of photons for $\tau \to \infty$
is expressed in terms of the Laguerre polynomial
$L_n^{(3+\beta)}(\tau^*)$. The above two conditions give rise to a discrete
set of eigenvalues for
the photon index $\lambda$ which are given by
$$
\lambda_n = { 3 \left ( n + 3 +\beta \right ) \over \left ( 2 +\beta
\right ) }\qquad\qquad (n = 0,1,2,\ldots )\eqno (2) $$
The general solution is written as the superposition of different modes.
Assuming that monochromatic photons with $\nu =\nu_0$ are injected at
$\tau^* = \tau^*_0$, it is
$$
n \propto {\tau^*}^{3+\beta}\sum_{n=0}^\infty
{ \Gamma(n+1) \over \Gamma (n+4+ \beta )} L_n^{(3 + \beta )} (\tau^*_0)
L_n^{(3 + \beta )} (\tau^*)\left( { \nu \over \nu_0}\right)^{-\lambda_n}\, .
\eqno (3)
$$
In this case, bulk motion Comptonization tends to create a power--law,
high--energy tail. Defining the spectral index as
$$
\alpha = - \dert {\ln L \left ( \nu,0 \right )}{\ln \nu} \eqno (4) $$
where $L$ is the luminosity, PB found
$$
\lim_{\nu \to \infty} \alpha = { 3 \over 2 +\beta }
\eqno (5) $$
showing that the spectral slope at high frequencies is
dominated by the fundamental mode $n=0$. In particular, it is
$\alpha = 2$ for a free--falling gas.
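For illustration (a sketch added here, not part of PB), the eigenvalues (2) and the limiting index (5) are straightforward to tabulate:

```python
def lam(n, beta):
    # eigenvalue lambda_n of equation (2)
    return 3 * (n + 3 + beta) / (2 + beta)

def alpha_inf(beta):
    # high-frequency spectral index, equation (5)
    return 3 / (2 + beta)

beta = -0.5                      # free fall
assert alpha_inf(beta) == 2.0    # the PB value quoted above
assert lam(0, beta) == 5.0
# the eigenvalues are uniformly spaced by 3/(2+beta)
assert lam(1, beta) - lam(0, beta) == alpha_inf(beta)
```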
The same results can be recovered using the PSTF moment formalism introduced
by Thorne (1981), as shown by Nobili, Turolla
\& Zampieri (1993). Here we outline the general method,
mainly to introduce some basic concepts that will be used
in the following sections. In particular, we consider
the first two PSTF moment equations with only Thomson scattering included in
the source term. In the frame comoving with the fluid they read
$$
\der{w_1}{\ln r} + 2w_1 + { y' \over y } \left(
w_1 - \der{w_1}{\ln\nu} \right)
-v \left[\der{w_0}{\ln r} +
\left( 1 - \beta \right)\der{w_2}{\ln\nu}
- (2+\beta ) \left ({1\over 3} \der{w_0}{\ln\nu} - w_0 \right )
\right] = 0 \eqno(6a)
$$
$$\eqalign {&
{1\over 3}\der{w_0}{\ln r} +
\der{w_2}{\ln r }
+ 3 w_2 - {y' \over y} \der{w_2}{\ln\nu} +
{ y'\over y}\left(w_0 - {1\over 3}\der{w_0}{\ln\nu}\right) -
v\left[ - {3\over 5}(4+\beta )w_1 -
{1\over 5} (2+ 3\beta )
\der{w_1}{\ln\nu} + \der{w_1}{\ln r} +
\right. \cr
&
\left. \left(1 - \beta \right)
\left( w_3+ \der{w_3}{\ln\nu}\right)\right] +
{{(1+\beta )\tau}\over y}w_1=0\, .\cr}\eqno(6b)$$
where a prime denotes the total derivative with respect to $\ln r$,
$ y = \sqrt{1 - r_g/r}/\sqrt {1 - v^2}$, $r_g$ is the gravitational radius,
and $v$ is taken positive for inward motion.
As discussed by Turolla \& Nobili (1988, see also Thorne, Flammang \&
\.Zytkow 1981), in diffusion approximation the hierarchy of
the frequency--integrated PSTF moments $W_l$ is such that
$$
\eqalign{
W_1 & \sim {W_0\over \tau} \cr
W_2 & \sim {W_0\over \tau}\left({1\over \tau} -v\right) \cr
W_3 & \sim {W_0 \over \tau^2} \left({1\over \tau} -v\right) \, .
\cr }\eqno(7)$$
The same hierarchy can be assumed to hold also for frequency--dependent
moments in a scattering medium.
PB result can be reproduced in the limit of large $\tau$ and small $v$,
i.e. retaining in equations (6)
only terms of order $w_0$, $v w_0$ and $w_0/ \tau$, and suppressing gravity,
which is equivalent to setting $y=1$ and $y'=0$. In particular,
under such hypothesis, all terms containing both $w_2$ and $w_3$ can be
neglected and the moment equations become
$$
\der{w_1}{\ln t} - v\der{w_0}{\ln t} - 2w_1 + v(2+\beta )\left(w_0
-{1\over 3}\der{w_0}{\ln\nu}\right)=0 \eqno(8a)
$$
$$
v\der{w_0}{\ln t} - tw_1=0\, , \eqno(8b)
$$
where $t = (1+\beta)\tau^*$.
Equations (8a) and (8b) can be combined together to yield a
second order, partial differential equation for the radiative flux
$$t\derss{w_1}{t} -\left(t+1-\beta\right )\der{w_1}{t}+\left(1-{{2\beta
\over t}}\right)w_1 - {{2+\beta}\over 3}\der{w_1}{\ln\nu}=0\, .\eqno(9)$$
Following PB, the solution of equation (9) can be found by
separation of variables. Writing $w_1 = t^p h_1(t)\nu^{-\alpha}$, it is easy
to show that for $p=2$ and for $p=-\beta$, equation (9) becomes
a confluent hypergeometric equation for $h_1$. Actually the requirement of
constant radiative flux at infinity is met only for $p=2$, and in this case
we get
$$
t\dertt{h_1}{t} +\left(3+\beta-t\right )\dert{h_1}{t}-\left(1-{{2+\beta}
\over 3}\alpha\right)h_1 = 0 \, . \eqno(10)
$$
As previously discussed, the physical solution for $w_1$ can be obtained as a
superposition of the Kummer functions $M(-n,3+\beta ,t)\propto
L_n^{(2+\beta)}(t)$, for $n=0,1,\ldots$,
with corresponding eigenvalues
$$ \alpha_n = {{3(n+1)}\over {2+\beta}} \, . \eqno(11) $$
In this case
$$ w_1 = t^2 \sum_{n=0}^{\infty} A_n L_n^{(2+\beta)}(t)\nu^{-\alpha_n} \, ,
\eqno(12)$$
where the $A_n$'s are constants to be fixed by the boundary conditions.
At sufficiently large frequencies the spectrum is dominated by
the fundamental mode; in particular, for $\beta = -1/2$, it is again
$\alpha_0 = 2 \, .$
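That the Laguerre polynomials indeed solve equation (10) with the eigenvalues (11) can be checked numerically. The sketch below (not part of the original analysis) builds $L_n^{(2+\beta)}(t)$ from its explicit coefficients and evaluates the residual of equation (10):

```python
import math

def lag_coeffs(n, a):
    # coefficients c_k of L_n^(a)(t) = sum_k c_k t^k, with
    # c_k = (-1)^k Gamma(n+a+1) / (Gamma(k+a+1) (n-k)! k!)
    return [(-1)**k * math.gamma(n + a + 1)
            / (math.gamma(k + a + 1) * math.factorial(n - k) * math.factorial(k))
            for k in range(n + 1)]

def poly(c, t, d=0):
    # d-th derivative of sum_k c[k] t^k evaluated at t
    return sum(c[k] * (math.factorial(k) / math.factorial(k - d)) * t**(k - d)
               for k in range(d, len(c)))

beta = -0.5
for n in range(4):
    alpha_n = 3 * (n + 1) / (2 + beta)        # eigenvalue, equation (11)
    c = lag_coeffs(n, 2 + beta)
    for t in (0.3, 1.0, 4.7):
        # residual of equation (10); note 1 - (2+beta)*alpha_n/3 = -n
        res = (t * poly(c, t, 2) + (3 + beta - t) * poly(c, t, 1)
               - (1 - (2 + beta) * alpha_n / 3) * poly(c, t))
        assert abs(res) < 1e-9
```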
\section{Importance of relativistic effects}
Here we consider the effects of
dynamical Comptonization in spherical accretion onto a non--rotating black
hole, taking into full account both gravity and velocity terms in the
moment equations. With reference to this particular problem, we can
safely assume that
matter is free--falling, $v=(r/r_g)^\beta$ with $\beta = -1/2$. Note that
in spherical accretion onto black holes the radiative flux is never
going to influence the flow dynamics close to the horizon (see e.g.
Gilden \& Wheeler 1980; Nobili, Turolla \& Zampieri 1991; Zampieri, Miller \&
Turolla 1996). In this
case dynamics cancels, locally, gravity, so that
$y=1$, $y'=0$. The moment equations look then ``non--relativistic'' in form,
although $v$ can be arbitrarily close to unity. Corrections due to large
values of the flow velocity were not considered in previous works despite
the fact that they are bound to become important near the event horizon where
$v\sim 1$. We note that the bulk of the emission in realistic accretion models
is expected to come precisely from the region close to $r_g$.
As in the non--relativistic analysis presented in the last
section, we consider the diffusion limit, truncating self--consistently
both equations (6) to terms of order $w_0/\tau$.
The moments hierarchy, expressions (7), shows that all terms containing $w_3$
can be always neglected in equation (6b), since they are of order $w_0/\tau^2$.
In the present case, however, all other terms must be retained. In fact,
$v w_1\sim w_1\sim w_0/\tau$ when $v\sim 1$ and $w_2\sim w_0/\tau$,
at least for $v\magcir 1/\tau$ (see again expressions [7]).
This implies that $w_1$ and $w_2$ contribute to the
same extent to the anisotropy of the radiation field.
Note that under such conditions it is $W_2=4\pi(K-J/3)<0$, as already pointed
out by Turolla \& Nobili (1988), so that in high--speed, diffusive flows $K$
may become less than $J/3$. Contrary to the
case discussed in section 2 where $w_2$ is negligible, now the system
of the first two moment equations is not closed. However, up to terms of
order $w_0/\tau$, the second moment equation
does not contain moments of order higher than $w_2$ and provides then
the required closure equation
$$
v\der{w_2}{\ln t} - {4\over 15}\der{w_1}{\ln t} -{15\over 14}vw_2-{4\over 15}
w_1 + {2\over 5}vw_0 +
{3\over 14}v\der{w_2}{\ln\nu} - {2\over 15}v\der{w_0}
{\ln\nu} + {3\over 10}{t\over v}w_2 = 0\, . \eqno(13)
$$
The complete system (6a), (6b) and (13) is rather cumbersome and a solution can be
obtained only numerically. It is possible, nevertheless, to find an
analytical solution if we consider the closure condition for $w_2$ which
follows from equation (13) with only terms of order $w_0$ retained
$$w_2 = {4\over 9}{v\over t}\left(\der{w_0}{\ln\nu}-3w_0\right)\, .
\eqno(14)$$
With this closure, $w_2$ is always negative provided that
$\partial w_0/\partial\ln\nu <0$. This implies that equation (14) is strictly
valid only for $\tau v
\magcir 1$ (see expression [7]), that is to say below the trapping radius.
Introducing the new dependent variables $f_0 = vw_0$, $f_1 = w_1$ and
$f_2 = vw_2$, the moment equations become
$$
t\der{f_0}{t} + {1 \over 2} \der{f_0}{\ln \nu} - 2 f_0 - t\der{f_1}{t}
+ 2 f_1 - { 3 \over 2} \der{f_2}{\ln \nu} = 0 \eqno(15a)
$$
$$
{t \over 3} \der { f_0}{t} - { 1 \over 6 } f_0 - { t \over t_h}
 \left [ t\der { f_1}{t} + { 1\over 10} \der {f_1} {\ln \nu} + \left (
{t_h\over 3} - { 9 \over 10} \right ) f_1 \right ]
 + t\der{f_2}{t} - { 7 \over 2 } f_2 = 0 \eqno(15b)
$$
$$
f_2 - { 4 \over 9 t_h } \left( \der{f_0}{\ln \nu} - 3 f_0\right) = 0\, ,
\eqno(15c)
$$
where $t_h$ is the value of $t$ at the radius where $v=1$, i.e. at $r=r_g$ in
the case under examination.
We note that for $t_h \to \infty$
equations (15a,b) give exactly the low--velocity limit of PB
(equations [8a,b]), irrespective of the value of $v$. This is because when
$t_h\to\infty$ the scattering depth itself near the horizon must be very
large,
so the radiation field there is very nearly isotropic. Departures from isotropy,
due both to the radiative flux $w_1\sim w_0/\tau$ and to the radiative shear
$w_2\sim (1/\tau - v)w_0/\tau$ become
vanishingly small, no matter how large velocity is. Under such conditions
PB approach is still valid just because {\it both\/} $w_1$ and $w_2$ become
negligible in the moment equations, although they may be of the same order.
The system (15) can be solved by looking again for separable
solutions of the type $f_i = g_i(t) \nu^{-\alpha}$.
After some manipulation, it can be transformed into a pair of
decoupled, second order, ordinary differential equation for $g_0(t)$ and
$g_1(t)$, having the same structure. In particular for $g_1(t)$ it is
$$
t^2 \left ( \beta t + \gamma \right ) \dertt{g_1}{t} + t
\left ( \delta t + \epsilon \right ) \dert {g_1}{t}
- \left ( \eta t + \lambda \right ) g_1 = 0\eqno (16)
$$
where the constant coefficients (not to be confused with the velocity--law exponent $\beta$ and the photon index $\lambda$ of section 2) are
$$\eqalign {
& \beta = 60 t_h\, , \cr
& \gamma = -(20/3) t_h \left [ 3 t_h - 4 \left ( 3 + \alpha \right ) \right ]
\, , \cr
& \delta = 20 t_h^2 - 18 \left ( 2 \alpha + 3 \right ) t_h - 40
\alpha \left ( 3 + \alpha \right )\, , \cr
& \epsilon = 30 t_h \left [ t_h - 4 \left ( 3 + \alpha \right ) \right ]
\, , \cr
& \eta = 10 \left ( \alpha + 2 \right ) t_h^2 + \left ( 31 \alpha^2/3
+ 7 \alpha - 54 \right )t_h - 4 \alpha \left (\alpha + 3 \right )
\left ( \alpha + 9 \right )\, , \cr
& \lambda = (20/3) t_h \left [ 3 t_h - 28 \left (3 + \alpha \right ) \right ]\,
. \cr}
$$
Equation (16) can be reduced to a hypergeometric equation upon the
change of variables $g_1(t) = t^p h_1(z)$ and $ z = - (\beta/\gamma)t$,
where $p$
is the solution of the quadratic equation $\gamma p^2 + (\epsilon - \gamma)p -
\lambda = 0$.
A direct, but tedious, calculation shows that
$$\eqalign{
& p_+ = 2 \cr
& p_- = {7\over 2} -{9 t_h \over 3t_h -4(3 + \alpha)}\, ;\cr}
\eqno(18)
$$
in the limit $t_h \to \infty$, $p_- = 1/2$
as in the case considered in the previous section,
and $p=p_+$ will be used in the following to meet the requirement of constant
radiative flux at infinity. Equation (16) can now be written in
the form
$$
z ( 1 - z) \dertt{h_1}{z} + \left [ { \epsilon \over \gamma} + 2 p - \left (
{ \delta \over \beta} + 2 p \right ) z \right ] \dert{h_1}{z} - \left [
p \left ( p -1 \right ) + p { \delta \over \beta } - { \eta \over \beta }
\right ] h_1 = 0 \eqno (19)
$$
which is a hypergeometric equation. The general solution is expressed in terms
of the hypergeometric function $\null_2F_1( a, b, c; z)$ and the three
parameters $a$, $b$, $c$
(see Abramowitz \& Stegun 1972, AS in the following, for notation) are given
by the relations
$$\eqalign{
& c = {\epsilon\over\gamma} + 2p\cr
& a+b +1 = {\delta\over\beta} + 2p\cr
& ab = p ( p-1) + p {\delta\over\beta} - {\eta\over\beta}\, .\cr}
$$
Solving for $a$, $b$ we obtain, after a considerable amount of algebra,
$$
a = { 2 -\alpha \over 2} - {2 \alpha \left ( \alpha + 3 \right ) \over 3 t_h}
\eqno(20a)
$$
$$
b = {t_h\over 3} + {11 - \alpha \over 10}\, . \eqno(20b)
$$
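Since the algebra behind (20a,b) is lengthy, a numerical consistency check may be useful; the following sketch (with sample values $t_h=20$, $\alpha=2$, chosen here purely for illustration) verifies that $p=2$ is a root of the indicial equation and that $a$, $b$, $c$ satisfy the three defining relations:

```python
t_h, al = 20.0, 2.0          # sample values, arbitrary

# coefficients of equation (16)
beta_c  = 60 * t_h
gamma_c = -(20.0 / 3) * t_h * (3 * t_h - 4 * (3 + al))
delta_c = 20 * t_h**2 - 18 * (2 * al + 3) * t_h - 40 * al * (3 + al)
eps_c   = 30 * t_h * (t_h - 4 * (3 + al))
eta_c   = (10 * (al + 2) * t_h**2 + (31 * al**2 / 3 + 7 * al - 54) * t_h
           - 4 * al * (al + 3) * (al + 9))
lam_c   = (20.0 / 3) * t_h * (3 * t_h - 28 * (3 + al))

p = 2.0   # the root p_+ used in the text
assert abs(gamma_c * p**2 + (eps_c - gamma_c) * p - lam_c) < 1e-6

a = (2 - al) / 2 - 2 * al * (al + 3) / (3 * t_h)    # equation (20a)
b = t_h / 3 + (11 - al) / 10                        # equation (20b)
c = eps_c / gamma_c + 2 * p

assert abs(a + b + 1 - (delta_c / beta_c + 2 * p)) < 1e-9
assert abs(a * b - (p * (p - 1) + p * delta_c / beta_c - eta_c / beta_c)) < 1e-9
```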
It can be seen from equations (20) that, in the limit $t_h \to \infty$, $b$
diverges while $a$ stays finite; in the same limit
the hypergeometric equation reduces to
the confluent hypergeometric equation (see e.g. Sneddon 1956). As discussed
in section 2, the relevant
solution in the non--relativistic case is given by Laguerre polynomials
and is recovered by imposing $a = -n$, with $n=0,1,\ldots$.
The solution of equation (19) which reduces to PB for $t_h\to\infty$ is found
imposing again that $a$ is either zero or a negative integer (although other
classes of solutions that do not match PB may exist).
In this case $h_1$ is still polynomial and takes the form
$$
h_1 (z) = \null_2F_1 \left ( -n , b, c; z \right )
= { n ! \over \left (c \right )_n } P_n^{\left (c-1,b-c-n \right )}
\left ( 1 - 2z \right ) \, , \eqno (21)
$$
where $P_n^{(p,q)}(z)$ is the Jacobi polynomial and $(c)_n =
\Gamma(c +n)/\Gamma(c)$ is the
Pochhammer's symbol (see again AS). For $t_h\to\infty$, it is
$c\sim 5/2$, $b\sim t_h/3$, $z\sim 3t/t_h=t/b$ and
$$P_n^{(c-1,b-c-n)}\left(1-2{t\over b}\right)\to L_n^{(3/2)}(t)\, ,$$
so that, as expected, the solution of section 2 is recovered.
The discrete set of eigenvalues for the spectral index $\alpha_n$ follows
immediately from (20a) by solving the quadratic equation $a = -n$. For each
$n$ both a positive, $\alpha^+_n$, and a negative, $\alpha^-_n$, mode are
present
$$
\alpha^{\pm}_n = {- (12 + 3 t_h) \pm \sqrt{ \left ( 12 + 3 t_h \right )^2 +
96(n+1)t_h }\over 8 } \, . \eqno (22)
$$
We checked that the eigenvalues of the equation for $g_0$ are again given by
equation (22), in agreement with the starting hypothesis that $\alpha$ is the
same for all moments.
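As a sanity check on (22) (a numerical sketch added here), one can recover both the value $\alpha_0^+\simeq 1.54$ quoted in the caption of Figure 1 for $t_h=20$ and the non--relativistic limit $\alpha_n\to 2(n+1)$ of section 2:

```python
import math

def alpha_plus(n, t_h):
    # positive branch of equation (22)
    return (-(12 + 3 * t_h)
            + math.sqrt((12 + 3 * t_h)**2 + 96 * (n + 1) * t_h)) / 8

# value used in Figure 1 (t_h = 20)
assert abs(alpha_plus(0, 20.0) - 1.54) < 0.01

# for t_h -> infinity the PB eigenvalues 3(n+1)/(2+beta) = 2(n+1)
# for beta = -1/2 are recovered
for n in range(3):
    assert abs(alpha_plus(n, 1e8) - 2 * (n + 1)) < 1e-3
```

The flattening of the high-energy tail at finite depth ($\alpha_0^+<2$) is thus already visible at the level of the fundamental mode.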
The general solution for the spectral flux is obtained as a linear
superposition of all modes
$$
w_1 = t^2 \left[\sum_{n=0}^\infty A^+_n(-1)^n{{(b)_n}\over{(c)_n}}
G_n(b-n,c,z)\left({\nu\over\nu_0}\right)^{-\alpha^+_n} +
\sum_{n=0}^\infty A^-_n(-1)^n{{(b)_n}\over{(c)_n}}
G_n(b-n,c,z)\left({\nu\over\nu_0}\right)^{-\alpha^-_n}\right]\, , \eqno(23)
$$
where we have expressed $h_1$ in terms of the shifted Jacobi polynomials
$G_n$. We remind that $b$, $c$ and $z$ are all functions of $\alpha^{\pm}_n$,
although we dropped all indices to simplify the notation. The two sets of
constants $A^\pm_n$ are fixed imposing a boundary condition at the injection
frequency $\nu=\nu_0$. The only boundary condition compatible with
the assumption of a pure scattering flow for $t< t_h$ is that all photons are
created in an infinitely thin shell at $t_*$. This is equivalent to asking that
$w_1(t,\nu_0 )\propto\delta(t/t_* -1)$, as in PB.
At variance with the results discussed in section 2, now the series in equation
(23) can not be summed using the polynomial generating function because $b$,
$c$ and $z$ depend on $n$. The coefficients $A^\pm_n$ are
the solution of an upper triangular, infinite system of linear
algebraic equations (see Appendix A). It can be easily shown that the
two series in equation (23) do not converge for all values of $\nu/\nu_0$.
In fact, the general term of the first series, which is of the type
$f(n)(\nu_0/\nu)^{\alpha^+_n}$, can not tend to zero as $n\to\infty$ for
frequencies $\nu<\nu_0$ unless the series truncates, which is not the case if
it must reproduce the $\delta$--function at $\nu=\nu_0$. On the other
hand, the series is absolutely convergent for $\nu > \nu_0$,
provided that $|f(n)|$ is bounded. For $N\gg 1$, the series has
a majorant $\propto\sum_{n=N}^\infty(\nu_0/\nu)^{\sqrt n}$ which
is convergent because $\int_N^\infty(\nu_0/\nu )^{\sqrt x}\, dx$ is finite
for $\nu > \nu_0$.
The same argument applies to the second series for $\nu<\nu_0$, so that the
solution satisfying our boundary condition is
$$
w_1(t,\nu) = \cases{\displaystyle
t^2\sum_{n=0}^\infty A^-_n(-1)^n{{(b)_n}\over{(c)_n}}
G_n(b-n,c,z)\left({\nu\over\nu_0}\right)^{-\alpha^-_n} & \qquad $\nu < \nu_0
\, ;$\cr
\, & \cr
\displaystyle t^2\sum_{n=0}^\infty A^+_n(-1)^n{{(b)_n}\over{(c)_n}}
G_n(b-n,c,z)\left({\nu\over\nu_0}\right)^{-\alpha^+_n} & \qquad $\nu \geq
\nu_0\, .$\cr}\eqno(24)
$$
\beginfigure*{1}
\vskip 90mm \special{fig1bm.ps}
\caption{{\bf Figure 1.}
Emergent flux $F_\nu$ (in arbitrary units) for spherical accretion onto
a Schwarzschild black hole; here $t_h =20$, $t_*=0.9t_h$ and
$\alpha_0^+ = 1.54$.
A comparison with the analytical solution (dashed line)
in the limit $t_h \to \infty $ is illustrated in the box for
$t_*=20$, showing a good agreement with PB result.}
\endfigure
Equation (24) exhibits two striking features, not shared by its
non--relativistic counterpart, which arise from the presence of
advection/aberration terms in the moment equations. First of all, we
note that according to equation (24) photons injected at $\nu = \nu_0$ can
be shifted {\it both\/} to higher and lower energies by dynamical
Comptonization. This is in apparent contrast with PB result that photons can
only gain energy in scatterings with electrons in a converging flow
(the adiabatic compression).
PB statement is, however, correct up to $O(v)$ terms and their equation (8)
is the low--velocity limit of the more general expression for the photon energy
change along a geodesic (see e.g. Novikov \& Thorne 1973)
$${1\over \nu}\dert{\nu}{\ell} = -\left(n^i a_i + {1\over 3}\theta + n^in^j
\sigma_{ij}\right)\, ,\eqno(25)$$
where $n^i$ is the unit vector along the photon trajectory and $a^i$, $\theta$
and $\sigma_{ij}$ are the flow 4--acceleration, expansion and shear,
respectively. In free--fall $a^i$ vanishes, while in PB approximation it can
be safely neglected, being $O(v^2)$. The remaining two terms are both of
order $v$
$$\eqalign{ &\theta = -{3\over 2}{v\over r} \cr
& n^in^j\sigma_{ij} = {1\over 2}{v\over r}(3\mu^2 -1)\cr}
\eqno(26)
$$
where $\mu$ is the cosine of the angle between the photon and the radial
directions. The mean photon energy change can be obtained
angle--averaging equation (25) over the specific intensity
$$
I_\nu (\mu) = w_0 + 3 \mu w_1 + {15 \over 4 } (3\mu^2 - 1) w_2 + \ldots
\, . \eqno(27)
$$
Recalling the behaviour of the radiation moments in the
diffusion limit, we get
$$\left < {1\over \nu}\dert{\nu}{\ell} \right >
= {v \over r}\left[{1\over 2} + {3\over\tau}\left({1\over\tau} - v\right)
\right]\, .\eqno(28)$$
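The angle-averaging step can be cross-checked numerically (this check is added here and is not part of the original derivation). Integrating equations (25)--(26) (with $a^i=0$) over the intensity (27) gives $\<\nu^{-1}d\nu/d\ell\> = (v/r)\left[1/2 - (3/2)\,w_2/w_0\right]$, so that equation (28) corresponds to the diffusion-limit moment ratio $w_2/w_0 = -(2/\tau)(1/\tau - v)$, which we take as implied by the preceding sections. The sketch below (with arbitrary moment values) verifies the quadrature identity:

```python
def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

v_over_r = 0.3                     # v/r, arbitrary
w0, w1, w2 = 1.7, 0.4, -0.05       # arbitrary (small) moment values

def intensity(mu):
    # specific intensity, equation (27) truncated at w_2
    return w0 + 3.0 * mu * w1 + (15.0 / 4.0) * (3.0 * mu ** 2 - 1.0) * w2

def energy_change(mu):
    # -(theta/3 + n^i n^j sigma_ij), equations (25)-(26) with a^i = 0
    return v_over_r * (0.5 - 0.5 * (3.0 * mu ** 2 - 1.0))

num = simpson(lambda m: energy_change(m) * intensity(m), -1.0, 1.0)
den = simpson(intensity, -1.0, 1.0)
mean = num / den
closed_form = v_over_r * (0.5 - 1.5 * w2 / w0)
```

The averaged energy change depends on the moments only through $w_2/w_0$; the $w_1$ term drops out by parity.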
The second term in square brackets arises because of shear and
is negligible in PB approximation, being either $O(1/ \tau^2)$ or
$O(v /\tau)$. This implies that the mean photon energy change is
always positive. However,
when advection and aberration are taken into
account (see equations [25], [26]) photons moving in a cone around the
radial direction suffer an energy loss and the collective effect is stronger
when the flow velocity approaches unity
in regions of moderate optical depth.
The second important feature concerns the slope
of the power--law, high--energy tail
of the spectrum. From equation (22) the fundamental mode is
$$
\alpha^+_0 = {-( 12 + 3 t_h) + \sqrt{ \left ( 12 + 3 t_h \right )^2 + 96 t_h }
\over 8 } \, , \eqno (29)
$$
and, for large values of $t_h$, equation (29) gives
$$
\alpha^+_0 = 2 - {40 \over 3 t_h}
+ {1120 \over 9 t_h^2} + O \left ( 1 / t_h^3 \right ) \, . \eqno (30)
$$
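As a quick numerical check (added here, not in the original), the expansion (30) can be compared with the exact root (29):

```python
import math

def alpha_exact(t):
    # fundamental mode, equation (29)
    return (-(12 + 3 * t) + math.sqrt((12 + 3 * t) ** 2 + 96 * t)) / 8

def alpha_series(t):
    # large-t_h expansion, equation (30)
    return 2 - 40 / (3 * t) + 1120 / (9 * t ** 2)

err3 = abs(alpha_exact(1e3) - alpha_series(1e3))
err4 = abs(alpha_exact(1e4) - alpha_series(1e4))
```

The error shrinks roughly as $1/t_h^3$, consistent with the order of the neglected term, and the exact index tends to $2$ for $t_h\to\infty$.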
At large enough frequencies the spectral index is dominated by
the fundamental mode which, for $t_h\magcir 1$,
deviates appreciably from the value predicted by the non--relativistic calculation.
Despite the fact that this effect is present below the trapping radius,
we stress that, contrary to a widespread belief, the trapping radius
does not act as a one--way membrane. Photons produced near or below the
surface $\tau v =1$ can still escape to infinity
even if both the large optical depth and the strong advection caused by the
inward flow dramatically reduce the emergent radiative flux. Moreover, these
photons, although comparatively few, are the more strongly
comptonized and will anyway dominate the high--energy tail of the spectral
distribution. Equation (30) shows that
the emergent spectrum turns out to be flatter with respect to PB
case. The two main features of our solution, harder spectrum and drift of
photons below $\nu_0$, can be clearly seen in figure 1, where the emergent
spectrum is shown for $t_h=20$.
It is interesting to compare the present, analytical solution with the
numerical result obtained using the fully GR characteristic--ray code (CRM)
described in Zane {\it et al.\/} (1996).
In figure 2 we show the emergent spectrum relative
to the ``cold'' solution for black hole accretion with $\varrho_h = 1.42
\times 10^{-5}$ $\rm g\, cm^{-3}$. In this case both electron
scattering and free--free emission/absorption are considered. At large enough
frequencies scattering is the only source of opacity, so, in this limit, we
expect our idealized analytical model to be representative of the realistic
situation. The numerical model has $t_h\simeq 15$ which corresponds to
$\alpha_0^+=1.43$. This value is in excellent agreement with the derived
spectral index $\alpha = 1.36$.
\beginfigure*{2}
\vskip 90mm \special{fig2bm.ps}
\caption{{\bf Figure 2.}
Emergent flux $F_\nu$ computed using our CRM code; the derived spectral index
is $1.36$. In this model $t_h\simeq 15$ and the corresponding value of
$\alpha_0^+$ is $1.43$ (dashed line).}
\endfigure
\section{Radiative transfer in an expanding atmosphere}
In this section we discuss
the case of a pure scattering, expanding atmosphere with a power--law
velocity profile
$$v = v_*\left({r\over {r_*}}\right)^\beta\eqno(31)$$
where the subscript ``$_*$'' refers to the base of the envelope, now
$v$ is taken positive outwards and PB approximation is used. The
second order partial differential equation for the radiation flux is
given by equation (9), upon the substitution of $t$ with $-t$
$$t\derss{w_1}{t} +\left(t-1+\beta\right )\der{w_1}{t}-\left(1+{{2\beta
\over t}}\right)w_1 + {{2+\beta}\over 3}\der{w_1}{\ln\nu}=0\, .\eqno(32)$$
This equation could be integrated using the same technique discussed in
section 2, looking for
separable solutions
$w_1 = t^2 h_1(t)\nu^{-\alpha}$. It can be easily
checked that equation (32) yields again, upon factorization, a Kummer equation
for $h_1(t)$, as in the converging flow case.
A problem arises, however, as
far as boundary conditions are concerned: in section 2
the only physically meaningful solution was selected asking that the
flux become a constant for $t\to 0$ and that adiabatic
compression of photons hold for $t\to\infty$. In that case the existence of
these physical constraints was sufficient to uniquely fix the mathematical
solution. However, this particular issue turns out to be much
more delicate in the wind problem. We preferred then to
look for an alternative method of solution which allows for an
easier handling of boundary conditions. Equation (32), describing
diffusion of photons through a moving medium, is a Fokker--Planck
equation and can be brought into the standard Fokker--Planck
form
$$\derss{(tu_1)}{t} -\der{[(-1-\beta-t)u_1]}{t}=\der{u_1}{x}
\, ,\eqno(33)$$
where we have defined $w_1= t^2 u_1$ and $x = -3/(2+\beta)\ln\nu$.
The solution
can be found by Fourier transforming equation (33) with respect to $t$,
solving the equation for the Fourier transform $\hat u_1$ and then
transforming back (see e.g. Risken 1989). The equation for the Fourier
transform is obtained
from equation (33) replacing $\partial/\partial t$ by $ik$ and $t$ by
$i\partial/\partial k$,
$$ik(1+ik)\der{\ln\hat u_1}{ik} + \der{\ln\hat u_1}{x} = (1+\beta)ik\,
.\eqno(34)$$
This is a first order PDE which can be solved by standard methods
(see e.g. Sneddon 1957) once a boundary condition is specified. If we
assume that monochromatic photons of frequency $\nu_0$ are
injected at $t=t_*$, the boundary condition for equation (34) is just
$\hat u_1 = u_1^0\exp (-ikt_*)$, where $u_1^0$ is the luminosity emitted
at $\nu_0$, and the corresponding solution is
$$\hat u_1 = u_1^0\left\{\left[1-\exp(x_0 - x)\right]ik+1\right\}^{1+\beta}
\exp\left\{-{{\exp(x_0 - x)ikt_*}\over{\left[1-\exp(x_0 - x)\right]ik+1}}
\right\}\, .\eqno(35)$$
The solution to equation (33) is given by the Fourier integral
$$u_1 = {1\over{2\pi}}\int^{+\infty}_{-\infty}\hat u_1\exp(ikt)\, dk$$
which can be evaluated analytically in terms of the modified Bessel function
$I_q$ (see Appendix B)
$$\eqalign{ u_1 = &
u_1^0\left({\nu\over{\nu_0}}\right)^{3/2}
\left({{t_*}\over t}\right)^{(2+\beta )/2}\left[1-\left({\nu\over{\nu_0}}
\right)^{3/(2+\beta )}\right]^{-1}\exp\left [-{{t+\left(
\nu/\nu_0\right)^{3/(2+\beta )}t_*}\over
{1-\left(\nu/\nu_0\right)^{3/(2+\beta )}}}\right ] \times \cr
& I_{2+\beta}\left[2\sqrt{(\nu/\nu_0)^{3/(2+\beta )}t_*t}\biggl /\left (1-
\left(\nu/\nu_0\right)^{3/(2+\beta )}\right )\right ]\, .
\cr}\eqno(36)$$
The main advantage of solving equation (32) following the method outlined here
is that the Fourier transform is automatically selecting the regular solution,
because it can be computed only for functions that are $L_2$ in $]-\infty,
\infty[$.
In other words, it is the method of solution itself which is suited for
finding only regular solutions and in doing so no extra constraint is required.
The spectrum is shifted towards lower frequencies and it broadens at the same
time, developing a power law, low--energy tail. The overall behaviour is
similar to that of the converging flow but somewhat reversed, since now
photons can
drift only to frequencies lower than $\nu_0$. There is, however, a major
difference in the power--law index $\alpha$ between
the two cases since $\alpha$ does not
depend on $\beta$ for the wind solutions, as can be seen examining the
spectral behaviour of equation (36) at low frequencies.
Since $I_q(z)\sim (z/2)^q/
\Gamma (q+1)$ when the argument is small, we have for the emergent luminosity
$$L_\nu\propto \left({\nu\over {\nu_0}}\right)^3\exp\left[-{{
\left(\nu/\nu_0 \right )^{3/(2+\beta )}t_*}\over{1 - \left(\nu/\nu_0 \right )^
{3/(2+\beta )}}}\right]\left[1 - \left({\nu\over{\nu_0}} \right )^
{3/(2+\beta )}\right]^{-3-\beta}\sim \left({\nu\over{\nu_0}}\right)^3
\eqno(37)$$
if $\nu\ll\nu_0$, which shows that $\alpha=-3$
irrespective of the value of $\beta$. The monochromatic flux at $t=0$ is shown
in figure 3 for $\beta=1$ and $t_*=1$; the power--law tail at low energies is
clearly visible.
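The low-frequency slope can also be extracted numerically from the closed form (B4); a stdlib-only sketch with the illustrative choices $u_1^0 = t_* = 1$ and $\beta = 1$:

```python
import math

BETA, TSTAR = 1.0, 1.0
Q = 2.0 + BETA

def iv(q, z):
    # modified Bessel function I_q(z) by its power series (fine for small z)
    return sum((z / 2.0) ** (2 * m + q) / (math.factorial(m) * math.gamma(m + q + 1))
               for m in range(40))

def u1(nu_ratio, t):
    # equation (B4) with u_1^0 = 1 and a = (nu/nu_0)^{3/(2+beta)}
    a = nu_ratio ** (3.0 / (2.0 + BETA))
    z = 2.0 * math.sqrt(a * t * TSTAR) / (1.0 - a)
    return ((a * TSTAR / t) ** (Q / 2.0) / (1.0 - a)
            * math.exp(-(t + a * TSTAR) / (1.0 - a)) * iv(Q, z))

def slope(r1, r2, t=1e-3):
    # logarithmic slope of u_1 between two frequencies nu/nu_0 = r1, r2
    return ((math.log(u1(r1, t)) - math.log(u1(r2, t)))
            / (math.log(r1) - math.log(r2)))
```

For $\nu\ll\nu_0$ the logarithmic slope approaches $3$, i.e. $\alpha = -3$, irrespective of $\beta$ (only $\beta=1$ is exercised here).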
\beginfigure*{3}
\vskip 90mm \special{fig3bm.ps}
\caption{{\bf Figure 3.}
Emergent flux $F_\nu$ (in arbitrary units) for an expanding envelope
with $\beta =1$ and $t_*=1$.
At low energies the spectral index approaches $-3$ (dashed line).}
\endfigure
\section{Discussion and conclusions}
In this paper we have reconsidered the transfer of radiation in a
scattering, spherically--symmetric medium, extending PB analysis of
converging flows
to the relativistic case and investigating the effects of bulk motion
comptonization in expanding atmospheres.
In the low--velocity limit and assuming diffusion approximation, PB found that
monochromatic photons injected at the base of the atmosphere always gain energy
as they propagate outwards. The emergent spectrum exhibits an overall
shift to higher frequencies and a power--law, high--energy tail with a
spectral index depending on the velocity gradient.
Under the same assumptions, the wind solution shows similar, although reversed,
features. Adiabatic expansion now produces an
overall drift toward lower energies and the formation of
a power--law tail at low frequencies. In this case, however, the spectral
index is independent of the velocity gradient and turns out to be always
equal to $-3$.
Both these analyses are correct to first order in $v/c$ and can be thought to
adequately describe situations in which bulk motion is non--relativistic
in regions of moderate scattering depth. Relativistic corrections are, in fact,
related to the anisotropy of the radiation field and are washed out
if $\tau \gg 1$. Obviously, if the flow is optically thin, repeated scatterings
are ineffective no matter how large the velocity is. In outflowing atmospheres
high velocities are expected at large radii, where the optical depth has
dropped below unity, so that our assumption can be reasonable.
On the other hand, in accretion flows onto
compact objects the condition $\tau\gg 1$ where $v\sim 1$ is likely to
be met only when the accretion
rate becomes hypercritical. This shows that a relativistic treatment of
dynamical comptonization is indeed required in investigating the emission
properties of accretion flows. For $v\sim 1$ the diffusion limit is not
recovered simply asking that the radiative flux is proportional to the gradient
of the energy density, since the radiative shear is as important as the flux.
Relativistic corrections produce two main effects:
first, photons are shifted toward {\it both} higher and lower frequencies by
dynamical comptonization and, second, the spectrum at large frequencies
is appreciably flatter than in the non--relativistic case. The spectral
index now depends not only on the velocity gradient, but also on the
value of the scattering depth at the horizon and goes
to its non--relativistic limit when $\tau_h$ tends to infinity.
Despite the fact that relativistic effects are important only where $\tau v
> 1$,
that is to say below the trapping radius, their signature is still present
in the emergent spectrum. In particular, the high energy tail
is populated by the strongly comptonized photons coming just from this region.
A similar effect was found by Mastichiadis \& Kylafis (1992, see also Zampieri,
Turolla, \& Treves 1993) in an accretion
flow onto a neutron star. In their case the formation of an essentially
flat ($\alpha\simeq 0$) spectrum is due to the fact that photons experience
a very large number of energetic scatterings before emerging to infinity,
since no advection is present, the star surface being a perfect reflector.
Our spectrum is softer with respect to Mastichiadis \& Kylafis just because
a sizable fraction of the more boosted photons are dragged into the hole,
but, at the same time, it is harder than PB since
in the relativistic regime the mean energy gain per scattering is higher.
From the mathematical point of view it is noteworthy that the assumption of
a finite optical depth at the inner boundary
(i.e. at the horizon in our model or at the reflecting surface in Mastichiadis
\& Kylafis) produces a fundamental mode which is flatter
with respect to PB; in both cases PB result is recovered in the limit
$\tau_h \to \infty$.
The possibility that scattering of photons in an accretion flow onto
a black hole produces a power--law tail with spectral index flatter than 2
was also suggested in a very recent paper by Ebisawa, Titarchuk, \& Chakrabarti
(1996). Using a semi--qualitative analysis they found that the spectral
index is close to 3/2 for large values of the optical depth at the horizon
and discussed the possible relevance of this result in connection with the
observed hard X--ray emission from black hole candidates in the high state.
We note that for $1<\tau_h<32/9$ the predicted spectral index (see equation
[29]) is smaller than $1$, implying a divergent frequency--integrated
luminosity; this behaviour is not new and was already found by Schinder \&
Bogdan (1989) and Mastichiadis \& Kylafis (1992). It simply reflects
the fact that photons can gain an {\it arbitrarily large\/} amount of
energy in collisions with the free--falling electrons.
It should be taken into account, however,
that, when $h\nu\approx m_ec^2$ the electron recoil in the particle
rest frame can not be neglected anymore, so for large enough energies our
treatment is not valid, as discussed in more detail in Zampieri (1995).
The decrease of the cross--section in the quantum
limit makes the scattering process less efficient, producing a sharp cut--off
in the spectral distribution.
Finally, as already stressed by Blandford \& Payne (1981a) and Colpi (1988),
thermal comptonization dominates over dynamical comptonization when
$v^2\mincir 12kT/m_e$. The spectral distribution depends then
on the relative strength of competitive processes such as
heating/cooling by thermal comptonization and compressional heating and must
be derived solving the radiative transfer equation in its complete form.
\section* {References}
\beginrefs
\bibitem{Abramowitz, M., \& Stegun, I.A. 1972, Handbook of Mathematical
Functions, (New York: Dover), AS}
\bibitem{Blandford, R.D., \& Payne, D.G. 1981a, MNRAS, 194, 1033}
\bibitem{Blandford, R.D., \& Payne, D.G. 1981b, MNRAS, 194, 1041}
\bibitem{Colpi, M. 1988, ApJ, 326, 233}
\bibitem{Cowsik, R., \& Lee, M.A. 1982, Proc. Roy. Soc. London A, 383, 409}
\bibitem{Ebisawa, K., Titarchuk, L., \& Chakrabarti, S.K. 1996, PASJ, 48, 59}
\bibitem{Gilden, D.L., \& Wheeler, J.C. 1980, ApJ, 239, 705}
\bibitem{Mastichiadis, A., \& Kylafis, N.D. 1992, ApJ, 384, 136}
\bibitem{Nobili, L., Turolla, R., \& Zampieri, L. 1991, ApJ, 383, 250}
\bibitem{Nobili, L., Turolla, R., \& Zampieri, L. 1993, ApJ, 404, 686}
\bibitem{Novikov, I.D., \& Thorne, K.S. 1973, in Black Holes, DeWitt, C. \&
DeWitt B.S. eds., (New York: Gordon \& Breach)}
\bibitem{Payne, D.G., \& Blandford, R.D. 1981, MNRAS, 196, 781, PB}
\bibitem{Prudnikov, A.P., Brychkov, Yu.A., \& Marichev, O.I. 1986, Integrals
and Series, Vol. I (New York: Gordon \& Breach)}
\bibitem{Risken, H. 1989, The Fokker--Planck Equation (Berlin: Springer--Verlag)}
\bibitem{Schinder, P.J., \& Bogdan, T.J. 1989, ApJ, 347, 496}
\bibitem{Sneddon, I.N. 1956, Special Functions of Mathematical Physics and
Chemistry (Edinburgh: Oliver \& Boyd)}
\bibitem{Sneddon, I.N. 1957, Elements of Partial Differential Equations (New
York: McGraw--Hill)}
\bibitem{Thorne, K.S. 1981, MNRAS, 194, 439}
\bibitem{Thorne, K.S., Flammang, R.A., \& \.Zytkov, A.N. 1981, MNRAS, 194, 475}
\bibitem{Turolla, R., \& Nobili, L. 1988, MNRAS, 235, 1273}
\bibitem{Zampieri, L., Turolla, R., \& Treves, A. 1993, ApJ, 419, 311}
\bibitem{Zampieri, L. 1995, unpublished PhD Thesis}
\bibitem{Zampieri, L., Miller, J.C., \& Turolla, R. 1996, MNRAS, in the press}
\bibitem{Zane, S., Turolla, R., Nobili, L., \& Erna, M. 1996, ApJ, in the press}
\endrefs
\section*{Appendix A}
Since the polynomials $G_n (b-n, c, z)$ appearing in equation (24) are not
an orthogonal system, it is not possible to derive an explicit expression
for the coefficients $A^\pm_n$. Here we show that these constants can be,
in principle, obtained as the solution of an infinite system of
linear algebraic equations. We note that the two sets $A^\pm_n$ are not
independent, because the two expressions in (24) must match
at $\nu = \nu_0$, where
$$
w_1(t, \nu_0) = A \delta ( t/t_* - 1)\eqno({\rm A}1)
$$
($A$ is a constant related to the monochromatic flux injected at the inner
boundary). Since $z\propto t/t_h$ and $x=t/t_*$, the polynomials $G_n(b-n,c,z)$
can be expressed in terms of $G_n(3,3,x)$, which form an orthogonal system, as
$$
G_n(b-n,c,z) = \sum_{m=0}^n C_{nm}G_m(3,3,x)\, .\eqno({\rm A}2)
$$
The coefficients $C_{nm}$ are the solution of the upper triangular system of
linear algebraic equations
$$
\sum_{m=k}^n (-1)^{m-n}{m\choose k}{{(m+2)!(m+2+k)!}\over{(2m+2)!
(k+2)!}}C_{nm} = {n\choose k}{{\Gamma(c+n)\Gamma(b+k)}\over{\Gamma(b+n)
\Gamma(c+k)}}\left(-{\beta\over\gamma}t_*\right)^k\qquad\qquad k=0,\ldots , n\, .
\eqno({\rm A}3)
$$
Recalling the standard expansion of the $\delta$--function over an orthogonal
set of eigenfunctions and using again $G_n(3,3,x)$ as a basis, we have
$$
\delta (x-1) = x^2\sum_{m=0}^\infty {{(2m+3)!}\over{m!(m+2)!}}G_m(3,3,x)\, .
\eqno({\rm A}4)
$$
Inserting (A2) and (A4) into (A1) and equating the coefficients of the
polynomials of the same degree, we obtain
$$
\sum_{n=m}^\infty (-1)^n {{(b)_n}\over{(c)_n}}C_{nm}A^\pm_n =
{A\over{t_*^2}}{{(2m+3)!}\over{m!(m+2)!}}\qquad\qquad m\geq 0\, .
\eqno({\rm A}5)
$$
The numerical evaluation of $A^\pm_n$ has been carried out truncating the
series appearing in (A5) to a maximum order $N\sim 60$ and solving the
system by backsubstitution.
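A minimal sketch of the backsubstitution step (with made-up illustrative coefficients, not the actual $C_{nm}$ of equation [A3]):

```python
def solve_upper_triangular(C, rhs):
    # Solve  sum_{n >= m} C[m][n] * A[n] = rhs[m]  for m = N-1, ..., 0
    # by backsubstitution, as done for the truncated system (A5).
    N = len(rhs)
    A = [0.0] * N
    for m in reversed(range(N)):
        s = sum(C[m][n] * A[n] for n in range(m + 1, N))
        A[m] = (rhs[m] - s) / C[m][m]
    return A
```

Each unknown is obtained from the already-computed higher-order ones, starting from the last row of the truncated system.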
\section*{Appendix B}
In this Appendix we derive the expression for $u_1$, equation (36), starting
from the Fourier integral. By defining, for the sake of conciseness,
$a = (\nu/\nu_0)^{3/(2+\beta )}$, the Fourier integral can be written as
$$u_1 = {u_1^0\over{2\pi}}\int_{-\infty}^{+\infty}\left[\left(1-a\right)ik+1
\right]^{1+\beta}\exp\left[ikt - {{aikt_*}\over{(1-a)ik+1}}\right]\, dk\, ,
\eqno(\rm B1)$$
which can be transformed into an integral in the complex plane by introducing
the new, complex, integration variable $z = (1-a)ik+1$:
$$u_1 = {u_1^0\over{2\pi i}}{1\over{1-a}}\exp\left[-{{t+at_*}\over{1-a}}\right]
\int_{-i\infty+1}^{+i\infty+1}z^{1+\beta}
\exp\left[{t\over{1-a}}z + {{at_*}\over{1-a}}z^{-1}\right]\, dz\, .
\eqno(\rm B2)$$
The integral appearing in (B2) defines the Bessel function of imaginary
argument (see Prudnikov, Brychkov, \& Marichev 1986),
$$\int^{+i\infty +1}_{-i\infty +1}z^{1+\beta}
\exp\left[{t\over{1-a}}z + {{at_*}\over{1-a}}z^{-1}\right]\, dz\, =
2\pi i\left({{at_*}\over t}\right)^{(2+\beta)/2}
J_{2+\beta}[i2(att_*)^{1/2}/(1-a)] \eqno(\rm B3)$$
so that finally we have
$$u_1 = {{u_1^0}\over{1-a}}\left({{at_*}\over t}\right)^{(2+\beta)/2}
\exp\left[-{{t+at_*}\over{1-a}}\right]
I_{2+\beta}\left[2{{(att_*)^{1/2}}\over{1-a}}\right]\, ,\eqno(\rm B4)$$
which is exactly equation (36).
We note that, although the absolute
convergence in the complex plane of the integral representation
(B3) is proved only for $at_*/(a-1)>0$, $2+\beta < 1$,
direct substitution of (B4) into the Fokker--Planck equation
(33) shows that (B4) is a solution with the only restriction
$a < 1 $.
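The substitution can also be verified numerically. The stdlib-only sketch below (with the illustrative choices $u_1^0 = t_* = 1$, $\beta = 1$) evaluates the residual of equation (33) on (B4) by central finite differences, using $du_1/dx = -a\, du_1/da$, which follows from $x = -3/(2+\beta)\ln\nu$ and the definition of $a$:

```python
import math

BETA, TSTAR = 1.0, 1.0
Q = 2.0 + BETA

def iv(q, z):
    # modified Bessel function I_q(z) by its power series (fine for moderate z)
    return sum((z / 2.0) ** (2 * m + q) / (math.factorial(m) * math.gamma(m + q + 1))
               for m in range(60))

def u1(t, a):
    # equation (B4) with u_1^0 = 1 and t_* = TSTAR
    z = 2.0 * math.sqrt(a * t * TSTAR) / (1.0 - a)
    return ((a * TSTAR / t) ** (Q / 2.0) / (1.0 - a)
            * math.exp(-(t + a * TSTAR) / (1.0 - a)) * iv(Q, z))

def residual(t, a, h=1e-4):
    # equation (33): d^2(t u)/dt^2 - d[(-1-beta-t)u]/dt = du/dx.
    # With x = -3/(2+beta) ln(nu) and a = (nu/nu_0)^{3/(2+beta)}, da/dx = -a,
    # hence du/dx = -a du/da; the residual below should vanish on a solution.
    d2 = ((t + h) * u1(t + h, a) - 2.0 * t * u1(t, a)
          + (t - h) * u1(t - h, a)) / h ** 2
    g = lambda s: (1.0 + BETA + s) * u1(s, a)
    d1 = (g(t + h) - g(t - h)) / (2.0 * h)
    du_da = (u1(t, a + h) - u1(t, a - h)) / (2.0 * h)
    return d2 + d1 + a * du_da

R = residual(2.0, 0.5)
```

The finite-difference residual is zero to discretisation accuracy at the sample point $t=2$, $a=1/2$ (only $\beta=1$ is exercised here).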
\bye
\section{Introduction}
In \cite{Haglund-Remmel-Wilson-2018}, Haglund, Remmel and Wilson conjectured a combinatorial formula for $\Delta_{e_{n-k-1}}'e_n$ in terms of decorated labelled Dyck paths, which they called \emph{Delta conjecture}, after the so called delta operators $\Delta_f'$ introduced by Bergeron, Garsia, Haiman, and Tesler \cite{Bergeron-Garsia-Haiman-Tesler-Positivity-1999} for any symmetric function $f$. There are two versions of the conjecture, referred to as the \emph{rise} and the \emph{valley} version.
In the same article \cite{Haglund-Remmel-Wilson-2018} the authors conjecture a combinatorial formula for the more general expression $\Delta_{h_m}\Delta_{e_{n-k-1}}'e_n$ in terms of decorated partially labelled Dyck paths, which we call \emph{generalised Delta conjecture} (rise version). In this paper, the authors also state a \emph{touching} refinement (where the number of times the Dyck path returns to the main diagonal is specified) of their conjecture. In \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2019}, the authors introduce the $\Theta_f$ operators, and reformulate the touching version using these tools. In the present work, we will be using the latter formulation.
The Delta conjecture and its derivatives have attracted considerable attention since their formulation, see among others \cites{Wilson-Equidistribution, Rhoades-2018, Remmel-Wilson-2015, Zabrocki-Delta-Module-2019, Haglund-Rhoades-Shimozono-Advances, Garsia-Haglund-Remmel-Yoo-2019, DAdderio-Iraci-VandenWyngaerd-GenDeltaSchroeder-2019, DAdderio-Iraci-VandenWyngaerd-TheBible-2019, DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019, DAdderio-Iraci-VandenWyngaerd-Delta-t0-2018, Zabrocki-4Catalan-2016, Qiu-Wilson-2019,Haglund-Sergel-2019}. Most of the earlier work concerns the rise version, but interest in the valley version is growing.
The special case $k=0$ of the Delta conjecture, which is known as the \emph{shuffle conjecture} \cite{HHLRU-2005}, was recently proved by Carlsson and Mellit \cite{Carlsson-Mellit-ShuffleConj-2018}. The shuffle theorem, thanks to the famous \emph{$n!$ conjecture}, now $n!$ theorem of Haiman \cite{Haiman-nfactorial-2001}, gives a combinatorial formula for the Frobenius characteristic of the $\mathfrak{S}_n$-module of diagonal harmonics studied by Garsia and Haiman.
In \cite{Loehr-Warrington-square-2007} Loehr and Warrington conjecture a combinatorial formula for $\Delta_{e_{n}}\omega(p_n)=\nabla \omega(p_n)$ in terms of labelled square paths (ending east), called the \emph{square conjecture}. The special case $\<\cdot ,e_n\>$ of this conjecture, known as the \emph{$q,t$-square}, has been proved by Can and Loehr in \cite{Can-Loehr-2006}. Recently the full square conjecture has been proved by Sergel in \cite{Leven-2016}, who showed that the shuffle theorem by Carlsson and Mellit \cite{Carlsson-Mellit-ShuffleConj-2018} implies the square conjecture (now square theorem).
In \cite{DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019} the authors conjecture a combinatorial formula for $\frac{[n-k]_t}{[n]_t}\Delta_{h_m}\Delta_{e_{n-k}}\omega(p_n)$ in terms of \emph{rise-decorated partially labelled square paths} that we call \emph{generalised Delta square conjecture} (rise version). This conjecture extends the square conjecture of Loehr and Warrington \cite{Loehr-Warrington-square-2007} (now a theorem \cite{Leven-2016}), i.e. it reduces to that one for $m=k=0$. Moreover, it extends the generalised Delta conjecture in the sense that on decorated partially labelled Dyck paths gives the same combinatorial statistics.
In \cite{Qiu-Wilson-2019}, the authors state a \emph{generalised Delta conjecture} (valley version), extending the valley version of the Delta conjecture. They also prove the case $q=0$, extending the results in \cite{DAdderio-Iraci-VandenWyngaerd-Delta-t0-2018}.
Inspired by \cite{Qiu-Wilson-2019} and \cite{DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019}, we formulate two statements that can reasonably be called the \emph{generalised Delta square conjecture} (valley version). One is a combinatorial interpretation of the symmetric function \[\frac{[n-k]_q}{[n]_q}\Delta_{h_m}\Delta_{e_{n-k}}\omega(p_n)= \frac{[n]_t}{[n-k]_t} \Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k})\] (notice the swapping of $q$ and $t$ with respect to the rise version). The other is an interpretation of $\Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k})$, for which the combinatorics seems to be nicer and does not have the multiplicative factor.
Next, we adapt the schedule formula in \cite{Haglund-Sergel-2019} to objects with repeated labels, which enabled us to incorporate the monomials into the formula. This allowed us to obtain a schedule formula for the combinatorics of our conjecture and to deal with the symmetric functions more easily. As a byproduct, our formula provides a new factorisation of all other previous schedule formulae concerning Dyck or square paths.
Finally, we use this formula to prove that the (generalised) valley version of the Delta conjecture implies our (generalised) valley version of the Delta square conjecture. This implication broadens the argument in \cite{Leven-2016}, relying on the formulation of the touching version in terms of the $\Theta_f$ operators.
\section{Combinatorial definitions}
\begin{definition}
A \emph{square path} of size $n$ is a lattice path going from $(0,0)$ to $(n,n)$ consisting of east or north unit steps, always ending with an east step. The set of such paths is denoted by $\SQ(n)$. We call \emph{shift} of a square path the maximum value $s$ such that the path intersects the line $y=x-s$ in at least one point. We refer to the line $y=x+i-s$ as the \emph{$i$-th diagonal} and to the line $x=y$ (the $s$-th diagonal) as the \emph{main diagonal}. A \emph{Dyck path} is a square path whose shift is $0$. The set of Dyck paths is denoted by $\D(n)$. Of course $\D(n)\subseteq \SQ(n)$.
\end{definition}
For example, the path in Figure~\ref{fig:labelled-square-path} has shift $3$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (6,4) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (4.5,1.5) {$1$};
\draw (4.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$2$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (6.5,4.5) {$1$};
\draw (6.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$3$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$4$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$1$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\caption{Example of an element in $\LSQ(8)$.}
\label{fig:labelled-square-path}
\end{figure}
\begin{definition}
Let $\pi$ be a square path of size $n$. We define its \emph{area word} to be the sequence of integers $a(\pi) = (a_1(\pi), a_2(\pi), \cdots, a_n(\pi))$ such that the $i$-th vertical step of the path starts from the diagonal $y=x+a_i(\pi)$. For example the path in Figure~\ref{fig:labelled-square-path} has area word $(0, \, -\!3, \, -\!3, \, -\!2, \, -\!2, \, -\!1, \, 0, \, 0)$.
\end{definition}
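The area word and the shift are straightforward to compute mechanically; in the following sketch the step string transcribes the path of Figure~\ref{fig:labelled-square-path}:

```python
def area_word(path):
    # path: string of 'N'/'E' unit steps from (0,0) to (n,n);
    # a_i is y - x at the starting point of the i-th north step
    x = y = 0
    word = []
    for step in path:
        if step == 'N':
            word.append(y - x)
            y += 1
        else:
            x += 1
    return word

# the path of Figure 1, read step by step from (0,0)
path = "NEEEENENNENNNENE"
w = area_word(path)
# the shift is the deepest diagonal reached below the main one
shift = -min(w + [0])
```

This recovers the area word $(0,-3,-3,-2,-2,-1,0,0)$ and the shift $3$ quoted above.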
\begin{definition}
A \emph{partial labelling} of a square path $\pi$ of size $n$ is an element $w \in \mathbb N^n$ such that
\begin{itemize}
\item if $a_i(\pi) > a_{i-1}(\pi)$, then $w_i > w_{i-1}$,
\item $a_1(\pi) = 0 \implies w_1 > 0$,
\item there exists an index $i$ such that $a_i(\pi) = - \mathsf{shift}(\pi)$ and $w_i(\pi) > 0$,
\end{itemize}
i.e. if we label the $i$-th vertical step of $\pi$ with $w_i$, then the labels appearing in each column of $\pi$ are strictly increasing from bottom to top, with the additional restrictions that, if the path starts north then the first label cannot be a $0$, and that there is at least one positive label lying on the base diagonal.
We omit the word \emph{partial} if the labelling is composed of strictly positive labels only.
\end{definition}
\begin{definition}
A \emph{(partially) labelled square path} (resp. \emph{Dyck path}) is a pair $(\pi, w)$ where $\pi$ is a square path (resp. Dyck path) and $w$ is a (partial) labelling of $\pi$. We denote by $\LSQ(m,n)$ (resp. $\LD(m,n)$) the set of labelled square paths (resp. Dyck paths) of size $m+n$ with exactly $n$ positive labels, and thus exactly $m$ labels equal to $0$.
\end{definition}
The following definitions will be useful later on.
\begin{definition}
Let $w$ be a labelling of a square path of size $n$. We define $x^w \coloneqq \prod_{i=1}^{n} x_{w_i} \rvert_{x_0 = 1}$.
\end{definition}
The fact that we set $x_0 = 1$ explains the use of the expression \emph{partially labelled}, as the labels equal to $0$ do not contribute to the monomial.
Sometimes we will, with an abuse of notation, write $\pi$ as a shorthand for a labelled path $(\pi, w)$. In that case, we use the identification $x^\pi \coloneqq x^w$.
Now we want to extend our sets introducing some decorations.
\begin{definition}
\label{def:valley}
The \emph{contractible valleys} of a labelled square path $\pi$ are the indices $1 \leq i \leq n$ such that one of the following holds:
\begin{itemize}
\item $i = 1$ and either $a_1(\pi) < -1$, or $a_1(\pi) = -1$ and $w_1 > 0$,
\item $i > 1$ and $a_i(\pi) < a_{i-1}(\pi)$,
\item $i > 1$ and $a_i(\pi) = a_{i-1}(\pi) \land w_i > w_{i-1}$.
\end{itemize}
We define \[ v(\pi, w) \coloneqq \{1 \leq i \leq n \mid i \text{ is a contractible valley} \}, \] corresponding to the set of vertical steps that are directly preceded by a horizontal step and, if we were to remove that horizontal step and move it after the vertical step, we would still get a square path with a valid labelling, with the additional restriction that if the vertical step is in the first row and it is attached to a $0$ label, then we require that it is preceded by at least two horizontal steps.
\end{definition}
\begin{remark}
These slightly contrived conditions on the steps labelled $0$ have a more natural
formulation in terms of steps labelled $\infty$, see Section~\ref{sec:concluding}.
This extends the definition of contractible valley given in \cite{Haglund-Remmel-Wilson-2018} to (partially) labelled square paths.
\begin{definition}
\label{def:rise}
The \emph{rises} of a (labelled) square path $\pi$ are the indices \[ r(\pi) \coloneqq \{2 \leq i \leq n \mid a_i(\pi) > a_{i-1}(\pi)\}, \] i.e. the vertical steps that are directly preceded by another vertical step.
\end{definition}
\begin{definition}
A \emph{valley-decorated (partially) labelled square path} is a triple $(\pi, w, dv)$ where $(\pi, w)$ is a (partially) labelled square path and $dv \subseteq v(\pi, w)$. A \emph{rise-decorated (partially) labelled square path} is a triple $(\pi, w, dr)$ where $(\pi, w)$ is a (partially) labelled square path and $dr \subseteq r(\pi)$.
\end{definition}
Again, we will often write $\pi$ as a shorthand for the corresponding triple $(\pi, w, dv)$ or $(\pi, w, dr)$.
We denote by $\LSQ(m,n)^{\bullet k}$ (resp. $\LSQ(m,n)^{\ast k}$) the set of partially labelled valley-decorated (resp. rise-decorated) square paths of size $m+n$ with $n$ positive labels and $k$ decorated contractible valleys (resp. decorated rises). We denote by $\LD(m,n)^{\bullet k}$ (resp. $\LD(m,n)^{\ast k}$) the corresponding subsets of Dyck paths.
We also define $\LSQ'(m,n)^{\bullet k}$ as the set of paths in $\LSQ(m,n)^{\bullet k}$ such that there exists an index $i$ such that $a_i(\pi) = - \mathsf{shift}(\pi)$ and $i \not \in dv \land w_i(\pi) > 0$, i.e. there is at least one positive label lying on the bottom-most diagonal that is not a decorated valley. The importance of this set will be evident later in the paper.
Finally, we sometimes omit writing $m$ or $k$ when they are equal to $0$. Notice that, because of the restrictions we have on the labellings and the decorations, the only path with $n=0$ is the empty path, for which also $m=0$ and $k=0$.
\begin{figure}[!ht]
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (6,4) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (3.5,1.5) {$\bullet$};
\node at (6.5,7.5) {$\bullet$};
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (4.5,1.5) {$0$};
\draw (4.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$2$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (6.5,4.5) {$0$};
\draw (6.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$1$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$3$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$4$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (6,4) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (4.5,3.5) {$\ast$};
\node at (5.5,5.5) {$\ast$};
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (4.5,1.5) {$1$};
\draw (4.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$0$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (6.5,4.5) {$0$};
\draw (6.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$1$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$3$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$4$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\end{minipage}
\caption{Example of an element in $\LSQ(2,6)^{\bullet 2}$ (left) and an element in $\LSQ(2,6)^{\ast 2}$ (right).}
\label{fig:decorated-square-paths}
\end{figure}
We define two statistics on this set that reduce to the ones defined in \cite{Loehr-Warrington-square-2007} when $m=0$ and $k=0$.
\begin{definition}
\label{def:area}
Let $(\pi, w, dr) \in \LSQ(m,n)^{\ast k}$ and $s$ be its shift. We define
\[ \mathsf{area}(\pi, w, dr) \coloneqq \sum_{i \not \in dr} (a_i(\pi) + s), \]
i.e. the number of whole squares between the path and the base diagonal that are not in rows containing a decorated rise.
For $(\pi, w, dv) \in \LSQ(m,n)^{\bullet k}$, we define $\mathsf{area}(\pi, w, dv) \coloneqq \mathsf{area}(\pi, w, \varnothing)$, where $(\pi, w, \varnothing) \in \LSQ(m,n)^{\ast 0}$.
\end{definition}
For example, the path in Figure~\ref{fig:decorated-square-paths} (left) has area $13$. Notice that the area does not depend on the labelling.
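For illustration, the computation can be scripted; this sketch (ours, not the paper's) assumes the area word is given bottom-up and recovers the shift as $\max(0, \max_i(-a_i))$.

```python
def area(a, dr=frozenset()):
    """area of a (rise-decorated) square path with area word a:
    sum of a_i + s over the rows i not in dr, where s is the shift."""
    s = max(0, max(-x for x in a))
    return sum(x + s for i, x in enumerate(a, start=1) if i not in dr)

a = [0, -3, -3, -2, -2, -1, 0, 0]  # area word of the path in the figure
print(area(a))          # → 13, no decorated rises (valley version)
print(area(a, {4, 6}))  # → 10, decorated rises on rows 4 and 6
```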
\begin{definition}
\label{def:dinv}
Let $(\pi, w, dv) \in \LSQ(m,n)^{\bullet k}$. For $1 \leq i < j \leq n$, the pair $(i,j)$ is an \emph{inversion} if
\begin{itemize}
\item either $a_i(\pi) = a_j(\pi)$ and $w_i < w_j$ (\emph{primary inversion}),
\item or $a_i(\pi) = a_j(\pi) + 1$ and $w_i > w_j$ (\emph{secondary inversion}),
\end{itemize}
where $w_i$ denotes the $i$-th letter of $w$, i.e. the label of the vertical step in the $i$-th row. Then we define
\begin{align*}
\mathsf{dinv}(\pi) \coloneqq \# \{ 1 \leq i < j \leq n \mid (i,j) \text{ inversion } \land j \not \in dv \} + \#\{1 \leq i \leq n \mid a_i(\pi) < 0 \land w_i > 0 \} - \# dv
\end{align*}
where again $\pi$ is a shorthand for $(\pi, w, dv)$.
For $(\pi, w, dr) \in \LSQ(m,n)^{\ast k}$, we define $\mathsf{dinv}(\pi, w, dr) \coloneqq \mathsf{dinv}(\pi, w, \varnothing)$, where $(\pi, w, \varnothing) \in \LSQ(m,n)^{\bullet 0}$.
\end{definition}
We refer to the middle term, counting the nonzero labels below the main diagonal, as \emph{bonus} or \emph{tertiary dinv}.
For example, the path in Figure~\ref{fig:decorated-square-paths} (left) has dinv equal to $3$: $1$ primary inversion in which the leftmost label is not a decorated valley, i.e. $(1,7)$; $1$ secondary inversion in which the leftmost label is not a decorated valley, i.e. $(1,6)$; $3$ bonus dinv, coming from the rows $3$, $4$, and $6$; $2$ decorated valleys.
It is easy to check that if $j \in dv$ then either there exists some inversion $(i,j)$ or $a_j < 0$. This means that if $m=0$ the dinv is always non-negative. In fact, thanks to the condition on the first row, the dinv is also non-negative for $m>0$, and also if $\pi \in \LSQ'(m,n)^{\bullet k}$ and it is not a Dyck path, then the dinv is necessarily strictly positive.
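The displayed formula can likewise be transcribed literally; the following sketch (a hypothetical helper of ours) implements it verbatim and is checked only on a small undecorated path with positive labels, where no convention subtleties for $0$ labels or decorations arise.

```python
def dinv(a, w, dv=frozenset()):
    """dinv of a valley-decorated labelled square path: inversions
    (i,j) with j undecorated, plus bonus dinv (rows with a_i < 0
    and a positive label), minus the number of decorated valleys."""
    n = len(a)
    inv = sum(1
              for i in range(1, n + 1)
              for j in range(i + 1, n + 1)
              if j not in dv
              and ((a[i-1] == a[j-1] and w[i-1] < w[j-1])          # primary
                   or (a[i-1] == a[j-1] + 1 and w[i-1] > w[j-1])))  # secondary
    bonus = sum(1 for i in range(n) if a[i] < 0 and w[i] > 0)
    return inv + bonus - len(dv)

# toy path: one primary inversion, one secondary, one unit of bonus dinv
print(dinv([0, 0, -1], [1, 2, 1]))  # → 3
```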
Finally, we recall two classical definitions.
\begin{definition}
Let $p_1, \dots, p_k$ be a sequence of integers. We define its \emph{descent set} \[ \mathsf{Des}(p_1, \dots, p_k) \coloneqq \{ 1 \leq i \leq k-1 \mid p_i > p_{i+1} \} \] and its \emph{major index} $\mathsf{maj}(p_1, \dots, p_k)$ to be the sum of the elements of the descent set.
\end{definition}
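These are standard statistics; a minimal transcription (ours) in code, checked on the sequence $12431411$ underlying the diagonal word used later in the paper:

```python
def descent_set(p):
    """Des(p_1, ..., p_k): 1-indexed positions i with p_i > p_{i+1}."""
    return {i for i in range(1, len(p)) if p[i - 1] > p[i]}

def maj(p):
    """major index: sum of the elements of the descent set."""
    return sum(descent_set(p))

print(descent_set([1, 2, 4, 3, 1, 4, 1, 1]))  # → {3, 4, 6}
print(maj([1, 2, 4, 3, 1, 4, 1, 1]))          # → 13
```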
\section{Symmetric functions}
For all the undefined notations and the unproven identities, we refer to \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Section~1}, where definitions, proofs and/or references can be found.
We denote by $\Lambda$ the graded algebra of symmetric functions with coefficients in $\mathbb{Q}(q,t)$, and by $\<\, , \>$ the \emph{Hall scalar product} on $\Lambda$, defined by declaring that the Schur functions form an orthonormal basis.
The standard bases of the symmetric functions that will appear in our calculations are the monomial $\{m_\lambda\}_{\lambda}$, complete $\{h_{\lambda}\}_{\lambda}$, elementary $\{e_{\lambda}\}_{\lambda}$, power $\{p_{\lambda}\}_{\lambda}$ and Schur $\{s_{\lambda}\}_{\lambda}$ bases.
For a partition $\mu \vdash n$, we denote by \[ \H_\mu \coloneqq \H_\mu[X] = \H_\mu[X; q,t] = \sum_{\lambda \vdash n} \widetilde{K}_{\lambda \mu}(q,t) s_{\lambda} \] the \emph{(modified) Macdonald polynomials}, where \[ \widetilde{K}_{\lambda \mu} \coloneqq \widetilde{K}_{\lambda \mu}(q,t) = K_{\lambda \mu}(q,1/t) t^{n(\mu)} \] are the \emph{(modified) Kostka coefficients} (see \cite{Haglund-Book-2008}*{Chapter~2} for more details).
Macdonald polynomials form a basis of the ring of symmetric functions $\Lambda$. This is a modification of the basis introduced by Macdonald \cite{Macdonald-Book-1995}.
If we identify the partition $\mu$ with its Ferrers diagram, i.e. with the collection of cells $\{(i,j)\mid 1\leq i\leq \mu_j, 1\leq j\leq \ell(\mu)\}$, then for each cell $c\in \mu$ we refer to the \emph{arm}, \emph{leg}, \emph{co-arm} and \emph{co-leg} (denoted respectively as $a_\mu(c), l_\mu(c), a_\mu'(c), l_\mu'(c)$) as the number of cells in $\mu$ that are strictly to the right, above, to the left and below $c$ in $\mu$, respectively.
Let $M \coloneqq (1-q)(1-t)$. For every partition $\mu$, we define the following constants:
\begin{align*}
B_{\mu} & \coloneqq B_{\mu}(q,t) = \sum_{c \in \mu} q^{a_{\mu}'(c)} t^{l_{\mu}'(c)}, \\
D_{\mu} & \coloneqq D_{\mu}(q,t) = MB_{\mu}(q,t)-1, \\
T_{\mu} & \coloneqq T_{\mu}(q,t) = \prod_{c \in \mu} q^{a_{\mu}'(c)} t^{l_{\mu}'(c)} = q^{n(\mu')} t^{n(\mu)} = e_{\vert \mu \vert}[B_\mu], \\
\Pi_{\mu} & \coloneqq \Pi_{\mu}(q,t) = \prod_{c \in \mu / (1,1)} (1-q^{a_{\mu}'(c)} t^{l_{\mu}'(c)}), \\
w_{\mu} & \coloneqq w_{\mu}(q,t) = \prod_{c \in \mu} (q^{a_{\mu}(c)} - t^{l_{\mu}(c) + 1}) (t^{l_{\mu}(c)} - q^{a_{\mu}(c) + 1}).
\end{align*}
We will make extensive use of the \emph{plethystic notation} (cf. \cite{Haglund-Book-2008}*{Chapter~1}).
We need to introduce several linear operators on $\Lambda$.
\begin{definition}[\protect{\cite[3.11]{Bergeron-Garsia-ScienceFiction-1999}}]
\label{def:nabla}
We define the linear operator $\nabla \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \nabla \H_\mu = T_\mu \H_\mu. \]
\end{definition}
\begin{definition}
\label{def:pi}
We define the linear operator $\mathbf{\Pi} \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \mathbf{\Pi} \H_\mu = \Pi_\mu \H_\mu \] where we conventionally set $\Pi_{\varnothing} \coloneqq 1$.
\end{definition}
\begin{definition}
\label{def:delta}
For $f \in \Lambda$, we define the linear operators $\Delta_f, \Delta'_f \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \Delta_f \H_\mu = f[B_\mu] \H_\mu, \qquad \qquad \Delta'_f \H_\mu = f[B_\mu-1] \H_\mu. \]
\end{definition}
Observe that on the vector space of symmetric functions homogeneous of degree $n$, denoted by $\Lambda^{(n)}$, the operator $\nabla$ equals $\Delta_{e_n}$.
We also introduce the Theta operators, first defined in \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2019}.
\begin{definition}
\label{def:theta}
For $f \in \Lambda$, we define the linear operator $\Theta_f \colon \Lambda \rightarrow \Lambda$ as \[ \Theta_f F[X] = \mathbf{\Pi} \; f \left[ \frac{X}{M} \right] \mathbf{\Pi}^{-1} F[X]. \]
\end{definition}
It is clear that $\Theta_f$ is linear, and moreover, if $f$ is homogeneous of degree $k$, then so is $\Theta_f$, i.e. \[\Theta_f \Lambda^{(n)} \subseteq \Lambda^{(n+k)} \qquad \text{ for } f \in \Lambda^{(k)}. \]
It is convenient to introduce the so-called $q$-notation. In general, a $q$-analogue of an expression is a generalisation involving a parameter $q$ that reduces to the original one for $q \rightarrow 1$.
\begin{definition}
For a natural number $n \in \mathbb{N}$, we define its $q$-analogue as \[ [n]_q \coloneqq \frac{1-q^n}{1-q} = 1 + q + q^2 + \dots + q^{n-1}. \]
\end{definition}
Given this definition, one can define the $q$-factorial and the $q$-binomial as follows.
\begin{definition}
We define \[ [n]_q! \coloneqq \prod_{k=1}^{n} [k]_q \quad \text{and} \quad \qbinom{n}{k}_q \coloneqq \frac{[n]_q!}{[k]_q![n-k]_q!} \]
\end{definition}
\begin{definition}
For $x$ any variable and $n \in \mathbb{N} \cup \{ \infty \}$, we define the \emph{$q$-Pochhammer symbol} as \[ (x;q)_n \coloneqq \prod_{k=0}^{n-1} (1-xq^k) = (1-x) (1-xq) (1-xq^2) \cdots (1-xq^{n-1}). \]
\end{definition}
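Numerically, these definitions read as follows (a sketch of ours; it evaluates at integer $q$ rather than manipulating polynomials, which suffices for spot checks):

```python
from math import prod

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^(n-1)"""
    return sum(q**k for k in range(n))

def q_factorial(n, q):
    return prod(q_int(k, q) for k in range(1, n + 1))

def q_binomial(n, k, q):
    # exact for integer q: Gaussian binomials take integer values there
    return q_factorial(n, q) // (q_factorial(k, q) * q_factorial(n - k, q))

def q_pochhammer(x, q, n):
    """(x; q)_n = (1-x)(1-xq)...(1-xq^(n-1))"""
    return prod(1 - x * q**k for k in range(n))

print(q_int(4, 2))           # → 15
print(q_binomial(4, 2, 2))   # → 35
print(q_pochhammer(2, 3, 3)) # → -85
```

At $q = 1$ one recovers the classical quantities, e.g. $\binom{4}{2} = 6$.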
We can now introduce yet another family of symmetric functions.
\begin{definition}
\label{def:Enk}
For $0 \leq k \leq n$, we define the symmetric function $E_{n,k}$ by the expansion \[ e_n \left[ X \frac{1-z}{1-q} \right] = \sum_{k=0}^n \frac{(z;q)_k}{(q;q)_k} E_{n,k}. \]
\end{definition}
Notice that $E_{n,0} = \delta_{n,0}$. Setting $z=q^j$ we get \[ e_n \left[ X \frac{1-q^j}{1-q} \right] = \sum_{k=0}^n \frac{(q^j;q)_k}{(q;q)_k} E_{n,k} = \sum_{k=0}^n \qbinom{k+j-1}{k}_q E_{n,k} \] and in particular, for $j=1$, we get \[ e_n = E_{n,0} + E_{n,1} + E_{n,2} + \cdots + E_{n,n}, \] so these symmetric functions split $e_n$, in some sense.
We care in particular about the following identity.
\begin{proposition}[\protect{\cite[Theorem~4]{Can-Loehr-2006}}]
\label{prop:pn_Enk}
\[ \omega(p_n) = \sum_{k=1}^n \frac{[n]_q}{[k]_q} E_{n,k} \]
\end{proposition}
The Theta operators will be useful to restate the Delta conjectures in a new fashion, thanks to the following results.
\begin{theorem}[\protect{\cite[Theorem~3.1]{DAdderio-Iraci-VandenWyngaerd-Theta-2019}}]
\label{thm:theta-en}
\[ \Theta_{e_k} \nabla e_{n-k} = \Delta'_{e_{n-k-1}} e_n \]
\end{theorem}
\begin{theorem}[\protect{\cite[Theorem~3.3]{DAdderio-Iraci-VandenWyngaerd-Theta-2019}}]
\label{thm:theta-pn}
\[ \frac{[n]_q}{[n-k]_q} \Theta_{e_k} \nabla \omega(p_{n-k}) = \frac{[n-k]_t}{[n]_t} \Delta_{e_{n-k}} \omega(p_n) \]
\end{theorem}
\begin{corollary}
\[ \frac{[n]_t}{[n-k]_t} \Theta_{e_k} \nabla \omega(p_{n-k}) = \frac{[n-k]_q}{[n]_q} \Delta_{e_{n-k}} \omega(p_n) \]
\end{corollary}
\section{Delta conjectures}
By \emph{Delta conjectures} we refer to a family of conjectures that provide a combinatorial interpretation of certain symmetric functions that arise from the Delta operators and show positivity properties.
The first and most famous of the Delta conjectures is known as \emph{shuffle conjecture}, now a theorem by E. Carlsson and A. Mellit (see \cite{Carlsson-Mellit-ShuffleConj-2018}).
\begin{theorem}[Shuffle theorem]
\[ \nabla e_n = \sum_{\pi \in \LD(n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi \]
\end{theorem}
The shuffle theorem is especially important because $\nabla e_n$ has another interpretation, as the bigraded Frobenius characteristic of the $S_n$ module of the \emph{diagonal harmonics}. This is one of the facts that first motivated the study of Macdonald polynomials, and it has been proved by M. Haiman in \cite{Haiman-Vanishing-2002}. See also \cite{Haiman-nfactorial-2001} for further details.
The \emph{Delta conjecture} is a generalisation of the shuffle conjecture, introduced by J. Haglund, J. Remmel, and A. Wilson in \cite{Haglund-Remmel-Wilson-2018}. In the same paper, the authors suggest that an even more general conjecture should hold, which we call \emph{generalised Delta conjecture}. It reads as follows.
\begin{conjecture}[(Generalised) Delta conjecture, valley version]
\label{conj:valley-delta}
\[ \Delta_{h_m} \Delta'_{e_{n-k-1}} e_n = \sum_{\pi \in \LD(m,n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
For $m=0$ this conjecture first appears together with the rise version in \cite{Haglund-Remmel-Wilson-2018}. The full statement, together with a proof of the case $q=0$, has been given by D. Qiu and A. Wilson in \cite{Qiu-Wilson-2019}.
\begin{conjecture}[(Generalised) Delta conjecture, rise version]
\[ \Delta_{h_m} \Delta'_{e_{n-k-1}} e_n = \sum_{\pi \in \LD(m,n)^{\ast k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
The rise version of the Delta conjecture is simply the case $m=0$ of the general case.
Recalling that $\nabla \rvert_{\Lambda^{(n)}} = \Delta'_{e_{n-1}} \rvert_{\Lambda^{(n)}}$, it is clear that for $k=0$ both the versions of the Delta conjecture reduce to the shuffle theorem.
The \emph{square conjecture} was first suggested by N. Loehr and G. Warrington in \cite{Loehr-Warrington-square-2007}, and it was then proved by E. Sergel in \cite{Leven-2016} using the shuffle theorem.
\begin{theorem}[Square Theorem]
\[ \nabla \omega(p_n) = \sum_{\pi \in \LSQ(n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{theorem}
Unfortunately, adding zero labels and decorated rises to square paths in the trivial way and $q,t$-counting the resulting objects with respect to the bistatistic $(\mathsf{dinv}, \mathsf{area})$ gives a polynomial that does not match the expected symmetric function. This issue has been addressed by M. D'Adderio and the authors, who stated the \emph{generalised Delta square conjecture} in \cite{DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019}.
\begin{conjecture}[(Generalised) Delta square conjecture, rise version]
\[ \frac{[n-k]_t}{[n]_t} \Delta_{h_m} \Delta_{e_{n-k}} \omega(p_n) = \sum_{\pi \in \LSQ(m,n)^{\ast k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
The rise version of the Delta square conjecture is simply the case $m=0$ of the general case.
The square conjectures used to lack a valley version. Computational evidence suggests the following, checked by computer up to $n=6$.
\begin{conjecture}[(Generalised) Delta square conjecture, valley version]
\label{conj:gen-valley-square}
\[ \frac{[n-k]_q}{[n]_q} \Delta_{h_m} \Delta_{e_{n-k}} \omega(p_n) = \sum_{\pi \in \LSQ(m,n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
Notice that the symmetric function we propose for the valley version differs from the one appearing in the rise version (for $m=0$) as the multiplicative factor is the ratio of two $q$-analogues instead of two $t$-analogues. This suggests a potential extension of the conjecture to a version with both decorated rises and contractible valleys, possibly using the Theta operators appearing in \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2019}. The power series associated to the obvious combinatorial extension, however, seems to be a quasi-symmetric function that is not symmetric, and thus further investigation is required to find suitable statistics.
We can restate it in terms of Theta operators as follows.
\begin{conjecture}[(Generalised) Delta square conjecture, valley version]
\label{conj:gen-valley-square-theta}
\[ \frac{[n]_t}{[n-k]_t} \Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k}) = \sum_{\pi \in \LSQ(m,n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
We need to state a refinement of the Delta conjecture, valley version, that naturally arises when stating it in terms of Theta operators. But first, we need another combinatorial definition. Let \[ \LSQ(m, n \backslash r)^{\bullet k} \coloneqq \{ \pi \in \LSQ(m,n)^{\bullet k} \mid \# \{ i \not \in dv \colon a_i = - \mathsf{shift}(\pi) \land w_i > 0 \} = r \}, \] which is the set of labelled valley-decorated square paths of size $m+n$ with $m$ labels equal to $0$ and $k$ decorations such that exactly $r$ steps on the bottom-most diagonal are neither $0$ labels nor decorated valleys, and let \[ \LD(m, n \backslash r)^{\bullet k} \coloneqq \LSQ(m, n \backslash r)^{\bullet k} \cap \LD(m, n)^{\bullet k}, \] which is the subset of corresponding labelled valley-decorated Dyck paths. We state the following.
\begin{conjecture}[Touching Delta conjecture, valley version]
\label{conj:valley-delta-touching}
\[ \Theta_{e_k} \nabla E_{n-k, r} = \sum_{\pi \in \LD(n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
It is immediate that Conjecture~\ref{conj:valley-delta-touching} implies the case $m=0$ of Conjecture~\ref{conj:valley-delta}, as it is enough to sum over $r$ and then apply Theorem~\ref{thm:theta-en}.
We need to state the same refinement for the generalised version too.
\begin{conjecture}[Generalised touching Delta conjecture, valley version]
\label{conj:gen-valley-delta-touching}
\[ \Delta_{h_m} \Theta_{e_k} \nabla E_{n-k, r} = \sum_{\pi \in \LD(m, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
We now want to state yet another version of the Delta square conjecture, using the set $\LSQ'(m,n)^{\bullet k}$ previously introduced.
\begin{conjecture}[Modified Delta square conjecture, valley version]
\label{conj:valley-square-2}
\[ \Theta_{e_k} \nabla \omega(p_{n-k}) = \sum_{\pi \in \LSQ'(n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
This conjecture is new, and it is cleaner than the other forms of the Delta square conjecture, as it does not have any multiplicative correcting factor. It also extends nicely to the $m>0$ case, as follows.
\begin{conjecture}[Modified generalised Delta square conjecture, valley version]
\label{conj:gen-valley-square-2}
\[ \Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k}) = \sum_{\pi \in \LSQ'(m,n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
Our goal is to show that Conjecture~\ref{conj:gen-valley-delta-touching} implies Conjecture~\ref{conj:gen-valley-square-2}, and as a corollary that Conjecture~\ref{conj:valley-delta-touching} implies Conjecture~\ref{conj:valley-square-2}.
\section{Schedule numbers for repeated labels}
\begin{definition}
Let $(\pi, w, dv)$ be a valley-decorated labelled square path with shift $s$. For $i \geq 0$ we set $\rho_i$ to be the marked word in the alphabet $\mathbb{N}$ consisting of the labels appearing in the $i$-th diagonal, marked with a $\bullet$ if it labels a decorated valley, in increasing order, where we consider $c <\;\stackrel{\bullet}{\raisebox{0 em}{c}}\;<c+1$. The \emph{diagonal word} of $(\pi, w, dv)$ is $\mathsf{dw}(\pi, w, dv) \coloneqq \rho_\ell \dots \rho_{1} \rho_{0}$.
\end{definition}
For example the diagonal word of the path in Figure~\ref{fig:diagonal-word} is $1243 \!\! \stackrel{\bullet}{\raisebox{0 em}{1}} \!\! 4 1 \!\! \stackrel{\bullet}{\raisebox{0 em}{1}}$. Notice that the $\rho_i$ are the runs of $\mathsf{dw}(\pi, w, dv)$, i.e. the maximal weakly increasing substrings (disregarding decorations).
\begin{figure}[!ht]
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (6,4) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (3.5,1.5) {$\bullet$};
\node at (5.5,4.5) {$\bullet$};
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (4.5,1.5) {$1$};
\draw (4.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$1$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (6.5,4.5) {$1$};
\draw (6.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$3$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$4$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$1$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\caption{Square path of diagonal word $1243 \!\! \stackrel{\bullet}{\raisebox{0 em}{1}} \!\! 4 1 \!\! \stackrel{\bullet}{\raisebox{0 em}{1}}$}
\label{fig:diagonal-word}
\end{figure}
\begin{definition}
Let $z \coloneqq \mathsf{dw}(\pi, w, dv)$ be the diagonal word of a valley-decorated labelled square path $(\pi, w, dv)$ such that $z = \rho_\ell \cdots \rho_0$, where the $\rho_i$'s are its runs. We define its \emph{$i$-th run multiplicity functions}
\begin{align*}
z_i \colon \mathbb{N} & \rightarrow \mathbb{N} \\
x & \mapsto \# \{ \text{undecorated occurrences of } x \text{ in } \rho_i \} \\
z_i^\bullet \colon \mathbb{N} & \rightarrow \mathbb{N} \\
x & \mapsto \# \{ \text{decorated occurrences of } x \text{ in } \rho_i \}
\end{align*}
Notice that each of these functions has finite support.
\end{definition}
\begin{definition}
Let $z=\rho_\ell \cdots \rho_0$ be the diagonal word of a square path $(\pi,w)$ with shift $s$, where the $\rho_i$ are the runs of $z$. We set $\tilde \rho_i$ to be the subword obtained from $\rho_i$ by deleting its decorated numbers, and $\rho'_i$ the subword obtained from $\tilde \rho_i$ by deleting the zeros. For $c \in z$ and $i \in \{0,\dots, \ell\}$, we define its \emph{schedule numbers} $w_{i,s}(c)$ as follows: for $c \in \mathbb{N}$
\begin{align*}
w_{i,s}(c) &\coloneqq
\begin{cases}
\#\{d \in \tilde\rho_i \mid d>c \} + \#\{d \in \tilde\rho_{i-1} \mid d<c \} & \text{if } i\in \{s+1,\dots, \ell\} \\
\#\{d \in \tilde\rho_i \mid d>c \} + 1 - \delta_{c,0} & \text{if } i=s \\
\#\{d \in \tilde\rho_i \mid d<c \} + \#\{d \in \tilde\rho_{i+1} \mid d>c \} & \text{if } i\in \{0,\dots, s-1\}
\end{cases}\\
w_{i,s}^\bullet (c) &\coloneqq \#\{d \in \tilde\rho_i \mid d<c \}+\#\{d \in \tilde\rho_{i+1} \mid d>c \} - \delta_{c,0}\delta_{i,s-1}
\end{align*}
Note that these are cardinalities of multisets, so they take multiplicities into account.
\end{definition}
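The case analysis for the undecorated schedule numbers can be transcribed directly (our sketch; the runs are passed as the lists $\tilde\rho_0, \dots, \tilde\rho_\ell$, already stripped of decorated entries). For the diagonal word of the figure above, whose shift is $s=3$, one gets for instance $w_{3,3}(1) = 3$.

```python
def schedule(runs, i, s, c):
    """w_{i,s}(c), where runs = [rho~_0, ..., rho~_l] are the runs of
    the diagonal word with the decorated entries removed."""
    if i > s:
        return (sum(1 for d in runs[i] if d > c)
                + sum(1 for d in runs[i - 1] if d < c))
    if i == s:
        return sum(1 for d in runs[i] if d > c) + 1 - (c == 0)
    # i < s
    return (sum(1 for d in runs[i] if d < c)
            + sum(1 for d in runs[i + 1] if d > c))

# runs of 1243(•1)41(•1) with decorations removed, from rho~_0 to rho~_3; s = 3
runs = [[1], [4], [3], [1, 2, 4]]
print(schedule(runs, 3, 3, 1))  # → 3
print(schedule(runs, 3, 3, 4))  # → 1
print(schedule(runs, 2, 3, 3))  # → 1
```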
\begin{theorem}
\label{thm:factorisation}
Let $z$ be a marked word in the alphabet $\mathbb{N}$ and let $\rho_\ell, \dots, \rho_0$ be the runs of $z$, so that $z = \rho_\ell \cdots \rho_0$. Let $b(z,s) \coloneqq \sum_{i=0}^{s-1} \sum_{c \in \mathbb{N}} \left( z_i(c) - z_{i-1}^{\bullet}(c) \right)$ and $x^z\coloneqq \prod_{c\in z}x_c \rvert_{x_0 = 1}$. Then
\[ \sum_{\substack{\pi \in \LSQ(n)^{\bullet k} \\ \mathsf{shift}(\pi) = s \\ \mathsf{dw}(\pi) = z }} q^{\mathsf{dinv}(\pi)}t^{\mathsf{area}(\pi)} x^\pi
= t^{\mathsf{maj}(z)} q^{b(z,s)} \prod_{i=0}^\ell\left( \prod_{c \in \mathbb{N}} \qbinom{ w_{i,s}(c) + z_i(c) - 1}{z_i(c)}_q q^{z_i^\bullet (c)\choose 2}\qbinom{w_{i,s}^\bullet (c)}{z_i^\bullet(c)}_q \right)x^z. \]
\end{theorem}
The proof of this result is similar to the one described by Haglund and Sergel in \cite[Theorem 3.2]{Haglund-Sergel-2019} for Dyck paths, except that we consider repeated labels. For the sake of completeness, we repeat some of their arguments here.
\begin{proof}
Let us begin by noting that the right hand side of this equation consists of a finite number of terms different from $1$. Indeed $z_i(c) = z_i^\bullet(c) = 0$ for all but a finite number of elements of $\mathbb{N}$ and thus all but a finite number of $q$-binomials are equal to $1$, which means that the product is actually finite.
Next, observe that for any $\pi\in \LSQ(n)^{\bullet k}$ with $\mathsf{dw}(\pi)= z$ we trivially have $x^\pi=x^z$, so we only need to consider the $q,t$-enumerators. It is also not difficult to see that for any such path $\mathsf{maj}(z)=\mathsf{area}(\pi)$, indeed,
\begin{align*}
\mathsf{area}(\pi) & = \ell \cdot \#\rho_{\ell} + (\ell-1)\cdot \#\rho_{\ell-1} + \cdots + 1 \cdot \#\rho_1 \\
& = \#\rho_\ell + (\#\rho_\ell+\#\rho_{\ell-1}) + \cdots + (\#\rho_{\ell}+\#\rho_{\ell-1}+\cdots +\#\rho_1)= \mathsf{maj}(z).
\end{align*}
For the dinv, we will construct all the paths of a given diagonal word and shift, starting from the empty path, all the while keeping track of the dinv. We do this by applying the procedure described below. We illustrate each step with a (partial) construction of the paths with diagonal word $44223\!\!\stackrel{\bullet}{\raisebox{0 em}{3}}\stackrel{\bullet}{\raisebox{0 em}{3}}\stackrel{\bullet}{\raisebox{0 em}{0}}\!\!11\!\!\stackrel{\bullet}{\raisebox{0 em}{2}}$.
First, we construct all the undecorated paths whose diagonal word is $\tilde \rho_\ell \cdots \tilde \rho_0$. The decorations will be added in the last step.
\begin{enumerate}
\item Starting from the empty path, we insert the numbers of $\tilde\rho_s$ into the main diagonal, starting with the biggest labels, and draw the unique path whose vertical steps are labelled by these numbers. If $c$ is the biggest label in $\tilde\rho_s$ then $w_{s,s}(c)= 1$ and so $\qbinom{w_{s,s}(c)+z_s(c)-1}{z_s(c)}_q=1$; indeed there is only one way to insert $z_s(c)$ labels equal to $c$ into the main diagonal, and this creates $0$ units of dinv because steps with the same label do not create dinv among each other. In general, take $c$ a number of $\tilde \rho_s$. Then $w_{s,s}(c)$ is equal to the number of labels in $\tilde\rho_s$ bigger than $c$ that have already been inserted, plus $1$ if $c \neq 0$. So when inserting $z_s(c)$ numbers equal to $c$ into the diagonal there are $\binom{w_{s,s}(c)-1+z_s(c)}{z_s(c)}$ ways to do it (even if $c$ is $0$, as the leftmost label on the main diagonal cannot be a $0$, which explains the absence of the extra $+1$) and the contribution to the dinv is counted by the $q$-analogue. Indeed, we have to choose an interlacing between the $w_{s,s}(c)-1$ labels that are already there and the $z_s(c)$ labels equal to $c$, and each time a $c$ precedes a bigger label, one unit of dinv is created.
\begin{figure}[!ht]
\begin{center}
\includegraphics{tree_zerorun}
\end{center}
\caption{Insertion of undecorated numbers into diagonal $y=x$}
\end{figure}
\item Next we add the numbers in $\tilde\rho_{s+1}, \tilde\rho_{s+2}, \dots, \tilde\rho_\ell$ (in that order) by adding the numbers in $\tilde \rho_i$ in the $i$-th diagonal and drawing the unique path with this labelling. As before, we insert the numbers from biggest to smallest in each run. When adding the $z_i(c)$ numbers equal to $c$ of $\tilde \rho_i$, we can add them on top of smaller numbers in $\tilde \rho_{i-1}$ or directly northeast of the numbers in $\tilde\rho_i$ that have already been inserted, i.e. that are bigger. There may also be consecutive $c$'s in the diagonal. So there are $w_{i,s}(c)$ labels after which the $c$'s may be inserted. So the insertion of the $z_i(c)$ labels equal to $c$ uniquely corresponds to an interlacing of the $z_i(c)$ $c$'s and $w_{i,s}(c)$ possible insertion positions that does not start with a $c$ (indeed a $c$ must be inserted \emph{after} one of the $w_{i,s}(c)$ positions). Since each time one of the $c$'s precedes one of the $w_{i,s}(c)$ discussed labels, one unit of dinv is created (primary for bigger labels in $\tilde \rho_i$ and secondary for smaller labels in $\tilde \rho_{i-1}$), this dinv contribution is $q$-counted by $\qbinom{w_{i,s}(c)-1+z_i(c)}{z_i(c)}_q$.
\begin{figure}[!ht]
\begin{center}
\includegraphics{tree_positiverun}
\end{center}
\caption{Insertion of undecorated numbers into diagonal $y=x+1$}
\end{figure}
\item Next we add the numbers in runs $\tilde \rho_{s-1}, \tilde\rho_{s-2},\dots, \tilde\rho_0$ (in that order), this time from smallest to biggest numbers in each run. As before, we insert the numbers in run $\tilde\rho_i$ into the $i$-th diagonal. When adding the $z_i(c)$ numbers equal to $c$ in $\tilde\rho_i$, they can either be inserted directly underneath a bigger number from $\tilde \rho_{i+1}$ or directly southwest of a label of $\tilde \rho_i$ that has already been inserted, i.e. that is smaller. There may also be consecutive $c$'s in the diagonal. So there are again $w_{i,s}(c)$ places we may insert $c$. Note that the last $c$ must be underneath a bigger label from $\tilde \rho_{i+1}$, since the path must end with an east step. Thus choosing an interlacing between the $w_{i,s}(c)$ positions and the $z_i(c)$ $c$'s, ending with a label of the first kind, we get a unique insertion whose dinv is $q$-counted by $\qbinom{w_{i,s}(c)-1+z_i(c)}{z_i(c)}_q q^{\delta_{c>0} z_i(c)}$, since each time an insertion position precedes a $c$ a unit of dinv is created, and each positive label underneath the main diagonal creates a unit of bonus dinv.
\begin{figure}[!ht]
\begin{center}
\includegraphics{tree_negativerun}
\end{center}
\caption{Insertion of undecorated numbers into diagonal $y=x-1$}
\end{figure}
\item Finally, we add the decorated labels. We will insert, for each $i$, $z_i^\bullet(c)$ decorated valleys labelled $c$ into the $i$-th diagonal. Consider $S_{i,c}$, the set, of size $w_{i,s}^\bullet(c)$, of all the steps that have already been inserted and would create dinv with a decorated valley labelled $c$ inserted to its right in the $i$-th diagonal. In other words, these are the (undecorated) steps in the $(i+1)$-th diagonal with a label bigger than $c$ and the (undecorated) steps in the $i$-th diagonal with a label smaller than $c$.
The insertion is slightly different above and below the main diagonal. First, we consider the insertion above the main diagonal, take $i\in \{s,\dots, \ell\}$.
Choose a subset $T\subseteq S_{i,c}$ of size $z_i^\bullet(c)$. We insert one decorated valley labelled $c$ in the $i$-th diagonal to the right of each element of $T$ and to the left of the next element of $S_{i,c}$. We claim that there is always a unique way to do this and that this yields all the possible paths (surjectivity).
\textbf{Surjectivity and uniqueness.} There are two things to show. Firstly, we have to show that a decorated valley cannot be inserted to the left of all the elements of $S_{i,c}$. Indeed, we are inserting decorated valleys, and decorations must be placed on contractible valleys. This means that if $(\pi, w, dv)$ is a valley-decorated labelled square path and $j\in dv$ then
\begin{itemize}
\item either $a_{j-1}=a_{j}$ and $w_{j-1}<w_{j}$, in which case $(j-1,j)$ is a primary inversion.
\item or $a_{j-1}> a_j$, in which case there must be a $k<j$ such that $a_k= a_j$ and $a_{k+1}= a_k +1$ (in other words $k+1$ is a rise). Then either $w_{k+1}> w_j$, in which case $(k+1,j)$ is an inversion, or $w_{k+1}\leq w_j$, in which case $w_k < w_{k+1}\leq w_j$ so $(k,j)$ is an inversion.
\end{itemize}
It follows that when inserting a decorated valley at least one unit of dinv is created to its left, in other words, at least one element of $S_{i,c}$ is to its left.
Secondly, we must argue that there can never be two insertions in between two consecutive elements of $S_{i,c}$. In other words, there must always be an element of $S_{i,c}$ between two decorated valleys labelled $c$ in the $i$-th diagonal. Indeed
\begin{itemize}
\item if one such valley is followed by a vertical step $s$, its label must be bigger than $c$ and it must lie in the $i+1$-th diagonal. Thus, $s$ is an element of $S_{i,c}$.
\item if one such valley is followed by a horizontal step the path hits the $i$-th diagonal at a point $p$. The next step cannot be another decorated valley or it would not be a \emph{contractible} valley. Now we can apply a similar argument as above to the portion of the path starting from $p$ to deduce the existence of an element in $S_{i,c}$ that lies after $p$ and before the next occurrence of the relevant decorated valleys.
\end{itemize}
This also implies that there exist no two distinct ways of inserting our decorated valley in between two elements of $S_{i,c}$. So the insertion must be unique.
\textbf{Existence.} We now show that given an element $t$ of $S_{i,c}$, there always exists a way to insert a decorated valley of the discussed kind to its right and to the left of the next element of $S_{i,c}$. It will follow that we can insert the valleys after elements of $T$ one by one, and since each must lie strictly between elements of $S_{i,c}$ these insertions are independent. There are some cases and subcases to consider.
\begin{itemize}
\item [\textbf{Case 1.}]The step $t$ lies in the $i$-th diagonal, and thus is labelled with a number smaller than $c$, call it $S$.
\begin{itemize}
\item [\textbf{Case 1.1}] The step $t$ is followed by a vertical step.
\begin{itemize}
\item [\textbf{Case 1.1.1}] The label of the vertical step following $t$ is bigger than $c$. Call it $B$. Directly following $t$, insert a horizontal step followed by a decorated vertical step labelled $c$. Then continue the path with the vertical step labelled $B$; since $S<c<B$, the step labelled $c$ is a contractible valley and the condition on the columns of labelled paths is respected.
\begin{figure}[H]
\begin{center}
\includegraphics{Insertion111}
\end{center}
\end{figure}
\item [\textbf{Case 1.1.2}] The label of the vertical step following $t$ is smaller than $c$. Call it $\tilde S$. Now consider the portion of the path between the endpoint of $t$ and the first point $p$ where the path crosses the $i+1$-th diagonal with two consecutive horizontal steps, one ending and one beginning at $p$ ($p$ always exists since the path must return to the main diagonal). If, in this portion of the path, there is an occurrence of a vertical step contained in the $i+1$-th diagonal and labelled with a number bigger than $c$, call it $B$, then this step is an element of $S_{i,c}$. Insert, directly before this step labelled $B$, a decorated vertical step labelled $c$ and before that step a horizontal step. Since $B$ is bigger than $c$, the labelling is valid. Furthermore, since the discussed portion of the path stays weakly above the $i+1$-th diagonal, the step labelled $B$ must be preceded by a horizontal step and so the inserted valley is preceded by two horizontal steps and thus is contractible.
Otherwise, the discussed portion of the path does not contain an element of $S_{i,c}$. In that case, insert directly after the horizontal step ending at $p$ a horizontal step followed by a decorated vertical step labelled $c$ and continue the path with the horizontal step starting at $p$. Since the inserted decorated valley is preceded by two horizontal steps, it is always contractible.
\begin{figure}[H]
\begin{minipage}{.375 \textwidth}
\centering
\includegraphics{Insertion1121}
\end{minipage}%
\begin{minipage}{.375 \textwidth}
\centering
\includegraphics{Insertion1122}
\end{minipage}
\end{figure}
\end{itemize}
\item [\textbf{Case 1.2}] The step $t$ is followed by a horizontal step. In this case, we may simply insert, directly after $t$, a horizontal step followed by a decorated vertical step labelled $c$ and continue the path with the horizontal step that followed $t$. Since $S<c$, the valley we created is contractible.
\begin{figure}[H]
\begin{center}
\includegraphics{Insertion12}
\end{center}
\end{figure}
\end{itemize}
\item [\textbf{Case 2.}] The step $t$ lies in the $i+1$-th diagonal, and thus is labelled with a number bigger than $c$, call it $B$. The reasoning here is very similar to Case 1.1.2: after $t$ the path must cross the $i+1$-th diagonal. Depending on whether or not there is an element of $S_{i,c}$ in this portion of path, we have two types of insertions. We include a diagram of each situation.
\begin{figure}[H]
\begin{minipage}{.4 \textwidth}
\centering
\includegraphics{Insertion21}
\end{minipage}%
\begin{minipage}{.4 \textwidth}
\centering
\includegraphics{Insertion22}
\end{minipage}
\end{figure}
\end{itemize}
It follows that the insertion of $z_i^\bullet(c)$ decorated valleys labelled $c$ into the $i$-th diagonal is equivalent to the choice of a subset $T\subseteq S_{i,c}$ of size $z_i^\bullet(c)$. As mentioned above, each such insertion creates at least one unit of dinv, which can be thought of as being cancelled out by the subtraction of the number of decorated valleys in the computation of the dinv. Additionally, every pair of elements of $T$ creates one unit of dinv, as the decorated valley following the second one creates dinv with the first one. This explains the factor $q^{z_i^\bullet(c)\choose 2}$. Lastly, one unit of dinv is created each time an element of $S_{i,c}\setminus T$ precedes an element of $T$ in the path, which explains $\qbinom{w_{i,s}^\bullet(c)}{z_i^\bullet(c)}_q$.
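For reference, the count of subsets weighted in this way is the standard inversion interpretation of the Gaussian binomial (recalled here as a reminder, not as part of the construction): for a linearly ordered set $S$ of size $w$,
\[ \sum_{\substack{T \subseteq S \\ \# T = z}} q^{\#\{(x,y) \,\mid\, x \in S \setminus T,\ y \in T,\ x \text{ precedes } y\}} = \qbinom{w}{z}_q. \]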
Now for the insertion below the main diagonal, take $i\in \{0,\dots, s-1\}$. The argument is very similar, up to a few minor changes. Indeed, now it is no longer true that any decorated valley must create a unit of dinv with an element to its left. However, there must be an element of $S_{i,c}$ that is to the right of all the decorated valleys labelled $c$ in the $i$-th diagonal. Indeed let $t$ be such a decorated valley.
\begin{itemize}
\item if $t$ is followed by a vertical step, it must be contained in the $i+1$-th diagonal and its label must be bigger than $c$ so it must be an element of $S_{i,c}$;
\item if $t$ is followed by a horizontal step, the path hits the $i$-th diagonal at a point $p$, which lies beneath the main diagonal. Since the path must end with an east step above the main diagonal, we may deduce the existence of two consecutive vertical steps after $p$, the first one starting at the $i$-th diagonal. If the first of these steps has a label smaller than $c$, it is an element of $S_{i,c}$. If it is bigger than or equal to $c$, the succeeding vertical step must have a label bigger than $c$ and thus must be in $S_{i,c}$.
\end{itemize}
So given a subset $T\subseteq S_{i,c}$ of size $z_i^\bullet(c)$, we will insert a decorated valley in the $i$-th diagonal to the \emph{left} of each element in $T$ and to the right of all the elements of $S_{i,c}$ preceding it.
Except for when $T$ contains the very first step of $S_{i,c}$, we can recycle the existence-of-insertion argument given above. Indeed, we can replace all the elements of $T$ with their predecessors in $S_{i,c}$, obtaining a set $\tilde T$, and apply the described insertions to the right. There is just one subtlety: in that discussion, we used the fact that if the path crosses the $i$-th diagonal vertically, it must proceed to cross it again, horizontally. This is not necessarily true if $i\in \{0,\dots, s-1\}$. However, as we have shown, there must be at least one element of $S_{i,c}$ to the right of all the elements of $\tilde T$ and this is sufficient to apply the same argument.
The only thing that is now left to show is that, for $i < s-1$, or $i = s-1$ and $c > 0$, it is (uniquely) possible to insert a decorated valley labelled $c$ into the $i$-th diagonal, to the left of all the elements in $S_{i,c}$. If $i = s-1$ and $c=0$, then this is not possible. Indeed, by definition there cannot be a decorated valley labelled $0$ in the first row if the corresponding area letter is $-1$. Any other decorated valley labelled $0$ with area letter $-1$ must be preceded by two horizontal steps (or the valley would not be \emph{contractible}), forcing a positive label on the main diagonal (which is an element of $S_{s-1,0}$) to appear on its left. This restriction explains the presence of the extra $-1$ in the definition of $w_{s-1,s}^\bullet(0)$.
If it exists, let us call $f$ the first vertical step of the existing path contained in the $i$-th diagonal ($f$ is not necessarily an element of $S_{i,c}$ as its label, call it $F$, might be bigger than $c$).
\begin{itemize}
\item[\textbf{Case 1.}] The step $f$ is an element of, or occurs before, every element in $S_{i,c}$. Then one of two situations occurs. Either $f$ is preceded by a horizontal step, in which case that horizontal step must be preceded by another horizontal step; indeed, if it were preceded by a vertical step, this would lie in the $i$-th diagonal and so $f$ would not be the first. In that case insert a horizontal step followed by a vertical decorated step labelled $c$ in between these two horizontal steps. Or, $f$ is preceded by a vertical step, so the starting point of $f$ is a point where the path crosses the $i$-th diagonal vertically. Since the path started at $(0,0)$, there must be a point before $f$ where the path crosses this diagonal horizontally, with two consecutive horizontal steps (notice that this point must be unique by definition of $f$). These two horizontal steps must be preceded by a third horizontal step, because if not, this contradicts the definition of $f$. Insert a horizontal step followed by a decorated vertical step labelled $c$ directly after the first (from the left) of these three horizontal steps.
\begin{figure}[H]
\begin{minipage}{.38 \textwidth}
\centering
\includegraphics{Insertion311}
\end{minipage}%
\begin{minipage}{.48 \textwidth}
\centering
\includegraphics{Insertion312}
\end{minipage}
\end{figure}
\item[\textbf{Case 2.}] The leftmost element of $S_{i,c}$ is to the left of all (if any exist) the vertical steps in the $i$-th diagonal, in which case it must be a step $t$ in the $i+1$-th diagonal labelled with a number bigger than $c$, call it $B$. It follows that $t$ must be preceded by a horizontal step. Insert a horizontal step followed by a decorated vertical step labelled $c$ directly after this horizontal step.
\begin{figure}[H]
\begin{center}
\includegraphics{Insertion32}
\end{center}
\end{figure}
\end{itemize}
Essentially the same argument as before ensures that there cannot be two decorated valleys inserted between two elements of $S_{i,c}$ (or before all of its elements). This ensures the uniqueness of the insertion given a choice of $T$. Now any two inserted valleys must create one unit of dinv, and any element of $T$ succeeding an element of $S_{i,c}\setminus T$ creates a unit of dinv. Each inserted decorated non-zero valley below the diagonal creates a unit of bonus dinv by definition, which we can think of as getting cancelled out with the subtraction of the number of decorated non-zero valleys in the computation of the dinv. Furthermore, for each decorated zero valley below the diagonal, there must be a corresponding non-decorated non-zero valley, which also contributes one unit to the dinv that we can think of as cancelled out with the subtraction of the number of decorated zero valleys.
The contribution to the dinv of this procedure is thus $q$-counted by $q^{z_i^\bullet (c)\choose 2}\qbinom{w_{i,s}^\bullet (c)}{z_i^\bullet(c)}_q$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.8]{tree_decorations}
\end{center}
\caption{Insertion of $\stackrel{\bullet}{\raisebox{0 em}{2}}$ into diagonal $y=x-1$}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.8]{tree_decorations2}
\end{center}
\caption{Insertion of $\stackrel{\bullet}{\raisebox{0 em}{0}}$ into diagonal $y=x-1$}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.8]{tree_decorations3}
\end{center}
\caption{Insertion of two $\stackrel{\bullet}{\raisebox{0 em}{3}}$'s into diagonal $y=x-1$}
\end{figure}
\end{enumerate}
\end{proof}
\section{The valley Delta implies the valley square}
Let us define \[ \LSQ_{q,t;x}(z,s) \coloneqq \sum_{\substack{\pi \in \LSQ(n)^{\bullet k} \\ \mathsf{shift}(\pi) = s \\ \mathsf{dw}(\pi) = z}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi, \]
where $z$ is the diagonal word of a path in $\LSQ(n)^{\bullet k}$. We have the following.
\begin{theorem}
\label{thm:shift-by-1}
\[ \LSQ_{q,t;x}(z,s) = q^{\# \rho'_{s-1}-z^\bullet_{s-2}(0)} \frac{[\# \rho'_{s}-z_{s-1}^\bullet(0)]_q}{[\# \rho'_{s-1}-z_{s-2}^\bullet(0)]_q} \LSQ_{q,t;x}(z,s-1). \]
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:factorisation}, we have
\[ \LSQ_{q,t;x}(z,s) = t^{\mathsf{maj}(z)} q^{b(z,s)} \prod_{i=0}^\ell\left( \prod_{c \in \mathbb{N}} \qbinom{ w_{i,s}(c) + z_i(c) - 1}{z_i(c)}_q q^{z_i^\bullet (c)\choose 2}\qbinom{w_{i,s}^\bullet (c)}{z_i^\bullet(c)}_q \right)x^z. \]
and
\[ \LSQ_{q,t;x}(z,s-1) = t^{\mathsf{maj}(z)} q^{b(z,s-1)} \prod_{i=0}^\ell\left( \prod_{c \in \mathbb{N}} \qbinom{ w_{i,s-1}(c) + z_i(c) - 1}{z_i(c)}_q q^{z_i^\bullet (c)\choose 2}\qbinom{w_{i,s-1}^\bullet (c)}{z_i^\bullet(c)}_q \right)x^z. \]
Now, by definition we have $b(z,s) - b(z, s-1) = \# \rho'_{s-1}-z^\bullet_{s-2}(0)$. Also, for $c\neq 0$ we have that $w_{i,s}^\bullet(c)$ does not depend on $s$ and $w_{i,s}^\bullet(0) = w_{i,s-1}^\bullet(0)$ for $i\not\in \{s-2,s-1\}$. Furthermore for $c \in \mathbb{N}$ we have that $w_{i,s}(c) = w_{i,s-1}(c)$ for $i \not \in \{s-1, s\}$. By simplifying all these terms, we get
\begin{align*}
\LSQ_{q,t;x}(z,s) = q^{\# \rho'_{s-1}-z_{s-2}^\bullet(0)} &\prod_{c \in \mathbb{N}}
\frac{
\qbinom{ w_{s,s}(c) + z_s(c) - 1}{z_s(c)}_q
\qbinom{ w_{s-1,s}(c) + z_{s-1}(c) - 1}{z_{s-1}(c)}_q
}
{
\qbinom{ w_{s,s-1}(c) + z_s(c) - 1}{z_s(c)}_q
\qbinom{ w_{s-1,s-1}(c) + z_{s-1}(c) - 1}{z_{s-1}(c)}_q
}
\\
& \times \prod_{c\in \mathbb{N}}
\frac{
\qbinom{w_{s-1,s}^\bullet(0)}{z_{s-1}^\bullet(0)}_q
\qbinom{w_{s-2,s}^\bullet(0)}{z_{s-2}^\bullet(0)}_q
}
{
\qbinom{w_{s-1,s-1}^\bullet(0)}{z_{s-1}^\bullet(0)}_q
\qbinom{w_{s-2,s-1}^\bullet(0)}{z_{s-2}^\bullet(0)}_q
}
\LSQ_{q,t;x}(z,s-1)
\end{align*}
so all that's left to do is to compute the $q$-binomials and check that the product yields the desired result.
Recall that
\begin{align*}
w_{s,s}(c) & = 1 - \delta_{c,0} + \sum_{a > c} z_s(a), \qquad && w_{s-1,s}(c) = \sum_{a > c} z_s(a) + \sum_{a < c} z_{s-1}(a), \\
w_{s-1,s-1}(c) & = 1 - \delta_{c,0} + \sum_{a > c} z_{s-1}(a) \qquad && w_{s,s-1}(c) = \sum_{a > c} z_s(a) + \sum_{a < c} z_{s-1}(a).
\end{align*} and that
\begin{align*}
w_{s-1,s}^\bullet(0) = \#\rho'_s-1 && w_{s-2,s}^\bullet(0) = \#\rho'_{s-1} &&
w_{s-1,s-1}^\bullet(0) = \#\rho'_s && w_{s-2,s-1}^\bullet(0) = \#\rho'_{s-1}-1
\end{align*}
Let $m = \max \{ c \in \mathbb{N} \mid z_s(c) > 0 \text{ or } z_{s-1}(c) > 0 \}$. We have
\begin{align*}
&\prod_{c \in \mathbb{N}} \frac{\qbinom{ w_{s,s}(c) + z_s(c) - 1}{z_s(c)}_q}{\qbinom{ w_{s-1,s-1}(c) + z_{s-1}(c) - 1}{z_{s-1}(c)}_q} = \prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!} \cdot \frac{[w_{s,s}(c) + z_s(c) - 1]_q!}{[w_{s,s}(c) - 1]_q!} \cdot \frac{[w_{s-1,s-1}(c) - 1]_q!}{[w_{s-1,s-1}(c) + z_{s-1}(c) - 1]_q!} \\
& = \prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!} \cdot \frac{[\sum_{a \geq c} z_s(a) - \delta_{c,0}]_q!}{[\sum_{a > c} z_s(a) - \delta_{c,0}]_q!} \cdot \frac{[\sum_{a > c} z_{s-1}(a)-\delta_{c,0}]_q!}{[\sum_{a \geq c} z_{s-1}(a) - \delta_{c,0}]_q!} \\
& = \frac{[\sum_{a \geq 0} z_s(a)]_q!}{[\sum_{a > 0} z_s(a)]_q!} \cdot \frac{[\sum_{a > 0} z_{s-1}(a)]_q!}{[\sum_{a \geq 0} z_{s-1}(a)]_q!}\prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!} \prod_{c=1}^{m} \frac{[\sum_{a \geq c} z_s(a)]_q!}{[\sum_{a > c} z_s(a)]_q!} \cdot \frac{[\sum_{a > c} z_{s-1}(a)]_q!}{[\sum_{a \geq c} z_{s-1}(a)]_q!} \\
& = \frac{[\#\tilde\rho_s-1]_q! [\#\rho'_{s-1}-1]_q!}{[\# \rho'_s-1]_q![\# \tilde \rho_{s-1}-1]_q!} \frac{[\sum_{a \geq 1} z_s(a)]_q!}{[\sum_{a \geq 1} z_{s-1}(a)]_q!} \cdot \prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!} \prod_{c=1}^{m} \frac{[\sum_{a > c} z_s(a)]_q!}{[\sum_{a > c} z_s(a)]_q!} \cdot \frac{[\sum_{a > c} z_{s-1}(a)]_q!}{[\sum_{a > c} z_{s-1}(a)]_q!} \\
& = \frac{[\#\tilde\rho_s-1]_q! [\#\rho'_{s-1}-1]_q!}{[\# \rho'_s-1]_q![\# \tilde \rho_{s-1}-1]_q!} \frac{[\# \rho'_s]_q!}{[\# \rho'_{s-1}]_q!} \cdot \prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!} \\
& = \frac{[\#\tilde\rho_s-1]_q!}{[\# \tilde \rho_{s-1}-1]_q!} \frac{[\# \rho'_s]_q}{[\# \rho'_{s-1}]_q} \cdot \prod_{c=0}^{m} \frac{[z_{s-1}(c)]_q!}{[z_s(c)]_q!}
\end{align*}%
and
\begin{align*}
\prod_{c \in \mathbb{N}} \frac{\qbinom{ w_{s-1,s}(c) + z_{s-1}(c) - 1}{z_{s-1}(c)}_q}{\qbinom{ w_{s,s-1}(c) + z_s(c) - 1}{z_s(c)}_q} & = \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \cdot \frac{[w_{s-1,s}(c) + z_{s-1}(c) - 1]_q!}{[w_{s-1,s}(c) - 1]_q!} \cdot \frac{[w_{s,s-1}(c) - 1]_q!}{[w_{s,s-1}(c) + z_s(c) - 1]_q!} \\
& = \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \cdot \frac{[w_{s-1,s}(c) + z_{s-1}(c) - 1]_q!}{[w_{s,s-1}(c) + z_s(c) - 1]_q!} \\
& = \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \prod_{c=0}^{m} \frac{[\sum_{a > c} z_s(a) + \sum_{a \leq c} z_{s-1}(a) - 1]_q!}{[\sum_{a \geq c} z_s(a) + \sum_{a < c} z_{s-1}(a) - 1]_q!} \\
& = \frac{\prod_{c=0}^{m} [\sum_{a > c} z_s(a) + \sum_{a \leq c} z_{s-1}(a) - 1]_q!}{\prod_{c=0}^{m} [\sum_{a > c-1} z_s(a) + \sum_{a \leq c-1} z_{s-1}(a) - 1]_q!} \cdot \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \\
& = \frac{\prod_{c=0}^{m} [\sum_{a > c} z_s(a) + \sum_{a \leq c} z_{s-1}(a) - 1]_q!}{\prod_{c=-1}^{m-1} [\sum_{a > c} z_s(a) + \sum_{a \leq c} z_{s-1}(a) - 1]_q!} \cdot \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \\
& = \frac{[\sum_{a > m} z_s(a) + \sum_{a \leq m} z_{s-1}(a) - 1]_q!}{[\sum_{a > -1} z_s(a) + \sum_{a \leq -1} z_{s-1}(a) - 1]_q!} \cdot \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \\
& = \frac{[\sum_{a \leq m} z_{s-1}(a) - 1]_q!}{[\sum_{a \geq 0} z_s(a) - 1]_q!} \cdot \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!} \\
& = \frac{[\# \tilde \rho_{s-1} - 1]_q!}{[\# \tilde \rho_s - 1]_q!} \cdot \prod_{c=0}^{m} \frac{[z_s(c)]_q!}{[z_{s-1}(c)]_q!}
\end{align*}%
and
\begin{align*}
\frac{\qbinom{w_{s-1,s}^\bullet(0)}{z_{s-1}^\bullet(0)}_q \qbinom{w_{s-2,s}^\bullet(0)}{z_{s-2}^\bullet(0)}_q}{\qbinom{w_{s-1,s-1}^\bullet(0)}{z_{s-1}^\bullet(0)}_q\qbinom{w_{s-2,s-1}^\bullet(0)}{z_{s-2}^\bullet(0)}_q} & = \frac{[w_{s-1,s}^\bullet(0)]_q![w_{s-2,s}^\bullet(0)]_q![w_{s-1,s-1}^\bullet(0)-z_{s-1}^\bullet(0)]_q![w_{s-2,s-1}^\bullet(0)-z_{s-2}^\bullet(0)]_q!}{[w_{s-1,s-1}^\bullet(0)]_q![w_{s-2,s-1}^\bullet(0)]_q![w_{s-1,s}^\bullet(0)-z_{s-1}^\bullet(0)]_q![w_{s-2,s}^\bullet(0)-z_{s-2}^\bullet(0)]_q!}
\\&= \frac{[\#\rho'_{s}-1]_q![\#\rho'_{s-1}]_q!}{[\#\rho'_{s}]_q![\#\rho'_{s-1}-1]_q!}
\frac{[\#\rho'_{s}-z_{s-1}^\bullet(0)]_q![\#\rho'_{s-1}-1-z_{s-2}^\bullet(0)]_q!}{[\#\rho'_{s}-1-z_{s-1}^\bullet(0)]_q![\#\rho'_{s-1}-z_{s-2}^\bullet(0)]_q!}\\
&=\frac{[\#\rho'_{s-1}]_q}{[\#\rho'_{s}]_q}
\frac{[\#\rho'_{s}-z_{s-1}^\bullet(0)]_q}{[\#\rho'_{s-1}-z_{s-2}^\bullet(0)]_q}.
\end{align*}
Taking the product we get, after obvious cancellations
\begin{align*}
\frac{[\#\rho'_{s}-z_{s-1}^\bullet(0)]_q}{[\#\rho'_{s-1}-z_{s-2}^\bullet(0)]_q}
\end{align*}
which is exactly what we wanted to show.
\end{proof}
\begin{corollary}
If $\# \rho'_0 \neq 0$, then
\[ \LSQ_{q,t;x}(z,s) = q^{b(z,s)} \frac{[\# \rho'_s - z_{s-1}^\bullet(0)]_q}{[\# \rho'_0]_q} \LD_{q,t;x}(z). \]
\end{corollary}
\begin{proof}
It follows immediately by applying Theorem~\ref{thm:shift-by-1} $s$ times and recalling that $z_{-1}^\bullet(0) = 0$.
\end{proof}
\begin{corollary}
\label{cor:square-to-dyck}
\[ \LSQ'_{q,t;x}(n \backslash r)^{\bullet k} = \frac{[n-k]_q}{[r]_q} \LD_{q,t;x}(n \backslash r)^{\bullet k} \]
\end{corollary}
\begin{proof}
Given a marked word $z$ with $\ell$ runs and $\#\rho'_0 \neq 0$, we have
\begin{align*}
\sum_{s=0}^{\ell} \LSQ_{q,t;x}(z,s) & = \sum_{s=0}^{\ell} q^{b(z,s)} \frac{[\# \rho'_s - z_{s-1}^\bullet(0)]_q}{[\# \rho'_0]_q} \LD_{q,t;x}(z) \\
& = \frac{\sum_{s=0}^{\ell} q^{b(z,s)} [\# \rho'_s - z_{s-1}^\bullet(0)]_q}{[\# \rho'_0]_q} \LD_{q,t;x}(z) \\
& = \frac{[\sum_{s=0}^{\ell} (\# \rho'_s - z_{s-1}^\bullet(0))]_q}{[\# \rho'_0]_q} \LD_{q,t;x}(z)
\end{align*}
and now taking the sum over all the marked words $z$ of length $n$ with $k$ decorations and $\#\rho'_0 = r$, since for any such $z$, $\sum_{s=0}^{\ell} \# \rho'_s = n - k + \sum_{s=0}^{\ell} z_{s-1}^\bullet(0)$ (the total number of non-decorated positive labels), the thesis follows immediately.
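For convenience, the second-to-last equality is an instance of the elementary $q$-number telescoping
\[ \sum_{s=0}^{\ell} q^{a_0 + a_1 + \dots + a_{s-1}} \, [a_s]_q = [a_0 + a_1 + \dots + a_\ell]_q, \]
which follows by induction from $[a]_q + q^a [b]_q = [a+b]_q$; here it is applied with $a_s = \# \rho'_s - z_{s-1}^\bullet(0)$, the exponents $b(z,s)$ being the corresponding partial sums, as the recursion in Theorem~\ref{thm:shift-by-1} suggests.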
\end{proof}
\begin{theorem}[Conditional modified Delta square conjecture, valley version]
If Conjecture~\ref{conj:gen-valley-delta-touching} holds, then so does Conjecture~\ref{conj:gen-valley-square-2}. As a special case, if Conjecture~\ref{conj:valley-delta-touching} holds, then so does Conjecture~\ref{conj:valley-square-2}.
\end{theorem}
\begin{proof}
We recall the statement of Conjecture~\ref{conj:gen-valley-delta-touching}, which is
\[ \Delta_{h_m} \Theta_{e_k} \nabla E_{n-k, r} = \sum_{\pi \in \LD(m, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
Applying Corollary~\ref{cor:square-to-dyck}, we have
\[ \frac{[n-k]_q}{[r]_q} \Delta_{h_m} \Theta_{e_k} \nabla E_{n-k, r} = \sum_{\pi \in \LSQ'(m, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
Summing over $r$ and using Proposition~\ref{prop:pn_Enk}, we get
\[ \Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k}) = \sum_{\pi \in \LSQ'(m,n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi, \]
as desired.
\end{proof}
\section{Concluding remarks}\label{sec:concluding}
As we mentioned before, the slightly contrived conditions on the positions of the steps labelled with zeros can be reformulated quite naturally by considering a step labelled $0$ as the ``pushing'' of a step labelled $\infty$.
\begin{figure}[!ht]
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (3,2) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (5,5) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (2.5,1.5) {$\bullet$};
\node at (6.5,7.5) {$\bullet$};
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (3.5,1.5) {$\infty$};
\draw (3.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$2$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (5.5,4.5) {$\infty$};
\draw (5.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$1$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$3$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$4$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale = 0.6]
\draw[step=1.0, gray!60, thin] (0,0) grid (8,8);
\draw[gray!60, thin] (3,0) -- (8,5);
\draw[blue!60, line width=1.6pt] (0,0) -- (0,1) -- (1,1) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (5,2) -- (5,3) -- (5,4) -- (6,4) -- (6,5) -- (6,6) -- (6,7) -- (7,7) -- (7,8) -- (8,8);
\node at (3.5,1.5) {$\bullet$};
\node at (6.5,7.5) {$\bullet$};
\node at (2.5,1.5) {$\rightarrow$};
\node at (5.5,4.5) {$\rightarrow$};
\node at (0.5,0.5) {$2$};
\draw (0.5,0.5) circle (.4cm);
\node at (4.5,1.5) {$0$};
\draw (4.5,1.5) circle (.4cm);
\node at (5.5,2.5) {$2$};
\draw (5.5,2.5) circle (.4cm);
\node at (5.5,3.5) {$4$};
\draw (5.5,3.5) circle (.4cm);
\node at (6.5,4.5) {$0$};
\draw (6.5,4.5) circle (.4cm);
\node at (6.5,5.5) {$1$};
\draw (6.5,5.5) circle (.4cm);
\node at (6.5,6.5) {$3$};
\draw (6.5,6.5) circle (.4cm);
\node at (7.5,7.5) {$4$};
\draw (7.5,7.5) circle (.4cm);
\end{tikzpicture}
\end{minipage}
\caption{``Pushing'' of $\infty$'s.}
\label{fig:pushing-algorithm}
\end{figure}
Performing this manoeuvre does not change the dinv (if we define that the $\infty$'s under the main diagonal do not contribute to the bonus dinv and that there are no $\infty$'s on the base diagonal). The area changes by a constant amount equal to the number of zeros.
Several open problems arise from our discussion. There is currently no interpretation of the symmetric function $\Delta_{h_m} \Theta_{e_k} \nabla \omega(p_{n-k})$ in terms of rise-decorated square paths, and the corresponding schedule formula is also lacking. This is one of the very few instances where the valley version seems to be easier to treat than the rise version. Understanding the rise version better might lead to a unified valley-rise conjecture interpreting $\Theta_{e_j} \Theta_{e_k} \nabla e_{n-k-j}$.
Lastly, it would be nice to show that the valley Delta conjecture implies the generalised valley Delta conjecture. Given that, our results would be conditional only on the valley Delta conjecture. There might be a way to prove this using the ``pushing'' manoeuvre described above to interpret the behaviour of the $h_j^\perp$ operator. We have some symmetric function identities suggesting that this avenue might be fruitful, and some of these conjectural identities are strongly suggested by certain relations among the combinatorial objects.
\bibliographystyle{amsalpha}
\IEEEPARstart{D}{eep} neural networks have achieved remarkable success in various multi-media applications, where sufficiently large-scale and well-labeled data are present. However, manually labeling sufficient data is often time-consuming and labor-exhaustive, and thus can hardly meet the demands of the rapidly growing multi-media streaming or content sharing applications \cite{yao2019heterogeneous}. To address it, unsupervised domain adaptation (UDA) is becoming an increasingly attractive research topic in the multi-media community \cite{kouw2019review,li2020unsupervised,li2019joint,wang2020prototype,wang2021interbn}.
Specifically, by leveraging the discriminative knowledge from readily available and labeled source domains, UDA aims to establish a desired prediction model for the unlabeled target domain, which significantly relieves the burden of annotating magnanimous data. In the past few years, UDA has achieved several impressive results on various multi-media applications such as image recognition \cite{long2018conditional,jing2020adaptively}, semantic segmentation \cite{cheng2021dual}, text classification \cite{guo2020multi}, recommendation \cite{jiang2017deep} and action recognition \cite{luo2020adversarial}.
Currently, a series of theoretical results have been presented to guide the solution for addressing UDA \cite{ben2007analysis,ben2010theory}, which illustrate that the target risk is bounded by the between-domain discrepancy. Motivated by these theoretical results, domain alignment has become the dominant strategy for solving domain adaptation \cite{kouw2019review}, whose goal is to alleviate the between-domain discrepancy, so that the learned source model can be naturally adapted to the target domain. Specifically, these methods can be categorized into sample alignment and feature alignment \cite{kouw2019review}. In particular, sample alignment focuses on mitigating the domain shifts by the importance-weighting of samples \cite{tsuboi2009direct,sugiyama2008direct,huang2007correcting}. In contrast, feature alignment attempts to learn a domain-invariant feature or representation to alleviate the domain discrepancies by kernel matching \cite{pan2010domain}, adversarial learning \cite{ganin2015unsupervised}, prototype matching \cite{wang2020prototype}, optimal transport \cite{deng2021informative} and image reconstruction \cite{2018Generate}.
In practice, almost all existing UDA approaches require access to both the raw source and target data while learning to adapt. Unfortunately, due to the cost of storage or the protection of private information, in reality, the source data is often not at hand. In light of this, a more realistic scenario, called Unsupervised Source-Free Domain Adaptation (USFDA), is considered, in which only the source model can be accessed at adaptation \cite{liang2020we,li2020model,ye2021source}. In such a scenario with no source data, the explicit sample alignment approaches completely fail. \textcolor{black}{ Instead, the existing USFDA approaches attempt to transfer knowledge by \textit{implicitly} aligning the target features with the source by pseudo labeling \cite{liang2020we} or image generation \cite{li2020model}, both derived from the source model. However,
focusing on implicit alignment alone is often insufficient to address USFDA, since obtaining a desired alignment is usually difficult in the absence of source data, while such an implicit alignment cannot guarantee the success of the adaptation \cite{Siry2021inductive}. To relieve this dilemma, we are motivated to look back upon the origin of USFDA: \textit{how to guarantee the generalization performance of the learned model on the target domain, without access to the original source samples?} }
For this purpose, we aim to explore a new insight to relieve this dilemma in the existing USFDAs. Specifically,
a novel target generalization error bound is derived, which incorporates a between-domain discrepancy term and a new model-smoothness term. In contrast to the existing theoretical works, this is the first time such a term is considered in adaptation. This term indicates that the model output should be smooth/consistent in the neighborhood of each target sample, which instructs us to utilize such neighbor information for guiding the knowledge transfer. It should be noted that the existing USFDAs purely focus on mitigating the between-domain discrepancy, while all of them neglect the neighbor information (i.e., model smoothness). Thus, motivated by this theoretical perspective, we focus on optimizing this term on the target domain to boost the adaptation ability of the existing USFDAs.
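As a minimal sketch of this idea (our own illustration, not the exact implementation), the smoothness of a model $f$ around a target sample $x$ can be quantified by the squared Frobenius norm of the input-output Jacobian, approximated here by central finite differences on a toy model:

```python
def jacobian_norm_sq(f, x, eps=1e-4):
    """Approximate the squared Frobenius norm of the Jacobian of f at x
    with central finite differences (illustrative; in practice one would
    use automatic differentiation)."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        for a, b in zip(f(xp), f(xm)):
            d = (a - b) / (2 * eps)  # one Jacobian entry
            total += d * d
    return total

# toy linear model f(x) = W x: its Jacobian is W itself, so the squared
# Frobenius norm should match the sum of squared entries of W (= 14)
W = [[1.0, 2.0], [0.0, 3.0]]
f = lambda x: [sum(w * xi for w, xi in zip(row, x)) for row in W]
jn = jacobian_norm_sq(f, [0.5, -1.0])
```

Penalizing such a quantity during fine-tuning discourages rapid variation of the model output in a neighborhood of each target sample, which is precisely the model-smoothness term appearing in the bound.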
Driven by this, a quite simple Jacobian Norm (JN) regularizer, derived from model smoothness, is designed as a plug-in unit that mitigates the dilemma and further boosts the performance of the existing USFDAs; it is used to \textit{implicitly} force the smoothness/consistency of the model output in the neighborhood of each target sample. Consequently, it takes advantage of both the target data and the source model, as shown in Figure \ref{fig1}. Due to the convenience of the pseudo-labeling strategy, we adopt the prevailing pseudo-labeling approach as a baseline to potentially reduce the between-domain discrepancy. In its implementation, we freeze the pre-trained classifier and fine-tune the feature encoding module by minimizing the JN-regularized objective to boost its performance. In the end, we conduct extensive experiments on several domain adaptation datasets with different sample sizes. The experimental results demonstrate the significant superiority of our model in USFDA. The contributions of this paper are as follows:
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figurea.png}
\caption{We propose a Jacobian norm regularizer, which effectively utilizes the neighbor information of the target sample to boost the performance of USFDA.}
\label{fig1}
\end{figure*}
\begin{itemize}
\item We theoretically provide a brand-new target generalization bound based on the Total Variation distance \cite{villani2009optimal} and model smoothness, which provides a novel insight for solving USFDA.
\item We develop a simple yet general JN regularizer to boost the performance of USFDA, which can be easily incorporated into any existing USFDA methods as a plug-in unit with a few lines of code increased.
\item We empirically find that JN regularizer can significantly improve the performance of the existing USFDA method, which achieves competitive results on multiple datasets.
\end{itemize}
The remainder of this paper is organized as follows. In Section \ref{section2}, we briefly overview unsupervised domain adaptation and unsupervised source-free domain adaptation. In Section \ref{section3}, we present the problem definition and derive a brand-new target generalization error bound. In Section \ref{sec4}, we develop the JN regularizer and the entire USFDA model to address USFDA. The experimental results and the post-hoc analysis are reported in Section \ref{sec5}. In the end, we conclude the entire paper with future research directions in Section \ref{sec6}.
\section{Related Works}
In this section, we present the most related research on UDA/USFDA and highlight the differences between these methods and ours.
\label{section2}
\subsection{Unsupervised Domain Adaptation}
Recent practices on UDA usually attempt to minimize the domain discrepancy for knowledge transfer. Following this, multiple domain adaptation techniques have been developed, which can be summarized into the sample alignment \cite{tsuboi2009direct,sugiyama2008direct,huang2007correcting} and feature alignment \cite{pan2010domain,fernando2013unsupervised,ganin2015unsupervised}. In particular, the sample alignment methods focus on mitigating the between-domain divergence such as the $\mathcal{A}$-distance \cite{huang2007correcting}, Maximum Mean Discrepancy (MMD) \cite{sugiyama2008direct}, or KL-divergence \cite{tsuboi2009direct} through re-weighting the individual samples. In contrast, the feature alignment methods generate the domain-invariant feature through kernel matching \cite{pan2010domain}, adversarial learning \cite{ganin2015unsupervised}, transferrable contrastive learning \cite{chen2021transferrable}, prototype matching \cite{wang2020prototype}, optimal transport \cite{deng2021informative} and image reconstruction \cite{2018Generate}, to reduce the distribution differences across domains, such as MMD \cite{pan2010domain}, central moment discrepancy \cite{zellinger2017central}, Joint MMD \cite{long2017deep}, $\mathcal{A}$-distance \cite{ganin2015unsupervised} and maximum classifier discrepancy \cite{saito2018maximum}, Wasserstein distance \cite{courty2014domain}, etc.
Compared with the UDA methods which require access to both source and target data, our work does NOT require the source data while learning to adapt. This is more suitable in real-world applications.
\subsection{Unsupervised Source-Free Domain Adaptation}
Different from UDA, USFDA is a more practical scenario in which the source data is inaccessible at adaptation. Existing methods seek to implicitly align the target domain feature to the source domain by heuristically leveraging the information from the source model. The conventional USFDA methods include pseudo labelling (e.g., SHOT \cite{liang2020does} and BAIT \cite{yanga2010casting}), batch normalization (e.g., BN \cite{ishii2021source}) or data generation (e.g., MA \cite{li2020model}). Specifically, the pseudo labelling methods implicitly align representations from the target domain to the source model, the BN method minimizes the discrepancy between domains using the BN statistics stored in the source model, and the image generation methods align the target domain to the annotated data generated by the source model.
The existing USFDAs mainly focus on mitigating the between-domain divergence and often neglect the neighbor information. Instead, the proposed JN regularization term focuses on optimizing the model smoothness to leverage the neighbor information from the target domain itself, and can be used as a plug-in unit to effectively boost the performance of the existing USFDAs.
\section{Model Smoothness for Target Generalization Error Bound}
The current theoretical works on domain adaptation are typically established on domain discrepancy, which encourages domain alignment in solving domain adaptation \cite{ben2010theory,ben2007analysis}. However, in USFDA, due to the absence of source data, such an alignment is often difficult and insufficient. To boost the adaptation ability of USFDA, we now look back upon the origin of domain adaptation and attempt to induce a novel insight for addressing USFDA.
\label{section3}
\subsection{Preliminaries}
In this paper, we focus on the USFDA task. We use $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y}\subset \mathbb{R}$ to denote the feature and the label space, respectively. We assume that the feature space $\mathcal{X}$ of the source and target domains has a compact support. Thus, there exists a constant $D>0$, such that $\forall {\mathbf{u,v}\in\mathcal{X}},\Vert \mathbf{u-v} \Vert <D$. In particular, we are given $n$ labeled samples $\{x_i^s,y_i^s\}_{i=1}^{n}$ from the source domain $\mathcal{D}_s$ with the distribution $\mathbb{P}$, where $x_i^s\in\mathcal{X}$ and $y_i^s\in \mathcal{Y}$. We also have $m$ unlabeled samples $\{x_i^t\}_{i=1}^{m}$ from $\mathcal{D}_t$ with the distribution $\mathbb{Q}$, where $x_i^t\in\mathcal{X}$. In the USFDA setting, $\mathbb{P}\neq\mathbb{Q}$ and the source data can only be accessed during the source model training procedure. In particular, we consider a $K$-way classification task. The goal of USFDA is to learn a target function $f:\mathcal{X}\rightarrow\mathcal{Y}$ and predict the target labels $\{y_i^t\}_{i=1}^{m}$, where $y_i^t\in \mathcal{Y}$, with only the target data $\{x_i^t\}_{i=1}^{m}$ and the source function $f_s:\mathcal{X}\rightarrow\mathcal{Y}$ available in adaptation.
In addition, let $\mathcal{L}(f(\mathbf{x}),y)$ be a continuous and differentiable loss function. Inspired by a recent theoretical study \cite{yi2021improved}, we assume that $0\leq \mathcal{L}(f(\mathbf{x}),y) \leq M$ for a constant $M$, without loss of generality. Moreover, we denote by $\mathcal{E}_{\mathbb{P}}\left(f\right) = \mathbb{E}_{\{x,y\}\sim \mathbb{P}}\mathcal{L}(f(x),y)$ the expected risk of model $f$ over distribution $\mathbb{P}$.
In order to find an alternative to mitigate the dilemma in USFDA, we also need the following definitions of the Total Variation distance \cite{villani2009optimal} and model smoothness:
\begin{definition}
\label{def2}
Total Variation distance \textnormal{\cite{villani2009optimal}}: Given two distributions $\mathbb{P}$ and $\mathbb{Q}$. The Total Variation distance $\mathrm{TV} (\mathbb{P},\mathbb{Q})$ between distributions $\mathbb{P}$ and $\mathbb{Q}$ is defined as:
\end{definition}
\begin{equation}
\mathrm{TV}(\mathbb{P}, \mathbb{Q})=\frac{1}{2} \int_{\mathcal{X}}|d \mathbb{P}(\mathbf{x})-d \mathbb{Q}(\mathbf{x})|.
\end{equation}
\begin{definition}
\label{def3}
Model Smoothness: A model $f$ with parameter $\mathbf{w}$ is $r$-cover with $\epsilon$-smoothness on distribution $\mathbb{P}$, if
\end{definition}
\begin{equation}
\label{eq2}
\mathbb{E}_{\mathbb{P}}\left[\sup _{\|\boldsymbol{\delta}\|_{\infty} \leq r}|f( \mathbf{x}+\boldsymbol{\delta})-f(\mathbf{x})|\right] \leq \epsilon,
\end{equation}
where $\|\cdot\|_{\infty}$ is the $l_\infty$-norm.
\noindent
\textcolor{black}{\textbf{Remark:} Since the source data is unavailable, the current theoretical works based on $\mathcal{H}$-divergence are often limited in the USFDA setting. Further, such a divergence is not consistent with the current domain alignment work \cite{shui2020beyond}, let alone the existing USFDA works. In contrast, the TV distance is more probabilistically interpretable and consistent with the objective of the existing USFDAs. In particular, pseudo-labeling attempts to minimize the KL-divergence between the one-hot encoding and the model output of the target domain \cite{liang2020we}, and the BN method also minimizes the KL-divergence between domains by utilizing the batch normalization (BN) statistics stored in the source model \cite{ishii2021source}. Moreover, the image generation methods align the target domain to the annotated data by a GAN, which is proven to be an approximation of the TV distance \cite{pan2020loss}. Note that the TV distance can be upper bounded in terms of the KL-divergence as $\mathrm{TV}(\mathbb{P},\mathbb{Q})\leq\sqrt{\mathrm{KL}(\mathbb{P},\mathbb{Q})/{(2\log e)}}$ \cite{nielsen2018guaranteed}. Thus, we adopt the TV distance to analyze the new target generalization error bound.}
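As a quick numerical illustration of the Pinsker-type bound quoted in the remark (with KL measured in nats, so $\log e = 1$ and the bound reads $\mathrm{TV}\leq\sqrt{\mathrm{KL}/2}$), the following sketch evaluates both quantities for a pair of discrete distributions; the function names are ours and this is not part of any proof:

```python
import numpy as np

def tv_distance(p, q):
    """Total Variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def kl_divergence(p, q):
    """KL divergence KL(p || q) in nats (assumes q has full support)."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

tv = tv_distance(p, q)
kl = kl_divergence(p, q)
# Pinsker-type bound: TV(P, Q) <= sqrt(KL(P, Q) / 2)
assert tv <= np.sqrt(kl / 2)
```

Here $\mathrm{TV}=0.1$ while the KL-based upper bound evaluates to roughly $0.11$, confirming the inequality on this toy pair.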
\subsection{Generalization Error Bound Via Model Smoothness}
With the Total Variation distance in Definition \ref{def2} and model smoothness in Definition \ref{def3}, we present our brand-new generalization error bound on the target domain in Theorem \ref{the2}.
\begin{theorem}
\label{the2}
Given two distributions $\mathbb{P}$ and $\mathbb{Q}$, if a model $f$ is $2r$-cover with $\epsilon$ smoothness over distributions $\mathbb{P}$ and $\mathbb{Q}$, with probability at least $ 1 - \theta$, we have:
\end{theorem}
\begin{equation}
\label{eq13}
\begin{aligned}
\mathcal{E}_{\mathbb{Q}}\left(f\right)
\leq \mathcal{E}_{\mathbb{P}}\left(f\right) +2&\epsilon+ 2M\mathrm{TV}(\mathbb{P},\mathbb{Q})\\
+ M&\sqrt{\frac{\left(2 d\right)^{\frac{2 \epsilon^{2} D}{r^{2}}+1} \log 2+2 \log \left(\frac{1}{\theta}\right)}{m}}\\
+ M&\sqrt{\frac{\left(2 d\right)^{\frac{2 \epsilon^{2} D}{r^{2}}+1} \log 2+2 \log \left(\frac{1}{\theta}\right)}{n}}\\
+ M&\sqrt{\frac{\log(1/\theta)}{2m}}.
\end{aligned}
\end{equation}
The proof of Theorem \ref{the2} is given in the supplementary file.
In contrast to the existing theoretical works, we utilize the TV distance $\mathrm{TV}(\mathbb{P},\mathbb{Q})$ to measure the domain discrepancy, since the TV distance is more probabilistically interpretable and consistent with the objective of the existing USFDAs, as mentioned above. In particular, the model smoothness $\epsilon$ is considered in adaptation for the first time, which provides a new perspective to address USFDA. To relieve the dilemma and boost the performance of USFDA, we intuitively focus more on optimizing the new term $\epsilon$ to leverage the neighbor knowledge from the target domain.
\section{Domain adaptation without Access to Source Data}
\label{sec4}
In this section, following the idea of the theoretical result, we provide a novel learning framework to address USFDA. More precisely, for the model smoothness term, we present a JN regularizer on the target domain as a plug-in unit to boost the existing USFDAs, which implicitly forces the smoothness of the model in the neighborhood of each target sample. Subsequently, we adopt a pseudo-labelling strategy (i.e., SHOT \cite{liang2020we}) as a baseline to handle the domain discrepancy term, due to its simplicity.
Figure \ref{fig2} shows the pipeline of our approach. Specifically, we first train the source model on the source data. During adaptation, we keep the classifier frozen and use the source feature encoding module as the initialization for the target domain. The feature encoding module is then fine-tuned by our proposed framework. In the following, we elaborate on each step of our model.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figureb2.png}
\caption{The pipeline of our approach: we freeze (solid line) the source classifier and fine-tune (dash line) the source encoding module. The source data is only available in source model training. The JN regularizer can be easily adopted as a plug-in unit to boost the adaptation performance.}
\label{fig2}
\end{figure*}
\subsection{Source Model Generation}
To learn the source model for the subsequent target adaptation, referring to the baseline model \cite{liang2020we}, we adopt the cross-entropy loss with label smoothing, as it increases the discriminability of the learned source model \cite{muller2019does}. We have the following objective $\mathcal{L}_{s}$:
\begin{equation}
\mathcal{L}_{s}=-\mathbb{E}_{\left(x_{s}, y_{s}\right) \in \mathcal{D}_{s}} \sum_{k=1}^{K} q_{k}^{ls} \log \left(\delta_{k}\left(f_{s}\left(x_{s}\right)\right)\right),
\end{equation}
where $q_{k}^{ls}=(1-\alpha)q_k+\alpha / K$ is the smoothed label, $q_{k}$ is the one-of-$K$ encoding, and $\alpha$ is the smoothing parameter, empirically set to 0.1. $\delta_{k}\left(\cdot\right)$ denotes the $k$-th element in the soft-max output of a $K$-dimensional vector.
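For concreteness, the label-smoothed cross-entropy $\mathcal{L}_s$ can be sketched as follows; this is a minimal NumPy illustration of $q_{k}^{ls}=(1-\alpha)q_k+\alpha/K$ (the function names are ours, not the authors' implementation):

```python
import numpy as np

def softmax(z):
    """Numerically stable soft-max along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_smoothing_ce(logits, labels, num_classes, alpha=0.1):
    """Cross-entropy with smoothed targets q^ls = (1 - alpha) * one_hot + alpha / K."""
    probs = softmax(logits)
    one_hot = np.eye(num_classes)[labels]
    q_ls = (1.0 - alpha) * one_hot + alpha / num_classes
    return float(-(q_ls * np.log(probs)).sum(axis=-1).mean())
```

With $\alpha=0$ the objective reduces to the standard cross-entropy, and $\alpha=0.1$ reproduces the setting used in the paper.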
\subsection{Target Model Fine-tuning}
To make the fixed classifier/model work well in the target domain, we aim to obtain a fine-tuned encoder that implicitly mitigates the domain discrepancy and optimizes the model smoothness on the target domain. Therefore, the objective function of our framework is formulated as follows:
\begin{equation}
\label{eq5}
\mathcal{L}_{t}=\mathcal{L}_{\mathcal{M}}+\mathcal{L}_{\mathcal{D}},
\end{equation}
where $\mathcal{L}_{\mathcal{M}}$ denotes the model smoothness objective, and $\mathcal{L}_{\mathcal{D}}$ represents the domain alignment objective. While the current USFDAs focus on heuristically modelling $\mathcal{L}_{\mathcal{D}}$, we aim to formulate $\mathcal{L}_{\mathcal{M}}$, which can act as a plug-in unit for any existing USFDA. These two terms are detailed in the following subsections.
\subsubsection{Jacobian Norm for Model Smoothness}
To formulate $\mathcal{L}_{\mathcal{M}}$, according to Definition \ref{def3}, we are motivated to control the smoothness/consistency in the neighborhood of each target sample. Along this line, we relax Eq. \ref{eq2} for simplicity, and obtain the following objective:
\begin{equation}
\label{eq6}
\frac{1}{\sigma^{2}} \sum_{i=1}^{n_{t}} \mathbb{E}_{\zeta}\left(f\left(\mathbf{x}_{i}+\zeta\right)-f\left(\mathbf{x}_{i}\right)\right)^{2},
\end{equation}
where $\zeta \sim N\left(0, \sigma^{2} \mathbf{I}\right)$. By a first-order Taylor expansion, letting $\mathbf{J}(\mathbf{x})=\frac{\partial \mathbf{f}}{\partial \mathbf{x}} \in \mathbb{R}^{K \times D}$, we have
\begin{equation}
f(\mathbf{x}+\zeta)=f(\mathbf{x})+\mathbf{J}(\mathbf{x}) \zeta+\boldsymbol{o}(\zeta).
\end{equation}
Omitting the high-order terms, Eq. \ref{eq6} can be reformulated into the following JN regularizer ($\mathcal{L}_{\mathcal{J}}$):
\begin{equation}
\label{eq8}
\begin{aligned}
\mathcal{L}_{\mathcal{J}}&=\frac{1}{\sigma^{2}} \sum_{i=1}^{n_{t}} \mathbb{E}_{\zeta}\left(f \left(\mathbf{x}_{i}\right.\right.\left.+\zeta)-f\left(\mathbf{x}_{i}\right)\right)^{2} \\
&=\frac{1}{\sigma^{2}} \sum_{i=1}^{n_{t}} \mathbb{E}_{\zeta}\left\|\mathbf{J}\left(\mathbf{x}_{i}\right) \zeta\right\|^{2} \\
&=\sum_{i=1}^{n_{t}} \operatorname{tr}\left[\mathbf{J}\left(\mathbf{x}_{i}\right)^{T} \mathbf{J}\left(\mathbf{x}_{i}\right) \frac{1}{\sigma^{2}} \mathbb{E}_{\zeta}\left[\zeta \zeta^{T}\right]\right] \\
&=\sum_{i=1}^{n_{t}} \operatorname{tr}\left[\mathbf{J}\left(\mathbf{x}_{i}\right)^{T} \mathbf{J}\left(\mathbf{x}_{i}\right) \frac{1}{\sigma^{2}} \sigma^{2} \mathbf{I}\right] \\
&=\sum_{i=1}^{n_{t}}\left\|\mathbf{J}\left(\mathbf{x}_{i}\right)\right\|_{F}^{2}.
\end{aligned}
\end{equation}
Consequently, we formulate $\mathcal{L}_{\mathcal{M}}$ as the JN regularizer $\mathcal{L}_{\mathcal{J}}$ weighted by a balancing hyper-parameter $\lambda$:
\begin{equation}
\label{eqm}
\mathcal{L}_{\mathcal{M}}=\lambda\mathcal{L}_{\mathcal{J}}=\lambda\sum_{i=1}^{n_{t}}\left\|\mathbf{J}\left(\mathbf{x}_{i}\right)\right\|_{F}^{2}.
\end{equation}
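The identity in Eq. \ref{eq8} can be checked numerically for a linear model $f(\mathbf{x})=\mathbf{J}\mathbf{x}$, where the Taylor expansion is exact and the squared difference is read as a squared Euclidean norm. The sketch below (our own illustration, not the authors' code) compares a Monte Carlo estimate of $\frac{1}{\sigma^2}\mathbb{E}_\zeta\|f(\mathbf{x}+\zeta)-f(\mathbf{x})\|^2$ with $\|\mathbf{J}\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, sigma = 4, 6, 0.1
J = rng.normal(size=(K, D))          # Jacobian of a linear model f(x) = J x

x = rng.normal(size=D)
n_samples = 200_000
zeta = rng.normal(scale=sigma, size=(n_samples, D))

# Monte Carlo estimate of (1/sigma^2) E_zeta ||f(x + zeta) - f(x)||^2.
# For a linear f, f(x + zeta) - f(x) = J zeta exactly.
diffs = zeta @ J.T                   # shape (n_samples, K)
mc = (diffs ** 2).sum(axis=1).mean() / sigma ** 2
fro = (J ** 2).sum()                 # ||J||_F^2

# The two quantities agree up to Monte Carlo noise.
assert abs(mc - fro) / fro < 0.02
```

This mirrors the derivation: $\mathbb{E}_\zeta\|\mathbf{J}\zeta\|^2=\sigma^2\operatorname{tr}(\mathbf{J}^T\mathbf{J})=\sigma^2\|\mathbf{J}\|_F^2$, so dividing by $\sigma^2$ recovers the Frobenius norm.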
\subsubsection{Nearest Centroid classifier for Obtaining Pseudo Label}
To alleviate the harmful effects caused by inaccurate network outputs, we further apply pseudo-labeling to each unlabeled sample to better supervise the training of the target data encoder. Inspired by DeepCluster \cite{caron2018deep}, we first obtain the centroid for each class in the target domain as follows:
\begin{equation}
c_{k}^{(0)}=\frac{\sum_{x_{t} \in \mathcal{X}_{t}} \delta\left(\hat{f}_{t}^{(k)}(x)\right) \hat{g}_{t}(x)}{\sum_{x_{t} \in \mathcal{X}_{t}} \delta\left(\hat{f}_{t}^{(k)}(x)\right)},
\end{equation}
where $\hat{f}_t$ denotes the previously learned target model and $\hat{g}_{t}(x)$ is the target encoder. Then, we obtain the pseudo labels via the nearest centroid classifier
\begin{equation}
\hat{y}_{t}=\arg \min _{k} D_{f}\left(\hat{g}_{t}\left(x_{t}\right), c_{k}^{(0)}\right),
\end{equation}
where $D_f(\cdot, \cdot)$ measures the cosine distance. In the end, the target centroids are computed via the new pseudo labels:
\begin{equation}
c_{k}^{(1)}=\frac{\sum_{x_{t} \in \mathcal{X}_{t}} \mathbb{I}\left(\hat{y}_{t}=k\right) \hat{g}_{t}(x)}{\sum_{x_{t} \in \mathcal{X}_{t}} \mathbb{I}\left(\hat{y}_{t}=k\right)},
\end{equation}
where $\mathbb{I}$ is the indicator function. The final pseudo label is obtained as follows:
\begin{equation}
\hat{y}_{t}=\arg \min _{k} D_{f}\left(\hat{g}_{t}\left(x_{t}\right), c_{k}^{(1)}\right).
\end{equation}
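The two-round centroid construction above can be sketched in NumPy as follows; this is a toy illustration under our own naming (soft-max outputs as weights in round 0, hard pseudo labels in round 1), not the authors' implementation:

```python
import numpy as np

def cosine_dist(a, b):
    """Pairwise cosine distance: 1 - cosine similarity, shape (n, K)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return 1.0 - a @ b.T

def pseudo_labels(features, probs):
    """Two-round nearest-centroid pseudo-labelling sketch.

    features: (n, d) target encodings g_t(x); probs: (n, K) soft-max outputs.
    """
    # Round 0: class centroids weighted by the soft-max outputs.
    c0 = (probs.T @ features) / probs.sum(axis=0)[:, None]
    y_hat = cosine_dist(features, c0).argmin(axis=1)
    # Round 1: recompute centroids from the hard pseudo labels.
    K = probs.shape[1]
    one_hot = np.eye(K)[y_hat]
    c1 = (one_hot.T @ features) / np.maximum(one_hot.sum(axis=0)[:, None], 1)
    return cosine_dist(features, c1).argmin(axis=1)
```

On well-separated clusters, the second round simply confirms the first; its purpose is to correct samples whose soft-max outputs disagree with their nearest centroid.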
\subsubsection{Pseudo Labeling for Implicit Alignment}
To formulate the $\mathcal{L}_{\mathcal{D}}$, referring to the baseline \cite{liang2020we}, we adopt both the information maximization loss $\mathcal{L}_{IM}$ and self-supervised pseudo labelling loss $\mathcal{L}_{SSL}$ toward implicit alignment. Their formulations are as follows, respectively:
\begin{equation}
\label{eq9}
\mathcal{L}_{IM}=-H\left(\frac{1}{K} \sum_{i}^{K} f\left(x_{i}\right)\right)+\frac{1}{K} \sum_{i}^{K} H\left(f\left(x_{i}\right)\right),
\end{equation}
\begin{equation}
\label{eq10}
\mathcal{L}_{SSL}=-\mathbb{E}_{\left(x_{t}\right) \in \mathcal{D}_{t}} \sum_{k=1}^{K} \hat{q}_{k} \log \left(\delta_{k}\left(f\left(x_{t}\right)\right)\right),
\end{equation}
where $H(\cdot)$ is the entropy function and $\hat{q}_{k}$ is the one-of-$K$ encoding of the target pseudo labels, which is generated by the nearest centroid classifier. For more details, please refer to \cite{liang2020we}. In this way, we define $\mathcal{L}_{\mathcal{D}}$ as the composition of $\mathcal{L}_{IM}$ and $\mathcal{L}_{SSL}$:
\begin{equation}
\label{eqd}
\mathcal{L}_{\mathcal{D}} = \beta\mathcal{L}_{IM}+\gamma\mathcal{L}_{SSL},
\end{equation}
where $\beta$ and $\gamma$ are the balancing hyper-parameters and we empirically set $\beta=1$ and $\gamma=1$ in our following experiments.
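The information-maximization term in Eq. \ref{eq9} can be read as "mean per-sample entropy minus entropy of the mean prediction", encouraging confident individual predictions with a diverse, balanced mean output. A minimal sketch (our own illustration, with the average taken over a batch of soft-max outputs):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution, in nats."""
    return float(-(p * np.log(p + eps)).sum())

def im_loss(probs):
    """Information-maximization loss: low mean per-sample entropy
    (confident predictions) minus high entropy of the mean prediction
    (diverse, balanced outputs)."""
    mean_pred = probs.mean(axis=0)
    per_sample = np.mean([entropy(p) for p in probs])
    return per_sample - entropy(mean_pred)
```

A batch of confident, class-diverse predictions drives this loss strongly negative, while a batch of uniform predictions leaves it near zero.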
Last but not least, we would like to state that although our model is based on SHOT, our framework can easily be applied to other USFDA methods by reformulating $\mathcal{L}_{\mathcal{D}}$. Therefore, the JN regularizer is flexible enough to be embedded into any other existing USFDA.
\begin{table}[t!]
\centering
\caption{Statistics of the benchmark datasets}
\begin{tabular}{cccc}
\toprule
Dataset & \#Sample & \#Class & \#Domain \\ \midrule
Office-31 & 4652 & 31 & A, W, D \\
Office-Home & 15500 & 65 & Ar, Cl, Pr, Rw \\
VisDa-C & 207000 & 12 & Real, Synthesis \\
\bottomrule
\end{tabular}
\label{table1}
\end{table}
\begin{table*}[ht!]
\centering
\caption{Classification accuracies (\%) on small-sized Office-31 dataset (ResNet-50).}
\label{tab2}
\begin{tabular}{cccccc|cccccc}
\toprule
& DAN & DANN & CDAN & BSP & SO & MA & BAIT & BN & SHOT & JN(o) & Our Model \\\midrule
A$\rightarrow$D & 78.6 & 79.7 & 92.9 & 93.0 & 80.1 & {\ul 92.7} & 92.0 & 89.0 & 92.1 & 88.8 & \textbf{95.9} \\
A$\rightarrow$W & 80.5 & 82.0 & 94.1 & 93.3 & 76.6 & {\ul 93.7} & \textbf{94.6} & 91.7 & 89.6 & 88.9 & 92.5 \\
D$\rightarrow$A & 63.6 & 68.3 & 71.0 & 74.6 & 60.5 & 75.3 & 74.6 & \textbf{78.5} & 74.1 & 72.7 & {\ul 75.4} \\
D$\rightarrow$W & 97.1 & 96.9 & 98.6 & 98.2 & 95.5 & {\ul 98.5} & 98.1 & \textbf{98.9} & {\ul 98.5} & 98.4 & {\ul 98.5} \\
W$\rightarrow$A & 62.8 & 67.4 & 69.3 & 72.6 & 63.4 & \textbf{77.8} & 73.9 & 76.6 & 73.3 & 73.4 & {\ul 77.4} \\
W$\rightarrow$D & 99.6 & 99.1 & 100 & 100 & 98.5 & 98.5 & \textbf{100} & 99.8 &{\ul 99.9} & {\ul 99.9} & {\ul 99.9} \\
AVE & 80.4 & 82.2 & 87.7 & 88.5 & 79.1& {\ul 89.6} & 89.1 & 89.0 & 87.9 & 87.0 & \textbf{89.9} \\
\bottomrule
\multicolumn{12}{p{13cm}}{\footnotesize{Due to the simplicity and the availability of source code, we only conduct SHOT as the baseline method. Specifically, our model is actually a JN regularized objective of SHOT, according to Eq.\ref{eqo}. The best accuracy is presented in bold and the second best is underlined, similarly hereinafter.
}}
\end{tabular}
\end{table*}
\begin{table*}[ht!]
\centering
\caption{Classification accuracies (\%) on medium-sized Office-Home dataset (ResNet-50).}
\label{tab3}
\begin{tabular}{cccccc|cccc}
\toprule
& DAN & DANN & CDAN & BSP & SO & BAIT & SHOT & JN(o) & Our Model \\\midrule
Ar$\rightarrow$Cl & 43.6 & 45.6 & 50.7 & 52.0 & 44.9 & \textbf{57.4} & 55.9 & 54.8 & {\ul 56.4} \\
Ar$\rightarrow$Pr & 57.0 & 59.3 & 70.6 & 68.6 & 66.5 & 77.5& {\ul 77.8} & 76.6 & \textbf{78.6} \\
Ar$\rightarrow$Re & 67.9 & 70.1 & 76.0 & 76.1 & 74.3 & \textbf{82.4} & 80.7 & {\ul 82.0} & {\ul 82.0} \\
Cl$\rightarrow$Ar & 45.8 & 47.0 & 57.6 & 58.0 & 52.9 & 68.0 & 66.9 & \textbf{68.4} & {\ul 68.3} \\
Cl$\rightarrow$Pr & 56.5 & 58.5 & 70.0 & 70.3 & 62.8 & {\ul 78.2} & 77.3 & 73.9 & \textbf{80.1} \\
Cl$\rightarrow$Re & 60.4 & 60.9 & 70.0 & 70.2 & 65.1 & {\ul 78.1} & 77.1 & 76.3 & \textbf{79.4} \\
Pr$\rightarrow$Ar & 44.0 & 46.1 & 57.4 & 58.6 & 52.9 & 67.4 & 66.4 & {\ul 66.8} & \textbf{68.8} \\
Pr$\rightarrow$Cl & 43.6 & 43.7 & 50.9 & 50.3 & 41.5 & \textbf{55.5} & 53.6 & 54.6 & {\ul 55.1} \\
Pr$\rightarrow$Re & 67.7 & 68.5 & 77.3 & 77.6 & 73.4 & {\ul 81.7} & 81.5 & 81.3 & \textbf{82.2} \\
Re$\rightarrow$Ar & 63.1 & 63.2 & 70.9 & 72.2 & 65.4 & \textbf{76.3} & 72.1 & 74.2 & {\ul 75.3} \\
Re$\rightarrow$Cl & 51.5 & 51.8 & 56.7 & 59.3 & 45.4 & {\ul 57.1} & 57.2 & 58.4 & \textbf{58.8} \\
Re$\rightarrow$Pr & 74.3 & 76.8 & 81.6 & 81.9 & 77.9 & {\ul 84.3} & 83.0 & 83.3 & \textbf{84.6} \\
AVE & 56.3 & 57.6 & 65.8 & 66.3 & 60.2 & 70.8 & {\ul 71.5} & 70.6 & \textbf{74.5} \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[ht!]
\centering
\caption{Classification accuracies (\%) on large-sized VisDA-C dataset (ResNet-101).}
\label{tab4}
\begin{tabular}{cccccc|ccccc}
\toprule
& DAN & DANN & CDAN & BSP & SO & MA & BAIT & SHOT & JN(o) & Our Model \\\midrule
plane & 87.1 & 81.9 & 85.2 & 92.4 & 76.1 & {\ul 94.8} & 93.7 & 94.6 & 94.6 & \textbf{95.1} \\
bcycl & 63.0 & 77.7 & 66.9 & 61.0 & 21.9 & 73.4 & 83.2 & {\ul 83.1} & 83.4 & \textbf{86.6} \\
bus & 76.5 & 82.8 & 83.0 & 81.0 & 51.1 & 68.8 & \textbf{84.5} & 73.3 & {\ul 80.3} & 78.3 \\
car & 42.0 & 44.3 & 50.8 & 57.5 & 70.3 & \textbf{74.8} & {\ul 65.0} & 54.3 & 56.8 & 62.1 \\
horse & 90.3 & 81.2 & 84.2 & 89.0 & 64.6 & {\ul 93.1} & 92.9 & 90.2 & 91.4 & \textbf{94.5} \\
knife & 42.9 & 29.5 & 74.9 & 80.6 & 16.0 & {\ul 95.4} & {\ul 95.4}& 67.1 & 92.5 & \textbf{96.4} \\
mcycl & 85.9 & 65.1 & 88.1 & 90.1 & 81.2 & \textbf{88.5} & {\ul 88.1}& 78.8 & 84.2 & 84.6 \\
person & 53.1 & 28.6 & 74.5 & 77.0 & 18.5 & \textbf{84.7} & 80.8& 76.3 & 78.3 & {\ul 81.0} \\
plant & 49.7 & 51.9 & 83.4 & 84.2 & 69.3 & 89.1 & {\ul 90.0}& 89.6 & 87.6 & \textbf{90.2} \\
sktbrd & 36.3 & 54.6 & 76.0 & 77.9 & 28.4 & 84.7 & {\ul 89.0}& 87.2 & 88.9 & \textbf{89.5} \\
train & 85.8 & 82.8 & 81.9 & 82.1 & 84.3 & 83.6 & 84.0& \textbf{87.3} & \textbf{87.3} & {\ul 84.7} \\
truck & 20.7 & 20.7 & 38.0 & 38.4 & 6.0 & 48.1 & 45.3 & 50.3& {\ul56.2} & \textbf{58.6} \\
AVE & 61.1 & 57.4 & 73.9 & 75.9 & 49.0 & 81.6 & {\ul82.7}& 77.7 & 81.8 & \textbf{83.5} \\
\bottomrule
\end{tabular}
\end{table*}
\section{Experiments}
\label{sec5}
In this section, we present the experimental results on multiple domain adaptation benchmarks to demonstrate the effectiveness of our model.
\subsection{Benchmark Datasets}
To evaluate the performance of our model, we conduct abundant experiments on the most widely-used benchmark datasets \cite{li2019joint} with different sample sizes, including \emph{Office-31} (small size), \emph{Office-Home} (medium size) and \emph{VisDA-C} (large size). Table \ref{table1} lists the statistics of these datasets.
\\
\noindent
\textbf{Office-31} \cite{saenko2010adapting} is a small-size benchmark, which contains 4652 images with 31 categories in three visual domains: Amazon (A), DSLR (D), and Webcam (W).\\
\noindent
\textbf{Office-Home} \cite{venkateswara2017deep} is a medium-size benchmark, which contains 15588 images of 65 categories from 4 domains: Artistic images (Ar), Clipart images (Cl), Product images (Pr), and Real-world images (Rw).\\
\noindent
\textbf{VisDA-C} \cite{peng2019domain} is a large-scale benchmark, which contains 207000 images of 12 categories from the synthesis and real domains. The source domain contains 152000 synthetic images, while the target domain has 55000 real object images sampled from Microsoft COCO.
\subsection{Comparison Methods}
To verify the effectiveness of our work, we compare it with several UDA and USFDA SOTAs. The UDA methods include Deep Adaptation Network (DAN) \cite{long2015learning}, Domain Adversarial Neural Networks (DANN) \cite{ganin2015unsupervised}, Conditional Adversarial Networks (CDAN) \cite{long2018conditional} and Batch Spectral Penalization (BSP) \cite{chen2019transferability}. The USFDA methods contain Source HypOthesis Transfer (SHOT) \cite{liang2020we}, Model Adaptation (MA) \cite{li2020model}, BAIT \cite{yanga2010casting} and Batch Normalization (BN) \cite{ishii2021source}. Moreover, source model only (SO) denotes using the entire source model for target label prediction. JN only (JN(o)) represents using the JN regularizer alone to fine-tune the feature encoder. \textit{It should be noted that SHOT is the baseline of our model, obtained by setting $\lambda=0$.} To evaluate performance, we follow the widely used \textbf{accuracy} as the measurement. The results of the comparison methods are directly obtained from the published papers, since we follow the same setting.
\subsection{Implementation Details}
\subsubsection{Network architecture}
\textcolor{black}{
Following the current UDA/USFDA works \cite{liang2020we,long2017deep,ganin2015unsupervised}, and referring to \cite{long2018conditional,chen2019progressive}, we employ the pre-trained ResNet-50 (\emph{Office-31} and \emph{Office-Home}) or ResNet-101 (\emph{VisDA-C}) \cite{he2016deep} models as the backbone module. Specifically, we replace the original FC layer with a bottleneck layer (256 units) and a task-specific FC classifier layer. After that, a BN layer is put inside the bottleneck layer. Moreover, a weight normalization layer is utilized in the last FC layer.}
\subsubsection{Parameter Settings}
\textcolor{black}{
To fine-tune the adaptive model, we adopt mini-batch SGD with momentum 0.9 and set the batch size to 64. For \emph{Office-31} and \emph{Office-Home}, we empirically set the learning rate to 0.01 and $\lambda=0.2$. Since \emph{VisDA-C} converges easily, we utilize a smaller learning rate of 0.001 and a bigger $\lambda=0.8$. For learning in the target domain, we update the pseudo-labels epoch by epoch. The whole network is trained by back propagation, while the newly added layers (e.g., the task-specific FC classifier layer) are trained with a learning rate 10 times that of the pre-trained layers.
}
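The 10x learning-rate rule for the newly added layers is typically implemented by splitting the parameters into optimizer groups. The framework-agnostic sketch below (the layer-name keys `bottleneck`/`classifier` are our placeholders, not the authors' actual module names) returns a list in the param-group format accepted by, e.g., PyTorch's SGD:

```python
def make_param_groups(named_params, new_layer_keys=("bottleneck", "classifier"),
                      base_lr=0.01):
    """Split parameters so that newly added layers get 10x the base learning rate.

    named_params: iterable of (name, parameter) pairs.
    new_layer_keys: substrings identifying the newly added layers (hypothetical names).
    """
    pretrained, new = [], []
    for name, p in named_params:
        (new if any(k in name for k in new_layer_keys) else pretrained).append(p)
    return [{"params": pretrained, "lr": base_lr},
            {"params": new, "lr": 10 * base_lr}]
```

The returned list could then be passed directly to an optimizer constructor in place of a flat parameter list.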
\subsection{Experimental Results}
\textcolor{black}{
The experimental results of \emph{Office-31}, \emph{Office-Home}, and \emph{VisDA-C} are reported in Tables \ref{tab2}, \ref{tab3}, and \ref{tab4}, respectively. From these results, we can make several observations as follows.}
\textcolor{black}{
Firstly, by adding a simple JN regularization term, our model obtains the best mean accuracy on \emph{Office-31} and \emph{Office-Home}, and the best per-class accuracy on \emph{VisDA-C}. Compared with the USFDA SOTAs \cite{li2020model,yanga2010casting,ishii2021source}, \textbf{we achieve the best/second-best results on 5 out of 6 individual tasks on the \emph{Office-31} dataset and the best/second-best results on all 12 tasks on the \emph{Office-Home} dataset, respectively. For the large-scale synthesis-to-real \emph{VisDA-C} dataset, we achieve the best/second-best class accuracy on 9 out of 12 classes.} These results, obtained from a wide range of datasets with different sample sizes, demonstrate that our model is capable of reducing the target risk while solving USFDA to a great extent.}
\textcolor{black}{
Secondly, compared to the conventional UDA works, we also achieve competitive results even with no direct access to the source domain data. Specifically, \textbf{our model achieves gains of 1.4$\%$, 8.2$\%$ and 7.6$\%$ in performance over the UDA SOTAs} on the \emph{Office-31}, \emph{Office-Home} and \emph{VisDA-C} datasets, respectively. This implies the superiority of our model even without source data.}
\textcolor{black}{
Thirdly, compared to the baseline model (i.e., SHOT), which only achieves the second-best results on two tasks on the \emph{Office-31} dataset and one task on \emph{Office-Home}, \textbf{the designed JN regularizer provides gains on almost all tasks (e.g., $\sim$ 4\% on the A$\rightarrow$D and W$\rightarrow$A tasks)}. Moreover, the performance of our model only degrades on the class `train' on \emph{VisDA-C}, and the main reason may be that the background of this class is too complex. }
\begin{table}[t!]
\centering
\caption{Ablation Study: Components of Each Method}
\begin{tabular}{cccc}
\toprule
Dataset & SHOT & JN(o) & Our Model \\ \midrule
Model Smoothness &\XSolidBrush& \Checkmark &\Checkmark \\
Implicit Alignment & \Checkmark &\XSolidBrush & \Checkmark \\
\bottomrule
\end{tabular}
\label{table5}
\end{table}
\subsection{Evaluation of Each Component}
\textcolor{black}{
When solving USFDA, our model involves two components: (1) the Jacobian norm for model smoothness and (2) pseudo labeling for implicit alignment. To verify the performance of the different components, we empirically select different components as shown in Table \ref{table5}. Specifically, SHOT only considers the implicit alignment, JN(o) only considers the model smoothness, while our model contains both terms.}
\textcolor{black}{
As expected, compared to SHOT (i.e., $\mathcal{L}_{\mathcal{D}}$ only), \textbf{the proposed JN regularizer provides a significant performance gain (i.e., 2\% over \emph{Office-31}, 3.7\% over \emph{Office-Home} and 5.8\% over \emph{VisDA-C})}, which illustrates that implicit alignment alone is not sufficient to address USFDA and that the proposed JN regularizer can effectively boost the performance of USFDA. Moreover, JN alone (i.e., $\mathcal{L}_{\mathcal{J}}$ only) also achieves comparable results on USFDA, with accuracies of 87.0\%, 70.6\% and 81.8\% on \emph{Office-31}, \emph{Office-Home} and \emph{VisDA-C}, respectively. }
\textcolor{black}{
Those results demonstrate that both components are important for improving the accuracy on USFDA tasks, which is consistent with the derived theoretical result in Theorem \ref{the2}. Moreover, the results further reveal that implicit alignment and model smoothness can benefit each other in solving USFDA. }
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figtime.png}
\caption{The time cost (s) of SHOT and our model on the VisDA-C dataset. The error bar represents the standard deviation.}
\label{figt}
\end{figure}
\subsection{Time Complexity}
\textcolor{black}{
We validate the time complexity of the proposed JN regularizer through an empirical analysis. Specifically, we compare the time cost of our model with SHOT on the large-scale dataset, i.e., \emph{VisDA-C}. The environment is an Nvidia RTX 2080Ti with 11G memory. The results are given in Figure \ref{figt}. Specifically, for each batch, the proposed JN regularization term only introduces an additional 0.06s cost on \emph{VisDA-C}. As we can observe, despite its superiority in performance gain, the time cost paid by the proposed JN regularization term is almost negligible.}
\section{Conclusion}
In this paper, we develop a JN regularizer as a plug-in unit to boost the performance of USFDA. With a few lines of code, the proposed JN regularizer can significantly improve the performance of existing USFDA methods. It is worth noting that the JN regularization term does NOT need access to the source model and thus can be applied to the more challenging black-box USFDA, which will be further studied in our future work.
\label{sec6}
\bibliographystyle{ieeetr}
\chapter{Tangent Myller configurations $\mathfrak{M}_{t}$}
The theory of Myller configurations
$\mathfrak{M}(C,\overline{\xi},\pi)$ presented in the Chapter 2
has an important particular case when the tangent versor fields
$\overline{\alpha}(s)$, $\forall s\in (s_{1},s_{2})$ belong to the
corresponding planes $\pi(s)$. These Myller configurations will be
denoted by $\mathfrak{M}_{t} = \mathfrak{M}_{t}(C,
\overline{\xi},\pi)$ and named {\bf tangent} {\it Myller configuration}.
The geometry of $\mathfrak{M}_{t}$ is much richer than the
geometrical theory of $\mathfrak{M}$ because in $\mathfrak{M}_{t}$
the tangent field has some special properties. So,
$(C,\overline{\alpha})$ in $\mathfrak{M}_{t}$ has only three
invariants $\kappa_{g}, \kappa_{n}$ and $\tau_{g}$ called {\it geodesic
curvature}, {\it normal curvature} and {\it geodesic torsion}, respectively, of
the curve $C$ in $\mathfrak{M}_{t}$.
The curves $C$ with $\kappa_{g} = 0$ are {\it geodesic lines} of
$\mathfrak{M}_{t}$; the curves $C$ with the property $\kappa_{n}=0$
are the {\it asymptotic lines} of $\mathfrak{M}_{t}$ and the curves $C$
for which $\tau_{g} = 0$ are the {\it curvature lines} for
$\mathfrak{M}_{t}$. The mentioned invariants have some geometric
interpretations as the geodesic curvature, normal curvature and
geodesic torsion of a curve $\cal{C}$ on a surfaces $S$ in the
Euclidean space $E_{3}$.
\section{The fundamental equations of $\mathfrak{M}_{t}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Consider a tangent Myller configuration $\mathfrak{M}_{t} =
(C,\overline{\xi},\pi)$. Thus we have
\begin{equation}
\langle \overline{\alpha}(s), \overline{\nu}(s)\rangle = 0,\; \forall s\in (s_{1},s_{2}).
\end{equation}
The Darboux frame $\cal{R}_{D}$ is $\cal{R}_{D}=(P(s);
\overline{\xi}(s), \overline{\mu}(s),\overline{\nu}(s))$ with
$\overline{\mu}(s) = \overline{\nu}(s)\times \overline{\xi}(s)$.
The fundamental equations of $\mathfrak{M}_{t}$ are obtained from
the fundamental equations (2.1.3), (2.1.4), Chapter 2, of a general
Myller configuration $\mathfrak{M}$ for which the invariant
$c_{3}(s)$ vanishes.
\begin{theorem}
The fundamental equations of the tangent Myller configuration
$\mathfrak{M}_{t}(C, \overline{\xi}, \pi)$ are given by the
following system of differential equations:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = c_{1}(s)\overline{\xi}(s) +
c_{2}(s)\overline{\mu}(s),\;\; (c_{1}^{2}+c_{2}^{2} = 1),
\end{equation}
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}}{ds} &=&
G(s)\overline{\mu}(s)+K(s)\overline{\nu}(s)\nonumber\\
\displaystyle\frac{d\overline{\mu}}{ds}& =& -G(s)\overline{\xi}(s) +
T(s)\overline{\nu}(s)\\\displaystyle\frac{d\overline{\nu}}{ds} &=&
-K(s)\overline{\xi}(s) -T(s)\overline{\mu}(s).\nonumber
\end{eqnarray}
\end{theorem}
The invariants $c_{1},c_{2},G, K,T$ have the same geometric
interpretations and the same denomination as in Chapter 2. So,
$G(s)$ is {\it the geodesic curvature} of the field
$(\overline{C},\overline{\xi})$ in $\mathfrak{M}_{t}$, $K(s)$ is
{\it the normal curvature} and $T(s)$ is {\it the geodesic
torsion} of $(C,\overline{\xi})$ in $\mathfrak{M}_{t}$.
The cases when some of the invariants $G, K, T$ vanish can be investigated
exactly as in Chapter 2.
In this respect, denoting $\varphi = \sphericalangle
(\overline{\xi}_{2}(s),\overline{\nu}(s))$ and using the Frenet
formulae of the versor field $(C,\overline{\xi})$ we obtain the
formulae
\begin{equation}
G = K_{1}\sin \varphi,\;\; K = K_{1}\cos \varphi, \;\; T = K_{2}
+\displaystyle\frac{d\varphi}{ds}.
\end{equation}
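The first two formulae can be verified directly: since $\overline{\xi}_{2}(s)$ is orthogonal to $\overline{\xi}(s)$, with a suitable orientation of the angle $\varphi$ it decomposes in the Darboux frame as $\overline{\xi}_{2} = \sin \varphi\, \overline{\mu} + \cos \varphi\, \overline{\nu}$. Comparing the Frenet equation of the versor field $(C,\overline{\xi})$ with the first equation (3.1.3):
$$
\displaystyle\frac{d\overline{\xi}}{ds} = K_{1}\overline{\xi}_{2} = K_{1}\sin \varphi\, \overline{\mu} + K_{1}\cos \varphi\, \overline{\nu} = G\,\overline{\mu} + K\,\overline{\nu},
$$
one obtains $G = K_{1}\sin \varphi$ and $K = K_{1}\cos \varphi$.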
In \S 5, Chapter 2 we get the relations between the invariants of
$(C,\overline{\xi})$ in $\mathfrak{M}_{t}$ and the invariants of
normal versor field $(C,\overline{\nu})$ (Theorem 2.5.1, Chapter 2). For
$\sigma = \sphericalangle (\overline{\xi}(s), \overline{\nu}_{3}(s))$
we have
\begin{equation}
K = \chi_{1}\sin \sigma,\;\; T = \chi_{1}\cos \sigma,\;\; G = \chi_{2}
+\displaystyle\frac{d\sigma}{ds}.
\end{equation}
Other results concerning $\mathfrak{M}_{t}$ can be deduced from
those of $\mathfrak{M}$.
For instance
\begin{theorem}
(Mark Krein) Assuming that we have:
$1.$ $\mathfrak{M}_{t}(C, \overline{\xi}, \pi)$ a tangent Myller
configuration of class $C^{3}$ in which $C$ is a closed curve,
having $s$ as natural parameter.
$2.$ The spherical image $C^{*}$ of $\mathfrak{M}_{t}$ determines
on the support sphere $\Sigma$ a simply connected domain of area
$\omega$.
$3.$ $\sigma = \sphericalangle (\overline{\xi}, \overline{\nu}_{3})$.
\end{theorem}
{\it In these conditions the following Mark Krein formula}
holds:
\begin{equation}
\omega = 2\pi - \int_{C}G(s)ds +\int_{C}d\sigma.
\end{equation}
\section{The invariants of the curve $C$ in $\mathfrak{M}_{t}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
A smooth curve $C$ having $s$ as arclength determines the tangent
versor field $(C,\overline{\alpha})$ with $\overline{\alpha}(s) =
\displaystyle\frac{d\overline{r}}{ds}$. Consequently we can consider a
particular tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ defined only by curve $C$
and tangent planes $\pi(s)$.
In this case the geometry of Myller configurations
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ is called the geometry of
curve $C$ in $\mathfrak{M}_{t}$. The frame $\cal{R}_{D} = (P(s);
\overline{\alpha}(s), \overline{\mu}^{*}(s), \overline{\nu}(s))$, with
$\overline{\mu}^{*}(s) = \overline{\nu}(s)\times \overline{\alpha}(s)$,
is called {\it the Darboux frame} of the curve $C$ in $\mathfrak{M}_{t}$.
\begin{theorem}
The fundamental equations of the curve $C$ in the Myller
configuration $\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ are given by
the following system of differential equations:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s),
\end{equation}
\begin{eqnarray}
\displaystyle\frac{d\overline{\alpha}}{ds}&=&
\kappa_{g}(s)\overline{\mu}^{*}(s)+\kappa_{n}(s)\overline{\nu}(s),\nonumber\\
\displaystyle\frac{d\overline{\mu}^{*}}{ds}&=&
-\kappa_{g}(s)\overline{\alpha}(s)+\tau_{g}(s)\overline{\nu}(s),\\\displaystyle\frac{d\overline{\nu}}{ds}&=&
- \kappa_{n}(s)\overline{\alpha}(s) -
\tau_{g}(s)\overline{\mu}^{*}(s).\nonumber
\end{eqnarray}\end{theorem}
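Let us remark that, as for any orthonormal moving frame, the coefficient matrix of the system (3.2.2) is skew-symmetric:
$$
\displaystyle\frac{d}{ds}\left(\begin{array}{c}\overline{\alpha}\\ \overline{\mu}^{*}\\ \overline{\nu}\end{array}\right) = \left(\begin{array}{ccc} 0 & \kappa_{g} & \kappa_{n}\\ -\kappa_{g} & 0 & \tau_{g}\\ -\kappa_{n} & -\tau_{g} & 0\end{array}\right)\left(\begin{array}{c}\overline{\alpha}\\ \overline{\mu}^{*}\\ \overline{\nu}\end{array}\right).
$$
This guarantees that the scalar products of the versors $\overline{\alpha}, \overline{\mu}^{*}, \overline{\nu}$ are preserved along $C$, i.e. the Darboux frame remains orthonormal.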
Of course $(3.2.1), (3.2.2)$ are the moving equations of the
Darboux frame $\cal{R}_{D}$ of the curve $C$.
The invariants $\kappa_{g}, \kappa_{n}$ and $\tau_{g}$ are called: the
{\it geodesic curvature}, {\it normal curvature} and {\it geodesic torsion} of the
curve $C$ in $\mathfrak{M}_{t}.$
Of course, we can prove a fundamental theorem for the geometry of
curves $C$ in $\mathfrak{M}_{t}:$
\begin{theorem}
A priori given $C^{\infty}$-functions $\kappa_{g}(s),$ $\kappa_{n}(s)$,
$\tau_{g}(s)$, $s\in [a,b]$, there exists a Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ for which $s$ is arclength
on the curve $C$ and the given functions are its invariants. Two such
configurations differ by a proper Euclidean motion.\end{theorem}
The proof is the same as that of Theorem 2.1.2, Chapter 2.
\begin{remark}\rm
Let $C$ be a smooth curve immersed in a $C^{\infty}$ surface $S$
in $E_{3}$. Then the tangent planes $\pi$ to $S$ along $C$ uniquely
determine a tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$. Its Darboux frame
$\cal{R}_{D}$ and the invariants $\kappa_{g}, \kappa_{n},\tau_{g}$ of $C$
in $\mathfrak{M}_{t}$ are just the Darboux frame and geodesic
curvature, normal curvature and geodesic torsion of curve $C$ on
the surface $S$.
\end{remark}
Let $\cal{R}_{F} = (P(s); \overline{\alpha}_{1}(s),
\overline{\alpha}_{2}(s), \overline{\alpha}_{3}(s))$ be the Frenet frame of
curve $C$ with $\overline{\alpha}_{1}(s) = \overline{\alpha}(s)$.
The Frenet formulae hold:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}_{1}(s)
\end{equation}
\begin{equation}
\left\{\begin{array}{l}
\displaystyle\frac{d\overline{\alpha}_{1}}{ds} = \kappa(s)\overline{\alpha}_{2}(s);\vspace{1.5mm}\\
\displaystyle\frac{d\overline{\alpha}_{2}}{ds} = -\kappa(s)\overline{\alpha}_{1}(s) +
\tau(s)\overline{\alpha}_{3}(s);\vspace{1.5mm}\\
\displaystyle\frac{d\overline{\alpha}_{3}}{ds} = -\tau(s)\overline{\alpha}_{2}(s)
\end{array}\right.
\end{equation}
where $\kappa(s)$ is the curvature of $C$ and $\tau(s)$ is the torsion of
$C$.
The relations between the invariants $\kappa_{g}, \kappa_{n},\tau_{g}$ and
$\kappa, \tau$ can be obtained like in Section 4, Chapter 2.
\begin{theorem}
The following formulae hold:
\begin{equation}
\kappa_{g}(s) = \kappa \sin \varphi^{*},\;\; \kappa_{n}(s) = \kappa \cos \varphi^{*},\;\;
\tau_{g}(s) = \tau +\displaystyle\frac{d\varphi^{*}}{ds},
\end{equation}
with $\varphi^{*} = \sphericalangle(\overline{\alpha}_{2}(s),
\overline{\nu}(s))$.
\end{theorem}
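For example, if $C$ is a plane curve and every plane $\pi(s)$ coincides with the plane of the curve, then $\overline{\nu}(s)$ is a constant versor and the last equation (3.2.2) gives $\kappa_{n} = \tau_{g} = 0$. The formulae (3.2.5) then give $\cos \varphi^{*} = 0$, hence $\varphi^{*} = \pm\displaystyle\frac{\pi}{2}$ and
$$
\kappa_{g} = \pm \kappa,
$$
in accordance with the fact that $\tau = 0$ for a plane curve.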
In the case when we consider the relations between the invariants
$\kappa_{g}, \kappa_{n}, \tau_{g}$ and the invariants $\chi_{1}, \chi_{2}$
of the normal versor field $(C,\overline{\nu})$ we have from the
formulae (3.1.5):
\begin{equation}
\kappa_{n} = \chi_{1}\sin \sigma, \; \tau_{g} = \chi_{1}\cos \sigma,\;\;
\kappa_{g} = \chi_{2} + \displaystyle\frac{d\sigma}{ds},
\end{equation}
where $\sigma = \sphericalangle (\overline{\alpha}(s),
\overline{\nu}_{3}(s))$.
It is clear that the second formula (3.2.5) gives us a theorem of
Meusnier type, and for $\varphi^{*} = 0$ or $\varphi^{*} = \pm \pi$ we have
$\kappa_{g} = 0, \kappa_{n} = \pm \kappa$, $\tau_{g} = \tau.$
For $\sigma = 0,$ or $\sigma = \pm \pi,$ from (3.2.6) we obtain $\kappa_{n} =
0,$ $\tau_{g} = \pm \chi_{1}$, $\kappa_{g} = \chi_{2}$.
\section{Geodesic, asymptotic and curvature\, \, \, lines in $\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$}
\setcounter{theorem}{0}\setcounter{equation}{0}
The notion of parallelism of a versor field $(C,\overline{\alpha})$ in
$\mathfrak{M}_{t}$ along the curve $C$, investigated in the
Section 8, Chapter 2 for the general case, can be applied now for
the particular case of tangent versor field $(C,\overline{\alpha})$.
It is defined by the condition $\kappa_{g}(s) =0,$ $\forall s\in
(s_{1},s_{2})$. The curve $C$ with this property is called
{\it geodesic line} for $\mathfrak{M}_{t}$ (or {\it autoparallel curve}).
The following properties hold:
1. {\it The curve $C$ is a geodesic in the configuration
$\mathfrak{M}_{t}$ iff at every point $P(s)$ of $C$ the osculating
plane of $C$ is normal to the plane $\pi(s)$.}
2. {\it $C$ is geodesic in $\mathfrak{M}_{t}$ iff the equality
$|\kappa_{n}| = \kappa$ holds along $C$.}
3. {\it If $C$ is a straight line in $\mathfrak{M}_{t}$ then $C$ is a
geodesic of $\mathfrak{M}_{t}$.}
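
A classical illustration is obtained on a sphere of radius $R$: taking for $\pi(s)$ the tangent planes to the sphere along a great circle $C$, the osculating plane of $C$ passes through the center of the sphere, hence it is normal to the plane $\pi(s)$ at every point, and by property 1 the great circle $C$ is a geodesic of $\mathfrak{M}_{t}$.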
\bigskip
{\bf Asymptotics}
The curve $C$ is called {\it asymptotic} in $\mathfrak{M}_{t}$ if
$\kappa_{n} = 0,$ $\forall s\in (s_{1},s_{2})$. An asymptotic $C$ is
called an {\it asymptotic line}, too. The following properties can be
proved without difficulties:
1. {\it $C$ is asymptotic line in $\mathfrak{M}_{t}$ iff at every point
$P(s)\in C$ the osculating plane of $C$ coincides to the plane
$\pi(s)$.}
2. {\it If $C$ is a straight line then $C$ is asymptotic in
$\mathfrak{M}_{t}$.}
3. {\it $C$ is asymptotic in $\mathfrak{M}_{t}$ iff along $C$,
$|\kappa_{g}| = \kappa.$}
4. {\it If $C$ is asymptotic in $\mathfrak{M}_{t}$, then $\tau_{g} =
\tau$ along $C$.}
5. {\it If $\overline{\alpha}(s)$ is conjugate to itself then
$C$ is an asymptotic line in $\mathfrak{M}_{t}$.}
Therefore we may say that the asymptotic lines in
$\mathfrak{M}_{t}$ are the autoconjugate lines.
\medskip
{\bf Curvature lines}
The curve $C$ is called a {\it curvature line} in the
configuration $\mathfrak{M}_{t}(C, \overline{\alpha},\pi)$
if the ruled surface $\cal{R}(C,\overline{\nu})$ is a developing
surface.
One knows that $\cal{R}(C,\overline{\nu})$ is a developing surface
iff the following equation holds:
$$
\langle\overline{\alpha}(s), \overline{\nu}(s),
\displaystyle\frac{d\overline{\nu}}{ds}(s)\rangle = 0, \;\; \forall s\in
(s_{1},s_{2}).
$$
Taking into account the fundamental equations of
$\mathfrak{M}_{t}$ one gets:
1. {\it $C$ is a curvature line in $\mathfrak{M}_{t}$ iff its geodesic
torsion $\tau_{g}(s) = 0$, $\forall s$.}
2. {\it $C$ is a curvature line in $\mathfrak{M}_{t}$ iff the versor
field $(C,\overline{\mu}^{*})$ is conjugated to the tangent versor field
$(C,\overline{\alpha})$ in $\mathfrak{M}_{t}$.}
\begin{theorem}
If the curve $C$ of the tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ satisfies two of the
following three conditions
a) $C$ is a plane curve.
b) $C$ is a curvature line in $\mathfrak{M}_{t}$.
c) The angle $\varphi^{*}(s) = \sphericalangle
(\overline{\alpha}_{2}(s),\overline{\nu}(s))$ is constant,
\noindent then the curve $C$ verifies the third condition,
too.\end{theorem}
For proof, one can apply the Theorem 3.2.3, Chapter 3.
More general, let us consider two tangent Myller configurations
$\mathfrak{M}_{t}(C,$ $\overline{\alpha},\pi)$ and
$\mathfrak{M}_{t}^{*}(C, \overline{\alpha},\pi^{*})$ and $\varphi^{**}(s) =
\sphericalangle (\overline{\nu}(s), \overline{\nu}^{*}(s))$.
We can prove without difficulties:
\begin{theorem}
Assume that two of the following three conditions are satisfied:
\begin{itemize}
\item[1.] $C$ is curvature line in $\mathfrak{M}_{t}$.
\item[2.] $C$ is curvature line in $\mathfrak{M}_{t}^{*}$.
\item[3.] The angle $\varphi^{**}(s)$ is constant for $s\in
(s_{1},s_{2})$.
\noindent Then, the third condition is also verified.
\end{itemize}
\end{theorem}
\section{Mark Krein$^{\prime}$s formula}
\setcounter{theorem}{0}\setcounter{equation}{0}
In the Section 10, Chapter 2, we have defined the spherical image
of a general Myller configuration $\mathfrak{M}$. Of course the
definition applies for tangent configurations $\mathfrak{M}_{t}$,
too.
But now some new special properties appear.
If in the formulae from Section 10, Chapter 2 we take
$\overline{\xi}(s) = \overline{\alpha}(s)$, $\forall s$, the
invariants $G,K,T$ reduce to geodesic curvature, normal curvature
and geodesic torsion, respectively, of the curve $C$ in
$\mathfrak{M}_{t}$.
In this way we obtain the formulae:
\begin{equation}
\kappa_{n} = -\displaystyle\frac{ds^{*}}{ds}\cos \theta_{1},\;\; \theta_{1} =
\sphericalangle (\overline{\nu}_{2},\overline{\alpha})
\end{equation}
\begin{equation}
\tau_{g} = -\displaystyle\frac{ds^{*}}{ds}\cos \theta_{2},\;\; \theta_{2} =
\sphericalangle (\overline{\nu}_{2},\overline{\mu}^{*})
\end{equation}
which have as consequences:
1. {\it $C$ is asymptotic in $\mathfrak{M}_{t}$ iff $\theta_{1} = \pm
\displaystyle\frac{\pi}{2}$.}
2. {\it $C$ is curvature line in $\mathfrak{M}_{t}$ iff $\theta_{2} = \pm
\displaystyle\frac{\pi}{2}$.}
\smallskip
The following theorem of Mark Krein holds:
\begin{theorem}
Assume that we have
\begin{itemize}
\item[$1.$] A Myller configuration $\mathfrak{M}_{t}(C,
\overline{\xi},\pi)$ of class $C^{k}$, $(k\geq 3)$ where $s$ is
the natural parameter on the curve $C$ and $C$ is a closed curve.
\item[$2.$] The spherical image $C^{*}$ of $\mathfrak{M}_{t}$ determines on
the support sphere $\Sigma$ a simply connected domain of area
$\omega$.
\item[$3.$] $\sigma = \sphericalangle (\overline{\nu}_{3}, \overline{\alpha})$.
In these conditions Mark Krein$'$s formula holds:
\begin{equation}
\omega =2\pi - \int_{C}\kappa_{g}(s)ds +\int_{C}d\sigma.
\end{equation}
\end{itemize}
\end{theorem}
Indeed the formula (2.10.9) from Theorem 2.10.1, Chapter 2 is
equivalent to (3.4.3).
The remarks from the end of Section 10, Chapter 2 are valid, too.
\section{Relations between the invariants $G,K,T$ of the versor field
$(C,\overline{\xi})$ in $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ and the
invariants of tangent versor field $(C,\overline{\alpha})$ in
$\mathfrak{M}_{t}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Let $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ be a tangent Myller
configuration, $\cal{R}_{D} = (P(s), \overline{\xi}(s),$$
\overline{\mu}(s),$ $\overline{\nu}(s))$ its Darboux frame and the
tangent Myller configuration
$\mathfrak{M}_{t}^{\prime}(C,\overline{\alpha},\pi)$ determined by the
plane field $(C,\pi(s))$, where $\overline{\alpha}
=\displaystyle\frac{d\overline{r}}{ds}$ is the tangent versor field of the oriented
curve $C$. The Darboux frame $\cal{R}_{D}^{\prime} = (P(s);
\overline{\alpha}(s), \overline{\mu}^{*}(s), \overline{\nu}(s))$ and
the oriented angle $\lambda = \sphericalangle
(\overline{\alpha},\overline{\xi})$ allow us to determine $\cal{R}_{D}$
by means of formulas:
\begin{eqnarray}
\overline{\xi}&=& \overline{\alpha}\cos \lambda + \overline{\mu}^{*}\sin
\lambda\nonumber\\\overline{\mu}&=& -\overline{\alpha}\sin \lambda
+\overline{\mu}^{*}\cos \lambda\\\overline{\nu}&=&
\overline{\nu}.\nonumber
\end{eqnarray}
The moving equations (3.1.2), (3.1.3) and (3.2.1), (3.2.2) of
$\cal{R}_{D}$ and $\mathcal{R}^{\prime}_{D}$ lead to the following
relations between invariants $\kappa_{g}, \kappa_{n}$ and $\tau_{g}$ of
the curve $C$ in $\mathfrak{M}_{t}^{\prime}$ and the invariants
$G,K,T$ of versor field $(C,\overline{\xi})$ in
$\mathfrak{M}_{t}:$
\begin{eqnarray}
K&=& \kappa_{n}\cos \lambda + \tau_{g}\sin \lambda\nonumber\\T&=& -\kappa_{n}\sin \lambda
+ \tau_{g}\cos \lambda\\G&=& \kappa_{g} + \displaystyle\frac{d\lambda}{ds}.\nonumber
\end{eqnarray}
The two first formulae (3.5.2) imply
\begin{equation}
K^{2} + T^{2} = \kappa_{n}^{2} + \tau_{g}^{2} =
\left(\displaystyle\frac{ds^{*}}{ds}\right)^{2}.
\end{equation}
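Indeed, squaring and adding the first two formulae (3.5.2), the terms in $\lambda$ cancel and $K^{2} + T^{2} = \kappa_{n}^{2} + \tau_{g}^{2}$. On the other hand, by the last equation (3.2.2),
$$
\left\|\displaystyle\frac{d\overline{\nu}}{ds}\right\|^{2} = \kappa_{n}^{2} + \tau_{g}^{2},
$$
which is exactly $\left(\displaystyle\frac{ds^{*}}{ds}\right)^{2}$, $s^{*}$ being the arclength of the spherical image $C^{*}$.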
The formulae (3.5.2) have some important consequences:
\begin{theorem}
1. The curve $C$ has $\kappa_{g}(s)$ as geodesic curvature on the
developing surface $E$ generated by planes $\pi(s)$, $s\in
(s_{1},s_{2})$.
2. $G$ is an intrinsic invariant of the developing surface $E$.
\end{theorem}
\begin{proof} 1. Since the planes $\pi(s)$ pass through the tangent line
$(P(s), \overline{\alpha}(s))$ to the curve $C$, the developing surface
$E$, enveloped by the planes $\pi(s)$, passes through the curve $C$. So,
$\kappa_{g}$ is geodesic curvature of $C\subset E$ at point $P(s)$.
2. The invariants $\kappa_{g}(s)$ and $\lambda(s)$ are the intrinsic
invariants of surface $E$. It follows that the invariant $G$ has
the same property.\end{proof}
Now, we investigate an extension of a result of Bonnet.
\begin{theorem}
Supposing that the versor field $(C, \overline{\xi})$ in tangent
Myller configuration $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ has
two of the following three properties:
\begin{itemize}
\item[1.] $\overline{\xi}(s)$ is a parallel versor field in
$\mathfrak{M}_{t}$,
\item[2.] The angle $\lambda = \sphericalangle (\overline{\alpha},\overline{\xi})$ is
constant,
\item[3.] The curve $C$ is geodesic in $\mathfrak{M}_{t},$
\end{itemize}
then it has the third property, too.
\end{theorem}
The proof is based on the last formula (3.5.2). Also, it is not
difficult to prove:
\begin{theorem}
The normal curvature and geodesic torsion of the versor field
$(C,\overline{\mu}^{*})$ in tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\xi},\pi)$, respectively are the
geodesic torsion and normal curvature with opposite sign of the
curve $C$ in $\mathfrak{M}_{t}$.
\end{theorem}
The same formulae (3.5.2) for $K(s)=0,$ $s\in (s_{1}, s_{2})$ and
$\tau_{g}(s)\neq 0$ imply:
\begin{equation}
\tan \lambda = -\displaystyle\frac{\kappa_{n}(s)}{\tau_{g}(s)}.
\end{equation}
In the theory of surfaces immersed in $E_{3}$ this formula was
independently established by E. Bortolotti [3] and Al. Myller
[31-34].
\begin{definition}
A curve $C$ is called a geodesic helix in tangent configuration
$\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ if the angle $\lambda(s) =
\sphericalangle (\overline{\alpha},\overline{\mu})$ is constant $(\neq
0,\pi)$.
\end{definition}
The following two results can be proved without difficulties.
\begin{theorem}
If $C$ is a geodesic helix in
$\mathfrak{M}_{t}(C,\overline{\xi},\pi)$, then $$\kappa_{n}/\tau_{g} =
\pm \displaystyle\frac{\kappa}{\tau}.$$
\end{theorem}
Another consequence of the previous theory is given by:
\begin{theorem}
If $C$ is geodesic in $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$, it
is a geodesic helix for $\mathfrak{M}_{t}$ iff $C$ is a cylindrical
helix in the space $E_{3}$.
\end{theorem}
We can make analogous considerations, taking $T=0$ in the formulae
(3.5.2).
We stop here the theory of Myller configurations
$\mathfrak{M}(C,\overline{\xi},\pi)$ in Euclidean space $E_{3}$.
Of course, it can be extended to Myller configurations
$\mathfrak{M}(C,\overline{\xi}, \pi)$ with $\pi$ a $k$-plane in
the space $E^{n}$, $n\geq 3$, $k<n$, or in more general spaces, such
as Riemann spaces, [26], [27].
\newpage
\thispagestyle{empty}
\chapter[Applications of the theory of Myller configuration $\mathfrak{M}_{t}$]{Applications of theory of Myller configuration $\mathfrak{M}_{t}$ in the geometry of surfaces in
$E_{3}$}
A first application of the theory of Myller configurations
$\mathfrak{M}(C,\overline{\xi},\pi)$ in the Euclidean space $E_{3}$
can be realized in the geometry of surfaces $S$ embedded in
$E_{3}$. We obtain a clearer study of curves $C\subset S$, a
natural definition of the Levi-Civita parallelism of a vector field
$\overline{V}$, tangent to $S$ along $C$, as well as the notion of
the concurrence in the Myller sense of vector fields tangent to $S$ along
$C$. Some new concepts, as those of mean torsion of $S$, the total
torsion of $S$, the Bonnet indicatrix of geodesic torsion and its
relation with the Dupin indicatrix of normal curvatures are
introduced. A new property of Bacaloglu-Sophie-Germain curvature
is proved, too. Namely, it is expressed in terms of the total
torsion of the surface $S$.
\section{The fundamental forms of surfaces in $E_{3}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Let $S$ be a smooth surface embedded in the Euclidean space
$E_{3}$. Since we use the classical theory of surfaces in $E_{3}$
(see, for instance, M. Spivak, Differential Geometry, vol. I, Berkeley,
1979), consider the analytical representation of $S$, of class $C^k$, $(k\geq 3,\ {\rm{or\ }}k=\infty)$:
\begin{equation}
\overline{r} = \overline{r}(u,v),\;\; (u,v)\in D.
\end{equation}
$D$ being a simply connected domain in the plane of the variables $(u,v)$,
and the following condition being verified:
\begin{equation}
\overline{r}_{u}\times \overline{r}_{v}\neq \overline{0}\;\; \mbox{ for all }
(u,v)\in D.
\end{equation}
Of course we adopt the vectorial notations
\begin{eqnarray*}
\overline{r}_{u} = \displaystyle\frac{\partial \overline{r}}{\partial u},\;\;
\overline{r}_{v} = \displaystyle\frac{\partial\overline{r}}{\partial v}.
\end{eqnarray*}
Denote by $C$ a smooth curve on $S$, given by the parametric
equations
\begin{equation}
u = u(t),\;\; v = v(t),\;\; t\in (t_{1},t_{2}).
\end{equation}
In this chapter all geometric objects or mappings are considered
of $C^{\infty}$-class, up to contrary hypothesis.
In space $E_{3}$, the curve $C$ is represented by
\begin{equation}
\overline{r} = \overline{r}(u(t), v(t)),\;\; t\in (t_{1},t_{2}).
\end{equation}
Thus, the following vector field
\begin{equation}
d\overline{r} = \overline{r}_{u}du + \overline{r}_{v}dv
\end{equation}
is tangent to $C$ at the points $P(t) = P(\overline{r}(u(t),
v(t)))\in C.$ The vectors $\overline{r}_{u}$ and $\overline{r}_{v}$
are tangent to the parametric lines and $d\overline{r}$ is tangent
vector to $S$ at point $P(t).$
From (4.1.2) it follows that the scalar function
\begin{equation}
\Delta = \|\overline{r}_{u}\times \overline{r}_{v}\|^{2}
\end{equation}
is different from zero on $D$.
The unit vector $\overline{\nu}$
\begin{equation}
\overline{\nu} = \displaystyle\frac{\overline{r}_{u}\times
\overline{r}_{v}}{\sqrt{\Delta}}
\end{equation}
is normal to surface $S$ at every point $P(t).$
The tangent plane $\pi$ at $P$ to $S$, has the equation
\begin{equation}
\langle \overline{R}-\overline{r}(u,v), \overline{\nu}\rangle=0.
\end{equation}
Assuming that $S$ is orientable, it follows that $\overline{\nu}$
is uniquely determined and the tangent plane $\pi$ is oriented (by
means of versor $\overline{\nu}$), too.
The {\it first fundamental form} $\phi$ of surface $S$ is defined by
\begin{equation}
\phi = \langle d\overline{r}, d\overline{r}\rangle,\;\; \forall
(u,v)\in D
\end{equation}
or by $\phi(du,dv) = \langle d\overline{r}(u,v),
d\overline{r}(u,v)\rangle$, $\forall (u,v)\in D.$
Taking into account the equality (4.1.5) it follows that
$\phi(du,dv)$ is a quadratic form:
\begin{equation}
\phi(du,dv) = E du^{2} + 2F du dv + G dv^{2}.
\end{equation}
The coefficients of $\phi$ are the functions, defined on $D$:
\begin{equation}
\hspace*{1cm}E(u,v) = \langle \overline{r}_{u}, \overline{r}_{u}\rangle, F(u,v)
= \langle \overline{r}_{u}, \overline{r}_{v}\rangle,\; G(u,v) =
\langle \overline{r}_{v}, \overline{r}_{v} \rangle.
\end{equation}
But we have the discriminant $\Delta$ of $\phi:$
\begin{equation}
\Delta = EG - F^{2}.
\end{equation}
\begin{equation}
E>0,\; G>0,\; \Delta >0.
\end{equation}
Consequence: the first fundamental form $\phi$ of $S$ is
positive definite.
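
For example, for the sphere of radius $R$ parametrized by $\overline{r}(u,v) = R(\cos u\cos v, \cos u\sin v, \sin u)$, one computes $E = R^{2}$, $F = 0$, $G = R^{2}\cos^{2}u$, so that
$$
\phi = R^{2}(du^{2} + \cos^{2}u\, dv^{2}),
$$
which is indeed positive definite for $\cos u\neq 0$.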
Since the vector $d\overline{r}$ does not depend on
parametrization of $S$, it follows that $\phi$ has a geometrical meaning
with respect to a change of coordinates in $E_{3}$ and with
respect to a change of parameters $(u,v)$ on $S$.
Thus $ds,$ given by:
\begin{equation}
ds^{2} = \phi(du,dv) = E du^{2} + 2F du dv + G dv^{2}
\end{equation}
is called the {\it element of arclength} of the surface $S$.
The arclength of a curve $C$ on $S$ is expressed by
\begin{equation}
s = \int_{t_{0}}^{t}\left\{E\left(\displaystyle\frac{du}{d\sigma}\right)^{2} + 2F
\displaystyle\frac{du}{d\sigma}\displaystyle\frac{dv}{d\sigma} + G\left(\displaystyle\frac{dv}{d\sigma}\right)^{2}\right\}^{1/2}d\sigma.
\end{equation}
The function $s = s(t),$ $t\in [a,b]\subset (t_{1},t_{2})$,
$t_{0}\in [a,b]$ is a diffeomorphism from the interval $[a,b]\to
[0,s(t)]$. The inverse function $t =t(s)$, determines a new
parametrization of curve $C$: $\overline{r} = \overline{r}(u(s),
v(s))$. The tangent vector $\displaystyle\frac{d\overline{r}}{ds}$ is a versor:
\begin{equation}
\overline{\alpha} = \displaystyle\frac{d\overline{r}}{ds} =
\overline{r}_{u}\displaystyle\frac{du}{ds} + \overline{r}_{v}\displaystyle\frac{dv}{ds}.
\end{equation}
For two tangent versors $\overline{\alpha} =
\displaystyle\frac{d\overline{r}}{ds}$, $\overline{\alpha}_{1} =
\displaystyle\frac{\delta\overline{r}}{\delta s}$ at point $P\in S$ the angle
$\sphericalangle(\overline{\alpha}, \overline{\alpha}_{1})$ is expressed
by
\begin{equation}
\cos \sphericalangle(\overline{\alpha}, \overline{\alpha}_{1}) =
\displaystyle\frac{E\, du\, \delta u + F(du\, \delta v + \delta u\, dv) + G\, dv\, \delta v}{\sqrt{E du^{2} + 2F du\, dv + G dv^{2}}\cdot \sqrt{E\delta u^{2} + 2F \delta u\, \delta v + G\delta v^{2}}}.
\end{equation}
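In particular, for the tangent versors of the parametric lines, obtained for $dv = 0$ and $\delta u = 0$, the preceding formula reduces to
$$
\cos \sphericalangle(\overline{\alpha}, \overline{\alpha}_{1}) = \displaystyle\frac{F}{\sqrt{EG}},
$$
so the parametric lines are orthogonal at a point of $S$ iff $F = 0$ at that point.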
{\it The second fundamental form} $\psi$ of $S$ at point $P\in S$ is
defined by
\begin{equation}
\psi = \langle \overline{\nu}, d^{2}\overline{r}\rangle = -
\langle d\overline{r}, d\overline{\nu}\rangle\;\; \forall (u,v)\in
D.
\end{equation}
Also, we adopt the notation $\psi(du,dv) = - \langle
d\overline{r}(u,v),d\overline{\nu}(u,v)\rangle.$ $\psi$ is a quadratic form:
\begin{equation}
\psi(du,dv) =L du^{2} + 2M du dv + N dv^{2},
\end{equation}
having the coefficients functions of $(u,v)\in D:$
\begin{equation}
\begin{array}{l}
L(u,v) = \displaystyle\frac{1}{\sqrt{\Delta}} \langle \overline{r}_{u},
\overline{r}_{v}, \overline{r}_{uu}\rangle,\; M(u,v) =
\displaystyle\frac{1}{\sqrt{\Delta}}\langle \overline{r}_{u}, \overline{r}_{v},
\overline{r}_{uv} \rangle,\\N(u,v) =
\displaystyle\frac{1}{\sqrt{\Delta}}\langle\overline{r}_{u}, \overline{r}_{v},
\overline{r}_{vv}\rangle.
\end{array}
\end{equation}
Of course from $\psi =-\langle d\overline{r},
d\overline{\nu}\rangle$ it follows that $\psi$ has geometric
meaning.
The two fundamental forms $\phi$ and $\psi$ are enough to
determine a surface $S$ in the Euclidean space $E_{3}$ up to proper
Euclidean motions. This property results from the integration of the
Gauss-Weingarten formulae and from their differential consequences
given by Gauss-Codazzi equations.
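
As an example, for the sphere $\overline{r}(u,v) = R(\cos u\cos v, \cos u\sin v, \sin u)$ the formula (4.1.7) gives (for $\cos u > 0$) $\overline{\nu} = -\overline{r}/R$, hence $d\overline{\nu} = -d\overline{r}/R$ and
$$
\psi = -\langle d\overline{r}, d\overline{\nu}\rangle = \displaystyle\frac{1}{R}\langle d\overline{r}, d\overline{r}\rangle = \displaystyle\frac{1}{R}\phi,
$$
so that $L = R$, $M = 0$, $N = R\cos^{2}u$.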
\section{Gauss and Weingarten formulae}
\setcounter{theorem}{0}\setcounter{equation}{0}
Consider the moving frame
$$
\cal{R} = (P(\overline{r}); \overline{r}_{u}, \overline{r}_{v}, \overline{\nu})
$$
on the smooth surface $S$ (i.e. $S$ is of class $C^{\infty}$).

The moving equations of $\cal{R}$ are given by the Gauss and Weingarten formulae.
The Gauss formulae are as follows:
\begin{equation}
\begin{array}{lll}
\overline{r}_{uu}&=& \left\{\begin{array}{l}1\\11\end{array}\right\} \overline{r}_{u} + \left\{\begin{array}{l}2\\ 11\end{array}\right\}
\overline{r}_{v} +
L\overline{\nu},\\\overline{r}_{uv}&=& \left\{\begin{array}{l}1\\
12\end{array}\right\}\overline{r}_{u} + \left\{\begin{array}{l}2\\ 12\end{array}\right\} \overline{r}_{v} +
M\overline{\nu}\\\overline{r}_{vv}&=&\left\{\begin{array}{l}1\\
22\end{array}\right\}\overline{r}_{u} +\left\{\begin{array}{l}2\\ 22\end{array}\right\} \overline{r}_{v} +
N\overline{\nu}.
\end{array}
\end{equation}
The coefficients $\left\{\begin{array}{cc} i\\jk
\end{array}\right\}$, $(i,j,k=1,2; u=u^{1}, v= u^{2})$ are called the
Christoffel symbols of the first fundamental form $\phi:$
\begin{equation}
\hspace*{5mm}{1\brace 11} =-\displaystyle\frac{1}{\sqrt{\Delta}} \langle \overline{\nu},
\overline{r}_{v}, \overline{r}_{uu}\rangle, \;\; {1\brace 12} ={
1\brace 21} = -\displaystyle\frac{1}{\sqrt{\Delta}}\langle \overline{\nu},
\overline{r}_{v}, \overline{r}_{uv}\rangle,
\end{equation}
$$
{ 1\brace 22} = -\displaystyle\frac{1}{\sqrt{\Delta}}\langle \overline{\nu},
\overline{r}_{v}, \overline{r}_{vv} \rangle,
$$
\begin{eqnarray}
\hspace*{14mm}{2\brace 11}
= \displaystyle\frac{1}{\sqrt{\Delta}}\langle \overline{\nu}, \overline{r}_{u},
\overline{r}_{uu}\rangle, \;\; {2\brace 12}
= {2\brace 21}
= \displaystyle\frac{1}{\sqrt{\Delta}}\langle \overline{\nu},\overline{r}_{u},
\overline{r}_{uv}\rangle,\nonumber\\ {2\brace 22} =
\displaystyle\frac{1}{\sqrt{\Delta}}\langle
\overline{\nu},\overline{r}_{u},\overline{r}_{vv}\rangle.\nonumber
\end{eqnarray}
The Weingarten formulae are given by:
\begin{eqnarray}
\displaystyle\frac{\partial \overline{\nu}}{\partial u}&=& \displaystyle\frac{1}{\Delta}\{(FM -
GL)\overline{r}_{u} + (FL - EM)\overline{r}_{v}\}\\\displaystyle\frac{\partial
\overline{\nu}}{\partial v}&=& \displaystyle\frac{1}{\Delta}\{(FN -
GM)\overline{r}_{u} + (FM -EN)\overline{r}_{v}\}.\nonumber
\end{eqnarray}
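For the sphere $\overline{r}(u,v) = R(\cos u\cos v, \cos u\sin v, \sin u)$, with $\overline{\nu} = -\overline{r}/R$, one obtains directly
$$
\displaystyle\frac{\partial\overline{\nu}}{\partial u} = -\displaystyle\frac{1}{R}\overline{r}_{u},\;\; \displaystyle\frac{\partial\overline{\nu}}{\partial v} = -\displaystyle\frac{1}{R}\overline{r}_{v},
$$
i.e. the derivatives of $\overline{\nu}$ are tangent to $S$, as the Weingarten formulae (4.2.3) assert in general; here the coefficients of the decomposition are $-1/R$ and $0$.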
Of course, the equations (4.2.1) and (4.2.3) express the variation
of the moving frame $\cal{R}$ on the surface $S$.
Using the relations $\overline{r}_{uuv} = \overline{r}_{uvu},$
$\overline{r}_{uvv} = \overline{r}_{vvu}$ and
$\displaystyle\frac{\partial^{2}\overline{\nu}}{\partial u\partial v} =
\displaystyle\frac{\partial^{2}\overline{\nu}}{\partial v\partial u}$ applied to (4.2.1), (4.2.3),
one deduces the so-called fundamental equations of the surface $S$,
known as the Gauss-Codazzi equations.
A fundamental theorem can be proved when the first and second
fundamental forms $\phi$ and $\psi$ are given and the Gauss-Codazzi
equations are verified (Spivak [75]).
\newpage
\section{The tangent Myller configuration $\mathfrak{M}_{t}(C,$ $\overline{\xi},\pi)$ associated to a tangent versor field $(C,\overline{\xi})$ on a surface
$S$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Assuming that $C$ is a curve on a surface $S$ in $E_{3}$ and $(C,
\overline{\xi})$ is a versor field tangent to $S$ along $C$, there
is a uniquely determined tangent Myller configuration
$\mathfrak{M}_{t} = \mathfrak{M}_{t}(C, \overline{\xi},\pi)$ for
which $(C,\pi)$ is the field of planes tangent to $S$ along $C$.
The invariants $(c_{1}, c_{2}, G,K,T)$ of $(C, \overline{\xi})$ in
$\mathfrak{M}_{t}(C, \overline{\xi},\pi)$ will be called the
invariants of tangent versor field $(C, \overline{\xi})$ on
surface $S$.
Evidently, these invariants have the same values on every smooth
surface $S^{\prime}$ which contains the curve $C$ and is tangent
to $S$.
Using the theory of $\mathfrak{M}_{t}$ from Chapter 3, for $(C,
\overline{\xi})$ tangent to $S$ along $C$ we determine:
1. The Darboux frame
\begin{equation}
R_{D} = (P(s); \overline{\xi}(s), \overline{\mu}(s),
\overline{\nu}(s)),
\end{equation}
where $s$ is natural parameter on $C$ and
\begin{equation}
\overline{\nu} = \displaystyle\frac{1}{\sqrt{\Delta}} \overline{r}_{u}\times
\overline{r}_{v},\;\; \overline{\mu} = \overline{\nu}\times
\overline{\xi}.
\end{equation}
Of course $\cal{R}_{D}$ is orthonormal and positively oriented.
The invariants $(c_{1}, c_{2}, G,K,T)$ of $(C, \overline{\xi})$ on
$S$ are given by (3.1.2), (3.1.3) Chapter 3. So we have the
moving equations of $\cal{R}_{D}$.
\begin{theorem}
The moving equations of the Darboux frame of tangent versor field
$(C,\overline{\xi})$ to $S$ are given by
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s) =
c_{1}(s)\overline{\xi} + c_{2}(s)\overline{\mu},\;\; (c_{1}^{2} +
c_{2}^{2} =1)
\end{equation}
and
\newpage
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}}{ds} &=& G(s)\overline{\mu}(s) +
K(s)\overline{\nu}(s),\nonumber\\ \displaystyle\frac{d\overline{\mu}}{ds} &=&
-G(s)\overline{\xi}(s) +
T(s)\overline{\nu}(s),\\\displaystyle\frac{d\overline{\nu}}{ds} &=&
-K(s)\overline{\xi} - T(s)\overline{\mu}(s),\nonumber
\end{eqnarray}
where $c_{1}(s), c_{2}(s), G(s), K(s), T(s)$ are invariants with
respect to the changes of coordinates on $E_{3}$, to the transformations of local coordinates on $S$, $(u,v)\to(\widetilde{u},\widetilde{v})$, and with respect
to the transformations of natural parameter $s\to s_{0}+s$.
\end{theorem}
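Let us remark that, as for the Frenet equations, the system (4.3.4) can be
condensed by means of a Darboux vector: setting
$$
\overline{\omega}(s) = T(s)\overline{\xi}(s) - K(s)\overline{\mu}(s) + G(s)\overline{\nu}(s),
$$
the equations (4.3.4) take the form $\displaystyle\frac{d\overline{\xi}}{ds} =
\overline{\omega}\times \overline{\xi}$, $\displaystyle\frac{d\overline{\mu}}{ds} =
\overline{\omega}\times \overline{\mu}$, $\displaystyle\frac{d\overline{\nu}}{ds} =
\overline{\omega}\times \overline{\nu}$, as one verifies using
$\overline{\xi}\times\overline{\mu} = \overline{\nu}$,
$\overline{\mu}\times\overline{\nu} = \overline{\xi}$,
$\overline{\nu}\times\overline{\xi} = \overline{\mu}$.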
A fundamental theorem can be proved exactly as in Chapter 2.
\begin{theorem}
Let $c_{1}(s),c_{2}(s)$, $[c_{1}^{2} + c_{2}^{2}=1]$, $G(s),
K(s), T(s)$, $s\in [a,b]$, be a priori given smooth functions. Then
there exists a tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ for which $s$ is the
arclength of curve $C$ and the given functions are its invariants.
$\mathfrak{M}_{t}$ is determined up to a proper Euclidean motion.
The given functions are invariants of the versor field
$(C,\overline{\xi})$ for any smooth surface $S$ which contains
the curve $C$ and has the tangent planes $\pi(s)$, $s\in [a,b]$.
\end{theorem}
The geometric interpretations of the invariants $G(s), K(s)$ and
$T(s)$ are those mentioned in the Section 2, Chapter 3.
So, we get
$$
G(s) = \lim_{\Delta s\to 0}\displaystyle\frac{\Delta\psi_{1}}{\Delta s}, \;\; \mbox{
with } \Delta \psi_{1} = \sphericalangle (\overline{\xi}(s),
{\rm{pr}}_{\pi(s)}\overline{\xi}(s+\Delta s)),
$$
$$
K(s) = \lim_{\Delta s\to 0}\displaystyle\frac{\Delta\psi_{2}}{\Delta s}, \;\; \mbox{
with } \Delta \psi_{2} = \sphericalangle (\overline{\xi}(s),
{\rm{pr}}_{(P; \overline{\xi},\overline{\nu})}\overline{\xi}(s+\Delta
s)),
$$
$$
T(s) = \lim_{\Delta s\to 0}\displaystyle\frac{\Delta\psi_{3}}{\Delta s}, \;\; \mbox{
with } \Delta \psi_{3} = \sphericalangle (\overline{\mu}(s),
{\rm{pr}}_{(P; \overline{\mu},\overline{\nu})}\overline{\mu}(s+\Delta
s)).
$$
For this reason $G(s)$ is called the {\it geodesic curvature} of $(C,
\overline{\xi})$ on $S$; $K(s)$ is called {\it the normal curvature} of
$(C, \overline{\xi})$ on $S$ and $T(s)$ is named the {\it geodesic
torsion} of the tangent versor field $(C, \overline{\xi})$ on $S$.
The calculus of the invariants $G,K,T$ is carried out exactly as in Chapter
2, (2.3.3), (2.3.4):
\begin{eqnarray}
&&G(s) = \left\langle \overline{\xi}, \displaystyle\frac{d\overline{\xi}}{ds},
\overline{\nu} \right\rangle,\;\; K(s) =
\left\langle\displaystyle\frac{d\overline{\xi}}{ds},\overline{\nu}\right\rangle = -\left\langle
\overline{\xi},\displaystyle\frac{d\overline{\nu}}{ds}
\right\rangle,\nonumber\\&&T(s) = \left\langle \overline{\xi},
\overline{\nu}, \displaystyle\frac{d\overline{\nu}}{ds}\right\rangle.
\end{eqnarray}
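Let us note that the two expressions of $K(s)$ in (4.3.5) agree:
differentiating the identity $\langle \overline{\xi}, \overline{\nu}\rangle = 0$
along $C$ one obtains
$$
\left\langle \displaystyle\frac{d\overline{\xi}}{ds}, \overline{\nu}\right\rangle +
\left\langle \overline{\xi}, \displaystyle\frac{d\overline{\nu}}{ds}\right\rangle = 0.
$$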
The analytical expressions of the invariants $G,K,T$ on $S$ are given in the next
section.
\section{The calculus of invariants $G,K,T$ on a surface}
\setcounter{theorem}{0}\setcounter{equation}{0}
The tangent versor $\overline{\alpha} = \displaystyle\frac{d\overline{r}}{ds}$ to
the curve $C$ in $S$ is
\begin{equation}
\overline{\alpha}(s) = \displaystyle\frac{d\overline{r}}{ds} = \overline{r}_{u}
\displaystyle\frac{du}{ds} + \overline{r}_{v}\displaystyle\frac{dv}{ds}.
\end{equation}
$\overline{\alpha}(s)$ is the versor of the tangent vector
$d\overline{r}$, where
\begin{eqnarray}
d\overline{r} = \overline{r}_{u}du + \overline{r}_{v}dv.
\end{eqnarray}
The coordinates of a vector field $(C, \overline{V})$ tangent to
the surface $S$ along the curve $C\subset S$, with respect to the
moving frame $\cal{R} = (P; \overline{r}_{u}, \overline{r}_{v},
\overline{\nu})$, are the $C^{\infty}$ functions $V^{1}(s),
V^{2}(s)$:
$$
\overline{V}(s)=V^{1}(s)\overline{r}_{u}+V^{2}(s)\overline{r}_{v}.
$$
The square of length of vector $\overline{V}(s)$ is:
$$
\langle \overline{V}(s),\overline{V}(s)\rangle = E(V^{1})^{2} +
2FV^{1}V^{2} + G(V^{2})^{2}
$$
and the scalar product of two tangent vectors $\overline{V}(s)$
and
$\overline{U}(s)=U^{1}(s)\overline{r}_{u}+U^{2}(s)\overline{r}_{v}$
is as follows
$$
\langle \overline{U}, \overline{V}\rangle = EU^{1}V^{1} +
F(U^{1}V^{2} + V^{1}U^{2}) +GU^{2}V^{2}.
$$
Let $C$ and $C^{\prime}$ be two smooth curves on $S$ having $P(s)$ as
a common point. The tangent vectors $d\overline{r},
\delta\overline{r}$ at the point $P$ to $C$ and to $C^{\prime}$, respectively,
are
$$
d\overline{r} = \overline{r}_{u}du + \overline{r}_{v}dv,
$$
$$
\delta \overline{r} = \overline{r}_{u} \delta u +\overline{r}_{v}\delta v.
$$
They correspond to tangent directions $(du,dv)$, $(\delta u, \delta v)$ on
surface $S$.
Evidently, $\overline{\alpha}(s) = \displaystyle\frac{d\overline{r}}{ds}$ and
$\overline{\xi}(s) = \displaystyle\frac{\delta \overline{r}}{\delta s}$ are the tangent
versors to curves $C$ and $C^{\prime}$, respectively at point
$P(s)$, and $(C, \overline{\alpha})$, $(C, \overline{\xi})$ are the
tangent versor fields along the curve $C$.
The Darboux frame $\cal{R}_{D} = (P(s); \overline{\xi}(s),
\overline{\mu}(s), \overline{\nu}(s))$ has the versors
$\overline{\xi}(s), \overline{\mu}(s),$ $\overline{\nu}(s)$ given
by
\begin{eqnarray}
\overline{\xi}(s) &=& \overline{r}_{u}\displaystyle\frac{\delta u}{\delta s} +
\overline{r}_{v}\displaystyle\frac{\delta v}{\delta s},\nonumber\\\overline{\mu}(s)&=&
\displaystyle\frac{1}{\sqrt{\Delta}}\left[(E \overline{r}_{v} - F
\overline{r}_{u})\displaystyle\frac{\delta u}{\delta s} +(F \overline{r}_{v} - G
\overline{r}_{u})\displaystyle\frac{\delta v}{\delta s}\right],\\\overline{\nu}(s) &=&
\displaystyle\frac{1}{\sqrt{\Delta}}(\overline{r}_{u} \times \overline{r}_{v}).\nonumber
\end{eqnarray}
Taking into account (4.3.3) and (4.4.3) it follows:
\begin{eqnarray}
c_{1}&=& \langle \overline{\alpha}, \overline{\xi}\rangle = \displaystyle\frac{E du
\delta u +F (du \delta v + dv \delta u)+G dv \delta v}{\sqrt{\phi(du,
dv)}\sqrt{\phi(\delta u, \delta v)}}\\c_{2}&=& \langle
\overline{\alpha},\overline{\mu}\rangle=\displaystyle\frac{\sqrt{\Delta}(\delta u dv - \delta v
du)}{\sqrt{\phi(du,dv)}\sqrt{\phi(\delta u, \delta v)}}\nonumber
\end{eqnarray}
where
$$
\phi(du, dv) = \langle d\overline{r}, d\overline{r}\rangle,\;\;
\phi(\delta u, \delta v) = \langle \delta \overline{r}, \delta
\overline{r}\rangle.
$$
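The relation $c_{1}^{2} + c_{2}^{2} = 1$ can be verified directly: since
$d\overline{r}\times \delta\overline{r} = (du\,\delta v - dv\,\delta u)\,
\overline{r}_{u}\times\overline{r}_{v}$ and
$|\overline{r}_{u}\times\overline{r}_{v}|^{2} = \Delta$, the Lagrange identity
$\langle d\overline{r}, \delta\overline{r}\rangle^{2} +
|d\overline{r}\times\delta\overline{r}|^{2} = \phi(du,dv)\,\phi(\delta u,\delta v)$
becomes
$$
[E du \delta u + F(du \delta v + dv \delta u) + G dv \delta v]^{2} +
\Delta(\delta u\, dv - \delta v\, du)^{2} = \phi(du,dv)\,\phi(\delta u, \delta v).
$$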
In order to calculate the invariants $G,K,T$ we need to determine
the vectors $\displaystyle\frac{d\overline{\xi}}{ds},
\displaystyle\frac{d\overline{\nu}}{ds}$.
By means of (4.4.3) we obtain
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}}{ds}&=& \displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta
\overline{r} }{\delta s}\right) =
\overline{r}_{uu}\displaystyle\frac{du}{ds}\displaystyle\frac{\delta u}{\delta s} +
\overline{r}_{uv}\left(\displaystyle\frac{du}{ds}\displaystyle\frac{\delta v}{\delta s} + \displaystyle\frac{\delta u
}{\delta s}\displaystyle\frac{dv}{ds}\right)
\nonumber\\&&+\overline{r}_{vv}\displaystyle\frac{dv}{ds} \displaystyle\frac{\delta v}{\delta s} +
\overline{r}_{u}\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta u }{\delta s}\right)
+\overline{r}_{v}\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta v}{\delta s}\right).
\end{eqnarray}
The mixed product $\langle\overline{r}_{u}, \overline{r}_{v},
\overline{\nu}\rangle$ is equal to $\sqrt{\Delta}$. It follows:
\begin{eqnarray}
\left\langle \overline{\xi}, \displaystyle\frac{d\overline{\xi}}{ds},
\overline{\nu}\right\rangle& =&
\sqrt{\Delta}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta v }{\delta s}\right)
\displaystyle\frac{\delta u}{\delta s} - \displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta u }{\delta s
}\right)\displaystyle\frac{\delta v}{\delta s}\right] \nonumber\\&&+ \langle
\overline{r}_{u},
\overline{r}_{uu},\overline{\nu}\rangle\displaystyle\frac{du}{ds}\left(\displaystyle\frac{\delta
u}{\delta s}\right)^{2}\nonumber+\\&&+ \langle \overline{r}_{u},
\overline{r}_{uv}, \overline{\nu}
\rangle\left(\displaystyle\frac{du}{ds}\displaystyle\frac{\delta v}{\delta s} +
\displaystyle\frac{dv}{ds}\displaystyle\frac{\delta u}{\delta s} \right)\displaystyle\frac{\delta u}{\delta s} + \\&&+
\langle \overline{r}_{u}, \overline{r}_{vv},
\overline{\nu}\rangle\displaystyle\frac{\delta u}{\delta s}\displaystyle\frac{dv}{ds}\displaystyle\frac{\delta v}{\delta
s }+\nonumber\\&&+\langle \overline{r}_{v}, \overline{r}_{uu},
\overline{\nu}\rangle \displaystyle\frac{du}{ds}\displaystyle\frac{\delta u}{\delta s}\displaystyle\frac{\delta v}{\delta
s } + \langle \overline{r}_{v}, \overline{r}_{uv},
\overline{\nu}\rangle\left(\displaystyle\frac{du}{ds}\displaystyle\frac{\delta v}{\delta s} +
\displaystyle\frac{dv}{ds} \displaystyle\frac{\delta u}{\delta s}\right)\displaystyle\frac{\delta v}{\delta
s}\nonumber\\&&+\langle \overline{r}_{v}, \overline{r}_{vv},
\overline{\nu}\rangle\displaystyle\frac{dv}{ds}\left(\displaystyle\frac{\delta v}{\delta s
}\right)^{2}.\nonumber
\end{eqnarray}
Taking into account (4.2.1), (4.2.2), (4.3.5) and (4.4.5) we have
\begin{proposition}
The geodesic curvature of the tangent versor field
$(C,\overline{\xi})$ on $S$, is expressed as follows:
\begin{eqnarray}
G(\delta,d)&=& \sqrt{\Delta}\{\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta v}{\delta
s}\right)\displaystyle\frac{\delta u}{\delta s} - \displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta u}{\delta s
}\right)\displaystyle\frac{\delta v}{\delta s} \nonumber\\&&+ \left( {2\brace 11}
\displaystyle\frac{du}{ds}+{2\brace12} \displaystyle\frac{dv}{ds}\right)\left(\displaystyle\frac{\delta u}{\delta
s}\right)^{2}\\&&-\left(\left({ 1\brace 11}
-{2\brace 12}
\right)\displaystyle\frac{du}{ds} - \left( {2\brace 22}
- {1\brace 12}
\right)\displaystyle\frac{dv}{ds}\right)\displaystyle\frac{\delta u}{\delta s}\displaystyle\frac{\delta v}{\delta s}
\nonumber\\&& - \left({ 1\brace 21} \displaystyle\frac{du}{ds} + {1\brace 22}
\displaystyle\frac{dv}{ds}\right)\left(\displaystyle\frac{\delta v}{\delta s}\right)^{2}\}.\nonumber
\end{eqnarray}
\end{proposition}
Remark that the Christoffel symbols are expressed only by means of
the coefficients of the first fundamental form $\phi$ of the surface
$S$ and their derivatives. There follows a very important result
obtained by Al. Myller [34]:
\begin{theorem}
The geodesic curvature $G$ of a tangent versor field $(C,
\overline{\xi})$ on a surface $S$ is an intrinsic invariant of $S$.
\end{theorem}
The invariant $G$ was named by Al. Myller the {\it deviation of
para\-llelism} of the tangent field $(C,\overline{\xi})$ on
surface $S$.
The expression (4.4.7) of $G$ can be simplified by introducing the
following notations
\begin{eqnarray}
&&\displaystyle\frac{D}{ds}\left(\displaystyle\frac{\delta u^{i}}{\delta s}\right) =
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\delta u^{i}}{\delta s}\right) +\sum_{j,k=1}^{2}{
i\brace jk}
\displaystyle\frac{\delta u^{j}}{\delta s} \displaystyle\frac{d u^{k}}{ds}\nonumber\\&& (u= u^{1}; v =u^{2};
i=1,2).
\end{eqnarray}
The operator (4.4.8) is the classical operator of covariant
derivative with respect to Levi-Civita connection.
Denoting by
\begin{equation}
\hspace*{10mm}G^{ij}(\delta, d) = \sqrt{\Delta}\left\{\displaystyle\frac{D}{ds}\left(\displaystyle\frac{\delta
u^{i}}{\delta s }\right)\displaystyle\frac{\delta u^{j}}{\delta s} -
\displaystyle\frac{D}{ds}\left(\displaystyle\frac{\delta u^{j} }{\delta s}\right)\displaystyle\frac{\delta u^{i}}{\delta
s}\right\},(i,j=1,2)
\end{equation}
and remarking that $G^{ij}(\delta,d) = -G^{ji}(\delta,d)$ we have $G^{11}
= G^{22}=0$.
\begin{proposition}
The following formula holds:
\begin{equation}
G(\delta, d) = G^{12}(\delta,d).
\end{equation}
\end{proposition}
Indeed, the previous formula is a consequence of (4.4.7), (4.4.8)
and (4.4.9).
\begin{corollary}
The parallelism of tangent versor field $(C,\overline{\xi})$ to
$S$ along the curve $C$ is characterized by the differential
equation
\begin{equation}
G^{ij}(\delta,d) = 0\;\; (i,j =1,2).
\end{equation}
\end{corollary}
In the following we introduce the notations \begin{equation} G(s)
= G(\delta,d),\;\; K(s) = K(\delta,d),\; T(s) = T(\delta,d),
\end{equation}
since $\overline{\xi}(s) = \overline{r}_{u} \displaystyle\frac{\delta u}{\delta s} +
\overline{r}_{v}\displaystyle\frac{\delta v}{\delta s}$, $\overline{\alpha}(s) =
\overline{r}_{u}\displaystyle\frac{du}{ds}+\overline{r}_{v} \displaystyle\frac{dv}{ds}$.
\medskip
The second formula (4.3.5), by means of (4.4.5) gives us the
expression of normal curvature $K(\delta,d)$ in the form
\begin{equation}
\hspace*{12mm}K(\delta,d) = \displaystyle\frac{L du \delta u +M(du \delta v + dv \delta u) + N dv \delta
v}{\sqrt{Edu^{2} + 2F du dv + G dv^{2}}\sqrt{E\delta u^{2} + 2F\delta u \delta
v + G\delta v^{2}}}.
\end{equation}
It follows
\begin{equation}
K(\delta,d) = K(d,\delta).
\end{equation}
\begin{remark}\rm
The invariant $K(\delta,d)$ can be written as follows
$$
K(\delta,d) = \displaystyle\frac{\psi(\delta,d)}{\sqrt{\phi(d,d)}\sqrt{\phi(\delta,\delta)}}
$$
where $\psi(\delta,d)$ is the polar form of the second fundamental
form $\psi(d,d)$ of surface $S$.
\end{remark}
$K(\delta,d)=0$ gives us the property of conjugation of $(C,
\overline{\xi})$ with $(C,\overline{\alpha})$.
The calculus of invariant $T(\delta,d)$ can be made by means of
Weingarten formulae (4.2.3).
Since we have
\begin{eqnarray}
\displaystyle\frac{d\overline{\nu}}{ds} &=& \displaystyle\frac{1}{\Delta}\left\{[(FM -GL)
\overline{r}_{u} + (FL -
EM)\overline{r}_{v}]\displaystyle\frac{du}{ds}\nonumber\right.\\&&\left.+ \left[(FN
-GM)\overline{r}_{u} + (FM -
EN)\overline{r}_{v}\right]\displaystyle\frac{dv}{ds}\right\},
\end{eqnarray}
we deduce
\begin{eqnarray}
T(\delta,d)&=& \displaystyle\frac{1}{\sqrt{\Delta}}\left\{\left[(FM - GL) \displaystyle\frac{du}{ds} +
(FN-GM)\displaystyle\frac{dv}{ds}\right]\displaystyle\frac{\delta v}{\delta s}\right.\nonumber\\&&\left.-
\left[(FL-EM)\displaystyle\frac{du}{ds} + (FM- EN)\displaystyle\frac{dv}{ds}\right]\displaystyle\frac{\delta
u}{\delta s}\right\}.
\end{eqnarray}
For simplicity we write $T(\delta,d)$ from previous formula in the
following form
\begin{equation}
\hspace*{12mm}T(\delta,d) =
\displaystyle\frac{1}{\sqrt{\Delta}\sqrt{\phi(d,d)}\sqrt{\phi(\delta,\delta)}}\left\|
\begin{array}{cc}
E \delta u\hspace{-0.5mm} +\hspace{-0.5mm} F\delta v& F\delta u\hspace{-0.5mm} +\hspace{-0.5mm}G \delta v\\
L du\hspace{-0.5mm} +\hspace{-0.5mm} Mdv & Mdu + N dv
\end{array} \right\|.
\end{equation}
Remember the expression of mean curvature $H$ and total curvature
$K_{t}$ of surface $S$:
\begin{equation}
H = \displaystyle\frac{EN - 2FM +GL}{2(EG -F^{2})},\;\; K_{t} = \displaystyle\frac{LN -
M^{2}}{EG-F^{2}}.
\end{equation}
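For example, for the sphere of radius $R$, $\overline{r}(u,v) =
(R\sin u\cos v, R\sin u\sin v, R\cos u)$, one finds $E = R^{2}$, $F = 0$,
$G = R^{2}\sin^{2}u$ and, choosing the inner normal $\overline{\nu} =
-\displaystyle\frac{1}{R}\overline{r}$, $L = R$, $M = 0$, $N = R\sin^{2}u$; hence
$$
H = \displaystyle\frac{1}{R},\;\; K_{t} = \displaystyle\frac{1}{R^{2}}.
$$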
It is not difficult to prove, by means of (4.4.16), the following
formula
\begin{equation}
T(\delta, d) - T(d,\delta) = 2\sqrt{\Delta} H\left(\displaystyle\frac{\delta u}{\delta
s}\displaystyle\frac{dv}{ds} - \displaystyle\frac{\delta v}{\delta s}\displaystyle\frac{du}{ds}\right).
\end{equation}
It has the following nice consequence:
\begin{theorem}
The geodesic torsion $T(\delta,d)$ is symmetric with respect to the
directions $\delta$ and $d$ if and only if $S$ is a minimal surface.
\end{theorem}
Finally, $(C,\overline{\xi})$ is orthogonally conjugated with
$(C,\overline{\alpha})$ if and only if $T(\delta,d) = 0$.
\begin{remark}\rm
The invariant $K(\delta,d)$ was discovered by O. Mayer [20] and the
invariant $T(\delta,d)$ was found by E. Bortolotti [3].
\end{remark}
\section{The Levi-Civita parallelism of vectors tangent to $S$}
\setcounter{theorem}{0}\setcounter{equation}{0}
The notion of Myller parallelism of vector field
$(C,\overline{V})$ tangent to $S$ along curve $C$, in the
associated Myller configuration $\mathfrak{M}_{t}(C,
\overline{\xi},\pi)$ is exactly the Levi-Civita parallelism of tangent
vector field $(C,\overline{V})$ to $S$ along the curve $C$. Indeed,
taking into account that the tangent versor field
$(C,\overline{\xi})$ has
\begin{equation}
\overline{\xi}(s) = \xi^{1}(s)\overline{r}_{u} +
\xi^{2}(s)\overline{r}_{v}
\end{equation}
and expression of the operator $\displaystyle\frac{D}{ds}$ is
\begin{equation}
\displaystyle\frac{D\xi^{i}}{ds} = \displaystyle\frac{d\xi^{i}}{ds} +
\sum_{j,k=1}^{2}\xi^{j}{i\brace jk} \displaystyle\frac{du^{k}}{ds},\; (i=1,2)
we have
$$
G^{ij} = \displaystyle\frac{D\xi^{i}}{ds}\xi^{j} -
\displaystyle\frac{D\xi^{j}}{ds}\xi^{i},\; (i,j=1,2).
$$
Thus the parallelism of versor $(C,\overline{\xi})$ along $C$ on
$S$ is expressed by $G^{ij} = 0.$ But these equations are
equivalent to
\begin{equation}
\displaystyle\frac{D(\lambda(s)\xi^{i}(s))}{ds} = 0\; (i=1,2),\; (\lambda(s)\neq 0).
\end{equation}
This is the definition of Levi-Civita parallelism of vectors
$\overline{V}(s)=\lambda(s)\overline{\xi}(s)$, $(\lambda(s)=\|V\|)$ tangent to $S$ along
$C$. Writing $\overline{V} = V^{1}(s) \overline{r}_{u} +
V^{2}(s)\overline{r}_{v}$ and putting
\begin{equation}
\displaystyle\frac{DV^{i}}{ds} =
\displaystyle\frac{dV^{i}}{ds}+\sum_{j,k=1}^{2}V^{j}{i\brace jk}
\displaystyle\frac{du^{k}}{ds}
\end{equation}
the Levi-Civita parallelism is expressed by
\begin{equation}
\displaystyle\frac{DV^{i}}{ds} = 0.
\end{equation}
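For example, if $S$ is a plane with the Cartesian parametrization
$\overline{r}(u,v) = (u,v,0)$, then $E = G = 1$, $F = 0$, all the symbols
${i\brace jk}$ vanish and (4.5.5) reduces to
$$
\displaystyle\frac{dV^{i}}{ds} = 0,\; (i=1,2),
$$
that is, the Levi-Civita parallelism along any curve $C$ coincides with the
ordinary parallelism from $E_{3}$ (the components $V^{i}$ are constant).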
In the case $\overline{V}(s)=\overline{\xi}(s)$ is a versor field,
parallelism of $(C,\overline{\xi})$ in the associated Myller
configurations $\mathfrak{M}_{t}(C, \overline{\xi},\pi)$ is called
the parallelism of directions tangent to $S$ along $C$. We can
express the condition of parallelism by means of invariants of the
field $(C,\overline{\xi})$, as follows:
1. {\it $(C,\overline{\xi})$ is parallel in the Levi-Civita sense on $S$
along $C$ iff $G(\delta,d)=0.$}
2. {\it $(C, \overline{\xi})$ is parallel in the Levi-Civita sense on $S$
along $C$ iff the versor $\overline{\xi}(s^{\prime})$,
$\{|s^{\prime}-s|<\varepsilon, \varepsilon>0, s^{\prime}\in (s_{1},s_{2})\}$ is
parallel in the space $E_{3}$ with the normal plane $(P(s);
\overline{\xi}(s), \overline{\nu}(s))$.}
3. {\it A necessary and sufficient condition for the versor field $(C,
\overline{\xi})$ to be parallel on $S$ along $C$ in the Levi-Civita sense
is that, developing on a plane the ruled surface $E$ generated by
the field of tangent planes $(C,\pi)$ along $C$, the directions
$(C,\overline{\xi})$ become, after the developing, parallel in the
Euclidean sense.}
4. {\it The directions $(C,\overline{\xi})$ are parallel in the Levi-Civita
sense on $S$ along $C$ iff the versor field $(C,\overline{\xi}_{2})$ is normal to $S$.}
5. {\it $(C,\overline{\xi})$ is parallel on $S$ along $C$, iff $|K| =
K_{1}$.}
6. {\it If $(C,\overline{\xi})$ is parallel on $S$ along $C$, then $|K| = K_{1},
T=K_{2}$.}
7. {\it If the ruled surface $R(C,\overline{\xi})$ is not developable,
then $(C,\overline{\xi})$ is parallel in the Levi-Civita sense iff
$C$ is the striction line of surface $R(C,\overline{\xi})$.}
Taking into account the system of differential equations (4.5.5)
in the given initial conditions, we have assured the existence and
uniqueness of the Levi-Civita parallel versor fields on $S$ along
$C$.
Other results presented in Section 9, Chapter 2 can be
particularized here without difficulties.
A first application: the Tchebishev nets on $S$ are defined as the
nets of $S$ for which the tangent lines to one family of curves of the
net are parallel on $S$ along every curve of the other family of
curves of the net, and conversely.
\bigskip
\noindent{\bf A Bianchi result}
\begin{theorem}
In order that the parametric net $(u = u_{0}, v=v_{0})$ on $S$
be a Tchebishev net it is necessary and sufficient that the following
conditions hold:
\begin{equation}
{1\brace 12}
= {2\brace 12}
= 0.
\end{equation}
\end{theorem}
Indeed, we have $G(\delta,d) = G(d,\delta) = 0$ if (4.5.6) holds.
But (4.5.6) is equivalent to the equations $\displaystyle\frac{\partial E}{\partial v} =
\displaystyle\frac{\partial G}{\partial u} = 0.$ So, with respect to a Tchebishev
parametrization of $S$ its arclength element $ds^{2} = \phi(d,d)$
is given by
$$
ds^{2} = E(u)du^{2} + 2F (u,v)du dv + G(v)dv^{2}.
$$
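The equivalence invoked above follows from the classical expressions
$$
{1\brace 12} = \displaystyle\frac{G\, E_{v} - F\, G_{u}}{2\Delta},\;\;
{2\brace 12} = \displaystyle\frac{E\, G_{u} - F\, E_{v}}{2\Delta},
$$
where $E_{v} = \displaystyle\frac{\partial E}{\partial v}$, $G_{u} =
\displaystyle\frac{\partial G}{\partial u}$; since $EG - F^{2} = \Delta\neq 0$,
both symbols vanish if and only if $E_{v} = G_{u} = 0$.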
We finish this paragraph remarking that the parallelism of vectors
$(C, \overline{V})$ on $S$ or the concurrence of vectors
$(C,\overline{V})$ on $S$ can be studied using the corresponding
notions in configurations $\mathfrak{M}_{t}$ described in the
Chapter 3.
\section{The geometry of curves on a surface}
\setcounter{theorem}{0}\setcounter{equation}{0}
The geometric theory of curves $C$ embedded in a surface $S$ can
be derived from the geometry of tangent Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ in which
$\overline{\alpha}(s)$ is tangent versor to $C$ at point $P(s)\in C$
and $\pi(s)$ is tangent plane to $S$ at point $P$ for any $s\in
(s_{1},s_{2})$.
Since $\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ is geometrically
associated to the curve $C$ on $S$, we can define its Darboux frame
$\cal{R}_{D}$, determine the moving equations of $\cal{R}_{D}$ and
its invariants, as belonging to the curve $C$ on the surface $S$.
Applying the results established in Chapter 3, first of all we
have
\begin{theorem}
For a smooth curve $C$ embedded in a surface $S$, there exists a
Darboux frame $\cal{R}_{D} = (P(s); \overline{\alpha}(s),
\overline{\mu}^{*}(s), \overline{\nu}(s))$ and a system of
invariants $\kappa_{g}(s), \kappa_{n}(s)$ and $\tau_{g}(s)$, satisfying
the following moving equations
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s),\;\; \forall s\in
(s_{1},s_{2})
\end{equation}
and
\begin{eqnarray}
\displaystyle\frac{d\overline{\alpha}}{ds} &=& \kappa_{g}(s)\overline{\mu}^{*}
+\kappa_{n}(s)\overline{\nu},\nonumber\\\displaystyle\frac{d\overline{\mu}^{*}}{ds}
&=& -\kappa_{g}(s)\overline{\alpha} +
\tau_{g}(s)\overline{\nu},\\\displaystyle\frac{d\overline{\nu}}{ds}&=&
-\kappa_{n}(s)\overline{\alpha} - \tau_{g}(s)\overline{\mu}^{*},\;\;
\forall s\in (s_{1},s_{2}).\nonumber
\end{eqnarray}
\end{theorem}
The functions $\kappa_{g}(s), \kappa_{n}(s), \tau_{g}(s)$ are called the
{\it geodesic curvature}, the {\it normal curvature} and {\it
geodesic torsion} of curve $C$ at point $P(s)$ on surface $S,$
respectively.
Exactly as in Chapter 3, a fundamental theorem can be enounced and
proved.
The invariants $\kappa_{g}, \kappa_{n}$ and $\tau_{g}$ have the same
values along $C$ on any smooth surface $S^{\prime}$ which passes
through the curve $C$ and is tangent to the surface $S$ along
$C$.
The geometric interpretations of these invariants and the cases of
curves $C$ for which $\kappa_{g}(s) = 0,$ or $\kappa_{n}(s) = 0$ or
$\tau_{g}(s)=0$ can be studied as in Chapter 3.
The curves $C$ on $S$ for which $\kappa_{g}(s) = 0$ are called (as in
Chapter 3) geodesics (or {\it autoparallel curves}) of $S$. If $C$
has the property $\kappa_{n}(s) = 0,$ $\forall s\in (s_{1},s_{2})$, it
is {\it asymptotic curve} of $S$ and the curve $C$ for which
$\tau_{g}(s) = 0,$ $\forall s\in(s_{1},s_{2})$ is the {\it
curvature line} of $S$.
The expressions of these invariants can be obtained from those of
the invariants $G(\delta,d)$, $K(\delta,d)$ and $T(\delta,d)$ for
$\overline{\xi}(s) = \overline{\alpha}(s)$ given in Chapter 4:
\begin{equation}
\kappa_{g} = G(d,d),\; \kappa_{n}(s) = K(d,d),\;\; \tau_{g}(s) = T(d,d).
\end{equation}
So, we have
\begin{equation}
\hspace*{10mm}\begin{array}{lll}
\kappa_{g}&=&
\sqrt{\Delta}\left\{\displaystyle\frac{D}{ds}\left(\displaystyle\frac{du^{i}}{ds}\right)\displaystyle\frac{du^{j}}{ds}-\displaystyle\frac{D}{ds}
\left(\displaystyle\frac{du^{j}}{ds}\right)\displaystyle\frac{du^{i}}{ds}\right\},\ (i,j=1,2)\vspace*{2mm}\\
\kappa_{n}&=& \displaystyle\frac{L du^{2} + 2M du dv +N dv^{2}}{E du^{2} + 2F du dv
+ G dv^{2}},\vspace*{2mm}\\ \tau_{g}&=& \displaystyle\frac{1}{\sqrt{\Delta}}
\displaystyle\frac{\left\|\begin{array}{cc}Edu + Fdv & Fdu + Gdv\vspace*{2mm}\\ L du + Mdv& Mdu + N dv
\end{array}\right\|}{Edu^{2} + 2F du dv + Gdv^{2}}.
\end{array}
\end{equation}
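For example, on the circular cylinder $\overline{r}(u,v) = (R\cos v, R\sin v, u)$
we have $E = 1$, $F = 0$, $G = R^{2}$ constant, so that all the symbols
${i\brace jk}$ vanish; for the curves
$$
u = s\cos\omega,\;\; v = \displaystyle\frac{s\sin\omega}{R}\;\; (\omega = {\rm const.})
$$
the first formula (4.6.4) gives $\kappa_{g} = 0$, i.e. the circular helices
are geodesics of the cylinder, while the second one gives $\kappa_{n} =
\displaystyle\frac{\sin^{2}\omega}{R}$ (with $L = M = 0$, $N = R$ for the inner
normal).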
Evidently, $\kappa_{g}$ is an intrinsic invariant of the surface $S$ and
$\kappa_{n}$ is the ratio of the fundamental forms $\psi$ and $\phi$.
The Mark Krein formula (3.4.3), Chapter 3 gives now the
Gauss-Bonnet formula for a surface $S$.
\newpage
\section{The formulae of O. Mayer and E. Bortolotti}
\setcounter{theorem}{0}\setcounter{equation}{0}
The geometers O. Mayer and E. Bortolotti gave some new forms of
the invariants $K(\delta,d)$ and $T(\delta,d)$ which generalize the Euler
or Bonnet formulae from the geometry of surfaces in Euclidean
space $E_{3}$.
Let $S$ be a smooth surface in $E_{3}$ having the parametrization
given by curvature lines. Thus the coefficients $F$ and $M$ of the
first fundamental and the second fundamental form vanish.
We denote by $\theta = \sphericalangle \left(\overline{\alpha},
\displaystyle\frac{\overline{r}_{u}}{\sqrt{E}}\right)$ and obtain
\begin{equation}
\cos \theta = \displaystyle\frac{\sqrt{E}du}{\sqrt{Edu^{2} + G dv^{2}}},\;\; \sin
\theta = \displaystyle\frac{\sqrt{G}dv}{\sqrt{Edu^{2} + G dv^{2}}}.
\end{equation}
The principal curvatures are expressed by
\begin{equation}
\displaystyle\frac{1}{R_{1}} = \displaystyle\frac{L}{E},\;\; \displaystyle\frac{1}{R_{2}} = \displaystyle\frac{N}{G}.
\end{equation}
The mean curvature and total curvature are
\begin{eqnarray}
H&=&\displaystyle\frac{1}{2}\left(\displaystyle\frac{1}{R_{1}} + \displaystyle\frac{1}{R_{2}}\right) =
\displaystyle\frac{1}{2}\left(\displaystyle\frac{L}{E} +
\displaystyle\frac{N}{G}\right)\nonumber\\K_{t}&=&\displaystyle\frac{1}{R_{1}}\displaystyle\frac{1}{R_{2}}
= \displaystyle\frac{LN}{EG}.
\end{eqnarray}
Consider a tangent versor field $(C,\overline{\xi})$,
$\overline{\xi} = \overline{r}_{u}\displaystyle\frac{\delta u}{\delta s} +
\overline{r}_{v}\displaystyle\frac{\delta v}{\delta s}$, and let $\sigma =
\sphericalangle
\left(\overline{\xi},\displaystyle\frac{\overline{r}_{u}}{\sqrt{E}}\right)$. One gets
\begin{equation}
\cos \sigma = \displaystyle\frac{\sqrt{E}\delta u}{\sqrt{E\delta u^{2} + G\delta v^{2}}},\;
\sin \sigma = \displaystyle\frac{\sqrt{G}\delta v}{\sqrt{E \delta u^{2} + G\delta v^{2}}}.
\end{equation}
Consequently, the expressions (4.4.13) and (4.4.17) of
the normal curvature and geodesic torsion of
$(C,\overline{\xi})$ on $S$ are as follows
\begin{equation}
K(\delta,d) = \displaystyle\frac{\cos \sigma \cos \theta}{R_{1}} + \displaystyle\frac{\sin \sigma \sin
\theta}{R_{2}}
\end{equation}
\begin{equation}
T(\delta,d) = \displaystyle\frac{\cos \sigma\sin \theta}{R_{2}} - \displaystyle\frac{\sin \sigma \cos
\theta}{R_{1}}.
\end{equation}
The first formula was established by O. Mayer [20] and second
formula was given by E. Bortolotti [3].
In the case $\overline{\xi}(s) = \overline{\alpha}(s)$ these formulae
reduce to the known Euler and Bonnet formulas, respectively:
\begin{eqnarray}
\kappa_{n}&=& \displaystyle\frac{\cos^{2}\theta}{R_{1}} +
\displaystyle\frac{\sin^{2}\theta}{R_{2}}\nonumber\\\tau_{g} &=&
\displaystyle\frac{1}{2}\left(\displaystyle\frac{1}{R_{2}} - \displaystyle\frac{1}{R_{1}}\right)\sin
2\theta.
\end{eqnarray}
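For example, on the sphere of radius $R$ every tangent direction is
principal, $R_{1} = R_{2} = R$, and the formulae (4.7.7) give
$$
\kappa_{n} = \displaystyle\frac{1}{R},\;\; \tau_{g} = 0
$$
for every angle $\theta$: each curve on the sphere has the constant normal
curvature $\displaystyle\frac{1}{R}$ and each curve is a curvature line.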
For $\theta =0$ or $\theta = \displaystyle\frac{\pi}{2}$, $\kappa_{n}$ is equal to
$\displaystyle\frac{1}{R_{1}}$ and $\displaystyle\frac{1}{R_{2}}$, respectively. For $\theta =
\pm \displaystyle\frac{\pi}{4}$, $\tau_{g}$ takes the extremal values
\begin{equation}
\displaystyle\frac{1}{T_{1}} = \displaystyle\frac{1}{2}\left(\displaystyle\frac{1}{R_{2}} -
\displaystyle\frac{1}{R_{1}}\right),\; \displaystyle\frac{1}{T_{2}} =
-\displaystyle\frac{1}{2}\left(\displaystyle\frac{1}{R_{2}} - \displaystyle\frac{1}{R_{1}}\right).
\end{equation}
Thus
\begin{equation}
T_{m} = \displaystyle\frac{1}{2}\left(\displaystyle\frac{1}{T_{1}} +
\displaystyle\frac{1}{T_{2}}\right),\; T_{t} = \displaystyle\frac{1}{T_{1}}\displaystyle\frac{1}{T_{2}}
\end{equation}
are the {\it mean torsion} of $S$ at point $P\in S$ and the {\it total
torsion} of $S$ at point $P\in S.$
For surfaces $S$, $T_{m}$ and $T_{t}$ have the following
properties:
\begin{equation}
T_{m} = 0,\;\; T_{t} = -\displaystyle\frac{1}{4}\left(\displaystyle\frac{1}{R_{1}} -
\displaystyle\frac{1}{R_{2}}\right)^{2}.
\end{equation}
\begin{remark}\rm
1. As we will see in the next chapter, the nonholonomic manifolds in
$E_{3}$ have a nonvanishing mean torsion $T_{m}$.
2. $T_{t}$ from (4.7.10) gives us the Bacaloglu curvature of
surfaces [35].
\end{remark}
Consider in the plane $\pi(s)$ the cartesian orthonormal frame $(P(s);
\overline{i}_{1}, \overline{i}_{2})$, $\overline{i}_{1} =
\displaystyle\frac{\overline{r}_{u}}{\sqrt{E}}, \overline{i}_{2} =
\displaystyle\frac{\overline{r}_{v}}{\sqrt{G}}$ and the point $Q\in \pi(s)$
with the coordinates $(x,y)$, given by
$$
\overrightarrow{PQ} = x\overline{i}_{1} + y \overline{i}_{2},\;
\overrightarrow{PQ} = |\kappa_{n}|^{-1}\overline{\alpha}.
$$
But $\overline{\alpha} =\cos \theta\, \overline{i}_{1} +\sin \theta\,
\overline{i}_{2}$. So
we have the coordinates $(x,y)$ of the point $Q$:
\begin{equation}
x = |\kappa_{n}|^{-1}\cos \theta; \;\; y = |\kappa_{n}|^{-1}\sin \theta.
\end{equation}
The locus of the points $Q$, when $\theta$ varies in the interval
$(0,2\pi)$, is obtained by eliminating the variable $\theta$ between the
formulae (4.7.7) and (4.7.11). One obtains a pair of conics:
\begin{equation}
\displaystyle\frac{x^{2}}{R_{1}} + \displaystyle\frac{y^{2}}{R_{2}} = \pm 1
\end{equation}
called the {\it Dupin indicatrix} of normal curvatures. This indicatrix
is important in the local study of surfaces $S$ in Euclidean
space.
Analogously we can introduce the Bonnet indicatrix. Consider in
the plane $\pi(s)$ tangent to $S$ at point $P(s)$ the frame
$(P(s), \overline{i}_{1}, \overline{i}_{2})$ and the point
$Q^{\prime}$ given by
$$
\overrightarrow{PQ}^{\prime} = |\tau_{g}|^{-1}\overline{\alpha} =
x\overline{i}_{1} +y \overline{i}_{2}.
$$
Then, the locus of the points $Q^{\prime}$, when $\theta$ verifies
(4.7.7) and $x =|\tau_{g}|^{-1}\cos \theta$, $y = |\tau_{g}|^{-1}\sin
\theta$, defines the Bonnet indicatrix of geodesic torsions:
\begin{equation}
\left(\displaystyle\frac{1}{R_{2}} - \displaystyle\frac{1}{R_{1}}\right)xy = \pm 1
\end{equation}
which, in general, is formed by a pair of conjugated equilateral
hyperbolas.
Of course, the relations between the indicatrix of Dupin and
Bonnet can be studied without difficulties.
Following the same way we can introduce the indicatrix of the
invariants $K(\delta,d)$ and $T(\delta,d)$.
So, consider the angles $\theta = \sphericalangle
(\overline{\alpha},\overline{i}_{1})$, $\sigma =
\sphericalangle(\overline{\xi},\overline{i}_{1})$ and $U\in
\pi(s)$. The point $U$ has the coordinates $x,y$ with respect to the
frame $(P(s); \overline{i}_{1}, \overline{i}_{2})$ given by
\begin{equation}
x= |K(\delta, d)|^{-1}\cos \sigma,\;\; y = |K(\delta,d)|^{-1} \sin \sigma.
\end{equation}
The locus of points $U$ is obtained from (4.7.5) and (4.7.14):
\begin{equation}
\displaystyle\frac{x\cos \theta}{R_{1}} +\displaystyle\frac{y\sin \theta}{R_{2}} = \pm 1.
\end{equation}
Therefore (4.7.15) is the indicatrix of the normal curvature
$K(\delta,d)$ of versor field $(C,\overline{\xi}).$ It is a pair of
parallel straight lines.
Similarly, for
$$
x = |T(\delta,d)|^{-1}\cos \sigma,\;\; y = |T(\delta,d)|^{-1}\sin \sigma
$$
and the formula (4.7.6) we determine the indicatrix of geodesic
torsion $T_{g}(\delta,d)$ of the versor field $(C,\overline{\xi}):$
$$
\displaystyle\frac{x\sin \theta}{R_{2}} - \displaystyle\frac{y\cos \theta}{R_{1}} = \pm 1.
$$
It is a pair of parallel straight lines.
Finally, we can prove without difficulties the following formula:
\begin{equation}
\hspace*{14mm}\kappa_{n}(\theta)\kappa_{n}(\sigma)\hspace{-0.5mm}+\hspace{-0.5mm} \tau_{g}(\theta) \tau_{g}(\sigma)\hspace{-0.5mm}=\hspace{-0.5mm}2HK(\sigma,\theta)\cos
(\sigma-\theta)\hspace{-0.5mm}-\hspace{-0.5mm}K_{t}\cos 2(\sigma-\theta),
\end{equation}
where $\kappa_{n}(\theta)= \kappa_{n}(d,d),$ $\kappa_{n}(\sigma) = \kappa_{n}(\delta,\delta)$,
$K(\sigma,\theta) = K(d,\delta)$.
For $\sigma = \theta$ (i.e. $\overline{\xi} = \overline{\alpha}$) the
previous formula leads to the well-known Beltrami-Enneper formula:
\begin{equation}
\kappa_{n}^{2} + \tau_{g}^{2} - 2H \kappa_{n} +K_{t} = 0,
\end{equation}
for every point $P(s)\in S.$
Along the asymptotic curves we have $\kappa_{n} = 0,$ and the previous
equations give us the Enneper formula
$$
\tau_{g}^{2} +K_{t} = 0,\; (\kappa_{n}=0).
$$
Along the curvature lines, $\tau_{g}=0$, and one obtains from
(4.7.17)
$$
\kappa_{n}^{2} - 2H \kappa_{n} +K_{t} = 0,\;\; (\tau_{g} = 0).
$$
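In this last case the two roots of the quadratic recover the principal curvatures. A short check, under the usual identifications $2H = 1/R_{1} + 1/R_{2}$ and $K_{t} = 1/(R_{1}R_{2})$ (assumed here, consistent with (4.7.12)):

```latex
\kappa_{n}^{2}
  - \left(\frac{1}{R_{1}} + \frac{1}{R_{2}}\right)\kappa_{n}
  + \frac{1}{R_{1}R_{2}} = 0
\;\Longleftrightarrow\;
\left(\kappa_{n} - \frac{1}{R_{1}}\right)
\left(\kappa_{n} - \frac{1}{R_{2}}\right) = 0,
```

so along the curvature lines the normal curvature takes the principal values $1/R_{1}$ or $1/R_{2}$, as expected.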
The considerations made in Chapter 4 of the present book allow
us to affirm that the applications of the theory of Myller
configurations to the geometry of surfaces in the Euclidean space $E_{3}$
are interesting. Of course, the notion of Myller configuration can
be extended to the geometry of nonholonomic manifolds in $E_{3}$,
which will be studied in the next chapter. It can be applied to the
theory of versor fields in $E_{3}$, which has numerous applications
to Mechanics and Hydrodynamics (see the papers by Gh. Gheorghiev and
collaborators).
Moreover, the Myller configurations can be defined and
investigated in Riemannian spaces and applied to the geometry of
submanifolds of these spaces, [24], [25]. They can be studied in the Finsler, Lagrange or Hamilton spaces [58], [68], [70].
\chapter*{Introduction}
In the differential geometry of curves in the Euclidean space $E_{3}$
one introduces, along a curve $C$, some versor fields, as tangent,
principal normal or binormal, as well as some plane fields as
osculating, normal or rectifying planes.
More generally, we can consider a versor field $(C,\overline{\xi})$
or a plane field $(C,\pi)$.
A pair $\{(C,\overline{\xi}), (C,\pi)\}$ for which
$\overline{\xi}\in \pi$ was called in 1960 [23] by the present
author a Myller configuration in the space $E_{3}$, denoted by
$\mathfrak{M}(C, \overline{\xi},\pi)$. When the planes $\pi$ are tangent to $C$, we have a tangent
Myller configuration $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$.
Academician Alexandru Myller studied in 1922 the notion of
parallelism of $(C,\overline{\xi})$ in the plane field $(C,\pi)$
obtaining an interesting generalization of the famous parallelism
of Levi-Civita on curved surfaces. These investigations have
been continued by Octav Mayer, who introduced new fundamental
invariants for $\mathfrak{M}(C, \overline{\xi}, \pi)$. The
importance of these studies was underlined by Levi-Civita in the
Addendum to his book {\it Lezioni di calcolo differenziale assoluto},
1925.
Now, we try to make a systematic presentation of the geometry of
Myller configurations $\mathfrak{M}(C,\overline{\xi},\pi)$ and
$\mathfrak{M}_{t}(C,\overline{\xi}, \pi)$ with applications to the
differential geometry of surfaces and to the geometry of
nonholonomic manifolds in the Euclidean space $E_{3}$.
Indeed, if $C$ is a curve on the surface $S\subset E_{3}$, $s$ is
the natural parameter of the curve $C$, $\overline{\xi}(s)$ is a
tangent versor field to $S$ along $C$ and $\pi(s)$ is the field of
tangent planes to $S$ along $C$, then we have a tangent Myller
configuration $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ intrinsically
associated to the geometric objects $S,C,\overline{\xi}$.
Consequently, the geometry of the field $(C, \overline{\xi})$ on
surface $S$ is the geometry of the associated Myller
configuration $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$. It is
remarkable that the geometric theory of $\mathfrak{M}_{t}$ is a
particular case of that of general Myller configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$.
For a Myller configuration $\mathfrak{M}(C,\overline{\xi},\pi)$ we
determine a Darboux frame, the fundamental equations and a
complete system of invariants $G,K,T$ called, geodesic curvature,
normal curvature and geodesic torsion, respectively, of the versor
field $(C,\overline{\xi})$ in Myller configuration
$\mathfrak{M}(C,\overline{\xi}, \pi)$. A fundamental theorem, when
the functions $(G(s),K(s),$ $T(s))$ are given, can be
proven.
The invariant $G(s)$ was discovered by Al. Myller (and named by
him the deviation of parallelism). $G(s) = 0$ on curve $C$
characterizes the parallelism of versor field $(C,\overline{\xi})$
in $\mathfrak{M},$ [23], [24], [31], [32], [33]. The second invariant
$K(s)$ was introduced by O. Mayer (it was called the curvature of
parallelism). The third invariant $T(s)$ was found by E. Bortolotti
[3].
In the particular case, when $\mathfrak{M}$ is a tangent Myller
configuration $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ associated
to a tangent versor field $(C,\overline{\xi})$ on a surface $S$,
$G(s)$ is an intrinsic invariant and $G(s)=0$ along $C$ leads
to the Levi-Civita parallelism.
In configurations $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$ there
exists a natural versor field $(C,\overline{\alpha})$, where
$\overline{\alpha}(s)$ are the tangent versors to curve $C$. The
versor field $(C,\overline{\alpha})$ has a Darboux frame $\cal{R} =
(P(s); \overline{\alpha}, \overline{\mu}^{*}, \overline{\nu})$ in
$\mathfrak{M}_{t}$ where $\overline{\nu}(s)$ is normal to plane
$\pi(s)$ and $\overline{\mu}^{*}(s) = \overline{\nu}(s)\times
\overline{\alpha}(s).$ The moving equations of $\cal{R}$ are:
\begin{eqnarray*} \displaystyle\frac{d\overline{r}}{ds} &=& \overline{\alpha}(s),\;\;
s\in (s_{1},s_{2})\\\displaystyle\frac{d\overline{\alpha}}{ds}&=&
\kappa_{g}(s)\overline{\mu}^{*} +\kappa_{n}(s)\overline{\nu},\\
\displaystyle\frac{d\overline{\mu}^{*}}{ds}&=& -\kappa_{g}(s)\overline{\alpha} +
\tau_{g}(s)\overline{\nu},\\\displaystyle\frac{d\overline{\nu}}{ds}&=&
-\kappa_{n}(s)\overline{\alpha} - \tau_{g}(s)\overline{\mu}^{*}.
\end{eqnarray*}
The functions $\kappa_{g}(s), \kappa_{n}(s)$ and $\tau_{g}(s)$ form a
complete system of invariants of the curve in $\mathfrak{M}_{t}$.
A theorem of existence and uniqueness for the versor fields
$(C,\overline{\alpha})$ in $\mathfrak{M}_{t}(C,$ $\overline{\alpha},\pi)$,
when the invariants $\kappa_{g}(s)$, $\kappa_{n}(s)$ and $\tau_{g}(s)$ are given, is proved. The function $\kappa_{g}(s)$ is called the geodesic
curvature of the curve $C$ in $\mathfrak{M}_{t}$; $\kappa_{n}(s)$ is
the normal curvature and $\tau_{g}(s)$ is the geodesic torsion of
$C$ in $\mathfrak{M}_{t}$.
The condition $\kappa_{g}(s) = 0$, $\forall s\in (s_{1},s_{2})$
characterizes the geodesics (autoparallel lines) in
$\mathfrak{M}_{t}$; $\kappa_{n}(s) = 0,$ $\forall s\in (s_{1},s_{2})$
gives us the asymptotic lines and $\tau_{g}(s) = 0,$ $\forall s\in
(s_{1},s_{2})$ characterizes the curvature lines $C$ in
$\mathfrak{M}_{t}$.
One can remark that in the case when $\mathfrak{M}_{t}$ is the
associated Myller confi\-gu\-ration to a curve $C$ on a surface $S$ we
obtain the classical theory of curves on surface $S$. It is
important to remark that Mark Krein's formula (2.10.3) leads
to the integral formula (2.10.9) of Gauss-Bonnet for surface $S$, studied by R. Miron in the book [62].
Also, if $C$ is a curve of a nonholonomic manifold $E_{3}^{2}$ in
$E_{3}$, we uniquely determine a Myller configuration
$\mathfrak{M}_{t}(C,\overline{\alpha},\pi)$ in which
$(C,\overline{\alpha})$ is the tangent versor field to $C$ and
$(C,\pi)$ is the tangent plane field to $E_{3}^{2}$ along $C$, [54], [64], [65].
In this case the geometry of $\mathfrak{M}_{t}$ is the geometry of
curves $C$ in the nonholonomic manifolds $E_{3}^{2}$. Some new
notions can be introduced as: concurrence in Myller sense of
versor fields $(C,\overline{\xi})$ on $E_{3}^{2}$, extremal
geodesic torsion, the mean torsion and total torsion of
$E_{3}^{2}$ at a point, a remarkable formula for geodesic torsion
and an indicatrix of Bonnet for geodesic torsion, which is not
reducible to a pair of equilateral hyperbolas, as in the case of
surfaces.
The nonholonomic planes, nonholonomic spheres of Gr. Moisil, can
be studied by means of techniques from the geometry of Myller
configurations $\mathfrak{M}_{t}$.
We finish the present introduction pointing out some important
developments of the geometry of Myller configurations:
The author extended the notion of Myller configuration in
Riemannian Geometry [24], [25], [26], [27]. Izu Vaisman [76] has studied
the Myller configurations in the symplectic geometry.
Mircea Craioveanu realized a nice theory of Myller configurations
in infinite dimensional Riemannian manifolds. Gheorghe Gheorghiev
developed the configuration $\mathfrak{M}$, [11], [55] in the
geometry of versor fields in the Euclidean space and applied it in
hydromechanics.
N.N. Mihalieanu studied the Myller configurations in Minkowski
spaces [61]. For Myller configurations in a Finsler, Lagrange or Hamilton spaces we refer to the paper [58] and to the books of R. Miron and M. Anastasiei [68], R. Miron, D. Hrimiuc, H. Shimada and S. Sab\u{a}u [70].
All these investigations underline the usefulness of the geometry of
Myller configurations in differential geometry and its
applications.
\newpage
\cleardoublepage
\chapter{Versor fields. Plane fields in $E_{3}$}
First of all we investigate the geometry of a versor field
$(C,\overline{\xi})$ in the Euclidean space $E_{3}$, introducing an
invariant frame of Frenet type, the moving equations of this
frame, invariants and proving a fundamental theorem. The
invariants are called $K_{1}$-curvature and $K_{2}$-torsion of
$(C,\overline{\xi})$. Geometric interpretations for $K_{1}$ and
$K_{2}$ are pointed out. The parallelism of $(C,\overline{\xi})$,
concurrence of $(C,\overline{\xi})$ and the enveloping of versor
field $(C,\overline{\xi})$ are studied, too.
A similar study is made for the plane fields $(C,\pi)$ taking into
account the normal versor field $(C,\overline{\nu})$, with
$\overline{\nu}(s)$ a normal versor to the plane $\pi(s)$.
\section{Versor fields $(C,\overline{\xi})$}
\setcounter{theorem}{0}\setcounter{equation}{0}
In the Euclidean space a versor field $(C,\overline{\xi})$ can be
analytically represented in an orthonormal frame $\cal{R} =
(O;\overline{i}_{1}, \overline{i}_{2},\overline{i}_{3})$, by
\begin{equation}
\overline{r} = \overline{r}(s),\;\;\;\;\overline{\xi} = \overline{\xi}(s), \;\; s\in
(s_{1},s_{2})
\end{equation}
where $s$ is the arc length on curve $C$,
$$
\overline{r}(s) = \overline{OP}(s) = x(s)\overline{i}_{1} +
y(s)\overline{i}_{2} + z(s)\overline{i}_{3}\textrm{ and }
$$ $\overline{\xi}(s) =
\overline{\xi}(\overline{r}(s))\hspace{-0.5mm}=\hspace{-0.5mm} \overrightarrow{PQ} =
\xi^{1}(s)\overline{i}_{1}\hspace{-0.5mm} +\hspace{-0.5mm} \xi^{2}(s)\overline{i}_{2}\hspace{-0.5mm} +\hspace{-0.5mm}
\xi^{3}(s)\overline{i}_{3}$, $\|\overline{\xi}(s)\|^2\hspace{-0.5mm} =\hspace{-0.5mm}
\langle\overline{\xi}(s),\overline{\xi}(s)\rangle =1.$
\smallskip
All geometric objects considered in this book are assumed to be of
class $C^{k}$, $k\geq 3$, and sometimes of class $C^{\infty}$. The
pair $(C,\overline{\xi})$ has a geometrical meaning. It follows
that the pair $\left(C,\displaystyle\frac{d\overline{\xi}}{ds}\right)$ has a geometric
meaning, too. Therefore, the norm:
\begin{equation}
K_{1}(s) = \left\|\displaystyle\frac{d\overline{\xi}(s)}{ds}\right\|
\end{equation}
is an invariant of the field $(C,\overline{\xi})$.
We denote
\begin{equation}
\overline{\xi}_1(s) = \overline{\xi}(s)
\end{equation}
and let $\overline{\xi}_{2}(s)$ be the versor of vector
$\displaystyle\frac{d\overline{\xi}_{1}}{ds}$. Thus we can write
$$
\displaystyle\frac{d\overline{\xi}_{1}(s)}{ds} =K_{1}(s)\overline{\xi}_{2}(s).
$$
Evidently, $\overline{\xi}_{2}(s)$ is orthogonal to
$\overline{\xi}_{1}(s)$.
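Indeed, the orthogonality follows at once by differentiating the identity $\langle\overline{\xi}_{1},\overline{\xi}_{1}\rangle = 1$:

```latex
0 = \frac{d}{ds}\langle\overline{\xi}_{1},\overline{\xi}_{1}\rangle
  = 2\left\langle\overline{\xi}_{1},
        \frac{d\overline{\xi}_{1}}{ds}\right\rangle
  = 2K_{1}(s)\,\langle\overline{\xi}_{1},\overline{\xi}_{2}\rangle,
```

hence $\langle\overline{\xi}_{1},\overline{\xi}_{2}\rangle = 0$ wherever $K_{1}(s)\neq 0$.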
It follows that the frame
\begin{equation}
\cal{R}_{F} = (P(s); \overline{\xi}_{1}, \overline{\xi}_{2},
\overline{\xi}_{3}),\;\; \overline{\xi}_{3}(s) =
\overline{\xi}_{1}\times \overline{\xi}_{2}
\end{equation}
is orthonormal, positively oriented and has a geometrical meaning.
$\cal{R}_{F}$ is called the Frenet frame of the versor field
$(C,\overline{\xi})$.
We have:
\begin{theorem}
The moving equations of the Frenet frame $\cal{R}_{F}$ are:
\begin{equation}
\hspace*{5mm}\displaystyle\frac{d\overline{r}}{ds} =
a_{1}(s)\overline{\xi}_{1}+a_{2}(s)\overline{\xi}_{2} +
a_{3}(s)\overline{\xi}_{3}\;\;\;\; a_{1}^{2}(s)+a_{2}^{2}(s) +
a_{3}^{2}(s) =1
\end{equation}
and
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}_{1}}{ds} &=&
K_{1}(s)\overline{\xi}_{2},\nonumber\\\displaystyle\frac{d\overline{\xi}_{2}}{ds}
&=& -K_{1}(s)\overline{\xi}_{1}(s) +
K_{2}(s)\overline{\xi}_{3},\\\displaystyle\frac{d\overline{\xi}_{3}}{ds}& =&
-K_{2}(s)\overline{\xi}_{2}(s),\nonumber
\end{eqnarray}
where $K_{1}(s)>0.$ The functions $K_{1}(s), K_{2}(s), a_{1}(s),
a_{2}(s)$, $a_{3}(s),$ $s\in (s_{1},s_{2})$ are invariants of the
versor field $(C,\overline{\xi})$.\end{theorem}
The proof does not present difficulties.
The invariant $K_{1}(s)$ is called the curvature of
$(C,\overline{\xi})$ and has the same geometric interpretation as
the curvature of a curve in $E_{3}$. $K_{2}(s)$ is called the
torsion and has the same geometrical interpretation as the torsion
of a curve in $E_{3}$.
The equations (1.1.5), (1.1.6) will be called the {\it fundamental or
Frenet equations} of the versor field $(C,\overline{\xi})$.
In the case $a_{1}(s)= 1,$ $a_{2}(s) = 0,$ $a_{3}(s) =0$ the
tangent versor $\displaystyle\frac{d\overline{r}}{ds}$ is denoted by
$$
\overline{\alpha}(s) =
\displaystyle\frac{d\overline{r}}{ds}(s)\leqno{(1.1.5')}
$$
The equations (1.1.5), (1.1.6) are then the Frenet equations of a
curve in the Euclidean space $E_{3}$.
For the versor field $(C,\overline{\xi})$ we can formulate a
fundamental theorem:
\begin{theorem}
If the functions $K_{1}(s)>0,$ $K_{2}(s)$, $a_{1}(s), a_{2}(s),
a_{3}(s)$, $(a_{1}^{2} + a_{2}^{2} + a_{3}^{2} =1)$, of class
$C^{\infty}$, are a priori given for $s\in [a,b]$, then there exists a curve
$C:[a,b]\to E_{3}$ parametrized by arclength and a versor field
$\overline{\xi}(s), s\in [a,b]$, whose curvature, torsion and
functions $a_{i}(s)$ are $K_{1}(s), K_{2}(s)$ and
$a_{i}(s)$, respectively. Any two such versor fields $(C,\overline{\xi})$ differ
by a proper Euclidean motion.
\end{theorem}
For the proof one applies the same technique as in the proof of
Theorem 11, p. 45 from [75], [76].
\begin{remark}\rm
1. If $K_{1}(s) = 0,$ $s\in (s_{1}, s_{2})$ the versors
$\overline{\xi}_{1}(s)$ are parallel in $E_{3}$ along the curve
$C.$
2. The versor field $(C,\overline{\xi})$ determines a ruled
surface $S(C,\overline{\xi})$.
3. The surface $S(C,\overline{\xi})$ is a cylinder iff the
invariant $K_{1}(s)$ vanishes.
4. The surface $S(C,\overline{\xi})$ is with director plane iff
$K_{2}(s) = 0$.
5. The surface $S(C,\overline{\xi})$ is developable iff the
invariant $a_{3}(s)$ vanishes.
\end{remark}
If the surface $S(C,\overline{\xi})$ is a cone, we say that the
versor field $(C,\overline{\xi})$ is {\it concurrent}.
\begin{theorem}
A necessary and sufficient condition for the versor field $(C,\overline{\xi})$, $(K_{1}(s)\neq 0)$,
to be concurrent is the following
$$
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{a_{2}(s)}{K_{1}(s)}\right)-a_{1}(s) =
0,\;\; a_{3}(s) = 0,\;\; \forall s\in (s_{1},s_{2}).
$$
\end{theorem}
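A sketch of the proof: $(C,\overline{\xi})$ is concurrent iff there exist a fixed point $V$ and a function $\lambda(s)$ with $\overrightarrow{OV} = \overline{r}(s) + \lambda(s)\overline{\xi}_{1}(s)$. Differentiating and using (1.1.5), (1.1.6):

```latex
\frac{d}{ds}\bigl(\overline{r} + \lambda\overline{\xi}_{1}\bigr)
  = \left(a_{1} + \frac{d\lambda}{ds}\right)\overline{\xi}_{1}
  + \bigl(a_{2} + \lambda K_{1}\bigr)\overline{\xi}_{2}
  + a_{3}\,\overline{\xi}_{3}
  = 0,
```

so $a_{3} = 0$, $\lambda = -a_{2}/K_{1}$ and $a_{1} + d\lambda/ds = 0$, which is exactly the condition in the theorem.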
\medskip
\section{Spherical image of a versor field $(C,\overline{\xi})$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Consider the sphere $\Sigma$ with center a fix point $O\in E_{3}$
and radius 1.
\begin{definition}
The spherical image of the versor field $(C,\overline{\xi})$ is the
curve $C^{*}$ on sphere $\Sigma$ given by $\overline{\xi}^{*}(s) =
\overrightarrow{OP^{*}}(s) = \overline{\xi}(s),$ $\forall s\in
(s_{1},s_{2})$.
\end{definition}
From this definition we have:
\begin{equation}
d\overline{\xi}^{*}(s) =\overline{\xi}_{2}(s) K_{1}(s)ds.
\end{equation}
Some immediate properties:
1. It follows
\begin{equation}
ds^{*} = K_{1}(s)ds.
\end{equation}
Therefore:
The arc length of the curve $C^{*}$ is
\begin{equation}
s^{*} = s_{0}^{*} +\int_{s_{0}}^{s}K_{1}(\sigma) d\sigma
\end{equation}
where $s_{0}^{*}$ is a constant and $[s_{0},s]\subset
(s_{1},s_{2})$.
2. The curvature $K_{1}(s)$ of $(C,\overline{\xi})$ at a
point $P^{*}(s)$ is expressed by
\begin{equation}
K_{1}(s) = \displaystyle\frac{ds^*}{ds}
\end{equation}
3. $C^{*}$ is reducible to a point iff $(C,\overline{\xi})$ is a
parallel versor field in $E_{3}$.
4. The tangent line at point $P^{*}\in C^{*}$ is parallel with
principal normal line of $(C,\overline{\xi})$.
5. Since $\overline{\xi}_{1}(s)\times \overline{\xi}_{2}(s) =
\overline{\xi}_{3}(s)$ it follows that the direction of binormal
versor field $\overline{\xi}_{3}$ is the direction tangent to
$\Sigma$ orthogonal to tangent line of $C^{*}$ at point $P^{*}$.
6. The geodesic curvature $\kappa_{g}$ of the curve $C^{*}$ at a point
$P^{*}$ verifies the equation
\begin{equation}
\kappa_{g} ds^{*} =K_{2}ds.
\end{equation}
7. The versor field $(C,\overline{\xi})$ is of null torsion (i.e.
$K_{2}(s)=0$) iff $\kappa_{g} = 0.$ In this case $C^{*}$ is an arc of
a great circle on $\Sigma$.
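Property 6 can be verified directly from the Frenet equations (1.1.6). As a sketch (using that, for a curve on the unit sphere $\Sigma$, the geodesic curvature is the component of the derivative of the tangent versor along the direction tangent to $\Sigma$ and orthogonal to $C^{*}$, here $\overline{\xi}_{3}$):

```latex
% tangent versor of C^*: from (1.2.1) and ds^* = K_1 ds
\frac{d\overline{\xi}^{*}}{ds^{*}} = \overline{\xi}_{2},
\qquad
\frac{d\overline{\xi}_{2}}{ds^{*}}
  = \frac{1}{K_{1}}\,\frac{d\overline{\xi}_{2}}{ds}
  = -\overline{\xi}_{1} + \frac{K_{2}}{K_{1}}\,\overline{\xi}_{3}.
```

The component along $\overline{\xi}_{1}$ is normal to $\Sigma$, so $\kappa_{g} = K_{2}/K_{1}$, that is, $\kappa_{g}\, ds^{*} = K_{2}\, ds$.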
\section{Plane fields $(C,\pi)$}
\setcounter{theorem}{0}\setcounter{equation}{0}
A plane field $(C,\pi)$ is defined by the versor field
$(C,\overline{\nu}(s))$ where $\overline{\nu}(s)$ is normal to
$\pi(s)$ at every point $P(s)\in C.$ We assume that $\pi(s)$ is
oriented. Consequently, $\overline{\nu}(s)$ is well determined.
Let $\cal{R}_{\pi} = (P(s), \overline{\nu}_{1}(s),
\overline{\nu}_{2}(s), \overline{\nu}_{3}(s))$ be the Frenet frame,
$\overline{\nu}(s) = \overline{\nu}_{1}(s)$, of the versor field
$(C, \overline{\nu})$.
It follows that the fundamental equations of the plane field
$(C,\pi)$ are:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds}(s) = b_{1}(s){\overline{\nu}}_{1} +
b_{2}(s){\overline{\nu}}_{2} + b_{3}(s)\overline{\nu}_{3},\;\;
b_{1}^{2}+b_{2}^{2}+b_{3}^{2}=1.
\end{equation}
\begin{eqnarray}
\displaystyle\frac{d\overline{\nu}_{1}}{ds} &=& \chi_{1}(s)\overline{\nu}_{2},\nonumber\\
\displaystyle\frac{d\overline{\nu}_{2}}{ds} &=& -\chi_{1}\overline{\nu}_{1} +
\chi_{2}(s)\overline{\nu}_{3}\\
\displaystyle\frac{d\overline{\nu}_{3}}{ds}&=&
-\chi_{2}(s)\overline{\nu}_{2}.\nonumber
\end{eqnarray}
The invariant $\chi_{1}(s) = \left\|\displaystyle\frac{d\overline{\nu}_{1}}{ds}\right\|$
is called the curvature and $\chi_{2}(s)$ is the torsion of the
plane field $(C,\pi(s))$.
The following properties hold:
1. The characteristic straight lines of the field of planes
$(C,\pi)$ cross through the corresponding point $P(s)\in C$ iff the
invariant $b_{1}(s) = 0$, $\forall s\in (s_{1},s_{2})$.
2. The planes $\pi(s)$ are parallel along the curve $C$ iff the
invariant $\chi_{1}(s)$ vanishes, $\forall s\in (s_{1},s_{2})$.
3. The characteristic lines of the plane field $(C,\pi)$ are
parallel iff the invariant $\chi_{2}(s) = 0$, $\forall s\in
(s_{1},s_{2})$.
4. The versor field $(C,\overline{\nu}_{3})$ determines the
directions of the characteristic line of $(C,\pi)$.
5. The versor field $(C,\overline{\nu}_{3})$, with
$\chi_{2}(s)\neq 0$, is concurrent iff:
$$
b_{1}(s) = 0,\;\; b_{3}(s) +
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{b_{2}(s)}{\chi_{2}(s)}\right)=0.
$$
6. The curve $C$ is an orthogonal trajectory of the generatrices of
the ruled surface $\cal{R}(C,\overline{\nu}_{3})$ if $b_{3}(s) =
0,$ $\forall s\in (s_{1},s_{2})$.
7. By means of equations (1.3.1), (1.3.2) we can prove a
fundamental theorem for the plane field $(C,\pi).$
\chapter{Myller configurations $\mathfrak{M}(C,\overline{\xi},\pi)$}
The notions of versor field $(C,\overline{\xi})$ and the plane
field $(C,\pi)$ along to the same curve $C$ lead to a more general
concept named Myller configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$, in which every versor
$\overline{\xi}(s)$ belongs to the corresponding plane $\pi(s)$ at
point $P(s)\in C.$ The geometry of $\mathfrak{M} =
\mathfrak{M}(C,\overline{\xi},\pi)$ is much richer than the
geometries of $(C,\overline{\xi})$ and $(C,\pi)$ taken separately.
For $\mathfrak{M}$ one can define its geometric invariants, a Darboux frame
and introduce a new idea of parallelism or concurrence of the versor
field $(C,\overline{\xi})$ in $\mathfrak{M}$. The geometry of
$\mathfrak{M}$ is totally based on the fundamental equations of
$\mathfrak{M}$. The basic idea of this construction belongs to Al.
Myller [31], [32], [33], [34] and it was considerably developed by O. Mayer
[20], [21], R. Miron [23], [24], [62] (who proposed the name of Myller
Configuration and studied its complete system of invariants).
\section{Fundamental equations of Myller configuration}
\setcounter{theorem}{0}\setcounter{equation}{0}
\begin{definition}
A Myller configuration $\mathfrak{M} =
\mathfrak{M}(C,\overline{\xi},\pi)$ in the Euclidean space $E_{3}$
is a pair $(C,\overline{\xi})$, $(C,\pi)$ of versor field and
plane field, having the property: every $\overline{\xi}(s)$
belongs to the plane $\pi(s)$.
\end{definition}
Let $\overline{\nu}(s)$ be the normal versor to plane $\pi(s)$.
Evidently $\overline{\nu}(s)$ is uniquely determined if $\pi(s)$
is an oriented plane for all $s\in (s_{1},s_{2})$.
By means of versors $\overline{\xi}(s), \overline{\nu}(s)$ we can
determine the {\it Darboux frame} of $\mathfrak{M}:$
\begin{equation}
\cal{R}_{D} = (P(s); \overline{\xi}(s), \overline{\mu}(s),
\overline{\nu}(s)),
\end{equation}
where
\begin{equation}
\overline{\mu}(s) = \overline{\nu}(s)\times \overline{\xi}(s).
\end{equation}
$\cal{R}_{D}$ is geometrically associated to
$\mathfrak{M}$. It is orthonormal and positively oriented.
Since the versors $\overline{\xi}(s), \overline{\mu}(s),
\overline{\nu}(s)$ have a geometric meaning, the same pro\-per\-ties
have the vectors $\displaystyle\frac{d\overline{\xi}}{ds},$
$\displaystyle\frac{d\overline{\mu}}{ds}$ and $\displaystyle\frac{d\overline{\nu}}{ds}$.
Therefore, we can prove, without difficulties:
\begin{theorem}
The moving equations of the Darboux frame of $\mathfrak{M}$ are as
follows:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s) =
c_{1}(s)\overline{\xi} + c_{2}(s)\overline{\mu} +
c_{3}(s)\overline{\nu};\;\; c_{1}^{2} + c_{2}^{2}+c_{3}^{2} =1
\end{equation}
and
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}}{ds} &=& G(s)\overline{\mu} +
K(s)\overline{\nu},\nonumber\\
\displaystyle\frac{d\overline{\mu}}{ds} &=& -G(s)\overline{\xi} +
T(s)\overline{\nu},\\\displaystyle\frac{d\overline{\nu}}{ds} &=&
-K(s)\overline{\xi} - T(s)\overline{\mu}\nonumber
\end{eqnarray}
and $c_{1}(s), c_{2}(s), c_{3}(s); G(s), K(s)$ and $T(s)$ are uniquely determined and are invariants.
\end{theorem}
The previous equations are called {\it the fundamental equations}
of the Myller configurations $\mathfrak{M}$.
Terminology: $G(s)$ is the geodesic curvature, $K(s)$ is the normal
curvature and $T(s)$ is the geodesic torsion of the versor field $(C,\overline{\xi})$ in the Myller configuration $\mathfrak{M}$.
For $\mathfrak{M}$ a fundamental theorem can be stated:
\begin{theorem}
Let there be a priori given the $C^{\infty}$ functions $c_{1}(s), c_{2}(s),
c_{3}(s)$, $[c_{1}^{2} + c_{2}^{2} + c_{3}^{2}=1]$, $G(s),$
$K(s)$, $T(s)$, $s\in [a,b]$. Then there is a Myller configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$ for which $s$ is the arclength of the
curve $C$ and the given functions are its invariants. Two such
configurations differ by a proper Euclidean motion.
\end{theorem}
\noindent{\it Proof.}
By means of given functions $c_{1}(s), \ldots, G(s), \ldots$ we
can write the system of differential equations (2.1.3), (2.1.4).
Let the initial conditions $\overline{r}_{0} =
\overrightarrow{OP}_{0}$, $(\overline{\xi}_{0},
\overline{\mu}_{0}, \overline{\nu}_{0})$, an orthonormal,
positively oriented frame in $E_{3}$.
From (2.1.4) we find a unique solution $(\overline{\xi}(s),
\overline{\mu}(s),\overline{\nu}(s))$, $s\in [a,b]$ with the
property
$$
\overline{\xi}(s_{0}) = \overline{\xi}_{0},\;\;
\overline{\mu}(s_{0}) = \overline{\mu}_{0},\;\;
\overline{\nu}(s_{0}) = \overline{\nu}_{0},
$$
with $s_{0}\in [a,b]$ and $(\overline{\xi}(s), \overline{\mu}(s),
\overline{\nu} (s))$ being an orthonormal, positively oriented
frame.
Then consider the following solution of (2.1.3)
$$
\overline{r}(s) = \overline{r}_{0} +
\int_{s_{0}}^{s}[c_{1}(\sigma)\overline{\xi}(\sigma) +
c_{2}(\sigma)\overline{\mu}(\sigma) +c_{3}(\sigma)\overline{\nu}(\sigma)]d\sigma,
$$
which has the property $\overline{r}(s_{0}) = \overline{r}_{0}$,
and $\left\|\displaystyle\frac{d\overline{r}}{ds}\right\| =1.$
Thus $s$ is the arc length on the curve $\overline{r} = \overline{r}(s)$.
Now, consider the configuration
$\mathfrak{M}(C,\overline{\xi},\pi(s))$, $\pi(s)$ being the plane
orthogonal to versor $\overline{\nu}(s)$ at point $P(s)$.
We can prove that $\mathfrak{M}(C,\overline{\xi},\pi)$ has as
invariants just $c_{1},c_{2},c_{3},G,K,T.$
The fact that two configurations $\mathfrak{M}$ and
$\mathfrak{M}^{\prime}$, obtained by changing the initial
conditions $(\overline{r}_{0}; \overline{\xi}_{0},
\overline{\mu}_{0}, \overline{\nu}_{0})$ to
$(\overline{r}_{0}^{\prime}, \overline{\xi}_{0}^{\prime},
\overline{\mu}_{0}^{\prime}, \overline{\nu}_{0}^{\prime})$ differ
by a proper Euclidean motion follows from the fact that there
exists a unique Euclidean motion which maps $(\overline{r}_{0};
\overline{\xi}_{0}, \overline{\mu}_{0},\overline{\nu}_{0})$ to
$(\overline{r}^{\prime}_{0}, \overline{\xi}_{0}^{\prime},
\overline{\mu}_{0}^{\prime},
\overline{\nu}_{0}^{\prime})$.
\section{Geometric interpretations of invariants}
\setcounter{theorem}{0}
\setcounter{equation}{0}
The invariants $c_{1}(s), c_{2}(s), c_{3}(s)$ have simple
geometric interpretations:
$$
c_{1}(s) = \cos \sphericalangle (\overline{\alpha}, \overline{\xi}),\;
c_{2}(s) =\cos \sphericalangle (\overline{\alpha}, \overline{\mu}),\;
c_{3} = \cos \sphericalangle (\overline{\alpha}, \overline{\nu}).
$$
We can find some interpretation of the invariants $G(s), K(s)$ and
$T(s)$ considering a variation of Darboux frame
$$
R_{D}(P(s); \overline{\xi}(s),
\overline{\mu}(s),\overline{\nu}(s))$$$$\to
R^{\prime}_{D}(P^{\prime}(s+\Delta s), \overline{\xi}(s+\Delta s),
\overline{\mu}(s+\Delta s), \overline{\nu}(s+\Delta s)),
$$
obtained by the Taylor expansion
\begin{equation}
\begin{array}{lll}
\overline{r}(s+\Delta s)&=& \overline{r}(s)+\displaystyle\frac{\Delta s
}{1!}\displaystyle\frac{d\overline{r}}{ds} + \displaystyle\frac{(\Delta s)^{2}}{2!}
\displaystyle\frac{d^{2}\overline{r}}{ds^{2}} + \ldots +\vspace*{2mm}\\ & & +\displaystyle\frac{(\Delta
s)^{n}}{n!}\left(\displaystyle\frac{d^{n}\overline{r}}{ds^{n}}
+ \overline{\omega}_{0}(s,\Delta
s)\right),\\\overline{\xi}(s+\Delta s)&=&
\overline{\xi}(s)+\displaystyle\frac{\Delta s }{1!}\displaystyle\frac{d\overline{\xi}}{ds} +
\displaystyle\frac{(\Delta s)^{2}}{2!} \displaystyle\frac{d^{2}\overline{\xi}}{ds^{2}} + \ldots
+\\ & & +\displaystyle\frac{(\Delta s)^{n}}{n!}\left(\displaystyle\frac{d^{n}\overline{\xi}}{ds^{n}}
+ \overline{\omega}_{1}(s,\Delta
s)\right),\vspace*{2mm}\\\overline{\mu}(s+\Delta s)&=&
\overline{\mu}(s)+\displaystyle\frac{\Delta s }{1!}\displaystyle\frac{d\overline{\mu}}{ds} +
\displaystyle\frac{(\Delta s)^{2}}{2!} \displaystyle\frac{d^{2}\overline{\mu}}{ds^{2}} + \ldots
+\\ & & +\displaystyle\frac{(\Delta
s)^{n}}{n!}\left(\displaystyle\frac{d^{n}\overline{\mu}}{ds^{n}}+ \overline{\omega}_{2}(s,\Delta
s)\right),\vspace*{2mm}\\\overline{\nu}(s+\Delta s)&=& \overline{\nu}(s)+\displaystyle\frac{\Delta s
}{1!}\displaystyle\frac{d\overline{\nu}}{ds} + \displaystyle\frac{(\Delta s)^{2}}{2!}
\displaystyle\frac{d^{2}\overline{\nu}}{ds^{2}} + \ldots +\vspace*{2mm}\\ & &\displaystyle\frac{(\Delta
s)^{n}}{n!}\left(\displaystyle\frac{d^{n}\overline{\nu}}{ds^{n}}+ \overline{\omega}_{3}(s,\Delta s)\right),
\end{array}
\end{equation}
where
\begin{eqnarray}
\lim\limits_{\Delta s\to 0}\overline{\omega}_{i}(s,\Delta s) = 0,\;\;
(i=0,1,2,3).
\end{eqnarray}
Using the fundamental formulas (2.1.3), (2.1.4) we can write, for
$n=1:$
\begin{eqnarray}
\overline{r}(s+\Delta s) &=& \overline{r}(s) + \Delta s
(c_{1}\overline{\xi} + c_{2} \overline{\mu} + c_{3}\overline{\nu}
+ \overline{\omega}_{0}(s,\Delta s)),\nonumber\\\overline{\xi}(s+\Delta s) &=&
\overline{\xi}(s) + \Delta s(G(s) \overline{\mu} + K(s)\overline{\nu}
+ \overline{\omega}_{1}(s,\Delta s) ),\\\overline{\mu}(s+\Delta s) &=&
\overline{\mu}(s) + \Delta s (-G(s)\overline{\xi} + T(s)\overline{\nu}
+ \overline{\omega}_{2}(s,\Delta s) )\nonumber\\\overline{\nu}(s+\Delta s) &=&
\overline{\nu}(s) + \Delta s(-K(s)\overline{\xi} - T(s)\overline{\mu}
+ \overline{\omega}_{3}(s,\Delta s)).\nonumber
\end{eqnarray}
with (2.2.2) being verified.
Let $\overline{\xi}^{*}(s+\Delta s)$ be the orthogonal projection of
the versor $\overline{\xi}(s+\Delta s)$ on the plane $\pi(s)$ at point
$P(s)$ and let $\Delta \psi_{1}$ be the oriented angle of the versors
$\overline{\xi}(s), \overline{\xi}^{*}(s+\Delta s)$. Thus, we have
\begin{theorem}
The invariant $G(s)$ of the versor field $(C,\overline{\xi})$ in the Myller
configuration $\mathfrak{M}(C,\overline{\xi},\pi)$ is given by
$$
G(s) = \lim_{\Delta s\to 0}\displaystyle\frac{\Delta\psi_{1}}{\Delta s}.
$$
\end{theorem}
By means of second formula (2.2.3), this Theorem can be proved
without difficulties.
Therefore the name of {\it geodesic curvature} of $(C,\overline{\xi})$
in $\mathfrak{M}$ is justified.
Consider the plane $(P(s);
\overline{\xi}(s), \overline{\nu}(s))$, called the normal plane of
$\mathfrak{M}$, which contains the versor $\overline{\xi}(s)$.
Let the vector $\overline{\xi}^{**}(s+\Delta s)$ be the orthogonal
projection of the versor $\overline{\xi}(s+\Delta s)$ on the normal plane
$(P(s); \overline{\xi}(s), \overline{\nu}(s))$. The angle $\Delta
\psi_{2} = \sphericalangle (\overline{\xi}(s),
\overline{\xi}^{**}(s+\Delta s))$ is given by the formula
$$
\sin \Delta \psi_{2} = \displaystyle\frac{\langle\overline{\xi}(s),
\overline{\xi}(s+\Delta s),
\overline{\mu}(s)\rangle}{\|\overline{\xi}^{**}(s+\Delta s)\|}.
$$
By (2.2.3) we obtain
$$
\sin \Delta \psi_{2} = \displaystyle\frac{K(s) +\langle \overline{\xi}(s),
\overline{\omega}_{1}(s, \Delta s), \overline{\mu}(s)
\rangle}{\|\overline{\xi}^{**}(s+\Delta s)\|}\Delta s.
$$
Consequently, we have:
\begin{theorem}
The invariant $K(s)$ has the following geometric interpretation
$$
K(s) = \lim_{\Delta s\to 0}\displaystyle\frac{\Delta \psi_{2}}{\Delta s}.
$$\end{theorem}
Based on the previous result we can call {\it $K(s)$ the normal
curvature} of $(C,\overline{\xi})$ in $\mathfrak{M}$.
A similar interpretation can be done for the invariant $T(s)$.
\begin{theorem}
The function $T(s)$ has the interpretation:
$$
T(s) = \lim\limits_{\Delta s\to 0}\displaystyle\frac{\Delta \psi_{3}}{\Delta s},
$$
where $\Delta \psi_{3}$ is the oriented angle between
$\overline{\mu}(s)$ and $\overline{\mu}^{*}(s+\Delta s)$-which is the
orthogonal projection of $\overline{\mu}(s+\Delta s)$ on the normal
plane $(P(s); \overline{\mu}(s),$
$\overline{\nu}(s))$.\end{theorem}
This geometric interpretation allows us to give the name {\it
geodesic torsion} to the invariant $T(s)$.
\section{The calculus of invariants $G, K, T$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
The fundamental formulae (2.1.3), (2.1.4) allow us to calculate the
expressions of the second derivatives of the versors of the Darboux
frame $\cal{R}_{D}$. We have:
\begin{equation}\hspace*{8mm}
\begin{array}{lll}
\displaystyle\frac{d^{2}\overline{r}}{ds^{2}}&=& \left(\displaystyle\frac{dc_{1}}{ds} -
Gc_{2}-K c_{3} \right)\overline{\xi}+\left(\displaystyle\frac{dc_{2}}{ds} +
Gc_{1} - Tc_{3} \right)\overline{\mu} +\vspace*{2mm}\\&&+\left(\displaystyle\frac{dc_{3}}{ds}
+ K c_{1} + Tc_{2}\right)\overline{\nu}
\end{array}
\end{equation}
and
\begin{equation}
\hspace*{8mm}\begin{array}{lll}
\displaystyle\frac{d^{2}\overline{\xi}}{ds^{2}} &=& -(G^{2} +
K^{2})\overline{\xi} + \left(\displaystyle\frac{dG}{ds} -
KT\right)\overline{\mu} +\left(\displaystyle\frac{dK}{ds} +
GT\right)\overline{\nu},\vspace*{2mm}\\ \displaystyle\frac{d^{2}\overline{\mu}}{ds^{2}}
&=& -\left(\displaystyle\frac{dG}{ds} +KT \right)\overline{\xi} -
(G^{2}+T^{2})\overline{\mu} + \left(\displaystyle\frac{dT}{ds} -
GK\right)\overline{\nu},\vspace*{2mm}\\ \displaystyle\frac{d^{2}\overline{\nu}}{ds^{2}} &=&
\left(-\displaystyle\frac{dK}{ds}+GT\right)\overline{\xi} - \left(\displaystyle\frac{dT}{ds}
+GK \right)\overline{\mu} - (K^{2}+T^{2})\overline{\nu}.
\end{array}
\end{equation}
These formulae will be useful in the next part of the book.
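As a check of the first of these formulae, assume the fundamental equations (2.1.4) are written in the form
$$
\displaystyle\frac{d\overline{\xi}}{ds} = G\overline{\mu} + K\overline{\nu},\;\;
\displaystyle\frac{d\overline{\mu}}{ds} = -G\overline{\xi} + T\overline{\nu},\;\;
\displaystyle\frac{d\overline{\nu}}{ds} = -K\overline{\xi} - T\overline{\mu}.
$$
A direct differentiation then gives
$$
\displaystyle\frac{d^{2}\overline{\xi}}{ds^{2}} = \displaystyle\frac{dG}{ds}\overline{\mu} + G(-G\overline{\xi} + T\overline{\nu}) + \displaystyle\frac{dK}{ds}\overline{\nu} + K(-K\overline{\xi} - T\overline{\mu}) = -(G^{2}+K^{2})\overline{\xi} + \left(\displaystyle\frac{dG}{ds} - KT\right)\overline{\mu} + \left(\displaystyle\frac{dK}{ds} + GT\right)\overline{\nu},
$$
and the other two formulae are obtained in the same way.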
From the fundamental equations (2.1.3), (2.1.4) we get
\begin{theorem}
The following formulae for invariants $G(s), K(s)$ and $T(s)$
hold:
\begin{equation}
G(s) = \left\langle \overline{\xi},
\displaystyle\frac{d\overline{\xi}}{ds},\overline{\nu}\right\rangle,
\end{equation}
\begin{equation}
K(s) = \left\langle \displaystyle\frac{d\overline{\xi}}{ds},
\overline{\nu}\right\rangle = -\left\langle \overline{\xi},
\displaystyle\frac{d\overline{\nu}}{ds}\right\rangle, T(s)=\left\langle\overline{\xi},\overline{\nu},\displaystyle\frac{d\overline{\nu}}{ds}\right\rangle.
\end{equation}
\end{theorem}
Evidently, these formulae hold in the case when $s$ is the arclength of
the curve $C$.
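For instance, with $\displaystyle\frac{d\overline{\xi}}{ds} = G\overline{\mu} + K\overline{\nu}$ from (2.1.4), the formula for $G(s)$ follows from
$$
\left\langle \overline{\xi}, \displaystyle\frac{d\overline{\xi}}{ds}, \overline{\nu}\right\rangle = \left\langle \overline{\xi}, G\overline{\mu} + K\overline{\nu}, \overline{\nu}\right\rangle = G\left\langle \overline{\xi}, \overline{\mu}, \overline{\nu}\right\rangle = G,
$$
since a mixed product with two equal factors vanishes and $\langle\overline{\xi},\overline{\mu},\overline{\nu}\rangle = 1$.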
\section{Relations between the invariants of the field $(C,\overline{\xi})$ in $E_{3}$ and the invariants of $(C,\overline{\xi})$ in
$\mathfrak{M}(C,\overline{\xi},\pi)$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
The versor field $(C,\overline{\xi})$ in $E_{3}$ has a Frenet
frame $\cal{R}_{F} = (P(s), \overline{\xi}_{1},
\overline{\xi}_{2}, \overline{\xi}_{3})$ and a complete system of
invariants $(a_{1},a_{2},a_{3}; K_{1}, K_{2})$ verifying the
equations (1.1.5) and (1.1.6).
The same field $(C,\overline{\xi})$ in Myller configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$ has a Darboux frame
$\cal{R}_{D} = (P(s), \overline{\xi}, \overline{\mu},
\overline{\nu})$ and a complete system of invariants $(c_{1},
c_{2}, c_{3}; G,K,T)$. If we relate $\cal{R}_{F}$ to $\cal{R}_{D}$
we obtain
\begin{eqnarray*}
\overline{\xi}_{1}(s) &=&
\overline{\xi}(s),\\\overline{\xi}_{2}(s)&=& \overline{\mu}(s)\sin
\varphi +\overline{\nu}(s)\cos \varphi,\\\overline{\xi}_{3}(s) &=&-\overline{\mu}(s)\cos
\varphi + \overline{\nu}(s)\sin \varphi,
\end{eqnarray*}
with $\varphi = \sphericalangle (\overline{\xi}_{2},\overline{\nu})$
and
$\langle\overline{\xi}_{1},\overline{\xi}_{2},\overline{\xi}_{3}\rangle
= \langle\overline{\xi},\overline{\mu},\overline{\nu}\rangle =1.$
Then, from (1.1.5) and (2.1.3) we can determine the relations between the two
systems of invariants.
From
$$
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s) =
a_{1}\overline{\xi}_{1} + a_{2}\overline{\xi}_{2} +
a_{3}\overline{\xi}_{3} = c_{1}\overline{\xi} +
c_{2}\overline{\mu}+c_{3}\overline{\nu}
$$
it follows
\begin{eqnarray}
c_{1}(s)&=&a_{1}(s)\nonumber\\c_{2}(s) &=& a_{2}(s)\sin \varphi
-a_{3}(s)\cos \varphi\\c_{3}(s)&=& a_{2}(s)\cos \varphi + a_{3}(s)\sin
\varphi.\nonumber
\end{eqnarray}
And, from (1.1.6) and (2.1.4), we obtain
\begin{eqnarray}
G(s)&=& K_{1}(s) \sin \varphi\nonumber\\K(s) &=& K_{1}(s)\cos \varphi\\T(s)
&=& K_{2}(s) + \displaystyle\frac{d\varphi}{ds}.\nonumber
\end{eqnarray}
These formulae allow us to investigate some important properties of
$(C,\overline{\xi})$ in $\mathfrak{M}$ when some of the invariants
$G,K,T$ vanish.
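Indeed, assuming the Frenet equation $\displaystyle\frac{d\overline{\xi}_{1}}{ds} = K_{1}\overline{\xi}_{2}$ from (1.1.6) and substituting $\overline{\xi}_{2} = \overline{\mu}\sin \varphi + \overline{\nu}\cos \varphi$, one finds
$$
\displaystyle\frac{d\overline{\xi}}{ds} = K_{1}\sin \varphi\, \overline{\mu} + K_{1}\cos \varphi\, \overline{\nu},
$$
and comparing with $\displaystyle\frac{d\overline{\xi}}{ds} = G\overline{\mu} + K\overline{\nu}$ from (2.1.4) one reads off $G = K_{1}\sin \varphi$ and $K = K_{1}\cos \varphi$.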
\section{Relations between invariants of normal field $(C,\overline{\nu})$ and invariants $G,K,T$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
The plane field $(C,\pi)$ is characterized by the normal versor
field $(C,\overline{\nu})$, which has as Frenet frame $\cal{R}_{F}
= (P(s); \overline{\nu}_{1}, \overline{\nu}_{2},
\overline{\nu}_{3})$ with $\overline{\nu}_{1} = \overline{\nu}$
and has $(b_{1}, b_{2}, b_{3}, \chi_{1},\chi_{2})$ as a complete
system of invariants. They satisfy the formulae (1.3.1), (1.3.2).
But the frame $\cal{R}_{F}$ is related to Darboux frame
$\cal{R}_{D}$ of $(C,\overline{\xi})$ in $\mathfrak{M}$ by the
formulae
\begin{eqnarray}
\overline{\nu}_{1} &=&
\overline{\nu}(s)\nonumber\\-\overline{\nu}_{2}&=& \sin \sigma
\overline{\xi} +\cos \sigma \overline{\mu}\\\overline{\nu}_{3}&=&
-\cos \sigma \overline{\xi}+\sin \sigma \overline{\mu}\nonumber
\end{eqnarray}
where $\sigma = \sphericalangle (\overline{\xi}(s),
\overline{\nu}_{3}(s))$.
Proceeding as in the previous section we deduce
\begin{theorem}
The following relations hold:
\begin{eqnarray}
c_{1}&=& -b_{2}\sin \sigma + b_{3}\cos \sigma\nonumber\\-c_{2}&=&
b_{2}\cos \sigma + b_{3}\sin \sigma\\c_{3}&=& b_{2}\nonumber
\end{eqnarray}
and
\begin{eqnarray}
K&=& \chi_{1}\sin \sigma\nonumber\\T&=& \chi_{1}\cos \sigma\\ G&=&
\chi_{2}+\displaystyle\frac{d\sigma}{ds}.\nonumber
\end{eqnarray}
\end{theorem}
A first consequence of previous formulae is given by
\begin{theorem}
The invariant $K^{2}+T^{2}$ depends only on the plane field
$(C,\pi)$. We have
\begin{equation}
K^{2}+T^{2} = \chi_{1}^{2}.
\end{equation}
\end{theorem}
The proof is immediate, from (2.5.3).
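Indeed, from the relations $K = \chi_{1}\sin \sigma$, $T = \chi_{1}\cos \sigma$ above, one gets
$$
K^{2}+T^{2} = \chi_{1}^{2}(\sin^{2}\sigma + \cos^{2}\sigma) = \chi_{1}^{2},
$$
and $\chi_{1}$ is an invariant of the normal versor field $(C,\overline{\nu})$, hence of the plane field $(C,\pi)$ alone.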
\section{Meusnier's theorem. Versor fields $(C,\overline{\xi})$ conjugated with tangent versor $(C,\overline{\alpha})$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
Consider the vector field $\overline{\xi}^{**}(s+\Delta s)$, $(|\Delta
s|<\varepsilon, \varepsilon>0)$, the orthogonal projection of versor
$\overline{\xi}(s+\Delta s)$ on the normal plane $(P(s);
\overline{\xi}(s), \overline{\nu}(s))$. Since, up to terms of
second order in $\Delta s$, we have
$$
\overline{\xi}^{**}(s+\Delta s) = \overline{\xi}(s) + \displaystyle\frac{\Delta
s}{1!}K(s)\overline{\nu}(s) + \overline{\theta}(s,\Delta s)\displaystyle\frac{(\Delta
s)^{2}}{2!},
$$
for $\Delta s\to 0$ one gets:
\begin{equation}
\displaystyle\frac{d\overline{\xi}^{**}}{ds} = K(s)\overline{\nu}(s).
\end{equation}
Assuming $K(s)\neq 0$ we consider the point $P_{c}^{**}$-called
the {\it center of curvature} of the vector field
$(C,\overline{\xi}^{**})$, given by
$$
\overrightarrow{PP}_{c}^{**} = \displaystyle\frac{1}{K(s)}\overline{\nu}(s).
$$
On the other hand, the field of versors $(C,\overline{\xi})$ has a
center of curvature $P_{c}$ given by $\overrightarrow{PP}_{c} =
\displaystyle\frac{1}{K_{1}(s)}\overline{\xi}_{2}$.
The formula (2.4.2), i.e., $K(s) = K_{1}(s)\cos \varphi,$ shows that
the orthogonal projection of vector $\overrightarrow{PP}_{c}^{**}$
on the (osculating) plane $(P(s); \overline{\xi}_{1},
\overline{\xi}_{2})$ is the vector $\overrightarrow{PP}_{c}$.
Indeed, we have
\begin{equation}
\displaystyle\frac{\cos \varphi}{K} = \displaystyle\frac{1}{K_{1}}.
\end{equation}
As a consequence we obtain a theorem of Meusnier type:
\begin{theorem}
The curvature center $P_{c}$ of the field $(C,\overline{\xi})$ in
$\mathfrak{M}$ is the orthogonal projection on the osculating
plane $(P; \overline{\xi}_{1}, \overline{\xi}_{2})$ of the
curvature center $P_{c}^{**}$.
\end{theorem}
\begin{definition}
The versor field $(C,\overline{\xi})$ is called conjugated with
tangent versor field $(C,\overline{\alpha})$ in the Myller
configuration $\mathfrak{M}(C,\overline{\xi},\pi)$ if the
invariant $K(s)$ vanishes.
\end{definition}
Some immediate consequences:
1. $(C,\overline{\xi})$ is conjugated with $(C,\overline{\alpha})$ in
$\mathfrak{M}$ iff the line $(P;\overline{\xi})$ is parallel in
$E_{3}$ to the characteristic line of the planes $\pi(s)$, $s\in
(s_{1},s_{2})$.
2. $(C,\overline{\xi})$ is conjugated with $(C,\overline{\alpha})$ in
$\mathfrak{M}$ iff $|T(s)| = \chi_{1}(s)$.
3. $(C,\overline{\xi})$ is conjugated with $(C,\overline{\alpha})$ in
$\mathfrak{M}$ iff the osculating planes
$(P;\overline{\xi}_{1},\overline{\xi}_{2})$ coincide to the planes
$\pi(s)$ of $\mathfrak{M}$.
4. $(C,\overline{\xi})$ is conjugated with $(C,\overline{\alpha})$
iff the asymptotic planes of the ruled surface
$\cal{R}(C,\overline{\xi})$ coincide with the planes $\pi(s)$ of
$\mathfrak{M}$.
\section{Versor field $(C,\overline{\xi})$ with null geodesic torsion}
\setcounter{theorem}{0}
\setcounter{equation}{0}
A new relation of conjugation of versor field $(C,\overline{\xi})$
with the tangent versor field $(C,\overline{\alpha})$ is obtained in
the case $T(s) = 0.$
\begin{definition}
The versor field $(C,\overline{\xi})$ is called orthogonal
conjugated with the tangent versor field $(C,\overline{\alpha})$ in
$\mathfrak{M}$ if its geodesic torsion $T(s) = 0,$ $\forall s\in
(s_{1},s_{2})$.
\end{definition}
Some properties:
\begin{itemize}
\item[1.] $(C,\overline{\xi})$ is orthogonal conjugated with
$(C,\overline{\alpha})$ in $\mathfrak{M}$ iff $\overline{\mu}(s)$ is
parallel with the characteristic line of the planes $\pi(s)$ along the
curve $C$.
\item[2.] $(C,\overline{\xi})$ is orthogonal conjugated with
$(C,\overline{\alpha})$ in $\mathfrak{M}$ if $|K_{1}(s)| =
\chi_{1}(s)$, along $C$.
\end{itemize}
\begin{theorem}
Assuming that the versor field $(C,\overline{\xi})$ in the
configuration $\mathfrak{M}(C,\overline{\xi},\pi)$ has two of the
following three properties, then it has the third one, too:
\begin{itemize}
\item[a.] The osculating planes $(P; \overline{\xi}_{1},
\overline{\xi}_{2})$ are parallel in $E_{3}$ along $C$.
\item[b.] The osculating planes
$(P;\overline{\xi}_{1},\overline{\xi}_{2})$ make a constant angle
with the planes $\pi(s)$ on $C$.
\item[c.] The geodesic torsion $T(s)$ vanishes on $C$.
\end{itemize}
\end{theorem}
The proof is based on the formula $T(s) = K_{2}(s) +
\displaystyle\frac{d\varphi}{ds}$, $\varphi = \sphericalangle
(\overline{\xi}_{2},\overline{\nu})$.
Consider two Myller configurations
$\mathfrak{M}(C,\overline{\xi},\pi)$ and
$\mathfrak{M}^{\prime}(C,\overline{\xi},\pi^{\prime})$ which have
in common the versor field $(C,\overline{\xi})$. Denote by $\varphi =
\sphericalangle (\overline{\xi}_{2},\overline{\nu})$, $\varphi^{\prime}
= \sphericalangle(\overline{\xi}_{2},\overline{\nu}^{\prime})$.
Then the geodesic torsions of $(C,\overline{\xi})$ in
$\mathfrak{M}$ and $\mathfrak{M}^{\prime}$ are as follows:
$$
T(s) = K_{2}(s) + \displaystyle\frac{d\varphi}{ds},\;\; T^{\prime}(s) = K_{2}(s) +
\displaystyle\frac{d\varphi^{\prime}}{ds}.
$$
Evidently, we have $\varphi-\varphi^{\prime} = \sphericalangle
(\overline{\nu},\overline{\nu}^{\prime})$.
By means of these relations we can prove, without difficulties:
\begin{theorem}
If the Myller configurations $\mathfrak{M}(C,\overline{\xi},\pi)$
and $\mathfrak{M}^{\prime}(C,\overline{\xi},\pi^{\prime})$ have two
of the following properties:
\begin{itemize}
\item[a)] $(C,\overline{\xi})$ has the null geodesic torsion,
$T(s)=0$ in $\mathfrak{M}$.
\item[b)] $(C,\overline{\xi})$ has the null geodesic torsion,
$T^{\prime}(s)=0$, in $\mathfrak{M}^{\prime}$.
\item[c)] The angle
$\sphericalangle(\overline{\nu},\overline{\nu}^{\prime})$ is
constant along $C$,
\end{itemize}
then $\mathfrak{M}$ and $\mathfrak{M}^{\prime}$ have the third
property.
\end{theorem}
\begin{remark}\rm
The versor field $(C,\overline{\nu}_{2})$ is orthogonally conjugated
with the tangent versors $(C,\overline{\alpha})$ in the configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$.
\end{remark}
\section{The vector field parallel in Myller sense in configurations $\mathfrak{M}$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
Consider $(C,\overline{V})$ a vector field, along the curve $C$.
We denote $\overline{V}(s) = \overline{V}(\overline{r}(s))$ and
say that $(C,\overline{V})$ is a vector field in the configuration
$\mathfrak{M} = \mathfrak{M}(C,\overline{\xi},\pi)$ if the vector
$\overline{V}(s)$ belongs to the plane $\pi(s),$ $\forall s\in
(s_{1},s_{2})$.
\begin{definition}
The vector field $(C,\overline{V})$ in
$\mathfrak{M}(C,\overline{\xi},\pi)$ is parallel in Myller sense
if the vector field $\displaystyle\frac{d\overline{V}}{ds}$ is normal to
$\mathfrak{M}$, i.e. $\displaystyle\frac{d\overline{V}}{ds} =
\lambda(s)\overline{\nu}(s),$ $\forall s\in
(s_{1},s_{2})$.\end{definition}
The parallelism in Myller sense is a direct generalization of
Levi-Civita parallelism of tangent vector fields along a curve
$C$ of a surface $S$.
It is not difficult to prove that the vector field
$\overline{V}(s)$ is parallel in Myller sense if the vector field
$$
\overline{V}^{\prime}(s+\Delta s) = pr_{\pi(s)}\overline{V}(s+\Delta s)
$$
is parallel in the ordinary sense in $E_{3}$, up to terms of second
order in $\Delta s$.
In the Darboux frame, $\overline{V}(s)$ can be represented by its
coordinates as follows:
\begin{equation}
\overline{V}(s) =V^{1}(s)\overline{\xi}(s) +
V^{2}(s)\overline{\mu}(s).
\end{equation}
By virtue of fundamental equations (2.1.4) we find:
\begin{equation}
\hspace*{8mm}\displaystyle\frac{d\overline{V}}{ds} = \left(\displaystyle\frac{dV^{1}}{ds} - G
V^{2}\right)\overline{\xi} +\left(\displaystyle\frac{dV^{2}}{ds} +
GV^{1}\right)\overline{\mu} + (KV^{1} +TV^{2})\overline{\nu}.
\end{equation}
Taking into account Definition 2.8.1, one proves:
\begin{theorem}
The vector field $\overline{V}(s)$, $(2.8.1)$ is parallel in Myller
sense in configuration $\mathfrak{M}(C, \overline{\xi}, \pi)$ iff
coordinates $V^{1}(s), V^{2}(s)$ are solutions of the system of
differential equations:
\begin{equation}
\displaystyle\frac{dV^{1}}{ds}-GV^{2} = 0,\;\; \displaystyle\frac{dV^{2}}{ds}+GV^{1} = 0.
\end{equation}
\end{theorem}
In particular, for $\overline{V}(s) = \overline{\xi}(s)$, we
obtain
\begin{theorem}
The versor field $\overline{\xi}(s)$ is parallel in Myller sense
in $\mathfrak{M}(C,\overline{\xi},\pi)$ iff the geodesic curvature
$G(s)$ of $(C,\overline{\xi})$ in $\mathfrak{M}$ vanishes.
\end{theorem}
This is the reason why Al. Myller called $G$ {\it the
deviation of parallelism} [31]. Later we will see that $G(s)$ is
an intrinsic invariant in the geometry of surfaces in $E_{3}$.
By means of (2.8.3) we have
\begin{theorem}
There exists a unique vector field $\overline{V}(s)$, $s\in
(s_{1}^{\prime}, s_{2}^{\prime})\subset (s_{1},s_{2})$, parallel in
Myller sense in the configuration
$\mathfrak{M}(C,\overline{\xi},\pi)$, which satisfies the initial
conditions $\overline{V}(s_{0}) = \overline{V}_{0}$, $s_{0}\in
(s_{1}^{\prime}, s_{2}^{\prime})$, $\langle \overline{V}_{0},
\overline{\nu}(s_{0}) \rangle = 0.$
\end{theorem}
Evidently, the theorem of existence and uniqueness of solutions of
the system (2.8.3) applies in this case.
In particular, if $G(s)$ is constant, then the general solution of
(2.8.3) can be obtained by algebraic operations.
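For instance, in the case $G(s) = G_{0}=$ const., $G_{0}\neq 0$, the system (2.8.3) has the general solution
$$
V^{1}(s) = A\cos G_{0}s + B\sin G_{0}s,\;\; V^{2}(s) = -A\sin G_{0}s + B\cos G_{0}s,
$$
with constants $A,B$ fixed by the initial conditions; one verifies directly that $\displaystyle\frac{dV^{1}}{ds} = G_{0}V^{2}$, $\displaystyle\frac{dV^{2}}{ds} = -G_{0}V^{1}$ and that $(V^{1})^{2}+(V^{2})^{2} = A^{2}+B^{2}$ is constant.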
An important property of parallelism in Myller sense is expressed
in the next theorem.
\begin{theorem}
The Myller parallelism of vectors in $\mathfrak{M}$ preserves the
lengths and angles of vectors.
\end{theorem}
\begin{proof}\rm If $\displaystyle\frac{d\overline{V}}{ds} =
\lambda(s)\overline{\nu}$, then, since $\langle \overline{V},
\overline{\nu}\rangle = 0$, we get $\displaystyle\frac{d}{ds}\langle \overline{V},
\overline{V} \rangle=0.$ Also, if
$\displaystyle\frac{d\overline{V}}{ds}=\lambda(s)\overline{\nu}$ and
$\displaystyle\frac{d\overline{U}}{ds} = \lambda_{1}(s)\overline{\nu}$, then
$\displaystyle\frac{d}{ds}\langle \overline{V}(s),
\overline{U}(s)\rangle = 0.$
\end{proof}
\section{Adjoint point, adjoint curve and concurrence in Myller sense}
\setcounter{theorem}{0}
\setcounter{equation}{0}
The notions of adjoint point, adjoint curve and concurrence in
Myller sense in a configuration $\mathfrak{M}$ have been
introduced and studied by O. Mayer [20] and Gh. Gheorghiev
[11], [55]. They applied these notions to the theory of surfaces,
nonholonomic manifolds and to the geometry of versor fields in
Euclidean space $E_{3}$.
In the present book we introduce these notions in a different way.
Consider the vector field
$$
\overline{\xi}^{*}(s+\Delta s) = pr_{\pi(s)}\overline{\xi}(s+\Delta s).
$$
Taking into account the formula (2.2.1)$^{\prime}$ we can write up
to terms of second order in $\Delta s:$
\begin{equation}
\overline{\xi}^{*}(s+\Delta s) = \overline{\xi}(s) + \Delta s (G
\overline{\mu}(s) + \overline{\omega}^{*}(s,\Delta s))
\end{equation}
with
$$
\overline{\omega}^{*}(s, \Delta s) \to 0, (\Delta s\to 0).
$$
Let $C^{*}$ be the orthogonal projection of the curve $C$ on
the plane $\pi(s)$. A neighboring point $P(s+\Delta s)$ of $C$ is
projected on the plane $\pi(s)$ into the point $P^{*}(s+\Delta s)$ given by
\begin{equation}
\begin{array}{l}
\overline{r}^{*}(s+\Delta s) = \overline{r}(s) + \Delta s
(c_{1}\overline{\xi} + c_{2}\overline{\mu} +
\overline{\omega}_{0}^{*}(s, \Delta s)),\\ \overline{\omega}_{0}^{*}(s, \Delta s)
\to 0, (\Delta s\to 0).
\end{array}
\end{equation}
\begin{definition}
The adjoint point of the point $P(s)$ with respect to
$\overline{\xi}(s)$ in $\mathfrak{M}$ is the characteristic point
$P_{a}$ on the line $(P;\overline{\xi})$ of the plane ruled
surface $R(C^{*}, \overline{\xi}^{*})$.
\end{definition}
One proves that the position vector $\overline{R}(s)$ of the adjoint
point $P_{a}$, for $G\neq 0$, is as follows:
\begin{equation}
\overline{R}(s) = \overline{r}(s) -
\displaystyle\frac{c_{2}}{G}\overline{\xi}(s).
\end{equation}
The vector field $(C^{*}, \overline{\xi}^{*})$ from (2.9.3) is
called {\it geodesic field}. A result established by O. Mayer [20]
holds:
\begin{theorem}
If the versor field $(C, \overline{\xi})$ is enveloping in space
$E_{3}$, then the adjoint point $P_{a}$ of the point $P(s)$ in
$\mathfrak{M}$ is the contact point of the line $(P,
\overline{\xi})$ with the cuspidal line.
\end{theorem}
\begin{definition}
The geometric locus of the adjoint points corresponding to the
versor field $(C, \overline{\xi})$ in $\mathfrak{M}$ is the
adjoint curve $C_{a}$ of the curve $C$ in $\mathfrak{M}$.
\end{definition}
The adjoint curve $C_{a}$ has the vector equation (2.9.3), for
all $s\in (s_{1},s_{2})$.
Now, we can introduce
\begin{definition}
The versor field $(C, \overline{\xi})$ is concurrent in Myller
sense in $\mathfrak{M}(C, \overline{\xi}, \pi)$ if, at every point
$P(s)\in C$ the geodesic vector field $(C^{*},
\overline{\xi}^{*})$ is concurrent.
\end{definition}
For $G(s)\neq 0$, we have
\begin{theorem}
The versor field $(C,\overline{\xi})$ is concurrent in Myller sense in
$\mathfrak{M}$ iff the following equation holds:
\begin{equation}
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{c_{2}}{G}\right) = c_{1}.
\end{equation}
\end{theorem}
For the proof see Section 1.1, Chapter 1, Theorem 1.1.3.
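The computation behind this condition can be sketched as follows. Differentiating (2.9.3) and assuming, as above, $\displaystyle\frac{d\overline{r}}{ds} = c_{1}\overline{\xi} + c_{2}\overline{\mu} + c_{3}\overline{\nu}$ and $\displaystyle\frac{d\overline{\xi}}{ds} = G\overline{\mu} + K\overline{\nu}$, one obtains
$$
\displaystyle\frac{d\overline{R}}{ds} = \left(c_{1} - \displaystyle\frac{d}{ds}\left(\displaystyle\frac{c_{2}}{G}\right)\right)\overline{\xi} + \left(c_{3} - \displaystyle\frac{c_{2}K}{G}\right)\overline{\nu},
$$
the component along $\overline{\mu}$ vanishing identically; hence the adjoint point is stationary in the plane $\pi(s)$ exactly when $\displaystyle\frac{d}{ds}\left(\displaystyle\frac{c_{2}}{G}\right) = c_{1}$.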
\section{Spherical image of a configuration $\mathfrak{M}$}
\setcounter{theorem}{0}
\setcounter{equation}{0}
In the Section 2, Chapter 1 we defined the spherical image of a
versor field $(C,\overline{\xi})$. Applying this idea to the
normal vectors field $(C,\overline{\nu})$ to a Myller
configuration $\mathfrak{M}(C, \overline{\xi}, \pi)$ we define the
notion of spherical image $C^{*}$ of $\mathfrak{M}$ as being
\begin{equation}
\overline{\nu}^{*}(s) = \overline{OP}^{*}(s) = \overline{\nu}(s)
\end{equation}
Thus, the relations between the curvature $\chi_{1}$ and torsion
$\chi_{2}$ of $(C,\overline{\nu})$ and geodesic curvature
$\kappa_{g}^{*}$ of $C^{*}$ at a point $P^{*}\in C^{*}$ and arclength
$s^{*}$ are as follows
$$
\chi_{1} = \displaystyle\frac{ds^{*}}{ds},\; \chi_{2}ds = \kappa_{g}^{*}ds^{*}.
$$
The properties enumerated in Section 2, Chapter 1, can be obtained for the
spherical image $C^{*}$ of the configuration $\mathfrak{M}$.
Consider the versor field $\overrightarrow{P^{*}P^{*}_{1}} =
\overline{\xi}^{*}(s) = \overline{\xi}(s)$ and the angle $\theta = \sphericalangle
(\overline{\nu}_{2}, \overline{\xi})$. It follows
\begin{equation}
K = -\displaystyle\frac{ds^{*}}{ds}\cos \theta,\;\; (\mbox{for } \chi_{2}\neq 0).
\end{equation}
Thus, for $\theta = \pm \displaystyle\frac{\pi}{2}$ one gets $K=0,$ which leads to
a new interpretation of the fact that $\overline{\xi}(s)$ is
conjugated to $\overline{\alpha}(s)$ in Myller configurations.
Analogously, one obtains $T = -\displaystyle\frac{ds^{*}}{ds}\cos
\widetilde{\theta},$ $\widetilde{\theta} =
\sphericalangle(\overline{\nu}_{2}, \overline{\mu}^{*})$ (with
$\overline{\mu}^{*}(s) = \overline{\mu}(s)$, applied at the point
$P^{*}(s)\in C^{*}$). For $\widetilde{\theta} = \pm \displaystyle\frac{\pi}{2}$ it
follows that $\overline{\xi}(s)$ are orthogonally conjugated with
$\overline{\alpha}(s) .$
The problem is to see if the Gauss-Bonnet formula can be extended
to Myller configurations $\mathfrak{M}(C, \overline{\xi}, \pi)$.
In the case $\overline{\alpha}(s)\in \pi(s)$ (i.e. $\overline{\alpha}(s)\bot
\overline{\nu}(s)$) such a problem was suggested by Thomson [18] and it
has been solved by Mark Krein in 1926, [18].
Here, we study this problem in the general case of Myller
configuration, when $\langle \overline{\alpha}(s),
\overline{\nu}(s)\rangle\neq 0.$
First of all we prove
\begin{lemma} Assume that we have:
1. $\mathfrak{M}(C, \overline{\xi},\pi)$ a Myller configuration of
class $C^{3}$, in which $C$ is a closed curve, having $s$ as
arclength.
2. The spherical image $C^{*}$ of $\mathfrak{M}$ determines on the
support sphere $\Sigma$ a simply connected domain of area $\omega$.
Under these hypotheses we have the formula
\begin{equation}
\omega = 2\pi - \int_{C}\left\langle\overline{\nu}, \displaystyle\frac{d\overline{\nu}}{ds},
\displaystyle\frac{d^{2}\overline{\nu}}{ds^{2}}\right\rangle\Big/
\left\|\displaystyle\frac{d\overline{\nu}}{ds}\right\|^{2}ds.
\end{equation}
\end{lemma}
\begin{proof}\rm Let $\Sigma$ be the unitary sphere of
center $O\in E_{3}$ and a simply connected domain $D$, delimited
by $C^{*}$ on $\Sigma.$ Assume that $D$ remains to the left with
respect to an observer looking in the sense of the versor
$\overline{\nu}(s)$, when going along $C^{*}$ in the
positive sense.
Thus, we can take the following representation of $\Sigma$:
\begin{eqnarray}
x^{1}&=&\cos \varphi \sin \theta\nonumber\\
x^{2}&=&\sin \varphi \sin \theta\\x^{3}&=&\cos \theta,\;\; \varphi\in [0, 2\pi), \theta
\in \left(-\displaystyle\frac{\pi}{2}, \displaystyle\frac{\pi}{2}\right).\nonumber
\end{eqnarray}
The curve $C^{*}$ can be given by
\begin{equation}
\varphi = \varphi(s),\; \theta = \theta(s),\;\; s\in [0,s_{1}]
\end{equation}
with $\varphi(s), \theta(s)$ of class $C^{3}$ and $C^{*}$ being closed:
$\varphi(0) = \varphi(s_{1})$, $\theta(0) = \theta(s_1)$.
The area $\omega$ of the domain $D$ is
\begin{equation}
\begin{array}{l}
\omega = \displaystyle\int_{0}^{s_{1}}\left(\int_{0}^{\theta(s)}\sin \theta^{\prime}\, d\theta^{\prime}\right)\displaystyle\frac{d\varphi}{ds}ds =
\int_{0}^{s_{1}}(1-\cos \theta)\displaystyle\frac{d\varphi}{ds}ds =\\ = 2\pi
-\int_{0}^{s_{1}}\cos \theta \displaystyle\frac{d\varphi}{ds}ds.
\end{array}
\end{equation}
Noticing that the versor $\overline{\nu}^{*} = \overline{\nu}(s)$
has the coordinates (2.10.4), a straightforward calculation leads to
$$
\left\langle \overline{\nu}, \displaystyle\frac{d\overline{\nu}}{ds},
\displaystyle\frac{d^{2}\overline{\nu}}{ds^{2}} \right\rangle/
\left\|\displaystyle\frac{d\overline{\nu}}{ds}\right\|^{2} = \cos \theta
\displaystyle\frac{d\varphi}{ds} +\displaystyle\frac{d}{ds} \mbox{arctg}\, \displaystyle\frac{\sin
\theta\displaystyle\frac{d\varphi}{ds}}{\displaystyle\frac{d\theta}{ds}}.
$$
Denoting by $\psi$ the angle between the meridian $\varphi = \varphi_{0}$ and
curve $C^{*}$, oriented with respect to the versor
$\overline{\nu}$ we have
\begin{equation}
\mbox{tg}\, \psi =\left(\sin \theta \displaystyle\frac{d\varphi}{ds}\right)\Big/ \displaystyle\frac{d\theta}{ds}.
\end{equation}
The previous formulae lead to
\begin{equation}
\left\langle \overline{\nu}, \displaystyle\frac{d\overline{\nu}}{ds},
\displaystyle\frac{d^{2}\overline{\nu}}{ds^{2}} \right\rangle/
\left\|\displaystyle\frac{d\overline{\nu}}{ds}\right\|^{2} = \cos \theta
\displaystyle\frac{d\varphi}{ds} + \displaystyle\frac{d\psi}{ds}.
\end{equation}
But in our conditions of regularity $\displaystyle\int_{C}\displaystyle\frac{d\psi}{ds}ds =
0.$ Thus (2.10.6) and (2.10.8) imply the formula (2.10.3).
\end{proof}
It is not difficult to see that the formula (2.10.3) can be
generalized in the case when the curve $C$ of the configuration
$\mathfrak{M}$ has a finite number of angular points. The second
member of the formula (2.10.3) will be additively modified by the
total variation of the angle $\psi$ at the angular points
of the curve $C$.
Now, one can prove the generalization of Mark Krein formula.
\begin{theorem}
Assume that we have
1. $\mathfrak{M}(C, \overline{\xi}, \pi)$ a Myller configuration
of class $C^{3}$ $($i.e. $C$ is of class $C^{3}$ and $\overline{\xi}(s)$,
$\overline{\nu}(s)$ are of class $C^{2}$$)$, in which $C$ is a
closed curve, having $s$ as natural parameter.
2. The spherical image $C^{*}$ of $\mathfrak{M}$ determines on the
support sphere $\Sigma$ a simply connected domain of area $\omega.$
3. $\sigma$ is the oriented angle between the versors
$\overline{\nu}_{3}(s)$ and $\overline{\xi}(s)$. In these
conditions the following formula holds:
\begin{equation}
\omega = 2\pi -\int_{C}G(s)ds +\int_{C}d\sigma.
\end{equation}
\end{theorem}
\noindent {\bf Proof.} The first two conditions allow us to apply
Lemma 2.10.1. The fundamental equations (1.3.1), (1.3.2) of
$(C,\overline{\nu})$ give us for $\overline{\nu} =
\overline{\nu}_{1}$:
$$
\displaystyle\frac{d\overline{\nu}_{1}}{ds} = \chi_{1}\overline{\nu}_{2},\;
\displaystyle\frac{d^{2}\overline{\nu}_{1}}{ds^{2}} =
\displaystyle\frac{d\chi_{1}}{ds}\overline{\nu}_{2} +
\chi_{1}(-\chi_{1}\overline{\nu}_{1} +
\chi_{2}\overline{\nu}_{3}).
$$
So,
$$
\left\langle \overline{\nu}_{1}, \displaystyle\frac{d\overline{\nu}_{1}}{ds},
\displaystyle\frac{d^{2}\overline{\nu}_{1}}{ds^{2}} \right\rangle =
\chi_{1}^{2}\chi_{2}.
$$
Thus, the formula (2.10.3) leads to the following formula
$$
\omega = 2\pi -\int_{C}\chi_{2}(s)ds.
$$
But, we have $G(s) =\chi_{2}(s) +\displaystyle\frac{d\sigma}{ds}$, $G(s)$ being
the geodesic curvature of $(C,\overline{\xi})$ in $\mathfrak{M}$.
Then the last formula is exactly (2.10.9).
If $G=0$ for $\mathfrak{M}$, then we have $\omega = 2\pi. $
Indeed, $G(s) = 0$ along the curve $C$ implies $\omega = 2\pi +2k \pi$,
$k\in \mathbb{Z}$. But we have $0\leq \omega <4\pi,$ so $k=0.$
A particular case of Theorem 2.10.1 is the famous result of Jacobi:
{\it The area $\omega$ of the domain $D$ determined on the sphere $\Sigma$
by the closed curve $C^{*}$-spherical image of the principal normals
of a closed curve $C$ in $E_{3}$, assuming $D$ a simply connected
domain, is half of the area of the sphere $\Sigma$.}
In this case we consider the Myller configuration $\mathfrak{M}(C,
\overline{\alpha}, \pi)$, $\pi(s)$ being the rectifying planes of $C$.
\newpage
\thispagestyle{empty}
\chapter{Applications of theory of Myller configurations in the geometry of nonholonomic manifolds from $E_{3}$}
A second efficient application of the theory of Myller
configurations can be made in the geometry of the nonholonomic
manifolds $E_{3}^{2}$ in $E_{3}$. Some important results, obtained
in the geometry of the manifolds $E_{3}^{2}$ by Issaly l$'$Abee, D.
Sintzov [40], Gh. Vr\u{a}nceanu [44], [45], Gr. Moisil [72], M. Haimovici [14], [15], Gh. Gheorghiev [10], [11], I. Popa [12],
G. Th. Gheorghiu [13], R. Miron [30], [64], [65], I. Creang\u{a} [4], [5], [6], A. Dobrescu [7], [8] and I. Vaisman [41], [42], can be presented in a unitary way by means of
the Myller configuration associated to a curve embedded in
$E_{3}^{2}$. Now, a number of new concepts appear: the mean
torsion, the total torsion, the concurrence of tangent vector fields.
New formulae, such as the Mayer and Bortolotti formulas, the Dupin and Bonnet
indicatrices etc., will be studied, too.
\section{Moving frame in Euclidean spaces $E_{3}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
Let $R = (P; I_{1}, I_{2}, I_{3})$ be an orthonormed positively
oriented frame in $E_{3}$. The application $P\in E_{3}\to R = (P;
I_{1}, I_{2}, I_{3})$ of class $C^{k}$, $k\geq 3$ is a moving frame
of class $C^{k}$ in $E_{3}$. If $\overline{r} =
\overrightarrow{OP} = x \overline{e}_{1} + y \overline{e}_{2} +
z\overline{e}_{3}$ is the position vector of the point $P$, and $P$
is the application point of the versors $I_{1}, I_{2}, I_{3}$,
then the moving equations of $R$ can be expressed in the form (see
Spivak, vol. I [76] and Biujguhens [2]):
\begin{equation}
d\overline{r} = \omega_{1}I_{1} + \omega_{2}I_{2} + \omega_{3}I_{3},
\end{equation}
where $\omega_{i}$ $(i=1,2,3)$ are independent 1-forms of class $C^{k-1}$
on $E_{3}$, and
\begin{equation}
dI_{i} = \sum_{j=1}^{3}\omega_{ij}I_{j}, \;\; (i=1,2,3),
\end{equation}
where $\omega_{ij}$, $(i,j = 1,2,3)$, are the Ricci rotation
coefficients of the frame $R$. They are 1-forms of class
$C^{k-1}$, satisfying the skew-symmetry conditions
\begin{equation}
\omega_{ij} +\omega_{ji} = 0,\;\; (i,j = 1,2,3).
\end{equation}
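The conditions (5.1.3) follow by differentiating the orthonormality relations of the frame: from $\langle I_{i}, I_{j}\rangle = \delta_{ij}$ and (5.1.2) one gets
$$
0 = d\langle I_{i}, I_{j}\rangle = \langle dI_{i}, I_{j}\rangle + \langle I_{i}, dI_{j}\rangle = \omega_{ij} + \omega_{ji}.
$$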
In the following, it is convenient to write the equations (5.1.2)
in the form
$$
\begin{array}{c}
dI_{1} = rI_{2} -qI_{3}\\dI_{2} = p I_{3} - r I_{1}\\dI_{3} = q
I_{1} - p I_{2}.
\end{array}\leqno{(5.1.2)^{\prime}}
$$
Thus, the {\it structure equations} of the moving frame $R$ can
be obtained by exterior differentiation of the equations (5.1.1) and
(5.1.2)$^{\prime}$, modulo the same system of equations (5.1.1),
(5.1.2)$^{\prime}$.
One obtains, without difficulties
\begin{theorem}
The structure equations of the moving frame $R$ are
\begin{eqnarray}
&&d\omega_{1} = r \wedge \omega_{2} - q\wedge \omega_{3}, \;\; dp = r\wedge
q,\nonumber\\&&d\omega_{2} = p\wedge \omega_{3} - r \wedge \omega_{1},\;\; dq
= p\wedge r ,\\&&d\omega_{3} = q\wedge \omega_{1} - p\wedge \omega_{2},\;\; dr
= q\wedge p.\nonumber
\end{eqnarray}
\end{theorem}
In vol. II of Spivak's book [76], the theorem of existence
and uniqueness of moving frames is proved:
\begin{theorem}
Let $(p,q,r)$ be 1-forms of class $C^{k-1}$, $(k\geq 3)$, on $E_{3}$
which satisfy the second group of the structure equations $(5.1.4)$.
Then:
$1^{\circ}.$ In a neighborhood of point $O\in E_{3}$ there is a
triple of vector fields $(I_{1}, I_{2}, I_{3})$, solutions of
equations (5.1.2)$^{\prime}$, which satisfy the initial conditions
$(I_{1}(0), I_{2}(0), I_{3}(0))$-positively oriented, orthonormed
triple in $E_{3}$.
$2^{\circ}.$ In a neighborhood of point $O\in E_{3}$ there exists
a moving frame $R = (P; I_{1}, I_{2}, I_{3})$ orthonormed
positively oriented for which $\overline{r} = \overrightarrow{OP}$
is given by (5.1.1), where $\omega_{i} (i=1,2,3)$ satisfy the first
group of equations (5.1.4). $(I_{1}, I_{2}, I_{3})$ are given in
$1^{\circ}$ and the initial conditions are verified: $(P_{0} =
\overrightarrow{OO}; I_{1}(0), I_{2}(0), I_{3}(0))$-the
orthonormed frame at point $O\in E_{3}$.\end{theorem}
Since the 1-forms $\omega_{1}, \omega_{2}, \omega_{3}$ are
independent, we can express the 1-forms $p,q,r$ with respect to
$\omega_{1}, \omega_{2}, \omega_{3}$ in the following form:
\begin{eqnarray}
&&p = p_{1}\omega_{1} + p_{2}\omega_{2} + p_{3}\omega_{3},\ q = q_{1}\omega_{1} +
q_{2}\omega_{2} + q_{3}\omega_{3},\\&& r = r_{1}\omega_{1} + r_{2}\omega_{2} +
r_{3}\omega_{3}.\nonumber
\end{eqnarray}
The coefficients $p_{i}, q_{i}, r_{i}$ are functions of class
$C^{k-1}$ on $E_{3}$.
Of course we can write the structure equations (5.1.4) in terms of
these coefficients. Also, we can state Theorem 5.1.2 by means of the
coefficients of the 1-forms $p,q,r$.
The 1-forms $p,q,r$ determine the rotation vector $\Omega$ of the
moving frame:
\begin{equation}
\Omega = p I_{1} + q I_{2} + r I_{3}.
\end{equation}
Its orthogonal projection on the plane $(P; I_{1}, I_{2})$ is
\begin{equation}
\overline{\theta} = p I_{1} + q I_{2}.
\end{equation}
Let $C$ be a curve of class $C^{k}$, $k\geq 3$ in $E_{3}$, given
by
\begin{equation}
\overline{r} = \overline{r}(s), \;\; s\in(s_{1},s_{2}),
\end{equation}
where $s$ is natural parameter. If the origin $P$ of moving frame
describes the curve $C$ we have $R = (P; I_{1}(s), I_{2}(s),
I_{3}(s))$, where $P(s) = P(\overline{r}(s))$, $I_{j}(s) =
I_{j}(\overline{r}(s))$, $(j=1,2,3)$.
So, the tangent versor $\displaystyle\frac{d\overline{r}}{ds}$ to curve $C$ is:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \displaystyle\frac{\omega_{1}(s)}{ds}I_{1} +
\displaystyle\frac{\omega_{2}(s)}{ds}I_{2} + \displaystyle\frac{\omega_{3}(s)}{ds}I_{3}
\end{equation}
and $ds^{2}=\langle d\overline{r},d\overline{r}\rangle$ is:
\begin{equation}
ds^{2} = (\omega_{1}(s))^{2} + (\omega_{2}(s))^{2} + (\omega_{3}(s))^{2},
\end{equation}
where $\omega_{i}(s)$ are 1-forms $\omega_{i}$ restricted to $C$.
The restrictions $p(s), q(s), r(s)$ of the 1-forms $p,q,r$ to $C$
give us:
\begin{equation}
\begin{array}{lll}
\displaystyle\frac{p(s)}{ds} &=& p_{1}(s)\displaystyle\frac{\omega_{1}(s)}{ds} +
p_{2}(s)\displaystyle\frac{\omega_{2}(s)}{ds}
+p_{3}(s)\displaystyle\frac{\omega_{3}(s)}{ds},\\\vspace*{-.3cm}\\ \displaystyle\frac{q(s)}{ds} &=&
q_{1}(s)\displaystyle\frac{\omega_{1}(s)}{ds} + q_{2}(s)\displaystyle\frac{\omega_{2}(s)}{ds} +
q_{3}(s)\displaystyle\frac{\omega_{3}(s)}{ds},\\\vspace*{-.3cm}\\ \displaystyle\frac{r(s)}{ds}&=&r_{1}(s)\displaystyle\frac{\omega_{1}(s)}{ds}
+ r_{2}(s)\displaystyle\frac{\omega_{2}(s)}{ds} +
r_{3}(s)\displaystyle\frac{\omega_{3}(s)}{ds}.
\end{array}
\end{equation}
\section{Nonholonomic manifolds $E_{3}^{2}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
\begin{definition}
A nonholonomic manifold $E_{3}^{2}$ on a domain $D\subset E_{3}$
is a nonintegrable distribution $\cal{D}$ of dimension 2, of class
$C^{k-1},$ $k\geq 3$.
\end{definition}
One can consider $\cal{D}$ as a plane field $\pi(P)$, $P\in D,$ the
application $P\to \pi(P)$ being of class $C^{k-1}$.
Also $\cal{D}$ can be presented as the plane field $\pi(P)$
orthogonal to a versor field $\overline{\nu}(P)$ normal to the
plane $\pi(P)$, $\forall P\in D.$
Assuming that $\overline{\nu}(P)$ is the versor of vector
$$\overline{V}(P) = X(x,y,z)\overline{e}_{1} +
Y(x,y,z)\overline{e}_{2} + Z(x,y,z)\overline{e}_{3}$$ and
$\overrightarrow{OP} = x \overline{e}_{1} + y \overline{e}_{2} +z
\overline{e}_{3}$ thus the nonholonomic manifold $E_{3}^{2}$ is
given by the Pfaff equation:
\begin{equation}
\omega = X(x,y,z)dx + Y(x,y,z)dy + Z(x,y,z)dz = 0
\end{equation}
which is nonintegrable, i.e.
\begin{equation}
\omega\wedge d\omega \neq0.
\end{equation}
Consider a moving frame $\cal{R} = (P; I_{1}, I_{2}, I_{3})$ in
the space $E_{3}$ and the nonholonomic manifold $E_{3}^{2}$, on a
domain $D$, orthogonal to the versor field $I_{3}$. It is given by
the Pfaff equation
\begin{equation}
\omega_{3} = \langle I_{3}, d\overline{r}\rangle = 0,\;\;\;
\mbox{on}\; D.
\end{equation}
By means of (5.1.4) we obtain
\begin{equation}
\omega_{3}\wedge d\omega_{3} = -(p_{1} +q_{2})\omega_{1}\wedge \omega_{2}\wedge
\omega_{3}.
\end{equation}
So, the Pfaff equation $\omega_{3} = 0$ is not integrable if and only
if we have
\begin{equation}
p_{1} + q_{2}\neq 0,\; \mbox{on}\; D.
\end{equation}
It is well known that in the case $p_{1} + q_{2} =0$ on the
domain $D$, there are two functions $h(x,y,z)\neq 0$ and $f(x,y,z)$
on $D$ with the property
\begin{equation}
h\omega_{3} = df.
\end{equation}
Thus the equation $\omega_{3} = 0$ has the general solution
\begin{equation}
f(x,y,z) = c,\;\;(c=const.),\;\; P(x,y,z)\in D.
\end{equation}
In this case the manifold $E_{3}^{2}$ is formed by a family of
surfaces, given by (5.2.7).
For simplicity we assume that the manifold
$E_{3}^{2}$ is of class $C^{\infty}$, and that the
geometric object fields and mappings used in the
following parts of this chapter have the same property.
A smooth curve $C$ embedded in the nonholonomic manifold
$E_{3}^{2}$ has the tangent vector $\overline{\alpha} =
\displaystyle\frac{d\overline{r}}{ds}$ given by (5.1.9) with $\omega_{3} = 0,$
that is
\begin{equation}
\overline{\alpha}(s) = \displaystyle\frac{d\overline{r}}{ds}
=\displaystyle\frac{\omega_{1}(s)}{ds}I_{1} + \displaystyle\frac{\omega_{2}(s)}{ds}I_{2},\; \forall
P(\overline{r}(s))\in D.
\end{equation}
The square of the arclength element $ds$ is
\begin{equation}
ds^{2} = (\omega_{1}(s))^{2} + (\omega_{2}(s))^{2}.
\end{equation}
The arclength of the curve $C$ on the interval $[a,b]\subset
(s_{1},s_{2})$ is \begin{equation} s =
\int_{a}^{b}\sqrt{(\omega_{1}(\sigma))^{2} + (\omega_{2}(\sigma))^{2}}d\sigma.
\end{equation}
A tangent versor field $\overline{\xi}(s)$ at the same point
$P(s)$ to another curve $C^{\prime},$ $P\in C^{\prime}$ and $P\in
C,$ can be given in the same form (5.2.8):
\begin{equation}
\overline{\xi}(s) = \displaystyle\frac{\delta\overline{r}}{\delta s} =
\displaystyle\frac{\widetilde{\omega}_{1}(s)}{\delta s}I_{1} +
\displaystyle\frac{\widetilde{\omega}_{2}(s)}{\delta s}I_{2},
\end{equation}
with $\delta s^{2} = (\widetilde{\omega}_{1}(s))^{2} +
(\widetilde{\omega}_{2}(s))^{2}$.
Thus, $\overline{\xi}(s)$ is a versor field tangent to the
nonholonomic manifold $E_{3}^{2}$ along the curve $C$.
To any versor field $(C,\overline{\xi})$ tangent to $E_{3}^{2}$
along the curve $C$ belonging to $E_{3}^{2}$ we uniquely
associate the tangent Myller configuration $\mathfrak{M}_{t}(C,
\overline{\xi}, I_{3})$. We say that: {\it the geometry of the
associated tangent Myller configuration $\mathfrak{M}_{t}(C,
\overline{\xi},I_{3})$ is the geometry of versor field $(C,
\overline{\xi})$ on $E_{3}^{2}$}. In particular the geometry of
configuration $\mathfrak{M}_{t}(C,\overline{\alpha},I_{3})$ is {\it
the geometry of curve $C$} in the nonholonomic manifold
$E_{3}^{2}$.
The Darboux frame of $(C,\overline{\xi})$ on $E_{3}^{2}$ is
$\cal{R}_{D} = (P; \overline{\xi}, \overline{\mu}, I_{3})$, with
$\overline{\mu} = I_{3}\times \overline{\xi}$, i.e.:
\begin{equation}
\overline{\mu} = \displaystyle\frac{\widetilde{\omega}_{1}(s)}{\delta s}I_{2} -
\displaystyle\frac{\widetilde{\omega}_{2}(s)}{\delta s}I_{1}.
\end{equation}
In the Darboux frame $\cal{R}_{D}$, the tangent versor $\overline{\alpha}
= \displaystyle\frac{d\overline{r}}{ds}$ to $C$ can be expressed in the
following form:
\begin{equation}
\overline{\alpha}(s) = \cos \lambda(s) \overline{\xi}(s) + \sin
\lambda(s)\overline{\mu}(s).
\end{equation}
Taking into account the relations (5.2.8), (5.2.11) and (5.2.13),
one gets:
\begin{eqnarray}
\displaystyle\frac{\omega_{1}}{ds}& =& \cos \lambda \displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} -
\sin
\lambda \displaystyle\frac{\widetilde{\omega}_{2}}{\delta s},\nonumber\\
\displaystyle\frac{\omega_{2}}{ds}&=& \sin \lambda \displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} +
\cos \lambda \displaystyle\frac{\widetilde{\omega}_{2}}{\delta s}.
\end{eqnarray}
For $\lambda(s) = 0$, we have $\overline{\alpha}(s) = \overline{\xi}(s)$.
\section{The invariants $G,K,T$ of a tangent versor field $(C,\overline{\xi})$ in $E_{3}^{2}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
The invariants $G,K,T$ of $(C,\overline{\xi})$ satisfy the
fundamental equations
\begin{equation}
\displaystyle\frac{d\overline{\xi}}{ds} = G \overline{\mu}+K I_{3},\
\displaystyle\frac{d\overline{\mu}}{ds} = -G \overline{\xi}+T I_{3},\
\displaystyle\frac{dI_{3}}{ds} = -K \overline{\xi} - T \overline{\mu}.
\end{equation}
Consequently,
\begin{eqnarray}
G(\delta,d) &=& \left\langle \overline{\xi}, \displaystyle\frac{d\overline{\xi}}{ds},
I_{3} \right\rangle\nonumber\\
K(\delta,d)&=& \left\langle\displaystyle\frac{d\overline{\xi}}{ds}, I_{3}\right\rangle =
-\left\langle \overline{\xi}, \displaystyle\frac{dI_{3}}{ds}\right\rangle\\T(\delta,d)&=&
\left\langle \overline{\xi}, I_{3}, \displaystyle\frac{dI_{3}}{ds}\right\rangle.\nonumber
\end{eqnarray}
The proofs and the geometrical meanings are similar to those from
Chapter 3.
By means of expression (5.2.11) of versors $\overline{\xi}(s)$ we
deduce:
\begin{eqnarray}
\displaystyle\frac{d\overline{\xi}}{ds}&=& \left[
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s}\right) -
\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} \right]I_{1} + \left[
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s}\right) +
\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} \right]I_{2} + \nonumber\\&&+
\left[ \displaystyle\frac{p}{ds}\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} - \displaystyle\frac{q}{ds}
\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} \right]I_{3},
\end{eqnarray}
and
\begin{equation}
\displaystyle\frac{dI_{3}}{ds} = \left(q_{1}\displaystyle\frac{\omega_{1}}{ds} +
q_{2}\displaystyle\frac{\omega_{2}}{ds}\right)I_{1} - \left(p_{1}\displaystyle\frac{\omega_{1}}{ds}
+ p_{2}\displaystyle\frac{\omega_{2}}{ds} \right)I_{2}.
\end{equation}
The formulae (5.3.2) lead to the following expressions for {\it
geodesic curvature} $G(\delta,d)$, {\it normal curvature} $K(\delta,d)$
and {\it geodesic torsion} $T(\delta,d)$ of the tangent versor field
$(C,\overline{\xi})$ on the nonholonomic manifold $E_{3}^{2}$:
\begin{equation}
\hspace*{8mm}G(\delta,d) = \displaystyle\frac{\widetilde{\omega}_{1}}{\delta
s}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s }\right)
+ \displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} \right] -
\displaystyle\frac{\widetilde{\omega}_{2}}{\delta
s}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s }\right) -
\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} \right],
\end{equation}
\begin{equation}
K(\delta,d) = \displaystyle\frac{p\widetilde{\omega}_{2} - q\widetilde{\omega}_{1}}{\delta
s\cdot d s}
\end{equation}
\begin{equation}
T(\delta,d) = \displaystyle\frac{p\widetilde{\omega}_{1} + q\widetilde{\omega}_{2}}{\delta s
ds}.
\end{equation}
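Indeed, (5.3.6) and (5.3.7) are obtained by substituting (5.3.3) and (5.3.4) into the formulas (5.3.2); for instance,

```latex
$$
K(\delta,d) = \left\langle \frac{d\overline{\xi}}{ds}, I_{3}\right\rangle
= \frac{p}{ds}\,\frac{\widetilde{\omega}_{2}}{\delta s}
- \frac{q}{ds}\,\frac{\widetilde{\omega}_{1}}{\delta s}
= \frac{p\widetilde{\omega}_{2} - q\widetilde{\omega}_{1}}{\delta s\cdot ds},
$$
```

and the expression (5.3.7) of $T(\delta,d)$ follows in the same way from the mixed product $\left\langle \overline{\xi}, I_{3}, \displaystyle\frac{dI_{3}}{ds}\right\rangle$.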
All these formulae take very simple forms if we consider the
angles $\alpha = \sphericalangle (\overline{\alpha},I_{1})$, $\beta =
\sphericalangle (\overline{\xi},I_{1})$ since we have
\begin{eqnarray}
&&\displaystyle\frac{\omega_{1}}{ds} = \cos \alpha,\ \displaystyle\frac{\omega_{2}}{ds} = \sin
\alpha,\nonumber\\&& \displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} = \cos \beta,\
\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} = \sin \beta.
\end{eqnarray}
With notations
\begin{equation}
\hspace*{8mm}G(\delta,d) = G(\beta,\alpha),\ K(\delta,d) = K(\beta,\alpha),\ T(\delta,d) = T(\beta,\alpha)
\end{equation}
the formulae (5.3.2) can be written:
\begin{equation}
\hspace*{12mm}G(\beta,\alpha) = \displaystyle\frac{d\beta}{ds} +r_{1}\cos \alpha +r_{2}\sin \alpha,
\end{equation}
\begin{equation}
\hspace*{14mm}K(\beta,\alpha) = (p_{1}\cos \alpha + p_{2}\sin \alpha)\sin \beta - (q_{1}\cos \alpha
+q_{2}\sin \alpha)\cos \beta,
\end{equation}
\begin{equation}
\hspace*{14mm}T(\beta,\alpha) = (p_{1}\cos \alpha +p_{2}\sin \alpha)\cos \beta + (q_{1}\cos\alpha +
q_{2}\sin \alpha)\sin \beta.
\end{equation}
Immediate consequences:
Setting
\begin{equation}
T_{m} = p_{1} +q_{2},
\end{equation}
(called the {\it mean torsion} of $E_{3}^{2}$ at point $P\in
E_{3}^{2}$),
\begin{equation}
H = p_{2} - q_{1}
\end{equation}
(called the {\it mean curvature} of $E_{3}^{2}$ at point $P\in
E_{3}^{2}$), from (5.3.11) and (5.3.12) we have:
\begin{eqnarray}
K(\beta,\alpha) - K(\alpha,\beta)&=& T_{m}\sin (\beta-\alpha),\nonumber\\T(\beta,\alpha)-
T(\alpha,\beta) &=& H \sin (\alpha-\beta).
\end{eqnarray}
Indeed, subtracting (5.3.11) written for $(\beta,\alpha)$ and for
$(\alpha,\beta)$, the terms in $p_{2}$ and $q_{1}$ cancel, while the terms
in $p_{1}$ and $q_{2}$ collect the factor $\cos\alpha\sin\beta -
\sin\alpha\cos\beta = \sin(\beta-\alpha)$; the second relation follows in
the same way from (5.3.12).
\begin{theorem}
The following properties hold:

1. Assuming $\sin(\beta-\alpha)\neq 0$, the normal curvature
$K(\beta,\alpha)$ is symmetric with respect to the versor fields
$(C,\overline{\xi}), (C,\overline{\alpha})$ at every point $P\in
E_{3}^{2}$, if and only if the mean torsion $T_{m}$ of $E_{3}^{2}$ vanishes
$(E_{3}^{2}$ is integrable$)$.

2. Under the same assumption, the geodesic torsion
$T(\beta,\alpha)$ is symmetric with respect to the versor fields
$(C,\overline{\xi})$, $(C,\overline{\alpha})$, if and only if the
nonholonomic manifold $E_{3}^{2}$ has null mean curvature
$(E_{3}^{2}$ is minimal$)$.\end{theorem}
\section{Parallelism, conjugation and orthonormal conjugation}
\setcounter{theorem}{0}\setcounter{equation}{0}
The notion of conjugation of a versor field $(C,\overline{\xi})$
with the tangent field $(C,\overline{\alpha})$ on the nonholonomic
manifold $E_{3}^{2}$ is the one studied for the associated Myller
configuration $\mathfrak{M}_t$.
Thus $(C,\overline{\xi})$ is conjugated with
$(C,\overline{\alpha})$ on $E_{3}^{2}$ if and only if $K(\delta,d)=0$, i.e.
\begin{equation}
(p_{1}\omega_{1} +p_{2}\omega_{2})\widetilde{\omega}_{2} - (q_{1}\omega_{1}
+q_{2}\omega_{2})\widetilde{\omega}_{1} = 0
\end{equation}
or, by means of (5.3.11):
\begin{equation}
(p_{1}\cos\alpha + p_{2}\sin \alpha)\sin \beta - (q_{1}\cos\alpha +q_{2}\sin
\alpha)\cos \beta = 0.
\end{equation}
All propositions established in section 3.3, Chapter 3, can be
applied.
The notion of orthogonal conjugation of $(C, \overline{\xi})$ with
$(C,\overline{\alpha})$ is given by $T(\delta,d) = 0$ or by
\begin{equation}
(p_{1}\omega_{1} + p_{2}\omega_{2})\widetilde{\omega}_{1} + (q_{1}\omega_{1} +
q_{2}\omega_{2})\widetilde{\omega}_{2} = 0
\end{equation}
or, by means of (5.3.11), it is characterized by
\begin{equation}
(p_{1}\cos \alpha + p_{2}\sin \alpha)\cos \beta + (q_{1}\cos \alpha + q_{2}\sin
\alpha)\sin \beta = 0.
\end{equation}
The relation of conjugation is symmetric iff $E_{3}^{2}$ is
integrable $(T_{m} = 0)$ and that of orthogonal conjugation is
symmetric iff $E_{3}^{2}$ is minimal (i.e. $H=0$).
Now we study the case of tangent versor field $(C,\overline{\xi})$
parallel along $C$, in $E_{3}^{2}$. Applying the theory of
parallelism in $\mathfrak{M}_{t}(C,\overline{\xi},\pi)$, from
Chapter 3 we obtain:
\begin{theorem}
The versors $(C,\overline{\xi})$, tangent to the manifold
$E_{3}^{2}$ along the curve $C$ is parallel in the Levi-Civita
sense if and only if the following system of equations holds
\begin{equation}
\displaystyle\frac{\widetilde{\omega}_{1}}{\delta
s}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s }\right)
+\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} \right] -
\displaystyle\frac{\widetilde{\omega}_{2}}{\delta
s}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s }\right)
- \displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} \right] = 0.
\end{equation}
\end{theorem}
But the previous equation is equivalent to the system:
\begin{eqnarray}
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s}\right) -
\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s} &=&
h(s)\displaystyle\frac{\widetilde{\omega}_{1}}{\delta
s}\nonumber\\\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\widetilde{\omega}_{2}}{\delta
s}\right) +\displaystyle\frac{r}{ds}\displaystyle\frac{\widetilde{\omega}_{1}}{\delta s} &=&
h(s)\displaystyle\frac{\widetilde{\omega}_{2}}{\delta s},\;\; \mbox{for some function}\ h(s).
\end{eqnarray}
Since the tangent vector field $h(s)\overline{\xi}(s)$ has the
same direction as the versor $\overline{\xi}(s)$, from (5.3.5) we
obtain $G(\widetilde{\delta},d) = h^{2}G(\delta,d)$, $\widetilde{\delta}$
being the direction of the tangent vector $h(s)\overline{\xi}$.
Therefore the equation $G(\delta,d)=0$ is invariant under
the mappings $\overline{\xi}(s)\to h(s)\overline{\xi}(s)$.
Thus the equations (5.4.5) or (5.4.6) characterize the parallelism of
directions $(C, h(s) \overline{\xi}(s))$ tangent to $E_{3}^{2}$
along $C$.
All properties of the parallelism of versors $(C,\xi)$ studied for
the configuration $\mathfrak{M}_{t}$ in sections 2.8, Chapter 2 are
applied here.
Theorem of existence and uniqueness of parallel of tangent versors
$(C,\overline{\xi})$ on $E_{3}^{2}$ can be formulated exactly as
for this notion in $\mathfrak{M}_{t}$.
But such a theorem can also be obtained by means of the
following equation, given by (5.3.10):
\begin{equation}
G(\beta,\alpha) \equiv \displaystyle\frac{d\beta}{ds} + r_{1}(s)\cos \alpha +r_{2}(s)\sin \alpha
= 0.
\end{equation}
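Since (5.4.7) is a first order linear differential equation for the angle $\beta(s)$, the parallel versor field along $C$ with prescribed initial angle $\beta(s_{0}) = \beta_{0}$ exists, is unique, and is given by a quadrature:

```latex
$$
\beta(s) = \beta_{0} - \int_{s_{0}}^{s}\left(r_{1}(\sigma)\cos\alpha(\sigma)
+ r_{2}(\sigma)\sin\alpha(\sigma)\right)d\sigma .
$$
```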
The parallelism of a vector field $(C,\overline{V})$ tangent to
$E_{3}^{2}$ along the curve $C$ can be studied using the form
$$
\overline{V}(s)= \|\overline{V}(s)\|\overline{\xi}(s)
$$
where $\overline{\xi}$ is the versor of $\overline{V}(s)$.
Also, we can start from the definition of Levi-Civita parallelism
of tangent vector field $(C,\overline{V})$, expressed by the
property
\begin{equation}
\displaystyle\frac{d\overline{V}}{ds} = h(s)I_{3}.
\end{equation}
Setting
\begin{equation}
\overline{V}(s) = V^{1}(s)I_{1} + V^{2}(s)I_{2}
\end{equation}
and using (5.4.8) we obtain the system of differential equations
\begin{equation}
\displaystyle\frac{dV^{1}}{ds} - \displaystyle\frac{r}{ds}V^{2}=0,\ \
\displaystyle\frac{dV^{2}}{ds}+\displaystyle\frac{r}{ds}V^{1}=0.
\end{equation}
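The system (5.4.10) can be integrated explicitly. Setting $W = V^{1} + iV^{2}$, it becomes $\displaystyle\frac{dW}{ds} = -i\,\displaystyle\frac{r}{ds}\,W$, hence

```latex
$$
V^{1}(s) + i\,V^{2}(s) = \left(V^{1}(s_{0}) + i\,V^{2}(s_{0})\right)
\exp\left(-i\int_{s_{0}}^{s}\frac{r(\sigma)}{d\sigma}\,d\sigma\right).
$$
```

In particular $\|\overline{V}(s)\| = |W(s)|$ is constant along $C$, and the angle of two parallel vector fields is preserved, since both are rotated by the same phase.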
All properties of parallelism in Levi-Civita sense of tangent
vectors $(C,\overline{V})$ studied in section 2.8, Chapter 2 for
Myller configuration are valid. For instance
\begin{theorem}
The Levi-Civita parallelism of tangent vectors $(C,\overline{V})$
in the manifold $E_{3}^{2}$ preserves the lengths and angle of
vectors.
\end{theorem}
The concurrence in Myller sense of the tangent versor fields
$(C,\overline{\xi})$ is characterized, by Theorem 2.9.2, Chapter 2, by
the following equation
\begin{equation}
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{c_{2}}{G}\right) = c_{1},
\end{equation}
where $c_{1} = \langle \overline{\alpha},\overline{\xi}\rangle$,
$c_{2} = \langle \overline{\alpha}, \overline{\mu}\rangle,$ $G =
G(\delta,d)\neq 0.$
Consequence: the concurrence of the tangent versor fields
$(C,\overline{\xi})$ in $E_{3}^{2}$, for $G\neq 0$, is characterized
by
\begin{equation}
\displaystyle\frac{d}{ds}\left[\displaystyle\frac{\widetilde{\omega}_{1}\omega_{2} -
\widetilde{\omega}_{2}\omega_{1}}{\delta s\cdot ds}\cdot \displaystyle\frac{1}{G}\right] =
\displaystyle\frac{\widetilde{\omega}_{1}\omega_{1} + \widetilde{\omega}_{2}\omega_{2}}{\delta sds}
\end{equation}
or by:
\begin{equation}
\displaystyle\frac{d}{ds}\left[\displaystyle\frac{1}{G}\sin (\alpha - \beta)\right] =\cos (\alpha-\beta).
\end{equation}
The properties of concurrence in Myller sense can be taken from
section 2.8, Chapter 2.
\section{Theory of curves in $E_{3}^{2}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
To a curve $C$ in the nonholonomic manifold $E_{3}^{2}$ one can
uniquely associate a Myller configuration $\mathfrak{M}_{t} =
\mathfrak{M}_{t}(C, \overline{\alpha}, \pi)$, where $\pi(s)$ is the
plane orthogonal to the normal versor $I_{3}(s)$ of $E_{3}^{2}$ at every
point $P\in C.$
Thus the geometry of curves in $E_{3}^{2}$ is the geometry of
associated Myller configurations $\mathfrak{M}_{t}$. It is
obtained as a particular case taking $\overline{\xi}(s) =
\overline{\alpha}(s)$ from the previous sections of this chapter.
The Darboux frame of the curve $C$, in $E_{3}^{2}$ is given by
$$
\cal{R}_{D} = \{P(s); \overline{\alpha}(s), \overline{\mu}^{*}(s),
I_{3}(s)\},\;\; \overline{\mu}^{*} = I_{3}\times \overline{\alpha}
$$
and the fundamental equations of $C$ in $E_{3}^{2}$ are as
follows:
\begin{equation}
\displaystyle\frac{d\overline{r}}{ds} = \overline{\alpha}(s)
\end{equation}
and
\begin{eqnarray}
\displaystyle\frac{d\overline{\alpha}}{ds} &=& \kappa_{g}(s)\overline{\mu}^{*} +
\kappa_{n}(s)I_{3},\\\displaystyle\frac{d\overline{\mu}^{*}}{ds} &=&
-\kappa_{g}(s)\overline{\alpha} +\tau_{g}(s)I_{3},\\\displaystyle\frac{dI_{3}}{ds}&=&
-\kappa_{n}(s)\overline{\alpha} - \tau_{g}(s)\overline{\mu}^{*}.
\end{eqnarray}
The invariant $\kappa_{g}(s)$ is the {\it geodesic curvature} of $C$
at the point $P\in C,$ $\kappa_{n}(s)$ is the {\it normal curvature} of
the curve $C$ at $P\in C$ and $\tau_{g}(s)$ is the {\it geodesic torsion}
of $C$ at $P\in C$ in $E_{3}^{2}$.
The geometrical interpretations of these invariants are exactly
those described in the section 3.2, Chapter 3. Also, a fundamental
theorem can be enounced as in section 3.2, Theorem 3.2.2, Chapter
3.
The calculus of $\kappa_{g}$, $\kappa_{n}$ and $\tau_{g}$ is obtained by
the formulae (5.3.2), for $\overline{\xi}=\overline{\alpha}$.
We have:
\begin{eqnarray}
\kappa_{g}&=&\left\langle \overline{\alpha}, \displaystyle\frac{d\overline{\alpha}}{ds},
I_{3}\right\rangle,\\\kappa_{n}&=&
\left\langle\displaystyle\frac{d\overline{\alpha}}{ds},I_{3}\right\rangle =
-\left\langle\overline{\alpha},\displaystyle\frac{dI_{3}}{ds}\right\rangle,\\\tau_{g} &=&
\left\langle\overline{\alpha}, I_{3}, \displaystyle\frac{dI_{3}}{ds}\right\rangle.
\end{eqnarray}
By using the expressions (3.1.4), Ch. 3 we
obtain
\begin{equation}
\kappa_{g} = \kappa \sin \varphi^{*},\; \kappa_{n}=\kappa\cos \varphi^{*},\; \tau_{g} =
\tau+\displaystyle\frac{d\varphi^{*}}{ds}
\end{equation}
with $\varphi^{*} = \sphericalangle (\overline{\alpha}_2,I_3)$, $\overline{\alpha}_{2}$ being the versor of
principal normal of curve $C$ at point $P$.
A theorem of Meusnier can be formulated as in section 3.1, Chapter
3.
The curve $C$ is a {\it geodesic} (or {\it autoparallel}) {\it line} of the
nonholonomic manifold $E_{3}^{2}$ if $\kappa_{g}(s) = 0$, $\forall
s\in (s_{1},s_{2})$.
\begin{theorem}
Along a geodesic $C$ of the nonholonomic manifold $E_{3}^{2}$ the
normal curvature $\kappa_{n}$ is equal to $\pm \kappa$ and geodesic
torsion $\tau_{g}$ is equal to the torsion $\tau$ of $C$.
The differential equations of geodesics are as follows
\begin{equation}
\kappa_{g} \equiv
\displaystyle\frac{\omega_{1}}{ds}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\omega_{2}}{ds}\right)
+ r_{1} \right] -
\displaystyle\frac{\omega_{2}}{ds}\left[\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\omega_{1}}{ds}\right)
- r_{2} \right]=0.
\end{equation}
\end{theorem}
These equations are equivalent to the system of differential equations
\begin{eqnarray}
\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\omega_{1}}{ds}\right) -r_{2}(s)&=&
h(s)\displaystyle\frac{\omega_{1}}{ds},\\\displaystyle\frac{d}{ds}\left(\displaystyle\frac{\omega_{2}}{ds}\right)
+ r_{1}(s) &=& h(s)\displaystyle\frac{\omega_{2}}{ds},
\end{eqnarray}
where $h(s)$ is some smooth function.
If we consider $\sigma = \sphericalangle (I_{3}, \overline{\nu}_{3})$,
where $\overline{\nu}_{3}$ is the binormal versor of the
field $(C,I_{3})$ and $\chi_{2}$ is the torsion of $(C, I_{3})$,
then $\kappa_{g}(s)$, the geodesic curvature of the field
$(C,\overline{\alpha})$, is obtained by the formula of Gh. Gheorghiev:
\begin{equation}
\kappa_{g} = \displaystyle\frac{d\sigma}{ds}+\chi_{2}.
\end{equation}
It follows that the geodesic lines of the nonholonomic manifold
$E_{3}^{2}$ are characterized by the differential equation:
\begin{equation}
\displaystyle\frac{d\sigma}{ds} + \chi_{2}(s) = 0. \end{equation}
By means of this equation we can prove a theorem of existence and
uniqueness of geodesics on the nonholonomic manifold $E_{3}^{2}$,
when $\sigma_{0} = \sphericalangle (I_{3}(s_{0}),
\overline{\nu}_{3}(s_{0}))$ is a priori given.
\section{The fundamental forms of $E_{3}^{2}$}
\setcounter{theorem}{0}\setcounter{equation}{0}
The first, second and third fundamental forms $\phi, \psi$ and
$\chi$ of the nonholonomic manifold $E_{3}^{2}$ are defined as in
the theory of surfaces in the Euclidean space $E_{3}.$
The tangent vector $d\overline{r}$ to a curve $C$ at point $P\in
C$ in $E_{3}^{2}$ is given by:
\begin{equation}
d\overline{r} = \omega_{1}(s)I_{1} + \omega_{2}(s)I_{2},\ \omega_3=0.
\end{equation}
The first fundamental form of $E_{3}^{2}$ at point $P\in
E_{3}^{2}$ is defined by:
\begin{equation}
\phi = \langle d\overline{r},d\overline{r}\rangle =
(\omega_{1}(s))^{2} + (\omega_{2}(s))^{2},\ \omega_3=0.
\end{equation}
Clearly, $\phi$ has a geometric meaning and it is a positive definite
quadratic form.
The arclength of $C$ is determined by the formula (5.2.10) and the
arclength element $ds$ is given by:
\begin{equation}
ds^{2} = \phi = (\omega_{1})^{2} + (\omega_{2})^{2},\ \omega_3=0.
\end{equation}
The angle of two tangent vectors $d\overline{r}$ and $\delta
\overline{r} =\widetilde{\omega}_{1} I_{1} + \widetilde{\omega}_{2}I_{2}$
is expressed by
\begin{equation}
\cos \theta = \displaystyle\frac{\langle \delta \overline{r}, d\overline{r}\rangle}{\delta
sds} = \displaystyle\frac{\widetilde{\omega}_{1}\omega_{1} +
\widetilde{\omega}_{2}\omega_{2}}{\sqrt{(\omega_{1})^{2}+(\omega_{2})^{2}}\sqrt{(\widetilde{\omega}_{1})^{2}
+ (\widetilde{\omega}_{2})^{2} }}.
\end{equation}
The second fundamental form $\psi$ of $E_{3}^{2}$ at the point $P$ is
defined by
\begin{equation}
\psi =-\langle d\overline{r}, dI_{3}\rangle = p\omega_{2} - q \omega_{1} =
p_{2}\omega_{2}^{2} +(p_{1} - q_{2})\omega_{1}\omega_{2} - q_{1}\omega_{1}^{2}.
\end{equation}
The form $\psi$ has a geometric meaning. It is not symmetric.
The third fundamental form $\chi$ of $E_{3}^{2}$ at point $P$ is
defined by
\begin{equation}
\chi = \langle d\overline{r},\overline{\theta}\rangle,\ \omega_3=0,
\end{equation}
where $\overline{\theta}$ is given by (5.1.7).
As a consequence, we have
\begin{equation}
\chi = p\omega_{1}+q\omega_{2} = p_{1}\omega_{1}^{2}
+(p_{2}+q_{1})\omega_{1}\omega_{2} + q_{2}\omega_{2}^{2}
\end{equation}
$\chi$ has a geometrical meaning and it is not symmetric.
One can introduce a fourth fundamental form of $E_{3}^{2}$, by
$$
\Theta = \langle dI_{3}, dI_{3}\rangle = p^{2} + q^{2}\;\; (\mathrm{mod}\;
\omega_{3}).
$$
But $\Theta$ depends linearly on the forms $\phi, \psi, \chi$.
Indeed, comparing the coefficients of $\omega_{1}^{2}$, $\omega_{1}\omega_{2}$,
$\omega_{2}^{2}$, we have
$$
\Theta = H\psi - K_{t}\phi + T_{m}\chi,
$$
where $H$ is the mean curvature, $T_{m}$ is the mean torsion and $K_{t}$
is the total curvature of $E_{3}^{2}$ at the point $P.$ The expression of
$K_{t}$ is
\begin{equation}
K_{t} = p_{1}q_{2} - p_{2}q_{1}.
\end{equation}
The formulae (5.5.6) and (5.5.7) and the fundamental forms $\phi, \psi, \chi$
allow us to express the normal curvature and the geodesic torsion of a
curve $C$ at a point $P\in C$ as follows
\begin{equation}
\kappa_{n} = \displaystyle\frac{\psi}{\phi} = \displaystyle\frac{p\omega_{2} - q\omega_{1}}{\omega_{1}^{2}
+\omega_{2}^{2}} = \displaystyle\frac{p_{2}\omega_{2}^{2} + (p_{1} - q_{2})\omega_{1}\omega_{2}
- q_{1}\omega_{1}^{2}}{\omega_{1}^{2} + \omega_{2}^{2}}
\end{equation}
and
\begin{equation}
\tau_{g} = \displaystyle\frac{\chi}{\phi} = \displaystyle\frac{p\omega_{1} +q\omega_{2}}{\omega_{1}^{2}
+ \omega_{2}^{2}} = \displaystyle\frac{p_{1}\omega_{1}^{2} + (p_{2} +q_{1})\omega_{1}\omega_{2}
+ q_{2}\omega_{2}^{2}}{\omega_{1}^{2} + \omega_{2}^{2}}.
\end{equation}
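Indeed, (5.6.9) and (5.6.10) follow from (5.5.6), (5.5.7) and the definitions (5.6.2), (5.6.5), (5.6.6):

```latex
$$
\kappa_{n} = -\left\langle \overline{\alpha}, \frac{dI_{3}}{ds}\right\rangle
= -\frac{\langle d\overline{r}, dI_{3}\rangle}{ds^{2}} = \frac{\psi}{\phi},
\qquad
\tau_{g} = \left\langle \overline{\alpha}, I_{3}, \frac{dI_{3}}{ds}\right\rangle
= \frac{\langle d\overline{r}, \overline{\theta}\,\rangle}{ds^{2}} = \frac{\chi}{\phi}.
$$
```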
The cases when $\kappa_{n} = 0$ and $\tau_{g} =0$ are important. They
will be investigated in the next section.
\section{Asymptotic lines. Curvature lines}
\setcounter{theorem}{0}\setcounter{equation}{0}
An asymptotic tangent direction $d\overline{r}$ to $E^2_3$ at a point $P\in
E_{3}^{2}$ is defined by the property $\kappa_{n}(s) = 0.$ A curve $C$
in $E_{3}^{2}$ whose tangent directions $d\overline{r}$
are asymptotic directions is called an {\it asymptotic line} of the
nonholonomic manifold $E_{3}^{2}$.
The asymptotic directions are characterized by the following
equation of degree 2:
\begin{equation}
p_{2}\omega_{2}^{2} + (p_{1} - q_{2})\omega_{1}\omega_{2} - q_{1}\omega_{1}^{2}
=0.
\end{equation}
The nature of its roots is governed by the invariant
\begin{equation}
K_{g} = K_{t} - \displaystyle\frac{1}{4}T_{m}^{2},
\end{equation}
called the {\it Gaussian curvature} of $E_{3}^{2}$ at the point $P\in
E_{3}^{2}$.
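Indeed, dividing (5.7.1) by $\omega_{1}^{2}$ one obtains a quadratic equation in the slope $\omega_{2}/\omega_{1}$, whose discriminant is

```latex
$$
(p_{1}-q_{2})^{2} + 4p_{2}q_{1}
= (p_{1}+q_{2})^{2} - 4(p_{1}q_{2} - p_{2}q_{1})
= T_{m}^{2} - 4K_{t} = -4K_{g},
$$
```

so the sign of $K_{g}$ decides the nature of the two asymptotic directions.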
We have
\begin{theorem}
At every point $P\in E_{3}^{2}$ there are two asymptotic
directions:
- real, if $K_{g}<0$;

- imaginary, if $K_{g}>0$;

- coincident, if $K_{g}=0$.
\end{theorem}
The point $P\in E_{3}^{2}$ is called {\it planar} if the
asymptotic directions of $E_{3}^{2}$ at $P$ are indeterminate.
A planar point is characterized by the equations
\begin{equation}
p_{1} - q_{2} = 0,\;\; p_{2} = q_{1} = 0.
\end{equation}
The point $P\in E_{3}^{2}$ is called {\it elliptic} if the
asymptotic directions of $E_{3}^{2}$ at $P$ are imaginary, and $P$
is a {\it hyperbolic} point if the asymptotic directions of
$E_{3}^{2}$ at $P$ are real.

Of course, if $P$ is a hyperbolic point of $E_{3}^{2}$, then there
exist two asymptotic lines through the point $P$, tangent to the
asymptotic directions, solutions of equation (5.7.1).
A curvature direction $d\overline{r}$ at a point $P\in E_{3}^{2}$ is
defined by the property $\tau_{g}(s) = 0.$ A curve $C$ in
$E_{3}^{2}$ whose tangent directions $d\overline{r}$ are
curvature directions is called a {\it curvature line} of the
nonholonomic manifold $E_{3}^{2}$.
The curvature directions are characterized by the following
equation of second degree:
\begin{equation}
p_{1}\omega_{1}^{2} + (p_{2}+q_{1})\omega_{1}\omega_{2} + q_{2}\omega_{2}^{2} =0.
\end{equation}
The nature of its roots is governed by the invariant
\begin{equation}
T_{t} = K_{t} - \displaystyle\frac{1}{4}H^{2}.
\end{equation}
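As for the asymptotic directions, dividing (5.7.4) by $\omega_{1}^{2}$ gives a quadratic equation in $\omega_{2}/\omega_{1}$, whose discriminant is (recall $H = p_{2} - q_{1}$)

```latex
$$
(p_{2}+q_{1})^{2} - 4p_{1}q_{2}
= (p_{2}-q_{1})^{2} - 4(p_{1}q_{2} - p_{2}q_{1})
= H^{2} - 4K_{t} = -4T_{t},
$$
```

so the sign of $T_{t}$ decides the nature of the two curvature directions.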
We have
\begin{theorem}
At every point $P\in E_{3}^{2}$ there are two curvature directions:

- real, if $T_{t}<0$;

- imaginary, if $T_{t} >0$;

- coincident, if $T_{t} = 0$.
\end{theorem}
The curvature lines on $E_{3}^{2}$ are obtained by integrating
equation (5.7.4) in the case $T_{t}\leq 0$.
\begin{remark}
In the case of surfaces $(T_{m}=0)$, the curvature lines coincide
with the lines of extremal normal curvature.
\end{remark}
\section{The extremal values of $\kappa_{n}$. Euler formula. Dupin indicatrix}
\setcounter{theorem}{0}\setcounter{equation}{0}
The extremal values at a point $P\in E_{3}^{2}$, of the normal
curvature $\kappa_{n} = \displaystyle\frac{\psi}{\phi}$ when $(\omega_{1},\omega_{2})$ are
variables are given by $\phi \displaystyle\frac{\partial \psi}{\partial \omega_{i}} -
\psi\displaystyle\frac{\partial\phi}{\partial \omega_{i}} = 0,$ $(i=1,2)$ which are equivalent
to the equations
\begin{equation}
\displaystyle\frac{\partial\psi}{\partial \omega_{i}} - \kappa_{n}\displaystyle\frac{\partial\phi}{\partial\omega_{i}} = 0,\;\; (i=1,2).
\end{equation}
Taking into account the form (5.6.5) of $\psi$ and (5.6.3) of $\phi$,
the system (5.8.1) can be written:
\begin{eqnarray}
-(q_{1} + \kappa_{n})\omega_{1} + \displaystyle\frac{1}{2}(p_{1}
-q_{2})\omega_{2}&=&0\\\displaystyle\frac{1}{2}(p_{1} - q_{2})\omega_{1}
+(p_{2}-\kappa_{n})\omega_{2} &=& 0.
\end{eqnarray}
It is a linear homogeneous system in $(\omega_{1},\omega_{2})$, which
gives the directions $(\omega_{1},\omega_{2})$ of extremal values of the
normal curvature.
But the previous system has nontrivial solutions if and only if the following equation holds
$$
\left|\begin{array}{cc} -(q_{1} +\kappa_{n})&\displaystyle\frac{1}{2}(p_{1} -
q_{2})\\\displaystyle\frac{1}{2}(p_{1}-q_{2})&p_{2} - \kappa_{n}
\end{array}\right|=0
$$
equivalent to the following equation of second order in $\kappa_{n}$:
\begin{equation}
\kappa_{n}^{2} - (p_{2} - q_{1})\kappa_{n} +p_{1}q_{2} - p_{2}q_{1} -
\displaystyle\frac{1}{4}(p_{1} + q_{2})^{2} = 0.
\end{equation}
This equation has two real solutions $\displaystyle\frac{1}{R_{1}}$ and
$\displaystyle\frac{1}{R_{2}}$ because its discriminant
\begin{equation}
\Delta = (p_{2} +q_{1})^{2} + (p_{1} -q_{2})^{2}
\end{equation}
satisfies $\Delta\geq 0.$ Therefore the solutions
$\displaystyle\frac{1}{R_{1}},\displaystyle\frac{1}{R_{2}}$ are real, distinct or equal;
they are the extremal values of the normal curvature $\kappa_{n}$.
Let us denote \begin{equation} H = \displaystyle\frac{1}{R_{1}}+\displaystyle\frac{1}{R_{2}}
=p_{2} - q_{1}
\end{equation}
called the mean curvature of the nonholonomic manifold $E_{3}^{2}$
at point $P$, and
\begin{equation}
K_{g} = \displaystyle\frac{1}{R_{1}}\displaystyle\frac{1}{R_{2}} = p_{1}q_{2} - p_{2}q_{1} -
\displaystyle\frac{1}{4}(p_{1} +q_{2})^{2}
\end{equation}
called the Gaussian curvature of $E_{3}^{2}$ at point $P$.
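Both invariants can be read off (5.8.4), whose roots are $\frac{1}{R_{1}}, \frac{1}{R_{2}}$: by Vieta's formulas, $H$ is the sum of the roots and $K_{g}$ their product, the constant term of (5.8.4); the identity $p_{1}q_{2}-\frac{1}{4}(p_{1}+q_{2})^{2} = -\frac{1}{4}(p_{1}-q_{2})^{2}$ gives an equivalent form:

```latex
H = \frac{1}{R_{1}} + \frac{1}{R_{2}} = p_{2} - q_{1}, \qquad
K_{g} = \frac{1}{R_{1}}\,\frac{1}{R_{2}}
      = p_{1}q_{2} - p_{2}q_{1} - \frac{1}{4}(p_{1}+q_{2})^{2}
      = -\,p_{2}q_{1} - \frac{1}{4}(p_{1}-q_{2})^{2}.
```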
Substituting $\displaystyle\frac{1}{R_{1}},$ $\displaystyle\frac{1}{R_{2}}$ in the equations
(5.8.2) we have the directions of extremal values of the normal
curvature, called the {\it principal directions} of $E_{3}^{2}$ at
point $P\in E_{3}^{2}$. These directions are obtained from (5.8.2)
for $\kappa_{n} = \displaystyle\frac{1}{R_{1}}$, $\kappa_{n}=\displaystyle\frac{1}{R_{2}}$. Thus,
one obtains the following equations which determine the principal
directions:
\begin{equation}
(p_{1} - q_{2})\omega_{1}^{2} +2(p_{2}+q_{1})\omega_{1}\omega_{2} - (p_{1} -
q_{2})\omega_{2}^{2}=0.
\end{equation}
Let $(\omega_{1}, \omega_{2}), (\widetilde{\omega}_{1},\widetilde{\omega}_{2})$ be
the solutions of (5.8.8). Then $d\overline{r} = \omega_{1}I_{1} +
\omega_{2}I_{2}$ and $\delta \overline{r} = \widetilde{\omega}_{1}I_{1} +
\widetilde{\omega}_{2}I_{2}$ are the principal directions on
$E_{3}^{2}$ at point $P.$
The principal directions $d\overline{r},\delta\overline{r}$ of $E_{3}^{2}$ in
every point $P$ are real and orthogonal.
Indeed, since $\Delta\geq 0$, if $\displaystyle\frac{1}{R_{1}}\neq
\displaystyle\frac{1}{R_{2}}$, then $\langle \delta \overline{r},
d\overline{r}\rangle = 0$.
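The orthogonality can also be checked on (5.8.8) itself: setting $t = \omega_{2}/\omega_{1}$, the two principal directions correspond to the roots of a quadratic whose product of roots equals $-1$ (for $p_{1}\neq q_{2}$):

```latex
(p_{1}-q_{2})t^{2} - 2(p_{2}+q_{1})t - (p_{1}-q_{2}) = 0,
\qquad t_{1}t_{2} = -1,
```

so that $\langle d\overline{r},\delta\overline{r}\rangle = \omega_{1}\widetilde{\omega}_{1}(1 + t_{1}t_{2}) = 0.$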
The curves on $E_{3}^{2}$ tangent to $\delta \overline{r}$ and
$d\overline{r}$ are called the {\it lines of extremal normal curvature}.
So, we have
\begin{theorem}
At every point $P\in E_{3}^{2}$ there are two real and orthogonal
lines of extremal normal curvature.\end{theorem}
Assuming that the frame $\mathcal{R} = (P; I_{1}, I_{2},I_{3})$
has the vectors $I_{1}, I_{2}$ in the principal directions $\delta
\overline{r}, d\overline{r}$, respectively, the equation (5.8.8)
implies
\begin{equation}
p_{1} - q_{2} = 0
\end{equation}
and the extremal values of normal curvature $\kappa_{n}$ are given by
\begin{equation}
\kappa_{n}^{2} - (p_{2}-q_{1})\kappa_{n}-p_{2}q_{1} =0.
\end{equation}
We have
\begin{equation}
\displaystyle\frac{1}{R_{1}} = p_{2},\;\;\displaystyle\frac{1}{R_{2}} = -q_{1}.
\end{equation}
Denoting by $\alpha$ the angle between the versor $I_{1}$ and the
versor $\overline{\xi}$ of an arbitrary direction at $P$, tangent
to $E_{3}^{2}$ we can write the normal curvature $\kappa_{n}$ from
(5.6.8) in the form
\begin{equation}
\kappa_{n} = \displaystyle\frac{1}{R_{1}}\cos^{2}\alpha +
\displaystyle\frac{1}{R_{2}}\sin^{2}\alpha,\;\; \alpha =
\sphericalangle(\overline{\xi},I_{1}).
\end{equation}
The formula (5.8.12) is called the {\it Euler formula} for normal
curvatures on the nonholonomic manifold $E_{3}^{2}$.
Consider the tangent vector $\overrightarrow{PQ} =
|\kappa_{n}|^{-1/2}\overline{\xi}$, i.e.
$$
\overrightarrow{PQ} = |\kappa_{n}|^{-\displaystyle\frac{1}{2}}(\cos \alpha I_{1} +\sin
\alpha I_{2} ).
$$
The Cartesian coordinates $(x,y)$ of the point $Q$, with respect to
the frame $(P; I_{1}, I_{2})$, are given by
\begin{equation}
x = |\kappa_{n}|^{-\displaystyle\frac{1}{2}}\cos \alpha, \;\; y =
|\kappa_{n}|^{-\displaystyle\frac{1}{2}}\sin \alpha.
\end{equation}
Thus, eliminating the parameter $\alpha$ from (5.8.12) and (5.8.13), one
gets the geometric locus of the point $Q$ (in the tangent plane
$(P;I_{1},I_{2})$):
\begin{equation}
\displaystyle\frac{x^{2}}{R_{1}} + \displaystyle\frac{y^{2}}{R_{2}} = \pm 1
\end{equation}
called {\it the Dupin indicatrix} of the normal curvature of
$E_{3}^{2}$ at point $P$.
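The elimination of $\alpha$ is immediate: substituting (5.8.13) into the Euler formula (5.8.12),

```latex
\frac{x^{2}}{R_{1}} + \frac{y^{2}}{R_{2}}
 = |\kappa_{n}|^{-1}\left(\frac{\cos^{2}\alpha}{R_{1}}
 + \frac{\sin^{2}\alpha}{R_{2}}\right)
 = \frac{\kappa_{n}}{|\kappa_{n}|} = \pm 1.
```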
It is formed by a pair of conics, having the axes in the principal
directions $\delta \overline{r}, d \overline{r}$ and the invariants:
$H$ - the mean curvature and $K_{g}$ - the Gaussian curvature of
$E_{3}^{2}$ at $P$.
The Dupin indicatrix is formed by two ellipses, one real and
another imaginary, if $K_{g}>0.$ It is formed by a pair of
conjugate hyperbolas if $K_{g}<0$, whose asymptotes are
tangent to the asymptotic lines of $E_{3}^{2}$ at point $P$. It
follows that the asymptotic directions of $E_{3}^{2}$ at $P$ are
symmetric with respect to the principal directions of $E_{3}^{2}$
at $P.$
In the case $K_{g} = 0$, the Dupin indicatrix of $E_{3}^{2}$ at
$P$ is formed by a pair of parallel straight lines - one real and
another imaginary.
\section{The extremal values of $\tau_{g}$. Bonnet formula. Bonnet indicatrix}
\setcounter{theorem}{0}\setcounter{equation}{0}
The extremal values of the geodesic torsion $\tau_{g} =
\displaystyle\frac{\chi}{\phi}$ at the point $P$, on $E_{3}^{2}$ can be
obtained following a similar way as in the previous paragraph.
Thus, the extremal values of $\tau_{g}$ are given by the
system of equations
$$
\displaystyle\frac{\partial \chi}{\partial \omega_{i}} - \tau_{g}\displaystyle\frac{\partial \phi}{\partial \omega_{i}} = 0,
(i=1,2).
$$
Or, taking into account the expressions of $\phi = (\omega_{1})^{2}
+ (\omega_{2})^{2}$ and $\chi = p\omega_{1} + q\omega_{2} = (p_{1}\omega_{1}
+p_{2}\omega_{2})\omega_{1} +(q_{1}\omega_{1} + q_{2}\omega_{2})\omega_{2}$, we have:
\begin{eqnarray}
(p_{1}-\tau_{g})\omega_{1} + \displaystyle\frac{1}{2}(p_{2}+q_{1})\omega_{2} =
0\\\displaystyle\frac{1}{2}(p_{2} +q_{1})\omega_{1} + (q_{2}-\tau_{g})\omega_{2} =
0.\nonumber
\end{eqnarray}
But, these imply
\begin{equation}
\left|
\begin{array}{ccc}
p_{1} - \tau_{g}& \displaystyle\frac{1}{2}(p_{2}
+q_{1})\\\displaystyle\frac{1}{2}(p_{2}+q_{1})&q_{2} - \tau_{g}
\end{array}\right|=0
\end{equation}
or expanded:
\begin{equation}
\tau_{g}^{2} - (p_{1}+q_{2})\tau_{g} +(p_{1}q_{2} - p_{2}q_{1} -
\displaystyle\frac{1}{4}(p_{2} - q_{1})^{2}) = 0.
\end{equation}
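Expanding the determinant (5.9.2) and using the identity $p_{1}q_{2}-\frac{1}{4}(p_{2}+q_{1})^{2} = p_{1}q_{2}-p_{2}q_{1}-\frac{1}{4}(p_{2}-q_{1})^{2}$, one indeed recovers (5.9.3):

```latex
(p_{1}-\tau_{g})(q_{2}-\tau_{g}) - \frac{1}{4}(p_{2}+q_{1})^{2}
 = \tau_{g}^{2} - (p_{1}+q_{2})\tau_{g}
 + p_{1}q_{2} - \frac{1}{4}(p_{2}+q_{1})^{2} = 0.
```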
The solutions $\displaystyle\frac{1}{T_{1}}, \displaystyle\frac{1}{T_{2}}$ of this equation
are the extremal values of the geodesic torsion $\tau_{g}$. They are
real, since the discriminant of (5.9.3) is
\begin{equation}
\Delta_{1} = (p_{1} - q_{2})^{2} +(p_{2}+q_{1})^{2}
\end{equation}
Therefore $\displaystyle\frac{1}{T_{1}},$ $\displaystyle\frac{1}{T_{2}}$ are real, distinct
or coincident.
We denote
\begin{equation}
T_{m} = \displaystyle\frac{1}{T_{1}} + \displaystyle\frac{1}{T_{2}} = p_{1} +q_{2},
\end{equation}
{\it the mean torsion} of $E_{3}^{2}$ at point $P$, and
\begin{equation}
T_{t} = \displaystyle\frac{1}{T_{1}}\displaystyle\frac{1}{T_{2}} = p_{1}q_{2} - p_{2}q_{1} -
\displaystyle\frac{1}{4}(p_{2} -q_{1})^{2} =K_{g} - \displaystyle\frac{1}{4}H^{2}
\end{equation}
is called {\it the total torsion} of $E_{3}^{2}$ at $P.$
As we know, the condition of integrability of the Pfaff
equation $\omega_{3}=0$ is
$$
d\omega_{3} = 0,\; \mbox{modulo}\; \omega_{3},
$$
which is equivalent to $T_{m} = p_{1} +q_{2} = 0.$
\begin{theorem}
The nonholonomic manifold $E_{3}^{2}$ of mean torsion $T_{m}$
equal to zero is a family of surfaces in $E_{3}$.
\end{theorem}
The directions $\delta \overline{r} = \widetilde{\omega}_{1}I_{1} +
\widetilde{\omega}_{2}I_{2}$ and $d\overline{r} = \omega_{1}I_{1} +
\omega_{2}I_{2}$ of extremal geodesic torsion, corresponding to the
extremal values $\displaystyle\frac{1}{T_{1}}$ and $\displaystyle\frac{1}{T_{2}}$ of
$\tau_{g}$, are given by the equation obtained from (5.9.1) by
eliminating $\tau_{g}$, i.e.
\begin{equation}
(p_{2} + q_{1})\omega_{1}^{2} - 2(p_{1}-q_{2})\omega_{1}\omega_{2} -
(p_{2}+q_{1})\omega_{2}^{2} = 0.
\end{equation}
Thus $\delta \overline{r}$ and $d\overline{r}$ are real and orthogonal
at every point $P\in E_{3}^{2}$ for which $\displaystyle\frac{1}{T_{1}}\neq
\displaystyle\frac{1}{T_{2}}$. $\delta \overline{r}$ and $d \overline{r}$ are
called the direction of the {\it extremal geodesic torsion} at
point $P\in E_{3}^{2}.$
\begin{theorem}
At every point $P\in E_{3}^{2}$, where $\displaystyle\frac{1}{T_{1}}\neq
\displaystyle\frac{1}{T_{2}}$ there exist two real and orthogonal directions of
extremal geodesic torsion.
\end{theorem}
Now, assuming that, at point $P$, the frame $\mathcal{R} = (P;I_{1},
I_{2}, I_{3})$ has the vectors $I_{1}$ and $I_{2}$ in the
directions $\delta \overline{r}$, $d\overline{r}$ of the extremal
geodesic torsion, respectively, from (5.9.7) we deduce
\begin{equation}
p_{2} + q_{1} =0.
\end{equation}
Thus, the equation (5.9.3) takes the form
\begin{equation}
\tau_{g}^{2} - (p_{1}+q_{2})\tau_{g} +p_{1}q_{2} = 0.
\end{equation}
Its solutions are
\begin{equation}
\displaystyle\frac{1}{T_{1}} = p_{1},\;\; \displaystyle\frac{1}{T_{2}} = q_{2}
\end{equation}
and, setting $\beta = \sphericalangle (\overline{\xi}, I_{1})$, from
$\tau_{g} = \displaystyle\frac{\chi}{\phi}$ one obtains [65] the so called {\it
Bonnet formula} giving $\tau_{g}$ for the nonholonomic manifold
$E_{3}^{2}$:
\begin{equation}
\tau_{g}=\displaystyle\frac{\cos^{2}\beta}{T_{1}}+\displaystyle\frac{\sin^{2}\beta}{T_{2}},\; \beta =
\sphericalangle (\overline{\xi},I_{1}).
\end{equation}
If $T_{m} = 0,$ (5.9.11) reduces to the well-known formula from the
theory of surfaces:
\begin{equation}
\tau_{g} = \displaystyle\frac{1}{T_{1}}\cos 2\beta.
\end{equation}
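Indeed, $T_{m} = 0$ means $\displaystyle\frac{1}{T_{2}} = -\displaystyle\frac{1}{T_{1}}$, and (5.9.11) becomes

```latex
\tau_{g} = \frac{\cos^{2}\beta}{T_{1}} - \frac{\sin^{2}\beta}{T_{1}}
         = \frac{1}{T_{1}}\cos 2\beta.
```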
By means of the formula (5.9.11) we can determine an indicatrix of
geodesic torsions.
In the tangent plane at point $P$ we take the cartesian frame $(P;
I_{1}, I_{2})$, $I_{1}$ having the direction of extremal geodesic
torsion $\delta \overline{r}$ corresponding to $\displaystyle\frac{1}{T_{1}}$ and
$I_{2}$ having the same direction with $d\overline{r}$
corresponding to $\displaystyle\frac{1}{T_{2}}$. Consider the point $Q^{\prime}
\in \pi(s)$ given by
\begin{equation}
\overrightarrow{PQ^{\prime}} = |\tau_{g}|^{-1/2}\overline{\xi} =
|\tau_{g}|^{-1/2}(\cos \beta I_{1} + \sin \beta I_{2})
\end{equation}
with $\beta = \sphericalangle (\overline{\xi},I_{1})$. The Cartesian
coordinates $(x,y)$ of the point $Q^{\prime}$:
$\overrightarrow{PQ^{\prime}} = xI_{1} +y I_{2}$, by means of
(5.9.13), give us
\begin{equation}
x = |\tau_{g}|^{-1/2}\cos\beta,\;\; y = |\tau_{g}|^{-1/2}\sin \beta.
\end{equation}
Eliminating the parameter $\beta$ from (5.9.14) and (5.9.11) one obtains
the locus of point $Q^{\prime}$:
\begin{equation}
\displaystyle\frac{x^{2}}{T_{1}} + \displaystyle\frac{y^{2}}{T_{2}} = \pm 1,
\end{equation}
called the {\it indicatrix of Bonnet} for the geodesic torsions at
point $P\in E_{3}^{2}$.
It consists of two conics having as axes the tangents in the
directions of extremal values of the geodesic torsion. The invariants
of these conics are $T_{m}$ - the mean torsion and $T_{t}$ - the total
torsion of $E_{3}^{2}$ at point $P.$
The Bonnet indicatrix is formed by two ellipses, one real and
another imaginary, if and only if the total torsion $T_{t}$ is positive. It
is composed of two conjugate hyperbolas if $T_{t}<0;$ in this
case the curvature lines of $E_{3}^{2}$ at $P$ are tangent to the
asymptotes of these conjugate hyperbolas. Finally, if $T_{t} =
0,$ the Bonnet indicatrix (5.9.15) is formed by two pairs of parallel
straight lines, one real and another imaginary.
\noindent {\bf Remark 5.9.1}
1. If $T_{t}<0,$ the directions of the asymptotes of the hyperbolas
$(5.9.15)$ are symmetric with respect to the directions of the extremal
geodesic torsion.
2. The directions of extremal geodesic torsion are the bisectors
of the principal directions.
3. In the case of surfaces, $T_{m} =0,$ the Bonnet indicatrix is
formed by two conjugate equilateral hyperbolas.
Assuming that both the Dupin indicatrix and the Bonnet indicatrix are
formed by two conjugate hyperbolas, we present here only one hyperbola
from each of these indicatrices (Fig. 1):
\newpage
\includegraphics{indicatrices.eps}
\centerline{\footnotesize{Fig. 1}}
\section{The circle of normal curvatures and geodesic torsions}
\setcounter{theorem}{0}\setcounter{equation}{0}
Consider a curve $C\subset E_{3}^{2}$ and its tangent versor
$\overline{\alpha}$ at point $P\in C.$ Denoting by $\alpha =
\sphericalangle (\overline{\alpha},I_{1})$ and using the formulae
(5.3.8) we obtain \begin{eqnarray} \kappa_{n}&=& p_{2}\sin^{2}\alpha +
(p_{1} - q_{2})\sin \alpha \cos \alpha -
q_{1}\cos^{2}\alpha\nonumber\\\tau_{g}&=& p_{1}\cos^{2}\alpha +(p_{2} +
q_{1})\sin \alpha \cos \alpha +q_{2}\sin^{2}\alpha.
\end{eqnarray}
Eliminating the parameter $\alpha$ from (5.10.1) we deduce
\begin{equation}
\kappa_{n}^{2} +\tau_{g}^{2} -H \kappa_{n} - T_{m}\tau_{g} + K_{g} = 0
\end{equation}
where $H = p_{2}-q_{1}$ is the mean curvature of $E_{3}^{2}$ at
point $P$, $T_{m} = p_{1} + q_{2}$ is the mean torsion of
$E_{3}^{2}$ at $P$ and $K_{g} = p_{1}q_{2} - p_{2}q_{1}$ is the
Gaussian curvature of $E_{3}^{2}$ at point $P$. Therefore we have
\begin{theorem}
1. The normal curvature $\kappa_{n}$ and the geodesic torsion
$\tau_{g}$ at a point $P\in E_{3}^{2}$ satisfy the equation
$(5.10.2)$.
2. In the plane of variables $(\kappa_{n}, \tau_{g})$, the equation
$(5.10.2)$ represents a circle with the center $\left(\displaystyle\frac{H}{2},
\displaystyle\frac{T_{m}}{2}\right)$ and the radius given by
\begin{equation}
R^{2} = \displaystyle\frac{1}{4}(H^{2}+T_{m}^{2})-K_{g}.
\end{equation}
\end{theorem}
Evidently, this circle is real if we have
$$
K_{g} <\displaystyle\frac{1}{4}(H^{2} + T_{m}^{2}).
$$
It is a complex circle if $K_{g}>\displaystyle\frac{1}{4}(H^{2} + T_m^{2})$.
This circle, for the nonholonomic manifolds $E_{3}^{2}$ was
introduced by the author [62]. It has been studied by Izu Vaisman
in [41], [42].
In the case of surfaces, $T_{m} = 0$ and (5.10.3) gives us
$$
R^{2} = \displaystyle\frac{1}{4}H^{2} - K_{g} = \displaystyle\frac{1}{4}\(\displaystyle\frac{1}{R_{1}} +
\displaystyle\frac{1}{R_{2}}\)^{2} -\displaystyle\frac{1}{R_{1}R_{2}}
=\displaystyle\frac{1}{4}\(\displaystyle\frac{1}{R_{1}} - \displaystyle\frac{1}{R_{2}}\)^{2}.
$$
So, for surfaces the circle of normal curvatures and geodesic torsions is real.
If $T_{m} = 0$, the total torsion is of the form
\begin{equation}
T_{t} = -\displaystyle\frac{1}{4}\left(\displaystyle\frac{1}{R_{1}} -
\displaystyle\frac{1}{R_{2}}\right)^{2}.
\end{equation}
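This expression follows from (5.9.6): for $T_{m} = 0$ one has $T_{t} = K_{g} - \frac{1}{4}H^{2}$, hence

```latex
T_{t} = \frac{1}{R_{1}R_{2}}
      - \frac{1}{4}\left(\frac{1}{R_{1}} + \frac{1}{R_{2}}\right)^{2}
      = -\frac{1}{4}\left(\frac{1}{R_{1}} - \frac{1}{R_{2}}\right)^{2}.
```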
This formula was established by Alexandru Myller [35]. He proved
that $T_{t}$ is just ``the curvature of Sophie Germain and Emanuel
Bacaloglu'' ([35]).
\section{The nonholonomic plane and sphere}
\setcounter{theorem}{0}\setcounter{equation}{0}
The notions of nonholonomic plane and nonholonomic sphere have
been defined by Gr. Moisil, who gave the Pfaff equations providing the
existence of these manifolds; R. Miron [64] studied them. Gh.
Vr\u{a}nceanu, in the book [44], has studied the notion of
nonholonomic quadrics.
The Pfaff equation of the nonholonomic plane $\Pi_{3}^{2}$ given
by Gr. Moisil [30], [71] is as follows
\begin{equation}
\hspace*{14mm}\omega\equiv (q_{0}z - r_{0}y +a)dx +(r_{0}x - p_{0}z +b)dy +(p_{0}y -
q_{0}x +c)dz = 0,
\end{equation}
where $p_{0}, q_{0}, r_{0}, a,b,c$ are real constants verifying
the nonholonomy condition: $ap_{0} + bq_{0} + cr_{0}\neq 0$.
The nonholonomic sphere $\Sigma_{3}^{2}$ has the equation
given by Gr. Moisil [30]:
\begin{eqnarray}
\hspace*{5mm}\omega&\equiv & [2x (ax+by+cz) - a(x^{2} + y^{2}+z^{2}) + \mu x
+q_{0}z - r_{0}y + h] dx \nonumber\\&&+[2y(ax+by+cz)- b(x^{2} +
y^{2}+z^{2}) + \mu y +r_{0}x - p_{0}z + k]dy \nonumber\\&& +
[2z(ax+by+cz) - c(x^{2} + y^{2}+z^{2}) + \mu z +p_{0}y -
q_{0}x+l]dz = 0.
\end{eqnarray}
where $\mu, p_{0}, q_{0}, r_{0}, a,b,c, h,k,l$ are constants which
verify the condition $\omega\wedge d\omega \neq 0.$
In this section we investigate the geometrical properties of these
special manifolds.
1. {\it The nonholonomic plane $\Pi_{3}^{2}$}
The manifold $\Pi_{3}^{2}$ is defined by the property $\psi\equiv
0$, $\psi$ being the second fundamental form (5.6.5) of $E_{3}^{2}$.
From the formula (5.6.5) it follows
\begin{equation}
p_{2} = q_{1} = p_{1} - q_{2} = 0.
\end{equation}
The nonholonomic plane $\Pi_{3}^{2}$ has the invariants $H,
T_{m},\ldots$ given by
\begin{equation}
H=0, T_{m}\neq 0, T_{t} = \displaystyle\frac{1}{4}T_{m}^{2}, K_{t} = T_{t},
K_{g} = 0.
\end{equation}
Conversely, (5.11.4) implies (5.11.3).
Therefore the properties (5.11.4) characterize the nonholonomic
plane $\Pi_{3}^{2}$.
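The invariants (5.11.4) can be verified directly from (5.11.3): with $p_{2} = q_{1} = 0$ and $p_{1} = q_{2}$,

```latex
H = p_{2} - q_{1} = 0,\qquad T_{m} = p_{1} + q_{2} = 2p_{1},\qquad
T_{t} = p_{1}q_{2} - p_{2}q_{1} - \frac{1}{4}(p_{2}-q_{1})^{2}
      = p_{1}^{2} = \frac{1}{4}T_{m}^{2},
```

and, since $\psi \equiv 0$, the normal curvature $\kappa_{n} = \psi/\phi$ vanishes identically, whence $K_{g} = \displaystyle\frac{1}{R_{1}}\displaystyle\frac{1}{R_{2}} = 0$.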
Thus, the following theorems hold:
\begin{theorem}
1. The following lines on $\Pi_{3}^{2}$ are nondetermined:
- the lines tangent to the principal directions;
- the lines tangent to the directions of extremal geodesic
torsion;
- the asymptotic lines.
\end{theorem}
\begin{theorem}
Also, we have
- The lines of curvature coincide with the minimal lines,
$\langle \delta \overline{r}, \delta \overline{r}\rangle=0$. The normal
curvature $\kappa_{n}\equiv 0.$
- The geodesic of $\Pi_{3}^{2}$ are straight lines.
\end{theorem}
Consequently, the nonholonomic manifold $\Pi_{3}^{2}$ is totally
geodesic.
Conversely, a totally geodesic manifold $E_{3}^{2}$ is a
nonholonomic plane.
If $T_{m} = 0,$ $\Pi_{3}^{2}$ is a family of ordinary planes from
$E_{3}.$
\medskip
2. {\it The nonholonomic sphere, $\Sigma_{3}^{2}$}.
A nonholonomic manifold $E_{3}^{2}$ is a nonholonomic sphere
$\Sigma_{3}^{2}$ if the second fundamental form $\psi$ is
proportional to the first fundamental form $\phi.$
By means of (5.6.3) and (5.6.5) it follows that $\Sigma_{3}^{2}$ is
characterized by the following conditions
\begin{equation}
p_{1} - q_{2} = 0,\;\; p_{2} +q_{1} = 0.
\end{equation}
It follows that we have
\begin{equation}
\psi = \displaystyle\frac{1}{2}H\phi,\ \chi = \displaystyle\frac{1}{2}T_{m}\phi,\ \Theta^{2} =
K_{t}\phi.
\end{equation}
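The first two relations (5.11.6) follow at once from (5.11.5). Using the expressions of $\psi$ and $\chi$ which underlie the systems (5.8.2) and (5.9.1), with $p_{1} = q_{2}$ and $q_{1} = -p_{2}$:

```latex
\psi = -q_{1}\omega_{1}^{2} + (p_{1}-q_{2})\omega_{1}\omega_{2} + p_{2}\omega_{2}^{2}
     = p_{2}\,\phi = \frac{1}{2}H\phi,\qquad
\chi = p_{1}\omega_{1}^{2} + (p_{2}+q_{1})\omega_{1}\omega_{2} + q_{2}\omega_{2}^{2}
     = p_{1}\,\phi = \frac{1}{2}T_{m}\phi.
```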
Thus, we can say
\begin{proposition}
1. The normal curvature $\kappa_{n}$ at a point $P\in \Sigma^{2}_{3}$
is the same $\left(=\displaystyle\frac{1}{2}H\right)$ in all tangent directions at point
$P$.
2. The geodesic torsion $\tau_{g}$ at a point $P\in \Sigma_{3}^{2}$
is the same $\left(=\displaystyle\frac{1}{2}T_{m}\right)$ in all tangent directions at $P.$
\end{proposition}
\begin{proposition}
1. The principal directions at a point $P\in \Sigma_{3}^{2}$ are
nondetermined.
2. The directions of extremal geodesic torsion at $P$ are
nondetermined, too.
\end{proposition}
\begin{proposition}
At every point $P\in \Sigma_{3}^{2}$ the following relations hold:
\begin{equation}
K_{g} = \displaystyle\frac{1}{4}H^{2},\;\; T_{t} = \displaystyle\frac{1}{4}T_{m}^{2},\;\;
K_{t} = \displaystyle\frac{1}{4}(H^{2} +T_{m}^{2}).
\end{equation}
\end{proposition}
Conversely, if the first two relations (5.11.7) are verified at any
point $P\in E_{3}^{2}$, then $E_{3}^{2}$ is a nonholonomic sphere.
Indeed, (5.11.7) are the consequence of (5.11.5). Conversely, from
$K_{g} = \displaystyle\frac{1}{4}H^{2}$, $T_{t} = \displaystyle\frac{1}{4}T_{m}^{2}$ we
deduce (5.11.5).
\begin{proposition}
The nonholonomic sphere $\Sigma_{3}^{2}$ has the properties
\begin{equation}
K_{g}>0, T_{t} >0, K_{t} >0.
\end{equation}
\end{proposition}
\begin{proposition}
1. If $H\neq 0$, the asymptotic lines of $\Sigma_{3}^{2}$ are
imaginary.
2. The curvature lines of $\Sigma_{3}^{2}$, $(T_{m}\neq 0)$ are
imaginary.
\end{proposition}
The geodesics of $\Sigma_{3}^{2}$ are given by the equation (5.5.7)
or by the equation (5.5.11).
\begin{theorem}
The geodesics of the nonholonomic sphere $\Sigma_{3}^{2}$ cannot be
plane curves.
\end{theorem}
\begin{proof}
By means of Theorem 5.5.1, along a geodesic $C$ we have $\kappa_{n} =
|\kappa|,$ $\tau_{g} = \tau$, $\kappa$ and $\tau$ being the curvature and the
torsion of $C$. From (5.11.6) we get
$$
\kappa_{n} = \displaystyle\frac{1}{2}H,\;\; \tau_{g} = \displaystyle\frac{1}{2}T_{m}.
$$
So the torsion $\tau$ of $C$ is different from zero; $C$ cannot be
a plane curve.
\end{proof}
We stop here the theory of nonholonomic manifolds $E_{3}^{2}$ in
the Euclidean space $E_{3}$, remarking that the nonholonomic quadrics
were investigated by G. Vr\u{a}nceanu and A. Dobrescu [8], [44], [45].
\newpage
\thispagestyle{empty}
\chapter*{Preface}
In 2010, the Mathematical Seminar of the ``Alexandru Ioan Cuza''
University of Ia\c{s}i celebrates its 100th anniversary of
prodigious existence.
The establishing of the Mathematical Seminar by Alexandru Myller
also marked the beginning of the School of Geometry in Ia\c{s}i,
developed in time by prestigious mathematicians of international
fame, Octav Mayer, Gheorghe Vr\^{a}nceanu, Grigore Moisil, Mendel
Haimovici, Ilie Popa, Dimitrie Mangeron, Gheorghe Gheorghiev and
many others.
Among the first paper works, published by Al. Myller and O. Mayer,
those concerning the generalization of the Levi-Civita parallelism
must be specified, because of the relevance for the international
recognition of the academic School of Ia\c{s}i. Later on, through
the effort of two generations of Ia\c{s}i based mathematicians,
these led to a body of theories in the area of differential
geometry of Euclidian, affine and projective spaces.
At the half-centenary of the Mathematical Seminary, in 1960, the
author of the present opuscule synthesized the field$'$s results
and laid it out in the form of a ``whole, superb theory'', as
mentioned by Al. Myller. In the same time period, the book {\it The
geometry of the Myller configurations} was published by the
Technical Publishing House. It represents, as mentioned by Octav
Mayer in the Foreword, ``the most precious tribute ever offered to
Alexandru Myller''. Nowadays, at the 100th anniversary of the
Mathematical Seminary ``Alexandru Myller'' and 150 years after the
enactment, made by Alexandru Ioan Cuza, to set up the University
that carries his name, we are going to pay homage to those two
historical acts in the Romanian culture, through the publishing,
in English, of the volume {\it The Geometry of the Myller
Configurations}, completed with an ample chapter, containing new
results of the Romanian geometers, regarding applications in the
studying of non-holonomic manifolds in the Euclidean space. On
this occasion, one can better notice the undeniable value of the
achievements in the field made by some great Romanian
mathematicians, such as Al. Myller, O. Mayer, Gh. Vr\u{a}nceanu,
Gr. Moisil, M. Haimovici, I. Popa, I. Creang\u{a} and Gh. Gheorghiev.
The initiative for the re-publishing of the book belongs to the
leadership of the ``Alexandru Ioan Cuza'' University of Ia\c{s}i, to the local branch of the Romanian Academy, as
well as NGO Formare Studia Ia\c{s}i. A competent
assistance was offered to me, in order to complete this work, by
my former students and present collaborators, Professors Mihai
Anastasiei and Ioan Buc\u{a}taru, to whom I express my utmost gratitude.
\vspace*{1cm}
\,\,\, Ia\c{s}i, 2010 \hfill Acad. Radu Miron
\cleardoublepage
\chapter*{Preface of the book {\it Geometria configura\c{t}iilor Myller}
written in 1966 by Octav Mayer, former member of the Romanian
Academy {\Large{(Translated from Romanian)}}}
One can bring to the great scientists who passed away various
homages. Sometimes, the strong wind of progress wipes out the trace
of their steps. Not to forget them means to continue their work,
connecting them to the living present. In this sense, this scientific
work is the most precious homage which can be dedicated to Alexandru
Myller.
Initiator of a modern education of Mathematics at the University
of Iassy, founder of the Geometry School, which is still
flourishing today, in the third generation, Alexandru Myller was
also a hardworking researcher, well known inside the country and
also abroad due to his papers concerning Integral Equations and
Differential Geometry.
Our Academy elected him as a member and published his scientific
work. The ``A. Humboldt'' University from Berlin awarded him the
title of ``doctor Honoris Causa'' for ``special efforts in creating
an independent Romanian Mathematical School''.
Some of Alexandru Myller$'$s discoveries have penetrated fruitfully the
impetuous torrent of ideas, which have changed the Science of Geometry
during the first decades of the century. Among others, it is the case
of ``Myller configurations''. The reader would be perhaps interested
to find out some more details about how he discovered these configurations.
Let us imagine a surface $S$, on which there is drawn a curve $C$
and, along it, let us circumscribe to $S$ a developable surface
$\Sigma $; then we apply (develop) the surface $\Sigma$ on a
plane $\pi $. The points of the curve $C$, connected to the
respective tangent planes, which after are developed overlap each
other on the plane $\pi $, are going to represent a curve $C'$
into this plane.
In the same way, a series $(d)$ of tangent directions to the
surface $S$ at the points of the curve $C$ becomes, by developing on
the plane $\pi $, a series $(d')$ of directions getting out from
the points of the curve $C'$. The directions $(d)$ are parallel on
the surface $S$ if the directions $(d')$ are parallel in the
common sense.
Starting from this definition of the T. Levi-Civita parallelism (valid in
the Euclidian space), Alexandru Myller arrived at a more general
concept in a sensible process of abstraction. Of course, it was
not possible to leave the curve $C$ aside, neither the
directions $(d)$ whose parallelism was going to be defined
in a more general sense. What was left aside was the surface $S$.
As the surface $\Sigma$ one considered, in a natural way, the envelope
of the family of planes constructed at the points of the curve
$C$, planes in which the directions $(d)$ are given. Keeping
unchanged the remainder of the definition, one gets to what Alexandru
Myller called ``parallelism into a family of planes''.
A curve $C$, together with a family of planes on its points and a
family of given directions (in an arbitrary way) in these planes constitutes
what the author called a ``Myller configuration''.
It was considered that this new introduced notion had a central
place into the classical theory of surfaces, giving the
possibility to interpret and link among them many particular
facts. It was obvious that this notion can be successfully applied
in the Geometry of other spaces different from the Euclidian one.
Therefore the foreworded study was worth all the efforts made by the
author, a valuable mathematician from the third generation of the
Ia\c si Geometry School.
By recommending this work, we believe that the reader (who needs
only basic Differential Geometry) will be attracted by the clear
lecture of Radu Miron and also by the beauty of the subject, which
can still be developed further on.
\bigskip
\hfill Octav Mayer, 1966
\newpage
\chapter*{A short biography of\\ Al. Myller}
\begin{figure}[h]
\begin{center}
\includegraphics[width=6cm,height=7cm]{Myller.eps}
\end{center}
\end{figure}
{\it ALEXANDRU MYLLER} was born in Bucharest in 1879, and died in
Ia\c{s}i on the 4th of July 1965. Romanian mathematician. Honorary
Member (27 May 1938) and Honorific Member (12 August 1948) of the
Romanian Academy. High School and University studies (Faculty of
Science) in Bucharest, finished with a bachelor degree in
mathematics.
He was a Professor of Mathematics at the Pedagogical Seminar; he
sustained his PhD thesis {\it Gew\"{o}hnliche Differentialgleichungen
h\"{o}herer Ordnung}, in G\"{o}ttingen, under the scientific
supervision of David Hilbert.
He worked as a Professor at the Pedagogical Seminar and the School
of Post and Telegraphy (1907 - 1908), then as a lecturer at the
University of Bucharest (1908 - 1910). He is appointed Professor
of Analytical Geometry at the University of Ia\c{s}i (1910 -
1947); from 1947 onward - consultant Professor. In 1910, he sets
up the Mathematical Seminar at the ``Alexandru Ioan Cuza''
University of Ia\c{s}i, which he endows with a library full of
didactic works and specialty journals, and which, nowadays, bears
his name. Creator of the mathematical school of Ia\c{s}i, through
which many reputable Romanian mathematicians have passed, he
conveyed to us research works in fields such as integral
equations, differential geometry and history of mathematics. He
started his scientific activity with papers on the integral
equations theory, including extensions of Hilbert$'$s results, and
then he studied integral equations and self-adjoint linear
differential equations of even and odd order, being the first
mathematician to introduce integral equations with skew-symmetric
kernel.
He was the first to apply integral equations to solve problems for
partial differential equations of hyperbolic type. He was also
interested in the differential geometry, discovering a
generalization of the notion of parallelism in the Levi-Civita
sense, and introducing the notion today known as concurrence in
the Myller sense. All these led to ``the differential geometry of
Myller configurations''(R. Miron, 1960). Al. Myller, Gh.
\c{T}i\c{t}eica and O. Mayer have created ``the differential
centro-affine geometry'', which the history of mathematics refers
to as a ``purely Romanian creation''! He was also concerned with
problems from the geometry of curves and surfaces in Euclidian
spaces. His research outcomes can be found in the numerous
memoirs, papers and studies, published in Ia\c{s}i and abroad:
Development of an arbitrary function after Bessel$'$s functions
(1909); Parallelism in the Levi-Civita sense in a plane system
(1924); The differential centro-affine geometry of the plane
curves (1933) etc. - within the volume ``Mathematical Writings''
(1959), Academy Publishing House. Alexandru Myller has been an
Honorary Member of the Romanian Academy since 1938. Also, he was a
Doctor Honoris Causa of the Humboldt University of Berlin.
Excerpt from the volume ``Members of the Romanian Academy 1866 -
1999. Dictionary'' - Dorina N. Rusu, page 360.
\cleardoublepage
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\noindent\it Proof.\ \rm{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}%
\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\partial@\fi\fi
\hbox{\hskip5\partial@\vrule width4\partial@ height6\partial@ depth1.5\partial@\hskip\partial@}%
}%
\def\cents{\hbox{\rm\rlap/c}}%
\def\miss{\hbox{\vrule height2\partial@ width 2\partial@ depth\z@}}%
\def\vvert{\Vert
\def\tcol#1{{\baselineskip=6\partial@ \vcenter{#1}} \Column} %
\def\dB{\hbox{{}}
\def\mB#1{\hbox{$#1$}
\def\nB#1{\hbox{#1}
\def\note{$^{\dag}}%
\defLaTeX2e{LaTeX2e}
\def\chkcompat{%
\if@compatibility
\else
\usepackage{latexsym}
\fi
}
\ifx\fmtname\LaTeXe
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\chkcompat
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\theequation{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\ifx\ds@amstex\relax
\message{amstex already loaded}\makeatother\endinput
\else
\@ifpackageloaded{amstex}%
{\message{amstex already loaded}\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}%
{\message{amsgen already loaded}\makeatother\endinput}
{}
\fi
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}%
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\ex@.2326ex
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\mathop{\displaystyle \int}}%
\def\diint{\mathop{\displaystyle \iint }}%
\def\diiint{\mathop{\displaystyle \iiint }}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}%
}
\makeatother
\endinput
\section{Introduction}
In this paper we are interested in estimating the jump activity index
of a process defined on a filtered probability space $(\Omega,\mathcal
{F},\break (\mathcal{F}_t)_{t\geq0},\mathbb{P})$ and given by
\begin{equation}
\label{eq:X} dX_t = \alpha_{t}\,dt+\sigma_{t-}\,dL_t+dY_t,
\end{equation}
when $L$ is a locally stable pure-jump L\'{e}vy process (i.e., a pure-jump
L\'{e}vy process whose L\'{e}vy measure around zero behaves like that
of a
stable process) and $Y$ is a pure-jump process which is ``dominated''
at high-frequencies by $L$ in a sense which is made precise below; see
Assumption~\ref{assA}. All formal conditions for $X$ are given in Section~\ref
{sec:setting}. The jump activity index of $X$ on a given fixed time
interval is the infimum of the set of powers $p$ for which the sum of
$p$th absolute moments of the jumps is finite. Provided $\sigma$ does
not vanish on the interval and has c\`{a}dl\`{a}g paths, the jump activity
index of $X$ coincides with the Blumenthal--Getoor index of the driving
L\'{e}vy process $L$ (recall $Y$ is dominated by $L$ at high frequencies).
The dominant role of $L$ at high frequencies, together with its
stable-like L\'{e}vy measure around zero, manifests itself in the following
limiting behavior at high frequencies:
\begin{equation}\qquad
\label{eq:ls} h^{-1/\beta}(X_{t+sh}-X_t) \stackrel{
\mathcal{L}} {\longrightarrow } \sigma_{t}\times(S_{t+s}-S_t)\qquad
\mbox{as $h\rightarrow0$ and $s\in[0,1]$},
\end{equation}
for every $t$ and where $S$ is a $\beta$-stable process, with the
convergence being for the Skorokhod topology. Equation~(\ref{eq:ls}) holds when
$\beta>1$ which is the case we consider in this paper. (When $\beta<1$
the drift will be the ``dominant'' component at high-frequencies, and
some of our results can be extended to this case as well.) We study
estimation of $\beta$ from discrete equidistant observations of $X$ on
a fixed time interval with mesh of the observation grid shrinking to zero.
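The scaling in (\ref{eq:ls}) can be checked numerically in the one case where the stable law is elementary, $\beta=1$ (Cauchy). The sketch below is illustrative only and is not part of the paper's procedure: rescaled by $h^{-1/\beta}=h^{-1}$, the increments of a Cauchy L\'{e}vy process have the same law at every sampling frequency; for instance, the median of $|h^{-1}(X_{t+h}-X_t)|$ is $\tan(\pi/4)=1$ for every $h$.

```python
import numpy as np

# Increments of a Cauchy Levy process over a step h are Cauchy with scale h,
# so the h^{-1/beta} = h^{-1} rescaling gives a standard Cauchy law at every
# sampling frequency (illustration of eq:ls with beta = 1).
rng = np.random.default_rng(1)
for h in (1e-2, 1e-4):
    incr = h * rng.standard_cauchy(100_000)   # X_{t+h} - X_t, Cauchy(scale h)
    print(np.median(np.abs(incr / h)))        # close to 1, independent of h
```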
Estimation of the jump activity index has received a lot of attention
recently. \cite{NR} consider estimation from low-frequency observations
in the setting of L\'{e}vy processes. \cite{B11} and \cite{BP13} consider
estimation from low-frequency data in the setting of time-changed L\'{e}vy
processes with an independent time-change process. \cite{B10}~consider
estimation from low-frequency and options data. \cite{B11_b} and \cite
{BP12} consider estimation from low-frequency data in certain
stochastic volatility models. \cite{Woerner,Woerner03,Woerner07}
propose estimation from high-frequency data using power variations in a
pure-jump setting. \cite{SJ07} and \cite{JKLM12} consider estimation in
high-frequency setting when the underlying process can contain a
continuous martingale via truncated power variations. \cite{TT09}
propose estimation of the jump activity index in pure-jump setting via
power variations with adaptively chosen optimal power. \cite{T13}
extend \cite{TT09} via power variations of differenced increments which
provide further robustness and efficiency gains. \cite{JKL11} consider
jump activity estimation from noisy high-frequency data.
The estimation of $\beta$ from high-frequency data, thus far, makes use
of the dependence of the scaling factor of the high-frequency
increments in (\ref{eq:ls}) on $\beta$. For example, consider the power
variation
\begin{eqnarray}
\label{eq:pv} V(p,\Delta_n)& =& \sum_{i=1}^n\bigl|
\Delta_i^nX\bigr|^p, \qquad\Delta_i^n
X= X_{
{i}/{n}}-X_{{(i-1)}/{n}},
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\Delta_n &=& \frac{1}{n},\qquad
p>0.
\end{eqnarray}
Under certain technical conditions, (\ref{eq:ls}) implies
\begin{eqnarray*}
\Delta_n^{1-p/\beta}V(p,\Delta_n)& \stackrel{\mathbb
{P}} {\longrightarrow }& \mu\int_0^1|
\sigma_s|^p\,ds, \\
(2\Delta_n)^{1-p/\beta}V(p,2
\Delta _n) &\stackrel{\mathbb{P}} {\longrightarrow}& \mu\int
_0^1|\sigma _s|^p\,ds,
\end{eqnarray*}
where $\mu$ is some constant. An estimate of $\beta$ can then be
formed simply as a nonlinear function of the ratio $\frac{V(p,\Delta
_n)}{V(p,2\Delta_n)}$. This makes inference for $\beta$ possible
despite the unknown process $\sigma$.
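Concretely, the two limits give $V(p,\Delta_n)/V(p,2\Delta_n)\stackrel{\mathbb{P}}{\longrightarrow}2^{1-p/\beta}$, so one may set $\widehat{\beta}=p/(1-\log_2 r)$ with $r$ the observed ratio. A minimal numerical sketch (the function names are ours, not the paper's; the sanity check uses a Brownian path, i.e., the boundary case $\beta=2$, for which the same scaling identity holds):

```python
import numpy as np

def power_variation(x, p, step=1):
    """V(p, step * Delta_n): sum of p-th absolute powers of the increments of
    the path x sampled every `step` grid points."""
    return np.sum(np.abs(np.diff(x[::step])) ** p)

def beta_from_ratio(x, p):
    """Invert V(p, Dn) / V(p, 2 Dn) -> 2^{1 - p/beta} for beta."""
    r = power_variation(x, p, 1) / power_variation(x, p, 2)
    return p / (1.0 - np.log2(r))

# Sanity check on a Brownian path (boundary case beta = 2):
rng = np.random.default_rng(2)
n = 200_000
x = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))
print(beta_from_ratio(x, 0.5))   # close to 2
```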
The limit result in (\ref{eq:ls}), however, contains much more
information about $\beta$ than previously used in estimation. In
particular, (\ref{eq:ls}) implies that over a short interval of time
the increments of $X$, conditional on $\sigma$ at the beginning of the
interval, are approximately i.i.d. stable random variables. In this
paper we propose a new estimator of $\beta$ that utilizes this
additional information in (\ref{eq:ls}) and leads to significant
efficiency gains over existent estimators based on high-frequency data.
The key obstacle in utilizing the result in (\ref{eq:ls}) in inference
for $\beta$ is the fact that the process $\sigma$ is unknown and
time-varying. The idea of our method is to form a local estimator of
$\sigma$ using a block of high-frequency increments with asymptotically
shrinking time span via a localized version of (\ref{eq:pv}). We then
divide the high-frequency increments of $X$ by the local estimator of
$\sigma$. The division achieves ``self-normalization'' in the following
sense. First, the scale factor for the local estimator of $\sigma$ and
the high-frequency increment of $X$ are the same, and hence by taking
the ratio, they cancel. Second, both the high-frequency increment of
$X$ and the local estimator of $\sigma$ are approximately proportional
to the value of $\sigma$ at the beginning of the high-frequency
interval, and hence taking their ratio cancels the effect of the
unknown $\sigma$. The resulting scaled high-frequency increments are
approximately i.i.d. stable random variables, and we make inference for
$\beta$ via an analogue of the empirical characteristic function
approach, which has been used in various other contexts; see, for
example, \cite{CDH}.
After removing an asymptotic bias, the limit behavior of the empirical
characteristic function of the scaled high-frequency increments is
determined by two correlated normal random variables. One of them is
due to the limiting behavior of the empirical characteristic function
of the high-frequency increments scaled by the limit of the local power
variation. The other is due to the error in estimating the local scale
by the local realized power variation. Importantly, because of the
``self-normalization,'' the $\mathcal{F}$-conditional asymptotic
variance of the empirical characteristic function of the scaled
high-frequency increments is not random but rather a constant that
depends only on $\beta$ and the power $p$. This makes feasible
inference very easy.
When comparing the new estimator with existing ones based on the power
variation, we find nontrivial efficiency gains. There are two reasons
for the efficiency gains. First, as we noted above, our estimator makes
full use of the limiting result in (\ref{eq:ls}) and not just the
dependence of the scale of the high-frequency increments on $\beta$,
which is the case for existing ones. Second, by locally removing the
effect of the time-varying $\sigma$, we make the inference as if
$\sigma$ were constant; that is, the limit variance is the same, regardless of
whether $X$ is L\'{e}vy or not. By contrast, the estimator based on the
ratios of power variations is asymptotically mixed normal with
$\mathcal
{F}$-conditional variance\vspace*{-2pt} of the form $K(p,\beta)\frac{\int_0^1|\sigma
_s|^{2p}\,ds}{ (\int_0^1|\sigma_s|^{p}\,ds )^2}$, for some
constant $K(p,\beta)$, and we note that $\frac{\int_0^1|\sigma
_s|^{2p}\,ds}{ (\int_0^1|\sigma_s|^{p}\,ds )^2}\geq1$ with
equality whenever the process $|\sigma|$ is almost everywhere constant
on the interval $[0,1]$. That is, the presence of time-varying $\sigma$
decreases the precision of the power-variation based estimator of
$\beta$.
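A quick numerical illustration of this variance-inflation factor (our example, not from the paper): for the hypothetical path $\sigma_s=1+s$ and $p=1$, the factor equals $(7/3)/(3/2)^2=28/27\approx1.037$.

```python
import numpy as np

# Variance inflation factor int |sigma|^{2p} ds / (int |sigma|^p ds)^2 for the
# illustrative path sigma_s = 1 + s with p = 1; the exact value is 28/27.
s = np.linspace(0.0, 1.0, 100_001)
sigma = 1.0 + s
p = 1.0
factor = np.mean(sigma ** (2 * p)) / np.mean(sigma ** p) ** 2
print(factor)   # about 1.037; equals 1 only when |sigma| is a.e. constant
```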
The efficiency gains of our estimator are bigger for higher values of
$\beta$. In the limit case of $\beta=2$, which corresponds to $L$ being
a Brownian motion, we show that our estimator can achieve a faster rate
of convergence than the standard $\sqrt{n}$ rate for existing estimators.
The rest of the paper is organized as follows. In Section~\ref
{sec:setting} we introduce the setting. In Section~\ref{sec:constr} we
construct our statistic, and in Section~\ref{sec:limit} we derive its
limit behavior. In Section~\ref{sec:ja} we build on the developed limit
theory and construct new estimators of the jump activity and derive
their limit behavior. This section also shows the efficiency gains of
the proposed jump activity estimators over existing ones. Section~\ref
{sec:jd} deals with the limiting case of jump-diffusion. Sections~\ref
{sec:mc} and \ref{sec:emp} contain a Monte Carlo study and an empirical
application, respectively. Proofs are in Section~\ref{sec:proof}.
\section{Setting and assumptions}\label{sec:setting}
We start by introducing the setting and stating the assumptions that
we need for the results in the paper. We first recall that a L\'{e}vy
process $L$ with the characteristic triplet $(b,c,\nu)$, with respect
to truncation function $\kappa$ (Definition II.2.3 in \cite{JS}), is a
process with a characteristic function given by
\begin{eqnarray}
\label{eq:cf} \mathbb{E} \bigl(e^{iuL_t} \bigr) =\exp
\biggl[itub-tcu^2/2+t\int_{\mathbb
{R}}
\bigl(e^{iux}-1-iu\kappa(x) \bigr)\nu(dx) \biggr],
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{t\geq0.}
\end{eqnarray}
In what follows we will always assume for simplicity that $\kappa
(-x)=-\kappa(x)$. Our assumption for the driving L\'{e}vy process in
(\ref{eq:X}) as well as the ``residual'' jump component $Y$ is given in
Assumption \ref{assA}.
\renewcommand{\theassumption}{\Alph{assumption}}
\begin{assumption}\label{assA}
$L$ in (\ref{eq:X}) is a
L\'{e}vy
process with characteristic triplet $(0,0,\nu)$ for $\nu$ a L\'{e}vy
measure with density given by
\begin{equation}
\label{eq:levy} \nu(x) = \frac{A}{|x|^{1+\beta}}+\nu'(x),\qquad \beta\in(0,2),
\end{equation}
where $A>0$ and $\nu'(x)$ is such that there exists $x_0>0$
with $|\nu'(x)|\leq C/|x|^{1+\beta'}$ for $|x|\leq x_0$ and some
$\beta
'<\beta$.
$Y$ is an It\^o semimartingale with the characteristic
triplet (\cite{JS}, Definition II.2.6) $ (\int_0^t\int_{\mathbb{R}}\kappa(x)\nu_s^Y(dx)\,ds,0,dt\otimes\nu_t^Y(dx)
)$ when
$\beta'<1$ and $ (0,0,dt\otimes\nu_t^Y(dx) )$ otherwise, with
$\int_{\mathbb{R}}(|x|^{\beta'+\iota}\wedge1)\nu_t^Y(dx)$ being
locally bounded and predictable, for some arbitrarily small $\iota>0$.
\end{assumption}
Assumption~\ref{assA} formalizes the sense in which $Y$ is dominated at high
frequencies by $L$: the activity index of $Y$ is below that of $L$. We
also stress that $Y$ and $L$ can have dependence. Therefore, as shown
in \cite{TT12}, we can accommodate in our setup time-changed L\'{e}vy
models, with an absolutely continuous time-change process, that have been
extensively used in applied work. Finally, we note that (\ref{eq:levy})
restricts only the behavior of $\nu$ around zero, and $\nu'$ is a
signed measure. Therefore many parametric jump specifications other than
the stable process satisfy Assumption~\ref{assA} (e.g., the tempered
stable process). We next state our assumption for the dynamics of
$\alpha$ and $\sigma$.
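For instance, the tempered stable density $\nu(x)=Ae^{-\lambda|x|}/|x|^{1+\beta}$ fits (\ref{eq:levy}) with $\nu'(x)=A(e^{-\lambda|x|}-1)/|x|^{1+\beta}$; since $|e^{-y}-1|\leq y$ for $y\geq0$, we get $|\nu'(x)|\leq A\lambda/|x|^{\beta}$, that is, $\beta'=\beta-1<\beta$. A small numerical check of this bound (the parameter values are illustrative):

```python
import numpy as np

# Tempered stable example: nu(x) = A exp(-lam |x|) / |x|^{1+beta} fits the
# Levy-density assumption with nu'(x) = A (exp(-lam |x|) - 1) / |x|^{1+beta},
# bounded near zero by A lam / |x|^beta, i.e. beta' = beta - 1 < beta.
A, lam, beta = 1.0, 2.0, 1.5          # illustrative parameters
x = np.geomspace(1e-6, 0.5, 400)
nu_prime = A * np.expm1(-lam * x) / x ** (1 + beta)   # expm1 avoids cancellation
bound = A * lam / x ** beta
print(bool(np.all(np.abs(nu_prime) <= bound)))        # True
```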
\begin{assumption}\label{assB}
The processes $\alpha$ and
$\sigma$ are It\^o semimartingales of the form
\begin{eqnarray}
\label{ass:b}
\alpha_t &=& \alpha_0+\int
_0^tb_s^{\alpha}\,ds+\int
_0^t\int_{E}\kappa
\bigl(\delta^{\alpha}(s,x)\bigr)\widetilde{\underline{\mu}}(ds,dx)\nonumber\\
&&{} +\int
_{E}\kappa'\bigl(\delta^{\alpha}(s,x)
\bigr)\underline{\mu}(ds,dx),
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\sigma_t&=&\sigma_0+\int_0^tb_s^{\sigma}\,ds+
\int_0^t\int_{E}\kappa
\bigl(\delta ^{\sigma}(s,x)\bigr)\widetilde{\underline{\mu}}(ds,dx)\\
&&{} +\int
_{E}\kappa'\bigl(\delta^{\sigma}(s,x)
\bigr)\underline{\mu}(ds,dx), \nonumber
\end{eqnarray}
where $\kappa'(x) = x-\kappa(x)$, and:
\begin{longlist}[(a)]
\item[(a)] $|\sigma_t|^{-1}$ and $|\sigma_{t-}|^{-1}$ are
strictly positive;
\item[(b)] $\underline{\mu}$ is a Poisson measure on $\mathbb
{R}_+\times E$, having arbitrary dependence with the jump measure of
$L$, with compensator $dt\otimes\lambda(dx)$ for some $\sigma$-finite
measure $\lambda$ on $E$;
\item[(c)] $\delta^{\alpha}(t,x)$ and $\delta^{\sigma}(t,x)$
are predictable, left-continuous with right limits in $t$ with $|\delta
^{\alpha}(t,x)|+|\delta^{\sigma}(t,x)|\leq\gamma_k(x)$ for all
$t\leq
T_k$, where $\gamma_k(x)$ is a deterministic function on $\mathbb{R}$
with $\int_{\mathbb{R}}(|\gamma_k(x)|^{r+\iota}\wedge1)\lambda
(dx)<\infty$ for arbitrarily small $\iota>0$ and some $0\leq r\leq
\beta
$, and $T_k$ is a sequence of stopping times increasing to $+\infty$;
\item[(d)] $b^{\alpha}$ and $b^{\sigma}$ are It\^o
semimartingales having dynamics as in (\ref{ass:b}) with coefficients
satisfying the analogues of conditions (b) and (c) above.
\end{longlist}
\end{assumption}
We note that $\underline{\mu}$ does not need to coincide with the jump
measure of $L$, and hence it allows for dependence between the
processes $\alpha$, $\sigma$ and $L$. This is of particular relevance
for financial applications. For example, Assumption~\ref{assB} is satisfied by
the COGARCH model of \cite{COGARCH} in which the jumps in $\sigma$ are
proportional to the squared jumps in $X$. More generally, Assumption~\ref{assB}
is satisfied if, for example, $(X,\alpha,\sigma)$ is modeled via a
L\'{e}vy-driven SDE, with each of the elements of the driving L\'{e}vy process
satisfying Assumption~\ref{assA}.
\section{Construction of the self-normalized statistics}\label{sec:constr}
We continue next with the construction of our statistics. The
estimation in the paper is based on observations of $X$ at the
equidistant grid times $0,\frac{1}{n},\ldots,1$ with $n\rightarrow
\infty$,
and we denote $\Delta_n = \frac{1}{n}$. To minimize the effect of the
drift in our statistics, we follow \cite{T13} and work with the first
difference of the increments, $\Delta_i^nX-\Delta_{i-1}^nX$, where
$\Delta_i^nX = X_{{i}/{n}}-X_{{(i-1)}/{n}}$ for $i=1,\ldots
,n$. The
above difference of increments is purged from the drift in the L\'{e}vy
case, and in the general case the drift has a smaller asymptotic effect
on it. For each $\Delta_i^nX-\Delta_{i-1}^nX$, we need a local power
variation estimate for the scale. It is constructed from a block of
$k_n$ high-frequency increments, for some $1<k_n<n-2$, as follows:
\begin{equation}
\label{eq:lpv} V_i^n(p) = \frac{1}{k_n}\sum
_{j=i-k_n-1}^{i-2}\bigl|\Delta_j^nX-
\Delta _{j-1}^nX\bigr|^p,\qquad i=k_n+3,
\ldots,n.
\end{equation}
Block-based local estimators of volatility have also been used in other
contexts in a high-frequency setting, for example, in \cite{JR} and
\cite{TT14}. The empirical characteristic function of the scaled
differenced increments is given by
\begin{equation}
\label{eq:ecf} \widehat{\mathcal{L}}^n(p,u) = \frac{1}{n-k_n-2}\sum
_{i=k_n+3}^n\cos \biggl(u\frac{\Delta_i^nX-\Delta_{i-1}^nX}{ (V_i^n(p))^{1/p} }
\biggr),\qquad u\in\mathbb{R}_+.
\end{equation}
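In code, (\ref{eq:lpv}) and (\ref{eq:ecf}) translate directly as follows; the vectorized indexing and the function name are ours. As a sanity check we use a Brownian path (boundary case $\beta=2$): for $p=1$ the self-normalized differenced increments are approximately $N(0,\pi/2)$, so $\widehat{\mathcal{L}}^n(1,u)\approx e^{-\pi u^2/4}$.

```python
import numpy as np

def self_normalized_ecf(x, p, k_n, u):
    """Empirical characteristic function of the differenced increments of the
    path x, each scaled by the trailing local power variation V_i^n(p) built
    from the previous k_n differenced increments (eq:lpv and eq:ecf)."""
    d2 = np.diff(np.diff(x))                   # Delta_i X - Delta_{i-1} X, i = 2..n
    ap = np.abs(d2) ** p
    csum = np.concatenate(([0.0], np.cumsum(ap)))
    m = np.arange(k_n + 1, len(d2))            # array index m maps to i = m + 2
    V = (csum[m - 1] - csum[m - k_n - 1]) / k_n   # mean over j = i-k_n-1 .. i-2
    return np.mean(np.cos(u * d2[m] / V ** (1.0 / p)))

# Sanity check on a Brownian path (beta = 2, p = 1): limit exp(-pi u^2 / 4).
rng = np.random.default_rng(3)
n = 150_000
x = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))
print(self_normalized_ecf(x, p=1.0, k_n=250, u=1.0))   # close to exp(-pi/4) ~ 0.456
```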
We proceed with some notation needed for the limiting theory of
$\widehat{\mathcal{L}}^n(p,u)$. Let $S_1$, $S_2$ and $S_3$ be random
variables corresponding to the values of three independent L\'{e}vy
processes at time 1, each of which with the characteristic triplet
$(0,0,\nu)$, for any truncation function $\kappa$ and where $\nu$ has
the density $\frac{A}{|x|^{1+\beta}}$. Then we denote
$\mu_{p,\beta} = (\mathbb{E}|S_1-S_2|^p)^{\beta/p}$, which does not
depend on $\kappa$, and we further use the shorthand notation $\mathbb
{E} ( e^{iu(S_1-S_2)} ) = e^{-A_{\beta}u^{\beta}}$ for any
$u>0$ with $A_{\beta}$ being a (known) function of $A$ and $\beta$.
Using Example~25.10 in \cite{Sato} and references therein, we have
\begin{equation}
\label{eq:C} C_{p,\beta} = \frac{A_{\beta}}{\mu_{p,\beta}}= \biggl[\frac
{2^{p}\Gamma
({(1+p)}/{2} )\Gamma (1-{p}/{\beta}
)}{\sqrt
{\pi}\Gamma (1-{p}/{2} )}
\biggr]^{-\beta/p},
\end{equation}
which depends only on $p$ and $\beta$ but not on the scale parameter of
the stable random variables $S_1$ and $S_2$.
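Since $C_{p,\beta}$ is fully explicit, it is straightforward to evaluate numerically. The following sketch (ours) uses only the gamma function and is valid for $0<p<\beta$ and $p<2$:

```python
import math

def C_pbeta(p, beta):
    """C_{p,beta} of eq. (eq:C):
    [2^p Gamma((1+p)/2) Gamma(1-p/beta) / (sqrt(pi) Gamma(1-p/2))]^(-beta/p),
    for 0 < p < beta and p < 2."""
    num = 2.0 ** p * math.gamma((1.0 + p) / 2.0) * math.gamma(1.0 - p / beta)
    den = math.sqrt(math.pi) * math.gamma(1.0 - p / 2.0)
    return (num / den) ** (-beta / p)
```

At $\beta=2$ the factors $\Gamma(1-p/\beta)$ and $\Gamma(1-p/2)$ cancel, giving $C_{p,2}=1/(2m_p^{2/p})$ with $m_p=\mathbb{E}|Z|^p$ for $Z$ standard normal, a convenient consistency check.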
With this notation, we set
\begin{equation}
\label{eq:limit} \mathcal{L}(p,u,\beta) = e^{-C_{p,\beta}u^{\beta}},\qquad u\in\mathbb{R}_+,
\end{equation}
which will be the limit in probability of $\widehat{\mathcal
{L}}^n(p,u)$. We finish with some more notation needed to describe the
asymptotic variance of $\widehat{\mathcal{L}}^n(p,u)$. First, we denote
for some $u\in\mathbb{R}_+$,
\begin{eqnarray}
\xi_1(p,u,\beta) &=& \biggl(
\cos \biggl(\frac{u(S_1-S_2)}{\mu_{p,\beta
}^{1/\beta
}} \biggr) - \mathcal{L}(p,u,
\beta), \frac{|S_1-S_2|^p}{\mu
_{p,\beta
}^{p/\beta}}-1
\biggr)',
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\xi_2(p,u,\beta) &=& \biggl(
\cos
\biggl(\frac{u(S_2-S_3)}{\mu_{p,\beta
}^{1/\beta
}} \biggr) - \mathcal{L}(p,u,\beta),
\frac{|S_2-S_3|^p}{\mu
_{p,\beta
}^{p/\beta}}-1
\biggr)'.
\end{eqnarray}
We then set for $u,v\in\mathbb{R}_+$
\begin{equation}
\Xi_i(p,u,v,\beta) = \mathbb{E} \bigl(\xi_1(p,u,\beta)
\xi '_{1+i}(p,v,\beta) \bigr),\qquad i=0,1
\end{equation}
and
\begin{eqnarray}
G(p,u,\beta)& =& \frac{\beta}{p}e^{-C_{p,\beta}u^{\beta}}C_{p,\beta
}u^{\beta},
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
H(p,u,\beta)& =& G(p,u,\beta) \biggl( \frac{\beta
}{p}C_{p,\beta
}u^{\beta}
-\frac{\beta}{p}-1 \biggr).
\end{eqnarray}
\section{Limit theory for \texorpdfstring{$\widehat{\mathcal{L}}^n(p,u)$}{widehat{mathcal{L}}n(p,u)}}\label
{sec:limit} We start with convergence in probability.
\begin{theorem}\label{thm:cp}
Assume $X$ satisfies Assumptions \ref{assA} and \ref{assB} for some $\beta\in(1,2)$ and
$\beta'<\beta$. Let $k_n$ be a deterministic sequence satisfying
$k_n\asymp n^{\varpi}$ for some $\varpi\in(0,1)$. Then, for
$0<p<\beta
$, we have
\begin{equation}
\label{cp_1} \widehat{\mathcal{L}}^n(p,u) \stackrel{\mathbb{P}} {
\longrightarrow } \mathcal{L}(p,u,\beta) \qquad\mbox{as $n\rightarrow\infty$},
\end{equation}
locally uniformly in $u\in\mathbb{R}_+$.
\end{theorem}
We note that we restrict $\beta>1$; that is, we focus on the infinite
variation case. The above theorem will continue to hold for $\beta\leq
1$, but for the subsequent results about the limiting distribution of
$\widehat{\mathcal{L}}^n(p,u)$, we will need quite stringent additional
restrictions in the case $\beta\leq1$. We do not pursue this here. The
other conditions for the convergence in probability result are weak.
The requirements for $\alpha$ and $\sigma$ for Theorem~\ref{thm:cp} to
hold are actually much weaker than what is assumed in Assumption~\ref{assB}, but
for simplicity of exposition we keep Assumption~\ref{assB} throughout. We note
that for consistency we have a lot of flexibility in the choice of the
block size $k_n$; we only need: (1)~$k_n\rightarrow\infty$, so that we
consistently estimate the
scale via $V_i^n(p)$, and (2) $k_n/n\rightarrow0$, so that the span of
the block is asymptotically shrinking to zero, and therefore no bias is
generated due to the time variation of $\sigma$. In the case when $X$
is a L\'{e}vy process, the second condition is obviously not needed.
To derive a central limit theorem (c.l.t.) for $\widehat{\mathcal
{L}}^n(p,u)$, we will need to restrict the choice of $k_n$ more. We
will assume $k_n/\sqrt{n}\rightarrow0$, so that biases due to the time
variation in $\sigma$, which are hard to feasibly estimate, are
negligible. For such a choice of $k_n$, however, an asymptotic bias due
to the sampling error of $V_i^n(p)$ appears, and for stating a c.l.t., we need to
consider the following bias-corrected estimator:
\begin{eqnarray}
\label{eq:ecf_debias} \qquad
\widehat{\mathcal{L}}^{n}(p,u,
\beta)' &= &\widehat{\mathcal {L}}^{n}(p,u)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&{} - \frac{1}{k_n}\frac{1}{2}H(p,u,\beta) \bigl( \Xi
_0^{(2,2)}(p,u,u,\beta) +2\Xi_1^{(2,2)}(p,u,u,
\beta) \bigr).
\end{eqnarray}
We state the c.l.t. for $\widehat{\mathcal{L}}^{n}(p,u,\beta)' $ in the
next theorem.
\begin{theorem}\label{thm:clt}
Assume $X$ satisfies Assumptions \ref{assA} and \ref{assB} with $\beta\in(1,2)$ and
$\beta
'<\frac{\beta}{2}$, and that the power $p$ and block size $k_n$ satisfy
\begin{eqnarray}
\label{clt_1} \frac{\beta\beta'}{2(\beta-\beta')}&\vee&\frac{\beta
-1}{2}<p<\frac
{\beta}{2},
\\
\label{clt_2} k_n\asymp n^{\varpi},\qquad \frac{p}{\beta}&\vee&
\frac{1}{3}<\varpi <\frac{1}{2}.
\end{eqnarray}
Then, as $n\rightarrow\infty$, we have
\begin{equation}
\label{clt_3} \sqrt{n} \bigl( \widehat{\mathcal{L}}^n(p,u,
\beta)' - \mathcal {L}(p,u,\beta) \bigr) \stackrel{\mathcal{L}} {
\longrightarrow } Z_1(u)+G(p,u,\beta)Z_2(u),
\end{equation}
locally uniformly in $u\in\mathbb{R}_+$. $Z_1(u)$ and $Z_2(u)$ are two
Gaussian processes with the following covariance structure:
\begin{equation}
\label{clt_4} \mathbb{E} \bigl(\mathbf{Z}(u)\mathbf{Z}(v) \bigr) = \Xi
_0(p,u,v,\beta )+2\Xi_1(p,u,v,\beta),\qquad u,v\in
\mathbb{R}_+,
\end{equation}
where $\mathbf{Z}(u) = (Z_1(u), Z_2(u) )'$.
Let $\widehat{\beta}$ be an estimator of $\beta$ with $\widehat
{\beta
}-\beta= o_p(k_n\sqrt{\Delta_n})$ as $n\rightarrow\infty$. Then
\begin{equation}
\label{clt_5} \sqrt{n} \bigl( \widehat{\mathcal{L}}^n(p,u,\widehat{
\beta})' - \widehat {\mathcal{L}}^n(p,u,
\beta)' \bigr) \stackrel{\mathbb {P}} {\longrightarrow} 0,
\end{equation}
locally uniformly in $u\in\mathbb{R}_+$.
\end{theorem}
The conditions for the power $p$ in (\ref{clt_1}) are exactly the same
as in \cite{T13} for the analysis of the realized power variation, and
they are relatively weak. For example, the condition $p>\frac{\beta
-1}{2}$ will always be satisfied as soon as we pick a power slightly
above $\frac{1}{2}$. Moreover, this condition is not needed in the case
when $X$ is a L\'{e}vy process. Further, the condition in (\ref
{clt_2}) for
$k_n$ shows that we have more flexibility for the choice of $k_n$
whenever $p$ is not very close to its upper bound of $\beta/2$.
Due to the self-normalization in the construction of our statistic, the
limiting distribution in (\ref{clt_3}) is Gaussian and not mixed
Gaussian, which is the case for most limit results in high-frequency
asymptotics (and in particular for the power variation based estimator
of $\beta$); see \cite{V12} for another exception. This is very
convenient as the estimation of the asymptotic variance is
straightforward. The bias correction in (\ref{eq:ecf_debias}) is
infeasible, as it depends on $\beta$. However, (\ref{clt_5}) shows that
a feasible version of the debiasing would work provided\vspace*{1pt} the initial
estimator of $\beta$ is $o_p(k_n\sqrt{\Delta_n})$. When one estimates
$\beta$ using $\widehat{\mathcal{L}}^n(p,u)$, with explicit estimators
provided in the next section, $\widehat{\beta}-\beta$ will be
$O_p(1/k_n)$. Hence, such a preliminary estimate of $\beta$ will
satisfy the required rate condition in Theorem~\ref{thm:clt}.
\section{Jump activity estimation}\label{sec:ja} We now use the limit
theory developed above to form estimators of $\beta$. The simplest one
is based on $\widehat{\mathcal{L}}^n(p,u)$ and is given by
\begin{equation}
\label{beta:fs} \widehat{\beta}^{fs}(p,u,v) = \frac{\log (-\log(\widehat
{\mathcal
{L}}^n(p,u)) )-\log (-\log(\widehat{\mathcal
{L}}^n(p,v))
)}{\log(u/v)},
\end{equation}
for $u,v\in\mathbb{R}_+$ with $u\neq v$. Because of the asymptotic bias
in $\widehat{\mathcal{L}}^n(p,u)$, $\widehat{\beta
}^{fs}(p,\break u,v)-\beta$
will be only $O_p(1/k_n)$, with $p$ and $k_n$ satisfying (\ref
{clt_1})--(\ref{clt_2}). An explicit estimate of $\beta$ using feasible
debiasing is given by
\begin{equation}\qquad
\label{beta:ts} \widehat{\beta}(p,u,v) = \frac{\log (-\log(\widehat{\mathcal
{L}}^n(p,u,\widehat{\beta}^{fs})') )-\log (-\log
(\widehat
{\mathcal{L}}^n(p,v,\widehat{\beta}^{fs})') )}{\log(u/v)},
\end{equation}
for some $u,v\in\mathbb{R}_+$ with $u\neq v$, and where
$\widehat{\beta}^{fs}$ is a suitable initial estimator of $\beta$ [like
the one in (\ref{beta:fs})]. While convenient, the above estimators
have two potential drawbacks. First, we do not take into account the
information about $\beta$ in the constant $C_{p,\beta}$. This is
because in the asymptotic limit of the above estimators, $C_{p,\beta}$
cancels. Second, $u$ and $v$ are chosen arbitrarily, and one can
include more moment conditions for the estimation of $\beta$ using
$\widehat{\mathcal{L}}^n(p,u,\widehat{\beta}^{fs})'$. In the next
theorem we provide a general estimator of $\beta$ which overcomes these
drawbacks of the explicit estimators above.
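For illustration, the explicit estimator in (\ref{beta:fs}) amounts to one line of code. The sketch below (ours) recovers $\beta$ exactly when fed the limiting characteristic function $e^{-Cu^{\beta}}$, since the constant $C$ cancels in the log-log difference:

```python
import math

def beta_fs(L_u, L_v, u, v):
    """First-stage estimator (beta:fs) from two points of the empirical CF:
    [log(-log L(u)) - log(-log L(v))] / log(u/v), for 0 < L < 1 and u != v."""
    return (math.log(-math.log(L_u)) - math.log(-math.log(L_v))) / math.log(u / v)
```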
\begin{theorem}\label{thm:gmm}
Assume $X$ satisfies Assumptions \ref{assA} and \ref{assB} with $\beta\in(1,2)$ and
$\beta
'<\beta/2$, and that the conditions in (\ref{clt_1}) and (\ref{clt_2})
hold. Suppose $\widehat{\beta}^{fs}$ is a consistent estimator of
$\beta
$ with $\widehat{\beta}^{fs}-\beta= o_p(k_n\sqrt{\Delta_n})$. Denote
with $\widehat{\mathbf{u}}_l$ and $\widehat{\mathbf{u}}_h$ two
sequences of $K\times1$-dimensional vectors, for some finite $K\geq
1$, satisfying $\widehat{\mathbf{u}}_l \stackrel{\mathbb
{P}}{\longrightarrow} \mathbf{u}_l$ and $\widehat{\mathbf
{u}}_h \stackrel{\mathbb{P}}{\longrightarrow} \mathbf{u}_h$ as
$n\rightarrow\infty$, for some $\mathbf{u}_l, \mathbf{u}_h\in
\mathbb
{R}_+^K$ with $u_l^i<u_h^i$, $u_l^j<u_h^j$ and $(u_l^i, u_h^i)\cap
(u_l^j, u_h^j)= \varnothing$ for every $i,j = 1,\ldots,K$ with $i\neq j$,
where $u_l^i$ and $u_h^i$ denote the $i$th element of the vectors
$\mathbf{u}_l$ and $\mathbf{u}_h$, respectively. Set further the
shorthand $\mathbf{u}=[\mathbf{u}_l; \mathbf{u}_h]$ and $\widehat
{\mathbf{u}}=[\widehat{\mathbf{u}}_l; \widehat{\mathbf{u}}_h]$.
Let $\mathbf{W}(p,\mathbf{u},\beta)$ be the $K\times K$ matrix with $(i,j)$
element given by
\begin{eqnarray}
\label{gmm_1} \mathbf{W}(p,\mathbf{u},\beta)_{i,j} &=& \int
_{u_l^i}^{u_h^i}\int_{u_l^j}^{u_h^j}w(p,u,v,
\beta)\,du\,dv,
\\
\label{gmm_2}
w(p,u,v,\beta) &= &\frac{1}{ \mathcal{L}(p,u,\beta)\mathcal
{L}(p,v,\beta)
}\pmatrix{ 1
\cr
G(p,u,\beta)}'\nonumber\\
&&{}\times\overline{
\Xi}(p,u,v,\beta) \pmatrix{ 1
\cr
G(p,v,\beta)},\nonumber
\end{eqnarray}
where $\overline{\Xi}(p,u,v,\beta) = \Xi_0(p,u,v,\beta)+2\Xi
_1(p,u,v,\beta) $.
Define the $K\times1$ vector $\widehat{\mathbf{m}}(p,\widehat
{\mathbf
{u}},\widehat{\beta}^{fs},\mathbf{u},\beta)$ by
\begin{equation}\quad
\label{gmm_3} \widehat{\mathbf{m}}\bigl(p,\widehat{\mathbf{u}},\widehat{\beta
}^{fs},\mathbf {u},\beta\bigr)_i = \int
_{\widehat{u}_l^i}^{\widehat{u}_h^i} \bigl(\log \bigl(\widehat{
\mathcal{L}}^n\bigl(p,u,\widehat{\beta}^{fs}
\bigr)'\bigr) - \log \bigl(\mathcal {L}(p,u,\beta)\bigr) \bigr)\,du,
\end{equation}
for $i=1,\ldots,K$, and set
\begin{eqnarray}
\label{eq:beta_gmm} &&\widehat{\beta}(p,\mathbf{u})
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= \mathop{\operatorname{argmin}}_{\beta\in(1,2)}
\widehat{\mathbf{m}}\bigl(p,\widehat{\mathbf {u}},\widehat{\beta
}^{fs},\mathbf{u},\beta\bigr)'\mathbf{W}^{-1}
\bigl(p,\widehat{\mathbf {u}},\widehat {\beta}^{fs}\bigr) \widehat{
\mathbf{m}}\bigl(p,\widehat{\mathbf{u}},\widehat {\beta }^{fs},
\mathbf{u},\beta\bigr).
\end{eqnarray}
Finally define the $K\times1$ vector $\mathbf{M}(p,\mathbf{u},\beta
)$ by
\begin{equation}
\label{gmm_5} \mathbf{M}(p,\mathbf{u},\beta)_i = \int
_{u_l^i}^{u_h^i}\nabla _{\beta
}\log\bigl(
\mathcal{L}(p,u,\beta)\bigr)\,du,\qquad i=1,\ldots,K.
\end{equation}
Then for $\beta\in(1,2)$, $p\in (\frac{\beta\beta'}{2(\beta
-\beta
')},\frac{\beta}{2} )$ and $\beta'<\beta/2$, we have
\begin{equation}
\label{gmm_6}
\sqrt{n} \bigl( \widehat{\beta}(p,\mathbf{u}) - \beta
\bigr) \stackrel{\mathcal{L}} {\longrightarrow} \sqrt{\mathbf{M}(p,\mathbf {u},
\beta)' \mathbf{W}^{-1}(p,\mathbf{u},\beta)\mathbf {M}(p,
\mathbf {u},\beta)}\times\mathcal{N},
\end{equation}
for $n\rightarrow\infty$, with $\mathcal{N}$ being a standard normal
random variable.
A consistent estimator for the asymptotic variance of $\widehat{\beta
}(p,\mathbf{u})$ is given by
\begin{equation}
\label{gmm_7} \mathbf{M}(p,\widehat{\mathbf{u}},\widehat{\beta})'
\mathbf {W}^{-1}(p,\widehat{\mathbf{u}},\widehat{\beta}) \mathbf {M}(p,
\widehat {\mathbf{u}},\widehat{\beta}),
\end{equation}
where $\mathbf{M}(p,\widehat{\mathbf{u}},\widehat{\beta})$ is defined
as $\mathbf{M}(p,\mathbf{u},\beta)$ with $\mathbf{u}$ and $\beta$
replaced by $\widehat{\mathbf{u}}$ and $\widehat{\beta}$.
\end{theorem}
Theorem~\ref{thm:gmm} allows us to adaptively choose the range of $u$
over which to match $\widehat{\mathcal{L}}^n(p,u,\widehat{\beta
}^{fs})'$ with its limit. This is convenient because the limiting
variance of $\widehat{\mathcal{L}}^n(p,u,\widehat{\beta}^{fs})'$
depends on $\beta$. For the same reason, the weight function in (\ref
{gmm_1}) optimally weights the moment conditions in the estimation. We
discuss the practical issues regarding the construction of $\widehat
{\mathbf{m}}(p,\widehat{\mathbf{u}},\widehat{\beta}^{fs},\mathbf
{u},\beta)$ in Section~\ref{sec:mc}.
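To fix ideas, the minimization in (\ref{eq:beta_gmm}) can be sketched as follows (our simplification, not the paper's implementation): we take the identity matrix in place of the optimal weight $\mathbf{W}^{-1}$, evaluate the moments by a midpoint quadrature rule, and minimize over a grid of candidate values of $\beta$:

```python
import math

def gmm_beta(p, u_l, u_h, log_Lhat, grid):
    """Grid-search sketch of (eq:beta_gmm) with identity weight matrix.

    log_Lhat maps u to the log of the (de-biased) empirical CF; u_l and u_h
    list the interval endpoints; grid holds the candidate beta values."""
    def log_L(u, beta):              # log L(p,u,beta) = -C_{p,beta} u^beta
        num = 2.0 ** p * math.gamma((1.0 + p) / 2.0) * math.gamma(1.0 - p / beta)
        den = math.sqrt(math.pi) * math.gamma(1.0 - p / 2.0)
        return -((num / den) ** (-beta / p)) * u ** beta

    def m(beta, a, b, nq=50):        # one moment, midpoint rule
        h = (b - a) / nq
        return h * sum(log_Lhat(a + (k + 0.5) * h) - log_L(a + (k + 0.5) * h, beta)
                       for k in range(nq))

    return min(grid, key=lambda beta: sum(m(beta, a, b) ** 2
                                          for a, b in zip(u_l, u_h)))
```

When `log_Lhat` is the exact limit $-C_{p,\beta_0}u^{\beta_0}$, the criterion vanishes only at $\beta_0$, so the grid search recovers the true value.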
We now illustrate the efficiency gains provided by the new method over
existing power variation based estimators of $\beta$. The power
variation estimator based on the differenced increments is given by
(see \cite{T13})
\begin{equation}
\label{eq:beta_pv} \widetilde{\beta}(p) = \frac{p\log(2)}{\log [\widetilde
{V}_2^n(p)/\widetilde{V}_1^n(p) ]}1_{\{\widetilde{V}_1^n(p)\neq
\widetilde{V}_2^n(p)\}},
\end{equation}
where
\begin{eqnarray}
\widetilde{V}_1^n(p) &=& \sum
_{i=2}^n\bigl|\Delta_i^nX-\Delta
_{i-1}^nX\bigr|^p,
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\widetilde{V}_2^n(p)
&= &\sum_{i=4}^n\bigl|\Delta_i^nX-
\Delta _{i-1}^nX+\Delta_{i-2}^nX-
\Delta_{i-3}^nX\bigr|^p.
\end{eqnarray}
In Figure~\ref{fig:beta_ase}, we plot the limiting standard deviation
of the estimators in (\ref{eq:beta_gmm}) and (\ref{eq:beta_pv}) for
different values of $\beta$. [The estimator in (\ref{eq:beta_pv}) is
derived under exactly the same assumptions for $X$ as our estimator
here.] The asymptotic standard deviation of $\widetilde{\beta}(p)$ is
computed from \cite{T13}. $\widehat{\beta}(p,u)$ is far less sensitive
to the choice of $p$ than $\widetilde{\beta}(p)$, with lower powers
yielding marginally more efficient $\widehat{\beta}(p,u)$. The new
estimator $\widehat{\beta}(p,u)$ provides nontrivial efficiency gains
irrespective of the values of $p$ and $\beta$. The gains are bigger for
high values of the jump activity. For example, for $\beta=1.75$,
$\widehat{\beta}(p,u)$ is around two times more efficient (in terms of
asymptotic standard deviation) than $\widetilde{\beta}(p)$.
\begin{figure}
\includegraphics{1327f01.eps}
\caption{Asymptotic standard deviation of jump activity
estimators. The straight line corresponds to the asymptotic standard
deviation of the characteristic function based estimator defined in
(\protect\ref{eq:beta_gmm}) and the $*$ line to the power variation based
estimator of \cite{T13} given in (\protect\ref{eq:beta_pv}) (when
$\sigma$ is
constant). For each case of $\beta$, the power $p$ ranges in the
interval $p\in (\frac{7}{40},\frac{19}{40} )\beta$. For the
estimator in (\protect\ref{eq:beta_gmm}), the vector $\mathbf{u}_l =
[0.1:0.05:5]$ and $\mathbf{u}_h = [0.15:0.05:5.05]$.}
\label{fig:beta_ase}
\end{figure}
\section{The limiting case of jump-diffusion}\label{sec:jd}
So far our analysis has been for the pure-jump case of $\beta\in(1,2)$.
We now look at the limiting case of $\beta=2$, which corresponds to $L$
in (\ref{eq:X}) being a Brownian motion. In this case the asymptotic
behavior of the high-frequency increments in (\ref{eq:ls}) holds with
$S$ being a Brownian motion. Thus deciding $\beta=2$ versus $\beta<2$
amounts to testing pure-jump versus jump-diffusion specification for
$X$. It turns out that when $\beta=2$, our estimation method can lead
to a faster rate of convergence than the $\sqrt{n}$ rate we have seen
for the case $\beta\in(1,2)$. This is unlike the power-variation based
estimation methods for which the rate of convergence is $\sqrt{n}$,
both for $\beta=2$ and $\beta<2$; see, for example,~\cite{TT09}.
The faster rate of convergence in the case $\beta=2$ can be achieved by
letting the argument $u$ of the empirical characteristic function
$\widehat{\mathcal{L}}^n(p,u)$ drift toward zero as $n\rightarrow
\infty$.
In this case, $\frac{-\log(\widehat{\mathcal
{L}}^n(p,u_n,2)')}{C_{p,2}u_n^2}$ and $\frac{-\log(\widehat{\mathcal
{L}}^n(p,\rho u_n,2)')}{C_{p,2}\rho^2u_n^2}$, for some $\rho>0$, are
asymptotically perfectly correlated, and their difference converges at
a faster rate. We note that this does not work in the pure-jump case of
$\beta<2$. To state the formal result we first introduce some notation.
For $S_1$, $S_2$ and $S_3$ being independent standard normal random
variables, we denote
\begin{eqnarray}
\widetilde{\xi}_1(p) &=& \biggl(
\frac{|S_1-S_2|^4}{\mu_{p,2}^{2}}- \frac{12}{\mu
_{p,2}^{2}}, \frac{|S_1-S_2|^p}{\mu_{p,2}^{p/2}}-1
\biggr)',
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\widetilde{\xi}_2(p) &= &\biggl(
\frac{|S_2-S_3|^4}{\mu_{p,2}^{2}}- \frac{12}{\mu
_{p,2}^{2}}, \frac{|S_2-S_3|^p}{\mu_{p,2}^{p/2}}-1
\biggr)',
\end{eqnarray}
and then set $\widetilde{\Xi}_i(p) = \mathbb{E} (\widetilde
{\xi
}_1(p)\widetilde{\xi}'_{1+i}(p) )$ for $i=0,1$. The difference
from the analogous expression for the case $\beta<2$ is in the first
terms of $\widetilde{\xi}_1(p)$ and $\widetilde{\xi}_2(p)$. Note that
the expression for the bias-correction remains exactly the same as it
involves only the variance and covariance of the second elements of
$\widetilde{\xi}_1(p)$ and $\widetilde{\xi}_2(p)$, which remain the
same as their pure-jump counterparts.
\begin{theorem}\label{thm:cont}
Suppose $X$ has dynamics given by (\ref{eq:X}) with $L$ being a
Brownian motion, $Y$ satisfying the corresponding condition for it in
Assumption~\ref{assA} and $\alpha$ and $\sigma$ satisfying Assumption~\ref{assB} for some
$r<2$. Suppose $p<1$, $k_n\sqrt{\Delta_n}\rightarrow0$ and
$u_n\rightarrow0$, and further
\begin{eqnarray}
\label{cont_1} \frac{ \Delta_n^{ ({p}/{\beta'}-{p}/{2}
)\wedge
{(p+1)}/{(r\vee1+1)} -\iota}\vee k_n^{- ({1}/{p}\wedge
{3}/{2} )+\iota} \vee(k_n\Delta_n)^{1-\iota}}{u_n^6\sqrt
{\Delta
_n}}&\rightarrow&0,
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\frac{(k_n\Delta_n)^{{1}/{r}\wedge
{(2-p)}/{2}-{1}/{2}}}{u_n^6}&\rightarrow& 0.
\end{eqnarray}
Then for some $\rho>0$
\begin{equation}
\label{cont_2} \widehat{\beta}^{fs}(p,u_n,\rho
u_n) - 2 = O_p\bigl(k_n^{-1}u_n^2
\bigr).
\end{equation}
Further, if for some initial estimator $\widehat{\beta}^{fs} -2 =
o_p(k_nu_n^{2}\sqrt{\Delta_n})$, then
\begin{equation}\qquad
\label{cont_3} \frac{\sqrt{n}}{u_n^2(1-\rho^2)} \bigl(\widehat{\beta}(p,u_n,\rho
u_n)-2 \bigr) \stackrel{\mathcal{L}} {\longrightarrow} -
\frac
{1}{\log
(\rho)} \biggl(\frac{1}{24C_{p,2}}Z_1-\frac{2}{p}C_{p,2}Z_2
\biggr),
\end{equation}
where $Z_1$ and $Z_2$ are two zero-mean normal random variables with
covariance given by
$\widetilde{\Xi}_0(p)+2\widetilde{\Xi}_1(p)$.
When $X$ is a L\'{e}vy process, the requirement for $k_n$ and $u_n$
reduces to
\begin{equation}
\label{cont_4} u_n\rightarrow0,\qquad \frac{ \Delta_n^{ ({p}/{\beta
'}\wedge
1-{p}/{2} ) -\iota}\vee k_n^{- ({1}/{p}\wedge
{3}/{2} )+\iota} }{u_n^6\sqrt{\Delta_n}}\rightarrow0.
\end{equation}
\end{theorem}
The rate of convergence of the estimator for $\beta$ is now $\sqrt
{n}u_n^{-2}$ and is faster than the one in Theorem~\ref{thm:gmm}, when
$u_n$ converges to zero. The latter is determined by the restriction in
(\ref{cont_1}), which in turn is governed by the presence of the
``residual'' term $Y$, the variation in $\sigma$ and the sampling
variation in measuring the scale via $V_i^n(p)$. For the condition to
be satisfied we need $p\in(1/2,1)$ and $\beta'<1$; that is, the jumps
in $X$ are of finite variation; for testing the null hypothesis of
presence of diffusion when the process can contain infinite variation
jumps under the null, see the recent work of \cite{JKL14}. Without any
prior knowledge on $\beta'$ and $r$, we can set $k_n$ according to
(\ref{clt_2}), with $\beta=2$, and then set $u_n\asymp\log(n)^{-1}$.
The requirement on $u_n$ can be further relaxed when $X$ is a L\'{e}vy
process as evident from (\ref{cont_4}). Finally, we can draw a parallel
between our finding for faster rate of convergence of the estimator of
$\beta$ when $\beta=2$ with the result in \cite{DM73,DM83} for faster
rate of convergence for the maximum likelihood estimator of the
stability index of i.i.d. $\beta$-stable random variables when $\beta=2$.
\section{Monte Carlo}\label{sec:mc}
We test the performance of the proposed method for jump activity
estimation on simulated data from the following model:
\begin{equation}
dX_t = \sigma_{t-}\,dL_t,\qquad d\sigma_t
= -0.03 \sigma_t\,dt+dZ_t,
\end{equation}
where $L$ and $Z$ are two L\'{e}vy processes independent of each other with
L\'{e}vy densities given by $\nu_L(x) = e^{-\lambda|x|} (\frac
{A_0}{|x|^{1+\beta}}+\frac{A_1}{|x|^{1+{\beta}/{3}}} )$ and
$\nu_Z(x) = \break 0.0293\frac{e^{-3x}}{x^{1.5}}1_{\{x>0\}}$, respectively.
$\sigma$ is a L\'{e}vy-driven Ornstein--Uhlenbeck process with a tempered
stable driving L\'{e}vy subordinator. The parameters governing the dynamics
of $\sigma$ imply $\mathbb{E}(\sigma_t) = 1$ and half-life of shock in
$\sigma$ of around one month (when unit of time is a day). $L$ is a
mixture of tempered stable processes with the parameter $\beta$
coinciding with the jump activity index of $X$. We fix $\lambda=
0.25$, and consider four cases for~$\beta$. In each of the cases we set
$A_0$ and $A_1$ so that $A_0\int_{\mathbb{R}}|x|^{1-\beta
}e^{-\lambda
|x|}\,dx = 1$ and $A_1\int_{\mathbb{R}}|x|^{1-{\beta
}/{3}}e^{-\lambda
|x|}\,dx = 0.2$. The four cases are: (1) $\beta= 1.05$ and $A_0 =
0.1299$, $A_1 = 0.0113$; (2) $\beta= 1.25$ and $A _0= 0.1443$, $A_1 =
0.0125$; (3) $\beta= 1.50$ and $A_0 = 0.1410$, $A_1 = 0.0141$ and (4)
$\beta= 1.75$ and $A_0 = 0.0975$, $A_1 = 0.0158$.
In the Monte Carlo we set $T = 10$ and $n = 100$ which corresponds
approximately to two weeks of 5-minute return data in a typical
financial setting. We further set $k_n = 50$ and $p=0.51$. The initial
estimator to construct the moments and the optimal weight matrix is
simply $\widehat{\beta}^{fs}(p,u,v)$ with $u = 0.1$ and $v=1.1$. If
$p\geq\widehat{\beta}^{fs}(p,u,v)/2$, then we reduce the power to $p =
\widehat{\beta}^{fs}(p,u,v)/4$. Based on the initial beta estimator, we
estimate the values of $u$ for which $\mathcal{L}(p,u,\beta)=0.95$ and
$\mathcal{L}(p,u,\beta)=0.25$, and then split this interval into five
equidistant regions, which are used in constructing the moment vector in
(\ref{gmm_3}).
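The adaptive choice of moment intervals just described can be coded directly (our sketch; function and parameter names are hypothetical): the endpoints solve $\mathcal{L}(p,u,\beta)=\ell$, that is, $u=(-\log\ell/C_{p,\beta})^{1/\beta}$, and the resulting interval is split into $K$ equal pieces:

```python
import math

def u_regions(p, beta, K=5, level_lo=0.95, level_hi=0.25):
    """Moment intervals (u_l^i, u_h^i): solve L(p,u,beta) = level for the two
    endpoints, then split [u_lo, u_hi] into K equidistant regions."""
    num = 2.0 ** p * math.gamma((1.0 + p) / 2.0) * math.gamma(1.0 - p / beta)
    den = math.sqrt(math.pi) * math.gamma(1.0 - p / 2.0)
    C = (num / den) ** (-beta / p)          # C_{p,beta} of eq. (eq:C)
    u_lo = (-math.log(level_lo) / C) ** (1.0 / beta)
    u_hi = (-math.log(level_hi) / C) ** (1.0 / beta)
    step = (u_hi - u_lo) / K
    return [(u_lo + k * step, u_lo + (k + 1) * step) for k in range(K)]
```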
Regarding the number of moment conditions, $K$, in the construction of
our estimator, we should keep in mind the following. Larger $K$ helps
improve efficiency of the estimator as our equal weighting of the
characteristic function within each moment condition is suboptimal.
However, the feasible estimate of the optimal weight matrix is unstable
in small samples when $K$ is large. (This is similar to ``curse of
dimensionality'' problems occurring in related contexts; see, e.g.,
\cite{FLK} and \cite{K}.) Moreover, since the characteristic function
is smooth, one typically does not need many moment conditions to gain
efficiency. For example, we also experimented in the Monte Carlo with
ten moment conditions (by splitting the region of $u$ into ten
equidistant regions). The performance of the estimator based on the ten
moment conditions was very similar to the one based on the five moment
conditions whose performance we summarize below.
The results from the Monte Carlo are reported in Table~\ref{tb:mc}. For
comparison, we also report results for $\widetilde{\beta}(p)$ where $p$
is set to the level which minimizes the corresponding asymptotic
standard deviation in Figure~\ref{fig:beta_ase}. We notice satisfactory
finite sample performance of $\widehat{\beta}(p,\mathbf{u})$. In all
cases for $\beta$, $\widehat{\beta}(p,\mathbf{u})$ exhibits relatively
small upward biases. These biases, however, are well below those of
$\widetilde{\beta}(p)$. We note that the finite sample bias of
$\widehat
{\beta}(p,\mathbf{u})$ can be significantly reduced if, similar to
$\widetilde{\beta}(p)$, one uses an adaptive choice of power in the
range $(\beta/4,\beta/3)$. The superiority of $\widehat{\beta
}(p,\mathbf
{u})$ holds also in terms of precision in estimating $\beta$, with
inter-quantile ranges of $\widehat{\beta}(p,\mathbf{u})$ typically well
below those of $\widetilde{\beta}(p)$.
\begin{table}
\caption{Monte Carlo results}\label{tb:mc}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{}}
\hline
& \multicolumn{3}{c}{$\bolds{\widehat{\beta}(p,\mathbf{u})}$} &
\multicolumn{3}{c}{$\bolds{\widetilde{\beta}(p)}$}\\[-6pt]
& \multicolumn{3}{c}{\hrulefill} &
\multicolumn{3}{c@{}}{\hrulefill}\\
\textbf{Case} & \textbf{Median} & \textbf{IQR} & \textbf{MAD} & \textbf{Median} & \textbf{IQR} & \textbf{MAD}\\
\hline
$\beta= 1.05$ & $1.0801$ & $0.0791$ & $0.0518$ & $1.1154$ &
$0.0925$ & $0.0792$\\
$\beta= 1.25$ & $1.3058$ & $0.0817$ & $0.0680$ & $1.3229$ &
$0.1158$ & $0.0932$\\
$\beta= 1.50$ & $1.5398$ & $0.0886$ & $0.0622$ & $1.5767$ &
$0.1405$ & $0.1072$\\
$\beta= 1.75$ & $1.7782$ & $0.0806$ & $0.0536$ & $1.8196$ &
$0.1704$ & $0.1183$\\
\hline
\end{tabular*}
\tabnotetext[]{}{\textit{Note}: IQR is the inter-quartile range, and MAD is the mean
absolute deviation around the true value. The power $p$ for $\widetilde
{\beta}(p)$ is set to the value which minimizes the corresponding
asymptotic standard deviation displayed in Figure~\ref{fig:beta_ase}.}
\end{table}
\section{Empirical application}\label{sec:emp}
We now apply the developed inference procedures on high-frequency data
for the VIX index. The VIX index is an option-based measure of the
volatility of the market (the S\&P 500 index). It serves as a popular
indicator for investors' uncertainty, and it is used as the underlying
asset for many volatility-based derivative contracts traded in the
financial exchanges. Earlier work, consistent with parametric models
for volatility, has provided evidence that the VIX index is a pure-jump
It\^o semimartingale. Here, we estimate its jump activity index. The
estimation is based on $5$-minute sampled data during the trading hours
for the year $2010$. As in the Monte Carlo, we split the year into
intervals of $10$ days (two weeks) and estimate the jump activity over
each of them. The moments, the power $p$ and the block size $k_n$ are
selected in the same way as in the Monte Carlo. Estimation results are
presented in Figure~\ref{fig:emp}. The estimated jump activity index
takes values around $1.6$. Overall, our results support a pure-jump
specification of the VIX index.
\begin{figure}
\includegraphics{1327f02.eps}
\caption{Jump Activity for the VIX Index. Estimation is done
over periods of $10$ days in the year $2010$. In the estimation, the
moments, the power $p$ and the block size $k_n$ are selected as in the
Monte Carlo.}
\label{fig:emp}
\end{figure}
\section{Proofs}\label{sec:proof}
In the proofs we use the shorthand notation $\mathbb{E}_i^n(\cdot)
\equiv\mathbb{E}(\cdot|\mathcal{F}_{i\Delta_n})$ and $\mathbb
{P}_i^n(\cdot) \equiv\mathbb{P}(\cdot|\mathcal{F}_{i\Delta_n})$. We
also denote with $K$ a positive constant that does not depend on $n$
and $u$ and might change from line to line in the inequalities that
follow. When we want to highlight that the constant depends only on
some parameters $a$ and $b$, we write $K_{a,b}$.
\subsection{Decompositions and additional notation}
In what follows it is convenient to extend appropriately the
probability space and then decompose the driving L\'{e}vy process $L$
as follows:
\begin{equation}
\label{proof:jd} L_t+\widehat{S}_t = S_t+
\widetilde{S}_t,
\end{equation}
where $S$, $\widehat{S}$ and $\widetilde{S}$ are pure-jump L\'{e}vy
processes with the first two characteristics zero [with respect to the
truncation function $\kappa(\cdot)$] and L\'{e}vy densities $\frac
{A}{|x|^{1+\beta}}$, $2|\nu'(x)|1_{\{\nu'(x)<0\}}$ and $|\nu'(x)|$,
respectively. We denote the associated counting jump measures with $\mu
$, $\mu_1$ and $\mu_2$. (Note that there can be dependence between
$\mu
$, $\mu_1$ and $\mu_2$.)
$S$ is a $\beta$-stable process, and $\widehat{S}$ and $\widetilde{S}$
are ``residual'' components whose effect on our statistic, as will be
shown, is negligible (under suitable conditions). The proof of the
decomposition in (\ref{proof:jd}) as well as the explicit construction
of $S$, $\widehat{S}$ and $\widetilde{S}$ can be found in Section~1 of
the supplementary Appendix of \cite{TT12}.
We now introduce some additional notation that will be used throughout
the proofs. We denote for $i=k_n+3,\ldots,n$,
\begin{eqnarray*}
\widehat{V}_i^n(p) &=& \frac{1}{k_n}\sum
_{j=i-k_n-1}^{i-2}|\sigma _{(j-2)\Delta_n-}|^p\bigl|
\Delta_j^nS-\Delta_{j-1}^nS\bigr|^p,
\\
\overline{V}_i^n(p) &= &\frac{1}{k_n}\sum
_{j=i-k_n-1}^{i-2}\frac
{|\Delta
_j^nS-\Delta_{j-1}^nS|^p}{\mu_{p,\beta}^{p/\beta}},
\\
\dot{V}_i^n(p) &=& \sum
_{j=i-k_n-1}^{i-2} \biggl\{\frac{[(i-j-4)\vee
0+1_{\{j<i-3\}}]}{k_n} \bigl(|
\sigma_{j\Delta_n-}|^p - |\sigma _{(j-2)\Delta_n-}|^p
\bigr)
\\
&&\hspace*{103pt}{} + \frac{ (|\sigma_{(j-1)\Delta_n-}|^p-|\sigma_{(j-2)\Delta
_n-}|^p)1_{\{
j<i-2\}} }{k_n} \biggr\}\\
&&\hspace*{38pt}{}\times \bigl|\Delta_j^nS-
\Delta_{j-1}^nS\bigr|^p,
\\
|\overline{\sigma}|_i^p& =& \frac{1}{k_n}\sum
_{j=i-k_n-1}^{i-2}|\sigma _{(j-2)\Delta_n-}|^p.
\end{eqnarray*}
We further denote the function
\[
f_{i,u}(x) = \exp \biggl( -\frac{C_{p,\beta}u^{\beta}|\sigma
_{(i-2)\Delta
_n-}|^{\beta}}{x^{\beta/p} } \biggr),
\]
and direct computation yields
\[
\cases{\displaystyle
f_{i,u}'(x) =
\frac{\beta}{p}f_{i,u}(x)\frac{
C_{p,\beta
}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta} }{x^{\beta/p+1} },
\vspace*{2pt}\cr
\displaystyle f_{i,u}^{\prime\prime}(x) = f_{i,u}(x) \biggl(
\frac{\beta}{p}\frac{
C_{p,\beta
}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta} }{x^{\beta/p+1} } \biggr)^2 \vspace*{2pt}\cr
\displaystyle \hspace*{43pt}{}- f_{i,u}(x)
\frac{\beta}{p} \biggl( \frac{\beta}{p}+1 \biggr)\frac{
C_{p,\beta}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta} }{ x^{\beta/p+2}
}.}
\]
We note
\begin{equation}
\label{decomp_1} \sup_{x\in\mathbb{R}_+}\bigl|f_{i,u}(x) +
f_{i,u}'(x) + f_{i,u}^{\prime\prime}(x) +
f_{i,u}^{\prime\prime\prime}(x)\bigr|<K_u,
\end{equation}
where the positive constant $K_u$ depends only on $u$ and is finite as
soon as $u$ is bounded away from zero.
With this notation, we make the following decomposition for any $u\in
\mathbb{R}_+$:
\[
\widehat{\mathcal{L}}^n(p,u) - \mathcal{L}(p,u,\beta) =
\frac
{1}{n-k_n-2} \Biggl[\widehat{Z}_1^n(u)+
\widehat{Z}_2^n(u)+\sum_{j=1}^4R_j^n(u)
\Biggr],
\]
where $\widehat{Z}_j^n(u) = \sum_{i=k_n+3}^nz_i^j(u)$ for $j=1,2$ with
\begin{eqnarray*}
z_i^1(u)& =& \cos \biggl(u\frac{\sigma_{(i-2)\Delta_n-}(\Delta
_i^nS-\Delta
_{i-1}^nS)}{ (V_i^n(p))^{1/p} } \biggr) -
\exp \biggl(-\frac{A_{\beta
}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta} }{\Delta
_n^{-1}(V_i^n(p))^{\beta/p}} \biggr),
\\
z_i^2(u)&= & \exp \biggl(-\frac{C_{p,\beta}u^{\beta}|\sigma
_{(i-2)\Delta
_n-}|^{\beta} }{\Delta_n^{-1} ( |\overline{\sigma
}|_i^p\overline
{V}_i^n(p) )^{\beta/p}} \biggr) -
\exp \biggl( -\frac
{C_{p,\beta
}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta}}{ (|\overline
{\sigma
}|_i^p )^{\beta/p}} \biggr),
\end{eqnarray*}
and $R_j^n(u) = \sum_{i=k_n+3}^nr_i^j(u)$ for $j=1,2,3,4$ with
\begin{eqnarray*}
r_i^1(u)& =&\cos \biggl(u\frac{\Delta_i^nX-\Delta_{i-1}^nX}{
(V_i^n(p))^{1/p} } \biggr) - \cos
\biggl(u\frac{\sigma_{(i-2)\Delta
_n-}(\Delta_i^nS-\Delta_{i-1}^nS)}{ (V_i^n(p))^{1/p} } \biggr),
\\
r_i^2(u) &=& \exp \biggl(-\frac{A_{\beta}u^{\beta}|\sigma
_{(i-2)\Delta
_n-}|^{\beta} }{\Delta_n^{-1}(V_i^n(p))^{\beta/p}} \biggr) -
\exp \biggl(-\frac{A_{\beta}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta}
}{\Delta
_n^{-1}(\widehat{V}_i^n(p))^{\beta/p}} \biggr),
\\
r_i^3(u)& =& \exp \biggl(-\frac{A_{\beta}u^{\beta}|\sigma
_{(i-2)\Delta
_n-}|^{\beta} }{\Delta_n^{-1}(\widehat{V}_i^n(p))^{\beta/p}} \biggr) -
\exp \biggl(- \frac{C_{p,\beta}u^{\beta}|\sigma_{(i-2)\Delta
_n-}|^{\beta
}}{\Delta_n^{-1} ( |\overline{\sigma}|_i^p\overline
{V}_i^n(p)
)^{\beta/p} } \biggr),
\\
r_i^4(u) &=& \exp \biggl( -\frac{C_{p,\beta}u^{\beta}|\sigma
_{(i-2)\Delta
_n-}|^{\beta}}{ (|\overline{\sigma}|_i^p )^{\beta/p}} \biggr) -
\exp \bigl(-C_{p,\beta}u^{\beta} \bigr).
\end{eqnarray*}
We finally introduce the following: $\overline{Z}_1^n(u) = \sum_{i=k_n+3}^n\overline{z}_i^1(u)$,
$\overline{Z}_2^{(a,n)}(u) = \break \sum_{i=k_n+3}^n\overline{z}_i^{(a,2)}(u)$ and $\overline{Z}_2^{(b,n)}(u) =
\sum_{i=k_n+3}^n\overline{z}_i^{(b,2)}(u)$ where
\begin{eqnarray*}
\overline{z}_i^{1}(u) &=& \cos \bigl(u\Delta_n^{-1/\beta}
\mu _{p,\beta
}^{-1/\beta}\bigl(\Delta_i^nS-
\Delta_{i-1}^nS\bigr) \bigr) - \mathcal {L}(p,u,\beta),
\\
\overline{z}_i^{(a,2)}(u)& =&G(p,u,\beta) \bigl(
\Delta_n^{-p/\beta
}\overline{V}_i^n(p) -1
\bigr),\\
\overline{z}_i^{(b,2)}(u) &= &\tfrac
{1}{2}H(p,u,
\beta) \bigl( \Delta_n^{-p/\beta}\overline{V}_i^n(p)
-1 \bigr)^2.
\end{eqnarray*}
\subsection{Localization}
We prove results under the following strengthened version of Assumption~\ref{assB}:
\renewcommand{\theassumption}{SB}
\begin{assumption}\label{assSB} We have Assumption \ref{assB} and in addition:
\begin{longlist}[(a)]
\item[(a)] the processes $|\sigma_t|$ and $|\sigma_t|^{-1}$
are uniformly bounded;
\item[(b)] the processes $b^{\alpha}$ and $b^{\sigma}$ are
uniformly bounded;
\item[(c)] $|\delta^{\alpha}(t,x)|+|\delta^{\sigma
}(t,x)|\leq
\gamma(x)$ for all $t$, where $\gamma(x)$ is a deterministic bounded
function on $\mathbb{R}$ with $\int_{\mathbb{R}}|\gamma
(x)|^{r+\iota
}\lambda(dx)<\infty$ for arbitrarily small $\iota>0$ and some $0\leq
r\leq\beta$;
\item[(d)] the coefficients in the It\^o semimartingale
representation of $b^{\alpha}$ and $b^{\sigma}$ satisfy the analogues
of conditions (b) and (c) above;
\item[(e)] the process $\int_{\mathbb{R}}(|x|^{\beta
'+\iota
}\wedge1)\nu_t^Y(dx)$ is bounded, and the jumps of $\widehat{S}$,
$\widetilde{S}$ and $Y$ are bounded.
\end{longlist}
\end{assumption}
Extending the results to the case of the more general Assumption~\ref{assB}
follows by standard localization arguments given in Section~4.4.1 of
\cite{JP}.
\subsection{Preliminary results}
The strategy of the proofs is to bound the terms $R_j^n(u)$ for
$j=1,2,3,4$ as well as $\widehat{Z}_1^n(u) - \overline{Z}_1^n(u)$ and
$\widehat{Z}_2^n(u) - \overline{Z}_2^{(a,n)}(u) - \overline
{Z}_2^{(b,n)}(u)$, and to derive the asymptotic limits of $\overline
{Z}_1^n(u)$, $\overline{Z}_2^{(a,n)}(u)$ and $\overline
{Z}_2^{(b,n)}(u)$. We do this in a sequence of lemmas starting with one
containing some preliminary bounds needed for the subsequent lemmas.
\begin{lemma}\label{lema:prelim-a}
Under Assumptions \ref{assA} and \ref{assSB} and $k_n\asymp n^{\varpi}$ for $\varpi\in
(0,1)$, we have for $0<p<\beta$, $\iota>0$ arbitrarily small and
$1\leq
x<\frac{\beta}{p}$ and $y\geq1$,
\begin{eqnarray}
\label{prelim_a-1}
&&\Delta_n^{-p/\beta}
\mathbb{E}\bigl|V_i^n(p) - \widehat{V}_i^n(p)\bigr|
\leq K\alpha_n,
\\
\nonumber
&&\alpha_n = \frac{\Delta_n^{(2-1/\beta)(1+(p-1/2)\wedge0-\iota
)}}{\sqrt
{k_n}}\vee\Delta_n^{{1}/{\beta}-\iota}
\vee\Delta _n^{
{p}/{\beta'}\wedge1-{p}/{\beta}-\iota}\\
&&\hspace*{24pt}{}\vee\Delta_n^{
{(p+1)}/{(\beta+1)}-\iota},\nonumber
\\
\label{prelim_a-2}
&&\mathbb{E}\bigl\llvert \Delta_n^{-p/\beta}
\widehat{V}_i^n(p) - \mu _{p,\beta
}^{p/\beta}|
\overline{\sigma}|_i^p\bigr\rrvert ^{x}+
\mathbb{E}\bigl\llvert \Delta_n^{-p/\beta}\overline{V}_i^n(p)
- 1\bigr\rrvert ^{x}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad\leq K\cases{
k_n^{-x/2},&\quad $\mbox{if $\beta/p>2$},$
\vspace*{2pt}\cr
k_n^{1-x},&\quad $\mbox{if $\beta/p\leq2$},$}
\\
\label{prelim_a-3}
&&\bigl\llvert \mathbb{E}_{i-k_n-3}^n\bigl(|
\overline{\sigma}|_i^p - |\sigma _{(i-2)\Delta_n-}|^p
\bigr)\bigr\rrvert \leq Kk_n\Delta_n,
\\
\label{prelim_a-3-b}
&& \mathbb{E}_{i-k_n-3}^n\bigl\llvert |\overline{
\sigma}|_i^p - |\sigma _{(i-2)\Delta_n-}|^p
\bigr\rrvert ^y\leq K(k_n\Delta_n)^{{y}/{r}\wedge
1-\iota},
\\
\label{prelim_a-4}
&&\Delta_n^{-p/\beta}\bigl\llvert
\mathbb{E}_{i-k_n-3}^n \bigl(\widehat {V}_i^n(p)
- \mu_{p,\beta}^{p/\beta}|\overline{\sigma }|_i^p
\overline {V}_i^n(p) - \dot{V}_i^n(p)
\bigr) \bigr\rrvert \leq Kk_n\Delta_n,
\\
\label{prelim_a-4-b}
&& \Delta_n^{-xp/\beta}\mathbb{E}\bigl\llvert
\widehat{V}_i^n(p) - \mu _{p,\beta
}^{p/\beta}|
\overline{\sigma}|_i^p\overline{V}_i^n(p)
- \dot {V}_i^n(p)\bigr\rrvert ^{x}\leq
K(k_n\Delta_n)^{{x}/{r}\wedge
1-\iota},
\\
\label{prelim_a-4-c}
&& \Delta_n^{-xp/\beta}\mathbb{E}\bigl\llvert
\dot{V}_i^n(p)\bigr\rrvert ^{x}\leq K
\Delta_n^{{(\beta-xp)}/{\beta}\wedge{x}/{r}-\iota}.
\end{eqnarray}
\end{lemma}
\begin{pf}
We start with
(\ref{prelim_a-1}). We apply exactly the same decomposition and bounds
as for the term $A_3$ in Section~5.2.3 in \cite{T13} to get the result
in (\ref{prelim_a-1}).
We continue with (\ref{prelim_a-2}). Without loss of generality we
assume $k_n\geq2$, and we denote the two sets
\[
\label{proof_a_1} \cases{
\displaystyle J_i^{e}
= \biggl\{i-k_n-1+2k\dvtx k=0,\ldots,\biggl\lfloor \frac
{k_n-1}{2}
\biggr\rfloor \biggr\},
\vspace*{2pt}\cr
\displaystyle J_i^{o} = \biggl\{i-k_n-1+2k+1\dvtx k=0,
\ldots,\biggl\lfloor\frac{k_n-2}{2}\biggr\rfloor \biggr\}.}
\]
With this notation, we can decompose $\widehat{V}_i^{n}(p)$ into
\begin{eqnarray*}
\label{proof_a_2} \widehat{V}_i^{(e,n)}(p)& =& \frac{1}{k_n}
\sum_{j\in J_i^e}|\sigma _{(j-2)\Delta_n-}|^p\bigl|
\Delta_j^nS-\Delta_{j-1}^nS\bigr|^p,\\
\widehat {V}_i^{(o,n)}(p) &=& \widehat{V}_i^{n}(p)
- \widehat {V}_i^{(e,n)}(p).
\end{eqnarray*}
We further denote $|\overline{\sigma}|_{e,i}^p =
\frac{1}{k_n}\sum_{j\in J_i^e}|\sigma_{(j-2)\Delta_n-}|^p$ and $|\overline{\sigma
}|_{o,i}^p =\break \frac{1}{k_n}\sum_{j\in J_i^o}|\sigma_{(j-2)\Delta
_n-}|^p$. Using the triangular inequality, we then have
\begin{eqnarray*}
\label{proof_a_3}
&&\bigl\llvert \Delta_n^{-p/\beta}
\widehat{V}_i^{n}(p) - \mu_{p,\beta
}^{p/\beta
}|
\overline{\sigma}|_i^p\bigr\rrvert
\\
&&\qquad \leq \bigl\llvert \Delta_n^{-p/\beta}\widehat{V}_i^{(e,n)}(p)
- \mu_{p,\beta
}^{p/\beta}|\overline{\sigma}|_{e,i}^p
\bigr\rrvert + \bigl\llvert \Delta_n^{-p/\beta}
\widehat{V}_i^{(o,n)}(p) - \mu_{p,\beta
}^{p/\beta}|
\overline{\sigma}|_{o,i}^p\bigr\rrvert .
\end{eqnarray*}
Now, since $\mathbb{E}_{j-2}^n|\Delta_j^nS-\Delta_{j-1}^nS|^p =
\Delta
_n^{p/\beta}\mu_{p,\beta}^{p/\beta}$, the sums $\Delta_n^{-p/\beta
}\widehat{V}_i^{(e,n)}(p) - \mu_{p,\beta}^{p/\beta}|\overline
{\sigma
}|_{e,i}^p$ and $\Delta_n^{-p/\beta}\widehat{V}_i^{(o,n)}(p) - \mu
_{p,\beta}^{p/\beta}|\overline{\sigma}|_{o,i}^p$ are discrete
martingales. From here, the result in (\ref{prelim_a-2}) for the case
$\beta/p\leq2$ follows by a direct application of the
Burkholder--Davis--Gundy inequality and the algebraic inequality
\begin{equation}
\label{proof_a_3-a} \biggl|\sum_i|a_i|\biggr|^p
\leq\sum_{i}|a_i|^p\qquad
\forall p\in(0,1] \mbox{ and any real-valued $\{a_{i}
\}_{i\geq1}$}.
\end{equation}
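For completeness, (\ref{proof_a_3-a}) follows by induction from the two-term case: for $a,b>0$ and $p\in(0,1]$ (the case $ab=0$ being trivial),
\[
(a+b)^p = (a+b) (a+b)^{p-1}\leq a\,a^{p-1}+b\,b^{p-1} = a^p+b^p,
\]
since $t\mapsto t^{p-1}$ is nonincreasing on $(0,\infty)$.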
We are left with the case $\beta/p>2$. We only show the bound involving
the term $\widehat{V}_i^{(e,n)}(p)$, with the result for $\widehat
{V}_i^{(o,n)}(p)$ being shown analogously. We first denote $\Delta
_n^{-p/\beta}\widehat{V}_i^{(e,n)}(p) - \mu_{p,\beta}^{p/\beta
}|\overline{\sigma}|_{e,i}^p = \frac{1}{k_n}\sum_{j\in J_i^e}\zeta_j^n$
where $\zeta_j^n = \Delta_n^{-p/\beta}\times\break |\sigma_{(j-2)\Delta
_n-}|^p (
|\Delta_j^nS-\Delta_{j-1}^nS|^p - \mu_{p,\beta}^{p/\beta} )$.
Applying the Burkholder--Davis--Gundy inequality, we have
\[
\mathbb{E}\biggl| \sum_{j\in J_i^e}\zeta_j^n
\biggr|^x \leq K\mathbb{E} \biggl(\sum_{j\in J_i^e}
\bigl(\zeta_j^n\bigr)^2 \biggr)^{x/2}.
\]
If $x\leq2$, the result in (\ref{prelim_a-2}) then follows by Jensen's
inequality. If $x>2$, applying again Burkholder--Davis--Gundy, we have
\begin{eqnarray}
\label{proof_a_3-b}
&&\mathbb{E} \biggl(\sum_{j\in J_i^e}
\bigl(\zeta_j^n\bigr)^2 \biggr)^{x/2}
\nonumber\\
&&\qquad \leq K\mathbb{E} \biggl(\sum_{j\in J_i^e}\bigl(\bigl(
\zeta_j^n\bigr)^2-\mathbb
{E}_{j-2}^n\bigl(\zeta_j^n
\bigr)^2\bigr) \biggr)^{x/2}+K\mathbb{E} \biggl(\sum
_{j\in
J_i^e}\mathbb{E}_{j-2}^n\bigl(
\zeta_j^n\bigr)^2 \biggr)^{x/2}
\\
&&\qquad \leq K\mathbb{E} \biggl(\sum_{j\in J_i^e}\bigl(\bigl(
\zeta_j^n\bigr)^2-\mathbb
{E}_{j-2}^n\bigl(\zeta_j^n
\bigr)^2\bigr)^2 \biggr)^{x/4}+Kk_n^{x/2},\nonumber
\end{eqnarray}
where we also made use of the fact that the $\beta$-stable random
variable has finite $p$th absolute moment as soon as $p\in(0,\beta)$.
If $x\leq4$, the result will then follow from an application of (\ref
{proof_a_3-a}). If $x>4$, then we repeat (\ref{proof_a_3-b}) with $x$
replaced by $x/2$ and $\zeta_j^n$ replaced by $(\zeta_j^n)^2-\mathbb
{E}_{j-2}^n(\zeta_j^n)^2$. We continue in this way, applying $k = \sup
\{i\dvtx2^i<x \}$ times (\ref{proof_a_3-b}) and then
(\ref{proof_a_3-a}). This shows (\ref{prelim_a-2}).
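As an illustration of the first step, in the case $x\leq2$ Jensen's inequality (concavity of $y\mapsto y^{x/2}$) gives
\[
\mathbb{E} \biggl(\sum_{j\in J_i^e}\bigl(\zeta_j^n\bigr)^2 \biggr)^{x/2}\leq \biggl(\sum_{j\in J_i^e}\mathbb{E}\bigl(\zeta_j^n\bigr)^2 \biggr)^{x/2}\leq Kk_n^{x/2},
\]
using that $\mathbb{E}(\zeta_j^n)^2$ is bounded when $\beta/p>2$; dividing by $k_n^{x}$ then produces the rate $k_n^{-x/2}$ in (\ref{prelim_a-2}).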
We continue with (\ref{prelim_a-3}) and (\ref{prelim_a-3-b}). We make
use of the following algebraic inequality:
\[
\label{proof_a_4} \bigl\llvert |a+b|^p-|a|^p - p
\operatorname{sign}\{a\}|a|^{p-1}b\bigr\rrvert \leq K_p|a|^{p-2}|b|^2,
\]
for any $a,b\in\mathbb{R}$ with $a\neq0$, $0<p<1$ and $K_p$ that depends only
on $p$. Applying this inequality as well as the triangular inequality,
and using the fact that under Assumption \ref{assSB} the process $|\sigma|$ is
bounded from below, we have
\begin{eqnarray}
\label{proof_a_5} \bigl\llvert \mathbb{E}_s\bigl(|
\sigma_t|^p-|\sigma_s|^p\bigr)
\bigr\rrvert &\leq& K|t-s|,\qquad 0\leq s\leq t,
\\
\mathbb{E}_s\bigl\llvert |\sigma_t|^p-|
\sigma_s|^p\bigr\rrvert ^q&\leq& K\mathbb
{E}_s \bigl(|\sigma_t-\sigma_s|^q
\vee|\sigma_t-\sigma _s|^{2q} \bigr),
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{ 0\leq s
\leq t, q\geq1,}
\end{eqnarray}
with some constant $K$ that does not depend on $s$ and $t$. From here
(\ref{prelim_a-3}) follows. Application of Corollary~2.1.9 of \cite{JP}
further gives
\begin{equation}
\label{proof_a_6} \mathbb{E}_s|\sigma_t-
\sigma_s|^q\leq K|t-s|^{{q}/{r}\wedge
1-\iota},\qquad 0\leq s\leq t,
q\geq1,
\end{equation}
and applying this inequality with $q=y$ and $q=2y$, for $y$ the
constant in (\ref{prelim_a-3-b}), we have that result.
We proceed by showing the bounds in (\ref{prelim_a-4})--(\ref
{prelim_a-4-c}). We can decompose $|\overline{\sigma}|_i^p - |\sigma
_{(k-2)\Delta_n-}|^p = \sum_{j=1}^4a_k^j$ for $k=i-k_n-1,\ldots,i-2$ and
\[
\label{proof_a_7} \cases{
\displaystyle a_k^1
= \frac{1}{k_n}\sum_{j=k+3}^{i-2} \bigl(|
\sigma _{(j-2)\Delta_n-}|^p-|\sigma_{k\Delta_n-}|^p
\bigr),
\vspace*{2pt}\cr
\displaystyle a_k^2 = \frac
{(i-k-4)\vee0}{k_n} \bigl(|\sigma_{k\Delta_n-}|^p
- |\sigma _{(k-2)\Delta
_n-}|^p \bigr),
\vspace*{2pt}\cr
\displaystyle a_k^3 = \bigl(\bigl (|\sigma_{k\Delta_n-}|^p - |\sigma
_{(k-2)\Delta_n-}|^p\bigr)1_{\{k<i-3\}}\vspace*{2pt}\cr
\hspace*{28pt}{}+\bigl(|\sigma_{(k-1)\Delta
_n-}|^p-|\sigma
_{(k-2)\Delta_n-}|^p\bigr)1_{\{k<i-2\}} \bigr)/{k_n},
\vspace*{2pt}\cr
\displaystyle a_k^4 = \frac{1}{k_n}\sum
_{j=i-k_n-1}^k \bigl(|\sigma_{(j-2)\Delta
_n-}|^p-|
\sigma_{(k-2)\Delta_n-}|^p \bigr),}
\]
with $a_k^1$ being zero for $k\geq i-4$. Using the law of iterated
expectations and the bound in (\ref{prelim_a-3-b}), we have for
$k=i-k_n-1,\ldots,i-2$,
\begin{equation}
\label{proof_a_8} \Delta_n^{-xp/\beta}\mathbb{E} \bigl(\bigl\llvert
a_k^1+a_k^4\bigr\rrvert \bigl|
\Delta _k^nS-\Delta_{k-1}^nS\bigr|^p
\bigr)^x\leq K(k_n\Delta_n)^{
{x}/{r}\wedge1-\iota}.
\end{equation}
Using the H\"{o}lder inequality, the bound in (\ref{proof_a_5}), as well
as the fact that a stable random variable has finite absolute moments
for powers less than $\beta$, we have for $k=i-k_n-1,\ldots,i-2$,
\begin{eqnarray}
\label{proof_a_9} &&\Delta_n^{-xp/\beta}\mathbb{E} \bigl(\bigl\llvert
a_k^2+a_k^3\bigr\rrvert\bigl |
\Delta _k^nS-\Delta_{k-1}^nS\bigr|^p
\bigr)^x
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad\leq K\Delta_n^{ ({(\beta x/r)}/{(\beta-xp)}\wedge1 ){(\beta-xp)}/{\beta} -\iota}.
\end{eqnarray}
Combining (\ref{proof_a_8}) and (\ref{proof_a_9}), we get the results
in (\ref{prelim_a-4-b}) and (\ref{prelim_a-4-c}).
Further, using (\ref{proof_a_5}), we get for $k=i-k_n-1,\ldots,i-2$,
\begin{equation}
\label{proof_a_10} \Delta_n^{-p/\beta}\bigl\llvert
\mathbb{E}_{i-k_n-3} ^n \bigl(\bigl(a_k^1+a_k^4
\bigr)\bigl|\Delta_k^nS-\Delta_{k-1}^nS\bigr|^p
\bigr)\bigr\rrvert \leq Kk_n\Delta_n.
\end{equation}
From here we get the result in (\ref{prelim_a-4}).
\end{pf}
\begin{lemma}\label{lema:prelim-b}
Under Assumptions \ref{assA} and \ref{assSB} and $k_n\asymp n^{\varpi}$ for $\varpi\in
(0,1)$, we have for $0<p<\beta$, $\iota>0$ arbitrarily small and every
$0<a<b<\infty$,
\begin{eqnarray}\qquad\hspace*{4pt}
\label{prelim_b-1}
&& \frac{1}{n-k_n-2}\mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|R_1^n(u)\bigr|
\Bigr)
\leq K_{a,b} \bigl(\alpha_n\vee k_n^{- ({\beta}/{(2p)}\wedge{(\beta-p)}/{p} )+\iota}
\bigr),
\\
\label{prelim_b-2}
&&\frac{1}{n-k_n-2}\mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|R_2^n(u)\bigr|
\Bigr)\leq K_{a,b}\alpha_n,
\\
\label{prelim_b-3}
&&\frac{1}{n-k_n-2}\mathbb{E} \Bigl(\sup
_{u\in[a,b]}\bigl|R_3^n(u)\bigr| \Bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad \leq\cases{
\displaystyle K_{a,b}
\bigl((k_n\Delta_n)^{1-\iota}\vee
k_n^{-1/2}(k_n\Delta_n)^{{1}/{r}\wedge{(\beta-p)}/{\beta
}-\iota
}
\bigr),&\quad $\mbox{if $\beta/p>2$,}$
\vspace*{2pt}\cr
\displaystyle K_{a,b} \bigl((k_n\Delta _n)^{
{1}/{r}\wedge1-\iota}
\vee\Delta_n^{{(\beta-p)}/{\beta}-\iota
} \bigr),&\quad $\mbox{if $\beta/p\leq2$,}$}
\\
\label{prelim_b-4}
&& \frac{1}{n-k_n-2}\mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|R_4^n(u)\bigr|
\Bigr)\leq K_{a,b}(k_n\Delta_n)^{1-\iota},
\end{eqnarray}
where $K_{a,b}$ depends only on $a$ and $b$ and is finite-valued.
\end{lemma}
\begin{pf} We start with
showing (\ref{prelim_b-1}). We define the set
\[
\label{proof_b_1} \mathcal{C}_i^n =\bigl \{ \bigl\llvert
\Delta_n^{-p/\beta}V_i^n(p)-\mu
_{p,\beta
}^{p/\beta}|\overline{\sigma}|_i^p
\bigr\rrvert > \tfrac{1}{2}\mu _{p,\beta
}^{p/\beta}|\overline{
\sigma}|_i^p \bigr\},\qquad i=k_n+3,\ldots ,n,
\]
and then we note that
\begin{eqnarray*}
\label{proof_b_2}
1_{\{\mathcal{C}_i^n\}}&\leq&1 \bigl(\Delta_n^{-p/\beta}\bigl|V_i^n(p)
- \widehat{V}_i^n(p)\bigr|>\tfrac{1}{4}
\mu_{p,\beta}^{p/\beta}|\overline {\sigma }|_i^p
\bigr)
\\
&&{} +1 \bigl(\bigl|\Delta_n^{-p/\beta}\widehat {V}_i^n(p)-
\mu _{p,\beta}^{p/\beta}|\overline{\sigma}|_i^p\bigr|>
\tfrac{1}{4}\mu _{p,\beta
}^{p/\beta}|\overline{
\sigma}|_i^p \bigr).
\end{eqnarray*}
Hence we can apply (\ref{prelim_a-1}) and (\ref{prelim_a-2}) and conclude
\begin{equation}
\label{proof_b_3} \mathbb{E} \Bigl[\sup_{u\in\mathbb{R}_+}
\bigl(\bigl|r_i^1(u)\bigr|1_{\{\mathcal
{C}_i^n\}
}\bigr) \Bigr]\leq K
\bigl(\alpha_n\vee k_n^{- ({\beta
}/{(2p)}\wedge
{(\beta-p)}/{p} )+\iota} \bigr).
\end{equation}
We proceed with a sequence of inequalities. First, from Assumption \ref{assSB},
\begin{equation}
\label{proof_b_4} \mathbb{E}_{i-2}^n\biggl\llvert \int
_{(i-1)\Delta_n}^{i\Delta_n}(\alpha _u-\alpha
_{u-\Delta_n})\,du\biggr\rrvert \leq K\Delta_n^{1+{1}/{(r\vee1)}-\iota}.
\end{equation}
Next, if $\beta'<1$, we can decompose
\begin{equation}
\label{proof_b_5} \widehat{S}_t = \int_0^t
\int_{\mathbb{R}}x\mu_1(ds,dx)-t\int_{\mathbb
{R}}
\kappa(x)2\bigl|\nu'(x)\bigr|1_{\{\nu'(x)<0\}}\,dx,
\end{equation}
and separate accordingly $ \int_{(i-1)\Delta_n}^{i\Delta_n}\sigma
_{u-}\,d\widehat{S}_u$ and $ \int_{(i-2)\Delta_n}^{(i-1)\Delta
_n}\sigma
_{u-}\,d\widehat{S}_u$. For the difference of the integrals against time,
we can proceed exactly as in (\ref{proof_b_4}). Further, using the
algebraic inequality in (\ref{proof_a_3-a}), as well as Assumption~\ref{assA}
for the measure~$\nu'$, we have
\begin{equation}
\label{proof_b_6} \mathbb{E}_{i-1}^n\biggl\llvert \int
_{(i-1)\Delta_n}^{i\Delta_n}\int_{\mathbb
{R}}
\sigma_{u-}x\mu_1(du,dx) \biggr\rrvert ^{x}\leq
K\Delta_n^{x/\beta
'-\iota
}\qquad \mbox{for $x\leq\beta'$}.
\end{equation}
When $\beta'\geq1$, we can apply the Burkholder--Davis--Gundy
inequality and get
\begin{equation}
\label{proof_b_7} \mathbb{E}_{i-1}^n\biggl\llvert \int
_{(i-1)\Delta_n}^{i\Delta_n}\sigma _{u-}x\,d
\widehat{S}_u \biggr\rrvert ^{x}\leq K
\Delta_n^{x/\beta'-\iota
}\qquad \mbox{for $x\leq\beta'$}.
\end{equation}
The same inequalities hold for the analogous integrals involving
$\widetilde{S}$. Next, application of the Burkholder--Davis--Gundy and
H\"{o}lder inequalities, as well as Assumption~\ref{assSB} yields
\begin{equation}
\label{proof_b_8} \mathbb{E}_{i-2}^n\biggl\llvert \int
_{(i-1)\Delta_n}^{i\Delta_n}(\sigma _{u-}-
\sigma_{(i-2)\Delta_n-})\kappa(x)\widetilde{\mu}(du,dx) \biggr\rrvert \leq K
\Delta_n^{{2}/{\beta}-\iota}.
\end{equation}
Finally, denoting $\kappa'(x) = x-\kappa(x)$ and upon noting that
$\kappa'(x)$ is zero for $x$ sufficiently close to zero, we have
\begin{equation}\qquad
\label{proof_b_9} \mathbb{E}_{i-2}^n\biggl\llvert \int
_{(i-1)\Delta_n}^{i\Delta_n}(\sigma _{u-}-
\sigma_{(i-2)\Delta_n-})\kappa'(x)\mu(du,dx)\biggr\rrvert
^{\iota
}\leq K\Delta_n\qquad \forall\iota>0.
\end{equation}
Combining the estimates in (\ref{proof_b_4})--(\ref{proof_b_9}), as
well as the inequality $|\cos(x)-\cos(y)|\leq2|x-y|^p$ for every
$x,y\in\mathbb{R}$ and $p\in(0,1]$, we have
\begin{equation}\qquad
\label{proof_b_10} \mathbb{E} \Bigl[\sup_{u\geq a}
\bigl(\bigl|r_i^1(u)\bigr|1_{\{(\mathcal{C}_i^n)^c\}
}\bigr) \Bigr]\leq
K_a \bigl(\Delta_n^{{(\beta-\beta')}/{(\beta(\beta
'\vee
1))}-\iota}\vee
\Delta_n^{{1}/{\beta}\wedge{1}/{(r\vee
1)}-\iota} \bigr).
\end{equation}
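The cosine bound invoked above is elementary: for $x,y\in\mathbb{R}$ and $p\in(0,1]$,
\[
\bigl|\cos(x)-\cos(y)\bigr|\leq2\wedge|x-y|\leq2^{1-p}|x-y|^p\leq2|x-y|^p,
\]
where the first inequality uses $|{\cos}|\leq1$ and the $1$-Lipschitz property of the cosine, and the second the bound $2\wedge a\leq2^{1-p}a^p$ for $a\geq0$.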
Equations (\ref{proof_b_3}) and (\ref{proof_b_10}) yield (\ref{prelim_b-1}). We
continue next with (\ref{prelim_b-2}). This bound follows from a
first-order Taylor expansion of $f_{i,u}(x)$ and the bounds in (\ref
{decomp_1}) and~(\ref{prelim_a-1}).
We proceed by showing the result for $R_4^n(u)$. Using a second-order
Taylor expansion and the Cauchy--Schwarz inequality, as well as (\ref
{prelim_a-3-b}), we get
\begin{equation}
\label{proof_b_11} \mathbb{E} \Biggl(\sup_{u\in[a,b]}\Biggl\llvert
R_4^n(u) - \frac{\beta
}{p}e^{-C_{p,\beta}u^{\beta}}C_{p,\beta}u^{\beta}
\sum_{i=k_n+3}^n\widetilde{r}_i^4
\Biggr\rrvert \Biggr)\leq Kk_n,
\end{equation}
where
\[
\label{proof_b_12} \widetilde{r}_i^4 = \frac{|\sigma_{(i-2)\Delta_n-}|^p-|\overline
{\sigma
}|_i^p}{|\sigma_{(i-k_n-3)\Delta_n-}|^p}.
\]
Using (\ref{prelim_a-3}), we have
\begin{equation}
\label{proof_b_13} \mathbb{E}\Biggl\llvert \sum_{i=k_n+3}^n
\mathbb{E}_{i-k_n-3}^n\bigl(\widetilde {r}_i^4
\bigr)\Biggr\rrvert \leq K k_n.
\end{equation}
Further, without loss of generality (because $k_n\Delta_n\rightarrow
0$), we assume $n\geq2k_n+3$. Using the shorthand $\chi_i =
\widetilde
{r}_i^4-\mathbb{E}_{i-k_n-3}^n(\widetilde{r}_i^4)$, we then decompose
\begin{eqnarray*}
\label{proof_b_14} \sum_{i=k_n+3}^n
\chi_i &=& \sum_{j=1}^{k_n+1}A_j+
\sum_{i=2k_n+4+
(\lfloor{(n-k_n-2)}/{(k_n+1)}\rfloor-1 )(k_n+1)}^n\chi _i,
\\
\label{proof_b_15} A_j& =& \sum_{i=1}^{\lfloor{(n-k_n-2)}/{(k_n+1)}\rfloor}
\chi _{k_n+3+(j-1)+(i-1)(k_n+1)},\qquad j=1,\ldots,k_n+1.
\end{eqnarray*}
Applying the Burkholder--Davis--Gundy inequality for discrete
martingales and making use of (\ref{prelim_a-3-b}), we have
\begin{equation}
\label{proof_b_16} \mathbb{E}|A_j|\leq K(k_n
\Delta_n)^{-\iota},\qquad j=1,\ldots,k_n+1.
\end{equation}
Combining (\ref{proof_b_11}) and (\ref{proof_b_16}), we get the bound
in (\ref{prelim_b-4}).
We are left with (\ref{prelim_b-3}). The case $\beta/p\leq2$ follows from
\[
\mathbb{E}\bigl|r_i^3(u)\bigr|\leq K_{a,b}\mathbb{E}\bigl\llvert
\Delta_n^{-p/\beta}\widehat {V}_i^n(p)-
\mu_{p,\beta}^{p/\beta}\overline{V}_i^n(p)\bigr
\rrvert
\]
and by applying the bounds in (\ref{prelim_a-4-b})--(\ref
{prelim_a-4-c}). We now show (\ref{prelim_b-3}) for the case \mbox{$\beta/p>
2$}. We first decompose $r_i^3(u) = \sum_{j=1}^3\varrho_i^j(u)$, where
\begin{eqnarray*}
\label{proof_b_17} \varrho_i^1(u) &= & f'_{i,u}
\bigl(\Delta_n^{-p/\beta}\overline {V}_i^n(p)|
\overline{\sigma}|_i^p \bigr)\\
&&{}\times \Delta_n^{-p/\beta}
\bigl(\mu _{p,\beta}^{-p/\beta}\widehat{V}_i^n(p)
- |\overline{\sigma }|_i^p\overline{V}_i^n(p)
- \mu_{p,\beta}^{-p/\beta}\dot {V}_i^n(p)
\bigr),
\\
\label{proof_b_18} \varrho_i^2(u) &=& f'_{i,u}
(\widetilde{x} )\Delta _n^{-p/\beta
}\mu_{p,\beta}^{-p/\beta}
\dot{V}_i^n(p),
\\
\label{proof_b_19}
\varrho_i^3(u) &=&
\bigl(f'_{i,u} (\widetilde{x} ) - f'_{i,u}
\bigl(\Delta_n^{-p/\beta}\overline{V}_i^n(p)|
\overline {\sigma }|_i^p \bigr) \bigr)
\\
&&{} \times\Delta_n^{-p/\beta} \bigl(\mu_{p,\beta}^{-p/\beta}
\widehat{V}_i^n(p) - |\overline {\sigma
}|_i^p\overline{V}_i^n(p) -
\mu_{p,\beta}^{-p/\beta}\dot {V}_i^n(p) \bigr)
\end{eqnarray*}
and $\widetilde{x}$ is a random number between $\Delta_n^{-p/\beta
}\mu
_{p,\beta}^{-p/\beta}\widehat{V}_i^n(p)$ and $\Delta_n^{-p/\beta
}|\overline{\sigma}|_i^p\overline{V}_i^n(p)$. We further introduce
\begin{eqnarray*}
\label{proof_b_20} \widetilde{\varrho}_i^1(u) =
\frac{G(p,u,\beta)}{|\sigma
_{(i-k_n-3)\Delta_n-}|^p} \Delta_n^{-p/\beta} \bigl(\mu_{p,\beta
}^{-p/\beta}
\widehat{V}_i^n(p) - |\overline{\sigma}|_i^p
\overline {V}_i^n(p) - \mu_{p,\beta}^{-p/\beta}
\dot{V}_i^n(p) \bigr)
\end{eqnarray*}
and note $G(p,u,\beta) = |\sigma_{(i-2)\Delta_n-}|^pf'_{i,u}
(|\sigma_{(i-2)\Delta_n-}|^p )$. Then direct calculation for the
function $xf'_{i,u}(x)$ and the boundedness of the process $|\sigma|$ yields
\[
\label{proof_b_21}
\bigl|\varrho_i^1(u) - \widetilde{
\varrho}_i^1(u)\bigr|\leq K_{a,b}
\bigl(d_i^{(1)}+d_i^{(2)}
\bigr)e_i,
\]
where
\[
\label{proof_b_22} \cases{
d_i^{(1)} = \bigl|
\Delta_n^{-p/\beta}\overline {V}_i^n(p)-1\bigr|,
\vspace*{2pt}\cr
d_i^{(2)} = \bigl||\overline{\sigma}|_i^p
- |\sigma _{(i-2)\Delta_n-}|^p\bigr|+\bigl| |\sigma_{(i-2)\Delta_n-}|^p
- |\sigma _{(i-k_n-3)\Delta_n-}|^p\bigr|,
\vspace*{2pt}\cr
e_i = \Delta_n^{-p/\beta}\bigl\llvert \mu
_{p,\beta
}^{-p/\beta}\widehat{V}_i^n(p) - |
\overline{\sigma}|_i^p\overline {V}_i^n(p)
- \mu_{p,\beta}^{-p/\beta}\dot{V}_i^n(p)
\bigr\rrvert.}
\]
From here, we use the H\"{o}lder inequality and (\ref{prelim_a-2}),
(\ref{prelim_a-3-b}) and (\ref{prelim_a-4-b}) to get
\begin{equation}
\label{proof_b_23} \cases{
\mathbb{E}\bigl|d_i^{(1)}e_i\bigr|
\leq K \bigl(\mathbb{E} \bigl[ \bigl(d_i^{(1)}
\bigr)^{{\beta
}/{(p+\beta\iota)}} \bigr] \bigr)^{{p}/{\beta}+\iota} \bigl(\mathbb {E}
\bigl(e_i^{{\beta}/{(\beta-p-\beta\iota)}} \bigr) \bigr)^{
{(\beta-p)}/{\beta}-\iota}
\vspace*{2pt}\cr
\hspace*{42pt}\leq Kk_n^{-1/2}(k_n\Delta_n)^{
{1}/{r}\wedge{(\beta-p)}/{\beta}-2\iota},
\vspace*{2pt}\cr
\mathbb {E}\bigl|d_i^{(2)}e_i\bigr|\leq\sqrt{
\mathbb{E}\bigl(d_i^{(2)}\bigr)^2\mathbb
{E}(e_i)^2}\leq K (k_n\Delta_n)^{1-\iota}.}
\end{equation}
For the sum $\sum_{i=k_n+3}^n\widetilde{\varrho}_i^1(u)$, using the
bounds in (\ref{prelim_a-4}) and (\ref{prelim_a-4-b}), we can proceed
exactly as for the analysis of $\sum_{i=k_n+3}^n\chi_i$ above and split
it into $k_n+1$ terms, which are the terminal values of discrete
martingales. Together, this yields
\begin{equation}
\label{proof_b_24} \mathbb{E} \Biggl(\sup_{u\in[a,b]}\Biggl\llvert \sum
_{i=k_n+3}^n\widetilde {\varrho
}_i^1(u)\Biggr\rrvert \Biggr)\leq K_{a,b}k_n(k_n
\Delta_n)^{-\iota}.
\end{equation}
Next, using the bound in (\ref{prelim_a-4-c}) as well as the
boundedness of the derivative $f_{i,u}'(x)$ (for $u\in[a,b]$), we have
\begin{equation}
\label{proof_b_25} \mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|
\varrho_i^2(u) \bigr| \Bigr)\leq K_{a,b}
\Delta_n^{{(\beta-p)}/{\beta}\wedge{1}/{r}-\iota}.
\end{equation}
We continue with the term $\varrho_i^3(u)$. We first introduce the set
\[
\label{proof_b_26} \mathcal{E}_i^n = \bigl\{ \bigl\llvert
\mu_{p,\beta}^{-p/\beta}\widehat {V}_i^n(p) - |
\overline{\sigma}|_i^p\overline{V}_i^n(p)
- \mu _{p,\beta
}^{-p/\beta}\dot{V}_i^n(p)\bigr
\rrvert >1 \bigr\},\qquad i=k_n+3,\ldots,n.
\]
With this notation, using (\ref{prelim_a-4-b}) and the boundedness of
the derivative $f_{i,u}'(x)$ (for $u\in[a,b]$), we have
\begin{equation}
\label{proof_b_27} \mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|
\varrho_i^3(u) \bigr|1_{\{\mathcal
{E}_i^n\}
} \Bigr)\leq
K_{a,b}(k_n\Delta_n)^{1-\iota}.
\end{equation}
Next using the boundedness of the second derivative $f_{i,u}^{\prime\prime}(x)$,
as well as the bounds in (\ref{prelim_a-4-b}) and (\ref{prelim_a-4-c}),
we get
\begin{equation}
\label{proof_b_28} \mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|
\varrho_i^3(u)\bigr |1_{\{(\mathcal
{E}_i^n)^c\}} \Bigr)\leq
K_{a,b} \bigl((k_n\Delta_n)^{1-\iota}\vee
\Delta_n^{{(\beta-p)}/{\beta}\wedge{1}/{r}-\iota} \bigr).
\end{equation}
Combining (\ref{proof_b_23})--(\ref{proof_b_28}), we get the result in
(\ref{prelim_b-3}).
\end{pf}
\begin{lemma}\label{lema:prelim-c}
Under Assumptions \ref{assA} and \ref{assSB} and $k_n\asymp n^{\varpi}$ for $\varpi\in
(0,1)$, we have for $0<p<\beta$, $\iota>0$ arbitrarily small and every
$0<a<b<\infty$,
\begin{eqnarray}
\label{prelim_c-1} &&\frac{1}{n-k_n-2}\sup_{u\in[a,b]}\bigl|
\widehat{Z}_1^n(u) - \overline {Z}_1^n(u)
\bigr|
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= o_p \bigl(\alpha_n\vee k_n^{- ({\beta
}/{(2p)}\wedge
{(\beta-p)}/{p} )+\iota}
\vee\sqrt{\Delta_n} \bigr),
\end{eqnarray}
and further if $p<\beta/2$,
\begin{eqnarray}
\label{prelim_c-2}
&&\frac{1}{n-k_n-2}\sup_{u\in[a,b]}\bigl|
\widehat{Z}_2^n(u) - \overline {Z}_2^{(a,n)}(u)
- \overline{Z}_2^{(b,n)}(u)\bigr|
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = o_p \bigl( k_n^{-({1}/{2}) ({\beta}/{p}\wedge3
)+\iota
}\vee
k_n^{-1/2}(k_n\Delta_n)^{{1}/{r}\wedge{(\beta
-p)}/{\beta
}-\iota}
\bigr).
\end{eqnarray}
\end{lemma}
\begin{pf}
We start with
(\ref{prelim_c-1}). We split $\widehat{Z}_1^n(u) - \overline{Z}_1^n(u)
= E_1^n(u)+E_2^n(u)$ with $E_1^n(u) = \sum_{i=k_n+3}^n(z_{i}^1(u) -
\overline{z}_{i}^1(u))1_{\{\mathcal{C}_i^n\}}$ and $E_2^n(u) = \sum_{i=k_n+3}^n(z_{i}^1(u) - \overline{z}_{i}^1(u))1_{\{(\mathcal
{C}_i^n)^c\}}$. For $E_1^n(u)$, using Lemma~\ref{lema:prelim-a}, we
easily have
\begin{equation}
\label{proof_c_2} \frac{1}{n-k_n-2}\mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|E_1^n(u)
\bigr| \Bigr)\leq K_{a,b} \bigl(\alpha_n\vee
k_n^{- ({\beta}/{(2p)}\wedge{(\beta-p)}/{p} )+\iota} \bigr).
\end{equation}
We proceed with $E_2^n(u)$. We first note that
\begin{equation}
\label{proof_c_3} \mathbb{E}_{i-2}^n \bigl[
\bigl(z_{i}^1(u) - \overline{z}_{i}^1(u)
\bigr)1_{\{
(\mathcal{C}_i^n)^c\}} \bigr] = 0.
\end{equation}
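The identity (\ref{proof_c_3}) reflects the characteristic function of the stable law: conditionally on $\mathcal{F}_{(i-2)\Delta_n}$, the increments $\Delta_i^nS$ and $\Delta_{i-1}^nS$ are independent $\beta$-stable, so their difference is symmetric $\beta$-stable, while $\sigma_{(i-2)\Delta_n-}$, $V_i^n(p)$ and $1_{\{(\mathcal{C}_i^n)^c\}}$ are $\mathcal{F}_{(i-2)\Delta_n}$-measurable. Hence, with $A_{\beta}$ the constant appearing in the definition of $z_i^1(u)$,
\[
\mathbb{E}_{i-2}^n\cos \biggl(u\frac{\sigma_{(i-2)\Delta_n-}(\Delta_i^nS-\Delta_{i-1}^nS)}{ (V_i^n(p))^{1/p} } \biggr) = \exp \biggl(-\frac{A_{\beta}u^{\beta}|\sigma_{(i-2)\Delta_n-}|^{\beta} }{\Delta_n^{-1}(V_i^n(p))^{\beta/p}} \biggr),
\]
and a corresponding computation for $\overline{z}_i^1(u)$, using the definition of $\mathcal{L}(p,u,\beta)$, yields (\ref{proof_c_3}).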
Further, using the algebraic inequalities $|\cos(x)-\cos(y)|^2\leq
2|x-y|$ for $x,y\in\mathbb{R}$ and $|e^{-x}-e^{-y}|^2\leq2|x-y|$ for
$x,y\in\mathbb{R}_+$, as well as the definition of the set $\mathcal
{C}_i^n$, we get
\[
\mathbb{E}_{i-2}^n\bigl\llvert \bigl(z_{i}^1(u)
- \overline{z}_{i}^1(u)\bigr)1_{\{
(\mathcal{C}_i^n)^c\}} \bigr
\rrvert ^2\leq K_{a,b}\bigl\llvert \Delta
_n^{-p/\beta
}V_i^n(p) -
\mu_{p,\beta}^{p/\beta}|\sigma_{(i-2)\Delta
_n-}|^p\bigr
\rrvert .
\]
Applying the above two inequalities, the bounds in (\ref{prelim_a-1}),
(\ref{prelim_a-2}) and (\ref{prelim_a-3-b}), as well as the algebraic
inequality $2xy\leq x^2+y^2$ for $x,y\in\mathbb{R}$, we have
\begin{eqnarray*}
\label{proof_c_4}
\mathbb{E} \bigl( E_2^n(u)
\bigr)^2 &=& \mathbb{E} \Biggl(\sum_{i=k_n+3}^n
\bigl(z_{i}^1(u) - \overline{z}_{i}^1(u)
\bigr)^21_{\{(\mathcal
{C}_i^n)^c\}} \Biggr)
\\
&&{} +\mathbb{E} \biggl(\sum_{i,j\dvtx
|i-j|=1}\bigl(z_{i}^1(u)
- \overline{z}_{i}^1(u)\bigr)1_{\{(\mathcal{C}_i^n)^c\}
}
\bigl(z_{j}^1(u) - \overline{z}_{j}^1(u)
\bigr)1_{\{(\mathcal{C}_j^n)^c\}
} \biggr)
\\
& \leq& K_{a,b}\sum_{i=k_n+3}^n
\mathbb{E}\bigl\llvert \Delta_n^{-p/\beta
}V_i^n(p)
- \mu_{p,\beta}^{p/\beta}|\sigma_{(i-2)\Delta
_n-}|^p\bigr
\rrvert
\\
&\leq& K_{a,b}\Delta_n^{-1} \bigl(
\alpha_n\vee k_n^{- ({\beta}/{(2p)}\wedge{(\beta-p)}/{p} )+\iota}\vee(k_n\Delta
_n)^{{1}/{r}\wedge1-\iota} \bigr).
\end{eqnarray*}
As a result, $\frac{1}{\sqrt{n-k_n-2}}E_2^n(u) \stackrel{\mathbb
{P}}{\longrightarrow} 0$ finite-dimensionally in $u$. Finally, we need
to show that the convergence holds uniformly in $u\in[a,b]$. For this
we apply a criterion for tightness on the space of continuous functions
equipped with the uniform topology; see, for example, Theorem~12.3 of
\cite{Billingsley}. Using again (\ref{proof_c_3}), we have
\begin{eqnarray*}
\label{proof_c_5}
&&\mathbb{E} \bigl( E_2^n(u)-E_2^n(v)
\bigr)^2
\\
&&\qquad \leq K\mathbb {E} \Biggl(\sum_{i=k_n+3}^n
\bigl(z_{i}^1(u) - \overline{z}_{i}^1(u)
- z_{i}^1(v) + \overline{z}_{i}^1(v)
\bigr)^21_{\{(\mathcal{C}_i^n)^c\}} \Biggr).
\end{eqnarray*}
Hence for arbitrarily small $\iota>0$,
\begin{eqnarray*}
\label{proof_c_6}
&&\frac{1}{n-k_n-2}\mathbb{E} \Biggl(\sum
_{i=k_n+3}^n\bigl(z_{i}^1(u) -
\overline{z}_{i}^1(u) - z_{i}^1(v)
+ \overline{z}_{i}^1(v)\bigr)^21_{\{
(\mathcal{C}_i^n)^c\}}
\Biggr)
\\
&&\qquad \leq K \bigl\{\bigl|u^{\beta
}-v^{\beta}\bigr|^2
\vee|u-v|^{\beta-\iota} \bigr\},
\end{eqnarray*}
and since $\beta>1$, we have $\frac{1}{\sqrt{n-k_n-2}}\sup_{u\in
[a,b]}|E_2^n(u)| \stackrel{\mathbb{P}}{\longrightarrow} 0$. We turn
next to~(\ref{prelim_c-2}). We first introduce some additional
notation. Based on a second-order Taylor expansion of the function
$f_{i,u}(x)$, we can further decompose $\widehat{Z}_2^n(u) = \widehat
{Z}_2^{(a,n)}(u) +\widehat{Z}_2^{(b,n)}(u)+\widehat{Z}_2^{(c,n)}(u)$,
with $\widehat{Z}_2^{(k,n)}(u) = \sum_{i=k_n+3}^nz_i^{(k,2)}(u)$ for
$k=a,b,c$, where $z_i^{(c,2)}(u) =
z_i^{2}(u)-z_i^{(a,2)}(u)-z_i^{(b,2)}(u)$ and
\begin{eqnarray*}
\label{proof_c_7} z_i^{(a,2)}(u)& =& f_{i,u}'
\bigl( |\overline{\sigma}|_i^p \bigr)|\overline{
\sigma}|_i^p \bigl( \Delta_n^{-p/\beta}
\overline {V}_i^n(p) - 1 \bigr),
\\
\label{proof_c_8} z_i^{(b,2)}(u) &=& \tfrac{1}{2}f_{i,u}^{\prime\prime}
\bigl( |\overline{\sigma}|_i^p \bigr) \bigl(|\overline{
\sigma}|_i^p\bigr)^2 \bigl(
\Delta_n^{-p/\beta
}\overline {V}_i^n(p) -
1 \bigr)^2.
\end{eqnarray*}
Note further that
\[
\label{proof_c_9} \cases{
|\sigma_{(i-2)\Delta_n-}|^pf_{i,u}'
\bigl( |\sigma _{(i-2)\Delta_n-}|^p \bigr) = G(p,u,\beta),
\vspace*{2pt}\cr
|\sigma _{(i-2)\Delta
_n-}|^{2p}f_{i,u}^{\prime\prime} \bigl(
|\sigma_{(i-2)\Delta_n-}|^p \bigr) = H(p,u,\beta). }
\]
Direct calculation, and using the boundedness of the process $\sigma$
by Assumption~\ref{assSB}, shows
\begin{eqnarray*}
\label{proof_c_10}
&& \bigl\llvert |\overline{\sigma}|_i^pf_{i,u}'
\bigl( |\overline{\sigma}|_i^p \bigr) - G(p,u,\beta)\bigr
\rrvert + \bigl\llvert \bigl(|\overline{\sigma}|_i^{p}
\bigr)^2f_{i,u}^{\prime\prime} \bigl( |\overline {\sigma
}|_i^p \bigr) - H(p,u,\beta)\bigr\rrvert
\\
& &\qquad\leq K_{a,b}\bigl\llvert |\overline{\sigma}|_i^p-|
\sigma_{(i-2)\Delta_n-}|^p\bigr\rrvert ,\qquad u\in [a,b],
i=k_n+3,\ldots,n,
\end{eqnarray*}
for some finite-valued constant $K_{a,b}$ which depends only on $a$ and
$b$. From here, using the bounds in (\ref{prelim_a-2}) and (\ref
{prelim_a-3-b}), we have
\[
\label{proof_c_11} \mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|z_i^{(a,2)}(u)
- \overline {z}_i^{(a,2)}(u)\bigr| \Bigr)\leq K_{a,b}
\bigl( k_n^{-1/2}(k_n\Delta _n)^{{1}/{r}\wedge{(\beta-p)}/{\beta}-\iota}
\bigr),
\]
and similarly
\[
\label{proof_c_11_b} \mathbb{E} \Bigl(\sup_{u\in[a,b]}\bigl|z_i^{(b,2)}(u)
- \overline {z}_i^{(b,2)}(u)\bigr| \Bigr)\leq K_{a,b}
\bigl(k_n^{-{\beta
}/{(2p)}+\iota} \vee k_n^{-1/2}(k_n
\Delta_n)^{{1}/{r}\wedge{(\beta-p)}/{\beta}-\iota} \bigr).
\]
Therefore,
\begin{eqnarray}
\label{proof_c_12}
&&\frac{1}{n-k_n-2}\sup_{u\in[a,b]}\bigl
\llvert \widehat{Z}_2^{(a,n)}(u) - \overline{Z}_2^{(a,n)}(u)
+ \widehat{Z}_2^{(b,n)}(u) - \overline {Z}_2^{(b,n)}(u)
\bigr\rrvert
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
& &\qquad= o_p \bigl( k_n^{-{\beta
}/{(2p)}+\iota} \vee
k_n^{-1/2}(k_n\Delta_n)^{{1}/{r}\wedge
{(\beta
-p)}/{\beta}-\iota}
\bigr).
\end{eqnarray}
We are left with $\widehat{Z}_2^{(c,n)}(u)$. Using the boundedness of
the derivatives in (\ref{decomp_1}), we have
\[
\label{proof_c_13} \bigl\llvert z_i^{(c,2)}(u)\bigr\rrvert \leq
K_{a,b}\bigl\llvert \Delta_n^{-p/\beta
}
\overline{V}_i^n(p) - 1\bigr\rrvert ^{x},\qquad 2<x<
\beta/p\wedge3.
\]
From here, applying (\ref{prelim_b-2}), we have
\begin{equation}
\label{proof_c_14} \frac{1}{n-k_n-2}\sup_{u\in[a,b]}\bigl\llvert
\widehat {Z}_2^{(c,n)}(u)\bigr\rrvert = o_p
\bigl(k_n^{-({1}/{2}) ({\beta}/{p}\wedge3
)+\iota
} \bigr).
\end{equation}
Combining the results in (\ref{proof_c_12}) and (\ref{proof_c_14}), we
get (\ref{prelim_c-2}).
\end{pf}
\begin{lemma}\label{lema:prelim-d}
Let $p\in(0,\beta/2)$. If $k_n\asymp n^{\varpi}$ for $\varpi\in(0,1)$,
we have
\begin{equation}
\label{prelim_d-1} \frac{1}{\sqrt{n-k_n-2}}\pmatrix{
\overline{Z}_1^n(\mathbf{u})
\vspace*{2pt}\cr
\overline {Z}_2^{(a,n)}(\mathbf{u})} \stackrel{\mathcal{L}} {\longrightarrow} \zeta(\mathbf{u}),
\end{equation}
where $\zeta(\mathbf{u})$ is a Gaussian process with covariance
function given by
\begin{equation}
\label{prelim_d-2}\pmatrix{ 1
\vspace*{2pt}\cr
G(p,u,\beta) }'\overline{
\Xi}(p,u,v,\beta)\pmatrix{ 1
\vspace*{2pt}\cr
G(p,v,\beta) }, \qquad u,v\in\mathbb{R}_+,
\end{equation}
for $\overline{\Xi}(p,u,v,\beta) = \Xi_0(p,u,v,\beta)+2\Xi
_1(p,u,v,\beta
)$. The convergence in (\ref{prelim_d-1}) is in the space of continuous
functions $\mathbb{R}_+\rightarrow\mathbb{R}^2$ equipped with the local
uniform topology. The convergence result for $\overline{Z}_1^n(\mathbf
{u})$ in (\ref{prelim_d-1}) continues to hold for $p\in[\beta
/2,\beta)$.
Further, for some $\iota>0$,
\begin{eqnarray}
\label{prelim_d-3}
&&\frac{k_n}{n-k_n-2}\overline{Z}_2^{(b,n)}(u)\nonumber\\
&&\quad{}-
\frac
{1}{2}H(p,u,\beta ) \bigl( \Xi_0^{(2,2)}(p,u,u,
\beta) +2\Xi_1^{(2,2)}(p,u,u,\beta) \bigr)
\\
&&\qquad = o_p \bigl((k_n\Delta_n)^{1-{2p}/{\beta}\vee{1}/{2}-\iota}
\bigr),\nonumber
\end{eqnarray}
locally uniformly in $u\in\mathbb{R}_+$.
\end{lemma}
\begin{pf}
We can write
\begin{equation}
\label{proof_d_1}\pmatrix{\overline{Z}_1^n(u)
\vspace*{2pt}\cr
\overline{Z}_2^{(a,n)}(u) } = \sum_{i=k_n+1}^{n-k_n-1}\bolds{
\zeta}_i(u)+E_l(u)+E_r(u),
\end{equation}
where
\begin{eqnarray*}
\label{proof_d_2} \bolds{\zeta}_i(u) &=& \pmatrix{\cos \bigl( u\Delta_n^{-1/\beta}
\mu_{p,\beta
}^{-1/\beta
} \bigl(\Delta_i^nS-
\Delta_{i-1}^nS\bigr) \bigr) - \mathcal{L}(p,u,\beta)
\vspace*{2pt}\cr
G(p,u,\beta) \bigl[\Delta_n^{-p/\beta} \mu_{p,\beta}^{-p/\beta
}\bigl|
\Delta _i^nS-\Delta_{i-1}^nS\bigr|^p-1
\bigr] },
\\
\label{proof_d_3} E_l(u)& =& \sum_{i=2}^{k_n}
\frac{i-1}{k_n}\pmatrix{ 0
\vspace*{2pt}\cr
\bolds{\zeta}^{(2)}_i(u) }-\sum_{i=k_n+1}^{k_n+2}\pmatrix{ \bolds{\zeta}^{(1)}_i(u)
\vspace*{2pt}\cr
0 },
\\
\label{proof_d_4} E_r(u)& =& \sum_{i=n-k_n}^{n-2}
\frac{n-1-i}{k_n}\pmatrix{ 0
\vspace*{2pt}\cr
\bolds{\zeta}^{(2)}_i(u)}+\sum_{i=n-k_n}^{n}\pmatrix{ \bolds{\zeta}^{(1)}_i(u)
\vspace*{2pt}\cr
0 }.
\end{eqnarray*}
We note that for $u\in\mathbb{R}_+$,
\begin{equation}
\label{proof_d_5} \mathbb{E}_{i-2}^n \bigl(\bolds{
\zeta}_i(u) \bigr) = 0,\qquad i=2,\ldots,n.
\end{equation}
Further, making use of the inequality $|\cos(x)-\cos(y)|\leq
2|x-y|^p$ for every $p\in(0,1]$ and $x,y\in\mathbb{R}$, we have for
$u,v\in\mathbb{R}_+$,
\begin{equation}\qquad
\label{proof_d_6} \mathbb{E}_{i-2}^n \bigl(\bolds{
\zeta}^{(1)}_i(u)-\bolds{\zeta }^{(1)}_i(v)
\bigr)^2\leq K|u-v|^p\vee\bigl|u^{\beta}-v^{\beta}\bigr|^2,\qquad
1<p<\beta.
\end{equation}
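As an aside (not part of the proof), the elementary bound $|\cos(x)-\cos(y)|\leq 2|x-y|^p$ for $p\in(0,1]$, which underlies the display above, can be checked numerically; the grid of sample points below is arbitrary:

```python
import itertools
import math

def check_cos_holder(p, xs):
    # Verify |cos(x) - cos(y)| <= 2 |x - y|**p pairwise over sample points.
    # The bound follows from |cos(x) - cos(y)| <= min(|x - y|, 2) together
    # with min(t, 2) <= 2 t**p for t >= 0 and p in (0, 1].
    for x, y in itertools.product(xs, xs):
        if x == y:
            continue
        assert abs(math.cos(x) - math.cos(y)) <= 2.0 * abs(x - y) ** p + 1e-12
    return True

grid = [0.37 * k for k in range(-40, 41)]   # arbitrary sample points
for p in (0.1, 0.5, 1.0):
    check_cos_holder(p, grid)
```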
Making use of (\ref{proof_d_5}) and the fact that $\bolds{\zeta
}^{(2)}_i(u)$ depends on $u$ only through $H(p,u,\beta)$ and $\sup_{u\in
\mathbb{R}^+}|H(p,u,\beta)|$ is a finite constant, we have
\begin{equation}
\label{proof_d_7} \frac{1}{k_n}\mathbb{E} \Bigl(\sup_{u\in\mathbb
{R}_+}\bigl|E_l(u)\bigr|^2
\Bigr)\leq K.
\end{equation}
Making use of (\ref{proof_d_6}) and the differentiability of
$G(p,u,\beta)$ in $u$, we also have
\[
\label{proof_d_8} \frac{1}{k_n}\mathbb{E} \bigl(E_r(u)-E_r(v)
\bigr)^2\leq \bigl|F(u)-F(v)\bigr|^p,
\]
for some increasing function $F(\cdot)$ and some $p>1$. Then, applying a
criterion for tightness on the space of continuous functions equipped
with the uniform topology (see, e.g., Theorem~12.3 in \cite
{Billingsley}) as well as making use of the fact that $k_n\Delta
_n\rightarrow0$, we have locally uniformly in $u$,
\begin{equation}
\label{proof_d_9} \frac{1}{\sqrt{n-k_n-2}}E_r(u) \stackrel{\mathbb {P}} {
\longrightarrow} 0.
\end{equation}
We are left with the first term on the right-hand side of (\ref
{proof_d_1}). First, we establish convergence for this term
finite-dimensionally in $u$. We have the decomposition
\[
\label{proof_d_10} \sum_{i=k_n+1}^{n-k_n-1}\bolds{
\zeta}_i(u) = \sum_{i=k_n+1}^{n-k_n-1}
\bigl(\bolds{\zeta}_i(u)-\mathbb{E}_{i-1}^n
\bigl(\bolds {\zeta}_i(u)\bigr)\bigr)+\sum
_{i=k_n}^{n-k_n-2}\mathbb{E}_i^n
\bigl(\bolds{\zeta }_{i+1}(u)\bigr).
\]
From here, we can apply a c.l.t. for triangular arrays (see, e.g., Theorem~2.2.13 of \cite{JP}) to establish that $\frac{1}{\sqrt
{n-2k_n-1}}\sum_{i=k_n+1}^{n-k_n-1}\bolds{\zeta}_i(u)$
converges finite-dimensionally in $u$ to $\zeta(\mathbf{u})$. This
convergence holds also locally uniformly in $u$ using the bound in
(\ref{proof_d_6}) and Theorem VI.4.1 in \cite{JS}. Combining the latter
with the asymptotic negligibility results in (\ref{proof_d_7}) and
(\ref{proof_d_9}), together with the fact that $k_n/n\rightarrow0$, we
have the result in (\ref{prelim_d-1}). Furthermore, since $\overline
{Z}_1^n(\mathbf{u})$ depends on $p$ only through $\mu_{p,\beta}$, the
marginal convergence in (\ref{prelim_d-1}) involving $\overline
{Z}_1^n(\mathbf{u})$ holds for any $p\in(0,\beta)$.
We turn next to (\ref{prelim_d-3}). We denote
\[
\label{proof_d_11} \chi_i = k_n \bigl(\Delta_n^{-p/\beta}
\overline{V}_i^n(p) -1 \bigr)^2 - \bigl(
\Xi_0^{(2,2)}(p,u,u,\beta) +2\Xi_1^{(2,2)}(p,u,u,
\beta) \bigr),
\]
and we note that $\Xi_0^{(2,2)}(p,u,u,\beta) $ and $\Xi
_1^{(2,2)}(p,u,u,\beta) $ do not depend on $u$.
Without loss of generality we can assume $n\geq2k_n+3$, and then we set
\[
\label{proof_d_12} A_j = \sum_{i=1}^{\lfloor{(n-k_n-2)}/{(k_n+1)}\rfloor}
\chi _{k_n+3+(j-1)+(i-1)(k_n+1)},\qquad j=1,\ldots,k_n+1.
\]
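As an aside, the bookkeeping behind the blocks $A_j$ can be sketched directly: each $A_j$ collects every $(k_n+1)$-st element of $\{\chi_i\}$ starting at offset $j$, so summands within one block are $k_n+1$ indices apart (and hence, by the conditional-expectation property below, behave like martingale increments). The toy input with all $\chi_i=1$ is purely illustrative:

```python
# Sketch of the interleaved block sums A_j with indices
# k_n + 3 + (j-1) + (i-1)(k_n+1), j = 1, ..., k_n+1.
# chi is indexed from 1 for readability (a dict standing in for the array).
def block_sums(chi, n, k_n):
    n_blocks = (n - k_n - 2) // (k_n + 1)     # floor((n - k_n - 2)/(k_n + 1))
    A = []
    for j in range(1, k_n + 2):               # j = 1, ..., k_n + 1
        s = 0.0
        for i in range(1, n_blocks + 1):
            s += chi[k_n + 3 + (j - 1) + (i - 1) * (k_n + 1)]
        A.append(s)
    return A

# With chi identically 1, every A_j equals the number of blocks:
chi = {i: 1.0 for i in range(1, 101)}
A = block_sums(chi, n=100, k_n=4)
```

The union of the index sets over $j$ covers a contiguous run of indices, missing only $O(k_n)$ terms at the boundary, which is the content of the $O_p(k_n)$ comparison that follows.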
Since $\mathbb{E}|\chi_i|<K$,
\begin{equation}
\label{proof_d_13} \Biggl\llvert \sum_{i=k_n+3}^n
\chi_i - \sum_{j=1}^{k_n+1}A_j
\Biggr\rrvert = O_p(k_n).
\end{equation}
Further, direct computation shows
\[
\label{proof_d_14} \mathbb{E}_{i-k_n-3}^n (\chi_i )=
0,\qquad i=k_n+3,\ldots,n,
\]
and applying the Burkholder--Davis--Gundy inequality for discrete
martingales, we have
\begin{equation}
\label{proof_d_15} \mathbb{E}|A_j|^x\leq K(k_n
\Delta_n)^{- ({x}/{2}\vee
1
)}, \qquad 1\leq x<\frac{\beta}{2p}.
\end{equation}
Using the inequality between power means, we further have
\[
\label{proof_d_16} \Biggl\llvert \frac{1}{k_n+1}\sum
_{j=1}^{k_n+1}A_j\Biggr\rrvert
^x\leq\frac
{1}{k_n+1}\sum_{j=1}^{k_n+1}|A_j|^x,\qquad
1\leq x<\frac{\beta
}{2p}.
\]
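The displayed bound is Jensen's inequality applied to the convex map $t\mapsto|t|^x$ for $x\geq1$; a quick numerical illustration with arbitrary inputs:

```python
import random

def power_mean_check(values, x):
    # Jensen: |mean(a)|**x <= mean(|a|**x) for the convex map t -> |t|**x, x >= 1.
    m = len(values)
    lhs = abs(sum(values) / m) ** x
    rhs = sum(abs(v) ** x for v in values) / m
    assert lhs <= rhs + 1e-12
    return lhs, rhs

random.seed(0)
vals = [random.gauss(0.0, 1.0) for _ in range(50)]
for x in (1.0, 1.3, 2.5):
    power_mean_check(vals, x)
```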
Applying the above inequality with $x$ sufficiently close to $\beta
/(2p)$ and the bound in (\ref{proof_d_15}), we have $ \Delta
_n(k_n\Delta
_n)^{{2p}/{\beta}\wedge{1}/{2}-1+\iota}
\sum_{j=1}^{k_n+1}A_j \stackrel{\mathbb{P}}{\longrightarrow} 0$, and
together with the result in (\ref{proof_d_13}), this implies (\ref
{prelim_d-3}).
\end{pf}
\subsection{Proofs of Theorems \texorpdfstring{\protect\ref{thm:cp}}{1} and
\texorpdfstring{\protect\ref{thm:clt}}{2}}
Theorem~\ref{thm:cp} and (\ref{clt_3}) of Theorem~\ref{thm:clt} follow
readily by combining Lemmas \ref{lema:prelim-a}--\ref{lema:prelim-d}
[and using (\ref{prelim_a-2}) for bounding $\widehat{Z}_2^n(u)$ in the
proof of Theorem~\ref{thm:cp}]. To show (\ref{clt_5}), we note first
that $H(p,u,\beta)$ and $\Xi_i(p,u,u,\beta)$, for $i=0,1$, are
continuously differentiable in $\beta$. For $H(p,u,\beta)$ this is
directly verifiable, and for $\Xi_i(p,u,u,\beta)$ with $i=0,1$, this
follows from the continuous differentiability of the characteristic
function $\beta\rightarrow e^{-A_{\beta}u^{\beta}}$ for $u\in
\mathbb
{R}_+$. Moreover, the derivative $\nabla_{\beta}H(p,u,\beta)$ is
bounded in $u$. From here, (\ref{clt_5}) follows from an application of
the continuous mapping theorem.
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{thm:gmm}}{3}}
We denote the true value of the parameter $\beta$ with $\beta_0$. Then
the claim in (\ref{gmm_6}) will follow if we can show the following:
\begin{equation}
\label{proof_gmm_1} \widehat{\mathbf{m}}\bigl(p,\widehat{\mathbf{u}},\widehat{\beta
}^{fs},\mathbf {u},\beta\bigr) \stackrel{\mathbb{P}} {\longrightarrow}
\mathbf {m}(p,\mathbf {u},\beta) \qquad\mbox{uniformly in $\beta\in[1,2]$,}
\end{equation}
where $\mathbf{m}(p,\mathbf{u},\beta)$ is defined via
\begin{eqnarray}
\mathbf{m}(p,\mathbf{u},\beta)_i &=& \int_{u_l^i}^{u_h^i}
\bigl(\log \bigl(\mathcal{L}(p,u,\beta_0)\bigr) - \log\bigl(\mathcal{L}(p,u,
\beta)\bigr) \bigr)\,du,
\nonumber
\\
\qquad\quad\label{proof_gmm_2} \sqrt{n} \widehat{\mathbf{m}}\bigl(p,\widehat{\mathbf{u}},
\widehat {\beta }^{fs},\mathbf{u},\beta_0\bigr) &\stackrel{
\mathcal{L}} {\longrightarrow }& \mathbf{W}^{1/2}(p,\mathbf{u},
\beta_0)\times\mathcal{\mathbf{N}},
\end{eqnarray}
where $\mathcal{\mathbf{N}}$ is a $K\times1$ standard normal vector and
\begin{equation}\qquad
\label{proof_gmm_3} \mathbf{M}(p,\widehat{\mathbf{u}},\beta) \stackrel{\mathbb {P}} {
\longrightarrow} \mathbf{M}(p,\mathbf{u},\beta) \qquad\mbox {uniformly in a
neighborhood of $\beta_0$.}
\end{equation}
This is because $\mathbf{m}(p,\mathbf{u},\beta) = \mathbf{0}$ if and
only if $\beta=\beta_0$ and $W(p,\mathbf{u},\beta_0)$ is positive definite.
We start with (\ref{proof_gmm_1}). We have
\[
\int_{\widehat
{u}_l^i}^{\widehat{u}_h^i} \log\bigl(\mathcal{L}(p,u,\beta)\bigr)\,du \stackrel
{\mathbb{P}}{\longrightarrow} \int_{u_l^i}^{u_h^i} \log\bigl(\mathcal
{L}(p,u,\beta)\bigr)\,du
\]
uniformly in $\beta\in[1,2]$ for $i=1,\ldots,K$ because
of $\widehat{\mathbf{u}}_l \stackrel{\mathbb{P}}{\longrightarrow
} \mathbf{u}_l$ and $\widehat{\mathbf{u}}_h \stackrel{\mathbb
{P}}{\longrightarrow} \mathbf{u}_h$ as well as the continuity of the
function $u^{\beta}$ in $\beta$ for every $u\in\mathbb{R}_+$; the same
argument can be used to show (\ref{proof_gmm_3}). To show (\ref
{proof_gmm_1}) it remains to show $\int_{\widehat{u}_l^i}^{\widehat
{u}_h^i} \log(\widehat{\mathcal{L}}^n(p,u,\widehat{\beta
}^{fs})')\,du \stackrel{\mathbb{P}}{\longrightarrow} \int_{u_l^i}^{u_h^i}
\log(\mathcal{L}(p,u,\beta_0))\,du$ for $i=1,\ldots,K$.\vspace*{-3pt} Due to the continuous
differentiability of the de-biasing\vspace*{1pt} term in $\beta$, $\widehat{\beta
}^{fs} \stackrel{\mathbb{P}}{\longrightarrow} \beta_0$ and the
asymptotic boundedness of $\widehat{\mathbf{u}}_l$ and $\widehat
{\mathbf
{u}}_h$ and of $\widehat{\mathcal{L}}^n(p,u,\widehat{\beta}^{fs})'$
from below, we have $\int_{\widehat{u}_l^i}^{\widehat{u}_h^i} [\log
(\widehat{\mathcal{L}}^n(p,u,\widehat{\beta}^{fs})')- \log
(\widehat
{\mathcal{L}}^n(p,u,\beta_0))]\,du \stackrel{\mathbb
{P}}{\longrightarrow
} 0$. From here (\ref{proof_gmm_1}) follows by applying Theorem~\ref{thm:cp}.
We are left with (\ref{proof_gmm_2}). This result follows from applying
the uniform convergence of $\widehat{\mathcal{L}}^n(p,u,\widehat
{\beta
}^{fs})'$ in Theorem~\ref{thm:clt}.
Finally, (\ref{gmm_7}) follows from the continuity of $G(p,\mathbf
{u},\beta)$ and $W^{-1}(p,\mathbf{u},\beta)$ in $\mathbf{u}$ and
$\beta$.
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{thm:cont}}{4}}
We will use the shorthand notation $v_n = \rho u_n$. We start with the
following lemma.
\begin{lemma}\label{lema:prelim-e}
Under the conditions of Theorem~\ref{thm:cont} we have
\begin{eqnarray}
\label{prelim_e-1} \widehat{\mathcal{L}}^n\bigl(p,u_n,
\widehat{\beta}^{fs}\bigr)' - \mathcal {L}
\bigl(p,u_n,\widehat{\beta}^{fs}\bigr) &=& O_p
\bigl(\sqrt{\Delta_n}u_n^{2}\bigr),
\\
\label{prelim_e-2}
\frac{\sqrt{n}}{u_n^2-v_n^2}\widehat{Z}_n &\stackrel{
\mathcal {L}} {\longrightarrow} &\frac{1}{24C_{p,2}}Z_1-
\frac{2}{p}C_{p,2}Z_2,
\end{eqnarray}
where
\begin{eqnarray*}
\widehat{Z}_n &=& \frac{1}{C_{p,2}u_n^2} \bigl(\widehat{\mathcal
{L}}^n\bigl(p,u_n,\widehat{\beta}^{fs}
\bigr)' - \mathcal{L}\bigl(p,u_n,\widehat {\beta
}^{fs}\bigr) \bigr) \\
&&{}- \frac{1}{C_{p,2}v_n^2} \bigl(\widehat{\mathcal
{L}}^n\bigl(p,v_n,\widehat{\beta}^{fs}
\bigr)' - \mathcal{L}\bigl(p,v_n,\widehat {\beta
}^{fs}\bigr) \bigr).
\end{eqnarray*}
\end{lemma}
\begin{pf}
We use the same
decomposition of $\widehat{\mathcal{L}}^n(p,u,\beta) - \mathcal
{L}(p,u,\beta)$ as in the proofs of Theorems \ref{thm:cp} and \ref
{thm:clt}. We start with the leading terms $\overline{Z}_1^n(u_n)$,
$\overline{Z}_2^{(a,n)}(u_n)$ and $\overline{Z}_2^{(b,n)}(u_n)$. Using
Taylor's series expansion, we have for any $u\in\mathbb{R}_+$ and $Z\in
\mathbb{R}$,
\begin{eqnarray*}
\label{proof_e_1} \cos(uZ)-1 &=& -\frac{u^2Z^2}{2}+\frac{u^4Z^4}{24}+R(uZ),\qquad
\bigl|R(uZ)\bigr|\leq K|uZ|^6,
\\
\label{proof_e_2} 1-e^{-u^2} &=& u^2-\frac{u^4}{2}+O
\bigl(u^6\bigr) \qquad\mbox{as $u\rightarrow 0$}.
\end{eqnarray*}
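Both expansions can be checked numerically. The constant $1/720$ below is the explicit alternating-series remainder constant for the cosine, consistent with $|R(uZ)|\leq K|uZ|^6$; the crude $u^6$ bound for the exponential is valid near zero:

```python
import math

def cos_remainder(z):
    # R(z) = cos(z) - 1 + z**2/2 - z**4/24; alternating Taylor series gives
    # |R(z)| <= |z|**6 / 720 for moderate |z|.
    return math.cos(z) - 1.0 + z * z / 2.0 - z ** 4 / 24.0

def exp_remainder(u):
    # (1 - exp(-u**2)) - (u**2 - u**4/2); this remainder is O(u**6) as u -> 0.
    return (1.0 - math.exp(-u * u)) - (u * u - u ** 4 / 2.0)

for z in (0.1, 0.5, 1.0, 2.0):
    assert abs(cos_remainder(z)) <= abs(z) ** 6 / 720.0 + 1e-15
for u in (0.05, 0.1, 0.2):
    assert abs(exp_remainder(u)) <= u ** 6   # crude O(u^6) bound near zero
```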
Using this approximation we have (note that when $L_t$ is a Brownian
motion, then $A_{\beta} = 1$ and so $C_{p,\beta} = 1/\mu_{p,\beta}$)
\begin{eqnarray}
\label{proof_e_3}
&&\frac{1}{C_{p,2}u_n^2}\overline{Z}_1^n(u_n)
- \frac
{1}{C_{p,2}v_n^2}\overline{Z}_1^n(v_n)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = \frac
{u_n^2-v_n^2}{24C_{p,2}}\sum_{i=k_n+3}^n
\biggl[\frac{n^2(\Delta
_i^nS-\Delta_{i-1}^nS)^4}{\mu_{p,2}^{2}}-\frac{12}{\mu
_{p,2}^{2}} \biggr]+O_p
\bigl(u_n^4\sqrt{n}\bigr).
\end{eqnarray}
We similarly get
\begin{eqnarray}
\label{proof_e_4}
&&\frac{1}{C_{p,2}u_n^2}\overline{Z}_2^{(a,n)}(u_n)
- \frac
{1}{C_{p,2}v_n^2}\overline{Z}_2^{(a,n)}(v_n)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = \bigl(v_n^2-u_n^2\bigr)
\frac{2}{p}C_{p,2}\sum_{i=k_n+3}^n
\bigl(\Delta _n^{-p/2}\overline{V}_i^n(p)-1
\bigr)+O_p\bigl(u_n^4\sqrt{n}\bigr),
\end{eqnarray}
and also
\begin{eqnarray}
\label{proof_e_5}
&&\frac{k_n}{n-k_n-2}\sum_{i=k_n+3}^n
\bigl(\Delta_n^{-p/2}\overline {V}_i^n(p)-1
\bigr)^2 \nonumber\\
&&\quad{}- \bigl(\Xi_0^{(2,2)}(p,u_n,u_n,2)+2
\Xi _1^{(2,2)}(p,u_n,u_n,2) \bigr)
\\
&& \qquad= O_p (\sqrt{k_n\Delta_n} ). \nonumber
\end{eqnarray}
As in Lemma~\ref{lema:prelim-d}, it is easy to show
\begin{equation}
\label{proof_e_6} \frac{1}{\sqrt{n-k_n-2}}\sum_{i=k_n+3}^n
\pmatrix{\displaystyle\frac{n^2(\Delta_i^nS-\Delta_{i-1}^nS)^4}{\mu
_{p,2}^{2}}-\frac{12}{\mu_{p,2}^{2}}
\vspace*{2pt}\cr
\displaystyle\Delta_n^{-p/2}\overline {V}_i^n(p)-1
} \stackrel{\mathcal{L}} {\longrightarrow} \pmatrix{ Z_1
\vspace*{2pt}\cr
Z_2 }.
\end{equation}
Next, using Taylor's expansion as well as $\widehat{\beta}^{fs}-2 =
o_p(k_nu_n^{2}\sqrt{\Delta_n})$, we have
\begin{equation}
\label{proof_e_7} \frac{\sqrt{n}}{u_n^{2}k_n} \bigl(H\bigl(p,u_n,\widehat{\beta
}^{fs}\bigr)-H(p,u_n,2) \bigr) = o_p(1).
\end{equation}
We proceed with the rest of the terms in the decomposition of $\widehat
{\mathcal{L}}^n(p,u_n,\beta) - \mathcal{L}(p,u_n,\beta)$ and
$\widehat
{\mathcal{L}}^n(p,v_n,\beta) - \mathcal{L}(p,v_n,\beta)$.
We start with the term $R_1^n(u_n)$. It relies on the bound in (\ref
{prelim_a-1}), which in turn depends on the analysis of the term $A_3$
in Section~5.2.3 of \cite{T13}. When $L$ is a Brownian motion, the
bounds for this term get slightly changed. In particular, the bound in
equation (41) of that paper becomes now $K\Delta_n^{1-\iota}$ for
$q>r\vee1$ (this follows by using integration by parts and the
Burkholder--Davis--Gundy inequality) and arbitrarily small $\iota>0$.
Using this, it is easy to show that when $L$ is a Brownian motion, the
bound in (\ref{prelim_a-1}) holds with $\alpha_n$ replaced by $\beta
_n$, where
\[
\label{proof_e_8} \beta_n = \frac{\Delta_n^{({3}/{2})(1+(p-1/2)\wedge0-\iota
)}}{\sqrt
{k_n}}\vee\Delta_n^{{1}/{(r\vee1)}-\iota}
\vee\Delta _n^{{p}/{\beta'}\wedge1-{p}/{2}-\iota}\vee\Delta_n^{
{(p+1)}/{(r\vee
1+1)}-\iota}.
\]
Now the bound for $R_1^n(u_n)$ becomes
\begin{equation}
\label{proof_e_9} \mathbb{E}\biggl\llvert \frac{R_1^n(u_n)}{nu_n^{2}}\biggr\rrvert \leq K
\biggl( \frac
{\beta_n\vee k_n^{-{1}/{p}+\iota}}{u_n^2} \biggr).
\end{equation}
Further, using the same steps as in the proofs of Lemmas \ref
{lema:prelim-a}--\ref{lema:prelim-c}, as well as
\[
\label{proof_e_10} \sup_{u,x\in\mathbb{R}_+} \bigl(|u|^p\bigl|f'_{i,u}(x)\bigr|+|u|^{2p}\bigl|f^{\prime\prime}_{i,u}(x)\bigr|
\bigr)<\infty,
\]
we get
\begin{eqnarray}
\label{proof_e_11} &&\mathbb{E}\biggl\llvert \frac{R_2^n(u_n)}{nu_n^{2}}\biggr\rrvert \leq K
\beta _nu_n^{-2},\qquad \mathbb{E}\biggl\llvert
\frac{R_4^n(u_n)}{nu_n^{2}}\biggr\rrvert \leq K(k_n\Delta_n)^{1-\iota},
\\
\label{proof_e_12}
&&\mathbb{E}\biggl\llvert \frac{R_3^n(u_n)}{nu_n^{2}}\biggr\rrvert \leq
Ku_n^{-2-2p} \bigl( (k_n\Delta_n)^{1-\iota}
\vee k_n^{-1/2}(k_n\Delta_n)^{
{1}/{r}\wedge
{(2-p)}/{2}-\iota}
\bigr),
\\
\label{proof_e_13}
&&\frac{\mathbb{E}\llvert \widehat{Z}_1^n(u_n)-\overline
{Z}_1^n(u_n)\rrvert }{nu_n^{2}}\leq K \biggl(
\frac{(\beta_n\vee k_n^{-{1}/{p}+\iota
})}{u_n^{2}} \vee\sqrt{
\Delta_n}(k_n\Delta_n)^{1/2-\iota}
\biggr),
\\
\label{proof_e_14}
&&\frac{\mathbb{E}\llvert \widehat{Z}_2^n(u_n)-\overline{Z}_2^{(a,n)}(u_n)
- \overline{Z}_2^{(b,n)}(u_n)\rrvert }{nu_n^{2}}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad \leq K \biggl(\frac{k_n^{-{1}/{p}+\iota
}}{u_n^{2+2p}} \vee k_n^{-3/2+\iota}\vee
k_n^{-1/2}(k_n\Delta _n)^{
{1}/{r}\wedge{(2-p)}/{2}-\iota}
\biggr).
\end{eqnarray}
Combining the bounds in (\ref{proof_e_9})--(\ref{proof_e_14}), together
with (\ref{proof_e_3})--(\ref{proof_e_5}), the result in (\ref
{proof_e_6}) and (\ref{proof_e_7}), we establish Lemma~\ref
{lema:prelim-e}. We further note that when $X$ is a L\'{e}vy process,
$R_3^n(u)$ and $R_4^n(u)$ are identically zero.
\end{pf}
We proceed with the proof of Theorem~\ref{thm:cont}. Using Taylor's
expansion and the result in (\ref{prelim_e-1}), $\widehat{Z}_n$,
defined in the statement of Lemma~\ref{lema:prelim-e}, is
asymptotically equivalent to
\begin{eqnarray*}
&&\frac{1}{C_{p,2}u_n^2} \bigl(-\log\bigl(\widehat{\mathcal {L}}^n
\bigl(p,u_n,\widehat {\beta}^{fs}\bigr)'\bigr)
- C_{p,2}u_n^2 \bigr) \\
&&\qquad{}- \frac{1}{C_{p,2}v_n^2}
\bigl(-\log\bigl(\widehat{\mathcal{L}}^n\bigl(p,v_n,
\widehat{\beta}^{fs}\bigr)'\bigr) - C_{p,2}v_n^2
\bigr).
\end{eqnarray*}
Using again Taylor's series expansion, the result in (\ref{prelim_e-1})
and that $u_n^{-2}\sqrt{\Delta_n}\rightarrow0$, we have that the above
is asymptotically equivalent to
\begin{eqnarray*}
&& \bigl(\log \bigl(-\log\bigl(\widehat{\mathcal{L}}^n
\bigl(p,u_n,\widehat {\beta }^{fs}\bigr)'
\bigr) \bigr)-\log\bigl(C_{p,2}u_n^2\bigr)
\bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad - \bigl(\log \bigl(-\log\bigl(\widehat {\mathcal{L}}^n
\bigl(p,v_n,\widehat{\beta}^{fs}\bigr)'\bigr)
\bigr)-\log \bigl(C_{p,2}v_n^2\bigr) \bigr).
\end{eqnarray*}
From here result (\ref{cont_3}) in Theorem~\ref{thm:cont}, both in the
general and L\'{e}vy case, follows from Lemma~\ref{lema:prelim-e}.
\section*{Acknowledgments}
I would
like to thank the Editor, the Associate Editor and two anonymous
referees for many useful suggestions and comments. I would also like to
thank Denis Belomestny, Jose Manuel Corcuera, Valentine Genon-Catalot,
Jean Jacod, Cecilia Mancini, Philip Protter, Markus Reiss, Peter
Spreij, Mathias Vetter and seminar participants at the workshop on
Statistical Inference for L\'{e}vy processes at the Lorentz Center,
University of Leiden and the workshop on Statistics of High-Frequency
Data at Humboldt University.
Lattice QCD, the only known first principle non-perturbative approach to QCD,
has made a dramatic progress in the last decade because of the increasing
power in the computing resources and the improvement of the
numerical algorithms for the simulations. Today fully dynamical lattice QCD
simulations together with effective theories enable one to make first
principle predictions of hadronic physics. Recently, a so-called
mixed-action lattice simulation
using different kinds of fermions in the valence and sea sectors has gained
attention because these simulations can reach pion masses as low
as $300~\text{MeV}$ \cite{Aubin:2004fs,Aubin:2004ej,Silas:2005,Edward:2005,
Silas:2006.1,Silas:2006.2,Silas:2006.3,Silas:2006.4,Edward:2005.1,Negele:2006.1,Negele:2006.2}. In particular, despite
the controversy over the validity of the fourth-root trick used to
overcome the fermion taste-doubling problem of the staggered fermions,
lattice calculations using Ginsparg-Wilson valence quarks on
staggered sea quarks have become popular
because many quantitative results have been obtained
\cite{Aubin:2004fs,Aubin:2004ej,Silas:2005,Edward:2005,
Silas:2006.1,Silas:2006.2,Silas:2006.3,Silas:2006.4}. Further,
the publicly available MILC configurations
\cite{Bernard:2001av} have encouraged the
use of lattice calculations with dynamical staggered fermions.
The lattice action can be described by the Symanzik
action~\cite{Symanzik:1983dc,Symanzik:1983gh}, which is based on the
symmetries of the underlying lattice theory and can be
organized in powers of the lattice spacing $a$:
\begin{equation}
{\mathcal L}_{\text{Sym}} =
{\mathcal L}
+
a \, {\mathcal L}^{(5)}
+
a^2 \, {\mathcal L}^{(6)}
+
\dots \label{eq:syman}
\, ,\end{equation}
where ${\mathcal L}^{(n)}$ represents the contribution from dimension-$n$
operators. The symmetries of the lattice action are respected
by the Symanzik Lagrangian ${\mathcal L}_{\text{Sym}}$ order-by-order in $a$.
In order to address lattice spacing corrections, based on the Symanzik
action, several mixed-action chiral perturbation
theories have been developed to investigate the systematic errors which
arise from lattice simulations due to the non-vanishing lattice spacing
\cite{Bar:2002nr,Bar:2003mh,Silas:2003,bct0501,OBar:2005,bct0508}. These mixed-action chiral
perturbation theories provide a way to test the results from the lattice
calculations using the mixed lattice action and vice-versa.
The nucleon axial charge, $g_A$, is an important quantity in QCD which
quantifies the spontaneous chiral symmetry breaking. Since its value
is known to a very high precision in experiments \cite{PDG:2006}, it can be
used as a very good test for the first principle lattice calculations.
This quantity of hadronic physics has been studied extensively
in both lattice simulations
and chiral perturbation theory \cite{Edward:2005,Jenkins:1991jv,Jenkins:1991es,
Blum:2004,Khan:2005,Borasoy:1999,Zhu:2001,s&m:2002,s&m:2004}. Although performing lattice
simulations to calculate $g_A$ is straightforward, some controversy remains.
For example,
the $g_A$ obtained from lattice simulations has raised concerns about
the possibility of large volume effects in the chiral limit
\cite{Jaffe:2002,Cohen:2002}.
Recently, using the mixed-action of Domain Wall valence quarks with
staggered sea quarks, \cite{Edward:2005} obtained a result for $g_A$
which matches nicely
with the prediction from chiral perturbation theory,
and it is expected that future data might be able to make contact
with experiment via extrapolation. On the other hand,
in \cite{Edward:2005}, the discretization errors
in computing $g_A$ using the mixed action might have been overlooked.
With the advent of mixed-action $\chi$PT \cite{OBar:2005} and extension
to the baryon sector \cite{bct0508}, it is surprising that
lattice spacing effects on the nucleon axial charge have
not been addressed. In this paper we address the issue mentioned above,
namely, we use the mixed-action chiral perturbation theory to study
the lattice-spacing dependence of $g_A$. In particular,
we find that it depends on the unphysical low-energy constant $g_{1}$
and a new unknown LEC for which the actual value must be obtained from lattice
calculations.
This paper is organized as follows. In Section
$2$, we briefly review the chiral perturbation theory of
Ginsparg-Wilson valence quarks in staggered sea. In particular, we will put
emphasis on the lattice-spacing $a$ dependence of the mixed-action
Lagrangian. Next, in Section $3$, we will construct the axial-vector current
from the mixed-action Lagrangian and compute the $pn$ matrix element of the
axial-vector current. Further we will compare the results obtained in the
presence and absence of lattice-spacing $a$. Finally we conclude
in Section $4$. For completeness, we include some standard functions
which arise in this calculation in the Appendix.
\section{Chiral Lagrangian}
To address the finite lattice spacing issue, with the pioneering
work in \cite{Sharpe:1998xm,Lee:1999zx},
\cite{Rupak:2002sm,Bar:2002nr,Bar:2003mh} have extended
the $\chi$PT in the meson sector. These
finite lattice spacing artifacts have also been investigated in staggered
$\chi$PT for the mesons
\cite{Aubin:2003mg,Aubin:2003uc,Sharpe:2004is} and
the mixed-action PQ$\chi$PT \cite{Silas:2003,OBar:2005,bct0501,bct0508}.
To address the finite lattice spacing effects, one
utilizes a dual expansion in the quark masses and lattice spacing
with the usual energy scale \cite{Silas:2003,Bar:2003mh}:
\begin{equation}
m_{q} \ll \Lambda_{QCD} \ll \frac{1}{a}\, ,
\end{equation}
and the following power counting scheme \cite{bct0508}:
\begin{equation} \label{eqn:pc}
{\varepsilon}^2 \sim
\begin{cases}
m_q / {\Lambda}_{QCD} \\
a^2 {\Lambda}_{QCD}^2
\end{cases}
\, ,\end{equation}
which is relevant for current improved staggered quarks simulations.
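For orientation, one can check that the two small parameters in the power counting above are indeed of comparable size for typical simulation parameters. The following back-of-the-envelope sketch uses hypothetical, illustrative inputs (the lattice spacing, scales and pion mass are generic choices, not values quoted in this paper):

```python
# Rough arithmetic for the dual power counting eps^2 ~ m_q/Lambda_QCD and
# eps^2 ~ a^2 Lambda_QCD^2.  All inputs are illustrative placeholders.
HBARC = 197.327            # MeV * fm, unit-conversion constant

a = 0.125                  # fm, illustrative (coarse MILC-like) lattice spacing
Lambda_QCD = 300.0         # MeV, illustrative QCD scale

eps2_a = (a * Lambda_QCD / HBARC) ** 2          # a^2 Lambda_QCD^2 ~ 0.04

m_pi = 300.0               # MeV, a typical mixed-action simulation pion mass
Lambda_chi = 1000.0        # MeV, chiral symmetry breaking scale ~ 1 GeV
eps2_m = (m_pi / Lambda_chi) ** 2               # proxy for m_q/Lambda_QCD ~ 0.09
```

Both quantities come out at the few-percent level, which is what makes the simultaneous expansion in $m_q$ and $a^2$ consistent.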
In this section, we briefly review the chiral perturbation theory of
the partially quenched mixed-action theory. In particular, we focus on the
mixed-action of Ginsparg-Wilson valence quarks in staggered sea quarks.
We only write the chiral Lagrangian of the associated mixed-action
theory and do not go into the detail of how this chiral Lagrangian can be
constructed from the Symanzik Lagrangian since the detail of the procedure
can be found in the references cited above.
For our purpose in this paper, we only
need to keep in mind that ${\mathcal L}^{(5)} = 0$ and the taste-symmetry breaking and
$SO(4)$ breaking operators will not enter at the order we calculate
the axial-vector current matrix element \cite{bct0508}.
\subsection{Mesons}
In the following, the strange quark mass is assumed to be fixed to its
physical value; therefore one can use a two-flavor theory and does not
need to worry about the extrapolation in the strange quark mass.
The lattice action we consider here is built from $2$ flavors of
Ginsparg-Wilson
valence quarks and $2$ flavors of staggered sea quarks. In the continuum
limit, the Lagrangian is just the partially quenched Lagrangian which is
given by:
\begin{equation}
{\mathcal L} = \ol Q D\hskip-0.65em / \, Q + \ol Q m_q Q\, ,
\end{equation}
where the quark fields appear as a vector $Q$ with entries given by:
\begin{equation}
Q = ( u, d, j_1, j_2, j_3, j_4, l_1, l_2, l_3, l_4, \tilde{u}, \tilde{d}
)^T\,
,\end{equation}
and transforms in the fundamental representation of the graded group
$SU(10|2)$.
Notice the fermion doubling has produced four tastes for each flavor $(j,l)$
of staggered sea quark. The partially quenched generalization of the mass
matrix $m_q$ in the isospin limit is given by:
\begin{equation}
m_q = \text{diag} (m_u, m_u, m_j \xi_I, m_j \xi_I, m_u, m_u)
\, ,\end{equation}
with $\xi_I$ as the $4\times4$ taste identity matrix.
The low-energy effective theory of the theory we consider above is written in
terms of the pseudo-Goldstone mesons emerging from spontaneous symmetry
breaking, which are realized non-linearly in a $U(10|2)$ matrix exponential
$\Sigma$
\begin{equation}
\Sigma=\exp\left(\frac{2i\Phi}{f}\right)
= \xi^2\,.
\label{sig}
\end{equation}
To the order (next-to-leading order) at which we work in investigating
the
$a^2$-dependence of the matrix element of the axial-vector current between
nucleons,
the relevant partially quenched chiral perturbation theory Lagrangian for the
mesons up to $O(\epsilon^2)$ is given by:
\begin{equation} \label{eq:Llead}
{\mathcal L} =
\frac{f^2}{8}
\text{str} \left( \partial_\mu\Sigma^\dagger \partial_\mu\Sigma\right)
- {\lambda} \,\text{str}\left(m_q\Sigma^\dagger+m_q^\dagger\Sigma\right)
+ \frac{1}{6} \mu_0^2 \, (\text{str} \, \Phi)^2
+ a^2 \mathcal{V}\, ,
\end{equation}
where
\begin{equation}
\Phi=
\left(
\begin{array}{cc}
M & \chi^{\dagger} \\
\chi & \tilde{M}
\end{array}
\right)
\, ,\end{equation}
$f=132$~MeV, the str() denotes a graded flavor trace and
$\Sigma$ is defined in (\ref{sig}). The $M$,
$\tilde{M}$, and $\chi$ are matrices of pseudo-Goldstone bosons and
pseudo-Goldstone fermions, for example, see \cite{s&m:2002}.
The potential $\mathcal{V}$ contains the effects of dimension-$6$ operators
in the Symanzik action \cite{OBar:2005}. Expanding the Lagrangian in
Eq.~\eqref{eq:Llead} to the leading order, one can determine the meson masses
needed for the calculations of baryon observables. In particular, the
relevant mesons needed for the axial-vector current matrix element
calculations are the valence pion, mesons made
of $S_i \overline{V}$ with one staggered sea quark $S_{i}$ of flavor $S$ and
quark taste $i$ and a Ginsparg-Wilson valence quark $\overline{V}$ and
finally mesons
with two staggered quarks in a flavor-neutral, taste-singlet combination.
The associated masses to the lowest order for the latter two mesons can
be written in terms of the valence pion mass $m^2_{VV}$ which can be
determined from the valence spectroscopy and the pseudoscalar taste
pion mass $m^2_{SS}$, whose value can be learned from the MILC
spectroscopy, and are given by:
\begin{eqnarray}
&m_{S V}^2& = \frac{m^2_{SS}+m^2_{VV}}{2} +
\frac{16 \, a^2 \, C_{\text{mix}}}{f^2}\, ,\nonumber \\
&m_{SS,I}^2& = m^2_{SS} + \frac{64 \, a^2}{f^2} (C_3 + C_4)\,
\label{qmass}
,\end{eqnarray}
where $C_{\text{mix}}$ and $C_3+C_4$ are the parameters in the potential
$\mathcal{V}$. Since these masses are independent of the quark taste, we do
not specify the taste index in (\ref{qmass}). Notice the flavor singlet field
$\Phi$ is rendered heavy by
the $U(1)_A$ anomaly and can be integrated out in {PQ$\chi$PT}. However,
the propagator of the flavor-neutral field deviates from a simple pole
form \cite{Sharpe:2001fh}. Since only the valence-valence flavor-neutral
propagators are needed for our later calculations,
for $V,V' = u,d$, the
leading-order $\eta_{V} \eta_{V'}$ propagator in the isospin limit is given by
\cite{s&m:2002}:
\begin{equation}
{\cal G}_{\eta_{V} \eta_{V'}} =
\frac{i \delta_{VV'}}{q^2 - m^2_{VV} +i\epsilon}
- \frac{i}{2} \frac{\left(q^2 - m^2_{SS,I}\right)}
{\left(q^2 - m^2_{VV} +i\epsilon \right)^{2}}\, .
\end{equation}
One can further show that these propagators can be conveniently rewritten in a
compact form which will be useful in our later calculations:
\begin{equation}
{\cal G}_{\eta_{V} \eta_{V'}} =
{\delta}_{VV'} P_V +
{\cal H}_{VV}\left(P_V,P_V\right)\, ,
\end{equation}
where
\begin{eqnarray}
\qquad
P_V\ =\ { i \over q^2- m_{VV}^2 + i \epsilon}\, ,\ \ \
{\mathcal H}_{VV}(A, A)
=
\frac{1}{2}
\Big[\,
(m_{SS,I}^2 - m_{VV}^2)\frac{\partial}{\partial m_{VV}^2}A \,-\,A \Big]
\label{eq:HPsdef}
\end{eqnarray}
In writing (\ref{eq:HPsdef}), we have used $A = A(m^2_{VV})$.
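The rewriting of the flavor-neutral propagator in terms of $P_V$ and $\mathcal{H}_{VV}$ can be verified directly; a short numerical check (dropping the $i\epsilon$ prescription and using arbitrary mass values):

```python
def G(q2, mVV2, mSS2):
    # Double-pole form of the valence-valence flavor-neutral propagator (V = V').
    return 1j / (q2 - mVV2) - 0.5j * (q2 - mSS2) / (q2 - mVV2) ** 2

def P(q2, mVV2):
    # Simple-pole piece P_V = i / (q^2 - m_VV^2).
    return 1j / (q2 - mVV2)

def H(q2, mVV2, mSS2):
    # Hairpin operator: H_VV(P, P) = [(m_SS,I^2 - m_VV^2) dP/dm_VV^2 - P] / 2,
    # with dP/dm_VV^2 = i / (q^2 - m_VV^2)**2 computed analytically.
    dP = 1j / (q2 - mVV2) ** 2
    return 0.5 * ((mSS2 - mVV2) * dP - P(q2, mVV2))

# Spot-check the identity G = delta_{VV'} P_V + H_VV(P_V, P_V) at a few
# kinematic points (V = V', so delta_{VV'} = 1); mass values are arbitrary.
for q2 in (0.3, 1.7, -2.2):
    lhs = G(q2, mVV2=0.9, mSS2=1.4)
    rhs = P(q2, 0.9) + H(q2, 0.9, 1.4)
    assert abs(lhs - rhs) < 1e-12
```

The check amounts to splitting $q^2-m^2_{SS,I}$ as $(q^2-m^2_{VV})+(m^2_{VV}-m^2_{SS,I})$ in the double-pole term.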
\subsection{Baryon}
As has been shown in \cite{bct0508}, to $O(\epsilon^2)$, the free Lagrangian
for the $\bf{572}$-dimensional super-multiplet ${\mathcal B}^{ijk}$ and the
$\bf{340}$-dimensional super-multiplet ${\mathcal T}_\mu^{ijk}$ fields in the
mixed-action $SU(10|2)$ partially quenched $\chi$PT has the
same form as in quenched and partially quenched theories
\cite{Labrenz:1996jy,MSavage:2002,s&m:2002} with the addition of new lattice-spacing
dependent terms:
\begin{eqnarray} \label{eqn:L}
{\mathcal L}
&=&
i\left(\ol{\mathcal B} v\cdot{\mathcal D}{\mathcal B}\right)
+2{\alpha}^{(PQ)}_{M}\left(\ol{\mathcal B} {\mathcal B}{\mathcal M}_+\right)
+2{\beta}^{(PQ)}_{M}\left(\ol{\mathcal B} {\mathcal M}_+{\mathcal B}\right)
+2\sigma^{(PQ)}_{M}\left(\ol{\mathcal B}\cB\right)\text{str}\left({\mathcal M}_+\right)
+ a^2 \mathcal{V}_{{\mathcal B}} \nonumber \\
&-&i\left(\ol{\mathcal T}^{\mu} v\cdot{\mathcal D}{\mathcal T}_\mu\right)
+{\Delta}\left(\ol{\mathcal T}^{\mu}{\mathcal T}_\mu\right)
-2{\gamma}^{(PQ)}_{M}\left(\ol{\mathcal T}^{\mu} {\mathcal M}_+{\mathcal T}_\mu\right)
-2\ol\sigma^{(PQ)}_{M}\left(\ol{\mathcal T}^{\mu}{\mathcal T}_\mu\right)\text{str}\left({\mathcal M}_+\right)
+ a^2 \mathcal{V}_{{\mathcal T}}\, . \nonumber
\end{eqnarray}
The baryon potentials $\mathcal{V}_{\mathcal B}$ and $\mathcal{V}_{\mathcal T}$ in (\ref{eqn:L})
arise from the operators in ${\mathcal L}^{(6)}$ of the Symanzik Lagrangian.
In the baryon Lagrangian, the mass operator is defined by:
\begin{equation}
{\mathcal M}_+ = \frac{1}{2}\left(\xi^\dagger m_Q \xi^\dagger + \xi m_Q \xi\right)
,\end{equation}
and the parameter ${\Delta} \sim {\varepsilon} {\Lambda}_\chi$ is the mass splitting between the
$\bf{572}$ and $\bf{340}$ in the chiral limit.
The parenthesis notation used in Eq.~\eqref{eqn:L} is that
of~\cite{Labrenz:1996jy}. Further, the embedding of the octet and decuplet
baryons in their super-multiplets is the same as before
\cite{MSavage:2002,s&m:2002}.
The Lagrangian describing the interactions of the ${\mathcal B}^{ijk}$
and ${\mathcal T}_\mu^{ijk}$ with the pseudo-Goldstone mesons in the mixed-action is
again the same as in the partially quenched theories \cite{s&m:2002} with a
new $a^2$-dependent term:
\begin{eqnarray} \label{eqn:Linteract}
{\cal L} &=&
2 {\alpha} \left(\ol {\mathcal B} S^{\mu} {\mathcal B} A_\mu \right)
+ 2 {\beta} \left(\ol {\mathcal B} S^{\mu} A_\mu {\mathcal B} \right)
+ 2{\mathcal H}\left(\ol{{\mathcal T}}^{\nu} S^{\mu} A_\mu {\mathcal T}_\nu\right) \nonumber \\
&+& \sqrt{\frac{3}{2}}{\mathcal C}
\left[
\left(\ol{{\mathcal T}}^{\nu} A_\nu {\mathcal B}\right)+ \left(\ol {\mathcal B} A_\nu {\mathcal T}^{\nu}\right)
\right] + a^2 {\mathcal L}^{(6)}_{int} \,
\label{interL}
.\end{eqnarray}
The axial-vector and vector meson fields $A_\mu$ and $V_\mu$
are defined by: $ A^{\mu}=\frac{i}{2}
\left(\xi\partial^{\mu}\xi^\dagger-\xi^\dagger\partial^{\mu}\xi\right)$
and $V^{\mu}=\frac{1}{2} \left(\xi\partial^{\mu}\xi^\dagger+\xi^\dagger\partial^{\mu}\xi\right)$.
The latter appears in Eq.~\eqref{eqn:L} for the
covariant derivatives of ${\mathcal B}_{ijk}$ and ${\mathcal T}_{ijk}$
that both have the form
\begin{equation}
({\mathcal D}^{\mu} {\mathcal B})_{ijk}
=
\partial^{\mu} {\mathcal B}_{ijk}
+(V^{\mu})^{l}_{i}{\mathcal B}_{ljk}
+(-)^{\eta_i(\eta_j+\eta_m)}(V^\mu)^{m}_{j}{\mathcal B}_{imk}
+(-)^{(\eta_i+\eta_j)(\eta_k+\eta_n)}(V^{\mu})^{n}_{k}{\mathcal B}_{ijn}
.\end{equation}
The vector $S^{\mu}$ is the covariant spin operator \cite{Jenkins:1991jv,
Jenkins:1991es} and $a^2{\mathcal L}^{(6)}_{int}$ is from the operators in
${\mathcal L}^{(6)}$ of the Symanzik Lagrangian. As we will see later, the explicit
form of this term is not required in our calculations. The effective
axial-vector current from $a^2{\mathcal L}^{(6)}_{int}$ can be obtained by
a simple argument.
The parameters that appear in the mixed-action {PQ$\chi$PT}\ Lagrangian can be
related to those in {$\chi$PT}\ by matching. Further, since
QCD is contained in the fourth-root of the sea-sector of the theory,
one should restrict oneself to one taste for each flavor of
staggered sea quark when performing the matching. To be more specific,
one restricts to the $q_{S}q_{S}q_{S}$ sector \cite{bct0508} and compares the
mixed-action {PQ$\chi$PT}\ Lagrangian obtained with that of {$\chi$PT}. With this identification and matching procedure, one finds \cite{s&m:2002}:
\begin{eqnarray}
\alpha_M & = & {2\over 3}\alpha_M^{(PQ)} - {1\over 3} \beta_M^{(PQ)}
\, ,\quad \
\sigma_M \ =\ \sigma_M^{(PQ)} + {1\over 6}\alpha_M^{(PQ)}
+ {2\over 3}\beta_M^{(PQ)}
\nonumber\\
\gamma_M & = & \gamma_M^{(PQ)}
\, , \quad \
\overline{\sigma}_M\ =\ \overline{\sigma}_M^{(PQ)}
\, .
\label{eq:mequals}
\end{eqnarray}
Further, when restricting to the tree level, from (\ref{eqn:Linteract}) one
also finds \cite{s&m:2002}:
\begin{eqnarray}
\alpha & = & {4\over 3} g_A\ +\ {1\over 3} g_1
\, ,\ \ \
\beta \ =\ {2\over 3} g_1 - {1\over 3} g_A
\, ,\ \ \
{\cal H} \ =\ g_{\Delta\Delta}
\, ,\ \ \
{\cal C} \ =\ -g_{\Delta N}
\, ,
\label{eq:axrels}
\end{eqnarray}
and $g_X = 0$.
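The tree-level relations in (\ref{eq:axrels}) are linear in the couplings and can be inverted by simple algebra, giving $g_A = (2\alpha - \beta)/3$ and $g_1 = (\alpha + 4\beta)/3$. A minimal numeric check (input values are illustrative):

```python
# Check the matching relations alpha = (4/3) g_A + (1/3) g_1,
# beta = (2/3) g_1 - (1/3) g_A, and their inversion.
g_A, g_1 = 1.25, 0.5   # illustrative input values

alpha = (4.0 / 3.0) * g_A + (1.0 / 3.0) * g_1
beta  = (2.0 / 3.0) * g_1 - (1.0 / 3.0) * g_A

# Inverting the two linear relations:
g_A_back = (2.0 * alpha - beta) / 3.0
g_1_back = (alpha + 4.0 * beta) / 3.0

assert abs(g_A_back - g_A) < 1e-12
assert abs(g_1_back - g_1) < 1e-12
```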
\section{The Axial-Vector Current}
The matrix element of the axial-vector current,
$\overline{q}\tau^a\gamma_\mu\gamma_5 q$, has been studied extensively both
on the lattice \cite{Edward:2005,Blum:2004,Khan:2005} and in
$\chi$PT \cite{Jenkins:1991jv,Jenkins:1991es,Borasoy:1999,Zhu:2001,
s&m:2002,s&m:2004}.
For our convention, we will follow \cite{bct0412,bct0504} and use the
following charge matrix for the flavor-changing current in extending the
isovector axial current $\overline{Q}\overline{\tau}^a\gamma_\mu\gamma_5 Q$
to PQQCD since we are interested in the neutron to proton axial
transition:
\begin{equation}
\overline{\tau}^{+} = (0,1,0,0,0,\ldots,0)\, .
\end{equation}
With this convention, at leading order, the flavor-changing axial current is
given as \cite{s&m:2002}:
\begin{eqnarray}
^{(PQ)}j_{\mu,5}^{+}
& &\rightarrow\,
2\alpha\ \left(\overline{\cal B} S_\mu {\cal B}\ {\overline{\tau}^{+}_{\xi +}}\right)
\ +\
2\beta\ \left(\overline{\cal B} S_\mu\ {\overline{\tau}^{+}_{\xi +}}{\cal B} \right)
\ +\
2{\cal H} \left(\overline{\cal T}^\nu S_\mu\ {\overline{\tau}^{+}_{\xi +}}{\cal T}_\nu \right)
\nonumber\\
& &
\ +\
\sqrt{3\over 2}{\cal C}
\left[\
\left( \overline{\cal T}_\mu\ {\overline{\tau}^{+}_{\xi +}} {\cal B}\right)\ +\
\left(\overline{\cal B}\ {\overline{\tau}^{+}_{\xi +}} {\cal T}_\mu\right)\ \right]
\ \ + \ a^2 j^{+}_{a \mu,5}\ +\ \ldots\, ,
\label{eq:LOaxialcurrent}
\end{eqnarray}
where $ \overline{\tau}^{+}_{\xi +}\ =\ {1\over 2}\left(
\xi\overline{\tau}^{+}\xi^\dagger
+\xi^\dagger\overline{\tau}^{+}\xi\right)$ and $j^{+}_{a \mu,5}$ is obtained
from ${\mathcal L}^{(6)}_{int}$ in (\ref{eqn:Linteract}). Notice that since the insertion of
the mass matrix enters at $O(\epsilon^4)$, we only need to take the tree-level
contribution from $a^2 j^{+}_{a \mu, 5}$ into our calculation.
Therefore, although many terms in the axial current arise
from the $a^2$ corrections to the lattice operator itself,
the net contribution to the current
$j^{+}_{a \mu,5}$ can be effectively written as:
\begin{equation}
j^{+}_{a \mu,5}\, \overset{\text{eff}}{=} \, 2C_{a}\ \left(\overline{\cal B} S_\mu {\cal B}\ {\overline{\tau}^{+}_{\xi +}}\right)
\ +\
2C'_{a}\ \left(\overline{\cal B} S_\mu\ {\overline{\tau}^{+}_{\xi +}}{\cal B} \right)\, .
\end{equation}
\begin{figure}
\includegraphics[width=0.5\textwidth]{axial.eps}
\caption{The one-loop diagrams which contribute to the leading non-analytic
terms of the nucleon axial transition matrix element. Mesons are represented
by a dashed line, while the single and double lines are the symbols
for a nucleon and a decuplet, respectively.
The solid circle is an insertion of the axial current operator and the
solid squares are the couplings given in (\ref{interL}).
The wave function renormalization diagrams are depicted by the two
diagrams at the bottom row.}
\label{fig0}
\end{figure}
The calculation of the matrix element of the
neutron to proton axial transition at next-to-leading order is
$O(\epsilon^3)$ in our power counting. These leading non-analytic terms
are from the one-loop diagrams shown in Fig \ref{fig0}. To obtain the
complete $O(\epsilon^3)$ calculation, from our power counting scheme:
$\epsilon^2 \sim a^2\Lambda^2_{QCD}$ and
$\epsilon^2 \sim m_{q}/\Lambda_{QCD}$, we see that in addition to taking
the $a^2$-dependence of the loop meson masses into our calculation,
we must evaluate the $j^{+}_{a \mu,5}$ at tree level. After carrying
out the calculation, one finds:
\begin{eqnarray}
\langle p |j^{+}_{\mu,5} | n\rangle
& = & \Big [\
\Gamma_{pn} \,+\, c_{pn} \ \Big ] \ 2 \overline{U}_p S_\mu U_n\,,
\label{eq:axmat}
\end{eqnarray}
where the $c_{pn}$'s are from the contributions of
local counterterms involving one insertion of the mass matrix
$m_Q$\footnote{These local terms have the form
$a_{1}m^2_{VV} + a_{2}m^2_{SS}$, with $a_{1}$ and $a_{2}$ to be
determined from lattice calculations.}
and $\Gamma_{pn}$ is given by:
\begin{eqnarray}
\qquad \qquad \Gamma_{pn} & = &\, g_A + a^2C' - \frac{4}{3f^2}\,\Bigg [ \,\, g_A^3\frac{3}{2}\Big
(\,2R_{1}(m_{VS},\mu)\,+\,2A(m_{SS,I})\, \Big )\,\,\qquad \qquad
\nonumber \\
&-& g_1^3\frac{1}{8}\Big (\, R_{1} (m_{VV},\mu)\,-\,R_{1}(m_{VS},\mu)\, \Big)\,+\,\frac{3}{2}g_A R_{1}(m_{VS},\mu) \, \nonumber \\
&-& g_A^2g_{1}\Big
(\,2R_{1} (m_{VV},\mu)\,-\,2R_{1}(m_{VS},\mu)\,-\,6A(m_{SS,I}) \Big )\,
\nonumber \\
&-& g_Ag_1^2\Big
(\,\frac{17}{8} R_{1} (m_{VV},\mu)\,-\,\frac{17}{8}R_{1}(m_{VS},\mu)\,-\,3A(m_{SS,I})
\Big )
\nonumber\\
& - & g_{\Delta N}^2g_{1}\frac{4}{9}\Big
(\, N_{1} (m_{VV},\Delta, \mu)\,-\, N_{1}(m_{VS},\Delta,\mu)\, \Big )\,
\nonumber \\
& + & g_{\Delta N}^2g_{\Delta \Delta}\frac{60}{81}\Big
(\,J_{1} (m_{VV},\Delta,\mu)\,+\,\frac{2}{3} J_{1}(m_{VS},\Delta,\mu)\,
\Big ) \nonumber\\
& - & g_{\Delta N}^2g_{A}\Big
(\, \frac{16}{9}\Big \{ N_{1} (m_{VV},\Delta, \mu) + N_{1} (m_{VS},\Delta, \mu)\Big \} \nonumber \\
&-&2 \Big \{J_{1}(m_{VV},\Delta,\mu) + J_{1}(m_{VS},\Delta,\mu) \Big \}\, \Big )\, \Bigg ]
\label{eq:npaxial1}
\end{eqnarray}
where $C' = C_{a} + C'_{a}$, the functions $J_1(m,\Delta,\mu)$, $R_1(m,\mu)$, $N_1(m,\Delta,\mu)$
are defined in the Appendix, $\Delta$ is the $\Delta$-nucleon mass
splitting, $m_{ab}$ are given in (\ref{qmass}) and lastly, the function
$A(m_{SS,I})$ is given by:
\begin{equation}
A(m_{SS,I}) = \frac{1}{32\pi^2}\Big(m^2_{SS,I}-m^2_{VV}\Big)\Big[\,\log \frac{m^2_{VV}}{\mu^2}\,+\,1\,\Big ]\, .
\label{eq:A(m,a)}
\end{equation}
\begin{figure}
\includegraphics[width=0.6\textwidth]{noa.eps}
\caption{$\Gamma_{pn} \rightarrow g_A$ in the chiral limit. Here the physical
value of $g_A$ is set to be $1.25$. The band structure indicates the
variation of $g_A$ due to the bounds on $g_{\Delta \Delta}$ and $g_{\Delta N}$.}
\label{fig1}
\end{figure}
All of the couplings in eq.~(\ref{eq:npaxial1})
take their chiral-limit values. It is easy to see that all the
$a^2$-dependence in (\ref{eq:npaxial1}) is contained in $m_{VS}$,
$m_{SS,I}$ and $a^2C'$. Notice with our $R_{1}(m,\mu)$,
$J_{1}(m,\Delta,\mu)$ and
$N_{1}(m,\Delta,\mu)$ we have subtracted off the chiral and continuum limit
values of the loop diagrams by hand. This corresponds to a renormalization
of the tree-level coefficients, and produces $g_A$ at its chiral-limit
value (Fig \ref{fig1}).
The lattice spacing dependence in Eq.~(\ref{eq:npaxial1})
is completely determined by three parameters, namely
$C_{\text{mix}}$, $C_3+C_4$, and a new unknown low energy
constant $C'$. In order to
investigate the $a^2$-dependence of the $np$ axial transition matrix
element, in Fig \ref{fig2}, we have taken the unquenched limit
in which $m_{VV} = m_{SS}$\footnote
{The unphysical effects of the mixed-action PQ theory
arise from two sources. One is from the mass difference between
valence and sea quarks and the other is because of the non-vanishing
lattice spacing. By taking the unquenched limit, namely,
$m^2_{VV} = m^2_{SS}$, one can eliminate part of
the unphysical effects. Notice that when the lattice spacing is zero,
the physics is recovered from the PQ theory in the unquenched
limit.} and plotted
(\ref{eq:npaxial1}) at two different lattice spacings $a = 0.12{\text{fm}}$
and $a = 0$ with the pion mass varying from
$1{\text{MeV}}$ to $500{\text{MeV}}$\footnote{We have dropped the local
terms $c_{pn}$'s since the loop contributions
formally dominate over these terms}.
In the figure, $g_A$ is set to be $1.25$ and the low-energy constants
$g_{1}$, $g_{\Delta N}$, $g_{\Delta \Delta}$
are allowed to vary within their reasonably known bounds:
$-1 \leq g_{1} \leq 1$, $1.0 \leq |g_{\Delta N}| \leq 2.0$ and
$2.5 \leq |g_{\Delta \Delta}| \leq 3.5$. Further, $C_{\text{mix}}$
is estimated to be
$4.5{\text{fm}}^{-6}\leq C_{\text{mix}} \leq 9.5{\text{fm}}^{-6}$
\cite{bct:0607} and $C_3 + C_4$ is determined as
$C_3+C_4 = 2.34{\text{fm}}^{-6}$ \cite{Aubin:2004fs}.
Finally, since $C'$ is assumed to be of natural size which is
$\Lambda^2_{QCD} \sim 2.3 {\text{fm}}^{-2}$,
we have used $-3 {\text{fm}}^{-2} \leq C' \leq 3 {\text{fm}}^{-2}$
for the figure.
However, keep in mind that the actual value of $C'$ must be determined from
lattice calculations.
\begin{figure}
\includegraphics[width=0.6\textwidth]{yesa1.eps}
\caption{In this figure, we have plotted $\Gamma_{pn}$ as a function of
$m_{\pi}$. The green band shows the result with $a = 0$,
while the blue band is the $\Gamma_{pn}$
obtained from $g_A = 1.25$, $C_3 + C_4 = 2.34{\text{fm}}^{-6}$,
reasonably known bounds on $g_1$,
$g_{\Delta N}$ and $g_{\Delta \Delta}$:
$-1 \leq g_{1} \leq 1$, $1.0 \leq |g_{\Delta N}| \leq 2.0$,
$2.5 \leq |g_{\Delta \Delta}| \leq 3.5$, a natural estimated value
for $C_{\text{mix}}$:
$4.5{\text{fm}}^{-6} \leq C_{\text{mix}} \leq 9.5{\text{fm}}^{-6}$,
and finally a natural variation
of $C'$: $-3{\text{fm}}^{-2}\leq C' \leq 3{\text{fm}}^{-2}$.
The lattice spacing $a$ for the
blue band is fixed to be $0.12{\text{fm}}$.
}
\label{fig2}
\end{figure}
From the figure, we indeed see a large lattice spacing
dependence for $\Gamma_{pn}$.
One might be concerned that the correction at the chiral limit is
sufficiently large
that the $\chi$PT prediction might be breaking down. Further,
the band at the chiral limit does not cover the value of $1.25$
which is the expected $g_A$ at $m_{\pi} \sim 0$.
We point out that with $a = 0.12{\text{fm}}$, the
mass-squared corrections to the $VS$ and $SS,I$ mesons are
around $(400{\text{MeV}})^2$ to $(450{\text{MeV}})^2$
which implies that $\Gamma_{pn}$ at $m_{\pi} \sim 0$ on the figure
is effectively similar to that obtained at
$m_{\pi} \sim 450{\text{MeV}}$ with $a = 0$. Therefore a large correction
is no surprise. By reducing the lattice spacing $a$ to $0.08{\text{fm}}$,
which might be the standard lattice spacing in future simulations,
the correction is around $(270{\text{MeV}})^2$.
Notice that by shifting the $a = 0$ result to the left by
$450{\text{MeV}}$, the green band indeed overlaps with the blue
band at $m_{\pi} \sim 0$.
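The size of these shifts follows directly from (\ref{qmass}). As a rough numeric sketch (this is our illustration, not part of the original analysis; it assumes $f \approx 132\,{\text{MeV}}$, a value not fixed in the text, and the mid-range $C_{\text{mix}}$ from the bounds quoted above):

```python
# Rough numerical check of the a^2 shifts in (qmass):
#   delta m_VS^2   = 16 a^2 C_mix / f^2
#   delta m_SS,I^2 = 64 a^2 (C_3 + C_4) / f^2
# ASSUMPTIONS: f ~ 132 MeV (not fixed in the text) and the mid-range
# C_mix ~ 7 fm^-6 from the quoted bound 4.5 - 9.5 fm^-6.
hbarc = 197.327                          # MeV fm

a = 0.12                                 # fm
f = 132.0 / hbarc                        # fm^-1
c_mix = 7.0                              # fm^-6
c34 = 2.34                               # fm^-6  (C_3 + C_4)

dm2_vs = 16.0 * a**2 * c_mix / f**2      # fm^-2
dm2_ss = 64.0 * a**2 * c34 / f**2        # fm^-2

shift_vs = dm2_vs ** 0.5 * hbarc         # ~ 375 MeV
shift_ss = dm2_ss ** 0.5 * hbarc         # ~ 433 MeV
```

Both shifts come out of the same order as the $(400{\text{MeV}})^2$ to $(450{\text{MeV}})^2$ ballpark quoted above.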
In addition to the large $a^2$ mass shifts,
the lack of unitarity in the PQ theory will also lead to divergences
when approaching the chiral limit and hence contributes
a large correction to $\Gamma_{pn}$ at $m_{\pi} \sim 0$.
It is clear that the $A(m_{SS,I})$ terms in Eq.~(\ref{eq:npaxial1}) are responsible
for this divergence, since they arise from the double pole of the
flavor-neutral propagator, which violates unitarity. Indeed
if we plot Eq.~(\ref{eq:npaxial1}) as a function of $m_{\pi}$
with $C_{\text{mix}} = 0$, we see a large correction to $\Gamma_{pn}$ at
$m_{\pi} \sim 0$ (Fig \ref{fig1.6}). A closer look at
Eq.~(\ref{eq:npaxial1}) and Eq.~(\ref{eq:A(m,a)}) shows that the
divergence arises from the difference of valence and taste-singlet
pion masses. In principle, $m_{VV}$ can be tuned to $m_{SS,I}$
\cite{Golterman:2005} such that the divergence is less severe. However,
the large $m_{SS,I}$ will still cast doubt on the convergence of the
chiral expansion. Also notice if
$C_{\text{mix}} = C_3 + C_4 = 0$ in Eq.~(\ref{eq:npaxial1}),
then a narrow band centered around the result obtained by setting
$a = 0$ should be observed. Indeed this is confirmed in Fig \ref{fig1.5}.
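The logarithmic growth of $A(m_{SS,I})$ toward the chiral limit can be seen directly from Eq.~(\ref{eq:A(m,a)}); a minimal numeric illustration (dimensionless units with $\mu = 1$ and a fixed taste-singlet mass):

```python
import math

# A(m_SS,I) from (eq:A(m,a)) in units with mu = 1.  At fixed m_SS,I^2
# the bracket log(m_VV^2) makes |A| grow without bound as m_VV -> 0.
def A(m_vv2, m_ss2):
    return (m_ss2 - m_vv2) * (math.log(m_vv2) + 1.0) / (32.0 * math.pi**2)

m_ss2 = 0.2                                    # fixed taste-singlet mass^2
vals = [abs(A(m_vv2, m_ss2)) for m_vv2 in (1e-1, 1e-3, 1e-5)]
assert vals[0] < vals[1] < vals[2]             # logarithmic growth
```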
Lastly, to investigate quantitatively the lattice spacing effects on $g_A$,
let us focus on $m_{\pi} \sim 320{\text{MeV}}$, which is the
smallest relevant pion mass used in most recent mixed-action simulations.
In Fig \ref{fig2}, we see that with $a = 0.12{\text{fm}}$, $\Gamma_{pn}$ receives a
$62$ percent correction at $m_{\pi} \sim 320{\text{MeV}}$.
This observation shows that the lattice spacing
effects on $g_A$ in current simulations should not be overlooked
and that it is not clear whether $\chi$PT can be reliably used to account
precisely for lattice spacing effects. However, with lattice spacing
$a = 0.08{\text{fm}}$, we observe a reasonable $28$ percent correction
to $\Gamma_{pn}$ at $m_{\pi} \sim 320 {\text{MeV}}$.
We also point out that there
is an unphysical $g_1$-dependence in (\ref{eq:npaxial1}) even in
the unquenched limit. This is a ``partially quenched''
artifact of the mixed action, and clearly from (\ref{eq:npaxial1})
the $g_{1}$-dependence will disappear when the lattice
spacing $a$ is set to zero.
\begin{figure}
\includegraphics[width=0.6\textwidth]{noc3.eps}
\caption{$\Gamma_{pn}$ obtained by letting
$C_{\text{mix}} = 0$ in (\ref{eq:npaxial1}).}
\label{fig1.6}
\end{figure}
\section{Conclusion}
Mixed actions provide a powerful tool for lattice simulations
because they enable one to use different types of fermions in the valence
and sea sectors. In particular, current lattice calculations using
Ginsparg-Wilson valence quarks with staggered sea quarks can reach
dynamical pion masses as low as $300{\text{MeV}}$.
\begin{figure}
\includegraphics[width=0.6\textwidth]{nocmix1.eps}
\caption{The blue band is the $\Gamma_{pn}$ obtained by letting
$C_{\text{mix}} = C_3 + C_4 = 0$. The green band
represents the $\Gamma_{pn}$ with $a = 0$.}
\label{fig1.5}
\end{figure}
In this paper, we have calculated the
$pn$ matrix element of the axial-vector current up to $O(\epsilon^3)$
using the mixed action of Ginsparg-Wilson
valence quarks with staggered sea quarks. Further, we have detailed the
lattice spacing artifacts for this matrix element. To $O(\epsilon^3)$,
we found that the $pn$ axial-current matrix element depends on the lattice
spacing via three parameters, namely, $C_{{\text{mix}}}$, $C_{3} + C_{4}$ and
finally a new low energy constant $C'$. The low energy constant
$C_{{\text{mix}}}$ affects the masses of mesons made from a Ginsparg-Wilson
quark and a staggered quark. Its physical value can be fixed either from
mixed meson masses or the pion charge radius \cite{bct:0607}. Further, the
combination of parameters $C_3+C_4$ has already been constrained from
staggered meson lattice data \cite{Aubin:2004fs}. The new low energy constant
$C'$, on the other hand, can be evaluated from the lattice simulations
of determining the nucleon axial charge
\cite{Blum:2004,Khan:2005,Edward:2005}. However, as has already been
demonstrated in \cite{bct:0607},
the continuum extrapolation with only one lattice spacing available will
lead to a large amount of uncertainty in the physical values of the associated
low energy constants. Ideally, this low energy
constant $C'$ will be more accurately determined if a variety of lattice
spacings are available in the simulations.
Lastly, since the finite volume effects are small at
the scale of dynamical quark masses and the box size available in today's
simulations as indicated in \cite{s&m:2004,Edward:2005}, the
formulas given here should be sufficient for the comparisons between the
predictions from $\chi$PT and the results from numerical simulations.
\begin{acknowledgments}
We thank Brian Tiburzi for critical discussions and reading a draft of the
manuscript. We also thank Andreas Fuhrer for checking the figures
and D. J. Cecile for assistance with the English in writing
this manuscript. This work is supported in part by
Schweizerischer Nationalfonds.
\end{acknowledgments}
\section{Appendix}
In this Appendix, we list the functions needed in our calculations:
\begin{eqnarray}
\qquad\qquad R_{1}(m,\mu) = \frac{1}{16\pi^2}m^{2}\log\Big(\frac{m^{2}}{\mu^2}\Big)\, , \qquad\qquad\
\end{eqnarray}
\begin{eqnarray}
K_{1}(m,\Delta,\mu) & = & \,-\,\frac{1}{2}\frac{1}{16\pi^2}\Delta^2\log\Big(\frac{\mu^2}{4\Delta^2} \Big)\,+\,\frac{3}{4}\frac{1}{16\pi^2}\Bigg(
\Big(m^2-{2\over 3}\Delta^2\Big)\log\Big({m^2\over\mu^2}\Big) \nonumber \\
&& \quad\,+\, {2\over 3}\Delta \sqrt{\Delta^2-m^2}
\log\Big({\Delta-\sqrt{\Delta^2-m^2+ i \epsilon}\over
\Delta+\sqrt{\Delta^2-m^2+ i \epsilon}}\Big)
\nonumber\\
& & \quad \, +\, {2\over 3} {m^2\over\Delta} \Big(\ \pi m -
\sqrt{\Delta^2-m^2}
\log\Big({\Delta-\sqrt{\Delta^2-m^2+ i \epsilon}\over
\Delta+\sqrt{\Delta^2-m^2+ i \epsilon}}\Big)
\Big) \Bigg )\, .
\label{eq:Kdecfun}
\end{eqnarray}
\begin{eqnarray}
J_{1}(m,\Delta,\mu) & = &\,-\,\frac{3}{2}\frac{1}{16\pi^2}\Delta^2\log\Big(\frac{\mu^2}{4\Delta^2} \Big)\,+\,\frac{3}{4}\frac{1}{16\pi^2}\Bigg(
\Big(m^2-2\Delta^2\Big)\log\Big({m^2\over\mu^2}\Big) \quad \qquad \nonumber \\
&&\quad \,+\,2\Delta\sqrt{\Delta^2-m^2}
\log\Big({\Delta-\sqrt{\Delta^2-m^2+ i \epsilon}\over
\Delta+\sqrt{\Delta^2-m^2+ i \epsilon}}\Big) \Bigg)\, .
\label{eq:decfun}
\end{eqnarray}
\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
Cohort state-transition models (cSTMs), commonly known as Markov models, are decision models that simulate disease dynamics over time. cSTMs are widely used to evaluate various health policies and clinical strategies, such as screening and surveillance programs,\textsuperscript{\protect\hyperlink{ref-Suijkerbuijk2018}{1},\protect\hyperlink{ref-Sathianathen2018a}{2}} diagnostic procedures,\textsuperscript{\protect\hyperlink{ref-Lu2018b}{3}} disease management programs,\textsuperscript{\protect\hyperlink{ref-Djatche2018}{4}} and interventions.\textsuperscript{\protect\hyperlink{ref-Pershing2014}{5},\protect\hyperlink{ref-Smith-Spangler2010}{6}} The simplest cSTMs are time-independent (time-homogeneous), meaning that the transition probabilities and other model parameters remain fixed over the simulated time horizon. In an introductory tutorial, we described the implementation of time-independent cSTMs.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021a}{7}} In many applications, a time-independent cSTM is limited because key parameters may vary over time. For example, background mortality changes as a cohort ages, the risk of cancer recurrence might change as a function of time since diagnosis, or the incidence of vector-borne diseases may have temporal or seasonal trends. The costs and utility of residing in a particular health state might also vary over time. For example, cancer-related health care costs and utility might depend on whether a patient is in their first year of remission or their fifth. Similarly, a person's last year of life is often their most expensive in health care spending. Allowing for flexibility in capturing time-varying dynamics is often essential in realistic models.
This tutorial expands the cSTM framework described in the introductory time-independent cSTM tutorial by allowing transition probabilities, costs, and utilities to vary over time, and by allowing one-time costs or utilities to be applied when the cohort experiences events while transitioning between states, often called transition rewards. We also demonstrate how to compute various epidemiological measures from cSTMs needed for model calibration or validation.
We distinguish between two types of time-dependency that require different approaches: (1) Simulation time-dependency, which represents the time since the start of the simulation, and (2) state residence time-dependency, representing time spent in a health state. Simulation time-dependency affects parameters that vary with time for the entire cohort in the same way. The most common example of simulation time-dependency is age-specific background mortality over time as the cohort ages. Since all members of the cohort age at the same rate, we can implement this dependency by changing the mortality parameters as the simulation progresses.\textsuperscript{\protect\hyperlink{ref-Snowsill2019}{8}} Similarly, in a model simulating a cohort starting from disease diagnosis, any dependence on time since diagnosis can be implemented as a dependence on the time since simulation start. Seasonal or temporal variation in disease incidence can also be reflected through simulation-time dependency, where the risk of developing a disease can vary based on the current season or year in the simulation.
State-residence time-dependency captures time dependence on events that members of the cohort could experience at different times. For example, in a model simulating a cohort of healthy individuals, they may experience disease onset at different times. Thus, parameters that depend on the time since disease onset cannot be implemented based on simulation time; instead, we need to track cohort members \emph{from their disease onset time}. We implement this type of time-dependency by expanding the model state space to include disease states that encode the time since an event has occurred. For example, instead of a single ``Sick'' state, individuals would transition first to the ``Sick - cycle 1'' state at disease onset, then ``Sick - cycle 2'' at the next cycle, and so on. In this way, each replicate of the ``Sick'' state can have different transition probabilities (e.g., mortality, risk of complications, etc.), costs, or utilities. We should note that we can also use this state expansion approach to time-dependency to model simulation-time dependency. However, as we see below, substantial state expansion can be cumbersome and potentially computationally burdensome. For simplicity, modelers should always consider simulation time-dependency first.
Besides describing the two forms of time-dependency, we also illustrate the concept and implementation of transition rewards, such as one-time costs or utilities applied when individuals experience events when transitioning between certain states. These transition rewards reflect event-driven outcome impacts, such as a higher cost for the first cycle of disease onset due to increased diagnostic and management costs or the increased cost of transition to the dead state incurred from end-of-life interventions or management.\textsuperscript{\protect\hyperlink{ref-Krijkamp2019}{9}}
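To make the transition-reward idea concrete (the tutorial's implementation is in R; the sketch below uses Python/numpy with made-up numbers and illustrative state names), a one-time reward can be applied by weighting the proportion of the cohort making each $i \to j$ transition in a cycle by a matrix of transition rewards:

```python
import numpy as np

# Cohort distribution at cycle t over states (H, S1, S2, D) -- illustrative.
m_t = np.array([0.7, 0.2, 0.05, 0.05])

# Illustrative transition probability matrix for cycle t (rows sum to 1).
P_t = np.array([
    [0.85, 0.10, 0.00, 0.05],
    [0.10, 0.70, 0.10, 0.10],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])

# Proportion of the cohort making each i -> j transition during cycle t.
transitions = m_t[:, None] * P_t          # diag(m_t) @ P_t

# One-time cost charged on entering the dead state (e.g., end-of-life care),
# added on top of state-residence costs (values are made up).
cost_transition = np.zeros((4, 4))
cost_transition[:3, 3] = 2000.0           # any alive state -> D

cycle_transition_cost = (transitions * cost_transition).sum()
```

Here only cohort members newly entering the dead state incur the cost; those already dead do not.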
In this tutorial, we describe how to implement simulation-time dependent cSTMs and then add state residence dependency to account for both time dependencies. We illustrate the use of these cSTMs to conduct a cost-effectiveness analysis (CEA) in R, a statistical software with increasing use in health decision sciences.\textsuperscript{\protect\hyperlink{ref-Jalal2017b}{10}} We also illustrate the calculation of various epidemiological measures beyond the simple cohort trace, including survival, life expectancy, prevalence, and incidence. Some of these calculations leverage the state-transition array implementation that we will introduce, while others are manipulations of the cohort trace. Such epidemiological measures may be of interest for specific analyses but are also important for calibrating and validating the model against real-world data.
Readers can find the most up-to-date model code and code to create the tutorial graphs in the accompanying GitHub repository (\url{https://github.com/DARTH-git/cohort-modeling-tutorial-timedep}). We encourage the reader first to review the basics of decision modeling and how to develop time-independent cSTMs in R, as described in the introductory tutorial.
\hypertarget{simulation-time-dependency}{%
\section{Simulation-time dependency}\label{simulation-time-dependency}}
As described above, simulation-time dependency represents the time since the model starts and can be represented by defining the transition probability matrix as a function of time, \(P_t\). The elements of \(P_t\) are the transition probabilities of moving from state \(i\) to state \(j\) in time \(t\), \(p_{[i,j,t]}\), where \(\{i,j\} = 1,\ldots, n_S\) and \(t = 0,\ldots,n_T\), \(n_S\) is the number of health states of the model and \(n_T\) is the number of cycles that represent total simulation time.
\[
P_t =
\begin{bmatrix}
p_{[1,1,t]} & p_{[1,2,t]} & \cdots & p_{[1,n_S,t]} \\
p_{[2,1,t]} & p_{[2,2,t]} & \cdots & p_{[2,n_S,t]} \\
\vdots & \vdots & \ddots & \vdots \\
p_{[n_S,1,t]} & p_{[n_S,2,t]} & \cdots & p_{[n_S,n_S,t]} \\
\end{bmatrix}.
\]
Note that in each cycle \(t\) all rows of the transition probability matrix must sum to one, \(\sum_{j=1}^{n_S}{p_{[i,j,t]}} = 1\) for all \(i = 1,\ldots,n_S\) and \(t = 0,\ldots, n_T\).
Next, we specify the initial distribution of the cohort at \(t = 0\). We then define \(\mathbf{m}_{0}\) as the vector that captures the distribution of the cohort among the states at \(t = 0\) (i.e., the initial state vector). As illustrated in the introductory tutorial, we iteratively compute the cohort distribution among the health states in each cycle from the transition probability matrix and the cohort's distribution from the prior cycle. The state vector at the next cycle \(t+1\) (\(\mathbf{m}_{t+1}\)) is then calculated as the matrix product of the state vector at cycle \(t\), \(\mathbf{m}_{t}\), and the transition probability matrix that the cohort faces in cycle \(t\), \(P_t\),
\begin{equation}
\label{eq:time-dep-matrix-mult}
\mathbf{m}_{t+1} = \mathbf{m}_{t} P_t \qquad\text{ for }\qquad t = 0,\ldots, (n_T - 1).
\end{equation}
Equation \eqref{eq:time-dep-matrix-mult} is iteratively evaluated until \(t = n_T\).
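The tutorial implements this iteration in R; purely to illustrate Equation \eqref{eq:time-dep-matrix-mult} in a language-neutral way, here is a minimal Python/numpy sketch of a made-up two-state model whose exit probability grows with simulation time:

```python
import numpy as np

n_states, n_cycles = 2, 3                      # tiny illustrative model

# Time-dependent transition probability array: a_P[t] is P_t.
# Here the probability of leaving state 0 grows with simulation time t.
a_P = np.empty((n_cycles, n_states, n_states))
for t in range(n_cycles):
    p_leave = 0.1 + 0.05 * t
    a_P[t] = [[1.0 - p_leave, p_leave],
              [0.0,           1.0]]

m_0 = np.array([1.0, 0.0])                     # everyone starts in state 0

# Cohort trace M: row t is the state vector m_t.
M = np.zeros((n_cycles + 1, n_states))
M[0] = m_0
for t in range(n_cycles):
    M[t + 1] = M[t] @ a_P[t]                   # m_{t+1} = m_t P_t

assert np.allclose(M.sum(axis=1), 1.0)         # each row still sums to 1
```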
The cohort trace matrix, \(M\), is a matrix of dimensions \((n_T+1) \times n_S\) where each row is a state vector \((-\mathbf{m}_{t}-)\), such that
\[
M =
\begin{bmatrix}
- \mathbf{m}_0 - \\
- \mathbf{m}_1 - \\
\vdots \\
- \mathbf{m}_{n_T} -
\end{bmatrix}.
\]
\(M\) stores the output of the cSTM, which we can use to compute various epidemiological measures, such as prevalence and survival probability over time, and economic measures, such as cumulative resource use and costs.
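For example (with a hypothetical two-column trace over ``Alive'' and ``Dead''; made-up numbers), survival and a crude life expectancy follow directly from \(M\):

```python
import numpy as np

# A made-up cohort trace over (Alive, Dead); each row is m_t.
M = np.array([[1.00, 0.00],
              [0.90, 0.10],
              [0.78, 0.22],
              [0.64, 0.36]])

survival = 1.0 - M[:, 1]           # probability of being alive at cycle t
life_exp = survival[1:].sum()      # crude life expectancy in cycles
                                   # (no half-cycle correction applied here)
assert np.isclose(life_exp, 0.90 + 0.78 + 0.64)
```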
\hypertarget{time-dependency-on-state-residence}{%
\section{Time dependency on state residence}\label{time-dependency-on-state-residence}}
The implementation of state residence time dependency is slightly more involved than simulation time dependency. As described above, dependence on state residence occurs in applications where transition probabilities or rewards depend on the time spent in a given state. To account for state residence dependency, we expand the number of states with as many transient states as the number of cycles required for state residency. These transient states are often called \emph{tunnel} states, where the cohort stays for only one cycle in each tunnel state and either transitions to the next tunnel state or completely exits the tunnel. The total number of states for a cSTM with tunnels is \(n_{S_\text{tunnels}}\) and equals \(n_S + n_{\text{tunnels}} - 1\), where \(n_{\text{tunnels}}\) is the total number of times a health state needs to be expanded for. We subtract one because one of the original states is expanded to a tunnel state. The transition probability matrix also needs to be expanded to incorporate these additional transient states, resulting in a transition probability matrix of dimensions \(n_{S_\text{tunnels}} \times n_{S_\text{tunnels}}\). If the transition probabilities are also dependent on simulation time, as described in the previous section, we add a third dimension such that the dimensions will become \(n_{S_\text{tunnels}} \times n_{S_\text{tunnels}} \times n_T\). As state residence time dependency extends in more health states, the total number of health states will increase, causing what has been referred to as ``state explosion''. Table \ref{tab:Timedep-cSTM-components-table} describes the core components of time-dependent cSTMs and their suggested R code names.
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.19}}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.57}}
>{\centering\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.17}}
>{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.06}}@{}}
\caption{\label{tab:Timedep-cSTM-components-table} Core components of time-dependent cSTMs with their R name.}\tabularnewline
\toprule
Element & Description & R name & \\
\midrule
\endfirsthead
\toprule
Element & Description & R name & \\
\midrule
\endhead
\(n_S\) & Number of states & \texttt{n\_states} & \\
\(\mathbf{m}_0\) & Initial state vector & \texttt{v\_s\_init} & \\
\(\mathbf{m}_t\) & State vector in cycle \(t\) & \texttt{v\_mt} & \\
\(M\) & Cohort trace matrix & \texttt{m\_M} & \\
\(\mathbf{P}\) & Time-dependent transition probability array & \texttt{a\_P} & \\
\(\mathbf{A}\) & Transition-dynamics array & \texttt{a\_A} & \\
\(n_{\text{tunnels}}\) & Number of tunnel states & \texttt{n\_tunnel\_size} & \\
\(n_{S_{\text{tunnels}}}\) & Number of states including tunnel states & \texttt{n\_states\_tunnels} & \\
\(\mathbf{m}_{\text{tunnels}_0}\) & Initial state vector for the model with tunnel states & \texttt{v\_s\_init\_tunnels} & \\
\bottomrule
\end{longtable}
For a more detailed description of these components with their variable types, data structure, and R name, please see the Supplementary Materials.
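The state expansion with tunnel states described above can be sketched in a few lines of R. This is a minimal illustration, assuming the ``Sick'' state is expanded into three tunnel states; the variable names mirror Table \ref{tab:Timedep-cSTM-components-table}, but the state labels are illustrative rather than the tutorial's exact code.

```r
# Sketch: expanding one health state ("S1") into tunnel states
n_states  <- 4                                 # H, S1, S2, D
n_tunnels <- 3                                 # cycles of S1 residence to track
# One original state is replaced by its tunnel states, hence the "- 1"
n_states_tunnels <- n_states + n_tunnels - 1
# Illustrative state names: S1 becomes S1_1, S1_2, S1_3
v_names_states_tunnels <- c("H", paste0("S1_", 1:n_tunnels), "S2", "D")
```

The cohort trace and transition probability matrix are then built over these \(n_{S_\text{tunnels}}\) states rather than the original \(n_S\).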
\hypertarget{case-study-a-cost-effectiveness-analysis-using-a-time-dependent-sick-sicker-model}{%
\section{Case study: A Cost-effectiveness analysis using a time-dependent Sick-Sicker model}\label{case-study-a-cost-effectiveness-analysis-using-a-time-dependent-sick-sicker-model}}
We demonstrate simulation-time and state-residence dependency in R by expanding the time-independent 4-state ``Sick-Sicker'' model described in the introductory tutorial.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021a}{7}} The state-transition diagram of the Sick-Sicker model is presented in Figure \ref{fig:STD-Sick-Sicker}. We first modify this cSTM to account for simulation-time dependency by incorporating age-dependent background mortality and then expand it to account for state-residence dependency. We will use this cSTM to conduct a CEA of different treatment strategies while accounting for transition rewards.
\begin{figure}[H]
{\centering \includegraphics[width=10.64in]{figs/Sick-Sicker}
}
\caption{State-transition diagram of the Sick-Sicker cohort state-transition model. The circles represent the health states, and the arrows represent the possible transitions. The labels next to the arrows are the variable names of the corresponding transition probabilities.}\label{fig:STD-Sick-Sicker}
\end{figure}
As described in the introductory tutorial, this model simulates the health care costs and quality-adjusted life-years (QALYs) of a cohort of 25-year-old individuals at risk of developing a hypothetical disease with two stages: ``Sick'' and ``Sicker''.\textsuperscript{\protect\hyperlink{ref-Enns2015e}{11}} We simulate the cohort of individuals who all start in the ``Healthy'' state (denoted by ``H'') over their lifetime, which means that we will simulate the cohort for 75 cycles. The total number of cycles is denoted as \(n_T\) and defined in R as \texttt{n\_cycles}.
In the introductory tutorial, healthy individuals face a constant risk of death. To illustrate simulation-time dependency, here we assume that healthy individuals face age-specific background mortality. Healthy individuals who survive might become ill over time and transition to the ``Sick'' state (denoted by ``S1''). In this tutorial, we consider that once healthy individuals get sick, they incur a one-time utility decrement of 0.01 (\texttt{du\_HS1}, a disutility of transitioning from H to S1) and a transition cost of \$1,000 (\texttt{ic\_HS1}) that reflects the immediate costs of developing the illness. We demonstrate how to include these one-time rewards in the transition rewards section.
The S1 state is associated with higher mortality, higher health care costs, and lower quality of life than the H state. Once in the S1 state, individuals may recover (returning to the H state), die (move to the D state), or progress further to the ``Sicker'' state (denoted by ``S2''), with further increases in mortality risk and health care costs and reduced quality of life. We assume that it is not clinically possible to distinguish between individuals in S1 and S2. Individuals in S1 and S2 face an increased hazard of death, compared to healthy individuals, in the form of a hazard ratio (HR) of 3 and 10, respectively, relative to the background age-specific mortality hazard rate. In the state-residence time-dependent model, the risk of progressing from S1 to S2 depends on the time spent in S1. Once simulated individuals die, they transition to the absorbing D state, where they remain, and incur a one-time cost of \$2,000 (\texttt{ic\_D}) for the expected acute care received before dying. All transitions between non-death states are assumed to be conditional on surviving each cycle. We simulate the evolution of the cohort in one-year discrete-time cycles.
We use this cSTM to evaluate the cost-effectiveness of four strategies: strategy A, strategy B, a combination of both (strategy AB), and the standard of care (strategy SoC). Strategy A involves administering treatment A to individuals in S1 and S2. Treatment A increases the quality of life (QoL) of individuals only in S1 from 0.75 (utility without treatment, \texttt{u\_S1}) to 0.95 (utility with treatment A, \texttt{u\_trtA}) and costs \$12,000 per year (\texttt{c\_trtA}). This strategy does not impact the QoL of individuals in S2, nor does it change the risk of becoming sick or progressing through the sick states. Strategy B uses treatment B to reduce the rate of Sick individuals progressing to the S2 state with a hazard ratio (HR) of 0.6 (\texttt{hr\_S1S2\_trtB}) and costs \$13,000 per year (\texttt{c\_trtB}). Treatment B does not affect QoL. Strategy AB involves administering both treatments A and B, resulting in the benefits of increased QoL for patients in the S1 state and reduced risk of progression from S1 to S2, while accounting for the costs of both treatments. We discount both costs and QALYs at an annual rate of 3\%. Model parameters and the corresponding R variable names are presented in Table \ref{tab:param-table} and follow the notation described in the DARTH coding framework.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2019e}{12}}
Note that under strategy A, the model has transition probabilities identical to SoC; the only differences are the added treatment cost in S1 and S2 and the QoL increase in S1. After comparing the four strategies in terms of expected QALYs and costs, we calculate the incremental cost per QALY gained between non-dominated strategies, as explained below.
\begin{longtable}[]{@{}lccc@{}}
\caption{\label{tab:param-table} Description of parameters, their R variable name, base-case value and distribution.}\tabularnewline
\toprule
\begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
\textbf{Parameter}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\textbf{R name}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\textbf{Base-case}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\textbf{Distribution}\strut
\end{minipage}\tabularnewline
\midrule
\endfirsthead
\toprule
\begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
\textbf{Parameter}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\textbf{R name}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\textbf{Base-case}\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\textbf{Distribution}\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Number of cycles (\(n_{T}\))\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{n\_cycles}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
75 years\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Names of health states \strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{v\_names\_states}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
H, S1, S2, D\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Annual discount rate for costs\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{d\_c}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
3\%\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Annual discount rate for QALYs\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{d\_e}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
3\%\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Number of PSA samples (\(K\))\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{n\_sim}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
1,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Annual transition probabilities conditional on surviving\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Disease onset (H to S1)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{p\_HS1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.15\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(30, 170)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Recovery (S1 to H)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{p\_S1H}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.5\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(60, 60)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Time-independent disease progression (S1 to S2)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{p\_S1S2}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.105\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(84, 716)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Time-dependent disease progression (S1 to S2)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{v\_p\_S1S2\_tunnels}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
~~~~Weibull parameters\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
~~~~~~~~Scale (\(\lambda\))\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{p\_S1S2\_scale}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.08\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
lognormal(log(0.08), 0.02)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
~~~~~~~~Shape (\(\gamma\))\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{p\_S1S2\_shape}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
1.10\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
lognormal(log(1.10), 0.05)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Annual mortality\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Age-dependent background mortality rate (H to D)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{v\_r\_HDage}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
age-specific\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Hazard ratio of death in S1 vs H\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{hr\_S1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
3.0\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
lognormal(log(3.0), 0.01)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Hazard ratio of death in S2 vs H\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{hr\_S2}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
10.0\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
lognormal(log(10.0), 0.02)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Annual costs\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Healthy individuals\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_H}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$2,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(100.0, 20.0)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Sick individuals in S1\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_S1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$4,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(177.8, 22.5)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Sick individuals in S2\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_S2}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$15,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(225.0, 66.7)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Dead individuals\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_D}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$0\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Cost of treatment A as an additional costs on individuals treated in S1 or S2\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_trtA}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$12,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(576.0, 20.8)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Cost of treatment B as an additional costs on individuals treated in S1 or S2\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{c\_trtB}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$13,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(676.0, 19.2)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Utility weights\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Healthy individuals\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{u\_H}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
1.00\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(200, 3)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Sick individuals in S1\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{u\_S1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.75\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(130, 45)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Sick individuals in S2\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{u\_S2}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.50\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(230, 230)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Dead individuals\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{u\_D}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.00\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
constant \strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Treatment A effectiveness\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Utility for treated individuals in S1\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{u\_trtA}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.95\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(300, 15)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Treatment B effectiveness\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Reduction in rate of disease progression (S1 to S2) as hazard ratio (HR)\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{hr\_S1S2\_trtB}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.6\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
lognormal(log(0.6), 0.1)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
Transition rewards\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Utility decrement of healthy individuals when transitioning to S1\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{du\_HS1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
0.01\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
beta(11, 1088)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Cost of healthy individuals when transitioning to S1\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{ic\_HS1}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$1,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(25, 40)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.45}}\raggedright
- Cost of dying when transitioning to D\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.16}}\centering
\texttt{ic\_D}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\centering
\$2,000\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.20}}\centering
gamma(100, 20)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
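The Weibull scale (\(\lambda\)) and shape (\(\gamma\)) parameters in Table \ref{tab:param-table} determine the residence-time-dependent probability of progressing from S1 to S2. As a hedged sketch, assuming the cumulative hazard parameterization \(H(\tau) = (\lambda \tau)^{\gamma}\) (the tutorial's exact code may use a different but equivalent form), the per-cycle progression probability after \(\tau\) cycles in S1 is \(1 - \exp\{-[H(\tau) - H(\tau - 1)]\}\):

```r
# Sketch: residence-time-dependent probability of progressing from S1 to S2,
# assuming a Weibull cumulative hazard H(tau) = (lambda * tau)^gamma
p_S1S2_scale  <- 0.08   # lambda, base-case value from the parameter table
p_S1S2_shape  <- 1.10   # gamma, base-case value from the parameter table
n_tunnel_size <- 75     # cycles of S1 residence tracked (illustrative)
v_cycles_in_S1 <- 1:n_tunnel_size
# Per-cycle probability conditional on having spent (tau - 1) cycles in S1
v_p_S1S2_tunnels <- 1 - exp(-((p_S1S2_scale *  v_cycles_in_S1)^p_S1S2_shape -
                              (p_S1S2_scale * (v_cycles_in_S1 - 1))^p_S1S2_shape))
```

With a shape parameter greater than 1, this hazard increases with time spent in S1, so the progression probability grows with residence time.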
The R code below describes the initialization of the input parameters.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# General setup}
\NormalTok{cycle\_length }\OtherTok{\textless{}{-}} \DecValTok{1} \CommentTok{\# cycle length equal one year}
\NormalTok{n\_age\_init }\OtherTok{\textless{}{-}} \DecValTok{25} \CommentTok{\# age at baseline}
\NormalTok{n\_age\_max }\OtherTok{\textless{}{-}} \DecValTok{100} \CommentTok{\# maximum age of follow up}
\NormalTok{n\_cycles }\OtherTok{\textless{}{-}}\NormalTok{ n\_age\_max }\SpecialCharTok{{-}}\NormalTok{ n\_age\_init }\CommentTok{\# number of cycles}
\CommentTok{\# The 4 health states of the model:}
\NormalTok{v\_names\_states }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"H"}\NormalTok{, }\CommentTok{\# Healthy (H)}
\StringTok{"S1"}\NormalTok{, }\CommentTok{\# Sick (S1)}
\StringTok{"S2"}\NormalTok{, }\CommentTok{\# Sicker (S2)}
\StringTok{"D"}\NormalTok{) }\CommentTok{\# Dead (D)}
\NormalTok{n\_states }\OtherTok{\textless{}{-}} \FunctionTok{length}\NormalTok{(v\_names\_states) }\CommentTok{\# number of health states }
\NormalTok{d\_e }\OtherTok{\textless{}{-}} \FloatTok{0.03} \CommentTok{\# discount rate for QALYs of 3\% per cycle }
\NormalTok{d\_c }\OtherTok{\textless{}{-}} \FloatTok{0.03} \CommentTok{\# discount rate for costs of 3\% per cycle }
\NormalTok{v\_names\_str }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Standard of care"}\NormalTok{, }\CommentTok{\# store the strategy names}
\StringTok{"Strategy A"}\NormalTok{, }
\StringTok{"Strategy B"}\NormalTok{,}
\StringTok{"Strategy AB"}\NormalTok{) }
\DocumentationTok{\#\# Transition probabilities (per cycle), hazard ratios and odds ratio (OR)}
\NormalTok{p\_HS1 }\OtherTok{\textless{}{-}} \FloatTok{0.15} \CommentTok{\# probability of becoming Sick when Healthy}
\NormalTok{p\_S1H }\OtherTok{\textless{}{-}} \FloatTok{0.5} \CommentTok{\# probability of becoming Healthy when Sick}
\NormalTok{p\_S1S2 }\OtherTok{\textless{}{-}} \FloatTok{0.105} \CommentTok{\# probability of becoming Sicker when Sick}
\NormalTok{hr\_S1 }\OtherTok{\textless{}{-}} \DecValTok{3} \CommentTok{\# hazard ratio of death in Sick vs Healthy}
\NormalTok{hr\_S2 }\OtherTok{\textless{}{-}} \DecValTok{10} \CommentTok{\# hazard ratio of death in Sicker vs Healthy }
\CommentTok{\# Effectiveness of treatment B}
\NormalTok{hr\_S1S2\_trtB }\OtherTok{\textless{}{-}} \FloatTok{0.6} \CommentTok{\# hazard ratio of becoming Sicker when Sick under treatment B}
\DocumentationTok{\#\# State rewards}
\DocumentationTok{\#\# Costs}
\NormalTok{c\_H }\OtherTok{\textless{}{-}} \DecValTok{2000} \CommentTok{\# cost of being Healthy for one cycle }
\NormalTok{c\_S1 }\OtherTok{\textless{}{-}} \DecValTok{4000} \CommentTok{\# cost of being Sick for one cycle }
\NormalTok{c\_S2 }\OtherTok{\textless{}{-}} \DecValTok{15000} \CommentTok{\# cost of being Sicker for one cycle}
\NormalTok{c\_D }\OtherTok{\textless{}{-}} \DecValTok{0} \CommentTok{\# cost of being dead for one cycle}
\NormalTok{c\_trtA }\OtherTok{\textless{}{-}} \DecValTok{12000} \CommentTok{\# cost of receiving treatment A for one cycle}
\NormalTok{c\_trtB }\OtherTok{\textless{}{-}} \DecValTok{13000} \CommentTok{\# cost of receiving treatment B for one cycle }
\CommentTok{\# Utilities}
\NormalTok{u\_H }\OtherTok{\textless{}{-}} \DecValTok{1} \CommentTok{\# utility of being Healthy for one cycle }
\NormalTok{u\_S1 }\OtherTok{\textless{}{-}} \FloatTok{0.75} \CommentTok{\# utility of being Sick for one cycle }
\NormalTok{u\_S2 }\OtherTok{\textless{}{-}} \FloatTok{0.5} \CommentTok{\# utility of being Sicker for one cycle}
\NormalTok{u\_D }\OtherTok{\textless{}{-}} \DecValTok{0} \CommentTok{\# utility of being dead for one cycle}
\NormalTok{u\_trtA }\OtherTok{\textless{}{-}} \FloatTok{0.95} \CommentTok{\# utility when receiving treatment A for one cycle}
\DocumentationTok{\#\# Transition rewards}
\NormalTok{du\_HS1 }\OtherTok{\textless{}{-}} \FloatTok{0.01} \CommentTok{\# one{-}time utility decrement when transitioning from Healthy to Sick}
\NormalTok{ic\_HS1 }\OtherTok{\textless{}{-}} \DecValTok{1000} \CommentTok{\# one{-}time cost when transitioning from Healthy to Sick}
\NormalTok{ic\_D }\OtherTok{\textless{}{-}} \DecValTok{2000} \CommentTok{\# one{-}time cost when dying}
\end{Highlighting}
\end{Shaded}
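The annual discount rates \texttt{d\_c} and \texttt{d\_e} defined above are typically applied as per-cycle discount weight vectors when aggregating outcomes over the cohort trace. A minimal sketch (the variable names \texttt{v\_dwc} and \texttt{v\_dwe} follow the DARTH naming convention; values are restated here so the snippet is self-contained):

```r
# Values as initialized in the setup above
d_c      <- 0.03   # annual discount rate for costs
d_e      <- 0.03   # annual discount rate for QALYs
n_cycles <- 75     # number of cycles
# Discount weight for each cycle 0, 1, ..., n_cycles
v_dwc <- 1 / (1 + d_c)^(0:n_cycles)   # costs
v_dwe <- 1 / (1 + d_e)^(0:n_cycles)   # QALYs
```

Total discounted costs and QALYs are then inner products of these weights with the per-cycle (undiscounted) cost and QALY vectors.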
\hypertarget{incorporating-simulation-time-dependency}{%
\subsection{Incorporating simulation-time dependency}\label{incorporating-simulation-time-dependency}}
To illustrate simulation-time dependency in the Sick-Sicker cSTM, we model all-cause mortality as a function of age. We obtain all-cause mortality from life tables in the form of age-specific mortality hazard rates, \(\mu(a)\), where \(a\) refers to age. For this example, we create a vector \texttt{v\_r\_mort\_by\_age} to represent age-specific background mortality hazard rates for 0 to 100 year-olds obtained from US life tables.\textsuperscript{\protect\hyperlink{ref-Arias2017}{13}} To compute the transition probability from state H to state D, corresponding to the cohort's age at each cycle, we transform the rate \(\mu(a)\) to a transition probability assuming a constant exponential hazard rate within each year of age
\[
p_{[H,D,t]} = 1-\exp\left\{{-\mu(a_0 + t)}\right\},
\]
where \(a_0 =\) 25 is the starting age of the cohort. Instead of iterating through the mortality hazard rates, we obtain the vector of background mortality hazard rates for the ages of interest, 25 through 100, by subsetting \(\mu(a)\) (R variable name \texttt{v\_r\_mort\_by\_age}) at these ages. We then transform the resulting R variable, \texttt{v\_r\_HDage}, to a vector of probabilities, \texttt{v\_p\_HDage}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Age{-}specific mortality rate in the Healthy state (background mortality)}
\NormalTok{v\_r\_HDage }\OtherTok{\textless{}{-}}\NormalTok{ v\_r\_mort\_by\_age[(n\_age\_init }\SpecialCharTok{+} \DecValTok{1}\NormalTok{) }\SpecialCharTok{+} \DecValTok{0}\SpecialCharTok{:}\NormalTok{(n\_cycles }\SpecialCharTok{{-}} \DecValTok{1}\NormalTok{)]}
\CommentTok{\# Transform to age{-}specific background mortality risk}
\NormalTok{v\_p\_HDage }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}} \FunctionTok{exp}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{v\_r\_HDage) }
\end{Highlighting}
\end{Shaded}
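As a language-agnostic check of this transformation, the same rate-to-probability conversion can be sketched in Python (the hazard rates below are made up for illustration; the model reads the actual values from US life tables):

```python
import math

# Hypothetical age-specific mortality hazard rates (per year); illustrative
# only -- the model obtains these from life tables
v_r_HDage = [0.001, 0.0012, 0.0015]

# Constant-hazard transformation within each cycle: p = 1 - exp(-rate)
v_p_HDage = [1 - math.exp(-r) for r in v_r_HDage]
```

For small rates, the resulting probability is slightly below the rate itself, since \(1 - e^{-\mu} < \mu\) for \(\mu > 0\).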
Because mortality in S1 and S2 is defined relative to background mortality, which depends on age, mortality in S1 and S2 will also be age-dependent. To generate the age-specific mortality in S1 and S2, we multiply the age-specific background mortality rate, \texttt{v\_r\_HDage}, by the constant hazard ratios \texttt{hr\_S1} and \texttt{hr\_S2}, respectively. We then convert the resulting age-specific mortality rates to probabilities, ensuring that the transition probabilities to D are bounded between 0 and 1.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Age{-}specific mortality rates in the Sick and Sicker states}
\NormalTok{v\_r\_S1Dage }\OtherTok{\textless{}{-}}\NormalTok{ v\_r\_HDage }\SpecialCharTok{*}\NormalTok{ hr\_S1 }\CommentTok{\# when Sick}
\NormalTok{v\_r\_S2Dage }\OtherTok{\textless{}{-}}\NormalTok{ v\_r\_HDage }\SpecialCharTok{*}\NormalTok{ hr\_S2 }\CommentTok{\# when Sicker}
\DocumentationTok{\#\# Age{-}specific probabilities of dying in the Sick and Sicker states}
\NormalTok{v\_p\_S1Dage }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}} \FunctionTok{exp}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{v\_r\_S1Dage) }\CommentTok{\# when Sick}
\NormalTok{v\_p\_S2Dage }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}} \FunctionTok{exp}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{v\_r\_S2Dage) }\CommentTok{\# when Sicker}
\end{Highlighting}
\end{Shaded}
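The key point is that hazard ratios act multiplicatively on rates, not on probabilities, so the rate is scaled first and converted afterwards. A Python sketch of one such conversion (the background rate and hazard ratios below are illustrative values only):

```python
import math

r_HD = 0.002              # hypothetical background mortality rate (per year)
hr_S1, hr_S2 = 3.0, 10.0  # illustrative hazard ratios for Sick and Sicker

# Scale the rate by the hazard ratio, then convert to a probability
p_S1D = 1 - math.exp(-r_HD * hr_S1)
p_S2D = 1 - math.exp(-r_HD * hr_S2)
```

Note that applying the hazard ratio to the probability directly (e.g., \texttt{hr\_S2 * p\_HD}) would be incorrect and could even yield values above 1.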
To incorporate simulation-time dependency into the transition probability matrix, we expand the dimensions of the matrix and create a 3-dimensional transition probability array, \(\mathbf{P}\), named \texttt{a\_P} in R, with dimensions \(n_S \times n_S \times n_T\). The first two dimensions of this array correspond to transitions between states and the third dimension to time. The \(t\)-th element in the third dimension corresponds to the transition probability matrix at cycle \(t\). A visual representation of \texttt{a\_P} is shown in Figure \ref{fig:Array-Time-Dependent}.
\begin{figure}[H]
{\centering \includegraphics[width=1\linewidth]{figs/3D-state-transition-array-sick-sicker-without-tunnels}
}
\caption{A 3-dimensional representation of the transition probability array of the Sick-Sicker model with simulation-time dependency.}\label{fig:Array-Time-Dependent}
\end{figure}
First, we initialize the transition probability array for SoC, \texttt{a\_P\_SoC}, with a default value of zero for all transition probabilities.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Initialize the transition probability array}
\NormalTok{a\_P\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\DecValTok{0}\NormalTok{, }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{(n\_cycles }\SpecialCharTok{{-}} \DecValTok{1}\NormalTok{)))}
\end{Highlighting}
\end{Shaded}
Filling \texttt{a\_P\_SoC} with the corresponding transition probabilities of the cohort under the SoC strategy is comparable to filling a transition probability matrix for a time-independent cSTM, with one slight modification: the time dimension is accounted for by the third dimension of the array. The code below illustrates how to assign age-dependent transition probabilities in the third dimension of the array. For transitions that are constant over time, we only need to provide one value for the transition probability; R replicates that value as many times as the number of cycles (\(n_T\) times in our example). We create the transition probability array for strategy A as a copy of SoC's because treatment A does not alter the cohort's transition probabilities.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# Fill in array}
\DocumentationTok{\#\# From H}
\NormalTok{a\_P\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_HDage) }\SpecialCharTok{*}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ p\_HS1)}
\NormalTok{a\_P\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_HDage) }\SpecialCharTok{*}\NormalTok{ p\_HS1}
\NormalTok{a\_P\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_HDage}
\DocumentationTok{\#\# From S1}
\NormalTok{a\_P\_SoC[}\StringTok{"S1"}\NormalTok{, }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1H}
\NormalTok{a\_P\_SoC[}\StringTok{"S1"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{ p\_S1S2))}
\NormalTok{a\_P\_SoC[}\StringTok{"S1"}\NormalTok{, }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1S2}
\NormalTok{a\_P\_SoC[}\StringTok{"S1"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S1Dage}
\DocumentationTok{\#\# From S2}
\NormalTok{a\_P\_SoC[}\StringTok{"S2"}\NormalTok{, }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S2Dage}
\NormalTok{a\_P\_SoC[}\StringTok{"S2"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S2Dage}
\DocumentationTok{\#\# From D}
\NormalTok{a\_P\_SoC[}\StringTok{"D"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}} \DecValTok{1}
\DocumentationTok{\#\# Initialize transition probability matrix for strategy A as a copy of SoC\textquotesingle{}s}
\NormalTok{a\_P\_strA }\OtherTok{\textless{}{-}}\NormalTok{ a\_P\_SoC}
\end{Highlighting}
\end{Shaded}
As mentioned above, each slice along the third dimension of \texttt{a\_P\_SoC} corresponds to a transition probability matrix. For example, the transition matrix for 25-year-olds in the Sick-Sicker model under the SoC strategy can be retrieved by indexing the first slice of the array using:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{a\_P\_SoC[, , }\DecValTok{1}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## H S1 S2 D
## H 0.8491385 0.1498480 0.0000000 0.001013486
## S1 0.4984813 0.3938002 0.1046811 0.003037378
## S2 0.0000000 0.0000000 0.9899112 0.010088764
## D 0.0000000 0.0000000 0.0000000 1.000000000
\end{verbatim}
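As a quick sanity check on the slice printed above, each row should sum to 1 up to the rounding of the displayed digits. A minimal Python verification of the printed values:

```python
# Transition matrix for the first cycle as printed above (rounded values)
m_P0 = [
    [0.8491385, 0.1498480, 0.0000000, 0.001013486],
    [0.4984813, 0.3938002, 0.1046811, 0.003037378],
    [0.0000000, 0.0000000, 0.9899112, 0.010088764],
    [0.0000000, 0.0000000, 0.0000000, 1.000000000],
]
# Each row of outgoing probabilities should sum to 1
row_sums = [sum(row) for row in m_P0]
```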
For strategy B, we first initialize the three-dimensional array of transition probabilities, \texttt{a\_P\_strB} as a copy of \texttt{a\_P\_SoC} and update only the probability of remaining in S1 and the transition probability from S1 to S2 (i.e., \texttt{p\_S1S2} is replaced with \texttt{p\_S1S2\_trtB}). Next, we create the transition probability array for strategy AB, \texttt{a\_P\_strAB}, as a copy of \texttt{a\_P\_strB} since the cSTMs for strategies B and AB have identical transition probabilities.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Initialize transition probability array for strategy B}
\NormalTok{a\_P\_strB }\OtherTok{\textless{}{-}}\NormalTok{ a\_P\_SoC}
\DocumentationTok{\#\# Update only transition probabilities from S1 involving p\_S1S2}
\NormalTok{a\_P\_strB[}\StringTok{"S1"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{ p\_S1S2\_trtB))}
\NormalTok{a\_P\_strB[}\StringTok{"S1"}\NormalTok{, }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1S2\_trtB}
\DocumentationTok{\#\# Initialize transition probability matrix for strategy AB as a copy of B\textquotesingle{}s}
\NormalTok{a\_P\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ a\_P\_strB}
\end{Highlighting}
\end{Shaded}
Once we create the transition probability arrays, we check that they are valid (i.e., that all transition probabilities are between 0 and 1 and that transition probabilities from each state sum to 1) using the functions \texttt{check\_sum\_of\_transition\_array} and \texttt{check\_transition\_probability}, provided in the \texttt{darthtools} package (\url{https://github.com/DARTH-git/darthtools}).
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# Check if transition probability matrices are valid}
\DocumentationTok{\#\# Check that transition probabilities are [0, 1]}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_SoC)}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_strA)}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_strB)}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_strAB)}
\DocumentationTok{\#\# Check that all rows sum to 1}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_SoC, }\AttributeTok{n\_states =}\NormalTok{ n\_states, }\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_strA, }\AttributeTok{n\_states =}\NormalTok{ n\_states, }\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_strB, }\AttributeTok{n\_states =}\NormalTok{ n\_states, }\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_strAB, }\AttributeTok{n\_states =}\NormalTok{ n\_states, }\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\end{Highlighting}
\end{Shaded}
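For readers not using the \texttt{darthtools} package, the logic of these two checks can be approximated with a minimal Python sketch (this is an assumed reimplementation of what the checks verify, not the package's actual code):

```python
import numpy as np

def check_transition_probability(a_P):
    """Verify that all entries of the transition array lie in [0, 1]."""
    if not (np.all(a_P >= 0) and np.all(a_P <= 1)):
        raise ValueError("transition probabilities must be in [0, 1]")

def check_sum_of_transition_array(a_P, tol=1e-12):
    """Verify that each state's outgoing probabilities sum to 1 in every cycle.

    a_P has shape (n_states, n_states, n_cycles); summing over axis 1 gives
    the outgoing probability mass of each state for each cycle.
    """
    if not np.allclose(a_P.sum(axis=1), 1, atol=tol):
        raise ValueError("outgoing transition probabilities must sum to 1")
```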
In the Sick-Sicker model, the entire cohort starts in the Healthy state. Therefore, we create the \(1 \times n_S\) initial state vector \texttt{v\_s\_init} with all of the cohort assigned to the H state:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v\_s\_init }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =} \DecValTok{1}\NormalTok{, }\AttributeTok{S1 =} \DecValTok{0}\NormalTok{, }\AttributeTok{S2 =} \DecValTok{0}\NormalTok{, }\AttributeTok{D =} \DecValTok{0}\NormalTok{) }\CommentTok{\# initial state vector}
\NormalTok{v\_s\_init}
\CommentTok{\# H S1 S2 D }
\CommentTok{\# 1 0 0 0}
\end{Highlighting}
\end{Shaded}
We use the variable \texttt{v\_s\_init} to initialize the cohort trace matrix \(M\), represented by \texttt{m\_M} in R, for the cohort under the SoC strategy. We also create a trace for each of the other treatment-based strategies. Note that the initial state vector, \texttt{v\_s\_init}, can be modified to account for the distribution of the cohort across the states at the start of the simulation and might vary by strategy. To simulate the cohort over the \(n_T\) cycles for the simulation-time-dependent cSTM, we initialize four cohort trace matrices, one for each strategy.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Initialize cohort trace for age{-}dependent cSTM under SoC}
\NormalTok{m\_M\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\ConstantTok{NA}\NormalTok{, }
\AttributeTok{nrow =}\NormalTok{ (n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{), }\AttributeTok{ncol =}\NormalTok{ n\_states, }
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(}\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles, v\_names\_states))}
\CommentTok{\# Store the initial state vector in the first row of the cohort trace}
\NormalTok{m\_M\_SoC[}\DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_s\_init}
\DocumentationTok{\#\# Initialize cohort trace for strategies A, B, and AB}
\CommentTok{\# Structure and initial states are the same as for SoC}
\NormalTok{m\_M\_strA }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC }\CommentTok{\# Strategy A}
\NormalTok{m\_M\_strB }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC }\CommentTok{\# Strategy B}
\NormalTok{m\_M\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC }\CommentTok{\# Strategy AB}
\end{Highlighting}
\end{Shaded}
We then use the matrix product to get the state vector of the cohort's distribution at each cycle \(t\). This equation is similar to the one described for the time-independent model. The only modification required is to index the transition probability arrays by \(t\) to obtain each strategy's cycle-specific transition probability matrices.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Iterative solution of age{-}dependent cSTM}
\ControlFlowTok{for}\NormalTok{(t }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n\_cycles)\{}
\CommentTok{\# For SoC}
\NormalTok{ m\_M\_SoC[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_SoC[, , t]}
\CommentTok{\# For strategy A}
\NormalTok{ m\_M\_strA[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_strA[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strA[, , t]}
\CommentTok{\# For strategy B}
\NormalTok{ m\_M\_strB[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_strB[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strB[, , t]}
\CommentTok{\# For strategy AB}
\NormalTok{ m\_M\_strAB[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_strAB[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strAB[, , t]}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
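The iteration above is an ordinary vector-matrix product per cycle, with the matrix indexed by cycle. A Python/NumPy sketch of the same mechanics, using a hypothetical two-state (alive/dead) array with made-up, cycle-varying mortality:

```python
import numpy as np

n_states, n_cycles = 2, 3

# Hypothetical time-varying transition array, shape (n_states, n_states, n_cycles)
a_P = np.zeros((n_states, n_states, n_cycles))
for t in range(n_cycles):
    p_die = 0.10 + 0.05 * t   # mortality grows with cycle (made up)
    a_P[0, 0, t] = 1 - p_die
    a_P[0, 1, t] = p_die
    a_P[1, 1, t] = 1.0        # death is absorbing

# Cohort trace: one row per cycle, starting with the whole cohort alive
m_M = np.zeros((n_cycles + 1, n_states))
m_M[0] = [1.0, 0.0]
for t in range(n_cycles):
    # state vector at cycle t times the cycle-t transition matrix
    m_M[t + 1] = m_M[t] @ a_P[:, :, t]
```

Because every cycle's matrix has rows summing to 1, each row of the resulting trace also sums to 1, which is a useful invariant to check.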
A graphical representation of the cohort trace for all cycles of the age-dependent cSTM under SoC is shown in Figure \ref{fig:Sick-Sicker-Trace-AgeDep}.
\begin{figure}[H]
{\centering \includegraphics{figs/Sick-Sicker-Trace-AgeDep-1}
}
\caption{Cohort trace of the age-dependent cSTM under SoC.}\label{fig:Sick-Sicker-Trace-AgeDep}
\end{figure}
\hypertarget{incorporating-time-dependency-on-state-residence}{%
\subsection{Incorporating time dependency on state residence}\label{incorporating-time-dependency-on-state-residence}}
Here, we add state-residence dependency to the simulation-time-dependent Sick-Sicker model defined above. We assume the risk of progression from S1 to S2 increases as a function of the time \(\tau = 1, \ldots, n_{\text{tunnels}}\) the cohort remains in the S1 state. This increase follows a Weibull hazard function, \(h(\tau)\), defined as
\[
h(\tau) = \gamma \lambda (\lambda \tau)^{\gamma-1},
\]
with a corresponding cumulative hazard, \(H(\tau)\),
\begin{equation}
H(\tau) = (\lambda \tau)^{\gamma},
\label{eq:H-weibull}
\end{equation}
where \(\lambda\) and \(\gamma\) are the scale and shape parameters of the Weibull hazard function, respectively.
To derive a transition probability from S1 to S2 as a function of the time the cohort spends in S1, \(p_{\left[S1_{\tau},S2, \tau\right]}\), we assume constant rates within each cycle interval (i.e., piecewise exponential transition times), where the cycle-specific probability of a transition is
\begin{equation}
p_{\left[S1_{\tau},S2, \tau\right]} = 1-\exp{\left(-\mu_{\left[S1_{\tau},S2, \tau\right]}\right)},
\label{eq:tp-from-rate}
\end{equation}
where \(\mu_{\left[S1_{\tau},S2, \tau\right]}\) is the rate of transition from S1 to S2 in cycle \(\tau\) defined as the difference in cumulative hazards between consecutive cycles\textsuperscript{\protect\hyperlink{ref-Diaby2014}{14}}
\begin{equation}
\mu_{\left[S1_{\tau},S2, \tau\right]} = H(\tau) - H(\tau-1).
\label{eq:tr-from-H}
\end{equation}
Substituting the Weibull cumulative hazard from Equation \eqref{eq:H-weibull} into Equation \eqref{eq:tr-from-H} gives
\begin{equation}
\mu_{\left[S1_{\tau},S2, \tau\right]} = (\lambda \tau)^{\gamma} - (\lambda (\tau-1))^{\gamma},
\label{eq:tr-from-H-weibull}
\end{equation}
and the transition probability
\begin{equation}
p_{\left[S1_{\tau},S2, \tau\right]} = 1-\exp{\left(- \left((\lambda \tau)^{\gamma} - (\lambda (\tau-1))^{\gamma}\right) \right)}.
\label{eq:tp-from-H-weibull}
\end{equation}
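Equations \eqref{eq:tr-from-H-weibull} and \eqref{eq:tp-from-H-weibull} can be sketched in Python using the scale and shape values that appear in the R code further below (\(\lambda = 0.08\), \(\gamma = 1.10\)) and assuming a cycle length of 1:

```python
import math

lam, gamma = 0.08, 1.10  # Weibull scale and shape from the example

def cum_hazard(tau):
    """Weibull cumulative hazard H(tau) = (lambda * tau)^gamma."""
    return (lam * tau) ** gamma

def p_S1S2(tau):
    """Probability of progressing in residence cycle tau (cycle length = 1)."""
    rate = cum_hazard(tau) - cum_hazard(tau - 1)  # Eq. (tr-from-H-weibull)
    return 1 - math.exp(-rate)                    # Eq. (tp-from-H-weibull)

v_p_S1S2_tunnels = [p_S1S2(tau) for tau in range(1, 6)]
```

Because \(\gamma > 1\), the hazard increases with residence time, so the cycle-specific transition probabilities grow with \(\tau\).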
We assume that state-residence dependency affects the cohort in the S1 state throughout the whole simulation (i.e., \(n_{\text{tunnels}}=n_T\)) and create a new variable called \texttt{n\_tunnel\_size} with the length of the tunnel equal to \texttt{n\_cycles}. Thus, there will be 75 S1 tunnel states plus 3 more states (H, S2, D), resulting in a total of \(n_{S_{\text{tunnels}}} = 78\) states.
Figure \ref{fig:STD-Sick-Sicker-tunnels} shows the Sick-Sicker model's state-transition diagram with state-residence dependency with \(n_{\text{tunnels}}\) tunnel states for S1.
\begin{figure}[H]
{\centering \includegraphics[width=1\textwidth]{figs/Sick-Sicker-with-tunnels}
}
\caption{State-transition diagram of the Sick-Sicker model with tunnel states expanding the Sick state ($S1_1, S1_2,...,S1_{n_\text{tunnels}}$).}\label{fig:STD-Sick-Sicker-tunnels}
\end{figure}
To implement state-residence dependency in the Sick-Sicker cSTM, we create the vector variables \texttt{v\_Sick\_tunnel} and \texttt{v\_names\_states\_tunnels} with the names of the Sick tunnel states and of all the states of the cSTM including tunnels, respectively, and use the parameters listed in Table \ref{tab:Timedep-cSTM-components-table}.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Number of tunnels}
\NormalTok{n\_tunnel\_size }\OtherTok{\textless{}{-}}\NormalTok{ n\_cycles }
\DocumentationTok{\#\# Vector with cycles for tunnels}
\NormalTok{v\_cycles\_tunnel }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n\_tunnel\_size}
\DocumentationTok{\#\# Vector with the names of the Sick tunnel state}
\NormalTok{v\_Sick\_tunnel }\OtherTok{\textless{}{-}} \FunctionTok{paste}\NormalTok{(}\StringTok{"S1\_"}\NormalTok{, }\FunctionTok{seq}\NormalTok{(}\DecValTok{1}\NormalTok{, n\_tunnel\_size), }\StringTok{"Yr"}\NormalTok{, }\AttributeTok{sep =} \StringTok{""}\NormalTok{)}
\DocumentationTok{\#\# Create variables for model with tunnels}
\NormalTok{v\_names\_states\_tunnels }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"H"}\NormalTok{, v\_Sick\_tunnel, }\StringTok{"S2"}\NormalTok{, }\StringTok{"D"}\NormalTok{) }\CommentTok{\# state names}
\NormalTok{n\_states\_tunnels }\OtherTok{\textless{}{-}} \FunctionTok{length}\NormalTok{(v\_names\_states\_tunnels) }\CommentTok{\# number of states}
\DocumentationTok{\#\# Initialize first cycle of Markov trace accounting for the tunnels}
\NormalTok{v\_s\_init\_tunnels }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\FunctionTok{rep}\NormalTok{(}\DecValTok{0}\NormalTok{, n\_tunnel\_size), }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{) }
\end{Highlighting}
\end{Shaded}
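The naming scheme for the tunnel states can be mirrored in Python (with a shortened tunnel for readability; the model itself uses \texttt{n\_cycles} = 75 tunnel states):

```python
n_tunnel_size = 5  # shortened for illustration; the model uses n_cycles = 75

# Names of the Sick tunnel states, mirroring paste("S1_", 1:n, "Yr", sep = "")
v_Sick_tunnel = [f"S1_{i}Yr" for i in range(1, n_tunnel_size + 1)]
v_names_states_tunnels = ["H"] + v_Sick_tunnel + ["S2", "D"]
n_states_tunnels = len(v_names_states_tunnels)  # tunnels plus H, S2, D
```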
Then, the state-residence-dependent transition rates and probabilities from Sick to Sicker, \texttt{v\_r\_S1S2\_tunnels} and \texttt{v\_p\_S1S2\_tunnels}, respectively, based on the Weibull hazard function are:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Weibull parameters}
\NormalTok{p\_S1S2\_scale }\OtherTok{\textless{}{-}} \FloatTok{0.08} \CommentTok{\# scale}
\NormalTok{p\_S1S2\_shape }\OtherTok{\textless{}{-}} \FloatTok{1.10} \CommentTok{\# shape}
\CommentTok{\# Weibull function}
\NormalTok{v\_r\_S1S2\_tunnels }\OtherTok{\textless{}{-}}\NormalTok{ (v\_cycles\_tunnel}\SpecialCharTok{*}\NormalTok{p\_S1S2\_scale)}\SpecialCharTok{\^{}}\NormalTok{p\_S1S2\_shape }\SpecialCharTok{{-}}
\NormalTok{ ((v\_cycles\_tunnel}\DecValTok{{-}1}\NormalTok{)}\SpecialCharTok{*}\NormalTok{p\_S1S2\_scale)}\SpecialCharTok{\^{}}\NormalTok{p\_S1S2\_shape}
\NormalTok{v\_p\_S1S2\_tunnels }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}} \FunctionTok{exp}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{v\_r\_S1S2\_tunnels}\SpecialCharTok{*}\NormalTok{cycle\_length)}
\end{Highlighting}
\end{Shaded}
To adapt the 3-dimensional transition probability array to incorporate both age and state-residence dependency in the Sick-Sicker model under SoC, we first create an expanded 3-dimensional array accounting for tunnels, \texttt{a\_P\_tunnels\_SoC}. The dimensions of this array are \(n_{S_{\text{tunnels}}} \times n_{S_{\text{tunnels}}} \times n_T\). A visual representation of \texttt{a\_P\_tunnels\_SoC} of the Sick-Sicker model with tunnel states expanding the Sick state is shown in Figure \ref{fig:Array-Time-Dependent-Tunnels}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Initialize array}
\NormalTok{a\_P\_tunnels\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\DecValTok{0}\NormalTok{, }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states\_tunnels, n\_states\_tunnels, n\_cycles),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states\_tunnels, }
\NormalTok{ v\_names\_states\_tunnels, }
\DecValTok{0}\SpecialCharTok{:}\NormalTok{(n\_cycles }\SpecialCharTok{{-}} \DecValTok{1}\NormalTok{)))}
\end{Highlighting}
\end{Shaded}
\begin{figure}[H]
{\centering \includegraphics[width=1\textwidth]{figs/3D-state-transition-array-sick-sicker-tunnels}
}
\caption{The 3-dimensional transition probability array of the Sick-Sicker model expanded to account for simulation-time and state-residence dependency using $\tau$ tunnel states for S1.}\label{fig:Array-Time-Dependent-Tunnels}
\end{figure}
Filling \texttt{a\_P\_tunnels\_SoC} with the corresponding transition probabilities is similar to filling \texttt{a\_P\_SoC} above; the difference is that we now iterate through the tunnel states and assign each of them the corresponding disease-progression transition probabilities.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# Fill in array}
\DocumentationTok{\#\# From H}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_HDage) }\SpecialCharTok{*}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ p\_HS1)}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"H"}\NormalTok{, v\_Sick\_tunnel[}\DecValTok{1}\NormalTok{], ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_HDage) }\SpecialCharTok{*}\NormalTok{ p\_HS1}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_HDage}
\DocumentationTok{\#\# From S1}
\ControlFlowTok{for}\NormalTok{(i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{(n\_tunnel\_size }\SpecialCharTok{{-}} \DecValTok{1}\NormalTok{))\{}
\NormalTok{ a\_P\_tunnels\_SoC[v\_Sick\_tunnel[i], }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1H}
\NormalTok{ a\_P\_tunnels\_SoC[v\_Sick\_tunnel[i], }
\NormalTok{ v\_Sick\_tunnel[i }\SpecialCharTok{+} \DecValTok{1}\NormalTok{], ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{ v\_p\_S1S2\_tunnels[i]))}
\NormalTok{ a\_P\_tunnels\_SoC[v\_Sick\_tunnel[i], }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ v\_p\_S1S2\_tunnels[i]}
\NormalTok{ a\_P\_tunnels\_SoC[v\_Sick\_tunnel[i], }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S1Dage}
\NormalTok{\}}
\CommentTok{\# Repeat code for the last tunnel state to force the cohort to stay in the last tunnel state of Sick}
\NormalTok{a\_P\_tunnels\_SoC[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1H}
\NormalTok{a\_P\_tunnels\_SoC[v\_Sick\_tunnel[n\_tunnel\_size],}
\NormalTok{ v\_Sick\_tunnel[n\_tunnel\_size], ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{ v\_p\_S1S2\_tunnels[n\_tunnel\_size]))}
\NormalTok{a\_P\_tunnels\_SoC[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ v\_p\_S1S2\_tunnels[n\_tunnel\_size]}
\NormalTok{a\_P\_tunnels\_SoC[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S1Dage}
\DocumentationTok{\#\#\# From S2}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"S2"}\NormalTok{, }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S2Dage}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"S2"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S2Dage}
\CommentTok{\# From D}
\NormalTok{a\_P\_tunnels\_SoC[}\StringTok{"D"}\NormalTok{, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}} \DecValTok{1}
\end{Highlighting}
\end{Shaded}
Next, we create the transition probability array for Strategy B. To implement the effectiveness of treatment B, we multiply the vector of transition rates, \texttt{v\_r\_S1S2\_tunnels}, by the hazard ratio of treatment B, \texttt{hr\_S1S2\_trtB}. Then, we transform to a vector of transition probabilities that account for the duration of S1 state-residence under treatment B, \texttt{v\_p\_S1S2\_tunnels\_trtB}, following Equation \eqref{eq:tp-from-rate}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Apply hazard ratio to rate to obtain transition rate of becoming Sicker when }
\CommentTok{\# Sick for treatment B}
\NormalTok{v\_r\_S1S2\_tunnels\_trtB }\OtherTok{\textless{}{-}}\NormalTok{ v\_r\_S1S2\_tunnels }\SpecialCharTok{*}\NormalTok{ hr\_S1S2\_trtB}
\CommentTok{\# transform rate to probability to become Sicker when Sick under treatment B }
\CommentTok{\# conditional on surviving}
\NormalTok{v\_p\_S1S2\_tunnels\_trtB }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{{-}} \FunctionTok{exp}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{v\_r\_S1S2\_tunnels\_trtB}\SpecialCharTok{*}\NormalTok{cycle\_length) }
\end{Highlighting}
\end{Shaded}
Then, we initialize the three-dimensional transition probability array for treatment B, \texttt{a\_P\_tunnels\_trtB}, based on \texttt{a\_P\_tunnels\_SoC}. The only difference is that we replace \texttt{v\_p\_S1S2\_tunnels} with \texttt{v\_p\_S1S2\_tunnels\_trtB} in the transition probabilities from S1.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Initialize transition probability array for treatment B}
\NormalTok{a\_P\_tunnels\_trtB }\OtherTok{\textless{}{-}}\NormalTok{ a\_P\_tunnels\_SoC}
\DocumentationTok{\#\# Update only transition probabilities from S1 involving v\_p\_S1S2\_tunnels}
\ControlFlowTok{for}\NormalTok{(i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{(n\_tunnel\_size }\SpecialCharTok{{-}} \DecValTok{1}\NormalTok{))\{}
\NormalTok{ a\_P\_tunnels\_trtB[v\_Sick\_tunnel[i], }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1H}
\NormalTok{ a\_P\_tunnels\_trtB[v\_Sick\_tunnel[i], }
\NormalTok{ v\_Sick\_tunnel[i }\SpecialCharTok{+} \DecValTok{1}\NormalTok{], ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{ v\_p\_S1S2\_tunnels\_trtB[i]))}
\NormalTok{ a\_P\_tunnels\_trtB[v\_Sick\_tunnel[i], }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ v\_p\_S1S2\_tunnels\_trtB[i]}
\NormalTok{ a\_P\_tunnels\_trtB[v\_Sick\_tunnel[i], }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S1Dage}
\NormalTok{\}}
\CommentTok{\# Repeat code for the last tunnel state to force the cohort to stay in the last tunnel state of Sick}
\NormalTok{a\_P\_tunnels\_trtB[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"H"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}\NormalTok{ p\_S1H}
\NormalTok{a\_P\_tunnels\_trtB[v\_Sick\_tunnel[n\_tunnel\_size],}
\NormalTok{ v\_Sick\_tunnel[n\_tunnel\_size], ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ (p\_S1H }\SpecialCharTok{+}\NormalTok{v\_p\_S1S2\_tunnels\_trtB[n\_tunnel\_size]))}
\NormalTok{a\_P\_tunnels\_trtB[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"S2"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1} \SpecialCharTok{{-}}\NormalTok{ v\_p\_S1Dage) }\SpecialCharTok{*}
\NormalTok{ v\_p\_S1S2\_tunnels\_trtB[n\_tunnel\_size]}
\NormalTok{a\_P\_tunnels\_trtB[v\_Sick\_tunnel[n\_tunnel\_size], }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_p\_S1Dage}
\end{Highlighting}
\end{Shaded}
Once we create both three-dimensional transition probability arrays with tunnels, we check that they are valid (i.e., that all transition probabilities are between 0 and 1 and that transition probabilities from each state sum to 1).
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# Check if transition probability matrices are valid}
\DocumentationTok{\#\# Check that transition probabilities are [0, 1]}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_tunnels\_SoC)}
\FunctionTok{check\_transition\_probability}\NormalTok{(a\_P\_tunnels\_trtB)}
\DocumentationTok{\#\# Check that all rows sum to 1}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_tunnels\_SoC, }\AttributeTok{n\_states =}\NormalTok{ n\_states\_tunnels, }
\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\FunctionTok{check\_sum\_of\_transition\_array}\NormalTok{(a\_P\_tunnels\_trtB, }\AttributeTok{n\_states =}\NormalTok{ n\_states\_tunnels, }
\AttributeTok{n\_cycles =}\NormalTok{ n\_cycles)}
\end{Highlighting}
\end{Shaded}
To simulate the cohort and store its state occupation over the \(n_T\) cycles for the cSTM accounting for state-residence dependency, we initialize two new cohort trace matrices for the SoC and treatment B, \texttt{m\_M\_tunnels\_SoC} and \texttt{m\_M\_tunnels\_trtB}, respectively. The dimensions of both matrices are \((n_T+1) \times n_{S_{\text{tunnels}}}\).
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Initialize cohort for state{-}residence cSTM under SoC}
\NormalTok{m\_M\_tunnels\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{0}\NormalTok{, }
\AttributeTok{nrow =}\NormalTok{ (n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{), }\AttributeTok{ncol =}\NormalTok{ n\_states\_tunnels, }
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(}\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles, v\_names\_states\_tunnels))}
\CommentTok{\# Store the initial state vector in the first row of the cohort trace}
\NormalTok{m\_M\_tunnels\_SoC[}\DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ v\_s\_init\_tunnels}
\DocumentationTok{\#\# Initialize cohort trace under treatment B}
\NormalTok{m\_M\_tunnels\_trtB }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_tunnels\_SoC}
\end{Highlighting}
\end{Shaded}
We then iteratively apply the matrix product, as with the simulation-time-dependent cSTM, to generate the full cohort trace.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Iterative solution of state{-}residence{-}dependent cSTM}
\ControlFlowTok{for}\NormalTok{(t }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n\_cycles)\{}
\CommentTok{\# For SoC}
\NormalTok{ m\_M\_tunnels\_SoC[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_tunnels\_SoC[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_tunnels\_SoC[, , t]}
\CommentTok{\# Under treatment B}
\NormalTok{ m\_M\_tunnels\_trtB[t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{,] }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_tunnels\_trtB[t, ] }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_tunnels\_trtB[, , t]}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
To compute a summarized cohort trace that captures occupancy of the H, S1, S2, and D states under SoC, we aggregate over the tunnel states of S1 in each cycle (Figure \ref{fig:Sick-Sicker-Trace-HistDep}).
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Create aggregated trace}
\NormalTok{m\_M\_tunnels\_SoC\_sum }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(}\AttributeTok{H =}\NormalTok{ m\_M\_tunnels\_SoC[, }\StringTok{"H"}\NormalTok{], }
\AttributeTok{S1 =} \FunctionTok{rowSums}\NormalTok{(m\_M\_tunnels\_SoC[, }\FunctionTok{which}\NormalTok{(v\_names\_states}\SpecialCharTok{==}\StringTok{"S1"}\NormalTok{)}\SpecialCharTok{:}
\NormalTok{ (n\_tunnel\_size }\SpecialCharTok{+}\DecValTok{1}\NormalTok{)]), }
\AttributeTok{S2 =}\NormalTok{ m\_M\_tunnels\_SoC[, }\StringTok{"S2"}\NormalTok{],}
\AttributeTok{D =}\NormalTok{ m\_M\_tunnels\_SoC[, }\StringTok{"D"}\NormalTok{])}
\end{Highlighting}
\end{Shaded}
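As a quick sanity check (a standalone snippet using the objects created above), the rows of the aggregated trace should still sum to 1 in every cycle, since summing over the tunnel states preserves the cohort proportions:

```r
# Each row of the aggregated cohort trace should sum to 1
# (aggregation over tunnel states preserves cohort proportions)
all(abs(rowSums(m_M_tunnels_SoC_sum) - 1) < 1e-12)
```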
\begin{figure}[H]
{\centering \includegraphics{figs/Sick-Sicker-Trace-HistDep-1}
}
\caption{Cohort trace of the age-dependent cSTM accounting for state-residence dependency under SoC. }\label{fig:Sick-Sicker-Trace-HistDep}
\end{figure}
\hypertarget{epidemiological-and-economic-measures}{%
\section{Epidemiological and economic measures}\label{epidemiological-and-economic-measures}}
cSTMs can be used to generate different epidemiological and economic outputs. In a CEA, the main outcomes are typically the total discounted expected QALYs and total costs accrued by the cohort over the predefined time horizon. However, epidemiological outcomes can be helpful for calibration and validation. Some common epidemiological outcomes include survival, prevalence, incidence, the average number of events, and lifetime risk of events.\textsuperscript{\protect\hyperlink{ref-Siebert2012c}{15}} We show how to obtain some of these outcomes from the trace and transition probability objects.
\hypertarget{epidemiological-measures}{%
\subsection{Epidemiological measures}\label{epidemiological-measures}}
We provide the epidemiological definition of some of these outcomes and show how they can be generated from the simulation-time-dependent Sick-Sicker cSTM under SoC. In the GitHub repository, we provide the code to generate these outcomes from the state-residence-dependent cSTM.
\hypertarget{survival-probability}{%
\subsubsection{Survival probability}\label{survival-probability}}
The survival probability, \(S(t)\), captures the proportion of the cohort remaining alive by cycle \(t\). To estimate \(S(t)\) from the simulated cohort of the simulation-time-dependent Sick-Sicker model, shown in Figure \ref{fig:Sick-Sicker-Surv-AgeDep}, we sum the proportions of the non-death states for all \(n_T\) cycles in \texttt{m\_M\_SoC}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v\_S\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(m\_M\_SoC[, }\SpecialCharTok{{-}}\FunctionTok{which}\NormalTok{(v\_names\_states }\SpecialCharTok{==} \StringTok{"D"}\NormalTok{)]) }\CommentTok{\# vector with survival curve}
\end{Highlighting}
\end{Shaded}
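Since individuals cannot leave the absorbing Dead state, the survival curve must be non-increasing. A quick check of this property (using the vector just created) is:

```r
# Survival can only decrease (or stay flat) from one cycle to the next
all(diff(v_S_SoC) <= 0)
```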
\begin{figure}[H]
{\centering \includegraphics{figs/Sick-Sicker-Surv-AgeDep-1}
}
\caption{Survival curve of the age-dependent cSTM}\label{fig:Sick-Sicker-Surv-AgeDep}
\end{figure}
\hypertarget{life-expectancy}{%
\subsubsection{Life expectancy}\label{life-expectancy}}
Life expectancy (LE) refers to the expected time an individual remains alive.\textsuperscript{\protect\hyperlink{ref-Lee2003a}{16}} In continuous time, LE is the area under the entire survival curve.\textsuperscript{\protect\hyperlink{ref-Klein2003}{17}}
\[
LE = \int_{t=0}^{\infty}{S(t) dt}.
\]
In discrete time using cSTMs, we often calculate restricted LE over a fixed time horizon (e.g., \(n_T\)) by which most of the cohort has transitioned to the Dead state, defined as
\[
LE = \sum_{t=0}^{n_T}{S(t)}.
\]
In the simulation-time-dependent Sick-Sicker model, where we simulate a cohort over \(n_T = 75\) cycles, the life expectancy \texttt{le\_SoC} is 41.2 cycles, which is calculated as
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{le\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{sum}\NormalTok{(v\_S\_SoC) }\CommentTok{\# life expectancy}
\end{Highlighting}
\end{Shaded}
Note that this equation expresses LE in the units of \(t\). We use an annual cycle length; thus, the resulting LE will be in years. Analysts can also use other cycle lengths (e.g., monthly or daily), but the LE must then be correctly converted to the desired unit if it differs from the cycle length.
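For instance, with a monthly cycle length, summing the survival curve yields LE in months, which must be divided by 12 to express it in years. A minimal illustration (the value of \texttt{le\_monthly} below is hypothetical, not computed from our model):

```r
# Hypothetical: LE of 494 month-cycles from a monthly-cycle model
le_monthly <- 494
le_years   <- le_monthly / 12 # convert months to years (~41.2)
```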
\hypertarget{prevalence}{%
\subsubsection{Prevalence}\label{prevalence}}
Prevalence is defined as the proportion of the population or cohort with a specific condition (or being in a particular health state) among those alive.\textsuperscript{\protect\hyperlink{ref-Rothman2008h}{18}} To calculate the prevalence of S1 at cycle \(t\), \(\text{prev}(t)\), we compute the ratio between the proportion of the cohort in S1 and the proportion alive at that cycle.\textsuperscript{\protect\hyperlink{ref-Keiding1991}{19}} The proportion of the cohort alive is given by the survival probability \(S(t)\) defined above. The individual prevalences of the S1 and S2 health states and the overall prevalence of sick individuals (i.e., S1 + S2) of the age-dependent Sick-Sicker cSTM at each cycle \(t\) are computed as follows and shown in Figure \ref{fig:Sick-Sicker-Prev-AgeDep}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v\_prev\_S1\_SoC }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC[, }\StringTok{"S1"}\NormalTok{] }\SpecialCharTok{/}\NormalTok{ v\_S\_SoC }\CommentTok{\# vector with prevalence of Sick}
\NormalTok{v\_prev\_S2\_SoC }\OtherTok{\textless{}{-}}\NormalTok{ m\_M\_SoC[, }\StringTok{"S2"}\NormalTok{] }\SpecialCharTok{/}\NormalTok{ v\_S\_SoC }\CommentTok{\# vector with prevalence of Sicker}
\NormalTok{v\_prev\_S1S2\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(m\_M\_SoC[, }\FunctionTok{c}\NormalTok{(}\StringTok{"S1"}\NormalTok{, }\StringTok{"S2"}\NormalTok{)])}\SpecialCharTok{/}\NormalTok{v\_S\_SoC }\CommentTok{\# prevalence of Sick and Sicker}
\end{Highlighting}
\end{Shaded}
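Because prevalence is a proportion among those alive, each of these vectors should lie in \([0, 1]\), and the overall prevalence should equal the sum of the state-specific prevalences. A quick check using the vectors above:

```r
# Prevalence values are proportions among those alive
all(v_prev_S1S2_SoC >= 0 & v_prev_S1S2_SoC <= 1)
# Overall prevalence equals the sum of the state-specific prevalences
all.equal(v_prev_S1S2_SoC, v_prev_S1_SoC + v_prev_S2_SoC)
```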
\begin{figure}[H]
{\centering \includegraphics{figs/Sick-Sicker-Prev-AgeDep-1}
}
\caption{Prevalence of sick states in age-dependent cSTM}\label{fig:Sick-Sicker-Prev-AgeDep}
\end{figure}
\hypertarget{economic-measures}{%
\subsection{Economic measures}\label{economic-measures}}
In CEA, we can calculate economic outcomes from state and transition rewards. A ``state reward'' refers to a value (e.g., cost, utility) assigned to individuals for remaining in a given health state for one cycle. A ``transition reward'' refers to the increase or decrease in either costs or utilities of transitioning from one health state to another, which may be associated with a one-time cost or utility impact. In the accompanying tutorial, we describe how to incorporate state rewards in CEA in detail.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021a}{7}} Here, we describe and illustrate how to implement both state and transition rewards together using a transition array.
\hypertarget{state-rewards}{%
\subsubsection{State rewards}\label{state-rewards}}
As shown in the introductory tutorial, to add state rewards to the Sick-Sicker model, we first create vectors of utilities and costs for each of the four strategies considered. The vectors of utilities and costs under SoC, \texttt{v\_u\_SoC} and \texttt{v\_c\_SoC}, respectively, contain the utilities and costs corresponding to being in each of the four health states, shown in Table \ref{tab:param-table}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Vector of state utilities under SoC}
\NormalTok{v\_u\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ u\_H, }\AttributeTok{S1 =}\NormalTok{ u\_S1, }\AttributeTok{S2 =}\NormalTok{ u\_S2, }\AttributeTok{D =}\NormalTok{ u\_D)}
\CommentTok{\# Vector of state costs under SoC}
\NormalTok{v\_c\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ c\_H, }\AttributeTok{S1 =}\NormalTok{ c\_S1, }\AttributeTok{S2 =}\NormalTok{ c\_S2, }\AttributeTok{D =}\NormalTok{ c\_D)}
\end{Highlighting}
\end{Shaded}
We account for the benefits and costs of both treatments individually and their combination to create the state-reward vectors under treatments A and B (strategies A and B, respectively) and when applied jointly (strategy AB). Only treatment A affects QoL, so we create a vector of utilities for strategy A, \texttt{v\_u\_strA}, where we substitute the utility of being in S1 under SoC, \texttt{u\_S1}, with the utility associated with the benefit of treatment A in being in that state, \texttt{u\_trtA}. Treatment B does not affect QoL, so the vector of utilities for strategy B, \texttt{v\_u\_strB}, is the same as SoC's vector. However, when both treatments A and B are applied jointly (strategy AB), the resulting vector of utilities \texttt{v\_u\_strAB} equals that of strategy A.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Vector of state utilities for strategy A}
\NormalTok{v\_u\_strA }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ u\_H, }\AttributeTok{S1 =}\NormalTok{ u\_trtA, }\AttributeTok{S2 =}\NormalTok{ u\_S2, }\AttributeTok{D =}\NormalTok{ u\_D)}
\CommentTok{\# Vector of state utilities for strategy B}
\NormalTok{v\_u\_strB }\OtherTok{\textless{}{-}}\NormalTok{ v\_u\_SoC}
\CommentTok{\# Vector of state utilities for strategy AB}
\NormalTok{v\_u\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ v\_u\_strA}
\end{Highlighting}
\end{Shaded}
Both treatments A and B incur a cost. To create the vector of state costs for strategy A, \texttt{v\_c\_strA}, we add the cost of treatment A, \texttt{c\_trtA}, to S1 and S2 state costs. Similarly, when constructing the vector of state costs for strategy B, \texttt{v\_c\_strB}, we add the cost of treatment B, \texttt{c\_trtB}, to S1 and S2 state costs. Finally, for the vector of state costs for strategy AB, \texttt{v\_c\_strAB}, we add both treatment costs to the state costs of S1 and S2.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Vector of state costs for strategy A}
\NormalTok{v\_c\_strA }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ c\_H, }
\AttributeTok{S1 =}\NormalTok{ c\_S1 }\SpecialCharTok{+}\NormalTok{ c\_trtA, }
\AttributeTok{S2 =}\NormalTok{ c\_S2 }\SpecialCharTok{+}\NormalTok{ c\_trtA, }
\AttributeTok{D =}\NormalTok{ c\_D)}
\CommentTok{\# Vector of state costs for strategy B}
\NormalTok{v\_c\_strB }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ c\_H, }
\AttributeTok{S1 =}\NormalTok{ c\_S1 }\SpecialCharTok{+}\NormalTok{ c\_trtB, }
\AttributeTok{S2 =}\NormalTok{ c\_S2 }\SpecialCharTok{+}\NormalTok{ c\_trtB, }
\AttributeTok{D =}\NormalTok{ c\_D)}
\CommentTok{\# Vector of state costs for strategy AB}
\NormalTok{v\_c\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\AttributeTok{H =}\NormalTok{ c\_H, }
\AttributeTok{S1 =}\NormalTok{ c\_S1 }\SpecialCharTok{+}\NormalTok{ (c\_trtA }\SpecialCharTok{+}\NormalTok{ c\_trtB), }
\AttributeTok{S2 =}\NormalTok{ c\_S2 }\SpecialCharTok{+}\NormalTok{ (c\_trtA }\SpecialCharTok{+}\NormalTok{ c\_trtB), }
\AttributeTok{D =}\NormalTok{ c\_D)}
\end{Highlighting}
\end{Shaded}
\hypertarget{transition-rewards}{%
\subsubsection{Transition rewards}\label{transition-rewards}}
As previously mentioned, dying (i.e., transitioning to the Dead state) incurs a one-time cost of \$2,000 that reflects the acute care that might be received immediately preceding death. We also have a disutility and a cost increment on the transition from H to S1. Incorporating transition rewards requires keeping track of the proportion of the cohort that transitions between health states in each cycle while capturing the origin and destination states for each transition. The cohort trace, \(M\), does not capture this information. However, obtaining this information is relatively straightforward in a cSTM and is described in detail by Krijkamp et al.~(2020).\textsuperscript{\protect\hyperlink{ref-Krijkamp2019}{9}} Briefly, this approach involves changing the core computation in a traditional cSTM, from \(m_t P_t\) to \(\text{diag}(m_t) P_t\). This simple change allows us to compute the proportion of the cohort that transitions between any two states in a cycle \(t\). The result is no longer a cohort trace matrix, but rather a three-dimensional array that we refer to as a transition-dynamics array (\(\mathbf{A}\)) with dimensions \(n_S \times n_S \times [n_T+1]\). The \(t\)-th slice of \(\mathbf{A}\), \(A_t\), is a matrix that stores the proportion of the population that transitioned between any two states from cycles \(t-1\) to \(t\). Similarly, we define the transition rewards by the states of origin and destination.
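To see why replacing \(m_t P_t\) with \(\text{diag}(m_t) P_t\) captures transitions by origin and destination, consider a minimal two-state sketch (the states and probabilities here are illustrative and not part of the Sick-Sicker model):

```r
# Toy 2-state example with an illustrative transition matrix
m_P  <- matrix(c(0.9, 0.1,
                 0.0, 1.0),
               nrow = 2, byrow = TRUE)
v_m0 <- c(1, 0)              # the entire cohort starts in the first state
m_A1 <- diag(v_m0) %*% m_P   # proportions transitioning between each pair of states
# Row sums recover the state vector at the start of the cycle;
# column sums give the state vector at the end of the cycle
rowSums(m_A1) # 1.0 0.0
colSums(m_A1) # 0.9 0.1
```

Each entry \(i, j\) of the resulting matrix is the proportion of the cohort that moved from state \(i\) to state \(j\) during the cycle, which is exactly the information transition rewards require.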
To account for both state and transition rewards, we create a \emph{matrix} of rewards \(R_t\) of dimensions \(n_S \times n_S\). The off-diagonal entries of \(R_t\) store the transition rewards, and the diagonal of \(R_t\) stores the state rewards for cycle \(t\), assuming that rewards occur at the beginning of the cycle.\textsuperscript{\protect\hyperlink{ref-Krijkamp2019}{9}} Finally, we element-wise multiply this matrix by \(A_t\), the \(t\)-th slice of \(\mathbf{A}\), apply discounting and within-cycle correction, and compute the overall reward for each strategy and outcome. Below, we illustrate these computations in R.
To compute \(\mathbf{A}\) for the simulation-time-dependent Sick-Sicker model under SoC, we initialize a three-dimensional array \texttt{a\_A\_SoC} of dimensions \(n_S \times n_S \times [n_T+1]\) and set the diagonal of the first slice to the initial state vector \texttt{v\_s\_init}. Next, we create a three-dimensional array for each of the strategies as a copy of the array under SoC.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Initialize transition{-}dynamics array under SoC}
\NormalTok{a\_A\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\DecValTok{0}\NormalTok{,}
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, (n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{)),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Set first slice to the initial state vector in its diagonal}
\FunctionTok{diag}\NormalTok{(a\_A\_SoC[, , }\DecValTok{1}\NormalTok{]) }\OtherTok{\textless{}{-}}\NormalTok{ v\_s\_init}
\CommentTok{\# Initialize transition{-}dynamics array for strategies A, B, and AB}
\CommentTok{\# Structure and initial states are the same as for SoC}
\NormalTok{a\_A\_strA }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_SoC}
\NormalTok{a\_A\_strB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_SoC}
\NormalTok{a\_A\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_SoC}
\end{Highlighting}
\end{Shaded}
We then compute, for each of the four strategies, a matrix multiplication between a diagonal matrix of the \(t\)-th row of the corresponding cohort trace matrix, denoted as \texttt{diag(m\_M\_SoC{[}t,\ {]})} under SoC, by the \(t\)-th matrix of the corresponding array of transition matrices, \texttt{a\_P\_SoC{[},\ ,\ t{]}}, over all \(n_T\) cycles.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Iterative solution to produce the transition{-}dynamics array}
\ControlFlowTok{for}\NormalTok{ (t }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n\_cycles)\{}
\CommentTok{\# For SoC}
\NormalTok{ a\_A\_SoC[, , t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{diag}\NormalTok{(m\_M\_SoC[t, ]) }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_SoC[, , t]}
\CommentTok{\# For strategy A}
\NormalTok{ a\_A\_strA[, , t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{diag}\NormalTok{(m\_M\_strA[t, ]) }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strA[, , t]}
\CommentTok{\# For strategy B}
\NormalTok{ a\_A\_strB[, , t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{diag}\NormalTok{(m\_M\_strB[t, ]) }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strB[, , t]}
\CommentTok{\# For strategy AB}
\NormalTok{ a\_A\_strAB[, , t }\SpecialCharTok{+} \DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{diag}\NormalTok{(m\_M\_strAB[t, ]) }\SpecialCharTok{\%*\%}\NormalTok{ a\_P\_strAB[, , t]}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
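A useful property of the transition-dynamics array, and a handy internal check using the objects above, is that the column sums of each slice of \(\mathbf{A}\) recover the corresponding row of the cohort trace:

```r
# Column sums of each slice of a_A_SoC recover the cohort trace m_M_SoC
all.equal(t(apply(a_A_SoC, 3, colSums)), m_M_SoC, check.attributes = FALSE)
```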
To create the arrays of rewards for costs and utilities for the simulation-time-dependent Sick-Sicker cSTM, we create strategy-specific three-dimensional arrays of rewards and fill each of their rows across the third dimension with the vector of state rewards.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Arrays of state and transition rewards}
\CommentTok{\# Utilities under SoC}
\NormalTok{a\_R\_u\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_u\_SoC, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Costs under SoC}
\NormalTok{a\_R\_c\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_c\_SoC, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Utilities under Strategy A}
\NormalTok{a\_R\_u\_strA }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_u\_strA, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Costs under Strategy A}
\NormalTok{a\_R\_c\_strA }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_c\_strA, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Utilities under Strategy B}
\NormalTok{a\_R\_u\_strB }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_u\_strB, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Costs under Strategy B}
\NormalTok{a\_R\_c\_strB }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_c\_strB, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Utilities under Strategy AB}
\NormalTok{a\_R\_u\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_u\_strAB, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\CommentTok{\# Costs under Strategy AB}
\NormalTok{a\_R\_c\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{matrix}\NormalTok{(v\_c\_strAB, }\AttributeTok{nrow =}\NormalTok{ n\_states, }\AttributeTok{ncol =}\NormalTok{ n\_states, }\AttributeTok{byrow =}\NormalTok{ T), }
\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(n\_states, n\_states, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{),}
\AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(v\_names\_states, v\_names\_states, }\DecValTok{0}\SpecialCharTok{:}\NormalTok{n\_cycles))}
\end{Highlighting}
\end{Shaded}
To account for the transition rewards, we add or subtract them at the entries of the reward matrix representing the transitions of interest. For example, to account for the disutility of transitioning from H to S1 under strategy A, we subtract the disutility from the entries of the array of rewards corresponding to the transition from H to S1 across all cycles.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Add disutility due to transition from Healthy to Sick}
\NormalTok{a\_R\_u\_strA[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_u\_strA[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{{-}}\NormalTok{ du\_HS1}
\end{Highlighting}
\end{Shaded}
Using the same approach, we add the cost of transitioning from H to S1 and the cost of dying under strategy A.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Add transition cost due to transition from Healthy to Sick}
\NormalTok{a\_R\_c\_strA[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strA[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_HS1}
\CommentTok{\# Add transition cost of dying from all non{-}dead states}
\NormalTok{a\_R\_c\_strA[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strA[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_D}
\NormalTok{a\_R\_c\_strA[, , }\DecValTok{1}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## H S1 S2 D
## H 2000 17000 27000 2000
## S1 2000 16000 27000 2000
## S2 2000 16000 27000 2000
## D 2000 16000 27000 0
\end{verbatim}
Below, we show how to add the transition rewards to the reward matrices under SoC and strategies B and AB.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# SoC}
\CommentTok{\# Add disutility due to transition from H to S1}
\NormalTok{a\_R\_u\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_u\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{{-}}\NormalTok{ du\_HS1}
\CommentTok{\# Add transition cost due to transition from H to S1}
\NormalTok{a\_R\_c\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_SoC[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_HS1}
\CommentTok{\# Add transition cost of dying from all non{-}dead states}
\NormalTok{a\_R\_c\_SoC[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_SoC[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_D}
\DocumentationTok{\#\# Strategy B}
\CommentTok{\# Add disutility due to transition from Healthy to Sick}
\NormalTok{a\_R\_u\_strB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_u\_strB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{{-}}\NormalTok{ du\_HS1}
\CommentTok{\# Add transition cost due to transition from Healthy to Sick}
\NormalTok{a\_R\_c\_strB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_HS1}
\CommentTok{\# Add transition cost of dying from all non{-}dead states}
\NormalTok{a\_R\_c\_strB[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strB[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_D}
\DocumentationTok{\#\# Strategy AB}
\CommentTok{\# Add disutility due to transition from Healthy to Sick}
\NormalTok{a\_R\_u\_strAB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_u\_strAB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{{-}}\NormalTok{ du\_HS1}
\CommentTok{\# Add transition cost due to transition from Healthy to Sick}
\NormalTok{a\_R\_c\_strAB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strAB[}\StringTok{"H"}\NormalTok{, }\StringTok{"S1"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_HS1}
\CommentTok{\# Add transition cost of dying from all non{-}dead states}
\NormalTok{a\_R\_c\_strAB[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\OtherTok{\textless{}{-}}\NormalTok{ a\_R\_c\_strAB[}\SpecialCharTok{{-}}\NormalTok{n\_states, }\StringTok{"D"}\NormalTok{, ] }\SpecialCharTok{+}\NormalTok{ ic\_D}
\end{Highlighting}
\end{Shaded}
The state and transition rewards are applied to the model dynamics by element-wise multiplication between \(\mathbf{A}\) and \(\mathbf{R}\), indicated by the \(\odot\) sign, which produces the array of outputs for all \(n_T\) cycles, \(\mathbf{Y}\). Formally,
\begin{equation}
\mathbf{Y} = \mathbf{A} \odot \mathbf{R}
\label{eq:array-outputs}
\end{equation}
To obtain \(\mathbf{Y}\) for QALYs and costs for all four strategies, we apply Equation \eqref{eq:array-outputs} through an element-wise multiplication of each strategy's transition-dynamics array (e.g., \texttt{a\_A\_SoC} under SoC) by its corresponding array of rewards.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# For SoC}
\NormalTok{a\_Y\_c\_SoC }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_SoC }\SpecialCharTok{*}\NormalTok{ a\_R\_c\_SoC}
\NormalTok{a\_Y\_u\_SoC }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_SoC }\SpecialCharTok{*}\NormalTok{ a\_R\_u\_SoC}
\CommentTok{\# For Strategy A}
\NormalTok{a\_Y\_c\_strA }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strA }\SpecialCharTok{*}\NormalTok{ a\_R\_c\_strA}
\NormalTok{a\_Y\_u\_strA }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strA }\SpecialCharTok{*}\NormalTok{ a\_R\_u\_strA}
\CommentTok{\# For Strategy B}
\NormalTok{a\_Y\_c\_strB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strB }\SpecialCharTok{*}\NormalTok{ a\_R\_c\_strB}
\NormalTok{a\_Y\_u\_strB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strB }\SpecialCharTok{*}\NormalTok{ a\_R\_u\_strB}
\CommentTok{\# For Strategy AB}
\NormalTok{a\_Y\_c\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strAB }\SpecialCharTok{*}\NormalTok{ a\_R\_c\_strAB}
\NormalTok{a\_Y\_u\_strAB }\OtherTok{\textless{}{-}}\NormalTok{ a\_A\_strAB }\SpecialCharTok{*}\NormalTok{ a\_R\_u\_strAB}
\end{Highlighting}
\end{Shaded}
The total rewards for each health state at cycle \(t\), \(\mathbf{y}_t\), are obtained by summing the rewards over all \(i = 1,\ldots, n_S\) states of origin for all \(n_T\) cycles.
\begin{equation}
\mathbf{y}_t = \mathbf{1}^T Y_t = \left[\sum_{i=1}^{n_S}{Y_{[i,1,t]}}, \sum_{i=1}^{n_S}{Y_{[i,2,t]}}, \dots , \sum_{i=1}^{n_S}{Y_{[i,n_S,t]}}\right].
\label{eq:exp-rewd-trans}
\end{equation}
To obtain the expected costs and QALYs per cycle for each strategy, \(\mathbf{y}\), we apply Equation \eqref{eq:exp-rewd-trans} across all the matrices in the third dimension of \(\mathbf{Y}\) for each outcome.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Vectors of rewards}
\CommentTok{\# QALYs under SoC}
\NormalTok{v\_qaly\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_u\_SoC)))}
\CommentTok{\# Costs under SoC}
\NormalTok{v\_cost\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_c\_SoC)))}
\CommentTok{\# QALYs under Strategy A}
\NormalTok{v\_qaly\_strA }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_u\_strA)))}
\CommentTok{\# Costs under Strategy A}
\NormalTok{v\_cost\_strA }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_c\_strA)))}
\CommentTok{\# QALYs under Strategy B}
\NormalTok{v\_qaly\_strB }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_u\_strB)))}
\CommentTok{\# Costs under Strategy B}
\NormalTok{v\_cost\_strB }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_c\_strB)))}
\CommentTok{\# QALYs under Strategy AB}
\NormalTok{v\_qaly\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_u\_strAB)))}
\CommentTok{\# Costs under Strategy AB}
\NormalTok{v\_cost\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{rowSums}\NormalTok{(}\FunctionTok{t}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(a\_Y\_c\_strAB)))}
\end{Highlighting}
\end{Shaded}
\hypertarget{within-cycle-correction-and-discounting-future-rewards}{%
\subsubsection{Within-cycle correction and discounting future rewards}\label{within-cycle-correction-and-discounting-future-rewards}}
Following the steps in the introductory cSTM tutorial,\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021a}{7}} here we use Simpson's 1/3 rule for within-cycle correction (WCC)\textsuperscript{\protect\hyperlink{ref-Elbasha2016a}{21}} and exponential discounting for costs and QALYs. In our example, the WCC vector, \(\mathbf{wcc}\), is the same for both costs and QALYs; thus, only one vector, \texttt{v\_wcc}, is required.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\# Vector with cycles}
\NormalTok{v\_cycles }\OtherTok{\textless{}{-}} \FunctionTok{seq}\NormalTok{(}\DecValTok{1}\NormalTok{, n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{)}
\DocumentationTok{\#\# Generate 4/3 and 2/3 multipliers for even and odd entries, respectively}
\NormalTok{v\_wcc }\OtherTok{\textless{}{-}}\NormalTok{ ((v\_cycles }\SpecialCharTok{\%\%} \DecValTok{2}\NormalTok{)}\SpecialCharTok{==}\DecValTok{0}\NormalTok{)}\SpecialCharTok{*}\NormalTok{(}\DecValTok{4}\SpecialCharTok{/}\DecValTok{3}\NormalTok{) }\SpecialCharTok{+}\NormalTok{ ((v\_cycles }\SpecialCharTok{\%\%} \DecValTok{2}\NormalTok{)}\SpecialCharTok{!=}\DecValTok{0}\NormalTok{)}\SpecialCharTok{*}\NormalTok{(}\DecValTok{2}\SpecialCharTok{/}\DecValTok{3}\NormalTok{)}
\DocumentationTok{\#\# Substitute 1/3 in first and last entries}
\NormalTok{v\_wcc[}\DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}}\NormalTok{ v\_wcc[n\_cycles }\SpecialCharTok{+} \DecValTok{1}\NormalTok{] }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{/}\DecValTok{3}
\end{Highlighting}
\end{Shaded}
The discount vectors, \(\mathbf{d}\), for costs and QALYs for the Sick-Sicker model, \texttt{v\_dwc} and \texttt{v\_dwe}, respectively, are
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Discount weight for effects}
\NormalTok{v\_dwe }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{/}\NormalTok{ ((}\DecValTok{1} \SpecialCharTok{+}\NormalTok{ d\_e) }\SpecialCharTok{\^{}}\NormalTok{ (}\DecValTok{0}\SpecialCharTok{:}\NormalTok{(n\_cycles))) }
\CommentTok{\# Discount weight for costs }
\NormalTok{v\_dwc }\OtherTok{\textless{}{-}} \DecValTok{1} \SpecialCharTok{/}\NormalTok{ ((}\DecValTok{1} \SpecialCharTok{+}\NormalTok{ d\_c) }\SpecialCharTok{\^{}}\NormalTok{ (}\DecValTok{0}\SpecialCharTok{:}\NormalTok{(n\_cycles))) }
\end{Highlighting}
\end{Shaded}
To account for both discounting and WCC, we incorporate \(\mathbf{wcc}\) in equation \eqref{eq:tot-exp-disc-rewd-wcc} using an element-wise multiplication with \(\mathbf{d}\), indicated by the \(\odot\) symbol, such that
\begin{equation}
y = \mathbf{y}^{'} \left(\mathbf{d} \odot \mathbf{wcc}\right).
\label{eq:tot-exp-disc-rewd-wcc}
\end{equation}
The total expected discounted costs and QALYs under all four strategies accounting for WCC, \(y\), are obtained by applying Equation \eqref{eq:tot-exp-disc-rewd-wcc} to the expected outcomes accounting for transition rewards.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# For SoC}
\DocumentationTok{\#\# QALYs}
\NormalTok{n\_tot\_qaly\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_qaly\_SoC) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwe }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\# Costs}
\NormalTok{n\_tot\_cost\_SoC }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_cost\_SoC) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwc }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\#\# For Strategy A}
\DocumentationTok{\#\# QALYs}
\NormalTok{n\_tot\_qaly\_strA }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_qaly\_strA) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwe }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\# Costs}
\NormalTok{n\_tot\_cost\_strA }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_cost\_strA) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwc }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\#\# For Strategy B}
\DocumentationTok{\#\# QALYs}
\NormalTok{n\_tot\_qaly\_strB }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_qaly\_strB) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwe }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\# Costs}
\NormalTok{n\_tot\_cost\_strB }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_cost\_strB) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwc }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\#\# For Strategy AB}
\DocumentationTok{\#\# QALYs}
\NormalTok{n\_tot\_qaly\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_qaly\_strAB) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwe }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\DocumentationTok{\#\# Costs}
\NormalTok{n\_tot\_cost\_strAB }\OtherTok{\textless{}{-}} \FunctionTok{t}\NormalTok{(v\_cost\_strAB) }\SpecialCharTok{\%*\%}\NormalTok{ (v\_dwc }\SpecialCharTok{*}\NormalTok{ v\_wcc)}
\end{Highlighting}
\end{Shaded}
The total expected discounted QALYs and costs for the simulation-time-dependent Sick-Sicker model under the four strategies accounting for WCC are shown in Table \ref{tab:Expected-outcomes-table}.
\begin{table}[!h]
\caption{\label{tab:Expected-outcomes-table}Total expected discounted QALYs and costs per average individual in the cohort of the simulation-time-dependent Sick-Sicker model by strategy, accounting for within-cycle correction.}
\centering
\begin{tabular}[t]{llc}
\toprule{}
& Costs & QALYs\\
\midrule{}
Standard of care & \$114,560 & 19.142\\
Strategy A & \$211,911 & 19.840\\
Strategy B & \$194,481 & 20.480\\
Strategy AB & \$282,370 & 21.302\\
\bottomrule{}
\end{tabular}
\end{table}
\hypertarget{incremental-cost-effectiveness-ratios-icers}{%
\section{Incremental cost-effectiveness ratios (ICERs)}\label{incremental-cost-effectiveness-ratios-icers}}
To conduct the cost-effectiveness analysis, we follow the coding approach described in the introductory cSTM tutorial.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021a}{7}} We combine the total expected discounted costs and QALYs for all four strategies into outcome-specific vectors, \texttt{v\_cost\_str} for costs and \texttt{v\_qaly\_str} for QALYs. We use the R package \texttt{dampack} (\url{https://cran.r-project.org/web/packages/dampack/})\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2021}{22}} to calculate the incremental costs and effectiveness and the incremental cost-effectiveness ratio (ICER) between non-dominated strategies. \texttt{dampack} organizes and formats the results as a data frame, \texttt{df\_cea}, that can be printed as a formatted table.
\begin{Shaded}
\begin{Highlighting}[]
\DocumentationTok{\#\#\# Vector of costs}
\NormalTok{v\_cost\_str }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(n\_tot\_cost\_SoC, n\_tot\_cost\_strA, n\_tot\_cost\_strB, n\_tot\_cost\_strAB)}
\DocumentationTok{\#\#\# Vector of effectiveness}
\NormalTok{v\_qaly\_str }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(n\_tot\_qaly\_SoC, n\_tot\_qaly\_strA, n\_tot\_qaly\_strB, n\_tot\_qaly\_strAB)}
\DocumentationTok{\#\#\# Calculate incremental cost{-}effectiveness ratios (ICERs)}
\NormalTok{df\_cea }\OtherTok{\textless{}{-}}\NormalTok{ dampack}\SpecialCharTok{::}\FunctionTok{calculate\_icers}\NormalTok{(}\AttributeTok{cost =}\NormalTok{ v\_cost\_str, }
\AttributeTok{effect =}\NormalTok{ v\_qaly\_str,}
\AttributeTok{strategies =}\NormalTok{ v\_names\_str)}
\end{Highlighting}
\end{Shaded}
In terms of costs and effectiveness, SoC is the least costly and least effective strategy. Strategy B produces an expected benefit of 1.338 QALYs per individual for an additional expected cost of \$79,920 relative to SoC, yielding an ICER of \$59,726/QALY, followed by Strategy AB with an ICER of \$106,927/QALY. Strategy A is a dominated strategy. The results of the CEA of the simulation-time-dependent Sick-Sicker model are presented in Table \ref{tab:table-cea}. The non-dominated strategies, SoC, B, and AB, form the cost-effectiveness efficient frontier of the CEA based on the simulation-time-dependent Sick-Sicker model (Figure \ref{fig:Sick-Sicker-CEA-AgeDep}).
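The reported ICERs can be reproduced by hand from the rounded totals in Table \ref{tab:Expected-outcomes-table}; a quick check in R (because it uses the rounded values, the results differ from the table by a few dollars per QALY):

```r
# Hand-check of the incremental results using the rounded totals reported above
ic_B    <- 194481 - 114560   # incremental costs, Strategy B vs. SoC
ie_B    <- 20.480 - 19.142   # incremental QALYs, Strategy B vs. SoC
icer_B  <- ic_B / ie_B       # approx. 59,726 $/QALY (up to rounding)
ic_AB   <- 282370 - 194481   # incremental costs, Strategy AB vs. B
ie_AB   <- 21.302 - 20.480   # incremental QALYs, Strategy AB vs. B
icer_AB <- ic_AB / ie_AB     # approx. 106,927 $/QALY (up to rounding)
```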
\begin{table}[!h]
\caption{\label{tab:table-cea}Cost-effectiveness analysis results for the simulation-time-dependent Sick-Sicker model. ND: Non-dominated strategy; D: Dominated strategy.}
\centering
\begin{tabular}[t]{rcccccc}
\toprule{}
Strategy & Costs (\$) & QALYs & Incremental Costs (\$) & Incremental QALYs & ICER (\$/QALY) & Status\\
\midrule{}
Standard of care & 114,560 & 19.142 & NA & NA & NA & ND\\
Strategy B & 194,481 & 20.480 & 79,920 & 1.338 & 59,726 & ND\\
Strategy AB & 282,370 & 21.302 & 87,890 & 0.822 & 106,927 & ND\\
Strategy A & 211,911 & 19.840 & NA & NA & NA & D\\
\bottomrule{}
\end{tabular}
\end{table}
\begin{figure}[H]
{\centering \includegraphics{figs/Sick-Sicker-CEA-AgeDep-1}
}
\caption{Cost-effectiveness efficient frontier of all four strategies for the simulation-time-dependent Sick-Sicker model.}\label{fig:Sick-Sicker-CEA-AgeDep}
\end{figure}
\hypertarget{probabilistic-sensitivity-analysis}{%
\section{Probabilistic sensitivity analysis}\label{probabilistic-sensitivity-analysis}}
We conducted a probabilistic sensitivity analysis (PSA) to quantify the effect of model parameter uncertainty on cost-effectiveness outcomes.\textsuperscript{\protect\hyperlink{ref-Briggs2012}{23}} In a PSA, we randomly draw parameter sets from distributions that reflect the current uncertainty in model parameter estimates. The parameters' distributions and their values are described in Table \ref{tab:param-table} and more detail in the Supplementary Material. We compute model outcomes for each sampled set of parameter values (e.g., total discounted cost and QALYs) for each strategy. We follow the steps to conduct a PSA from a previously published article describing the Decision Analysis in R for technologies in Health (DARTH) coding framework.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2019e}{12}}
To conduct the PSA of the CEA using the simulation-time-dependent Sick-Sicker cSTM, we sampled 1,000 parameter sets. For each set, we computed the total discounted costs and QALYs of each simulated strategy. Results from a PSA can be represented in various ways. For example, the joint distribution, 95\% confidence ellipse, and the expected values of the total discounted costs and QALYs for each strategy can be plotted in a cost-effectiveness (CE) scatter plot (Figure \ref{fig:CE-scatter-TimeDep}),\textsuperscript{\protect\hyperlink{ref-Briggs2002}{24}} where each of the 4,000 simulations (i.e., 1,000 combinations of total discounted expected costs and QALYs for each of the four strategies) is plotted as a point in the graph. The CE scatter plot for the CEA using the simulation-time-dependent model shows that strategy AB has the highest expected costs and QALYs. Standard of care has the lowest expected costs and QALYs. Strategy B is more effective and less costly than strategy A; therefore, strategy A is strongly dominated by strategy B.
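The PSA mechanics can be sketched generically as follows. This is a minimal, self-contained illustration rather than the actual Sick-Sicker model: \texttt{sample\_params} and \texttt{run\_model} are hypothetical stand-ins for the model's parameter-sampling and model-evaluation functions, and the distributions and outcome formulas are invented for the sketch.

```r
set.seed(1)    # reproducibility of the PSA draws
n_sim <- 1000  # number of PSA samples
# Hypothetical sampler: one draw from each parameter's uncertainty distribution
sample_params <- function() {
  list(p_sick = rbeta(1, 30, 170),                  # e.g., a transition probability
       c_sick = rgamma(1, shape = 100, scale = 20)) # e.g., an annual cost
}
# Hypothetical model wrapper: maps one parameter set to total (cost, qaly)
run_model <- function(l_params) {
  c(cost = 100 * l_params$c_sick, qaly = 20 - 50 * l_params$p_sick)
}
# One row of outcomes per sampled parameter set
m_psa <- t(sapply(seq_len(n_sim), function(i) run_model(sample_params())))
colMeans(m_psa)  # expected total cost and QALYs over the PSA samples
```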
\begin{figure}[H]
{\centering \includegraphics{figs/CE-scatter-TimeDep-1}
}
\caption{Cost-effectiveness scatter plot.}\label{fig:CE-scatter-TimeDep}
\end{figure}
In Figure \ref{fig:CEAC-AgeDep}, we present the cost-effectiveness acceptability curves (CEACs) showing the probability that each strategy is cost-effective, and the cost-effectiveness frontier (CEAF), which shows the strategy with the highest expected net monetary benefit (NMB), over a range of willingness-to-pay (WTP) thresholds. Each strategy's NMB is computed using \(\text{NMB} = \text{QALY} \times \text{WTP} - \text{Cost}\)\textsuperscript{\protect\hyperlink{ref-Stinnett1998b}{25}} for each PSA sample. At WTP thresholds less than \$65,000 per QALY, SoC is the strategy with the highest probability of being cost-effective and the highest expected NMB. Strategy B has the highest probability of being cost-effective and the highest expected NMB for WTP thresholds greater than or equal to \$65,000 and less than \$105,000 per QALY. Strategy AB has the highest expected NMB and the highest probability of being cost-effective for WTP thresholds greater than or equal to \$105,000 per QALY.
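For a single WTP threshold, one point of a CEAC can be computed from the PSA output as follows (a standalone sketch with illustrative, randomly generated PSA draws for two strategies; the means and standard deviations are invented for the example):

```r
set.seed(2)
n_sim <- 1000
wtp   <- 100000  # willingness-to-pay threshold ($/QALY)
# Illustrative PSA draws of total QALYs and costs for two strategies
m_qaly <- cbind(SoC = rnorm(n_sim, 19.1, 0.5),   B = rnorm(n_sim, 20.5, 0.5))
m_cost <- cbind(SoC = rnorm(n_sim, 114560, 1e4), B = rnorm(n_sim, 194481, 1e4))
m_nmb  <- m_qaly * wtp - m_cost  # NMB = QALY x WTP - Cost, per PSA sample
# Share of samples in which each strategy has the highest NMB (one CEAC point)
v_best <- factor(colnames(m_nmb)[max.col(m_nmb)], levels = colnames(m_nmb))
prop.table(table(v_best))
```

Repeating this over a grid of WTP values traces out the full CEACs, and taking the strategy with the highest column mean of \texttt{m\_nmb} at each WTP traces out the CEAF.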
\begin{figure}[H]
{\centering \includegraphics{figs/CEAC-AgeDep-1}
}
\caption{Cost-effectiveness acceptability curves (CEACs) and frontier (CEAF).}\label{fig:CEAC-AgeDep}
\end{figure}
The CEAC and CEAF do not show the magnitude of the expected net benefit lost (i.e., expected loss) when the chosen strategy is not the cost-effective strategy in all the samples of the PSA. To complement these results, we quantify the expected loss from each strategy over a range of WTP thresholds with expected loss curves (ELCs) (Figure \ref{fig:ELC-AgeDep}). The expected loss considers both the probability of making the wrong decision and the magnitude of the loss due to this decision, representing the foregone benefits of choosing a suboptimal strategy. The expected loss of the optimal strategy represents the lowest envelope of the ELCs because, given current information, the loss cannot be minimized further. The lower envelope also represents the expected value of perfect information (EVPI), which quantifies the value of eliminating parameter uncertainty. The strategy SoC has the lowest expected loss for WTP thresholds less than \$65,000 per QALY, strategy B for WTP thresholds greater than or equal to \$65,000 and less than \$105,000 per QALY, and strategy AB for WTP thresholds greater than or equal to \$105,000 per QALY. At a WTP threshold of \$65,000 per QALY, the EVPI is highest at \$6,953. For a more detailed description of these outputs and the R code to generate them, we refer the reader to a previous publication by our group.\textsuperscript{\protect\hyperlink{ref-Alarid-Escudero2019}{26}}
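At a fixed WTP threshold, the expected loss and EVPI computations reduce to a few lines (a standalone sketch using an illustrative NMB matrix with one row per PSA sample and one column per strategy; the values are invented for the example):

```r
set.seed(3)
# Illustrative NMB matrix at one WTP threshold: 1,000 PSA samples x 3 strategies
m_nmb <- cbind(SoC = rnorm(1000, 0,   1e4),
               B   = rnorm(1000, 5e3, 1e4),
               AB  = rnorm(1000, 4e3, 1e4))
v_nmb_max  <- apply(m_nmb, 1, max)  # best achievable NMB in each PSA sample
m_loss     <- v_nmb_max - m_nmb     # foregone benefit of each strategy, per sample
v_exp_loss <- colMeans(m_loss)      # one point of each strategy's ELC
evpi       <- min(v_exp_loss)       # EVPI: expected loss of the optimal strategy
```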
\begin{figure}[H]
{\centering \includegraphics{figs/ELC-AgeDep-1}
}
\caption{Expected loss curves (ELCs) and expected value of perfect information (EVPI) generated from the probabilistic sensitivity analysis (PSA) output.}\label{fig:ELC-AgeDep}
\end{figure}
\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}
In this tutorial, we conceptualize time-dependent cSTMs with their mathematical description and a walk-through of their implementation for a CEA in R using the Sick-Sicker example. We described two types of time-dependency: dependence on the time since the start of the simulation (simulation-time dependency) or on time spent in a health state (state-residence time dependency). We also illustrate how to generate various epidemiological measures from the model, incorporate transition rewards in CEAs, and conduct a PSA.
We implemented simulation-time dependency by expanding the transition probability matrix into a transition probability array, where the third dimension captures time. However, there are alternative implementations of simulation-time dependency in cSTMs. For example, the model could be coded such that the time-varying elements of the transition probability matrix \(P_t\) are updated at each time point \(t\) as the simulation is run. This would eliminate the need for the transition probability array \texttt{a\_P}, reducing computer memory requirements. But this comes at the expense of increasing the number of operations at every cycle, potentially slowing down the simulation.
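A minimal sketch of this alternative, using a hypothetical two-state (Alive/Dead) model with a cycle-dependent mortality rate rather than the full Sick-Sicker structure: a small function builds the cycle-specific transition matrix \(P_t\) on demand, so no 3-dimensional array is ever stored.

```r
# Assumed mortality rates per cycle (illustrative, increasing with age)
v_r_mort <- seq(0.01, 0.06, length.out = 10)
gen_P <- function(t) {  # build P_t on the fly for cycle t
  p_die <- 1 - exp(-v_r_mort[t])  # rate-to-probability conversion
  matrix(c(1 - p_die, p_die,
           0,         1), nrow = 2, byrow = TRUE,
         dimnames = list(c("Alive", "Dead"), c("Alive", "Dead")))
}
# Cohort trace for cycles 0..10; only the current P_t exists at any time
m_M <- matrix(0, nrow = 11, ncol = 2, dimnames = list(0:10, c("Alive", "Dead")))
m_M[1, ] <- c(1, 0)  # everyone starts Alive
for (t in 1:10) m_M[t + 1, ] <- m_M[t, ] %*% gen_P(t)
```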
We incorporated state-residence time dependency using tunnel states by expanding the corresponding health states on the first and second dimensions of the 3-dimensional array to account for time spent in the current state in addition to simulation-time dependence. Another approach to account for state-residence time dependency is to use a 3-dimensional transition probability array with dimensions for the current state, future state, and time in the current state.\textsuperscript{\protect\hyperlink{ref-Hawkins2005}{27}} However, in examples combining simulation-time and state-residence time dependencies, this would necessitate a 4-dimensional array, which may be challenging to index.
Most generally, any time-varying feature in a discrete-time model can be implemented as tunnel states, with, at the extreme, every state having a different tunnel state for each time step. The cohort would then progressively move through these tunnel states to capture its progression through time and the model features (e.g., transition probabilities, costs, or utilities) that change over time. Using time-varying transition probabilities is a shortcut that is possible when the cohort experiences these time-varying processes simultaneously as a function of the time since the simulation start. Even if the time-varying process has a different periodicity than the cycle length, either tunnel states or time-varying transition probabilities can be used to capture these effects; however, the process needs to be represented or approximated over an integer number of cycle lengths.
As described in the introductory tutorial, cSTMs are recommended when the number of states is considered ``not too large''.\textsuperscript{\protect\hyperlink{ref-Siebert2012c}{15}} Incorporating time-dependency in cSTMs using the provided approaches requires expanding the number of states by creating a multidimensional array for simulation-time dependency and/or creating tunnel states for state-residence time dependency, increasing the amount of computer memory (RAM) required. It is possible to build reasonably complex time-dependent cSTMs in R as long as there is sufficient RAM to store the transition probability array and outputs of interest. For example, a typical PC with 8GB of RAM can handle a transition probability array of about 1,000 states and 600 time-cycle slices. However, if a high degree of granularity is desired, the dimensions of these data objects can grow quickly; if the required number of states gets too large and difficult to code, it may be preferable to use a stochastic (Monte Carlo) version of the state-transition model -- often called individual-based state-transition models (iSTM) or microsimulation models -- rather than a cohort simulation model.\textsuperscript{\protect\hyperlink{ref-Siebert2012c}{15}} In an iSTM, the risks and rewards of simulated individuals need not depend only on a current health state; they may also depend on an individual's characteristics and attributes. In addition, modelers can store health state history and other events over time for each individual to determine the risk of new events and corresponding costs and effects. Thus, we recommend carefully considering the required model structure before implementing it. An iSTM will also require additional functions to describe the dependency of transition probabilities and rewards on individuals' history. 
In a previous tutorial, we showed how to construct these functions for an iSTM using the Sick-Sicker example.\textsuperscript{\protect\hyperlink{ref-Krijkamp2018}{28}}
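The RAM figure mentioned above can be verified with quick arithmetic: a transition probability array of 1,000 origin states by 1,000 destination states by 600 time-cycle slices, stored as double-precision (8-byte) numbers, requires

```r
n_states <- 1000  # states, including any tunnel-state expansions
n_slices <- 600   # time-cycle slices of the transition probability array
n_bytes  <- n_states * n_states * n_slices * 8  # 8 bytes per double
n_bytes / 2^30    # approx. 4.5 GiB, which fits within 8 GB of RAM
```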
We also described the concept and implementation of transition rewards. While these event-driven impacts could instead be captured by expanding the model state-space to include an initial disease state or a pre-death, end-of-life state, we can avoid state-space expansion by calculating incurred rewards based on the proportion of the cohort making the relevant transition. For implementation, this requires storing not just the cohort trace, which reflects how many individuals are in each state at a given cycle, but also a cohort state-transition array, which records how many individuals are making each possible transition at a given cycle.\textsuperscript{\protect\hyperlink{ref-Krijkamp2019}{9}}
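The bookkeeping behind this idea can be sketched in a standalone two-state example: weighting the rows of the transition probability matrix by the cohort currently in each state yields the proportion of the cohort making each transition in that cycle, to which transition rewards can then be applied. (The matrix and cohort values below are illustrative only.)

```r
# Illustrative transition probability matrix for one cycle
P <- matrix(c(0.9, 0.1,
              0,   1), nrow = 2, byrow = TRUE,
            dimnames = list(c("Alive", "Dead"), c("Alive", "Dead")))
v_mt <- c(Alive = 0.8, Dead = 0.2)  # cohort distribution at cycle t
m_T  <- diag(v_mt) %*% P            # proportion making each transition this cycle
dimnames(m_T) <- dimnames(P)
m_T["Alive", "Dead"]                # share incurring, e.g., a death-related cost
```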
In summary, this tutorial extends our conceptualization of time-independent cSTMs to allow for time dependency. It provides a step-by-step guide to implementing time-dependent cSTMs in R to generate epidemiological and economic measures, account for transition rewards, and conduct a CEA and a corresponding PSA. We hope that health decision scientists and health economists find this tutorial helpful in developing their cSTMs in a more flexible, efficient, and open-source manner. Ultimately, our goal is to facilitate R in health economic evaluations research with the overall aim to increase model transparency and reproducibility.
\section*{Acknowledgements}\label{acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
Dr. Alarid-Escudero was supported by grants U01-CA199335 and U01-CA253913 from the National Cancer Institute (NCI) as part of the Cancer Intervention and Surveillance Modeling Network (CISNET), and a grant by the Gordon and Betty Moore Foundation. Miss Krijkamp was supported by the Society for Medical Decision Making (SMDM) fellowship through a grant by the Gordon and Betty Moore Foundation (GBMF7853). Dr. Enns was supported by a grant from the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under award no. K25AI118476. Dr. Hunink received research funding from the American Diabetes Association, the Netherlands Organization for Health Research and Development, the German Innovation Fund, Netherlands Educational Grant (``Studie Voorschot Middelen''), and the Gordon and Betty Moore Foundation. Dr. Jalal was supported by a grant from the National Institute on Drug Abuse of the National Institute of Health under award no. K01DA048985. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funding agencies had no role in the design of the study, interpretation of results, or writing of the manuscript. The funding agreement ensured the authors' independence in designing the study, interpreting the data, writing, and publishing the report. We also want to thank the anonymous reviewers of \emph{Medical Decision Making} for their valuable suggestions and the students who took our classes where we refined these materials.
\hypertarget{references}{%
\section*{References}\label{references}}
\addcontentsline{toc}{section}{References}
\hypertarget{refs}{}
\begin{CSLReferences}{0}{0}
\leavevmode\hypertarget{ref-Suijkerbuijk2018}{}%
\CSLLeftMargin{1. }
\CSLRightInline{Suijkerbuijk AWM, Van Hoek AJ, Koopsen J, et al. {Cost-effectiveness of screening for chronic hepatitis B and C among migrant populations in a low endemic country}. \emph{PLoS ONE} 2018; 13: 1--16.}
\leavevmode\hypertarget{ref-Sathianathen2018a}{}%
\CSLLeftMargin{2. }
\CSLRightInline{Sathianathen NJ, Konety BR, Alarid-Escudero F, et al. {Cost-effectiveness Analysis of Active Surveillance Strategies for Men with Low-risk Prostate Cancer}. \emph{European Urology}; 75: 910--917, \url{https://linkinghub.elsevier.com/retrieve/pii/S0302283818308534} (2019).}
\leavevmode\hypertarget{ref-Lu2018b}{}%
\CSLLeftMargin{3. }
\CSLRightInline{Lu S, Yu Y, Fu S, et al. {Cost-effectiveness of ALK testing and first-line crizotinib therapy for non-small-cell lung cancer in China}. \emph{PLoS ONE} 2018; 13: 1--12.}
\leavevmode\hypertarget{ref-Djatche2018}{}%
\CSLLeftMargin{4. }
\CSLRightInline{Djatche LM, Varga S, Lieberthal RD. {Cost-Effectiveness of Aspirin Adherence for Secondary Prevention of Cardiovascular Events}. \emph{PharmacoEconomics - Open}; 2: 371--380, \url{https://doi.org/10.1007/s41669-018-0075-2} (2018).}
\leavevmode\hypertarget{ref-Pershing2014}{}%
\CSLLeftMargin{5. }
\CSLRightInline{Pershing S, Enns EA, Matesic B, et al. {Cost-Effectiveness of Treatment of Diabetic Macular Edema}. \emph{Annals of Internal Medicine} 2014; 160: 18--29.}
\leavevmode\hypertarget{ref-Smith-Spangler2010}{}%
\CSLLeftMargin{6. }
\CSLRightInline{Smith-Spangler CM, Juusola JL, Enns EA, et al. {Population Strategies to Decrease Sodium Intake and the Burden of Cardiovascular Disease: A Cost-Effectiveness Analysis}. \emph{Annals of Internal Medicine}; 152: 481--487, \url{http://annals.org/article.aspx?articleid=745729} (2010).}
\leavevmode\hypertarget{ref-Alarid-Escudero2021a}{}%
\CSLLeftMargin{7. }
\CSLRightInline{Alarid-Escudero F, Krijkamp E, Enns EA, et al. {An Introductory Tutorial on Cohort State-Transition Models in R Using a Cost-Effectiveness Analysis Example}. \emph{arXiv:200107824v3}; 1--26, \url{https://arxiv.org/abs/2001.07824} (2021).}
\leavevmode\hypertarget{ref-Snowsill2019}{}%
\CSLLeftMargin{8. }
\CSLRightInline{Snowsill T. {A New Method for Model-Based Health Economic Evaluation Utilizing and Extending Moment-Generating Functions}. \emph{Medical Decision Making}; 39: 523--539, \url{http://journals.sagepub.com/doi/10.1177/0272989X19860119} (2019).}
\leavevmode\hypertarget{ref-Krijkamp2019}{}%
\CSLLeftMargin{9. }
\CSLRightInline{Krijkamp EM, Alarid-Escudero F, Enns E, et al. {A Multidimensional Array Representation of State-Transition Model Dynamics}. \emph{Medical Decision Making} 2019; In Press.}
\leavevmode\hypertarget{ref-Jalal2017b}{}%
\CSLLeftMargin{10. }
\CSLRightInline{Jalal H, Pechlivanoglou P, Krijkamp E, et al. {An Overview of R in Health Decision Sciences}. \emph{Medical Decision Making}; 37: 735--746, \url{http://journals.sagepub.com/doi/10.1177/0272989X16686559} (2017).}
\leavevmode\hypertarget{ref-Enns2015e}{}%
\CSLLeftMargin{11. }
\CSLRightInline{Enns EA, Cipriano LE, Simons CT, et al. {Identifying Best-Fitting Inputs in Health-Economic Model Calibration: A Pareto Frontier Approach}. \emph{Medical Decision Making}; 35: 170--182, \url{http://www.ncbi.nlm.nih.gov/pubmed/24799456} (2015).}
\leavevmode\hypertarget{ref-Alarid-Escudero2019e}{}%
\CSLLeftMargin{12. }
\CSLRightInline{Alarid-Escudero F, Krijkamp E, Pechlivanoglou P, et al. {A Need for Change! A Coding Framework for Improving Transparency in Decision Modeling}. \emph{PharmacoEconomics}; 37: 1329--1339, \url{https://doi.org/10.1007/s40273-019-00837-x} (2019).}
\leavevmode\hypertarget{ref-Arias2017}{}%
\CSLLeftMargin{13. }
\CSLRightInline{Arias E, Heron M, Xu J. {United States Life Tables, 2014}. \emph{National Vital Statistics Reports}; 66: 63, \url{https://www.cdc.gov/nchs/data/nvsr/nvsr66/nvsr66\%7B/_\%7D04.pdf} (2017).}
\leavevmode\hypertarget{ref-Diaby2014}{}%
\CSLLeftMargin{14. }
\CSLRightInline{Diaby V, Adunlin G, Montero AJ. {Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: A tutorial}. \emph{PharmacoEconomics} 2014; 32: 101--108.}
\leavevmode\hypertarget{ref-Siebert2012c}{}%
\CSLLeftMargin{15. }
\CSLRightInline{Siebert U, Alagoz O, Bayoumi AM, et al. {State-Transition Modeling: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-3}. \emph{Medical Decision Making}; 32: 690--700, \url{http://mdm.sagepub.com/cgi/doi/10.1177/0272989X12455463} (2012).}
\leavevmode\hypertarget{ref-Lee2003a}{}%
\CSLLeftMargin{16. }
\CSLRightInline{Lee ET, Wang JW. \emph{{Statistical methods for Survival Data Analysis}}. 3rd ed. Hoboken, NJ: Wiley, 2003.}
\leavevmode\hypertarget{ref-Klein2003}{}%
\CSLLeftMargin{17. }
\CSLRightInline{Klein JP, Moeschberger ML. \emph{{Survival Analysis: Techniques for Censored and Truncated Data}}. 2nd ed. Springer-Verlag, \url{http://www.springer.com/statistics/life+sciences,+medicine+\%7B/\&\%7D+health/book/978-0-387-95399-1} (2003).}
\leavevmode\hypertarget{ref-Rothman2008h}{}%
\CSLLeftMargin{18. }
\CSLRightInline{Rothman KJ, Greenland S, Lash TL. \emph{{Modern Epidemiology}}. 3rd ed. Lippincott Williams {\&} Wilkins, 2008.}
\leavevmode\hypertarget{ref-Keiding1991}{}%
\CSLLeftMargin{19. }
\CSLRightInline{Keiding N. {Age-Specific Incidence and Prevalence: A Statistical Perspective}. \emph{Journal of the Royal Statistical Society Series A (Statistics in Society)} 1991; 154: 371--412.}
\leavevmode\hypertarget{ref-Elbasha2016}{}%
\CSLLeftMargin{20. }
\CSLRightInline{Elbasha EH, Chhatwal J. {Theoretical foundations and practical applications of within-cycle correction methods}. \emph{Medical Decision Making} 2016; 36: 115--131.}
\leavevmode\hypertarget{ref-Elbasha2016a}{}%
\CSLLeftMargin{21. }
\CSLRightInline{Elbasha EH, Chhatwal J. {Myths and misconceptions of within-cycle correction: a guide for modelers and decision makers}. \emph{PharmacoEconomics} 2016; 34: 13--22.}
\leavevmode\hypertarget{ref-Alarid-Escudero2021}{}%
\CSLLeftMargin{22. }
\CSLRightInline{Alarid-Escudero F, Knowlton G, Easterly CA, et al. Decision analytic modeling package (dampack), \url{https://cran.r-project.org/web/packages/dampack/} (2021).}
\leavevmode\hypertarget{ref-Briggs2012}{}%
\CSLLeftMargin{23. }
\CSLRightInline{Briggs AH, Weinstein MC, Fenwick EAL, et al. {Model Parameter Estimation and Uncertainty Analysis: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force Working Group-6.} \emph{Medical Decision Making} 2012; 32: 722--732.}
\leavevmode\hypertarget{ref-Briggs2002}{}%
\CSLLeftMargin{24. }
\CSLRightInline{Briggs AH, Goeree R, Blackhouse G, et al. {Probabilistic Analysis of Cost-Effectiveness Models: Choosing between Treatment Strategies for Gastroesophageal Reflux Disease}. \emph{Medical Decision Making}; 22: 290--308, \url{http://mdm.sagepub.com/cgi/doi/10.1177/027298902400448867} (2002).}
\leavevmode\hypertarget{ref-Stinnett1998b}{}%
\CSLLeftMargin{25. }
\CSLRightInline{Stinnett AA, Mullahy J. {Net Health Benefits: A New Framework for the Analysis of Uncertainty in Cost-Effectiveness Analysis}. \emph{Medical Decision Making}; 18: S68--S80, \url{http://mdm.sagepub.com/cgi/doi/10.1177/0272989X9801800209} (1998).}
\leavevmode\hypertarget{ref-Alarid-Escudero2019}{}%
\CSLLeftMargin{26. }
\CSLRightInline{Alarid-Escudero F, Enns EA, Kuntz KM, et al. {"Time Traveling Is Just Too Dangerous" But Some Methods Are Worth Revisiting: The Advantages of Expected Loss Curves Over Cost-Effectiveness Acceptability Curves and Frontier}. \emph{Value in Health} 2019; 22: 611--618.}
\leavevmode\hypertarget{ref-Hawkins2005}{}%
\CSLLeftMargin{27. }
\CSLRightInline{Hawkins N, Sculpher M, Epstein D. {Cost-effectiveness analysis of treatments for chronic disease: Using R to incorporate time dependency of treatment response.} \emph{Medical Decision Making}; 25: 511--9, \url{http://www.ncbi.nlm.nih.gov/pubmed/16160207} (2005).}
\leavevmode\hypertarget{ref-Krijkamp2018}{}%
\CSLLeftMargin{28. }
\CSLRightInline{Krijkamp EM, Alarid-Escudero F, Enns EA, et al. {Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial}. \emph{Medical Decision Making}; 38: 400--422, \url{http://journals.sagepub.com/doi/10.1177/0272989X18754513} (2018).}
\end{CSLReferences}
\newpage
\begin{landscape}
\section*{Supplementary Material}
\hypertarget{cohort-tutorial-model-components}{%
\subsection*{Cohort tutorial model
components}\label{cohort-tutorial-model-components}}
This table contains an overview of the key model components used in the
code for the Sick-Sicker example from the
\href{http://darthworkgroup.com/publications/}{DARTH} manuscript: ``A
Tutorial on Time-Dependent Cohort State-Transition Models in R''. The
first column gives the mathematical notation for some of the model
components that are used in the equations in the manuscript. The second
column gives a description of the model component with the R name in the
third column. The fourth column gives the data structure, e.g.~scalar,
list, vector, or matrix, with the corresponding dimensions of this data
structure in the fifth column. The final column indicates the type of
data that is stored in the data structure, e.g.~numeric (5.2, 6.3, 7.4),
category (A, B, C), integer (5, 6, 7), or logical (TRUE, FALSE).
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.07}}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.35}}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.14}}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.14}}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.17}}
>{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.12}}@{}}
\toprule
Element & Description & R name & Data structure & Dimensions & Data
type \\
\midrule
\endhead
\(n_t\) & Time horizon & \texttt{n\_t} & scalar & & numeric \\
\(v_s\) & Names of the health states & \texttt{v\_n} & vector &
\texttt{n\_states} x 1 & character \\
\(n_s\) & Number of health states & \texttt{n\_states} & scalar & &
numeric \\
\(n_{S_{tunnels}}\) & Number of health states with tunnels &
\texttt{n\_states\_tunnels} & scalar & & numeric \\
\(v_{str}\) & Names of the strategies & \texttt{v\_names\_str} & vector
& \texttt{n\_str} x 1 & character \\
\(n_{str}\) & Number of strategies & \texttt{n\_str} & scalar & &
numeric \\
\(\mathbf{d_c}\) & Discount rate for costs & \texttt{d\_c} & scalar & &
numeric \\
\(\mathbf{d_e}\) & Discount rate for effects & \texttt{d\_e} & scalar &
& numeric \\
& Discount weights for costs & \texttt{v\_dwc} & vector & (\texttt{n\_t}
x 1 ) + 1 & numeric \\
& Discount weights for effects & \texttt{v\_dwe} & vector &
(\texttt{n\_t} x 1 ) + 1 & numeric \\
\(\mathbf{wcc}\) & Within-cycle correction weights using Simpson's 1/3
rule & \texttt{v\_wcc} & vector & (\texttt{n\_t} x 1 ) + 1 & numeric \\
\(age_{_0}\) & Age at baseline & \texttt{n\_age\_init} & scalar & &
numeric \\
\(age\) & Maximum age of follow up & \texttt{n\_age\_max} & scalar & &
numeric \\
\(M_{ad}\) & Cohort trace for age-dependency & \texttt{m\_M\_ad} &
matrix & (\texttt{n\_t} + 1) x \texttt{n\_states} & numeric \\
\(M_{tunnels}\) & Aggregated Cohort trace for state-dependency &
\texttt{m\_M\_tunnels} & matrix & (\texttt{n\_t} + 1) x
\texttt{n\_states} & numeric \\
\(m_0\) & Initial state vector & \texttt{v\_s\_init} & vector & 1 x
\texttt{n\_states} & numeric \\
\(m_t\) & State vector in cycle t & \texttt{v\_mt} & vector & 1 x
\texttt{n\_states} & numeric \\
& & & & & \\
& \textbf{Transition probabilities} & & & & \\
\(p_{[H,S1]}\) & From Healthy to Sick conditional on surviving &
\texttt{p\_HS1} & scalar & & numeric \\
\(p_{[S1,H]}\) & From Sick to Healthy conditional on surviving &
\texttt{p\_S1H} & scalar & & numeric \\
\(p_{[S1,S2]}\) & From Sick to Sicker conditional on surviving &
\texttt{p\_S1S2} & scalar & & numeric \\
\(r_{[H,D]}\) & Constant rate of dying when Healthy (all-cause mortality
rate) & \texttt{r\_HD} & scalar & & numeric \\
\(hr_{[S1,H]}\) & Hazard ratio of death in Sick vs Healthy &
\texttt{hr\_S1} & scalar & & numeric \\
\(hr_{[S2,H]}\) & Hazard ratio of death in Sicker vs Healthy &
\texttt{hr\_S2} & scalar & & numeric \\
\(hr_{[S1,S2]_{trtB}}\) & Hazard ratio of becoming Sicker when Sick
under treatment B & \texttt{hr\_S1S2\_trtB} & scalar & & numeric \\
\(p_{[S1,S2]_{trtB}}\) & probability to become Sicker when Sick under
treatment B conditional on surviving & \texttt{p\_S1S2\_trtB} & scalar &
& numeric \\
& & & & & \\
& \textbf{Age-specific mortality} & & & & \\
\(r_{[H,D,t]}\) & Age-specific background mortality rates &
\texttt{v\_r\_HDage} & vector & \texttt{n\_t} x 1 & numeric \\
\(r_{[S1,D,t]}\) & Age-specific mortality rates in the Sick state &
\texttt{v\_r\_S1Dage} & vector & \texttt{n\_t} x 1 & numeric \\
\(r_{[S2,D,t]}\) & Age-specific mortality rates in the Sicker state &
\texttt{v\_r\_S2Dage} & vector & \texttt{n\_t} x 1 & numeric \\
\(p_{[H,D,t]}\) & Age-specific mortality risk in the Healthy state &
\texttt{v\_p\_HDage} & vector & \texttt{n\_t} x 1 & numeric \\
\(p_{[S1,D,t]}\) & Age-specific mortality rates in the Sick state &
\texttt{v\_p\_S1Dage} & vector & \texttt{n\_t} x 1 & numeric \\
\(p_{[S2,D,t]}\) & Age-specific mortality rates in the Sicker state &
\texttt{v\_p\_S2Dage} & vector & \texttt{n\_t} x 1 & numeric \\
\(p_{[S1,S2, t]}\) & Time-dependent transition probabilities from sick
to sicker & \texttt{v\_p\_S1S2\_tunnels} & vector & \texttt{n\_t} x 1 &
numeric \\
& & & & & \\
& \textbf{Annual costs} & & & & \\
& Healthy individuals & \texttt{c\_H} & scalar & & numeric \\
& Sick individuals in Sick & \texttt{c\_S1} & scalar & & numeric \\
& Sick individuals in Sicker & \texttt{c\_S2} & scalar & & numeric \\
& Dead individuals & \texttt{c\_D} & scalar & & numeric \\
& Additional costs treatment A & \texttt{c\_trtA} & scalar & &
numeric \\
& Additional costs treatment B & \texttt{c\_trtB} & scalar & &
numeric \\
& & & & & \\
& \textbf{Utility weights} & & & & \\
& Healthy individuals & \texttt{u\_H} & scalar & & numeric \\
& Sick individuals in Sick & \texttt{u\_S1} & scalar & & numeric \\
& Sick individuals in Sicker & \texttt{u\_S2} & scalar & & numeric \\
& Dead individuals & \texttt{u\_D} & scalar & & numeric \\
& Treated with treatment A & \texttt{u\_trtA} & scalar & & numeric \\
& & & & & \\
& \textbf{Transition weights} & & & & \\
& Utility decrement of healthy individuals when transitioning to S1 &
\texttt{du\_HS1} & scalar & & numeric \\
& Cost of healthy individuals when transitioning to S1 & \texttt{ic\_S1}
& scalar & & numeric \\
& Cost of dying & \texttt{ic\_D} & scalar & & numeric \\
& & & & & \\
& \textbf{Lists} & & & & \\
& Cohort traces for each strategy & \texttt{l\_m\_M} & list & &
numeric \\
& Transition arrays for each strategy & \texttt{l\_A\_A} & list & &
numeric \\
& number of tunnel states & \texttt{n\_tunnel\_size} & scalar & &
numeric \\
& tunnel names of the Sick state & \texttt{v\_Sick\_tunnel} & vector & 1
x \texttt{n\_states} & numeric \\
& state names including tunnel states & \texttt{v\_n\_tunnel} & vector &
1 x \texttt{n\_states} & character \\
& number of states including tunnel states & \texttt{n\_states\_tunnels}
& scalar & & numeric \\
& initial state vector for the model with tunnels &
\texttt{v\_s\_init\_tunnels} & & & numeric \\
& & & & & \\
\(\mathbf{P}\) & Time-dependent transition probability array &
\texttt{a\_P} & array & \texttt{n\_states} x \texttt{n\_states} x
\texttt{n\_t} & numeric \\
\(\mathbf{P}_{tunnels}\) & Transition probability array for the model
with tunnels & \texttt{a\_P\_tunnels} & array &
\texttt{n\_states\_tunnels} x \texttt{n\_states\_tunnels} x
\texttt{n\_t} & numeric \\
\(\mathbf{A}\) & Transition dynamics array & \texttt{a\_A} & array &
\texttt{n\_states} x \texttt{n\_states} x (\texttt{n\_t} + 1) &
numeric \\
\(\mathbf{R_u}\) & Transition rewards for effects & \texttt{a\_R\_u} &
array & \texttt{n\_states} x \texttt{n\_states} x (\texttt{n\_t} + 1) &
numeric \\
\(\mathbf{R_c}\) & Transition rewards for costs & \texttt{a\_R\_c} &
array & \texttt{n\_states} x \texttt{n\_states} x (\texttt{n\_t} + 1) &
numeric \\
\(\mathbf{Y_u}\) & Expected effects per states per cycle &
\texttt{a\_Y\_u} & array & \texttt{n\_states} x \texttt{n\_states} x
(\texttt{n\_t} + 1) & numeric \\
\(\mathbf{Y_c}\) & Expected costs per state per cycle & \texttt{a\_Y\_c}
& array & \texttt{n\_states} x \texttt{n\_states} x (\texttt{n\_t} + 1)
& numeric \\
& & & & & \\
& \textbf{Data structures} & & & & \\
& Expected QALYs per cycle under a strategy & \texttt{v\_qaly\_str} &
vector & 1 x (\texttt{n\_t} + 1) & numeric \\
& Expected costs per cycle under a strategy & \texttt{v\_cost\_str} &
vector & 1 x (\texttt{n\_t} + 1) & numeric \\
& Total expected discounted QALYs for a strategy &
\texttt{n\_tot\_qaly\_str} & scalar & & numeric \\
& Total expected discounted costs for a strategy &
\texttt{n\_tot\_cost\_str} & scalar & & numeric \\
& Summary of the model outcomes & \texttt{df\_cea} & data frame & & \\
& Summary of the model outcomes & \texttt{table\_cea} & table & & \\
& Input parameters values of the model for the cost-effectiveness
analysis & \texttt{df\_psa} & data frame & & \\
\bottomrule
\end{longtable}
\end{landscape}
\end{document}
\section{Introduction}
Texture is intuitively defined as a repeated arrangement of a basic pattern or object in an image. There is no universal mathematical definition of a texture though. The human visual system is able to identify and segment different textures in a given image without much effort. Automating this task for a computer, though, is far from trivial.\\
Apart from being a tough academic problem, texture segmentation has several applications, including detection of landscape changes from aerial photographs in remote sensing and GIS \cite{yu2003}, content-based image retrieval \cite{fauzi2006}, and diagnosis of ultrasound images \cite{muzzolini1993}.\\
There are three major components in any texture segmentation algorithm: (a) The model or features that define or characterize a texture, (b) the metric defined on this representation space, and (c) the clustering algorithm that runs over these features in order to segment a given image into different textures.\\
There are two approaches to modeling a texture: structural and statistical. The structural approach describes a texture as a specific spatial arrangement of a primitive element. Voronoi polynomials are used to specify the spatial arrangement of these primitive elements \cite{voronoi_polynomial,ahuja_texel}. The statistical approach describes a texture using features that encode the regularity in the arrangement of gray-levels in an image. Examples of features used are responses to Gabor filters \cite{dennis_gabor}, gray-level co-occurrence matrices \cite{zucker1980,haralick1979}, wavelet coefficients \cite{do2002}, human visual perception based Tamura features \cite{tamura1978}, Laws energy measures \cite{laws1980}, local binary patterns \cite{ojala1996} and covariance matrices of features \cite{tuzel_region_covariance,michael_covariance}. In \cite{howarth04}, the authors compare the performance of some of the above mentioned features for the specific goal of image retrieval. In fact, Zhu, Wu \& Mumford \cite{zhu98} propose a mechanism for choosing an optimal set of features for texture modeling from a given general filter bank. Markov random fields \cite{george_mrf}, fractal dimensions \cite{chaudhuri_fractal} and the space of oscillating functions \cite{vese2003} have also been used to model textures.\\
Various metrics have been used to quantify dissimilarity of features: Euclidean, Chi-squared, Kullback-Leibler \& its symmetrized version \cite{wang_dti}, manifold distance on the Gabor feature space \cite{sagiv2006} and others. $k-$NN, Bayesian inference, $c-$ means, alongwith active contours algorithms are some of the methods used for clustering/segmentating texture areas in the image with similar features.\\
In this paper, we use intensity covariance matrices over a region as the texture feature. Since these are symmetric positive definite matrices, which form a manifold denoted by $PD(n)$, it is natural to use the intrinsic manifold distance as a measure of feature dissimilarity. Using a novel active contours method, we propose to find the background/foreground texture regions in a given image by maximizing the geodesic distance between the interior and exterior covariance matrices. This is the main contribution of our paper. \\
In the next subsection we list some existing texture segmentation approaches that use the active contours model.
\subsection{Related work}
Sagiv, Sochen \& Zeevi \cite{sagiv2006} generalize both, geodesic active contours and Chan \& Vese active contours, to work on a Gabor feature space. The Gabor feature space is a parametric $2-$D manifold embedded in ${\mathbb R}^7$ whose natural metric is used to define an edge detector function for geodesic active contours, and to define the intra-region variance in case of the Chan \& Vese active contours. In \cite{savelonas2006}, the authors use Chan \& Vese active contours on Local Binary Pattern features for texture segmentation.\\
In \cite{rousson2003}, the authors propose a Chan \& Vese active contour model on probability distributions of the structure tensor of the image as a feature.
The closest approach to our algorithm is by Houhou et al.\ \cite{nawal_acm}, where the authors find a contour that maximizes a KL-divergence based metric on the probability distributions of a feature for points lying inside and outside the contour. The feature used is based on the principal curvatures of the intensity image considered as a $2-$D manifold embedded in ${\mathbb R}^3$. In particular, the cost function for a curve $\Omega$ is defined as
\begin{align*}
KL(p_{in}(\Omega),p_{out}(\Omega)) & = \int_{{\mathbb R}^+} \left(p_{in}(\kappa_t,\Omega) - p_{out}(\kappa_t,\Omega)\right)\\
& \cdot \left(\log p_{in}(\kappa_t,\Omega) - \log p_{out}(\kappa_t,\Omega)\right)\ d\kappa_t
\end{align*}
where $p_{in}(\Omega),p_{out}(\Omega)$ is the probability distribution of the feature $\kappa$ inside and outside the closed contour $\Omega$ respectively. Gaussian distribution is assumed as the model for the probability distribution of the feature both inside as well as outside the contour.
In our approach, instead of using some scalar feature to represent texture, we iteratively compute a contour that maximizes the geodesic distance between the interior and exterior intensity covariance matrix of the contour. It can be seen that the maximization process has to be carried out over the manifold of symmetric positive definite matrices, making it fundamentally different from the approach in \cite{nawal_acm}. Moreover, we can easily extend this approach to covariance matrices of any other texture feature one may want to use.\\
The paper is organized as follows: in the next section we provide a brief review of active contour models for image segmentation.
In Section \ref{sec:proposedalg}, we describe our active contour model based on the geodesic distance between the interior and exterior covariance matrices of a contour. We give our experimental results in Section \ref{sec:expts}, followed by conclusions and future scope.
\subsection{Active contours and Level sets}
In classical active contours \cite{kass}, the user initializes a curve $C(q): [0,1]\rightarrow \Omega \subseteq\mathbb{R}^{2}$ on an intensity image $I:\Omega \rightarrow {\mathbb R}$ which evolves and stabilizes on the object boundary. The curve evolution is a gradient descent of an energy functional, $E(C)$, given by
\begin{align*}
E(C)=\alpha\int_{0}^{1}\mid C^{'}(q)\mid^{2} dq & + \beta \int_{0}^{1}\mid C^{''}(q)\mid^{2} dq \\
& - \lambda \int_{0}^{1}\mid\nabla I(C(q))\mid dq
\end{align*}
where $\alpha,\beta$ and $\lambda$ are real positive constants, $C^{'}$ and $C^{''}$ are first and second derivatives of $C$ and $\nabla I$ is the image gradient. First two terms are regularizers, while the third term pushes the curve towards the object boundary.\\
Geodesic active contours \cite{kimmel} is an active contour model where the objective function can be interpreted as the length of a curve $C:[0,1] \rightarrow {\mathbb R}^2$ in a Riemannian space with metric induced by image intensity. The energy functional for geodesic active contour is given by
\[
E=\int_{0}^{1} g(\vert \nabla I(C(q))\vert)\,\vert C'(q) \vert\, dq,
\]
where $g:{\mathbb R}\rightarrow{\mathbb R}$ is a monotonically decreasing \emph{edge detector} function. One such choice is $g(s) = \exp\left(-s\right)$.
The curve evolution equation that minimizes this energy is given by
\[
\frac{\partial C}{\partial t} = \left(g(I)\kappa - \langle\nabla g, \hat{n}\rangle\right)\hat{n}
\]
where $\hat{n}$ is the inward unit normal and $\kappa$ is the curvature of the curve $C$.\\
A convenient computational procedure for curve evolution is the level set formulation \cite{problems_in_image_processing}. Here the curve is embedded in the zero set of a function $\phi:\mathbb{R}^{2}\rightarrow\mathbb{R}$ and the function is made to evolve so that its zero level set evolves according to the desired curve evolution equation.
For a curve evolution equation of the form $\frac{\partial C}{\partial t}=v \hat{n}$, the corresponding level set evolution is $\frac{\partial \phi}{\partial t}=v \vert\nabla \phi \vert$. See Appendix in \cite{kimmel}. In particular the level set evolution for geodesic active contours is given by
\begin{align*}
\frac{\partial \phi}{\partial t}=g(I)\vert\nabla\phi\vert \, div\left(\frac{\nabla\phi}{\vert\nabla\phi\vert}\right)+ \nabla g(I)\cdot\nabla\phi.
\end{align*}
Another active contours approach was introduced by Chan and Vese \cite{chan2001}, where the energy functional is based on regional similarity properties of an object, rather than its edges (image gradient). Suppose that $C$ is the initial curve defined on the domain $\Omega$ of the intensity image $I$. $\Omega$ can be divided into two parts, interior (denoted by $int(C)$) and exterior (denoted by $ext(C)$).
Let us represent the mean gray value of the region $int(C)$ and $ext(C)$ by $c_{1}$ and $c_{2}$ respectively, then the energy function for which the object boundary is a minima is given by
\begin{align*}
F_{1}(C)+F_{2}(C) & = \int_{int(C)}\vert I(x,y)-c_{1}\vert^{2} dx dy\\
& + \int_{ext(C)}\vert I(x,y)-c_{2}\vert^{2} dx dy
\end{align*}
After adding some regularizing terms, the energy functional $F(c_{1},c_{2},C)$ is given by
\begin{align*}
F(c_{1},c_{2},C) &= \mu.Length (C)\; + \;\nu.Area (int(C)) \\
&\;+\; \lambda_{1}\int_{int(C)}\vert I(x,y)-c_{1}\vert^{2} dx dy \\
&+ \lambda_{2}\int_{ext(C)}\vert I(x,y)-c_{2}\vert^{2} dx dy
\end{align*}
where $\mu\geq 0, \nu \geq 0 , \lambda_{1},\lambda_{2}> 0$ are fixed scalar parameters.
The level set evolution equation is given by
\begin{align*}
\frac{\partial \phi}{\partial t}= \delta_{\epsilon}(\phi)\left[\mu \;div\left(\frac{\nabla\phi}{\vert\nabla\phi\vert}\right) - \nu - \lambda_{1}\left(I-c_{1}\right)^{2} +\lambda_{2}\left(I-c_{2}\right)^{2}\right]
\end{align*}%
where $\delta_{\epsilon}$ is a smooth approximation of the Dirac delta function. A nice survey on active contours and level set implementation can be found in \cite{problems_in_image_processing}. We now describe our active contour model for texture segmentation.
\section{Proposed Active contour model for texture segmentation}
\label{sec:proposedalg}
In what follows, we assume familiarity with concepts from differential geometry like geodesic distance, Riemannian Exponential and Riemannian Logarithm maps. A thorough introduction to these concepts can be found in the books \cite{boothby1975,docarmo1992}.
We are given an intensity image $I:\Omega\subset {\mathbb R}^2 \rightarrow {\mathbb R}$. Our algorithm assumes that the image contains a background and a foreground texture. At every point $x\in \Omega$, let $N(x)$ be an $R^2 \times 1$ vector of intensities over a small neighborhood\footnote{Although we use a continuous region $\Omega$ to model the image domain, we are implicitly assuming a discrete image domain while defining the concept of a neighborhood $N(x)$. We choose to ignore this discrepancy.}, say of size $R \times R$. Given a closed contour $C$ on $\Omega$, we define the following two covariance matrices:
\begin{align}
\nonumber M^i(C) = & \frac{\int_{int(C)} N(x)N(x)^T\ dx}{\int_{int(C)} \ dx}\\
M^e(C) = & \frac{\int_{ext(C)} N(x)N(x)^T\ dx}{{\int_{ext(C)} \ dx}}
\label{eqn:mime}
\end{align}
where $int(C), ext(C)$ denote the interior and exterior of $C$ respectively. Note that $M^i(C)$ and $M^e(C)$ both belong to the set of $R^2 \times R^2$ symmetric positive definite matrices, which is a Riemannian manifold henceforth denoted by $PD(R^2)$. Let $d :PD(R^2) \times PD(R^2) \rightarrow {\mathbb R}$ denote the geodesic distance between two points of this manifold. Since the image contains two different texture regions, it is evident that the two covariance matrices (points on this manifold) defined in \eqref{eqn:mime} will be furthest away (in terms of geodesic distance) from each other when the contour $C$ lies on the boundary between the two textures. We justify this claim with empirical evidence in Figure \ref{img:contourdistance}.
\begin{figure}[h]
\centering
\subfigure[]
{
\includegraphics[width=1.1 in]{contour_1.png}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{distance_1.png}
}
\\
\subfigure[]
{
\includegraphics[width=1.1 in]{contour_2.png}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{distance_2.png}
}
\caption{(left) Different contours on an image with a foreground/background textures, (right) the corresponding (referred by appropriate contour number) geodesic distance between the covariance matrices defined in \eqref{eqn:mime}.} \label{img:contourdistance}
\end{figure}
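As a concrete illustration, the interior and exterior covariance matrices of Equation \eqref{eqn:mime} can be computed from a boolean interior mask. The following Python/NumPy sketch is our own illustration (the function name and the plain nested loops are not part of the original method; a practical implementation would vectorize the loops):

```python
import numpy as np

def region_covariances(I, mask, R=5):
    # Interior/exterior covariance matrices of the two-region model.
    # I    : 2-D grayscale image (float array)
    # mask : boolean array, True inside the contour
    # N(x) : the R*R intensity neighborhood of x, stacked as a vector
    H, W = I.shape
    r, d = R // 2, R * R
    Mi, Me = np.zeros((d, d)), np.zeros((d, d))
    ni = ne = 0
    for y in range(r, H - r):          # skip the border ring where N(x)
        for x in range(r, W - r):      # is not fully defined
            N = I[y - r:y + r + 1, x - r:x + r + 1].reshape(d, 1)
            O = N @ N.T                # N(x) N(x)^T
            if mask[y, x]:
                Mi += O; ni += 1
            else:
                Me += O; ne += 1
    return Mi / max(ni, 1), Me / max(ne, 1)
```

Each returned matrix is an average of outer products, hence symmetric positive semidefinite (and positive definite in practice when the region contains enough varied patches).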
For a given texture image $I:\Omega\rightarrow {\mathbb R}$, we propose the following cost function on the set of all closed contours defined on $\Omega$:
\begin{equation}
J(C) = d(M^i(C),M^e(C))
\label{eqn:geocost}
\end{equation}
where $M^i(C),M^e(C)$ are defined in Equation \eqref{eqn:mime}. We find the contour $C$ that maximizes this cost using a gradient ascent approach, giving us a novel active contour scheme. Instead of working with parametric representations of $C$, we work with its level set representation, which has several benefits as discussed in \cite{problems_in_image_processing}.\\
The curve $C$ is represented as the zero level set of the function $\phi:\Omega\rightarrow{\mathbb R}$, i.e., $C = \phi^{-1}(0)$. A typical choice of $\phi$ is the signed distance function of $C$:
\begin{align*}
\phi(x) = \left\{
\begin{array}{c l}
-d_C(x,C) & x \in int(C)\\
d_C(x,C) & x \in \Omega \setminus int(C)
\end{array}\right.
\end{align*}
where
\begin{equation*}
d_C(x,C) = \inf_{y \in C} d_{{\mathbb R}^2}(x,y)
\end{equation*}
with $d_{{\mathbb R}^2}(x,y)$ as the usual Euclidean distance on ${\mathbb R}^2$ between $x$ and $y$.
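On a discrete grid, the signed distance function $\phi$ can be initialized from a boolean interior mask. The brute-force sketch below is our own illustration (it assumes a nonempty interior and costs $O(n\,\vert\mathrm{boundary}\vert)$; practical implementations use fast marching or a distance transform):

```python
import numpy as np

def signed_distance(mask):
    # Signed distance level set: negative inside the contour, positive
    # outside, zero on the (interior-side) boundary pixels.
    H, W = mask.shape
    pad = np.pad(mask, 1)  # constant padding with False
    # boundary: interior pixels with at least one exterior 4-neighbor
    nbr_out = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
               ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    by, bx = np.nonzero(mask & nbr_out)
    yy, xx = np.mgrid[0:H, 0:W]
    d = np.sqrt((yy[..., None] - by) ** 2 +
                (xx[..., None] - bx) ** 2).min(-1)
    return np.where(mask, -d, d)
```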
The sets $int(C),ext(C)$ can then be defined in terms of the level set function $\phi$ as
\begin{align}
\nonumber int(C) = & \{x \in \Omega | \phi(x) < 0 \} \\
ext(C) = & \{x \in \Omega | \phi(x) \geq 0 \}.
\end{align}
Using the level set function $\phi$ and the Heaviside function
\begin{align*}
H(\phi) = \left\{
\begin{array}{c l}
1, &\ \phi \geq 0\\
0, &\ \mbox{ otherwise},
\end{array}\right.
\end{align*}
we redefine the covariance matrices from Equation \eqref{eqn:mime}, as
\begin{align}
\nonumber M^{i}(\phi)= \frac{\int_{\Omega} (1-H(\phi)) N(x)N(x)^{T} dx}{\int_{\Omega}(1-H(\phi)) dx}\\
M^{e}(\phi)=\frac{\int_{\Omega} H(\phi) N(x)N(x)^{T} dx}{\int_{\Omega}H(\phi) dx}
\label{eqn:covlevelset}
\end{align}
Re-writing our cost function from Equation \eqref{eqn:geocost} in terms of the level set function $\phi$ gives us
\begin{equation}
J(\phi)=d(M^{i}(\phi),M^{e}(\phi))
\label{eqn:phicost}
\end{equation}
To maximize this cost function we use a gradient ascent algorithm; the gradient is computed as follows:
\begin{align}
\frac{\partial J}{\partial \phi}(\phi) = \left\langle \frac{\partial d}{\partial M^{i}} , \frac{\partial M^{i}(\phi)}{\partial \phi}\right\rangle_{M^i}
+ \left\langle \frac{\partial d}{\partial M^e} , \frac{\partial M^e(\phi)}{\partial \phi}\right\rangle_{M^e}
\label{eqn:gradcost1}
\end{align}
where $\left\langle .,.\right\rangle_{M^i}$ and $\left\langle .,.\right\rangle_{M^e}$ are the Riemannian inner products defined on the Tangent space of $PD(R^2)$ at points $M^i(\phi)$ and $M^e(\phi)$, respectively. Specific details on this inner product can be found in \cite{fletcher2007}.
The derivatives of the geodesic distance $d$ is given by\footnote{A simpler explanation for this can be given in case we are working with ${\mathbb R}^2$ instead of $PD(R^2)$. In this case $\frac{\partial d}{\partial x}(x,y) = -(y-x)$ and $\frac{\partial d}{\partial y}(x,y) = -(x-y)$. This is exactly what is done by the Riemannian Log map on manifolds.}
\begin{align}
\label{eqn:graddist1}\frac{\partial }{\partial M^{i}} d (M^{i},M^{e})= -Log_{M^i}(M^e) \in T_{M^i}PD(R^2) \\
\frac{\partial }{\partial M^{e}} d (M^{i},M^{e})= -Log_{M^e}(M^i) \in T_{M^e}PD(R^2)
\label{eqn:graddist2}
\end{align}
where Log denotes the Riemannian log map defined on $PD(R^2)$. Derivatives of the covariance matrices defined in Equation \eqref{eqn:covlevelset} are given by
\begin{align}
\label{eqn:dmidp}\frac{\partial M^i}{\partial \phi}(\phi) & = \frac{1}{|\Omega_{int}|} \int_{\Omega} \left( M^i(\phi) -N(x)N(x)^{T}\right)\delta(\phi) dx \\
\frac{\partial M^e}{\partial \phi}(\phi) & = \frac{1}{|\Omega_{ext}|} \int_{\Omega} \left( N(x)N(x)^{T} - M^e(\phi) \right)\delta(\phi) dx
\label{eqn:dmedp}
\end{align}
where $|\Omega_{int}|$ and $|\Omega_{ext}|$ are given by
\begin{align*}
|\Omega_{int}| & = \int_{\Omega} (1-H(\phi))dx \\
|\Omega_{ext}| & = \int_{\Omega} H(\phi)dx.
\end{align*}
and $\delta$ is the Dirac delta function.
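The geodesic distance, Riemannian Log map and inner product on $PD(n)$ used above admit closed forms under the affine-invariant metric (see \cite{fletcher2007}). A small self-contained sketch via eigendecompositions (the helper names are ours):

```python
import numpy as np

def _spd_pow(A, p):
    # A^p for a symmetric positive definite matrix A
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def _spd_log(A):
    # matrix logarithm of an SPD matrix
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_dist(A, B):
    # affine-invariant geodesic distance on PD(n)
    Ais = _spd_pow(A, -0.5)
    return np.linalg.norm(_spd_log(Ais @ B @ Ais))

def spd_log_map(A, B):
    # Riemannian Log map: tangent vector at A pointing towards B
    As, Ais = _spd_pow(A, 0.5), _spd_pow(A, -0.5)
    return As @ _spd_log(Ais @ B @ Ais) @ As

def spd_inner(A, U, V):
    # Riemannian inner product <U, V>_A = trace(A^{-1} U A^{-1} V)
    Ai = np.linalg.inv(A)
    return np.trace(Ai @ U @ Ai @ V)
```

A useful sanity check is the identity $\langle Log_A(B), Log_A(B)\rangle_A = d(A,B)^2$, which ties the three functions together.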
Substituting Equations \eqref{eqn:graddist1},\eqref{eqn:graddist2},\eqref{eqn:dmidp},\eqref{eqn:dmedp} into Equation \eqref{eqn:gradcost1}, we get
\begin{align}
\nonumber\frac{\partial J}{\partial \phi} & = \int_{\Omega} \left[\left\langle -Log_{M^i}(M^e), \frac{1}{|\Omega_{int}|} \left( M^i(\phi) -N(x)N(x)^{T}\right)\delta(\phi) \right\rangle_{M^i} \right.\\
&\left. + \left\langle -Log_{M^e}(M^i),\frac{1}{|\Omega_{ext}|} \left( N(x)N(x)^{T} - M^e(\phi) \right)\delta(\phi) \right\rangle_{M^e}\right]\ dx
\end{align}
The gradient ascent as a level set evolution equation is therefore given by
\begin{align}
\nonumber\frac{\partial \phi}{\partial t}(x) & = \frac{\partial J}{\partial \phi}(x)\\
\nonumber& =\left\langle -Log_{M^i}(M^e), \frac{1}{|\Omega_{int}|} \left( M^i(\phi) -N(x)N(x)^{T}\right)\delta(\phi) \right\rangle_{M^i}\\
&+ \left\langle -Log_{M^e}(M^i),\frac{1}{|\Omega_{ext}|} \left( N(x)N(x)^{T} - M^e(\phi) \right)\delta(\phi) \right\rangle_{M^e}
\label{eqn:gradascent}
\end{align}
In the next section, we provide necessary implementation details and results on various images.
\section{Experiments}
\label{sec:expts}
The Heaviside and Dirac delta functions are not continuous; instead, we use the following smooth approximations, as given in \cite{chan2001}:
\begin{align}
H_{\epsilon}(\phi) = & \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{\phi}{\epsilon}\right)\\
\delta_{\epsilon}(\phi) = & \frac{d}{d\phi} H_{\epsilon}(\phi)
\end{align}
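These regularized functions are straightforward to implement; differentiating $H_{\epsilon}$ gives $\delta_{\epsilon}(\phi)=\frac{1}{\pi}\,\frac{\epsilon}{\epsilon^{2}+\phi^{2}}$. A minimal sketch:

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    # smooth approximation of the Heaviside function
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def delta_eps(phi, eps=1.0):
    # d/dphi of heaviside_eps: smooth approximation of the Dirac delta
    return (eps / np.pi) / (eps**2 + phi**2)
```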
We add curvature flow as a regularizer to
obtain the following level set evolution equation:
\begin{align}
\nonumber & \frac{\partial \phi}{\partial t}(x) = \\
\nonumber& \left\langle -Log_{M^i}(M^e), \frac{1}{|\Omega_{int}|} \left( M^i(\phi) -N(x)N(x)^{T}\right)\delta(\phi) \right\rangle_{M^i}\\
\nonumber&+ \left\langle -Log_{M^e}(M^i),\frac{1}{|\Omega_{ext}|} \left( N(x)N(x)^{T} - M^e(\phi) \right)\delta(\phi) \right\rangle_{M^e} \\
& + \lambda \cdot div\left(\frac{\nabla \phi}{|\nabla \phi|}\right) \delta(\phi),
\label{eqn:final}
\end{align}
where $\kappa = div\frac{\nabla \phi}{|\nabla \phi|}$ is the curvature of the curve and $\lambda$ is the weight assigned to the curvature term.\\
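The curvature term $div\left(\frac{\nabla\phi}{\vert\nabla\phi\vert}\right)$ can be discretized with central differences. A minimal sketch (the small constant guarding against division by zero in flat regions is our own choice):

```python
import numpy as np

def curvature(phi, h=1.0):
    # div(grad phi / |grad phi|) via central differences
    gy, gx = np.gradient(phi, h)           # d(phi)/dy, d(phi)/dx
    norm = np.sqrt(gx**2 + gy**2) + 1e-8   # avoid division by zero
    nx, ny = gx / norm, gy / norm
    dnx_dx = np.gradient(nx, h, axis=1)
    dny_dy = np.gradient(ny, h, axis=0)
    return dnx_dx + dny_dy
```

For the signed distance function of a circle of radius $r$, this returns approximately $1/r$ on the zero level set, as expected.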
We evolve an initial contour given by the user as long as the cost function given in Equation \eqref{eqn:geocost} increases, and stop the evolution the moment it decreases. We re-initialize the level set function when required, following the algorithm given in \cite{sussman1994}. All images shown in this section are of size $200 \times 200$ pixels. The results shown in this section were obtained on an Intel Core2Duo, 2GB RAM machine using MATLAB. The time required for computing these results was under $10$ minutes, with most of the time spent re-initializing the level set function.\\
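Putting the pieces together, one evaluation of the level set speed in Equation \eqref{eqn:final} (with the curvature term omitted) can be sketched as follows. The helper names, the eigendecomposition-based matrix functions and the plain nested loops are our own illustrative choices, not the authors' MATLAB implementation:

```python
import numpy as np

def _eig_fun(A, f):
    # apply f to the eigenvalues of a symmetric matrix A
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def _log_map(A, B):
    # Riemannian Log map on PD(n): Log_A(B)
    As  = _eig_fun(A, np.sqrt)
    Ais = _eig_fun(A, lambda w: 1.0 / np.sqrt(w))
    return As @ _eig_fun(Ais @ B @ Ais, np.log) @ As

def evolution_speed(I, phi, R=3, eps=1.0):
    # per-pixel speed dphi/dt of the gradient ascent, curvature term omitted
    H, W = I.shape
    r, d = R // 2, R * R
    inside = phi < 0
    feats = np.zeros((H, W, d))
    Mi, Me = np.zeros((d, d)), np.zeros((d, d))
    ni = ne = 0
    for y in range(r, H - r):
        for x in range(r, W - r):
            N = I[y - r:y + r + 1, x - r:x + r + 1].ravel()
            feats[y, x] = N
            O = np.outer(N, N)
            if inside[y, x]: Mi += O; ni += 1
            else:            Me += O; ne += 1
    Mi /= ni; Me /= ne
    # constant matrices G = M^{-1} (-Log_M(other)) M^{-1}, so that the
    # Riemannian inner product <-Log, U>_M reduces to trace(G U)
    Mi_inv, Me_inv = np.linalg.inv(Mi), np.linalg.inv(Me)
    Gi = Mi_inv @ (-_log_map(Mi, Me)) @ Mi_inv
    Ge = Me_inv @ (-_log_map(Me, Mi)) @ Me_inv
    delta = (eps / np.pi) / (eps**2 + phi**2)   # smooth Dirac delta
    v = np.zeros((H, W))
    for y in range(r, H - r):
        for x in range(r, W - r):
            O = np.outer(feats[y, x], feats[y, x])
            v[y, x] = (np.trace(Gi @ (Mi - O)) / ni +
                       np.trace(Ge @ (O - Me)) / ne) * delta[y, x]
    return v
```

An evolution step is then simply `phi += dt * v` (plus the curvature regularizer and periodic re-initialization).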
We begin by validating our algorithm on artificially created images. Results are shown in Figure \ref{fig:arttextures}.
\begin{figure}[htbp]
\centering
\subfigure[]
{
\includegraphics[width=1 in]{texture_result_2.png}
\label{arttexure1}
}
\subfigure[]
{
\includegraphics[width=1 in]{texture_result_1.png}
\label{arttexure2}
}
\subfigure[]
{
\includegraphics[width=1 in]{texture_result_3.png}
\label{arttexture3}
}
\caption{Segmentation results on artificial texture images. As is evident, the change of topology property is preserved by our model. The size of neighborhood for these results is $R = 5$ ($5 \times 5$ pixels) here, i.e. the manifold under consideration is $PD(25)$. Initial contour in yellow, final contour in red.}
\label{fig:arttextures}
\end{figure}
The topology of the evolving contour can change due to the level set implementation. Next, we give results on real texture images, that of a zebra and the Europe night sky image, in Figure \ref{fig:realtextures}. Texture being a neighborhood property rather than a pixel property, the segmented boundary may lie a pixel or two away from the actual boundary.
\begin{figure}[htbp]
\centering
\subfigure[]
{
\includegraphics[width=1.3 in]{texture_result_4.png}
\label{realtexure1}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{europe.png}
\label{realtexure2}
}
\caption{Segmentation results on real texture images. The size of neighborhood for these results is $R = 5$ ($5 \times 5$ pixels) here, i.e. the manifold under consideration is $PD(25)$. Initial contour in yellow, final contour is shown in red.}
\label{fig:realtextures}
\end{figure}
We next compare our results with those generated by the algorithm in \cite{nawal_acm}, on some images from the Berkeley Segmentation dataset \cite{martin2001}, in Figure \ref{fig:comparewithKL}. We have used images from \cite{nawal_acm} to display their results. One can clearly see that our algorithm gives a better texture segmentation. Small noise-like artifacts are in fact regions where texture similar to the object texture is present; for instance, in the tiger image, there are reflections of the tiger stripes in the water that our algorithm is able to successfully segment.
\begin{figure}[htbp]
\centering
\subfigure[]
{
\includegraphics[width=1.3 in]{KL_Result_4.png}
\label{kl1}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{our_Result_4.png}
\label{our1}
}\\
\subfigure[]
{
\includegraphics[width=1.3 in]{KL_Result_1.png}
\label{kl2}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{our_Result_1.png}
\label{our2}
}\\
\subfigure[]
{
\includegraphics[width=1.3 in]{KL_Result_2.png}
\label{kl3}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{our_Result_2.png}
\label{our3}
}\\
\subfigure[]
{
\includegraphics[width=1.3 in]{KL_Result_5.png}
\label{kl4}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{our_Result_5.png}
\label{our4}
}
\caption{Comparing results from \cite{nawal_acm} (left column) with results from our algorithm (right column). The initial contour is shown in yellow, while the final contour is shown in red. The size of neighborhood for these results is $R = 5$ ($5 \times 5$ pixels) here, i.e. the manifold under consideration is $PD(25)$. It can be seen that our results are better in most cases. The small noise-like artifacts are points around which object-like texture is present. For example, in the last image, our algorithm also captures the tiger stripes that appear due to reflection in water.}
\label{fig:comparewithKL}
\end{figure}
With our algorithm, one can also segment usual gray-level images, as explained next. Let the neighborhood size $R$ be $1$, i.e. $N(x) = I(x)$. The covariance matrices $M^i(C), M^e(C)$ defined in Equation \eqref{eqn:mime} will simply be the means of squared intensities in the interior and exterior of the closed contour $C$, respectively. Also note that the covariance matrices now belong to $PD(1)$, i.e., the set of positive real numbers ${\mathbb R}^+$, of course with a metric different from the usual Euclidean one on ${\mathbb R}$. Our algorithm will then find the contour that maximizes the difference (geodesic distance on $PD(1)$) between the two numbers $M^i(C)$ and $M^e(C)$. Typical image segmentation results using this approach and results using the Chan \& Vese active contours \cite{chan2001} are shown in Figure \ref{fig:imgsegment}.
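In this $R=1$ case, $PD(1)$ is the set of positive reals, and the affine-invariant geodesic distance reduces to $d(a,b)=\vert\log b-\log a\vert$. A two-line sketch:

```python
import numpy as np

def pd1_dist(a, b):
    # geodesic distance on PD(1), i.e. on the positive reals
    return abs(np.log(b) - np.log(a))
```

Under this metric the pairs $(1,2)$ and $(2,4)$ are equidistant, unlike under the Euclidean distance on ${\mathbb R}$.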
\begin{figure}[htbp]
\centering
\subfigure[]
{
\includegraphics[width=1.3 in]{bird_cv.png}
}
\subfigure[]
{
\includegraphics[width=1.3 in]{bird_our.png}
}
\caption{Segmentation results on images using Chan \& Vese active contours (left) and our approach(right). The neighborhood size is $R=1$, as explained in the text. Initial contour in yellow, final contour in red.}
\label{fig:imgsegment}
\end{figure}
Of course, with $R=5$, the covariance matrix can capture textures of that scale only. If we have large scale textures, our algorithm will over-segment the image, as can be seen in Figure \ref{fig:largescale}. Simply increasing the neighborhood size $R$ may not solve the problem, as the detected boundary may not be properly localized near the actual texture boundary.
\begin{figure}[htbp]
\centering
\includegraphics[width = 1.3in]{texture_result_7.png}
\caption{The size of neighborhood for these results is $R = 5$ ($5 \times 5$ pixels) here, i.e. the manifold under consideration is $PD(25)$. Our algorithm oversegments a texture image when both (foreground and background) textures under consideration are of larger scale than the neighborhood size used by our algorithm.}
\label{fig:largescale}
\end{figure}
\section{Conclusion}
In this paper, we propose a novel active contour based unsupervised texture segmentation algorithm. The algorithm finds a contour with maximum geodesic distance between its interior and exterior intensity covariance matrices. The results from the previous section support our algorithm. With the least possible neighborhood size $R=1$, the process successfully segments gray-level images.\\
In its current state, the method depends on the size of the neighborhood $R$. Efforts are ongoing to make it independent of $R$, either using a semi-supervised approach or using other multi-scale methods. Instead of intensity covariance matrices, one may also use covariance matrices of well-chosen multi-scale texture features. The method is able to capture even a slight deviation in a texture. This may be an advantage in some cases, but generally one may want the algorithm to be more invariant to small deviations.
\bibliographystyle{plain}
\section{Introduction}
A (block) design is a pair $(\mathcal{P}, \mathcal{B})$ consisting of a finite set $\mathcal{P}$ of points and a finite collection $\mathcal{B}$ of nonempty subsets of $\mathcal{P}$ called blocks. Designs serve as a fundamental tool to investigate combinatorial objects. Designs have also attracted many researchers from different fields for solving application problems including binary sequences with $2$-level autocorrelation, optical orthogonal codes, low density parity check codes, synchronization, radar, coded aperture imaging, optical image alignment, distributed storage systems and cryptographic functions with high nonlinearity \cite{chung1989,delsarte,dillon1974,ding2015,olmez2}.
One of the main construction methods for designs is the difference set method. This method has served as a powerful tool to construct symmetric designs, error correcting codes, graphs and cryptographic functions \cite{assmus1992,bernasconi2001,bjl,brouwer2012,buratti2011,dingbook,dingbook2,pott,pott2011}. This paper focuses on the links between designs and a family of functions from cryptography known as plateaued functions. In particular, we investigate the connections between partial geometric difference sets and graphs of plateaued functions.
A function from the field $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_p$ is called a $p$-ary function. If $p=2$ then the function is simply called Boolean. $p$-ary functions with various characteristics have been an active research subject in cryptography. Bent functions and plateaued functions are two well-known families which have prominent properties in this field \cite{carlet2003,CCC,Carlet,Carlet1,Carlet2,CarM,Mes1,Mes2}. These two families of functions can be characterized by their Walsh spectrum. A function $f$ from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_p$ is called an $s$-plateaued function if the Walsh transform satisfies $|\widehat{f}(\mu)|\in \{0,p^{\frac{n+s}{2}}\}$ for each $\mu \in\mathbb{F}_{p^n}$. A $0$-plateaued function $f$ is called bent and its Walsh transform satisfies $|\widehat{f}(\mu)|= p^{\frac{n}{2}}$ for each $\mu \in\mathbb{F}_{p^n}$. Plateaued functions and bent functions play a significant role in cryptography, coding theory and sequences for communications \cite{Carlet1, Carlet2, CarM}.
Boolean bent functions were introduced by Rothaus in \cite{rot1976}. These functions have optimal nonlinearity and can only exist when $n$ is even. In \cite{dillon1974}, it is shown that the existence of Boolean bent functions is equivalent to the existence of a family of difference sets known as Hadamard difference sets. Boolean plateaued functions are introduced by Zheng and Zhang as a generalization of bent functions in \cite{ZZ}. Boolean plateaued functions have attracted the attention of researchers since these functions provide some suitable candidates that can be used in cryptosystems. A difference set characterization of these functions was recently provided by the second author. In \cite{olmez2}, it is shown that the existence of Boolean plateaued functions is equivalent to the existence of partial geometric difference sets.
In arbitrary characteristic, the graph of $f:\mathbb{F}_{p^n}\rightarrow \mathbb{F}_{p^m}$, $G_{f}=\{(x,f(x)):x\in \mathbb{F}_{p^{n}}\}$, plays an important role in the relation to difference sets \cite{pott1, tan2010}. For instance, the graph of a $p$-ary bent function can be recognized as a relative difference set. In general, a characterization of plateaued functions in terms of difference sets is not known. A partial result in this direction is provided in \cite{cmt} for partially bent functions, which form a subfamily of plateaued functions.
There are recent results concerning explicit characterizations of plateaued functions in odd characteristic through their second order derivatives in \cite{CMOS,mos1,mos2}.
In this paper, we first investigate the link between the graph of a plateaued function and partial geometric difference sets. We also provide several characterizations of plateaued functions in terms of associated difference set properties. Using these characterizations, we provide a family of vectorial plateaued functions which has an interesting connection to three-valued cross-correlation functions.
The organization of the paper is as follows. In Section 2, we provide preliminary results concerning partial geometric difference sets. In Section 3, we mainly provide the links between vectorial plateaued functions and partial geometric difference sets. We also provide a construction as a result of our characterizations.
In Section 4, we focus on $p$-ary plateaued functions. We provide several characterizations which are obtained from Butson-Hadamard-like matrices. This section also provides results concerning partially bent functions and partial geometric designs.
\section{Preliminaries}
Let $G$ be a group of order $v$ and let $S \subset G$ be a $k$-subset. For each $g \in G$, we define \[\delta(g):=|\{(s,t)\in S \times S \colon g=st^{-1}\}|.\]
Next we define the difference sets of our interest.
\begin{defn} Let $v, k$ be positive integers with $v>k>2$.
Let $G$ be a group of order $v$. A $k$-subset $S$ of $G$ is called a partial geometric difference set (PGDS) in $G$ with parameters $(v, k; \alpha,\beta)$ if there exist constants $\alpha$ and $\beta$ such that, for each $x\in G$,
\[\sum\limits_{y\in S}\delta(xy^{-1})=\left \{\begin{array}{ll} \alpha & \mbox{if } x\notin S,\\
\beta & \mbox{if } x\in S.\\ \end{array} \right .\]
\end{defn}
There are two subclasses of PGDS namely difference sets and semiregular relative difference sets which have deep connections with coding theory, and cryptography \cite{assmus1992, dingbook,dingbook2}:
\begin{itemize}
\item A $(v,k,\lambda)$-difference set (DS) in a finite group $G$ of order $v$ is a $k$-subset $D$ with the property that $\delta(g)=\lambda$ for all nonzero elements of $G$.
\item A $(m,u,k,\lambda)$-relative difference set (RDS) in a finite group $G$ of order $m$ relative to a (forbidden) subgroup $U$ is a $k$-subset $R$ with the property that
\[ \delta(g)=\begin{cases}
k & g=1_G \\
\lambda & g\in G \backslash U\\
0 & otherwise \\
\end{cases}
\]
The RDS is called {\em semiregular} if $m = k = u \lambda$.
\end{itemize}
Clearly a $(v,k,\lambda)$-DS is a $(v,k; k\lambda, n+k\lambda)$-PGDS, where $n=k-\lambda$ is the order of the design, and an $(m,u,k,\lambda)$ semiregular RDS is a $(mu,k; \lambda(k-1), k(\lambda+1)-\lambda)$-PGDS \cite{olmez1}.
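As a sanity check of the DS-to-PGDS parameter map, the following Python sketch (an illustration, not part of the original text) verifies that the quadratic residues modulo $7$, a classical $(7,3,1)$-difference set, satisfy the PGDS condition with $(\alpha,\beta)=(k\lambda,\, n+k\lambda)=(3,5)$.

```python
from itertools import product

def delta(S, g, v):
    # delta(g) = #{(s, t) in S x S : g = s - t (mod v)}
    return sum(1 for s, t in product(S, S) if (s - t) % v == g)

v = 7
S = {1, 2, 4}   # quadratic residues mod 7, a classical (7,3,1)-difference set

# PGDS test: sum_{y in S} delta(x - y) should be alpha off S and beta on S.
sums = {x: sum(delta(S, (x - y) % v, v) for y in S) for x in range(v)}
alpha = {sums[x] for x in range(v) if x not in S}
beta = {sums[x] for x in S}
# (v, k, lambda) = (7, 3, 1): k*lambda = 3 and n + k*lambda = (k - lambda) + 3 = 5
print(alpha, beta)   # expect {3} {5}
```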
Group characters are powerful objects to investigate various types of difference sets.
A {\em character} $\chi$ of an abelian group $G$ is a homomorphism from $G$ to the multiplicative group of the complex numbers. The character $\chi_0$ defined by $\chi_0(g) = 1$ for all $g \in G$ is called the {\em principal character}; all other characters are called {\em nonprincipal}. We define the character sum of a subset $S$ of an abelian group $G$ as $\chi(S):= \sum_{s \in S} \chi(s)$.
\begin{thm}[Theorem 2.12 \cite{olmez1}]
\label{chartheoryPGDS}
A $k$-subset $S$ of an abelian group $G$ is a partial geometric difference set in $G$ with parameters $(v,k;\alpha,\beta)$ if and only if $|\chi (S)|=\sqrt{\beta-\alpha}$ or $\chi (S)=0$ for every non-principal character $\chi$ of $G$.
\end{thm}
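The character-sum criterion can be checked numerically on a small example; the following Python sketch (illustrative only) evaluates $|\chi(S)|$ for the $(7,3,1)$-difference set $S=\{1,2,4\}$ in $\mathbb{Z}_7$, for which $\beta-\alpha=2$, so every nonprincipal character sum should have absolute value $\sqrt{2}$.

```python
import cmath

v = 7
S = {1, 2, 4}   # a (7,3,1)-difference set in Z_7, hence a (7,3;3,5)-PGDS

def chi(j, S):
    # additive character chi_j(g) = exp(2*pi*i*j*g/v); chi_j(S) = sum over S
    return sum(cmath.exp(2j * cmath.pi * j * s / v) for s in S)

# Theorem 2.12: |chi(S)| must be sqrt(beta - alpha) = sqrt(2) or 0
# for every nonprincipal character chi_j, j = 1, ..., 6.
mags = [abs(chi(j, S)) for j in range(1, v)]
print([round(m, 6) for m in mags])   # expect six copies of sqrt(2) ~ 1.414214
```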
For instance, let $f$ be a $p$-ary bent function from the field $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$. The set $G_f=\{(x,f(x)): x\in \mathbb{F}_{p^{n}} \}$ is called the graph of $f$. Any non-principal character $\chi$ of the additive group of $\mathbb{F}_{p^{n}}\times \mathbb{F}_p$ satisfies $|\chi(G_f)|^2=p^n$ or $0$. This observation yields that $G_f$ is a $(p^{n+1},p^n; p^{2n-1}-p^{n-1},p^{2n-1}-p^{n-1}+p^n)$-PGDS in $H=\mathbb{F}_{p^{n}}\times\mathbb{F}_{p}$.
Walsh transform provides interesting connections between $p$-ary functions and difference sets. For a prime $p$, we define a primitive complex $p$-th root of unity $\zeta_p=e^{\frac{2\pi i}{p}}$. Let $f$ be a function from the field $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_p$ and let $F(x)=\zeta_p^{f(x)}$. The Walsh transform of $f$ is defined as follows:
$$\displaystyle \widehat{f}(\mu)=\sum_{x\in \mathbb{F}_{p^{n}}} \zeta_p^{f(x)-Tr_n(\mu x)},~~~\mu \in \mathbb{F}_{p^{n}}$$ where $$Tr_n(z)=\sum_{i=0}^{n-1}z^{p^i}.$$
The convolution of $F$ and $G$ is defined by
$$(F * G)(a)=\sum_{x\in \mathbb{F}_{p^n} } F(x)G(x-a).$$ We will also take advantage of the convolution theorem of Fourier analysis. The Fourier transform of a
convolution of two functions is
$$\widehat{F * G}=\widehat{F}\cdot \widehat{G}.$$
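As a quick illustration of the Walsh-spectrum definitions in the Boolean case $p=2$ (where $\zeta_2=-1$), the following Python sketch computes the spectrum of the inner-product bent function on $\mathbb{F}_2^4$; this example is ours, not the paper's.

```python
from itertools import product

n = 4
pts = list(product([0, 1], repeat=n))
dot = lambda a, b: sum(x * y for x, y in zip(a, b)) % 2
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])   # the inner-product bent function on F_2^4

# For p = 2, zeta_2 = -1, so the Walsh transform is a signed sum over F_2^4.
W = {mu: sum((-1) ** (f(x) ^ dot(mu, x)) for x in pts) for mu in pts}
print(sorted(set(abs(w) for w in W.values())))   # expect [4] = [2^{(n+s)/2}] with s = 0
```

Every Walsh value has absolute value $2^{n/2}=4$, confirming that $f$ is $0$-plateaued (bent).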
\section{Results on Vectorial Functions}
Let $F$ be a vectorial function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p^{m}}$. For every $b\in \mathbb{F}^*_{p^{m}}$, the component function $F_b$ of $F$ from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_p$ is defined as $F_{b}(x)=Tr_m(bF(x))$. A vectorial function is called \textit{vectorial plateaued} if all its nonzero component functions are plateaued. If the nonzero component functions of a vectorial plateaued function are $s$-plateaued for the same $0 \leq s \leq n$ then $F$ is called \textit{$s$-plateaued}, following the terminology in \cite{mos1}.
The set $G_{F}=\{(x,F(x)):x\in \mathbb{F}_{p^{n}}\}$ is called the graph of $F$. Next we will characterize vectorial functions by their graphs.
\begin{thm} \label{plateaued-PGDS}
Let $F:\mathbb{F}_{p^{n}} \rightarrow \mathbb{F}_{p^{m}}$ be a vectorial function. Then the graph of $F$ is a $(p^{n+m},p^n; \alpha,\beta)$ partial geometric difference set in $H=\mathbb{F}_{p^{n}}\times\mathbb{F}_{p^m}$ satisfying $\beta-\alpha=\theta$ if and only if $|\widehat{F_b}(a)| \in \{0, \sqrt{\theta}\}$ for all nonzero $b \in \mathbb{F}_{p^{m}}$ and all $a \in \mathbb{F}_{p^{n}}$. In this case $\theta=p^{n+s}$ for some $0 \leq s \leq n$, and in particular $\alpha=p^{2n-m}-p^{n+s-m}$ and $\beta=p^{n+s}+p^{2n-m}-p^{n+s-m}$.
\end{thm}
\begin{proof}
A non-principal character of $\mathbb{F}_{p^{n}}\times \mathbb{F}_{p^{m}}$ can be written as $\chi_{(a,b)}(x,y)=\zeta_p^{Tr_{n}(ax)+Tr_m(by)}$ for a nonzero $(a,b) \in \mathbb{F}_{p^{n}}\times \mathbb{F}_{p^{m}}$.
For any nonzero $b \in \mathbb{F}_{p^{m}}$, the Walsh transform of $F_b$ is
\begin{align*}
\widehat{F_b}(a)=&\sum_{x\in \mathbb{F}_{p^{n}}}\zeta_p^{-Tr_{n}(ax)+Tr_m(bF(x))}=\sum_{x\in \mathbb{F}_{p^{n}}}\chi_{(-a,b)}(x,F(x)) \\
=&\chi_{(-a,b)}(G_F)
\end{align*}
for any $a \in \mathbb{F}_{p^{n}}$.
Therefore $|\widehat{F_b}(a)| \in \{0, \sqrt{\theta}\}$ for all nonzero $b \in \mathbb{F}_{p^{m}}$ if and only if $G_{F}=\{(x,F(x)):x\in \mathbb{F}_{p^{n}}\}$ is a $(p^{n+m},p^n;\alpha,\beta)$ partial geometric difference set satisfying $\beta-\alpha=\theta$. Using the well-known \textit{Parseval identity}, one immediately sees that $\theta=p^{n+s}$ and $|\{a\in \mathbb{F}_{p^{n}}:\widehat{F_b}(a)\neq 0\}|=p^{n-s}$ for some $0 \leq s \leq n$.
The parameters of a partial geometric difference set satisfy the relation $k^3=(\beta-\alpha)k+\alpha v$ (see \cite{olmez1}), and hence with $v=p^{n+m}$ and $k=p^n$ we have
\[p^{3n}=(\beta-\alpha)p^n+\alpha p^{n+m}=p^{n+s}p^n+\alpha p^{n+m}.\]
Then we see that $\alpha=p^{2n-m}-p^{n+s-m}$ and $\beta=p^{n+s}+p^{2n-m}-p^{n+s-m}$.
\end{proof}
\begin{remark}
Theorem \ref{plateaued-PGDS} implies that a vectorial function $F:\mathbb{F}_{p^{n}} \rightarrow \mathbb{F}_{p^{m}}$ is s-plateaued if and only if its graph is a partial geometric difference set with the parameters $(p^{n+m},p^n;p^{2n-m}-p^{n+s-m},p^{n+s}+p^{2n-m}-p^{n+s-m})$. Note that Theorem \ref{plateaued-PGDS} is also valid for $m=1$, i.e. the case of p-ary functions. Since for an $s$-plateaued $p$-ary function $f:\mathbb{F}_{p^{n}}\rightarrow \mathbb{F}_{p}$, the function $bf(x)$ is $s$-plateaued for each $b\in \mathbb{F}_{p}^*$, we can consider $f$ as a vectorial $s$-plateaued function.
The case $s=0$ is the case of vectorial bent functions and if we additionally have $m=n$, these vectorial functions are known as \textit{planar} functions \cite{ColM}.
\end{remark}
Next we will investigate links between vectorial $s-$plateaued functions and partial geometric difference sets.
\begin{prop} \label{diff-plateauedprop}
Let $F:\mathbb{F}_{p^{n}}\rightarrow \mathbb{F}_{p^{m}}$ be a vectorial function. Then $F$ is $s$-plateaued if and only if
\[\sum\limits_{a \in \mathbb{F}_{p^{n}}}|\{s \in \mathbb{F}_{p^{n}}\colon y=F(s+x-a)-F(s)+F(a)\}|=\left \{\begin{array}{ll} \alpha & \mbox{if } y\neq F(x),\\
\beta & \mbox{if } y=F(x)\\ \end{array} \right . \]
for some constants $\alpha$ and $\beta$.
\end{prop}
\begin{proof}
\[\begin{split}
\delta((x,y))&=|\{((s_1,t_1),(s_2,t_2))\in G_F \times G_F \colon x=s_1-s_2, y=t_1-t_2=F(s_1)-F(s_2)\}|\\
&=|\{s_2 \in \mathbb{F}_{p^{n}}\colon y=F(s_2+x)-F(s_2)\}|\\
\end{split}\]
So the criteria for PGDS is given by
\[\sum\limits_{a \in \mathbb{F}_{p^{n}}}\delta((x-a,y-F(a)))=\left \{\begin{array}{ll} \alpha & \mbox{if } y\neq F(x),\\
\beta & \mbox{if } y=F(x)\\ \end{array} \right .\]
and hence
\[\sum\limits_{a \in \mathbb{F}_{p^{n}}}|\{s \in \mathbb{F}_{p^{n}}\colon y=F(s+x-a)-F(s)+F(a)\}|=\left \{\begin{array}{ll} \alpha & \mbox{if } y\neq F(x),\\
\beta & \mbox{if } y=F(x)\\ \end{array} \right .\]
\end{proof}
The above result can be associated with the derivative of an $s-$plateaued function. The derivative of a vectorial function is defined by
$$D_aF(x)=F(x+a)-F(x).$$
To see the connection let us first replace $y$ in the expression
\[ y=F(s+x-a)-F(s)+F(a)\]
by $F(x)-c$ for $c \in \mathbb{F}_{p^m}$. Hence we have
\[c=F(s)-F(a)-F(s+x-a)+F(x)=D_{s-a}F(a)-D_{s-a}F(x).\]
This observation yields
\begin{align*}
&\sum\limits_{a \in \mathbb{F}_{p^{n}}}|\{s \in \mathbb{F}_{p^{n}}\colon y=F(s+x-a)-F(s)+F(a)\}|\\
=&\sum\limits_{a \in \mathbb{F}_{p^{n}}}|\{s \in \mathbb{F}_{p^{n}}\colon D_{s-a}F(a)-D_{s-a}F(x)=c\}|\\
=&\sum\limits_{a \in \mathbb{F}_{p^{n}}}|\{t \in \mathbb{F}_{p^{n}}\colon D_tF(a)-D_tF(x)=c\}|\\
=&|\{(t,a) \in \mathbb{F}_{p^{n}}\times \mathbb{F}_{p^{n}}\colon D_tF(a)-D_tF(x)=c\}|\\
=&N_F(c, x)
\end{align*}
where $N_F(c, x)$ represents the number of pairs $(t,a) \in \mathbb{F}_{p^{n}}\times \mathbb{F}_{p^{n}}$ such that $$D_tF(a)-D_tF(x)=c$$ as in Section 2 of \cite{mos1}. Thus we will have the following result concerning the derivative and PGDS parameters.
\begin{thm} \label{diff2-plateaued}
Let $F$ be a function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p^m}$. Then the set $G_F$ is a PGDS with parameters $(p^{n+m},p^n ;\alpha, \beta)$ if and only if
\[N_F(c, x)= \begin{cases}
\alpha, & c\ne 0 \\
\beta, & c = 0 \\
\end{cases}
\]
for all $x \in \mathbb{F}_{p^n}$ and some constants $\alpha$ and $\beta$.
\end{thm}
\begin{remark}
Using Theorem \ref{diff2-plateaued}, Proposition \ref{diff-plateauedprop} and Theorem \ref{plateaued-PGDS}, we are able to prove Theorem 8i. in \cite{mos1} with a different approach using the properties of partial geometric difference sets. This gives an interesting relation between the parameters of a PGDS and the second order derivatives of (vectorial) plateaued functions.
\end{remark}
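Theorem \ref{diff2-plateaued} can be verified on the smallest planar example: for the $0$-plateaued function $F(x)=x^2$ over $\mathbb{F}_p$ (with $n=m=1$, $s=0$) the counts $N_F(c,x)$ should equal $\beta=2p-1$ when $c=0$ and $\alpha=p-1$ otherwise. The following Python sketch (illustrative only) checks this for $p=5$.

```python
p = 5                                 # any odd prime works here
F = lambda x: (x * x) % p             # F(x) = x^2 is planar over F_p, i.e. 0-plateaued
DF = lambda t, a: (F((a + t) % p) - F(a)) % p   # derivative D_t F(a) = F(a+t) - F(a)

def N(c, x):
    # N_F(c, x): number of pairs (t, a) with D_t F(a) - D_t F(x) = c
    return sum(1 for t in range(p) for a in range(p)
               if (DF(t, a) - DF(t, x)) % p == c)

# With n = m = 1 and s = 0 the theorem predicts beta = 2p - 1 for c = 0
# and alpha = p - 1 for c != 0, independent of x.
print(sorted({N(0, x) for x in range(p)}))                      # expect [9]
print(sorted({N(c, x) for c in range(1, p) for x in range(p)})) # expect [4]
```

Indeed, $D_tF(a)-D_tF(x)=2t(a-x)$ for $F(x)=x^2$, which makes the two counts easy to confirm by hand as well.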
\subsection{A family of vectorial $s-$plateaued functions}
In this section, we will discuss the link between vectorial $s$-plateaued functions $F(x)=x^d$ from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p^{n}}$ and the cross-correlation function between two $p$-ary $m$-sequences that differ by a decimation $d$. An $m$-sequence and its decimation are defined by $u(t)=Tr_n(\sigma^t)$ and $v(t)=u(dt)$, where $\sigma$ is a primitive element of the finite field. The cross-correlation between the sequences $u$ and $v$ is defined by
$$\theta(\tau)=\sum_{t=0}^{p^n-2}\zeta_p^{u(t+\tau)-v(t)}.$$ It can be shown that
$$\theta(\tau)=-1+\sum_{x \in \mathbb{F}_{p^{n}} }\zeta_p^{Tr_n(ax+x^d)}$$ where $a=-\sigma^\tau$.
Therefore for $F_1(x)=Tr(x^d)$, we have
\begin{equation}\label{CC-Walsh}
\theta(\tau)=-1+\widehat{F_1}(-a).
\end{equation}
\begin{thm}\label{cross-plateaued}
Let $F(x)=x^{d}$ be a vectorial function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p^{n}}$ with $gcd(d,p^n-1)=1$. If the cross-correlation of the $p$-ary $m$-sequences that differ by decimation $d$ takes three values, namely $-1, -1+p^{\frac{n+s}{2}}$ and $-1-p^{\frac{n+s}{2}}$, then $F$ is a vectorial $s$-plateaued function.
\end{thm}
\begin{proof}
For each $b\in \mathbb{F}_{p^{n}}^*$, we denote by $F_b(x)$ the function $F_b(x)=Tr(bF(x))=Tr(bx^d)$.
The Walsh transform
\[\widehat{F_b}(0)=\sum_{x \in \mathbb{F}_{p^{n}}} \zeta_p^{Tr_n(bx^d)}=0\]
since $x^d$ is a permutation.
For each $a\in \mathbb{F}_{p^{n}}^*$, the Walsh transform of $F_b(x)$
\begin{align*}
\widehat{F_b}(a)&=\sum_{x \in \mathbb{F}_{p^{n}}} \zeta_p^{Tr_n(bx^d-ax)}\\
&=\sum_{x \in\mathbb{F}_{p^{n}} }\zeta_p^{Tr_n(c^dx^d-\frac{a}{c}cx)}\\
&=\sum_{y \in\mathbb{F}_{p^{n}} }\zeta_p^{Tr_n(y^d-\mu y)}\\
&=\widehat{F_1}(\mu)
\end{align*}
where $b=c^d$ and $\mu=a/c$. Note that any $b\in \mathbb{F}_{p^{n}}^*$ can be written as $b=c^d$ for some $c \in \mathbb{F}_{p^{n}}^*$.
Then using Equation (\ref{CC-Walsh}), we immediately obtain the result that $F(x)$ is vectorial s-plateaued.
\end{proof}
\begin{lemma}
Let $p$ be an odd prime and $n,k$ be positive integers with $gcd(n,k)=s$. If $ n/s$ is odd then
$\gcd{(p^n-1,d)}=1$ for $d=(p^{2k}+1)/2$ and $d=p^{2k}-p^k+1$.
\end{lemma}
\begin{remark}
There are only finitely many known functions with a three-valued cross-correlation. Trachtenberg proved the following in his thesis \cite{trachtenberg}. Let $n$ be an odd integer and $k$ be an integer such that $gcd(n,k)=s$. Then for each of the decimations $d=\frac{p^{2k}+1}{2}$ and $d=p^{2k}-p^k+1$ the cross-correlation function $\theta_d(\tau)$ takes the values $-1,-1\pm p^{\frac{n+s}{2}}$.
This result is generalized later by Helleseth in Theorem 4.9 in \cite{helleseth}. He showed that if $gcd(n,k)=s$ and $ n/s$ is odd then for the same decimations, $\theta_d(\tau)$ has the values $-1,-1\pm p^{\frac{n+s}{2}}$. Our result implies that the corresponding vectorial functions are $s-$plateaued.
\end{remark}
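The smallest instance of this family, $p=3$, $n=3$, $k=1$, $d=(3^{2}+1)/2=5$ (so $s=\gcd(n,k)=1$), can be checked by brute force. The following Python sketch (an illustration with a hand-rolled model of $\mathbb{F}_{27}=\mathbb{F}_3[t]/(t^3-t-1)$) computes all component Walsh spectra of $F(x)=x^5$ and confirms that their magnitudes lie in $\{0,3^{(n+s)/2}\}=\{0,9\}$, i.e. $F$ is vectorial $1$-plateaued.

```python
import cmath
from itertools import product

p = 3
# Model F_27 as F_3[t]/(t^3 - t - 1); an element is a coefficient triple (c0, c1, c2).
def mul(u, v):
    d = [0] * 5
    for i in range(3):
        for j in range(3):
            d[i + j] += u[i] * v[j]
    # reduce using t^3 = t + 1 and t^4 = t^2 + t
    return ((d[0] + d[3]) % p, (d[1] + d[3] + d[4]) % p, (d[2] + d[4]) % p)

def power(u, e):
    r = (1, 0, 0)
    for _ in range(e):
        r = mul(r, u)
    return r

def tr(u):
    # absolute trace Tr_3(z) = z + z^3 + z^9, which lies in F_3
    z3, z9 = power(u, 3), power(u, 9)
    s = tuple((u[i] + z3[i] + z9[i]) % p for i in range(3))
    assert s[1] == 0 and s[2] == 0
    return s[0]

field = [c for c in product(range(p), repeat=3)]
trs = {x: tr(x) for x in field}
d = 5                        # d = (3^{2k} + 1)/2 with k = 1; gcd(d, 26) = 1
xd = {x: power(x, d) for x in field}

zeta = cmath.exp(2j * cmath.pi / p)
mags = set()
for b in field:
    if b == (0, 0, 0):
        continue
    for a in field:
        w = sum(zeta ** ((trs[mul(b, xd[x])] - trs[mul(a, x)]) % p) for x in field)
        mags.add(round(abs(w), 6))
print(sorted(mags))          # expect [0.0, 9.0], i.e. {0, 3^{(n+s)/2}} with n = 3, s = 1
```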
Pott et al. provided a classification of weakly regular bent functions via partial difference sets \cite{tan2010}. In this classification, the authors showed that a bent function from $\mathbb{F}_3^n$ to $\mathbb{F}_3$ with $n$ even satisfying $f(-x) = f(x)$ and $f(0)=0$ is weakly regular if and only if $D_1=\{x: f(x)=1\}$ and $D_2=\{x: f(x)=2\}$ are partial difference sets. Later in \cite{olmez3}, the author provided a similar classification for weakly regular bent functions from $\mathbb{F}_3^n$ to $\mathbb{F}_3$ when $n$ is odd via partial geometric difference sets.
Next we will also show that our vectorial functions have a similar classification in the case of $p=3$ and $d=\frac{3^{2k}+1}{2}$.
\begin{thm}\label{partitioned}
Let $n \ge 3$ be an integer and $d=\frac{3^{2k}+1}{2}$ with $\gcd{(n,k)}=s$ and $n/s$ is odd. For $i=0,1,2$ the sets $D_i=\{x: F(x)=Tr_n(x^d)=i\}$ are $(3^n, 3^{n-1}, 3^{2n-3}-3^{n-2}, 3^{n-1}+3^{2n-3}-3^{n-2})$ partial geometric difference sets in the additive group of $\mathbb{F}_{3^{n}}$.
\end{thm}
\begin{proof}
For each $a\in \F_{3^n}^*$, suppose that $\chi_a(D_1)=x_a+y_a \zeta_3$ with $x_a,y_a\in \mathbb{R}$. When calculating $\chi_a(D_1)$, we sum powers of $\zeta_3$ as many times as the number of elements in $D_1$ for which $Tr_{n}(ax)=0,1$ or $2$. In other words, if we set
\[D_{1,i}=\{x\in D_1:Tr_n(ax)=i\}, \quad i=0,1,2\]
we have
\[\chi_a(D_1)=|D_{1,0}|+|D_{1,1}|\zeta_3+|D_{1,2}|\zeta_3^2=(|D_{1,0}|-|D_{1,2}|)+(|D_{1,1}|-|D_{1,2}|)\zeta_3\]
and hence
\[x_a=|D_{1,0}|-|D_{1,2}|,y_a=|D_{1,1}|-|D_{1,2}|\] are both in $\mathbb{Z}$.
Also note that for any $x \in \mathbb{F}_{3^n}$,
\[F_1(-x)=Tr((-x)^d)=Tr(-x^d)=-F_1(x)\]
and that gives us
\[x\in D_1 \Longleftrightarrow 2x \in D_2.\]
As a consequence,
$\chi_a(D_2)=\overline{\chi_a(D_1)}=x_a+y_a \zeta_3^2$.
Since the $D_i$'s form a partition of the additive group of $\mathbb{F}_{3^{n}}$,
$$\chi_a(D_0)+ \chi_a(D_1)+\chi_a(D_2)=0,$$ and we obtain
\[\chi_a(D_0)=y_a-2x_a\]
Consider the Walsh transform values $\widehat{F_1}(a), \widehat{F_1}(-a)$ of $F_1(x)=Tr_n(x^d)$.
\begin{align*}
\widehat{F_1}(-a)&=\sum_{x\in \mathbb{F}_{3^{n}}}\zeta_3^{Tr_{n}(ax+x^d)}\\
&=\chi_a(D_0)+\zeta_3\chi_a(D_1)+\zeta_3^2\chi_a(D_2)\\
&=y_a-2x_a+\zeta_3(x_a+y_a \zeta_3)+\zeta_3^2(x_a+y_a \zeta_3^2)\\
&=y_a-2x_a+\zeta_3(x_a+y_a)+\zeta_3^2(x_a+y_a)=-3x_a\\
\end{align*}
and
\begin{align*}
\widehat{F_1}(a)&=\sum_{x\in \mathbb{F}_{3^{n}}}\zeta_3^{Tr_{n}(-ax+x^d)}\\
&=\chi_{-a}(D_0)+\zeta_3\chi_{-a}(D_1)+\zeta_3^2\chi_{-a}(D_2)\\
&=\overline{\chi_a(D_0)}+\zeta_3\overline{\chi_a(D_1)}+\zeta_3^2\overline{\chi_a(D_2)}\\
&=\overline{\chi_a(D_0)}+\zeta_3\chi_a(D_2)+\zeta_3^2\chi_a(D_1)\\
&=y_a-2x_a+\zeta_3(x_a+y_a \zeta_3^2)+\zeta_3^2(x_a+y_a \zeta_3)\\
&=3(y_a-x_a).
\end{align*}
Since $\widehat{F_1}(a),\widehat{F_1}(-a)\in \{0,\pm3^{(n+s)/2}\}$,
\[(x_a,y_a) \in \{(0,0),(0,C),(0,-C),(C,C),(C,0),(C,2C),(-C,-C),(-C,-2C),(-C,0)\}\]
where $C=3^{(n+s-2)/2}$.
In the following table, the values of $\chi_a(D_0),\chi_a(D_1),\chi_a(D_2), \widehat{F_1}(a),\widehat{F_1}(-a)$ corresponding to each possible $(x_a,y_a)$ tuple are given.
\begin{table}[h]\footnotesize
\centering
\begin{tabular} { | c | c | c | c | c | c | c | c | c | c |}
\hline
& $(0,0)$ & $(0,C)$ & $(0,-C)$ & $(C,C)$ & $(C,0)$ & $(C,2C)$ & $(-C,-C)$ & $(-C,-2C)$ & $(-C,0)$\\
\hline
$\chi_a(D_0)$ & $0$ & $C$ & $-C$ & $-C$ & $-2C$ & $0$ & $C$ & $0$ & $2C$\\
\hline
$\chi_a(D_1)$ & $0$ & $C\zeta_3$ & $-C\zeta_3$ & $-C\zeta_3^2$ & $C$ & $C\zeta_3-C\zeta_3^2$ & $C\zeta_3^2$ & $C\zeta_3^2-C\zeta_3$ & $-C$\\
\hline
$\chi_a(D_2)$ & $0$ & $C\zeta_3^2$ & $-C\zeta_3^2$ & $-C\zeta_3$ & $C$ & $C\zeta_3^2-C\zeta_3$ & $C\zeta_3$ & $C\zeta_3-C\zeta_3^2$ & $-C$\\
\hline
$\widehat{F_1}(a)$ & $0$ & $0$ & $0$ & $-3C$ & $-3C$ & $-3C$ & $3C$ & $3C$ & $3C$\\
\hline
$\widehat{F_1}(-a)$ & $0$ & $-3C$ & $3C$ & $0$ & $-3C$ & $3C$ & $0$ & $-3C$ & $3C$\\
\hline
\end{tabular}
\end{table}
The tuples $(C,2C)$ and $(-C,-2C)$ are impossible since
\[|\chi_a(D_1)|=\sqrt{x_a^2-x_ay_a+y_a^2}=\sqrt{3C^2}=\sqrt{3^{n+s-1}}\]
which contradicts the fact that $|\chi_a(D_1)|\in \mathbb{Z}$ since $n+s-1$ is odd.
Therefore the sets $D_1,D_2$ are PGDS.
Next we will show that the tuples $(C,0)$ and $(-C,0)$ are also impossible.
First note that $\widehat{F_1}(0)=0$ since the function $Tr(x^d)$ is balanced. For a non-zero element $a$ the following holds
\[
\begin{split}
\widehat{F_1}(a)\widehat{F_1}(-a) =& \left(\chi_a(D_0)+\zeta_3\chi_a(D_2)+\zeta_3^2\chi_a(D_1)\right)\left(\chi_a(D_0)+\zeta_3\chi_a(D_1)+\zeta_3^2\chi_a(D_2)\right)\\
=&\chi_a(D_0)^2+\chi_a(D_1)^2+\chi_a(D_2)^2\\
&+(\zeta_3+\zeta_3^2)\left(\chi_a(D_0)\chi_a(D_1)+\chi_a(D_0)\chi_a(D_2)+\chi_a(D_1)\chi_a(D_2)\right)\\
=&3\left( \chi_a(D_0)^2+ \chi_a(D_0)\chi_a(D_1)+ \chi_a(D_1)^2\right)\\
=&3\left( \chi_a(D_0)^2- \chi_a(D_1)\chi_a(D_2)\right),\\
\end{split}
\]
where we used $\overline{\chi_a(D_0)}=\chi_a(D_0)$, $\overline{\chi_a(D_1)}=\chi_a(D_2)$, $\zeta_3+\zeta_3^2=-1$ and $\chi_a(D_0)+\chi_a(D_1)+\chi_a(D_2)=0$.
We need the following auxiliary lemma.
\begin{lemma} Let $S$ be a $k$-subset of an abelian group $G$ of order $v$, and let $SS^{-1}$ be computed in the group ring $\mathbb{Z}G$. Then
$$\displaystyle \sum_{i=0}^{v-1}\chi_i(SS^{-1})=vk,$$ where the sum runs over all characters of $G$.
\end{lemma}
\begin{proof} In the group ring $\mathbb{Z}G$ the product $SS^{-1}=k\cdot e+\sum_{g\in G\setminus\{e\}}a_g\cdot g$ where $a_g \in \mathbb{Z}$. Then
\[
\begin{split}
\displaystyle \sum_{i=0}^{v-1}\chi_i(SS^{-1}) &= \sum_{i=0}^{v-1}\chi_i(ke+\sum_{g\in G\setminus\{e\}}a_gg)\\
&= \sum_{i=0}^{v-1}k\cdot \chi_i(e)+\sum_{g\in G\setminus\{e\}}a_g\cdot \sum_{i=0}^{v-1}\chi_i(g)\\
&=vk.
\end{split}
\]
This holds since for any $g\in G\setminus\{e\}$ we have $\sum_{i=0}^{v-1}\chi_i(g)=0$ and $\sum_{i=0}^{v-1}\chi_i(e)=v.$
\end{proof}
Using the previous lemma for $S=D_0,D_1$ separately, we obtain
\[
\begin{split}
\sum_{a\in \mathbb{F}_{3^n}}\widehat{F_1}(a)\widehat{F_1}(-a)&=3\sum_{a\in \mathbb{F}_{3^n}}\left( \chi_a(D_0)\chi_a(D_0)- \chi_a(D_1)\chi_a(D_2)\right)\\
&=3\sum_{a\in \mathbb{F}_{3^n}} \chi_a(D_0D_0^{-1})- 3\sum_{a\in \mathbb{F}_{3^n}}\chi_a(D_1D_1^{-1})\\
&=3\cdot 3^{n}\cdot 3^{n-1}-3\cdot 3^{n}\cdot 3^{n-1}\\
&=0
\end{split}
\]
On the other hand, using the values from the table, we can also write
\[\sum_{a\in \mathbb{F}_{3^n}}\widehat{F_1}(a)\widehat{F_1}(-a)= 9(\Lambda_1 +\Lambda_2)C^2\]
where $\Lambda_1=|\{a \in \F_{3^n}^*:(x_a,y_a)=(C,0)\}|, \Lambda_2=|\{a \in \F_{3^n}^*:(x_a,y_a)=(-C,0)\}|$.
This implies that for any $a\in \F_{3^n}^*$, $(x_a,y_a) \ne (C,0), (-C,0)$ and hence $D_0$ is also a PGDS.
\end{proof}
\begin{remark}
By Theorem \ref{cross-plateaued} we have PGDS with parameters $(v=p^{2n}, k=p^n, \alpha=p^{n}-p^s, \beta=p^{s+n}+p^{n}-p^s)$. If $p=3$ then by Theorem \ref{partitioned} we also have PGDS with parameters $(v=3^{n}, k=3^{n-1}, \alpha=3^{2n-3}-3^{n-2}, \beta=3^{n-1}+3^{2n-3}-3^{n-2})$. Here we also note that not every decimation leads to such a partition of the finite field $\F_{3^n}$. For instance, let $D_i=\{x: F(x)=Tr_n(x^d)=i\}$ for $d=3^{2}-3+1$. Then computational results imply that none of the $D_i$'s is a partial geometric difference set in $\F_{3^5}$. In general, it is a challenging task to characterize all functions which can be used to obtain a partition of a group into partial geometric difference sets.
\end{remark}
If there is such a partition, we can define a set of complex vectors
$$z_{a}=(\chi_a(D_0),\chi_a(D_1),\chi_a(D_2))$$ for any $a\in \mathbb{F}_{3^n}$. Let $$e=(1,\zeta_3, \zeta_3^2).$$ Then the norm of the complex inner product of any vector $z_{a}$ with $e$ is either $0$ or $C$ for some integer $C$.
\begin{thm}
Let $D_0$, $D_1$ and $D_2$ be a partition of $\mathbb{F}_{3^n}$ and $\lambda=3^{\frac{n+s-1}{2}}$ be an integer.
Suppose for $i=0,1,2$ each $D_i$ is a partial geometric difference set such that $\chi_a(D_i) \in \{0, \pm \lambda,\pm \lambda \zeta_3,\pm \lambda \zeta_3^2\}$ for each nonprincipal character $\chi_a$. If one of the following cases holds
\begin{itemize}
\item $|D_0|=|D_1|=|D_2|$,
\item $|D_i|=|D_j|=3^{n-1}-3^{\frac{n+s-2}{2}}$ and $|D_k|=3^{n-1}+2\cdot 3^{\frac{n+s-2}{2}}$,
\item $|D_i|=|D_j|=3^{n-1}+3^{\frac{n+s-2}{2}}$ and $|D_k|=3^{n-1}-2\cdot 3^{\frac{n+s-2}{2}}$.
\end{itemize}
and $|\langle z_a,e\rangle|^2$ is either $0$ or $3\lambda^2$, then
\[ f(x)=\begin{cases}
0, & x \in D_0\\
1, & x\in D_1\\
2, & x\in D_2\\
\end{cases}
\]
is an $s$-plateaued function.
\end{thm}
\begin{proof}
The Walsh transform
\[\widehat{f}(0)=\sum_{x \in \mathbb{F}_{3^{n}}} \zeta_3^{f(x)}=|D_0|+\zeta_3|D_1|+\zeta_3^2|D_2|.\]
With the conditions given in the theorem, we obtain
\[\widehat{f}(0)=\begin{cases}
0 & \text{if}\; |D_0|=|D_1|=|D_2|,\\
3^{\frac{n+s}{2}} & \text{if}\;|D_i|=|D_j|=3^{n-1}-3^{\frac{n+s-2}{2}}, |D_k|=3^{n-1}+2\cdot 3^{\frac{n+s-2}{2}},\\
-3^{\frac{n+s}{2}} & \text{if}\; |D_i|=|D_j|=3^{n-1}+3^{\frac{n+s-2}{2}}, |D_k|=3^{n-1}-2\cdot 3^{\frac{n+s-2}{2}}.
\end{cases}\]
For each $a\in \mathbb{F}_{3^n}^*$, the Walsh transform of $f(x)$ is
\begin{align*}
\widehat{f}(a)&=\sum_{x \in \mathbb{F}_{3^{n}}} \zeta_3^{f(x)-Tr_n(ax)}\\
&=\sum_{x \in D_0} \zeta_3^{Tr_n(-ax)} + \zeta_3\sum_{x \in D_1} \zeta_3^{Tr_n(-ax)} + \zeta_3^2\sum_{x \in D_2} \zeta_3^{Tr_n(-ax)}\\
&=\chi_{-a}(D_0)+\zeta_3 \chi_{-a}(D_1) + \zeta_3^2\chi_{-a}(D_2).
\end{align*}
Then with the assumptions of the theorem and after some easy calculations one can show that $|\widehat{f}(a)|^2 \in \{0,3\lambda^2\}$.
\end{proof}
\section{Results on p-ary Functions}
In this section we develop some tools to characterize $p$-ary $s$-plateaued functions. Our results mimic those of \cite{olmez2}.
\begin{lemma} \label{Matrix-Eq} Let $f$ be a function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$ and $M=(m_{x,y})$ be a $p^n \times p^n$ matrix where $m_{x,y}=\zeta_p^{f(x+y)}$. Then, $f$ is an $s$-plateaued function if and only if \begin{equation}MM^*M=p^{n+s}M \label{Mfequation}\end{equation}
where $M^*$ is the adjoint of the matrix $M$.
\end{lemma}
\begin{proof} Suppose $f$ is an $s$-plateaued function with $|\widehat{f}(x)| \in \{0, p^{(n+s)/2}\}$ for all $x \in \mathbb{F}_{p^{n}}$. Then,
\[
\begin{split}
(MM^*M)_{x,y}&=\sum_{z\in\mathbb{F}_{p^{n}}} \left(\sum_{c\in \mathbb{F}_{p^{n}}} m_{x,c}\overline{m_{z,c}}\right)m_{z,y} \\
&=\sum_{z\in \mathbb{F}_{p^{n}}} \left(\sum_{c\in \mathbb{F}_{p^{n}}} F(x+c)\overline{F(z+c)}\right)F(z+y) \\
&=\sum_{c\in \mathbb{F}_{p^{n}}} F(x+c)\left(\sum_{w\in \mathbb{F}_{p^{n}} } \overline{F(w)}F(w+y-c)\right)\\
&=\sum_{c\in \mathbb{F}_{p^{n}}} (\overline{F}*F)(c-y)F(x+c) \\
&=\sum_{u\in \mathbb{F}_{p^{n}}} (F*\overline{F})(u-x-y)F(u) \\
&=((F*\overline{F})*F)(x+y).
\end{split}
\]
Let $A=(F*\overline{F})*F$. Then, the Fourier transform of $A$ is
$\widehat{A}=\widehat{F}\cdot\widehat{\overline{F}}\cdot\widehat{F}$. Now by Fourier inversion
\[
\begin{split}
A(x+y)&=\frac{1}{p^{n}}\sum_{\beta \in \mathbb{F}_{p^{n}}} \widehat{F}(\beta)\widehat{\overline{F}}(\beta)\widehat{F}(\beta)\zeta_p^{Tr((x+y)\beta)}\\
&=p^{n+s}\frac{1}{p^{n}}\sum_{\beta \in \mathbb{F}_{p^{n}}} \widehat{F}(\beta)\zeta_p^{Tr((x+y) \beta)}\\
&=p^{n+s}F(x+y).\\
\end{split}
\]
Hence the equation holds.
Suppose $MM^*M=p^{n+s}M$. This implies $((F*\overline{F})*F)(x)=p^{n+s}F(x)$ for all $x \in \mathbb{F}_{p^n}$. Applying the Fourier transform to both sides gives
$$\widehat{F}(x)(\widehat{F}(x)\cdot \overline{\widehat{F}(x)}-p^{n+s})=0$$ for all $x \in \mathbb{F}_{p^n}$. Hence, $|\widehat{F}(x)| \in \{0, p^{(n+s)/2}\}$ for all $x \in \mathbb{F}_{p^{n}}.$
\end{proof}
\begin{remark} An $n \times n$ complex matrix $M$ is called a Butson-Hadamard matrix if
$$MM^* =nI_n.$$ It is easy to see that a $p^n \times p^n $ Butson-Hadamard matrix $M$ also satisfies
$$MM^*M=p^nM.$$ Our result implies that $M$ can be associated with a $0$-plateaued function. This indicates the well-known connection between Butson-Hadamard matrices and bent functions.
\end{remark}
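Lemma \ref{Matrix-Eq} can be tested on the smallest Boolean example: for the bent function $f(x_1,x_2)=x_1x_2$ on $\mathbb{F}_2^2$ the associated $4\times 4$ matrix should satisfy $MM^*M=2^{n+s}M=4M$. The following Python sketch (illustrative only) verifies this.

```python
from itertools import product

n, s = 2, 0
pts = list(product([0, 1], repeat=n))
f = lambda x: x[0] & x[1]                 # bent on F_2^2, hence s = 0
xor = lambda x, y: tuple(u ^ v for u, v in zip(x, y))

# M_{x,y} = zeta_2^{f(x+y)} = (-1)^{f(x XOR y)}; here M is real and symmetric,
# so the adjoint M* equals M itself.
M = [[(-1) ** f(xor(x, y)) for y in pts] for x in pts]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

MMM = matmul(matmul(M, M), M)
ok = all(MMM[i][j] == 2 ** (n + s) * M[i][j] for i in range(4) for j in range(4))
print(ok)   # expect True: M M* M = p^{n+s} M with p = 2
```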
As a corollary of Lemma \ref{Matrix-Eq}, we can characterize $s$-plateaued functions via their first and second derivatives. The first and second derivatives of a $p$-ary function are defined by
$$D_af(x)=f(x+a)-f(x)$$ and
$$D_aD_bf(x)=f(x+a+b)+f(x)-f(x+a)-f(x+b)$$
respectively.
\begin{cor}[Theorem 3, \cite{mos1}] $f$ is an $s$-plateaued function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$ if and only if the expression $\sum_{a,b\in \mathbb{F}_{p^{n}}} \zeta_p ^{D_aD_b f(u)}$ does not depend on $u \in \mathbb{F}_{p^{n}}$. This constant equals $p^{n+s}$.
\end{cor}
\begin{proof} Since the equation $$MM^*M=p^{n+s}M$$ holds, $$M^*MM^*=p^{n+s}M^*$$ holds too.
Fix two elements $x$ and $y$ of $\mathbb{F}_{p^{n}}$ and let $u=x+y$.
\[
\begin{split}
\sum_{z\in\mathbb{F}_{p^{n}}} \left(\sum_{c\in \mathbb{F}_{p^{n}}} \overline{m_{x,c}}m_{z,c}\right)\overline{m_{z,y}} &=\sum_{z\in \mathbb{F}_{p^{n}}} \left(\sum_{c\in \mathbb{F}_{p^{n}}} \zeta_p^{-f(x+c)}\zeta_p^{f(z+c)}\right)\zeta_p^{-f(z+y)} \\
&=\sum_{c,z\in \mathbb{F}_{p^{n}}} \zeta_p ^{-f(x+c)+f(z+c)-f(z+y)}\\
&=p^{n+s}\zeta_p ^{-f(x+y)}
\end{split}
\]
Now let $z=a+x$ and $c=b+y$. Then
$$p^{n+s}=\sum_{a,b\in \mathbb{F}_{p^{n}}} \zeta_p ^{-f(x+y+b)+f(a+b+x+y)-f(a+x+y)+f(x+y)}$$
holds.
Thus, $$p^{n+s}=\sum_{a,b\in \mathbb{F}_{p^{n}}} \zeta_p ^{D_aD_b f(u)}.$$
\end{proof}
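The corollary can be checked directly for a bent (hence $0$-plateaued) Boolean function: the double sum over second derivatives should equal $2^{n+0}$ for every $u$. The following Python sketch (illustrative only) does this on $\mathbb{F}_2^4$.

```python
from itertools import product

n = 4
pts = list(product([0, 1], repeat=n))
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])   # bent on F_2^4, hence 0-plateaued
xor = lambda x, y: tuple(u ^ v for u, v in zip(x, y))

def DDf(a, b, u):
    # second derivative D_a D_b f(u) = f(u+a+b) + f(u) - f(u+a) - f(u+b) (mod 2)
    return f(xor(xor(u, a), b)) ^ f(u) ^ f(xor(u, a)) ^ f(xor(u, b))

sums = {u: sum((-1) ** DDf(a, b, u) for a in pts for b in pts) for u in pts}
print(set(sums.values()))   # expect {16}: the constant p^{n+s} = 2^{4+0}
```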
\begin{cor} Let $\Delta_f(a)=\sum_{x \in \mathbb{F}_{p^n}} \zeta_p^{D_af(x)}$. $f$ is an $s$-plateaued function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$ if and only if $\sum_{a\in \mathbb{F}_{p^{n}}} \overline{\Delta_f(a)}\Delta_f(a)=p^{2n+s}$.
\end{cor}
\begin{proof} We have $$MM^*MM^*=p^{n+s}MM^*.$$ Let $$N=MM^*.$$ Then
\[
\begin{split}
(N)_{0,a}&=\sum_{x\in \mathbb{F}_{p^{n}}} \zeta_p^{f(x)-f(a+x)} \\
&=\overline{\Delta_f(a)}\\
\end{split}
\]
and
\[
\begin{split}
(N)_{a,0}&=\sum_{x\in \mathbb{F}_{p^{n}}} \zeta_p^{f(a+x)-f(x)} \\
&=\Delta_f(a)\\
\end{split}
\]
Therefore
\[
\begin{split}
(N^2)_{0,0}&=\sum_{a\in \mathbb{F}_{p^{n}}} \overline{\Delta_f(a)}\Delta_f(a)\\
&=p^{n+s}(N)_{0,0}\\
&=p^{2n+s}\\
\end{split}
\]
\end{proof}
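A numerical sanity check of this corollary (again our own example, on the additive model $\mathbb{Z}_3^2$) confirms that $\sum_a \overline{\Delta_f(a)}\Delta_f(a)=p^{2n+s}$ for a bent function:

```python
import cmath
from itertools import product

# Sanity check (our own, on the additive model Z_3^2): for the bent
# function f(x, y) = x^2 + y^2 (s = 0), the sum of |Delta_f(a)|^2
# over all a should equal p^(2n+s) = 3^4 = 81.
p, n, s = 3, 2, 0
zeta = cmath.exp(2j * cmath.pi / p)
group = list(product(range(p), repeat=n))
f = lambda v: (v[0] ** 2 + v[1] ** 2) % p

def delta(a):
    # Delta_f(a) = sum_x zeta^(D_a f(x)) = sum_x zeta^(f(x+a) - f(x))
    return sum(zeta ** ((f(tuple((xi + ai) % p for xi, ai in zip(x, a)))
                         - f(x)) % p) for x in group)

total = sum(abs(delta(a)) ** 2 for a in group)
assert abs(total - p ** (2 * n + s)) < 1e-6
print(round(total))  # 81
```

For this $f$ only $\Delta_f(0)=p^n$ is nonzero, so the whole sum comes from the single term $|\Delta_f(0)|^2=p^{2n}$.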
Now we will use our characterization to provide a simple construction of $s$-plateaued functions. If $A$ is an $m \times n$ matrix and $B$ is an $s \times t$ matrix, then the Kronecker product $A \otimes B$ is the $ms \times nt$ block matrix:
$${\displaystyle \mathbf {A} \otimes \mathbf {B} ={\begin{bmatrix}a_{11}\mathbf {B} &\cdots &a_{1n}\mathbf {B} \\\vdots &\ddots &\vdots \\a_{m1}\mathbf {B} &\cdots &a_{mn}\mathbf {B} \end{bmatrix}}.}$$
\begin{prop}
Let $f: \mathbb{F}_{p^{n}} \to \mathbb{F}_{p} $ and $g : \mathbb{F}_{p^{m}} \to \mathbb{F}_{p}$ be $s_1$-plateaued and $s_2$-plateaued functions, respectively. Let $M$ and $N$ be the matrices whose entries are determined by $m_{x,y}=\zeta_p^{f(x+y)}$ and $n_{a,b}=\zeta_p^{g(a+b)}$. Let $P=M \otimes N$ be the Kronecker product of $M$ and $N$. Then
$$PP^*P=p^{n+m+s_1+s_2}P$$ holds.
\end{prop}
\begin{proof}
Let $P=M \otimes N$. Then
\[
\begin{split}
PP^*P&=\left( M \otimes N\right) \left( M^* \otimes N^*\right)\left( M \otimes N\right) \\
&=\left( MM^*M\right) \otimes \left( NN^*N \right)\\
&=\left( p^{n+s_1}M\right) \otimes \left( p^{m+s_2}N \right)\\
&=p^{n+m+s_1+s_2} \left( M\otimes N\right) \\
&=p^{n+m+s_1+s_2}P\\
\end{split}
\]
\end{proof}
\begin{cor}
Let $f: \mathbb{F}_{p^{n}} \to \mathbb{F}_{p} $ and $g : \mathbb{F}_{p^{m}} \to \mathbb{F}_{p}$ be $s_1$-plateaued and $s_2$-plateaued functions, respectively. Then there exists an $(s_1+s_2)$-plateaued function from $\mathbb{F}_{p^{n+m}}$ to $\mathbb{F}_{p}$.
\end{cor}
\begin{proof} Let $M$ and $N$ be matrices associated with $f$ and $g$ such that $m_{x,y}=\zeta_p^{f(x+y)}$ and $n_{a,b}=\zeta_p^{g(a+b)}$. Let $P=M \otimes N$. We need to show that there exists a function $h$ such that the entries of $P$ can be associated with $h$.
We will index the rows and columns of $P$ by the elements of $\mathbb{F}_{p^{n+m}}$. First note that there is a subgroup $H$ of the additive group of $\mathbb{F}_{p^{n+m}}$ which is isomorphic to the additive group of $\mathbb{F}_{p^{m}}$. Let us fix a transversal $T=\{\gamma_1,\gamma_2,\dots, \gamma_{p^n} \}$ of this subgroup in $\mathbb{F}_{p^{n+m}}$. Now order the rows and columns of $P$ by $\gamma_1+H$, $\gamma_2+H$, ..., $\gamma_{p^n}+H$ in the block form. Here we have the isomorphisms $$\phi_1: H \to \mathbb{F}_{p^{m}}$$ and $$\phi_2: \{\gamma_1+H,\gamma_2+H,\dots, \gamma_{p^n}+H \} \to \mathbb{F}_{p^{n}}.$$
If $x,y \in \mathbb{F}_{p^{n+m}}$ then $x=\gamma_i+u$ and $y=\gamma_j+v$ for unique elements $u,v \in H$. Now let us examine the $(x,y)$-th entry of $P$.
$$P_{x,y}=\zeta_p^{f(\phi_2(\gamma_i+H)+\phi_2(\gamma_j+H))} \cdot \zeta_p^{g(\phi_1(u)+\phi_1(v))}= \zeta_p^{h(x+y)}$$ where
$h$ is the desired $(s_1+s_2)$-plateaued function from $\mathbb{F}_{p^{n+m}}$ to $\mathbb{F}_{p}.$ Moreover if $y=0$ then $\gamma_j=v=0$. Thus
$$h(x)= f(\phi_2(\gamma_i+H))+g(\phi_1(u)).$$
\end{proof}
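The direct-sum construction behind this corollary is easy to verify computationally on the additive model $\mathbb{Z}_p^n \times \mathbb{Z}_p^m$. The sketch below (our own, with plateaued-ness read off the Walsh spectrum, where $|W(w)|^2$ takes only the values $0$ and $p^{n+s}$) checks a small instance of $h(x,y)=f(x)+g(y)$:

```python
import cmath
import math
from itertools import product

# Toy check (ours) that the direct sum of an s1-plateaued and an
# s2-plateaued function is (s1+s2)-plateaued, using Walsh spectra.
p = 3
zeta = cmath.exp(2j * cmath.pi / p)

def walsh_power(func, n):
    # |W(w)|^2 for all w, where W(w) = sum_x zeta^(f(x) - w.x)
    pts = list(product(range(p), repeat=n))
    return [abs(sum(zeta ** ((func(x) - sum(wi * xi for wi, xi in zip(w, x))) % p)
                    for x in pts)) ** 2 for w in pts]

def plateau_order(func, n):
    levels = {round(v) for v in walsh_power(func, n)} - {0}
    assert len(levels) == 1                        # plateaued: one nonzero level
    return round(math.log(levels.pop(), p)) - n    # level = p^(n+s)

f = lambda x: (x[0] ** 2) % p                  # bent on Z_3:     s1 = 0
g = lambda y: 0                                # constant on Z_3: s2 = 1
h = lambda xy: (f(xy[:1]) + g(xy[1:])) % p     # direct sum on Z_3^2

assert plateau_order(h, 2) == plateau_order(f, 1) + plateau_order(g, 1)
print(plateau_order(f, 1), plateau_order(g, 1), plateau_order(h, 2))  # 0 1 1
```

The Walsh transform of $h$ factorizes as the product of the transforms of $f$ and $g$, which is the spectral counterpart of the Kronecker-product identity in the proposition.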
\subsection{Partially bent functions}
This section is devoted to investigating a family of $s$-plateaued functions known as partially bent functions. A $p$-ary function $f$ is called partially bent if the derivative $D_af$ is either balanced or constant for every $a \in \mathbb{F}_{p^n}$. Here we provide some characterizations via their associated designs.
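The definition can be tested exhaustively for small examples. The following sketch (our own illustration on the additive model $\mathbb{Z}_3^2$, not taken from the paper) classifies every derivative of $f(x,y)=x^2$ as balanced or constant; the constant ones mark the linear space of $f$:

```python
from collections import Counter
from itertools import product

# Illustration (our example): every derivative of f(x, y) = x^2 on
# Z_3^2 is balanced or constant, so f is partially bent; the constant
# derivatives identify the linear space of f.
p = 3
pts = list(product(range(p), repeat=2))
f = lambda v: (v[0] ** 2) % p

kind = {}
for a in pts:
    counts = Counter((f(((x[0] + a[0]) % p, (x[1] + a[1]) % p)) - f(x)) % p
                     for x in pts)
    if len(counts) == 1:
        kind[a] = "constant"                       # a is a linear structure
    elif set(counts.values()) == {len(pts) // p}:
        kind[a] = "balanced"                       # each value hit equally
    else:
        kind[a] = "neither"

assert "neither" not in kind.values()               # f is partially bent
print(sum(v == "constant" for v in kind.values()))  # |linear space| = 3
```

For this $f$ the derivatives $D_{(0,a_2)}f$ vanish identically, so the linear space is $\{0\}\times\mathbb{Z}_3$, of dimension $s=1$.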
Let $f$ be an $s$-plateaued function and for $a \in \mathbb{F}_{p^{n}} $ define the set
$$T_a=\{(x+a,f(x)+f(a)): x \in \mathbb{F}_{p^{n}} \}.$$
Fix an element $a$ of $\mathbb{F}_{p^{n}} $, then the graph of $f$ can be also written as
$$G_f=\{(x,f(x)): x \in \mathbb{F}_{p^{n}}\}=\{(x+a,f(x+a)): x \in \mathbb{F}_{p^{n}}\}.$$
\begin{lemma} Let $f$ be an $s$-plateaued function with $f(0)=0$.
If $f$ has a linear structure $\Lambda$ then $T_a=G_f$ for all $a \in \Lambda$.
\end{lemma}
\begin{cor} Let $f$ be an $s$-plateaued function with $f(0)=0$ and linear structure $\Lambda$ of dimension $m$. Then the incidence matrix $A$ of the design associated with the partial geometric difference set $G_f$ can be written as a Kronecker product of $1\times p^m$ all-ones matrix $j$ and an incidence matrix $N$ of a partial geometric design.
\end{cor}
\begin{proof} Let $j$ be the $1\times p^m$ all-ones matrix. Let $D$ be the block design associated with $G_f$ where the point set is $\mathbb{F}_{p^{n}} \times \mathbb{F}_{p} $ and the blocks are the translates of the graph of $f$. Suppose $\textbf{B}$ is a block in $D$. Note that $\textbf{B}=(u,v)+G_f$ for some $(u,v) \in \mathbb{F}_{p^{n}} \times \mathbb{F}_{p}.$ Then for each $a\in \Lambda$ we have
$$\textbf{B}+(a,f(a))=(u,v)+T_a=(u,v)+G_f=\textbf{B}$$ by the preceding lemma. Thus each block is repeated $p^m$ times. Therefore, $A=j\otimes N$ for some incidence matrix $N$. We now show that $N$ is itself an incidence matrix of a partial geometric design. Since $A$ is an incidence matrix of a partial geometric design,
\[ \begin{split}
AA^tA&=(\beta-\alpha)\, j \otimes N+\alpha\, J\\
&=j \otimes (\beta-\alpha) N+\alpha\, j \otimes J'\\
&= j\otimes \left[ (\beta-\alpha) N+\alpha J'\right],\\
\end{split}
\]
where $J$ and $J'$ denote all-ones matrices of the appropriate sizes.
We also have
\[ \begin{split}
AA^tA&=jj^tj\otimes NN^tN\\
&=p^m j\otimes NN^tN\\
&=j\otimes p^m NN^tN.\\
\end{split}
\]
By comparing the right-hand sides we can conclude that the equation
$$NN^tN=\frac{\beta-\alpha}{p^m}\, N+\frac{\alpha}{p^m}\, J'$$
holds, where $J'$ is the all-ones matrix of the same size as $N$.
\end{proof}
Let $f$ be an $s$-plateaued function from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$. Then the set $D_f$ is a PGDS with parameters $(p^{n+1},p^n ; p^{2n-1}-p^{n+s-1}, p^{2n-1}-p^{n+s-1}+p^{n+s})$. If $f$ is a partially bent function then there is an integer $s\geq 0$ such that $f$ is $s$-plateaued and the linear space of $f$ has dimension $s$. Our observation yields the following result.
\begin{cor}
If $f$ is a partially bent function, then $f$ can be associated with a partial geometric design with parameters
$$v=p^{n+1}, b=p^{n+1-s}, k=p^n, r=p^{n-s}, \alpha= p^{2n-1-s}-p^{n-1}, \beta=p^n+p^{2n-1-s}-p^{n-1}.$$
\end{cor}
\section{INTRODUCTION}
Among the low-mass X-ray binaries, soft X-ray transients (SXTs) offer
a favorable opportunity of detecting stellar-mass black-hole candidates
(BHCs). Measurements of the binary periods and the orbital velocities
of the companion stars in seven SXTs yield mass functions
$\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 3M_{\odot}$ for their compact stars (McClintock 1998,
Filippenko \hbox{et al.\ }$\,$ 1999).
As these masses are higher than the upper limit to the
mass of neutron stars for known equations of state of nucleon matter,
the compact stars in SXTs are all BHCs.
In two other systems, the mass functions are $< 3M_{\odot}$ but the
inclination angles are thought to be sufficiently low that the
compact objects can also be classified as BHCs (McClintock 1998).
SXTs show transitions between different spectral states.
In a simple classification, we can identify an
``off'' (or ``quiescent'') state when the source is inactive, and a
``high-soft state'' and a ``low-hard state'' defined by their X-ray
spectral properties, when the source is X-ray active.
Often an increase of the optical brightness and radio flares
accompany the X-ray outbursts. (See e.g.\ Tanaka \& Lewin 1995
for a review of the properties of SXTs.)
GRO~J1655$-$40 is an SXT discovered by BATSE on 1994 July 27
(Zhang \hbox{et al.\ }$\,$ 1994). Between 1994 and 1995 it underwent several X-ray
outbursts (see Tavani \hbox{et al.\ }$\,$ 1996), after which it retreated to a quiescent
state. In late April 1996, another outburst started, and it lasted more than
one year. The 2 -- 12 keV X-ray light curve of GRO~J1655$-$40 during the
1996 outburst is shown in
Figure~\ref{batse_asm} (from the quick-look results provided by the
RXTE/ASM team); in the same figure we also show the hard (20 -- 100 keV)
X-ray flux detected by BATSE over the same period of time.
The binary period of GRO~J1655$-$40 is $P = 2.62157 \pm 0.00015$~d
(Orosz \& Bailyn 1997). The time of
inferior conjunction of the secondary star is
HJD $2449838.4209 \pm 0.0055$ (Orosz \& Bailyn 1997). An alternative
ephemeris, given by van der Hooft \hbox{et al.\ }$\,$ (1998), is
HJD $2449838.4198(52) + 2.62168(14) \times$~N.
The mass function determined from the
kinematics of the system by Orosz \& Bailyn (1997) implies a mass
$M_{1} = (7.0 \pm 0.2)~M_{\sun}$ for the compact object.
By taking into account the effect of X-ray irradiation of the secondary
star, Phillips, Shahbaz \& Podsiadlowski (1999) derived
lower values for the mass of the primary: they obtained
$4.1 < M_{1} < 6.6 M_{\sun}$ (90 per cent confidence level).
By using only data taken during an X-ray quiescent state, Shahbaz \hbox{et al.\ }$\,$
(1999) obtained $5.5 < M_{1} < 7.9 M_{\sun}$ (95 per cent confidence level).
In any case, the mass of the compact object makes GRO~J1655$-$40
a candidate black hole binary. The major
orbital parameters of the system are listed in Table 1.
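As a quick consistency check of these numbers (our own arithmetic, not from the cited papers), the mass function $f(M)=PK_2^3/(2\pi G)$ computed from the quoted period and the Orosz \& Bailyn (1997) semi-amplitude $K_2 = 228.2$ km s$^{-1}$, together with their mass ratio and an assumed inclination $i \simeq 69.5^{\circ}$, indeed yields $M_1 \approx 7 M_{\sun}$:

```python
import math

# Consistency check (our arithmetic): mass function and primary mass
# from the quoted orbital parameters.  K2 and P are the Orosz & Bailyn
# (1997) values; q = M2/M1 and i = 69.5 deg are assumed here.
G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
P = 2.62157 * 86400.0  # orbital period [s]
K2 = 228.2e3           # secondary radial-velocity semi-amplitude [m/s]
q = 2.3 / 7.0
inc = math.radians(69.5)

f_M = P * K2 ** 3 / (2 * math.pi * G) / MSUN          # mass function [Msun]
M1 = f_M * (1 + q) ** 2 / math.sin(inc) ** 3          # primary mass [Msun]
print(f"f(M) = {f_M:.2f} Msun,  M1 = {M1:.1f} Msun")  # ~3.2 and ~6.9
```

The mass function alone is a hard lower limit on $M_1$; the inclination and mass-ratio corrections raise it to the quoted $\simeq 7 M_{\sun}$.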
During the quiescent states the system is optically faint, with
$V \approx 17$ mag (Bailyn \hbox{et al.\ }$\,$ 1995b), but it can
brighten substantially when an X-ray outburst occurs. In August 1994, it
reached $V = 14.4$ (Bailyn \hbox{et al.\ }$\,$ 1995a),
and in May 1996, $V = 15.4$ (Horne \hbox{et al.\ }$\,$ 1996).
Since its discovery, we have observed GRO~J1655$-$40 spectroscopically
on four occasions: 1994 August -- September, 1996 April,
1996 June and 1997 June. The observations covered
periods in which the system was in a pre-outburst quiescence, at the
onset of a hard X-ray flare and in a high-soft state. In a previous
paper (Soria \hbox{et al.\ }$\,$ 1998) we presented the orbital-phase-dependent
velocity shifts of the \ion{He}{2} $\lambda 4686$ and
\ion{N}{3} $\lambda \lambda 4641, 4642$ emission lines.
In this paper we extend our study to include the Balmer lines and the orbital
variations in their line profiles.
Technical details of our observations are outlined in \S2. In \S3
we present the results of the spectroscopic observations we conducted
in 1996 April, and in \S4 we present those we obtained in 1996 June
and 1997 June. In \S5 we discuss the profiles of the Balmer lines
observed in the 1996 -- 1997 soft X-ray outburst, and we attempt to
locate the emission regions. We also discuss some
physical mechanisms responsible for line emission and absorption in the disk.
In \S6 we present the results of the observations we carried out during
the 1994 hard X-ray outburst. We outline the general spectral features,
we discuss the origin, intensity and profile of the main absorption
and emission lines, and we compare the two sets of spectra from 1994 and 1996.
In particular, we show the evolution of the optical spectrum before,
during and after a major hard X-ray flare.
\section{OBSERVATIONS}
GRO~J1655$-$40 was observed by one of us (RWH) as a target of
opportunity on each night from 1994 August 30 to 1994 September 4,
with the RGO spectrograph and Tektronix 1k $\times$ 1k thinned CCD
on the 3.9~m Anglo-Australian Telescope (AAT).
Spectra were obtained in two regions, $6278-6825$ \AA,
centered on the H$\alpha$ line, and $4432-5051$ \AA,
covering the \ion{N}{3}, \ion{He}{2} and H$\beta$ lines.
Gratings with 1200 grooves/mm were used, with the blaze direction
oriented towards the 25 cm camera, giving a resolution of
1.3 \AA\ FWHM. On September 6, we obtained simultaneously a blue
($3925-5500$ \AA) spectrum with the RGO spectrograph
and 600 grooves/mm gratings, at a resolution of 2.5 \AA\ FWHM,
and a red ($5500-11000$ \AA) spectrum
with the FORS spectrograph, via a dichroic beam splitter,
at a resolution of 20 \AA\ FWHM. The latter observations
were kindly obtained for us by Paul Francis.
In 1996 April 20 -- 21 we carried out a scheduled observation with
the Double Beam Spectrograph (DBS) on the ANU 2.3~m Telescope at Siding
Spring Observatory. The detectors on the two arms of the spectrograph
were SITe 1752$\times$532 CCDs. Gratings with 300 grooves/mm were used
for the blue ($3600-5700$ \AA) and the red ($5700-9300$ \AA) bands,
and low-resolution spectra (resolution $=4.8$ \AA\ FWHM) were
obtained. More extensive observations were carried out
in 1996 June 8 -- 12 and June 17, and 1997 June 14 -- 15,
with the DBS on the ANU 2.3~m Telescope.
1200 grooves/mm gratings were used for both the blue
($4150-5115$ \AA) and the red ($6300-7250$ \AA) bands, giving a
resolution of 1.3 \AA\ FWHM.
We list our observations in Table 2.
\section{PRE-OUTBURST STATE (1996 APRIL)}
\subsection{Overview}
GRO~J1655$-$40 was in quiescence between late 1995 August and 1996 April.
In 1996 March, its soft X-ray luminosity, inferred from the
ASCA observations (at $2-10$ keV), was $\approx 2\times 10^{32}$ erg s$^{-1}$
(see Orosz \& Bailyn 1997). The source was not detected by RXTE/ASM
(at $2-12$ keV) above the intensity level of $\approx 12$ mCrab
before 1996 April 25 (Levine \hbox{et al.\ }$\,$ 1996;
Remillard \hbox{et al.\ }$\,$ 1996); it was also undetected by BATSE
before May. On April $25.38 \pm 0.78$ UT (HJD $2450198.88 \pm 0.78$), the
soft ($2-12$ keV) X-ray intensity began to rise (Remillard \hbox{et al.\ }$\,$ 1996).
The optical brightening, however, had already started on
April 20 (Orosz \hbox{et al.\ }$\,$ 1997). The fitted times of the initial rise
were April $19.25 \pm 0.29$ UT (HJD $2450192.76 \pm 0.29$) for the
$I$ band, April $19.37 \pm 0.26$ UT (HJD $2450192.88 \pm 0.26$) for the
$R$ band, April $19.82 \pm 0.15$ UT (HJD $2450193.32 \pm 0.15$) for the
$V$ band and April $20.34 \pm 0.18$ UT (HJD $2450193.84 \pm 0.18$) for
the $B$ band.
Our spectroscopic observations were conducted between HJD $2450194.26$
and HJD $2450194.30$ (April 20), and between
HJD $2450195.10$ and HJD $2450195.15$ (April 21),
just after the initial rise in the optical brightness, but before the
RXTE/ASM detected the rise in the X-ray intensity.
On the first night we took a series of five
600-s spectra and one 1200-s spectrum (although focussing problems on the red
arm of the spectrograph affected the quality our red spectra).
The binary phase during the
observations was $0.485 < \phi < 0.501$ (ephemeris of
Orosz \& Bailyn 1997), and the inferior conjunction of the secondary
star was at $\phi = 0.75$. On the following night, we took seven
600-s spectra, at the binary phase $0.804 < \phi < 0.824$.
(If we adopt the ephemeris of van der
Hooft \hbox{et al.\ }$\,$ (1998), the observations were made at phase
$0.480 < \phi < 0.496$ on the first night, and
$0.799 < \phi < 0.819$ on the second.)
\subsection{General spectral features}
The \ion{H}{1} Balmer and Paschen lines are clearly seen
in absorption in our spectra (Figures~\ref{fluxspectra} and \ref{96apr}).
The \ion{He}{2} $\lambda 4686$ emission line,
the Bowen fluorescence \ion{N}{3} $\lambda 4634$ line and the
blend \ion{N}{3} $\lambda \lambda 4641, 4642$ were not detected on either
night. These emission lines, which are often seen from X-ray
irradiated accretion disks (Smak 1981; Horne \& Marsh 1986), were,
however, observed 20 days later (on May 11) by Hynes \hbox{et al.\ }$\,$ (1998b).
By comparing our spectra with the
spectra of F giant and subgiant stars and with the spectra of the system
taken by Orosz \& Bailyn (1997) during a quiescent state in February 1996,
we conclude that the optical continuum emission was predominantly
from the secondary star.
This conclusion is supported by the agreement between the observed
velocity shifts of the Balmer lines in our spectra and the
radial velocity of the secondary star expected at those phases
(not shown).
\section{HIGH-SOFT STATE (1996--1997)}
\subsection{1996 June observations}
In late 1996 April, a transition from quiescence
to a high-soft X-ray state occurred. The soft ($2-12$ keV) X-ray flux
(measured by RXTE/ASM) increased rapidly, reaching $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1.5$~Crab
and remaining above that level for more than six months.
Around 1996 November the soft X-ray intensity began to decline; but in
1997 January it increased again (Figure~\ref{batse_asm}). It then
stayed at the $\approx 1.3$~Crab level till 1997 July, after which the
system entered another quiescent state.
At the initial stage of the outburst the X-ray spectrum consisted
of a blackbody component and a truncated power-law tail. The black body
temperature was $T \simeq 0.8$ keV. The photon
index of the power-law tail was $\Gamma \simeq 3.5$ and the exponential
cutoff was at $\simeq 13$ keV (Hynes \hbox{et al.\ }$\,$ 1998b). Around
1996 May 27 -- 28 (HJD 2450230 -- 31) the X-ray spectrum hardened,
and the source became detectable by BATSE (Figure~\ref{batse_asm}).
The photon index of the power-law was $\Gamma \simeq 2.5$ on June 20,
and there was no obvious sign of a high-energy cutoff below
$\simeq 100$~keV (Hynes \hbox{et al.\ }$\,$ 1998b).
A radio flare was observed by the Molongo Observatory Synthesis
Telescope (MOST), apparently coincident with the X-ray spectral transition
(Hunstead \& Campbell-Wilson 1996; Hunstead, Wu \& Campbell-Wilson 1997).
The optical continuum had brightened a few days before the onset of the
soft X-ray outburst (see \S3.1). After reaching a peak in mid-May, the
optical brightness then declined steadily, with an e-folding time of
about 50 days (Hynes \hbox{et al.\ }$\,$ 1998b). By August 1996 it had returned to
the previous quiescent level. Flickerings on time-scales of a few
seconds, apparently correlated with X-ray variability, were reported by
Hynes \hbox{et al.\ }$\,$ (1998a).
Our observations were carried out on June 8 -- 12 and June 17,
during the soft X-ray outburst (Figure~\ref{batse_asm}). We
obtained fifty simultaneous red and blue spectra,
each of 2000 s duration. In Figure~\ref{96jun} we show
two blue and red spectra taken on June 8 and June 10.
The intensity of the optical continuum in May was
slightly higher than in June (cf. Hynes \hbox{et al.\ }$\,$ 1998b).
The soft X-ray flux was similar in these two epochs; however,
the hard X-rays turned on after May 27.
\subsection{General spectral features in the 1996 June observations}
The H$\alpha$ and H$\beta$ lines showed a broad absorption trough in
1996 June. The absorption component was stronger on
June 8 (EW $\simeq -6$ \AA,
FWHM $\simeq 70$ \AA\ $\simeq 3000$ km s$^{-1}$\, for H$\alpha$;
EW $\simeq -10$ \AA,
FWHM $\simeq 70$ \AA\ $\simeq 4000$ km s$^{-1}$\, for H$\beta$)
than on the other nights.
The absorption troughs were partly filled by narrower
emission components (Figure~\ref{96jun}), which
were not seen in the spectra obtained by Hynes \hbox{et al.\ }$\,$ (1998b) four weeks
earlier.
The equivalent width of the H$\alpha$ emission seemed to
increase with the hard X-ray flux (Figure~\ref{batse_EW1}).
The emergence of the H$\alpha$ emission component after the
May 27 turn-on of the hard X-ray flux also suggests a direct association
between these emissions. We have calculated the discrete correlation
function, defined by Edelson \& Krolik (1988), for
the EW of the H$\alpha$ emission line and the hard X-ray flux and
have found a 2-$\sigma$ peak in the correlation function at the lag
of $0.0 \pm 0.5$ d, for a 0.5 d interval bin (Figure~\ref{lag}).
A similar correlation is also found in our 1994 August -- September
data (see \S6.4.1).
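For reference, the discrete correlation function can be sketched as follows (a minimal implementation of our own, which ignores the measurement-error correction of the full Edelson \& Krolik 1988 recipe):

```python
import numpy as np

# Minimal discrete correlation function for two unevenly sampled series.
def dcf(t_a, a, t_b, b, lags, half_bin=0.25):
    ua = (a - a.mean()) / a.std()
    ub = (b - b.mean()) / b.std()
    udcf = np.outer(ua, ub)              # all pairwise products
    dt = t_b[None, :] - t_a[:, None]     # lag of each pair
    return np.array([udcf[np.abs(dt - lag) <= half_bin].mean()
                     for lag in lags])

# toy data: the second series tracks the first with added noise,
# so the correlation should peak near zero lag
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 40))
sig = np.sin(t)
lags = np.arange(-2.0, 2.5, 0.5)
corr = dcf(t, sig, t, sig + 0.1 * rng.standard_normal(40), lags)
print("peak at lag:", lags[np.argmax(corr)])
```

Binning the pairwise products by their time lag is what makes the estimator usable for unevenly sampled light curves, where a classical cross-correlation is undefined.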
The average H$\alpha$ emission profile was double-peaked, with
FWHM $\simeq 900$ km s$^{-1}$\,, and peak-to-peak velocity separation
$= 500 \pm 50$ km s$^{-1}$\,. However, the relative strength of the two peaks
varied substantially over each orbital cycle (see \S4.3).
The \ion{He}{2} $\lambda 4686$ emission line had a double-peaked
profile (Figure~\ref{HeII_profile96}) with a peak-to-peak
separation of $8.5 \pm 0.5$ \AA, corresponding to a velocity separation of
$545 \pm 30$ km s$^{-1}$\,. A double-peaked \ion{He}{2} $\lambda 4686$ emission
line had been detected before the X-ray hardening (see Table 6
in Hynes \hbox{et al.\ }$\,$ 1998b): archival spectra from the Anglo-Australian
Telescope show that the peak-to-peak velocity separation
was $480 \pm 40$ km s$^{-1}$\, on 1996 May 11. The central position of the line varied
sinusoidally, with the variations consistent with the projected
radial velocity of the primary star for a mass ratio $q = 0.33$
(Soria \hbox{et al.\ }$\,$ 1998), suggesting that the emission originated
from the accretion disk. The average line center was blue-shifted
by $40 \pm 10 \pm 5$ km s$^{-1}$\, with respect to the systemic velocity
determined by Orosz \& Bailyn (1997), where
the first source of error is in the determination of the line center,
the second is the systematic error in the wavelength calibration.
We have not found any significant correlation between
the EW of the \ion{He}{2} $\lambda 4686$ line and the hard X-ray
flux (Figure~\ref{batse_EW1}).
The \ion{He}{1} $\lambda 6678$ emission line appeared to be double-peaked,
with only one peak visible at some orbital phases. The velocity separation,
which was determined when the two peaks were present, varied between $520$
and $650$ km s$^{-1}$\, over the time of our observations. These velocities are
consistent with those of the H$\alpha$ and the \ion{He}{2} $\lambda 4686$
lines. The EW of the line was about $0.7$ \AA.
A weak narrow absorption component was also detected at $\lambda 6678$
(not shown), superimposed on the broader, double-peaked emission component.
The narrow absorption component was visible at most
orbital phases and had an EW $\simeq -0.2$ \AA\ and an
FWHM $\simeq 4$ \AA\ ($\simeq 170$ km s$^{-1}$\,). A similar narrow absorption
component was also found at $\lambda 7065$.
The velocity shifts of the narrow absorption components
of the \ion{He}{1} lines were consistent with the projected
radial velocity of the secondary star (Figure~\ref{HeI_abs}),
suggesting that these lines were due to absorption near the secondary star.
The apparent small discrepancy between the observed and predicted
velocities may be due in part to the uncertainty in our measurement of
the line position, in part to additional absorption by cold gas near the
rim of the accretion disk or in the accretion stream, or to a non-uniform
absorption by the photosphere of the secondary star, which was
strongly irradiated on the side facing the primary.
We note that \ion{He}{1} $\lambda 7065$ showed a stronger broad absorption
component briefly on June 8, with EW $\simeq -1.8$ \AA\ and
FWHM $\simeq 35$ \AA\ $\simeq 1500$ km s$^{-1}$\,.
It disappeared later on the same night while the hard
X-ray flux increased, and was not seen during the following nights
(see Figure~\ref{96jun}).
Narrow, single-peaked \ion{N}{3} $\lambda 4634$ and
\ion{N}{3} $\lambda \lambda 4641, 4642$ Bowen fluorescence lines
were observed in 1996 June.
As these lines were also seen before the hard X-ray
turned on [see the spectra taken by Hynes \hbox{et al.\ }$\,$ (1998b) in 1996 May],
they were probably uncorrelated with the hard X-ray flux.
By comparing the Bowen line profiles and radial velocity shifts
with those of the \ion{He}{2} $\lambda 4686$ line, we conclude that
the Bowen fluorescence lines and the \ion{He}{2} line originated from
different regions. Judging from the radial velocities shifts
of the Bowen lines, Soria \hbox{et al.\ }$\,$ (1998) suggest that
they were emitted from a localized region in the
outer accretion disk, possibly a hot spot in phase with the
secondary star.
\subsection{Profile of the H$\alpha$ emission component}
Among the emission lines observed in 1996 June, the H$\alpha$ line
had the highest signal-to-noise ratio. If we adopt a reddening
$E(B-V) = 1.2$ (Hynes \hbox{et al.\ }$\,$ 1998b), the inferred average flux for
the H$\alpha$ emission line was
$F_{\scriptsize {\mbox {H$\alpha$}}}
\approx 5 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ (cf.\
$F_{\scriptsize {\mbox {He{\tiny{ II}}}}} \approx 3
\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$
for the \ion{He}{2} $\lambda 4686$ line).
Although the EW of the H$\alpha$ line appeared to be correlated
with the hard X-ray flux (Figure~\ref{batse_EW1}), we have found that
the velocity profile of the line, with its peak normalized to unity,
depended only on the binary phase.
The systematic variation of the H$\alpha$ emission line profiles
over an orbital cycle can be seen in the sequence of profiles plotted in
Figures~\ref{Ha_stack1}, \ref{Ha_stack2} and \ref{Ha_stack3}.
The red-shifted peak was stronger than the blue-shifted peak between
phase 0.25 and 0.85, with the blue-shifted peak dominating at other phases.
By considering the averaged, normalized H$\alpha$ line
profiles over the whole epoch of our observations (see Figure 2 in
Soria \hbox{et al.\ }$\,$ 1998), and the averaged
profiles obtained between orbital phases 0.80 and 0.92
and between phases 0.19 and 0.28 (Figure~\ref{Ha_avg}), when
the highest signal-to-noise ratio was achieved and both peaks had
comparable strength, we deduce that the FWHM of the line was
$\simeq 900$ km s$^{-1}$\,, and the peak-to-peak velocity separation was
$500 \pm 50$ km s$^{-1}$\,. These values are similar to those determined for
the double-peaked \ion{He}{1} and \ion{He}{2} emission lines.
In principle, symmetric double-peaked lines can be produced
by an accretion disk (Smak 1981; Horne \& Marsh 1986). Since the H$\alpha$
line profile varied with a period equal to the binary period, we can rule
out the possibility that the line asymmetry was due to a
precession of the accretion disk, which should have a period longer than
the binary period (e.g.\ Kumar 1986, Warner 1995). Neither could it be
caused by the secondary star eclipsing either side of the disk, because
our data show asymmetry of the two peaks even when the star was behind
the accretion disk.
If the emission were from the irradiatively-heated
surface of the secondary star, we would expect a stronger blue component
in the emission line when the star was approaching
(at phases 0.25 -- 0.75),
and vice versa. This is inconsistent with the observation that
the blue peak of the line was stronger at around phase 0.
Therefore we reject the possibility that the secondary star was
the dominant source of the emission.
(See \S5 for more detailed discussion on the H$\alpha$ emission.)
\subsection{1997 June observations}
The soft X-ray flux in 1997 June was slightly
lower than in 1996 June, while the hard X-ray flux
was significantly lower (Figure~\ref{batse_asm}).
We observed the system on 1997 June 14 and 15.
No optical photometric observations were carried out during
this period. Although the conditions were not photometric,
a comparison with other stars in the field led to an
estimate of $V \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 16.5$ for the source,
about half a magnitude fainter than in 1996 June.
The H$\alpha$ emission line had a red-shifted peak around phase 0.5
and a blue-shifted peak around phase 0, as in 1996 June;
the signal-to-noise ratio was much lower than in 1996.
The velocity shifts with respect to the
center of mass of the binary were $\simeq 255$ km s$^{-1}$\, for the
red-shifted peak, and $\simeq -230$ km s$^{-1}$\, for the blue-shifted peak,
similar to those measured in 1996.
For the H$\alpha$ line, EW $=2.5 \pm 0.4$ \AA\ on June 14, and
$2.7 \pm 0.3$ \AA\ on June 15. The emission component of the
H$\beta$ and higher Balmer lines was not detectable. The Balmer
emission was therefore weaker than in 1996 June.
The \ion{He}{2} $\lambda 4686$ line was seen in emission with
EW $=3 \pm 1$ \AA, as strong as in 1996. The Bowen fluorescence
\ion{N}{3} lines were also detected.
\section{BALMER EMISSION IN THE HIGH-SOFT STATE}
\subsection{Kinematic interpretation of the H$\alpha$ emission}
\subsubsection{Peak separation}
Figure~\ref{plotpeaks} shows the velocity shift of the
peaks of the H$\alpha$ emission component detected in the 1996 June and the
1997 June spectra. Only one peak was present at some
orbital phases (cf.\ Figures~\ref{Ha_stack1}, \ref{Ha_stack2}
and \ref{Ha_stack3}). The velocity separation of the peaks
($\simeq 500$ km s$^{-1}$\,) was larger than the radial velocity amplitude of the
secondary star. This implies that the emission region
was located within the orbit of the secondary star, thus ruling
out the possibility of emission from the surface of the secondary
(see also \S4.3) or from a circumbinary disk.
According to Orosz \& Bailyn (1997), the projected radial velocity
semi-amplitude of the secondary star in GRO~J1655$-$40 is
$K_2=228.2 \pm 2.2$ km s$^{-1}$\,, and the masses of the two components
are $M_1 \simeq 7 M_{\odot}$ and $M_2 \simeq 2.3 M_{\odot}$.
If we assume a thin, non-circular, Keplerian accretion disk
truncated at the tidal radius, as described
in Paczy\'{n}ski (1977), then the disk emission lines would have peaks
with observed radial velocities at each phase given by the dash-dotted
lines in Figure~\ref{plotpeaks}. The tidal truncation radius is approximately
given by $R_{\rm d} = 0.60 \, a /(1+q)$ for $0.03 < q < 1$ (Warner 1995),
where $a$ is the separation between the centers of mass of the
binary components. The peak-to-peak velocity separation
averaged over an orbital cycle would be $\simeq 820$ km s$^{-1}$\,.
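This expected separation can be checked with a rough circular-disk approximation (our own estimate; the non-circular disk of Paczy\'{n}ski (1977), averaged over an orbital cycle, gives the slightly larger quoted value). Assuming $M_1=7 M_{\sun}$, $M_2=2.3 M_{\sun}$, $P=2.62157$ d and $i=69.5^{\circ}$:

```python
import math

# Rough circular-disk estimate (ours, not the paper's code) of the
# peak-to-peak velocity separation from a Keplerian disk truncated at
# the tidal radius R_d = 0.60 a / (1 + q).  Assumed: M1 = 7 Msun,
# M2 = 2.3 Msun, P = 2.62157 d, i = 69.5 deg (Orosz & Bailyn 1997).
G, MSUN = 6.674e-11, 1.989e30
M1, M2 = 7.0 * MSUN, 2.3 * MSUN
P = 2.62157 * 86400.0
inc = math.radians(69.5)

a = (G * (M1 + M2) * P ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)  # Kepler III
q = M2 / M1
R_d = 0.60 * a / (1 + q)                           # tidal truncation radius
v_proj = math.sqrt(G * M1 / R_d) * math.sin(inc)   # projected Kepler speed
sep = 2 * v_proj / 1e3                             # peak-to-peak, km/s
print(f"separation ~ {sep:.0f} km/s")              # close to 800 km/s
```

Either way, the predicted separation is well above the observed $\simeq 500$ km s$^{-1}$, which is the discrepancy discussed below.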
Phillips \hbox{et al.\ }$\,$ (1999) derived lower values for the radial velocity
semi-amplitude of the secondary star
and for the masses of the binary components. They obtained
$192 < K_2 < 214$ km s$^{-1}$\,,
$4.1 < M_1 < 6.6 M_{\odot}$ and $1.4 < M_2 < 2.2 M_{\odot}$ (90 per cent
confidence limit). For a fixed mass ratio $q=1/3$, this implies that
the peak-to-peak velocity separation averaged over an orbital cycle,
expected from a tidally-truncated Keplerian disk, would
be $690 < \Delta V < 760$ km s$^{-1}$\,. Peak separations $\simeq 770$ km s$^{-1}$\,
would instead be expected if we adopt the parameters determined
by Shahbaz \hbox{et al.\ }$\,$ (1999), who inferred a velocity semi-amplitude
$K_2=215.5 \pm 2.4$ km s$^{-1}$\, for the companion star.
The velocity separations deduced from
our 1996 June and 1997 June data are lower than the predicted values,
especially for the red-shifted peak at phases around 0.5 and
the blue-shifted peak at phases around 0. Therefore, the double-peaked
profile of the H$\alpha$ emission component cannot be explained by
a conventional thin Keplerian accretion disk truncated at the tidal radius.
We note that the peak-to-peak velocity separations of \ion{He}{2}
$\lambda 4686$ and \ion{He}{1} $\lambda 6678$ were
also too small to be consistent with a tidally-truncated Keplerian disk.
Remarkably low rotational velocities, inconsistent with Keplerian
disks truncated at their tidal radii, have also been inferred from the
double-peaked H$\alpha$ emission line profiles observed from A0620$-$00
and GS~1124$-$68 (Orosz \hbox{et al.\ }$\,$ 1994).
A possible explanation for the low rotational velocities inferred
for the outer edge of the accretion disks in these three systems
is that the disks extended slightly beyond their tidal radii
(cf.\ Whitehurst 1988).
\subsubsection{A schematic model}
We have found that the observed variations in the H$\alpha$
line profile can be explained by the model shown in
Figure~\ref{sectors96}, with two separate
emission regions on the accretion disk.
The model can reproduce the low radial velocities of the
peaks and their behavior at various orbital phases. For
example, only the red-shifted component would be visible around phase 0.5 and
only the blue-shifted component around phase 1, while both peaks
would be visible around phases 0.25 and 0.75.
Non-axisymmetric patterns of emission similar to our model
were proposed by Steeghs, Harlaftis \& Horne (1997) to explain the
Balmer and helium emission line profiles observed from the dwarf nova
IP Peg, and by Neustroev \& Borisov (1998) to explain the Balmer
line profiles observed from the dwarf nova U Gem. In both cases,
they were interpreted as evidence of a two-armed spiral density wave
or shock in the accretion disk, induced by the tidal interaction with
the companion star. In the case of IP Peg, the observed rotational
velocities also suggested that the disk could extend beyond its tidal
radius (Steeghs \hbox{et al.\ }$\,$ 1997).
It has been suggested that accretion stream overflows, hot spots, and
uneven illumination of the accretion disk from the central X-ray
source can also produce an anisotropic emissivity. However, it is beyond the
scope of the present paper to explore in detail why the
Balmer emission regions might assume the geometrical configuration
shown in our model.
Furthermore, a narrow H$\alpha$ absorption component from the companion
star is also likely to be present, as suggested by the detection
of other stellar absorption features from \ion{He}{1} and \ion{Fe}{1}.
It can be noticed from the radial velocity curve of the companion star
that the presence of a stellar absorption line
superimposed on the disk emission would also be qualitatively consistent
with the observed behavior of the line profile at various orbital phases,
if absorption and emission have comparable strength
(Figures~\ref{Ha_stack1}, \ref{Ha_stack2} and \ref{Ha_stack3}).
However, we argue that the effect of anisotropic disk emission
is more significant than the effect of a stellar absorption line,
and can better explain the velocity shifts of the emission peaks
observed over an orbital cycle.
One might expect the H$\alpha$ emission line profiles to be roughly
symmetric at superior and inferior conjunction; however,
the H$\alpha$ line in general showed a red-shifted peak
stronger than the blue-shifted peak, and a blue wing more extended than
the red wing (see Figure~\ref{Ha_stack1} and Figure~\ref{Ha_stack3}).
The asymmetry of the profiles may indicate the presence of both opacity
and kinematic effects, and may be evidence of a thin disk-wind.
It is worth noting that P-Cygni profiles were observed
in some UV resonance lines from the system (Hynes \hbox{et al.\ }$\,$ 1998b).
\subsection{Formation of absorption and emission lines}
\subsubsection{Broad absorption lines}
The H$\alpha$ and H$\beta$ lines in the spectra obtained during
the 1996/1997 outburst all show broad (FWHM $\approx 3000$ km s$^{-1}$\,),
shallow absorption components.
Broad absorption components are also seen in the 1994 August --
September spectra, when the system was in outburst
(see \S6.2), but not in the 1996 April
spectra, when the system was in quiescence.
Absorption lines can be produced in an accretion disk if the disk
is optically thick and its temperature decreases with height above the
central plane. The absorption features seen in our spectra are
broader than the emission cores: this suggests that they were probably
formed in the inner part of an optically thick disk, where viscous
heating from the central plane dominates over external irradiative heating.
Broad, shallow absorption features at the Balmer lines have been seen
in the spectra of other BHCs in outburst, such as A0620$-$00
(Whelan \hbox{et al.\ }$\,$ 1977), GS~1124$-$68 (Della Valle, Masetti \& Bianchini 1998)
and, most prominently, GRO~J0422$+$32 (Casares \hbox{et al.\ }$\,$ 1995). Similar
features are also often present in the spectra
of UX UMa stars (e.g.\ Warner 1995) and in the outburst spectra of
Dwarf Novae (Robinson, Marsh \& Smak 1993).
\subsubsection{Double-peaked emission lines}
Double-peaked lines can be emitted from a hot temperature-inversion layer
on the X-ray irradiated surface of the accretion disk.
Provided that the spectrum is sufficiently soft,
X-rays can be absorbed at a small depth, forming a thin temperature-inversion
layer but leaving most of the vertical structure of the disk
undisturbed (see Tuchman, Mineshige \& Wheeler 1990; Wu \hbox{et al.\ }$\,$ 1999).
The strong, double-peaked \ion{He}{2} $\lambda 4686$ line
seen during the high-soft state in 1996--1997 was probably emitted
via radiative recombination in this thin, hot layer,
at temperatures $\sim 10^5$ K.
While the \ion{He}{2} $\lambda 4686$ was detected throughout the 1996
outburst, Balmer emission was seen only after the hard X-rays turned on.
Balmer lines are likely to be emitted at lower temperatures ($\sim 10^4$ K).
Moreover, the H$\alpha$ double-peaked profile is strongly asymmetric and
phase-dependent (\S5.1.2), while the \ion{He}{2} $\lambda 4686$ profile is
approximately symmetric at {\em all} phases.
For all these reasons we suggest that \ion{He}{2} $\lambda 4686$ and
the Balmer lines were emitted from different regions: the Balmer lines
probably originated from a deeper, denser layer in the disk where matter
was heated by harder X-ray photons.
\section{ONSET OF A HARD X-RAY OUTBURST (1994 AUGUST -- SEPTEMBER)}
\subsection{Overview}
GRO J1655$-$40 was active in the radio and hard X-ray energy bands
in 1994 August -- September. The 843 MHz radio flux density,
measured by the MOST, was declining after a large outburst that
reached about 8 Jy in early August (Wu \& Hunstead 1997).
On 14 September another radio outburst started. It was weaker
than the previous outburst, with a peak flux density of about 2 Jy.
The hard X-ray flux recorded by BATSE (Harmon \hbox{et al.\ }$\,$ 1995) was below
the 0.7 photon cm$^{-2}$ s$^{-1}$ level
during the period August 15 -- September 4. A sharp rise in the X-ray
flux occurred on September 5, and the flux stayed at the
2 photon cm$^{-2}$ s$^{-1}$ level for about 10 days. It then declined
abruptly, apparently in coincidence with the rise in
the 843 MHz radio luminosity around September 14
(see Fig.~1 in Wu \& Hunstead 1997).
The span of our observations covered the transition phase around the
hard X-ray rapid increase. Good signal-to-noise spectra,
with a resolution of 1.3 \AA\ FWHM, in the spectral regions
centered at H$\alpha$ and at \ion{He}{2}/ H$\beta$,
were obtained on August 30 -- September 4, before the rise in the
hard X-ray flux. The sampling of binary phase was roughly uniform
over the six-night observations. Two lower-resolution spectra
in the H$\alpha$ and \ion{He}{2}/ H$\beta$ regions
were taken on 1994 September 6, after the rise in the X-ray flux, and
showed a dramatic change compared with the August 30--September 4 spectra.
\subsection{General spectral features before September 6}
In Figure~\ref{total94_ha} we show the red spectrum,
centered at H$\alpha$, averaged over the six nights from August 30 to
September 4. As in the spectra obtained during the 1996 -- 1997
high-soft state, we notice a broad absorption component partly filled by
narrow H$\alpha$ emission. The average EW of the absorption
component is $-6 \pm 1$ \AA, and its average FWHM
$60 \pm 10$ \AA\ ($\simeq 2700 \pm 500$ km s$^{-1}$\,). The
narrow emission component has an average EW $= 5.3 \pm 0.2$ \AA\
and an average FWHM $= 450 \pm 20$ km s$^{-1}$\,.
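The velocity widths quoted here and below follow from the non-relativistic Doppler relation $\Delta v \simeq c\,\Delta\lambda/\lambda_0$; a minimal check, with only the rest wavelengths of the lines as inputs:

```python
# Convert a FWHM measured in Angstroms into km/s: dv = c * dlambda / lambda0.
C_KMS = 299792.458  # speed of light in km/s

def fwhm_kms(fwhm_angstrom, lambda0_angstrom):
    """Non-relativistic Doppler width of a line of rest wavelength lambda0."""
    return C_KMS * fwhm_angstrom / lambda0_angstrom

# H-alpha (6563 A) absorption trough, FWHM = 60 A
print(round(fwhm_kms(60.0, 6563.0)))  # 2741, i.e. ~2700 +/- 500 km/s
# H-beta (4861 A) absorption trough, FWHM = 50 A
print(round(fwhm_kms(50.0, 4861.0)))  # 3084, i.e. ~3000 +/- 600 km/s
```

Both values reproduce the conversions quoted in the text to within the stated uncertainties.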
Figure~\ref{total94_hb1} shows the averaged blue spectrum,
centered at \ion{He}{2}/ H$\beta$. The broad absorption component
of H$\beta$ has an EW $= -4 \pm 1$ \AA\
and an FWHM $= 50 \pm 10$ \AA\ ($\simeq 3000 \pm 600$ km s$^{-1}$\,).
A narrow emission component is superimposed on the broad absorption
trough, with EW $= 1.1 \pm 0.1$ \AA\ and FWHM $= 550 \pm 30$ km s$^{-1}$\,.
\ion{He}{2} $\lambda 4686$ appears in the spectrum as a strong narrow
emission line (Figure~\ref{total94_hb1}).
The EW of the line is $ 5.2 \pm 0.2$ \AA\ and its FWHM is $540 \pm 20$ km s$^{-1}$\,.
Weaker emission is detected at \ion{He}{2} $\lambda 4542$, with
EW $= 0.5 \pm 0.2$ \AA\ and FWHM $= 800 \pm 50$ km s$^{-1}$\,.
A broad emission component (FWHM $= 950 \pm 100$ km s$^{-1}$\,) was
detected on each night at \ion{He}{1} $\lambda 6678$. An additional narrower
component (FWHM $= 450 \pm 50$ km s$^{-1}$\,) was detected on some nights; it was
particularly strong on August 30 (cf.\ Figure~\ref{n_narrow94}). We note
that this line might be contaminated by \ion{He}{2} $\lambda 6683$ emission.
The spectra show a broad, flat-topped line at about $5005$ \AA. It
cannot be attributed to [\ion{O}{3}]
$\lambda 5007$ both because the wavelengths are discrepant, and
because we do not see any emission from [\ion{O}{3}] $\lambda 4959$,
whose strength should be one-third of that of
[\ion{O}{3}] $\lambda 5007$.
(The only forbidden lines possibly detected in our spectra obtained
between 1994 August 30 and September 4 are [\ion{N}{2}] $\lambda 6548$,
blended with the much stronger blue wing of H$\alpha$, and
[\ion{N}{2}] $\lambda 6589$; see Figure~\ref{n_narrow94}).
We identify the line at about $5005$ \AA\ as \ion{N}{2} $\lambda 5005$.
Another strong
low-ionization metal line detected in the spectrum is the blend
\ion{O}{2} $\lambda \lambda 4941,4943$. Other broad emission lines
detected from \ion{N}{2} and \ion{O}{2} are listed in Table 3.
We do not include a weak emission line at $6375$ \AA\ in Table 3,
as it was seen only on the first two nights (EW $= 0.7 \pm 0.1$ \AA);
its wavelength is consistent with that of an \ion{Fe}{10} line.
Bowen \ion{N}{3} emission lines were also observed, broader than in
1996 June, and with a radial velocity curve consistent with that of the
primary rather than the secondary.
\subsection{Line classification}
The FWHMs of the optical lines observed before 1994 September 6 span
a large range of values, from $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 400$ km s$^{-1}$\, to $\approx 3000$ km s$^{-1}$\,.
The broadest lines are those seen in absorption at H$\alpha$ and H$\beta$.
We notice that a broad absorption trough
at H$\beta$ was also detected in the 1995 March outburst
by Bianchini \hbox{et al.\ }$\,$ (1997), and during the 1996 May--June high-soft state.
The physical interpretation of the broad Balmer absorption lines is
probably the same as discussed in \S5.2.1.
Henceforth, we will focus only on the emission lines.
For the orbital parameters of GRO~J1655$-$40, the projected rotational
velocity of the outer rim of a thin, Keplerian accretion disk,
truncated at or slightly beyond its tidal radius, is $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 350$ km s$^{-1}$\,.
Therefore, any lines with an FWHM $< 700$ km s$^{-1}$\, cannot come from
a thin Keplerian disk.
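The $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 350$ km s$^{-1}$\, figure can be reproduced from Roche geometry and Kepler's third law. The sketch below uses illustrative system parameters of the order of those in the literature ($M_1 = 7\,M_\odot$, $M_2 = 2.3\,M_\odot$, $P = 2.62$ d, $i = 70^\circ$); these specific values are our assumptions, not taken from the text:

```python
# Projected Keplerian velocity at the tidal radius of the accretion disk.
# All system parameters below are assumed, illustrative values.
import math

G_MSUN = 1.327e20    # G * Msun in m^3 s^-2
M1, M2 = 7.0, 2.3    # component masses in solar units
P = 2.62 * 86400.0   # orbital period in seconds
INC = math.radians(70.0)

# Binary separation from Kepler's third law
a = (G_MSUN * (M1 + M2) * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# Roche-lobe radius of the primary (Eggleton 1983 fit); tidal radius ~0.9 R_L
q = M1 / M2
r_lobe = a * 0.49 * q**(2 / 3) / (0.6 * q**(2 / 3) + math.log(1 + q**(1 / 3)))
r_tide = 0.9 * r_lobe

# Projected Keplerian speed at the tidal radius, in km/s
v_proj = math.sqrt(G_MSUN * M1 / r_tide) * math.sin(INC) / 1e3
print(round(v_proj))  # ~404 km/s, consistent with the >~350 km/s quoted
```

The result is insensitive to reasonable changes in the assumed masses, so the conclusion that any line with FWHM $< 700$ km s$^{-1}$\, cannot come from a thin Keplerian disk is robust.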
We classify the emission lines observed
before 1994 September 6 as ``broad'' and ``narrow''
(Table 3), according to whether their FWHM was
larger or smaller than the minimum value of FWHM expected for disk
emission.
The broad emission lines were usually flat-topped, while
the narrow emission lines were either single-peaked or had a hint of
a double-peaked profile but with very low velocity separation.
In some cases the distinction is blurred, especially
for the weak lines. However, in most situations the two kinds
of line profiles can be discerned.
We notice that both the EW and the FWHM of the narrow lines
changed significantly from night to night, while EW and FWHM of
the broad lines were more stable.
Our phenomenological classification of broad
and narrow emission lines in GRO J1655$-$40 is similar to
the classification of emission lines in AGNs. For example,
a study of 123 high-luminosity AGNs (Wills \hbox{et al.\ }$\,$ 1993) has
shown that the profile of the \ion{C}{4} $\lambda 1549$ emission
line consists of a ``broad'' base and a ``narrow'' core, which
are emitted from distinct regions. The FWHM of the line is determined
by the core/base ratio: when the core is dominant, the line will appear
narrow and sharply peaked; and when the base is dominant the line
will be broad and flat-topped (cf.\ Fig.\ 2 in Wills \hbox{et al.\ }$\,$ 1993).
\subsubsection{Origin of the ``broad'' emission lines}
The low-ionization metal lines from \ion{N}{2} and \ion{O}{2} detected
in our spectra are examples of broad emission lines; their FWHMs are
consistent with a disk origin. Unlike the disk emission lines seen in
1996 June,
their profiles were generally flat-topped rather than double-peaked.
Flat-topped lines can be emitted in
a wind from the surface of the accretion disk. Murray \& Chiang (1997)
showed that if the lines have substantial optical depth ($\tau_l \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1$),
the contribution of the wind from the front and back sectors of the
disk (where the projected radial velocities are lower) is enhanced
relative to the contribution of the wind from the sides of the disk
(where the projected radial velocities
are higher), thus smearing out the two peaks in the line profile.
From the fact that we detected only low-ionization broad lines
from \ion{He}{1}, \ion{O}{2}, \ion{N}{2} and \ion{N}{3}, we infer
an ionization parameter
$U \equiv L_{\rm x}/(n_{\rm H}\,r^2) \approx 10$
in the line-emitting disk wind. The corresponding temperature would be
$\sim 1$ -- $2 \times 10^4$ K if the gas was collisionally ionized
(Hatchett, Buff \& McCray 1976); the temperature would be lower
if the line-emitting gas was photoionized.
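With this definition $U$ carries dimensions of erg cm s$^{-1}$, and $U \approx 10$ corresponds, for example, to a luminosity $L_{\rm x} \sim 10^{37}$ erg s$^{-1}$ absorbed by gas of density $n_{\rm H} \sim 10^{12}$ cm$^{-3}$ at $r \sim 10^{12}$ cm; these particular numbers are illustrative assumptions, not values from the text:

```python
# Ionization parameter U = L_x / (n_H * r^2), in CGS units.
# The input values are illustrative, chosen only to reproduce U ~ 10.
def ionization_parameter(L_x, n_H, r):
    """L_x in erg/s, n_H in cm^-3, r in cm -> U in erg cm s^-1."""
    return L_x / (n_H * r**2)

U = ionization_parameter(L_x=1e37, n_H=1e12, r=1e12)
print(U)  # 10.0
```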
The possibility of an outflow or disk wind suggested by the broad optical
emission line profiles observed in the 1994 X-ray
outburst was already discussed in Hunstead \hbox{et al.\ }$\,$ (1997), who
compared the spectrum of GRO J1655$-$40 with the spectra of
WN6-8 Wolf-Rayet stars. In spite of the similarities, there are several
differences between the two cases. Firstly, high-ionization emission lines
usually found in WR stars, e.g.\ from \ion{N}{4} and \ion{N}{5},
were not observed, indicating a lower temperature for the outflow
in GRO~J1655$-$40.
Secondly, no forbidden [\ion{O}{3}] lines were observed,
implying a higher density.
Thirdly, the broad, flat-topped profiles are
better explained by a disk-wind model rather than
by a spherical outflow as in typical Wolf-Rayet stars.
\subsubsection{Kinematics of the ``narrow'' emission lines}
The narrow emission lines showed large night-to-night changes in their
EWs and line profiles. In particular, they showed transitions between
narrow, single-peaked and slightly broader, double-peaked profiles
over the time-span of our observations. We notice that
the \ion{He}{2} $\lambda 4686$ line was generally
broader and more clearly double-peaked than H$\alpha$ (Figure~\ref{n_stack94};
cf.\ also the double-peaked profile in the combined spectrum shown in
Figure~\ref{total94_hb1}); its peak-to-peak velocity separation
was however always $< 400$ km s$^{-1}$\,.
Some lines (most notably \ion{He}{1} $\lambda 6678$) were probably
emitted from both the broad- and the narrow-line regions,
with the intensity ratio determined by the relative contribution of the two
components, i.e.\ as in the ``base'' vs ``core'' classification scheme
suggested by Wills \hbox{et al.\ }$\,$ (1993) in their study of AGNs.
Both the H$\alpha$ and \ion{He}{2} $\lambda 4686$ emission lines
were, on average, asymmetric; they were slightly
blue-shifted with respect to the central line wavelength (taking into account
the systemic velocity). In the combined
spectrum of all six nights, the H$\alpha$ line is blue-shifted by
$55 \pm 9$ km s$^{-1}$\,,
the H$\beta$ line by $61 \pm 12$ km s$^{-1}$\,, the narrow component of the
\ion{He}{1} $\lambda 6678$ line by $82 \pm 22$ km s$^{-1}$\,,
and the \ion{He}{2} $\lambda 4686$ line by $41 \pm 13$ km s$^{-1}$\,.
We note that a similar phenomenon is often observed in the emission lines
from quasars and in some cataclysmic variables, and can be attributed
to the expansion of clouds
or to a gas outflow, when the receding gas is occulted from our view.
\subsubsection{Optically thin narrow-emission-line region}
Owing to the well-determined orbital parameters, the upper limit to
the size of the accretion disk in GRO J1655$-$40 is reasonably well
constrained. We have found that the velocities inferred from the
narrow widths of the H$\alpha$, H$\beta$ and
\ion{He}{2} $\lambda 4686$ lines were significantly lower than
the velocities expected from a Keplerian disk contained in the Roche lobe
of the compact star.
One might argue that the low FWHM of the narrow emission
lines was due to their formation in a
circumbinary disk (Artymowicz \& Lubow 1996). If this were true,
no significant velocity shifts from the center-of-mass
systemic velocity should be seen over an orbital cycle.
However, Soria \hbox{et al.\ }$\,$ (1998) have measured the radial velocity shifts
of the line wings at one-quarter of the maximum intensity above the continuum,
and found a sinusoidal variation consistent with the one observed
in 1996 June, which was interpreted
as the modulation due to orbital motion of the compact star.
Moreover, the profiles of these
lines changed substantially from night to night. In the case of
H$\alpha$, the line profile was single-peaked on four nights, but
double-peaked on the other two, with a velocity separation larger than
expected from a circumbinary disk.
We can therefore rule out the possibility of emission from
a circumbinary disk.
We interpret the narrow lines, instead, as emission
from a spheroidal or extended optically thin region high above
the accretion disk (at lower rotational velocity). We notice
that some cataclysmic variables show narrow, often single-peaked
Balmer and \ion{He}{2} emission lines, yielding rotational velocities
much slower than Keplerian. For example, there is strong evidence
of an optically thin
line-emitting region extended well above the disk plane in
the eclipsing system PG 1012$-$029 (Honeycutt, Schlegel \& Kaitchuck 1986).
The larger FWHM of the high-ionization \ion{He}{2}
line suggests that it was probably emitted closer to the disk plane
than H$\alpha$, and/or at slightly smaller radii, where we expect the
irradiation from the central source to be stronger.
The night-to-night variability in the line profiles (Figure~\ref{n_stack94})
might reflect changes in the line-of-sight opacity of the optically thin
region. Opacity variations can be caused by
matter inhomogeneities (i.e.\ moving clouds) in the line emission
region or variable obscuration along the light path.
Furthermore, the variability between single-peaked and
double-peaked profiles indicates that the extended
line-emitting region was not in steady state; for instance,
the profiles would tend to become broader and more ``disk-like''
(as those observed in 1996 June) as the thin clouds dissipated,
and less ``disk-like'' when the emission from the clouds was stronger.
The presence of optically thin line-emitting gas well above the disk
plane may be explained in various ways: some accreting
matter might not yet have settled down onto the disk, after the end of the
strong hard X-ray flare observed by BATSE in 1994 August. We shall argue
in \S6.4.2 that the outer disk is likely to be
disrupted and puffed up into a thick torus during a hard X-ray flare.
Some of the gas in the extended
narrow-line region may also have been the result of bipolar outflows
from the disk surface, as suggested by the systematic blue-shift
and by the detection of strong winds in the broad-line region.
The extended, optically thin narrow-line-region model is valid only
if the UV and soft X-ray flux is not too high. Soft irradiating flux
of order of the Eddington limit at distances $R < 10^{12}$ cm would yield
an ionization parameter $U > 10^4$ in the thin narrow-line region,
for a gas with densities $n_{\rm H} < 10^{13}$ cm$^{-3}$.
This would imply temperatures $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^7$ K,
difficult to reconcile with the presence of lines such as H$\alpha$.
Although no UV and soft X-ray data were obtained at that epoch, the fact
that we detected only low-ionization metal lines (for example from
\ion{N}{2} and \ion{O}{2}) implies that the
soft flux could not have been too strong in the 1994 X-ray outburst.
\subsection{Spectral transition on 1994 September 6}
\subsubsection{General features}
A sharp increase in the hard X-ray flux was detected by BATSE
around September 5 (Harmon \hbox{et al.\ }$\,$ 1995). The
broad \ion{N}{2}, \ion{N}{3} and \ion{O}{2} emission lines
disappeared from the blue spectrum obtained on September 6
(Figure~\ref{06sep}), and
\ion{He}{2} $\lambda 4686$ and the Bowen \ion{N}{3} lines were
much weaker than in the previous nights. H$\alpha$
and H$\beta$ were instead much stronger:
for H$\alpha$, EW $= 100 \pm 5$ \AA, and for H$\beta$,
EW $=8.0 \pm 0.5$ \AA. The low resolution of the red spectrum does not
allow for a reliable determination of the FWHM of H$\alpha$; for H$\beta$,
FWHM $= 530 \pm 50$ km s$^{-1}$\,. The H$\alpha$$/$H$\beta$ EW ratio increased
by a factor of 4 between September 4 and September 6.
Other lines clearly seen in emission are
the Paschen series of \ion{H}{1} and several
\ion{He}{1} lines ($\lambda 4922$, $\lambda 5016$,
$\lambda 5876$, $\lambda 6678$, $\lambda 7065$). All these emission
lines were single-peaked; unfortunately, the low S/N of the blue spectrum
and the low resolution of the red
spectrum do not permit an accurate determination of their strengths
and widths.
The sudden increase in the Balmer emission between September 4 and
September 6 coincided with the sharp increase in the hard X-ray flux.
The EWs (in absolute value) of H$\alpha$, H$\beta$
and \ion{He}{2} $\lambda 4686$ are shown in Figure~\ref{batse_EW94}
compared with the hard X-ray flux measured by BATSE.
The Balmer EWs and the hard X-ray flux appear well correlated,
as in the 1996 June observations.
Such correlation is not seen for \ion{He}{2} $\lambda 4686$.
In fact, its intensity dropped when the hard X-ray flux rose.
Shrader \hbox{et al.\ }$\,$ (1996) also noticed the dramatic increase in the
H$\alpha$ emission and the disappearance of \ion{He}{2} $\lambda 4686$
around TJD 9600, after the hard X-ray flux surge
(see their Fig.\ 1). Their low-resolution spectrum obtained on
September 8 showed only emission from the low-ionization lines
\ion{H}{1}, \ion{He}{1} and \ion{O}{1}.
Optical spectra taken by Bianchini \hbox{et al.\ }$\,$ (1997) on 1994 August 12 -- 15,
during the decline of an earlier hard X-ray flare, showed
emission almost as strong as that in our September 6 spectrum from
low-ionization lines like H$\alpha$ (mean EW $= 68.4$ \AA) and H$\beta$
(mean EW $= 7.4$ \AA). The H$\alpha$$/$H$\beta$ EW ratio was 9.2,
a factor of 3 higher than on September 4, and slightly lower than on
September 6. The high-ionization
\ion{He}{2} $\lambda 4686$ (mean EW $= 5.3$ \AA) and
\ion{N}{3} $\lambda \lambda 4641, 4642$ (mean EW $= 5.9$ \AA) lines
were also present, showing disk or
disk-wind signatures (see Fig.\ 2 in Bianchini \hbox{et al.\ }$\,$ 1997). From
the peak separation and the width of the flat-topped profiles,
we can infer that the outer-disk radius was
$\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} (1.3 \pm 0.1) \times 10^{11}$ cm. Broad absorption troughs were
not seen in those spectra.
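The outer-radius estimate inverts the projected Keplerian relation, $R = G M_1 \sin^2 i / v_{\rm proj}^2$, where $v_{\rm proj}$ is half the peak-to-peak velocity separation. A sketch with assumed parameters ($M_1 = 7\,M_\odot$, $i = 70^\circ$, and an illustrative $v_{\rm proj} = 790$ km s$^{-1}$; none of these numbers is taken from the Bianchini spectra themselves):

```python
# Outer-disk radius from a projected Keplerian velocity:
#   v_proj = sqrt(G M1 / R) * sin(i)   =>   R = G M1 sin(i)^2 / v_proj^2
# M1, i and the input velocity are assumed, illustrative values.
import math

G_MSUN = 1.327e20             # G * Msun in m^3 s^-2
M1 = 7.0                      # primary mass in solar units
SIN_I = math.sin(math.radians(70.0))

def outer_radius_cm(v_proj_kms):
    """Radius (cm) where the projected Keplerian speed equals v_proj."""
    v = v_proj_kms * 1e3      # km/s -> m/s
    return 1e2 * G_MSUN * M1 * SIN_I**2 / v**2   # m -> cm

print(f"{outer_radius_cm(790.0):.2e}")  # ~1.3e11 cm
```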
We suspect that the conditions on August 12 -- 15 were intermediate
between those we observed before and after September 5.
In the period of low X-ray activity between August 15
and September 5, a geometrically thin accretion disk was probably re-forming
and re-expanding to its tidal radius after having been disrupted in
the previous hard X-ray flare, only to be then ``evaporated'' again
into an extended atmosphere or cocoon by the following flare.
\subsubsection{Optically thick Balmer emission}
As the photoionization cross-section for
hydrogenic ions falls approximately as $\nu^{-7/2}$ for photon
energies larger than the threshold energy (13.6 eV for hydrogen)
(see Osterbrock 1989), the hard ($20-100$ keV) X-ray photons
contribute little to the ionization level of the gas, compared with the
UV and soft X-ray photons.
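The $\nu^{-7/2}$ scaling makes this point quantitative: relative to the 13.6 eV threshold, the cross-section seen by a 20 keV photon is suppressed by roughly eleven orders of magnitude. A quick check, assuming only that power law holds up to hard X-ray energies:

```python
# Suppression of the hydrogenic photoionization cross-section for hard
# X-rays, using the scaling sigma(E) ~ (E / 13.6 eV)^(-7/2).
E_THRESHOLD_EV = 13.6

def sigma_ratio(E_ev):
    """Cross-section at photon energy E relative to its threshold value."""
    return (E_ev / E_THRESHOLD_EV) ** (-3.5)

print(f"{sigma_ratio(20e3):.1e}")   # ~8e-12 at 20 keV
print(f"{sigma_ratio(100e3):.1e}")  # smaller still at 100 keV
```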
However, Compton heating by the hard X-rays can
alter the disk structure and drive a disk-wind
(e.g.\ Begelman, McKee \& Shields 1983; Idan \& Shaviv 1996). In the
extreme conditions, the dense disk-wind would create a cocoon
surrounding the accretion disk; in this situation one may consider
the accretion disk being practically evaporated into a thick atmosphere.
The absence of the characteristic broad absorption troughs at H$\alpha$
and H$\beta$ and of broad disk-like emission lines in our September 6 spectrum
suggests the absence of an exposed, geometrically thin, optically thick
accretion disk.
An increase in the optical depth
of the corona could have caused the increase in Balmer line emission and
the disappearance of the high-ionization lines.
The higher H$\alpha$$/$H$\beta$ EW ratio measured during the 1994 August and
September hard X-ray flares, and the Paschen emission lines observed on
September 6, indicate that the corona was optically
thick in the Balmer series. When this situation occurs, every H$\beta$
photon is resonantly absorbed by a nearby hydrogen atom in the
$n=2$ state; the process of emission and reabsorption is repeated until
the H$\beta$ photon splits into H$\alpha$ + P$\alpha$ photons, which
will both escape (``Case C'' recombination, see e.g.\ McCray 1996).
After the end of the 1994 August hard X-ray
flare, the extended cocoon probably became optically thin to the Balmer
lines (``Case B'' recombination), as observed on August 30 -- September 4.
The disk was ``puffed up'' again by the hard X-ray flare
after September 4, so that Balmer lines were emitted via Case C
recombination. Archival spectra obtained from the Anglo-Australian
Observatory, taken on September 27 (Figure~\ref{Greenhill94}), show
features almost identical to those observed on August 30 -- September 4
(cf.\ Figures~\ref{total94_ha} and \ref{total94_hb1}), suggesting that
the cocoon once again became optically thin after the end of the
second hard X-ray flare.
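The splitting of each H$\beta$ photon into H$\alpha$ + P$\alpha$ is exact energy bookkeeping on the hydrogen levels ($4\!\to\!2$ equals $4\!\to\!3$ plus $3\!\to\!2$), which can be verified directly from the Rydberg formula:

```python
# "Case C" photon splitting: E(H-beta) = E(H-alpha) + E(P-alpha).
# Hydrogen line energies from the Rydberg formula, E_n = -13.6 / n^2 eV.
RYD_EV = 13.6

def line_energy(n_lo, n_hi):
    """Photon energy (eV) of the n_hi -> n_lo transition."""
    return RYD_EV * (1.0 / n_lo**2 - 1.0 / n_hi**2)

h_beta = line_energy(2, 4)   # 4 -> 2
h_alpha = line_energy(2, 3)  # 3 -> 2
p_alpha = line_energy(3, 4)  # 4 -> 3
print(round(h_beta, 3), round(h_alpha + p_alpha, 3))  # 2.55 2.55
```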
The simultaneous decrease in the observed strength of the
\ion{He}{2} and Bowen \ion{N}{3} lines at the time of the hard X-ray
increase can perhaps be attributed to a
geometric effect. As GRO~J1655$-$40 has a high orbital
inclination, a thick outer-disk rim, inflated by hard X-ray irradiation,
may occult the inner region where the high-ionization lines are formed,
and explain their weakness in the September 6 spectrum.
Finally, we note that a major episode of plasma ejection occurred on
1994 September 9, inferred from the radio observations
(Harmon \hbox{et al.\ }$\,$ 1995; see also Wu \& Hunstead 1997), a few days after
the change in the optical and hard X-ray spectra. It was suggested
(Fabian \& Rees 1995) that there may be a connection between the
production of collimated jets and the presence of an extended hot,
high pressure atmosphere and a quasi-spherical accretion flow in
the centers of elliptical galaxies, the usual hosts for radio-loud
emission. Whether the same mechanism was a catalyst
for the collimated ejections observed in the radio-loud outburst
of GRO J1655$-$40 in 1994 August -- September is an issue worth
investigating.
\section{SUMMARY}
We have obtained optical spectra of the soft X-ray transient
GRO~J1655$-$40 during different energy states (quiescence,
high-soft and hard outburst) between 1994 August
and 1997 June.
Spectra taken one day after the observed brightening
in $V$ in 1996 April did not at that stage show evidence of emission
lines from an accretion disk. The main spectral features were narrow
absorption lines from the secondary star. The optical brightening
was probably due to an increase in the continuum emission from
the disk; emission lines could not be formed at that stage,
in the absence of X-ray irradiation.
For the 1996 -- 97 high-soft state, we have identified characteristic
features such as broad absorption lines at H$\alpha$ and H$\beta$,
partly filled by double-peaked emission lines. We argue that the broad
absorption was formed in the hot inner disk, and the double-peaked
\ion{He}{2} $\lambda 4686$ emission originated in a temperature-inversion
layer on the disk surface, created by the soft X-ray irradiation.
The observed rotational velocities suggest that the disk probably
extended beyond its tidal radius.
We have also found that the double-peaked H$\alpha$ emission was associated
with the hard X-ray flux, suggesting that it was probably emitted at
comparable radii but from
deeper layers (at higher optical depth) than \ion{He}{2} $\lambda 4686$.
We note that the Balmer lines were not emitted uniformly from the whole
disk surface, but appeared to come only from a double-armed
region, possibly the effect of tidal density waves or shocks on the disk.
We have identified three classes of lines in the spectra taken
in 1994 August -- September before the onset of a hard X-ray flare:
broad absorption, broad (flat-topped) emission and narrow emission.
The narrow emission line profiles cannot be explained by a conventional
thin accretion disk model. We speculate that the system was in a transient
state in which the accretion disk had an extended, optically
thin cocoon and significant matter outflow, which would also help to
explain the systematic blue-shift of the narrow emission lines and the
flat-topped profiles of the broad emission lines.
After the onset of a hard X-ray flare, the disk signatures disappeared,
and strong H$\alpha$ and Paschen emission was detected, suggesting
that the cocoon became optically thick to the Balmer lines.
High-ionization lines disappeared or weakened.
Two weeks after the end of the flare, the cocoon appeared to be
once again optically thin.
We thank: David Buckley, Gary Da Costa, Paul Francis, Charlene Heisler
and Raffaella Morganti for
taking some of the spectra; Alan Harmon and Shuang-Nan Zhang
for the BATSE data and discussions; and Geoff Bicknell,
Ralph Sutherland, Martijn de Kool, Stefan Wagner and John Greenhill
for comments and discussions. R.~W.~H. acknowledges financial assistance from
the Australian Research Council, and thanks his co-observers in 1994
August -- September, Max Pettini, Dave King and Linda Smith, for their
tolerance in allowing the ToO observations to displace some of the
scheduled observing program. K.~W. acknowledges support from
the ARC through an Australian Research Fellowship and the support
from S.~N.~Zhang for the visits to NASA-MSFC and UAH.
\clearpage
\section{Introduction}
Fields with operators appear everywhere in mathematics, and are particularly present in areas close to algebra. The development of differential and difference algebra dates back to J. Ritt (\cite{Ritt}) in the 1950's, and was then further expanded by E. Kolchin (\cite{Ko}, \cite{Ko2}) and R. Cohn (\cite{Co}) in the 1960's.
The study of differential and difference fields has been important since the 1940's and has applications in many areas of mathematics.
One can also mix the operators; this gives the notion of differential-difference fields, i.e., fields equipped with commuting derivations and automorphisms. These fields were first studied from the point of view of algebra by Cohn in \cite{Co70}.
Model theorists have long been interested in fields with operators,
until recently mainly on fields of characteristic $0$ with one or
several commuting derivations (ordinary or partial differential fields), and
on fields with one automorphism (difference fields). The first author also
started in \cite{Bu1} the model-theoretic study of the existentially closed
difference-differential fields of characteristic $0$, around 2005 (one derivation, one automorphism). Then work of D. Pierce (\cite{Pier}) and of O. Le\'on-S\'anchez (\cite{LS}) brought back the model-theory of differential fields with several commuting derivations in the forefront of research in the area, as well as when a generic automorphism is added to these
fields. Unlike in the pure ordinary differential case, and in the pure
difference case, little is known about the possible interactions between definable subsets of existentially closed differential fields with several derivations, or with an added automorphism.
In this paper we study groups definable in existentially closed differential-difference fields. We were motivated by the following result of Phyllis Cassidy (Theorem~19 in \cite{Ca2}):
\medskip\noindent
{\bf Theorem}. {\em Let $\calu$ be a differentially closed field of
characteristic $0$ (with $m$ commuting derivations), let $H$
be a simple algebraic group, and $G\leq H(\calu)$ a
$\Delta$-algebraic subgroup of $H(\calu)$ which is Zariski dense in $H$. Then $G$ is definably isomorphic to $H(L)$,
where $L$ is the constant field of a set $\Delta'$ of commuting derivations. Furthermore, the isomorphism is
given by conjugation by an element of $H(\calu)$. }
\medskip\noindent
She has similar results for Zariski dense $\Delta$-closed subgroups of
semi-simple algebraic groups.
A version of her result for (existentially closed) difference
fields was also proved by Chatzidakis, Hrushovski and Peterzil (Proposition 7.10 of \cite{CHP}):
\medskip\noindent
{\bf Theorem}. {\em
Let $(\calu, \sigma)$ be a model of $\acfa$. Let $H$ be an almost simple algebraic group defined over $\calu$, and let $G$ be a Zariski dense definable subgroup
of $H(\calu)$. If $SU(G)$ is infinite then $G = H(\calu)$.
If $SU(G)$ is finite, there are an isomorphism $f: H \rightarrow H' $ of algebraic groups, and integers $m>0$ and $n$ such that some subgroup of $f(G)$
of finite index is conjugate to a subgroup of $H'(\fix (\sigma^m Frob^n))$. In particular, the generic types of $G$ are non-orthogonal to the formula
$\sigma^m(x)= x^{p^{-n}}$. If $H$ is defined over $\fix(\sigma)^{alg}$, then we may take $H = H'$ and $f$ to be conjugation by an element of $H(\calu)$.}
\medskip\noindent
In this paper, we
generalise
Cassidy's results to the theory $\dcfa$, the model companion of
the theory of fields of characteristic $0$ with $m$
derivations and an automorphism which commute, and one of our main
results is: \\[0.1in]
{\bf Theorem \ref{prop1}}. {\em Let $\calu$ be a model of $\dcfa$, let $H$ be a simple algebraic group defined
over $\rat$, and $G$ a definable subgroup
of $H(\calu)$ which is Zariski dense in $H$.
Then $G$ has a definable subgroup $G_0$ of finite index, the
Kolchin closure of which is conjugate to $H(L)$, where $L$
is an
$\call_\Delta$-definable subfield of $\calu$, say by an element $g$. Furthermore,
either $G_0^g=H(L)$, or $G_0^g\subseteq H(\fix(\si^\ell)(L))$ for some
integer $\ell\geq 1$.
In the latter case, if $H$ is centerless, we are able to describe
precisely the subgroup $G_0^g$ as $\{h\in H(L)\mid \si^r(h)=\varphi(h)\}$
for some $r$ and algebraic automorphism
$\varphi$ of $H(L)$. }\\[0.05in]
We have analogous results for Zariski dense definable subgroups of semi-simple
centerless algebraic groups (Theorem \ref{prop2semi}). Using an
isogeny result (Proposition \ref{thm1}), and introducing the correct
notion of {\em definably quasi-(semi-)simple} definable group, gives then
slightly more general results, see Theorem \ref{MainTheorem}. \\
Inspired by results of Hrushovski and Pillay on groups definable in
pseudo-finite fields, we then endeavour to show that definable groups
which are definably
quasi-semi-simple have a smallest definable subgroup of finite
index (this smallest definable subgroup is called the {\em connected
component}). This is done in Corollary \ref{connected2}, and follows from
several intermediate results. We first show the result for Zariski dense definable
subgroups of simply connected simple algebraic groups, give a
precise description of the connected component (Theorem
\ref{connected1}), and show that every definable Zariski dense subgroup
of $H(\calu)$ is quantifier-free definable. We then show the existence
of a smallest definable subgroup of finite index for an arbitrary simple
algebraic group $H$ (Theorem \ref{prop2}), to finally reach the conclusion.
Part of the study involves giving a description of definable subgroups
of algebraic groups and we obtain the following result, of independent
interest: \\[0.05in]
{\bf Theorem \ref{sbgp}}. {\em Let $H$ be an algebraic group, $G\leq
H(\calu)$ a Zariski dense definable subgroup. Then there are an
algebraic group $H'$,
a quantifier-free definable subgroup $R$ of $H'(\calu)$, together with a
quantifier-free definable $f:R\to G$, with $f(R)$ contained and of finite index in $G$, and $\Ker(f)$
finite central in $R$.}\\[0.05in]
We conclude the paper with some results on the model theory of the fixed
subfield $\fix(\si)=\{a\in\calu\mid \si(a)=a\}$ and of its finite
algebraic
extensions.
\bigskip\noindent
The paper is organised as follows. Section 2 contains the algebraic and
model-theoretic preliminaries. Section 3 introduces the notions of
definably quasi-(semi)simple groups and shows the isogeny result
(\ref{thm1}). Section 4 contains the main results of the paper:
description of Zariski dense definable subgroups of simple and
semi-simple algebraic groups (\ref{prop1}, \ref{MainTheorem} and
\ref{prop2semi}). Section 5 gives the results on definable subgroups of
algebraic groups which are not quantifier-free definable (\ref{sbgp})
and shows that definably quasi-semi-simple definable groups have a
definable connected component. Section 6 gives the results on
the fixed field.
\section{Preliminaries}
This section is divided into four subsections: 2.1 - Differential and
difference algebra; 2.2 - Model theory of differential and difference
fields; 2.3 - The results of Cassidy; 2.4 - Quantifier-free canonical
bases.
\noindent{\bf Notation and conventions:}
{\bf All rings are commutative, all
fields are commutative of characteristic 0.}
\noindent
If $K$ is a field, then $K^{alg}$ denotes an algebraic closure of
$K$ (in the sense of the theory of fields).
\subsection{Differential and difference algebra}
\begin{defn}
For more details, please see \cite{Ko2},
\cite{Co} and \cite{Co70}.
\begin{enumerate}
\item Recall that a \emph{derivation} on a ring $R$ is a map $\delta: R
\to R$ which satisfies $\delta(a +b) = \delta(a) + \delta (b)$ and
$\delta(ab) = a \delta (b) + \delta(a) b$ for all $a, b \in R$.
\item A \emph{differential ring}, or \emph{$\Delta$-ring}, is a ring
equipped with a set $\Delta=\{\delta_1,\ldots,\delta_m\}$ of
commuting derivations. A {\em differential field} is a differential
ring which is a field.
\item A {\em difference ring} is a ring equipped with a distinguished
automorphism, which we denote by $\si$. (This differs from the usual
definition which only requires $\si$ to be an endomorphism.) A {\em difference field} is a
difference ring which is a field.
\item A {\em difference-differential ring} is a differential ring
equipped with an automorphism $\si$ (which commutes with the
derivations). A {\em difference-differential field} is a
difference-differential ring which is a field.
\end{enumerate}
\end{defn}
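A standard example, not taken from the text, may help fix ideas: the rational function field with the usual derivation and the shift automorphism is a difference-differential field.

```latex
% Example (ours): $K=\rat(t)$, one derivation $\delta = d/dt$, and the
% shift automorphism $\si\colon f(t)\mapsto f(t+1)$. For any $f\in K$,
\[
  \si(\delta f)(t) \;=\; f'(t+1) \;=\; \frac{d}{dt}\, f(t+1)
  \;=\; \delta(\si f)(t),
\]
% so $\si$ and $\delta$ commute, and $(\rat(t),\delta,\si)$ is a
% difference-differential field in the sense of (4).
```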
\begin{notation}
\begin{enumerate}
\item If $\Delta'$ is a set of derivations on the field $K$, then
$K^{\Delta'}$ denotes the field of $\Delta'$-constants, i.e.,
$\{a\in K\mid \delta(a)=0\, \forall \delta\in \Delta'\}$.
\item
Similarly,
if $K$ is a difference field, then $\fix(\si)(K)$, or
$\fix(\si)$ if there is no ambiguity, denotes
the {\em fixed field} of $K$, $\{a\in K\mid \si(a)=a\}$.
\item Let $K\subset\calu$ be difference-differential fields, and
$A\subset \calu$. Then $K(A)_\Delta$ denotes the differential field
generated by $A$ over $K$, $K(A)_\si$ the difference field generated
by $A$ over $K$, and $K(A)_{\si,\Delta}$ the difference-differential
field generated by $A$ over $K$. (Note that we require $K(A)_\si$ and
$K(A)_{\si,\Delta}$ to be closed under $\si\inv$.)
\end{enumerate}
\end{notation}
\medskip\noindent
{\bf \large Polynomial rings and the corresponding ideals and topologies}
\begin{defn} Let $K$ be a difference-differential ring, $y=(y_1,\ldots,y_n)$ a tuple of
indeterminates.
\begin{itemize}
\item
Then $K\{y\}$ (or $K\{y\}_\Delta$) denotes the ring of
polynomials in the variables $\delta_1^{i_1}\cdots\delta_m^{i_m}y_j$,
where $1\leq j\leq n$, and the superscripts $i_k$ are non-negative
integers. It becomes naturally a differential ring, by setting
$\delta_k(\delta_1^{i_1}\cdots\delta_m^{i_m}y_j)=\delta_1^{j_1}\cdots\delta_m^{j_m}y_j$,
where $i_\ell=j_\ell$ if $\ell\neq k$, and $j_k=i_k+1$. The elements of
$K\{y\}$ are called {\em differential polynomials}, or {\em
$\Delta$-polynomials}.
\item $K[y]_\si$ denotes the ring of polynomials in the variables
$\si^i(y_j)$, $1\leq j\leq n$, $i\in\zee$, with the obvious action of
$\si$; thus it is also a difference ring. They are called {\em
difference polynomials}, or {\em $\si$-polynomials}.
\item
$K\{y\}_\si$ denotes the ring of
polynomials in the variables $\si^i \delta_1^{i_1}\cdots\delta_m^{i_m}y_j$, with the obvious action of $\si$
and derivations. They are called {\em
difference-differential polynomials}, or {\em $\si$-$\Delta$-polynomials}.
\item A {\em $\Delta$-ideal} of a differential ring $R$ is an ideal
which is closed under the derivations in $\Delta$ and it is
called {\em linear} if it is generated by homogeneous linear
$\Delta$-polynomials.
Similarly, a {\em $\si$-ideal} $I$ of a
difference ring $R$ is an ideal closed under $\si$; if it is also
closed under $\si\inv$, we will call it {\em reflexive}; if whenever
$a\si^n(a)\in I$ then $a\in I$, it is {\em perfect}.
Finally, a {\em $\si$-$\Delta$-ideal} is an ideal which is closed under
$\si$ and $\Delta$.
\end{itemize}
\end{defn}
\begin{rem}
As with the Zariski topology, if $K$ is a difference-differential field,
the set of zeroes of differential
polynomials, $\si$-polynomials and $(\si, \Delta)$-polynomials in some
$K^n$ are the basic closed sets of a Noetherian topology on $K^n$, see
Corollary 1 of Theorem III in \cite{Co70}. We will call these sets {\em
$\Delta$-closed} (or {\em Kolchin
closed}, or {\em $\Delta$-algebraic}), {\em $\si$-closed/algebraic} and
{\em $\si$-$\Delta$-closed/algebraic} respectively. These
topologies are called the {\em Kolchin topology} (or {\em
$\Delta$-topology}), {\em $\si$-topology} and {\em $(\si, \Delta)$-topology}
respectively. There are natural notions of closures and of irreducible
components.
\end{rem}
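For instance (our illustration, not in the text), the common zero set of the difference polynomial $\si(y)-y$ and the differential polynomials $\delta_i y$ is a basic $\si$-$\Delta$-closed set:

```latex
% Illustration (ours): a basic $\si$-$\Delta$-closed subset of $\calu$,
\[
  C \;=\; \{a\in\calu \mid \si(a)=a,\ \delta_1(a)=\cdots=\delta_m(a)=0\},
\]
% the zero set of the $\si$-$\Delta$-polynomials
% $\si(y)-y,\ \delta_1 y,\ \ldots,\ \delta_m y$; it is in fact a subfield,
% the field of $\Delta$-constants of the fixed field.
```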
\begin{rem} \vlabel{defga} The following results are certainly classical
and well-known, but we could not find a reference. Recall that we are in
characteristic $0$; this result fails in positive characteristic. We
let $K$ be a differential subfield of the differentially closed field $\calu$. Consider the commutative monoid $\Theta$ (with $1$)
generated by $\delta_1,\ldots,\delta_m$, and let $K\Theta$ be the
$K$-vector space with basis $\Theta$. It can be made into a ring,
using the commutation rule $\delta_i \cdot a=a\delta_i
+\delta_i(a)$, $i=1,\ldots,m$. Each element $f$ of $\calu\Theta$ defines a {\em linear
differential operator} $L_f:\ga\to\ga$, defined by $a\mapsto f(a)$. One
has $L_{f\cdot g}=L_f\circ L_g$. Every $\Delta$-closed subgroup of $\ga(\calu)$ is
defined as the set of zeroes of linear differential operators in $\calu\Theta$, and for
$n\geq 1$, every $\Delta$-closed subgroup of $\ga^n(\calu)$ is defined by
conjunctions of equations of the form $L_1(x_1)+\cdots +L_n(x_n)=0$,
with the $L_i$ in $\calu\Theta$, and with the $L_i$ in $K\Theta$ if
the subgroup is defined over $K$, see e.g. Proposition 11 in \cite{Ca1}.\\[0.05in]
Let $S$ be a $K$-subspace of $K\Theta$, and assume that it is closed
under $\delta_i$, $i=1,\ldots,m$, and that it does not contain $1$. Then
the differential ideal $I$ generated by the set $S(x):=\{f(x)\mid f\in S\}\subset
K\{x\}_\Delta$ does not contain $x$, and is prime. \\
Note that $I$ is simply the $K\{x\}_\Delta$-module generated by $S(x)$,
i.e., an element of $I$ is a finite $K\{x\}_\Delta$-linear combination
of elements of $S(x)$. Moreover, all elements of $I$ have constant term
$0$. Every element $f$ in $K\{x\}_\Delta$ can be written uniquely as
$f_0+f_1+f_{>1}$, with $f_0$ the constant term, $f_1$ the sum of the
linear terms, and $f_{>1}$ the sum of terms of $f$ of total degree $\geq
2$. Note that $(f+g)_i=f_i+g_i$ for $i\in \{0,1,>1\}$. Moreover
$$(fg)_0=f_0g_0,\ \ (fg)_1 = f_0g_1+f_1g_0, \ \
(fg)_{>1}=f_{>1}g+fg_{>1}+f_1g_1.$$
This easily implies that if $f\in I$, then $f_1\in S(x)$: if $g\in I$,
then $g_0=0$ and so $(fg)_1=f_0g_1$. As $1\notin
S$, this gives that $x\notin I$. Furthermore, the primeness of $I$
follows from the fact that it is generated by linear differential
polynomials, so that, as a ring, $K\{x\}_\Delta/I$ is isomorphic to a
polynomial ring (in maybe infinitely many indeterminates) over
$\calu$.
\end{rem}
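The displayed identities for $(fg)_0$, $(fg)_1$ and $(fg)_{>1}$ in the remark above can be checked on a small example of our own, with one ordinary variable:

```latex
% Sanity check (ours): take $f = x + x^2$ and $g = 1 + x$, so that
% $f_0=0,\ f_1=x,\ f_{>1}=x^2$ and $g_0=1,\ g_1=x,\ g_{>1}=0$. Then
\[
  fg \;=\; x + 2x^2 + x^3,
\]
\[
  (fg)_1 \;=\; x \;=\; f_0g_1 + f_1g_0, \qquad
  (fg)_{>1} \;=\; 2x^2 + x^3 \;=\; f_{>1}g + fg_{>1} + f_1g_1 ,
\]
% since $f_{>1}g = x^2 + x^3$, $fg_{>1}=0$ and $f_1g_1 = x^2$.
```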
\subsection{Model theory of differential and difference fields} \label{prel}
\begin{notation}
We consider the language $\call$ of rings, let
$\Delta=\{\delta_1,\ldots,\delta_m\}$. We define $\call_\Delta=\call\cup
\Delta$, $\call_\si=\call\cup \{\si\}$ and $\call_{\si,
\Delta}=\call_\Delta\cup \{\si\} $ where the $\delta_i$ and $\si$ are
unary function symbols.
\end{notation}
\para {\bf The theory $\dcf$}\\
The model theoretic study of differential fields (with one derivation, in
characteristic $0$) started with the work of Abraham Robinson (\cite{Ro})
and of Lenore Blum (\cite{Bl}). For several commuting derivations,
Tracey McGrail showed in \cite{MG} that the $\call_\Delta$-theory of differential fields of characteristic zero
with $m$ commuting derivations has a model companion, which we denote by
$\dcf$. The $\call_\Delta$-theory $\dcf$ is complete, $\omega$-stable and eliminates
quantifiers and imaginaries. Its models are called {\em differentially
closed}. Differentially closed fields had appeared earlier in the work
of Ellis Kolchin (\cite{Ko}).
\para {\bf Definable and algebraic closure, independence}. Let $(\calu, \Delta)$ be a
differentially closed field.
If $A\subset \calu$, then $\dcl_\Delta(A)$ and $\acl_\Delta(A)$ denote
the definable and algebraic closure in the sense of the theory $\dcf$.
Then $\dcl_\Delta(A)$ is the smallest differential field
containing $A$, and $\acl_\Delta(A)$ is the field-theoretic algebraic closure
of $\dcl_\Delta(A)$. Independence is given by independence in the sense
of the theory ACF (of algebraically closed fields) of the algebraic
closures, i.e., $A\dnfo_CB$ iff $\acl_\Delta(CA)$ and $\acl_\Delta(CB)$
are linearly disjoint over $\acl_\Delta(C)$.
\para {\bf The theories {\rm ACFA} and $\dcfa$}\\
The $\call_\si$-theory of difference fields has a
model companion denoted $\acfa$ (\cite{M}, see also \cite{CH} and \cite{CHP}).
Omar Le\'on-S\'anchez showed that the $\call_{\si,\Delta}$-theory of
difference-differential fields admits a model companion, $\dcfa$, and he
gave an explicit axiomatisation of this theory in \cite{LS}. (When
$m=1$, the theory was extensively studied by the third author, in
\cite{Bu2}, see also \cite{Bu1}). \\[0.05in]
The theories $\acfa$ and $\dcfa$ have similar properties: they are model-complete, supersimple and eliminate imaginaries, but they are not complete and do not eliminate quantifiers. The completions of both theories are obtained by
describing the isomorphism type of the difference subfield
$\rat^{alg}$. In what follows we will view ACFA as $\dcfa$ with $m=0$,
and we fix a (sufficiently saturated) model $\calu$ of $\dcfa$.
\para{\bf The fixed field}\\
The fixed field of $\calu$, $\fix (\sigma):= \{x \in \calu: \sigma (x)=x\}$, is a pseudo-finite
field. Then $\fix (\si^k)$ is the unique extension of $\fix
(\si)$ of degree $k$.
\begin{thm} (\cite{LS}, Propositions 3.1, 3.3 and 3.4). \vlabel{LS1}
Let $a, b$ be tuples in $\calu$ and let $A\subseteq \calu$.
We will denote by $\acl(A)$ the model-theoretic algebraic closure of $A$ in the
$\call_{\si,\Delta}$-structure $\calu$. Then:
\begin{enumerate}
\item $\acl(A)$ is the (field-theoretic) algebraic closure of the
difference-differential field generated by $A$.
\item If $A = \acl(A)$, then the union of the quantifier-free diagram of $A$ and of the theory $\dcfa$ is a complete theory in the language $\call_{\si,\Delta}(A)$.
\item $tp(a/A)=tp(b/A)$ if and only if there is an
$\call_{\si,\Delta}(A)$-isomorphism $\acl(Aa)\to \acl(Ab)$ sending $a$
to $b$.
\item Every $\call_{\si,\Delta}$-formula $\varphi(x)$ is
equivalent modulo $\dcfa$ to a disjunction of formulas of
the form $\exists y\, \psi(x,y)$, where $\psi$ is
quantifier-free (positive), and such that for every tuples
$a$ and $b$ (in a difference-differential field of
characteristic $0$), if $\psi(a,b)$ holds, then $b\in \acl(a)$.
\item Every completion of $\dcfa$ is supersimple (of SU-rank
$\omega^{m+1}$). Independence is given by independence (in the sense
of {\rm ACF}) of algebraically closed sets:\\
$a$ and $b$ are independent over $C$ if and only if the fields
$\acl(Ca)$ and $\acl(Cb)$ are linearly disjoint over $\acl(C)$.
\item Every completion of $\dcfa$ eliminates
imaginaries.
\item If $k\geq 1$, and $\calu\models \dcfa$, then the difference-differential field
$\calu[k]=(\calu,+,\cdot,\Delta,\si^k)$ is also a model of $\dcfa$, and the algebraic closure of
$\fix(\si)$ is a model of $\dcf$.
\end{enumerate}
\end{thm}
\begin{rem} \vlabel{rem-LS1}\begin{enumerate}
\item[(a)] Item (4) is stated in a slightly different way in
\cite{LS}. Here we prefer to have our set defined positively, at
the cost of $y$ consisting of maybe several elements. This gives
us that every definable subset of $\calu^n$ is the projection of a
$\si$-$\Delta$-algebraic set $W$ by a projection with finite fibers.
\item[(b)] Recalling that independence in $\dcf$ is given by
independence (in the sense of ACF) of algebraically closed sets, it
follows that another way of phrasing (5) is to say that independence
is given by independence (in the sense of $\dcf$) of algebraically
closed sets.
This shows in particular that $\dcfa$ is {\em one-based
over}
$\dcf$, a notion which was introduced by Thomas Blossier, Amador Martin-Pizarro and Frank O. Wagner in
\cite{BMW}.
\item[(c)] As with ACFA, it then follows that if $G$ is a definable subgroup
of some algebraic group $H$, and if one defines the {\em prolongations}
$p_n:H(\calu)\to H(\calu)\times \si(H(\calu))\times \cdots \times
\si^n(H(\calu))$, $g\mapsto (g,\si(g),\ldots,\si^n(g))$, and let $G_{(n)}$ be the Kolchin closure of
$p_n(G)$, then an element $g\in G$ is a generic if and only if for
each $n$, $p_n(g)$ is a generic of the $\Delta$-closed subgroup
$G_{(n)}$ of $H(\calu)\times \si(H(\calu))\times \cdots \times
\si^n(H(\calu))$. In particular, $G$ will have finite index in its
$\si$-$\Delta$-closure.
\item[(d)] Let $A\subset \calu$ be a difference-differential
subfield, and let $L$ be a difference-differential
field extending $A$. Assume that $L\cap
A^{alg}=A$. Then there is an $A$-embedding of $L$ into $\calu$. Indeed,
our assumption implies that $L\otimes_A A^{alg}$ is an integral domain,
and because $A^{alg}=\acl(A)$, the conclusion follows.
\item[(e)] This has the following consequence, which we will use:\\
Let $q$ be a
quantifier-free type over a difference-differential subfield $A$ of $\calu$, and
suppose that $q$ is stationary, i.e., if $a$ realises $q$, then
$A(a)_{\si,\Delta}\cap A^{alg}=A$. Let $f:A\to A'\subset\calu$ be an
isomorphism; then $f(q)$ is realised in $\calu$.
\item[(f)] When $m=0$, all these results appear in \cite{CH}. When
$m=1$, they appear in \cite{Bu1}, \cite{Bu2}.
\end{enumerate}
\end{rem}
\subsection{The results of Cassidy}
Let $\calu$ be a (sufficiently saturated) differentially closed
field of characteristic $0$. An (affine) \emph{$\Delta$-algebraic group} is a subset of affine space
which is both a differential variety in the sense of Kolchin and Ritt,
and whose group laws are morphisms of differential varieties. By
quantifier elimination of the theory $\dcf$, they correspond to
definable groups, see \cite{Ca1}. This context was extended to the
non-affine setting, see e.g. \cite{Ko2}, chapter 1, \S2. An
affine $\Delta$-algebraic group is then just a group definable in $\dcf$
(by quantifier-elimination). Cassidy shows that a connected semi-simple
differential group surjects (with finite kernel) onto a linear
differential algebraic group, i.e., a subgroup of some
GL$_n(\calu)$ (see Corollary 3 of Theorem 13 in \cite{Ca1}). A result
of Pillay (Theorem 4.1 and Corollary 4.2 in \cite{P-Fund}; it is
proved for one derivation, but the proof adapts immediately to
several commuting derivations) also tells us that every
differential algebraic group embeds into an algebraic
group, so putting the two together tells us that a connected semi-simple
differential group embeds into a semi-simple algebraic group.
\begin{defn}
A $\Delta$-algebraic group $G$ is \emph{$\Delta$-simple} if $G$ is non-commutative and has no non-trivial proper connected normal $\Delta$-closed subgroup.
Thus a finite center is allowed. \\
Similarly, a $\Delta$-algebraic group $G$ is \emph{$\Delta$-semi-simple}
if it has no non-trivial connected normal commutative $\Delta$-closed subgroup.
\end{defn}
The following results were shown by Phyllis Cassidy in \cite{Ca2}:
\begin{thm}\vlabel{thm15} (Cassidy, \cite{Ca2}, Theorem 15) Let $G$ be a
Zariski dense $\Delta$-closed subgroup
of a semi-simple algebraic group $A\leq {\rm GL}(n,\calu)$, with
simple components $A_1,\ldots,A_t$. Then there exist connected
nontrivial $\Delta$-simple normal $\Delta$-closed subgroups $G_1,\ldots,G_t$
of $G$ such that
\begin{enumerate}
\item If $i\neq j$, then $[G_i,G_j]=1$.
\item The product morphism $G_1\times \cdots \times G_t\to G$ is a
$\Delta$-isogeny (i.e., is onto, with finite kernel).
\item $G_i$ is the identity component of $G\cap A_i$, and is
Zariski dense in $A_i$.
\item $G$ is $\Delta$-semi-simple.
\end{enumerate}
\end{thm}
\begin{thm} \vlabel{Cassidy} (Cassidy, \cite{Ca2}, Theorem 19). Let $H$
be a simple algebraic group defined and split over $\rat$, and $G\leq H(\calu)$ be a
$\Delta$-algebraic subgroup which is Zariski dense in $H$. Then $G$ is definably isomorphic to $H(L)$,
where $L$ is the constant field of a set $\Delta'$ of commuting derivations. Furthermore, the isomorphism is
given by conjugation by an element of $H(\calu)$.
\end{thm}
\begin{rem}\vlabel{rem-Cass}
Cassidy's results are stated in different terms. Instead of speaking
of {\em simple algebraic groups, defined and split over $\rat$} in \cite{Ca2}, she
speaks about {\em simple Chevalley groups}. In fact, all her results are
stated in terms of Chevalley groups, but we chose not to do
that. Recall that any simple algebraic group is isomorphic to one
which is defined and split over the prime field, $\rat$ in our case. \\
When the field $F$ is algebraically closed, a \emph{Chevalley
group} is $G(F)$, where $G$ is a semi-simple connected algebraic group which is defined
and split over $\rat$. When the field $F$ is not
algebraically closed, with $G$ as above, it is defined as the
subgroup of $G(F)$ generated by the unipotent subgroups, and thus may be strictly
smaller than $G(F)$. Since we will consider fields which are not
algebraically closed, we preferred using the ``simple'' terminology. \\
Note also that the field $L$ of Theorem \ref{Cassidy} is
algebraically closed. We will therefore be able to use Fact
\ref{simple} below.
\end{rem}
\begin{fact}\vlabel{simple} Let $G$ be a simple algebraic group defined and split over
$\rat$, let $K$ be an algebraically closed field of characteristic
$0$. Then
\begin{itemize}
\item[(a)] $G(K)$ has no infinite normal subgroup;
\item[(b)] The field $K$ is
definable in the pure group $G(K)$.
\end{itemize}
\end{fact}
\noindent
Both assertions are well-known, but we were not able to find easy
references. The first assertion follows from the fact that if $g\in G(K)\setminus
Z(G(K))$, then the Zariski closure of the infinite irreducible set $g^{G(K)}g\inv$ is
connected, contains $1$, and therefore generates a Zariski closed normal
subgroup of
$G(K)$, which must equal $G(K)$. The second is also well-known, see for instance
Theorem~3.2 in \cite{KRT}.
\subsection{Quantifier-free canonical bases} \vlabel{qfcb}
As $\dcfa$ is
supersimple there is a notion of canonical basis for complete types
which is defined as a sort of amalgamation basis, and is not easy to
describe. In our case, we will focus on an easier concept: canonical
bases of quantifier-free types. They are defined as follows:\\[0.05in]
We work in a model $(\calu, \si, \Delta)$ of $\dcfa$.
Let $a$ be a finite tuple in
$\calu$, and $K\subset\calu$ a difference-differential field. We
define the {\em quantifier-free canonical basis} of $tp(a/K)$, denoted
by $\qfcb(a/K)$, as the smallest difference-differential subfield $k$ of
$K$ such that $k(a)_{\si,\Delta}$ and $K$ are linearly disjoint over
$k$. Another way of viewing this field is as the smallest
difference-differential subfield of $K$ over which the smallest
$K$-definable $\si$-$\Delta$-closed set containing $a$ is defined
(this set is called the
{\em $\si$-$\Delta$-locus of $a$ over $K$}). Analogous notions exist
for $\dcf$ and ACFA. We
were not able to find explicit statements of the following easy
consequences of the Noetherianity of the $\si$-$\Delta$-topology, so
we will indicate a proof.
\begin{lem} \vlabel{cb1} Let $a,K\subset\calu$ be as above.
\begin{enumerate}
\item $\qfcb(a/K)$ exists and is unique; it is finitely generated as a
difference-differential field.
\item Let $K\subset M\subset K(a)_{\si,\Delta}$. Then
$M=K(b)_{\si,\Delta}$ for some finite tuple $b$ in $M$.
\end{enumerate}
\end{lem}
\prf (1) Let $n=|a|$, and write $K\{y\}_\si=\bigcup_{r\in\nat} K[r]$, where $$K[r]=K[\si^i\delta_1^{i_1}\delta_2^{i_2}\cdots\delta_m^{i_m}y_j\mid
1\leq j\leq n,\ |i|+\sum_{k=1}^m i_k\leq r].$$
Then each $K[r]$ is finitely generated over $K$ as a ring, and is Noetherian. For each $r$, consider the ideal $I[r]=\{f\in K[r]\mid f(a)=0\}$, and
the corresponding $\si$-$\Delta$-closed subset $X[r]$ of $\calu^n$
defined by $I[r]$. Then the sets $X[r]$ form a decreasing sequence of
$\si$-$\Delta$-closed subsets of $\calu^n$, which stabilises for some
$r$, which we now fix. Note that the ideal $I[r]$ is a prime ideal (of
the polynomial ring $K[r]$), and as such has a smallest field of
definition, say $k_0$, and that $k_0$ is finitely generated as a
field, and is unique. We now let $k$ be the difference-differential
field generated by $k_0$. \\[0.05in]
{\bf Claim 1}.
$k(a)_{\si,\Delta}$ and $K$ are linearly disjoint over $k$.
\begin{proof}
This follows from the fact that $X[s]=X[r]$ for every $s\geq r$.
\end{proof}
\noindent
(2) Consider $B:=\qfcb(a/M)$. By (1), $B$ is finitely generated as a
difference-differential field. \\[0.05in]
{\bf Claim 2}. $KB=M$.
\begin{proof}
Indeed, by definition, $B(a)_{\si,\Delta}$ and $M$ are linearly disjoint
over $B$. Hence, $KB(a)_{\si,\Delta}$ and $M$ are linearly disjoint over
$KB$. Since $M\subseteq K(a)_{\si,\Delta}\subseteq KB(a)_{\si,\Delta}$, this is only possible if $KB=M$.
\end{proof}
\noindent
As $B$ is finitely generated as a difference-differential field, say by a finite tuple $b$, this gives $M=K(b)_{\si,\Delta}$.
\begin{rem}\vlabel{canonicalbase}
Given fields $K\subset L$ (of characteristic $0$),
the field $L$ is a regular extension of $L_0:=K^{alg}\cap L$. So, if
$L=K(a)_{\si,\Delta}$ for some (maybe infinite) tuple $a$, then $\qfcb(a/K^{alg})$ is contained in $L_0$, and we have $\qfcb(a/K^{alg})K=L_0$.
\end{rem}
\section{The isogeny result}
\noindent
We work in a sufficiently saturated model $(\calu, \Delta, \si)$ of
$\dcfa$. We will often work in its reduct to $\call_\Delta$. Unless
otherwise mentioned, definable will mean
$\call_{\si,\Delta}$-definable.
\begin{defn} Let $G$ be a definable group. We say that $G$ is
\emph{definably quasi-simple} if $G$ has no abelian subgroup of
finite index and if whenever $H$ is a definable
infinite subgroup of $G$ of infinite index, then its normaliser $N_G(H)$ has infinite
index in $G$.
We say that $G$ is {\em definably quasi-semi-simple} if $G$ has no abelian subgroup of
finite index and if whenever $H$ is a
definable infinite commutative subgroup of $G$ of infinite index, then its normaliser
$N_G(H)$ has infinite index in $G$.
\end{defn}
\begin{remark}\vlabel{rem-defss} In our context (of a supersimple theory), a definable group will in general have infinitely many
definable subgroups of finite index, so it will not have a smallest one. Note that our definition takes care of that problem, as both notions are
preserved when going to definable subgroups of finite index and
quotients by finite normal subgroups.
\end{remark}
\begin{lem}\vlabel{lem-defss} Let $G$ be a group, $G_0$ a definable
subgroup of $G$ of finite index, and $Z$ a finite normal subgroup of $G$.
\begin{enumerate}
\item $G$ is definably quasi-simple if and only if $G_0$ is definably
quasi-simple.
\item $G$ is definably quasi-simple if and only if $G/Z$ is definably
quasi-simple.
\item The same assertions hold with ``quasi-semi-simple'' in
place of quasi-simple.
\end{enumerate}
\end{lem}
\prf (1) Suppose $G_0$ is definably quasi-simple, let $H$ be an infinite subgroup of $G$ of infinite
index, and assume that $N_G(H)$ has finite index in $G$. Then
$N_G(H)\cap G_0$ has finite index in $G_0$; but $H\cap G_0$ has finite
index in $H$, hence is infinite, and of infinite index in $G_0$, and
we get the desired contradiction. \\
For the other direction, assume $H$
is an infinite subgroup of $G_0$ of infinite index in $G_0$, and with
$N_{G_0}(H)$ of finite index in $G_0$; then $N_G(H)$ has finite index
in $G$, which gives us the desired contradiction. \\[0.05in]
(2) By (1), going to a definable subgroup of $G$ of finite index, we
may assume that $Z$ is central in $G$. Assume $G/Z$ is definably
quasi-simple, and let $H$ be an infinite definable
subgroup of $G$ of infinite index. Then $HZ/Z$ is infinite and has
infinite index in $G/Z$, so its normalizer $N$ has infinite index in
$G/Z$, and if $N'\supset Z$ is such that $N'/Z=N$, then $N'$ has
infinite index in $G$, and normalizes $HZ$. But $HZ$ is a finite union
of cosets of $H$, $N'$ permutes these cosets, which implies that
$N_G(H)$ has infinite index in $G$. The other direction is immediate
because $Z$ is central. \\[0.05in]
(3) Reason as in (1) and (2).
\begin{prop} \vlabel{thm1} Let $G$ be a group definable in $\calu$, and
assume that $G$ is definably quasi-simple (resp. definably
quasi-semi-simple). Then there are a definable subgroup $G_0$ of finite index in $G$,
a $\Delta$-simple (resp. $\Delta$-semi-simple) $\Delta$-algebraic
group $H$ defined and split over $\rat$,
and a definable homomorphism $\phi:G_0\to H(\calu)$, with finite
kernel and Kolchin dense image.
\end{prop}
\prf By Remark \ref{rem-LS1}(b), and by Theorem~4.9 and Corollary 4.10
of \cite{BMW}, there is a homomorphism $\phi$ of some definable subgroup $G_0$ of finite
index in $G$ into a group $\bar G$ which is definable
in the differential field $\calu$, and with $\Ker(\phi)$ finite. We may assume that the
image of $G_0$ is Kolchin dense in $\bar G$ and, going to a subgroup of $G_0$ of finite index, that $\bar G$ is connected (as a
$\Delta$-algebraic group).\\
Moreover, if $G$ is definably quasi-simple, we may assume that $\bar G$ is a
$\Delta$-simple group: if $N$ is an $\call_\Delta$-definable
connected normal subgroup of $\bar G$,
then $\phi\inv(N) \cap G_0$ is a normal subgroup of $\phi(G_0)$. Our
hypothesis on $G$ implies that $\phi\inv(N) \cap G_0$ is finite, and so
is $N\cap
\phi(G_0)$.
We may therefore compose $\phi$ with the projection
$\bar G\to \bar G/N$. \\
If $G$ is definably quasi-semi-simple, the same reasoning allows us to assume that
$\bar G$ is $\Delta$-semi-simple, i.e., that it has no proper connected abelian
normal
$\Delta$-definable subgroup. Then Theorem 17 of \cite{Ca2} and its
corollary give us the result in the simple case, and Theorem 18 of
\cite{Ca2} in the semi-simple case.
\section{Definable subgroups of semi-simple algebraic
groups} \label{SDefGroups}
In this section we give a description of Zariski dense definable subgroups of simple and semi-simple algebraic groups.
We work in a sufficiently saturated model $(\calu, \Delta, \si)$ of
$\dcfa$. Unless
otherwise mentioned, definable will mean
$\call_{\si,\Delta}$-definable.
\begin{thm} \vlabel{prop1} Let $H$ be a simple algebraic group defined
over $\rat$, and $G$ a definable subgroup
of $H(\calu)$ which is Zariski dense in $H$.
Then $G$ has a definable subgroup $G_0$ of finite index, the
Kolchin closure of which is conjugate to $H(L)$, where $L$
is an
$\call_\Delta$-definable subfield of $\calu$, say by an element $g$. Furthermore,
either $G_0^g=H(L)$, or $G_0^g\subseteq H(\fix(\si^\ell)\cap L)$ for some
integer $\ell\geq 1$.
In the latter case, if $H$ is centerless, we can describe the subgroup
$G_0^g$ precisely: it equals $\{h\in H(L)\mid \si^n(h)=\varphi(h)\}$
for some integer $n$ and algebraic automorphism
$\varphi$ of $H(L)$.
\end{thm}
\prf Replacing $G$ by a subgroup of finite index, we may
assume that the Kolchin closure $\bar G$ of $G$ is connected. Then
$\bar G$ is also Zariski dense in $H$, and by Theorem \ref{Cassidy},
$\bar G$ is conjugate to $H(L)$, for some $\call_\Delta$-definable
subfield $L$ of $\calu$.\\[0.05in]
The strategy is the same as in the proof of Proposition 7.10 in
\cite{CHP}. Going to the $\si$-$\Delta$-closure of $G$ within $H(L)$, and then to a subgroup
of finite index, we may assume that $G$ is quantifier-free definable,
and that it is connected for the $\si$-$\Delta$-topology. If $G=H(L)$, then we
are done, because $H(L)$ has no proper definable subgroup of finite index,
since it is simple (see Fact \ref{simple}). Assume therefore that $G\neq H(L)$. We will first treat the case
where $H$ is centerless.\\[0.05in]
In the notation of Remark \ref{rem-LS1}(c), let $n$ be the smallest integer such
that $G_{(n)}$ is not equal to $H(L)\times \si(H(L))\times
\cdots\times \si^n (H(L))$. If $\pi$ is the projection on the last factor
$\si^n(H(L))$, then $\pi(G_{(n)})=\si^n(H(L))$. \\[0.05in]
Write $G_{(n)}\cap ((1)^n\times
\si^n(H(L)))=(1)^n\times S_0$. Because $G_{(n)}$ projects onto $\si^n(H(L))$, it follows that $S_0$ is a
normal subgroup of $\si^n(H(L))$: Let $s \in S_0$ and $g \in
\si^n(H(L))$. Since $\pi(G_{(n)})=\si^n(H(L))$, there is $h\in
H(L)\times \cdots \times \si^{n-1}(H(L))$ such that
$(h, g) \in G_{(n)}$. Then $(h, g)^{-1}(1, s)(h, g) = (1, g^{-1}sg) \in
G_{(n)}$, so $g^{-1}sg \in S_0$. \\[0.05in]
Since $G_{(n)}$ projects onto
$G_{(n-1)}=H(L)\times \cdots\times \si^{n-1}(H(L))$ and is not equal to
$H(L)\times \cdots\times \si^n(H(L))$, the normal subgroup $S_0$ must
equal $(1)$: as $Z(H)=(1)$, the group $\si^n(H(L))$ is simple, and
$S_0=\si^n(H(L))$ would force $G_{(n)}=H(L)\times \cdots\times
\si^n(H(L))$. So $G_{(n)}$ is the graph of a group epimorphism
$\theta:H(L)\times \cdots\times \si^{n-1}(H(L))\to \si^n(H(L))$. As
all $\si^i(H(L))$ are simple, it follows that $\Ker(\theta)$ is a product
of some of the factors, and by minimality of $n$, the first factor
$H(L)$ is not contained in $\Ker(\theta)$. Hence,
$\Ker(\theta)=\si(H(L))\times \cdots\times \si^{n-1}(H(L))$, and
$G_{(n)}$ is in fact defined by the equation $\si^n(g)=\theta'(g)$,
where $\theta'$ is the morphism $H(L)\to \si^n(H(L))$ induced by $\theta$.
Note that $\theta'$ is $\call_\Delta$-definable, and defines an
isomorphism between the groups $H(L)$ and $H(\si^n(L))$.\\[0.05in]
The Theorem of
Borel-Tits (see Theorem A in \cite{Bo-Ti}, or 2.7, 2.8 in \cite{St72}, or Theorem~4.17 in \cite{Po})
which describes abstract isomorphisms between simple algebraic groups,
tells us that there are an
algebraic automorphism $\varphi$ of the algebraic group
$H(L)$ and a field
isomorphism $\psi:L\to \si^n(L)$, such that $\theta'=
\bar\psi\varphi$, where
$\bar\psi$ is the obvious isomorphism $H(L)\to H(\si^n(L))$ induced by
$\psi$. Since $\theta'$ and $\varphi$ are
$\call_\Delta$-definable, so is $\psi$, by Fact \ref{simple}(b).
\begin{claim} $L=\si^n(L)$ and $\psi=id$.
\end{claim}
\begin{proof} The graph of $\psi$ defines an additive subgroup $S$ of
$L\times \si^n(L)\leq \calu\times \calu$. \\
By Remark \ref{defga} there are linear differential polynomials $F_i(x)$ and $G_i(y)$,
$i=1,\ldots, s$, such that $S$ is defined by the equations
$F_i(x)=G_i(y)$, $i=1,\ldots,s$. Because $S$ is the graph of an
isomorphism, we have
$\bigcap_{i=1}^s\Ker(F_i)=\{0\}=\bigcap_{i=1}^s\Ker(G_i)$. Hence, $x$
belongs to the differential ideal generated by the $F_i(x)$, and this
implies (see Remark \ref{defga}) that there are linear differential polynomials $M_1,\ldots,M_s$
such that $\sum_{i=1}^s M_i(F_i(x))=x$; letting $P(y)=\sum_{i=1}^s M_i(G_i(y))$,
we get $x=P(y)$. Since $S$ is the graph of a field automorphism, we must then
have $P(y)=y$, i.e.\ $\psi=id$.\end{proof}
\noindent
An alternative proof is to invoke a result of Sonat Suer (Theorem~3.38 in
\cite{S-PhD}) to deduce that $L=\si^n(L)$, and then to show that
$\psi=id$. \\%[0.05in]
\noindent
In other words, we have shown that $\theta'$ is an algebraic group
automorphism of $H(L)$; in particular, this proves the last assertion: when $G<H(L)$, the group $G$ is
defined by
$$\{h\in H(L)\mid \si^n(h) = \theta'(h)\}.$$
By Proposition~14.9 of \cite{Bo}, the group
${\rm Inn}(H)$ of
inner automorphisms of $H(L)$ has finite index in the group
$\Aut(H)$ of
algebraic automorphisms of $H(L)$. Moreover $\si^n$ induces a
permutation of $\Aut(H)/{\rm Inn}(H)$, and hence there are some $r\in\nat^*$ and
$h\in H(L)$ such
that $$\si^{n(r-1)}(\theta')\circ\si^{n(r-2)}(\theta')\circ \cdots \circ\theta'=\lambda_h,$$ where
$\lambda_h$ is conjugation by $h$.
I.e., our group $G$ is contained in the group $G'$ defined by
$\si^{nr}(g)=\lambda_h(g)$.\\
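For the reader's convenience, here is the telescoping computation behind
this containment: for $g\in G$ we have $\si^n(g)=\theta'(g)$, hence
$$\si^{2n}(g)=\si^n(\theta'(g))=\si^n(\theta')(\si^n(g))=\big(\si^n(\theta')\circ\theta'\big)(g),$$
and, by induction, $\si^{jn}(g)=\big(\si^{n(j-1)}(\theta')\circ\cdots\circ\theta'\big)(g)$ for
all $j\geq 1$; taking $j=r$ gives $\si^{nr}(g)=\lambda_h(g)$.\\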
By $\dcfa$, there is some $u\in H(L)$ such that $\si^{nr}(u)=h\inv
u$. So, if $g\in G'$, then
\begin{align*}
\si^{nr}(u\inv gu)&=\si^{nr}(u\inv)\lambda_h(g)\si^{nr}(u)\\
&= (u\inv h) (h\inv gh)(h\inv u)\\
&=u\inv gu.
\end{align*}
I.e., $u\inv G'u\subset H(\fix(\si^{nr})\cap L)$. \\[0.1in]
This settles the case where $H$ is centerless.
Assume that the center $Z$ of $H$ is non-trivial. By the first part we know
that there are $u\in H(\calu)$ and $\ell\geq 1$ such that $(u\inv
GZu)/Z\subseteq
(H/Z)(\fix(\si^{\ell})\cap L)$. Since $Z$ is finite and characteristic,
there is some $s\in\nat$ such that for all $a\in Z$, we have
$\prod_{i=0}^{s-1}\si^{\ell i}(a)=1$. If $g\in u\inv
Gu$, then $a:=\si^\ell(g)g\inv\in Z$; iterating this relation gives
$\si^{\ell s}(g)=\big(\prod_{i=0}^{s-1}\si^{\ell i}(a)\big)\,g=g$, and so
$u\inv G u\subset H(\fix(\si^{\ell s}))$. \qed
\begin{cor} \vlabel{MainTheorem}
Let $G$ be an infinite group definable in a model
$\calu$ of $\dcfa$, and suppose that $G$ is definably quasi-simple.
Then there are a simple algebraic group $H$ defined and split over
$\rat$, a definable subgroup $G_0$ of $G$
of finite index, and a definable group homomorphism $\phi:G_0\to H(\calu)$, with the following properties:
\begin{enumerate}
\item $\Ker(\phi)$ is finite.
\item The Kolchin closure of $\phi(G_0)$ is $H(L)$ for some $\call_\Delta$-definable subfield
$L$ of the differential field $\calu$.
\item Either $\phi(G_0)=H(L)$, or for some integer $\ell$,
$\phi(G_0)$ is a subgroup of $H(\fix(\si^\ell)\cap L)$.
\end{enumerate}
\end{cor}
\begin{proof}
By Proposition \ref{thm1} we can reduce to the case where $G$ is a
definable subgroup of a simple algebraic group $H$.
Then apply Theorem \ref{prop1} to conclude.
\end{proof}
\begin{lem}\vlabel{lem2} Let $H$ be a simple algebraic group, defined
and split over $\rat$, let $L\leq \calu$ be a field of constants, and
let $\varphi$ be an algebraic
automorphism of $H$. Let $\ell\geq 1$, and consider the subgroup
$G\leq H(L)$ defined by $\si^\ell(g)=\varphi(g)$. Then $G$ is
definably quasi-simple.
\end{lem}
\prf By Lemma \ref{lem-defss}, we may assume that $Z(H)=(1)$. Let $U$ be an infinite definable
subgroup of $G$ of infinite index, and assume by way of contradiction
that its normalizer $N$ has finite index in $G$. \\
Consider $p_{\ell}$ as defined in Remark \ref{rem-LS1}(c), and $U_{(\ell)}\leq
G_{(\ell)}$. Then $U_{(\ell)}\unlhd
N_{(\ell)}=G_{(\ell)}$ (the latter equality because
$[G:N]$ is finite). In particular, $U_{(0)}\unlhd G_{(0)}=H(L)$,
and as the group
$H(L)$ is simple (by Fact \ref{simple}(a)), the Kolchin closure of $U$ must be
$H(L)$. \\
Moreover, as every generic of $U$ is a generic of its
$\si$-$\Delta$-closure $\tilde U$, it follows
that $G$ normalizes $\tilde U$. So, we may replace $U$ by $\tilde U$; then $G$ also normalizes the connected
component of $\tilde U$ (for the $\si$-$\Delta$-topology), and so we may
assume that $U$ is $\si$-$\Delta$-closed and connected. By Theorem \ref{prop1}, for some $r\leq \ell$
and algebraic automorphism $\psi$ of $H(L)$, the group $\tilde U$ is defined
within $H(L)$ by
the equation $\si^r(g)=\psi(g)$. We will show that this is impossible
unless $r=\ell$ (and $\psi=\varphi$). Indeed, suppose that $r<\ell$,
take a generic $(u,g)$ of $U\times G$.
Consider now
$(u,\si^r(u))$, and $(g,\si^r(g))$. The elements $u$, $g$ and
$\si^r(g)$ are independent generics of the algebraic group $H$. Since
$u\in \tilde U$, we have $$\si^r(g\inv ug)= \si^r(g)\inv \psi(u)\si^r(g)=\psi(g\inv
ug)=\psi(g)\inv \psi(u)\psi(g).$$ I.e., $\si^r(g)\psi(g)\inv\in
C_H(\psi(u))$. As $\psi$ is an automorphism of $H$, the elements $\si^r(g)$, $\psi(g)$ and $\psi(u)$
are independent generics of $H$; this gives us the desired
contradiction: $\si^r(g)\psi(g)\inv$ is then a generic of $H$
independent from $\psi(u)$, and so cannot lie in the proper algebraic
subgroup $C_H(\psi(u))$ of the non-commutative group $H$. \qed
\para The semi-simple case needs some additional lemmas. Indeed, Zariski
denseness and the previous results do not suffice to give a complete
description. Here is a simple example: Let $H$ be a simple algebraic
group defined and split over $\rat$, and
consider the subgroup $G$ of $H(\calu)^2$ defined by $$G=\{(g_1,g_2)\in
H(\calu)^2\mid \si(g_1)=g_2\}.$$
Then $G$ is Kolchin dense in $H(\calu)^2$; however, $G$ is isomorphic to
$H(\calu)$ via the projection on the first factor. We will now
prove several lemmas which will allow us to take care of this problem.
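Explicitly, in the example above, the isomorphism is witnessed by the
definable inverse of the projection, namely the group embedding
$$H(\calu)\to G,\qquad g\mapsto (g,\si(g)),$$
so Kolchin density of $G$ in $H(\calu)^2$ does not prevent $G$ from
being definably isomorphic to a group living in a single factor.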
\begin{lem}\vlabel{lem3} Let $G_1,\ldots,G_t$ be centerless simple
$\Delta$-algebraic groups, with $G_i$ Zariski dense in some algebraic
group $H_i$, and $G\leq G_1\times \cdots\times G_t$ a
$\Delta$-definable subgroup, which projects via the natural
projections onto each $G_i$. Then there are a set $\Psi\subset
\{1,\ldots,t\}^2$ and algebraic isomorphisms $\psi_{i,j}:H_i\to H_j$
for $(i,j)\in \Psi$, such that
$$G=\{(g_1,\ldots,g_t)\in \prod_{i=1}^tG_i\mid g_j=\psi_{i,j}(g_i),
(i,j)\in \Psi\}.$$
Moreover, if $(i,j), (j,k)\in \Psi$ with $k\neq i$, then $(j,i), (i,k)\in \Psi$,
$\psi_{j,i}=\psi_{i,j}\inv$, and
$\psi_{i,k}=\psi_{j,k}\psi_{i,j}$.
\end{lem}
\prf Let us first note the following result, which is implicit in the proof
of Theorem \ref{prop1} (paragraphs 4 to 6, and the Claim): assume in addition that $G$ projects onto
$\prod_{i=2}^t G_i$, but $G\neq \prod_{i=1}^t G_i$. Then there is some
index $i\geq 2$, and an algebraic
isomorphism $\psi_{1,i}:H_1\to H_i$ such that $$G=\{(g_1,\ldots,g_t)\in \prod
G_i\mid g_i=\psi_{1,i}(g_1)\}.$$
We let $\Psi$ be the set of pairs $(i,j)\in \{1,\ldots,t\}^2$ with $i\neq j$ such
that the image $G_{i,j}$ of $G$ under the natural projection $\prod_{\ell=1}^t H_\ell\to
H_i\times H_j$ is a proper subgroup of
$G_i\times G_j$. By the above, if $(i,j)\in \Psi$, then $G_{i,j}$ is
the graph of an isomorphism $G_i\to G_j$, restriction of some algebraic
isomorphism $\psi_{i,j}:H_i\to H_j$. Then the set $(\Psi, \psi_{i,j})$
satisfies
the moreover part of the conclusion, and we have
$$G\leq \{(g_1,\ldots,g_t)\in \prod_{i=1}^tG_i\mid g_j=\psi_{i,j}(g_i),
(i,j)\in \Psi\}.$$
To prove equality, we let $T\subset \{1,\ldots,t\}$ be maximal such that
whenever $i\neq j\in T$, then $(i,j)\notin\Psi$; then the natural projection $\prod_{\ell=1}^t
H_\ell\to \prod_{\ell\in T}H_\ell$ defines an injection on $G$, and
sends $G$ to a subgroup $G'$ of $\prod_{\ell\in T}G_\ell$, with the property
that whenever $k\neq \ell\in T$, then $G'$ projects onto $G_k\times
G_\ell$. By the first case and an easy induction, this implies that $G'=
\prod_{\ell\in T}G_\ell$, and finishes the proof of the lemma. \qed
\begin{lem} \vlabel{lem4} Let $H_1,\ldots,H_r$ be simple centerless
algebraic groups defined and split over $\rat$, $L_1,\ldots,L_r$ $\call_\Delta$-definable subfields
of $\calu$, and $G\leq \prod_{i=1}^rH_i(L_i)$ a Kolchin dense
quantifier-free definable subgroup, which is connected
for the $\si$-$\Delta$-topology. Let $\tilde G_i\leq H_i(L_i)$ be the
$\si$-$\Delta$-closure of the projection of $G$ on the $i$-th factor $H_i(L_i)$.\\
Then there is a partition of
$\{1,\ldots,r\}$ into subsets $I_1,\ldots,I_s$, such that for each
$1\leq k\leq s$, the following holds:\\
If $i\neq j\in I_k$, then there are an integer
$n_{ij}\in\zee$ and an algebraic
isomorphism $\theta_{ij}:H_i(L_i)\to H_j(\si^{n_{ij}}(L_j))$ such that if $\pi_{I_k}$ is the projection $\prod_{j=1}^r
H_j(L_j)\to \prod_{j\in I_k}H_j(L_j)$, and $i\in I_k$ is fixed,
then $$\pi_{I_k}(G)=\{(g_j)_{j\in I_k}\in \prod_{j\in I_k}H_j(L_j)\mid \theta_{ij}(g_i)=
\si^{n_{ij}}(g_j) \hbox{ if }j\neq i\}. $$
Moreover, $G\simeq \prod_{k=1}^s\pi_{I_k}(G)$, and $G$ projects onto
each $\tilde G_i$.
\end{lem}
\prf
We use the
prolongations $p_n$ defined in \ref{rem-LS1}, and choose $N$ large
enough so that $G=\{\bar g\in \prod_{i=1}^r H_i(L_i)\mid p_N(\bar g)\in G_{(N)}\}$. Then $G_{(N)}$
is a $\Delta$-algebraic subgroup of $$\prod_{i=1}^r (\tilde G_{i})_{(N)}\leq
\prod_{1\leq i\leq r, 0\leq k\leq N}H_i(\si^k(L_i)). $$
Let $\Psi\subset (\{1,\ldots,r\}\times \{0,\ldots,N\})^2$ be the set of pairs
given by Lemma \ref{lem3}, and $\psi_{(i,k),(j,\ell)}$, ${((i,k),(j,\ell))\in \Psi}$, the
corresponding set of algebraic isomorphisms $$\psi_{(i,k),(j,\ell)}:
H_i(\si^k(L_i))\to H_j(\si^\ell(L_j)).$$ So, if $(g_1,\ldots,g_r)\in G$, then
$$\psi_{(i,k),(j,\ell)}(\si^k(g_i))=\si^\ell(g_j).\eqno{(1)}$$
Note the following, whenever $((i,k),(j,\ell))\in\Psi$:
\begin{itemize}
\item If $k+1,\ell+1\leq N$, then $((i,k+1),(j,\ell+1))\in \Psi$, with
$\psi_{(i,k+1),(j,\ell+1) }={\psi_{(i,k),(j,\ell) }}^\si$ (here,
${\psi_{(i,k),(j,\ell) }}^\si$ denotes the isomorphism obtained by
applying $\si$ to the coefficients of the isomorphism
${\psi_{(i,k),(j,\ell) }}$);
\item If $k,\ell\geq 1$, then $((i,k-1),(j,\ell-1))\in \Psi$, with
$\psi_{(i,k-1),(j,\ell-1) }={\psi_{(i,k),(j,\ell) }}^{\si\inv}$;
\item If $k\leq \ell$, then
applying $\si^{-k}$ to equation (1) gives $$((i,0),(j,\ell-k))\in
\Psi, \ \hbox{and}\ \psi_{(i,k),(j,\ell)}= {\psi_{(i,0),(j,\ell-k)}}^{\si^{k}}.$$
\item Finally, if $i=j$ and $k<\ell$, then $\tilde G_i$ is defined by an equation
$\si^{n_i}(g)=\varphi_i(g)$ within $H_i(L_i)$ for some integer $n_i$ and
algebraic automorphism
$\varphi_i$ of $H_i(L_i)$, $((i,0),(i,n_i))\in \Psi$ with
associated isomorphism $\psi_{(i,0),(i,n_i)}=\varphi_i$, and $\ell-k$ is a
multiple of the integer $n_i$.
This is because $G$ projects
onto a subgroup of finite index of $\tilde G_i$, and therefore $G_{(N)}$
projects onto $(\tilde G_{i})_{(N)}$.
\end{itemize}
By Lemma \ref{lem3}, we know that the set $\Psi$ and the
$\psi_{i,j}$ completely determine $G$, and by the above observations,
each condition $\psi_{(i,k),(j,\ell)}(\si^k(g_i))=\si^\ell(g_j)$ is
equivalent (apply $\si^{-k}$) to
$${\psi_{(i,k),(j,\ell)}}^{\si^{-k}}(g_i)=\si^{\ell-k}(g_j).\eqno{(2)}$$
The set $\Psi$ defines a structure of graph on $\{1,\ldots,r\}\times
\{0,\ldots,N\}$, which in turn induces a graph structure on
$\{1,\ldots,r\}$, by
$E(i,j)$ iff there are some $k,\ell$ such that $((i,k),(j,\ell))\in
\Psi$. If $E(i,j)$, then the isomorphism $\tilde G_i\to \tilde G_j$ is
given by equation (2). Then $(\{1,\ldots,r\},E)$ has finitely many
connected components, say $I_1,\ldots,I_s$, and for every $k$, if $i\in
I_k$, then $I_k=
\{i\}\cup \{j\mid E(i,j)\}$.
Lemma \ref{lem3} shows that $G=\prod_{k=1}^s \pi_{I_k}(G)$, and gives
the desired description of $\pi_{I_k}(G)$, with
$\theta_{i,j}={\psi_{(i,k),(j,\ell)}}^{\si^{-k}}$ and
$n_{i,j}=\ell-k$, if $((i,k),(j,\ell))\in\Psi$. \qed
\begin{thm}\vlabel{prop2semi}
Let $G$ be a definable subgroup of $H(\calu)$, where $H$ is a
semi-simple algebraic group defined and split over $\rat$, and with
trivial center.
Assume that $G$ is Zariski dense in $H$.
\begin{enumerate}
\item Assume that
the $\si$-$\Delta$-closure of $G$ is
connected (for the $\si$-$\Delta$-topology). Then there are $s$ and
simple normal algebraic subgroups $H_1,\ldots,H_s$ of $H$, a
projection $\pi:H\to H_1\times \cdots \times H_s$ which restricts to an
injective map on $G$, $\call_\Delta$-definable subfields $L_i$
of $\calu$, definable subgroups $G_i$ and
$G'_i$ of $H_i(L_i)$ for $1\leq i\leq s$, and
$h\in \pi(H)(\calu)$, such
that $$ G_1\times \cdots \times G_s\leq h\inv \pi(G)h \leq G'_1\times
\cdots \times G'_s,$$
and each $G_i$ is a
normal subgroup of finite index of $G'_i$.
\item Assumptions as in (1). If in addition $G$ is $\si$-$\Delta$-closed, then
$h\inv \pi(G)h=G_1\times \cdots \times G_s$, and for each $i$, either $G_i=H_i(L_i)$, or for some
integer $\ell_i$ and automorphism $\varphi_i$ of $H_i(L_i)$, $G_i$ is
defined within $H_i(L_i)$ by $\si^{\ell_i}(g)=\varphi_i(g)$.
\end{enumerate}
\end{thm}
\prf
By Theorem \ref{thm15}, if $H_1,\ldots,H_r$ are the simple
algebraic
components of $H$, and $\bar G$ is the Kolchin closure of $G$, then
$\bar G$ is $\Delta$-semi-simple; if $\bar G_i$ is the connected (for the
$\Delta$-topology) component of
$\bar G\cap
H_i(\calu)$, then the morphism $\rho:\bar G_1\times \cdots\times \bar G_r\to
\bar G$ is an isogeny and, because $H$ is centerless, an
isomorphism.
\noindent
By Theorem \ref{Cassidy}, we know that there are
$\Delta$-definable
subfields $L_i$ of $\calu$, such that each $\bar G_i$ is conjugate to
$H_i(L_i)$ within $H_i(\calu)$. But as $[H_i,H_j]=1$ for $i\neq j$,
there is $h\in H(\calu)$ such that $h\inv \bar G_i h\leq
H_i(L_i)$ for all $i$. We will replace $G$ by $h\inv Gh$, so that
$\bar G_i= H_i(L_i)$ for every $i$. \\[0.05in]
(1)
For each $i$, consider the projection $\pi_i$ on the $i$-th
factor $H_i(L_i)$, and let $G'_i=\pi_i(G)$. Further, let $G_i=H_i(L_i)\cap G$. So, $G_1\times
\cdots\times G_r$ is a subgroup of $G$.
\medskip\noindent
{\bf Claim 1}. $G'_i$ is Kolchin dense in $H_i(L_i)$, for
$i=1,\ldots,r$.
\prf
Since $G$ is Kolchin dense in $\bar{G}$, any generic $g:= (g_1,
\ldots, g_r)$ of $G$ is a generic of the $\Delta$-algebraic group $\bar G$.
Then $g_i$ is a generic of $H_i(L_i)$ for all $i$, and the claim is
proved. \qed
\medskip\noindent
{\bf Claim 2}. For all $i \in \{1, \ldots, r\}$, $G_i \unlhd G_i'$.
\prf Let $q:H\to H_2\times \cdots \times H_r$ be the
projection on the last $r-1$ factors. Then $G\cap \Ker(q)$ is
normal in $G$, contained in $H_1(L_1)\times (1)^{r-1}$, and equals
$G_1\times (1)^{r-1}$. As $G$
projects onto $G'_1$, we get $G_1\unlhd G'_1$. The proof for the other indices is similar. \qed
\medskip\noindent
{\bf Claim 3}. If $G_i\neq (1)$, then $[G'_i:G_i]<\infty$. If moreover $G$ is
quantifier-free definable, then $G_i=G'_i$.
\prf
Both $G_i$ and $G'_i$ are definable subgroups of the simple
$\Delta$-algebraic group $H_i(L_i)$ and $G'_i$ is Kolchin dense in
$H_i(L_i)$.
\noindent
If
$G'_i=H_i(L_i)$, then $G_i=G'_i$ since $H_i(L_i)$ is a simple (abstract)
group (by Fact \ref{simple}, and because $Z(H)=(1)$). If $G'_i\neq H_i(L_i)$,
then by Theorem \ref{prop1}, Claim 1 and Lemma \ref{lem2},
$G'_i$ is definably quasi-simple. Hence, Claims 1 and 2 give the
result when $G$ is definable.\\
If $G$ is quantifier-free definable, so is every $G_i$, and therefore $G_i$
is closed in the $\si$-$\Delta$-topology. This implies that
$G_i=G'_i$, because $G$, and therefore also $G'_i$, is connected for the $\si$-$\Delta$-topology. \qed
\medskip\noindent
If all $G_i$ are non-trivial, we have shown that our group $G$ is squeezed between $G_1\times
\cdots \times G_r$ and $G'_1\times \cdots \times G'_r$; and that if $G$
is quantifier-free definable, then $G=\prod_{i=1}^r G_i$.\\[0.05in]
Assume now that some $G_i$ are trivial. If $\tilde G$ denotes the
$\si$-$\Delta$-closure of $G$, and $\tilde G_i$ the
$\si$-$\Delta$-closure of $G'_i$, then these groups are connected for
the $\si$-$\Delta$-topology, quantifier-free definable, and $\tilde G$
is a proper subgroup of $\prod_{i=1}^r \tilde G_i$. Hence Lemma
\ref{lem4} applies, and gives a subset $T$ of $\{1,\ldots,r\}$ such that
the natural projection $\pi_T$ defines an isomorphism $\tilde G\to
\prod_{i\in T}\tilde G_i$, which restricts to an embedding $G\to
\prod_{i\in T}G'_i$ with $[\prod_{i\in
T}G'_i:\pi_T(G)]<\infty$. Moreover, applying Claim 3 to $G''_i:=\pi_T(G)\cap
H_i(L_i)$, $i\in T$, we get
$$\prod_{i\in T}G''_i\leq \pi_T(G)\leq \prod_{i\in T}G'_i,$$
with $G''_i$ a normal subgroup of $G'_i$ of finite index. This finishes
the proof of (1) (modulo a change of notation). \\
We showed that $\pi_T(\tilde G)=\prod_{i\in T}\tilde G_i$, which,
together with Theorem \ref{prop1}, proves (2). \qed
\begin{rem} In the general case of $Z(H)\neq (1)$, we can obtain a
similar result in a particular case: let $H_i(L_i)$ be
the subgroups of $\bar G$ given by Theorem
\ref{thm15}, and define $G_i=G\cap H_i(L_i)$ as above. Then if
all $G_i$ are infinite or trivial, the same proof gives some
subset $T$ of $\{1,\ldots,r\}$, and an isogeny from $\prod_{i\in T}G_i$ onto a
subgroup of finite index of $G$. \\
In the general case, however, we can only obtain such a representation
of a proper quotient of $G$: the problem arises from the fact that the
groups $G_i$ may be finite non-trivial, so that the projection $\pi_T$ defined in
the proof will restrict to an isogeny on $G$. So, we might as well
work with the image of $G$ in $H/Z(H)$.
\end{rem}
\section{Definable subgroups of finite index}
We work in a sufficiently saturated model $(\calu, \Delta, \si)$ of
$\dcfa$. Unless
otherwise mentioned, definable will mean
$\call_{\si,\Delta}$-definable.\\
The aim of this section is to show that a definably quasi-simple group
definable in $\calu$ has a definable connected component. To do that, we
investigate definable subgroups of algebraic groups which are not
quantifier-free definable, and obtain a description similar to the one
obtained by Hrushovski and Pillay in Proposition 3.3 of \cite{HP2}.
\begin{thm} \vlabel{sbgp} Let $H$ be an algebraic group, $G\leq
H(\calu)$ a Zariski dense definable subgroup. Then there are an
algebraic group $H'$,
a quantifier-free definable subgroup $R$ of $H'(\calu)$, together with a
quantifier-free definable homomorphism $f:R\to G$, such that $f(R)$ has finite index in $G$, and $\Ker(f)$ is
finite central in $R$.
\end{thm}
\prf We follow the proof of Hrushovski-Pillay given in
\cite[Prop.~3.3]{HP2}, but with a slight simplification due to
characteristic $0$. Passing to a subgroup of $G$ of finite index, we may
assume that the $\si$-$\Delta$-closure $\tilde G$ of $G$ is connected for the $\si$-$\Delta$-topology. We
work over some small $F_0=\acl(F_0)\subset \calu$ over which $G$ is defined.
By Theorem~\ref{LS1}(4), we know that there is some quantifier-free
definable set $W$, and a projection $\pi:W\to\tilde G$, with
finite fibers and such that $G=\pi(W)$. \\[0.05in]
Let $b,c$ be independent generics of $G$, let $a\in G$ be such that
$ab=c$, and let $\hat b,\hat c\in \calu$ be such that $ (b,\hat b),
(c,\hat c)\in W$. So $\hat b \in \acl(F_0b)$, and $\hat c \in \acl(F_0c)$.
We let $a_1\in \calu$ be such that $\acl(F_0a)\cap F_0(b,\hat b,c,\hat
c)_{\si,\Delta}=F_0(a,a_1)_{\si,\Delta}$. Note that because $a=cb\inv$ and
$\acl(F_0a)$ is Galois over $F_0(a)_{\si,\Delta}$,
$F_0(b,\hat b,c,\hat c)_{\si,\Delta}$ is a regular extension of
$\acl(F_0a)\cap F_0(b,\hat b,c,\hat c)_{\si,\Delta}$, which is finitely
generated algebraic over $F_0(a)_{\si,\Delta}$. Hence $a_1$ can
be chosen finite by Lemma \ref{cb1}. Moreover, $qftp(b,\hat b,c,\hat
c/F_0(a,a_1)_{\si,\Delta})$ is stationary (see Remark \ref{rem-LS1}(e)), and $F_0(a,a_1)_{\si,\Delta}$
contains $\qfcb(b,\hat b,c,\hat c/\acl(F_0a))$ (the quantifier-free
canonical basis, see subsection \ref{qfcb}).
Observe that $qftp(c,\hat c,a,a_1/F_0(b,\hat b)_{\si,\Delta})$ is stationary: this is
because $qftp(c,\hat c/F_0(b,\hat b)_{\si,\Delta})$ is stationary, and $(a,a_1)\in
F_0(b,\hat b,c,\hat c)_{\si,\Delta}$. Hence, if $b_1$ is such that
$\acl(F_0b)\cap
F_0(a,a_1,c,\hat c)_{\si,\Delta}=F_0(b,b_1)_{\si,\Delta}$, then $b_1\in
F_0(b,\hat b)_{\si,\Delta}$. Similarly, if $c_1$ is such that $\acl(F_0c)\cap
F_0(a,a_1,b,b_1)_{\si,\Delta}=F_0(c,c_1)_{\si,\Delta}$, then $c_1\in
F_0(c,\hat c)_{\si,\Delta}$. So we obtain
$\qfcb(a,a_1,c,\hat c/\acl(F_0b))\subseteq F_0(b,b_1)_{\si,\Delta}$
and $\qfcb(a,a_1,b,b_1/\acl(F_0c))\subseteq F_0(c,c_1)_{\si,\Delta}$. This implies that
$b_1\in F_0(a,a_1,c,c_1)_{\si,\Delta}$ and $a_1\in
F_0(b,b_1,c,c_1)_{\si,\Delta}$. I.e., we have
$$F_0(a,a_1,c,c_1)_{\si,\Delta}=F_0(a,a_1,b,b_1)_{\si,\Delta}=F_0(b,b_1,c,c_1)_{\si,\Delta}.$$
\noindent
As in \cite{HP2}, $(a,a_1)$ defines the germ of a generically
defined, invertible, $\si$-$\Delta$-rational map $g_{a,a_1}$ from (the
set of realisations of) $q_1=qftp(b,b_1/F_0)$ to
$q_2=qftp(c,c_1/F_0)$. (In our setting, this means: there are
$\call_\Delta$-definable sets $U_1$ and $U_2$, with $U_i$ intersecting
the set of realisations of $q_i$ in a Kolchin dense subset, and such
that $g_{a,a_1}$ defines a $\Delta$-rational
invertible map $U_1\to U_2$. We may shrink the $U_i$ if necessary to
relatively Kolchin dense subsets.)
Choose $(\tilde a,\tilde a_1)\in \calu$ realising $qftp(a,a_1/F_0)$ and
independent from $(b,c)$ over $F_0$. Let $F_0'\prec \calu$ contain
$F_0(\tilde a)$ and such that $(a,b,c)$ is independent from $F'_0$ over
$F_0$. Let $(b',b'_1)$ be such that
$qftp(a,a_1,b,b_1,c,c_1/F_0)=qftp(\tilde a,\tilde
a_1,b',b'_1,c,c_1/F_0)$; note that $(b',b'_1)\in F_0(\tilde a,\tilde
a_1,c,c_1)_{\si,\Delta}$, and let $d=(\tilde a)\inv a$. Let
$r=qftp(a,a_1/F'_0)$ (the unique non-forking extension of
$qftp(a,a_1/F_0)$ to $F'_0$).
\medskip\noindent
{\bf Claim 1}.
\begin{enumerate}
\item[(i)] $F'_0(b,\hat b,c,\hat c)_{\si,\Delta}\cap
\acl(F'_0d)=F'_0(a,a_1)_{\si,\Delta}$.
\item[(ii)] $qftp(b,b_1/F'_0)=qftp(b',b'_1/F'_0)=:q'_1$ is the unique
non-forking extension of $q_1$ to $F'_0$.
\item [(iii)] $(a,a_1)$ defines over $F'_0$ the germ of an
invertible generically defined function from $q'_1$ to
$q'_1$.
\item[(iv)] $d\in F'_0(a,a_1)_{\si,\Delta}$.
\item[(v)] $db=b'$.
\item[(vi)] $(a,a_1)\in F'_0(b,b_1,b',b'_1)_{\si,\Delta}$.
\end{enumerate}
\prf
This follows immediately from the fact that $(a,b,c)$ is independent
from $F'_0$ over $F_0$, that
$F'_0(a)_{\si,\Delta}=F'_0(d)_{\si,\Delta}$, and the definition of $a_1$.
\qed
\medskip\noindent
{\bf Claim 2}. $r$ is closed under generic composition.
\begin{proof}
Let $(a',a'_1)$ realise $r$ in $\calu$,
independent from $(a,b,b')$ over $F'_0$. If $(b'',b''_1)\in \calu$ is such that $$qftp(
a',a'_1,b',b'_1,b'',b''_1/F'_0)=qftp(a,a_1,b,b_1,b',b'_1/F'_0),$$ then
from the fact that
$$F'_0(a,a_1,b,b_1)_{\si,\Delta}=F'_0(a,a_1,b',b'_1)_{\si,\Delta}=F'_0(b,b_1,b',b'_1)_{\si,\Delta},$$ we
obtain that $(b,b_1)$ and $(b'',b''_1)$ are independent over $F'_0$, and
that $qftp(b,b_1,b'',b''_1/F'_0)=qftp(b,b_1,b',b'_1/F'_0)$; hence if
$(a'',a''_1)\in F'_0(b,b_1,b'',b''_1)_{\si,\Delta}$ is such that
$qftp(a'',a''_1,b,b_1,b'',b''_1/F'_0)=qftp(a,a_1,b,b_1,b',b'_1/F'_0)$,
then $qftp(a'',a''_1/F'_0)=r$ as desired.
\end{proof}
\noindent
Furthermore, note that $a''\in F'_0(a,a')$, and, unravelling the
definitions, that $$(a'', a''_1)\in
F'_0(b,b_1,a,a_1,a',a'_1)_{\si,\Delta}.$$ Hence $(a'',a''_1)\in
F'_0(a,a')_{\si,\Delta}^{alg}\cap
F'_0(b,b_1,a,a_1,a',a'_1)_{\si,\Delta}=F'_0(a,a_1,a',a'_1)_{\si,\Delta}$ because
$(b,b_1)$ is independent from $(a,a_1,a',a'_1)$ over $F'_0$. Similarly,
using the fact that the first part of the tuple lives in the algebraic
group $H$, one gets that the group law which to $((a,a_1), (a',a'_1))$
associates $(a'',a''_1)$ as above, is associative. Hence we are in
the presence of a normal group law as in \cite{W1} (page 359), involving however
infinite tuples. \\[0.05in]
We will now reason as in \cite{P-Fund} (Lemma 2.3 and
Propositions 3.1 and 4.1 in \cite{P-Fund}), use the fact that the $\si$-$\Delta$-topology is
Noetherian, and obtain that $r$ is the generic type of a
quantifier-free definable subgroup $R$ of
some algebraic group $H'$. \\
More precisely: as in Lemma 2.3 of
\cite{P-Fund}, we replace
$(a,a_1)$ by the infinite tuple obtained by closing $(a,a_1)$ under $\si$,
$\si\inv$ and the $\delta_i$. This allows us to represent the normal group law
as a normal group law on some inverse limit of algebraic sets,
together with a ($\si$-$\Delta$-rational) map from the set of realisations of $r$ to this
inverse limit. Then
Proposition~3.1 of \cite{P-Fund} shows how to replace this inverse limit by an inverse
limit of algebraic groups. And finally, as in Proposition~4.1 of \cite{P-Fund}, the
Noetherianity of the $\si$-$\Delta$-topology guarantees that the map
from the set of realisations of $r$ to this inverse limit of
groups must yield an injection at some finite stage.
Observe also that
$qftp(b,b_1,b',b'_1/F'_0)=qftp(b',b'_1,b,b_1/F'_0)$, and so we get a
realisation of $r$ which is the germ of the inverse of $(a,a_1)$; as
the first coordinate of this germ belongs to $F'_0(a)$, it follows
that it belongs to
$ F'_0(a,a_1)_{\si,\Delta}$.
Let us now look at $p=qftp(a,a_1,d/F'_0)$,
and recall that $F'_0(a)_{\si,\Delta}=F'_0(d)_{\si,\Delta}$, and let $K$ be the subgroup of
$(H'\times H)(\calu)$ generated by the realisations of $p$. It is
definable by a quantifier-free $\call_{\si,\Delta}$-formula.\\[0.05in]
As in \cite{HP2}, it follows that $K$ is the graph of a group
epimorphism $f:R\to \tilde G$, with finite kernel. Because $R$ is
connected for the $\si$-$\Delta$-topology, the kernel is central.
\medskip\noindent
{\bf Claim 3}. $f(R)\leq G$.
\begin{proof}
Let $(g,g_1)$ be a generic of $R$, i.e., a
realisation of $r$. Then $g\in\tilde G$. We know that $qftp(b,\hat
b,c, \hat c/F'_0(a,a_1)_{\si,\Delta})$ is stationary, and therefore so is its image
under any $F'_0$-automorphism of the differential field $\calu$
sending $(a,a_1)$ to $(g,g_1)$, so that there are $(h,\hat h, u,\hat
u)$ in $\calu$ such that
$$qftp(a,a_1,b,\hat b,c,\hat c/F'_0)=qftp(g,g_1,h,\hat h,u,\hat
u/F'_0).$$
Thus $h,u\in G$, and so $g=uh\inv\in G$.
Observe that $f(R)$ has finite index in $G$, because it has the same generics.
\end{proof}
\begin{remark}\vlabel{rem-sbgp} In the notation of Theorem \ref{sbgp},
consider $R_{(n)}$ and $G_{(n)}$, as well as the natural
$\call_\Delta$-map $f_{(n)}:R_{(n)}\to G_{(n)}$. While the map $f$ is
clearly not surjective in the difference-differential field $\calu$, the
map $f_{(n)}$ is surjective for all $n\geq 0$ (in the differential field
$\calu$). This follows from quantifier-elimination in $\dcf$. Moreover,
the image of $R$ in $G$ is dense for the $\si$-$\Delta$-topology, i.e., this
is the appropriate notion of a {\em dominant map} between
difference varieties.
\end{remark}
\begin{defn} Let $H$ be an algebraic group. It is {\em simply connected}
if it is connected and whenever $f:H'\to H$ is an isogeny from the connected algebraic group
$H'$ onto $H$, then $f$ is an isomorphism. \\
The {\em universal covering of the connected algebraic group $H$} is a
simply connected algebraic group $\hat H$, together with an isogeny
$\pi:\hat H\to H$. It satisfies the following universal property (see
18.8 in \cite{Mil}): if
$\varphi:H'\to H$ is an isogeny of connected algebraic groups,
then there is a
unique algebraic homomorphism $\psi: \hat H\to H'$ such that
$\varphi\psi=\pi$.
\end{defn}
\begin{remark} \begin{enumerate}
\item The definition of simply connected in arbitrary
characteristic is a little
more complicated. The algebraic groups we will consider will be
semi-simple algebraic groups, defined and split over $\rat$, and we
will be considering their rational points in some algebraically closed
field $K$.
\item Every simple algebraic group has a universal covering, see
section~5 in \cite{St62} for properties, or Chapter 19 in \cite{Mil}.
\item Note that if $H$ is a simple algebraic group and $K$ is
algebraically closed, then $H(K)/Z(H(K))$ is simple as
an abstract group.
\item Moreover, since a semi-simple algebraic group is isogenous to the
product of its simple factors, it follows that the universal
covering of a semi-simple algebraic group is simply the product of the
universal coverings of its simple factors.
\end{enumerate}
\end{remark}
\begin{lem} \vlabel{lem5} Let $H$ be a simple algebraic group defined
over the algebraically closed field $L$ of characteristic $0$, and
$\pi:\hat H\to H$ its
universal covering. Then any algebraic automorphism of $H(L)$ lifts to
one of $\hat H(L)$.
\end{lem}
\begin{proof} Let $\varphi$ be an algebraic automorphism of $H(L)$, and consider the
map $p:\hat H(L) \to H(L)$ defined by $\varphi\circ \pi$. Then there is a map
$\psi:\hat H(L)\to \hat H(L)$ such that $\pi=\varphi\circ\pi\circ\psi$. It
then follows easily that $\psi$ is an isomorphism: $\psi(\hat H(L))$ is
a subgroup of $\hat H(L)$ which projects onto $H(L)$ via $\pi$, hence must
equal $\hat H(L)$. So $\psi$ is onto, and because $\Ker(\pi)$ is finite, it
must be injective.
\end{proof}
\begin{thm} \vlabel{connected1} Let $H$ be a simply connected simple algebraic group defined
and split over $\rat$, and $G\leq H(\calu)$ a proper Zariski dense
definable subgroup. Then $G$ is quantifier-free definable. Equivalently,
$G$ has a smallest definable subgroup $G^0$ of finite index, and $G^0$ is
quantifier-free definable. \\
Furthermore, there is an $\call_\Delta$-definable subfield $L$ of
$\calu$, such that $h\inv G^0h\leq H(L)$ for some $h\in H(\calu)$, and
either $h\inv G^0h=H(L)$, or $$h\inv G^0h=\{g\in H(L)\mid \si^n(g)=\theta(g)\}$$ for
some integer $n$ and algebraic automorphism $\theta$ of $H(L)$. \end{thm}
\prf Let us first discuss the equivalence of the two assertions. If any
Zariski dense definable subgroup of $H(\calu)$ is quantifier-free
definable, then every definable subgroup of $G$ of finite index is
quantifier-free definable, and
the Noetherianity of the $\si$-$\Delta$ topology implies
that there is a smallest one, $G^0$. Conversely, let $G$ be a Zariski
dense definable subgroup of $H(\calu)$, and assume it has a smallest
definable subgroup of finite index, $G^0$, and that $G^0$ is
quantifier-free definable. Then so is $G$, since it is a finite union of
cosets of $G^0$. \\[0.05in]
By Theorem \ref{prop1}, $G$ has a definable subgroup of finite
index $G_0$ which is conjugate to a Kolchin dense subgroup of $H(L)$, for some
definable subfield $L$ of $\calu$. So, without loss of generality, we will assume that $G\leq
H(L)$ is connected for the $\si$-$\Delta$-topology, is
quantifier-free definable, and we will show that $G$ has no proper
definable subgroup of finite index. \\
First note that if $G=H(L)$, then
$G$ has no definable subgroups of finite index (by Fact \ref{simple}(a)), and the result is
proved. Assume therefore that $G$ is a proper subgroup of $H(L)$.
Let $H'=H/Z(H)$.
By Theorem \ref{prop1}, there are an integer $n\geq 1$ and an
algebraic automorphism $\theta'$ of $H'(L)$ such that the
$\si$-$\Delta$-closure $G'$ of $GZ/Z$ (in $H'(L)$) is defined by
$$G'=\{g\in H'(L)\mid \si^n(g)=\theta'(g)\}.$$
As $H$ is simply connected, $H\to H/Z$ is the universal covering of
$H/Z$. By Lemma~\ref{lem5}, there is an algebraic automorphism $\theta$ of $H(L)$
which lifts $\theta'$.
\medskip\noindent
{\bf Claim.} $G=\{g\in H(L)\mid \si^n(g)=\theta(g)\}$.
\begin{proof} The group on the right hand side is clearly quantifier-free
definable, connected for the $\si$-$\Delta$-topology, and projects onto
a subgroup of finite index of $GZ/Z$, with finite kernel. As $G$ is the
connected component of the
group $GZ$ (for the $\si$-$\Delta$-topology), the
conclusion follows.
\end{proof}
\noindent
Assume by way of contradiction that $G$ has a definable subgroup of
finite index $>1$. By Proposition \ref{sbgp}, there are a quantifier-free
definable group $R$ (living in some algebraic group $S$) and a
(quantifier-free) definable map $f:R\to G$ with finite non-trivial
kernel, and image
of finite index $>1$ in $G$. We may assume that $R$ is connected for the
$\si$-$\Delta$-topology, so that $\Ker(f)$ is central.\\[0.05in]
For every $r\geq 1$, the map $f$ induces a dominant $\Delta$-map
$f_{(r)}:R_{(r)}\to G_{(r)}$, and for $r\geq n-1$, this map has finite
central kernel, since for $r\geq n-1$, the natural map $G_{(r)}\to
G_{(n-1)}$ has trivial kernel. Fix $r\geq n-1$,
and consider the map
$f_{(r)}:R_{(r)}\to
G_{(r)}\simeq H(L)^n$. Because $H$ is simply connected, so is $H^n$,
and therefore $R_{(r)}\simeq H(L)^n\times
\Ker(f_{(r)})$. Since $H(L)$ equals its commutator subgroup, it follows that
$[R_{(r)},R_{(r)}]$ ($\simeq H(L)^n$) is
a $\Delta$-definable normal subgroup of $R_{(r)}$ which
projects via $f_{(r)}$ onto $G_{(r)}\simeq H(L)^n$. As $R$ is
connected for the $\si$-$\Delta$-topology, $R_{(r)}$ is connected for
the $\Delta$-topology, and we must therefore have $\Ker(f_{(r)})=(1)$. \qed
\begin{thm} \vlabel{prop2} Let $H$ be a simple algebraic group, $G\leq
H(\calu)$ be a definable subgroup which is Zariski dense in $H$. Then
$G$ has a smallest definable subgroup $G^0$ of finite index. Let $\pi:\hat H\to H$ be the universal finite central extension of
$H$, and let $\tilde G$ be the connected component of the
$\si$-$\Delta$-closure of $\pi\inv(G)$. Then $G^0=\pi(\tilde G)$.
\end{thm}
\prf By Theorem \ref{connected1}, $\tilde
G$ has no definable subgroup of finite index. Hence, neither does
$\pi(\tilde G)$, which is therefore the smallest definable subgroup of
finite index of $G$. \qed
\begin{cor} \vlabel{connected2} Let $G$ be a definably quasi-semi-simple definable
group. Then $G$ has a definable connected component.
\end{cor}
\begin{proof} Let $P$ be the property ``having a
definable connected component''. The result follows easily from Proposition \ref{thm1},
Proposition \ref{prop2semi}, Theorem \ref{prop2},
and the following remarks:
\begin{itemize}
\item[(a)] If $G_0$ is a definable subgroup of finite index of $G$, then $G_0$ has
$P$ if and only if $G$ has $P$;
\item[(b)] If the group $G$ is the direct product of its definable subgroups $G_1$, $G_2$,
and $G_1$, $G_2$ have $P$, then so does $G$;
\item[(c)] Let $f:G\to G_1$ be a definable onto map, with $\Ker(f)$ finite. Then $G_1$ has
$P$ if and only if $G$ has $P$. One direction is clear; for the other, we may assume that $G_1$
is connected, so that $\Ker(f)$ is central, finite. If $G_0$ is a subgroup of finite index of $G$,
then $f(G_0)=G_1$, so that $G_0\Ker(f)=G$; hence $[G:G_0]\leq
|\Ker(f)|$. Let $G_0<G$ be definable, of finite index, and with
$|G_0\cap \Ker(f)|$ minimal. Then $G_0$ has no proper definable
subgroup of finite index.
\end{itemize}
\end{proof}
\section{The fixed field}
\begin{defn}
Let $M$ be a $\mathcal{L}$-structure. A definable subset $D$ of $M$ is {\em stably embedded} if
every $M$-definable subset of $D^n$ is definable with parameters from
$D$, for any $n\geq 1$.
\end{defn}
\begin{nota}
Let $(\calu, \si, \Delta)$ be a sufficiently saturated model of $\dcfa$.
For $\ell\geq 1$, we consider the difference-differential field
$F_\ell=\fix(\si^\ell)$.
\end{nota}
\begin{lem}\vlabel{psf1} Fix $\ell\geq 1$. Then $F_\ell$ is stably embedded, and
its induced structure is that of the pure difference-differential
field. If $\ell=1$, it is the pure differential field.
\end{lem}
\begin{proof} The first part follows from elimination of imaginaries
(Prop.~3.3 in \cite{LS}): if $c$ is a code for a definable subset $S$ of
$F_\ell^n$, then $\si^\ell(c)=c$. So every definable subset of $F_\ell^n$ is
definable using parameters from $F_\ell$.
\noindent
By the description of types in $\dcfa$, every formula $\varphi(x)$ is
equivalent (modulo $\dcfa$) to a formula of the form $\exists y\,
\psi(x,y)$, where $\psi(x,y)$ is quantifier-free, and whenever $(a,b)$
realises $\psi$, then $b\in \acl(a)$. But if $a\in F_\ell$, then $b\in
F_\ell^{alg}$. Let $d$ be a bound on the degree of $b$ over $a$, and $N(d)$ the least common multiple of all integers $\leq d$.
\noindent
Let $F_0\prec F_\ell^\Delta$ be small, and let $\alpha\in F_0^{alg}$ generate
the unique extension of $F_0$ of degree $N(d)$. Note that it also
generates the unique extension of $F_\ell$ of degree $N(d)$. So, if $(a,b)\in
F_\ell^{alg}$ satisfies $\psi$ as above, then $b\in F_\ell[\alpha]$. If $u$ is the
$N(d)$-tuple of coefficients of the minimal polynomial of $\alpha$ over
$F_0$, one sees that the differential field $(F_\ell(\alpha),\si)$ is
interpretable in $F_\ell$ (with parameters in $F_0$, or even in
$\rat(u)$). Thus there is an $\call_{\si,\Delta}(F_0)$-formula $\theta(x,z)$
such that for any tuples $a\in F_\ell$ and $b\in F_\ell(\alpha)$, if
$b=\sum_{i=0}^{N(d)-1} c_i\alpha^i$ with the $c_i$ in $F_\ell$, then
$$(F_\ell(\alpha),\si)\models \psi(a,b)\iff (F_\ell,\si)\models \theta(a,c).$$
To prove the last statement, it suffices to notice that if $b\in
F_0(a,\alpha)$, then the tuple $c$ belongs to $F_0(a,\alpha,b)$, and it
also belongs to $F_\ell$. As both $\alpha$ and $b$ are algebraic over
$F_0(a)$, it follows that so is $c$. This finishes the proof.
\end{proof}
\begin{cor} \vlabel{psf4}
If $A\subset F_\ell$, then $\acl_{F_\ell}(A)=\acl(A)\cap
F_\ell$, and independence is given by independence (in the sense of
ACF) of algebraic closures.
\end{cor}
\begin{proof} This follows directly from Lemma \ref{psf1}.
\end{proof}
\begin{cor} \vlabel{psf5} Same hypotheses as in Proposition \ref{sbgp}, and assume that
$G\leq H(F_\ell)$. Then the group $R$ can be taken to be
quantifier-free definable in the $\call_{\si,\Delta}$-structure
$F_\ell$.
\end{cor}
\begin{proof} Inspection of the proof of Proposition \ref{sbgp} shows
that if the tuples $a,b,c$ are in $F_\ell$, then by Lemma \ref{psf2},
so are the tuples $\hat a$, $\hat b$ and $\hat c$, and therefore also
the tuples $a_1$, $b_1$ and $c_1$. That is, the whole reasoning can be
done inside $F_\ell$.
\end{proof}
\begin{defn}
Let $F$ be a differential field. We say that $F$ is \emph{$\Delta$-PAC} if whenever $L$ is a differential field extending $F$ and which is
regular over $F$ (i.e., $L\cap F^{alg}=F$), then $F$ is existentially closed in $L$.
\end{defn}
\begin{rem}\vlabel{psf2} This definition coincides with the notion of
PAC-substructure of a model of
$\dcf$, which was
given by Pillay and Polkowska in \cite{PP}.
\noindent
Consider the theory of the differential field $F=\fix(\si)$, in the
language $\call_\Delta$ augmented by the constant symbols needed to
define all algebraic extensions of $F_0$. We know that $F^{alg}$ is
a model of $\dcf$, by \cite[Prop.~3.4(vi)]{LS}.
\end{rem}
\begin{prop}\vlabel{psf3}
The differential field $F$ is a model of the theory
UC$_m$ introduced by Tressl in \cite{Tr}. In particular,
\begin{enumerate}
\item ${\rm
Th}(F)$ is model-complete in the language $\call_\Delta(F_0)$.
\item $F$ is $\Delta$-PAC.
\end{enumerate}
\end{prop}
\prf The theory UC$_m$ has the following property (Thm 7.1 in
\cite{Tr}): if a theory $T$ of fields of characteristic $0$ is model
complete, then $T\cup {\rm UC}_m$ is the model companion of the theory
$T\cup {\rm DF}_m$, where DF$_m$ is the theory of differential fields
with $m$ commuting derivations. We know that $F$ is large as a pure
field (all PAC fields are large), and that its theory in the language of
rings augmented by constant symbols for $F_0$ is model-complete. Hence
it has a regular extension $F^*$ which is a model of UC$_m$ (Thm 6.2 in
\cite{Tr}). Consider the differential field ${\rm
Frac}(\calu\otimes_FF^*)$, and extend $\si$ to $F^*$ by setting it to be
the identity. As $\calu$ is existentially closed in ${\rm
Frac}(\calu\otimes_FF^*)$, it follows that $F$ is existentially closed
in $F^*$, and therefore must be a model of UC$_m$. This proves that $F$ is
a model of UC$_m$; the same argument gives (2), and (1) follows from
\cite[Thm~7.1]{Tr}. \qed
\section{Introduction}
It has been suggested that orbital angular momentum carried by participants in off-central heavy ion collisions (HIC) can result in spin polarization of final state particles \cite{Liang:2004ph,Liang:2004xn}. Realistic model calculations have indicated that significant vorticity is present in quark-gluon plasma (QGP) produced in HIC \cite{Deng:2012pc,Pang:2016igs,Xia:2018tes}. Theoretical predictions of final particle spin polarization have been made based on a spin-orbit coupling picture \cite{Gao:2007bc,Huang:2011ru,Jiang:2016woz}. Such a picture is indeed consistent with early experimental measurement of Lambda hyperon global polarization \cite{STAR:2017ckg}. However, recent measurement of Lambda hyperon local polarization \cite{STAR:2019erd} shows an overall sign difference from theoretical predictions \cite{Becattini:2017gcx,Wei:2018zfb,Fu:2020oxj}. Different explanations have been proposed to understand the puzzle \cite{Wu:2019eyi,Liu:2019krs}, yet no consensus has been reached.
Recently it has been realized that shear can also contribute to spin polarization \cite{Liu:2021uhn,Becattini:2021suc}. In particular, it has been found based on a free theory analysis that spin responds to thermal vorticity and thermal shear in the same way. Phenomenological implementations have shown the right trend toward the measured local polarization results \cite{Fu:2021pok,Becattini:2021iol,Yi:2021ryh,Fu:2022myl,Wu:2022mkr}. However, as we shall show in this paper, the contribution discussed so far is still incomplete. Vorticity and shear differ in one important aspect: the former does not change the particle distribution while the latter necessarily does. The redistribution of particles by the shear flow leads to an extra contribution to spin polarization. The extra contribution can be described consistently in the framework of quantum kinetic theory (QKT); see \cite{Hidaka:2022dmn} for a review and references therein. Over the past few years, the QKT has been developed rapidly to include the collision term systematically via self-energies \cite{Hidaka:2016yjf,Zhang:2019xya,Li:2019qkf,Carignano:2019zsh,Yang:2020hri,Wang:2020pej,Shi:2020htn,Weickgenannt:2020aaf,Hou:2020mqp,Yamamoto:2020zrs,Weickgenannt:2021cuo,Sheng:2021kfc,Wang:2021qnt,Lin:2021mvw}. The QKT is formulated in terms of the Wigner function, whose axial component can be related to spin polarization. The axial component of the Wigner function for a fermion in a collisional QKT is given by \cite{Yang:2020hri,Lin:2021mvw,Hattori:2019ahi}\footnote{The definitions of Wigner function in \cite{Yang:2020hri} and \cite{Lin:2021mvw} differ by a sign. We use the latter definition.}
\begin{align}\label{calA}
{\cal A}^{\mu}=-2{\pi}\hbar\[a^{\mu} f_A+\frac{{\epsilon}^{{\mu}{\nu}{\rho}{\sigma}}P_{\rho} u_{\sigma}{\cal D}_{\nu} f}{2(P\cdot u+m)}\]{\delta}(P^2-m^2),
\end{align}
with $P$ and $u$ being the particle momentum and the flow velocity. $a^{\mu} f_A$ is a dynamical contribution \cite{Hattori:2019ahi,Weickgenannt:2019dks,Gao:2019znl,Liu:2020flb,Guo:2020zpa}. ${\cal D}_{\nu}$ is a covariant derivative acting on the distribution function $f$, defined as ${\cal D}_{\nu}={\partial}_{\nu}-{\Sigma}_{\nu}^>-{\Sigma}_{\nu}^<\frac{1-f}{f}$. The partial derivative term is what has been considered so far; the extra contribution comes from the self-energies ${\Sigma}^{>/<}$. Naively one may expect the self-energy term to be suppressed by powers of the coupling in a weakly coupled system described by the QKT. In fact this is not true. In a simple relaxation time approximation, the self-energy contribution can be estimated as $\frac{{\delta} f}{{\tau}_R}$. The appearance of ${\delta} f$ follows from the fact that the self-energy contribution in the covariant derivative vanishes in equilibrium by detailed balance. The combination $\frac{{\delta} f}{{\tau}_R}$ can be further related to ${\partial} f_0$ by the kinetic equation, with $f_0$ being the local equilibrium distribution. Consequently the self-energy contribution is at the same order as the derivative one, with the dependence on the coupling completely canceled between $\frac{1}{{\tau}_R}$ and ${\delta} f$.
A second question we attempt to address is the gauge dependence of spin polarization. Theoretical calculations are usually done in the QGP phase, while experiments measure particles after freezeout. The gauge dependence is only present in the partonic level calculations. On general grounds, we expect that it is a gauge invariant spin polarization that is passed through freezeout. However, \eqref{calA} is expressed in terms of self-energies, which are in general gauge dependent. It is necessary to include a gauge link contribution to restore gauge invariance. Since collisions are mediated by off-shell particles, it is essential to consider quantum gauge field fluctuations in the gauge link. The quantum gauge field fluctuation also feels the flow via interaction with on-shell fermions. It turns out that there is a similar contribution associated with the gauge link, which is also at the same order as the derivative one. As a conceptual development, we generalize the definition of the gauge link to the Schwinger-Keldysh contour, in which the collisional QKT is naturally derived. We also adapt the straight path widely used for background gauge fields to the Schwinger-Keldysh contour to allow for a consistent treatment of quantum gauge field fluctuations.
The aim of the paper is to evaluate the two contributions mentioned above. We illustrate the calculations using a massive probe fermion in a massless QED plasma. While the method we use is applicable to arbitrary hydrodynamic flow, we consider a plasma with shear flow only for simplicity. The paper is organized as follows: in Section II, we briefly review the classical limit of the QKT, which is the Boltzmann equation widely used in early studies of transport coefficients. By solving the Boltzmann equation we determine the particle redistribution in the presence of shear flow. The information on particle redistribution will be used to calculate the self-energy contribution and the gauge link contribution in Sections III and IV respectively. Analytic results can be obtained at the leading logarithmic order. The results will be discussed and compared with the derivative contribution in Section V. Finally we summarize and provide an outlook in Section VI.
\section{Particle redistribution in shear flow}
We consider a QED plasma with $N_f$ flavors of massless fermions in a shear flow. The shear flow relaxes on the hydrodynamic scale, which is much slower than the relaxation of plasma constituents, thus we can take a steady shear flow. The presence of shear flow leads to redistribution of fermions and photons, which gives rise to an off-equilibrium contribution to the energy-momentum tensor responsible for the shear viscosity. The kinetic equation addressing this problem has been written down long ago \cite{Arnold:2002zm,Arnold:2000dr,Arnold:2003zc}. The kinetic equation is simply the Boltzmann equation with the collision term given by elastic and inelastic scatterings. For simplicity we keep to the leading-logarithmic (LL) order, for which the inelastic scatterings are irrelevant. The resulting Boltzmann equations for fermions and photons read respectively
\begin{subequations}\label{Boltzmann}
\begin{align}
\({\partial}_t+{\hat p}\cdot\nabla_x\)f_p=&-\frac{1}{2}\int_{p',k',k}(2{\pi})^4{\delta}^4(P+K-P'-K')\frac{1}{16p_0k_0p_0'k_0'}\times\nonumber\\
&\bigg[\;\;|{\cal M}|_{\text{Coul},f}^2\(f_pf_k(1-f_{p'})(1-f_{k'})-f_{p'}f_{k'}(1-f_{p})(1-f_{k})\)\nonumber\\
&+|{\cal M}|_{\text{Comp},f}^2\(f_p\tilde{f}_k(1+\tilde{f}_{p'})(1-f_{k'})-\tilde{f}_{p'}f_{k'}(1-f_{p})(1+\tilde{f}_{k})\)\nonumber\\
&+|{\cal M}|_{\text{anni},f}^2\(f_pf_k(1+\tilde{f}_{p'})(1+\tilde{f}_{k'})-\tilde{f}_{p'}\tilde{f}_{k'}(1-f_{p})(1-f_{k})\)\bigg],
\end{align}
\begin{align}
\({\partial}_t+{\hat p}\cdot\nabla_x\)\tilde{f}_p=&-\frac{1}{2}\int_{p',k',k}(2{\pi})^4{\delta}^4(P+K-P'-K')\frac{1}{16p_0k_0p_0'k_0'}\times\nonumber\\
&\bigg[\;\;|{\cal M}|_{\text{Comp},{\gamma}}^2\(\tilde{f}_pf_k(1-f_{p'})(1+\tilde{f}_{k'})-f_{p'}\tilde{f}_{k'}(1+\tilde{f}_{p})(1-f_{k})\)\nonumber\\
&+2N_f|{\cal M}|_{\text{anni},{\gamma}}^2\(\tilde{f}_p\tilde{f}_k(1-f_{p'})(1-f_{k'})-f_{p'}f_{k'}(1+\tilde{f}_{p})(1+\tilde{f}_{k})\)\bigg].
\end{align}
\end{subequations}
We have used $f_p$ and $\tilde{f}_p$ to denote the distribution functions for fermions and photons carrying momentum $p$ respectively. $|{\cal M}|^2$ is the partially summed amplitude square, with the subscripts ``Coul'', ``Comp'' and ``anni'' indicating the Coulomb, Compton and annihilation processes respectively. The subscripts $f$ and ${\gamma}$ distinguish the fermionic and photonic amplitude squares, whose explicit expressions we shall present shortly. The overall factor $\frac{1}{2}$ on the RHS comes from the spin average, and $\int_p\equiv\int\frac{d^3p}{(2{\pi})^3}$. When there is an imbalance between electrons and positrons, there should be a separate equation for positrons. We restrict ourselves to a neutral plasma, in which the positron distribution is identical to the electron one.
Now we can work out the redistribution of particles in the presence of thermal shear, given by the solution to the Boltzmann equation. We solve \eqref{Boltzmann} by noting that $f_p$ and $\tilde{f}_p$ on the LHS are the local equilibrium distributions and deviations from equilibrium appear only on the RHS. We can parametrize the local equilibrium distributions by the thermal velocity ${\beta}_{\mu}={\beta} u_{\mu}$ as $f^{(0)}_p=\frac{1}{e^{P\cdot{\beta}}+1}$ and ${\tilde f}^{(0)}_p=\frac{1}{e^{P\cdot{\beta}}-1}$, and the thermal shear is given by
\begin{align}\label{Sij}
S_{ij}=\frac{1}{2}\({\partial}_i{\beta}_j+{\partial}_j{\beta}_i\)-\frac{1}{3}{\delta}_{ij}{\partial}\cdot{\beta}.
\end{align}
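As a simple illustration, consider a linear shear profile $u_x=cy$ at constant temperature (a hypothetical flow used only to make \eqref{Sij} concrete, valid to linear order in $c$), so that ${\beta}_j=({\beta} cy,0,0)$. Then ${\partial}\cdot{\beta}=0$ and the only nonvanishing components of \eqref{Sij} are
\begin{align}
S_{xy}=S_{yx}=\frac{{\beta} c}{2}.
\end{align}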
When only thermal shear is present, we can evaluate the LHS as
\begin{align}\label{shear_grad}
&{\hat p}_i\nabla_if^{(0)}_p=-f^{(0)}_p(1-f^{(0)}_p){\partial}_i{\beta}_j\frac{p_ip_j}{E_p}=-f^{(0)}_p(1-f^{(0)}_p)S_{ij}I^p_{ij}p,\nonumber\\
&{\hat p}_i\nabla_i{\tilde f}^{(0)}_p=-{\tilde f}^{(0)}_p(1+{\tilde f}^{(0)}_p){\partial}_i{\beta}_j\frac{p_ip_j}{E_p}=-{\tilde f}^{(0)}_p(1+{\tilde f}^{(0)}_p)S_{ij}I^p_{ij}p,
\end{align}
with $I^p_{ij}={\hat p}_i{\hat p}_j-\frac{1}{3}{\delta}_{ij}$ being a symmetric traceless tensor defined with the 3-momentum $p$. We have also replaced $p_ip_j$ by its traceless part, using the traceless property of $S_{ij}$. Following the method in \cite{Arnold:2000dr}, we parametrize the deviations of the distributions by
\begin{align}
f^{(1)}_p=f^{(0)}_p(1-f^{(0)}_p){\hat f_p},\quad
{\tilde f}^{(1)}_p={\tilde f}^{(0)}_p(1+{\tilde f}^{(0)}_p){\hat \tilde{f}_p},
\end{align}
with the superscripts $(0)$ and $(1)$ counting the order of gradients. To linear order in gradients, the parametrization leads to simple relations for the collision terms
\begin{align}\label{linearize}
&f_pf_k(1-f_{p'})(1-f_{k'})-(p,k\leftrightarrow p',k')=f^{(0)}_pf^{(0)}_k(1-f^{(0)}_{p'})(1-f^{(0)}_{k'})({\hat f_p}+{\hat f_k}-{\hat f_{p'}}-{\hat f_{k'}}),\nonumber\\
&f_p\tilde{f}_k(1+\tilde{f}_{p'})(1-f_{k'})-(p,k\leftrightarrow p',k')=f^{(0)}_p{\tilde f}^{(0)}_k(1+{\tilde f}^{(0)}_{p'})(1-f^{(0)}_{k'})({\hat f_p}+{\hat \tilde{f}_k}-{\hat \tilde{f}_{p'}}-{\hat f_{k'}}),\nonumber\\
&f_pf_k(1+\tilde{f}_{p'})(1+\tilde{f}_{k'})-(p,k\leftrightarrow p',k')=f^{(0)}_pf^{(0)}_k(1+{\tilde f}^{(0)}_{p'})(1+{\tilde f}^{(0)}_{k'})({\hat f_p}+{\hat f_k}-{\hat \tilde{f}_{p'}}-{\hat \tilde{f}_{k'}}).
\end{align}
By rotational symmetry, we expect
\begin{align}\label{para_fneq}
{\hat f}_p=S_{ij}I^p_{ij}{\chi}(p),\quad
{\hat \tilde{f}}_p=S_{ij}I^p_{ij}{\gamma}(p).
\end{align}
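For later use, we note that when tensor equations are contracted with $I^p_{ij}$ to obtain scalar ones, the elementary identity
\begin{align}
I^p_{ij}I^p_{ij}=\({\hat p}_i{\hat p}_j-\frac{1}{3}{\delta}_{ij}\)\({\hat p}_i{\hat p}_j-\frac{1}{3}{\delta}_{ij}\)=1-\frac{2}{3}+\frac{1}{3}=\frac{2}{3}
\end{align}
is repeatedly needed.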
Using \eqref{linearize} and \eqref{para_fneq}, we obtain a linearized Boltzmann equation from \eqref{Boltzmann}:
\begin{align}\label{chi_gamma}
&-f_p(1-f_p)S_{ij}I^p_{ij}p=-\frac{1}{2}\int_{p',k',k}(2{\pi})^4{\delta}^4(P+K-P'-K')\frac{1}{16p_0k_0p_0'k_0'}S_{ij}\times\nonumber\\
&\qquad\qquad\qquad\bigg[|{\cal M}|_{\text{Coul},f}^2\(I^p_{ij}{\chi}_p+I^k_{ij}{\chi}_k-I^{p'}_{ij}{\chi}_{p'}-I^{k'}_{ij}{\chi}_{k'}\)f_pf_k(1-f_{p'})(1-f_{k'})\nonumber\\
&\qquad\qquad\qquad+|{\cal M}|_{\text{Comp},f}^2\(I^p_{ij}{\chi}_p+I^k_{ij}{\gamma}_k-I^{p'}_{ij}{\gamma}_{p'}-I^{k'}_{ij}{\chi}_{k'}\)f_p\tilde{f}_k(1+\tilde{f}_{p'})(1-f_{k'})\nonumber\\
&\qquad\qquad\qquad+|{\cal M}|_{\text{anni},f}^2\(I^p_{ij}{\chi}_p+I^k_{ij}{\chi}_k-I^{p'}_{ij}{\gamma}_{p'}-I^{k'}_{ij}{\gamma}_{k'}\)f_pf_k(1+\tilde{f}_{p'})(1+\tilde{f}_{k'})\bigg],\nonumber\\
&-\tilde{f}_p(1+\tilde{f}_p)S_{ij}I^p_{ij}p=-\frac{1}{2}\int_{p',k',k}(2{\pi})^4{\delta}^4(P+K-P'-K')\frac{1}{16p_0k_0p_0'k_0'}S_{ij}\times\nonumber\\
&\qquad\qquad\qquad\bigg[|{\cal M}|_{\text{Comp},{\gamma}}^2\(I^p_{ij}{\gamma}_p+I^k_{ij}{\chi}_k-I^{p'}_{ij}{\chi}_{p'}-I^{k'}_{ij}{\gamma}_{k'}\)\tilde{f}_pf_k(1-f_{p'})(1+\tilde{f}_{k'})\nonumber\\
&\qquad\qquad\qquad+|{\cal M}|_{\text{anni},{\gamma}}^2\(I^p_{ij}{\gamma}_p+I^k_{ij}{\gamma}_k-I^{p'}_{ij}{\chi}_{p'}-I^{k'}_{ij}{\chi}_{k'}\)\tilde{f}_p\tilde{f}_k(1-f_{p'})(1-f_{k'})\bigg],
\end{align}
where we have used short-hand notations ${\chi}_p={\chi}(p)$ and ${\gamma}_p={\gamma}(p)$. $S_{ij}$ is arbitrary, thus we can equate its coefficient on two sides. The resulting tensor equations can be converted to scalar ones by contracting with $I^p_{ij}$. The flavor dependence in the amplitude squares can be expressed in terms of elementary amplitude squares as
\begin{align}\label{M2}
&|{\cal M}|_{\text{Coul},f}^2=2N_f|{\cal M}|_{\text{Coul}}^2\nonumber\\
&|{\cal M}|_{\text{Comp},f}^2=|{\cal M}|_{\text{Comp}}^2,\quad |{\cal M}|_{\text{Comp},{\gamma}}^2=2N_f|{\cal M}|_{\text{Comp}}^2\nonumber\\
&|{\cal M}|_{\text{anni},f}^2=\frac{1}{2}|{\cal M}|_{\text{anni}}^2,\quad |{\cal M}|_{\text{anni},{\gamma}}^2=N_f|{\cal M}|_{\text{anni}}^2,
\end{align}
with
\begin{align}
&|{\cal M}|_{\text{Coul}}^2=8e^4\frac{s^2+u^2}{t^2},\nonumber\\
&|{\cal M}|_{\text{Comp}}^2=8e^4\frac{s}{-t},\nonumber\\
&|{\cal M}|_{\text{anni}}^2=8e^4\(\frac{u}{t}+\frac{t}{u}\).\nonumber
\end{align}
The factor $2N_f$ in the Coulomb case comes from scattering with $N_f$ flavors of fermions and anti-fermions. For scattering between identical fermions, the symmetry factor $\frac{1}{2}$ in the final state is compensated by an identical $u$-channel contribution to the LL accuracy. Similarly, the factor $2N_f$ in the Compton case comes from scattering of the photon with $N_f$ flavors of fermions and anti-fermions. The factor $N_f$ in photon pair annihilation corresponds to $N_f$ possible final states, and $\frac{1}{2}$ in fermion pair annihilation is a final state symmetry factor.
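The logarithm $\ln e^{-1}$ below originates from the small momentum transfer region of the Coulomb term: schematically, the collision integral behaves as
\begin{align}
\int_{m_D}^{T}\frac{dq}{q}=\ln\frac{T}{m_D}\sim\ln e^{-1},
\end{align}
with the Debye mass $m_D\sim eT$ providing the infrared cutoff. This is only a parametric sketch; the precise coefficients follow from the phase space integrations in appendix A.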
The phase space integrations are performed in appendix A. The results turn the linearized Boltzmann equations \eqref{chi_gamma} into
\begin{align}\label{LL}
f^{(0)}_p(1-f^{(0)}_p)\frac{2p}{3}=&e^4\ln e^{-1}\frac{1}{(2{\pi})^4}\Big[8N_f\frac{{\pi}^3\cosh^{-2}\frac{{\beta} p}{2}\(6{\chi}_p+p((-2+{\beta} p\tanh\frac{{\beta} p}{2}){\chi}_p'-p{\chi}_p'')\)}{72p^2{\beta}^3}\nonumber\\
&\qquad\qquad\qquad+2\frac{{\chi}_p-{\gamma}_p}{p}\frac{{\pi}^2}{8{\beta}^2}\frac{4{\pi}}{3}f^{(0)}_p(1+{\tilde f}^{(0)}_p)\Big]\nonumber\\
{\tilde f}^{(0)}_p(1+{\tilde f}^{(0)}_p)\frac{2p}{3}=&e^4\ln e^{-1}\frac{1}{(2{\pi})^4}4N_f\frac{{\gamma}_p-{\chi}_p}{p}\frac{{\pi}^2}{8{\beta}^2}\frac{4{\pi}}{3}{\tilde f}^{(0)}_p(1-f^{(0)}_p).
\end{align}
The second equation of \eqref{LL} is algebraic. It is solved by
\begin{align}\label{sol2}
\frac{{\gamma}_p-{\chi}_p}{(2{\pi})^3}=\frac{1}{e^4\ln e^{-1}}\frac{2{\beta}^2}{{\pi}^2N_f}p^2\frac{1+{\tilde f}^{(0)}_p}{1-f^{(0)}_p}.
\end{align}
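As a quick consistency check, substituting \eqref{sol2} back into the second equation of \eqref{LL} gives
\begin{align}
e^4\ln e^{-1}\frac{4N_f}{(2{\pi})^4}\frac{{\gamma}_p-{\chi}_p}{p}\frac{{\pi}^2}{8{\beta}^2}\frac{4{\pi}}{3}{\tilde f}^{(0)}_p(1-f^{(0)}_p)={\tilde f}^{(0)}_p(1+{\tilde f}^{(0)}_p)\frac{2p}{3},
\end{align}
with all factors of the coupling and $N_f$ canceling between $e^4\ln e^{-1}$ and ${\gamma}_p-{\chi}_p$, as they must.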
The first equation is differential and needs to be solved numerically. In the limit ${\beta} p\gg1$, the ansatz ${\chi}_p\propto p^2$ reduces it to an algebraic equation for the coefficient. Combining with \eqref{sol2}, we find the following asymptotic solution
\begin{align}\label{sol1}
\frac{{\chi}(p\to\infty)}{(2{\pi})^3}=\frac{1}{e^4\ln e^{-1}}\frac{3(1+2N_f){\beta}^2p^2}{4{\pi}^2N_f^2}.
\end{align}
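To see how \eqref{sol1} arises, one may insert the ansatz ${\chi}_p=cp^2$ into the first equation of \eqref{LL}: the bracket containing the derivative terms evaluates to
\begin{align}
6{\chi}_p+p\(\(-2+{\beta} p\tanh\frac{{\beta} p}{2}\){\chi}_p'-p{\chi}_p''\)\to 2{\beta} cp^3\qquad({\beta} p\gg1),
\end{align}
and matching the coefficients of $pe^{-{\beta} p}$ on both sides, together with \eqref{sol2}, fixes $c=6{\pi}(1+2N_f){\beta}^2/(N_f^2e^4\ln e^{-1})$, which is \eqref{sol1}.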
We have combined ${\chi}_p$ and ${\gamma}_p$ with $\frac{1}{(2{\pi})^3}$ in \eqref{sol2} and \eqref{sol1}. It is convenient as the same factor will appear in phase space integration measure.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,clip]{chi}
\caption{${\chi} e^4\ln e^{-1}/(2{\pi})^3$ versus $p/T$ for massless QED with $N_f=2$. Solid and dashed lines correspond to the numerical solution and the approximate analytic solution \eqref{sol1}, respectively. At low $p$, the approximate solution is slightly below the numerical one.}
\label{fig:chi}
\end{center}
\end{figure}
The numerical solution is obtained with the boundary condition \eqref{sol1} and ${\chi}(p=0)=0$\footnote{A series analysis of the differential equation in \eqref{LL} around $p=0$ indicates ${\chi}(p)\sim p^2$.}. In fact, it has been pointed out in \cite{Arnold:2000dr} that the ansatz ${\chi}_p,{\gamma}_p\sim p^2$ gives a very good approximation to the numerical solution. Fig.~\ref{fig:chi} compares \eqref{sol1} with the numerical solution, confirming this point.
As a further check, we calculate the shear viscosity for a plasma at constant temperature. In this case $T_{ij}=\eta TS_{ij}$. Expressing $T_{ij}$ using kinetic theory, we obtain
\begin{align}
\eta=\frac{1}{15}\int_pp\[4N_ff_p(1-f_p){\chi}_p+2\tilde{f}_p(1+\tilde{f}_p){\gamma}_p\].
\end{align}
Integration with the numerical solution reproduces the corresponding entries in Table I of \cite{Arnold:2003zc}. Integration with the approximate solutions \eqref{sol2} and \eqref{sol1} gives results with an error of about $1\%$ for $N_f=1$ and about $3\%$ for $N_f=2$. We will simply use the approximate solutions in the analysis below.
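Parametrically, since ${\chi}_p,{\gamma}_p\sim{\beta}^2p^2/(e^4\ln e^{-1})$ and the thermal integrals are dominated by $p\sim T$, the above reproduces the well-known leading-log scaling of the shear viscosity \cite{Arnold:2000dr},
\begin{align}
\eta\sim\frac{T^3}{e^4\ln e^{-1}}.
\end{align}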
\section{Self-energy correction}
In the previous section, we have determined the redistribution of constituents in a plasma with thermal shear. Now we introduce a massive probe fermion to the plasma and study its polarization in the shear flow. To this end, we need to calculate the self-energy correction to the axial component of its Wigner function \eqref{calA}. In general both Coulomb and Compton scatterings contribute to the self-energy\footnote{For the probe fermion, pair annihilation is irrelevant.}. Following \cite{Li:2019qkf}, we take the heavy probe limit $m\gg eT$ so that the Coulomb scattering dominates the self-energy. The corresponding self-energy diagram is depicted in Fig.~\ref{fig:Coulomb}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,clip]{Coulomb}
\caption{Self-energy of probe fermion from Coulomb scattering with medium fermion. The massive probe fermion carries momentum $P$ and the massless medium fermions run in the loop.}
\label{fig:Coulomb}
\end{center}
\end{figure}
We evaluate the self-energy as\footnote{${\Sigma}^>(x,y)$ is defined by $-e^2\langle{\slashed A}(x){\psi}(x)\bar{{\psi}}(y){\slashed A}(y)\rangle$.}
\begin{align}\label{Sigma}
{\Sigma}^>(P)=&e^4N_f\int_{P',K',K}(2{\pi})^4{\delta}^4(P+K-P'-K'){\gamma}^{\mu} S^>(P'){\gamma}^{\nu} D_{{\mu}{\beta}}^{22}(-Q)D_{{\alpha}{\nu}}^{11}(-Q)\nonumber\\
&\times\text{tr}[{\gamma}^{\alpha} \underline{S}^<(K){\gamma}^{\beta} \underline{S}^>(K')],
\end{align}
with $\int_P=\int\frac{d^4P}{(2{\pi})^4}$ and $Q=P'-P$. ${\Sigma}^<$ can be obtained by the replacement $>\leftrightarrow<$, $11\leftrightarrow22$.
The propagators in \eqref{Sigma} are given by
\begin{align}
&S^>(P)=2{\pi}{\epsilon}(p_0)({\slashed P}+m)(1-f_p){\delta}(P^2-m^2),\nonumber\\
&\underline{S}^>(K)=2{\pi}{\epsilon}(k_0){\slashed K}(1-f_k){\delta}(K^2),\nonumber\\
&D_{{\mu}{\beta}}^{22}(-Q)=\frac{i g_{{\mu}{\beta}}}{Q^2},\quad D_{{\alpha}{\nu}}^{11}(-Q)=\frac{-i g_{{\alpha}{\nu}}}{Q^2}.
\end{align}
We have indicated the propagators of medium fermions by an underline. $\underline{S}^<$ can be obtained by the replacement $1-f_k\to -f_k$. Feynman gauge is used for the photon propagators.
The component of self-energy contributing to polarization is ${\Sigma}^{>{\lambda}}=\frac{1}{4}\text{tr}\[{\Sigma}^>(P){\gamma}^{\lambda}\]$. The traces involved in this component are evaluated as
\begin{align}\label{traces}
\text{tr}\[{\gamma}^{\mu} S^>(P'){\gamma}^{\nu}{\gamma}^{\lambda}\]=&4\(P'{}^{\mu} g^{{\nu}{\lambda}}+P'{}^{\nu} g^{{\mu}{\lambda}}-P'{}^{\lambda} g^{{\mu}{\nu}}\)2{\pi}{\epsilon}(p_0'){\delta}(P'{}^2-m^2)(1-f_{p'}),\nonumber\\
\text{tr}\[{\gamma}^{\alpha} \underline{S}^<(K){\gamma}^{\beta} \underline{S}^>(K')\]=&4\(K^{\alpha} K'{}^{\beta}+K^{\beta} K'{}^{\alpha}-K\cdot K' g^{{\alpha}{\beta}}\)(2{\pi})^2{\epsilon}(k_0){\epsilon}(k_0')\times\nonumber\\
&{\delta}(K^2){\delta}(K'{}^2)(-f_k)(1-f_k').
\end{align}
Note that the LL contribution arises from the regime $q\ll P,K$, we may replace ${\epsilon}(p_0')\simeq {\epsilon}(p_0)=1$ for probe fermion and ${\epsilon}(k_0){\epsilon}(k_0')\simeq {\epsilon}(k_0)^2=1$.
Below we assume an equilibrium distribution for the probe fermion for illustration purposes. Relaxing this assumption only involves unnecessary complication here; it can, however, be important for realistic phenomenological modeling, which will be studied elsewhere. The medium fermions are off-equilibrium, with the distribution determined in the previous section.
The combination needed for polarization is $-f_p{\Sigma}_k^>(P)-(1-f_p){\Sigma}_k^<(P)$. Using \eqref{Sigma} and \eqref{traces}, we obtain
\begin{align}\label{fSigma}
&-f_p{\Sigma}_k^>(P)-(1-f_p){\Sigma}_k^<(P)\nonumber\\
=&{\color{black}-}16e^4N_f\int d^3kd^3q\frac{1}{(2{\pi})^5}{\delta}(p_0+k_0-p_0'-k_0')\frac{1}{8p_0'k_0k_0'}\[2k_kP\cdot K-q_kP\cdot K\]\frac{1}{\(Q^2\)^2}\nonumber\\
&\times\(f_p(1-f_{p'})f_k(1-f_{k'})-f_{p'}(1-f_{p})f_{k'}(1-f_{k})\)\nonumber\\
=&{\color{black}-}16e^4N_f\int d^3kd^4q\frac{1}{(2{\pi})^5}{\delta}(p_0-p_0'+q_0){\delta}(k_0-k_0'-q_0)\frac{1}{8p_0'k_0k_0'}\[k_kP\cdot K'+k_k'P\cdot K\]\nonumber\\
&\frac{1}{\(Q^2\)^2}S_{ij}\(I^k_{ij}{\chi}_k-I^{k'}_{ij}{\chi}_{k'}\)f^{(0)}_pf^{(0)}_k(1-f^{(0)}_{p'})(1-f^{(0)}_{k'})\nonumber\\
\equiv& S_{ij}R_{ijk}.
\end{align}
We have inserted a factor of $2$ corresponding to fermions and anti-fermions in the loop and kept terms up to $O(q^2)$ in the square bracket. In the second equality, we have used the assumption that only the distribution of medium fermions is off-equilibrium.
$R_{ijk}$ involves complicated tensor integrals of ${{\vec k}}$ and ${{\vec k}}'$. They are evaluated by first converting to tensor integrals of ${{\vec q}}$ by rotational symmetry and ${\delta}(k_0-k_0'-q_0)$, which correlates ${\vec k}$ and ${\vec q}$. The resulting tensor integrals of ${\vec q}$ are further performed with rotational symmetry and ${\delta}(p_0-p_0'+q_0)$. Details of the evaluation can be found in appendix B. In the end, we find the following component relevant for spin polarization
\begin{align}
{\cal A}^i=2{\pi}\frac{{\epsilon}^{ijk}p_jR_{mnk}S_{mn}}{2(p_0+m)}{\delta}(P^2-m^2)\simeq{\color{black}-}\frac{1}{p_0+m}(I_2+I_3)\frac{{\epsilon}^{iml}p_np_lS_{mn}}{p^5}{\delta}(P^2-m^2)C_f,
\end{align}
with
\begin{align}\label{Is}
I_2=&\frac{{\pi}^2\cosh^{-2}\frac{{\beta} p_0}{2}\((15p^4-87p^2p_0^2+72p_0^4)\ln(\frac{p_0-p}{p_0+p})+\frac{8p^5}{p_0}-126p^3p_0+144pp_0^3\)}{72{\beta}}\nonumber\\
&+\frac{3\cosh^{-2}\frac{{\beta} p_0}{2}\((12p^2p_0-12p_0^3)\ln\frac{p_0-p}{p_0+p}+28p^3-\frac{28p^5}{3p_0^2}-24pp_0^2\)\zeta(3)}{8{\beta}^2},\nonumber\\
I_3=&-\frac{\((p^4-9p^2p_0^2+8p_0^4)\ln\frac{p_0-p}{p_0+p}-\frac{38p_0p^3}{3}+16p_0^3p\)\({\pi}^2-9\tanh\frac{{\beta} p_0}{2}\zeta(3)\)}{4{\beta}\(1+\cosh({\beta} p_0)\)}.
\end{align}
and $C_f=\frac{3N_f(1+2N_f)}{4{\pi}^2N_f^2}$.
We reiterate that the self-energy correction scales as ${\partial}f^{(0)}$, with the dependence on the coupling canceling as follows: $e^4$ from the vertices and $\ln e^{-1}$ from the LL enhancement combine to give $\frac{1}{{\tau}_R}\sim e^4\ln e^{-1}$, which is canceled by a counterpart in $f^{(1)}\sim \frac{{\partial}f^{(0)}}{e^4\ln e^{-1}}$.
Before closing this section, we wish to comment on the gauge dependence of \eqref{Is}. We illustrate this with a comparison of Feynman gauge and Coulomb gauge. Let us rewrite \eqref{Sigma} as
\begin{align}\label{fSigma_Pi}
{\Sigma}^>=e^2\int_Q{\gamma}^{\mu} S^>(P'){\gamma}^{\nu} D_{{\mu}{\beta}}^{22}(-Q)D_{{\alpha}{\nu}}^{11}(-Q){\Pi}^{<{\alpha}{\beta}}(Q),
\end{align}
with ${\Pi}^{<{\alpha}{\beta}}(Q)$ being the off-equilibrium photon self-energy.
In the presence of shear flow, the self-energy can be decomposed into four independent tensor structures as
\begin{align}\label{Pi}
{\Pi}^{<{\alpha}{\beta}}(Q)=P_T^{{\alpha}{\beta}}{\Pi}_T^<+P_L^{{\alpha}{\beta}}{\Pi}_L^<+P_{TT}^{{\alpha}{\beta}}{\Pi}_{TT}^<+P_{LT}^{{\alpha}{\beta}}{\Pi}_{LT}^<.
\end{align}
Here $P_{T/L}$ are transverse and longitudinal projectors defined by
\begin{align}\label{projectors}
P^{{\alpha}{\beta}}_T=P^{{\alpha}{\beta}}-\frac{P^{{\alpha}{\mu}}P^{{\beta}{\nu}}Q_{\mu} Q_{\nu}}{q^2},\quad P^{{\alpha}{\beta}}_L=P^{{\alpha}{\beta}}-P^{{\alpha}{\beta}}_T,
\end{align}
with $P^{{\alpha}{\beta}}=u^{\alpha} u^{\beta}-g^{{\alpha}{\beta}}$. $P_{TT}^{{\alpha}{\beta}}$ and $P_{LT}^{{\alpha}{\beta}}$ are emergent projectors owing to the shear flow, which are constructed as\footnote{The obvious structure constructed by sandwiching $S_{{\rho}{\sigma}}$ with two $P_L$ is not independent.}
\begin{align}\label{projectors2}
P_{TT}^{{\alpha}{\beta}}=P_T^{{\alpha}{\rho}}S_{{\rho}{\sigma}}P_T^{{\sigma}{\beta}},\quad
P_{LT}^{{\alpha}{\beta}}=P_L^{{\alpha}{\rho}}S_{{\rho}{\sigma}}P_T^{{\sigma}{\beta}}+(L\leftrightarrow T).
\end{align}
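The defining properties of these projectors, four-transversality of $P_T$ with respect to both $Q$ and $u$, and idempotence under metric contraction, can be checked numerically. A minimal sketch, assuming the mostly-minus metric and the rest-frame velocity $u=(1,0,0,0)$; note that with $P^{{\alpha}{\beta}}=u^{\alpha}u^{\beta}-g^{{\alpha}{\beta}}$, contracted idempotence reads $P_T^{{\alpha}{\rho}}g_{{\rho}{\sigma}}P_T^{{\sigma}{\beta}}=-P_T^{{\alpha}{\beta}}$:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])    # mostly-minus metric (assumed convention)
u = np.array([1.0, 0.0, 0.0, 0.0])      # fluid rest-frame velocity
rng = np.random.default_rng(1)
Q = rng.normal(size=4)                  # generic momentum Q^alpha
Qlow = g @ Q                            # Q_alpha
q2 = Q[1:] @ Q[1:]                      # q^2 = |q-vec|^2

P = np.outer(u, u) - g                  # P^{ab} = u^a u^b - g^{ab} (g^{ab} = g numerically)
PQ = P @ Qlow                           # P^{a mu} Q_mu
PT = P - np.outer(PQ, PQ) / q2          # transverse projector of eq. (projectors)
PL = P - PT                             # longitudinal projector

assert np.allclose(Qlow @ PT, 0)        # Q_a P_T^{ab} = 0
assert np.allclose((g @ u) @ PT, 0)     # u_a P_T^{ab} = 0
assert np.allclose(PT @ g @ PT, -PT)    # contracted idempotence: P_T g P_T = -P_T
assert np.allclose(PT @ g @ PL, 0)      # orthogonality of the two projectors
```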
Note that the photon self-energy is gauge invariant but the propagator is not. Now we illustrate that gauge dependence is generically present by using the Feynman and Coulomb gauges.
For LL accuracy, we can simply use bare photon propagators in \eqref{fSigma_Pi}. For the spacelike momentum $Q$ relevant for our case, we have the simple relation $D^{11}_{{\alpha}{\beta}}=-D^{22}_{{\alpha}{\beta}}=-iD^R_{{\alpha}{\beta}}$. The retarded propagators in Feynman and Coulomb gauges have the following representations
\begin{align}\label{gauges}
\text{Feynman}:&\;D_{{\alpha}{\beta}}^R=P_{{\alpha}{\beta}}^T\frac{-1}{Q^2}+\(\frac{Q^2}{q^2}u^{\alpha} u^{\beta}-\frac{q_0(u^{\alpha} Q^{\beta}+u^{\beta} Q^{\alpha})}{q^2}+\frac{Q^{\alpha} Q^{\beta}}{q^2}\)\frac{-1}{Q^2}\nonumber\\
\text{Coulomb}:&\;D_{{\alpha}{\beta}}^R=P_{{\alpha}{\beta}}^T\frac{-1}{Q^2}+\(\frac{Q^2}{q^2}u^{\alpha} u^{\beta}\)\frac{-1}{Q^2}.
\end{align}
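The Feynman-gauge decomposition can be verified by checking that the two tensor structures sum to $-g^{{\alpha}{\beta}}$, so that $D^R_{{\alpha}{\beta}}=g_{{\alpha}{\beta}}/Q^2$, consistent with $D^{11}_{{\alpha}{\beta}}=-iD^R_{{\alpha}{\beta}}$ used in \eqref{Sigma}. A numerical sketch under the same assumed conventions:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
u = np.array([1.0, 0.0, 0.0, 0.0])
rng = np.random.default_rng(2)
Q = rng.normal(size=4)
Qlow = g @ Q
q0, q2 = Q[0], Q[1:] @ Q[1:]
Q2 = Q @ Qlow                            # Q^2 = q0^2 - q^2

P = np.outer(u, u) - g
PQ = P @ Qlow
PT = P - np.outer(PQ, PQ) / q2           # transverse projector of eq. (projectors)

# Bracketed structure multiplying -1/Q^2 in the Feynman-gauge line of eq. (gauges):
B = (Q2/q2)*np.outer(u, u) - (q0/q2)*(np.outer(u, Q) + np.outer(Q, u)) \
    + np.outer(Q, Q)/q2

# P_T^{ab} + B^{ab} = -g^{ab}, hence D^R_{ab} = (P_T + B)(-1/Q^2) = g_{ab}/Q^2.
assert np.allclose(PT + B, -g)
```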
Using \eqref{Pi} and \eqref{gauges}, we easily see that the contributions to ${\Sigma}^>$ from ${\Pi}_T^<$ and ${\Pi}_{TT}^<$ are identical in the two gauges. For the contributions from ${\Pi}_L^<$ and ${\Pi}_{LT}^<$, we use the Ward identity ${\Pi}^{{\alpha}{\beta}}Q_{\alpha}=0$ and the transverse conditions $P_{T/L}^{{\alpha}{\beta}}Q_{\alpha}=0$, $P_{T}^{{\alpha}{\beta}}u_{\alpha}=0$ to find the following structures, which are present only in Feynman gauge
\begin{align}
{\Pi}_L^< P^{{\alpha}{\beta}}_L u_{\alpha} u_{\beta} Q_{\mu} Q_{\nu},\quad
{\Pi}_L^< P^{{\alpha}{\beta}}_L u_{\alpha} u_{\beta} u_{\mu} Q_{\nu}+({\mu}\leftrightarrow{\nu}) ,\quad
{\Pi}_{LT}^< P^{{\alpha}{\beta}}_L u_{\alpha} Q_{\mu} P^T_{{\beta}{\nu}}+({\mu}\leftrightarrow{\nu}).
\end{align}
We have also confirmed the gauge dependence of self-energy contribution by explicit calculations.
\section{Gauge link contribution}
The gauge dependence we found in the previous section should not be a surprise. The reason is that the underlying quantum kinetic theory is derived using gauge-fixed propagators. For the Wigner function of the probe fermion, the gauge dependence can be removed by inserting a gauge link. If the gauge field in the link is external, i.e., a classical background, the gauge link simply becomes a complex phase. However, when we consider the self-energy of fermions arising from exchanging quantum gauge fields, we need to worry about the ordering of quantum field operators from expanding the gauge link and the interaction vertex. A systematic treatment of the ordering is still not available at present. We will follow a different approach: since we have already obtained the axial component of the Wigner function without a gauge link, we will find the correction from expanding the gauge link that contributes at the same order.
When fluctuations of quantum gauge fields appear both in the interaction vertices and in the gauge link, it is natural to order them on the Schwinger-Keldysh contour. The latter is also the basis of collisional kinetic theory in the recent development of quantum kinetic theory. However, we immediately find that the well-known straight path for the gauge link becomes inadequate for the Wigner function joining points on the forward and backward contours. To find a proper generalization on the Schwinger-Keldysh contour, let us take a close look at the gauge transformation of the bare Wigner function $S^<(x,y)$:
\begin{align}
S^<(x,y)\to e^{-ie{\alpha}_2(y)}S^<(x,y)e^{ie{\alpha}_1(x)},
\end{align}
with ${\alpha}_{1,2}$ being gauge parameters on contours $1$ and $2$ respectively. If there is only a classical background field, the gauge fields on contours $1$ and $2$ are the same, and we may take ${\alpha}_1={\alpha}_2$. In this case, placing the straight path on either contour is equivalent. This is no longer true when quantum fluctuations are present. We propose to use double gauge links
\begin{align}\label{gauge_link}
\bar{S}^<(x,y)={\psi}_1(x)\bar{{\psi}}_2(y)U_2(y,\infty)U_1(\infty,x),
\end{align}
with $U_i(y,x)=\exp\(-ie\int_y^x dw\cdot A_i(w)\)$ and $i=1,2$ identifying the forward and backward contours respectively. Assuming quantum fluctuations vanish at past and future infinities, we easily arrive at the gauge invariance of \eqref{gauge_link}. We have not yet specified the paths for the gauge links appearing in \eqref{gauge_link}. A natural choice is to take the straight line joining $x$ and $y$ and extend it to future infinity. This is illustrated in Fig.~\ref{fig:path}. When there is only a classical background gauge field, $A_1=A_2$, so that the two gauge links in \eqref{gauge_link} cancel partially, leaving a phase from the straight path between $x$ and $y$.
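The statement that a classical background reduces the double link to a phase can be illustrated numerically: for a pure-gauge configuration $A_{\mu}={\partial}_{\mu}{\alpha}$, the link along any path equals the endpoint phase difference, so the common pieces of $U_2(y,\infty)U_1(\infty,x)$ cancel when $A_1=A_2$. The gauge parameter and path below are illustrative choices, not taken from the text:

```python
import numpy as np

e = 0.3
a = np.array([0.4, -0.7, 0.2, 0.5])
alpha  = lambda w: np.sin(a @ w)             # toy gauge parameter alpha(w)
dalpha = lambda w: np.cos(a @ w) * a         # pure-gauge field A_mu = d_mu alpha

x = np.array([0.0, 1.0, -0.5, 0.3])
y = np.array([1.2, -0.4, 0.8, -1.0])

# Straight path w(t) = y + t(x-y); line integral of dw.A by the trapezoid rule.
ts = np.linspace(0.0, 1.0, 20001)
f = np.array([dalpha(y + t*(x - y)) @ (x - y) for t in ts])
line = np.sum((f[:-1] + f[1:]) / 2 * np.diff(ts))

U = np.exp(-1j * e * line)                   # U(y,x) for the pure-gauge configuration
# The link is just the endpoint phase difference exp(-ie[alpha(x)-alpha(y)]).
assert abs(U - np.exp(-1j * e * (alpha(x) - alpha(y)))) < 1e-8
```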
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=1cm,clip]{path}
\caption{Path for the gauge link in the Schwinger-Keldysh contour. The path in the full spacetime dimension is determined by a straight path joining $x$ and $y$, which is extended to future infinity.}
\label{fig:path}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,clip]{link}
\caption{Diagram for gauge link contribution with the propagator connecting one quantum gauge field in the medium and the other in the gauge link. The dashed semi-circle denotes the gauge link. The shear gradient enters through the photon self-energy. In LL approximation, only one insertion of the self-energy is needed.}
\label{fig:link}
\end{center}
\end{figure}
Now we are ready to evaluate possible corrections associated with the gauge link. Note that we need a correction of $O({\partial} f^{(0)})$. Such a contribution can arise from the diagram in Fig.~\ref{fig:link}. We shall evaluate its contribution to the axial component of the Wigner function below. Note that the diagram in Fig.~\ref{fig:link} contains one quantum fluctuation of the gauge field from the link and the other from the interaction vertex. Both fluctuations can occur on either contour $1$ or $2$, and they need to be contour ordered. Enumerating all possible insertions of the two gauge fields along the Schwinger-Keldysh contour, we obtain
\begin{align}\label{link_contour}
&{\color{black}-}e^2S_{11}(x,z){\gamma}^{\lambda} S^<(z,y)\(\int_\infty^x dw^{\mu} D_{{\lambda}{\mu}}^{11}(z,w)+\int_y^\infty dw^{\mu} D_{{\lambda}{\mu}}^<(z,w)\)\nonumber\\
&{\color{black}+}e^2S^<(x,z){\gamma}^{\lambda} S_{22}(z,y)\(\int_\infty^x dw^{\mu} D_{{\lambda}{\mu}}^>(z,w)+\int_y^\infty dw^{\mu} D_{{\lambda}{\mu}}^{22}(z,w)\),
\end{align}
where the two lines correspond to the vertex coordinate $z$ taking values on contours $1$ and $2$ respectively, and the two terms in either bracket correspond to the link coordinate $w$ taking values on contours $1$ and $2$ respectively. The relative sign comes from the sign difference of vertices on contours $1$ and $2$. $D_{{\lambda}{\mu}}^{>/<}$ stands for resummed photon propagators in the medium with shear flow.
Using $S_{11}=-iS_R+S^<$ and $S_{22}=S^<+iS_A$ and the representation
\begin{align}
&S_R=Re S_R+\frac{i}{2}\(S^>-S^<\)\simeq \frac{i}{2}\(S^>-S^<\),\nonumber\\
&S_A=Re S_R-\frac{i}{2}\(S^>-S^<\)\simeq -\frac{i}{2}\(S^>-S^<\),
\end{align}
we obtain $S_{11}\simeq S_{22}\simeq \frac{1}{2}\(S^>+S^<\)$ with $Re S_R$ ignored in the quasi-particle approximation. Similar expressions can be obtained for $D_R$. Plugging the resulting expressions into \eqref{link_contour}, we have
\begin{align}\label{link_expansion}
&{\color{black}-}\frac{e^2}{2}\big[S^>(x,z){\gamma}^{\lambda} S^<(z,y)\int_y^x dw^{\mu} D_{{\lambda}{\mu}}^<(z,w)-S^<(x,z){\gamma}^{\lambda} S^>(z,y)\int_y^x dw^{\mu} D_{{\lambda}{\mu}}^>(z,w)\big]\nonumber\\
&{\color{black}-}\frac{e^2}{2}S^<(x,z){\gamma}^{\lambda} S^<(z,y)\int_y^x dw^{\mu}\(D_{{\lambda}{\mu}}^<(z,w)-D_{{\lambda}{\mu}}^>(z,w)\)\nonumber\\
&{\color{black}-}e^2S_{11}(x,z){\gamma}^{\lambda} S^<(z,y)\int_\infty^x dw^{\mu}\frac{1}{2}\(D_{{\lambda}{\mu}}^>(z,w)-D_{{\lambda}{\mu}}^<(z,w)\)\nonumber\\
&{\color{black}-}e^2S^<(x,z){\gamma}^{\lambda} S_{22}(z,y)\int_y^\infty dw^{\mu}\frac{1}{2}\(D_{{\lambda}{\mu}}^>(z,w)-D_{{\lambda}{\mu}}^<(z,w)\).
\end{align}
The first line is very similar to what we have considered in the self-energy correction. The other lines are all proportional to the photon spectral density ${\rho}_{{\lambda}{\mu}}(z,w)=D_{{\lambda}{\mu}}^>(z,w)-D_{{\lambda}{\mu}}^<(z,w)$, which is medium independent; thus the other lines are subleading compared to the first one. Below we keep only the first line.
The spin polarization of the probe fermion comes from the axial component of the Wigner function. We apply a Wigner transform to the first line of \eqref{link_expansion}. Since the two terms are simply related by $>\leftrightarrow<$, we focus on the evaluation of the first term. Its Wigner transform is given by
\begin{align}
-\frac{e^2}{2}\int_{s,z,w}e^{iP\cdot s}\int_{P_1,P_2,Q}S^>(P_1){\gamma}^{\lambda} S^<(P_2)D_{{\lambda}{\rho}}^<(-Q)e^{-iP_1\cdot(x-z)-iP_2\cdot(z-y)+iQ\cdot(z-w)}.
\end{align}
The $z$-integration imposes momentum conservation as $\int_z e^{i(P_1-P_2+Q)\cdot z}=(2{\pi})^4{\delta}^4(P_1-P_2+Q)$, which allows us to simplify the remaining exponentials as $e^{iP\cdot s}e^{-iP_1\cdot(x-y)+iQ\cdot(y-w)}$. The $w$-integration is performed along the straight line
\begin{align}
\int_y^x dw^{\rho} e^{-iQ\cdot(w-y)}=\int_0^1 dt s^{\rho} e^{-it Q\cdot s}\simeq s^{\rho},
\end{align}
where we have used $Q\cdot s\ll1$. This condition corresponds to the exchange of a soft photon, which is necessary for the LL enhancement, as we already know from the self-energy calculations. We finally replace $s^{\rho}\to-i\frac{{\partial}}{{\partial} P_{\rho}}$ to arrive at
\begin{align}
\frac{ie^2}{2}\frac{{\partial}}{{\partial} P_{\rho}}\int_QS^>(P){\gamma}^{\lambda} S^<(P+Q)D_{{\lambda}{\rho}}^<(-Q).
\end{align}
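The soft-photon approximation of the straight-line integral above can be checked against the closed form $\int_0^1 dt\,e^{-itx}=(1-e^{-ix})/(ix)$, whose deviation from unity is $O(Q\cdot s)$. A quick numerical sketch:

```python
import numpy as np

def link_integral(x):                # integral_0^1 dt e^{-i t x}, x = Q.s, trapezoid rule
    t = np.linspace(0.0, 1.0, 200001)
    f = np.exp(-1j * t * x)
    return np.sum((f[:-1] + f[1:]) / 2 * np.diff(t))

for x in (1e-1, 1e-2, 1e-3):
    val = link_integral(x)
    exact = (1 - np.exp(-1j * x)) / (1j * x)
    assert abs(val - exact) < 1e-8   # quadrature reproduces the closed form
    assert abs(val - 1.0) < x        # deviation from the s^rho approximation is O(Q.s)
```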
For the axial component, we need the following trace
\begin{align}\label{trace}
\frac{1}{4}\text{tr}\[({\slashed P}+m){\gamma}^{\lambda}\({\slashed P}+{\slashed Q}+m\){\gamma}^{\mu}{\gamma}^5\]=-i{\epsilon}^{{\alpha}{\lambda}{\beta}{\mu}}P_{\alpha} Q_{\beta}.
\end{align}
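The identity can be checked with explicit Dirac matrices. The sketch below assumes the Dirac representation, ${\gamma}^5=i{\gamma}^0{\gamma}^1{\gamma}^2{\gamma}^3$ and ${\epsilon}^{0123}=+1$; with a different sign convention for ${\epsilon}$ the right-hand side flips sign:

```python
import numpy as np
from itertools import permutations

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
g0 = np.block([[np.eye(2), Z], [Z, -np.eye(2)]])
gam = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]
g = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(P):
    return sum(g[m, m] * P[m] * gam[m] for m in range(4))

eps = np.zeros((4, 4, 4, 4))                 # epsilon^{0123} = +1 (assumed convention)
for perm in permutations(range(4)):
    sgn = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sgn = -sgn
    eps[perm] = sgn

rng = np.random.default_rng(3)
P, Q = rng.normal(size=4), rng.normal(size=4)
m = 0.5
Plow, Qlow = g @ P, g @ Q

dev = 0.0
for lam in range(4):
    for mu in range(4):
        lhs = 0.25 * np.trace((slash(P) + m*np.eye(4)) @ gam[lam]
                              @ (slash(P) + slash(Q) + m*np.eye(4)) @ gam[mu] @ g5)
        rhs = -1j * sum(eps[al, lam, be, mu] * Plow[al] * Qlow[be]
                        for al in range(4) for be in range(4))
        dev = max(dev, abs(lhs - rhs))
assert dev < 1e-12
```

The mass terms drop out of the trace, and the $P_{\alpha}P_{\beta}$ piece vanishes by antisymmetry of ${\epsilon}$, which the numerical check confirms implicitly.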
Collecting everything, we obtain the following contributions to axial component of Wigner function
\begin{align}\label{link_exp}
-\frac{e^2}{2}\frac{{\partial}}{{\partial} P_{\rho}}&\Big[\int_Q\((1-f_p)f_{p'}D_{{\lambda}{\rho}}^>(Q)-f_p(1-f_{p'})D_{{\lambda}{\rho}}^<(Q)\){\epsilon}^{{\alpha}{\lambda}{\beta}{\mu}}P_{\alpha} Q_{\beta}(2{\pi})^2\nonumber\\
&\times{\delta}(P^2-m^2){\delta}(P'{}^2-m^2)\Big],
\end{align}
with $P'=P+Q$. We further use explicit representation of photon propagators in Feynman gauge
\begin{align}\label{D_exp}
D_{{\lambda}{\rho}}^>(Q)=(-1)2N_fe^2\int_K\text{tr}[{\slashed K}{\gamma}^{\alpha}{\slashed K}'{\gamma}^{\beta}](1-f_k)(-f_{k'})\frac{-ig_{{\lambda}{\alpha}}}{Q^2}\frac{ig_{{\rho}{\beta}}}{Q^2}(2{\pi})^2{\delta}(K^2){\delta}(K'{}^2),
\end{align}
with $K'=K-Q$.
For the purpose of extracting the LL result, we have used bare propagators for photons. The factor of $2N_f$ arises from equal contributions from $N_f$ flavors of fermions and anti-fermions in the medium. A similar expression for $D_{{\lambda}{\mu}}^<(Q)$ can be obtained by interchanging $K$ and $K'$ in \eqref{D_exp}. Plugging \eqref{D_exp} into \eqref{link_exp}, we have
\begin{align}\label{link_exp2}
{\color{black}+}N_f\frac{{\partial}}{{\partial} P_{\rho}}&\Big[\int\frac{d^3kd^4Q}{(2{\pi})^52k2p_0'2k'}\(-(1-f_p)f_{p'}(1-f_k)f_{k'}+f_p(1-f_{p'})f_k(1-f_{k'})\)\nonumber\\
&\times4(K_{\lambda} K_{\rho}'+K_{\rho} K_{\lambda}')\frac{1}{(Q^2)^2}{\delta}(2K\cdot Q){\epsilon}^{{\alpha}{\lambda}{\beta}{\mu}}P_{\alpha} Q_{\beta}{\delta}(P^2-m^2){\delta}(P'{}^2-m^2)\Big].
\end{align}
It has a similar structure of loss and gain terms to its self-energy counterpart \eqref{fSigma}; thus a result proportional to the shear gradient is expected when we take into account the redistribution of particles through $f\to f^{(0)}+f^{(1)}$.
The remaining task of evaluating the phase space integrals is tedious but straightforward with the method sketched in the previous section. Here we simply list the final results, with details collected in appendix C
\begin{align}\label{link_final}
{\cal A}^i={\color{black}}\frac{1}{(2{\pi})}C_f\frac{9\zeta(3)}{2{\beta}^4}(J_1+J_2+J_3+J_4)\frac{{\epsilon}^{iml}p_np_lS_{mn}}{2p^5}f_p(1-f_{p}){\delta}(P^2-m^2),
\end{align}
with
\begin{align}\label{Js}
&J_1=\frac{8{\pi}{\beta}^2p^3}{p_0},\nonumber\\
&J_2=-\frac{8{\pi}{\beta}^2p^5}{p_0^3},\nonumber\\
&J_3=-\frac{4{\pi}{\beta}^2\(8p^5-56p^3p_0^2+66pp_0^4+(6p^4p_0-39p^2p_0^3+33p_0^5)\ln\frac{p_0-p}{p_0+p}\)}{9p_0^3},\nonumber\\
&J_4=-\frac{-1+e^{{\beta} p_0}}{1+e^{{\beta} p_0}}\frac{2{\pi}{\beta}^3(-2p^2+11p_0^2)\(-4p^3+6pp_0^2+(3p_0^3-3p^2p_0)\ln\frac{p_0-p}{p_0+p}\)}{9p_0^2}.
\end{align}
\section{Discussion}
Let us put together different contributions\footnote{Note that we have assumed the probe fermion has an equilibrium distribution $f_p=f^{(0)}_p$.}
\begin{align}\label{As}
&{\cal A}^i_{\partial}=\frac{2{\pi}}{2(p_0+m)}{\epsilon}^{iml}p_np_lS_{mn}f_p(1-f_p){\delta}(P^2-m^2),\nonumber\\
&{\cal A}^i_{\Sigma}={\color{black}-}C_f\frac{1}{(p_0+m)p^5}{\epsilon}^{iml}p_np_lS_{mn}(I_2+I_3){\delta}(P^2-m^2),\nonumber\\
&{\cal A}^i_U={\color{black}}\frac{1}{2{\pi}}\frac{9{\zeta}(3)}{2{\beta}^4}C_f\frac{1}{2p^5}{\epsilon}^{iml}p_np_lS_{mn}(J_1+J_2+J_3+J_4)f_p(1-f_p){\delta}(P^2-m^2).
\end{align}
The first two lines come from the partial derivative and self-energy terms in \eqref{calA} respectively\footnote{In arriving at the first line, an identity similar to \eqref{shear_grad} needs to be used.}. The third line comes from the gauge link contribution. The first one is known in the literature \cite{Liu:2021uhn,Becattini:2021suc}. The second and third ones are the main results of this paper. The expressions for the $I$'s and $J$'s can be found in \eqref{Is} and \eqref{Js}.
It is instructive to take limits to gain some insight from the long expressions. We consider the limit $p_0\gg T$, which allows us to replace in \eqref{Is} the $\cosh$ functions by Boltzmann factors and the $\tanh$ function by unity. Similarly, $f_p(1-f_p)$ can also be replaced by a Boltzmann factor. These limits further allow us to neglect the second line in $I_2$ and $J_1$ through $J_3$. On top of this, we consider separately the non-relativistic limit $m\gg p$ and the relativistic limit $m\ll p$. For the former, $m\gg p$, we have
\begin{align}\label{massive}
&{\cal A}_{\partial}^i\simeq\frac{{\pi}}{2m}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2),\nonumber\\
&{\cal A}_{\Sigma}^i\simeq{\color{black}-}\frac{9{\zeta}(3)C_f}{5{\beta} m^2}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2),\nonumber\\
&{\cal A}_U^i\simeq{\color{black}-}\frac{11{\zeta}(3)C_f}{5{\beta} m^2}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2).
\end{align}
The fact that the non-relativistic limit is regular in $p$ is non-trivial: it follows from a cancellation between powers of $p$ from the expansion of the $I$'s and $J$'s in the numerator and $p^5$ in the denominator in \eqref{massive}, which holds separately for the self-energy and gauge link contributions. Since we expect the spin polarization to be well-defined in the non-relativistic limit, this regularity serves as a check of our results.
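A related numerical check can be performed on the full expressions \eqref{Js}: each of $J_2$ through $J_4$ starts at $O(p^5)$ after the logarithmic cancellations, so $(J_1+\dots+J_4)/p^3$ tends to the finite value $8{\pi}{\beta}^2/p_0$ at fixed $p_0$, and with the extra $p^2$ from ${\epsilon}^{iml}p_np_l$ the combination in \eqref{link_final} stays regular at small $p$. A sketch (direct transcription of \eqref{Js}):

```python
import math

def J_terms(p, p0, b):
    # Transcription of eq. (Js); b = beta.
    L = math.log((p0 - p) / (p0 + p))
    J1 = 8*math.pi*b**2*p**3/p0
    J2 = -8*math.pi*b**2*p**5/p0**3
    J3 = -4*math.pi*b**2*(8*p**5 - 56*p**3*p0**2 + 66*p*p0**4
          + (6*p**4*p0 - 39*p**2*p0**3 + 33*p0**5)*L)/(9*p0**3)
    t  = (math.exp(b*p0) - 1)/(math.exp(b*p0) + 1)
    J4 = -t*2*math.pi*b**3*(-2*p**2 + 11*p0**2)*(-4*p**3 + 6*p*p0**2
          + (3*p0**3 - 3*p**2*p0)*L)/(9*p0**2)
    return J1, J2, J3, J4

p0, b = 1.0, 1.0
for p in (1e-2, 1e-3):
    J1, J2, J3, J4 = J_terms(p, p0, b)
    # The delicate log cancellations leave J1 ~ p^3 dominant at small p.
    assert abs(sum((J1, J2, J3, J4))/p**3 - 8*math.pi*b**2/p0) < 1e-2
    assert abs(J3) < 1e-3*abs(J1) and abs(J4) < 1e-3*abs(J1)
```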
For the relativistic limit $m\ll p$\footnote{Note that we can still have $m\gg eT$ such that ignoring Compton scattering is justified.}, we have
\begin{align}\label{massless}
&{\cal A}_{\partial}^i\simeq\frac{{\pi}}{p}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2),\nonumber\\
&{\cal A}_{\Sigma}^i\simeq{\color{black}}\frac{(2{\pi}^2-135{\zeta}(3))C_f}{9{\beta} p^2}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2),\nonumber\\
&{\cal A}_U^i\simeq{\color{black}-}\frac{9{\zeta}(3)C_f}{2{\beta} p^2}{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2).
\end{align}
The regularity of the results is also non-trivial in that the logarithmically divergent factor $\ln\frac{p_0-p}{p_0+p}$ as $\frac{p}{m}\to\infty$ is compensated by a vanishing prefactor in both self-energy and gauge link contributions in the relativistic limit.
It is worth mentioning that in both limits ${\cal A}_{\Sigma}^i$ and ${\cal A}_U^i$ have the opposite sign to ${\cal A}_{\partial}^i$. The magnitude of ${\cal A}_U^i$ is larger (smaller) than that of ${\cal A}_{\Sigma}^i$ in the non-relativistic (relativistic) limit. In the limit $p_0\gg T$ we consider, ${\cal A}_{\Sigma}^i$ and ${\cal A}_U^i$ are suppressed by the factor $\frac{1}{{\beta} m}$ or $\frac{1}{{\beta} p}$ compared to ${\cal A}_{\partial}^i$. The suppression factor can be easily understood from \eqref{As}: ${\cal A}_{\partial}^i$ depends on the temperature through the factor $f_p(1-f_p)$, which arises from our local equilibrium assumption on the distribution function of the probe fermion. The other two contributions originate from collisions between the probe fermion and medium fermions, and are thus characterized by at least one power of temperature, giving rise to a factor $\frac{T}{p_0}$ or $\frac{T}{p}$, which is consistent with the explicit limits we have. The medium dependence is also reflected in the constant $C_f$, which encodes the field content of the medium. In view of applications to spin polarization in heavy ion collisions, the contributions from the self-energy and gauge link depend on the numerical factors. We plot in Fig.~\ref{fig:Bs} the three contributions for phenomenologically motivated parameters, with the caveat that our QED calculation is only meant to provide insights into the QCD case. We take $m=100$ MeV, $T=150$ MeV and $p$ in the range of a few GeV. The plot shows that the combined contribution from the self-energy and gauge link leads to a modest suppression of the derivative contribution.
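The sign and relative-magnitude statements can be read off from the numerical coefficients in \eqref{massive} and \eqref{massless}; a quick arithmetic check, with the coefficients stripped of the common factor $C_f{\epsilon}^{iml}p_np_lS_{mn}e^{-{\beta} p_0}{\delta}(P^2-m^2)$ and of $1/({\beta} m^2)$ or $1/({\beta} p^2)$:

```python
import math

z3 = 1.2020569031595943                  # zeta(3)
cS_nr, cU_nr = 9*z3/5, 11*z3/5           # A_Sigma, A_U coefficients, non-relativistic
cS_r = (2*math.pi**2 - 135*z3)/9         # A_Sigma coefficient, relativistic
cU_r = -9*z3/2                           # A_U coefficient, relativistic

assert cS_r < 0 and cU_r < 0             # both opposite in sign to A_partial
assert cU_nr > cS_nr                     # |A_U| > |A_Sigma| in the non-relativistic limit
assert abs(cU_r) < abs(cS_r)             # |A_U| < |A_Sigma| in the relativistic limit
```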
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,clip]{B}
\caption{$B_M/B_{\partial}$ versus $p/T$ for probe fermion mass $m=100$ MeV at $T=150$ MeV for $N_f=2$. $B_M$ are defined by ${\cal A}^i_M=B_M{\epsilon}^{iml}p_np_lS_{mn}$ with $M={\partial},{\Sigma},U$.}
\label{fig:Bs}
\end{center}
\end{figure}
\section{Summary and Outlook}
We have revisited spin polarization in a shear flow and found two new contributions. The first one is the self-energy contribution arising from particle redistribution in the shear flow. We illustrate it with a massive probe fermion in a massless QED plasma. It is found that the self-energy contribution is parametrically the same as the derivative contribution considered in the literature.
The self-energy contribution is gauge dependent. In order to restore the gauge invariance of spin polarization, we have proposed a gauge invariant Wigner function, which contains double gauge links stretching along the Schwinger-Keldysh contour. This allows us to include gauge field fluctuations in both forward and backward contours, which is needed for consistent description of gauge field mediated collisions. We have found a second contribution associated with the gauge link, which is also parametrically of the same order.
Both contributions come from particle redistribution in the medium due to the shear flow. The particle redistribution is determined in a steady shear flow, thus the two contributions correspond to non-dynamical ones. A complete description of spin polarization still lacks a dynamical contribution corresponding to the term $a^{\mu} f_A$ in \eqref{calA}. It is worth pointing out that current phenomenological studies seem to indicate an insufficient magnitude from the derivative contribution as compared to measured spin polarization data \cite{Becattini:2021iol,Yi:2021ryh}. The suppression from the new contributions found in this work seems to point to an important role by the dynamical contribution.
Initial efforts have already been made in \cite{Fang:2022ttm,Wang:2022yli}.
For phenomenological applications, several generalizations of the present work are needed: first of all, it is crucial to generalize the QED analysis to the QCD case. Such a generalization in the collisionless limit has been made in \cite{Luo:2021uog,Yang:2021fea}. In the collisional case, we expect the redistribution of both quarks and gluons to play a role; secondly, going beyond the LL order is necessary to understand the significance of Compton and annihilation processes in the spin polarization problem; last but not least, it is also important to relax our assumption of an equilibrium distribution for the probe fermion. These will be reported elsewhere.
\section*{Acknowledgments}
We are grateful to Jian-hua Gao, Yun Guo and Shi Pu for fruitful discussions. This work is in part supported by NSFC under Grant Nos 12075328, 11735007 (S.L.) and 12005112 (Zy.W.).
One of the standard assumptions of textbook quantum mechanics is the ``L\"uders rule'' which states that when an observable---represented by a self-adjoint operator with a non-degenerate spectrum---is measured in a system, the state of the system collapses to the eigenstate of the observable associated with the observed eigenvalue \cite{Luders2006, Busch2009a}. But assuming the universal validity of quantum theory, such a state change must be consistent with a description of the measurement process as a physical interaction between the system to be measured and a given (quantum) measuring apparatus. In his groundbreaking contribution to quantum theory in 1932 \cite{Von-Neumann-Foundations}, von Neumann introduced just such a model for the measurement process, where the system and apparatus interact unitarily.
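As a concrete illustration of the L\"uders rule for a non-degenerate observable, the following minimal sketch (in Python, for illustration only) applies the rule to a generic mixed state; note that the post-measurement state is pure even though the input has full rank, which is the feature at stake in the third-law discussion below:

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
X = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho).real                # generic full-rank (mixed) input state

k = 1
Pk = np.zeros((d, d)); Pk[k, k] = 1.0    # spectral projection |k><k| of the observable
post = Pk @ rho @ Pk
post /= np.trace(post).real              # Lueders post-measurement state for outcome k

assert np.linalg.matrix_rank(rho) == d   # input has full rank ...
assert np.linalg.matrix_rank(post) == 1  # ... yet the output is pure (rank 1)
```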
While L\"uders measurements, and von Neumann's model for their realisation, are always available within the formal framework of quantum theory, they may not always be feasible in practice---technological obstacles and fundamental physical principles must also be accounted for. One such principle is that of conservation laws and, as shown by the famous Wigner-Araki-Yanase theorem \cite{E.Wigner1952,Busch2010, Araki1960,Ozawa2002,Miyadera2006a,Loveridge2011,Ahmadi2013b,Loveridge2020a,Mohammady2021a,Kuramochi2022}, only observables commuting with the conserved quantity admit a L\"uders measurement. This observation naturally raises the following question: do other physical principles constrain quantum measurements, and if so, how? Given that von Neumann's model for the measurement process assumes that the measuring apparatus is initially prepared in a pure state, an obvious candidate for consideration immediately presents itself: the third law of thermodynamics, or Nernst's unattainability principle, which states that a system cannot be cooled to absolute zero temperature with finite time, energy, or control complexity; in the quantum regime, the third law prohibits the preparation of pure states \cite{Schulman2005, Allahverdyan2011a, Reeb2013a, Masanes2014, Ticozzi2014, Scharlau2016a, Wilming2017a, Freitas2018, Clivaz2019, Taranto2021}. As argued by Guryanova \emph{et al.}, such unattainability rules out L\"uders measurements for any self-adjoint operator with a non-degenerate spectrum \cite{Guryanova2018}.
The more modern quantum theory of measurement \cite{PaulBuschMarianGrabowski1995, Busch1996, Heinosaari2011, Busch2016a} states that the properties of a quantum system are not exhausted by its sharp observables, i.e., observables represented by self-adjoint operators. Indeed, observables can be fundamentally unsharp, and are properly represented as positive operator valued measures (POVM) \cite{Busch2010a, Jaeger2019}. Similarly, the state change that results from measurement is more properly captured by the notion of instruments \cite{Davies1970}, which need not obey the L\"uders rule. Moreover, the interaction between system and apparatus during the measurement process is not necessarily unitary, and is more generally described as a channel, which more accurately describes situations where the interaction with the environment cannot be neglected. Therefore, how the third law constrains general measurements should be addressed; in this paper, we shall thoroughly examine this in the finite-dimensional setting.
First, we provide a minimal operational formulation of the third law by constraining the class of permissible channels, such that the availability of a channel not so constrained is both necessary and sufficient for the preparation of pure states. The considered class of channels includes those whose input and output spaces are not the same, which is the case when the process considered involves composing and discarding systems, and is more general than the class of rank non-decreasing channels, such as unitary channels. Indeed, the rank non-decreasing concept can only be properly applied in the limited cases where the input and output systems of a channel are the same.
Subsequently, we consider the most general class of measurement schemes that are constrained by the third law. That is, we do not assume that the measured observable is sharp, or that the pointer observable is sharp, or that the measurement interaction is rank non-decreasing. Next, we determine if the instruments realised by such measurement schemes may satisfy several desirable properties and, if so, under what conditions. These properties are:
\begin{enumerate}[\bf(i)]
\item {\bf Non-disturbance:} a non-selective measurement does not affect the subsequent measurement statistics of any observable that commutes with the measured observable.
\item {\bf First-kindness:} a non-selective measurement of an observable does not affect its subsequent measurement statistics.
\item {\bf Repeatability:} successive measurements of an observable are guaranteed to produce the same outcome.
\item {\bf Ideality:} whenever an outcome is certain from the outset, the measurement does not change the state of the measured system.
\item {\bf Extremality:} the instrument cannot be written as a probabilistic mixture of distinct instruments.
\end{enumerate}
L\"uders measurements of sharp observables simultaneously satisfy the above properties.
In general, however, these properties can be satisfied by instruments that do not obey the L\"uders rule, and also for observables that are not necessarily sharp. Moreover, they are in general not equivalent: an instrument can enjoy one while not another \cite{Lahti1991, Heinosaari2010, DAriano2011}. We therefore investigate each such property individually, and provide necessary conditions for their fulfilment by a measurement constrained by the third law.
We show that the third law prohibits a measurement of any \emph{small-rank} observable---an observable that has at least one rank-1 effect, or POVM element---from satisfying any of the above properties. On the other hand, extremality is shown to be permitted for an observable if each effect has sufficiently large rank, but only if the interaction between the system and apparatus is non-unitary. Finally, we show that while repeatability and ideality are forbidden for all observables, non-disturbance and first-kindness are permitted for observables that are \emph{completely unsharp}: the effects of such observables have neither eigenvalue 1 nor eigenvalue 0, and so such observables do not enjoy the ``norm-1'' property. That is, non-disturbance and first-kindness are only permitted for observables that cannot take a definite value in any state. Our results are summarised in Table \ref{table:results}.
\begin{table}[!htb]
\begin{tabular}{ |p{0.5cm}||p{1.5cm}|p{1.5cm}|p{1.5cm}| p{1.5cm}| }
\hline
\multicolumn{1}{|c||}{} & \multicolumn{4}{c|}{Observable} \\
\hline
& Small-rank & Sharp & Norm-1 & Completely unsharp\\
\hline
\bf{(i)} & \xmark & \xmark & \xmark & \cmark\\
\hline
\bf{(ii)} & \xmark & \xmark & \xmark & \cmark\\
\hline
\bf{(iii)} & \xmark & \xmark & \xmark & \xmark\\
\hline
\bf{(iv)} & \xmark & \xmark & \xmark & \xmark \\
\hline
\bf{(v)} & \xmark & \cmark & \cmark & \cmark\\
\hline
\end{tabular}
\caption{The possibility (\cmark) or impossibility (\xmark) of an observable admitting the properties {\bf(i) - (v)} outlined above is indicated for four classes of observables: small-rank observables have at least one rank-1 effect; sharp observables are such that all effects are projections; norm-1 observables are such that every effect has eigenvalue 1; and completely unsharp observables are such that no effect has eigenvalue 1 or 0. }
\label{table:results}
\end{table}
\section{Operational formulation of the third law for channels}
The third law of thermodynamics states that in the absence of infinite resources of time, energy, or control complexity, a system cannot be cooled to absolute zero temperature. Assuming the universal validity of this law, it must also hold in the quantum regime \cite{Schulman2005, Allahverdyan2011a, Reeb2013a, Masanes2014, Ticozzi2014, Scharlau2016a, Wilming2017a, Freitas2018, Clivaz2019, Taranto2021}. Throughout, we shall only consider quantum systems with a finite-dimensional Hilbert space ${\mathcal{H}}$. When such a system is in thermal equilibrium at some temperature, it is in a Gibbs state, and whenever the temperature is non-vanishing, such states are full-rank. Conversely, at absolute zero temperature the system will be in a low-rank state, i.e., it will not have full rank. In the special case of a non-degenerate Hamiltonian, the system will in fact be in a pure state. A minimal operational formulation of the third law in the quantum regime can therefore be phrased as follows: the possible transformations of quantum systems must be constrained so that the only attainable states have full rank.
In the Schr\"odinger picture, the most general transformations of quantum systems are represented by channels $\Phi: {\mathcal{L}}({\mathcal{H}}) \to {\mathcal{L}}({\mathcal{K}})$, i.e., completely positive trace-preserving maps from the algebra of linear operators on an input Hilbert space ${\mathcal{H}}$ to that of an output space ${\mathcal{K}}$. In the special case where ${\mathcal{H}} = {\mathcal{K}}$, we say that $\Phi$ acts in ${\mathcal{H}}$. But in general ${\mathcal{H}}$ need not be identical to ${\mathcal{K}}$, and the two systems may have different dimensions. This is because physically permissible transformations include the composition of multiple systems, and discarding of subsystems.
Previous formulations of the third law (see, for example, Proposition 5 of Ref. \cite{Reeb2013a} and Appendix B of Ref. \cite{Taranto2021}) have restricted the class of available channels to those with the same input and output system, and where the channel does not reduce the rank of the input state of such a system: these are referred to as rank non-decreasing channels, with unitary channels constituting a simple example. An intuitive argument for such a restriction is as follows. Suppose we wish to cool the system of interest through an interaction with an infinitely large heat bath. To utilise all degrees of freedom of such a bath one must either manipulate them all at once, which requires an infinite resource of control complexity, or one must approach the quasistatic limit, which requires an infinite resource of time. It stands to reason that, in a realistic protocol, only finitely many degrees of freedom of the bath can be accessed and so the system of interest effectively interacts with a finite, bounded, thermal bath. Such a bath is represented by a Gibbs state with a non-vanishing temperature, which has full rank. It is a simple task to show that if the interaction between the system of interest and the finite thermal bath is a rank non-decreasing channel---such as a unitary channel---acting in the compound of system-plus-bath, then the rank of the system cannot be reduced unless infinite energy is spent. It follows that if the input state of the system is full-rank, for example if it is a Gibbs state with a non-vanishing temperature, then the third law thus construed will only allow for such a state to be transformed to another full-rank state.
The above formulation has some drawbacks, however. First, the argument relies on the strong assumption that the system interacts with a thermal environment, which is not justified on purely operational grounds; the environment may in fact be an out-of-equilibrium system. Second, the rank non-decreasing condition can only be properly applied to channels with an identical input and output: the rank of a state on ${\mathcal{H}}$ only has meaning in relation to the dimension of ${\mathcal{H}}$. Indeed, the partial trace channel (describing the process by which one subsystem is discarded) and the composition channel (describing the process by which the system of interest is joined with an auxiliary system initialised in some fixed state) are physically relevant transformations that must also be addressed, but lead to absurdities when the change in the state's rank is examined. The partial trace channel is rank-decreasing, but tracing out one subsystem of a global full-rank state can only prepare a state that has full rank in the remaining subsystem. On the other hand, the composition channel is rank-increasing. But it is simple to show that if the rank of the auxiliary state is sufficiently small, then a unitary channel can be applied on the compound so as to purify the system of interest. We thus propose the following minimal definition for channels constrained by the third law, which is conceptually sound, does not rely on any assumptions regarding the environment or how it interacts with the system under study, and accounts for the most general class of channels:
\begin{definition}\label{defn:third-law}
A channel $\Phi : {\mathcal{L}}({\mathcal{H}}) \to {\mathcal{L}}({\mathcal{K}})$ is constrained by the third law if for every full-rank state $\rho$ on ${\mathcal{H}}$, $\Phi(\rho)$ is a full-rank state on ${\mathcal{K}}$.
\end{definition}
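\defref{defn:third-law} lends itself to a direct numerical test. Since every full-rank state $\rho$ satisfies $\rho \geqslant \epsilon \mathds{1}$ for some $\epsilon > 0$, positivity of $\Phi$ gives $\Phi(\rho) \geqslant \epsilon \, \Phi(\mathds{1})$, so $\Phi$ is constrained by the third law precisely when $\Phi(\mathds{1}) = \sum_i K_i K_i^\dagger$ is positive definite for a Kraus representation $\{K_i\}$. The following minimal sketch implements this check; the amplitude-damping channel is our own illustrative choice, not an example taken from the text:

```python
import numpy as np

def third_law_constrained(kraus, tol=1e-12):
    """Check whether a channel maps every full-rank state to a full-rank state.

    Since rho >= eps*I for any full-rank rho, positivity gives
    Phi(rho) >= eps*Phi(I); hence it suffices to test whether
    Phi(I) = sum_i K_i K_i^dag is positive definite.
    """
    phi_of_identity = sum(K @ K.conj().T for K in kraus)
    return bool(np.linalg.eigvalsh(phi_of_identity).min() > tol)

def amplitude_damping(gamma):
    # Kraus operators of the qubit amplitude-damping channel (illustrative)
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return [K0, K1]

print(third_law_constrained(amplitude_damping(0.5)))  # True: output stays full-rank
print(third_law_constrained(amplitude_damping(1.0)))  # False: prepares a pure state
```

Partial damping ($\gamma < 1$) sends every full-rank state to a full-rank state, whereas total damping ($\gamma = 1$) prepares a pure state and is therefore excluded by the third law.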
Properties of channels obeying the above definition are given in \app{app:channel-third-law-properties}, and as shown in \app{app:third-law-channel-proof}, if we are able to implement any channel that is constrained by the third law, then the added ability to implement a channel that is not so constrained, that is, a channel that may map some full-rank state to a low-rank state, is both necessary and sufficient for preparing a system in a pure state, given any unknown initial state $\rho$. Moreover, note that while a rank non-decreasing channel acting in ${\mathcal{H}}$ satisfies \defref{defn:third-law}, a channel acting in ${\mathcal{H}}$ that satisfies such a definition need not be rank non-decreasing: a channel constrained by the third law may reduce the rank of some input state, but only if such a state is not full-rank. Finally, \defref{defn:third-law} has the benefit that for any pair of channels $\Phi_1$ and $\Phi_2$ satisfying this property, where the output of the former corresponds with the input of the latter, their composition $\Phi_2 \circ \Phi_1$ satisfies it as well; the set of channels constrained by the third law is thus closed under composition.
\defref{defn:third-law} also allows us to re-examine the constraints imposed by the third law on state preparations, without modeling a finite-dimensional environment prepared in a Gibbs state, or assuming that the system interacts with such an environment by a rank non-decreasing channel. A state preparation is a physical process so that, irrespective of what input state is given, the output is prepared in a unique state $\rho$; indeed, an operational definition of a state is precisely the specification of procedures, or transformations, that produce it. As stated in Ref. \cite{Gour2020b}, \emph{``A quantum state can be understood
as a preparation channel, sending a trivial quantum system
to a non-trivial one prepared in a given state''}. That is, state preparations on a Hilbert space ${\mathcal{H}}$ may be identified with the set of preparation channels
\begin{align*}
\mathscr{P}({\mathcal{H}}) := \{\Phi : {\mathcal{L}}(\mathds{C}^1) \to {\mathcal{L}}({\mathcal{H}}) \} .
\end{align*}
Here, the input space is a 1-dimensional Hilbert space $\mathds{C}^1 \equiv \mathds{C} |\Omega\>$, and the only state on such a space is the rank-1 projection $|\Omega\>\<\Omega|$. The triviality of the input space captures the notion that the output of the channel $\Phi$ is independent of the input, and so the prepared state $\rho = \Phi(|\Omega\>\<\Omega|)$ is uniquely identified with the channel itself. Without any constraints, all states $\rho$ on ${\mathcal{H}}$ may be prepared by some $\Phi \in \mathscr{P}({\mathcal{H}})$. But now we may restrict the class of preparations by the third law as follows: $\mathscr{P}({\mathcal{H}})$ is constrained by the third law if all $\Phi \in \mathscr{P}({\mathcal{H}})$ map full-rank states to full-rank states as per \defref{defn:third-law}. But note that $|\Omega\>\<\Omega|$ has full rank in $\mathds{C}^1$, and so $\rho$ is guaranteed to be full-rank in ${\mathcal{H}}$: the third law thus permits only the preparation of full-rank states.
\section{Quantum measurement}
Before investigating how the third law constrains quantum measurements, we shall first cover briefly some basic elements of quantum measurement theory which will be used in the sequel \cite{PaulBuschMarianGrabowski1995, Busch1996, Heinosaari2011, Busch2016a}.
\subsection{Observables}
Consider a quantum system ${\mathcal{S}}$ with a Hilbert space ${\mathcal{H}\sub{\s}}$ of finite dimension $2 \leqslant \dim({\mathcal{H}\sub{\s}}) < \infty$. We denote by $\mathds{O}$ and $\mathds{1}\sub{\s}$ the null and identity operators on ${\mathcal{H}\sub{\s}}$, respectively, and an operator $E$ on ${\mathcal{H}\sub{\s}}$ is called an \emph{effect} if it holds that $\mathds{O} \leqslant E \leqslant \mathds{1}\sub{\s}$. An observable of ${\mathcal{S}}$ is represented by a normalised positive operator valued measure (POVM) $\mathsf{E} : \Sigma \to \mathscr{E}({\mathcal{H}\sub{\s}})$, where $\Sigma$ is a sigma-algebra of some value space ${\mathcal{X}}$, representing the possible measurement outcomes, and $\mathscr{E}({\mathcal{H}\sub{\s}})$ is the space of effects on ${\mathcal{H}\sub{\s}}$. We restrict ourselves to discrete observables for which ${\mathcal{X}} := \{x_1, x_2, \dots \}$ is countable and finite. In such a case we may identify an observable with the set $\mathsf{E}:= \{\mathsf{E}_x: x \in {\mathcal{X}}\}$, where $\mathsf{E}_x \equiv \mathsf{E}(\{x\})$ are the (elementary) effects of $\mathsf{E}$ (also called POVM elements) which satisfy $\sum_{x\in {\mathcal{X}}} \mathsf{E}_x = \mathds{1}\sub{\s}$. The probability of observing outcome $x$ when measuring $\mathsf{E}$ in the state $\rho$ is given by the Born rule as $p^\mathsf{E}_\rho(x) := \mathrm{tr}[\mathsf{E}_x \rho]$.
Without loss of generality, we shall always assume that $\mathsf{E}_x \ne \mathds{O}$, since for any $x$ such that $\mathsf{E}_x = \mathds{O}$, the outcome $x$ is never observed, i.e., it is observed with probability zero; in such a case we may simply replace ${\mathcal{X}}$ with the smaller value space ${\mathcal{X}} \backslash \{x\}$. Additionally, we shall always assume that the observable is non-trivial, as trivial observables cannot distinguish between any states, and are thus uninformative; an effect is trivial if it is proportional to the identity, and an observable is non-trivial if at least one of its effects is not trivial.
We shall employ the short-hand notation $[\mathsf{E}, A]=\mathds{O}$ to indicate that the operator $A$ commutes with all effects of $\mathsf{E}$, and $[\mathsf{E}, \mathsf{F}]=\mathds{O}$ to indicate that all the effects of observables $\mathsf{E}$ and $\mathsf{F}$ mutually commute. An observable $\mathsf{E}$ is commutative if $[\mathsf{E},\mathsf{E}]=\mathds{O}$, and a commutative observable is also sharp if additionally $\mathsf{E}_x \mathsf{E}_y = \delta_{x,y} \mathsf{E}_x$, i.e., if $\mathsf{E}_x$ are mutually orthogonal projection operators. Sharp observables are also referred to as projection valued measures, and by the spectral theorem a sharp observable may be represented by a self-adjoint operator $A = \sum_x \lambda_x \mathsf{E}_x$, where $\{\lambda_x\} \subset \mathds{R}$ satisfies $\lambda_x \neq \lambda_y$ for $x\neq y$.
An observable that is not sharp will be called unsharp. An observable $\mathsf{E}$ has the norm-1 property if it holds that $\|\mathsf{E}_x\|=1$ for all $x$. In finite dimensions, this implies that each effect of a norm-1 observable has at least one eigenvector with eigenvalue 1. While sharp observables are trivially norm-1, this property may also be enjoyed by some unsharp observables.
We now introduce definitions for classes of observables that are of particular significance to our results:
\begin{definition}\label{defn:small-rank}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``small-rank'' if there exists some $x\in {\mathcal{X}}$ such that $\mathsf{E}_x$ has rank 1. An observable is called ``large-rank'' if it is not small-rank.
\end{definition}
In particular, a sub-class of small-rank observables is the class of rank-1 observables, for which every effect has rank 1 \cite{Holland1990, Pellonpaa2014}. For example, the effects of a sharp observable represented by a non-degenerate self-adjoint operator $A = \sum_x \lambda_x |\psi_x\>\<\psi_x|$ are the rank-1 projections $\mathsf{E}_x = |\psi_x\>\<\psi_x|$. Such observables are therefore rank-1, and hence small-rank. On the other hand, a sharp observable represented by a degenerate self-adjoint operator such that the eigenspace corresponding to each (distinct) eigenvalue has dimension larger than 1 is large-rank, as each effect is a projection with rank larger than 1.
\begin{definition}\label{defn:non-degenerate}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``non-degenerate'' if there exists some $x\in {\mathcal{X}}$ such that there are no multiplicities in the strictly positive eigenvalues of $\mathsf{E}_x$. An observable that is not non-degenerate is called degenerate.
\end{definition}
An example of a non-degenerate observable is a small-rank observable, since in such a case there exists an effect that has exactly one strictly positive eigenvalue. On the other hand, a large-rank sharp observable is degenerate, since in such a case each effect has more than one eigenvector with eigenvalue 1.
\begin{definition}\label{defn:complete-unsharp}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``completely unsharp'' if for each $x\in {\mathcal{X}}$, the spectrum of $\mathsf{E}_x$ does not contain either 1 or 0.
\end{definition}
Completely unsharp observables evidently do not have the norm-1 property, since it holds that $\| \mathsf{E}_x\| < 1$ for all $x$. But since the effects also do not have eigenvalue 0, they are in fact full-rank. It follows that completely unsharp observables are also large-rank. A completely unsharp observable may, however, be either degenerate or non-degenerate.
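The classes introduced in \defref{defn:small-rank}--\defref{defn:complete-unsharp} can be tested directly on a finite list of effect matrices. The sketch below is our own illustration; the tolerance handling and the example observables are arbitrary choices:

```python
import numpy as np

TOL = 1e-10

def is_small_rank(effects):
    # Definition 2: at least one effect has rank 1
    return any(np.linalg.matrix_rank(E, tol=TOL) == 1 for E in effects)

def is_non_degenerate(effects):
    # Definition 3: some effect has no repeated strictly positive eigenvalue
    for E in effects:
        pos = [v for v in np.linalg.eigvalsh(E) if v > TOL]
        if len(pos) == len(set(np.round(pos, 8))):
            return True
    return False

def is_completely_unsharp(effects):
    # Definition 4: no effect has eigenvalue 1 or 0
    spectra = [np.linalg.eigvalsh(E) for E in effects]
    return all(s.min() > TOL and s.max() < 1.0 - TOL for s in spectra)

# illustrative qubit observables (diagonal for simplicity)
sharp = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
fuzzy = [np.diag([0.8, 0.6]), np.diag([0.2, 0.4])]   # effects sum to the identity

print(is_small_rank(sharp), is_completely_unsharp(sharp))  # True False
print(is_small_rank(fuzzy), is_completely_unsharp(fuzzy))  # False True
```

The sharp qubit observable is rank-1 and hence small-rank, while the fuzzy one has full-rank effects with spectra inside $(0,1)$, so it is completely unsharp (and here also non-degenerate).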
\subsection{Instruments}
An instrument \cite{Davies1970}, or operation valued measure, describes how a system is transformed upon measurement, and is given as a collection of operations (completely positive trace non-increasing linear maps) ${\mathcal{I}}:= \{{\mathcal{I}}_x : x\in {\mathcal{X}}\}$ such that ${\mathcal{I}}_{\mathcal{X}}(\cdot) := \sum_{x\in {\mathcal{X}}} {\mathcal{I}}_x(\cdot)$ is a channel. Throughout, we shall always assume that the instrument acts in ${\mathcal{H}\sub{\s}}$, i.e., that both the input and output space of ${\mathcal{I}}_x$ is ${\mathcal{H}\sub{\s}}$. An instrument ${\mathcal{I}}$ is identified with a unique observable $\mathsf{E} $ via the relation $\mathrm{tr}[{\mathcal{I}}_x(\rho)] = \mathrm{tr}[\mathsf{E}_x\rho]$ for all outcomes $x$ and states $\rho$, and we shall refer to such ${\mathcal{I}}$ as an $\mathsf{E}$-compatible instrument, or an $\mathsf{E}$-instrument for short, and to ${\mathcal{I}}_{\mathcal{X}}$ as the corresponding $\mathsf{E}$-channel \cite{Heinosaari2015}. Note that while every instrument is identified with a unique observable, every observable $\mathsf{E}$ admits infinitely many $\mathsf{E}$-compatible instruments; the operations of the L\"uders instrument ${\mathcal{I}}^L$ compatible with $\mathsf{E}$ are written as
\begin{align}\label{eq:Luders}
{\mathcal{I}}^L_x(\cdot) := \sqrt{\mathsf{E}_x} \cdot \sqrt{\mathsf{E}_x},
\end{align}
and it holds that the operations of every $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ can be constructed as ${\mathcal{I}}_x = \Phi_x \circ {\mathcal{I}}^L_x$, where $\Phi_x$ are arbitrary channels acting in ${\mathcal{H}\sub{\s}}$ that may depend on outcome $x$ \cite{Ozawa2001,Pellonpaa2013a}.
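As a numerical illustration of \eq{eq:Luders} (our own sketch, with an arbitrary unsharp qubit observable), the L\"uders operations can be built from the effect square roots and checked against the Born rule:

```python
import numpy as np

def sqrtm_psd(E):
    # matrix square root of a positive semi-definite effect
    vals, vecs = np.linalg.eigh(E)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def luders_ops(effects):
    # Luders instrument: I_x(rho) = sqrt(E_x) rho sqrt(E_x)
    roots = [sqrtm_psd(E) for E in effects]
    return [lambda rho, R=R: R @ rho @ R for R in roots]

effects = [np.diag([0.8, 0.4]), np.diag([0.2, 0.6])]   # an unsharp qubit observable
rho = np.array([[0.6, 0.2], [0.2, 0.4]])               # a full-rank state

ops = luders_ops(effects)
for E, I_x in zip(effects, ops):
    # each operation reproduces the Born probability tr[E_x rho]
    assert np.isclose(np.trace(I_x(rho)), np.trace(E @ rho))
# the non-selective channel I_X is trace preserving
assert np.isclose(np.trace(sum(I_x(rho) for I_x in ops)), 1.0)
```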
\subsection{Measurement schemes}
A quantum system is measured when it undergoes an appropriate physical interaction with a measuring apparatus so that the transition of some variable of the apparatus---such as the position of a pointer along a scale---registers the outcome of the measured observable. The most general description of the measurement process is given by a \emph{measurement scheme}, which is a tuple ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ where ${\mathcal{H}\sub{\aa}}$ is the Hilbert space for (the probe of) the apparatus $\aa$ and $\xi$ is a fixed state of $\aa$, ${\mathcal{E}}$ is a channel acting in ${\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}$ which serves to correlate ${\mathcal{S}}$ with $\aa$, and $\mathsf{Z} := \{\mathsf{Z}_x : x\in {\mathcal{X}}\}$ is a POVM acting in ${\mathcal{H}\sub{\aa}}$ which is referred to as a ``pointer observable''. For all outcomes $x$, the operations of the instrument ${\mathcal{I}}$ implemented by ${\mathcal{M}}$ can be written as
\begin{align}\label{eq:instrument-dilation}
{\mathcal{I}}_x (\cdot) = \mathrm{tr}\sub{\aa}[(\mathds{1}\sub{{\mathcal{S}}}\otimes \mathsf{Z}_x) {\mathcal{E}}(\cdot \otimes \xi)],
\end{align}
where $\mathrm{tr}\sub{\aa}[\cdot]$ is the partial trace over $\aa$. The channel implemented by ${\mathcal{M}}$ is thus ${\mathcal{I}}_{\mathcal{X}}(\cdot) = \mathrm{tr}\sub{\aa}[{\mathcal{E}}(\cdot \otimes \xi)]$. Every $\mathsf{E}$-compatible instrument admits infinitely many \emph{normal} measurement schemes, where $\xi$ is chosen to be a pure state, ${\mathcal{E}}$ is chosen to be a unitary channel, and $\mathsf{Z}$ is chosen to be sharp \cite{Ozawa1984}. Von Neumann's model for the measurement process is one such example of a normal measurement scheme. However, unless stated otherwise, we shall consider the most general class of measurement schemes, where $\xi$ need not be pure, ${\mathcal{E}}$ need not be unitary, and $\mathsf{Z}$ need not be sharp.
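As an illustration of \eq{eq:instrument-dilation} (our own sketch), a normal scheme with a CNOT interaction and a sharp pointer reproduces the L\"uders measurement of the computational basis; note that this scheme is \emph{not} constrained by the third law, since the probe state below is pure:

```python
import numpy as np

def dilated_instrument(rho, xi, U, pointer, dim_S):
    """Evaluate I_x(rho) = tr_A[(1 (x) Z_x) U (rho (x) xi) U^dag] for each x."""
    dim_A = xi.shape[0]
    state = U @ np.kron(rho, xi) @ U.conj().T
    ops = []
    for Z in pointer:
        M = (np.kron(np.eye(dim_S), Z) @ state).reshape(dim_S, dim_A, dim_S, dim_A)
        ops.append(np.einsum('iaja->ij', M))   # partial trace over the apparatus
    return ops

ket0, ket1 = np.eye(2)
xi = np.outer(ket0, ket0)          # pure probe: forbidden by the third law
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
pointer = [np.outer(ket0, ket0), np.outer(ket1, ket1)]   # sharp pointer observable

rho = np.array([[0.7, 0.3], [0.3, 0.3]])
out = dilated_instrument(rho, xi, CNOT, pointer, dim_S=2)
# outcome probabilities are the diagonal entries of rho ...
assert np.isclose(np.trace(out[0]), rho[0, 0])
# ... and conditional outputs are |x><x|: the Luders measurement of sigma_z
assert np.allclose(out[1], rho[1, 1] * np.outer(ket1, ket1))
```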
\section{Measurement schemes constrained by the third law}
We now consider how the third law constrains measurement schemes, and subsequently examine how such constraints limit the possibility of a measurement to satisfy the properties {\bf (i) - (v)} outlined in the introduction.
Since the third law only pertains to channels and state preparations, the only elements of a measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ that will be limited by the third law are the interaction channel ${\mathcal{E}}$, and the apparatus state preparation $\xi$. By \defref{defn:third-law} and the preceding discussion, we therefore introduce the following definition:
\begin{definition}\label{defn:third-law-measurement}
A measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ is constrained by the third law if the following hold:
\begin{enumerate}[(i)]
\item $\xi$ is a full-rank state on ${\mathcal{H}\sub{\aa}}$.
\item For every full-rank state $\varrho$ on ${\mathcal{H}\sub{\s}} \otimes {\mathcal{H}\sub{\aa}}$, ${\mathcal{E}}(\varrho)$ is also a full-rank state.
\end{enumerate}
\end{definition}
Properties of measurement schemes constrained by the third law are given in \app{app:third-law-measurement} and \app{app:fixed-point-measurement}. Note that the third law does not impose any constraints on the measurability of observables; we may always choose ${\mathcal{M}}$ to be a ``trivial'' measurement scheme, where ${\mathcal{H}\sub{\aa}} \simeq {\mathcal{H}\sub{\s}}$ and ${\mathcal{E}}$ is a unitary swap channel, in which case the observable $\mathsf{E}$ measured in the system is identified with the pointer observable $\mathsf{Z}$ of the apparatus, which can be chosen arbitrarily. This is in contrast to the case where a measurement is constrained by conservation laws; by the Yanase condition, the pointer observable is restricted so that it commutes with the apparatus part of the conserved quantity, and it follows that an observable not commuting with the system part of the conserved quantity is measurable only if it is unsharp, and only if the apparatus preparation has a large coherence in the conserved quantity \cite{Mohammady2021a}.
The measurability of observables notwithstanding, let us note that an instrument implemented by a trivial measurement scheme is also trivial, i.e., it will hold that for all outcomes $x$ and states $\rho$, the operations of ${\mathcal{I}}$ satisfy ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}_x \rho] \xi$. Irrespective of what outcome is observed and what the initial state is, the final state is always $\xi$. In such a case, ${\mathcal{I}}$ fails all the properties {\bf (i) - (v)} that are the subject of our investigation. Therefore, whether or not an observable admits an instrument---realisable by a measurement scheme constrained by the third law as per \defref{defn:third-law-measurement}---with such properties remains to be seen: we shall now investigate this.
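The triviality of the swap scheme is easy to check numerically. In the sketch below (our own illustration), the probe is prepared in a full-rank state and the interaction is a unitary swap, so the scheme respects \defref{defn:third-law-measurement}, yet the implemented instrument is trivial: the outcome statistics are those of the pointer observable, and the output state is always $\xi$:

```python
import numpy as np

# swap channel on C^2 (x) C^2, basis ordering |system, apparatus>
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)

def trivial_instrument(rho, xi, Z):
    # I_x(rho) = tr_A[(1 (x) Z_x) SWAP (rho (x) xi) SWAP^dag]
    state = SWAP @ np.kron(rho, xi) @ SWAP.T
    out = []
    for Zx in Z:
        M = (np.kron(np.eye(2), Zx) @ state).reshape(2, 2, 2, 2)
        out.append(np.einsum('iaja->ij', M))   # partial trace over the apparatus
    return out

rho = np.array([[0.6, 0.2 + 0.1j], [0.2 - 0.1j, 0.4]])
xi = np.diag([0.5, 0.5])                        # full-rank probe state
Z = [np.diag([0.9, 0.3]), np.diag([0.1, 0.7])]  # an arbitrary (unsharp) pointer

for Zx, Ix in zip(Z, trivial_instrument(rho, xi, Z)):
    # measured observable is E_x = Z_x, and the output state is always xi
    assert np.isclose(np.trace(Ix), np.trace(Zx @ rho))
    assert np.allclose(Ix, np.trace(Zx @ rho) * xi)
```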
\subsection{Non-disturbance}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Non-Disturbance}
\vspace*{-0.2cm}
\caption{The top half of the figure represents a sequential measurement of possibly different observables in a system initially prepared in state $\rho$, with the histograms representing the statistics obtained for each measurement in the sequence. The bottom half shows the case where the first measurement in the sequence is removed, and only the second measurement takes place. When the statistics of such a measurement are the same in both scenarios, for all states $\rho$, then the first measurement is said to not disturb the second. }\label{fig:non-disturbance}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ does not disturb an observable $\mathsf{F}:= \{\mathsf{F}_y : y \in {\mathcal{Y}}\}$ if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]
\end{align*}
for all states $\rho$ and outcomes $y$ \cite{Heinosaari2010}. In other words, ${\mathcal{I}}$ does not disturb $\mathsf{F}$ if the statistics of $\mathsf{F}$ are not affected by a prior non-selective measurement of $\mathsf{E}$ by ${\mathcal{I}}$. Non-disturbance is only possible for \emph{jointly measurable} observables, since in such a case the sequential measurement of $\mathsf{E}$ by ${\mathcal{I}}$, followed by a measurement of $\mathsf{F}$, defines a joint observable for $\mathsf{E}$ and $\mathsf{F}$ \cite{Heinosaari2015}. In the absence of any constraints, commutation of $\mathsf{E}$ with $\mathsf{F}$ is sufficient for non-disturbance. That is, if $\mathsf{F}$ commutes with $\mathsf{E}$,
there exists an $\mathsf{E}$-instrument ${\mathcal{I}}$ that does not disturb $\mathsf{F}$. Moreover, the L\"uders $\mathsf{E}$-instrument ${\mathcal{I}}^L$ does not disturb \emph{all} $\mathsf{F}$ commuting with $\mathsf{E}$ \cite{Busch1998}. This can be easily shown by the following: if all effects of $\mathsf{F}$ and $\mathsf{E}$ mutually commute, then by \eq{eq:Luders} we may write
\begin{align*}
\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}^L(\rho)] &= \sum_x \mathrm{tr}[\mathsf{F}_y \sqrt{\mathsf{E}_x} \rho \sqrt{\mathsf{E}_x}] \\
& = \sum_x \mathrm{tr}[\sqrt{\mathsf{E}_x} \mathsf{F}_y \sqrt{\mathsf{E}_x} \rho ] \\
& = \sum_x \mathrm{tr}[\mathsf{E}_x \mathsf{F}_y \rho ] = \mathrm{tr}[\mathsf{F}_y \rho].
\end{align*}
In the second line we have used the cyclicity of the trace, and in the third line we use $[\mathsf{E}_x, \mathsf{F}_y] = \mathds{O} \iff [\sqrt{\mathsf{E}_x}, \mathsf{F}_y] = \mathds{O}$.
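This computation can be verified numerically; the sketch below (our own illustrative choice of a commutative, completely unsharp qubit observable $\mathsf{E}$ and sharp $\mathsf{F}$) confirms that the non-selective L\"uders channel leaves the statistics of a commuting observable invariant:

```python
import numpy as np

# a commutative, completely unsharp qubit observable E and sharp F = sigma_z;
# all effects are diagonal, so [E, F] = 0 (illustrative choice of numbers)
E = [np.diag([0.7, 0.3]), np.diag([0.3, 0.7])]
F = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def luders_channel(effects, rho):
    # non-selective Luders channel: sum_x sqrt(E_x) rho sqrt(E_x)
    out = np.zeros_like(rho)
    for Ex in effects:
        vals, vecs = np.linalg.eigh(Ex)
        root = (vecs * np.sqrt(vals)) @ vecs.conj().T
        out = out + root @ rho @ root
    return out

rho = np.array([[0.55, 0.25 + 0.1j], [0.25 - 0.1j, 0.45]])
post = luders_channel(E, rho)
for Fy in F:
    # the prior non-selective E-measurement does not change the F statistics
    assert np.isclose(np.trace(Fy @ post), np.trace(Fy @ rho))
```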
While commutation is sufficient for non-disturbance, it is in general not necessary; if $\mathsf{E}$ and $\mathsf{F}$ do not commute but are both sufficiently unsharp so as to be jointly measurable \cite{Miyadera2008}, then it \emph{may} be possible for a measurement of $\mathsf{E}$ to not disturb $\mathsf{F}$, but not always: while non-disturbance requires joint measurability, joint measurability does not guarantee non-disturbance. Let us consider an example where non-disturbance is permitted for two non-commuting observables. Consider the case that ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, with the orthonormal basis $\{|k\rangle \otimes |m\rangle : k,m = 0,1\}$, and define the following family of operators on $\mathds{C}^2$:
\begin{align*}
A_0 &=|0\rangle \langle 0|, \qquad A_1=\frac{1}{2}|0\rangle \langle 0|, \qquad A_2 =\frac{1}{2}|1\rangle \langle 1|,\\
A_3 &=\frac{1}{2} |+\rangle \langle + |, \qquad A_4 =\frac{1}{2} |-\rangle \langle -|, \qquad A_5= |1\rangle \langle 1|,
\end{align*}
where $\ket{\pm} := \frac{1}{\sqrt{2}}(\ket{0} \pm \ket{1})$. Now consider the binary observables $\mathsf{E}:= \{\mathsf{E}_0, \mathsf{E}_1\}$ and $\mathsf{F}:= \{\mathsf{F}_0, \mathsf{F}_1\}$ acting in ${\mathcal{H}\sub{\s}}$, defined by
\begin{align*}
\mathsf{E}_0 &= A_0 \otimes |0\rangle \langle 0|
+ (A_2+ A_4) \otimes |1\rangle \langle 1| , \\
\mathsf{E}_1&=(A_1+A_3) \otimes |1\rangle \langle 1|
+A_5 \otimes |0\rangle \langle 0|,
\end{align*}
and
\begin{align*}
\mathsf{F}_0 &= A_0\otimes |0\rangle \langle 0|
+ (A_1+A_4)\otimes |1\rangle \langle 1|, \\
\mathsf{F}_1 &= (A_2+A_3)\otimes |1\rangle \langle 1|
+A_5 \otimes |0\rangle \langle 0|.
\end{align*}
One can confirm that $[\mathsf{E}, \mathsf{F}] \ne \mathds{O}$. Nevertheless, we can construct an $\mathsf{E}$-instrument ${\mathcal{I}}$ with operations
\begin{align*}
{\mathcal{I}}_0(\rho) &=\mathrm{tr}[\rho (A_0\otimes |0\>\<0| + A_4 \otimes |1\>\<1|)]
|0 \>\< 0|\otimes |0 \>\< 0| \\
& \quad + \mathrm{tr}[\rho(A_2 \otimes |1\>\<1|)]|1\>\<1|\otimes |0\>\<0|,
\\
{\mathcal{I}}_1(\rho) &= \mathrm{tr}[\rho( A_5\otimes |0\rangle \langle 0| + A_3 \otimes |1\rangle \langle 1|)]
|1\rangle \langle 1|\otimes |0\rangle \langle 0|
\\
&\quad + \mathrm{tr}[\rho(A_1 \otimes |1\rangle \langle 1|)]|0\rangle \langle 0|\otimes |0\rangle \langle 0| ,
\end{align*}
which does not disturb $\mathsf{F}$.
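These claims can be confirmed numerically. The sketch below rebuilds $A_0, \dots, A_5$ and the effects of $\mathsf{E}$ and $\mathsf{F}$, checks normalisation and non-commutativity, and verifies that the non-selective channel ${\mathcal{I}}_{\mathcal{X}}$ of the instrument defined above leaves the statistics of $\mathsf{F}$ invariant on a random full-rank state:

```python
import numpy as np

k0, k1 = np.eye(2)
plus, minus = (k0 + k1) / np.sqrt(2), (k0 - k1) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

P00, P11 = proj(k0), proj(k1)
A = [P00, P00 / 2, P11 / 2, proj(plus) / 2, proj(minus) / 2, P11]  # A_0 ... A_5

E0 = np.kron(A[0], P00) + np.kron(A[2] + A[4], P11)
E1 = np.kron(A[1] + A[3], P11) + np.kron(A[5], P00)
F0 = np.kron(A[0], P00) + np.kron(A[1] + A[4], P11)
F1 = np.kron(A[2] + A[3], P11) + np.kron(A[5], P00)

assert np.allclose(E0 + E1, np.eye(4)) and np.allclose(F0 + F1, np.eye(4))
assert not np.allclose(E0 @ F0, F0 @ E0)   # E and F do not commute

def I_X(rho):
    # non-selective channel of the E-instrument defined in the text
    w00 = np.trace(rho @ (np.kron(A[0], P00) + np.kron(A[4], P11)
                          + np.kron(A[1], P11)))
    w10 = np.trace(rho @ (np.kron(A[2], P11) + np.kron(A[5], P00)
                          + np.kron(A[3], P11)))
    return w00 * np.kron(P00, P00) + w10 * np.kron(P11, P00)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho = rho / np.trace(rho)                  # random full-rank state
for Fy in (F0, F1):
    # ... yet the F statistics are undisturbed
    assert np.isclose(np.trace(Fy @ I_X(rho)), np.trace(Fy @ rho))
```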
However, we show that under the third law constraint, commutation is in fact necessary for non-disturbance. That is, if an $\mathsf{E}$-instrument ${\mathcal{I}}$ can be
implemented by a measurement scheme constrained by the third law, such that ${\mathcal{I}}$ does not disturb $\mathsf{F}$, then
$[\mathsf{E}, \mathsf{F}]=\mathds{O}$ must be satisfied. In \app{app:third-law-measurement} we show that for any instrument ${\mathcal{I}}$, implemented by a measurement scheme constrained by the third law, there exists at least one full-rank state $\rho_0$ such that ${\mathcal{I}}_{\mathcal{X}}(\rho_0) = \rho_0$. In such a case, non-disturbance of $\mathsf{F}_y$ (i.e., $\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]$ for all $\rho$)
implies non-disturbance of a sharp observable $\P=\{\P_z\}$, where $\P_z$ are the spectral projections of $\mathsf{F}_y$. That is, a sequential measurement of $\mathsf{E}$ by the instrument ${\mathcal{I}}$, followed by a measurement of $\P$, is a joint measurement of $\mathsf{E}$ and $\P$. Since joint measurability implies commutativity when either observable is sharp, it follows that $\mathsf{E}$ must commute with $\P$, and hence with $\mathsf{F}_y$, for all $y$. In other words, given the existence of a full-rank fixed state $\rho_0$, then a measurement of $\mathsf{E}$ does not disturb $\mathsf{F}$ only if they commute. See also Proposition 4 of Ref. \cite{Heinosaari2010}.
But when the measurement of $\mathsf{E}$ is constrained by the third law, we show that $[\mathsf{E},\mathsf{F}]=\mathds{O}$ is not sufficient for non-disturbance: the properties of $\mathsf{E}$ impose further constraints. We now present our first main result:
\begin{theorem}
Under the third law constraint, a completely unsharp observable $\mathsf{E}$ admits a measurement that does not disturb any observable $\mathsf{F}$ that commutes with $\mathsf{E}$. On the other hand, if an observable $\mathsf{E}$ satisfies $\|\mathsf{E}_x\| = 1$ for any outcome $x$, then there exists $\mathsf{F}$ which commutes with $\mathsf{E}$ but is disturbed by any measurement of $\mathsf{E}$ that is constrained by the third law.
\end{theorem}
That is, an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ admits a measurement scheme ${\mathcal{M}}$ that is constrained by the third law, such that $[\mathsf{E},\mathsf{F}] = \mathds{O} \implies \mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]$ for all $y$ and $\rho$, if $\mathsf{E}$ is completely unsharp and only if $\|\mathsf{E}_x\| <1$ for all outcomes $x$. The proof is presented in \app{app:non-disturbance} (\propref{prop:non-disturbance}). To show sufficiency of complete unsharpness we prove that, given the third law constraint, an observable admits a L\"uders instrument if and only if it is completely unsharp (\propref{prop:luders-completely-unsharp}). But since L\"uders measurements are guaranteed to not disturb any commuting observable, the claim immediately follows. On the other hand, the necessity that the effects have norm smaller than 1 follows from the observation that if any effect of $\mathsf{E}$ has eigenvalue 1, the projection onto the corresponding eigenspace commutes with $\mathsf{E}$ but is shown to be disturbed. In particular, this implies that when a norm-1 observable (such as a sharp observable) is measured under the third law constraint, then there exists some observable $\mathsf{F}$ that commutes with $\mathsf{E}$ but is nonetheless disturbed. Note that an observable can satisfy $\| \mathsf{E}_x\| <1$ for all $x$ without being completely unsharp, since such effects can still have 0 in their spectrum.
Of course, even if a sharp or norm-1 observable $\mathsf{E}$ fails the strict non-disturbance condition above, some observables may nevertheless be left undisturbed by its measurement. In \app{app:non-disturbance}, we show that if an observable is small-rank as per \defref{defn:small-rank}, then a third-law constrained measurement of such an observable will disturb all observables, even if they commute. We also show that if $\mathsf{E}$ is a non-degenerate observable as per \defref{defn:non-degenerate}, then the class of non-disturbed observables will be commutative, and any pair of non-disturbed observables will commute. That is, for any $\mathsf{F}$ and $\mathsf{G}$ that are non-disturbed, it will hold that $[\mathsf{F}, \mathsf{G}] = [\mathsf{F},\mathsf{F}] = [\mathsf{G},\mathsf{G}] = \mathds{O}$. In other words, non-degeneracy of the measured observable will spoil the ``coherence'' of the measured system. Therefore, to ensure that a measurement of $\mathsf{E}$ does not disturb a non-trivial class of (possibly non-commutative) observables, $\mathsf{E}$ must be a large-rank (and degenerate) observable. In \app{app:non-disturbance}, we construct an explicit example where the measurement of a sharp observable that is large-rank, and hence degenerate, will not disturb a non-trivial class of possibly non-commutative observables. This is a binary observable $\mathsf{E}$ acting in a two-qubit system ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, defined by $\mathsf{E}_x = \mathds{1} \otimes |x\>\<x|$. Note that these effects have rank 2, and are hence also degenerate. In such a case, any observable $\mathsf{F}$ with effects $\mathsf{F}_y \otimes \mathds{1}$ will be non-disturbed, and it may be the case that $[\mathsf{F},\mathsf{F}] \ne \mathds{O}$.
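The structure of this example is easy to check numerically. The third-law constrained scheme itself is constructed in \app{app:non-disturbance}; the sketch below (our own illustration) only verifies the algebraic claims, using the unconstrained L\"uders instrument of $\mathsf{E}_x = \mathds{1} \otimes |x\>\<x|$: every $\mathsf{F}_y \otimes \mathds{1}$ commutes with $\mathsf{E}$ and is undisturbed, while the non-disturbed family is itself non-commutative:

```python
import numpy as np

I2 = np.eye(2)
basis_proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
E = [np.kron(I2, P) for P in basis_proj]    # E_x = 1 (x) |x><x|: rank-2 effects

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
# two mutually non-commuting system observables of the form F_y (x) 1
families = [[np.kron((I2 + s) / 2, I2), np.kron((I2 - s) / 2, I2)]
            for s in (sx, sz)]

def luders_channel(effects, rho):
    # effects here are projections, so sqrt(E_x) = E_x
    return sum(Ex @ rho @ Ex for Ex in effects)

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 4))
rho = X @ X.T
rho = rho / np.trace(rho)                   # random full-rank state
post = luders_channel(E, rho)
for F in families:
    for Fy in F:
        assert np.allclose(Fy @ E[0], E[0] @ Fy)                    # [E, F] = 0
        assert np.isclose(np.trace(Fy @ post), np.trace(Fy @ rho))  # undisturbed
# the non-disturbed class contains non-commuting observables
assert not np.allclose(families[0][0] @ families[1][0],
                       families[1][0] @ families[0][0])
```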
\subsection{First-kindness}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{First-Kindness}
\vspace*{-0.2cm}
\caption{When the same observable is measured in succession, and when the statistics of the second measurement are the same as those of the first, for all input states $\rho$, then such a measurement is said to be of the first kind. }\label{fig:first-kindness}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ is a measurement of the first kind if ${\mathcal{I}}$ does not disturb $\mathsf{E}$ itself, i.e., if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{E}_x {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{E}_x \rho]
\end{align*}
for all states $\rho$ and outcomes $x$ \cite{Lahti1991}. In the absence of any constraints, commutativity of an observable is sufficient for it to admit a first-kind measurement; for any observable $\mathsf{E}$ such that $[\mathsf{E},\mathsf{E}]=\mathds{O}$ holds, the corresponding L\"uders instrument is a measurement of the first kind. This follows from analogous reasoning to that given above. But we show that, under the third law constraint, commutativity is necessary for first-kindness, but not sufficient. We now present our second main result:
\begin{theorem}\label{thm:first-kind}
Under the third law constraint, an observable $\mathsf{E}$ admits a measurement of the first kind if and only if $\mathsf{E}$ is commutative and completely unsharp.
\end{theorem}
In particular, note that a third law constrained measurement of any norm-1 observable, such as a sharp observable, necessarily disturbs itself. The proof is given in \app{app:first-kindness} (\propref{prop:first-kindness}). The sufficiency follows from the fact that any completely unsharp observable admits a L\"uders instrument, as discussed above. On the other hand, the following is a sketch of the proof for the necessity of such a condition: a non-selective measurement constrained by the third law always leaves some full-rank state $\rho_0$ invariant. Non-disturbance of $\mathsf{E}$ therefore demands commutativity, as discussed above. But every commutative observable $\mathsf{E}$ is a classical post processing of a sharp observable $\P$, i.e., we may write $\mathsf{E}_x = \sum_y p(x|y) \P_y$ where $\{p(x|y)\}$ is a family of non-negative numbers satisfying $\sum_x p(x|y) = 1$ for every $y$ \cite{Heinosaari2011a}. Given that ${\mathcal{I}}_{\mathcal{X}}$ has a full-rank fixed state, then if ${\mathcal{I}}$ is a first-kind measurement, $\P$ is also not disturbed \cite{Mohammady2021a}. Therefore, a sequential measurement of $\mathsf{E}$ by ${\mathcal{I}}$ followed by measurement of $\P$ defines a joint measurement of $\mathsf{E}$ and $\P$. By \eq{eq:instrument-dilation}, we obtain for every $\rho$ the following:
\begin{align*}
\mathrm{tr}[\P_y \mathsf{E}_x \P_y \rho] = \mathrm{tr}[\P_y \otimes \mathsf{Z}_x {\mathcal{E}}(\rho \otimes \xi)].
\end{align*}
Now assume that $\rho$ is full-rank. Given that a third law constrained measurement employs a full-rank apparatus preparation $\xi$, while ${\mathcal{E}}$ obeys \defref{defn:third-law}, the state ${\mathcal{E}}(\rho \otimes \xi)$ is full-rank. It follows that the term on the right hand side is strictly positive, and hence so too is the term on the left. But since $\P_y \mathsf{E}_x \P_y = \sum_z p(x|z) \P_y \P_z \P_y = p(x|y) \P_y$, this implies that $p(x|y) > 0$ for all $x,y$, and the normalisation $\sum_x p(x|y) = 1$ then gives $p(x|y) < 1$. Therefore, $\mathsf{E}$ is completely unsharp.
In \app{app:first-kindness}, we construct an explicit example of a first-kind measurement (not given by a L\"uders instrument) of a commutative and completely unsharp observable. We consider a system ${\mathcal{H}\sub{\s}} = \mathds{C}^N$ with orthonormal basis $\{|n\>: n=1, \dots, N\}$, and an observable $\mathsf{E}:=\{\mathsf{E}_x: x=1,\dots,N\}$ acting in ${\mathcal{H}\sub{\s}}$ given by the effects $\mathsf{E}_x = \sum_n p(n|x) |n\>\<n|$. Here, $ p(n|x) = q(x \ominus n)$, where $\ominus$ denotes subtraction modulo $N$, with $q(n)$ some arbitrary probability distribution satisfying $0<q(n)<1$ for all $n$. Such an observable is commutative and completely unsharp.
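This construction is easy to check directly; a minimal numerical sketch with $N = 4$ and a particular choice of $q$, which also confirms that the corresponding L\"uders instrument is of the first kind:

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)

# An arbitrary probability distribution q with 0 < q(n) < 1 for all n.
q = np.array([0.1, 0.2, 0.3, 0.4])

# Effects E_x = sum_n q(x - n mod N) |n><n|: diagonal, hence mutually commuting.
E = [np.diag([q[(x - n) % N] for n in range(N)]) for x in range(N)]

assert np.allclose(sum(E), np.eye(N))                # valid POVM
completely_unsharp = all(
    (np.linalg.eigvalsh(Ex) > 0).all() and (np.linalg.eigvalsh(Ex) < 1).all()
    for Ex in E)

# The Lüders channel of a commutative observable is a first-kind measurement:
def luders(rho):
    roots = [np.diag(np.sqrt(np.diag(Ex))) for Ex in E]
    return sum(R @ rho @ R for R in roots)

A = rng.normal(size=(N, N))
rho = A @ A.T
rho /= np.trace(rho)                                 # random full-rank state
first_kind = all(
    np.isclose(np.trace(Ex @ luders(rho)), np.trace(Ex @ rho)) for Ex in E)
print(completely_unsharp, first_kind)  # True True
```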
\subsection{Repeatability}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Repeatability}
\vspace*{-0.2cm}
\caption{When the same observable is measured in succession, and when the outcome obtained by the second is guaranteed with probabilistic certainty to coincide with that of the first, for all input states $\rho$, then such a measurement is said to be repeatable.}\label{fig:repeatability}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ is repeatable if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_x(\rho)] = \delta_{x,y}\mathrm{tr}[\mathsf{E}_x \rho]
\end{align*}
for all states $\rho$ and outcomes $x,y$ \cite{Busch1995,Busch1996b}. In other words, an instrument ${\mathcal{I}}$ is a repeatable measurement of $\mathsf{E}$ if a second measurement of $\mathsf{E}$ is guaranteed (with probabilistic certainty) to produce the same outcome as ${\mathcal{I}}$. It is simple to verify that repeatability implies first-kindness, since if ${\mathcal{I}}$ is repeatable, then we have
\begin{align*}
\mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \sum_x \mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_x(\rho)] = \mathrm{tr}[\mathsf{E}_y \rho].
\end{align*}
While a first-kind measurement need not be repeatable in general, repeatability and first-kindness coincide for the class of sharp observables (Theorem 1 in Ref. \cite{Lahti1991}). For example, if $\mathsf{E}$ is commutative then the corresponding L\"uders instrument is a measurement of the first kind, but such an instrument is repeatable if and only if $\mathsf{E}$ is sharp; note that $\mathrm{tr}[\mathsf{E}_x {\mathcal{I}}^L_x(\rho)] = \mathrm{tr}[\mathsf{E}_x^2 \rho]$, which satisfies the repeatability condition if and only if $\mathsf{E}_x^2 = \mathsf{E}_x$.
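The gap between first-kindness and repeatability for unsharp observables can be exhibited numerically; a minimal sketch with a binary unsharp commutative qubit observable of our own choosing:

```python
import numpy as np

# Binary unsharp commutative observable on C^2 (our own illustrative choice).
E = [np.diag([0.75, 0.25]), np.diag([0.25, 0.75])]
roots = [np.sqrt(Ex) for Ex in E]          # matrix square root (diagonal case)

def luders_x(rho, x):
    return roots[x] @ rho @ roots[x]

rho = np.diag([0.6, 0.4])

# First kind: tr[E_y I_X(rho)] = tr[E_y rho].
IX = sum(luders_x(rho, x) for x in range(2))
first_kind = all(np.isclose(np.trace(Ey @ IX), np.trace(Ey @ rho)) for Ey in E)

# Not repeatable: tr[E_x I_x(rho)] = tr[E_x^2 rho] < tr[E_x rho] since E_x^2 != E_x.
repeatable = all(
    np.isclose(np.trace(E[y] @ luders_x(rho, x)),
               (x == y) * np.trace(E[x] @ rho))
    for x in range(2) for y in range(2))
print(first_kind, repeatable)  # True False
```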
An observable $\mathsf{E}$ admits a repeatable instrument only if it is norm-1, and in the absence of any constraints, all norm-1 observables admit a repeatable instrument. For example, if $\mathsf{E}$ is a possibly unsharp observable with the norm-1 property, and if $\ket{\psi_x}$ are eigenvalue-1 eigenvectors of the effects $\mathsf{E}_x$, then an instrument with operations ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}_x \rho] |\psi_x\>\<\psi_x|$ is repeatable. Now we present our third main result:
\begin{theorem}
Under the third law constraint, no observable admits a repeatable measurement.
\end{theorem}
This is an immediate consequence of \thmref{thm:first-kind} which shows that, under the third law constraint, norm-1 observables do not admit a measurement of the first kind. Since repeatability is only admitted for norm-1 observables, and since repeatability implies first-kindness, then the statement follows. See \corref{cor:repeatability} for further details.
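For contrast with this no-go result, the unconstrained repeatable instrument ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}_x \rho] |\psi_x\>\<\psi_x|$ described above is easy to verify; a minimal sketch for a binary unsharp norm-1 observable on $\mathds{C}^3$ of our own choosing:

```python
import numpy as np

# Binary norm-1 (but unsharp) observable on C^3 with basis (|a>, |b>, |c>):
# E_0 = |a><a| + (1/2)|b><b|, E_1 = |c><c| + (1/2)|b><b|.
# The eigenvalue-1 eigenvectors are psi_0 = |a> and psi_1 = |c>.
E = [np.diag([1.0, 0.5, 0.0]), np.diag([0.0, 0.5, 1.0])]
psi = [np.eye(3)[:, [0]], np.eye(3)[:, [2]]]

def instr(rho, x):
    # I_x(rho) = tr[E_x rho] |psi_x><psi_x|
    return np.trace(E[x] @ rho) * (psi[x] @ psi[x].T)

rho = np.diag([0.5, 0.3, 0.2])                       # an arbitrary state
repeatable = all(
    np.isclose(np.trace(E[y] @ instr(rho, x)),
               (x == y) * np.trace(E[x] @ rho))
    for x in range(2) for y in range(2))
print(repeatable)  # True
```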
\subsection{Ideality}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Ideality}
\vspace*{-0.2cm}
\caption{When an observable is measured in a system such that whenever an outcome can be predicted with certainty, the state of the measured system is unperturbed, then such a measurement is said to be ideal. }\label{fig:ideality}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An instrument ${\mathcal{I}}$ is said to be an ideal measurement of $\mathsf{E}$ if for every outcome $x$ there exists a state $\rho$ such that $\mathrm{tr}[\mathsf{E}_x \rho]=1$, and if for every outcome $x$ and every state $\rho$ the following implication holds:
\begin{align*}
\mathrm{tr}[\mathsf{E}_x \rho]=1 \implies {\mathcal{I}}_x(\rho) = \rho.
\end{align*}
That is, ${\mathcal{I}}$ is an ideal measurement if it does not change the state of the system whenever the outcome can be predicted with certainty \cite{Busch1990}. Note that ideality can only be enjoyed by norm-1 observables: since $\mathrm{tr}[\mathsf{E}_x \rho] \leqslant \| \mathsf{E}_x\|$, for any $\mathsf{E}$ that does not enjoy the norm-1 property there is no state for which some outcome can be predicted with certainty, and so the first requirement of ideality cannot be met. Conversely, in the absence of any constraints all norm-1 observables admit an ideal measurement; the condition $\mathrm{tr}[\mathsf{E}_x \rho] = 1$ holds if and only if $\rho$ has support only in the eigenvalue-1 eigenspace of $\mathsf{E}_x$, which implies that $\mathsf{E}_x \rho = \rho \mathsf{E}_x = \rho$. But in such a case, we obtain ${\mathcal{I}}^L_x(\rho) = \sqrt{\mathsf{E}_x} \rho \sqrt{\mathsf{E}_x} = \mathsf{E}_x \rho = \rho$, and so the L\"uders measurement of a norm-1 observable is ideal.
For the class of sharp observables, the ideal measurements are precisely the L\"uders instruments (see Theorem 10.6 in Ref. \cite{Busch2016a}). Since the third law only permits L\"uders instruments for completely unsharp observables, then we may immediately infer that ideal measurements of any sharp observable, even those represented by a possibly degenerate self-adjoint operator, are prohibited by the third law.
However, unsharp observables admit ideal measurements that are not given by the L\"uders instrument. For example, consider a system ${\mathcal{H}\sub{\s}} = \mathds{C}^3$ with orthonormal basis $\{\ket{-1}, \ket{0}, \ket{1}\}$. Let $\mathsf{E} := \{\mathsf{E}_+, \mathsf{E}_-\}$ be a binary norm-1 observable acting in ${\mathcal{H}\sub{\s}}$, defined by $\mathsf{E}_\pm = |\pm 1\rangle \langle \pm 1| + \frac{1}{2}|0\rangle \langle 0 |$. It can easily be verified that an instrument with operations
\begin{align*}
{\mathcal{I}}_\pm (\cdot) = \<\pm 1| \cdot |\pm 1\> |\pm 1\>\< \pm 1| + \< 0 | \cdot |0 \> \frac{\mathds{1}\sub{\s} }{6}
\end{align*}
is an ideal measurement of $\mathsf{E}$. Therefore, the restriction imposed by the third law on the realisability of L\"uders instruments does not by itself rule out the possibility of ideal measurements for unsharp norm-1 observables. Now we present our fourth main result:
\begin{theorem}
Under the third law constraint, no observable admits an ideal measurement.
\end{theorem}
The proof is given in \app{app:ideality} (\propref{prop:appendix-ideality}), and the following is a rough sketch. If ${\mathcal{I}}$ is an ideal measurement of $\mathsf{E}$, and if $\rho$ is a state for which outcome $x$ can be predicted with certainty, then ${\mathcal{I}}_y(\rho) = \mathds{O}$ for all $y\ne x$, which implies that ${\mathcal{I}}_{\mathcal{X}}(\rho) = \rho$. But given the third law constraint, for every state $\rho$ such that $\mathrm{tr}[\mathsf{E}_x \rho] = 1$, it is shown that $\rho$ cannot be a fixed state of ${\mathcal{I}}_{\mathcal{X}}$, and so ${\mathcal{I}}$ cannot be ideal.
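The ideality of the non-L\"uders instrument exhibited before the theorem can be checked directly; a minimal numerical sketch, with the basis ordered as $(\ket{-1}, \ket{0}, \ket{1})$:

```python
import numpy as np

# The binary norm-1 observable E_(+/-) = |(+/-)1><(+/-)1| + (1/2)|0><0| on C^3,
# and the non-Lüders instrument
# I_(+/-)(rho) = <(+/-)1|rho|(+/-)1> |(+/-)1><(+/-)1| + <0|rho|0> 1/6.
Em, Ep = np.diag([1.0, 0.5, 0.0]), np.diag([0.0, 0.5, 1.0])
I3 = np.eye(3)
km, k0, kp = I3[:, [0]], I3[:, [1]], I3[:, [2]]      # |-1>, |0>, |+1>

def instr(rho, sign):
    k = km if sign < 0 else kp
    return (k.T @ rho @ k) * (k @ k.T) + (k0.T @ rho @ k0) * I3 / 6

# Compatibility: tr[I_(+/-)(rho)] = tr[E_(+/-) rho] for a generic state.
rho = np.diag([0.5, 0.3, 0.2])
compatible = (np.isclose(np.trace(instr(rho, -1)), np.trace(Em @ rho))
              and np.isclose(np.trace(instr(rho, +1)), np.trace(Ep @ rho)))

# Ideality: tr[E_+ rho] = 1 forces rho = |+1><+1|, and then I_+(rho) = rho.
rho_cert = kp @ kp.T
ideal_on_certain_state = np.allclose(instr(rho_cert, +1), rho_cert)
print(compatible, ideal_on_certain_state)  # True True
```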
\subsection{Extremality}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Extremality}
\vspace*{-0.2cm}
\caption{A system may be measured by an instrument obtained as a probabilistic mixture of two distinct instruments. An extremal instrument is one that cannot be recovered as a probabilistic mixture of two distinct instruments. }\label{fig:extremality}
\vspace*{-0.5cm}
\end{center}
\end{figure}
For any fixed value space ${\mathcal{X}}$, the set of instruments is convex. That is, given any $\lambda \in [0,1]$, and any pair of instruments ${\mathcal{I}}^{(i)}:= \{{\mathcal{I}}^{(i)}_x : x\in {\mathcal{X}}\}$, $i=1,2$, we can construct an instrument ${\mathcal{I}}$ with the operations
\begin{align*}
{\mathcal{I}}_x(\cdot) = \lambda \, {\mathcal{I}}^{(1)}_x(\cdot) + (1-\lambda) {\mathcal{I}}^{(2)}_x(\cdot).
\end{align*}
An instrument ${\mathcal{I}}$ is \emph{extremal} when for any $\lambda \in (0,1)$ such a decomposition is only possible if ${\mathcal{I}} = {\mathcal{I}}^{(1)} = {\mathcal{I}}^{(2)}$. Intuitively, this implies that an extremal instrument is ``pure'', whereas a non-extremal instrument suffers from ``classical noise''. For an in-depth analysis of extremal instruments and their properties, see Refs. \cite{DAriano2011,Pellonpaa2013}. A simple example of an extremal instrument is the L\"uders instrument compatible with an observable with linearly independent effects. Since such linear independence is trivially satisfied for norm-1 observables, then their corresponding L\"uders instruments are extremal. But it is also possible for the effects of a completely unsharp observable to be linearly independent. For example, a binary observable $\mathsf{E}:= \{\mathsf{E}_0, \mathsf{E}_1\}$ acting in ${\mathcal{H}\sub{\s}} = \mathds{C}^2$, defined as $\mathsf{E}_0= 3/4|0\rangle \langle 0|+1/4 |1\rangle \langle 1|$ and $\mathsf{E}_1=\mathds{1} - \mathsf{E}_0$, is completely unsharp with linearly independent effects. Since the L\"uders instrument for such an observable is extremal, and can be implemented under the third law constraint, then we can immediately infer that extremality is permitted by the third law. Now we present our final main result:
\begin{theorem}
Under the third law constraint, an observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$ admits an extremal instrument only if $\rank{\mathsf{E}_x} \geqslant \sqrt{\dim({\mathcal{H}\sub{\s}})}$ for all outcomes $x$, and a measuring apparatus can only implement an extremal instrument if it interacts with the system via a non-unitary channel ${\mathcal{E}}$.
\end{theorem}
The proof is given in \app{app:extremality} (\propref{prop:extremality-third-law}). It follows that under the third law constraint, extremality is only permitted for large-rank observables. Note in particular that since L\"uders measurements of completely unsharp observables may be extremal, then the above result indicates that they are realisable under the third law constraint only with non-unitary measurement interactions; indeed, our proof for the sufficiency of complete unsharpness for the realisability of L\"uders instruments (\propref{prop:luders-completely-unsharp}) uses a non-unitary channel. Furthermore, note that in contradistinction to the other properties discussed above, unsharpness of $\mathsf{E}$ is not a necessary condition for extremality. Indeed, sharp observables with sufficiently large rank admit an extremal instrument, albeit such instruments cannot be L\"uders due to the previous results. In \app{app:extremality}, we provide a concrete model for an extremal instrument compatible with a binary sharp observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, defined by $\mathsf{E}_x = \mathds{1} \otimes |x\>\<x|$. Since $\rank{\mathsf{E}_x} = 2 = \sqrt{\dim({\mathcal{H}\sub{\s}})}$, we see that the bound provided in the above theorem is in fact tight.
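Both examples discussed in this subsection admit quick numerical sanity checks; a minimal sketch verifying linear independence and complete unsharpness of the qubit example, and saturation of the rank bound by the two-qubit sharp example:

```python
import numpy as np

# Binary completely unsharp qubit observable with linearly independent effects.
E0 = np.diag([0.75, 0.25])
E1 = np.eye(2) - E0
lin_indep = np.linalg.matrix_rank(
    np.stack([E0.reshape(-1), E1.reshape(-1)])) == 2
completely_unsharp = all(
    0 < lam < 1 for Ex in (E0, E1) for lam in np.linalg.eigvalsh(Ex))

# Binary sharp observable on C^2 (x) C^2 with effects 1 (x) |x><x|:
# each effect has rank 2 = sqrt(dim H), saturating the bound in the theorem.
F = [np.kron(np.eye(2), np.diag([1.0, 0.0])),
     np.kron(np.eye(2), np.diag([0.0, 1.0]))]
ranks = [np.linalg.matrix_rank(Fx) for Fx in F]
saturates = all(r == int(np.sqrt(4)) for r in ranks)
print(lin_indep, completely_unsharp, saturates)  # True True True
```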
\section{Discussion}
We have generalised and strengthened the results of Ref. \cite{Guryanova2018}, in the finite-dimensional setting, in several ways. We have considered the most general class of (discrete) observables---both the observable to be measured and the pointer observable for the measuring apparatus---and not just those that are sharp and rank-1. Moreover, we have considered a more general class of measurement interactions, between the measured system and measuring apparatus, constrained only by our operational formulation of the third law and thus not restricted to the standard unitary or rank non-decreasing framework. Within the extended setting thus described, we have shown that ideal measurements are categorically prohibited by the third law for all observables and, \emph{a fortiori}, we showed that the third law dictates that whenever a measurement outcome can be predicted with certainty, the state of the measured system is necessarily perturbed upon measurement. Moreover, we showed that the third law also forbids repeatable measurements, where we note that repeatability and ideality coincide only in the case of sharp rank-1 observables. In addition to the aforementioned impossibility statements, however, our results also include possibility statements as regards extremality and non-disturbance: the third law allows for an extremal instrument that measures an observable with sufficiently large rank, and for a measurement of a completely unsharp observable so that such a measurement will not disturb any observable that commutes with it.
Our results have interesting consequences for the role of unsharp observables in the foundations of quantum theory, and the question: what is real? There are two deeply connected traditional paradigms for the assignment of reality to a system, both of which are formulated with respect to sharp observables: the Einstein-Podolsky-Rosen (EPR) criterion \cite{Einstein1935}, and the macrorealism criteria of Leggett-Garg \cite{Leggett1985}.
The EPR criterion for a physical property to correspond to an element of reality reads: \emph{``If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity''}. In other words, the EPR criterion rests on the possibility of ideal measurements: an eigenvalue of some self-adjoint operator exists in a system when the system is in the corresponding eigenstate, so that an ideal measurement of the observable reveals the eigenvalue while leaving the system in the same state. But the EPR criterion is shown to be in conflict with the third law of thermodynamics: it is in fact \emph{not} possible to ascertain any property of the system, with certainty, without changing its state. As argued by Busch and Jaeger \cite{Busch2010a, Jaeger2019}, however, the EPR criterion is sufficient, but not necessary; a necessary condition for a property of a system to correspond to an element of reality is that \emph{``it must have the capacity of influencing other systems, such as a measuring apparatus, in a way that is characteristic of such property''}. Indeed, since the influence the system has on the apparatus may come in degrees---quantified by the probability, or ``propensity'', for the apparatus to register that such property obtains a given value in the system---then even an unsharp observable may correspond to an element of ``unsharp reality''. But note that this weaker criterion makes no stipulation as to how the state of the system changes upon measurement, and does not rely on the possibility of ideal measurements: a property may exist in a system even if its measurement changes the state of the system. Consequently, our results provide support for the unsharp reality program of Busch and Jaeger from a thermodynamic standpoint, as it is shown to be compatible with the third law.
On the other hand, Leggett and Garg proposed Macrorealism as the conjunction of two postulates: {\bf (MR)} \emph{Macrorealism per se}, and {\bf (NI)} \emph{Noninvasive measurability}. {\bf (MR)} rests on the notion of \emph{definiteness}, i.e., that at any given time, a system can only be in one out of a set of states that are perfectly distinguishable by measurement of the observable describing the system---for example, an eigenstate corresponding to some eigenvalue of a self-adjoint operator. On the other hand, {\bf (NI)} requires that measurement of such an observable not influence the statistics of other observables at later times. In other words, {\bf (NI)} relies on the possibility of a non-disturbing measurement. But we showed that the third law permits non-disturbance only for unsharp observables without the norm-1 property. Since such observables do not admit definite values in any state, i.e., no two states can be perfectly distinguished by a measurement of such observables, the third law is incompatible with the conjunction of {\bf (MR)} and {\bf (NI)}. It follows that if we want to keep {\bf (NI)}, then we must drop {\bf (MR)}; once again we are forced to adopt the notion of an unsharp reality.
To be sure, the third law of thermodynamics should not be considered in isolation; a complete analysis of how the laws of thermodynamics constrain channels and quantum measurements demands that the third law be considered in conjunction with the first (conservation of energy) and with the second (no perpetual motion of the second kind). Indeed, our operational formulation of the third law is independent of any notion of temperature, energy, or time. We expect that in the complete picture, that is, when the other laws are also taken into account, our generalised formulation will recover the standard notions of the third law in the literature. It is also an interesting question to ask how our formulation of the third law, and the constraints imposed by such law on measurements, can be extended to the infinite-dimensional setting. A complete operational formulation of channels constrained by the laws of thermodynamics, and for more general systems than those of finite dimension, is thus still an open problem; our work constitutes one part of such a program, which extends the research discipline devoted to the ``thermodynamics of quantum measurements'' \cite{Sagawa2009b, Miyadera2011d, Jacobs2012a, Funo2013, Navascues2014a, Miyadera2015a, Abdelkhalek2016, Hayashi2017, Lipka-Bartosik2018, Solfanelli2019, Purves-2020,Aw2021, Danageozian2022, Mohammady2022a, Stevens2021}. While our impossibility results are expected to carry over to the more complete framework, the question remains as to how our positive claims must be adapted in light of the other laws of thermodynamics: the combined laws may impose further constraints. Indeed, as witnessed by the Wigner-Araki-Yanase theorem, conservation of energy imposes constraints on the measurability of observables that do not commute with the Hamiltonian \cite{E.Wigner1952,Busch2010, Araki1960,Ozawa2002,Miyadera2006a,Loveridge2011,Ahmadi2013b,Loveridge2020a, Mohammady2021a,Kuramochi2022}. 
This is in contradistinction to the third law, which imposes no constraints on measurability. We leave such open questions for future work.
\newpage
\begin{acknowledgments}
The authors wish to thank Leon Loveridge for his invaluable comments, which greatly improved the present manuscript. M.H.M. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 801505. T.M. acknowledges financial support from JSPS KAKENHI Grant No. JP20K03732.
\end{acknowledgments}
\section{Introduction}
One of the standard assumptions of textbook quantum mechanics is the ``L\"uders rule'' which states that when an observable---represented by a self-adjoint operator with a non-degenerate spectrum---is measured in a system, the state of the system collapses to the eigenstate of the observable associated with the observed eigenvalue \cite{Luders2006, Busch2009a}. But assuming the universal validity of quantum theory, such a state change must be consistent with a description of the measurement process as a physical interaction between the system to be measured and a given (quantum) measuring apparatus. In his groundbreaking contribution to quantum theory in 1932 \cite{Von-Neumann-Foundations}, von Neumann introduced just such a model for the measurement process, where the system and apparatus interact unitarily.
While L\"uders measurements, and von Neumann's model for their realisation, are always available within the formal framework of quantum theory, they may not always be feasible in practice---technological obstacles and fundamental physical principles must also be accounted for. One such principle is that of conservation laws and, as shown by the famous Wigner-Araki-Yanase theorem \cite{E.Wigner1952,Busch2010, Araki1960,Ozawa2002,Miyadera2006a,Loveridge2011,Ahmadi2013b,Loveridge2020a,Mohammady2021a,Kuramochi2022}, only observables commuting with the conserved quantity admit a L\"uders measurement. This observation naturally raises the following question: do other physical principles constrain quantum measurements, and if so, how? Given that von Neumann's model for the measurement process assumes that the measuring apparatus is initially prepared in a pure state, an obvious candidate for consideration immediately presents itself: the third law of thermodynamics, or Nernst's unattainability principle, which states that a system cannot be cooled to absolute zero temperature with finite time, energy, or control complexity; in the quantum regime, the third law prohibits the preparation of pure states \cite{Schulman2005, Allahverdyan2011a, Reeb2013a, Masanes2014, Ticozzi2014, Scharlau2016a, Wilming2017a, Freitas2018, Clivaz2019, Taranto2021}. As argued by Guryanova \emph{et al.}, such unattainability rules out L\"uders measurements for any self-adjoint operator with a non-degenerate spectrum \cite{Guryanova2018}.
The more modern quantum theory of measurement \cite{PaulBuschMarianGrabowski1995, Busch1996, Heinosaari2011, Busch2016a} states that the properties of a quantum system are not exhausted by its sharp observables, i.e., observables represented by self-adjoint operators. Indeed, observables can be fundamentally unsharp, and are properly represented as positive operator valued measures (POVMs) \cite{Busch2010a, Jaeger2019}. Similarly, the state change that results from measurement is more properly captured by the notion of instruments \cite{Davies1970}, which need not obey the L\"uders rule. Moreover, the interaction between system and apparatus during the measurement process is not necessarily unitary, and is more generally described as a channel, which accounts for situations where the interaction with the environment cannot be neglected. Therefore, how the third law constrains general measurements should be addressed; in this paper, we shall thoroughly examine this in the finite-dimensional setting.
First, we provide a minimal operational formulation of the third law by constraining the class of permissible channels so that the availability of a channel not so constrained is both necessary and sufficient for the preparation of pure states. The considered class of channels includes those whose input and output spaces are not the same, which is the case when the process considered involves composing and discarding systems, and is more general than the class of rank non-decreasing channels, such as unitary channels. Indeed, the rank non-decreasing concept can only be properly applied to the limited cases where the input and output systems of a channel are the same.
Subsequently, we consider the most general class of measurement schemes that are constrained by the third law. That is, we do not assume that the measured observable is sharp, or that the pointer observable is sharp, or that the measurement interaction is rank non-decreasing. Next, we determine if the instruments realised by such measurement schemes may satisfy several desirable properties and, if so, under what conditions. These properties are:
\begin{enumerate}[\bf(i)]
\item {\bf Non-disturbance:} a non-selective measurement does not affect the subsequent measurement statistics of any observable that commutes with the measured observable.
\item {\bf First-kindness:} a non-selective measurement of an observable does not affect its subsequent measurement statistics.
\item {\bf Repeatability:} successive measurements of an observable are guaranteed to produce the same outcome.
\item {\bf Ideality:} whenever an outcome is certain from the outset, the measurement does not change the state of the measured system.
\item {\bf Extremality:} the instrument cannot be written as a probabilistic mixture of distinct instruments.
\end{enumerate}
L\"uders measurements of sharp observables simultaneously satisfy the above properties.
In general, however, these properties can be satisfied by instruments that do not obey the L\"uders rule, and by measurements of observables that are not necessarily sharp. Moreover, the properties are in general not equivalent: an instrument can enjoy one while lacking another \cite{Lahti1991, Heinosaari2010, DAriano2011}. We therefore investigate each such property individually, and provide necessary conditions for their fulfilment by a measurement constrained by the third law.
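In the unconstrained setting, the simultaneous satisfaction of properties {\bf (i)}-{\bf (iv)} by a sharp L\"uders measurement is straightforward to confirm; a minimal numerical sketch for a qubit:

```python
import numpy as np

# Sharp qubit observable P = {|0><0|, |1><1|} and its Lüders instrument.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def luders_x(rho, x):
    return P[x] @ rho @ P[x]

def luders(rho):
    return sum(luders_x(rho, x) for x in range(2))

rho = np.array([[0.7, 0.2], [0.2, 0.3]])             # generic qubit state

# (ii) First-kindness (the special case F = P of non-disturbance (i)):
first_kind = all(np.isclose(np.trace(P[x] @ luders(rho)),
                            np.trace(P[x] @ rho)) for x in range(2))

# (iii) Repeatability:
repeatable = all(np.isclose(np.trace(P[y] @ luders_x(rho, x)),
                            (x == y) * np.trace(P[x] @ rho))
                 for x in range(2) for y in range(2))

# (iv) Ideality: if tr[P_0 rho] = 1 then rho = |0><0| is left untouched.
rho0 = np.diag([1.0, 0.0])
ideal = np.allclose(luders_x(rho0, 0), rho0)
print(first_kind, repeatable, ideal)  # True True True
```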
We show that the third law prohibits a measurement of any \emph{small-rank} observable---an observable that has at least one rank-1 effect, or POVM element---from satisfying any of the above properties. On the other hand, extremality is shown to be permitted for an observable if each effect has sufficiently large rank, but only if the interaction between the system and apparatus is non-unitary. Finally, we show that while repeatability and ideality are forbidden for all observables, non-disturbance and first-kindness are permitted for observables that are \emph{completely unsharp}: the effects of such observables have neither eigenvalue 1 nor eigenvalue 0, and so such observables do not enjoy the ``norm-1'' property. That is, non-disturbance and first-kindness are only permitted for observables that cannot have a definite value in any state. Our results are summarised in Table \ref{table:results}.
\begin{table}[!htb]
\begin{tabular}{ |p{0.5cm}||p{1.5cm}|p{1.5cm}|p{1.5cm}| p{1.5cm}| }
\hline
\multicolumn{1}{|c||}{} & \multicolumn{4}{c|}{Observable} \\
\hline
& Small-rank & Sharp & Norm-1 & Completely unsharp\\
\hline
\bf{(i)} & \xmark & \xmark & \xmark & \cmark\\
\hline
\bf{(ii)} & \xmark & \xmark & \xmark & \cmark\\
\hline
\bf{(iii)} & \xmark & \xmark & \xmark & \xmark\\
\hline
\bf{(iv)} & \xmark & \xmark & \xmark & \xmark \\
\hline
\bf{(v)} & \xmark & \cmark & \cmark & \cmark\\
\hline
\end{tabular}
\caption{The possibility (\cmark) or impossibility (\xmark) of an observable to admit the properties {\bf(i) - (v)} outlined above are indicated for four classes of observables: small-rank observables have at least one rank-1 effect; sharp observables are such that all effects are projections; norm-1 observables are such that every effect has eigenvalue 1; and completely unsharp observables are such that no effect has eigenvalue 1 or 0. }
\label{table:results}
\end{table}
\section{Operational formulation of the third law for channels}
The third law of thermodynamics states that in the absence of infinite resources of time, energy, or control complexity, a system cannot be cooled to absolute zero temperature. Assuming the universal validity of this law, then it must also hold in the quantum regime \cite{Schulman2005, Allahverdyan2011a, Reeb2013a, Masanes2014, Ticozzi2014, Scharlau2016a, Wilming2017a, Freitas2018, Clivaz2019, Taranto2021}. Throughout, we shall only consider quantum systems with a finite-dimensional Hilbert space ${\mathcal{H}}$. When such a system is in thermal equilibrium at some temperature, it is in a Gibbs state, and whenever the temperature is non-vanishing, such states are full-rank. Conversely, at absolute zero temperature the system will be in a low-rank state, i.e., it will not have full rank. In the special case of a non-degenerate Hamiltonian, the system will in fact be in a pure state. A minimal operational formulation of the third law in the quantum regime can therefore be phrased as follows: the possible transformations of quantum systems must be constrained so that the only attainable states have full rank.
In the Schr\"odinger picture, the most general transformations of quantum systems are represented by channels $\Phi: {\mathcal{L}}({\mathcal{H}}) \to {\mathcal{L}}({\mathcal{K}})$, i.e., completely positive trace-preserving maps from the algebra of linear operators on an input Hilbert space ${\mathcal{H}}$ to that of an output space ${\mathcal{K}}$. In the special case where ${\mathcal{H}} = {\mathcal{K}}$, we say that $\Phi$ acts in ${\mathcal{H}}$. But in general ${\mathcal{H}}$ need not be identical to ${\mathcal{K}}$, and the two systems may have different dimensions. This is because physically permissible transformations include the composition of multiple systems, and discarding of subsystems.
Previous formulations of the third law (see, for example, Proposition 5 of Ref. \cite{Reeb2013a} and Appendix B of Ref. \cite{Taranto2021}) have restricted the class of available channels to those with the same input and output system, and where the channel does not reduce the rank of the input state of such a system: these are referred to as rank non-decreasing channels, with unitary channels constituting a simple example. An intuitive argument for such restriction is as follows. Consider the case where we wish to cool the system of interest by an interaction with an infinitely large heat bath. But to utilise all degrees of freedom of such a bath one must either manipulate them all at once, which requires an infinite resource of control complexity, or one must approach the quasistatic limit, which requires an infinite resource of time. It stands to reason that, in a realistic protocol, only finitely many degrees of freedom of the bath can be accessed and so the system of interest effectively interacts with a finite, bounded, thermal bath. Such a bath is represented by a Gibbs state with a non-vanishing temperature, which has full rank. It is a simple task to show that if the interaction between the system of interest and the finite thermal bath is a rank non-decreasing channel---such as a unitary channel---acting in the compound of system-plus-bath, then the rank of the system cannot be reduced unless infinite energy is spent. It follows that if the input state of the system is full-rank, for example if it is a Gibbs state with a non-vanishing temperature, then the third law thus construed will only allow for such a state to be transformed to another full-rank state.
The above formulation has some drawbacks, however. First, the argument relies on the strong assumption that the system interacts with a thermal environment, which is not justified on purely operational grounds; the environment may in fact be an out-of-equilibrium system. Second, the rank non-decreasing condition can only be properly applied to channels with an identical input and output: the rank of a state on ${\mathcal{H}}$ only has meaning in relation to the dimension of ${\mathcal{H}}$. Indeed, the partial trace channel (describing the process by which one subsystem is discarded) and the composition channel (describing the process by which the system of interest is joined with an auxiliary system initialised in some fixed state) are physically relevant transformations that must also be addressed, but lead to absurdities when the change in the state's rank is examined. The partial trace channel is rank-decreasing, but tracing out one subsystem of a global full-rank state can only prepare a state that has full rank in the remaining subsystem. On the other hand, the composition channel is rank-increasing. But it is simple to show that if the rank of the auxiliary state is sufficiently small, then a unitary channel can be applied on the compound so as to purify the system of interest. We thus propose the following minimal definition for channels constrained by the third law, which is conceptually sound, which does not rely on any assumptions regarding the environment and how it interacts with the system under study, and which accounts for the most general class of channels:
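The rank behaviour of the partial trace noted above is easy to verify numerically. The following is a minimal illustrative sketch in Python/NumPy (the helper names and the $2 \times 3$ dimensions are our own choices, not taken from the text): it draws a random full-rank bipartite state and confirms that its reduced state is again full-rank.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_full_rank_state(d, rng):
    # A Wishart-type density matrix G G† / tr[G G†] is full-rank with probability 1.
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def partial_trace_B(rho, dA, dB):
    # Trace out the second (B) factor of a state on C^dA ⊗ C^dB.
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

dA, dB = 2, 3
rho_AB = random_full_rank_state(dA * dB, rng)
rho_A = partial_trace_B(rho_AB, dA, dB)

# <psi|rho_A|psi> = sum_i <psi ⊗ e_i|rho_AB|psi ⊗ e_i> > 0 for every psi,
# so the reduced state of a full-rank global state is itself full-rank.
print(np.all(np.linalg.eigvalsh(rho_A) > 0))  # True
```

The strict positivity of the reduced spectrum reflects the identity in the final comment: any zero eigenvalue of $\rho_A$ would force a zero expectation of the global state on a product vector, contradicting full rank.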
\begin{definition}\label{defn:third-law}
A channel $\Phi : {\mathcal{L}}({\mathcal{H}}) \to {\mathcal{L}}({\mathcal{K}})$ is constrained by the third law if for every full-rank state $\rho$ on ${\mathcal{H}}$, $\Phi(\rho)$ is a full-rank state on ${\mathcal{K}}$.
\end{definition}
Properties of channels obeying the above definition are given in \app{app:channel-third-law-properties}, and as shown in \app{app:third-law-channel-proof}, if we are able to implement any channel that is constrained by the third law, then the added ability to implement a channel that is not so constrained, that is, a channel that may map some full-rank state to a low-rank state, is both necessary and sufficient for preparing a system in a pure state, given any unknown initial state $\rho$. Moreover, note that while a rank non-decreasing channel acting in ${\mathcal{H}}$ satisfies \defref{defn:third-law}, a channel acting in ${\mathcal{H}}$ and which satisfies such a definition need not be rank non-decreasing: a channel constrained by the third law may reduce the rank of some input state, but only if such a state is not full-rank. Finally, \defref{defn:third-law} has the benefit that if two channels $\Phi_1$ and $\Phi_2$ each satisfy this property, and the output space of the former coincides with the input space of the latter, then their composition $\Phi_2 \circ \Phi_1$ satisfies it as well; the set of channels constrained by the third law is thus closed under composition.
\defref{defn:third-law} also allows us to re-examine the constraints imposed by the third law on state preparations, without modeling a finite-dimensional environment prepared in a Gibbs state, or assuming that the system interacts with such an environment by a rank non-decreasing channel. A state preparation is a physical process such that, irrespective of what input state is given, the output is prepared in a unique state $\rho$; indeed, an operational definition of a state is precisely the specification of procedures, or transformations, that produce it. As stated in Ref. \cite{Gour2020b}, \emph{``A quantum state can be understood
as a preparation channel, sending a trivial quantum system
to a non-trivial one prepared in a given state''}. That is, state preparations on a Hilbert space ${\mathcal{H}}$ may be identified with the set of preparation channels
\begin{align*}
\mathscr{P}({\mathcal{H}}) := \{\Phi : {\mathcal{L}}(\mathds{C}^1) \to {\mathcal{L}}({\mathcal{H}}) \} .
\end{align*}
Here, the input space is a 1-dimensional Hilbert space $\mathds{C}^1 \equiv \mathds{C} |\Omega\>$, and the only state on such a space is the rank-1 projection $|\Omega\>\<\Omega|$. The triviality of the input space captures the notion that the output of the channel $\Phi$ is independent of the input, and so the prepared state $\rho = \Phi(|\Omega\>\<\Omega|)$ is uniquely identified with the channel itself. Without any constraints, all states $\rho$ on ${\mathcal{H}}$ may be prepared by some $\Phi \in \mathscr{P}({\mathcal{H}})$. But now we may restrict the class of preparations by the third law as follows: $\mathscr{P}({\mathcal{H}})$ is constrained by the third law if all $\Phi \in \mathscr{P}({\mathcal{H}})$ map full-rank states to full-rank states as per \defref{defn:third-law}. But note that $|\Omega\>\<\Omega|$ has full rank in $\mathds{C}^1$, and so $\rho $ is guaranteed to be full-rank in ${\mathcal{H}}$.
\section{Quantum measurement}
Before investigating how the third law constrains quantum measurements, we shall first cover briefly some basic elements of quantum measurement theory which will be used in the sequel \cite{PaulBuschMarianGrabowski1995, Busch1996, Heinosaari2011, Busch2016a}.
\subsection{Observables}
Consider a quantum system ${\mathcal{S}}$ with a Hilbert space ${\mathcal{H}\sub{\s}}$ of finite dimension $2 \leqslant \dim({\mathcal{H}\sub{\s}}) < \infty$. We denote by $\mathds{O}$ and $\mathds{1}\sub{\s}$ the null and identity operators on ${\mathcal{H}\sub{\s}}$, respectively, and an operator $E$ on ${\mathcal{H}\sub{\s}}$ is called an \emph{effect} if it holds that $\mathds{O} \leqslant E \leqslant \mathds{1}\sub{\s}$. An observable of ${\mathcal{S}}$ is represented by a normalised positive operator valued measure (POVM) $\mathsf{E} : \Sigma \to \mathscr{E}({\mathcal{H}\sub{\s}})$, where $\Sigma$ is a sigma-algebra of some value space ${\mathcal{X}}$, representing the possible measurement outcomes, and $\mathscr{E}({\mathcal{H}\sub{\s}})$ is the space of effects on ${\mathcal{H}\sub{\s}}$. We restrict ourselves to discrete observables, for which ${\mathcal{X}} := \{x_1, x_2, \dots \}$ is finite. In such a case we may identify an observable with the set $\mathsf{E}:= \{\mathsf{E}_x: x \in {\mathcal{X}}\}$, where $\mathsf{E}_x \equiv \mathsf{E}(\{x\})$ are the (elementary) effects of $\mathsf{E}$ (also called POVM elements) which satisfy $\sum_{x\in {\mathcal{X}}} \mathsf{E}_x = \mathds{1}\sub{\s}$. The probability of observing outcome $x$ when measuring $\mathsf{E}$ in the state $\rho$ is given by the Born rule as $p^\mathsf{E}_\rho(x) := \mathrm{tr}[\mathsf{E}_x \rho]$.
Without loss of generality, we shall always assume that $\mathsf{E}_x \ne \mathds{O}$, since for any $x$ such that $\mathsf{E}_x = \mathds{O}$, the outcome $x$ is never observed, i.e., it is observed with probability zero; in such a case we may simply replace ${\mathcal{X}}$ with the smaller value space ${\mathcal{X}} \backslash \{x\}$. Additionally, we shall always assume that the observable is non-trivial, as trivial observables cannot distinguish between any states, and are thus uninformative; an effect is trivial if it is proportional to the identity, and an observable is non-trivial if at least one of its effects is not trivial.
We shall employ the short-hand notation $[\mathsf{E}, A]=\mathds{O}$ to indicate that the operator $A$ commutes with all effects of $\mathsf{E}$, and $[\mathsf{E}, \mathsf{F}]=\mathds{O}$ to indicate that all the effects of observables $\mathsf{E}$ and $\mathsf{F}$ mutually commute. An observable $\mathsf{E}$ is commutative if $[\mathsf{E},\mathsf{E}]=\mathds{O}$, and a commutative observable is also sharp if additionally $\mathsf{E}_x \mathsf{E}_y = \delta_{x,y} \mathsf{E}_x$, i.e., if $\mathsf{E}_x$ are mutually orthogonal projection operators. Sharp observables are also referred to as projection valued measures, and by the spectral theorem a sharp observable may be represented by a self-adjoint operator $A = \sum_x \lambda_x \mathsf{E}_x$, where $\{\lambda_x\} \subset \mathds{R}$ satisfies $\lambda_x \neq \lambda_y$ for $x\neq y$.
An observable that is not sharp will be called unsharp. An observable $\mathsf{E}$ has the norm-1 property if it holds that $\|\mathsf{E}_x\|=1$ for all $x$. In finite dimensions, this implies that each effect of a norm-1 observable has at least one eigenvector with eigenvalue 1. While sharp observables are trivially norm-1, this property may also be enjoyed by some unsharp observables.
We now introduce definitions for classes of observables that are of particular significance to our results:
\begin{definition}\label{defn:small-rank}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``small-rank'' if there exists some $x\in {\mathcal{X}}$ such that $\mathsf{E}_x$ has rank 1. An observable is called ``large-rank'' if it is not small-rank.
\end{definition}
In particular, a sub-class of small-rank observables is the class of rank-1 observables, for which every effect has rank 1 \cite{Holland1990, Pellonpaa2014}. For example, the effects of a sharp observable represented by a non-degenerate self-adjoint operator $A = \sum_x \lambda_x |\psi_x\>\<\psi_x|$ are the rank-1 projections $\mathsf{E}_x = |\psi_x\>\<\psi_x|$. Such observables are therefore rank-1, and hence small-rank. On the other hand, a sharp observable represented by a degenerate self-adjoint operator such that the eigenspace corresponding to each (distinct) eigenvalue has dimension larger than 1 is large-rank, as each effect is a projection with rank larger than 1.
\begin{definition}\label{defn:non-degenerate}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``non-degenerate'' if there exists some $x\in {\mathcal{X}}$ such that there are no multiplicities in the strictly positive eigenvalues of $\mathsf{E}_x$. An observable that is not non-degenerate is called degenerate.
\end{definition}
An example of a non-degenerate observable is a small-rank observable, since in such a case there exists an effect that has exactly one strictly positive eigenvalue. On the other hand, a large-rank sharp observable is degenerate, since in such a case each effect has more than one eigenvector with eigenvalue 1.
\begin{definition}\label{defn:complete-unsharp}
An observable $\mathsf{E}:= \{\mathsf{E}_x : x\in {\mathcal{X}}\}$ is called ``completely unsharp'' if for each $x\in {\mathcal{X}}$, the spectrum of $\mathsf{E}_x$ does not contain either 1 or 0.
\end{definition}
Completely unsharp observables evidently do not have the norm-1 property, since it holds that $\| \mathsf{E}_x\| < 1$ for all $x$. Since the effects also do not have eigenvalue 0, they are in fact full-rank. It follows that completely unsharp observables are also large-rank. But a completely unsharp observable may be either degenerate or non-degenerate.
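The three classes of observables defined above are straightforward to test numerically for a concrete example. The following Python/NumPy sketch is illustrative only (the function names, tolerance, and the noisy qubit observable are our own choices): it classifies the binary observable with effects $\frac{1}{2}(\mathds{1} \pm t\sigma_z)$, $0 < t < 1$, which is completely unsharp, large-rank, and non-degenerate, consistent with the remarks above.

```python
import numpy as np

TOL = 1e-9

def is_small_rank(E):
    # "Small-rank": some effect has rank 1.
    return any(np.linalg.matrix_rank(Ex, tol=TOL) == 1 for Ex in E)

def is_non_degenerate(E):
    # "Non-degenerate": some effect has no repeated strictly positive eigenvalues.
    for Ex in E:
        pos = np.sort(np.linalg.eigvalsh(Ex))
        pos = pos[pos > TOL]
        if len(pos) > 0 and np.all(np.diff(pos) > TOL):
            return True
    return False

def is_completely_unsharp(E):
    # "Completely unsharp": no effect has 0 or 1 in its spectrum.
    return all(np.all(np.linalg.eigvalsh(Ex) > TOL)
               and np.all(np.linalg.eigvalsh(Ex) < 1 - TOL) for Ex in E)

# A noisy qubit observable E_x = (1/2)(1 ± t σ_z) with 0 < t < 1.
t = 0.6
sz = np.diag([1.0, -1.0])
E = [0.5 * (np.eye(2) + t * sz), 0.5 * (np.eye(2) - t * sz)]

print(is_completely_unsharp(E))  # True: spectra are {(1 ± t)/2} ⊂ (0, 1)
print(is_small_rank(E))          # False: both effects are full-rank ("large-rank")
print(is_non_degenerate(E))      # True: the positive eigenvalues are distinct
```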
\subsection{Instruments}
An instrument \cite{Davies1970}, or operation valued measure, describes how a system is transformed upon measurement, and is given as a collection of operations (completely positive trace non-increasing linear maps) ${\mathcal{I}}:= \{{\mathcal{I}}_x : x\in {\mathcal{X}}\}$ such that ${\mathcal{I}}_{\mathcal{X}}(\cdot) := \sum_{x\in {\mathcal{X}}} {\mathcal{I}}_x(\cdot)$ is a channel. Throughout, we shall always assume that the instrument acts in ${\mathcal{H}\sub{\s}}$, i.e., that both the input and output space of ${\mathcal{I}}_x$ is ${\mathcal{H}\sub{\s}}$. An instrument ${\mathcal{I}}$ is identified with a unique observable $\mathsf{E} $ via the relation $\mathrm{tr}[{\mathcal{I}}_x(\rho)] = \mathrm{tr}[\mathsf{E}_x\rho]$ for all outcomes $x$ and states $\rho$, and we shall refer to such ${\mathcal{I}}$ as an $\mathsf{E}$-compatible instrument, or an $\mathsf{E}$-instrument for short, and to ${\mathcal{I}}_{\mathcal{X}}$ as the corresponding $\mathsf{E}$-channel \cite{Heinosaari2015}. Note that while every instrument is identified with a unique observable, every observable $\mathsf{E}$ admits infinitely many $\mathsf{E}$-compatible instruments; the operations of the L\"uders instrument ${\mathcal{I}}^L$ compatible with $\mathsf{E}$ are written as
\begin{align}\label{eq:Luders}
{\mathcal{I}}^L_x(\cdot) := \sqrt{\mathsf{E}_x} \cdot \sqrt{\mathsf{E}_x},
\end{align}
and it holds that the operations of every $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ can be constructed as ${\mathcal{I}}_x = \Phi_x \circ {\mathcal{I}}^L_x$, where $\Phi_x$ are arbitrary channels acting in ${\mathcal{H}\sub{\s}}$ that may depend on outcome $x$ \cite{Ozawa2001,Pellonpaa2013a}.
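The L\"uders operations of \eq{eq:Luders} and their compatibility relation $\mathrm{tr}[{\mathcal{I}}_x(\rho)] = \mathrm{tr}[\mathsf{E}_x\rho]$ can be checked directly. The following is an illustrative Python/NumPy sketch; the qubit observable and the input state are arbitrary choices of ours.

```python
import numpy as np

def op_sqrt(E):
    # Square root of a positive semi-definite matrix via its spectral decomposition.
    vals, vecs = np.linalg.eigh(E)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def luders_ops(E):
    # Operations of the Luders instrument: I^L_x(rho) = sqrt(E_x) rho sqrt(E_x).
    roots = [op_sqrt(Ex) for Ex in E]
    return [lambda rho, R=R: R @ rho @ R for R in roots]

# An arbitrary qubit observable (illustrative) and a full-rank input state.
t = 0.5
E = [0.5 * (np.eye(2) + t * np.diag([1.0, -1.0])),
     0.5 * (np.eye(2) - t * np.diag([1.0, -1.0]))]
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

ops = luders_ops(E)
# Compatibility with E: tr[I^L_x(rho)] = tr[E_x rho] for every outcome x.
for Ex, Ix in zip(E, ops):
    print(np.isclose(np.trace(Ix(rho)).real, np.trace(Ex @ rho).real))  # True
# The non-selective E-channel I^L_X is trace-preserving.
rho_out = sum(Ix(rho) for Ix in ops)
print(np.isclose(np.trace(rho_out).real, 1.0))  # True
```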
\subsection{Measurement schemes}
A quantum system is measured when it undergoes an appropriate physical interaction with a measuring apparatus so that the transition of some variable of the apparatus---such as the position of a pointer along a scale---registers the outcome of the measured observable. The most general description of the measurement process is given by a \emph{measurement scheme}, which is a tuple ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ where ${\mathcal{H}\sub{\aa}}$ is the Hilbert space for (the probe of) the apparatus $\aa$ and $\xi$ is a fixed state of $\aa$, ${\mathcal{E}}$ is a channel acting in ${\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}$ which serves to correlate ${\mathcal{S}}$ with $\aa$, and $\mathsf{Z} := \{\mathsf{Z}_x : x\in {\mathcal{X}}\}$ is a POVM acting in ${\mathcal{H}\sub{\aa}}$ which is referred to as a ``pointer observable''. For all outcomes $x$, the operations of the instrument ${\mathcal{I}}$ implemented by ${\mathcal{M}}$ can be written as
\begin{align}\label{eq:instrument-dilation}
{\mathcal{I}}_x (\cdot) = \mathrm{tr}\sub{\aa}[(\mathds{1}\sub{{\mathcal{S}}}\otimes \mathsf{Z}_x) {\mathcal{E}}(\cdot \otimes \xi)],
\end{align}
where $\mathrm{tr}\sub{\aa}[\cdot]$ is the partial trace over $\aa$. The channel implemented by ${\mathcal{M}}$ is thus ${\mathcal{I}}_{\mathcal{X}}(\cdot) = \mathrm{tr}\sub{\aa}[{\mathcal{E}}(\cdot \otimes \xi)]$. Every $\mathsf{E}$-compatible instrument admits infinitely many \emph{normal} measurement schemes, where $\xi$ is chosen to be a pure state, ${\mathcal{E}}$ is chosen to be a unitary channel, and $\mathsf{Z}$ is chosen to be sharp \cite{Ozawa1984}. Von Neumann's model for the measurement process is one such example of a normal measurement scheme. However, unless stated otherwise, we shall consider the most general class of measurement schemes, where $\xi$ need not be pure, ${\mathcal{E}}$ need not be unitary, and $\mathsf{Z}$ need not be sharp.
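As an illustration of \eq{eq:instrument-dilation}, the following Python/NumPy sketch (conventions and all concrete choices are ours: system first, apparatus second) implements a simple scheme in which ${\mathcal{E}}$ is the unitary swap channel and $\mathsf{Z}$ is sharp, and verifies that the resulting operations are ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{Z}_x\rho]\,\xi$, so that the measured observable coincides with the pointer observable.

```python
import numpy as np

d = 2
# The swap unitary on C^d ⊗ C^d: SWAP |i>|j> = |j>|i>.
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0

def ptrace_apparatus(varrho, d):
    # Trace out the second (apparatus) tensor factor.
    return np.trace(varrho.reshape(d, d, d, d), axis1=1, axis2=3)

# A full-rank apparatus state xi, swap interaction, and sharp pointer Z.
xi = np.diag([0.8, 0.2])
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.array([[0.6, 0.3], [0.3, 0.4]])

for Zx in Z:
    # Eq. (2): I_x(rho) = tr_A[(1 ⊗ Z_x) E(rho ⊗ xi)], with E the swap channel.
    post = SWAP @ np.kron(rho, xi) @ SWAP.T
    Ix_rho = ptrace_apparatus(np.kron(np.eye(d), Zx) @ post, d)
    # The swap scheme realises the trivial instrument I_x(rho) = tr[Z_x rho] · xi,
    # so the measured observable coincides with the pointer observable Z.
    p = np.trace(Zx @ rho)
    print(np.allclose(Ix_rho, p * xi))  # True
```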
\section{Measurement schemes constrained by the third law}
We now consider how the third law constrains measurement schemes, and subsequently examine how such constraints limit the possibility of a measurement to satisfy the properties {\bf (i) - (v)} outlined in the introduction.
Since the third law only pertains to channels and state preparations, the only elements of a measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ that will be limited by the third law are the interaction channel ${\mathcal{E}}$ and the apparatus state preparation $\xi$. By \defref{defn:third-law} and the preceding discussion, we therefore introduce the following definition:
\begin{definition}\label{defn:third-law-measurement}
A measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ is constrained by the third law if the following hold:
\begin{enumerate}[(i)]
\item $\xi$ is a full-rank state on ${\mathcal{H}\sub{\aa}}$.
\item For every full-rank state $\varrho$ on ${\mathcal{H}\sub{\s}} \otimes {\mathcal{H}\sub{\aa}}$, ${\mathcal{E}}(\varrho)$ is also a full-rank state.
\end{enumerate}
\end{definition}
Properties of measurement schemes constrained by the third law are given in \app{app:third-law-measurement} and \app{app:fixed-point-measurement}. Note that the third law does not impose any constraints on the measurability of observables; we may always choose ${\mathcal{M}}$ to be a ``trivial'' measurement scheme, where ${\mathcal{H}\sub{\aa}} \simeq {\mathcal{H}\sub{\s}}$ and ${\mathcal{E}}$ is a unitary swap channel, in which case the observable $\mathsf{E}$ measured in the system is identified with the pointer observable $\mathsf{Z}$ of the apparatus, which can be chosen arbitrarily. This is in contrast to the case where a measurement is constrained by conservation laws; by the Yanase condition, the pointer observable is restricted so that it commutes with the apparatus part of the conserved quantity, and it follows that an observable not commuting with the system part of the conserved quantity is measurable only if it is unsharp, and only if the apparatus preparation has a large coherence in the conserved quantity \cite{Mohammady2021a}.
The measurability of observables notwithstanding, let us note that an instrument implemented by a trivial measurement scheme is also trivial, i.e., it will hold that for all outcomes $x$ and states $\rho$, the operations of ${\mathcal{I}}$ satisfy ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}_x \rho] \xi$. Irrespective of what outcome is observed and what the initial state is, the final state is always $\xi$. In such a case, ${\mathcal{I}}$ fails all the properties {\bf (i) - (v)} that are the subject of our investigation. Therefore, whether or not an observable admits an instrument---realisable by a measurement scheme constrained by the third law as per \defref{defn:third-law-measurement}---with such properties remains to be seen: we shall now investigate this.
\subsection{Non-disturbance}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Non-Disturbance}
\vspace*{-0.2cm}
\caption{The top half of the figure represents a sequential measurement of possibly different observables in a system initially prepared in state $\rho$, with the histograms representing the statistics obtained for each measurement in the sequence. The bottom half shows the case where the first measurement in the sequence is removed, and only the second measurement takes place. When the statistics of such a measurement are the same in both scenarios, for all states $\rho$, then the first measurement is said to not disturb the second. }\label{fig:non-disturbance}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ does not disturb an observable $\mathsf{F}:= \{\mathsf{F}_y : y \in {\mathcal{Y}}\}$ if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]
\end{align*}
for all states $\rho$ and outcomes $y$ \cite{Heinosaari2010}. In other words, ${\mathcal{I}}$ does not disturb $\mathsf{F}$ if the statistics of $\mathsf{F}$ are not affected by a prior non-selective measurement of $\mathsf{E}$ by ${\mathcal{I}}$. Non-disturbance is only possible for \emph{jointly measurable} observables, since in such a case the sequential measurement of $\mathsf{E}$ by ${\mathcal{I}}$, followed by a measurement of $\mathsf{F}$, defines a joint observable for $\mathsf{E}$ and $\mathsf{F}$ \cite{Heinosaari2015}. In the absence of any constraints, commutation of $\mathsf{E}$ with $\mathsf{F}$ is sufficient for non-disturbance. That is, if $\mathsf{F}$ commutes with $\mathsf{E}$,
there exists an $\mathsf{E}$-instrument ${\mathcal{I}}$ that does not disturb $\mathsf{F}$. Moreover, the L\"uders $\mathsf{E}$-instrument ${\mathcal{I}}^L$ does not disturb \emph{all} $\mathsf{F}$ commuting with $\mathsf{E}$ \cite{Busch1998}. This can be easily shown by the following: if all effects of $\mathsf{F}$ and $\mathsf{E}$ mutually commute, then by \eq{eq:Luders} we may write
\begin{align*}
\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}^L(\rho)] &= \sum_x \mathrm{tr}[\mathsf{F}_y \sqrt{\mathsf{E}_x} \rho \sqrt{\mathsf{E}_x}] \\
& = \sum_x \mathrm{tr}[\sqrt{\mathsf{E}_x} \mathsf{F}_y \sqrt{\mathsf{E}_x} \rho ] \\
& = \sum_x \mathrm{tr}[\mathsf{E}_x \mathsf{F}_y \rho ] = \mathrm{tr}[\mathsf{F}_y \rho].
\end{align*}
In the second line we have used the cyclicity of the trace, and in the third line we use $[\mathsf{E}_x, \mathsf{F}_y] = \mathds{O} \iff [\sqrt{\mathsf{E}_x}, \mathsf{F}_y] = \mathds{O}$.
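The computation above can be replayed numerically. The following is a minimal Python/NumPy sketch; the commuting pair of diagonal observables and the input state are our own illustrative choices.

```python
import numpy as np

def op_sqrt(E):
    # Square root of a positive semi-definite matrix.
    vals, vecs = np.linalg.eigh(E)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

# Commuting (here: jointly diagonal) observables E and F on a qubit.
E = [np.diag([0.7, 0.4]), np.diag([0.3, 0.6])]
F = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.array([[0.5, 0.25j], [-0.25j, 0.5]])  # a valid qubit state

# Non-selective Luders E-channel: I^L_X(rho) = sum_x sqrt(E_x) rho sqrt(E_x).
rho_out = sum(op_sqrt(Ex) @ rho @ op_sqrt(Ex) for Ex in E)

for Fy in F:
    print(np.isclose(np.trace(Fy @ rho_out).real, np.trace(Fy @ rho).real))  # True
```

The off-diagonal elements of $\rho$ are damped by the measurement, but the diagonal part, which alone determines the statistics of the commuting $\mathsf{F}$ in this example, is untouched.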
While commutation is sufficient for non-disturbance, it is in general not necessary; if $\mathsf{E}$ and $\mathsf{F}$ do not commute but are both sufficiently unsharp so as to be jointly measurable \cite{Miyadera2008}, then it \emph{may} be possible for a measurement of $\mathsf{E}$ to not disturb $\mathsf{F}$, but not always: while non-disturbance requires joint measurability, joint-measurability does not guarantee non-disturbance. Let us consider an example where non-disturbance is permitted for two non-commuting observables. Consider the case that ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, with the orthonormal basis $\{|k\rangle \otimes |m\rangle : k,m = 0,1\}$, and define the following family of operators on $\mathds{C}^2$:
\begin{align*}
A_0 &=|0\rangle \langle 0|, \qquad A_1=\frac{1}{2}|0\rangle \langle 0|, \qquad A_2 =\frac{1}{2}|1\rangle \langle 1|,\\
A_3 &=\frac{1}{2} |+\rangle \langle + |, \qquad A_4 =\frac{1}{2} |-\rangle \langle -|, \qquad A_5= |1\rangle \langle 1|,
\end{align*}
where $\ket{\pm} := \frac{1}{\sqrt{2}}(\ket{0} \pm \ket{1})$. Now consider the binary observables $\mathsf{E}:= \{\mathsf{E}_0, \mathsf{E}_1\}$ and $\mathsf{F}:= \{\mathsf{F}_0, \mathsf{F}_1\}$ acting in ${\mathcal{H}\sub{\s}}$, defined by
\begin{align*}
\mathsf{E}_0 &= A_0 \otimes |0\rangle \langle 0|
+ (A_2+ A_4) \otimes |1\rangle \langle 1| , \\
\mathsf{E}_1&=(A_1+A_3) \otimes |1\rangle \langle 1|
+A_5 \otimes |0\rangle \langle 0|,
\end{align*}
and
\begin{align*}
\mathsf{F}_0 &= A_0\otimes |0\rangle \langle 0|
+ (A_1+A_4)\otimes |1\rangle \langle 1|, \\
\mathsf{F}_1 &= (A_2+A_3)\otimes |1\rangle \langle 1|
+A_5 \otimes |0\rangle \langle 0|.
\end{align*}
One can confirm that $[\mathsf{E}, \mathsf{F}] \ne \mathds{O}$. But, we can construct an $\mathsf{E}$-instrument ${\mathcal{I}}$ with operations
\begin{align*}
{\mathcal{I}}_0(\rho) &=\mathrm{tr}[\rho (A_0\otimes |0\>\<0| + A_4 \otimes |1\>\<1|)]
|0 \>\< 0|\otimes |0 \>\< 0| \\
& \quad + \mathrm{tr}[\rho(A_2 \otimes |1\>\<1|)]|1\>\<1|\otimes |0\>\<0|,
\\
{\mathcal{I}}_1(\rho) &= \mathrm{tr}[\rho( A_5\otimes |0\rangle \langle 0| + A_3 \otimes |1\rangle \langle 1|)]
|1\rangle \langle 1|\otimes |0\rangle \langle 0|
\\
&\quad + \mathrm{tr}[\rho(A_1 \otimes |1\rangle \langle 1|)]|0\rangle \langle 0|\otimes |0\rangle \langle 0| ,
\end{align*}
which does not disturb $\mathsf{F}$.
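All the stated properties of this example can be verified numerically. The following Python/NumPy sketch (variable names are ours) confirms that $\mathsf{E}$ is a valid observable, that $\mathsf{E}$ and $\mathsf{F}$ do not commute, that $\{{\mathcal{I}}_0, {\mathcal{I}}_1\}$ is $\mathsf{E}$-compatible, and that the non-selective channel leaves the statistics of $\mathsf{F}$ unchanged.

```python
import numpy as np

# Single-qubit building blocks for the operators A_0, ..., A_5 above.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v)

A = [proj(ket0), 0.5 * proj(ket0), 0.5 * proj(ket1),
     0.5 * proj(ketp), 0.5 * proj(ketm), proj(ket1)]
P0, P1 = proj(ket0), proj(ket1)

E = [np.kron(A[0], P0) + np.kron(A[2] + A[4], P1),
     np.kron(A[1] + A[3], P1) + np.kron(A[5], P0)]
F = [np.kron(A[0], P0) + np.kron(A[1] + A[4], P1),
     np.kron(A[2] + A[3], P1) + np.kron(A[5], P0)]
assert np.allclose(E[0] + E[1], np.eye(4))  # E is a valid binary observable

# E and F do not commute:
print(np.allclose(E[0] @ F[0], F[0] @ E[0]))  # False

P00, P10 = np.kron(P0, P0), np.kron(P1, P0)  # |0,0><0,0| and |1,0><1,0|

def I0(rho):
    return (np.trace(rho @ (np.kron(A[0], P0) + np.kron(A[4], P1))) * P00
            + np.trace(rho @ np.kron(A[2], P1)) * P10)

def I1(rho):
    return (np.trace(rho @ (np.kron(A[5], P0) + np.kron(A[3], P1))) * P10
            + np.trace(rho @ np.kron(A[1], P1)) * P00)

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real  # a random full-rank two-qubit state

# The instrument is E-compatible, yet its non-selective channel leaves the
# statistics of F unchanged: non-disturbance without commutation.
rho_out = I0(rho) + I1(rho)
print(np.isclose(np.trace(I0(rho)).real, np.trace(E[0] @ rho).real))  # True
for Fy in F:
    print(np.isclose(np.trace(Fy @ rho_out).real, np.trace(Fy @ rho).real))  # True
```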
However, we show that under the third law constraint, commutation is in fact necessary for non-disturbance. That is, if an $\mathsf{E}$-instrument ${\mathcal{I}}$ can be
implemented by a measurement scheme constrained by the third law, such that ${\mathcal{I}}$ does not disturb $\mathsf{F}$, then
$[\mathsf{E}, \mathsf{F}]=\mathds{O}$ must be satisfied. In \app{app:third-law-measurement} we show that for any instrument ${\mathcal{I}}$, implemented by a measurement scheme constrained by the third law, there exists at least one full-rank state $\rho_0$ such that ${\mathcal{I}}_{\mathcal{X}}(\rho_0) = \rho_0$. In such a case, non-disturbance of $\mathsf{F}_y$ (i.e., $\mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]$ for all $\rho$)
implies non-disturbance of a sharp observable $\P=\{\P_z\}$, where $\P_z$ are the spectral projections of $\mathsf{F}_y$. That is, a sequential measurement of $\mathsf{E}$ by the instrument ${\mathcal{I}}$, followed by a measurement of $\P$, is a joint measurement of $\mathsf{E}$ and $\P$. Since joint measurability implies commutativity when either observable is sharp, it follows that $\mathsf{E}$ must commute with $\P$, and hence with $\mathsf{F}_y$, for all $y$. In other words, given the existence of a full-rank fixed state $\rho_0$, then a measurement of $\mathsf{E}$ does not disturb $\mathsf{F}$ only if they commute. See also Proposition 4 of Ref. \cite{Heinosaari2010}.
But when the measurement of $\mathsf{E}$ is constrained by the third law, we show that $[\mathsf{E},\mathsf{F}]=\mathds{O}$ is not sufficient for non-disturbance: the properties of $\mathsf{E}$ impose further constraints. We now present our first main result:
\begin{theorem}
Under the third law constraint, a completely unsharp observable $\mathsf{E}$ admits a measurement that does not disturb any observable $\mathsf{F}$ that commutes with $\mathsf{E}$. On the other hand, if an observable $\mathsf{E}$ satisfies $\|\mathsf{E}_x\| = 1$ for any outcome $x$, then there exists $\mathsf{F}$ which commutes with $\mathsf{E}$ but is disturbed by any measurement of $\mathsf{E}$ that is constrained by the third law.
\end{theorem}
That is, an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ admits a measurement scheme ${\mathcal{M}}$ that is constrained by the third law, such that $[\mathsf{E},\mathsf{F}] = \mathds{O} \implies \mathrm{tr}[\mathsf{F}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{F}_y \rho]$ for all $y$ and $\rho$, if $\mathsf{E}$ is completely unsharp and only if $\|\mathsf{E}_x\| <1$ for all outcomes $x$. The proof is presented in \app{app:non-disturbance} (\propref{prop:non-disturbance}). To show sufficiency of complete unsharpness we prove that, given the third law constraint, an observable admits a L\"uders instrument if and only if it is completely unsharp (\propref{prop:luders-completely-unsharp}). But since L\"uders measurements are guaranteed to not disturb any commuting observable, the claim immediately follows. On the other hand, the necessity that the effects have norm smaller than 1 follows from the following observation: if any effect of $\mathsf{E}$ has eigenvalue 1, the projection onto such an eigenspace commutes with $\mathsf{E}$ but is shown to be disturbed. In particular, this implies that when a norm-1 observable (such as a sharp observable) is measured under the third law constraint, then there exists some observable $\mathsf{F}$ that commutes with $\mathsf{E}$ but is nonetheless disturbed. Note that an observable can satisfy $\| \mathsf{E}_x\| <1$ for all $x$ without being completely unsharp, since such effects can still have 0 in their spectrum.
Of course, even if a sharp or norm-1 observable $\mathsf{E}$ fails the strict non-disturbance condition, this does not imply that no observables are left undisturbed. In \app{app:non-disturbance}, we first show that if an observable is small-rank as per \defref{defn:small-rank}, then a third-law constrained measurement of such an observable will disturb all observables, even those that commute with it. Second, we show that if $\mathsf{E}$ is a non-degenerate observable as per \defref{defn:non-degenerate}, then the class of non-disturbed observables will be commutative, and any pair of non-disturbed observables will commute. That is, for any $\mathsf{F}$ and $\mathsf{G}$ that are non-disturbed, it will hold that $[\mathsf{F}, \mathsf{G}] = [\mathsf{F},\mathsf{F}] = [\mathsf{G},\mathsf{G}] = \mathds{O}$. In other words, non-degeneracy of the measured observable will spoil the ``coherence'' of the measured system. Therefore, to ensure that a measurement of $\mathsf{E}$ does not disturb a non-trivial class of (possibly non-commutative) observables, $\mathsf{E}$ must be a large-rank (and degenerate) observable. In \app{app:non-disturbance}, we construct an explicit example where the measurement of a sharp observable that is large-rank, and hence degenerate, will not disturb a non-trivial class of possibly non-commutative observables. This is a binary observable $\mathsf{E}$ acting in a two-qubit system ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, defined by $\mathsf{E}_x = \mathds{1} \otimes |x\>\<x|$. Note that these effects have rank 2, and are hence also degenerate. In such a case, any observable $\mathsf{F}$ with effects $\mathsf{F}_y \otimes \mathds{1}$ will be non-disturbed, and it may be the case that $[\mathsf{F},\mathsf{F}] \ne \mathds{O}$.
\subsection{First-kindness}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{First-Kindness}
\vspace*{-0.2cm}
\caption{When the same observable is measured in succession, and when the statistics of the second measurement are the same as those of the first, for all input states $\rho$, then such a measurement is said to be of the first kind. }\label{fig:first-kindness}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ is a measurement of the first kind if ${\mathcal{I}}$ does not disturb $\mathsf{E}$ itself, i.e., if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{E}_x {\mathcal{I}}_{\mathcal{X}}(\rho)] = \mathrm{tr}[\mathsf{E}_x \rho]
\end{align*}
for all states $\rho$ and outcomes $x$ \cite{Lahti1991}. In the absence of any constraints, commutativity of an observable is sufficient for it to admit a first-kind measurement; for any observable $\mathsf{E}$ such that $[\mathsf{E},\mathsf{E}]=\mathds{O}$ holds, the corresponding L\"uders instrument is a measurement of the first kind. This follows from analogous reasoning to that given above. But we show that, under the third law constraint, commutativity is necessary for first-kindness, but not sufficient. We now present our second main result:
\begin{theorem}\label{thm:first-kind}
Under the third law constraint, an observable $\mathsf{E}$ admits a measurement of the first kind if and only if $\mathsf{E}$ is commutative and completely unsharp.
\end{theorem}
In particular, note that a third law constrained measurement of any norm-1 observable, such as a sharp observable, necessarily disturbs itself. The proof is given in \app{app:first-kindness} (\propref{prop:first-kindness}). The sufficiency follows from the fact that any completely unsharp observable admits a L\"uders instrument, as discussed above. On the other hand, the following is a sketch of the proof for the necessity of such a condition: a non-selective measurement constrained by the third law always leaves some full-rank state $\rho_0$ invariant. Non-disturbance of $\mathsf{E}$ therefore demands commutativity, as discussed above. But every commutative observable $\mathsf{E}$ is a classical post processing of a sharp observable $\P$, i.e., we may write $\mathsf{E}_x = \sum_y p(x|y) \P_y$ where $\{p(x|y)\}$ is a family of non-negative numbers satisfying $\sum_x p(x|y) = 1$ for every $y$ \cite{Heinosaari2011a}. Given that ${\mathcal{I}}_{\mathcal{X}}$ has a full-rank fixed state, then if ${\mathcal{I}}$ is a first-kind measurement, $\P$ is also not disturbed \cite{Mohammady2021a}. Therefore, a sequential measurement of $\mathsf{E}$ by ${\mathcal{I}}$ followed by measurement of $\P$ defines a joint measurement of $\mathsf{E}$ and $\P$. By \eq{eq:instrument-dilation}, we obtain for every $\rho$ the following:
\begin{align*}
\mathrm{tr}[\P_y \mathsf{E}_x \P_y \rho] = \mathrm{tr}[\P_y \otimes \mathsf{Z}_x {\mathcal{E}}(\rho \otimes \xi)].
\end{align*}
Now assume that $\rho$ is full-rank. Since a third law constrained measurement employs a full-rank apparatus preparation $\xi$, and ${\mathcal{E}}$ obeys \defref{defn:third-law}, the state ${\mathcal{E}}(\rho \otimes \xi)$ is full-rank. It follows that the term on the right-hand side is strictly positive, and hence so too is the term on the left. But this implies that $\P_y \mathsf{E}_x \P_y > \mathds{O}$, and so $0 < p(x|y) < 1$, for all $x,y$. Therefore, $\mathsf{E}$ is completely unsharp.
In \app{app:first-kindness}, we construct an explicit example of a first-kind measurement (not given by a L\"uders instrument) of a commutative and completely unsharp observable. We consider a system ${\mathcal{H}\sub{\s}} = \mathds{C}^N$ with orthonormal basis $\{|n\>: n=1, \dots, N\}$, and an observable $\mathsf{E}:=\{\mathsf{E}_x: x=1,\dots,N\}$ acting in ${\mathcal{H}\sub{\s}}$ given by the effects $\mathsf{E}_x = \sum_n p(n|x) |n\>\<n|$. Here, $ p(n|x) = q(x \ominus n)$, where $\ominus$ denotes subtraction modulo $N$, with $q(n)$ some arbitrary probability distribution satisfying $0<q(n)<1$ for all $n$. Such an observable is commutative and completely unsharp.
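While the first-kind instrument constructed in the appendix is not of the L\"uders form, the L\"uders instrument of such a commutative observable is itself first-kind, which offers a quick numerical sanity check. The sketch below (Python/NumPy; $N$ and the distribution $q$ are illustrative choices of ours) builds the effects $\mathsf{E}_x = \sum_n q(x \ominus n)|n\>\<n|$ and verifies commutativity, complete unsharpness, and first-kindness of the L\"uders measurement.

```python
import numpy as np

N = 4
q = np.array([0.4, 0.3, 0.2, 0.1])  # arbitrary distribution with 0 < q(n) < 1

# Effects E_x = sum_n q(x ⊖ n)|n><n|, with ⊖ subtraction modulo N.
E = [np.diag([q[(x - n) % N] for n in range(N)]) for x in range(N)]
assert np.allclose(sum(E), np.eye(N))  # normalisation

# Commutative (all effects diagonal) and completely unsharp (spectra in (0,1)):
for Ex in E:
    eigs = np.linalg.eigvalsh(Ex)
    assert np.all(eigs > 0) and np.all(eigs < 1)

def luders_channel(rho):
    # Non-selective Luders E-channel; the sqrt of a diagonal effect is entrywise.
    return sum(np.sqrt(Ex) @ rho @ np.sqrt(Ex) for Ex in E)

rng = np.random.default_rng(2)
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T
rho /= np.trace(rho).real  # a random full-rank input state

# First-kindness: the statistics of E are unchanged by its own measurement.
rho_out = luders_channel(rho)
print(all(np.isclose(np.trace(Ex @ rho_out).real, np.trace(Ex @ rho).real)
          for Ex in E))  # True
```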
\subsection{Repeatability}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Repeatability}
\vspace*{-0.2cm}
\caption{When the same observable is measured in succession, and the outcome obtained by the second measurement is guaranteed (with probabilistic certainty) to coincide with that of the first for all input states $\rho$, the measurement is said to be repeatable.}\label{fig:repeatability}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ is repeatable if it holds that
\begin{align*}
\mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_x(\rho)] = \delta_{x,y}\mathrm{tr}[\mathsf{E}_x \rho]
\end{align*}
for all states $\rho$ and outcomes $x,y$ \cite{Busch1995,Busch1996b}. In other words, an instrument ${\mathcal{I}}$ is a repeatable measurement of $\mathsf{E}$ if a second measurement of $\mathsf{E}$ is guaranteed (with probabilistic certainty) to produce the same outcome as ${\mathcal{I}}$. It is simple to verify that repeatability implies first-kindness, since if ${\mathcal{I}}$ is repeatable, then we have
\begin{align*}
\mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_{\mathcal{X}}(\rho)] = \sum_x \mathrm{tr}[\mathsf{E}_y {\mathcal{I}}_x(\rho)] = \mathrm{tr}[\mathsf{E}_y \rho].
\end{align*}
While a first-kind measurement need not be repeatable in general, repeatability and first-kindness coincide for the class of sharp observables (Theorem 1 in Ref. \cite{Lahti1991}). For example, if $\mathsf{E}$ is commutative then the corresponding L\"uders instrument is a measurement of the first kind, but such an instrument is repeatable if and only if $\mathsf{E}$ is sharp; note that $\mathrm{tr}[\mathsf{E}_x {\mathcal{I}}^L_x(\rho)] = \mathrm{tr}[\mathsf{E}_x^2 \rho]$, which satisfies the repeatability condition if and only if $\mathsf{E}_x^2 = \mathsf{E}_x$.
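This equivalence for the L\"uders instrument can be verified numerically. The sketch below (with an arbitrary qubit observable of our own choosing) checks that the L\"uders instrument of a sharp observable satisfies the repeatability condition exactly, while that of a smeared, unsharp version does not:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                         # random full-rank qubit state

def luders(E, rho):
    # Lüders operation: sqrt(E) rho sqrt(E) for a positive effect E.
    w, V = np.linalg.eigh(E)
    sqrtE = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    return sqrtE @ rho @ sqrtE

def is_repeatable(effects, rho, tol=1e-9):
    # Repeatability: tr[E_y I_x(rho)] = delta_{xy} tr[E_x rho] for all x, y.
    return all(
        abs(np.trace(Ey @ luders(Ex, rho)) - (x == y) * np.trace(Ex @ rho)) < tol
        for x, Ex in enumerate(effects) for y, Ey in enumerate(effects)
    )

sharp = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]        # projective observable
unsharp = [np.diag([0.75, 0.25]), np.diag([0.25, 0.75])]  # smeared version

assert is_repeatable(sharp, rho)        # Lüders of a sharp observable: repeatable
assert not is_repeatable(unsharp, rho)  # E_x^2 != E_x breaks repeatability
print("repeatability holds exactly when the observable is sharp")
```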
An observable $\mathsf{E}$ admits a repeatable instrument only if it is norm-1, and in the absence of any constraints, all norm-1 observables admit a repeatable instrument. For example, if $\mathsf{E}$ is a possibly unsharp observable with the norm-1 property, and if $\ket{\psi_x}$ are eigenvalue-1 eigenvectors of the effects $\mathsf{E}_x$, then an instrument with operations ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}_x \rho] |\psi_x\>\<\psi_x|$ is repeatable. Now we present our third main result:
\begin{theorem}
Under the third law constraint, no observable admits a repeatable measurement.
\end{theorem}
This is an immediate consequence of \thmref{thm:first-kind} which shows that, under the third law constraint, norm-1 observables do not admit a measurement of the first kind. Since repeatability is only admitted for norm-1 observables, and since repeatability implies first-kindness, then the statement follows. See \corref{cor:repeatability} for further details.
\subsection{Ideality}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Ideality}
\vspace*{-0.2cm}
\caption{When an observable is measured such that, whenever an outcome can be predicted with certainty, the state of the measured system is left unperturbed, the measurement is said to be ideal. }\label{fig:ideality}
\vspace*{-0.5cm}
\end{center}
\end{figure}
An instrument ${\mathcal{I}}$ is said to be an ideal measurement of $\mathsf{E}$ if for every outcome $x$ there exists a state $\rho$ such that $\mathrm{tr}[\mathsf{E}_x \rho]=1$, and if for every outcome $x$ and every state $\rho$ the following implication holds:
\begin{align*}
\mathrm{tr}[\mathsf{E}_x \rho]=1 \implies {\mathcal{I}}_x(\rho) = \rho.
\end{align*}
That is, ${\mathcal{I}}$ is an ideal measurement if it does not change the state of the system whenever the outcome can be predicted with certainty \cite{Busch1990}. Note that ideality can only be enjoyed by norm-1 observables; since $\mathrm{tr}[\mathsf{E}_x \rho] \leqslant \| \mathsf{E}_x\|$, any $\mathsf{E}$ that does not enjoy the norm-1 property fails the antecedent of the ideality condition, in which case the condition becomes vacuous. Conversely, in the absence of any constraints, all norm-1 observables admit an ideal measurement; the condition $\mathrm{tr}[\mathsf{E}_x \rho] = 1$ holds if and only if $\rho$ only has support in the eigenvalue-1 eigenspace of $\mathsf{E}_x$, which implies that $\mathsf{E}_x \rho = \rho \mathsf{E}_x = \rho$. But in such a case, we obtain ${\mathcal{I}}^L_x(\rho) = \sqrt{\mathsf{E}_x} \rho \sqrt{\mathsf{E}_x} = \mathsf{E}_x \rho = \rho$, and so the L\"uders measurement of a norm-1 observable is ideal.
For the class of sharp observables, the ideal measurements are precisely the L\"uders instruments (see Theorem 10.6 in Ref. \cite{Busch2016a}). Since the third law only permits L\"uders instruments for completely unsharp observables, then we may immediately infer that ideal measurements of any sharp observable, even those represented by a possibly degenerate self-adjoint operator, are prohibited by the third law.
However, unsharp observables admit ideal measurements that are not given by the L\"uders instrument. For example, consider a system ${\mathcal{H}\sub{\s}} = \mathds{C}^3$ with orthonormal basis $\{\ket{-1}, \ket{0}, \ket{1}\}$. Let $\mathsf{E} := \{\mathsf{E}_+, \mathsf{E}_-\}$ be a binary norm-1 observable acting in ${\mathcal{H}\sub{\s}}$, defined by $\mathsf{E}_\pm = |\pm 1\rangle \langle \pm 1| + \frac{1}{2}|0\rangle \langle 0 |$. It can easily be verified that an instrument with operations
\begin{align*}
{\mathcal{I}}_\pm (\cdot) = \<\pm 1| \cdot |\pm 1\> |\pm 1\>\< \pm 1| + \< 0 | \cdot |0 \> \frac{\mathds{1}\sub{\s} }{6}
\end{align*}
is an ideal measurement of $\mathsf{E}$. Therefore, the restriction imposed by the third law on the realisability of L\"uders instruments does not by itself rule out the possibility of ideal measurements for unsharp norm-1 observables. Now we present our fourth main result:
\begin{theorem}
Under the third law constraint, no observable admits an ideal measurement.
\end{theorem}
The proof is given in \app{app:ideality} (\propref{prop:appendix-ideality}), and the following is a rough sketch. If ${\mathcal{I}}$ is an ideal measurement of $\mathsf{E}$, and if $\rho$ is a state for which outcome $x$ can be predicted with certainty, then ${\mathcal{I}}_y(\rho) = \mathds{O}$ for all $y\ne x$, which implies that ${\mathcal{I}}_{\mathcal{X}}(\rho) = \rho$. But given the third law constraint, for every state $\rho$ such that $\mathrm{tr}[\mathsf{E}_x \rho] = 1$, it is shown that $\rho$ cannot be a fixed state of ${\mathcal{I}}_{\mathcal{X}}$, and so ${\mathcal{I}}$ cannot be ideal.
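The non-L\"uders ideal instrument constructed above can be verified directly. The following sketch (our encoding of the basis $\{\ket{-1},\ket{0},\ket{+1}\}$ as the standard basis of $\mathds{C}^3$) checks both $\mathsf{E}$-compatibility and the ideality implication:

```python
import numpy as np

d = 3
basis = [np.eye(d)[:, i:i + 1] for i in range(d)]  # ordering: |-1>, |0>, |+1>
km1, k0, kp1 = basis

# Binary norm-1 observable: E_pm = |pm1><pm1| + (1/2)|0><0|
E = {+1: kp1 @ kp1.T + 0.5 * (k0 @ k0.T),
     -1: km1 @ km1.T + 0.5 * (k0 @ k0.T)}

# The instrument from the text: I_pm(rho) = <pm1|rho|pm1>|pm1><pm1| + <0|rho|0> 1/6
def instr(sign, rho):
    k = kp1 if sign == +1 else km1
    return (k.T @ rho @ k).item() * (k @ k.T) \
        + (k0.T @ rho @ k0).item() * np.eye(d) / 6

# (i) E-compatibility: outcome probabilities agree, tr[I_x(rho)] = tr[E_x rho].
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d))
rho = A @ A.T / np.trace(A @ A.T)                  # random full-rank state
for s in (+1, -1):
    assert np.isclose(np.trace(instr(s, rho)), np.trace(E[s] @ rho))

# (ii) Ideality: tr[E_+ rho] = 1 forces rho = |+1><+1|, which is left invariant.
rho_cert = kp1 @ kp1.T
assert np.isclose(np.trace(E[+1] @ rho_cert), 1.0)
assert np.allclose(instr(+1, rho_cert), rho_cert)
print("E-compatible and ideal on predictable states: OK")
```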
\subsection{Extremality}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Extremality}
\vspace*{-0.2cm}
\caption{ A system may be measured by an instrument obtained by a probabilistic mixture of two distinct instruments. An extremal instrument is one that cannot be expressed as a probabilistic mixture of two distinct instruments. }\label{fig:extremality}
\vspace*{-0.5cm}
\end{center}
\end{figure}
For any fixed value space ${\mathcal{X}}$, the set of instruments is convex. That is, given any $\lambda \in [0,1]$, and any pair of instruments ${\mathcal{I}}^{(i)}:= \{{\mathcal{I}}^{(i)}_x : x\in {\mathcal{X}}\}$, $i=1,2$, we can construct an instrument ${\mathcal{I}}$ with the operations
\begin{align*}
{\mathcal{I}}_x(\cdot) = \lambda \, {\mathcal{I}}^{(1)}_x(\cdot) + (1-\lambda) {\mathcal{I}}^{(2)}_x(\cdot).
\end{align*}
An instrument ${\mathcal{I}}$ is \emph{extremal} when for any $\lambda \in (0,1)$ such a decomposition is only possible if ${\mathcal{I}} = {\mathcal{I}}^{(1)} = {\mathcal{I}}^{(2)}$. Intuitively, this implies that an extremal instrument is ``pure'', whereas a non-extremal instrument suffers from ``classical noise''. For an in-depth analysis of extremal instruments and their properties, see Refs. \cite{DAriano2011,Pellonpaa2013}. A simple example of an extremal instrument is the L\"uders instrument compatible with an observable with linearly independent effects. Since such linear independence is trivially satisfied for norm-1 observables, then their corresponding L\"uders instruments are extremal. But it is also possible for the effects of a completely unsharp observable to be linearly independent. For example, a binary observable $\mathsf{E}:= \{\mathsf{E}_0, \mathsf{E}_1\}$ acting in ${\mathcal{H}\sub{\s}} = \mathds{C}^2$, defined as $\mathsf{E}_0= 3/4|0\rangle \langle 0|+1/4 |1\rangle \langle 1|$ and $\mathsf{E}_1=\mathds{1} - \mathsf{E}_0$, is completely unsharp with linearly independent effects. Since the L\"uders instrument for such an observable is extremal, and can be implemented under the third law constraint, then we can immediately infer that extremality is permitted by the third law. Now we present our final main result:
\begin{theorem}
Under the third law constraint, an observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$ admits an extremal instrument only if $\rank{\mathsf{E}_x} \geqslant \sqrt{\dim({\mathcal{H}\sub{\s}})}$ for all outcomes $x$, and a measuring apparatus can only implement an extremal instrument if it interacts with the system with a non-unitary channel ${\mathcal{E}}$.
\end{theorem}
The proof is given in \app{app:extremality} (\propref{prop:extremality-third-law}). It follows that under the third law constraint, extremality is only permitted for large-rank observables. Note in particular that since L\"uders measurements of completely unsharp observables may be extremal, then the above result indicates that they are realisable under the third law constraint only with non-unitary measurement interactions; indeed, our proof for the sufficiency of complete unsharpness for the realisability of L\"uders instruments (\propref{prop:luders-completely-unsharp}) uses a non-unitary channel. Furthermore, note that in contradistinction to the other properties discussed above, unsharpness of $\mathsf{E}$ is not a necessary condition for extremality. Indeed, sharp observables with sufficiently large rank admit an extremal instrument, albeit such instruments cannot be L\"uders due to the previous results. In \app{app:extremality}, we provide a concrete model for an extremal instrument compatible with a binary sharp observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}} = \mathds{C}^2 \otimes \mathds{C}^2$, defined by $\mathsf{E}_x = \mathds{1} \otimes |x\>\<x|$. Since $\rank{\mathsf{E}_x} = 2 = \sqrt{\dim({\mathcal{H}\sub{\s}})}$, we see that the bound provided in the above theorem is in fact tight.
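The binary qubit observable used above to show that extremality is permitted by the third law is easy to check numerically; this brief sketch verifies that its effects are completely unsharp and linearly independent:

```python
import numpy as np

# Effects of the binary qubit observable from the text.
E0 = np.diag([0.75, 0.25])
E1 = np.eye(2) - E0

# Completely unsharp: every eigenvalue of every effect lies strictly in (0, 1).
for E in (E0, E1):
    w = np.linalg.eigvalsh(E)
    assert np.all((w > 0) & (w < 1))

# Linear independence of the effects: vectorise them and check the rank.
assert np.linalg.matrix_rank(np.stack([E0.ravel(), E1.ravel()])) == 2
print("completely unsharp, with linearly independent effects: OK")
```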
\section{Discussion}
We have generalised and strengthened the results of Ref. \cite{Guryanova2018}, in the finite-dimensional setting, in several ways. We have considered the most general class of (discrete) observables---both the observable to be measured and the pointer observable for the measuring apparatus---and not just those that are sharp and rank-1. Moreover, we have considered a more general class of measurement interactions, between the measured system and measuring apparatus, constrained only by our operational formulation of the third law and thus not restricted to the standard unitary or rank non-decreasing framework. Within the extended setting thus described, we have shown that ideal measurements are categorically prohibited by the third law for all observables and, \emph{a fortiori}, we showed that the third law dictates that whenever a measurement outcome can be predicted with certainty, then the state of the measured system is necessarily perturbed upon measurement. Moreover, we showed that the third law also forbids repeatable measurements, where we note that repeatability and ideality coincide only in the case of sharp rank-1 observables. In addition to the aforementioned impossibility statements, however, our results also include possibility statements as regards extremality and non-disturbance: the third law allows for an extremal instrument that measures an observable with sufficiently large rank, and for a measurement of a completely unsharp observable that does not disturb any observable commuting with it.
Our results have interesting consequences for the role of unsharp observables in the foundations of quantum theory, and the question: what is real? There are two deeply connected traditional paradigms for the assignment of reality to a system, both of which are formulated with respect to sharp observables: the Einstein-Podolsky-Rosen (EPR) criterion \cite{Einstein1935}, and the macrorealism criteria of Leggett-Garg \cite{Leggett1985}.
The EPR criterion for a physical property to correspond to an element of reality reads: \emph{``If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity''}. In other words, the EPR criterion rests on the possibility of ideal measurements: an eigenvalue of some self-adjoint operator exists in a system when the system is in the corresponding eigenstate, so that an ideal measurement of the observable reveals the eigenvalue while leaving the system in the same state. But the EPR criterion is shown to be in conflict with the third law of thermodynamics: it is in fact \emph{not} possible to ascertain any property of the system, with certainty, without changing its state. As argued by Busch and Jaeger \cite{Busch2010a, Jaeger2019}, however, the EPR criterion is sufficient, but not necessary; a necessary condition for a property of a system to correspond to an element of reality is that \emph{``it must have the capacity of influencing other systems, such as a measuring apparatus, in a way that is characteristic of such property''}. Indeed, since the influence the system has on the apparatus may come in degrees---quantified by the probability, or ``propensity'', for the apparatus to register that such property obtains a given value in the system---then even an unsharp observable may correspond to an element of ``unsharp reality''. But note that this weaker criterion makes no stipulation as to how the state of the system changes upon measurement, and does not rely on the possibility of ideal measurements: a property may exist in a system even if its measurement changes the state of the system. Consequently, our results provide support for the unsharp reality program of Busch and Jaeger from a thermodynamic standpoint, as it is shown to be compatible with the third law.
On the other hand, Leggett and Garg proposed Macrorealism as the conjunction of two postulates: {\bf (MR)} \emph{Macrorealism per se}, and {\bf (NI)} \emph{Noninvasive measurability}. {\bf (MR)} rests on the notion of \emph{definiteness}, i.e., that at any given time, a system can only be in one out of a set of states that are perfectly distinguishable by measurement of the observable describing the system---for example, an eigenstate corresponding to some eigenvalue of a self-adjoint operator. On the other hand, {\bf (NI)} requires that measurement of such an observable not influence the statistics of other observables at later times. In other words, {\bf (NI)} relies on the possibility of a non-disturbing measurement. But we showed that the third law permits non-disturbance only for unsharp observables without the norm-1 property. Since such observables do not admit definite values in any state, i.e., no two states can be perfectly distinguished by a measurement of such observables, the third law is incompatible with the conjunction of {\bf (MR)} and {\bf (NI)}. It follows that if we want to keep {\bf (NI)}, then we must drop {\bf (MR)}; once again we are forced to adopt the notion of an unsharp reality.
To be sure, the third law of thermodynamics should not be considered in isolation; a complete analysis of how the laws of thermodynamics constrain channels and quantum measurements demands that the third law be considered in conjunction with the first (conservation of energy) and with the second (no perpetual motion of the second kind). Indeed, our operational formulation of the third law is independent of any notion of temperature, energy, or time. We expect that in the complete picture, that is, when the other laws are also taken into account, our generalised formulation will recover the standard notions of the third law in the literature. It is also an interesting question to ask how our formulation of the third law, and the constraints imposed by such law on measurements, can be extended to the infinite-dimensional setting. A complete operational formulation of channels constrained by the laws of thermodynamics, and for more general systems than those of finite dimension, is thus still an open problem; our work constitutes one part of such a program, which extends the research discipline devoted to the ``thermodynamics of quantum measurements'' \cite{Sagawa2009b, Miyadera2011d, Jacobs2012a, Funo2013, Navascues2014a, Miyadera2015a, Abdelkhalek2016, Hayashi2017, Lipka-Bartosik2018, Solfanelli2019, Purves-2020,Aw2021, Danageozian2022, Mohammady2022a, Stevens2021}. While our impossibility results are expected to carry over to the more complete framework, the question remains as to how our positive claims must be adapted in light of the other laws of thermodynamics: the combined laws may impose further constraints. Indeed, as witnessed by the Wigner-Araki-Yanase theorem, conservation of energy imposes constraints on the measurability of observables that do not commute with the Hamiltonian \cite{E.Wigner1952,Busch2010, Araki1960,Ozawa2002,Miyadera2006a,Loveridge2011,Ahmadi2013b,Loveridge2020a, Mohammady2021a,Kuramochi2022}. 
This is in contradistinction to the third law, which imposes no constraints on measurability. We leave such open questions for future work.
\newpage
\begin{acknowledgments}
The authors wish to thank Leon Loveridge for his invaluable comments, which greatly improved the present manuscript. M.H.M. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 801505. T.M. acknowledges financial support from JSPS KAKENHI Grant No. JP20K03732.
\end{acknowledgments}
\section{Introduction} \label{sec:I}
\vspace{-.3cm}\IEEEPARstart{T}{he} use of smartphones and data-hungry applications in radio access networks is increasing dramatically worldwide\ignore{\cite{1}}. This growth impacts the ability of traditional wireless networks to meet the required Quality-of-Service (QoS) for their users. Device-to-device (D2D) communication has been proposed as a candidate technology \cite{3a,2} to support a massive number of connected devices and possibly improve the data rate of the next-generation mobile networks \cite{2a,27}. The decentralized nature of D2D networks allows devices to communicate with other nearby devices over short-range and possibly more reliable links, which is suitable for numerous applications in mobile networks. For example, in wireless cellular networks, D2D systems enable mobile traffic offloading through user cooperation for content downloading and sharing. Using conventional centralized Point-to-Multi-Point (PMP) networks, e.g., cellular, Wi-Fi, and fog/cloud radio access networks (FRAN/CRAN), for content delivery would be excessively complicated and expensive.\ignore{ Because systems comprising a massive number of devices require a large number of access points. These access points are connected to a backhaul network and require handover algorithms to support mobility.
In other practical applications, e.g., ad-hoc networks, such as wireless sensor networks (WSNs), D2D systems enable fast and reliable data communications for route and topology discovery as well as sending control/emergency packets.
\ignore{To overcome the backhaul and mobility challenges, D2D systems have become a critical technology in the last few years. Indeed, D2D systems enable automated vehicles to communicate with nearby cars, pedestrians, and roadside devices without relying on a central and error-prone controller \cite{5a}.} }
Wireless channels are prone to interference and fading, which result in packet/data loss at the application level. A widely used algorithm for the packet recovery problem is the Automatic Repeat reQuest (ARQ)\ignore{\cite{6a}}.\ignore{This simple algorithm relies on negative acknowledgments (NACK) for the targeted devices to declare an erasure and reattempt the transmission.} However, this simple algorithm is highly inefficient for broadcast applications. For example, consider that a base station (BS) is required to deliver the set of packets $\{p_1, p_2, p_3\}$ to users $\{u_1, u_2, u_3\}$. Assume that after sequentially transmitting $\{p_1, p_2, p_3\}$, user $u_i$ is still missing packet $p_i$ for $1 \leq i \leq 3$. To complete the reception of all packets for all users, the BS needs at least $3$ uncoded transmissions. However, by using an erasure code, the BS can broadcast the binary XOR combination $p_1\oplus p_2\oplus p_3$, which requires only a single transmission.
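The coded-broadcast example above can be sketched in a few lines of Python (the packet contents are placeholders of our choosing):

```python
# Packets as byte strings (placeholder contents).
packets = {1: b"\x11" * 4, 2: b"\x22" * 4, 3: b"\x33" * 4}

def xor(*blocks):
    # Bitwise XOR of equal-length byte blocks.
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

coded = xor(packets[1], packets[2], packets[3])   # one coded broadcast by the BS

# User u_i misses p_i but holds the other two, and recovers p_i by XOR-ing
# the coded packet with its side information.
for i in (1, 2, 3):
    side_info = [packets[j] for j in packets if j != i]
    assert xor(coded, *side_info) == packets[i]
print("one coded transmission completes all three users")
```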
Different erasure codes have been proposed for various applications and diverse network settings to solve the packet recovery problem. For the aforementioned PMP wireless broadcast networks, Raptor codes \cite{RC} and Random Linear Network Codes (RLNC) \cite{RLNC} achieve the maximum network throughput. Despite being efficient and offering a low-complexity solution, Raptor codes and RLNC are not attractive techniques for real-time applications, such as video streaming, online gaming, and teleconferencing. These codes accumulate a substantial decoding delay, meaning that they do not allow progressive decoding. In particular, coded packets cannot be decoded to retrieve the original data until a large number of independent transmissions are received.
Instantly Decodable Network Coding (IDNC) has been proposed as a low-complexity solution to improve throughput while allowing progressive decoding of the received packets \cite{IDNC}. By relying solely on binary XOR operations, IDNC ensures fast and instantaneous decodability of the transmitted packets for their intended users. Therefore, IDNC has been the topic of extensive research, e.g., \cite{12m,13m,14m, 15mm, 16m}. It has been applied in several real-time broadcast applications wherein received packets need to be used at the application layer immediately to maintain a high QoS, e.g., relay-aided networks \cite{17m,18m}, video-on-demand and multimedia streaming \cite{20m,23m, 24m, 25m}, and D2D-enabled systems \cite{26m,28m,29m,30m}. The potential of the IDNC technique is manifold \cite{31m}.
All the aforementioned IDNC works, for both PMP and D2D networks, are centralized in the sense that they require a global coordinator, i.e., a BS or a cloud, to plan packet combinations and coordinate transmissions. For example, the authors of \cite{30m} considered the completion time minimization problem in partially connected D2D FRANs. The problem is solved under the assumption that the fog is within the transmission range of all devices and has perfect knowledge of the network topology. The authors suggested that the fog selects the transmitting devices and their optimal packet combinations and conveys this information to the users for execution.
While the aforementioned centralized approaches provide good performance for the decentralized system, they come at a high computation cost at the cloud/fog units and high power consumption at each user. Indeed, users need to send the status of all D2D channels to the central controller at each time slot. In addition, the cloud controller needs to know the downloading history of the users for content delivery.\ignore{ Furthermore, these centralized approaches are not feasible in some network configurations, including the roadside-to-vehicle and vehicle-to-vehicle (V2V) applications discussed earlier.} Recently, the authors in \cite{33m}, \cite{33e} proposed a distributed solution for D2D networks that relies on a non-cooperative game-theoretic formulation. However, in such game models, each player makes its decisions individually and selfishly. Furthermore, the system is assumed to be fully connected, i.e., single-hop, so that only one player is selected to transmit at any time instance. The fully connected model is not only idealistic, in that it assumes all players are connected, but it also causes severe latency (delay) in the network. Our work proposes a fully distributed solution for completion time minimization in a partially connected D2D network using coalition games \cite{25}. Thus, multiple and altruistic players transmit IDNC packets simultaneously.
Due to the cooperative and altruistic decisions among players, coalition games have been used in different network settings to optimize different parameters \cite{25aa, N1, 26, 42a, N2}. For example, the tutorial in \cite{25aa} classified the coalition games and demonstrated the applications of coalition games in communication networks. The authors of \cite{N1} proposed a distributed game theoretical scheme for users' cooperation in wireless networks to maximize users' rate while accounting the cost of cooperation. The authors of \cite{42a} proposed a Bayesian coalitional game for coalition-based cooperative packet delivery. Recently, the authors of \cite{N2} suggested a constrained
coalition formation game for minimizing users' content uploading in D2D multi-hop networks. For the purpose of packet recovery, we employ a coalition game and IDNC optimization in D2D multi-hop networks.
Our work considers D2D multi-hop networks comprising several single-interface devices distributed in a geographical area, where each device is partially connected to other devices.\ignore{ The considered channel allocation follows \cite{26m, 29m, 30m, 33m}, where devices can transmit using the same radio resource block, e.g., frequency.} The packet recovery problem is motivated by real-time applications that tolerate only low delays, e.g., multimedia streaming. In such applications, users' devices need to immediately exchange a set of packets, represented by a frame, among themselves within the minimum communication time. Our proposed model arises in different applications. For example, in current LTE systems, users at the edge of the service area or in dense urban areas often experience high degradation in the quality of the signal from data centers due to channel impairments. Our proposed distributed D2D scheme would reduce the total communication time of such users by implementing short and reliable D2D communications. Moreover, in cell centers with low erasures, our proposed scheme would offload the cloud's resources, e.g., time and bandwidth, and improve its ability to serve more users.
Motivated by the aforementioned discussions, our work solves the completion time reduction problem in partially connected D2D networks. To this end, we introduce a novel coalition game framework capturing the complex interplay of instantly decodable network coding, transmitting user-receiving user associations, and the limited coverage zone of each user. The main contributions of this work can be summarized as follows.
\begin{enumerate}
\item We formulate the \textit{completion time minimization problem} in partially connected D2D networks and model it as a coalition game. We further demonstrate the difficulty of expressing the problem as a coalition game with non-transferable utility (NTU), which motivates its relaxation to a \textit{coalition formation game} (CFG).
\item We derive the rules for assigning players\footnote{Player and device are used interchangeably throughout this paper.}, selecting transmitting player, and finding optimal encoded IDNC packets for each disjoint altruistic coalition.
\item We propose a distributed algorithm based on merge-and-split rules and analyze its convergence, stability, \ignore{robustness,}complexity, and communication overhead.
\item We validate our theoretical findings using numerical simulations. Our numerical results reveal that our distributed scheme can significantly outperform existing centralized PMP and fully distributed methods. Indeed, for the presented network setups, our coalition formation game offers almost the same performance as the centralized FRAN scheme.
\end{enumerate}
The rest of this paper is organized as follows. \sref{SMMM} introduces the system model and formulates the completion time minimization problem. Afterward, the problem is modeled as a coalition game and relaxed to a coalition formation game in \sref{COA}. The proposed distributed algorithm can be found in \sref{PS}, and its convergence analysis, stability, complexity, and communication overhead are provided in \sref{PA}. \sref{SR} numerically tests the performance of the proposed method against existing schemes, and \sref{CC} concludes the paper.
\section{System Overview and Problem Formulation} \label{SMMM}
The considered network and IDNC models are introduced in \sref{sec:sub1} and \sref{sec:sub2}, respectively. The fully distributed completion time reduction problem in the considered network is formulated in \sref{PF}. \sref{sec:sub3} further shows through a simple example that the completion time problem is generally intractable, which motivates the coalition game formulation in \sref{COA}.
\subsection{Network Model and Parameters} \label{sec:sub1}
Consider a D2D-enabled wireless network consisting of $N$ users denoted by the set $\mathcal{U}=\{u_1, u_2, \ \cdots, u_N\}$. These users are interested in receiving a frame $\mathcal{P}=\{p_1, p_2, \ \cdots, p_M \}$ of $M$ packets. The size of the frame $\mathcal{P}$ depends on the packet size and the content size. Due to previous initial transmissions from data centers or access points, each device holds a part of the frame $\mathcal{P}$. The side information of the $u$-th device is represented by the following sets.
\begin{itemize}
\item The \textit{Has} set $\mathcal{H}_u$: Successfully received packets.
\item The \textit{Wants} set $\mathcal {W}_u=\mathcal{P} \setminus \mathcal{H}_u$: Erased/lost packets.
\end{itemize}
The side information of all players can be summarized in a binary $N \times M$ \textit{state matrix} $\mathbf{S}=[s_{up}]$ wherein the entry $s_{up}=0$ indicates that packet $p$ has been successfully received by player $u$, and $s_{up}=1$ otherwise. In order for all users to obtain the whole frame $\mathcal{P}$ from D2D communications, we assume that each packet $p_i, 1 \leq i \leq M$ is received by at least one user. In other words, each column of $\mathbf{S}$ contains at least one zero entry, i.e., $\sum_{u \in \mathcal{U}} s_{up} \leq N-1$ for all packets $p \in \mathcal{P}$.
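As an illustration (with hypothetical side information of our own choosing), the state matrix, the Wants sets, and the feasibility condition can be constructed as follows, using the convention that $s_{up}=1$ marks a missing packet:

```python
import numpy as np

N, M = 3, 4
# Hypothetical Has sets for N = 3 users and M = 4 packets (our own example).
has = {0: {0, 1, 3}, 1: {1, 2}, 2: {0, 2, 3}}

# State matrix: s_up = 0 if player u already holds packet p, 1 if it is missing.
S = np.ones((N, M), dtype=int)
for u, H in has.items():
    S[u, list(H)] = 0

# Wants sets are the complements of the Has sets within the frame.
wants = {u: {p for p in range(M) if S[u, p] == 1} for u in range(N)}
print(wants)

# Feasibility of pure D2D recovery: each packet is held by at least one player,
# i.e., every column of S contains at least one zero entry.
assert all(S[:, p].sum() <= N - 1 for p in range(M))
```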
We consider a realistic multi-hop network topology. In such networks, battery-powered devices can only target the subset of devices in their coverage zone, denoted by $\mathcal{C}_u$ for the $u$-th player.\ignore{ In network coding literature, multi-hop topology is referred to as partially connected D2D networks whereas single hop topology is referred to as fully connected D2D networks.} The network topology can be captured by a unit-diagonal symmetric $N \times N$ adjacency matrix $\mathbf{C}$ that represents the connectivity of the players, such that $\mathbf{C}_{u u^\prime} = 1$ if and only if $u^\prime \in \mathcal{C}_u$. We assume that no part of the network is disjoint, i.e., the matrix $\mathbf{C}$ is connected. Otherwise, the proposed algorithm is separately applied to each independent part of the network. Upon successful reception of a packet, each player sends an error-free acknowledgment (ACK) to all players in its coverage zone to update their side information matrices.
We focus on the upper-layer view of the network, where the network coding scheme is performed at the network layer and the physical layer is abstracted by a memoryless erasure channel. This abstraction is widely used in the network coding literature, where a packet is either perfectly received or completely lost with a certain average probability \cite{12m}, \cite{14m}, \cite{26m}, \cite{29m, 30m, 31m, 33m, 33e}, \cite{N3}. Therefore, the physical channel between players $u$ and $u'$ is modeled by a Bernoulli random variable whose mean
$\sigma_{uu'}$ is the packet erasure probability from player $u$ to player $u'$. We assume that these probabilities remain constant during the transmission of a single packet $p_i \in \mathcal{P}$ and are known to all devices. However, due to the channel's asymmetry and the difference in the transmit powers of the devices $u$ and $u^\prime$, the equality of $\sigma_{u u^\prime}$ and $\sigma_{u^\prime u}$ is not guaranteed.
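As a quick illustration of this channel abstraction, the sketch below (with a hypothetical erasure probability and our own naming) samples the Bernoulli erasure channel and checks that the empirical delivery rate approaches $1-\sigma_{uu'}$.

```python
import random

def transmit_over_erasure(sigma, rng):
    """Return True when the packet survives the channel (erased w.p. sigma)."""
    return rng.random() >= sigma

rng = random.Random(0)          # fixed seed for reproducibility
sigma = 0.3                     # hypothetical erasure probability
delivered = sum(transmit_over_erasure(sigma, rng) for _ in range(10_000))
rate = delivered / 10_000       # empirical delivery rate, close to 1 - sigma
```
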
We consider a slowly changing network topology, in which players have fixed locations during each IDNC packet transmission and may change positions from one transmission to the next. After a transmission, the devices can move; all the network variables are then updated, and our model, i.e., the coalition formation solution, can be reused with the updated network parameters. It is important to note that in single-hop networks, each player is connected to all other players in the network, and hence, it precisely knows the side information of all other players. To avoid any collision in such networks, only one player is allowed to transmit an encoded packet at any time slot. Clearly, this causes severe latency, i.e., delay, in delivering packets to all players. In multi-hop networks, multiple players are allowed to transmit encoded packets simultaneously. This results in targeting many players at once, and thus speeds up the delivery of packets to the players.
\subsection{Instantly Decodable Network Coding Model} \label{sec:sub2}
IDNC encodes packets through binary XOR operations. Let $\kappa \subset \mathcal{P}$ be an XOR combination of some packets in $\mathcal{P}$. The transmission of the combination $\kappa$ is beneficial to the $u$-th user, in the sense that it allows the $u$-th user to retrieve one of its missing packets, if and only if the combination contains a single packet from $\mathcal{W}_u$. In that case, the user $u$ can XOR the combination $\kappa$ with $\kappa \cap \mathcal{H}_u$ to obtain its missing packet. Hence, we say that the user $u$ is \emph{targeted} by the transmission $\kappa$.
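The instant-decodability rule is easy to state in code. In the sketch below (our own naming, toy side information), a user is targeted by a combination $\kappa$ if and only if $|\kappa \cap \mathcal{W}_u| = 1$, in which case XOR-ing out the known packets leaves exactly the missing one.

```python
def decode(kappa, has, wants):
    """Return the packet user u recovers from combination kappa,
    or None when kappa is not instantly decodable for u."""
    missing = kappa & wants
    if len(missing) != 1:            # zero or several unknown packets
        return None
    # conceptually: kappa XOR (kappa ∩ H_u) leaves the single missing packet,
    # which requires every other packet of kappa to be in H_u
    assert kappa - missing <= has
    return next(iter(missing))

kappa = {"p3", "p4"}                         # hypothetical combination
has, wants = {"p1", "p3"}, {"p2", "p4"}      # side information of user u
```
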
Let $\mathcal{A}^{(t)} \subset \mathcal{U}$ denote the set of transmitting players at the $t$-th transmission and $\underline{\rm\kappa}^{(t)}(\mathcal{A})=(\kappa_1, \ \cdots, \ \kappa_{|\mathcal{A}^{(t)}|})$ denote the packet combinations to be sent by the players in $\mathcal{A}^{(t)}$. For notational simplicity, the time index $t$ is omitted when it is clear from the context. Similar to \cite{29m}, \cite{30m}, \cite{33m}, \cite{33e}, we consider that players use the same frequency band and transmit encoded packets simultaneously. Thus, players located in the intersection of the coverage zones of multiple transmitting players experience a collision at the network layer and cannot decode any packet. Accounting for the interference caused by other players to the set of transmitting players in partially connected D2D networks is left for future work. Therefore, player $u^\prime$ is targeted by the transmission from the $u$-th player if and only if it can receive the transmission and the packet combination contains a single packet from $\mathcal {W}_{u^\prime}$. Let $\underline{\rm\tau}({\underline{\rm\kappa}(\mathcal{A})})=(\tau_1, \ \cdots, \ \tau_{| \mathcal{A}|})$ denote the sets of players targeted by the transmitting players, wherein $u^\prime \in \tau_u({\underline{\rm\kappa}(\mathcal{A})})$ implies that $|\mathcal{W}_{u^\prime} \cap {{\rm\kappa}_u(\mathcal{A})}| = 1$ and $\{u^\prime\} \cap \mathcal{C}_u \cap \mathcal{C}_{u''} = \delta_{u{u''}} \{u^\prime\}$ for all transmitting players $u'' \in \mathcal{A}$, where $\delta_{u{u''}}$ is the Kronecker symbol.
\begin{definition}
The individual completion time $\mathcal {T}_u$ of the $u$-th player is the number of transmissions required until it gets all packets in $\mathcal{P}$. The overall completion time $\mathcal {T} =\max _{u \in\mathcal{U}}\{\mathcal{T}_u\}$ represents the time required until all the players get all the packets.
\end{definition}
We use IDNC to minimize the completion time required for all users in the partially connected D2D network to receive all packets. Given that the direct minimization of the completion time is intractable \cite{31m}, we follow \cite{16m} in reducing the completion time by controlling the decoding delay.
\begin{definition}
The decoding delay $\mathcal{D}_u$ of player $u$ increases by one unit if and only if the player still wants packets, i.e., $\mathcal{W}_u \neq \varnothing$, and receives a combination that does not allow it to reduce the size of its Wants set. The overall decoding delay $\mathcal{D}$ is the sum of all individual delays.
\end{definition}
\subsection{Completion Time Minimization Problem Formulation}\label{PF}
In this subsection, we formulate the distributed completion time reduction problem in an IDNC-enabled D2D network. Let $\underline{\rm N}$ be a binary vector of size $N$ whose $u$-th index is $1$ if player $u$ has a non-empty \textit{Wants} set, i.e., $\mathcal{W}_u\neq \varnothing$, and $0$ otherwise, and let $\underline{\rm{\overline{\rm{\tau}}}}(\underline{\kappa}(\mathcal{A}))=\underline{1}-\underline{\rm \tau}(\underline{\kappa}(\mathcal{A}))$ be the indicator of the players not targeted by the encoded packets $\underline{\kappa}(\mathcal{A})$. The different erasure occurrences at the $t$-th time slot are denoted by $\boldsymbol{\omega}:\mathbb{Z}_+ \rightarrow \{0,1\}^{N\times N}$ with $\boldsymbol{\omega}(t)=[Y_{u u^\prime}]$, for all $(u,u^\prime)\in \mathcal{U}^2$, where $Y_{u u^\prime}$ is a Bernoulli random variable equal to $0$ with probability $\sigma_{u u^\prime}$.
Let $\underline{\rm a}_t=(a_t^{[1]}, a_t^{[2]},\ \cdots, \ a_t^{[N]})$ be a binary vector of length $N$ whose $u$-th element $a_t^{[u]}$ is equal to $1$ if player $u$ is transmitting, so that $\|\underline{\rm a}_t\|_1=|\mathcal{A}|$. Likewise, let $\underline{\rm \mathcal{D}}(\underline{\rm a}_t)$ be the decoding delay experienced by all players in the $t$-th recovery round. In particular, $\underline{\rm \mathcal{D}}(\underline{\rm a}_t)$ is a metric that quantifies the ability of the transmitting players to generate innovative packets for all the targeted players. This metric increases by one unit for each player that still wants packets and successfully receives a non-useful transmission from any transmitting player in $\mathcal{A}$, and for each transmitting player that still wants some packets. Let $\underline{\rm \mathcal{I}}=(\mathcal{I}^{[1]}, \mathcal{I}^{[2]},\ \cdots, \ \mathcal{I}^{[N]})$ be a binary vector of size $N$ whose entry $\mathcal{I}^{[u]}$ is $1$ if player $u$ hears more than one transmission from the set $\mathcal{A}$, i.e., $u \in \mathcal{C}_{u^\prime }\cap \mathcal{C}_{u''}$ for some $u^\prime \neq u'' \in \mathcal{A}$, and $0$ otherwise. Similarly, let $\underline{\rm \mathcal{O}}=(\mathcal{O}^{[1]}, \mathcal{O}^{[2]},\ \cdots, \ \mathcal{O}^{[N]})$ be a binary vector of size $N$ whose element $\mathcal{O}^{[u]}$ is $1$ if player $u$ is out of the transmission range of every player in $\mathcal{A}$, i.e., $u \notin \mathcal{C}_{u^\prime}, \forall ~u^\prime \in \mathcal{A}$, and $0$ otherwise.
Given the above configurations, the overall decoding delays $\underline{\rm \mathbb{D}}(\underline{\rm a}_t)$ experienced by all players, since the beginning of the recovery phase until the $t$-th transmission, can be expressed as follows.
\begin{align} \label{eq3}
\underline{\rm \mathbb{D}}(\underline{\rm a}_t) =\underline{\rm \mathbb{D}}(t-1) +
\begin{cases}
\underline N \hspace{2cm} & \mbox{if}~~\|\underline{\rm a}_t\|_1=0\\
\underline{ \mathcal{I}}+\underline{ \mathcal{O}}+ \underline{\rm a}_t+\underline{\rm \mathcal{D}}(\underline{\rm a}_t) & \mbox{otherwise}.
\end{cases}
\end{align}
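A literal transcription of the update in \eref{eq3} is given below; following the surrounding text, every per-slot delay term is restricted to players whose Wants set is still non-empty (all identifiers are ours). The third check below replays the first time slot of the example of Fig.~\ref{fig1}.

```python
def update_delay(D_prev, N_vec, a, I, O, d):
    """One step of the accumulated decoding delay of (eq3).
    D_prev: delays up to slot t-1;  N_vec: non-empty-Wants indicators;
    a: transmission indicators;     I: collision indicators;
    O: out-of-range indicators;     d: per-slot non-innovative delays D(a_t)."""
    if sum(a) == 0:                  # nobody transmits: every player that
        return [Dp + Nv for Dp, Nv in zip(D_prev, N_vec)]  # still wants is delayed
    # otherwise, a wanting player is delayed by collisions, being out of
    # range, transmitting itself, or receiving a non-innovative packet
    return [Dp + Nv * (Iu + Ou + au + du)
            for Dp, Nv, Iu, Ou, au, du in zip(D_prev, N_vec, I, O, a, d)]
```
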
As mentioned, the completion time is a difficult and intractable metric to optimize directly. However, in the network coding literature, such a metric is approximated by the \emph{anticipated} completion time, which can be computed at each time instant using the decoding delay. Using the decoding delay in \eref{eq3}, the anticipated completion time is defined as follows.
\begin{definition}
The anticipated individual completion time of the $u$-th player is defined by the following expression
\begin{align}
\mathcal{T}_u(\underline{\rm a}_t) = \frac{|\mathcal{W}^{(0)}_u|+ \mathbb{D}_u(\underline{\rm a}_t)-\mathbb{E}[\sigma_u]}{1-\mathbb{E}[\sigma_u]}, \label{eq85}
\end{align}
where $|\mathcal{W}^{(0)}_u|$ is the size of the Wants set of player $u$ at the beginning of the recovery phase and $\mathbb{E}[\sigma_u]$ is the expected erasure probability of the channels linking player $u$ to the other players.\end{definition}
Clearly, \eref{eq85} represents the number of transmissions required to complete the delivery of all requested packets in $\mathcal{P}$. In this context, the completion time is intimately related to the throughput of the system, measured as the number of cooperative D2D transmission rounds required by the players to download all their requested packets.
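The anticipated completion time of \eref{eq85} is a one-line computation; the sketch below uses hypothetical numbers.

```python
def anticipated_completion_time(w0, delay, eps):
    """(eq85): w0 = |W_u^(0)| (initial Wants-set size),
    delay = accumulated decoding delay of player u,
    eps = expected erasure probability E[sigma_u], with 0 <= eps < 1."""
    return (w0 + delay - eps) / (1 - eps)
```

With an error-free channel ($\mathbb{E}[\sigma_u]=0$) the expression reduces to the number of initially missing packets plus the accumulated delay, as expected.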
The overall anticipated completion time can be written as $\mathcal{T}(\underline{\rm a}_t) =\max\limits_u (\mathcal{T}_u(\underline{\rm a}_t))=\|\underline{\mathcal{T}}(\underline{\rm a}_t)\|_\infty$. Therefore, the anticipated completion time minimization problem at the $t$-th transmission in an IDNC-enabled multi-hop D2D network can be written as follows.
\begin{align}
\min_ {\substack{\underline{\rm a}_t\in \{0,1\}^N\\ \underline{\kappa}(\mathcal{A}) \in \{0,1\}^{M}}} \|\underline{\mathcal{T}}(\underline{\rm a}_t)\|_\infty. \label{eqct}
\end{align}
Unlike the single-hop model, which requires an optimization over only a single transmitting player and its corresponding packet combination, the multi-hop model needs to select the set of transmitting players $\mathcal{A}$ and the optimal encoded packets $\underline{\kappa}(\mathcal{A})$ so that the probability of increasing the anticipated completion time is minimized.
\subsection{Example of IDNC Transmissions in a Partially Connected D2D-enabled Network}\label{sec:sub3}
This section illustrates the aforementioned definitions and concepts with a simple example. Consider a simple partially connected D2D network containing $6$ players and a frame $\mathcal{P}=\{p_1,p_2,p_3,p_4\}$ as illustrated in Fig. \ref{fig1}. The side information of all players is given on the left part of Fig. \ref{fig1}, and the coverage zone of each player is represented by edges. For ease of analysis, we assume error-free transmissions.
Assume that, in the first time slot, $u_1$ transmits the encoded packet $\kappa_1=p_3\oplus p_4$ to players $u_2$, $u_3$, $u_5$, and $u_6$ transmits $\kappa_6=p_1\oplus p_4$ to players $u_4$, $u_5$. Then, in the second time slot, $u_4$ transmits $\kappa_4=p_2$ to $u_6$, and $u_1$ transmits $\kappa_1=p_2\oplus p_4$ to players $u_2$, $u_5$. The decoding delay experienced by the different players is given as follows.
\begin{itemize}
\item Player $u_5$ experiences one unit of delay as it is in the intersection of the coverage zones of $u_1$ and $u_6$. In other words, $u_5$ is in collision, i.e., $\mathcal{I}^{[5]}=1$. Thus, player $u_5$ is not able to decode the combination $\kappa_6$ transmitted by player $u_6$.
\item Player $u_6$ experiences one unit of delay as it is transmitting in the first time slot while still wanting packets.
\end{itemize}
Under this scenario, the different quantities evolve as follows.
\begin{itemize}
\item First time slot: $\underline{\rm N}=(0~1~1~1~1~1)$, the set of transmitting players is $\mathcal{A}^{(1)}=\{u_1, u_6\}$, i.e., $\underline{\rm a}_1=(1~0~0~0~0~ 1)$, the corresponding encoded packets are $\underline{\rm\kappa}(\mathcal{A}^{(1)})=(\kappa_1, \kappa_6)$, and the sets of targeted players are $\underline{\rm\tau}({\underline{\rm\kappa}(\mathcal{A}^{(1)})})=(\tau_1, \tau_6)=\{(u_2, u_3), (u_4)\}$. The indicator of players hearing more than one transmission is $\underline{\rm \mathcal{I}}=(0~0~0~0~1~0)$, and the indicator of players out of the transmission range of every player in $\mathcal{A}^{(1)}$ is $\underline{\rm \mathcal{O}}=\underline{\rm 0}$. The decoding delay experienced by all players is $\underline{\rm \mathcal{D}}(\underline{\rm a}_1)=(0~0~0~0~1~1)$, and the accumulated decoding delay is $\underline{\rm \mathbb{D}}(\underline{\rm a}_1)=(0~0~0~0~1~1)$.
\item Second time slot: $\underline{\rm N}=(0~1~0~0~1~1)$, the set of transmitting players is $\mathcal{A}^{(2)}=\{u_1,u_4\}$, i.e., $\underline{\rm a}_2=(1~0~0~1~0~ 0)$, the corresponding encoded packets are $\underline{\rm\kappa}(\mathcal{A}^{(2)})=(\kappa_1, \kappa_4)$, and the sets of targeted players are $\underline{\rm\tau}({\underline{\rm\kappa}(\mathcal{A}^{(2)})})=(\tau_1,\tau_4)=\{(u_2, u_5), (u_6)\}$. The indicator of players hearing more than one transmission is $\underline{\rm \mathcal{I}}=\underline{\rm 0}$, and the indicator of players out of the transmission range of every player in $\mathcal{A}^{(2)}$ is $\underline{\rm \mathcal{O}}=\underline{\rm 0}$. The decoding delay is $\underline{\rm \mathcal{D}}(\underline{\rm a}_2)=\underline{\rm 0}$ and the accumulated decoding delay is $\underline{\rm \mathbb{D}}(\underline{\rm a}_2)=(0~0~0~0~1~1)$.
\item The individual completion times of all players after the second transmission are $\underline{\mathcal{T}}=(0~2~1~1~2~2)$. Thus, the maximum completion time is $2$ time slots, which represents the overall completion time for all players to get their requested packets, i.e., $\underline{\rm N}=\underline{\rm 0}$.
\end{itemize}
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\linewidth]{fig1.pdf}
\caption{A partially connected D2D network containing $6$ players and $4$ packets.}
\label{fig1}
\end{figure}
\section{Distributed Completion Time Minimization as a Coalition Game}\label{COA}
This section models the completion time problem in IDNC-enabled D2D multi-hop networks using coalition games \cite{25}. Afterward, fundamental concepts of coalition games are defined. These concepts are used in \sref{PS} to derive the distributed completion time reduction solution in a partially connected D2D network.
\subsection{Completion Time Minimization as a Coalition Game}
To mathematically model the aforementioned completion time problem, we use coalition game theory. In particular, the problem is modeled as a coalition game with a non-transferable utility (NTU)\cite{25}.
\begin{definition}
A coalition game with a \textit{non-transferable utility}
is defined as a pair ($\mathcal{U},\phi$), where $\mathcal{U}$ is the set of players consisting of the $N$ devices and $\phi$ is a real function such that, for every coalition $\mathcal{S}_s\subseteq \mathcal{U}$, $\phi(\mathcal{S}_s)$ is the payoff that coalition $\mathcal{S}_s$ receives, which cannot be arbitrarily apportioned among its players.
\end{definition}
For the problem of cooperative D2D completion time among players, given any coalition $\mathcal{S}_s\subseteq \mathcal{U}$, we define $\phi(\mathcal{S}_s)$ such that the element $\phi_u(\mathcal{S}_s)$ represents the payoff of player $u$ in coalition $\mathcal{S}_s$. Let $|\mathcal{S}_s|$ denote the total number of players in $\mathcal{S}_s$. The $|\mathcal{S}_s|$-dimensional vector $\underline{\phi}(\mathcal{S}_s) = (\phi_1(\mathcal{S}_s), \ \cdots, \ \phi_{|\mathcal{S}_s|}(\mathcal{S}_s))$ represents the vector of real payoffs of coalition $\mathcal{S}_s$. As previously mentioned, for each coalition, we need to determine the transmitting player and its IDNC packet selection in order to minimize the increase in the completion time. Consequently, by adopting the cooperative D2D completion model described in the previous section, the total payoff of any coalition $\mathcal{S}_s\subseteq \mathcal{U}$, $\forall s \in \{1, \cdots,k\}$, is given by
\begin{align}
\phi(\mathcal{S}_s) =\max\limits_u (\phi_u(\mathcal{S}_s))=\|\underline{\phi}(\mathcal{S}_s)\|_\infty,
\label{eq4}
\end{align}
where $\phi_u(\mathcal{S}_s)$ is the payoff of player $u$, which in our problem is given by
\begin{align}
\phi_u(\mathcal{S}_s) = - \|\mathcal{T}_u(\underline{\rm a}_t)\|_\infty - \|{\mathbb{D}}_u(\underline a_t)-{\mathbb{D}}_u(\underline a_{t-1})\|_1.
\label{eq5}
\end{align}
The payoff function in \eref{eq4} represents the total payoff
that a coalition receives when its players self-organize. For
a player $u\in \mathcal{S}_s$, the first term in \eref{eq5} represents the anticipated completion time of player $u$ defined in \eref{eq85}. Similarly, the second term in \eref{eq5} represents the increase of the decoding delay defined in \eref{eq3}. Therefore, players in coalitions seek to increase the payoff in \eref{eq5} by minimizing the anticipated completion time through controlling the decoding delay.
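The per-player payoff of \eref{eq5} and one consistent reading of the $\infty$-norm aggregation in \eref{eq4} can be sketched as follows (identifiers are ours; since the payoffs of \eref{eq5} are non-positive, the $\infty$-norm picks out the worst-off player).

```python
def player_payoff(T_u, D_u_now, D_u_prev):
    """(eq5): penalize the anticipated completion time of player u and
    the increase of its decoding delay in the current slot."""
    return -abs(T_u) - abs(D_u_now - D_u_prev)

def coalition_value(payoffs):
    """Infinity norm of the payoff vector, as in (eq4); for the
    non-positive payoffs above, this is the magnitude of the worst payoff."""
    return max(abs(p) for p in payoffs)
```
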
\begin{property} The proposed cooperative D2D completion time problem can be modeled as a coalition game with NTU ($\mathcal{U},\phi$),
where $\mathcal{U}$ is the set of players and $\phi$ is the payoff function given by \eref{eq4}.
\end{property}
\begin{IEEEproof}
From Definitions 1 and 2, each player $u$ has its own unique anticipated completion time and decoding delay, and thus a unique payoff $\phi_u(\mathcal{S}_s)$ within a coalition $\mathcal{S}_s$. Therefore, the payoff in \eref{eq4} cannot be arbitrarily apportioned among the coalition's players, and \eref{eq4} is an NTU payoff. Furthermore, the overall completion time is the maximum of the individual completion times of the players regardless of the coalition. In other words, $\phi(\mathcal{S}_s)$ depends not only on the packet recovery of the players inside $\mathcal{S}_s$, but also on the packet recovery outside $\mathcal{S}_s$, which shows that the proposed game is an NTU game.
\end{IEEEproof}
Although cooperation generally improves the payoffs of the players \cite{25}, it is limited by the inherent information exchange cost that needs to be paid by the players when acting cooperatively. Consequently, for any coalition $\mathcal{S}_s\subseteq\mathcal{U}$, players need to exchange information to cooperate, the cost of which is an increasing function of the coalition size. The problem becomes severe when all players are in the same coalition, i.e., the grand coalition (GC). However, given the realistic scenario of a partially connected network where each device has a limited coverage, it is highly likely that, when attempting to form the GC, one of these scenarios holds: 1) there exists a pair of players $u$, $u^\prime \in \mathcal{U}$ that are too distant to receive packets from the set $\mathcal{A}$, and thus have no incentive to join the grand coalition, or 2) there exists a player $u \in \mathcal{U}$ whose payoff in the GC $\phi_u(\mathcal{U}(t))$ is smaller than its payoff in some coalition $\phi_u(\mathcal{S}_s)$. Hence, this player has an incentive to deviate from the GC.
Since we consider partially connected D2D networks, players would most likely form coalitions with their neighbors based on their preferences, which results in coalitions of small size rather than large ones. In other words, the GC of all the players is \textit{seldom} formed. Therefore, the cost of forming small coalitions does not have a significant impact on the payoff functions. Subsequently, the proposed game ($\mathcal{U},\phi$) is classified as a coalition formation game (CFG) \cite{25aa}, where players form several independent disjoint coalitions. Hence, classical solution concepts for coalition games, such as the core \cite{25}, may not be applicable to our problem. In brief, the proposed coalition game ($\mathcal{U},\phi$) is a CFG, and the objective is to offer an algorithm for forming coalitions.
\subsection{Coalition Formation Concepts}
This section recalls the fundamental concepts of coalition formation games that are used in the next section.
CFGs, a subclass of coalition games, have been a topic of high interest in game theory research \cite{25aa, N1, N2}. The fundamental approach in coalition formation games is to allow players to join or leave a coalition based on a well-defined \textit{preference} relation; the most suitable one for NTU games is the \textit{Pareto order}. The Pareto order is the basis of many existing coalition formation concepts, e.g., the merge-and-split algorithm \cite{26}.
\begin{definition} A coalition structure, denoted by $\Psi$, is a partition $\Psi=\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\}$ of $\mathcal{U}$ into $k$ independent disjoint coalitions $\mathcal{S}_s$ with $1 <|\mathcal{S}_s|< |\mathcal{U}|$.
\end{definition}
One can see from the above definition that different coalition structures may lead to different system payoffs, as each coalition structure $\Psi$ has its unique payoff $\phi(\Psi)$. These different coalition structures $\Psi$ and their corresponding payoffs $\phi(\Psi)$ are usually ordered through a comparison relationship. In the coalition game literature, e.g., \cite{26}, such comparison orders are divided into individual value orders and coalition value orders. An individual order compares the structures based on the players' payoffs; this is referred to as the Pareto order. In particular, under such an order, no player is willing to move to another coalition if at least one of the players in that coalition would be worse off, i.e., if the payoff of some player would decrease after the new player joins. This is known as selfish behavior. A coalition order compares two coalition structures based on the payoffs of the coalitions in these structures; this is known as a utilitarian order and is denoted by $\triangleright$. In other words, the notation $\Psi_2\triangleright \Psi_1$ means that $\phi(\Psi_2)>\phi(\Psi_1)$. Subsequently, the preference operator considered in this paper is defined as follows.
\begin{definition} A preference operator $\triangleright$ is defined for comparing two coalition structures $\Psi_1=\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\}$ and $\Psi_2=\{\mathcal{R}_1, \ \cdots, \mathcal{R}_m\}$ that are partitions of the same set of players $\mathcal{U}$. The notation $\Psi_2 \triangleright \Psi_1$ denotes that the players in $\mathcal{U}$ prefer to be organized in $\Psi_2$ rather than in $\Psi_1$.
\end{definition}
\section{Proposed Fully Distributed Solution} \label{PS}
This section derives the constraints of forming a coalition. These constraints determine the optimal players' associations, the transmitting player, and its optimal IDNC packet in each coalition. Given these constraints, we propose a distributed coalition formation algorithm relying on the merge-and-split rules \cite{26}.
\subsection{Coalition Formation Constraints}
Let $\mathcal{U}_s$ be the set of all players associated with coalition $\mathcal{S}_s$ and $\mathcal{N}_s$ the subset of $\mathcal{U}_s$ with a non-empty \textit{Wants} set. Let $\mathcal{M}_s=\bigcup_{u \in \mathcal{U}_s}\mathcal{H}_{u}$ be the set of packets that are in the \textit{Has} set of at least one player in $\mathcal{U}_s$. Let $\mathtt{S}_s$ denote the set of all coalitions neighboring coalition $\mathcal{S}_s$. For a coalition $\mathcal{S}_s$, the transmitting device $a^*_{s}$ is the one that achieves the least expected increase in the completion time.
According to the analysis available in \cite{33m,33e}, a transmitting device $a^{*}_s$ and its packet combination $\kappa_{a_s^*}$ can be obtained by solving the following problem
\begin{align}\label{op7}
a_s^{*}=\operatorname*{arg\,max}\limits_{{\substack{\ a\in \mathcal{A}_s\setminus \mathcal{L}_s }}}|\mathcal{C}_{a}\cap\mathcal{N}_s|+\operatorname*{max}\limits_{\substack{\ \kappa_{a}\in \rm{\underline{\rm\kappa}(\mathcal{A}_s)}}} \sum\limits _{u\in\mathcal{L}_s\cap\tau(\kappa_{a})} \log\frac{1}{\sigma_{au}},
\end{align}
where $\mathcal{A}_s$ is the set of players in coalition $\mathcal{S}_s$ that are not in the coverage zone of any player of the coalitions in $\mathtt{S}_s$, and $\mathcal{L}_s(t)$ is the set of critical players that can potentially increase the overall completion time of the coalition $\mathcal{S}_s$ before the $t$-th transmission. This set characterizes the players based on their anticipated completion times so as to give them priority to be targeted in the next transmission. In other words, $\mathcal{L}_s(t)$ contains the players that would potentially increase the maximum anticipated completion time if they were not targeted in the next transmission. It can be defined mathematically as
\begin{equation}
\begin{split}
\label{CSG}
\mathcal {L}_s(t)= \Bigl\{u\in \mathcal{N}_s \ \big|\ \mathcal{T}_u(\underline{a}_{t-1})+\frac{1}{1-\mathbb{E}[\sigma_u]}
\geqslant \|\underline{\mathcal{T}}(\underline{a}_{t-1})\|_\infty\Bigr\}.
\end{split}
\end{equation} The set of targeted players in coalition $\mathcal{S}_s$ when device $a_s^{*}$ transmits the combination $\kappa_{a_s^*}$ is
\begin{align}
\tau(\kappa_{a_s^{*}})=\left\{u \in \mathcal{S}_s \ \big||\kappa_{a_s^{*}} \cap \mathcal{W}_u| = 1~~\text{and}~~ \mathcal{C}_{a_s^* u} =1 \right\}.
\end{align}
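Both sets introduced above admit direct set-builder implementations. The sketch below (our own naming, hypothetical toy data) computes the critical set $\mathcal{L}_s(t)$ and the targeted set $\tau(\kappa_{a_s^*})$.

```python
def critical_set(T_prev, eps, players):
    """Players whose anticipated completion time is within one expected
    transmission of the current maximum (the set L_s(t) above).
    T_prev: dict player -> anticipated completion time after slot t-1;
    eps: dict player -> expected erasure probability E[sigma_u]."""
    T_max = max(T_prev[u] for u in players)
    return {u for u in players
            if T_prev[u] + 1.0 / (1.0 - eps[u]) >= T_max}

def targeted_set(kappa, wants, coverage):
    """Players in the transmitter's coverage zone whose Wants set meets
    the combination kappa in exactly one packet."""
    return {u for u in wants
            if u in coverage and len(kappa & wants[u]) == 1}
```
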
With the aforementioned variable definitions, we can reformulate the completion time minimization problem per coalition at each time instant in an IDNC-based partially connected D2D network as follows
\begin{subequations}\label{CF}
\begin{align}
&\min_ {{\substack{\ \underline{a}_t\in\{0,1\}^{|\mathcal{U}_s|} \\\underline{\kappa}\in\{0,1\}^{|\mathcal{M}_s|} }}}\phi(\mathcal{S}_s)\label{eq7aa} \\
& {\rm s. ~t.\ } |\tau(\kappa_{a_s^{*}})| \geqslant 1,\label{eq7bb} \\
&\tau(\kappa_{a_s^{*}}) \cap \tau(\kappa_{a_{s^\prime}^{*}}) =\varnothing, \forall~ a^*_s \neq a^*_{s^\prime} \in \mathtt{S}_s.\label{eq7dd}
\end{align}
\end{subequations}
Constraint \eref{eq7bb} states that at least one player must be targeted in each coalition, which ensures that at least one player benefits from each transmission. Constraint \eref{eq7dd} states that the targeted players should not experience any collision.
To find the optimal solution to the problem in \eref{CF}, we need to search over all the sets of optimal player-coalition associations, their different erasure patterns, the players' actions, and their optimal IDNC packets in each coalition. As pointed out in \cite{30m} for a centralized fog system, this is a challenging problem. Further, the solution to \eref{CF} must account for the players' decisions to join/leave a coalition at each stage of the game. To seek a desirable solution to \eref{CF} that is capable of achieving a significant completion time reduction, we propose a distributed algorithm relying on the merge-and-split rules.
\subsection{A Distributed Coalition Formation Algorithm} \label{DC}
This section presents a distributed coalition formation algorithm to minimize the completion time of the players. The key mechanism is to allow players, during the coalition formation process, to make individual decisions for selecting potential neighbor coalitions at any stage of the game. We first define the two merge-and-split rules that allow the modification of the coalition structure $\Psi$ of the set of players $\mathcal{U}$ as follows.
\begin{definition} (\textbf{Merge Operation}). Any set of coalitions $\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\}$ in $\Psi_1$ can be merged if and only if $(\bigcup^{k}\limits_{i = 1}\mathcal{S}_{i},\Psi_2) \triangleright (\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\},\Psi_1)$, where $\bigcup^{k}\limits_{i = 1}\mathcal{S}_{i}$ and $\Psi_2$ are the new set of coalitions and the new coalition structure after the merge operation, respectively.
\end{definition}
\begin{definition} (\textbf{Split Operation}). Any set of coalitions $\bigcup^{k}\limits_{i = 1}\mathcal{S}_{i}$ in $\Psi_1$ can be split if and only if $(\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\},\Psi_2) \triangleright (\bigcup^{k}\limits_{i = 1}\mathcal{S}_{i},\Psi_1)$, where $\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\}$ and $\Psi_2$ are the new set of coalitions and the new coalition structure after the split operation, respectively.
\end{definition}
The merge rule means that coalitions merge if their merger benefits not only the players in the merged coalition but also the overall coalition structure value, i.e., the overall completion time. On the other hand, a coalition splits into smaller ones if the resulting coalitions enhance the payoff of at least one player in that coalition. Using these two rules, we present a distributed algorithm to solve the completion time minimization problem in \eref{eqct}. The proposed algorithm consists of the following three steps.
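The Pareto-order test behind both rules can be sketched as a simple vector comparison: a proposed structure is preferred when no affected player is worse off and at least one is strictly better off (a hypothetical helper, not the paper's algorithm).

```python
def pareto_preferred(new_payoffs, old_payoffs):
    """True when the move from old_payoffs to new_payoffs is a Pareto
    improvement for the affected players (element-wise comparison)."""
    assert len(new_payoffs) == len(old_payoffs)
    return (all(n >= o for n, o in zip(new_payoffs, old_payoffs))
            and any(n > o for n, o in zip(new_payoffs, old_payoffs)))
```
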
First, in $\Psi_\text{ini}$, players discover their neighbors by utilizing one of the different known neighbor discovery schemes, e.g., those used in wireless networks \cite{44aa}. For example, each player broadcasts a message consisting of two one-byte segments: the first byte indicates the number of players in the player's coverage zone, and the second byte indicates the completion time of that player. The players then collect all the aforementioned information, and the player that is connected to a large number of players, has a large \textit{Has} set, and is not in the coverage zone of any player of any other coalition is selected as the transmitting player. If such a player does not exist, the size of the coalition is increased until it does. To summarize, a transmitting player $a^*_{s}$ in coalition $s$ should satisfy \eref{eq7bb} and \eref{eq7dd} and can be obtained by solving problem \eref{op7}.
Afterward, each player evaluates its potential payoff as in \eref{eq5} to make an accurate decision, as explained in Step II. The selected transmitting player in each coalition is referred to as the \textit{coalition head} and carries out the analysis in Step II. Therefore, this step significantly reduces the search space of the coalition formation.
The coalition formation step optimizes the selection of the transmitting players and their IDNC packets through successive split-and-merge operations between coalitions. Therefore, Step II assigns players to potential neighbor coalitions, selects the transmitting player, and finds its optimal IDNC packet, which is accomplished as follows. In this step, the time index
is updated to $\tau=\tau+1$. The merge rules are implemented by checking the merging possibilities of each pair of neighbor coalitions $s$ and $k$. Particularly, a coalition $s\in \Psi_\tau$ can decide to merge with another coalition $k$ to form a new coalition $j$ as long as the resulting structure guarantees both merge conditions (MC).
\begin{itemize}
\item MC1: There exists at least one player that satisfies \eref{eq7bb} and \eref{eq7dd}.
\item MC2: At least one player in the merged coalition can strictly improve its individual payoff without decreasing the payoffs of the other players.
\end{itemize}
After all the coalitions have made
their merge decisions based on the players preferences, the merge rules end. This results in the updated
coalition structure $\Psi_\tau$.
Similarly, the split rules are performed on the players that do not benefit from being members of their coalition. In other terms, a coalition $s\in \Psi_\tau$ can be split into coalitions of smaller sizes as long as the resulting coalitions satisfy both split conditions (SC).
\begin{itemize}
\item SC1: At least one player can strictly improve its payoff without decreasing the payoffs of all the remaining players.
\item SC2: In each split coalition, there exists at least one player satisfying \eref{eq7bb} and \eref{eq7dd}.
\end{itemize}
At the end of the split rules, the coalition structure $\Psi_\tau$ is updated. The time index is updated along with a sequence of merge-and-split rules, which take place in a distributed manner. The sequence continues based on the resulting payoff of each player and coalition, and it ends when no further merge-and-split rules are required in the current coalition structure $\Psi_\tau$, which then becomes the final coalition structure $\Psi_\text{fin}$.
Finally, each transmitting player in each coalition broadcasts an IDNC packet to all players in its coverage zone. The distributed merge-and-split coalition formation algorithm is summarized in Algorithm \ref{Alg1}.
We repeat the above three steps until all packets are disseminated among players, as explained in \algref{Alg2}.
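The Pareto-order test that underlies both the merge conditions (MC) and the split conditions (SC) can be sketched in a few lines. This is a minimal sketch: the player names and payoff values are hypothetical, and it assumes, consistently with the numerical payoff examples later in the paper, that payoffs are negative anticipated completion times, so larger (less negative) values are better.

```python
def pareto_improves(new_payoffs, old_payoffs):
    """Pareto order: at least one player is strictly better off and
    no player is worse off (payoffs are indexed by player id)."""
    assert new_payoffs.keys() == old_payoffs.keys()
    no_worse = all(new_payoffs[u] >= old_payoffs[u] for u in old_payoffs)
    some_better = any(new_payoffs[u] > old_payoffs[u] for u in old_payoffs)
    return no_worse and some_better

# Hypothetical payoffs (negative anticipated completion times) of the
# affected players before and after merging two neighbor coalitions.
before = {"u1": -3, "u2": -3, "u3": -2}
after  = {"u1": -2, "u2": -3, "u3": -2}  # u1 strictly improves, nobody worse

assert pareto_improves(after, before)      # the merge is accepted
assert not pareto_improves(before, after)  # undoing it would not be Pareto
```

A split is evaluated in the same way, comparing the payoffs under the splinter coalitions against those under the current structure.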
\begin{algorithm}[t!]
\caption{Coalition Formation Distributed Algorithm for a D2D Multi-hop Network}
\label{Alg1}
\textbf{Initialization:}\;
Players organize themselves into an initial coalition structure $\Psi_\text{ini}=\{\mathcal{S}_1, \ \cdots, \mathcal{S}_k\}$;\;
Initialize time-index $\tau=0$ and $\Psi_\tau=\Psi_\text{ini}$;\;
\textbf{Step I: Coalition Members Discovery};\;
\begin{itemize}
\item Each player discovers its neighboring players.\;
\For { \text{each} $\mathcal{S}_s\in \Psi_{\text{ini}}, \forall s=\{1,2,\ \cdots, \, k \}$}{
Select the transmitting players $\mathcal{A}_s$ that satisfy \eref{eq7bb} and \eref{eq7dd} and find $a_s^{*}$ and its IDNC packet $\kappa_{a_s^{*}}$ by solving \eref{op7}.\;
Calculate the utility of each player as in \eref{eq5}.\;}
\end{itemize}
\textbf{Step II: Coalition Formation};\;
\begin{itemize}
\item The optimization target in coalition $\mathcal{S}_s$ is $\min\limits_{{\substack{\ \underline{a}_t\in\{0,1\}^{|\mathcal{U}_s|} \\\underline{\kappa}\in\{0,1\}^{|\mathcal{M}_s|} }}}\phi(\mathcal{S}_s)$.\;
\item Obtain player's assignments based on the two main rules of merge and split:\;
\Repeat{No further merge nor split rules}{ Update $\tau=\tau+1$.\;
\For { \text{each} $\mathcal{S}_s\in \Psi_{\tau-1}, \forall s=\{1,2,\ \cdots, \, k \}$} {%
The selected transmitting player analyzes all possible merge rules.\;
If a merge occurs, the current coalition structure $\Psi_{\tau-1}$
is updated.\;
Update $\mathcal{A}_s$ and update the selected transmitting player by solving \eref{op7}.\;
Set $\Psi_{\tau}=\Psi_{\tau-1}$.
}
\For { \text{each} $\mathcal{S}_s\in \Psi_{\tau}, \forall s=\{1,2,\ \cdots, \, k \}$} {%
The selected transmitting player analyzes all possible split rules.\;
If a split occurs, the current coalition structure $\Psi_{\tau}$
is updated.\;
Update $\mathcal{A}_s$ and update the selected transmitting player by solving \eref{op7}.\;
}
}
\end{itemize}
\textbf{Output} The converged coalition structure $\Psi_\text{fin}=\Psi_{\tau}$.\;
\textbf{Step III: IDNC Packet Transmission};\;
\begin{itemize}
\item Each transmitting player $a_s^{*}$ in each coalition broadcasts IDNC packet $\kappa_{a_s^{*}}$ to all players in its coverage zone.
\end{itemize}
\end{algorithm}
\begin{algorithm}[t!]
\caption{Overall D2D Multi-hop Approach for Solving Problem \eref{eqct}}
\label{Alg2}
\textbf{Data:} $\mathcal{U}$, $\mathcal{P}$, $\mathcal{H}_u$, $\mathcal{W}_u$, $\mathcal{C}_u$, $\mathcal{T}_u=0$, $\mathcal{D}_u=0$, $\forall~ u\in \mathcal{U}$ and {\boldmath$\epsilon$}.\;
Set time-index of the completion time $t=0$;\\
\textbf{Repeat:}
\begin{itemize}
\item Execute Algorithm \ref{Alg1} and obtain the IDNC packet for each transmitting player in $\Psi_{\text{fin}}$;
\item Each targeted player performs an XOR binary operation and calculates the anticipated completion time as in \eref{eq85}.
\item Each targeted player broadcasts a one bit ACK, indicating the successful reception of the packet, to all players in its coverage zone.
\item $t=t+1$;
\end{itemize}
\textbf{Until} all packets are disseminated among players.\;
\textbf{Output} the completion time $t$.
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{SNAP1.pdf}
\caption{A resulting coalition structure $\Psi_\text{fin}=\{\mathcal{S}_1, \mathcal{S}_2\}$ from \algref{Alg1} for a partially connected D2D network that is presented in Fig. \ref{fig1}. }
\label{fig2}
\end{figure}
Fig. \ref{fig2} depicts a snapshot of the coalition structure
$\Psi_\text{fin}=\{\mathcal{S}_1, \mathcal{S}_2\}$ resulting from \algref{Alg1} for the simple D2D network presented in Fig. \ref{fig1}. For ease of analysis, we assume error-free transmissions. Given the coverage zone of each player and their side information as in Fig. \ref{fig1}, two disjoint coalitions are formed, where only one player transmits in each coalition. In particular, in coalition $\mathcal{S}_1$, player $u_4$ transmits packet $p_1$ to player $u_6$, and in coalition $\mathcal{S}_2$, player $u_{1}$ transmits an IDNC packet $p_3\oplus p_4$ to players $u_2$, $u_3$, and $u_5$. The transmitting player in each coalition is shown in a red circle; their targeted players and the optimal IDNC packets are shown in Fig. \ref{fig2}. In a nutshell, we offer some remarks on executing \algref{Alg1}.
\begin{itemize}
\item The merge-and-split rules enumerate only the neighbor coalitions, and this does not necessarily need significant computations. To further reduce the computations, the players of a coalition $\mathcal{S}_s$ can avoid merging with other neighbor coalition $\mathcal{S}_k$ if the payoffs of the players in both coalitions are equal $\phi_u(\mathcal{S}_s)=\phi_{u'}(\mathcal{S}_k)$, $\forall u\in \mathcal{S}_s$ and $\forall u'\in \mathcal{S}_k$.
\item Forming coalitions only once, i.e., at the first stage of the game, is not guaranteed to disseminate all packets to all players. This is because each formed coalition holds only a portion of the packets and does not have the wanted packets of players in other coalitions. For complete packet recovery, each coalition is therefore re-formed, at each transmission round, based on the individual preferences of its members and irrespective of the \textit{Has} sets of its members. Thus, each transmitting player disseminates some packets to each visited coalition over successive transmissions.
\end{itemize}
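As a quick illustration of the instant decodability exploited in this snapshot (player $u_1$ broadcasting $p_3\oplus p_4$), a targeted player that already holds one of the two packets recovers the other with a single XOR. The packet contents below are hypothetical byte strings.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical contents of packets p3 and p4.
p3, p4 = b"\x10\x20\x30", b"\x0a\x0b\x0c"
coded = xor_bytes(p3, p4)  # IDNC packet p3 XOR p4 broadcast by u1

# A player that Has p3 and Wants p4 decodes instantly, and vice versa:
assert xor_bytes(coded, p3) == p4
assert xor_bytes(coded, p4) == p3
```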
In the considered game, each player has two actions to take either to transmit an IDNC packet $\kappa$ or to listen to a transmission. Therefore, the action of a player $u$ at each game stage $t$ is $\mathcal{AC}_u(t)=\{\text{transmit} ~\kappa_u,~ \text{remain silent}\}$. The asymmetry of the side information at each player generates a different packet combination to be sent by each player at each transmission round. This causes the asymmetry of the action space
of each player. Also, in each transmission, different players are associated with each coalition. All these make the payoff of each coalition unique.
\section{Convergence Analysis, Complexity, and Communication Overhead} \label{PA}
This section first studies the convergence of the coalition formation algorithm and its Nash equilibrium stability. Afterward, the complexity properties of \algref{Alg1} are analyzed, showing that \algref{Alg1} needs only a low signaling overhead.
\subsection{Convergence and Nash Equilibrium}
In coalition formation games, the stability of the coalition structures
corresponds to an equilibrium state known as the Nash equilibrium. This subsection proves that the coalition formation algorithm is guaranteed to converge and that it reaches a Nash-stable coalition structure.
The following theorem demonstrates that \algref{Alg1} terminates in a finite number of iterations.
\begin{theorem} \label{th:1w}
Given any initial coalition structure $\Psi_{\text{ini}}$, the coalition formation step of \algref{Alg1}
maps to a sequence of merge-and-split rules which
converges, in a finite number of iterations, to a final coalition structure $\Psi_{\text{fin}}$ composed of a number of disjoint coalitions.
\end{theorem}
\begin{proof}
To prove this theorem, we need to show that every merge or split rule yields a new coalition structure in the coalition formation step of \algref{Alg1}. Starting from any initial coalition structure $\Psi_{\text{ini}}$, the coalition formation step of \algref{Alg1} can be mapped to a sequence of merge/split rules. By Definitions $8$ and $9$, every merge or split rule transforms the current
coalition structure into another coalition structure; hence, we obtain the following sequence of coalition structures
\begin{align}
\Psi_{\text{ini}}\rightarrow \Psi_{1} \rightarrow \Psi_{2} \rightarrow \ \cdots \ \rightarrow \Psi_{\text{fin}}
\label{eq88}
\end{align}
where $\Psi_{i+1} \triangleright \Psi_{i}$, and $\rightarrow$ indicates the occurrence of a merge-and-split rule. Since the Pareto order introduced in Definition $6$ is irreflexive, transitive, and monotonic, a coalition structure cannot be revisited. Given that the number of merge and split rules of a finite set is \textit{finite}, and given the mapping between merge/split operations and coalition structures, the number of coalition structures in the sequence \eref{eq88} is finite. Therefore, the sequence in \eref{eq88} always terminates and converges to a final coalition structure
$ \Psi_{\text{fin}}$.
\end{proof}
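The finiteness claim in the proof ultimately rests on the fact that a finite player set admits only finitely many coalition structures (set partitions); their count is the Bell number. A small illustrative sketch using the standard recurrence $B_{n+1}=\sum_{k=0}^{n}\binom{n}{k}B_k$:

```python
from math import comb

def bell(n):
    """Number of coalition structures (set partitions) of n players."""
    B = [1]  # B_0 = 1
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

# Even 6 players admit only 203 distinct coalition structures, so a
# non-revisiting sequence of merge-and-split rules must terminate.
assert [bell(i) for i in range(7)] == [1, 1, 2, 5, 15, 52, 203]
```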
\begin{definition} A coalition structure $\Psi=\{\mathcal{S}_1,\ \cdots \ ,\mathcal{S}_k \}$ is Nash-stable if players have no incentive to leave $\Psi$ through merge-and-split operations.
\end{definition}
This definition implies that a coalition structure
$\Psi$ is Nash-stable if and only if no player has an incentive to leave its current coalition and join another coalition, or to make an individual decision by performing any merge/split rule. Further, the coalitions in the final coalition structure $\Psi_\text{fin}$ have no incentive to perform more merge and split operations. A Nash-stable coalition structure is also an individually stable coalition structure; in general, in a coalition formation game, Nash-stability is a subset of individual stability \cite{44}. Specifically, no player leaves its current coalition through a split rule to form a singleton coalition if the following property holds.
\begin{property} \label{pr1}
There exists at least one coalition structure $\Psi$ that satisfies both Nash-stability
and individual stability if and only if $|\mathcal{S}_s|>1, \ \forall\mathcal{S}_s\in \Psi$.
\end{property}
\begin{IEEEproof}
This property states that a singleton coalition cannot form. Indeed, since a player cannot send an encoded packet to itself, it expects a better payoff from being a member of some coalition. Further, since the payoff of a non-targeted player is the same in any coalition as in a single-player coalition, our proposed algorithm, as mentioned in the previous section, avoids making any merge-and-split rules for equal payoff values. Thus, according to \algref{Alg1}, a Nash-stable and individually stable coalition structure is obtained.
\end{IEEEproof}
As a consequence of Property 2, the final coalition structure $\Psi_\text{fin}$ that results from \algref{Alg1} is $\mathbb{D}_\text{hp}$ stable, as the coalitions have no incentive to perform further merge-and-split operations; $\mathbb{D}_\text{hp}$ stability is also known as merge-and-split proofness \cite{44}. Furthermore, $\Psi_\text{fin}$ can be considered $\mathbb{D}_\text{c}$ stable, because players have no incentive to leave $\Psi_\text{fin}$ and form any other coalitions \cite{26}.
To illustrate the above concepts, consider the resulting coalition structure $\Psi_\text{fin}=\{\mathcal{S}_1, \mathcal{S}_2\}$ shown in Fig. \ref{fig2}. The coalition structure $\Psi_\text{fin}$ is Nash-stable as no player has an incentive to leave its current coalition. For example, player $u_5$
has a payoff of $\phi_5(\mathcal{S}_2)=-2$ when being part of the coalition $\mathcal{S}_2 = \{u_1, u_2, u_3, u_5\}$. The payoff $\phi_5(\mathcal{S}_2)$ is calculated as follows. Since player $u_5$ receives an IDNC encoded packet from player $u_1$, it does not experience any decoding delay increase. Thus, by \eref{eq85}, its anticipated completion time is $\mathcal{T}_5(\underline{\rm a}_t) = \frac{|\mathcal{W}^{(0)}_5|+ \mathbb{D}_5(\underline{\rm a}_t)-\mathbb{E}[\sigma_5]}{1-\mathbb{E}[\sigma_5]}=2$, and, by \eref{eq5}, its payoff is $-2$. If player $u_5$ switches to act non-cooperatively and joins $\mathcal{S}_1$, player $u_6$ would be the new transmitting player in $\mathcal{S}_1$. In this case, player $u_5$ would be in the coverage zone of both transmitting players $u_1$ in $\mathcal{S}_2$ and $u_6$ in $\mathcal{S}_1$. Consequently, the payoff of player $u_5$ decreases to $\phi_5(\mathcal{S}_1)=-3$, and the payoff of player $u_6$ decreases from $\phi_6(\mathcal{S}_1)=-3$ to
$\phi_6(\mathcal{S}_1)=-4$. Thus, player $u_5$ does not deviate from its current coalition $\mathcal{S}_2$ to join $\mathcal{S}_1$. Similarly, if players $u_2$ and $u_3$ act non-cooperatively by leaving $\mathcal{S}_2$ and each forming a singleton coalition, i.e., $\mathcal{S}_3$ and $\mathcal{S}_4$, their payoffs decrease from $\phi_2(\mathcal{S}_2)=-2$ and $\phi_3(\mathcal{S}_2)=-1$ to $\phi_2(\mathcal{S}_3)=-3 $ and $\phi_3(\mathcal{S}_4)=-2$, respectively. Clearly, $\Psi_\text{fin}$ is individually Nash-stable as it does not contain any singleton coalition. Further, it is both $\mathbb{D}_\text{hp}$ and $\mathbb{D}_\text{c}$ stable, as no further merge-and-split operations can be performed by the coalitions and no player has an incentive to deviate from $\Psi_\text{fin}$, respectively.
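The payoff arithmetic of this example can be reproduced directly from \eref{eq85} and \eref{eq5}. The sketch below assumes, as in the error-free example above, that $\mathbb{E}[\sigma_5]=0$, that $|\mathcal{W}^{(0)}_5|=2$, that there is no decoding-delay increase, and that the payoff in \eref{eq5} is the negative anticipated completion time; these readings match the numbers quoted above.

```python
def anticipated_completion_time(wants0, delay, sigma):
    """Eq. (eq85): T_u = (|W_u^(0)| + D_u - E[sigma_u]) / (1 - E[sigma_u])."""
    return (wants0 + delay - sigma) / (1 - sigma)

def payoff(wants0, delay, sigma):
    """Assumed form of Eq. (eq5): phi_u = -T_u."""
    return -anticipated_completion_time(wants0, delay, sigma)

# Player u5: two initially wanted packets, no decoding-delay increase,
# and error-free links (sigma = 0).
assert anticipated_completion_time(2, 0, 0.0) == 2.0
assert payoff(2, 0, 0.0) == -2.0  # matches phi_5(S_2) = -2
```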
\subsection{Complexity Analysis and Communication Overhead}
This section analyzes the computational complexity and communication burden of \algref{Alg1}.\\
\textit{Computational Complexity:} At any game stage, each player needs to find the optimal IDNC packet combination, which depends on the packets that it possesses. Further, since the game has incomplete information, i.e., each player knows only the side information of the players in its coverage zone, every player can generate the IDNC packet combinations of all other players in its coverage zone. This allows every player to calculate the payoff function \eref{eq5} of all other players in its coverage zone.
The complexity of generating an optimal IDNC packet using a maximum weight search method is explained as follows. First, the BS generates $O(NM)$ vertices and connects them by edges that represent the network coding conditions, which takes $O(N^2M)$. Then, the BS executes the maximum weight search method, which computes the weights of $O(NM)$ vertices and selects at most $N$ users. Hence, the overall complexity of finding the optimal IDNC packet is $O(NM)+O(N^2M)+O(NM\cdot N)=O(N^2M)$ \cite{12m}. In our case, the complexity is bounded by $O(N^2M)$ since the number of players in the coverage zone of each player is less than the total number of players.
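To make the vertex and edge counts concrete, the sketch below builds a small IDNC graph and runs a greedy maximal selection with unit vertex weights. It assumes the standard IDNC adjacency condition (two vertices are connected if they carry the same packet, or if each player already has the other's packet); the side information is hypothetical, and the restriction to packets held by the transmitter is omitted for brevity.

```python
from itertools import combinations

def idnc_graph(has, wants):
    """Vertices (u, p) for every packet p wanted by player u: O(N*M) of
    them. Each pair is tested against the coding conditions: O(N^2*M)."""
    V = [(u, p) for u in wants for p in wants[u]]
    E = set()
    for (u1, p1), (u2, p2) in combinations(V, 2):
        if u1 == u2:
            continue
        if p1 == p2 or (p1 in has[u2] and p2 in has[u1]):
            E.add(frozenset([(u1, p1), (u2, p2)]))
    return V, E

def greedy_clique(V, E):
    """Greedy maximal clique; each selected vertex is one served player."""
    clique = []
    for v in V:
        if all(frozenset([v, w]) in E for w in clique):
            clique.append(v)
    return clique

# Hypothetical side information within one coverage zone.
has = {"u2": {"p3"}, "u3": {"p4"}, "u5": {"p3"}}
wants = {"u2": {"p4"}, "u3": {"p3"}, "u5": {"p4"}}
served = greedy_clique(*idnc_graph(has, wants))
# Broadcasting p3 XOR p4 instantly serves all three players here.
assert {u for u, _ in served} == {"u2", "u3", "u5"}
```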
\textit{Communication Overhead:} The communication overhead of \algref{Alg1} stems from performing the members' discovery step, the coalition head selection, and the analysis of the merge-and-split rules, which is associated with the total number of coalition formations.
First, similar to many algorithms in the literature, e.g., \cite{44aa}, the member discovery step needs $N$ 2-byte messages, each of which is sent to all neighbor players, whose number is denoted by $\mathbf{U}$. Thus, the total communication overhead for discovering the neighbor players is $2N\mathbf{U}$ bytes.
Second, the coalition head selection can be performed with many different strategies, e.g., based on players' attributes \cite{50m}, \cite{51m}. In \algref{Alg1}, the players in each coalition initially select their coalition head by exchanging an advertisement message among them, and the one that satisfies \eref{eq7bb} and \eref{eq7dd} is chosen. The same process is applied for selecting/updating the coalition head in step III. As the player connected to the most players in the coalition, the coalition head is responsible for ensuring that the rest of the coalition's members receive an acknowledgment (ACK). As such, they can update their side information after each D2D transmission.
Third, the communication overhead of the coalition formation step is based on the number of merge-and-split rules, which is mainly related to the total number of decisions made by each of the $N$ players. As previously mentioned, the merge-and-split operations enumerate only the neighbor coalitions $\mathtt{S}_s$. Thus, two extreme cases can occur.
\begin{itemize}
\item If all players decide to leave their current coalitions and join other coalitions, each player $u$ in coalition $\mathcal{S}_s$ makes $|\mathtt{S}_s|$ decisions (player $u$ has $|\mathtt{S}_s|$ possibilities to join any of the neighbor coalitions). Consequently, the total number of players' decisions is $\mathcal{Q}_{\text{worst}}=N|\mathtt{S}_s|$, and the overhead complexity is of the order $O(N|\mathtt{S}_s|)$.
\item If no player makes any decision, the overhead is only $\mathcal{Q}_{\text{best}}=N$ (due to the initial player-coalition associations as in step I), with a complexity order of $O(N)$.
\end{itemize}
In practice, the number of players' decisions lies between the above two cases, i.e., $\mathcal{Q}_{\text{best}}\leq\mathcal{Q}\leq \mathcal{Q}_{\text{worst}}$. Hence, if players make $|\mathcal{L}|$ decisions on average, then $\mathcal{Q}=N|\mathcal{L}|$ decisions that perform split-and-merge rules are needed until \algref{Alg1} converges.
Therefore, combining all the signaling overhead components, the total overhead is $N(2\mathbf{U}+|\mathcal{L}|)$. Such signaling adds only a few bytes, which are negligible compared to the entire packet's size. Furthermore, to update the \textit{Has} and \textit{Wants} sets of the players, only the indices of the packets need to be communicated between the players, not their contents. Hence, we ignore the signaling overhead factor because it is constant (independent of the completion time and decoding delay) and its size is negligible.
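The overhead accounting reduces to simple arithmetic. The sketch below treats each formation decision as a unit-size message, an assumption consistent with the $N(2\mathbf{U}+|\mathcal{L}|)$ expression above, and the network sizes are hypothetical.

```python
def total_overhead_bytes(N, U, L):
    """Discovery: N players each send a 2-byte message to U neighbors,
    i.e. 2*N*U bytes.  Formation: N*L unit-size decision messages
    (assumed one byte each).  Total: N * (2U + L) bytes."""
    return N * (2 * U + L)

# Hypothetical: 60 players, 10 neighbors each, 3 decisions per player
# on average.
assert total_overhead_bytes(60, 10, 3) == 60 * (2 * 10 + 3)  # 1380 bytes
```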
\section{Numerical Results} \label{SR}
In this section, we evaluate the performance of our proposed coalition formation game (denoted by CFG partially-connected D2D) to demonstrate its capability of reducing the completion time compared to the baseline schemes. We first introduce the simulation setup and the comparison schemes. Then, the completion time and game performances are investigated, respectively.
\subsection{Simulation Setup}
We consider an IDNC-enabled partially connected D2D network in which players are uniformly re-positioned
at each iteration in a $500$m$\times500$m cell with connectivity index
$\mathbf{C}$, defined as the ratio of the average number of neighbors to the total number of players $N$. A simple partially connected D2D network setting is plotted
in Fig. \ref{figsm} for the example presented in Fig. \ref{fig1}. The system setting in this paper follows the setup studied in \cite{29m},\cite{30m}. The initial side information $\mathcal{H}_u$ and $\mathcal{W}_u$, $\forall u\in \mathcal{U}$, of the players is independently drawn based on their average erasure probabilities. Short-range communications are more reliable than BS-player communications \cite{26m}, \cite{28m}. Hence, unless otherwise specified, we assume in all simulations that the player-to-player erasure probability $\sigma$ is half the BS-to-player erasure probability $\epsilon$, i.e., $\sigma=0.5\epsilon$. Our simulations were implemented using Matlab on a Windows $10$ laptop with a $2.5$ GHz Intel Core i7 processor and $8$ GB of $1600$ MHz DDR3 RAM. For the sake of comparison, we implement the following schemes.
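For reproducibility, the connectivity index $\mathbf{C}$ can be estimated for a uniform placement as described above. The coverage radius in this sketch is a hypothetical parameter, not a value from our simulations.

```python
import math
import random

def connectivity_index(N, side=500.0, radius=150.0, seed=0):
    """Drop N players uniformly in a side x side cell and return the
    ratio of the average neighbor count (within `radius`) to N."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(N)]
    def neighbors(i):
        return sum(1 for j in range(N)
                   if j != i and math.dist(pts[i], pts[j]) <= radius)
    avg = sum(neighbors(i) for i in range(N)) / N
    return avg / N

C = connectivity_index(60)
assert 0.0 <= C < 1.0  # a valid connectivity index
```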
\begin{itemize}
\item The fully-connected D2D system in which a single user who has the largest number of received packets transmits an IDNC packet at each round.
\item The PMP system in which the BS is responsible for the transmissions. The BS holds all the requested packets and can serve all the users. This scheme was proposed in \cite{16m}.
\item The one coalition formation game in a partially connected D2D (denoted by OCF partially-connected D2D). In this scheme, only one coalition is formed, and a single player transmits an IDNC packet at each round. The transmitting player is selected based on its number of received packets as well as on the maximum number of players in its coverage zone.
\item The partially D2D in FRANs (denoted by FRAN partially-connected D2D). In this scheme, a fog central unit is responsible for determining the set of transmitting users and the packet combinations. This scheme was proposed in \cite{30m}.
\end{itemize}
\begin{figure}[t!]
\centering
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{NT.pdf}
\caption{A partially connected D2D network of the example presented in Fig. \ref{fig1}.}
\label{figsm}
\end{minipage}\hfill
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{fign04.pdf}
\caption{Average completion time as a function of the number
of players $N$.}
\label{fig4}
\end{minipage}
\centering
\centering
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figm04.pdf}
\caption{Average completion time as a function of the number of packets $M$.}
\label{fig5}
\end{minipage}\hfill
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figp04.pdf}
\caption{Average completion time as a function of the average player-player erasure probability $\sigma$.}
\label{fig6}
\end{minipage}
\end{figure}
\subsection{Completion Time Performance Evaluation}
To study the completion time performance of the proposed solution, we vary the number of players, the number of packets, the connectivity index, and the packet erasure probability.
In Fig. \ref{fig4}, we depict the average completion time as a function of the number of players $N$ for a network composed of $M = 30$ packets, $\epsilon=0.25$, $\sigma=0.12$, and connectivity index $C=0.4$. It is observed from Fig. \ref{fig4} that the proposed CFG partially-connected D2D algorithm outperforms the PMP, fully-connected D2D, and OCF partially-connected D2D schemes for all simulated numbers of players. This is due to the simultaneous IDNC packet transmissions from cooperating players. In particular, the fully-connected D2D system only considers the size of the \textit{Has} set as a metric to select a single player for transmission at each round, i.e., $a^*=\operatorname*{arg\,max}\limits_{a\in \mathcal{U}} |\mathcal{H}_a|$. The OCF partially-connected D2D scheme focuses on the maximum number of connected players that can be formed into a coalition as well as on the size of the \textit{Has} set of the transmitting player. On the other hand, although the transmitter in the PMP scheme can encode all the IDNC combinations and target a certain number of players, the PMP scheme sacrifices the benefit of simultaneous transmissions by considering only one transmission. Our proposed algorithm strikes a balance between these
aspects by jointly considering the number of targeted players and the \textit{Has} set size of each transmitting player. Despite the gain achieved by the FRAN partially-connected D2D solution, in which a fog unit executes the whole process, our decentralized solution reaches the same performance. Clearly, owing to the philosophy of simultaneous D2D transmissions that both schemes adopt, their performances are roughly the same.
We observe from Fig. \ref{fig4} that, for a small number of players, the PMP system is close to both the CFG partially-connected D2D and FRAN partially-connected D2D schemes. This is because, for a small number of players ($N\leq 60$), the likelihood that the whole frame $M$ is distributed among the players in the initial transmissions is low, thus decreasing the probability of exchanging potential IDNC packets between players. This makes the overall completion time performance of the partial D2D scenarios close to that of the PMP scheme. As the number of players increases ($N\geq 80$), it becomes more likely that the union of their \textit{Has} sets equals $M$. This results in more potential D2D IDNC packet exchanges, thus widening the gap between the PMP performance and both the FRAN partially-connected D2D and proposed schemes.
In Fig. \ref{fig5}, we illustrate the average completion time as a function of the number of packets $M$ for a network composed of $N= 30$ players, $\epsilon=0.25$, $\sigma=0.12$, and connectivity index $C=0.4$. The figure shows that the proposed scheme outperforms the fully connected, one coalition game, and PMP schemes. For a few packets, the IDNC combinations are limited, which restricts the ability of the proposed scheme to generate coded packets that satisfy a large number of players. As the number of packets grows, the number of transmissions needed for completion under the aforementioned schemes increases remarkably. Therefore, as the number of packets increases, the proposed scheme largely outperforms the fully connected and one coalition game schemes. We also see from Fig. \ref{fig5} that the completion time of all schemes increases linearly with the number of packets. This is expected: as the number of packets increases, a higher number of transmissions is required for completion, which increases the average completion time.
In Fig. \ref{fig6}, we plot the average completion time as a function of the average player-player erasure probability $\sigma$ for a network composed of $N= 60$, $M=30$, $\epsilon=2\sigma$, and $C=0.4$. Similar to the above figures, the average completion time of the partial D2D solutions is noticeably better than that of the fully-connected D2D and OCF partially-connected D2D schemes, as shown in Fig. \ref{fig6}. We clearly see that the completion time of the partial D2D schemes is better than that of the PMP scheme because of their multiple player transmissions at each round. Moreover, as the player-to-player erasure probability increases, the BS-player erasure probability increases two-fold ($\epsilon=2\sigma$), thus slightly affecting the performance of the PMP scheme. The partial D2D settings, however, benefit from short-range and reliable communications, which provide much better player reachability and successful IDNC packet delivery compared to the PMP setting.
\begin{figure}[t!]
\centering
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.61\textwidth]{figc.pdf}
\caption{Average completion time as a function of the connectivity index $C$.}
\label{fig9}
\end{minipage}\hfill
\begin{minipage}{0.494\textwidth}
\centering
\includegraphics[width=0.61\textwidth]{FigCNnnn.pdf}
\caption{Average number of coalitions as a function of the number of players $N$.}
\label{fig10}
\end{minipage}
\end{figure}
In Fig. \ref{fig9}, we investigate the average completion time as a function of the connectivity index $C$
for a network composed of $N= 60$, $M=30$, $\epsilon=0.25$, and $\sigma=0.12$. It can clearly be seen that for a low connectivity index $(C \leq 0.4)$, the proposed CFG partially-connected D2D approach noticeably outperforms the fully-connected D2D and OCF partially-connected D2D approaches. In such poorly connected networks $(C \leq 0.4)$, multiple simultaneous players' transmissions are exploited by the partially-connected D2D algorithms. However, as the connectivity index increases $(C \geq 0.6)$, the number of formed disjoint coalitions in our proposed solution is drastically reduced, thus reducing the number of transmitting players. This brings its performance into agreement with that of the fully-connected D2D scheme. Being independent of the coverage zones of the transmitting players and the delay created by those players, the PMP scheme is not affected by changes to $C$. Thus, the PMP scheme has a constant average completion time.
\ignore{
For moderately connected networks in Fig. \ref{fig4} and Fig. \ref{fig6}, the proposed CFG partially-connected D2D still outperforms the fully-connected D2D and the OCF partially-connected D2D schemes for all simulated numbers of players and packets. Due to the high connectivity between players, the number of formed coalitions is small, thus decreasing the number of transmitting players in partial D2D scenarios. Consequently, the completion time performance of the partially connected networks, i.e., CFG partially-connected D2D and FRAN partially-connected D2D, is degraded and expected to be close to the performance of the fully-connected D2D and OCF partially-connected D2D schemes.}
\begin{table}
\caption{The influence of changing $\sigma$ on the completion time performance of the proposed scheme}
\renewcommand{\arraystretch}{0.8} \label{table1}
\centering
\tabcolsep=0.11cm
\begin{tabular}{|p{5.2cm}|p{1.6cm} |p{1.6cm} |p{1.6cm}| p{1.6cm}|} \hline
Solution & $\sigma=0.6\epsilon$ & $\sigma=0.7\epsilon$ & $\sigma=0.9\epsilon$ & $\sigma=\epsilon$ \\ \hline
Point to Multi-Point & 30.2900 & 30.2800 & 30.3100 & 30.4800
\\
\hline
CFG partially-connected D2D & 20.1800& 23.4702 & 30.4500& 33.9300\\ \hline
\end{tabular}
\end{table}
To conclude this section, we study the influence of relaxing the setting $\sigma=0.5\epsilon$ on the completion time performance of our proposed scheme. In \tref{table1}, we summarize the completion time performance for different values of $\sigma$. The considered network setup has $30$ players,
$20$ packets, $\epsilon=0.5$, and $C=0.1$. From \tref{table1}, we note that the completion time of our proposed solution still outperforms the PMP scheme for $\sigma=0.7\epsilon$ and approximately matches the PMP scheme for $\sigma=0.9\epsilon$. This is due to the simultaneous transmissions and cooperative decisions of the transmitting players, which show the potential of the proposed CFG solution in minimizing the completion time of users.
\subsection{Proposed CFG Performance Evaluation}
To quantify the analysis of the proposed coalition formation solution, we plot in Fig. \ref{fig10} the average number of coalitions as a function of the number of players $N$ for a network composed of $M= 30$, different connectivity indices ($C= 0.6$, $C= 0.3$, and $C= 0.1$), and $\sigma=0.12$. Fig. \ref{fig10} shows that the average number of coalitions increases with the number of players. This is because, as $N$ increases, the number of cooperating players increases, thus increasing both the number and the average size of the formed coalitions. We can conclude from Fig. \ref{fig10} that the resulting coalition structure $\Psi_\text{fin}$ from Algorithm \ref{Alg1} is composed of a small number of relatively large coalitions when $C= 0.6$. When $C= 0.1$, the number of formed coalitions increases and the resulting coalition structure $\Psi_\text{fin}$ is composed of a large number of small coalitions.
\begin{table}
\caption{Average Running Times of the different schemes}
\renewcommand{\arraystretch}{0.8} \label{table2}
\centering
\tabcolsep=0.11cm
\begin{tabular}{|p{5.4cm}|p{4.3cm} |p{4.3cm} |} \hline
Solution & Time(s)- Small network& Time(s)- Large network \\ \hline
FRAN partially-connected D2D & 0.561893 & 15.98450\\
\hline
Point to Multi-Point & 1.994500 & 1103.020716 \\ \hline
Fully-connected D2D & 0.756420 & 128.772580 \\
\hline
OCF partially-connected D2D & 0.783575 & 28.726515 \\
\hline
CFG partially-connected D2D & 0.736737 & 21.725739 \\
\hline
\end{tabular}
\end{table}
In \tref{table2}, we evaluate the complexity of the proposed coalition game solution in terms of the algorithmic running time. In particular, \tref{table2} lists the time consumed by MATLAB to execute all schemes in different network setups, from the start of the algorithms until all players receive their wanted packets. The considered small network setup has $30$ players,
$20$ packets, $\epsilon=0.5$, $\sigma=0.25$, and $C=0.1$. The considered large network setup has $100$ players,
$70$ packets, $\epsilon=0.5$, $\sigma=0.25$, and $C=0.1$. It can clearly be seen from the table that the proposed CFG partially-connected D2D scheme requires less running time than all other solutions except the centralized FRAN partially-connected D2D scheme for both network setups. Although the completion time achieved by the CFG partially-connected D2D scheme is roughly the same as that of the centralized FRAN partially-connected D2D, the computing time required by our developed scheme is slightly higher than that required by the FRAN partially-connected D2D. This is because our proposed scheme needs time to converge before generating the output. The centralized FRAN scheme has a low execution time due to the presence of the fog entity.
Finally, to evaluate the convergence rate of the proposed scheme, the average number of merge-and-split
rules performed before Algorithm~\ref{Alg1} converges to the final coalition structure is listed in \tref{table3}. To reach a stable coalition structure with our proposed CFG scheme, network setup $1$ requires on average $16$ iterations, and network setup $2$ needs on average $22$ iterations. These results show that our proposed distributed algorithm is robust to different network setups and allows D2D users to form stable coalitions with a good convergence speed, which further confirms the theoretical findings in \thref{th:1w}.
\begin{table}
\caption{Average Number of Coalitions and Split/merge rules of the proposed scheme in the first iteration}
\renewcommand{\arraystretch}{0.8} \label{table3}
\centering
\begin{tabular}{|p{5.5cm} |p{3.7cm} |p{3.7cm} |} \hline
Network Setup & Number of Coalitions & Split-and-merge rules \\ \hline
Setup 1: $N=100$ and $C=0.1$ & 16.34 & 8.12 \\
\hline
Setup 2: $N=160$ and $C=0.1$ & 23.67 & 12.76 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion} \label{CC}
This paper has developed a distributed game-theoretical framework for a partially connected D2D network using a coalition game and IDNC optimization, so as to minimize the completion time of users. In particular, our proposed model is formulated as a coalition formation game with nontransferable utility, and a fully distributed coalition formation algorithm is proposed. The proposed distributed algorithm converges to a Nash-stable coalition structure using split-and-merge rules while accounting for the altruistic players' preferences. With such a distributed solution, each player has to maintain a partial feedback matrix only for the players in its coverage zone instead of the global feedback matrix required in the
fully connected D2D networks. A comprehensive evaluation of the completion time and game performance has been carried out for the proposed distributed coalition game. In particular, our evaluation results demonstrated that our proposed distributed solution offers almost the same completion time performance as the centralized FRAN D2D network. \ignore{ Numerical results demonstrate that the proposed distributed solution provides appreciable completion time performance gains compared to the conventional point-to-multipoint and fully connected D2D networks. Compared to the centralized FRAN D2D network, our proposed solution offers roughly the same performance. }
Consider the stochastic dynamics $X_t$ on ${\mathbb R}^d$ satisfying
\begin{equation}\label{1}
dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}} \,dW_t,
\end{equation}
called {\em Brownian dynamics} or {\em overdamped Langevin dynamics}.
Here $V:{\mathbb R}^d \to \mathbb R$ is a smooth function, $\beta = (k_B T)^{-1}$
is a positive constant, and $W_t$ is a standard $d$-dimensional Brownian motion \cite{Oksendal}.
The dynamics~\eqref{1} is used to model the evolution of the
position vector $X_t$ of $N$ particles (in which case $d=3N$) in an
energy landscape defined by the potential energy~$V$. This is the
so-called {\em molecular dynamics}. Typically this energy landscape has
many metastable states, and in applications it is of interest
to understand how $X_t$ moves between them. Temperature accelerated
dynamics (TAD) is an algorithm for computing this {\it metastable
dynamics} efficiently. (See \cite{Voter} for the original algorithm,
\cite{Voter3} for some modifications, and
\cite{Voter2} for an overview of TAD and other
similar methods for accelerating dynamics.)
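As a point of reference, the dynamics~\eqref{1} is typically simulated with an Euler--Maruyama discretization. The following sketch illustrates this; the double-well potential, step size, and temperature are illustrative assumptions, not taken from the text:

```python
import numpy as np

def euler_maruyama(grad_V, x0, beta, dt, n_steps, rng):
    """Integrate dX = -grad V(X) dt + sqrt(2/beta) dW by Euler-Maruyama."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    noise_scale = np.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        x = x - grad_V(x) * dt + noise_scale * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return np.array(traj)

# Illustrative double-well potential V(x) = (x^2 - 1)^2 in dimension d = 1.
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)
rng = np.random.default_rng(0)
traj = euler_maruyama(grad_V, x0=[-1.0], beta=10.0, dt=1e-3, n_steps=5000, rng=rng)
```

At this moderate value of $\beta$, the trajectory mostly fluctuates around the starting well, which is the metastability phenomenon discussed below.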
Each metastable state corresponds to a basin of attraction $D$
for the gradient dynamics $dx/dt=-\nabla V(x)$ of a local minimum
of the potential $V$. In TAD, temperature is raised to force $X_t$ to leave each
basin more quickly. What would have happened at the original
low temperature is then extrapolated. To generate metastable
dynamics of $(X_t)_{t\ge 0}$ at low temperature, this procedure is repeated in each basin.
This requires the assumptions:
\begin{itemize}
\item[(H1)]{$X_t$ immediately reaches local equilibrium upon entering a given basin $D$; and}
\item[(H2)]{An Arrhenius law may be used to extrapolate the exit event at low temperature.}
\end{itemize}
The Arrhenius (or Eyring-Kramers) law states that, in the small
temperature regime, the time it takes to transition between neighboring basins $D$ and $D'$ is
\begin{equation}\label{Arrheniuslaw}
{\nu^{-1}} \exp\left[\frac{|\delta V|}{k_B T}\right],
\end{equation}
where $\delta V$ is the difference in potential energy between the local
minimum in $D$ and the lowest saddle point along a path joining $D$ to $D'$. Here $\nu$ is a constant (called a {\em prefactor}) depending on the eigenvalues of
the Hessian of $V$ at the local minimum and at the saddle point, but not on
the temperature. In practice the Arrhenius law is used when $k_B T \ll |\delta V|$.
We refer to \cite{Berglund, Bovier, Hanggi, Menz} for details.
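To illustrate the scale separation encoded in the Arrhenius law~\eqref{Arrheniuslaw}, one can evaluate the mean transition time for a few temperatures; the prefactor and barrier values below are invented for illustration:

```python
import math

def arrhenius_time(nu, delta_V, kB_T):
    """Mean transition time nu^{-1} exp(|delta V| / (kB T)) from the Arrhenius law."""
    return math.exp(abs(delta_V) / kB_T) / nu

# Invented values: prefactor nu = 1 and barrier |delta V| = 1.
t_lo = arrhenius_time(nu=1.0, delta_V=1.0, kB_T=0.05)  # low temperature
t_hi = arrhenius_time(nu=1.0, delta_V=1.0, kB_T=0.2)   # higher temperature
# Exits at the lower temperature are slower by a factor exp(20 - 5) = exp(15).
```

This exponential gap between exit times at the two temperatures is precisely what TAD exploits.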
TAD is a very popular technique, in particular for
applications in material sciences; {see for example}
\cite{TAD9, TAD11, TAD6, TAD5, TAD3, TAD10, TAD4, TAD8, TAD1, TAD2, TAD7}.
In this article we provide a mathematical framework for
TAD, and in particular a mathematical formalism for
(H1)-(H2). Our analysis will actually concern a slightly modified
version of TAD. In this modified version, which we call {\em modified TAD},
the dynamics is allowed to reach local equilibrium
after entering a basin,
thus circumventing assumption~(H1).
{The
assumption (H1) is closely related to the
no recrossings assumption in transition state
theory; in particular one can see the local
equilibration steps (modifications (M1) and (M2) below) in modified TAD as a way to account
for recrossings.}
We note that modified TAD can be used in practice and, since it does not
require the assumption (H1), may reduce some of the numerical error in
(the original) TAD.
To analyze modified TAD, we first
make the notion of local equilibration precise by using
{\it quasistationary distributions}, in the spirit of \cite{Tony}, and
then we circumvent (H2) by introducing an idealized extrapolation procedure
which is {\it exact}. The result, which we call {\em idealized TAD}, yields exact
metastable dynamics; see Theorem~\ref{mainthm} below.
Idealized TAD is not a practical algorithm because it depends on
quantities related to quasistationary distributions which cannot be efficiently computed.
However, we show that idealized TAD agrees with modified TAD at low temperature.
In particular we justify (H2) in modified TAD by showing that at low temperature,
the extrapolation procedure of idealized TAD agrees with that of
modified TAD (and of TAD), which is based on the Arrhenius law~\eqref{Arrheniuslaw};
see Theorem~\ref{theorem2} below.
In this article, we focus on the overdamped Langevin
dynamics~\eqref{1} for simplicity. The algorithm
is more commonly used in practice
with the Langevin dynamics
\begin{align}\begin{split}\label{2ndorder}
\begin{cases} dq_t = M^{-1} p_t \,dt\\
dp_t = -\nabla V(q_t)\,dt -\gamma M^{-1}p_t\,dt +\sqrt{2\gamma \beta^{-1}}\,dW_t\end{cases}.
\end{split}
\end{align}
{The notion of quasistationary distributions
still makes sense for the Langevin dynamics~{\cite{Nier}}, so an extension of our analysis to
that dynamics
is in principle possible, though the mathematics
there are much more difficult due to the degeneracy of the infinitesimal
generator of {\eqref{2ndorder}}. In particular, some results on the low temperature
asymptotics of the principal eigenvalue and eigenvector for hypoelliptic
diffusions are still missing.}
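For concreteness, a single discretization step of~\eqref{2ndorder} can be sketched as follows (a plain Euler--Maruyama step with a scalar mass, chosen here for simplicity; practical molecular dynamics codes typically use splitting schemes instead):

```python
import numpy as np

def langevin_step(q, p, grad_V, M, gamma, beta, dt, rng):
    """One Euler-Maruyama step of the Langevin dynamics for (q_t, p_t)."""
    dW = np.sqrt(dt) * rng.standard_normal(np.shape(p))
    q_new = q + dt * p / M
    p_new = p - dt * grad_V(q) - dt * gamma * p / M + np.sqrt(2.0 * gamma / beta) * dW
    return q_new, p_new

# Illustrative harmonic potential V(q) = |q|^2 / 2, so grad V(q) = q.
rng = np.random.default_rng(1)
q, p = np.array([0.5]), np.array([0.0])
for _ in range(1000):
    q, p = langevin_step(q, p, grad_V=lambda x: x, M=1.0, gamma=1.0,
                         beta=4.0, dt=1e-3, rng=rng)
```

Note that the noise only enters the momentum equation, which is the source of the degeneracy of the generator mentioned above.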
The paper is organized as follows. In Section~\ref{sec:TAD}, we recall
TAD and present modified TAD. In Section~\ref{sec:idealTAD}, we introduce idealized TAD and
prove it is exact in terms of metastable dynamics. Finally, in Section~\ref{sec:theta},
we show that idealized TAD and modified TAD are essentially equivalent in the low temperature regime.
Our analysis in Section~\ref{sec:theta} is restricted to a one-dimensional setting.
The extension of this to higher dimensions will be the purpose of another work.
Throughout the paper it will be convenient to refer to various objects related to the dynamics~\eqref{1}
at a high and low temperature, $\beta^{hi}$ and $\beta^{lo}$, as well as at a generic temperature,
$\beta$. To do so, we use superscripts $^{hi}$ and $^{lo}$ to indicate that we
are looking at the relevant object at $\beta = \beta^{hi}$ or $\beta = \beta^{lo}$,
respectively. We drop the superscripts to consider objects at a generic temperature~$\beta$.
\section{TAD and modified TAD}\label{sec:TAD}
Let $X_t^{lo}$ be a stochastic dynamics obeying~\eqref{1} at
a low temperature $\beta = \beta^{lo}$,
and let $S:{\mathbb R}^d \to \mathbb N$ be a function which
labels the basins of $V$. (So each basin $D$ has the form
$S^{-1}(i)$ where $i \in \mathbb N$.) The goal of TAD is to efficiently
estimate the metastable dynamics at low temperature; in other words:
\begin{itemize}
\item{
Efficiently generate a trajectory ${\hat S}(t)_{t\ge 0}$ which has approximately
the same distribution as
$S(X_t^{lo})_{t\ge 0}$.}
\end{itemize}
The aim then is to get approximations of {\em trajectories}, {including
distributions of hitting times, time correlations, etc., and
thus not only
the evolution of the averages of some observables or
averages of observables with respect to the invariant distribution.}
At the heart of TAD is the problem of efficiently simulating an
exit of $X_t^{lo}$ from a generic basin $D$,
since the metastable dynamics are generated by essentially
repeating this. To efficiently simulate an exit of $X_t^{lo}$ from $D$,
{temperature is raised so that} $\beta^{hi} < \beta^{lo}$
and a corresponding high temperature dynamics $X_t^{hi}$ is evolved.
The process $X_t^{hi}$ is allowed to search for various exit
paths out of $D$ until a stopping time $T_{stop}$;
each time $X_t^{hi}$ reaches $\partial D$ it is reflected back
into $D$, the place and time of the attempted exit is recorded,
and the Arrhenius law~\eqref{Arrheniuslaw} is used to extrapolate a low temperature exit.
After time $T_{stop}$ the fastest extrapolated low temperature
exit is selected. This exit is considered an approximation of the
first exit of $X_t^{lo}$ from $D$. The original algorithm is
described in Section~\ref{originalTAD} below; a modified
version is proposed in Section~\ref{modifiedTAD} below.
\subsection{TAD}\label{originalTAD}
In the following, we let $D$ denote a generic basin. We
let $x_0$ be the minimum of $V$ inside $D$, and we
assume there are finitely many saddle points, $x_i$ ($i\ge 1$),
of $V$ on $\partial D$. The original TAD algorithm \cite{Voter} for
generating the approximate metastable dynamics
${\hat S}(t)$ is as follows:
\begin{algorithm}[TAD]\label{alg1}
Let $X_0^{hi}$ be in the basin $D$, and
start a low temperature simulation clock
$T_{tad}$ at zero: $T_{tad} = 0$. Then
iterate the following over the visited basins:
\begin{enumerate}
\item{Let $T_{sim} = 0$ and $T_{stop} = \infty$. These
are the simulation and stopping times for the high
temperature exit search.}
\item {Evolve $X^{hi}_t$ at $\beta = \beta^{hi}$ starting
at $t=T_{sim}$ until the
first time after $T_{sim}$ at which it exits $D$. (Exits
are detected by checking whether the dynamics lands in another
basin via gradient descent, i.e. the deterministic dynamics $dx/dt = -\nabla V(x)$.)
Call this time $T_{sim}+\tau$.}
\item {Associate a nearby saddle point, $x_i$, of $V$ on $\partial D$
to the place where $X^{hi}_t$ exited~$D$. (This
can be done by using, for example, the nudged elastic
band method \cite{Henkelman}; see below.)}
\item {Advance the high temperature simulation clock by $\tau$: $T_{sim} = T_{sim} + \tau$.}
\item {If an exit at $x_i$ has already been observed, go to Step 8. If an exit at $x_i$ has not yet been observed,
set $T_i^{hi} = T_{sim}$ and extrapolate the high temperature exit time to low
temperature using the formula:
\begin{equation}\label{arrhenius}
T_i^{lo} = T_{i}^{hi}\,e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}.
\end{equation}
This equation comes from the Arrhenius law~\eqref{Arrheniuslaw} for
exit rates in the low temperature regime; see the remarks below. }
\item {Update the smallest extrapolated exit time:
\begin{equation*}
T_{min}^{lo} = \min\{T_{min}^{lo}, T_i^{lo}\},
\end{equation*}
and the (index of) the corresponding exit point:
\begin{equation*}
I_{min}^{lo} = i \hskip10pt\hbox{if}\hskip10pt T_{min}^{lo} = T_i^{lo}.
\end{equation*}
}
\item {Update $T_{stop}$. The stopping time is chosen so that with
confidence $1-\delta$, an extrapolated low temperature exit time
smaller than $T_{min}^{lo}$ will not be observed. See equation~\eqref{TADstop} below
for how this is done.}
\item {If $T_{sim} \le T_{stop}$, reflect $X_t^{hi}$ back into $D$ and
go back to Step 2. Otherwise, proceed to Step~9.}
\item {Set
\begin{equation*}
{\hat S}(t) = S(D) \hskip10pt \text{for} \hskip10pt t \in [T_{tad},T_{tad}+T_{min}^{lo}],
\end{equation*}
and advance the low temperature simulation clock by
$T_{min}^{lo}$:
\begin{equation*}
T_{tad} = T_{tad}+T_{min}^{lo}.
\end{equation*}}
\item{Send $X_t^{hi}$ to the new basin, namely the
neighboring basin of $D$ which is attained through the saddle point
$x_{I_{min}^{lo}}$. Then, go back to Step 1, the domain $D$ now being
the neighboring basin.}
\end{enumerate}
\end{algorithm}
The nudged elastic band method \cite{Henkelman} consists, starting from a
trajectory leaving $D$, of computing by a gradient descent method the closest
minimum energy path leaving $D$, with the end points of the trajectory being fixed.
This minimum energy path necessarily leaves $D$ through a saddle point.
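The basin-assignment step used above (following the deterministic dynamics $dx/dt = -\nabla V(x)$ to a local minimum, which identifies the basin label $S(x)$) can be sketched as follows; the double-well potential and the descent parameters are illustrative assumptions:

```python
import numpy as np

def descend_to_minimum(grad_V, x, step=1e-2, tol=1e-8, max_iter=100000):
    """Follow dx/dt = -grad V(x) (explicit Euler) until the gradient vanishes."""
    x = np.array(x, dtype=float)
    for _ in range(max_iter):
        g = grad_V(x)
        if np.linalg.norm(g) < tol:
            break
        x -= step * g
    return x

def basin_label(grad_V, x, minima, atol=1e-3):
    """Label S(x): index of the local minimum reached by gradient descent."""
    x_min = descend_to_minimum(grad_V, x)
    for i, m in enumerate(minima):
        if np.linalg.norm(x_min - m) < atol:
            return i
    raise ValueError("descent did not reach a known minimum")

# Illustrative double well V(x) = (x^2 - 1)^2 with minima at -1 and +1.
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)
minima = [np.array([-1.0]), np.array([1.0])]
```

An exit of $X_t^{hi}$ from $D$ is then detected when `basin_label` changes along the trajectory.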
\begin{remark}
{ When the overdamped Langevin dynamics leaves
a basin near a saddle point, its first re-entrance
into that basin is immediate.
Thus, Algorithm~{\ref{alg1}} does not really make
sense for overdamped Langevin dynamics. (With the Langevin dynamics~{\eqref{2ndorder}}, however, this difficulty does not arise.)
In modified TAD, defined below, we will allow the
dynamics to evolve away from the boundary of a basin
after an exit event, thus circumventing this problem.}
\end{remark}
Below we comment on the equation~\eqref{arrhenius} from
which low temperature exit times are extrapolated,
as well as the stopping time $T_{stop}$.
\begin{itemize}
\item {\bf Low temperature extrapolation}.
The original
TAD uses the following kinetic Monte Carlo (KMC) framework~\cite{KMC}.
For a given basin $D$, it is assumed that the time ${\tilde T}_i$
to exit through the saddle point $x_i$ of $V$ on $\partial D$ is exponentially
distributed with rate $\kappa_i$ given by the
Arrhenius law~\eqref{Arrheniuslaw}:
\begin{equation}\label{expparam}
\kappa_i \equiv {\nu_i} e^{-\beta (V(x_i)-V(x_0))}
\end{equation}
where we recall $\nu_i$ is a temperature independent
prefactor and $x_0$ is the minimum of $V$ in $D$.
An exit event from $D$ at temperature $\beta$ is obtained
by sampling independently the times ${\tilde T}_i$ for
all the saddle points $x_i$ on $\partial D$,
then selecting the smallest time and the
corresponding saddle point.
In TAD, this KMC framework is used for both
temperatures $\beta^{lo}$ and $\beta^{hi}$.
That is, it is assumed that the high and low temperature
exit times ${\tilde T}_i^{hi}$ and ${\tilde T}_i^{lo}$
through each saddle point $x_i$ satisfy:
\begin{align}\begin{split}\label{explaw}
\mathbb P({\tilde T}_i^{hi} > t) &= e^{-{\kappa}_i^{hi} t}\\
\mathbb P({\tilde T}_i^{lo} > t) &= e^{-{\kappa}_i^{lo} t}
\end{split}
\end{align}
where
\begin{align}\begin{split}\label{prefactors}
\kappa_i^{hi} &= {\nu_i} e^{-\beta^{hi}(V(x_i)-V(x_0))}\\
\kappa_i^{lo} &= {\nu_i} e^{-\beta^{lo}(V(x_i)-V(x_0))}
\end{split}
\end{align}
Observe that then
\begin{equation*}
{\tilde T}_i^{hi}\, {\frac{{\kappa}_i^{hi}}{{\kappa}_i^{lo}}} = {\tilde T}_i^{hi} \,e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}
\end{equation*}
has the same probability law as ${\tilde T}_i^{lo}$. This leads to the
extrapolation formula~\eqref{arrhenius}.
The assumption of exponentially distributed exit times $T_i^{hi}$
and $T_i^{lo}$ is
valid only if the dynamics at both temperatures immediately reach local equilibrium upon
entering a basin; see (H1) and Theorem~\ref{theorem0a} below. In modified TAD, described below,
we circumvent this immediate equilibration assumption by allowing the dynamics
at both temperatures to
reach local equilibrium. In particular, in modified TAD the low temperature assumption
is no longer needed to get exponential exit distributions as in~\eqref{explaw}.
On the other hand, to get the {\em rate constants} in~\eqref{prefactors}
-- and by extension the extrapolation rule~\eqref{arrhenius}; see (H2) -- a low
temperature assumption is required. {We will justify both~{\eqref{explaw}} and~{\eqref{prefactors}}
in the context of modified TAD. More precisely
we show that~{\eqref{explaw}} will be valid at any temperature,
while a low temperature assumption is needed to justify~{\eqref{prefactors}}.
Note that, inspecting equation~{\eqref{prefactors}}, the
low temperature assumption will be required for {\em both} temperatures
used in TAD -- so $1/\beta^{hi}$
will be small in an absolute sense, but
large compared to $1/\beta^{lo}$.}
\item{\bf Stopping time.}
The stopping time $T_{stop}$ is chosen so that if the high temperature
exit search is stopped at time $T_{stop}$, then with probability
$1-\delta$, the smallest extrapolated low temperature exit time will be
correct. Here $\delta$ is a user-specified parameter.
To obtain a formula for the
stopping time $T_{stop}$ it is assumed that, in addition to (H1)-(H2):
\begin{itemize}
\item[(H3)] {There
is a uniform lower bound, $\nu_{min}$, on all the prefactors in
equation~{\eqref{prefactors}}}: $$\forall i \in \{1,\ldots,k\},\quad \nu_i
\ge \nu_{min},$$
\end{itemize}
where $k$ denotes the number of saddle points on $\partial D$.
Let us now explain how this assumption is used to determine $T_{stop}$.
Let $T$ be a deterministic time. If a high temperature first exit time through $x_i$, $T_i^{hi} > T$,
extrapolates to a low temperature time less than $T_{min}^{lo}$, then from~\eqref{arrhenius},
\begin{equation*}
V(x_i)-V(x_0) \le \frac{\log(T_{min}^{lo}/T)}{\beta^{lo}-\beta^{hi}}
\end{equation*}
and so
\begin{equation}\label{ki}
{\kappa}_i^{hi} = \nu_i e^{-\beta^{hi}(V(x_i)-V(x_0))} \ge \nu_{min}\exp\left(\frac{\beta^{hi}\log(T_{min}^{lo}/T)}{\beta^{hi}-\beta^{lo}}\right).
\end{equation}
In TAD it is required that this event has a low probability
$\delta$ of occurring, that is,
\begin{equation}\label{delta}
\mathbb P(T_i^{hi} > T) = e^{-{\kappa}_i^{hi} T} < \delta.
\end{equation}
Using~\eqref{ki} in~\eqref{delta}, one sees that it suffices that
\begin{equation*}
\exp\left[-\nu_{min}\exp\left(\frac{\beta^{hi}\log(T_{min}^{lo}/T)}{\beta^{hi}-\beta^{lo}}\right)T\right] < \delta.
\end{equation*}
Solving this inequality for $T$, one obtains
\begin{equation}
T > \frac{\log(1/\delta)}{\nu_{min}}\left(\frac{\nu_{min}T_{min}^{lo}}{\log(1/\delta)}\right)^{\beta^{hi}/\beta^{lo}}.
\end{equation}
The stopping time $T_{stop}$ is then chosen to be the right hand side of the above:
\begin{equation}\label{TADstop}
T_{stop} \equiv \frac{\log(1/\delta)}{\nu_{min}}\left(\frac{\nu_{min}T_{min}^{lo}}{\log(1/\delta)}\right)^{\beta^{hi}/\beta^{lo}}.
\end{equation}
(It is calculated using the current value of $T_{min}^{lo}$.)
The above calculation shows that at simulation time
$T_{stop}$, with probability at least $1-\delta$, $T_{min}^{lo}$
is the same as the smallest extrapolated
low temperature exit time which would have been observed
with no stopping criterion.
For TAD to be practical, the stopping time $T_{stop}$ must be (on average)
smaller than the exit times at low temperature. The stopping
time of course depends on the choice of $\nu_{min}$ and $\delta$.
In practice a reasonable value for $\nu_{min}$ may be known a priori
\cite{Voter} or obtained by a crude approximation \cite{Voter2}.
For a given $\delta$, if too large a value of $\nu_{min}$ is used,
the low temperature extrapolated times
may be incorrect with probability greater than $\delta$.
On the other hand, if the value of $\nu_{min}$ is too small,
then the extrapolated times will be correct with probability $1-\delta$, but
computational efficiency will be compromised. The usefulness of TAD comes
from the fact that, in practice, $\nu_{min}$ and $\delta$ can often
be chosen such that the correct low temperature exit event
is found by time $T_{stop}$ with large probability $1-\delta$,
{\it and} $T_{stop}$ is on average much smaller than the exit times
which would be expected at low temperature. In practical applications,
TAD has provided simulation time scale boosts of up to $10^9$~\cite{TAD4}.
\end{itemize}
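Under the KMC assumptions above, the exit sampling, the extrapolation rule~\eqref{arrhenius}, and the stopping time~\eqref{TADstop} can be sketched as follows; the saddle-point data and the temperatures are invented for illustration:

```python
import math
import random

def sample_exit(prefactors, barriers, beta, rng):
    """Sample exponential exit times through each saddle; return (index, time)."""
    times = [rng.expovariate(nu * math.exp(-beta * dV))
             for nu, dV in zip(prefactors, barriers)]
    i = min(range(len(times)), key=times.__getitem__)
    return i, times[i]

def extrapolate(t_hi, dV, beta_hi, beta_lo):
    """Extrapolate a high-temperature exit time to low temperature."""
    return t_hi * math.exp(-(beta_hi - beta_lo) * dV)

def tad_stop_time(t_min_lo, nu_min, delta, beta_hi, beta_lo):
    """Stopping time T_stop of the original TAD."""
    a = math.log(1.0 / delta) / nu_min
    return a * (nu_min * t_min_lo / math.log(1.0 / delta)) ** (beta_hi / beta_lo)

# Invented example: two saddle points with barriers 0.5 and 1.
prefactors, barriers = [1.0, 1.0], [0.5, 1.0]
beta_hi, beta_lo = 2.0, 6.0
rng = random.Random(0)
i, t_hi = sample_exit(prefactors, barriers, beta_hi, rng)
t_lo = extrapolate(t_hi, barriers[i], beta_hi, beta_lo)
T_stop = tad_stop_time(t_lo, nu_min=1.0, delta=0.05, beta_hi=beta_hi, beta_lo=beta_lo)
```

Since $\beta^{lo} > \beta^{hi}$, the extrapolated time $T^{lo}_i$ is always larger than the observed high-temperature time, which is the source of the computational gain.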
\begin{remark}
One alternative to TAD is a brute force saddle point
search method, in which one evolves the
system at a high temperature $\beta^{hi}$ to
locate saddle points of $V$ on $\partial D$.
{(There are other popular techniques
in the literature to locate saddle points,
many of which do not use high
temperature or dynamics; see
for example~{\cite{Mousseau}}.)}
{Once one is confident that all the physically relevant
saddle points are found}, the times ${\tilde T}_i^{lo}$
to exit through each $x_i$ at low temperature
can be directly sampled from exponential distributions with
parameters $\kappa_i$ as in~\eqref{expparam}, using $\beta \equiv \beta^{lo}$.
(Estimates are available for the $\nu_i$ at low temperature; they
depend on the values of $V$ and the Hessian matrix of $V$ at $x_i$ and $x_0$.
See for example \cite{Bovier}.)
The advantage of TAD over
a brute force saddle point search method is that
in TAD, there is a well-defined stopping criterion
for the saddle point search at temperature $\beta^{hi}$, in
the sense that the saddle point corresponding to
the correct exit event at temperature $\beta^{lo}$
will be obtained with a user-specified probability.
In particular, TAD does not require all the saddle points to be found.
\end{remark}
\subsection{Modified TAD}\label{modifiedTAD}
Below we consider some modifications, (M1)-(M3), to TAD, calling
the result {\it modified TAD}. The main modifications, (M1)-(M2) below,
will ensure that the exponential rates assumed
in TAD are justified. We also introduce a different stopping
time, (M3). (See the discussion below Algorithm~\ref{alg2}.)
We note that some of these features are currently being used by
practitioners of TAD~\cite{Voterpriv}. Here are the three modifications:
\begin{itemize}
\item[(M1)] We include a decorrelation step in which an underlying
low temperature dynamics $(X_t^{lo})_{t\ge 0}$
finds local equilibrium in some basin $D$ before we start searching for exit pathways at high
temperature;
\item[(M2)] Before searching for exit pathways
out of $D$, we sample local equilibrium at high temperature
in the current basin $D$, without advancing any clock time;
\item[(M3)] We replace the stopping time~\eqref{TADstop} with
\begin{equation}\label{modstop}
T_{stop} = T_{min}^{lo}/C,
\end{equation}
where $C$ is a lower bound of the
minimum of $e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}$
over all the saddle points, $x_i$, of $V$ on $\partial D$.
\end{itemize}
\begin{remark}
{In (M3) above we are assuming some a priori knowledge of
the system, in particular a lower bound of the energy
barriers $V(x_i)-V(x_0)$, $i \in
\{1,\ldots,k\}$. Such a lower bound will not be known in every situation,
but in some cases, practitioners can obtain such a bound, see for example~{\cite{Voter3}}. See also the
discussion in the section ``Stopping time'' below.}
\end{remark}
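A minimal sketch of the stopping rule~\eqref{modstop}, assuming a known lower bound on the energy barriers as discussed in the remark (note that since $\beta^{hi} < \beta^{lo}$, a lower bound on the barriers yields a valid lower bound $C$):

```python
import math

def modified_stop_time(t_min_lo, barrier_lower_bound, beta_hi, beta_lo):
    """T_stop = T_min^lo / C, with C a lower bound of
    min_i exp(-(beta_hi - beta_lo) * (V(x_i) - V(x_0)))."""
    C = math.exp((beta_lo - beta_hi) * barrier_lower_bound)
    return t_min_lo / C
```

With a trivial barrier bound of zero one recovers $C = 1$ and $T_{stop} = T_{min}^{lo}$; any positive bound shortens the high-temperature search.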
The modified algorithm is
as follows; for the reader's convenience we have boxed off the steps of
modified TAD which are different from TAD.
\begin{algorithm}[Modified TAD]\label{alg2}
Let $X_0^{lo}$ be in the basin $D$,
set a low temperature simulation clock $T_{tad}$ to zero:
$T_{tad} = 0$, and choose a (basin-dependent) decorrelation time $T_{corr}>0$.
Then iterate the following over the visited basins:
\vskip5pt
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[]{\bf Decorrelation step:}
\item{Starting at time $t = T_{tad}$, evolve $X_t^{lo}$ at temperature $\beta = \beta^{lo}$
according to~\eqref{1} in the current basin $D$.}
\item{If $X_t^{lo}$ exits $D$ at a time
$T_{tad} + \tau < T_{tad} + T_{corr}$,
then set
\begin{equation*}
{\hat S}(t) = S(D), \hskip10pt t \in [T_{tad},T_{tad}+\tau],
\end{equation*}
advance the low temperature clock by $\tau$:
$T_{tad} = T_{tad} + \tau$,
then go back to Step 1, where $D$ is now the
new basin. Otherwise, set
\begin{equation*}
{\hat S}(t) = S(D), \hskip10pt t \in [T_{tad},T_{tad}+T_{corr}],
\end{equation*}
advance the low temperature clock by $T_{corr}$: $T_{tad} = T_{tad} + T_{corr}$,
and initialize the exit step by setting $T_{sim} = 0$ and $T_{stop} = \infty$.
Then proceed to the exit step.}
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[]{\bf Exit step:}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[1.]{Let $X_{T_{sim}}^{hi}$ be a sample of the
dynamics~\eqref{1} in local equilibrium in $D$ at
temperature $\beta = \beta^{hi}$.
See the remarks below for how this sampling is done. None
of the clocks are advanced in this step.}
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[2.] {Evolve $X^{hi}_t$ at $\beta = \beta^{hi}$ starting
at $t=T_{sim}$ until the
first time after $T_{sim}$ at which it exits $D$.
Call this time $T_{sim}+\tau$.}
\item[3.] {Using the nudged elastic band method,
associate a nearby saddle point, $x_i$, of $V$ on $\partial D$
to the place where $X^{hi}_t$ exited $D$.}
\item[4.] {Advance the simulation clock by $\tau$: $T_{sim} = T_{sim} + \tau$.}
\item[5.] {If an exit at $x_i$ has already been observed, go to Step 8.
If an exit at $x_i$ has not yet been observed,
set $T_i^{hi} = T_{sim}$ and
\begin{equation}\label{arrhenius2}
T_i^{lo} = T_{i}^{hi}\,e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}.
\end{equation} }
\item[6.] {Update the lowest extrapolated exit time:
\begin{equation*}
T_{min}^{lo} = \min\{T_{min}^{lo}, T_i^{lo}\},
\end{equation*}
and the index of the corresponding exit point:
\begin{equation*}
I_{min}^{lo} = i \hskip10pt\hbox{if}\hskip10pt T_{min}^{lo} = T_i^{lo}.
\end{equation*}
}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[7.]{Update $T_{stop}$:
\begin{equation}\label{modstop2}
T_{stop} = T_{min}^{lo}/C,
\end{equation}
where $C$ is a lower bound on the minimum of
the minimum of $e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}$
over all the saddle points, $x_i$, of $V$ on $\partial D$.}
\item[8.]{If $T_{sim} \le T_{stop}$,
go back to Step 1 of the exit step; otherwise, proceed to Step~9. }
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[9.]{Set
\begin{equation*}
{\hat S}(t) = S(D) \hskip10pt \hbox{for} \hskip10pt t \in [T_{tad},T_{tad}+T_{min}^{lo}],
\end{equation*}
and advance the low temperature simulation clock by
$T_{min}^{lo}$:
\begin{equation*}
T_{tad} = T_{tad}+T_{min}^{lo}.
\end{equation*}}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[10.]{Set {$X_{T_{tad}}^{lo} = X_{T_{I}^{hi}}^{hi}$
where $I \equiv I_{min}^{lo}$}.
Then go back to the decorrelation step, the domain $D$ now being the neighboring basin,
namely the one obtained by exiting through {$X_{T_{I}^{hi}}^{hi}$}.}
\end{enumerate}}}
\end{algorithm}
\vskip10pt
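The clock bookkeeping in Steps 4--8 of the exit step above can be sketched as follows. This is a minimal sketch, not the production algorithm: the high temperature exit search is treated as a black box, and all names and numerical values below are hypothetical.

```python
import math
import random

def modified_tad_exit_step(sample_exit, barriers, beta_hi, beta_lo, E_min):
    """Sketch of Steps 4-8 of the exit step of modified TAD.

    sample_exit() returns (tau, i): an exit time increment for the high
    temperature dynamics restarted from local equilibrium in D, together
    with the index i of the saddle point x_i associated with the exit.
    barriers[i] stands for V(x_i) - V(x_0), assumed >= E_min (H3').
    """
    C = math.exp((beta_lo - beta_hi) * E_min)  # e^{-(beta_hi-beta_lo)E_min}
    T_sim, T_stop = 0.0, float("inf")
    T_lo, T_min_lo, I_min_lo = {}, float("inf"), None
    while T_sim <= T_stop:
        tau, i = sample_exit()
        T_sim += tau                          # Step 4: advance simulation clock
        if i not in T_lo:                     # Step 5: first exit at x_i
            T_lo[i] = T_sim * math.exp((beta_lo - beta_hi) * barriers[i])
            if T_lo[i] < T_min_lo:            # Step 6: update the record
                T_min_lo, I_min_lo = T_lo[i], i
            T_stop = T_min_lo / C             # Step 7: update the stopping time
    return T_min_lo, I_min_lo                 # Step 8: T_sim > T_stop, accept

# Toy run: two saddle points with hypothetical barriers and exit statistics.
rng = random.Random(1)
sampler = lambda: (rng.expovariate(2.0), 0 if rng.random() < 0.7 else 1)
T_min, I_min = modified_tad_exit_step(sampler, barriers={0: 0.5, 1: 0.8},
                                      beta_hi=2.0, beta_lo=6.0, E_min=0.5)
```

Note that the loop terminates with probability one: once the first exit is observed, $T_{stop}$ is finite while $T_{sim}$ keeps increasing by positive increments.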
\begin{itemize}
\item{\bf Local equilibrium in $D$: (M1) and (M2).}
We introduce the decorrelation step -- see (M1) -- in order to
ensure that the low temperature dynamics reaches local equilibrium
in $D$. Indeed, for sufficiently large $T_{corr}$ the low temperature
dynamics reaches local equilibrium in some
basin. The convergence to local equilibrium will be made precise in
Section~\ref{sec:idealTAD} using the notion of the {\em quasistationary
distribution}. See also~\cite{Gideon,Tony}, in particular for a discussion of the choice of $T_{corr}$.
Local equilibrium will in general be reached at different times in
different basins, so we allow $T_{corr}$ to be basin dependent.
We note that a similar decorrelation step is used in another
accelerated dynamics algorithm proposed by A.F. Voter, the Parallel Replica Dynamics~\cite{VoterParRep}. {The decorrelation step accounts for
barrier recrossing events: the dynamics is allowed to evolve
exactly at low temperature after the exit step, capturing any
possible barrier recrossings, until local equilibrium is
reached in one of the basins.}
The counterpart of the addition of this decorrelation step is that,
from (M2), in the exit step we also start the high temperature dynamics from
local equilibrium in the current basin $D$.
{A similar step is actually being used by current practitioners
of TAD~{\cite{Voterpriv}}, though this step is not mentioned
in the original algorithm~{\cite{Voter}}.}
To sample local equilibrium in $D$, one can for example take the
end position of a sufficiently
long trajectory of~\eqref{1} which does not exit $D$.
See \cite{Tony, Gideon} for some algorithms to efficiently sample
local equilibrium; we remark that this is expected to become more computationally demanding
as temperature increases.
To extrapolate the exit event at low temperature from the exit events at high temperature,
we need the dynamics at both temperatures to be in local equilibrium. We note that
the changes (M1)-(M2) in modified TAD are actually a practical way to get rid of the
error associated with the assumption (H1) in TAD.
\item{\bf Stopping time: (M3).}
In (M3) we introduce a stopping time $T_{stop}$
such that, with probability $1$, the shortest extrapolated
low temperature exit time is found by time $T_{stop}$. (Recall that with the
stopping time of TAD, we have only a confidence level $1-\delta$.)
Note that for the stopping time $T_{stop}$ to be implemented
in~\eqref{modstop2}, we need some a priori knowledge about energy
barriers, in particular a lower bound $E_{min}>0$ for all the
differences $V(x_i)-V(x_0)$, where $x_i$ ranges over the saddle
points on the boundary of a given basin:
\begin{itemize}
\item[(H3')] {There
is a minimum, $E_{min}$, to all the energy barriers}: $$\forall i \in
\{1,\ldots,k\}, \quad V(x_i) - V(x_0) \ge E_{min}.$$
\end{itemize}
{If a lower bound $E_{min}$ is known}, then
we can choose $C$ accordingly so that in equation~\eqref{modstop2} we obtain
\begin{equation}\label{modifiedstop}
T_{stop} = T_{min}^{lo} e^{(\beta^{hi}-\beta^{lo})E_{min}}.
\end{equation}
A simple computation then shows that {under assumption (H3')}, any high temperature exit
time occurring after $T_{stop}$ cannot extrapolate to a low temperature
exit time smaller than $T_{min}^{lo}$. To see that~\eqref{modifiedstop} leads
to an efficient algorithm, recall that TAD is expected to be correct only in the regime where
$\beta^{hi} E_{min} \gg 1$, which, since $\beta^{hi}\ll\beta^{lo}$, means the
exponential in~\eqref{modifiedstop} should be very small.
As the computational savings of TAD comes from the
fact that the simulation time of the exit step, namely $T_{stop}$, is
much smaller than the exit time that would have been observed at low
temperature, the choice of stopping time in TAD is of critical importance.
Both of the stopping times~\eqref{TADstop} and~\eqref{modstop}
are used in practice; see {{\cite{Voter3}}} for a presentation of TAD
with the stopping formula~\eqref{modstop}, and
\cite{Montalenti2} for an application. {The original stopping
time~{\eqref{TADstop}} requires a lower
bound for the prefactors in the Arrhenius law~{\eqref{prefactors}} (see
assumption (H3) above, in the remarks following
Algorithm~{\ref{alg1}}). The stopping time~{\eqref{modstop}} requires an
assumption on the minimum energy barriers; see assumption (H3') above}. The formula~\eqref{modstop}
may be preferable in case minimum energy barriers are known,
since it is known to scale
better with system size than \eqref{TADstop}. The formula~\eqref{TADstop} is
advantageous if minimum energy barriers are unknown but a reasonable
lower bound for the minimum prefactor $\nu_{min}$ is available.
{We have chosen
the stopping time~{\eqref{modstop}} instead of~{\eqref{TADstop}}
mostly for mathematical convenience -- in particular so that in our
analysis in Section~{\ref{sec:idealTAD}} we do not have the error $\delta$
associated with~{\eqref{TADstop}}. A similar
analysis can be done under assumption (H3) with the stopping time~{\eqref{TADstop}},
modulo the error $\delta$.}
\end{itemize}
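As an illustration of the naive local-equilibrium sampling mentioned in the first remark above (keeping the end position of a long trajectory of~\eqref{1} that never leaves $D$), here is a minimal Euler--Maruyama rejection sketch. The double-well potential, time step, and all parameter values are hypothetical, and the more efficient algorithms of \cite{Tony,Gideon} should be preferred in practice.

```python
import math
import random

def sample_local_equilibrium(grad_V, beta, x0, in_D, T, dt=1e-3, rng=random):
    """Return the end position of a discretized trajectory of the overdamped
    Langevin dynamics dX = -grad_V(X) dt + sqrt(2/beta) dW that stays in the
    basin D for a time T; trajectories that exit D are rejected and restarted."""
    nsteps = int(T / dt)
    while True:
        x = x0
        for _ in range(nsteps):
            x += -grad_V(x) * dt + math.sqrt(2.0 * dt / beta) * rng.gauss(0.0, 1.0)
            if not in_D(x):
                break          # exited D: reject and restart from x0
        else:
            return x           # stayed in D during [0, T]

# Hypothetical example: double well V(x) = (x^2 - 1)^2, basin D = (0, +inf).
rng = random.Random(0)
x_end = sample_local_equilibrium(grad_V=lambda x: 4.0 * x * (x * x - 1.0),
                                 beta=6.0, x0=1.0, in_D=lambda x: x > 0.0,
                                 T=1.0, rng=rng)
```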
We comment that modified TAD is an algorithm which can be implemented in
practice, and which circumvents the error in the original TAD
arising from the assumption (H1).
\section{Idealized TAD and mathematical analysis}\label{sec:idealTAD}
In this section we show that under certain idealizing assumptions,
namely (I1)-(I3) and (A1) below, modified TAD is {\it exact} in the sense that the
simulated metastable dynamics ${\hat S}(t)_{t\ge 0}$
has the same law as the true low temperature metastable dynamics $S(X_t^{lo})_{t\ge 0}$.
We call this idealization of modified TAD {\it idealized TAD}.
Our analysis will show that idealized TAD and modified TAD
agree in the limit $\beta^{hi},\beta^{lo} \to \infty$ and $T_{corr} \to \infty$.
Since idealized TAD is exact, it follows that modified TAD is exact in the
limit $\beta^{hi},\beta^{lo} \to \infty$ and $T_{corr} \to \infty$.
In idealized TAD, we assume that at the end of the decorrelation step and
at the start of the exit step of modified TAD,
we are in {\it exact} local equilibrium; see (A1) and (I1). We formalize this using the
notion of quasistationary distributions, defined below. We also assume that the way in which
we exit near a given saddle point $x_i$ in the exit step does not affect
the metastable dynamics in the decorrelation step; see (I2). The remaining
idealization, whose relation to modified TAD is perhaps not so clear at
first sight, is to replace the exponential
$\exp[-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))]$ of~\eqref{arrhenius}
with a certain quantity $\Theta_i$ depending on
the flux of the quasistationary distribution across $\partial D$; see~(I3).
In Section~\ref{sec:theta} we justify this by showing that the two
agree asymptotically as $\beta^{hi},\beta^{lo} \to \infty$ in a one-dimensional setting.
\subsection{Notation and quasistationary distribution}\label{section2b}
Here and throughout, $D$ is an (open) domain with $C^2$ boundary $\partial D$
and $X_t^x$ is a stochastic process evolving according to~\eqref{1} starting at
$X_0^x = x$ (we suppress the superscript where it is not needed). We
write $\mathbb P(\cdot)$ and $\mathbb E[\cdot]$ for various probabilities and expectations,
the meaning of which will be clear from context. We write
$Y \sim \mu$ for a random variable sampled from the probability
measure $\mu$ and $Y\sim {\cal E}(\alpha)$ for an
exponentially distributed random variable with parameter $\alpha$.
Recalling the notation
of Section~\ref{sec:TAD}, we assume that $\partial D$ is partitioned
into $k$ (Lebesgue measurable) subsets $\partial D_i$ containing the saddle points $x_i$ of $V$,
$i=1,\ldots,k$ (see Fig~\ref{fig1}):
\begin{equation*}
\partial D = \cup_{i=1}^k \partial D_i \hskip10pt \hbox{ and }
\hskip10pt \partial D_i \cap \partial D_j = \emptyset \hbox{ if } i \neq j.
\end{equation*}
We assume that any exit through $\partial D_i$ is associated to the
saddle point $x_i$ in Step 3 of TAD. In other
words, $\partial D_i$ corresponds to the basin of attraction of the
saddle point $x_i$ for the nudged elastic band method.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{092306R_fig1.eps}
\end{center}
\caption{The domain $D$ with boundary partitioned into $\partial
D_1,\ldots,\partial D_4$ (here $k=4$)
by the black line segments. $V$ has exactly one saddle point in each
$\partial D_i$, located at $x_i$. }
\label{fig1}
\end{figure}
Essential to the analysis below will be the notion of {\it quasistationary distribution},
which we define below, recalling some facts which will be needed in our analysis. Consider the
infinitesimal generator of~\eqref{1}:
\begin{equation*}
L = -\nabla V \cdot \nabla +\beta^{-1} \Delta,
\end{equation*}
and let $(u,-\lambda)$ be the principal eigenvector/eigenvalue pair for $L$ with
homogeneous Dirichlet (absorbing) boundary conditions on $\partial D$:
\begin{equation}\label{eq:ulambda}
\left\{
\begin{aligned}
Lu&=-\lambda u \text{ in } D, \\
u&=0 \text{ on } \partial D.
\end{aligned}
\right.
\end{equation}
It is known (see \cite{Tony})
that $u$ does not change sign in $D$ and that $\lambda >0$; we choose $u>0$ and for the moment do not specify a normalization.
Define a probability measure $\nu$ on $D$ by
\begin{equation}\label{qsd}
d\nu = \frac{u(x) e^{-\beta V(x)}\,dx}{\int_D u(x) e^{-\beta V(x)}\,dx}.
\end{equation}
The measure $\nu$ is called the {\it quasistationary distribution} (QSD) on $D$;
the name comes from the fact that $\nu$ has
the following property: for $(X_t)_{t \ge 0}$ a solution to~\eqref{1},
starting from any distribution with support in $D$,
\begin{equation}\label{eq:local_eq}
\nu(A) = \lim_{t\to \infty} \mathbb P(X_t \in A\,\big|\, X_s \in D,\,0\le s
\le t) \hskip20pt \hbox{for any measurable set }A\subset D.
\end{equation}
The following is proved in \cite{Tony}, and will be essential for our results:
\begin{theorem}\label{theorem0a}
Let $X_t$ be a solution to~\eqref{1} with $X_0 \sim \nu$, and let
\begin{equation*}
\tau = \inf\{t>0\,:\,X_t \notin D\}.
\end{equation*}
Then: (i) $\tau \sim {\cal E}(\lambda)$ and (ii) $\tau$ and $X_\tau$ are independent.
\end{theorem}
We will also need the following formula from \cite{Tony} for the exit point distribution:
\begin{theorem}\label{theorem0b}
Let $X_t$ and $\tau$ be as in Theorem~\ref{theorem0a}, and let
$\sigma_{\partial D}$ be Lebesgue measure on $\partial D$. The measure $\rho$ on
$\partial D$ defined by
\begin{equation}\label{eq:rho}
d\rho = -\frac{\partial_n\left(u(x) e^{-\beta V(x)}\right)\,d\sigma_{\partial D}}{\beta\lambda\int_D u(x) e^{-\beta V(x)}\,dx}
\end{equation}
is a probability measure, and for any measurable $A \subset \partial D$,
\begin{equation*}
\mathbb P(X_\tau \in A) = \rho(A).
\end{equation*}
\end{theorem}
As a corollary of these two results we have the following, which will be central to our analysis:
\begin{corollary}\label{corollary1}
Let $X_t$, $\tau$ and $\rho$ be as in Theorems~\ref{theorem0a}-\ref{theorem0b},
and define
\begin{equation}\label{eq:pi}
p_i =\rho(\partial D_i)
\end{equation}
to be the exit probability through $\partial D_i$. Let $I$ be the
discrete random variable defined by: for $i = 1, \ldots, k$,
$$I=i \text{ if and only if } X_\tau \in \partial D_i.$$
Then (i) $\tau \sim {\cal E}(\lambda)$, (ii) $\mathbb P(I=i) = p_i$, and
(iii) $\tau$ and $I$ are independent.
\end{corollary}
Throughout we omit the dependence of $\lambda$, $\nu$, and $\rho$ on
the basin $D$; it should be understood from context.
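In particular, Corollary~\ref{corollary1} shows that if $\lambda$ and the $p_i$ were known, the exit event could be simulated directly as an exponential time with an independent discrete mark. A quick Monte Carlo sketch of this structure (the rate and probabilities below are hypothetical):

```python
import random

def sample_exit_event(lam, p, rng):
    """Sample (tau, I) as in the corollary: tau ~ E(lam), I discrete with
    P(I = i) = p[i], and tau independent of I."""
    tau = rng.expovariate(lam)
    u, acc = rng.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if u < acc:
            return tau, i
    return tau, len(p) - 1     # guard against floating point round-off

rng = random.Random(2)
draws = [sample_exit_event(3.0, [0.5, 0.3, 0.2], rng) for _ in range(20000)]
mean_tau = sum(t for t, _ in draws) / len(draws)   # close to 1/lam = 1/3
freq = [sum(1 for _, i in draws if i == j) / len(draws) for j in range(3)]
```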
\begin{remark} {We assume that $D$ has $C^2$
boundary so that standard elliptic regularity results and trace
theorems give a meaning to the formula~{\eqref{eq:rho}} used to define $\rho$
in Theorem~{\ref{theorem0b}}. For basins of attraction
this assumption will not be satisfied, as
the basins will have
``corners''. This is actually a minor technical point. The probability
measure $\rho$ can be defined for any Lipschitz domain
$D$ using the
following two steps: first, $\rho$ can be defined in
$H^{-1/2}(\partial D)$ using the definition (equivalent
to~{\eqref{eq:rho}}): for any $v \in H^{1/2} (\partial D)$}
$$\langle v , d\rho \rangle =\frac{\int_D ( - \beta^{-1} \nabla w \cdot \nabla u + \lambda w u) \exp(-\beta V) }{\lambda \int_{D} u \exp(-\beta V)},
$$
{where $w \in H^1(D)$ is any lifting of $v$ ($w|_{\partial D}=v$). Second, it is easy to check
that $\rho$ actually defines a} {\em non-negative} {distribution on
$\partial D$, for example by using as a lifting the solution to}
$$
\left\{
\begin{aligned}
L w &= 0 \text{ in } D,\\
w&=v \text{ on } \partial D,
\end{aligned}
\right.
$$
{since, by the maximum principle, $w \ge 0$, and then,
$\langle v , d\rho \rangle
= \frac{\int_D
\lambda w u \exp(-\beta V) }{\lambda \int_{D} u \exp(-\beta
V)}$. One finally concludes using a Riesz representation theorem
due to Schwartz:
any non-negative distribution with total mass one defines a
probability measure.}
\end{remark}
\subsection{Idealized TAD}\label{section3}
In this section we consider an idealized version of
modified TAD, which we call
{\it idealized TAD}. The idealizations, (I1)-(I3) below,
are introduced so that the algorithm
can be rigorously analyzed using the mathematical
formalisms in Section~\ref{section2b}.
\begin{itemize}
\item[(I1)] At the start of the exit step, the high temperature dynamics is
initially distributed according to the QSD in $D$: $X_{T_{sim}}^{hi} \sim \nu^{hi}$;
\item[(I2)] At the end of the exit step, the extrapolated low
temperature exit point $X_{T_{tad}}^{lo}$
is sampled exactly from the conditional exit point distribution in
$\partial D_{I_{min}^{lo}}$ at low temperature:
\begin{equation}\label{exactexit}
X_{T_{tad}}^{lo} \sim \left[\rho^{lo}\left(\partial D_{I_{min}^{lo}}\right)\right]^{-1} \rho^{lo}|_{\partial D_{I_{min}^{lo}}}
\end{equation}
\item[(I3)] In the exit step, the quantity
\begin{equation*}
e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}
\end{equation*}
is everywhere replaced by
\begin{equation}\label{ratios}
\Theta_i \equiv \frac{\lambda^{hi} p_i^{hi}}{\lambda^{lo} p_i^{lo}},
\end{equation}
where, as in~\eqref{eq:pi}, $p_i^{lo}=\rho^{lo}(\partial
D_i)$ and $p_i^{hi}=\rho^{hi}(\partial D_i)$. Thus, the
extrapolation equation~\eqref{arrhenius}
is replaced by
\begin{equation}\label{extrapolate}
T_i^{lo} = T_{i}^{hi}\Theta_i
\end{equation}
and the formula for updating $T_{stop}$ is:
\begin{equation}\label{stop}
T_{stop} = T_{min}^{lo}/C
\end{equation}
where $C$ is chosen so that $C\le \min_{1\le i\le k} \Theta_i$.
\end{itemize}
We state idealized TAD below as an ``algorithm'', even
though it is not practical: in general we cannot exactly sample
$\nu^{hi}$ or the exit distributions
$\left[\rho^{lo}\left(\partial D_{i}\right)\right]^{-1} \rho^{lo}|_{\partial D_{i}}$,
and the quantities $\Theta_i$ are not known
in practice. (See the discussion below Algorithm~\ref{alg3}.)
For the reader's convenience
we put in boxes those steps of idealized
TAD which are different from modified TAD.
\begin{algorithm}[Idealized TAD]\label{alg3}
Let $X_0^{lo}$ be in the basin $D$,
set the low temperature clock time to zero: $T_{tad} =0$,
let $T_{corr}>0$ be a (basin-dependent) decorrelation
time, and iterate on the visited basins the following:
\begin{enumerate}[leftmargin=0.79in]
\item[]{\bf Decorrelation step:}
\item[1.]{Starting at time $t = T_{tad}$, evolve $X_t^{lo}$ at temperature $\beta = \beta^{lo}$
according to~\eqref{1} in the current basin $D$.}
\item[2.]{If $X_t^{lo}$ exits $D$ at a time
$T_{tad} + \tau < T_{tad} + T_{corr}$,
then set
\begin{equation*}
{\hat S}(t) = S(D), \hskip10pt t \in [T_{tad},T_{tad}+\tau],
\end{equation*}
advance the low temperature clock by $\tau$:
$T_{tad} = T_{tad} + \tau$,
then go back to Step 1, where $D$ is now the
new basin. Otherwise, set
\begin{equation*}
{\hat S}(t) = S(D), \hskip10pt t \in [T_{tad},T_{tad}+T_{corr}],
\end{equation*}
advance the low temperature clock by $T_{corr}$: $T_{tad} = T_{tad} + T_{corr}$,
and initialize the exit step by setting $T_{sim} = 0$ and $T_{stop} = \infty$.
Then proceed to the exit step.}
\end{enumerate}
\begin{enumerate}[leftmargin=0.79in]
\item[]{\bf Exit step:}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[1.]{Sample $X_{T_{sim}}^{hi}$ from the QSD at high temperature in $D$:
$X_{T_{sim}}^{hi} \sim \nu^{hi}$.}
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[2.]{Evolve $X_t^{hi}$ at $\beta = \beta^{hi}$ starting
at $t=T_{sim}$ until the
first time after $T_{sim}$ at which it exits $D$.
Call this time $T_{sim}+\tau$.}
\item[3.]{Record the set $\partial D_i$ through which $X_t^{hi}$ exited $D$.}
\item[4.]{Advance the simulation clock by $\tau$: $T_{sim} = T_{sim} + \tau$.}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[5.]{If an exit through $\partial D_i$ has already been observed, go to Step 8.
If an exit through $\partial D_i$ has not yet been observed, set $T_i^{hi} = T_{sim}$ and:
\begin{equation}\label{idealarrhenius}
T_i^{lo} = T_{i}^{hi}\,\Theta_i, \hskip20pt \Theta_i \equiv \frac{\lambda^{hi}p_i^{hi}}{\lambda^{lo}p_i^{lo}}.
\end{equation}
}
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[6.]{Update the lowest extrapolated exit time and corresponding exit spot:
\begin{align*}
T_{min}^{lo} &= \min\{T_{min}^{lo}, T_i^{lo}\}\\
I_{min}^{lo} &= i \hskip10pt\hbox{if}\hskip10pt T_{min}^{lo} = T_i^{lo}.
\end{align*}
}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[7.]{Update $T_{stop}$:
\begin{equation}\label{stop2}
T_{stop} = T_{min}^{lo}/C,\hskip20pt C \le \min_{1\le i\le k}\Theta_i.
\end{equation}
}
\end{enumerate}}}
\begin{enumerate}[leftmargin=0.79in]
\item[8.]{If $T_{sim} \le T_{stop}$, go back to Step 1 of the exit step;
otherwise, proceed to Step 9.}
\end{enumerate}
\begin{enumerate}[leftmargin=0.79in]
\item[9.]{Set
\begin{equation*}
{\hat S}(t) = S(D) \hskip10pt \hbox{for} \hskip10pt t \in [T_{tad},T_{tad}+T_{min}^{lo}],
\end{equation*}
and advance the low temperature simulation clock by
$T_{min}^{lo}$:
\begin{equation*}
T_{tad} = T_{tad}+T_{min}^{lo}.
\end{equation*}}
\end{enumerate}
\fcolorbox{black}[HTML]{E9F0E9}{\parbox{\textwidth}{
\begin{enumerate}
\item[10.]{Let
\begin{equation*}
X_{T_{tad}}^{lo} \sim \left[\rho^{lo}\left(\partial D_{I_{min}^{lo}}\right)\right]^{-1} \rho^{lo}|_{\partial D_{I_{min}^{lo}}}.
\end{equation*}
Then go back to the decorrelation step, the basin $D$ now
being the one obtained by exiting through $X_{T_{tad}}^{lo}$.}
\end{enumerate}}}
\end{algorithm}
\vskip10pt
Below we comment in more detail on idealized TAD.
\begin{itemize}
\item {\bf The quasistationary distribution in $D$: (I1) and (A1).}
In idealized TAD, the convergence to local equilibrium (see (M1)
and (M2) above) is assumed to
be reached, and this is made precise using
the QSD $\nu$. In particular, we start the
high temperature exit search exactly at the QSD $\nu^{hi}$; see (I1).
We will also assume the low temperature dynamics reaches $\nu^{lo}$ at the
end of the decorrelation step:
\begin{itemize}
\item[(A1)] After the decorrelation step of idealized TAD, the low temperature dynamics is
distributed according to the QSD in $D$: $X_{T_{tad}}^{lo} \sim \nu^{lo}$.
\end{itemize}
This will be crucial for extrapolating the exit event
at low temperature. Assumption (A1) is justified by the
fact that the law of $X_t^{lo}$ in the decorrelation step
approaches $\nu^{lo}$ exponentially fast in $T_{corr}$;
see \cite{Tony, Gideon} for details. We also refer to~\cite{Tony,Gideon}
for a presentation of algorithms which can be used to sample the QSD.
\item{\bf The exit position: (I2).}
To get exact metastable dynamics,
we have to assume that the way the dynamics leaves $D$ near a
given saddle point $x_i$ does not affect the
metastable dynamics in the decorrelation step; see (I2). This
can be justified in the small temperature regime by using Theorem~\ref{theorem0b}
and some exponential decay results on the normal derivative of the QSD
away from saddle points. Indeed, the conditional probability that, given the dynamics leaves
through $\partial D_i$, it leaves outside a neighborhood of $x_i$ is of order $e^{-c\beta}$
as $\beta \to \infty$ (for a constant $c>0$); see \cite{Helffer,Tony2}.
\item {\bf Replacing the Arrhenius law extrapolation rule: (I3).}
In idealized TAD, we replace the extrapolation formula~\eqref{arrhenius} based
on the Arrhenius law by the idealized formulas~\eqref{ratios}--\eqref{extrapolate};
see (I3). This is a
severe modification, since it makes the algorithm
impractical. In particular the quantities $\lambda^{lo}$ and
$p^{lo}_i$ are not known: if they were, it would be very easy to
simulate the exit event from~$D$; see Corollary~\ref{corollary1}
above.
It is the aim of Section~\ref{sec:theta} below to explain how the small temperature
assumption is used to get practical estimates of the ratios $\Theta_i$. For simplicity we
perform this small temperature analysis in one dimension.
We will show that $\Theta_i$ is indeed close to the formula
$\exp[-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))]$ used in the
original and modified TAD; compare~\eqref{idealarrhenius}
with~\eqref{arrhenius} and~\eqref{arrhenius2}.
We expect the same relation to be true in higher dimensions
under appropriate conditions; this will be the subject of another paper.
\end{itemize}
In the analysis below, we need idealizations (I1) and (I3) to exactly
replicate the law of the low temperature exit time and exit region in the exit
step; see Theorem~\ref{theorem1} below.
With (I1) and (I3), the inferred low temperature exit events are
statistically exact. This is based in particular on (A1), namely the
fact that the low temperature process is distributed according to
$\nu^{lo}$ at the end of the decorrelation step. In addition, after an
exit event, the
dynamics in the next decorrelation step depends on the exact
exit {\it point} in $\partial D_i$: this is why we also need (I2)
to get exact metastable dynamics; see Theorem~\ref{mainthm} below.
\subsection{Idealized TAD is exact}\label{section4}
The aim of this section is to prove the following result:
\begin{theorem}\label{mainthm}
Let $X_t^{lo}$ evolve according to~\eqref{1} at $\beta = \beta^{lo}$.
Let ${\hat S(t)}$ be the metastable dynamics produced by
Algorithm~\ref{alg3} (idealized TAD), assuming (A1), and let
idealized TAD have the same initial condition as $X_t^{lo}$. Then:
\begin{equation*}
{\hat S}(t)_{t\ge 0} \sim S(X_t^{lo})_{t\ge 0},
\end{equation*}
that is, the metastable dynamics produced by idealized TAD
has the same law as the (exact) low temperature metastable dynamics.
\end{theorem}
Due to Corollary~\ref{corollary1}, (A1), (I2), and the fact that the
low temperature dynamics is simulated exactly during the
decorrelation step, it suffices to prove that the exit step of idealized
TAD is exact in the following sense:
\begin{theorem}\label{theorem1}
Let $X_t^{lo}$ evolve according to~\eqref{1} at $\beta = \beta^{lo}$ with
$X_t^{lo}$ initially distributed according to the QSD in $D$:
$X_0^{lo} \sim \nu^{lo}$. Let $\tau = \inf \{t>0\,:\,X_t^{lo} \notin D\}$ and $I$
be the discrete random variable defined by: for $i=1,\ldots,k$,
$$I=i \text{ if and only if } X_\tau^{lo} \in \partial D_i.$$ Let $T_{min}^{lo}$ and
$I_{min}^{lo}$ be the random variables produced by the exit step of
idealized TAD. Then, $(T_{min}^{lo}, I_{min}^{lo})$ has the
same probability law as $(\tau,I)$:
\begin{equation*}
(T_{min}^{lo}, I_{min}^{lo}) \sim (\tau,I).
\end{equation*}
\end{theorem}
The proof of Theorem~\ref{theorem1} will use (I1) and (I3) in particular.
The theorem shows that the exit event from $D$ produced by idealized TAD
is exact in law compared to the exit event that would have
occurred at low temperature: the random variable $(T_{min}^{lo},I_{min}^{lo})$
associated with idealized TAD has the same law as the first exit time
and location (from $D$) of a dynamics $(X_t^{lo})_{t \ge 0}$ obeying~\eqref{1} with
$\beta=\beta^{lo}$ and $X_0^{lo} \sim \nu^{lo}$.
To begin, we provide a simple lemma which shows that
we can assume $T_{stop}\equiv \infty$ without loss
of generality. We need this result in order to properly
define all the random variables $T^{hi}_i$, for $i=1, \ldots,k$,
where we recall $k$ denotes the number of saddle points
of $V$ on
$\partial D$.
\begin{lemma}\label{lem:Tstop}
Consider the exit step of idealized TAD, and modify Step 8 as follows:
\begin{itemize}
\item[8.]{\it Go back to Step 1 of the exit step.}
\end{itemize}
Thus we loop between Step 1 and Step 8 of the exit step indefinitely, regardless of the values of
$T_{sim}$ and $T_{stop}$. Then, $(T^{lo}_{min}, I_{min}^{lo})$
remains constant for all times $T_{sim} > T_{stop}$.
\end{lemma}
\begin{proof}
We want to show that without ever advancing to Step 10, the exit step of idealized TAD produces
the same random variable $(T_{min}^{lo}, I_{min}^{lo})$ as soon as $T_{sim} > T_{stop}$. To
see this, note that if $T_i^{lo} < T_{min}^{lo}$,
then from~\eqref{idealarrhenius},
\begin{equation*}
T_i^{lo} = T_i^{hi}\frac{\lambda^{hi} p_i^{hi}}{\lambda^{lo}p_i^{lo}} < T_{min}^{lo}
\end{equation*}
and so, comparing with~\eqref{stop2},
\begin{equation*}
T_i^{hi} < T_{min}^{lo}\frac{\lambda^{lo} p_i^{lo}}{\lambda^{hi} p_i^{hi}}\le
\frac{T_{min}^{lo}}{C} = T_{stop}.
\end{equation*}
Thus, if $T_{sim} > T_{stop}$, any escape event will lead to an
extrapolated time $T_i^{lo}$ which will be larger than $T_{min}^{lo}$,
and thus will not change the value of $T_{min}^{lo}$ anymore.
\end{proof}
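The inequality chain in this proof can be checked numerically. In the sketch below, the ratios play the role of hypothetical factors $\Theta_i \ge C$; none of the numbers come from an actual potential.

```python
# Hypothetical extrapolation ratios Theta_i and a current record T_min_lo.
Theta = [7.4, 9.0, 25.0]
C = min(Theta)               # any C <= min_i Theta_i works
T_min_lo = 2.0
T_stop = T_min_lo / C

# An exit observed at any simulation time T_hi > T_stop extrapolates to
# T_lo = T_hi * Theta_i >= T_hi * C > T_stop * C = T_min_lo, so the record
# (T_min_lo, I_min_lo) can no longer change.
checks = [T_hi * theta > T_min_lo
          for theta in Theta
          for T_hi in (T_stop * 1.000001, T_stop * 10.0)]
```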
Let us now identify the laws of the random variables
$(T_i^{hi})_{1\le i \le k}$ produced by idealized TAD.
\begin{proposition}\label{prop:Thi}
Consider idealized TAD in the setting of Lemma~\ref{lem:Tstop},
so that all the $T^{hi}_i$ are defined, $i=1,2,\ldots,k$.
Let $(\tau^{(j)},I^{(j)})_{j \ge 1}$ be independent and
identically distributed random
variables such that $\tau^{(j)}$ is independent from
$I^{(j)}$, $\tau^{(j)} \sim {\cal E}(\lambda^{hi})$ and
for $i=1,\ldots,k$, $I^{(j)}$ is
a discrete random variable with law
\begin{equation*}
\mathbb P(I^{(j)}=i) = p^{hi}_i.
\end{equation*}
For $i=1,\ldots,k$ define
\begin{equation}\label{defineNT1}
N_i^{hi} = \min\{j\,:\,I^{(j)}=i\}.
\end{equation}
Then we have the following equality in law:
\begin{equation}\label{eq:T}
(T^{hi}_1, \ldots, T^{hi}_k)\sim
\left(\sum_{j=1}^{N_1^{hi}}\tau^{(j)}, \ldots,
\sum_{j=1}^{N_k^{hi}}\tau^{(j)}\right).
\end{equation}
Moreover, (i) $T^{hi}_i \sim {\cal E}(\lambda^{hi} p^{hi}_i)$ and (ii) $T^{hi}_1,T^{hi}_2,\ldots,T^{hi}_k$ are independent.
\end{proposition}
\begin{proof}
The equality~\eqref{eq:T} follows from Corollary~\ref{corollary1},
since in the exit step of idealized TAD, the dynamics restarts from
the QSD $\nu^{hi}$ after each escape event.
Let us now consider the statement $(i)$. Observe that the moment generating function of an exponential
random variable $\tau$ with parameter $\lambda$ is: for $s < \lambda$,
\begin{equation*}
\mathbb E\left[\exp\left(s\tau\right)\right] = \int_{0}^\infty e^{st}\lambda e^{-\lambda t}\,dt = \frac{\lambda}{\lambda-s}.
\end{equation*}
So, dropping the superscript $hi$ for ease of notation, we have:
for $i \in \{1,\ldots,k\}$, and for $s < \lambda p_i$,
\begin{align*}
\mathbb E\left[\exp\left(s T_i\right)\right] &= \sum_{m=1}^\infty\mathbb E\left[\exp\left(s T_i\right)\Big| N_i = m\right]\mathbb P\left(N_i = m\right) \\
&= \sum_{m=1}^\infty \mathbb E\left[\exp\left(s \sum_{j=1}^{m}\tau^{(j)}\right)\right](1-p_i)^{m-1}p_i\\
&= \sum_{m=1}^\infty \mathbb E\left[\exp\left(s \tau^{(1)}\right)\right]^m(1-p_i)^{m-1}p_i\\
&= \frac{\lambda p_i}{\lambda - s}\sum_{m=1}^\infty \left(\frac{\lambda\left(1 - p_i\right)}{\lambda - s}\right)^{m-1}\\
&= \frac{\lambda p_i}{\lambda p_i - s}.
\end{align*}
This shows $T^{hi}_i \sim {\cal E}(\lambda^{hi} p^{hi}_i)$.
\end{proof}
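Statement (i) can also be sanity-checked by simulating the i.i.d. race appearing in~\eqref{eq:T}; the rate and probabilities in this sketch are hypothetical.

```python
import random

def first_success_time(lam, p, i, rng):
    """Accumulate tau^(1) + ... + tau^(N_i), where the tau^(j) are i.i.d.
    E(lam) and N_i is the first trial whose independent mark (law p) is i."""
    total = 0.0
    while True:
        total += rng.expovariate(lam)
        u, acc, mark = rng.random(), 0.0, len(p) - 1
        for j, pj in enumerate(p):
            acc += pj
            if u < acc:
                mark = j
                break
        if mark == i:
            return total

rng = random.Random(3)
lam, p = 2.0, [0.6, 0.4]
samples = [first_success_time(lam, p, 1, rng) for _ in range(20000)]
mean_T1 = sum(samples) / len(samples)   # E(lam*p[1]) has mean 1/(lam*p[1]) = 1.25
```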
Before turning to the proof of the statement $(ii)$ in Proposition~\ref{prop:Thi}, we need the following
technical lemma:
\begin{lemma}\label{lemmasym}
Let $a_1,a_2,\ldots,a_n$ be positive real numbers, and let $S_n$ be the symmetric
group on $\{1,2,\ldots,n\}$. Then
\begin{equation}\label{symmetric}
\sum_{\sigma \in S_n}\prod_{i=1}^n \left(\sum_{j=i}^n a_{\sigma(j)}\right)^{-1} = \prod_{i=1}^n a_i^{-1}.
\end{equation}
\end{lemma}
\begin{proof} Note that~\eqref{symmetric} is of course true for $n=1$. Assume it
is true for $n-1$, and let
\begin{equation*}
S_n^{(k)} = \{\sigma \in S_n\,:\, \sigma(1) = k\}.
\end{equation*}
Then
\begin{align*}
\sum_{\sigma \in S_n} \prod_{i=1}^n \left(\sum_{j=i}^n a_{\sigma(j)}\right)^{-1} &=
\left(\sum_{i=1}^n a_i\right)^{-1}\sum_{\sigma \in S_n}\prod_{i=2}^n \left(\sum_{j=i}^n a_{\sigma(j)}\right)^{-1} \\
&=\left(\sum_{i=1}^n a_i\right)^{-1}\sum_{k=1}^n \sum_{\sigma \in S_n^{(k)}}\prod_{i=2}^n\left(\sum_{j=i}^n a_{\sigma(j)}\right)^{-1}\\
&=\left(\sum_{i=1}^n a_i\right)^{-1}\sum_{k=1}^n \prod_{\substack{j=1\\j\ne k}}^n a_j^{-1} \\
&= \prod_{i=1}^n a_i^{-1}.
\end{align*}
By induction~\eqref{symmetric} is valid for all $n$.
\end{proof}
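Since both sides of~\eqref{symmetric} are rational functions of the $a_i$, the identity can be verified exactly for small $n$ with rational arithmetic. Here is a direct check:

```python
from fractions import Fraction
from itertools import permutations

def lhs(a):
    """Left-hand side of the identity: sum over permutations sigma of
    prod_{i=1}^n ( a_{sigma(i)} + ... + a_{sigma(n)} )^{-1}."""
    n = len(a)
    total = Fraction(0)
    for sigma in permutations(range(n)):
        term = Fraction(1)
        for i in range(n):
            term /= sum(a[sigma[j]] for j in range(i, n))
        total += term
    return total

def rhs(a):
    """Right-hand side: prod_{i=1}^n a_i^{-1}."""
    prod = Fraction(1)
    for ai in a:
        prod /= ai
    return prod

check = all(lhs(a) == rhs(a)
            for a in ([Fraction(3), Fraction(4)],
                      [Fraction(1), Fraction(2), Fraction(5)],
                      [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]))
```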
We are now in position to prove statement $(ii)$ of
Proposition~\ref{prop:Thi}.
\begin{proof}[Proof of Proposition~\ref{prop:Thi} part $(ii)$]
In this proof, we drop the superscript $hi$ for ease of notation. To show that the $T_i$'s are independent, it suffices to show that for
$s_1,\ldots,s_k$ in a neighborhood of zero we have
\begin{equation}\label{toshow}
\mathbb E\left[\exp\left(\sum_{i=1}^k s_i T_i\right)\right] = \prod_{i=1}^k \mathbb E\left[\exp\left(s_i T_i\right)\right].
\end{equation}
We saw in the proof of part $(i)$ that: for $s_i < \lambda p_i$,
\begin{equation}\label{factors}
\mathbb E\left[\exp\left(s_i T_i\right)\right] = \frac{\lambda p_i}{\lambda p_i -s_i}.
\end{equation}
Consider then the left-hand side of~\eqref{toshow}. We start with a
preliminary computation. Let $m_0 = 0$, $m_1 = 1$,
and $s_i < \lambda p_i$ for $i=1,\ldots,k$. Then
\begin{align}
&\sum_{1<m_2<m_3\ldots<m_k} \mathbb E\left[\exp\left(\sum_{i=1}^k s_i T_i\right)\Big| \cap_{i=1}^k \{N_i = m_i\}\right]\mathbb P\left(\cap_{i=1}^k \{N_i = m_i\}\right) \nonumber \\
&=\sum_{1<m_2<m_3\ldots<m_k} \mathbb E\left[\exp\left(\sum_{i=1}^k \left(s_i \sum_{j=1}^{m_i}\tau^{(j)}\right)\right)\right]
p_1 \prod_{i=2}^k p_i\left(1-\sum_{j=i}^k p_j\right)^{m_i-m_{i-1}-1}\nonumber\\
&= p_1\sum_{1<m_2<m_3\ldots<m_k}\,\prod_{i=1}^k \mathbb E\left[\exp\left(\left(\sum_{j=i}^k s_j\right)\sum_{j=m_{i-1}+1}^{m_i}\tau^{(j)}\right)\right]\prod_{i=2}^k p_i\left(1-\sum_{j=i}^k p_j\right)^{m_i-m_{i-1}-1} \nonumber\\
&= p_1 \sum_{1<m_2<m_3\ldots<m_k}\,\prod_{i=1}^k \mathbb E\left[\exp\left(\tau^{(1)}\sum_{j=i}^k s_j\right)\right]^{m_i-m_{i-1}}\prod_{i=2}^k p_i\left(1-\sum_{j=i}^k p_j\right)^{m_i-m_{i-1}-1} \nonumber\\
&=\left(\frac{\lambda p_1}{\lambda - \sum_{j=1}^k s_j}\right) \sum_{1<m_2<m_3\ldots<m_k}\,\prod_{i=2}^k p_i \left(\frac{\lambda}{\lambda - \sum_{j=i}^k s_j}\right)\left(\frac{\lambda\left(1-\sum_{j=i}^k p_j\right)}{\lambda - \sum_{j=i}^k s_j}\right)^{m_i-m_{i-1}-1}\nonumber\\
&= \left(\frac{\lambda p_1}{\lambda - \sum_{j=1}^k s_j}\right)\prod_{i=2}^k p_i \left(\frac{\lambda}{\lambda - \sum_{j=i}^k s_j}\right)
\left(1-\frac{\lambda\left(1 - \sum_{j=i}^k p_j\right)}{\lambda - \sum_{j=i}^k s_j}\right)^{-1} \nonumber\\
&= \left(\frac{\lambda p_1}{\lambda - \sum_{j=1}^k s_j}\right)\prod_{i=2}^k \lambda p_i \left(\sum_{j=i}^k \lambda p_j - s_j\right)^{-1}\nonumber\\
&= \prod_{i=1}^k \lambda p_i \left( \sum_{j=i}^k \lambda p_j - s_j\right)^{-1}.
\label{long}
\end{align}
From~\eqref{long} observe that
\begin{align}\begin{split}\label{assume}
\mathbb E\left[\exp\left(\sum_{i=1}^k s_i T_i\right)\right]
&= \sum_{\sigma \in S_k}\prod_{i=1}^k \lambda p_{\sigma(i)} \left( \sum_{j=i}^k \lambda p_{\sigma(j)} - s_{\sigma(j)}\right)^{-1}\\
&= {\left(\prod_{i=1}^k \lambda p_i\right)} \sum_{\sigma \in S_k}\prod_{i=1}^k \left( \sum_{j=i}^k \lambda p_{\sigma(j)} - s_{\sigma(j)}\right)^{-1}\\
&= \prod_{i=1}^k \frac{\lambda p_i}{\lambda p_{i} - s_{i}},
\end{split}
\end{align}
where in the last step we have used Lemma~\ref{lemmasym}.
Comparing~\eqref{toshow} with~\eqref{factors} and~\eqref{assume}, we are done.
\end{proof}
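The permutation identity used in~\eqref{assume} can be checked numerically in the same spirit; $\lambda$, the $p_i$'s and the $s_i$'s below are arbitrary admissible test values satisfying $s_i < \lambda p_i$ (a Python sketch):

```python
from itertools import permutations

lam = 2.0
p = [0.4, 0.35, 0.25]
s = [0.1, -0.3, 0.2]  # each s_i < lam * p_i

k = len(p)

# Left-hand side: the sum over permutations in the second line of (assume).
lhs = 0.0
for sigma in permutations(range(k)):
    term = 1.0
    for i in range(k):
        term *= lam * p[sigma[i]] / sum(lam * p[sigma[j]] - s[sigma[j]] for j in range(i, k))
    lhs += term

# Right-hand side: the product of exponential moment generating functions.
rhs = 1.0
for i in range(k):
    rhs *= lam * p[i] / (lam * p[i] - s[i])

print(abs(lhs - rhs) < 1e-12)  # True
```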
To complete the proof of Theorem~\ref{theorem1}, we finally need the
following Lemma.
\begin{lemma}\label{lemma2}
Let $T_1,\ldots,T_k$ be independent random variables such that $T_i
\sim {\cal E}(\lambda p_i)$, with $\lambda >0$, $p_i > 0$ and
$\sum_{i=1}^k p_i =1$. Set
\begin{equation*}
T = \min_i T_i \hskip10pt\hbox{ and }\hskip10pt I = \arg\min_i\, T_i.
\end{equation*}
Then: (i) $T \sim {\cal E}(\lambda)$, (ii) $\mathbb P(I = i) = p_i$, and (iii) $T$ and $I$ are independent.
\end{lemma}
\begin{proof}
Since the $T_i$'s are assumed to be independent, it is well known that $T = T_I = \min_i T_i$ is an exponential random variable with
parameter $\sum_i \lambda p_i = \lambda$. This proves $(i)$.
Turning to $(ii)$ and $(iii)$, note that $\min_{j\ne i}T_j$ is an exponential
random variable independent of $T_i$ with parameter
\begin{equation*}
\sum_{j\ne i} \lambda p_j = \lambda (1-p_i).
\end{equation*}
Thus,
\begin{align}\begin{split}\label{iiandiii}
\mathbb P(I = i, T_{I} \ge t) &= \mathbb P(t \le T_i \le \min_{j\ne i}T_j)\\
&=\int_t^\infty \int_s^\infty \lambda p_i e^{-\lambda p_i s} \, \lambda(1-p_i) e^{-\lambda (1-p_i)r}
\,dr\,ds \\
&= \int_t^\infty \lambda p_i e^{-\lambda s}\,ds \\
&= p_i \mathbb P(T_I \ge t).
\end{split}
\end{align}
Setting $t=0$ we obtain $\mathbb P(I = i) = p_i$, which proves $(ii)$. Now
$(iii)$ follows from~\eqref{iiandiii}.
\end{proof}
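Lemma~\ref{lemma2} is also easy to confirm by Monte Carlo simulation; the parameter values below ($\lambda = 2$, $p = (0.5, 0.3, 0.2)$) are illustrative choices, not from the text (a Python sketch):

```python
import random

def simulate(lmbda, p, n=200_000, seed=1):
    # Draw independent T_i ~ E(lmbda * p_i); record I = argmin and T = min,
    # estimating P(I = i) and E[T | I = i].
    rng = random.Random(seed)
    k = len(p)
    counts = [0] * k
    sums = [0.0] * k
    for _ in range(n):
        ts = [rng.expovariate(lmbda * p[i]) for i in range(k)]
        i = min(range(k), key=ts.__getitem__)
        counts[i] += 1
        sums[i] += ts[i]
    freq = [c / n for c in counts]
    cond_mean = [s / c for s, c in zip(sums, counts)]
    return freq, cond_mean

freq, cond_mean = simulate(2.0, [0.5, 0.3, 0.2])
# freq should be close to (0.5, 0.3, 0.2), and each conditional mean close to
# 1/lambda = 0.5, consistent with parts (ii) and (iii) of the lemma.
```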
We are now in position to prove Theorem~\ref{theorem1}.
\begin{proof}[Proof of Theorem~\ref{theorem1}.]
First, by Lemma~\ref{lem:Tstop}, we can assume that $T_{stop} =
\infty$ so that all the $T_i^{hi}$'s are well defined, for $i=1,
\ldots, k$. Then Proposition~\ref{prop:Thi} implies that the
$T_i^{hi}$'s are independent exponential random variables with parameters
$\lambda^{hi} p_i^{hi}$. So by~\eqref{idealarrhenius}, the $T_i^{lo}$'s are
independent exponential random variables with parameters $\lambda^{lo} p_i^{lo}$.
Now by applying Lemma~\ref{lemma2} to the $T_i^{lo}$'s, we get $T_{min}^{lo} \sim {\cal E}(\lambda^{lo})$,
$\mathbb P(I_{min}^{lo} = i) = p_i^{lo}$, and $T_{min}^{lo}$ is independent of $I_{min}^{lo}$. Referring to Corollary~\ref{corollary1}, we are done.
\end{proof}
\begin{remark}
Observe that the proof of Theorem~\ref{theorem1} does not use (I2),
which is needed only to obtain correct metastable dynamics by iterating the exit step.
Also, notice that so far we have not used the fact that $D$ is the basin of
attraction of a local minimum of $V$, or that each set $\partial D_i$
in the partition of $\partial D$ is associated with a saddle point $x_i$.
The latter assumption is crucial in the next section, in which we obtain
computable estimates of the ratios $\Theta_i$, $i=1,\ldots,k$;
this will also require an assumption of large $\beta$ which was not needed
for Theorem~\ref{theorem1}.
\end{remark}
\section{Estimates for the $\Theta_i$'s at low temperature in one dimension}\label{sec:theta}
In the last section we showed that modified TAD (Algorithm~\ref{alg2})
is {\it exact} with the idealizations (I1)-(I3) and the assumption (A1);
see idealized TAD (Algorithm~\ref{alg3}). In this section we justify (I3).
In particular, we show in Theorem~\ref{theorem2} below how the ratios
$\Theta_i$ (see~\eqref{ratios}) can be approximated by explicit practical
formulas in one dimension. Compared to Theorem~\ref{theorem1}, the proof of
Theorem~\ref{theorem2} will require the additional assumption that temperature
is sufficiently small.
\subsection{Statement of the main result}
We recall that the ratios $\Theta_i$, $i=1,\ldots,k$ are unknown in
practice. In TAD these ratios are approximated using
the Arrhenius law. The main result of this section, Theorem~\ref{theorem2}, gives precise
asymptotics for $\Theta_i$ as $\beta^{hi}, \beta^{lo} \to \infty$.
In particular, we show that $\Theta_i$ is asymptotically equivalent to
$\exp[-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))]$.
Throughout this section we assume that we are {\em in a one dimensional
setting}. Moreover, we assume that $D$ is the basin of attraction of the gradient
dynamics $dy/dt = -V'(y)$ associated to a local minimum
of $V$ (this is what is done in practice by
A.F. Voter and co-workers). Finally, the potential $V$ is assumed to be
a Morse function, which means that the critical points of $V$ are non-degenerate.
Under these assumptions, we may assume
without additional loss of generality that (see Figure~\ref{fig2}):
\begin{itemize}
\item[(B1)]{$D = (0,b)$, with $b>1$, $V(0)=0$, and $V'(x) \ne 0$ for $x \notin \{0,1,b\}$,}
\item[(B2)]{$V'(0) = 0 = V'(b)$ and $V''(0)<0$, $V''(b)<0$,}
\item[(B3)]{$V'(1) = 0$ and $V''(1)>0$.}
\end{itemize}
We also normalize $u$ (see~\eqref{eq:ulambda}) so that
\begin{itemize}
\item[(B4)] $u(1) = 1$.
\end{itemize}
In particular, the location of the minimum of $V$ and the value
of $V$ at $0$ are chosen for notational convenience and without loss of
generality. In the following, we write $\{0\} = \partial D_1$ and $\{b\} = \partial D_2$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{092306R_fig2.eps}
\end{center}
\caption{A function $V:D \to {\mathbb R}$ satisfying
(B1)-(B3).}
\label{fig2}
\end{figure}
We will prove the following:
\begin{theorem}\label{theorem2}
Under the assumptions stated above, we
have the formula: for $i=1,2$,
\begin{equation}\label{main}
\Theta_i=\frac{\lambda^{hi}p_i^{hi}}{\lambda^{lo}p_i^{lo}} =
e^{-(\beta^{hi}-\beta^{lo})(V(x_i)-V(x_0))}\left(1 + O\left(\frac{1}{\beta^{hi}}- \frac{1}{\beta^{lo}}\right)\right)
\end{equation}
as $\beta^{hi},\beta^{lo} \to \infty$ with $\beta^{lo}/\beta^{hi}=r$
for a fixed constant $r>0$, where $x_1 = 0$, $x_2 = b$ and $x_0 = 1$.
\end{theorem}
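To fix ideas, the extrapolation factor in~\eqref{main} can be evaluated for some made-up barrier heights and inverse temperatures (a Python sketch; none of the numerical values below come from the paper):

```python
import math

# beta_hi < beta_lo: high temperature corresponds to small inverse temperature.
beta_hi, beta_lo = 5.0, 15.0
barriers = {"x1": 0.8, "x2": 1.3}  # V(x_i) - V(x_0), invented values

theta = {name: math.exp(-(beta_hi - beta_lo) * dV) for name, dV in barriers.items()}
# Each theta > 1: exit times at low temperature are longer than at high
# temperature, and more so across the higher barrier.
```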
The ratios $\frac{\lambda^{hi}p_i^{hi}}{\lambda^{lo}p_i^{lo}}$ involve integrals of the form $\int_D e^{-\beta V(x)} u(x)\,dx$
at high and low temperature. We will use Laplace expansions to
analyze the integrals, but since $u$ depends on $\beta$, extra care must
be taken in the analysis.
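The mechanism behind the $1 + O(1/\beta^{hi} - 1/\beta^{lo})$ error term is that each integral admits a Laplace expansion of the form $(1 + k\beta^{-1} + O(\beta^{-2}))$ times an explicit leading term, and the constants $k$ cancel in the ratio to first order. The sketch below numerically checks the leading-order expansion for the illustrative potential $V(x) = -x^2$ (so $V''(0) = -2$; this choice is ours, not from the paper):

```python
import math

def inv_laplace_integral(beta, n=400_000):
    # Trapezoidal approximation of (int_0^1 exp(beta*V(x)) dx)^{-1} for V(x) = -x^2.
    h = 1.0 / n
    s = 0.5 * (1.0 + math.exp(-beta))
    for i in range(1, n):
        s += math.exp(-beta * (i * h) ** 2)
    return 1.0 / (s * h)

beta = 400.0
exact = inv_laplace_integral(beta)
# Leading Laplace term: sqrt(beta) * (-2 V''(0) / pi)^(1/2) with V''(0) = -2.
leading = math.sqrt(beta) * math.sqrt(4.0 / math.pi)
print(abs(exact / leading - 1.0) < 1e-2)  # True
```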
\subsection{Proof of Theorem~\ref{theorem2}}
In all what follows, $(u,-\lambda)$ denotes the principal
eigenvector/eigenvalue pair of $L$ with homogeneous
Dirichlet boundary conditions; see~\eqref{eq:ulambda}. We are
interested in how the pair $(u,-\lambda)$ varies in the small
temperature regime $\beta \to \infty$.
Throughout this section, we write $c$ to denote a {\it positive} constant,
the value of which may change without being explicitly noted.
To begin, we will need some asymptotics for $\lambda$ and $u$, Lemma~\ref{lemma4} and
Lemma~\ref{lemma5} below. The contents of both lemmas are found in or
implied by \cite{Day2},
\cite{Devinatz}, and \cite{Friedman} (see also \cite{Ofer} and \cite{Day}) in the case where
$V' \cdot n >0$ on $\partial D$, with $n$ the normal to $\partial D$
(in our setting $n=1$ on $\partial D_2$ and $n=-1$ on $\partial D_1$). Here, we
consider the case of {\it characteristic boundary}, where
from (B2) $V' \cdot n = 0$ on $\partial D$, so we adapt
the classical results to this case.
\begin{lemma}\label{lemma4}
There exists $c>0$ such that
\begin{equation}\label{lambda}
\lambda = O\left(e^{-c\beta}\right)\hskip20pt\hbox{as }\beta \to \infty.
\end{equation}
\end{lemma}
\begin{proof}
Let $D' \subset D$ be a domain containing $1$ such that ${\overline {D'}} \subset D$, and
let $(u',-\lambda')$ be the principal eigenvector/eigenvalue pair
for $L$ on $D'$ with homogeneous Dirichlet boundary conditions on $\partial D'$.
Recall that $\lambda$ is given by the Rayleigh formula
\begin{equation*}
\lambda = \inf_{f \in H_V^1(D)}\frac{\beta^{-1}\int_D |\nabla f(x)|^2\, e^{-\beta V(x)}\,dx}
{\int_D f(x)^2 \,e^{-\beta V(x)}\,dx},
\end{equation*}
where $H_V^1(D)$ is the space of functions vanishing on ${\mathbb R}\setminus D$ such that
\begin{equation*}
\int_D \left(|\nabla f(x)|^2 + f(x)^2\right)e^{-\beta V(x)}\,dx < \infty,
\end{equation*}
and similarly for $\lambda'$.
Since every function vanishing on ${\mathbb R}\setminus D'$ also vanishes
on ${\mathbb R} \setminus D$, we have
\begin{equation}\label{lambdaprime}
\lambda \le \lambda'.
\end{equation}
Now let $X_t^1$ obey~\eqref{1} with $X_0^1 = 1$, and define $\tau' = \inf\{t>0\,:\, X_t^1 \notin D'\}$.
Since $D'$ is a sub-basin of attraction such that $V'$ points outward on $\partial D'$,
we can use the following classical results (see e.g. Lemmas 3--4 of~\cite{Day2}):
\begin{equation}\label{taulambda}
\lim_{\beta \to \infty} \beta^{-1}\log (1/\lambda')=\lim_{\beta \to \infty} \beta^{-1} \log \mathbb E[\tau'] = \inf_{z \in \partial D'}\inf_{t>0}\, I_{z,t}
\end{equation}
where, by definition,
\begin{align*}
&I_{z,t} = \inf_{f \in H_1^z[0,t]} \frac{1}{4}\int_0^t |{\dot f}(s)+V'(f(s))|^2\,ds\\
&H_1^z[0,t] = \left\{f\,:\, \exists {\dot f} \in L^2[0,t]\,\,s.t.\,\,f(t) = z,\,\forall s \in [0,t],
\,f(s) = 1 + \int_0^s {\dot f}(r)\,dr\right\}.
\end{align*}
Observe that for any $t>0$ and $f \in H_1^z[0,t]$ we have
\begin{align*}
&\frac{1}{4}\int_0^t \left|{\dot f}(s) + V'(f(s))\right|^2\,ds \\
&= \frac{1}{4} \int_0^t \left|{\dot f}(s) - V'(f(s))\right|^2\,ds + \int_0^t {\dot f}(s) V'(f(s))\,ds\\
&\ge V(z)-V(1).
\end{align*}
Since $\partial D'$ is bounded away from $1$, we can conclude that for $z \in \partial D'$,
$I_{z,t} \ge c > 0$ uniformly in $t>0$, for some positive constant $c$. Thus,
\begin{equation*}
\lim_{\beta \to \infty} \beta^{-1} \log \mathbb E[\tau'] \ge c > 0
\end{equation*}
which, combined with~\eqref{lambdaprime} and~\eqref{taulambda}, implies the result.
\end{proof}
Next we need the following regularity result for $u$:
\begin{lemma}\label{lemma5}
The function $u$ is uniformly bounded in $\beta$, that is,
\begin{equation}
||u||_{\infty} = O(1)\hskip20pt \hbox{as } \beta \to \infty,
\end{equation}
where $||\cdot||_{\infty}$ is the $L^\infty$ norm on $C[0,b]$.
\end{lemma}
\begin{proof}
Define $f(t,x) = u(x)e^{\lambda t}$ and set
\begin{equation*}
\tau^x = \inf\{t > 0\,:\, X_t^x \notin (0,1)\}
\end{equation*}
where $X_t^x$ obeys~\eqref{1} with $X_0^x = x$.
Fix $T>0$. By It\={o}'s lemma, for $t \in [0,T\wedge\tau^x]$ we have
\begin{align*}
f(t,X_t^x) &= u(x) + \lambda \int_0^t u(X_s^x)e^{\lambda s}\,ds + \int_0^t Lu(X_s^x)e^{\lambda s}\,ds
+ \sqrt{2\beta^{-1}}\int_0^t u'(X_s^x)\,dW_s \\
&= u(x) + \sqrt{2\beta^{-1}}\int_0^t u'(X_s^x)\,dW_s.
\end{align*}
Setting $t = T\wedge\tau^x$ and taking expectations gives
\begin{equation}\label{FK}
u(x) = \mathbb E\left[f(T\wedge\tau^x,X_{T\wedge\tau^x}^x)\right] = {\mathbb E}\left[e^{\lambda T\wedge\tau^x}u(X_{T\wedge\tau^x}^x)\right].
\end{equation}
Recall that $u$ is bounded for fixed $\beta$. We show
in~\eqref{bound1} below that
$\mathbb E[e^{\lambda \tau^x}]$ is finite, so we may let $T \to \infty$
in~\eqref{FK} and use the dominated convergence theorem to obtain
\begin{equation}\label{ubound}
u(x) = {\mathbb E}\left[e^{\lambda \tau^x}u(X_{\tau^x}^x)\right] \le {\mathbb E}\left[e^{\lambda \tau^x}\right],
\end{equation}
where we have recalled $u(0)=0$ and, from (B4), $u(1) = 1$.
The idea is then to compare $\tau^x$ to the first hitting time of $1$
of a Brownian motion reflected at zero. Define
\begin{equation*}
\sigma^x = \inf\{t>0\,:\, B_t^x \notin (-1,1)\}
\end{equation*}
where
\begin{equation*}
B_t^x = x + \sqrt{2\beta^{-1}}{W}_t
\end{equation*}
with $W_t$ as in~\eqref{1}.
Let ${\bar B}_t^x$ and ${\bar X}_t^x$ be given by reflecting $B_t^x$
and $X_t^x$ at zero. Since $V' < 0$ on $(0,1)$, it is clear that
${\bar X}_t^x \ge {\bar B}_t^x$ for each $x \in (0,1)$ and $t \ge 0$.
Thus,
\begin{align}\begin{split}\label{compare}
\mathbb P\left(\tau^x \ge t\right)
&\le \mathbb P\left(\inf\{s>0\,:\, {\bar X}_s^x = 1\} \ge t\right)\\
&\le \mathbb P\left(\inf\{s>0\,:\, {\bar B}_s^x = 1\} \ge t\right)\\
&\le \mathbb P\left(\inf\{s>0\,:\, {\bar B}_s^0 = 1\} \ge t\right)\\
&= \mathbb P(\sigma^0 \ge t).
\end{split}
\end{align}
We will bound from above the last line of~\eqref{compare}. Let
$v(t,x)$ solve the heat equation $v_t = \beta^{-1}v_{xx}$ with $v(0,x)=1$
for $x \in (-1,1)$ and $v(t,\pm 1) = 0$. An elementary analysis shows that
\begin{equation}\label{v1}
v(t,0) \le \frac{4}{\pi}\exp(-\beta^{-1} \pi^2 t/4).
\end{equation}
(The Fourier sine series for $v(t,x-1)$ on $[0,2]$ at $x=1$ is an alternating
series, and its first term gives the upper bound above.)
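The bound~\eqref{v1} can be compared with the full Fourier series for $v(t,0)$ (a Python sketch; the values of $\beta$ and $t$ are arbitrary illustrative choices):

```python
import math

def v_at_zero(t, beta, terms=100):
    # Fourier cosine series for v_t = beta^{-1} v_xx on (-1,1),
    # with v(0,x) = 1 and v(t,+-1) = 0, evaluated at x = 0.
    s = 0.0
    for m in range(terms):
        k = 2 * m + 1
        s += (4.0 / (k * math.pi)) * (-1) ** m * math.exp(-(k * math.pi / 2) ** 2 * t / beta)
    return s

beta, t = 5.0, 1.0
bound = (4.0 / math.pi) * math.exp(-math.pi ** 2 * t / (4.0 * beta))
series = v_at_zero(t, beta)
print(0.0 < series <= bound)  # True: the first alternating-series term dominates
```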
We claim that for fixed $t$ and $x \in (-1,1)$,
\begin{equation}\label{v2}
v(t,x) = \mathbb P(\sigma^x \ge t).
\end{equation}
To see this, let $w(s,x) = v(t-s,x)$ and observe that $w_s = - \beta^{-1}w_{xx}$, so by It\={o}'s lemma, for $s \in [0,t \wedge \sigma^x]$
\begin{align*}
w(s,B_s^x) &= w(0,x) + \int_0^s \left(w_s + \beta^{-1}w_{xx}\right)(r,B_r^x)\,dr + \sqrt{2\beta^{-1}}\int_0^s w_x(r,B_r^x)\,dW_r\\
&=w(0,x)+ \sqrt{2\beta^{-1}}\int_0^s w_x(r,B_r^x)\,dW_r.
\end{align*}
By taking expectations and setting $s = t \wedge \sigma^x$ we obtain
\begin{align*}
v(t,x) =w(0,x) &= \mathbb E\left[w\left(t \wedge \sigma^x,B_{t \wedge \sigma^x}^x\right)\right] \\
&= \mathbb E\left[w\left(t,B_t^x\right)\,1_{\{t\le \sigma^x\}}\right] + \mathbb E\left[w\left(\sigma^x,B_{\sigma^x}^x\right)\,1_{\{t>\sigma^x\}}\right] \\
&= \mathbb E\left[v\left(0,B_{t}^x\right)\,1_{\{t\le \sigma^x\}}\right] \\
&= \mathbb P(\sigma^x \ge t).
\end{align*}
From~\eqref{compare},~\eqref{v1} and~\eqref{v2}, for $x \in [0,1)$
\begin{equation*}
\mathbb P(\tau^x \ge t) \le \frac{4}{\pi}\exp({-\beta^{-1} \pi^2 t/4}).
\end{equation*}
By Lemma~\ref{lemma4}, $\lambda \beta \to 0$ as $\beta \to \infty$.
So for all sufficiently large $\beta$,
\begin{align}\begin{split}\label{bound1}
\mathbb E\left[e^{\lambda \tau^x}\right]
&= 1 + \int_1^\infty \mathbb P(e^{\lambda \tau^x} \ge t)\,dt \\
&\le 1 + \frac{4}{\pi}\int_1^\infty t^{-\pi^2/(4\lambda\beta)}\,dt\\
&= 1 + \frac{4}{\pi}\frac{4\lambda\beta}{\pi^2-4\lambda\beta}.
\end{split}
\end{align}
Now recalling~\eqref{ubound},
\begin{equation}\label{ubound2}
u(x) \le \mathbb E\left[e^{\lambda \tau^x}\right]\le 1 + \frac{4}{\pi}\frac{4\lambda \beta}{\pi^2 - 4 \lambda \beta}.
\end{equation}
Using Lemma~\ref{lemma4} we see that the right-hand side
of~\eqref{ubound2} approaches $1$ as $\beta \to \infty$. An
analogous argument can be made for $x \in (1,b]$, showing that $u$ is
uniformly bounded in $\beta$ as desired.
\end{proof}
Next we define a function which will be useful in the analysis of~\eqref{ratios}.
For $x \in [0,1]$ let
\begin{equation}\label{f}
f(x) = \frac{\int_0^x e^{\beta V(t)}\,dt}{\int_0^1 e^{\beta V(t)}\,dt}.
\end{equation}
We compare $u$ and $f$ in the following lemma:
\begin{lemma}\label{lemma6}
Let $||\cdot||_{\infty}$ be the $L^\infty$ norm on $C[0,1]$.
With $f$ defined by~\eqref{f}, we have, in the limit $\beta \to \infty$,
\begin{align*}
&||f-u||_{\infty} = O\left(e^{-c\beta}\right),\\
&||f'-u'||_{\infty} = O\left(e^{-c\beta}\right).
\end{align*}
\end{lemma}
\begin{proof}
Observe that $g = f-u$, defined on $[0,1]$, satisfies
\begin{align}\begin{split}\label{g}
&-V'(x)g'(x) + \beta^{-1}g''(x) = \lambda u(x) \\
&\hskip74pt g(0)=0, \hskip5pt g(1) = 0.
\end{split}
\end{align}
Multiplying by $\beta e^{-\beta V(x)}$ in~\eqref{g}
leads to
\begin{equation*}
\frac{d}{dx}\left(e^{-\beta V(x)}g'(x)\right) = \beta e^{-\beta V(x)} \lambda u(x)
\end{equation*}
so that
\begin{equation}\label{gprime}
g'(x) = e^{\beta V(x)}\left(\lambda \beta \int_0^x e^{-\beta V(t)}u(t)\,dt + C_\beta\right).
\end{equation}
Integrating~\eqref{gprime} and using $g(0)=0$,
\begin{equation}\label{here}
g(x) = \lambda \beta \int_0^x \left(e^{\beta V(t)}\int_0^t e^{-\beta V(s)}u(s)\,ds\right)\,dt + C_\beta \int_0^x e^{\beta V(t)}\,dt.
\end{equation}
Using Lemma~\ref{lemma5} we have $||u||_{\infty}\le K<\infty$.
From (B1) and (B3)
we see that $V$ is decreasing on $[0,1]$.
So putting $g(1)=0$ in~\eqref{here} we obtain,
for all sufficiently large $\beta$,
\begin{align}\begin{split}
\label{use}
|C_\beta| &= \lambda \beta \left(\int_0^1 e^{\beta V(t)}\,dt\right)^{-1} \int_0^1 \left(e^{\beta V(t)}\int_0^t e^{-\beta V(s)}u(s)\,ds\right)\,dt \\
&\le \lambda \beta \left(\int_0^1 e^{\beta V(t)}\,dt\right)^{-1}\int_0^1 \left(\int_0^t u(s)\,ds\right)\,dt \\
&\le \lambda \beta K \left(\int_0^1 e^{\beta V(t)}\,dt\right)^{-1} \\
&\le 2\lambda \beta^{3/2} K \left(\frac{-2 V''(0)}{\pi}\right)^{1/2}
\end{split}
\end{align}
where in the last line Laplace's method is used.
Using Lemma~\ref{lemma4}, for all sufficiently large $\beta$,
\begin{equation}\label{bound}
|C_\beta| \le e^{-c\beta}.
\end{equation}
From (B1) and (B3) we see that $V$ is nonpositive on $[0,1]$, so
from~\eqref{gprime},
\begin{align}\begin{split}\label{gprimebound}
|g'(x)|&\le \lambda \beta \int_0^x e^{\beta (V(x)-V(t))}u(t)\,dt + |C_\beta| e^{\beta V(x)} \\
&\le \lambda \beta K + e^{-c\beta}.
\end{split}
\end{align}
Using Lemma~\ref{lemma4} again,
we get $||g'||_{\infty} = O(e^{-c\beta})$.
As $g(0) = 0$ this implies $||g||_{\infty} = O(e^{-c\beta})$.
This completes the proof.
\end{proof}
\begin{remark}\label{remark2}
A result analogous to Lemma~\ref{lemma6} holds, with
\begin{equation*}
f(x) = \frac{\int_x^b e^{\beta V(t)}\,dt}{\int_1^b e^{\beta V(t)}\,dt},
\end{equation*}
for $x \in [1,b]$.
\end{remark}
We are now in position to prove Theorem~\ref{theorem2}.
\begin{proof}[Proof of Theorem~\ref{theorem2}]
It suffices to prove the case $i=1$, so we will look at the
endpoint $\partial D_1 = \{0\}$. From Theorem~\ref{theorem0b} we have
\begin{equation*}
\rho(\{0\}) = \frac{\frac{d}{dx}\left(u(x) e^{-\beta V(x)}\right)\big|_{x=0}}{\beta \lambda\int_D u(x)e^{-\beta V(x)}\,dx}
\end{equation*}
so that
\begin{equation*}
\lambda p_1 = \frac{e^{-\beta V(0)} u'(0)}
{\beta\int_D u(x)e^{-\beta V(x)}\,dx}.
\end{equation*}
Introducing again the superscripts $^{hi}$ and $^{lo}$,
\begin{equation}\label{C}
\frac{\lambda^{hi}p_1^{hi}}{\lambda^{lo}p_1^{lo}} = e^{-(\beta^{hi}-\beta^{lo})V(0)}\cdot \frac{\beta^{lo}}{\beta^{hi}}\cdot\frac{u^{hi\,'}(0)}
{u^{lo\,'}(0)}
\cdot\frac{\int_D u^{lo}(x)e^{-\beta^{lo} V(x)}\,dx}{\int_D u^{hi}(x)e^{-\beta^{hi} V(x)}\,dx}.
\end{equation}
Dropping the superscripts, recalling the function $f$ from~\eqref{f},
and using Lemma~\ref{lemma6}, we see that
\begin{equation*}
u'(0) = f'(0) + O\left(e^{-c\beta}\right).
\end{equation*}
Moreover,
\begin{equation*}
f'(0) = \left(\int_0^1 e^{\beta V(t)}\,dt\right)^{-1} =\left(1+k_1\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta}\left(\frac{-2V''(0)}{\pi}\right)^{1/2}
\end{equation*}
where $k_1$ is a $\beta$-independent constant coming from the
second term in the Laplace expansion. Thus
\begin{align}\begin{split}\label{uprime}
u'(0) &= O\left(e^{-c\beta}\right)+\left(1+k_1\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta}\left(\frac{-2V''(0)}{\pi}\right)^{1/2}\\
&= \left(1+k_1\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta}\left(\frac{-2V''(0)}{\pi}\right)^{1/2}.
\end{split}
\end{align}
This takes care of the third term of the product in~\eqref{C}.
We now turn to the fourth term. Let $y \in (0,1)$ and note that for $t \in (y,1]$,
\begin{equation*}
f'(t) = e^{\beta V(t)}\left(\int_0^1 e^{\beta V(x)}\,dx\right)^{-1} = O\left(e^{-c\beta}\right)
\end{equation*}
where here $c$ depends on $y$. Since $f(1)=1$, for all sufficiently large $\beta$,
\begin{equation}\label{leveling}
|f(t)- 1|\le e^{-c\beta}
\end{equation}
for $t \in [y,1]$ and a different $c$.
Also,
\begin{equation*}
\int_y^1 e^{-\beta V(x)}\,dx = \left(1+ k_2\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta^{-1}}\left(\frac{\pi}{2 V''(1)}\right)^{1/2}\,e^{-\beta V(1)},
\end{equation*}
where $k_2$ is a $\beta$-independent constant coming from
the second term in the Laplace expansion. Thus
\begin{align}\begin{split}\label{part4}
\int_0^1 f(x)e^{-\beta V(x)}\,dx &= O\left(e^{-\beta V(y)}\right)+ \int_y^1 f(x)e^{-\beta V(x)}\,dx \\
&= O\left(e^{-\beta V(y)}\right) + \left(1+ O\left(e^{-c\beta}\right)\right)\int_y^1 e^{-\beta V(x)}\,dx \\
&= O\left(e^{-\beta V(y)}\right) + \left(1+ k_2\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta^{-1}}\left(\frac{\pi}{2 V''(1)}\right)^{1/2}\,e^{-\beta V(1)}\\
&= \left(1+ k_2\beta^{-1}+O\left(\beta^{-2}\right)\right)\sqrt{\beta^{-1}}\left(\frac{\pi}{2 V''(1)}\right)^{1/2}\,e^{-\beta V(1)}.
\end{split}
\end{align}
Using~\eqref{part4} and Lemma~\ref{lemma6} again,
\begin{equation*}
\int_0^1 u(x)e^{-\beta V(x)}\,dx = \left(1 + k_2\beta^{-1}+ O\left(\beta^{-2}\right)\right) \sqrt{\beta^{-1}}\left(\frac{\pi}{2 V''(1)}\right)^{1/2}\, e^{-\beta V(1)}.
\end{equation*}
From Remark~\ref{remark2}, we can make an identical argument on $[1,b)$ to get
\begin{equation}\label{integral}
\int_D u(x)e^{-\beta V(x)}\,dx = \left(1 + k_2\beta^{-1}+O\left(\beta^{-2}\right)\right) \sqrt{\beta^{-1}}\left(\frac{2\pi}{V''(1)}\right)^{1/2}\, e^{-\beta V(1)},
\end{equation}
with a different but still $\beta$-independent $k_2$.
This takes care of the fourth term in the product in~\eqref{C}.
Observe that in the limit $\beta^{hi},\beta^{lo} \to \infty$ with $\beta^{lo}/\beta^{hi} = r$
we have:
\begin{align*}
&\frac{1+k_1(\beta^{hi})^{-1} + O((\beta^{hi})^{-2})}{1+k_1(\beta^{lo})^{-1}+O((\beta^{lo})^{-2})} = 1 + O\left(\frac{1}{\beta^{hi}}- \frac{1}{\beta^{lo}}\right)\\
&\frac{1+k_2(\beta^{lo})^{-1} + O((\beta^{lo})^{-2})}{1+k_2(\beta^{hi})^{-1}+O((\beta^{hi})^{-2})} = 1 + O\left(\frac{1}{\beta^{hi}}- \frac{1}{\beta^{lo}}\right).
\end{align*}
Reintroducing
the superscripts $^{hi}$ and $^{lo}$ and using~\eqref{uprime} and~\eqref{integral}
in~\eqref{C} now gives
\begin{equation}\label{main2}
\frac{\lambda^{hi}p_1^{hi}}{\lambda^{lo}p_1^{lo}} = \left(1 + O\left(\frac{1}{\beta^{hi}}- \frac{1}{\beta^{lo}}\right)\right) e^{-(\beta^{hi}-\beta^{lo})(V(0)-V(1))}
\end{equation}
as desired.
\end{proof}
\section{Conclusion}
We have presented a mathematical framework for TAD which is valid in
any dimension, along with a complete analysis of TAD in one dimension
under this framework. This framework uses the notion of
quasi-stationary distribution, and is useful in particular to clarify
the immediate equilibration assumption (or no-recrossing assumption) which is underlying the original
TAD algorithm and to understand the extrapolation rule using the
Arrhenius law.
We hope to extend this justification of the extrapolation rule to high
dimensions, using techniques from~\cite{Tony2}; the analysis
seems likely to be technically detailed.
We hope that our
framework for TAD will be useful in cases where the original
method is not valid. Indeed, we have shown that TAD can be
implemented wherever accurate estimates for the
ratios in~\eqref{ratios} are available. This fact is important for transitions which
pass through degenerate saddle points, in which case a pre-exponential
factor is needed on the right hand side of~\eqref{main}.
For example, in one dimension,
a simple modification of our analysis shows that if we consider
degenerate critical points on $\partial D$, then a
factor of the form $(\beta^{hi}/\beta^{lo})^{\alpha}$
must be multiplied with the right hand side of~\eqref{main}.
\section*{Acknowledgments}
{\sc D. Aristoff} gratefully acknowledges enlightening discussions with {\sc G. Simpson}
and {\sc O. Zeitouni}. {\sc D. Aristoff} and {\sc T. Leli\`evre} acknowledge
fruitful input from {\sc D. Perez} and {\sc A.F. Voter}. Part of this work was completed while
{\sc T. Leli\`evre} was an Ordway visiting professor at the University of Minnesota.
The work of {\sc D. Aristoff} was supported in part by DOE Award DE-SC0002085.
\label{sec:intro}
Production of an energetic photon in association with hadrons probes the
short-distance dynamics of electron-positron, hadron-hadron, and
lepton-hadron reactions. In addition to providing valuable tests of
perturbative quantum chromodynamics (pQCD), data from electron-positron
annihilation reactions permit
measurements of parton-to-photon fragmentation functions.
In QCD, the quark-photon collinear singularities that arise in each
order of perturbation theory, associated with
the hadronic component of the photon, are subtracted and absorbed into
quark-to-photon and gluon-to-photon fragmentation functions,
in accord with the factorization theorem \cite{AEMP}.
Fragmentation functions, $D\left( z, \mu^2\right)$, are inherently
nonperturbative quantities whose magnitude and dependence on
fractional momentum $z$ must be measured in experiments at a reference
fragmentation scale $\mu^2_0$.
The change of $D\left( z, \mu^2\right)$ with $\mu^2$
for large $\mu^2$ is specified
by perturbative QCD evolution equations \cite{WIT}. In
$e^+e^- \rightarrow \gamma X$, the
fragmentation contributions play a significantly
greater role than they do in hadron-hadron collisions \cite{BXQ}.
In lowest-order, the quark-to-photon and anti-quark-to-photon
fragmentation processes dominate the inclusive reaction
$e^+e^- \rightarrow \gamma X$, whereas ``direct" processes,
such as $qg \rightarrow \gamma q$ and $q\bar{q} \rightarrow \gamma g$,
dominate in $pp \rightarrow \gamma X$ and $\bar{p}p \rightarrow \gamma X$
for $\gamma$'s that carry large values of transverse
momentum \cite{LO,NLO,BQ,DAT}. The dominant role of fragmentation
contributions makes the inclusive process $e^+e^- \rightarrow \gamma X$
a potentially ideal source of information on
$D\left( z, \mu^2\right)$.
The most straightforward theoretical calculations in perturbative QCD
are those for the \underline{inclusive} yield of energetic photons,
$E_\gamma d\sigma/d^3p_\gamma$. However, an important practical
limitation of high energy investigations is that photons are observed
and their cross sections are measured reliably only when the photons
are relatively isolated, separated to some extent in phase space from
accompanying hadrons. Since fragmentation is a process in which
photons are part of quark, anti-quark, or gluon ``jets", it is evident
that photon isolation reduces the contribution from fragmentation terms.
In this paper, we deal with cross sections
for energetic \underline{inclusive} photons, and the extraction of
the photon fragmentation functions $D(z,\mu^2)$.
In another paper \cite{BXQ2}, we will present a systematic and analytic
treatment of cross sections for \underline{isolated} photons, and show
over what regions of $z$ the functions $D(z,\mu^2)$ may be determined
from data on isolated photon production in $e^+e^-\rightarrow \gamma
X$. We will also point out the breakdown of factorization of the
cross section for isolated photons in a particular part of phase space in
$e^+e^- \rightarrow \gamma X$, and the implications of this breakdown
for calculations of isolated photon production in hadronic collisions.
Our calculations of the inclusive photon yields in
$e^+e^- \rightarrow \gamma X$
are carried out through one-loop order. We compute explicitly direct
photon production through first order in the electromagnetic coupling
strength, $\alpha_{em}$, and the quark-to-photon and gluon-to-photon
fragmentation contributions through first order in the strong coupling
strength $\alpha_s$. We display the full angular dependence of the
cross sections, separated into longitudinal $\sin^2 \theta_\gamma$ and
transverse components $\left( 1 + \cos^2 \theta_\gamma \right)$,
where $\theta_\gamma$ is the direction of the $\gamma$ with respect to
the $e^+e^-$ collision axis. Our work goes beyond that of previous
authors \cite{GL,KRAM,LEP}. For example, the full angular dependence
of the cross section was not derived before.
In one recent analysis \cite{GL}, the authors concentrate on
events having the topology of a photon plus
1 hadronic jet; they discuss the extraction of the quark to photon
fragmentation function from such data. In that approach, final state
partons are treated as resolved, and the cancellation of infrared
singularities is not treated explicitly. Practical aspects
of confronting theoretical calculations with data from LEP are
addressed in Ref.~\cite{LEP}. All four groups at LEP have published
papers on prompt photon production \cite{LEP4}. In this paper, we
advocate a different method of analysis of data from that used so far.
We begin in Section II with definitions of the factorized inclusive
photon cross sections, and, to establish notation, we derive explicit
expressions for the inclusive photon yields in lowest order
$\left( O\left( \alpha^o_{em}\right),\ O\left( \alpha^o_s\right) \right)$.
In Section III, we examine in turn the three first-order contributions to the
inclusive photon yield: the $O\left( \alpha_{em}\right)$ process in
which a photon is radiated from a final quark or antiquark line,
$e^+e^- \rightarrow q\bar{q}\, \gamma$, and the $O\left( \alpha_s \right)$
processes in which $e^+e^- \rightarrow q\bar{q}\ g$, followed by
fragmentation of one of the three final-state partons into a photon.
The quark to photon collinear singularity in
$e^+e^- \rightarrow q\bar{q}\gamma$ is absorbed into the
quark-to-photon fragmentation function. Our treatment of the
$O\left( \alpha_s\right)$ contributions necessarily includes a full
discussion of both real and virtual diagrams. Dimensional regularization
is used to handle infrared and collinear singularities. In Section IV, we
summarize our final expressions for the inclusive photon cross section
$E_\gamma d\sigma_{e^+e^- \rightarrow \gamma X}/d^3\ell$, with full
$\theta_\gamma$ dependence. Numerical results and suggestions for
comparisons with $e^+e^-$ data at LEP, SLAC/SLC, TRISTAN, and CESR/CLEO
energies are also collected in Section IV. An Appendix is included in
which we derive expressions for two- and three-particle phase space in
$n$ dimensions.
\section{Definitions, Notation, and Lowest Order Contribution}
\label{sec:def}
In this section we establish the notation to be used throughout the paper
and present our derivation of the lowest order
$O\left( \alpha^o_{em}\alpha^o_s\right)$ contribution to the inclusive
energetic photon yield in $e^+e^- \rightarrow \gamma X$.
\subsection{General Structure of the Cross Section and Kinematics}
\label{subsec:2a}
In $e^+e^-\rightarrow cX$, as sketched in Fig.~\ref{fig1},
the cross section for an $m$
parton final state is
\begin{equation}
d\sigma^{(m)} = {{1} \over {2s}}\Big| \overline{M}
_{e^+e^-\rightarrow {\underbrace{c+\cdots}_{m}}}
\Big|^2 dPS^{(m)}\cdot dz
D_{c\rightarrow \gamma} (z),
\label{l}
\end{equation}
with $c = \gamma, q, \bar{q}, g$ and $z = E_\gamma/E_c$. For
inclusive photons, we integrate over all phase space, $dPS^{(m)}$,
except the momentum of parton ``$c$''. For isolated photons, however,
the phase space, $dPS^{(m)}$, will have extra constraints due to the
definition of the isolated photon events.
For the scattering amplitude, $M_{e^+e^-\rightarrow c+\cdots}$, the
vertex between the intermediate vector boson and the initial/final
fermion pair is expressed
as $ie\gamma_\mu\, (v_f + a_f\, \gamma_5)$. The absolute square
of the matrix element $|\overline{M}|^2$, averaged over initial spins and
summed over final spins and colors,
may be expressed in terms of leptonic and
hadronic tensors, $L_{\mu \nu}\ {\rm and}\ H^{\mu \nu}$, as
\begin{equation}
|\overline{M}|^2 = e^2C \left[ F^{PC} (q^2)\ L^{PC}_{\mu \nu} +
F^{PV} (q^2)\ L^{PV}_{\mu \nu}\right] H^{\mu \nu};
\label{m}
\end{equation}
$e$ denotes the electric charge, and $C$ is the overall color factor.
Since the physical observable, the energetic photon $\gamma$, does not
distinguish between quarks and antiquarks, the parity violating $(PV)$
term does not contribute. Equivalently, only the symmetric part of
$H^{\mu\nu}$ contributes. Therefore,
\begin{equation}
|\overline{M}|^2 = e^2C F^{PC} (q^2)\ L^{PC}_{\mu \nu}\ H^{\mu \nu} \equiv
e^2C F^{PC}_q (q^2) \left( H_1 +H_2\right).
\label{n}
\end{equation}
The functions $H_1$ and $H_2$ are given by
\begin{equation}
H_1 = \left( -g_{\mu\nu}+{{q_\mu q_\nu}\over{q^2}} \right) H^{\mu \nu} =
- g_{\mu \nu} H^{\mu \nu}.
\label{o}
\end{equation}
and
\begin{equation}
H_2 = - {{k_\mu k_\nu} \over {q^2}} H^{\mu \nu}.
\label{p}
\end{equation}
The four-momenta $q^\mu$ and $k^\mu$ are defined in terms of the
four-momenta of the incident $e^+$ and $e^-\ \left( k^\mu_1 {\rm{and}}\
k^\mu_2 \right)$ as
\begin{equation}
q^\mu = k^\mu_1 + k^\mu_2,\ q^2 = \left( k_1 + k_2\right)^2 = s;
\label{q}
\end{equation}
and
\begin{equation}
k^\mu = k^\mu_1 - k^\mu_2,\ k^2 = \left( k_1 - k_2 \right)^2 = -s.
\label{r}
\end{equation}
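As an illustrative numerical check (not part of the derivation), the invariants in Eqs.~(\ref{q}) and (\ref{r}), together with the relation $k\cdot q=0$ used below, can be verified directly for massless incident leptons; the beam energy chosen here is arbitrary:

```python
# Check q^2 = s, k^2 = -s, and k.q = 0 for massless incident leptons,
# using the metric convention (+,-,-,-).

def dot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

E = 45.6  # beam energy in GeV; an arbitrary illustrative value
k1 = (E, 0.0, 0.0,  E)   # incident e+ along +z
k2 = (E, 0.0, 0.0, -E)   # incident e- along -z

q = tuple(a + b for a, b in zip(k1, k2))
k = tuple(a - b for a, b in zip(k1, k2))

s = dot(q, q)
print(s)            # 4 E^2
print(dot(k, k))    # -s
print(dot(k, q))    # 0
```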
The normalization factor $F^{PC}_q (q^2)$ is expressed in terms of
the vector $(v)$ and axial-vector $(a)$ couplings of the intermediate
$\gamma^*$ and $Z^o$ to the leptons and quarks. At the $Z^o$ pole,
neglecting $\gamma, Z^o$ interference, we find
\begin{equation}
{{2} \over {s}}\ F^{PC}_q (s) = \left(|v_e|^2 + |a_e|^2\right)
\left(|v_q|^2 + |a_q|^2\right)
{{1} \over {\left( s-M^2_Z\right)^2 + M^2_Z \Gamma^2_Z}}.
\label{s}
\end{equation}
At modest energies where only the $\gamma^*$ intermediate state is
relevant,
\begin{equation}
{{2} \over {s}}\ F^{PC}_q\ (s) = e^2_q\ {{1} \over {s^2}};
\label{t}
\end{equation}
$e_q$ is the fractional quark charge
$\left( e_u = 2/3;\ e_d = -1/3;\cdots\right)$.
In terms of functions $H_1$ and $H_2$, defined through Eq.~(\ref{n}),
we reexpress the cross section as
\begin{equation}
d\sigma^{(m)}= \sum_q \left[\frac{2}{s}F^{PC}_q(s)\right]\,
e^2\, C\, \frac{1}{4}\left( H_1 +H_2\right)
dPS^{(m)}\, dz\, D(z)\ .
\label{dsigma2}
\end{equation}
In the sections to follow, we calculate the functions $H_1$ and $H_2$
explicitly for the lowest order and the first-order contributions to
$e^+e^- \rightarrow \gamma X$. Function $H_1$ provides the cross section
integrated over all production angles, $\theta_\gamma$, of the $\gamma$.
Function $H_2$ specifies the angular dependence or, equivalently, the
transverse momentum distribution of the $\gamma$ with respect to the
$e^+e^-$ collision axis.
\subsection{Factorized Cross Section}
\label{subsec:2b}
We are interested in the inclusive cross section for production of
photons in association with hadrons, $E_\gamma d\sigma^{incl}_{e^+e^-
\rightarrow \gamma X}/d^3\ell$, where $E_\gamma$ is the energy of the
photon, and $\ell$ is the momentum of the photon in the $e^+e^-$
center-of-mass system. According to the pQCD factorization
theorem \cite{AEMP}, we may express the cross section as
\begin{equation}
E_\gamma{{d\sigma^{incl}_{e^+e^-\rightarrow \gamma X}} \over
{d^3\ell}} \equiv \sum_c E_c{{d\hat{\sigma}^{incl}_{e^+e^-\rightarrow cX}}
\over {d^3p_c}} \otimes D_{c \rightarrow \gamma} (z).
\label{a}
\end{equation}
The intermediate partons are $c = \gamma, g, q$, and $\bar{q}$. The
hard-scattering cross section $E_c d\hat{\sigma}^{incl}_{e^+e^-
\rightarrow cX}/d^3p_c$ contains no infrared or collinear divergences.
The fractional momentum $z$ is defined as $z = E_\gamma/E_c$; all
intermediate partons $c$ are assumed to be massless. The fragmentation
functions $D_{c\rightarrow \gamma} (z)$ represent all long-distance
physics associated with the hadronic component of the photon. They
are inherently non-perturbative quantities that must be
measured experimentally. Models and phenomenological
parametrizations \cite{JFO}
for $D(z)$ have been published. In lowest order,
$D_{\gamma \rightarrow \gamma} (z) = \delta (1-z)$.
The convolution expressed in Eq.~(\ref{a}) is sketched in Fig.~\ref{fig2}.
The symbol
$\otimes$ in Eq.~(\ref{a}) is defined explicitly as follows:
\begin{equation}
E_c{ {d\hat{\sigma}^{incl}_{e^+e^- \rightarrow cX}} \over
{d^3p_c}} \otimes D_{c \rightarrow \gamma} (z) \equiv \int^1_{z_{\min}}
{{dz} \over {z^2}}
\left[ E_c {{d\hat{\sigma}^{incl}_{e^+e^- \rightarrow cX}
\left( E_c = {{E_\gamma} \over {z}}\right)} \over
{d^3p_c}}\right]
D_{c\rightarrow \gamma}(z).
\label{b}
\end{equation}
Since $z_{\min}$ occurs when $p_c$ has its maximum value,
$p_c^{\max} = \sqrt{s}/2$, the lower limit of integration
$z_{\min} = x_\gamma = 2E_\gamma/ \sqrt{s}$; $\sqrt{s}$ is the center
of mass energy of the $e^+e^-$ annihilation.
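The convolution in Eq.~(\ref{b}) is a one-dimensional integral over $z$ and is straightforward to evaluate numerically. The sketch below illustrates only the mechanics of the $\otimes$ operation; the hard cross section and fragmentation function used are hypothetical toy shapes, not the physical expressions derived later in the paper:

```python
# Numerical sketch of the convolution in Eq. (b):
#   int_{z_min}^{1} dz/z^2  sigma_hat(E_c = E_gamma/z) D(z),
# with z_min = x_gamma = 2 E_gamma / sqrt(s).
# Both sigma_hat and D below are TOY shapes, for illustration only.

def convolve(sigma_hat, D, E_gamma, sqrt_s, n=2000):
    z_min = 2.0 * E_gamma / sqrt_s
    h = (1.0 - z_min) / n                 # midpoint rule on [z_min, 1]
    total = 0.0
    for i in range(n):
        z = z_min + (i + 0.5) * h
        total += sigma_hat(E_gamma / z) * D(z) / z**2
    return total * h

sigma_hat_toy = lambda E_c: 1.0 / E_c**2      # hypothetical hard cross section
D_toy = lambda z: (1.0 + (1.0 - z)**2) / z    # hypothetical fragmentation shape

val = convolve(sigma_hat_toy, D_toy, E_gamma=10.0, sqrt_s=91.2)
print(val)
```

Note that the integration range shrinks to zero as $E_\gamma \rightarrow \sqrt{s}/2$, i.e. as $z_{\min} \rightarrow 1$, as stated above.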
\subsection{Derivation of the Lowest Order Contribution}
\label{subsec:2c}
In this section we present an explicit derivation of the lowest order
contribution to the inclusive photon yield in $e^+e^- \rightarrow \gamma X$,
sketched in Fig.~\ref{fig3}. The differential inclusive cross section
$d\sigma_{e^+e^- \rightarrow \gamma X}$
is expressed as a product of the lowest
order partonic cross section
$d\hat{\sigma}^{(o)}_{e^+e^- \rightarrow q\bar{q}}$
and the $q \rightarrow \gamma$ fragmentation function,
$D_{q \rightarrow \gamma} (z)$.
\begin{equation}
d\sigma_{e^+e^- \rightarrow \gamma X} = \sum_q d\hat{\sigma}^{(o)}_
{e^+e^- \rightarrow q(p_q)\bar{q}}\ dz\ D_{q \rightarrow \gamma} (z) +
(q \rightarrow \bar{q}).
\label{u}
\end{equation}
In Eq.~(\ref{u}), $p_q$ is the four-vector momentum of the quark $q$, and
$z = \ell/p_q$. The partonic cross section is written, in turn, in terms of
the invariant matrix element and differential phase space factor.
\begin{eqnarray}
d\hat{\sigma}^{(o)}_{e^+e^- \rightarrow p_q p_{\bar{q}}}
&=& {{1} \over {2s}}
\Big| \overline{M}_{e^+e^- \rightarrow p_q p_{\bar{q}}} \Big|^2\
dPS^{(2)} \nonumber \\
&=& \left[\frac{2}{s}F^{PC}_q(s)\right]\,
e^2\, N_c\, \frac{1}{4}\left( H_1 +H_2\right)\, dPS^{(2)}\ ,
\label{v}
\end{eqnarray}
where $N_c = 3$ is the number of colors carried by the quarks, and
Eq.~(\ref{n}) was used.
The symmetric part of the hadronic tensor $H^{\mu\nu}$, used to define
functions $H_1$ and $H_2$, is particularly simple:
\begin{equation}
H^{\mu\nu} = 4 \left(e\mu^{\epsilon}\right)^2
\left[ p_{q}^{\mu} p_{\bar{q}}^{\nu}
+ p_{\bar{q}}^{\mu} p_{q}^{\nu}
- g^{\mu\nu} p_q \cdot p_{\bar{q}} \right].
\label{w}
\end{equation}
The factor $\mu^{\epsilon}$ in Eq.~(\ref{w}) accommodates the fact that
we are working in $n$ dimensions. The dimensional scale $\mu$ will be
specified further below.
The functions $H_1$ and $H_2$, defined in
Section~\ref{subsec:2a}, become
\begin{mathletters}
\label{x}
\begin{eqnarray}
H_1 &=& 4\left(e\mu^{\epsilon}\right)^2\,
s\, (1-\epsilon)\ ; \label{x1} \\
H_2 &=& -2\left(e\mu^{\epsilon}\right)^2\,
s\, (1 - \cos^2 \theta)\ . \label{x2}
\end{eqnarray}
\end{mathletters}
In Eq.~(\ref{w}), $\epsilon$ is defined through the number of space-time
dimensions $n = 4-2\epsilon$, with $\epsilon \rightarrow 0$ at the
end of the calculation. In the center of mass frame of the collision,
$\theta$ is the angle of
$\vec{p}_q$ with respect to the direction defined by the incident $e^+$.
Combining $H_1$ and $H_2$, we obtain
\begin{equation}
\frac{1}{4}\left(H_1+H_2\right)=
\frac{1}{2}\, \left(e\mu^\epsilon\right)^2\, s\,
\left[ (1 + \cos^2 \theta) - 2\epsilon \right]\ .
\label{y}
\end{equation}
\noindent
Combining Eq.~(\ref{y}) with the expression for
two-particle phase space in $n$-dimensions, Eq.~(\ref{A.4})
of the Appendix,
we find that the lowest order partonic cross
section, Eq.~(\ref{v}), is
\begin{equation}
E_q{{ d\hat{\sigma}^{(0)}_{e^+e^-\rightarrow qX}} \over {d^3 p_q}} =
\left[{{2} \over {s}} F^{PC}_q (s)\right] \alpha^2_{em} N_c
\left( {{4 \pi \mu^2} \over {(s/4) \sin^2\theta}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}}
\left[ \left( 1 + \cos^2\theta \right) -2\epsilon \right]
{{\delta(x_q-1)} \over {x_q}},
\label{dd}
\end{equation}
with $x_q = 2E_q/\sqrt{s}$.
At this order, the cross section is manifestly finite in the limit
$\epsilon \rightarrow 0$, and we may set $\epsilon = 0$ directly in
Eq.~(\ref{dd}). Nevertheless, Eq.~(\ref{dd}) expressed in $n$ dimensions
is valuable for later comparison with the higher order cross section.
Noting that
$\ell = zp_q$ implies $d^3p_q/E_q = \left( 1/z^2\right) d^3\ell/
E_\gamma$, we obtain the lowest order inclusive cross section
\begin{eqnarray}
E_\gamma {{d\sigma^{incl}_{e^+e^-\rightarrow \gamma X}} \over d^3 \ell}
&=& 2\sum_q\ \int^1_{x_\gamma}\ {{dz} \over {z^2}}
\left[ E_q{{ d\hat{\sigma}^{(0)}_{e^+e^-\rightarrow q X}}
\over {d^3p_q}} \left( x_q = {{x_\gamma} \over {z}}\right)\right]
D_{q\rightarrow \gamma} (z, \mu_F)\nonumber \\
&=& 2\sum_q \left[ {{2} \over {s}} F^{PC}_q (s)\right] \alpha^2_{em}
N_c (1 + \cos^2\theta_\gamma) {{1} \over {x_\gamma}}
D_{q \rightarrow \gamma} (x_\gamma, \mu_F).
\label{ee}
\end{eqnarray}
The angles $\theta_\gamma$ and $\theta$ are identical since we take all
products of the fragmentation to be collinear. The overall factor of 2 in
Eq.~(\ref{ee}) accounts for the $\bar{q}$ contribution. In Eq.~(\ref{ee})
we have introduced a fragmentation scale $\mu_F$ in the specification of
the fragmentation function.
\section{First Order Contributions}
\label{sec:first}
There are three distinct contributions to $e^+e^-\rightarrow \gamma X$
in first order perturbation theory:
\begin{mathletters}
\label{ii}
\begin{eqnarray}
e^+e^- \rightarrow \gamma, &\hspace{0.5in}& O(\alpha_{em})
\label{ii1} \\
e^+e^- \rightarrow q \
(\mbox{or}\ \bar{q}) \rightarrow \gamma, &\hspace{0.5in}& O(\alpha_s)
\label{ii2} \\
& \rm{and} & \nonumber \\
e^+e^- \rightarrow g \rightarrow \gamma. &\hspace{0.5in}& O(\alpha_s)
\label{ii3}
\end{eqnarray}
\end{mathletters}
In Eqs. (\ref{ii2}) and (\ref{ii3}),
we have in mind contributions from quark and gluon
fragmentation to photons in the three-parton final state process
$e^+e^- \rightarrow q\bar{q}g$. The first contribution, Eq.~(\ref{ii1}),
arises
from $e^+e^- \rightarrow q\bar{q}\gamma$ where the $\gamma$ is
\underline{not} collinear with either $\bar{q}$ or $q$.
In this section we derive and present the explicit contributions to the
inclusive yield
$E_\gamma d\sigma^{incl}_{e^+e^- \rightarrow\gamma X}/d^3\ell$
from each of the three processes in Eq.~(\ref{ii}). Following the pQCD
factorization theorem, and Eq.~(\ref{a}), we must calculate the
short-distance hard-scattering cross sections,
$E_c d\hat{\sigma}^{incl}_{e^+e^- \rightarrow cX}/d^3p_c$
for $c=\gamma,g,q$ and $\bar{q}$.
The Feynman graphs for $e^+e^- \rightarrow\gamma q\bar{q}$ are sketched in
Fig.~\ref{fig4}. Owing to the quark-photon
collinear divergence, the cross section
associated with these graphs is formally divergent. We denote this
divergent first order cross section
$\sigma^{(1)}_{e^+e^- \rightarrow\gamma X}$,
a short-hand notation for $Ed\sigma/d^3\ell$.
To derive the corresponding short-distance hard-scattering cross
section, $\hat{\sigma}^{(1)}_{e^+e^- \rightarrow\gamma X}$,
we apply the factorized form, Eq.~(\ref{a}), perturbatively,
\begin{eqnarray}
\sigma^{(1)}_{e^+e^- \rightarrow\gamma X} &=& \hat{\sigma}^{(1)}_
{e^+e^- \rightarrow\gamma X} \otimes D^{(0)}_{\gamma\rightarrow\gamma} (z)
\nonumber \\
&+& \hat{\sigma}^{(0)}_{e^+e^- \rightarrow q X} \otimes
D^{(1)}_{q\rightarrow \gamma} (z)\nonumber \\
&+& (q \rightarrow \bar{q}).
\label{jj}
\end{eqnarray}
The convolution represented by $\otimes$ is defined in Eq.~(\ref{b}). The
superscripts (0) and (1) on the hard-scattering cross sections
$\hat{\sigma}$ and fragmentation functions $D$ refer to lowest-order
and first order, respectively. The collinear divergence resides
in the first order fragmentation function
$D^{(1)}_{q\rightarrow \gamma}(z)$.
The hard-scattering cross sections $\hat{\sigma}^{(1)}$ and
$\hat{\sigma}^{(0)}$ are finite.
The expression for $\hat{\sigma}^{(0)}$ was derived in
Section~\ref{subsec:2c}. In
Section~\ref{subsec:3a} we present our derivation of
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow\gamma X}$:
\begin{equation}
\hat{\sigma}^{(1)}_{e^+e^- \rightarrow \gamma X} =
\sigma^{(1)}_{e^+e^- \rightarrow \gamma X} -
\hat{\sigma}^{(0)}_{e^+e^- \rightarrow q X} \otimes
D^{(1)}_{q\rightarrow \gamma} (z) - (q \rightarrow \bar{q}).
\label{kk}
\end{equation}
The two Feynman graphs that provide the cross section for
$e^+e^- \rightarrow g \rightarrow \gamma$ in\linebreak
$O(\alpha_s)$ are shown in
Fig.~\ref{fig5}. In this
case, the final gluon is effectively ``observed'' through the fragmentation
$g\rightarrow \gamma$; there are no virtual gluon exchange diagrams. The
finite hard-scattering cross section
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow g X}$ is derived from
the difference
\begin{equation}
\hat{\sigma}^{(1)}_{e^+e^- \rightarrow g X} =
\sigma^{(1)}_{e^+e^- \rightarrow g X} -
\sum_{q^\prime =q}^{\bar{q}}
\hat{\sigma}^{(0)}_{e^+e^- \rightarrow q^\prime X} \otimes
D^{(1)}_{q^\prime \rightarrow g}.
\label{ll}
\end{equation}
In Eq.~(\ref{ll}), the divergent cross section
$\sigma^{(1)}_{e^+e^- \rightarrow g X}$
is evaluated from the Feynman graphs shown in Fig.~\ref{fig5},
and the quark-to-gluon
collinear divergences are embedded in the first-order fragmentation function
$D^{(1)}_{q^\prime \rightarrow g}$. Our derivation of
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow g X}$ is presented in Section~\ref{subsec:3b}.
The Feynman graphs in $O(\alpha_s)$ that contribute to
$e^+e^- \rightarrow q \rightarrow \gamma$ (Eq.~(\ref{ii2}))\linebreak
are sketched in Fig.~\ref{fig6}. Only a final state
photon from quark fragmentation is observed. The complete $O(\alpha_s)$
result includes both real gluon emission and virtual gluon exchange graphs,
as shown in Fig.~\ref{fig6}.
Although infrared divergences associated with soft gluons
cancel between the real and virtual graphs, the
cross section $\sigma^{(1)}_{e^+e^- \rightarrow q X}$
obtained from the Feynman graphs is still divergent due to collinear
singularities when the real gluon is emitted along the direction of its
parent quark or antiquark. To obtain the corresponding hard-scattering
cross section $\hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}$, we apply
the factorized form, Eq.~(\ref{a}), perturbatively, to the production of
a quark instead of the photon,
\begin{equation}
\sigma^{(1)}_{e^+e^- \rightarrow q X} = \sum^{\bar{q}}_{q^\prime=q}
\hat{\sigma}^{(0)}_{e^+e^- \rightarrow q^\prime} \otimes
D^{(1)}_{q^\prime \rightarrow q} + \hat{\sigma}^{(1)}
_{e^+e^- \rightarrow q X} \otimes D^{(0)}_{q\rightarrow q},
\label{mm}
\end{equation}
with the collinear $q^\prime \rightarrow q$ singularities in
$O(\alpha_s)$ included in $D^{(1)}_{q^\prime \rightarrow q}$. Note that
$D^{(0)}_{q\rightarrow q} (z) = \delta(1-z)$.
Correspondingly, the finite hard-scattering cross section
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}$ is
\begin{equation}
\hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X} =
\sigma^{(1)}_{e^+e^- \rightarrow q X} -
\hat{\sigma}^{(0)}_{e^+e^- \rightarrow q} \otimes
D^{(1)}_{q\rightarrow q}.
\label{nn}
\end{equation}
In Section~\ref{subsec:3c} we present a detailed derivation of
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}$.
Before turning to our explicit derivations, we conclude this discussion
with a presentation of the factorized formula for
the two-loop short-distance hard-scattering cross sections.
The two-loop direct contribution to $e^+e^-\rightarrow \gamma X$
is of $O(\alpha_{em}\alpha_s)$ and can be derived as follows.
We first apply the factorized form, Eq.~(\ref{a}),
perturbatively, at two-loop level and sum over $c=\gamma, g, q$ and
$\bar{q}$,
\begin{eqnarray}
\sigma^{(2)}_{e^+e^- \rightarrow\gamma X}
&=& \hat{\sigma}^{(2)}_{e^+e^- \rightarrow\gamma X}
\otimes D^{(0)}_{\gamma\rightarrow\gamma} (z)
+ \hat{\sigma}^{(1)}_{e^+e^- \rightarrow\gamma X}
\otimes D^{(1)}_{\gamma\rightarrow\gamma} (z)\nonumber \\
&+& \hat{\sigma}^{(1)}_{e^+e^- \rightarrow g X}
\otimes D^{(1)}_{g\rightarrow \gamma} (z)
+ \hat{\sigma}^{(0)}_{e^+e^- \rightarrow g X}
\otimes D^{(2)}_{g\rightarrow \gamma} (z)\nonumber \\
&+& \hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}
\otimes D^{(1)}_{q\rightarrow \gamma} (z)
+ \hat{\sigma}^{(0)}_{e^+e^- \rightarrow q X}
\otimes D^{(2)}_{q\rightarrow \gamma} (z)\nonumber \\
&+& (q \rightarrow \bar{q}).
\label{eeg2}
\end{eqnarray}
All first-order contributions, $\hat{\sigma}^{(1)}$,
in Eq.~(\ref{eeg2})
are given in Eqs.~(\ref{kk}), (\ref{ll}) and (\ref{nn}),
and they are calculated in this paper. Since the first order
fragmentation functions $D^{(1)}_{\gamma\rightarrow \gamma}(z)$
and $D^{(1)}_{g\rightarrow \gamma}(z)$ vanish, and the zeroth order
hard-scattering cross section
$\hat{\sigma}^{(0)}_{e^+e^- \rightarrow g X}$ vanishes, we
derive the two-loop hard-scattering cross sections
$\hat{\sigma}^{(2)}_{e^+e^- \rightarrow\gamma X}$ as:
\begin{eqnarray}
\hat{\sigma}^{(2)}_{e^+e^- \rightarrow \gamma X}
&=& \sigma^{(2)}_{e^+e^- \rightarrow \gamma X} \nonumber \\
&-& \hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}
\otimes D^{(1)}_{q\rightarrow \gamma} (z)
- \hat{\sigma}^{(0)}_{e^+e^- \rightarrow q X}
\otimes D^{(2)}_{q\rightarrow \gamma} (z) \nonumber \\
&-& (q \rightarrow \bar{q}).
\label{hs2g}
\end{eqnarray}
To complete the calculation of
$\hat{\sigma}^{(2)}_{e^+e^- \rightarrow\gamma X}$, it is necessary
to calculate the two-loop parton-level cross section
$\sigma^{(2)}_{e^+e^- \rightarrow\gamma X}$ and the two-loop
quark-to-photon fragmentation function
$D^{(2)}_{q\rightarrow \gamma} (z)$ in $n$-dimensions (implicitly, we use
dimensional regularization), in addition to all the zeroth and first
order contributions calculated in this paper.
The two-loop parton level cross section
$\sigma^{(2)}_{e^+e^- \rightarrow\gamma X}$ is formally divergent.
As is true of the calculation of
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow q X}$ in
Section~\ref{subsec:3c},
all infrared divergences associated with soft gluons
cancel among the real emission and virtual exchange diagrams.
All collinear divergences that
appear when final-state quarks and/or gluons are parallel
to the observed photon are cancelled by the subtraction terms given
in Eq.~(\ref{hs2g}). Consequently, the two-loop hard-scattering
cross section $\hat{\sigma}^{(2)}_{e^+e^- \rightarrow \gamma X}$
is finite if the pQCD factorization theorem holds.
As shown in Section IV, the leading order short-distance
direct production contribution
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow \gamma X}$
is much smaller than the leading order fragmentation
contribution\linebreak
$\hat{\sigma}^{(0)}_{e^+e^- \rightarrow qX}\otimes
D^{(1)}_{q\rightarrow\gamma}(z) + (q\leftrightarrow\bar{q})$.
We expect that the next-to-leading order direct contribution
$\hat{\sigma}^{(2)}_{e^+e^- \rightarrow \gamma X}$ will be much smaller
than the next-to-leading order fragmentation contributions
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow cX}\otimes
D^{(1)}_{c\rightarrow\gamma}(z)$ with $c=g, q$ and $\bar{q}$,
which are completely derived in this paper. We will not
calculate the two-loop contributions in this paper because we believe
their contributions to the overall cross section are negligible
in comparison with those presented here.
\subsection{Derivation of
$\hat{\sigma}^{(1)}_{e^+e^- \rightarrow \gamma X}$}
\label{subsec:3a}
In this section we present an explicit derivation of the finite
hard-scattering
cross section $\hat{\sigma}^{(1)}_{e^+e^- \rightarrow \gamma X}$ in
$O(\alpha_{em})$.
We begin by computing the functions $H_1$ and $H_2$, defined in
Section~\ref{subsec:2a},
Eqs.~(\ref{o}) and (\ref{p}). These will then be integrated over phase
space to yield the cross section
$E_\gamma d\sigma^{(1)}_{e^+e^- \rightarrow \gamma X}/d^3\ell$:
\begin{equation}
d\sigma^{(1)}_{e^+e^-\rightarrow \gamma X} =
\sum_{q} \left[ {{2} \over {s}}\, F^{PC}_q(s)\right]
e^2\, N_c\, \frac{1}{4}(H_1 + H_2) dPS^{(3)},
\label{oo}
\end{equation}
where three-particle phase space in $n$-dimensions is given in
Eq.~(\ref{A.22a}) of the Appendix.
Sketched in Fig.~\ref{fig7} is the hadronic tensor
$H_{\mu\nu}$ obtained from the two
diagrams of Fig.~\ref{fig4}.
Performing traces to sum over final spins, we may write
the four contributions as
\begin{eqnarray}
H^{(a)}_{\mu\nu} &=& 2(1-\epsilon) Tr \left[ \gamma_\mu \gamma\cdot\ell
\gamma_\nu \gamma\cdot p_2\right] {1 \over {2p_1\cdot\ell}};\nonumber \\
H^{(b)}_{\mu\nu} &=& 2(1-\epsilon) Tr \left[ \gamma_\mu \gamma\cdot p_1
\gamma_\nu \gamma\cdot\ell\right] {1 \over {2p_2 \cdot\ell}};\nonumber \\
H^{(c)}_{\mu\nu} &=& -2Tr \left[ \gamma_\mu \gamma\cdot p_1 \gamma\cdot
p_2 \gamma_\nu \gamma\cdot (p_1 + \ell) \gamma\cdot (p_2 + \ell)\right]
{1 \over{2p_1 \cdot \ell}}\, {1 \over{2p_2 \cdot {\ell}}}\nonumber \\
&+& 2\epsilon Tr \left[ \gamma_\mu \gamma\cdot p_1 \gamma\cdot \ell
\gamma_\nu \gamma\cdot p_2 \gamma\cdot \ell\right]
{1 \over {2p_1\cdot \ell}}\, {1 \over {2p_2\cdot \ell}};\nonumber \\
H^{(d)}_{\mu\nu} &=& -2 Tr \left[ \gamma_\mu \gamma\cdot (p_1 + \ell)
\gamma\cdot (p_2 + \ell) \gamma _\nu \gamma\cdot p_1 \gamma\cdot p_2\right]
{1 \over {2p_1 \cdot \ell}}\, {1 \over {2p_2\cdot \ell}}\nonumber \\
&+&
2\epsilon Tr \left[ \gamma_\mu \gamma\cdot \ell \gamma\cdot p_1 \gamma_\nu
\gamma\cdot \ell \gamma\cdot p_2\right]
{1 \over {2p_1\cdot \ell}}\, {1 \over {2 p_2 \cdot \ell}}.
\label{pp}
\end{eqnarray}
To avoid repeating a common factor, we temporarily omit the
overall coupling factor $e_q^2 (e\mu^\epsilon)^4$ that appears in
$H_{\mu\nu}$. Function $H_1 = -g_{\mu\nu}\ H^{\mu\nu} = -g_{\mu\nu}
\sum\limits^{d}_{i=a} H^{(i)\mu\nu}$. We obtain
\begin{equation}
H_1 = 8(1-\epsilon) \left\{ (1-\epsilon)
\left[ {{y_{1\ell}} \over {y_{2\ell}}} + {{y_{2\ell}}
\over {y_{1\ell}}}\right]
+ {{2y_{12}} \over {y_{1\ell}\, y_{2\ell}} } -2\epsilon \right\}.
\label{qq}
\end{equation}
The dimensionless quantities $y_{1\ell}, y_{2\ell}$, and $y_{12}$
are defined by
\begin{eqnarray}
y_{i\ell}
&=& {{2p_i\cdot \ell} \over {q^2}}\ (i=1, 2); \nonumber \\
y_{12}
&=& {{2p_1\cdot p_2} \over {q^2}}.
\label{rr2}
\end{eqnarray}
We remark that $y_{12}+y_{1\ell}+y_{2\ell}=1$.
In evaluating $H_2 = -\left( k_\mu k_\nu /q^2\right) H^{\mu\nu}$, we
also make use of dimensionless quantities
$y_{1k}, y_{2k}$, and $y_{k\ell}$:
\begin{eqnarray}
y_{ik} &=& {{2p_i \cdot k} \over {q^2}}\ (i=1, 2); \nonumber \\
y_{k\ell} &=& {{2k\cdot \ell} \over {q^2}}.
\label{ss}
\end{eqnarray}
Because $k\cdot q = 0$,
\begin{equation}
y_{1k} + y_{2k} + y_{k\ell} = 0.
\label{tt}
\end{equation}
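Both sum rules, $y_{12}+y_{1\ell}+y_{2\ell}=1$ and $y_{1k}+y_{2k}+y_{k\ell}=0$, follow from $q=p_1+p_2+\ell$ with massless momenta and from $k\cdot q=0$. They can be checked numerically with an explicit, arbitrarily chosen massless three-body configuration:

```python
import math

def dot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# arbitrary massless quark (p1) and photon (l) momenta
p1 = (3.0, 1.0, 2.0, 2.0)     # 3^2 = 1^2 + 2^2 + 2^2
l  = (5.0, 0.0, 4.0, 3.0)     # 5^2 = 4^2 + 3^2
# the antiquark (p2) balances the spatial momentum; its energy follows from p2^2 = 0
px, py, pz = -(p1[1] + l[1]), -(p1[2] + l[2]), -(p1[3] + l[3])
p2 = (math.sqrt(px*px + py*py + pz*pz), px, py, pz)

q = tuple(a + b + c for a, b, c in zip(p1, p2, l))  # purely timelike by construction
k = (0.0, 0.0, 0.0, q[0])                           # satisfies k.q = 0 and k^2 = -q^2

q2 = dot(q, q)
y12, y1l, y2l = 2*dot(p1, p2)/q2, 2*dot(p1, l)/q2, 2*dot(p2, l)/q2
y1k, y2k, ykl = 2*dot(p1, k)/q2, 2*dot(p2, k)/q2, 2*dot(k, l)/q2

print(y12 + y1l + y2l)   # ~ 1, from momentum conservation and masslessness
print(y1k + y2k + ykl)   # ~ 0, from k.q = 0
```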
After some algebra we find
\begin{eqnarray}
H_2 &=&- 4 \left\{ (1-\epsilon)
\left[{{y_{1\ell}} \over {y_{2\ell}}} + {{y_{2\ell}} \over
{y_{1\ell}}}\right]
+ {{2y_{12}} \over {y_{1\ell}\, y_{2\ell}} } -2\epsilon \right\}
\nonumber \\
&&+ {{4} \over {y_{1\ell}y_{2\ell}}}
\left\{ y_{1k}^2 + y_{2k}^2 \right\}
- {{4\epsilon} \over {y_{1\ell}y_{2\ell}}}
\left\{ y_{k\ell}^2 \right\}.
\label{h2g}
\end{eqnarray}
Our next task is to integrate $H_1$ and $H_2$ over three-body phase
space in $n = 4-2\epsilon$ dimensions.
Since the momentum of the photon ($\ell$) is an observable,
and the momentum of either the quark ($p_1$) or antiquark ($p_2$)
can be fixed by the overall momentum conservation $\delta$-function in
the three-body phase space, we need to integrate over only $p_1$
or $p_2$. In the following discussion, we let $p_2$ be fixed by the
$\delta$-function, and we integrate over $p_1$.
In the overall center of mass frame, as sketched in Fig.~\ref{fig8},
we take angle $\theta_\gamma$ to be the
polar angle of the $\gamma$
with respect to the $e^+e^-$ collision axis and
angle $\theta_{1\gamma}$ to be the angle
between the $\gamma$'s momentum $\ell$ and the quark momentum $p_1$.
The angle $\theta_x$ is the $n$-dimensional generalization of the
three-dimensional azimuthal angle $\phi$, defined through $p_1$ as
\begin{equation}
d\Omega_{n-2} (p_1) \equiv d\theta_{1\gamma} \sin^{n-3} \theta_{1\gamma}
d\theta_x \sin^{n-4} \theta_x d\Omega_{n-4} (p_1).
\label{vv}
\end{equation}
Having chosen the frame, we may reexpress the $y$ variables
in terms of observables and integration angles as follows:
\begin{eqnarray}
y_{k\ell}& =& - x_\gamma \cos \theta_\gamma\ , \nonumber \\
y_{1k}&=& -\left[\frac{y_{2\ell}y_{12}-y_{1\ell}}{x_\gamma}\right]
\cos \theta_\gamma
-\left[\frac{2\sqrt{y_{12}y_{1\ell}y_{2\ell}}}{x_\gamma}\right]
\sin \theta_\gamma \cos \theta_x\ ; \nonumber \\
y_{2k}&=& -\left[\frac{y_{1\ell}y_{12}-y_{2\ell}}{x_\gamma}\right]
\cos \theta_\gamma
+\left[\frac{2\sqrt{y_{12}y_{1\ell}y_{2\ell}}}{x_\gamma}\right]
\sin \theta_\gamma \cos \theta_x\ ;
\label{zz}
\end{eqnarray}
where $x_\gamma=2E_\gamma/\sqrt{s}\ (=y_{1\ell}+y_{2\ell})$.
In deriving $y_{1k}$ and $y_{2k}$,
we use the following identities
\begin{eqnarray}
\cos\theta_{1\gamma} &=&
\frac{y_{2\ell}y_{12}-y_{1\ell}}{x_1 x_\gamma}\ ;\nonumber \\
\sin\theta_{1\gamma} &=&
\frac{2\sqrt{y_{12}y_{1\ell}y_{2\ell}}}{x_1 x_\gamma}\ ,
\label{yy}
\end{eqnarray}
where $x_1=2E_1/\sqrt{s}$.
In the integration of $H_2$ over phase space, the integral over
$d\cos \theta_x$ runs from $\cos \theta_x = -1$ to $+1$. The
expression for the three-body phase space, Eq.~(\ref{A.22}), is an even
function of $\cos \theta_x$. Correspondingly, terms in $H_2$ that are odd
functions of $\cos \theta_x$ do not survive.
Because $H_2$ depends only on the squares of $y_{1k}$ and $y_{2k}$,
after eliminating all terms linear in $\cos\theta_x$, we find that
the only $\theta_x$
dependence in $H_2$ is through $\cos^2\theta_x$. We can integrate
over $\theta_x$ independently of the other variables, or we can effectively
replace the $\cos^2\theta_x$ terms in $H_2$ by the average of
$\cos^2\theta_x$ in $n$-dimensions and eliminate the $\theta_x$
dependence in $H_2$ completely.
Given the average of $\cos^2\theta_x$ in $n$-dimensions,
Eq.~(\ref{A.25}), we obtain, effectively,
\begin{eqnarray}
y_{k\ell}^2 &=& x_\gamma^2 \cos^2\theta_\gamma\ ;\nonumber \\
y_{1k}^2&=& \left[\frac{y_{2\ell}y_{12}-y_{1\ell}}{x_\gamma}\right]^2
\cos^2\theta_\gamma
+\left(\frac{1}{1-\epsilon}\right)
\left[\frac{2(y_{12}y_{1\ell}y_{2\ell})}{x_\gamma^2}\right]
\sin^2\theta_\gamma\ ;\nonumber \\
y_{2k}^2&=& \left[\frac{y_{1\ell}y_{12}-y_{2\ell}}{x_\gamma}\right]^2
\cos^2\theta_\gamma
+\left(\frac{1}{1-\epsilon}\right)
\left[\frac{2(y_{12}y_{1\ell}y_{2\ell})}{x_\gamma^2}\right]
\sin^2\theta_\gamma \ ;
\label{xx}
\end{eqnarray}
where the factor $1/(1-\epsilon)$ is from the average of
$\cos^2\theta_x$. Substituting the
above expressions into Eq.~(\ref{h2g}),
and combining with $H_1$ in Eq.~(\ref{qq}), we obtain,
\begin{eqnarray}
{{1} \over {4}} \left( H_1+H_2^{eff} \right)
&=&\left( 1 + \cos^2 \theta_\gamma - 2\epsilon\right)
\left[ (1-\epsilon)
\left(\frac{y_{1\ell}}{y_{2\ell}}
+\frac{y_{2\ell}}{y_{1\ell}}\right)
+2\left(\frac{y_{12}}{y_{1\ell}y_{2\ell}}-\epsilon\right) \right]
\nonumber \\
&+& \left( 1-3\cos^2\theta_\gamma\right)
\left[\frac{4\, y_{12}}{x_\gamma^2}\right]\ \nonumber \\
&+& \left(\frac{\epsilon}{1-\epsilon}\right)
\left(1-\cos^2\theta_\gamma\right)
\left[\frac{4\, y_{12}}{x_\gamma^2} \right]\ ,
\label{h1h2}
\end{eqnarray}
where the superscript ``{\scriptsize\it eff}''
indicates that we have replaced
$\cos^2\theta_x$ by its average in $n$-dimensions.
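As a consistency check of this averaging step (the average itself is given by Eq.~(\ref{A.25}) of the Appendix): with the weight $\sin^{n-4}\theta_x$ from Eq.~(\ref{vv}), the average of $\cos^2\theta_x$ over $[0,\pi]$ is $1/(n-2)$, i.e. $1/(2(1-\epsilon))$ for $n=4-2\epsilon$. The snippet below verifies the integer-$n$ cases numerically:

```python
import math

def avg_cos2(n, steps=100001):
    """<cos^2 theta> with weight sin^(n-4) theta on [0, pi] (midpoint rule);
    integer n >= 4 suffices for this check."""
    m = n - 4
    h = math.pi / steps
    num = den = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        w = math.sin(t)**m
        num += math.cos(t)**2 * w
        den += w
    return num / den

for n in (4, 5, 6):
    print(n, avg_cos2(n), 1.0/(n - 2))   # the last two numbers agree
```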
The two $\delta$-functions in the three-particle phase space,
$dPS^{(3)}$, provide the following identities
\begin{eqnarray}
y_{12} &=& 1-x_\gamma\ ; \nonumber \\
y_{2\ell} &=& x_\gamma - y_{1\ell}\ .
\label{yid}
\end{eqnarray}
Introducing $\hat{y}_{1\ell} = y_{1\ell}/x_\gamma$, and
substituting these identities into Eq.~(\ref{h1h2}), we derive
\begin{eqnarray}
{{1} \over {4}} \left( H_1+H_2^{eff} \right)
&=& \left( 1 + \cos^2 \theta_\gamma - 2\epsilon\right)
\left[ {{1+(1-x_\gamma)^2} \over {x^2_\gamma}}\right]
\left( {{1} \over {\hat{y}_{1\ell}}}
+{{1} \over {1-\hat{y}_{1\ell}}}\right)\nonumber \\
&+& \left( 1 + \cos^2\theta_\gamma - 2\epsilon\right)
\left[ -2-\epsilon\left( {{1}\over{\hat{y}_{1\ell}}}
+{{1}\over {1-\hat{y}_{1\ell}}}\right)\right]\nonumber \\
&+& \left( 1-3\cos^2\theta_\gamma\right)
\left[{{4(1-x_\gamma)} \over {x^2_\gamma}}\right] \nonumber \\
&+& \left(\frac{\epsilon}{1-\epsilon}\right)
\left(1-\cos^2\theta_\gamma\right)
\left[{{4(1-x_\gamma)}\over{x^2_\gamma}} \right]\ .
\label{ggg}
\end{eqnarray}
The last term vanishes as $\epsilon\rightarrow 0$.
Combining Eqs.~(\ref{oo}) and (\ref{ggg}), and integrating over
$d\hat{y}_{1\ell}$, we can derive the partonic cross section
$d\sigma^{(1)}_{e^+e^-\rightarrow\gamma X}$. The limits of the
$d\hat{y}_{1\ell}$ integration are from 0 to 1.
The integrals over $d\hat{y}_{1\ell}$ for $(H_1+H_2^{eff})/4$ may be
expressed in terms of
\begin{equation}
I_{n,m} \equiv \int^1_0\ d\hat{y}_{1\ell}\ \hat{y}^{n-\epsilon}_{1\ell}
\left( 1- \hat{y}_{1\ell}\right)^{m-\epsilon}.
\label{hhh}
\end{equation}
Examining Eq.~(\ref{ggg}), we need only $I_{0,0}$ and $I_{-1,0}
\left( = I_{0, -1}\right)$:
\begin{equation}
I_{0,0} = B(1-\epsilon, 1-\epsilon) =
{\left({\Gamma(1-\epsilon)}\right)^2 \over {\Gamma(2-2\epsilon)}};
\label{iii}
\end{equation}
\begin{equation}
I_{-1,0} = B(-\epsilon, 1-\epsilon) = \left( {{1} \over {-\epsilon}}\right)
{{\left( \Gamma(1-\epsilon)\right)^2} \over {\Gamma(1-2\epsilon)}}.
\label{jjj}
\end{equation}
For small $\epsilon,\ I_{0,0} = 1 + O(\epsilon)$, and
$I_{-1,0} = - {{1}\over{\epsilon}} + O(\epsilon)$.
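The small-$\epsilon$ behavior quoted for $I_{0,0}$ and $I_{-1,0}$ can be confirmed numerically from the Gamma-function representations, Eqs.~(\ref{iii}) and (\ref{jjj}), using only the standard library:

```python
import math

def I00(eps):
    # Eq. (iii): B(1-eps, 1-eps) = Gamma(1-eps)^2 / Gamma(2-2 eps)
    return math.gamma(1 - eps)**2 / math.gamma(2 - 2*eps)

def Im10(eps):
    # Eq. (jjj): B(-eps, 1-eps) = (1/(-eps)) Gamma(1-eps)^2 / Gamma(1-2 eps)
    return (1.0/(-eps)) * math.gamma(1 - eps)**2 / math.gamma(1 - 2*eps)

eps = 1.0e-4
print(I00(eps))          # -> 1 + O(eps)
print(eps * Im10(eps))   # -> -1 + O(eps^2)
```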
After performing the integration over $d\hat{y}_{1\ell}$,
we expand the right-hand side of Eq.~(\ref{oo}) in a power series
in $\epsilon$, keeping only the singular term
proportional to $(1/\epsilon)$ and the terms independent of $\epsilon$.
(Terms of $O(\epsilon^m),\ m \geq 1$, vanish in the physical limit of four
dimensions $(n = 4-2\epsilon)$.) We obtain
\newpage
\begin{eqnarray}
E_\gamma {{d\sigma^{(1)}_{e^+e^- \rightarrow \gamma X}} \over {d^3\ell}}
&=&\ 2 \sum_{q} \left[{{2}\over{s}} F^{PC}_{q} (s)\right]
\left[\alpha^2_{em}N_c
\left({{4\pi\mu^2} \over
{(s/4)\sin^2\theta_\gamma}}\right)^\epsilon
{{1} \over{\Gamma(1-\epsilon)}} \right]
\left( 1 + \cos^2 \theta_\gamma -2\epsilon\right) \nonumber \\
&& \times {{1}\over {x_\gamma}}
\Bigg\{ e^2_q\ {{\alpha_{em}}\over{2\pi}}
\left[ {{1+(1-x_\gamma)^2}\over{x_\gamma}}\right]\Bigg\}
\left(-{{1}\over{\epsilon}}\right)\nonumber \\
&+&\ 2\sum_{q}\left[{{2}\over {s}}\ F^{PC}_{q} (s)\right]
\left[ \alpha^2_{em} N_c {{1}\over {x_\gamma}} \right]
e^2_q \left( {{\alpha_{em}}\over {2\pi}}\right)\nonumber \\
&& \times \Bigg\{ (1+\cos^2\theta_\gamma)
\left[ {{1+(1-x_\gamma)^2}\over{x_\gamma}}\right]
\left[ \ell n \left( s/\mu^2_{\overline{\rm MS}}\right)
+\ell n\left(x^2_\gamma \left(1-x_\gamma\right)\right) \right]
\nonumber \\
&& +\left( 1-3\cos^2\theta_\gamma\right)
\left[{{2(1-x_\gamma)}\over{x_\gamma}}\right] \Bigg\}\ .
\label{kkk}
\end{eqnarray}
In deriving Eq.~(\ref{kkk}), we included the overall factor for
coupling constants, $e_q^2\, (e\mu^\epsilon)^4$, and used the expansion
$\Gamma(1-\epsilon) \simeq 1 + \epsilon \gamma_E$, where $\gamma_E$ is
Euler's constant, and the usual modified minimal subtraction scale
\begin{equation}
\mu^2_{\overline{\rm MS}} \equiv \mu^2 4\pi e^{-\gamma_E}.
\label{lll}
\end{equation}
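The way the $\overline{\rm MS}$ logarithm emerges can be checked numerically: for small $\epsilon$, $(4\pi\mu^2/M^2)^\epsilon\, [\Gamma(1-\epsilon)]^{-1}\, (-1/\epsilon) = -1/\epsilon + \ell n(M^2/\mu^2_{\overline{\rm MS}}) + O(\epsilon)$, which is the pattern behind the logarithms in Eq.~(\ref{kkk}). The ratio $M^2/\mu^2$ below is an arbitrary illustrative choice:

```python
import math

GAMMA_E = 0.5772156649015329   # Euler's constant

def pole_factor(eps, M2_over_mu2):
    # (4 pi mu^2 / M^2)^eps * [Gamma(1-eps)]^(-1) * (-1/eps)
    a = 4.0 * math.pi / M2_over_mu2
    return a**eps / math.gamma(1 - eps) * (-1.0/eps)

M2_over_mu2 = 7.0              # arbitrary illustrative ratio M^2 / mu^2
eps = 1.0e-5
finite_part = pole_factor(eps, M2_over_mu2) + 1.0/eps   # remove the pole
expected = math.log(M2_over_mu2 / (4.0*math.pi*math.exp(-GAMMA_E)))
print(finite_part)   # -> ln(M^2 / mu^2_MSbar), up to O(eps)
print(expected)
```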
The $(1/\epsilon)$ singularity in Eq.~(\ref{kkk}) represents the
quark-photon
collinear singularity. This singular term is expected to be cancelled
by subtraction terms defined in Eq.~(\ref{kk}). By evaluating the
diagram sketched in Fig.~\ref{fig9}, we obtain the one-loop
quark-to-photon fragmentation function
\begin{equation}
D^{(1)}_{q\rightarrow \gamma} (z) = D^{(1)}_{\bar{q}\rightarrow\gamma}(z)
= e^2_q\ {{\alpha_{em}} \over {2\pi}}\
\left[ {{1 + (1-z)^2} \over {z}}\right]
\left( {{1} \over {-\epsilon}}\right)\ ,
\label{mmm}
\end{equation}
where we keep only the $1/\epsilon$ pole term because we work in the
$\overline{\rm MS}$ factorization scheme. Using the fact that
$D^{(1)}_{\bar{q}\rightarrow\gamma}(z)=
D^{(1)}_{q\rightarrow\gamma}(z)$, and
comparing Eq.~(\ref{kkk}) with
Eqs.~(\ref{dd}) and (\ref{mmm}), we observe that
the divergent first term in Eq.~(\ref{kkk}) is cancelled exactly by
the subtraction terms defined in Eq.~(\ref{kk}), in accord with the
pQCD factorization theorem.
Using Eq.~(\ref{kk}), we obtain the finite
$O(\alpha_{em})$ hard-scattering cross section
\begin{eqnarray}
E_\gamma
{{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow\gamma X}} \over {d^3\ell}}
&=&\ 2 \sum_q \left[ {{2} \over {s}} F^{PC}_q\, (s)\right]
\left[ \alpha^2_{em} N_c {{1} \over {x_\gamma}} \right]
e_q^2 \left( {{\alpha_{em}} \over {2\pi}}\right) \nonumber \\
&&\times \Bigg\{ \left(1+\cos^2 \theta_\gamma\right)
\left[ {{1+(1-x_\gamma)^2} \over {x_\gamma}}\right]
\left[ \ell n \left(s/\mu^2_{\overline{\rm MS}}\right)
+ \ell n \left( x^2_\gamma \left(1-x_\gamma\right) \right) \right]
\nonumber \\
&&+ (1-3 \cos^2 \theta_\gamma )
\left[{{2(1-x_\gamma)} \over {x_\gamma}} \right]\Bigg\}.
\label{nnn}
\end{eqnarray}
We remark that the angular dependence of the $O(\alpha_{em})$
hard-scattering cross section has two components, one proportional
to $(1 + \cos^2\theta_\gamma)$, familiar from the lowest order expression,
and a second piece proportional to $(1-3 \cos^2 \theta_\gamma)$.
If one integrates over $\cos\theta_\gamma$, the second piece vanishes.
We note, however, that the piece proportional to
$(1-3 \cos^2 \theta_\gamma)$ changes the predicted angular
dependence from the often assumed form $(1 + \cos^2\theta_\gamma)$.
Consequently, it would not be correct to assume a
$(1 + \cos^2\theta_\gamma)$ dependence when attempting to
correct an integrated cross section for unobserved events near,
for example, the incident $e^+e^-$ beam direction,
$\cos\theta_\gamma = \pm 1$.
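As a quick numerical check of the statement that the
$(1-3\cos^2\theta_\gamma)$ piece drops out of the angle-integrated rate,
one can integrate both angular structures over
$\cos\theta_\gamma\in[-1,1]$ (our own sketch, not part of the derivation):

```python
# Numerical check of the two angular structures in Eq. (nnn):
# the (1 - 3 cos^2) piece must integrate to zero over cos(theta)
# in [-1, 1], while the familiar (1 + cos^2) piece gives 8/3.
def integrate(f, a, b, n=10_000):
    """Simple midpoint-rule quadrature (ample accuracy for polynomials)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

transverse = integrate(lambda c: 1.0 + c * c, -1.0, 1.0)          # ~ 8/3
longitudinal = integrate(lambda c: 1.0 - 3.0 * c * c, -1.0, 1.0)  # ~ 0

print(transverse, longitudinal)
```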
\subsection{Derivation of $\hat{\sigma}^{(1)}_{e^+e^- \rightarrow gX}$}
\label{subsec:3b}
The finite hard-scattering cross section
$\hat{\sigma}^{(1)}_{e^+e^-\rightarrow gX}$ to first order in $\alpha_s$
may be obtained directly from Eq.~(\ref{nnn}) after three replacements:
$x_\gamma \rightarrow x_g$; $N_c \rightarrow N_c C_F$; and replacement of
$e^2e^2_q$ at the final photon emission vertex by $g^2 = 4 \pi \alpha_s$.
\begin{eqnarray}
E_g {{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow gX}} \over {d^3p_g}}
&=&\ 2 \sum_q \left[ {{2}\over {s}}\, F^{PC}_q\, (s)\right]
\left[\alpha^2_{em}\, N_c\, {{1}\over {x_g}}\right]
C_F\left( {{\alpha_s}\over {2\pi}}\right) \nonumber \\
&&\times \Bigg\{ \left( 1 + \cos^2\theta_g \right)
\left[ {{1+(1-x_g)^2} \over {x_g}}\right]
\left[ \ell n \left( s/\mu^2_{\overline{\rm MS}}\right)
+\ell n \left(x^2_g\left(1-x_g\right)\right) \right] \nonumber \\
&&+ \left( 1-3\cos^2 \theta_g \right)
\left[{{2(1-x_g)} \over {x_g}}\right]\Bigg\}.
\label{ooo}
\end{eqnarray}
In Eq.~(\ref{ooo}), $x_g = 2E_g/\sqrt{s}$, $C_F = {4\over 3}$, and
$N_c=3$.
The contribution $O(\alpha_s)$ to the inclusive yield
$e^+e^-\rightarrow\gamma X$ via gluon fragmentation is therefore
\begin{equation}
E_\gamma {{d\sigma^{(1)}_{e^+e^-\rightarrow gX\rightarrow \gamma X}}
\over {d^3\ell}} = \int^1_{x_\gamma}\, {{dz} \over {z}}
\left[ E_g {{d\hat{\sigma}_{e^+e^-\rightarrow gX}^{(1)}} \over {d^3p_g}}\,
\left( x_g = {{x_\gamma} \over {z}}\right)\right]\,
{{D_{g\rightarrow\gamma}(z, \mu^2_{\overline{\rm MS}})} \over {z}}
\label{ppp}
\end{equation}
with $x_\gamma = 2E_\gamma/\sqrt{s}$. Because the $g\rightarrow \gamma$
fragmentation process is collinear, $\theta_g = \theta_\gamma$.
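The convolution in Eq.~(\ref{ppp}) has the generic form
$\int_{x_\gamma}^1 (dz/z)\,\hat\sigma(x_\gamma/z)\,D(z)/z$. A minimal
numerical sketch is shown below; the toy hard cross section and toy
fragmentation function are placeholders chosen so the answer is known
analytically, not the expressions used in this paper:

```python
# Sketch of the fragmentation convolution of Eq. (ppp):
#   sigma(x) = int_x^1 dz/z * sigma_hat(x/z) * D(z)/z .
# Toy inputs with a closed-form result:
#   sigma_hat(x) = x and D(z) = z  ==>  sigma(x) = 1 - x.
def convolve(sigma_hat, D, x, n=20_000):
    """Midpoint-rule evaluation of the z-integral from x to 1."""
    h = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        z = x + (i + 0.5) * h
        total += sigma_hat(x / z) * D(z) / (z * z)
    return total * h

print(convolve(lambda x: x, lambda z: z, 0.5))  # ~ 0.5 (= 1 - x)
```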
\subsection{Derivation of $\hat{\sigma}^{(1)}_{e^+e^- \rightarrow qX}$}
\label{subsec:3c}
In this section we present our explicit derivation of the finite
hard-scattering cross section
$E_q\, d\hat{\sigma}^{(1)}_{e^+e^- \rightarrow qX}/d^3p_q$
to first order in $\alpha_s$. As sketched in Fig.~\ref{fig6},
both real gluon
emission and virtual gluon exchange graphs contribute.
The real emission diagrams have both infrared and
collinear divergences. The infrared divergence is cancelled by
contributions from the virtual diagrams, while the collinear divergence
is cancelled by the subtraction term defined in Eq.~(\ref{nn}).
The real emission diagrams can be treated easily in the same way as
$d\sigma^{(1)}_{e^+e^- \rightarrow \gamma X}/d^3\ell$ in
Section~\ref{subsec:3a}.
Except for the replacement of a photon by a gluon, the hadronic tensor
$H_{\mu\nu}$ obtained from the gluon emission diagrams in
Fig.~\ref{fig6}a is
identical to that computed in Section~\ref{subsec:3a} for
$e^+e^- \rightarrow q\bar{q} \gamma$. Thus, we may employ
our previous expressions for $H_1$ and $H_2$ again
but with the replacement of subscript ``$\ell$'' in
Eqs.~(\ref{qq}) and (\ref{h2g}) by subscript ``3'', since $p_3$
is our momentum label for the gluon. Because the quark is now the
fragmenting particle (i.e., effectively the ``observed'' particle),
the $y_{ik}^2$ variables with $i=1,2,3$ in $H_2$ are no longer
those in Eq.~(\ref{xx}). Instead, we now have
\begin{eqnarray}
y_{1k}^2 &=& x_1^2 \cos^2\theta_1\ ;\nonumber \\
y_{2k}^2&=& \left[\frac{y_{13}y_{23}-y_{12}}{x_1}\right]^2
\cos^2\theta_1
+\left(\frac{1}{1-\epsilon}\right)
\left[\frac{2(y_{12}y_{13}y_{23})}{x_1^2}\right]
\sin^2\theta_1\ ;\nonumber \\
&& \mbox{and} \nonumber \\
y_{3k}^2&=& \left[\frac{y_{12}y_{23}-y_{13}}{x_1}\right]^2
\cos^2\theta_1
+\left(\frac{1}{1-\epsilon}\right)
\left[\frac{2(y_{12}y_{13}y_{23})}{x_1^2}\right]
\sin^2\theta_1\ .
\label{xxq}
\end{eqnarray}
In Eq.~(\ref{xxq}), $\theta_1$ is the scattering angle of the quark, and
subscript ``3'' indicates the gluon of momentum $p_3$. We
dropped all terms linear in $\cos\theta_x$, and replaced
$\cos^2\theta_x$ by its average value in $n$-dimensions. Substituting
these $y_{ik}^2$ with $i=1,2,3$ into Eq.~(\ref{h2g}), we derive
\begin{eqnarray}
{{1} \over {4}} \left( H_1+H_2^{eff} \right)
&= &\left( 1 + \cos^2 \theta_1 - 2\epsilon\right)
\left[ (1-\epsilon)
\left(\frac{y_{13}}{y_{23}}
+\frac{y_{23}}{y_{13}}\right)
+2\left(\frac{y_{12}}{y_{13}y_{23}}-\epsilon\right) \right]
\nonumber \\
&+& \left( 1-3\cos^2\theta_1\right)
\left[\frac{2\, y_{12}}{x_1^2}\right]\ \nonumber \\
&&+\ \epsilon\ \cos^2\theta_1
\left[\frac{4\, y_{12}}{x^2_1}\right]\ ,
\label{h12q}
\end{eqnarray}
where the last term again vanishes as $\epsilon\rightarrow 0$.
In analogy to Eq.~(\ref{yid}), the useful identities here are
\newpage
\begin{eqnarray}
y_{23} &=& 1-x_1\ ; \nonumber \\
y_{12} &=& x_1 - y_{13}\ .
\label{yidq}
\end{eqnarray}
Using these identities, we reexpress Eq.~(\ref{h12q}) in terms of
$x_1$ and $y_{13}$
\begin{eqnarray}
{{1} \over {4}} \left( H_1+H_2^{eff} \right)
&=& \left( 1 + \cos^2 \theta_1 - 2\epsilon\right)
\Bigg\{ \left[{{1+x_1^2} \over {1-x_1}}\right]
{{1} \over {y_{13}}}
+{{y_{13}} \over {1-x_1}}
-{{2} \over {1-x_1}} \nonumber \\
&&\mbox{\hskip 1.6in} -\epsilon\left[
{{1-x_1}\over{y_{13}}}
+{{y_{13}}\over {1-x_1}} + 2 \right] \Bigg\} \nonumber \\
&+& \left( 1-3\cos^2\theta_1\right)
\left[ \frac{2}{x_1}\left(1-\frac{y_{13}}{x_1}\right) \right]\ ,
\label{h12q2}
\end{eqnarray}
where we dropped the last term in Eq.~(\ref{h12q}).
Introducing the overall coupling factor $(e\mu^\epsilon)^2
(g\mu^\epsilon)^2$ and color factor $N_c C_F$,
and combining with the three particle final state
phase space $dPS^{(3)}$, Eq.~(\ref{A.27}),
we express the contribution of real gluon
emission as
\begin{eqnarray}
E_1 {{d\sigma^{(R)}_{e^+e^-\rightarrow qX}} \over {d^3p_1}}
&=& \left[ {{2}\over {s}}\, F^{PC} (s)\right]
\left[ \alpha^2_{em}\, N_c
\left( {{4\pi \mu^2} \over {(s/4) \sin^2\theta_1}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]\nonumber \\
&\times & C_F \left( {{\alpha_s} \over {2\pi}}\right)
\left[ \left({{4\pi \mu^2} \over {s}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]
{{\delta\left( x_1-(1-y_{23})\right)} \over {x_1}}\nonumber \\
&\times &{{1}\over {4}} \left[ H_1 + H_2^{eff} \right]
{{dy_{12}}\over{y^\epsilon_{12}}}\,
{{dy_{13}}\over{y^\epsilon_{13}}}\,
{{dy_{23}}\over{y^\epsilon_{23}}}\,
\delta\left( 1-y_{12}-y_{13}-y_{23}\right)\ ,
\label{qqq}
\end{eqnarray}
where superscript $(R)$ stands for the real emission.
Using the two $\delta$-functions to fix $y_{23}$ and $y_{12}$,
inserting $H_1+H_2^{eff}$ from Eq.~(\ref{h12q2}),
and integrating $dy_{13}$ from 0 to $x_1$, we derive
\begin{eqnarray}
E_1 {{d\sigma^{(R)}_{e^+e^-\rightarrow qX}} \over {d^3p_1}}
&=& \left[ {{2}\over {s}}\, F^{PC} (s)\right]
\left[ \alpha^2_{em}\, N_c
\left( {{4\pi \mu^2} \over {(s/4) \sin^2\theta_1}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]\nonumber \\
&\times &C_F \left( {{\alpha_s} \over {2\pi}}\right)
\left[ \left({{4\pi \mu^2} \over {s}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]
\left({{1} \over {x_1}}\right)
\frac{\Gamma(1-\epsilon)^2}{\Gamma(1-2\epsilon)} \nonumber \\
&\times &\Bigg\{
\left( 1 + \cos^2 \theta_1 - 2\epsilon\right)\Bigg[
\left(\frac{1+x_1^2}{(1-x_1)_+}+\frac{3}{2}\delta(1-x_1)\right)
\left(\frac{1}{-\epsilon}\right) \nonumber \\
&&\quad\quad\quad
+\left(\frac{1+x_1^2}{1-x_1}\right)\ell n\left(x_1^2\right)
+\left(1+x_1^2\right)\left(\frac{\ell n(1-x_1)}{1-x_1}\right)_+
-\frac{3}{2}\left(\frac{1}{1-x_1}\right)_+ \nonumber \\
&&\quad\quad\quad
+\delta(1-x_1)\left(
\frac{2}{\epsilon^2}+\frac{3}{\epsilon}+\frac{7}{2} \right)
-\frac{1}{2}\left(3x_1-5\right) \Bigg] \nonumber \\
&+& \left(1-3\cos^2\theta_1 \right) \Bigg\} \ .
\label{realg}
\end{eqnarray}
\newpage
\noindent The ``+'' prescription is defined as usual
\begin{equation}
\left(\frac{1}{1-x_1}\right)_+ \equiv
\left(\frac{1}{1-x_1}\right)
-\delta(1-x_1)\int_0^1 dz \left(\frac{1}{1-z}\right) \ .
\label{plus}
\end{equation}
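When folded with a smooth test function $f$, the definition above gives
$\int_0^1 dx\, f(x)\left(\frac{1}{1-x}\right)_+
= \int_0^1 dx\, \frac{f(x)-f(1)}{1-x}$,
which is finite. A small numerical illustration (our own sketch; the
test function $f(x)=x^2$ is chosen only for convenience):

```python
# The "+" prescription of Eq. (plus) folded with a test function f:
#   int_0^1 f(x)/(1-x)_+ dx = int_0^1 [f(x) - f(1)]/(1 - x) dx .
# For f(x) = x^2 the subtracted integrand is -(1 + x), so the
# exact answer is -3/2.
def plus_integral(f, n=100_000):
    """Midpoint rule applied to the subtracted (regular) integrand."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (f(x) - f(1.0)) / (1.0 - x)
    return total * h

print(plus_integral(lambda x: x * x))  # ~ -1.5
```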
The right-hand-side of Eq.~(\ref{realg}) is formally divergent as
$\epsilon\rightarrow 0$. The $1/\epsilon$ poles in $n$-dimensions
represent the infrared divergence, when the gluon momentum goes to zero,
and/or a collinear divergence, when the gluon momentum is parallel to
that of the
fragmenting quark. As we show below, the infrared divergence is
cancelled by the infrared divergence of the virtual diagrams, sketched
in Fig.~\ref{fig6}b.
The contribution of the virtual diagrams
results from the interference of the one-loop vertex and self-energy
diagrams with the leading order tree diagram. As for the leading
order contribution, the virtual diagrams, sketched in
Fig.~\ref{fig6}b, have a
two-particle final state phase space. Therefore, the contribution from
the virtual diagrams has the same kinematical structure
and angular dependence
as the leading order contribution, discussed in Section~\ref{subsec:2c}.
It is proportional to $\delta(1-x_1)$, and, consequently, the virtual
contribution cancels only the $1/\epsilon$ poles associated with the
$\delta(1-x_1)$ terms in Eq.~(\ref{realg}).
The subtraction terms in Eq.~(\ref{nn}) cancel the final state
collinear poles that appear in the contribution of real gluon emission.
Beginning with the virtual exchange diagrams in Fig.~\ref{fig6}b,
we evaluate the
one-loop vertex correction in $n$-dimensions, and combine it with the
lowest order tree diagram to form the first order virtual contribution.
We derive
\begin{eqnarray}
E_1 {{d\sigma^{(V)}_{e^+e^-\rightarrow qX}} \over {d^3p_1}}
&=& \left[ {{2}\over {s}}\, F^{PC} (s)\right]
\left[ \alpha^2_{em}\, N_c
\left( {{4\pi \mu^2} \over {(s/4) \sin^2\theta_1}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]\nonumber \\
&\times &C_F \left( {{\alpha_s} \over {2\pi}}\right)
\left[ \left({{4\pi \mu^2} \over {s}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]
\left({{1} \over {x_1}}\right)
\frac{\Gamma(1-\epsilon)^3\Gamma(1+\epsilon)}
{\Gamma(1-2\epsilon)} \nonumber \\
&\times &\Bigg\{
\left( 1 + \cos^2 \theta_1 - 2\epsilon\right)
\delta(1-x_1)
\left[-\frac{2}{\epsilon^2}-\frac{3}{\epsilon}
+\left(\pi^2-8\right)\right] \Bigg\} \ ,
\label{virtual}
\end{eqnarray}
where the superscript (V) stands for the virtual contribution. After
adding the real and virtual contributions, Eqs.~(\ref{realg})
and (\ref{virtual}), we obtain the cross
section for $e^+e^-\rightarrow qX$ at order $O(\alpha_s)$,
\newpage
\begin{eqnarray}
E_1 {{d\sigma^{(1)}_{e^+e^-\rightarrow qX}} \over {d^3p_1}}
&=&\ \left[ {{2}\over {s}}\, F^{PC} (s)\right]
\left[ \alpha^2_{em}\, N_c
\left( {{4\pi \mu^2} \over {(s/4) \sin^2\theta_1}}\right)^\epsilon
{{1} \over {\Gamma(1-\epsilon)}} \right]\nonumber \\
&&\times C_F \left( {{\alpha_s} \over {2\pi}}\right)
{{1} \over {x_1}} \Bigg\{
\left( 1 + \cos^2 \theta_1 - 2\epsilon\right)
\left[\frac{1+x_1^2}{(1-x_1)_+}+\frac{3}{2}\delta(1-x_1)\right]
\left(\frac{1}{-\epsilon}\right) \Bigg\} \nonumber \\
&+&\ \left[ {{2}\over {s}}\, F^{PC} (s)\right]
\left[ \alpha^2_{em}\, N_c\, {{1} \over {x_1}} \right]
C_F \left( {{\alpha_s} \over {2\pi}}\right) \nonumber \\
&&\times\Bigg\{ (1+\cos^2\theta_1)
\Bigg[\left(\frac{1+x_1^2}{(1-x_1)_+}+\frac{3}{2}\delta(1-x_1)\right)
\ell n\left(\frac{s}{\mu^2_{\overline{\rm MS}}}\right)
\nonumber \\
&&\quad\quad + \left(\frac{1+x_1^2}{1-x_1}\right)\ell n\left(x_1^2\right)
+ \left(1+x_1^2\right)\left(\frac{\ell n(1-x_1)}{1-x_1}\right)_+
- \frac{3}{2}\left(\frac{1}{1-x_1}\right)_+ \nonumber \\
&&\quad\quad + \delta(1-x_1)\left(\frac{2\pi^2}{3}-\frac{9}{2}\right)
- \frac{1}{2}\left(3x_1-5\right) \Bigg] \nonumber \\
&& + \left(1-3\cos^2\theta_1\right) \Bigg\} \ .
\label{rv}
\end{eqnarray}
As is evident from the $1/\epsilon$ terms,
this cross section is divergent as $\epsilon\rightarrow 0$,
a reflection of the fact that a cross section for producing a
massless quark is an infrared sensitive quantity, not
perturbatively calculable.
According to the pQCD factorization theorem, the short-distance
hard-scattering cross sections, defined in Eq.~(\ref{a}), are infrared
safe quantities. Beyond the Born level, the short-distance parts,
$\hat{\sigma}_{e^+e^-\rightarrow cX}$, are not the same as the partonic
cross sections $\sigma_{e^+e^-\rightarrow cX}$ for fragmenting parton
$c$. Following Eq.~(\ref{nn}), in order to derive the short-distance
hard-scattering cross section $\hat{\sigma}^{(1)}_{e^+e^-\rightarrow qX}$,
we must first calculate the one-loop perturbative
fragmentation function $D^{(1)}_{q\rightarrow q}$.
Feynman diagrams for $D^{(1)}_{q\rightarrow q}$ are sketched in
Fig.~\ref{fig10}.
These diagrams are evaluated in the same way as one evaluates
parton-level parton distributions \cite{CQ}, and we obtain
\begin{equation}
D^{(1)}_{q\rightarrow q} (x_1) =
C_F \left({{\alpha_{s}} \over {2\pi}}\right)
\left[ \frac{1 + x_1^2}{(1-x_1)_+}
+\frac{3}{2}\delta(1-x_1)\right]
\left( {{1} \over {-\epsilon}}\right)\ ,
\label{dzqq}
\end{equation}
where the ``+'' prescription is defined in Eq.~(\ref{plus}).
Using Eq.~(\ref{nn}), the lowest order cross section for
$e^+e^-\rightarrow qX$, Eq.~(\ref{dd}),
and the one-loop quark fragmentation function
$D^{(1)}_{q\rightarrow q}$, Eq.~(\ref{dzqq}),
we derive the short-distance hard-scattering cross section
\begin{eqnarray}
E_1 \frac{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow qX}}{d^3p_1}
&= & \, \left[\frac{2}{s}F^{PC}_{q}(s)\right]
\left[\alpha_{em}^2N_c \frac{1}{x_1} \right]
C_F \left(\frac{\alpha_{s}}{2\pi}\right) \nonumber \\
&\times & \Bigg\{(1+\cos^2\theta_\gamma)
\Bigg[\left(\frac{1+x_1^2}{(1-x_1)_+}+\frac{3}{2}\delta(1-x_1)\right)
\ell n\left(\frac{s}{\mu^2_{\overline{\rm MS}}}\right)
\nonumber \\
&&\quad\quad + \left(\frac{1+x_1^2}{1-x_1}\right)\ell n\left(x_1^2\right)
+ \left(1+x_1^2\right)\left(\frac{\ell n(1-x_1)}{1-x_1}\right)_+
- \frac{3}{2}\left(\frac{1}{1-x_1}\right)_+ \nonumber \\
&&\quad\quad + \delta(1-x_1)\left(\frac{2\pi^2}{3}-\frac{9}{2}\right)
- \frac{1}{2}\left(3x_1-5\right) \Bigg] \nonumber \\
&&+ \left(1-3\cos^2\theta_\gamma\right) \Bigg\} .
\label{hardq}
\end{eqnarray}
We set $\theta_\gamma=\theta_1$ based on the assumption of
collinear fragmentation from quark to photon. As expected, the
hard-scattering cross section is infrared insensitive. The
$O(\alpha_s)$ quark fragmentation contribution to $e^+e^-\rightarrow
\gamma X$ is
\begin{equation}
E_\gamma \frac{d\sigma^{(1)}_{e^+e^-\rightarrow qX\rightarrow \gamma X}}
{d^3\ell}
= \sum_q \int^1_{x_\gamma}\, \frac{dz}{z} \left[
E_1 \frac{d\hat{\sigma}_{e^+e^-\rightarrow qX}^{(1)}}{d^3p_1}
\left( x_1 = \frac{x_\gamma}{z}\right)\right]\,
\frac{D_{q\rightarrow\gamma}(z, \mu^2_{\overline{\rm MS}})}{z} \ .
\label{fragq}
\end{equation}
Our derivation shows that the short-distance hard-scattering
cross section for antiquark fragmentation to a photon is
the same as that for quark fragmentation. Consequently, the $O(\alpha_s)$
antiquark fragmentation contribution to $e^+e^-\rightarrow \gamma X$
is the same as that given in Eq.~(\ref{fragq}).
\section{Numerical Results and Discussion}
\label{sec:result}
In this section we present and discuss explicit
numerical evaluations of the
inclusive prompt photon cross sections derived in this paper.
We provide results at $e^+e^-$ center-of-mass energies
$\sqrt{s} = $ 10 GeV, 58 GeV, and 91 GeV appropriate for
experimental investigations underway at Cornell, KEK, SLAC, and CERN.
In our figures, we display the variation of the inclusive
yield with photon energy $E_\gamma$ and scattering angle $\theta _\gamma$,
where $\theta_\gamma$ is the angle of the photon with respect
to the $e^+e^-$ collision axis. We also show the dependence of
cross sections on the choice of renormalization scale $\mu$.
The cross sections we evaluate are those derived in the text:
Eqs.~(\ref{ee}), (\ref{nnn}), (\ref{ppp}), and (\ref{fragq}).
They are assembled here for convenience of comparison.
The lowest order inclusive cross section is
\begin{equation}
E_\gamma {{d\sigma^{incl}_{e^+e^-\rightarrow \gamma X}} \over d^3 \ell}
= 2\sum_q \left[ {{2} \over {s}} F^{PC}_q (s)\right] \alpha^2_{em}(s)
N_c (1 + \cos^2\theta_\gamma) {{1} \over {x_\gamma}}
D_{q \rightarrow \gamma} (x_\gamma, \mu^2_F).
\label{aaaa}
\end{equation}
The finite $O(\alpha_{em})$ hard-scattering cross section is
\begin{eqnarray}
E_\gamma
{{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow\gamma X}} \over {d^3\ell}}
&=&\ 2 \sum_q \left[ {{2} \over {s}} F^{PC}_q\, (s)\right]
\left[ \alpha^2_{em}(s) N_c {{1} \over {x_\gamma}} \right]
e_q^2 \left( {{\alpha_{em}(\mu^2_F)} \over {2\pi}}\right)
\nonumber \\
&&\times \Bigg\{ \left(1+\cos^2 \theta_\gamma\right)
\left[ {{1+(1-x_\gamma)^2} \over {x_\gamma}}\right]
\left[ \ell n \left(s/\mu^2_F\right)
+ \ell n \left( x^2_\gamma \left(1-x_\gamma\right) \right) \right]
\nonumber \\
&&+ (1-3 \cos^2 \theta_\gamma )
\left[{{2(1-x_\gamma)} \over {x_\gamma}} \right]\Bigg\}\ .
\label{bbbb}
\end{eqnarray}
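For numerical work, the curly-bracket factor in Eq.~(\ref{bbbb}) is a
simple function of $x_\gamma$, $\cos\theta_\gamma$, and
$\ln(s/\mu^2_F)$. The sketch below is our own transcription of that
factor; it also exhibits the fact, noted later in the text, that the
$(1+\cos^2\theta_\gamma)$ part changes sign where
$s\,x_\gamma^2(1-x_\gamma)/\mu^2_F = 1$:

```python
import math

def direct_bracket(x, cos_t, log_s_over_mu2):
    """Curly-bracket factor of Eq. (bbbb), valid for 0 < x < 1."""
    splitting = (1.0 + (1.0 - x) ** 2) / x                # q -> gamma kernel
    logs = log_s_over_mu2 + math.log(x * x * (1.0 - x))   # = ln(s x^2 (1-x)/mu^2)
    transverse = (1.0 + cos_t ** 2) * splitting * logs
    longitudinal = (1.0 - 3.0 * cos_t ** 2) * 2.0 * (1.0 - x) / x
    return transverse + longitudinal

# At the sign-change point of the logarithm, only the longitudinal
# piece survives; at theta = 90 deg its coefficient is 2(1-x)/x.
x = 0.5
print(direct_bracket(x, 0.0, -math.log(x * x * (1.0 - x))))  # -> 2.0
```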
\noindent The $O(\alpha_s)$ contribution to the inclusive yield
$e^+e^-\rightarrow \gamma X$ via gluon fragmentation is
\begin{equation}
E_\gamma {{d\sigma^{(1)}_{e^+e^-\rightarrow gX\rightarrow \gamma X}}
\over {d^3\ell}} = \int^1_{x_\gamma}\, {{dz} \over {z}}
\left[ E_g {{d\hat{\sigma}_{e^+e^-\rightarrow gX}^{(1)}} \over {d^3p_g}}\,
\left( x_g = {{x_\gamma} \over {z}}\right)\right]\,
{{D_{g\rightarrow\gamma}(z, \mu^2_F)} \over {z}}
\label{cccc}
\end{equation}
with
\begin{eqnarray}
E_g {{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow gX}} \over {d^3p_g}}
&=&\ 2 \sum_q \left[ {{2}\over {s}}\, F^{PC}_q\, (s)\right]
\left[\alpha^2_{em}(s)\, N_c\, {{1}\over {x_g}}\right]
C_F\left( {{\alpha_s(\mu^2_F)}\over {2\pi}}\right) \nonumber \\
&&\times \Bigg\{ \left( 1 + \cos^2\theta_\gamma \right)
\left[ {{1+(1-x_g)^2} \over {x_g}}\right]
\left[ \ell n \left( s/\mu^2_F\right)
+\ell n \left(x^2_g\left(1-x_g\right)\right) \right]\nonumber \\
&&+ \left( 1-3\cos^2 \theta_\gamma \right)
\left[{{2(1-x_g)} \over {x_g}}\right]\Bigg\}\ .
\label{dddd}
\end{eqnarray}
We choose the renormalization scale $\mu$ in
$\alpha_s(\mu^2)$ to be the same as the fragmentation scale
$\mu_F$ in $D_{g\rightarrow\gamma}(z,\mu^2_F)$.
The $O(\alpha_s)$ contribution to the inclusive yield
$e^+e^-\rightarrow \gamma X$ via quark fragmentation is
\begin{equation}
E_\gamma \frac{d\sigma^{(1)}_{e^+e^-\rightarrow qX\rightarrow \gamma X}}
{d^3\ell}
= \sum_q \int^1_{x_\gamma}\, \frac{dz}{z} \left[
E_1 \frac{d\hat{\sigma}_{e^+e^-\rightarrow qX}^{(1)}}{d^3p_1}
\left( x_1 = \frac{x_\gamma}{z}\right)\right]\,
\frac{D_{q\rightarrow\gamma}(z,\mu^2_F)}{z} \ .
\label{eeee}
\end{equation}
\newpage
\noindent with
\begin{eqnarray}
E_1 \frac{d\hat{\sigma}^{(1)}_{e^+e^-\rightarrow qX}}{d^3p_1}
&= & \, \left[\frac{2}{s}F^{PC}_{q}(s)\right]
\left[\alpha_{em}^2(s) N_c \frac{1}{x_1} \right]
C_F \left(\frac{\alpha_{s}(\mu^2_F)}{2\pi}\right) \nonumber \\
&\times & \Bigg\{(1+\cos^2\theta_\gamma)
\Bigg[\left(\frac{1+x_1^2}{(1-x_1)_+}+\frac{3}{2}\delta(1-x_1)\right)
\ell n\left(\frac{s}{\mu^2_F}\right) \nonumber \\
&&\quad\quad + \left(\frac{1+x_1^2}{1-x_1}\right)\ell n\left(x_1^2\right)
+ \left(1+x_1^2\right)\left(\frac{\ell n(1-x_1)}{1-x_1}\right)_+
- \frac{3}{2}\left(\frac{1}{1-x_1}\right)_+ \nonumber \\
&&\quad\quad + \delta(1-x_1)\left(\frac{2\pi^2}{3}-\frac{9}{2}\right)
- \frac{1}{2}\left(3x_1-5\right) \Bigg] \nonumber \\
&&+ \left(1-3\cos^2\theta_\gamma\right) \Bigg\} .
\label{ffff}
\end{eqnarray}
For the common overall normalization function $F^{PC}_q (s)$, we use an
expression that includes $\gamma,\ Z^0$ interference:
\begin{eqnarray}
\frac{2}{s} F^{PC}_{q}(s)=\frac{1}{s^{2}} &\Bigg[
& e_{q}^{2}
+ \left(|v_e|^2 + |a_e|^2\right) \left(|v_q|^2 + |a_q|^2\right)
\frac{s^2}{\left( s-M^2_Z\right)^2 + M^2_Z \Gamma^2_Z} \nonumber \\
&+& 2 e_{q}v_{e}v_{q}
\frac{s \left( s-M_{Z}^{2} \right)}
{\left( s-M^2_Z\right)^2 + M^2_Z \Gamma^2_Z}\, \Bigg] \ .
\label{rrr}
\end{eqnarray}
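Equation~(\ref{rrr}) is straightforward to transcribe. In the sketch
below the electroweak couplings are real-valued inputs supplied by the
caller (their values belong to Tables~\ref{table1} and \ref{table2}
and are not reproduced here), so only the structure of the
$\gamma$--$Z^0$ interference is shown:

```python
def f_pc_over_s(s, e_q, v_e, a_e, v_q, a_q, m_z=91.187, gamma_z=2.491):
    """(2/s) F^PC_q(s) of Eq. (rrr); real couplings v, a supplied by caller."""
    denom = (s - m_z ** 2) ** 2 + (m_z * gamma_z) ** 2
    pure_gamma = e_q ** 2
    pure_z = (v_e ** 2 + a_e ** 2) * (v_q ** 2 + a_q ** 2) * s ** 2 / denom
    interference = 2.0 * e_q * v_e * v_q * s * (s - m_z ** 2) / denom
    return (pure_gamma + pure_z + interference) / s ** 2

# With the Z couplings switched off, the pure-QED result e_q^2/s^2 remains:
print(f_pc_over_s(100.0, 2.0 / 3.0, 0.0, 0.0, 0.0, 0.0))
```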
The vector $(v)$ and axial-vector $(a)$ couplings are provided
in Table~\ref{table1} and Table~\ref{table2}.
We set $M_Z = $ 91.187 GeV and $\Gamma_Z =$ 2.491 GeV.
These and other constants used here are taken from Ref.~\cite{XY}.
The weak mixing angle
$\sin^2 \theta_w =$ 0.2319. For the electromagnetic
coupling strength $\alpha_{em}$, we use the solution of the first order
QED renormalization group equation
\begin{equation}
\alpha_{em}(\mu^2) =\frac{\alpha_{em}(\mu^{2}_0)}{1+
\frac{\beta_0}{4\pi}
\alpha_{em}(\mu^{2}_0) \ell n (\mu^2/ \mu^{2}_0)}\ .
\label{sss}
\end{equation}
Here $\beta_0$ is the first order QED beta function,
\begin{equation}
\beta_0=-\frac{4}{3}\sum_{f} N_c^f e_f^2\ ,
\label{beta1}
\end{equation}
with $N_c^f$ the number of colors for flavor $f$ and $e_f$ the
fractional charge of the fermions. The sum over $f$
extends over all fermions (leptons and quarks) with mass $m_f^2<\mu^2$.
For the energy region of interest here, we do not include the top
quark in the sum in Eq.~(\ref{beta1}), and we obtain $\beta_0=-80/9$.
To fix the boundary condition in Eq.~(\ref{sss}), we let
$\alpha_{em}(M^2_Z) = 1/128$ and set $\mu_0= M_Z$.
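The running coupling of Eqs.~(\ref{sss}) and (\ref{beta1}) with the
stated boundary condition is easy to code up. The sketch below fixes
$\beta_0=-80/9$ and $\alpha_{em}(M^2_Z)=1/128$ as in the text (one-loop
QED evolution only, no threshold matching):

```python
import math

BETA0 = -80.0 / 9.0      # Eq. (beta1), top quark excluded
ALPHA_MZ = 1.0 / 128.0   # boundary condition alpha_em(M_Z^2)
M_Z = 91.187             # GeV

def alpha_em(mu2):
    """One-loop QED running coupling, Eq. (sss), with mu_0 = M_Z."""
    return ALPHA_MZ / (1.0 + (BETA0 / (4.0 * math.pi)) * ALPHA_MZ
                       * math.log(mu2 / M_Z ** 2))

print(alpha_em(M_Z ** 2))   # = 1/128 by construction
print(alpha_em(10.0 ** 2))  # smaller: with beta_0 < 0, the QED
                            # coupling decreases toward lower scales
```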
In the $O\left( \alpha_s\right)$ contributions, Eqs.~(\ref{dddd})
and (\ref{ffff}), we employ a two-loop expression for
$\alpha_s (\mu^2)$ with quark threshold effects handled properly.
We set $\Lambda^{(4)}_{QCD} =$ 0.231 GeV. At $\sqrt{s} = M_Z$,
this expression provides $\alpha_s \left( M^2_Z\right) = 0.112.$
At $\sqrt{s} =$ 10 GeV, the sums in Eqs.~(\ref{bbbb}), (\ref{dddd}), and
(\ref{ffff}) run over 4 flavors of quarks $(u, d, c, s)$, all assumed
massless. At this energy, we do not include
a $b$ quark contribution in our calculation. For $\sqrt{s} =$ 58 GeV and
91 GeV, we use 5 flavors, again assuming all massless in the short-distance
hard scattering cross sections. At these higher energies, non-zero mass
effects for the $c$ and $b$ quarks are accommodated by
our scale choice in the fragmentation functions, discussed below.
The quark-to-photon fragmentation function that appears in
Eqs.~(\ref{aaaa}) and (\ref{ffff}) is expressed as
\begin{eqnarray}
z\, D_{q \rightarrow \gamma} (z,\mu_{F}^2) &=&
\frac{\alpha _{em}(\mu^2_F)}{2\pi} \left[
e_{q}^{2}\
\frac{2.21-1.28z+1.29z^{2}}{1-1.63\,\ell n\left(1-z\right)}\,
z^{0.049} +0.002 \left( 1-z \right) ^{2} z^{-1.54}
\right] \nonumber \\
&&\times \ell n \left( \mu_{F}^{2} / \mu^{2}_0 \right).
\label{uuu}
\end{eqnarray}
The gluon-to-photon fragmentation function in Eq.~(\ref{cccc}) is
\begin{equation}
z\, D_{g \rightarrow \gamma} (z,\mu^2_{F})
= \frac{\alpha _{em}(\mu^2_F)}{2\pi}\,
0.0243 \left( 1-z \right)
z^{-0.97}\, \ell n \left( \mu_{F}^{2} / \mu^{2}_0 \right).
\label{vvv}
\end{equation}
These expressions for $D_{q\rightarrow \gamma}$ and
$D_{g\rightarrow \gamma}$, taken from Ref.~\cite{JFO}, are used as
a guideline for our estimates. The physical significance of scale
$\mu_0$ is that the fragmentation function vanishes for energies less
than $\mu_0$. For $g$ and for the $u, d, s$, and $c$ quarks, we set
$\mu_0 = \Lambda^{(4)}_{QCD}$, as in Ref.~\cite{JFO}.
For the $b$ quark we again use Eq.~(\ref{uuu}), but
we replace $\mu_0$ by the mass of the quark,
$m_b =$ 5 GeV; $D_{b\rightarrow \gamma}(z,\mu^2_F) = 0$
for $\mu_F < m_b$. We set
the fragmentation scale $\mu_F$ equal to the renormalization scale
$\mu$ for our inclusive cross sections. In the results presented below,
we vary $\mu$ to examine the sensitivity of the cross section
to its choice.
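The parameterizations of Eqs.~(\ref{uuu}) and (\ref{vvv}) are likewise
simple to transcribe. In the sketch below $\alpha_{em}$ and
$\ln(\mu^2_F/\mu^2_0)$ enter as plain inputs (the text fixes
$\mu_0 = \Lambda^{(4)}_{QCD}$ for the gluon and light quarks, and
$\mu_0 = m_b$ for the $b$ quark):

```python
import math

def zD_q_to_gamma(z, e_q2, alpha_em, log_mu2_ratio):
    """z * D_{q->gamma}(z, mu_F^2) of Eq. (uuu); log_mu2_ratio = ln(mu_F^2/mu_0^2)."""
    shape = (e_q2 * (2.21 - 1.28 * z + 1.29 * z ** 2)
             / (1.0 - 1.63 * math.log(1.0 - z)) * z ** 0.049
             + 0.002 * (1.0 - z) ** 2 * z ** (-1.54))
    return alpha_em / (2.0 * math.pi) * shape * log_mu2_ratio

def zD_g_to_gamma(z, alpha_em, log_mu2_ratio):
    """z * D_{g->gamma}(z, mu_F^2) of Eq. (vvv)."""
    return (alpha_em / (2.0 * math.pi) * 0.0243 * (1.0 - z)
            * z ** (-0.97) * log_mu2_ratio)

# Both vanish at mu_F = mu_0 (log ratio -> 0) and are positive above it:
print(zD_q_to_gamma(0.5, (2.0 / 3.0) ** 2, 1.0 / 131.0, 5.0))
print(zD_g_to_gamma(0.5, 1.0 / 131.0, 5.0))
```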
In presenting results, we divide our inclusive cross sections by an energy
dependent cross section $\sigma_0$ that specifies the leading order total
hadronic event rate at each value of~$\sqrt{s}$:
\begin{equation}
\sigma_0 = {{4\pi s} \over {3}} \sum_q
\left[ {2\over s} F^{PC}_q (s)\ \alpha^2_{em}(s) N_c\right].
\label{www}
\end{equation}
By doing so, we can observe what fraction of the total hadronic rate is
represented by inclusive prompt photon production.
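As a cross-check of the normalization, in the pure-QED limit
($Z^0$ couplings switched off) $2F^{PC}_q/s \rightarrow e_q^2/s^2$, and
Eq.~(\ref{www}) reduces to the familiar lowest-order hadronic cross
section $(4\pi\alpha^2_{em}/3s)\,N_c\sum_q e_q^2$. A sketch of this
reduction, with four massless flavors and a fixed $\alpha_{em}$ used
only for illustration:

```python
import math

def sigma0_qed(s, alpha_em, charges2, n_c=3):
    """Eq. (www) in the pure-QED limit, where (2/s) F^PC_q -> e_q^2/s^2."""
    return (4.0 * math.pi * s / 3.0) * sum(
        (e2 / s ** 2) * alpha_em ** 2 * n_c for e2 in charges2)

# Four massless flavors u, d, s, c with squared charges 4/9, 1/9, 1/9, 4/9.
charges2 = [4.0 / 9.0, 1.0 / 9.0, 1.0 / 9.0, 4.0 / 9.0]
s, alpha = 100.0, 1.0 / 137.0
familiar = 4.0 * math.pi * alpha ** 2 / (3.0 * s) * 3 * sum(charges2)
print(sigma0_qed(s, alpha, charges2), familiar)  # the two agree
```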
In several figures to follow, we show the predicted behavior of the
inclusive yield as a function of $E_\gamma$ and $\theta_\gamma$,
as well as the breakdown of the total yield into contributions
from various components.
In Fig.~\ref{fig11},
we present the inclusive yield as a function of $E_\gamma$ at
$\sqrt{s} =$ 91 GeV for two values of the scattering angle $\theta_\gamma$,
45$^\circ$ and 90$^\circ$. The same results are displayed in
Fig.~\ref{fig12} as a
function of scattering angle $\theta_\gamma$ for two choices of $E_\gamma$.
In both Figs.~\ref{fig11} and \ref{fig12},
we set renormalization/fragmentation scale
$\mu = E_\gamma$. Dependence of the cross sections on $\mu$ is examined in
Fig.~\ref{fig13} at fixed $E_\gamma$. The patterns evident in
Figs.~\ref{fig11}--\ref{fig13} at
$\sqrt{s} =$ 91 GeV are repeated with subtle differences in
Figs.~\ref{fig14}--\ref{fig16} at
$\sqrt{s} =$ 58 GeV, appropriate for experiments at TRISTAN, and in
Figs.~\ref{fig17}--\ref{fig19}
at $\sqrt{s} =$ 10 GeV, applicable for studies at CESR/CLEO. In
Fig.~\ref{fig20}
we compare predictions at the three energies by showing the cross
section $\sigma_0^{-1}d\sigma/dx_\gamma d\Omega_\gamma$
as a function of the scaling variable $x_\gamma = 2E_\gamma/\sqrt{s}$.
Evident in Figs.~\ref{fig11}--\ref{fig19}
is the dominance of the lowest-order contribution to the
inclusive yield, Eq.~(\ref{aaaa}), at all values of $\sqrt{s}$, except at
small values of $E_\gamma/\sqrt{s}$ or at small values
of $\mu$ where the $O\left( \alpha_{em}\right)$ ``direct" contribution,
Eq.~(\ref{bbbb}), becomes larger. Following the lowest-order contribution
in importance at modest values of $E_\gamma/\sqrt{s}$ or of $\mu$ is the
$O\left( \alpha_{em}\right)$ direct contribution.
The direct contribution falls away more rapidly with increasing $E_\gamma$
or $\mu$ than the
$O\left( \alpha_s\right)$ quark-to-photon fragmentation term,
Eq.~(\ref{ffff}).
Therefore, at large values of $E_\gamma/\sqrt{s}$ or $\mu$, it is the
$O\left( \alpha_s\right)$ fragmentation term that is secondary in
importance to the lowest-order term. The gluon-to-photon fragmentation
contribution, Eq.~(\ref{cccc}), plays an insignificant role except
at very small $E_\gamma$.
In Figs.~\ref{fig12}, \ref{fig15}, and \ref{fig18},
we examine the predicted $\theta_\gamma$
dependence of our cross sections. These figures, presented with a linear
scale, show perhaps more clearly the importance of the roles of the
$O\left( \alpha_{em}\right)$ direct and $O\left( \alpha_s\right)$
fragmentation contributions. The lowest-order contribution,
Eq.~(\ref{aaaa}), is proportional to $(1 + \cos^2 \theta_\gamma)$.
However, there are significant $\sin^2 \theta_\gamma$ components
in the next-to-leading order direct term,
Eq.~(\ref{bbbb}), and the next-to-leading order fragmentation terms,
Eqs.~(\ref{dddd}) and (\ref{ffff}).
The net result is that the predicted total yield in
Figs.~\ref{fig12}, \ref{fig15}, and \ref{fig18}
is \underline{not} proportional to $(1 + \cos^2 \theta_\gamma)$.
As illustrated in the figures, the deviation of
the total yield from the $(1 + \cos^2 \theta_\gamma)$ form becomes
greater at smaller values of $E_\gamma$. (The results shown in
Figs.~\ref{fig12}, \ref{fig15}, and \ref{fig18}
all pertain to the scale choice $\mu = E_\gamma$.) One lesson from this
examination of the dependence on $\theta_\gamma$ is that it is inappropriate
and potentially misleading to assume that the functional form
$(1 + \cos^2 \theta_\gamma)$ describes the data when attempts are made to
correct distributions in the region of small $\theta_\gamma$
(where initial state bremsstrahlung overwhelms the final state
radiation in which one is interested).
Dependence on the renormalization/factorization scale $\mu$ in
Figs.~\ref{fig13}, \ref{fig16}, and \ref{fig19}
shows several interesting features. As is expected from the
functional form of $D_{q\rightarrow \gamma} \left( z, \mu^2\right)$ in
Eq.~(\ref{uuu}), the lowest-order contribution, Eq.~(\ref{aaaa}), increases
logarithmically as $\mu$ is increased. On the other hand, the
$\ell n \left(s/\mu^2\right)$ dependent term in Eq.~(\ref{bbbb})
causes a decrease of the
$O\left( \alpha_{em}\right)$ direct contribution as $\mu$ is increased.
Indeed, the $\left( 1 + \cos^2 \theta_\gamma\right)$ part of the direct
contribution becomes negative when
$s x^2_\gamma\left( 1-x_\gamma\right)/\mu^2
< 1$. The physical cross section, represented as a solid line in
Figs.~\ref{fig13}, \ref{fig16}, and \ref{fig19},
is of course always positive.
An especially noteworthy feature of
Figs.~\ref{fig13}, \ref{fig16}, and \ref{fig19} is that the
total inclusive yield is nearly independent of $\mu$, in spite of the
strong variation with $\mu$ of its components.
This independence reflects the role of
the fragmentation scale $\mu$. It is introduced to separate ``soft" and
``hard" contributions into ``fragmentation" and ``direct" pieces. As the
scale $\mu$ is increased, more of the cross section is necessarily
factored into the fragmentation contribution, and vice versa,
such that the sum remains nearly constant.
In Fig.~\ref{fig20},
we show the overall $\sqrt{s}$ dependence of our predictions. To
facilitate comparison, we present these results in terms of the ``scaling"
distribution $\sigma_0^{-1} d\sigma/dx_\gamma d\Omega_\gamma$.
The case of $\sqrt{s}=10$~GeV is somewhat special since we do not
include a contribution from $b$ quark fragmentation at this energy.
Otherwise, the contribution of the lowest-order process,
Eq.~(\ref{aaaa}), decreases at fixed $x_\gamma$ as $\sqrt{s}$ is
increased. This decrease is explained easily. In computing
$d\sigma/dx_\gamma d\Omega_\gamma$, we multiply Eqs.~(\ref{rrr}) and
(\ref{uuu}), obtaining a charge weighting factor of $e_q^2 F_q^{PC}(s)$,
whereas in computing the denominator $\sigma_0$, the factor is
$F_q^{PC}(s)$. Owing to the values of the $v$ and $a$ couplings in
Table~1, the up-type quark contribution to $F_q^{PC}(s)$ decreases as
$\sqrt{s}$ increases, and the down-type contribution increases. The
$O(\alpha_{em})$ direct contribution to $\sigma_0^{-1} d\sigma/dx_\gamma
d\Omega_\gamma$ decreases at fixed $x_\gamma$ as $\sqrt{s}$ is increased
from 10 to 91 GeV. Again, the explanation may be found in the energy
dependence of the ratio $\sum_q e_q^2 F_q^{PC}(s)/\sum_q F_q^{PC}(s)$.
Taken together these statements explain the energy dependence displayed
in Fig.~\ref{fig20}.
As remarked earlier, the particular expressions we chose for the
fragmentation functions are meant to be illustrative only.
It would be very valuable if these non-perturbative
functions could be determined directly from data.
Dominance of the $q\rightarrow \gamma$ fragmentation contribution in
Figs.~\ref{fig11}--\ref{fig19}
demonstrates the important role data from
$e^+e^- \rightarrow \gamma X$ may play in the extraction of
$D_{q\rightarrow \gamma}\left( z, \mu^2\right)$ and study of its properties.
However, as mentioned in the Introduction, an important limitation of high
energy investigations is that photons are observed and cross sections are
measured reliably only when the photons are relatively isolated. Since
fragmentation is a process in which photons are part of quark, antiquark,
and gluon jets, isolation reduces the contribution from fragmentation terms.
In a forthcoming paper \cite{BXQ2},
we will examine in detail the behavior of the \underline{isolated}
prompt photon cross section.
In this paper we have presented a unified treatment of inclusive prompt
photon production in hadronic final states in $e^+e^-$ annihilation.
We have computed analytically the direct photon contribution through
$O\left( \alpha_{em}\right)$ and the quark-to-photon and gluon-to-photon
fragmentation terms through $O\left( \alpha_s\right)$. We presented the
full angular dependence of the cross section, separated into transverse
$\left( 1 + \cos^2 \theta_\gamma \right)$ and longitudinal components.
\section*{Acknowledgements}
X. Guo and J. Qiu are grateful for the hospitality of Argonne National
Laboratory where a part of this work was completed. Work in the High
Energy Physics Division at Argonne National Laboratory is supported by
the U.S. Department of Energy, Division of High Energy Physics,
Contract W-31-109-ENG-38.
The work at Iowa State University was supported in part
by the U.S. Department of Energy under Grant Nos.
DE-FG02-87ER40731 and DE-FG02-92ER40730.
\section{Introduction}
It is now widely accepted that the stellar density distribution
perpendicular to the Galactic disk traces at least two stellar
components, the thin and the thick disks. The change of slope
in the logarithm of the vertical density distributions at
$\sim$ 700\,pc (Cabrera-Lavers et al. \cite{cab05}) or $\sim$ 1500 pc
(Gilmore \& Reid \cite{gil83}) above the Galactic plane
is usually explained as the signature of a transition
between these two distinct components: the thin and the thick disks.
The thick disk is an intermediate stellar population between the thin
disk and the stellar halo, and was initially defined with the other
stellar populations by combining spatial, kinematic and abundance
properties (see a summary of the Vatican conference of 1957 by Blaauw
\cite{bla95} and Gilmore \& Wyse \cite{gil89}). Its properties
are described in a long series of publications with often diverging
characteristics (see the analysis by Gilmore \cite{gil85}, Ojha
\cite{ojh01}, Robin et al. \cite{rob03} and also by Cabrera-Lavers et
al. \cite{cab05}, which give an overview of recent improvements).
Majewski (\cite{maj93}) compared a nearly exhaustive list of scenarios
that describe many possible formation mechanisms for the thick disk.
In this paper, we attempt to answer the simple
but still open questions: are the thin and thick disks really two
distinct components? Is there any continuous transition between them?
These questions were not fully settled by analysis of star counts by
Gilmore \& Reid (\cite{gil83}) and later workers.
Other important signatures of the thick disk followed from
kinematics: the age--velocity dispersion relation and also the
metallicity--velocity dispersion relation. However the identification
of a thin--thick discontinuity depends on the authors, due to the
serious difficulty of assigning accurate ages to stars (see Edvardsson
et al. \cite{edv93} and Nordstr\"om et al. \cite{nor04}). More
recently it was found that the
[$\alpha$/Fe] versus [Fe/H] distribution is related to the kinematics
(Fuhrmann \cite{fur98}; Feltzing et al. \cite{fel03}; Soubiran \&
Girard \cite{sou05}; Brewer \& Carney \cite{bre06}; Reddy et al.
\cite{red06}) and provides an
effective way to separate stars from the thin and thick disk
components. Ages and abundances are important to describe the various
disk components and to depict the mechanisms of their formation. A
further complication comes from the recent indications of the presence
of at least two thick disk components with different density
distributions, kinematics and abundances (Gilmore et al.
\cite{gil02}; Soubiran et al. \cite{sou03}; Wyse et
al. \cite{wys06}).
Many of the recent works favor the presently prevailing scenarios of
thick disk formation by the accretion of small satellites, puffing up
the early stellar Galactic disk or tidally disrupting the stellar
disk (see for example Steinmetz \& Navarro \cite{ste02}; Abadi et al.
\cite{aba03}; Brook et al. \cite{bro04}). We note however that
chemodynamical models of secular Galactic formation including
extended ingredients of stellar formation and gas dynamics can also
explain the formation of a thick disk distinct from the thin disk
(Samland \& Gerhard \cite{sam03}; Samland \cite{sam04}).
In this paper, we use the recent RAVE observations of stellar radial
velocities, combined with star counts and proper motions, to recover
and model the full 3D distributions of kinematics and densities for
nearby stellar populations. In a forthcoming study,
metallicities measured from RAVE observations will be included to
describe the Galactic stellar populations and their history. The
description of data is given in Sect.~2, the model in Sect.~3, and the
interpretation and results in Sect.~4. Among these results, we
identify discontinuities visible both within the density distributions
and the kinematic distributions. They allow us to define more
precisely the transition between the thin and thick stellar Galactic
disks.
\section{Observational data}
\label{data}
Three types of data are used to constrain our Galactic model for the
stellar kinematics and star counts (the model description is given in
Sect.~3): the Two-Micron
All-Sky Survey (2MASS PSC; Cutri et al. \cite{cut03}) magnitudes, the
RAVE (Steinmetz et al. \cite{ste06}) and
ELODIE radial velocities, and the UCAC2 (Zacharias et
al. \cite{zac04}) proper motions. Each sample of stars is selected
independently of the others, with its own magnitude limit and sky
coverage, reflecting the characteristics of the different source catalogues.
(1) We select 22\,050 2MASS stars within an 8-degree radius of the South and
North Galactic Poles, with $m_{\rm K}$ magnitudes between 5 and 15.4. Star
count histograms for both Galactic poles are used to
constrain the Galactic model.
(2) We select 105\,170 UCAC2 stars within a radius of 16 degrees of the
Galactic poles, with $m_{\rm K}$ 2MASS magnitudes between
6 and 14. We adjust the model to fit histograms of
the $\mu_U$ and $\mu_V$ proper motion marginal distributions;
the histograms combine stars in 1.0 magnitude intervals for $m_{\rm
K}$=6 to 9 and 0.2 magnitude intervals for $m_{\rm K}$=9 to 14.
(3) We select 543 RAVE stars (with $m_{\rm K}$ 2MASS magnitudes from
8.5 to 11.5) within a radius of 15 degrees of the
SGP. We group them in three histograms according to $m_{\rm K}$
magnitudes. We complete this radial velocity sample with
392 other similar stars: TYCHO-II stars selected towards the
NGP within an area of 720 square degrees, with B-V colors
between 0.9 and 1.1. Their magnitudes are brighter than $m_{\rm
K}$=8.5, they were observed with the ELODIE spectrograph and were
initially used to probe the vertical Galactic potential
(Bienaym\'e et al. \cite{bie06}). All these radial velocity
samples play a key role in constraining the vertical velocity
distributions of stars and the shape of the velocity ellipsoid.
\begin{figure}[!htbp]
\resizebox{8cm}{!}{
\includegraphics{HR.eps}
}
\caption{$\rm M_K / J-K$ HR diagram from Hipparcos stars with $\sigma_\pi / \pi \le 0.1$ cross-matched with the 2MASS catalogue. Vertical dashed lines represent our color selection
J$-$K=[0.5-0.7].}
\label{f:HR}
\end{figure}
\subsection{Data selection}
\label{s:selection}
In this paper, we restrict our analysis to stars near the Galactic poles with J$-$K
colors between 0.5 and 0.7 (see Fig.~\ref{f:HR}). This allows us to recover some Galactic
properties, avoiding the coupling with other Galactic parameters that
occurs in other Galactic directions (density and kinematic scale
lengths, Oort's constants, $R_0$, $V_0$...).
The selected J$-$K=[0.5-0.7] color interval corresponds to K3-K7
dwarfs and G3-K1 giants (Koornneef \cite{koo83}; Ducati et
al. \cite{duc01}). They may be G or K giants within the red clump
region (the part of the HR diagram populated by high metallicity
He-burning core stars). The absolute magnitudes of red clump stars
are well defined: nearby HIPPARCOS clump stars have a mean absolute
magnitude $M_{\rm K}=-1.61$ with a dispersion of $\sim 0.22$ (Alves
\cite{alv00}, see Cannon \cite{can70} for the first proposed use of
clump stars as distance indicators, see also Salaris \& Girardi
\cite{sal02}; Girardi et al. \cite{gir98} and other references in
Cabrera-Lavers et al. \cite{cab05}). This mean absolute magnitude
does not vary significantly with [Fe/H] in the abundance range
$[-0.6,0]$ (Alves \cite{alv00}). Studying nearby stars in 13 open
clusters and 2 globular clusters, Grocholski \&
Sarajedini (\cite{gro02}) find that the mean absolute magnitude of
clump stars is not dependent on metallicity when the [Fe/H] abundance
remains within the interval $[-0.5,0.1]$. Sarajedini (\cite{sar04}) finds that,
at metallicity [Fe/H]=$-0.76$, the mean absolute magnitude of red
clump stars drops to $M_{\rm K}=-1.28$, a shift of 0.33 mag. Most of the giants with metallicity [Fe/H] lower than $-0.8$ dex are excluded
from our sample by our color selection. Hence, we did not model giants of
the metal-weak thick disk, first identified by Norris (\cite{nor85})
(see also, Morrison, Flynn \& Freeman \cite{mor90}).
This represents however only a minor component of the thick disk.
Chiba \& Beers (\cite{chi00}) find that $\sim$ 30 \%
of the stars with $-1 >{\rm [Fe/H]}> -1.7$ are thick disk stars, but
stars with ${\rm [Fe/H]} < -1$ represent only 1 per cent of the local
thick disk stars (Martin \& Morrison \cite{mar98}).
K dwarfs within the J$-$K=[0.5-0.7] color interval also have well
defined absolute magnitudes that depend slightly on metallicity and
color. We determine their mean absolute magnitude, $M_{\rm K}$=4.15,
from nearby HIPPARCOS stars using color magnitude data provided by
Reid (see http://www-int.stsci.edu/$\sim$inr/cmd.html). From Padova
isochrones (Girardi et al. \cite{gir02}), we find that the absolute
magnitude varies by 0.4 magnitudes when J$-$K changes from 0.5 to 0.7.
A change of metallicity of $\Delta$[Fe/H]=0.6 also changes the
magnitude by about 0.3, in qualitative agreement with observed
properties of K dwarfs (Reid \cite{rei98}; Kotoneva et al. \cite{kot02}). Thus, we
estimate that the dispersion of absolute magnitude of dwarfs in our
Galactic pole sample is $\sim$0.2-0.4.
Another important motivation for selecting the J$-$K=[0.5-0.7] color
interval is the absolute magnitude step of 6 magnitudes between
dwarfs and giants. This separation is the reason the magnitude
distributions for these two kinds of stars are very different towards
the Galactic poles. If giants and dwarfs have the same density distribution in the disk, in the apparent magnitude count, giants will appear before and well separated from dwarfs. Finally we mention a convenient property of the Galactic pole directions: there, the kinematic data
are simply related to the cardinal velocities relative to the local standard of rest (LSR).
UCAC2 proper motions are nearly parallel to the $U$ and $V$
velocities, and RAVE radial velocities are close to the vertical $W$
velocity component.
\subsection{How accurate is the available data?}
The star magnitudes are taken from the 2MASS survey, which is presently the most accurate photometric all-sky survey for probing the Galactic stellar populations. Nevertheless, since our color range is narrow, we have to take care that the photometric errors on J and K do not bias our analysis.
The mean photometric accuracy ranges from 0.02 in K and J at magnitude $m_{\rm K}$=5.0, to 0.15 in K and 0.08 in J at magnitude $m_{\rm K}$=15.4. The error in J$-$K is not small considering the size ($\Delta$(J$-$K) = 0.2) of the analyzed J$-$K interval, 0.5 to 0.7. We do not expect, however, that it substantially biases our analysis. For $m_{\rm K}$ brighter than 10, the peak of giants is clearly identified in the J$-$K distribution within the J$-$K = [0.5-0.7] interval (see Fig.~\ref{f:cmd} or Figure~6 of Cabrera-Lavers et al. \cite{cab05}). This peak vanishes only beyond $m_{\rm K}$=11. At fainter
K magnitudes, the dwarfs dominate and the J$-$K histogram of colors has a constant slope. This implies that the error in color at faint magnitudes does not affect to first order the star counts.
We find from the shape of the count histograms that, in the direction of the Galactic Pole and with our color selection J$-$K = [0.5-0.7] the limit of completeness is $m_{\rm K}\sim$15.5-15.6. Moreover, the contamination by galaxies must be low within the 2MASS PSC. It is also unlikely that compact or unresolved galaxies are present: according to recent deep J and K photometric counts (see Figure 15 of Iovino et al. \cite{iov05}), with our color selection, galaxies contribute only beyond $m_{\rm K}\sim$16. We conclude that we have a complete sample of stars for magnitudes from 5.0 to 15.4 in K, towards the Galactic poles.
The UCAC2 and RAVE catalogues, however, are not complete, making it necessary to scale the proper motion and radial velocity distributions predicted by our model for complete samples. The total number of stars given by the model for the distribution of proper motions (or radial velocities) in a magnitude interval is multiplied by the ratio of the number of stars observed in UCAC2 (or RAVE) to the number of stars observed in 2MASS.
\begin{figure}[!htbp]
\resizebox{8cm}{!}{
\includegraphics{cmd.eps}
}
\caption{K / J$-$K color magnitude diagram obtained with 2MASS stars within an 8-degree radius around the North Galactic Pole. Dashed lines represent the limits of our color selection: $\rm J-K = [0.5,0.7]$.}
\label{f:cmd}
\end{figure}
Towards the North Galactic Pole (NGP), the error on the UCAC2 proper motions used in our
analysis varies from 1\,mas\,yr$^{-1}$ for the brightest stars to
6\,mas\,yr$^{-1}$ at $m_{\rm K}$=14. Towards the South Galactic Pole (SGP),
the error distribution looks similar, with the exception of a small
fraction of stars with $m_{\rm K}$ from 11 to 14 having errors around
8 or 13 mas\,yr$^{-1}$. The only noticeable difference between
the histograms at the NGP and SGP is that the peak of the proper
motion distribution is slightly more flattened at the SGP,
for magnitudes $m_{\rm K}>$13 (see Fig.~\ref{f:vitesse2}).
This difference is related to the different error distributions towards the NGP and SGP.
The analyzed stars are located at distances from 200\,pc to 1\,kpc for dwarfs and to 1.5\,kpc for giants. A 2\,mas\,yr$^{-1}$ error represents 10\,km\,s$^{-1}$ at 1\,kpc, and 6\,mas\,yr$^{-1}$, an error
of 30\,km\,s$^{-1}$. This can be compared to the $\sigma_U$ values
for the isothermal components, for instance $\sim$\,60\,km\,s$^{-1}$
for the thick disk that is the dominant stellar population 1.5\,kpc
from the plane. Adding the errors in quadrature to the velocity dispersion would modify
a real proper motion dispersion of 60\,km\,s$^{-1}$ to an apparent
dispersion of 67\,km\,s$^{-1}$. The apparent dispersion would be only
60.8\,km\,s$^{-1}$ if the stars have a 2\,mas\,yr$^{-1}$ accuracy.
Therefore, we overestimate the $\sigma_U$ dispersion of the
thick disk by 5 to 10 percent. This effect is lower for the thin disk
components (the stars are closer and their apparent proper
motion distributions are broader). We have not yet included the
effect of proper motion errors within our model. This error has an impact only on the determination
of the velocity dispersions $\sigma_U$ and $\sigma_V$ and
on the ellipsoid axis ratio $\sigma_U/\sigma_W$ of each stellar
disk component, but does not change the determination of
vertical velocity dispersions $\sigma_W$ which are mainly
constrained by the magnitude star counts and the radial velocities. Hence,
it is not significant in our kinematic decomposition of the
Galactic disk.
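The quadrature estimate above can be checked in a few lines. This is our own sketch, not the authors' code: it assumes the standard conversion factor of $4.74$\,km\,s$^{-1}$ per mas\,yr$^{-1}$ at 1\,kpc, from which the text's round values of 10 and 30\,km\,s$^{-1}$ follow.

```python
import math

K = 4.74047  # km/s per (mas/yr) at 1 kpc (1 AU/yr expressed in km/s)

def pm_error_to_kms(err_mas_yr, dist_kpc):
    """Velocity error implied by a proper-motion error at a given distance."""
    return K * err_mas_yr * dist_kpc

def apparent_dispersion(true_kms, err_kms):
    """Observed dispersion when measurement errors add in quadrature."""
    return math.hypot(true_kms, err_kms)

# 6 mas/yr at 1 kpc ~ 28 km/s (quoted as ~30 km/s in the text)
print(pm_error_to_kms(6.0, 1.0))        # ~28.4
print(apparent_dispersion(60.0, 30.0))  # ~67.1, vs. a true 60 km/s
print(apparent_dispersion(60.0, 10.0))  # ~60.8
```

This reproduces the 5--10 percent overestimate of $\sigma_U$ for the thick disk quoted in the text.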
The accuracy of proper motions can also be gauged from the stability
of the peaks of proper motion distributions: comparing 112 $\mu_U$ and
$\mu_V$ histograms for different magnitude intervals, we find no
fluctuations larger than 3-5\,mas\,yr$^{-1}$.
A more complete test is performed by comparing the UCAC2 proper
motions (with our J$-$K color selection) to the recent PM2000 catalogue
(Ducourant et al. \cite{duc06}) in an area of 8$\times$16 degrees
around $\alpha_{2000}$=12h50m, $\delta_{2000}=14\deg$ close to the
NGP. PM2000 proper motions are more accurate, with
errors from 1 to 4 mas\,yr$^{-1}$. The mean differences between proper
motions from both catalogues versus magnitudes and equatorial
coordinates do not show significant shifts, just fluctuations of the
order of $\sim$0.2 mas\,yr$^{-1}$. We also find that the dispersions
of proper motion differences are $\sim$2\,mas\,yr$^{-1}$ for $m_{\rm
K}<$10, 4\,mas\,yr$^{-1}$ with $m_{\rm K}$=10-13, and 6\,mas\,yr$^{-1}$
with $m_{\rm K}$=13-14. These dispersions are dominated by the UCAC2
errors.
From the internal and external error analysis, RAVE radial velocities
show a mean accuracy of 2.3\,km\,s$^{-1}$
(Steinmetz et al. \cite{ste06}). Radial velocities of
stars observed with the ELODIE \'echelle spectrograph are an order
of magnitude more accurate. These errors have no impact on the
determination of the vertical velocity dispersion of stellar
components that ranges from 10 to 50\,km\,s$^{-1}$, but the reduced
size of our radial velocity samples towards the poles (about 1000
stars) limits the accuracy achieved in modeling the vertical
velocity dispersions.
\section{Model of the stellar Galactic disks}
The basic ingredients of our Galactic model are taken from traditional
works on star count and kinematic modeling, for instance see Pritchet
(\cite{pri83}); Bahcall (\cite{bah84}); Robin \& Cr\'ez\'e
(\cite{rob86}). It is also similar to the recent developments by
Girardi et al. (\cite{gir05}) or by Vallenari et al. (\cite{val06}).
The kinematic modeling is entirely taken from Ratnatunga et al.
(\cite{rat89}) and is also similar to Gould's (\cite{gou03}) analysis.
Both propose closed-form expressions for velocity projections; the
dynamical consistency is similar to Bienaym\'e et al. (\cite{bie87})
and Robin et al. (\cite{rob03}, \cite{rob04}).
Our analysis, limited to the Galactic poles, is based on a set of
20 stellar disk components. The distribution function of each
component or stellar disk is built from three elementary functions
describing the vertical density $\rho_i$ (dynamically self consistent
with the vertical gravitational potential), the kinematic distribution
$f_{i}$ (3D-gaussians) and the luminosity function $\phi_{ik}$.
We define $\mathcal{N}(z, V_R, V_\phi, V_z; M)$ to be the density
of stars in the Galactic position-velocity-(absolute magnitude) space
$$\mathcal{N} =\sum_{ik}\, \rho_i(z) f_{i}( V_R, V_\phi, V_z) \phi_{ik}(M)$$
\noindent
where the index $i$ differentiates the stellar disk components and the
index $k$ the absolute magnitudes used to model the luminosity function.
From this model, we apply the generalized equation of stellar
statistics:
$$ A(m,\mu_l,\mu_b, V_r)= \int \mathcal{N}(z, V_R, V_\phi, V_z; M)\, z^2 \, \omega \, dz $$
to determine the $A(m)$ apparent magnitude star count equation as
well as the
marginal distributions of both components $\mu_l$ and $\mu_b$ of
proper motions and the distributions of radial velocities for any
direction and apparent magnitudes. For the Galactic poles, we define
$\mu_U$ and $\mu_V$ as the proper motion components parallel to the
cardinal directions of $U$ and $V$ velocities. For a more general
inverse method for the equation of stellar statistics, see Pichon et al. (\cite{pic02}).
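The count equation above can be illustrated numerically for a single component. The sketch below is ours, not the authors': it assumes a plain exponential vertical density with a hypothetical 300\,pc scale height, and a single dwarf-like Gaussian luminosity function with the $M_{\rm K}=4.15$, $\sigma_M=0.25$ values quoted later in the text.

```python
import math

def a_of_m(m, M0=4.15, sigma_M=0.25, h=300.0, omega=1.0,
           z_max=5000.0, dz=1.0):
    """Toy A(m): integrate rho(z) * phi(M(m,z)) * z^2 * omega over z (in pc).

    rho(z) is a plain exponential disk of scale height h; phi is a
    Gaussian luminosity function centred on M0.  All values illustrative.
    """
    total = 0.0
    z = dz
    while z <= z_max:
        M = m - 5.0 * math.log10(z / 10.0)  # absolute mag at distance z
        phi = math.exp(-0.5 * ((M - M0) / sigma_M) ** 2) \
              / (math.sqrt(2.0 * math.pi) * sigma_M)
        total += math.exp(-z / h) * phi * z * z * omega * dz
        z += dz
    return total

# Counts of this dwarf-like population rise steeply with apparent magnitude
print(a_of_m(8.0), a_of_m(12.0))
```

The same integral, with the velocity terms of $\mathcal{N}$ retained, yields the marginal proper motion and radial velocity distributions.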
\subsection{The vertical density}
Each stellar disk is modeled with an isothermal velocity
distribution, assuming that the vertical density distribution
(normalized at $z$=0) is given by the relation:
\begin{equation}
\rho_i(z)=\exp \left(-\Phi(z)/\sigma^2_{zz,i}\right)
\end{equation}
where $\Phi(z)$ is the vertical gravitational potential at the solar
Galactic position and $\sigma_{zz,i}$ is the vertical velocity
dispersion of the considered stellar component $i$. The Sun's
position $z_\odot$ above the Galactic plane is also used as a model
parameter. Such expressions were introduced by Oort (\cite{oor22}),
assuming the stationarity of the density distributions. They ensure
the consistency between the vertical velocity and density
distributions. For the vertical gravitational potential we use the
recent determination obtained by Bienaym\'e et al. (\cite{bie06})
based on the analysis of HIPPARCOS and TYCHO-II red clump giants.
The vertical potential is defined at the solar position by:
$$\Phi(z)= 4\pi G \left( \Sigma_0 \left( \sqrt{z^2+D^2}-D \right)
+\rho_{\rm eff}\, z^2 \right)$$
with $\Sigma_0=48\,{\rm M}_{\odot}\,{\rm pc}^{-2}$,
$D=800\,{\rm pc}$ and $\rho_{\rm eff}=0.07\,{\rm M}_{\odot}\,{\rm pc}^{-3}$.
It is quite similar to the potential determined by Kuijken \& Gilmore
(\cite{kui89}) and Holmberg \& Flynn (\cite{hol04}).
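A small sketch evaluating this potential and the resulting isothermal densities, using the quoted values of $\Sigma_0$, $D$ and $\rho_{\rm eff}$; the value of $G$ in these units and the example dispersions are our own choices, not taken from the paper.

```python
import math

G = 4.30091e-3                            # pc Msun^-1 (km/s)^2
SIGMA0, D, RHO_EFF = 48.0, 800.0, 0.07    # Msun/pc^2, pc, Msun/pc^3

def phi_z(z):
    """Vertical potential Phi(z) at the solar position, in (km/s)^2; z in pc."""
    return 4.0 * math.pi * G * (SIGMA0 * (math.sqrt(z * z + D * D) - D)
                                + RHO_EFF * z * z)

def rho(z, sigma_zz):
    """Isothermal vertical density, normalised to 1 at z = 0."""
    return math.exp(-phi_z(z) / sigma_zz ** 2)

# A thick-disk-like component (sigma_zz = 45 km/s, illustrative) falls off
# far more slowly with z than a thin-disk-like one (sigma_zz = 15 km/s).
for z in (0.0, 500.0, 1000.0):
    print(z, rho(z, 15.0), rho(z, 45.0))
```

Summing such profiles over the 20 components, each weighted by its local density, gives the model's total vertical density law.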
\subsection{The kinematic distributions}
The kinematical model is given by shifted 3D gaussian velocity
ellipsoids. The three components of mean streaming motion
($\langle U \rangle$, $\langle V \rangle$, $\langle W \rangle$) and
velocity dispersions ($\sigma_{RR}$, $\sigma_{\phi\phi}$,
$\sigma_{zz}$), referred to the cardinal directions of the Galactic
coordinate frame, provide a set of six kinematic quantities. The
mean stream motion is relative to the LSR.
The Sun's velocity $U_\odot$ and $W_\odot$ are model parameters.
We define the $\langle V \rangle$ stream motion as:
$\langle V \rangle = - V_\odot - V_{\rm lag}$. We adopt an asymmetric drift
proportional to the square of $\sigma_{RR}$: $V_{\rm lag}= \sigma_{RR}^2 /k_{a}$,
where the coefficient $k_{a}$ is also a model parameter.
We assume null stream motions for the
other velocity components, thus $\langle U \rangle=-U_\odot$ and
$\langle W \rangle=-W_\odot$.
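The mean streaming motion of a component can be sketched as follows. The coefficient $k_a$ is a free model parameter whose fitted value is not quoted here, so the 80\,km\,s$^{-1}$ used below, like the solar motion values, is purely illustrative.

```python
def mean_streaming(sigma_RR, k_a, U_sun, V_sun, W_sun):
    """Mean motion (<U>, <V>, <W>) of a component relative to the Sun (km/s).

    Only <V> carries an asymmetric drift, V_lag = sigma_RR**2 / k_a;
    the other components have null stream motions in the LSR frame.
    """
    v_lag = sigma_RR ** 2 / k_a
    return (-U_sun, -V_sun - v_lag, -W_sun)

# Hotter components lag circular rotation more strongly (all values illustrative):
print(mean_streaming(20.0, 80.0, 10.0, 5.0, 7.0))  # thin-disk-like
print(mean_streaming(60.0, 80.0, 10.0, 5.0, 7.0))  # thick-disk-like
```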
For simplicity, we have assumed that the
$\sigma_{RR}/\sigma_{\phi\phi}$ ratio is the same for all the
components. It is well
known that the assumptions of a constant
$\sigma_{RR}/\sigma_{\phi\phi}$ ratio, of a linear asymmetric drift
and of 2D gaussian U and V velocity distributions hold only for cold
stellar populations (see for instance Bienaym\'e \& S\'echaud
\cite{bie97}). These simple assumptions allow a direct comparison
with similar studies. They also allow an exact integration of the count
equations along the line of sight. Thus the convergence of parameters
for any single model is achieved in a reasonable amount of time (one
week). The model includes 20 isothermal components with $\sigma_{zz}$ from 3.5 to 70\,km\,s$^{-1}$. We choose a step of 3.5\,km\,s$^{-1}$, which is sufficient to give a realistic kinematic decomposition and permits calculation in a reasonable time. The first two components, $\sigma_{zz}$=3.5 and 7\,km\,s$^{-1}$, were suppressed since they do not contribute significantly to counts for $m_{\rm K}$$>$6 and are not constrained by our adjustments. The components between 10 and 60\,km\,s$^{-1}$ are constrained by star counts, proper motion histograms up to magnitude 14 in K, and radial velocity histograms for magnitudes $m_{\rm K}$=[5.5-11.5]. The model includes isothermal components from 60 to 70\,km\,s$^{-1}$ to properly fit the star counts at the faintest apparent magnitudes $m_{\rm K} > 15.0$. All the values of the kinematic components depend on the adopted Galactic potential.
The velocity ellipsoids are inclined along the Galactic meridian
plane. The main axes of the velocity ellipsoids are set parallel to
con-focal hyperboloids as in St\"ackel potentials. We set the focus at
$z_{hyp}$=6\,kpc on the main axis giving them realistic orientations
(see Bienaym\'e \cite{bie99}). The non-zero inclination implies that
the vertical density distributions of each isothermal component is
not fully dynamically consistent with the potential. Since the
$z$-distances are below 1.5\,kpc for the majority of stars with
kinematic data, and since the main topic of this paper is not the
determination of the Galactic potential, we do not develop a more
consistent dynamical model.
\subsection{The luminosity functions}
The luminosity function of each stellar disk component is modeled with $n$ different kinds of stars according to their absolute magnitude:
$$\phi_{i}(M)= \sum_{k=1,n} \,\phi_{ik}(M)=
\frac{1}{\sqrt{2\pi}\sigma_{M}}\,\sum_{k=1,n}\, c_{ik}\,
e^{-\frac{1}{2} \left( \frac{M-M_k}{\sigma_M} \right)^2 }$$
where $c_{ik}$ is the density for each type of star (index $k$) of each stellar disk component (index $i$).
We use four types of stars to model the local luminosity function (see Fig.~\ref{f:LLF}). More details on the way we have determined it are given in Sect.~\ref{lf}. Stars with a mean absolute magnitude $M_{\rm K}=-1.61$ are identified with the red clump giants ($k=1$), which we call `giants'; stars with $M_{\rm K}=-0.89$ and $M_{\rm K}=-0.17$ are first ascent giants that we categorize as `sub-giants' ($k=2-3$); and stars with $M_{\rm K}=4.15$ are labelled `dwarfs' ($k=4$) (see Fig.~\ref{f:HR}). We neglected `sub-giant' populations with absolute magnitude $M_{\rm K}$ between 0.2 and 2. Their presence marginally changes the ratio of giants to dwarfs, since their magnitudes are lower and their total number in the magnitude counts is significantly smaller than for the other components. In fact, we initially tried to introduce 10 types of stars (spaced by 0.7 absolute magnitude intervals). This still improves the fit to the data; however, due to the small contribution of the `sub-giant' components with $M_{\rm K}=[0.2-2]$, they were not determined with a useful accuracy. We adopt $\sigma_M=0.25$, justified by the narrow range of absolute magnitudes of both the red clump giants and the dwarfs in the luminosity function.
The 4$\times$20 coefficients $c_{ik}$ are parameters of the model. In order to obtain a realistic luminosity function, we have added constraints to the minimization procedure. For {\it each} kinematic component $i$, we impose conditions on the proportions of dwarfs, giants and sub-giants following the local luminosity function. We have modeled our determination of the local luminosity function of nearby stars (see Fig.~\ref{f:LLF}). We obtained:
\begin{itemize}
\item a ratio of the density of dwarfs ($k$=4) to the density of giants ($k$=1) of 12.0, so we impose: $ \frac{c_{i,4}}{c_{i,1}} > 10 $
\item a ratio of the density of giants ($k$=1) to the density of sub-giants($k$=2) of 2.3, so we impose: $ \frac{c_{i,1}}{c_{i,2}} > 2 $
\item and the density of sub-giants ($k$=2) is greater than the density of sub-giants ($k$=3), so we impose: $c_{i,2}\,>\,c_{i,3}$.
\end{itemize}
If we do not include these constraints, the various components are populated either only with dwarfs
or only with giants.
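The Gaussian-mixture luminosity function and the constraints above can be sketched as follows. The densities $c_k$ below are hypothetical values chosen only to satisfy the quoted local ratios, and the third constraint is taken as stated in words (sub-giants $k$=2 denser than $k$=3).

```python
import math

M_K = (-1.61, -0.89, -0.17, 4.15)   # giants, two sub-giant types, dwarfs
SIGMA_M = 0.25

def phi_lf(M, c):
    """Gaussian-mixture luminosity function phi(M) for one component."""
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * SIGMA_M)
    return norm * sum(ck * math.exp(-0.5 * ((M - Mk) / SIGMA_M) ** 2)
                      for ck, Mk in zip(c, M_K))

def satisfies_constraints(c):
    """Constraints imposed on each kinematic component during the fit."""
    c1, c2, c3, c4 = c
    return c4 / c1 > 10.0 and c1 / c2 > 2.0 and c2 > c3

# Hypothetical densities roughly matching the quoted local ratios
# (dwarfs/giants = 12, giants/sub-giants = 2.3):
c = (1.0, 0.4, 0.3, 12.0)
print(satisfies_constraints(c))
print(phi_lf(4.15, c), phi_lf(-1.61, c))  # dwarf peak dominates
```

Because the four Gaussians are well separated relative to $\sigma_M=0.25$, each peak of $\phi$ is essentially set by a single $c_k$.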
\begin{figure}[!htbp]
\resizebox{\linewidth}{!}{
\includegraphics{FL1.eps}
}
\caption{ Local luminosity function: The histogram is our determination of the local luminosity function for nearby stars with error bars. The red (or dark grey) dashed line is a fit of the luminosity function with four gaussians (blue or light grey line) corresponding to the dwarfs, the giants and the two types of sub-giants. }
\label{f:LLF}
\end{figure}
\section{Results and discussion}
The 181 free model parameters are adjusted through simulations. Each
simulation is compared to histograms of counts, proper motions and
radial velocities (see Sect.~\ref{data} for the description of data
histograms and see
Figures~\ref{f:count},~\ref{f:vitesse},~\ref{f:vitesse2},~\ref{f:elodie},~\ref{f:rave})
for the comparison of the best fit model with data. The adjustment is
done by minimizing a $\chi^2$ function using the MINUIT software
(James \cite{jam04}). Equal weight is given to each of the four types
of data (magnitude counts, $\mu_U$ proper motions, $\mu_V$ proper
motions, and radial velocities). This gives relatively more weight to
the radial velocity data whose contribution in number is two orders of
magnitude smaller than for the photometry and proper motions.
By adjusting our Galactic model, we derive the respective
contributions of dwarfs and giants, and of thin and thick disks. One
noticeable result is the kinematic gap between the thin and thick disk
components of our Galaxy. This discontinuity must be the consequence of
some specific process of formation for these Galactic components.
Fitting a multi-parameter model to a large data-set raises the question of the uniqueness of the best fit model, and of the robustness of our solution and conclusions. For this purpose, we have explored
the robustness of the best Galactic model by fitting various subsets of data, and by modifying various model parameters and adjusting the others. This is a simple, but we expect efficient, way to understand the
impact of parameter correlations and to see what is really constrained by the model or by the data. A summary of the main outcomes is given below.
\begin{figure*}[!htbp]
\resizebox{\hsize}{!}{
\includegraphics[angle=270]{count1nb.eps}
\includegraphics[angle=270]{count2nb.eps}
}
\caption{Magnitude count histogram towards the North Galactic Pole.
Left: model prediction (dashed line) is split according to
star types: giants (red or black line), sub-giants (dot-dashed
and dotted) and dwarfs (green or grey line). The right figure
highlights the contributions of thin and thick disks
(respectively thin and thick lines), for dwarfs (green or
grey) and giants (red or black).}
\label{f:count}
\end{figure*}
\begin{figure*}
\begin{minipage}{.48\textwidth}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN1.eps}
\includegraphics[angle=270]{muVN1.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN2.eps}
\includegraphics[angle=270]{muVN2.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN3.eps}
\includegraphics[angle=270]{muVN3.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN4.eps}
\includegraphics[angle=270]{muVN4.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS1.eps}
\includegraphics[angle=270]{muVS1.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS2.eps}
\includegraphics[angle=270]{muVS2.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS3.eps}
\includegraphics[angle=270]{muVS3.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS4.eps}
\includegraphics[angle=270]{muVS4.eps}}
\end{minipage}
\caption{$\mu_U$ and $\mu_V$ histograms towards the North Galactic
Pole (left) and the South Galactic Pole (right) for magnitudes 6 to
10: model (dashed line) and contributions from the different types of
stars: giants (red or dark thin lines), sub-giants (dot-dashed and
dotted lines) and dwarfs (green or grey thick lines).}
\label{f:vitesse}
\end{figure*}
\begin{figure*}
\begin{minipage}{.48\textwidth}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN5.eps}
\includegraphics[angle=270]{muVN5.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN6.eps}
\includegraphics[angle=270]{muVN6.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN7.eps}
\includegraphics[angle=270]{muVN7.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUN8.eps}
\includegraphics[angle=270]{muVN8.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS5.eps}
\includegraphics[angle=270]{muVS5.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS6.eps}
\includegraphics[angle=270]{muVS6.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS7.eps}
\includegraphics[angle=270]{muVS7.eps}}
\scalebox{0.17}[0.21]{\includegraphics[angle=270]{muUS8.eps}
\includegraphics[angle=270]{muVS8.eps}}
\end{minipage}
\caption{Same as Fig.~\ref{f:vitesse} for magnitudes 10 to 14.}
\label{f:vitesse2}
\end{figure*}
\begin{figure*}[!htbp]
\center
\begin{tabular}{ccc}
\includegraphics[width=4.1cm,angle=270]{rvN1.eps} &
\includegraphics[width=4.1cm,angle=270]{rvN2.eps} &
\includegraphics[width=4.1cm,angle=270]{rvN3.eps} \\
\end{tabular}
\caption{Radial velocity histograms towards the North Galactic Pole
for magnitudes 5.5 to 8.5 for ELODIE data: model (dashed line) and
contributions of the different types of stars: giants (red or dark
lines), sub-giants (dot-dashed and dotted lines) and dwarfs (green or grey
lines).}
\label{f:elodie}
\end{figure*}
From these explorations, we choose to fix or bound some important
Galactic model parameters which would otherwise be poorly constrained:
i) we fix the vertical Galactic potential (adjusting the $K_z$ force
does not give more accurate results than, for instance, in Bienaym\'e et
al. \cite{bie06}, since we only increase the number of stars with
measured radial velocities by a factor of 2); ii) the asymmetric drifts of
all kinematic components are linked through a unique linear asymmetric
drift relation with just one free parameter, and the solar velocity
component $V_\odot$ is also fixed;
iii) the axis ratio of the velocity ellipsoids is bounded: for thin disk
components ($\sigma_W\le 25$\,km\,s$^{-1}$) we set
$\sigma_U/\sigma_W>1.5$, and for thick disks ($\sigma_W>30$\,km\,s$^{-1}$)
$\sigma_U/\sigma_W>1.1$.
The agreement between our fitted model and the observed counts is
illustrated by the various magnitude, proper motion and radial
velocity distributions
(Figs.~\ref{f:count},~\ref{f:vitesse},~\ref{f:vitesse2},~\ref{f:elodie},~\ref{f:rave}).
Globally, the agreement is good, as indicated by the small
$\chi^2$ values obtained. We comment only on the main disagreements
visible within these distributions. They can be compared to recent
similar studies (Girardi et al. \cite{gir05}; Vallenari et al.
\cite{val06}).
\begin{figure*}[!htbp]
\center
\begin{tabular}{ccc}
\includegraphics[width=4.1cm,angle=270]{rvS1.eps} &
\includegraphics[width=4.1cm,angle=270]{g1.eps} &
\includegraphics[width=4.1cm,angle=270]{n1.eps} \\
\includegraphics[width=4.1cm,angle=270]{rvS2.eps} &
\includegraphics[width=4.1cm,angle=270]{g2.eps} &
\includegraphics[width=4.1cm,angle=270]{n2.eps} \\
\includegraphics[width=4.1cm,angle=270]{rvS3.eps} &
\includegraphics[width=4.1cm,angle=270]{g3.eps} &
\includegraphics[width=4.1cm,angle=270]{n3.eps} \\
\end{tabular}
\caption{Number of giants and dwarfs in RAVE data compared to model prediction.
Left column: Radial velocity histograms towards the South Galactic Pole for magnitudes 8.5 to 11.5 for RAVE data: model (dashed line) and contributions of the different types of stars: giants (red or dark
lines), sub-giants (dot-dashed and dotted lines) and dwarfs (green or grey lines).
Center column: Radial velocity histograms for all stars (black) and for giants (red or grey): model for all stars (black dashed line) and for giants (red or grey dashed line).
Right column: Radial velocity histograms for all stars (black) and for dwarfs (green or light grey): model for all stars (black dashed line) and for dwarfs (green or light grey dashed line).
}
\label{f:rave}
\end{figure*}
The agreement for the apparent magnitude distribution looks satisfying
in Fig.~\ref{f:count}.
The comparison of observed and modeled $\mu_U$ proper motion
distributions does not show satisfactory agreement close to the maxima
of the histograms at apparent magnitude $m_{\rm K}<10$ (NGP or SGP, see
Fig.~\ref{f:vitesse}). We have not been able to determine whether this is
due to the inability of our model to describe the observed data, for
instance because of simplifying assumptions (gaussianity of the velocity
distribution, asymmetric drift relation, constant ratio of velocity
dispersions, etc.). We note that this disagreement may simply result
from an underestimate of the impact of the proper motion errors.
Some possible substructures are seen in the proper motion histograms for
the brightest bins ($m_{\rm K}<7$, Fig.~\ref{f:vitesse}); they are
close to the level of Poissonian fluctuations and only marginally
significant. One of the possible structures corresponds to the known
Hercules stream ($\bar U=-42$\,km\,s$^{-1}$ and $\bar
V=-52$\,km\,s$^{-1}$; Famaey et al. \cite{fam05b}).
For faint magnitude bins ($m_{\rm K}>11$, Fig.~\ref{f:vitesse2}),
small shifts ($\sim$3-5\,mas\,yr$^{-1}$) of $\mu_U$ explain most of the differences between North
and South and the larger $\chi^2$.
At $ m_{\rm K}$ within 10-13 (Fig.~\ref{f:vitesse2}), the wings of
$\mu_U$ histograms look slightly different between North and South
directions; it apparently results from shifts of North histograms
versus South ones.
A disagreement of the model versus observations also appears within
the wings of the $\mu_V$ distributions ($m_{\rm K}$ within 10-13,
Fig.~\ref{f:vitesse2}). This may introduce some doubt
concerning our ability to correctly recover the asymmetric drift,
because the negative proper motion tail of $\mu_V$ distributions
directly reflects the asymmetric drift of the $V$ velocity
component. However, we estimate that our determination of the
asymmetric drift coefficient is robust and marginally correlated to
the other model parameters.
These comparisons of observed and model distributions suggest new
directions to analyze data. In the future, we plan to use the present
galactic model to simultaneously fit the RAVE radial velocity distribution
in all available galactic directions. This result will be
compared to a fit of our model to proper motion distributions over all
galactic directions. This will give a better insight into the inconsistency
between radial velocity and proper motion data, and also for possible
inconsistency in our galactic modeling.
\subsection{The transition from dwarfs to giants}
Within the J$-$K=[0.5-0.7] interval, the proper motion is an excellent distance indicator: there is a factor of 14 between the proper motion of a dwarf and that of a giant with the same apparent
magnitude and velocity. Combining proper motions and apparent magnitudes, our best-fit Galactic model allows us to separate the contributions of dwarfs and giants (Fig.~\ref{f:count}).
We deduce that, towards the Galactic poles, most of the bright stars are giants. At $m_{\rm K}=7.2$, only 10\% are dwarfs, and at $m_{\rm K}=9.6$ only 50\% are giants. We have checked whether the contribution of sub-giants with absolute magnitude $M_{\rm K}=[0.2-2]$ can change the relative contributions of dwarfs and giants. At $m_{\rm K}<10$, the contribution of sub-giants with $M_{\rm K}=[0.2-2]$ is at least one order of magnitude lower, so the ratio of giants to dwarfs is unchanged. Furthermore, the RAVE data confirm our model prediction. This contradicts the statement of Cabrera-Lavers et al. (\cite{cab05}), based on the Wainscoat et al. (\cite{wai92}) model, which estimates that, at magnitude $m_{\rm K}<10$, giants represent more than 90\% of the stars. The Wainscoat model assumes only one disk, with a scale height of 270\,pc for the giants and 325\,pc for the dwarfs. In our model, we find a scale height of 225\,pc for both the giants and the dwarfs. This explains why we find more dwarfs at bright magnitudes ($m_{\rm K}<10$).
Faint stars are mainly dwarfs: 80\% at $m_{\rm K}=11.6$, while at $m_{\rm K}=11.9$ only 10\% are giants. The 50\%-50\% transition between giants plus sub-giants and dwarfs occurs at $m_{\rm K}\sim10.1$. This is a robust result of our study, depending only slightly on the absolute magnitudes adopted for dwarf and giant stars. We have not tried to change our color range. If we took a broader color interval, the dispersion around the absolute magnitude of dwarfs would be larger, but our results are not expected to change. For another color interval, we can expect the result to be different, since we would be looking at a different spectral type of star. \\
A confirmation of the dwarf-giant separation between magnitudes $m_{\rm K}=[5.5-11.5]$ comes from RAVE spectra. With the preliminary determination of the stellar parameters ($T_{\rm eff}$, $\log(g)$ and [Fe/H]) of RAVE stars, we choose to define giant stars by $\log(g) < 3$ and dwarfs by $\log(g) > 4$.
The number of giants and dwarfs predicted by our best model
is in good agreement with the observed one (see Fig.~\ref{f:rave}).
\begin{figure}[!htbp]
\resizebox{\hsize}{!}{
\includegraphics[angle=270]{rho.eps} }
\caption{Model of the vertical stellar density $\rho(z)$
towards the North Galactic Pole (dashed line) and its thin and
thick disk decomposition (thin and thick lines, respectively). The
thin disk includes the isothermal kinematic components with
$\sigma_W<25$\,km\,s$^{-1}$; the thick disks include components with
$\sigma_W>25$\,km\,s$^{-1}$.}
\label{f:thin-thick}
\end{figure}
\subsection{The scale heights of stellar components}
Our dynamical modeling of star counts allows us to recover the vertical density distribution of each kinematic component $\rho_i(z)$, with the exact shapes depending on the adopted vertical potential $\Phi(z)$. We recover the well-known double-exponential shape of the total vertical number density distribution $\rho_{\rm tot}(z)$ (Fig.~\ref{f:thin-thick}). Since we estimate that the kinematic decomposition in isothermal components is closer to the idealized concept of stellar populations and disks, we identify the thin disk as the components with vertical velocity dispersions
$\sigma_W$ smaller than 25\,km\,s$^{-1}$ and the thick disk with $\sigma_W$ from 30 to 45.5\,km\,s$^{-1}$ (Fig.~\ref{f:KDF}). Following this identification, we can fit an exponential to the thin and thick disk vertical density components (thin and thick lines, respectively, of Fig.~\ref{f:thin-thick}). The scale height of the thin disk is 225$\pm$10\,pc within 200-800\,pc. For the thick disk, within 0.2-1.5\,kpc, the scale height is 1048$\pm$36\,pc. If we consider all the kinematic components without distinguishing between the thin and thick disks, we can fit a double exponential with a thin disk scale height of 217$\pm$15\,pc and a thick disk scale height of 1064$\pm$38\,pc. We calculate the errors on the scale heights from the errors on the individual kinematic disk components $\phi_{kin,i}$ (see Tab.~\ref{t:KDF}). We have performed a Monte-Carlo simulation on the values of the components and obtained the error bars for the scale heights of the thin and thick disks, both independently and together.
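As an illustration only (not the authors' pipeline), the straight-line fit in log space that yields such scale heights can be sketched as follows; the component density used here is synthetic, with an invented normalization, and the helper name is ours:

```python
import numpy as np

def scale_height(z, rho):
    """Exponential scale height from ln(rho) = const - z/h, so h = -1/slope."""
    slope, _intercept = np.polyfit(z, np.log(rho), 1)
    return -1.0 / slope

# Synthetic thin-disk component sampled over the 200-800 pc fitting range
# used in the text; the true scale height is set to 225 pc by construction.
z = np.linspace(200.0, 800.0, 61)          # heights above the plane (pc)
rho_thin = 1.0e-3 * np.exp(-z / 225.0)     # toy density run (arbitrary units)

print(scale_height(z, rho_thin))           # recovers 225 pc
```

The thick disk fit over 0.2-1.5\,kpc, and a Monte-Carlo propagation of the $\phi_{kin,i}$ errors, would proceed in the same way on each component separately.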
We note that our density distribution is not exponential for $z<200$\,pc: this mainly results from the fact that we do not model components with small velocity dispersions $\sigma_W<8$\,km\,s$^{-1}$. Thus our estimated density at $z$=0 cannot be directly compared, for instance, to the results of Cabrera-Lavers et al. (\cite{cab05}). With this proviso, the star number density ratio of thick to thin disk stars at $z$=0\,pc is 8.7\% for the dwarfs.
One candidate tracer of the thin and thick disks is the red clump giants. In fact, at $z$-distances larger than $\sim$500\,pc (i.e. $m_{\rm K}$ larger than $\sim$7.0, see Fig.~\ref{f:count}), there are more thick disk giants than thin disk giants. Cabrera-Lavers et al. (\cite{cab05}) have analyzed them using 2MASS data. To do this, they select all stars with color J-K=[0.5-0.7] and magnitude $m_{\rm K}<10$. But, beyond magnitude 9, the proportion of giants relative to sub-giants and dwarfs decreases quickly. At $m_{\rm K}$=9.6, giants represent just half of the stars, and their distance is about 1.7\,kpc. Thus, we must be cautious when probing the thick disk with clump giants, and we first have to determine the respective sub-giant and dwarf contributions. However, Cabrera-Lavers et al. (\cite{cab05}) obtained scale heights of 267$\pm$13\,pc and 1062$\pm$52\,pc for the thin and thick disks, which is in relatively good agreement with the values obtained from our model.
For dwarfs that dominate the counts at faint apparent magnitudes $m_{\rm K} > 11$ (distances larger than $\sim$ 240\,pc), we use the photometric distance:
\begin{equation}
z_{phot}=10^{(m_{\rm K}-M_{\rm K}+5)/5}\ {\rm pc}
\end{equation}
where $M_{\rm K}$ is equal to 4.15 (the value for the dwarfs).
Doing so, we obtain the number density $n(z_{phot})$ of stars seen along the line of sight at the SGP and NGP (Fig.~\ref{f:nz}). These plots show a well-defined first maximum at $z_{phot}$=500\,pc (SGP) or 700\,pc (NGP) related to the distribution of thin disk dwarfs. At 0.9-1.1\,kpc, $n(z_{phot})$ has a minimum and then rises again at larger distances, indicating the thick disk dwarf contribution.
However, the use of photometric distances can introduce a systematic error for thick disk dwarfs, which have lower metallicities. The mean metallicity of the thick disk population at 1\,kpc is $\langle$[Fe/H]$\rangle \simeq -0.6$ (Gilmore et al. \cite{gil95}; Carraro et al. \cite{car98}; Soubiran et al. \cite{sou03}).
The metallicity variation from [Fe/H]=0.0 for the thin disk to [Fe/H]=$-$0.6 for the thick disk means that the absolute magnitude $M_{\rm K}$ changes from 4.15 to 4.5. We therefore vary the absolute magnitude smoothly with the metallicity from the thin to the thick disk, in this way:
\begin{equation}
M_{\rm K}([Fe/H])=M_{{\rm K},0} + 0.035 m_{\rm K}
\end{equation}
where $M_{{\rm K},0}$ is equal to 4.15.
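A minimal sketch (not the authors' code) combining the two relations above: the photometric distance from the standard distance modulus (in parsecs) and the metallicity-corrected absolute magnitude. The function name and the optional flag are ours, introduced for illustration:

```python
M_K0 = 4.15  # absolute K magnitude of a solar-metallicity dwarf

def z_phot(m_K, metallicity_correction=False):
    """Photometric distance (pc) of a dwarf along the polar line of sight."""
    # Smooth thin-to-thick disk correction: M_K([Fe/H]) = M_K0 + 0.035 m_K
    M_K = M_K0 + 0.035 * m_K if metallicity_correction else M_K0
    # Standard distance modulus: m - M = 5 log10(d) - 5, with d in pc
    return 10.0 ** ((m_K - M_K + 5.0) / 5.0)

print(round(z_phot(11.0)))  # 234 pc, the ~240 pc quoted for m_K > 11
```

With the correction switched on, the fainter (more distant, more metal-poor) dwarfs are assigned a fainter $M_{\rm K}$ and hence a smaller distance, which is what fills in the minimum of Fig.~\ref{f:nz2} relative to Fig.~\ref{f:nz}.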
The counts continue to show two maxima (Fig.~\ref{f:nz2}), even if the minimum is less deep. The minimum delineates a discontinuous transition between the thin and thick components. \\
The superposition of the model on the number density $n(z_{phot})$ shows only approximate agreement (Fig.~\ref{f:nz}). We think this is due to the non-isothermality of the real stellar components. In any case, the fact that the model does not exactly reproduce the observations does not weaken the conclusion about the kinematic separation of the thin and thick disks. It reinforces the need for a clear kinematic separation between the two disks in the kinematic decomposition (Fig.~\ref{f:KDF}). \\
We also notice, in Fig.~\ref{f:nz}, the difference in counts between the North and the South. This difference allows us to determine the distance of the Sun above the Galactic plane, $z_\odot=+20.0\pm2.0$\,pc, assuming symmetry between North and South. We also note that the
transition between thin and thick disks is more visible towards the SGP than towards the NGP.
\begin{figure*}[!htbp]
\resizebox{\hsize}{!}{
\hspace{1.5cm}
\includegraphics{StepsS1.eps}
\hspace{3cm}
\includegraphics{StepsN1.eps}
\hspace{1.5cm}
}
\caption{Data (histogram with error bars) and model (dashed line) for the NGP (left)
and SGP (right) vertical density distribution
using photometric distances $n_{phot}(z)$ for dwarf stars. The
transition between thin and thick components is revealed by a minimum
at $z\sim$1\,kpc. The main contributing components are plotted, for
the thin disk (thin continuous line) $\sigma_W$ = 10.5 (dot-dashed),
14 \& 17.5 (triple dot-dashed), 21 \& 24.5\,km\,s$^{-1}$ (dotted) and
for the thick disk (thick continuous line) $\sigma_W$ = 45.5
km\,s$^{-1}$.}
\label{f:nz}
\end{figure*}
\begin{figure*}[!htbp]
\resizebox{\hsize}{!}{
\hspace{1.5cm}
\includegraphics{StepsS2.eps}
\hspace{3cm}
\includegraphics{StepsN2.eps}
\hspace{1.5cm}
}
\caption{Histograms of the vertical density distribution for the NGP (left) and SGP (right)
using photometric distances $n_{phot}(z)$ for dwarf stars, with a smooth variation of [Fe/H] from the thin to the thick disk.}
\label{f:nz2}
\end{figure*}
\begin{figure*}[!htbp]
\resizebox{\hsize}{!}{
\hspace{1cm}
\includegraphics[angle=270]{FVnb.eps}
\hspace{2cm}
\includegraphics[angle=-90]{FVcontinu.eps}
\hspace{1cm}
}
\caption{Left: the local $\sigma_W$ kinematic
distribution function. The components contributing to star counts can
be grouped into a thin disk component ($\sigma_W<25$\,km\,s$^{-1}$),
a thick disk (isothermal with $\sigma_W$=45.5\,km\,s$^{-1}$) and a
hotter component with $\sigma_W\sim$65\,km\,s$^{-1}$. The
first two components, with $\sigma_W$=3.5 and 7\,km\,s$^{-1}$, are set to
zero by construction. Right: a kinematic distribution function (KDF) that tries to reproduce the
magnitude star counts and the kinematic data; this model has been obtained by requiring the
continuity of the KDF from $\sigma_W$=10 to 48\,km\,s$^{-1}$.}
\label{f:KDF}
\end{figure*}
\subsection{The thin--thick disk transition, and the kinematic distribution
function}
\label{s:thin-thick}
The minimum at $z$\,$\sim$\,1\,kpc in the $n(z)$ distribution (Fig.~\ref{f:nz}) provides very direct evidence of the discontinuity between stellar components with small velocity dispersions
($\sigma_W$=10-25\,km\,s$^{-1}$) and those with intermediate velocity dispersions ($\sigma_W\sim$\,45.5\,km\,s$^{-1}$) (left panel Fig. \ref{f:KDF}).
\begin{table}
\begin{center}
\begin{tabular}{| r | r @{.} l | r @{.} l | r @{.} l | r @{.} l |}
\hline
n$^{o}$ & \multicolumn{2}{c|}{$\sigma_W$} & \multicolumn{2}{c|}{$\phi_{kin}$} &
\multicolumn{2}{c|}{error} & \multicolumn{2}{c|}{error} \\
 & \multicolumn{2}{c|}{(km\,s$^{-1}$)} & \multicolumn{2}{c|}{($\times 10^6$)} & \multicolumn{2}{c|}{(absolute)} & \multicolumn{2}{c|}{(in \%)} \\
\hline
1 & 3 & 5 & 0 & 00 & \multicolumn{2}{c|}{--} & \multicolumn{2}{c|}{--} \\
2 & 7 & 0 & 0 & 00 & \multicolumn{2}{c|}{--} & \multicolumn{2}{c|}{--} \\
3 & 10 & 5 & 2044 & 13 & 720 & 50 & 35 & 25 \\
4 & 14 & 0 & 596 & 69 & 493 & 81 & 82 & 76 \\
5 & 17 & 5 & 1618 & 79 & 169 & 57 & 10 & 48 \\
6 & 21 & 0 & 385 & 76 & 92 & 03 & 23 & 86 \\
7 & 24 & 5 & 234 & 53 & 54 & 72 & 23 & 33 \\
8 & 28 & 0 & 3 & 85 & 35 & 10 & \multicolumn{2}{l|}{ $>$100} \\
9 & 31 & 5 & 53 & 21 & 33 & 09 & 62 & 19 \\
10 & 35 & 0 & 79 & 16 & 30 & 73 & 38 & 82 \\
11 & 38 & 5 & 64 & 71 & 63 & 76 & 98 & 53 \\
12 & 42 & 0 & 27 & 49 & 66 & 31 & \multicolumn{2}{l|}{ $>$100} \\
13 & 45 & 5 & 216 & 96 & 44 & 07 & 20 & 32 \\
14 & 49 & 0 & 2 & 63 & 39 & 19 & \multicolumn{2}{l|}{ $>$100} \\
15 & 52 & 5 & 0 & 38 & 0 & 08 & 21 & 05 \\
16 & 56 & 0 & 0 & 04 & 0 & 04 & 100 & 00 \\
17 & 59 & 5 & 0 & 29 & 0 & 11 & 37 & 93 \\
18 & 63 & 0 & 4 & 83 & 31 & 72 & \multicolumn{2}{l|}{ $>$100} \\
19 & 66 & 5 & 5 & 86 & 30 & 88 & \multicolumn{2}{l|}{ $>$100} \\
20 & 70 & 0 & 2 & 69 & 0 & 05 & 1 & 86 \\
\hline
\end{tabular}
\end{center}
\caption{Values of the kinematic disk components $\phi_{kin,i}$ (10$^6\times$ number of stars\,pc$^{-3}$), with the individual absolute and relative (in percent) errors.}
\label{t:KDF}
\end{table}
Another manifestation of this transition is well known from the $\log \rho(z)$ density distribution (Fig.~\ref{f:thin-thick}), which shows a change of slope at $z$=500-700\,pc. This feature can be successfully modeled with two (thin and thick) components (e.g. Gilmore \& Reid \cite{gil83}), which is an indication of a discontinuity between the thin and thick disks of our Galaxy.
This is conclusive evidence only if we show that we cannot accurately fit the star counts or vertical density distributions with a continuous set of kinematic components (without a gap between the thin
and the thick disks). We find that the constraint of a set of kinematic components following a continuous trend (right panel of Fig.~\ref{f:KDF}) raises the reduced $\chi^2$, in particular on the SGP magnitude counts, from 1.59 to 3.40. This confirms the robustness of our result and our conclusion on the wide transition between thin and thick stellar disk components.
Adjusting the Galactic model to star counts, tangential and radial velocities, we can recover the details of the kinematics of stellar populations, and we determine the local $\sigma_W$ kinematic distribution function (left panel of Fig.~\ref{f:KDF} and Tab. \ref{t:KDF}). This kinematic distribution function clearly shows a large step between the kinematic properties of the thin and thick disks. We define the thin disk as the components with $\sigma_W$ covering 10-25\,km\,s$^{-1}$, and the thick disk as the components with $\sigma_W$ covering 30-45 km\,s$^{-1}$. The counts and radial velocities by themselves already show the kinematic transition that we obtain in the kinematic decomposition. The fit of proper motions confirms the conclusion from the star counts and radial velocities, even if a fraction of the proper motions $\mu_l$ and $\mu_b$ at magnitude m$_{\rm K}$ fainter than 13 have significant errors ($> 20$ km\,s$^{-1}$). The only consequence for the proper motion errors is that we obtained an ellipsoid axis ratio $\sigma_U/\sigma_W$ different from the classical values (see Sec. \ref{kin}).
The last non-null components, at approximately $\sigma_W\sim65$\,km\,s$^{-1}$, are necessary to fit the faintest star counts at $m_{\rm K}\sim15$. But they do not result from the fit of the proper motion histograms (since, unfortunately, these stop at $m_{\rm K}\sim$14). Thus their exact nature, a second thick disk or a halo (which would have very different asymmetric drifts), cannot be resolved in the context of our analysis.
\subsection{The luminosity function of stellar components}
\label{lf}
Our adjustment of distant star counts and kinematics constrains the local
luminosity function (LF).
We make the comparison with the local
LF determined with nearby stars. However, the brightest HIPPARCOS
stars needed to determine the local LF are saturated within
2MASS and have less accurate photometry. We can also compare it to
the LF determined by Cabrera-Lavers et al. (\cite{cab05}) who use
a cross-match of HIPPARCOS and MSX stars and estimate $m_{\rm K}$
magnitudes from MSX A band magnitudes (hereafter [8.3]).
However, we note from our own
cross-match of HIPPARCOS-MSX-2MASS (non-saturated) stars that their
LF, for stars
selected from V$-$[8.3], corresponds mainly to stars with J--K colors
between 0.6-0.7 rather than between 0.5-0.7. A second limitation for
a comparison of LFs is that our modeling does not include the stellar
populations with small velocity dispersions ($\sigma_W<8$
km\,s$^{-1}$). For these reasons, we determine a rough local LF
based on 2MASS-HIPPARCOS cross-matches, keeping stars with V$<$7.3 or
distances $<$125\,pc, and using the color selection V--K between 2.0
and 2.6, that corresponds approximately to J--K = [0.5-0.7].
Using V and K magnitudes minimizes the effects of the J--K
uncertainties. Considering these limitations, there is reasonable
agreement between the local LF obtained with our model using distant
stars and the LF obtained from nearby Hipparcos
stars (see Fig.~\ref{f:LF}).
\begin{figure}[!htbp]
\resizebox{\hsize}{!}{
\includegraphics{FL2.eps}
}
\caption{The local luminosity function of K stars from our modeling
of star counts towards the Galactic poles (line), compared to the
LF from nearby Hipparcos K stars by Cabrera-Lavers et al.
(\cite{cab05}) (red or black histogram) and to our own estimate of the local LF
(see text; green or grey histogram with error bars). The scale of
Cabrera-Lavers et al.'s LF has been arbitrarily shifted.}
\label{f:LF}
\end{figure}
\subsection{The stellar kinematics}
\label{kin}
Many of the stellar disk kinematic properties obtained with our best
fit Galactic model are comparable with previously published results.
We make the comparison with the analysis of HIPPARCOS data (Dehnen \&
Binney \cite{deh98}; Bienaym\'e \cite{bie99}; N\"ordstrom et al.
\cite{nor04}; Cubarsi \& Alcob\'e \cite{cub04}; Famaey et al.
\cite{fam05}), and also with results published from remote
stellar samples using a wide variety of processes to identify thin and
thick kinematic components (Barta\u{s}i\={u}t\.{e} \cite{bar94}; Flynn
\& Morrel \cite{fly97}; Soubiran et al. \cite{sou03}; Pauli et al.
\cite{pau05}).
For the solar motion relative to the LSR, we obtain
$u_{\odot}$=8.5$\pm$0.3\,km\,s$^{-1}$ and
$w_{\odot}$=11.1$\pm$1.0\,km\,s$^{-1}$. For the asymmetric
drift coefficient, we find $k_{a}$=76$\pm$4\,km\,s$^{-1}$, compared to
80$\pm$5\,km\,s$^{-1}$ for nearby HIPPARCOS stars (Dehnen \& Binney
\cite{deh98}), and the thick disk lag is $V_{\rm lag}=\sigma_R^2/k_{a}=33\pm2$\,km\,s$^{-1}$ relative to the LSR. We note that this value of
the thick disk lag is close to the value of Chiba \& Beers (\cite{chi00}) and to other
earlier estimates. It is in less good agreement with the often-mentioned values
of 50-100\,km\,s$^{-1}$ from pencil-beam
samples, which may be more affected by Arcturus group stars, more
dominant at higher $z$-values.
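As a reader's consistency check (simple arithmetic, not part of the paper's fitting pipeline), the quoted lag follows from the asymmetric drift relation with $\sigma_R=50$\,km\,s$^{-1}$, the thick disk radial dispersion given in the conclusion:

```python
k_a = 76.0      # km/s, fitted asymmetric drift coefficient
sigma_R = 50.0  # km/s, thick disk radial dispersion (from the conclusion)

# Asymmetric drift relation: V_lag = sigma_R^2 / k_a
v_lag = sigma_R**2 / k_a
print(v_lag)    # about 32.9 km/s, within the quoted 33 +/- 2 km/s
```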
Our determination of
the asymmetric drift coefficient is highly correlated to $V_{\odot}$.
The reason is that we do not fit populations with low velocity
dispersions and small $V_{\rm lag}$ since we do not fit star counts with
$m_{\rm K}<6$: as a consequence the slope of the relation, $V_{\rm lag}$
versus $\sigma_U$, is less well constrained. To improve the $k_{a}$
determination, we adopt $V_{\odot}=5.2$\,km\,s$^{-1}$ (Dehnen \& Binney
\cite{deh98}; Bienaym\'e \cite{bie99}). The adjusted
$\sigma_U/\sigma_V$ velocity dispersion ratio, taken to be the
same for all components, is $1.44 \pm 0.02$.
We obtain $\sigma_U/\sigma_W$ ratios
significantly smaller than those published using nearby samples of
stars. For the thin disk components, we find $\sigma_U/\sigma_W$=1.50
to 1.62 (compared to published values $\sim$2 by authors using
HIPPARCOS stars). For the thick disk, we obtain
$\sigma_U/\sigma_W$=1.1, instead of $\sim 1.5-1.7$ typically obtained
with nearby thick disk stars by other authors.
While there is no dynamical reason preventing a variation of
$\sigma_U/\sigma_W$ with $z$, we suspect that our low
$\sigma_U/\sigma_W$ ratio at large $z$ for the thick disk results from
a bias in our model, the outer parts of the wings of some
proper motion histograms not being accurately adjusted.
This may be the consequence of an incorrect adopted vertical
potential or, as we think more likely, of the non-isothermality of
the real velocity distributions. This suspicion is reinforced by the fact that
fitting each proper motion histogram separately with a set of gaussians
gives us larger values of $\sigma_U/\sigma_W$.
Our results can be directly compared with the very recent analysis
by Vallenari et al. (\cite{val06}) of stellar populations towards the
NGP using BVR photometry and proper motions (Spagna et
al. \cite{spa96}). Their model is dynamically consistent but based on
quite different hypotheses from ours: for each stellar population,
they assume that in the Galactic plane $\sigma_{zz}^2$ is proportional
to the stellar density $\rho$ (van der Kruit \& Searle \cite{vdk82}). They
also assume that both velocity dispersions, $\sigma_{zz}^2$ and
$\sigma_{RR}^2$, follow exponential laws with the same exponential
scale length as the surface mass density (Lewis \& Freeman
\cite{lew89}). Vallenari et al. (\cite{val06}) found thick disk
properties (see their Table 6) quite similar to the ones obtained in
this paper. They obtain $\sigma_W$=38$\pm$7\,km\,s$^{-1}$,
$\sigma_U/\sigma_V$=1.48,
$V_{\rm lag}=42\pm7$\,km\,s$^{-1}$, and a thick
disk scale height of 900\,pc. However, they find $\sigma_U/\sigma_W$=1.9.
They also claim that ``no significant
velocity gradient is found in the thick disk'', implying that the thick
disk must be an isothermal component.
\subsubsection{Radial velocities}
The number of RAVE and ELODIE stars used in this analysis is a tiny fraction of the total number of stars taken from the 2MASS or UCAC2 catalogues. However, they play a key role in constraining the Galactic model parameters: the magnitude coverage of RAVE stars towards the SGP, from $m_{\rm K}=8.5$ to 11.5, can be used to discriminate between the respective contributions from the different types of stars (dwarfs, sub-giants, giants). A future RAVE data release (Zwitter et al., submitted) will include gravities, allowing for easier identification of dwarfs and red clump giants; it will also include element abundances, allowing for a better description of the stellar disk populations and new insights into the process of their formation.
\section{Conclusion}
We revisit the thin-thick disk transition using star counts
and kinematic data towards the Galactic poles. Our Galactic modeling
of star counts, proper motions and radial velocities allows us to recover the
local LF of the stellar populations, their kinematic distribution function, their vertical
density distribution, the relative distributions of giants, sub-giants
and dwarfs, the relative contributions of the thin and thick disk
components, the asymmetric drift coefficient and the solar velocity
relative to the LSR.
The double exponential fitting of the vertical disk stellar density distribution is not sufficient to fully characterize the thin and thick disks. A more complete description of the stellar disk is given by its kinematical decomposition.
From the star counts, we see a sharp transition between the
thick and thin components. Combining star counts with kinematic data,
and applying a model with 20 kinematic components, we discover a gap
between the vertical velocity dispersions of the thin disk components, with
$\sigma_W$ less than 21\,km\,s$^{-1}$, and a dominant thick disk
component at $\sigma_W$=45.5\,km\,s$^{-1}$. The thick disk scale height
is found to be 1048$\pm$36\,pc.
We identify this thick disk with the intermediate metallicity ([Fe/H]
$\sim -0.6$ to $-0.25$) thick disk described, for instance, by
Soubiran et al. (\cite{sou03}). This thick disk is also similar to
the thick disk measured by Vallenari et al. (\cite{val06}), who find ``no
significant velocity gradient'' for this stellar component. We note
that star counts at $m_{\rm K}\sim15$ suggest a second
thick disk or halo component with $\sigma_W\sim$65\,km\,s$^{-1}$.
Owing to the separation of the thin and thick components, clearly
identified in the star counts and visible in the kinematics,
the thick disk measured in this paper cannot be the result of
dynamical heating of the thin disk by massive molecular clouds
or by spiral arms. Otherwise, we would expect a continuous kinematic
distribution function, with significant kinematic components covering
the range of $\sigma_W$ from 10 to 45\,km\,s$^{-1}$ without
discontinuity.
We find that, at the solar position, the surface mass density of the thick disk
is 27\% of the surface mass density of the thin disk. The thick disk has velocity dispersions
$\sigma_U=50\,$km\,s$^{-1}$, $\sigma_W$=45.5\,km\,s$^{-1}$, and
asymmetric drift $V_{\rm lag}=33 \pm 2$\,km\,s$^{-1}$.
Although clearly separated from the thin disk, this thick component
remains a relatively `cold' thick disk and has characteristics that
are close to the thin disk properties.
This `cold' and rapidly rotating thick disk is similar to the
component identified in many kinematic studies of the thick disk (see
Chiba \& Beers \cite{chi00} for a summary). Its kinematics appear to
be different from those of the thick disk stars studied at intermediate
latitudes in pencil-beam surveys (e.g. Gilmore et al. \cite{gil02}),
which appear to be significantly affected by a substantial stellar
stream with a large lag velocity, interpreted as the possible debris
of an accreted satellite (Gilmore et al. \cite{gil02}; Wyse et al.
\cite{wys06}). Some connections may exist with streams identified
in the solar neighborhood, such as the Arcturus stream (Navarro et al.
\cite{nav04}).
Some mechanisms of formation connecting thin and thick
components are compatible with our findings. One possibility is
a `puffed-up' thick disk, i.e. an earlier thin disk puffed up by the accretion
of a satellite (Quinn et al. \cite{qui93}). Another possibility,
within the monolithic collapse scenario, is a thick disk formed from
gas with a large vertical scale height before the final collapse of
the gas into a thin disk, i.e. a thick disk `created on the spot'. We also
note the Samland (\cite{sam04}) scenario: a chemodynamical model of the
formation of a disk galaxy within a growing dark halo that provides
both a `cold' thick disk and a metal-poor `hot' thick disk.
A popular scenario is the `accreted' thick disk formed from the
accretion of satellites. If the thick disk results from the accretion
of just a single satellite, with a fifth of the mass of the Galactic
disk, this was certainly a major event in the history of the
Galaxy, and it is hard to believe that the thin disk could have
survived such an upheaval.
Finally, from the thick disk properties identified in
this paper, we can reject the most improbable formation scenario:
that of a `heated' thick disk (heated by molecular clouds or
spiral arms).
\begin{acknowledgements}
Funding for RAVE has been provided by the Anglo-Australian
Observatory, by the Astrophysical Institute Potsdam, by the Australian
Research Council, by the German Research foundation, by the National
Institute for Astrophysics at Padova, by The Johns Hopkins University,
by the Netherlands Research School for Astronomy, by the Natural
Sciences and Engineering Research Council of Canada, by the Slovenian
Research Agency, by the Swiss National Science Foundation, by the
National Science Foundation of the USA (AST-0508996), by the
Netherlands Organisation for Scientific Research, by the Particle
Physics and Astronomy Research Council of the UK, by Opticon, by
Strasbourg Observatory, and by the Universities of Basel, Cambridge,
and Groningen. The RAVE web site is at www.rave-survey.org.
Data verification is partially based on observations taken at the
Observatoire de Haute Provence (OHP, France), operated by the French
CNRS.
This publication makes use of data products of the 2MASS, which is a
joint project of the University of Massachusetts and the Infrared
Processing and Analysis Center, funded by NASA and the NSF.
It is a pleasure to thank the UCAC team who supplied a copy of the
UCAC CD-ROMs in July 2003.
This research has made use of the SIMBAD and VIZIER databases,
operated at CDS, Strasbourg, France.
This paper is based on data from the ESA {\it HIPPARCOS} satellite
(HIPPARCOS and TYCHO-II catalogues).
\end{acknowledgements}
\section{Introduction}
\noindent
Theoretically, a distribution is log-symmetric when the corresponding random variable and its reciprocal have the same distribution \citep{Jones2008}. A characterization of distributions of this type can be constructed by taking the exponential function of a symmetric random variable. Therefore, log-symmetric distributions are used to describe the behavior of strictly positive data. The class of this type of distribution is quite broad and includes a large portion of bimodal distributions and those with lighter or heavier tails than the log-normal distribution; see e.g.~\cite{Vanegas2016}. Some examples of log-symmetric distributions are: log-normal, log-Student-$t$, log-logistic, log-Laplace, log-power-exponential, log-slash, etc.; see e.g., \cite{Crow1974}, \cite{Jones2008}, and \cite{Vanegas2016}.
Another important feature of the log-symmetric class is that it is closed under change of scale and under reciprocity, according to \cite{Puig2008}, which are very desirable properties for distributions used to describe strictly positive data; moreover, log-symmetric models allow one to model the median or the skewness (relative dispersion).
Furthermore, the log-symmetric class has statistical properties that might make it preferable to alternative distributions. For example, the two parameters of a log-symmetric distribution are orthogonal and can be interpreted directly as the median and the skewness (or relative dispersion), that is, as measures of position and scale; as stated by \cite{Vanegas2016}, these are, in the context of asymmetric distributions, complete measures of location and shape, respectively.
Within this context, the main objective of the current work is to extend, in a natural way, the definition of univariate log-symmetric distributions to the bivariate case, to
study its main statistical properties, to propose the maximum likelihood method for parameter estimation, and to present an application to real data.
The remainder of this work is organized as follows. In Section \ref{Sec:2}, the bivariate log-symmetric (BLS) model is proposed. In Section \ref{Sec:3}, the main mathematical properties, such as stochastic representation, quantile function, conditional distribution, Mahalanobis distance, independence, moments, and correlation function, are discussed.
In Section \ref{Sec:4}, we describe the maximum likelihood (ML) method for the estimation of the BLS model parameters.
In Section \ref{Sec:5}, we perform a Monte Carlo simulation to evaluate the performance of the maximum likelihood estimators.
In Section \ref{Sec:6}, we apply the BLS models to a data set, and finally, in Section \ref{Sec:7}, we provide some concluding remarks.
\section{The bivariate log-symmetric model}\label{Sec:2}
\noindent
A continuous random vector $\boldsymbol{T}=(T_1,T_2)$ follows a bivariate log-symmetric (BLS) distribution if its joint probability density function (PDF) is given by
\begin{eqnarray}\label{PDF}
f_{T_1,T_2}(t_1,t_2;\boldsymbol{\theta})
=
{1\over t_1t_2\sigma_1\sigma_2\sqrt{1-\rho^2}Z_{g_c}}\,
g_c\Biggl(
{\widetilde{t_1}^2-2\rho\widetilde{t_1}\widetilde{t_2}+\widetilde{t_2}^2
\over
1-\rho^2}
\Biggr),
\quad
t_1,t_2>0,
\\[0,3cm]
\widetilde{t_i}
=
\log\biggl[\Bigl({t_i\over \eta_i}\Bigr)^{1/\sigma_i}\biggr], \ \eta_i=\exp(\mu_i), \ i=1,2, \nonumber
\end{eqnarray}
where $\boldsymbol{\theta}=(\eta_1,\eta_2,\sigma_1,\sigma_2,\rho)$ is the parameter vector with $\mu_i\in\mathbb{R}$, $\sigma_i>0$, $i=1,2$; $\rho\in(-1,1)
; $Z_{g_c}>0$ is the partition function, that is,
\begin{align}\label{partition function}
Z_{g_c}
&=
\int_{0}^{\infty}\int_{0}^{\infty}
{1\over t_1t_2\sigma_1\sigma_2\sqrt{1-\rho^2}}\,
g_c\Biggl(
{\widetilde{t_1}^2-2\rho\widetilde{t_1}\widetilde{t_2}+\widetilde{t_2}^2
\over
1-\rho^2}
\Biggr)\, {\rm d}t_1{\rm d}t_2,
\end{align}
and $g_c$ is a scalar function referred to as the density generator; see \cite{Fang1990}.
We use, in this case, the notation $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$.
In this paper we prove that, when it exists, the variance-covariance matrix of a random vector $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$, denoted by $K_{\boldsymbol{T}}$, is a matrix function of the following dispersion matrix
(see Subsections \ref{moments} and \ref{Correlation function}):
\begin{align*}
\boldsymbol{\Sigma}=
\begin{pmatrix}
\sigma_1^2 & \rho\sigma_1\sigma_2
\\
\rho\sigma_1\sigma_2 & \sigma_2^2
\end{pmatrix}.
\end{align*}
In other words, $K_{\boldsymbol{T}}=\psi(\boldsymbol{\Sigma})$ for some matrix function $\psi: \mathcal{M}_{2,2}\longmapsto \mathcal{M}_{2,2}$, where $\mathcal{M}_{2,2}$ denotes the set of all 2-by-2 real matrices.
Based on the references \cite{Saulo2017} and \cite{Vanegas2016}, Table \ref{table:1} presents some examples of bivariate log-symmetric distributions; some BLS PDF plots are displayed in Figure \ref{fig:bls-pdfs}.
\begin{table}[!ht]
\caption{Partition functions $(Z_{g_c})$ and density generators $(g_c)$ for some distributions.}
\vspace*{0.15cm}
\centering
\begin{tabular}{llll}
\hline
Distribution
& $Z_{g_c}$ & $g_c$ & Parameter
\\ [0.5ex]
\noalign{\hrule height 1.7pt}
Bivariate Log-normal
& $2\pi$ & $\exp(-x/2)$ & $-$
\\ [1ex]
Bivariate Log-Student-$t$
& ${{\Gamma({\nu/ 2})}\nu\pi\over{\Gamma({(\nu+2)/ 2})}}$
& $(1+{x\over\nu})^{-(\nu+2)/ 2}$ & $\nu>0$
\\ [1ex]
Bivariate Log-Pearson Type VII
& ${\Gamma(\xi-1)\theta\pi\over\Gamma(\xi)}$ & $(1+{x\over\theta})^{-\xi}$ & $\xi>1$, $\theta>0$
\\ [1ex]
Bivariate Log-hyperbolic
& ${2\pi (\nu+1)\exp(-\nu)\over \nu^2}$ & $\exp(-\nu\sqrt{1+x})$ & $\nu>0$
\\ [1ex]
Bivariate Log-Laplace
& $\pi$ & $K_0(\sqrt{2x})$ & $-$
\\ [1ex]
Bivariate Log-slash
& ${\pi\over \nu-1}\, 2^{3-\nu\over 2}$ & $ x^{-{\nu+1\over 2}} \gamma({\nu+1\over 2},{x\over 2})$ & $\nu>1$
\\ [1ex]
Bivariate Log-power-exponential
& ${2^{\xi+1}(1+\xi)\Gamma(1+\xi)}\pi$ & ${\exp\bigl(-{1\over 2}\, x^{1/(1+\xi)}\bigr)}$ & $-1<\xi\leqslant 1$
\\ [1ex]
{Bivariate Log-Logistic}
& $\pi/2$ & {${\exp(-x)\over (1+\exp(-x))^2}$} & {$-$}
\\ [1ex]
\hline
\end{tabular}
\label{table:1}
\end{table}
\noindent
In Table \ref{table:1}, $\Gamma(t)=\int_0^\infty x^{t-1} \exp(-x) \,{\rm d}x$, $t>0$, is the gamma function,
$K_0(u)=\int_0^\infty t^{-1} \exp(-t-{u^2\over 4t}) \,{\rm d}t/2$, $u>0$, is the Bessel function of the third kind
(for more details on the main properties of $K_0$, see appendix of \cite{Kotz2001});
and $\gamma(s,x)=\int_{0}^{x}t^{s-1}\exp(-t)\,{\rm d}t$ is the lower incomplete gamma function.
\begin{figure}[htb!]
\centering
\subfigure[Log-normal]{\includegraphics[scale=0.35]{normal.eps}}
\subfigure[Log-Student-$t$ ($\nu = 3$)]{\includegraphics[scale=0.30]{student.eps}}
\subfigure[Log-Pearson Type VII ($\xi = 5,\theta = 22$)]{\includegraphics[scale=0.30]{pearson.eps}}
\subfigure[Log-hyperbolic ($\nu = 2$)]{\includegraphics[scale=0.30]{hyperbolic.eps}}
\subfigure[Log-slash ($\nu = 4$)]{\includegraphics[scale=0.30]{slash.eps}}
\subfigure[Log-power-exponential ($\xi = 0.5$)]{\includegraphics[scale=0.30]{powerexp.eps}}
\subfigure[Log-logistic]{\includegraphics[scale=0.30]{logistic.eps}}
\subfigure[Log-Laplace]{\includegraphics[scale=0.30]{norms.eps}}
%
%
\caption{BLS PDFs with $\boldsymbol{\theta}_*=(2,2,0.5,0.5,0)$.}
\label{fig:bls-pdfs}
\end{figure}
By using \eqref{PDF} it is clear that the random vector $\boldsymbol{X}=(X_1,X_2)$, with $X_i=\log(T_i)$, $i=1,2$, has a bivariate elliptically symmetric (BES) distribution; see p. 592 in \cite{Bala2009}. In other words, the PDF of $\boldsymbol{X}$ is as follows
\begin{eqnarray}\label{PDF-symmetric}
f_{X_1,X_2}(x_1,x_2;\boldsymbol{\theta}_*)
=
{1\over \sigma_1\sigma_2\sqrt{1-\rho^2}Z_{g_c}}\,
g_c\Biggl(
{\widetilde{x_1}^2-2\rho\widetilde{x_1}\widetilde{x_2}+\widetilde{x_2}^2
\over
1-\rho^2}
\Biggr),
\quad
-\infty<x_1,x_2<\infty,
\\[0,3cm]
\widetilde{x_i}={x_i-\mu_i\over\sigma_i}, \ i=1,2, \nonumber
\end{eqnarray}
where $\boldsymbol{\theta}_*=(\mu_1,\mu_2,\sigma_1,\sigma_2,\rho)$ is the parameter vector and $Z_{g_c}$ is the partition function stated in \eqref{partition function}. In this case, the notation $\boldsymbol{X}\sim {\rm BES}(\boldsymbol{\theta}_*, g_c)$ is used.
A simple standard calculation shows that the joint cumulative distribution function (CDF) of $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$, denoted by $F_{T_1,T_2}(t_1,t_2;\boldsymbol{\theta})$, is expressed as
\begin{align*}
F_{T_1,T_2}(t_1,t_2;\boldsymbol{\theta})
=
F_{X_1,X_2}\big(\log(t_1),\log(t_2);\boldsymbol{\theta}_*\big),
\end{align*}
with $F_{X_1,X_2}(x_1,x_2;\boldsymbol{\theta}_*)$ the CDF of $\boldsymbol{X}\sim {\rm BES}(\boldsymbol{\theta}_*, g_c)$. Except for the bivariate normal, there is no single closed form for the CDF of $\boldsymbol{X}$.
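As a numerical sanity check, the BLS density with the Gaussian generator can be compared against the bivariate log-normal density obtained from the BES density of $\boldsymbol{X}=\log(\boldsymbol{T})$ via a change of variables with Jacobian $1/(t_1t_2)$. The Python sketch below, with hypothetical parameter values and assuming `numpy` and `scipy` are available, illustrates this equivalence.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical parameter values: theta = (eta1, eta2, sigma1, sigma2, rho)
eta1, eta2, s1, s2, rho = 2.0, 2.0, 0.5, 0.5, 0.3

def bls_lognormal_pdf(t1, t2):
    """BLS density with Gaussian generator g_c(x) = exp(-x/2), Z_gc = 2*pi."""
    w1 = np.log(t1 / eta1) / s1
    w2 = np.log(t2 / eta2) / s2
    q = (w1**2 - 2 * rho * w1 * w2 + w2**2) / (1 - rho**2)
    norm_const = t1 * t2 * s1 * s2 * np.sqrt(1 - rho**2) * 2 * np.pi
    return np.exp(-q / 2) / norm_const

# Equivalent construction: X = log(T) is bivariate normal, Jacobian 1/(t1*t2)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
X = multivariate_normal(mean=[np.log(eta1), np.log(eta2)], cov=cov)

t1, t2 = 1.7, 2.4
direct = bls_lognormal_pdf(t1, t2)
via_transform = X.pdf([np.log(t1), np.log(t2)]) / (t1 * t2)
assert np.isclose(direct, via_transform)
```

The same comparison can be repeated at any point $(t_1,t_2)$ in the positive quadrant.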
\section{Some basic properties of the model} \label{Sec:3}
\noindent
In this section, some mathematical properties of the proposed bivariate log-symmetric distribution are discussed.
\subsection{Characterization of the partition function $Z_{g_c}$}
\begin{proposition}\label{partition function-simplicado}
The partition function $Z_{g_c}$ \eqref{partition function} is independent of the parameter vector $\boldsymbol{\theta}$. More precisely,
\begin{align*}
Z_{g_c}
=
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
g_c\big({z_1}^2+{z_2}^2\big)
\, {\rm d}{z_1}{\rm d}{z_2}
=
\pi
\int_{0}^{\infty}
g_c(u)
\, {\rm d}{u}.
\end{align*}
\end{proposition}
\begin{proof}
The proof of the first identity follows by considering in \eqref{partition function} the following change of variables (Jacobian Method):
\begin{align*}
z_1=\widetilde{t}_1,
\quad
z_2={\widetilde{t}_2-\rho \widetilde{t}_1\over\sqrt{1-\rho^2}},
\end{align*}
where $\widetilde{t}_i$, $i=1,2$, are defined in \eqref{PDF}.
The proof of the second identity follows by using
integration in polar coordinates: $z_1=r\cos(\theta)$, $z_2=r\sin(\theta)$, with $r\geqslant 0$ and $0\leqslant\theta\leqslant 2\pi$, and then the change of variables $u=r^2$, ${\rm d}u=2r {\rm d}r$.
For the sake of space, we omit the details of the calculations.
\end{proof}
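The identity $Z_{g_c}=\pi\int_0^\infty g_c(u)\,{\rm d}u$ can be checked numerically against the closed-form values of Table \ref{table:1}. A minimal Python sketch, assuming `scipy` is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def partition(gc):
    # Z_gc = pi * int_0^inf g_c(u) du, as in the proposition above
    val, _ = quad(gc, 0, np.inf)
    return np.pi * val

# Gaussian generator: tabulated value Z_gc = 2*pi
assert np.isclose(partition(lambda x: np.exp(-x / 2)), 2 * np.pi)

# Student-t generator with nu degrees of freedom:
# tabulated value Z_gc = Gamma(nu/2) * nu * pi / Gamma((nu+2)/2)
nu = 3.0
Z_table = gamma(nu / 2) * nu * np.pi / gamma((nu + 2) / 2)
assert np.isclose(partition(lambda x: (1 + x / nu) ** (-(nu + 2) / 2)), Z_table)
```

The same check applies to the other generators of the table whose integrals converge.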
\subsection{Stochastic representation}
\begin{proposition}\label{Stochastic Representation}
The random vector $\boldsymbol{T}=(T_1,T_2)$ has a BLS distribution if
\begin{align*}
&T_1=\eta_1 \exp(\sigma_1 Z_1),
\\[0,2cm]
&T_2=\eta_2
\exp\big(\sigma_2 {\rho} Z_1+\sigma_2\sqrt{1-\rho^2} Z_2\big),
\end{align*}
where $Z_1=RDU_1$ and $Z_2=R\sqrt{1-D^2}U_2$;
$U_1$, $U_2$, $R$, and $D$ are mutually independent random variables, $\rho\in(-1,1)$, $\eta_i=\exp(\mu_i)$, and $\mathbb{P}(U_i = -1) = \mathbb{P}(U_i = 1) = 1/2$, $i=1,2$. The random variable $D$ is positive and has PDF
\begin{align*}
f_D(d)={2\over \pi\sqrt{1-d^2}}, \quad d\in(0,1).
\end{align*}
Furthermore, the positive random variable $R$ is called the generator of the elliptical random vector $\boldsymbol{X}=(X_1,X_2)$.
{In other words, $R$ has PDF given by
\begin{align*}
f_R(r)={2r g_c(r^2)\over \int_{0}^{\infty}
g_c(u)
\, {\rm d}{u}}, \quad r>0.
\end{align*}
}
\end{proposition}
\begin{proof}
It is well known \citep{Abdous2005} that the vector $\boldsymbol{X}$ has a BES distribution if
\begin{align}\label{rep-stoch-biv-gaussian}
\begin{array}{llll}
&X_1=\mu_1+\sigma_1 Z_1,
\\[0,2cm]
&X_2=\mu_2+\sigma_2 {\rho} Z_1+\sigma_2\sqrt{1-\rho^2} Z_2.
\end{array}
\end{align}
Since $X_i=\log(T_i)$, $i=1,2$, the proof follows.
\end{proof}
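For the Gaussian generator, the pair $(Z_1,Z_2)$ in the stochastic representation above reduces to two independent standard normal variables, so sampling from the BLS model is direct. The Python sketch below, with hypothetical parameter values, checks that $\eta_i$ is the marginal median of $T_i$ and that $(\log T_1,\log T_2)$ has correlation $\rho$.

```python
import numpy as np

rng = np.random.default_rng(12345)
# Hypothetical parameter values
eta1, eta2, s1, s2, rho = 2.0, 3.0, 0.5, 0.4, 0.6

# Gaussian generator: Z1 and Z2 are independent standard normals
n = 200_000
Z1 = rng.standard_normal(n)
Z2 = rng.standard_normal(n)

T1 = eta1 * np.exp(s1 * Z1)
T2 = eta2 * np.exp(s2 * rho * Z1 + s2 * np.sqrt(1 - rho**2) * Z2)

# eta_i is the marginal median of T_i, since Z1 and
# rho*Z1 + sqrt(1-rho^2)*Z2 are symmetric about zero
assert abs(np.median(T1) - eta1) < 0.02
assert abs(np.median(T2) - eta2) < 0.02

# (log T1, log T2) has Pearson correlation rho
r = np.corrcoef(np.log(T1), np.log(T2))[0, 1]
assert abs(r - rho) < 0.01
```

For other generators, one would instead simulate $R$ and $D$ from their stated densities.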
\subsection{Quantile function}
Let $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ and $p\in(0,1)$.
By using the stochastic representation of Proposition \ref{Stochastic Representation}, we obtain
\begin{align*}
p=\mathbb{P}(T_1\leqslant Q_{T_1})
=
\mathbb{P}\big(\eta_1 \exp(\sigma_1 Z_1)\leqslant Q_{T_1}\big)
=
\mathbb{P}\left(Z_1\leqslant \log\biggl[\Bigl({Q_{T_1}\over \eta_1}\Bigl)^{1/\sigma_1}\biggl]\right)
\end{align*}
and
\begin{align*}
p=\mathbb{P}(T_2\leqslant Q_{T_2})
&=
\mathbb{P}\big(
\eta_2
\exp(\sigma_2 {\rho} Z_1+\sigma_2\sqrt{1-\rho^2} Z_2)
\leqslant Q_{T_2}\big)
\\[0,2cm]
&=
\mathbb{P}\left(
{\rho} Z_1+\sqrt{1-\rho^2} Z_2
\leqslant \log\biggl[\Bigl({Q_{T_2}\over \eta_2}\Bigl)^{1/\sigma_2}\biggl]\right).
\end{align*}
Hence, the $p$-quantile of $T_1$ and the $p$-quantile of $T_2$ are given by
\begin{align*}
\log\biggl[\Bigl({Q_{T_1}\over \eta_1}\Bigl)^{1/\sigma_1}\biggl]
=
Q_{Z_1}
\quad \Longleftrightarrow \quad
Q_{T_1}=\eta_1 \exp(\sigma_1 Q_{Z_1})
\end{align*}
and
\begin{align*}
\log\biggl[\Bigl({Q_{T_2}\over \eta_2}\Bigl)^{1/\sigma_2}\biggl]
=
Q_{{\rho} Z_1+\sqrt{1-\rho^2} Z_2}
\quad \Longleftrightarrow \quad
Q_{T_2}=
\eta_2 \exp(\sigma_2 Q_{{\rho} Z_1+\sqrt{1-\rho^2} Z_2}),
\end{align*}
respectively.
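For the Gaussian generator, both $Z_1$ and $\rho Z_1+\sqrt{1-\rho^2}Z_2$ are standard normal, so the marginal quantiles above reduce to the usual log-normal quantiles. A short Python check against `scipy.stats.lognorm`, whose parameterization uses shape $s=\sigma$ and scale $=\exp(\mu)=\eta$:

```python
import numpy as np
from scipy.stats import norm, lognorm

eta1, s1 = 2.0, 0.5   # hypothetical values
p = 0.9

# Gaussian generator: Q_{Z1} = Phi^{-1}(p), hence
# Q_{T1} = eta1 * exp(sigma1 * Phi^{-1}(p))
q_formula = eta1 * np.exp(s1 * norm.ppf(p))

# scipy's log-normal with shape s = sigma and scale = exp(mu) = eta
q_scipy = lognorm.ppf(p, s=s1, scale=eta1)
assert np.isclose(q_formula, q_scipy)
```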
\subsection{Conditional distribution}
\begin{proposition}\label{joint and marginal pdfs}
The joint PDF of $Z_1$ and $Z_2$, given in Proposition \ref{Stochastic Representation}, is given by
\begin{align*}
f_{Z_1,Z_2}(z_1,z_2)={g_c(z_1^2+z_2^2)\over \pi \int_{0}^{\infty}
g_c(u)
\, {\rm d}{u}}, \quad -\infty<z_1,z_2<\infty.
\end{align*}
Moreover, the marginal PDFs of $Z_1$ and $Z_2$, denoted by $f_{Z{_1}}$ and $f_{Z{_2}}$, respectively, are given by
\begin{align*}
f_{Z{_1}}(z_1)=
{{\displaystyle\int_{\vert z_1\vert}^{\infty}} {2 g_c(w^2)\over \sqrt{1-{z_1^2\over w^2}}}\, {\rm d}{w}
\over \pi \int_{0}^{\infty}
g_c(u)
\, {\rm d}{u}}
\quad \text{and} \quad
f_{Z{_2}}(z_2)=
{{\displaystyle
\int_{\vert z_2\vert }^{\infty}} {2 g_c(w^2)\over \sqrt{1-{z_2^2\over w^2}}}\, {\rm d}{w}
\over \pi \int_{0}^{\infty}
g_c(u)
\, {\rm d}{u}}, \quad -\infty<z_1,z_2<\infty.
\end{align*}
In particular, $f_{Z_i}(z_i\,\vert\, Z_j=z_j)$ ($i\neq j$) and $f_{Z_i}(z_i)$, $i,j=1,2$, are even functions.
\end{proposition}
\begin{proof}
For more details on the proof, see Propositions 3.1 and 3.2 of \cite{Saulo2022}.
\end{proof}
Let $X$ and $Y$ be two continuous random variables with joint PDF $f_{X,Y}$, and marginal PDFs $f_X$ and $f_Y$, respectively.
\begin{itemize}
\item
Let $B$ be a Borelian subset of $\mathbb{R}$.
The conditional CDF of $X$ given $\{Y\in B\}$, denoted by ${F}_X(x\,\vert\, Y\in B)$, is defined as (for every $x$)
\begin{align}\label{cdf-cond}
{F}_X(x\,\vert\, Y\in B)=\mathbb{P}(X\leqslant x\,\vert\, Y\in B)
=
\int_{-\infty}^x {f}_X(u\,\vert\, Y\in B)\, {\rm d} u, \quad \text{if} \ \mathbb{P}(Y\in B)>0,
\end{align}
where ${f}_X(u\,\vert\, Y\in B)$ is the corresponding conditional PDF given by
\begin{align*}
{f}_X(u\,\vert\, Y\in B)=\frac{\int_B {f}_{X,Y}(u,v)\, {\rm d} v}{\mathbb{P}(Y\in B)}.
\end{align*}
We write $X\,\vert\, Y\in B$ to indicate that the random variable $X$ follows the conditional CDF \eqref{cdf-cond} given $\{Y\in B\}$.
\item
Let $\varepsilon>0$, and suppose that $\mathbb{P}(y-\varepsilon<Y\leqslant y+\varepsilon)>0$.
Abusing mathematical notation, we define the conditional CDF of $X$ given $Y=y$, denoted by ${F}_X(x\,\vert\, Y=y)$, as (for every $x$)
\begin{align*}
{F}_X(x\vert Y=y)
=
\lim_{\varepsilon\to 0^+} \mathbb{P}(X\leqslant x\,\vert\, y-\varepsilon<Y\leqslant y+\varepsilon),
\end{align*}
provided that the limit exists.
If the limit exists, there is a nonnegative function ${f}_X(u\vert Y=y)$ (called the conditional PDF) so that (for every $x$)
\begin{align*}
{F}_X(x\,\vert\, Y=y)
=
\int_{-\infty}^x {f}_X(u\,\vert\, Y=y)\, {\rm d} u.
\end{align*}
At every point $(x,y)$ at which $f_{X,Y}$ is continuous, $f_Y$ is continuous, and $f_Y(y)>0$, the PDF ${f}_X(u\,\vert\, Y=y)$ exists and is expressed by (see Theorem 6, p. 109, of \cite{Rohatgi2015})
\begin{align*}
{f}_X(u\,\vert\, Y=y)={f_{X,Y}(u,y)\over f_Y(y)}.
\end{align*}
For simplicity, we write $X\,\vert\, Y=y$.
\end{itemize}
\begin{lemma}\label{conditional PDF}
If $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then
the PDF of $T_2 \vert T_1=t_1$ is written as
\begin{align}\label{cond-pdf}
f_{T_2}(t_2\,\vert\, T_1=t_1)
=
{1\over t_2\sigma_2 \sqrt{1-\rho^2}}\,
{
f_{Z_2}\biggl(
{1\over\sqrt{1-\rho^2}}\,\widetilde{t}_2
-
{\rho\over\sqrt{1-\rho^2}}\, \widetilde{t}_1 \, \bigg\vert\, Z_1=\widetilde{t}_1
\biggr)
},
\end{align}
where $\widetilde{t}_i$, $i=1,2$, are defined in \eqref{PDF}, and ${Z_1}$ and ${Z_2}$ are as in Proposition \ref{Stochastic Representation}.
\end{lemma}
\begin{proof}
If $T_1=t_1$, then $Z_1 = \log\big[(t_1/\eta_1)^{1/\sigma_1}\big]=\widetilde{t}_1$. Thus, the conditional distribution of $T_2$ given $T_1=t_1$ is the same as the distribution of
\begin{align*}
\eta_2
\exp\big(\sigma_2 {\rho} \widetilde{t}_1+\sigma_2\sqrt{1-\rho^2} Z_2\big)\, \big\vert\, T_1=t_1.
\end{align*}
Consequently,
\begin{align*}
F_{T_2}(t_2\, \vert\, T_1=t_1)
&=
\mathbb{P}\big(\eta_2
\exp(\sigma_2 {\rho} \widetilde{t}_1+\sigma_2\sqrt{1-\rho^2} Z_2)\leqslant t_2\, \big\vert\, T_1=t_1\big)
\\[0,2cm]
&=
\mathbb{P}\biggl(
Z_2\leqslant {1\over\sqrt{1-\rho^2}}\, \widetilde{t}_2-{{\rho}\over\sqrt{1-\rho^2}}\, \widetilde{t}_1\, \bigg\vert\, Z_1= \widetilde{t}_1\biggr).
\end{align*}
Then, formula \eqref{cond-pdf} for the conditional PDF of $T_2$ given $T_1=t_1$ follows.
\end{proof}
\begin{theorem}\label{theo-pdf-cond}
For a Borelian subset $B$ of $(0,\infty)$, we define the following Borelian set:
\begin{align}\label{B-child}
{B}_\rho
=
{1\over\sqrt{1-\rho^2}}\,
\log\biggl[\Bigl({B\over \eta_2}\Bigr)^{1/\sigma_2}\biggr]
-
{\rho\over\sqrt{1-\rho^2}}\, \widetilde{t_1},
\end{align}
where $\widetilde{t}_1$ is as in \eqref{PDF}.
If $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then the PDF of $T_1\,\vert\, T_2\in B$ is written as
\begin{align*}
f_{T_1}(t_1\vert T_2\in B)
=
{1\over t_1 \sigma_1}\,f_{Z_1}(\widetilde{t}_1)\,
{
\int_{{B}_\rho}
f_{Z_2}(
w \, \vert\, Z_1=\widetilde{t}_1 )\,
{\rm d}w
\over
\mathbb{P}(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0)
},
\end{align*}
with ${Z_1}$ and ${Z_2}$ as in Proposition \ref{Stochastic Representation}.
\end{theorem}
\begin{proof}
Let $B$ be a Borelian subset of $(0,\infty)$. Notice that
\begin{align*}
f_{T_1}(t_1\,\vert\, T_2\in B)
=
f_{T_1}(t_1)\, {\int_B f_{T_2}(t_2\,\vert\, T_1=t_1)\, {\rm d}t_2\over \mathbb{P}(T_2\in B)}.
\end{align*}
Since $f_{T_1}(t_1)=f_{Z_1}(\widetilde{t}_1)/(\sigma_1 t_1)$ and $\mathbb{P}(T_2\in B)=\mathbb{P}(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0)$, where ${B}_0$ is given in \eqref{B-child} with $\rho=0$, the term on the right-hand side of the above identity is
\begin{align*}
=
{1\over \sigma_1 t_1}\,f_{Z_1}(\widetilde{t}_1)\, {\int_B f_{T_2}(t_2\,\vert\, T_1=t_1)\, {\rm d}t_2\over \mathbb{P}(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0)}.
\end{align*}
By using the expression for $f_{T_2}(t_2\,\vert\, T_1=t_1)$ provided by Lemma \ref{conditional PDF}, the previous expression is
\begin{align*}
=
{1\over t_1 \sigma_1\sigma_2 \sqrt{1-\rho^2}}\,f_{Z_1}(\widetilde{t}_1)\,
{\int_B
{1\over t_2}\,
f_{Z_2}\Big(
{1\over\sqrt{1-\rho^2}}\,\widetilde{t}_2
-
{\rho\over\sqrt{1-\rho^2}}\, \widetilde{t}_1 \, \Big\vert\, Z_1=\widetilde{t}_1
\Big)\,
{\rm d}t_2\over \mathbb{P}(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0)},
\end{align*}
where $\widetilde{t}_i$, $i=1,2$, are as in \eqref{PDF}.
Finally, by applying the change of variable $w=(\widetilde{t}_2
-
{\rho}\, \widetilde{t}_1)/\sqrt{1-\rho^2}$, the above expression is
\begin{align*}
=
{1 \over t_1 \sigma_1}\,f_{Z_1}(\widetilde{t}_1)\,
{\int_{{B}_\rho}
f_{Z_2}(
w \, \vert\, Z_1=\widetilde{t}_1 )\,
{\rm d}w
\over \mathbb{P}(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0)},
\end{align*}
in which ${B}_\rho$ is as in \eqref{B-child}.
Thus, we have completed the proof.
\end{proof}
\begin{corollary}[Gaussian generator] \label{Gaussian generator}
Let $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ and $g_c(x)=\exp(-x/2)$ be the generator of the bivariate log-normal distribution. Then, for each Borelian subset $B$ of $(0,\infty)$, the PDF of $T_1\,\vert\, T_2\in B$ is given by (for $t_1>0$)
\begin{align*}
f_{T_1}(t_1\vert T_2\in B)
=
{1\over t_1 \sigma_1}\,
\phi\Bigl(
\log\Bigl[\Big({t_1\over \eta_1}\Big)^{1/\sigma_1}\Bigr]
\Bigr)\,
\dfrac{
\Phi\Bigl(
{1\over\sqrt{1-\rho^2}}
\Big\{
\log\Bigl[\big({B\over \eta_2}\big)^{1/\sigma_2}\Bigr]
-
\rho \log\Big[\big({t_1\over \eta_1}\big)^{1/\sigma_1}\Big]
\Big\}
\Bigr)
}{
\Phi\Big(\log\Big[\big(\frac{B}{ \eta_2}\big)^{1/\sigma_2}\Big]\Big)
},
\end{align*}
where we are adopting the notation $\Phi(C)=\int_C \phi(x){\rm d}x$, for $\phi(x)=g_c(x^2)/\sqrt{2\pi}$.
\end{corollary}
\begin{proof}
It is well known that the bivariate normal distribution admits a stochastic representation as in \eqref{rep-stoch-biv-gaussian}, where $Z_1\sim N(0,1)$ and $Z_2\sim N(0,1)$ are independent. Consequently, $Z_2\vert Z_1=z\sim N(0,1)$ and $\rho Z_1+\sqrt{1-\rho^2}Z_2\sim N(0,1)$.
Further, a simple algebraic manipulation shows that
\begin{align*}
\int_{{B}_\rho}
f_{Z_2}(
w \, \vert\, Z_1=\widetilde{t}_1 )\,
{\rm d}w
=
\Phi(B_\rho),
\end{align*}
where $B_\rho$ is the Borelian set defined in \eqref{B-child}.
Then, by applying Theorem \ref{theo-pdf-cond}, the proof follows.
\end{proof}
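For the conditioning event $B=(0,b]$, the conditional density of the corollary above takes an explicit form involving $\phi$ and $\Phi$. The Python sketch below, with hypothetical parameter values and assuming `scipy`, verifies numerically that this conditional density integrates to one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Hypothetical parameter values; conditioning event B = (0, b]
eta1, eta2, s1, s2, rho = 2.0, 3.0, 0.5, 0.4, 0.6
b = 2.5
c = np.log((b / eta2) ** (1 / s2))   # upper end of the transformed set

def cond_pdf(t1):
    # f_{T1}(t1 | T2 <= b) for the Gaussian generator
    w1 = np.log((t1 / eta1) ** (1 / s1))
    numerator = norm.cdf((c - rho * w1) / np.sqrt(1 - rho**2))
    return norm.pdf(w1) * numerator / (t1 * s1 * norm.cdf(c))

total, _ = quad(cond_pdf, 0, np.inf)
assert np.isclose(total, 1.0)
```

The substitution $z=\log[(t_1/\eta_1)^{1/\sigma_1}]$ shows analytically why the integral equals $\Phi(c)/\Phi(c)=1$.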
\begin{corollary}[Student-$t$ generator]
Let $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ and $g_c(x)=(1+(x/\nu))^{-(\nu+2)/2}$, $\nu>0$, be the generator of the bivariate log-Student-$t$ distribution with $\nu$ degrees of freedom. Then, for each Borelian subset $B$ of $(0,\infty)$, the PDF of $T_1\vert T_2\in B$ is given by (for $t_1>0$)
\begin{align*}
\resizebox{17.5cm}{!}{
$
f_{T_1}(t_1\vert T_2\in B)
=
{1 \over t_1 \sigma_1}\,
f_{\nu}\Bigl(
\log\Bigl[\big({t_1\over \eta_1}\big)^{1/\sigma_1}\Bigr]
\Bigr)\,
\dfrac{
F_{\nu+1}\Big(\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, {1\over\sqrt{1-\rho^2}}
\Big\{
\log\Bigl[\big({B\over \eta_2}\big)^{1/\sigma_2}\Bigr]
-
\rho \log\Big[\big({t_1\over \eta_1}\big)^{1/\sigma_1}\Big]
\Big\}
\Big)
}{
F_\nu\Big(\log\Big[\big(\frac{B}{ \eta_2}\big)^{1/\sigma_2}\Big]\Big)
},
$
}
\end{align*}
where $F_\nu(C)=\int_C f_\nu(x){\rm d}x$, for $f_\nu(x)=[\Gamma({(\nu+1)/ 2})/(\sqrt{\nu\pi} \, {\Gamma({\nu/ 2})})]g_c(x^2)$.
\end{corollary}
\begin{proof}
It is well known that the bivariate Student-$t$ distribution has a stochastic representation as in \eqref{rep-stoch-biv-gaussian}, where $Z_1=Z_1^* \sqrt{\nu/Q}\sim t_\nu$ and $Z_2= Z_2^* \sqrt{\nu/Q}\sim t_\nu$, $Q\sim\chi^2_\nu$ (chi-square with $\nu$ degrees of freedom) is independent of $Z_1^*$ and ${\rho} Z_1^* +\sqrt{1-\rho^2} Z_2^*$;
whereas $Z_1^*$ and $Z_2^*$ are independent and identically distributed standard normal random variables.
Since, $\rho Z_1^*+\sqrt{1-\rho^2} Z_2^*\sim N(0,1)$, we have
$\rho Z_1+\sqrt{1-\rho^2}Z_2=(\rho Z_1^*+\sqrt{1-\rho^2} Z_2^*)\sqrt{\nu/Q}\sim t_\nu$. Then
\begin{align*}
\mathbb{P}\big(\rho Z_1+\sqrt{1-\rho^2}Z_2\in {B}_0\big)=F_\nu({B}_0).
\end{align*}
On the other hand, if $\boldsymbol{X}=(X_1,X_2)\sim {\rm BES}(\boldsymbol{\theta}_*, g_c)$, by Remark 3.7 of \cite{Saulo2022},
\begin{align*}
\sqrt{{\nu+1\over (\nu+r^2)(1-\rho^2)}} \biggl({X_2-\mu_2\over \sigma_2}-\rho r\biggr)\, \Bigg\vert \, {X_1-\mu_1\over \sigma_1}=r\sim t_{\nu+1}.
\end{align*}
Equivalently,
\begin{align*}
\mathbb{P}(T_{\nu+1}\leqslant x)
&=
\mathbb{P}\left(\sqrt{{\nu+1\over (\nu+r^2)(1-\rho^2)}} \biggl({X_2-\mu_2\over \sigma_2}-\rho r\biggr)\leqslant x\, \Bigg\vert \, {X_1-\mu_1\over \sigma_1}=r\right)
\\[0,2cm]
&=
\mathbb{P}\left(Z_2 \leqslant \sqrt{{\nu+r^2\over \nu+1}}\, x\, \Bigg\vert \, Z_1=r\right), \quad T_{\nu+1}\sim t_{\nu+1}.
\end{align*}
By taking $x=\sqrt{(\nu+1)/(\nu+r^2)}\, w$ with $r=\widetilde{t}_1$, we arrive at
\begin{align*}
\mathbb{P}\Biggl(T_{\nu+1}\leqslant \sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, w\Biggr)
=
\mathbb{P}(Z_2 \leqslant w\, \vert \, Z_1=\widetilde{t}_1).
\end{align*}
So, differentiating the above identity with respect to $w$ we have
\begin{align*}
\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, f_{\nu+1}\Biggl(\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, w\Biggr)
=
f_{Z_2}(
w \, \vert\, Z_1=\widetilde{t}_1 ).
\end{align*}
Hence,
\begin{align*}
\int_{{B}_\rho}
f_{Z_2}(
w \, \vert\, Z_1=\widetilde{t}_1 )\,
{\rm d}w
=
\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}
\int_{{B}_\rho}
f_{\nu+1}\Biggl(\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, w\Biggr)\,
{\rm d}w
=
F_{\nu+1}\left(\sqrt{\nu+1\over \nu+\widetilde{t_1}^2}\, {B}_\rho \right).
\end{align*}
Finally, by applying Theorem \ref{theo-pdf-cond}, the proof follows.
\end{proof}
\subsection{The squared Mahalanobis Distance}\label{maha:sec}
The squared Mahalanobis distance between the random vector $\boldsymbol{T}=(T_1,T_2)$ and the vector $\log(\boldsymbol{\eta})=(\log(\eta_1),\log(\eta_2))$ of a bivariate log-symmetric distribution is defined as
\begin{align*}
d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))
=
{\widetilde{T_1}^2-2\rho\widetilde{T_1}\widetilde{T_2}+\widetilde{T_2}^2
\over
1-\rho^2},
\quad
\widetilde{T_i}
=
\log\biggl[\biggl({T_i\over \eta_i}\biggr)^{1/\sigma_i}\biggr], \ \eta_i=\exp(\mu_i), \ i=1,2.
\end{align*}
In what follows we derive formulas for the CDF and PDF of the random variable $d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))$.
\begin{proposition}\label{Mahalanobis Distance}
If $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then the CDF of $d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))$, denoted by $F_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}$, is expressed as
\begin{align}
F_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}(x)
&=
4
\int_{0}^{\sqrt{x}}
\biggl[
F_{Z_2}\Big(\sqrt{x-z_1^2}\,\Big\vert\, Z_1=z_1\Big)-{1\over 2}
\biggr]
f_{Z_1}(z_1)\, {\rm d}z_1 \, \boldsymbol{\cdot} \mathds{1}_{(0,\infty)}(x)
\label{eq-1}
\\[0,2cm]
&=
{4\over Z_{g_c}}\,
\int_{0}^{\sqrt{x}}
\left[\int_{0}^{\sqrt{x-z_1^2}} g_c(z_1^2+z_2^2) \, {\rm d}z_2\right] {\rm d}z_1\, \boldsymbol{\cdot} \mathds{1}_{(0,\infty)}(x),
\label{eq-2}
\end{align}
where $Z_{g_c}$ is as in Proposition \ref{partition function-simplicado}.
\end{proposition}
\begin{proof}
Since $\boldsymbol{T}=(T_1,T_2)$ admits the stochastic representation given in Proposition \ref{Stochastic Representation}, there are $Z_1$ and $Z_2$ so that $\widetilde{T_1}=Z_1$ and $\widetilde{T_2}={\rho} Z_1+\sqrt{1-\rho^2} Z_2$. Then, a simple algebraic manipulation shows that
\begin{align}\label{identity-dist-mahalanobis}
d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))
= Z_1^2+Z_2^2.
\end{align}
Hence, by using the law of total expectation we have (for $x>0$)
\begin{align}
F_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}(x)
=
\mathbb{E}[\mathbb{E}(\mathds{1}_{\{ Z_1^2+Z_2^2\leqslant x \}}\, \vert \, Z_1)]
&=
\mathbb{E}\Big[\mathbb{E}\Big(\mathds{1}_{\{\vert Z_2\vert \leqslant\sqrt{x-Z_1^2}\}}\, \Big\vert \, Z_1 \mathds{1}_{\{\vert Z_1\vert \leqslant\sqrt{x}\}}\Big)\Big]
\nonumber
\\[0,2cm]
&=
\int_{-\sqrt{x}}^{\sqrt{x}}
\left[\int_{-\sqrt{x-z_1^2}}^{\sqrt{x-z_1^2}} f_{Z_2}(z_2\,\vert\, Z_1=z_1)\, {\rm d}z_2\right] f_{Z_1}(z_1)\, {\rm d}z_1. \label{first-eq}
\end{align}
Since $f_{Z_2}(z_2\,\vert\, Z_1=z_1)$ and $f_{Z_1}(z_1)$ are even functions (see Proposition \ref{joint and marginal pdfs}), from \eqref{first-eq} the proof of the first equality \eqref{eq-1} follows.
The second equality \eqref{eq-2} follows by using in \eqref{first-eq} the joint PDF $f_{Z_1,Z_2}$ given in Proposition \ref{joint and marginal pdfs}.
\end{proof}
\begin{proposition}\label{Mahalanobis Distance-PDF}
If $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then the PDF of $d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))$, denoted by $f_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}$, is written as
\begin{align*}
f_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}(x)={\pi\over Z_{g_c}}\, g_c(x), \quad x>0,
\end{align*}
where $Z_{g_c}$ is as in Proposition \ref{partition function-simplicado}.
\end{proposition}
\begin{proof}
The proof is immediate: it follows by differentiating \eqref{eq-2} with respect to $x$ and then using the following well-known formula (Leibniz integral rule):
\begin{align*}
{{\rm d}\over {\rm d}x} \int_{a(x)}^{b(x)} h(x,y)\, {\rm d}y
=
h(x,b(x)) b'(x)-h(x,a(x)) a'(x)+ \int_{a(x)}^{b(x)} {\partial h(x,y)\over \partial x}\, {\rm d}y.
\end{align*}
\end{proof}
\begin{remark}
\begin{itemize}
\item {\it Gaussian generator.} By taking $g_c(x)=\exp(-x/2)$ and $Z_{g_c}=2\pi$ (see Table \ref{table:1}), and by applying Proposition \ref{Mahalanobis Distance-PDF}, we get
\begin{align*}
f_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}(x)
=
{1\over 2}\, \exp\biggl(-{x\over 2}\biggr)
=
{1\over 2^{k/2}\Gamma(k/2)}\, x^{(k/2)-1} \exp\biggl(-{x\over 2}\biggr),
\quad \text{with} \ k=2.
\end{align*}
But the formula on the right is the PDF of a random variable following the chi-squared distribution with $k$ degrees of freedom ($\chi^2_k$). Hence, $d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))\sim \chi^2_2$.
\item {\it Student-$t$ generator}. By taking $g_c(x)=(1+(x/\nu))^{-(\nu+2)/2}$ and $Z_{g_c}={{\Gamma({\nu/ 2})}\nu\pi/{\Gamma({(\nu+2)/ 2})}}$ (see Table \ref{table:1}), and by applying Proposition \ref{Mahalanobis Distance-PDF}, we have
\begin{align*}
f_{d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))}(x)
&=
{{\Gamma({(\nu+2)/ 2})}\over {\Gamma({\nu/ 2})}\nu}\, \biggl(1+{x\over\nu}\biggr)^{-(\nu+2)/2}
\\[0,2cm]
&=
{1\over 2}\, {\sqrt{[d_1(x/2)]^{d_1}d_2^{d_2}\over [d_1(x/2)+d_2]^{d_1+d_2}}\over (x/2){\rm B}(d_1/2, d_2/2)},
\quad \text{with} \ d_1=2\ \text{and} \ d_2=\nu.
\end{align*}
Here, ${\rm B}(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$, $x>0, y>0$, is the beta function.
Notice that the formula in the second identity above is the PDF of the random variable $2X$, where $X$ follows the $F$-distribution with $d_1$ and $d_2$ degrees of freedom ($F_{d_1,d_2}$). Hence, by abuse of notation, we write $d^2(\boldsymbol{T},\log(\boldsymbol{\eta})) \sim 2 F_{2,\nu}$.
\end{itemize}
\end{remark}
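The $\chi^2_2$ law of $d^2(\boldsymbol{T},\log(\boldsymbol{\eta}))$ under the Gaussian generator can be confirmed by simulation, using the identity $d^2=Z_1^2+Z_2^2$ from the proof of the proposition above. A Python sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)
eta1, eta2, s1, s2, rho = 2.0, 3.0, 0.5, 0.4, 0.6   # hypothetical values

n = 200_000
Z1 = rng.standard_normal(n)
Z2 = rng.standard_normal(n)
T1 = eta1 * np.exp(s1 * Z1)
T2 = eta2 * np.exp(s2 * rho * Z1 + s2 * np.sqrt(1 - rho**2) * Z2)

w1 = np.log(T1 / eta1) / s1
w2 = np.log(T2 / eta2) / s2
d2 = (w1**2 - 2 * rho * w1 * w2 + w2**2) / (1 - rho**2)

# d2 reduces to Z1^2 + Z2^2, chi-squared with 2 degrees of freedom
assert np.allclose(d2, Z1**2 + Z2**2)
assert abs(d2.mean() - 2.0) < 0.05   # E[chi^2_2] = 2
```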
\subsection{Independence}
\begin{proposition}
Let $\boldsymbol{T}\sim {\rm BLS}(\boldsymbol{\theta},g_c)$. If $\rho=0$ and the density generator $g_c$ in \eqref{PDF} satisfies
\begin{align}\label{kernel-dec}
g_c(x^2+y^2)
=g_{c_1}(x^2)
g_{c_2}(y^2),
\quad \forall (x,y)\in\mathbb{R}^2,
\end{align}
for some density generators $g_{c_1}$ and $g_{c_2}$, then $T_1$ and $T_2$ are statistically independent.
\end{proposition}
\begin{proof}
When $\rho=0$, by \eqref{kernel-dec} the joint density of $(T_1,T_2)$ satisfies
$$
f_{T_1,T_2}(t_1,t_2;\boldsymbol{\theta})
=
f_1(t_1;\mu_1,\sigma_1)
f_2(t_2;\mu_2,\sigma_2), \quad\forall(t_1,t_2)\in(0,\infty)\times (0,\infty),
$$
and consequently $Z_{g_c}=Z_{g_{c_1}}Z_{g_{c_2}}$,
where
\begin{align*}
f_i(t_i;\mu_i,\sigma_i)=
{1\over t_i\sigma_iZ_{g_{c_i}}}\,
g_{c_i}(\widetilde{t_i}^2), \ t_i>0,
\quad
\text{and}
\quad
Z_{g_{c_i}}
=
\int_{-\infty}^{\infty}
g_{c_i}({z_i}^2)\, {\rm d}z_i, \quad i=1,2,
\end{align*}
and $\widetilde{t_i}$ as in \eqref{PDF}.
A simple calculation shows that $f_1$ and $f_2$ are density functions (in fact, $f_1$ and $f_2$ are the densities associated with two univariate continuous and symmetric random variables; see \cite{Vanegas2016}). Then, from Proposition 2.5 of \cite{James2004} it follows that $T_1$ and $T_2$ are independent and, moreover, that $f_i=f_{T_i}$, for $i=1,2$.
\end{proof}
\begin{remark}
Notice that, in Table \ref{table:1}, the density generator of the Bivariate Log-normal is the unique one that satisfies the condition \eqref{kernel-dec}.
\end{remark}
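As a quick numerical illustration of the proposition (a sketch with arbitrary parameter values), one can sample the bivariate log-normal member with $\rho=0$ and verify that the empirical joint CDF factorises into the product of the empirical marginals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# rho = 0 with the Gaussian generator: the kernel factorises, so T1, T2 are independent.
t1 = np.exp(0.2 + 0.5 * rng.standard_normal(n))
t2 = np.exp(-0.4 + 0.7 * rng.standard_normal(n))

# Empirical joint CDF at an arbitrary point (a, b) versus the product of the marginals.
a, b = 1.0, 0.8
p_joint = np.mean((t1 <= a) & (t2 <= b))
p_prod = np.mean(t1 <= a) * np.mean(t2 <= b)
print(p_joint, p_prod)
```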
\subsection{Real moments}\label{moments}
\begin{proposition}\label{Real moments}
Let $\boldsymbol{X}=(X_1,X_2)\sim {\rm BES}(\boldsymbol{\theta}_*, g_c)$ and $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$.
If the moment-generating function (MGF) of $X_i$, denoted by $M_{X_i}(s_i)$, $i=1,2$, exists, then the real moments of $T_i$ are
\begin{align*}
\mathbb{E}(T_i^r)=\eta_i^r \vartheta(\sigma_i^2r^2), \quad \text{with} \ \eta_i=\exp(\mu_i), \ i=1,2, \ r\in\mathbb{R},
\end{align*}
for some scalar function $\vartheta$, which is called the characteristic generator (see p. 32 in \cite{Fang1990}).
For example, when $g_c(x)=\exp(-x/2)$ (Gaussian generator), $\vartheta(x)=\exp(x/2)$, and when $g_c(x)=(1+(x/\nu))^{-(\nu+2)/2}$, $\nu>0$ (Student-$t$ generator), $\vartheta$ does not exist.
\end{proposition}
\begin{proof}
We only show the case $i=1$, since the other case follows by analogous reasoning.
Indeed, since the random variable $X_1$ has a MGF $M_{X_1}(s_1)$, the domain of the characteristic function $\varphi_{X_1}(t)$ can be extended to the complex plane, and
\begin{align*}
M_{X_1}(s_1)= \varphi_{X_1}(-is_1).
\end{align*}
Since $X_1=\log(T_1)$, by using the above identity we get
\begin{align}\label{relation}
\mathbb{E}(T_1^r)
=\mathbb{E}[\exp(rX_1)] \nonumber
=M_{X_1}(r)
&=\exp(r\mu_1) M_{S_1}(\sigma_1r)
\\[0,2cm]
&=\exp(r\mu_1)\varphi_{S_1}(-i\sigma_1r)
=\exp(r\mu_1)\varphi_{S_1,S_2}(-i\sigma_1r,0),
\end{align}
with $(S_1,S_2)\sim {\rm BES}(\boldsymbol{\theta}_{*_0}, g_c)$, $\boldsymbol{\theta}_{*_0}=(0,0,1,1,\rho)$; and $\varphi_{S_1,S_2}(s_1,0)$
is the marginal characteristic function. On the other hand, the characteristic function of the BES distribution is given by (see Item 13.10, p. 595 in \cite{Bala2009})
\begin{align}\label{CF}
\varphi_{S_1,S_2}(s_1,s_2)=\vartheta(s_1^2+2\rho s_1s_2+s_2^2),
\end{align}
where $\vartheta$ is the characteristic generator specified in the statement of the proposition.
Finally, by using \eqref{CF} in the right-hand side of \eqref{relation}, the proof follows.
\end{proof}
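For the Gaussian generator the proposition gives $\mathbb{E}(T_i^r)=\eta_i^r\exp(\sigma_i^2r^2/2)$, which can be checked by a short Monte Carlo experiment; the sketch below uses parameter values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
mu, sigma, r = 0.3, 0.4, 1.7                      # illustrative values
t = np.exp(mu + sigma * rng.standard_normal(n))   # T = eta * exp(sigma * Z), eta = exp(mu)

empirical = np.mean(t**r)
# Gaussian generator: theta(x) = exp(x/2), hence E(T^r) = eta^r exp(sigma^2 r^2 / 2).
theoretical = np.exp(mu * r) * np.exp(sigma**2 * r**2 / 2)
print(empirical / theoretical)
```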
\subsection{Correlation function}\label{Correlation function}
By using the stochastic representation (Proposition \ref{Stochastic Representation}) of $\boldsymbol{T}=(T_1,T_2)$ and the law of total expectation, we have
\begin{align*}
\mathbb{E}(T_1T_2)
&=
\mathbb{E}(\mathbb{E}(T_1T_2\,\vert\, T_1))
=
\mathbb{E}(T_1\mathbb{E}(T_2\,\vert\, T_1))
\\[0,2cm]
&=
\eta_1
\eta_2
\mathbb{E}
\bigl[
\exp\big((\sigma_1+\sigma_2 {\rho}) Z_1\big)\,
\mathbb{E}
\big(
\exp(\sigma_2\sqrt{1-\rho^2} Z_2)
\,\big\vert\, Z_1
\big)
\bigr],
\end{align*}
where $Z_1$ and $Z_2$ are defined in Proposition \ref{Stochastic Representation}.
Hence, from formula of moments (Proposition \ref{Real moments}) we get the next formula for the correlation function of $T_1$ and $T_2$:
\begin{align*}
\rho(T_1,T_2)
=
\dfrac{
\mathbb{E}
\bigl[
\exp\big((\sigma_1+\sigma_2 {\rho}) Z_1\big)\,
\mathbb{E}
\big(
\exp(\sigma_2\sqrt{1-\rho^2} Z_2)
\,\big\vert\, Z_1
\big)
\bigr]-\vartheta(\sigma_1^2)\vartheta(\sigma_2^2)}
{\sqrt{\vartheta(4\sigma_1^2)-\vartheta^2(\sigma_1^2)}\, \sqrt{\vartheta(4\sigma_2^2)-\vartheta^2(\sigma_2^2)}},
\end{align*}
where $\vartheta$ is a scalar function stated in Proposition \ref{Real moments}.
It is a simple task to verify that,
when $g_c(x)=\exp(-x/2)$ (Gaussian generator), $\rho(T_1,T_2)=[\exp(\sigma_1\sigma_2\rho)-1]/\big[\sqrt{\exp(\sigma_1^2)-1} \, \sqrt{\exp(\sigma_2^2)-1}\,\big]$, and when $g_c(x)=(1+(x/\nu))^{-(\nu+2)/2}$, $\nu>0$ (Student-$t$ generator), $\rho(T_1,T_2)$ does not exist.
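The closed-form Gaussian-generator expression for $\rho(T_1,T_2)$ can likewise be validated by simulation. The following sketch (with illustrative parameter values) compares it with the sample correlation of a large bivariate log-normal sample.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
s1, s2, rho = 0.5, 0.7, 0.4                        # illustrative values
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
t1 = np.exp(1.0 + s1 * z1)
t2 = np.exp(0.5 + s2 * z2)

empirical = np.corrcoef(t1, t2)[0, 1]
# Closed form for the Gaussian generator (bivariate log-normal).
closed_form = (np.exp(s1 * s2 * rho) - 1) / (
    np.sqrt(np.exp(s1**2) - 1) * np.sqrt(np.exp(s2**2) - 1)
)
print(empirical, closed_form)
```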
\subsection{Other properties}
If $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then, in analogy to the properties stated by \cite{Vanegas2016}, the following properties follow immediately as a consequence of the definition of the BLS distribution:
\begin{itemize}
\item[(P1)] The CDF of $\boldsymbol{T}$ is written as
$F_{T_1,T_2}(t_1,t_2;\boldsymbol{\theta})
=F_{S_1,S_2}(\widetilde{t}_1,\widetilde{t}_2;\boldsymbol{\theta}_{*_0})$, with $(S_1,S_2)\sim {\rm BES}(\boldsymbol{\theta}_{*_0}, g_c)$ and $\boldsymbol{\theta}_{*_0}=(0,0,1,1,\rho)$.
\item[(P2)] The random vector $(T_1^*,T_2^*)=([T_1/\eta_1]^{1/\sigma_1},[T_2/\eta_2]^{1/\sigma_2})$ follows standard BLS distribution. In other words, $(T_1^*,T_2^*)\sim {\rm BLS}(\boldsymbol{\theta}_0, g_c)$ with $\boldsymbol{\theta}_0=(1,1,1,1,\rho)$.
\item[(P3)] $(c_1T_1,c_2T_2)\sim {\rm BLS}(c_1\eta_1,c_2\eta_2,\sigma_1,\sigma_2,g_c)$ for all constants $c_1,c_2>0$.
\item[(P4)] $(T_1^{c_1},T_2^{c_2})\sim {\rm BLS}(\eta_1^{c_1},\eta_2^{c_2},|c_1|\sigma_1,|c_2|\sigma_2,g_c)$ for all constants $c_1\neq 0$ and $c_2\neq 0$.
\end{itemize}
\begin{proposition}
If $\boldsymbol{T}=(T_1,T_2)\sim {\rm BLS}(\boldsymbol{\theta},g_c)$ then
the random vectors $({\eta_1/ T_1},{\eta_2/ T_2})$ and $({T_1/ \eta_1},{T_2/ \eta_2})$ are identically distributed.
Furthermore,
$({\eta_1/ T_1},{\eta_2/ T_2})\sim {\rm BLS}(\boldsymbol{\theta}_{\bullet}, g_c)$ and $({T_1/ \eta_1},{T_2/ \eta_2})\sim {\rm BLS}(\boldsymbol{\theta}_{\bullet}, g_c)$, with $\boldsymbol{\theta}_{\bullet}=(1,1,\sigma_1,\sigma_2,\rho)$.
\end{proposition}
\begin{proof}
By using the well-known identity for two random variables $X$ and $Y$ (see e.g. p. 59 of \cite{James2004}): for all $a_1<b_1$ and $a_2<b_2$,
\begin{align*}
\mathbb{P}(a_1 < X \leqslant b_1, a_2 < Y \leqslant b_2) = F_{X,Y}(b_1, b_2) - F_{X,Y}(b_1, a_2) - F_{X,Y}(a_1, b_2) + F_{X,Y}(a_1, a_2);
\end{align*}
with $a_1=\eta_1/w_1$, $b_1=\infty$, $a_2=\eta_2/w_2$ and $b_2=\infty$, for all $(w_1,w_2)\in (0,\infty)^2$, we get
\begin{align}\label{P1}
\mathbb{P}\Big({\eta_1\over T_1} \leqslant w_1, {\eta_2\over T_2} \leqslant w_2\Big)
=
1- F_{T_2}\Big({\eta_2\over w_2}\Big) -F_{T_1}\Big({\eta_1\over w_1}\Big)+F_{T_1,T_2}\Big({\eta_1\over w_1},{\eta_2\over w_2}\Big).
\end{align}
Since
\begin{align}\label{P2}
\mathbb{P}\biggl({T_1\over \eta_1} \leqslant w_1, {T_2\over \eta_2} \leqslant w_2\biggr)
=
F_{T_1,T_2}({\eta_1 w_1},{\eta_2 w_2}),
\end{align}
by taking partial derivatives with respect to $w_1$ and $w_2$ in \eqref{P1} and \eqref{P2}, we have that the joint PDF of ${\eta_1/ T_1}$ and ${\eta_2/ T_2}$, and the joint PDF of ${T_1/ \eta_1}$ and ${T_2/ \eta_2}$, are related as follows
\begin{align*}
f_{{\eta_1\over T_1}, {\eta_2\over T_2}}(w_1,w_2)
=
f_{{T_1\over \eta_1}, {T_2\over \eta_2}}(w_1,w_2)
=
{1\over w_1w_2\sigma_1\sigma_2\sqrt{1-\rho^2}Z_{g_c}}\,
g_c\Biggl(
{\widetilde{w}_1^2-2\rho\widetilde{w}_1\widetilde{w}_2+\widetilde{w}_2^2
\over
1-\rho^2}
\Biggr),
\quad
w_1,w_2>0,
\end{align*}
with $\widetilde{w_i}=\log({w_i}^{1/\sigma_i})$, $i=1,2$. This completes the proof of the proposition.
\end{proof}
\section{Maximum likelihood estimation} \label{Sec:4}
\noindent
Let $\{(T_{1i},T_{2i}):i=1,\ldots,n\}$ be a bivariate random sample of size $n$ from the ${\rm BLS}(\boldsymbol{\theta},g_c)$ distribution with PDF as given in \eqref{PDF}, and let $(t_{1i},t_{2i})$ be the corresponding observations of $(T_{1i},T_{2i})$. Then, the log-likelihood function for $\boldsymbol{\theta}=(\eta_1,\eta_2,\sigma_1,\sigma_2,\rho)$,
without the additive constant, is expressed as
\begin{align*}
\ell(\boldsymbol{\theta})
=
-n\log(\sigma_1)-n\log(\sigma_2)-{n\over 2}\,\log\big({1-\rho^2}\big)
+
\sum_{i=1}^{n}
\log
g_c\Biggl(
{\widetilde{t_{1i}}^2-2\rho\widetilde{t}_{1i}\widetilde{t}_{2i}+\widetilde{t_{2i}}^2
\over
1-\rho^2}
\Biggr), \quad
t_{1i},t_{2i}>0,
\\[0,3cm]
\widetilde{t}_{ki}=\log\biggl[\Bigl({t_{ki}\over \eta_k}\Bigr)^{1/\sigma_k}\biggr], \ \eta_k=\exp(\mu_k), \ k=1,2; \ i=1,\ldots,n.
\end{align*}
In the case that a maximum $\widehat{\boldsymbol{\theta}}=(\widehat{\eta_1},\widehat{\eta_2},\widehat{\sigma_1},\widehat{\sigma_2},\widehat{\rho})$ exists, it must satisfy the following likelihood equations:
\begin{align}\label{likelihood equation}
{\partial \ell({\boldsymbol{\theta}})\over\partial\eta_1}
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}
=0,
\quad
{\partial \ell(\boldsymbol{\theta})\over\partial\eta_2}
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}=0,
\quad
{\partial\ell(\boldsymbol{\theta})\over\partial\sigma_1}
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}=0,
\quad
{\partial\ell(\boldsymbol{\theta})\over\partial\sigma_2}
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}=0,
\quad
{\partial\ell(\boldsymbol{\theta})\over\partial\rho}
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}=0,
\end{align}
with
\begin{align}
&{\partial \ell(\boldsymbol{\theta})\over\partial\eta_1}
=
\frac{2}{\sigma_1\eta_1(1-\rho^2)}
\sum_{i=1}^{n}
\big(\rho \widetilde{t}_{2i} -\widetilde{t}_{1i}\big)
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
, \nonumber
\\[0,1cm]
&{\partial \ell(\boldsymbol{\theta})\over\partial\eta_2}
=
\frac{2}{\sigma_2\eta_2(1-\rho^2)}
\sum_{i=1}^{n}
\big(\rho \widetilde{t}_{1i} -\widetilde{t}_{2i}\big)
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
, \nonumber
\\[0,1cm]
&{\partial\ell(\boldsymbol{\theta})\over\partial\sigma_1}
=
-\frac{n}{\sigma_1}
+
{2\over \sigma_1(1-\rho^2)}
\sum_{i=1}^{n}
\widetilde{t}_{1i}
\big(\rho\widetilde{t}_{2i}-\widetilde{t}_{1i}\big)
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
, \nonumber
\\[0,1cm]
&{\partial\ell(\boldsymbol{\theta})\over\partial\sigma_2}
=
-\frac{n}{\sigma_2}
+
{2\over \sigma_2(1-\rho^2)}
\sum_{i=1}^{n}
\widetilde{t}_{2i}
\big(\rho\widetilde{t}_{1i}-\widetilde{t}_{2i}\big)
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
, \nonumber
\\[0,1cm]
&{\partial \ell(\boldsymbol{\theta})\over\partial\rho}
=
{n\rho\over 1-\rho^2}
-
{2\over (1-\rho^2)^2}
\sum_{i=1}^{n}
\big(\rho \widetilde{t}_{1i}-\widetilde{t}_{2i}\big)
\big(\rho\widetilde{t}_{2i}-\widetilde{t}_{1i}\big)
G(\widetilde{t}_{1i},\widetilde{t}_{2i}), \label{rho-mle}
\end{align}
where we are denoting
\begin{align*}
&G(\widetilde{t}_{1i},\widetilde{t}_{2i})
=
g_c'\Biggl(
{\widetilde{t_{1i}}^2-2\rho\widetilde{t}_{1i}\widetilde{t}_{2i}+\widetilde{t_{2i}}^2
\over
1-\rho^2}
\Biggr)
{\Bigg /}
g_c\Biggl(
{\widetilde{t_{1i}}^2-2\rho\widetilde{t}_{1i}\widetilde{t}_{2i}+\widetilde{t_{2i}}^2
\over
1-\rho^2}
\Biggr),
\quad i=1,\ldots,n.
\end{align*}
A simple observation shows that the likelihood equations \eqref{likelihood equation} can be written as follows
\begin{align*}
&\sum_{i=1}^{n}
\widetilde{t}_{1i}\,
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}
=0,
\\[0,1cm]
&
\sum_{i=1}^{n}
\big(\widetilde{t}_{1i}^2-\widetilde{t}_{2i}^2\big)\,
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}
=0,
\\[0,2cm]
&
\sum_{i=1}^{n}
\widetilde{t_{2i}}
\left[2\rho
\widetilde{t_{2i}}
-
(1+\rho^2) \widetilde{t}_{1i}\right]
G(\widetilde{t}_{1i},\widetilde{t}_{2i})
\bigg\vert_{{\boldsymbol{\theta}}=\widehat{\boldsymbol{\theta}}}
=-{n\widehat{\rho}(1-\widehat{\rho}^2)\over 2}\,.
\end{align*}
Any nontrivial root $\widehat{\boldsymbol{\theta}}$ of the above likelihood equations is known as
an ML estimator in the loose sense. When the parameter value provides the absolute maximum of the log-likelihood function, it is called an ML estimator in the strict sense.
In the following proposition we study the existence of the ML estimator $\widehat{\rho}$ when the other parameters are known.
\begin{proposition}\label{prop-existence-MLE}
Let $g_c$ be a density generator satisfying the following condition:
\begin{align}\label{condition-g}
g'_c(x)=r(x) g_c(x),\quad -\infty<x<\infty,
\end{align}
for some real-valued function $r(x)$ so that $\lim_{\rho\to \pm 1} r(x_{\rho,i})=c\in(-\infty,0)$, where $x_{\rho,i}= (\widetilde{t_{1i}}^2-2\rho\widetilde{t}_{1i}\widetilde{t}_{2i}+\widetilde{t_{2i}}^2)/(1-\rho^2)$, $i=1,\ldots,n$.
If the parameters $\eta_1,\eta_2,\sigma_1$ and $\sigma_2$ are known, then the equation \eqref{rho-mle} has at least one root on the interval $(-1, 1)$.
\end{proposition}
\begin{proof}
Since $g'_c(x_{\rho,i})=r(x_{\rho,i}) g_c(x_{\rho,i})$, we have
$G(\widetilde{t}_{1i},\widetilde{t}_{2i})=r(x_{\rho,i})$. Then, by using the condition $\lim_{\rho\to \pm 1} r(x_{\rho,i})=c<0$, from \eqref{rho-mle} we can easily see that
\begin{align*}
\lim_{\rho\to 1^-}
{\partial \ell(\boldsymbol{\theta})\over\partial\rho}
=-\infty
\quad \text{and} \quad
\lim_{\rho\to -1^+}
{\partial \ell(\boldsymbol{\theta})\over\partial\rho}
=+\infty.
\end{align*}
So, by the intermediate value theorem, there
exists at least one root in the interval $(-1, 1)$.
\end{proof}
\begin{remark}
Notice that, in Table \ref{table:1}, the density generators
of the Bivariate Log-normal (or Bivariate Log-power-exponential with $\xi= 0$) and the Bivariate Log-Kotz type (with $\delta=1$)
satisfy the hypotheses of Proposition \ref{prop-existence-MLE} with $r(x_{\rho,i})=-1/2$ and $r(x_{\rho,i})=(-\lambda x_{\rho,i}+\xi-1)/x_{\rho,i} {\longrightarrow}-\lambda$ as $\rho\to\pm 1$, $\forall i=1,\ldots,n$, respectively. Then, Proposition \ref{prop-existence-MLE} can be applied to guarantee the existence of an ML estimator $\widehat{\rho}$ of $\rho$ in the loose sense.
On the other hand, the density generators of the Bivariate Log-Kotz type (with $\delta<1$),
the Bivariate Log-Student-$t$,
the Bivariate Log-Pearson Type VII and
the Bivariate Log-power-exponential (with $\xi\neq 0$)
satisfy the condition \eqref{condition-g} with
$r(x)=(-\lambda\delta x^\delta+\xi-1)/x$,
$r(x)=-(\nu+2)/[2(\nu+x)]$,
$r(x)=-\xi/(\theta+x)$
and
$r(x)=-x^{-\xi/(\xi+1)}/[2(\xi+1)]$,
respectively, but in all these cases $ r(x_{\rho,i})\longrightarrow 0$ as $\rho\to\pm 1$, $\forall i=1,\ldots,n$.
\end{remark}
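For the Gaussian generator, where $G(\cdot,\cdot)=r(x)=-1/2$, the score equation in $\rho$ given by \eqref{rho-mle} (with the remaining parameters known) can be solved with a bracketing root-finder, in line with Proposition \ref{prop-existence-MLE}: the score tends to $-\infty$ as $\rho\to 1^-$ and $+\infty$ as $\rho\to -1^+$. A sketch on simulated, standardised log-data (using SciPy's \texttt{brentq}; parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
n = 1_000
rho_true = 0.6
z1 = rng.standard_normal(n)
w2 = rho_true * z1 + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)
w1 = z1                                   # standardised log-data (mu, sigma known)

def score_rho(r):
    # d ell / d rho for the Gaussian generator, where G = r(x) = -1/2.
    q = (r * w1 - w2) * (r * w2 - w1)
    return n * r / (1 - r**2) + np.sum(q) / (1 - r**2) ** 2

# Opposite signs near the endpoints bracket a root inside (-1, 1).
rho_hat = brentq(score_rho, -0.999, 0.999)
print(rho_hat)
```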
For the BLS model no closed-form solution to the maximization problem is available, and the ML estimates can only be found via numerical optimization. Under mild regularity conditions \citep{Cox1974,Davison2008}, the asymptotic distribution of the ML estimator $\widehat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$ is determined by the convergence in law: $(\widehat{\boldsymbol{\theta}}-\boldsymbol{\theta})\stackrel{\mathscr D}{\longrightarrow} N(\boldsymbol{0},I^{-1}(\boldsymbol{\theta}))$,
where
$\boldsymbol{0}$ is the zero mean vector and $I^{-1}(\boldsymbol{\theta})$ is the inverse expected
Fisher information matrix.
The main use of the last convergence is to construct confidence regions and to perform
hypothesis testing for $\boldsymbol{\theta}$ \citep{Davison2008}.
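To make the numerical-optimization step concrete, here is a minimal sketch for the Gaussian-generator (bivariate log-normal) case, using an unconstrained reparametrisation $(\mu_1,\mu_2,\log\sigma_1,\log\sigma_2,\operatorname{arctanh}\rho)$ and SciPy's BFGS; all data and starting values are illustrative, and only the $\log g_c$ term would change for another generator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2_000
mu1, mu2, s1, s2, rho = 0.0, 0.0, 0.5, 0.5, 0.5   # true values (illustrative)
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
lt1, lt2 = mu1 + s1 * z1, mu2 + s2 * z2           # observed log T_1, log T_2

def neg_loglik(p):
    # Unconstrained parametrisation: (mu1, mu2, log s1, log s2, atanh rho).
    m1, m2, sg1, sg2, r = p[0], p[1], np.exp(p[2]), np.exp(p[3]), np.tanh(p[4])
    w1, w2 = (lt1 - m1) / sg1, (lt2 - m2) / sg2
    x = (w1**2 - 2 * r * w1 * w2 + w2**2) / (1 - r**2)
    # Gaussian generator g_c(x) = exp(-x/2); additive constants are dropped.
    ll = -n * np.log(sg1) - n * np.log(sg2) - 0.5 * n * np.log(1 - r**2) - 0.5 * np.sum(x)
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(5), method="BFGS")
mu1_hat, mu2_hat = fit.x[0], fit.x[1]
s1_hat, s2_hat = np.exp(fit.x[2]), np.exp(fit.x[3])
rho_hat = np.tanh(fit.x[4])
print(mu1_hat, s1_hat, rho_hat)
```

Standard errors can then be approximated from the inverse of the numerically evaluated Hessian at the optimum.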
\section{Monte Carlo simulation}\label{Sec:5}
In this section, we carry out a Monte Carlo (MC) simulation study to evaluate the performance of the maximum likelihood estimators proposed for the BLS models. We consider different sample sizes and parameter settings, and the following distributions: log-slash, log-power-exponential and log-normal.
The simulation scenario considers the following setting: 1,000 MC replications, sample size $n \in \{25,50,100,150\}$, vector of true parameters $(\eta_1,\eta_2,\sigma_1,\sigma_2)= (1,1,0.5,0.5)$, $\rho \in \{0,0.25,0.5,0.75,0.95\}$, $\nu = 4$ (log-slash), and $\xi = 0.3$ (log-power-exponential). The extra parameters of the chosen distributions are assumed to be fixed; see \cite{Sauloetal2022}.
The performance and recovery of the ML estimators were evaluated through the empirical bias and the mean square error (MSE), which are calculated from the MC replicates, as shown below,
\begin{align}
\text{Bias}(\widehat{\theta}) = \frac{1}{N}\sum_{i=1}^N\widehat{\theta}^{(i)} - \theta,
\quad
\quad
\text{MSE}(\widehat{\theta}) = \frac{1}{N}\sum_{i=1}^N(\widehat{\theta}^{(i)} - \theta)^2,
\end{align}
where $\theta$ and $\widehat{\theta}^{(i)}$ are the true value of the parameter and its respective $i$th estimate, and $N$ is the number of MC replications. The steps for the MC simulation study are described in Algorithm 1.
\begin{table}[H]
\resizebox{\linewidth}{!}{
\begin{tabular}{l}
\hline
\textbf{Algorithm 1.} Simulation \\ \hline
1. Choose the BLS distribution based on Table \ref{table:1} and define the value of the parameters of the chosen distribution. \\
2. Generate 1,000 samples of size $n$ based on the chosen model. \\
3. Estimate the model parameters using the ML method for each sample. \\
4. Compute the empirical bias and MSE. \\ \hline
\end{tabular}}
\end{table}
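A compact version of Algorithm 1 for the log-normal member might look as follows. The sketch exploits the fact that, on the log scale, the ML estimates for this member reduce to sample means, standard deviations and the sample correlation (a known property of the bivariate normal likelihood); the replication count and sample size are a scaled-down illustration of the scenario above.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 500, 100                     # MC replications, sample size (scaled down)
s1 = s2 = 0.5
rho = 0.5
rho_hat = np.empty(N)

for i in range(N):
    # Step 2: generate a sample of size n from the bivariate log-normal.
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    x1, x2 = s1 * z1, s2 * z2       # log-scale data (eta1 = eta2 = 1)
    # Step 3: ML estimation; for the log-normal member rho_hat is the
    # sample correlation of (log T1, log T2).
    rho_hat[i] = np.corrcoef(x1, x2)[0, 1]

# Step 4: empirical bias and MSE over the N replications.
bias = rho_hat.mean() - rho
mse = np.mean((rho_hat - rho) ** 2)
print(bias, mse)
```

For the other generators, step 3 would instead call the numerical optimizer of Section \ref{Sec:4}.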
The simulation results are shown in Tables \ref{table:2}, \ref{table:3} and \ref{table:4}. It is possible to observe in the simulations that the results produced for the chosen distributions were as expected. As the sample size increases, the bias and MSE tend to decrease. In general, the results do not seem to depend on the parameter $\rho$.
\begin{table}[htpb!]\label{Log-Slash}
\caption{Monte Carlo simulation results for the bivariate log-slash distribution with $\nu = 4$.}
\resizebox{\linewidth}{!}{
\begin{tabular}{llcccccccccccccc}
\hline
\multirow{2}{*}{$n$} & \multirow{2}{*}{$\rho$} & \multicolumn{10}{l}{MLE} \\ \cline{3-12}
& & \multicolumn{2}{c}{$\widehat{\eta}_1$} & \multicolumn{2}{c}{$\widehat{\eta}_2$} & \multicolumn{2}{c}{$\widehat{\sigma}_1$} & \multicolumn{2}{c}{$\widehat{\sigma}_2$} & \multicolumn{2}{c}{$\widehat{\rho}$} \\
\noalign{\hrule height 1.7pt}
& & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE \\
25 & 0.0 & 0.0093 & 0.0138 & 0.0085 & 0.0145 & -0.1040 & 0.0154 & -0.1004 & 0.0143 & -0.0053 & 0.0426 \\
& 0.25 & 0.0078 & 0.0137 & 0.0096 & 0.0150 & -0.1053 & 0.0148 & -0.0988 & 0.0140 & -0.0079 & 0.0412 \\
& 0.50 & 0.0050 & 0.0126 & 0.0060 & 0.0133 & -0.1008 & 0.0142 & -0.1046 & 0.0149 & -0.0145 & 0.0276 \\
& 0.95 & 0.0072 & 0.0150 & 0.0064 & 0.0148 & -0.1055 & 0.0151 & -0.1048 & 0.0150 & -0.0046 & 0.0007 \\ \hline
50 & 0.0 & 0.0056 & 0.0067 & 0.0030 & 0.0071 & -0.0967 & 0.0113 & -0.0947 & 0.0110 & -0.0065 & 0.0230 \\
& 0.25 & 0.0053 & 0.0069 & 0.0017 & 0.0072 & -0.0963 & 0.0114 & -0.0968 & 0.0114 & -0.0071 & 0.0230 \\
& 0.50 & 0.0033 & 0.0065 & 0.0041 & 0.0073 & -0.0959 & 0.0112 & -0.0951 & 0.0111 & -0.0013 & 0.0130 \\
& 0.95 & 0.0060 & 0.0071 & 0.0066 & 0.0070 & -0.0944 & 0.0109 & -0.0938 & 0.0108 & -0.0003 & 0.0002 \\ \hline
100 & 0.0 & 0.0006 & 0.0037 & 0.0023 & 0.0034 & -0.0955 & 0.0101 & -0.0944 & 0.0098 & 0.0015 & 0.0122 \\
& 0.25 & -0.0016 & 0.0033 & -0.0007 & 0.0035 & -0.0929 & 0.0096 & -0.0943 & 0.0098 & -0.0056 & 0.0095 \\
& 0.50 & 0.0003 & 0.0035 & 0.0023 & 0.0034 & -0.0949 & 0.0100 & -0.0962 & 0.0102 & -0.0047 & 0.0066 \\
& 0.95 & 0.0009 & 0.0036 & 0.0015 & 0.0035 & -0.0955 & 0.0101 & -0.0960 & 0.0102 & -0.0012 & 0.0001 \\ \hline
150 & 0.0 & 0.0034 & 0.0022 & 0.0033 & 0.0024 & -0.0953 & 0.0098 & -0.0941 & 0.0095 & 0.0050 & 0.0075 \\
& 0.25 & 0.0000 & 0.0023 & -0.0004 & 0.0022 & -0.0933 & 0.0093 & -0.0934 & 0.0094 & -0.0045 & 0.0066 \\
& 0.50 & 0.0027 & 0.0022 & 0.0038 & 0.0023 & -0.0942 & 0.0095 & -0.0932 & 0.0094 & -0.0015 & 0.0039 \\
& 0.95 & 0.0007 & 0.0023 & 0.0011 & 0.0023 & -0.0936 & 0.0094 & -0.0937 & 0.0094 & -0.0006 & 0.0001 \\ \hline
\end{tabular}
}
\label{table:2}
\end{table}
\begin{table}[htpb]
\caption{Monte Carlo simulation results for the bivariate log-power-exponential distribution with $\xi = 0.3$.}
\resizebox{\linewidth}{!}{
\begin{tabular}{llcccccccccc}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{$n$}} & \multirow{2}{*}{$\rho$} & MLE & & & & & & & & & \\ \cline{3-12}
\multicolumn{1}{c}{} & & \multicolumn{2}{c}{$\widehat{\eta}_1$} & \multicolumn{2}{c}{$\widehat{\eta}_2$} & \multicolumn{2}{c}{$\widehat{\sigma}_1$} & \multicolumn{2}{c}{$\widehat{\sigma}_2$} & \multicolumn{2}{c}{$\widehat{\rho}$} \\ \hline
& & \multicolumn{1}{c}{Bias} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{Bias} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{Bias} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{Bias} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{Bias} & \multicolumn{1}{c}{MSE} \\
\multirow{4}{*}{25} & 0.00 & 0.0076 & 0.0174 & 0.0133 & 0.0173 & -0.0466 & 0.0138 & -0.0488 & 0.0139 & \multicolumn{1}{c}{-0.0038} & 0.0411 \\
& 0.25 & 0.0048 & 0.0155 & 0.0076 & 0.0169 & -0.0434 & 0.0079 & -0.0432 & 0.0082 & \multicolumn{1}{c}{-0.0098} & 0.0399 \\
& 0.50 & 0.0066 & 0.0170 & 0.0099 & 0.0192 & -0.0436 & 0.0097 & -0.0413 & 0.0101 & \multicolumn{1}{c}{-0.0047} & 0.0268 \\
& 0.95 & 0.0108 & 0.0170 & 0.0113 & 0.0171 & -0.0423 & 0.0095 & -0.0431 & 0.0097 & -0.0025 & 0.0006 \\ \hline
\multirow{4}{*}{50} & 0.00 & 0.0026 & 0.0083 & 0.0060 & 0.0084 & -0.0735 & 0.0415 & -0.0712 & 0.0418 & \multicolumn{1}{c}{0.0016} & 0.0192 \\
& 0.25 & 0.0062 & 0.0090 & 0.0017 & 0.0081 & -0.0572 & 0.0243 & -0.0597 & 0.0258 & 0.0003 & 0.0181 \\
& 0.50 & 0.0055 & 0.0085 & 0.0027 & 0.0076 & -0.0452 & 0.0128 & -0.0460 & 0.0125 & -0.0026 & 0.0134 \\
& 0.95 & 0.0062 & 0.0086 & 0.0068 & 0.0088 & -0.0364 & 0.0047 & -0.0358 & 0.0047 & -0.0010 & 0.0003 \\ \hline
\multirow{4}{*}{100} & 0.00 & 0.0034 & 0.0044 & 0.0022 & 0.0043 & -0.0475 & 0.0173 & -0.0453 & 0.0175 & 0.0011 & 0.0107 \\
& 0.25 & 0.0023 & 0.0041 & 0.0025 & 0.0042 & -0.0331 & 0.0044 & -0.0319 & 0.0044 & 0.0011 & 0.0088 \\
& 0.50 & 0.0011 & 0.0045 & 0.0000 & 0.0041 & -0.0313 & 0.0035 & -0.0345 & 0.0035 & -0.0058 & 0.0063 \\
& 0.95 & 0.0034 & 0.0043 & 0.0027 & 0.0043 & -0.0344 & 0.0041 & -0.0344 & 0.0040 & -0.0009 & 0.0001 \\ \hline
\multirow{4}{*}{150} & 0.00 & -0.0002 & 0.0029 & 0.0038 & 0.0029 & -0.0319 & 0.0020 & -0.0318 & 0.0020 & -0.0008 & 0.0066 \\
& 0.25 & 0.0001 & 0.0028 & 0.0036 & 0.0026 & -0.0300 & 0.0019 & -0.0303 & 0.0019 & -0.0018 & 0.0067 \\
& 0.50 & 0.0050 & 0.0026 & 0.0027 & 0.0027 & -0.0331 & 0.0036 & -0.0347 & 0.0037 & -0.0035 & 0.0041 \\
& 0.95 & 0.0014 & 0.0029 & 0.0014 & 0.0029 & -0.0306 & 0.0019 & -0.0299 & 0.0018 & -0.0002 & 0.0001 \\ \hline
\end{tabular}
}
\label{table:3}
\end{table}
\begin{table}[htpb!]
\caption{Monte Carlo simulation results for the bivariate log-normal distribution.}
\resizebox{\linewidth}{!}{
\begin{tabular}{llcccccccccc}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{$n$}} & \multirow{2}{*}{$\rho$} & \multicolumn{1}{l}{MLE} & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{3-12}
\multicolumn{1}{c}{} & & \multicolumn{2}{c}{$\widehat{\eta}_1$} & \multicolumn{2}{c}{$\widehat{\eta}_2$} & \multicolumn{2}{c}{$\widehat{\sigma}_1$} & \multicolumn{2}{c}{$\widehat{\sigma}_2$} & \multicolumn{2}{c}{$\widehat{\rho}$} \\ \hline
& & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE \\
\multirow{4}{*}{25} & 0.00 & -0.0007 & 0.0100 & 0.0034 & 0.0100 & -0.0312 & 0.0182 & -0.0265 & 0.0178 & -0.0044 & 0.0411 \\
& 0.25 & 0.0095 & 0.0107 & 0.0085 & 0.0101 & -0.0336 & 0.0180 & -0.0320 & 0.0220 & -0.0151 & 0.0367 \\
& 0.50 & 0.0120 & 0.0109 & 0.0159 & 0.0112 & -0.0313 & 0.0205 & -0.0282 & 0.0198 & -0.0106 & 0.0252 \\
& 0.95 & -0.0013 & 0.0101 & -0.0011 & 0.0100 & -0.0289 & 0.0197 & -0.0296 & 0.0199 & -0.0026 & 0.0005 \\ \hline
\multirow{4}{*}{50} & 0.00 & -0.0012 & 0.0051 & 0.0055 & 0.0050 & -0.0394 & 0.0373 & -0.0433 & 0.0387 & 0.0006 & 0.0187 \\
& 0.25 & 0.0014 & 0.0053 & 0.0023 & 0.0050 & -0.0445 & 0.0391 & -0.0446 & 0.0387 & -0.0014 & 0.0174 \\
& 0.50 & 0.0023 & 0.0051 & -0.0027 & 0.0052 & -0.0370 & 0.0329 & -0.0363 & 0.0338 & 0.0044 & 0.0114 \\
& 0.95 & 0.0020 & 0.0051 & 0.0026 & 0.0051 & -0.0151 & 0.0127 & -0.0148 & 0.0125 & 0.0000 & 0.0002 \\ \hline
\multirow{4}{*}{100} & 0.00 & 0.0021 & 0.0025 & 0.0020 & 0.0024 & -0.0043 & 0.0022 & -0.0067 & 0.0021 & 0.0079 & 0.0096 \\
& 0.25 & -0.0025 & 0.0025 & -0.0031 & 0.0025 & -0.0063 & 0.0042 & -0.0073 & 0.0039 & 0.0008 & 0.0091 \\
& 0.50 & -0.0019 & 0.0025 & -0.0007 & 0.0023 & -0.0035 & 0.0013 & -0.0052 & 0.0013 & 0.0001 & 0.0057 \\
& 0.95 & 0.0025 & 0.0024 & 0.0029 & 0.0024 & -0.0061 & 0.0038 & -0.0063 & 0.0039 & -0.0001 & 0.0001 \\ \hline
\multirow{4}{*}{150} & 0.00 & 0.0000 & 0.0018 & -0.0018 & 0.0016 & -0.0101 & 0.0080 & -0.0096 & 0.0081 & -0.0024 & 0.0064 \\
& 0.25 & 0.0003 & 0.0017 & 0.0001 & 0.0017 & -0.0041 & 0.0027 & -0.0035 & 0.0027 & 0.0016 & 0.0058 \\
& 0.50 & 0.0000 & 0.0016 & -0.0001 & 0.0017 & -0.0088 & 0.0066 & -0.0082 & 0.0064 & -0.0021 & 0.0040 \\
& 0.95 & 0.0013 & 0.0018 & 0.0016 & 0.0018 & -0.0115 & 0.0100 & -0.0118 & 0.0100 & -0.0006 & 0.0001 \\ \hline
\end{tabular}
}
\label{table:4}
\end{table}
\newpage
\section{Application to real data}\label{Sec:6}
In this section, we illustrate the proposed methodology by applying the bivariate log-symmetric models to a real data set. The data are based on the article by Marchant et al. (2015), in which the authors proposed a multivariate Birnbaum-Saunders regression model to describe fatigue data. The authors describe fatigue as the process of material failure caused by cyclic stress. Thus, fatigue is composed of crack initiation and propagation, until the material fractures. The calculation of fatigue life is important for determining the reliability of components or structures. Here, we consider the variables \textit{Von Mises stress} $(T_1$, \text{in} $N/mm^2)$ and \textit{die lifetime} ($T_2$, in number of cycles). According to Marchant et al. (2015), die fracture is the fatigue of metal caused by cyclic stress in the course of the service life cycle of dies (die lifetime).
Table \ref{table:5} provides descriptive statistics for the variables \textit{Von Mises stress} ($T_1$) and \textit{die lifetime} ($T_2$), including the minimum, median, mean, maximum, standard deviation (SD), coefficient of variation (CV), coefficient of skewness (CS), and coefficient of kurtosis (CK). For the variable \textit{Von Mises stress}, the mean and median are respectively $1.247$ and $1.130$, i.e., the mean is larger than the median, which indicates a positively skewed feature in the data distribution. The CV is $56.172\%$, which means a moderate level of dispersion around the mean. Furthermore, the CS value confirms the skewed nature. The variable \textit{die lifetime} has mean equal to $23.761$ and median equal to $19.000$. These values also indicate the positively skewed feature in the distribution of the data. Moreover, the CV value is $71.967\%$, which indicates a moderate level of dispersion around the mean. The CS confirms the skewed nature and the CK value indicates the high kurtosis feature in the data distribution.
\begin{table}[H]
\caption{Summary statistics for the indicated data set.}
\centering
\begin{tabular}{lccccccccc}
\hline
Variables & $n$ & Minimum & Median & Mean & Maximum & SD & CV & CS & CK \\ \hline
$T_1$ & 15 & 0.243 & 1.130 & 1.247 & 2.430 & 0.700 & 56.172 & 0.209 & -1.466 \\
$T_2$ & 15 & 6.420 & 19.900 & 23.761 & 74.800 & 17.100 & 71.967 & 1.631 & 2.495 \\ \hline
\end{tabular}
\label{table:5}
\end{table}
Table \ref{table:6} presents the ML estimates and the standard errors (in parentheses) for the bivariate log-symmetric model parameters. This table also reports the log-likelihood value, and the values of the Akaike (AIC) and Bayesian (BIC) information criteria. The extra parameters were estimated using the profile log-likelihood. From Table \ref{table:6}, we observe that the log-Laplace model provides a better fit than the other models based on the values of the log-likelihood, AIC and BIC. Nevertheless, in general, the values of the log-likelihood, AIC and BIC of all bivariate log-symmetric models are quite close to each other.
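The information criteria follow the usual definitions ${\rm AIC}=2k-2\ell$ and ${\rm BIC}=k\log(n)-2\ell$, here with $k=5$ estimated parameters and $n=15$ observations (the extra parameters being held fixed). The short check below reproduces the log-normal row of the table up to rounding of the reported log-likelihood.

```python
import math

def aic_bic(loglik, k=5, n=15):
    # AIC = 2k - 2*loglik; BIC = k*log(n) - 2*loglik.
    return 2 * k - 2 * loglik, k * math.log(n) - 2 * loglik

aic, bic = aic_bic(-58.117)   # log-normal row of the table
print(round(aic, 2), round(bic, 2))
```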
\begin{table}[H]
\caption{ML estimates (with standard errors in parentheses), log-likelihood, AIC and BIC values for the indicated bivariate log-symmetric models.}
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\noalign{\hrule height 1.7pt}
Distribution & $\widehat{\eta}_1$ & $\widehat{\eta}_2$ & $\widehat{\sigma}_1$ & $\widehat{\sigma}_2$ & $\widehat{\rho}$ & $\widehat{\nu}$ & Log-likelihood & AIC & BIC \\ \hline
Log-normal & 1.0362* & 19.4824* & 0.6536* & 0.6210* & -0.9390* & - & -58.117 & 126.23 &129.78\\
& (0.0175) & (3.1239) & (0.01192) & (0.1133) & (0.0305) & & \\
Log-Student-$t$ & 1.0188* & 20.1932* & 0.6111* & 0.5508* & -0.9514* & 7 & -57.915 & 125.83 & 129.37 \\
& (0.1339) & (2.0685) & (0.2220) & (0.1881) & (0.0207) & & & \\
Log-Pearson Type VII & 1.0211* & 20.1218* & 0.3712* & 0.3362* & -0.9502* & $\xi$ = 5 , $\theta$ = 22 & -57.917 & 125.83 & 129.37 \\
& (0.1806) & (3.2095) & (0.0761) & (0.0698) & (0.0280) & & & \\
Log-hyperbolic & 1.0175* & 20.1900* & 0.6843* & 0.6201* & -0.9504* & 2 & -57.922 & 125.84 & 129.38 \\
& (0.1910) & (3.2818) & (0.0376) & (0.0333) & (0.0276) & & & \\
{Log-Laplace} & {1.0594*} & {20.9110*} & {0.7748*} & {0.6809*} & {-0.9471*} & {-} & {-57.585} & {125.17} & {128.71} \\
& (0.0023) & (0.0105) & (0.2032) & (0.1745) & (0.0342) & & & \\
Log-slash & 1.0207* & 20.1854* & 0.5158* & 0.4648* & -0.9515* & 5 & -57.945 & 125.89 & 129.43 \\
& (0.1783) & (3.1973) & (0.1030) & (0.0955) & (0.0277) & & & \\
Log-power-exponential & 1.0298* & 19.9461* & 0.4516* & 0.4154* & -0.9432* & 0.37 & -57.984 & 125.97 & 129.51 \\
& (0.18445) & (3.2182) & (0.0935) & (0.0852) & (0.0294) & & & \\
Log-logistic & 1.0498* & 18.8904* & 0.7651* & 0.7488* & -0.9315* & - & -58.672 & 127.34 & 130.89 \\
& (0.1650) & (3.0396) & (0.1212) & (0.1231) & (0.0316) & & & \\\hline
\end{tabular}
}
\label{table:6}
\footnotesize{$^*$ significant at 5\% level.}
\end{table}
Figure~\ref{fig:qqplots} shows the QQ plots of the Mahalanobis distances (see Subsection \ref{maha:sec}) for the models considered in Table \ref{table:6}. We see clearly that, with the exception of the log-Student-$t$ case, the Mahalanobis distances in the bivariate log-symmetric models conform relatively well with their reference distributions.
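For the log-normal model the reference distribution of the Mahalanobis distance is $\chi^2_2$ (see the remark following Proposition \ref{Mahalanobis Distance-PDF}), whose quantile function is $Q(p)=-2\log(1-p)$, so the QQ comparison needs no special functions. The sketch below checks a few empirical quantiles of simulated distances against their theoretical counterparts, using an illustrative $\rho$ close to the fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
rho = -0.9                         # illustrative, close to the fitted values
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
d2 = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2)

# chi^2_2 quantile function: Q(p) = -2*log(1 - p).
probs = np.array([0.25, 0.50, 0.75, 0.90])
empirical = np.quantile(d2, probs)
theoretical = -2 * np.log(1 - probs)
print(empirical, theoretical)
```

A full QQ plot simply extends this comparison to a fine grid of probabilities, with the fitted parameters plugged into the distances.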
\begin{figure}[!ht]
\centering
\subfigure[Log-normal]{\includegraphics[height=5cm,width=5cm]{qqplot_normal.eps}}
\subfigure[Log-Student-$t$]{\includegraphics[height=5cm,width=5cm]{qqplot_student.eps}}
\subfigure[Log-Pearson Type VII]{\includegraphics[height=5cm,width=5cm]{qqplot_pearson.eps}}\\
\subfigure[Log-hyperbolic]{\includegraphics[height=5cm,width=5cm]{qqplot_hyperbolic.eps}}
\subfigure[Log-Laplace]{\includegraphics[height=5cm,width=5cm]{qqplot_laplace.eps}}
\subfigure[Log-slash]{\includegraphics[height=5cm,width=5cm]{qqplot_slash.eps}}
\subfigure[Log-power-exponential]{\includegraphics[height=5cm,width=5cm]{qqplot_powerexp.eps}}
\subfigure[Log-logistic]{\includegraphics[height=5cm,width=5cm]{qqplot_logistic.eps}}
\caption{\small {QQ plot for the Mahalanobis distance in the indicated model.}}
\label{fig:qqplots}
\end{figure}
\section{Concluding Remarks}\label{Sec:7}
In this paper, we have introduced a class of bivariate log-symmetric models, which results from an exponential transformation of a variable that follows a bivariate symmetric distribution. We have studied the main statistical properties and proposed the maximum likelihood estimators for the model parameters. A Monte Carlo simulation study has been carried out to numerically evaluate the maximum likelihood estimators. The simulation results have shown the good performance of the estimators, with empirical bias values close to zero, as shown in Tables \ref{table:2}-\ref{table:4}. We have applied the proposed models to a real fatigue data set. The results are seen to be favorable to the log-Laplace model. As part of future research, it will be of interest to propose bivariate log-symmetric regression models. Furthermore, the study of some hypothesis and misspecification tests via Monte Carlo simulation can be investigated. Work on these problems is currently in progress and we hope to report the findings in the future.
\paragraph{Acknowledgements}
This study was financed in part by the Coordenação de
Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES)
(Finance Code 001).
\paragraph{Disclosure statement}
There are no conflicts of interest to disclose.
\section{Introduction}
4U~1626--67 is a remarkable ultra-compact X-ray binary hosting a neutron star with a pulse period
of 7.7~seconds \citep{Rappaport77}.
Evidence of binary motion has never been found in X-ray timing measurements
\citep[see e.g.,][]{Rappaport77,Joss78,Jain08}.
An orbital period of 42~minutes has been inferred from the
pulsed optical emission reprocessed
on the surface of the secondary
\citep{Middleditch81,Chakrabarty98}.
An upper limit of 10~lt-ms on the pulse arrival delay
has been reported by \citet{Jain08} using X-ray data from \emph{RXTE}-\textsc{PCA}.
The time-scales of torque reversals observed in most accretion-powered pulsars
vary from weeks to months and years, and in most cases the accretion torques
are related to the X-ray luminosity.
4U~1626--67, a persistent X-ray source,
has undergone two torque reversals since its discovery \citep{Camero10}.
It was initially observed in a spin-up state; this trend
reversed in 1990 and the neutron star began to spin down.
After a steady spin-down phase of about 18 years, a transition back to spin-up
took place in 2008. The second torque reversal was detected with \emph{RXTE}-\textsc{PCA} \citep{Jain-atel09}
and \emph{Fermi}-\textsc{GBM} \citep{Arranz09}.
Moreover, this source does not obey the standard X-ray luminosity--accretion torque
relation \citep{Beri14}.
X-ray features during the spin-down phase were different from those of both
spin-up phases.
The most outstanding difference in the energy-resolved pulse profiles
of the two spin-up eras and the spin-down era was the disappearance
of the sharp double-peaked profile during the spin-down era \citep[for details see][]{Beri14}.
A quasi-periodic oscillation~(QPO) at 48~mHz was observed in all the observations
during the spin-down phase \citep{Kaur08}; this feature was absent in the power
density spectra (PDS) created using X-ray data during the current spin-up phase \citep{Jain10}.
The X-ray spectrum of 4U~1626--67 is well described using two
continuum components: a hard power law and a blackbody.
X-ray spectra during both spin-up phases showed a blackbody temperature of about 0.6~keV,
while during the spin-down phase of 4U~1626--67 the blackbody temperature decreased to
$\sim$~0.3~keV.~Moreover, the energy spectrum became harder during the spin-down phase.
The power-law photon index was $\sim$~1.5 during the first spin-up phase,
changed to $\sim$~0.4-0.6 during the spin-down phase, and was in the range
0.8-1.0 during the second spin-up phase
\citep[see][references therein]{Beri14}.
Detailed study of this source during each phase of torque reversal
suggests that the accretion flow geometry is different during the spin-up and spin-down
phases and plays an
important role in the transfer of angular momentum \citep{Jain10,Beri14}. \\
The X-ray spectrum of 4U~1626--67 is unique: unusually bright neon~(Ne)
and oxygen~(O) lines have been reported from many spectroscopic
observations \citep[][]{Angelini95,Owens97,Schulz01,Krauss07}.
Observations made with \emph{Chandra} revealed the double-peaked
nature of the low-energy emission line features, indicating their formation
in the accretion disk \citep{Schulz01}.
The continuum of the spectra, though, is well described using a soft emission component and a
power law \citep[see][and references therein]{Beri15}.
Observations made during the spin-down and spin-up
phases of 4U~1626--67 with the \emph{Suzaku} observatory were used to measure spectral changes
with the torque reversal in 2008 \citep{Camero12}. The authors confirmed
that the equivalent widths and the intensities of these emission
lines are variable. They found that the fluxes of all the emission lines
increased by a factor of $\sim$~5, with the exception
of the Ne~X~(1.02~keV) emission line, which increased by a factor of $\sim$~8
after the torque reversal.~Pulse
phase-resolved spectroscopy performed using data from the \emph{XMM-Newton} observatory during
the spin-down phase of 4U~1626--67 revealed that the line fluxes show a pulse phase dependence \citep{Beri15}.
One of the emission lines (O~VII) showed a line flux varying by a factor of about four,
significantly larger than the relative variation of the total flux.
Warp-like structures in the accretion disk are believed
to be the cause of the observed line flux variability. \\
An interesting possibility for the cause of spin-down is radiation pressure induced warping of the inner
accretion disk, which may become retrograde, leading to a negative accretion
torque \citep{Kerkwijk98}.
Moreover, the changes in the timing characteristics~(like the pulse profile, the QPOs, etc.)
in the spin-down phase compared to the spin-up phase
are understood to be due to changes in the inner accretion flow
from a warped accretion disk in the spin-down phase.
Therefore, we expect to observe changes
in the accretion flow, and probably
also in the accretion disk structures, of 4U~1626--67 during
the spin-up phase.~We carried out
pulse phase-resolved spectroscopy to investigate whether this results
in a different modulation of the emission lines during its current spin-up phase.
In this paper we present results obtained from
a timing and spectral study of 4U~1626--67, performed using data
obtained with the \emph{XMM-Newton} observatory during its current spin-up phase.
The paper is structured as follows:~we describe the observation
details and data reduction procedure in Section~2.~This is followed by
the results from the timing analysis (Section~3).~In Section~4 we present results from the spectral
analysis.~The last section~(Section~5) of the paper presents
results and discussions. \\
\section{Observations and Data Reduction}
We have obtained a 56~ks observation of 4U~1626--67 during its current spin-up phase with \emph{XMM-Newton}.
The observation was performed
on October~5,~2015 bearing an ID-0764860101. \\
The \emph{XMM-Newton} satellite has three X-ray telescopes, each with a European
Photon Imaging Camera~(\textsc{EPIC}) at the focus \citep{Jansen01}.
Two of the \textsc{EPIC} imaging spectrometers use metal-oxide semiconductor~(MOS)
CCDs \citep{Turner01} and one uses a pn CCD \citep{Struder01}.~The reflection grating spectrometer~(RGS)
and the optical monitor~(OM) are two other instruments on board the \emph{XMM-Newton} satellite.
The \textsc{RGS} comprises two spectrometers, namely \textsc{RGS1} and \textsc{RGS2}.
The two \textsc{RGS} have a bandpass of 0.35--2.5~keV
and a first-order spectral resolving power of about 200 to 800.~They are attached to the two X-ray telescopes
with MOS cameras.~Simultaneous optical/UV observations
are carried out with the optical monitor~(OM). \\
In this work we performed the analysis using data from
the \textsc{EPIC}-pn and the \textsc{RGS} on board \emph{XMM-Newton}.
\textsc{EPIC}-pn data were collected in timing mode using the medium filter
with a frame time of 6~ms.~In timing mode only one CCD chip
is in operation, and the data are collapsed into a one-dimensional row
and read out at high speed.
~The \textsc{RGS} were operated in the standard
$\textquoteleft$spectral' mode. \\
We processed the \emph{XMM-Newton} observation data files using the
science analysis software~(SAS version 15.0).~The latest calibration
files available as of April 2016 were applied. \\
~The standard SAS
tool \textsc{epproc} was used to obtain the \textsc{EPIC}-pn event file.
We first checked for flaring particle background in the data.
A light curve was extracted using the selection criterion PATTERN=0
in the energy range of 10-12~keV.
We found no evidence of soft proton flaring.~Thereafter, we extracted
the \textsc{EPIC}-pn cleaned event list by selecting events with PATTERN$\leq$4,
FLAG=0 and energy in the range 0.3-12~keV.~This cleaned event file was used to
extract source events and background files.~We used a rectangular box with RAWX~=~
30-46 for source events and RAWX~=~2-4 for the background.~The source event file was also checked
for photon pile-up using the SAS tool \textsc{epatplot}.~No significant photon
pile-up was found.~Barycentric correction was performed using the SAS tool \textsc{barycen}.
~For the extraction of light curves and spectra the SAS tool \textsc{evselect}
was used.~Response matrices and ancillary response files were generated
using the SAS tasks \textsc{rmfgen} and \textsc{arfgen}, respectively. \\
For the \textsc{RGS} data reduction, we used the SAS tool \textsc{rgsproc}
to extract calibrated source and background spectra and response files.
The standard procedure described in the SAS analysis threads was followed. \\
\section{Timing Analysis}
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.0in,width=6.0in,angle=0, keepaspectratio]{fig1.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=3.5in,width=8.5cm]{fig2.ps}
\end{minipage}
\caption{Light curves of 4U~1626--67 in the 0.3-12~keV band, created using data from \textsc{EPIC}-pn on board \emph{XMM-Newton}, are shown on the left.
We divided the data into four short segments, each of $\sim$~14~ks, for visual clarity of the flares.~Light curves are binned at 7.7~seconds.~On the right, we show the power density spectrum~(PDS) created using the same light curve.}
\label{LC}
\end{figure*}
The left-hand side of Figure~\ref{LC} shows the barycenter-corrected and background-subtracted
light curve of 4U~1626--67 obtained using the \textsc{EPIC}-pn data.
The light curve is highly variable, with both flares and dip features.
X-ray flares have also been observed in the light curves created using previous observations
made during its spin-up phase \citep[see][and references therein]{Beri14}.
The flare amplitude is 2-3 times the persistent level (Figure~\ref{LC}),
and flares last a few hundred seconds.
The recurrence time-scales of these flares vary between 300 and 1000~seconds,
consistent with previous reports \citep[see][]{Joss78,Li80,Raman16}. \\
~Unlike other flaring sources
such as LMC~X--4 and SMC~X--1, where the persistent emission resumes just after the end of flares,
it is interesting to notice a sharp dip in the light curve near the decay of the bright flares
at 18000 and 23000~seconds~(second panel) and 51000~seconds~(fourth panel)
of Figure~\ref{LC}.
This feature has never been reported before in 4U~1626--67.
~A similar dip near the end of an outburst has also
been observed in the light curves of the bursting pulsar~(GRO~J1744-28)
\citep[e.g.,][]{Giles96}.
\subsection{Power density Spectrum}
The power density spectrum~(PDS) generated using the \textsc{EPIC}-pn light curve
is shown in Figure~\ref{LC}. The light curve was divided into stretches of 8192 seconds. The PDS from all the segments were
averaged to produce the final PDS and were normalized such that their integral
gives the squared rms fractional variability; the white-noise level was
subtracted. The PDS shows a narrow peak at around 0.130 Hz, which corresponds to the
spin frequency of the neutron star. Multiple harmonics are also seen
in the PDS of the source.~In addition to the main peak,~a QPO
feature is seen at $\sim$~3~mHz with a fractional rms amplitude of $\sim$~$7.26\pm0.07~{\%}$.
The 3~mHz QPO can be attributed to the flares seen in the light curve,
and this feature is observed for the first time in the X-ray data during the current
(spin-up) phase of 4U~1626--67.~A similar mHz QPO was, however, observed in the PDS generated with X-rays
during the first spin-up phase of 4U~1626--67 \citep[see e.g.,][]{Joss78}.
Another interesting observation is the dependence of the 3~mHz QPO on energy~(Figure~\ref{PDS_EN}).
It is evident from Figure~\ref{PDS_EN} that the flares are more prominent
at lower energies, and therefore the sharp feature
in the PDS around 3~mHz is dominant at energies below 5~keV.~The fractional rms amplitudes of the 3~mHz QPO feature in the PDS created using
light curves in different energy bands, namely 0.3-2~keV,~2-5~keV,
5-8~keV and 8-12~keV, are $7.8\pm0.1~{\%}$,~$9.4\pm0.4~{\%}$,
$4\pm2~{\%}$ and $4\pm2~{\%}$, respectively.~We also detect a signature of a broad QPO around 48~mHz with an rms amplitude
of $\sim$~5~{\%}, much smaller than the rms of the 48~mHz QPO seen during the spin-down phase \citep[e.g.,][]{Kommers98}.
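The segment-averaged, rms-normalized, noise-subtracted PDS described above can be sketched as follows. This is a minimal illustration, not the code used for the analysis; the function name is ours, and the Poisson noise level assumed here (2/mean rate) applies to a background-free count-rate light curve.

```python
import numpy as np

def averaged_pds(rate, dt, seg_len):
    """Average rms-normalized periodograms over equal-length segments.

    rate    : evenly sampled count-rate light curve (counts/s)
    dt      : time bin size (s)
    seg_len : samples per segment
    Returns (frequencies, powers); the integral of the powers over
    frequency gives the squared fractional rms variability.
    """
    nseg = len(rate) // seg_len
    powers = []
    for k in range(nseg):
        seg = rate[k * seg_len:(k + 1) * seg_len]
        mean = seg.mean()
        ft = np.fft.rfft(seg)
        # rms (Miyamoto) normalization: P = 2 |FT|^2 dt / (N <r>^2)
        p = 2.0 * np.abs(ft[1:])**2 * dt / (seg_len * mean**2)
        # subtract the Poisson white-noise level (background-free case)
        p -= 2.0 / mean
        powers.append(p)
    freqs = np.fft.rfftfreq(seg_len, dt)[1:]
    return freqs, np.mean(powers, axis=0)
```

A QPO then appears as a local peak in the returned powers, and its fractional rms follows from integrating the power over the feature.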
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=3.75in,width=7.5cm]{fig3.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=3.75in,width=7.5cm]{fig4.ps}
\end{minipage}
\caption{Energy-resolved light curves binned at 77~seconds are shown on the left.~These light curves
were used to generate the power density spectra shown on the right.
The plot shows a QPO feature at $\sim$~3~mHz and some energy dependence of the power density spectrum of 4U~1626--67.}
\label{PDS_EN}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=3.5in,width=7.5cm]{fig5.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=3.5in,width=7.5cm]{fig6.ps}
\end{minipage}
\caption{Left:~The average pulse profile created in the energy band 0.3-12~keV.~Right:~The
energy-resolved pulse profiles.~Pulse profiles are binned into 64 phase bins.}
\label{EN-pp}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=3.5in,width=9.5cm]{fig7.ps}
\caption{The pulse profile created in the 0.3-2~keV energy band using \emph{XMM-Newton}-pn data from the spin-down phase is shown in red.
~This figure demonstrates that the pulse profile below 2~keV is quite different from that seen in the current spin-up phase~(blue).}
\label{PP-XMM}
\end{figure*}
\subsection{Pulse Profiles}
The spin period was determined to be $7.67255\pm0.00009$~seconds using the epoch-folding $\chi^2$-maximization
technique.
This period was used for creating pulse profiles.
We first created the average pulse profile with 64 phase bins, using the light curve in
the 0.3-12~keV energy band~(Figure~\ref{EN-pp}).
The bi-horned peaks observed in the pulse profiles
look similar to the previously reported pulse profiles
of 4U~1626--67 during its spin-up era \citep[see e.g.,][]{Beri14}.
However, in the 0.3-12~keV band the amplitude of the first peak is slightly smaller than that of the second peak. \\
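The epoch-folding $\chi^2$ search used above can be sketched as follows. This is an illustrative, simplified version (the statistic here is unnormalized, i.e. the per-bin variance term is omitted; the function names are ours): fold the light curve at each trial period and pick the period whose folded profile deviates most from a constant.

```python
import numpy as np

def fold_profile(times, rates, period, nbins=64):
    """Fold a light curve at `period` into `nbins` phase bins."""
    phase = (times / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    prof = np.bincount(idx, weights=rates, minlength=nbins)
    cnts = np.bincount(idx, minlength=nbins)
    return prof / np.maximum(cnts, 1)

def epoch_fold_chi2(times, rates, trial_periods, nbins=32):
    """Unnormalized chi^2-like statistic of each folded profile against
    its mean; the spin period maximizes this statistic."""
    stat = []
    for p in trial_periods:
        prof = fold_profile(times, rates, p, nbins)
        stat.append(np.sum((prof - prof.mean())**2))
    return np.array(stat)
```

At a wrong trial period the pulse phase drifts across the observation and the profile is smeared toward a constant, so the statistic peaks at the true period.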
The energy-resolved pulse profiles were created using the light curves in
the energy bands of 0.3-2~keV,~2-5~keV,~5-8~keV and 8-12~keV.
\emph{XMM-Newton} enabled us to investigate pulse profiles below 2~keV
during the current spin-up phase for the first time.
The pulse profile in the 0.3-2~keV band is simple, with a shoulder-like
structure and a sharp dip around phase 0.0.
It seems that the sharp dip observed in the 0.3-2~keV profile mainly contributes
to the dip observed between the two horns in the energy-averaged
profile~(0.3-12~keV).
The profile shape in the 0.3-2~keV band is similar to that seen
during the first spin-up phase \citep{Pravdo79}.
It is interesting
to see that the pulse profile below 2~keV is quite different from that observed
in the other energy bands, which indicates that the pulsation of the soft (thermal)
component is different from that of the power-law component.~A soft spectral component that pulsates differently
from the power-law component has been detected in other sources with low absorption column density
~(e.g., SMC~X--1 and LMC~X--4) and has been interpreted as reprocessed thermal emission from
the inner accretion disk \citep{Paul02}.
To compare the pulse profiles below 2~keV during the current phase
with those of the spin-down phase, we created the pulse profile in the 0.3-2~keV energy band
using data from a previous \emph{XMM-Newton} observation~(ObsID-0152620101)
during the spin-down phase.~We performed the data reduction and analysis
in the same way as discussed in our previous paper on 4U~1626--67
\citep{Beri15}.
It is interesting to see that the pulse profile during the spin-down phase
is quite different from that during the current phase:
below 2~keV it shows many structures during the spin-down phase,
which is not the case during the current spin-up phase (see Figure~\ref{PP-XMM}).
Pulse profiles in the remaining energy bands are consistent
with previous observations in the spin-up state \citep[see][and references therein]{Beri14}. \\
\begin{figure*}
\centering
\includegraphics[height=4.in,width=9.cm,angle=270,keepaspectratio]{fig9.ps}
\caption{Best-fit phase-averaged spectrum obtained from a simultaneous fit of the \textsc{RGS} and \textsc{EPIC}-pn data.~The second
panel shows the ratio plot obtained with only the continuum components, while the bottom two panels
show the ratio plots of pn and \textsc{RGS}, respectively, obtained after the best fit.}
\label{Avg-spec-XMM}
\end{figure*}
\begin{table*}
\centering
\caption{Best-fitting parameters obtained from simultaneous fit of \textsc{RGS1}, \textsc{RGS2} \& pn Spectrum.}
\begin{tabular}{ l l }
\hline
\hline
Parameter & Model Values \\
N$_H$ (10${^2}{^2}$atoms cm$^{-2}$) & $0.085\pm{0.003}$ \\
$kT_{bbodyrad}~(keV)$ & $0.427\pm{0.005}$ \\
PowIndex ($\Gamma$) & $0.914\pm{0.014}$ \\
$N_{PL}^a$ & $0.0229\pm{0.0006}$ \\
$\rm{Line Energy^b}$ \\
$\rm{O~VII}$ & $0.571\pm{0.001}$ \\
$\rm{O~VIII}$& $0.6536\pm{0.0003}$ \\
$\rm{Ne~IX}$ & $0.913\pm{0.004}$ \\
$\rm{Ne~X}$ & $1.022\pm{0.001}$ \\
$\rm{Fe~(L shell)}$ & $0.733\pm{0.001}$ \\
$\rm{Fe~(K shell)}$ & $6.8\pm{0.1}$ \\
$\rm{Line Width^c}$ \\
$\rm{O~VII}$ & $0.009\pm{0.001}$ \\
$\rm{O~VIII}$& $0.0075\pm{0.0002}$ \\
$\rm{Ne~IX}$ & $0.035\pm{0.005}$ \\
$\rm{Ne~X}$ & $0.0115\pm{0.0006}$ \\
$\rm{Fe~(L shell)}$ & $0.009\pm{0.002}$ \\
$\rm{Fe~(K shell)}$ & $0.14\pm{0.09}$ \\
$\rm{Line Flux^d}$ \\
$\rm{O~VII}$ & $11\pm{1}$ \\
$\rm{O~VIII}$& $25\pm{1}$ \\
$\rm{Ne~IX}$ & $13\pm{1}$ \\
$\rm{Ne~X}$ & $23.0\pm{1.0}$ \\
$\rm{Fe~(L shell)}$ & $7.0\pm{0.7}$ \\
$\rm{Fe~(K shell)}$ & $0.6\pm{0.3}$ \\
Reduced $\chi^2$ & 1.43 (1337 dof) \\
\hline
\end{tabular}
\bigskip
{{\bf{ Note}}: Errors quoted are with 90 $\%$ confidence range. \\
\hspace{2.5in} Energy range used is 0.35-1.8~keV for \textsc{RGS1} and \textsc{RGS2} and 0.8-12.0~keV for \textsc{EPIC}-pn. \\
\hspace{2.6in} a $\rightarrow$ Powerlaw normalisation~($N_{PL}$)
is in units of $\rm{photons~cm^{-2}~s^{-1}~keV^{-1}}$ at 1~keV \\
b $\rightarrow$ Line Energy in units of keV. \\
c $\rightarrow$ Line width in units of keV. \\
\hspace{1.65 in} d $\rightarrow$ Gaussian normalisation is in units of $10^{-4}$$\rm{photons~cm^{-2}~s^{-1}}$ }\\
\label{Best-fit}
\end{table*}
\begin{table*}
\caption{DOUBLE-GAUSSIAN EMISSION LINE FITS}
\label{table2}
\begin{tabular}{ c c c c c c c }
\hline
\hline
Observatory & MJD
& \multicolumn{2}{c} {Blueshifted Lines}
& \multicolumn{2}{c} {Redshifted Lines} \\
& & V~($km~s^{-1})$ & $Flux^{a}$
& V~($km~s^{-1})$ & $Flux^{a}$ & Reference \\
\hline
& & & O~VIII~(0.653~keV) & & & \\
\hline
\emph{Chandra} & 51803.6 & 1740$\pm$440 & 14.04$\pm$2.52 & 1900$\pm$480 &
17.82 $\pm$ 0.57 & \citet{Schulz01} \\
\emph{XMM-Newton} & 52145.1 & 1930$\pm$260 & $17.6_{-4.4}^{+4.7}$ & 1930$\pm$260 &
$21.9_{-4.5}^{+4.9}$ & \citet{Krauss07} \\
\emph{Chandra} & 52795.1 & 1770$\pm$330 & $13.0_{-4.9}^{+5.7}$ & 1770$\pm$330 &
$13.7_{-5.0}^{+5.8}$ & \citet{Krauss07} \\
\emph{XMM-Newton} & 52871.2 & 1810$\pm$180 & $12.7_{-1.9}^{+2.1}$ & 1810$\pm$180 &
$12.4_{-1.8}^{+2.0}$ & \citet{Krauss07} \\
\emph{XMM-Newton} & 56397 & 1535$\pm$158 & 71$\pm$6 & 1306$\pm$158 &
61 $\pm$ 6 & Current Work \\
\hline
& & & Ne~X~(1.02~keV) & & & \\
\hline
\emph{Chandra} & 51803.6 & 2220$\pm$350 & 8.15$\pm$0.93 & 1240$\pm$220 &
15.04 $\pm$ 1.65 & \citet{Schulz01} \\
\emph{XMM-Newton} & 52145.1 & 1910$\pm$450 & $11.1_{-4.6}^{+4.8}$ & 1910$\pm$450 &
$14.4_{-5.2}^{+3.4}$ & \citet{Krauss07} \\
\emph{Chandra} & 52795.1 & 1670$\pm$180 & $8.2_{-1.5}^{+1.4}$ & 1670$\pm$180 &
$10.5_{-1.6}^{+1.8}$ & \citet{Krauss07} \\
\emph{XMM-Newton} & 52871.2 & 1780$\pm$420 & $11.3_{-2.6}^{+2.6}$ & 1780$\pm$420 &
$9.0_{-2.6}^{+2.5}$ & \citet{Krauss07} \\
\emph{XMM-Newton} & 56397 & 1731$\pm$247 & 65$\pm$20 & 1484$\pm$247 &
98 $\pm$ 22 & Current Work \\
\hline\\
\end{tabular}
\label{Velocity}
\\{Note:~a~$\rightarrow$ The Gaussian normalization is in units of $10^{-5}~\rm{photons~cm^{-2}~s^{-1}}$ }
\end{table*}
\section{Spectroscopy}
\subsection{Phase Averaged Spectroscopy}
We performed simultaneous spectral fitting using data from the \textsc{RGS} and \textsc{EPIC}-pn (Figure~\ref{Avg-spec-XMM}).
The first-order spectra obtained using \textsc{RGS1} and \textsc{RGS2}
were grouped using the tool \textsc{grppha} (\textsc{HEASOFT} \textit{Version}-6.17)
to contain 6 channels per bin.~We used the 0.35-1.8~keV band of the \textsc{RGS} for spectral fitting.
~The mean spectrum extracted using \textsc{EPIC}-pn was
rebinned using the SAS task \textsc{specgroup} to oversample the \textsc{FWHM} of the energy resolution by a factor of 3
and to obtain a minimum of 25~counts per bin.
There is no reliable calibration below 0.7~keV for \textsc{EPIC}-pn
in timing mode \footnote{http://xmm2.esac.esa.int/docs/documents/CAL-TN-0018.pdf},
and the disagreement between the \textsc{EPIC}-pn and the \textsc{RGS}
is larger below 0.7~keV.~Therefore,
we used the 0.8-12~keV band of \textsc{EPIC}-pn for spectral fitting.
All the spectral parameters, other than the relative instrument normalizations,
were tied together for the \textsc{RGS} and \textsc{EPIC}-pn.
We fixed the instrumental normalization of \textsc{RGS1}
to 1, and left the normalizations of the \textsc{RGS2} and \textsc{EPIC}-pn
instruments free.~The values of the \textsc{constant} model component
obtained for \textsc{RGS2} and \textsc{EPIC}-pn are $0.977\pm0.007$
and $1.04\pm0.01$, respectively.
A blackbody component and a power law describe the continuum of the phase-averaged spectrum
well \citep{Pravdo79,Kii86,Angelini95,Owens97,Orlandini98,Schulz01,Krauss07,Jain10,Iwakiri12}.
Therefore,~we modelled the continuum of the spectrum using \textit{tbabs*(bbodyrad+powerlaw)}.
Using only the continuum model left a
significant excess in the residuals in the form of emission lines.
The second panel of Figure~\ref{Avg-spec-XMM}
shows the ratio between data and model, indicating the presence of low-energy emission lines~(below~1~keV).
The raw \textsc{RGS} spectrum shows the presence of two strong emission lines around
0.65~keV and 1.0~keV; therefore, we added two Gaussian components at these
energies, which correspond to O~VIII and Ne~X, respectively.
Adding these two Gaussian components was not adequate to obtain a good spectral fit.
The presence of additional emission features at 0.571~keV, 0.73~keV and 0.913~keV
was observed in the
residuals of the \textsc{RGS} data.
The presence of O~VII and Ne~IX emission lines around 0.569~keV and 0.915~keV, respectively,
in the X-ray spectrum of 4U~1626--67 has been reported earlier by several authors
\citep[see e.g.,][]{Schulz01,Krauss07}.
Therefore, to obtain an appropriate fit
we added two additional Gaussian components at these line energies.
We further required an additional Gaussian component
to model the excess seen around 0.73~keV;
this line energy corresponds to an iron~(Fe L-shell) emission feature.
Table~\ref{Best-fit} shows the best-fit parameters obtained.~They are consistent with
previous results during the spin-up phase of 4U~1626--67 \citep[see e.g.,][]{Camero10}.
The equivalent widths~(EW) of the O~VII~(0.571~keV),~O~VIII~(0.653~keV),~Fe L-shell~(0.73~keV),~Ne~IX~(0.913~keV) and
Ne~X~(1.02~keV) emission lines are
$17.0\pm1.0$~eV,~$34\pm2$~eV,~$9.0\pm1.0$~eV,~$20\pm3.0$~eV and $42.0\pm1.0$~eV, respectively.\\
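For a narrow line, the equivalent width quoted above is essentially the total line photon flux divided by the continuum photon flux density at the line energy. The sketch below illustrates this relation; it is not the fitting code, and for brevity the continuum here contains only the power-law term, whereas the fitted continuum also includes the blackbody, so the numbers differ from those in Table~\ref{Best-fit}.

```python
def powerlaw_cont(E, norm, gamma):
    """Power-law photon flux density (photons/cm^2/s/keV) at energy E (keV),
    with `norm` in photons/cm^2/s/keV at 1 keV."""
    return norm * E**(-gamma)

def equivalent_width(line_flux, cont_at_line):
    """EW (keV) of a narrow Gaussian line: total line photon flux
    (photons/cm^2/s) divided by the continuum photon flux density
    (photons/cm^2/s/keV) at the line centroid."""
    return line_flux / cont_at_line
```

For example, the Ne~X line flux of $23\times10^{-4}$ over a $\Gamma\simeq0.9$ power law with the normalization of Table~\ref{Best-fit} gives an EW of order 0.1~keV against the power law alone; including the blackbody in the continuum lowers this toward the fitted value.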
Here we emphasize that, for the first time, the X-ray spectrum of
4U~1626--67 shows an Fe L-shell fluorescence emission
feature; the detection of this feature at 0.73~keV is statistically significant,
as the value of chi-squared~($\chi^2$) increased from 2049 to 2458 (1341 degrees of freedom)
on fixing its normalization to zero.
A systematic error of $2\%$ was added quadratically to each energy bin
to account for artifacts due to calibration issues in the \textsc{EPIC}-pn timing mode data.
The residuals of \textsc{EPIC}-pn showed the presence of a weak iron fluorescence emission line around 6.8~keV.
Therefore, we added another Gaussian component
with line energy centered around 6.8~keV to the spectrum.~The equivalent
width of the emission line observed at 6.8~keV is $\sim$~$0.02\pm0.01$~keV.
The spectral fit resulted in a reduced $\chi^2$~($\chi^2_{\nu}$) of 1.43 for 1337 degrees
of freedom (see Table~\ref{Best-fit}).
We also observed that fixing the normalization of the Fe K-shell line
to zero led to an increase in the value of chi-squared~($\chi^2$)
from 1914 to 1930 (1338 degrees of freedom), which suggests that the detection of this emission feature is statistically significant.
The presence of the Fe~$K_{\alpha}$ emission line was also observed in the MOS~2 spectrum.
The values of the line flux and the equivalent width observed in the MOS~2 data
are $0.58^{+0.5}_{-0.3}$$\times$$10^{-4}$~$\rm{photons~cm^{-2}~s^{-1}}$ and $0.015\pm0.010$~keV, respectively.
These values are similar to those observed in
the pn data, and fixing the normalization of this line to zero in the MOS~2 data also led to an increase in
the value of chi-squared~(172 to 183 for 171 degrees of freedom), similar to that observed in the pn data.
The addition of the systematic error to the pn data
is not likely to introduce any pulse phase dependence of the emission line fluxes, which is the main
motivation of the present work.~Here we would like to mention that,
owing to the limited statistical significance of the Fe~$K_{\alpha}$ emission line, we have not performed
phase-resolved spectroscopy for this line. \\
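The $\chi^2$ comparisons above (e.g. 1914 vs. 1930 for one extra free parameter) can be turned into a significance estimate with the classical F-test for an added model component. The sketch below is illustrative only (the function name is ours, and the usual caveats about F-testing a line normalization at the boundary of its parameter space apply):

```python
from scipy import stats

def ftest_added_component(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Classical F-test for nested fits: the simpler model omits a
    component present in the more complex model.
    Returns (F, p); a small p favors keeping the extra component."""
    d_dof = dof_simple - dof_complex
    F = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    return F, stats.f.sf(F, d_dof, dof_complex)
```

With the Fe~K-shell numbers quoted above, $F\approx11$ for (1,~1337) degrees of freedom, corresponding to a chance probability below 1\%, consistent with the detection being significant.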
It is believed that the Ne/O emission lines observed in the X-ray spectra of
4U~1626--67 originate from highly ionized layers of the accretion disk.
The existence of double-peaked profiles supports their disk
origin \citep{Schulz01, Krauss07}.~Interestingly, we noticed that one of the emission lines,
at 0.653~keV~(O~VIII), shows a double-peaked profile
in the high-resolution \textsc{RGS} data (Figure~\ref{Double-peak}).
Therefore, we fit this line with a pair of Gaussians
to resolve it into the Doppler pair and to estimate the disk velocities
of the red- and blueshifted components.
The line velocities measured using the \textsc{RGS} data,
along with previously known values, are given in Table~\ref{Velocity}.
The single-Gaussian fit revealed a broad emission line
at 1.02~keV~(Ne~X) in the \textsc{RGS2} spectrum.~Therefore, we fit this line as well
with a pair of Gaussians; the velocities measured are given in Table~\ref{Velocity}.
The Ne~IX emission line observed in the \emph{XMM-Newton} data did not allow us
to measure Doppler velocities. \\
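The velocities in Table~\ref{Velocity} follow from the fitted centroids of the red and blue peaks relative to the rest-frame line energy. A minimal (non-relativistic) sketch of this conversion, with our own function name:

```python
C_KM_S = 2.998e5  # speed of light in km/s

def doppler_velocity(E_obs, E_rest, c=C_KM_S):
    """Non-relativistic line-of-sight velocity (km/s) from an observed
    Gaussian line centroid; positive values correspond to blueshift."""
    return c * (E_obs - E_rest) / E_rest
```

For example, a blue peak of the O~VIII line (rest energy 0.6536~keV) shifted by $\sim1500$~km/s sits about 3~eV above the rest energy, comfortably resolvable with the \textsc{RGS}.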
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.in,width=6.5cm,keepaspectratio]{fig18.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.in,width=6.5cm,keepaspectratio]{fig19.ps}
\end{minipage}
\caption{Double-peaked emission lines. Left:~the hydrogenic O~VIII emission line observed with \textsc{RGS}-1 \& 2.
Right:~the broad hydrogenic Ne~X emission line as observed with the \textsc{RGS-2} data.}
\label{Double-peak}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.5in,width=7.5cm]{fig10.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.5in,width=7.5cm]{fig11.ps}
\end{minipage}
\caption{Variation of the line fluxes across the pulse phase is plotted on the left,
while the plot on the right shows the variation of the continuum fluxes with pulse phase.
Line fluxes are in units of $10^{-4}~\rm{photons~cm^{-2}~s^{-1}}$, while all the continuum fluxes~($f_{\rm total}$,~$f_{\rm BB}$,~$f_{\rm PL}$)
are measured in $10^{-10}~\rm{ergs~cm^{-2}~s^{-1}}$.
~All the errors are quoted at 1~$\sigma$ confidence.}
\label{pp-flare}
\end{figure*}
\begin{figure}
\centering
\includegraphics[height=4.5in,width=7.5cm,keepaspectratio]{fig13.ps}
\caption{Variation of the continuum parameters across the pulse phase.}
\label{EW}
\end{figure}
\subsection{Pulse Phase Resolved Spectroscopy}
For the pulse phase-resolved spectroscopy
we used data from \textsc{EPIC}-pn.
We
added a $\textquoteleft$\textsc{phase}' column to the
pn event list.~This was performed using the SAS task \textsc{phasecalc},
with phase zero fixed at the reference time~(epoch) used for creating the pulse profiles.
Thereafter, appropriate good time interval~(GTI) files
were created for narrow phase bins of width 0.05.
These GTI files were used for the extraction of 20 phase-resolved source spectra.
The response matrices and ancillary response files used for the
phase-averaged spectroscopy were used again for the phase-resolved spectroscopy.
Spectral fitting was done in the energy range of 0.8-12~keV with the same spectral
model consisting of a power law, a blackbody and several emission lines.
We fixed the neutral hydrogen column density, line energies and line widths
to the values obtained from the phase-averaged spectrum.
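Conceptually, the phase-column and GTI steps amount to assigning each event a pulse phase and keeping only events in a given phase bin. The following is a simplified stand-in for what \textsc{phasecalc} plus GTI filtering accomplish (constant spin period, our own function names), not the SAS implementation:

```python
import numpy as np

def assign_phase(times, epoch, period):
    """Pulse phase in [0, 1) for each event time, for a constant period
    with phase zero at `epoch` (analogue of the added PHASE column)."""
    return ((times - epoch) / period) % 1.0

def select_phase_bin(times, epoch, period, lo, hi):
    """Events whose pulse phase falls in [lo, hi): one phase-resolved
    slice, from which a spectrum would then be accumulated."""
    ph = assign_phase(times, epoch, period)
    return times[(ph >= lo) & (ph < hi)]
```

Repeating the selection for 20 contiguous bins of width 0.05 yields the event lists behind the 20 phase-resolved spectra.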
The left plot of Figure~\ref{pp-flare} shows the variation of the flux of the
low-energy emission lines with pulse phase.
From the plot we observe:
\begin{itemize}
\item The Ne~IX~$He_{\alpha}$ emission line at 0.913~keV shows strong variation
with pulse phase (a factor of $\sim$ $2.0\pm0.3$).
A $\chi^2$ value of 103 for 20 phase bins
was obtained from fitting a constant to the flux of the line at 0.913~keV.
\item The Ne~X~$Ly_{\alpha}$ emission line at 1.02~keV shows no significant variation with pulse phase.
This is similar to previous observations made during the spin-down phase of 4U~1626--67 \citep{Angelini95,Beri15}.
A constant fitted to the flux of the Ne~X~$Ly_{\alpha}$ emission line gave a $\chi^2$ value of 32 for 20 phase bins.
\end{itemize}
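The constancy test quoted in the items above (a $\chi^2$ of 103 vs. 32 for 20 phase bins) can be sketched as fitting an inverse-variance weighted constant to the phase-resolved fluxes and evaluating $\chi^2$ against it. This is an illustrative helper with our own naming, not the fitting code used here:

```python
import numpy as np
from scipy import stats

def constant_fit_chi2(flux, err):
    """Fit a constant (inverse-variance weighted mean) to a profile of
    fluxes with 1-sigma errors; return (chi2, dof, p-value of constancy).
    A small p-value indicates significant variability."""
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(err, dtype=float)
    w = 1.0 / err**2
    mean = np.sum(w * flux) / np.sum(w)
    chi2 = np.sum(((flux - mean) / err)**2)
    dof = len(flux) - 1
    return chi2, dof, stats.chi2.sf(chi2, dof)
```

For 19 degrees of freedom, a $\chi^2$ of 103 corresponds to a vanishingly small constancy probability (strong Ne~IX variability), whereas 32 is only marginally inconsistent with a constant flux.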
We estimated the observed total continuum flux in the 0.7-12~keV energy band,
the \textit{power-law} flux in the 2-12~keV band and the \textit{blackbody}
flux in the 0.7-2~keV band using the \textsc{CFLUX} convolution model
in \textsc{XSPEC}.
The continuum flux profiles plotted on the right-hand side of Figure~\ref{pp-flare}
show that the modulation of the \textit{power law} is the same as the modulation of the
total flux, while the flux modulation of the
\textit{blackbody} component has a different shape.
The blackbody component shows a broad dip,
consistent with the pulse profile in the 0.3-2.0~keV band,
in which the blackbody component dominates.~The shape
of the power-law profile can be imagined as
a narrow dip at the centre of a broad pulse peak, while the
blackbody profile is a broad dip on an otherwise constant emission. \\
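What \textsc{CFLUX} computes for the power-law component is, in essence, the integral of $E\,N(E)$ over the chosen band. A simplified, unabsorbed analytic sketch (our own function; the blackbody term would be integrated numerically in the same way) is:

```python
import numpy as np

KEV_TO_ERG = 1.602e-9  # 1 keV in erg

def powerlaw_band_flux(norm, gamma, e_lo, e_hi):
    """Unabsorbed power-law energy flux (erg/cm^2/s) in [e_lo, e_hi] keV:
    integral of norm * E^(-gamma) * E dE, with `norm` in
    photons/cm^2/s/keV at 1 keV."""
    if abs(gamma - 2.0) < 1e-9:
        f = norm * np.log(e_hi / e_lo)  # special case Gamma = 2
    else:
        f = norm * (e_hi**(2 - gamma) - e_lo**(2 - gamma)) / (2 - gamma)
    return f * KEV_TO_ERG
```

With the phase-averaged parameters of Table~\ref{Best-fit} this reproduces the order of magnitude of the 2-12~keV power-law fluxes shown in Figure~\ref{pp-flare}, up to the neglected absorption.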
The continuum parameters also show variation with pulse phase
(Figure~\ref{EW}).
The blackbody temperature shows strong variation,
with a possible correlation with the pulse profile,
while the blackbody normalization profile
is anti-correlated with the temperature profile.
The power-law index profile shows a sharp dip
at phase 0.2, with some structures in the rest of the profile,
while the power-law normalization shows a strong correlation
with the pulse profile shape~(bi-horned peaks around pulse phases 0.9 and 1.1).
The blackbody flux is a few percent of the total flux and, given
the systematic errors in \emph{EPIC}-\textsc{pn}, one should
be cautious about the blackbody parameters. The flux modulation
of the blackbody is, however, certainly different from the power-law flux
variation, as is evident from the energy-resolved pulse profiles.
\begin{figure*}
\centering
\includegraphics[height=3.5in,width=7.5cm]{fig14.ps}
\caption{Best-fit model components for the intensity-resolved spectra. Blue represents the spectrum created using the intensity range 200-320~counts/s,
green 150-200~counts/s, red 90-150~counts/s, and black the spectrum during the dip.~From the figure it is clear that the normalisation varies
with the change in intensity.}
\label{Int-resol-spec-XMM}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=5.5in,width=7.5cm,keepaspectratio]{fig15.ps}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=4.in,width=6.5cm,keepaspectratio]{fig16.ps}
\includegraphics[height=4.in,width=6.5cm,keepaspectratio]{fig17.ps}
\end{minipage}
\caption{Left hand plot shows the variation of continuum parameters with the increase in intensity, the bottom panel of this plot
shows the variation of ratio of blackbody flux and the power law flux.~On the right: Top plot shows the
variation of line flux with the increase in intensity.~Line fluxes are in units of $10^{-4}~\mathrm{photons~cm^{-2}~s^{-1}}$
while the bottom plot shows the variation of equivalent width of Ne~IX and Ne~X emission lines
with the increase in intensity.
~Note: X-axis label refer to average count-rate during different
intensity resolved spectra.~All the errors are quoted with 1~$\sigma$ confidence.}
\label{Int-variation-flare}
\end{figure*}
\subsection{Intensity Resolved Spectroscopy}
Since strong variation in count-rates is observed
in the light curve shown in Figure~\ref{LC},
we extracted intensity resolved
spectra from \textsc{EPIC}-pn data using SAS task \textsc{evselect}.
Good~time~intervals~(GTIs) were created in different intensity ranges: the sharp dip seen just after the end
of the first flare in Figure~\ref{LC}, and pn count-rates between 90-150~c/s,
150-200~c/s and 200-320~c/s, were used to extract intensity resolved spectra.
For spectral fitting we used the same technique that we adopted while performing
phase resolved spectroscopy.~The neutral hydrogen column density,~line energies
and widths were fixed to the phase averaged values.~We also added $2\%$ systematics
while performing the spectral fitting.
Fitted model components are shown in Figure~\ref{Int-resol-spec-XMM}.
The temperature of the blackbody and the power-law index were found to increase with intensity.
The line fluxes and the fractional contribution of the blackbody flux
were also found to increase with total flux (Figure~\ref{Int-variation-flare}).
\begin{table*}
\caption{Radius measurement for the Ne/O line formation region}
\label{table2}
\begin{tabular}{ c c c }
\hline
\hline
Ion Species & $Radius^{a}$~(Spin-down) & $Radius^{a}$~(Spin-up) \\
\hline
O~VII & 2.3 & 4.2 \\
O~VIII & 1.4 & 2.6 \\
Ne~IX & 1.2 & 2.1 \\
Ne~X & 0.07 & 0.14 \\
\hline
\end{tabular}
\label{radii}
\\{Note:~$^{a}$~Radii are in units of $10^{10}~\mathrm{cm}$.}
\end{table*}
\section{Discussion \& Summary of Results}
In this paper we present results obtained using data from the
\emph{XMM-Newton} observatory during the current spin-up phase of
4U~1626--67.~Several new
and significant changes have been observed in comparison to the previous
observation made during its spin-down phase.~The main focus
of our study is to observe a pulse phase dependence of low energy
emission lines seen in the X-ray spectrum of 4U~1626--67.
Strong pulse phase dependence of O~VII emission line at 0.569~keV
was observed during spin-down phase of 4U~1626--67.~This strong variation was
interpreted as a result of warps in the accretion disk \citep{Beri15}.
Dissimilarities in timing characteristics (such as pulse profile, QPOs)
during spin-down and spin-up eras are believed to be due to difference in the
inner accretion flow from a warped accretion disk during spin-down phase \citep{Beri14,Kaur08}.
Therefore, one expects to see a different behavior of line fluxes with
pulse phase during current spin-up phase of this source.
The calibration issues below 0.7~keV in the timing mode data of \textsc{EPIC}-pn
did not allow us to study the pulse phase dependence of emission line features
at 0.571~keV~(O VII), 0.653~keV~(O~VIII) and 0.733~keV~(Fe~L).~However, we investigated the behavior
of emission lines at 0.913~keV~(Ne~IX) and 1.02~keV~(Ne~X) with the pulse phase. \\
We summarize the results as follows~:
\begin{itemize}
\item Light curve obtained using the \textsc{EPIC}-pn data during its current spin-up phase
showed dips.~Unlike in other flaring sources
such as LMC~X--4 and SMC~X--1, it is interesting to notice a broad dip in the light curve
soon after the decay of a large flare.~This feature is similar to that
observed in the bursting pulsar, GRO~J1744-28.
The light curve of GRO~J1744-28 showed the presence of
a dip and recovery period following each outburst \citep[see e.g.,][]{Giles96}
and the X-ray spectrum of GRO~J1744-28 showed no significant change going from quiescence to outburst \citep{Cannizzo96}.
The same authors proposed that the outbursts observed in the bursting pulsar
could be due to Lightman-Eardley~(LE)~instability \citep{Lightman74} in the accretion disk
and the material that is evacuated
onto the pulsar during an accretion event is replenished by
material flowing in from further out; hence, the dip and recovery in the light curve following an outburst.
After performing intensity resolved spectroscopy of 4U~1626--67 we found that overall there is no change in the
shape of the spectrum (see Figure~\ref{Int-resol-spec-XMM}), except that the continuum and line parameters
follow an increasing trend with intensity.
Therefore, it is plausible that a similar mechanism might be responsible for the presence of flares, sharp dips
in the light curve of 4U~1626--67. \\
\item QPO feature around 3~mHz is observed in the PDS. This feature has been observed for the first time using X-ray data
of the current spin-up phase.~The feature at 3~mHz also shows strong energy dependence.
It is sharper at low energies, where the flares dominate.
The energy dependence of fractional rms amplitude of QPO has been used as a tool
to understand the physical origin of QPO \citep[see e.g.,][]{Gilfanov03,Cabanac10,Mukherjee12}.
The fractional rms amplitude of 3~mHz QPO
observed in 4U~1626--67 showed an increase
up to 5~keV and thereafter its value saturates~(probably due to lower count rates at higher energies).
Therefore, it seems that fluctuations in the blackbody component could be a plausible cause of the
observed mHz QPO in 4U~1626--67.
\item We found that the pulse profile shape below 2~keV
is different from that seen during spin-down phase of 4U~1626--67 (Figure~\ref{PP-XMM}).
Moreover,~during its spin-up phase pulse profiles below 2~keV
are quite different from that seen above 2~keV (Figure~\ref{EN-pp}).
A possible explanation to these observations is
changes in the emission diagram of the accretion column.
During the low luminosity phase~(spin-down) of 4U~1626--67, the emission
of the accretion column is concentrated in a beam, oriented along the magnetic
field axis while during the high luminosity phase~(spin-up) the emission diagram
changed to the fan beam pattern \citep{Basko75}.~Soft X-ray emission~(below~2~keV) is
attributed to reprocessing of the primary emission by the optically thick material (i.e., the inner accretion disk)
and, therefore, changes in the emission diagram might lead to the changes in the
illumination of the inner accretion disk and hence, different pulse profiles below 2~keV
during the spin-up phase compared to the spin-down phase of 4U~1626--67.
We also note that the similar hypothesis was also proposed by \citet{Koliopanos16} to explain
the origin of the iron line during spin-up phase.
\item The values of EW of emission lines observed in the phase averaged spectrum
suggest that the EW of O~VIII has increased by a factor of 4
compared to the value~($\sim$7~eV) measured with \emph{Suzaku} during its spin-up phase
by \citet{Camero12}.~However, EW measured with \emph{ASCA} and \emph{XMM-Newton}
during its spin-down phase was $\sim$~14~eV \citep{Angelini95,Krauss07}.
Observations made with \emph{ASCA} and \emph{XMM-Newton} during spin-down phase of 4U~1626-67 revealed
EW of O~VII to be $\sim$31~eV and $\sim$23~eV respectively \citep{Angelini95,Krauss07}
while the measurement made during spin-up phase with the \emph{Suzaku} observatory showed
a much lower value (1.3~eV).
EWs of Ne~IX and Ne~X emission lines are almost consistent with the previous measurements
made during its spin-up phase \citep{Camero12}. \\
\item From the intensity resolved spectroscopy, we found that there is an increase in the ratio
between the blackbody and power law fluxes, which suggests that the spectrum
softens with increasing intensity.~The values of line fluxes at 0.913~keV and 1.02~keV also showed an
increase with intensity.
However, we did not notice any correlation between the equivalent width of these emission lines
with the intensity
(Figure~\ref{Int-variation-flare}). \\
\item
From the pulse phase resolved spectroscopy of 4U~1626--67, we observed a strong variation of
Ne~IX emission line with the pulse phase while the emission line at 1.02~keV~(Ne~X)
showed a lack of pulsations.
A different behaviour of the Ne~IX and Ne~X emission lines across the pulse phase
suggests that these emission lines might have a different origin.~It may be possible that
Ne~IX emission line originates from the accretion disk and thus, showing a strong pulse phase
dependence while the Ne~X emission line originates from highly ionized optically thin
emission, i.e. from the material trapped in the Alfv\'{e}n shell \citep{Basko80}.
If this scenario is true, it also provides an explanation to the different line shapes
of Ne~IX and Ne~X emission lines.
The broadening observed in the profile of Ne~IX emission line could be due to the Doppler shifts while
the microscopic processes may be the cause of the broadening of Ne~X emission line. \\
Current observation made during spin-up phase of 4U~1626--67 showed a
different line intensity modulation pattern of Ne~IX emission line compared to the earlier \emph{XMM-Newton}
observation in the spin-down phase.
Pulse phase dependence of low energy emission lines in 4U~1626--67 is believed to be due to
a geometrical effect called ``warping'' of the accretion disk \citep[][]{Beri15}.
Due to warps~(wherein tilt angle of the normal to the local disk surface varies with azimuth)
in the accretion disk, one expects to observe modulation in the flux
of reprocessed emission visible along our line of sight.
Several possibilities have been discussed in the literature that might lead to
warps in the accretion disk.
One widely accepted possibility is that if the accretion disk is subject
to strong central irradiation, then it is unstable to warping \citep[see e.g.,][]{Petterson77,Pringle96}.
We also note that \citet{Pringle96} suggested that radiation-driven warping is
strongest in the outer regions of the accretion disks.
From Table~\ref{Velocity}, it is interesting to notice that O~VIII emission line at 0.653~keV
showed a lower value of velocity compared to the values measured during the
spin-down phase of 4U~1626--67.
This indicates that the radius of the accretion disk at which this emission line is formed
has moved outward during spin-up phase. \\
In order to further investigate the above,
we estimated radii of Ne/O emission line formation regions
using the expression for the ionization parameter~($\zeta = L_X/nR^2$), where $L_X$ is the
X-ray luminosity, $n$ is the ion number density and $R$ is the radius (see Table~\ref{radii}).
The values of ionization parameters calculated using the XSTAR code~\citep{Kallman82} for the optically thin
photoionized model \footnote{http://heasarc.gsfc.nasa.gov/lheasoft/xstar/xstar.html}
were adopted in our calculations.
We further assumed a constant value~($10^{13}~cm^{-3}$) for the electron number density.
This is a reasonable assumption as comparable values of number density were estimated
by \citet{Schulz01}.
It is interesting to notice from Table~\ref{radii} that the radius of line formation region for each of the ion
species has moved outwards compared to the values obtained using $L_X$ measured during spin-down phase.
This further supports our interpretation that
a strong variation of Ne~IX line during
the current spin-up phase of 4U~1626--67 is because the structures~(or warps) in the accretion disk (that produce pulse phase dependence of emission lines)
have changed during its spin-up phase or the line forming region has moved outwards
where the warps dominate. \\
Different pulse phase dependence of Ne~IX emission line observed during current spin-up phase of 4U~1626--67,
therefore, supports that there is a possible change in accretion flow geometry.
Accretion flow geometry plays an important role in transfer of angular momentum and therefore
any change in it would suggest a change in the interaction between the Keplerian
disk and the stellar magnetic field at the corotation radius.
\end{itemize}
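The radius estimate in the last item follows from inverting the ionization parameter; a minimal numerical sketch is given below (the luminosity and $\zeta$ values are placeholders, not the measured ones behind Table~\ref{radii}):

```python
import math

def line_formation_radius(l_x, n, zeta):
    """Invert zeta = L_X / (n * R^2) for the line-formation radius R.

    l_x  : X-ray luminosity in erg/s
    n    : ion number density in cm^-3 (assumed constant, 1e13 in the text)
    zeta : ionization parameter in erg cm/s (e.g. from XSTAR tables)
    """
    return math.sqrt(l_x / (n * zeta))

# Illustrative inputs only -- not the values used in the paper:
radius = line_formation_radius(l_x=1e37, n=1e13, zeta=100.0)  # -> 1e11 cm
```

Note that, at fixed $n$ and $\zeta$, the radius scales as $\sqrt{L_X}$, so a higher luminosity during spin-up directly pushes the line-formation region outwards.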
\section*{Acknowledgments}
We thank the anonymous referee for several useful suggestions which improved the
quality of the paper.
A.B. gratefully acknowledges the Raman Research Institute (RRI) for providing local hospitality
and financial assistance, where this work was started.
She is also grateful to the Royal Society and SERB~(Science $\&$ Engineering Research Board, India)
for financial support through Newton-Bhabha Fund.
A.B. would like to extend further thanks to
Michael~Smith and Matteo~Guainazzi for their useful
insights about \emph{XMM-Newton} data analysis.
The authors would like to thank all the members of the
\emph{XMM-Newton} observatory for carrying out observation
of 4U~1626--67 during its current spin-up phase and
for their contributions in the instrument preparation,~spacecraft operation,~software development,~and~
in-orbit instrumental calibration.
\section{Introduction}
A fascinating open problem of modern physics is a satisfactory
understanding of the structure of spacetime at very short length scale, of
the order of the Planck length $\lambda_P \sim 10^{-33}~\mathrm{cm}$. On one side,
the difficulties in quantizing gravity along canonical lines may be merely
technical, and we simply have to wait for new powerful
mathematical tools. What is implicit in this possibility, however, is that
the very basic structure of spacetime is a customary manifold with
suitable differential and topological structures, which is the
arena for interactions among quantized fields. On the other hand, many
arguments on operational limits on position and time measurements have been
considered in the literature \cite{a}, suggesting that this description should
be modified in a way which is reminiscent of the quantization of phase
space in ordinary quantum mechanics. It is worth stressing that, in this case,
it is the
very notion of {\it manifold} which now undergoes a dramatic change. If we
adopt a dual point of view, it is well known that the (commutative)
algebra of smooth functions over the spacetime manifold
contains, via the
Gel'fand--Naimark reconstruction theorem, all information on the
underlying space. Switching on the Planck length then amounts to considering
the noncommutative algebra still generated by positions and time $X^\mu$,
now looked upon as generators with nontrivial commutation relations
\begin{equation}
[X^\mu,X^\nu]= i \lambda_P^2 Q^{\mu \nu},
\label{1}
\end{equation}
where the antisymmetric tensor $Q^{\mu \nu}$, depends, in general, on the
$X^\mu$. The noncommutative $*$-algebra ${\cal A}$, generated by regular
representations of Eq. (\ref{1}), contains all information on what we may
call {\it
noncommutative spacetime}, while the geometric picture, which greatly helps
in the commutative cases, is lost.
The analogy with ordinary quantum mechanics
and the quantization of phase space strongly suggests to those
who are fascinated by the path integral approach, as the author of
the present paper is, that a path
integral formulation of
these noncommutative geometries should be established, taking the point of view that a
class of linear functionals
over ${\cal A}$, which turn into evaluation maps in the commutative
limit,
can be expressed as integrals, with a suitable measure, of ordinary
functions over the classical, commutative spacetime. The reason why such
an approach could eventually turn out to be useful is that it relates in
a rather simple way the algebra generated by the $X^\mu$ to its commutative
counterpart, and this is a powerful way to look for generalization at the
noncommutative level of the basic structures of Riemannian geometry: metric,
connections and curvature.
\section{Path integral over spacetime}
We start considering the pair $(M^{2n},\Omega)$, where $M^{2n}$ is an
even-dimensional
differentiable manifold and $\Omega$ a symplectic form, and introduce the
{\it generating functional}
\begin{equation}
Z(x_0,J) = N \int D[\gamma]~ \exp \left[ i \lambda_P^{-2}
\left( \int_{\Gamma} \Omega + (x,J) \right) \right],
\label{2}
\end{equation}
where $N$ is a normalization constant, $J$ an arbitrary source,
the integration is carried
over all closed curves $\gamma$ with base point $x_0 \in M^{2n}$
and $\Gamma$ is any
two--dimensional surface with boundary $\gamma$.
We have also defined
\begin{equation}
(x,J) = \int_\gamma x^\mu(\tau) J_\mu(\tau) d \tau,
\label{3}
\end{equation}
where $x^\mu(\tau)$ is any parameterization of $\gamma$.
We here illustrate the basic
construction for a manifold with trivial second homology group.
The general case will be discussed in section 3.
We therefore have
\begin{equation}
\int_{\Gamma} \Omega=
\int_\gamma A_\mu(x) {d \over d \tau}
x^\mu(\tau) d \tau,
\label{4}
\end{equation}
with $\Omega =d A$.
$Z(x_0,J)$ is invariant under curve
reparameterization, provided we redefine the
arbitrary external source $J$. In the following we will
consider $\tau \in [0,1]$.
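As a numerical sanity check of Eq. (\ref{4}) (our illustration, not part of the original argument), take $\Omega = dx \wedge dy$ on $\mathbb{R}^2$ with potential $A = x\,dy$, so that $\Omega = dA$; the line integral along a circle of radius $r$ reproduces the enclosed symplectic area $\pi r^2$:

```python
import math

def flux_through_loop(r, steps=100_000):
    """Discretize int_gamma A_mu dx^mu/dtau dtau for A = x dy along a
    circle of radius r, parameterized by tau in [0, 1].  By Stokes'
    theorem this equals int_Gamma Omega = pi r^2, the enclosed area."""
    total, dtau = 0.0, 1.0 / steps
    for k in range(steps):
        tau = (k + 0.5) * dtau
        x = r * math.cos(2.0 * math.pi * tau)
        dy_dtau = 2.0 * math.pi * r * math.cos(2.0 * math.pi * tau)
        total += x * dy_dtau * dtau
    return total

area = flux_through_loop(1.0)  # approximately pi
```

The value of the sum does not change under a monotone reparameterization of $\tau$ (up to discretization error), consistent with the invariance stated above.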
Equation (\ref{2}) is clearly inspired by the
expression of path integral in phase space with external sources in the
coherent state representation.
The generating functional $Z(x_0,J)$ defines a
(noncommutative) algebra ${\cal A}$ generated by $2n$ elements
$X^\mu$,
implicitly defined via the introduction of a set of linear functionals
$\rho_{x_0}$ as follows
\begin{eqnarray}
& &\rho_{x_0}(X^{\mu_1}...X^{\mu_k})
\equiv \lim_{\tau_i-\tau_{i-1} \rightarrow 0^-}
N Z(x_0,0)^{-1}
\int D[\gamma]~ e^{i \int_{\Gamma} \Omega /\lambda_P^2}
x^{\mu_1}(\tau_1)...x^{\mu_k}(\tau_k) \nonumber \\
& & = \lim_{\tau_i-\tau_{i-1} \rightarrow 0^-}
\left. (- i \lambda_P^2)^k Z(x_0,0)^{-1}
{\delta \over \delta J_{\mu_1} (\tau_1)}...
{\delta \over \delta J_{\mu_k} (\tau_k)} Z(x_0,J) \right|_{J=0}.
\label{6}
\end{eqnarray}
The way the limit is taken guarantees the ordering of the $X^{\mu_i}$ in the
right-hand side of Eq. (\ref{6}). Once defined on all polynomials in the
$X^\mu$, Eq. (\ref{6}) can
be applied to the entire algebra of continuous functions in the weak
topology.
To understand this definition, we consider the {\it classical}
limit $\lambda_P \rightarrow 0$.
The integral over curves is then dominated by the
contribution
at stationary points, i.e.
$\Omega_{\mu \nu}(x) dx^\nu/d \tau =0$.
Since $\Omega$ is not degenerate, the leading term
satisfying $x^\mu(0)=x^\mu(1)=x_0^\mu$,
is therefore given by
the curve $x^\mu(\tau)=x_0^\mu,~\forall \tau$, and thus
\begin{equation}
\rho_{x_0}(X^{\mu_1}...X^{\mu_k})
\rightarrow \left.
x^{\mu_1} ...x^{\mu_k} \right|_{x_0} + {\cal O}(\lambda_P^2).
\label{7}
\end{equation}
The maps $\rho_{x_0}$ reduce to the evaluation maps ($*$-homomorphisms)
of the commutative
algebra of smooth functions over $M^{2n}$, $\rho_{x_0}: f(x) \mapsto f(x_0)$.
The explicit evaluation of $Z(x_0,J)$ may be rather involved or even
impossible for a generic two-form $\Omega$; in these cases
the application of perturbation techniques,
as for ordinary path integrals in quantum mechanics, may nevertheless
provide some information
on the underlying noncommutative spacetime.
In the simple case of a constant
$\Omega$, or assuming that it is a slowly varying function of $x$ on the
scale $\lambda_P$, one can easily compute Eq. (\ref{2}) finding \cite{b}
\begin{equation}
\rho_{x_0}(X^\mu)= x_0^\mu, \qquad
\rho_{x_0}([X^\mu,X^\nu])= {i \over 2} \lambda_P^2
\Omega^{-1 \mu \nu} (x_0).
\label{8}
\end{equation}
As expected the commutator is proportional to the inverse symplectic form
$\Omega^{-1}$ and using
notation of Eq. (\ref{1}), we see that $X^{\mu}$ generates a
noncommutative algebra with
$\rho_{x_0}(Q^{\mu \nu})= \Omega^{-1 \mu \nu}(x_0)/2$.
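As an explicit illustration (a sketch we add, with a particular orientation convention), take $n=1$ and a constant symplectic form $\Omega = \theta\, dx^1 \wedge dx^2$ on $\mathbb{R}^2$:

```latex
\Omega_{\mu\nu} =
\begin{pmatrix} 0 & \theta \\ -\theta & 0 \end{pmatrix},
\qquad
\Omega^{-1\,\mu\nu} =
\begin{pmatrix} 0 & -1/\theta \\ 1/\theta & 0 \end{pmatrix},
\qquad
\rho_{x_0}\!\left([X^1,X^2]\right)
= \frac{i}{2}\,\lambda_P^2\,\Omega^{-1\,12}
= -\,\frac{i\,\lambda_P^2}{2\theta}.
```

The generators thus close a constant-commutator (Moyal-type) algebra, with noncommutativity parameter $\lambda_P^2/(2\theta)$.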
\section{Non-trivial topologies and Black Hole area quantization}
In the case of a manifold
$M^{2n}$ with nontrivial second homology group
the measure in the integral over closed curves in Eq. (\ref{2}) is,
in general, a multivalued function, since the integral of $\Omega$
over two surfaces $\Gamma$
and $\Gamma'$, both with boundary $\gamma$, may be different if
$\Gamma-\Gamma'$ is a nontrivial 2-cycle. The ambiguity in the choice of the
surface $\Gamma$, however, can be removed by requiring {\it quantization}
conditions
on the integral of $\Omega$ over a set of generators $\Gamma_i$
of $H^2(M^{2n})$
such that $\exp (i \int_{\Gamma_i} \Omega\, \lambda_P^{-2})$ is
single valued, analogous
to the one introduced to quantize the motion of a
particle in the field of a magnetic charge.
Let us consider for example the pair $(S^2,\Omega)$. The generalization to
an arbitrary manifold is straightforward \cite{b}.
For any closed curve
the measure in the path integral will
be always single valued if we require
that the integral of $\Omega$ over $S^2$ is a multiple of $2 \pi$, in unit
$\lambda_P^2$
\begin{equation}
\int_{S^2} \Omega = 2 n \pi \lambda_P^2
\label{9}
\end{equation}
We stress the point that requiring Eq. (\ref{9}) is a consistency condition
to construct, via the path integral approach outlined in this paper, a
deformation of spacetime, and so it seems to us intimately
related to the appearance of microscopic noncommutativity at the Planck
scale.
Actually, we observe that
what we have just discussed could have an intriguing relationship with the
idea that
black hole horizon area is quantized and its spectrum is uniformly spaced
\cite{c}. This is in fact what condition (\ref{9}) states, if $S^2$
represents the black hole horizon and $\Omega$ the area two-form.
Black hole physics is probably one of the best scenarios where the
structure of spacetime at small scales, otherwise unobservable,
could be felt well above the Planck length. The topological constraint
(\ref{9}) may be, perhaps, a clue in the direction of a deep interplay
of microscopic noncommutativity and macroscopic phenomena such as black hole
mass quantization.
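For concreteness (this observation is ours), Eq. (\ref{9}) implies a uniformly spaced horizon-area spectrum:

```latex
A_n = \int_{S^2} \Omega = 2 n \pi \lambda_P^2,
\qquad
\Delta A = A_{n+1} - A_n = 2 \pi \lambda_P^2,
```

i.e. the area quantum is fixed by the Planck length alone, in line with the uniformly spaced spectra conjectured in \cite{c}.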
\section{Introduction and Related Works}
\label{sec:intro}
Modern video compression systems widely adopt coding mode competition to select
the best performing tool given the signal. Coding performance improvements
of MPEG/ITU video codecs (AVC, HEVC and VVC) \cite{DBLP:journals/cm/MarpeWS06,
Sullivan:2012:OHE:2709080.2709221, VVC_Ref} are mainly brought by increasing the number
of coding modes. These modes include prediction mode (Intra/Inter), transform
type and block shape. This concept allows signal-adapted processing to be performed.
In recent years, image coding standards such as BPG (HEVC-based image coding
method) have been outperformed by neural networks-based systems
\cite{DBLP:conf/nips/MinnenBT18,
DBLP:conf/iclr/LeeCB19,DBLP:journals/corr/abs-2002-03370}. Most neural
networks-based systems are inspired by Ball\'{e} and Minnen's
works~\cite{DBLP:conf/nips/MinnenBT18, DBLP:conf/iclr/BalleLS17,
DBLP:conf/iclr/BalleMSHJ18}. They rely on an Auto-Encoder (AE)
architecture that maps the input signal to latent variables. Latent variables
are then transmitted with entropy coding, based on a probability model conveyed
as an Hyper-Prior (HP). Such systems are denoted as AE-HP systems in the
remaining of the paper. AE-HP systems are learned in an end-to-end fashion: all
components being trained according to a global objective function, minimizing a
trade off between distortion and rate. Training of AE-HP systems is often
performed following Ball\'{e}'s method \cite{DBLP:conf/iclr/BalleLS17} to
circumvent the presence of non-differentiable elements in the auto-encoder.
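For illustration, the noise-based relaxation of \cite{DBLP:conf/iclr/BalleLS17} can be sketched as follows (a framework-agnostic NumPy toy; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(latents, training):
    """Differentiable proxy for scalar quantization.

    During training, hard rounding is replaced by additive uniform
    noise on [-0.5, 0.5), which keeps the objective differentiable
    while mimicking the quantization error; at test time the latents
    are actually rounded before entropy coding."""
    if training:
        return latents + rng.uniform(-0.5, 0.5, size=latents.shape)
    return np.round(latents)

y = np.array([0.2, 1.7, -2.4])
y_hat = quantize(y, training=False)  # -> [ 0.,  2., -2.]
```

In a real AE-HP system the same relaxation is applied to both the latents and the hyper-prior side-information.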
As learned image compression already exhibits state-of-the-art performance,
learned video compression has started to attract the research community's
attention. Authors in
\cite{DBLP:conf/cvpr/LuO0ZCG19,DBLP:conf/iccv/DjelouahCSS19} proposed a method
to compress Groups Of Pictures (GOPs) inspired by standard video coding methods
\textit{i.e.} by decomposing GOPs into intra frames, without dependency, and
inter frames which are coded based on previously decoded frames. Intra frames
are coded with AE-HP systems while Inter frames processing is widely inspired by
classical codecs approaches, replacing traditional coding tools by neural
networks. First, motion vectors (representing the motion between the current
frame and the reference frames), are estimated by an optical flow
network~\cite{DBLP:conf/cvpr/RanjanB17,DBLP:conf/cvpr/SunY0K18}. Motion vectors
are encoded using a AE-HP system and used to perform a prediction of the current
frame. Finally, the residue (prediction error) is computed either in image or
latent domain and coded using another AE-HP system. Liu \textit{et al.}
\cite{DBLP:journals/corr/abs-1912-06348} tackle a similar problem and show that
using a single network for both flow estimation and coding achieves performance
similar to HEVC.
Although learned video coding already demonstrates appealing performance, it
does not exploit all the usual video coding tools. In particular, inter frames are fully
transmitted through motion compensation and residual coding even though it may
not be the best option. This is different from classical encoders,
where inter frame coding relies on a combination of \textit{Skip Mode} (direct
copy of the motion compensated reference), intra coding and residual inter
coding.
In this paper, a mode selection network (ModeNet) is proposed. Its role is to
select the most suited coding mode for each pixel. ModeNet is based on a
lightweight AE-HP system, which is trained end-to-end alongside the networks
performing the different coding modes. It learns to assign each pixel to the
coding mode that provides the best rate-distortion tradeoff. Consequently, the
proposed ModeNet can be integrated seamlessly into any neural-based coding
scheme to select the most appropriate coding mode.
ModeNet behavior and benefits are highlighted through an architecture
composed of two competing modes: each pixel is either copied from the prediction
(\textit{Skip Mode} in classical codecs) or conveyed through an AE-HP coder. We show
that using ModeNet achieves compelling performance when evaluated under the
\textit{Challenge on Learned Image Compression 2020} (CLIC20) P-frame coding
track conditions \cite{CLIC20_web_page}.
\begin{figure}
\centering
\includegraphics[scale=0.40]{CompleteSystem.pdf}
\caption{Architecture of the complete system.}
\label{CompleteSystemDiagrams}
\end{figure}
\section{Preliminary}
This work focuses on \textit{P-frame coding} with two \textit{AE-HP systems}.
These two concepts are briefly summarized below.
\textbf{AE-HP system} This coding scheme is composed of a convolutional encoder
which maps the input signal to latents and a convolutional decoder which
reconstructs the input signal from the quantized latents. Latents are transmitted
with entropy coding based on a probability model of the latents. To improve
performance, the probability model is conditioned on
side-information~\cite{DBLP:conf/iclr/BalleMSHJ18} and/or on previously received
latents~\cite{DBLP:conf/nips/MinnenBT18}.
\textbf{P-frame coding} Let $\left(\mathbf{x}_{t-1},\mathbf{x}_t\right) \in \mathbb{R}^{2 \times C
\times H \times W}$ be the previous frame and the frame to be coded,
respectively. $C$, $H$ and $W$ denote the number of color channels, height and
width of the image, respectively. The previous frame $\mathbf{x}_{t-1}$ has already been
transmitted and is thus available at the decoder side to be used as a reference
frame $\hat{\mathbf{x}}_{t-1}$. Since this work follows the CLIC20 P-frame coding test
conditions, the coding of the frame $\mathbf{x}_{t-1}$ is considered lossless, \textit{i.e.}
$\hat{\mathbf{x}}_{t-1} = \mathbf{x}_{t-1}$.
P-frame coding is the process to encode $\mathbf{x}_t$ knowing $\hat{\mathbf{x}}_{t-1}$. A
prediction $\tilde{\mathbf{x}}_t$ of $\mathbf{x}_t$ is made available, based on $\hat{\mathbf{x}}_{t-1}$ and
side-information (such as motion). The conditional entropy of $\mathbf{x}_t$ and $\tilde{\mathbf{x}}_t$ verifies:
\begin{equation}
\mathrm{H}(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t) = \mathrm{H}(\mathbf{x}_t) - \mathrm{I}(\mathbf{x}_t, \tilde{\mathbf{x}}_t) \leq \mathrm{H}(\mathbf{x}_t),
\end{equation}
where $\mathrm{H}$ is the Shannon entropy and $\mathrm{I}$ is the mutual
information. Thus using information from $\tilde{\mathbf{x}}_t$ allows to lower the
uncertainty about $\mathbf{x}_t$, resulting in better coding performance. This work
aims at minimizing a rate-distortion trade-off under a lossy P-frame coding objective:
\begin{equation}
\mathcal{L}(\lambda) = \mathrm{D}(\hat{\mathbf{x}}_t, \mathbf{x}_t) + \lambda \, R,\ \text{with}\ \hat{\mathbf{x}}_t = f(\tilde{\mathbf{x}}_t, \mathbf{x}_t),
\end{equation}
where $\mathrm{D}$ is a distortion measure, $\hat{\mathbf{x}}_t$ is the reconstruction
from an encoding/decoding process $f$ with an associated rate $R$ weighted by a
Lagrange multiplier $\lambda$.
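The entropy inequality above can be checked on a toy joint distribution of a binary source and its prediction (an illustration we add; the numbers are arbitrary):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# Toy joint distribution P(x_t, x_pred) over binary symbols, chosen
# so that the prediction carries information about the source.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

p_x = [sum(p for (x, _), p in joint.items() if x == b) for b in (0, 1)]
p_pred = [sum(p for (_, s), p in joint.items() if s == b) for b in (0, 1)]

h_x = entropy(p_x)                                   # H(x_t) = 1 bit
h_cond = entropy(joint.values()) - entropy(p_pred)   # H(x_t | x_pred)
# h_cond ~ 0.722 bit < h_x: conditioning on the prediction lowers the rate.
```

The gap $\mathrm{H}(\mathbf{x}_t) - \mathrm{H}(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t)$ is exactly the mutual information exploited by P-frame coding.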
\begin{figure*}[htb]
\begin{subfigure}[b]{1\columnwidth}
\centering
\includegraphics[scale=0.265]{RawBlocks.pdf}
\caption{Basic blocks used to build AE-HP systems.}
\label{RawBlocksDiagrams}
\end{subfigure}
\hfill
\begin{subfigure}[b]{1\columnwidth}
\centering
\includegraphics[scale=0.265]{ModeNet.pdf}
\caption{ModeNet architecture. $g_a$ and $g_s$ use LeakyReLU.}
\label{ModeNetDiagram}
\end{subfigure}
\begin{subfigure}[b]{1\columnwidth}
\centering
\includegraphics[scale=0.265]{CodecNetImage.pdf}
\caption{CodecNet for image and difference coding. $g_a$ and $g_s$ use GDN.}
\label{CodecNetImageDiffDiagram}
\end{subfigure}
\hfill
\begin{subfigure}[b]{1\columnwidth}
\centering
\includegraphics[scale=0.265]{CodecNetConditional.pdf}
\caption{CodecNet for conditional coding. $g_a$ and $g_s$ use GDN.}
\label{CodecNetConditionalDiagram}
\end{subfigure}
\caption{Detailed architecture of all used networks. \textbf{Top left figure:}
Building blocks of all subsystems. $g_a$ and $g_s$ are the main
encoder/decoder. $h_a$ and $h_s$ are the hyperprior encoder/decoder. $r$ is
an auto-regressive module as in \cite{DBLP:conf/nips/MinnenBT18}. Each block
is set-up by $f$ (number of internal features) and $n$ (number of output
features). Squared arrows denote LeakyReLU, rounded arrows refer to either
LeakyReLU or GDN \cite{DBLP:conf/iclr/BalleLS17}. Convolutions parameters:
filters number $\times$ kernel size / stride. TConv and MConv stand
respectively for Transposed convolution and Masked convolution.
$\texttt{cat}$ stands for concatenation along feature axis, Q for
quantization, AE and AD for arithmetic encoding/decoding with a Laplace
distribution $\mathcal{L}$.}
\label{fig:four figures}
\end{figure*}
\section{Mode Selection for P-frame Coding}
\label{sec:system}
\subsection{Problem formulation}
Let us define $\mathcal{S}$ as a set of pixels of frame $\mathbf{x}_t$ verifying the following inequality:
\begin{equation}
d(\tilde{\mathbf{x}}_t, \mathbf{x}_t;i) \leq d(\hat{\mathbf{x}}_t, \mathbf{x}_t;i) + \lambda \, r(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t;i),
\label{eq:pixelWiseLossComparison}
\end{equation}
where ${d(\cdot,\cdot;i)}$ is the $i$-th pixel distortion and
${r(\mathbf{x}_t\mid\tilde{\mathbf{x}}_t;i)}$ the rate of the $i$-th pixel of $\mathbf{x}_t$ knowing
$\tilde{\mathbf{x}}_t$. The set $\mathcal{S}$ gives the zones of $\mathbf{x}_t$ preferably
conveyed by using $\tilde{\mathbf{x}}_t$ copy (\textit{Skip Mode}) rather than by an
encoder-decoder system. $\mathcal{S}$ is re-written as:
\begin{align}
\begin{split}
\mathcal{S} = \left\{x_{t, i} \mid\ x_{t, i} \in \mathbf{x}_t,\ \ell (\tilde{\mathbf{x}}_t, \mathbf{x}_t; i) \leq \lambda \right\},\\
\text{ with } \ell(\tilde{\mathbf{x}}_t, \mathbf{x}_t; i) = \frac{d(\tilde{\mathbf{x}}_t, \mathbf{x}_t;i) - d(\hat{\mathbf{x}}_t, \mathbf{x}_t;i)}{r(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t;i)}.
\label{eq:staticArea}
\end{split}
\end{align}
The partitioning function $\ell$ is a rate-distortion comparator, which assigns
a coding mode (either copy or transmission) to each pixel. It is
similar to the RD-cost used to arbitrate different coding modes in traditional
video coding. In the remainder of this article, $\bar{\staticarea}$ is the complement set of
$\mathcal{S}$, used to denote all pixels not in $\mathcal{S}$ \textit{i.e.}
pixels for which transmission results in a better rate-distortion trade-off than
copy from $\tilde{\mathbf{x}}_t$.
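For intuition, the pixel-wise decision of eq.~\eqref{eq:pixelWiseLossComparison} can be sketched in a few lines of NumPy, assuming per-pixel distortion and rate maps are already available; all numeric values below are hypothetical and only serve to illustrate the partitioning:

```python
import numpy as np

def skip_mask(d_skip, d_codec, rate, lam):
    # Partitioning function ell = (d_skip - d_codec) / rate; a pixel
    # belongs to S (Skip Mode) when ell <= lambda.
    ell = (d_skip - d_codec) / rate
    return ell <= lam

# Hypothetical per-pixel maps for a 2x2 frame.
d_skip  = np.array([[0.0, 0.9], [0.1, 0.8]])  # distortion when copying x~_t
d_codec = np.array([[0.0, 0.1], [0.0, 0.1]])  # distortion after transmission
rate    = np.array([[0.2, 0.2], [0.2, 0.2]])  # rate of transmitting the pixel
mask    = skip_mask(d_skip, d_codec, rate, lam=1.0)
# mask is True on the static pixels (left column), False on the moving ones.
```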
Hand-crafting the partitioning function $\ell$ is not trivial. Indeed, both the
rate and the distortion of the $i$-th pixel depend on choices made for previous
and future pixels. Classical codecs circumvent this issue by computing
$\ell$ on blocks of pixels assumed independent of each other.
The purpose of this work is to introduce a convolutional mode selection network
(ModeNet), whose role is both to indicate which pixels belong to $\mathcal{S}$
and to convey this partitioning. This performs a pixel-wise
partitioning of $\mathbf{x}_t$, allowing both causal and anti-causal dependencies, learned
by minimizing a global rate-distortion objective function.
\subsection{Proposed system}
The proposed system is built around ModeNet, which learns a pixel-wise weighting
$\boldsymbol{\alpha}$ used to choose between two different coding methods for each pixel.
Here, the two competing methods are copying the prediction pixel from
$\tilde{\mathbf{x}}_t$ and coding the pixels of $\mathbf{x}_t$ using an AE-HP system (CodecNet).
An overview of the system architecture is shown in Fig.
\ref{CompleteSystemDiagrams}. The ModeNet and CodecNet architectures are described in
detail in Section \ref{subsec:detailArchitecture}. ModeNet is defined as a
function $m$:
\begin{equation}
R_{m},\ \boldsymbol{\alpha} = m\left(\mathbf{x}_{t-1},\mathbf{x}_t\right),
\end{equation}
where $\boldsymbol{\alpha} \in \left[0 ; 1\right]^{H \times W}$ is the pixel-wise weighting
and $R_m$ the rate needed to convey $\boldsymbol{\alpha}$. The pixel-wise weighting $\boldsymbol{\alpha}$
is continuously valued in $\left[0 ; 1\right]^{H \times W}$, enabling smooth
transitions between coding modes and avoiding blocking artifacts.
CodecNet is similarly defined as a function $c$, which codes areas
$\bar{\staticarea}$ of $\mathbf{x}_t$ (selected through $\boldsymbol{\alpha}$) using information from
$\tilde{\mathbf{x}}_t$:
\begin{equation}
R_{c},\ \hat{\mathbf{x}}_{t, c} = c\left(\boldsymbol{\alpha} \odot \tilde{\mathbf{x}}_t, \boldsymbol{\alpha} \odot \mathbf{x}_t\right).
\end{equation}
Element-wise matrix multiplication is denoted by $\odot$, $\hat{\mathbf{x}}_{t,
c} \in \mathbb{R}^{C \times H \times W}$ is the reconstruction of $\boldsymbol{\alpha} \odot
\mathbf{x}_t$ and $R_c$ the associated rate. The same $\boldsymbol{\alpha}$ is used to multiply
all $C$ color channels. ModeNet is used to split $\mathbf{x}_t$ between what goes
through CodecNet and what is directly copied from $\tilde{\mathbf{x}}_t$, without
transmission. Thus the complete system output is:
\begin{equation}
\hat{\mathbf{x}}_t = (1 - \boldsymbol{\alpha}) \odot \tilde{\mathbf{x}}_t + c(\boldsymbol{\alpha} \odot \tilde{\mathbf{x}}_t, \boldsymbol{\alpha} \odot \mathbf{x}_t).
\end{equation}
This equation highlights that the role of $\boldsymbol{\alpha}$ is to zero areas from $\mathbf{x}_t$
before transmission to spare their associated rate. The whole system is trained
in an end-to-end fashion to minimize the rate-distortion trade-off:
\begin{equation}
\mathcal{L}(\lambda) = \mathrm{D}(\hat{\mathbf{x}}_t, \mathbf{x}_t) + \lambda (R_m + R_c),
\label{eq:globalLoss}
\end{equation}
where $\mathrm{D}$ denotes a distortion metric. Following the CLIC20 P-frame
test conditions, the Multi Scale
Structural Similarity Metric (MS-SSIM)\cite{Wang03multi-scalestructural} is used:
\begin{equation*}
\mathrm{D}(\hat{\mathbf{x}}_t, \mathbf{x}_t) = 1 - \text{MS-SSIM}(\hat{\mathbf{x}}_t, \mathbf{x}_t).
\end{equation*}
As this work focuses on mode selection, a naive prediction $\tilde{\mathbf{x}}_t = \hat{\mathbf{x}}_{t-1} = \mathbf{x}_{t-1}$
is used. This avoids adding the burden of motion estimation to the system.
The results shown in this paper would still hold when working with a more relevant
prediction obtained from motion compensation.
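Abstracting away the networks, the blending of the two coding paths and the objective of eq.~\eqref{eq:globalLoss} amount to the following NumPy sketch; the numeric values and the stand-in CodecNet output are hypothetical:

```python
import numpy as np

def blend(alpha, x_pred, x_codec):
    # System output: x_hat = (1 - alpha) * x~_t plus the CodecNet
    # reconstruction of alpha * x_t (x_codec already carries alpha).
    return (1.0 - alpha) * x_pred + x_codec

def rd_loss(distortion, rate_mode, rate_codec, lam):
    # Global objective L(lambda) = D(x_hat, x_t) + lambda * (R_m + R_c).
    return distortion + lam * (rate_mode + rate_codec)

alpha  = np.array([0.0, 1.0])   # 0 -> copy the prediction, 1 -> transmit
x_pred = np.array([5.0, 5.0])   # prediction x~_t
x_c    = np.array([0.0, 7.0])   # stand-in CodecNet output for alpha * x_t
x_hat  = blend(alpha, x_pred, x_c)            # -> [5.0, 7.0]
loss   = rd_loss(0.1, 0.005, 0.05, lam=2.0)   # -> 0.1 + 2 * 0.055 = 0.21
```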
\begin{figure*}[htb]
\begin{subfigure}[b]{0.95\columnwidth}
\centering
\includegraphics[width=\columnwidth]{input_pairs.png}
\caption{The pair of frames $(\mathbf{x}_{t-1},\mathbf{x}_t)$.}
\label{ex:InputPairs}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.95\columnwidth}
\centering
\includegraphics[width=\columnwidth]{image_rate_total.pdf}
\caption{Spatial distribution of CodecNet rate in bits.}
\label{ex:CodecNetRate}
\end{subfigure}
\begin{subfigure}[b]{0.95\columnwidth}
\centering
\includegraphics[width=\columnwidth]{png_image_coder_input.png}
\caption{Areas selected for the CodecNet $\boldsymbol{\alpha} \odot \mathbf{x}_t$.}
\label{ex:CodecNetPart}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.95\columnwidth}
\centering
\includegraphics[width=\columnwidth]{png_inter_pred_copy.png}
\caption{Areas selected for prediction copy $(1 - \boldsymbol{\alpha}) \odot \tilde{\mathbf{x}}_t$.}
\label{ex:CopyPart}
\end{subfigure}
\caption{Details on the subdivision performed by ModeNet. The pair of frames
$(\mathbf{x}_{t-1},\mathbf{x}_t)$ represents a singer moving in front of a static background.
The microphone in the foreground is also motionless. Frame $\hat{\mathbf{x}}_{t-1} = \mathbf{x}_{t-1}$ is used
as prediction $\tilde{\mathbf{x}}_t$.}
\label{fig:Example}
\end{figure*}
\definecolor{imagesystems}{rgb}{0.161, 0.2, 0.361}
\definecolor{diffsystems}{rgb}{0.953, 0.654, 0.071}
\definecolor{conditionalsystems}{rgb}{0.859, 0.169, 0.224}
\definecolor{darkspringgreen}{rgb}{0.09, 0.45, 0.27}
\definecolor{battleshipgrey}{rgb}{0.52, 0.52, 0.51}
\definecolor{davysgrey}{rgb}{0.33, 0.33, 0.33}
\definecolor{ashgrey}{rgb}{0.7, 0.75, 0.71}
\pgfplotsset{minor grid style={solid,battleshipgrey, dotted}}
\pgfplotsset{major grid style={solid, thick}}
\begin{figure*}[htb]
\begin{subfigure}[b]{1\columnwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid= both ,
xlabel = {Rate [bpp]} ,
ylabel = {$\text{MS-SSIM}_{dB}$} ,
xmin = 0, xmax = 0.2,
ymin = 12, ymax = 25,
ylabel near ticks,
xlabel near ticks,
width=9cm,
height=7.25cm,
x tick label style={
/pgf/number format/.cd,
fixed,
precision=2
},
xtick distance={0.05},
ytick distance={5},
minor y tick num=4,
minor x tick num=1,
title=CodecNet performance compared to HEVC
]
\addplot[thick, loosely dotted, imagesystems, mark=star, mark options={solid, scale=1.3}] coordinates{
(0.059,12.17)
(0.125,14.68)
(0.240,17.36)
(0.426,20.34)
}node [pos=0.45, sloped, anchor=south] {HEVC Img};
\addplot[thick, densely dashed, imagesystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.218,19.20)
(0.160,17.88)
(0.105,15.95)
(0.078,14.60)
(0.065,13.91)
}node [pos=0.15, sloped, anchor=south] {NN Img};
\addplot[thick, loosely dotted, diffsystems, mark=star, mark options={solid, scale=1.3}] coordinates {
(0.035,14.55)
(0.051,15.37)
(0.070,16.19)
(0.082,16.59)
(0.096,17.04)
(0.110,17.45)
(0.128,17.94)
(0.147,18.35)
}node [pos=0.9, sloped, anchor=south] {HEVC Diff.};
\addplot[thick, densely dashed, diffsystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.137,19.85)
(0.098,18.85)
(0.055,16.76)
(0.034,15.01)
}node [pos=0, sloped, anchor=west] {NN Diff.};
\addplot[thick, densely dashed, conditionalsystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.102,20.19)
(0.065,18.94)
(0.048,18.13)
(0.039,17.62)
}node [pos=0, sloped, anchor=west] {NN Cond.};
\end{axis}
\end{tikzpicture}
\caption{Performance of CodecNet systems alone.}
\label{AnchorResults}
\end{subfigure}
\hfill
\begin{subfigure}[b]{1\columnwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid= both ,
xlabel = {Rate [bpp]} ,
ylabel = {$\text{MS-SSIM}_{dB}$} ,
xmin = 0, xmax = 0.2,
ymin = 12, ymax = 25,
width=9cm,
height=7.25cm,
ylabel near ticks,
xlabel near ticks,
x tick label style={
/pgf/number format/.cd,
fixed,
precision=2
},
xtick distance={0.05},
ytick distance={5},
minor y tick num=4,
minor x tick num=1,
title=Systems performance with and without ModeNet
]
\addplot[thick, densely dashed, imagesystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.218,19.20)
(0.160,17.88)
(0.105,15.95)
(0.078,14.60)
(0.065,13.91)
}node [pos=1, sloped, anchor=east] {NN Img.};
\addplot[thick, densely dashed, diffsystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.137,19.85)
(0.098,18.85)
(0.055,16.76)
(0.034,15.01)
}node [pos=1, sloped, anchor=east] {NN Diff.};
\addplot[thick, densely dashed, conditionalsystems, mark=square*, mark options={solid, scale=1}] coordinates {
(0.102,20.19)
(0.065,18.94)
(0.048,18.13)
(0.039,17.62)
}node [pos=1, sloped, anchor=east] {NN Cond.};
\addplot[thick, solid, imagesystems, mark=*, mark options={solid, scale=1}] coordinates {
(0.125,20.59)
(0.083,19.02)
(0.063,18.30)
(0.051,17.53)
}node [pos=0, sloped, anchor=west, yshift=-0.05cm] {NN Mode Img.};
\addplot[thick, solid, conditionalsystems, mark=*, mark options={solid, scale=1}] coordinates {
(0.155,22.28)
(0.101,20.77)
(0.064,19.35)
(0.046,18.57)
}node [pos=0.2, sloped, anchor=south] {NN Mode Cond.};
\addplot[thick, loosely dotted, darkspringgreen, mark=star, mark options={solid, scale=1.3}] coordinates {
(0.021,20.40)
(0.030,21.06)
(0.050,22.14)
(0.070,22.89)
}node [pos=1, sloped, anchor=west] {HEVC LP};
\end{axis}
\end{tikzpicture}
\caption{Improvements brought by ModeNet.}
\label{ModeNetResults}
\end{subfigure}
\caption{Rate-distortion performance of the systems. All systems are
evaluated on CLIC20 P-frame validation dataset. Quality metric is
$\text{MS-SSIM}_{dB} = -10 \log_{10} (1 - \text{MS-SSIM})$ (the higher the
better). Rate is indicated in bits per pixel (bpp). Img. denotes image,
Diff. difference, Cond. conditional and HEVC LP is HEVC in low-delay P
configuration.}
\label{fig:results}
\end{figure*}
\subsection{Networks architecture}
\label{subsec:detailArchitecture}
Both ModeNet and CodecNet networks are built from standard AE-HP blocks
described in Fig. \ref{RawBlocksDiagrams}. ModeNet's role is to process the
previous and current frames in order to transmit the pixel-wise weighting $\boldsymbol{\alpha}$. It is
implemented as a lightweight AE-HP system (\textit{cf.} Fig.
\ref{ModeNetDiagram}), with $\mathbf{x}_{t-1}$ and $\mathbf{x}_t$ as inputs. A bias of $0.5$ is
added to the output as it makes training easier. To ensure that $\boldsymbol{\alpha} \in
\left[0, 1\right]^{H \times W}$, a clipping function is used. ModeNet has 200~000
parameters, which represents around 10~\% of the number of parameters of
CodecNet.
In order to transmit pixels in $\bar{\staticarea}$, three different configurations
of CodecNet are investigated. Two of them are based on the architecture depicted
in Fig. \ref{CodecNetImageDiffDiagram}. They consist of either plain image
coding of $\mathbf{x}_t$ or difference coding of $(\mathbf{x}_t - \tilde{\mathbf{x}}_t)$ (prediction
error coding). The last configuration is conditional coding, denoted as $(\mathbf{x}_t \mid
\tilde{\mathbf{x}}_t)$ and shown in Fig. \ref{CodecNetConditionalDiagram}. This configuration
theoretically results in better performance. Indeed, from a source coding
perspective:
\begin{equation}
\mathrm{H}(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t) \leq \min\left(\mathrm{H}(\mathbf{x}_t),\ \mathrm{H}(\mathbf{x}_t - \tilde{\mathbf{x}}_t)\right).
\label{eq:entropy}
\end{equation}
Therefore, coding $\mathbf{x}_t$ while retrieving all information from $\tilde{\mathbf{x}}_t$
results in less information to transmit than difference or image coding.
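Inequality \eqref{eq:entropy} can be illustrated numerically on a toy example. The sketch below (standard library only) estimates empirical entropies from hypothetical aligned pixel values of the current frame and its prediction:

```python
import math
from collections import Counter

def entropy(samples):
    # Empirical Shannon entropy in bits.
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def cond_entropy(xs, ys):
    # H(X | Y) = H(X, Y) - H(Y).
    return entropy(list(zip(xs, ys))) - entropy(ys)

# Hypothetical aligned pixel values of x_t and its prediction x~_t.
x_t    = [0, 1, 2, 3, 0, 1, 2, 3]
x_pred = [0, 1, 2, 3, 1, 2, 3, 0]
diff   = [a - b for a, b in zip(x_t, x_pred)]

h_cond = cond_entropy(x_t, x_pred)  # H(x_t | x~_t) = 1.0 bit here
h_img  = entropy(x_t)               # H(x_t) = 2.0 bits
h_diff = entropy(diff)              # H(x_t - x~_t), about 1.41 bits
# h_cond <= min(h_img, h_diff), as stated by the inequality.
```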
\section{Network training}
All networks are trained in an end-to-end fashion to minimize the global loss
function stated in eq. \eqref{eq:globalLoss}. Non-differentiable parts are
approximated as in Ball\'e's work
\cite{DBLP:conf/iclr/BalleLS17,DBLP:conf/iclr/BalleMSHJ18} to make training
possible. End-to-end training allows ModeNet to learn to partition $\mathbf{x}_t$,
without the need of an auxiliary loss or a hand-crafted criterion. Due to the
competition between signal paths, some care is taken when training. The training
process is composed of two stages:
\textbf{Warm-up.} Training of CodecNet only (\textit{i.e.} ModeNet weights are
frozen). Unlike copy, CodecNet is not immediately ready to process its input.
Thus, CodecNet has to be trained for a few epochs so that the competition between copy
and CodecNet becomes meaningful.
\textbf{Alternate training.} Alternate training of ModeNet and CodecNet, one
epoch for each (\textit{i.e.} the other network weights are frozen).
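The two-stage schedule above can be sketched as a simple epoch dispatcher; the number of warm-up epochs is illustrative (the text only says "a few"):

```python
def training_schedule(num_epochs, warmup_epochs):
    # Which sub-network is trained at each epoch, the other being frozen:
    # CodecNet alone during warm-up, then strict one-epoch alternation.
    schedule = []
    for epoch in range(num_epochs):
        if epoch < warmup_epochs:
            schedule.append("codecnet")  # warm-up: ModeNet frozen
        elif (epoch - warmup_epochs) % 2 == 0:
            schedule.append("modenet")
        else:
            schedule.append("codecnet")
    return schedule

# e.g. 2 warm-up epochs followed by 4 alternating epochs:
# ['codecnet', 'codecnet', 'modenet', 'codecnet', 'modenet', 'codecnet']
```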
The training set is constructed from the CLIC20 P-frame training dataset
\cite{CLIC20_web_page}. Half a million $256 \times 256$ pairs of crops are
randomly extracted from consecutive frames. The batch size is set to 8 and an
initial learning rate of $10^{-4}$ is used. The learning rate is divided by 5 at
50~\% and 75~\% of the training.
\section{Mode Visualisation}
This section details the processing of a pair of frames $(\mathbf{x}_{t-1},\mathbf{x}_t)$ by the
proposed system. Frames are from the sequence $\textit{CoverSong\_720P-3261}$
extracted from the CLIC20 P-frame dataset. The system used for generating the
visuals is implemented as in Fig. \ref{CompleteSystemDiagrams}.
Figure \ref{ex:InputPairs} shows the inputs of ModeNet. They are encoded and
decoded as the pixel-wise weighting $\boldsymbol{\alpha}$. The value of $\boldsymbol{\alpha}$ tends to be
zero\footnote{As images are in YUV format, all-zero areas appear in green}
for pixels in $\mathcal{S}$ \textit{i.e.} when copying $\tilde{\mathbf{x}}_t$ results in
a better rate-distortion cost than transmission through CodecNet. $\mathcal{S}$
corresponds to static areas in $(\mathbf{x}_{t-1}, \mathbf{x}_t)$, such as the background and the
microphone, which are well captured by $\boldsymbol{\alpha}$. These areas are shown in Fig.
\ref{ex:CopyPart}.
The selected CodecNet inputs are $\boldsymbol{\alpha} \odot \mathbf{x}_t$ and $\boldsymbol{\alpha} \odot \tilde{\mathbf{x}}_t$,
depicted in Fig. \ref{ex:CodecNetPart}. Copying areas of the prediction
$\tilde{\mathbf{x}}_t$ makes it possible to zero areas in $\mathbf{x}_t$, which prevents CodecNet from spending
rate on these areas. Figure \ref{ex:CodecNetRate} shows the spatial
distribution of the rate in CodecNet and clearly highlights this behavior.
In this example, the rate associated with $\boldsymbol{\alpha}$ is 0.005 bit per pixel (bpp).
This shows that ModeNet is able to convey a smooth partitioning of an arbitrary
number of objects for a marginal amount of rate.
\section{Experimental Results}
Performance improvements brought by ModeNet are assessed on the CLIC20 P-frame
validation set, under the challenge test conditions. In order to obtain
RD-curves, each system is trained with different values of $\lambda$. Results are gathered
in Fig.~\ref{fig:results}. For the sake of brevity, systems denoted as
\textit{NN Mode X} are complete systems (\textit{cf.} Fig.
\ref{CompleteSystemDiagrams}) composed of both ModeNet and CodecNet in coding
configuration X. Similarly, systems denoted \textit{NN X} are CodecNet-only systems
without ModeNet (\textit{i.e.} no copy possibility: $\boldsymbol{\alpha}$ is an all-ones matrix).
\subsection{Anchors}
CodecNet performance is assessed by training and evaluating it without ModeNet,
meaning that $\mathbf{x}_t$ is completely coded through CodecNet. The three
configurations of CodecNet (\textit{cf.} section \ref{subsec:detailArchitecture}
and Fig. \ref{fig:four figures}) are tested. The image configuration is
compared with HEVC in All Intra configuration. Difference configuration is compared
with HEVC coding the pre-computed difference image. For both comparisons, HEVC
encodings are performed with the HM 16.20 reference software. Results in terms of
MS-SSIM versus the rate are shown in Fig. \ref{AnchorResults}. CodecNet achieves
consistently better performance than HEVC for both configurations across all
bitrates, proving its competitiveness.
Conditional coding achieves better performance than both difference and image
coding as expected from eq. \eqref{eq:entropy}. This shows the relevance of
performing conditional coding relative to difference coding.
\subsection{Performances of ModeNet-based systems}
Performances of ModeNet-based systems are shown in Fig. \ref{ModeNetResults}.
Using ModeNet increases the performance of both image and conditional coding.
Image coding of $\mathbf{x}_t$ alone does not have any information about the previous
frame. Thus, adding ModeNet and the possibility of copying areas of
$\tilde{\mathbf{x}}_t$ results in an important increase of the performance.
Interestingly, NN Mode Image achieves significantly better results than NN
Difference. As illustrated in Fig. \ref{fig:Example}, $\mathcal{S}$ tends to
represent the areas similar in $(\tilde{\mathbf{x}}_t, \mathbf{x}_t)$, which are well
handled by difference coding. Thus, the performance gap between NN Mode
Image and NN Difference arises on $\bar{\staticarea}$, where image coding
outperforms difference coding.
An ideal conditional coder is able to retrieve all information about
$\mathbf{x}_t$ from $\tilde{\mathbf{x}}_t$, making $\tilde{\mathbf{x}}_t$ copy useless. However, leveraging
all information in $\tilde{\mathbf{x}}_t$ is not possible for a neural network with
reduced complexity. There are still areas for which $\tilde{\mathbf{x}}_t$ copy provides
a smaller rate-distortion cost than transmission. Thus using ModeNet to identify
them improves performance.
To better appreciate the results, HEVC low-delay P (LP) performance is presented.
HEVC LP codes $\mathbf{x}_t$ with $\mathbf{x}_{t-1}$ as reference frame and is able to perform motion
compensation to obtain a relevant prediction. Consequently, it outperforms all
other systems which are constrained to directly use $\mathbf{x}_{t-1}$ as their prediction, without motion compensation.
Using ModeNet with the best CodecNet configuration (conditional coding) makes it
possible to decrease the rate by 40~\% compared to difference coding for the whole frame.
Even though this gap would decrease when working with a motion-compensated
prediction, we believe that using ModeNet to arbitrate between conditional
coding of $(\mathbf{x}_t \mid \tilde{\mathbf{x}}_t)$ and copy of $\tilde{\mathbf{x}}_t$ would improve
most learned video coding methods, which still use difference coding for the
whole frame.
\section{Conclusion and Future Works}
In this paper, we propose a mode selection network which learns to transmit a
partitioning of a frame to code, making it possible to choose among different coding
methods pixel-wise. ModeNet benefits are illustrated on a P-frame coding
task. It is shown that coding the prediction error is not necessarily the best
choice, and that using ModeNet to select better coding methods significantly
increases performance.
This paper shows that the proposed ModeNet performs a smooth partitioning of an
arbitrary number of areas in a frame, for a marginal rate and complexity
overhead. It can be generalized to other coding schemes to leverage the competition of
complementary coding modes, which is known to be one of the most powerful tools
in classical video coding.
An extension of this work is to use motion information to improve the
prediction process. As the proposed method outperforms residual coding, having a
competitive motion compensated prediction would result in compelling
performance.
\bibliographystyle{IEEEbib}
\section{Introduction}
Pfister's Local-Global Principle says that a regular quadratic form over a (formally) real field represents a torsion element in the Witt ring if and only if its signature at each ordering of the field is zero.
This result has been extended in \cite{LU} to central simple algebras with involution.
The theory of central simple algebras with involution is a natural extension of quadratic form theory.
On the one hand many concepts and related results associated to quadratic forms
have been extended to algebras with involution. Examples include isotropy, hyperbolicity,
cohomological invariants and signatures.
On the other hand quadratic forms are used as tools in the study of algebras with involution. Examples include involution trace forms and spaces of similitudes.
In this article we are interested in
weakly hyperbolic algebras with involution, a natural generalization of torsion quadratic forms considered first in \cite[Chap.~5]{Unger}.
In \cite{LU} such algebras with involution were characterized as those having trivial signature at all orderings of the base field, thus generalizing Pfister's Local-Global Principle.
We aim to give a new exposition of this result, including several new aspects and extensions.
We attempt to minimize the use of hermitian forms and treat algebras with involution as direct analogues of quadratic forms.
The structure of this article is as follows.
In Section~\ref{Sec:PLGP} we give a self-contained presentation of Pfister's Local-Global Principle for quadratic forms in a generalized version, relative to a preordering.
Along the way we will set up the necessary background material from the theory of quadratic forms
and ordered fields.
This corresponds to the material covered in
\cite[Chap.~1]{LamOVQ}.
Our approach makes crucial use of Lewis' annihilating polynomials, enabling us to touch on
the quantitative aspect of the relation between nilpotence and torsion.
In Sections~\ref{sec3}, \ref{sec4} and \ref{sec5} we recall the basic terminology for algebras with involution, consider their relations to quaternion algebras and quadratic forms and study involution trace forms.
In Section~\ref{sec6} we treat the notion of hyperbolicity for algebras with involution and
cite the relevant results about hyperbolicity behaviour over field extensions.
In Section~\ref{sec7} we turn to the study of algebras with involution over ordered fields.
In \eqref{P:rcai} we obtain a classification over real closed fields.
We then provide a uniform definition of signatures for involutions of both kinds with respect to an ordering.
Signatures of involutions were introduced in \cite{LT} for involutions of the first kind
and in \cite{Queg} for involutions of the second kind, and both cases are treated in \cite[(11.10), (11.25)]{BOI}.
In Section~\ref{sec8} we give a new proof of the main result of \cite{LU}, an analogue of Pfister's Local-Global Principle for algebras with involution.
As we present this result in \eqref{T:PLULG} it further covers an observation due to Scharlau in \cite{Scha70} on the torsion part of Witt groups.
In \eqref{T:PLULG-Pre} we extend this result to a local-global principle for $T$-hyperbolicity with respect to a preordering $T$.
Some of the essential ideas contained in Sections~\ref{sec7} and~\ref{sec8} germinated in the {MSc} thesis of Beatrix Bernauer \cite{BB}, prepared under the guidance of the first named author.
In its original version for quadratic forms as well as in the generalized version for algebras with involution Pfister's Local-Global Principle relates the hyperbolicity of tensor powers to the hyperbolicity of multiples. For quadratic forms this corresponds to the relation between nilpotence and torsion for an element of the Witt ring. In Section~\ref{sec9} we touch on the quantitative aspect of this relation in the setting of algebras with involution.
\section{Pfister's Local-Global Principle}
\label{Sec:PLGP}
We refer to \cite{Lam} and \cite{Scharlau} for the foundations of quadratic form theory over fields and the basic terminology.
Let $K$ be a field of characteristic different from $2$.
We denote by $\mg{K}$ the multiplicative group of $K$, by $\sq{K}$ the subgroup of nonzero squares, and by $\sums{K}$ the subgroup of nonzero sums of squares in $K$.
If $\sums{K}=\sq{K}$ then $K$ is said to be \emph{pythagorean}.
By a \emph{quadratic form over $K$} we mean a pair $(V,B)$ consisting of a finite-dimensional $K$-vector space $V$ and a regular symmetric $K$-bilinear form $B:V\times V\longrightarrow K$.
We mostly use a (single) lower case Greek letter to denote such a pair and often say `form' instead of `quadratic form'.
If $\varphi=(V,B)$ is a form over $K$,
we say that $a\in\mg{K}$ is \emph{represented by $\varphi$} if $a=B(x,x)$ for some $x\in V$, and we write $\dset{K}{(\varphi)}$ for the elements of $\mg K$ represented by $\varphi$.
Up to isometry a form of dimension $n$ is given by a diagonalization $\langle a_1,\dots,a_n\rangle$, where $a_1,\dots,a_n\in\mg{K}$ are the values represented on some orthogonal basis.
Given $n\in\mathbb{N}$ and $a_1,\dots,a_n\in\mg{K}$ we write
$\la\!\la a_1,\dots,a_n\ra\!\ra$ to denote the form $\langle 1,-a_1\rangle\otimes\cdots\otimes\langle 1,-a_n\rangle$ and call this an \emph{$n$-fold Pfister form}.
By \cite[Chap.~4, (1.5)]{Scharlau} a Pfister form is either anisotropic or hyperbolic.
We consider quadratic forms up to isometry and use the equality sign to indicate that two forms are isometric.
Given a quadratic form $\varphi$ over $K$ and $m\in \mathbb{N}$, we write $m\times \varphi$ for the $m$-fold orthogonal sum $\varphi \perp \cdots \perp \varphi$ and further $\varphi^{\otimes m}$ for the $m$-fold tensor product $\varphi\otimes\cdots\otimes\varphi$.
We abbreviate $\mathsf{D}_K(m) =\mathsf{D}_K(m\times \langle 1\rangle)$, which is
the set of nonzero sums of $m$ squares in $K$.
Let $W\!K$ denote the Witt ring of $K$ and $I\!K$ its fundamental ideal, which consists of the classes of even-dimensional quadratic forms over $K$.
For $n\in\mathbb{N}$ we write $I^n\!K$ for $(I\!K)^n$, the $n$th power of $I\!K$.
Recall that $I^n\!K$ is generated as a group by the Witt equivalence classes of the $n$-fold Pfister forms.
We sometimes write $[\varphi]$ to denote the class in $W\!K$ given by a form $\varphi$.
For $n\in\mathbb{N}$ let
$$L_n(X) \,\,=\,\, \prod_{i=0}^n (X-n+2i)\,.$$
Note that $L_n(-X)=(-1)^{n+1}\cdot L_n(X)$, so that the polynomial $L_n(X)$ is either even or odd.
In \cite{Lew}, Lewis showed that these polynomials have a crucial property relating to quadratic forms, and that this fact can be applied to study the structure of Witt rings.
\begin{thm}[Lewis]\label{T:Lewis}
Let $n\in \mathbb{N}$ and let $\varphi$ be a quadratic form of dimension $n$ over $K$. Then $L_n([\varphi]) = 0$ in $W\!K$.
\end{thm}
For completeness we include a proof due to K.H.~Leung, also given in~\cite{Lew}.
\begin{proof}
We argue by induction on $n$, the case $n=0$ being trivial. As $(a\varphi)^{\otimes 2} = \varphi^{\otimes 2}$ for all $a\in\mg{K}$ and $L_n(X)$ is either even or odd, we may scale $\varphi$ and assume that $\varphi = \varphi'\perp\langle 1\rangle$ where $\varphi'$ is a form of dimension $n-1$.
Using the induction hypothesis for $\varphi'$ we obtain that $L_{n-1}([\varphi]-1)=L_{n-1}([\varphi'])=0$.
Since $L_n(X) = (X+n)\cdot L_{n-1}(X-1)$ we conclude that $L_n([\varphi])=0$.
\end{proof}
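The two facts about $L_n$ used in this proof, the parity relation $L_n(-X)=(-1)^{n+1}L_n(X)$ and the factorization $L_n(X)=(X+n)\cdot L_{n-1}(X-1)$, can be verified mechanically. Since both sides are polynomials of degree $n+1$, checking equality at more than $n+1$ integer points establishes the identities; a short Python check:

```python
def lewis(n, x):
    # Evaluate L_n(x) = prod_{i=0}^{n} (x - n + 2i).
    v = 1
    for i in range(n + 1):
        v *= x - n + 2 * i
    return v

for n in range(1, 8):
    for x in range(-10, 11):
        # Parity: L_n(-X) = (-1)^(n+1) L_n(X).
        assert lewis(n, -x) == (-1) ** (n + 1) * lewis(n, x)
        # Factorization used in the proof: L_n(X) = (X + n) L_{n-1}(X - 1).
        assert lewis(n, x) == (x + n) * lewis(n - 1, x - 1)
```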
\begin{cor}
\label{C:alpha-mult}
Let $n\in\mathbb{N}$ and let $\varphi$ be a quadratic form of dimension $2n$ over~$K$.
Then $2^{2n-1}n!(n-1)!\cdot[\varphi]$ is a multiple of $[\varphi]^2$ in $W\!K$.
\end{cor}
\begin{proof}
We may scale $\varphi$ and assume that $\varphi=\langle 1\rangle\perp\varphi'$ where $\varphi'$ is a form of dimension $2n-1$.
Then $L_{2n-1}([\varphi'])=0$ by \eqref{T:Lewis}. It follows that $[\varphi]$ is a zero of the polynomial
\[\smash{L_{2n-1}(X-1)=(X-2n)X\prod_{i=1}^{n-1} (X^2-4i^2).}\]
This implies the statement.
\end{proof}
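To make the last step explicit: every monomial of $L_{2n-1}(X-1)$ other than the linear one is divisible by $X^2$, and the coefficient of $X$ is $-2n\prod_{i=1}^{n-1}(-4i^2)$, whose absolute value equals $2^{2n-1}\,n!\,(n-1)!$. A quick numerical confirmation of this constant:

```python
from math import factorial

for n in range(1, 9):
    # Coefficient of X in (X - 2n) X prod_{i=1}^{n-1} (X^2 - 4 i^2):
    # only the constant part of the product contributes, giving
    # c = -2n * prod_{i=1}^{n-1} (-4 i^2).
    c = -2 * n
    for i in range(1, n):
        c *= -4 * i * i
    assert abs(c) == 2 ** (2 * n - 1) * factorial(n) * factorial(n - 1)
```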
\begin{cor}
\label{C:rad}
Let $J$ be an ideal of $W\!K$ contained in $I\!K$ and such that $W\!K/J$ is torsion free.
Then $J$ is a radical ideal.
\end{cor}
\begin{proof}
For $\alpha \in W\!K$ with $\alpha^{2}\in J$ we obtain by \eqref{C:alpha-mult} that $m\alpha\in J$ for some $m\geq 1$, and as $W\!K/J$ is torsion free we conclude that $\alpha\in J$.
This implies the statement.
\end{proof}
An \emph{ordering of $K$} is a set $P\subseteq K$ that is additively and multiplicatively closed and that satisfies $P\cup -P=K$ and $P\cap -P=0$.
Any such set $P$ is the \emph{positive cone} $\{ x\in K\mid x\geq 0\}$ for a unique total order relation $\leq$ on $K$ that is compatible with the field operations.
Let $X_K$ denote the set of orderings of $K$; it can be equipped with the \emph{Harrison topology} (cf.~\cite[Chap.~VIII, Sect.~6]{Lam}), but this is not relevant in the sequel.
Let $T\subseteq K$ be additively and multiplicatively closed with $\sq{K}\cup\{0\}\subseteq T$.
Then $T+xT=\{s+xt\mid s,t\in T\}$ is additively and multiplicatively closed for any $x\in K$.
Moreover $\mg T=T\setminus \{0\}$ is a subgroup of $\mg{K}$ containing $\sums{K}$.
If further $-1\notin T$, then $T$ is called a \emph{preordering of $K$}.
Any ordering is a preordering.
Furthermore, if $T$ is a preordering of $K$, then so is $T+xT$ for any $x\in K\setminus-T$.
\begin{prop}\label{P:preord-ext}
Any preordering is contained in an ordering.
\end{prop}
\begin{proof}
Using Zorn's Lemma, we obtain that any preordering is contained in a maximal preordering.
For a preordering $T$ of $K$ that is not an ordering, there exists an element $x\in K\setminus (T\cup-T)$ and then $T+xT$ is a preordering of $K$ that strictly contains $T$.
Hence, any maximal preordering is an ordering.
\end{proof}
If the field $K$ has an ordering we say that it is \emph{real}, otherwise \emph{nonreal}.
\begin{thm}[Artin-Schreier]\label{T:AS}
The field $K$ is real if and only if $-1\notin\sums{K}$.
\end{thm}
\begin{proof}
The set $\sums{K}\cup\{0\}$ is a preordering of $K$ if and only if $-1\notin \sums{K}$.
Since any ordering of $K$ contains $\sums{K}\cup\{0\}$, the
statement follows from \eqref{P:preord-ext}.
\end{proof}
For a preordering $T$ of $K$ we set $X_T=\{P\in\ X_K\mid T\subseteq P\}$.
\begin{thm}[Artin]\label{T:A}
Assume that $T$ is a preordering of $K$. Then $T=\!\!\bigcap\limits_{P\in X_T}\!\!P$.
\end{thm}
\begin{proof}
For $x\in K\setminus T$ the set $T-xT$ is a preordering of $K$,
hence by \eqref{P:preord-ext} contained in some ordering $P$, which then contains $T$ but does not contain $x$.
\end{proof}
\begin{cor}
If $K$ is real, then $\sums{K}\cup\{0\}$ is a preordering and equal to $\bigcap\limits_{P\in X_K}\!\!P$.
\end{cor}
\begin{proof}
This is clear from \eqref{T:AS} and \eqref{T:A}.
\end{proof}
Any $P\in X_K$ determines a unique ring homomorphism
${\rm sign}_P:W\!K\longrightarrow \mathbb Z$ that maps the class of $\langle a\rangle$ to $1$ for all $a\in\mg{P}$, called the \emph{signature at $P$}.
Furthermore, any form $\varphi$ over $K$ induces a map $\widehat{\varphi} : X_K\longrightarrow \mathbb Z, P\longmapsto {\rm sign}_P(\varphi)$ (cf.~\cite[Chap.~2, \S4]{Scharlau}).
We obtain a ring homomorphism $${\rm sign} : W\!K \longrightarrow \mathbb Z^{X_K}, \varphi\longmapsto \widehat{\varphi}$$
called the \emph{total signature}.
If $K$ is nonreal, then $X_K=\emptyset$ and $\mathbb Z^{X_K}$ is the ring with one element.
Let $T$ be a fixed preordering of $K$.
We write $${\rm sign}_T: W\!K\longrightarrow \mathbb Z^{X_T},\varphi\longmapsto \widehat{\varphi}|_{X_T}$$
and we denote the kernel of this homomorphism by $I_TK$.
Let $\varphi$ be a quadratic form over $K$.
We say that $\varphi$ is \emph{$T$-positive} if $\varphi$ is nontrivial and $\dset{K}{(\varphi)}\subseteq \mg{T}$.
If $a_1,\dots,a_n\in\mg{K}$ are such that $\varphi=\langle a_1,\dots,a_n\rangle$, then $\varphi$ is $T$-positive if and only if $a_1,\dots,a_n\in\mg{T}$.
Hence, orthogonal sums and tensor products of $T$-positive forms are again $T$-positive.
We say that $\varphi$ is \emph{$T$-isotropic} or \emph{$T$-hyperbolic} if there exists a $T$-positive form $\vartheta$ over $K$ such that $\vartheta\otimes\varphi$ is isotropic or hyperbolic, respectively.
We write $\dset{T}{(\varphi)}$ for the union of the sets $\dset{K}{(\vartheta\otimes\varphi)}$ where $\vartheta$ runs over all $T$-positive forms over $K$.
For $a_1,\dots,a_n\in\mg{K}$ we set $\dset{K}{\langle a_1,\dots,a_n\rangle}=\dset{K}{(\langle a_1,\dots,a_n\rangle)}$ and $\dset{T}{\langle a_1,\dots,a_n\rangle}=\dset{T}{(\langle a_1,\dots,a_n\rangle)}$, and we recall that $[\langle a_1,\dots,a_n\rangle]$ stands for the Witt equivalence class of $\langle a_1,\dots,a_n\rangle$.
\begin{prop}
Let $n\in\mathbb{N}$ and $a_1,\dots,a_n\in\mg{K}$.
The form $\langle a_1,\dots,a_n\rangle$ is $T$-isotropic if and only if $\langle t_1a_1,\dots,t_na_n\rangle$ is isotropic for certain $t_1,\dots,t_n\in\mg{T}$.
For $a\in\mg{K}$, we have that $a\in \dset{T}{\langle a_1,\dots,a_n\rangle}$ if and only if $a\in\dset{K}{\langle t_1a_1,\dots,t_na_n\rangle}$ for certain $t_1,\dots,t_n\in\mg{T}$.
\end{prop}
\begin{proof}
Let $\varphi=\langle a_1,\dots,a_n\rangle$.
For $t_1,\dots,t_n\in\mg{T}$ the form $\vartheta=\langle t_1,\dots,t_n\rangle$ is $T$-positive, and $\langle t_1a_1,\dots,t_na_n\rangle$ is a subform of $\vartheta\otimes\varphi$.
This shows the right-to-left implications.
To show the left-to-right implications, consider a $T$-positive form $\vartheta$ and an element $a\in K$ that is non-trivially represented by $\vartheta\otimes\varphi$.
Since $\vartheta\otimes\varphi=a_1\vartheta\perp\dots\perp a_n\vartheta$ it follows that there exist $s_1,\dots,s_n\in\dset{K}{(\vartheta)}\cup\{0\}$, not all equal to zero, such that $a=a_1s_1+\dots+a_ns_n$.
Letting $t_i=1$ if $s_i=0$ and $t_i=s_i$ otherwise for $1\leq i\leq n$, we have that $t_1,\dots,t_n\in\mg{T}$ and that $a$ is represented nontrivially by $\langle t_1a_1,\dots,t_na_n\rangle$.
\end{proof}
\begin{prop}\label{P:LL}
Let $\mf{p}$ be a prime ideal of $W\!K$ different from $I\!K$.
The set $P=\{t\in\mg{K}\mid [\langle 1,-t\rangle]\in\mf{p}\}\cup\{0\}$ is an ordering of $K$, and $I_PK\subseteq \mf{p}$.
\end{prop}
\begin{proof}
For $s,t\in P\setminus\{0\}$ we have that $[\langle 1,-st\rangle]=[\langle t\rangle]\cdot([\langle 1,-s\rangle]-[\langle 1,-t\rangle])\in \mf{p}$ and thus $st\in P$. Therefore $P$ is a multiplicatively closed subset of $K$.
For $t\in\mg{K}$ we have $[\langle 1,-t\rangle]\cdot [\langle 1,t\rangle]=0\in\mf{p}$ and thus $[\langle 1,-t\rangle]\in \mf{p}$ or $[\langle 1,t\rangle]\in\mf{p}$, showing that $K=P\cup-P$.
Since $\mf{p}$ is different from $I\!K$, which is a maximal ideal of $W\!K$ and generated by the elements $[\langle 1,-a\rangle]$ with $a\in\mg{K}$, we obtain that $P\subsetneq K$.
Since $K=P\cup-P$ it follows that $-1\notin P$ and $P\cap-P=\{0\}$.
To show that $P$ is additively closed, we consider $s,t\in P\setminus\{0\}$.
As $s^{-1}t\in P$ we have $s+t\neq 0$.
Using \cite[Chap.~I, (5.1)]{Lam} we see that $[\langle 1,s+t\rangle]\cdot [\langle 1,st\rangle]=[\langle 1,s\rangle]\cdot [\langle 1,t\rangle]$.
As $-s,-t\notin P$, the elements $[\langle 1,s\rangle]$ and $[\langle 1,t\rangle]$ do not lie in $\mf{p}$, thus neither does their product, for $\mf{p}$ is prime.
We conclude that $[\langle 1,s+t\rangle]\notin\mf{p}$ and thus $s+t\in K\setminus-P=P\setminus\{0\}$.
Hence $P$ is additively closed.
This shows that $P$ is an ordering of $K$.
The ideal $I_PK$ is generated by the classes of forms $\langle 1,-t\rangle$ with $t\in\mg{P}$, and these belong to $\mf{p}$. So $I_PK\subseteq \mf{p}$.
\end{proof}
The following statement is a generalization of Pfister's Local-Global Principle, relative to a preordering (cf. \cite[(1.26)]{LamOVQ}).
\begin{thm}[Pfister]
\label{T:pre-Pfister}
Let $T$ be a preordering of $K$.
The ideal $I_TK$ is generated by the classes of binary forms $\langle 1,-t\rangle$ with $t\in\mg{T}$.
Moreover, for a quadratic form $\varphi$ over $K$ the following statements are equivalent:
\begin{enumerate}[$(i)$]
\item We have ${\rm sign}_T(\varphi)=0$.
\item The form $\varphi$ is $T$-hyperbolic.
\item There exists a $T$-positive Pfister form $\tau$ over $K$ such that $\tau\otimes\varphi$ is hyperbolic.
\item There exist $r\geq 0$, $a_1,\dots,a_r\in\mg{K}$ and $t_1,\dots,t_r\in\mg{T}$ such that $\varphi$ is Witt equivalent to $\langle a_1,-a_1t_1\rangle\perp\dots\perp\langle a_r,-a_rt_r\rangle$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $J$ denote the ideal of $W\!K$ generated by the classes of the binary forms $\langle 1,-t\rangle$ with $t\in\mg{T}$.
Obviously, $J\subseteq I_TK$.
Note that $I_TK$ is equal to the intersection of prime ideals $\bigcap_{P\in X_T} I_PK$ and contained in $I\!K$.
Given any prime ideal $\mf{p}$ of $W\!K$ such that $J\subseteq \mf{p} \not=I\!K$, the set $P=\{t\in\mg{K}\mid [\langle 1,-t\rangle]\in\mf{p}\}\cup\{0\}$ is an ordering of $K$ containing $T$, so that $J\subseteq I_TK\subseteq I_PK \subseteq \mf{p}$ by \eqref{P:LL}.
This shows that $J\subseteq I_TK\subseteq\sqrt{J}$.
Note that a quadratic form $\varphi$ over $K$ is $T$-isotropic if and only if $[\varphi] \equiv [\psi] \bmod J$ for a quadratic form $\psi$ over $K$ with $\dim(\psi) <\dim(\varphi)$.
In particular, $\varphi$ is $T$-hyperbolic if and only if $[\varphi] \in J$.
From this it follows immediately that $W\!K/J$ is torsion-free. Hence $J$ is a radical ideal by \eqref{C:rad}, and we conclude that $I_TK=J$.
This shows that $(i)\iff (iv)$.
The implications $(iii)\implies (ii)\implies (i)$ are obvious.
Finally we have $(iv)\implies (iii)$, since, with elements given as in $(iv)$, we may choose $\tau=\langle 1,t_1\rangle\otimes\dots\otimes\langle 1,t_r\rangle$.
\end{proof}
A quadratic form $\varphi$ over $K$ is said to be \emph{torsion} or \emph{weakly hyperbolic} if $m\times \varphi$ is hyperbolic for some positive integer $m$.
The following is \cite[Satz 22]{Pfi66}.
\begin{cor}[Pfister]\label{C:PLGP}
Assume that $K$ is real. For a quadratic form $\varphi$ over $K$ the following statements are equivalent:
\begin{enumerate}[$(i)$]
\item We have ${\rm sign}(\varphi)=0$.
\item The quadratic form $\varphi$ is weakly hyperbolic.
\item There exists $m\in\mathbb{N}$ such that $2^m\times \varphi$ is hyperbolic.
\item There exists $n\in\mathbb{N}$ such that $\varphi^{\otimes n}$ is hyperbolic.
\item There exist $r\geq 0$, $a_1,\dots,a_r\in\mg{K}$ and $s_1,\dots,s_r\in \sums{K}$ such that $\varphi$ is Witt equivalent to $\langle a_1,-a_1s_1\rangle\perp\dots\perp\langle a_r,-a_rs_r\rangle$.
\end{enumerate}
\end{cor}
\begin{proof}
We consider the preordering $S=\sums{K}\cup\{0\}$.
By \eqref{T:pre-Pfister} we have that $(i)$ and $(v)$ are equivalent.
Using \eqref{P:LL}, it follows that $I_SK$ is the intersection of all prime ideals of $W\!K$ and thus the nilradical of $W\!K$. This yields the equivalence of $(i)$ and $(iv)$.
Clearly $(iii)$ implies $(ii)$, which in turn implies $(i)$.
We conclude by showing that $(v)$ implies $(iii)$.
Given elements $s_1,\ldots, s_r\in\sums{K}$ such that $\varphi$ is Witt equivalent to $\langle a_1,-a_1s_1\rangle\perp\dots\perp\langle a_r,-a_rs_r\rangle$, we choose $m\in\mathbb{N}$ such that $s_1,\ldots, s_r \in \mathsf{D}_K(2^m)$ and then have that $2^m\times \varphi$ is hyperbolic.
\end{proof}
\begin{cor}[Scharlau]\label{C:Scharlau-2prim}
The order of any torsion element in $W\!K$ is a $2$-power.
\end{cor}
\begin{proof}
If $K$ is real, this is a rephrasing of the equivalence $(ii)\iff(iii)$ of \eqref{C:PLGP}.
If $K$ is nonreal, then $-1\in \mathsf{D}_K(2^{n})$ for some $n\in\mathbb{N}$, and then $2^{n+1}W\!K=0$, which yields the statement.
\end{proof}
\begin{cor}[Scharlau]\label{C:zero-div}
Any zero-divisor of $W\!K$ lies in $I\!K$.
\end{cor}
\begin{proof}
Let $\alpha\in W\!K\setminus I\!K$.
By \eqref{T:Lewis} there exists $n\in\mathbb{N}$ such that $\alpha$ is a zero of
$L_{2n+1}(X)=\prod_{i=0}^{n} (X^2-(2i+1)^2)$.
Hence, for $\beta\in W\!K$ with $\alpha\beta=0$ we have $m\beta=0$ for the odd integer $m=\prod_{i=0}^{n} (2i+1)^2$, thus $\beta=0$ by \eqref{C:Scharlau-2prim}.
\end{proof}
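The arithmetic behind this last step can be checked mechanically: the constant term of $L_{2n+1}$ is $(-1)^{n+1}m$ for the odd integer $m=\prod_{i=0}^{n}(2i+1)^2$, which is why $L_{2n+1}(\alpha)=0$ lets one write $m$ as a multiple of $\alpha$ in $W\!K$. The following standalone script is an illustration only (the function name is ours, not from the text):

```python
# Sanity check (illustration only): the constant term of the Lewis
# polynomial L_{2n+1}(X) = prod_{i=0}^{n} (X^2 - (2i+1)^2) equals
# (-1)^{n+1} * m for the odd integer m = prod_{i=0}^{n} (2i+1)^2.

def lewis_poly_coeffs(n):
    """Coefficients of L_{2n+1}(X), lowest degree first."""
    coeffs = [1]
    for i in range(n + 1):
        c = (2 * i + 1) ** 2
        # multiply the current polynomial by (X^2 - c)
        new = [0] * (len(coeffs) + 2)
        for k, a in enumerate(coeffs):
            new[k] += -c * a      # contribution of the -c term
            new[k + 2] += a       # contribution of the X^2 term
        coeffs = new
    return coeffs

for n in range(6):
    coeffs = lewis_poly_coeffs(n)
    m = 1
    for i in range(n + 1):
        m *= (2 * i + 1) ** 2
    assert coeffs[0] == (-1) ** (n + 1) * m
    assert m % 2 == 1  # m is odd
```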
For $n\in \mathbb{N}$ we denote by $d(n)$ the number of occurrences of the digit~$1$ in the binary representation of $n$. Note that $d(2n)=d(n)$ and $d(2n+1)=d(n)+1$.
In \cite[\S4.4]{Conc} the following observation is attributed to Legendre.
\begin{prop}
For $n\in\mathbb{N}$ the largest $2$-power dividing $n!$ is $2^{n-d(n)}$.
\end{prop}
\begin{proof} Let $n\in\mathbb{N}$.
The largest $2$-power dividing $n$ is $2^m$ where $m$ is the number of consecutive digits~$1$ at the end of the binary representation of $n-1$, whereby $m=d(n-1)-d(n)+1$.
Hence the largest $2$-power dividing $n!$ is $2^k$, where $k= \sum_{i=1}^n \bigl(d(i-1)-d(i)+1\bigr)=n-d(n)$.
\end{proof}
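Legendre's formula and the digit identities stated above are easy to test numerically; the following script is an illustration only (the function names are ours):

```python
# Numerical check (illustration only) of Legendre's observation:
# the largest 2-power dividing n! is 2^(n - d(n)), where d(n) is the
# number of digits 1 in the binary representation of n.
from math import factorial

def d(n):
    """Number of ones in the binary representation of n."""
    return bin(n).count("1")

def v2(n):
    """2-adic valuation of a positive integer n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

for n in range(1, 200):
    assert d(2 * n) == d(n)          # d(2n) = d(n)
    assert d(2 * n + 1) == d(n) + 1  # d(2n+1) = d(n) + 1
    assert v2(factorial(n)) == n - d(n)
```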
For $n\geq 1$ we set
$\Delta(n)=2n-1-d(n)-d(n-1)$.
\begin{thm}
\label{P:Lou}
Let $\varphi$ and $\pi$ be quadratic forms over $K$ such that $\varphi\otimes \varphi\otimes \pi$ is hyperbolic.
Then $2^{\Delta(n)}\times \varphi\otimes\pi$ is hyperbolic for $n=\dim(\varphi)$.
\end{thm}
\begin{proof}
Assume that $\pi$ is not hyperbolic, as otherwise the statement is trivial.
Then we have $n=\dim(\varphi)=2k$ for some $k\in\mathbb{N}$ by \eqref{C:zero-div}.
It follows from \eqref{C:alpha-mult} that $2^{2k-1}k!(k-1)!\times \varphi\otimes\pi$ is hyperbolic.
Since we have $$\Delta(n)=4k-1-d(2k)-d(2k-1)=(2k-1)+\bigl(k-d(k)\bigr)+\bigl(k-1-d(k-1)\bigr)\,,$$ the largest $2$-power dividing $2^{2k-1}k!(k-1)!$ is $2^{\Delta(n)}$, and we conclude by \eqref{C:Scharlau-2prim} that $2^{\Delta(n)}\times \varphi\otimes\pi$ is hyperbolic.
\end{proof}
\begin{rem}
In view of \eqref{P:Lou} we may define a function $g: \mathbb{N}\longrightarrow \mathbb{N}$ in the following way: for $k\in \mathbb{N}$, let $g(k)$ be the smallest number $m\in \mathbb{N}$ such that, for any quadratic form $\varphi$ of dimension $2k$ over an arbitrary field of characteristic different from $2$ for which $\varphi\otimes\varphi$ is hyperbolic, also $2^m\times \varphi$ is hyperbolic.
Applying \eqref{P:Lou} with $\pi=\langle 1\rangle$ yields that $g(k)\leq \Delta(2k)=4k-2-d(k)-d(k-1)$. This bound, however, does not seem to be optimal for $k>1$. In fact, it is not difficult to show that $g(2)=g(3)=2$.
\end{rem}
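For concreteness, the bound $\Delta(2k)=4k-2-d(k)-d(k-1)$ appearing in this remark can be tabulated; the script below (an illustration only, with our own function names) checks this identity against the definition of $\Delta$:

```python
# Illustration only: compute Delta(n) = 2n - 1 - d(n) - d(n-1) and
# check the rewriting Delta(2k) = 4k - 2 - d(k) - d(k-1), which
# combines d(2k) = d(k) with d(2k-1) = d(k-1) + 1.

def d(n):
    """Number of ones in the binary representation of n."""
    return bin(n).count("1")

def delta(n):
    return 2 * n - 1 - d(n) - d(n - 1)

for k in range(1, 200):
    assert delta(2 * k) == 4 * k - 2 - d(k) - d(k - 1)
```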
\section{Algebras with involution}
\label{sec3}
Our general references for the theory of central simple algebras and their involutions are \cite{BOI} and \cite[Chap.~8]{Scharlau}.
We fix some terminology.
Let $K$ be a field of characteristic different from two
and let $A$ be a $K$-algebra.
We call $A$ a \emph{$K$-division algebra} if every non-zero element in $A$ is invertible.
We denote by $Z(A)$ the centre of $A$.
A \emph{$K$-involution on $A$} is a $K$-linear map $\sigma : A\longrightarrow A$ such that $\sigma(xy)=\sigma(y)\sigma(x)$ for all $x,y\in A$ and $\sigma\circ\sigma=\mathrm{id}_A$.
If $A$ is finite-dimensional, simple as a ring, and such that $Z(A)=K$, then $A$ is said to be \emph{central simple}. Assume that $A$ is a central simple $K$-algebra.
Wedderburn's Theorem says that in this case $\dim(A)=n^2$ for a positive integer $n$, called the \emph{degree of $A$} and denoted $\deg(A)$, and further that $A$ is isomorphic to a matrix algebra over a central simple $K$-division algebra $D$ unique up to $K$-isomorphism; the degree of $D$ is then called the (\emph{Schur}) \emph{index of $A$} and denoted by ${\rm ind}(A)$.
One says that $A$ is \emph{split} if ${\rm ind}(A)=1$.
A \emph{$K$-algebra with involution} is a pair $(A,\sigma)$
where $A$ is a finite-dimensional $K$-algebra and $\sigma$ is a $K$-involution on $A$ such that $K=\{x\in Z(A)\mid \sigma(x)=x\}$ and such that either $A$ is simple or $A$ is a product of two simple $K$-algebras that are mapped to each other by $\sigma$.
We will often denote a $K$-algebra with involution by a single capital Greek letter.
Let $(A,\sigma)$ be a $K$-algebra with involution.
We have either $Z(A)=K$ or $Z(A)$ is a quadratic \'etale extension of $K$.
We say that $(A,\sigma)$ is of the \emph{first} or \emph{second kind}, depending on whether $[Z(A):K]$ is $1$ or $2$, respectively.
Note that $A$ is simple if and only if $Z(A)$ is a field.
If $Z(A)$ is not a field then $(A,\sigma)$ is \emph{degenerate}; this can only occur if $(A,\sigma)$ is of the second kind.
We have $\dim_K(A)=[Z(A):K]\cdot n^2$ for a positive integer $n$, which we call the \emph{degree of $(A,\sigma)$} and denote by $\deg(A,\sigma)$; if $(A,\sigma)$ is nondegenerate, then $\deg(A,\sigma)$ is just the degree of $A$ as a central simple $Z(A)$-algebra.
We say that $x\in A$ is \emph{symmetric} or \emph{skew-symmetric} (with respect to $\sigma$) if $\sigma(x)=x$ or $\sigma(x)=-x$, respectively.
We let $\Sym{A, \sigma}=\{ x\in A\mid \sigma(x)=x\}$ and
$\Skew{A, \sigma}=\{x\in A\mid \sigma(x)= - x\}$.
These are $K$-linear subspaces of $A$ satisfying
\[A=\Sym{A,\sigma}\oplus \Skew{A,\sigma}.\]
There exists $\varepsilon\in\{-1,0,+1\}$ such that
\[\dim_K(\Sym{A,\sigma})=\mbox{$\frac{1}{2}$} \bigl(\dim_K(A)+\varepsilon n\bigr)\,\, \mbox{ and }
\dim_K(\Skew{A,\sigma})=\mbox{$\frac{1}{2}$} \bigl(\dim_K(A)-\varepsilon n\bigr)\,.\]
If $(A,\sigma)$ is of the first kind, then $\varepsilon=\pm 1$, and we say that $(A,\sigma)$ is \emph{orthogonal} if $\varepsilon=1$ and \emph{symplectic} if $\varepsilon=-1$.
If $(A,\sigma)$ is of the second kind, then $\varepsilon=0$, and we say that $(A,\sigma)$ is \emph{unitary}.
The integer $\varepsilon$ is called the \emph{type of $(A,\sigma)$} and denoted $\mathrm{type}(A,\sigma)$.
Following \cite[\S 12]{BOI}, given a $K$-algebra with involution $(A,\sigma)$, we denote
\begin{eqnarray*}
\mathrm{Sim}(A,\sigma) &=& \{x \in \mg{A} \mid \sigma(x)x \in \mg K\} \quad\text{and}\\
\mathsf{G}(A,\sigma) &= &\{ \sigma(x)x \mid x \in \mathrm{Sim}(A,\sigma) \}\,;
\end{eqnarray*}
note that these are subgroups of $\mg A$ and $\mg K$, respectively.
We denote by $\mathrm{Br}(K)$ the Brauer group of $K$.
The group operation in $\mathrm{Br}(K)$ is written additively.
Let $\Psi$ denote the $K$-algebra with involution $(A,\sigma)$.
If $\Psi$ is non-degenerate, let $[\Psi]=\pm [A]$ in $\mathrm{Br}(L)$ for $L=Z(A)$.
Here, $\pm[A]$ denotes the element $[A]$ if this is of order at most $2$ and otherwise the unordered pair of $[A]$ and $-[A]$.
Recall
that if $\Psi$ is of the first kind, then $[A]+[A]=0$ in $\mathrm{Br}(K)$.
If $\Psi$ is degenerate unitary, then $A\simeq A_1\times A_2$ for two central simple $K$-algebras $A_1$ and $A_2$ with $\sigma(A_1)=A_2$, so that $[A_1]+[A_2]=0$ in $\mathrm{Br}(K)$, and we set $[\Psi]=\pm [A_i]$ for $i=1,2$.
If $A$ is simple, let ${\rm ind}(\Psi)$ denote the Schur index of $A$ as a central simple $Z(A)$-algebra, otherwise let ${\rm ind}(\Psi)$ be the Schur index of the two simple components of $A$ (which is the same).
We also write $Z(\Psi)$ to refer to the centre of $A$.
For any field extension $L/K$ the $L$-algebra with involution $(A\otimes_K L,\sigma\otimes \mathrm{id}_L)$ is denoted by $\Psi_L$.
Note that $\mathrm{type}(\Psi_L)=\mathrm{type}(\Psi)$.
If $\Psi$ is non-degenerate unitary, then $\Psi_L$ is non-degenerate if and only if $L$ is linearly disjoint from $Z(\Psi)$ over $K$.
We now consider two $K$-algebras with involution $\Psi=(A,\sigma)$ and $\Theta=(B,\vartheta)$.
A \emph{homomorphism of algebras with involution} $\Psi \longrightarrow \Theta$ is a $K$-homomorphism $f:A\longrightarrow B$ satisfying $\vartheta\circ f = f \circ \sigma$.
A homomorphism is called an \emph{embedding} if it is injective and an \emph{isomorphism} if it is bijective.
We write $\Psi \simeq \Theta$ if there exists an isomorphism $\Psi\longrightarrow\Theta$.
(This occurs if and only if either $\Psi$ and $\Theta$ are adjoint to two similar hermitian or skew-hermitian forms over some $K$-division algebra with involution or $\Psi$ and $\Theta$ are both degenerate unitary of the same degree.)
We further write $\Psi\sim\Theta$ to indicate that $\mathrm{type}(\Psi)=\mathrm{type}(\Theta)$ and $[\Psi]=[\Theta]$.
(This occurs if and only if $\Psi$ and $\Theta$ are either both degenerate unitary or both adjoint algebras with involution of hermitian or skew-hermitian forms over a common $K$-division algebra with involution.)
Except when $\Psi$ and $\Theta$ are both unitary with different centres, we can define
their tensor product $\Psi\otimes \Theta$.
If $\Psi$ and $\Theta$ are not both unitary, let $\Psi\otimes \Theta$ denote the $K$-algebra with involution $(A\otimes_K B,\sigma\otimes\vartheta)$.
If $\Psi$ and $\Theta$ are both unitary with the same centre $L$, then $\Psi\otimes \Theta$ is the unitary $K$-algebra with involution $(A\otimes_L B,\sigma\otimes\vartheta)$, whose centre is also $L$.
Note that in each of the cases where we defined $\Psi\otimes \Theta$, we have
\[\mathrm{type}(\Psi\otimes\Theta)=\mathrm{type}(\Psi)\cdot\mathrm{type}(\Theta)\quad \mbox{ and }\quad \deg(\Psi\otimes\Theta)=\deg(\Psi)\cdot\deg(\Theta) \,.\]
For a positive integer $n$
the tensor power $\Psi^{\otimes n}$ of a $K$-algebra with involution $\Psi$ is now well-defined.
\section{Algebras with involution of small index}
\label{sec4}
Involutions on central simple algebras are often considered as adjoint to hermitian or skew-hermitian forms (cf.~\cite[\S4]{BOI}).
We will only need this approach for algebras with involution of small index.
Let $\varphi=(V,B)$ be a quadratic form over $K$. Consider the split central simple $K$-algebra $\End_K(V)$.
Let $\sigma:\End_K(V)\longrightarrow \End_K(V)$ denote the involution determined
by the formula
\[\qquad B(f(u), v) = B(u, \sigma(f)(v))\quad\text{ for all } u, v\in V\text{ and } f\in \End_K(V).\]
We denote this involution $\sigma$ by ${\rm ad}_B$ and call it the \emph{adjoint involution of $\varphi$}. Furthermore, we call $(\End_K(V),{\rm ad}_B)$ the \emph{adjoint algebra with involution of $\varphi$} and denote it by $\mathsf{Ad}(\varphi)$. Note that it is split orthogonal and that $\varphi$ is determined up to similarity by $\mathsf{Ad}(\varphi)$.
\begin{ex}
Let $n$ be a positive integer and $\varphi=n\times \langle 1\rangle$, the $n$-dimensional form $\langle 1,\dots,1\rangle$ over $K$.
Then $\mathsf{Ad}(\varphi)\simeq (\mathsf{M}_n(K), \tau)$ where $\mathsf{M}_n(K)$ is the $K$-algebra of $n\!\times\! n$-matrices over $K$ and $\tau$ is the transpose involution.
\end{ex}
\begin{prop}\label{P:ad-mult}
For quadratic forms $\varphi$ and $\psi$ over $K$ we have $$\mathsf{Ad}(\varphi\otimes\psi)\simeq \mathsf{Ad}(\varphi)\otimes\mathsf{Ad}(\psi)\,.$$
\end{prop}
\begin{proof} Denoting by $V$ and $W$ the underlying vector spaces of $\varphi$ and $\psi$, respectively, the natural $K$-algebra isomorphism
$\End_K(V)\otimes_K \End_K(W) \longrightarrow \End_K(V \otimes_K W)$ yields the required identification for the adjoint involutions.
\end{proof}
For
a finite-dimensional $K$-algebra $A$ we denote by $\mathrm{Trd}_A:A\longrightarrow Z(A)$ its reduced trace map (cf. \cite[p.~5 and p.~22]{BOI}).
A \emph{$K$-quaternion algebra} is a central simple $K$-algebra of degree $2$.
Given a $K$-quaternion algebra $Q$, the map $\sigma:Q\longrightarrow Q, x\longmapsto \mathrm{Trd}_Q(x)-x$ is a $K$-involution, called the \emph{canonical involution on $Q$} and denoted by $\mathrm{can}_Q$; this is the unique symplectic $K$-involution on $Q$.
If $L$ is a quadratic \'etale extension of $K$ we denote by $\mathrm{can}_L$ the unique non-trivial $K$-automorphism of $L$.
We further set $\mathrm{can}_K=\mathrm{id}_K$.
\begin{prop}\label{P:can-invol}
Let $(A,\sigma)$ be a $K$-algebra with involution.
We have that $\Sym{A,\sigma}=K$ if and only if $\sigma=\mathrm{can}_A$ and $A$ is either $K$, a quadratic \'etale extension of~$K$, or a $K$-quaternion algebra.
\end{prop}
\begin{proof}
If $\Sym{A,\sigma}=K$ then $\dim_K(A)=2^{1-\varepsilon}$ for $\varepsilon=\mathrm{type}(A,\sigma)$, so that $A$ is either $K$, a quadratic \'etale extension of~$K$, or a $K$-quaternion algebra.
In any of these three cases, $\mathrm{can}_A$ is the unique involution of type $\varepsilon$ on $A$, and $\Sym{A,\mathrm{can}_A}=K$.
\end{proof}
In the cases characterized by \eqref{P:can-invol} we call $(A,\sigma)$ a \emph{$K$-algebra with canonical involution}.
\begin{prop}\label{P:JAC}
Let $\Psi$ be a $K$-algebra with involution.
Then $\Psi\simeq \Phi\otimes\mathsf{Ad}(\varphi)$ for a $K$-algebra with canonical involution~$\Phi$ and a quadratic form $\varphi$ over~$K$ if and only if $\Psi$ is either split or symplectic of index $2$.
\end{prop}
\begin{proof}
Clearly, any $K$-algebra with canonical involution $\Phi$ is
either split or symplectic of index $2$, and thus so is $\Phi\otimes \mathsf{Ad}(\varphi)$ for any quadratic form $\varphi$ over $K$.
Assume now that $\Psi$ is either split or symplectic of index $2$.
Then $\Psi\sim\Phi$ for a $K$-algebra with canonical involution $\Phi$.
It follows that $\Psi$ is adjoint to a hermitian form over $\Phi$.
Any hermitian form over $\Phi$ has a diagonalisation with entries in $\Sym{\Phi}=K$.
Therefore $\Psi\simeq \Phi\otimes\mathsf{Ad}(\varphi)$ for a form $\varphi$ over $K$.
\end{proof}
For computational purposes we augment the classical notation for quaternion algebras
in terms of pairs of field elements to take into account an involution. Let $a,b\in \mg K$ and let $Q$ be the $K$-algebra with basis $(1,i,j,k)$, where $i^2=a$, $j^2=b$ and $ij=-ji=k$. This quaternion algebra is denoted by $(a,b)_K$. For $\delta, \varepsilon \in \{+1,-1\}$ there is a unique $K$-involution $\sigma$ on $Q$ such that $\sigma(i)=\delta i$ and $\sigma(j)=\varepsilon j$. We denote the pair $(Q,\sigma)$ by
\begin{eqnarray*}
(a\mid b)_K & \text{ if } & \delta=+1,\ \varepsilon=+1,\\
(a\,\cdot\!\!\mid b)_K & \text{if} & \delta=-1,\ \varepsilon=+1,\\
(a\mid\!\!\cdot\, b)_K & \text{if} & \delta=+1,\ \varepsilon=-1,\\
(a\,\cdot\!\!\mid\!\!\cdot\, b)_K & \text{if} & \delta=-1,\ \varepsilon=-1.
\end{eqnarray*}
In particular, $(a\,\cdot\!\!\mid\!\!\cdot\, b)_K$ denotes the quaternion algebra $(a,b)_K$ together with its canonical involution.
Any $K$-quaternion algebra with orthogonal involution is isomorphic to $(a \,\cdot\!\!\mid b)_K$ for some $a,b \in \mg K$.
Note that $(a \mid b)_K \simeq (-ab \,\cdot\!\!\mid b)_K$ and $(a\mid\!\!\cdot\, b)_K \simeq (b \,\cdot\!\!\mid a)_K$ for any $a,b \in \mg K$.
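The first of these two identifications can be checked by a short computation; the following sketch works inside $(a\mid b)_K$, where $\sigma(i)=i$ and $\sigma(j)=j$, and uses only the basis relations fixed above:

```latex
% Sketch: the element ij is skew-symmetric and anticommutes with the
% symmetric element j, and squares to -ab:
\begin{align*}
\sigma(ij) &= \sigma(j)\sigma(i) = ji = -ij\,, &
(ij)^2 &= ijij = -i^2j^2 = -ab\,,\\
(ij)j + j(ij) &= ij^2 + (ji)j = ij^2 - ij^2 = 0\,, &
j^2 &= b\,.
\end{align*}
% Hence the sigma-stable pair (ij, j) exhibits (a | b)_K ≃ (-ab .| b)_K.
```

The identification $(a\mid\!\!\cdot\, b)_K \simeq (b \,\cdot\!\!\mid a)_K$ follows in the same way by exchanging the roles of the symmetric and skew-symmetric generators.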
\begin{prop}\label{P:quat-qil-rep}
Let $Q$ be a $K$-quaternion algebra and let $a,b\in\mg{K}$ be such that $Q\simeq (a,b)_K$. Then $(Q,\mathrm{can}_Q)\simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K$.
Moreover, for $i\in\mg{Q}\setminus\mg{K}$ with $i^2=a$ and $\tau=\mathrm{Int}(i)\circ\mathrm{can}_Q$ we have $(Q,\tau)\simeq (a\,\cdot\!\!\mid b)_K$.
\end{prop}
\begin{proof}
Since $\mathrm{can}_Q$ is the only symplectic involution on $Q$, any $K$-isomorphism $Q\longrightarrow (a,b)_K$ is also an isomorphism of $K$-algebras with involution. Hence, $(Q,\mathrm{can}_Q) \simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K$.
Choose an element $i\in\mg{Q}\setminus\mg{K}$ with $i^2=a$.
Then $V=\{j\in Q\mid ij+ji=0\}$ is the orthogonal complement of $K[i]$ in $Q$ with respect to
the symmetric $K$-bilinear form $B: Q\times Q\longrightarrow K, (x,y)\longmapsto \mbox{$\frac{1}{2}$}\mathrm{Trd}_Q(\mathrm{can}_Q(x)\cdot y)$.
By \cite[Chap.~2, (11.4)]{Scharlau}
we have
$(Q,B)\simeq \la\!\la a,b\ra\!\ra$. Since
$(K[i], B|_{K[i]})\simeq \la\!\la a\ra\!\ra$ it follows that
$(V, B|_{V})\simeq -b\la\!\la a\ra\!\ra$.
As $B(j,j)=-j^2$ for any $j\in V$ there exists $j\in V$ with $j^2=b$.
For $\tau=\mathrm{Int}(i)\circ\mathrm{can}_Q$ we obtain that $\tau(j)=j$, whereby $(Q,\tau)\simeq (a\,\cdot\!\!\mid b)_K$.
\end{proof}
For $a\in \mg K$, let $(a)_K$ denote the unitary $K$-algebra with canonical involution $(L,\mathrm{can}_L)$ where
$L = K[X]/(X^2-a)$; it is degenerate if and only if $a\in\sq{K}$.
\begin{cor}\label{C:quat-awi-class}
Let $\Phi$ be a $K$-quaternion algebra with involution.
If $\Phi$ is orthogonal there exist $a,b\in\mg{K}$ such that $\Phi\simeq (a\,\cdot\!\!\mid b)_K$.
If $\Phi$ is symplectic there exist $a,b\in\mg{K}$ such that $\Phi\simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K$.
If $\Phi$ is unitary, there exist $a,b,c\in\mg{K}$ such that $\Phi\simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K\otimes (c)_K$.
\end{cor}
\begin{proof}
Assume that $\Phi$ is of the first kind and let $\Phi=(Q,\sigma)$.
If $\Phi$ is symplectic, then $\sigma=\mathrm{can}_Q$ and we choose $a,b\in\mg{Q}$ such that $Q\simeq (a,b)_K$ to obtain by \eqref{P:quat-qil-rep} that $\Phi\simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K$.
If $\Phi$ is orthogonal, we choose $i\in\Skew{Q,\sigma}\cap\mg{Q}$ and $a,b\in\mg{K}$ with $a=i^2$ and $Q\simeq (a,b)_K$, and obtain that $\sigma=\mathrm{Int}(i)\circ\mathrm{can}_Q$, so that $\Phi\simeq (a\,\cdot\!\!\mid b)_K$ by \eqref{P:quat-qil-rep}.
Assume now that $\Phi$ is of the second kind.
From \cite[(2.22)]{BOI} we obtain that $\Phi\simeq (Q,\mathrm{can}_Q)\otimes (c)_K$ for a $K$-quaternion algebra $Q$ and an element $c\in\mg{K}$, and by the above there exist $a,b\in\mg{K}$ such that $(Q,\mathrm{can}_Q)\simeq (a\,\cdot\!\!\mid\!\!\cdot\, b)_K$.
\end{proof}
\begin{prop}\label{P:-qil-characterisation}
Let $a,b,c,d \in \mg K$.
We have $(a \,\cdot\!\!\mid b)_K \simeq (c \,\cdot\!\!\mid d)_K$ if and only if $a \sq K =c \sq K$ and $bd \in \mathsf{D}_K\la\!\la a\ra\!\ra$.
\end{prop}
\begin{proof}
We set $(Q,\tau)=(a\,\cdot\!\!\mid b)_K$.
There exist elements $i\in\Skew{Q,\tau}$ and $j\in\Sym{Q,\tau}$ with $i^2=a$, $j^2=b$, and $ij+ji=0$.
Assuming that $bd\in \mathsf{D}_K\la\!\la a\ra\!\ra$, we may write $d=b(u^2-av^2)$ with $u,v\in K$ and obtain for $g=uj+vij$ that $g\in \Sym{Q,\tau}$, $gi+ig=0$, and $g^2=d$.
If further $c\sq{K}=a\sq{K}$, then $c=f^2$ for some $f\in i\mg{K}$, and we have $f\in\Skew{Q,\tau}$ and $gf+fg=0$, and conclude that $(Q,\tau)\simeq (c\,\cdot\!\!\mid d)_K$.
For the converse, suppose that
$(Q,\tau)\simeq (c\,\cdot\!\!\mid d)_K$. There exist $f\in\Skew{Q,\tau}$ and $g\in\Sym{Q,\tau}$ with $f^2=c$, $g^2=d$, and $fg+gf=0$.
It follows that $i K=\Skew{Q,\tau}=f K$, so that $a\sq{K}=c\sq{K}$. Moreover, $ig+gi=0$ and $jgi=ijg$.
As $K[i]$ is a maximal commutative $K$-subalgebra of $Q$, we obtain that $jg\in K[i]$.
Writing $jg=x+iy$ with $x,y\in K$, we obtain that $$bd=j^2g^2=j(x+iy)g=(x-iy)jg=(x-iy)(x+iy)=x^2-ay^2\,,$$ whence $bd\in \mathsf{D}_K\la\!\la a\ra\!\ra$.
\end{proof}
\begin{prop}\label{P:flipflop}
For $a,b,c,d \in \mg K$ we have
\begin{align*}
(a\,\cdot\!\!\mid b)_K\otimes (c\,\cdot\!\!\mid d)_K & \simeq (a\,\cdot\!\!\mid\!\!\cdot\, bc)_K\otimes (c\,\cdot\!\!\mid\!\!\cdot\, ad)_K \quad \textrm{and} \\
(a\,\cdot\!\!\mid b)_K\otimes (c\,\cdot\!\!\mid\!\!\cdot\, d)_K & \simeq (a\,\cdot\!\!\mid\!\!\cdot\, bc)_K\otimes (c\,\cdot\!\!\mid ad)_K\,.
\end{align*}
\end{prop}
\begin{proof}
Let $(A,\sigma)=(a\,\cdot\!\!\mid b)_K\otimes (c\,\cdot\!\!\mid d)_K$.
Then there exist elements $i,j,f,g\in\mg{A}$ such that $\sigma(i)=-i$, $\sigma(j)=j$, $\sigma(f)=-f$, $\sigma(g)=g$, $i^2=a$, $j^2=b$, $f^2=c$, $g^2=d$, $ij+ji=fg+gf=0$, and each of $i$ and $j$ commutes with each of $f$ and $g$.
Set $j'=fj$ and $g'=ig$. Then
$\sigma(i)=-i$, $\sigma(j')=-j'$, $\sigma(f)=-f$, $\sigma(g')=-g'$, $i^2=a$, $j'^2=bc$, $f^2=c$, $g'^2=ad$, $ij'+j'i=fg'+g'f=0$, and each of $i$ and $j'$ commutes with each of $f$ and $g'$.
The $K$-subalgebra $Q$ of $A$ generated by $i$ and $j'$ commutes elementwise with the $K$-subalgebra $Q'$ of $A$ generated by $f$ and $g'$, and $Q$ and $Q'$ are $\sigma$-stable.
Hence
$$(A,\sigma)\simeq (Q,\sigma|_Q)\otimes (Q',\sigma|_{Q'})\simeq (a\,\cdot\!\!\mid\!\!\cdot\, bc)_K\otimes (c\,\cdot\!\!\mid\!\!\cdot\, ad)_K\,.$$
This shows the first isomorphism.
The proof of the second isomorphism is almost identical, with the only difference that $\sigma(g)=-g$ and $\sigma(g')=g'$.
\end{proof}
\begin{prop}
For $a,b \in \mg K$, we have
\[\mathsf{G}\,(a\,\cdot\!\!\mid\!\!\cdot\, b)_K = \mathsf{D}_K \la\!\la a,b\ra\!\ra \quad\text{ and }\quad
\mathsf{G}\,(a\,\cdot\!\!\mid b)_K = \mathsf{D}_K\la\!\la a\ra\!\ra\cup b\mathsf{D}_K\la\!\la a\ra\!\ra.\]
\end{prop}
\begin{proof}
Let $Q=(a,b)_K$ and $u,v\in \mg{Q}$ with $u^2=a$, $v^2=b$ and $uv+vu=0$. Then $\mathrm{Sim}(Q,\mathrm{can}_Q)=\mg{Q}$ and thus $\mathsf{G}(Q,\mathrm{can}_Q)=\mathsf{D}_K\la\!\la a,b\ra\!\ra$.
For $\tau=\mathrm{Int}(u)\circ\mathrm{can}_Q$ we obtain $\mathrm{Sim}(Q,\tau)=\mg{K[u]}\cup v\mg{K[u]}$ and thus $\mathsf{G}(Q,\tau)=\mathsf{D}_K\la\!\la a\ra\!\ra\cup b\mathsf{D}_K\la\!\la a\ra\!\ra$.
\end{proof}
\section{Involution trace forms}
\label{sec5}
A $K$-algebra with involution $(A,\sigma)$ with centre $L$ gives rise to a regular hermitian form $T_{(A,\sigma)}:A\times A\longrightarrow L$ over $(L,\mathrm{can}_L)$ defined by $T_{(A,\sigma)}(x,y)=\mathrm{Trd}_A(\sigma(x)y)$; this follows from \cite[(2.2) and (2.16)]{BOI}.
We further obtain a regular symmetric $K$-bilinear form $T_\sigma:A\times A \longrightarrow K$ defined by $T_\sigma(x,y)=\mbox{$\frac{1}{2}$} \mathrm{Trd}_A (\sigma(x)y+\sigma(y)x)$.
Note that if $L=K$ then $T_\sigma=T_{(A,\sigma)}$, otherwise $2T_\sigma=T\circ T_{(A,\sigma)}$ where $T$ is the trace of ${Z(A)/K}$.
(Here, $2\varphi$ denotes the form obtained by scaling the form $\varphi$ by $2$, which ought not to be confused with the form $2\times \varphi=\varphi\perp\varphi$.)
Given a $K$-algebra with involution $\Psi=(A,\sigma)$, we denote by $\mathsf{Tr}(\Psi)$ the quadratic form $(A,T_\sigma)$ over $K$.
Note that $\dim(\mathsf{Tr}(A,\sigma))=\dim_K(A)$.
\begin{ex}\label{E:itr}
For $a\in\mg{K}$ we have $\mathsf{Tr}\,(a)_K=\la\!\la a\ra\!\ra$. For $a,b\in\mg{K}$ we have $\mathsf{Tr}\,(a\,\cdot\!\!\mid b)_K=2\la\!\la a,-b\ra\!\ra$ and $\mathsf{Tr}\,(a\,\cdot\!\!\mid\!\!\cdot\, b)_K=2\la\!\la a,b\ra\!\ra$.
\end{ex}
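The last formula of this example can be verified on the standard basis $(1,i,j,ij)$ of $(a,b)_K$, which is orthogonal for $T_{\mathrm{can}_Q}$; the following sketch uses $\mathrm{can}_Q(i)=-i$, $\mathrm{can}_Q(j)=-j$ and $\mathrm{Trd}_Q(c)=2c$ for $c\in K$:

```latex
% Diagonalisation of the involution trace form of (a .|. b)_K:
\begin{align*}
T_{\mathrm{can}_Q}(1,1) &= \mathrm{Trd}_Q(1) = 2\,, &
T_{\mathrm{can}_Q}(i,i) &= \mathrm{Trd}_Q(-i^2) = -2a\,,\\
T_{\mathrm{can}_Q}(j,j) &= \mathrm{Trd}_Q(-j^2) = -2b\,, &
T_{\mathrm{can}_Q}(ij,ij) &= \mathrm{Trd}_Q(ji\cdot ij) = 2ab\,,
\end{align*}
% so that Tr (a .|. b)_K ≃ 2<1,-a,-b,ab> = 2<<a,b>>.
```

The formula for $(a\,\cdot\!\!\mid b)_K$ follows from the same computation with $\mathrm{can}_Q$ replaced by $\mathrm{Int}(i)\circ\mathrm{can}_Q$, which changes the signs at $j$ and $ij$.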
\begin{prop}\label{P:itr-prod}
Let $\Psi$ and $\Theta$ be $K$-algebras with involution.
If $\Psi$ is of the first kind, then
$\mathsf{Tr}(\Psi\otimes\Theta)\simeq \mathsf{Tr}(\Psi)\otimes\mathsf{Tr}(\Theta)$.
If $\Psi$ and $\Theta$ are both unitary with the same centre, then
$2\times\mathsf{Tr}(\Psi\otimes\Theta)\simeq \mathsf{Tr}(\Psi)\otimes\mathsf{Tr}(\Theta)$.
\end{prop}
\begin{proof}
Let $K'$ denote the centre of $\Psi=(A,\sigma)$ and $L$ the centre of $\Theta=(B,\tau)$. In view of the claims we may assume that $K'\subseteq L$.
For $a\in A$ and $b\in B$ we have $\mathrm{Trd}_{A\otimes_{K'}B}(a\otimes b)=\mathrm{Trd}_A(a)\cdot \mathrm{Trd}_B(b)$, as one verifies by reduction to the split case.
Hence,
$T_{\Psi\otimes \Theta}$ and $T_{\Psi}\otimes T_{\Theta}$ coincide as hermitian forms on $A\otimes_{K'} B$ with respect to $(L,\mathrm{can}_L)$.
If $L=K$ then we are done.
Assume now that $(L,\mathrm{can}_L)\simeq (c)_K$ where $c\in\mg{K}$.
Then $\mathsf{Tr}(\Theta)\simeq \la\!\la c\ra\!\ra\otimes\vartheta$ for a form $\vartheta$ over $K$.
If now $K'=K$ then $ \mathsf{Tr}(\Psi)\otimes\mathsf{Tr}(\Theta)\simeq \la\!\la c\ra\!\ra\otimes(\mathsf{Tr}(\Psi)\otimes \vartheta)\simeq \mathsf{Tr}(\Psi\otimes \Theta)$.
In the remaining case $K'=L$ and $\mathsf{Tr}(\Psi)\simeq \la\!\la c\ra\!\ra\otimes\psi$ for a quadratic form $\psi$ over $K$, and we obtain that
$\mathsf{Tr}(\Psi)\otimes\mathsf{Tr}(\Theta)\simeq \la\!\la c,c\ra\!\ra\otimes \psi\otimes\vartheta\simeq 2\times \la\!\la c\ra\!\ra\otimes(\psi\otimes\vartheta)\simeq 2\times
\mathsf{Tr}(\Psi\otimes \Theta)$.
\end{proof}
\begin{prop}\label{P:trace-qf-square}
For any form $\varphi$ over $K$ we have $\mathsf{Tr}(\mathsf{Ad}(\varphi))\simeq \varphi\otimes\varphi$.
\end{prop}
\begin{proof}
See \cite[(11.4)]{BOI}.
\end{proof}
Let $A$ be a finite-dimensional $K$-algebra.
For $a\in A$ let $\lambda_a\in \End_K(A)$ be given by $\lambda_a(x)=ax$
for $x\in A$. The $K$-algebra homomorphism $\lambda:A\longrightarrow \End_K(A)$, $a \longmapsto \lambda_a$ thus obtained is called the
\emph{left regular representation of $A$}.
\begin{prop}\label{P:awi-embed}
Let $\Psi=(A,\sigma)$ be a $K$-algebra with involution. The left regular representation of $A$ yields an embedding of $\Psi$ into $\mathsf{Ad}(\mathsf{Tr}(\Psi))$.
\end{prop}
\begin{proof} For $a, x, y\in A$ we have that
$T_\sigma(x, \lambda_{\sigma(a)}(y))=T_\sigma(\lambda_a(x),y)$.
Thus $\lambda$ identifies $\sigma$ with the restriction to $\lambda(A)$ of the involution adjoint to $T_\sigma$.
\end{proof}
\begin{prop}\label{P:awi-square}
Let $\Psi=(A,\sigma)$ be a $K$-algebra with involution of the first kind.
Then
$\Psi\otimes\Psi \simeq \mathsf{Ad}(\mathsf{Tr}(\Psi))$.
\end{prop}
\begin{proof} We expand the proof of \cite[(11.1)]{BOI}.
Consider the $K$-algebra homomorphism
$\sigma_*: A\otimes_K A\longrightarrow \End_K(A)$ determined by $\sigma_*(a\otimes b)(x)=ax\sigma(b)$ for all $a,b,x \in A$.
As $A\otimes_K A$ is simple and of the same dimension as $\End_K(A)$, $\sigma_*$ is an isomorphism. For $a,b,x,y\in A$ we have
$T_\sigma(x, \sigma_*(\sigma(a)\otimes \sigma(b))(y))=
T_\sigma (\sigma_*(a\otimes b)(x), y)$. Thus $\sigma_*$ identifies
the involution $\sigma\otimes\sigma$ with the adjoint involution of $T_\sigma$.
\end{proof}
\section{Hyperbolicity}
\label{sec6}
Following \cite[(2.1)]{BST}, we say that the $K$-algebra with involution $(A,\sigma)$ is \emph{hyperbolic} if there exists an element $e\in A$ with $e^2=e$ and $\sigma(e)=1-e$.
If $(A,\sigma)$ is adjoint to a hermitian form over a $K$-division algebra with involution, then it is hyperbolic if and only if the hermitian form is hyperbolic.
\begin{prop}\label{P:hyper-char}
The $K$-algebra with involution $(A,\sigma)$ is hyperbolic if and only if there exists $f\in\Skew{A,\sigma}$ with $f^2=1$, if and only if $(1)_K$ embeds into $(A,\sigma)$.
\end{prop}
\begin{proof}
The second equivalence is obvious.
To prove the first equivalence, given $e\in A$ with $e^2=e$ and $\sigma(e)=1-e$, we see that $f=2e-1$ satisfies $\sigma(f)=-f$ and $f^2=1$, and conversely, for $f\in A$ with these properties, $e=\frac{1}{2}(1+f)$ satisfies $e^2=e$ and $\sigma(e)=1-e$.
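In detail (assuming, as the passage $f=2e-1$ already requires, that $\mathrm{char}(K)\neq 2$), both directions are verified by direct computation:
\begin{align*}
f&=2e-1: & f^2&=4e^2-4e+1=1, & \sigma(f)&=2\sigma(e)-1=2(1-e)-1=-f;\\
e&=\tfrac{1}{2}(1+f): & e^2&=\tfrac{1}{4}(1+2f+f^2)=\tfrac{1}{2}(1+f)=e, & \sigma(e)&=\tfrac{1}{2}(1-f)=1-e.
\end{align*}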
\end{proof}
\begin{cor}\label{C:trivial-hyper}
Let $\Psi$ be a split symplectic or degenerate unitary $K$-algebra with involution. Then $\Psi$ is hyperbolic.
\end{cor}
\begin{proof}
Using \eqref{P:JAC} we have that $\Psi\simeq \mathsf{Ad}(\varphi)\otimes \Phi$ for a $K$-algebra with canonical involution $\Phi$, and conclude that $\Phi\simeq (1\,\cdot\!\!\mid\!\!\cdot\, 1)_K$ or $\Phi\simeq (1)_K$.
In either case $\Psi$ contains $(1)_K$ and thus is hyperbolic by \eqref{P:hyper-char}.
\end{proof}
Let $\Psi$ and $\Theta$ denote $K$-algebras with involution.
\begin{prop}\label{P:hyper-iso}
If $\Psi$ and $\Theta$ are hyperbolic with $\Psi\sim \Theta$ and $\deg(\Psi)=\deg(\Theta)$,
then $\Psi\simeq \Theta$.
\end{prop}
\begin{proof}
If $\Psi$ and $\Theta$ are degenerate unitary, the statement follows from \cite[(2.14)]{BOI}.
Otherwise $\Psi$ and $\Theta$ are adjoint to hyperbolic hermitian or skew-hermitian forms of the same dimension over a common $K$-division algebra with involution, and these are necessarily isometric.
\end{proof}
\begin{prop}\label{P:hypermult}
If $\Psi$ is hyperbolic, then $\Psi\otimes\Theta$ is hyperbolic.
\end{prop}
\begin{proof}
This is obvious: if $e\in A$ satisfies $e^2=e$ and $\sigma(e)=1-e$ for $\Psi=(A,\sigma)$, then $e\otimes 1$ has the same properties in $\Psi\otimes\Theta$.
\end{proof}
\begin{prop}\label{C:hyper-trace}
If $\Psi$ is hyperbolic, then $\mathsf{Tr}(\Psi)$ is hyperbolic.
\end{prop}
\begin{proof}
By \eqref{P:awi-embed} $\Psi$ embeds into $\mathsf{Ad}(\mathsf{Tr}(\Psi))$, which implies the statement.
\end{proof}
\begin{prop}
\label{P:simhyp}
Let $a\in \mg{K}$. We have that $a \in \mathsf{G}(\Psi)$ if and only if $\mathsf{Ad}\la\!\la a\ra\!\ra \otimes \Psi$ is hyperbolic.
\end{prop}
\begin{proof}
See \cite[(12.20)]{BOI}.
\end{proof}
\begin{thm}[Bayer-Fluckiger, Lenstra]\label{T:BFL}
Let $L/K$ be a finite field extension of odd degree.
Then $\Psi_L$ is hyperbolic if and only if $\Psi$ is hyperbolic.
\end{thm}
\begin{proof}
See \cite[(6.16)]{BOI}.
\end{proof}
The following is a reformulation of the main result in \cite{Jac40}.
\begin{thm}[Jacobson]\label{T:JAC}
Let $\Phi$ be a $K$-algebra with canonical involution and $\varphi$ a quadratic form over $K$.
Then $\mathsf{Ad}(\varphi)\otimes \Phi$ is hyperbolic if and only if $\varphi\otimes\mathsf{Tr}(\Phi)$ is hyperbolic.
\end{thm}
\begin{proof}
Let $\varphi=(V,B)$ and $\Phi=(A,\sigma)$ with $\sigma=\mathrm{can}_A$.
Then $T_\sigma(x,x)=x+\sigma(x)\in K$ for $x\in A$.
The $K$-algebra with involution $\mathsf{Ad}(\varphi)\otimes \Phi$ is adjoint to the hermitian form $(V_A,h)$ over $\Phi$ obtained from $\varphi$, with $V_A=V\otimes_KA$ and $h:V_A\times V_A\rightarrow A$ determined by $h(a\otimes v, b\otimes w)=\sigma({a})B(v,w)b$ for $a,b\in A$ and $v,w\in V$.
Then $(V_A,T_\sigma\circ h)$ is a quadratic form over $K$ isometric to $\varphi\otimes \mathsf{Tr}(\Phi)$.
The isotropic vectors for $h$ and for $T_\sigma\circ h$ coincide. It follows that a maximal totally isotropic $K$-subspace for $T_\sigma\circ h$ is the same as a maximal totally isotropic $A$-subspace for $h$. This implies the statement.
\end{proof}
\begin{thm}[Bayer-Fluckiger, Shapiro, Tignol]
\label{quExt} Let $a\in\mg{K}\setminus\sq{K}$. Then
$\Psi_{K(\sqrt{a})}$ is hyperbolic if and only if $(a)_K$ embeds into $\Psi$ or
$\Psi \simeq \mathsf{Ad}(\varphi)$ for a quadratic form $\varphi$ over $K$ whose anisotropic part is a multiple of $\la\!\la a\ra\!\ra$.
\end{thm}
\begin{proof}
Assume first that $\Psi$ is split orthogonal, so that $\Psi\simeq \mathsf{Ad}(\varphi)$ for a form $\varphi$ over $K$. Then $\Psi_{K(\sqrt{a})}$ is hyperbolic if and only if $\varphi_{K(\sqrt{a})}$ is hyperbolic, which by \cite[Chap.~VII, (3.2)]{Lam} holds if and only if the anisotropic part of $\varphi$ is a multiple of $\la\!\la a\ra\!\ra$.
In the remaining cases, the statement is proven in \cite[(3.3)]{BST} for involutions of the first kind, and an adaptation of the argument for involutions of the second kind is provided in \cite[(3.6)]{LU}.
\end{proof}
\begin{rem}
There is an overlap in the two cases of the characterization given in \eqref{quExt}.
Assume that $a\in\mg{K}\setminus\sq{K}$ and $\varphi$ is a form over $K$.
Then $(a)_K$ embeds into $\mathsf{Ad}(\varphi)$ if and only if
$\varphi$ is a multiple of $\la\!\la a\ra\!\ra$, if and only if the anisotropic part $\varphi_\mathrm{an}$ of $\varphi$ is a multiple of $\la\!\la a\ra\!\ra$ and $\varphi\simeq \varphi_\mathrm{an}\perp 2m\times \mathbb H$ for some $m\in\mathbb{N}$.
\end{rem}
\begin{cor}\label{C:DinG}
For any $a\in\mg{K}\setminus\sq{K}$ such that $\Psi_{K(\sqrt{a})}$ is hyperbolic, we have that $\mathsf{D}_K\la\!\la a\ra\!\ra \subseteq \mathsf{G}(\Psi)$.
\end{cor}
\begin{proof}
As $\mathsf{D}_K\la\!\la a\ra\!\ra=\mathsf{G}\, (a)_K$, the statement follows immediately from \eqref{quExt}.
\end{proof}
\begin{prop}
\label{P:bqhyp}
Let $Q_1$ and $Q_2$ be $K$-quaternion algebras. The $K$-algebra with involution $(Q_1, \mathrm{can}_{Q_1})\otimes (Q_2, \mathrm{can}_{Q_2})$ is hyperbolic if and only if one of $Q_1$ and $Q_2$ is split.
\end{prop}
\begin{proof}
Let $(A,\sigma)= (Q_1, \mathrm{can}_{Q_1})\otimes (Q_2, \mathrm{can}_{Q_2})$.
If one of the factors is split, it is hyperbolic, and thus $(A,\sigma)$ is hyperbolic.
Assume now that $(A,\sigma)$ is hyperbolic.
Then by \eqref{P:hyper-char} there exists $f\in\Skew{\sigma}$ with $f^2=1$.
We identify $Q_1$ and $Q_2$ with $K$-subalgebras of $A$ that commute with each other elementwise and such that $\sigma|_{Q_i}=\mathrm{can}_{Q_i}$ for $i=1,2$.
Then $\Skew{\sigma}=Q_1'\oplus Q_2'$ where $Q_i'$ is the $K$-subspace of pure quaternions of $Q_i$ for $i=1,2$.
Writing $f=f_1+f_2$ with $f_i\in Q_i'$ for $i=1,2$, we obtain that
$1=f^2=f_1^2+f_2^2+2f_1f_2$. As $f_1^2, f_2^2\in K$, we conclude that $f_1f_2\in K$.
As $f_1f_2=f_1\otimes f_2$ and a nonzero pure quaternion is not a scalar, this is only possible if $f_1f_2=0$, that is, if either $f_1=0$ or $f_2=0$.
If, say, $f_2=0$, then $f=f_1$, which then is a hyperbolic element with respect to $\sigma$ contained in $Q_1$,
whereby $Q_1$ is split.
Hence, one of $Q_1$ and $Q_2$ is split.
\end{proof}
\begin{thm}[Karpenko, Tignol]
\label{T:KT}
Let $\Psi$ be a non-hyperbolic $K$-algebra with involution such that $\Psi\otimes \Psi$ is split.
There exists a field extension $L/K$ such that $\Psi_L$ is not hyperbolic and, either $\Psi_L$ is split or $\Psi$ is symplectic and ${\rm ind}(\Psi_L)=2$.
\end{thm}
\begin{proof}
See \cite[(1.1)]{Kar} for the orthogonal case and \cite[(A.1) and (A.2)]{Tignol} for the other cases.
\end{proof}
Note that the condition in \eqref{T:KT} that $\Psi\otimes\Psi$ be split is trivially satisfied if $\Psi$ is a $K$-algebra with involution of the first kind.
We mention separately the following special case of \eqref{T:KT}, which was obtained earlier by more classical methods.
It will be used in \eqref{P:final}.
\begin{thm}[Dejaiffe, Parimala, Sridharan, Suresh]\label{T:DPSS}
Let $a,b\in\mg{K}$ and let $L$ be the function field of the conic $aX^2+bY^2=1$ over $K$.
Let $\Psi$ be a $K$-algebra with orthogonal involution such that $\Psi\sim (a\,\cdot\!\!\mid b)_K$.
Then $\Psi$ is hyperbolic if and only if $\Psi_L$ is hyperbolic.
\end{thm}
\begin{proof}
If the conic $aX^2+bY^2=1$ is split over $K$, then $L$ is a rational function field over $K$ and the statement is obvious.
Otherwise $\Phi=(a\,\cdot\!\!\mid\!\!\cdot\, b)_K$ is a $K$-quaternion division algebra with involution and $\Psi$ is adjoint to a skew-hermitian form over $\Phi$, in which case the statement follows alternatively from \cite{Dej} or \cite[(3.3)]{PSS}.
\end{proof}
\section{Algebras with involution over real closed fields}
\label{sec7}
Let $\Psi$ be a $K$-algebra with involution.
For $n\geq 1$ we set $n\times \Psi=\mathsf{Ad}(n\times \langle 1\rangle)\otimes\Psi$.
\begin{prop}\label{T:pyth-hamilton-orth}
Assume that $K$ is pythagorean and $\Psi\sim (-1\,\cdot\!\!\mid -1)_K$. Then $\Psi \simeq \mathsf{Ad}(\varphi) \otimes (-1 \,\cdot\!\!\mid -1)_K$ for a form $\varphi$ over $K$.
Moreover, $2\times \Psi$ is hyperbolic.
\end{prop}
\begin{proof}
Let $Q=(-1,-1)_K$.
We may identify $\Psi$ with $(\End_Q(V),\sigma)$ where $V$ is a finite-dimensional right $Q$-vector space and $\sigma$ is the involution adjoint to a regular skew-hermitian form $h:V\times V\longrightarrow Q$ with respect to $\mathrm{can}_Q$.
Since $K$ is pythagorean, any maximal subfield of $Q$ is $K$-isomorphic to $K(\sqrt{-1})$.
We fix a pure quaternion $i\in Q$ with $i^2=-1$ and obtain that any invertible pure quaternion in $Q$ is conjugate to an element of $i\mg{K}$.
This yields that $h$ has a diagonalization with entries in $i\mg{K}$.
It follows that $i h:V\times V\longrightarrow Q$ is a hermitian form with respect to the involution $\tau=\mathrm{Int}(i)\circ \mathrm{can}_Q$ and has a diagonalization with entries in $\mg{K}$.
This yields that $\Psi\simeq \mathsf{Ad}(\varphi)\otimes (Q,\tau)$ for a form $\varphi$ over $K$.
Moreover, $(Q,\tau)\simeq (-1\,\cdot\!\!\mid -1)_K$.
This shows the first claim.
As $\mathsf{Ad}\langle 1,1\rangle\simeq (-1\,\cdot\!\!\mid 1)_K$ we obtain using \eqref{P:flipflop} that
$$2\times (Q,\tau)\simeq (-1\,\cdot\!\!\mid 1)_K\otimes (-1\,\cdot\!\!\mid -1)_K\simeq (-1\,\cdot\!\!\mid\!\!\cdot\, {-1})_K\otimes (-1\,\cdot\!\!\mid\!\!\cdot\, 1)_K\,.$$
By \eqref{P:hyper-char} this $K$-algebra with involution is hyperbolic, and thus so is $2\times \Psi$.
\end{proof}
\begin{thm}
\label{P:rcai}
Assume that $K$ is real closed.
\begin{enumerate}[$(a)$]
\item If $\Psi$ is split orthogonal, then $\Psi \simeq \mathsf{Ad}(r\times\langle 1\rangle\perp\eta)$ for a hyperbolic form $\eta$ over $K$ and $r\in\mathbb{N}$ such that ${\rm sign}(\mathsf{Tr}(\Psi))=r^2$.
\item
If $\Psi$ is non-split orthogonal, then $\Psi\simeq r\times (-1\,\cdot\!\!\mid -1)_K$ for a positive integer $r$, the form $\mathsf{Tr}(\Psi)$ is hyperbolic, and $\Psi$ is hyperbolic if and only if $r$ is even.
\item If $\Psi$ is split symplectic, then $\Psi\simeq r\times (1\,\cdot\!\!\mid\!\!\cdot\, 1)_K$ for a positive integer $r$, and both $\Psi$ and $\mathsf{Tr}(\Psi)$ are hyperbolic.
\item If $\Psi$ is non-split symplectic, then $\Psi \simeq \mathsf{Ad}(r\times\langle 1\rangle\perp\eta)\otimes (-1\,\cdot\!\!\mid\!\!\cdot\, -1)_K$ for a hyperbolic form $\eta$ over $K$ and $r\in\mathbb{N}$ such that ${\rm sign}(\mathsf{Tr}(\Psi))=4r^2$.
\item If $\Psi$ is non-degenerate unitary, then $\Psi \simeq \mathsf{Ad}(r\times\langle 1\rangle\perp\eta)\otimes (-1)_K$ for a hyperbolic form $\eta$ over $K$ and $r\in\mathbb{N}$ such that ${\rm sign}(\mathsf{Tr}(\Psi))=2r^2$.
\item If $\Psi$ is degenerate unitary, then $\Psi\simeq r\times (1)_K$ for a positive integer $r$, and both $\Psi$ and $\mathsf{Tr}(\Psi)$ are hyperbolic.
\end{enumerate}
These cases are mutually exclusive and cover all possibilities, and the integer $r$ is unique in each case.
\end{thm}
\begin{proof}
It is clear that exactly one of the conditions in $(a)$--$(f)$ is satisfied.
As $K$ is real closed, the only finite-dimensional $K$-division algebras are $K$, $K(\sqrt{-1})$, and $(-1,-1)_K$.
Therefore we have $\Psi\sim \Phi$ for the $K$-algebra with involution
$$\Phi = \begin{cases}
(K,\mathrm{id}_K) & \text{if $\Psi$ is split orthogonal,}\\
(-1\,\cdot\!\!\mid -1)_K & \text{if $\Psi$ is non-split orthogonal,}\\
(1 \,\cdot\!\!\mid\!\!\cdot\, 1)_K & \text{if $\Psi$ is split symplectic,}\\
(-1\,\cdot\!\!\mid\!\!\cdot\,-1)_K & \text{if $\Psi$ is non-split symplectic,}\\
(-1)_K & \text{if $\Psi$ is split non-degenerate unitary,}\\
(1)_K & \text{if $\Psi$ is degenerate unitary.}
\end{cases}$$
If $\Psi$ is split symplectic or degenerate unitary, then
$\Psi$ is hyperbolic by \eqref{C:trivial-hyper}, whence $\mathsf{Tr}(\Psi)$ is hyperbolic by \eqref{C:hyper-trace}, and using \eqref{P:hyper-iso} it follows that $\Psi\simeq r\times\Phi$ for some $r\in\mathbb{N}$. This shows $(c)$ and $(f)$.
Next, suppose that $\Psi$ is non-split orthogonal.
Then by \eqref{T:pyth-hamilton-orth} we have $\Psi\simeq \mathsf{Ad}(\varphi)\otimes (-1\,\cdot\!\!\mid -1)_K$ for a form $\varphi$ over $K$, and as $\mathsf{G}\,(-1\,\cdot\!\!\mid -1)_K=\sq{K}\cup-\sq{K}=\mg{K}$ we may choose $\varphi$ to be $r\times\langle 1\rangle$ for some $r\in\mathbb{N}$.
We thus have
$\Psi\simeq r\times \Phi$ with $r\in\mathbb{N}$ such that $\deg(\Psi)=2r$.
Furthermore, $\mathsf{Tr}(\Phi)$ is hyperbolic by \eqref{E:itr}, and thus so is $\mathsf{Tr}(\Psi)\simeq r^2\times \mathsf{Tr}(\Phi)$.
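Here the isometry $\mathsf{Tr}(r\times\Phi)\simeq r^2\times\mathsf{Tr}(\Phi)$ follows from \eqref{P:itr-prod} and \eqref{P:trace-qf-square}:
$$\mathsf{Tr}(r\times\Phi)\;=\;\mathsf{Tr}(\mathsf{Ad}(r\times\langle 1\rangle)\otimes\Phi)\;\simeq\;(r\times\langle 1\rangle)\otimes(r\times\langle 1\rangle)\otimes\mathsf{Tr}(\Phi)\;\simeq\; r^2\times\mathsf{Tr}(\Phi)\,.$$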
This shows $(b)$.
In each of the remaining cases $(a)$, $(d)$, and $(e)$, by \eqref{P:JAC} we have that $\Psi\simeq \mathsf{Ad}(\varphi)\otimes \Phi$ for a form $\varphi$ over $K$. Since $K$ is real closed and $\varphi$ is determined up to a scalar factor, we choose $\varphi$ to be $r\times \langle 1\rangle\perp\eta$ for some $r\in\mathbb{N}$ and a hyperbolic form $\eta$ over $K$.
It further follows that $\mathsf{Tr}(\Psi)\simeq \varphi\otimes\varphi\otimes \mathsf{Tr}(\Phi)$
by \eqref{P:itr-prod} and \eqref{P:trace-qf-square}
and thus ${\rm sign}(\mathsf{Tr}(\Psi))= r^2\cdot{\rm sign}(\mathsf{Tr}(\Phi))$.
As in either case $\mathsf{Tr}(\Phi)$ is positive definite by \eqref{E:itr}, we have that
${\rm sign}(\mathsf{Tr}(\Phi)) = \dim_K(\Phi)$.
This establishes the cases $(a)$, $(d)$, and $(e)$.
Finally, note that in each case the non-negative integer $r$ is determined by $\deg(\Psi)$ or by ${\rm sign}(\mathsf{Tr}(\Psi))$, respectively.
\end{proof}
\begin{cor}
\label{C:rc}
Assume $K$ is real closed. Then $\mathsf{Tr}(\Psi)$ is hyperbolic if and only if $2\times \Psi$ is hyperbolic, if and only if either $\Psi$ is hyperbolic or $\Psi \simeq r\times (-1 \,\cdot\!\!\mid -1)_K$ where $r\in\mathbb{N}$ is odd.
\end{cor}
\begin{proof}
We shall refer to the cases in \eqref{P:rcai}.
In each of the cases $(b)$, $(c)$, or $(f)$, both $\mathsf{Tr}(\Psi)$ and $2\times \Psi$ are hyperbolic.
Assume that we are in one of the cases $(a)$, $(d)$, or $(e)$, and let $r$ be the integer occurring in the statement for that case.
Then ${\rm sign}(\mathsf{Tr}(\Psi))>0$ whenever $r>0$, so each of the conditions that $\mathsf{Tr}(\Psi)$ be hyperbolic, that $2\times\Psi$ be hyperbolic, and that $\Psi$ be hyperbolic is equivalent to $r=0$.
\end{proof}
\begin{cor}\label{C:sign-def}
Let $P$ be an ordering of $K$ and $\Psi$ a $K$-algebra with involution.
Then ${\rm sign}_P(\mathsf{Tr}(\Psi))=[Z(\Psi):K]\cdot s^2$ for some $s\in\mathbb{N}$.
\end{cor}
\begin{proof}
By \eqref{P:rcai} the statement holds in the case where $K$ is real closed and $P$ is the unique ordering of $K$.
The general case follows immediately by extending scalars to the real closure of $K$ at $P$.
\end{proof}
Let $P$ be an ordering of $K$.
The integer $s$ occurring in \eqref{C:sign-def} is called the
\emph{signature of $\Psi$ at $P$} and denoted ${\rm sign}_P(\Psi)$.
With $k=[Z(\Psi):K]$ we thus have
$${\rm sign}_P(\Psi) =\sqrt{\mbox{$\frac{1}{k}$}{\rm sign}_P (\mathsf{Tr}(\Psi))}\,.$$
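For instance, if $K$ is real closed with its unique ordering $P$ and $\Psi=(-1\,\cdot\!\!\mid\!\!\cdot\,-1)_K$, then $\mathsf{Tr}(\Psi)$ is positive definite of dimension $4$ by \eqref{E:itr} and $Z(\Psi)=K$, so that
$${\rm sign}_P(\Psi)\;=\;\sqrt{\mbox{$\frac{1}{1}$}\cdot 4}\;=\;2\,.$$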
\begin{prop}\label{P:sign-mult}
Let $\Psi$ and $\Theta$ be two $K$-algebras with involution.
If $\Psi$ and $\Theta$ are both unitary, assume that they have the same centre.
For every ordering $P$ of $K$ we have that ${\rm sign}_P(\Psi\otimes\Theta)={\rm sign}_P(\Psi)\cdot{\rm sign}_P(\Theta)$.
\end{prop}
\begin{proof}
This follows immediately from \eqref{P:itr-prod}.
\end{proof}
\begin{prop}
Let $\varphi$ be a quadratic form over $K$. For every ordering $P$ of $K$ we have that ${\rm sign}_P (\mathsf{Ad}(\varphi)) = | {\rm sign}_P (\varphi)|$.
\end{prop}
\begin{proof}
This is clear as $\mathsf{Tr}(\mathsf{Ad}(\varphi))\simeq \varphi\otimes\varphi$ by \eqref{P:trace-qf-square}.
\end{proof}
\section{Local-global principle for weak hyperbolicity}
\label{sec8}
We say that the algebra with involution $\Psi$ is \emph{weakly hyperbolic}
if there exists a positive integer $n$ such that $n\times \Psi$ is hyperbolic.
We say that $\Psi$ \emph{has trivial signature} and write ${\rm sign}(\Psi)=0$ to indicate that ${\rm sign}_P(\Psi)=0$ for every $P\in X_K$.
\begin{lem}\label{C:pmqExt}
Assume that there exists $a\in \mg K\setminus \pm\sq K$ such that $\Psi_{K(\sqrt{a})}$ and $\Psi_{K(\sqrt{-a})}$ are hyperbolic. Then $2\times \Psi$ is hyperbolic.
\end{lem}
\begin{proof}
Let $a\in\mg{K}\setminus\pm\sq{K}$ be such that $\Psi_{K(\sqrt{a})}$ and $\Psi_{K(\sqrt{-a})}$ are hyperbolic.
By \eqref{C:DinG} $a$ and $-a$ both belong to $\mathsf{G}(\Psi)$.
As $\mathsf{G}(\Psi)$ is a group, we conclude that $-1\in \mathsf{G}(\Psi)$, so $2\times \Psi\simeq \mathsf{Ad}\la\!\la -1\ra\!\ra \otimes \Psi $ is hyperbolic by \eqref{P:simhyp}.
\end{proof}
\begin{prop}
\label{P:lev}
Assume that $K$ is nonreal and let $n\in\mathbb{N}$ be such that $-1$ is a sum of $2^n$ squares in $K$.
Then $2^{n+1}\times \Psi$ is hyperbolic.
\end{prop}
\begin{proof} By the assumption, the Pfister form $\pi=2^{n+1}\times \langle 1\rangle$ over $K$ is isotropic, whereby it is hyperbolic.
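Explicitly, writing $-1=x_1^2+\cdots+x_{2^n}^2$ with $x_1,\dots,x_{2^n}\in K$, the vector $(1,x_1,\dots,x_{2^n},0,\dots,0)$ is an isotropic vector for $\pi$, since
$$1^2+x_1^2+\cdots+x_{2^n}^2\;=\;1+(-1)\;=\;0\,.$$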
Hence, $2^{n+1}\times \Psi \simeq \mathsf{Ad}(\pi) \otimes \Psi$ is hyperbolic.
\end{proof}
\begin{lem}
\label{L:zk}
Assume that $2^n\times \Psi$ is not hyperbolic for any $n\in\mathbb{N}$, and that for every proper finite extension $L/K$ there exists $n\in \mathbb{N}$ such that $2^n\times \Psi_L$ is hyperbolic.
Then $K$ is real closed and ${\rm sign}(\Psi)\neq 0$.
\end{lem}
\begin{proof}
By \eqref{P:lev} the field $K$ is real, by \eqref{C:pmqExt} its only quadratic field extension is $K(\sqrt{-1})$, and
by \eqref{T:BFL} $K$ has no proper finite field extension of odd degree.
Thus $K$ is real closed, by \cite[Chap.~3, (2.3)]{Scharlau}. It follows from \eqref{P:rcai} that $\Psi\simeq \mathsf{Ad}(\varphi)\otimes\Phi$ for a form $\varphi$ over $K$ and a non-degenerate $K$-algebra with canonical involution $\Phi$.
As $K$ is real closed,
it follows with \eqref{E:itr} that $\mathsf{Tr}(\Phi)$ is positive definite, and thus ${\rm sign}(\Psi)$ is equal to $|{\rm sign}(\varphi)|$ or to $2\cdot |{\rm sign}(\varphi)|$.
As $\Psi$ is not hyperbolic, $\varphi$ is not hyperbolic, and we conclude that ${\rm sign}(\Psi)\neq 0$.
\end{proof}
\begin{lem}\label{L:nilpotent-awi}
Assume that $\Psi$ is split and let $r\in\mathbb{N}$.
Then $\Psi^{\otimes 2r}$ is hyperbolic if and only if $\mathsf{Tr}(\Psi)^{\otimes r}$ is hyperbolic.
\end{lem}
\begin{proof}
Replacing $\Psi$ by $\Psi^{\otimes r}$ we may in view of \eqref{P:itr-prod} assume that $r=1$.
If $\Psi$ is symplectic then $\Psi$ and $\mathsf{Tr}(\Psi)$ are hyperbolic by \eqref{C:trivial-hyper} and \eqref{C:hyper-trace}.
If $\Psi$ is orthogonal, then
$\Psi^{\otimes 2}\simeq \mathsf{Ad}(\mathsf{Tr}(\Psi))$ by \eqref{P:awi-square}, implying the statement.
Assume now that $\Psi$ is unitary.
Then $\Psi\simeq \mathsf{Ad}(\varphi)\otimes (a)_K$ for a form $\varphi$ over $K$ and some $a\in\mg{K}$.
We obtain that $\Psi^{\otimes 2}\simeq \mathsf{Ad}(\varphi\otimes\varphi)\otimes (a)_K$ and
$\mathsf{Tr}(\Psi)\simeq \varphi\otimes\varphi\otimes \la\!\la a\ra\!\ra$.
Using \eqref{T:JAC} we conclude that $\Psi^{\otimes 2}$ is hyperbolic if and only if $\mathsf{Tr}(\Psi)$ is hyperbolic.
\end{proof}
\begin{thm}
\label{T:PLULG}
The following are equivalent:
\begin{enumerate}[$(i)$]
\item ${\rm sign}(\Psi)=0$;
\item $\Psi$ is weakly hyperbolic;
\item $2^n\times \Psi$ is hyperbolic for some $n\in\mathbb{N}$;
\item either $\Psi^{\otimes m}$ is hyperbolic for some $m\geq 1$, or $K$ is nonreal and $\Psi$ is split orthogonal of odd degree.
\end{enumerate}
These conditions are trivially satisfied if $K$ is nonreal.
\end{thm}
\begin{proof}
Trivially $(iii)$ implies $(ii)$, and by \eqref{P:sign-mult} any of the conditions implies $(i)$.
Suppose that $2^n\times \Psi$ is not hyperbolic for any $n\in\mathbb{N}$.
By Zorn's Lemma there exists a maximal algebraic extension $L/K$ such that $2^n\times \Psi_L$ is not hyperbolic for any $n\in\mathbb{N}$. By \eqref{L:zk} $L$ is real closed and $\Psi_L$ has nonzero signature at the unique ordering of $L$.
For the ordering $P=L^2\cap K$ of $K$ we obtain that ${\rm sign}_P(\Psi)\neq 0$. This shows that $(i)$ implies $(iii)$.
To finish the proof we show that $(i)$ implies $(iv)$.
We may assume that $\Psi$ is simple as otherwise $\Psi$ is hyperbolic. Replacing $\Psi$ by $\Psi^{\otimes e}$ for some positive integer $e$ and using \eqref{P:sign-mult}, we may further assume that $\Psi$ is split.
Assuming $(i)$ we have ${\rm sign}(\mathsf{Tr}(\Psi))=0$.
Note further that $\dim_K(\Psi)=\dim_K(\mathsf{Tr}(\Psi))$.
If $\dim (\mathsf{Tr}(\Psi))$ is odd, we conclude that $K$ is nonreal and $\Psi$ is split orthogonal of odd degree.
If $\dim (\mathsf{Tr}(\Psi))$ is even, we obtain by \eqref{C:PLGP}
that $\mathsf{Tr}(\Psi)^{\otimes r}$ is hyperbolic for some positive integer $r$, and then $\Psi^{\otimes 2r}$ is hyperbolic by \eqref{L:nilpotent-awi}.
\end{proof}
\begin{cor}
Assume that $Z(\Psi)\simeq K(\sqrt{a})$ with $a\in\sums{K}$ in case $\Psi$ is unitary, and otherwise that
$\Psi_{R}\sim (-1,-\mathrm{type}(\Psi))_{R}$ for every real closure $R$ of $K$.
Then $\Psi$ is weakly hyperbolic.
\end{cor}
\begin{proof}
In view of the hypothesis, we obtain from \eqref{P:rcai} that $\mathsf{Tr}(\Psi)$ becomes hyperbolic over every real closure of $K$.
Therefore ${\rm sign}(\Psi)=0$, and it follows by \eqref{T:PLULG} that $\Psi$ is weakly hyperbolic.
\end{proof}
Let $T$ be a preordering of $K$.
We say that a $K$-algebra with involution $\Psi$ is \emph{$T$-hyperbolic} if there exists a $T$-positive quadratic form $\tau$ over $K$ such that $\mathsf{Ad}(\tau)\otimes\Psi$ is hyperbolic.
It is clear from \eqref{P:ad-mult} that a quadratic form $\varphi$ over $K$ is $T$-hyperbolic if and only if $\mathsf{Ad}(\varphi)$ is $T$-hyperbolic.
\begin{thm}\label{T:PLULG-Pre}
Let $T$ be a preordering of $K$.
We have ${\rm sign}_P(\Psi)=0$ for every $P\in X_T(K)$ if and only if $\Psi$ is $T$-hyperbolic. Moreover, in this case there exists a $T$-positive Pfister form $\vartheta$ over $K$ such that $\mathsf{Ad}(\vartheta)\otimes \Psi$ is hyperbolic.
\end{thm}
\begin{proof}
Assume first that $\Psi$ is $T$-hyperbolic.
Let $\vartheta$ be a $T$-positive form over $K$ such that $\mathsf{Ad}(\vartheta)\otimes \Psi$ is hyperbolic.
By \eqref{P:sign-mult} then ${\rm sign}_P(\vartheta)\cdot {\rm sign}_P(\Psi)=0$ for any ordering $P$ of $K$. For any $P\in X_T$ we have ${\rm sign}_P(\vartheta)>0$ as $\vartheta$ is $T$-positive, and we conclude that ${\rm sign}_P(\Psi)=0$.
Assume now that ${\rm sign}_P(\Psi)=0$ for every $P\in X_T$.
Then $\vartheta\otimes \mathsf{Tr}(\Psi)$ is hyperbolic for some $T$-positive Pfister form $\vartheta$ over $K$, by \eqref{T:pre-Pfister}.
By \eqref{P:itr-prod} and \eqref{P:trace-qf-square} we have $\mathsf{Tr}(\mathsf{Ad}(\vartheta)\otimes \Psi)\simeq \vartheta\otimes\vartheta\otimes \mathsf{Tr}(\Psi)$.
We conclude that $\mathsf{Ad}(\vartheta)\otimes \Psi$ has trivial total signature.
By \eqref{T:PLULG} there exists $n\in\mathbb{N}$ such that $2^n\times \mathsf{Ad}(\vartheta)\otimes \Psi$ is hyperbolic.
Hence, the isomorphic $K$-algebra with involution $\mathsf{Ad}(2^n\times \vartheta)\otimes \Psi$ is hyperbolic.
As $2^n\times \vartheta$ is a $T$-positive Pfister form, this shows the statement.
\end{proof}
\section{Bounds on the torsion order}
\label{sec9}
By \eqref{T:PLULG}, for a $K$-algebra with involution $\Psi$ such that $\Psi^{\otimes n}$ is hyperbolic for some $n\in\mathbb{N}$, we have that $2^m\times \Psi$ is hyperbolic for some $m\in\mathbb{N}$.
In this situation, one may want to bound $m$ in terms of $n$ and the degree of $\Psi$.
We restrict to the case $n=2$, that is, where $\Psi^{\otimes 2}$ is hyperbolic, and use the function $\Delta:\mathbb{N}\longrightarrow\mathbb{N}$ introduced in Section 2 to bound $m$.
\begin{thm}
\label{P:gkar}
Let $\Psi$ be a $K$-algebra with involution such that $\Psi^{\otimes 2}$ is split hyperbolic.
Let $m=\deg(\Psi)$ if $\Psi$ is orthogonal or unitary, and $m=\frac{1}{2}\deg(\Psi)$ if $\Psi$ is symplectic.
Then $2^{\Delta(m)}\times \Psi$ is hyperbolic.
\end{thm}
\begin{proof}
In view of \eqref{T:KT} it suffices to consider the situation where $\Psi$ is either split orthogonal, or split unitary, or symplectic of index $2$.
Then by \eqref{P:JAC} we have
$\Psi\simeq \mathsf{Ad}(\varphi)\otimes \Phi$ for a form $\varphi$ over $K$ with $\dim(\varphi)=m$ and a $K$-algebra with canonical involution $\Phi$.
As $\Psi^{\otimes 2}$ is hyperbolic, it follows from \eqref{L:nilpotent-awi} in the split case and from \eqref{P:awi-square} in the non-split case that $\mathsf{Tr}(\Psi)$ is hyperbolic.
By \eqref{P:itr-prod} and \eqref{P:trace-qf-square}
we have $\mathsf{Tr}(\Psi)\simeq \varphi\otimes\varphi\otimes\mathsf{Tr}(\Phi)$.
Hence, \eqref{P:Lou} yields that $(2^{\Delta(m)}\times \varphi)\otimes\mathsf{Tr}(\Phi)$ is hyperbolic.
We conclude using \eqref{T:JAC} that $2^{\Delta(m)}\times \Psi\simeq \mathsf{Ad}(2^{\Delta(m)}\times \varphi)\otimes\Phi$ is hyperbolic.
\end{proof}
\begin{thm}
\label{T:quatord}
Let $\Psi$ be a $K$-quaternion algebra with involution and
$m\in\mathbb{N}$.
If $2^m \times \Psi^{\otimes 2}$ is hyperbolic, then $2^{m+1}\times \Psi$ is hyperbolic.
Moreover, the converse holds in case $\Psi$ is split.
\end{thm}
\begin{proof}
Suppose first that $\Psi$ is split. If $\Psi$ is symplectic then it is hyperbolic.
Assume that $\Psi$ is orthogonal or unitary.
Then either
$\Psi\simeq \mathsf{Ad}\la\!\la a\ra\!\ra$ for some $a\in\mg{K}$ or $\Psi\simeq \mathsf{Ad}\la\!\la a\ra\!\ra\otimes (b)_K$ for some $a,b\in \mg{K}$. Either way,
as $\la\!\la a, a\ra\!\ra \simeq 2\times \la\!\la a \ra\!\ra$
it follows that $\Psi^{\otimes 2}\simeq 2\times \Psi$. This yields the claimed equivalence in the split case.
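The isometry $\la\!\la a,a\ra\!\ra\simeq 2\times\la\!\la a\ra\!\ra$ invoked here is a direct computation, using that $a^2\in\sq{K}$; with the convention $\la\!\la a\ra\!\ra=\langle 1,-a\rangle$ (the computation with $\langle 1,a\rangle$ is identical):
$$\la\!\la a,a\ra\!\ra\;=\;\langle 1,-a\rangle\otimes\langle 1,-a\rangle\;\simeq\;\langle 1,-a,-a,a^2\rangle\;\simeq\;\langle 1,1,-a,-a\rangle\;\simeq\;2\times\la\!\la a\ra\!\ra\,.$$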
We derive the implication claimed in general for $\Psi$ orthogonal or unitary by reduction to the split case by means of \eqref{T:KT}. Assume finally that $\Psi$ is symplectic. Then $\Psi^{\otimes 2}\simeq \mathsf{Ad}(\mathsf{Tr}(\Psi))$ by \eqref{P:awi-square} and thus $2^m\times \Psi^{\otimes 2}\simeq \mathsf{Ad}(2^m\times \mathsf{Tr}(\Psi))$ by \eqref{P:ad-mult}.
Hence, if $2^m\times \Psi^{\otimes 2}$ is hyperbolic, then $2^m\times \mathsf{Tr}(\Psi)$ is hyperbolic, and it follows by \eqref{T:JAC} that $2^m\times \Psi$ is hyperbolic.
\end{proof}
The following example shows that the converse in \eqref{T:quatord} does not hold in general.
\begin{ex}
Let $m\in\mathbb{N}$.
Assume that $K$ is either $k(t)$ or $k(\!(t)\!)$ for a field~$k$. Let $a \in \mathsf{D}_k (2^{m+1})\setminus \mathsf{D}_k (2^m)$.
The form $2^m\times \la\!\la a, -t\ra\!\ra$ over $K$ is anisotropic.
Hence, $2^m\times (a \,\cdot\!\!\mid t)_K^{\otimes 2} \simeq \mathsf{Ad}(2^m\times \la\!\la a, -t\ra\!\ra)$ is anisotropic, whereas $2^{m+1}\times (a \,\cdot\!\!\mid t)_K$ is hyperbolic.
\end{ex}
\begin{thm}\label{P:final}
Let $a,b\in \mg K$. Then $2\times (a\,\cdot\!\!\mid b)_K$ is hyperbolic if and only if $a\in \mathsf{D}_K \langle 1,1\rangle \cup \mathsf{D}_K\langle 1,b\rangle$.
For $n\in\mathbb{N} $ with $n\geq 2$ we have that $2^n\times (a\,\cdot\!\!\mid b)_K$ is hyperbolic if and only if
$a=x(y+b)$ with $x\in \mathsf{D}_K(2^n-1)$ and $y \in \mathsf{D}_K(2^n-1)\cup \{0\}$.
\end{thm}
\begin{proof}
Note that
$2\times (a \,\cdot\!\!\mid b)_K \simeq (-1 \,\cdot\!\!\mid 1)_K \otimes (a \,\cdot\!\!\mid b)_K \simeq (-1 \,\cdot\!\!\mid\!\!\cdot\, a)_K\otimes (a \,\cdot\!\!\mid\!\!\cdot\, -b)_K$ by \eqref{P:flipflop}.
Hence, by \eqref{P:bqhyp}, $2\times (a \,\cdot\!\!\mid b)_K$ is hyperbolic if and only if one of $(-1,a)_K$ and $(a,-b)_K$ is split, which happens if and only if
$a\in \mathsf{D}_K\langle 1,1\rangle \cup \mathsf{D}_K\langle 1,b\rangle$.
Let $n\geq 2$. Let $L$ denote the function field of the conic $aX^2+bY^2=1$ over~$K$.
Note that $(a,b)_L$ is split and thus $2^n\times (a\,\cdot\!\!\mid b)_L \simeq \mathsf{Ad}(2^n\times\la\!\la a\ra\!\ra_L)$.
Using \eqref{T:DPSS} $2^n\times (a \,\cdot\!\!\mid b)_K$ is hyperbolic if and only if $2^n\times (a \,\cdot\!\!\mid b)_L$ is hyperbolic, which is
the case if and only if $2^n\times\la\!\la a\ra\!\ra_L$ is hyperbolic.
Using \cite[Chap.~X, (4.28)]{Lam} we conclude that this happens if and only if $\langle 1,-a,-b\rangle$ is a subform of $2^n\times \la\!\la a\ra\!\ra$ over $K$, which is the case if and only if $(2^n-1)\times \la\!\la a\ra\!\ra\perp\langle b\rangle$ is isotropic.
Finally, this occurs if and only if $a=x(y+b)$ for some $x\in \mathsf{D}_K(2^n-1)$ and $y \in \mathsf{D}_K(2^n-1)\cup \{0\}$.
\end{proof}
\begin{qu}
If $K$ is pythagorean and ${\rm sign}(\Psi)=0$, is then $2\times \Psi$ necessarily hyperbolic?
\end{qu}
\subsection*{Acknowledgements}
This work was supported by the \emph{Deutsche Forschungsgemeinschaft} (project \emph{Quadratic Forms and Invariants}, {BE 2614/3}), by the \emph{Zu\-kunfts\-kolleg, Universit\"at Konstanz}, and by the Science Foundation Ireland Research Frontiers Programme (project no. 07/RFP/MATF191).
Textile concepts in mythological cosmology go back to antiquity, e.g.\ to the Fates spinning a tapestry representing destiny, and the planets riding on spheres rotating about a cosmic spindle\footnote{Spindle in the sense of a rod for spinning wool into yarn, mentioned in Plato's {\it Republic} \citep[discussed by][]{James1995}.}. In modern physical cosmology as well, the textile concept of a cosmic web is important, the term introduced and popularized in the paper `How filaments of galaxies are woven into the cosmic web' \citep{BondEtal1996}. `Web' was first used in this context even before, in the 1980's \citep{Shandarin1983, KlypinShandarin1983}.
The cosmic web has inspired many textile artistic representations, reviewed by \citet{DiemerFacio2017}, who also detail new techniques for `tactilization' of the cosmic web. The cosmic web has only recently been accurately mapped, but it resembles some more familiar natural structures. These bear similarities to human-designed structures both because of human inspiration, and because the same mathematical and engineering principles apply both to nature and human design \citep{ArslanSorguc2004}.
The cosmic web and its resemblance to a spiderweb has inspired several room-sized installations by Tom\'{a}s Saraceno\footnote{See e.g.\ {\it How to Entangle the Universe in a Spider Web}, {\it 14 Billions} and {\it Galaxies forming along filaments, like droplets along the strands of a spiders web}, at \hrefurl{http://www.tomassaraceno.com}.}, described by \citet{Ball2017}. Referencing Saraceno's work, \citet{Livio2012} mentions `the visual (although clearly not physical) similarity between spider webs and the cosmic web.' This paper responds to `clearly not physical'; we explain the physical similarity between spider and cosmic webs.
This similarity is through a geometric concept of a `spiderweb' used in architecture and engineering, related to generalized Voronoi and Delaunay tessellations \citep[e.g.][]{OkabeEtal2008,AurenhammerEtal2013}. Such tessellations are already crucial concepts and tools in cosmology \citep[e.g.][]{VdwEtal2009,NeyrinckShandarin2012}. The Voronoi foam concept \citep{IckeVdw1987,VdwIcke1989,Vdw2007} has also been instrumental in shaping our understanding of the large-scale arrangement of matter and galaxies in the universe. Indeed, the arrangement of matter on large scales behaves in many respects like a cellular system \citep{AragonCalvo2014}. Tessellation concepts have been used for some time in cosmological data analysis, as well \citep[e.g.][]{BernardeauVdw1996,SchaapVdw2000,NeyrinckEtal2005,vdwSchaap2009}. For more information about the cosmic web, see recent proceedings of `The Zeldovich Universe: Genesis and Growth of the Cosmic Web' symposium \citep{VdwShandarinSaar2016} and a comparison of ways of classifying parts of the cosmic web \citep{LibeskindEtal2017}.
In the real Universe, filaments of galaxies between clusters of galaxies have been observed for some time \citep[e.g.][]{deLapparentEtal1986}. Observing the cosmic web of filaments of dark and ordinary baryonic matter between galaxies is much more difficult, but has started to happen. With weak gravitational lensing observations, large filaments of matter have been detected individually \citep{JauzacEtal2012}; smaller filaments have been detected by stacking filament signals together \citep{ClampittEtal2016}. Very recently, filaments in ionized gas have been detected \citep{TanimuraEtal2017,deGraaffEtal2017}, by stacking their Sunyaev-Zeldovich effect, a spectral distortion that ionized gas imparts onto the primordial cosmic microwave background radiation. One reason these observations are important is that they support the standard picture that much (tens of percent) of the baryonic matter in the Universe resides not in galaxies themselves, but in the cosmic web of filaments between galaxies.
In this paper, first we will discuss the spiderweb concept and how it applies to large-scale structure, in 1D, 2D, and 3D. Then, in \S \ref{sec:applications}, we suggest some applications of these ideas beyond the understanding of structure formation itself.
\section{Geometry of spiderwebs}
First we will define the geometric concept of a {\it spiderweb} \citep[e.g.][]{WhiteleyEtal2013}, a particular type of spatial graph. Conceptually, a spiderweb is a spatial (i.e.\ with nodes at specified positions) graph that can be strung up in equilibrium such that all of its strings are in tension. It need not look like the common conception of a biological spiderweb, i.e.\ a 2D pattern of concentric polygons supported by radial strands. We will discuss the various concepts in some detail first in 2D, and then turn to 3D. Graphic statics, a visual design and analysis method based on the idea of reciprocity between form and force diagrams, developed after Maxwell's work in the 19th century \citep[e.g.][]{Kurrer2008}. It tailed off in popularity in the early 20th century, but has recently experienced a resurgence of interest, especially the study of global static equilibrium for both 2D and 3D structures \citep[e.g.][]{McRobie2016,KonstantatouMcRobie2016,McRobie2017,BlockEtal2016,AkbarzadehEtal2016}.
The key mathematical property that characterizes a spiderweb is a perpendicularity property. Consider a graph with positioned nodes, and straight edges between them. Suppose that the graph is planar (i.e.\ with non-crossing edges), and so the graph tessellates the plane into polygonal cells. Further suppose a set of {\it generators} exists, one per cell, such that the line connecting the generators of each pair of bordering cells is perpendicular to the edge that forms their border. The network of edges connecting bordering cells is called the {\it dual}, and if those edges are perpendicular to cells' borders, it is a {\it reciprocal} dual. The original graph is a {\it spiderweb} if the edges of the dual tessellation do not cross each other.
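This perpendicularity is straightforward to verify numerically for the ordinary Voronoi/Delaunay pair. The following sketch (an illustration using `scipy.spatial`, an implementation choice not tied to this paper) checks every finite Voronoi edge against the Delaunay segment joining the two generators that border it:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical set of 2D generators; any non-degenerate point set works.
rng = np.random.default_rng(0)
points = rng.random((30, 2))

vor = Voronoi(points)

# Each "ridge" is a Voronoi edge; ridge_points gives the pair of generators
# whose connecting (Delaunay) segment should be perpendicular to it.
for (p1, p2), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:
        continue  # skip edges extending to infinity
    delaunay_edge = points[p2] - points[p1]
    voronoi_edge = vor.vertices[ridge[1]] - vor.vertices[ridge[0]]
    # Perpendicularity: the dot product vanishes up to round-off.
    assert abs(np.dot(delaunay_edge, voronoi_edge)) < 1e-10
print("all finite Voronoi edges are perpendicular to their Delaunay duals")
```
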
James Clerk Maxwell, better known in physics for uniting electromagnetism, conceived of reciprocal duals to analyze and design pin-jointed trusses in structural engineering \citep{Maxwell1864,Maxwell1867}. A {\it form diagram} is just a map of the structural members in a truss, containing nodes and edges between them. A {\it force diagram} also has one edge per beam, but the length of each `force' edge is proportional to the internal force (if positive, compression; if negative, tension) in that structural member. Maxwell showed that if the network is in equilibrium, a closed force diagram can be constructed such that the form and force networks are reciprocals of each other. Fig.\ \ref{fig:eiffeltower} shows an example of a spiderweb. The reciprocal-dual force diagram appears in black; the form diagram is a spiderweb because all force polygons are closed and fit together without crossing any edges.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{eiffeltower.pdf}
\end{center}
\caption{A spiderweb form diagram (blue) resembling the Eiffel Tower, and the corresponding force diagram (black). Letters label perpendicular pairs of form (unprimed) and force (primed) edges. Some perpendicular segment pairs would only actually intersect if extended. Dashed edges lead to external supports.}
\label{fig:eiffeltower}
\end{figure}
To gain some intuition, consider all forces acting on a node of the form diagram. They can be represented by a {\it force polygon} around it, each side perpendicular to an edge coming off the node, of length proportional to the tension. The condition of the node's forces being in balance is equivalent to the force polygon being closed. Now consider adjacent nodes, with their own force polygons. The tension and direction of each edge will be the same at both ends, so the sides of the force polygons at both ends will be the same as well; they may be neatly fitted together without gaps. The ensemble of all such polygons constitutes the force diagram.
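As a toy numerical illustration of this equilibrium condition (the symmetric three-strand node below is a hypothetical example, not a structure from this paper), the tension-weighted unit vectors pulling on a balanced node, i.e.\ the sides of its force polygon, sum to zero:

```python
import numpy as np

def net_force(node, neighbours, tensions):
    """Sum of tension-weighted unit vectors pulling on a node."""
    total = np.zeros_like(node, dtype=float)
    for nb, t in zip(neighbours, tensions):
        d = np.asarray(nb, dtype=float) - node
        total += t * d / np.linalg.norm(d)
    return total

# Three equal-tension strands 120 degrees apart: the force polygon closes.
node = np.array([0.0, 0.0])
angles = np.deg2rad([90.0, 210.0, 330.0])
neighbours = np.stack([np.cos(angles), np.sin(angles)], axis=1)
balance = net_force(node, neighbours, tensions=[1.0, 1.0, 1.0])
print(balance)  # ~ [0, 0]
```

Changing one strand's angle or tension breaks the closure, which is the numerical counterpart of the non-spiderweb example discussed below.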
The form and force diagrams in Fig.\ \ref{fig:eiffeltower} are Voronoi and Delaunay tessellations, respectively, the generating points lying at the vertices of the Delaunay force diagram. Each blue Voronoi cell in the form diagram is the set of points in space closer to its generator point than to any other. The black force diagram is the Delaunay tessellation joining generators. Voronoi and Delaunay tessellations are reciprocal duals, and are always spiderwebs; they have non-crossing, convex polygons, with edges of one perpendicular to edges of the other.
Fig.\ \ref{fig:eiffeltower} is supposed to resemble the Eiffel Tower, which was designed principally by Maurice Koechlin using graphic statics. Koechlin apparently wrote only one article not strictly about engineering, and that happens to be about how spiders spin their webs, mentioning some structural issues \citep{Koechlin1905}. See \citet{FivetEtal2015} for further information about Koechlin and his publications.
The set of spiderwebs is more general than the set of Voronoi diagrams, but only a bit. Each edge of the Voronoi diagram can slide perpendicular to its dual edge (the Delaunay edge joining generators). This edge can be slid in constructing the tessellation by adding a different constant to the distance functions used to decide which points are closest to the generators forming that edge.
In symbols, the cell $V_{\bm{q}}$ around the generator at position $\bm{q}$ is
\begin{equation}
V_{\bm{q}}=\left\{\bm{x}\in \mathcal{E}~{\rm s.t.}~ |\bm{x}-\bm{q}|^2 + z_q^2 \le |\bm{x}-\bm{p}|^2 + z_p^2,~\forall \bm{p}\in \mathcal{L}\right\},
\label{eqn:secvoronoi}
\end{equation}
where $\mathcal{E}$ is the space being tessellated, $\mathcal{L}$ is the set of generator points, and $z_q^2$ and $z_p^2$ are constants, possibly different for each generator. If all constants $z$ are equal, both sides of the inequality reduce to the usual distance functions, giving a usual Voronoi diagram. If they differ, the tessellation generalizes to a {\it sectional-Voronoi diagram} (also known as a `power diagram'), generally not a Voronoi diagram \citep{ChiuEtal1996}. The reason for the word `sectional' is that the tessellation can be seen as a cross-section through a Voronoi diagram in a higher dimension, with each $z_q$ interpreted as a distance from the cross-section. In 2D, the set of spiderweb networks is exactly the set of sectional-Voronoi diagrams \citep{AshBolker1986,WhiteleyEtal2013}, and 3D sectional-Voronoi diagrams are guaranteed to be spiderwebs, as well.
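Equation (\ref{eqn:secvoronoi}) translates directly into a brute-force cell assignment. In this illustrative sketch (toy data, not one of the paper's tessellations), each sample point of $\mathcal{E}$ is assigned to the generator minimizing $|\bm{x}-\bm{q}|^2+z_q^2$; setting all weights equal recovers the ordinary Voronoi labels:

```python
import numpy as np

def sectional_voronoi_labels(samples, generators, z2):
    """Assign each sample point x to argmin_q |x - q|^2 + z_q^2."""
    d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2 + z2[None, :], axis=1)

rng = np.random.default_rng(1)
generators = rng.random((10, 2))
z2 = rng.random(10)           # additive weights; all-equal z2 -> plain Voronoi
xs = rng.random((1000, 2))

labels = sectional_voronoi_labels(xs, generators, z2)
plain = sectional_voronoi_labels(xs, generators, np.zeros(10))
print(np.mean(labels != plain))  # fraction of points reassigned by the weights
```
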
\subsection{Non-spiderwebs}
What kinds of spatial graphs are non-spiderweb? One clear sign that a spatial graph is non-spiderweb is if any of its polygons are non-convex. Suppose there is a non-convex polygon, so there is at least one node of the polygon whose interior angle exceeds 180$^\circ$. That means that all of the threads pulling it are in the same half-plane, with no balancing tension in the other directions, so this node cannot be in force equilibrium.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{nonspiderweb.pdf}
\end{center}
\caption{{\it Blue:} A spatial graph that is not a spiderweb. {\it Black:} An attempted force diagram. It cannot close with the bottom-right member askew, so it would not be in equilibrium if strung up in tension. {\it Orange}: continuations of the external members, that do not meet in a point.}
\label{fig:nonspiderweb}
\end{figure}
Fig. \ref{fig:nonspiderweb} shows another kind of non-spiderweb. Here, members of the blue form diagram outline a square, supported by external members (dashed blue lines) at each corner. Three of these point out 45$^\circ$-diagonally; their force polygons are 45$^\circ$\ right triangles. But the lower-right external member has a different angle, so its node's force polygon must be differently shaped, like the bottom-left triangle. There is no way to scale this triangle to join both vertices to the adjacent triangles, so this is not even a reciprocal dual.
Physical intuition might be of some help to understand this. Imagine stringing up this pattern in tension, such that all strings pointed radially out from the center of the square. Now imagine changing the angle of one of the strings. After the pattern equilibrates, it would be distorted.
Similarly, if the angle of one of the external members of Fig.\ \ref{fig:eiffeltower} were changed, the structure would not be in equilibrium. This is related to a so-called Desargues configuration, named after a founder of projective geometry, a subject closely related to graphic statics \citep[e.g.][]{CrapoWhiteley1982,KonstantatouMcRobie2016}. If external members extend from vertices of a triangle in equilibrium (e.g. Fig.\ \ref{fig:eiffeltower}, without the central member $A$), those members' vectors, extended inside the triangle, must meet in a point. In perspective geometry, this is called the center of perspectivity, the vanishing point of parallel rays at infinity. In Fig.\ \ref{fig:eiffeltower}, lines $C$, $F$ and $I$ meet in such a center of perspectivity, as is guaranteed by $C^\prime$, $F^\prime$ and $I^\prime$ forming a closed triangle.
For a triangle, given two external member directions, the third is fixed also to point to where they would intersect. For polygons with more sides, a simple way to force spiderwebness is still for external member vectors to meet in a single point. (Perhaps this is a reason why idealized biological spiderwebs have concentric polygons joined by radial strands that intersect in the center.) But there are equilibrium spiderweb configurations without members meeting in a single point as well; for an $N$-sided polygon, the closing of the form diagram only fixes one remaining member direction given all $N-1$ others.
\section{The adhesion model and spiderwebs}
Remarkably, the accurate {\it adhesion model} of large-scale structure produces a cosmic web that is exactly a sectional-Voronoi diagram, as well.
The Zeldovich approximation \citep[][ZA]{Zeldovich1970} is the usual starting point for the adhesion model, although any model for a displacement potential (defined below) can be used. The ZA is a first-order perturbation theory, perturbing particle displacements away from an initial uniform grid, and already describes the morphology of the cosmic web remarkably well \citep[e.g.][]{ColesEtal1993}. But it grows notably inaccurate after particles cross in their ballistic trajectories. Crossing is allowed physically because the (dark) matter is assumed to be collisionless, but $N$-body simulations of full gravity show that gravity itself is enough to keep these dark-matter structures compact once they form. `Collapse' is our term for the structure forming, i.e.\ for particle trajectories crossing, forming a {\it multistream} region.
The adhesion model \citep{GurbatovSaichev1984,KofmanEtal1990,GurbatovEtal2012} eliminates the ZA over-crossing problem with a mechanism that sticks trajectories together when they cross. In the adhesion model, a viscosity $\nu$ is introduced formally into the equation of motion (resulting in a differential equation called Burgers' equation), and then the limit is taken $\nu\to 0$. It can then be solved elegantly, by a few methods, including a Laplace transform, and the following convex hull construction \citep{VergassolaEtal1994}.
Let $\mbox{\boldmath $\Psi$}(\bm{q})\equiv \bm{x}-\bm{q}$ denote the {\it displacement} field, the displacement between the final ({\it Eulerian}) position $\bm{x}$ and initial ({\it Lagrangian}) position ($\bm{q}$) of a particle. These labels based on mathematicians' names refer to coordinate systems in fluid dynamics. All distances here are {\it comoving}, meaning that the expansion of the Universe is scaled out. The initial velocity field $\bm{v}(\bm{q})$ after inflation is usually assumed to have zero vorticity \citep[e.g.][]{Peebles1980}, any primordial rotational modes having been damped away through inflationary expansion. In the ZA, $\mbox{\boldmath $\Psi$}\propto\bm{v}$, so $\mbox{\boldmath $\Psi$}$ is also curl-free. In full gravity, $\mbox{\boldmath $\Psi$}$ does have a curl component in collapsed regions, but in uncollapsed regions, it seems to be tiny compared to the divergence \citep{Chan2014,Neyrinck2016}. So here we assume that $\mbox{\boldmath $\Psi$}(\bm{q})=-\mbox{\boldmath $\nabla$}_{\bm{q}}\Phi$, for a displacement potential $\Phi(\bm{q})$.
As discussed in the above adhesion-model papers, the mapping between $\bm{q}$ and $\bm{x}$ for a particle is given by the implicit equation
\begin{equation}
\Phi(\bm{x}) = \max_q\left[\Phi(\bm{q})-\frac{1}{2}|\bm{x}-\bm{q}|^2\right].
\label{adhesionEuler}
\end{equation}
Here, $\Phi(\bm{x})$ is the same Lagrangian potential $\Phi$ as in $\Phi(\bm{q})$, but evaluated at the Eulerian position $\bm{x}$.
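In 1D, Eq.\ (\ref{adhesionEuler}) can be evaluated by brute force. In the following sketch (with a hypothetical sinusoidal $\Phi$, its amplitude chosen large enough that some trajectories cross), the Lagrangian grid points that never achieve the maximum for any Eulerian $x$ are exactly the collapsed ones:

```python
import numpy as np

# 1D toy displacement potential on a grid (an illustrative choice).
q = np.linspace(0.0, 100.0, 512)
Phi = 400.0 * np.sin(2 * np.pi * q / 100.0)   # hypothetical Phi(q)

x = np.linspace(0.0, 100.0, 2048)
# For each Eulerian x, the winning Lagrangian q maximizes Phi(q) - |x-q|^2/2.
score = Phi[None, :] - 0.5 * (x[:, None] - q[None, :]) ** 2
winners = np.argmax(score, axis=1)

# Lagrangian points that win somewhere are uncollapsed; the rest have
# adhered onto a neighbouring structure.
uncollapsed = np.unique(winners)
print(len(uncollapsed), "of", len(q), "grid points remain uncollapsed")
```
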
For a 2D problem, one way to think about this solution is to slide an upward-opening paraboloid with equation $z=|\bm{x}-\bm{q}|^2/2$ around on a surface with height $z=\Phi(\bm{q})$, and mapping any $\bm{q}$ patches that touch the paraboloid to the position of its minimum point, $\bm{x}$. If the paraboloid touches more than one point, the entire polygon or polyhedron of Lagrangian space joining those points adheres together, and is placed at $\bm{x}$. \citet{KofmanEtal1990} nicely explain this process. \citet{VergassolaEtal1994} go on to discuss how this leads to a convex-hull algorithm: raise points sampling the space in a new spatial dimension according to their displacement potential, and shrink-wrap this surface with a convex hull. Uncollapsed (defined below) regions will be on this convex hull, while collapsed regions will be inside, not touching it.
Now, the key result linking spiderwebs to the cosmic web: this convex-hull operation is equivalent to constructing a sectional-Voronoi diagram \citep{HiddingEtal2012,HiddingEtal2016,HiddingEtal2018,Hidding2018}. This is related to the popular convex-hull method of computing a Voronoi tessellation \citep{Brown1979}. For the following figures, we used a Python code that we provide \citep{Hidding2017zenodo}, which uses a native Python wrapper of the Qhull \citep{BarberEtal1996} convex hull algorithm to compute the sectional-Voronoi diagram.
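A minimal version of the convex-hull test can be sketched in 1D (an illustration with a hypothetical potential; `scipy`'s Qhull wrapper stands in for the released code). Expanding the quadratic in Eq.\ (\ref{adhesionEuler}), a grid point $q$ is uncollapsed exactly when the lifted point $(q,\,|q|^2/2-\Phi(q))$ lies on the lower convex hull of the lifted set:

```python
import numpy as np
from scipy.spatial import ConvexHull

q = np.linspace(0.0, 100.0, 512)
Phi = 400.0 * np.sin(2 * np.pi * q / 100.0)    # hypothetical Phi(q)

# Lift each grid point to (q, |q|^2/2 - Phi(q)).
lifted = np.column_stack([q, 0.5 * q**2 - Phi])
hull = ConvexHull(lifted)

# Keep only lower-hull facets: those whose outward normal points downward.
lower = set()
for simplex, eq in zip(hull.simplices, hull.equations):
    if eq[1] < 0:
        lower.update(simplex)
uncollapsed = np.sort(np.fromiter(lower, dtype=int))
print(len(uncollapsed), "of", len(q), "points are on the lower hull (uncollapsed)")
```

Points left off the lower hull sit in regions where $|q|^2/2-\Phi(q)$ fails to be convex; these are the patches that adhere into walls, filaments and nodes.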
\begin{figure*}
\begin{minipage}{175mm}
\begin{center}
\includegraphics[width=0.8\columnwidth]{sectional_voronoi.pdf}
\end{center}
\caption{{\it Top-left}: An example 1D displacement potential. {\it Bottom-left panels}: Three snapshots of the evolution of the sectional-Voronoi tessellation. Generators at grid points begin at $\Phi(q)=0$, but as $\Phi(q)$ scales with the growth factor $D_+$, generators lift off the $x$-axis by a distance $\sqrt{2\Phi_{\max}-2\Phi(q)}$. The Eulerian cell of each grid point is the intersection of the $x$-axis with the full 2D Voronoi tessellation; `particles' are at these intersection points. In sufficiently deep potential wells, 2D Voronoi cells rise so high that the orange line no longer intersects them. Green circles show where the particles would be using the ZA, i.e.\ simply $x=q-\frac{d\Phi}{dq}$. In uncollapsed regions, the intersection points correspond to the circles, but note that in collapsed regions (e.g.\ at $x\approx 47$), green circles overcross. {\it Right}: Illustration of how the displacement field comes from this construction (see text).
}
\label{fig:sectional_voronoi}
\end{minipage}
\end{figure*}
The idea is easily visualized for a 1D universe. Fig.\ \ref{fig:sectional_voronoi} shows how linearly evolving a 1D potential in time changes its sectional-Voronoi tessellation, and 1D cosmic web. Each generator point is lifted to a height $h(q)=\sqrt{2\Phi_{\rm max} - 2\Phi(q)}$, for some $\Phi_{\max}\ge \max_{q} \Phi(q)$. (In Fig.\ \ref{fig:sectional_voronoi}, $\Phi_{\max}= \max_q \Phi(q) + 5$\,$h^{-1}$\,Mpc, adding a constant for visual clarity.) If they did not collide, the black lines between generator points would all intersect the $x$-axis displaced by $\Psi(q)=-d\Phi/dq$ (positions indicated with green circles). This is exactly the expected displacement in the ZA. To see this, consider the right panel of Fig.\ \ref{fig:sectional_voronoi}. A segment between two grid points of $h(q)$ is shown in blue. Its slope is simply the derivative of $h(q)$, $\frac{dh(q)}{dq} = -\frac{d\Phi}{dq}\left[2\Phi_{\rm max}-2\Phi(q)\right]^{-1/2}$. As it should, this expression agrees with the slope inferred using the right triangle in the figure.
In the left panels of Fig.\ \ref{fig:sectional_voronoi}, there is one Voronoi cell per generator point. The Eulerian cells in the 1D universe are the intervals between intersections of the black lines with the orange $x$-axis. Where the potential is deepest, generators have floated up so high that their Voronoi cells no longer intersect with the orange line. This indicates collapse; these particles give their mass to the black line that does manage to intersect the orange line. We show the ZA position of these particles with green circles, using $x=q-\frac{d\Phi}{dq}$, evaluating the derivative with a finite difference between grid points. Note that in collapsed regions (e.g.\ at $x\approx 47$), green circles have some dispersion (having crossed each other), unlike the Voronoi edges in the collapsed regions, which have adhered.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{displacement_potential.pdf}
\end{center}
\caption{The displacement potential used to generate the following adhesion-model cosmic webs. It is only $32^2$, an extremely low resolution, for clarity.}
\label{fig:displacement_potential}
\end{figure}
\begin{figure*}
\begin{minipage}{175mm}
\begin{center}
\includegraphics[width=0.85\columnwidth]{cosmicduals1.pdf}
\end{center}
\caption{{\it Upper right}: A cosmic web generated from the displacement potential in Fig.\ \ref{fig:displacement_potential}. Each blue polygon is a sectional-Voronoi cell (defined in Eq.\ \ref{eqn:secvoronoipot}), inhabiting Eulerian (final comoving position) space; the web collectively is a spiderweb. {\it Lower left}: The corresponding reciprocal dual tessellation, in Lagrangian (initial comoving position) space; each node of the Eulerian web is a triangle here. In architecture, the length of each white edge would be proportional to the tension in the corresponding spiderweb thread. {\it Upper left}: the web at upper right, adding a translucent black circle at each node of area proportional to its mass (the area of its black triangle at lower left). {\it Lower right}: A Minkowski sum of the first two tessellations, every cell halved in linear size, i.e.\ $\alpha=\frac{1}{2}$ in Eq.\ (\ref{eqn:minksum}).}
\label{fig:cosmicduals}
\end{minipage}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{eulag0_0.pdf}
\includegraphics[width=\columnwidth]{eulag0_25.pdf}
\includegraphics[width=\columnwidth]{eulag0_5.pdf}
\includegraphics[width=\columnwidth]{eulag1.pdf}
\end{center}
\caption{A time sequence of the adhesion-model cosmic web in Fig.\ \ref{fig:cosmicduals}, scaling the displacement potential in Fig.\ \ref{fig:displacement_potential} by $D_+=0,0.25,0.5$, and $1$, from top to bottom. {\it Left}: Lagrangian triangulation; each black patch collapses into a node of the web at right.}
\label{fig:timesequence}
\end{figure}
The procedure to produce an adhesion-model realization of particles (in 1D, 2D or 3D) at scale factor $D_+$ is as follows:
\begin{enumerate}
\item{Generate a grid of initial conditions $\delta_0(\bm{q})$, i.e.\ a Gaussian random field consistent with an initial power spectrum, dependent on cosmological parameters.}
\item{From $\delta_0(\bm{q})$, obtain the displacement potential in Lagrangian coordinates, $\Phi(\bm{q})$. In the ZA, $\Phi(\bm{q})=D_+ \nabla^{-2}\delta_0$, the inverse Laplacian straightforwardly calculable with a fast Fourier transform (FFT).}
\item{Construct sectional-Voronoi and Delaunay tessellations from the grid. To do this, put a generator point at each grid point, setting the additive weight in the distance function for each generator point to $z^2(\bm{q}) = -2\Phi(\bm{q})$. Each cell $V_{\bm{q}}$ then satisfies
\begin{equation}
V_{\bm{q}}=\left\{\bm{x}\ {\rm s.t.}~ |\bm{x}-\bm{q}|^2 - 2\Phi(\bm{q}) \le |\bm{x}-\bm{p}|^2 - 2\Phi(\bm{p}),~\forall \bm{p}\in \mathcal{L}\right\},
\label{eqn:secvoronoipot}
\end{equation}
where $\mathcal{L}$ is the set of Lagrangian grid points. (To make the tessellation truly a geometrical section of a higher-dimensional Voronoi tessellation, add $\Phi_{\rm max}$ to $-\Phi(\bm{q})$ as in the 1D example above, but this is not necessary computationally.) If the volume of $V_{\bm{q}}$ is zero, the Lagrangian patch at $\bm{q}$ is part of a collapsed wall, filament, or node; if it is positive, the patch at $\bm{q}$ is uncollapsed. The mass of each particle in the web is given by the area (in 2D) or volume (in 3D) of the corresponding triangle or tetrahedron of the dual tessellation (which is a reciprocal dual). We call this dual a `weighted Delaunay tessellation' (also known as a `regular triangulation'), a tessellation of the Lagrangian space of the initial conditions.}
\end{enumerate}
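Step (ii) above is a single FFT operation. The sketch below (2D for brevity, with zero-mean white noise standing in for a properly sampled cosmological $\delta_0$, so the field is purely illustrative) computes $\Phi=D_+\nabla^{-2}\delta_0$:

```python
import numpy as np

def displacement_potential(delta0, boxsize, D=1.0):
    """Phi = D * inverse-Laplacian of delta0 via FFT (using delta = grad^2 Phi)."""
    n = delta0.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    delta_hat = np.fft.fftn(delta0)
    phi_hat = np.zeros_like(delta_hat)
    # In Fourier space grad^2 -> -k^2, so Phi_hat = -delta_hat / k^2;
    # the k = 0 (mean) mode is left at zero.
    np.divide(-delta_hat, k2, out=phi_hat, where=k2 > 0)
    return D * np.fft.ifftn(phi_hat).real

rng = np.random.default_rng(2)
delta0 = rng.standard_normal((64, 64))
delta0 -= delta0.mean()              # enforce a zero-mean overdensity field
Phi = displacement_potential(delta0, boxsize=100.0)
```

Applying the spectral Laplacian to `Phi` recovers `delta0` up to round-off, which is a convenient self-check.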
\subsection{2D example}
A $\Phi(\bm{q})$ field appears in Fig.\ \ref{fig:displacement_potential}, and the resultant cosmic spiderweb in Fig.\ \ref{fig:cosmicduals}. Each triangle in Lagrangian space, with mass given by its area (lower left) is a node of the spiderweb in Eulerian space (upper right). The nodes are shown with mass deposited at upper left, and a half-half Lagrangian/Eulerian mixture is at lower right, called a Minkowski sum.
A Minkowski sum of two sets of vectors $\bm{A}$ and $\bm{B}$ is $\bm{A}+\bm{B} \equiv \{\bm{a}+\bm{b}~|~\bm{a}\in\bm{A}, \bm{b}\in\bm{B}\}$. We alter this concept a bit, following \citet{McRobie2016}, for application to dual tessellations. We do not want to attach arbitrary vectors in each set to each other, but specify that there is a subset $\bm{B}_i$ of the dual tessellation $\bm{B}$ attachable to each vector $\bm{a}_i\in\bm{A}$. We also add a scaling $\alpha$ to interpolate between the original and dual tessellations. The vertices of our {\it Minkowski sum} satisfy
\begin{equation}
\alpha\bm{A}+(1-\alpha)\bm{B}\equiv \left\{\alpha\bm{a}_i+(1-\alpha)\bm{b}_j~|~\bm{a}_i\in\bm{A}, \bm{b}_j\in\bm{B}_i\right\}.
\label{eqn:minksum}
\end{equation}
The vectors in the sum are what we actually plot. Only spiderweb networks have Minkowski sums with parallel lines separating neighboring polygons of each tessellation like this.
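A literal implementation of Eq.\ (\ref{eqn:minksum}) takes only a few lines. In this sketch (toy data, not one of the paper's tessellations), each form vertex is interpolated toward the dual vertices attachable to it:

```python
import numpy as np

def scaled_minkowski_vertices(A, B_subsets, alpha):
    """Vertices alpha*a_i + (1-alpha)*b_j for each b_j attachable to a_i."""
    return [alpha * np.asarray(a) + (1 - alpha) * np.asarray(b)
            for a, B_i in zip(A, B_subsets)
            for b in B_i]

# Hypothetical example: one form vertex with two attachable dual vertices,
# interpolated halfway (alpha = 1/2, as in the lower-right panel).
A = [np.array([0.0, 0.0])]
B_subsets = [[np.array([2.0, 0.0]), np.array([0.0, 2.0])]]
print(scaled_minkowski_vertices(A, B_subsets, alpha=0.5))
# -> points [1, 0] and [0, 1]
```

Setting $\alpha=1$ returns the original tessellation's vertices and $\alpha=0$ the dual's, so the same routine interpolates continuously between the two panels.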
Fig.\ \ref{fig:timesequence} shows the time evolution of this cosmic web, from uniformity at $D_+=0$, to the snapshot in Fig.\ \ref{fig:cosmicduals}. When triangles in the Lagrangian triangulation merge, the nodes stick together.
Some aspects of each tessellation may look curious. In the Lagrangian triangulation, why is there a square grid, with each square split by 45$^\circ$, randomly-oriented diagonals? And why are the threads of the Eulerian web only horizontal, vertical, and 45$^\circ$-diagonal in uncollapsed regions? These are both results of picking a square grid for the sectional-Voronoi generators. The grid of Voronoi generators could just as easily form a triangular grid, or even an irregular, e.g.\ `glass', set of initial conditions. In that case, the threads outside collapsed regions would have random directions. Another possibly surprising thing is that opposite sides of the tessellation do not perfectly fit together if tiled, as would be expected from a periodic displacement potential. This is because such a tiling is missing boundary cells: cells between generators along the top and bottom, and along the left and right.
So, in summary, what does it mean to say that the cosmic web in the adhesion model is a spiderweb, in 2D? A realization of the universe in the adhesion model, truncating the structure at a fixed resolution, consists of a set of particles of possibly different mass, at vertices of a sectional-Voronoi tessellation. If the edges of sectional-Voronoi cells are replaced with strands of string, this network can be strung up to be entirely in tension. The tension in each strand will be proportional to the length of the corresponding edge of the weighted Delaunay triangulation, which in cosmology is proportional to the mass per unit length along the filament. So, to construct this with the optimal amount of constant-strength material to be structurally sound, strand thicknesses would be proportional to their thicknesses in the Minkowski sum.
Note the tree-like appearance in the bottom-right panel of Fig.\ \ref{fig:cosmicduals}. In a tree (a structural spiderweb), the summed cross-sectional area of the trunk is roughly conserved after each branching. In our case, similarly, the total cross-sectional area (mass) of a large filament is conserved after branching. Referencing actual arachnid spiderwebs, gravity is like a haunted-house explorer, clearing strands aside, causing them to adhere and produce thicker strands. Note that there are other networks in nature with approximate conservation of summed thickness across branches, such as biological circulatory networks, and maps of traffic flow in and out of a hub, weighted by traffic volume \citep{West2017}.
However, there is an aspect of this picture that is a bit at odds with a particular definition of the cosmic web as consisting of multistream regions \citep[e.g.][]{FalckEtal2012,Neyrinck2012,RamachandraShandarin2015,RamachandraShandarin2017,ShandarinMedvedev2017}. This multistream picture of the cosmic web is particularly simple to relate to the adhesion model, but there are alternative definitions of the cosmic web that have their own advantages \citep{LibeskindEtal2017}. To be guaranteed of a cosmic web that is a spiderweb, uncollapsed as well as collapsed nodes of the sectional-Voronoi tessellation must be included. {\it Uncollapsed} nodes have not joined with other nodes; these are the smallest black triangles in Figs.\ \ref{fig:cosmicduals} and \ref{fig:timesequence}. {\it Collapsed} nodes are those that have joined with other nodes in the adhesion model; these would be expected to have experienced stream-crossing in full gravity. The multistream picture of the cosmic web is that set of only collapsed regions.
The distinction between the cosmic web of collapsed and uncollapsed nodes is relevant for the conceptual question of whether the uncollapsed/single-stream/void region {\it percolates} (i.e.\ is a single connected region) throughout the Universe, or whether multistream boundaries pinch it off into discrete voids. If uncollapsed nodes are included in the adhesion-model cosmic-web network, the set of `voids' is simply the set of all sectional-Voronoi cells. Each of these is its own exact convex polyhedron, by construction, and they do not percolate. More interesting, though, are the percolation properties of the region outlined by collapsed nodes; in $N$-body simulations, it seems that this region does percolate \citep{FalckNeyrinck2015,RamachandraShandarin2017}.
A relevant question for further study is to what degree the cosmic web remains a spiderweb if uncollapsed nodes are excluded. Imagine stringing up the blue Eulerian web in Fig.\ \ref{fig:cosmicduals}, to be in tension. To what degree could the web of collapsed nodes be preserved after clipping away edges in void regions? The shapes would generally change, e.g.\ some bends in filaments would straighten. But if the threads are carefully chosen, the change in shape could be small. Also, there is a freedom to change the initial arrangement of nodes away from a lattice, which could be used to optimize the spiderwebness of the modeled cosmic web.
Note too that if the multistream region is not a single connected structure (i.e.\ if it does not percolate), then it cannot collectively be a spiderweb, although each connected patch could be. Thankfully (for cosmic-spiderweb advocates), \citet{RamachandraShandarin2017} find indeed that the multistream region generally nearly percolates if the mass resolution is reasonably high, at the current epoch. But this picture would be different at an early epoch, when the multistream region would indeed consist of different discrete isolated patches, that could individually but not collectively be a spiderweb. While the single-stream/void region appears to percolate at all epochs, the epoch when the multistream region percolates in the observable universe may define an interesting characteristic time for structure formation in the Universe.
\section{Origami tessellations}
Another physically meaningful way of representing the cosmic web is with origami. In fact, it was the origami result by \citet{LangBateman2011} that led us to the current paper. They give an algorithm for producing an {\it origami tessellation} from a spiderweb. Here we define an origami tessellation as an origami construction based on a polygonal tessellation that folds flat (in 2D, can be pressed into a book without further creasing). Also, we require that folding all creases produces a translation, but not a rotation or reflection, in each polygon of the original tessellation. The no-rotation property is usual but not universal in paper-origami tessellations. But it is crucial in the cosmological interpretation: single-stream regions, which correspond to these polygons, should hardly rotate, since large-scale vorticity is negligible.
By default, we mean `simple twist fold' tessellations, in which each pair of adjacent polygons has a single pleat between them, and each polygon vertex has a polygonal node. A {\it pleat} is a parallel pair of creases; since they are parallel, neighboring polygons do not rotate relative to each other. Another relevant type of tessellation is a `flagstone' tessellation; \citet{Lang2015} also worked out an algorithm to produce one of these from a spiderweb. Polygons in a flagstone tessellation are designed to end up entirely on top after creases are folded, requiring at least four creases between each pair of polygons if flat-folded. Voronoi tessellations, reciprocal diagrams, and related concepts are rather widely used in origami design and mathematics \citep{Tachi2012,DemaineEtal2015,Mitani2016}, e.g.\ playing a central role in the recent first `practical' algorithm for folding any polyhedron from a 2D sheet \citep{DemaineTachi2017}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{cog5_90.pdf}
\includegraphics[width=\columnwidth]{cog5_20.pdf}
\includegraphics[width=\columnwidth]{origami_dreamcatcher.jpg}
\end{center}
\caption{Origami and textile approximate representations of the Council of Giants \citep{McCall2014}. {\it Top}: Eulerian-space Voronoi polygons, and dual, Lagrangian-space Delaunay polygons, combined in a Minkowski sum construction. {\it Middle}: The same, with both tessellations rotated with respect to each other. With its smaller angles, it is actually foldable. Green lines are valley folds (looking like a V from the side when folding); black lines are mountain folds (looking like half of an M). {\it Bottom}: The middle panel folded from paper, alongside a nearly matching spiderweb construction built from yarn and an embroidery hoop. The length of a Delaunay edge at top is proportional to the tension in the corresponding strand of the spiderweb.}
\label{fig:councilofgiants}
\end{figure}
Fig.\ \ref{fig:councilofgiants} shows an example of an origami tessellation, in a pattern designed to resemble the set of galaxies in the so-called Council of Giants \citep{McCall2014}. These are `giant' galaxies (e.g.\ the Milky Way and Andromeda) within 6 Mpc of the Milky Way; all such galaxies happen to be nearly coplanar, along the Supergalactic Plane. Such a nearly 2D structure is particularly convenient to represent with 2D paper. We designed this by hand, from a Voronoi tessellation, with Robert Lang's {\it Tessellatica} Mathematica code.\footnote{\hrefurl{http://www.langorigami.com/article/tessellatica}}
The top panel shows both the web and its dual together, constructed as a Minkowski sum. The middle panel shows essentially the same, except that the Delaunay polygons from the top have been rotated by 70$^\circ$, and the Voronoi polygons shifted around accordingly, as prescribed by the `shrink-and-rotate' algorithm \citep{Bateman2002}. In fact, the top panel was generated in the same way, but it is not foldable as actual origami without patches of paper passing through each other, because of the large, 90$^\circ$\ angle between Delaunay and Voronoi polygons. The bottom panel shows the middle pattern folded from actual paper, alongside a textile spiderweb construction with the same structure.
Compared to the actual Council of Giants arrangement \citep[][Fig.\ 3]{McCall2014}, this structure is only approximate; some galaxies are lumped together. All galaxies could have been included, but this would have required dividing up the voids, since each galaxy must have at least 3 filaments. Also, greater accuracy would have been possible by using different weights in a sectional-Voronoi diagram; here, we used a simple 2D Voronoi diagram.
\subsection{Origami tessellations and the cosmic web}
Does an origami tessellation have a physical correspondence to the cosmic web? Yes, approximately. An origami tessellation describes the arrangement of dark matter after gravity has caused it to cluster, under the strict `origami approximation' \citep{Neyrinck6OSME2015,NeyrinckZELD2016}. The strict part of this approximation is the requirement that the sheet does not stretch non-uniformly (i.e.\ stays constant-density, except for piling up when it is folded). In full gravity, this requirement is wildly violated, although in a way, it holds surprisingly well into multistream regions; \citet{VogelsbergerWhite2011} found that the median density on each stream is near the background density, even deep in the heart of a dark-matter halo (a collapsed dark-matter node that might be big enough to pull gas into it to form a galaxy), with up to 10$^{14}$ streams coinciding in some locations.
In the origami approximation, the density at a location after folding is the density of the sheet times the number of layers that overlap there. Imagine shining a light through the paper after folding, and interpret the darkness at each point as a density; the light freely shines through single-layer regions, but is blocked progressively more as the number of layers increases.
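To make the layer-counting picture concrete, here is a minimal 1D sketch (an illustrative toy, with an invented displacement field; none of the specific numbers come from this paper): the sheet folds wherever the map from Lagrangian to Eulerian coordinates is non-monotonic, and the `density' at an Eulerian point is proportional to the number of layers covering it.

```python
import numpy as np

# Toy 1D illustration of the origami-approximation density: a uniform sheet
# at Lagrangian coordinates q maps to Eulerian positions x(q); the density
# at a point is proportional to the number of layers covering it.
q = np.linspace(0.0, 1.0, 2001)              # Lagrangian coordinates
x = q + 0.25 * np.sin(2.0 * np.pi * q)       # toy fold: three streams near x = 0.5

def stream_count(x_eval, x_sheet):
    """Number of layers of the folded sheet covering each evaluation point."""
    below = x_sheet[None, :] < np.asarray(x_eval)[:, None]
    # each flip of the 'below' flag along the sheet is one layer boundary
    return np.sum(below[:, :-1] != below[:, 1:], axis=1)

layers = stream_count([0.2, 0.49], x)        # 1 layer in the void, 3 in the fold
```

The `backlit paper' picture above is exactly this count: single-layer regions return 1, while the fold around $x=0.5$ returns 3.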
\begin{figure}
\begin{center}
\includegraphics[width=0.72\columnwidth]{origamiweb.pdf}
\end{center}
\caption{{\it Top}: An origami crease pattern, formed by combining the edges of the sectional-Voronoi (blue) and Delaunay (black) tessellations of Fig.\ \ref{fig:cosmicduals} in a construction known as a Minkowski sum: every blue polygon is scaled by $\alpha=\frac{3}{4}$ and placed at a black node, and every black polygon is scaled by $(1-\alpha)$ and placed at a blue node. {\it Middle}: The locations of the creases after folding up the crease pattern, i.e.\ performing a reflection operation in each line segment. Note that the sheet has contracted, because patches (still the same size) now overlap. {\it Bottom}: The `density' in the folded-up middle panel, showing the number of streams (layers of the sheet) at each location, as though backlit. White polygons are single-stream, and the number of streams increases with the darkness of the color.}
\label{fig:origamiweb}
\end{figure}
Since an adhesion-model cosmic web gives a spiderweb, and a spiderweb gives an origami tessellation, the adhesion model can give an origami tessellation, as in Fig.\ \ref{fig:origamiweb}. Does the folded-up origami tessellation in the bottom panel have a precise physical meaning for an adhesion-model cosmic web?
Yes, with some caveats. Although the origami construction gives a good qualitative representation of the web, it does not necessarily give a good representation of the phase-space structure of the dark-matter sheet \citep{ShandarinEtal2012,AbelEtal2012,FalckEtal2012,Neyrinck2012,HiddingEtal2014,FeldbruggeEtal2017}, the original inspiration for the origami construction. The {\it dark-matter sheet}, or Lagrangian submanifold, is a construction that keeps track of the way dark matter fills and moves in space. The dark-matter sheet is a 3D manifold folding without crossing itself or tearing in 6D position-velocity phase space. Its vertices, corresponding to particles in the usual conception of a cosmological $N$-body simulation, begin with tiny velocities and displacements away from a uniform grid. Then, starting from these tiny perturbations, gravity causes structures to fold up in 6D phase space.
Some aspects of this folding in a large-scale structure context are described by catastrophe theory; see \citet{ArnoldEtal1982}, and recently \citet{HiddingEtal2014,FeldbruggeEtal2017}. In fact, this link through catastrophe theory brings in interesting analogies to other physical systems. One example is the set of light caustics at play on the bottom of a swimming pool: an incoming sheet of light gets bent and folded into a pattern of light resembling a two-dimensional cosmic web \citep{ShandarinZeldovich1984,ShandarinZeldovich1989,HiddingCaustics2014}. Often in nature and art, catastrophe theory deals with smooth curves, as emphasized by \citet{McRobie2017seduction}. Compared to these, the origami and spiderweb constructions discussed here are angular, composed entirely of straight lines. The reason for this difference is that we are considering systems such as the cosmic web at a fixed resolution, at which the structure is angular. The physical, curved edges of structures would reveal themselves if the resolution were increased.
In the Minkowski sum construction, folding occurs even at uncollapsed nodes, which correspond to single resolution elements. But no folding of the phase-space sheet is expected to occur there. At collapsed nodes, though, the representation holds reasonably: the sheet does fold qualitatively like it would in phase space. The larger the node, the more massive it is physically.
What about the correspondence between origami pleats/filaments and more usual conceptions of cosmic filaments? This is an important question for us, since the calculation that originally inspired the term `cosmic web' \citep{BondEtal1996} showed that typically, from the initial conditions onward, there is an overdense column of matter (a filament) between nearby collapsed peaks. Origami pleats/filaments do indeed correspond rather well to cosmic filaments. Sensibly, the thickness of each origami pleat/filament is proportional to the length of the edge it corresponds to in the Lagrangian triangulation; this indicates its mass per unit length if interpreted cosmologically.
But, reducing the physicality of the origami construction, pleats exist in uncollapsed void regions as well (but are only one resolution element wide). These fold up in the origami construction, but do not correspond to multistream regions. And for a wide, physical filament in the origami construction, whenever it bends, it must do so with a node, joined to at least one additional pleat. If that additional pleat is just one resolution element wide, it does not correspond to a physical collapsed filament, and the node producing the bend in the larger filament does not correspond to a physical node, i.e.\ a structure that has collapsed along more than one axis.
Another way the origami folded model could be relevant physically is in estimating the density field from an adhesion-model realization. But the question of how most accurately to assign density to an Eulerian grid given an adhesion-model realization is beyond our current scope.
\section{Three dimensions}
What is a 3D spiderweb? The typical picture of a biological spiderweb is nearly planar, but many spiders spin fully 3D webs (such as the black widow; see Tom\'{a}s Saraceno's {\it 14 Billions}). The concept of cosmic webs in the adhesion model, and the structural-engineering spiderweb, carry to 3D as well. Indeed, the field of fully 3D graphic statics, which employs concepts such as the reciprocal diagram, has experienced a resurgence of interest, and is currently an active area of research.
Most of the spiderweb concepts we have discussed generalize straightforwardly to 3D. A 3D spiderweb is a network of members joining nodes that can be strung up to be entirely in tension. \citet{Rankine1876} introduced the concept of a reciprocal dual in 3D. The following will mirror the 2D definition. Consider a 3D spatial graph of positioned nodes, and edges between them. A {\it dual} to this graph is a tessellation of space into closed polyhedral cells, one per original node, such that neighboring nodes of the original are separated by faces in the dual. A dual is {\it reciprocal} if each original edge is perpendicular to its corresponding face in the dual. The original graph is a {\it spiderweb} if a reciprocal dual exists such that its cells fit together at nodes with no gaps or overlaps between them.
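The reciprocity property is easy to check numerically. The sketch below is an illustrative example using the weightless special case, an ordinary 3D Voronoi diagram computed with {\tt scipy.spatial}: every bounded face shared by two cells is perpendicular to the edge joining the two generating nodes.

```python
import numpy as np
from scipy.spatial import Voronoi

# Check reciprocity for an ordinary (unweighted) 3D Voronoi diagram: the
# face between two neighboring cells is perpendicular to the edge joining
# their generators. Random generators here are purely illustrative.
rng = np.random.default_rng(0)
pts = rng.uniform(size=(40, 3))
vor = Voronoi(pts)

max_dot, n_faces = 0.0, 0
for (i, j), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:                          # skip faces extending to infinity
        continue
    edge = pts[j] - pts[i]
    edge /= np.linalg.norm(edge)             # unit 'form' edge between nodes
    verts = vor.vertices[ridge]
    for v in verts[1:]:                      # vectors lying within the face
        u = v - verts[0]
        norm = np.linalg.norm(u)
        if norm > 0:
            max_dot = max(max_dot, abs(np.dot(u / norm, edge)))
    n_faces += 1
# max_dot stays at numerical-noise level: each face is perpendicular to its edge
```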
In structural engineering, the form diagram of a 3D truss is the map of members and nodes in space, and the force diagram is a collection of fitted-together {\it force polyhedra}, one polyhedron per node. If a node's force polyhedron is closed, it is in force equilibrium, and the force on each member meeting at the node is proportional to the area of the corresponding face of the force polyhedron.
What about the 3D cosmic web in the adhesion model? As in 2D, Eulerian space gets tessellated into polyhedral cells of a sectional-Voronoi diagram according to Eq.\ \ref{eqn:secvoronoi}, the additive weight for each cell given by the displacement potential at that point. And the volume of a tetrahedral cell in the weighted Delaunay tessellation of Lagrangian space gives the mass which contracts into the corresponding node.
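As a minimal illustration of this sectional-Voronoi (power-diagram) rule, the sketch below assigns Eulerian grid points to the generator minimizing $|\bm{x}-\bm{q}_i|^2 - w_i$; the generators and additive weights are arbitrary invented values, with the weight standing in for (a convention-dependent multiple of) the displacement potential.

```python
import numpy as np

# Sectional-Voronoi (power-diagram) cell assignment on a 3D grid: each
# Eulerian point x goes to the generator q_i minimizing |x - q_i|^2 - w_i.
# Generators and weights below are arbitrary illustrative values; with all
# weights equal, this reduces to an ordinary Voronoi tessellation.
rng = np.random.default_rng(3)
gens = rng.uniform(size=(5, 3))              # Lagrangian generating points
weights = 0.05 * rng.uniform(size=5)         # stand-in potential-derived weights

grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 24)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3)      # Eulerian sample points

power = ((grid[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2) - weights[None, :]
labels = power.argmin(axis=1)                # cell membership of each grid point
```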
The `cosmic web', i.e. the spatial graph of sectional-Voronoi edges that inhabit Eulerian space, is a spiderweb in 3D as well as in 2D, since a non-overlapping reciprocal dual tessellation exists: the faces of the Lagrangian weighted Delaunay tessellation are perpendicular to the corresponding Eulerian sectional-Voronoi (cosmic web) edges.
However, this definition of a `cosmic web' does not entirely conform with some common conceptions of the cosmic web. As in the 2D case, the cosmic spiderweb includes uncollapsed nodes in single-stream regions, usually classified to be in voids \citep[e.g.][]{FalckEtal2012,RamachandraShandarin2015}. Also, while edges of the sectional-Voronoi tessellation can be identified with filaments, we have not mentioned a similar concept for the walls. Perhaps if the sectional-Voronoi faces were filled in with panels, these panels instead of the beams could provide an alternative structural-engineering description of the cosmic web (a cosmic foam, instead of a web).
There is another subtlety to this correspondence in 3D. While in 2D, the definitions of spiderwebs and cosmic webs are exactly the same, both arising from a sectional-Voronoi description, so far the correspondence in 3D is only rigorous in one direction: cosmic webs are spiderwebs, but it seems that not every structural-engineering spiderweb can be constructed from a sectional-Voronoi diagram. \citet{McRobie2017} explains (see e.g.\ Fig. 8) that there exist three-dimensional structural-engineering spiderwebs that are not sectional-Voronoi diagrams (and therefore cosmic webs). This is because in structural engineering, it is only the force on a structural member that matters (i.e. the area of the face in the force diagram), not its shape, but the shapes must also match when connecting nodes of a sectional-Voronoi diagram. However, this subtlety does not impact our main conclusion, that the cosmic web is a spiderweb.
\subsection{Origami tessellations in 3D}
As in 2D, 3D spiderwebs lead to `origami tessellations,' not of paper, but of a non-stretchy 3D manifold, folding (being reflected) along planes in higher dimensions; when projected back to 3D, several layers of the manifold can overlap. The 3D origami crease patterns are again related to Minkowski sums; \citet{McRobie2016} gives several structural-engineering examples. Nodes are typically tetrahedra. A Toblerone-like, triangular-prism filament connects the faces of each neighboring pair of nodes, the filament's cross-section given by the triangle separating these tetrahedra in the Lagrangian Delaunay tessellation/force diagram. In structural engineering, the length of the filament gives the length of the structural member, and the force on it is proportional to the cross-sectional area. Between filaments are gaps (`walls') consisting of parallel identical polygons that are matching faces of neighboring cells of the sectional-Voronoi tessellation.
In cosmology, folding up a dark-matter sheet constructed in this way gives a way to estimate its density field under the `origami approximation' \citep{Neyrinck6OSME2015,NeyrinckZELD2016}; each surface represents a caustic. Each node would be a `3D twist fold' in origami terms, or `tetrahedral collapse' \citep{Neyrinck2016tetcol} in astrophysical terms. In the simplest, irrotational tetrahedral collapse, filaments extrude perpendicularly from the faces of tetrahedral nodes. When collapse happens, nodes, filaments, and walls all collapse together. The four faces of each node invert through the node's center, the three faces of each filament invert through its central axis, and the two parallel faces of each wall pass through each other. Generally, rotation can happen: as a node collapses and inverts, it can undergo a 3D rotation as well. This causes its filaments to rotate as well, which can correlate the rotations of nearby filaments.
However, as above in 2D, there are many cosmic-web nodes (haloes, in the origami approximation) that do not represent collapsed objects; they are just tracers of the structure in void, wall, or filament regions. In the adhesion model, a cosmic-web classification (into voids, walls, filaments, and haloes) can be done according to the shapes of Lagrangian-space tetrahedra that collapse to form nodes. Haloes collapse from nearly equilateral tetrahedra (all sides many resolution elements long); filament segments collapse from slab-like tetrahedra (with only one side a single resolution element long); wall segments collapse from rod-like tetrahedra (with 2 or 3 sides a single resolution element long); and void patches are uncollapsed, all tetrahedron sides only a single resolution element long. Tetrahedron shapes indicate the directions along which mass elements have merged/stream-crossed. This is consistent with the {\scshape origami}\ classification (\citealp{FalckEtal2012}; \citealp{NeyrinckEtal2015}; first introduced in \citealp{KnebeEtal2011}) of voids, walls, filaments and haloes according to the number of orthogonal axes along which streams have crossed.
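A toy version of this counting can be sketched as follows. This is our own simplified, axis-by-axis stand-in for the {\scshape origami}-style criterion, assuming a periodic, regular Lagrangian grid; the plane-wave displacement at the end is invented for illustration.

```python
import numpy as np

# Toy stream-crossing count: for each mass element on a periodic Lagrangian
# grid, count the axes along which a Lagrangian neighbor has overtaken it.
# Counts of 0/1/2/3 correspond to void/wall/filament/halo.
def crossed_axes(psi, spacing=1.0):
    """psi: displacement field of shape (n, n, n, 3) on a periodic grid."""
    n_axes = np.zeros(psi.shape[:3], dtype=int)
    for axis in range(3):
        dpsi = np.roll(psi[..., axis], -1, axis=axis) - psi[..., axis]
        crossed = spacing + dpsi < 0               # neighbor now behind: crossed
        crossed |= np.roll(crossed, 1, axis=axis)  # mark both members of the pair
        n_axes += crossed.astype(int)
    return n_axes

# Invented plane-wave toy: collapse along one axis only
n = 16
psi = np.zeros((n, n, n, 3))
psi[..., 0] = -4.0 * np.sin(2.0 * np.pi * np.arange(n) / n)[:, None, None]
morph = crossed_axes(psi)   # 1 (wall) near the caustic, 0 (void) elsewhere
```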
\subsection{A 3D cosmic web in compression}
The everyday concept of a `spiderweb' refers to nodes and members in tension, but in structural engineering, it can just as well refer to members entirely in compression. Fig.\ \ref{fig:threedprint} shows a 3D-printed, tactile realization of a cosmic web. Its spiderweb nature simply means that it has structural integrity, and can support nonzero weight.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{threedprint.jpg}
\end{center}
\caption{3D print made of plastic from a slice 50\,$h^{-1}$\,Mpc\ on a side of an $N$-body simulation assuming a warm dark matter cosmology. This is a spiderweb mostly in compression (`mostly' because some of its segments are likely in tension). It is obviously strong enough to support its own weight; we did not test its breaking point, though.}
\label{fig:threedprint}
\end{figure}
This model came from a 512$^3$-particle cosmological simulation with box size $100$\,$h^{-1}$\,Mpc, assuming a warm-dark-matter cosmology. The initial conditions were smoothed \citep[with parameter $\alpha=0.1$\,$h^{-1}$\,Mpc; for details, see][]{YangEtal2015}, removing substantial small-scale structure and simplifying the design. That is, the smallest dwarf galaxies observed may not form in this simulation. Note, though, that the distance from the Milky Way to Andromeda would only be a couple of pixels, so differences within the Local Group would hardly show up. The main effect of the initial-conditions smoothing was to smoothen walls and filaments.
In detail, from the simulation, we pixelized a $50\times 50\times 6.25$ ($h^{-1}$\,Mpc)$^3$ volume with a $128\times 128\times 16$ grid (the mean number of particles per cell was 8). We filled in only multistream voxels, i.e.\ voxels containing collapsed (wall, filament or halo) particles as classified by {\scshape origami}. We included only the largest connected set of such voxels within the slice. Of the 262144 voxels, 215127 had no collapsed particles and therefore were not in the structure, 46056 had collapsed particles and formed the structure, and 894 were collapsed but excluded because they were not connected within the slice (many of these would be connected through regions of the simulation outside the slice, though).
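The largest-connected-set step can be sketched generically; the code below is a stand-in reimplementation using {\tt scipy.ndimage}, not our original scripts, and the random mask merely stands in for the {\scshape origami}-classified multistream voxels.

```python
import numpy as np
from scipy import ndimage

# Keep only the largest connected component of the multistream voxels.
# The random mask is a stand-in for the classified voxel grid of the slice.
rng = np.random.default_rng(1)
collapsed = rng.random((128, 128, 16)) < 0.2     # stand-in multistream mask

labels, n_comp = ndimage.label(collapsed)        # face (6-)connectivity by default
sizes = np.bincount(labels.ravel())[1:]          # component sizes, skipping background
keep = labels == (1 + sizes.argmax())            # the largest connected structure
```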
To produce the 3D print file, we follow \citet{DiemerFacio2017}, who make a 3D print similar in spirit to this one, but at higher resolution, and defined by a contour of a Gaussian-smoothed log-density field rather than by multistreaming. We use the {\tt contour3d} function in the {\scshape Mayavi} visualization package to produce a wavefront file from the contiguous set of printed voxels. We then import this into the freely available {\scshape Blender} package, and use its 3D printing toolkit plugin to remove possibly problematic artifacts for a 3D printer, keeping the appearance of the model the same. We then export an .stl file\footnote{\url{http://skysrv.pha.jhu.edu/~neyrinck/cosmicweb128.stl}} and print it.
This construction is likely a spiderweb, since it stands up under its own weight, but note that some of the members are likely in tension rather than compression, e.g.\ filaments that hang down and are not connected across voids. Some members being in tension does not necessarily mean that an all-compression state cannot exist, though. We note again the `collapsed' vs.\ `uncollapsed' caveat: this would only be rigorously guaranteed to be a spiderweb if all void nodes were included (i.e., in the context of a simulation, all particles). In this case, this would fill in all but apparently random cells, giving an uninteresting design. But if the mass resolution (particle sampling) were decreased quite a bit, a 3D print including uncollapsed nodes could still be interesting (looking e.g.\ like a 3D version of the web in Fig.\ \ref{fig:cosmicduals}).
\section{Uses of the cosmic spiderweb picture}
\label{sec:applications}
If the cosmic spiderweb could be unambiguously identified observationally, analyzing it could possibly constrain cosmological models, in the ways we explain below.
But first we should clarify what an observation of `the cosmic spiderweb' means. Suppose the full dark-matter density field could be observed to some precision. The associated cosmic spiderweb would be a network of nodes and edges whose density field is consistent with these observations. This network would likely have nodes (essentially particles, in an $N$-body sense) in voids, to reproduce observed bends and kinks in walls and filaments. This procedure is essentially an inference of the initial conditions, for which many algorithms exist already \citep[e.g.][]{KitauraEnsslin2008,KitauraEtal2012,Kitaura2013,HessEtal2013,JascheWandelt2013,LeclercqEtal2015,WangEtal2017,ShiEtal2017}. The sectional-Voronoi method for estimating the final structure is likely competitive with other fast approximations. But one unique aspect of the adhesion model is that its set of generating points can be placed on an irregular grid, and even with irregular initial generator masses. Arbitrary generator positioning would allow a better fit, but at the expense of vastly expanding the parameter space (a 3D position plus 1 velocity potential, per particle). The positions need not be entirely free, though; for instance, rather unconstrained regions such as voids could be traced with a minimal number of generator points.
An example would be to test how well the galaxies in the Council of Giants can be reproduced with nodes of a sectional-Voronoi tessellation; galaxy masses and spins could even be added to the constraints. This would likely allow a better fit than in the hand-designed model in Fig.\ \ref{fig:councilofgiants}. A few issues made that design fall short of a scientific test: we optimized for an appearance resembling typical conceptions of the cosmic web (notably, without nodes in voids, and minimizing the number of filaments). Also, the design algorithm uses pure Voronoi instead of sectional-Voronoi tessellations; a better fit would have been possible if displacement potentials as well as Voronoi generators could be tweaked.
But also, further constraints could make the test even better. Estimates of dark-matter halo masses (node areas or volumes in Lagrangian space), velocities, spins (as addressed in the next paragraph), and observations (or upper limits) on filaments between galaxies would all add useful information. As we plan to discuss in a future paper, the adhesion model could provide an elegant formalism for estimating spins in collapsed regions. Each collapsed node in the adhesion model collapses to a point from a patch of Lagrangian space, with some initial velocity field. Each patch would generally have nonzero angular momentum, even when averaging a potential velocity field over the patch. It is an open question whether and how this initial velocity field would continue to grow inside a collapsed structure, but perhaps the directions of galaxy spins would be accurately enough predicted in the adhesion model to provide useful constraints.
Here is a list of possible applications and an example of analyzing an observed cosmic web with these ideas. Most of these are simply aspects of the adhesion model that do not crucially relate to spiderweb ideas, but the last two items describe and give an example of a new geometric test.
\subsection{Experimentally testing structural integrity of tactile models}
The spiderwebness of an observed patch of the Universe could be tested by physically building a model of it and mechanically testing its structural properties, e.g., finding the weight it can bear before breaking. However, such a structural-integrity test is not quite a test for spiderwebness, since some members can be in tension if the object is put in compression. Indeed, the class of 3D-printed objects that can bear at least their own weight is of course quite broad, especially if the material is strong in tension as well as in compression. On the theoretical side as well, we must admit that the class of spiderwebs with as many connections between nodes as occur in the adhesion model is quite broad.
Still, building tactile models can often be surprisingly useful scientifically, for building intuition and flagging problems in the data. It is especially crucial for the visually impaired. And quantifying the structural integrity of the cosmic web could possibly provide scientifically useful constraints, but first, ambiguities would have to be cleared up. It is hard to imagine final constraints being derived from anything but entirely deterministic computer algorithms, but 3D-print-shattering experiments could be useful for intermediate results, and would obviously be delightful for educational and public-outreach purposes.
\subsection{Displacement fields without assuming periodicity}
Typically, initial particle displacements in cosmological simulations are generated from a density field using an FFT, obtaining the displacement potential with e.g.\ the ZA. There are other, more accurate methods that work from a displacement-divergence (easily convertible to a displacement potential), that similarly use FFTs \citep{Neyrinck2013, KitauraHess2013,Neyrinck2016}. It would be interesting to combine one of these methods with the adhesion model. These models implement a spherical-collapse prescription to prevent overcrossing, as the adhesion model does, but additionally predict void densities accurately.
The Voronoi method obtains particle displacements from the displacement potential without an FFT, which could be useful for investigations of flows on the largest observable scales. It also can naturally generate multiresolution particle realizations, by varying the volumes occupied by particles in Lagrangian space. However, note that existing methods to generate a displacement potential (from e.g.\ a Gaussian random field giving the density) do use a periodic FFT.
\subsection{Identifying rotational or multistream displacements}
The adhesion model assumes a potential displacement field, i.e.\ $\mbox{\boldmath $\nabla$}_{\bm{q}}\times\mbox{\boldmath $\Psi$}(\bm{q})={\bm0}$. Collapsed regions in full gravity can carry quite large $|\mbox{\boldmath $\nabla$}_{\bm{q}}\times\mbox{\boldmath $\Psi$}(\bm{q}) |$. It is likely even nonzero but small \citep{Chan2014,WangEtal2014} outside of collapsed regions, as in third-order Lagrangian perturbation theory \citep{Buchert1994}. The degree of agreement with a spiderweb measures the magnitude of rotational vs irrotational motions, which might be a probe of the growth factor, and perhaps of modified gravity, some theories of which are known to affect the cosmic web \citep[e.g.][]{FalckEtal2014,LlinaresMota2014,FalckEtal2015}.
Difficulty reproducing a high-precision galaxy arrangement with a sectional-Voronoi tessellation could be a signal of substantial rotational displacements. One cause could be stream crossing on few-Megaparsec scales, which in simulations seems to be a predictor for halting star formation via a `cosmic-web detachment' mechanism. Primordial filaments are thought to feed cold gas into galaxies, providing a fresh gas supply for star formation; when these are detached, star formation is suppressed \citep{AragonCalvoEtal2016}. Or, it could be a signal of unexpectedly high vorticity, or vector modes, in the initial conditions. Such a scenario is unlikely physically, since cosmic expansion is thought to dampen these away, but it has received some consideration \citep[e.g.][]{Jones1971,ShandarinZeldovich1984}, and unexpectedly high primordial vorticity is worth testing for.
Alternative methods to estimate the displacement curl exist, as well. If it were possible to estimate the full displacement field directly, one could measure its curl and divergence, immediately constraining these components. In principle, this could be done by fitting $N$-body simulations to observations. Note, though, that many shortcuts used for this task become unavailable if the displacement field is allowed to have a curl.
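On a grid, such a curl and divergence measurement is a few lines of finite differencing. The sketch below builds a displacement field from an invented toy potential and checks that the measured curl sits at numerical-noise level while the divergence does not; a substantial measured curl on real data would signal rotational displacements.

```python
import numpy as np

# Finite-difference divergence and curl of a gridded displacement field.
# The field is built from a toy potential (chosen z-independent for
# simplicity), so its curl vanishes up to numerical noise.
n = 32
h = 1.0 / n
q = np.arange(n) * h
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
phi = np.sin(2 * np.pi * qx) * np.cos(2 * np.pi * qy)   # toy displacement potential

psi = [-g for g in np.gradient(phi, h)]                 # Psi = -grad(phi)

dgrad = [np.gradient(p, h) for p in psi]                # dgrad[i][j] = dPsi_i/dq_j
div = dgrad[0][0] + dgrad[1][1] + dgrad[2][2]
curl = np.stack([dgrad[2][1] - dgrad[1][2],
                 dgrad[0][2] - dgrad[2][0],
                 dgrad[1][0] - dgrad[0][1]])
# |curl| ~ rounding error; |div| is O(8 pi^2) for this potential
```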
\subsection{Identifying anisotropy on large scales}
Spiderwebs, i.e.\ (sectional) Voronoi tessellations, are sensitive to anisotropies in a field. Voronoi tessellations cannot straightforwardly be used in a space where there is no global metric, e.g.\ position-velocity phase space in an $N$-body simulation \citep{AscasibarBinney2005}. But this sensitivity can be exploited, as well. \citet{EvansJones1987} detect shear in a network of ice cracks, by fitting a Voronoi pattern to them. A Voronoi pattern fits an isotropic pattern well, but requires a scaling of the metric in one direction for a good fit to a sheared pattern.
In large-scale structure, applying these ideas would give a test similar in spirit to the Alcock-Paczynski (AP) test. Originally, \citet{AlcockPaczynski1979} proposed the test as a probe of the cosmological constant: if there existed `standard spheres' in the large-scale structure, their ellipticities could be used as a test of the expansion history assumed to go from redshift space to real space. Unfortunately, such `standard spheres' do not exist, but the AP test can be applied to contours of the redshift-space correlation function \citep[e.g.][]{LiEtal2016}. Getting closer to the original spirit of the AP test, it can be applied to average redshift-space void profiles \citep{Ryden1995,LavauxWandelt2012,HamausEtal2016,MaoEtal2017}. Redshift-space distortions spoil an idealized AP measurement, but simultaneously mix in a sensitivity to the growth rate of fluctuations.
In principle, departures from anisotropy in a cosmic spiderweb could be detected using each of its parts, without averaging structures. One way of doing this would directly use the key perpendicularity property (between the original and dual tessellation) of spiderwebs. If void `centers' (likely, density minima, i.e. generating points) could be observed, edges joining void centers should be perpendicular to walls and filaments between the voids. One could easily define a statistic quantifying this perpendicularity at each wall or filament. It could either be used in a position-dependent manner, or summed up over the survey for a global statistic quantifying the departure from a spiderweb. Notably, the perpendicularity test {\it carries no explicit cosmic variance}. This is in contrast to cosmological tests using correlation functions, power spectra, or even voids, in which even with perfect sampling of the density field, cosmic variance is present as fluctuations away from the cosmic mean. However, there will be noise in practice, causing constraints inferred from the perpendicularity test to get better as the volume is enlarged; the noise would likely behave in effect like cosmic variance.
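One illustrative implementation of such a statistic (our own definition, in 2D for brevity) is sketched below: for each bounded wall, it measures $|\cos|$ of the angle between the wall and the segment joining the two void centers it separates. This is zero for a true Voronoi web, and becomes nonzero when the whole pattern is sheared.

```python
import numpy as np
from scipy.spatial import Voronoi

# Per-wall perpendicularity statistic (illustrative definition): |cos(angle)|
# between each wall and the segment joining the two void centers it
# separates; 0 means exactly perpendicular, as for a spiderweb.
def perp_statistic(centers, vertices, ridge_points, ridge_vertices):
    stats = []
    for (i, j), ridge in zip(ridge_points, ridge_vertices):
        if -1 in ridge:                       # skip walls extending to infinity
            continue
        wall = vertices[ridge[1]] - vertices[ridge[0]]
        edge = centers[j] - centers[i]
        stats.append(abs(np.dot(wall, edge))
                     / (np.linalg.norm(wall) * np.linalg.norm(edge)))
    return np.array(stats)

rng = np.random.default_rng(2)
centers = rng.uniform(size=(30, 2))
vor = Voronoi(centers)

iso = perp_statistic(centers, vor.vertices, vor.ridge_points, vor.ridge_vertices)
S = np.array([[1.0, 0.4], [0.0, 1.0]])        # shear applied to the whole pattern
sheared = perp_statistic(centers @ S.T, vor.vertices @ S.T,
                         vor.ridge_points, vor.ridge_vertices)
# iso is ~0 for every wall; the sheared web visibly fails perpendicularity
```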
This perpendicularity test would be highly sensitive to redshift-space distortions, just as in the AP test; redshift-space distortions substantially change the directions that filaments and walls have compared to real space. However, again, with sufficient modeling, this issue could be exploited instead of seen as an obstacle: the angles between void minima and walls between them could be used as position-dependent probe of redshift-space distortions. Still, there will likely always be various observational effects that add ambiguity in inferring void `centers,' as well as the positions and characteristics of walls, filaments, and haloes.
Note also that a section through a section is still a section; i.e. if the 3D cosmic web is a sectional-Voronoi tessellation, so will be a 2D slice through it. This makes these ideas applicable to large-scale structure surveys that are effectively 2D.
An alternative way of using the cosmic spiderweb to look for scalings of one spatial coordinate with respect to another is by analyzing the displacement potential inferred from an adhesion-model initial-conditions reconstruction. Without scaling the metric according to the shear, a pure Voronoi tessellation cannot fit a sheared Voronoi tessellation \citep{EvansJones1987}. Unfortunately for detecting shear in a sectional-Voronoi tessellation, though, a sectional-Voronoi tessellation can perfectly fit a sheared pure-Voronoi tessellation; a uniform shear can be produced by a large-scale gradient in the generators' weights.\footnote{Consider a border between 2D Voronoi cells with generators at $(x_1, y_1, z_1=0)$ and $(x_2,y_2,z_2=0)$, with the $y$ metric scaled by a factor $\gamma$ wrt the $x$ metric. That is, the border is the locus of points $(x,y)$ such that $(x-x_1)^2+\gamma^2(y-y_1)^2=(x-x_2)^2+\gamma^2(y-y_2)^2$. One can check that the same border on the $x$-$y$ plane exists in a sectional-Voronoi tessellation, with an isotropic metric and generators at $\left(x_1,\gamma^2y_1,0\right)$ and $\left(x_2,\gamma^2y_2, [(y_2^2-y_1^2)(\gamma^2-\gamma^4)]^{1/2}\right)$. See Fig.\ 6 of \citet{LangBateman2011} for a nice visual depiction of the effect of shearing a spiderweb.} Indeed, it makes intuitive sense that shearing a spiderweb pattern results in something that can still be strung up in tension, albeit likely with different forces. In our case, an anisotropy would show up as a possibly-detectable large-scale gradient in the displacement potential.
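The construction in the footnote can be checked symbolically. The following `sympy` sketch confirms that the border computed with the sheared 2D metric coincides identically with the $z=0$ section of an ordinary (isotropic) 3D Voronoi border between the lifted generators given there:

```python
import sympy as sp

# Symbolic check of the footnote's construction.
x, y, x1, y1, x2, y2, g = sp.symbols('x y x_1 y_1 x_2 y_2 gamma', real=True)

# Border in the sheared 2D metric: equidistance between the two generators
sheared = (x - x1)**2 + g**2*(y - y1)**2 - (x - x2)**2 - g**2*(y - y2)**2

# z = 0 section of the isotropic 3D Voronoi border with the lifted generators
z2sq = (y2**2 - y1**2)*(g**2 - g**4)   # squared z-offset of generator 2
sect = (x - x1)**2 + (y - g**2*y1)**2 \
     - (x - x2)**2 - (y - g**2*y2)**2 - z2sq

print(sp.expand(sect - sheared))       # -> 0: the two border loci are identical
```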
\subsection{Example: detecting shear in a Voronoi density field}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\columnwidth]{stretchfig.pdf}
\end{center}
\caption{Detecting shear in a Voronoi model for a 2D density field. {\it Top panels}: Voronoi density fields from the orange generators, using Eq.\ (\ref{eqn:voronoidensity}), with isotropic ($\gamma=1$) and sheared ($\gamma=2$) distance functions. {\it Third panel}: In blue, contour plot of the 2D two-point correlation function $\xi(x,y)$, measured from the top panel. Orange arcs are contours of constant distance from the origin. The conventional method to measure shear, from $\xi(x,y)$, essentially finds an ellipticity of the orange arcs that best fits the ellipticity of blue contours, with ambiguities in detail. {\it Bottom}: Error bars, typically $\sim 1$\%, in fitting $\gamma$ using the Voronoi shear method described in the text, from a realization such as in the top panel, with input shear $\gamma=1$. Fiducially, `power raised'$=1$, but we explored sensitivity to inaccuracies in assumed void/filament density profiles by raising the `observed' density field to various powers, along the $x$-axis, before fitting $\gamma$.}
\label{fig:stretchfig}
\end{figure}
Although the sectional-Voronoi situation is more general and accurate in the adhesion model, in the following we turn back to a simplified case of shear in a pure (not sectional) Voronoi tessellation.
Fig.\ \ref{fig:stretchfig} shows the result of shear in a simple 2D Voronoi-based density field, and how well it can be detected. We generated 256$^2$-pixel density fields from sets of 16 Voronoi generators, randomly Poisson-placed except for an exclusion: no two generators are within $b/10$ of each other, where $b$ is the box side length. We used the following model for the density at each point $\bm{x}$:
\begin{equation}
\rho(\bm{x})=\frac{d_1(\bm{x})^2}{d_2(\bm{x}) d_3(\bm{x})}
\label{eqn:voronoidensity}
\end{equation}
where $d_1$, $d_2$ and $d_3$ denote the distances to the first, second, and third nearest Voronoi generators (adding a pixel width to each distance, suppressing staircase-like pixelization effects). As usual in cosmology, we analyze the overdensity $\delta(\bm{x})=\rho(\bm{x})/\bar{\rho}-1$, with $\bar{\rho}$ the mean density. To our knowledge, this is a new description for a Voronoi-based model for a density field, but resembles that by \citet{Matsubara2007} in using distances to nearest neighbors.
We first tried a density field of the form $d_1(\bm{x})/d_2(\bm{x})$, which results in nearly uniform-density filaments, but included the third-nearest generator as well to boost the density at Voronoi vertices over edges, as expected in large-scale structure. In a 3D field, we would include the fourth-nearest neighbor as well, i.e. $\rho=d_1^3/(d_2 d_3 d_4)$, to boost node densities over filament densities. Our model has asymptotically linear profiles for nodes, walls, filaments, and void centers. Perhaps a simple modification or transformation of this prescription could give a good global description of cosmological density profiles away from these morphological features; see \citet{CautunEtal2016} for an example of using wall instead of void density profiles.
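The density model above can be sketched in a few lines, using a k-d tree for the nearest-generator distances (assuming `scipy`; for brevity this sketch uses non-periodic boundaries, omits the exclusion radius, and works with the field in units of the mean):

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of Eq. (voronoidensity): rho = d1^2/(d2*d3), with d1 <= d2 <= d3 the
# distances to the three nearest generators at each pixel center.
rng = np.random.default_rng(1)
npix, ngen = 256, 16
gens = rng.random((ngen, 2))                         # generators in the unit box

pix = (np.indices((npix, npix)).reshape(2, -1).T + 0.5)/npix   # pixel centers
d, _ = cKDTree(gens).query(pix, k=3)                 # sorted nearest distances
d += 1.0/npix                                        # soften pixelization
rho = (d[:, 0]**2/(d[:, 1]*d[:, 2])).reshape(npix, npix)
delta = rho/rho.mean()                               # field in units of the mean
```

Since $d_1\le d_2\le d_3$, the raw field satisfies $0<\rho\le 1$, peaking on walls and (after including $d_3$) at vertices.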
To shear the fields, we keep the generators fixed, and change $\gamma$ in the following function giving the distance to a generator position $(x_g,y_g)$,
\begin{equation}
d_\gamma(x,y)=\sqrt{\frac{1}{\gamma}(x-x_g)^2 + \gamma(y-y_g)^2}.
\end{equation}
In the middle panel, $\gamma=2$. Changing $\gamma$ this much changes the tessellation substantially, as well as noticeably slanting borders between Voronoi cells. The uniform Poisson generator distribution is still isotropic even with a sheared metric, however, except inside the exclusion radius.
There are many tests imaginable to detect shear in a Voronoi or sectional-Voronoi-generated field. Because this section is mainly for illustration, we take a very simplistic approach. We assume that all generator positions are known, and fixed, even when $\gamma$ changes. We fit only for the parameter $\gamma$ in the distance formula, which requires no explicit identification of filaments and nodes of the cosmic web. We generate each realization with $\gamma=1$, and then find the best-fitting $\gamma$ over an ensemble of $\gamma$ near 1. The statistic we maximize over $\gamma$ is $\left\langle\delta_\gamma^{\rm mod}(\bm{x})\delta_{\gamma=1}^{\rm obs}(\bm{x})\right\rangle$ i.e.\ the average over pixels of the generated $\delta$ using $\gamma=1$ (the `observed' field) times the same field using a different $\gamma$ (the `model').
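The fitting procedure just described can be sketched as follows. The sheared distance $d_\gamma$ is implemented by rescaling the coordinates by $\gamma^{\mp 1/2}$ before building the k-d tree; as above, this sketch (a simplified stand-in for our actual pipeline) omits the exclusion radius and uses the field in units of the mean:

```python
import numpy as np
from scipy.spatial import cKDTree

def delta_field(gens, gamma, npix=128):
    """Voronoi density field using the sheared distance d_gamma.

    d_gamma is Euclidean after rescaling x by gamma^(-1/2) and y by gamma^(1/2).
    """
    s = np.array([gamma**-0.5, gamma**0.5])
    pix = (np.indices((npix, npix)).reshape(2, -1).T + 0.5)/npix
    d, _ = cKDTree(gens*s).query(pix*s, k=3)
    d += 1.0/npix
    rho = d[:, 0]**2/(d[:, 1]*d[:, 2])
    return rho/rho.mean()

rng = np.random.default_rng(2)
gens = rng.random((16, 2))
obs = delta_field(gens, 1.0)                  # 'observed' field, true gamma = 1

gammas = np.linspace(0.9, 1.1, 21)
score = [np.mean(delta_field(gens, g)*obs) for g in gammas]
best = gammas[np.argmax(score)]               # best-fitting shear
```

For a noiseless realization with a perfect model, the maximizing $\gamma$ lands at or very near the input value of 1.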
In the bottom panel, we show means and standard deviations of the distributions of best-fitting $\gamma$s over 512 realizations, having checked each distribution for Gaussianity. To evaluate sensitivity to shear when the model is imperfect, for the `observed' field, we also raised $\rho(\bm{x})$ in Eq.\ (\ref{eqn:voronoidensity}) to different powers to obtain $\delta_{\gamma=1}^{\rm obs}$. For $\delta_\gamma^{\rm mod}$, $\rho$ was not raised to a power. In the case that the fitted model was perfect (`power raised'$=1$), there is about a 1\% error in $\gamma$. The error even decreases when the observed field is raised to a high power. This is impressive precision, but it is not so surprising because changing $\gamma$ changes the whole tessellation: not just the orientations of separating walls, but the location of peaks as well. We also checked that adding a visible level of Poisson-sampling noise to the observed field does not substantially degrade error bars.
A standard technique to detect shear in a field is the two-point correlation function $\xi(x,y)\equiv\langle\delta(x^\prime,y^\prime)\delta(x^\prime+x,y^\prime+y)\rangle$; in the third panel, we also show contours of that, measured from the top (isotropic) field. Fitting the isotropy of various contours here would certainly be of use in measuring a shear $\gamma$, but it is not obvious which statistic to use optimally, over what range of scales, etc. In a (sheared) Gaussian field, the 2D correlation function would contain all information for doing statistical inference. But this is a highly non-Gaussian field, in which correlation-function information likely falls far short of the full information. We do not attempt a shear measurement from this somewhat noisy correlation function, but we would be surprised if a 1\% measurement could be made from it. We suspect that this Voronoi-based method extracts shear in the field better than the usual correlation function can; in this very idealized example, it may even be optimal.
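For reference, the 2D correlation function of a pixelized field can be estimated via the Wiener-Khinchin theorem (this assumes periodic boundaries, which our Voronoi fields only approximately satisfy):

```python
import numpy as np

def xi2d(delta):
    # Wiener-Khinchin: correlation = inverse FFT of the power spectrum
    dk = np.fft.rfftn(delta - delta.mean())
    return np.fft.irfftn(np.abs(dk)**2, s=delta.shape)/delta.size

field = np.random.default_rng(3).normal(size=(64, 64))
xi = xi2d(field)            # xi[0, 0] is the field variance; xi is even in lag
```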
\section{Conclusion}
In this paper, we have explained the close relationships between a few fields: the large-scale arrangement of matter in the Universe; textile, architectural, and biological spiderwebs; and origami tessellations. The cosmic web forms a spiderweb, i.e.\ a structure that can be strung up to be entirely in tension. If strands of string were strung up in the same arrangement as filaments of a cosmic web, the tension in a string would be proportional to the mass that has collapsed cylindrically onto the filament. This is the cross-sectional area of the boundary between the blobs in the initial Universe that collapsed into the nodes at both ends of the filament. As far as we know, this is just a geometric correspondence, and the tensions in strands of the structural-engineering spiderweb do not correspond to physical tensions in the cosmos, but perhaps there exists a valid interpretation along these lines.
However, the spiderweb concept rigorously only applies to all nodes (i.e.\ particles, as in an $N$-body simulation), not just those that have experienced collapse. This can be understood in terms of a representation of a cosmic web made from string: to capture kinks and bends in filaments, low-tension strands must be added, likely intersecting outside filamentary regions. Another subtlety is that for the exact correspondence, structure is not considered within dense, multistream regions (haloes, filaments and walls); here, the dark-matter structure is already complicated, and baryonic physics (not treated in the adhesion model) introduces more uncertainties. Thankfully, baryonic effects are expected to leave the structure largely intact outside multistream regions; its complexities are usually activated by shocks that correspond to multistreaming in the dark matter.
It is an open question how far one can relax this simplified problem and retain spiderwebbiness, but successful artistic experiments along these lines suggest that the conditions can be relaxed somewhat. To ease the investigation of this question, we suggest some ways to quantify spiderwebbiness: either measuring the key geometrical property of perpendicularity between nodes of the cosmic web and its dual tessellation, or most engagingly, testing the structural properties of actual tactile representations of it. We also suggest some other ways in which concepts related to spiderwebbiness could be used for scientific analysis.
Another question is how much actual webs made by spiders correspond to the cosmic web. Webs by spiders are not always perfect structural-engineering spiderwebs; e.g.\ they can have a strand with slack, indicating a slight departure (perhaps easily fixed by shortening the strand). Even if both arachnid and cosmic spiderwebs were exactly structural-engineering spiderwebs, likely they would differ in some quantifiable properties, e.g.\ in the distribution of the number of strands coming off a node.
Geometry (specifically here, the concept of a Voronoi-related tessellation) provides the link between these apparently disparate scientific and artistic fields, as it does for many others \citep[e.g.][]{Kappraff2001,Senechal2013}. If a spider did construct the cosmic web (an idea we do not advocate), that spider was not ignorant of geometry.
\section*{Data Access}
See \hrefurl{https://github.com/jhidding/adhesion-example/blob/master/notebook/Adhesion.ipynb} for a Python notebook, with many figures illustrating the adhesion model, visible within a browser. This software, written by JH and citable as \citep{Hidding2017zenodo}, emanates from NWO project 614.000.908 supervised by Gert Vegter and RvdW. In that repository, \href{https://github.com/jhidding/adhesion-example/blob/master/notebook/Adhesion_spiderweb_origami.ipynb}{Adhesion\_spiderweb\_origami.ipynb} generates some figures in the paper. Also see an interactive sectional-Voronoi spiderweb demonstration and designer at \hrefurl{https://github.com/neyrinck/sectional-tess}, usable without installation at \hrefurl{https://mybinder.org/v2/gh/neyrinck/sectional-tess/master} (run notebook sec\textunderscore voronoi\textunderscore spiderweb.ipynb).
\section*{Authors' Contributions}
MN conceived, undertook, and wrote up the current project. JH provided essential clarifying adhesion-model code that enabled many of the figures, and explanations of the sectional-Voronoi formalism that he developed. MK shared substantial structural-engineering expertise and guidance, and clarified descriptions in the text. RvdW is leading the Groningen cosmic web program, major components of which are the adhesion model project, and additional tessellation-related projects. He has also contributed clarifications to the text and on tessellation-related issues.
\section*{Funding}
MN thanks the UK Science and Technology Facilities Council (ST/L00075X/1) for financial support. JH and RvdW acknowledge support by NWO grant 614.000.908.
\section*{Acknowledgements}
MN is grateful for encouragement and discussions about this work at the scientific/artistic interface from Fiona Crisp, Chris Dorsett, and Si\'{a}n Bowen at the Paper Studio Northumbria; Sasha Englemann and Jol Thomson; Alex Carr, Giles Gasper and Richard Bower at Durham, of the Ordered Universe Project; origamist Robert Lang; and SciArt Center, where MN was a 2016-2017 `The Bridge: Experiments in Science and Art' resident, with Lizzy Storm. MN thanks Hannah Blevins for expertise, advice and encouragement for the textile spiderweb in Fig.\ \ref{fig:councilofgiants}; and Reece Stockport and Ryan Ellison for 3D printing help leading to Fig.\ \ref{fig:threedprint}. JH and RvdW wish to thank Gert Vegter, Bernard Jones and Monique Teillaud for helpful discussions about the adhesion formalism, Monique Teillaud in particular for her help and support with CGAL. We also thank Allan McRobie (particularly MK, also for PhD supervision), Masoud Akbarzadeh, and Walter Whiteley for useful discussions.
\bibliographystyle{mnras}
\section{Introduction}
The Standard Model of electroweak interactions
\cite{Glashow:1961tr,Weinberg:1967tq,Salam:1968rm} is at the core of today's
understanding of fundamental physics. The breaking of the electroweak symmetry
through the Higgs mechanism is the origin of the masses of all other elementary
particles in the Standard Model (SM), and it explains the apparent ``weakness''
of the weak interactions in low-energy physics. In contrast to the strong
interactions, one can make reliable high-precision predictions using
perturbation theory for electroweak observables. The realization that the
electroweak theory is perturbatively calculable \cite{tHooft:1972tcz} has tremendously
advanced the understanding of its theoretical structure and provided the
opportunity for precise experimental tests of all its aspects.
Through comparisons of precision measurements of properties of the electroweak
gauge bosons with theoretical predictions within the SM in the 1990s and 2000s,
it was possible to put constraints on some of the last undiscovered
components of the SM: the top quark and the Higgs boson (see Figs.~1.16, 8.3,
8.11, 8.13 in Ref.~\cite{ALEPH:2005ab}). At the same
time, electroweak precision tests put important constraints on physics beyond
the SM and have conclusively ruled out some models. These lectures provide an
introduction into the most common electroweak precision observables, their
theoretical underpinnings, and how they can be used to test the SM and physics
beyond the SM.
It is assumed that the reader is familiar with the general structure of the
Standard Model and general aspects of quantum field theory, such as Lagrangians,
Feynman rules, perturbation theory, gauge symmetries and Ward identities, and
electroweak symmetry breaking through the Higgs mechanism. Good examples for
pedagogical reviews of the foundations of the Standard Model can be found in
Refs.~\cite{Novaes:1999yn,Peskin:2017emn,Arbuzov:2018fza}.
Since the topic of these lectures requires a solid understanding of foundational
aspects of higher-order corrections and renormalization, they begin with a
review of renormalization in QED and in the Standard Model in
section~\ref{renorm}. Section~\ref{ewpos} discusses a range of quantities known
as electroweak precision observables, which play an important role in detailed
tests of the Standard Model, in particular its electroweak symmetry breaking
sector. Finally, in section~\ref{bsm}, it is shown how electroweak precision
observables can be used to probe and constrain physics beyond the Standard
Model, with an emphasis on models of neutrino physics and dark matter, owing to
the themes of the TASI 2020 school.
Throughout this document, the following conventions for the metric tensor and
Dirac algebra are being used:
\begin{align}
(g_{\mu\nu}) &= \text{diag}(+1,-1,-1,-1), &
\{\gamma_\mu,\gamma_\nu\} &= 2g_{\mu\nu}\,\mathbbm{1}_{4{\times}4}, &
\{\gamma_\mu,\gamma_5\} &= 0. \label{conv}
\end{align}
The document also contains a handful of exercise problems that the reader is
encouraged to try to solve. Answers to the problems are given at the very end of
the document.
\section{Renormalization}
\label{renorm}
\subsection{Renormalization in QED}
Before discussing renormalization in the Standard Model (SM), let us first
illustrate the main concepts for a simpler theory: Quantum Electrodynamics
(QED), which describes a charged Dirac fermion\footnote{The extension to several
fermions with different charges and masses is straightforward.}
$\psi$ that interacts
with the photon field $A_\mu$. Its Lagrangian is given by
\begin{align}
{\cal L} &= -\tfrac{1}{4} F_{0,\mu\nu} F_0^{\mu\nu} + \overline{\psi}_0
\bigl(i\slashed{\partial} + e_0 \slashed{A}_0 - m_0 \bigr) \psi_0,
&
F_{0,\mu\nu} &= \partial_\mu A_{0,\nu} - \partial_\nu A_{0,\mu}.
\end{align}
This expression contains two free parameters: $e_0$ and $m_0$, the charge and
mass of the fermion $\psi_0$, respectively.
When including radiative corrections, these parameters will in general differ
from the observable charge and mass of the fermion. Denoting the latter as $e$
and $m$, the relation can be written as
\begin{align}
e_0 &= Z_e\,e = (1+\delta Z_e)e,
&
m_0 &= m + \delta m
\end{align}
The quantities $\delta X$ are called \emph{counterterms}. Here and in the
following, the index ``0'' is used for Lagrangian (``bare'') quantities, whereas
the corresponding symbols without subscript denote physical (renormalized)
quantities.
To determine the counterterms, one needs to specify a set of
\emph{renormalization conditions} that define what we mean by ``physical
quantities.'' For the charge and mass, we can find a set of conditions that
formally reflect how these quantities are typically measured in an experiment:
\paragraph{Mass \boldmath $m$:} The physical mass is defined as the pole in the fermion
propagator
\begin{align}
D(p) \equiv \frac{i}{\slashed{p}-m} = \frac{i(\slashed{p}+m)}{p^2-m^2},
\label{propf}
\end{align}
since the peak in the propagation probability $|D(p)|^2$ for $p^2=m^2$
corresponds to long-distance propagation ($i.\,e.$ an actual observable
particle).
When computing the propagator from the Lagrangian, one must
include radiative corrections, leading to
\begin{align}
&\raisebox{-1em}{\epsfig{figure=dyson1.eps, height=2em, bb=74 682 540 716,
clip=true}} \label{dysonf1} \\
= &\,\frac{i}{\slashed{p}-m_0}
+ \frac{i}{\slashed{p}-m_0} i\Sigma(p) \frac{i}{\slashed{p}-m_0}
+ \frac{i}{\slashed{p}-m_0} i\Sigma(p) \frac{i}{\slashed{p}-m_0}
i\Sigma(p) \frac{i}{\slashed{p}-m_0}
+ ... \label{dysonf2} \\
= &\,\frac{i}{\slashed{p}-m_0+\Sigma(p)}\label{dysonf3}
\end{align}
Here $\Sigma(p)$ is the \emph{self-energy} of the fermion, which represents all
one-particle irreducible loop diagrams contributing to the fermion two-point
functions [depicted symbolically by the blob in \eqref{dysonf1}].
Eq.~\eqref{dysonf2} is called a \emph{Dyson series}, which can be resummed as a
geometric series, leading to eq.~\eqref{dysonf3}.
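Explicitly, the resummation is an operator geometric series:

```latex
\sum_{n=0}^{\infty} \frac{i}{\slashed{p}-m_0}
\biggl[\,i\Sigma(p)\,\frac{i}{\slashed{p}-m_0}\biggr]^n
= \frac{i}{\slashed{p}-m_0}\,
\Bigl[\mathbbm{1} + \Sigma(p)\,(\slashed{p}-m_0)^{-1}\Bigr]^{-1}
= \frac{i}{\slashed{p}-m_0+\Sigma(p)},
```

where the series is understood in the perturbative sense, order by order in the coupling.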
$\Sigma(p)$ can contain $\gamma$ matrices and thus can be expanded as a sum of
the following terms:
\begin{align}
\Sigma(p) &= \Sigma_S(p^2) + \gamma_\mu p^\mu \, \Sigma_V(p^2)
+ \underbrace{\overbrace{\gamma_\mu p^\mu \gamma_\nu p^\nu}^{=p^2} \,
\Sigma_T(p^2)}_{\to \text{ absorb in }\Sigma_S} + ...
\end{align}
Owing to Lorentz invariance, the coefficients $\Sigma_X$ can only depend on
$p^2$. The term linear in $\gamma_\mu$ (called the ``vector'' part of the
self-energy) must be proportional to $p^\mu$ since this is the only other
4-vector that can be contracted with $\gamma_\mu$. The term with two gamma
matrices (the ``tensor'' part) can be rewritten, using \eqref{conv}, as being
proportional to $p^2$ and thus it is already captured by the ``scalar'' part
$\Sigma_S$. In the same way, all terms with three or more gamma matrices can be
absorbed into $\Sigma_S$ and $\Sigma_V$.
Demanding that the propagator has a pole for $p^2=m^2$, or equivalently
$\slashed{p}=m$ (see eq.~\eqref{propf}) leads to the condition
\begin{align}
&0 = \Bigl[\slashed{p} - \underbrace{(m + \delta m)}_{m_0} + \slashed{p}\,
\Sigma_V(p^2)+ \Sigma_S(p^2)\Bigr]_{p^2=m^2,\slashed{p}=m} \\
\Rightarrow\quad &\delta m = m\,\Sigma_V(m^2) + \Sigma_S(m^2)
\end{align}
\medskip
\noindent
\begin{minipage}[b]{.75\textwidth}
\paragraph{Charge \boldmath $e$:} The physical charge is defined as the strength
of the electromagnetic coupling in the Thomson limit: an on-shell fermion
($p_1^2=p_2^2=m^2$) interacts with a static electric field ($i.\,e.$ a photon
with zero momentum, $k = p_2-p_1 \to 0$). Denoting the sum of all vertex
diagrams by $\Gamma_\mu(p_1,p_2)$, this means that
\begin{align}
\bar{u}(p_1)\, i\Gamma_\mu(p_1,p_1)\, u(p_1) = \bar{u}(p_1)\,
ie\gamma_\mu\, u(p_1) \quad \text{for } p_1^2=m^2 \label{chcond}
\end{align}
\end{minipage}
\hfill\raisebox{2em}{%
\epsfig{figure=vertex1.eps, width=.2\textwidth, bb=235 620 355 710, clip=}}
\vspace{1ex}\noindent
The vertex factor can be written as a tree-level piece and a term
$\delta\Gamma_\mu$ that subsumes all loop contributions. The latter can be
related to the fermion self-energies using the QED Ward identity, which implies
that
\begin{align}
k^\mu\,\delta\Gamma_\mu(p,p+k) &= e\bigl[\Sigma(p+k)-\Sigma(p)\bigr] \\
\Rightarrow\quad \delta\Gamma_\mu(p,p) &= e\, \lim_{k\to 0} \frac{1}{k^\mu}
\bigl[\Sigma(p+k)-\Sigma(p)\bigr] = e\, \frac{\partial}{\partial p^\mu}
\Sigma(p) \label{verward}
\end{align}
\medskip
\noindent
\begin{minipage}[b]{.75\textwidth}
\paragraph{Field renormalization:} Until now, we only considered the
renormalization of the parameters in the Lagrangian. However, the fields
themselves receive quantum corrections, due to self-energy contributions in the
external legs of any physics process (also called ``wave function''
renormalization). These corrections can be absorbed by redefining the fields:
\begin{align}
A_0^\mu &= \sqrt{Z_A} \, A^\mu, &
\psi_0 &= \sqrt{Z_\psi} \, \psi,
\end{align}
\end{minipage}
\hfill\raisebox{2em}{%
\epsfig{figure=wavef1.eps, width=.2\textwidth, bb=205 574 390 710, clip=}}
\smallskip\noindent
where, as before, we can write $Z_X = 1 + \delta Z_X$, where $\delta Z_X$ is the
counterterm due to loop corrections. These counterterms should cancel the
self-energy corrections on the external legs. For an external fermion, one
therefore should demand
\begin{align}
&\bigl[Z_\psi(\slashed{p}-m_0) + \Sigma(p)\bigr]_{\slashed{p}\to m} =
(\slashed{p}-m)_{\slashed{p}\to m}
\intertext{Taking the derivative $p^\mu \frac{\partial}{\partial p^\mu}$ on both
sides yields}
&\Bigl[p^\mu (1+\delta Z_\psi)\gamma_\mu + p^\mu \frac{\partial}{\partial p^\mu}
\Sigma(p)\Bigr]_{\slashed{p}\to m} = p^\mu\gamma_\mu\big|_{\slashed{p}\to m}
\label{psi1} \\
\Rightarrow\quad &\delta Z_\psi = -\frac{p^\mu}{m}\;
\frac{\partial}{\partial p^\mu}\Sigma(p)\Bigr|_{\slashed{p}\to m}
= -\Sigma_V(m^2) - 2m^2\, \Sigma'_V(m^2) - 2m\, \Sigma'_S(m^2)
\end{align}
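To evaluate $p^\mu\frac{\partial}{\partial p^\mu}\Sigma(p)$ on shell, insert the decomposition $\Sigma(p)=\Sigma_S(p^2)+\slashed{p}\,\Sigma_V(p^2)$ and use $\frac{\partial}{\partial p^\mu}f(p^2)=2p_\mu\,f'(p^2)$:

```latex
p^\mu\frac{\partial}{\partial p^\mu}\,\Sigma(p)
= 2p^2\,\Sigma'_S(p^2) + \slashed{p}\,\Sigma_V(p^2)
  + 2p^2\,\slashed{p}\,\Sigma'_V(p^2)
\;\xrightarrow{\;p^2\to m^2,\;\slashed{p}\to m\;}\;
m\,\Sigma_V(m^2) + 2m^3\,\Sigma'_V(m^2) + 2m^2\,\Sigma'_S(m^2),
```

so that dividing by $-m$ yields $\delta Z_\psi = -\Sigma_V(m^2) - 2m^2\,\Sigma'_V(m^2) - 2m\,\Sigma'_S(m^2)$.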
For external photons, we need to use the photon self-energy,
$\Sigma_{\mu\nu}(k)$, which can be decomposed into a transverse and a
longitudinal part:
\begin{align}
\Sigma_{\mu\nu}(k) &= \Bigl(g_{\mu\nu} - \frac{k_\mu k_\nu}{k^2}\Bigr )
\Sigma_T(k^2) + \frac{k_\mu k_\nu}{k^2}\Sigma_L(k^2)
\end{align}
Invoking the QED Ward identity, $k^\mu \Sigma_{\mu\nu} =0$ immediately tells us
that $\Sigma_L = 0$.
Applying the Dyson summation to the remaining transverse part of the self-energy
yields the following result for the photon propagator (in Feynman gauge):
\begin{align}
\Bigl(g_{\mu\nu} - \frac{k_\mu k_\nu}{k^2}\Bigr )\frac{-i}{k^2+\Sigma_T(k^2)}
+ \frac{k_\mu k_\nu}{k^2}\,\frac{-i}{k^2} \label{propa}
\end{align}
Note that the last term in this equation changes if a different gauge than
Feynman gauge is adopted. Including the field renormalization counterterm
requires modifying \eqref{propa} according to $k^2+\Sigma_T(k^2) \to Z_A
k^2+\Sigma_T(k^2)$. Demanding that $Z_A$ should compensate the self-energy
contribution for an on-shell photon leads to
\begin{align}
&\bigl[ Z_A k^2 + \Sigma_T(k^2) \bigr]_{k^2 \to 0} = k^2\big|_{k^2\to 0} \\
\Rightarrow\quad &\delta Z_A = -\Sigma'_T(0)
\end{align}
\paragraph{Field renormalization effects in charge renormalization:}
For the proper evaluation of the charge renormalization condition
\eqref{chcond}, we must include the field renormalization factors, yielding
\begin{align}
\sqrt{Z_A} \,Z_\psi\, e_0\, \gamma_\mu + \delta \Gamma_\mu = e\gamma_\mu
\end{align}
Expanding the left-hand term to leading order in perturbation theory yields
$\sqrt{Z_A} \,Z_\psi\, e_0 = e(1+\delta Z_e + \frac{1}{2}\delta Z_A + \delta
Z_\psi + ...)$. Furthermore we can use that $\delta\Gamma_\mu = e\frac{\partial}{\partial
p^\mu}\Sigma$ according to \eqref{verward} and $\frac{\partial}{\partial
p^\mu}\Sigma\big|_{\slashed{p}\to m} = -\delta Z_\psi \gamma_\mu$ according to \eqref{psi1}. Thus one
obtains a rather simple result for the charge counterterm:
\begin{align}
&\delta Z_e = -\tfrac{1}{2}\delta Z_A && \text{[at 1-loop order]} \label{dze1}
\end{align}
An explicit one-loop calculation of the fermion loop diagram below yields
\begin{align}
&\raisebox{-1.2em}{\epsfig{figure=se1.eps, height=2.4em}}
&\Sigma'_T(0) &= \frac{\alpha}{3\pi}\biggl(\frac{2}{4-d}-\gamma_{\rm E} -
\ln \frac{m^2}{4\pi\mu^2}\biggr) \label{se1}
\end{align}
where $d$ and $\mu$ are the number of space-time dimensions and the
regularization scale in dimensional regularization, respectively. Furthermore,
$\gamma_{\rm E} \approx 0.577216$ is Euler's constant.
\medskip\noindent
Extending the QED theory to include all SM fermions, this becomes
\begin{align}
\Sigma'_T(0) &= \sum_f N_{\rm c}^f Q_f^2\, \frac{\alpha}{3\pi}\biggl(\frac{2}{4-d}-\gamma_{\rm E} -
\ln \frac{m_f^2}{4\pi\mu^2}\biggr) \label{sigmap}
\end{align}
where $N_{\rm c}^f=1\,(3)$ for leptons (quarks) and $Q_f$ is the electric charge
of the fermion species $f$ in units of the positron charge $e$. A problem with
\eqref{sigmap} is the fact that light quark masses ($m_u,\,m_d,\,m_s$) are
ill-defined, since QCD at the scale $m_{u,d,s}$ is inherently non-perturbative,
and thus a perturbative calculation as in eq.~\eqref{sigmap} is not adequate.
\label{deltaalpha}
This problem can be circumvented by using a \emph{dispersion relation} that
establishes a relationship between $\Sigma'_T(0)$ and the process $e^+e^- \to
\text{hadrons}$, which can be obtained from data. In order to do so, as a first
step we will rewrite $\Sigma'_T(0)$ as follows:
\begin{align}
\Sigma'_T(0) &= \Pi(0) = \underbrace{\Pi(0) -
\text{Re}\,\Pi(M_Z^2)}_{\equiv\Delta\alpha} +
\text{Re}\,\Pi(M_Z^2), \qquad \Pi(Q^2) \equiv \frac{\Sigma_T(Q^2)}{Q^2}
\label{delal1}
\end{align}
Here the term $\Delta\alpha$ is UV finite, while $\Pi(M_Z^2)$ depends on the
light quark masses only through powers of $m_q^2/M_Z^2 \approx 0$ and thus can
be computed perturbatively to very good accuracy. The choice of $M_Z$ for the
separation scale in \eqref{delal1} is somewhat arbitrary; the only requirement
is that this scale should be much larger than $\Lambda_{\rm QCD}$. However,
$M_Z$ has become the conventional choice in the literature.
Furthermore, $\Delta\alpha$ can be divided into a leptonic and a hadronic part,
$\Delta\alpha = \Delta\alpha_{\rm lept} + \Delta\alpha_{\rm had}$, where
$\Delta\alpha_{\rm lept}$ can also be reliably calculated using perturbation
theory \cite{Steinhauser:1998rq,Sturm:2013uka}.
On the other hand, $\Delta\alpha_{\rm had}$ can be related to the process $e^+e^- \to
\text{hadrons}$ using a dispersion integral (see below for the derivation):
\begin{align}
\Delta\alpha_{\rm had} &= -\frac{\alpha M_Z^2}{3\pi}\int_0^\infty ds' \;
\frac{R(s')}{s'(s'-M_Z^2-i\epsilon)}, &
R(s) &= \frac{\sigma[e^+e^- \to \text{hadrons}]}{\sigma[e^+e^- \to \mu^+\mu^-]}
\label{delal2}
\end{align}
For $s \lesssim 2$~GeV, $R(s)$ is typically extracted from data collected at
several $e^+e^-$ colliders, while QCD perturbation theory can be used for $s
\gtrsim 2$~GeV. In many analyses, data is also used near the $c\bar{c}$ and
$b\bar{b}$ thresholds, although it has been argued that perturbation theory can
also be used in these regions \cite{Erler:1998sy,Erler:2017knj}. For recent
evaluations of $\Delta\alpha_{\rm had}$
from $R(s)$, see Refs.~\cite{Blondel:2019vdq,Davier:2019can,Keshavarzi:2019abf}.
Efforts are also underway to compute $\Delta\alpha_{\rm had}(s) \equiv \Pi_{\rm had}(0) -
\text{Re}\,\Pi_{\rm had}(s)$ using lattice QCD \cite{Burger:2015lqa,Ce:2019imp}, but more
work and a more detailed evaluation of systematic errors will be needed before
they can be applied in phenomenological applications. Finally, it is possible to
extract $\Delta\alpha(s)$ directly from measurements of Bhabha scattering
\cite{Abbiendi:2005rx,Achard:2005it,KLOE-2:2016mgi}, but
the currently achievable precision is not competitive with the dispersion relation
method.
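As a simple consistency check of the dispersion relation \eqref{delal2}, one can insert the tree-level $R(s)$ for a single lepton pair (here the muon) and compare with the well-known one-loop asymptotics $\frac{\alpha}{3\pi}\bigl[\ln(M_Z^2/m^2)-\tfrac{5}{3}\bigr]$. A numerical sketch assuming `scipy` (the principal value across the pole is handled by the `cauchy` quadrature weight; mass and coupling values are standard inputs):

```python
import numpy as np
from scipy.integrate import quad

alpha = 1/137.035999
MZ2 = 91.1876**2          # M_Z^2 in GeV^2
m = 0.1056584             # muon mass in GeV

def R(s):
    # tree-level R(s) for a single lepton pair of mass m
    if s <= 4*m**2:
        return 0.0
    beta = np.sqrt(1.0 - 4*m**2/s)
    return beta*(1.0 + 2*m**2/s)

f = lambda s: R(s)/s
# principal value across the pole at s' = MZ^2 via the 'cauchy' weight
pv, _ = quad(f, 4*m**2, 4*MZ2, weight='cauchy', wvar=MZ2, limit=400)
# pole-free remainder, plus an analytic tail using R -> 1 for s -> infinity
reg, _ = quad(lambda s: f(s)/(s - MZ2), 4*MZ2, 1e7, limit=400)
tail = -np.log(1.0 - MZ2/1e7)/MZ2
dalpha = -alpha*MZ2/(3*np.pi)*(pv + reg + tail)

approx = alpha/(3*np.pi)*(np.log(MZ2/m**2) - 5.0/3.0)
```

Both evaluations give $\Delta\alpha_\mu(M_Z^2)$ of order $10^{-2}$, agreeing up to terms suppressed by $m^2/M_Z^2$ and the numerical integration error.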
\begin{figure}
\centering
\epsfig{figure=disp.eps, width=8cm, bb=130 400 456 706}
\caption{Integration contour for using Cauchy's integral theorem for a function
that has a branch cut along the positive real axis (indicted by a zigzag line).
The circle section is understood to have a radius $R \to \infty$.}
\label{disp1}
\end{figure}
\begin{quote}
\textbf{Derivation of eq.~\eqref{delal2}:} Suppose a function $f(z),\,z\in
\mathbbm{C}$ has a branch cut along the positive real axis, but is analytical
elsewhere. One can then apply Cauchy's integral theorem for a contour $\cal C$
that excludes the branch cut, see Fig.~\ref{disp1}
\begin{align}
f(z_0) = \frac{1}{2\pi i} \oint_{\cal C} dz' \; \frac{f(z')}{z'-z_0}
\end{align}
If $f(z)$ vanishes sufficiently fast for $|z| \to\infty$, only the parts of
$\cal C$ along the real axis need to be considered. For $z_0=s+i\epsilon$ one
then obtains
\begin{align}
f(s+i\epsilon) = \frac{1}{2\pi i} \int_0^\infty ds' \; \frac{f(s'+i\delta) -
f(s'-i\delta)}{s'-s-i\epsilon}
\end{align}
where $\delta < \epsilon$ are both infinitesimally small. Applying this to
$f(z)=\Pi(z)$ and noting that $\Pi(s'-i\delta) = \Pi^*(s'+i\delta)$,
\begin{align}
\text{Re}\,\Pi(s) = \frac{1}{\pi} \int_0^\infty ds' \; \frac{\text{Im}\,
\Pi(s'+i\delta)}{s'-s-i\epsilon} \label{disp2}
\end{align}
[The $i\epsilon$ on the l.h.s.\ can be dropped if we only consider the real part
of $\Pi$.]
Now we can relate the photon vacuum polarization $\Pi(s)$ to the matrix
element for $e^+e^- \to e^+e^-$ with a photon self-energy in the s-channel:
\begin{align}
\text{Im}\, \Pi(s') &= \frac{1}{e^2} \;\; \text{Im}\;{\cal M}\biggl\{
\raisebox{-.7em}{\epsfig{figure=eeee1.eps, height=2em}}\biggr\}_{\theta=0} \\
&= \frac{s'}{e^2} \sum_f \sigma[e^+e^- \to f\bar{f}] && \text{[optical theorem]}
\label{opt1}\\
&= \frac{s'}{e^2} \,R(s')\,\underbrace{\sigma[e^+e^- \to
\mu^+\mu^-]}_{4\pi\alpha^2/(3s')} \label{opt2}
\end{align}
where ``$\theta=0$'' indicates that we are restricting ourselves to forward
scattering, $i.\,e.$ the kinematics of the final-state $e^+e^-$ are the same as
in the initial state. Then we can apply the optical theorem in \eqref{opt1}.
Inserting \eqref{opt2} into \eqref{disp2}, one arrives at
\begin{align}
\text{Re}\;\bigl[\Pi(s)-\Pi(0)\bigr] =
\frac{\alpha}{3\pi} \int_0^\infty ds' \; R(s') \biggl[ \frac{1}{s'-s-i\epsilon}
- \frac{1}{s'}\biggr]
\end{align}
which immediately leads to the formula for $\Delta\alpha$ in \eqref{delal2}.
\rule{1ex}{1ex}
\end{quote}
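As a quick numerical sanity check of the subtracted dispersion relation (not part of the derivation above), one can pick the toy spectral function $\text{Im}\,f(s') = \theta(s'-s_0)$, which corresponds to $f(z) = -\frac{1}{\pi}\ln(s_0-z)$, and compare the dispersion integral with the explicit result:

```python
import numpy as np
from scipy.integrate import quad

# Toy check of the subtracted dispersion relation: take Im f(s') = 1 for
# s' > s0 (threshold s0), corresponding to f(z) = -ln(s0 - z)/pi.
# This is an illustrative choice, not a physical self-energy.
s0 = 1.0
s = 0.5  # evaluation point below threshold, so no i*epsilon is needed

disp, err = quad(lambda x: (1.0/(x - s) - 1.0/x)/np.pi, s0, np.inf)
exact = -np.log((s0 - s)/s0)/np.pi  # Re[f(s) - f(0)] from the explicit function

print(disp, exact)  # both ~ 0.2206
```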
\paragraph{Exercise:} Compute the result in \eqref{se1}. {\sl Hint:}
$\Sigma_T(k^2)$ can be computed with Feynman rules and standard techniques, with
the result $\frac{\alpha}{3\pi}\Bigl[
\frac{3(d/2-1)k^2+6m^2}{d-1}B_0(k^2,m^2,m^2) - \frac{4(d-2)}{d-1} A_0(m^2)
\Bigr]$. To compute the derivative of $B_0(k^2,m^2,m^2)$,
show that $\frac{\partial^2}{\partial k_\mu \partial k^\mu} f(k^2)
= 4k^2\,f''(k^2) + 2d\,f'(k^2)$. Then apply $\frac{\partial^2}{\partial k_\mu
\partial k^\mu}$ \emph{inside} the integral and use this to compute
$\frac{\partial}{\partial(k^2)}B_0(k^2,m^2,m^2)\big|_{k^2=0}$. Finally, express the
result in terms of $A_0(m^2)$ and derivatives thereof and use $A_0(m^2) =
m^2\Bigl[\frac{2}{4-d} - \gamma_E - \ln\frac{m^2}{4\pi\mu^2} + 1\Bigr].$
\label{se1a}
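The contraction identity quoted in the exercise can also be verified symbolically; the sketch below checks it in $d=4$ for a few test functions (the identity holds for general $d$, and the signs come out the same in Euclidean and Minkowski conventions):

```python
import sympy as sp

# Check d^2/(dk_mu dk^mu) f(k^2) = 4 k^2 f''(k^2) + 2 d f'(k^2) for d = 4,
# using a Euclidean k^2 = sum k_i^2 (the identity is signature-independent).
d = 4
k = sp.symbols('k0:4', real=True)
ksq = sum(ki**2 for ki in k)
s = sp.Symbol('s')

for f in (sp.exp(s), s**3, sp.cos(s)):
    lhs = sum(sp.diff(f.subs(s, ksq), ki, 2) for ki in k)
    rhs = (4*s*sp.diff(f, s, 2) + 2*d*sp.diff(f, s)).subs(s, ksq)
    assert sp.simplify(lhs - rhs) == 0
```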
\subsection{On-shell Renormalization in the Standard Model}
In this subsection, the renormalization procedures from QED are extended to the
full Standard Model (SM). Some unique aspects related to electroweak symmetry
breaking and massive gauge bosons are reviewed in detail, whereas the remaining
aspects that are conceptually similar to QED are only summarized
briefly\footnote{A more detailed exposition of renormalization in the SM can be
found $e.\,g.$ in Ref.~\cite{Denner:1991kt}.}.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|l|l|}
\hline
Gluon & $G_0^a = \sqrt{1+\delta Z_G}\,G^a \quad [a=1,\dots,8]$ \\
Charged $W^\pm$ bosons & $W_0^\pm = \sqrt{1+\delta Z_W}\,W^\pm$ \\
Photon, $Z$ boson & $\begin{pmatrix} Z_0 \\[-.5ex] A_0 \end{pmatrix}
= \begin{pmatrix} \sqrt{1+\delta Z_{ZZ}} & \frac{1}{2}\delta Z_{AZ} \\[-.5ex]
\frac{1}{2}\delta Z_{AZ} & \sqrt{1+\delta Z_{AA}} \end{pmatrix}
\begin{pmatrix} Z \\[-.5ex] A \end{pmatrix}$ \\
\hline
Higgs boson & $H_0 = \sqrt{1+\delta Z_H}\,H$ \\
\hline
Fermions & $\begin{matrix} \psi^L_{f,0} = \sqrt{1+\delta Z^L_f}\,\psi^L_f
\\
\psi^R_{f,0} = \sqrt{1+\delta Z^R_f}\,\psi^R_f \end{matrix}$ \\
\hline
\end{tabular}
\caption{Field renormalization counterterms of the SM fields.}
\label{smfield}
\end{table}
The field content of the SM and the associated field renormalization
counterterms are listed in Tab.~\ref{smfield}. The photon and $Z$ boson receive
a matrix-valued renormalization factor to account for mixing between these two
fields. Since the left- and right-handed fermions in the SM have different
interactions, they also receive independent renormalization factors.
In addition to the field renormalization terms, the SM also contains several
parameters that in general will be renormalized by higher-order effects:
\begin{itemize}
\item Gauge couplings $g,g',g_{\rm s}$ associated with the U(1), SU(2) and SU(3)
gauge interactions.
\item Yukawa couplings $y_f$. In general these are matrices in the space of the
three SM fermion generations, but for the purpose of these lectures we will
ignore CKM mixing\footnote{This approximation is justified by the fact that for
electroweak physics CKM mixing is most relevant in the third generation, due to
the enhancement from the large top-quark mass, but the CKM matrix is very nearly
diagonal in the third row.}.
\item Higgs vacuum expectation value (vev) $v = \sqrt{2}\,\langle \phi_2 \rangle \approx
246$~GeV, where $\phi$ is the Higgs SU(2) doublet, and the Higgs self-coupling
$\lambda$.
\end{itemize}
For the renormalization procedure, it is desirable to relate these parameters to
observables, such as
\begin{itemize}
\item the positron charge $e$ (in the Thomson limit);
\item the massive boson masses $M_W,\,M_Z,\,M_H$;
\item the fermion masses $m_f$ ($f=e,\mu,\tau,u,d,s,c,b,t$)\footnote{Neutrino
masses are exactly zero in the SM, in obvious conflict with observations.
However, the tiny neutrino masses are irrelevant for electroweak physics.}.
\end{itemize}
At tree-level, the relationship between parameters and these observables is
given by the following equations
\begin{flalign}
\hspace{3ex}
\begin{aligned}
&\bullet \; \mathswitch {c_{\scrs\mathswitchr W}} \equiv \cos\theta_{\rm W} = \frac{M_W}{M_Z}, \quad \mathswitch {s_{\scrs\mathswitchr W}}^2 = 1-\mathswitch {c_{\scrs\mathswitchr W}}^2, \\
&\bullet \;g = \frac{e}{\mathswitch {s_{\scrs\mathswitchr W}}}, \quad g'=\frac{e}{\mathswitch {c_{\scrs\mathswitchr W}}}, \\
&\bullet \;v = 2M_W/g, \\
&\bullet \;\lambda = M_H^2/(2v^2).
\end{aligned} && \label{osr}
\end{flalign}
Here the weak mixing angle $\theta_{\rm W}$ has been introduced for convenience.
The \emph{on-shell (OS) renormalization scheme} is defined by enforcing relations in
eq.~\eqref{osr} to all orders in perturbation theory.
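For orientation, the tree-level relations in eq.~\eqref{osr} can be evaluated numerically. The input values below are illustrative PDG-like numbers, not taken from the text:

```python
import math

# Tree-level evaluation of the OS relations in eq. (osr), using illustrative
# inputs alpha(0) = 1/137.036, M_W = 80.379 GeV, M_Z = 91.1876 GeV,
# M_H = 125.25 GeV (assumed PDG-like values).
alpha, M_W, M_Z, M_H = 1/137.035999, 80.379, 91.1876, 125.25

e = math.sqrt(4*math.pi*alpha)
cw = M_W/M_Z
sw = math.sqrt(1 - cw**2)
g, gp = e/sw, e/cw
v = 2*M_W/g
lam = M_H**2/(2*v**2)

print(g, gp, v, lam)
# v comes out near 251 GeV rather than 246 GeV: the difference reflects the
# radiative correction Delta r absorbed into the G_mu determination of v.
```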
Note the absence of $g_{\rm s}$ in this list. Since the strong coupling becomes
non-perturbative at low energies, there is no OS definition for $g_{\rm s}$.
Instead, the most common prescription for this coupling is the so-called \mathswitch{\overline{\mathswitchr{MS}}}\
scheme, where the counterterm is defined as a pure UV-divergent term in
dimensional regularization: \label{msbar1}
\begin{align}
\delta g_{\rm s} &= (4\pi e^{-\gamma_{\rm E}})^{L\varepsilon}
\Bigl(\frac{C_L}{\varepsilon^L} + \frac{C_{L-1}}{\varepsilon^{L-1}} +
... + \frac{C_1}{\varepsilon} \Bigr),
&& \varepsilon = \frac{4-d}{2}, \quad L = \text{loop order}
\end{align}
where the $C_i$ are chosen such that the sum of the $L$-loop corrections to the
$gq\bar{q}$ vertex and the vertex counterterm are UV-finite:
\begin{align}
\left[\raisebox{-2em}{\epsfig{figure=vertexg.eps, height=4.5em,
bb=240 640 350 720, clip=}}
\right]_{L\rm -loop}
+ \raisebox{-2em}{\epsfig{figure=vertexg0.eps, height=4.5em,
bb=240 640 350 720, clip=}} \times
\bigl[\sqrt{1+\delta Z_G}\,(1+\delta Z_q)(1+\delta g_{\rm s})\bigr]_{L\rm -loop}
= \text{finite}
\end{align}
\noindent
\paragraph{\boldmath $\gamma$--$Z$ mixing:}
The derivation of the OS counterterms proceeds in a similar fashion as for QED,
with a few modifications. For example, the expression for the charge counterterm
in \eqref{dze1} must be adjusted to account for photon--$Z$ mixing:
\begin{align}
\delta Z_{e(1)} &= -\tfrac{1}{2}\delta Z_{AA(1)} - \frac{\mathswitch {s_{\scrs\mathswitchr W}}}{2\mathswitch {c_{\scrs\mathswitchr W}}}
\delta Z_{ZA(1)}
\end{align}
Here and in the following, the subscript $(n)$ indicates the loop order.
Additional renormalization conditions are needed to fix the mixing
counterterms $\delta Z_{ZA}$ and $\delta Z_{AZ}$. Within the OS scheme, this is
achieved by demanding that an on-shell photon does not mix with the $Z$ boson,
and conversely an on-shell $Z$ boson does not mix with the photon. At one-loop
order, we can write the photon--$Z$ two-point function as
\begin{align}
G_{\mu\nu(1)}^{AZ} &\equiv \;\raisebox{-1.3em}{\epsfig{figure=se2.eps, height=3em, bb=90 640 510
720, clip=}} \notag \\
& = -i\Bigl(g_{\mu\nu} - \frac{k_\mu k_\nu}{k^2} \Bigr)
\bigl[ \Sigma^{AZ}_{T(1)}(k^2) + k^2\tfrac{1}{2}\delta Z_{AZ(1)} +
(k^2-M_Z^2) \tfrac{1}{2} \delta Z_{ZA(1)} \bigr ]
-i\frac{k_\mu k_\nu}{k^2} \bigl[ ... \bigr] \label{azmix}
\end{align}
where the blob symbolizes all loop diagrams contributing at the given order,
whereas the cross symbolizes the counterterm contributions. The former is
formally described by the photon--$Z$ self-energy $\Sigma^{AZ}_{T(1)}$, whereas
the latter receives two contributions: the one with $\delta Z_{AZ(1)}$ stems
from the $A_0$ propagator, when the $A_0$ field gets renormalized according to
Tab.~\ref{smfield}, while the one with $\delta Z_{ZA(1)}$ stems from the $Z_0$
propagator, when the $Z_0$ field gets renormalized. The
longitudinal part has not been spelled out in \eqref{azmix} because it does not contribute to
physical in- and out-states. Now, imposing the on-shell non-mixing conditions,
one obtains
\begin{align}
&G_{\mu\nu(1)}^{AZ} = 0 \quad \text{for } k^2 = 0
& \Rightarrow \quad \delta Z_{ZA(1)} &= 2\frac{\Sigma^{AZ}_{T(1)}(0)}{M_Z^2}
\\
&G_{\mu\nu(1)}^{AZ} = 0 \quad \text{for } k^2 = M_Z^2
& \Rightarrow \quad \delta Z_{AZ(1)} &=
-2\frac{\text{Re}\,\Sigma^{AZ}_{T(1)}(M_Z^2)}{M_Z^2}
\end{align}
\noindent
\paragraph{Unstable particles:} \label{unstable}
An additional complication arises for the renormalization of unstable particles,
since their self-energy has an imaginary part, $\text{Im}\,\Sigma(M^2)>0$, so
that the pole of the propagator becomes complex! In practice, in the SM, this is
relevant for the $W$ and $Z$ bosons and the top quark, since the width of all
other SM particles is negligibly small.
A detailed review of this issue can be found, $e.\,g.$, in
Ref.~\cite{Freitas:2016sty}. Here we will illustrate the main points using the
example of the $W$ boson. The propagator pole is defined
by
\begin{align}
Z_W(k^2-M^2_{W,0}) + \Sigma^W_T(k^2) = 0
\qquad\text{for}\quad k^2 = M_W^2 - iM_W\Gamma_W \label{compm1}
\end{align}
The real part of the complex pole can be interpreted as the renormalized mass $M_W$, whereas the
imaginary part is associated with the decay width, $\Gamma_W$. Defining the OS
mass in this way ensures that it is well-defined and gauge-invariant to all
orders in perturbation theory, since the propagator pole is an analytic property
of the physical $S$-matrix
\cite{Willenbrock:1991hu,Sirlin:1991fd,Stuart:1991xk,Gambino:1999ai}.
What is the implication of the complex pole for the counterterms $\delta Z_W$
and $\delta M_W^2$? To answer this question, let us assume that $\Gamma_W \ll
M_W$\footnote{Numerically, $\Gamma_W/M_W \approx 2.5\%$ in the SM.}. Then one can
expand \eqref{compm1} as
\begin{align}
Z_W(M_W^2-iM_W\Gamma_W-M_W^2-\delta M_W^2) + \Sigma^W_T(M_W^2) -
iM_W\Gamma_W \, \Sigma^{W\prime}_T(M_W^2) + {\cal O}(\Gamma_W^2) = 0
\label{compm2}
\end{align}
where $\Sigma^{W\prime}_T(k^2) = \frac{\partial}{\partial(k^2)}\Sigma^W_T(k^2)$.
Taking the imaginary part of \eqref{compm2} one obtains
\begin{align}
&Z_W M_W \Gamma_W \approx \text{Im}\,\Sigma^W_T(M_W^2)
- M_W\Gamma_W \; \text{Re}\,\Sigma^{W\prime}_T(M_W^2) \\
&\Rightarrow\quad \Gamma_W \approx \frac{\text{Im}\,\Sigma^W_T(M_W^2)}{M_W [Z_W +
\text{Re}\,\Sigma^{W\prime}_T(M_W^2)]} \label{compm3}
\intertext{$i.\,e.$ this provides a prescription for computing the total decay
width. On the other hand, the real part of \eqref{compm2} leads to}
&Z_W\,\delta M_W^2 \approx \text{Re}\,\Sigma^W_T(M_W^2)
+ M_W\Gamma_W \; \text{Im}\,\Sigma^{W\prime}_T(M_W^2) \label{compm4}
\end{align}
The last term in \eqref{compm4} would not be present for a stable particle.
However, for an unstable particle, its inclusion is important to ensure that the
renormalized mass is well-defined and gauge-invariant.
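The quality of the expansion leading to \eqref{compm3} can be illustrated with a toy self-energy (an invented analytic function with a positive imaginary part, not the actual $W$ self-energy), comparing the exact complex pole with the approximate width formula:

```python
import math

# Toy model of the pole equation Z*(k2 - M0^2) + Sigma(k2) = 0.
# Sigma below is a made-up analytic stand-in for the true self-energy.
Z, M0sq = 1.0, 80.4**2

def Sigma(k2):
    return (0.02 + 0.03j)*k2  # toy self-energy, linear in k2

# solve for the complex pole by fixed-point iteration
k2 = complex(M0sq)
for _ in range(100):
    k2 = M0sq - Sigma(k2)/Z

M = math.sqrt(k2.real)
Gamma = -k2.imag/M  # from k2 = M^2 - i*M*Gamma

# approximation of eq. (compm3): Gamma ~ Im Sigma(M^2) / (M*(Z + Re Sigma'(M^2)))
dSigma = 0.02 + 0.03j  # derivative of the toy Sigma
Gamma_approx = Sigma(M**2).imag / (M*(Z + dSigma.real))
print(Gamma, Gamma_approx)  # agree up to O(Gamma^2/M^2) effects
```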
Eqs.~\eqref{compm3} and \eqref{compm4} depend on the field renormalization
counterterm $\delta Z_W$. However, it becomes ill-defined when taking into
account the width $\Gamma_W$, because we do not know whether we should demand
that it compensates the self-energy correction for $p^2=M_W^2$ or for
$p^2=M_W^2-iM_W\Gamma_W$. The latter may seem preferable because it is
the gauge-invariant pole of the propagator, but what are we to make of an
external particle with complex momentum?
The problem occurs because we have explicitly taken into account the fact that
the $W$ boson is unstable. But in this case it cannot be an asymptotic external
state, because it will decay rather rapidly! Instead, we should consider a
process where the production and decay of the $W$ boson is included, so that it
occurs only as an internal particle. An example would be $u\bar{d} \to W^+ \to
\mu^+\nu_\mu$. When computing this process, $\delta Z_W$ occurs in the
intermediate $W$ propagator, but also in the initial-state $u\bar{d}W$ vertex
and in the final-state $W\mu^+\nu_\mu$ vertex. Summing up all these
contributions, one can easily verify that $\delta Z_W$ drops out in the total
result, and we never need to provide an explicit expression for it.
\bigskip
\noindent
\begin{minipage}[b]{.7\textwidth}
\paragraph{Tadpole renormalization:} A large number of loop diagrams contains
so-called \emph{tadpoles}, which are sub-diagrams with one external leg. An
example for the process $\mu^- \to e^-\nu_\mu\bar{\nu}_e$ is shown to the right.
In a practical calculation, these diagrams constitute a large fraction of the
total number of diagrams and they significantly increase the size of
intermediate algebraic expressions.
\end{minipage}
\hfill\raisebox{1em}{%
\epsfig{figure=mudec_tad.eps, width=.25\textwidth}}
Fortunately, these tadpoles can be absorbed by renormalizing the Higgs vev,
\begin{align}
v_0 = v + \delta v \label{vevren}
\end{align}
Introducing this additional counterterm will break any
tree-level relationships between the vev and other parameters. For example, for
the bare Higgs potential,
\begin{align}
V_0 &= -\mu_0^2|\phi|^2 + \lambda_0|\phi|^4
\end{align}
one has $v_0 = \sqrt{\mu_0^2/\lambda_0}$, but this relationship between $v$,
$\lambda$ and $\mu$ can be modified at higher orders without causing any
problems since $v$ is not an observable, and its numerical value does not have
any direct physical meaning. Therefore, we are free to choose the counterterm
$\delta v$ at will during the computation of any physical observables, without
affecting the final result.
To see how this can be used to eliminate tadpole diagrams, let us write the
Higgs doublet field as
\begin{align}
\phi &= \begin{pmatrix} G^+ \\ \frac{1}{\sqrt{2}}(v+H+iG^0) \end{pmatrix}
\end{align}
where $G^+,G^0$ are the Goldstone fields. Using $\mu_0^2 = \frac{1}{2}M_{H,0}^2
= \frac{1}{2}(M_H^2 + \delta\tilde{M}_H^2)$, and expanding the bare Higgs
potential to one-loop order, one finds
\begin{align}
V_0 &= V + \delta V + {\cal O}(\delta X^2), \\
V &= -\frac{M_H^2}{2}|\phi|^2 + \frac{M_H^2}{2v^2}|\phi|^4, \\
\delta V &= \text{const.} - \underbrace{M_H^2\,\delta v}_{\equiv\, \delta t}\,H
+
\underbrace{\Bigl(\frac{\delta\tilde{M}_H^2}{2}-\frac{3}{2}M_H^2\,\frac{\delta
v}{v} \Bigr)}_{\equiv\, \delta M_H^2/2}H^2 - \frac{M_H^2\,\delta v}{2v}(G_0^2
+ 2G^+G^-) + \text{interact.} \label{delV}
\end{align}
To avoid clutter, the constant term (which is physically irrelevant) and any
interaction terms involving three or more scalar fields have not been spelled
out in eq.~\eqref{delV}. The term linear in $H$ produces a tadpole-like
counterterm Feynman rule:
\begin{align}
& \raisebox{-1em}{\epsfig{figure=tadct.eps, width=5em,
bb=245 662 360 712, clip=}} = i\,\delta t
\intertext{Now one can choose $\delta t$ (or, equivalently, $\delta v$) such
that the sum of the tadpole loop diagrams plus this counterterm vanishes:}
&\raisebox{-1em}{\epsfig{figure=tad1.eps, width=5em,
bb=245 662 360 712, clip=}} \;+
\raisebox{-1em}{\epsfig{figure=tadct.eps, width=5em,
bb=245 662 360 712, clip=}} \; =\; 0
\end{align}
When adopting this convention, no tadpole diagrams need to be taken into account
in an actual calculation of a physics process.
As can be seen in \eqref{delV}, $\delta v$ also modifies the Higgs mass
counterterm, but this shift can be absorbed by redefining this mass counterterm.
The new counterterm $\delta M_H^2$ can be derived from the standard OS
renormalization condition without needing to worry about tadpoles.
However, $\delta v$ also generates a fictitious mass for the Goldstone bosons
($G^0,G^\pm$), as also shown in \eqref{delV}.
Since these fields are \emph{a priori} massless, this fictitious
mass cannot be absorbed by any redefinition of other parameters. While this is
not explicitly shown in \eqref{delV}, additional non-trivial contributions
proportional to $\delta v$ also appear in self-interactions of the Goldstone
scalars. These Goldstone mass and vertex corrections are a (small) price to pay
for renormalizing away the tadpole diagrams. For a complete list of Feynman
rules modified by $\delta t$ ($\delta v$), see $e.\,g.$
Ref.~\cite{Denner:1991kt}.
\paragraph{Exercise:} Determine the contributions of $\delta t$ and $\delta
M_H^2$ to the scalar three- and four-point interactions in $\delta V$.
\label{dV}
\subsection{Other Renormalization Schemes}
\label{otherren}
While the OS scheme has certain advantages, by relating renormalized SM
parameters to physical observables, a range of other renormalization schemes are
frequently adopted in the literature. They each come with specific advantages and
disadvantages (indicated by
+\hspace{-2.1ex}$\bigcirc$ and
$-$\hspace{-2.1ex}$\bigcirc$ below, respectively).
\paragraph{\mathswitch{\overline{\mathswitchr{MS}}}\ scheme:} As already mentioned on page~\pageref{msbar1}, all
counterterms in the \mathswitch{\overline{\mathswitchr{MS}}}\ scheme have the form
\begin{align}
\delta X &= (4\pi e^{-\gamma_{\rm E}})^{L\varepsilon}
\Bigl(\frac{C_L}{\varepsilon^L} + \frac{C_{L-1}}{\varepsilon^{L-1}} +
... + \frac{C_1}{\varepsilon} \Bigr),
&& \varepsilon = \frac{4-d}{2}, \quad L = \text{loop order}
\end{align}
$i.\,e.$ they simply subtract the divergent pieces of an amplitude but do not
contain any non-trivial finite terms. The factor with $4\pi$ and $\gamma_{\rm
E}$ in front is included to cancel similar terms that universally appear for any
divergent loop integral in dimensional regularization.
\begin{itemize}
\item[+\hspace{-1.7ex}$\bigcirc$]
The dependence on the scale $\mu$ of dimensional regularization, $d^4k \to
\mu^{4-d} d^dk$, is not cancelled by the counterterms, so that the \mathswitch{\overline{\mathswitchr{MS}}}\
couplings and masses depend on the choice of $\mu$. The $\mu$-dependence can be
described by the renormalization group, which allows one to resum dominant terms
in some calculations and to study phase transitions.
\item[+\hspace{-1.7ex}$\bigcirc$]
In some cases, when renormalizing the relevant parameters of a physical process
in the \mathswitch{\overline{\mathswitchr{MS}}}\ scheme, the perturbation series for this process converges better
than in the OS scheme.
\item[$-$\hspace{-1.7ex}$\bigcirc$]
One needs an additional calculation to relate a \mathswitch{\overline{\mathswitchr{MS}}}\ parameter to an observable,
in order to determine the numerical value for this parameter from experiment.
\end{itemize}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{3em}}c@{\hspace{3em}}c@{\hspace{3em}}c}
\epsfig{figure=mudec_Gmu.eps, width=1in} &
\epsfig{figure=mudec_born.eps, width=1.2in} &
\epsfig{figure=mudec_1la.eps, width=1.2in} &
\epsfig{figure=mudec_1lb.eps, width=1.2in} \\
(a) & (b) & (c) & (d)
\end{tabular}
\caption{Diagrams for muon decay (a) in the Fermi model, (b) at tree-level in
the SM, and (c,d) contributing to the one-loop corrections in the SM.}
\label{mudec}
\end{figure}
\paragraph{\boldmath $G_\mu$ scheme:} The Fermi constant $G_\mu$ describes the
decay of muons as an effective four-fermion interaction, given by the
Lagrangian
\begin{align}
{\cal L}_{\rm Fermi} = \frac{G_\mu}{2\sqrt{2}}
\bigl(\overline{\psi}_{\nu_\mu}\gamma^\lambda\omega_-\psi_\mu\bigr)
\bigl(\overline{\psi}_e\gamma_\lambda\omega_-\psi_{\nu_e}\bigr)
\end{align}
where $\omega_- \equiv (1-\gamma_5)/2$. In Feynman diagrammatic form, the muon
decay process generated by this interaction is shown in Fig.~\ref{mudec}~(a).
In the SM, the four-fermion interaction
is instead mediated by $W$-boson exchange at tree-level, see Fig.~\ref{mudec}~(b).
Therefore, $G_\mu$ can be expressed in terms of SM parameters,
\begin{align}
\frac{G_\mu}{\sqrt{2}} = \frac{g^2}{8M_W^2}(1+\Delta r) \label{GmuSM}
\end{align}
where $\Delta r$ accounts for corrections beyond tree-level (see
Fig.~\ref{mudec}~(c,d) for example diagrams).
This relation can be used to express the weak coupling $g$ in terms of $G_\mu$:
\begin{align}
g^2 = 4\sqrt{2}\,G_\mu M_W^2 (1+\Delta r)^{-1} \label{gtoGmu}
\end{align}
When computing electroweak radiative corrections, instead of writing them as a
series in powers of $g=e/\mathswitch {s_{\scrs\mathswitchr W}}$, one can employ \eqref{gtoGmu} to represent them
as a series in powers of $G_\mu$.
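As a numerical illustration of the $G_\mu$ scheme (taking $\Delta r \to 0$ at tree level, with PDG-like inputs assumed here), eq.~\eqref{gtoGmu} implies an effective coupling $\alpha_{G_\mu} \approx 1/132$, noticeably larger than $\alpha(0) \approx 1/137$:

```python
import math

# Tree-level relation g^2 = 4*sqrt(2)*G_mu*M_W^2 (i.e. Delta r = 0), combined
# with e = g*s_W, gives the effective coupling alpha in the G_mu scheme.
# Input values are illustrative PDG-like numbers.
G_mu = 1.1663787e-5          # GeV^-2
M_W, M_Z = 80.379, 91.1876   # GeV

sw2 = 1.0 - (M_W/M_Z)**2             # on-shell weak mixing angle
g2 = 4.0*math.sqrt(2.0)*G_mu*M_W**2  # tree level, Delta r -> 0
alpha_Gmu = g2*sw2/(4.0*math.pi)

print(1.0/alpha_Gmu)  # ~ 132, compared to alpha(0)^-1 ~ 137
```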
\begin{itemize}
\item[+\hspace{-1.7ex}$\bigcirc$] $G_\mu = 1.1663787(6)\times
10^{-5}\,\text{GeV}^{-2}$ is precisely known from measurement
\cite{Zyla:2020zbs}.
\item[+\hspace{-1.7ex}$\bigcirc$] The leading corrections in $\Delta r$ may
(partially) cancel similar terms in other observables. For example, when
computing the $W$ decay rate, the one-loop corrections are much smaller in the
$G_\mu$ scheme than in the OS scheme of the previous subsection \cite{Denner:1991kt}.
\item[$-$\hspace{-1.7ex}$\bigcirc$] One needs to include the corrections to muon
decay ($\Delta r$) in the calculation of any other observable.
\end{itemize}
\section{Electroweak Precision Observables}
\label{ewpos}
The term \emph{electroweak precision observable} (EWPO) refers to a set of
quantities that have been measured with high precision (typically at the
per-mille level or better) and that are related to properties of the electroweak
($W$ and $Z$) gauge bosons. In general, they also include a number of quantities
that are not strictly intrinsic to the electroweak sector, but that are needed
to make predictions for EWPOs within the SM. These are often called ``input
parameters.''
Another rationale for distinguishing between input parameters and ``genuine''
EWPOs is the expectation that input parameters are unlikely to be significantly
affected by new physics beyond the SM (possible reasons include: their
measurement is based on kinematical features; they are protected by symmetries;
new physics decouples due to effective field theory arguments). On the other
hand, the genuine EWPOs can get modified by a large range of beyond-the-SM (BSM)
models. In fact, one of the main motivations for studying EWPOs is their
potential to constrain new physics by comparing measurement data with
theoretical SM predictions.
In the following subsection, the relevant input parameters will be discussed one
by one, before giving an overview of the most important genuine EWPOs in the
remainder of this chapter.
\subsection{Input Parameters}
A typical choice of input parameters for electroweak precision studies is:
$\alpha = \frac{e^2}{4\pi}$, $G_\mu$, $\alpha_{\rm s}$, $M_Z$, $M_H$, $m_t$.
The masses of any fermions besides the top quark ($m_f,\,f\neq t$) are generally
negligible in electroweak physics since their impact is suppressed by powers of
$m_f^2/M_W^2$, with the exception of $\Delta\alpha$, where they contribute
logarithmically, see page~\pageref{deltaalpha}.
\paragraph{Fine structure constant \boldmath $\alpha$:}
There are two leading methods for determining the value of $\alpha$:
\begin{itemize}
\item From the electron magnetic moment $a_e = \frac{g_e-2}{2}$
\cite{Aoyama:2019ryr}, which has been
theoretically computed to very high precision:
\begin{align}
a_e = \frac{\alpha}{2\pi} + A_2\alpha^2 + A_3\alpha^3 + A_4\alpha^4 +
A_5\alpha^5 + C_{\rm EW}\frac{m_e^2}{M_W^2} + C_{\rm
had}\frac{m_e^2}{\Lambda_{\rm QCD}^2} + \dots \label{aeth}
\end{align}
The coefficients $A_i$ denote the $i$-loop QED corrections, which have been
computed to five-loop order \cite{Aoyama:2012wk,Aoyama:2012qma,Baikov:2013ula}. Electroweak corrections, denoted by the
term with $C_{\rm EW}$, are suppressed by the small electron mass. Similarly,
hadronic corrections, denoted by the
term with $C_{\rm had}$, first enter at the two-loop level, and they are
suppressed by the ratio $m_e^2/\Lambda_{\rm QCD}^2$, with $\Lambda_{\rm QCD}
\sim {\cal O}(1\,\text{GeV})$. Both of these contributions are negligible at
current levels of precision.
Comparing \eqref{aeth} to precise measurements of $g_e$ using Penning traps
\cite{Hanneke:2008tm}, one finds \cite{Mohr:2015ccw,Aoyama:2017uqe}
\begin{align}
\alpha^{-1} = 137.035\,999\,174(35) \label{alpha1}
\end{align}
where the numbers in brackets indicate the uncertainty in the last quoted
digits.
\item An independent determination of $\alpha$ can be obtained from the defining
formula for the Rydberg constant, $R_\infty$:
\begin{align}
\alpha^2 = \frac{2R_\infty}{c}\,\frac{m_{\rm At}}{m_e}\,\frac{h}{m_{\rm At}}
\end{align}
Here $m_{\rm At}$ is the mass of some atom.
The fine structure constant can be determined by using precise values for
$R_\infty$ (from atomic spectroscopy), $m_{\rm At}$ and $m_e$ (in atomic units),
and $h/m_{\rm At}$ (from atom interferometry). The limiting factor, in terms of
precision, is the measurement of $h/m_{\rm At}$, which recently has been
significantly improved for Cs-133 atoms \cite{Parker:2018vye}, resulting in
\begin{align}
\alpha^{-1} = 137.035\,999\,046(27) \label{alpha2}
\end{align}
\end{itemize}
The two values \eqref{alpha1} and \eqref{alpha2} exhibit a 2.5$\sigma$ tension.
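The Rydberg-constant relation can be checked directly with CODATA values in SI units (rather than the mass ratios used in the actual measurement), using $\alpha^2 = 2R_\infty h/(m_e c)$:

```python
import math

# Check alpha from the Rydberg relation alpha^2 = 2*R_inf*h/(m_e*c),
# using CODATA values in SI units (illustrative cross-check only).
R_inf = 10973731.568160   # 1/m
h     = 6.62607015e-34    # J s (exact)
m_e   = 9.1093837015e-31  # kg
c     = 299792458.0       # m/s (exact)

alpha = math.sqrt(2.0*R_inf*h/(m_e*c))
print(1.0/alpha)  # ~ 137.036
```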
\paragraph{Fermi constant \boldmath $G_\mu$:}
As already mentioned in section~\ref{otherren}, the Fermi constant gives the
strength of an effective four-fermion interaction, which can be
extracted from muon decay. Besides the leading-order diagram in
Fig.~\ref{mudec2}~(a), there are also significant QED corrections as illustrated
in Fig.~\ref{mudec2}~(b,c). The corrections are known to next-to-next-to-leading
(NNLO) order \cite{vanRitbergen:1999fi,Steinhauser:1999bx,Pak:2008qt}. Combining
these with the measured value of the muon lifetime, $\tau_\mu$,
\cite{Webber:2010zf}, one obtains
\begin{align}
G_\mu = 1.1663787(6) \times 10^{-5} \mbox{ GeV}^{-2}
\end{align}
where the uncertainty is dominated by the experimental measurement of $\tau_\mu$,
whereas the estimated theory error from missing higher orders is sub-dominant.
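At leading order (neglecting $m_e$ and QED corrections), the muon lifetime follows from $\Gamma_\mu = G_\mu^2 m_\mu^5/(192\pi^3)$; the sketch below reproduces the measured $\tau_\mu \approx 2.197\times 10^{-6}$~s to within the few-per-mille QED correction:

```python
import math

# Leading-order muon lifetime from the Fermi constant:
# Gamma = G_mu^2 * m_mu^5 / (192 pi^3), neglecting m_e and QED corrections.
G_mu = 1.1663787e-5      # GeV^-2
m_mu = 0.1056583755      # GeV
hbar = 6.582119569e-25   # GeV s

Gamma = G_mu**2 * m_mu**5 / (192*math.pi**3)
tau = hbar/Gamma
print(tau)  # ~ 2.189e-6 s; the measured 2.197e-6 s differs by the ~0.4% QED correction
```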
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{3em}}c@{\hspace{3em}}c@{\hspace{3em}}c}
\raisebox{-.1in}{\epsfig{figure=mudec_Gmu.eps, width=1in}} &
\epsfig{figure=mudec_Gmu1.eps, width=1in} &
\raisebox{-.05in}{\epsfig{figure=mudec_Gmu2.eps, width=1in}} \\
(a) & (b) & (c)
\end{tabular}
\caption{Muon decay in the Fermi model: (a) leading order diagram, and sample
diagrams for the (b) one-loop and (c) two-loop QED corrections.}
\label{mudec2}
\end{figure}
\paragraph{Strong coupling \boldmath $\alpha_{\rm s}=g_{\rm s}^2/(4\pi)$:} There
are many independent methods for its determination. For a complete review, see
chapter 9 of Ref.~\cite{Zyla:2020zbs}. In the following, a few of the most
precise methods are listed:
\begin{itemize}
\item The currently most precise
approach uses lattice QCD calculations. Two recent studies yield
\begin{align}
\text{Lattice:} \qquad \alpha_{\rm s} &= 0.1185 \pm 0.0008 \quad \text{\cite{Bruno:2017gxd}} \\
\alpha_{\rm s} &= 0.1172 \pm 0.0011 \quad \text{\cite{Zafeiropoulos:2019flq}}
\end{align}
\item Differential distributions (event shapes) of $e^+e^- \to \text{jets}$ and
deep inelastic scattering (DIS), using NNLO QCD corrections. These approaches yield
values of $\alpha_{\rm s} \approx 0.114$ on average, significantly below the
numbers obtained with other methods. Possible issues include large
non-perturbative QCD uncertainties (for $e^+e^- \to \text{jets}$) and scheme
dependence and parametrization of parton distribution functions (for DIS). See
chapter 9 of Ref.~\cite{Zyla:2020zbs} for more details.
\item From the branching fraction of taus into hadrons one obtains
\begin{align}
\tau\text{ decays:} \qquad \alpha_{\rm s} &= 0.117 \pm 0.002
\end{align}
(see section 10 of Ref.~\cite{Zyla:2020zbs}). This determination is subject
to non-perturbative hadronic uncertainties, $e.\,g.$ from violations of
quark-hadron duality.
\item EWPOs, in particular the branching ratio $R_\ell \equiv
\Gamma[Z\to\text{had.}]/\Gamma[Z\to\ell^+\ell^-]$ ($\ell=e,\mu,\tau$), yield
\begin{align}
\text{Electroweak precision:} \qquad \alpha_{\rm s} &= 0.1221 \pm 0.0027
\end{align}
(see section 10 of
Ref.~\cite{Zyla:2020zbs}). This method has negligible QCD uncertainties
(both perturbative and non-perturbative), but since $R_\ell$ is a high-energy
observable, it is more likely to be impacted by new physics beyond the SM.
\end{itemize}
\paragraph{Top-quark mass \boldmath $m_t$:} The currently most precise
measurements are based on the invariant mass distribution ($m_{\rm inv}$) of the
top decay products at LHC (for a review see the section ``Top Quark'' in
Ref.~\cite{Zyla:2020zbs}). This approach yields a result that is
numerically very close to the OS (pole) mass \cite{Hoang:2018zrp}. However, the
OS mass of the top quark is theoretically not well-defined due to the presence
of non-perturbative QCD contributions to the top-quark self-energy in
\eqref{dysonf3}. These effects, called \emph{renormalons}, are typically of the
order of ${\cal O}(\Lambda_{\rm QCD}) \sim 300$~MeV, where $\Lambda_{\rm QCD}$
is the scale where $\alpha_{\rm s}$ becomes non-perturbative
\cite{Beneke:2016cbu}. Therefore, when trying to use the peak of the $m_{\rm
inv}$ distribution as an input for other calculations, there necessarily is an
ambiguity of the same order.
The problem of the top-quark mass definition can be circumvented by measuring a
more inclusive observable, such as the total $t\bar{t}$ cross-section,
$\sigma_{t\bar{t}}$, at the LHC. $\sigma_{t\bar{t}}$ can be described in terms of
the \mathswitch{\overline{\mathswitchr{MS}}}\ mass $m_t^{\mathswitch{\overline{\mathswitchr{MS}}}}$, which is free of the renormalon ambiguity.
However, it may be affected by possible new physics effects, such as heavy new
physics particles in the $s$-channel, which are predicted by theories
with extra dimensions and other models (see Fig.~\ref{gkk}).
\begin{figure}
\centering
\epsfig{figure=gkk.eps, width=1.5in, bb=190 615 404 720, clip=true}
\caption{Example of a new physics contribution to $t\bar{t}$ production at the
LHC, due to a Kaluza-Klein excitation of the gluon.}
\label{gkk}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{.5in}}c}
\epsfig{figure=TopXsMassWidthScale.eps, width=8cm, bb=255 95 822 575} &
\raisebox{1in}{\epsfig{figure=eett_1s.eps, width=1.8in,
bb=180 610 404 720, clip=true}} \\[-1ex]
(a) & (b)
\end{tabular}
\caption{(a) Illustration of a 10-point threshold scan for $e^+e^- \to t\bar{t}$ at ILC
(figure taken from Ref.~\cite{Simon:2019axh}). (b) Sample diagram of a
gluon-exchange contribution to the $t\bar{t}$ bound-state effect near threshold.}
\label{eett}
\end{figure}
At future $e^+e^-$ colliders with a center-of-mass energy of at least 350~GeV, a
precise, well-defined measurement of $m_t$ can be performed, that is largely
unaffected by BSM physics. This is achieved through a \emph{threshold scan},
measuring $\sigma_{t\bar{t}}$ at different values of the center-of-mass energy
$\sqrt{s}$, see Fig.~\ref{eett}~(a). The shape of $\sigma_{t\bar{t}}$ as a function of $\sqrt{s}$ can be
predicted with high precision in terms of $m_t^{\mathswitch{\overline{\mathswitchr{MS}}}}$, including NNNLO QCD as
well as NLO and leading NNLO electroweak corrections
\cite{Beneke:2015kwa,Beneke:2016kkb}. The small bump in the lineshape at the
threshold is caused by 1S bound-state effects from gluon exchange\footnote{Here
the spectroscopic notation ``1S'' is used for the lowest-energy mode with zero
orbital angular momentum.}, as
illustrated in Fig.~\ref{eett}~(b). Note that this is not a true bound state
since the decay width of the top quark is larger than the binding energy, but it
still leads to an enhancement of the cross-section at the would-be bound-state
energy.
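A rough Coulomb-type estimate (with an assumed soft-scale coupling $\alpha_{\rm s} \approx 0.14$) confirms that the would-be 1S binding energy is of the same order as the top width, which is why no true bound state forms:

```python
# Rough Coulomb-like estimate of the would-be toponium 1S binding energy,
# E_1S ~ (C_F*alpha_s)^2 * m_t/4 (reduced mass m_t/2), compared with the top
# width. alpha_s at the soft scale ~ 30 GeV is taken as ~0.14 (assumption).
m_t, Gamma_t, alpha_s, C_F = 172.5, 1.42, 0.14, 4.0/3.0

E_1S = (C_F*alpha_s)**2 * m_t/4
print(E_1S, Gamma_t)  # both are O(1.5 GeV): the top decays before a bound state forms
```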
\paragraph{Z-boson mass \boldmath $M_Z$:} The most precise determination of
$M_Z$ is obtained from measurements of the cross-section for $e^+e^- \to
f\bar{f}$ at different center-of-mass energies at LEP.
Ignoring $\gamma$--$Z$ mixing for the moment, this process can be described by
the generic Feynman diagram below, yielding
\begin{align}
&\raisebox{-0.25in}{\epsfig{figure=eeZff.eps, width=1.5in,
bb=200 648 390 720, clip=true}}
&
\sigma_f(s) &= \biggl|\frac{\color{blue} R(s)}{s-M_Z^2-\color{red} \delta M_Z^2 +
\Sigma^Z_T(s)/Z_{ZZ}}\biggr|^2 \label{eezff1} \\[-1ex]
&&&= \biggl|\frac{R(M_Z^2)}{s-M_Z^2+iM_Z\Gamma_Z} + \text{non-res.}\biggr|^2 \label{eezff2}
\end{align}
In line \eqref{eezff1}, the red blob and terms indicate the contribution from
the Z self-energy (including counterterms), whereas the blue blobs and term
denote contributions from vertex corrections. The amplitude inside the modulus
brackets $||$ in \eqref{eezff1} has a complex pole (see
page~\pageref{unstable}). Expanding about this pole and using $\Gamma_Z \ll M_Z$
yields the expression in
\eqref{eezff2}, which has a resonant piece and an infinite series of terms
suppressed by powers of $(s-M_Z^2)$ and $\Gamma_Z/M_Z$, which are not explicitly spelled out here.
Including $\gamma$--$Z$ mixing requires the replacement of $\Sigma_T^Z(s)$ with
\label{mz}
\begin{align}
\Sigma_T^Z(s) &\to
\Sigma_T^Z(s) - \frac{[\hat\Sigma_T^{AZ}(s)]^2}{s+\hat\Sigma_T^A(s)}\,, \\
\hat\Sigma_T^{AZ}(s) &= \Sigma_T^{AZ}(s) + \tfrac{1}{2}
\delta Z_{ZA}\sqrt{Z_{ZZ}}(s-M_Z^2-\delta M_Z^2) + s\tfrac{1}{2}
\delta Z_{AZ}\sqrt{Z_{AA}}\,, \\
\hat\Sigma_T^A(s) &= \Sigma_T^A(s) + s\tfrac{1}{2} \delta Z_{AA}
+ \tfrac{1}{4}(\delta Z_{ZA})^2(s-M_Z^2-\delta M_Z^2)
\end{align}
Even though the expressions for the counterterms are rather lengthy, the result
still takes the general form in \eqref{eezff2}, since this result only relies on
the presence of a complex pole in the amplitude.
Carrying out the square in \eqref{eezff2} yields
\begin{align}
\sigma_f(s) &= \frac{|R(M_Z^2)|^2}{(s-M_Z^2)^2 + M_Z^2\Gamma_Z^2} +
\text{non-res.} \label{bw1}
\end{align}
which is called a \emph{Breit-Wigner} resonance, see the solid curve in
Fig.~\ref{linez}.
\begin{figure}
\centering
\epsfig{figure=linez.eps, width=12cm}
\vspace{-1ex}
\caption{Illustration of $Z$-pole cross-section line-shape (not to scale). The
solid (dashed) line indicates the line-shape without (with) initial-state QED
radiation. The dotted line depicts backgrounds from photon exchange and box
contributions to the cross-section (without initial-state radiation). [Figure
taken from Ref.~\cite{Freitas:2016sty}.]
\label{linez}}
\end{figure}
By fitting this curve to experimental measurements of $\sigma_f$ at three or
more values of $\sqrt{s}$, one can determine $M_Z$ and $\Gamma_Z$ at high
precision. However, in the experimental studies at LEP, a different
parametrization of the Breit-Wigner resonance has been used,
\begin{align}
\sigma_f(s) &= \frac{R'^2}{(s-m_Z^2)^2 + s^2\gamma_Z^2/m_Z^2} +
\text{const.} \label{bw2}
\end{align}
with the results \cite{Zyla:2020zbs,Janot:2019oyi}
\begin{align}
m_Z &= 91.1876\pm 0.0021 \,\text{GeV}, &
\gamma_Z &= 2.4942\pm 0.0023 \,\text{GeV} \label{mzexp}
\end{align}
When ignoring the non-resonant terms, the two forms \eqref{bw1} and \eqref{bw2}
are fully equivalent, but the mass and width parameters are different. The
relation is given by \cite{Bardin:1988xt}
\begin{align}
\begin{aligned}
M_Z &= m_Z(1+\gamma_Z^2/m_Z^2)^{-1/2} \approx m_Z - 34\,\text{MeV}, \\
\Gamma_Z &= \gamma_Z(1+\gamma_Z^2/m_Z^2)^{-1/2} \approx \gamma_Z - 0.9\,\text{MeV} \\
\end{aligned} \label{mztrans}
\end{align}
Thus, whenever aiming to use \eqref{mzexp} as inputs to a theory calculation,
one first needs to apply the translation \eqref{mztrans}\footnote{The same is true
for W mass measurements at colliders.}.
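The size of these shifts follows directly from \eqref{mztrans}; a minimal
Python sketch, using the values from \eqref{mzexp}:

```python
import math

# LEP fit parameters in the s-dependent-width parametrization \eqref{bw2}
m_Z, gamma_Z = 91.1876, 2.4942  # GeV

# Translation to the complex-pole (constant-width) scheme \eqref{mztrans}
factor = 1.0 / math.sqrt(1.0 + gamma_Z**2 / m_Z**2)
M_Z = m_Z * factor
Gamma_Z = gamma_Z * factor

print(f"M_Z = {M_Z:.4f} GeV (shift {1e3*(m_Z - M_Z):.1f} MeV)")
print(f"Gamma_Z = {Gamma_Z:.4f} GeV (shift {1e3*(gamma_Z - Gamma_Z):.2f} MeV)")
```

Note that the 34~MeV mass shift is an order of magnitude larger than the
experimental uncertainty in \eqref{mzexp}, so skipping the translation would
spoil any precision analysis.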
\paragraph{Exercise:} For each of the quantities listed above, which of the following
concepts limits the influence of new physics in their determination: measurement
is based on kinematical features; protected by symmetries; new physics decouples
due to effective field theory arguments (see appendix for answer).
\label{input}
\subsection{Z-pole EWPOs}
Electroweak precision observables at the $Z$-pole are related to the vector and
axial-vector couplings of the $Z$--fermion interactions. For massless fermions,
these interactions have the form
\begin{align}
\overline{\psi}_f\,i\gamma_\mu(v_f - a_f \gamma_5) \, \psi_f
\end{align}
where the subscript $f$ labels the fermion type ($f=e,\mu,\tau,...$). At leading
order (Born level),
\begin{align}
v_f = e\frac{I^3_f-2\mathswitch {s_{\scrs\mathswitchr W}}^2Q_f}{2\mathswitch {s_{\scrs\mathswitchr W}}\mathswitch {c_{\scrs\mathswitchr W}}}, \qquad
a_f = e\frac{I^3_f}{2\mathswitch {s_{\scrs\mathswitchr W}}\mathswitch {c_{\scrs\mathswitchr W}}} \label{zcpl}
\end{align}
Here $Q_f$ is the fermion charge in units of the positron charge $e$, whereas
$I^3_f$ is the third component of weak isospin. Beyond Born level, $v_f$ and
$a_f$ receive radiative corrections within the SM and potentially also from BSM
contributions. Therefore, one can use precision measurements of these quantities
to constrain and potentially discover various types of new physics.
The following observables are useful to extract information about $v_f$ and
$a_f$ from data:
\begin{itemize}
\item \begin{minipage}[t]{.7\columnwidth}
The total $Z$ decay width, $\Gamma_Z$. According to \eqref{compm3},
$\Gamma_Z \propto \text{Im}\,\Sigma^Z_T(M_Z^2)$. Using the optical theorem,
which diagrammatically corresponds to cutting the self-energy diagram shown to
the right, the width is related to the matrix elements for the process $Z \to
f\bar{f}$, so that
\end{minipage}
\hfill
\raisebox{-2.5em}{\epsfig{figure=se_cut.eps, height=3.5em, bb=190 620 410 715,
clip=true}}
\begin{align}
\Gamma_Z \propto \sum_f \bigl|{\cal M}[Z \to f\bar{f}]\bigr|^2
= \sum_f \bigl( |v_f|^2 + |a_f|^2 \bigr)
\end{align}
\item The cross-section for $e^+e^- \to Z \to f\bar{f}$, which, up to a simple
phase-space and flux factor, can be written as
\begin{align}
\sigma_f(s) \propto \bigl( |v_e|^2 + |a_e|^2 \bigr)\,
\biggl|\frac{1}{s-M_Z^2+iM_Z\Gamma_Z+\Sigma^Z_T(s)+...}\biggr|^2
\bigl( |v_f|^2 + |a_f|^2 \bigr)
\end{align}
where the dots in the denominator refer to the $\gamma$--$Z$ mixing and
counterterms described on page~\pageref{mz}.
Near the $Z$ pole ($s \approx M_Z^2$) this expression can be recast into the
form
\begin{align}
\sigma_f(s) \approx \frac{12\pi\,\Gamma_e\Gamma_f}{(s-M_Z^2)^2 + M_Z^2\Gamma_Z^2}\,,
\qquad
\sigma_f^0 \equiv \sigma_f(M_Z^2) = \frac{12\pi}{M_Z^2}\,\frac{\Gamma_e\Gamma_f}{\Gamma_Z^2}
\end{align}
where $\Gamma_f$ is the \emph{partial width} for the decay $Z \to f\bar{f}$ into
a particular fermion type $f$.
\item With polarized electron beams, one can measure the cross-section
separately for left- and right-handed polarized electrons:
\begin{align}
\sigma_{\rm L} \equiv \sigma[e^+e^-_{\rm L} \to f\bar{f}]
\propto |v_e+a_e|^2\,
\biggl|\frac{1}{s-M_Z^2+iM_Z\Gamma_Z}\biggr|^2 \bigl( |v_f|^2 + |a_f|^2 \bigr)
\\
\sigma_{\rm R} \equiv \sigma[e^+e^-_{\rm R} \to f\bar{f}]
\propto |v_e-a_e|^2\,
\biggl|\frac{1}{s-M_Z^2+iM_Z\Gamma_Z}\biggr|^2 \bigl( |v_f|^2 + |a_f|^2 \bigr)
\end{align}
From this one can form a \emph{left-right asymmetry} where several important
systematic uncertainties, such as the luminosity uncertainty or the detector
acceptance, cancel:
\begin{align}
A_{\rm LR} = \frac{\sigma_{\rm L}-\sigma_{\rm R}}{\sigma_{\rm L}+\sigma_{\rm R}}
= \frac{2\,\text{Re}\{v_ea_e^*\}}{|v_e|^2 + |a_e|^2}
= \frac{2\,\text{Re}\{v_e/a_e\}}{1+|v_e/a_e|^2} \equiv {\cal A}_e
\end{align}
Thus, in contrast to the decay width or the total cross-section, this asymmetry
yields information about the ratio of the vector and axial-vector couplings.
\begin{align}
&\text{At Born level:} & \frac{v_f}{a_f} &= 1-4|Q_f|\mathswitch {s_{\scrs\mathswitchr W}}^2
\quad \Bigl[ = 1-4\mathswitch {s_{\scrs\mathswitchr W}}^2 \text{ for } f=e\Bigr] \\
&\text{With higher orders:} & \text{Re}\,\frac{v_f}{a_f} &\equiv
1-4|Q_f|\seff{f} \label{sweff}
\end{align}
where we have defined the \emph{effective weak mixing angle} $\seff{f}$ as the radiatively
corrected version of the on-shell weak mixing angle $\mathswitch {s_{\scrs\mathswitchr W}}^2$.
\item
Without polarized beams, one can use the differential cross-section to obtain
information about the ratio $v_f/a_f$:
\begin{align}
\frac{d\sigma}{d\cos\theta} \propto \bigl( |v_e|^2 + |a_e|^2 \bigr)
\bigl( |v_f|^2 + |a_f|^2 \bigr)(1+\cos^2\theta) + 4\,\text{Re}\{v_ea_e^*\}
\,\text{Re}\{v_fa_f^*\}\cos\theta
\end{align}
where we have spelled out only the terms that depend
on the scattering angle $\theta$ (the angle between the momenta of the incident
$e^-$ and the outgoing $f$), whereas all other terms (such as the $Z$
propagator) are subsumed in the unspecified proportionality factor.
The range of possible scattering angles can be divided into a forward and
backward hemisphere,
\begin{align}
&\sigma_{\rm F} \equiv \int_0^1 d\cos\theta\;\frac{d\sigma}{d\cos\theta},
&
&\sigma_{\rm B} \equiv \int_{-1}^0 d\cos\theta\;\frac{d\sigma}{d\cos\theta}
\end{align}
which then allows us to define the \emph{forward-backward asymmetry}
\begin{align}
A_{\rm FB}^f = \frac{\sigma_{\rm F}-\sigma_{\rm B}}{\sigma_{\rm F}+\sigma_{\rm B}}
= \frac{3\,\text{Re}\{v_ea_e^*\}\,\text{Re}\{v_fa_f^*\}}{(|v_e|^2 + |a_e|^2)
(|v_f|^2 + |a_f|^2)}
= \frac{3}{4} {\cal A}_e{\cal A}_f
\end{align}
\end{itemize}
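At Born level all of these asymmetries are controlled by the single ratio
$v_f/a_f = 1-4|Q_f|\mathswitch {s_{\scrs\mathswitchr W}}^2$; a minimal Python sketch, using an illustrative value
$\seff{\ell}=0.2315$ in place of $\mathswitch {s_{\scrs\mathswitchr W}}^2$:

```python
def calA(r):
    """A_f = 2 Re(v_f/a_f) / (1 + |v_f/a_f|^2) for a real coupling ratio r."""
    return 2.0 * r / (1.0 + r**2)

s2eff = 0.2315           # illustrative leptonic effective weak mixing angle
r_e = 1.0 - 4.0 * s2eff  # v_e/a_e at Born level (|Q_e| = 1)

A_e = calA(r_e)
A_FB = 0.75 * A_e * A_e  # forward-backward asymmetry for f = e

print(f"A_e = {A_e:.4f}, A_FB^e = {A_FB:.4f}")
```

The near-cancellation $1-4\seff{\ell} \approx 0.07$ is what makes the leptonic
asymmetries so small, and conversely so sensitive to the weak mixing angle.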
The quantities introduced above ($\Gamma_Z$, $\Gamma_f$, $\sigma^0_f$, $A_{\rm
FB}^f$, $A_{\rm LR}$) are so-called \emph{pseudo-observables}. The reason for
this terminology is due to the fact that \emph{real observables} involve extra
effects:
\bigskip
\noindent
\begin{minipage}[b]{.6\textwidth}
\paragraph{Initial-state radiation (ISR):} There are corrections due to emission
of real and virtual photons off the incoming electron and positron. Photons that
are soft or collinear to one of the incoming particles lead to contributions
that are enhanced by terms involving logarithms of the form
\begin{align}
&\frac{2\alpha}{\pi}L \equiv \frac{2\alpha}{\pi}\ln\frac{s}{m_e^2} \approx 11\%
&&\text{[for } s = M_Z^2]
\end{align}
\end{minipage}
\hfill\raisebox{2em}{%
\epsfig{figure=isr.eps, width=.35\textwidth, bb=130 560 465 715, clip=}}
\vspace{1ex}\noindent
The ISR effects can be taken into account through a convolution
\begin{align}
\sigma^{\rm full}_f(s) = \int_0^{1-4m_f^2/s} dx \; H(x) \, \sigma^{\rm
deconv}_f\bigl(s(1-x)\bigr) \label{isr}
\end{align}
The deconvoluted cross-section, $\sigma^{\rm deconv}_f$, is illustrated by the gray
blobs in the diagrams above. The radiator function $H(x)$ contains the soft and
collinear photon contributions. It has the general form
\begin{align}
H(x) = \sum_n \Bigl(\frac{\alpha}{\pi}\Bigr)^n \sum_{m=0}^n h_{nm} (2L)^m
\end{align}
The leading logarithms (for $m=n$) are universal ($i.\,e.$ independent of the
specific process) and known to $n=6$ (see Ref.~\cite{Ablinger:2020qvo} and
references therein). Also some sub-leading terms are known for $e^+e^- \to
\gamma^*/Z^* \to f\bar{f}$. The impact of ISR on the cross-section is shown in
Fig.~\ref{linez}.
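The size of the leading ISR logarithm quoted above is a one-line check; a
minimal Python sketch:

```python
import math

alpha = 1.0 / 137.036   # fine structure constant
m_e = 0.000511          # electron mass, GeV
M_Z = 91.1876           # GeV

# Leading soft/collinear logarithm at s = M_Z^2
L = math.log(M_Z**2 / m_e**2)
leading = 2.0 * alpha / math.pi * L
print(f"L = {L:.1f}, (2 alpha/pi) L = {leading:.3f}")
```

Despite the small coupling, the large logarithm lifts this correction to the
10\% level, which is why the resummed radiator function is indispensable.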
\bigskip
\noindent
\begin{minipage}[b]{.5\textwidth}
\paragraph{Backgrounds}: $\sigma^{\rm deconv}_f$ receives contributions from
several sources:
\begin{align}
\sigma^{\rm deconv}_f = \sigma^Z_f + \underbrace{\sigma^\gamma_f +
\sigma^{\gamma Z}_f + \sigma^{\rm box}_f}_{\sigma^{\rm bkgd}_f}
\end{align}
\end{minipage}
\hfill\raisebox{.5em}{%
\epsfig{figure=eeff.eps, width=.4\textwidth, bb=140 600 460 710, clip=}}
\noindent
where $\sigma^Z_f$ stems from s-channel $Z$-boson exchange, $\sigma^\gamma_f$ from
s-channel photon exchange, $\sigma^{\gamma Z}_f$ from the interference of these
two, and $\sigma^{\rm box}_f$ from box diagrams that involve the exchange of two
(or more) gauge bosons between the initial and final fermions.
Only $\sigma_Z$ has a Breit-Wigner resonance at $s\approx M_Z^2$, whereas the
remaining contributions in $\sigma^{\rm bkgd}_f$ are relatively suppressed.
For measurements near the $Z$ pole, the non-resonant terms in $\sigma^{\rm
bkgd}_f$ are typically subtracted, using their SM prediction
\cite{ALEPH:2005ab}.
\paragraph{Detector acceptance and cuts:} The measured cross-section is affected
by the capability of the detector to identify the final state $f\bar{f}$
particles, the presence of extra photon radiation in the detector, blind regions
of the detector, cuts to suppress backgrounds from fakes, etc. These effects are
typically evaluated using Monte-Carlo simulations.
\subsubsection{\boldmath $A_{\rm FB}$ at LHC}
In addition to $e^+e^-$ colliders, EWPOs can also be measured at hadron
colliders. However, a challenge arises when trying to determine the
forward-backward asymmetry at the LHC from the so-called \emph{Drell-Yan}
process $pp \to \ell^+\ell^-$ ($\ell=e,\mu$)\footnote{The achievable precision
for $\ell=\tau$ is strongly reduced since hadronic tau decays suffer from large
QCD backgrounds.}, since the initial state ($pp$) is symmetric and thus there
is no obvious distinction between the forward and backward directions.
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{3em}}c@{\hspace{3em}}c}
\epsfig{figure=ppff.eps, width=1.6in, bb=210 600 390 720} &
\raisebox{0.1in}{\epsfig{figure=ppff_lab.eps, width=1.5in, bb=210 620 366 712}} &
\epsfig{figure=ppff_cm.eps, width=1.5in, bb=210 586 366 706} \\
(a) & (b) lab frame & (c) CoM frame
\end{tabular}
\caption{Drell-Yan process at LHC: (a) Leading Feynman diagram; (b) kinematics
in the lab frame, and (c) in the center-of-mass frame. The direction of the
boost from the center-of-mass to the lab frame is taken as the forward direction
to define $A_{\rm FB}$.}
\label{afbpp}
\end{figure}
However, the leading partonic process consists of an asymmetric quark-antiquark
pair, see Fig.~\ref{afbpp}~(a). On average, the quark momentum is expected to be
larger than the antiquark momentum, since the quark may be a valence parton of the proton,
whereas the antiquark necessarily stems from the sea parton distribution.
Therefore, the final-state $\ell^+\ell^-$ will typically be boosted in the
direction of the incoming quark, Fig.~\ref{afbpp}~(b). To evaluate $A_{\rm FB}$,
the event must be transformed to the center-of-mass frame, but one can use the
boost direction of the event in the lab frame to define the forward direction
for the asymmetry, Fig.~\ref{afbpp}~(c).
Given the large cross-section for $Z$-boson production at the LHC, there is the
potential to perform high-precision measurements of $A_{\rm FB}$ at the ATLAS
and CMS experiments. Nevertheless, the achievable precision is limited by systematic
effects:
\begin{itemize}
\item The overall boost direction of the event is not a perfect proxy for the
direction of the incident quark. To evaluate how often the forward and backward
hemispheres get incorrectly assigned, precise parton distribution functions
(PDFs) are needed. Thus the measurement precision for $A_{\rm FB}$ is limited by
the PDF errors.
\item Drell-Yan production receives large QCD corrections from gluon exchange
among the initial-state $q\bar{q}$ system. Recently, the NNNLO corrections have
been computed \cite{Duhr:2020seh}, but the error from unknown higher-order QCD
contributions is still not negligible.
\end{itemize}
\bigskip\noindent
Let us conclude this section by highlighting some examples of EWPO measurements.
The best measurement of the total $Z$ width has been obtained at LEP
\cite{ALEPH:2005ab}:
\begin{align}
\Gamma_Z &= 2495.5 \pm 2.3 \,\text{MeV} && \text{(LEP)}
\end{align}
For the leptonic effective weak mixing angle, a number of different
measurements of left-right and forward-backward asymmetries at lepton and hadron
colliders are similarly competitive
\cite{ALEPH:2005ab,Aaltonen:2018dxj,ATLAS-CONF-2018-037}:
\begin{align}
\seff{\ell} =\; &0.23098 \pm 0.00026 && (A_{\rm LR} \text{ @ SLD}) \notag \\
&0.23221 \pm 0.00029 && (A_{\rm FB}^b \text{ @ LEP}) \notag \\
&0.23148 \pm 0.00033 && (A_{\rm FB}^{e,\mu} \text{ @ TeVatron}) \notag \\
&0.23140 \pm 0.00036 && (A_{\rm FB}^{e,\mu} \text{ @ ATLAS})
\end{align}
\subsection{W-boson mass}
The $W$-boson mass is typically not considered an input parameter, since it can
be computed from the Fermi constant $G_\mu$, using eq.~\eqref{GmuSM}. Together
with $g=e/\mathswitch {s_{\scrs\mathswitchr W}} = e/\sqrt{1-M_W^2/M_Z^2}$, this equation can be solved for
\begin{align}
M_W^2 = M_Z^2\biggl[\frac{1}{2} + \sqrt{\frac{1}{4} -
\frac{\alpha\pi}{\sqrt{2}G_\mu M_Z^2}(1+\Delta r)}\biggr] \label{MWfromGmu}
\end{align}
$\Delta r$ in general depends on all parameters in the SM, including $M_W$, so
that \eqref{MWfromGmu} needs to be solved recursively.
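The recursion converges very quickly; a minimal Python sketch of
\eqref{MWfromGmu}, treating $\Delta r$ as an illustrative constant
$\approx 0.038$ (in a real calculation it is a loop function of $M_W$, $m_t$,
$M_H$, etc., re-evaluated at each step):

```python
import math

alpha = 1.0 / 137.036      # fine structure constant (Thomson limit)
G_mu  = 1.1663787e-5       # Fermi constant, GeV^-2
M_Z   = 91.1876            # GeV

def delta_r(M_W):
    # Placeholder: the full SM loop result would depend on M_W here
    return 0.038

M_W = 80.0  # starting guess, GeV
for _ in range(20):
    A = math.pi * alpha / (math.sqrt(2.0) * G_mu * M_Z**2)
    M_W = M_Z * math.sqrt(0.5 + math.sqrt(0.25 - A * (1.0 + delta_r(M_W))))

print(f"M_W = {M_W:.3f} GeV")
```

With this illustrative $\Delta r$, the iteration lands close to the measured
value of about $80.37$~GeV.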
\medskip\noindent
This prediction of $M_W$ can be compared to direct measurements. Currently, the
most precise determination of the $W$ mass is from hadron collider experiments,
using the process $pp \to \ell^\pm \stackrel{\!\!\text{\tiny (}-\text{\tiny
)}\!\!}\nu_\ell$, which proceeds through an s-channel $W$-boson (at tree-level).
The $W$-boson mass thus corresponds to a peak in the invariant mass distribution
of the final state lepton-neutrino system:
\begin{align}
m_{\rm inv} = \sqrt{(p_\ell+p_\nu)^2} \approx \sqrt{2|\vec{p}_\ell||\vec{p}_\nu|
- 2\vec{p}_\ell\cdot\vec{p}_\nu}
\end{align}
where in the last step we have neglected the masses of the lepton and neutrino.
The transverse component (perpendicular to the beam axis) of $\vec{p}_\nu$ can
be reconstructed by using momentum conservation, $\vec{p}_{\nu,\rm T} =
-\vec{p}_{\ell,\rm T}-\vec{p}_{X,\rm T}$, where $X$ are any jets or other
particles stemming from the proton remnants. However, since only a fraction of
the momenta of the incoming protons is transferred to the $W$ boson, the total
longitudinal momentum of the event is unknown, and thus one cannot reconstruct
$\vec{p}_{\nu,\rm L}$.
Instead of the invariant mass distribution, one can utilize the \emph{transverse
mass}
\begin{align}
m_{\rm T} \equiv \sqrt{2|\vec{p}_{\ell,\rm T}||\vec{p}_{\nu,\rm T}|
- 2\vec{p}_{\ell,\rm T}\cdot\vec{p}_{\nu,\rm T}}
\end{align}
One can straightforwardly show that $m_{\rm T} \leq m_{\rm inv}$. When
neglecting the $W$ width and assuming a perfect detector, $M_W$ thus corresponds
to the endpoint of the $m_T$ distribution. In reality, finite width effects and
detector smearing lead to a washed-out endpoint \cite{Smith:1983aa}, see
Fig.~\ref{mtdist}. Therefore, careful modeling of
detector effects and photon radiation is required for a precision measurement of
$M_W$ from the $m_T$ distribution.
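The inequality $m_{\rm T} \leq m_{\rm inv}$ is easy to verify numerically; a
minimal Python sketch with an arbitrary example momentum configuration
(massless lepton and neutrino):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Example three-momenta in GeV (illustrative values); z is the beam axis
p_lep = (30.0, 10.0, 20.0)
p_nu  = (-25.0, -5.0, 40.0)

# Invariant mass in the massless approximation
m_inv = math.sqrt(2.0 * norm(p_lep) * norm(p_nu) - 2.0 * dot(p_lep, p_nu))

# Transverse mass uses only the components perpendicular to the beam axis
pT_lep, pT_nu = p_lep[:2], p_nu[:2]
m_T = math.sqrt(2.0 * norm(pT_lep) * norm(pT_nu) - 2.0 * dot(pT_lep, pT_nu))

print(f"m_inv = {m_inv:.2f} GeV, m_T = {m_T:.2f} GeV")
```

Projecting out the longitudinal components can only reduce the combination
under the square root, which is the geometric origin of the endpoint at
$m_{\rm T} = m_{\rm inv}$.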
\begin{figure}
\centering
\epsfig{figure=mtrans.eps, width=2.5in, bb=160 485 420 708}
\caption{Sketch of the transverse mass distribution for $W$-boson production at
hadron colliders, for a perfect detector and without width effects (dashed),
and including detector smearing and width effects (solid).}
\label{mtdist}
\end{figure}
\medskip\noindent
At lepton colliders, the $W$-mass can be measured from the invariant mass
distribution in the processes $e^+e^- \to W^+W^- \to qqqq$ and $e^+e^- \to
W^+W^- \to qq\ell\nu$. The reconstruction of both the transverse and
longitudinal components the neutrino momentum is possible here, since there is
no ambiguity due to the momentum carried away by the proton remnants.
Alternatively, one may measure $M_W$ from a threshold scan, by measuring the
cross-section for $e^+e^- \to W^+W^-$ at a few center-of-mass energies near
$2M_W$. Since the cross-section near threshold is small, this approach requires
a large amount of luminosity. With the available statistics at LEP, the
achievable precision was rather low \cite{Schael:2013ita}.
For the theoretical description of the cross-section as a function of $\sqrt{s}$
near threshold, one needs to compute the full process $e^+e^- \to
qqqq/qq\ell\nu$, since contributions where the $W$-bosons are off-shell are
important for $\sqrt{s} \lesssim 2M_W$. In fact, in this regime diagrams without
a $W^+W^-$ pair also contribute significantly. The currently most accurate
calculation includes full NLO corrections to the $e^+e^- \to 4f$ process
\cite{Denner:2005fg}.
The most precise available $M_W$ measurements are
\cite{Schael:2013ita,Aaltonen:2013iut,Aaboud:2017svj}
\begin{align}
M_W =\; &80.376\pm0.033\;\text{GeV} && (m_{\rm inv} \text{ @ LEP}) \notag \\
&80.387\pm0.016\;\text{GeV} && (m_{\rm T} \text{ @ TeVatron}) \notag \\
&80.370\pm 0.019\;\text{GeV} && (m_{\rm T} \text{ @ ATLAS})
\end{align}
\subsection{Future \boldmath $e^+e^-$ colliders}
The experimental precision of EWPOs could be significantly improved at an
$e^+e^-$ collider with much larger luminosity than LEP or SLD. Such machines are
proposed primarily for the purpose of detailed measurements of Higgs boson
properties, but they could also perform electroweak measurements at $\sqrt{s}
\sim M_Z$ and $\sqrt{s} \sim 2M_W$. The FCC-ee \cite{Abada:2019zxq} and CEPC
\cite{CEPCStudyGroup:2018ghi} concepts are based on circular colliders, whereas
the ILC concept \cite{Baer:2013cma,Bambade:2019fyw} envisions a linear setup.
The baseline run scenario for ILC does not include any runs on the $Z$ pole and
near the $WW$ threshold, but it can study electroweak physics at a higher energy
$\sqrt{s} \sim 250$~GeV by using the \emph{radiative return} method, where
radiation of initial-state photons results in a lower effective center-of-mass
energy [see eq.~\eqref{isr}].
\medskip\noindent
The following table illustrates the expected improved precision for a few
selected EWPOs:
\begin{center}
\begin{tabular}{|l|cccc|}
\hline
& today & FCC-ee & CEPC & ILC \\
\hline
$\Gamma_Z$ [MeV] & 2.3 & 0.1 & 0.5 & -- \\
$\seff{\ell}$ [$10^{-5}$] & 13 & 0.5 & $<1$ & $\sim 2$ \\
$M_W$ [MeV] & 12 & $\lesssim 1$ & $\sim 1$ & 2.4 \\
\hline
\end{tabular}
\end{center}
\subsection{Low-Energy EWPOs}
\label{lewpo}
Electroweak physics can also be studied with precision experiments performed at
lower energies, where the $W$ and $Z$ bosons appear only as virtual particles.
An overview of a variety of such experiments can be found in
Ref.~\cite{Erler:2013xha}. In the following, two types of such electroweak
precision tests will be briefly described.
\bigskip\noindent
\begin{minipage}{.72\textwidth}
\paragraph{Polarized electron scattering:} A beam of left- or right-handed $e^-$
is scattered off target particles $X$, where $X$ could be electrons, protons,
deuterons, or heavier nuclei. If $X$ is a hadronic or nuclear target, it is
advantageous to choose the kinematics such that the momentum transfer is small,
$q^2 \ll m_p^2$, so that the proton (or nucleus) can be regarded as
approximately pointlike.
\end{minipage}
\hfill\raisebox{-3em}{%
\epsfig{figure=eeXX.eps, width=.23\textwidth, bb=200 610 384 718, clip=}}
\medskip
While the cross-section overall is strongly dominated by t-channel photon
exchange, one can probe electroweak physics through the left-right asymmetry
\begin{align}
A_{\rm LR} &= \frac{\sigma_{\rm L}-\sigma_{\rm R}}{\sigma_{\rm L}+\sigma_{\rm R}}
\intertext{For electron-proton scattering in the limit $q^2 \ll m_p^2$, this asymmetry is
given at tree-level by}
A_{\rm LR}^{ep} &\approx \frac{G_\mu(-q^2)}{4\sqrt{2}\pi \alpha}(1-4\mathswitch {s_{\scrs\mathswitchr W}}^2)
\end{align}
Thus a measurement of $A_{\rm LR}^{ep}$ can be used to determine the weak mixing
angle.
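The expected asymmetry is tiny; a minimal Python sketch of the tree-level
formula, with an illustrative momentum transfer $-q^2 = 0.025\,\text{GeV}^2$
and the on-shell value of $\mathswitch {s_{\scrs\mathswitchr W}}^2$:

```python
import math

G_mu  = 1.1663787e-5                  # Fermi constant, GeV^-2
alpha = 1.0 / 137.036                 # fine structure constant
s2w   = 1.0 - (80.37 / 91.1876)**2    # on-shell weak mixing angle

Q2 = 0.025  # -q^2 in GeV^2 (illustrative choice)
A_LR = G_mu * Q2 / (4.0 * math.sqrt(2.0) * math.pi * alpha) * (1.0 - 4.0 * s2w)

print(f"A_LR^ep ~ {A_LR:.2e}")
```

The result is a few parts in $10^7$, illustrating why these parity-violation
experiments require extremely high statistics and tight control of
helicity-correlated systematics.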
Higher-order radiative corrections can be accounted for by replacing the
on-shell weak mixing angle $\mathswitch {s_{\scrs\mathswitchr W}}$ with the effective weak mixing angle $\seff{}$, and by
including additional correction factors:
\begin{align}
1-4\mathswitch {s_{\scrs\mathswitchr W}}^2 \quad\to\quad 1-4\kappa \seff{\ell} + \Delta Q
\end{align}
\begin{minipage}{.72\textwidth}
$\kappa$ includes large corrections from the $\gamma$--$Z$ mixing self-energy,
which are enhanced by large logarithms:
\begin{align}
\kappa \approx 1 - \frac{\mathswitch {c_{\scrs\mathswitchr W}}}{12\pi^2\mathswitch {s_{\scrs\mathswitchr W}}}\sum_f v_f (eQ_f) \ln
\frac{m_f^2}{M_Z^2}
\end{align}
Similar to the logarithms in the charge renormalization, eq.~\eqref{sigmap},
these are ill-defined for light quarks, $f=u,d,s$.
\end{minipage}
\hfill\raisebox{-3em}{%
\epsfig{figure=eeXXse.eps, width=.23\textwidth, bb=200 610 384 718, clip=}}
Similar to what is done for $\Delta\alpha$, one may try to extract the hadronic
corrections to $\kappa$ from data for $R(s) = \frac{\sigma[e^+e^- \to
\text{hadrons}]}{\sigma[e^+e^- \to \mu^+\mu^-]}$ using a dispersion integral.
However, this requires additional assumptions in this case, such as SU(3)$_{\rm
u,d,s}$ flavor symmetry
\cite{Wetzel:1981vt,Marciano:1983wwa,Jegerlehner:1985gq,Jegerlehner:2017zsb},
because the $Z$ couplings have a different dependence on the fermion flavor than
the $\gamma$ couplings.
Alternatively, the leading hadronic effects can be absorbed into a running \mathswitch{\overline{\mathswitchr{MS}}}\
weak mixing angle \cite{Erler:2004in,Erler:2017knj},
\begin{align}
\kappa\seff{\ell} \approx \sin^2 \overline{\theta}(\mu^2=-q^2)
\equiv
\frac{\overline{g}'^2(\mu)}{\overline{g}^2(\mu)+\overline{g}'^2(\mu)}
\bigg|_{\mu^2=-q^2}
\end{align}
where the bar above an expression denotes that this quantity is defined in the
\mathswitch{\overline{\mathswitchr{MS}}}\ scheme.
The following table lists some of the current and near-future electron-electron
and electron-proton scattering experiments, together with their precision in
measuring the weak mixing angle
\cite{Anthony:2005pm,Androic:2018kni,Benesch:2014bas,Becker:2018ggl}:
\begin{center}
\begin{tabular}{|l|cc|}
\hline
& $ee$ & $ep$ \\
\hline
current & E158 (0.5\%) & Qweak (0.5\%) \\
future & MOLLER (0.1\%) & P2 (0.1\%) \\
\hline
\end{tabular}
\end{center}
The anticipated precision of the future MOLLER and P2 experiments will be
comparable to the combined $Z$-pole analysis from LEP/SLC, but in an entirely
different setup at low energies, with different sources of experimental and
theoretical systematic errors.
A more
comprehensive exposition of these types of experiments can be found $e.\,g.$ in
Ref.~\cite{Kumar:2013yoa}.
\paragraph{Muon anomalous magnetic moment:} Charged fermions have a magnetic
moment with the magnitude $\frac{eQ_f}{2m_f}g_f$, where $g_f$ is called the
Land\'e factor. At tree-level ($e.\,g.$ from the Dirac equation) $g_f=2$.
However, the value of $g_f$ gets modified through radiative corrections,
generating an anomalous magnetic moment $a_f = (g_f-2)/2 \neq 0$.
In the following we focus on the anomalous magnetic moment of leptons
\cite{Jegerlehner:2009ry,Jegerlehner:2018zrj}. The main
contribution to $a_\ell$ stems from QED, which has been computed up to ${\cal
O}(\alpha^5)$, see eq.~\eqref{aeth}.
Electroweak and hadronic corrections are suppressed by powers of
$m_\ell^2/M_W^2$ and $m_\ell^2/\Lambda_{\rm QCD}^2$, respectively.
Thus they are negligible for the electron magnetic moment, but they become
important for $\ell = \mu$. The hadronic corrections are relatively large and are
typically extracted from data for $R(s) = \frac{\sigma[e^+e^- \to
\text{hadrons}]}{\sigma[e^+e^- \to \mu^+\mu^-]}$ \cite{Jegerlehner:2017lbd,Davier:2019can,Keshavarzi:2019abf}. The experimental error of this
data is the dominant uncertainty in the theoretical prediction of $a_\mu$.
Efforts to compute the hadronic corrections using lattice QCD have made a lot
of progress recently \cite{Meyer:2018til,Borsanyi:2020mff}.
The electroweak effects are rather small,
\begin{align}
a_\mu^{\rm EW} = \frac{g^2}{16\pi^2}\,\frac{m_\mu^2}{M_W^2} \times {\cal O}(1)
\sim 1.5\times 10^{-9} \label{amuEW}
\end{align}
but need to be taken into account given the experimental precision for the
measurement of $a_\mu$. The most precise experimental value is from the g--2
experiment at BNL \cite{Bennett:2004pv}, which yielded
\begin{align}
a_\mu^{\rm exp} = (11\,659\,208.0 \pm 6.3) \times 10^{-10}
\end{align}
which differs from the SM prediction \cite{Zyla:2020zbs}
\begin{align}
a_\mu^{\rm SM} = (11\,659\,184.6 \pm 4.7) \times 10^{-10}
\end{align}
by more than 3 standard deviations. An ongoing experiment at FNAL aims to
improve the precision of $a_\mu^{\rm exp}$ by a factor 4
\cite{Grange:2015fou}.
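Both the size of \eqref{amuEW} and the quoted tension are quick numerical
checks; a minimal Python sketch (on-shell $\mathswitch {s_{\scrs\mathswitchr W}}^2$, input values as given above):

```python
import math

# Electroweak prefactor in a_mu^EW, eq. (amuEW)
alpha = 1.0 / 137.036
s2w   = 1.0 - (80.37 / 91.1876)**2
g2    = 4.0 * math.pi * alpha / s2w   # g^2 = e^2 / s_W^2
m_mu, M_W = 0.10566, 80.37            # GeV

prefactor = g2 / (16.0 * math.pi**2) * m_mu**2 / M_W**2
print(f"EW prefactor ~ {prefactor:.1e}")  # O(10^-9)

# Tension between the BNL measurement and the SM prediction
a_exp, err_exp = 11659208.0e-10, 6.3e-10
a_sm,  err_sm  = 11659184.6e-10, 4.7e-10
sigma = (a_exp - a_sm) / math.hypot(err_exp, err_sm)
print(f"tension: {sigma:.1f} standard deviations")
```

The prefactor alone is a few times $10^{-9}$; the ${\cal O}(1)$ coefficient
brings the electroweak contribution to the quoted $1.5\times 10^{-9}$.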
\paragraph{Exercise:} The electroweak corrections in \eqref{amuEW} are
proportional to $m_\mu^2$, and most corrections from BSM physics would have the
same proportionality. One power of $m_\mu$ stems from the fact that the magnetic
moment coupling ${\cal L} \supset \text{const.}\times \overline{\psi} \sigma_{\mu\nu} F^{\mu\nu}
\psi$ involves a derivative inside the field strength tensor and thus is
proportional to the overall energy scale of the process. Where does the other
power of $m_\mu$ come from? Can it be replaced by something else in some new
physics model?
\label{alm}
\section{Tests of the Standard Model and Physics Beyond the Standard Model}
\label{bsm}
\subsection{Standard Model predictions}
The consistency and accuracy of the SM as a description of electroweak physics
can be tested by comparing experimental data for EWPOs with theoretical
predictions, where the latter are computed within the SM as a function of a
set of input parameters. All the EWPOs discussed in the previous section can be
used for this purpose: $\Gamma_Z$, $\sigma_f^0$, $\seff{f}$, $M_W$ (predicted
from $G_\mu$), $a_\mu$, etc.
Owing to the precision of the available experimental data, higher-order
corrections need to be included in this comparison. For all EWPOs listed above,
complete two-loop corrections are known, as well as some partial higher-order
contributions, in particular from QED and QCD effects (see
Refs.~\cite{Freitas:2016sty,Dubovyk:2018rlg,Dubovyk:2019szj,Awramik:2003rn,Gnendiger:2013pva,Czarnecki:2002nt}
and references therein). While the one-loop corrections can be evaluated
analytically, with logarithms and dilogarithms appearing in the final result
\cite{Denner:1991kt}, this is in general not the case at the two-loop level and beyond.
Instead one needs to resort to either approximations or numerical methods. The
numerical approaches can be divided into two groups:
\begin{itemize}
\item General techniques that can in principle be applied to problems with any
number of loops, external legs and types of particles. The best-known approach
in this category is sector decomposition \cite{Binoth:2000ps}, which allows one
to extract all UV and IR singularities with an algorithm that can be implemented
in computer programs and then integrate the coefficients of the singularities
and the finite remainder numerically
\cite{Bogner:2007cr,Borowka:2015mxa,Borowka:2017idc,Smirnov:2015mct}. Another
approach, which is not fully general but works for many two- and three-loop
applications, is based on Mellin-Barnes representations
\cite{Czakon:2005rk,Dubovyk:2016ocz}. The disadvantage of these techniques is
their relatively large demand for computing resources to evaluate
multi-dimensional numerical integrals that converge slowly.
\item A range of numerical methods have been tailored for a particular type of
problem, $i.\,e.$ self-energy or vertex integrals of a certain loop order. While
limited in scope, these approaches tend to produce numerical integrals of lower
dimensionality and more favorable convergence behavior than the general
techniques. A review can be found in Ref.~\cite{Freitas:2016sty}.
\end{itemize}
It is instructive to look at some of the leading effects of the radiative
corrections. For this purpose, let us consider the corrections to
the Fermi constant, see eq.~\eqref{GmuSM}, and to the effective weak mixing
angle, see eq.~\eqref{sweff}. They may be written as
\begin{align}
&\frac{G_\mu}{\sqrt{2}} = \frac{g^2}{8M_W^2}(1+\Delta r),
& \Delta r &= \Delta\alpha - \frac{\mathswitch {c_{\scrs\mathswitchr W}}^2}{\mathswitch {s_{\scrs\mathswitchr W}}^2}\Delta\rho + \Delta r_{\rm rem},
\\
&\seff{f} = \mathswitch {s_{\scrs\mathswitchr W}}^2(1+\Delta\kappa),
& \Delta\kappa &= \frac{\mathswitch {c_{\scrs\mathswitchr W}}^2}{\mathswitch {s_{\scrs\mathswitchr W}}^2}\Delta\rho + \Delta \kappa_{\rm rem}
\end{align}
Here $\Delta r$ and $\Delta\kappa$ include all higher-order corrections. Two
leading contributions can be identified:
The shift in the fine structure constant, $\Delta\alpha$, has already been
discussed on page~\pageref{deltaalpha}. It receives numerically
comparable contributions from both leptonic and hadronic loops, which add up to
\begin{align}
\Delta\alpha = \Delta\alpha_{\rm lept} + \Delta\alpha_{\rm had} \approx 6\%
\end{align}
The numerical enhancement stems from the logarithmic dependence on light fermion
masses, see eq.~\eqref{sigmap}.
On the other hand, $\Delta\rho$ contains contributions that are proportional to
the Yukawa couplings of fermions inside the loop, where the top Yukawa $y_t
\approx 1$ dominates, whereas all other fermions are negligible:
\begin{align}
\Delta\rho = \frac{3y_t^2}{32\pi^2} +
\underbrace{...}_{\text{fermions other than the top}\hspace{-8em}}\hspace{4em}
\end{align}
It appears in $\Delta r$ and $\Delta\kappa$ in the combination
$\frac{\mathswitch {c_{\scrs\mathswitchr W}}^2}{\mathswitch {s_{\scrs\mathswitchr W}}^2}\Delta\rho \approx 3\%$. The remaining corrections are
numerically smaller: $\Delta r_{\rm rem}$, $\Delta\kappa_{\rm rem} \lesssim
1\%$.
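To make the size of these leading effects concrete, the following sketch evaluates $\Delta\rho$ and its contribution to $\Delta r$, $\Delta\kappa$; the input values are assumed, roughly PDG-like numbers used only for illustration.

```python
import math

# Illustrative inputs (assumed, roughly PDG-like values):
mt, v = 172.9, 246.22        # top mass and Higgs vev in GeV
sw2 = 0.2312                 # sin^2 of the weak mixing angle (assumed)
cw2 = 1.0 - sw2

yt = math.sqrt(2.0) * mt / v                # top Yukawa, close to 1
delta_rho = 3.0 * yt**2 / (32.0 * math.pi**2)

print(f"y_t = {yt:.3f}")
print(f"Delta rho = {delta_rho:.4f}")                      # ~0.009
print(f"(cw^2/sw^2) * Delta rho = {cw2/sw2*delta_rho:.3f}")  # ~0.03, i.e. ~3%
```

This reproduces the quoted size $\frac{c_W^2}{s_W^2}\Delta\rho \approx 3\%$.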
When comparing $G_\mu$, $M_W$ and $\seff{f}$ to data, the dominant effect of
$\Delta\rho$ leads to a relatively precise indirect determination of the top
mass, $m_t = y_t v/\sqrt{2} = 176.3 \pm 1.9\,$GeV, which agrees reasonably well
with the direct measurement from LHC and Tevatron, $m_t^{\rm exp} = 172.9 \pm
0.3\,$GeV (see section 10 of Ref.~\cite{Zyla:2020zbs}).
On the other hand, the indirect determination of $M_H$ from electroweak
precision data is much less accurate \cite{Zyla:2020zbs}, since $M_H$ only
appears in the small terms $\Delta r_{\rm rem}$, $\Delta\kappa_{\rm rem}$, and
the functional dependence on $M_H$ is only logarithmic.
The numerically large quadratic dependence on $y_t$ in $\Delta\rho$ can be
explained through the breaking of \emph{custodial symmetry}. This is a symmetry
of the Higgs potential, which can be most easily seen by re-writing the Higgs
field as a matrix. Since the complex Higgs field $\phi = \begin{pmatrix} \phi^+
\\ \phi^0 \end{pmatrix}$ has four physical degrees of freedom, one can arrange
these four components into a matrix,
\begin{align}
\Omega = \begin{pmatrix} \phi^{0*} & \phi^+ \\ \phi^- & \phi^0 \end{pmatrix}
\end{align}
where $\phi^{0*}$ and $\phi^-$ are the conjugate fields of $\phi^0$ and
$\phi^+$, respectively. Then the scalar potential becomes
\begin{align}
V = -\mu^2|\phi|^2 + \lambda|\phi|^4 =
- \frac{\mu^2}{2}\,\text{Tr}\{\Omega^\dagger\Omega\}
+ \frac{\lambda}{4}\bigl(\text{Tr}\{\Omega^\dagger\Omega\}\bigr)^2
\end{align}
In this form, one can see that $V$ is manifestly invariant under transformations
\begin{align}
\Omega \to L\Omega R^\dagger,
\qquad L \in \text{SU(2)}_L, \quad R \in \text{SU(2)}_R \label{lrsymm}
\end{align}
where $L,R$ are unitary SU(2) matrices. Since $L$ and $R$ can be independent of
each other, they are part of two separate symmetry groups, labeled
SU(2)$_{L,R}$. SU(2)$_L$ is the usual weak symmetry group.
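The equality of the two forms of the potential rests on the identity $\text{Tr}\{\Omega^\dagger\Omega\} = 2|\phi|^2$. A quick numerical cross-check of this rewriting (with arbitrary test values for the fields and couplings):

```python
import random

random.seed(1)
for _ in range(100):
    phi0 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    phip = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    # Omega = [[phi0*, phi+], [phi-, phi0]], with phi- = (phi+)*
    Omega = [[phi0.conjugate(), phip],
             [phip.conjugate(), phi0]]
    # Tr{Omega^dagger Omega} is the sum of |entries|^2 (Frobenius norm squared)
    tr = sum(abs(Omega[i][j])**2 for i in range(2) for j in range(2))
    phi_sq = abs(phi0)**2 + abs(phip)**2          # |phi|^2
    mu2, lam = 0.7, 0.13                          # arbitrary test couplings
    V_doublet = -mu2*phi_sq + lam*phi_sq**2
    V_matrix = -mu2/2*tr + lam/4*tr**2
    assert abs(tr - 2*phi_sq) < 1e-12
    assert abs(V_matrix - V_doublet) < 1e-12
print("Tr{Omega^+ Omega} = 2|phi|^2 verified numerically")
```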
When $\phi$ obtains a vev, $\langle \Omega \rangle = \begin{pmatrix} v & 0 \\ 0 &
v \end{pmatrix}$, the symmetry \eqref{lrsymm} will be broken, but $\langle
\Omega \rangle$ is still invariant under a symmetry sub-group where $L=R\equiv V$:
\begin{align}
\langle\Omega\rangle \to V\langle\Omega\rangle V^\dagger,
\qquad V \in \text{SU(2)}_{\rm diag}
\end{align}
$\text{SU(2)}_{\rm diag}$ is called the ``custodial symmetry'' group. The SM
Higgs potential, Higgs vev, and weak and QCD gauge interactions are invariant
under it, but not the Yukawa couplings. As an example, let us consider the
Yukawa couplings of the top and bottom quarks,
\begin{align}
{\cal L}_{\rm Yuk,tb} = -y_t \overline{Q}_{3L} \tilde{\phi}\, t_R - y_b
\overline{Q}_{3L} \phi\, b_R + \text{h.c.}, \qquad \overline{Q}_{3L} = \begin{pmatrix} t_L \\
b_L \end{pmatrix}
\end{align}
Here $\tilde{\phi} = C\phi^*$, and $C = i\sigma^2$ is the charge conjugation matrix.
If $y_t$ and $y_b$ were equal, $y_t=y_b \equiv y$, this could be re-written as
\begin{align}
{\cal L}_{\rm Yuk,tb} = -y\, \overline{Q}_{3L} \Omega\, Q_{3R} + \text{h.c.},
\qquad \overline{Q}_{3R} = \begin{pmatrix} t_R \\ b_R \end{pmatrix}
\end{align}
which would be invariant under $\text{SU(2)}_{\rm diag}$ if the quark doublets
transform as $Q_{3L,R} \to VQ_{3L,R}$.
However, the fact that $y_t \neq y_b$ leads to breaking of $\text{SU(2)}_{\rm
diag}$. Any breaking effect must be proportional to some power of $(y_t - y_b)^2
\approx y_t^2$,
so that it vanishes in the limit where $\text{SU(2)}_{\rm diag}$ is restored.
This is the origin of the effect in $\Delta\rho$ proportional to $y_t^2$.
Note that $\text{SU(2)}_{\rm diag}$ is also broken by the hypercharge gauge
coupling, but the numerical impact of that in EWPOs is smaller.
\subsection{Constraints on Physics Beyond the Standard Model}
A global fit to all relevant EWPOs yields good agreement within the SM
\cite{Zyla:2020zbs}, and there is no obvious hint for BSM physics, except for
the discrepancy in $a_\mu$ (see section~\ref{lewpo}). Thus the data can be
used to set constraints on new physics models. Based on the discussion from the
previous subsection, one can already conclude that models with new sources of
custodial symmetry breaking will be severely bounded by electroweak precision
data.
If one assumes that the new degrees of freedom beyond the SM are heavy compared
to the electroweak scale, BSM effects in EWPOs can be parametrized in a
model-independent way by adding higher-dimensional operators to the theory. This
framework is often referred to as SMEFT (SM Effective Field Theory).
The leading contribution for EWPOs stems from dimension-6 operators,
\begin{align}
{\cal L} = {\cal L}_{\rm SM} + \sum_i \frac{C_i}{\Lambda^2}{\cal O}_i,
\end{align}
where $\Lambda \gg v$ is the mass scale of the BSM physics (typically the
smallest BSM mass if there is a more complex particle spectrum). The complete
list of dimension-6 operators ${\cal O}_i$ can be found $e.\,g.$ in
Ref.~\cite{Grzadkowski:2010es}. The values of the Wilson coefficients depend on
the underlying BSM physics, and they can be computed in terms of the parameters
of a specific hypothetical model (a procedure called ``matching'').
By comparing data to predictions for EWPOs within SMEFT, constraints on the
Wilson coefficients of a subset of operators can be derived. The subset that
EWPOs are sensitive to includes operators that modify gauge-boson--fermion
couplings, Higgs-boson--gauge-boson interactions, and certain four-fermion
interactions. In principle, these constraints can be derived in a
model-independent fashion, but since there are more operators than independent
observables, certain assumptions are typically imposed. For example, one may
assume \emph{flavor universality}, which means that the Wilson coefficients for
operators involving fermions are the same for all three fermion generations.
A more detailed description of SMEFT and its applications can be found in the
lectures on ``Standard Model Effective Field Theories'' in this school
\cite{martinlect}.
\bigskip\noindent
In the following, we will instead focus on BSM models where the scale of new
physics is lower, $\Lambda \lesssim v$, and thus the SMEFT is not applicable. In
the spirit of the school's theme, ``The Obscure Universe: Neutrinos and Other
Dark Matters,'' the focus is on examples that relate to neutrino and dark
matter physics.
\subsection{Neutrino Counting}
Decays of the $Z$ boson to neutrinos are invisible to collider detectors.
However, the existence of this decay channel can be probed by determining the
total width $\Gamma_Z$ from a fit to the Breit-Wigner lineshape and subtracting
the rates for all visible decay channels from it,
\begin{align}
\Gamma_Z = 3\,\Gamma_\ell + N_\nu \Gamma_\nu + \Gamma_{\rm had}
\end{align}
Here the masses of charged leptons and neutrinos have been neglected, so that
$\Gamma_e = \Gamma_\mu = \Gamma_\tau \equiv \Gamma_\ell$ and $\Gamma_{\nu_e} =
\Gamma_{\nu_\mu} = \Gamma_{\nu_\tau} \equiv \Gamma_\nu$. $N_\nu$ is the number
of neutrino species ($N_\nu=3$ in the SM).
$\Gamma_\ell$ and $\Gamma_\nu$ are not observables by themselves. However, they
can be related to observables as follows \cite{ALEPH:2005ab,Janot:2019oyi}:
\begin{align}
N_\nu &= \biggl[\biggl(\frac{12\pi}{M_Z^2}\,\frac{R_\ell}{\sigma^0_{\rm had}}
\biggr)^{1/2} - R_\ell - 3\biggr] \, \frac{\Gamma_\ell}{\Gamma_\nu}
\intertext{Here}
\sigma^0_{\rm had} &= \sigma_{e^+e^- \to \rm had}(s{=}M_Z^2)
= \frac{12\pi}{M_Z^2}\,\frac{\Gamma_\ell\Gamma_{\rm had}}{\Gamma_Z^2}, \\
R_\ell &= \frac{\Gamma_{\rm had}}{\Gamma_\ell}
= \frac{\sigma^0_{\rm had}}{\sigma^0_\ell}
\end{align}
can be determined from data, and ``had'' refers to all hadronic final states
($i.\,e.$ summing over all quarks $q \neq t$ in the partonic picture). On the
other hand, $\Gamma_\ell/\Gamma_\nu$ is computed in the SM, but the result is
correct also in a variety of models with extended neutrino sectors. Using
measurements from LEP, one finds \cite{Janot:2019oyi}
\begin{align}
N_\nu = 2.996 \pm 0.007
\end{align}
in agreement with SM expectations.
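As a numerical sketch, the formula above can be evaluated directly; the inputs below are assumed, roughly LEP/PDG-like values, and the only subtlety is the unit conversion between GeV$^{-2}$ and nb.

```python
import math

# Illustrative inputs (assumed, close to the LEP/PDG numbers):
MZ = 91.1876            # GeV
sigma_had = 41.481      # nb, peak hadronic cross section
R_ell = 20.767          # Gamma_had / Gamma_ell
Gl_over_Gnu = 83.98 / 167.18   # SM ratio of partial widths (computed, not measured)

GEV2_TO_NB = 0.3894e6   # conversion: 1 GeV^-2 = 0.3894 mb = 3.894e5 nb

prefactor = 12.0 * math.pi / MZ**2 * GEV2_TO_NB          # in nb
N_nu = (math.sqrt(prefactor * R_ell / sigma_had) - R_ell - 3.0) * Gl_over_Gnu
print(f"N_nu = {N_nu:.3f}")    # close to 3
```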
If one assumes that any BSM neutrino is part of an SU(2)$_L$ doublet together
with a new charged lepton ($i.\,e.$ a fourth lepton family),
an additional constraint follows from the
contribution of this lepton doublet to $\Delta\rho$:
\begin{align}
\Delta\rho_{\nu\ell4} = \frac{1}{32\pi^2} \biggl[\underbrace{y^2_{\ell 4} + y^2_{\nu 4}
- \frac{4y^2_{\ell 4} y^2_{\nu 4}}{y^2_{\ell 4} - y^2_{\nu 4}}
\ln \frac{y_{\ell 4}}{y_{\nu 4}}}_{\geq (y_{\ell 4}-y_{\nu 4})^2} \biggr]
\end{align}
Here $y_{\ell 4}$ and $y_{\nu 4}$ are the Yukawa couplings of the extra
charged lepton and extra neutrino, respectively.
The expression in square brackets can be shown to be bounded from below by
$(y_{\ell 4}-y_{\nu 4})^2$.
EWPO data puts a constraint on any new physics contributions to $\Delta\rho$,
leading to the bound (at 90\% confidence level) \cite{Zyla:2020zbs}
\begin{align}
\frac{v}{\sqrt{2}}\,|y_{\ell 4}-y_{\nu 4}| = |m_{\ell 4}-m_{\nu 4}| < 48\,\text{GeV} \label{lnudiff}
\end{align}
Extra charged leptons would be visible in particle detectors, of course.
Searches at LEP2 exclude the existence of any such particle with mass below 101
GeV \cite{Zyla:2020zbs}. Together with \eqref{lnudiff} this implies that a 4th
generation neutrino with $m_{\nu 4} < 50$~GeV is excluded.
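A rough numerical cross-check (not the actual fit) can be obtained by inverting the $\Delta\rho$ expression for the Yukawa splitting and translating it into a mass splitting via $m = yv/\sqrt{2}$. The allowed new-physics contribution $\Delta\rho \lesssim 2.4\times 10^{-4}$ used below is a hypothetical input, chosen here only to illustrate how a bound of this size arises:

```python
import math

v = 246.22           # Higgs vev in GeV
drho_max = 2.4e-4    # assumed allowed BSM contribution to Delta rho (illustrative)

# Delta rho >= (y_l4 - y_nu4)^2 / (32 pi^2), so invert for the Yukawa splitting:
dy_max = math.sqrt(32.0 * math.pi**2 * drho_max)
dm_max = v * dy_max / math.sqrt(2.0)       # mass splitting, m = y v / sqrt(2)
print(f"|y_l4 - y_nu4| < {dy_max:.3f}  ->  |m_l4 - m_nu4| < {dm_max:.0f} GeV")
```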
At the same time, studies of the $H \to \gamma\gamma$ rate forbid the existence
of a 4th lepton family where both the $\ell_4$ and $\nu_4$ are heavy
\cite{Lenz:2013iha}, so that the combination of electroweak precision and Higgs data
fully rules out the existence of a sequential 4th fermion generation.
\subsection{Sterile Neutrinos}
The bounds in the previous subsection do not apply to new neutral fermions that
are singlets under SU(2)$_L$, $i.\,e.$ that do not have (electro)weak interactions.
Such particles are called \emph{sterile neutrinos} or \emph{right-handed
neutrinos}, since they can form Yukawa couplings with the SM neutrinos.
Let us consider a model where two such sterile neutrinos are added, denoted
$N_R^1$ and $N_R^2$, with the interaction Lagrangian \cite{Antusch:2015mia}
\begin{align}
{\cal L} = {\cal L}_{\rm SM} + \sum_k i\overline{N}_R^k \slashed{\partial}
N_R^k
- \Bigl[ \sum_\alpha Y_{\nu\alpha} \overline{L}_{\alpha L}^{}
\tilde{\phi} N_R^1 - M\,\overline{N}_R^1 C N_R^2 + \text{h.c.}\Bigr ]
\end{align}
Here $L_{1L} = \begin{pmatrix} \nu_{eL} \\ e_L \end{pmatrix}$ etc.\ are the SM
lepton doublets and $C$ is again the charge conjugation matrix.
In the limit that $M$ is much larger than the observed light neutrino masses,
the mass eigenstates of the model are:
\begin{itemize}
\item A pseudo-Dirac sterile neutrino $N$ with mass $\approx M$. Here the term
``pseudo-Dirac'' is used for a pair of Majorana fields with nearly degenerate masses,
which behave like a single Dirac particle in some phenomenological contexts.
$N$ is mostly composed of $N_R^{1,2}$, with a small admixture of left-handed SM
neutrinos $\nu_{\alpha L}$, so that it has strongly suppressed couplings to other SM particles
and could have escaped detection until now.
\item Active Majorana neutrinos $\nu'_{e,\mu,\tau}$, which are mostly SM-like,
with a small admixture of $N_R^{1,2}$, where the mixing angle is approximately given
by $\theta_\alpha \approx \frac{Y_{\nu\alpha}v}{\sqrt{2}M}$.
\end{itemize}
Assuming that $M > v$, the main phenomenological effects of this model, compared
to the SM, are reduced couplings of the active neutrinos to gauge bosons.
\begin{itemize}
\item In muon decay, $\mu \to e \nu'_\mu \bar{\nu}'_e$, the relationship between
Fermi constant and SM parameters is modified according to
\begin{align}
\frac{G_\mu}{\sqrt{2}} = \frac{g^2}{8M_W^2}(1+\Delta r)(1-\theta_e^2)(1-\theta_\mu^2)
\end{align}
\item The invisible $Z$ decay rate is reduced,
\begin{align}
\Gamma_{Z \to \rm inv} = \Gamma_\nu^{\rm SM}\Bigl(N_\nu -
\sum_{\alpha,\beta}\theta_\alpha\theta_\beta\Bigr) \label{gaminvnu}
\end{align}
\end{itemize}
In the above formulae, $\sin\theta_\alpha$ and $\cos\theta_\alpha$ have been
expanded for $\theta_\alpha \ll 1$.
Comparing these expressions to electroweak precision data, one obtains the bounds
\cite{Antusch:2015mia}
\begin{align}
&& \theta^2_e,\theta^2_\mu &\lesssim 2\times 10^{-3}, &
\theta^2_\tau &\lesssim 7\times 10^{-3} && \text{(today)} && \notag \\
&& &\lesssim 2\times 10^{-5}, &
&\lesssim 10^{-3} && \text{(FCC-ee)} && \notag \\
&& &\lesssim 2\times 10^{-5}, &
&\lesssim 3\times 10^{-3} && \text{(CEPC)} &&
\end{align}
\paragraph{Exercise:} Assuming a special scenario where
$\theta_e=\theta_\mu=\theta_\tau\equiv \theta$, what bound on $\theta$ (at 95\%
C.L.) do you obtain from \eqref{gaminvnu}? Use numbers from section 10 in
Ref.~\cite{Zyla:2020zbs} for $\Gamma_{Z \to \rm inv}^{\rm exp}$ and
$\Gamma_\nu^{\rm SM}$.
\label{thlim}
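One possible numerical route for this exercise is sketched below. With $\theta_\alpha = \theta$ for all flavors, eq.~\eqref{gaminvnu} gives $\Gamma_{Z\to\rm inv} = \Gamma_\nu^{\rm SM}(3 - 9\theta^2)$, so a lower limit on the measured invisible width bounds $\theta^2$. The inputs below are assumed, PDG-like placeholder values and should be replaced by the actual numbers.

```python
import math

# Assumed PDG-like inputs (replace with the actual PDG numbers):
G_inv_exp, dG = 499.0, 1.5     # MeV, measured invisible width and its error
G_nu_SM = 167.22               # MeV, SM partial width per neutrino species

# Gamma_inv = G_nu_SM * (3 - 9 theta^2), so a one-sided 95% C.L.
# lower limit on Gamma_inv translates into an upper limit on theta^2:
G_low = G_inv_exp - 1.645 * dG
theta2_max = (3.0 - G_low / G_nu_SM) / 9.0
print(f"theta^2 < {theta2_max:.2e},  theta < {math.sqrt(theta2_max):.3f}")
```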
\subsection{Dark Photon}
Dark photon models are extensions of the SM with an additional U(1) gauge boson,
$Z'$, which can kinetically mix with the hypercharge gauge boson (see
Ref.~\cite{Fabbrichesi:2020wbt} for a recent review). Let us furthermore
introduce a fermion $\chi$ as a dark matter (DM) candidate that couples to $Z'$
with coupling strength $g_D$. The Lagrangian is given by
\begin{align}
{\cal L} = {\cal L}_{\rm SM} - \frac{1}{4}Z'_{\mu\nu} Z'^{\mu\nu} +
\frac{M_{Z'}^2}{2}Z'_\mu Z'^\mu + \overline{\chi} (i\slashed{\partial}
+g_D \slashed{Z}'-m_\chi)\chi + \frac{\epsilon}{2\mathswitch {c_{\scrs\mathswitchr W}}}Z'_{\mu\nu} B^{\mu\nu}
\end{align}
Here we have written an explicit mass term for $Z'$ for simplicity. In
a realistic model this mass would need to be generated through the Higgs or
St\"uckelberg mechanism, but the details are unimportant for the following
discussion.
The kinetic terms can be diagonalized and canonically normalized by transforming
$Z'$ and $B$ to the new fields $Z^\mu_{D,0}$ and $B^\mu_0$ according to
\begin{align}
\begin{pmatrix} Z^\mu_{D,0} \\ B^\mu_0 \end{pmatrix}
\approx \begin{pmatrix} 1-2\epsilon^2/\mathswitch {c_{\scrs\mathswitchr W}}^2 & 0 \\ -\epsilon/\mathswitch {c_{\scrs\mathswitchr W}} & 1 \end{pmatrix}
\begin{pmatrix} Z'^\mu \\ B^\mu \end{pmatrix} + {\cal O}(\epsilon^3)
\label{zdmix}
\end{align}
When expressing ${\cal L}$ in terms of the $Z^\mu_{D,0}$ and $B^\mu_0$, one can
see that the dark photon field $Z^\mu_{D,0}$ has ${\cal O}(\epsilon)$
couplings to the SM fermions.
After electroweak symmetry breaking, mass mixing between
$B_0^\mu$, $W_0^\mu$ and $Z^\mu_{D,0}$ produces the
observable photon and $Z$-boson, as well as the ``dark photon'' mass eigenstate
$Z_D$ with mass $M_{Z_D}$. Note that the mass mixing between $Z^\mu_{D,0}$ and the other fields is
also suppressed by $\epsilon$. As a result, the $Z$-boson mass is shifted by an
${\cal O}(\epsilon^2)$ contribution relative to the SM \cite{Curtin:2014cca},
\begin{align}
M_Z^2 \approx \frac{M_W^2}{\mathswitch {c_{\scrs\mathswitchr W}}^2}\Bigl(1+\epsilon^2\frac{\mathswitch {s_{\scrs\mathswitchr W}}^2}{\mathswitch {c_{\scrs\mathswitchr W}}^2}\Bigr)
\end{align}
where $\mathswitch {s_{\scrs\mathswitchr W}}$ and $\mathswitch {c_{\scrs\mathswitchr W}}$ are the sine and cosine of the weak mixing angle defined
through the (tree-level) gauge-couplings, $\mathswitch {s_{\scrs\mathswitchr W}} = g'/\sqrt{g^2+g'^2}$, $\mathswitch {c_{\scrs\mathswitchr W}} =
g/\sqrt{g^2+g'^2}$.
The $Zff$ vector and axial-vector couplings, see eq.~\eqref{zcpl}, are
additionally modified through $Z$--$Z_D$ mass mixing, leading to \cite{Curtin:2014cca}
\begin{align}
v_f &\approx \frac{e}{2\mathswitch {s_{\scrs\mathswitchr W}}\mathswitch {c_{\scrs\mathswitchr W}}}\biggl[\Bigl(1-\frac{\alpha^2}{2}\Bigr)
\bigl(I_f^3 - 2\mathswitch {s_{\scrs\mathswitchr W}}^2 Q_f\bigr) + \alpha\epsilon\frac{\mathswitch {s_{\scrs\mathswitchr W}}^2}{\mathswitch {c_{\scrs\mathswitchr W}}^2} \bigl(Q_f -
I^3_f\bigr)\biggr], \\
a_f &= v_f|_{Q_f \to 0}^{}
\intertext{where}
\alpha &= \frac{\epsilon\mathswitch {s_{\scrs\mathswitchr W}}}{\mathswitch {c_{\scrs\mathswitchr W}}(M^2_{Z'}/M^2_{Z,0}-1)}\,, \qquad M_{Z,0} =
\frac{M_W}{\mathswitch {c_{\scrs\mathswitchr W}}}
\end{align}
As a result, the predictions for all $Z$-pole EWPOs are modified, such as
$\Gamma_Z$, the $Z$ branching ratios, and $\seff{f}$.
Additionally, the dark photon also leads to a correction of the electron and
muon magnetic moments \cite{Pospelov:2008zw},
\begin{align}
\delta a_\ell = \frac{\alpha\epsilon^2}{8\pi}\,
F\Bigl(\frac{m^2_\ell}{M^2_{Z_D}}\Bigr) \label{alZd}
\end{align}
where $F(x)$ is a loop function with $F(x) \approx 1$ for $x \gg 1$ and $F(x)
\approx \frac{2}{3}x$ for $x \ll 1$. For some range of $\epsilon$ and $M_{Z_D}$,
\eqref{alZd} can explain the $>3\sigma$ discrepancy of the muon magnetic moment,
see section~\ref{lewpo}. However, for very small values of $M_{Z_D}$ the
correction to $a_e$ also can become sizeable and this region of parameter space
is ruled out.
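The size of $\epsilon$ needed to shift $a_\mu$ can be estimated from \eqref{alZd} in the two limiting regimes of $F(x)$; the target discrepancy and the example mass below are assumed round numbers for illustration.

```python
import math

alpha = 1.0 / 137.036
m_mu = 0.10566           # GeV
da_target = 2.5e-9       # rough size of the a_mu discrepancy (assumed)

# Light Z_D (M_ZD << m_mu): F ~ 1, so delta a = alpha eps^2 / (8 pi)
eps_light = math.sqrt(da_target * 8.0 * math.pi / alpha)

# Heavy Z_D (M_ZD >> m_mu): F ~ (2/3) m_mu^2 / M_ZD^2
M_ZD = 1.0               # GeV, example heavy mass
eps_heavy = math.sqrt(da_target * 8.0 * math.pi / alpha
                      / ((2.0/3.0) * m_mu**2 / M_ZD**2))

print(f"eps (M_ZD << m_mu) ~ {eps_light:.1e}")
print(f"eps (M_ZD = 1 GeV) ~ {eps_heavy:.1e}")
```

The light-$Z_D$ estimate lands at $\epsilon$ of a few $10^{-3}$, consistent with the favored band in Fig.~\ref{Zd_constr}.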
\begin{figure}[tb]
\centering
\epsfig{figure=A-visible-Higgs-Global-Summary-v1.eps, width=12cm}
\caption{Constraints on the parameter space of the dark photon model (figure
taken from Ref.~\cite{Curtin:2014cca}).
Electroweak precision constraints from $Z$-pole data are labeled ``EWPT'' (shaded
region for existing constraints and short-dashed lines for future constraints
obtainable at ILC). ``$a_{\mu,\pm 2\sigma}$ favored'' indicates the region that
would alleviate the muon magnetic moment discrepancy within 95\% confidence
level, while the region ``$a_{\mu,5\sigma}$'' is excluded because it would
worsen the discrepancy to the level of 5 standard deviations. The shaded region
labeled ``$a_e$'' is excluded from electron magnetic moment constraints. The
other shaded regions are excluded by direct searches for $Z_D$.}
\label{Zd_constr}
\end{figure}
The constraints from $Z$-pole EWPOs and magnetic moments are
depicted in Fig.~\ref{Zd_constr}, together with bounds from direct searches for
$Z_D$ at various experiments. Note, however, that the direct search limits
assume that $Z_D$ only decays into SM particles. These bounds can be relaxed
when the invisible decay channel into DM particles, $Z_D \to
\chi\bar{\chi}$ is kinematically open ($m_\chi < M_{Z_D}/2$), whereas the bounds
from electroweak precision data are independent of such assumptions.
\bibliographystyle{JHEP}
\section{Introduction}
The availability of large data sets has led to an increasing interest in variable selection methods applied to regression models with many potential variables but few observations ({\it large} $p$, {\it small} $n$ problems). Frequentist approaches have mainly concentrated on providing point estimates under assumptions of sparsity using penalized maximum likelihood procedures \citep{HaTiWa15}.
However, some recent work has considered constructing confidence intervals and taking into account model uncertainty
\citep{ ShSa13, DeBuMeMe15}.
Bayesian approaches to variable selection are an attractive and natural alternative and lead to a posterior distribution on all possible models which can be used to address model uncertainty for variable selection and prediction. A growing literature provides a theoretical basis for good properties of the posterior distribution in large $p$ problems \citep[see {\it e.g.}][]{CaSHVDV15, johnson2012bayesian}.
The posterior probabilities of all models can usually only be calculated or approximated if $p$ is smaller than 30.
If $p$ is larger,
Markov chain Monte Carlo methods are typically used to sample from the posterior distribution
\citep{george1997approaches, o2009review, ClGhLi11}.
\cite{GDMB13} discuss the
benefits of such methods. The most widely used Markov chain Monte Carlo algorithm in this context is
the Metropolis-Hastings sampler where new models are proposed using add-delete-swap samplers \citep{BVF98, Chip01}. For example, this approach is used by \cite{NiWaJo16} in a binary regression model with a non-local prior for the regression coefficients on a data set with 7129 genes. Some supporting theoretical understanding of convergence is available for the add-delete-swap samplers, {\it e.g.}~conditions for rapid mixing in linear regression model have been derived by \cite{YaWaJo16}. Others have considered more targeted moves in model space. For example, \cite{TitsiasYau} introduce the Hamming Ball sampler which more carefully selects the proposed model in a Metropolis-Hastings sampler (in a similar way to shotgun variable selection, \cite{HaDoWe07}) and
\cite{ScCh11} develop a sequential Monte Carlo method that uses a sequence of annealed posterior distributions.
However, the mixing of these methods is often thought to be poor, when applied to data sets with thousands of potential covariates. As an alternative, several authors use more general shrinkage priors and develop suitable MCMC algorithms for high-dimensional problems \citep[see {\it e.g.}][]{ABACBM16}. Nonlocal priors \citep{johnson2012bayesian} are adopted in \cite{Shin_etal_18}, who use screening for high dimensions. \cite{RobertsZanella_19} combine Markov chain Monte Carlo and importance sampling ideas in their tempered Gibbs sampler.
The challenge of performing Markov chain Monte Carlo for Bayesian variable selection in high dimensions has led to several developments sacrificing exact posterior exploration. For example,
\cite{LiSoYu13} used the stochastic approximation Monte Carlo algorithm \citep{LiLiCa07} to efficiently explore model space. In another direction, variable selection can be performed as a post-processing step after fitting a model including all variables \citep[see {\it e.g.}][]{BoRe12, HaCa15}. Several authors develop algorithms that focus on high posterior probability models. In particular \cite{rovckova2014emvs} propose a deterministic expectation-maximisation based algorithm for identifying posterior modes, while \cite{papaspiliopoulos2016scalable} develop an exact deterministic algorithm to find the most probable model of any given size in block-diagonal design models.
Alternatively, Markov chain Monte Carlo methods for variable selection can be tailored to the data to allow faster convergence and mixing
using the adaptive Markov chain Monte Carlo framework \citep[see {\it e.g.}][Section 2.4, and references therein]{MR3360496}.
Several strategies have been developed in the literature for both the Metropolis-type algorithms \citep{lamnisos12, JiSchmidler} and Gibbs samplers \citep{nottkohn05, richardson2010bayesian}.
Our proposal is a Metropolis-Hastings kernel that learns the relative importance of the variables, unlike \cite{JiSchmidler} who use an independent proposal, and unlike \cite{lamnisos12} who only tune the number of variables proposed to be changed.
This leads to substantially more efficient algorithms than commonly-used methods in high-dimensional settings and for which the computational cost of one step scales linearly with $p$.
The design of these algorithms utilizes the observation that in large $p$, small $n$ settings posterior correlations will be negligible for the vast majority of the $p$ inclusion indicators. The algorithms adaptively build suitable non-local Metropolis-Hastings type proposals that result in moves with expected squared jumping distance \citep{MR1425429} significantly larger than standard methods. In idealized examples the limiting versions of our adaptive algorithms converge in $\mathcal{O}(1)$ and result in super-efficient sampling. They outperform independent
sampling in terms of the expected squared jump distance and also in the sense of the central limit theorem asymptotic variance. This is in contrast to the behaviour of optimal local random walk Metropolis algorithms that on analogous idealized targets need at least $\mathcal{O}(p)$ samples to converge \citep{RGG97}. The performance of our algorithms is studied empirically in realistic high-dimensional problems for both synthetic and real data. In particular, in Section \ref{sec_sim_data},
for a well studied synthetic data example, speedups of up to 4 orders of magnitude are observed compared to standard algorithms.
Moreover, in Section \ref{sec_tecator}, we show the efficiency of the method in the presence of multicollinearity on a real data example with $p=100$ variables, and in Section \ref{sec_PCR}, we present real data gene expression examples with $p=22\ 576$ and with $p=79\ 748$,
and reliably estimate the posterior inclusion probabilities for all variables. All proofs are grouped in the Supplementary Material.
\section{Design of the Adaptive Samplers}
\subsection{The Setting} \label{sec:setting}
Our approach is applicable to general regression settings but we will focus on normal linear regression models. This will allow for clean efficiency comparisons independent of model-specific sampling details (e.g.~of a reversible jump implementation). We define
$\gamma=(\gamma_1,\dots,\gamma_p) \in \Gamma = \{0,1\}^p$ to be a vector of
indicator variables with $\gamma_i=1$ if the $i$-th variable is included in the model and
$p_{\gamma}=\sum_{j=1}^p \gamma_j$. We
consider the model specification
\begin{equation}
y=\alpha {\bf 1}_n + X_{\gamma}\beta_{\gamma} + e,\qquad e\sim{\mbox{N}}(0,\sigma^2 I_n)
\label{linreg}
\end{equation}
where $y$ is an $(n\times 1)$-dimensional vector of responses, ${\bf a}_q$ represents a $q$-dimensional column vector with entries $a$,
and $X_\gamma$ is
a $(n\times p_\gamma)$-dimensional data matrix
formed using the included variables.
We consider Bayesian variable selection and, for clarity of exposition and validity of comparisons, we will assume the commonly used prior structure
\begin{equation}p(\alpha,\sigma^2,\beta_{\gamma},\gamma)\propto \sigma^{-2}p(\beta_{\gamma}\mid\sigma^2, \gamma)p(\gamma)\label{prior}
\end{equation}
with $\beta_{\gamma} \mid \sigma^2,\gamma \sim {\mbox{N}}(0,\sigma^2 V_{\gamma})$, and $p(\gamma) = h^{p_{\gamma}}(1-h)^{p-p_{\gamma}}$.
The hyperparameter $0<h<1$ is the prior probability that a particular variable is included in the model and $V_{\gamma}$ is often chosen as proportional to $(X_\gamma^T X_\gamma)^{-1}$ (a $g$-prior) or to the identity matrix (implying conditional prior independence between the regression coefficients). For both priors, the marginal likelihood $p(y\mid \gamma)$ can be calculated analytically.
The prior can be further extended with hyperpriors, for example, assuming that $h\sim{\mbox{Be}}(a,b)$.
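As an illustration of the analytic marginal likelihood, the sketch below uses the closed form for Zellner's $g$-prior $V_\gamma = g(X_\gamma^TX_\gamma)^{-1}$ with a flat prior on $\alpha$ and $p(\sigma^2)\propto\sigma^{-2}$ (the Liang et al., 2008, form, up to a model-independent constant). The toy data, variable names, and the unit-information choice $g=n$ are illustrative assumptions.

```python
import numpy as np

def log_marginal_g_prior(y, X, gamma, g):
    """Log marginal likelihood (up to a model-independent constant) under
    Zellner's g-prior, relative to the null (intercept-only) model."""
    n = len(y)
    yc = y - y.mean()                        # flat intercept prior: centre y
    p_g = int(gamma.sum())
    if p_g == 0:
        return 0.0                           # null model as reference
    Xg = X[:, gamma.astype(bool)]
    Xc = Xg - Xg.mean(axis=0)
    beta_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta_hat)**2)
    R2 = 1.0 - rss / np.sum(yc**2)
    return 0.5*(n - 1 - p_g)*np.log(1 + g) - 0.5*(n - 1)*np.log(1 + g*(1 - R2))

# Toy example with one relevant and one irrelevant variable
rng = np.random.default_rng(0)
n, p = 50, 2
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)
g = float(n)                                 # unit-information choice g = n
m_true = log_marginal_g_prior(y, X, np.array([1, 0]), g)
m_noise = log_marginal_g_prior(y, X, np.array([0, 1]), g)
print(m_true > m_noise)    # the model containing the true variable wins
```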
We will consider sampling from the target distribution $\pi_p(\gamma)=p(\gamma\vert y)$ using a non-symmetric Metropolis-Hastings kernel. Let the probability of proposing to move from model $\gamma$ to $\gamma'$ be
\begin{equation}
q_{\eta}(\gamma,\gamma')=\prod_{j=1}^p q_{\eta,j}(\gamma_j,\gamma'_j)
\label{gen_prop}
\end{equation}
where $\eta = (A,D) = (A_1,\dots, A_p, D_1, \dots, D_p)$,
$q_{\eta,j}(\gamma_j=0, \gamma_j'=1)=A_j$ and
$q_{\eta,j}(\gamma_j=1,\gamma_j'=0)=D_j$.
The proposal can be quickly sampled, the parametrisation allows
optimisation of the expected squared jumping distance, and multiple variables can be added to or deleted from the model in one iteration.
The proposed model is accepted using the standard Metropolis-Hastings acceptance probability
\begin{equation}
a_{\eta}(\gamma,\gamma') = \min\left\{1,\frac{\pi_p(\gamma') q_{\eta}(\gamma',\gamma)}{\pi_p(\gamma) q_{\eta}(\gamma,\gamma')}\right\}.
\label{MH_accept}
\end{equation}
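One step of this sampler can be sketched as follows on a toy product-form target; the target, the symmetric choice $A_j = D_j = 0.1$, and all names are illustrative, not the tuned values discussed later.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 10
pi = rng.uniform(0.01, 0.5, size=p)     # toy posterior inclusion probabilities

def log_target(gam):                    # toy product-form posterior
    return np.sum(gam*np.log(pi) + (1 - gam)*np.log(1 - pi))

def propose(gam, A, D):
    """Flip each gamma_j independently: 0->1 w.p. A_j, 1->0 w.p. D_j."""
    u = rng.uniform(size=len(gam))
    flip = np.where(gam == 0, u < A, u < D)
    return np.where(flip, 1 - gam, gam)

def log_q(gam, gam_new, A, D):          # log proposal probability, eq. (3)
    prob = np.where(gam == 0,
                    np.where(gam_new == 1, A, 1 - A),
                    np.where(gam_new == 0, D, 1 - D))
    return np.sum(np.log(prob))

A = 0.1 * np.ones(p)
D = 0.1 * np.ones(p)
gam = np.zeros(p, dtype=int)
accepts = 0
for _ in range(2000):
    gam_new = propose(gam, A, D)
    log_a = (log_target(gam_new) + log_q(gam_new, gam, A, D)
             - log_target(gam) - log_q(gam, gam_new, A, D))   # eq. (4), in logs
    if np.log(rng.uniform()) < min(0.0, log_a):
        gam, accepts = gam_new, accepts + 1
print(f"acceptance rate = {accepts/2000:.2f}")
```

Note that several variables can be added or deleted in the same iteration, as described above.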
\subsection{In Search of Lost Mixing Time: Optimising the Sampler} \label{sec:optimising}
The transition kernel in (\ref{gen_prop}) is highly parameterised with $2p$ parameters and these will be tuned using adaptive Markov chain Monte Carlo methods
\citep[see {\it e.g.}][]{andrieuthoms08, roberts2009examples, MR3360496}. These methods allow the tuning of parameters on the fly to improve mixing using some computationally accessible performance criterion whilst maintaining the ergodicity of the chain. Suppose that $\mu_p$ is a $p$-dimensional probability density function which has the form $\mu_p = \prod_{j=1}^p f$.
A commonly used result is that the optimal scale of a random walk proposal for $\mu_p$ leads to a mean acceptance rate of $0.234$ as $p\rightarrow\infty$ for some smooth enough $f$.
The underlying analysis also implies that the optimised random walk Metropolis will converge to stationarity in $\mathcal{O}(p)$ steps.
This is a useful guide even in moderate dimensions and well beyond the restrictive independent, identically distributed product form assumption of \cite{RGG97}. \cite{lamnisos12} show that this rule can be effectively used to tune a Metropolis-Hastings sampler for Bayesian variable selection.
However, other results suggest that other optimal scaling rules could work well in Bayesian variable selection problems.
Firstly, \cite{MR3025684} established,
under additional regularity conditions,
that if $f$
is discontinuous, the optimal mean acceptance rate for a Metropolis-Hastings random walk is $e^{-2}\approx 0.1353$ and the chain mixes
in $\mathcal{O}(p^2)$ steps, an order of magnitude slower than with smooth target densities $f$. Rather surprisingly, \cite{neal2016optimal} show that the optimally tuned independence sampler in this setting recovers the $O(p)$ mixing and acceptance rate of $0.234$
without any additional smoothness conditions.
Secondly, \cite{MR1613256}
considered optimal scaling of the random walk Metropolis-Hastings algorithm on $\Gamma=\{0,1\}^p$ for the product measures
\[\mu_p(\gamma_1, \dots, \gamma_p) = s^{p_\gamma}(1-s)^{p- p_\gamma},\qquad \gamma=(\gamma_1,\dots,\gamma_p)\in\Gamma, \quad 0<s<1.
\]
If $s$ is close to $1/2$, the optimal $\mathcal{O}(p)$ mixing rate occurs as $p$ tends to infinity if the mean acceptance rate is 0.234. If $s\rightarrow 0$ as $p\rightarrow \infty$,
the numerical results of Section 3 in \cite{MR1613256} indicate that the optimally tuned random walk Metropolis proposes to change two $\gamma_j$'s at a time but that
the acceptance rate deteriorates to zero resulting in the chain not moving. This suggests the actual mixing in this regime is slower than the $\mathcal{O}(p)$ observed for smooth continuous densities.
In Bayesian variable selection, it is natural to assume that the variables differ in posterior inclusion probabilities and so we consider target densities that have the form
\begin{equation}\label{our-target}
\pi_p(\gamma) = \prod_{j=1}^{p}\pi_j^{\gamma_j}(1-\pi_j)^{1-\gamma_j},\qquad \gamma\in\Gamma
\end{equation}
where $0<\pi_j<1$ for $j=1,\dots,p$.
Consider the non-symmetric Metropolis-Hastings algorithm with the product form proposal $q_{\eta}(\gamma, \gamma')$ given by \eqref{gen_prop} targeting the posterior distribution given by~\eqref{our-target}.
Note that $a_{\eta}(\cdot, \cdot) \equiv 1$ for any choice of $\eta = (A,D)$ satisfying \begin{equation}\label{accept1}
\frac{A_j}{D_j} = \frac{\pi_j}{1-\pi_j}, \quad \textrm{for every } j.\end{equation} To discuss optimal choices of $\eta,$ we consider several commonly used criteria for Markov chains with stationary distribution $\pi$ and transition kernel $P$ on a finite discrete state space $\Gamma$. The \emph{mixing time} of a Markov chain
\citep{roberts2004general}
is
$
\rho := \min\{t: \max_{\gamma \in \Gamma} \|P^t(\gamma, \cdot) - \pi(\cdot)\|_{TV} < 1/2\}
$ where $\|\cdot\|_{TV}$ is the total variational norm.
If $\Gamma = \{0,1\}^p$,
it is natural to define the \emph{expected squared jumping distance} \citep{MR1425429} as
${E}_{\pi}\left[\sum_{j=1}^p \vert\gamma^{(0)}_j - \gamma^{(1)}_j\vert^2\right]$ where $\gamma^{(0)}$ and $\gamma^{(1)}$
are two consecutive values in a Markov chain trajectory, which
is the average number of variables changed in one iteration.
Suppose that the Markov chain is ergodic. Then, for any function $f: \Gamma \to \mathbb{R}$,
$
\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} \left(f(\gamma^{(k)}) - E_{\pi}f\right) \stackrel {D}{\rightarrow} N(0, \sigma^2_{P,f}),
$
where the constant $\sigma^2_{P,f}$ depends on the transition kernel $P$ and the function $f$.
Consider transition kernels $P_1$ and $P_2$. If $\sigma^2_{P_1,f} \leq \sigma^2_{P_2,f} $ for every $f,$ then $P_1$ dominates $P_2$ in \emph{Peskun ordering} \citep{MR0362823}. If $P_1$ dominates all other kernels from a given class, then $P_1$ is optimal in this class with respect to Peskun ordering. Apart from toy examples, Peskun ordering can rarely be established without further restrictions. Hence, for the variable selection problem, where posterior inclusion probabilities are often of interest, we consider Peskun ordering for the class $\mathbb{L}(\Gamma)$ of linear combinations of univariate functions,
\begin{equation} \label{lin_fun}\mathbb{L}(\Gamma) := \left\{f:\Gamma \to \mathbb{R}: f(\gamma) = a_0 + \sum_{j=1}^p a_j f_j(\gamma_j) \right\}.
\end{equation}
We consider two proposals which satisfy \eqref{accept1}. The \emph{independent proposal} for which $A_j=1-D_j=\pi_j $ and
the \emph{random walk proposal} for which $A_j = \min\{1, \frac{\pi_j}{1-\pi_j}\}$ and $D_j = \min\{1, \frac{1- \pi_j}{\pi_j}\}$. The following proposition shows that the random walk proposal has more desirable properties.
\begin{proposition} \label{prop_AD_choices} Consider the class of Metropolis-Hastings algorithms with target distribution given by~\eqref{our-target} and proposal $q_{\eta}(\gamma, \gamma')$ given by~\eqref{gen_prop} with the independent or random walk proposal. Let $Var_{\pi}f $ be the stationary variance of $f$ under $\pi_p(\gamma)$ and
$\pi^{(j)} := \{1-\pi_j, \pi_j\}$.
Then,
\vspace*{-8pt}
\begin{enumerate}
\item[(i)] the independent proposal leads to
\begin{itemize}
\item[(a)] independent sampling and optimal mixing time $\rho = 1;$
\item[(b)] the expected squared jumping distance is $E_{\pi}[\Delta^2] = 2\sum_{j=1}^p \pi_j(1-\pi_j)$;
\item[(c)] the asymptotic variance is $\sigma_{P,f}^2 = Var_{\pi}f$ for arbitrary $f$ and $\sigma_{P,f}^2 = Var_{\pi}f = \sum_{j=1}^p a_j^2 Var_{\pi^{(j)}}f_j$ for $f \in \mathbb{L}(\Gamma)$;
\end{itemize}
\item[(ii)] the random walk proposal leads to
\begin{itemize}
\item[(a)] the expected squared jumping distance is
$E_{\pi}[\Delta^2] = 2\sum_{j=1}^p\min\{1-\pi_j, \pi_j\}$, which is maximal;
\item[(b)]
the asymptotic variance is $\sigma_{P,f}^2 = \sum_{j=1}^p \big(2 \max \{1-\pi_j, \pi_j\} -1 \big)a_j^2 Var_{\pi^{(j)}}f_j$
for $f \in \mathbb{L}(\Gamma)$
and it is
optimal with respect to the Peskun ordering for the class of linear functions~$\mathbb{L}(\Gamma)$ defined in \eqref{lin_fun}.\end{itemize}
\end{enumerate}
\end{proposition}
\begin{remark}
The difference in the expected squared jumping distance and asymptotic variance between the two proposals is largest when $\pi_j$ is close to $1/2$.
\end{remark}
\begin{remark}
In discrete spaces, \cite{ScCh11} argue that the mutation rate
\begin{equation*}
\bar{a}_M=\sum_{\gamma \in \Gamma}\sum_{\gamma' \in \Gamma} \mathbb{I}(\gamma\neq \gamma')\,a_\eta(\gamma,\gamma')\,q_{\eta}(\gamma,\gamma')\,\pi(\gamma),
\end{equation*}
which excludes moves which do not change the model, is more appropriate than average acceptance rate.
The mutation rate is
$
\bar{a}_M
=1 - \prod_{j=1}^p [
(1 - \pi_j)^2 + \pi_j^2
]$
with independent sampling and is
$
\bar{a}_M
=1 - \prod_{j=1}^p \vert 2\pi_j-1\vert$
with the random walk proposal.
Therefore, the random walk proposal always leads to a higher mutation rate.
\end{remark}
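Since every proposal satisfying \eqref{accept1} is accepted with probability one under the product target \eqref{our-target}, the coordinates flip independently, and the expected squared jumping distance and mutation rate formulas above can be checked by direct per-coordinate computation. The following sketch (with illustrative inclusion probabilities, not taken from the paper) verifies the expressions in Proposition~\ref{prop_AD_choices} and the remark:

```python
# Check the ESJD and mutation rate formulas for the independent and
# random walk proposals, assuming the product-form target and that
# A_j/D_j = pi_j/(1-pi_j), so every proposal is accepted.

def independent(pi):
    return [(p, 1 - p) for p in pi]          # A_j = pi_j, D_j = 1 - pi_j

def random_walk(pi):
    return [(min(1, p / (1 - p)), min(1, (1 - p) / p)) for p in pi]

def esjd(pi, AD):
    # coordinate j flips with probability pi_j*D_j + (1-pi_j)*A_j
    return sum(p * D + (1 - p) * A for p, (A, D) in zip(pi, AD))

def mutation_rate(pi, AD):
    # one minus the probability that no coordinate changes
    no_change = 1.0
    for p, (A, D) in zip(pi, AD):
        no_change *= p * (1 - D) + (1 - p) * (1 - A)
    return 1 - no_change

pi = [0.05, 0.2, 0.4, 0.7]                   # hypothetical values
# Proposition 1 formulas
assert abs(esjd(pi, independent(pi)) - 2 * sum(p * (1 - p) for p in pi)) < 1e-12
assert abs(esjd(pi, random_walk(pi)) - 2 * sum(min(p, 1 - p) for p in pi)) < 1e-12
# Remark formulas for the mutation rate
prod_ind, prod_rw = 1.0, 1.0
for p in pi:
    prod_ind *= (1 - p) ** 2 + p ** 2
    prod_rw *= abs(2 * p - 1)
assert abs(mutation_rate(pi, independent(pi)) - (1 - prod_ind)) < 1e-12
assert abs(mutation_rate(pi, random_walk(pi)) - (1 - prod_rw)) < 1e-12
```

The random walk proposal yields both a larger jumping distance and a higher mutation rate for these values, in line with the discussion above.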
These results suggest that the random walk proposal should be preferred to the independent proposal when designing a Metropolis-Hastings sampler for idealised posteriors of the form in \eqref{our-target}.
In practice, the posterior distribution will not have a product form but can anything be said about its form when $p$ is large? The following result sheds some light on this issue. We define $\mbox{BF}_j(\gamma_{-j})$ to be the Bayes factor of including the $j$-th variable given the values of $\gamma_{-j}=(\gamma_1,\dots,\gamma_{j-1},\gamma_{j+1},\dots,\gamma_p)$ and denote by $\gamma_0$ the vector $\gamma$ without $\gamma_j$ and $\gamma_k$.
\begin{proposition}
Let
$a=\frac{\mbox{BF}_j(\gamma_k=1,\gamma_0)}
{\mbox{BF}_j(\gamma_k=0,\gamma_0)}$.
If (i) $a \rightarrow 1$ or
(ii) $a\rightarrow A<\infty$ and $\mbox{BF}_j(\gamma_k=0,\gamma_{0})\,h\rightarrow 0$, then
$
p(\gamma_j=1\vert \gamma_k=1,\gamma_0)
\rightarrow p(\gamma_j=1\vert \gamma_k=0,\gamma_0)$.
\end{proposition}
This result gives conditions under which $\gamma_j$ and $\gamma_k$ are approximately independent. Condition (ii) is interesting in large $p$ settings: $\gamma_j$ and $\gamma_k$ are approximately independent if $p$ is large (and so $h$ is small) and $\mbox{BF}_j(\gamma_k=0,\gamma_{0})$ is not large, {\it i.e.} the evidence in favour of including $\gamma_j$ is not large. This will be the case for all variables apart from the most important.
Although this result provides some reassurance, there will be some posterior correlation in many problems and the random walk proposal may propose to change too many variables leading to low acceptance rates. This can be addressed by using a scaled proposal of the form \begin{equation}\label{individually_scaled}
A_j = \zeta_j \min\left\{1, \frac{\pi_j}{1-\pi_j}\right\}, \qquad D_j = \zeta_j \min\left\{1, \frac{1- \pi_j}{\pi_j}\right\}.\end{equation}
The family of these proposals for $\zeta_j \in [0,1]$ form a line segment for $(A_j, D_j)$ between $(0, 0)$ and $\left(\min\left\{1, \frac{\pi_j}{1-\pi_j}\right\}, \min\left\{1, \frac{1- \pi_j}{\pi_j}\right\}\right)$, illustrated in Figure~\ref{fig_segment} (Supplementary Material \ref{add_fig}). The independent proposal corresponds to the point on this line where $\zeta_j=\max\{\pi_j, 1-\pi_j\}$.
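The geometry of this family can be checked directly: every $\zeta_j \in (0,1]$ preserves the ratio condition \eqref{accept1}, $\zeta_j = 1$ recovers the random walk proposal, and $\zeta_j=\max\{\pi_j, 1-\pi_j\}$ recovers the independent proposal. A sketch (with an arbitrary illustrative $\pi_j$):

```python
# The individually scaled proposal (A_j, D_j) as a function of zeta_j.

def scaled(pi_j, zeta):
    A = zeta * min(1, pi_j / (1 - pi_j))
    D = zeta * min(1, (1 - pi_j) / pi_j)
    return A, D

pi_j = 0.3
# zeta = 1 gives the random walk proposal
assert scaled(pi_j, 1.0) == (min(1, 0.3 / 0.7), 1.0)
# zeta = max(pi_j, 1-pi_j) gives the independent proposal A = pi_j, D = 1 - pi_j
A, D = scaled(pi_j, max(pi_j, 1 - pi_j))
assert abs(A - pi_j) < 1e-12 and abs(D - (1 - pi_j)) < 1e-12
# every zeta keeps the acceptance ratio condition A/D = pi/(1-pi)
for zeta in (0.1, 0.5, 0.9):
    A, D = scaled(pi_j, zeta)
    assert abs(A / D - pi_j / (1 - pi_j)) < 1e-12
```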
In the next section, we devise adaptive MCMC algorithms to tune proposals of the form \eqref{gen_prop} so that
$A_j$'s and $D_j$'s lie approximately on this line.
Larger values of $\zeta_j$ tend to lead to larger jumps whereas smaller values of $\zeta_j$ tend to increase acceptance. These algorithms aim to find a point which balances this trade-off.
We define two strategies for adapting $\eta$: \emph{Exploratory Individual Adaptation} and \emph{Adaptively Scaled Individual Adaptation}.
\cite{craiu2009learn} showed empirically that running multiple independent Markov chains with the same adaptive parameters
improves the rate of convergence of adaptive algorithms towards their target acceptance rate in the context of the classical adaptive Metropolis algorithm of \cite{haario2001adaptive} (see also Bornn et al. 2013)
\nocite{BJDD12}. Therefore, we consider
algorithms with different numbers of independent parallel chains (but the same parameters of the proposal) and refer to this as { multiple chain acceleration}.
To avoid the algorithms becoming trapped in well separated modes, we also consider parallel tempering versions of the algorithms, following \cite{MiMoVi12} as explained in Supplementary Material \ref{PT}.
At this point, it is helpful to define some notation.
Let $\eta^{(i)}=(A^{(i)},D^{(i)})$ and
$\gamma^{(i)}$ be the values of $\eta$ and $\gamma$ at the start of the $i$-th iteration, and $\gamma'$
be the subsequently proposed value. Let $
a_i = a_{\eta^{(i)}}(\gamma^{(i)}, \gamma')$ be the acceptance probability at the $i$-th iteration.
We define for $j=1,\dots,p$,
\[
\gamma^{A\,(i)}_j=
\left\{\begin{array}{ll}
1\mbox{ if }\gamma'_j\neq \gamma_j^{(i)}\mbox{ and }\gamma_j^{(i)}=0\\
0\mbox{ otherwise}
\end{array}\right.,\qquad
\gamma^{D\,(i)}_j=
\left\{\begin{array}{ll}
1\mbox{ if }\gamma'_j\neq \gamma_j^{(i)}\mbox{ and }\gamma_j^{(i)}=1\\
0\mbox{ otherwise}
\end{array}\right.
\]
and the map ${\mbox{logit}}_{\epsilon}: (\epsilon,1-\epsilon)\rightarrow \mathbb{R}$ by
$
{\mbox{logit}}_{\epsilon}(x) = \log(x-\epsilon) - \log(1 - x - \epsilon),
$
where $0\leq\epsilon<1/2$. This reduces to the usual logit transform if $\epsilon=0$.
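The transform maps $(\epsilon, 1-\epsilon)$ onto $\mathbb{R}$ and its inverse is $\epsilon + (1-2\epsilon)\,\mbox{expit}(y)$, as the following short round-trip check illustrates (the value of $\epsilon$ below is just the $0.1/p$ choice discussed later, with an illustrative $p$):

```python
import math

# The epsilon-logit transform used in the adaptation updates and its inverse.

def logit_eps(x, eps):
    return math.log(x - eps) - math.log(1 - x - eps)

def inv_logit_eps(y, eps):
    s = 1 / (1 + math.exp(-y))          # ordinary inverse logit
    return eps + (1 - 2 * eps) * s      # maps R back onto (eps, 1 - eps)

eps = 0.1 / 500                         # epsilon = 0.1/p with p = 500
for x in (0.001, 0.05, 0.5, 0.999):
    assert eps < x < 1 - eps
    assert abs(inv_logit_eps(logit_eps(x, eps), eps) - x) < 1e-9
# with eps = 0 this is the usual logit transform
assert abs(logit_eps(0.5, 0.0)) < 1e-12
```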
\subsection{Remembrance of Things Past: Exploratory Individual Adaptation}
The first adaptive strategy is a general purpose method that we term {\it Exploratory Individual Adaptation} (EIA). It aims to find pairs $(A_j, D_j)$ on the line segment defined by \eqref{accept1}
which lead to good mixing. Proposals with larger values of $A_j$ and $D_j$ will tend to propose more changes to the included variables but will also tend to reduce the average acceptance probability or mutation rate. The method introduces two tuning parameters $\tau_L$ and $\tau_U$.
There are three types of updates for $A^{(i)}$ and $D^{(i)}$ which move towards the correct ratio $A_j/D_j$ and then along the segment (note that the slope of the segment is not known in practice, as it depends on $\pi_j$). Unless otherwise stated, $A^{(i+1)}_j=A^{(i)}_j$ and $D^{(i+1)}_j=D^{(i)}_j$:
\begin{enumerate}
\item Both the \emph{expansion step} and the \emph{shrinkage step} change $A^{(i+1)}_j$ and $D^{(i+1)}_j$ for $j$ in
$\gamma^{A(i)}$ and $\gamma^{D(i)}$ to adjust the average squared jumping distance whilst maintaining that
$A^{(i+1)}_j / D^{(i+1)}_j \approx A^{(i)}_j / D^{(i)}_j$.
The expansion step is used if a promising move is proposed (if $a_i>\tau_U$) and sets $A^{(i+1)}_j$ and $D^{(i+1)}_j$ larger than $A^{(i)}_j$ and $D^{(i)}_j$ respectively. Similarly, the shrinkage step is used if an unpromising move has been proposed (if $a_i<\tau_L$) and $A^{(i+1)}_j$ and $D^{(i+1)}_j$ are set smaller than $A^{(i)}_j$ and $D^{(i)}_j$.
\item The \emph{correction step} aims to increase the average acceptance rate by correcting the ratio between $A$'s and $D$'s. If $\tau_L<a_i<\tau_U$, we set $A^{(i+1)}_j>A^{(i)}_j$ and $D^{(i+1)}_j<D^{(i)}_j$
if $\gamma^{D(i)}_j=1$ and $A^{(i+1)}_j<A^{(i)}_j$
and
$D^{(i+1)}_j>D^{(i)}_j$ if $\gamma^{A(i)}_j=1$.\end{enumerate}
The gradient fields of these updates are shown in Figure \ref{gradient} (Supplementary Material \ref{add_fig}). These three moves can be combined into the following adaptation of $A^{(i)}$ and $D^{(i)}$
\begin{eqnarray}
{\mbox{logit}}_{\epsilon}A^{(i+1)}_j& = &{\mbox{logit}}_{\epsilon}A^{(i)}_j
+
\phi_i \left(
\gamma^{A(i)}_j d_i(\tau_U) +\, \gamma^{D(i)}_j
d_i(\tau_L)
- \gamma^{A(i)}_j (1-d_i(\tau_U))\right), \label{eqn:adap_A}
\\
{\mbox{logit}}_{\epsilon}D^{(i+1)}_j & = &{\mbox{logit}}_{\epsilon}D^{(i)}_j
+
\phi_i \left(
\gamma^{D(i)}_j d_i(\tau_U)
+\, \gamma^{A(i)}_j
d_i(\tau_L)
- \gamma^{D(i)}_j (1 - d_i(\tau_U))\right), \label{eqn:adap_B}
\end{eqnarray}
for $j=1,\dots,p$ where $d_i(\tau) =
\mathbb{I}{\big\{a_i\geq \tau\big\}}$ and
$\phi_i=O(i^{-\lambda})$ for some constant $1/2<\lambda\leq 1$. The transformation implies that $\epsilon<A_j^{(i)}<1-\epsilon$ and
$\epsilon<D_j^{(i)}<1-\epsilon$ and we assume that $0<\epsilon<1/2$. It also implies diminishing adaptation (essentially since the derivative of the inverse logit is bounded, see Lemma \ref{lemma:diminishing_adaptation}). Based on several simulation studies, we suggest taking $\tau_L = 0.01$ and $\tau_U = 0.1$. As discussed in Section \ref{sec:optimising}, targeting a low acceptance rate is often beneficial in irregular cases, so we expect this choice to be robust in real data applications. In all our simulations with this parameter setting, the resulting mean acceptance rate was between $0.15$ and $0.35$, i.e. in the high efficiency region identified in \cite{RGG97}. We also suggest the initial choice of parameters such that $A_j^{(1)}/D_j^{(1)} \approx h/(1-h)$ as this summarises the prior information on $\pi_j / (1-\pi_j)$, and in particular $D_j^{(1)}\equiv 1$ and $A_j^{(1)}\equiv h$ often works well.
The parameter $\epsilon$ controls the minimum and maximum values of $A_j$ and $D_j$. In the large $p$ setting, $A_j\approx\epsilon$ for unimportant variables and the expected number of those unimportant variables proposed to be included at each iteration will be approximately $p\epsilon$ (since the number of excluded, unimportant variables will be close to $p$). This expected value can be controlled by choosing $\epsilon=0.1/p$.
The EIA algorithm is described in Algorithm~\ref{explore_IA} and we indicate its transition kernel at time $i$ as $P_{\eta^{(i)}}^{\textrm{EIA}}$.
\begin{algorithm}[!h]
\label{explore_IA}
\caption{Exploratory Individual Adaptation (EIA)}
\vspace*{-12pt}
\begin{tabbing}
\enspace for $i=1$ to $i=M$\\
\qquad sample $\gamma' \sim q_{\eta^{(i)}}(\gamma^{(i)}, \cdot) \;$ \textrm{and} $\; U \sim U(0,1);$\\
\qquad if {$\;U < a_{\eta^{(i)}}(\gamma^{(i)}, \gamma')\;$} then\ $\gamma^{(i+1)}:=\gamma'$, else
$ \gamma^{(i+1)}:=\gamma^{(i)}$ \\
\qquad \textrm{update $A^{(i+1)}$ using (\ref{eqn:adap_A}) and
$D^{(i+1)}$ using (\ref{eqn:adap_B})}\\
\enspace endfor
\end{tabbing}
\end{algorithm}
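A single adaptation step of \eqref{eqn:adap_A}--\eqref{eqn:adap_B} can be sketched as follows (the parameter values are illustrative, and this is a minimal transcription of the update rules rather than the authors' implementation):

```python
import math

# One EIA adaptation step: given the acceptance probability a_i of the last
# proposal and the indicator vectors gamma_A (proposed additions) and
# gamma_D (proposed deletions), update A and D on the logit_eps scale.

def logit_eps(x, eps):
    return math.log(x - eps) - math.log(1 - x - eps)

def inv_logit_eps(y, eps):
    return eps + (1 - 2 * eps) / (1 + math.exp(-y))

def eia_step(A, D, gamma_A, gamma_D, a_i, phi_i, eps, tau_L=0.01, tau_U=0.1):
    dU = 1.0 if a_i >= tau_U else 0.0   # d_i(tau_U)
    dL = 1.0 if a_i >= tau_L else 0.0   # d_i(tau_L)
    A_new, D_new = [], []
    for Aj, Dj, gA, gD in zip(A, D, gamma_A, gamma_D):
        yA = logit_eps(Aj, eps) + phi_i * (gA * dU + gD * dL - gA * (1 - dU))
        yD = logit_eps(Dj, eps) + phi_i * (gD * dU + gA * dL - gD * (1 - dU))
        A_new.append(inv_logit_eps(yA, eps))
        D_new.append(inv_logit_eps(yD, eps))
    return A_new, D_new

eps = 0.1 / 3
A, D = [0.05, 0.05, 0.05], [0.9, 0.9, 0.9]
# promising proposal (a_i > tau_U): expansion step, A and D grow for touched coords
A2, D2 = eia_step(A, D, [1, 0, 0], [0, 1, 0], a_i=0.5, phi_i=0.1, eps=eps)
assert A2[0] > A[0] and D2[1] > D[1] and abs(A2[2] - A[2]) < 1e-12
# unpromising proposal (a_i < tau_L): shrinkage step
A3, D3 = eia_step(A, D, [1, 0, 0], [0, 1, 0], a_i=0.001, phi_i=0.1, eps=eps)
assert A3[0] < A[0] and D3[1] < D[1]
```

When $\tau_L < a_i < \tau_U$ the same update reduces to the correction step, increasing $A_j$ and decreasing $D_j$ for proposed deletions and vice versa for proposed additions.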
\vspace*{-12pt}
\subsection{Remembrance of Things Past: Adaptively Scaled Individual Adaptation}
Algorithm~\ref{explore_IA} learns two parameters $A_j^{(i)}$ and $D_j^{(i)}$ for each variable and can be slow to converge to optimal values if $p$ is large. Alternatively, we could learn $\pi_1,\dots,\pi_p$ from the chain to approximate the slope of the line defined by \eqref{accept1}
and use the proposal \eqref{individually_scaled} with the same scale parameter for all variables. We term this approach the {\it Adaptively Scaled Individual Adaptation} (ASI) proposal. In particular, we use \begin{equation} A_j^{(i)}=\zeta^{(i)}\min\left\{1,\hat\pi^{(i)}_j/\left(1-\hat\pi^{(i)}_j\right)\right\} \quad \mbox{and} \quad D_j^{(i)}=\zeta^{(i)}\min\left\{1,\left(1-\hat\pi^{(i)}_j\right)/\hat\pi^{(i)}_j\right\}, \label{ASI_update} \end{equation}
for $j=1,\dots,p$ where $0<\zeta^{(i)}<1$ is a tuning parameter
and $\hat\pi^{(i)}_j$ is a Rao-Blackwellised estimate of the posterior inclusion probability of variable
$j$ at the $i$-th iteration.
Like
\cite{GhCl11}, we work with the Rao-Blackwellised estimate conditional on the model, marginalizing out $\alpha$, $\beta_{\gamma}$ and $\sigma^2$,
in contrast to
\cite{GuSt11} who condition on the model parameters. We assume that $V_{\gamma} = g I_{p_{\gamma}}$, where $I_q$ is the $q\times q$ identity matrix.
After $N$ posterior samples, $\gamma^{(1)},\dots,\gamma^{(N)}$,
the Rao-Blackwellised estimate of $\pi_j=p(\gamma_j=1\vert y)$ is
\begin{equation}\label{eq:RB}
\hat\pi_j = \frac{1}{N}
\sum_{k=1}^N
\frac{\tilde{h}_j^{(k)}\,\mbox{BF}_j\left(\gamma_{-j}^{(k)}\right)}{1-\tilde{h}_j^{(k)}+\tilde{h}_j^{(k)}\,\mbox{BF}_j\left(\gamma_{-j}^{(k)}\right)}
\end{equation}
where
$\tilde{h}_j^{(k)} = h$ if $h$ is fixed or $\tilde{h}_j^{(k)} = \frac{\#\gamma^{(k)}_{-j}+1+a}{p+a+b}$ if $h\sim{\mbox{Be}}(a,b)$.
Let $Z_{\gamma} = [{\bf 1}_n \ X_{\gamma}]$,
$\Lambda_{\gamma} = \left(
\begin{array} {cc}
0 & {\bf 0}_{p_{\gamma}}^T\\
{\bf 0}_{p_{\gamma}} & V^{-1}_{\gamma}
\end{array}
\right)$,
$F = (Z_{\gamma}^TZ_{\gamma} + \Lambda_{\gamma})^{-1}$
and $A = y^Ty - y^T Z_{\gamma} F Z_{\gamma}^T y$.
If $\gamma_j=0$,
\[
\mbox{BF}_j(\gamma_{-j})={d_j^{\uparrow}}^{-1/2}g^{-1/2}
\left(
\frac{A-\frac{1}{d_j^\uparrow}
(y^T x_j-y^T Z_{\gamma} F Z_{\gamma}^T x_j)^2}{A}
\right)^{-n/2}
\]
with $
d_j^\uparrow = x_j^T x_j + g^{-1} - (x_j^T Z_{\gamma})F(Z_{\gamma}^T x_j)$.
If $\gamma_j=1$, we define $z_j$ to be the position of variable $j$ among the included variables
($z_j=1$ if $j$ is the first included variable, and so on); then
\[
\mbox{BF}_{j}(\gamma_{-j})={d_j^{\downarrow}}^{-1/2}g^{-1/2}
\left(
\frac{A}{A +
{d_j^\downarrow}
(y^T Z_{\gamma}
F_{\cdot, z_j+1})^2}
\right)^{-n/2}
\]
where $
d_j^\downarrow = 1/F_{z_j+1,z_j+1}$.
These results allow the contribution to the Rao-Blackwellised estimates for all values of $j$ to be calculated in $O(p)$ operations at each iteration
if the values of $F$ and $A$ (which are needed for calculating the marginal likelihood) are stored. Derivations
are provided in Supplementary Material \ref{SM:RB}.
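The $O(p)$ cost rests on Schur-complement (rank-one) updates of the stored quantities $F$ and $A$ when a variable is added. These identities can be checked numerically on synthetic data; the sketch below uses arbitrary simulated values and verifies the update of $A$ and the determinant factorisation through $d_j^{\uparrow}$:

```python
import numpy as np

# Verify the rank-one updates behind the Rao-Blackwellisation formulas.
rng = np.random.default_rng(0)
n, q = 20, 3
Z = np.column_stack([np.ones(n), rng.standard_normal((n, q))])  # [1_n, X_gamma]
x_j = rng.standard_normal(n)                                    # candidate column
y = rng.standard_normal(n)
g = 9.0

Lam = np.diag([0.0] + [1 / g] * q)          # Lambda_gamma with V_gamma = g I
M = Z.T @ Z + Lam
F = np.linalg.inv(M)
A = y @ y - y @ Z @ F @ Z.T @ y

# quantity d_up = x_j'x_j + 1/g - x_j' Z F Z' x_j for adding variable j
d_up = x_j @ x_j + 1 / g - x_j @ Z @ F @ Z.T @ x_j

# recompute everything from scratch with x_j appended
Z2 = np.column_stack([Z, x_j])
Lam2 = np.diag([0.0] + [1 / g] * (q + 1))
F2 = np.linalg.inv(Z2.T @ Z2 + Lam2)
A2 = y @ y - y @ Z2 @ F2 @ Z2.T @ y

# rank-one update of A, as used in BF_j for gamma_j = 0
A2_update = A - (y @ x_j - y @ Z @ F @ Z.T @ x_j) ** 2 / d_up
assert abs(A2 - A2_update) < 1e-8
# determinant update: det(Z2'Z2 + Lam2) = d_up * det(Z'Z + Lam)
assert abs(np.linalg.det(Z2.T @ Z2 + Lam2) / (d_up * np.linalg.det(M)) - 1) < 1e-8
```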
The value of $\zeta^{(i)}$ is updated using
\begin{equation}
\label{eqn:adap_C}
{\mbox{logit}}_{\epsilon} \zeta^{(i+1)} = {\mbox{logit}}_{\epsilon} \zeta^{(i)} + \phi_i (a_i - \tau),
\end{equation}
where $\tau$ is a targeted acceptance rate. We use $\epsilon=0.1/p$ as in Algorithm 1. We shall see (in Lemma \ref{lemma:diminishing_adaptation}) that ASI also satisfies diminishing adaptation by verifying that the Rao-Blackwellised estimate in \eqref{eq:RB} evolves at the rate $1/i$ and reiterating the argument about inverse logit derivatives.
To avoid proposing to change no variable with high probability, we set $\zeta^{(i+1)} = 1/\Delta^{(i+1)}$ if $\zeta^{(i+1)}\Delta^{(i+1)}<1$ where $\Delta^{(i+1)}=2\sum_{j=1}^p \min\{\pi_j^{(i+1)},1-\pi_j^{(i+1)}\}$. This ensures that the algorithm will propose to change at least one variable with high probability. The ASI algorithm is described in Algorithm~\ref{explore_ISA} and we indicate its transition kernel at time $i$ as $P_{\eta^{(i)}}^{\textrm{ASI}}$. We use $\kappa = 0.001$ to avoid the estimated probabilities becoming very small.
\begin{algorithm}[h!]
\label{explore_ISA}
\caption{Adaptively Scaled Individual Adaptation (ASI)}
\vspace*{-12pt}
\begin{tabbing}
\enspace for $i=1$ to $i=M$\\
\qquad sample $\gamma' \sim q_{\eta^{(i)}}(\gamma^{(i)}, \cdot) \;$ \textrm{and} $\; U \sim U(0,1);$\\
\qquad if {$\;U < a_{\eta^{(i)}}(\gamma^{(i)}, \gamma')\;$} then\
$\gamma^{(i+1)}:=\gamma'$, else $ \gamma^{(i+1)}:=\gamma^{(i)}$ \\
\qquad Update $\hat\pi_1^{(i+1)},\dots,\hat\pi_p^{(i+1)}$ as in (\ref{eq:RB}) and
set $\tilde\pi_j^{(i+1)}=\kappa + (1 - 2\kappa)\, \hat\pi_j^{(i+1)}$\\
\qquad Update $\zeta^{(i+1)}$ as in \eqref{eqn:adap_C} \\
\qquad Calculate
$A_j^{(i+1)}=\zeta^{(i+1)}\min\left\{1,\tilde\pi^{(i+1)}_j/\left(1-\tilde\pi^{(i+1)}_j\right)\right\}$ for $j=1,\dots,p$\\
\qquad Calculate $D_j^{(i+1)}=\zeta^{(i+1)}\min\left\{1,\left(1-\tilde\pi^{(i+1)}_j\right)/\tilde\pi^{(i+1)}_j\right\}$ for $j=1,\dots,p$
\\
\enspace endfor
\end{tabbing}
\end{algorithm}
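One ASI step, combining the $\zeta$ update \eqref{eqn:adap_C}, the guard against empty proposals and the recomputation \eqref{ASI_update}, can be sketched as follows (the target rate $\tau$, inclusion estimates and other values below are illustrative):

```python
import math

# One ASI adaptation step; the kappa-shrunk inclusion estimates pi_tilde
# are assumed given (in the algorithm they are Rao-Blackwellised estimates).

def logit_eps(x, eps):
    return math.log(x - eps) - math.log(1 - x - eps)

def inv_logit_eps(y, eps):
    return eps + (1 - 2 * eps) / (1 + math.exp(-y))

def asi_step(zeta, pi_tilde, a_i, phi_i, eps, tau=0.234):
    # zeta moves up if the last acceptance probability exceeds the target tau
    zeta = inv_logit_eps(logit_eps(zeta, eps) + phi_i * (a_i - tau), eps)
    # guard against proposing to change no variable
    Delta = 2 * sum(min(p, 1 - p) for p in pi_tilde)
    if zeta * Delta < 1:
        zeta = 1 / Delta
    A = [zeta * min(1, p / (1 - p)) for p in pi_tilde]
    D = [zeta * min(1, (1 - p) / p) for p in pi_tilde]
    return zeta, A, D

pi_tilde = [0.6, 0.4, 0.9, 0.2]
zeta, A, D = asi_step(0.5, pi_tilde, a_i=0.5, phi_i=0.1, eps=1e-4)
# the acceptance ratio condition A_j/D_j = pi_j/(1-pi_j) is preserved
assert all(abs(a / d - p / (1 - p)) < 1e-9
           for a, d, p in zip(A, D, pi_tilde))
```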
\vspace*{-14pt}
\section{Ergodicity of the Algorithms} \label{sec:ergodcity}
Since adaptive Markov chain Monte Carlo algorithms violate the Markov condition, the standard and well-developed Markov chain theory cannot be used to establish ergodicity and we need to derive appropriate results for our algorithms.
We verify validity of our algorithms by establishing conditions introduced in \cite{MR2340211}, namely simultaneous uniform ergodicity and diminishing adaptation.
The target posterior specified in Section \ref{sec:setting} on the model space $\Gamma$ is
\begin{equation}\label{eqn:target}
\pi_p(\gamma) = \pi_p(\gamma\mid y) \propto p(y|\gamma)p(\gamma)
\end{equation} with $p(y|\gamma)$ available analytically, and the vector of adaptive parameters at time $i$ is \begin{equation} \label{parameters}
\eta^{(i)} = (A^{(i)},D^{(i)}) \;\; \in \;\; [\epsilon, 1-\epsilon]^{2p} =: \Delta_{\epsilon}, \quad \textrm{with} \quad 0 < \epsilon < 1/2, \end{equation}
with the update strategies in Algorithm 1 or 2.
$P_{\eta}(\gamma, \cdot)$ denotes the non-adaptive Markov chain kernel corresponding to a fixed choice of $\eta$.
Under the dynamics of either algorithm, for $S \subseteq \Gamma$ we have
\begin{eqnarray}\nonumber
P_{\eta}(\gamma, S) & = &
\mathbb{P}\Big[\gamma^{(i+1)} \in S \, \Big| \, \gamma^{(i)}= \gamma, \eta^{(i)} =\eta \Big]
\\ & = & \sum_{\gamma' \in S }q_{\eta}(\gamma, \gamma') a_{\eta}(\gamma, \gamma') + \mathbb{I}{\{\gamma \in S\}} \sum_{\gamma' \in \Gamma}
q_{\eta}(\gamma, \gamma')\big(1- a_{\eta}(\gamma, \gamma') \big). \label{trans_kernel}
\end{eqnarray}
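The kernel \eqref{trans_kernel} can be built explicitly on a small model space and checked against the basic Metropolis-Hastings properties. The sketch below assumes the product-form proposal (each excluded variable proposed for addition with probability $A_j$, each included one for deletion with probability $D_j$); all parameter values are hypothetical:

```python
from itertools import product as iproduct

# Build the full transition matrix P_eta on Gamma = {0,1}^p and check that
# each row sums to one and that pi_p is stationary (via detailed balance).

p = 3
pi_incl = [0.2, 0.5, 0.7]           # hypothetical inclusion probabilities
A = [0.3, 0.6, 0.8]                 # an arbitrary eta = (A, D)
D = [0.5, 0.4, 0.9]

Gamma = list(iproduct([0, 1], repeat=p))

def target(g):                      # product-form pi_p(gamma)
    out = 1.0
    for gj, pj in zip(g, pi_incl):
        out *= pj if gj else 1 - pj
    return out

def q(g, g2):                       # product-form proposal probability
    out = 1.0
    for gj, g2j, Aj, Dj in zip(g, g2, A, D):
        if gj == 0:
            out *= Aj if g2j == 1 else 1 - Aj
        else:
            out *= Dj if g2j == 0 else 1 - Dj
    return out

def accept(g, g2):                  # Metropolis-Hastings acceptance probability
    return min(1.0, target(g2) * q(g2, g) / (target(g) * q(g, g2)))

P = [[q(g, g2) * accept(g, g2) if g != g2 else 0.0 for g2 in Gamma] for g in Gamma]
for i in range(len(Gamma)):         # rejection mass stays on the diagonal
    P[i][i] = 1 - sum(P[i])
    assert abs(sum(P[i]) - 1) < 1e-12
for i, g in enumerate(Gamma):       # detailed balance => pi_p is stationary
    for j, g2 in enumerate(Gamma):
        assert abs(target(g) * P[i][j] - target(g2) * P[j][i]) < 1e-12
```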
In the case of multiple chain acceleration, where $L$ copies of the chain are run, the model state space becomes the product space and the current state of the algorithm at time $i$ is $\gamma^{\otimes L,\, (i)} = (\gamma^{1,\, (i)},\dots, \gamma^{L,\, (i)}) \in \Gamma^L$. The single chain version corresponds to $L=1$ and all results apply.
To assess ergodicity, we need to define the distribution of the adaptive algorithm at time $i$, and the associated total variation distance: for the $l$-th copy of the chain $\{\gamma^{l, (i)}\}_{i=0}^{\infty}$ and $S \subseteq \Gamma$ define
\begin{align*}
\mathcal{L}^{l,(i)}\big[(\gamma^{l}, \eta), S \big] & \; := \; \mathbb{P}\Big[\gamma^{l,\, (i)} \in S \, \Big| \, \gamma^{l,\, (0)} = \gamma^{l}, \eta^{(0)} =\eta \Big], \quad \textrm{and}
\\
T^l(\gamma^{l}, \eta,i) & \; := \; \| \mathcal{L}^{l,(i)}\big[(\gamma^{l}, \eta), \cdot \big] - \pi_p(\cdot) \|_{TV}
\; = \; \sup_{S \in \Gamma}|\mathcal{L}^{l,(i)}\big[(\gamma^{l}, \eta), S \big] - \pi_p(S)|.
\end{align*}
We show that all the considered algorithms are ergodic and satisfy a strong law of large numbers (SLLN), {\it i.e.}~for any starting point $\gamma^{\otimes L} \in \Gamma^L$ and any initial parameter value $\eta \in \Delta_{\epsilon}$, we have:
\begin{eqnarray}
\label{eq_thm:IA_erg}
\textrm{ergodicity:}& & \quad \lim_{i \to \infty} T^l(\gamma^{l}, \eta,i) \; = \; 0, \quad \textrm{for any} \; l=1, \dots, L;\qquad \textrm{and}
\\
\label{eq_thm:IA_WLLN} \textrm{SLLN:}& & \quad {1 \over L} \sum_{l=1}^L
{1 \over k}\sum_{i=1}^{k} f(\gamma^{l,\, (i)}) \; \stackrel {k\to\infty}{\longrightarrow} \; \pi_p(f) \quad \textrm{almost surely, for any} \; f: \Gamma \to \mathbb{R}.
\end{eqnarray}
To this end we first establish the following lemmas.
\begin{lemma}[Simultaneous Uniform Ergodicity]\label{lem:SUE}
The family of Markov chains defined by transition kernels $P_{\eta}$ in \eqref{trans_kernel}, targeting $\pi_p(\gamma)$ in \eqref{eqn:target}, is simultaneously uniformly ergodic for any $\epsilon > 0$ in \eqref{parameters}, and so is its multichain version. That is, for any $\delta > 0$ there exists $N=N(\delta,\epsilon) \in \mathbb{N},$ such that for any starting point $\gamma^{\otimes L} \in \Gamma^L$ and any parameter value $\eta \in \Delta_{\epsilon}$
\[
\|P_{\eta}^{N}(\gamma^{\otimes L}, \cdot) - \pi_p^{\otimes L}(\cdot)\|_{TV} \leq \delta.
\]
\end{lemma}
\begin{lemma}[Diminishing Adaptation]\label{lemma:diminishing_adaptation}
Recall the constant $1/2 < \lambda \leq 1$ defining the adaptation rate $\phi_i=O(i^{-\lambda})$ in \eqref{eqn:adap_A}, \eqref{eqn:adap_B}, or
\eqref{eqn:adap_C}, and the parameter $\kappa >0$ in Algorithm 2. Then both algorithms: EIA and ASI satisfy diminishing adaptation. More precisely, their transition kernels satisfy
\begin{eqnarray}\label{eq:dimini}
\sup_{\gamma \in \Gamma} \|P_{\eta^{(i+1)}}^{\bullet}(\gamma, \cdot) - P_{\eta^{(i)}}^{\bullet}(\gamma, \cdot)\| & \leq & C i^{-\lambda} ,\quad \textrm{for some} \; C<\infty,
\end{eqnarray}
where $\bullet$ stands for EIA or ASI.
\end{lemma}
Simultaneous uniform ergodicity together with diminishing adaptation leads to the following
\begin{theorem}[Ergodicity and SLLN] \label{thm:IA-PT_erg} Consider the target $\pi_p(\gamma)$ of \eqref{eqn:target}, the constants $1/2 < \lambda \leq 1$ and $\epsilon > 0$ defining respectively the adaptation rate $\phi_i=O(i^{-\lambda})$ and region in \eqref{eqn:adap_A}, \eqref{eqn:adap_B}, or
\eqref{eqn:adap_C}, and the parameter $\kappa >0$ in Algorithm 2. Then ergodicity \eqref{eq_thm:IA_erg}
and the strong law of large numbers
\eqref{eq_thm:IA_WLLN} hold for each of the algorithms: EIA, ASI and their multiple chain acceleration versions.
\end{theorem}
\begin{remark} Lemma \ref{lemma:diminishing_adaptation} and Theorem \ref{thm:IA-PT_erg} remain true with any $\lambda > 0$, however $\lambda >1$ results in finite adaptation (see e.g. \cite{MR2340211}), and $\lambda < 1/2$ is rarely used in practice due to finite-sample stability concerns.
\end{remark}
Proofs can be found in Supplementary Material \ref{Proofs}. A comprehensive analysis of the algorithms for other generalised linear models or for linear models using non-conjugate prior distributions requires a case-by-case treatment, and is beyond the scope of this paper. However, if the prior distributions of additional parameters are continuous, supported on a compact set and everywhere positive, establishing ergodicity will typically be possible with some technical care.
\section{Results}
\subsection{Simulated Data} \label{sec_sim_data}
We consider the simulated data example of \cite{YaWaJo16}.
They assume that there are $n$ observations and $p$ regressors and the data is generated from the model
\[
Y = X\beta^{\star} + e
\]
where $e\sim{\mbox{N}}(0,\sigma^2 I)$ for $\sigma^2=1$. The first 10 regression coefficients are non-zero and we use
\[
\beta^{\star}=\mbox{SNR}\sqrt{\frac{\sigma^2\log p}{n}}
(2, -3, 2, 2, -3, 3, -2, 3, -2, 3, 0, \dots, 0)^T\in\mathbb{R}^p.
\]
The $i$-th vector of regressors is generated as $x_i\sim{\mbox{N}}(0,\Sigma)$ where $\Sigma_{jk}=\rho^{\vert j - k\vert}$. In our examples, we use the value $\rho=0.6$ which represents a relatively large correlation between the regressors.
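This data-generating process can be sketched as follows (with small illustrative $n$ and $p$ rather than the values used in the study):

```python
import numpy as np

# Simulated-data design of this section: beta* has ten non-zero entries
# scaled by the SNR, regressors are correlated Gaussians with
# Sigma_jk = rho^|j-k|.

def simulate(n, p, SNR, rho=0.6, sigma2=1.0, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:10] = SNR * np.sqrt(sigma2 * np.log(p) / n) * \
        np.array([2, -3, 2, 2, -3, 3, -2, 3, -2, 3])
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    return X, y, beta

X, y, beta = simulate(n=100, p=50, SNR=2)
assert X.shape == (100, 50) and y.shape == (100,)
assert np.count_nonzero(beta) == 10
```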
We are interested in the performance of the two adaptive algorithms (EIA and ASI) relative to an add-delete-swap algorithm.
We define the ratio of the relative time-standardized effective sample size of algorithm $A$ versus algorithm $B$ to be
$
r_{A,B} = ({\mbox{ESS}_A/t_A})/({\mbox{ESS}_B/t_B})
$
where $\mbox{ESS}_A$ is the effective sample size for algorithm $A$. This is estimated by making 200 runs of each algorithm and calculating
$
\hat{r}_{A,B} =({s^2_Bt_B})/({s^2_At_A}),
$
where $t_A$ and $t_B$ are the median run-times and $s^2_A$ and $s^2_B$ are the sample variances of the posterior inclusion probabilities for algorithms $A$ and $B$.
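Since the effective sample size is inversely proportional to the sampling variance of the estimates, the estimator $\hat{r}_{A,B}$ reduces to variance and run-time ratios, as the following minimal sketch with made-up numbers illustrates:

```python
# The estimator r_hat_{A,B} = (s2_B * t_B) / (s2_A * t_A).

def r_hat(s2_A, t_A, s2_B, t_B):
    return (s2_B * t_B) / (s2_A * t_A)

# algorithm A: half the variance of B in the same run time -> twice as efficient
assert abs(r_hat(s2_A=0.01, t_A=10.0, s2_B=0.02, t_B=10.0) - 2.0) < 1e-12
# same variance but A takes twice as long -> half as efficient
assert abs(r_hat(s2_A=0.01, t_A=20.0, s2_B=0.01, t_B=10.0) - 0.5) < 1e-12
```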
We use the prior in (\ref{prior}) with $V_\gamma=9 I$ and $h=10/p$, implying a prior mean model size of 10.
The posterior distribution changes substantially with the SNR and the size of the data set.
All ten true non-zero coefficients are given posterior inclusion probabilities greater than 0.9 in the two high SNR scenarios (SNR=2 and SNR=3) for each value of $n$ and $p$ and no true non-zero coefficients are given posterior inclusion probabilities greater than 0.2 in the low SNR scenario (SNR=0.5) for each value of $n$ and $p$. In the intermediate SNR scenario (SNR=1), the numbers of true non-zero coefficients given posterior inclusion probabilities greater than 0.9 are 4 and 8 for $p=500$ and 3 and 0 for $p=5000$.
Generally, the results are consistent with our intuition that true non-zero regression coefficients should be detected with greater posterior probability for larger SNR, larger value of $n$ and smaller value of $p$.
Table~\ref{gene_comp_5} shows the median relative time-standardized effective sample sizes for the EIA and ASI algorithms with 5 or 25 multiple chains for different combinations of $n$, $p$ and SNR. The median is taken across the estimated relative time-standardized effective sample sizes for all posterior inclusion probabilities.
{\footnotesize
\begin{table}[h!]
\caption{\small Simulated data: median values of $\hat{r}_{A,B}$ for the posterior inclusion probabilities over all variables where $B$ is the standard Metropolis-Hastings algorithm and $A$ is either the exploratory individual adaptation (EIA) or adaptively scaled individual adaptation (ASI) algorithm}\label{gene_comp_5}
\begin{center}
\begin{tabular}{ccrrrrrrrr}
&& \multicolumn{4}{c}{5 chains} & \multicolumn{4}{c}{25 chains}\\[5pt]
& & \multicolumn{4}{c}{SNR} & \multicolumn{4}{c}{SNR}\\
$(n,p)$ & & 0.5 & 1 & 2 & 3 & 0.5 & 1 & 2 & 3\\[5pt]
(500, 500) & EIA & 4.9 & 1.8 & 5.5 & 5.1 & 1.2 & 1.5 & 2.4 & 2.3\\
& ASI & 1.7 & 21.3 & 31.8 & 7.5 & 2.0 & 36.0 & 42.7 & 12.6 \\
(500, 5000) & EIA & 8.7 & 2.2 & 718.0 & 81.5 & 7.1 & 2.9 & 2267.2 & 147.2 \\
& ASI & 29.9 & 126.9 & 2053.1 & 2271.3 & 53.5 & 353.3 & 12319.5 & 7612.3 \\
(1000, 500) & EIA & 5.9 & 16.3 & 7.7 & 4.2 & 1.6 & 80.7 & 4.4 & 1.8\\
& ASI & 41.9 & 2.1 & 16.9 & 12.0 & 32.8 & 34.0 & 27.9 & 14.4 \\
(1000, 5000) & EIA & 2.2 & 2.2 & 9167.2 &11.3 & 5.6 & 2.5 & 15960.7 & 199.8 \\
& ASI & 15.4 & 37.0 & 4423.1 & 30.8 & 54.9 & 53.4 & 11558.2 & 736.4 \\
\end{tabular}
\end{center}
\end{table}}
Clearly, the ASI algorithm outperforms the EIA algorithm for most settings with either 5 or 25 multiple chains. The performance of the EIA and, especially, the adaptively scaled individual adaptation algorithm with 25 chains is better than the corresponding performance with 5 chains for most cases. Concentrating on results with the ASI algorithm, the largest increase in performance compared to a simple Metropolis-Hastings algorithm occurs with SNR=2. In this case, there are three or four orders of magnitude improvements when $p=5000$ and several orders of magnitude improvements for other SNR with $p=5000$. In smaller problems with $p=500$, there are still substantial improvements in efficiency over the simpler Metropolis-Hastings sampler.
The superior performance of the ASI algorithm (which has one tuneable parameter) over the EIA algorithm (which has $2p$ tuneable parameters) is due to the substantially faster convergence of the tuning parameters of the ASI algorithm to optimal values. Plotting posterior inclusion probabilities against $A$ and $D$ at the end of a run shows that, in most cases, the values of $A_j$ are close to the corresponding posterior inclusion probabilities for both algorithms. However, the values of $D_j$ are mostly close to 1 for ASI but not for EIA.
If $D_j$ is close to 1, then variable $j$ is highly likely to be proposed to be removed if already included in the model. This is consistent with the idealized super-efficient setting (ii) in Proposition~\ref{prop_AD_choices} for $\pi_j<0.5$ and leads to improved mixing rates for small $\pi_j$ since it allows that variable to be included more often in a fixed run length. This is hard to learn through individual adaptation (since variables with low posterior inclusion probabilities will be rarely included in the model and so the algorithm learns the $D_j$ slowly for those variables) whereas the Rao-Blackwellised estimates can often quickly determine which variables have low posterior inclusion probabilities.
\subsection{Behaviour of the exploratory individual adaptation algorithm on the Tecator data} \label{sec_tecator}
The Tecator data contains 172 observations and 100 variables. They have been previously analysed using Bayesian linear regression techniques by \cite{gribro10}, who give a description of the data, and
\cite{lamnisos12}. The regressors show a high degree of multi-collinearity and so this is a challenging example for Bayesian variable selection algorithms.
The prior used was (\ref{prior}) with $V_\gamma=100 I$ and $h=5/100$. Even short runs of the EIA algorithm for this data, such as 5 multiple chains with 3000 burn in and 3000 recorded iterations, taking about $5$ seconds on a laptop, show consistent convergence across runs.
Our purpose was to study the adaptive behaviour of the EIA algorithm on this real data example, in particular to compare the idealized values of the $A_j$'s and $D_j$'s with the values attained by the algorithm.
We use multiple chain acceleration with 50 multiple chains over the total of 6000 iterations (without thinning). The algorithm parameters were set to $\tau_L = 0.01$ and $\tau_U= 0.1$. The resulting mean acceptance rate was approximately $0.2$ indicating close to optimal efficiency. The average number of variables proposed to be changed in a single accepted proposal was $23$, approximately twice the average model size, meaning that in a typical move all of the current variables were deleted from the model, and a set of completely fresh variables was proposed.
Figure \ref{AD_conv}(a) in the Supplementary Material shows how the EIA algorithm approximates
setting~(ii) of Proposition~\ref{prop_AD_choices}, namely the super-efficient sampling from the idealized posterior \eqref{our-target}.
Figure \ref{AD_conv}(b) illustrates how the attained values of $A_j$'s somewhat overestimate the idealized values $\min\{1, {\pi_j/(1-\pi_j)}\}$ of setting~(ii) in Proposition~\ref{prop_AD_choices}. This indicates that the chosen parameter values $\tau_L=0.01$ and $\tau_U=0.1$ of the algorithm overcompensate for dependence in the posterior, which is not very pronounced for this dataset. To quantify the performance, we ran both algorithms with adaptation in the burn-in only and calculated the effective sample size. With a burn-in of 10\ 000 iterations and 30\ 000 draws, the effective sample size per chain was 4015 with EIA and 6673 with ASI. This is an impressive performance for both algorithms given the multicollinearity in the regressors. The difference in performance can be explained by the speed of convergence to optimal values for the proposal. To illustrate this, we re-ran the algorithms with the burn-in extended to 30\ 000 iterations:
the effective sample size per multiple chain was now 4503 with EIA but 6533 with ASI, indicating that the first algorithm had caught up somewhat. As a comparison, the effective sample size was 1555 for add-delete-swap and 15\ 039 for the Hamming ball sampler with a burn-in of 10\ 000 iterations. However, the Hamming ball sampler required 34 times the run time of the EIA sampler, rendering the latter nine times more efficient in terms of time-standardized effective sample size.
This example and the previous one show that the simplified posterior \eqref{our-target} is a good fit with many datasets and can indeed be used to guide and design algorithms.
\subsection{Performance on problems with moderate $p$}\label{sec_Shafer}
We consider three more data sets with relatively small values for $p$ (around 100) and high dependencies between the covariates, used to showcase the sequential Monte Carlo method proposed in \cite{ScCh11}. They are the Boston Housing data ($n=506$, $p=104$), the concrete data ($n=1030$, $p=79$) and the protein data ($n=96$, $p=88$), which were constructed by \cite{ScCh11} to lead to challenging, multi-modal posterior mass functions.
Further details about the data can be found in \cite{ScCh11}. We focus here on the comparison of the ASI and the EIA algorithms with the add-delete-swap algorithm and the sequential Monte Carlo algorithm of \cite{ScCh11}, also considering parallel tempering versions of the first three algorithms. In addition, we consider two recently proposed methods for high-dimensional variable selection: the Hamming Ball sampler \citep{TitsiasYau} and the Ji-Schmidler adaptive sampler \citep{JiSchmidler}.
Unlike \cite{ScCh11}, we adopt the prior (\ref{prior}) with $V_\gamma=100 I$ and $h=5/100$, while allowing for any combination of main effects and interactions.
We use the method of \cite{ScCh11} for data visualization of the variation in the posterior marginal inclusion probabilities using boxplots. All algorithms were run for the same amount of time, with 200 replicate runs for each data set. For each variable, the white box contains the central 80\% of results and the black boxes show the upper and lower 10\% most extreme values. The coloured bars cover 0 up to the smallest recorded posterior inclusion probability across all runs. The results (also including other algorithms) are shown in the Supplementary Figure~\ref{results:boston} (Boston housing), Figure~\ref{results:concrete} (concrete) and Figure~\ref{results:protein} (protein). In the Supplementary material (Figure \ref{results:tecator}), we also include results for the Tecator data.
There are clear variations across the data sets with
the protein and Tecator data leading to very consistent results whereas there were much greater variations in the inclusion probabilities for the Boston Housing and concrete data sets. Parallel tempering is most helpful for the ASI, a bit less so for the EIA algorithms while the add-delete-swap sampler only benefits somewhat from this modification.
The ASI, EIA, Sch\"afer-Chopin, Hamming Ball and add-delete-swap algorithms all provide similar levels of accuracy, whereas the parallel tempering versions of the ASI and add-delete-swap algorithms provide the most accurate results. This is likely due to the multi-modality in the posterior distribution which is better addressed by parallel tempering than the annealing in the Sch\"afer-Chopin algorithm. For all cases, the Ji-Schmidler sampler performs the worst by some margin.
\subsection{Performance on problems with very large $p$} \label{sec_PCR}
\cite{BoRe12} described a variable selection problem with 22\ 576 variables and 60 observations on two inbred mouse populations. The covariates are gender and gene expression measurements for 22\ 575 genes. Using quantitative real-time polymerase chain reaction (PCR) three physiological phenotypes are recorded, and used as the response variable in the three data sets called PCR$i, i=1,\dots,3$.
We use prior (\ref{prior}) with $V_{\gamma}=g I$, where $g$ is given a half-Cauchy hyper-prior distribution, and a hierarchical prior is used for $\gamma$ by assuming that $h\sim{\mbox{Be}}(1, (p-5)/5)$, which implies that the prior mean number of included variables is 5.
A fourth data set (SNP data) relates to genome-wide mapping of a complex trait \citep{CaZhSt17}. The data are body and weight measurements for 993 outbred mice and 79\ 748 single nucleotide polymorphisms (SNPs) recorded for each mouse. The testis weight is the response, the body weight is a regressor which is always included in the model and variable selection is performed on the 79\ 748 SNPs. The high dimensionality makes this a difficult problem and \cite{CaZhSt17} use a variational inference algorithm (varbvs) for their analysis. We have used various prior specifications in (\ref{prior}), and present results for a half-Cauchy hyper-prior on $g$ and $h=5/p$.
For all four datasets, the individual adaptation algorithms were run with $\tau_L=0.05$ and $\tau_U=0.23$, and $\tau=0.234$.
The EIA algorithm had a burn-in of 2\ 150 iterations and 10\ 750 subsequent iterations, and the ASI had 500 burn-in and 2\ 500 recorded iterations, in both cases without thinning (giving very similar run times). Rao-Blackwellised updates of $\pi^{(i)}$ were only used in the burn-in and the posterior inclusion probability for the $j$-th variable was estimated by the mean of the posterior sample of $\gamma_j$.
In addition, we use the add-delete-swap algorithm, with starting models chosen from the prior as well as a version of this algorithm started in the model suggested by the least absolute shrinkage and selection operator, the Hamming Ball sampler with radius 1 and the Sch\"afer-Chopin algorithm. Three independent runs of all algorithms were executed to gauge the degree of agreement across runs. Using MATLAB and an Intel i7 @ 3.60 GHz processor, each algorithm took approximately 25 minutes to run for the PCR data and around 2.5 hours for the SNP data.
Figures~\ref{pcr11:comp_rand_g}-\ref{mice:comp_rand_g_fixed_h} in Supplementary Material \ref{add_fig_real} show the pairwise comparisons between the different runs for all data sets. The estimates from each independent chain for the ASI algorithm and for its parallel tempered version are very similar and indicate that the sampler is able to accurately represent the posterior distribution. The EIA algorithm does not seem to converge rapidly enough to effectively deal with these very high-dimensional model spaces in the relatively modest running time allocated.
Clearly, the other samplers are not able to adequately characterise the posterior model distribution with runs leading to dramatically different results, especially for the PCR data. For the SNP data, the add-delete-swap method does not do too badly, but provides substantially more variable estimates of the posterior inclusion probabilities than the ASI method. Starting the add-delete-swap algorithm in the model selected by the least absolute shrinkage and selection operator never helps, and can actually harm the performance.
\section{Conclusion}
This paper introduces two adaptive Markov chain Monte Carlo algorithms for variable selection problems with very large $p$ and small $n$. We recommend the adaptively scaled individual adaptation proposal, which is able to quickly find good proposals. This method uses a Rao-Blackwellised estimate of the posterior inclusion probability for each variable in an independent proposal.
On simulated data this algorithm shows orders of magnitude improvements in effective sample size compared to the standard Metropolis-Hastings algorithm. The method is also applied to genetic data with 22\ 576 and 79\ 748 variables and shows excellent agreement in the posterior inclusion probabilities across independent runs of the algorithm, unlike the existing methods we have tried.
We find that multiple independent chains with a shared proposal lead to better convergence to the optimal parameter values and parallel tempering helps to deal with multimodal posteriors. For smaller data sets (say $p<500$), the exploratory individual adaptation algorithm also performs very well. Code to run both algorithms is available from\par
\noindent{\smaller\url{https://warwick.ac.uk/go/msteel/steel_homepage/software/version3.0.zip}}.
There are a number of possible directions for future research. We have only considered serial implementations of our algorithms in this paper. However, the algorithms are naturally parallelizable across the multiple chains but work is needed on efficient updating of the shared adaptive parameters.
Finally, it will be interesting to apply these algorithms to more complicated data which may have a non-Gaussian likelihood or a more complicated
prior distribution.
\section*{Acknowledgements}
K{\L} acknowledges support of the Royal Society through the Royal Society University Research Fellowship and of EPSRC. The authors thank two anonymous referees and an associate editor for their insightful comments that helped improve the paper.
\bibliographystyle{Chicago}
\section{Derivation of the TDVP equation}
The basic idea of a TDVP is to minimize the distance
\begin{equation}
\mathcal{D}\left(P_{\theta(t)} + \dot{P}_{\theta(t)} \tau, P_{\theta(t)} + \sum_k \frac{\partial P_{\theta(t)}}{\partial \theta_k}\dot{\theta}_k \tau\right),
\end{equation}
between the evolved state at time $t+\tau$ and the network with a set of yet unknown update parameters $\dot{\theta}$ at each time $t$.
Here, we exemplarily derive Eq.~\eqref{eqn:tdvp_final} from the Hellinger distance $\mathcal{D}_H(P, Q)$, i.e.\ by maximizing the classical fidelity $F(P,Q)=1-\mathcal{D}_H(P, Q)$. As noted in the main text, an equivalent derivation is possible using the Kullback-Leibler divergence $\mathcal{D}_{KL}$.
For better readability, we drop the time index and continue with the optimality condition
\begin{equation}
\begin{aligned}
0 &= \frac{\partial}{\partial \dot{\theta}_k} F\left(P + \dot{P}\tau, P + \sum_{k^\prime}\frac{\partial P}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}\tau\right)\\
&=\frac{\partial}{\partial \dot{\theta}_k}\sum_{\textbf{a}} P^\textbf{a} \sqrt{1 + a\tau + b \tau^2},
\end{aligned}
\end{equation}
where $a$ and $b$ are given by
\begin{equation}
\begin{aligned}
a &= \frac{\partial \log P^\textbf{a}}{\partial t} + \sum_{k^\prime} \frac{\partial \log P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime},\\
b &= \frac{\partial \log P^\textbf{a}}{\partial t}\sum_{k^\prime}\frac{\partial \log P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}.
\end{aligned}
\end{equation}
Next we perform a second order expansion of the square root in the time step $\tau$:
\begin{equation}
\sqrt{1 + a\tau + b\tau^2} = 1 + \frac{a\tau}{2} + \frac{\tau^2}{8} (4b - a^2) + \mathcal{O}(\tau^3).
\end{equation}
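As a quick numerical sanity check of this expansion (illustrative only, not part of the derivation), one can verify that the remainder shrinks with the third power of $\tau$:

```python
import math

def lhs(a, b, t):
    # Exact square root appearing in the fidelity.
    return math.sqrt(1 + a*t + b*t*t)

def rhs(a, b, t):
    # Second-order expansion: 1 + a t / 2 + t^2 (4 b - a^2) / 8.
    return 1 + a*t/2 + t*t*(4*b - a*a)/8

a, b = 0.7, -0.3                 # arbitrary test coefficients
r1 = abs(lhs(a, b, 1e-2) - rhs(a, b, 1e-2))
r2 = abs(lhs(a, b, 5e-3) - rhs(a, b, 5e-3))

# Remainder is O(t^3): halving t reduces it by roughly a factor of 8.
assert r1 < 1e-6
assert 6.0 < r1 / r2 < 10.0
```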
Using that the normalization of $P$ is conserved under the time evolution, one finds that the term linear in $\tau$ vanishes:
\begin{equation}
\begin{aligned}
\sum_\textbf{a} P^\textbf{a}a&=\sum_\textbf{a}\left( \dot{P}^\textbf{a} + \sum_{k^\prime} \frac{\partial P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}\right)\\
&=\sum_{k^\prime} \dot{\theta}_{k^\prime}\frac{\partial}{\partial \theta_{k^\prime}}\sum_\textbf{a} P^\textbf{a}\\
&=\sum_{k^\prime}\dot{\theta}_{k^\prime}\frac{\partial}{\partial \theta_{k^\prime}}1\\
&=0.
\end{aligned}
\label{eqn:TDVP_linear_term_vanishes}
\end{equation}
Thus, the optimality condition becomes
\begin{equation}
\begin{aligned}
0 &=\frac{\partial}{\partial \dot{\theta}_k}\sum_\textbf{a} \frac{P^\textbf{a}}{\left(P^{\textbf{a}}\right)^2}\left(4\dot{P}^\textbf{a}\sum_{k^\prime}\frac{\partial P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}-(\dot{P}^\textbf{a} + \sum_{k^\prime}\frac{\partial P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime})^2\right)\\
&=-\frac{\partial}{\partial \dot{\theta}_k}\sum_\textbf{a} \frac{P^\textbf{a}}{\left(P^{\textbf{a}}\right)^2}\left(\dot{P}^\textbf{a} -\sum_{k^\prime} \frac{\partial P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}\right)^2\\
&=-\frac{\partial}{\partial \dot{\theta}_k}\sum_\textbf{a} P^\textbf{a}\left(\frac{\partial \log P^\textbf{a}}{\partial t} - \sum_{k^\prime}\frac{\partial \log P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}\right)^2\\
&=2\sum_\textbf{a} P^\textbf{a}\frac{\partial \log P^\textbf{a}}{\partial \theta_k}\left(\frac{\partial \log P^\textbf{a}}{\partial t} - \sum_{k^\prime}\frac{\partial \log P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime}\right).
\label{eqn:TDVP_derivation_penultimatestep}
\end{aligned}
\end{equation}
Dropping the factor of 2 we obtain an equation for the optimal parameter update $\dot{\theta}$:
\begin{equation}
\begin{aligned}
0 =& \underbrace{\sum_\textbf{a} P^\textbf{a}\frac{\partial \log P^\textbf{a}}{\partial t} \frac{\partial \log P^\textbf{a}}{\partial \theta_k}}_{=F_k}\\
&-\sum_{k^\prime}\underbrace{\sum_\textbf{a} P^\textbf{a} \frac{\partial \log P^\textbf{a}}{\partial \theta_k}\frac{\partial \log P^\textbf{a}}{\partial \theta_{k^\prime}}}_{=S_{kk^\prime}}\dot{\theta}_{k^\prime}.\\
\end{aligned}
\end{equation}
Importantly, we can now tackle the sum over the exponentially many indices $\textbf{a}$ by sampling according to the encoded probabilities $P^\textbf{a}$, since both $F$ and $S$ are proportional to $P^\textbf{a}$. This is a property unique to $\mathcal{D}_H$ and $\mathcal{D}_{KL}$,
while other distance measures, such as the $L^2$ norm, do not lead to expressions of a form that can be efficiently evaluated from Monte Carlo samples.
Further, inserting the probabilistic form of the Lindblad master equation leads to
\begin{equation}
\begin{aligned}
F_k &= \sum_\textbf{a} P^\textbf{a}\frac{\partial \log P^\textbf{a}}{\partial t} \frac{\partial \log P^\textbf{a}}{\partial \theta_k} \\
&=\left\langle \mathcal{L}^{\textbf{ab}}\frac{P^\textbf{b}}{P^\textbf{a}}\frac{\partial \log{P^\textbf{a}}}{\partial \theta_k}\right\rangle_{\textbf{a}\sim P}\\
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
S_{kk^\prime} &= \sum_\textbf{a} P^\textbf{a} \frac{\partial \log{P^\textbf{a}}}{\partial \theta_k}\frac{\partial \log{P^\textbf{a}}}{\partial \theta_{k^\prime}}\\
&= \left\langle \frac{\partial \log{P^\textbf{a}}}{\partial \theta_k}\frac{\partial \log{P^\textbf{a}}}{\partial \theta_{k^\prime}} \right\rangle_{\textbf{a}\sim P} \,.
\end{aligned}
\end{equation}
The same derivation can be carried out without assuming normalization. In this case the form of $S$ and $F$ is altered to
\begin{equation}
\begin{aligned}
P^\textbf{a}&\rightarrow\frac{P^\textbf{a}}{\sum_{\textbf{b}}P^\textbf{b}}\\
\log P^\textbf{a}&\rightarrow\log P^\textbf{a} - \log \sum_{\textbf{b}}P^\textbf{b}\\
\frac{\partial \log P^\textbf{a}}{\partial \theta_k}&\rightarrow\frac{\partial \log P^\textbf{a}}{\partial \theta_k}-\left\langle\frac{\partial \log P^\textbf{a}}{\partial \theta_k}\right\rangle_{\textbf{a}\sim P}\\
\frac{\partial \log P^\textbf{a}}{\partial t}&\rightarrow\frac{\partial \log P^\textbf{a}}{\partial t}-\left\langle\frac{\partial \log P^\textbf{a}}{\partial t}\right\rangle_{\textbf{a}\sim P} \,,
\end{aligned}
\end{equation}
where the last two lines are obtained using
\begin{equation}
\begin{aligned}
&\frac{\partial}{\partial \theta_k} \left( \log P^\textbf{a} - \log \sum_\textbf{b} P^\textbf{b} \right)\\
=&\frac{\partial\log P^\textbf{a}}{\partial \theta_k} - \frac{\sum_\textbf{b}\frac{\partial P^\textbf{b}}{\partial \theta_k}}{\sum_\textbf{c}P^\textbf{c}}\\
=&\frac{\partial\log P^\textbf{a}}{\partial \theta_k} - \sum_\textbf{b}\frac{P^\textbf{b}}{\sum_\textbf{c}P^\textbf{c}} \frac{\partial \log P^\textbf{b}}{\partial \theta_k}\\
=&\frac{\partial\log P^\textbf{a}}{\partial \theta_k} - \left\langle\frac{\partial \log P^\textbf{a}}{\partial \theta_k}\right\rangle_{\textbf{a}\sim P}.
\end{aligned}
\end{equation}
Here, the log derivative trick was used in the third line and we renamed the dummy indices $\textbf{b}$ and $\textbf{c}$ in the last step. One may proceed similarly for the time derivative. Overall, this leaves us with the connected correlator structure described in the main text
\begin{equation}
\begin{aligned}
S_{kk^\prime} &= \left\langle O^\textbf{a}_kO^\textbf{a}_{k^\prime} \right\rangle_{\textbf{a}\sim P} - \left\langle O^\textbf{a}_{k} \right\rangle_{\textbf{a}\sim P} \left\langle O^\textbf{a}_{k^\prime} \right\rangle_{\textbf{a}\sim P}\\
F_k &= \left\langle \mathcal{L}^\textbf{ab}\frac{P^\textbf{b}}{P^\textbf{a}} O^\textbf{a}_k \right\rangle_{\textbf{a}\sim P}-\left\langle O^\textbf{a}_k \right\rangle_{\textbf{a}\sim P}\left\langle \mathcal{L}^\textbf{ab}\frac{P^\textbf{b}}{P^\textbf{a}} \right\rangle_{\textbf{a}\sim P}.\\
\end{aligned}
\end{equation}
We finally arrive at
\begin{equation}
\dot{\theta}_k = \tilde{S}^{-1}_{kk^\prime}F_{k^\prime}
\end{equation}
where the tilde is due to the fact that we cannot invert $S$ directly but rather need to regularize it because it is usually ill-conditioned. One can easily show that the updates that were found are indeed maxima of the fidelity:
\begin{equation}
\begin{aligned}
&\frac{\partial^2}{\partial \dot{\theta}_k^2}F\left(P^\textbf{a} + \dot{P}^\textbf{a} \tau, P^\textbf{a} + \sum_{k^\prime} \frac{\partial P^\textbf{a}}{\partial \theta_{k^\prime}}\dot{\theta}_{k^\prime} \tau\right)\\
=&\frac{\partial}{\partial \dot{\theta}_k}(F_k - S_{kk^\prime}\dot{\theta}_{k^\prime})\\
=&-S_{kk^\prime}\delta_{k^\prime k}\\
=&-S_{kk}\\
=& - \left\langle \left(O^\textbf{a}_k-\left\langle O^\textbf{a}_k \right\rangle\right)^2 \right\rangle_{\textbf{a}\sim P}\\
\leq& 0.
\end{aligned}
\end{equation}
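For the regularized inverse $\tilde{S}^{-1}$ introduced above, one concrete and commonly used choice is a spectral cutoff on the eigenvalues of the symmetric, positive semi-definite matrix $S$. The sketch below is a hypothetical illustration of this idea with toy data — the cutoff value and the random matrices are assumptions, not necessarily the regularization used for the paper's results:

```python
import numpy as np

def regularized_solve(S, F, cutoff=1e-8):
    """Solve S @ theta_dot = F, discarding eigendirections of S
    whose eigenvalues fall below the cutoff (S symmetric PSD)."""
    evals, evecs = np.linalg.eigh(S)
    inv = np.where(evals > cutoff, 1.0 / np.clip(evals, cutoff, None), 0.0)
    return evecs @ (inv * (evecs.T @ F))

# Toy example: S built as a connected correlator of log-derivative samples,
# mimicking the structure S_kk' = <O_k O_k'> - <O_k><O_k'>.
rng = np.random.default_rng(0)
O = rng.normal(size=(1000, 5))         # O[s, k] ~ d log P / d theta_k
S = O.T @ O / 1000 - np.outer(O.mean(0), O.mean(0))
F = rng.normal(size=5)
theta_dot = regularized_solve(S, F)
assert np.allclose(S @ theta_dot, F)   # cutoff inactive for this well-conditioned S
```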
\section{Observables and Operators in the POVM-formalism}
As described in the main text, the POVM-distribution $P$ is obtained as expectation values of the respective POVM-operators $\hat M$,
\begin{equation}
P^\textbf{a} = \tr\left(\hat \rho \hat M^\textbf{a}\right) \,,
\end{equation}
where $\hat M^\textbf{a}=\hat M^{a_1}\otimes .. \otimes \hat M^{a_N}$ are product operators. For IC-POVMs with the minimal number of $(d^2)^N$ elements, where $d$ is the local Hilbert space dimension ($d=2$ for spins), this relation can be inverted:
\begin{equation}
\hat \rho = P^\textbf{a}T^{-1 \textbf{a}\textbf{a}^\prime}\hat M^{\textbf{a}^\prime},
\label{eqn:rho_from_P_app}
\end{equation}
with the overlap matrix $T^{\textbf{aa}^\prime} = \tr\left(\hat M^\textbf{a}\hat M^{\textbf{a}^\prime}\right)$. We note that not every normalized probability distribution inserted into Eq.~\eqref{eqn:rho_from_P_app} results in a physical density matrix, as the positivity of $\hat \rho$ is not ensured.
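To make Eq.~(\ref{eqn:rho_from_P_app}) concrete for a single qubit, the following sketch assumes, for definiteness, the symmetric tetrahedral IC-POVM (an illustrative choice of IC-POVM) and verifies numerically that the inversion through the overlap matrix $T$ recovers $\hat\rho$ exactly:

```python
import numpy as np

# Pauli matrices and the four tetrahedral Bloch directions.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# Tetrahedral POVM elements M^a = (I + s_a . sigma) / 4; they sum to I.
M = [(I2 + s[0]*X + s[1]*Y + s[2]*Z) / 4 for s in dirs]

# Overlap matrix T^{ab} = tr(M^a M^b) and its inverse.
T = np.array([[np.trace(Ma @ Mb).real for Mb in M] for Ma in M])
Tinv = np.linalg.inv(T)

# Forward map rho -> P, then reconstruction P -> rho.
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
P = np.array([np.trace(rho @ Ma).real for Ma in M])
rho_rec = sum(P[a] * Tinv[a, b] * M[b] for a in range(4) for b in range(4))

assert np.isclose(P.sum(), 1.0)        # P is a normalized distribution
assert np.allclose(rho_rec, rho)       # reconstruction is exact
```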
The equation describing the dynamics of $P$ can be obtained from Eq.~\eqref{eqn:rho_from_P_app} and the Lindblad master equation according to
\begin{equation}
\dot{P}^\textbf{a} = \tr\left(\dot{\hat \rho} \hat M^\textbf{a}\right) = \mathcal{L}^\textbf{ab}P^\textbf{b}.
\end{equation}
The part of the linear map $\mathcal{L}$ resulting from the von-Neumann term, i.e.\ the part accounting for the unitary evolution, is
\begin{equation}
\begin{array}{rl}
\dot{P}^{\textbf{a}}&=\tr\left(-i\left[\hat H,\hat \rho\right] \hat M^\textbf{a}\right)\\
&=\tr\left(-i\left[\hat H, T^{-1\textbf{bb}^\prime}\hat M^{\textbf{b}^\prime}\right] \hat M^\textbf{a}\right)P^{\textbf{b}}\\
&=\tr\left(-i\hat H\left[T^{-1\textbf{bb}^\prime}\hat M^{\textbf{b}^\prime},\hat M^\textbf{a}\right]\right)P^{\textbf{b}}\\
&=U^{\textbf{ab}}P^\textbf{b},
\end{array}
\end{equation}
where the cyclicity of the trace was used in the third line. A similar expression can be found for the dissipative part
\begin{equation}
\begin{aligned}
D^\textbf{ab} = \gamma \tr \Bigl( \sum_i & \hat L^i T^{-1\textbf{bb}^\prime}\hat M^{\textbf{b}^\prime}\hat L^{i^\dagger}\hat M^\textbf{a}\\
&-\frac{1}{2}\hat L^{i^\dagger}\hat L^i \left\lbrace T^{-1\textbf{bb}^\prime}\hat M^{\textbf{b}^\prime},\hat M^\textbf{a} \right\rbrace \Bigr),
\end{aligned}
\end{equation}
from which we assemble $\mathcal{L}$ according to
\begin{equation}
\mathcal{L}^\textbf{ab} = U^\textbf{ab} + D^\textbf{ab}.
\end{equation}
The expectation value of any observable in physical index space may be correspondingly expressed in the POVM-formalism replacing $\langle \hat O \rangle = \tr\left(\hat \rho \hat O\right)$ by $\langle \hat O \rangle = P^\textbf{a}\Omega^\textbf{a}$. The numerical values of the coefficients $\Omega^\textbf{a}$ are obtained in similar fashion as the Lindbladian operator $\mathcal{L}$, namely by substituting $\hat \rho$ according to Eq.~(\ref{eqn:rho_from_P_app})
\begin{equation}
\langle \hat O \rangle = \tr\left(\hat \rho \hat O\right) = P^\textbf{a} T^{-1\textbf{a}\textbf{a}^\prime}\tr\left(\hat M^{\textbf{a}^\prime}\hat O\right)=P^\textbf{a}\Omega^\textbf{a}.
\end{equation}
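As a self-contained single-qubit illustration (again assuming, for definiteness, the symmetric tetrahedral IC-POVM), the coefficients $\Omega^\textbf{a}$ for an observable such as $\hat O = \hat Z$ can be precomputed once and then simply contracted with $P$:

```python
import numpy as np

# Tetrahedral POVM elements M^a = (I + s_a . sigma) / 4.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
M = [(I2 + s[0]*X + s[1]*Y + s[2]*Z) / 4 for s in dirs]
T = np.array([[np.trace(Ma @ Mb).real for Mb in M] for Ma in M])

# Omega^a = T^{-1 aa'} tr(M^{a'} O) for the observable O = Z.
Omega = np.linalg.inv(T) @ np.array([np.trace(Ma @ Z).real for Ma in M])

# Check <Z> = sum_a P^a Omega^a against tr(rho Z) for a test state.
rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)
P = np.array([np.trace(rho @ Ma).real for Ma in M])
assert np.isclose(P @ Omega, np.trace(rho @ Z).real)
```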
\section{Details of the RNN-architecture}
As described in the main text, the RNN encodes the probability distribution $P^\textbf{a}$ as a product of conditionals, $P^\textbf{a}=\prod_i P(a_i|a_{<i})$. The formula implies that the network's knowledge of previous POVM outcomes $a_{<i}$ may alter the estimation of POVM outcome probabilities at site $i$. In the network architecture, this is ensured by passing a hidden state to the next lattice site where it enters the computation of the probability output. This hidden state may be regarded as a latent embedding of physical contextual information, and is required to accurately encode correlations in the physical system.
Our results are obtained using standard RNN cells which are known to have exponentially decaying correlation length \cite{Shen2019}. In scenarios, where this is expected to be insufficient, more advanced cells, such as the Long Short Term Memory (LSTM) \cite{Hochreiter1997}, whose correlation length decays algebraically \cite{Shen2019}, or the transformer \cite{vaswani2017attention} may be used instead.
Since the RNN architecture was originally developed to tackle tasks associated with serial data, some changes are required in order to make it suitable for quantum applications. For one, to allow the treatment of 2D systems the RNN evaluation and sampling schemes need to be generalized.
Here, we adapt a scheme, introduced in \cite{HibatAllah2020}, that treats correlations along both spatial direction on equal footing.
Additionally, we enforce all symmetries present in the Lindbladian $\mathcal{L}$ by averaging all symmetry-invariant outcome configurations \cite{HibatAllah2020}. These include translational symmetries as well as point symmetries. We emphasize, that explicitly restoring these symmetries in our ansatz improved the accuracies of observables substantially.
Furthermore, we here lay out how the network is initialized. Product states, which form typical initial states in non-equilibrium time evolution, may be encoded to numerical precision in the network, by setting the biases of the output layer to the logarithm of the to be encoded 1-particle probability distribution while simultaneously setting all weights connecting to the output layer to zero. We may therefore attribute all accumulated error to imprecise updates during time-evolution and note that no preceding computations are required.
For our simulations we use RNNs implemented in the open source machine learning library JAX \cite{Bradbury2018}. An RNN is a generative model that works on sequential data, in which the bits of the sequential data are processed in an iterative fashion. The RNN fulfills two tasks: It assigns probabilities to a given POVM outcome configuration and, as a generative model, is capable of exact sampling, meaning that it can be programmed to output sample POVM configurations in agreement with the assigned probabilities. This is a major advantage of autoregressive networks compared to other network architectures, in which the sampling step is carried out using Markov Chain Monte Carlo schemes, which potentially may be plagued by long autocorrelation times.
An RNN-cell is the basic building block of an RNN; RNN-cells may be stacked to form the complete RNN, increasing the representational power of the network. Let us first limit our considerations to RNNs with one layer, i.e. single RNN-cells. The input to every RNN-cell consists of two parts: For one, the physical POVM outcomes $\textbf{a}=a_1..a_N$ are fed into the model piece by piece. Here, each outcome is transformed to a one-hot encoded vector of length 4. Simultaneously, a hidden state of length $l$ which is initialized to zero, i.e. $\mathbf{h}_0=0$ is fed into the model. The first step consists of finding the first probability appearing in $P^\textbf{a}=\prod_i P(a_i|a_{<i})$, i.e. $P(a_1)$.
First, a new hidden state is computed
\begin{equation}
\mathbf{h}_1 = \phi \left(W_h \cdot \mathbf{h}_0 + W_a \cdot \mathbf{a}_0 + \mathbf{b}_h\right).
\label{eqn:RNN_cell}
\end{equation}
$\mathbf{a}_0$ is an input of length 4 carrying zeros, similar to the empty input $\mathbf{h}_0$ of length $l$. The parameters $W_h$ ($W_a$) consequently are matrices with shape $l\times l$ ($l\times 4$), while the bias vector $\mathbf{b}_h$ has length $l$. We choose the element-wise activation function $\phi$ to be the Exponential Linear Unit (ELU)
\begin{equation}
\phi(x)=
\left\{ \begin{array}{ll}
x &x>0, \\
\alpha\left(e^x-1\right) &x \leq 0.
\end{array} \right.
\end{equation}
Two more sets of parameters $W_s$ ($\mathbf{b}_s$) with shape $4\times l$ ($4$) enter the computation of the output of the RNN-cell,
\begin{equation}
P(a_1) = \sigma\left(W_s \cdot \mathbf{h}_1 + \mathbf{b}_s\right).
\label{eqn:final_comp}
\end{equation}
Here $\sigma$ denotes the softmax-activation,
\begin{equation}
\sigma(\mathbf{x})_i = \frac{e^{x_i}}{\sum_j e^{x_j}}
\end{equation}
and the summation runs over the four possible POVM outcomes, allowing us to interpret $P(a_1)$ as a proper discrete probability distribution. Depending on the task at hand, one may either store the probability of a POVM outcome of interest or sample the first POVM outcome from $P(a_1)$.
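The single-cell recurrence of Eqs.~(\ref{eqn:RNN_cell}) and (\ref{eqn:final_comp}) can be sketched in a few lines of numpy. This is a hypothetical toy re-implementation with random, untrained weights (the paper's actual simulations use JAX), meant only to make the shapes and activations explicit:

```python
import numpy as np

def elu(x, alpha=1.0):
    # Exponential Linear Unit, as in the text.
    return np.where(x > 0, x, alpha * np.expm1(x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

l = 8                                  # hidden-state length (illustrative)
rng = np.random.default_rng(1)
W_h = 0.1 * rng.normal(size=(l, l))    # shape l x l
W_a = 0.1 * rng.normal(size=(l, 4))    # shape l x 4
b_h = np.zeros(l)
W_s = 0.1 * rng.normal(size=(4, l))    # shape 4 x l
b_s = np.zeros(4)

h = np.zeros(l)                        # empty hidden state h_0
a = np.zeros(4)                        # empty one-hot input a_0

# First step: h_1 and P(a_1) over the four POVM outcomes.
h = elu(W_h @ h + W_a @ a + b_h)
p1 = softmax(W_s @ h + b_s)

# Second step: sample a_1, feed it back one-hot encoded, get P(a_2 | a_1).
a1 = np.eye(4)[rng.choice(4, p=p1)]
h = elu(W_h @ h + W_a @ a1 + b_h)
p2 = softmax(W_s @ h + b_s)

assert np.isclose(p1.sum(), 1.0) and np.isclose(p2.sum(), 1.0)
```

With trained weights, iterating the second step along the lattice yields all conditionals $P(a_i|a_{<i})$, whose product (in practice, the sum of their logarithms) gives $P^\textbf{a}$.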
Obtaining an expression for $P(a_2|a_1)$ is identical to the hitherto described procedure, by substituting $\textbf{h}_0$ for $\textbf{h}_1$ and $\textbf{a}_0$ for $\textbf{a}_1$ and, more generally, $\textbf{h}_i$ for $\textbf{h}_{i+1}$ and $\textbf{a}_i$ for $\textbf{a}_{i+1}$ in the following steps. Here, the ``recurrent'' nature becomes apparent, since the same parameters, i.e.\ the same network, are used in every computation step.
If one desires to use deeper networks with $K$ layers, Eq.~(\ref{eqn:RNN_cell}) changes to
\begin{equation}
\mathbf{h}_i^k = \phi \left(W_h^k \cdot \mathbf{h}_{i-1}^k + W_a^k \cdot \mathbf{h}_i^{k-1} + \mathbf{b}_h^k\right).
\label{eqn:hiddenstate_deep}
\end{equation}
$\mathbf{h}_i^k$ is then called the hidden state at layer $k$ at lattice site $i$. The computation of $P(a_i|a_{<i})$ is still analogous to Eq.~(\ref{eqn:final_comp}), i.e.
\begin{equation}
P(a_i|a_{<i}) = \sigma\left(W_s \cdot \mathbf{h}_i^K + \mathbf{b}_s\right).
\end{equation}
As the product of these probabilities becomes exponentially small in the system size $N$, one stores the logarithm of the conditional probability instead of the probability itself.
In two-dimensional systems, the situation is slightly more involved. One might be tempted to map the 2D system in a snake-like fashion to a one-dimensional system. However, using this method one observes that correlators of vertical neighbours are not encoded accurately as information may potentially get lost upon long traversing times in horizontal direction \cite{HibatAllah2020}. Instead, we opt to pass hidden states in a two-dimensional fashion, incorporating the dimensionality of the system as shown in Ref.~\cite{HibatAllah2020}. Herein, we once again change Eq.~(\ref{eqn:hiddenstate_deep}) to read
\begin{equation}
\mathbf{h}_{i,j}^k = \phi \left(W_h^k \cdot \mathbf{h}_{i-1,j}^k + W_h^k \cdot \mathbf{h}_{i,j-1}^k + W_a^k \cdot \mathbf{h}_{i,j}^{k-1} + \mathbf{b}_h^k\right).
\label{eqn:hiddenstate_deep_2D}
\end{equation}
This method can in principle be extended to three dimensional systems.
\section{Comparison to exact numerical simulations for small system sizes}
To obtain uncontroversial benchmarks, we test our method in system size regimes where exact dynamics is feasible. As benchmark systems we choose the 1D and 2D systems described in the main text in Fig.~\ref{fig:Carra_Comparison} and reduce the system size to $N=10$ spins in the 1D case and a $3\times 3$ lattice in the 2D case.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{appx_Fig_2.pdf}
\caption{(a) and (b): Mean magnetizations and next-nearest neighbour connected correlation functions (e.g.\ $C_{XX}(d=2)=\sum_i\langle \hat X_i \hat X_{i+2}\rangle^c/N$) as a function of time in the anisotropic 1D Heisenberg model for $N=10$ spins starting in the product state $\langle \hat{Y}\rangle = -1$. Nearest neighbor couplings are given by $\vec{J}/\gamma =(2, 0, 1)$, $h_z/\gamma = 1$ and the dissipation channel is $\hat{L}=\hat{\sigma}^- =\frac{1}{2}(\hat{X}-i\hat{Y})$. The exact data is obtained for $N=10$ spins. (c) and (d): Mean $z$-magnetizations and nearest neighbour connected correlation functions (for $J_y/\gamma=1.8$) in a $3\times 3$ anisotropic 2D Heisenberg lattice with nearest neighbor couplings $\vec{J}/\gamma = (0.9, 1.0 (1.8), 1.0)$ and the same decay as in (a) and (b), starting in the product state $\langle \hat{Z} \rangle = 1$.}
\label{fig:appx_Fig_2}
\end{figure*}
\section{Dissipative confinement correlations}
One question that arises in relation with Fig.~\ref{fig:Confinement} in the main text is how the spreading of correlations is to be described in the dissipative setting. As decoherence generically leads to classical transport dynamics, one may expect a diffusive growth of correlations proportional to $\sqrt{t}$, in contrast to the unitary linear light-cone proportional to $t$.
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{appx_Fig3.pdf}
\caption{Spreading of correlations in the dissipative confinement model discussed in the main text (Fig.~\ref{fig:Confinement}). The data points are obtained as the first passages where $C_{ZZ}(d)\geq 0.002$ and the fitted curve $y=ax^{1/b}$ yields $b=2.04$.}
\label{fig:sqrt_fit_appx}
\end{figure}
This intuition, however, can only hold in an intermediate time regime since at long times the system will relax to its steady state prohibiting an indefinite growth of correlations.
In the present case of single particle dephasing noise the steady state is given by $\hat \rho(t\to\infty)\propto \mathds{1}$, as is easily verified by the observation that the identity operator commutes with $\hat H$, and similarly for the dissipative part of the evolution Eq.~(\ref{eqn:Lindbladian_rho}) of the main text.
This means that all correlations will eventually decay to zero again in the long-time limit. One observes that correlations indeed start to disappear again at around $Jt\sim70$ in the considered setting.
Nevertheless, the spreading of correlations at intermediate times is consistent with a square-root, as shown in Fig.~\ref{fig:sqrt_fit_appx} where the dashed line is a fit to the first passage data points of a given threshold.
\begin{table*}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Figure & Number of layers & Layer size & Number of parameters & Number of samples & Integration tol. $\epsilon$\\\hline
Fig. 2 (1D) & 3 & 20 & 2224 & 80\,000 & 1e-05\\
Fig. 2 (2D, $J_y/\gamma=1.0$) & 5 & 12 & 2224 & 8\,000 & 1e-02\\
Fig. 2 (2D, $J_y/\gamma=1.8$) & 3 & 20 & 3504 & 80\,000 & 5e-03 \\
Fig. 3 & 5 & 12 & 1456 & 160\,000 & 1e-03 \\\hline
\end{tabular}
\caption{Hyperparameters that were used for the different figures in the main text. The integration tolerance is with respect to the $S$-matrix scheme proposed in \cite{Schmitt2020}.}
\end{table*}
\end{document}
\section{Introduction}
In 2015, the LHCb collaboration studied the $\Lambda_b^0\to J/\psi K^- p$ decays and observed two pentaquark candidates $P_c(4380)$ and $P_c(4450)$ in the $J/\psi p$ mass spectrum with the significances of more than $9\sigma$ \cite{LHCb-4380}.
Recently, the LHCb collaboration studied the $\Lambda_b^0\to J/\psi K^- p$ decays with a data sample an order of magnitude larger than that previously analyzed, and observed a narrow pentaquark candidate $P_c(4312)$ with a statistical significance of $7.3\sigma$ \cite{LHCb-Pc4312}. Furthermore,
the LHCb collaboration confirmed the $P_c(4450)$ pentaquark structure, and observed that it consists of two narrow overlapping peaks $P_c(4440)$ and $P_c(4457)$
with the statistical significance of $5.4\sigma$ \cite{LHCb-Pc4312}.
The measured masses and widths are
\begin{flalign}
&P_c(4312) : M = 4311.9\pm0.7^{+6.8}_{-0.6} \mbox{ MeV}\, , \, \Gamma = 9.8\pm2.7^{+ 3.7}_{- 4.5} \mbox{ MeV} \, , \nonumber \\
& P_c(4440) : M = 4440.3\pm1.3^{+4.1}_{-4.7} \mbox{ MeV}\, , \, \Gamma = 20.6\pm4.9_{-10.1}^{+ 8.7} \mbox{ MeV} \, , \nonumber \\
&P_c(4457) : M = 4457.3\pm0.6^{+4.1}_{-1.7} \mbox{ MeV} \, ,\, \Gamma = 6.4\pm2.0_{- 1.9}^{+ 5.7} \mbox{ MeV} \, .
\end{flalign}
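All three states lie close to the $\bar{D}^{(*)}\Sigma_c$ thresholds, which motivates the molecular interpretations cited below; a quick check with approximate PDG masses (illustrative inputs only, not the parameters of the present sum rule analysis):

```python
# Approximate hadron masses in MeV (PDG values quoted to ~0.1 MeV;
# treated here as illustrative inputs).
m = {"Sigma_c+": 2452.9, "Dbar0": 1864.84, "Dbar*0": 2006.85}

thresholds = {
    "Sigma_c Dbar":  m["Sigma_c+"] + m["Dbar0"],
    "Sigma_c Dbar*": m["Sigma_c+"] + m["Dbar*0"],
}

for name, th in thresholds.items():
    print(name, round(th, 1))
# Sigma_c Dbar  ~ 4317.7 MeV, a few MeV above Pc(4312)
# Sigma_c Dbar* ~ 4459.8 MeV, close to Pc(4440)/Pc(4457)
```

The few-MeV proximity of the $P_c$ masses to these thresholds is what makes the $\bar{D}\Sigma_c$ and $\bar{D}^*\Sigma_c$ molecule assignments natural competitors of the compact pentaquark scenario studied here.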
There have been several possible assignments of the $P_c$ states since the observations of the $P_c(4380)$ and $P_c(4450)$, such as the diquark-diquark-antiquark type pentaquark states \cite{di-di-anti-penta-1,di-di-anti-penta-2,di-di-anti-penta-3,di-di-anti-penta-4,di-di-anti-penta-5,Wang1508-EPJC,WangHuang-EPJC-1508-12,
WangZG-EPJC-1509-12,WangZG-NPB-1512-32,WangZhang-APPB,Pc4312-penta-1,Pc4312-penta-2,Pc4312-penta-3}, the diquark-triquark type pentaquark states \cite{di-tri-penta-1, di-tri-penta-2,di-tri-penta-3},
the molecule-like pentaquark states \cite{mole-penta-1,mole-penta-2,mole-penta-3,mole-penta-4,mole-penta-5,mole-penta-6,mole-penta-7,mole-penta-8,mole-penta-9,mole-penta-10,WangPenta-IJMPA,
Pc4312-mole-penta-1,Pc4312-mole-penta-2,Pc4312-mole-penta-3,Pc4312-mole-penta-4,Pc4312-mole-penta-5,Pc4312-mole-penta-6,Pc4312-mole-penta-7,Pc4312-mole-penta-8,
Pc4312-mole-penta-9,Pc4312-mole-penta-10}, the hadro-charmonium states \cite{Pc4312-hadrocharmonium},
the re-scattering effects \cite{rescattering-penta-1,rescattering-penta-2}, etc.
In this article, we choose the diquark-diquark-antiquark type pentaquark scenario, and restudy the ground state mass spectrum of the pentaquark states with the QCD sum rules.
The QCD sum rules are a powerful theoretical tool for studying the properties of the ground state mesons and baryons, such as the masses, decay constants, form factors and hadronic coupling constants \cite{SVZ79,PRT85}. In the QCD sum rules, the operator product expansion is used to expand the
time-ordered products of the currents into a series of quark and gluon condensates which parameterize the nonperturbative properties of the QCD vacuum.
Based on the quark-hadron duality, we can obtain copious information
about the hadronic parameters on the phenomenological side
\cite{SVZ79,PRT85}.
There have been many works on the mass spectrum of
the exotic states $X$, $Y$, $Z$ and $P$ \cite{Nielsen-Review}.
In Refs.\cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32,WangZhang-APPB}, we constructed the diquark-diquark-antiquark type pentaquark currents, studied the $J^P={\frac{1}{2}}^\pm$, ${\frac{3}{2}}^\pm$, ${\frac{5}{2}}^\pm$ hidden-charm pentaquark states with the strangeness $S=0,\,-1,\,-2,\,-3$ systematically using the QCD sum rules, and explored the possible assignments of the $P_c(4380)$ and $P_c(4450)$ in the scenario of the pentaquark states.
In carrying out the operator product expansion, we took into account the contributions of the vacuum condensates which are vacuum expectations of the quark-gluon operators of the order $\mathcal{O}(\alpha_s^k)$ with $k\leq1$ and dimension $D\leq 10$, and used the energy scale formula $\mu=\sqrt{M_{P}^2-(2{\mathbb{M}}_c)^2}$ with the old value ${\mathbb{M}}_c=1.80\,\rm{GeV}$ of the effective $c$-quark mass to determine the ideal energy scales of the QCD spectral densities.
In this article, we restudy the ground state mass spectrum of the diquark-diquark-antiquark type $uudc\bar{c}$ pentaquark states with the QCD sum rules, taking into account all the vacuum condensates up to the quark-gluon operators of the order $\mathcal{O}(\alpha_s^k)$ with $k\leq1$ and dimension $13$ in carrying out the operator product expansion. We use the energy scale formula $\mu=\sqrt{M_{P}^2-(2{\mathbb{M}}_c)^2}$ with the updated value ${\mathbb{M}}_c=1.82\,\rm{GeV}$ to determine the ideal energy scales of the QCD spectral densities \cite{WangEPJC-1601}, update the analysis, and explore the possible assignments of the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ in the scenario of the pentaquark states.
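As a quick numerical orientation (a sketch only; the working scales are fixed by the full sum rule analysis), evaluating the energy scale formula at the measured $P_c$ masses gives scales of roughly $2.3\mbox{--}2.6\,\rm{GeV}$:

```python
import math

# Energy scale formula mu = sqrt(M_P^2 - (2*M_c_eff)^2) with the updated
# effective c-quark mass M_c_eff = 1.82 GeV (all values in GeV).
M_c_eff = 1.82

def mu(M_P):
    return math.sqrt(M_P ** 2 - (2.0 * M_c_eff) ** 2)

for M_P in (4.312, 4.440, 4.457):
    print(M_P, round(mu(M_P), 2))
```
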
In Ref.\cite{WangPenta-IJMPA}, we chose the color-singlet-color-singlet type or meson-baryon type currents to interpolate the $\bar{D}\Sigma$,
$\bar{D}\Sigma^*$, $\bar{D}^*\Sigma$ and $\bar{D}^*\Sigma^*$ pentaquark molecular states, and observed that the experimental values of the masses of the LHCb pentaquark candidates $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ can be reproduced in the meson-baryon molecule scenario. In this article, we also explore the relation between the (compact) pentaquark scenario and the molecule scenario.
The article is arranged as follows:
we derive the QCD sum rules for the masses and pole residues of the
ground state hidden-charm pentaquark states in Sect.2; in Sect.3, we present the numerical results and discussions; and Sect.4 is reserved for our
conclusion.
\section{QCD sum rules for the hidden-charm pentaquark states}
Firstly, we write down the two-point correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$ in the QCD sum rules,
\begin{eqnarray}\label{CF-Pi-Pi-Pi}
\Pi(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J(x)\bar{J}(0)\right\}|0\rangle \, , \nonumber \\
\Pi_{\mu\nu}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J_{\mu}(x)\bar{J}_{\nu}(0)\right\}|0\rangle \, , \nonumber \\
\Pi_{\mu\nu\alpha\beta}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J_{\mu\nu}(x)\bar{J}_{\alpha\beta}(0)\right\}|0\rangle \, ,
\end{eqnarray}
where the currents $J(x)=J^1(x)$, $J^2(x)$, $J^3(x)$, $J^4(x)$, $J_\mu(x)=J^1_\mu(x)$, $J^2_\mu(x)$, $J^3_\mu(x)$, $J^4_\mu(x)$, $J_{\mu\nu}(x)=J^1_{\mu\nu}(x)$, $J^2_{\mu\nu}(x)$,
\begin{eqnarray}
J^1(x)&=&\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn} u^T_j(x) C\gamma_5 d_k(x)\,u^T_m(x) C\gamma_5 c_n(x)\, C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^2(x)&=&\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn} u^T_j(x) C\gamma_5 d_k(x)\,u^T_m(x) C\gamma_\mu c_n(x)\,\gamma_5 \gamma^\mu C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^{3}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{3}} \left[ u^T_j(x) C\gamma_\mu u_k(x) d^T_m(x) C\gamma_5 c_n(x)+2u^T_j(x) C\gamma_\mu d_k(x) u^T_m(x) C\gamma_5 c_n(x)\right] \gamma_5 \gamma^\mu C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^{4}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{3}} \left[ u^T_j(x) C\gamma_\mu u_k(x)d^T_m(x) C\gamma^\mu c_n(x)+2u^T_j(x) C\gamma_\mu d_k(x)u^T_m(x) C\gamma^\mu c_n(x) \right] C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^1_{\mu}(x)&=&\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn} u^T_j(x) C\gamma_5 d_k(x)\,u^T_m(x) C\gamma_\mu c_n(x)\, C\bar{c}^{T}_{a}(x) \, , \nonumber \\
J^{2}_{\mu}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{3}} \left[ u^T_j(x) C\gamma_\mu u_k(x) d^T_m(x) C\gamma_5 c_n(x)+2u^T_j(x) C\gamma_\mu d_k(x) u^T_m(x) C\gamma_5 c_n(x)\right] C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^{3}_{\mu}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{3}} \left[ u^T_j(x) C\gamma_\mu u_k(x)d^T_m(x) C\gamma_\alpha c_n(x)+2u^T_j(x) C\gamma_\mu d_k(x)u^T_m(x) C\gamma_\alpha c_n(x) \right] \gamma_5\gamma^\alpha C\bar{c}^{T}_{a}(x) \, , \nonumber\\
J^{4}_{\mu}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{3}} \left[ u^T_j(x) C\gamma_\alpha u_k(x)d^T_m(x) C\gamma_\mu c_n(x)+2u^T_j(x) C\gamma_\alpha d_k(x)u^T_m(x) C\gamma_\mu c_n(x) \right] \gamma_5\gamma^\alpha C\bar{c}^{T}_{a}(x) \, , \nonumber
\end{eqnarray}
\begin{eqnarray}
J^1_{\mu\nu}(x)&=&\frac{\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn}}{\sqrt{6}} \left[ u^T_j(x) C\gamma_\mu u_k(x)d^T_m(x) C\gamma_\nu c_n(x)+2u^T_j(x) C\gamma_\mu d_k(x)u^T_m(x) C\gamma_\nu c_n(x) \right] \nonumber\\
&&C\bar{c}^{T}_{a}(x)+\left( \mu\leftrightarrow\nu\right)\, , \nonumber\\
J^2_{\mu\nu}(x)&=&\frac{1}{\sqrt{2}}\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn} u^T_j(x) C\gamma_5 d_k(x)\left[u^T_m(x) C\gamma_\mu c_n(x)\, \gamma_5\gamma_{\nu}C\bar{c}^{T}_{a}(x)\right.\nonumber\\
&&\left.+u^T_m(x) C\gamma_\nu c_n(x)\,\gamma_5 \gamma_{\mu}C\bar{c}^{T}_{a}(x)\right] \, ,
\end{eqnarray}
where the $i$, $j$, $k$, $l$, $m$, $n$ and $a$ are color indices, and the $C$ is the charge conjugation matrix \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}. The attractive interaction induced by one-gluon exchange favors the formation of diquark correlations in the color antitriplet $\bar{3}_c$ channels, so we prefer the diquark operators in the color antitriplet $\bar{3}_c$.
Compared to the pseudoscalar and vector diquark states, the scalar and axialvector diquark states are the favored configurations, so we choose the scalar and axialvector diquark operators in the color antitriplet $\bar{3}_c$ as the basic constituents to construct the diquark-diquark-antiquark type current operators $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$ with the spin-parity $J^{P}={\frac{1}{2}}^-$, ${\frac{3}{2}}^-$ and ${\frac{5}{2}}^-$, respectively, which are expected to couple potentially to the lowest pentaquark states \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}.
In the currents $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$, there are diquark operators $\varepsilon^{ijk}u^T_jC\gamma_5d_k$, $\varepsilon^{ijk}u^T_jC\gamma_{\mu}d_k$, $\varepsilon^{ijk}u^T_jC\gamma_{\mu}u_k$, $\varepsilon^{ijk}q^T_jC\gamma_5c_k$, $\varepsilon^{ijk}q^T_jC\gamma_{\mu}c_k$ with $q=u$, $d$. If we use the $S_L$ and $S_H$
to denote the spins of the light diquarks and heavy diquarks respectively, the light diquark operators $\varepsilon^{ijk}u^T_jC\gamma_5d_k$, $\varepsilon^{ijk}u^T_jC\gamma_{\mu}d_k$ and $\varepsilon^{ijk}u^T_jC\gamma_{\mu}u_k$ have the spins $S_L=0$, $1$ and $1$, respectively, while the heavy diquark operators $\varepsilon^{ijk}q^T_jC\gamma_5c_k$ and $\varepsilon^{ijk}q^T_jC\gamma_{\mu}c_k$ have the spins $S_H=0$ and $1$, respectively. The light diquark and heavy diquark form a charmed tetraquark in color triplet with the angular momentum $\vec{J}_{LH}=\vec{S}_L+\vec{S}_H$, which has the values $J_{LH}=0$, $1$ or $2$.
The $\bar{c}$-quark operator $C\bar{c}_a^T$ has the spin-parity $J^P={\frac{1}{2}}^-$,
while the $\bar{c}$-quark operator $\gamma_5\gamma_{\mu}C\bar{c}_a^T$ has the spin-parity $J^P={\frac{3}{2}}^-$ due to the axialvector-like factor $\gamma_5\gamma_{\mu}$. The total angular momenta of the currents are $\vec{J}=\vec{J}_{LH}+\vec{J}_{\bar{c}}$ with the values $J=\frac{1}{2}$, $\frac{3}{2}$ or $\frac{5}{2}$, which are shown explicitly in Table \ref{current-pentaQ}. In Table \ref{current-pentaQ}, we present the quark structures of the interpolating currents explicitly. For example, in the current operator $J^2_{\mu\nu}(x)$, there are a scalar diquark operator
$\varepsilon^{ijk} u^T_j(x) C\gamma_5 d_k(x)$ with the spin-parity $J^P=0^+$, an axialvector diquark operator $\varepsilon^{lmn} u^T_m(x) C\gamma_\mu c_n(x)$ with the spin-parity $J^P=1^+$, and an antiquark operator $\gamma_5\gamma_{\nu}C\bar{c}^T_a(x)$ with the spin-parity $J^{P}={\frac{3}{2}}^-$, the total angular momentum of the current is $J={\frac{5}{2}}$.
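For orientation, the angular-momentum bookkeeping for $J^2_{\mu\nu}(x)$ is the standard SU(2) coupling chain,

```latex
\begin{eqnarray}
\vec{J}_{LH}=\vec{S}_L+\vec{S}_H &:& \quad 0\otimes 1=1\, , \nonumber\\
\vec{J}=\vec{J}_{LH}+\vec{J}_{\bar{c}} &:& \quad 1\otimes \frac{3}{2}=\frac{1}{2}\oplus\frac{3}{2}\oplus\frac{5}{2}\, ,
\end{eqnarray}
```

where the symmetrization in the Lorentz indices $\mu$ and $\nu$ selects the $J=\frac{5}{2}$ component.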
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline\hline
$[qq^\prime][q^{\prime\prime}c]\bar{c}$ ($S_L$, $S_H$, $J_{LH}$, $J$)& $J^{P}$ & Currents \\ \hline
$[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$) & ${\frac{1}{2}}^{-}$ & $J^1(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$) & ${\frac{1}{2}}^{-}$ & $J^2(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$) & ${\frac{1}{2}}^{-}$ & $J^3(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $0$, $\frac{1}{2}$) & ${\frac{1}{2}}^{-}$ & $J^4(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{3}{2}$) & ${\frac{3}{2}}^{-}$ & $J^1_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{3}{2}$) & ${\frac{3}{2}}^{-}$ & $J^2_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) & ${\frac{3}{2}}^{-}$ & $J^3_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) & ${\frac{3}{2}}^{-}$ & $J^4_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{5}{2}$) & ${\frac{5}{2}}^{-}$ & $J^1_{\mu\nu}(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{5}{2}$) & ${\frac{5}{2}}^{-}$ & $J^2_{\mu\nu}(x)$ \\ \hline\hline
\end{tabular}
\end{center}
\caption{ The quark structures of the current operators, where the $S_L$ and $S_H$ denote the spins of the light diquarks and heavy diquarks respectively, $\vec{J}_{LH}=\vec{S}_L+\vec{S}_H$, $\vec{J}=\vec{J}_{LH}+\vec{J}_{\bar{c}}$, the $\vec{J}_{\bar{c}}$ is the angular momentum of the $\bar{c}$-quark.
As the current operators couple potentially to pentaquark states which have the same quark structures, hereafter we will use the quark structures of the current operators to represent the corresponding pentaquark states. }\label{current-pentaQ}
\end{table}
Although the currents $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$ have negative parity, they also couple potentially to the positive parity pentaquark states, as multiplying the currents $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$ by $i \gamma_{5}$ changes their parity \cite{Chung82,Bagan93,Oka96,WangHbaryon-1,WangHbaryon-2,WangHbaryon-3,WangHbaryon-4,WangHbaryon-5}.
Now we write down the current-pentaquark couplings (or the definitions for the pole residues) explicitly,
\begin{eqnarray}\label{Coupling12}
\langle 0| J (0)|P_{\frac{1}{2}}^{-}(p)\rangle &=&\lambda^{-}_{\frac{1}{2}} U^{-}(p,s) \, , \nonumber \\
\langle 0| J (0)|P_{\frac{1}{2}}^{+}(p)\rangle &=&\lambda^{+}_{\frac{1}{2}} i\gamma_5 U^{+}(p,s) \, ,
\end{eqnarray}
\begin{eqnarray}
\langle 0| J_{\mu} (0)|P_{\frac{3}{2}}^{-}(p)\rangle &=&\lambda^{-}_{\frac{3}{2}} U^{-}_\mu(p,s) \, , \nonumber \\
\langle 0| J_{\mu} (0)|P_{\frac{3}{2}}^{+}(p)\rangle &=&\lambda^{+}_{\frac{3}{2}}i\gamma_5 U^{+}_\mu(p,s) \, , \nonumber \\
\langle 0| J_{\mu} (0)|P_{\frac{1}{2}}^{+}(p)\rangle &=&f^{+}_{\frac{1}{2}}p_\mu U^{+}(p,s) \, , \nonumber \\
\langle 0| J_{\mu} (0)|P_{\frac{1}{2}}^{-}(p)\rangle &=&f^{-}_{\frac{1}{2}}p_\mu i\gamma_5 U^{-}(p,s) \, ,
\end{eqnarray}
\begin{eqnarray}\label{Coupling52}
\langle 0| J_{\mu\nu} (0)|P_{\frac{5}{2}}^{-}(p)\rangle &=&\sqrt{2}\lambda^{-}_{\frac{5}{2}} U^{-}_{\mu\nu}(p,s) \, ,\nonumber\\
\langle 0| J_{\mu\nu} (0)|P_{\frac{5}{2}}^{+}(p)\rangle &=&\sqrt{2}\lambda^{+}_{\frac{5}{2}}i\gamma_5 U^{+}_{\mu\nu}(p,s) \, ,\nonumber\\
\langle 0| J_{\mu\nu} (0)|P_{\frac{3}{2}}^{+}(p)\rangle &=&f^{+}_{\frac{3}{2}} \left[p_\mu U^{+}_{\nu}(p,s)+p_\nu U^{+}_{\mu}(p,s)\right] \, , \nonumber\\
\langle 0| J_{\mu\nu} (0)|P_{\frac{3}{2}}^{-}(p)\rangle &=&f^{-}_{\frac{3}{2}}i\gamma_5 \left[p_\mu U^{-}_{\nu}(p,s)+p_\nu U^{-}_{\mu}(p,s)\right] \, , \nonumber\\
\langle 0| J_{\mu\nu} (0)|P_{\frac{1}{2}}^{-}(p)\rangle &=&g^{-}_{\frac{1}{2}}p_\mu p_\nu U^{-}(p,s) \, , \nonumber\\
\langle 0| J_{\mu\nu} (0)|P_{\frac{1}{2}}^{+}(p)\rangle &=&g^{+}_{\frac{1}{2}}p_\mu p_\nu i\gamma_5 U^{+}(p,s) \, ,
\end{eqnarray}
where the superscripts $\pm$ denote the positive parity and negative parity, respectively, the subscripts $\frac{1}{2}$, $\frac{3}{2}$ and $\frac{5}{2}$ denote the spins of the pentaquark states, the $\lambda$, $f$ and $g$ are the pole residues.
The spinors $U^\pm(p,s)$ satisfy the Dirac equations $(\not\!\!p-M_{\pm})U^{\pm}(p)=0$, while the spinors $U^{\pm}_\mu(p,s)$ and $U^{\pm}_{\mu\nu}(p,s)$ satisfy the Rarita-Schwinger equations $(\not\!\!p-M_{\pm})U^{\pm}_\mu(p)=0$ and $(\not\!\!p-M_{\pm})U^{\pm}_{\mu\nu}(p)=0$, and the relations $\gamma^\mu U^{\pm}_\mu(p,s)=0$,
$p^\mu U^{\pm}_\mu(p,s)=0$, $\gamma^\mu U^{\pm}_{\mu\nu}(p,s)=0$,
$p^\mu U^{\pm}_{\mu\nu}(p,s)=0$, $ U^{\pm}_{\mu\nu}(p,s)= U^{\pm}_{\nu\mu}(p,s)$, respectively. For more details about the spinors, one can consult Ref.\cite{Wang1508-EPJC}.
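In deriving the hadronic representations we also need the completeness relations for these spinors; the standard on-shell spin sums for the spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ cases are (quoted here for orientation)

```latex
\begin{eqnarray}
\sum_s U^{\pm}(p,s)\bar{U}^{\pm}(p,s)&=&\not\!p+M_{\pm} \, , \nonumber\\
\sum_s U^{\pm}_{\mu}(p,s)\bar{U}^{\pm}_{\nu}(p,s)&=&\left(\not\!p+M_{\pm}\right)\left(-g_{\mu\nu}+\frac{\gamma_\mu\gamma_\nu}{3}+\frac{2p_\mu p_\nu}{3M_{\pm}^2}-\frac{p_\mu\gamma_\nu-p_\nu\gamma_\mu}{3M_{\pm}}\right) \, ,
\end{eqnarray}
```

where in the hadronic representations below only the tensor structure $-g_{\mu\nu}$ (and $g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}$ in the spin-$\frac{5}{2}$ case) is retained explicitly, the remaining structures being absorbed into the ellipses.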
At the phenomenological side, we insert a complete set of intermediate pentaquark states with the same quantum numbers as the current operators $J(x)$, $i\gamma_5 J(x)$, $J_\mu(x)$, $i\gamma_5 J_\mu(x)$, $J_{\mu\nu}(x)$ and $i\gamma_5 J_{\mu\nu}(x)$ into the correlation functions
$\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$ to obtain the hadronic representation
\cite{SVZ79,PRT85}. We take into account the current-pentaquark couplings (or the definitions of the pole residues) shown in Eqs.\eqref{Coupling12}-\eqref{Coupling52}, and isolate the pole terms of the lowest
states of the negative parity and positive parity hidden-charm pentaquark states, and obtain the
following results:
\begin{eqnarray}\label{CF-Hadron-12}
\Pi(p) & = & {\lambda^{-}_{\frac{1}{2}}}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} }+ {\lambda^{+}_{\frac{1}{2}}}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} } +\cdots \, ,\nonumber\\
&=&\Pi_{\frac{1}{2}}^1(p^2)\!\not\!{p}+\Pi_{\frac{1}{2}}^0(p^2)\, ,
\end{eqnarray}
\begin{eqnarray}\label{CF-Hadron-32}
\Pi_{\mu\nu}(p) & = & \left({\lambda^{-}_{\frac{3}{2}}}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} }+ {\lambda^{+}_{\frac{3}{2}}}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} }\right) \left(- g_{\mu\nu}\right)+\cdots \, ,\nonumber\\
&=&\left[\Pi_{\frac{3}{2}}^1(p^2)\!\not\!{p}+\Pi_{\frac{3}{2}}^0(p^2)\right]\left(- g_{\mu\nu}\right)+\cdots\, ,
\end{eqnarray}
\begin{eqnarray}\label{CF-Hadron-52}
\Pi_{\mu\nu\alpha\beta}(p) & = & \left({\lambda^{-}_{\frac{5}{2}}}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} } +{\lambda^{+}_{\frac{5}{2}}}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} }\right)\left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right) +\cdots \, , \nonumber\\
& = & \left[\Pi_{\frac{5}{2}}^1(p^2)\!\not\!{p}+\Pi_{\frac{5}{2}}^0(p^2)\right]\left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right) +\cdots \, .
\end{eqnarray}
In this article, we study the components $\Pi_{\frac{1}{2}}^1(p^2)$, $\Pi_{\frac{1}{2}}^0(p^2)$, $\Pi_{\frac{3}{2}}^1(p^2)$, $\Pi_{\frac{3}{2}}^0(p^2)$, $\Pi_{\frac{5}{2}}^1(p^2)$, $\Pi_{\frac{5}{2}}^0(p^2)$ to avoid possible contaminations from other pentaquark states with different spins. For detailed discussions about this subject, one can consult Refs.\cite{Wang1508-EPJC,Wang-cc-baryon-penta}.
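The two parities can then be separated at the hadronic level; for orientation (a standard manipulation in this approach), the spectral densities obtained from Eq.\eqref{CF-Hadron-12} are

```latex
\begin{eqnarray}
\rho^{1}_{\frac{1}{2}}(s)&=&{\lambda^{-}_{\frac{1}{2}}}^2\,\delta\left(s-M_{-}^2\right)+{\lambda^{+}_{\frac{1}{2}}}^2\,\delta\left(s-M_{+}^2\right)\, , \nonumber\\
\rho^{0}_{\frac{1}{2}}(s)&=&M_{-}{\lambda^{-}_{\frac{1}{2}}}^2\,\delta\left(s-M_{-}^2\right)-M_{+}{\lambda^{+}_{\frac{1}{2}}}^2\,\delta\left(s-M_{+}^2\right)\, ,
\end{eqnarray}
```

so the combination $\sqrt{s}\,\rho^{1}_{\frac{1}{2}}(s)+\rho^{0}_{\frac{1}{2}}(s)=2M_{-}{\lambda^{-}_{\frac{1}{2}}}^2\,\delta\left(s-M_{-}^2\right)$ receives no contribution from the positive-parity state, and analogously for the spin-$\frac{3}{2}$ and spin-$\frac{5}{2}$ channels.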
Now we take a digression to discuss the relation between the (compact) pentaquark scenario and molecule scenario.
In this article, we study the mass spectrum of the diquark-diquark-antiquark type pentaquark states with the QCD sum rules. The diquark-diquark-antiquark type pentaquark current operator with special quantum numbers couples potentially to a special pentaquark state,
while the current operator can be re-arranged in both the color and Dirac-spinor spaces and rewritten as a special superposition of
a series of color-singlet-color-singlet type (baryon-meson type) current operators.
We perform Fierz rearrangements for the currents $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$ to obtain the results,
\begin{eqnarray} \label{Fierz-J1}
J^1&=&-\frac{1}{4}\mathcal{S}_{ud}\gamma_5c\,\bar{c}u+\frac{1}{4}\mathcal{S}_{ud}\gamma^\lambda\gamma_5c\,\bar{c}\gamma_{\lambda}u
+\frac{1}{8}\mathcal{S}_{ud}\sigma^{\lambda\tau}\gamma_5c\,\bar{c}\sigma_{\lambda\tau}u+\frac{1}{4}\mathcal{S}_{ud}\gamma^{\lambda}c\,\bar{c}\gamma_{\lambda}\gamma_5u
+\frac{i}{4}\mathcal{S}_{ud}c\,\bar{c}i\gamma_5 u \nonumber\\
&&+\frac{1}{4}\mathcal{S}_{ud}\gamma_5u\,\bar{c}c-\frac{1}{4}\mathcal{S}_{ud}\gamma^\lambda\gamma_5u\,\bar{c}\gamma_{\lambda}c
-\frac{1}{8}\mathcal{S}_{ud}\sigma^{\lambda\tau}\gamma_5u\,\bar{c}\sigma_{\lambda\tau}c-\frac{1}{4}\mathcal{S}_{ud}\gamma^{\lambda}u\,\bar{c}\gamma_{\lambda}\gamma_5c
-\frac{i}{4}\mathcal{S}_{ud}u\,\bar{c}i\gamma_5 c\, , \nonumber\\
\end{eqnarray}
\begin{eqnarray}
J^2&=&-\mathcal{S}_{ud}\gamma_5c\,\bar{c}u+\frac{1}{2}\mathcal{S}_{ud}\gamma^\lambda\gamma_5c\,\bar{c}\gamma_{\lambda}u
-\frac{1}{2}\mathcal{S}_{ud}\gamma^{\lambda}c\,\bar{c}\gamma_{\lambda}\gamma_5u-i\mathcal{S}_{ud}c\,\bar{c}i\gamma_5 u -\mathcal{S}_{ud}\gamma_5u\,\bar{c}c\nonumber\\
&&+\frac{1}{2}\mathcal{S}_{ud}\gamma^\lambda\gamma_5u\,\bar{c}\gamma_{\lambda}c
-\frac{1}{2}\mathcal{S}_{ud}\gamma^{\lambda}u\,\bar{c}\gamma_{\lambda}\gamma_5c
-i\mathcal{S}_{ud}u\,\bar{c}i\gamma_5 c\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{3}J^3&=&\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}c\,\bar{c}d+\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{\lambda}c\,\bar{c}\gamma^{\lambda}d
-\frac{1}{8}\mathcal{S}^\mu_{uu}\gamma_{\mu}\sigma_{\lambda\tau}c\,\bar{c}\sigma^{\lambda\tau}d
+\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{\lambda}\gamma_5c\,\bar{c}\gamma^{\lambda}\gamma_5d\nonumber\\
&&-\frac{i}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{5}c\,\bar{c}i\gamma_{5}d
-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}d\,\bar{c}c-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{\lambda}d\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}^\mu_{uu}\gamma_{\mu}\sigma_{\lambda\tau}d\,\bar{c}\sigma^{\lambda\tau}c\nonumber\\
&&-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{\lambda}\gamma_5d\,\bar{c}\gamma^{\lambda}\gamma_5c
+\frac{i}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{5}d\,\bar{c}i\gamma_{5}c+2\left(S^\mu_{uu} \to S^\mu_{ud},\, d \to u \right)\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{3}J^4&=&-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}c\,\bar{c}d+\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\lambda}\gamma_{\mu}c\,\bar{c}\gamma^{\lambda}d
+\frac{1}{8}\mathcal{S}^\mu_{uu}\sigma_{\lambda\tau}\gamma_{\mu}c\,\bar{c}\sigma^{\lambda\tau}d
-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\lambda}\gamma_{\mu}\gamma_5c\,\bar{c}\gamma^{\lambda}\gamma_5d\nonumber\\
&&-\frac{i}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{5}c\,\bar{c}i\gamma_{5}d
-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}d\,\bar{c}c+\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\lambda}\gamma_{\mu}d\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}^\mu_{uu}\sigma_{\lambda\tau}\gamma_{\mu}d\,\bar{c}\sigma^{\lambda\tau}c\nonumber\\
&&-\frac{1}{4}\mathcal{S}^\mu_{uu}\gamma_{\lambda}\gamma_{\mu}\gamma_5d\,\bar{c}\gamma^{\lambda}\gamma_5c
-\frac{i}{4}\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{5}d\,\bar{c}i\gamma_{5}c+2\left(S^\mu_{uu} \to S^\mu_{ud},\, d \to u \right)\, ,
\end{eqnarray}
\begin{eqnarray}
J^1_\mu&=&-\frac{1}{4}\mathcal{S}_{ud} \gamma_{\mu}c\,\bar{c}u+\frac{1}{4}\mathcal{S}_{ud}\gamma_{\lambda}\gamma_{\mu}c\,\bar{c}\gamma^{\lambda}u
+\frac{1}{8}\mathcal{S}_{ud} \sigma_{\lambda\tau}\gamma_{\mu}c\,\bar{c}\sigma^{\lambda\tau}u
-\frac{1}{4}\mathcal{S}_{ud} \gamma_{\lambda}\gamma_{\mu}\gamma_5c\,\bar{c}\gamma^{\lambda}\gamma_5u\nonumber\\
&&-\frac{i}{4}\mathcal{S}_{ud} \gamma_{\mu}\gamma_{5}c\,\bar{c}i\gamma_{5}u
-\frac{1}{4}\mathcal{S}_{ud}\gamma_{\mu}u\,\bar{c}c+\frac{1}{4}\mathcal{S}_{ud}\gamma_{\lambda}\gamma_{\mu}u\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}_{ud}\sigma_{\lambda\tau}\gamma_{\mu}u\,\bar{c}\sigma^{\lambda\tau}c\nonumber\\
&&-\frac{1}{4}\mathcal{S}_{ud}\gamma_{\lambda}\gamma_{\mu}\gamma_5u\,\bar{c}\gamma^{\lambda}\gamma_5c
-\frac{i}{4}\mathcal{S}_{ud}\gamma_{\mu}\gamma_{5}u\,\bar{c}i\gamma_{5}c\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{3}J^2_\mu&=&-\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma_5c\,\bar{c}d+\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma^\lambda\gamma_5c\,\bar{c}\gamma_{\lambda}d
+\frac{1}{8}\mathcal{S}^{uu}_{\mu}\sigma^{\lambda\tau}\gamma_5c\,\bar{c}\sigma_{\lambda\tau}d
+\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma^{\lambda}c\,\bar{c}\gamma_{\lambda}\gamma_5d
+\frac{i}{4}\mathcal{S}^{uu}_{\mu}c\,\bar{c}i\gamma_5 d \nonumber\\
&&+\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma_5d\,\bar{c}c-\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma^\lambda\gamma_5d\,\bar{c}\gamma_{\lambda}c
-\frac{1}{8}\mathcal{S}^{uu}_{\mu}\sigma^{\lambda\tau}\gamma_5d\,\bar{c}\sigma_{\lambda\tau}c
-\frac{1}{4}\mathcal{S}^{uu}_{\mu}\gamma^{\lambda}d\,\bar{c}\gamma_{\lambda}\gamma_5c
-\frac{i}{4}\mathcal{S}^{uu}_{\mu}d\,\bar{c}i\gamma_5 c \nonumber\\
&&+2\left(S_\mu^{uu} \to S_\mu^{ud},\, d \to u \right)\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{3}J^3_\mu&=&-\mathcal{S}^{uu}_{\mu}\gamma_5c\,\bar{c}d
+\frac{1}{2}\mathcal{S}^{uu}_{\mu}\gamma^\lambda\gamma_5c\,\bar{c}\gamma_{\lambda}d
-\frac{1}{2}\mathcal{S}^{uu}_{\mu}\gamma^{\lambda}c\,\bar{c}\gamma_{\lambda}\gamma_5d
-i\mathcal{S}^{uu}_{\mu}c\,\bar{c}i\gamma_5 d-\mathcal{S}^{uu}_{\mu}\gamma_5d\,\bar{c}c \nonumber\\
&&+\frac{1}{2}\mathcal{S}^{uu}_{\mu}\gamma^\lambda\gamma_5d\,\bar{c}\gamma_{\lambda}c
-\frac{1}{2}\mathcal{S}^{uu}_{\mu}\gamma^{\lambda}d\,\bar{c}\gamma_{\lambda}\gamma_5c
-i\mathcal{S}^{uu}_{\mu}d\,\bar{c}i\gamma_5 c +2\left(S_\mu^{uu} \to S_\mu^{ud},\, d \to u \right)\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{3}J^4_\mu&=&-\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\gamma_{\mu}c\,\bar{c}d
+\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}c\,\bar{c}\gamma_{\mu}d
-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\sigma_{\lambda\mu}c\,\bar{c}\gamma^{\lambda}d
+\frac{1}{8}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\sigma_{\lambda\tau}\gamma_{\mu}c\,\bar{c}\sigma^{\lambda\tau}d\nonumber\\
&&+\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}c\,\bar{c}\gamma_{\mu}\gamma_{5}d
-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}\sigma_{\lambda\mu}c\,\bar{c}\gamma^{\lambda}\gamma_{5}d
-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}\gamma_{\mu}c\,\bar{c}i\gamma_{5}d
-\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\gamma_{\mu}d\,\bar{c}c\nonumber\\
&&+\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}d\,\bar{c}\gamma_{\mu}c
-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\sigma_{\lambda\mu}d\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}_{uu}^{\alpha}\gamma_{5}\gamma_{\alpha}\sigma_{\lambda\tau}\gamma_{\mu}d\,\bar{c}\sigma^{\lambda\tau}c
+\frac{1}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}d\,\bar{c}\gamma_{\mu}\gamma_{5}c\nonumber\\
&&-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}\sigma_{\lambda\mu}d\,\bar{c}\gamma^{\lambda}\gamma_{5}c
-\frac{i}{4}\mathcal{S}_{uu}^{\alpha}\gamma_{\alpha}\gamma_{\mu}d\,\bar{c}i\gamma_{5}c+2\left(S^\alpha_{uu} \to S^\alpha_{ud},\, d \to u \right)\, ,
\end{eqnarray}
\begin{eqnarray}
\sqrt{6}J^1_{\mu\nu}&=&-\frac{1}{4}\mathcal{S}_\mu^{uu}\gamma_{\nu}c\,\bar{c}d
+\frac{1}{4}\mathcal{S}_\mu^{uu}c\,\bar{c}\gamma_{\nu}d
-\frac{i}{4}\mathcal{S}_\mu^{uu}\sigma_{\lambda\nu}c\,\bar{c}\gamma^{\lambda}d
+\frac{1}{8}\mathcal{S}_\mu^{uu}\sigma_{\lambda\tau}\gamma_{\nu}c\,\bar{c}\sigma^{\lambda\tau}d\nonumber\\
&&-\frac{1}{4}\mathcal{S}_\mu^{uu}\gamma_{\lambda}\gamma_{\nu}\gamma_5c\,\bar{c}\gamma^{\lambda}\gamma_5d
-\frac{i}{4}\mathcal{S}_\mu^{uu}\gamma_{\nu}\gamma_{5}c\,\bar{c}i\gamma_{5}d
-\frac{1}{4}\mathcal{S}_\mu^{uu}\gamma_{\nu}d\,\bar{c}c
+\frac{1}{4}\mathcal{S}_\mu^{uu}d\,\bar{c}\gamma_{\nu}c\nonumber\\
&&-\frac{i}{4}\mathcal{S}_\mu^{uu}\sigma_{\lambda\nu}d\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}_\mu^{uu}\sigma_{\lambda\tau}\gamma_{\nu}d\,\bar{c}\sigma^{\lambda\tau}c
-\frac{1}{4}\mathcal{S}_\mu^{uu}\gamma_{\lambda}\gamma_{\nu}\gamma_5d\,\bar{c}\gamma^{\lambda}\gamma_5c\nonumber\\
&&-\frac{i}{4}\mathcal{S}_\mu^{uu}\gamma_{\nu}\gamma_{5}d\,\bar{c}i\gamma_{5}c+2\left(S_\mu^{uu} \to S_\mu^{ud},\, d \to u \right)
+\left(\mu \leftrightarrow \nu\right)\, ,
\end{eqnarray}
\begin{eqnarray}\label{Fierz-Jmunu2}
\sqrt{2}J^2_{\mu\nu}&=&-\frac{1}{4}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\gamma_{\mu}c\,\bar{c}u
+\frac{1}{4}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\gamma_{\lambda}\gamma_{\mu}c\,\bar{c}\gamma^{\lambda}u
+\frac{1}{8}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\sigma_{\lambda\tau}\gamma_{\mu}c\,\bar{c}\sigma^{\lambda\tau}u\nonumber\\
&&+\frac{1}{4}\mathcal{S}_{ud}\gamma_{\nu}\gamma_{\lambda}\gamma_{\mu}c\,\bar{c}\gamma^{\lambda}\gamma_5u
-\frac{i}{4}\mathcal{S}_{ud}\gamma_{\nu}\gamma_{\mu}c\,\bar{c}i\gamma_5 u -\frac{1}{4}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\gamma_{\mu}u\,\bar{c}c\nonumber\\
&&+\frac{1}{4}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\gamma_{\lambda}\gamma_{\mu}u\,\bar{c}\gamma^{\lambda}c
+\frac{1}{8}\mathcal{S}_{ud}\gamma_5\gamma_{\nu}\sigma_{\lambda\tau}\gamma_{\mu}u\,\bar{c}\sigma^{\lambda\tau}c
+\frac{1}{4}\mathcal{S}_{ud}\gamma_{\nu}\gamma_{\lambda}\gamma_{\mu}u\,\bar{c}\gamma^{\lambda}\gamma_5c\nonumber\\
&&-\frac{i}{4}\mathcal{S}_{ud}\gamma_{\nu}\gamma_{\mu}u\,\bar{c}i\gamma_5 c +\left(\mu\leftrightarrow \nu \right)\, ,
\end{eqnarray}
where $\mathcal{S}_{ud}\Gamma c=\varepsilon^{ijk}u^{Ti}C\gamma_5d^j\Gamma c^k$, $\mathcal{S}_{ud}\Gamma u=\varepsilon^{ijk}u^{Ti}C\gamma_5d^j\Gamma u^k$,
$\mathcal{S}^\mu_{uu}\Gamma c=\varepsilon^{ijk}u^{Ti}C\gamma^{\mu}u^j\Gamma c^k$,
$\mathcal{S}^\mu_{ud}\Gamma c=\varepsilon^{ijk}u^{Ti}C\gamma^{\mu}d^j\Gamma c^k$,
$\mathcal{S}^\mu_{uu}\Gamma d=\varepsilon^{ijk}u^{Ti}C\gamma^{\mu}u^j\Gamma d^k$,
$\mathcal{S}^\mu_{ud}\Gamma u=\varepsilon^{ijk}u^{Ti}C\gamma^{\mu}d^j\Gamma u^k$, and the $\Gamma$ are Dirac matrices.
The components $\mathcal{S}_{ud}\Gamma c$ and $\mathcal{S}_{ud}\Gamma u$ have the scalar diquark operator $\varepsilon^{ijk}u^{Ti}C\gamma_5d^j$, and can be classified as
the $\Lambda$-type currents, the components $\mathcal{S}^\mu_{uu}\Gamma c$,
$\mathcal{S}^\mu_{ud}\Gamma c$,
$\mathcal{S}^\mu_{uu}\Gamma d$ and
$\mathcal{S}^\mu_{ud}\Gamma u$ have the axialvector diquark operator $\varepsilon^{ijk}u^{Ti}C\gamma^{\mu} u^j$ or $\varepsilon^{ijk}u^{Ti}C\gamma^{\mu} d^j$, and can be classified as
the $\Sigma$-type currents.
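For orientation, the rearrangements in Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2} follow from the completeness of the Dirac basis; the underlying identity (standard, with $\sigma^{\mu\nu}$ summed over all index pairs) reads

```latex
\begin{eqnarray}
\delta_{\alpha\beta}\,\delta_{\lambda\tau}&=&\frac{1}{4}\delta_{\alpha\tau}\delta_{\lambda\beta}
+\frac{1}{4}\left(\gamma^{\mu}\right)_{\alpha\tau}\left(\gamma_{\mu}\right)_{\lambda\beta}
+\frac{1}{8}\left(\sigma^{\mu\nu}\right)_{\alpha\tau}\left(\sigma_{\mu\nu}\right)_{\lambda\beta} \nonumber\\
&&-\frac{1}{4}\left(\gamma^{\mu}\gamma_{5}\right)_{\alpha\tau}\left(\gamma_{\mu}\gamma_{5}\right)_{\lambda\beta}
+\frac{1}{4}\left(\gamma_{5}\right)_{\alpha\tau}\left(\gamma_{5}\right)_{\lambda\beta}\, ,
\end{eqnarray}
```

which, applied to the quark fields in the currents together with a minus sign from reordering the anticommuting field operators, generates the coefficients $\pm\frac{1}{4}$ and $\pm\frac{1}{8}$ appearing above.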
The components of the currents $J^1(x)$ and $J^2(x)$ have analogous $\Lambda$-type structures, while the components of the currents $J^3(x)$ and $J^4(x)$ have analogous $\Sigma$-type structures, as do the components of the currents $J_\mu^2(x)$, $J_\mu^3(x)$ and $J_\mu^4(x)$.
Currents that have analogous components potentially mix with each other. However, the Fierz rearrangements (see Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2}) in the color and Dirac-spinor spaces are not unique, so we cannot exclude mixings between the $\Lambda$-type and $\Sigma$-type current operators when they have the same spin-parity $J^P$; indeed, direct calculations indicate that the non-diagonal correlation functions $\Pi^{ij}(p)$, $\Pi^{ij}_{\mu\nu}(p)$ and $\Pi^{ij}_{\mu\nu\alpha\beta}(p)\neq 0$ for $i\neq {j}$, where
\begin{eqnarray}
\Pi^{ij}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J^{i}(x)\bar{J}^{j}(0)\right\}|0\rangle \, , \nonumber \\
\Pi^{ij}_{\mu\nu}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J_{\mu}^{i}(x)\bar{J}_{\nu}^{j}(0)\right\}|0\rangle \, , \nonumber \\
\Pi^{ij}_{\mu\nu\alpha\beta}(p)&=&i\int d^4x e^{ip \cdot x} \langle0|T\left\{J_{\mu\nu}^{i}(x)\bar{J}^{j}_{\alpha\beta}(0)\right\}|0\rangle \, ,
\end{eqnarray}
the correlation functions shown in Eq.\eqref{CF-Pi-Pi-Pi} correspond to the case $i=j$.
We can introduce the mixing matrices $U$, $J^{\prime i}=U_{ij}J^j$, $J_\mu^{\prime i}=U_{ij}J_\mu^j$ and $J_{\mu\nu}^{\prime i}=U_{ij}J_{\mu\nu}^j$, where the
$U$ are $4\times4$, $4\times4$ and $2\times2$ matrices, respectively. Then we obtain
the diagonal correlation functions,
\begin{eqnarray}
\Pi^{\prime ij}(p)&=& U_{im}\Pi^{ mn}(p)U^{\dagger}_{nj} \, , \nonumber \\
\Pi^{\prime ij}_{\mu\nu}(p)&=&U_{im}\Pi^{ mn}_{\mu\nu}(p)U^{\dagger}_{nj}\, , \nonumber \\
\Pi^{\prime ij}_{\mu\nu\alpha\beta}(p)&=&U_{im}\Pi^{ mn}_{\mu\nu\alpha\beta}(p)U^{\dagger}_{nj} \, ,
\end{eqnarray}
with the properties $\Pi^{\prime ij}(p)$, $\Pi^{\prime ij}_{\mu\nu}(p)$, $\Pi^{\prime ij}_{\mu\nu\alpha\beta}(p)\propto \delta_{ij}$. The matrices $U$ can be determined
by direct calculations based on the QCD sum rules; this tedious task may be our next work. The current operators $J^{\prime i}(x)$, $J_\mu^{\prime i}(x)$ and $J_{\mu\nu}^{\prime i}(x)$ couple potentially to more physical pentaquark states, which have more than one diquark-diquark-antiquark type Fock component.
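The diagonalization itself amounts to a standard eigenvalue problem at each $p^2$; a toy numerical sketch with a hypothetical symmetric $2\times2$ correlator matrix (the numbers are illustrative only, not values from the present analysis):

```python
import math

# Toy symmetric 2x2 "correlation matrix" Pi_{mn} at a fixed p^2
# (hypothetical numbers, for illustration only).
Pi = [[2.0, 0.5],
      [0.5, 1.0]]

# For a real symmetric 2x2 matrix, the Jacobi rotation angle that
# diagonalizes it is theta = 0.5 * atan2(2*Pi01, Pi00 - Pi11);
# then U Pi U^T is diagonal.
theta = 0.5 * math.atan2(2.0 * Pi[0][1], Pi[0][0] - Pi[1][1])
c, s = math.cos(theta), math.sin(theta)
U = [[c, s], [-s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

UT = [[U[j][i] for j in range(2)] for i in range(2)]
D = mat_mul(mat_mul(U, Pi), UT)
print(D)  # off-diagonal entries vanish up to rounding
```

In the sum rule application one such rotation (for each Lorentz structure) defines the currents $J^{\prime i}$ that couple to states of definite composition.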
The color-singlet-color-singlet type current operators shown in Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2} couple potentially
to the baryon-meson pairs or the pentaquark molecular states.
For example, the components $\mathcal{S}_{ud}c\,\bar{c}i\gamma_5 u $ and $\mathcal{S}_{ud}\gamma^\lambda\gamma_5u\,\bar{c}\gamma_{\lambda}c$ of the current $J^1(x)$ (also $J^2(x)$) couple potentially to the $\Lambda_c^+\bar{D}^0$ and $pJ/\psi$, respectively;
the components $\mathcal{S}^\mu_{uu}\gamma_{\mu}\gamma_{5}c\,\bar{c}i\gamma_{5}d$,
$\mathcal{S}^\mu_{ud}\gamma_{\mu}\gamma_{5}u\,\bar{c}i\gamma_{5}c$ and $\mathcal{S}^\mu_{ud}\gamma_{\lambda}\gamma_{\mu}u\,\bar{c}\gamma^{\lambda}c$ of the current $J^3(x)$ (also $J^4(x)$) couple potentially to the $\Sigma_c^{++}D^-$, $p \eta_c$ and $pJ/\psi$, respectively.
The diquark-diquark-antiquark type pentaquark state can be taken as a special superposition of a series of baryon-meson pairs or pentaquark molecular states and embodies the net effects; the decays to its components (baryon-meson pairs) are Okubo-Zweig-Iizuka super-allowed.
From Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2}, we can see that there are $\bar{c}c$, $\bar{c}i\gamma_5c$, $\bar{c}\gamma_{\mu}c$ and $\bar{c}\gamma_{\mu}\gamma_5c$
components in all the current operators, which have definite heavy-quark spin; the conservation of the heavy-quark spin favors decays to the final states involving the $\chi_{c0}$, $\eta_c$, $J/\psi$ and $\chi_{c1}$.
In fact, we should be careful in performing the Fierz rearrangements: the rearrangements in the color and Dirac-spinor spaces are non-trivial, and the scenarios of the pentaquark states and molecular states are different.
The spatial separation among the diquark, diquark and antiquark leads to small wave-function overlaps with the baryon-meson pairs, so the rearrangements in the color and Dirac-spinor spaces are suppressed, which can account for the small widths of the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ qualitatively.
It is difficult to take into account the non-local effects among the diquark, diquark and antiquark in the currents directly,
for example, the current $J^1(x)$ can be modified to
\begin{eqnarray}
J^1(x,\epsilon,\epsilon^\prime)&=&\varepsilon^{ila} \varepsilon^{ijk}\varepsilon^{lmn} u^T_j(x+\epsilon) C\gamma_5 d_k(x+\epsilon)\,u^T_m(x) C\gamma_5 c_n(x)\, C\bar{c}^{T}_{a}(x+\epsilon^\prime) \, ,
\end{eqnarray}
to account for the non-locality by adding two finite separations $\epsilon$ and $\epsilon^\prime$; however, it is difficult to deal with the finite $\epsilon$ and $\epsilon^\prime$ in carrying out the operator product expansion, so we have to take the limit $\epsilon, \, \epsilon^\prime \to 0$.
Now let us go back to the hadron representation of the correlation functions shown in Eqs.\eqref{CF-Hadron-12}-\eqref{CF-Hadron-52}. We obtain the spectral densities at the phenomenological side through the dispersion relation,
\begin{eqnarray}
\frac{{\rm Im}\Pi_{j}^1(s)}{\pi}&=& {\lambda^{-}_{j}}^2 \delta\left(s-M_{-}^2\right)+{\lambda^{+}_{j}}^2 \delta\left(s-M_{+}^2\right) =\, \rho^1_{j,H}(s) \, , \\
\frac{{\rm Im}\Pi^0_{j}(s)}{\pi}&=&M_{-}{\lambda^{-}_{j}}^2 \delta\left(s-M_{-}^2\right)-M_{+}{\lambda^{+}_{j}}^2 \delta\left(s-M_{+}^2\right)
=\rho^0_{j,H}(s) \, ,
\end{eqnarray}
where $j=\frac{1}{2}$, $\frac{3}{2}$, $\frac{5}{2}$ and the subscript $H$ denotes the hadron side.
We then introduce the weight functions $\sqrt{s}\exp\left(-\frac{s}{T^2}\right)$ and $\exp\left(-\frac{s}{T^2}\right)$ to obtain the QCD sum rules
at the hadron side,
\begin{eqnarray}
\int_{4m_c^2}^{s_0}ds \left[\sqrt{s}\,\rho^1_{j,H}(s)+\rho^0_{j,H}(s)\right]\exp\left( -\frac{s}{T^2}\right)
&=&2M_{-}{\lambda^{-}_{j}}^2\exp\left( -\frac{M_{-}^2}{T^2}\right) \, ,
\end{eqnarray}
where the $s_0$ are the continuum threshold parameters and the $T^2$ are the Borel parameters.
In this way, we separate the contributions of the negative-parity and positive-parity pentaquark states unambiguously.
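The exact cancellation of the positive-parity pole under the weight $\sqrt{s}\,\rho^1_{j,H}(s)+\rho^0_{j,H}(s)$ can be verified numerically. A minimal sketch (the masses and residues below are illustrative toy values, not fitted numbers):

```python
import numpy as np

# Toy pole parameters (illustrative assumptions, GeV units)
M_m, M_p = 4.3, 4.9            # negative-/positive-parity masses
lam_m, lam_p = 1.4e-3, 2.0e-3  # pole residues
T2 = 3.0                       # Borel parameter T^2 (GeV^2)

# The hadronic spectral densities are delta functions at s = M^2, so the
# Borel-weighted integral of sqrt(s)*rho1 + rho0 collapses to pole terms:
s_m, s_p = M_m**2, M_p**2
lhs = ((np.sqrt(s_m) + M_m) * lam_m**2 * np.exp(-s_m / T2)
       + (np.sqrt(s_p) - M_p) * lam_p**2 * np.exp(-s_p / T2))

# The positive-parity term cancels (sqrt(M_p^2) = M_p), leaving only the
# negative-parity ground state contribution:
rhs = 2.0 * M_m * lam_m**2 * np.exp(-M_m**2 / T2)
print(abs(lhs - rhs) < 1e-12)  # True
```

The same cancellation holds for any values of $M_\pm$ and $\lambda_\pm$, which is why the parity separation is unambiguous.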
\begin{figure}
\centering
\includegraphics[totalheight=3.0cm,width=15cm]{Penta-qq-qq-qqg-13.eps}
\vglue+3mm
\includegraphics[totalheight=3.0cm,width=15cm]{Penta-qq-qqg-qqg.eps}
\caption{The diagrams contributing to the condensates $\langle\bar{q} q\rangle^2\langle\bar{q}g_s\sigma Gq\rangle $, $\langle\bar{q} q\rangle \langle\bar{q}g_s\sigma Gq\rangle^2 $, $\langle \bar{q}q\rangle^3\langle \frac{\alpha_s}{\pi}GG\rangle$. Other
diagrams, obtained by interchanging the $c$-quark lines (dashed lines) or light-quark lines (solid lines), are implied. }\label{Feynman}
\end{figure}
In the following, we briefly outline the operator product expansion for the correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$ in perturbative QCD. First, we contract the $u$, $d$ and $c$ quark fields in the correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$
with the Wick theorem, for example,
\begin{eqnarray}\label{QCD-Pi}
\Pi(p)&=&i\,\varepsilon^{ila}\varepsilon^{ijk}\varepsilon^{lmn}\varepsilon^{i^{\prime}l^{\prime}a^{\prime}}\varepsilon^{i^{\prime}j^{\prime}k^{\prime}}
\varepsilon^{l^{\prime}m^{\prime}n^{\prime}}\int d^4x e^{ip\cdot x} \nonumber\\
&&\Big\{ - Tr\Big[\gamma_5 D_{kk^\prime}(x) \gamma_5 C U^{T}_{jj^\prime}(x)C\Big] \,Tr\Big[\gamma_5 C_{nn^\prime}(x) \gamma_5 C U^{T}_{mm^\prime}(x)C\Big] C C_{a^{\prime}a}^T(-x)C \nonumber\\
&&+ Tr \Big[\gamma_5 D_{kk^\prime}(x) \gamma_5 C U^{T}_{mj^\prime}(x)C \gamma_5 C_{nn^\prime}(x) \gamma_5 C U^{T}_{jm^\prime}(x)C\Big] C C_{a^{\prime}a}^T(-x)C \Big\} \, ,
\end{eqnarray}
for the current $J(x)=J^1(x)$, where
the $U_{ij}(x)$, $D_{ij}(x)$ and $C_{ij}(x)$ are the full $u$, $d$ and $c$ quark propagators respectively ($S_{ij}(x)=U_{ij}(x),\,D_{ij}(x)$),
\begin{eqnarray}\label{L-quark-propagator}
S_{ij}(x)&=& \frac{i\delta_{ij}\!\not\!{x}}{ 2\pi^2x^4}-\frac{\delta_{ij}\langle
\bar{q}q\rangle}{12} -\frac{\delta_{ij}x^2\langle \bar{q}g_s\sigma Gq\rangle}{192} -\frac{ig_sG^{a}_{\alpha\beta}t^a_{ij}(\!\not\!{x}
\sigma^{\alpha\beta}+\sigma^{\alpha\beta} \!\not\!{x})}{32\pi^2x^2} \nonumber\\
&& -\frac{\delta_{ij}x^4\langle \bar{q}q \rangle\langle g_s^2 GG\rangle}{27648} -\frac{1}{8}\langle\bar{q}_j\sigma^{\mu\nu}q_i \rangle \sigma_{\mu\nu}+\cdots \, ,
\end{eqnarray}
\begin{eqnarray}\label{H-quark-propagator}
C_{ij}(x)&=&\frac{i}{(2\pi)^4}\int d^4k e^{-ik \cdot x} \left\{
\frac{\delta_{ij}}{\!\not\!{k}-m_c}
-\frac{g_sG^n_{\alpha\beta}t^n_{ij}}{4}\frac{\sigma^{\alpha\beta}(\!\not\!{k}+m_c)+(\!\not\!{k}+m_c)
\sigma^{\alpha\beta}}{(k^2-m_c^2)^2}\right.\nonumber\\
&&\left. -\frac{g_s^2 (t^at^b)_{ij} G^a_{\alpha\beta}G^b_{\mu\nu}(f^{\alpha\beta\mu\nu}+f^{\alpha\mu\beta\nu}+f^{\alpha\mu\nu\beta}) }{4(k^2-m_c^2)^5}+\cdots\right\} \, ,\nonumber\\
f^{\alpha\beta\mu\nu}&=&(\!\not\!{k}+m_c)\gamma^\alpha(\!\not\!{k}+m_c)\gamma^\beta(\!\not\!{k}+m_c)\gamma^\mu(\!\not\!{k}+m_c)\gamma^\nu(\!\not\!{k}+m_c)\, ,
\end{eqnarray}
and $t^n=\frac{\lambda^n}{2}$, where the $\lambda^n$ is the Gell-Mann matrix \cite{PRT85}. In Eq.\eqref{L-quark-propagator}, we retain the term $\langle\bar{q}_j\sigma_{\mu\nu}q_i \rangle$, which comes from the Fierz rearrangement of $\langle q_i \bar{q}_j\rangle$, to absorb the gluons emitted from other quark lines and form $\langle\bar{q}_j g_s G^a_{\alpha\beta} t^a_{mn}\sigma_{\mu\nu} q_i \rangle$, from which we extract the mixed condensate $\langle\bar{q}g_s\sigma G q\rangle$ \cite{WangHuangTao}. Then we compute the integrals both in the coordinate space and in the momentum space to obtain the correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$ at the quark level, and finally obtain the QCD spectral densities through the dispersion relation,
\begin{eqnarray}\label{QCD-rho1-2}
\rho^1_{j,QCD}(s) &=&\frac{{\rm Im}\Pi_{j}^1(s)}{\pi}\, , \nonumber\\
\rho^0_{j,QCD}(s) &=&\frac{{\rm Im}\Pi_{j}^0(s)}{\pi}\, ,
\end{eqnarray}
where $j=\frac{1}{2}$, $\frac{3}{2}$, $\frac{5}{2}$. For more technical details, one can consult Ref.\cite{WangHuangTao}.
In computing the integrals, we draw up all the Feynman diagrams from Eqs.\eqref{QCD-Pi}-\eqref{H-quark-propagator} and calculate them one by one.
In this article, we carry out the operator product expansion up to the vacuum condensates of dimension-$13$ and assume vacuum saturation for the
higher dimensional vacuum condensates. We take the truncations $n\leq 13$ and $k\leq 1$ in a consistent way;
the quark-gluon operators of order $\mathcal{O}( \alpha_s^{k})$ with $k> 1$ or dimension $n>13$ are discarded.
In previous works \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32,WangZhang-APPB}, we took the truncations $n\leq 10$ and $k\leq 1$
in the operator product expansion and discarded the quark-gluon operators of order $\mathcal{O}( \alpha_s^{k})$ with $k> 1$ or dimension $n>10$. Sometimes we also neglected the vacuum condensates $\langle \frac{\alpha_sGG}{\pi}\rangle$,
$\langle \bar{q}q\rangle\langle \frac{\alpha_sGG}{\pi}\rangle$, $\langle \bar{s}s\rangle\langle \frac{\alpha_sGG}{\pi}\rangle$, $\langle \bar{q}q\rangle^2\langle \frac{\alpha_sGG}{\pi}\rangle$, $\langle \bar{q}q\rangle \langle \bar{s}s\rangle\langle \frac{\alpha_sGG}{\pi}\rangle$ and
$ \langle \bar{s}s\rangle^2\langle \frac{\alpha_sGG}{\pi}\rangle$, which are not associated with $\frac{1}{T^2}$, $\frac{1}{T^4}$ or $\frac{1}{T^6}$ and therefore do not manifest themselves at small Borel parameter $T^2$; we neglected those terms due to the small value of the gluon condensate $\langle \frac{\alpha_sGG}{\pi}\rangle$. In this article, we take all those contributions into account, such as $\langle \frac{\alpha_sGG}{\pi}\rangle$,
$\langle \bar{q}q\rangle\langle \frac{\alpha_sGG}{\pi}\rangle$ and $\langle \bar{q}q\rangle^2\langle \frac{\alpha_sGG}{\pi}\rangle$.
In this article, we re-examine the QCD side of the correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$. From Eq.\eqref{QCD-Pi}, we can see that there are two $c$-quark propagators and three light-quark propagators; if each $c$-quark line emits a gluon and each light-quark line contributes a quark-antiquark pair, we obtain an operator $G_{\mu\nu}G_{\alpha\beta}\bar{u}u\bar{u}u\bar{d}d$, which is of dimension $13$, see Fig.\ref{Feynman}. We should therefore take into account the vacuum condensates at least up to dimension $13$ instead of dimension $10$. The vacuum condensates $\langle\bar{q} q\rangle^2\langle\bar{q}g_s\sigma Gq\rangle $, $\langle\bar{q} q\rangle \langle\bar{q}g_s\sigma Gq\rangle^2 $ and $\langle \bar{q}q\rangle^3\langle \frac{\alpha_s}{\pi}GG\rangle$ are of dimension $11$, $13$ and $13$, respectively, and come from the Feynman diagrams shown in Fig.\ref{Feynman}. Those vacuum condensates are associated with $\frac{1}{T^2}$, $\frac{1}{T^4}$ and $\frac{1}{T^6}$, which manifest themselves at small values of the $T^2$ and play an important role in determining the Borel windows, although within the Borel windows they play a minor role.
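The dimension counting behind this truncation can be made explicit: the gluon field strength carries mass dimension $2$, the quark condensate $\langle\bar{q}q\rangle$ dimension $3$, the mixed condensate $\langle\bar{q}g_s\sigma Gq\rangle$ dimension $5$, and $\langle\frac{\alpha_sGG}{\pi}\rangle$ dimension $4$. A quick bookkeeping sketch:

```python
# Mass dimensions of the relevant condensates (standard QCD counting)
dim = {"qq": 3, "qGq": 5, "GG": 4}  # <qq>, <q gs sigma G q>, <alpha_s GG/pi>

# Two gluon emissions (dimension 2 each) plus three light quark-antiquark
# pairs (dimension 3 each) give the dimension-13 operator G G uu uu dd:
assert 2 * 2 + 3 * dim["qq"] == 13

# The condensates from the diagrams of Fig. \ref{Feynman}:
assert 2 * dim["qq"] + dim["qGq"] == 11   # <qq>^2 <q gs sigma G q>
assert dim["qq"] + 2 * dim["qGq"] == 13   # <qq> <q gs sigma G q>^2
assert 3 * dim["qq"] + dim["GG"] == 13    # <qq>^3 <alpha_s GG/pi>
print("dimension counting consistent")
```

This is why a five-quark correlation function requires the expansion to reach dimension $13$, whereas the four-quark (tetraquark) case closes at dimension $10$.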
As we have obtained the QCD spectral densities, see Eq.\eqref{QCD-rho1-2}, now let us match the hadron side with the QCD side of the correlation functions $\Pi(p)$, $\Pi_{\mu\nu}(p)$ and $\Pi_{\mu\nu\alpha\beta}(p)$, take the quark-hadron duality below the continuum thresholds $s_0$, and obtain the QCD sum rules:
\begin{eqnarray}\label{QCDSR}
2M_{-}{\lambda^{-}_{j}}^2\exp\left( -\frac{M_{-}^2}{T^2}\right)&=& \int_{4m_c^2}^{s_0}ds \,\rho_{QCD,j}(s)\,\exp\left( -\frac{s}{T^2}\right)\, ,
\end{eqnarray}
where $\rho_{QCD,j}(s)=\sqrt{s}\rho_{QCD,j}^1(s)+\rho_{QCD,j}^{0}(s)$,
\begin{eqnarray}
\rho_{QCD,j}(s)&=&\rho^j_{0}(s)+\rho^j_{3}(s)+\rho^j_{4}(s)+\rho^j_{5}(s)+\rho^j_{6}(s)+\rho^j_{7}(s)+\rho^j_{8}(s)+\rho^j_{9}(s)+\rho^j_{10}(s)+\rho^j_{11}(s)\nonumber\\
&&+\rho^j_{13}(s)\, ,
\end{eqnarray}
\begin{eqnarray}
\rho^j_{0}(s)&\propto& {\rm perturbative \,\,\,\, terms}\, , \nonumber\\
\rho^j_{3}(s)&\propto& \langle \bar{q}q\rangle\, , \nonumber\\
\rho^j_{4}(s)&\propto& \langle \frac{\alpha_sGG}{\pi}\rangle\, , \nonumber\\
\rho^j_{5}(s)&\propto& \langle \bar{q}g_s\sigma Gq\rangle \, , \nonumber\\
\rho^j_{6}(s)&\propto& \langle \bar{q}q\rangle^2 \, , \nonumber\\
\rho^j_{7}(s)&\propto& \langle \bar{q}q\rangle\langle \frac{\alpha_sGG}{\pi}\rangle\, , \nonumber\\
\rho^j_{8}(s)&\propto& \langle\bar{q}q\rangle\langle \bar{q}g_s\sigma Gq\rangle\, , \nonumber\\
\rho^j_{9}(s)&\propto& \langle \bar{q}q\rangle^3\, , \nonumber\\
\rho^j_{10}(s)&\propto& \langle \bar{q}g_s\sigma Gq\rangle^2\, , \, \langle \bar{q}q\rangle^2\langle \frac{\alpha_sGG}{\pi}\rangle \, , \nonumber\\
\rho^j_{11}(s)&\propto& \langle\bar{q}q\rangle^2\langle \bar{q}g_s\sigma Gq\rangle\, , \nonumber\\
\rho^j_{13}(s)&\propto& \langle\bar{q}q\rangle\langle \bar{q}g_s\sigma Gq\rangle^2\, , \, \langle \bar{q}q\rangle^3\langle \frac{\alpha_sGG}{\pi}\rangle \, .
\end{eqnarray}
The explicit expressions of the QCD spectral densities are too lengthy to be presented here; the interested reader can obtain them by contacting me via e-mail.
We derive Eq.\eqref{QCDSR} with respect to $\frac{1}{T^2}$, then eliminate the
pole residues $\lambda^{-}_{j}$ and obtain the QCD sum rules for
the masses of the hidden-charm pentaquark states,
\begin{eqnarray}
M^2_{-} &=& \frac{-\int_{4m_c^2}^{s_0}ds \frac{d}{d(1/T^2)}\, \rho_{QCD,j}(s)\,\exp\left( -\frac{s}{T^2}\right)}{\int_{4m_c^2}^{s_0}ds \, \rho_{QCD,j}(s)\,\exp\left( -\frac{s}{T^2}\right)}\, .
\end{eqnarray}
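The derivative trick above can be illustrated with a toy model: replacing $\rho_{QCD,j}(s)$ by a simple polynomial density above the threshold $4m_c^2$ (this form is an assumption purely for illustration, not the real spectral density), the ratio of the two Borel-weighted integrals returns the effective squared mass, since the derivative with respect to $\frac{1}{T^2}$ brings down a factor of $s$:

```python
import math

# Toy QCD-side spectral density above the charm threshold (illustrative
# assumption only; the real densities are far more involved)
m_c, s0, T2 = 1.3, 25.0, 3.2    # GeV, GeV^2, GeV^2
s_min = 4.0 * m_c**2

def rho_toy(s):
    return (s - s_min) ** 2

def integrate(f, a, b, n=20000):
    """Simple trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# d/d(1/T^2) of exp(-s/T^2) brings down -s, so the squared-mass estimate
# is the Borel-weighted mean of s over [4 m_c^2, s0]:
num = integrate(lambda s: s * rho_toy(s) * math.exp(-s / T2), s_min, s0)
den = integrate(lambda s: rho_toy(s) * math.exp(-s / T2), s_min, s0)
M = math.sqrt(num / den)
print(round(M, 2))  # toy mass estimate in GeV
```

For these toy inputs the estimate lands in the $4\,\rm{GeV}$ region, showing how the Borel exponential and the continuum threshold $s_0$ jointly localize the extracted mass.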
\section{Numerical results and discussions}
We take the vacuum condensates to be the standard values
$\langle\bar{q}q \rangle=-(0.24\pm 0.01\, \rm{GeV})^3$, $\langle\bar{q}g_s\sigma G q \rangle=m_0^2\langle \bar{q}q \rangle$,
$m_0^2=(0.8 \pm 0.1)\,\rm{GeV}^2$, $\langle \frac{\alpha_s
GG}{\pi}\rangle=(0.33\,\rm{GeV})^4 $ at the energy scale $\mu=1\, \rm{GeV}$
\cite{SVZ79,PRT85,ColangeloReview}, and take the $\overline{MS}$ mass $m_{c}(m_c)=(1.275\pm0.025)\,\rm{GeV}$
from the Particle Data Group \cite{PDG}.
Moreover, we take into account
the energy-scale dependence of the quark condensate, mixed quark condensate and $\overline{MS}$ mass,
\begin{eqnarray}
\langle\bar{q}q \rangle(\mu)&=&\langle\bar{q}q \rangle({\rm 1GeV})\left[\frac{\alpha_{s}({\rm 1GeV})}{\alpha_{s}(\mu)}\right]^{\frac{12}{33-2n_f}}\, , \nonumber\\
\langle\bar{q}g_s \sigma G q \rangle(\mu)&=&\langle\bar{q}g_s \sigma G q \rangle({\rm 1GeV})\left[\frac{\alpha_{s}({\rm 1GeV})}{\alpha_{s}(\mu)}\right]^{\frac{2}{33-2n_f}}\, ,\nonumber\\
m_c(\mu)&=&m_c(m_c)\left[\frac{\alpha_{s}(\mu)}{\alpha_{s}(m_c)}\right]^{\frac{12}{33-2n_f}} \, ,\nonumber\\
\alpha_s(\mu)&=&\frac{1}{b_0t}\left[1-\frac{b_1}{b_0^2}\frac{\log t}{t} +\frac{b_1^2(\log^2{t}-\log{t}-1)+b_0b_2}{b_0^4t^2}\right]\, ,
\end{eqnarray}
where $t=\log \frac{\mu^2}{\Lambda^2}$, $b_0=\frac{33-2n_f}{12\pi}$, $b_1=\frac{153-19n_f}{24\pi^2}$, $b_2=\frac{2857-\frac{5033}{9}n_f+\frac{325}{27}n_f^2}{128\pi^3}$, $\Lambda=210\,\rm{MeV}$, $292\,\rm{MeV}$ and $332\,\rm{MeV}$ for the flavors $n_f=5$, $4$ and $3$, respectively \cite{PDG,Narison-mix}, and evolve all the input parameters at the QCD side to the optimal energy scales $\mu$ with $n_f=4$ to extract the pentaquark masses.
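As a cross-check, the three-loop expression for $\alpha_s(\mu)$ and the leading-order evolution of the quark condensate can be evaluated directly; with $\Lambda=292\,\rm{MeV}$ and $n_f=4$ the coupling comes out of order $0.3$ at $\mu=2\,\rm{GeV}$ and decreases monotonically with $\mu$. A sketch:

```python
import math

def alpha_s(mu, Lam=0.292, nf=4):
    """Three-loop running coupling as given in the text (GeV units)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    b1 = (153 - 19 * nf) / (24 * math.pi ** 2)
    b2 = (2857 - 5033 * nf / 9 + 325 * nf ** 2 / 27) / (128 * math.pi ** 3)
    t = math.log(mu ** 2 / Lam ** 2)
    L = math.log(t)
    return (1.0 / (b0 * t)) * (1 - b1 * L / (b0 ** 2 * t)
            + (b1 ** 2 * (L ** 2 - L - 1) + b0 * b2) / (b0 ** 4 * t ** 2))

def qq_condensate(mu, qq_1gev=-0.24 ** 3):
    """Leading-order evolution of <qbar q> from 1 GeV (nf = 4)."""
    return qq_1gev * (alpha_s(1.0) / alpha_s(mu)) ** (12 / (33 - 2 * 4))

for mu in (2.3, 2.4, 2.6, 2.8):   # typical scales of Table \ref{Borel}
    print(mu, round(alpha_s(mu), 3), round(qq_condensate(mu), 5))
```

The anomalous-dimension exponent $\frac{12}{33-2n_f}$ is positive, so the magnitude of $\langle\bar{q}q\rangle(\mu)$ grows as the coupling decreases with increasing $\mu$.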
In Refs.\cite{WangHbaryon-1,WangHbaryon-2,WangHbaryon-3,WangHbaryon-4,WangHbaryon-5,Wang-cc-baryon-penta}, we studied the heavy, doubly-heavy and triply-heavy baryon states with the QCD sum rules in a systematic way. In those calculations, we observed that the continuum threshold parameters $\sqrt{s_0}=M_{gr}+ (0.5-0.8)\,\rm{GeV}$ work well, where the subscript $gr$ denotes the ground state baryon states.
The pentaquark states are another type of baryon states, as they have half-integer spins $\frac{1}{2}$, $\frac{3}{2}$, $\frac{5}{2}$. In the present work, we take the continuum threshold parameters as $\sqrt{s_0}= M_{P}+(0.55-0.75)\,\rm{GeV}$.
In this article, we choose the Borel parameters $T^2$ and continuum threshold parameters $s_0$ via trial and error to satisfy the following four criteria:
$\bf 1.$ Pole dominance at the phenomenological side;
$\bf 2.$ Convergence of the operator product expansion;
$\bf 3.$ Appearance of the Borel platforms;
$\bf 4.$ Satisfying the energy scale formula.\\
Now we take a short digression to discuss the energy scale formula. The hidden-charm or hidden-bottom four-quark and five-quark systems can be described
by a double-well potential in the heavy quark limit \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32,WangHuangTao,
Wang-tetra-formula-1,Wang-tetra-formula-2,Wang-tetra-IJMPA-1,Wang-tetra-IJMPA-2,Wang-tetra-IJMPA-3,WangHuang-molecule-1,WangHuang-molecule-2}. The heavy quark $Q$ serves as a static well potential and attracts a light quark to form a heavy diquark in color antitriplet $\bar{3}_c$. The heavy antiquark $\overline{Q}$ serves as another static well potential and attracts a light antiquark or a light diquark to form a heavy antidiquark or triquark in color triplet $3_c$. Then the diquark and antidiquark (or triquark) attract each other to form a compact tetraquark state (or pentaquark state).
The hidden-charm or hidden-bottom tetraquark states and pentaquark states are characterized by the effective $Q$-quark mass ${\mathbb{M}}_Q$ and the virtuality $V=\sqrt{M^2_{X/Y/Z/P}-(2{\mathbb{M}}_Q)^2}$, which serves as the energy scale $\mu=\sqrt{M^2_{X/Y/Z/P}-(2{\mathbb{M}}_Q)^2}$ of the QCD spectral densities \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32,WangHuangTao,Wang-tetra-formula-1,Wang-tetra-formula-2,Wang-tetra-IJMPA-1,Wang-tetra-IJMPA-2,Wang-tetra-IJMPA-3}. The energy scale formula $\mu=\sqrt{M^2_{X/Y/Z/P}-(2{\mathbb{M}}_Q)^2}$ can enhance the pole contributions remarkably and improve the convergence of the operator product expansion considerably, and works well in the QCD sum rules for the hidden-charm and hidden-bottom tetraquark states (hidden-charm pentaquark states).
In this article, we carry out the operator product expansion up to the vacuum condensates of dimension $13$, which is consistent with the dimension $10$ in the tetraquark case \cite{WangHuangTao,Wang-tetra-formula-1,Wang-tetra-formula-2,Wang-tetra-IJMPA-1,Wang-tetra-IJMPA-2,Wang-tetra-IJMPA-3}, and choose the updated value of the effective $c$-quark mass ${\mathbb{M}}_c=1.82\,\rm{GeV}$ determined in the QCD sum rules for the hidden-charm tetraquark states \cite{WangEPJC-1601}, while in Refs.\cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32} we chose the old value ${\mathbb{M}}_c=1.80\,\rm{GeV}$.
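The energy scale formula can be checked numerically: with ${\mathbb{M}}_c=1.82\,\rm{GeV}$, $\mu=\sqrt{M_{P}^2-(2{\mathbb{M}}_c)^2}$ reproduces the scales listed in Table \ref{Borel} from the extracted masses in Table \ref{mass} to within about $0.1\,\rm{GeV}$. A sketch:

```python
import math

M_c_eff = 1.82  # effective c-quark mass (GeV), from the text

def optimal_scale(M_P):
    """Energy scale formula mu = sqrt(M_P^2 - (2 M_c_eff)^2), in GeV."""
    return math.sqrt(M_P ** 2 - (2.0 * M_c_eff) ** 2)

# Extracted masses (Table "mass") versus the scales used (Table "Borel")
for M_P, mu_used in [(4.31, 2.3), (4.45, 2.6), (4.39, 2.4), (4.61, 2.8)]:
    print(M_P, round(optimal_scale(M_P), 2), mu_used)
```

The consistency between the input scales and the output masses is what criterion $\bf 4$ demands.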
Let us now return to the Borel parameters and continuum threshold parameters.
After trial and error, we obtain the Borel parameters or Borel windows $T^2$, continuum threshold parameters $s_0$, ideal energy scales of the QCD spectral densities, pole contributions of the ground state pentaquark states, and contributions of the vacuum condensates of dimension $13$, which are shown explicitly in Table \ref{Borel}.
In Fig.\ref{fr-D13-fig}, we plot the contributions of the vacuum condensates of dimension $11$ and $13$ (which were neglected in our previous works \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}) with variation of the Borel parameter $T^2$ for the hidden-charm pentaquark state $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$), using the central values of the parameters shown in Table \ref{Borel}, as an example. From the figure, we can see that the vacuum condensates of dimension $13$ manifest themselves in the region $T^2< 2\,\rm{GeV}^2$, so we should choose $T^2> 2\,\rm{GeV}^2$. On the other hand, the vacuum condensates of dimension $11$ manifest themselves in the region $T^2< 2.6\,\rm{GeV}^2$, which requires an even larger Borel parameter $T^2> 2.6\,\rm{GeV}^2$ to warrant the convergence of the operator product expansion.
The higher dimensional vacuum condensates play an important role in determining the Borel windows, so we should take them into account in a consistent way; within the Borel windows, however, they play a minor role, as the operator product expansion must be convergent there. For example, in the present case, the contribution of the vacuum condensates of dimension $13$ is less than $1\%$, which is consistent with the analysis in Sect.2.
In Fig.\ref{mass-D10-fig}, we plot the mass of the hidden-charm pentaquark state $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$) with variation of the Borel parameter $T^2$ for truncations of the operator product expansion up to the vacuum condensates of dimension $10$ and $13$, respectively. From the figure, we can see that
the vacuum condensates of dimension $11$ and $13$ play an important role in obtaining stable QCD sum rules, so we should take them into account.
In our previous works \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}, we took into account the
vacuum condensates up to dimension $10$ in carrying out the operator product expansion, and sometimes neglected the vacuum condensates $\langle\frac{\alpha_sGG}{\pi}\rangle$,
$\langle \bar{q}q\rangle\langle\frac{\alpha_sGG}{\pi}\rangle$ and $\langle \bar{q}q\rangle^2\langle\frac{\alpha_sGG}{\pi}\rangle$ due to their small contributions;
as a result, the Borel platforms were not flat enough. In the present work, we take into account the vacuum condensates up to dimension $13$ in a consistent way and obtain very flat Borel platforms; the uncertainties originating from the Borel parameters are tiny.
\begin{figure}
\centering
\includegraphics[totalheight=7cm,width=10cm]{fr-12-SSc-D13.EPS}
\caption{ The contributions of the vacuum condensates of dimension $11$ and $13$ with variation of the Borel parameter $T^2$ for the hidden-charm pentaquark state $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$). }\label{fr-D13-fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[totalheight=7cm,width=10cm]{mass-12-SSc-D10.EPS}
\caption{ The mass with variation of the Borel parameter $T^2$ for the hidden-charm pentaquark state $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$), the $D=10$, $13$ denote truncations of the operator product expansion. }\label{mass-D10-fig}
\end{figure}
From Table \ref{Borel}, we can see that the pole contributions are about $(40-60)\%$ and the contributions of the vacuum condensates of dimension $13$ are $\leq 1\%$ or $\ll 1\%$; thus the pole dominance at the hadron side is satisfied and the operator product expansion is well convergent. The first two criteria, i.e. the two basic criteria of the QCD sum rules, are satisfied, so we expect to make reasonable predictions.
We take into account all uncertainties of the input parameters,
and obtain the masses and pole residues of
the negative parity hidden-charm pentaquark states, which are shown explicitly in Table \ref{mass}. From Table \ref{Borel} and Table \ref{mass}, we can see that the energy scale formula $\mu=\sqrt{M^2_{P}-(2{\mathbb{M}}_c)^2}$ is satisfied, i.e. the criterion $\bf 4$ is satisfied.
In Figs.\ref{mass-1-fig}-\ref{mass-2-fig}, we plot the masses of the hidden-charm pentaquark states with variations of the Borel parameters $T^2$ in the Borel windows. From the figures, we can see that very flat platforms appear, so the criterion $\bf 3$ is satisfied. Now all four criteria of the QCD sum rules are satisfied, and we expect to make robust predictions.
From Table \ref{mass}, we can see that the mass-splittings among those $J^{P}={\frac{1}{2}}^-$, ${\frac{3}{2}}^-$ and ${\frac{5}{2}}^-$ pentaquark states are rather small, about or less than $0.3\,\rm{GeV}$. In this article, we take the scalar and axialvector diquark states as the basic constituents to study the pentaquark states. The calculations based on the QCD sum rules indicate that the light axialvector diquark states $\varepsilon^{ijk}u^{Ti}C\gamma_{\mu} u^j$ and $\varepsilon^{ijk}u^{Ti}C\gamma_{\mu} d^j$ have a larger mass than the corresponding scalar diquark state $\varepsilon^{ijk}u^{Ti}C\gamma_{5} d^j$, by about $0.15-0.20\,\rm{GeV}$ \cite{WangLightDiquark}, while the heavy axialvector and scalar diquark states $\varepsilon^{ijk}q^{Ti}C\gamma_{\mu} c^j$ and $\varepsilon^{ijk}q^{Ti}C\gamma_{5} c^j$ have almost degenerate masses \cite{WangDiquark-1,WangDiquark-2}. In this way, we can account for the small pentaquark mass splittings reasonably. In fact, the QCD calculations differ significantly from quark model calculations: the pentaquark masses shown in Table \ref{mass} are not directly related to the diquark masses, as we obtain them with the full QCD sum rules by imposing the same criteria.
The predicted masses $M_{P}=4.31\pm0.11\,\rm{GeV}$ for the ground state $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$) pentaquark state and
$M_{P}=4.34\pm0.14\,\rm{GeV}$ for the ground state $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $0$, $\frac{1}{2}$) pentaquark state
are both in excellent agreement with the experimental data $M_{P(4312)}=4311.9\pm0.7^{+6.8}_{-0.6} \,\rm{MeV}$ from the LHCb collaboration \cite{LHCb-Pc4312}, and support assigning the $P_c(4312)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$.
The predicted masses
$M_{P}=4.45\pm0.11\,\rm{GeV}$ for the ground state $[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$) pentaquark state,
$M_{P}=4.46\pm0.11\,\rm{GeV}$ for the ground state $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$) pentaquark state and
$M_{P}=4.39\pm0.11\,\rm{GeV}$ for the ground state $[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{3}{2}$), $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{5}{2}$) and $[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{5}{2}$) pentaquark states
are in excellent agreement with (or compatible with) the experimental data $M_{P(4440)}=4440.3\pm1.3^{+4.1}_{-4.7} \,\rm{MeV}$ from the LHCb collaboration \cite{LHCb-Pc4312}, and support assigning the $P_c(4440)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$, ${\frac{3}{2}}^-$ or ${\frac{5}{2}}^-$.
The predicted masses
$M_{P}=4.45\pm0.11\,\rm{GeV}$ for the ground state $[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$) pentaquark state,
$M_{P}=4.46\pm0.11\,\rm{GeV}$ for the ground state $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$) pentaquark state and $M_{P}=4.47\pm0.11\,\rm{GeV}$ for the ground state
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{3}{2}$) pentaquark states
are in excellent agreement with the experimental data $M_{P(4457)}=4457.3\pm0.6^{+4.1}_{-1.7} \,\rm{MeV}$ from the LHCb collaboration \cite{LHCb-Pc4312}, and support assigning the $P_c(4457)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$ or ${\frac{3}{2}}^-$.
In Table \ref{mass}, we present the possible assignments of the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ explicitly as a summary. In Table \ref{mass-1508-et al}, we compare the present predictions with our previous calculations \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}, where the vacuum condensates of dimension $11$ and $13$ were neglected, and sometimes the vacuum condensates $\langle\frac{\alpha_sGG}{\pi}\rangle$,
$\langle \bar{q}q\rangle\langle\frac{\alpha_sGG}{\pi}\rangle$ and $\langle \bar{q}q\rangle^2\langle\frac{\alpha_sGG}{\pi}\rangle$ were also neglected. From Table \ref{mass-1508-et al}, we can see that in some cases the predicted masses change remarkably, while in other cases they change only slightly. All in all, the uncertainties of the present pentaquark masses are smaller than the corresponding old ones, as we obtain flatter Borel platforms in the present work.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline\hline
&$T^2(\rm{GeV}^2)$ &$\sqrt{s_0}(\rm{GeV})$ &$\mu(\rm{GeV})$ &pole &$D_{13}$ \\ \hline
$J^1(x)$ &$3.1-3.5$ &$4.96\pm0.10$ &$2.3$ &$(41-62)\%$ &$<1\%$ \\ \hline
$J^2(x)$ &$3.2-3.6$ &$5.10\pm0.10$ &$2.6$ &$(42-63)\%$ &$<1\%$ \\ \hline
$J^3(x)$ &$3.2-3.6$ &$5.11\pm0.10$ &$2.6$ &$(42-63)\%$ &$\ll1\%$ \\ \hline
$J^4(x)$ &$2.9-3.3$ &$5.00\pm0.10$ &$2.4$ &$(40-64)\%$ &$\leq1\%$ \\ \hline
$J^1_\mu(x)$ &$3.1-3.5$ &$5.03\pm0.10$ &$2.4$ &$(42-63)\%$ &$\leq1\%$ \\ \hline
$J^2_\mu(x)$ &$3.3-3.7$ &$5.11\pm0.10$ &$2.6$ &$(40-61)\%$ &$\ll1\%$ \\ \hline
$J^3_\mu(x)$ &$3.4-3.8$ &$5.26\pm0.10$ &$2.8$ &$(42-62)\%$ &$\ll1\%$ \\ \hline
$J^4_\mu(x)$ &$3.3-3.7$ &$5.17\pm0.10$ &$2.7$ &$(41-61)\%$ &$<1\%$ \\ \hline
$J^1_{\mu\nu}(x)$ &$3.2-3.6$ &$5.03\pm0.10$ &$2.4$ &$(40-61)\%$ &$\leq1\%$ \\ \hline
$J^2_{\mu\nu}(x)$ &$3.1-3.5$ &$5.03\pm0.10$ &$2.4$ &$(42-63)\%$ &$\leq1\%$ \\ \hline\hline
\end{tabular}
\end{center}
\caption{ The Borel windows, continuum threshold parameters, ideal energy scales, pole contributions, contributions of the vacuum condensates of dimension 13 for the hidden-charm pentaquark states. }\label{Borel}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline\hline
$[qq^\prime][q^{\prime\prime}c]\bar{c}$ ($S_L$, $S_H$, $J_{LH}$, $J$) &$M(\rm{GeV})$ &$\lambda(10^{-3}\rm{GeV}^6)$ &Assignments &Currents \\ \hline
$[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$) &$4.31\pm0.11$ &$1.40\pm0.23$ &$?\,P_c(4312)$ &$J^1(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$) &$4.45\pm0.11$ &$3.02\pm0.48$ &$?\,P_c(4440/4457)$ &$J^2(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$) &$4.46\pm0.11$ &$4.32\pm0.71$ &$?\,P_c(4440/4457)$ &$J^3(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $0$, $\frac{1}{2}$) &$4.34\pm0.14$ &$3.23\pm0.61$ &$?\,P_c(4312)$ &$J^4(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{3}{2}$) &$4.39\pm0.11$ &$1.44\pm0.23$ &$?\,P_c(4440)$ &$J^1_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{3}{2}$) &$4.47\pm0.11$ &$2.41\pm0.38$ &$?\,P_c(4440/4457)$ &$J^2_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) &$4.61\pm0.11$ &$5.13\pm0.79$ & &$J^3_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) &$4.52\pm0.11$ &$4.49\pm0.72$ & &$J^4_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{5}{2}$) &$4.39\pm0.11$ &$1.94\pm0.31$ &$?\,P_c(4440)$ &$J^1_{\mu\nu}(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{5}{2}$) &$4.39\pm0.11$ &$1.44\pm0.23$ &$?\,P_c(4440)$ &$J^2_{\mu\nu}(x)$ \\ \hline\hline
\end{tabular}
\end{center}
\caption{ The masses and pole residues of the hidden-charm pentaquark states. }\label{mass}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline\hline
$[qq^\prime][q^{\prime\prime}c]\bar{c}$ ($S_L$, $S_H$, $J_{LH}$, $J$) &This work &Previous work &Currents \\ \hline
$[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$) &$4.31\pm0.11$ &$4.29\pm 0.13$ &$J^1(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$) &$4.45\pm0.11$ &$4.30 \pm 0.13$ &$J^2(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$) &$4.46\pm0.11$ &$4.42 \pm 0.12$ &$J^3(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $0$, $\frac{1}{2}$) &$4.34\pm0.14$ &$4.35\pm 0.15$ &$J^4(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{3}{2}$) &$4.39\pm0.11$ &$4.38 \pm 0.13$ &$J^1_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{3}{2}$) &$4.47\pm0.11$ &$4.39\pm 0.13$ &$J^2_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) &$4.61\pm0.11$ &$4.39 \pm 0.14$ &$J^3_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$) &$4.52\pm0.11$ &$4.39 \pm 0.14$ &$J^4_\mu(x)$ \\
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{5}{2}$) &$4.39\pm0.11$ & &$J^1_{\mu\nu}(x)$ \\
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{5}{2}$) &$4.39\pm0.11$ & &$J^2_{\mu\nu}(x)$ \\ \hline\hline
\end{tabular}
\end{center}
\caption{ The masses (in unit of GeV) are compared with the old calculations in our previous works \cite{Wang1508-EPJC,WangHuang-EPJC-1508-12,WangZG-EPJC-1509-12,WangZG-NPB-1512-32}. }\label{mass-1508-et al}
\end{table}
\begin{figure}
\centering
\includegraphics[totalheight=6cm,width=7cm]{mass-12-SSc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-12-SAc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-12-ASc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-12-AAc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-32-SAc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-32-100.EPS}
\caption{ The masses with variations of the Borel parameters $T^2$ for the hidden-charm pentaquark states, the $A$, $B$, $C$, $D$, $E$ and $F$ denote the
pentaquark states $[ud][uc]\bar{c}$ ($0$, $0$, $0$, $\frac{1}{2}$),
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{1}{2}$),
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{1}{2}$),
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $0$, $\frac{1}{2}$),
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{3}{2}$) and $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $0$, $1$, $\frac{3}{2}$), respectively. }\label{mass-1-fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[totalheight=6cm,width=7cm]{mass-32-111.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-32-111A.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-52-AAc.EPS}
\includegraphics[totalheight=6cm,width=7cm]{mass-52-SAc.EPS}
\caption{ The masses with variations of the Borel parameters $T^2$ for the hidden-charm pentaquark states, where $G$, $H$, $I$ and $J$ denote the
pentaquark states $[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$),
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{3}{2}$),
$[uu][dc]\bar{c}+2[ud][uc]\bar{c}$ ($1$, $1$, $2$, $\frac{5}{2}$) and
$[ud][uc]\bar{c}$ ($0$, $1$, $1$, $\frac{5}{2}$), respectively. }\label{mass-2-fig}
\end{figure}
The diquark-diquark-antiquark type pentaquark state can be taken as a special superposition of a series of baryon-meson pairs (or pentaquark molecular states) and embodies their net effects; the decays to its components (baryon-meson pairs) are Okubo-Zweig-Iizuka super-allowed. We can study the two-body strong decays of the pentaquark states exclusively with the three-point QCD sum rules, guided by the Fierz rearrangements in Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2},
\begin{eqnarray}
P_{c} &\to& p J/\psi \, , \,p\eta_c \, , \,p\chi_{c0} \, , \,p\chi_{c1}\, , \,\Delta J/\psi \, , \,\Delta\eta_c \, , \, N(1440)J/\psi \, , \,N(1440)\eta_c \, , \, \Lambda_c \bar{D}^*\, , \, \Lambda_c \bar{D}\, , \, \nonumber\\
&& \Lambda_c(2595) \bar{D}^{*}\, , \, \Lambda_c(2595) \bar{D}\, , \, \Sigma_c \bar{D}\, , \, \Sigma_c \bar{D}^*\, , \, \Sigma_c^* \bar{D}\, , \,\Sigma_c^* \bar{D}^*\, .
\end{eqnarray}
It is better to use the partial decay widths and the total width, in addition to the mass, to assign or distinguish a pentaquark candidate. As a first step, we can search for those hidden-charm pentaquark states in the $J/\psi p$, $p\eta_c$, $\cdots$,
$\Sigma_c^* \bar{D}^{*}$ invariant mass distributions and confront the present predictions with the experimental data in the future.
In Ref.\cite{WangPenta-IJMPA}, we choose the component $\mathcal{S}^\alpha_{ud}\gamma_{\alpha}\gamma_{5}c\,\bar{c}i\gamma_{5}u$ in the currents $J^3(x)$ and $J^4(x)$ to interpolate the $\Sigma_c\bar{D}$ pentaquark molecular state;
choose the component $\mathcal{S}_\mu^{ud}c\,\bar{c}i\gamma_{5}u$ in the currents $J^2_\mu(x)$ and $J^3_\mu(x)$ to interpolate the $\Sigma_c^*\bar{D}$ pentaquark molecular state; choose the component $\mathcal{S}^\alpha_{ud}\gamma_{\alpha}\gamma_{5}c\,\bar{c}\gamma_{\mu}u$ in the current $J^4_\mu(x)$ to interpolate the $\Sigma_c\bar{D}^*$ pentaquark molecular state; and choose the component $\mathcal{S}^{ud}_{\mu}c\,\bar{c}\gamma_{\nu}u$ in the current $J^1_{\mu\nu}(x)$ to interpolate the $\Sigma_c^*\bar{D}^*$ pentaquark molecular state, see Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2}.
The experimental values of the masses of the LHCb pentaquark candidates $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ can be
reproduced both in the diquark-diquark-antiquark type pentaquark scenario and in the baryon-meson molecule scenario. A possible interpretation is that the main Fock components of the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ are the diquark-diquark-antiquark type pentaquark states, which couple strongly to the baryon-meson pairs $\bar{D}\Sigma_c$, $\bar{D}\Sigma_c^*$, $\bar{D}^{*}\Sigma_c$ and $\bar{D}^{*}\Sigma_c^*$, respectively; indeed, the meson-baryon type currents chosen in Ref.\cite{WangPenta-IJMPA} already appear in the Fierz rearrangements of the pentaquark currents in Eqs.\eqref{Fierz-J1}-\eqref{Fierz-Jmunu2}. The strong couplings induce some pentaquark molecule components, just as in the mechanism of the $Y(4660)$.
In Ref.\cite{WangZG-Y4600-decay}, we choose the diquark-antidiquark type tetraquark current interpolating the $Y(4660)$ to study the strong decays $Y(4660)\to J/\psi f_0(980)$, $ \eta_c \phi(1020)$, $ \chi_{c0}\phi(1020)$, $ D_s \bar{D}_s$, $ D_s^* \bar{D}^*_s$, $ D_s \bar{D}^*_s$, $ D_s^* \bar{D}_s$, $ \psi^\prime \pi^+\pi^-$, $J/\psi\phi(1020)$ with the QCD sum rules based on solid quark-hadron duality.
In the calculations, we observe that the hadronic coupling constants satisfy $ |G_{Y\psi^\prime f_0}|\gg |G_{Y J/\psi f_0}|$, which is consistent with the observation of the $Y(4660)$ in the $\psi^\prime\pi^+\pi^-$ mass spectrum and favors the $\psi^{\prime}f_0(980)$ molecule assignment.
\section{Conclusion}
In this article, we restudy the ground state mass spectrum of the diquark-diquark-antiquark type $uudc\bar{c}$ pentaquark states with the QCD sum rules by taking into account all the
vacuum condensates up to dimension $13$ in a consistent way in the operator product expansion, and use the energy scale formula $\mu=\sqrt{M_{P}^2-(2{\mathbb{M}}_c)^2}$ with the updated effective $c$-quark mass ${\mathbb{M}}_c=1.82\,\rm{GeV}$ to determine the ideal energy scales of the QCD spectral densities, and explore the possible assignments of the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ in the scenario of the pentaquark states. The predicted masses support assigning the $P_c(4312)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$, assigning the $P_c(4440)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$, ${\frac{3}{2}}^-$ or ${\frac{5}{2}}^-$, and assigning the $P_c(4457)$ to be the hidden-charm pentaquark state with $J^{P}={\frac{1}{2}}^-$ or ${\frac{3}{2}}^-$.
More experimental data and theoretical works are still needed to identify the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ unambiguously.
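For concreteness, the energy scale formula can be evaluated directly (an illustration of ours, not part of the original analysis; we assume the dimensionally consistent squared form $\mu=\sqrt{M_{P}^2-(2{\mathbb{M}}_c)^2}$):

```python
import math

def ideal_energy_scale(m_pentaquark, m_c_eff=1.82):
    """Energy scale mu = sqrt(M_P^2 - (2*M_c)^2), all quantities in GeV.

    m_c_eff = 1.82 GeV is the updated effective c-quark mass quoted in the
    text; the squared form of M_P is assumed here.
    """
    return math.sqrt(m_pentaquark**2 - (2.0 * m_c_eff)**2)

# e.g. for the P_c(4312) candidate mass of 4.31 GeV:
mu = ideal_energy_scale(4.31)  # about 2.31 GeV
```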
\section*{Acknowledgements}
This work is supported by the National Natural Science Foundation of China, Grant Number 11775079.
\section{Introduction}
Gravitational microlensing has opened a unique window for probing extrasolar planets beyond the snowline (the minimum radius from a star at which water ice could have condensed) \citep{Shude1991,Andy1992, Gaudi2012, 2012RAA....12..947M, 2016ASSL..428..135G}. Among the 132 confirmed microlens planets, 12 were detected in a triple lens event\footnote{Information on the confirmed extrasolar planets is from \url{http://exoplanet.eu} as of Nov 20, 2020.}.
Triple lens systems are usually categorised into two groups: a host star plus two planets, or a binary star system plus a single planet. Three two-planet systems have been firmly established (OGLE-2006-BLG-109, \citealt{OB06109,OB06109_Dave}; OGLE-2012-BLG-0026, \citealt{OB120026,OB120026_AO,OB120026_Zhu}; OGLE-2018-BLG-1011, \citealt{OB181011}), and three more are likely candidates (OGLE-2014-BLG-1722, \citealt{OB141722}; OGLE-2018-BLG-0532, \citealt{OB180532}; KMT-2019-BLG-1953, \citealt{KB191953}). A planet in a binary star system (i.e., a circumbinary planet) is another triple lens case. So far, six cases have been reported in the literature (OGLE-2006-BLG-284, \citealt{OB06284}; OGLE-2007-BLG-349, \citealt{OB07349}; OGLE-2008-BLG-092, \citealt{OB08092}; OGLE-2013-BLG-0341, \citealt{OB130341}; OGLE-2016-BLG-0613, \citealt{OB160613}; OGLE-2018-BLG-1700, \citealt{OB181700}).
The discovery rate of triple lens systems has nearly doubled since 2016, which is mainly attributed to the inauguration of the Korea Microlensing Telescope Network (KMTNet, \citealt{kim2016kmtnet}) in that year. With the continuous operation of KMTNet and the upcoming new facilities like EUCLID \citep{2010ASPC..430..266B, 2019ApJ...880L..32B} and WFIRST \citep{1808.02490}, we expect to encounter more triple lens events.
Analysing microlensing events is time consuming. Some binary events may even take years \citep{bozza2010microlensing}. The situation will be even worse when handling triple lens events because of the exponentially increasing parameter space. Thus, an efficient method is needed to model light curves for such systems. However, due to the inverse nature of solving the lens equation, and the finite source effect, the computation of magnification by a triple lens system is challenging. \textcolor{mycolor}{Three aspects need to be addressed, namely solving the lens equation numerically, dealing with complex caustic structures and image topologies, and handling finite source effects including limb-darkening. Our approach is explained in the next section.}
\textcolor{mycolor}{Currently, there are two main schemes for modelling triple lens events. The first approach is based on the ``binary superposition'' approximation method \citep{2001MNRAS.328..986H, 2002MNRAS.335..159R, 2005ApJ...629.1102H, OB120026}, while the second is based on the ray-shooting method \citep{1986A&A...166...36K, 1987SchneiderRayshooting}. Neither scheme is completely satisfactory however.}
For some triple events, their light curves can be approximated as a superposition of two binary light curves. Nevertheless, the superposition method is not always valid and sometimes the detectability of a second planet will be suppressed by the presence of the first planet \citep{2014ApJ...794...53Z, 2014MNRAS.437.4006S}. \textcolor{mycolor}{\textcolor{mycolor2}{In} ``binary superposition'', \textcolor{mycolor2}{ several methods may be used to calculate the binary light curves, including }the contour integration method \citep{gould1997stokes, dominik1998robust}. \textcolor{mycolor2}{In this work, the general contour integration method introduced in \cite{gould1997stokes} is implemented in the triple-lens scenario.}}
\textcolor{mycolor2}{To calculate magnification for finite sources, \cite{2000Vermaak} started from finding the image positions corresponding to the source centre, then used a recursive flood-fill algorithm to check whether neighbouring integration elements in the image plane can be mapped onto the source through the lens equation.} \cite{dong2006planetary, dong2009microlensing} and \cite{2014ApJ...782...47P} advocated the mapmaking method, which is a hybrid of the ray-shooting and contour integration methods. \cite{bennett2010efficient} proposed a method for modelling high magnification events for multiple-lens microlensing events, based on the image centred ray-shooting approach of \cite{bennett1996detecting}.
In contrast, \cite{mediavilla2006fast, mediavilla2011new} proposed an approach based on inverse polygon mapping to compute the magnification maps.
\textcolor{mycolor2}{For ray-shooting methods,} if the source radius is small or close to the caustics, high density light rays are needed. \textcolor{mycolor}{Other than for validation purposes, in this work we choose not to use ray-shooting for triple microlensing light curve calculations.}
The purpose of this paper is to present a method for calculating the magnification of a limb-darkened finite source lensed by a triple lens system. \textcolor{mycolor}{Our approach, overviewed in \S2, is based on calculating image areas by contour integration but with an alternative procedure for obtaining the image boundaries.} In \S3, we introduce the complex lens equation \textcolor{mycolor}{and our notations}, while in \S4, we present the details of our method. In \S5 we present the results. Finally, a short summary is given in \S6.
\section{Approach}
\textcolor{mycolor}{Modelling lensed light curves is achieved by setting up a representation of the lensing system and then repeatedly moving a source across the lensing system and using its magnifying effect to generate model light curves. Every model light curve is compared with the observed light curve and various lensing parameters are adjusted until a best-fitting curve is found. The focus of this paper is on the magnification calculation required for a limb-darkened, finite-sized source.}
Real source stars have finite sizes, and this causes their light curves to be significantly modified in high magnification regions \citep{witt1994can, 1994ApJ...421L..71G}. Finite source effects are essential for binary and triple lens systems because they are related to the mass ratios of the lens components, and can lead to measurements of the angular Einstein radius $\theta_{\rm E}$ when combined with knowledge of the angular source radius $\theta_*$ \citep[e.g.,][]{Yoo2004}. Finite source effects are particularly important when the source crosses a caustic (where the magnification diverges to infinity) or comes close to a cusp caustic \citep{witt1994can,1994ApJ...421L..71G,Nemiroff1994}. In these cases, the source cannot be regarded as point-like, and the observed magnification is an average of the magnification pattern over the face of the source. \textcolor{mycolor}{The surface brightness of a star is however not uniform \citep{1921Milne}. This is known as the limb-darkening effect, which also affects the microlensing magnification \citep{1996ApJ...464..212G, 1999Spectrophotometric, 2003measurelimb, 2005MNRAS.361..300D}. Limb-darkening effect needs to be considered to model precisely observed light curves, e.g., as in the first limb-darkening measurement by microlensing, the event MACHO 1997-BLG-28 \citep{2001firstlimbdetect}. It is thus crucial to have a reliable and efficient way to calculate the magnification of a limb-darkened, finite size source for triple lens systems.}
\textcolor{mycolor}{
For every position of the finite source, the lens equation has to be solved at multiple points around the source boundary so that the boundaries of the images can be determined. Gravitational lensing preserves surface brightness \citep{1973gravbook}. For a source with uniform surface brightness, the magnification due to lensing is equal to the ratio of the total image area and the source area. Usually, a two dimensional image area is computed with double integrals but these can be converted into a line integral according to Stokes' theorem. This is the contour integration technique \citep{1987contour, dominik1993, dominik1995, dominik1998robust, gould1997stokes, bozza2010microlensing,bozza2018vbbinarylensing} that we use in our work.
}
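As a minimal, self-contained sketch of how Stokes' theorem turns an area integral into a line integral (our illustration, not code from any of the cited packages), the signed area enclosed by a polygonal boundary with complex vertices reduces to the discrete shoelace sum:

```python
def polygon_area(vertices):
    """Signed area enclosed by a closed polygon given as complex vertices.

    Discrete Stokes/Green line integral:
        A = (1/2) * sum_k Im(conj(z_k) * z_{k+1}),
    positive for anti-clockwise orientation.
    """
    n = len(vertices)
    area = 0.0
    for k in range(n):
        z0, z1 = vertices[k], vertices[(k + 1) % n]
        area += (z0.conjugate() * z1).imag
    return 0.5 * area
```

For a uniform source, the total magnification is then the sum of the absolute enclosed image areas divided by the source area $\pi\rho^2$.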
\textcolor{mycolor}{The triple lens equation has to be solved numerically.} The lens equation for a single lens can be easily solved: the two image positions and magnifications can be derived analytically \citep{Einstein506,paczynski1986gravitational}. The lens equation for a binary lens is considerably more complicated, as it is no longer analytically tractable. The binary lens equation can be transformed into a fifth-order complex polynomial \citep{witt1990investigation}. \citet{skowron2012general} provided an algorithm to solve complex polynomial equations, and we use this algorithm in our modelling. It has been used elsewhere in a public package named VBBinaryLensing to calculate microlensing light curves for binary lens systems \citep{bozza2010microlensing, bozza2018vbbinarylensing}. They found that $\sim 80\%$ of computer time is spent in the root finding routine \citep{bozza2010microlensing}. \textcolor{mycolor}{This usage would increase for the tenth order polynomial for a triple lens system if they were to extend their method to the triple lens scenario.}
\textcolor{mycolor}{
We choose not to use the image boundaries obtaining method in VBBinaryLensing as we believe it would require significant effort to extend it to the triple lens scenario. We have implemented an alternative strategy for determining image boundaries which we describe in \S\ref{sec::conn}, and then compare with previous methods in \S\ref{sec::comp}.
}
\section{General Concepts}
\textcolor{mycolor}{In this section we introduce the complex lens equation and the parametrizations we use.}
\subsection{Lens equation for N point lenses}
Using complex notations, the ${\cal{N}}$ point lens equation can be written as \citep{witt1990investigation}
\begin{equation}
\label{lensequ}
\zeta = z + f(\overline{z}), \;\; f(\overline{z})\equiv - \sum\limits_{j=1}^{{\cal{N}}}\frac{m_j}{\overline{z}-\overline{z_j}},
\end{equation}
where $\zeta=y_1+i\,y_2$ is the source position, $z=x_1+i\,x_2$ is the corresponding image position. $m_j,\,z_j$ are the fractional mass and position of $j$-th lens, $\sum_j m_j = 1$, $\overline{z}$ and $\overline{z_j}$ are the complex conjugates of $z$ and $z_j$. \textcolor{mycolor}{If a point $z$ in the lens plane satisfies the lens equation, it will map back to the source position $\zeta$ through the lens equation, in this case we call $z$ a true image of $\zeta$.} \textcolor{mycolor2}{The position coordinates $\zeta,\, z,\, z_j$ are in units of the angular Einstein radius $\theta_{\rm E}$ of the lensing system,}
\begin{equation}
\theta_{\rm E} = \sqrt{\frac{4GM}{c^2}\frac{D_{ls}}{D_lD_s}}\;,
\end{equation}
\textcolor{mycolor}{
where $D_l$ and $D_s$ are the angular diameter distances from the observer to the lens and to the source, $D_{ls}$ is the angular diameter distance between the lens and source planes, $M$ is the total mass of the lens system, $c$ is the speed of light, and $G$ is the gravitational constant.}
Taking the conjugate of equation (\ref{lensequ}), we obtain an expression for $\overline{z}$, which one can substitute back into the original lens equation to obtain a complex polynomial in $z$ only, of order ${\cal{N}}^2+1$ (\citealt{witt1990investigation}). So the number of roots (image candidates) cannot exceed ${\cal{N}}^2+1$. In fact, some of these roots are not \textcolor{mycolor2}{true images}; \textcolor{mycolor}{although they satisfy the complex polynomial, they do not satisfy the original lens equation, in which case they are called false images (in \cite{bozza2010microlensing}, such images are called ghost images).} It has been shown that the maximum number of true images is $5({\cal{N}} - 1)$ \citep{2001astro.ph..3463R, 2003astro.ph..5166R, 2004math......1188K}. Notice that for ${\cal{N}}>3$, there must be false images among the polynomial roots for any given source position. \textcolor{mycolor}{We will introduce how to distinguish true and false images in \S\ref{sec::cri_truesolution}.}
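Distinguishing true from false roots amounts to substituting each root back into equation (\ref{lensequ}) and checking the residual. A hedged sketch (the function names and the tolerance here are ours, chosen for illustration):

```python
def lens_map(z, masses, lens_pos):
    """Map an image-plane point z to the source plane:
    zeta = z - sum_j m_j / (conj(z) - conj(z_j))."""
    return z - sum(m / (z.conjugate() - zj.conjugate())
                   for m, zj in zip(masses, lens_pos))

def is_true_image(z, zeta, masses, lens_pos, tol=1e-6):
    """A polynomial root is a true image only if it also satisfies
    the original (non-polynomial) lens equation."""
    return abs(lens_map(z, masses, lens_pos) - zeta) < tol
```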
For each image position $z$, its magnification ($\mu$) is related to the Jacobian $J$ by
\begin{equation}
\mu=J^{-1},\;\; J = 1 - f^{\prime}(\overline{z}) \overline{f^\prime(\overline{z})}, \;\; f^\prime(\overline{z}) = \text{d}f(\overline{z})/d\overline{z},
\end{equation}
where $f(\overline{z})$ is defined in equation (\ref{lensequ}). \textcolor{mycolor}{Some image positions will lead to $J = 0$ with the magnification $\mu$ becoming infinite. These image positions constitute ``critical curves'' in the lens plane, with the corresponding source positions forming ``caustics'' in the source plane.} The total magnification $ \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} }$ can be obtained by summing the inverse Jacobian determinant for each true image $z_I$ $(I=1,\cdot\cdot\cdot, N_{\rm im})$, corresponding to the source $\zeta$
\begin{equation}
\mu_{\scalebox{.9}{$\scriptscriptstyle PS$} } = \sum\limits_{I=1}^{N_{\rm im}}\frac{1}{|J(z_I)|},
\end{equation}
where $N_{\rm im}$ is the total number of true images.
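As a check of the magnification formula, consider the single-lens case, where both image positions are known in closed form; summing $1/|J|$ over them reproduces the Paczy\'nski point-source magnification $(u^2+2)/(u\sqrt{u^2+4})$ (a sketch of ours):

```python
import math

def jacobian(z, masses, lens_pos):
    """J = 1 - |f'(conj(z))|^2, with f'(conj(z)) = sum_j m_j/(conj(z)-conj(z_j))^2."""
    fp = sum(m / (z.conjugate() - zj.conjugate())**2
             for m, zj in zip(masses, lens_pos))
    return 1.0 - abs(fp)**2

def total_magnification(images, masses, lens_pos):
    """Point-source magnification: sum of 1/|J| over all true images."""
    return sum(1.0 / abs(jacobian(z, masses, lens_pos)) for z in images)

# Single lens (m = 1 at the origin): images of a source at u > 0 on the real axis.
u = 1.0
images = [complex((u + math.sqrt(u * u + 4.0)) / 2.0),
          complex((u - math.sqrt(u * u + 4.0)) / 2.0)]
mu = total_magnification(images, [1.0], [0j])
# equals (u^2 + 2)/(u*sqrt(u^2 + 4)) = 3/sqrt(5) for u = 1
```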
\subsection{Triple lens systems: parametrization}
We ignore the source baseline flux \textcolor{mycolor}{(the source flux before the microlensing event happens, \citealt{2000MNRASBaseline})} and the blending flux \textcolor{mycolor}{(flux that is not physically associated with the lensed source, \citealt{1995ApJLblending, 2007MNRAS.380..805S})}, which are easy to model linearly. With this simplification, there are ten parameters in modelling a triple lens light curve: five lens parameters ($s_2$, $q_2$, $s_3$, $q_3$, $\psi$) and four trajectory parameters ($t_0$, $u_0$, $t_{\rm E}$, $\alpha$), plus the source radius $\rho$. Here $\rho =\theta_{*}/\theta_{\rm E}$, where $\theta_{*}$ and $\theta_{\rm E}$ are the angular source radius and the angular Einstein ring radius of the lensing system, respectively. $s_2$ and $q_2$ are the separation and mass ratio between the first and second lenses, i.e., $q_2 = m_2/m_1$. Similarly, $s_3$ and $q_3$ are the separation and mass ratio between the first and third lenses. $\psi$ is the orientation angle of the third body. $t_0$ is the time when the source is closest to the primary mass $m_1$. The impact parameter $u_0$ is the primary lens-source separation at $t_0$. $s_2,\,s_3$ and $u_0$ are in units of $\theta_{\rm E}$. The Einstein timescale $t_{\rm E}$ controls the duration of the event. $\alpha$ is the source trajectory angle with respect to the $m_1$-$m_2$ axis. \textcolor{mycolor2}{We note that the number of parameters in many triple events is even higher due to non-trivial lens motion (parallax or orbital motion effects). In addition, for a finite source, limb-darkening parameters (filter-specific) are relevant too. To start, we focus on the magnification of a uniform brightness star.} A graphical illustration of the triple lens system (similar to Fig. 1 of \citealt{OB160613}) is shown in Fig. \ref{fig:graph}.
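The four trajectory parameters determine the source position at each epoch. The sketch below assumes one common rectilinear-motion convention (our illustration; sign conventions for $u_0$ and $\alpha$ differ between codes, and the position here is measured from the primary $m_1$ rather than the centre-of-mass origin):

```python
import math

def source_position(t, t0, tE, u0, alpha):
    """Source centre relative to the primary lens m1, in Einstein-radius units.

    tau runs along the trajectory direction alpha; u0 is the perpendicular
    impact parameter at t = t0. One common sign convention is assumed here.
    """
    tau = (t - t0) / tE
    x = tau * math.cos(alpha) - u0 * math.sin(alpha)
    y = tau * math.sin(alpha) + u0 * math.cos(alpha)
    return complex(x, y)
```

By construction, the lens-source separation equals $u_0$ at $t=t_0$ and grows as $\sqrt{u_0^2+\tau^2}$ away from it.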
\textcolor{mycolor2}{Since most of the observed triple microlensing events have a third, low-mass body (either a second planet or a planet in a binary)}, we choose the origin of the coordinate system to be the centre of mass of the first two masses. Thus the conversion from ($s_2$, $q_2$, $s_3$, $q_3$, $\psi$) to $m_j, z_j$ is as follows
\begin{equation}
\begin{aligned}
m_1 &= 1 / (1 + q_2 + q_3),\\
m_2 &= q_2\; m_1,\\
m_3 &= q_3\; m_1 = 1 - m_1 - m_2,\\
z_1 &= -q_2\;s_2/ (1 + q_2) + i\,0,\\
z_2 &= s_2/(1 + q_2) + i\,0,\\
z_3 &= -q_2\;s_2/ (1 + q_2) + s_3\cos(\psi) + i\,s_3\sin(\psi).
\end{aligned}
\end{equation}
\textcolor{mycolor}{We note that whichever coordinate system is chosen, magnification can still be calculated using this method. The input parameters are $m_j,\,z_j$, $\rho$, and $\zeta$, and the output is the magnification at $\zeta$.}
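The conversion above translates directly into code (a sketch; the function name is ours):

```python
import math

def triple_lens_configuration(s2, q2, s3, q3, psi):
    """Fractional masses and complex positions of the three lenses,
    with the origin at the centre of mass of the first two lenses
    (the convention adopted in the text)."""
    m1 = 1.0 / (1.0 + q2 + q3)
    m2 = q2 * m1
    m3 = 1.0 - m1 - m2
    z1 = complex(-q2 * s2 / (1.0 + q2), 0.0)
    z2 = complex(s2 / (1.0 + q2), 0.0)
    z3 = z1 + s3 * complex(math.cos(psi), math.sin(psi))
    return [m1, m2, m3], [z1, z2, z3]
```

By construction, the masses sum to unity and $m_1 z_1 + m_2 z_2 = 0$, so the $m_1$-$m_2$ barycentre sits at the origin.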
\begin{figure}
\includegraphics[width=\linewidth,keepaspectratio]{graph.png}
\caption{Schematic diagram of a triple lens system. \textcolor{mycolor}{$m_1,\,m_2$ and $m_3$ are the fractional masses of the three lenses ($m_1+m_2+m_3=1$, \textcolor{mycolor2}{$m_1\geq m_2 \geq m_3$}). The separations of $m_2$ and $m_3$ relative to $m_1$ are labelled $s_2$ and $s_3$; $m_1$ and $m_2$ lie on the horizontal axis; $\psi$ is the direction angle of $m_3$ relative to the horizontal axis. The source trajectory is parametrized by the trajectory angle $\alpha$ and the impact parameter $u_0$.}}
\label{fig:graph}
\end{figure}
\subsection{Finite sources: parametrization}
\label{sec::srcbound}
For a circular source with radius $\rho$, centred at $\zeta^{(c)} = y_1^{(c)} + i\,y_2^{(c)}$, its boundary can be represented as
\begin{equation}
\zeta(\theta) = \zeta^{(c)} + \rho e^{i\theta},\;\; \theta \in [0,2\pi].
\end{equation}
In practice, \textcolor{mycolor}{we approximate the circular source boundary by a polygon with $n$ different vertices $\zeta(\theta_k)$}, where $\theta_0 < \theta_1 < \cdot\cdot\cdot < \theta_k < \cdot\cdot\cdot < \theta_n = \theta_0 + 2\pi$. The images of the source are distorted by the lens system (see Fig. \ref{fig:topo}). \textcolor{mycolor}{The $\theta_{k}$ are not necessarily sampled at equal intervals}. For each $\theta_k$, we need to solve the corresponding lens equation to obtain \textcolor{mycolor}{both true and \textcolor{mycolor2}{false images}, and attach them to image tracks. Then we pick the true image segments out to obtain the true image boundaries.} Finally, we can apply Stokes' theorem to obtain the enclosed area of these image boundaries.
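The sampling of the source limb described above can be sketched as follows (our illustration; uniform spacing is used here, whereas in practice the $\theta_k$ are not necessarily equally spaced):

```python
import cmath
import math

def source_boundary(zeta_c, rho, n):
    """n+1 sampled points zeta(theta_k) on the circular source limb,
    with theta_n = theta_0 + 2*pi so the polygon closes on itself."""
    return [zeta_c + rho * cmath.exp(1j * 2.0 * math.pi * k / n)
            for k in range(n + 1)]
```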
\section{Finite source magnification for triple lenses}
\textcolor{mycolor}{Before applying Stokes' theorem to calculate the magnification of a uniform brightness star, one needs to obtain continuous image boundaries.} In \S\ref{sec::conn}, we discuss the new method we have developed to connect the image boundaries. In \S\ref{sec::quad} we have also adopted Bozza's quadrupole test \citep{bozza2018vbbinarylensing} to decide whether point source magnification $ \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} }$ is sufficient to approximate the magnification $ \mu_{\scalebox{.9}{$\scriptscriptstyle FS$} }$ of a uniform brightness star. The flowchart in Fig. \ref{fig:flowchart} shows the major stages we execute. \textcolor{mycolor}{Once we can calculate the magnification of a uniform brightness star, we can model limb-darkening light curves by regarding the source star as a set of annuli weighted by the radial limb-darkening profile, as introduced in \S\ref{sec::limb}.}
\begin{figure*}
\includegraphics[width=1\linewidth]{flowchart.pdf}
\vspace{-1.2cm}
\caption{A flowchart describing our procedure \textcolor{mycolor}{to calculate the magnification of a uniform brightness star}. \textcolor{mycolor}{The three procedures along the bottom of the flowchart (solve the lens equations, remove false images, and connect segments) are described in detail in \S \ref{sec::conn}}.
}
\label{fig:flowchart}
\end{figure*}
\subsection{Topology of image boundaries}
\label{sec:topo}
\textcolor{mycolor}{We first illustrate} the topology of both true and false image boundaries for the circular source lensed by a triple system in Fig. \ref{fig:topo}. We adopt the parameters of the triple lens solution ``Sol C (wide)'' \textcolor{mycolor}{of the microlensing event OGLE-2016-BLG-0613 \citep{OB160613}}. According to their Table 3, $t_0 = 7494.153$; $u_0 = 0.021$; $t_{\rm E} = 74.62$; $s_2 = 1.396$; $q_2 = 0.029$; $\alpha = 2.948$; $s_3 = 1.168$; $q_3 = 3.27\times 10^{-3}$; $\psi = 5.332$. Instead of $\rho = 2.2\times 10^{-4}$ in their solution, we set $\rho = 0.1$ to better visualise the image boundaries. \textcolor{mycolor}{Two} source centres are shown. \textcolor{mycolor}{In the right panel, the two primary image boundaries are nested, forming a ``ring-like'' image, which needs special care when calculating the enclosed image areas.}
\begin{figure*}
\begin{center}
\includegraphics[width=0.48\linewidth,keepaspectratio]{topo1.png}
\includegraphics[width=0.48\linewidth, keepaspectratio]{topo2.png}
\end{center}
\vspace{-0.7cm}
\caption{Illustrations of image boundary topologies formed by the roots of the complex polynomial for a triple lens system. Parameters of this system are introduced in \S \ref{sec:topo}. The three plus signs indicate the lens positions, the red solid and dashed curves show the caustic and the critical curve. The circular source is indicated by the black circle ($\rho = 0.1$). \textcolor{mycolor}{The blue curves are true image boundaries while the grey curves are false image boundaries arising from the tenth-order polynomial. \textcolor{mycolor2}{Insets show the detail around the third mass.} In the left panel, the source is at (0.7, 0) while in the right panel it is at (0, 0) in the source plane. In the right panel, the two primary image boundaries are nested, forming a ``ring-like'' image, which needs special care when calculating the enclosed image area inside that ``ring''.}
}
\label{fig:topo}
\end{figure*}
\textcolor{mycolor}{
In Fig. \ref{fig:topo}, the blue curves are true image boundaries, and the grey curves are false image boundaries arising from the tenth-order polynomial. \textcolor{mycolor2}{In the left panel, there are four true image boundaries (three large ones plus a small one near the third lens mass, in blue) and six false image boundaries (three large ones and a small one near the second mass plus two small ones near the third mass, in gray). In the right panel, there are four true image boundaries (two large ones form the ring structure, and the smaller two are close to the second and third lens masses) and four false image boundaries (two large mushroom-like images and two small circular boundaries close to the second and third lens masses).} Such topological figures merely give a preliminary impression about the shape and configuration of the image boundaries. The plotted data are created by solving the lens equation to give points along the boundary. Quantitative information, i.e., enclosed areas can not be calculated using line integrals until these points are connected in order (clockwise or anti-clockwise).
}
\subsection{Connecting image points to obtain continuous image boundaries}
\label{sec::conn}
\textcolor{mycolor}{
We use the configuration in the left panel of Fig. \ref{fig:topo} as an example, i.e., the source centred at $(0.7, 0)$, to illustrate how we construct the continuous image boundaries.}
The outer limb of the source is approximated by a polygon as described in \S\ref{sec::srcbound}. \textcolor{mycolor}{
We first initialise ${\cal{N}}^2+1$ linked lists to store the images from solving the lens equations, where ${\cal{N}}$ is the number of lenses. At each source position $\zeta({\theta_k})$, $k=0,1, \cdot\cdot\cdot, n$, we solve the corresponding polynomial lens equation, which will generate ${\cal{N}}^2+1$ solutions. Each solution is attached to a linked list, depending on its distance from the tails of the existing linked lists.}
\textcolor{mycolor}{
After the above process, we will obtain ${\cal{N}}^2+1$ linked lists, which store the image points. Each linked list contains the same number of points. As Fig. \ref{fig:n+1_linked_list} shows, points in different linked lists are plotted with different colours, and the head point of the $i$-th linked list is labelled as $H\{i\}$. We add arrows to indicate the direction in which points are linked from the head to the tail of a linked list. For a triple lens, i.e., ${\cal{N}} = 3$, there are ten linked lists. In Fig. \ref{fig:n+1_linked_list}, we show only a subset of those ten linked lists to help visualisation.
}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,keepaspectratio]{linked_lists.png}
\end{center}
\vspace{-0.5cm}
\caption{Illustrations of the ${\cal{N}}^2+1 = 10$ linked lists after solving the triple lens equation and preliminary point-connecting. Parameters are the same as in the left panel of Fig. \ref{fig:topo}. Points in different linked lists are plotted with different colours, and the head point of the $i$-th linked list is labelled as $H\{i\}$. Arrows are added to indicate the direction in which points are linked inside each linked list, from head to tail. Only a subset of the ten linked lists is shown to aid visualisation.
}
\label{fig:n+1_linked_list}
\end{figure}
\textcolor{mycolor}{
Connecting points at this stage is just a preliminary procedure, since it is not guaranteed that every point will be attached to the right place. Some linked lists contain only true images (e.g., the $9$-th list in Fig. \ref{fig:n+1_linked_list}), some contain only false images (e.g., the $8$-th list in Fig. \ref{fig:n+1_linked_list}), while others may contain both (e.g., the $0,\,1,\,6$-th lists in Fig. \ref{fig:n+1_linked_list}). Mixing will happen, especially during caustic crossings, when two image boundaries are very close to, or intersect with, each other. In such cases, we need to go through several steps to obtain the true image boundaries which contribute to the total magnification. We want to emphasise that false images actually help us to connect true image points correctly later in the process. False images can be considered as ``bridges'' which link different true image segments together. If we did not use the position information of the false images, it would be hard to link all the true image points into continuous boundaries, because the number of true images changes along the source boundary (as the sampled point crosses the caustics).}
We use the lens equation to check whether a root is a true image \textcolor{mycolor}{(as described in \S \ref{sec::cri_truesolution})}. We then remove all \textcolor{mycolor2}{false images} from the linked lists. \textcolor{mycolor}{This usually breaks the ${\cal{N}}^2+1$ linked lists obtained previously into several image segments, as shown in Fig. \ref{fig:segms}. Notice that the number of resulting segments is not necessarily ${\cal{N}}^2+1$ anymore. Different segments are labelled as $S\{i\}$. Arrows indicate the direction in which points are linked in each segment from head to tail. The colour of each segment is inherited from the original linked list in Fig. \ref{fig:n+1_linked_list}. The last step before applying Stokes' theorem is to connect those image segments into closed continuous image boundaries}.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,keepaspectratio]{true_segms.png}
\end{center}
\vspace{-0.5cm}
\caption{
Demonstration of true image solution segments after removing false image points in Fig. \ref{fig:n+1_linked_list}. Different segments are labelled with $S\{i\}$. \textcolor{mycolor3}{The colour} of each segment \textcolor{mycolor3}{is inherited} from their original linked list in Fig. \ref{fig:n+1_linked_list}. Arrows \textcolor{mycolor3}{indicate} the direction in which points are linked from the head to the tail of each individual segment.
}
\label{fig:segms}
\end{figure}
To do this, we first check whether any given segment is closed by evaluating the distance $d$ between the head and tail of the segment (we choose a threshold of $10^{-5}$, \textcolor{mycolor}{which will be explained in more detail in \S \ref{sec::cri_vicinity}}). If $d\leq10^{-5}$, this completes our task for the current segment (\textcolor{mycolor}{see segment ``S7'' in Fig. \ref{fig:segms}}). We then move to the next segment until all segments are processed. If the current segment is not closed, we first check whether its tail connects with the head or tail of another image segment, i.e., whether they have identical image positions. If not, we conduct the same check for the head of the current segment (\textcolor{mycolor}{see segments ``S0'' and ``S1'' in Fig. \ref{fig:segms}: the head of ``S0'' is connected with the tail of ``S1''}). Another possibility to connect two segments occurs when the \textcolor{mycolor}{image} boundary crosses the critical curve (\textcolor{mycolor}{see the tails of ``S0'' and ``S2'' in Fig. \ref{fig:segms}}). In this case, one must ``jump'' over the \textcolor{mycolor}{critical curve} to connect a close pair of images. The condition is that the close image pair must have the same source position, and their magnifications must have comparable absolute values but opposite signs. In practice, the ``connecting'' procedure is repeated until all the image segments are closed.
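The head/tail matching logic above can be sketched as follows. This is a minimal Python illustration under the stated $10^{-5}$ distance threshold; the function names are ours, and the actual (C++) implementation additionally applies the source-position and magnification-sign checks described above when ``jumping'' over the critical curve.

```python
# Hypothetical sketch of the segment-connection step (not the authors'
# C++ code). Segments are lists of complex image positions.

THRESH = 1e-5  # head/tail distance threshold quoted in the text

def is_closed(seg):
    """A segment is closed when its head and tail (nearly) coincide."""
    return abs(seg[0] - seg[-1]) <= THRESH

def try_connect(segments):
    """Greedily merge open segments whose loose ends coincide."""
    segs = [list(s) for s in segments]
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            if is_closed(segs[i]):
                continue
            for j in range(len(segs)):
                if i == j:
                    continue
                a, b = segs[i], segs[j]
                if abs(a[-1] - b[0]) <= THRESH:      # tail(i) meets head(j)
                    segs[i] = a + b[1:]
                elif abs(a[-1] - b[-1]) <= THRESH:   # tail(i) meets tail(j)
                    segs[i] = a + b[::-1][1:]
                elif abs(a[0] - b[-1]) <= THRESH:    # head(i) meets tail(j)
                    segs[i] = b + a[1:]
                elif abs(a[0] - b[0]) <= THRESH:     # head(i) meets head(j)
                    segs[i] = b[::-1] + a[1:]
                else:
                    continue
                del segs[j]
                merged = True
                break
            if merged:
                break
    return segs
```

For example, two open halves of a square boundary are merged into one closed loop; a pair of segments straddling the critical curve would instead be joined via the magnification-sign condition rather than the raw image-plane distance.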
Once the connecting procedure is complete, we obtain closed image boundaries from the previously unclosed true solution segments\footnote{An animation showing how the closed image boundaries are linked can be visualised at:
\href{https://github.com/rkkuang/triplelens}{https://github.com/rkkuang/triplelens}, together with the source code and documentation.}. Finally, we can calculate the enclosed areas using line integrals.
\subsection{Finding \textcolor{mycolor2}{true images} of the lens equation from its complex polynomial}
\label{sec::cri_truesolution}
\textcolor{mycolor}{
For a triple lens system (${\cal{N}}$=3), the lens equation can be converted into a tenth-order complex polynomial \citep{2002astro.ph..2294R}, which can be solved numerically \citep{skowron2012general}, yielding ten (complex) roots. In most cases, however, not all of these are necessarily \textcolor{mycolor2}{true images} of the original lens equation, i.e., eq. (\ref{lensequ}). In the following, we discuss in more detail how to check whether a root from the complex polynomial roots solver is a true solution of the original lens equation.}
\textcolor{mycolor}{
Numerically, the complex roots solver in \citet{skowron2012general} locates roots one by one. At each step, they deflate the original polynomial by the found root, then proceed to search for the next root. The deflation process introduces numerical noise, so they also conduct a ``root polishing'' procedure. This involves taking each root as the initial guess in Newton's (or Laguerre's) method to find a more accurate root of the full polynomial.
}
\textcolor{mycolor}{
The roots coming from the polynomial solver are not necessarily \textcolor{mycolor2}{true images} of the original lens equation. For a given source position $\zeta$, theoretically a true solution $z$ should satisfy: $\delta = |\zeta - z - f(\overline{z})| = 0$. Nevertheless, due to numerical noise, in practice $\delta$ is not strictly zero even for \textcolor{mycolor2}{true images}. We found that in general, \textcolor{mycolor2}{true images} correspond to $\delta$ $\sim 10^{-16}$ to $10^{-8}$ while \textcolor{mycolor2}{false images} lead to $\delta$ $\sim 10^{-1}$ to $10^{0}$. There is usually a clear separation between \textcolor{mycolor2}{true images} and \textcolor{mycolor2}{false images} in terms of $\delta$. We set the criterion to be $10^{-5}$, i.e., an image is true if it satisfies $\delta < 10^{-5}$.
}
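As an illustration, the $\delta$ criterion can be applied as below. We assume here the point-lens convention $\zeta = z + f(\overline{z})$ with $f(\overline{z}) = -\sum_i m_i/(\overline{z}-\overline{z}_i)$; the helper names are ours, not those of the actual code.

```python
# Hedged sketch of the true-image test. Convention assumed:
# zeta = z + f(conj(z)), with f(w) = -sum_i m_i / (w - conj(z_i)).

def f_bar(zbar, masses, lens_pos):
    """Deflection term f(conj(z)) for a set of point-mass lenses."""
    return -sum(m / (zbar - p.conjugate()) for m, p in zip(masses, lens_pos))

def is_true_image(z, zeta, masses, lens_pos, tol=1e-5):
    """Keep a polynomial root z if delta = |zeta - z - f(conj(z))| < tol."""
    delta = abs(zeta - z - f_bar(z.conjugate(), masses, lens_pos))
    return delta < tol
```

For a single unit-mass lens at the origin, $z=2$ maps to $\zeta = 2 - 1/2 = 1.5$, so it passes the test for that source position, while an arbitrary spurious root does not.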
\textcolor{mycolor}{
We note that $10^{-5}$ is not valid in all cases. When the source is just outside cusps or folds, there will be ``nearly true'' false images. Alternatively, if the source is just inside cusps or folds, there will be ``nearly false'' true images. \citet{skowron2012general} implemented their algorithm in double precision. They found for fifth order polynomials, the limiting precision for very close roots is $10^{-7.7}$ for two close roots (near a fold caustic), and $10^{-5.2}$ for three very close roots (near a cusp).
}
\textcolor{mycolor}{
\NewEntry{The situations discussed above do not \textcolor{mycolor3}{affect} our whole method for the following reasons.}{}
\begin{itemize}[topsep=0pt]
\item Only a small fraction of the source boundary will experience caustic crossing. The probability that discretely sampled points happen to be very close to the caustics is low. Thus only a few points or no points will experience this ambiguity.
\item \textcolor{mycolor3}{Even if} we miss several true images, we can still link the points into image boundaries using the procedures introduced in \S\ref{sec::conn}.
\item If we include some false image points, since they are close to the \textcolor{mycolor2}{true images}, they would not affect the final area significantly.
\end{itemize}
}
\subsection{Criterion for image segment connection}
\label{sec::cri_vicinity}
\textcolor{mycolor}{In \S\ref{sec::conn}, we check whether two segments are connected by checking the distance $d$ between their heads or tails, or the distance between the corresponding source positions. We now elaborate further on why we choose a threshold of $10^{-5}$; this relies on several numerical observations. The first is that although we use $n$ different points to approximate the source boundary, there are actually $n+1$ points for us to use in the code, with $\theta_n = \theta_0 + 2\pi$. These two source positions, $\zeta(\theta_0)$ and $\zeta(\theta_n)$, correspond to exactly the same image points, and they usually correspond to the head and tail of a linked list, so the distance $d$ between the head and the tail in this case is exactly zero. The second is that when two segments happen to be separated by the critical curves, their heads or tails correspond to exactly the same source positions. Finally, in our method, we use this connection check only a few times: it is applied after the linked lists have been built and the \textcolor{mycolor2}{false images} removed, when only several segments are left. They correspond to either individual boundaries (e.g., ``S7'' in Fig. \ref{fig:segms}), or they are connected at the same points (e.g., ``S0'' and ``S1'' in Fig. \ref{fig:segms}), or they ``jump'' over critical curves (e.g., ``S6'' and ``S3'' in Fig. \ref{fig:segms}).
}
\textcolor{mycolor}{
It may seem that the threshold $10^{-5}$ is too large when there are multiple close images, but we note that in a previous step the procedure has already separated different segments from each other. Additionally, in general, image segments which belong to different image boundaries are well separated, or at least their heads and tails are not in the same place. By checking head-tail distance, we will not mix different segments together if they do not belong to the same image boundary.
} \textcolor{mycolor2}{One could set a stricter threshold, e.g., $10^{-10}$ (although for a typical source radius in microlensing events, $10^{-5}$ is already sufficient).
It is also possible to check segment connectivity by looking for the nearest loose end (head or tail) of a segment. However, when sampled points are not dense enough, the segments obtained in the previous step may be incorrectly linked (especially when two image boundaries are very close to each other, or when a ring-like structure is formed), and the image areas calculated could be wrong. Taking all these points into account, we choose to use the distance criterion.}
\vspace*{-4mm}
\subsection{Comparison with previous image boundary obtaining methods}
\label{sec::comp}
\textcolor{mycolor}{Previously published papers use different ways to obtain the image boundaries. \textcolor{mycolor2}{\citet{gould1997stokes}, which introduced the contour integration method, included a method for constructing continuous image boundaries.} To avoid inverting the lens equation, \citet{1987contour} proposed the contour plot method to find the image positions. This contour plot method was improved by \citet{dominik1995}. By constructing a squared deviation function and plotting its contour, the image boundaries corresponding to a circular source can be obtained. The overall image information, such as image numbers and shapes, is encoded in the contour plot of the squared deviation function. This method has a limitation: it only provides plots, not precise numerical values for the image boundaries. To find more precise image contours, one needs high-density sampling in the lens plane.}
\textcolor{mycolor}{
\citet{dominik1993, dominik1995} further promoted this idea into a contour-plot-and-correct method. The contour plot data can be stored in purpose designed structures, which can potentially be analysed to obtain microlensing light curves. \citet{dominik1998robust} used the contour plot method to obtain image boundaries and then applied Stokes's theorem to calculate image areas. \textcolor{mycolor2}{In a further development, \citet{2007DominikAdapt} proposed an adaptive contouring algorithm to determine the image contour.}
}
\textcolor{mycolor}{Later works used numerical algorithms to solve the lens equation. \citet{bozza2010microlensing} introduced several error estimators which enable adaptive sampling on the source boundary. The algorithm starts with two points on the source boundary, and then inserts new points one by one, between the pair of points which has the largest error estimate. This strategy allows efficient sampling near caustics; accurate image areas can be obtained with the minimum number of calculations. This method has been developed into the widely used VBBinaryLensing package \citep{bozza2018vbbinarylensing}, which has been integrated into microlensing event modelling Python packages such as pyLIMA \citep{pylima} and MulensModel \citep{mulensmodel}.
}
\textcolor{mycolor}{The caustic structures in triple lenses are much more intricate \citep{2015ApJ...806...99D, 2019ApJ...880...72D} than those for binary lenses, resulting in more complicated image topologies and degeneracies \citep{2014MNRAS.437.4006S}. This poses a challenge for obtaining the area of highly distorted images of a source star. Bozza's strategy would require considerable effort to extend to the triple lens scenario. As a consequence, we have designed and implemented a different approach to determine image boundaries (see \S\ref{sec::conn}).}
\subsection{Stokes' theorem and magnification}
\label{sec::stokes}
\textcolor{mycolor}{
Given any continuous image boundaries, $\left\{ z^{(k)}=x_1^{(k)} + i\,x_2^{(k)}\right\}$, $k=0,1, \cdot\cdot\cdot, n$, where the first and last points are identical,} the enclosed area can be calculated as
\begin{equation}
A = \frac{1}{2}\sum_{k=1}^{n} (x_2^{(k)}+x_2^{(k-1)}) (x_1^{(k)} - x_1^{(k-1)}),
\end{equation}
or as a more symmetrical expression (\citealt{dominik1998robust}),
\begin{equation}
\begin{split}
A = \frac{1}{4} \bigl[ \sum_{k=1}^{n} (x_2^{(k)}+x_2^{(k-1)}) (x_1^{(k)} - x_1^{(k-1)}) \\
-(x_1^{(k)} + x_1^{(k-1)}) (x_2^{(k)} - x_2^{(k-1)}) \bigr].
\end{split}
\label{equ:8}
\end{equation}
If there are no nested image boundaries, the magnification is then simply the total area of all the image boundaries divided by the source area. However, there will be nested images in some cases. We handle this by assigning each image boundary object a ``parity'' attribute with value $+1$ or $-1$, according to the sign of magnification at the head of the image boundary. \textcolor{mycolor2}{For example in Fig. \ref{fig:segms}, the heads of ``S1'' and ``S3'' are separated by the critical curve with their magnifications having opposite signs, and thus they are assigned opposite parities.} The total area covered by the source is found by summing up all the boundary areas multiplied by the ``parities''.
\textcolor{mycolor2}{Notice that the (signed) area of each image boundary does not depend on the initial starting point. If we start on ``S1'' with positive parity counter-clockwise, the parity of the boundary will be assigned positive, and the enclosed area calculated with eq. (\ref{equ:8}) will be negative. On the other hand if we start on ``S3'' with negative parity (clockwise), the calculated area will be positive, and so the product of the initial parity with the area calculated with eq. (\ref{equ:8}) will remain the same.}
The total magnification is then simply the sum divided by the area of the source \textcolor{mycolor}{$\pi \rho^2$}.
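The area and parity bookkeeping can be sketched as below. This is a minimal Python version using the standard shoelace sum, which agrees with the expressions above up to the overall sign set by the traversal direction; as noted in the text, only the product of the parity and the signed area matters. The names are ours.

```python
# Sketch of the area/parity bookkeeping. Each closed boundary is a list
# of complex image positions whose first and last entries coincide;
# 'parities' holds the +1/-1 attribute taken from the sign of the
# magnification at each boundary's head.
import math

def signed_area(boundary):
    """Shoelace sum over a closed boundary (first point == last point).
    The sign follows the traversal direction."""
    a = 0.0
    for k in range(1, len(boundary)):
        z0, z1 = boundary[k - 1], boundary[k]
        a += z0.real * z1.imag - z1.real * z0.imag
    return 0.5 * a

def magnification(boundaries, parities, rho):
    """Parity-weighted boundary areas over the source area pi*rho^2."""
    total = sum(p * signed_area(b) for p, b in zip(parities, boundaries))
    return total / (math.pi * rho ** 2)
```

For a single counter-clockwise unit square with parity $+1$ and a source of area one, the returned magnification is exactly the enclosed area.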
The simplest scheme is to start with an approximation of the source limb with e.g., $n=256$ uniformly sampled points, and find the finite source magnification. We can then double $n$, and compare the change in the magnification. We iterate until the relative change is smaller than a preset accuracy, e.g., $\epsilon = 10^{-3}$. However, sampling the source boundary uniformly does not take care of special places on the source boundary, for example, when the source straddles the caustics; these special source boundary places need denser sampling. So we first uniformly sample e.g., $n=45$ \textcolor{mycolor}{different} points on the source boundary, i.e., the $k$-th point \textcolor{mycolor}{$\zeta(\theta_k)$} corresponds to an angle $\theta_k = 2\pi k/n$, \textcolor{mycolor}{with $\theta_0 = 0,\,\theta_n = 2\pi$}. For each \textcolor{mycolor}{$\theta_k$}, we compute the point source magnification \textcolor{mycolor}{$ \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} }(\zeta(\theta_k))$}, which controls the density of points to be sampled around \textcolor{mycolor}{$\zeta(\theta_k)$}. In this way, we obtain an initial sample on the source boundary which takes special care around high-magnification places.
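One simple way to realise such a magnification-weighted initial sample is sketched below. The proportional allocation rule used here is an assumption for illustration; the paper does not state the exact weighting, so the actual code may distribute points differently.

```python
# Hypothetical magnification-weighted sampling of theta in [0, 2*pi].
# 'mu' holds point-source magnifications at n coarse, uniform angles;
# each coarse arc receives a number of points proportional to mu
# (assumed rule, not necessarily the authors' exact one).
import math

def weighted_angles(mu, n_total):
    n = len(mu)
    coarse = [2 * math.pi * k / n for k in range(n + 1)]  # theta_n = 2*pi
    wsum = sum(mu)
    thetas = []
    for k in range(n):
        m = max(1, round(n_total * mu[k] / wsum))  # points in this arc
        for j in range(m):
            thetas.append(coarse[k] + (coarse[k + 1] - coarse[k]) * j / m)
    thetas.append(2 * math.pi)  # duplicate source position zeta(theta_n)
    return thetas
```

With uniform magnifications this reduces to uniform sampling, while arcs near caustics (large $\mu_{PS}$) receive proportionally more points.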
\subsection{Bozza's Quadrupole test in deciding whether finite source computation is necessary}
\label{sec::quad}
Since there is no fundamental difference between binary and triple lens systems, we adopt the quadrupole test introduced in \citet{bozza2018vbbinarylensing} to detect whether the source star is close to the caustics, and to decide whether it is necessary to use the finite source computation. If it is not necessary, we use the point source magnification $\mu_{PS}$ as an approximation.
The finite source magnification of a uniform brightness source can be expanded as \citep{pejcha2008extended}
\begin{equation}
\label{FSexpand}
\mu_{\scalebox{.9}{$\scriptscriptstyle FS$} } = \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} } + \frac{\rho^{2}}{8}\Delta \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} } + \frac{\rho^{4}}{192}\Delta^2 \mu_{\scalebox{.9}{$\scriptscriptstyle PS$} } + O(\rho^5),
\end{equation}
where $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ is the Laplacian and $\Delta^2 = \Delta \Delta$ is the biharmonic operator.
The quadrupole term in equation (\ref{FSexpand}) for each image $z_I$ can be written as \citep{bozza2018vbbinarylensing}
\begin{equation}
\label{muQI}
\mu_{Q_I} = -\frac{ 2\,{\rm Re}[3\overline{f}^{\,\prime 3}f^{\,\prime \prime 2} - (3-3J+J^2/2)|f^{\prime \prime}|^2+J\overline{f}^{\,\prime 2}f^{\prime \prime \prime}] }{J^5}\rho^2,
\end{equation}
where $f^\prime(z)=df/dz$, $f^{\prime\prime}(z)=d^2 f/dz^2$ and
$f^{\prime\prime\prime}(z)=d^3 f/dz^3$.
To detect the cusp caustic, \citet{bozza2018vbbinarylensing} also constructed an error estimator
\begin{equation}
\label{errcusp}
\mu_{C} = \frac{6\,{\rm Im}[\overline{f}^{\,\prime3}f^{\,\prime \prime 2}]}{J^5}\rho^2.
\end{equation}
Thus the condition in the quadrupole test can be written as
\begin{equation}
\sum\limits_{I}c_Q(|\mu_{Q_I}|+|\mu_{C_I}|) < \delta,
\label{equ:quadcond}
\end{equation}
where $c_Q$ and $\delta$ are to be chosen empirically so as to ensure a sufficient safety margin. In our code, $c_Q=1$ and $\delta = 10^{-6}\sim 10^{-2}$, similar to the values chosen in \citet{bozza2018vbbinarylensing}.
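Given the derivatives $f'$, $f''$, $f'''$ and the Jacobian $J$ evaluated at each image, the test of eqs (\ref{muQI})-(\ref{equ:quadcond}) can be coded directly. The sketch below (Python, names ours) returns whether the point-source approximation is acceptable; computing the derivatives themselves is lens-model specific and omitted here.

```python
# Sketch of the quadrupole test. Each image is described by the tuple
# (fp, fpp, fppp, J) = (f'(z), f''(z), f'''(z), Jacobian) at that image.

def quad_terms(fp, fpp, fppp, J, rho):
    """Quadrupole (eq. 7) and cusp (eq. 8) error terms for one image."""
    fb = fp.conjugate()
    mu_Q = -2 * (3 * fb**3 * fpp**2
                 - (3 - 3 * J + J**2 / 2) * abs(fpp)**2
                 + J * fb**2 * fppp).real / J**5 * rho**2
    mu_C = 6 * (fb**3 * fpp**2).imag / J**5 * rho**2
    return mu_Q, mu_C

def point_source_ok(images, rho, c_Q=1.0, delta=1e-3):
    """Eq. (9): accept the point-source value when the summed error
    estimate over all images is below delta."""
    err = sum(c_Q * (abs(q) + abs(c))
              for q, c in (quad_terms(*im, rho) for im in images))
    return err < delta
```

When the higher derivatives vanish (a locally linear mapping), both error terms are zero and the test passes for any small $\rho$.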
\subsection{Limb-darkening}
\label{sec::limb}
\textcolor{mycolor}{
In practice, precise modelling of observed light curves needs to include limb-darkening. The linear profile is a reasonable approximation to the limb-darkening for most stars \citep{1921Milne}
}
\begin{equation}
I(r) = \overline{I}f(r),\;\; f(r)=\frac{3}{3-u}\left[ 1-u(1-\sqrt{1-r^2}) \right],
\label{equ:limbd}
\end{equation}
\textcolor{mycolor}{
where $r=\rho_i/\rho$ is the fractional radius, i.e., the ratio of a given radius $\rho_i$ to the source radius $\rho$, and $\overline{I}$ is the average surface brightness. $u$ is the limb-darkening coefficient. It is related to the $\Gamma$ convention limb-darkening law \citep{2002An}
}
\begin{equation}
I(\vartheta) = \overline{I} \left[ (1-\Gamma) + \frac{3\Gamma}{2}\cos\vartheta \right],
\end{equation}
\textcolor{mycolor}{
by $u = 3\Gamma/(2+\Gamma)$, and $r = \sin\vartheta$, where $\vartheta$ is the emergent angle, $0\leq \Gamma \leq 1$.
}
\textcolor{mycolor}{
We choose the method introduced by \citet{bozza2010microlensing} where they use annuli to approximate the source, summing up the magnification in each annulus weighted by the limb-darkening profile. Error estimators can also be constructed, which allow adaptive sampling of the source profile. If an annulus has the maximum error, it can be split into two sub-annuli. The dividing radius is chosen to equipartition the cumulative function, defined as}
\begin{equation}
F(r) = 2\int_0^rxf(x)dx.
\end{equation}
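For the linear law of eq. (\ref{equ:limbd}), this integral has a closed form, $F(r) = \frac{3}{3-u}\left[(1-u)\,r^2 + \frac{2u}{3}\left(1-(1-r^2)^{3/2}\right)\right]$, with $F(1)=1$. A sketch of the annulus-splitting rule by equipartitioning $F$ (bisection; function names are ours):

```python
# Cumulative limb-darkening profile F(r) = 2*integral_0^r x f(x) dx for
# the linear law, and the dividing radius that equipartitions F.
import math

def F(r, u):
    """Closed-form cumulative profile; F(0) = 0 and F(1) = 1."""
    return 3.0 / (3.0 - u) * ((1.0 - u) * r**2
                              + (2.0 * u / 3.0) * (1.0 - (1.0 - r**2) ** 1.5))

def split_radius(r1, r2, u, tol=1e-12):
    """Radius r in (r1, r2) with F(r) = (F(r1) + F(r2)) / 2, by bisection
    (F is monotonically increasing in r)."""
    target = 0.5 * (F(r1, u) + F(r2, u))
    lo, hi = r1, r2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid, u) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a uniform source ($u=0$), $F(r)=r^2$ and the full disc splits at $r=\sqrt{1/2}$, i.e., two annuli of equal area, as expected.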
\textcolor{mycolor}{
We note that \citet{dominik1998robust} introduced a different method to calculate the magnification of a limb-darkened source star, involving two-dimensional numerical integration.
}
\section{Results}
\subsection{Light curves}
\label{sec:lkv}
We show several examples of light curves; the triple lens parameters (other than $\rho$) are the same as in \S \ref{sec:topo}. For the source trajectory shown in Fig. \ref{fig:lkv_geo}, the corresponding light curves are shown in Fig. \ref{fig:lkv} for four source sizes. As the source size increases, the magnification peaks are more smoothed out compared to the point source case, and the number of magnification peaks may differ for different source sizes.
\begin{figure}
\includegraphics[width=\linewidth]{lkv_geo.png}
\vspace{-0.5cm}
\caption{The dashed magenta line shows the source trajectory for which the light curves will be shown in Fig. \ref{fig:lkv} for four source sizes indicated by three circles and a dot (for a point source) at the upper right. The red curve shows the caustics.}
\label{fig:lkv_geo}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{lkv.png}
\vspace{-0.5cm}
\caption{Light curves corresponding to the trajectory indicated in Fig. \ref{fig:lkv_geo}
for four source sizes, $\rho = 0, 0.01, 0.05, 0.15$ (black, green, gold, and blue). The time starts from HJD $-$ 2450000 $=$ 7430 to HJD $-$ 2450000 $=$ 7510, and $\mu$ is the magnification. Notice that as the source size increases, the values of magnification peaks usually decrease.
}
\label{fig:lkv}
\end{figure*}
\textcolor{mycolor}{
We show an example of a light curve of a limb-darkened source in Fig. \ref{fig:lkv_limb}. The source radius $\rho = 0.01$, the limb-darkening coefficient $\Gamma = 0.51$. To test our method, we compare our results with those from ray-shooting. The rays are uniformly generated in the lens plane, with $1.6\times 10^9$ rays per $\theta_{\rm E}^2$. Thus, without lensing, there will be $\sim 5\times 10^5$ rays inside the source boundary. We make comparisons both with and without limb-darkening. The top panel of Fig. \ref{fig:lkv_limb} shows \textcolor{mycolor2}{two light curves calculated using our method for the uniform brightness (blue) and limb-darkened (red) cases. The second and third panels show the relative error of magnification of our method ($\mu_{\text{ours}}$) relative to the result from ray-shooting ($\mu_{\text{ray}}$) for the uniform brightness and limb-darkened cases, respectively.}
In both cases, the relative errors are $\sim5 \times 10^{-5}$. The bottom panel shows the residual of the magnification of the limb-darkened star relative to that of the uniform brightness star using the ray-shooting results, i.e., $(\mu_{\text{limb}}-\mu_{\text{uni}})/\mu_{\text{uni}}$, which shows that the limb-darkening deviation mainly happens during caustic crossings. For example, during the caustic entrance (HJD $-$ 2450000 $\sim$ 7480), the limb of the star intersects with the caustic, and the magnification for the limb-darkened star is less than that for the uniform brightness star, because the surface brightness of a limb-darkened star decreases gradually from the centre to the edge.
}
\textcolor{mycolor}{
Each light curve in Fig. \ref{fig:lkv_limb} contains 500 points. If no limb-darkening is involved, it takes $\sim 1$ CPU minute to calculate the light curve using our code. If limb-darkening is considered, it takes $\sim 20$ CPU minutes. This is due to the need to use $15$-$30$ annuli to reduce the modelling error of limb-darkening light curves to $\sim 5\times 10^{-5}$. In practice, the computing speed can be adjusted by changing the accuracy goal, and by avoiding unnecessary calculations; e.g., from HJD $-$ 2450000 $=$ 7470 to HJD $-$ 2450000 $=$ 7480 in Fig. \ref{fig:lkv_limb}, limb-darkening calculations are unnecessary.
}
\begin{figure*}
\includegraphics[width=\linewidth]{lkv_limb.png}
\vspace{-0.5cm}
\caption{Example light curves for the green source in Fig. \ref{fig:lkv_geo}. $\rho = 0.01$. The time starts from HJD $-$ 2450000 $=$ 7470 to 7510, and $\mu$ is the magnification. The top panel shows \textcolor{mycolor2}{two light curves calculated using our method for the uniform brightness (blue) and limb-darkened (red) cases. The second and third panels show the relative error of magnification of our method ($\mu_{\text{ours}}$) to the result from ray-shooting ($\mu_{\text{ray}}$) for the uniform brightness and limb-darkened cases, respectively.} In both uniform brightness and limb-darkened cases, relative errors of magnification are $\sim 5\times 10^{-5}$. The bottom panel shows the residual of magnification of limb-darkened star to the magnification of uniform brightness star, i.e., $(\mu_{\text{limb}}-\mu_{\text{uni}})/\mu_{\text{uni}}$.
}
\label{fig:lkv_limb}
\end{figure*}
\subsection{Magnification maps}
\textcolor{mycolor}{Since the strategy we adopted to model limb-darkening light curves is based on the magnification calculation of uniform brightness stars, we compare the magnification map from our method with both ray-shooting and VBBinaryLensing results for a uniform brightness star.}
\subsubsection{Comparison with ray-shooting}
We show one magnification map generated using our method, and compare it with one generated using ray-shooting to test the accuracy of our computation. The triple lens parameters (other than $\rho$) are the same as in \S \ref{sec:topo}. Since the ray-shooting method is more suitable for a large source radius, we choose \textcolor{mycolor}{$\rho = 0.01$. The number density of rays shot is \textcolor{mycolor2}{$2.95\times 10^9$} per $\theta_{\rm E}^2$}. The left panel of Fig. \ref{fig:map} shows the magnification map generated by our method. The right panel of Fig. \ref{fig:map} shows the relative error of magnification compared with the ray-shooting result, which is of the order of $10^{-4}$, \textcolor{mycolor}{with the maximum relative error of magnification (absolute value) being \textcolor{mycolor2}{$5.9\times 10^{-5}$}.}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{map_compare_raysht.png}
\vspace{-0.5cm}
\caption{Triple lens magnification ($\mu$) map \textcolor{mycolor}{of a uniform brightness star} generated by our method (left panel), and the relative error \textcolor{mycolor}{of magnification} between the results from our method and ray-shooting (right panel); the source radius is \textcolor{mycolor}{$\rho = 0.01$. The \textcolor{mycolor2}{yellow curves in both panels show the caustics}. Each map is $100\times 60$ pixels. The colourbar scale goes from \textcolor{mycolor2}{$-6\times 10^{-5}$ to $6\times 10^{-5}$} in the right panel; the maximum error (absolute value) is \textcolor{mycolor2}{$5.9\times 10^{-5}$}.} The triple lens parameters (other than $\rho$) are the same as in \S \ref{sec:topo}. }
\label{fig:map}
\end{figure*}
In generating our magnification maps, \textcolor{mycolor}{obtaining the polynomial coefficients from the lens equation takes 24$\%$ of the computation time, solving the complex polynomials 59$\%$, the initial sampling on the source boundary 4.7$\%$, and checking whether a solution is true 2.7$\%$. The rest (9.6$\%$) of the computation time is mainly spent on obtaining continuous image boundaries.}
\subsubsection{Comparison with VBBinaryLensing}
We have applied our code to the binary lens case, and compared our code to the VBBinaryLensing package \citep{bozza2018vbbinarylensing}. \textcolor{mycolor2}{We used MulensModel\footnote{\href{https://github.com/rpoleski/MulensModel}{https://github.com/rpoleski/MulensModel}} to calculate the full finite source magnification. To obtain a suitable run for comparison purposes, the accuracy goal (VBBL.Tol) is set to $10^{-5}$.}
We choose the binary lens separation $s = 0.8$, mass ratio $q = 0.1$, and source radius \textcolor{mycolor}{$\rho = 0.01$}. The results are shown in Fig. \ref{fig:mapVBBL}. The left panel shows the magnification map generated by our method, and the right panel shows the relative error of magnification compared with the result from the VBBinaryLensing package. \textcolor{mycolor}{We note that, although both software packages use contour integration to obtain the image areas, this does not imply that the resultant magnification maps will be exactly the same. Although both use polygons to approximate the source and image boundaries, the sampling strategies are different. In addition, the stopping criterion of Bozza's algorithm is more optimised, being controlled by error estimators. Even so, the maximum relative error is $3.1\times 10^{-4}$.} Overall, we find our magnification calculating code to be slower by a factor of $\sim 15$ compared to Bozza's package for binary lens systems. This is mostly due to the more efficient, error-estimator-driven sampling of points on the limb of the source star in their package.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{map_compare_vbbl.png}
\vspace{-0.7cm}
\caption{Binary lens magnification ($\mu$) map \textcolor{mycolor}{of a uniform brightness star} generated by our method (left panel), and the relative error \textcolor{mycolor}{of magnification} between the results from our method and the VBBinaryLensing package (right panel). Binary lens separation $s = 0.8$, mass ratio $q = 0.1$, and source radius \textcolor{mycolor}{$\rho = 0.01$. The yellow curve in the left panel shows the caustics. Each map is $128\times 128$ pixels. The colourbar scale goes from $-3\times 10^{-4}$ to $3\times 10^{-4}$ in the right panel; the maximum relative error (absolute value) is $3.1\times 10^{-4}$.} \textcolor{mycolor2}{The VBBinaryLensing package is used to calculate the finite source magnification as a baseline. Other than the region close to the caustics, our code uses the point source approximation, and the relative error of this approximation is $\sim -3\times 10^{-5}$.}}
\label{fig:mapVBBL}
\end{figure*}
\subsection{High cadence light curves with adaptive sampling}
In our method, which is based on Stokes' theorem, the light curve calculation is time-consuming, since we have to solve the lens equation many times and need many points to connect the image boundaries. To remedy this, we introduce another refinement to speed up the light curve calculation with adaptive sampling.
As shown in Fig. \ref{fig:lkv}, the light curve of a typical event is globally smooth except when approaching/crossing \textcolor{mycolor}{the caustic}. Thus, we can sample the most important points (usually places with large \textcolor{mycolor}{slopes}) in the light curve adaptively, and perform interpolation elsewhere to obtain the full light curve. In this way, we can reduce the number of finite source computations substantially.
The adaptive sampling procedure is performed as follows: we first compute the magnifications $A_1, A_2$ at points $p_1,p_2$; we then compute the magnification $A_c$ at the mid-point, $p_c$, and compare it with $A_{\rm mid} = (A_1+A_2)/2$. If the difference between $A_c$ and $A_{\rm mid}$ is larger than a threshold, e.g., $\epsilon = 5\times 10^{-4}$, then we add $p_c$ to our sampled points and repeat the procedure for $p_1, p_c$ and $p_c, p_2$. This process is repeated until the error is smaller than $\epsilon$.
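A minimal Python sketch of this bisection rule follows; `mag` stands in for the expensive finite source magnification function, the names are ours, and a recursion-depth guard (an addition of ours) prevents runaway refinement at discontinuities.

```python
# Sketch of the adaptive light-curve sampler described above.

def adaptive_sample(mag, t1, t2, eps=5e-4, depth=20):
    """Return (t, A) pairs sampling mag on [t1, t2] so that the midpoint
    of each kept interval deviates from linear interpolation by <= eps."""
    A1, A2 = mag(t1), mag(t2)
    pts = [(t1, A1)]
    _refine(mag, t1, A1, t2, A2, eps, depth, pts)
    pts.append((t2, A2))
    return pts

def _refine(mag, t1, A1, t2, A2, eps, depth, pts):
    tc = 0.5 * (t1 + t2)
    Ac = mag(tc)
    # Bisect only where the midpoint check fails (the rule in the text).
    if depth > 0 and abs(Ac - 0.5 * (A1 + A2)) > eps:
        _refine(mag, t1, A1, tc, Ac, eps, depth - 1, pts)
        pts.append((tc, Ac))
        _refine(mag, tc, Ac, t2, A2, eps, depth - 1, pts)
```

On a smooth curve the recursion terminates quickly with uniformly fine intervals, while near caustic crossings (large local curvature) it keeps subdividing and concentrates the samples there.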
Fig. \ref{fig:adalkv} shows one example of this adaptive sampling procedure. The full light curve is uniformly sampled with $10^4$ points, while for the adaptively sampled light curve only 314 points are needed. With linear interpolation, we recover the full light curve with a relative error $\sim 5\times 10^{-4}$, and the CPU time is reduced by a factor of $\sim 3$. Notice that the speedup is not simply the ratio of the number of points ($ 10^4/314 \approx 30$). This is because the adaptively sampled points are mostly at places with large \textcolor{mycolor}{slopes} (e.g., when the source is near caustics) in the light curve, where more time is needed to calculate their magnifications with the finite source effect. \textcolor{mycolor}{We note that higher-order spline interpolations converge faster globally; however, at the entrance and exit of caustics, where the light curve is steeper, higher-order interpolation yields oscillations, known as the ``Runge phenomenon'' \citep{runge1901empirical}.}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{lkvadp.png}
\vspace{-0.6cm}
\caption{Result of adaptive sampling of the light curve (top panel): the full light curve (green solid line) uniformly samples $10^4$ points, corresponding to a sampling of $5.76\times 10^{4}$ minutes. By adaptive sampling, merely 314 points (red dots) are sampled, and the linearly interpolated light curve (blue solid line) deviates from the full light curve within a relative error of $5\times 10^{-4}$ (bottom panel); $\mu$ is the magnification.}
\label{fig:adalkv}
\end{figure*}
\section{Summary}
Currently, the modelling of microlensing light curves of triple lens events often uses methods based on perturbation or ray-shooting. \textcolor{mycolor}{We have developed a method based on establishing continuous image boundaries and contour integration to calculate triple microlensing light curves. Before this work, the contour integration method had not been developed beyond binary lens systems. We first implemented a procedure to obtain the magnification of a source star with uniform brightness, and then extended the procedure to handle limb-darkening.} It is efficient for small source sizes, and complements the ray-shooting method. \textcolor{mycolor}{
Our approach has two advantages: 1) Unlike the contour plot method, we obtain \textcolor{mycolor2}{image boundaries} accurately from solving lens equations. 2) It starts from successively sampled points on the source boundary, which corresponds to successive image tracks. From these image tracks, which contain both true and \textcolor{mycolor2}{false images}, we identify the true image boundaries and calculate the enclosed areas. Our method is a general method which can be applied to any multiple lens system, not just to triple lens systems.
} Ray-shooting is efficient for large finite source sizes, due to the Poisson noise in the number of rays each pixel collects. Our independent modelling code is available for cross-checking with other methods.
We have tested our method on light curves and magnification maps, and compared the results with the ray-shooting method. Due to the need to connect continuous closed image boundaries, our method requires more CPU time when the source radius is large (e.g., $\rho = 0.01$). We have implemented an adaptive sampling scheme in the light curve calculation. This may be relevant, for example, for the KMTNet \citep{kim2010technical, atwood2012design, kim2016kmtnet}: its cadence can be as short as 10 minutes ($10m/t_{\rm E}\sim 3\times 10^{-4}$ for $t_{\rm E}=20$ days). In such cases, there is no need to calculate the magnification for every epoch. In practice, the speedup will likely depend on the source size and the number of data points in the light curve.
Our code is publicly available \footnote{\href{https://github.com/rkkuang/triplelens}{https://github.com/rkkuang/triplelens}}, including both the C$++$ source code and materials to build a Python module ``TripleLensing'' to call the C$++$ code from Python scripts. We plan to improve further the efficiency of our code and apply it to real triple lens events.
\section{Acknowledgements}
\textcolor{mycolor}{We thank the anonymous referee for providing \textcolor{mycolor2}{many} helpful comments and suggestions.} We thank Valerio Bozza, Jan Skowron, and Andrew Gould for making their codes publicly available. This work is partly supported by the National Science Foundation of China (Grant No. 11821303, 11761131004 and 11761141012 to SM).
\section{Data Availability}
The data generated as part of this project may be shared on a reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
The field of Numerical Relativity (NR) has progressed at a remarkable
pace since the breakthroughs of 2005~\cite{Pretorius:2005gq,
Campanelli:2005dd, Baker:2005vv} with the first successful fully
non-linear dynamical numerical simulation of the inspiral, merger, and
ringdown of an orbiting black-hole binary (BHB) system. In
particular, the `moving-punctures' approach, developed independently
by the NR groups at NASA/GSFC and at RIT, has now become the most
widely used method in the field and was successfully applied
to evolve generic BHBs. This approach regularizes a singular
term in the space-time metric and allows the black holes (BHs) to
move across the
computational domain. Previous methods used special coordinate
conditions that kept the black holes fixed in space, which introduced
severe coordinate distortions that caused orbiting-black-hole-binary
simulations to crash. Recently, the generalized harmonic approach
method, first developed by Pretorius~\cite{Pretorius:2005gq}, has also
been successfully applied to accurately evolve generic BHBs for tens
of orbits with the use of pseudospectral codes~\cite{Scheel:2008rj,
Szilagyi:2009qz}.
Since then, BHB physics has rapidly matured into a critical tool for
gravitational wave (GW) data analysis and astrophysics. Recent
developments include: studies of the orbital dynamics of spinning
BHBs~\cite{Campanelli:2006uy, Campanelli:2006fg, Campanelli:2006fy,
Herrmann:2007ex, Marronetti:2007ya, Marronetti:2007wz, Berti:2007fi},
calculations of recoil velocities from the merger of unequal mass
BHBs~\cite{Herrmann:2006ks, Baker:2006vn, Gonzalez:2006md}, and the
surprising discovery that very large recoils can be
acquired by the remnant of the merger of two spinning BHs
~\cite{Herrmann:2007ac,
Campanelli:2007ew, Campanelli:2007cga, Lousto:2008dn, Pollney:2007ss,
Gonzalez:2007hi, Brugmann:2007zj, Choi:2007eu, Baker:2007gi,
Schnittman:2007ij, Baker:2008md, Healy:2008js, Herrmann:2007zz,
Herrmann:2007ex, Tichy:2007hk, Koppitz:2007ev, Miller:2008en},
empirical models relating the final mass and spin of
the remnant with the spins of the individual BHs
~\cite{Boyle:2007sz, Boyle:2007ru, Buonanno:2007sv, Tichy:2008du,
Kesden:2008ga, Barausse:2009uz, Rezzolla:2008sd, Lousto:2009mf}, and
comparisons of waveforms and orbital dynamics of
BHB inspirals with post-Newtonian (PN)
predictions~\cite{Buonanno:2006ui, Baker:2006ha, Pan:2007nw,
Buonanno:2007pf, Hannam:2007ik, Hannam:2007wf, Gopakumar:2007vh,
Hinder:2008kv}.
One of the important applications of NR is the generation of waveform
to assist GW astronomers in their search and analysis of GWs from the
data collected by ground-based interferometers, such as
LIGO~\cite{LIGO3} and VIRGO~\cite{Acernese:2004ru}, and future
space-based missions, such as LISA~\cite{lisa}. BHBs are particularly
promising sources, with the final merger event producing a strong
burst of GWs at a luminosity of $L_{GW}\sim 10^{22}L_{\odot}$\footnote{
This luminosity estimate is independent of the binary mass and takes
into account that $3-10\%$ of the total mass $M$
of the binary is radiated over a time interval of $\sim100M$
\cite{Lousto:2009mf}.}, greater
than the combined luminosity of all stars in the observable universe.
The central goal of the field has been to develop the theoretical
techniques, and perform the numerical simulations, needed to explore
the highly-dynamical regions and thus generate GW signals from a
representative sample of the full BHB parameter space. Accurate
waveforms are important to extract physical information about the
binary system, such as the masses of the components, BH spins, and
orientation. With advanced LIGO scheduled to start taking data in
2014-2015, there is a great urgency to develop these techniques in
short order. To achieve these goals, the numerical relativity and data
analysis communities formed a large collaboration, known as NINJA, to
generate, analyze, and develop matched filtering techniques for
generic BHB waveforms. A wide range of currently available
gravitational waveform signals were injected into a simulated data
set, designed to mimic the response of the Initial LIGO and Virgo
gravitational-wave detectors, and the efficiency of current search
methods in detecting and measuring their parameters were successfully
tested~\cite{Aylott:2009tn, Aylott:2009ya}. The next step will be a
more detailed study of the sensitivity of current search pipelines to
BHB waveforms in real data.
In order to create effective templates for GW data analysis, we need
to cover the 7-dimensional parameter space of possible BHB
configurations, including arbitrary mass ratios (1d) $q=m_1/m_2$ and
arbitrary orientation and magnitudes of the individual BH spins (6d),
in an efficient way. There are two important challenges here. The
first challenge is to adapt the numerical techniques developed for
similar-mass, low-spin BHBs to tackle BHBs with extreme mass ratios,
i.e.\ $q<1/10$ (see Refs.~\cite{Lousto:2008dn, Gonzalez:2008bi, Lousto:2010tb})
and, independently, the highly-spinning regime. In the
latter regime, the binaries will precess strongly during the final
stages of inspiral and merger, leading to large recoils and
modulations in the waveform. These two regions are numerically
highly-demanding due to the high resolution required for
accurate simulations.
A second challenge is to efficiently generate the waveforms
numerically. Ideally one would like to have a bank of templates with
millions of waveforms, but the computational expense of each
individual simulation makes this unrealistic.
At RIT, we have been particularly interested in studying spinning BHBs
and the effects of spin on the orbital dynamics, waveforms, and
remnant BHs. In 2006 the RIT group began a series of analyses of
spinning BHBs, with the goal of evolving a truly generic binary. Our
studies began with the `orbital hangup'
configurations~\cite{Campanelli:2006uy}, where the spins are aligned
or counter-aligned with the orbital angular momentum, and display
dramatic differences in the orbital dynamics, see Fig~\ref{fig:hangup}.
In this study we
were also able to provide strong evidence that the merger of two BHs will
produce a submaximal remnant (i.e.\ cosmic censorship is obeyed).
We then analyzed spin-orbit
effects~\cite{Campanelli:2006fg} and found that they were too weak
near merger to force a binary to remain in a corotational state.
Afterwards, we analyzed spin-precession and spin
flips~\cite{Campanelli:2006fy}. With this experience, we were able to
begin evolving `generic' binaries, that is, binaries with unequal and
unaligned spins, and mass ratios differing from
$1:1$~\cite{Campanelli:2007ew}. Remarkably, we found for a generic
binary that the gravitational recoil out of the orbital plane, which
is a function of the in-plane spin, was potentially much larger than
any in-plane recoil. In fact, the measured recoil for our `generic'
configuration was actually as large as the largest predicted in-plane
recoil (which assumed maximal spins perpendicular to the orbital
plane)~\cite{Herrmann:2007ac, Koppitz:2007ev} (see
Fig.~\ref{fig:power} for a plot of the radiated power per unit solid
angle for a `generic' BHB). Based on
these results, we were able to predict a recoil of thousands of km/s
for equal-mass, equal and anti-aligned spins (with spins entirely in
the orbital plane). Based on our suggested configuration, the authors
of Ref~\cite{Gonzalez:2007hi} evolved a binary with a recoil
of 2500\ km/s. However, our prediction indicated that the recoil can
vary sinusoidally with the angle that the spins make with respect to
the initial linear momentum of each hole. After completing a study of
these superkick configurations with various spin angles, we were able
to show that the maximum recoil was, in fact, much closer to
4000\ km/s. Later on, we evolved a set of challenging superkick
configurations, with spins $S_i/m_H^2=0.92$, where $m_H$ is the horizon
mass, and found a recoil of 3300\ km/s~\cite{Dain:2008ck}.
\begin{figure}
\begin{center}
\includegraphics[width=4in]{hangup.pdf}
\caption{The hangup effect in the waveform for binaries with spins
aligned and counter-aligned with the orbital angular momentum. In each
case the binaries started out at the same orbital frequency.}
\label{fig:hangup}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[width=2in]{d2Edomegadt_a.png}
\includegraphics[width=2in]{d2Edomegadt_b.png}
\includegraphics[width=2in]{d2Edomegadt_c.png}
\includegraphics[width=2in]{d2Edomegadt_d.png}
\includegraphics[width=2in]{d2Edomegadt_e.png}
\includegraphics[width=2in]{d2Edomegadt_f.png}
\caption{The radiated power per unit solid angle for a generic BHB
configuration at different times. The asymmetry in the
radiation leads to the instantaneous recoil.}
\label{fig:power}
\end{figure}
The cost of running a numerical simulation for many orbits, and in
particular the cost of running a simulation with high spins and mass
ratios that differ significantly from $1:1$ means that we need to use
hybrid analytic / numerical waveforms to model the full inspiral
waveform. Combining post-Newtonian~\cite{Blanchet:2002av}
(or Effective-one-Body~\cite{Buonanno:1998gg}) waveforms
and full numerical waveforms seems
to be an ideal solution to this problem, but the modeling of even
relatively distant (from an NR point of view) BHBs using PN is
still an unsolved problem because the PN equations of motion are only
known up to 3.5PN order, which as we show in Sec.~\ref{sec:pn_nr}
is not accurate enough to evolve close, highly-precessing, BHBs.
\section{Inspiral and Merger of Generic Black-Hole Binaries}
\label{sec:pn_nr}
In \cite{Campanelli:2008nk}, we compared the numerical relativity (NR)
and post-Newtonian (PN) waveforms of a generic BHB,
i.e., a binary with unequal masses and unequal, non-aligned,
precessing spins. Comparisons of numerical simulations with
post-Newtonian ones have several benefits aside from the theoretical
verification of PN. From a practical point of view, one can directly
propose a phenomenological description and thus make predictions in
regions of the parameter space still not explored by numerical
simulations. From the theoretical point of view, an important
application is to have a calibration of the post-Newtonian error in
the last stages of the binary merger.
To derive the PN gravitational waveforms, we start from the
calculation of the orbital motion of binaries in the post-Newtonian
approach. Here we use the ADM-TT gauge, which is the closest to our
quasi-isotropic numerical initial data coordinates. We use the PN
equations of motion (EOM) based on~\cite{Buonanno:2005xu,
Damour:2007nc, Steinhoff:2007mb}. The Hamiltonian is given
in~\cite{Buonanno:2005xu}, with the additional terms, i.e., the
next-to-leading order gravitational spin-orbit and spin-spin couplings
provided by~\cite{Damour:2007nc, Steinhoff:2007mb}, and the
radiation-reaction force given in~\cite{Buonanno:2005xu}.
The Hamiltonian we use here is given by
\begin{eqnarray}
H &=& H_{\rm O,Newt} + H_{\rm O,1PN} + H_{\rm O,2PN} + H_{\rm O,3PN}
\nonumber \\ &&
+ H_{\rm SO,1.5PN} + H_{\rm SO,2.5PN}
+ H_{\rm SS,2PN} + H_{\rm S_1S_2,3PN} \,,
\label{eq:H}
\end{eqnarray}
where the subscript O, SO and SS denote the pure orbital
(non-spinning) part, spin-orbit coupling and spin-spin coupling,
respectively, and Newt, 1PN, 1.5PN, etc., refer to the perturbative
order in the post-Newtonian approach. The $H_{\rm
S_1S_1(S_2S_2),3PN}$ component of the Hamiltonian was recently derived
in~\cite{Steinhoff:2008ji}. We should note that Porto and Rothstein
also derived higher-order spin-spin interactions using effective field
theory
techniques~\cite{Porto:2006bt,Porto:2007tt,Porto:2008tb,Porto:2008jj}.
We obtain the conservative part of the orbital and spin EOMs from this
Hamiltonian using the standard techniques of the Hamiltonian
formulation. For the dissipative part, we use the non-spinning
radiation reaction results up to 3.5PN, as well as the leading
spin-orbit and spin-spin coupling to the radiation
reaction~\cite{Buonanno:2005xu}.
Although not used here, higher-order corrections to the spin-dependent radiation-reaction terms were derived
in~\cite{Mikoczi:2005dn, Blanchet:2006gy, Racine:2008kj, Arun:2008kb}
and can be applied to our method to improve the prediction for the BH
trajectories (and hence the waveform).
This PN evolution is used both to produce very low eccentricity
orbital parameters at $r\approx11M$ (the initial orbital separation
for the NR simulations) from an initial orbital separation of $50M$,
and to evolve the orbit from $r\approx11M$. We use these same
parameters at $r\approx11M$ to generate the initial data for our NR
simulations. The initial binary configuration at $r=50M$ had the mass
ratio $q=m_1/m_2 = 0.8$, $\vec S_1/m_1^2 = (-0.2, -0.14,0.32)$, and
$\vec S_2/m_2^2 =(-0.09, 0.48, 0.35)$.
We then construct a hybrid PN waveform from the orbital motion by using
the following procedure. First we use the 1PN accurate waveforms
derived by Wagoner and Will~\cite{Wagoner:1976am} (WW waveforms) for a
generic orbit. By using these waveforms, we can introduce effects due
to the black-hole spins, including the precession of the orbital
plane. On the other hand, Blanchet {\it et
al}.~\cite{Blanchet:2008je} recently obtained the 3PN waveforms (B
waveforms) for non-spinning circular orbits. We combine these two
waveforms to produce a hybrid PN waveform. We note that there are no
significant gauge ambiguities arising from combining the WW and B
waveforms in this way because at 1PN order the harmonic and ADM gauges
are equivalent (and hence the WW waveforms are the same in the two
gauges) and the B waveforms are given in terms of gauge invariant
variables. Also, it should be noted that we calculate the spin
contribution to the waveform through its effect on the orbital motion
directly in the WW waveforms and indirectly in B waveforms through the
inclination of the orbital plane.
For the NR simulations we calculate the Weyl scalar $\psi_4$ and
then convert the $(\ell,m)$ modes of $\psi_4$ into $(\ell,m)$
modes of $h = h_{+} - i h_{\times}$.
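This conversion amounts to a double time integration, assuming the common asymptotic convention $\psi_4 = \ddot h$ with $h = h_+ - i h_\times$. A minimal fixed-frequency-integration sketch in the Fourier domain is given below; the cutoff frequency $f_0$ is a tunable parameter of this illustration and is not specified in the text.

```python
import numpy as np

def psi4_to_h(t, psi4, f0):
    """Double time integration of psi4 in the Fourier domain
    (fixed-frequency integration): frequencies with |f| < f0 are clipped
    to f0 to suppress the secular drift of naive time-domain integration."""
    dt = t[1] - t[0]
    f = np.fft.fftfreq(len(t), d=dt)
    fc = np.where(np.abs(f) < f0, f0 * np.sign(f + (f == 0)), f)  # keep sign of f
    # psi4 = d^2 h / dt^2  =>  h(f) = -psi4(f) / (2 pi f)^2
    return np.fft.ifft(-np.fft.fft(psi4) / (2 * np.pi * fc) ** 2)
```

For a pure oscillation well above the cutoff, this recovers $h$ from $\psi_4$ essentially exactly.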
To compare PN and numerical waveforms, we need to determine
the time translation $\delta t$ between the numerical time and
the corresponding point on the PN trajectory, that is,
the time it takes for the signal to reach the extraction sphere
($r=100M$ in our numerical simulation).
We determine this by finding the time translation near $\delta t=100M$
that maximizes the agreement of the early time waveforms
in the $(\ell=2,m=\pm2)$, $(\ell=2,m=\pm1)$, and
$(\ell=3,m=\pm3)$ simultaneously. We find $\delta t \sim 112M$, in
good agreement with the expectation for our observer at $r=100M$.
Since our PN waveforms are given uniquely
by a binary configuration, i.e., an actual location of the PN particle,
we do not have any time shift or phase modification
other than this retardation of the signal.
Note that other methods, which are not based on the particle locations, have the freedom to choose a phase factor.
To quantitatively compare the modes of the PN waveforms
with the numerical waveforms we define the overlap,
or matching criterion, for the real and imaginary parts of
each mode as
\begin{eqnarray}
\label{eq:match}
M_{\ell m}^{\rm \Re/\Im} = \frac{<h^{\rm Num,\Re/\Im}_{\ell m},
h^{\rm PN,\Re/\Im}_{\ell m}>}
{\sqrt{<h^{\rm Num,\Re/\Im}_{\ell m},h^{\rm Num,\Re/\Im}_{\ell m}>
<h^{\rm PN,\Re/\Im}_{\ell m},h^{\rm PN,\Re/\Im}_{\ell m}>}} \,,
\end{eqnarray}
where
$h^{\rm \Re/\Im}_{\ell m}$ are defined by the real and imaginary parts
of the waveform mode $h_{\ell m}$, respectively,
and the inner product is calculated by
$
<f,g> = \int_{t_1}^{t_2} f(t) g(t) dt
$.
Hence, $M_{\ell m}^{\Re/\Im} = 1$ indicates that the given
PN and numerical mode agree.
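The matching criterion above is straightforward to evaluate numerically. A minimal sketch for uniformly sampled real-valued mode data follows; the simple Riemann-sum approximation of the inner product is our simplification.

```python
import numpy as np

def overlap(h_num, h_pn, dt):
    """Normalized overlap <f,g> / sqrt(<f,f><g,g>) of two sampled time
    series, with the inner product <f,g> = int f(t) g(t) dt approximated
    by a Riemann sum on a uniform grid of spacing dt."""
    inner = lambda f, g: np.sum(f * g) * dt
    return inner(h_num, h_pn) / np.sqrt(inner(h_num, h_num) * inner(h_pn, h_pn))
```

An overlap of exactly 1 indicates perfect agreement; a small dephasing between otherwise identical modes reduces it below 1.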
We analyzed the long-term generic waveform produced
by the merger of unequal mass, unequal spins, precessing black holes,
and found a good initial agreement of waveforms
for the first six cycles, with overlaps of over $98\%$
for the $(\ell=2, m=\pm2)$ modes, over $90\%$
for the $(\ell=2, m=\pm1)$ modes, and over $90\%$
for the $(\ell=3, m=\pm3)$ modes.
The agreement degrades as
we approach the more dynamical region of the late merger and plunge.
While our approach appears promising, there are some remaining
issues. The PN gravitational waveforms
used here do not include direct spin effects (the spin
contribution to the waveform arises only through its effect on the orbital
motion). Recently, direct spin effects on the waveform were analyzed
in~\cite{Arun:2008kb}.
In Fig.~\ref{fig:pn_nr_w} we show the $(\ell=3,m=3)$ mode of $\psi_4$.
A comparison of the PN and NR waveforms shows that there are
significant errors in the 2.5PN approximate waveform that are
significantly reduced by going to 3.5PN. However, it appears that
still higher-order corrections are needed in order to accurately model
the waveform using PN at an orbital radius of $r=11M$. In
Fig.~\ref{fig:pn_nr_r} we show the orbital separation versus time.
Here, as well, higher-order PN corrections are important.
\begin{figure}
\begin{center}
\includegraphics[width=4in]{PN_P4_Comp.pdf}
\caption{The amplitude of the $(\ell=3,m=3)$ mode of $\psi_4$ for
the `generic' binary configuration using the full numerical waveform,
as well as waveforms derived from 2.5PN and 3.5PN EOMs.
Note the much better agreement
of the 3.5PN waveform, indicating that higher-order PN terms are
important to the waveform during the late-inspiral phase. }
\label{fig:pn_nr_w}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{PN_R_Comp.pdf}
\caption{The orbital separation versus time for the `generic' binary
configuration using the full numerical trajectories, as well as the
trajectories derived from 2.5PN and 3.5PN EOMs. Note the much better
agreement of the 3.5PN trajectories, and that 3.5PN captures the
eccentricity of this configuration much better than 2.5PN,
indicating that higher-order PN terms
are important to orbital dynamics. }
\label{fig:pn_nr_r}
\end{center}
\end{figure}
\subsection{Hybrid Waveforms}
To obtain a continuous and differentiable hybrid PN / NR waveform,
we use a smoothing function to transition from the purely
PN to purely NR parts of the waveform of the form
\begin{eqnarray}
h &=& (1-F(x)) h^{PN} + F(x) h^{Num} \,,
\end{eqnarray}
where for example, we can use a simple polynomial,
\begin{eqnarray}
F(x) &=& x^3 (6 x^2-15 x+10) \,.
\end{eqnarray}
This guarantees $C^2$ behavior at $x=0$ and $x=1$.
In~\cite{Ajith:2007qp} the authors chose $F(x)=x$,
which creates a discontinuity
in the derivatives of the waveforms (especially in $\psi_4$),
and also introduced an amplitude scaling factor to correct the amplitudes.
Note that here we do not have any free parameters
(we allow the time translation, here $\delta t\sim 112M$, to
vary by $\sim5\%$ about the retardation time ($T_{ret}\sim109M$)
of the observer location).
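A minimal sketch of this blending for sampled waveforms is given below; the window boundaries used here are illustrative, not the values from the text.

```python
import numpy as np

def blend(h_pn, h_num, t, t_start, t_end):
    """C^2 hybridization h = (1-F) h_PN + F h_num with the quintic smoothstep
    F(x) = x^3 (6 x^2 - 15 x + 10), applied over the window [t_start, t_end].
    F and its first two derivatives vanish at x=0 and equal (1, 0, 0) at x=1."""
    x = np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
    F = x**3 * (6 * x**2 - 15 * x + 10)
    return (1 - F) * h_pn + F * h_num
```

Because $F'(x) = 30x^2(1-x)^2 \ge 0$, the transition is monotone, and the hybrid is pure PN before the window and pure NR after it.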
Figure~\ref{fig:HYB} shows the hybrid waveform generated by
the NR and PN waveforms for the binary discussed in the above section.
Here, we use a half wavelength for the smoothing
interval which starts at $t=226.78875M$, and
the time translation $\delta t = 112.64625M$ is considered.
\begin{figure}[ht]
\center
\includegraphics[width=4in]{HYB22.pdf}
\caption{The real part of the $\ell=2,\,m=2$ mode of the hybrid waveform.
This is created by matching the NR waveform to the waveform
derived from 3.5PN EOMs.}
\label{fig:HYB}
\end{figure}
\section{Discussion}
The remarkable progress in both analytic and fully non-linear
numerical simulations of BHBs has made it possible to accurately model
the inspiral waveform for a generic black-hole binary by combining
both post-Newtonian waveforms from large separations and
smoothly attaching this waveform to the corresponding fully non-linear
waveform produced by the binary during the late-inspiral.
We provide an example of one such hybrid waveform. Our
waveform is available for download from
\url{http://ccrg.rit.edu/downloads/waveforms}.
We found that 3.5PN produces a markedly better predicted waveform
than 2.5PN, but there were still significant errors in the 3.5PN
waveform for separations $r<11M$. However, numerical simulations can
start with larger separations (e.g.\ the 16-orbit simulation described
in~\cite{Scheel:2008rj}) and there is significant progress in
computing higher-order PN corrections. Hence, we expect that
highly-accurate hybrid waveforms for generic binaries will soon be
feasible.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Heavy-tailed sensor array data arise, e.g., due to clutter in radar \cite{Gini-Greco2002} and interference in wireless links \cite{Clavier2021}.
Such array data demand statistically robust array processing.
There is a rich literature on statistical robustness
\cite{Huber2011,Hampel2011,Zoubir-Koivunen-Chakhchoukh-Muma2012,Zoubir2018}.
Here we derive, formulate, and investigate a statistically robust approach to Sparse Bayesian Learning (SBL) based on a model for Complex Elliptically Symmetric (CES) array data.
Due to the central limit theorem, noise is most often modeled as Gaussian, and SBL was derived under a joint complex multivariate Gaussian assumption on source signals and noise \cite{Tipping2001}. Direction of arrival (DOA) estimation for plane waves using SBL is proposed in Ref.~\cite[Table~I]{gerstoft2016mmv}.
In some cases the noise contains non-Gaussian outliers; thus it is important to have robust processing so that these unlikely events also give good estimates.
SBL provides DOA estimates based on the sample covariance matrix of the array data sample.
The sample covariance matrix is a sufficient statistic under the jointly Gaussian assumption, but it is not robust against deviations from this assumption \cite{Ollila2003pimrc}.
The SBL approach is flexible through the usage of various priors, although Gaussian priors are most common \cite{gerstoft2016}.
For Gaussian priors this has been approached based on minimization-majorization \cite{Stoica2012} and with expectation maximization (EM) \cite{Wipf2007,Wipf2007beam,Wipf2004,Zhang2011,Liu2012, Zhang2016}.
We estimate the hyperparameters iteratively from the likelihood derivatives using stochastic maximum likelihood\cite{Boehme1985,Jaffer1988,Stoica-Nehorai1995}.
A numerically efficient SBL implementation is available on GitHub \cite{SBL4-github}.
Recent investigations showed that SBL is
lacking in statistical robustness \cite{Ollila2003pimrc,Mecklenbraeuker-Gerstoft-Ollila-WSA2021,Mecklenbraeuker2022icassp}.
A Bayes-optimal algorithm was proposed to estimate DOAs in the presence of impulsive noise from the perspective of SBL in \cite{DaiSo2018}. In the following, we derive robust and sparse Bayesian learning which can be understood as introducing a data-dependent weighting into the sample covariance matrix estimate.
Previously, a direction of arrival (DOA) estimator for plane waves observed by a sensor array based on a complex multivariate Student $t$-distribution array data model was studied.
A qualitatively robust and sparse DOA estimate was derived as Maximum Likelihood (ML) estimate based on this model \cite[Sec.~5.4.2]{Zoubir2018},\cite{Ollila2003pimrc,Mecklenbraeuker-Gerstoft-Ollila-WSA2021}.
Here, we solve the DOA estimation problem from multiple array data snapshots in the SBL framework
\cite{Wipf2007,gerstoft2016} and use the maximum-a-posteriori (MAP) estimate for DOA reconstruction.
We assume a CES array data model with unknown source variances for the formulation of the likelihood function.
To determine the unknown parameters, we maximize a Type-II likelihood (evidence) for this CES array data model
and estimate the hyperparameters iteratively from the likelihood derivatives using stochastic maximum likelihood.
We propose a SBL algorithm for DOA M-estimation which, given the number of sources, automatically estimates the set of DOAs corresponding to non-zero source power from all potential DOAs.
Posing the problem this way, the estimated number of parameters is independent of the number of snapshots, while the accuracy improves with the number of snapshots.
We incorporate priors with potentially strong outliers by allowing various loss functions in the formulation of M-estimators.
This leads to a robust and sparse DOA estimator which is based on the assumption that the array data observations follow a centered (zero-mean) CES distribution with finite second-order moments.
The outline of the paper is as follows: We introduce the notation and the array data model in Sec.~\ref{sec:model}.
Thereafter, we formulate the objective function and specific loss functions used for DOA M-estimator in Sec.~\ref{sec:M-estimation} and describe the proposed algorithm.
Simulation results for DOA estimation are discussed in Sec.~\ref{sec:results}, and we report on the
convergence of the algorithm and the associated run time in Sec.~\ref{sec:convergence}.
\section{Complex elliptically symmetric array data model}
\label{sec:model}
Narrowband waves are observed on $N$ sensors for $L$ snapshots $\Vec{y}_{\ell} $ and the array data is
$\Mat{Y}=[\Vec{y}_1\ldots\Vec{y}_L]\in\mathbb{C}^{N\times L}$.
We model the snapshots $\Vec{y}_{\ell}$ by a scale mixture of Gaussian distributions, which have a stochastic decomposition of the form
\begin{align}
\Vec{y}_{\ell} = \sqrt{\tau_{\ell}} \, \Vec{v}_{\ell}, \text{ with }
\Vec{v}_{\ell} = \Mat{A}\Vec{x}_{\ell} + \Vec{n}_{\ell}, \quad(\ell=1\ldots L) \label{eq:scale-mixture}
\end{align}
where $\tau_{\ell}>0$ is a random variable independent of $\Vec{v}_{\ell}$.
The unknown zero-mean complex source amplitudes are the elements of $\Mat{X}=[\Vec{x}_1\ldots\Vec{x}_L]\in\mathbb{C}^{M\times L}$ where
$M$ is the considered number of hypothetical DOAs on the given grid $\{ \theta_1,\ldots,\theta_M\}$. The source amplitudes are independent across sources and snapshots, i.e.\ $x_{ml}$ and $x_{m'l'}$ are independent for $(m,l) \ne (m',l')$.
If $K$ sources are present in the $\ell$th array data snapshot, the $\ell$th column of $\Mat{X}$ is $K$-sparse and we assume that the sparsity pattern is the same for all snapshots.
The sparsity pattern is modeled by the active set
\begin{align}
\mathcal{M} &=\left\{m\in\{1,\ldots,M\}\,|\,x_{m\ell}\ne0\right\}=\{m_1,\ldots, m_K\}.
\label{eq:mathcal-M}
\end{align}
The noise $\Vec{N}=[\Vec{n}_1\ldots\Vec{n}_L]\in\mathbb{C}^{N\times L}$ is assumed independent identically distributed (iid) across sensors and snapshots, zero-mean, with finite variance $\sigma^2$ for all $n,\ell$.
The $M$ columns of the dictionary $\Mat{A}=[\Vec{a}_1\ldots\Vec{a}_M]\in\mathbb{C}^{N\times M}$ are the replica vectors for all hypothetical DOAs. For a uniform linear array (ULA), the dictionary matrix elements are $A_{nm}=\mathrm{e}^{-\mathrm{j}(n-1)\frac{2\pi d}{\lambda}\sin\theta_m}$ ($d$ is the element spacing and $\lambda$ the wavelength).
The $K$ ``active'' replica vectors are aggregated in
\begin{align}
\Mat{A}_{\mathcal{M}} = [\Vec{a}_{m_1} \ldots\Vec{a}_{m_K}] \in\mathbb{C}^{N\times K},
\end{align}
with its $k$th column vector $\Mat{a}_{m_k}$, where $m_k\in\mathcal{M}$.
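The ULA dictionary and the selection of active columns can be sketched as follows; the grid, array size, and active set used here are illustrative choices, not values from the paper.

```python
import numpy as np

def ula_dictionary(N, theta_grid_deg, d_over_lambda=0.5):
    """Dictionary with A[n, m] = exp(-1j (n-1) 2 pi d/lambda sin(theta_m)),
    for an N-element uniform linear array and the given DOA grid."""
    n = np.arange(N)[:, None]                    # this index is (n-1) in the text
    theta = np.deg2rad(np.asarray(theta_grid_deg, dtype=float))[None, :]
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))

theta_grid = np.arange(-90, 91, 2)               # M = 91 hypothetical DOAs
A = ula_dictionary(N=8, theta_grid_deg=theta_grid)
active = [20, 45, 60]                            # active set {m_1, ..., m_K}
A_M = A[:, active]                               # N x K matrix of active replicas
```

Each column is a unit-modulus steering vector; the column for $\theta=0$ is the all-ones replica.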
The source and noise amplitudes are jointly Gaussian and independent of each other, i.e. $\Vec{x}_{\ell} \sim \mathbb{C}\mathcal{N}_M(\Vec{0},\Mat{\Gamma})$ and $\Vec{n}_{\ell} \sim \mathbb{C}\mathcal{N}_N(\Vec{0},\sigma^2\Mat{I}_N)$. It follows from \eqref{eq:scale-mixture}, that $\Vec{v}_{\ell} \sim \mathbb{C}\mathcal{N}_N(\Vec{0},\Mat{\Sigma})$ with
\begin{align}
\Mat{\Sigma} &= \Mat{A} \Mat{\Gamma} \Mat{A}^{\sf H} + \sigma^2 \Mat{I}_N , \label{eq:Sigma-model_new}\\
\Mat{\Gamma} &= \mathrm{cov}(\Vec{x}_{\ell}) = \mathop{\mathrm{diag}}(\Vec{\gamma})
\end{align}
where $\Vec{\gamma} = [\gamma_1 \ldots \gamma_M]^T$ is the $K$-sparse vector of unknown source powers.
The matrix $\Mat{\Sigma}$ is interpretable as the covariance matrix $\mathrm{cov}(\Vec{v}_{\ell})$ of the Gaussian component $\Vec{v}_{\ell}$, but this is not observable in this model and the sensor array only observes the scale mixture $\Vec{y}_{\ell}$. The matrix $\Mat{\Sigma}$ is called scatter matrix in the following.
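The scatter-matrix model $\Mat{\Sigma} = \Mat{A}\Mat{\Gamma}\Mat{A}^{\sf H} + \sigma^2\Mat{I}_N$ with a $K$-sparse power vector $\Vec{\gamma}$ can be sketched as below; the dictionary and all numerical values are illustrative assumptions.

```python
import numpy as np

N, M = 8, 30
# illustrative ULA dictionary on an M-point DOA grid (d/lambda = 0.5)
theta = np.linspace(-np.pi / 2, np.pi / 2, M)
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(N), np.sin(theta)))

gamma = np.zeros(M)
gamma[[5, 17]] = [2.0, 1.0]      # K = 2 non-zero source powers
sigma2 = 0.1

# Sigma = A Gamma A^H + sigma^2 I  (A * gamma broadcasts gamma over columns)
Sigma = (A * gamma) @ A.conj().T + sigma2 * np.eye(N)
```

The resulting matrix is Hermitian positive definite, and the source part $\Mat{A}\Mat{\Gamma}\Mat{A}^{\sf H}$ has rank $K$.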
Since $\Vec{y}_{\ell} | \tau_{\ell} \sim \mathbb{C} \mathcal N_N ( \Vec{0}, \tau_{\ell} \Mat{\Sigma})$, the density of $\Vec{y}_{\ell}$ is
\begin{align}
p_{\Vec{y}}(\Vec{y}_{\ell}) &= \int_0^\infty \!\!\! p_{\Vec{y},\tau}(\Vec{y}_{\ell},\tau) \,\mathrm{d}\tau = \int_0^\infty \!\!\! p_{\Vec{y}|\tau}(\Vec{y}_{\ell}|\tau) p_{\tau}(\tau) \, \mathrm{d}\tau \\
&= (\det\Mat{\Sigma})^{-1} g(\Vec{y}_{\ell}^{\sf H} \Mat{\Sigma}^{-1} \Vec{y}_{\ell}) \label{eq:ces}
\end{align}
where the so-called \emph{density generator} $g(\cdot)$ is evaluated by
\begin{equation} \label{eq:t_g}
g(t)= \pi^{-N} \int_0^\infty \tau^{-N} e^{-t/\tau} p_{\tau}(\tau) \,\mathrm{d}\tau.
\end{equation}
The form of \eqref{eq:ces} shows that the distribution of $\Vec{y}_{\ell}$ is CES with zero mean $\Vec{0}$ \cite{Richmond,greco2014maximum,Fortunati2019b}.
If the random scaling $\sqrt{\tau_{\ell}}=1$ in \eqref{eq:scale-mixture} for all $\ell$ then the commonly assumed Gaussian array data model is recovered,
\begin{equation}
\Vec{y}_{\ell} = \Mat{A}\Vec{x}_{\ell} + \Vec{n}_{\ell}~, \label{eq:linear-model3}
\end{equation}
which relates the array data snapshot
$ \Vec{y}_{\ell}$ to the source amplitude $\Vec{x}_{\ell}$ by a linear regression model.
This model results in Gaussian array data, $\Vec{y}_{\ell}\sim\mathbb{C}\mathcal{N}_N(\Vec{0},\Mat{\Sigma})$.
We will not use \eqref{eq:linear-model3} for the derivation of DOA M-estimation using SBL.
The scaling mixture \eqref{eq:scale-mixture} is assumed instead.
In array processing applications, the complex Multi-Variate $t$-distribution (MVT distribution) \cite{kent1991redescending,ollila2020shrinking} can be used as an alternative to the Gaussian distribution as an array data model in the presence of outliers, because the MVT distribution has heavier tails than the Gaussian distribution.
The MVT distribution is a suitable choice for such data and provides a parametric approach to robust statistics \cite{Zoubir2018,Mecklenbraeuker-Gerstoft-Ollila-WSA2021}.
The complex MVT distribution is a special case of the CES distribution, for details see Appendix \ref{sec:MVT-array-data-model}.
For numerical performance evaluations of the derived M-estimator of DOA, three array data models are used in Sec. \ref{sec:results}: Gaussian, MVT, and $\epsilon$-contaminated. The Gaussian and MVT models are CES, whereas the $\epsilon$-contaminated model is not.
\section{M-estimation based on CES distribution}
\label{sec:M-estimation}
\subsection{Covariance matrix objective function}\label{sec:covmat}
We follow a general approach based on loss functions and assume that the array data $\Mat{Y}$ has a CES distribution with zero mean $\Vec{0}$ and positive definite Hermitian $N \times N$ covariance matrix parameter $\Mat{\Sigma}$ \cite{ollila2012complex,Mahot2013}. Thus
\begin{align}
p(\Mat{Y}| \Vec{0}, \Vec{\Sigma}) &= \prod_{\ell=1}^L \det(\Vec{\Sigma}^{-1})g(\Vec{y}_{\ell}^{\sf H}\Vec{\Sigma}^{-1}\Vec{y}_{\ell}).
\label{eq:CES-pdf}
\end{align}
An M-estimator of the covariance matrix $\Mat{\Sigma}$ is defined as a positive definite Hermitian $N \times N$ matrix that minimizes the objective function \cite[(4.20)]{Zoubir2018},
\begin{align}
\mathcal{L}(\Mat{\Sigma})
&= \frac{1}{L b}\ \sum\limits_{\ell=1}^{L} \rho(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell}) - \log\det(\Mat{\Sigma}^{-1}) ,
\label{eq:Mobjective}
\end{align}
where $ \Vec{y}_{\ell} $ is the $\ell$th array snapshot and $\rho: \mathbb{R}_0^+ \to \mathbb{R}^+$ is called the loss function. The loss function is any continuous, non-decreasing function such that $ \rho(e^x)$ is convex in $-\infty <x < \infty$, cf. \cite[Sec. 4.3]{Zoubir2018}.
Note that the objective function \eqref{eq:Mobjective} is a penalized sample average of the chosen loss function $\rho$ where the penalty term is $\log\det\Mat{\Sigma}$. A specific choice of loss function $\rho$ renders \eqref{eq:Mobjective} equal to the negative log-likelihood of $\Mat{\Sigma}$ when the array data are CES distributed with density generator $g(t)=\mathrm{e}^{-\rho(t)}$ \cite{Huber1964}. If the loss function is chosen, e.g., as $\rho(t)=t$ then \eqref{eq:Mobjective} becomes the negative log-likelihood function for $\Mat{\Sigma}$ for Gaussian array data.
The term $b$ is a fitting coefficient, called consistency factor, which renders the minimizer of the objective function \eqref{eq:Mobjective} to be equal to $\Mat{\Sigma}$ when the array data are Gaussian, thus
\begin{align}
b &= \mathsf{E}[\psi(\| \Vec{y} \|^2)]/N, \quad \Vec{y} \sim \mathbb{C}\mathcal N_N(\Vec{0},\Mat{I}), \label{eq:consitency factor} \\
&= \frac 1 N \int_0^\infty \psi( t/2) f_{\chi^2_{2N}}(t) \mathrm{d} t \label{eq:consitency factor2}
\end{align}
where $\psi(t)=t\, \mathrm{d}\rho(t)/\mathrm{d}t$ and $f_{\chi^2_{2N}}(t)$ denotes the pdf of chi-squared distribution with $2N$ degrees of freedom.
To arrive from \eqref{eq:consitency factor} to \eqref{eq:consitency factor2} we used that $\| \Vec{y} \|^2 \sim (1/2) \chi^2_{2N}$.
Minimizing \eqref{eq:Mobjective} with $b$ according to \eqref{eq:consitency factor2} results in a consistent M-estimator of the covariance matrix $\Mat{\Sigma}$ when the objective function is derived under a given non-Gaussian array data assumption (as in Sec. \ref{sec:loss-func}) but is in fact Gaussian ($\Vec{y}_{\ell} \sim \mathbb{C} \mathcal N_N(\Vec{0},\Mat{\Sigma}))$.
A derivation of the consistency factor $b$ for the loss functions used in this paper is given in Appendix \ref{sec:consistency-factor}.
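The consistency factor can also be verified numerically. The Python sketch below (illustrative choices $N=20$ and Huber loss parameter $q=0.9$; SciPy assumed available) integrates \eqref{eq:consitency factor2} on a grid and compares the result with the closed-form Huber expression \eqref{eq:consistency-H}:

```python
import numpy as np
from scipy.stats import chi2

N = 20     # number of sensors (illustrative)
q = 0.9    # Huber loss parameter, a design choice
# c^2 is the q-th quantile of the (1/2)*chi^2_{2N} distribution:
c2 = 0.5 * chi2.ppf(q, df=2 * N)

def psi_huber(t):
    # psi(t) = t*u(t): identity below the threshold, constant c^2 above
    return np.minimum(t, c2)

# Numerical integration of eq. (consitency factor2)
t = np.linspace(0.0, 300.0, 600001)
dt = t[1] - t[0]
b_num = np.sum(psi_huber(t / 2) * chi2.pdf(t, df=2 * N)) * dt / N

# Closed form for Huber loss, eq. (consistency-H)
b_cf = chi2.cdf(2 * c2, df=2 * (N + 1)) + c2 * (1 - q) / N
print(b_num, b_cf)
```

The two values agree to numerical precision, confirming the closed-form expression.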
\subsection{Loss functions}
\label{sec:loss-func}
We discuss four different choices of the loss function $\rho(\cdot)$, each chosen so that \eqref{eq:Mobjective} becomes a negative log-likelihood function for the corresponding distribution. These loss functions are summarized in Table \ref{tab:loss-and-weight-functions}.
For each loss function, we also summarize the consistency factor $b$ and the weight function $u(t) = \mathrm{d} \rho(t)/\mathrm{d}t $ associated with the loss function $\rho$.
\subsubsection{Gauss loss} corresponds to loss function of (circular complex) Gaussian distribution:
\begin{equation}\label{eq:loss-G}
\rho_{\mathrm{Gauss}}(t)=t
\end{equation}
in which case the consistency factor $b=1$ and the objective in \eqref{eq:Mobjective} becomes the Gaussian (negative) log-likelihood function $\mathrm{tr}\{ \Mat{\Sigma}^{-1} \Mat{S}_{\Mat{Y}} \} - \log\det(\Mat{\Sigma}^{-1}) $ where
\begin{align}
\Mat{S}_{\Mat{Y}} = \Mat{YY}^{\sf H} / L,
\label{eq:SY}
\end{align}
is the sample covariance matrix and $(\cdot)^H$ denotes the Hermitian transpose; the minimizer is $\hat{\Mat{\Sigma}} = \Mat{S}_{\Mat{Y}}$. In this case $b$ in \eqref{eq:consitency factor2} becomes $b=1$ as expected, since $\Mat{S}_{\Mat{Y}}$ is consistent for $\Mat{\Sigma}$ without any scaling correction.
For Gauss loss $u_{\mathrm{Gauss}}(t)=1$.
\subsubsection{Huber loss} given by \cite[Eq.~(4.29)]{Zoubir2018}
\begin{equation} \label{eq:loss-H}
\rho_{\mathrm{Huber}}(t;c) = \begin{cases} t & \ \mbox{for} \ t \leqslant c^2, \\
c^2 \big( \log (t/c^2) + 1 \big) & \ \mbox{for} \ t > c^2. \end{cases}
\end{equation}
The threshold $c$ is a tuning parameter that affects the robustness and efficiency of the estimator. Huber loss specializes the objective function \eqref{eq:Mobjective} to the negative log-likelihood of $\Mat{\Sigma}$ when the array data are heavy-tailed CES distributed with a density generator of the form $\mathrm{e}^{-\rho_{\mathrm{Huber}}(t;c)}$. The squared threshold $c^2$
in \eqref{eq:loss-H} is mapped to the $q$th quantile of the $(1/2) \chi^2_{2N}$-distribution and we regard $q \in (0,1)$ as a loss parameter which is chosen by design, see Table \ref{tab:loss-and-weight-functions}.
It is easy to verify that $b$ in \eqref{eq:consitency factor2} for Huber loss function is \cite[Sec.~4.4.2]{Zoubir2018},
\begin{align}\label{eq:consistency-H}
b_{\mathrm{Huber}} &= F_{\chi^2_{2(N+1)}}(2c^2) + c^2(1- F_{\chi^2_{2 N}}(2 c^2))/N, \\
&= F_{\chi^2_{2(N+1)}}(2c^2) + c^2(1- q)/N,
\end{align}
where $F_{\chi^2_{2N}}(x)$ denotes the cumulative distribution of the $\chi^2_{2N}$ distribution.
For Huber loss \eqref{eq:loss-H} the weight function becomes
\begin{equation} \label{eq:huber_weight}
u_{\mathrm{Huber}} (t;c) = \begin{cases} 1,
& \ \mbox{for} \ t \leqslant c^2 \\ c^2/t , & \ \mbox{for} \ t > c^2 \end{cases} .
\end{equation}
Thus, an observation $\Vec{y}_{\ell}$ with squared Mahalanobis distance (MD) $\Vec{y}^H_{\ell} \Mat{\Sigma}^{-1} \Vec{y}_{\ell}$ smaller than $c^2$ receives constant weight, while observations with a larger MD are heavily down-weighted.
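A minimal sketch of the Huber weight \eqref{eq:huber_weight}, assuming $N=20$ and $q=0.9$ for illustration, shows the constant weight for inliers and the strong down-weighting of a gross outlier:

```python
import numpy as np
from scipy.stats import chi2

N = 20
q = 0.9
c2 = 0.5 * chi2.ppf(q, df=2 * N)   # squared threshold c^2

def u_huber(t):
    """Huber weight function, eq. (huber_weight)."""
    return np.where(t <= c2, 1.0, c2 / t)

t_inlier = 0.5 * c2      # squared Mahalanobis distance below the threshold
t_outlier = 100.0 * c2   # squared Mahalanobis distance of a gross outlier
# The inlier keeps full weight; the outlier is down-weighted by a factor 100.
print(float(u_huber(t_inlier)), float(u_huber(t_outlier)))
```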
\subsubsection{MVT loss} corresponds to the ML-loss for the (circular complex) multivariate $t$ (MVT) distribution with $\nu_{\mathrm{loss}}$ degrees of freedom, $\Vec{y}_{\ell} \sim \mathbb{C} t_{N,\nu_{\mathrm{loss}}}(\Vec{0},\Mat{\Sigma})$ \cite[Eq.(4.28)]{Zoubir2018},
\begin{equation} \label{eq:loss-T}
\rho_{\mathrm{MVT}}(t;\nu_{\mathrm{loss}})= \frac{\nu_{\mathrm{loss}} + 2N}{2} \log (\nu_{\mathrm{loss}} +2 t).
\end{equation}
The $\nu_{\mathrm{loss}}$ parameter in \eqref{eq:loss-T} is viewed as a loss parameter which is chosen by design, see Table \ref{tab:loss-and-weight-functions}. The consistency factor $b_{_\mathrm{MVT}}$
for $\rho_{\mathrm{MVT}}(t;\nu_{\mathrm{loss}})$ is computable by numerical integration.
For MVT-loss \eqref{eq:loss-T} the corresponding weight function is
\begin{equation} \label{eq:t_weight}
u_{\mathrm{MVT}}(t; \nu_{\mathrm{loss}})=\frac{\nu_{\mathrm{loss}}+2N}{\nu_{\mathrm{loss}} + 2t}.
\end{equation}
\subsubsection{Tyler loss} given by \cite{Tyler1987jstor}, \cite[Sec. 4.4.3, Eq. (4.30)]{Zoubir2018}
\begin{equation} \label{eq:loss-Tyler}
\rho_{\mathrm{Tyler}}(t)= N \log (t),
\end{equation}
which is the limiting case of $\rho_{\mathrm{MVT}}(t;\nu_{\mathrm{loss}})$ for $\nu_{\mathrm{loss}}\to0$ and
of $\rho_{\mathrm{Huber}}(t;c)$ for $c\to0$.
To obtain this limit using Huber loss function, first note that we may replace $\rho_{\mathrm{Huber}}(t;c)$ with
$\rho^*_{\mathrm{Huber}}(t;c) = \rho_{\mathrm{Huber}}(t;c) - h(c,b_{\mathrm{Huber}})$ where $h(c,b) = c^2\{1-\log(c^2)\}/b$ is constant in $t$. Then, since $c^2/b_{\mathrm{Huber}} \rightarrow N$ as $c \rightarrow 0$, it follows that $\rho^*_{\mathrm{Huber}}(t;c) \rightarrow \rho_{\mathrm{Tyler}}(t)$.
This is the ML-loss of the Angular Central Gaussian (ACG) distribution \cite[Sec. 4.2.3]{Zoubir2018} which is not a CES distribution.
For Tyler loss \eqref{eq:loss-Tyler} the weight function becomes $u_{\mathrm{Tyler}} (t) = N/t$.
In this case, we cannot use \eqref{eq:consitency factor} since Tyler's M-estimator estimates only the shape of the covariance matrix $\boldsymbol{\Sigma}$. Namely, for Tyler loss and $b=1$, if $\hat{\boldsymbol{\Sigma}}$ is a minimizer of \eqref{eq:Mobjective} then so is $a \hat{\boldsymbol{\Sigma}}$ for any $a > 0$. Thus the solution is unique only up to a scale.
However, a consistent estimator of the covariance matrix at the normal distribution can be obtained by multiplying any particular minimum $\hat{\boldsymbol{\Sigma}}$ by $\hat \tau$ given in \eqref{eq:hat_tau} or in \eqref{eq:hat_tau2}.
The latter approach is more robust as it uses medians (instead of means) of the distances $\hat d_{\ell}^2 = \Vec{y}_{\ell}^{\sf H} \hat{\boldsymbol{\Sigma}}^{-1} \Vec{y}_{\ell}/N$, $\ell=1,\ldots,L$, and it consistently outperformed the sample mean based estimate \eqref{eq:hat_tau} in our simulations (not reported in this paper).
Thus, we compute $\hat \tau$ using \eqref{eq:hat_tau2} and set the consistency factor as $b=1/\hat \tau$. More details of these estimators are given in Appendix~\ref{sec:consistency-factor}.
\par
\begin{table}
\begin{center}
\small
\begin{tabular}{|l|c|c|c|l|} \hline
loss & loss & weight & &loss \\
name & $\rho(t)$ & $u(t)$ & $\psi(t)$ & parameter \\ \hline
Gauss & \eqref{eq:loss-G} & 1 & $t$ & n/a \\
MVT & \eqref{eq:loss-T} & \eqref{eq:t_weight} &
$ \frac{\nu_{\mathrm{loss}}+2N}{2+\nu_{\mathrm{loss}}/t}$
& $\nu_{\mathrm{loss}}=2.1$ \\
Huber & \eqref{eq:loss-H} & \eqref{eq:huber_weight} & $t\,u_{\mathrm{Huber}}(t)$ & $q=0.9$ \\
Tyler & \eqref{eq:loss-Tyler} & $N/t$ & $N$ & n/a \\ \hline
\end{tabular}
\\*[0.5ex]
\caption{Loss and weight functions used in DOA M-estimation with their loss parameter.
\label{tab:loss-and-weight-functions}}
\end{center}
\end{table}
\subsection{Source Power Estimation}
Similarly to Ref. \cite[Sec. III.D]{gerstoft2016mmv}, we regard
\eqref{eq:Mobjective} as a function of $\Vec{\gamma}$ and $\sigma^2$ and compute the first order derivative
\begin{align}
\frac{\partial\mathcal{L}}{\partial \gamma_m} &= -\Vec{a}_m^H \Mat{\Sigma}^{-1} \Vec{a}_m
+ \frac{1}{Lb} \sum\limits_{\ell=1}^L | \Vec{a}_m^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell} |^2 u(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell})
\label{eq:dL-over-dgamma-m-new}
\end{align}
where $u(t) = \mathrm{d} \rho(t)/\mathrm{d}t $
is the weight function associated with the loss function $\rho$.
%
Equation \eqref{eq:dL-over-dgamma-m-new} is identical to Ref. \cite[Eq.(21)]{gerstoft2016mmv} except for the weight function $u(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell})$. For the Gaussian array data model, the weight function is the constant function $u_{\mathrm{Gauss}}(t)\equiv1$.
Setting \eqref{eq:dL-over-dgamma-m-new} to zero gives
\begin{align}
\Vec{a}_m^H \Mat{\Sigma}^{-1} \Vec{a}_m
&= \Vec{a}_m^H \Mat{\Sigma}^{-1} \Mat{R}_{\Vec{Y}}
\Mat{\Sigma}^{-1} \Vec{a}_m,
\label{eq:Jaffer-new}
\end{align}
where $\Mat{R}_{\Vec{Y}}$ is the weighted sample covariance matrix,
\begin{align}
\Mat{R}_{\Vec{Y}} &=\frac{1}{Lb}
\sum\limits_{\ell=1}^L u(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell};\cdot)
\Vec{y}_{\ell} \Vec{y}_{\ell}^H \label{eq:RY}
=\frac{1}{L}\Mat{Y}\Mat{D}\Mat{Y}^H
\end{align}
with $\Mat{D} = \mathrm{diag}(u_1,\ldots,u_L)/b$ and $u_{\ell} = u(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell};\cdot)$.
Note that $\Mat{R}_{\Vec{Y}}$ can be understood as an adaptively weighted sample covariance matrix \cite[Sec. 4.3]{Zoubir2018}.
$\Mat{R}_{\Vec{Y}}$ is Fisher consistent for the covariance matrix when $\Mat{Y}$ follows a Gaussian distribution,
i.e., $\mathsf{E}[\Mat{R}_{\Vec{Y}}]=\Mat{\Sigma}$, thanks to the consistency factor $b$ \cite[Sec. 4.4.1]{Zoubir2018}.
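The equality of the sum and matrix forms in \eqref{eq:RY} can be checked directly. The sketch below assumes $\Mat{\Sigma}=\Mat{I}$ and a placeholder weight function, both chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, b = 4, 10, 1.0
Y = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
Sigma_inv = np.eye(N)                    # illustration: Sigma = I assumed known
u = lambda t: np.minimum(1.0, 2.0 / t)   # placeholder weight function

t_l = np.real(np.einsum('nl,nk,kl->l', Y.conj(), Sigma_inv, Y))  # squared MDs
w = u(t_l)

# Sum form of eq. (RY) ...
R_sum = sum(w[l] * np.outer(Y[:, l], Y[:, l].conj()) for l in range(L)) / (L * b)
# ... equals the matrix form R_Y = Y D Y^H / L with D = diag(u_1,...,u_L)/b
D = np.diag(w) / b
R_mat = Y @ D @ Y.conj().T / L
print(np.allclose(R_sum, R_mat))  # True
```

The matrix form is preferable in practice since it avoids the explicit loop over snapshots.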
We multiply \eqref{eq:Jaffer-new} by $\gamma_m$ and obtain the fixed-point equation
\begin{align}
\gamma_m &= \gamma_m
\frac{\Vec{a}_m^H \Mat{\Sigma}^{-1} \Mat{R}_{\Vec{Y}} \Mat{\Sigma}^{-1} \Vec{a}_m}{\Vec{a}_m^H\Mat{\Sigma}^{-1} \Vec{a}_m}
\quad\forall m\in\{1,\ldots,M\}, \nonumber \\
&= \gamma_m \frac{\frac{1}{Lb} \sum\limits_{\ell=1}^L\left| \Vec{a}_m^H \Mat{\Sigma}^{-1}\Vec{y}_{\ell}\sqrt{u(\Vec{y}_{\ell}^H \Mat{\Sigma}^{-1} \Vec{y}_{\ell})}\right|^2}{\Vec{a}_m^H\Mat{\Sigma}^{-1} \Vec{a}_m} \label{eq:gamma-update-rule}
\end{align}
which is the basis for an iteration to solve for $\Vec{\gamma}$ numerically.
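One sweep of the fixed-point update \eqref{eq:gamma-update-rule} can be sketched as follows. The sanity check exploits that the ratio equals one, so every $\gamma_m$ is a fixed point, whenever the weighted sample covariance equals the model covariance $\Mat{\Sigma}$ (the random dictionary is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 16
A = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
gamma = rng.uniform(0.1, 1.0, M)
sigma2 = 0.5

def gamma_update(gamma, A, sigma2, R_Y):
    """One sweep of the fixed-point rule, eq. (gamma-update-rule)."""
    Sigma = (A * gamma) @ A.conj().T + sigma2 * np.eye(A.shape[0])
    Si = np.linalg.inv(Sigma)
    num = np.real(np.einsum('nm,nk,km->m', A.conj(), Si @ R_Y @ Si, A))
    den = np.real(np.einsum('nm,nk,km->m', A.conj(), Si, A))
    return gamma * num / den

# If R_Y equals the model covariance, the ratio is one and gamma is unchanged.
Sigma = (A * gamma) @ A.conj().T + sigma2 * np.eye(N)
gamma_next = gamma_update(gamma, A, sigma2, R_Y=Sigma)
print(np.max(np.abs(gamma_next - gamma)))  # numerically zero
```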
The active set $\mathcal M$ is then selected as either the $K$ largest entries of $\Vec{\gamma}$ or the entries with $\gamma_m$ exceeding a threshold.
\subsection{Noise Variance Estimation}
The original SBL algorithm exploits Jaffer's necessary condition \cite[Eq.~(6)]{Jaffer1988} which leads to the noise subspace based estimate \cite[Eq.~(15)]{Liu2012}, \cite[Sec. III.E]{gerstoft2016mmv},
\begin{align}
\hat{\sigma}^2_S &=
\frac{\trace{(\Mat{I}_N-\Mat{A}^{}_{\mathcal{M}} \Mat{A}^+_{\mathcal{M}} )\Mat{S}_{\Mat{Y}}}}{N-K } ,
\label{eq:noise-estimate}
\end{align}
where $(\cdot)^+$ denotes the Moore-Penrose pseudo inverse.
This noise variance estimate works well with DOA estimation~\cite{gerstoft2016mmv,Nannuru2018,Park2021} without outliers in the array data.
For CES distributed array data, we estimate the noise based on \eqref{eq:Jaffer-new}.
This results in the robust noise variance estimate
\begin{align}
\hat{\sigma}^2_R &=
\frac{\trace{(\Mat{I}_N-\Mat{A}^{}_{\mathcal{M}} \Mat{A}^+_{\mathcal{M}})\Mat{R}_{\Mat{Y}}}}{N-K } ,
\label{eq:noise-estimateR}
\end{align}
for the full derivation see \cite{Mecklenbraeuker-Gerstoft-Ollila-WSA2021}. For Gauss loss function, $\Mat{R}_{\Mat{Y}}=\Mat{S}_{\Mat{Y}}$, and the expressions \eqref{eq:noise-estimate}, \eqref{eq:noise-estimateR} are identical.
To stabilize the noise variance M-estimate \eqref{eq:noise-estimateR} for
non-Gauss loss, we define lower and upper bounds for $\hat{\sigma}^2$
and enforce
$\sigma^2_{\text{floor}}\le \hat{\sigma}^2\le \sigma^2_{\text{ceil}}$ by
\begin{equation}
\hat{\sigma}^2=\max(\min(\hat{\sigma}^2_R,\sigma^2_{\text{ceil}}), \sigma^2_{\text{floor}}).
\label{eq:sigma-stabilization}
\end{equation}
The original SBL algorithm \cite{gerstoft2016mmv,SBL4-github} does not use/need this stabilization because for Gauss loss, the weighted sample covariance matrix estimate \eqref{eq:RY} equals \eqref{eq:SY} which does not depend on prior knowledge of $\Mat{\Sigma}$.
We have chosen $\sigma^2_{\text{floor}}=10^{-6}{\rm tr}[\Mat{S}_{\Mat{Y}}]/N$ and $\sigma^2_{\text{ceil}}={\rm tr}[\Mat{S}_{\Mat{Y}}]/N$ for the numerical simulations.
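A minimal sketch of \eqref{eq:noise-estimateR} with the stabilization \eqref{eq:sigma-stabilization} follows; the orthonormal toy steering matrix is an assumption for illustration, so that the projector removes the source contribution exactly:

```python
import numpy as np

def noise_var_estimate(A_M, R_Y, N, K, s2_floor, s2_ceil):
    """Robust noise variance estimate, eq. (noise-estimateR), with the
    stabilization of eq. (sigma-stabilization)."""
    P_perp = np.eye(N) - A_M @ np.linalg.pinv(A_M)  # noise-subspace projector
    s2 = np.real(np.trace(P_perp @ R_Y)) / (N - K)
    return min(max(s2, s2_floor), s2_ceil)

# Toy example: orthonormal steering columns, known source and noise powers.
N, K = 6, 2
A_M = np.eye(N)[:, :K]
Gamma = np.diag([2.0, 1.0])
sigma2_true = 0.3
R_Y = A_M @ Gamma @ A_M.conj().T + sigma2_true * np.eye(N)
s2_hat = noise_var_estimate(A_M, R_Y, N, K, s2_floor=1e-6, s2_ceil=10.0)
print(s2_hat)  # recovers sigma2_true
```

With a floor above the true value, e.g. $\sigma^2_{\text{floor}}=1$, the estimate is clipped to the floor, illustrating the stabilization.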
As discussed in Appendix \ref{sec:consistency-factor}, Tyler's M-estimator is unique only up to a scale which affects the noise variance estimate $\hat{\sigma}^2_R$. For this reason, we normalize $\Mat{R}_{\Mat{Y}}$ to trace 1 to remove this ambiguity if Tyler loss is used.
\begin{table}
\begin{algorithmic}[1]
\small
\STATE {\bf input} $\Mat{Y}\in\mathbb{C}^{N\times L}$ \hspace{15ex} array data to be analyzed
\STATE select the weight function $u(\cdot;\cdot)$ and loss parameter
\STATE constant $ \Mat{A}\in\mathbb{C}^{N\times M}$ \hspace{12ex} dictionary matrix
\STATE \hspace{8ex} $K\in\mathbb{N}$, with $K<N$, \hspace{2.5ex} number of sources
\STATE \hspace{8ex} $\delta\in\mathbb{R}^+$ \hspace{16ex} small positive constant
\STATE \hspace{8ex} $\text{SNR}_{\text{max}}\in\mathbb{R}^+$ \hspace{10ex} upper SNR limit in data
\STATE \hspace{8ex} $\gamma_{\text{range}}\in[0,1]$, \hspace{0.5ex} dynamic range for DOA grid pruning
\STATE \hspace{8ex} $j_{\text{max}}\in\mathbb{N}$ \hspace{15ex} iteration count limit
\STATE \hspace{8ex} $z\in\mathbb{N}$ with $z<j_{\text{max}}$ \hspace{4.2ex} convergence criterion
\STATE set $\Mat{S}_{\Mat{Y}} = \Mat{Y} \Mat{Y}^H/L$
\STATE set $\sigma^2_{\text{ceil}} = \trace{\Mat{S}_{\Mat{Y}}}$ \hspace{15ex} upper limit on $\hat{\sigma}^2$
\STATE \hspace{2.5ex} $\sigma^2_{\text{floor}} = \sigma^2_{\text{ceil}}/\text{SNR}_{\text{max}}$ \hspace{9ex} lower limit on $\hat{\sigma}^2$
\STATE initialize $\hat{\sigma}^2$, $\Vec{\gamma}^{\text{new}}$ using \eqref{eq:init-step1}--\eqref{eq:init-step4},
$j=0$
\REPEAT
\STATE $j = j + 1$ \hspace{20ex} increment iteration counter
\STATE $\Vec{\gamma}^{\text{old}}=\Vec{\gamma}^{\text{new}}$
\STATE $\gamma_{\text{floor}} = \gamma_{\text{range}} \max(\Vec{\gamma}^{\text{new}})$, \hspace{6ex} source dynamic range
\STATE $\mathcal{P}=\{p \in \mathbb{N} \; \vert \; \Vec{\gamma}^{\text{new}}_p \ge \gamma_{\text{floor}}\}$ \hspace{3ex} pruned DOA grid \hfill \eqref{eq:mathcal-P}
\STATE $\Mat{\Gamma}_{\mathcal{P}}=\text{diag}(\Vec{\gamma}^{\text{new}}_{\mathcal{P}})$ \hspace{13ex} pruned source powers
\STATE $\Mat{A}_{\mathcal{P}}=[\Vec{a}_{p_1}, \dots, \Vec{a}_{p_P}]$ for all $p_i\in\mathcal{P}$, pruned dictionary
\STATE $\Mat{\Sigma}_{\mathcal{P}}= \Mat{A}_{\mathcal{P}} \Mat{\Gamma}_{\mathcal{P}} \Mat{A}^H_{\mathcal{P}} + \hat{\sigma}^2 \Mat{I}_N $ \hfill \eqref{eq:Sigma-model_new}
\STATE $\Mat{R}_{\Vec{Y}} =\frac{1}{Lb}
\sum\limits_{\ell=1}^L u(\Vec{y}_{\ell}^H \Mat{\Sigma}_{\mathcal{P}}^{-1} \Vec{y}_{\ell};\cdot)
\Vec{y}_{\ell} \Vec{y}_{\ell}^H$ \hfill \eqref{eq:RY}
\STATE {\bf if} using $u_{\mathrm{Tyler}}(\cdot;\cdot)$ {\bf then} normalize $\Mat{R}_{\Mat{Y}}=\frac{\Mat{R}_{\Mat{Y}}}{\mathop{\mathrm{tr}}\Mat{R}_{\Mat{Y}}}$
\STATE $\gamma_p^{\text{new}} =
\gamma_p^{\text{old}} \left(
\frac{\Vec{a}_p^H \Mat{\Sigma}_{\mathcal{P}}^{-1} \Mat{R}_{\Vec{Y}} \Mat{\Sigma}_{\mathcal{P}}^{-1} \Vec{a}_p}{\Vec{a}_p^H\Mat{\Sigma}_{\mathcal{P}}^{-1} \Vec{a}_p}
\right)
$ for all $p\in\mathcal{P}$\hfill \eqref{eq:gamma-update-rule}
\STATE $\mathcal{M}=\{m \in \mathbb{N} \; \vert \; K\; \text{largest peaks in } \Vec{\gamma}^{\text{new}} \}$ active set
\STATE $\Mat{A}_{\mathcal{M}}=[\Vec{a}_{m_1}, \dots, \Vec{a}_{m_K}]$
\STATE $\hat{\sigma}^2_R = \frac{\trace{(\Mat{I}_N- \Mat{A}^{}_{\mathcal{M}}\Mat{A}_{\mathcal{M}}^+)\Mat{R}_{\Mat{Y}}}}{N-K } $ \hfill \eqref{eq:noise-estimateR}
\STATE $\hat{\sigma}^2=\max(\min(\hat{\sigma}^2_R,\sigma^2_{\text{ceil}}), \sigma^2_{\text{floor}})$ \hfill \eqref{eq:sigma-stabilization}
\UNTIL{($\mathcal{M}$ has not changed during last $z$ iterations) or $j > j_{\text{max}}$ }
\STATE {\bf output} $\mathcal{M}$ and $j$
\end{algorithmic}
\caption{Robust and Sparse M-Estimation of DOA. This table documents the algorithm formulated
for single-frequency array data.
The M-function \textcolor{black}{\tt SBL\_v5p11.m} \cite{RobustSBL-github} implements
Robust and Sparse M-Estimation of DOA from multi-frequency array data.
}
\label{table:algorithm}
\end{table}
\subsection{Algorithm}
\label{sec:algorithm}
The proposed DOA M-estimation algorithm using SBL is displayed in Table \ref{table:algorithm} with the following remarks:
\subsubsection{DOA grid pruning}
\label{sec:grid-pruning}
To reduce the numerical complexity of the iterations, we introduce the pruned DOA grid $\mathcal{P}$: no computational resources are spent on DOAs whose source power estimates fall below a chosen threshold $\gamma_{\text{floor}}$, i.e., we apply a thresholding operation to the $\Vec{\gamma}^{\text{new}}$ vector.
The pruned DOA grid is formally defined as an index set,
\begin{align}
\mathcal{P} = \{p \in \{1,\ldots,M \}\; \vert \; \Vec{\gamma}^{\text{new}}_p \ge \gamma_{\text{floor}}\} =
\{ p_1, \ldots, p_P \},
\label{eq:mathcal-P}
\end{align}
where $\gamma_{\text{floor}}=\gamma_{\text{range}} \max{ \Vec{\gamma}^{\text{new}}}$ and we have chosen $\gamma_{\text{range}} =10^{-3}$.
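The pruning step \eqref{eq:mathcal-P} amounts to a simple thresholding operation; a minimal sketch with an illustrative $\Vec{\gamma}^{\text{new}}$:

```python
import numpy as np

# Toy gamma vector (illustrative values only)
gamma_new = np.array([1.0, 3e-4, 0.2, 5e-5, 0.01])
gamma_range = 1e-3                              # chosen dynamic range
gamma_floor = gamma_range * gamma_new.max()
P = np.flatnonzero(gamma_new >= gamma_floor)    # pruned DOA grid, eq. (mathcal-P)
print(P)  # [0 2 4]
```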
\subsubsection{Initialization}
The algorithm requires initial values for the source signal powers $\Vec{\gamma}$ and the noise variance $\sigma^2$.
The initial estimates are computed via the following steps:
\begin{enumerate}[a)]
\item Compute $\Mat{S}_{\Mat{Y}}$ and CBF output powers
\begin{align}
\gamma_m^{\text{init}} &= \frac{ \Vec{a}^H_m \Mat{S}_{\Mat{Y}} \Vec{a}_m }{ \| \Vec{a}_m \|^4} , \quad \forall m = 1,\ldots,M.
\label{eq:init-step1}
\end{align}
\item Compute the initial active set by identifying $K$ largest peaks in the CBF output powers,
\begin{align}
\mathcal{M} &= \{m \in \mathbb{N} \; \vert \; K\; \text{largest peaks in } \Vec{\gamma}^{\text{init}} \}
\label{eq:init-step2}
\end{align}
\item Compute the initial noise variance
\begin{align}
\hat{\sigma}^2 &= \hat{\sigma}^2_S = \frac{\trace{(\Mat{I}_N-\Mat{A}^{}_{\mathcal{M}} \Mat{A}^+_{\mathcal{M}}) \Mat{S}_{\Mat{Y}}}}{N-K }
\label{eq:init-step3}
\end{align}
\item Compute initial estimates of source powers:
\begin{align}
\gamma_m^{\text{new}} &= \max(\delta, (\gamma_m^{\text{init}} - \hat{\sigma}^2 )),
\quad \mbox{for $m=1,\ldots,M$},
\label{eq:init-step4}
\end{align}
where $\delta>0$ is a small number, guaranteeing that all initial $\gamma_m^{\text{new}}$ are positive.
\end{enumerate}
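The initialization steps a)--d) can be sketched as follows; the steering-like dictionary and the simplification of peak picking to the $K$ largest entries are assumptions for illustration:

```python
import numpy as np

def sbl_init(Y, A, K, delta=1e-3):
    """Initialization steps a)-d), eqs. (init-step1)-(init-step4).
    Peak picking in step b) is simplified to the K largest entries."""
    N, L = Y.shape
    S_Y = Y @ Y.conj().T / L
    norms4 = np.sum(np.abs(A) ** 2, axis=0) ** 2           # ||a_m||^4
    gamma_init = np.real(np.einsum('nm,nk,km->m', A.conj(), S_Y, A)) / norms4
    M_set = np.argsort(gamma_init)[-K:]                    # initial active set
    A_M = A[:, M_set]
    P_perp = np.eye(N) - A_M @ np.linalg.pinv(A_M)
    sigma2 = np.real(np.trace(P_perp @ S_Y)) / (N - K)     # (init-step3)
    gamma_new = np.maximum(delta, gamma_init - sigma2)     # (init-step4)
    return gamma_new, sigma2, M_set

# Toy single-source example with a steering-like dictionary (illustrative):
rng = np.random.default_rng(2)
N, M, L, K = 8, 32, 50, 1
A = np.exp(-1j * np.outer(np.arange(N), np.linspace(0, np.pi, M)))
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)
noise = 0.05 * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))
Y = np.outer(A[:, 10], x) + noise
gamma0, s2, M_set = sbl_init(Y, A, K)
print(int(M_set[0]))  # the CBF peak sits at the true source column
```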
\subsubsection{Convergence Criterion}
\label{sec:convergece-criterion}
The DOA estimates returned by the iterative algorithm in Table \ref{table:algorithm} are obtained from the active set $\mathcal{M}$. Therefore, the active set is monitored for changes in its elements to determine whether the algorithm has converged. If $\mathcal{M}$ has not changed during the last $z\in\mathbb{N}$ iterations, the repeat-until loop (lines 14--29 in Table \ref{table:algorithm}) is exited. Here $z$ is a tuning parameter which allows trading off computation time against DOA estimate accuracy. To ensure that the iterations always terminate, the maximum iteration count is defined as $j_{\mathrm{max}}$ with
$z<j_{\mathrm{max}}$.
\section{Simulation Results}
\label{sec:results}
Numerical simulations are carried out for evaluating the root mean squared error (RMSE) of DOA versus array signal to noise ratio (ASNR) based on synthetic array data $\Mat{Y}$.
Synthetic array data are generated for three scenarios with $K=1,\ldots,3$ incoming plane waves and corresponding DOAs as listed in Table \ref{tab:source-scenarios}.
The source amplitudes $\Vec{x}_{\ell}$ in \eqref{eq:scale-mixture}
are complex circularly symmetric zero-mean Gaussian.
The wavefield is modeled according to the scale mixture \eqref{eq:scale-mixture} and it is observed by a uniform linear array with $N=20$ elements at half-wavelength spacing.
The dictionary $\Mat{A}$ consists of $M=18001$ replica vectors for the high resolution DOA grid
$\theta_m = -90^{\circ} + (m-1)\delta,~\forall m=1,\ldots,M $ where $\delta$ is the dictionary's angular grid resolution, $\delta=180^\circ/(M-1)=0.01^{\circ}$.
The RMSE of the DOA estimates over $N_{\text{run}} = 250$ simulation runs with random array data realizations is used for evaluating the performance of the algorithm,
\begin{align}
\text{RMSE} &= \sqrt{\sum_{r=1}^{N_{\rm run}} \sum_{k=1}^{K} \frac{
[\min(|\hat{\theta}^r_k - \theta^r_k|,e_{\mathrm{max}})]^2}{K \,N_{\rm run}}} \,,
\label{eq:RMSE}
\end{align}
where $\theta^r_k$ is the true DOA of the $k$th source and $\hat{\theta}^r_k$ is the corresponding estimated DOA in the $r$th run
when $K$ sources are present in the scenario.
This RMSE definition is a specialization of the optimal subpattern assignment (OSPA) when $K$ is known, cf. \cite{Meyer2017}.
We use $e_{\mathrm{max}}=10^\circ$ in \eqref{eq:RMSE}; thus the maximum RMSE is $10^\circ$.
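The RMSE metric \eqref{eq:RMSE} can be sketched as:

```python
import numpy as np

def doa_rmse(theta_hat, theta_true, e_max=10.0):
    """RMSE of eq. (RMSE); inputs are (N_run, K) arrays of DOAs in degrees.
    Each error is capped at e_max, so the RMSE saturates at e_max."""
    err = np.minimum(np.abs(np.asarray(theta_hat) - np.asarray(theta_true)), e_max)
    return float(np.sqrt(np.mean(err ** 2)))

th = np.array([[-10.0, 10.0]])   # one run, two sources (illustrative)
print(doa_rmse(th, th))          # 0.0: perfect estimates
print(doa_rmse(th + 90.0, th))   # 10.0: gross errors saturate at e_max
```

This assumes the estimated DOAs are already assigned to the true DOAs; the OSPA-style assignment step is omitted for brevity.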
\subsection{Data generation}
The scaling $\sqrt{\tau_{\ell}}$ and the noise $\Vec{n}_{\ell}$ in \eqref{eq:scale-mixture} are generated according to three array data models which are summarized in Table \ref{tab:data-models} and explained below:
\paragraph{Gaussian array data:} In this model $\tau_{\ell}=1$ for all $\ell$ in \eqref{eq:scale-mixture} and $\Vec{y}_{\ell}= \Vec{v}_{\ell}\sim\mathbb{C}\mathcal{N}(\Vec{0},\Mat{\Sigma})$, where $\Mat{\Sigma}$ is defined in \eqref{eq:Sigma-model_new}.
\paragraph{MVT array data:} We first draw $\Vec{v}_{\ell}\sim\mathbb{C}\mathcal{N}(\Vec{0},\Mat{\Sigma})$ and $s_{\ell}\sim\chi^2_{\nu_{\mathrm{data}}}$ independently, where $\Mat{\Sigma}$ is defined in \eqref{eq:Sigma-model_new}. We set
$\tau_\ell = \nu_{\mathrm{data}}/s_{\ell}$, so that the array data
$\Vec{y}_{\ell} = \sqrt{\tau_\ell}\Vec{v}_{\ell}$ given by the scale mixture \eqref{eq:scale-mixture} is $\mathbb{C}t_{\nu_{\mathrm{data}}}$-distributed, cf. \cite{Ollila2003pimrc} and \cite[Sec.~4.2.2]{Zoubir2018}.
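The MVT array data generation can be sketched as follows, assuming an identity scatter matrix for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 20, 25
nu_data = 2.1
Sigma = np.eye(N)   # identity scatter matrix, chosen only for illustration

# Draw v_l ~ CN(0, Sigma) and s_l ~ chi^2_{nu_data} independently,
# set tau_l = nu_data / s_l, and form the scale mixture y_l = sqrt(tau_l) v_l.
C = np.linalg.cholesky(Sigma)
V = C @ (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
s = rng.chisquare(nu_data, size=L)
tau = nu_data / s
Y = np.sqrt(tau) * V   # complex MVT distributed array data, shape (N, L)
print(Y.shape)
```

Small values of $s_{\ell}$ produce large scalings $\tau_{\ell}$, which is the mechanism behind the heavy tails.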
\paragraph{$\epsilon$-contaminated array data:} This heavy-tailed array data model is not covered by \eqref{eq:scale-mixture} with the assumptions in Sec. \ref{sec:model}.
Instead, the noise $\Vec n$ is drawn with probability $(1-\epsilon)$ from a $\mathbb{C}\mathcal{N}(\Vec{0}, \sigma^2_1\Mat{I})$ and with probability $\epsilon$ from a $\mathbb{C}\mathcal{N}(\Vec{0}, \lambda^2\sigma^2_1\Mat{I})$, where $\lambda$ is the outlier strength.
Thus, using \eqref{eq:Sigma-model_new}, $\Vec{y}_{\ell}$ is drawn from
$\mathbb{C}\mathcal{N}(\Vec{0}, \Mat{A}\Mat{\Gamma}\Mat{A}^H+\sigma^2_1\Mat{I}_N)$ with probability $(1-\epsilon)$ and from $\mathbb{C}\mathcal{N}(\Vec{0}, \Mat{A}\Mat{\Gamma}\Mat{A}^H+(\lambda\sigma_1)^2\Mat{I}_N)$ with outlier probability $\epsilon$.
The resulting noise covariance matrix is $\sigma^2\Mat{I}_N$ similar to the other models, but with
\begin{equation}
\sigma^2 = (1-\epsilon+\epsilon\lambda^2)\sigma_1^2.
\label{eq:epscont-variance}
\end{equation}
The limiting distribution of $\epsilon$-contaminated noise for $\epsilon \to 0$ and any constant $\lambda>0$ is Gaussian.
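The noise variance formula \eqref{eq:epscont-variance} can be checked by Monte Carlo; the sketch below uses the illustrative parameters $\epsilon=0.05$, $\lambda=10$, $\sigma_1=1$:

```python
import numpy as np

rng = np.random.default_rng(4)
eps, lam, s1 = 0.05, 10.0, 1.0   # outlier probability, strength, base std
L = 400000

# Each noise sample is CN(0, s1^2) w.p. (1 - eps) and
# CN(0, (lam*s1)^2) w.p. eps.
outlier = rng.random(L) < eps
std = np.where(outlier, lam * s1, s1)
n = std * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

var_mc = np.mean(np.abs(n) ** 2)                 # Monte Carlo noise variance
var_th = (1 - eps + eps * lam ** 2) * s1 ** 2    # eq. (epscont-variance)
print(var_mc, var_th)
```

For these parameters the theoretical value is $\sigma^2=5.95\sigma_1^2$, and the empirical variance matches it closely.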
The convergence criterion parameter $z=10$ is chosen for all numerical simulations and the maximum number of iterations was set to $j_{\mathrm{max}}=1200$, but this maximum was never reached.
Additionally, the Cram\'er-Rao Bound (CRB) for DOA estimation from Gaussian and MVT array data are shown. The corresponding expressions are given in Appendix \ref{sec:CRB} for completeness.
The Gaussian CRB shown in Figs. \ref{fig:s1s2s3MC250SNRn15}(a,d,g) is evaluated according to \eqref{eq:Gaussian-CRB}.
The bound for MVT array data, $C_{\mathrm{CR,MVT}}(\Vec{\theta})$, is just slightly higher than the bound for Gaussian array data, $C_{\mathrm{CR,Gauss}}(\Vec{\theta})$.
For the three source scenario, $N=20$, $L=25$, Gaussian and MVT array data model, the gap between the bounds $C_{\mathrm{CR,Gauss}}(\Vec{\theta})$ and $C_{\mathrm{CR,MVT}}(\Vec{\theta})$ is smaller than $3\%$ in terms of RMSE in the shown ASNR range.
The gap is not observable in the RMSE plots in Fig. \ref{fig:s1s2s3MC250SNRn15} for the chosen ASNR and RMSE ranges. The MVT CRB is shown in Figs. \ref{fig:s1s2s3MC250SNRn15}(b,e,h).
We have not evaluated the CRB for $\epsilon$-contaminated array data. In the performance plots for $\epsilon$-contaminated array data, we show $C_{\mathrm{CR,Gauss}}(\Vec{\theta})$ for $\mathrm{ASNR}=N/\sigma^2$ using \eqref{eq:epscont-variance}, labeled as ``Gaussian CRB (shifted)'' in Figs. \ref{fig:s1s2s3MC250SNRn15}(c,f,i). We expect that the true CRB for $\epsilon$-contaminated array data is higher than this approximation.
\begin{table}
\begin{center}
\normalsize
\begin{tabular}{|l|l|l|} \hline
scenario & DOAs & source variance \\ \hline
single source & $-10^{\circ}$ & $\gamma_{8001}=1$ \\
two sources & $-10^{\circ}$, $10^{\circ}$ & $\gamma_{8001}=\gamma_{10001}=\frac12$\\
three sources & $-3^{\circ}$, $2^{\circ}$, $75^{\circ}$ & $\gamma_{8701}=\gamma_{9201}=\gamma_{16501}=\frac13$\\ \hline
\end{tabular}
\\*[0.5ex]
\caption{Source scenarios, source variances normalized to $\mathop{\mathrm{tr}}(\Mat{\Gamma})=1$ }
\label{tab:source-scenarios}
\end{center}
\end{table}
\begin{table}
\begin{center}
\normalsize
\begin{tabular}{|l|l|l|l|} \hline
array data model & Eq. & parameters & ASNR \\ \hline
Gaussian & \eqref{eq:linear-model3} & $\Mat{\Sigma}$ & $N/\sigma^2$ \\
MVT & \eqref{eq:scale-mixture} & $\Mat{\Sigma}$, $\nu_{\mathrm{data}}=2.1$ & $N/\sigma^2$ \\
$\epsilon$-contaminated & \eqref{eq:linear-model3} & $\Mat{\Sigma}$, $\epsilon=0.05$, $\lambda=10$ & $N / \sigma^2$ \\ \hline
\end{tabular}
\\*[0.5ex]
\caption{Array data models}
\label{tab:data-models}
\end{center}
\end{table}
\begin{figure*}[t]
\parbox[t]{0.333\textwidth}{%
{\footnotesize a) \textcolor{white}{\tt p08fix\_Gmode\_s1MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Gmode_s1MC250SNRn15_with_legend.eps}\\
{\footnotesize b) \textcolor{white}{\tt p08\_Cmode\_s1MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Cmode_s1MC250SNRn15_with_legend.eps}\\
{\footnotesize c) \textcolor{white}{\tt p08\_emode\_s1MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_emode_s1MC250SNRn15_with_legend.eps}}
\parbox[t]{0.333\textwidth}{%
{\footnotesize d) \textcolor{white}{\tt p08\_Gmode\_s2MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Gmode_s2MC250SNRn15_with_legend.eps}\\
{\footnotesize e) \textcolor{white}{\tt p08\_Cmode\_s2MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Cmode_s2MC250SNRn15_with_legend.eps}\\
{\footnotesize f) \textcolor{white}{\tt p08fix\_emode\_s2MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_emode_s2MC250SNRn15_with_legend.eps}}
\parbox[t]{0.333\textwidth}{%
{\footnotesize g) \textcolor{white}{\tt p08fix\_Gmode\_s3MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Gmode_s3MC250SNRn15_with_legend.eps}\\
{\footnotesize h) \textcolor{white}{\tt p08\_Cmode\_s3MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_Cmode_s3MC250SNRn15_with_legend.eps}\\
{\footnotesize i) \textcolor{white}{\tt p08\_emode\_s3MC250SNRn15.eps}}\\
\includegraphics[width=0.325\textwidth]{./figures/includeTyler/p08fix_emode_s3MC250SNRn15_with_legend.eps}}
\caption{RMSE of DOA estimators vs.~ASNR. Left column: for single source at DOA $-10^{\circ}$. Center column: for two sources at DOAs $-10^{\circ}$ and $10^{\circ}$.
Right column: for three sources at DOAs $-3^{\circ}$, $2^{\circ}$ and $75^{\circ}$.
All: Simulation for uniform line array, $N=20$ sensors, $L=25$ array snapshots, and dictionary size $M=18001$ corresponding to DOA resolution $0.01^{\circ}$, averaged over $250$ realizations. Array data models: Top row: Gaussian, middle row: MVT ($\nu_{\mathrm{data}}=2.1$), bottom row: $\epsilon$-contaminated ($\epsilon=0.05,\lambda=10$). The CRB is for the Gaussian \eqref{eq:Gaussian-CRB} and MVT array data models; see Appendix \ref{sec:CRB}. }
\label{fig:s1s2s3MC250SNRn15}
\end{figure*}
\subsection{Single source scenario}
\label{eq:1src}
A single plane wave ($K=1$) with complex circularly symmetric zero-mean Gaussian amplitude is arriving from DOA $\theta_{8001} = -10^{\circ}$.
Here, $\mathrm{ASNR} = N / \sigma^2$, cf. \cite[Eq.~(8.112)]{VanTreesBook}.
Figure \ref{fig:s1s2s3MC250SNRn15} shows results for RMSE of DOA estimates in scenarios with $L=25$ snapshots and $N=20$ sensors.
RMSE is averaged over $250$ iid realizations of DOA estimates from array data $\Mat{Y}$.
There are more snapshots $L$ than sensors $N$, ensuring full rank
$\Mat{R}_{\Vec{Y}}$ almost surely.
\par
Simulations for Gaussian noise are shown in Fig. \ref{fig:s1s2s3MC250SNRn15}(a). For this scenario, the conventional beamformer (not shown) is the ML DOA estimator and approaches the CRB for ASNR greater than $3\,$dB.
All shown M-estimators for DOA perform almost identically for Gaussian array data in terms of RMSE, thanks to the consistency factor $b$ in \eqref{eq:consitency factor2} for the loss functions \eqref{eq:loss-H} and \eqref{eq:loss-T}, and only slightly worse than the CBF.
\par
Figure \ref{fig:s1s2s3MC250SNRn15}(b) shows simulations
for heavy-tailed MVT distributed array data with a small degrees-of-freedom parameter, $\nu_{\mathrm{data}}=2.1$.
We observe that the M-estimator for MVT-loss $\rho_{\mathrm{MVT}}$ for loss parameter $\nu_{\mathrm{loss}}=2.1$
performs best, closely followed by the M-estimators for Tyler loss and Huber-loss $\rho_{\mathrm{Huber}}$
for loss parameter $q=0.9$.
Here, the loss parameter $\nu_{\mathrm{loss}}$ used by M-estimator
is identical to the MVT array data parameter $\nu_{\mathrm{data}}$ and thus is expected to work well for this array data model.
In Fig.~\ref{fig:s1s2s3MC250SNRn15}(b), the M-estimator for MVT-loss $\rho_{\mathrm{MVT}}$ closely follows the Gaussian CRB for $\mathrm{ASNR}>3\,\mathrm{dB}$, although a small gap at high ASNR remains. Instead of comparing to the Gaussian CRB, we should compare to the CRB for CES distributed data derived in \cite[Eq. (20)]{besson2013fisher} and \cite[Eq. (17)]{greco2013cramer}.
The assumption that $\nu_{\mathrm{data}}$ is known \emph{a priori} is somewhat unrealistic.
However, methods to estimate $\nu_{\mathrm{data}}$ from data are available, e.g.,
\cite{pascal2021improved}.
Gauss loss exhibits the largest RMSE at high ASNR in this case.
\par
Results for $\epsilon$-contaminated noise are shown in Fig. \ref{fig:s1s2s3MC250SNRn15}(c) for outlier probability $\epsilon=0.05$ and outlier strength $\lambda=10$.
The resulting noise variance \eqref{eq:epscont-variance} for this heavy-tailed distribution is $\sigma^2=5.95\sigma^2_1$.
The M-estimators for Tyler loss and MVT-loss $\rho_{\mathrm{MVT}}$ perform identically and best in terms of their DOA RMSE followed by
the M-estimator for Huber loss $\rho_{\mathrm{Huber}}$ with an ASNR penalty of about 2\,dB.
The (non-robust) DOA estimator for Gauss loss $\rho_{\mathrm{Gauss}}$ exhibits the poorest RMSE, indicating a strong impact of outliers on the
original (non-robust) SBL algorithm for DOA estimation \cite{gerstoft2016mmv}.
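The quoted total noise variance can be checked with a minimal Python sketch; the two-component mixture form $\sigma^2=(1-\epsilon+\epsilon\lambda^2)\sigma_1^2$ is assumed here (it matches the ASNR expression used later in Fig.~\ref{fig:RMSEvsLambda}, since \eqref{eq:epscont-variance} is not reproduced in this section):

```python
def eps_contaminated_variance(eps, lam, sigma1_sq=1.0):
    """Total variance of eps-contaminated noise: with probability (1 - eps)
    the sample has variance sigma1^2, with probability eps it has
    variance (lam * sigma1)^2."""
    return (1.0 - eps + eps * lam**2) * sigma1_sq

# Scenario of Fig. (c): eps = 0.05, lam = 10  ->  sigma^2 = 5.95 * sigma1^2
print(eps_contaminated_variance(0.05, 10.0))  # 5.95
```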
\subsection{Two source scenario} \label{section:2src}
Next we consider the two source scenario ($K=2$) in Table \ref{tab:source-scenarios}. The DOAs of source 1 and source 2 are at
$\theta_{8001} = -10^\circ$ and $\theta_{10001}= 10^\circ$ and the corresponding true
$\mathcal{M}$ is $\{8001, 10001\}$.
Source strengths are specified in the scenario as
$\gamma_{8001} = 0.5$, $\gamma_{10001} = 0.5$, and $\gamma_m=0$ for all $m\not\in\mathcal{M}$, so that
$\trace{\Mat{\Gamma}}=1$.
Here, $\mathrm{ASNR} = N / \sigma^2$, cf. \cite[Eq.~(8.112)]{VanTreesBook}.
Figure \ref{fig:s1s2s3MC250SNRn15} shows results for RMSE of DOA estimates in scenarios with $L=25$ snapshots and $N=20$ sensors.
\par
Simulations for Gaussian noise are shown in Fig. \ref{fig:s1s2s3MC250SNRn15}(d). Here, the M-estimate for Huber loss $\rho_{\mathrm{Huber}}$ for loss parameter $q=0.9$ performs equally well as the M-estimate for Gauss loss $\rho_{\mathrm{Gauss}}$ which is equivalent to the original (non-robust) SBL algorithm for DOA estimation \cite{gerstoft2016mmv}.
They approach the CRB for ASNR greater than $9\,$dB. The DOA estimator for Tyler loss
performs slightly worse than the previous two.
Here, MVT-loss $\rho_{\mathrm{MVT}}$ for loss parameter $\nu_{\mathrm{loss}}=2.1$ has highest RMSE in DOA M-estimates.
\par
Figure \ref{fig:s1s2s3MC250SNRn15}(e) shows simulations
for heavy-tailed MVT array data with parameter $\nu_{\mathrm{data}}=2.1$ being small.
We observe that M-estimation with Tyler loss and MVT-loss $\rho_{\mathrm{MVT}}$ for loss parameter $\nu_{\mathrm{loss}}=2.1$ perform best, closely followed by M-estimation with Huber loss $\rho_{\mathrm{Huber}}$ with loss parameter $q=0.9$. The non-robust DOA estimator for $\rho_{\mathrm{Gauss}}$ performs much worse than the other two showing an ASNR penalty of about 6\,dB at high ASNR.
Here, the loss parameter $\nu_{\mathrm{loss}}$ used by the M-estimator for MVT-loss $\rho_{\mathrm{MVT}}$ is identical to the MVT array data parameter $\nu_{\mathrm{data}}$ used in generating the MVT-distributed array data and it is expected to work well for this array data model.
In Fig.~\ref{fig:s1s2s3MC250SNRn15}(e), the RMSE result for $\rho_{\mathrm{MVT}}$ closely follows the Gaussian CRB for $\mathrm{ASNR}>9\,\mathrm{dB}$, although a small gap at high ASNR remains. Instead of comparing to the Gaussian CRB, a semiparametric stochastic CRB for DOA estimation under the CES data model would be more appropriate for MVT noise \cite{Fortunati2019b}.
\par
Results for $\epsilon$-contaminated noise are shown in Fig. \ref{fig:s1s2s3MC250SNRn15}(f) for outlier probability $\epsilon=0.05$ and outlier strength $\lambda=10$.
M-estimation with Tyler loss and MVT-loss $\rho_{\mathrm{MVT}}$ for loss parameter $\nu_{\mathrm{loss}}=2.1$ show lowest RMSE followed by Huber loss with slight ASNR penalty.
Poorest RMSE is exhibited by the (non-robust) DOA estimator for Gauss loss $\rho_{\mathrm{Gauss}}$.
\subsection{Three source scenario}
\label{eq:3src}
Array data $\Mat{Y}$ are generated for the three source scenario ($K=3$) with complex circularly symmetric zero-mean Gaussian amplitude from DOAs $\theta_{8701}=-3^{\circ}$, $\theta_{9201}=2^{\circ}$, $\theta_{16501}=75^{\circ}$ according to the array data models in Table \ref{tab:data-models}.
The true active set $\mathcal{M}$ is $\{ 8701, 9201, 16501 \} $ and source strengths are specified in the scenario as
$\gamma_{8701} = \frac13$, $\gamma_{9201} = \frac13$, $\gamma_{16501} = \frac13$
and $\gamma_m=0$ for all $m\not\in\mathcal{M}$, so that
$\trace{\Mat{\Gamma}}=1$.
Here, $\mathrm{ASNR} = N / \sigma^2$, cf. \cite[Eq.~(8.112)]{VanTreesBook}. The RMSE performance shown in Figs. \ref{fig:s1s2s3MC250SNRn15}(g,h,i) are very similar to the corresponding results for the two source scenario shown in Figs. \ref{fig:s1s2s3MC250SNRn15}(d,e,f).
\subsection{Effect of Loss Function}
The effect of the loss function on RMSE performance at high $\mathrm{ASNR}=30\,$dB is illustrated in Fig. \ref{fig:summary}.
This shows that for Gaussian array data all choices of loss functions perform equally well at high ASNR. For MVT data in Fig. \ref{fig:summary} (middle), we see that the robust loss functions (MVT, Huber, Tyler) work well, and approximately equally, whereas the RMSE for Gauss loss is a factor of 2 worse.
For $\epsilon$-contaminated array data in Fig. \ref{fig:summary} (right) the Gauss loss performs substantially worse than the robust loss functions. Huber loss has slightly higher RMSE than MVT and Tyler loss.
\begin{figure}[t]
\center{\includegraphics[width=.8\columnwidth]{Summaryerror.eps}}
\caption{RMSE for each DOA M-estimator at high $\mathrm{ASNR}=30\,$dB for each of the three array data models (Gaussian, MVT and $\epsilon$-contaminated).}
\label{fig:summary}
\end{figure}
\subsection{Effect of Outlier Strength on RMSE}
\label{sec:outlier-strength}
For $\epsilon$-contaminated data with small outlier strength $\lambda$, the Gauss loss performs fine, but as the outlier noise increases the robust processors outperform it, see Fig.~\ref{fig:RMSEvsLambda}.
As $\lambda$ increases, the total noise changes, see \eqref{eq:epscont-variance}.
We here chose to keep the total noise constant in Fig.~\ref{fig:RMSEvsLambda}(left) by decreasing the background noise with increasing $\lambda$,
or having the background noise constant in Fig.~\ref{fig:RMSEvsLambda}(right) whereby the total noise increases.
For large noise outliers, Tyler loss clearly has the best performance in Fig.~\ref{fig:RMSEvsLambda}(left) and does not break down in Fig.~\ref{fig:RMSEvsLambda}(right).
\begin{figure}[t]
{\footnotesize \textcolor{white}{\tt RMSEvsLambda.eps}}\\
\includegraphics[width=\columnwidth]{RMSEvsLambda.eps}
\caption{\label{fig:RMSEvsLambda} For $\epsilon$-contaminated array data $\epsilon=0.05$ with one source, RMSE versus outlier strength $\lambda$ for each loss function for (left): ASNR$= 25\,$dB and background noise $\sigma_1$ is decreasing,
and (right): Background noise $\sigma_1$ is fixed and outlier noise $\lambda\sigma_1$ is increasing (at $\lambda=1$: ASNR$=25\,$dB; and at $\lambda=10^3$: ASNR $=25-10 \log(1-\epsilon+\epsilon\lambda^2)=-22$
dB ). RMSE evaluation based on $N_{\mathrm{run}}=250$ simulation runs.}
\end{figure}
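The ASNR values quoted in the caption of Fig.~\ref{fig:RMSEvsLambda}(right) follow directly from the stated formula $\mathrm{ASNR}=25-10\log(1-\epsilon+\epsilon\lambda^2)$; a minimal sketch (assuming only that formula) verifies both endpoints:

```python
import math

def asnr_db(lam, eps=0.05, asnr_at_lam1_db=25.0):
    """ASNR when the background noise sigma_1 is fixed and the total noise
    grows with outlier strength as sigma^2 = (1 - eps + eps*lam^2) * sigma_1^2."""
    return asnr_at_lam1_db - 10.0 * math.log10(1.0 - eps + eps * lam**2)

print(round(asnr_db(1.0)))   # 25   (at lam = 1 the mixture is plain Gaussian noise)
print(round(asnr_db(1e3)))   # -22  (matches the caption value at lam = 10^3)
```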
\begin{figure}[t]
{\footnotesize \textcolor{white}{\tt RMSEvsSnapshot.eps}}\\
\includegraphics[width=\columnwidth]{RMSEvsSnapshot.eps}
\caption{\label{fig:RMSEvsSnapshot} RMSE vs snapshots for $\epsilon$-contaminated array data ($\epsilon=0.05, \lambda=10$) and ASNR$=25\,$dB. Similar setup as Fig. \ref{fig:s1s2s3MC250SNRn15} with 3 sources. }
\end{figure}
\subsection{Effect of Dictionary Size on RMSE}
\label{sec:hires-grid}
Due to the algorithmic speedup associated with the DOA grid pruning described in
Sec. \ref{sec:algorithm}, it is feasible and useful to run the algorithm in Table \ref{table:algorithm} with large dictionary size $M$
which translates to the dictionary's angular grid resolution of $\delta=180^\circ/(M-1)$.
The effect of grid resolution is illustrated in Fig.~\ref{fig:gridresolution} for a single source impinging on a $N=20$ element $\lambda/2$-spaced ULA.
The Gaussian array data model is used.
Fig.~\ref{fig:gridresolution} shows RMSE vs.\ ASNR for a dictionary size of $M\in\{181, 361, 1801, 18001, 180001 \}$.
In Fig.~\ref{fig:gridresolution}(a), the DOA is fixed
at $-10^\circ$, cf.\ the single source scenario in Table \ref{tab:source-scenarios}; the DOA is on the angular grid which defines the dictionary matrix $\Mat{A}$.
In Fig.~\ref{fig:gridresolution}(b)
the DOA is random: the source DOA is sampled from $-10^\circ + U(-\delta/2, \delta/2)$,
where $\delta=180^{\circ}/(M-1)$ is the angular grid resolution.
The source DOA is thus not on the angular grid which defines the dictionary matrix $\Mat{A}$.
For source DOA on the dictionary grid, Fig.~\ref{fig:gridresolution}(a), the RMSE performance curve resembles the behavior of an ML-estimator at low ASNR up to a certain threshold ASNR (dashed vertical lines) where the RMSE abruptly crosses the CRB and becomes zero.
The threshold ASNR is deduced from the following argument:
Let $\Vec{a}_m$ be the true DOA dictionary vector and $\Vec{a}_{m+1}$ be the dictionary vector for adjacent DOA on the angular grid. Comparing the corresponding Bartlett powers, we see that DOA errors become likely if the noise variance exceeds $2(|\Vec{a}_m^H\Vec{a}_m| -|\Vec{a}_m^H \Vec{a}_{m +1}|)/N = 2 - 2|\Vec{a}_m^H\Vec{a}_{m +1}|/N$.
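This threshold argument can be evaluated numerically. The sketch below assumes a half-wavelength-spaced ULA with steering vector entries $e^{j\pi n\sin\theta}$ (a notational assumption, not spelled out above) and shows that finer grids give smaller threshold noise variances, i.e. higher threshold ASNRs, consistent with the dashed vertical lines in Fig.~\ref{fig:gridresolution}(a):

```python
import numpy as np

def steering(theta_deg, N=20):
    # Half-wavelength-spaced ULA steering vector; |a|^2 = N.
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

def threshold_variance(theta_deg, delta_deg, N=20):
    """Noise variance above which the adjacent grid point's Bartlett power
    can exceed the true one: 2 - 2|a_m^H a_{m+1}| / N."""
    a_m  = steering(theta_deg, N)
    a_m1 = steering(theta_deg + delta_deg, N)
    return 2.0 - 2.0 * abs(np.vdot(a_m, a_m1)) / N

# Finer grid (smaller delta) -> smaller threshold variance
# -> higher threshold ASNR = N / sigma^2.
for delta in (1.0, 0.1, 0.01):
    print(delta, threshold_variance(-10.0, delta))
```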
For source DOA off the dictionary grid, Fig.~\ref{fig:gridresolution}(b), the RMSE performance curve resembles the behavior of an ML-estimator at low ASNR up to a threshold ASNR. In the random DOA scenario, however, the RMSE flattens at increasing ASNR.
Since the variance of the uniformly distributed source DOA is $\delta^2/12$,
the limiting RMSE is $\mathrm{RMSE}=\delta/\sqrt{12}$ for $\mathrm{ASNR}\to\infty$.
This limiting RMSE depends on the dictionary size $M$ through the angular grid resolution $\delta$; the asymptotic RMSE limits are shown as dashed horizontal lines in Fig.~\ref{fig:gridresolution}(b).
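The grid-limited RMSE floor is easy to tabulate; a minimal sketch using only $\delta=180^\circ/(M-1)$ and $\delta/\sqrt{12}$:

```python
import math

def grid_resolution_deg(M):
    # Angular grid resolution of an M-point dictionary spanning 180 degrees.
    return 180.0 / (M - 1)

def limiting_rmse_deg(M):
    # RMSE floor for a DOA uniformly distributed within one grid cell:
    # the std of U(-delta/2, delta/2) is delta / sqrt(12).
    return grid_resolution_deg(M) / math.sqrt(12.0)

for M in (181, 361, 1801, 18001, 180001):
    print(M, limiting_rmse_deg(M))
```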
\begin{figure}[t]
{\footnotesize \textcolor{white}{\tt gridresolution.eps}}\\
\includegraphics[width=\columnwidth]{gridresolution.eps}
\caption{\label{fig:gridresolution} Effect of dictionary size $M\in\{181, 361, 1801, 18001, 180001 \}$ on RMSE vs.~SNR for a single source a) fixed DOA $-10^\circ$ on the grid and b) random uniformly distributed DOA $\sim -10^\circ + U(-\delta/2,\delta/2)$.
RMSE evaluation based on $N_{\mathrm{run}}=250$ simulation runs.}
\end{figure}
\section{Convergence Behavior and Run Time}
\label{sec:convergence}
The DOA M-estimation algorithm in Table \ref{table:algorithm} uses an iteration to estimate the active set $\mathcal{M}$ whose elements represent the estimated source DOAs. The required number of iterations for convergence of $\mathcal{M}$
depends on the source scenario, array data model, and ASNR.
Figure \ref{fig:s3MC250SNRn15cpu} shows the required number of iterations for the three source scenario versus ASNR and all three array data models.
Figure \ref{fig:s3MC250SNRn15cpu} shows fast convergence for high ASNR, and the number of iterations decreases with increasing ASNR.
At $\mathrm{ASNR}< 5\,$dB, where the noise dominates the array data, the number of iterations is around 100 and approximately independent of the ASNR.
Figure \ref{fig:s3MC250SNRn15cpu}(a) shows that the number of iterations for MVT-loss at low ASNR and Gaussian array data is about 25\% larger than for the other loss functions.
In the intermediate ASNR range 5--20\,dB, the largest number of iterations is required as the algorithm searches the dictionary to find the best matching DOAs.
The peak number of iterations is near 160 at ASNR levels between 12 and 15\,dB.
Figure \ref{fig:s3MC250SNRn15cpu}(c) shows that the number of iterations for Tyler loss and MVT-loss for $\epsilon$-contaminated array data
at high ASNR are lowest, followed by Huber loss and Gauss loss.
\begin{figure}
{\footnotesize a) \textcolor{white}{\tt p08fix\_Gmode\_s3MC250SNRn15cpu.eps}}\\
\includegraphics[width=0.9\columnwidth]{./figures/includeTyler/cpu/p08fix_Gmode_s3MC250SNRn15cpu_with_legend.eps}\\
{\footnotesize b) \textcolor{white}{\tt p08fix\_Cmode\_s3MC250SNRn15cpu.eps}}\\
\includegraphics[width=0.9\columnwidth]{./figures/includeTyler/cpu/p08fix_Cmode_s3MC250SNRn15cpu_with_legend.eps}\\
{\footnotesize c) \textcolor{white}{\tt p08fix\_emode\_s3MC250SNRn15cpu.eps}}\\
\includegraphics[width=0.9\columnwidth]{./figures/includeTyler/cpu/p08fix_emode_s3MC250SNRn15cpu_with_legend.eps}
\caption{Iteration count of DOA estimators vs.~ASNR for three sources at DOAs $-3^{\circ}$, $2^{\circ}$ and $75^{\circ}$. Simulation for uniform line array, $N=20$ sensors, $L=25$ array snapshots, and dictionary size $M=18001$ corresponding to DOA resolution $0.01^{\circ}$, averaged over $250$ realizations. Noise: (a) Gaussian, (b) MVT ($\nu_{\mathrm{data}}=2.1$), (c) $\epsilon$-contaminated ($\epsilon=0.05,\lambda=10$). }
\label{fig:s3MC250SNRn15cpu}
\end{figure}
\begin{figure}[t]
{\footnotesize \textcolor{white}{\tt CPUtime.eps}}\\
\includegraphics[width=\columnwidth]{CPUtime.eps}
\caption{\label{fig:CPUtimes} CPU times for the three source scenario vs.\ ASNR for $\epsilon$-contaminated array data processed with Gauss loss, dictionary size $M\in\{181, 18001, 180001 \}$ and for $M=18001$ for Huber, MVT and Tyler loss.}
\end{figure}
The CPU times on an M1 Macbook Pro are shown in Fig.~\ref{fig:CPUtimes} for $\epsilon$-contaminated array data and various choices of dictionary size and loss function.
For a fixed dictionary ($M=18001$), the choice of loss function does not much influence the CPU time.
At $\mathrm{ASNR}> 18\,$dB, MVT and Tyler loss consume just slightly less CPU time than Huber and Gauss loss.
For low ASNR, the CPU time increases approximately in proportion to the dictionary size ratio $180001/181\approx 1000$, but at high ASNR this ratio reduces to about 50, due to the efficiency of the DOA grid pruning, cf. Sec. \ref{sec:algorithm}.
The $M=180001$ dictionary is quite large in this scenario, but in other scenarios for localization in 3 dimensions, this is an expected dictionary size.
\section{Conclusion}
\label{sec:conclusion}
Robust and sparse DOA M-estimation is derived based on array data following a zero-mean complex elliptically symmetric distribution with finite second-order moments. The derivation is based on loss functions which can be chosen freely subject to certain existence and uniqueness conditions.
The DOA M-estimator is numerically evaluated by iterations and made available
on GitHub \cite{RobustSBL-github}.
A specific choice of loss function determines the RMSE performance of the resulting DOA M-estimate for different array data distributions.
Four choices of loss function are discussed and investigated in numerical simulations with synthetic array data: the ML-loss function for the circular complex multivariate $t$-distribution with $\nu$ degrees of freedom, the loss functions for the Huber and Tyler M-estimators, and the Gauss loss, for which the method reduces to Sparse Bayesian Learning.
We discuss the robustness of these DOA M-estimators by evaluating the root mean square error for Gaussian, MVT, and $\epsilon$-contaminated array data. The robust and sparse M-estimators for DOA perform well in simulations for MVT and $\epsilon$-contaminated noise and nearly identically to classical SBL for Gaussian noise.
\section{Introduction}
The leptonic decays of mesons provide access to experimentally clean measurements of
the meson decay constants or the relevant Cabibbo-Kobayashi-Maskawa
matrix elements. In the Standard Model (SM) the branching fraction for a leptonic decay of
a charged pseudoscalar meson, such as $D^+_s$, is given by \cite{PDG,Rosner:2012bb}:
\begin{equation}
{\cal B}(D_{s}^+\to \ell^+\nu_{\ell})=\frac{\tau_{D_{s}}M_{D_{s}}}{8\pi}f_{D_{s}}^2G_F^2|V_{cs}|^2m_{\ell}^2\left(1-\frac{m_{\ell}^2}{M_{D_{s}}^2} \right)^2,
\label{eq:brleptonic_sm}
\end{equation}
where $M_{D_{s}}$ is the $D_{s}$ mass, $\tau_{D_{s}}$ is its lifetime, $m_{\ell}$ is the lepton mass, $V_{cs}$ is the
Cabibbo-Kobayashi-Maskawa (CKM) matrix element between the $D_{s}$ constituent quarks $c$ and $s$, and $G_F$ is the Fermi coupling constant.
The parameter $f_{D_{s}}$ is the decay constant, and is related to the wave-function overlap of the quark and anti-quark.
The magnitude of the relevant CKM matrix element, $|V_{cs}|$, can be obtained from the very well measured
$|V_{ud}|=0.97425(22)$ and $|V_{cb}|=0.04$ from an average of exclusive and inclusive semileptonic B decay results
as discussed in Ref. \cite{vcb} by using the following relation, $|V_{cs}|=|V_{ud}|-\frac{1}{2}|V_{cb}|^2$. Measurements of leptonic
branching fractions of pseudoscalar mesons thus provide a clean probe of the decay constant, which can then be compared with precise
lattice QCD calculations~\cite{Na:2013ti}.
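As an illustration of Eq. \ref{eq:brleptonic_sm} and the unitarity relation above, the sketch below evaluates $|V_{cs}|$ and ${\cal B}(D_s^+\to\mu^+\nu_{\mu})$. The numerical inputs for $G_F$, the masses, the $D_s$ lifetime and $f_{D_s}$ are illustrative PDG-like values assumed here, not quoted in this text:

```python
import math

# Illustrative PDG-like inputs (assumptions, not from the text):
G_F  = 1.1663787e-5   # Fermi constant, GeV^-2
M_Ds = 1.96835        # D_s mass, GeV
tau  = 5.04e-13       # D_s lifetime, s
hbar = 6.58212e-25    # GeV * s
m_mu = 0.1056584      # muon mass, GeV
f_Ds = 0.2572         # decay constant, GeV (world average from Sec. 3)

# |V_cs| from the unitarity relation quoted in the text:
V_ud, V_cb = 0.97425, 0.04
V_cs = V_ud - 0.5 * V_cb**2
print(V_cs)  # 0.97345

# Branching fraction from Eq. (1):
x = (m_mu / M_Ds)**2
B = (tau / hbar) * M_Ds / (8.0 * math.pi) * f_Ds**2 * G_F**2 \
    * V_cs**2 * m_mu**2 * (1.0 - x)**2
print(B)  # roughly 0.006, i.e. about 0.6%, consistent with Table 1
```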
\section{Absolute branching fraction measurement}
The methods of absolute branching fraction measurement of $D_{s}^-\to\ell^-\overline{\nu}{}_{\ell}$ decays used recently by Belle~\cite{Zupanc:2012cd} and
before by the BaBar \cite{Sanchez} are similar. Both collaborations study $e^+e^- \to c\bar{c}$ events which contain $D_{s}^-$ mesons
produced through the following reactions:
\begin{equation}
e^+e^-\to c\bar{c}\to D_{\rm tag}KX_{\rm frag}D_{s}^{\ast -},~D_{s}^{\ast -}\to D_{s}^-\gamma.
\label{eq:signal_events_type}
\end{equation}
In these events one of the two charm quarks hadronizes into a $D_{s}^{\ast -}$ meson while the other quark
hadronizes into a charm hadron denoted as $D_{\rm tag}$ (tagging charm hadron). The above events are reconstructed fully in two steps: in the first step
$D_{s}$ mesons are reconstructed inclusively while in the second step $D_{s}\to\ell\nu_{\ell}$ decays are reconstructed within the inclusive sample.
The tagging charm hadron is reconstructed as $D^0$, $D^+$, $\Lambda_c^+$\footnote{In events where $\Lambda_c^+$ is reconstructed as tagging charm
hadron additional $\overline{p}$ is reconstructed in order to conserve the total baryon number.} in 18 (15) hadronic decay modes by Belle
(BaBar). In addition $D^{\ast +}$ or $D^{\ast 0}$ are reconstructed in order to clean up the event.
The strangeness of the event is conserved by requiring an additional kaon, denoted as $K$, which can be either a $K^+$ or a $K^0_S$.
Since the $B$-factories collected data at energies well above the
$D{}^{(\ast)}_{\rm tag} K D_s^{\ast}$ threshold, additional particles can be produced in the process of hadronization. These particles are
denoted as $X_{\rm frag}$ and can be an even number of kaons and/or any number of pions or photons.
Both Belle and BaBar reconstruct $X_{\rm frag}$ modes with up to three pions in order to keep the background at a reasonable level. $D_{s}^-$ mesons are required to be produced in
$D_{s}^{\ast -}\to D_{s}^-\gamma$ decays, which provide a powerful kinematic constraint (the $D^{\ast}_{s}$ mass, or the mass difference between $D^{\ast}_{s}$ and $D_{s}$)
that improves the resolution of the missing mass (defined below) and suppresses the combinatorial background.
In the first step of the measurement no requirements are placed on the daughters of the signal $D_{s}^-$ meson
in order to obtain a fully inclusive sample of $D_{s}^-$ events which is used for normalization
in the calculation of the branching fractions. The number of inclusively reconstructed $D_{s}$ mesons is
extracted from the distribution of events in the missing mass, $M_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma)$, recoiling against the $D_{\rm tag}KX_{\rm frag}\gamma$ system:
\begin{equation}
M_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma) = \sqrt{p_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma)^2},
\label{eq:massds}
\end{equation}
where $p_{\rm miss}$ is the missing momentum in the event:
\begin{equation}
p_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma) = p_{e^+} + p_{e^-} - p_{D_{\rm tag}} - p_{K} - p_{X_{\rm frag}} - p_{\gamma}.\\
\label{eq:pmiss}
\end{equation}
Here, $p_{e^+}$ and $p_{e^-}$ are the momenta of the colliding positron and electron beams, respectively, and the $p_{D_{\rm tag}}$, $p_{K}$,
$p_{X_{\rm frag}}$, and $p_{\gamma}$ are the measured momenta of the reconstructed $D_{\rm tag}$, kaon, fragmentation system and the photon from
$D^{\ast}_{s}\to\ds\gamma$ decay, respectively. Correctly reconstructed events of the type given in Eq.~\ref{eq:signal_events_type} produce a peak in the
$M_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma)$ distribution at the nominal $D_{s}$ meson mass, as shown in Fig. \ref{fig:dsincl}. Belle finds $94400\pm 1900$ correctly reconstructed
inclusive $D_{s}$ candidates in a data sample corresponding to 913 fb$^{-1}$, while BaBar finds $108900\pm2400$ events\footnote{
Note that Belle quotes the number of correctly reconstructed candidates while BaBar quotes the number of events. This is a subtle but important difference that the
reader should be aware of.} containing a $D_{s}$ meson in a data
sample corresponding to 521 fb$^{-1}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.54\textwidth]{mmiss_dsincl_data_xdm999.pdf}
\includegraphics[width=0.45\textwidth]{fig02.pdf}
\caption{The $M_{\rm miss}(D_{\rm tag}KX_{\rm frag}\gamma)=m_r(DKX\gamma)$ distributions for the cleanest $X_{\rm frag}$ mode (left) from Belle and all $X_{\rm frag}$ modes combined (right) from BaBar.
The two vertical lines indicate the signal region used in the $\ell\nu_{\ell}$ selections.}
\label{fig:dsincl}
\end{figure}
In the second step Belle and BaBar search for the purely leptonic $D_{s}^+\to\mu^+\nu_{\mu}$ and $D_{s}^+\to\tau^+\nu_{\tau}$ decays within the inclusive $D_{s}^+$
sample by requiring that there be exactly one additional charged track identified as an electron, muon or charged pion present in the rest of the event.
In the case of $D_{s}^+\to\tau^+\nu_{\tau}$ decays the electron, muon or charged pion track identifies the subsequent $\tau^+$ decay to
$e^+\nu_e\overline{\nu}{}_{\tau}$, $\mu^+\nu_{\mu}\overline{\nu}{}_{\tau}$ or $\pi^+\overline{\nu}{}_{\tau}$.
The $\ds^+\to\mu^+\nu_{\mu}$ decays are identified as a peak at zero in the missing mass squared distribution,
$M_{\rm miss}^2(D_{\rm tag}KX_{\rm frag}\gamma \mu) = p_{\rm miss}^2(D_{\rm tag}KX_{\rm frag}\gamma \mu)$ shown in Fig. \ref{fig:dsmunu}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.59\textwidth]{mm2mu_munu_data_opt0.pdf}
\includegraphics[width=0.4\textwidth]{fig03munu.pdf}
\caption{The $M_{\rm miss}^2(D_{\rm tag}KX_{\rm frag}\gamma \mu)=m_r^2(DKX\gamma\mu)$ distributions for $\ds^+\to\mu^+\nu_{\mu}$ candidates
within the inclusive $D_{s}$ sample.}
\label{fig:dsmunu}
\end{figure}
Due to the multiple neutrinos in the final state, the $D_{s}\to\tau\nu_{\tau}$ decays do not peak in the $M_{\rm miss}^2$ distribution. Instead, Belle and BaBar use the extra
neutral energy in the calorimeter\footnote{The $E_{\rm ECL}$ ($E_{\rm extra}$) at Belle (BaBar) is defined as a sum over all energy deposits in the calorimeter
with individual energy greater than 50 (30) MeV and which are not associated to the tracks or neutrals used in inclusive reconstruction of
$D_{s}$ candidates nor the $D_{s}\to\tau\nu_{\tau}$ decays.}, $E_{\rm ECL}$, to extract the signal yields of $D_{s}\to\tau\nu_{\tau}$ decays. These are expected to peak
towards zero in $E_{\rm ECL}$, while the backgrounds extend over a wide range, as shown in Fig. \ref{fig:dstaunu} for $D_{s}\to\tau\nu_{\tau}$ candidates
in which the $\tau$ lepton is reconstructed in its leptonic decay to a muon.
\begin{figure}[tb]
\centering
\includegraphics[width=0.57\textwidth]{eecl_taumunu_data_opt0.pdf}
\includegraphics[width=0.42\textwidth]{fig03taunu.pdf}
\caption{The $E_{\rm ECL}=E_{\rm extra}$ distribution for $D_{s}\to\tau(\mu)\nu_{\tau}$ candidates from Belle (left) and BaBar (right).}
\label{fig:dstaunu}
\end{figure}
Table \ref{tab:results} summarizes the signal yields and measured absolute branching fractions of leptonic $D_{s}$-meson decays at Belle and BaBar.
The latter are found to be consistent within uncertainties.
\begin{table}[t]\footnotesize
\centering
\begin{tabular}{l|cc|cc}\hline\hline
& \multicolumn{2}{|c|}{Belle} & \multicolumn{2}{|c}{BaBar}\\
$D_{s}^+$ Decay Mode & Signal Yield & ${\cal B}$ [\%] & Signal Yield & ${\cal B}$ [\%] \\
\hline\hline
$\mu\nu_{\mu}$ & $\phantom{0}489\pm26\phantom{0}$ & $0.528\pm0.028\pm0.019$ & $\phantom{0}275\pm17\phantom{0}$ & $0.602\pm0.038\pm0.034$\\\hline
$\tau\nu_{\tau}$ ($e$ mode) & $\phantom{0}952\pm59\phantom{0}$ & $5.37\pm0.33{}^{+0.35}_{-0.30}$ & $\phantom{0}489\pm26\phantom{0}$ & $5.07\pm0.52\pm0.68$\\
$\tau\nu_{\tau}$ ($\mu$ mode) & $\phantom{0}758\pm48\phantom{0}$ & $5.88\pm0.37{}^{+0.34}_{-0.58}$ & $\phantom{0}489\pm26\phantom{0}$ & $4.91\pm0.47\pm0.54$\\
$\tau\nu_{\tau}$ ($\pi$ mode) & $\phantom{0}496\pm35\phantom{0}$ & $5.96\pm0.42{}^{+0.45}_{-0.39}$ & & \\\hline
$\tau\nu_{\tau}$ (Combined) & & $5.70\pm0.21{}^{+0.31}_{-0.30}$ & & $5.00\pm0.35\pm0.49$\\\hline\hline
\end{tabular}
\caption{Signal yields and measured branching fractions for $D_{s}^+\to\ell^+\nu_{\ell}$ decays by Belle and BaBar.
The first uncertainty is statistical and the second is systematic. Results from Belle are preliminary.}
\label{tab:results}
\end{table}
\section{\boldmath Extraction of $f_{\ds}$ and Conclusions}
The value of $f_{\ds}$ is determined from measured branching fractions of leptonic $D_{s}$ decays by inverting Eq. \ref{eq:brleptonic_sm}.
The external inputs needed in the extraction of $f_{\ds}$ are all very precisely measured and do not introduce additional uncertainties,
except for the $D_{s}$ lifetime, $\tau_{D_{s}}$, which introduces a 0.70\% relative uncertainty on $f_{\ds}$.
Error-weighted averages\footnote{Average of the decay constants extracted from the measured ${\cal B}(\ds^+\to\mu^+\nu_{\mu})$ and ${\cal B}(D_{s}^+\to\tau^+\nu_{\tau})$.} of the $D_{s}$-meson decay
constant, $f_{\ds}$, are found by Belle and BaBar to be
\begin{eqnarray}
f_{\ds}^{\rm Belle} & = & (255.0\pm4.2(\rm stat.)\pm4.7(\rm syst.)\pm1.8(\tau_{D_s}))~\mbox{MeV}\\
f_{\ds}^{\rm BaBar} & = & (258.6\pm6.4(\rm stat.)\pm7.3(\rm syst.)\pm1.8(\tau_{D_s}))~\mbox{MeV}.
\end{eqnarray}
The preliminary results from Belle represent the most precise measurement of $f_{\ds}$ to date at a single experiment.
Averaging measurements of $f_{\ds}$ from $B$-factories with the one performed by CLEO-c experiment~\cite{Naik:2009tk},
$f_{\ds}^{\rm CLEO-c} = (259.0\pm6.2(\rm stat.)\pm2.4(\rm syst.)\pm1.8(\tau_{D_s}))~\mbox{MeV}$, gives an
experimental world average,
\begin{equation}
f_{\ds}^{\rm WA} = (257.2\pm4.5)~\mbox{MeV},\\
\end{equation}
which is consistent within $2\sigma$ with the currently most precise lattice QCD calculation from the HPQCD collaboration~\cite{Na:2012iu,Na:2013ti},
$f_{\ds}^{\rm HPQCD}=(246.0\pm3.6)$~MeV.
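The world average can be roughly reproduced by a naive error-weighted mean of the three quoted results. Treating all uncertainties as uncorrelated is an assumption of this sketch: the common $\tau_{D_s}$ input is in fact shared between experiments, which may explain why the quoted world-average uncertainty of $4.5$~MeV is slightly larger than the naive result:

```python
import math

# (value, stat, syst, tau_Ds) in MeV, as quoted in the text:
measurements = {
    "Belle":  (255.0, 4.2, 4.7, 1.8),
    "BaBar":  (258.6, 6.4, 7.3, 1.8),
    "CLEO-c": (259.0, 6.2, 2.4, 1.8),
}

# Naive error-weighted average, combining each experiment's
# uncertainties in quadrature and treating them as uncorrelated:
w_sum, wx_sum = 0.0, 0.0
for value, stat, syst, tau in measurements.values():
    var = stat**2 + syst**2 + tau**2
    w_sum  += 1.0 / var
    wx_sum += value / var
mean, err = wx_sum / w_sum, 1.0 / math.sqrt(w_sum)
print(round(mean, 1), round(err, 1))  # 257.2 4.3
```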
\subsubsection*{{\textsf{2.1 The Purely Magnetic Case }}}
We begin with a purely magnetically charged black hole, $Q^* = 0$; the holographic dual is a plasma with zero or negligible baryonic chemical potential (see below), permeated by a transverse magnetic field with intensity related to $P^*$ (see equation (\ref{H}) above). In this case, the Euclidean and Lorentzian versions of the quantity $\mathfrak{S}$ coincide, so we can state the physical meaning of the inequality (\ref{K}) by quoting directly from the conclusions of \cite{kn:82}, where this case was investigated in detail: one finds in this case that (\ref{K}) is equivalent to
\begin{equation}\label{L}
B\;\leqslant \;2\pi^{3/2}T^2,
\end{equation}
where $B$ is the magnetic field strength (defined in terms of the flux through a two-dimensional surface) associated with the boundary theory, and $T$ is the Hawking temperature of the black hole (and therefore, by holography, the temperature of the boundary field theory). Thus, if we accept the claim that the boundary field theory resembles a strongly coupled quark-gluon plasma with negligible baryonic chemical potential but subjected to a transverse magnetic field, the fundamental consistency condition (\ref{A}) forbids the magnetic field to be extremely large relative to the squared temperature.
It is interesting that an abstract consistency condition can be related in such a direct manner to the physical parameters of a quasi-realistic physical system; in fact, the consistency condition is making an unexpected physical prediction, that the inequality (\ref{L}) must hold if the plasma does admit a holographic description. Notice in this connection that this case is quite unlike the one, considered earlier, with a bulk containing a black hole with a negatively curved event horizon; for, in that case, we did not need holography to inform us that the boundary theory might be pathological: that was clear from the coupling of a certain scalar to the boundary scalar curvature. Here the boundary has zero scalar curvature, indeed it is flat, so holography reveals something genuinely novel.
But do real plasmas actually satisfy (\ref{L})? As a matter of fact, it is well known that gigantic magnetic fields (of the order $10^{17}$ gauss or more) do arise in two actual quark plasma systems: in the plasma generated by peripheral collisions at heavy-ion colliders \cite{kn:skokov,kn:kharzeev1,kn:naylor,kn:kharzeev2}, and during the plasma era of the early Universe \cite{kn:reviewA,kn:reviewB,kn:planck}. In both cases, however, the temperature is also enormous, so the status of (\ref{L}) is unclear.
If we consider a plasma temperature around the hadronization temperature (say, 150 MeV) which is a natural choice\footnote{In the case of cosmic magnetic fields, the choice of temperature is not important because $B$ and $T^2$ are normally assumed to evolve in the same way with the cosmic expansion, so (\ref{L}) will be satisfied at all temperatures if it is satisfied at any given temperature during the plasma era.} in view of the possibility that the plasma may be less strongly coupled at significantly higher temperatures, then (\ref{L}) takes the explicit form
\begin{equation}\label{M}
eB\; \lesssim \; 3.6 \times 10^{18}\;\; \text{gauss}.
\end{equation}
Interestingly, this is just above the estimated maximal magnetic fields attained in collisions at the RHIC facility \cite{kn:skokov}. The ALICE facility at the LHC \cite{kn:ALICE} observes the plasma at about twice this temperature, but also at significantly higher values of $B$, and it is possible that the ALICE plasma comes very close to saturating (\ref{L}).
However, it may be premature to attach much importance to this, for several reasons. Most importantly, the magnetic fields generated in heavy ion collisions are very short-lived, perhaps too short-lived for the Seiberg-Witten instability (discussed in the preceding section) to affect them \cite{kn:77}; also, they are associated with extremely large angular momentum densities, which we are not taking into account here, though it is known that they are important holographically \cite{kn:klemm,kn:shear,kn:79}. Let us turn, then, to the case of the cosmic plasma, where these complications do not normally (but see \cite{kn:brand}) arise\footnote{The cosmic plasma endures for a period of time (several microseconds) which is very long by strong-interaction standards, so, if it violates (\ref{L}), there will be more than sufficient time for the corresponding instability to develop; in other words, the system would then be inconsistent in both the Euclidean and Lorentzian senses.}.
In this case, a huge magnetic field might be present at the end of the cosmic plasma era, surviving from certain effects during the Inflationary era: see \cite{kn:reviewA,kn:reviewB}. However, the maximal estimated size of the magnetic field (at hadronization) in this case is \cite{kn:82,kn:83} around $3.7 \times 10^{17}$ gauss, which still satisfies (\ref{L}).
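As a rough cross-check of these magnitudes (ours, not taken from the cited papers), one can convert the quoted field strengths to natural units, using the Schwinger critical field $B_c \approx 4.41\times10^{13}$ gauss, for which $eB_c = m_e^2$, as the conversion anchor; then $eB = 1\,\mathrm{GeV}^2$ corresponds to roughly $1.69\times10^{20}$ gauss. One finds that the bound (\ref{M}) corresponds to $eB$ of order $T^2$ at $T=150$ MeV, and that the maximal inflationary estimate lies an order of magnitude below it:

```python
# Illustrative unit check (not taken from the cited papers). The Schwinger
# critical field B_c = 4.41e13 gauss satisfies eB_c = m_e^2, which fixes the
# conversion between eB in GeV^2 and B in gauss.
m_e = 0.511e-3                      # electron mass in GeV
B_c = 4.41e13                       # Schwinger critical field in gauss
gauss_per_GeV2 = B_c / m_e**2       # ~1.69e20 gauss per GeV^2 of eB

T = 0.150                           # hadronization temperature in GeV
eB_bound = 3.6e18 / gauss_per_GeV2      # the bound (M), in GeV^2
eB_inflation = 3.7e17 / gauss_per_GeV2  # maximal inflationary estimate

print(eB_bound / T**2)              # ~0.95: the bound is of order T^2
print(eB_inflation < eB_bound)      # True: the cosmic field satisfies (M)
```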
To summarize: holographic consistency (in both senses) can, in principle, be violated by a concrete physical system with a non-Einstein holographic dual: a relatively long-lived quark-gluon plasma inhabiting a flat (or conformally flat) spacetime, accompanied by a sufficiently large magnetic field. In fact, however, \emph{no known example of such a plasma has a magnetic field which clearly violates the inequality (\ref{L}).}
\subsubsection*{{\textsf{2.2 The Purely Electric Case }}}
In contrast to the magnetic case, in the purely electric case ($P^* = 0$), inequality (\ref{K}) is automatically satisfied for all values of $Q^*$. The case $P^* = 0$ of $g^E(\mathrm{AdSdyRN^{0}_{4}})$ therefore provides us with a concrete example of a non-Einstein, Euclidean bulk in which the condition (\ref{B}) \emph{is} indeed satisfied everywhere. That is, a non-Einstein bulk does not necessarily violate (\ref{B}).
If we consider only the Euclidean case, we will conclude that holographic consistency always holds in this case. But if we require also that the Lorentzian version of the system should be well-behaved, then we have to require (from (\ref{K})) that
\begin{equation}\label{N}
4\pi Q^{*2}L^2 \;\leqslant \;(r_h)^4,
\end{equation}
where $r_h$ is now the coordinate of the Lorentzian event horizon, and it is no longer clear that this will always hold.
The implications of imposing (\ref{N}) in the Lorentzian case were explored in \cite{kn:83}. As is well known \cite{kn:koba}, the electric charge of the black hole is related holographically to the baryonic chemical potential of the boundary field theory, $\mu_B$. The relation is however not the same as the relation of $P^*$ to the boundary magnetic field, so the physical interpretation takes a quite different form. As in the previous section, and for the same reasons, it is difficult to apply our results to the case of the plasma produced in heavy-ion collisions; in any case, such plasmas normally have very low values of $\mu_B$, certainly at ALICE \cite{kn:ALICE}. (The next phase of the beam scan experiments at RHIC \cite{kn:STAR,kn:BEAM}, and future experiments at FAIR \cite{kn:FAIR}, are expected to change this situation, however.)
We therefore turn again to the cosmic plasma, assuming as usual that it resembles the boundary field theory. Here too, the conventional description involves a plasma with a very low value of $\mu_B$; but recently a new theory of the evolution of the cosmic plasma has been suggested, the ``Little Inflation'' theory \cite{kn:tillmann1,kn:tillmann2,kn:tillmann3}. In this approach, $\mu_B/T$, where $T$ is the temperature of the plasma, can be quite large, well above unity. This is reconciled with the observed baryon asymmetry by postulating that the end of the plasma era is triggered by the decay of a false QCD vacuum, associated with a first-order phase transition to the hadronic state. This is an interesting new approach to early universe cosmology, and it is possible that some of its many concrete predictions may be confirmed in the reasonably near future.
One expects the inequality (\ref{N}) to be relevant to this theory, and so indeed it is: in \cite{kn:83} it is shown that (\ref{N}) is equivalent to the restriction
\begin{equation}\label{O}
\mu_B/T \; \leqslant \; \left(1\;-\;2^{1/3}\;+\;2^{2/3} \right)\sqrt{\pi}\;\approx 2.353.
\end{equation}
This is a very strong condition in the ``Little Inflation'' context, and, as explained in \cite{kn:83}, it forces the cosmic plasma to hadronize quite close to the quark matter critical point \cite{kn:race}, where very distinctive phenomena analogous to critical opalescence \cite{kn:csorgo} may soon be observed in the beam scan experiments mentioned earlier. It is not hard to imagine that such fluctuation phenomena might prove to be irreconcilable with cosmological observations; in which case one \emph{might} eventually be led to conclude that ``Little Inflation'' actually involves values of $\mu_B/T$ well above 2.353. (This would \emph{not} be a problem for ``Little Inflation'' itself, where values of $\mu_B/T$ far larger than this ---$\,$ up to $\approx 100$ ---$\,$ are quite acceptable.) In short, it is perfectly conceivable that near-future observations will indicate that \emph{Lorentzian} holographic consistency is violated in ``Little Inflation'' cosmology, although \emph{Euclidean} holographic consistency always holds.
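The numerical value quoted in (\ref{O}) is easily checked:

```python
from math import pi, sqrt

# The bound (O) on mu_B/T from Lorentzian holographic consistency:
bound = (1 - 2**(1/3) + 2**(2/3)) * sqrt(pi)
print(round(bound, 3))   # 2.353
```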
However, at present there are many unknowns here: for example, even the location of the critical point in the quark matter phase diagram remains controversial \cite{kn:exploring}, and of course ``Little Inflation'' has itself yet to be confirmed. At present, then, \emph{ we have no convincing evidence to suggest that (\ref{O}) is violated}, even though, in principle, it might be: see \cite{kn:83} for the details.
To summarize in a manner parallel to the summary at the end of the preceding section: \emph{the Lorentzian version} of the consistency condition (\ref{A}) can, in principle, be violated by a concrete physical system with a non-Einstein holographic dual: a relatively long-lived quark-gluon plasma inhabiting a flat (or conformally flat) spacetime, described by a sufficiently large baryonic chemical potential. In fact, however, \emph{no known example of such a plasma has a baryonic chemical potential which clearly violates the inequality (\ref{O}).}
Summarizing this entire Section: in the case of a non-Einstein bulk, one has no guarantee that Euclidean holographic consistency will be satisfied, even if the conformal boundary has zero Yamabe invariant; and one has a concrete example (the purely magnetic case, above) where it is not satisfied, \emph{but} this requires magnetic fields stronger than any confirmed actually to exist. On the other hand, one also has a concrete example (the purely electric case) in which Euclidean holographic consistency does hold for all values of all parameters, yet in which Lorentzian consistency can fail in principle: but, once again, no known system actually does cause it to fail.
\subsection*{{\textsf{3. The Non-Einstein Case II: Scalars in the Bulk}}}
Electromagnetic fields are of course but one way of causing the bulk to be non-Einstein. Another form of bulk matter important in holographic applications is defined by \emph{scalar} (dilaton) fields. These are important in, for example, the holographic theory of the quark matter equation of state: see \cite{kn:dilaton} for a recent example with many references.
As in the example at the end of the preceding section, one is interested here in electrically charged\footnote{For the sake of simplicity, in this section we consider only electrically charged black holes. The magnetic case can be studied using electromagnetic duality: bear in mind, however, that the \emph{string} metric transforms non-trivially, because under the electromagnetic duality the dilaton transforms according to $\phi \rightarrow -\phi$. The dyonic solution is non-trivial even in the asymptotically flat case, since the presence of both electric and magnetic charges necessitates the presence of an (antisymmetric 3-form) axion field. See \cite{kn:horowitz, kn:GHS}.} dilatonic black holes in an AdS background. A generic scalar potential leads to black holes which are \emph{not} asymptotically AdS in the strict sense \cite{9407021,9412076,9202031}; we will confine ourselves here to black holes which \emph{are} asymptotically AdS. These are the Gao-Zhang black holes \cite{kn:gz}. These too have important applications; for example they have recently been used to good effect in the holographic theory of the thermalization of the quark plasma \cite{kn:zhang}; this is in fact one of the most active areas of applied holography.
The construction of the Gao-Zhang black holes is highly nontrivial: Gao and Zhang were forced to use a combination of \emph{three} Liouville-type potentials.
The corresponding action in $n$-dimensional spacetime is
\begin{equation}
S=-\frac{1}{16\pi}\int \mathrm{d}^nx \sqrt{-g} \left[R -\frac{4}{n-2}(\nabla \phi)^2 - V(\phi) - e^{-\frac{4\alpha \phi}{n-2}}F^2\right], ~\alpha \geqslant 0,
\end{equation}
where $\alpha$ is the coupling of the dilaton to the electromagnetic field.
The exact form of the potential is rather complicated, and is not important for our discussion here. In the case $\alpha=0$, the potential reduces to the (negative) cosmological constant,
and the dilaton field is identically zero (see for example equation (5) of \cite{kn:zhang}).
Note that in many applications of holography, especially if one is only interested in the IR physics, the precise details of the potential are not required for determining the low-energy behavior arising from the near-horizon geometry. This allows one to work with an effective action and its corresponding approximate black hole solution. However, for the purpose of analyzing the Seiberg-Witten instability, such an effective action is not suitable, since branes are sensitive to the global geometry of the spacetime. In other words, we wish to study the brane action not only in the near-horizon region, but at \emph{all} values of the coordinate radius $r$. Therefore we confine our attention to an exact solution of Gao-Zhang type.
The Gao-Zhang black hole solution \cite{kn:gz} (or, in our terminology, the $n$-dimensional asymptotically AdS dilatonic Reissner-Nordstr\"om black hole) is of the form
\begin{equation}
g(\mathrm{AdSdilRN}^{k}_{n})=-U(r)\mathrm{d}t^2 + W(r) \mathrm{d}r^2 + [f(r)]^2 \mathrm{d}\Omega^2[X^k_{n-2}],
\end{equation}
where $\mathrm{d}\Omega^2$ is a (dimensionless) metric on $X^k_{n-2}$, an $(n-2)$-dimensional Riemannian manifold of constant curvature $k$.
The coefficient functions are
\begin{equation}
\begin{cases}
U(r) = \left[k-\left(\dfrac{c}{r}\right)^{n-3}\right]\left[1-\left(\dfrac{b}{r}\right)^{n-3}\right]^{1-\gamma(n-3)} + \dfrac{r^2}{L^2}\left[1-\left(\dfrac{b}{r}\right)^{n-3}\right]^\gamma,
\\
\\
W(r) = U(r)^{-1}\left[1-\left(\dfrac{b}{r}\right)^{n-3}\right]^{-\gamma (n-4)}, \\
\end{cases}
\end{equation}
and
\begin{equation}
f(r)^2 = r^2\left[1-\left(\frac{b}{r}\right)^{n-3}\right]^\gamma, ~~~\gamma = \frac{2\alpha^2}{(n-3)(n-3+\alpha^2)},
\end{equation}
where $\gamma$ is of course unrelated to the constant in equation (\ref{A}), and where $b$ and $c$ are constants related to the physical mass and electric charge by the equations (see \cite{1002.0202})
\begin{equation}
M=\frac{V[X^k_{n-2}]}{16\pi}(n-2)\left[c^{n-3} + kb^{n-3}\left(\frac{n-3-\alpha^2}{n-3+\alpha^2}\right)\right], ~~\text{and}
\end{equation}
\begin{equation}
Q=\frac{V[X^k_{n-2}]}{4\pi}\left[\frac{(n-2)(n-3)^2}{2(n-3+\alpha^2)}(bc)^{n-3}\right]^{\frac{1}{2}},
\end{equation}
where $V[X^k_{n-2}]$ is the dimensionless volume of $X^k_{n-2}$.
In four dimensions, and for zero spatial curvature $k=0$, we have
\begin{equation}
U(r)=-\frac{c}{r}\left[1-\frac{b}{r}\right]^{\frac{1-\alpha^2}{1+\alpha^2}} + \frac{r^2}{L^2} \left[1-\frac{b}{r}\right]^{\frac{2\alpha^2}{1+\alpha^2}},
\end{equation}
and $W(r)=U(r)^{-1}$. Note that for this class of AdS black holes, $g_{tt}g_{rr}=-1$ only holds in four dimensions\footnote{It is possible to use a coordinate system $(t, R, \psi, \zeta)$, in which $R$, \emph{unlike} $r$, is an areal radius, and $\psi, \zeta$ are coordinates on a flat space. But then $g_{tt}g_{RR}\neq -1$ even in four dimensions. See \cite{ted}.}.
We also have
\begin{equation}
f(r)^2 = r^2\left(1-\frac{b}{r}\right)^{\frac{2\alpha^2}{1+\alpha^2}}.
\end{equation}
The mass and charge density parameters are given by
\begin{equation}
M^* = \frac{M}{V[X^0_{2}]} = \frac{c}{8\pi}, ~~ Q^*=\frac{Q}{V[X^0_{2}]}=\frac{1}{4\pi}\left(\frac{bc}{1+\alpha^2}\right)^{\frac{1}{2}}.
\end{equation}
Thus, under Wick rotation to Euclidean signature, $Q^{*2} \rightarrow -Q^{*2}$ implies $b \rightarrow -b$.
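As a consistency check (ours, not taken from \cite{kn:gz} or \cite{1002.0202}), a short symbolic computation confirms that the four-dimensional, $k=0$ expressions above do follow from the general formulas:

```python
import sympy as sp

n, k, alpha, r, b, c, L = sp.symbols('n k alpha r b c L', positive=True)

# General Gao-Zhang data, as quoted above:
gamma = 2*alpha**2 / ((n - 3)*(n - 3 + alpha**2))
U = ((k - (c/r)**(n - 3)) * (1 - (b/r)**(n - 3))**(1 - gamma*(n - 3))
     + (r**2/L**2) * (1 - (b/r)**(n - 3))**gamma)

# The claimed n = 4, k = 0 reduction of U(r):
U4 = (-(c/r) * (1 - b/r)**((1 - alpha**2)/(1 + alpha**2))
      + (r**2/L**2) * (1 - b/r)**((2*alpha**2)/(1 + alpha**2)))

# Compare at an arbitrary rational point (robust against simplification quirks):
point = {alpha: sp.Rational(7, 10), r: 3, b: sp.Rational(1, 2), c: 2, L: 1}
print(sp.simplify((U.subs({n: 4, k: 0}) - U4).subs(point)))  # 0

# The claimed reduction Q* = (1/4 pi) sqrt(bc/(1+alpha^2)) of the general charge:
Qs_general = sp.sqrt((n - 2)*(n - 3)**2 / (2*(n - 3 + alpha**2))
                     * (b*c)**(n - 3)) / (4*sp.pi)
Qs_claim = sp.sqrt(b*c / (1 + alpha**2)) / (4*sp.pi)
print(sp.simplify(Qs_general.subs(n, 4) - Qs_claim))  # 0
```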
The (Euclidean) quantity $\mathfrak{S^{\mathrm{E}}}$ for this metric, up to the usual positive constant factor, is
\begin{flalign}
\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdilRN^{0}_{4}})(r) &= r^2\left[1+\frac{b}{r}\right]^{\frac{2\alpha^2}{1+\alpha^2}} \left[\frac{r^2}{L^2}\left(1+\frac{b}{r}\right)^{\frac{2\alpha^2}{1+\alpha^2}} -\frac{c}{r}\left(1+\frac{b}{r}\right)^{\frac{1-\alpha^2}{1+\alpha^2}}\right]^{\frac{1}{2}}\\ \notag &~~~~-\frac{3}{L}\int_{r_h^E}^r s^2\left[1+\frac{b}{s}\right]^{\frac{2\alpha^2}{1+\alpha^2}} \mathrm{d}s \notag \\
&= \frac{r^3}{L}\left[1+\frac{b}{r}\right]^{\frac{3\alpha^2}{1+\alpha^2}}\left[1-\frac{cL^2}{r^3}\left(1+\frac{b}{r}\right)^{\frac{1-3\alpha^2}{1+\alpha^2}}\right]^{\frac{1}{2}} \\ \notag &~~~~-\frac{3}{L}\int_{r_h^E}^r s^2\left[1+\frac{b}{s}\right]^{\frac{2\alpha^2}{1+\alpha^2}} \mathrm{d}s,
\end{flalign}
where $r_h^E$ is the value of $r$ at the ``Euclidean horizon''.
Since the action is rather complicated, and since in general there is no closed form expression for $r_h^E$, let us fix $r_h^E=1$ in some unit of length. This fixes the relation between the parameters $b$ and $c$, and because the Euclidean horizon satisfies
\begin{equation}
1-\frac{cL^2}{(r_h^E)^3}\left(1+\frac{b}{r_h^E}\right)^{\frac{1-3\alpha^2}{1+\alpha^2}}=0,
\end{equation}
we have $(1+b)^\frac{3\alpha^2-1}{1+\alpha^2}=cL^2$. The action is now
\begin{flalign}
\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdilRN^{0}_{4}})(r) =&\frac{r^3}{L}\left[1+\frac{b}{r}\right]^{\frac{3\alpha^2}{1+\alpha^2}}\left[1-\frac{(1+b)^{\frac{3\alpha^2-1}{\alpha^2+1}}}{r^3}\left(1+\frac{b}{r}\right)^{\frac{1-3\alpha^2}{1+\alpha^2}}\right]^{\frac{1}{2}} \\ \notag &-\frac{3}{L}\int_{1}^r s^2\left[1+\frac{b}{s}\right]^{\frac{2\alpha^2}{1+\alpha^2}} \mathrm{d}s.
\end{flalign}
A numerical investigation indicates that the Euclidean action is always positive (an example is provided in Figure (\ref{fig1})). In fact, for large $r$, by expanding in powers of $1/r$, it can be shown that $\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdilRN^{0}_{4}})(r)$ grows linearly in $r$. Specifically, we have asymptotically,
\begin{equation}\label{OYC}
\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdilRN^{0}_{4}})(r) \sim \frac{3b^2\alpha^2}{2L(1+\alpha^2)^2} r + \text{const}(\alpha).
\end{equation}
Here, the term $\text{const}(\alpha)$ is an $\alpha$-dependent constant. For $\alpha=0$ (the electrically charged AdS-Reissner-Nordstr\"om case), the constant term is the only term that survives as $r$ tends to infinity, and it is positive.
Recall that for AdS Reissner-Nordstr\"om black holes with flat event horizons but no scalars or magnetic charges, this quantity is likewise everywhere positive. Thus, the introduction of the dilaton does not change the situation, for any value of the dilaton coupling.
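These claims are straightforward to probe numerically. The following sketch (ours, not part of the original analysis) evaluates $\mathfrak{S^{\mathrm{E}}}$ by direct quadrature for the representative values $\alpha=1$, $b=0.7$, $L=1$ used in Figure (\ref{fig1}), confirms positivity outside the Euclidean horizon, and checks the large-$r$ slope against the coefficient in (\ref{OYC}):

```python
import numpy as np
from scipy.integrate import quad

def S_E(r, alpha, b, L=1.0):
    """Euclidean brane action (up to a positive factor) for the flat 4D
    Gao-Zhang black hole, with the Euclidean horizon fixed at r_h^E = 1,
    so that cL^2 = (1 + b)^((3 alpha^2 - 1)/(1 + alpha^2))."""
    beta = alpha**2 / (1.0 + alpha**2)
    cL2 = (1.0 + b)**((3*alpha**2 - 1.0) / (1.0 + alpha**2))
    area = (r**3 / L) * (1.0 + b/r)**(3*beta) * np.sqrt(
        1.0 - (cL2 / r**3) * (1.0 + b/r)**((1.0 - 3*alpha**2) / (1.0 + alpha**2)))
    vol, _ = quad(lambda s: s**2 * (1.0 + b/s)**(2*beta), 1.0, r,
                  epsrel=1e-11, limit=200)
    return area - (3.0 / L) * vol

alpha, b = 1.0, 0.7
# Positivity outside the Euclidean horizon:
print(all(S_E(r, alpha, b) > 0 for r in (1.5, 3.0, 10.0, 100.0)))   # True
# Large-r slope versus the asymptotic coefficient in (OYC):
slope = (S_E(1000.0, alpha, b) - S_E(500.0, alpha, b)) / 500.0
predicted = 3 * b**2 * alpha**2 / (2 * (1 + alpha**2)**2)
print(slope, predicted)   # both close to 0.18375
```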
\begin{figure}[!h]
\centering
\includegraphics[width=6.0in]{euclidean.pdf}
\caption{\label{fig1}
The Euclidean brane actions $\mathfrak{S}^\text{E}(r)$ of the Gao-Zhang black hole (up to a positive factor) with various values of the dilaton coupling parameter $\alpha$, the same charge parameter $b=0.7$, and horizons held fixed at $r_h^E=1$. Here we set $L=1$; however, the value of $L$ contributes only an overall factor to the action, and thus does not affect its positivity. }
\end{figure}
It is otherwise in the Lorentzian case, however (although in both Euclidean and Lorentzian cases, the effect of the dilaton field is to increase the value of the brane action). This case was investigated at length (in five dimensions, but the four-dimensional situation is similar) in \cite{kn:ong}, to which we refer the reader for the details. To summarize:
\medskip
$\bullet$ If we fix the Lorentzian horizon at $r_h=1$ and repeat the calculation above, we find that the Lorentzian action is \emph{negative} (for sufficiently large electric charge) in some range of $r$ if $0 < \alpha < \alpha_c < 1$, where $\alpha_c$ is a critical value\footnote{One must be careful when normalizing the position of the horizon $r_h$ (and $r_h^E$), since although this preserves the qualitative behavior of the brane action, quantitative features can be affected. This means that given a normalization value of $r_h$, say $r_h=s > 0$, the critical value of $\alpha$ is actually dependent on $s$. The actual critical value should be defined as the smallest value of $\alpha$ such that the brane action is non-negative (for all admissible values of $b$), independent of $s$. The asymptotic behavior of the action given by equation (\ref{OYC}) remains unaffected by normalization choice; more specifically, only $\text{const}(\alpha)$ is normalization-dependent.}
of $\alpha$ (which we estimate numerically at around 0.53). Even in this regime, the action eventually turns around and asymptotically grows linearly in $r$, according to the same expression as above (equation (\ref{OYC})), which depends only on the \emph{square} of $b$. (In five spacetime dimensions, the action grows logarithmically in $r$ \cite{kn:ong}.) Of course, $\text{const}(\alpha)$ is different in the Lorentzian case.
This is quite different to the case without a dilaton, in which, for sufficiently large charge, the action becomes negative and \emph{stays} negative, that is, $\text{const}(\alpha)$ is negative if $\alpha=0$; see Figure (\ref{fig2}). As was pointed out by Maldacena and Maoz \cite{kn:maldacena}, actions of this kind represent a relatively benign form of instability, since the region with negative action is \emph{finite}: presumably the system evolves to some nearby state rather than getting entirely out of control. Nevertheless, this does suggest that, at high values of the electric charge, values of $\alpha$ smaller than $\alpha_c$ should not be considered internally consistent in the sense we are studying here. In short, the situation here has a similar physical interpretation to the one studied in the preceding section: when $\alpha < \alpha_c$, holographic consistency imposes an upper bound on $\mu_B/T$, the ratio of the baryonic chemical potential to the temperature. (This bound will take the form of an $\alpha$-dependent version of the inequality given by (\ref{O}) above.)
\medskip
$\bullet$ If $\alpha > \alpha_c$, then the Lorentzian action is positive for all values of the charge: holographic consistency imposes no restrictions in either the Euclidean or the Lorentzian case.
\medskip
\begin{figure}[!h]
\centering
\includegraphics[width=6.0in]{lorentzian.pdf}
\caption{\label{fig2}
The corresponding Lorentzian brane actions $\mathfrak{S}^\text{L}(r)$ of the Gao-Zhang black hole (up to a positive factor) with various values of the dilaton coupling parameter $\alpha$, the same charge parameter $b=0.7$, and horizons held fixed at $r_h=1$. Here we set $L=1$; however, the value of $L$ contributes only an overall factor to the action, and thus does not affect its sign. Note that the case $\alpha=0$ reduces to an electrically charged Reissner-Nordstr\"om black hole, discussed in subsection (2.2), which in this case tends asymptotically to the value $-0.6$. For $\alpha \neq 0$, the action always grows asymptotically in $r$.
}
\end{figure}
These results are potentially of great interest in the application of these black holes to the question of the thermalization of the quark-gluon plasma, as studied in \cite{kn:zhang}. There it was found\footnote{It is true that the metric used in \cite{kn:zhang} is not the Gao-Zhang metric itself, but rather a Vaidya-like deformation of it. However, we doubt that this will change the qualitative conclusions we are drawing here.} that there is an ($\alpha$-dependent) upper bound on $\mu_B/T$ when $\alpha > 1$, but no restriction whatever when $\alpha < 1$; this bound is not due to any instability, but rather simply to the form taken by $\mu_B/T$ as a function of another parameter (the saturation time; see Figure 3 in \cite{kn:zhang}). In other words, there is a bound on $\mu_B/T$ when the dilaton is strongly coupled ($\alpha > 1$). What we are finding here is that there is \emph{also} such a bound, imposed by holographic consistency, in the weak dilaton coupling regime ($0 < \alpha < \alpha_c < 1$).
The holographic bound, in the weak-coupling case, is presumably weaker (that is, higher) than in the case considered in the preceding section. If forthcoming data should violate the bound discussed above (inequality (\ref{O})), then one might try to use the $\alpha$-dependent version of it to avoid the conflict. The role of holography would then be to put a lower bound on $\alpha$. We conjecture that the strong-coupling bound might likewise be used to put a useful \emph{upper} bound on it. The task then would be to use the range of $\alpha$ values so obtained to constrain the values of parameters more directly related to observations, such as thermalization times. This has yet to be done.
In summary, the situation in this case is less clear, since the theory is less fully developed than in our earlier examples; all we can say definitely is that, while Euclidean consistency is certainly satisfied here, Lorentzian consistency is not automatic and may ultimately prove useful in constraining the key parameter $\alpha$. One can hope that, when the subject of holographic thermalization (or ``dilatonic holography'' more generally) is more mature, it will be possible to investigate more fully whether the Lorentzian consistency condition is satisfied here. At present, there is no reason to suspect otherwise.
\subsection*{{\textsf{4. The Einstein Case}}}
We saw in the preceding sections that, because (some) black hole parameters are affected by complexification, good behaviour in the Euclidean case does not necessarily ensure equally good behaviour in the Lorentzian case. One might be tempted to argue that this problem is due to the presence of matter in the bulk, since, after all, the difficulty arises from the presence of electromagnetic fields, and from the complexification of the electric charge. Unfortunately that is not so: the various bulk spacetimes endowed with \emph{angular momentum} are still Einstein manifolds in some cases, but, in every case, the angular momentum parameter has to be complexified in the passage to the Lorentzian domain, and we will see that this can have consequences similar to those associated with complexifying electric charge.
We will consider two cases: topologically spherical event horizons, and their planar counterparts.
\subsubsection*{{\textsf{4.1 AdS-Dyonic-Kerr-Newman with Topologically Spherical Event Horizon }}}
The four-dimensional asymptotically AdS dyonic Kerr-Newman metric with a topologically spherical event horizon (which we continue to indicate by a ($+1$) superscript, though the actual geometry is not that of a round sphere) \cite{kn:carter} takes the form, in Boyer-Lindquist-like coordinates,
\begin{flalign}\label{P}
g(\mathrm{AdSdyKN^{+1}_{4}}) = &- {\Delta_r \over \rho^2}\Bigg[\,\mathrm{d}t \; - \; {a \over \Xi}\mathrm{sin}^2\theta \,\mathrm{d}\phi\Bigg]^2\;+\;{\rho^2 \over \Delta_r}\mathrm{d}r^2\;+\;{\rho^2 \over \Delta_{\theta}}\mathrm{d}\theta^2 \\ \notag \,\,\,\,&+\;{\mathrm{sin}^2\theta \,\Delta_{\theta} \over \rho^2}\Bigg[a\,\mathrm{d}t \; - \;{r^2\,+\,a^2 \over \Xi}\,\mathrm{d}\phi\Bigg]^2,
\end{flalign}
where again the ``dy'' denotes ``dyonic'' and where
\begin{eqnarray}\label{eq:Q}
\rho^2& = & r^2\;+\;a^2\mathrm{cos}^2\theta, \nonumber\\
\Delta_r & = & (r^2+a^2)\Big(1 + {r^2\over L^2}\Big) - 2Mr + {Q^2 + P^2\over 4\pi},\nonumber\\
\Delta_{\theta}& = & 1 - {a^2\over L^2} \, \mathrm{cos}^2\theta, \nonumber\\
\Xi & = & 1 - {a^2\over L^2}.
\end{eqnarray}
Here $- 1/L^2$ is the asymptotic curvature, $a$ is the angular momentum/mass ratio, and $M, Q$, and $P$ are related to the physical mass $E$, electric charge $q$, and magnetic charge $p$, by (see \cite{kn:gibperry})
\begin{equation}\label{R}
E\;=\;M/\Xi^2, \;\;\;\;\;q\;=\;Q/\Xi,\;\;\;\;\;p\;=\;P/\Xi;
\end{equation}
note that all of these depend on the angular momentum. As before, this metric is not, in general, an Einstein metric; but it is Einstein when $P = Q = 0$, for \emph{any} value of $a$. That is the case in which we are most interested here; but it will be interesting to retain $Q$ and $P$ so as to study the general case.
This black hole corresponds holographically to a \emph{rotating} quark-gluon plasma \cite{kn:sonner,kn:schalm}. In fact, it is expected that, under some circumstances (connected with the viscosity of the plasma), the plasma produced in a peripheral heavy-ion collision will indeed have a strong rotational motion \cite{kn:KelvinHelm,kn:viscous}, so this geometry supplies a holographic description of that motion. (In other cases, the internal motion of the plasma is a shearing rather than a rotation: see the next section.)
Now the geometry of the spacetime described by (\ref{P}) is, unless we impose a certain condition, rather peculiar. In particular, consider the function $\Delta_{\theta}$: in general, this function does not have a fixed sign, being positive in directions near the equator, but possibly negative towards the poles. If indeed $\Delta_{\theta}$ does change sign in this way, then the signature of the metric (outside the event horizon) changes from $(-\,+\,+\,+)$ to $(-\,+\,-\,-)$ as one rotates from the equator to the poles, so that, in particular, the geometry at conformal infinity ($r\,\rightarrow \, \infty$) has signature $(-\,-\,-)$ in some directions, $(-\,+\,+)$ in others. This bizarre behaviour is unphysical from a holographic point of view, indeed probably from any point of view\footnote{Note that this is not like the more familiar signature change discussed in, for example, \cite{kn:ellis}, or more recently in \cite{kn:bojo}.}, so we have to impose the condition\footnote{The case with $a^2/L^2 = 1$ is excluded because then, by the equations (\ref{R}), the parameters $M, Q, P$ have no physical interpretation.}
\begin{equation}\label{S}
a^2/L^2 \;<\; 1;
\end{equation}
this strange relation between the angular momentum/mass ratio of the black hole and the asymptotic spacetime curvature is the only way to ensure that $\Delta_{\theta}$ remains positive for all $\theta$. This apparently recondite point will in fact be crucial for our later discussion.
The electromagnetic potential form in the exterior spacetime is given by (see \cite{kn:frolov} for the asymptotically flat case)
\begin{equation}\label{T}
A = -\,{Q\,\Xi\, r\over 4\pi\rho^2}\left[\mathrm{d}t-{a\,\mathrm{sin}^2\theta\over \Xi}\mathrm{d}\phi\right]-{P\,\Xi\,\mathrm{cos} \theta\over 4\pi\rho^2} \left[a\,\mathrm{d}t - {r^2+a^2 \over \Xi}\mathrm{d}\phi\right].
\end{equation}
From this one sees that $a$ must be complexified ($a\,\rightarrow \, -ia$) along with $Q$ ($Q\,\rightarrow \, -iQ$) when passing to the Euclidean version ($t\,\rightarrow \, it$), while, as usual, $P$ must not.
Up to the usual overall positive factor, $\mathfrak{S^{\mathrm{E}}}(r)$ for this geometry takes the form \cite{kn:74}
\begin{eqnarray}\label{U}
\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdyKN^{+1}_{4}})(r) & = & \left\{r\sqrt{(r^2-a^2)\Big(1 + {r^2\over L^2}\Big) - 2Mr - Q^2 +P^2}\right.\,\,\times \nonumber\\ &&\;\;\;\left.\Bigg[\sqrt{1-{a^2\over r^2}}+ {r\over a}\, \mathrm{arcsin}{a\over r}\Bigg]\right\}-\;{2r^3\over L}\Bigg[1 - {a^2\over r^2}\Bigg] \nonumber \\ && \;\;\;+\;{2(r_h^E)^3\over L}\Bigg[1 - {a^2\over (r_h^E)^2}\Bigg],
\end{eqnarray}
where $r_h^E$ has the usual meaning.
Extensive numerical tests strongly suggest that this is a positive function of $r$ for all $r > r_h^E$. One can see that this is the case at large $r$ by expressing this function in the form
\begin{equation}\label{eq:V}
\mathfrak{S^{\mathrm{E}}}(\mathrm{AdSdyKN^{+1}_{4}})(r) \;=\; rL \left(1 + {2a^2 \over 3L^2}\right)\;+\;{2(r_h^E)^3\over L}\left(1 - {a^2\over (r_h^E)^2}\right) - 2ML\;+\;O(1/r).
\end{equation}
One sees that there are two terms that do not decay towards infinity: a linear term and a constant term. The dominant term here is of course the one linear in $r$, and it is clearly positive, so the function is certainly positive at large $r$; in fact it is almost certainly positive everywhere, so the consistency condition, equation (\ref{B}), is satisfied. That had to be so when $Q = P = 0$, since $g^E(\mathrm{AdSdyKN^{+1}_{4}})$, the Euclidean version of the metric here, is an Einstein metric in that case, and it induces a conformal structure at infinity which evidently has a positive Yamabe invariant: so Wang's theorem applies. However, it was not clear that it would hold in the charged case.
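This can be checked directly (our sketch, not taken from \cite{kn:74}): for the illustrative Einstein-case parameters $Q=P=0$, $L=1$, $a=0.5$, $M=1$, the exact expression (\ref{U}) is positive outside the Euclidean horizon and converges to the expansion (\ref{eq:V}) at large $r$:

```python
import numpy as np
from scipy.optimize import brentq

L, a, M, Q, P = 1.0, 0.5, 1.0, 0.0, 0.0   # illustrative Einstein-case parameters

def radicand(r):
    # Euclideanized quantity under the square root in (U)
    return (r**2 - a**2) * (1 + r**2 / L**2) - 2*M*r - Q**2 + P**2

r_hE = brentq(radicand, 0.9, 5.0)          # Euclidean "horizon" radius
horizon_term = (2 * r_hE**3 / L) * (1 - a**2 / r_hE**2)

def S_E(r):
    bracket = np.sqrt(1 - a**2 / r**2) + (r / a) * np.arcsin(a / r)
    return (r * np.sqrt(radicand(r)) * bracket
            - (2 * r**3 / L) * (1 - a**2 / r**2) + horizon_term)

def S_E_asym(r):
    # The expansion (V), dropping the O(1/r) remainder
    return r * L * (1 + 2*a**2 / (3*L**2)) + horizon_term - 2*M*L

print(all(S_E(r) > 0 for r in (1.5, 2.0, 10.0)))   # True: positivity
print(abs(S_E(1000.0) - S_E_asym(1000.0)))         # small: the O(1/r) remainder
```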
More remarkable, because (as we have seen) it does not follow from the result in the Euclidean case, is that the Lorentzian version of this quantity,
\begin{equation}\label{eq:W}
\mathfrak{S^{\mathrm{L}}}(\mathrm{AdSdyKN^{+1}_{4}})(r) \;=\; rL \left(1 - {2a^2 \over 3L^2}\right)\;+\;{2(r_h)^3\over L}\left(1 + {a^2\over (r_h)^2}\right) - 2ML\;+\;O(1/r),
\end{equation}
is \emph{also} positive at large values of $r$; in fact, again, the numerical evidence \cite{kn:74} very strongly suggests that it is positive everywhere outside the black hole. This would not be so if it were possible for the angular momentum/mass ratio $a$ to satisfy $a^2/L^2 > 3/2,$ but that is forbidden by the inequality (\ref{S}) given above. Notice that the complexification of $Q$ plays no role here: it does of course affect the numerical details (because reversing the sign of $Q^2$ affects the value of $r$ at the event horizon, that is, $r_h \neq r_h^E$) but it does not affect the sign of the dominant term. For these black holes, then, it does not matter whether the spacetime is Einstein or not.
Thus we see that this system respects holographic consistency, for all (physical) values of the parameters, in both the Euclidean and Lorentzian versions of the geometry, even in the non-Einstein case. It is striking, however, that in the Lorentzian case we had a narrow escape: the situation is saved only by the technical condition that conformal infinity should have a consistent signature, expressed by the inequality (\ref{S}). Again we see that the Lorentzian case is more delicate than its Euclidean counterpart.
\subsubsection*{{\textsf{4.2 Dyonic KMV$_4^0$ with Planar or Toral Event Horizon }}}
The dyonic planar AdS black hole metric discussed earlier (equation (\ref{E})) can be endowed with angular momentum; in fact, this can be done in many ways: see
\cite{kn:chrusc,0401081, 0901.2574, 0904.1566, 1107.3677, 1201.3098} for detailed discussions of the mathematical and physical ramifications of this. However, if we focus on the most physically interesting case, in which the boundary is conformally flat, then \cite{kn:shear,kn:76} the possibilities are enormously restricted. In essence, there are two possible families. The first was obtained in the zero-charge case by Klemm, Moretti, and Vanzo \cite{kn:klemm}; with the addition of electric and magnetic charges, we call these the ``dyonic KMV$_4^0$'' or ``dyKMV$_4^0$'' metrics:
\begin{equation}\label{X}
g(\mathrm{dyKMV_4^0}) = - {\Delta_r\Delta_{\psi}\rho^2\over \Sigma^2}\,\mathrm{d}t^2\;+\;{\rho^2 \over \Delta_r}\mathrm{d}r^2\;+\;{\rho^2 \over \Delta_{\psi}}\mathrm{d}\psi^2 \;+\;{\Sigma^2 \over \rho^2}\left[\omega\,\mathrm{d}t \; - \;\mathrm{d}\zeta\right]^2,
\end{equation}
where the coordinates and parameters are as in equation (\ref{E}) (with the addition of $a$, the angular momentum/mass ratio), and where
\begin{eqnarray}\label{Y}
\rho^2& = & r^2\;+\;a^2\psi^2, \nonumber\\
\Delta_r & = & a^2+ {r^4\over L^2} - 8\pi M^* r + 4\pi (Q^{*2}+P^{*2}),\nonumber\\
\Delta_{\psi}& = & 1 +{a^2 \psi^4\over L^2},\nonumber\\
\Sigma^2 & = & r^4\Delta_{\psi} - a^2\psi^4\Delta_r,\nonumber\\
\omega & = & {\Delta_r\psi^2\,+\,r^2\Delta_{\psi}\over \Sigma^2}\,a.
\end{eqnarray}
As in the preceding section, this is an Einstein metric for any value of $a$, provided that $Q^* = P^* = 0$.
The second family of metrics with angular momentum and with conformally flat boundaries is obtained by adding a parameter similar in some ways to NUT charge: these are the ``$\ell$dyKMV$_4^0$'' metrics introduced (without magnetic charge) in \cite{kn:shear}. As they are rather more complicated than the dyKMV$_4^0$ metrics, and as they do not lead to different conclusions, we shall not discuss them here; see below.
The electromagnetic potential form in the dyKMV$_4^0$ case is
\begin{equation}\label{Z}
A =-\, {1\over \rho^2L}\left[(Q^*r + aP^*\psi)\mathrm{d} t \;+\;(aQ^*r\psi^2 - P^*\psi r^2)\mathrm{d}\zeta\right],
\end{equation}
from which we see that the usual pattern of complexifications continues to hold here.
These black holes have a very remarkable property: like any black hole with angular momentum, they induce frame dragging effects in the surrounding spacetime, but here the frame-dragging effect persists to conformal infinity; yet it is not a uniform rotation there, as it is in the topologically spherical case considered above. Instead, the frame-dragging mimics a \emph{shearing} motion. Under some circumstances (related, as before, to the viscosity of the plasma), the plasma produced by a peripheral heavy-ion collision does indeed take the form of a shearing motion \cite{kn:liang,kn:bec,kn:huang} (see \cite{kn:csernai,1405.7283,1406.1017,1406.1153,1503.03247} for more recent developments). Thus one can use the KMV metrics and their generalizations to give a holographic account of the internal motion of the plasma in these situations \cite{kn:77,kn:shear}.
The dimensionless velocity of the shearing plasma described by the dyKMV$_4^0$ metric is given by
\begin{equation}\label{ALPHA}
v(x) \;=\; a\psi^2/L;
\end{equation}
this corresponds to a motion within the plasma, increasing away from the $\psi = 0$ axis, which corresponds to the axis of the collision in the dual system. Causality therefore imposes the bound
\begin{equation}\label{BETA}
\psi\;<\;\Psi \; \equiv \; \sqrt{L/a}.
\end{equation}
Note carefully that $\Psi$ is just a special numerical value of the spacelike coordinate $\psi$, which of course is never complexified; so we must \emph{not} complexify $a$ in this formula when we pass to the Euclidean geometry.
When we do move to the Euclidean case, we find again that $\psi$ must still satisfy the inequality (\ref{BETA}), since otherwise various pathologies will arise: for example, if (\ref{BETA}) is not enforced, then the Euclidean version of $\rho^2$ can be negative at some values of $r$, and the Euclidean version of $\Delta_{\psi}$ (given by $1 - (a^2 \psi^4/L^2)$) is negative for some values of $\psi$; so that, in particular, the coefficients of d$r^2$ and d$\psi^2$ in the ``Euclidean'' version of the metric will in that case have opposite signs, which is a contradiction.
Thus in both cases $\psi$ ranges between 0 and $\Psi$, so areas and volumes can now be evaluated accordingly: one then finds that the Euclidean quantity $\mathfrak{S^{\mathrm{E}}}(r)$ in this case takes the form
\begin{eqnarray}\label{GAMMA}
\mathfrak{S^{\mathrm{E}}}(\mathrm{dyKMV_0})(r) & = & \left\{{r^2\over 2} \sqrt{-a^2+ {r^4\over L^2} - 8\pi M^*r+ 4\pi (-\,Q^{*2}+P^{*2})}\right. \times \nonumber \\
&&
\;\;\;\;
\left.\left[{1 \over a}\mathrm{arcsin}\Big({a\Psi\over r}\Big) + {\Psi \over r}\sqrt{1-{a^2\Psi^2\over r^2}}\right]\right\} \nonumber \\
&&
\;\;\;\;
- {1\over L}\left[\Psi(r^3 - (r_h^E)^3) - a^2\Psi^3(r-r_h^E)\right],
\end{eqnarray}
where $r_h^E$ locates the Euclidean ``event horizon''. As in the preceding section, numerical evidence strongly suggests that this function is positive everywhere beyond the Euclidean event horizon; one can see this directly at large $r$:
\begin{equation}\label{DELTA}
\mathfrak{S^{\mathrm{E}}}(\mathrm{dyKMV_0})(r) \;=\; {5a^2\Psi^3 \over 6L}\,r \;+\;{\Psi r^E_{h} \over L}\left[\,(r^E_{h})^2 \;-\;a^2\Psi^2 \;-\;{4\pi M^*L^2 \over r^E_{h}}\right] \;+\; O(1/r) .
\end{equation}
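As a sanity check, the formula (\ref{GAMMA}) and the expansion (\ref{DELTA}) can be evaluated numerically. The following Python sketch uses purely illustrative parameter values (our own assumptions, not values fixed by the text): it locates the Euclidean horizon $r_h^E$ by bisection and verifies positivity of $\mathfrak{S^{\mathrm{E}}}$ beyond the horizon, together with the linear large-$r$ growth.

```python
import math

# Illustrative parameter values (assumptions for this sketch only).
L, a = 1.0, 0.5
Mstar, Qstar, Pstar = 0.01, 0.02, 0.03
Psi = math.sqrt(L / a)          # causality bound on the coordinate psi

def Delta_r_E(r):
    """Euclidean version of Delta_r (a^2 -> -a^2, Q*^2 -> -Q*^2)."""
    return (-a**2 + r**4 / L**2 - 8 * math.pi * Mstar * r
            + 4 * math.pi * (-Qstar**2 + Pstar**2))

# Locate the Euclidean "event horizon" r_h^E: the largest root of Delta_r_E.
lo, hi = 0.1, 10.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if Delta_r_E(mid) > 0:
        hi = mid
    else:
        lo = mid
rhE = 0.5 * (lo + hi)

def S_E(r):
    """The quantity S^E(dyKMV_0)(r) of equation (GAMMA)."""
    first = (r**2 / 2) * math.sqrt(Delta_r_E(r)) * (
        (1 / a) * math.asin(a * Psi / r)
        + (Psi / r) * math.sqrt(1 - a**2 * Psi**2 / r**2))
    second = (Psi * (r**3 - rhE**3) - a**2 * Psi**3 * (r - rhE)) / L
    return first - second

# Positivity beyond the Euclidean horizon ...
assert all(S_E(rhE + 0.05 + 0.1 * k) > 0 for k in range(100))
# ... and the dominant linear term 5 a^2 Psi^3 r / (6 L) of (DELTA).
assert abs(S_E(1000.0) / 1000.0 - 5 * a**2 * Psi**3 / (6 * L)) < 1e-2
print(rhE)
```

For these sample values the Euclidean horizon sits near $r\approx 0.82$, and the slope of $\mathfrak{S^{\mathrm{E}}}$ at large $r$ agrees with the coefficient $5a^2\Psi^3/6L$ of (\ref{DELTA}).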
As in the topologically spherical case, there are two terms that do not decay towards infinity, the linear term being the dominant one; and clearly the function is positive at large $r$ for all values of $a$ and of the charges. But when we turn to the Lorentzian version, we find a result very different from the topologically spherical case: we have
\begin{equation}\label{EPSILON}
\mathfrak{S^{\mathrm{L}}}(\mathrm{dyKMV_0})(r) \;=\; { -\,5a^2\Psi^3 \over 6L}\,r \;+\;{\Psi r_{h} \over L}\left[\,r_{h}^2 \;+\;a^2\Psi^2 \;-\;{4\pi M^*L^2 \over r_{h}}\right] \;+\; O(1/r) .
\end{equation}
$\mathfrak{S^{\mathrm{L}}}(\mathrm{dyKMV_0})(r)$ is positive near the event horizon of the black hole, but this expression shows that it is negative (in fact, unbounded below) far from it. The Lorentzian system is \emph{unstable} for all non-zero values of all parameters. (The situation for the other family of metrics mentioned earlier, the $\ell$dyKMV$_4^0$ metrics, is essentially the same.)
In particular, if we focus on the $Q^* = P^* = 0$ case, we have here an example of an Einstein metric in the bulk which (in the Euclidean case, after compactification) has non-negative Yamabe invariant at infinity, so, by Wang's theorem, the Euclidean version of the system had to be consistent; but the Lorentzian version nevertheless misbehaves, for all values of the angular momentum, if the corresponding plasma is sufficiently long-lived.
In fact, however, the relevant plasma here, one which is endowed with a very large angular momentum density, is not the cosmic plasma we considered in section 2 of this work; instead it is the plasma produced in a heavy ion collision. Such plasmas only survive for a very short time, a few femtometres/c, so it is not clear that the Lorentzian instability has sufficient time to manifest itself. In fact, a plasma with a violent internal motion might well be subject to hydrodynamic instabilities analogous to or generalizing the well-known \emph{Kelvin--Helmholtz instability} \cite{kn:KelvinHelm}; and it may be that such instabilities do in fact set in as the plasma hadronizes. Thus, we should interpret Lorentzian holographic consistency as an upper bound on the \emph{time} during which a hydrodynamic model of the plasma is valid. In order to judge whether consistency is violated here, one would need to estimate the time required for the instability to be established. A holographic method of doing so was proposed in \cite{kn:77} (see also \cite{kn:shear}), and in fact preliminary estimates do suggest that the instability time scale is approximately the same as that of hadronization.
Again, therefore, we conclude that, while Lorentzian consistency is not (unlike Euclidean consistency) guaranteed in this case, \emph{in practice} it does not fail ---$\,$ though it very easily might have done so.
\subsection*{{\textsf{5. Conclusion: Consistency as a Law of Physics}}}
It has long been hoped that at least some of the laws of physics might be found to follow inevitably from the requirements of internal mathematical consistency in some unified theory. We propose that Ferrari's Euclidean consistency condition (\ref{A}) should be considered in this manner. We have seen that doing so, and making the natural move of imposing the analogous Lorentzian condition, constrains a wide variety of quark-gluon plasmas (in particular, the quite different plasmas occurring in the early Universe and in heavy ion collisions) in very remarkable ways. One is struck particularly by the fact that observable systems repeatedly come close to violating these constraints, without ever actually doing so.
In the title of this work, we asked a question: ``When is Holography Consistent?''. The answer appears to be, ``Always, at least in all of the various examples we have considered.'' It seems that holographic consistency, in the specific form of the ``isoperimetric inequality'' (\ref{B}), has the character of a law of physics. It will be interesting to investigate whether it continues to hold as more data accumulate and in other applications.
\addtocounter{section}{1}
\section*{\large{\textsf{Acknowledgement}}}
BMc is grateful to Dr. Soon Wanmei, J.L. McInnes, and C.Y. McInnes, for helpful discussions. YCO thanks Nordita for supporting his travel to Singapore, where part of this work was completed.
\section{Introduction}
Many of the fractals we come across arise as graphs of functions. Indeed, many natural phenomena such as wind speeds, solar radiation, population, and stock market prices are plotted against time; that is, we capture them in graphs. The graphs of functions and their box and Hausdorff dimensions have been of considerable interest to many authors over the past few decades. In particular, Weierstrass type functions have fascinated many of them. In 1872, Karl Weierstrass produced a function which is continuous everywhere but differentiable nowhere. The classical Weierstrass function $W_{\la,b} : [0,1)\to \R$ with parameters $b\in \N$ and $\la \in (0,1)$ is defined by
\begin{equation}\label{W1}
W_{\la,b}(x) = \sum_{n=0}^{\infty} \la^n \cos (2\pi b^n x).
\end{equation}
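For readers who wish to experiment, the following short Python sketch (the parameter choices $\la=0.5$, $b=3$, the truncation level, and the crude column-counting scheme are all illustrative assumptions of ours) compares a box-counting estimate for the graph of a truncated sum of \eqref{W1} with the value $D = 2 + \frac{\log \la}{\log b}$, which is known to be the box dimension when $b\la>1$:

```python
import math

lam, b = 0.5, 3                  # illustrative parameters with b*lam > 1

def W(x, N=16):
    """Truncated Weierstrass sum; the tail beyond N=16 terms is
    negligible at the resolutions used below."""
    return sum(lam**n * math.cos(2 * math.pi * b**n * x) for n in range(N))

xs = [i / 2**14 for i in range(2**14)]
ys = [W(x) for x in xs]

def boxes(k):
    """Number of 2^-k boxes met by the sampled graph, column by column:
    each column of width 2^-k needs about (oscillation * 2^k) + 1 boxes."""
    ncols, step = 2**k, len(xs) // 2**k
    return sum(int((max(col) - min(col)) * 2**k) + 1
               for col in (ys[c * step:(c + 1) * step] for c in range(ncols)))

ks = range(4, 10)
kbar = sum(ks) / len(ks)
logN = [math.log(boxes(k)) for k in ks]
# least-squares slope of log N(k) against log(2^k)
est = (sum((k - kbar) * ln for k, ln in zip(ks, logN))
       / (sum((k - kbar)**2 for k in ks) * math.log(2)))
D = 2 + math.log(lam) / math.log(b)   # = 1.3690...
assert 1.1 < est < 1.7                # crude estimate, close to D
print(est, D)
```

The estimate is rough (finite truncation, finite sampling), but it already tracks $D$ well.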
Now we mention some known results of the dimension of Weierstrass type functions. In \cite{KMY} it was proved that if $b\la >1$ then the box dimension of graph of $W_{\la,b}$ is equal to $D = 2 + \frac{\log \la}{\log b}.$ Hunt \cite{hunt} proved that if each $\theta_n$ is chosen independently with respect to the uniform probability in $[0,1],$ then with probability one the Hausdorff dimension of graph of $W_\Theta$ defined by
\begin{equation}\label{W2}
W_{\Theta}(x) = \sum_{n=0}^{\infty} \la^n \cos (2\pi( b^n x+\theta_n))
\end{equation}
is $D = 2 + \frac{\log \la}{\log b}.$ In \cite{KF} chapter 11, it has been shown that for Weierstrass function defined by
\begin{equation}\label{W3}
f(x) = \sum_{n=1}^{\infty} \la^{(s-2)n} \sin (\la^n x)
\end{equation}
for $\la>1$ and $1<s<2,$ the box dimension of graph of $f$ is $s$ provided $\la$ is large enough. Similarly, it was proved that two dimensional Weierstrass function defined by
\begin{equation}
\phi(x,y) = \sum_{n=1}^{\infty} \la^{(s-3)n} \sin (\la^n x)\cos (\la^n y)
\end{equation}
has the box dimension $s$ whenever $\la$ is sufficiently large and $s$ is between 2 and 3. In \cite{bara1} Bara\'nski studied the Weierstrass type functions
\begin{equation}\label{W4}
f(x) = \sum_{n=0}^{\infty} \la_n \phi (b_n x+ \theta_n)
\end{equation}
where $\la_n, b_n>0, \theta_n\in \R$ and $\phi : \R \to \R$ is a
non constant, $\Z$-periodic and Lipschitz function. He proved that if a function $f$ of the form \eqref{W4} satisfies $\frac{\la_{n+1}}{\la_n} \to 0$ and $\frac{b_{n+1}}{b_n} \to \infty$ as $n \to \infty$ then
$$\dim_H(graph f) = \underline{\dim}_B(graph f) = 1 + \liminf_{n \to \infty} \frac{\log^+ d_n}{\log(b_{n+1}d_n/d_{n+1})}$$ and
$$ \overline{\dim}_B(graph f) = 1 + \limsup_{n \to \infty} \frac{\log^+ d_n}{\log b_n},$$
where $ \log ^+ = \sup\{\log,0\}$ and $d_n = \la_1b_1+\dots+\la_n b_n.$
Shen \cite{shen} proved that for any function of the following type
\begin{equation}\label{W5}
f_{\la,b}^\phi(x) = \sum_{n=0}^{\infty} \la^n \phi (b^n x)
\end{equation}
where $\phi : \R \to\R$ is a $\Z$-periodic, non-constant, $\C^2$-function and $b\geq 2,$ there exists a constant $K_0$ depending on $\phi$ and $b$ such that if $1<\la b<K_0$ then the graph of $f_{\la,b}^\phi$ has Hausdorff dimension $D = 2 + \frac{\log \la}{\log b}.$ Bara\'nski et al. \cite{BBR} studied functions of the type \eqref{W1} for integer $b\geq 2$ and $1/b < \la <1.$ They established that for every $b,$ there exists $\la_b \in (1/b,1)$ such that the Hausdorff dimension of the graph of $W_{\la,b}$ is equal to $D = 2 + \frac{\log \la}{\log b}$ for all $\la \in (\la_b,1).$ A simpler proof of this and some other results was given by Keller in \cite{kell}. Xie and Zhou \cite{XZ} constructed a wide range of Weierstrass type functions whose graphs attain box dimension two. Mauldin and Williams \cite{MW} studied the following function
$$W_b(x) = \sum_{n=-\infty}^{\infty} b^{-\alpha n} [\phi(b^nx+\theta_n) - \phi(\theta_n)]$$
where $b>1,~ 0 <\alpha <1,$ each $\theta_n$ is an arbitrary real number, and $\phi$ is a periodic function with period one. They showed that there exists a constant $C>0$ such that the Hausdorff dimension of the graph of $W_b$ is bounded below by $2-\alpha- (C/ \ln b)$ when $b$ is large enough. For more details about the dimension of Weierstrass functions, readers are encouraged to study the above references and the references therein. Our aim is not to study the dimensions of graphs of Weierstrass type functions but to study graphs of functions on the Sierpi\'nski gasket.
Now we briefly discuss fractal interpolation functions (FIFs) and the box dimension of their graphs on the real line. Fractal interpolation functions were introduced and studied by Barnsley and his co-researchers in \cite{barn1,barn2,barn3}. Navascu\'es and her co-researchers studied fractal operators and their properties on some suitable function spaces in \cite{MN1,MN2}. Bedford \cite{bed} showed that for a fractal interpolation function constructed using linear affinities and having H\"older exponent $h$, the box dimension $D$ of the graph satisfies $h\leq 2-D \leq h_\la.$ The H\"older exponent of $f$ at $x$ is defined as $$h_x \coloneqq \sup\{\alpha : |f(x)-f(y)| \leq |x-y|^\alpha \text{ for all } y \text{ in some neighbourhood of } x\}$$
and the H\"older exponent of $f$ is defined as $h \coloneqq \inf \{h_x : x\in I\}.$ If there exists a number $h_\la$ such that $h_x=h_\la$ for Lebesgue almost all $x \in I,$ then it is called the almost everywhere H\"older exponent of $f.$ In \cite{dalla}, Dalla et al. obtained the box dimension of some non-affine fractal interpolation functions and illustrated this with explicit examples.
Hardin and Massopust \cite{HM} showed that when the interpolation points are not collinear and $\sum_{k=1}^n|a_k| >1,$ then $C(G) = 1 + \log_N(\sum_{k=1}^n|a_k|);$ otherwise $C(G) = 1,$ where $C(G),$ the capacity dimension of $G,$ is another name for the box dimension. Carrying these results forward, Nasim et al. obtained the box dimension of $\alpha$-fractal functions with constant scaling factors in \cite{NGM1} and with variable scaling factors in \cite{NGM2}. Recently, Barnsley and Massopust \cite{BM} studied bilinear fractal interpolation functions as fixed points of a Read-Bajraktarevi\'c operator and presented the box dimension formula for bilinear FIFs.
Afterwards, many researchers extended the notion of fractal interpolation functions to fractal interpolation surfaces (FISs) and studied the box dimension of these FISs. In 2006, Bouboulis et al. \cite{BDD} constructed recurrent bivariate fractal interpolation surfaces and computed the box dimension in some particular cases. Feng and Sun \cite{FS} studied the box dimension of fractal interpolation surfaces derived from FIFs. They introduced FISs on rectangular domains with arbitrary interpolation nodes and calculated the box dimension by considering its relation with the variation of functions.
Later, the concept of fractal interpolation functions was extended to the Sierpi\'nski gasket by \c{C}elik et al. \cite{CKO}. They showed the existence of a unique extension of a function $F : \s^{(n)} \to \R$ to $f : \s \to \R$ which satisfies $$f(L_\omega(x)) = \alpha_\omega f(x) + h_\omega(x)$$ where $h_\omega(x)$ is a harmonic function on the Sierpi\'nski gasket. All the notation in the last equation will be described in the next section. Ruan \cite{ruan} extended this work to P.C.F. self-similar sets ($K$) and established sufficient conditions under which linear FIFs have finite energy. He also proved that the solution of the Dirichlet problem $$-\De_\mu u =f, \quad u|_{\partial K} = 0$$ is a linear FIF on $K$ if $f$ is a linear FIF. Ri and Ruan \cite{RR1} established some more properties of FIFs on the Sierpi\'nski gasket. They showed the min-max property of uniform FIFs and provided a sufficient condition for uniform FIFs to have finite energy. Finally, they explored the results in terms of the normal derivative and the Laplacian of uniform FIFs. In \cite{LR}, Li and Ruan proved the energy finiteness of FIFs on P.C.F. self-similar fractals. They also discussed some results about the Laplacian of FIFs on the Sierpi\'nski gasket and studied the Dirichlet problem on the Sierpi\'nski gasket.
An important point to notice in the last few references is that the authors constructed FIFs on the Sierpi\'nski gasket by taking the base function to be a harmonic function in order to prove their results. This is not necessary in general. In this paper we construct FIFs on the Sierpi\'nski gasket with an arbitrary base function. Motivated by the above results, we estimate the box dimension of FIFs on the Sierpi\'nski gasket. Later we obtain bounds for the box dimension of the graph of harmonic functions on the Sierpi\'nski gasket. To the best of our knowledge, no work has been done in this direction so far.
This paper is arranged as follows. In Section 2 we recall the preliminaries: the definitions of the box dimension, the energy functional, harmonic functions, the space of finite-energy functions ($\dom(\E)$) on the Sierpi\'nski gasket, etc. Section 3 is dedicated to FIFs on the Sierpi\'nski gasket and supplies sufficient conditions on the scaling factors such that the FIF belongs to $\dom(\E).$ In Section 4 we provide bounds for the box dimension of the graph of FIFs on the Sierpi\'nski gasket. In Section 5 we give bounds for the box dimension of the graph of harmonic functions on the Sierpi\'nski gasket.
\section{Preliminaries}
In this section we will recall some definitions which we will use later in this paper.
\begin{definition}[Hausdorff Metric]
\noindent Let $(X,d)$ be a metric space. For $A \subseteq X$ and $\epsilon > 0$, let $ N_\epsilon(A) = \{x \in X : d(x,A) < \epsilon \}$ where $d(x,A) = \inf\{d(x,a) : a\in A\}.$ Let $\B(X)$ be the collection of nonempty closed and bounded subsets of $X.$ For $A, B \in \B(X)$ we define $$D_{\hd}(A,B) = \inf \{\epsilon >0 : A \subseteq N_\epsilon(B), B \subseteq N_\epsilon(A) \}.$$
Then this defines a metric on $\B(X)$ and is called the Hausdorff Metric.
\end{definition}
\begin{remark}
Let $(X,d)$ be a metric space and $\B(X)$ be the collection of all nonempty closed and bounded subsets of $X$. Then $(\B(X),D_{\hd})$ is a complete metric space if $(X,d)$ is a complete metric space(\cite{GE}, Theorem 2.5.3).
\end{remark}
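For finite sets, the Hausdorff metric can be computed via the equivalent sup-inf formula $D_{\hd}(A,B) = \max\big\{\sup_{a\in A} d(a,B),\, \sup_{b\in B} d(b,A)\big\}.$ A toy computation on the real line (illustrative only, not part of the development):

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of the real line."""
    d = lambda x, S: min(abs(x - s) for s in S)   # d(x, S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

# Two singletons: the distance is just the point distance.
assert hausdorff({0.0}, {3.0}) == 3.0

# A fine grid versus a coarse one: each set lies in a 0.25-neighbourhood
# of the other, so the Hausdorff distance is 0.25 even though A and B
# are very different as sets.
A = {i / 100 for i in range(101)}
B = {0.0, 0.5, 1.0}
assert abs(hausdorff(A, B) - 0.25) < 1e-12
```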
\begin{definition}[Hausdorff Measure]
\noindent Suppose that $F$ is a subset of $\R^n$ and $s$ is a non-negative real number. For any $\de > 0,$ we define $$ \hd^s_\de(F) =\inf \left\{\sum_{i=1}^{\infty}|U_i|^s : F \subset \cup_{i=1}^{\infty}{U_i},~0 < |U_i| < \de \right\},$$ where $|U_i|$ denotes the diameter of $U_i$. As $\de$ decreases, $\hd^s_\de(F)$ increases and so approaches a limit(may be $+\infty$) as $\de \to 0^+$. We write $$ \hd^s(F) = \lim_{\de \to 0^+} \hd^s_\de(F). $$ Then $\hd^s(F)$ is called the $s$-dimensional Hausdorff Measure of $F$.
\end{definition}
\begin{definition}[Hausdorff Dimension]
\noindent For any nonempty subset $F$ of $\R^n$, we define the Hausdorff Dimension as
$$ {\dim_H}(F) = \inf \{ s \geq 0 :\hd^s(F) = 0 \} = \sup \{s \geq 0 : \hd^s(F)= \infty\}$$
\end{definition}
\begin{definition}[Box Dimension]
\noindent Let $F$ be a nonempty subset of $\R^n$ and $N_\de(F)$ denote the least number of sets of diameter less than or equal to $\de$ which cover $F$.\\
The lower box dimension (box-counting dimension) of $F$ is defined as
$$ \underline{\dim}_B(F) = \liminf_{\de \to 0^+}\frac{\log N_\de(F)}{- \log \de} $$
and the upper box dimension (box-counting dimension) of $F$ is defined as
$$ \overline{\dim}_B(F) = \limsup_{\de \to 0^+}\frac{\log N_\de(F)}{- \log \de}.$$
When these two values are equal, we call the common value as the box dimension of $F$.
\end{definition}
\begin{remark}
For any subset $F$ of $\R^n$, the following holds true
$${\dim_H}(F) \leq \underline{\dim}_B(F) \leq \overline{\dim}_B(F). $$
\end{remark}
Now, we will discuss about the Sierpi\'nski gasket and energy functional in the space of continuous real valued functions in it. Let $\s_0 = \{q_1, q_2, q_3\}$ be three points on $\R^2$ equidistant from each other. Let $L_i(x) = \frac{1}{2}(x-q_i) + q_i$ for $i= 1,2,3$ and $L: \B(\R^2)\to\B(\R^2)$ defined as $L(A) = \cup_{i=1}^3 L_i(A).$ It is well known that $L$ has a unique fixed point $\s$(see, for instance, \cite[Theorem 9.1]{KF}), which is called the Sierpi\'nski gasket. Another way to view the same is $\s = \overline{\cup_{j \geq 0}L^{j}(\s_0)},$ where $L^j$ means $L$ composed with itself $j$ times. We know that $\s$ is a compact set in $\R^2.$ It is well known that the Hausdorff dimension of $\s$ is $\frac{\ln 3}{\ln 2}$ and the $\frac{\ln 3}{\ln 2}$-dimensional Hausdorff measure is finite and nonzero (i.e., $0<\hd^{\frac{\ln 3}{\ln 2}}(\s)<\infty)$ (see, \cite[Theorem 9.3]{KF}). Throughout this paper, we will use this measure and denote it by $\mu$. If $f$ is a measurable function on $\s$, then
$$\|f\|_\infty \coloneqq \inf \{a \in \R : \mu\{x \in \s : |f(x)| > a\} = 0\}.$$
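As a numerical illustration of the box-counting definition applied to $\s$ (the level-$m$ vertex set and the least-squares fit below are our own illustrative choices), counting the dyadic boxes met by a deep vertex set recovers a slope close to $\frac{\ln 3}{\ln 2} \approx 1.585$:

```python
import math

# Corners of S_0; the maps L_i(x) = (x + q_i)/2 are those defining S.
q = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def vertices(m):
    """Level-m vertex set L^m({q1, q2, q3}) by deterministic IFS iteration."""
    pts = set(q)
    for _ in range(m):
        pts = {((x + qx) / 2, (y + qy) / 2)
               for (x, y) in pts for (qx, qy) in q}
    return pts

V = vertices(8)

def boxes(k):
    """Distinct boxes of side 2^-k meeting the vertex set."""
    return len({(int(x * 2**k), int(y * 2**k)) for (x, y) in V})

ks = range(3, 8)
kbar = sum(ks) / len(ks)
logN = [math.log(boxes(k)) for k in ks]
slope = (sum((k - kbar) * ln for k, ln in zip(ks, logN))
         / (sum((k - kbar)**2 for k in ks) * math.log(2)))
assert 1.3 < slope < 1.9            # crude, but near log 3 / log 2
print(slope, math.log(3) / math.log(2))
```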
Now, we will define the energy functional on the space of continuous functions on the Sierpi\'nski gasket ($\C(\s)$) as follows.
The $m^{\text{th}}$ level Sierpi\'nski gasket is $\s^{(m)} \coloneqq \cup_{j=0}^m L^j(\s_0).$ If $x$ and $y$ belong to the same cell of $\s^{(m)},$ we denote it by $x\thicksim_m y.$ We define the $m^{\text{th}}$ level crude energy as
$$E^{(m)}(u) = \sum_{x\thicksim_my} |u(x)-u(y)|^2$$ and the $m^{\text{th}}$ level renormalized energy is
given by $$\E^{(m)}(u) = \left(\frac{5}{3}\right)^m E^{(m)}(u)$$ where $\frac{5}{3}$ is the unique renormalizing factor. Now we can observe that $\E^{(m)}(u)$ is a monotonically increasing function of $m$ because of renormalization. So we define the energy function as
$$\E(u) = \lim\limits_{m \to \infty} \E^{(m)}(u) $$ which exist for all $u$ as an extended real number. Now we define $\dom(\E)$ as the space of continuous
functions $u$ satisfying $\E(u) < \infty.$ In \cite{RS}, it is shown that $\dom(\E)$ modulo constant functions forms a Banach space endowed with the norm
$\|\cdot\|_{\E}$ defined as $$\|u\|_{\E} = \sqrt{\E(u)}.$$
The space $\dom_0(\E)$ is a subspace of $\dom(\E)$ containing all functions which vanishes at boundary of the Sierpi\'nski gasket. For more details see \cite{RS,falc1,kiga1} and references there in.
\begin{definition}[Harmonic function]
A function $f : \s \to \R$ is said to be a harmonic function if $\E^{(m+1)}(f) = \E^{(m)}(f)$ for every
$m\geq 0.$
\end{definition}
\begin{definition}[Piecewise harmonic function]
A function $p : \s \to \R$ is said to be a piecewise harmonic function if there exists a finite partition of $\s$ into subsets, each of which is a Sierpi\'nski gasket in its own right, such that $p$ restricted to each subset of the partition is a harmonic function.
\end{definition}
Given three real numbers $a,b,c,$ there exists a unique harmonic function $f$ satisfying $f(q_1)=a, f(q_2)=b$ and $f(q_3)=c.$ The function values at the intermediate nodes of the Sierpi\'nski gasket can be determined by the \ldq$\frac{1}{5}-\frac{2}{5}$\rdq~rule
$$f(q_{\omega ij})= \frac{2}{5}f(q_{\omega i}) + \frac{2}{5}f(q_{\omega j}) + \frac{1}{5}f(q_{\omega k})$$
where $\omega \in \Sigma^*$ and $\{i,j,k\}$ is a permutation of $\{1,2,3\}.$ We define $\Sigma^* \coloneqq \cup_{m\geq 0}\Sigma^{(m)},$ where $\Sigma^{(m)}$ is the collection of all words of length $m$ formed from the symbols 1, 2 and 3. We define $q_{\omega i} \coloneqq L_\omega(q_i)$ for $\omega \in \Sigma^*$ and $i \in \{1,2,3\}.$ For more details about harmonic functions and the \ldq$\frac{1}{5}-\frac{2}{5}$\rdq~rule, see Section 1.3 of \cite{RS}.
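The defining property $\E^{(m+1)}(f)=\E^{(m)}(f)$ of a harmonic function, together with the \ldq$\frac{1}{5}-\frac{2}{5}$\rdq~rule, can be checked numerically. The sketch below (with illustrative boundary values of our own choosing) extends corner data cell by cell and verifies that the renormalized energy is the same at every level:

```python
def crude_energy(level, a, b, c):
    """Crude energy E^(level) of the harmonic extension of corner
    values (a, b, c), computed recursively cell by cell."""
    if level == 0:
        return (a - b)**2 + (b - c)**2 + (a - c)**2
    # midpoint values from the 1/5-2/5 rule
    mab = (2*a + 2*b + c) / 5
    mbc = (a + 2*b + 2*c) / 5
    mac = (2*a + b + 2*c) / 5
    return (crude_energy(level - 1, a, mab, mac)
            + crude_energy(level - 1, mab, b, mbc)
            + crude_energy(level - 1, mac, mbc, c))

a, b, c = 1.0, 0.0, 0.0                       # illustrative boundary data
E0 = crude_energy(0, a, b, c)                 # = 2
renormalized = [(5 / 3)**m * crude_energy(m, a, b, c) for m in range(6)]
assert all(abs(E - E0) < 1e-9 for E in renormalized)
print(renormalized)
```

The factor $(5/3)^m$ is exactly what keeps these values constant; any other renormalizing factor would make them decay to $0$ or blow up.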
\section{Fractal operator on $\dom {(\E)}$ and its properties}
Now the question arises: is there any fractal function in the space $\dom(\E)$? The answer is affirmative. In this section we construct an Iterated Function System (IFS) whose fixed point is the graph of a function. We show that this function belongs to $\dom(\E)$ under some restrictions on the independent parameters. Before proceeding, we recall some definitions.
\begin{definition}[\textbf{Iterated Function System}]
\noindent Let $(X,d)$ be a metric space. Let $f_n : X \to X$ for $n \in \Lambda$(a finite index set) be contraction mappings. A finite family of contractions $\{X; f_n : n \in \Lambda\}$ is called an Iterated Function System(IFS).\\
We define $F : \B(X) \to \B(X)$ by $F(A) = \cup_{n\in\Lambda} f_n(A)$ where $\B(X)$ is the collection of all nonempty compact subsets of $X$, $A \in \B(X)$ and $f_n(A) = \{f_n(x) : x \in A\}$.\\
\noindent A non empty set $ \A \subset X$ is called an invariant set(attractor) for the IFS $\{X; f_n: n \in \Lambda\}$, if $ \A = \cup_{n\in \Lambda} f_n(\A).$
\end{definition}
\subsection{Fractal Interpolation Functions on the Sierpi\'nski gasket }
\noindent Let $\{(\mathbf{x}_i,y_i) \in \s \times \R : i \in \Lambda, \Lambda \text{ is a finite index set}\}$ be given set of data points, where
$\s $ is the Sierpi\'nski gasket. We want to construct a continuous function $f : \s \to \R$ which interpolate the given data
\begin{equation} \label{cond}
f(\mathbf{x}_i) = y_i, ~~ i \in \Lambda
\end{equation}
and whose graph $G= \{(\mathbf{x},f(\mathbf{x})) : \mathbf{x} \in \s\}$ is the attractor of an IFS.
\begin{definition}[\textbf{Fractal Interpolation Function}]
Let $K=\s\times [a,b],$ where the interval $[a,b]$ is chosen in such a way that each $y_i \in [a,b].$ If there is a collection of contraction mappings $f_n : K \to K$ such that the unique attractor of the IFS $\{K;f_n : n \in \Lambda, \Lambda \text{ is a finite index set} \}$ is the graph of a function on the Sierpi\'nski gasket and the function satisfies \eqref{cond}, then we call such a function a fractal interpolation function.
\end{definition}
\subsection{Construction of $\alpha$-fractal functions on the Sierpi\'nski gasket}
Let $f$ be a continuous function on the Sierpi\'nski gasket and let $n \in \N$ be a fixed number. Suppose the interpolation points are $\{(q_\omega, f(q_\omega)) : \omega \in \Sigma^{(n)}\}.$ Then for fixed $\alpha= \{\alpha_\omega \in (-1,1) : \omega\in \Sigma^{(n)}\},$ we will construct an IFS whose attractor is the graph of a function passing through the above interpolation points.
We define a function $L_\omega : \R^2 \to \R^2$ by
\begin{equation}\label{eq_3}
L_\omega \coloneqq L_{\omega_1}\circ L_{\omega_2}\circ \dots \circ L_{\omega_n}, \text{ where } \omega = \omega_1\omega_2\dots\omega_n \in \Sigma^{(n)},
\end{equation}
The function $L_\omega$ satisfies the following conditions:
$$L_\omega(q_1)=q_{\omega1}, L_\omega(q_2)=q_{\omega2}, L_\omega(q_3)=q_{\omega3}$$
and $$\|L_\omega(c)-L_\omega(d)\| \leq \frac{1}{2^{|\omega|}}\|c-d\|.$$
Further, we define a real valued continuous function $F_{\omega} : \s \times \R \to \R$ by
\begin{equation}\label{eq_4}
F_\omega(\x,y) = \alpha_\omega y + f(L_\omega(\x)) - \alpha_\omega~ b(\x)
\end{equation}
where $b$ is a continuous function on the Sierpi\'nski gasket satisfying conditions
$b(q_1) = f(q_1), b(q_2) = f(q_2)$ and $b(q_3) = f(q_3).$
The function $F_\omega$ satisfies the following conditions:
$$F_{3\tilde{\omega}}(q_1,y_1) = F_{1\tilde{\omega}}(q_3,y_3), F_{2\tilde{\omega}}(q_3,y_3) = F_{3\tilde{\omega}}(q_2,y_2) \text{~and~} F_{2\tilde{\omega}}(q_1,y_1)= F_{1\tilde{\omega}}(q_2,y_2)$$ for each $\tilde{\omega} \in \Sigma^{(n-1)}$
and
$$\|F_\omega (c,d_1)- F_\omega (c,d_2)\| \leq |\alpha_\omega|\|d_1-d_2\|.$$
Now we define the IFS using the above equations
\begin{equation}\label{eq_5}
\text{IFS} \{K;\f_\omega : \omega \in \Sigma^{(n)}\}~~ \text{where}~~ \f_\omega(\x,y) = (L_\omega(\x), F_\omega(\x,y)).
\end{equation}
This is a contractive IFS; hence it has a unique attractor, say $G.$ The function whose graph is $G$ is denoted by $f^\alpha$, and it passes through the interpolation points $\{(q_\omega, f(q_\omega)) : \omega \in \Sigma^{(n)}\}.$
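Since pushing a graph point forward through $\f_i$ produces an exact instance of the relation $f^\alpha(L_i(\x)) = \alpha_i f^\alpha(\x) + f(L_i(\x)) - \alpha_i b(\x)$, the attractor can be generated on vertex sets by iterating the maps on the corner data. The Python sketch below (for $n=1$; the choices of $f$, of the affine base function $b$, and of the scaling factors are our own illustrative assumptions) does this and verifies the interpolation conditions:

```python
import math

q = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
alpha = [0.3, -0.2, 0.25]                 # illustrative scaling factors

def f(p):                                 # any continuous f will do
    return p[0]**2 + p[1]

# affine base b(x, y) = c1*x + c2*y + c3 agreeing with f at the corners
c3 = f(q[0])
c1 = f(q[1]) - c3
c2 = (f(q[2]) - c3 - c1 * 0.5) / (math.sqrt(3) / 2)
def base(p):
    return c1 * p[0] + c2 * p[1] + c3

def L(i, p):
    return ((p[0] + q[i][0]) / 2, (p[1] + q[i][1]) / 2)

# start from the graph over the corners, where f^alpha = f, and push
# forward with  f^alpha(L_i(x)) = alpha_i f^alpha(x) + f(L_i(x)) - alpha_i b(x)
graph = {q[i]: f(q[i]) for i in range(3)}
for _ in range(6):
    graph = {L(i, p): alpha[i] * v + f(L(i, p)) - alpha[i] * base(p)
             for p, v in graph.items() for i in range(3)}

for i in range(3):                        # interpolation at the corners
    assert abs(graph[q[i]] - f(q[i])) < 1e-9
# interpolation at the midpoint q_12 = (1/2, 0) as well
assert abs(graph[(0.5, 0.0)] - f((0.5, 0.0))) < 1e-9
print(len(graph))                         # roughly the level-6 vertex count
```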
\begin{definition}[\textbf{{$\alpha$-fractal functions}}]\label{def_1}
Let $f^\alpha$ be the function whose graph is an attractor of IFS defined in \eqref{eq_3} - \eqref{eq_5}. Then we call $f^\alpha$ as the $\alpha$-fractal function associated to $f$ with respect to fixed $n\in \N$ and $\alpha = \{\alpha_\omega \in (-1,1): \omega \in \Sigma^{(n)}\}.$
\end{definition}
The above function $f^\alpha$ satisfies the functional equation
\begin{equation}\label{fix-eq1}
f^\alpha(\x) = f(\x) + \alpha_\omega(f^\alpha - b)\circ L_\omega^{-1}(\x),~~ \forall~~ \x \in L_\omega(\s).
\end{equation}
In the above construction if we take $b = T(f)$ where $T : \C(\s) \to \C(\s)$ is a bounded linear operator satisfying $T(f)(q_1) = f(q_1), T(f)(q_2) = f(q_2)$ and $T(f)(q_3) = f(q_3)$ then $f^\alpha$ satisfies the functional equation
\begin{equation}\label{fix-eq2}
f^\alpha(\x) = f(\x) + \alpha_\omega(f^\alpha - T(f))\circ L_\omega^{-1}(\x),~~ \forall~~ \x \in L_\omega(\s).
\end{equation}
\begin{definition}[\textbf{$\alpha$-fractal operator}]
We define the $\alpha$-fractal operator $\FO^{\alpha} = \FO^\alpha_{n,T}$ on $\C(\s)$ with respect to fixed $n, \alpha$ and $T$ as
$$\FO^\alpha(f) = f^\alpha $$ where $f^\alpha$ is defined in definition \ref{def_1}.
\end{definition}
\begin{theorem}
Let $f$ be a function in $\dom(\E)$ and let $f^{\alpha}$ be the $\alpha$-fractal function corresponding to $f.$ Then the function $f^\alpha$ belongs to $\dom(\E)$ if $\|\alpha\|_\infty \leq \frac{1}{\sqrt{3\times 5^n}},$ where $n$ is the fixed number associated to the interpolation points $\{(q_\omega, f(q_\omega)) : \omega \in \Sigma^{(n)}\}.$
\end{theorem}
\begin{proof}
Let $\|\alpha\|\coloneqq \max\{|\alpha_\omega| : \omega \in \Sigma^{(n)}\}$ and $\|T\| \coloneqq \sup\{|T(x)| : \|x\|\leq 1\}.$ By using functional equation \eqref{fix-eq2} and the fact that $(A+B-C)^2 \leq 3(A^2+B^2+C^2),$ we can deduce the following inequality for all $\x,\y \in L_\omega(\s).$
\begin{align*}
|f^\alpha(\x)&-f^\alpha(\y)|^2 \\&= |f(\x)-f(\y) + \alpha_\omega(f^\alpha - T(f))\circ L_\omega^{-1}(\x)
-\alpha_\omega(f^\alpha - T(f))\circ L_\omega^{-1}(\y)|^2\\
& \leq 3|f(\x)-f(\y)|^2 + 3\alpha_\omega^2|f^\alpha\circ L_\omega^{-1}(\x)-f^\alpha\circ L_\omega^{-1}(\y)|^2 +3\alpha_\omega^2|T(f)\circ L_\omega^{-1}(\x) - T(f)\circ L_\omega^{-1}(\y)|^2.
\end{align*}
Hence, for $m\geq n$ we can estimate the $m^{th}$ level energy of $f^\alpha$ as
\begin{align*}
\E^{(m)}(f^\alpha)& = \left(\frac{5}{3}\right)^m E^{(m)}(f^\alpha)\\ &= \left(\frac{5}{3}\right)^m \sum_{\x\thicksim_m \y} |f^\alpha(\x)-f^\alpha(\y)|^2\\
&\leq \left(\frac{5}{3}\right)^m\sum_{\x\thicksim_m \y} \left( 3|f(\x)-f(\y)|^2 + 3\alpha_\omega^2|f^\alpha\circ L_\omega^{-1}(\x)-f^\alpha\circ L_\omega^{-1}(\y)|^2 +3\alpha_\omega^2|T(f)\circ L_\omega^{-1}(\x) - T(f)\circ L_\omega^{-1}(\y)|^2\right)\\
& \leq 3\E^{(m)}(f) + \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2\E^{(m-n)}(f^\alpha) + \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2 \|T\|\E^{(m-n)}(f).
\end{align*}
This gives,
$$\E^{(m)}(f^\alpha)- \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2\E^{(m-n)}(f^\alpha) \leq 3\E^{(m)}(f) + \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2 \|T\|\E^{(m-n)}(f).$$
Then taking limit as $m\to \infty$ we get,
$$\E(f^\alpha)- \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2\E(f^\alpha) \leq 3\E(f) + \left( \frac{5}{3}\right)^n 3^{n+1}\|\alpha\|^2 \|T\|\E(f).$$
This implies,
$$0\leq \E(f^\alpha)\left(1- {5}^n 3\|\alpha\|^2\right) \leq \E(f)\left(3 + {5}^n 3\|\alpha\|^2 \|T\|\right)$$
if $1- {5}^n 3\|\alpha\|^2\geq 0,$ that is, $\|\alpha\| \leq \frac{1}{\sqrt{3\times 5^n}}.$
Therefore, $\E(f^\alpha) < \infty.$ This completes the proof.
\end{proof}
\begin{corollary}
If $T$ is a linear bounded operator with respect to uniform norm and $\|\alpha\|_{\infty} \leq \frac{1}{\sqrt{3\times 5^n}}$ then
$\FO^\alpha : \dom(\E) \to \dom(\E)$ is linear and bounded with respect to $\|\cdot\|_{\E}$ norm and
$\|\FO^\alpha\|_{\E} \leq \sqrt{\frac{\left(3 + {5}^n 3\|\alpha\|^2 \|T\|\right)}{\left(1- {5}^n 3\|\alpha\|^2\right)}}.$
\end{corollary}
\begin{corollary}
If $f \in \dom(\E)$ then $\|f^\alpha - f\|_\E \leq \|\alpha\|_\infty^2 3^n \|f^\alpha - Tf\|_\E.$
\end{corollary}
\begin{proof}
This follows directly from the functional equation \eqref{fix-eq2}.
\end{proof}
\section{Box dimension of graph of fractal functions}
In this section we obtain upper and lower bounds for the box dimension of the graphs of the fractal functions discussed in the preceding section. Throughout we fix $q_1 = (0,0), q_2 =(1,0),q_3 = (\frac{1}{2},\frac{\sqrt{3}}{2}),n=1,$ and take the interpolation points $$\{(q_1,f(q_1)),(q_2,f(q_2)),(q_3,f(q_3)),(q_{12},f(q_{12})),(q_{13},f(q_{13})),(q_{23},f(q_{23}))\}.$$
We construct the IFS $\{K, \f_1,\f_2,\f_3\}.$ We define
$$L_i (\x) = \frac{1}{2}(\x-q_i) + q_i$$ and
$$F_i(\x,y) = \alpha_i y + f(L_i(\x)) - \alpha_i b(\x).$$ Here $b \in \C(\s)$ and satisfies $b(q_i) = f(q_i)$ for all $i=1,2,3.$
So, $\f_i(\x,y) = (L_i(\x),F_i(\x,y))$ for all $i= 1,2,3.$ The attractor of this IFS is the graph of a function, and we estimate the box dimension of this graph under certain conditions.
\begin{theorem}
Let $f$ and $b$ be H\"older continuous functions with exponents $\eta_1$ and $\eta_2,$ respectively, and suppose the interpolation points are not coplanar. Let $f^\alpha$ be the $\alpha$-fractal function corresponding to $f$ and let $G = \{(\x, f^\alpha(\x)) : \x \in \s\}$ be the graph of $f^\alpha.$ Set $\psi = \sum_{i=1}^3 |\alpha_i|$ and $\eta = \min\{\eta_1, \eta_2\}.$ Then the box dimension of $G$ has the following bounds:\\
(I) If $\frac{\psi 2^\eta}{3} \leq 1$, then $\frac{\log 3}{\log 2} \leq{\dim}_B(G) \leq 1-\eta + \frac{\log 3 }{\log 2}.$\\
(II) If $\frac{\psi 2^\eta}{3} > 1$, then $\frac{\log 3}{\log 2} \leq{\dim}_B(G) \leq 1+ \frac{\log \psi}{\log 2}.$
\end{theorem}
\begin{proof}
To estimate the box dimension of $G,$ consider covers of $G$ by cubes with side length $\frac{1}{2^k}.$ Let $\n(k)$ denote the minimum number of cubes of size $ \frac{1}{2^k} \times \frac{1}{2^k}\times\frac{1}{2^k} $ needed to cover $G.$ For a given $\omega \in \Sigma^k,$ we define $\mathbf{A}(k,\omega)$ as a collection of cubes of size $ \frac{1}{2^k} \times \frac{1}{2^k}\times\frac{1}{2^k}$ with pairwise disjoint interiors, and we write $\n(k,\omega)$ for the number of cubes in $\mathbf{A}(k,\omega).$ Hence $$\n(k) = \sum_{\omega \in \Sigma^k}\n(k,\omega).$$ Applying the map $\f_i$, $i=1,2,3,$ to the set $\mathbf{A}(k,\omega),$ we observe that its image is contained in $\f_i \circ \f_\omega (\blacksquare)\times \R,$ where $\blacksquare$ denotes the square $[0,1]\times[0,1].$ So, $\n(k+1) \leq \sum_{i= 1}^3 \sum_{\omega \in \Sigma^k} \n(k+1,i\omega).$
As $f$ and $b$ are H\"older continuous, there exist $s_1,s_2\geq 0$ and $\eta_1, \eta_2 \geq 0$ such that
$$|f(L_i(x)) - f(L_i(y)) | \leq \frac{s_1}{2^{(k+1)\eta_1}} $$ and
$$|b(x) - b(y)| \leq \frac{s_2}{2^{k\eta_2}}$$ whenever $x,y \in L_\omega(\blacksquare)$ and $\omega \in \Sigma^k.$ Using \eqref{fix-eq1} we see that $\f_i(\mathbf{A}(k,\omega))$ is contained in a cuboid whose base is a square of side length $\frac{1}{2^{k+1}}$ and whose height is $\frac{|\alpha_i|\n(k,\omega)}{2^k} + \frac{s_1}{2^{(k+1)\eta_1}} + \frac{|\alpha_i|s_2}{2^{k\eta_2}}.$
\noindent Thus,\begin{align*}
\n(k+1,i\omega) &\leq \left(\frac{|\alpha_i|\n(k,\omega)}{2^k} + \frac{s_1}{2^{(k+1)\eta_1}} + \frac{|\alpha_i|s_2}{2^{k\eta_2}}\right)\times 2^{k+1} +2 \\
&= 2|\alpha_i|\n(k,\omega) + s_1 2^{(k+1)(1-\eta_1)} + |\alpha_i|s_2 2^{k(1-\eta_2)+1} +2.
\end{align*}
Summing over $i$ and $\omega$ we obtain
\begin{align*}
\n(k+1) &=
\sum_{i= 1}^3 \sum_{\omega \in \Sigma^k} \n(k+1,i\omega)\\ &\leq \sum_{i= 1}^3 \sum_{\omega \in \Sigma^k} 2|\alpha_i|\n(k,\omega) + s_1 2^{(k+1)(1-\eta_1)} + |\alpha_i|s_2 2^{k(1-\eta_2)+1} +2\\
&= \sum_{\omega \in \Sigma^k} 2\psi\n(k,\omega) + 3 s_1 2^{(k+1)(1-\eta_1)} + \psi s_2 2^{k(1-\eta_2)+1} +6\\
&= 2\psi\n(k)+ 3^k 3 s_1 2^{(k+1)(1-\eta_1)} + 3^k \psi s_2 2^{k(1-\eta_2)+1} +3^k 6 \\
&\leq 2\psi\n(k)+ 3^k 3 s_1 2^{(k+1)(1-\eta)} + 3^k \psi s_2 2^{k(1-\eta)+1} +3^k 6\\
&\leq 2\psi\n(k) + 3^k 2^{(k+1)(1-\eta)} \left(3s_1+2\psi s_2+ 6\right)\\
&= 2\psi\n(k) + 3^k 2^{(k+1)(1-\eta)}K,
\end{align*}
where $K = 3s_1+2\psi s_2+ 6.$
Applying the above inequality repeatedly we get,
\begin{align*}
\n(k+1) &\leq 2\psi\n(k) +K 3^k 2^{(k+1)(1-\eta)} \\
&\leq 2\psi \left(2\psi\n(k-1) + K 3^{k-1} 2^{k(1-\eta)}\right) + K 3^k 2^{(k+1)(1-\eta)}\\
&= 2^2\psi^2 \n(k-1) + 2\psi K 3^{k-1} 2^{k(1-\eta)} + K 3^k 2^{(k+1)(1-\eta)}\\
&\leq 2^2\psi^2 \left(2\psi \n(k-2) + 3^{k-2} 2^{(k-2)(1-\eta)} \right)+ 2\psi K 3^{k-1} 2^{k(1-\eta)} + K 3^k 2^{(k+1)(1-\eta)}\\
&= 2^3\psi^3 \n(k-2) + K2^2\psi^2 3^{k-2} 2^{(k-2)(1-\eta)}+ 2\psi K 3^{k-1} 2^{k(1-\eta)} + K 3^k 2^{(k+1)(1-\eta)}\\
&= 2^3\psi^3 \n(k-2) + K 3^{k}2^{(k+1)(1-\eta)}\left(1+ 2\psi 3^{-1}2^{\eta-1} + \left(2\psi 3^{-1}2^{\eta-1}\right)^2 \right).
\end{align*}
Continuing this process $k$ times we get
\begin{equation}\label{fin}
\n(k+1) \leq 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(1+\psi 3^{-1}2^{\eta} + \left(\psi 3^{-1}2^{\eta}\right)^2 + \cdots + \left(\psi 3^{-1}2^{\eta}\right)^k \right).
\end{equation}
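The unrolled bound \eqref{fin} can be checked numerically by iterating the one-step recursion with equality. The following Python sketch (our own illustration, not part of the proof; $\psi$, $\eta$, $K$ and $\n(0)$ are arbitrary sample values) confirms that the iteration reproduces the closed form.

```python
# Numerical sanity check (our own illustration, not part of the proof):
# treating the recursive bound n(k+1) <= 2*psi*n(k) + K*3^k*2^((k+1)(1-eta))
# as an equality, its unrolled form should reproduce the closed expression
# in (fin).  psi, eta, K and n(0) below are arbitrary sample values.

def unroll(psi, eta, K, n0, steps):
    """Iterate n_{k+1} = 2*psi*n_k + K * 3**k * 2**((k+1)*(1-eta))."""
    n = n0
    for k in range(steps):
        n = 2 * psi * n + K * 3 ** k * 2 ** ((k + 1) * (1 - eta))
    return n

def closed_form(psi, eta, K, n0, steps):
    """Right-hand side of (fin), with steps = k + 1."""
    k = steps - 1
    geom = sum((psi * 2 ** eta / 3) ** j for j in range(k + 1))
    return (2 ** (k + 1) * psi ** (k + 1) * n0
            + K * 3 ** k * 2 ** ((k + 1) * (1 - eta)) * geom)

psi, eta, K, n0 = 1.7, 0.4, 9.0, 1.0
for steps in range(1, 8):
    a, b = unroll(psi, eta, K, n0, steps), closed_form(psi, eta, K, n0, steps)
    assert abs(a - b) < 1e-8 * max(1.0, abs(b))
```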
\underline{Case I} Suppose that $\frac{\psi 2^\eta}{3} \leq 1.$ Then we have the following bound for \eqref{fin}:
\begin{align*}
\n(k+1) &\leq 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(1+\psi 3^{-1}2^{\eta} + \left(\psi 3^{-1}2^{\eta}\right)^2 + \cdots + \left(\psi 3^{-1}2^{\eta}\right)^k \right)\\
& \leq 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(k+1\right)\\
& \leq 2^{k+1} \frac{3^{k+1}}{2^{\eta(k+1)}} \n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(k+1\right)\\
& = 2^{(k+1)(1-\eta)} 3^{k+1} \n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(k+1\right)\\
& \leq 2^{(k+1)(1-\eta)} 3^{k+1} (k+1)\left(\n(0) + \frac{K}{3} \right).
\end{align*}
In the above estimate, the third inequality follows from the assumption $\frac{\psi 2^\eta}{3} \leq 1,$ which implies $\psi^{k+1} \leq \frac{3^{k+1}}{2^{\eta(k+1)}}.$ This gives
\begin{align*}
{\dim}_B(G) &\leq \lim_{k \to \infty} \frac{\log \n(k+1)}{-\log 2^{-(k+1)}} \leq \lim_{k \to \infty} \frac{\log\left({2^{(k+1)(1-\eta)} 3^{k+1} (k+1)\left(\n(0) + K/3 \right)}\right)}{-\log 2^{-(k+1)}}\\
& =\lim_{k \to \infty} \frac{\log{2^{(k+1)(1-\eta)}}}{-\log 2^{-(k+1)}} +\lim_{k \to \infty} \frac{\log {3^{k+1}} }{-\log 2^{-(k+1)}} + \lim_{k \to \infty}\frac{\log(k+1)}{-\log 2^{-(k+1)}} + \lim_{k \to \infty}\frac{\log\left(\n(0) + K/3 \right)}{-\log 2^{-(k+1)}}\\
&= 1-\eta + \frac{\log 3 }{\log 2}.
\end{align*}
\underline{Case II} Suppose now that $\frac{\psi 2^\eta}{3} >1.$ Then \eqref{fin} admits the following estimate:
\begin{align*}
\n(k+1) &\leq 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)}\left(1+\psi 3^{-1}2^{\eta} + \left(\psi 3^{-1}2^{\eta}\right)^2 + \cdots + \left(\psi 3^{-1}2^{\eta}\right)^k \right)\\
& = 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)} \left(\frac{(\psi 3^{-1}2^{\eta})^{k+1}-1}{\psi 3^{-1}2^{\eta} -1}\right)\\
&\leq 2^{k+1} \psi^{k+1}\n(0) + K3^{k}2^{(k+1)(1-\eta)} \left(\frac{(\psi 3^{-1}2^{\eta})^{k+1}}{\psi 3^{-1}2^{\eta} -1}\right)\\
&= 2^{k+1} \psi^{k+1}\n(0) + \frac{K}{3}\, 2^{k+1} \left(\frac{\psi^{k+1}}{\psi 3^{-1}2^{\eta} -1}\right)\\
&\leq 2^{k+1} \psi^{k+1}\n(0) + K 2^{k+1} \left(\frac{\psi^{k+1}}{\psi 3^{-1}2^{\eta} -1}\right)\\
&= 2^{k+1} \psi^{k+1} \left(\n(0) + \left(\frac{K}{\psi 3^{-1}2^{\eta} -1}\right)\right).\\
\end{align*}
Hence, we estimate the box dimension as follows
\begin{align*}
{\dim}_B(G) &\leq \lim_{k \to \infty} \frac{\log \n(k+1)}{-\log 2^{-(k+1)}} \leq \lim_{k \to \infty} \frac{\log \left(2^{k+1} \psi^{k+1} \left(\n(0) + \left(\frac{K}{\psi 3^{-1}2^{\eta} -1}\right)\right)\right)}{-\log 2^{-(k+1)}}\\
&= \lim_{k \to \infty} \frac{\log{2^{k+1}}}{-\log 2^{-(k+1)}} + \lim_{k \to \infty}\frac{\log(\psi^{k+1})}{-\log 2^{-(k+1)}} + \lim_{k \to \infty} \frac{\log\left(\n(0) + \left(\frac{K}{\psi 3^{-1}2^{\eta} -1}\right)\right)}{-\log 2^{-(k+1)}}\\
&= 1+ \frac{\log \psi}{\log 2}.
\end{align*}
This completes the proof.
\end{proof}
\section{Box dimension of graph of Harmonic functions}
In this section we provide upper and lower bounds for the box dimension of graphs of harmonic functions. Later we also give upper and lower bounds for the box dimension of the graph of any function in $\dom(\E).$ Since we cannot find an IFS whose attractor is the graph of a harmonic function, we use the properties of harmonic functions to estimate the box dimension.
\begin{figure}
\begin{center}
\includegraphics[height=6cm, width=8cm]{S1}
\caption{Level-1 Sierpi\'nski gasket ($\s^{(1)}$)}
\label{fig-A}
\end{center}
\end{figure}
\begin{lemma}\label{lem1}
Let $h$ be a harmonic function satisfying $|h(\x)-h(\y)| \leq \left(\frac{6}{5}\right)^m\|\x-\y\|$ for each pair $\x,\y$ in the same cell of $\s^{(m)}.$ Then $|h(\tilde{\x})-h(\tilde{\y})| \leq \left(\frac{6}{5}\right)^{m+1}\|\tilde{\x}-\tilde{\y}\|$ for each pair $\tilde{\x},\tilde{\y}$ belonging to the same cell of $\s^{(m+1)},$ where this cell is a subcell of the cell containing $\x$ and $\y.$
\end{lemma}
\begin{proof}
We may regard Fig.~\ref{fig-A} as one cell of the level-$m$ Sierpi\'nski gasket and proceed as follows: we estimate the difference of $h$ over each pair of nodes in the same cell of $\s^{(m+1)},$ using the ``$\frac{1}{5}$--$\frac{2}{5}$'' rule and the triangle inequality to obtain the following inequalities.
\begin{equation}
\begin{split}
\left|h(x)-h(z_1)\right| &= \left|h(x)- \frac{2}{5}h(x) - \frac{2}{5}h(z) - \frac{1}{5}h(y)\right|= \left|\frac{3}{5}h(x)-\frac{2}{5}h(z) - \frac{1}{5}h(y)\right| \\
& \leq \frac{2}{5}\left|h(x)-h(z)\right| + \frac{1}{5}\left|h(x)-h(y)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|x-z\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|x-y\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|x-z\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|x-z_1\|
=\left(\frac{6}{5}\right)^{m+1}\|x-z_1\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(x)-h(z_2)\right| &= \left|h(x)- \frac{2}{5}h(x) - \frac{2}{5}h(y) - \frac{1}{5}h(z)\right|= \left|\frac{3}{5}h(x)-\frac{2}{5}h(y) - \frac{1}{5}h(z)\right| \\
& \leq \frac{2}{5}\left|h(x)-h(y)\right| + \frac{1}{5}\left|h(x)-h(z)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|x-y\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|x-z\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|x-y\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|x-z_2\|
=\left(\frac{6}{5}\right)^{m+1}\|x-z_2\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(z_2)-h(z_1)\right| &= \left|\frac{2}{5}h(x) + \frac{2}{5}h(y) + \frac{1}{5}h(z)- \frac{2}{5}h(x) - \frac{2}{5}h(z) - \frac{1}{5}h(y)\right|= \left|\frac{1}{5}h(y) - \frac{1}{5}h(z)\right| \\
& =\frac{1}{5}\left|h(y)-h(z)\right|\leq \frac{1}{5}\left(\frac{6}{5}\right)^m\|y-z\| = \frac{2}{5}\left(\frac{6}{5}\right)^m\|z_2-z_1\| \leq \left(\frac{6}{5}\right)^{(m+1)}\|z_2-z_1\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(z)-h(z_1)\right| &= \left|h(z)-\frac{2}{5}h(x)- \frac{2}{5}h(z) - \frac{1}{5}h(y)\right|= \left|\frac{3}{5}h(z)-\frac{2}{5}h(x)- \frac{1}{5}h(y)\right| \\
& \leq \frac{2}{5}\left|h(z)-h(x)\right| + \frac{1}{5}\left|h(z)-h(y)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|z-x\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|z-y\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|z-x\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|z-z_1\|
=\left(\frac{6}{5}\right)^{m+1}\|z-z_1\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(z)-h(z_3)\right| &= \left|h(z)-\frac{2}{5}h(z)- \frac{2}{5}h(y) - \frac{1}{5}h(x)\right|= \left|\frac{3}{5}h(z)-\frac{2}{5}h(y)- \frac{1}{5}h(x)\right| \\
& \leq \frac{2}{5}\left|h(z)-h(y)\right| + \frac{1}{5}\left|h(z)-h(x)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|z-y\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|z-x\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|z-y\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|z-z_3\|
=\left(\frac{6}{5}\right)^{m+1}\|z-z_3\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(z_3)-h(z_1)\right| &= \left|\frac{2}{5}h(z) + \frac{2}{5}h(y) + \frac{1}{5}h(x)- \frac{2}{5}h(x) - \frac{2}{5}h(z) - \frac{1}{5}h(y)\right|= \left|\frac{1}{5}h(y) - \frac{1}{5}h(x)\right| \\
& =\frac{1}{5}\left|h(y)-h(x)\right|\leq \frac{1}{5}\left(\frac{6}{5}\right)^m\|y-x\| = \frac{2}{5}\left(\frac{6}{5}\right)^m\|z_3-z_1\| \leq \left(\frac{6}{5}\right)^{(m+1)}\|z_3-z_1\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(y)-h(z_2)\right| &= \left|h(y)-\frac{2}{5}h(y)- \frac{2}{5}h(x) - \frac{1}{5}h(z)\right|= \left|\frac{3}{5}h(y)-\frac{2}{5}h(x)- \frac{1}{5}h(z)\right| \\
& \leq \frac{2}{5}\left|h(y)-h(x)\right| + \frac{1}{5}\left|h(y)-h(z)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|y-x\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|y-z\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|y-x\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|y-z_2\|
=\left(\frac{6}{5}\right)^{m+1}\|y-z_2\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(y)-h(z_3)\right| &= \left|h(y)-\frac{2}{5}h(y)- \frac{2}{5}h(z) - \frac{1}{5}h(x)\right|= \left|\frac{3}{5}h(y)-\frac{2}{5}h(z)- \frac{1}{5}h(x)\right| \\
& \leq \frac{2}{5}\left|h(y)-h(z)\right| + \frac{1}{5}\left|h(y)-h(x)\right|\leq \frac{2}{5}\left(\frac{6}{5}\right)^m\|y-z\| + \frac{1}{5}\left(\frac{6}{5}\right)^m\|y-x\| \\
&\leq \frac{3}{5}\left(\frac{6}{5}\right)^m\|y-z\| = \frac{6}{5}\left(\frac{6}{5}\right)^{m}\|y-z_3\|
=\left(\frac{6}{5}\right)^{m+1}\|y-z_3\|.
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\left|h(z_3)-h(z_2)\right| &= \left|\frac{2}{5}h(z) + \frac{2}{5}h(y) + \frac{1}{5}h(x)- \frac{2}{5}h(x) - \frac{2}{5}h(y) - \frac{1}{5}h(z)\right|= \left|\frac{1}{5}h(z) - \frac{1}{5}h(x)\right| \\
& =\frac{1}{5}\left|h(z)-h(x)\right|\leq \frac{1}{5}\left(\frac{6}{5}\right)^m\|z-x\| = \frac{2}{5}\left(\frac{6}{5}\right)^m\|z_3-z_2\| \leq \left(\frac{6}{5}\right)^{(m+1)}\|z_3-z_2\|.
\end{split}
\end{equation}
Hence, the conclusion is true for each pair of nodes in the same cell of $\s^{(m+1)}.$
\end{proof}
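The ``$\frac{1}{5}$--$\frac{2}{5}$'' extension rule used above is easy to verify numerically. The following Python sketch (our own illustration, not part of the proof) extends boundary values over one cell of unit side length ($m=0$) and checks the pairwise inequality for all pairs of nodes in the subdivided cell (slightly more than the lemma claims); the tolerance only guards against floating-point rounding.

```python
import itertools
import math
import random

# Numerical check of the lemma (our own illustration, not part of the proof):
# over one cell of unit side length (m = 0), extend boundary values by the
# "1/5-2/5" rule and verify every pairwise difference in the subdivided cell
# is at most (6/5) times the Euclidean distance between the nodes.

x, y, z = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
z1 = tuple((a + b) / 2 for a, b in zip(x, z))  # midpoint of x and z
z2 = tuple((a + b) / 2 for a, b in zip(x, y))  # midpoint of x and y
z3 = tuple((a + b) / 2 for a, b in zip(y, z))  # midpoint of y and z

def extend(hx, hy, hz):
    """One step of the 1/5-2/5 harmonic extension rule."""
    return {
        x: hx, y: hy, z: hz,
        z1: (2 * hx + 2 * hz + hy) / 5,
        z2: (2 * hx + 2 * hy + hz) / 5,
        z3: (2 * hz + 2 * hy + hx) / 5,
    }

random.seed(0)
for _ in range(200):
    hx = random.uniform(-1.0, 1.0)
    # enforce the m = 0 hypothesis |h(p) - h(q)| <= ||p - q|| = 1 on the boundary
    hy = hx + random.uniform(-1.0, 1.0)
    hz = hx + random.uniform(-1.0, 1.0)
    if abs(hy - hz) > 1.0:
        continue
    h = extend(hx, hy, hz)
    for p, q in itertools.combinations(h, 2):
        assert abs(h[p] - h[q]) <= (6 / 5) * math.dist(p, q) + 1e-12
```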
\begin{theorem}
Let $h$ be a harmonic function. Then the box dimension of the graph of $h$ is at most $\frac{\log(18/5)}{\log2}\approx 1.8479.$
\end{theorem}
\begin{proof}
Let $h$ be a harmonic function and let $G_h$ be the graph of $h.$ Applying Lemma \ref{lem1} repeatedly, we get
$$|h({\x})-h({\y})| \leq \left(\frac{6}{5}\right)^{m} \|h\|_\E \|\x-\y\|$$
whenever $\x$ and $\y$ belong to the same cell of $\s^{(m)}$, and this holds for every $m\geq 0.$ Hence, to cover the part of the graph above one cell of $\s^{(m)}$ with cubes of side length $(1/2)^{m}$ we need at most $\left(\frac{6}{5}\right)^{m} \|h\|_\E+2$ cubes, so at most $3^{m}\left(\left(\frac{6}{5}\right)^{m} \|h\|_\E+2\right)$ cubes cover the whole graph.
Hence, we obtain the following upper bound for the box-counting dimension:
\begin{align*}
{\dim}_B(G_h) &\leq \overline\lim_{\de \to 0} \frac{\log N_\delta(G_h)}{-\log \delta} \leq \overline\lim_{m \to \infty} \frac{\log{3^{m}\left(\left(\frac{6}{5}\right)^{m} \|h\|_\E+2\right)}}{-\log 2^{-m}}\\
& = \frac{\log(18/5)}{\log 2}\approx 1.8479.
\end{align*}
\end{proof}
\begin{corollary}
Let $h$ be a piecewise harmonic function on the Sierpi\'nski gasket. Then the box dimension of its graph is at most $\frac{\log(18/5)}{\log 2} \approx 1.8479.$
\end{corollary}
\begin{proof}
Let $h_1, h_2,\ldots,h_{\ell}$ be the finitely many harmonic pieces of $h.$ Then $G_h = \cup_{i=1}^{\ell} G_{h_i}.$ Hence, by the finite stability property of the upper box dimension, $${\dim}_B(G_h)\leq \overline{\dim}_B(G_h) = \overline{\dim}_B(\cup_{i=1}^{\ell} G_{h_i}) = \max_{1\leq i\leq\ell}\overline{\dim}_B G_{h_i} \leq \frac{\log(18/5)}{\log 2},$$ where the last inequality holds because, by the preceding theorem, the upper box dimension of the graph of each harmonic piece $h_i$ is at most $\frac{\log(18/5)}{\log 2}.$
\end{proof}
\begin{lemma}\label{lem2}
Let $u$ be any function in $\dom(\E).$ Then we have
\begin{equation*}
|u(\x) - u(\y)| \leq \left(\frac{3}{5}\right)^{m/2} \sqrt{\E(u)}
\end{equation*}
whenever $\x$ and $\y$ belong to the same cell of $\s^{(m)}.$
\end{lemma}
\begin{proof}
Let $\x,\y$ belong to the same cell of $\s^{(m)}.$ Then
\begin{align*}
|u(\x)-u(\y)|^2 \leq \sum_{\x\thicksim_m\y} |u(\x)-u(\y)|^2 &= \left(\frac{3}{5}\right)^m\left(\frac{5}{3}\right)^m \sum_{\x\thicksim_m\y} |u(\x)-u(\y)|^2\\
&= \left(\frac{3}{5}\right)^m \E^{(m)}(u) \\
&\leq \left(\frac{3}{5}\right)^m \E(u).
\end{align*}
Hence, $$|u(\x)-u(\y)| \leq \left(\frac{3}{5}\right)^{m/2} \sqrt{\E(u)}.$$
This completes the proof.
\end{proof}
\begin{lemma}[\cite{KF}, Proposition 2.5]\label{lem3}
Let $F \subset \R^n$ and suppose that $f : F \to \R^m$ is a Lipschitz transformation, that is, there exists $c\geq 0$ such that $|f (x)- f (y)| \leq c|x- y|$ for every $x, y \in F.$ Then $\underline{\dim}_B f (F) \leq \underline{\dim}_B F$ and
$\overline{\dim}_B f (F) \leq \overline{\dim}_B F.$
\end{lemma}
\begin{theorem}\label{thm1}
Let $f$ be any function in $\dom(\E).$ Then the box dimension of the graph of $f$ is bounded below by $\frac{\log 3 }{\log 2}$ and above by $\frac{\log(108/5)}{2\log 2}\approx 2.21648.$
\end{theorem}
\begin{proof}
Let $f$ be any function in $\dom(\E)$ and let $G_f$ be the graph of $f.$
Define $P : G_f \to \R^2$ by $P(\x,f(\x)) = \x.$ Clearly, $P$ is a Lipschitz map and $P(G_f) =\s.$ Hence, by Lemma \ref{lem3}, $\underline{\dim}_B \s = \underline{\dim}_B P(G_f) \leq \underline{\dim}_B G_f,$ which implies $\dim_B G_f \geq \frac{\log 3}{\log 2}.$
\noindent From Lemma \ref{lem2} we have
$$|f(\x)-f(\y)| \leq \left(\frac{3}{5}\right)^{m/2} \sqrt{\E(f)}$$
whenever $\x,\y$ belong to the same cell of $\s^{(m)}$, and this holds for every $m\geq 0.$ Hence, to cover the part of the graph above one cell of $\s^{(m)}$ with cubes of side length $2^{-m}$ we need at most
$\left(\frac{12}{5}\right)^{m/2} \sqrt{\E(f)}+2$ cubes, so at most $3^m\left(\left(\frac{12}{5}\right)^{m/2} \sqrt{\E(f)}+2\right)$ cubes cover the whole graph.
Hence, we obtain the upper bound for the box-counting dimension as
\begin{align*}
{\dim}_B(G_f) &\leq \overline\lim_{\de \to 0} \frac{\log \n_\delta(G_f)}{-\log \delta} \leq \overline\lim_{m \to \infty} \frac{\log \left(\left(\frac{108}{5}\right)^{m/2} \sqrt{\E(f)}+2\times3^m\right)}{-\log 2^{-m}}\\
& = \frac{\log(108/5)}{2\log 2}= \frac{\log(21.6)}{\log 4} \approx 2.21648.
\end{align*}
\end{proof}
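The numerical constants quoted in this section can be confirmed directly; the short Python check below is an illustration only, with an arbitrary constant $C$ standing in for the factor depending on the function, and also verifies that the covering count $3^m((6/5)^m C + 2)$ produces the stated exponent for large $m$.

```python
import math

# Quick numerical confirmation of the constants quoted above (illustration
# only; C is an arbitrary stand-in for the factor depending on the function).
harmonic_bound = math.log(18 / 5) / math.log(2)
energy_bound = math.log(108 / 5) / (2 * math.log(2))

assert abs(harmonic_bound - 1.8480) < 5e-4
assert abs(energy_bound - 2.21648) < 5e-4

# The harmonic bound is the large-m limit of log(3^m ((6/5)^m C + 2)) / (m log 2).
C, m = 7.0, 400
empirical = math.log(3 ** m * ((6 / 5) ** m * C + 2)) / (m * math.log(2))
assert abs(empirical - harmonic_bound) < 0.02
```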
\begin{remark}\label{rem2}
Any constant function $f : \s \to \R$ is harmonic, $f \in \dom (\E)$ and
$$\dim_H(G_f)=\underline{\dim}_B(G_f) = \overline{\dim}_B(G_f) = \dim_B(G_f) = \frac{\log 3}{\log 2}.$$
Hence, we can say that the lower bound in Theorem \ref{thm1} is attained.
\end{remark}
\footnotesize
\nocite{}
\bibliographystyle{abbrv}
\section{Introduction}
Let $(U_t)$ be a one-dimensional time-homogeneous diffusion process satisfying the stochastic differential equation
\[ dU_t = \nu(U_t) dt + \sigma(U_t) dW_t, \qquad U_0 = u_0, \]
where $(W_t)$ denotes a Brownian motion. The aim of this note is to bound, from above and below, the transition probability density function for $(U_t)$, $p_U(t, u_0, w) := \frac{d}{dw} P_{U}(t,u_0,w)$, where
$P_{U}(t,u_0,w) := \Pr(U_t \leq w | U_0 = u_0)$. While the focus is on the one-dimensional case, the results are easily extended to some special cases in $\mathbb{R}^n$, $n \geq 2$ (see remark at the end of this section). Some simple bounds for the distribution function are also considered.
Except for a few special cases, the transition functions are unknown for general diffusion processes, so finding approximations to them is an important alternative approach. We use Girsanov's theorem and then a transformation of the Radon-Nikodym density of the type suggested in \cite{Baldi_etal_0802} to relate probabilities for a general diffusion $(U_t)$ to those of a `reference diffusion'. Using a reference diffusion with known transition functions, we are able to derive various bounds for the transition functions under mild conditions on the original process. The results have a simple form and are readily evaluated.
As an aside, the generator of the diffusion $(U_t)$ is given by
\[ Af(x) = \nu(x) \partf{f}{x} + \frac{1}{2} \sigma^2(x) \partfd{f}{x}{2}, \]
and the transition probability density function is the minimal fundamental solution to the parabolic equation
\[ \left(A - \partf{}{t}\right)u(t,x) = 0. \]
Thus the results presented here also bound solutions to certain types of parabolic partial differential equations.
Several papers on this topic are available in the literature, especially for bounding the transition probability density. Most recently, \cite{Qian_etal_0504} proposed upper and lower bounds for diffusions whose drift satisfies a linear growth constraint. This appears to be the first such paper to relax the assumption of a bounded drift term. The results in \cite{Qian_etal_0504} will be compared with those obtained in the current paper, although the former cannot be used for processes not satisfying the linear growth constraint. To the best of our knowledge, the bounds presented in the current paper are the only ones to relax this constraint, and they also appear to offer a general tightening of the bounds previously available. For further background on diffusions with bounded drift, see e.g.\ \cite{Qian_etal_1103} and references therein.
In addition, the same ideas allow us to obtain bounds for other functions related to the diffusions. This is not the focus of this note and is not discussed in great detail here, but as an example at the end of Section~\ref{sec:main_result} we consider the density of the process and its first crossing time. This has application in many areas, such as the pricing of financial barrier options. Bounds for other probabilities may be derived in the same manner.
Consider a one-dimensional time-homogeneous non-explosive diffusion $(U_t)$ governed by the stochastic differential equation (SDE)
\begin{align}
\label{eq:original_diff}
dU_t = \nu (U_t) dt + \sigma(U_t) dW_t,
\end{align}
where $(W_t)$ is a Brownian motion and $\sigma(y)$ is differentiable and non-zero inside the diffusion interval (that is, the smallest interval $I \subseteq \mathbb{R}$ such that $U_t \in I$ a.s.). As is well-known, one can transform the process to one with unit diffusion coefficient by letting
\begin{align}
\label{eq:fn_transform}
F(y) := \int_{y_0}^y \frac{1}{\sigma(u)} du
\end{align}
for some $y_0$ from the diffusion interval of $(U_t)$ and then considering $X_t := F(U_t)$ (see e.g.\ \cite{Rogers_xx85}, p.161). By It\^o's formula, $(X_t)$ will have unit diffusion coefficient and a drift coefficient $\mu(y)$ given by the composition
\[ \mu(y) := \left( \frac{\nu}{\sigma} - \frac{1}{2} \sigma'\right) \circ F^{-1}(y). \]
From here on we work with the transformed diffusion process $(X_t)$ governed by the SDE
\begin{align*}
dX_t = \mu(X_t) dt + dW_t, \qquad X_0 = F(U_0) =: x.
\end{align*}
Conditions mentioned throughout refer to the transformed process $(X_t)$ and its drift coefficient $\mu$.
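As a concrete illustration of the transform \eqref{eq:fn_transform} (our own example, not taken from the text), consider geometric Brownian motion $dU_t = aU_t\,dt + bU_t\,dW_t$: here $F(y) = \log(y/y_0)/b$ and the transformed drift $\mu = (\nu/\sigma - \sigma'/2)\circ F^{-1}$ collapses to the constant $a/b - b/2$. The Python sketch below checks this numerically.

```python
import math

# Our own example (not from the text): for geometric Brownian motion
# dU = a*U dt + b*U dW, the transform F(y) = log(y/y0)/b yields the
# constant transformed drift mu = a/b - b/2.

a, b, y0 = 0.07, 0.3, 1.0

nu = lambda u: a * u          # original drift
sigma = lambda u: b * u       # original diffusion coefficient
dsigma = lambda u: b          # sigma'(u)
F = lambda u: math.log(u / y0) / b
Finv = lambda v: y0 * math.exp(b * v)

def transformed_drift(v):
    """mu(v) = (nu/sigma - sigma'/2) composed with F^{-1}, evaluated at v."""
    u = Finv(v)
    return nu(u) / sigma(u) - dsigma(u) / 2

for v in (-2.0, -0.5, 0.0, 1.3, 4.0):
    assert abs(transformed_drift(v) - (a / b - b / 2)) < 1e-12
    assert abs(F(Finv(v)) - v) < 1e-12
```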
We will consider the following two cases only:
\begin{enumerate}
\item[] [A] The diffusion interval of $(X_t)$ is the whole real line $\mathbb{R}$.
\item[] [B] The diffusion interval of $(X_t)$ is $(0, \infty)$.
\end{enumerate}
The results extend to diffusions with other diffusion intervals with one finite endpoint by employing appropriate transforms.
For the diffusion $(X_t)$ we will need a reference diffusion $(Y_t)$ with certain characteristics. The reference diffusion must have the same diffusion interval as $(X_t)$ and a unit diffusion coefficient, so that Girsanov's theorem may be applied to $(X_t)$. To be of any practical use, the reference process must also have known transition functions. In case [A], we use the Brownian motion as the reference process, while in case [B] we use the Bessel process of an arbitrary dimension $d \geq 3$.
Recall the definition of the Bessel process $(R_t)$ of dimension $d = 3, 4, \ldots,$ starting at a point $x>0$. This process gives the Euclidean norm of the $d$-dimensional Brownian motion originating at $(x,0, \ldots, 0)$, that is,
\[ R_t = \sqrt{\bigl(x +W_t^{(1)}\bigr)^2 + \cdots + \bigl(W_t^{(d)}\bigr)^2}, \]
where the $\bigl(W_t^{(i)}\bigr)$ are independent standard Brownian motions, $i = 1, \ldots, d$. As is well known (see e.g.\ \cite{Revuz_etal_xx99}, p.445), $(R_t)$ satisfies the SDE
\begin{align*}
dR_t = \frac{d-1}{2} \frac{1}{R_t}dt + dW_t.
\end{align*}
Note that for non-integer values of $d$ the Bessel process of `dimension' $d$ is defined using the above SDE. The process has the transition density function
\begin{align*}
p_R(t,y,z) = z \left(\frac{z}{y}\right)^{\eta} t^{-1} e^{-(y^2 + z^2)/2t} {\cal I}_{\eta} \left(\frac{yz}{t} \right),
\end{align*}
where $\eta = d/2 -1$ and ${\cal I}_{\eta}(z)$ is the modified Bessel function of the first kind. For further information, see Chapter XI in \cite{Revuz_etal_xx99}.
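The Bessel density above is straightforward to evaluate numerically. The Python sketch below (an illustration only; the series truncation and integration grid are ad hoc choices) implements $p_R$ via the standard power series for ${\cal I}_\eta$, cross-checks it against the closed form of ${\cal I}_{1/2}$ arising in the case $d=3$, and verifies that the density integrates to one.

```python
import math

# A sketch (stdlib only; series truncation and grid are ad hoc) of the
# Bessel transition density above, with I_eta computed from its power
# series I_eta(z) = sum_k (z/2)^(2k+eta) / (k! * Gamma(k+eta+1)).

def bessel_i(eta, zz, terms=60):
    return sum((zz / 2) ** (2 * k + eta) / (math.factorial(k) * math.gamma(k + eta + 1))
               for k in range(terms))

def p_bessel(t, y, zz, d):
    eta = d / 2 - 1
    return (zz * (zz / y) ** eta / t
            * math.exp(-(y * y + zz * zz) / (2 * t))
            * bessel_i(eta, y * zz / t))

# Cross-check the series: for d = 3, I_{1/2}(z) = sqrt(2/(pi z)) sinh(z).
for v in (0.3, 1.0, 2.5):
    assert abs(bessel_i(0.5, v) - math.sqrt(2 / (math.pi * v)) * math.sinh(v)) < 1e-10

# The density should integrate to ~1 in its second spatial argument
# (Riemann sum on (0, 12]).
t, y, d = 0.7, 1.5, 3
grid = [12 * (i + 1) / 4000 for i in range(4000)]
dz = grid[0]
total = sum(p_bessel(t, y, v, d) for v in grid) * dz
assert abs(total - 1.0) < 1e-3
```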
We denote by $\Pr_x$ and $\mathbb{E}_x$ probabilities and expectations conditional on the process in question ($(X_t)$ or some other process, which will be obvious from the context) starting at $x$. We work with the natural filtration $\mathcal{F}_s := \sigma\left(X_u: u \leq s\right)$.
Finally, note that the present work can be easily extended to a class of $n$-dimensional diffusions for $n \geq 2$. If $(X_t)$ is an $n$-dimensional diffusion satisfying the SDE
\[ dX_t = \mu(X_t) dt + dW_t, \]
$(W_t)$ being an $n$-dimensional Brownian motion, then the majority of results can be extended assuming $\mu(\cdot)$ is curl-free. The extension is straightforward, and in this note we shall only concern ourselves with the one-dimensional case.
\section{Main Results}
\label{sec:main_result}
This section states and proves a result relating probabilities for the diffusion $(X_t)$ to expectations under an appropriate reference measure. In the case [A], the result may be known, and we state it here for completeness. The extension to case [B] is straightforward. We then apply this proposition to obtain bounds for transition densities and distributions.
\textbf{Relation to the Reference Process}
We define the functions $G(y)$ and $N(t)$ as follows, according to the diffusion interval of $(X_t)$:
\begin{enumerate}
\item[][A] If the diffusion interval of $(X_t)$ is $\mathbb{R}$, then we define, for some fixed $y_0 \in \mathbb{R}$,
\begin{equation}
\begin{array}{rl}
G(y) \!\!\!&:= \displaystyle\int_{y_0}^{y} \mu(z) dz,\\
\vphantom{.}\\
N(t) \!\!\!&:= \displaystyle\int_0^t \left(\mu'(X_u) + \mu^2(X_u)\right) du.
\end{array}
\label{eq:bm_G}
\end{equation}
\item[][B] If the diffusion interval of $(X_t)$ is $(0, \infty)$, then we define, for some fixed $d \geq 3$ (the dimension of the reference Bessel process) and $y_0 > 0$,
\begin{align*}
G(y) &:= \int_{y_0}^{y} \left(\mu(z) - \frac{d-1}{2z} \right) dz,\\
N(t) &:= \int_0^t \left( \mu'(X_u) - \frac{(d-1)(d-3)}{4X_u^2} + \mu^2(X_u) \right) du.
\end{align*}
\end{enumerate}
\begin{rem}
For diffusions on $(0, \infty)$, the choice of $d$ is arbitrary subject to $d \geq 3$. Therefore this choice can be used to optimise any bounds presented in the next subsection.
\end{rem}
\begin{prop}
\label{th:gen_int}
Assume that the drift coefficient $\mu$ of $(X_t)$ is absolutely continuous. Then, for any $A \in \mathcal{F}_t$,
\[ \Pr_x(A) = \hat{\Exp}_x\left[ e^{G(X_t) - G(x)} e^{-(1/2) N(t)}\mathbbm{1}} %{1\hspace{-2.5mm}{1}_A \right], \]
where $\hat{\Exp}_x$ denotes expectation with respect to the law of the reference process.
\end{prop}
\begin{rem}
In terms of the original process $(U_t)$ defined in \eqref{eq:original_diff}, the condition of absolute continuity of $\mu(y)$ requires $\nu(z)$ and $\sigma'(z)$ to be absolutely continuous.
\end{rem}
\begin{proof}
The proof is a straightforward application of Girsanov's theorem, and its idea is similar to the one used in \cite{Baldi_etal_0802}. We present the proof for case [A]; the proof for case [B] is completed similarly (see \cite{Downes_etal_xx08} for the general approach).
Define $\mathbb{Q}_x$ to be the reference measure such that under $\mathbb{Q}_x$, $X_0 = x$ and
\[ dX_s = d\widetilde{W}_s, \]
for a $\mathbb{Q}_x$ Brownian motion $(\widetilde{W}_s)$. Set
\begin{align*}
\zeta_s &:= \frac{d\Pr_x}{d\mathbb{Q}_x} = \exp \left\{ \int_0^s \mu (X_u) d\widetilde{W}_u - \frac{1}{2} \int_0^s \mu^2(X_u) du \right\},
\end{align*}
so by Girsanov's theorem under $\Pr_x$ we regain the original process $(X_s)$ satisfying
\[ dX_s = \mu(X_s) ds + dW_s, \]
for a $\Pr_x$ Brownian motion $(W_s)$. The regularity conditions allowing this application of Girsanov's theorem are satisfied (see e.g.\ Theorem~7.19 in \cite{Liptser_etal_xx01}), since under both $\Pr_x$ and $\mathbb{Q}_x$ the process $(X_s)$ is non-explosive and $\mu(y)$ is locally bounded so we have, for any $t>0$,
\[ \Pr_x \left( \int_0^t \mu^2(X_s) ds < \infty \right) = \mathbb{Q}_x \left( \int_0^t \mu^2(X_s) ds < \infty \right) = 1. \]
We then have, under $\mathbb{Q}_x$, using It\^o's formula and \eqref{eq:bm_G},
\begin{align}
\label{eq:dG}
d G (X_s) &= \mu(X_s) dX_s + \frac{1}{2} \mu'(X_s) (dX_s)^2\notag\\
&= \mu(X_s) d\widetilde{W}_s + \frac{1}{2} \mu'(X_s) ds.
\end{align}
Note that in order to apply It\^o's formula, we only require $\mu$ to be absolutely continuous with Radon-Nikodym derivative $\mu'$ (see e.g.\ Theorem~19.5 in \cite{Kallenberg_xx97}). This also implies the above is defined uniquely only up to a set of Lebesgue measure zero, and we are free to assign an arbitrary value to $\mu'$ at points of discontinuity.
Rearranging \eqref{eq:dG} gives
\[ \int_0^s \mu (X_u) d\widetilde{W}_u = G(X_s) - G(X_0) - \frac{1}{2} \int_0^s \mu'(X_u) du. \]
Hence
\[ \zeta_s = \exp \left\{ G(X_s) - G(X_0) - \frac{1}{2} \int_0^s \left( \mu'(X_u) + \mu^2 (X_u) \right) du \right\}, \]
which together with
\begin{align*}
\Pr_x(A) = \mathbb{E}_x[\mathbbm{1}} %{1\hspace{-2.5mm}{1}_A] = \int \mathbbm{1}} %{1\hspace{-2.5mm}{1}_A d\Pr_x = \int \mathbbm{1}} %{1\hspace{-2.5mm}{1}_A \zeta_t d\mathbb{Q}_x = \hat{\Exp}_x [ \zeta_t \mathbbm{1}} %{1\hspace{-2.5mm}{1}_A ],
\end{align*}
completes the proof of the proposition.
\end{proof}
\textbf{Bounds for Transition Densities and Distributions}
Define $L$ and $M$ as follows, according to the diffusion interval of $(X_t)$:
\begin{enumerate}
\item[][A] If the diffusion interval of $(X_t)$ is $\mathbb{R}$, then
\begin{align*}
L &:= \displaystyle \mbox{ess sup}\left(\mu'(y) + \mu^2(y)\right),\\
M &:= \displaystyle \mbox{ess inf}\left(\mu'(y) + \mu^2(y)\right),
\end{align*}
where the essential supremum/infimum is taken over $\mathbb{R}$.
\item[][B] If the diffusion interval of $(X_t)$ is $(0, \infty)$, then, for some fixed $d \geq 3$ (the dimension of the reference Bessel process), we put
\begin{align*}
L &:= \displaystyle \mbox{ess sup} \left( \mu'(y) - \frac{(d-1)(d-3)}{4y^2} + \mu^2(y) \right),\\
M &:= \displaystyle \mbox{ess inf} \left( \mu'(y) - \frac{(d-1)(d-3)}{4y^2} + \mu^2(y) \right),
\end{align*}
where the essential supremum/infimum is taken over $(0, \infty)$.
\end{enumerate}
Note that in what follows, in the case [B], the dimension of the reference Bessel process may be chosen so as to optimise the particular bound. Recall also that $(Y_t)$ denotes the reference process (the Wiener process in case [A], the $d$-dimensional Bessel process in case [B]).
\begin{cor}
\label{cor:trans_dens}
The transition density of the diffusion $(X_t)$ is bounded according to
\begin{align}
\label{eq:trans_dens}
e^{-tL/2} \leq \frac{p_X(t,x,w)}{e^{G(w) - G(x)} p_Y(t,x,w)} \leq e^{-tM/2}.
\end{align}
\end{cor}
\begin{rem}
The bound is sharp: for a constant drift coefficient $\mu$, equalities hold in \eqref{eq:trans_dens}.
\end{rem}
\begin{proof}
Recall (see the proof of Proposition~\ref{th:gen_int}) that we only required $\mu$ to be absolutely continuous, and that its value on a set of Lebesgue measure zero is irrelevant. Hence $L$ (respectively $M$) gives an upper (lower) bound for the integrand in $N(t)$ for all paths. Applying Proposition~\ref{th:gen_int} with $A = \{X_t \in [w, w+h)\}$, $h>0$, gives
\begin{align*}
\inf_{w \leq y \leq w+h} e^{G(y) - G(x)} e^{-tL/2} \Pr_x(Y_t \in [w, &w+h)) \leq \Pr_x(X_t \in [w, w+h))\\
&\leq \sup_{w \leq y \leq w+h} e^{G(y) - G(x)} e^{-tM/2} \Pr_x(Y_t \in [w, w+h)).
\end{align*}
Taking the limits as $h \rightarrow 0$ gives the required result.
\end{proof}
In the case of bounded $L$ and $M$ this immediately gives an asymptotic expression for the density $p_X(t,x,w)$ as $t \rightarrow 0$.
\begin{cor}
If $-\infty < L, M < \infty$, then, as $t \rightarrow 0$,
\[ p_X(t,x,w) \sim e^{G(w) - G(x)} p_Y(t,x,w), \]
uniformly in $x$, $w$.
\end{cor}
While the tightest bounds for the transition distribution are obtained by integrating the bounds for the density given above, this does not, in general, yield a simple closed form expression. We mention other, less tight bounds that are simple and are obtained by a further application of Proposition~\ref{th:gen_int}.
\begin{cor}
\label{cor:trans_dist}
The transition distribution function of the diffusion $(X_t)$ admits the following bound: for any $w \in \mathbb{R}$,
\begin{align*}
\inf_{\ell \leq y \leq w} e^{G(y) - G(x)} e^{-tL/2} P_Y(t,x,w) &\leq P_X(t,x,w)\\
&\leq \sup_{\ell \leq y \leq w} e^{G(y) - G(x)} e^{-tM/2} P_Y(t,x,w),
\end{align*}
where $\ell$ is the lower bound of the diffusion interval.
\end{cor}
The assertion of the corollary immediately follows from that of Proposition~\ref{th:gen_int} with $A = \{X_t \leq w \}$.
By considering other events (e.g.\ $A = \{ X_t > w \}$), other similar bounds can be derived.
\textbf{Further Probabilities}
While the focus of this note is on bounds for the transition functions, Proposition \ref{th:gen_int} can be used to obtain other useful results. For example, consider
\[ \eta_X(t, x, y, w) := \frac{d}{dw} \Pr_x \left( \sup_{0 \leq s \leq t} X_s \geq y, X_t \leq w \right). \]
Such a function has applications in many areas, for example the pricing of barrier options in financial markets. Using ideas similar to the proof of Corollary~\ref{cor:trans_dens} immediately gives
\begin{cor}
\label{cor:other_probs}
For the diffusion $(X_t)$,
\[ e^{ -tL/2} \leq \frac{\eta_X(t,x,y,w)}{e^{G(w) - G(x)} \eta_Y(t,x,y,w)} \leq e^{ -tM/2}. \]
\end{cor}
Note that for such probabilities the bounds may be improved, if desired, by replacing $L$ and $M$ with appropriate constants on a case-by-case basis. For example, if we are considering the probability our diffusion stays between two constant boundaries at the levels $c_1 < c_2$, then the supremum (for $L$) and infimum (for $M$) need only be taken over the range $c_1 \leq y \leq c_2$.
Other probabilities may be considered in a similar way.
\section{Numerical Results}
\label{sec:num_results}
Here we illustrate the precision of the results from the previous section for transition densities. Bounds from Corollary~\ref{cor:trans_dens} are compared with known transition density functions and previously available bounds for the Ornstein-Uhlenbeck process in the case [A]. For the case [B], we only compare the bounds obtained in the current paper with exact results, since there appears to be no other known bounds in the literature. We also construct a `truncated Ornstein-Uhlenbeck' process in order to compare our results with other bounds available in the literature. For the Ornstein-Uhlenbeck process we also consider an example to illustrate Corollary~\ref{cor:other_probs}.
\textbf{The Ornstein-Uhlenbeck Process}
We consider an Ornstein-Uhlenbeck process $(S_t)$, which satisfies the SDE
\[ dS_t = -S_t dt + dW_t. \]
This process has the transition density
\[ p_S(t,x,w) = \frac{ e^{t}}{\sqrt{\pi (e^{2 t}-1)}} \exp\left(\frac{\left(w e^{t} - x\right)^2}{1-e^{2 t}} \right), \]
see e.g.\ (1.0.6) in \cite{Borodin_etal_xx02}, p.522, and we begin by comparing this with the bound obtained in Corollary~\ref{cor:trans_dens}. Since $\mu(z) = -z$, we have
\[ M = -1, \qquad G(w) - G(x) = \frac{1}{2}(x^2 - w^2), \]
giving the estimate
\[ p_S(t,x,w) \leq e^{\frac{1}{2}(x^2 - w^2 + t)} p_W(t,x,w). \]
Clearly in this case the bound is tighter for smaller values of $|x|$ and $t$. Figure~\ref{fig:OU_dens_cent} displays a plot of the right-hand side of this bound together with the exact density for $x=0$ and $t=1,2$.
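As a quick numerical sanity check (the function names below are our own), the exact Ornstein-Uhlenbeck density and the bound above can be compared directly:

```python
import numpy as np

def p_W(t, x, w):
    """Wiener (standard Brownian motion) transition density."""
    return np.exp(-(w - x)**2/(2.0*t))/np.sqrt(2.0*np.pi*t)

def p_S(t, x, w):
    """Exact Ornstein-Uhlenbeck transition density for dS = -S dt + dW."""
    return (np.exp(t)/np.sqrt(np.pi*(np.exp(2.0*t) - 1.0))
            * np.exp((w*np.exp(t) - x)**2/(1.0 - np.exp(2.0*t))))

def p_S_upper(t, x, w):
    """Upper bound with M = -1 and G(w) - G(x) = (x^2 - w^2)/2."""
    return np.exp(0.5*(x**2 - w**2 + t))*p_W(t, x, w)
```

Evaluating both sides on a grid of $w$ for several $(t,x)$ confirms the inequality pointwise.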
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.90 \textwidth, height = 2.5 in]{ou_dens}
\caption{Transition density for an Ornstein-Uhlenbeck process, alongside its upper bound, with $x=0$. The left-hand side displays the functions for $t=1$, the right for $t=2$.}
\label{fig:OU_dens_cent}
\end{centering}
\end{figure}
To compare our results with other known bounds for transition functions, we look at the bound given by (3.3) in \cite{Qian_etal_0504} (which, to the best of the author's knowledge, is the only bound available for such a process). Figure~\ref{fig:OU_dens_comp} compares this bound with that obtained in Corollary~\ref{cor:trans_dens} and the exact transition density. The values $x=0$ and $t=1$ are used (for the bound in \cite{Qian_etal_0504}, $q=1.2$ seemed to give the best result; see \cite{Qian_etal_0504} for further information on notation). Note that the bound of \cite{Qian_etal_0504} is sharper for $w$ close to zero, but it quickly grows to very large values as $|w|$ increases, and in general the bounds presented in this note offer a large improvement. This is typical for all values of $t$, with the effect becoming more pronounced as $t$ decreases. A meaningful lower bound for this process is unavailable by the methods of the present paper, since $L = -\infty$.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{ou_comp}
\caption{Transition density for an Ornstein-Uhlenbeck process (solid line) compared to bounds given in \cite{Qian_etal_0504} (dashed line) and Corollary~\ref{cor:trans_dens} (dotted line), for $x=0$ and $t=1$.}
\label{fig:OU_dens_comp}
\end{centering}
\end{figure}
For this example, we briefly look at the bound obtained in Corollary~\ref{cor:other_probs}. We have, see e.g.\ (1.1.8) in \cite{Borodin_etal_xx02}, p. 522,
\[ \eta_S(t,x,0,z) = \frac{1}{\sqrt{\pi (1 - e^{-2 t})}} \exp \left( -\frac{(|z| - x e^{-t})^2}{1 - e^{-2 t}} \right). \]
Figure~\ref{fig:ou_eta} compares this as a function of $t \in [0,1]$ with the bound obtained in Corollary~\ref{cor:other_probs},
\begin{align*}
\eta_S(t,x,0,z) &\leq \exp \left\{\frac{1}{2} (x^2 - z^2 + t) \right\} \eta_W(t,x,0,z)\\
&= \exp \left\{ \frac{1}{2} (x^2 - z^2 + t) \right\} \frac{1}{\sqrt{2 \pi t}} \exp \left\{ - \frac{1}{2t} (|z| - x)^2 \right\},
\end{align*}
where $\eta_W(t,x,0,z)$ is given by (1.1.8) on p. 154 of \cite{Borodin_etal_xx02}.
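The bound from Corollary~\ref{cor:other_probs} can likewise be checked numerically (the code below is ours; the margin is narrow for small $t$, consistent with the asymptotics discussed earlier):

```python
import numpy as np

def eta_S(t, x, z):
    """Exact eta for the OU process with the barrier at level 0."""
    v = 1.0 - np.exp(-2.0*t)
    return np.exp(-(np.abs(z) - x*np.exp(-t))**2/v)/np.sqrt(np.pi*v)

def eta_S_upper(t, x, z):
    """Upper bound applied with M = -1: exp((x^2 - z^2 + t)/2) eta_W."""
    eta_W = np.exp(-(np.abs(z) - x)**2/(2.0*t))/np.sqrt(2.0*np.pi*t)
    return np.exp(0.5*(x**2 - z**2 + t))*eta_W
```

With $x = z = -0.5$, as in Figure~\ref{fig:ou_eta}, the bound holds across a grid of $t$ values.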
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{sup_ou}
\caption{Bound for $\eta_S(t,x,0,z)$ compared with its true value, for $x = z = -0.5$.}
\label{fig:ou_eta}
\end{centering}
\end{figure}
\textbf{The Truncated Ornstein-Uhlenbeck Process}
Other density bounds available in the literature hold only for processes which have bounded drift. For completeness we compare one such bound with the results of this paper. We use the bound in \cite{Qian_etal_1103}, which is the most recent for bounded drift and seems to give the best results over a large domain. To use these results, however, we need a process with bounded drift. As such, we have chosen the `truncated Ornstein-Uhlenbeck' process, which we define as a process $(\overline{S}_t)$ satisfying the SDE
\[ d\overline{S}_t = \mu(\overline{S}_t) dt + dW_t, \]
where, for a fixed $c > 0$,
\[ \mu(z) =
\begin{cases}
c, & \qquad z < -c,\\
-z, & \qquad |z| \leq c,\\
-c, & \qquad z > c.
\end{cases}
\]
For this process we again have $M=-1$ and, assuming $|x| \leq c$,
\[ G(w) - G(x) =
\begin{cases}
\frac{1}{2}(c^2 + x^2) + cw, & \qquad w < -c,\\
\frac{1}{2}(x^2 - w^2), & \qquad |w| \leq c,\\
\frac{1}{2}(c^2 + x^2) - cw, & \qquad w > c.
\end{cases}
\]
Figure~\ref{fig:trunc_ou} displays the bounds from Corollary~\ref{cor:trans_dens} together with those in \cite{Qian_etal_1103} for different values of $c$ with $x=0$ and $t=1$. Smaller values of $c$ move the bounds closer together; however, for the given choice of $x$ and $t$, they do not touch until we use the (rather severe) truncation $c \approx 0.45$. In general the method outlined in this note provides a dramatic improvement. We have also plotted an estimate for the transition density using simulation. The simulation was performed using the predictor-corrector method (see e.g.\ \cite{Kloeden_etal_xx94} p.198), with $10^5$ simulations and $100$ time-steps.
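A minimal simulation sketch (ours, using plain Euler-Maruyama rather than the predictor-corrector scheme mentioned above) for estimating the transition density of the truncated process:

```python
import numpy as np

rng = np.random.default_rng(0)
c, t, x0 = 1.0, 1.0, 0.0
n_paths, n_steps = 10**5, 100
dt = t/n_steps

def mu(y):
    """Truncated Ornstein-Uhlenbeck drift."""
    return np.clip(-y, -c, c)

# Euler-Maruyama time stepping, vectorised over all paths
S = np.full(n_paths, x0)
for _ in range(n_steps):
    S = S + mu(S)*dt + np.sqrt(dt)*rng.standard_normal(n_paths)

# histogram estimate of the transition density p(t, x0, .)
dens, edges = np.histogram(S, bins=60, range=(-3.0, 3.0), density=True)
centers = 0.5*(edges[:-1] + edges[1:])
```

The resulting histogram plays the role of the simulated densities (solid lines) in Figure~\ref{fig:trunc_ou}.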
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.90 \textwidth, height = 2.5 in]{outrunc_dens}
\caption{Simulated density and bounds for the transition density of the truncated Ornstein-Uhlenbeck process $(\overline{S}_t)$. The solid lines give the simulated densities, the dotted lines the bound given in Corollary~\ref{cor:trans_dens} and the dashed lines the bounds from \cite{Qian_etal_1103}. Both graphs display the functions for $x=0$ and $t=1$, while the left graph displays them for $c=1$, the right for $c=2$.}
\label{fig:trunc_ou}
\end{centering}
\end{figure}
\textbf{A Diffusion on $(0, \infty)$}
Finally we consider a process from the case [B]. The author believes this is the first paper to present a bound on transition densities without the linear growth constraint. The process $(V_t)$ satisfying the SDE
\begin{align}
\label{eq:feller_sde}
dV_t = (p V_t + q)dt + \sqrt{2 r V_t} dW_t
\end{align}
with $p$, $q \in \mathbb{R}$ and $r>0$, has a known transition density (see (26) in \cite{Giorno_etal_0686}). After applying the transform $Z_t = F(V_t)$, with $F(y) = \sqrt{\frac{2}{r}y}$ by \eqref{eq:fn_transform}, we obtain the process
\[ dZ_t = \mu(Z_t) dt + dW_t, \]
where
\[ \mu(y) = \frac{p}{2} y + \frac{1}{y}\left( \frac{q}{r} - \frac{1}{2} \right). \]
For $q > r$ this dominates the drift of a Bessel process of order $2q/r > 2$ so is clearly a diffusion on $(0, \infty)$.
We take the values $q=2.5$, $r=1$ and $p=1$. Using these values, we have
\[ M = \inf_{0 < y < \infty} \left[ \frac{y^2}{4} + 2.5 + \frac{1}{y^2} \left(2 - \frac{(d-1)(d-3)}{4}\right) \right], \]
and
\begin{align*}
G(y) - G(x) = \frac{1}{4}(y^2 - x^2) + c \log \left( \frac{y}{x} \right),
\end{align*}
where $d$ is the order of the reference Bessel process and $c = 2 - (d-1)/2$.
It remains to choose the order of the reference Bessel process. It is not clear how to define the `best' order of the reference process for a range of $w$ values, as for fixed $t$ and $x$ the upper bound for $p_Z(t,x,w)$ is minimised for different values of $d$ depending on the value of $w$. In Figure~\ref{fig:rplus_dens} we have taken $t=x=0.5$ and used $d=4.7$; however, depending on the relevant criterion, improvements can be made. Again, a meaningful lower bound for this process is unavailable by the methods of this paper, since $L= -\infty$.
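The dependence of $M$ on $d$ can be explored numerically; the sketch below (ours) evaluates the infimum above on a grid, for the parameter values $q=2.5$, $r=1$, $p=1$ used in this section:

```python
import numpy as np

def M_of_d(d):
    """ess inf over (0, inf) of y^2/4 + 2.5 + (2 - (d-1)(d-3)/4)/y^2.
    The infimum is -inf when the 1/y^2 coefficient is negative (d > 5);
    otherwise, by the AM-GM inequality, it equals 2.5 + sqrt(coef)."""
    coef = 2.0 - (d - 1.0)*(d - 3.0)/4.0
    if coef < 0:
        return -np.inf
    y = np.linspace(1e-2, 50.0, 200001)
    return float(np.min(y**2/4.0 + 2.5 + coef/y**2))
```

For $d=4.7$, as used in Figure~\ref{fig:rplus_dens}, this gives $M \approx 3.15$; for $d>5$ the bound degenerates.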
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{plus_dens}
\caption{Transition density for the diffusion \eqref{eq:feller_sde}, alongside its upper bound, with $x=0.5$ and $t=0.5$.}
\label{fig:rplus_dens}
\end{centering}
\end{figure}
\newpage
{\bf Acknowledgements:} This research was supported by the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems. The author is grateful for many useful discussions with K. Borovkov which lead to improvements in the paper.
\section{INTRODUCTION}
In 1995, the atomic physics community witnessed the coolest state of matter
realized in the laboratory by that time. With lasers and
electromagnetic fields, atomic Bose gases were trapped and cooled
to within a micro-Kelvin to make them coalesce into a single quantum
wave, known as a Bose-Einstein condensate (BEC) state \cite{bec_exp}.
With the development and upgrading of technology and tools, finally about 10
years later, physicists succeeded in cooling atomic Fermi gases
to even lower temperatures and in providing conclusive evidence
that the interacting atomic Fermi gas could be controlled to be in a
state from a Bardeen-Cooper-Schrieffer (BCS) superfluid of loosely-bound
pairs to a BEC of tightly-bound dimers \cite{fermi_superfluid}. This
quantum wonderland of cold atomic gases has been proven to be rich in
physics, and it has also shown us new realms of
research\ \cite{rev_bec_leggett,rev_trapped_bec,rev_bloch,giorgini,quantum simulator}.
Unprecedented high-precision control over the cold
atomic gases has made it possible to mimic various quantum systems.
It has also paved the way to new physical parameter regions that had
not been attainable in other quantum systems and, therefore, had not
been well explored.
In a cold Fermi gas with an equal mixture of two hyperfine states
(for simplicity, called spin up/down),
wide-range control of the short-range interatomic interaction
(characterized by an $s$-wave scattering length $a_s$) via
Feshbach resonances \cite{fano_feshbach} has enabled the mapping of a new landscape of superfluidity,
known as ``BCS-BEC crossover,''
smoothly connecting the BCS superfluid to the BEC superfluid \cite{bcs_bec_crossover}.
For a weakly-attractive interaction ($1/k_{\rm F} a_s \ll -1$,
with $k_{\rm F}$ conventionally being the Fermi momentum of a free Fermi gas of the same density), the gas shows a
superfluid behavior originating from the many-body physics of Cooper
pairs that are formed from weak pairings of atoms of opposite spin and
momentum and whose spatial extent is larger than the interparticle
distance. For a strongly-attractive interaction ($1/k_{\rm F} a_s \gg 1$), the
gas shows a superfluid behavior that can be explained by the
Bose-Einstein condensation of tightly-bound bosonic
dimers made up of two fermions of opposite spin. In the crossover
region/unitary region ($1/k_{\rm F} |a_s| \alt 1$) bridging the two
well-understood regions above, the gas is strongly interacting, which
means that the interaction energy is comparable to the Fermi energy of
a free Fermi gas of the same density and defies perturbative
many-body techniques. Especially, the physics in the so-called
``unitary limit'' ($|a_s|=\infty$) is universal because only one
remaining length scale, $1/k_{\rm F}$, which is approximately the
interparticle distance, appears in the equation of state of the gas,
while the interaction effect appears as a universal dimensionless
parameter.
Because the physics at unitarity is universal, i.e.,
independent of its constituents, and is non-perturbative due to the strong interaction,
the unitary Fermi gas has been the test-bed for the various theoretical techniques
developed so far (see, e.g., Sec.~V\,B and V\,C in Ref.~\onlinecite{giorgini}).
Cold atomic gases also provide insights into other forms of matter.
Inside neutron stars, there might be quark matter made of different
ratios of quarks, and this imbalance might result in new types of
superfluid, such as the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase,
which has spatially non-uniform order parameter \cite{FFLO}. Even
though the electric-charge neutrality of atoms might change the
picture, we might get some ideas from investigations on imbalanced
atomic Fermi gases \cite{imbalance}. Another example is the
quark-gluon plasma (QGP) created in the Relativistic Heavy Ion
Collider (RHIC). A QGP with a temperature of about $10^{12}$ K was produced by
smashing nuclei together. Measurements on its expansion after its
creation show that the QGP is a nearly perfect fluid with very small shear
viscosity. A cloud of atomic Fermi gases at unitarity shows the same
strange behavior, while the origins of the similarity between hot
and cold matter remain a matter of speculation \cite{hot_cold}.
Cold atoms in periodic potentials are more intriguing because of their possible connection to the solid state/condensed matter physics
(see, e.g., Refs.\ \onlinecite{morsch,rev_bloch}, and \onlinecite{yukalov} for reviews). The artificial periodic potential of light, so-called ``optical lattice,'' is configured with pairs of counter-propagating laser fields that are detuned from atomic transition frequencies enough that they act as practically free-of-defect, conservative potentials that the atoms experience via the optical dipole force.
The geometry and the depth of the lattice potential can be controlled
by orienting the directions and by changing the intensities of the laser beams, respectively.
Moreover, the
macroscopically-large lattice constant of the optical lattice compared
with the lattice potential in solids is a great advantage for
experimental observation and manipulation; nowadays, {\it in-situ}
imaging and addressing of strongly-correlated systems at the
single-site level have been realized (e.g.,
Ref.\ \onlinecite{single_site}).
The high controllability of both the optical lattice properties via laser fields and the interatomic interaction via the Feshbach resonance allows cold atomic gases to serve as quantum simulators for various models of condensed matter physics.
Many phenomena of solid state/condensed matter physics have already been observed or realized in cold atomic gases on the optical lattices, including a band structure, a Bloch oscillation, Landau-Zener tunneling,
and a superfluid-Mott insulator transition
\cite{bloch_zener,Greiner_SF_Mott}.
Precision control and measurement at the level of one lattice site enable the cold atomic gases on optical lattices to be applied in inventing new manipulation techniques and novel devices, such as matter-wave interferometers, optical lattice clocks, and quantum registers \cite{optlatclock,jaksch}.
In this review article, we mainly explore the superfluid properties of
cold atomic gases in an optical lattice at zero temperature.
Superfluidity is the most
well-known quantum phase of ultracold atomic gases and is prevalent in
many other systems, such as superconductors, superfluid helium, and
superfluid neutrons in ``pasta'' phases (see, e.g., Ref.\ \onlinecite{pasta})
of neutron star crusts. Knowing its properties is also important both for
judging whether the superfluid state is approached in experiments and
applying its properties in controlling the system. Depending on the
physical conditions, cold atomic gases in optical lattices show
various interesting phases other than the superfluid phase, for
example, when simulating Hubbard models; these and some other quantum
phases will be explained very briefly in \refsec{others}.
In our discussion, we mainly consider atomic Fermi gases
partly because weakly-interacting atomic Bose gases can be considered
as an extreme limit in the BCS-BEC crossover of the atomic Fermi gases
and partly because the physics of the cold Fermi gases is richer.
Bose gases are discussed separately in the cases where
we cannot find the corresponding phenomena in the Fermi gases.
This does not mean that the cold Bose gases are less interesting and less important.
The tools and the techniques developed for creating the BEC were the building blocks
for those for creating superfluid Fermi gases,
and superfluid properties of the Bose gases might be more useful in some applications
due to their higher superfluid transition temperature.
In \refsec{formalism}, our system and the theoretical frameworks are
presented. There, we explain the mean-field theory and the
hydrodynamic theory, as well as the validity region of each theory.
Using cold atomic gases, various parameter regions become accessible,
and the mean-field theory in the continuum model is a powerful tool
that allows us to study superfluid states covering such a wide
region. Hydrodynamic theory, though its validity region is limited,
can provide precise analytic predictions in some limits, which are
complementary to mean-field theory. Based on these
methods, we study various aspects of the superfluid state in a lattice
potential periodic along one spatial direction. This setup is the
simplest, but contains essential effects of the lattice potential.
Section \ref{sec:thermo} discusses basic static and long-wavelength
properties of the cold Fermi gases in an optical
lattice. Specifically, the incompressibility and the effective mass
are obtained from the equation of state (EOS). Focusing on the unitary
Fermi gas as a typical example, here we show that these properties in
the lattice can be strongly modified from those in free space
\cite{optlatunit}. Formation of bound pairs is assisted by the
periodic potential, and this results in a qualitative change of the
EOS.
In \refsec{stability}, we examine the stability of the superfluid flow
in the optical lattice. Mainly, the energetic instability and the
corresponding critical velocity of the superflow are discussed.
The stability of the superflow is a fundamental problem of the
superfluidity, and cold atomic gases allow us to study this important
problem for various parameter regions for both the Bose and the Fermi
superfluids. Unlike an obstacle in uniform superfluids, the periodic
potential can modify the excitation spectrum, and this manifests itself
in the behavior of the critical velocity \cite{vc,vc_crossover}.
In \refsec{energy_band}, we investigate the energy band structures of
cold atomic Fermi gases in an optical lattice along the BCS-BEC
crossover. By tuning the interatomic interaction strength, we can make the
nonlinear effect of the interaction energy dominate
the external periodic potential. In this parameter region,
``swallowtail'' energy band structures emerge, and the physical
properties of the system are affected by their appearance
\cite{swallowtail}. This is one example of cold atomic gases
showing physical phenomena inaccessible in other systems.
In \refsec{others}, we discuss quantum phases of cold atomic gases in
optical lattices and show how they play their role as ``quantum
simulators'' of Hubbard models. Before the advent of cold atomic
gases in optical lattices, the Hubbard model was considered as an
approximate toy model of more complicated real systems and was one of
the central research topics in solid-state/condensed-matter physics.
High-precision control of cold atomic gases in optical lattices
now allows Hubbard models to be realized in experiments, so
open questions of quantum magnetism and high $T_c$
superconductivity can be addressed.
Finally, \refsec{summary} contains a summary and presents some perspectives on the cold atomic gases in optical lattices.
\section{THEORETICAL FRAMEWORKS\lasec{formalism}}
\subsection*{2.1.\quad Setup of the System and the Periodic Potential\label{sec:setup}}
In the present article, we discuss
superfluid flow made of either fermionic or bosonic cold atomic gases
subject to an optical lattice.
For concreteness, we consider one of the most typical cases:
the system is three dimensional (3D), and the
potential is periodic in one dimension
with the following form:
\begin{equation}
V_{\rm ext}({\bf r})=sE_{\rm R} \sin^2{(q_{\rm B} z)} \equiv V_0 \sin^2{(q_{\rm B} z)} .
\end{equation}
Here, $V_0\equiv sE_{\rm R}$ is the lattice height,
$s$ is the lattice intensity in dimensionless units,
$E_{\rm R}=\hbar^2q_{\rm B}^2/2m$ is the recoil energy,
$m$ is the mass of atoms,
$q_{\rm B}=\pi/d$ is the Bragg wave vector, and $d$ is the lattice constant.
For simplicity, we also assume that the supercurrent is
in the $z$ direction; thus, the system is uniform in the transverse
(i.e., $x$ and $y$) directions.
Throughout the present article, we set the temperature $T=0$.
Before giving a detailed description of the theoretical framework,
it is useful to summarize the scales in this system.
The periodic potential is characterized by two energy scales:
one is the recoil energy $E_{\rm R}$, which is directly related to
the lattice constant $d$, and the other is the lattice height $V_0=sE_{\rm R}$.
Regarding Bose-Einstein condensates of bosonic atoms,
a characteristic energy scale is the interaction energy $gn$, where
$g= 4\pi\hbar^2a_s/m$ is the interaction strength
and $n$ is the density of atoms.
On the other hand, in the case of superfluid Fermi gases,
the total energy is on the order of the Fermi energy
$E_{\rm F}= \hbar^2k_{\rm F}^2/(2m)$,
with the Fermi wave number $k_{\rm F}\equiv (3\pi^2 n)^{1/3}$
corresponding to that of a uniform non-interacting
Fermi gas with the same density $n$.
Therefore, the relative effect of the lattice strength
is given by the ratio $\eta_{\rm height}=V_0/gn$ for bosons and
$\eta_{\rm height}=V_0/E_{\rm F}$
for fermions.
Likewise, the relative fineness of the lattice
(compared to the healing length)
is characterized by
the ratio $\eta_{\rm fine}=(gn/E_{\rm R})^{-1}\sim (\xi/d)^2$ for bosons and
$\eta_{\rm fine}=(E_{\rm F}/E_{\rm R})^{-1} \sim (k_{\rm F} d)^{-2}$ for fermions,
which is $\sim (\xi/d)^2$ near unitarity.
Here, $\xi$ is the healing length given as
$\xi=\hbar/(2mgn)^{1/2}$ for Bose superfluids and as $\xi\sim k_{\rm F}^{-1}$
for Fermi superfluids at unitarity, which is consistent with the BCS
coherence length $\xi_{\rm BCS}=\hbar v_{\rm F}/\Delta$, where
$v_{\rm F}=\hbar k_{\rm F}/m$ and $\Delta$ is the pairing gap.
We can also say that the validity of the local density approximation (LDA)
is characterized by $1/\eta_{\rm fine}\agt (d/\xi)^2 \gg 1$
corresponding to a lattice with a low fineness $\eta_{\rm fine}\ll 1$.
In the present article, we shall consider a large parameter region
covering weak to strong lattices,
$\eta_{\rm height}=O(10^{-2})$ -- $O(10)$\
($s\sim 0.1$ -- $5$),
and low to high fine lattices,
$\eta_{\rm fine}\sim 0.1$ -- $10$.
\subsection*{2.2.\quad Mean-field Theory in the Continuum Model}
To study the superfluidity in such various regions
in a unified manner, one of the most useful theoretical frameworks is the
mean-field theory in the continuum model.
Because our system is a superfluid in three dimensions and
the number of particles in each site is infinite in our setup,
the effects of quantum fluctuations, which are not captured by the
mean-field theory, may be small.
We also note that widely-used tight-binding models are
invalid in the weak lattice region of $\eta_{\rm height}\alt 1$.
For dilute BECs at zero temperature, the system is well described by
the Gross-Pitaevskii (GP) equation \cite{gross,pitaevskii61,rev_trapped_bec}:
\begin{equation}
- \frac{\hbar^2}{2m}\partial_{z}^2 \Psi + V_{\rm ext}(z) \Psi +
g | \Psi |^2 \Psi = \mu \Psi ,
\label{eq:gp}
\end{equation}
where $\Psi(z)=\sqrt{n(z)}\exp[i\phi(z)]$ is the
condensate wave function, $\phi(z)$ is its phase, and $\mu$ is
the chemical potential.
The local superfluid velocity is given by
$v(z)=(\hbar/m)\partial_z \phi(z)$.
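To illustrate, the ground state of the GP equation in the periodic potential can be obtained by imaginary-time split-step propagation. The one-dimensional sketch below is ours, written in units $\hbar=m=1$ and $q_{\rm B}=1$ (so $d=\pi$ and $E_{\rm R}=1/2$) with purely illustrative values of $s$, $g$, and the mean density:

```python
import numpy as np

# units: hbar = m = 1, q_B = 1, so d = pi and E_R = 1/2
N = 128
z = np.linspace(0.0, np.pi, N, endpoint=False)
k = 2.0*np.pi*np.fft.fftfreq(N, d=np.pi/N)
s, E_R = 1.0, 0.5
V = s*E_R*np.sin(z)**2          # lattice potential V_0 sin^2(q_B z)
g, n_avg = 1.0, 1.0             # illustrative interaction and mean density
dt = 1e-3

# imaginary-time split-step (Strang) propagation toward the ground state
psi = np.ones(N, dtype=complex)
for _ in range(4000):
    psi *= np.exp(-0.5*dt*(V + g*np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-0.5*dt*k**2)*np.fft.fft(psi))
    psi *= np.exp(-0.5*dt*(V + g*np.abs(psi)**2))
    psi *= np.sqrt(n_avg*N/np.sum(np.abs(psi)**2))   # fix mean density

# chemical potential mu = <psi| H_GP |psi> / <psi|psi>
Hpsi = np.fft.ifft(0.5*k**2*np.fft.fft(psi)) + (V + g*np.abs(psi)**2)*psi
mu = float(np.real(np.sum(np.conj(psi)*Hpsi))/np.sum(np.abs(psi)**2))
```

As expected, the converged density $|\Psi(z)|^2$ is largest at the potential minima and the chemical potential is of order $g\bar n + V_0/2$.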
For superfluid Fermi gases, we consider a balanced system of
attractively-interacting (pseudo)spin $1/2$ fermions,
where the density of each spin component is $n/2$.
To describe the BCS-BEC crossover at zero temperature,
we use the Bogoliubov-de Gennes (BdG) equations
\cite{bdg,giorgini}
\begin{equation}
\left( \begin{array}{cc}
H'(\mathbf r) & \Delta (\mathbf r) \\
\Delta^\ast(\mathbf r) & -H'(\mathbf r) \end{array} \right)
\left( \begin{array}{c} u_i( \mathbf r) \\ v_i(\mathbf r)
\end{array} \right)
=\epsilon_i\left( \begin{array}{c} u_i(\mathbf r) \\
v_i(\mathbf r) \end{array} \right) \; ,
\label{eq:BdG}
\end{equation}
where $H'(\mathbf r) =-\hbar^2 \nabla^2/2m +V_{\rm ext}-\mu$,
$u_i(\mathbf r)$ and $v_i(\mathbf r)$ are quasiparticle wave functions,
which obey the normalization condition
$\int d{\mathbf r}\ [u_i^*(\mathbf r)u_j(\mathbf r)+v_i^*(\mathbf r)v_j(\mathbf r)]=\delta_{i,j}$,
and $\epsilon_i$ are the corresponding quasiparticle energies.
The pairing field $\Delta(\mathbf r)$ and the chemical potential $\mu$
in Eq.\ (\ref{eq:BdG})
are self-consistently determined from the gap equation
\begin{equation}
\Delta(\mathbf r) =-g \sum_i u_i(\mathbf r) v_i^*(\mathbf r) \; ,
\label{eq:gap}
\end{equation}
together with the constraint on the average number density
\begin{equation}
\bar{n}=\frac{2}{V} \sum_i \int |v_i(\mathbf r)|^2\ d{\mathbf r}
=\frac{1}{V}\int n({\bf r})\ d{\mathbf r},
\end{equation}
with $n({\bf r})\equiv 2 \sum_i |v_i({\bf r})|^2$.
Here, $g$ is the coupling constant for the contact interaction, and
$V$ is the volume of the system.
The average energy density $\bar{e}$ can be calculated as
\begin{equation}
\bar{e} = \frac{1}{V}\int \! d{\bf r} \Biggl[\frac{\hbar^2}{2m}
\left(2\sum_i|\bm{\nabla}v_i|^2\right) + V_{\rm ext}\, n({\bf r})
+\frac{1}{g} |\Delta({\bf r})|^2 \Biggr]\, .
\label{eq:energydens}
\end{equation}
For contact interactions, the right-hand side of Eq.\ (\ref{eq:gap})
has an ultraviolet divergence, which has to be regularized by
replacing the bare coupling constant $g$ with the two-body $T$-matrix
related to the $s$-wave scattering length
\cite{randeria,bruun,bulgac,grasso}.
A standard scheme \cite{randeria} is to introduce the cutoff energy
$E_C\equiv\hbar^2k_C^2/2m$
in the sum over the BdG eigenstates and to replace $g$ by the following relation:
\begin{equation}
\frac{1}{g}=\frac{m}{4\pi\hbar^2 a_s}-\sum_{k<k_C}
\frac{1}{2\epsilon^{(0)}_k},
\end{equation}
with $\epsilon^{(0)}_k\equiv \hbar^2k^2/2m$.
In the presence of a
supercurrent with wavevector $Q=P/\hbar$
($P$ is the quasi-momentum for atoms rather than for pairs; thus, it is defined
in the range of $|P| \le \hbar q_{\rm B}/2$)
in the $z$ direction,
one can write the quasiparticle wavefunctions in the Bloch form as
$u_i(\mathbf r) =
\tilde{u}_i(z) e^{i Q z}e^{i\mathbf k \cdot \mathbf r }$ and
$v_i(\mathbf r) = \tilde{v}_i(z) e^{-i Q z}e^{i\mathbf k \cdot
\mathbf r }$,
leading to the pairing field
\begin{equation}
\Delta(\mathbf r)=e^{i 2Q z}\tilde{\Delta}(z) .
\end{equation}
Here, $\tilde{\Delta}(z)$,
$\tilde{u}_i(z)$, and $\tilde{v}_i(z)$
are complex functions with period $d$, and
the wave vector $k_z$ ($|k_z| \le q_{\rm B}$) lies in the
first Brillouin zone. This Bloch decomposition transforms
Eq.~(\ref{eq:BdG}) into the following BdG equations for $\tilde{u}_i(z)$
and $\tilde{v}_i(z)$:
\begin{equation}
\left( \begin{array}{cc}
\tilde{H}_{Q}(z) & \tilde{\Delta}(z) \\
\tilde{\Delta}^\ast(z) & -\tilde{H}_{-Q}(z) \end{array} \right)
\left( \begin{array}{c} \tilde{u}_i(z) \\ \tilde{v}_i(z)
\end{array} \right)
=\epsilon_i\left( \begin{array}{c} \tilde{u}_i(z) \\
\tilde{v}_i(z) \end{array} \right) \;,
\label{eq:BdG2}
\end{equation}
where
\begin{equation}
\tilde{H}_{Q}(z)\equiv \frac{\hbar^2}{2m} \left[ k^2_\perp
+\left(-i\partial_z+Q+k_z\right)^2 \right] +V_{\rm ext}(z) -\mu\, .
\nonumber\label{hq}
\end{equation}
Here, $k_\perp^2\equiv k_x^2 + k_y^2$, and
the label $i$ represents the wave vector $\mathbf k$, as well
as the band index.
\subsection*{2.3.\quad Hydrodynamic Theory\label{sec:hydro}}
When the local density approximation (LDA) is valid, such that
the typical length scale of the density variation given by $d$
is much larger than the healing length $\xi$ of the superfluid,
hydrodynamic theory in the LDA can be useful \cite{vc}.
In hydrodynamic theory, we describe the system
in terms of the density field $n(z)$ and the (quasi-)momentum field $P(z)$
[or the velocity field $v(z)$].
The LDA assumes that, locally, the system behaves like a uniform
gas; thus, the energy density $e(n,P)$ can be written in the form
\begin{equation}
e(n,P)=nP^2/2m + e(n,0),
\end{equation}
and one can define the local chemical
potential $\mu(n)=\partial e(n,0) /\partial n$. The density
profile of the gas at rest in the presence of the external
potential can be obtained from the Thomas-Fermi relation $\mu_0 =
\mu[n(z)] + V_{\rm ext} (z)$. If the gas is flowing with a
constant current density $j=n(z)v(z)$, the Bernoulli equation for
the stationary velocity field $v(z)$ is
\begin{equation}
\mu_j = \frac{m}{2} \left[ \frac{j}{n(z)}\right]^2 + \mu(n) +
V_{\rm ext} (z),
\label{eq:mu}
\end{equation}
where $\mu_j$ is the $z$-independent value of the chemical
potential.
In the following two typical cases, the uniform gas has
a polytropic equation of state,
\begin{equation}
\mu(n)=\alpha n^\gamma :
\end{equation}
1) a dilute Bose gas with repulsive interaction, where
$\gamma=1$ and $\alpha=g=4\pi\hbar^2a_s/m$, and
2) a dilute Fermi gas at
unitarity, where $\mu(n)=(1+\beta) E_{\rm F}
=[(1+\beta) (3\pi^2)^{2/3}\hbar^2/(2m)] n^{2/3}$, i.e.,
$\gamma=2/3$ and $\alpha=(1+\beta) (3\pi^2)^{2/3}\hbar^2/(2m)$.
Here, $\beta$ is a universal parameter, which is negative, and its
absolute value is of order unity,
accounting for the attractive interatomic interactions \cite{giorgini,beta}.
Using the equation of state, one can write
\begin{equation}
m c_s^2(z) = n \frac{\partial}{\partial n} \mu(n) = \gamma \mu(n) \, ,
\label{eq:mc2}
\end{equation}
where $c_s(z)$ is the local sound velocity, which depends on $z$
through the density profile $n(z)$. In a uniform gas of density $n$,
the sound velocity is given by $c_s^{(0)}= [\gamma\mu(n)/m]^{1/2}$.
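As a minimal numerical sketch of these relations (with illustrative parameter values in units $\hbar=m=1$, not taken from any figure in this paper), one can evaluate the Thomas-Fermi profile of a gas at rest in a lattice-like potential and the local sound velocity of Eq.~(\ref{eq:mc2}) for a polytropic equation of state:

```python
import numpy as np

# Polytropic equation of state mu(n) = alpha * n**gamma (Sec. 2.3).
# Units hbar = m = 1; alpha, V0, mu0, d are illustrative values.
gamma, alpha, m = 2.0 / 3.0, 1.0, 1.0    # gamma = 2/3: unitary Fermi gas
mu0, V0, d = 1.0, 0.3, 1.0

z = np.linspace(0.0, d, 201)
V = V0 * np.sin(np.pi * z / d) ** 2      # one period of a lattice-like potential

# Thomas-Fermi profile of the gas at rest: mu0 = mu[n(z)] + V(z)
n = ((mu0 - V) / alpha) ** (1.0 / gamma)

# Local sound velocity, Eq. (mc2): m c_s(z)^2 = gamma * mu(n(z))
cs = np.sqrt(gamma * alpha * n ** gamma / m)

# c_s is smallest where the potential is largest (density minimum)
assert np.argmax(V) == np.argmin(cs)
```

The sound velocity is smallest at the potential maxima, where the density is depleted; for a flowing gas, this is where the Landau instability sets in first (Sec.\ 4.2.3).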
\section{EQUATION OF STATE, INCOMPRESSIBILITY, AND EFFECTIVE MASS\label{sec:thermo}}
In this section, focusing on superfluid Fermi gases at unitarity,
we discuss the effects of the periodic potential on
the macroscopic and the static properties of the fluid, such as
the equation of state, the incompressibility, and the effective mass \cite{optlatunit}.
The important point is that the periodic potential
favors the formation of bound molecules in a two-component Fermi gas
even at unitarity \cite{orso}
(see also, e.g., Refs.\ \onlinecite{fedichev} and \onlinecite{moritz}).
The emergence of the lattice-induced bound states
drastically changes the above macroscopic and static properties
from those of uniform systems in the strong lattice region of
$\eta_{\rm height}\gg 1$ \cite{optlatunit}.
Such an effect is absent in ideal Fermi gases and
BECs of repulsively-interacting bosonic atoms, which can be considered
as two limits in the BCS-BEC crossover.
\subsection*{3.1.\quad Basic Equations}
At zero temperature, the chemical potential $\mu$ (the equation of state)
and the current $j$ of a superfluid Fermi gas
in a lattice are given by the derivatives of the average energy density $\bar{e}=E/V$
with respect to the average (coarse-grained) density $\bar{n}$ \cite{note_nbar}
and the average quasi-momentum $\bar{P}$ of the bulk superflow, respectively
(hereafter, for notational simplicity, we omit ``$\bar{\quad}$''
for the coarse-grained quantities, which should not be confused with
the local quantities):
\begin{equation}
\mu = \frac{ \partial e(n,P)}{ \partial n}\ , \quad
j = \frac{\partial e(n,P)}{\partial P}\, .
\end{equation}
The incompressibility (or inverse compressibility)
$\kappa^{-1}$ and the effective mass $m^*$
are given by the second derivatives of $e$ with respect to $n$ and $P$:
\begin{align}
\kappa^{-1} =& n\frac{\partial^2 e(n,P)}{\partial n^2}
= n \frac{\partial \mu(n,P) }{ \partial n}\, ,\label{eq:kappainv}\\
\frac{1}{m^*} =& \frac{1}{n} \frac{\partial^2 e(n,P)}{\partial P^2}
= \frac{1}{n}\frac{\partial j(n,P)}{\partial P}\, .
\label{eq:m*}
\end{align}
We calculate these quantities for $P=0$, i.e., for a gas at
rest, in the periodic potential.
In the absence of the lattice potential ($s=0$),
the thermodynamic properties of unitary Fermi gases show a universal behavior:
the only relevant length scale is the
interparticle distance fixed by $k_{\rm F}$. Due to translational
invariance, one can write $e(n,P) = e(n,0) + n P^2/2m$ so that
$j=nP/m$ and $m^*=m$.
Furthermore, the energy density at $P=0$ can be written as
$e(n,0)=(1+\beta)e^0(n,0)$, where
$e^0(n,0)\equiv (3/5)n E_{\rm F} \propto n^{5/3}$ is
the energy density of the ideal Fermi gas.
Thus, we have $\mu=(1+\beta)E_{\rm F} + P^2/2m$ and
$\kappa^{-1}=(2/3)(1+\beta) E_{\rm F}$.
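These uniform-gas relations follow by differentiating the energy density; the sketch below (illustrative units with $\hbar=1$ and an illustrative value of $\beta$) checks $\mu=(1+\beta)E_{\rm F}$, $\kappa^{-1}=(2/3)(1+\beta)E_{\rm F}$, and $m^*=m$ numerically, directly from the definitions in Eqs.~(\ref{eq:kappainv}) and (\ref{eq:m*}):

```python
import numpy as np

# Uniform unitary Fermi gas: e(n,P) = (1+beta)*(3/5)*n*E_F(n) + n*P^2/(2m),
# with E_F(n) = (3*pi^2*n)**(2/3)/(2m) in units hbar = 1.
beta, m = -0.6, 1.0   # illustrative value of beta (negative, of order unity)

def EF(n):
    return (3.0 * np.pi ** 2 * n) ** (2.0 / 3.0) / (2.0 * m)

def e(n, P):
    return (1.0 + beta) * 0.6 * n * EF(n) + n * P ** 2 / (2.0 * m)

n0, P0, h = 0.1, 0.0, 1e-4

# mu = de/dn and kappa^{-1} = n d^2e/dn^2 by central differences
mu = (e(n0 + h, P0) - e(n0 - h, P0)) / (2 * h)
kinv = n0 * (e(n0 + h, P0) - 2 * e(n0, P0) + e(n0 - h, P0)) / h ** 2

# 1/m* = (1/n) d^2e/dP^2
minv = (e(n0, P0 + h) - 2 * e(n0, P0) + e(n0, P0 - h)) / h ** 2 / n0

assert abs(mu - (1 + beta) * EF(n0)) < 1e-6            # mu = (1+beta) E_F
assert abs(kinv - (2.0 / 3.0) * (1 + beta) * EF(n0)) < 1e-6
assert abs(minv - 1.0 / m) < 1e-6                      # m* = m
```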
\subsection*{3.2.\quad Equation of State and Density Profile}
When the lattice height $s$ is large, the periodic potential
favors the formation of bound molecules.
In the strong lattice limit $\eta_{\rm height}=s E_{\rm R}/E_{\rm F} \gg 1$,
the system tends to be a BEC of lattice-induced bosonic molecules.
Therefore, in this region,
the chemical potential shows a linear density dependence, $\mu\propto n$,
as shown by the red solid line in the inset of Fig.\ \ref{fig_densprof}
calculated for unitary Fermi gases in a lattice with $s=5$.
This is clearly different from the density dependence of the
chemical potential in the uniform system ($s=0$), $\mu\propto n^{2/3}$,
as shown by the blue dashed line in the same inset.
We also note that, for $s \gg 1$, this linear density dependence persists
even at relatively large densities such that $E_{\rm F}/sE_{\rm R}\sim 1$
(e.g., $nq_{\rm B}^{-3}=0.1$ corresponds to $E_{\rm F}/E_{\rm R}\simeq2.1$; thus,
$E_{\rm F}/sE_{\rm R}\simeq 0.41$ in the case of this figure)
because the system effectively behaves like a 2D system in this density region
due to the bandgap in the longitudinal degree of freedom.
This drastic change of the equation of state manifests itself as a change of
the coarse-grained density profile when a harmonic confinement potential
$V_{\rm ho}(\mathbf r)$ is added to the periodic potential.
The coarse-grained
density profile, $n({\mathbf r})$, is calculated using the LDA for $\mu$:
\begin{equation}
\mu_0 = \mu[n({\mathbf r})] + V_{\rm ho}(\mathbf r) .
\end{equation}
Here, $\mu[n]$ is the local chemical potential as a function of the
coarse-grained density $n$ obtained by using the BdG calculation
for the lattice system
and $\mu_0$ is the chemical potential
of the system fixed by the normalization condition
$\int d{\mathbf r}\, n({\mathbf r})=N$.
Figure \ref{fig_densprof} clearly
shows that, for $s=5$, the profile takes the form of an inverted
parabola, reflecting the linear density dependence of the chemical
potential (see inset). In this calculation, we assume an
isotropic harmonic potential,
$V_{\rm ho}(\mathbf r)=m\omega^2r^2/2$,
where $\omega$ is the trapping frequency,
$\hbar\omega/E_{\rm R}=0.01$, and the number of particles $N=10^6$;
these parameters are close to the experimental ones in Ref.~\onlinecite{miller}.
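A sketch of this LDA construction (with illustrative trap parameters, not those of Fig.~\ref{fig_densprof}, and with the normalization condition replaced by a fixed $\mu_0$ for simplicity) shows how a linear equation of state flattens the profile into an inverted parabola compared with the $\mu\propto n^{2/3}$ case of the uniform unitary gas:

```python
import numpy as np

# LDA coarse-grained profile in an isotropic trap: mu0 = mu[n(r)] + V_ho(r),
# with a model local chemical potential mu(n) = alpha * n**gamma.
# gamma = 1 mimics the strong-lattice (molecular-BEC) limit;
# gamma = 2/3 the uniform unitary gas.  Illustrative parameter values.
m, omega, alpha, mu0 = 1.0, 1.0, 1.0, 1.0

r = np.linspace(0.0, np.sqrt(2 * mu0 / m) / omega, 400)   # up to the TF radius
V = 0.5 * m * omega ** 2 * r ** 2

def profile(gamma):
    # n(r) = [(mu0 - V)/alpha]^(1/gamma), cut off where mu0 < V
    return np.clip((mu0 - V) / alpha, 0.0, None) ** (1.0 / gamma)

n_lin = profile(1.0)        # inverted parabola, n ~ (mu0 - V)
n_uni = profile(2.0 / 3.0)  # n ~ (mu0 - V)**(3/2)

# The linear EOS gives a broader, flatter profile away from the center
assert n_lin[0] == 1.0 and n_uni[0] == 1.0
assert np.all(n_lin >= n_uni - 1e-12)
```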
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{0}{
\resizebox{10cm}{!}
{\includegraphics{densprof_rev3.eps}}}
\caption{\label{fig_densprof}(Color online)\quad Coarse-grained
density profiles of a trapped unitary Fermi gas, $n(r_{\perp}=0,z)$ for $s=0$
and $5$ in units of the central density $n(0)=0.0869 q_{\rm B}^3$
calculated for $s=0$ (this local density corresponds to
$E_{\rm F}/E_{\rm R}=1.88$). The quantity $R_z^{(0)}$ is the axial
Thomas-Fermi radius for $s=0$. The inset shows the density
dependence of the chemical potential of unitary Fermi gases.
Here, $\mu_0$ is the chemical potential in the limit of $n=0$.
This figure is taken from Ref.\ \onlinecite{optlatunit}.
}
\end{center}
\end{figure}
\subsection*{3.3.\quad Incompressibility and Effective Mass}
The formation of molecules induced by the lattice
also has important consequences for $\kappa^{-1}$ and $m^*$.
Due to the linear density dependence of the chemical potential
in the strong lattice region ($\eta_{\rm height}\gg 1$ or $E_{\rm F}/E_{\rm R}\ll s$;
or low density limit for a fixed value of $s$),
$\kappa^{-1}$ is also proportional to $n$ and
$\kappa^{-1}/\kappa^{-1}(s=0) \propto n^{1/3} \rightarrow 0$
for $E_{\rm F}/E_{\rm R}\rightarrow 0$
[see Fig.\ \ref{fig_kinv_meff_uni}(a)].
This means that the gas becomes highly compressible
in the presence of a strong lattice.
This is in strong contrast to the ideal Fermi gas
corresponding to the BCS limit, which gives nonzero
values of $\kappa^{-1}/\kappa^{-1}(s=0)\sim 1$
even in the same limit (see Fig.\ 2 in Ref.\ \onlinecite{optlatunit}).
On the other hand, in the weak-lattice limit (or high-density limit
for a fixed value of $s$), the system reduces to a uniform gas.
By using a hydrodynamic theory, which is valid when $E_{\rm F}/E_{\rm R}\gg 1$,
and expanding with respect to the
small parameter $sE_{\rm R}/E_{\rm F}$, we obtain $\kappa^{-1}$
of unitary Fermi gases in this region as \cite{optlatunit}
\begin{equation}
\kappa^{-1} \simeq \frac{2}{3}(1+\beta)
E_{\rm F} \left[ 1 + \frac{1}{32} (1+\beta)^{-2}
\left(\frac{sE_{\rm R}}{E_{\rm F}}\right)^2 \right]
+ O \bigl[ \left( sE_{\rm R}/E_{\rm F}\right)^4 \bigr] .
\label{expansionk-1}
\end{equation}
This is shown by dotted lines in Fig.~\ref{fig_kinv_meff_uni}(a).
Note that, in this region, $\kappa^{-1}/\kappa^{-1}(s=0)>1$,
and it decreases to unity with increasing $E_{\rm F}/E_{\rm R}$.
Therefore, $\kappa^{-1}/\kappa^{-1}(s=0)$ should
take a maximum value larger than unity
in the intermediate region of $E_{\rm F}/E_{\rm R}\sim 1$,
as can be seen in Fig.~\ref{fig_kinv_meff_uni}(a),
which is mainly caused by the bandgap in the longitudinal degree of freedom.
Because the tunneling rate between neighboring sites,
which is related to the (inverse) effective mass $1/m^*$,
is exponentially suppressed with increasing mass,
the formation of molecules induced by the lattice
can yield a drastic enhancement of $m^*$ for $s\gg 1$ in the low-density limit
[Fig.\ \ref{fig_kinv_meff_uni}(b)].
This enhancement makes $m^*$ much larger than it is for ideal Fermi gases
(see Fig.\ 2 in Ref.~\onlinecite{optlatunit})
and for BECs of repulsively-interacting Bose gases
(see Fig.\ 4 in Ref.~\onlinecite{kramer})
with the same mass $m$.
As $E_{\rm F}/E_{\rm R}$ increases, the effective mass exhibits a
maximum at $E_{\rm F}/E_{\rm R}\sim 1$ due to the bandgap in the
longitudinal degree of freedom;
then, it decreases to the bare mass, $m^*=m$.
Hydrodynamic theory can explain
the behavior of $m^*$ of unitary Fermi gases
for small $sE_{\rm R}/E_{\rm F}$ \cite{optlatunit}:
\begin{equation}
\frac{m^*}{m} \simeq 1+\frac{9}{32} (1+\beta)^{-2}
\left(\frac{s E_{\rm R}}{E_{\rm F}}\right)^2
+ O\bigl[\left(sE_{\rm R}/E_{\rm F}\right)^4\bigr]\ .
\label{expansionm}
\end{equation}
The numerical factor in the second term
shows that the effect of the lattice is stronger for $m^*$ than for
$\kappa^{-1}$. It is worth comparing the results with the case
of bosonic atoms, where $m^*$ decreases monotonically with increasing
density because the interaction broadens the condensate wave function
and favors the tunneling \cite{kramer}.
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{270}{
\resizebox{!}{16cm}{\includegraphics{kinv_meff_uni_presen_2.eps}}}
\caption{\label{fig_kinv_meff_uni}(Color online)\quad
Incompressibility $\kappa^{-1}$ and effective mass $m^*$ of
unitary Fermi gases for $s=1$ (red), $2.5$ (blue), and $5$ (green).
Asymptotic expressions, Eqs.\ (\ref{expansionk-1}) and (\ref{expansionm}),
obtained by using the hydrodynamic theory are shown by the dotted lines.
Open circles in panel (b) show $m^*$ from Ref.~\onlinecite{orso}
which was obtained by solving the Schr\"odinger equation for the two-body problem.
The $s=1$ results for $m^*$ are also shown in the inset on the
linear scale.
This figure is adapted from Ref.~\onlinecite{optlatunit}.
}
\end{center}
\end{figure}
In Fig.~\ref{fig_cs}, we show the sound velocity
of the unitary Fermi gases in a lattice,
\begin{equation}
c_{\rm s}=\sqrt{\frac{\kappa^{-1}}{m^*}}\, ,
\label{eq:cs}
\end{equation}
calculated from the above results for $\kappa^{-1}$ and $m^*$.
It shows a significant reduction compared to the uniform system,
mainly due to the larger effective mass $m^*/m>1$ except for
the low-density (more precisely, strong lattice) limit
in which $\kappa^{-1}$ and, thus, $c_{\rm s}$ show abrupt reductions.
Using Eqs.\ (\ref{expansionk-1}) and (\ref{expansionm}),
we obtain the expression of the sound velocity
in the weak lattice limit as
\begin{equation}
c_s^2 \simeq {c_s^{(0)}}^2 \left[1-\frac{1}{4}(1+\beta)^{-2}
\left(\frac{s E_{\rm R}}{E_{\rm F}}\right)^2
+ O\bigl[\left(sE_{\rm R}/E_{\rm F}\right)^4\bigr]\right]\, ,
\end{equation}
where $c_{\rm s}^{(0)} \equiv [(2/3)(1+\beta)E_{\rm F}/m]^{1/2}$
is the sound velocity for a uniform system.
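Since $c_s^2=\kappa^{-1}/m^*$ [Eq.~(\ref{eq:cs})], the leading lattice corrections in Eqs.~(\ref{expansionk-1}) and (\ref{expansionm}) must combine to give the $-1/4$ coefficient above; a short check with exact rationals:

```python
from fractions import Fraction

# Coefficients of (1+beta)^{-2} (s E_R/E_F)^2 in the weak-lattice expansions:
a = Fraction(1, 32)   # correction to kappa^{-1}, Eq. (expansionk-1)
b = Fraction(9, 32)   # correction to m*, Eq. (expansionm)

# c_s^2/c_s^(0)^2 = (1 + a x^2)/(1 + b x^2) ~ 1 + (a - b) x^2 + O(x^4),
# so the coefficient in the c_s^2 expansion must be a - b = -1/4:
assert a - b == Fraction(-1, 4)

# Numerical cross-check of the expansion at a small but finite x:
x = 1e-3
ratio = (1 + float(a) * x ** 2) / (1 + float(b) * x ** 2)
assert abs((ratio - 1.0) / x ** 2 - float(a - b)) < 1e-4
```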
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{0}{
\resizebox{10cm}{!}
{\includegraphics{cs_rev2.eps}}}
\caption{\label{fig_cs}(Color online)\quad
Sound velocity $c_{\rm s}$ of unitary Fermi gases in a lattice
in units of the sound velocity
$c_{\rm s}^{(0)} =[(2/3)(1+\beta)E_{\rm F}/m]^{1/2}$
for a uniform system. As in Fig.\ \ref{fig_kinv_meff_uni},
red, blue and green lines correspond to $s=1, 2.5$, and $5$,
respectively.
This figure is taken from Ref.~\onlinecite{optlatunit}.
}
\end{center}
\end{figure}
\section{STABILITY\label{sec:stability}}
\subsection*{4.1.\quad Landau Criterion}
Stability of a superfluid flow is one of the most fundamental
issues of superfluidity and was pioneered by Landau \cite{landau}.
He predicted a critical velocity $v_c$ of the superflow
above which the kinetic energy of the superfluid is
large enough to be dissipated by creating excitations
(see, e.g., Refs.\ \onlinecite{landau,nozieres_pines,pethick_smith,pitaevskii_stringari}).
This instability is called the Landau or energetic instability, and
its critical velocity is the Landau critical velocity.
The celebrated Landau criterion for energetic instability
of uniform superfluids is given by \cite{landau,nozieres_pines,pethick_smith,pitaevskii_stringari}
\begin{equation}
v > v_c = \min{\left(\frac{\epsilon(p)}{p}\right)},
\label{eq:landau}
\end{equation}
where $v$ is the velocity of the superflow,
$\epsilon(p)$ is the excitation spectrum in the static case ($v=0$),
and $p$ is the magnitude of the momentum $\bm{p}$
of an excitation in the comoving frame of the fluid.
Here, $v_c$ is determined by the condition that
there starts to exist a momentum $\bm{p}$ at which the excitation spectrum
in the comoving frame of a perturber
(it can be an obstacle moving in the fluid or a vessel in which the fluid
flows) is zero or negative.
For superfluids of weakly interacting Bose gases, the excitation spectrum
is given by the Bogoliubov dispersion relation
(e.g., Refs.\ \onlinecite{bogoliubov,nozieres_pines,pethick_smith,pitaevskii_stringari})
\begin{equation}
\epsilon(p)= \sqrt{\frac{p^2}{2m}\left(\frac{p^2}{2m} + 2 gn \right)}
= c_s p \sqrt{1+\left(\frac{p}{2mc_s}\right)^2}\ ,
\end{equation}
with
\begin{equation}
c_s= \sqrt{\frac{gn}{m}}
\label{eq:csbose}
\end{equation}
being the sound velocity defined by Eq.\ (\ref{eq:cs})
[note that $\mu=gn$ and, thus, $\kappa^{-1}=gn$, and $m^*=m$
for uniform BECs].
Thus, from Eqs.\ (\ref{eq:landau}) and (\ref{eq:csbose}),
the Landau critical velocity
is easily seen to be given by the sound velocity $v_c=c_s$ for $p=0$.
This means that, in superfluids of dilute Bose gases,
the energetic instability is caused by excitations of long-wavelength phonons.
We also note that BECs of non-interacting Bose gases
cannot show superfluidity in the sense that $v_c=0$;
they cannot support a superflow.
In superfluid Fermi gases, another mechanism
can cause the energetic instability: fermionic pair-breaking excitations.
In the mean-field BCS theory, the quasiparticle spectrum of
uniform superfluid Fermi gases is given by
\begin{equation}
\epsilon(p)=\sqrt{\left(\frac{p^2}{2m}-\mu\right)^2+\Delta^2}\ .
\end{equation}
Thus, the Landau critical velocity due to the pair-breaking excitations
is given by \cite{combescot}
\begin{equation}
v_c = \sqrt{\frac{1}{m}\left(\sqrt{\mu^2+\Delta^2} - \mu\right)}\ .
\end{equation}
In the deep BCS region, where $\mu\simeq E_{\rm F}\gg \Delta$,
we obtain $v_c\simeq \Delta/p_{\rm F}$ with $p_{\rm F}\equiv \hbar k_{\rm F}$.
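A quick numerical check of this deep-BCS limit (illustrative units, with $\mu$ set equal to $E_{\rm F}$): the relative deviation of $v_c$ from $\Delta/p_{\rm F}$ vanishes as $(\Delta/\mu)^2$:

```python
import numpy as np

# Pair-breaking critical velocity v_c = sqrt[(sqrt(mu^2+Delta^2)-mu)/m];
# in the deep BCS limit mu ~ E_F >> Delta it reduces to Delta/p_F
# with p_F = sqrt(2 m E_F).  Illustrative units hbar = 1.
m = 1.0
mu = 1.0                       # mu ~ E_F in the deep BCS region
pF = np.sqrt(2 * m * mu)

for Delta in [1e-1, 1e-2, 1e-3]:
    vc = np.sqrt((np.sqrt(mu ** 2 + Delta ** 2) - mu) / m)
    # relative deviation from Delta/p_F shrinks as (Delta/mu)^2
    assert abs(vc / (Delta / pF) - 1) < (Delta / mu) ** 2
```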
In the BCS-BEC crossover of superfluid Fermi gases,
where the above two kinds of excitations exist,
the Landau critical velocity is determined by which of them
gives smaller $v_c$.
In the weakly-interacting BCS region ($1/k_{\rm F}a_s\alt -1$),
the pairing gap is exponentially small, $\Delta \sim E_{\rm F}\, e^{-\pi/(2k_{\rm F}|a_s|)}$,
and $v_c$ is set by the pair-breaking excitations.
On the other hand, in the BEC region ($1/k_{\rm F}a_s\agt 1$),
where the system consists of weakly-interacting bosonic molecules,
creation of long-wavelength superfluid phonon excitations
causes an energetic instability.
In the unitary region, both
mechanisms are suppressed, and the critical velocity shows a maximum
value \cite{andrenacci,combescot,sensarma}.
\subsection*{4.2.\quad Stability of Superflow in Lattice Systems}
\subsubsection*{4.2.1.\quad Energetic instability and dynamical instability}
For the onset of an energetic instability, energy dissipation is necessary
in general; i.e., in closed systems, $v>v_c$ is not a sufficient
condition for a breakdown of the superflow, so the flow could still persist
even at $v>v_c$. The energetic instability corresponds to the situation
in which the system is located at a saddle point of the energy landscape
(i.e., there is at least one direction in which the curvature
of the energy landscape is negative).
In the presence of a periodic potential, another type of
instability, called dynamical (or modulational) instability, can occur
in addition to the energetic instability.
The dynamical instability means that small perturbations on a
stationary state grow exponentially in the process of
(unitary) time evolution without dissipation.
Similar to energetically-unstable states,
dynamically-unstable states are also located at saddle points
in the energy landscape, but a difference from energetically-unstable states
is that there are kinematically-allowed excitation processes that satisfy
the energy and (quasi-)momentum conservations.
This means that an energetic instability is a necessary condition
for dynamical instability (see, e.g., Refs.\ \onlinecite{wu2001} and \onlinecite{wu2003}
for bosons and Ref.\ \onlinecite{ring} for fermions); therefore,
the critical value of the (quasi-)momentum for the dynamical instability
should always be larger than (or equal to) that for the energetic instability.
For such a reason, we shall focus on the energetic instability hereafter
\cite{note:dynamical}.
\subsubsection*{4.2.2.\quad Determination of the critical velocity}
If the energetic instability is caused by
long-wavelength superfluid phonon excitations,
the critical velocity can be determined by using a hydrodynamic analysis
of the excitations
\cite{machholm03,taylor,pitaevskii05,pethick_smith,vc,vc_crossover}
(this should not be confused with the LDA hydrodynamics
discussed in Sec.\ 2.3).
This analysis is valid provided that the wavelength of the excitations
that trigger the instability is much larger than the
typical length scale of the density variation, i.e., the lattice constant $d$.
We consider the continuity equation and the Euler equation for the
coarse-grained density $n$ and the coarse-grained
(quasi-)momentum $P$ averaged over
the length scale larger than the lattice constant:
\begin{align}
\frac{\partial n}{\partial t}+\bm{\nabla}\cdot\bm{j}=&
\frac{\partial n}{\partial t}+\frac{\partial}{\partial z} \frac{\partial e}{\partial P}=0 ,\\
\frac{\partial P}{\partial t}+ \frac{\partial \mu}{\partial z}=&
\frac{\partial P}{\partial t}+ \frac{\partial}{\partial z} \frac{\partial e}{\partial n}=0 ,
\end{align}
where $e=e(n,P)$ is the energy density of the superfluid
in the periodic potential for the averaged density $n$ and
the averaged (quasi-)momentum $P$.
Linearizing with respect to the perturbations of
$n(z,t)=n_0+\delta n(z,t)$ and $P(z,t)=P_0+\delta P(z,t)$
with $\delta n(z,t)$ and $\delta P(z,t)\propto e^{iqz-i\omega t}$,
we obtain the dispersion relation of the long-wavelength phonon,
\begin{equation}
\omega(q) = \frac{\partial^2 e}{\partial n \partial P} q
+ \sqrt{\frac{\partial^2 e}{\partial n^2}\frac{\partial^2 e}{\partial P^2}}\,
|q|\, .
\label{eq:dispersion}
\end{equation}
Here, $\hbar\omega$ and $q$ are the energy and the wavenumber of the
excitation, respectively. The first term is the so-called
Doppler term, for which $\partial_n\partial_P e \ge 0$.
Thus, the energetic instability occurs when
$\omega(q)$ for $q=-|q|$ becomes zero or negative:
\begin{equation}
\frac{\partial^2 e}{\partial n \partial P} \ge
\sqrt{\frac{\partial^2 e}{\partial n^2}
\frac{\partial^2 e}{\partial P^2}}\ .
\label{eq:vcrit-hydro}
\end{equation}
Using this condition, we determine the critical quasi-momentum $P_c$
at which the equality of Eq.\ (\ref{eq:vcrit-hydro}) holds,
and finally, we obtain the critical velocity from
\begin{equation}
v_c = \frac{1}{n}\left(\frac{\partial e}{\partial P}\right)_{P_c}\, .
\label{eq:vc}
\end{equation}
We note that, for calculating $P_c$ and $v_c$ using
Eqs.~(\ref{eq:vcrit-hydro}) and (\ref{eq:vc}),
all we need is the energy density of the stationary states
as a function of $n$ and $P$.
This can be obtained by solving, e.g., the GP or the BdG equations
for the periodic potential.
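The linearization leading to Eq.~(\ref{eq:dispersion}) can be made explicit: with $\delta n,\delta P\propto e^{iqz-i\omega t}$, the two equations reduce to an eigenvalue problem for $\omega/q$ with matrix $M=\bigl(\begin{smallmatrix} e_{nP} & e_{PP}\\ e_{nn} & e_{nP}\end{smallmatrix}\bigr)$, whose eigenvalues are $e_{nP}\pm\sqrt{e_{nn}e_{PP}}$. The sketch below verifies this for a model energy density without a lattice (illustrative parameters; in a lattice, the derivatives would instead come from the GP or BdG solution, as described above):

```python
import numpy as np

# Linearizing the coarse-grained continuity and Euler equations with
# delta n, delta P ~ exp(iqz - i*omega*t) gives omega/q as an eigenvalue of
#   M = [[e_nP, e_PP], [e_nn, e_nP]],
# reproducing Eq. (eq:dispersion).  Model energy density (no lattice):
# e(n,P) = alpha*n**(gamma+1)/(gamma+1) + n*P**2/(2m), illustrative values.
alpha, gamma, m = 1.0, 2.0 / 3.0, 1.0
n, P = 1.0, 0.3

e_nn = alpha * gamma * n ** (gamma - 1.0)   # d2e/dn2 = dmu/dn
e_PP = n / m                                # d2e/dP2
e_nP = P / m                                # d2e/dn dP (Doppler term)

M = np.array([[e_nP, e_PP], [e_nn, e_nP]])
w = np.linalg.eigvals(M)                    # the two sound speeds omega/q

expected = np.array([e_nP + np.sqrt(e_nn * e_PP),
                     e_nP - np.sqrt(e_nn * e_PP)])
assert np.allclose(np.sort(w), np.sort(expected))

# Energetic instability, Eq. (eq:vcrit-hydro): the backward branch
# e_nP - sqrt(e_nn*e_PP) turns non-positive once e_nP reaches
# sqrt(e_nn*e_PP); here e_nP = 0.3 < sqrt(2/3): still stable.
assert e_nP < np.sqrt(e_nn * e_PP)
```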
If the energetic instability of Fermi superfluids
is caused by pair-breaking excitations,
the critical velocity can be determined by
using the quasiparticle energy spectrum $\epsilon_i$ obtained
from the BdG equations.
The energetic instability due to the pair-breaking excitations occurs
when some quasiparticle energy $\epsilon_i$ becomes zero or negative:
\begin{equation}
\epsilon_i\le 0.
\end{equation}
We obtain a critical velocity for the pair-breaking excitations
from Eq.\ (\ref{eq:vc}) evaluated at the critical quasi-momentum
determined by this condition.
\subsubsection*{4.2.3.\quad Critical velocity of
superfluid Bose gases and superfluid unitary Fermi gases in a lattice}
First, we consider the situation where the LDA is valid; i.e.,
the lattice constant $d$ is much larger than the healing length $\xi$.
This condition corresponds to $gn/E_{\rm R}\gg 1$ for superfluid Bose gases
and $E_{\rm F}/E_{\rm R}\gg 1$ for superfluid Fermi gases at unitarity
(see discussion in Sec.\ 2.1).
In the framework of the LDA hydrodynamics explained in Sec.\ 2.3,
the system is considered to become energetically unstable
if there exists some point $z$ at which the local superfluid velocity
$v(z)$ is equal to or larger than the local sound velocity $c_s(z)$.
If the external potential is assumed to have a maximum at $z=z_0$
[i.e., $V(z_0)=V_0$ for our periodic potential],
then at the same point, the density is minimum,
$c_s(z)$ is minimum, and $v(z)$ is maximum because the current
$j=n(z)v(z)$ is constant.
This means that the
superfluid first becomes unstable at $z=z_0$. Using Eq.\ (\ref{eq:mc2}), we can write the
condition for the occurrence of the instability as
$m[j_c/n_c(z_0)]^2=\gamma \mu[n_c(z_0)]= \gamma \alpha n_c^\gamma(z_0)$,
where $n_c(z)$ is the density profile calculated at the critical
current \cite{kink}. By inserting this condition into Eq.~(\ref{eq:mu}),
we can obtain the following implicit relation for the critical current
\cite{vc}:
\begin{equation}
j_c^2 = \frac{\gamma}{m \alpha^{2/\gamma}}
\left[ \frac{2\mu_{j_c}}{2+\gamma}
\left(1-\frac{V(z_0)}{\mu_{j_c}}\right)
\right]^{\frac{2}{\gamma}+1} \, .
\label{eq:jc}
\end{equation}
It is worth noticing that this equation contains only $z$-independent
quantities. It is also independent of the shape of the external
potential: the only relevant parameter is its maximum value
$V(z_0)$. Moreover, it can be applied to both Bose gases and
unitary Fermi gases
(see also Refs.~\onlinecite{mamaladze,hakim,leboeuf} for bosons).
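For a fixed $\mu_{j_c}$, Eq.~(\ref{eq:jc}) gives $j_c$ explicitly (in the full problem, $\mu_{j_c}$ is itself fixed implicitly by the density normalization). A sketch for a dilute BEC ($\gamma=1$, $\alpha=g$; illustrative units, with $\mu_{j_c}$ simply taken as an input) reproduces the vanishing of the critical current at $V(z_0)=\mu_{j_c}$:

```python
import numpy as np

# Hydrodynamic critical current, Eq. (eq:jc), evaluated for a dilute BEC
# (gamma = 1, alpha = g = 1), treating mu_jc as a given input.
m, gamma, alpha = 1.0, 1.0, 1.0
mu_jc = 1.0

def jc(V0):
    return np.sqrt(gamma / (m * alpha ** (2.0 / gamma))
                   * (2.0 * mu_jc / (2.0 + gamma)
                      * (1.0 - V0 / mu_jc)) ** (2.0 / gamma + 1.0))

# V0 = 0: Eq. (eq:jc) gives j_c^2 = (1/m)[2 mu_jc/3]^3 for gamma = 1
assert abs(jc(0.0) - (2.0 / 3.0) ** 1.5) < 1e-12

# j_c decreases monotonically with V0 and vanishes at V0 = mu_jc,
# where the density at the barrier top goes to zero
assert np.all(np.diff(jc(np.linspace(0.0, 1.0, 50))) < 0)
assert jc(mu_jc) == 0.0
```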
In Fig.~\ref{fig:vc}, we plot as thick solid red lines the critical velocity in a lattice
obtained from the hydrodynamic expression in Eq.\ (\ref{eq:jc})
for BECs [panel (a)] and unitary Fermi gases [panel (b)].
In both cases, the critical velocity
$v_c=j_c/n_0$ ($n_0$ is the average density)
is normalized to the value of the sound velocity in
the uniform gas, $c_s^{(0)}$, and is plotted as a function of
$V_0/\mu_{j=0}$.
The limit $V_0/\mu_{j=0} \to 0$ corresponds to the usual Landau
criterion for a uniform superfluid flow in the presence of a small
external perturbation, i.e., a critical velocity equal to the sound
velocity of the gas. In this hydrodynamic scheme, as mentioned before,
the critical velocity decreases when $V_0$ increases
mainly because the density has a local depletion and the velocity
has a corresponding local maximum, so that the Landau instability
occurs earlier. When $V_0 = \mu_{j_c}$, the density
exactly vanishes at $z=z_0$; hence, the critical velocity
goes to zero.
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{0}{ \resizebox{10cm}{!}
{\includegraphics{vc_bose_lat_jkps_2.eps}}}\vspace{0.5cm}
\rotatebox{0}{\resizebox{10cm}{!}
{\includegraphics{vc_fermi_lat_jkps_2.eps}}}
\caption{\label{fig:vc}(Color online)\quad Critical velocity $v_c$
for energetic instability of superfluids in a 1D periodic potential.
Panel (a) is for superfluids of dilute Bose gases,
and panel (b) is for superfluids of dilute Fermi gases at unitarity.
The critical velocity is given in units
of the sound velocity of a uniform gas, $c_s^{(0)}$, and is plotted
as a function of the maximum of the external potential in units of the
chemical potential $\mu_{j=0}$ of the superfluid at rest.
Thick solid lines: prediction of the hydrodynamic
theory within the LDA, as calculated from Eq.~(\ref{eq:jc}).
Symbols:
results obtained from the numerical solutions of the GP equation
[panel (a)] and the BdG equations [panel (b)].
The thinner black solid lines are the tight-binding prediction,
Eq.\ (\protect\ref{eq:vctb}).
Dashed lines are guides for the eye.
This figure is adapted from Ref.~\onlinecite{vc}.
}
\end{center}
\end{figure}
Next, we discuss the critical velocity of the energetic instability
beyond the LDA \cite{vc}.
Here, we use the energy density $e(n,P)$ of superfluids
in a periodic potential
calculated by using the mean-field theory in the continuum model,
i.e., the GP equation for bosons and the BdG equations for fermions.
Based on the energy density,
we determine the critical quasi-momentum and the critical velocity
from Eqs.~(\ref{eq:vcrit-hydro}) and (\ref{eq:vc}).
For Bose superfluids, we plot $v_c$ for various values of $gn/E_{\rm R}$
in Fig.\ \ref{fig:vc}(a).
We can clearly see that these results
approach the LDA prediction for $gn/E_{\rm R}\gg 1$, as expected.
We also note that $v_c$
exhibits a plateau for $gn/E_{\rm R} \alt 1$ (i.e., $\xi\agt d$) and small $V_0$.
This can be understood as follows:
If the healing length $\xi$ is larger than the lattice spacing $d$
and $V_0/gn$ is not too large, the energy
associated with quantum pressure, which is proportional to $1/\xi^2$,
acts against local deformations of the order parameter, and the latter
remains almost unaffected by the modulation of the external
potential. This is the region of the plateau in Fig.~\ref{fig:vc}.
In terms of Eq.~(\ref{eq:vcrit-hydro}), this region occurs when the
left-hand side is $\simeq P/m$ and the right-hand side
is $\simeq c_s^{(0)}$, so that the critical quasi-momentum obeys
the relation $P_c/m=c_s^{(0)}$, which is the usual Landau criterion
for a uniform superfluid in the presence of small perturbers.
With increasing $V_0$, this region ends when $\mu \sim E_{\rm R}$.
If we further increase $V_0$, the
chemical potential $\mu$ becomes larger than $E_{\rm R}$,
the density is forced to oscillate, and $v_c/c_s^{(0)}$ starts to decrease.
The system eventually reaches a region of weakly-coupled superfluids
separated by strong barriers, which is well described by the
tight-binding approximation (also for $gn/E_{\rm R}\agt 1$,
the system enters this region when $V_0$ is sufficiently large).
There, the energy density is given by a sinusoidal form
with respect to $P$ as
\begin{equation}
e(n,P) = e(n,0) + \delta_J \left[1-\cos{(\pi P/P_{\rm edge})}\right]\, .
\end{equation}
Here, $P_{\rm edge}$ is the
quasi-momentum at the edge of the first Brillouin zone; i.e.,
$P_{\rm edge}=\hbar q_{\rm B}$ for superfluids of bosonic atoms
($P_{\rm edge}=\hbar q_{\rm B}/2$ for those of fermionic atoms). The
quantity $\delta_J = n P_{\rm edge}^2/\pi^2 m^*$ corresponds to
the half width of the lowest Bloch band.
Because of the sinusoidal shape of the energy density and
the large effective mass in the tight-binding limit,
the critical quasi-momentum is around $P_{\rm edge}/2$.
Thus, we see that, from Eq.~(\ref{eq:vc}),
the critical velocity is determined by the effective mass as
\begin{equation}
v_c \simeq \frac{1}{\pi}\frac{P_{\rm edge}}{m^*}\, .
\label{eq:vctb}
\end{equation}
The values of $v_c$
obtained from Eq.~(\ref{eq:vctb}), with $m^*$ extracted from the GP
calculation of $e(n,P)$, are plotted by thinner black solid lines in Fig.~\ref{fig:vc}(a)
for $gn/E_{\rm R}=0.4$ and $1$ in the region of $V_0/\mu_{j=0} \agt 2$.
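The tight-binding estimate can be checked directly from the sinusoidal energy density: $j=\partial e/\partial P$ is maximal at $P=P_{\rm edge}/2$, and $v_c=j_{\rm max}/n$ reduces to Eq.~(\ref{eq:vctb}) (illustrative numbers below):

```python
import numpy as np

# Tight-binding check: with e(n,P) = e(n,0) + delta_J[1 - cos(pi P/P_edge)]
# and delta_J = n P_edge^2/(pi^2 m*), the current j = de/dP peaks at
# P_c = P_edge/2, and v_c = j(P_c)/n = P_edge/(pi m*), Eq. (eq:vctb).
n, P_edge, mstar = 1.0, 1.0, 5.0          # illustrative values
delta_J = n * P_edge ** 2 / (np.pi ** 2 * mstar)

P = np.linspace(0.0, P_edge, 100001)
j = delta_J * (np.pi / P_edge) * np.sin(np.pi * P / P_edge)   # j = de/dP

Pc = P[np.argmax(j)]
assert abs(Pc - P_edge / 2) < 1e-4                            # P_c = P_edge/2
assert abs(j.max() / n - P_edge / (np.pi * mstar)) < 1e-12    # Eq. (eq:vctb)
```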
We also calculate the critical velocity by using another method based on a
complete linear stability analysis for the GP energy functional
as in Refs.\ \onlinecite{wu2001,machholm03}, and \onlinecite{modugno}
(see also Refs.\ \onlinecite{pethick_smith} and \onlinecite{wu2003}). We have checked that the results
agree with those obtained from Eq.~(\ref{eq:vcrit-hydro})
based on the hydrodynamic analysis
to within 1\% over the whole range of $gn/E_{\rm R}$ and $V_0$
considered in the present work.
This confirms that the energetic
instability in the periodic potential is triggered by long-wavelength
excitations, because the excitation energy of the sound mode is the smallest
in this limit.
For superfluid unitary Fermi gases, we plot $v_c$ for various
values of $E_{\rm F}/E_{\rm R}$ in Fig.~\ref{fig:vc}(b).
We observe qualitatively similar results compared to those for bosons
plotted in Fig.~\ref{fig:vc}(a).
For $E_{\rm F}/E_{\rm R}\alt 1$, the critical velocity $v_c$
shows a plateau at small $V_0$,
and it decreases from $\simeq c_s^{(0)}$ with increasing $V_0$.
For larger $E_{\rm F}/E_{\rm R}> 1$, $v_c$ approaches the LDA result
(thick solid red line).
In the region of large $V_0$ such that $V_0/\mu$ is sufficiently large,
$v_c$ is well described by the tight-binding expression in Eq.\ (\ref{eq:vctb})
plotted by thinner black solid lines in Fig.~\ref{fig:vc}(b).
Here, we use $m^*$ calculated from the BdG equations.
We also note that the pair-breaking excitations
are irrelevant to the energetic instability at unitarity
except for very low densities such that $E_{\rm F}/E_{\rm R}\ll 1$ and
small, but nonzero, $V_0$.
These excitations are important on the BCS side of the
BCS-BEC crossover, as discussed in Sec.\ 4.2.4.
Finally, we would like to point out that, in the LDA limit, while
the quantities discussed in Sec.\ \ref{sec:thermo}
approach their values in the uniform system,
the critical velocity of the energetic instability is most strongly affected
by the lattice.
Figure \ref{fig:vc} clearly shows that, as we increase the lattice height,
the critical velocity decreases from that in the uniform system most rapidly
in the LDA limit.
The main reason for this opposite tendency is that
the critical velocity is determined only around the potential maxima
while the quantities discussed in Sec.\ \ref{sec:thermo}
are determined by contributions from the whole region of the system.
\subsubsection*{4.2.4.\quad Along the BCS-BEC crossover\label{sec:vc_crossover}}
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{0}{
\resizebox{7.7cm}{!}
{\includegraphics{vc_s1s5_efer01_vf_new3.eps}}}
\caption{\label{fig:vc_bcsbec}(Color online)\quad
Critical velocity $v_c$ of the energetic instability
for $E_{\rm F}/E_{\rm R}=0.1$ with $s=1$ and $5$ in the BCS-BEC crossover.
Open circles and filled squares show the critical velocity
due to long-wavelength phonons and fermionic pair-breaking excitations,
respectively.
The horizontal dotted line represents the
value of the sound velocity
$c_s^{(0)}$ of a uniform system at unitarity,
$c_s^{(0)}/v_{\rm F}=(1+\beta)^{1/2}/\sqrt{3}\simeq 0.443$.
The red solid line is a guide for the eye.
}
\end{center}
\end{figure}
Here, we extend our discussion on the Landau critical velocity
to the BCS-BEC crossover region \cite{vc_crossover}.
As in the uniform systems, both long-wavelength phonon excitations
and pair-breaking excitations can be relevant to
the energetic instability, depending on the interaction parameter
$1/k_{\rm F}a_s$.
However, there is an additional effect due to the lattice:
when the lattice height is much larger than the Fermi energy,
the periodic potential can cause pairs of atoms to be strongly bound
even in the BCS region, so the pair-breaking excitations are suppressed
\cite{vc_crossover}.
In Fig.\ \ref{fig:vc_bcsbec}, we plot $v_c$
for superfluid Fermi gases in the BCS-BEC crossover
for different values of the lattice height with $s=1$ and $5$.
Here, we set $E_{\rm F}/E_{\rm R}=0.1$ as an example.
The open circles show $v_c$ of the energetic instability caused by
long-wavelength superfluid phonon excitations,
and the filled squares show that caused by pair-breaking excitations.
For a moderate lattice strength of $s=1$,
the result for $v_c$ is qualitatively the same as that of the uniform system.
On the BCS side, the smallest $v_c$ is given by the pair-breaking excitations
while, on the BEC side, $v_c$ is set by the long-wavelength phonon excitations,
and around unitarity $v_c$ shows a maximum.
For a larger value of $s=5$, however, the result is very different
from the uniform case.
Due to the lattice-induced binding, pair breaking is suppressed
so that it does not cause an energetic instability
even on the BCS side.
(Note that there is no filled square for $s=5$.
This means that we do not have a negative quasiparticle energy
for any value of $P$ in the whole Brillouin zone.
This point will be discussed later.)
Throughout the calculated region of $-1\le 1/k_{\rm F}a_s\le 1$,
$v_c$ is determined only by the long-wavelength phonon excitations,
and $v_c$ decreases monotonically with increasing $1/k_{\rm F}a_s$.
\begin{figure}[tbp]
\begin{center}\vspace{0.0cm}
\rotatebox{0}{
\resizebox{8.2cm}{!}
{\includegraphics{qp_energy_s5_kfainv-1_p0_rev.eps}}}
\caption{\label{fig:qp_energy2}(Color online)\quad
Lowest band of the quasiparticle energy spectrum $\epsilon_i$
for large lattice height with $s=5$ and $E_{\rm F}/E_{\rm R}=0.1$
(i.e., $E_{\rm F}/V_0=0.02$)
in the BCS region at $1/k_{\rm F}a_s=-1$.
Here, we show the first radial branch with
$k_\perp^2 \equiv k_x^2+k_y^2=0$, which always gives the smallest values
of $\epsilon_i$ in this case.
The inset shows the amplitude $|\Delta(z)|$ of the order parameter
at $P=0$. The horizontal dotted line shows the amplitude of the order
parameter for a uniform system at the same value of $1/k_{\rm F}a_s=-1$.
This figure is adapted from Ref.\ \onlinecite{vc_crossover}.
}
\end{center}
\end{figure}
The effect of the lattice-induced molecular formation can be clearly
seen in the quasiparticle energy spectrum $\epsilon_i$ and in the enhancement
of the order parameter $\Delta(z)$. In Fig.\ \ref{fig:qp_energy2},
we show $\epsilon_i$ and $|\Delta(z)|$
in the case of $s=5$, $E_{\rm F}/E_{\rm R}=0.1$, and $1/k_{\rm F}a_s=-1$.
The spectrum for $P=0$ shows a quadratic
dependence on $k_z$ with a positive curvature around $k_z=0$, and
there are no minima at $k_z\ne 0$.
Even though the figure represents
a case in the BCS region, the structure of $\epsilon_i$ is
consistent with the formation of bound pairs.
We can also see that the spectrum never becomes negative for
any values of $P$ in the first Brillouin zone.
In the inset of the
same figure, we show the amplitude $|\Delta(z)|$ of the order parameter
at $P=0$.
Here, we see a large enhancement of $|\Delta(z)|$ near $z=0$ compared to
the uniform system, which shows the formation of bosonic bound
molecules.
We also note that the minimum value of
$|\Delta(z)|$ at $z/d=\pm 1$ is smaller than, but still comparable
to, the value of $|\Delta|$ in the uniform case, suggesting that
the system is, indeed, in the superfluid phase.
\subsubsection*{4.2.5.\quad Experiments}
In closing this section, we briefly summarize experimental
studies on the stability of a superfluid.
Using cold atomic gases, the stability of the superfluid
and its critical velocity were first studied experimentally in Ref.\ \onlinecite{raman}
and further examined in Ref.\ \onlinecite{onofrio}.
These experiments used a large (diameter $\gg \xi$) and strong
(height $\gg \mu$) vibrating circular potential in a BEC.
However, it was concluded that what was observed
in these works was likely not the energetic instability but rather
the dynamical nucleation of vortices by the vibrating potential \cite{onofrio}.
Recently, Ramanathan {\it et al.} performed a new experiment
on the stability of a superflow
with a different setup \cite{ramanathan}.
They used a BEC flowing in a toroidal trap
with a tunable weak link (width of the barrier along the flow direction
much larger than $\xi$).
In that experiment, they obtained a critical velocity
consistent with the energetic instability due to vortex excitations
\cite{feynman}.
For 2D Bose gases, the stability of superfluidity and
the critical velocity were also studied recently \cite{desbuquois}.
Regarding the superflow in optical lattices, its stability
was first experimentally studied in Ref.\ \onlinecite{burger}.
In that experiment, they used a cigar-shaped BEC that underwent
a center-of-mass oscillation in a harmonic trap
in the presence of a weak 1D optical lattice, and
they measured the critical velocity.
Although their original conclusion was that they
had measured the Landau critical velocity for the energetic instability,
further careful experimental \cite{desarlo} and
theoretical \cite{wu2001,modugno} follow-up studies
clarified that the instability observed in Ref.\ \onlinecite{burger}
was a dynamical instability.
In this follow-up experiment by De Sarlo {\it et al.} \cite{desarlo},
they employed an improved
setup, i.e., a 1D optical lattice moving at constant
and tunable velocities instead of an oscillating BEC in a static lattice.
With this new setup, they succeeded in observing
both energetic \cite{desarlo} and
dynamical \cite{desarlo,fallani} instabilities,
and obtained a very good agreement with theoretical predictions
\cite{desarlo,modugno,fallani}.
It is worthwhile to stress that this understanding was finally obtained through
a continuous effort over a few years by the same group.
Experimental study on the stability of superfluid Fermi gases
in optical lattices was carried out by Miller {\it et al.} \cite{miller}.
Similar to the experiment of Ref.\ \onlinecite{desarlo}, they also used
a 1D lattice moving at constant and tunable velocities.
A difference is that, instead of imposing a periodic potential
on the whole cloud, they produced a lattice potential
only in the central region of the cloud.
They measured the critical velocity of the energetic instability
in the BCS-BEC crossover and found that it was largest
around unitarity.
They also performed a systematic measurement of the critical velocity at unitarity
for various lattice heights. However, there is a significant
discrepancy with a theoretical prediction \cite{vc},
so further studies are needed.
\section{ENERGY BAND STRUCTURE \lasec{energy_band}}
In this section, we discuss how superfluidity can affect
the energy band structure of ultracold atomic gases in periodic optical potentials.
Starting with the noninteracting particles in a periodic potential,
we obtain the well-known sinusoidal energy band structure.
When the interparticle interaction is turned on in such a way that superfluidity appears,
we see a drastic change in the energy band structure.
For BECs, it has been pointed out that the interaction
can change the Bloch band structure, causing the appearance of
a loop structure called ``swallowtail'' in the energy dispersion \cite{wu_st,diakonov,seaman,danshita}.
This is due to the competition between the external periodic potential and the nonlinear mean-field interaction: the former favors a sinusoidal band structure while the latter tends to
make the density smoother and the energy dispersion quadratic. When the nonlinearity wins, the effect of the external potential is screened, and a swallowtail energy loop appears \cite{mueller}. This nonlinear effect requires the existence of an order parameter; consequently, the
emergence of swallowtails can be viewed as a peculiar manifestation of superfluidity in periodic potentials.
Qualitatively, one can argue in a similar way on the existence of the
swallowtail energy band structure in Fermi superfluids in optical
lattices; the competition between the external periodic potential
energy and the nonlinear interaction energy ($g |\sum_i u_i(\mathbf r)
v_i^*(\mathbf r)|^2$) determines the energy band structure. However,
the interaction energy in the Fermi gas is more involved, and the
unified view along the crossover from the BCS to the BEC states
is a nontrivial and interesting problem in itself. To answer the questions of
(1) whether or not swallowtails exist in Fermi superfluids and
(2) whether they exhibit unique features different from
those in bosons, we solve the BdG equations [see \refeq{BdG2}] of a
two-component unpolarized dilute Fermi gas subject to a
one-dimensional (1D) optical lattice \cite{swallowtail}.
\subsection*{5.1.\quad Swallowtail Energy Spectrum}
\begin{figure}[!tb,floatfix]
\centering
\resizebox{7.25cm}{!}
{\includegraphics{fig1_ep.eps}}
\resizebox{7.25cm}{!}
{\includegraphics{fig1_width.eps}}
\caption{(Color online) (a) Energy $E$ per particle as a function of the
quasi-momentum $P$ for various values of $1/k_{\rm F}a_s$, and
(b) half width of the swallowtails along the BCS-BEC crossover.
These results are obtained for $s=0.1$ and $E_{\rm F}/E_{\rm R}
= 2.5$. The quasi-momentum $P_{\rm edge}=\hbar q_{\rm B}/2$ fixes
the edge of the first Brillouin zone. The dotted
line in (b) is the
half width in a BEC obtained by solving the GP equation; it vanishes
at $1/k_{\rm F}a_s \simeq 10.6$.
This figure is taken from Ref.~\onlinecite{swallowtail}.} \fig{swtail}
\end{figure}
The energy per particle in the lowest Bloch band as a function
of the quasi-momentum $P$ for various values of $1/k_{\rm F}a_s$
is computed \cite{footnote_parameter_value}.
The results in \reffig{swtail}(a) show that the swallowtails appear above a
critical value of $1/k_{\rm F}a_s$ where the interaction energy is strong
enough to dominate the lattice potential. In \reffig{swtail}(b), the
half-width of the swallowtails from the BCS to the BEC side is shown.
It reaches a maximum near unitarity ($1/k_{\rm F}a_s=0$).
In the far BCS and BEC limits, the width vanishes because the
system is very weakly interacting and the band structure tends to
be sinusoidal.
When approaching unitarity from either side, the interaction energy
increases and can dominate over the periodic potential, which means that
the system behaves more like a translationally-invariant superfluid
and the band structure follows a quadratic dispersion terminating
at a maximum $P$ larger than $P_{\rm edge}$.
\begin{figure}[tb,floatfix]
\centering
\resizebox{8cm}{!}
{\includegraphics{qpspect_a.eps}}
\resizebox{8cm}{!}
{\includegraphics{qpspect_b.eps}}
\caption{(Color online) (a) Lowest three Bloch bands of the quasiparticle
energy spectrum at $k_{\perp}=0$ for $P=P_{\rm{edge}}$
and $1/k_{\rm F}a_s = -0.62$.
Thin black dashed lines labeled by $l$'s
show the approximate energy bands obtained from
Eq.~(\ref{band_approx}) by using $\mu \simeq 2.66 E_{\rm R}$
and $|\Delta|\simeq |\Delta(0)|\simeq 0.54 E_{\rm R}$.
(b) Bloch band of the quasiparticle energy spectrum around the
chemical potential for $P/P_{\rm{edge}}=0$ (black dotted), $0.5$
(green dashed), and $1$ (red solid) at $k_{\perp}=0$
and $1/k_{\rm F}a_s = -0.62$ in the case of $s=0.1$ and
$E_{\rm F}/E_{\rm R} =2.5$. Each horizontal line denotes the value of
the chemical potential for the corresponding value of $P$.
This figure is adapted from Ref.~\onlinecite{swallowtail}.
}
\fig{qp_spectrum}
\end{figure}
The emergence of swallowtails on the BCS side for $E_{\rm F}/E_{\rm R}\agt 1$ is
associated with peculiar structures of the quasiparticle energy spectrum
around the chemical potential.
In the presence of a superflow moving in the $z$ direction with wavevector $Q$ ($\equiv P/\hbar$),
the quasiparticle energies are given by the eigenvalues
in \refeq{BdG2}. Because the potential is shallow ($s \ll 1$), some qualitative
results can be obtained even when ignoring $V_{\text{ext}}(z)$ except for its
periodicity. With this assumption, we obtain
\begin{equation}
\epsilon_{\mathbf k} \! \approx \! \frac{(k_z \! + \! 2q_{\rm B} l)Q}{m}
\! + \! \sqrt{\left[ \! \frac{k_\perp^2 \! + \! (k_z \! + \! 2q_{\rm B} l)^2 \! + \! Q^2}{2m}
\! - \! \mu \right]^2 \!\!\! + \! |\Delta|^2} \ ,
\label{band_approx}
\end{equation}
with $l$ being integers for the band index. If $Q=0$, the $l=0$ band
has the energy spectrum $\sqrt{[(k_\perp^2+ k_z^2)/2m - \mu]^2+ |\Delta|^2}$,
which has a local maximum at $k_z=k_{\perp}=0$. When
$Q=P_{\rm edge}/\hbar$, the spectrum is tilted, and the local maximum moves
to $k_z\simeq q_{\rm B}/2$ provided $|\Delta| \ll E_{\rm F}$ (and
$E_{\rm F}/E_{\rm R}\agt 1$).
In the absence of the swallowtail, the full BdG calculation, indeed, gives
a local maximum at $k_z=q_{\rm B}/2$, and the quasiparticle spectrum is
symmetric about this point, which
reflects that the current is zero.
As $E_{\rm F}/E_{\rm R}$ increases, the band becomes flatter
as a function of $k_z$ and narrower in energy.
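As a quick numerical illustration of Eq.~(\ref{band_approx}) (a sketch, not part of the calculations in this article), one can evaluate the $l=0$ band in units with $\hbar=m=1$ and $q_{\rm B}=1$ (so $E_{\rm R}=1/2$), using the values $\mu \simeq 2.66\,E_{\rm R}$ and $|\Delta|\simeq 0.54\,E_{\rm R}$ quoted in the caption of \reffig{qp_spectrum}. The tilt of the spectrum and the shift of the local maximum from $k_z=0$ to $k_z\simeq q_{\rm B}/2$ at $Q=P_{\rm edge}/\hbar$ are then easy to check:

```python
import numpy as np

# Units: hbar = m = 1 and q_B = 1, so E_R = q_B^2/2 = 0.5.
# mu and |Delta| are taken from the figure caption for 1/kF a_s = -0.62.
qB, ER = 1.0, 0.5
mu, delta = 2.66 * ER, 0.54 * ER

def eps_band(kz, Q, l=0, kperp=0.0):
    """Approximate quasiparticle band of Eq. (band_approx), hbar = m = 1."""
    k = kz + 2.0 * qB * l
    xi = (kperp**2 + k**2 + Q**2) / 2.0 - mu
    return k * Q + np.sqrt(xi**2 + delta**2)

kz = np.linspace(-qB, qB, 4001)
kz0 = kz[np.argmax(eps_band(kz, 0.0))]       # Q = 0: maximum at kz = 0
kz1 = kz[np.argmax(eps_band(kz, qB / 2.0))]  # Q = P_edge/hbar: near qB/2
print(kz0, kz1)
```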
In \reffig{qp_spectrum}(a), we show the quasiparticle energy spectrum
at $k_{\perp}=0$ for $P=P_{\rm{edge}}$. When the swallowtail is on the
edge of appearing, the top of the narrow band just
touches the chemical potential $\mu$ [see the dotted ellipse in
\reffig{qp_spectrum}(a)]. Suppose $1/k_{\rm F}a_s$ is slightly larger
than the critical value so that the top of the band is slightly above
$\mu$. In this situation, a small change in the quasi-momentum $P$ causes a
change of $\mu$. In fact, when $P$ is increased
from $P=P_{\rm{edge}}$ to larger values, the band is tilted, and the
top of the band moves upwards; the chemical potential $\mu$ should also
increase to compensate for the loss of states available, as shown
in \reffig{qp_spectrum}(b).
This implies ${\partial}\mu/ {\partial} P> 0$. On the other hand, because the system is periodic, the
existence of a branch of stationary states with ${\partial}\mu/ {\partial} P> 0$ at
$P=P_{\rm{edge}}$ implies the existence of another symmetric branch with
${\partial}\mu/ {\partial} P< 0$ at the same point, thus suggesting the occurrence of a
swallowtail structure.
\begin{figure}[!tb,floatfix]
\centering
\resizebox{10.cm}{!}
{\includegraphics{fig4_kinv_rev.eps}}
\caption{ Incompressibility $\kappa^{-1}$ at $P=P_{\rm{edge}}$ around the
critical value of $1/k_{\rm F}a_s \approx -0.62$ where the swallowtail starts
to appear. The quantity $\kappa_0^{-1}$ is the incompressibility of the
homogeneous free Fermi gas of the same average density. In both panels, we
have used the values $s=0.1$ and $E_{\rm F}/E_{\rm R} =2.5$.
This figure is adapted from Ref.~\onlinecite{swallowtail}.
}
\fig{inv_compress}
\end{figure}
\subsection*{5.2.\quad Incompressibility}
A direct consequence of the existence of a narrow band in the
quasiparticle spectrum near the chemical potential is a strong reduction
of the incompressibility $\kappa^{-1} = n {\partial} \mu(n)/{\partial} n$ close to the
critical value of $1/k_{\rm F}a_s$ in the region where swallowtails start to appear
in the BCS side (see \reffig{inv_compress}). A dip
in $\kappa^{-1}$ occurs in the situation where the top of the narrow band
is just above $\mu$ for $P=P_{\rm edge}$
($1/k_{\rm F}a_s$ is slightly above the critical value).
An increase in the density $n$ has
little effect on $\mu$ in this case because the density of states is
large in this range of energy and the new particles can easily adjust
themselves near the top of the band by a small increase of $\mu$.
This implies that ${\partial} \mu(n)/{\partial} n$ is small and that the incompressibility has a
pronounced dip \cite{note_incompress}.
It is worth noting that on the BEC side, the appearance of the swallowtail
is not associated with any significant change in the incompressibility.
In fact, for a Bose gas with an average density $n_{b0}$ of bosons,
the exact solution of the GP equation gives $\kappa^{-1} = n_{b0}g$
near the critical conditions for the occurrence of swallowtails,
being a smooth and monotonic function of the interaction strength.
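In practice, $\kappa^{-1}=n\,{\partial}\mu(n)/{\partial} n$ can be estimated from tabulated $\mu(n)$ data by finite differences. A minimal sketch (illustrative, not the data of \reffig{inv_compress}), checked against the homogeneous free Fermi gas, for which $\mu\propto n^{2/3}$ gives $\kappa^{-1}=(2/3)\mu$ exactly:

```python
import numpy as np

def kappa_inv(n, mu):
    """Incompressibility kappa^{-1} = n * d(mu)/dn via finite differences."""
    return n * np.gradient(mu, n)

# Consistency check on the free Fermi gas, mu(n) = n^{2/3} (prefactor set to 1),
# where kappa^{-1} = (2/3) mu holds exactly.
n = np.linspace(0.5, 1.5, 201)
mu = n ** (2.0 / 3.0)
ki = kappa_inv(n, mu)
err = np.max(np.abs(ki[1:-1] - (2.0 / 3.0) * mu[1:-1]))
print(err)  # small discretization error on interior points
```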
\subsection*{5.3.\quad Profiles of Density and Pairing Fields}
Both the pairing field and the density exhibit interesting features
in the range of parameters where the swallowtails appear.
This is particularly evident at the Brillouin zone boundary, $P=P_{\rm{edge}}$.
The full profiles of $|\Delta(z)|$ and $n(z)$
along the lattice vector
($z$ direction) are shown in \reffig{profiles_supp}. In general, $n(z)$ and $|\Delta(z)|$ take maximum (minimum) values
where the external potential takes its minimum (maximum) values.
By increasing the interaction parameter $1/k_{\rm F}a_s$, we find that
the order parameter $|\Delta|$ at the maximum ($z=\pm d/2$) of the lattice
potential exhibits a transition from zero to nonzero values at the
critical value of $1/k_{\rm F}a_s$ at which the swallowtail appears.
Note that here we plot the absolute value of $\Delta$; the order
parameter $\Delta$ behaves smoothly and changes sign.
\begin{figure}[!b,floatfix]
\centering
\resizebox{8cm}{!}
{\includegraphics{profile_delta.eps}}
\resizebox{8cm}{!}
{\includegraphics{profile_n_z.eps}}
\caption{Profiles of the pairing field $|\Delta(z)|$ and the density $n(z)$ at
$1/k_{\rm F}a_s= -0.8$ (blue dotted), $-0.6$ (green dashed), and $-0.4$
(red solid) for $P=P_{\rm{edge}}$ in the case of $s=0.1$ and
$E_{\rm F}/E_{\rm R} =2.5$. The swallowtail starts to appear
at a critical value of $1/k_{\rm F}a_s \approx -0.62$.
This figure is taken from Ref.~\onlinecite{swallowtail}.
}
\fig{profiles_supp}
\end{figure}
\begin{figure}[!tb,floatfix]
\centering
\rotatebox{270}{
\resizebox{!}{16cm}
{\includegraphics{profile_minmax.eps}}
}
\caption{(Color online) Profiles of (a) the pairing field $|\Delta(z)|$
and (b) the density $n(z)$ for changing $1/k_{\rm F}a_s$ for
$P=P_{\rm{edge}}$ in the case of $s=0.1$ and $E_{\rm F}/E_{\rm R}
=2.5$. The values of $|\Delta(z)|$ and $n(z)$ at the minimum ($z=0$,
blue $\square$) and at the maximum ($z=\pm d/2$, red $\times$) of
the lattice potential are shown. The vertical dotted lines show the
critical value of $1/k_{\rm F}a_s$ above which the swallowtail
exists. The dotted curve in (a) shows $|\Delta|$ in a uniform system.
This figure is taken from Ref.~\onlinecite{swallowtail}.
}
\fig{profiles}
\end{figure}
In \reffig{profiles}, we show the magnitude of the pairing field $|\Delta(z)|$ and
the density $n(z)$ calculated at the minimum ($z=0$)
and at the maximum ($z=\pm d/2$) of the lattice potential.
The figure shows that $|\Delta(d/2)|$ remains zero in the BCS region
until the swallowtail appears at $1/k_{\rm F}a_s \approx -0.62$.
Then, it increases abruptly to values comparable to
$|\Delta(0)|$, which means that the pairing field becomes almost uniform
at $P=P_{\rm{edge}}$ in the presence of swallowtails. As regards the density,
we find that the amplitude of the density variation, $n(0)-n(d/2)$,
exhibits a pronounced maximum near the critical value of $1/k_{\rm F}a_s$.
In contrast, on the BEC side,
the order parameter and the density are smooth monotonic
functions of the interaction strength even in the region where the swallowtail
appears. At $P=P_{\rm{edge}}$, the solution of the GP equation for bosonic
dimers gives the densities $n_b(0)= n_{b0}(1+V_0/2n_{b0} g_b)$ and
$n_b(d/2)=n_{b0}(1-V_0/2n_{b0} g_b)$, with $V_0/2n_{b0}g_b = (3\pi/4)
(sE_{\rm R}/E_{\rm F})(1/k_{\rm F}a_s)$, where
$g_b \equiv 4\pi \hbar^2 a_b/m_b$, and $a_b$ and $m_b$ are the scattering length
and the mass of bosonic dimers, respectively \cite{wu_st,bronski,note-nb}.
Near the critical value of $1/k_{\rm F} a_s$, unlike on the BCS side, the
nonuniformity simply decreases monotonically, even after the swallowtail appears.
The local density at $z=d/2$ is zero until the swallowtail appears on the BEC
side while it is nonzero on the BCS side irrespective of the existence of the swallowtail.
The qualitative behavior of $|\Delta(z)|$ around the critical point of $1/k_{\rm F}a_s$
is similar to that of $n_b(z)$
because $n_b(\mathbf r) = (m^2 a_s/8\pi)|\Delta(\mathbf r)|^2$ \cite{bdgtogp} in the BEC limit.
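The quoted GP formulas can be checked directly. A small sketch (illustrative) evaluating $V_0/2n_{b0}g_b$ for the parameters $s=0.1$ and $E_{\rm F}/E_{\rm R}=2.5$ used here; the condition $n_b(d/2)=0$, i.e., $V_0/2n_{b0}g_b=1$, reproduces the value $1/k_{\rm F}a_s\simeq 10.6$ at which the GP half-width of the swallowtail vanishes [cf.\ \reffig{swtail}(b)]:

```python
import numpy as np

s, EF_over_ER = 0.1, 2.5   # parameters used in the figures of this section

def v0_over_2ng(inv_kfa):
    """V0/(2 n_b0 g_b) = (3 pi/4) (s E_R/E_F) (1/kF a_s), as quoted in the text."""
    return (3.0 * np.pi / 4.0) * (s / EF_over_ER) * inv_kfa

# Densities at the potential minimum/maximum, in units of n_b0:
r = v0_over_2ng(4.0)
n_max_pot, n_min_pot = 1.0 - r, 1.0 + r   # n_b(d/2) and n_b(0)

# n_b(d/2) vanishes when V0/(2 n_b0 g_b) = 1:
inv_kfa_crit = 1.0 / v0_over_2ng(1.0)
print(round(inv_kfa_crit, 1))  # ~10.6, where the GP swallowtail half-width vanishes
```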
\section{QUANTUM PHASES OF COLD ATOMIC GASES IN OPTICAL LATTICES \lasec{others}}
So far, we have discussed some superfluid features of cold atomic
gases in an optical lattice, mainly focusing on our own works. In this
section, we discuss some important topics regarding the various quantum
phases that cold atomic gases in optical lattices can exhibit or simulate.
Throughout this section, we assume periodic potentials strong enough
that next-nearest-neighbor hopping can be ignored, and small average
numbers of particles per lattice site, so that the system can be
described by the Hubbard lattice model.
\subsection*{6.1.\quad Quantum Phase Transition from a Superfluid to a Mott Insulator}
At a temperature of absolute zero, cold atomic gases in optical
lattices can undergo a quantum phase transition from superfluid to
Mott insulator phases as the interaction strength between atoms is
tuned from the weakly- to the strongly-interacting region \cite{Jaksch}. The
quantum phase transition of an interacting boson gas in a periodic
lattice potential can be captured by the following Bose-Hubbard
Hamiltonian:
\begin{eqnarray}
H = -J \sum_{\langle i,j \rangle} (\hat{b}_i^{\dagger} \hat{b}_j +\text{h.c.})+ \frac{U}{2} \sum_{i} \hat{n}_i(\hat{n}_i-1) \, , \nonumber
\end{eqnarray}
where $\hat{b}_i$ and $\hat{b}_i^{\dagger}$ are annihilation and creation
operators of a bosonic atom on the $i$-th lattice site and $\hat{n}_i
= \hat{b}_i^{\dagger}\hat{b}_i$ is the atomic number operator on the
$i$-th site. The first term is the kinetic energy term whose strength
is characterized by the hopping matrix element $J$ between adjacent
sites $\langle i,j \rangle$, and $U(>0)$ in the second term is the
strength of the short-range repulsive interaction between bosonic
atoms.
In the limit where the kinetic energy dominates,
the many-body ground state of $N$ atoms on $M$ lattice sites is obtained by putting all atoms into the lowest-energy delocalized Bloch state (quasi-momentum ${\mathbf k}=0$):
\begin{eqnarray}
\ket{ \Psi_{\text{SF}} }_{U/J=0} \propto ( \hat{b}_{ {\mathbf k}=0}^{\dagger} )^N \ket{0} \propto \left ( \sum_{i=1}^{M} \hat{b}_i^{\dagger} \right )^N \ket{0} \, ,
\end{eqnarray}
where $\ket{0}$ is the empty lattice. This state has perfect phase
correlation between atoms on different sites while the number of atoms
on each site is not fixed. This superfluid phase has gapless phonon
excitations.
In the limit where the interactions dominate (so-called, ``atomic
limit''), the fluctuations in the local atom number become
energetically unfavorable, and the ground state is made up of localized
atomic wavefunctions with a fixed number of atoms per lattice
site. The many-body ground state with a commensurate filling of $n$
atoms per lattice site is given by
\begin{eqnarray}
\ket{ \Psi_{\text{MI}} }_{J/U=0} \propto \prod_{i=1}^{M} (\hat{b}_i^{\dagger})^n \ket{0}\, .
\end{eqnarray}
There is no phase correlation between different sites because the
energy is independent of the phases of the wavefunctions on each
site. This Mott insulator state, unlike the superfluid state, cannot
be described by using a macroscopic wavefunction. The lowest excited state
can be obtained by moving one atom from one site to another, which
gives an energy gap of $\Delta = (U/2) [(n+1)^2+(n-1)^2-2 n^2] = U$.
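The two limiting ground states above, and the suppression of on-site number fluctuations between them, can be illustrated by a minimal exact diagonalization of the Bose-Hubbard Hamiltonian (an illustrative sketch, not from the experiments discussed below; the system size and parameter values are arbitrary):

```python
import itertools
import numpy as np

def basis(M, N):
    """Occupation-number states (n_1, ..., n_M) with N bosons in total."""
    return [s for s in itertools.product(range(N + 1), repeat=M) if sum(s) == N]

def bose_hubbard(M, N, J, U):
    """Bose-Hubbard Hamiltonian on a periodic M-site chain (M >= 3)."""
    states = basis(M, N)
    idx = {s: a for a, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        H[a, a] = 0.5 * U * sum(n * (n - 1) for n in s)   # interaction term
        for i in range(M):                                # hopping on each link
            j = (i + 1) % M
            for src, dst in ((i, j), (j, i)):             # both hop directions
                if s[src] > 0:
                    t = list(s); t[src] -= 1; t[dst] += 1
                    H[idx[tuple(t)], a] += -J * np.sqrt(s[src] * (s[dst] + 1))
    return H, states

def onsite_variance(M, N, J, U):
    """Variance of the atom number on one site in the ground state."""
    H, states = bose_hubbard(M, N, J, U)
    g = np.linalg.eigh(H)[1][:, 0]
    n0 = np.array([s[0] for s in states], dtype=float)
    return g @ (n0**2 * g) - (g @ (n0 * g))**2

v_sf = onsite_variance(3, 3, 1.0, 0.1)   # weak U/J: large number fluctuations
v_mi = onsite_variance(3, 3, 1.0, 50.0)  # atomic limit: fluctuations suppressed
print(v_sf, v_mi)
```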
As the ratio of the interaction term to the tunneling term increases
($U/J$ can be controlled by changing the depth $V_0$ of the optical
lattice even without using the Feshbach resonances), the system will
undergo a quantum phase transition from a superfluid state to a
Mott insulator state accompanying the opening of the energy gap in the
excitation spectrum. Greiner {\it et al.~{}} realized experimentally a
quantum phase transition from a Bose-Einstein condensate of
$^{87}\text{Rb}$ atoms with weak repulsive interactions to a Mott
insulator in a three-dimensional optical lattice potential
\cite{Greiner_SF_Mott}. Notably, they could induce reversible changes
between these two ground states by changing the strength $V_0$ of the
optical lattice. The superfluid-to-Mott-insulator transitions were
also achieved in one- and two-dimensional cold atomic Bose gases
\cite{Mott_dim}.
The Mott insulator phases of atomic Fermi gases with {\it repulsive}
interactions on a three-dimensional optical lattice have been
realized, and the entrance into the Mott insulating states was observed
by verifying vanishing compressibility and by measuring the
suppression of doubly-occupied sites
\cite{Jordens_Mott,Schneider_Mott}.
\subsection*{6.2.\quad Quantum Phase Transition from a Paramagnet to an Antiferromagnet}
Sachdev {\it et al.~{}} showed that the one-dimensional Mott insulator of
spinless bosons in a tilted optical lattice can be mapped onto a
quantum Ising chain \cite{Sachdev2002}. The Bose-Hubbard Hamiltonian
for a tilted optical lattice takes the form
\begin{equation}
H = - J \sum_{i}( \hat{b}_{i} \hat{b}_{i+1}^{\dagger}+ \hat{b}_{i}^{\dagger} \hat{b}_{i+1} )+ \frac{U}{2} \sum_{i} \hat{n}_i(\hat{n}_i-1) - E \sum_{i} i\hat{n}_i \ ,\nonumber
\end{equation}
where $E$ is the lattice potential gradient per lattice spacing.
For a tilt near $E=U$, the energy cost of moving an atom to its neighbor
(from site $i$ to site $i+1$) is zero. If we start with a Mott
insulator with a single atom per site, an atom can resonantly tunnel
into the neighboring site to produce a dipole state (at the link)
consisting of a quasihole-quasiparticle pair on nearest neighbor
sites. Only one dipole can be created per link, and neighboring links
cannot support dipoles together. This nearest-neighbor constraint is
the source of the effective dipole-dipole interaction that results in
a density wave ordering. If a dipole creation operator
$\hat{d}_i^{\dagger} = \hat{b}_i \hat{b}_{i+1}^{\dagger}/\sqrt{2}$ is defined,
the Bose-Hubbard Hamiltonian is mapped onto the dipole Hamiltonian:
\begin{eqnarray}
H = -\sqrt{2}J \sum_{i} (\hat{d}_i^{\dagger} + \hat{d}_i) + (U-E) \sum_{i} \hat{d}_i^{\dagger} \hat{d}_i ,\nonumber
\end{eqnarray}
with the constraint $\hat{d}_i^{\dagger} \hat{d}_i \le 1$ and $\hat{d}_{i}^{\dagger} \hat{d}_{i}\hat{d}_{i+1}^{\dagger} \hat{d}_{i+1} = 0$.
If the dipole present/absent link is identified with a pseudospin up/down, $\hat{S}_{i}^{z} = \hat{d}_{i}^{\dagger} \hat{d}_{i} - 1/2 $, the pseudospin-1/2 Hamiltonian takes the form of a quantum Ising chain:
\begin{eqnarray}
H &=& J_S \sum_{i} \hat{S}_{i}^{z} \hat{S}_{i+1}^{z} -2\sqrt{2}J \sum_{i} \hat{S}_i^{x} +(J_S - D)\sum_{i} \hat{S}_i^{z} \nonumber \\
&=& J_S \sum_{i} ( \hat{S}_{i}^{z} \hat{S}_{i+1}^{z} - h_x \hat{S}_i^{x} + h_z \hat{S}_i^{z} )\, , \nonumber
\end{eqnarray}
where $D= E - U$, and the term $\sum_{i} J_S(\hat{S}_{i}^{z} +
1/2)(\hat{S}_{i+1}^{z} + 1/2)$, with $J_S \to \infty$, is added to the
Hamiltonian to implement the constraint $\hat{d}_{i}^{\dagger}
\hat{d}_{i}\hat{d}_{i+1}^{\dagger} \hat{d}_{i+1} = 0$. The
dimensionless transverse and longitudinal fields are defined as $h_x =
2^{3/2} J/J_S$ and $h_z = 1-D/J_S$, respectively.
Thus, a quantum phase transition from a paramagnetic phase ($D\lesssim 0$)
to an antiferromagnetic phase ($D\gtrsim 0$) can be studied by changing the
lattice potential gradient $E$ between adjacent sites. With
$^{87}\text{Rb}$ atoms, a quantum simulation of antiferromagnetic spin
chains in an optical lattice was done, and a phase transition to the
antiferromagnetic phase from the paramagnetic phase was observed
\cite{Ising_chain}.
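The two phases can also be illustrated by diagonalizing the dipole Hamiltonian directly in the constrained link basis (an illustrative sketch; the chain length and the values of $D/J$ are arbitrary). For $D<0$, dipoles are energetically costly and the ground state is dilute (paramagnet), while for $D>0$ the ground state approaches the maximal non-adjacent dipole filling (antiferromagnetic density wave):

```python
import itertools
import numpy as np

def dipole_states(L):
    """Dipole occupations d_i in {0,1} on L links, no two adjacent dipoles."""
    return [s for s in itertools.product((0, 1), repeat=L)
            if all(not (s[i] and s[i + 1]) for i in range(L - 1))]

def dipole_hamiltonian(L, J, D):
    """H = -sqrt(2) J sum_i (d_i^dag + d_i) - D sum_i n_i, with D = E - U,
    restricted to the constrained subspace."""
    states = dipole_states(L)
    idx = {s: a for a, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        H[a, a] = -D * sum(s)                  # (U - E) sum_i n_i
        for i in range(L):
            t = list(s); t[i] ^= 1             # d_i^dag + d_i flips link i
            t = tuple(t)
            if t in idx:                       # creation respects the constraint
                H[idx[t], a] += -np.sqrt(2.0) * J
    return H, states

def mean_dipole_number(L, J, D):
    H, states = dipole_hamiltonian(L, J, D)
    g = np.linalg.eigh(H)[1][:, 0]
    n = np.array([sum(s) for s in states], dtype=float)
    return g @ (n * g)

n_pm = mean_dipole_number(8, 1.0, -8.0)   # D < 0: few dipoles (paramagnet)
n_afm = mean_dipole_number(8, 1.0, +8.0)  # D > 0: near-maximal density wave
print(n_pm, n_afm)
```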
\section{SUMMARY AND OUTLOOK}
\lasec{summary}
In this article, we have focused on some superfluid properties of
cold atomic gases in an optical lattice periodic along one spatial
direction: basic macroscopic and static properties, the stability of the
superflow, and peculiar energy band structures. As a complement, some
phases other than the superfluid phase have been discussed in the last
section. Considering the rapid growth and interdisciplinary nature of
the research on cold atomic gases in optical lattices, it is
practically impossible to cover all perspectives.
We conclude by mentioning a few of them.
The high controllability of
the lattice potential opens many possibilities: a disordered optical
lattice can be constructed to study the problem of the Anderson
localization of matter waves and the resulting phases
\cite{disorder}. Time modulation of the optical lattice has been shown
to tune the magnitude and the sign of the tunnel coupling in the Hubbard
model, which allows various phases to be studied
\cite{time_modulation}. Cold atomic gases in optical lattices may
uncover many exotic phases that are still under debate or that even lack
solid-state analogs.
Due to their long characteristic time scales and large characteristic length scales,
cold atomic gases are good playgrounds for the experimental observation
and control of their dynamics. Particularly, a sudden quench can be
realized experimentally. The nonequilibrium dynamics after sudden
quenches can be studied with high precision in experiments to
discriminate between candidate theories. Long-standing problems,
such as thermalization and its connection with non-integrability and/or
quantum chaos, are actively pursued topics in this direction
\cite{non-equilibrium}.
The charge neutrality of atoms seems to limit the use of this
system as a quantum simulator. However, the internal states of an atom,
together with atom-laser interactions, can be exploited for the atom to
gain geometric Berry's phases, which amounts to generating artificial
gauge fields interacting with these charge-neutral particles
\cite{artificial_gauge}. In this way, interactions with
electromagnetic fields, spin-orbit coupling, and even non-Abelian
gauge fields can be emulated to open the door to the study of the physics of
the quantum Hall effects, topological superconductors/insulators, and high
energy physics by using cold atoms in the future.
Cold atomic gases provide a promising platform for controlling
dissipation and for engineering the Hamiltonian, e.g., by
controlling the coupling with a subsystem acting as a reservoir, by
using external fields that induce losses of trapped atoms, etc. This
possibility will allow us to use cold atomic gases as quantum
simulators of open systems. In addition, the controlled dissipation
will offer an opportunity to study the quantum dynamics driven by
dissipation and its steady states, and to study the non-equilibrium phase transitions
among the steady states determined by the competition between the
coherent and the dissipative dynamics. Furthermore, controlling
the dissipation will pave the way to the design of Liouvillians and
to dissipation-driven state preparation (e.g., Ref.~\onlinecite{dissipation}).
Last, but not least, cold atomic gases in optical
lattices are also useful for precision measurements;
the ``optical lattice clock'' consists of millions
of atomic clocks trapped in an optical lattice and working in parallel.
The large number of simultaneously-interrogated atoms greatly improves the
stability of the clock, and state-of-the-art optical lattice clocks
outperform the primary frequency standard of Cs clocks
(Ref.\ \onlinecite{optlatclock} and references therein).
Bloch oscillations of cold atomic gases in optical
lattices offer a promising way of measuring forces
at a spatial resolution of a few micrometers
(e.g., Ref.\ \onlinecite{bloch_osc}).
Precision measurement devices/techniques will enable high-precision tests of time and space variations of the fundamental constants,
the weak equivalence principle, and Newton's gravitational law at
short distances.
\begin{acknowledgments}
We acknowledge Mauro Antezza, Franco Dalfovo, Elisabetta Furlan,
Giuliano Orso, Francesco Piazza, Lev P. Pitaevskii, and Sandro Stringari
for collaborations.
This work was supported in part by the Max Planck Society, by the Korean
Ministry of Education, Science and Technology, by Gyeongsangbuk-Do,
by Pohang City [support of the Junior Research Group (JRG) at
the Asia Pacific Center for Theoretical Physics (APCTP)], and by Basic Science
Research Program through the National Research Foundation of Korea
(NRF) funded by the Ministry of Education, Science and Technology
(No. 2012R1A1A2008028).
Calculations were performed by the RIKEN Integrated Cluster of
Clusters (RICC) system, by WIGLAF at the University of Trento,
and by BEN at ECT*.
\end{acknowledgments}
\section{Introduction}
{\bf Spectrum, pseudospectrum and lower norm.}
Given a bounded linear operator $A$ on a Banach space $X$, we denote its {\sl spectrum} and {\sl pseudospectra} \cite{TrefEmb}, respectively, by
\[
\spec A\ :=\ \{\lambda\in\C: A-\lambda I \text{ is not invertible}\}
\]
and
\begin{equation} \label{eq:speps}
\textstyle
\speps A\ :=\ \{\lambda\in\C: \|(A-\lambda I)^{-1}\|>\frac 1\eps \},\qquad \eps>0,
\end{equation}
where we identify $\|B^{-1}\|:=\infty>\frac 1\eps$ if $B$ is not invertible, so that $\spec A\subseteq \speps A$ for all $\eps>0$.
A fairly convenient access to the norm of the inverse is given by the so-called {\sl lower norm}, the number
\begin{equation}\label{eq:nu}
\nu(A)\ :=\ \inf_{\|x\|=1}\|Ax\|.
\end{equation}
Indeed, putting $\mu(A):=\min\{\nu(A),\,\nu(A^*)\}$, we have
\begin{equation}\label{eq:invmu}
\|A^{-1}\|\ \ =\ 1/\mu(A),
\end{equation}
where $A^*$ is the adjoint on the dual space $X^*$ and equation \eqref{eq:invmu} takes the form $\infty=1/0$ if and only if $A$ is not invertible.
One big advantage of this approach is that, in case $X=\ell^p(\Z^d,Y)$ with $p\in [1,\infty]$, $d\in\N$ and a Banach space $Y$, $\nu(A)$ can be approximated by the same infimum \eqref{eq:nu} with $x\in X$ restricted to elements with finite support of given diameter $D$. We can even quantify the approximation error against $D$, see \cite{CW.Heng.ML:UpperBounds} and \cite{LiSei:BigQuest} (as well as \cite{HagLiSei} for a corresponding result on the norm).
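As a toy illustration of $\nu$ (our own finite-dimensional sketch, not the quantified results of the papers cited above), one can approximate the infimum in \eqref{eq:nu} for a $2\times 2$ real matrix on $\ell^2$ by sampling the unit circle, and compare with the smallest singular value, which equals $\nu(A)$ in the Hilbert-space case:

```python
import math

def nu_grid(a, b, c, d, steps=100000):
    """Approximate nu(A) = inf_{||x||=1} ||Ax|| for the 2x2 real matrix
    A = [[a, b], [c, d]] on l^2 by sampling the unit circle."""
    best = float("inf")
    for k in range(steps):
        t = 2.0 * math.pi * k / steps
        x, y = math.cos(t), math.sin(t)
        best = min(best, math.hypot(a * x + b * y, c * x + d * y))
    return best

def sigma_min(a, b, c, d):
    """Smallest singular value of the same matrix, in closed form;
    on l^2 this equals nu(A), and ||A^{-1}|| = 1/sigma_min."""
    t = a * a + b * b + c * c + d * d    # trace of A^T A
    det2 = (a * d - b * c) ** 2          # det of A^T A
    return math.sqrt((t - math.sqrt(t * t - 4.0 * det2)) / 2.0)
```

Here `nu_grid(1, 2, 0, 1)` agrees with `sigma_min(1, 2, 0, 1)` to grid accuracy; the matrix entries are arbitrary.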
By means of \eqref{eq:invmu}, we can rewrite spectrum and pseudospectra as follows:
\[
\spec A\ =\ \{\lambda\in\C: \mu(A-\lambda I)=0\}
\]
and
\[
\speps A\ =\ \{\lambda\in\C: \mu(A-\lambda I)<\eps \},\qquad \eps>0.
\]
In other words, $\spec A$ is the level set of the function $f:\C\to [0,\infty)$ with
\begin{equation}\label{eq:f}
f(\lambda)\ :=\ \mu(A-\lambda I)
\end{equation}
for the level zero, and $\speps A$ is the sublevel set of $f$ for the level $\eps>0$.
{\bf Sublevel sets.}
For a function $g:\C\to [0,\infty)$ and $\eps>0$, let
\[
\sub_\eps(g) := \{\lambda\in\C : g(\lambda)<\eps\}
\]
denote the {\sl sublevel set of $g$ for the level $\eps$}.
In general, pointwise convergence $g_n\to g$ of functions $\C\to [0,\infty)$ need not coincide with Hausdorff convergence of their sublevel sets:
\begin{example} \label{ex:1}
Suppose we have $g$ and $g_n$ such that a) $g_n\to g$ as well as b) $\sub_\eps(g_n)\ \Hto\ \sub_\eps(g)$ hold for all $\eps>0$. Increasing $g(\lambda)$ to a certain level $\eps>0$ at a point $\lambda$, where $g$ was continuous and below $\eps$ before, invalidates a), while it does not affect $\closn(\sub_\eps(g))$ and hence b).
\end{example}
So let us look at continuous examples from here on.
\begin{example} \label{ex:2}
For $g_n(\lambda):=\frac{|\lambda|}n\to 0=:g(\lambda)$, the Hausdorff distance of
the sublevel sets
\[
\sub_\eps(g_n)=n\eps\D
\qquad\text{and}\qquad
\sub_\eps(g)=\mathbb C
\]
remains infinite, where $\D$ denotes the open unit disk in $\C$.
\end{example}
Of course, this problem was due to the unboundedness of $\sub_\eps(g)$.
So let us further focus on functions that go to infinity at infinity, so that all
sublevel sets are bounded.
\begin{example}{\bf (locally constant)\ } \label{ex:3}
Let $g(\lambda):=h(|\lambda|)$
and $g_n(\lambda):=h_n(|\lambda|)$ for $n\in\N$, where
\[
\textstyle
h(x):=\max\{\min\{|x|,1\},|x|-1\},\qquad
h_1(x):= \frac 14 x^2,\qquad
h_n:=h+\frac 1n(h_1-h)\to h
\]
for $x\in\R$ and $n\in\N$.
Then, unlike any $g_n$, $g$ is locally constant in $2\D\setminus\D$. Consequently,
\[
\sub_1(g_n)\equiv 2\D\ \not\!\!\!\Hto\ \D=\sub_1(g)
\qquad\text{but}\qquad g_n\to g.
\vspace{-18pt}
\]
~\hfill \qedhere
\end{example}
\begin{example} {\bf (increasingly oscillating)\ } \label{ex:5}
Let $g_n(\lambda):=h_n(|\lambda|)$ for $n\in\N$, where
\[
h_n(x):=\left\{
\begin{array}{ll}
|\sin(n\pi x)|,&x\in[0,1],\\
x-1,&x>1.
\end{array}
\right.
\]
Then $h_n(x)=0<\eps$ for all $\eps>0$ and all $x\in \frac 1n\Z\cap [0,1]$, while
\[
\textstyle
\frac 1n\Z\cap [0,1]\ \Hto\ [0,1]
\quad\text{as}\quad n\to\infty.
\]
It follows that $\sub_\eps(g_n)\Hto(1+\eps)\D$ for all $\eps>0$, while $g_n$ does not converge pointwise at all.
\end{example}
{\bf The result.}
For a sequence of bounded operators $A_n$ on $X$ and their corresponding functions $f_n:\C\to [0,\infty)$ with
\begin{equation}\label{eq:fn}
f_n(\lambda)\ :=\ \mu(A_n-\lambda I),\qquad n\in\N,
\end{equation}
we show equivalence of pointwise convergence $f_n\to f$ and Hausdorff convergence of their sublevel sets, i.e.~of the corresponding pseudospectra,
\[
f_n\to f
\qquad\iff\qquad
\speps A_n\ \Hto\ \speps A,\quad\forall\eps>0.
\]
This result is not surprising (and similar arguments have been used e.g.~in \cite{Colbrook:PE} in a more specific situation) but there are some little details that deserve to be written down as this separate note.
\section{Lipschitz continuity and non-constancy of $\mu$}
Our functions $\nu$ and $\mu$, and hence $f$ and $f_n$, have two properties that rule out effects as in Examples \ref{ex:1} -- \ref{ex:5}: Lipschitz continuity and the fact that their level sets have no interior points, i.e.~$\mu$ is not constant on any open set.
The first property is straightforward but the latter is a very nontrivial subject \cite{Globevnik,Shargorodsky08,Shargorodsky09,ShargoSkarin,DaviesShargo}, and it actually limits the choice of our Banach space $X$ as shown in Lemma \ref{lem:nonconst}.
\begin{lemma} \label{lem:Lipschitz}
For all bounded operators $B,C$ on $X$, one has
\[
|\nu(B)-\nu(C)|\ \le\ \|B-C\|,
\]
so that also $\mu(B)=\min\{\nu(B),\nu(B^*)\}$ is Lipschitz continuous with Lipschitz constant $1$.\\
The same follows for the functions $f$ from \eqref{eq:f} and $f_n$ with $n\in\N$ from \eqref{eq:fn}.
\end{lemma}
This result is absolutely standard but we give the (short) proof, for the reader's convenience:
\begin{proof}
For all $x\in X$ with $\|x\|=1$, one has
\[
\|B-C\|\ \ge\ \|Bx-Cx\|\ \ge\ \|Bx\|-\|Cx\|\ \ge\ \nu(B)-\|Cx\|.
\]
Now pass to the infimum in $\|Cx\|$ to get $\|B-C\|\ge\nu(B)-\nu(C)$. Finally swap $B$ and $C$.
\end{proof}
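The lemma can be sanity-checked numerically on diagonal operators, for which $\nu$ and the operator norm on $\ell^2$ are simply the smallest and largest moduli of the entries (a hypothetical check, not part of the argument):

```python
import random

def nu_diag(d):
    """nu of a diagonal operator on l^2: the smallest |entry|."""
    return min(abs(x) for x in d)

def norm_diag(d):
    """Operator norm of a diagonal operator on l^2: the largest |entry|."""
    return max(abs(x) for x in d)

# Randomized check of |nu(B) - nu(C)| <= ||B - C|| on diagonal operators.
random.seed(0)
for _ in range(1000):
    B = [random.uniform(-5, 5) for _ in range(8)]
    C = [random.uniform(-5, 5) for _ in range(8)]
    diff = [b - c for b, c in zip(B, C)]
    assert abs(nu_diag(B) - nu_diag(C)) <= norm_diag(diff) + 1e-12
```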
It will become crucial to understand when the resolvent norm of a bounded
operator cannot be constant on an open subset. This is a surprisingly rich and
deep problem. As it turns out it is connected to a geometric property,
the complex uniform convexity, of the underlying Banach space (see \cite[Definition
2.4 (ii)]{Shargorodsky08}).
\begin{lemma}[{Globevnik \cite{Globevnik}, Shargorodsky et al. \cite{Shargorodsky08,Shargorodsky09,ShargoSkarin,DaviesShargo}}] \label{lem:nonconst}
~\\Let $X$ be a Banach space which satisfies at least one of the following properties,\\[-8mm]
\begin{enumerate}[label=(\alph*)] \itemsep-1mm
\item $\dim(X) < \infty$,
\item $X$ is complex uniform convex,
\item its dual, $X^*$, is complex uniform convex.
\end{enumerate}
For example, every Hilbert space is of this kind, and every space $X=\ell^p(\Z^d,Y)$ with $p\in [1,\infty]$ and $d\in\N$ falls in this category as soon as $Y$ does \cite{Day}. Then, for every bounded operator $A$ on $X$, the resolvent norm,
\[
\lambda\ \mapsto\ \|(A-\lambda I)^{-1}\|=1/\mu(A - \lambda I),
\]
cannot be locally constant on any open set in $\C$, and, consequently,
\[
\forall \varepsilon > 0: \quad \closn(\speps A) = \{ \lambda \in
\mathbb{C} : \mu (A -\lambda I) \leq \varepsilon \}.
\]
\end{lemma}
\section{Set sequences and Hausdorff convergence}
Let $(S_n)$ be a sequence of bounded sets in $\mathbb C$
and recall the following notations (e.g.~\cite[\S 3.1.2]{HaRoSi2}):
\begin{itemize} \itemsep-1mm
\item $\liminf S_n=$ the set of all limits of sequences $(s_n)$ with $s_n\in S_n$;
\item $\limsup S_n=$ the set of all partial limits of sequences $(s_n)$ with $s_n\in S_n$;
\item both sets are always closed;
\item let us write $S_n\to S$ if $\liminf S_n=\limsup S_n=S\,(=\clos\, S)$;
\item then $S_n\to S\iff \clos\,S_n\to S$ (and again, $S=\clos\, S$ is automatic).
\end{itemize}
Here is an apparently different approach to set convergence:
For $z\in\C$ and $S\subseteq\C$, set $\dist(z,S):=\inf_{s\in S}|z-s|$.
The {\sl Hausdorff distance} of two bounded sets $S,T\subseteq\C$, defined via
\[
\dH(S,T)\ :=\ \max\left\{\sup_{s\in S}\dist(s,T),\ \sup_{t\in T}\dist(t,S)\right\},
\]
\begin{itemize} \itemsep-1mm
\item ...is a metric on the set of all compact subsets of $\C$;
\item ...is just a pseudometric on the set of all bounded subsets of $\C$:\\
besides symmetry and triangle inequality, one has $\dH(S,T)=0\iff \clos S=\clos T$ since
\[
\dH(S,T)\ =\ \dH(\clos S, T)\ =\ \dH(S,\clos T)\ =\ \dH(\clos S,\clos T);
\]
\item let us still write $S_n\Hto S$ if $\dH(S_n,S)\to 0$, also for merely bounded sets $S_n,S$;
\item the price is that the limit $S$ in $S_n\Hto S$ is not unique:\\
one has $S_n\Hto S$ and $S_n\Hto T$ if and only if $\dH(S,T)=0$,
i.e.~$\clos S=\clos T$.
\end{itemize}
Both notions of set convergence are connected, via the Hausdorff theorem:
\begin{equation}\label{eq:Haus}
S_n\Hto S\qquad\iff\qquad S_n\to \clos S\,.
\end{equation}
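For finite point sets, $\dH$ can be computed straight from the definition; the following sketch (the test sets echo the grid $\frac 1n\Z\cap[0,1]$ from the examples above) is one way to do it:

```python
def hausdorff(S, T):
    """Hausdorff distance d_H(S, T) of two finite, nonempty point
    sets in the complex plane (iterables of complex numbers)."""
    S, T = list(S), list(T)
    d_st = max(min(abs(s - t) for t in T) for s in S)
    d_ts = max(min(abs(s - t) for s in S) for t in T)
    return max(d_st, d_ts)
```

For instance, the distance between the grid $\frac 15\Z\cap[0,1]$ and a dense sample of $[0,1]$ is $\frac 1{10}$, realized at the midpoints of the grid cells.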
Remember:\\[-8mm]
\begin{itemize} \itemsep-1mm
\item the limit of ``$\to$'' is always closed;
\item the limit of ``$\Hto$'' need not be closed...
\item ... but its uniqueness only comes by passing to the closure;
\item passing to the closure in front of ``$\to$'' or ``$\Hto$'' does not change the statement.
\end{itemize}
\begin{lemma} \label{lem:X}
Let $S_n$ and $T_n$ be bounded subsets of $\C$ with
$S_n \to S$ and $T_n \to T$. \\
In addition, suppose $S_n \setminus T_n\ne\varnothing$. Then:
\begin{enumerate}[label=(\alph*)]\itemsep-1mm
\item In general, it does \underline{not} follow that
\[
S_n\setminus T_n \quad \to\quad S\setminus T.
\]
\item However, it always holds that
\[
\liminf (S_n\setminus T_n) \quad\supseteq\quad S\setminus T.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\alph*)]\itemsep-1mm
\item Consider
$S_n := [0,1]\ \to\ [0,1] =:S$ and $T_n:=\frac{1}{n}\mathbb{Z}\cap [0,1]\ \to\ [0,1] =:T$.\\
Then $S_n\setminus T_n \ \to\ [0,1] \ne \varnothing = S\setminus T$.
\item Let $x \in S\setminus T$.
Since $x\in S$, there is a sequence $(x_n)$ with $x_n\in S_n$ such that $x_n \to x$.\\
We show that $x_n\not\in T_n$, eventually.\\
Suppose $x_n \in T_n$ for infinitely many $n\in\N$.
Then there is a strictly monotonic sequence $(n_k)$ in $\N$ with $x_{n_k}\in T_{n_k}$.
But then
\[
x=\lim_n x_n =\lim_k x_{n_k}\in \limsup_n T_n = T,
\]
which contradicts $x\in S\setminus T$.
Consequently, just finitely many elements of the sequence $(x_n)$ can be in $T_n$.
Replacing these by elements from $S_n\setminus T_n$ does not change the limit, $x$.
So $x\in \liminf (S_n\setminus T_n)$. \qedhere
\end{enumerate}
\end{proof}
\section{Equivalence of pointwise convergence $f_n\to f$ and Hausdorff convergence of the pseudospectra}
Here is our main theorem. Note that we do not require any convergence of $A_n$ to $A$.
\begin{theorem} \label{thm:main}
Let $X$ be a Banach space with the properties from Lemma \ref{lem:nonconst}
and let $A$ and $A_n,\ n\in\N$, be bounded linear operators on $X$.
Then the following are equivalent for the functions and sets introduced in
\eqref{eq:speps}, \eqref{eq:f} and \eqref{eq:fn}
\[
\begin{array}{rlp{50mm}}
(i) & f_n\to f\text{ pointwise},&\\
(ii) & \text{for all }\eps>0,\text{ one has }\speps A_n\ \Hto\ \speps A.&
\end{array}
\]
\end{theorem}
\begin{proof}
$(i)\implies (ii)$:
Assume $(i)$ and take $\eps>0$.
For $f(\lambda)<\eps$, $(i)$ implies $f_n(\lambda)<\eps\ \forall n\ge n_0$.
So it follows
\[
\speps A\ \subseteq\ \liminf\speps A_n\ \subseteq\ \limsup\speps A_n.
\]
Now let $\lambda\in\limsup\speps A_n$, i.e.~$\lambda=\lim \lambda_{n_k}$
with $\lambda_{n_k}\in\speps A_{n_k}$, so that $f_{n_k}(\lambda_{n_k})<\eps$.
Then
\[
|f(\lambda)-f_{n_k}(\lambda_{n_k})|\ \le\
\underbrace{|f(\lambda)-f_{n_k}(\lambda)|}_{\to 0\text{ by }(i)}
\ +\ \underbrace{|f_{n_k}(\lambda)-f_{n_k}(\lambda_{n_k})|}_{\le|\lambda-\lambda_{n_k}|\to 0}\ \to\ 0.
\]
Consequently, $f(\lambda)\le\eps$ and hence $\lambda\in \closn(\speps A)$, by Lemma \ref{lem:nonconst}. We get
\[
\speps A\ \subseteq\ \liminf\speps A_n\ \subseteq\ \limsup\speps A_n\ \subseteq\ \closn(\speps A).
\]
Passing to the closure everywhere in this chain of inclusions, just changes $\speps A$ at the very left into $\closn(\speps A)$, and we have $\speps A_n\to\closn(\speps A)$ and hence, by \eqref{eq:Haus}, $(ii)$.
$(ii)\implies (i):$ Take $\lambda\in\C$ and put $\eps:=f(\lambda)$.\\[-7mm]
\begin{itemize}
\item Case 1: $\eps=0$.\\
Take an arbitrary $\delta>0$.
Then, by $(ii)$, $\lambda\in \specn_\delta A\Hot \specn_\delta A_n$.
So, by \eqref{eq:Haus}, there is a sequence $(\lambda_n)_{n\in\N}$
with $\lambda_n\in \specn_\delta A_n=f_n^{-1}([0, \delta))$ and $\lambda_n \to \lambda$.
\item Case 2: $\eps>0$.\\
Take an arbitrary $\delta\in (0,\eps)$. By $(ii)$, we have
\[
S_n\ :=\ \specn_{\eps+\delta}A_n\ \Hto\ \specn_{\eps+\delta}A\ =:\ S,
\qquad \text{i.e.\ } S_n\to\clos S, \text{ by } \eqref{eq:Haus},
\]
and
\[
T_n\ :=\ \specn_{\eps-\delta}A_n\ \Hto\ \specn_{\eps-\delta}A\ =:\ T,
\qquad \text{i.e.\ } T_n\to\clos T, \text{ by } \eqref{eq:Haus}.
\]
By Lemma \ref{lem:X} (b),
\[
\lambda\quad\in\quad
\closn(\specn_{\eps+\delta} A)\ \setminus\ \closn(\specn_{\eps-\delta}A)
\quad \subseteq\quad
\liminf \Big(\specn_{\eps+\delta} A_n\ \setminus\ \specn_{\eps-\delta}A_n\Big),
\]
in short:
\[
\lambda\quad\in\quad
f^{-1}\big((\eps-\delta,\eps+\delta]\big)\quad \subseteq\quad
\liminf f_n^{-1}\big([\eps-\delta,\eps+\delta)\big).
\]
So there is a sequence $(\lambda_n)_{n\in\N}$ with
$\lambda_n\in f_n^{-1}\big([\eps - \delta,\eps +\delta)\big)$ and $\lambda_n \to \lambda$.
\end{itemize}
In both cases, we conclude
\[
|f(\lambda)-f_n(\lambda)|
\ \le\ \underbrace{|\overbrace{f(\lambda)}^\eps - f_n(\lambda_n)|}_{\le\delta}
\ +\ \underbrace{|f_n(\lambda_n)-f_n(\lambda)|}_{\le |\lambda_n-\lambda|\to 0}\ <\ 2\delta
\]
for all sufficiently large $n$, and hence $f_n(\lambda)\to f(\lambda)$ as $n\to\infty$, i.e.~$(i)$.
\end{proof}
\begin{corollary}
Let the assumptions of Theorem \ref{thm:main} be satisfied.\\
If $X$ is a Hilbert space and the operators $A$ and $A_n$, $n\in\N$, are normal then $(i)$ and $(ii)$ are also equivalent to
\[
\spec A_n\ \Hto\ \spec A.
\]
\end{corollary}
\begin{proof}
For normal operators, the $\eps$-pseudospectrum is exactly the $\eps$-neighborhood of the spectrum, e.g.~\cite{TrefEmb}. But $B_\eps(S_n)\Hto B_\eps(S)$ for all $\eps>0$ implies $S_n\Hto S$.
\end{proof}
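This reduction is easy to check numerically: modelling a normal operator by its eigenvalues (say, a diagonal matrix on $\ell^2$), the resolvent norm is $1/\dist(\lambda,\spec A)$, so membership in $\speps A$ from definition \eqref{eq:speps} becomes a distance test. A minimal sketch with arbitrary eigenvalues:

```python
def resolvent_norm(lam, eigenvalues):
    """||(A - lam I)^{-1}|| for a normal operator modelled by its
    eigenvalues: on a Hilbert space this is 1/dist(lam, spec A)."""
    d = min(abs(lam - z) for z in eigenvalues)
    return float("inf") if d == 0 else 1.0 / d

def in_pseudospectrum(lam, eigenvalues, eps):
    """Definition (1): lam lies in spec_eps iff the resolvent norm
    exceeds 1/eps; for normal operators this is dist(lam, spec A) < eps."""
    return resolvent_norm(lam, eigenvalues) > 1.0 / eps
```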
\begin{remark}
{\bf a) } The pointwise convergence $f_n\to f$ is uniform on compact subsets of $\C$. (Take an $\frac \eps 3$-net for the compact set and use the uniform Lipschitz continuity of the $f_n$.)
{\bf b) }
It is well-known \cite{TrefEmb} that $\speps A\subseteq r\D$ with $r=\|A\|+\eps$. So if $(A_n)_{n\in\N}$ is a bounded sequence then $\speps B\subset r\D$ for all $B\in\{A,A_n:n\in\N\}$ with $r=\max\{\|A\|,\sup \|A_n\|\}+\eps$. By a), the convergence $f_n\to f$ is uniform on $\closn(r\D)$.
\end{remark}
\begin{remark}
Sometimes (especially in earlier works), pseudospectra are defined in terms of non-strict inequality:
\[
\textstyle
\Speps A\ :=\ \{\lambda\in\C: \|(A-\lambda I)^{-1}\|\ge\frac 1\eps \},\qquad \eps>0.
\]
One benefit is to get compact pseudospectra, in which case $\dH$ is a metric and $\Hto$ has a unique limit.
By Lemma \ref{lem:nonconst}, $\Speps A=\closn(\speps A)$ for all $\eps>0$. But since $S_n\Hto S $ if and only if $\closn(S_n)\Hto\closn(S)$, one could add, if one prefers, this third equivalent statement to Theorem~\ref{thm:main}:
\[
(iii)\quad\Speps A_n\ \Hto\ \Speps A,\qquad \forall\eps>0. \qedhere
\]
\end{remark}
\medskip
{\bf Acknowledgements.}
The authors thank Fabian Gabel and Riko Ukena from TU Hamburg for helpful comments and discussions.
\section{Introduction}
Economical and financial systems are attractive to physicists because of their highly dynamic and complex structure which is generally much more challenging as a complex system than systems exhibiting a high degree of symmetry studied in traditional physics. The goal of this paper is to derive analytical results regarding the physics and phases of a model we developed for quantifying the linear response in a bipartite, dynamical network. While our discussion will exclusively concern a largely simplified financial network and its mean-field behavior, a similar methodology based on an ``effective theory'' approach could be employed in analyzing the response of many other dynamical bipartite networks. Especially networks pertaining to resources and agents using resources, such as power grids and some special food webs, might benefit from a similar type of modeling.
The global financial crisis that started around 2007 made it clear how limited our understanding of the dynamics of financial markets is. In recent years, economists seem to have acknowledged the fact that we cannot predict real-world dynamics with idealized assumptions and that we need to account for the complex relations of different players in an economical system. Scholars have started to consider the network of such interactions and how it may affect business cycles, cause cascades, or spread contagion \cite{Gale2,acemoglu2012network}.
Our goal here is to make a first attempt at quantitatively modeling the continuous dynamics of a market system in the simplest form and to lowest order.
In an earlier paper we introduced a bipartite network model for investment markets in which investors traded assets in a fashion similar to common stock markets \cite{dehmamy2014classical}. The assumption of the market being a bipartite network is, of course, a major simplification. In reality, there undoubtedly exist many strong financial interactions among investors and even among assets. In our paper \cite{dehmamy2014classical} the major investors were banks or other major financial institutions. It should be noted that the interbank lending network is a very complex and important multiplex network, and much research has been dedicated to it (see \cite{furfine2003interbank,upper2011,bargigli2016interbank, farboodi2014intermediation} and references therein).
Our simplified bipartite model, which has ``assets'' as one layer and ``investors'' as the other, ignoring all intra-layer connections, was constructed in order to assess the first-order response dynamics of a market to any change. In this model neither money nor the number of shares is conserved, as investors are assumed to trade with entities both inside and outside the network -- though such conservation laws can be imposed if needed. The weighted connections indicate the amounts of the investments. The details of the model and the phenomenological derivation for it are explained in \cite{dehmamy2014classical}. The goal of this paper is to derive analytical results about the stability conditions for such market dynamics.
There are many different models for financial networks \cite{Haldane,allen2000financial,Kok,kok2, kok3}, each assessing different aspects of connectivity and using familiar assumptions about the dynamics of prices based on existing economic models.
Our model, on the other hand, is at its core what we call an ``effective theory'' in physics. It attempts to model the dynamics of a market by making minimal assumptions about details and assuming system-wide parameters. It only uses the microscopic behavior of agents as input. The demonstration of the genericity of our model is the subject of another forthcoming paper \cite{lagrangian}.
Perhaps the closest model to ours in the context of bipartite investment networks was introduced by Caccioli et al. \cite{caccioli2014stability}. As will become clear in the derivations below, our model in fact also reduces to a Langevin-equation model for price dynamics in a stock market introduced by Bouchaud \cite{bouchaud1998langevin}. There the authors argue that the price dynamics near the onset of a crisis is similar to a damped harmonic oscillator plus non-linear terms that become relevant after the phase transition to an unstable phase. Our model has such non-linear terms naturally from the start and no extra assumption is needed. In another forthcoming paper we will discuss the application of our model to stock markets in detail.
Our model relies on two behavioral parameters, one for supply/demand and one for investor decisions. Over time-scales where the behavioral parameters are not changing much, we argue that the large-scale dynamics of the market can be modeled and simulated. It is in this regime that our model may provide crucial insight into the stability of a market and serve as a tool for regulators.
\section{Model and Notation}
We will refer to the model as
the Group-Impulse Portfolio-Sharing Investment (GIPSI) model henceforth. In essence the GIPSI model summarizes how investors and their brokers may behave on average in a market, if they are aware of the market news.
\begin{figure}
\centerline{\Large\color{blue}Investors}
\centerline{\includegraphics[width=.5\columnwidth, trim=0 0 0 1.5cm, clip]{plots/graph7.pdf}
}
\caption{A sketch of the network of investors vs assets. It is a
directed, weighted bipartite graph (thicknesses represent
weights of investment). The weighted adjacency matrix has entries $A_{i\mu}$, the number of shares of asset $\mu$ held by investor $i$. \label{fig:graph}
}
\end{figure}
We will approximate the investment market as a bipartite network as shown in
Fig.~\ref{fig:graph}. On one side we have the ``Assets,'' which are what is traded, and on the other we have the ``Investors'' that
own the assets. The ``assets'' are labeled using
Greek indices $\mu,\nu...$. To each asset $\mu$ we assign a ``price,''
$p_\mu(t)$ at time $t$. The ``investors'' are labeled using Roman
indices $i,j...$. Each investor has an ``equity'' $E_i(t)$ at time $t$, i.e.\ their net worth. Each investor has a portfolio, meaning
differing amounts of holdings in each of the asset types. The amount of asset $\mu$ that investor $i$ holds is denoted by $A_{i\mu}(t)$, which is
essentially an entry of the weighted adjacency matrix $A$ of the
bipartite network.
We assume that there is a group-impulse in the decision-making, the so-called ``herding effect'' \cite{devenow1996rational,bikhchandani2000herd} where all agents will have the same level of panic or calmness in a situation. They will adjust their portfolio $A_{i\mu}$ by looking at the gains or losses $\delta E_i$ they incurred recently. So one of the equations is a generalization of a simple portfolio adjustment protocol
\[\delta \log A_{i\mu} \sim \beta \delta \log E_i \]
Their level of panic is reflected in the so-called ``income elasticity of demand,'' which we will denote as $\beta$.
This assumption has been criticized by some scholars as unrealistic, based on empirical evidence about ``fire sales'' \cite{shleifer2011fire}, which suggests that the leverage (i.e.\ the ratio of investments to equity) differs among institutions. Our justification for this assumption is based on a mean-field argument: we believe that most of these institutions will be leveraged close to the maximum amount allowed by regulations, thus having similar leverage. Aside from that, if the assumption of herding holds for the response to a rapid change, using the same factor $\beta$ makes sense at least for the mean-field behavior of the system.
The supply and demand equation for the prices $p_\mu$ has a similar structure
\[\delta \log p_\mu \sim \alpha \delta \log A_{i\mu} \]
where $1/\alpha$ would be what is usually called ``price elasticity'' in economics. There is also a third equation, related to how trading changes the equity of an investor, which turns out to be \cite{dehmamy2014classical} $\delta E_i = (A\cdot \delta p)_i$. A crucial point for having a more realistic model of such systems is the fact that there is a response time associated with each of these equations.
Below we write down the full equations describing how each of the variables $E_i(t)$, $A_{i\mu}(t)$, and $p_\mu(t)$ evolves over time. A key feature of our model is that the weights of the links $A_{i\mu}$ are time-dependent, and this introduces dynamics into our network.
For brevity, we define $\ro_t\equiv {d\over dt}$. The equations of the GIPSI model may be written as
\begin{align}
\pr{\tau_B\ro_t^2 +\ro_t }A_{i\mu}(t)&=\beta {\ro_t E_i(t)\over E_i(t)} A_{i\mu}(t)\label{eq:ddA}\\
\pr{\tau_A\ro_t^2 +\ro_t } p_\mu(t)&=\alpha {\ro_t A_\mu(t)\over A_\mu (t)}p_\mu (t)\label{eq:ddp} \\
\ro_t E_i(t)&= \sum_\mu A_{i\mu}(t) \ro_t p_\mu(t)+f_i(t). \label{eq:ddE}
\end{align}
where $f_i=dS_i/dt$ has the meaning of an external force, $\tau_B$ is the time-scale on which investors respond to a change in their net worth, and $\tau_A$ is the time-scale of the market's response.
The importance of this model as a lowest order approximation of linear response in a market system is the subject of a separate paper in preparation \cite{lagrangian,dehmamy2016graduate}. But just to motivate the use of this model, we only note that the lowest order effective Lagrangian model (in the spirit of Landau-Ginzburg models) for the response of a market defined by the dynamical variables $A_{i\mu}, p_\mu$ and $E_i$ plus dissipation yields equations with the structure of \eqref{eq:ddA}-\eqref{eq:ddE}.
\begin{table}
\caption{Notation\label{tab:definitions}}
\centering
\begin{tabular}{@{\vrule height 10.5pt depth4pt width0pt}cc}
\hline
symbol & denotes \\
\hline
$A_{i\mu}(t)$ & Holdings of investor $i$ in asset $\mu$ at time $t$\\
$p_\mu (t)$ & Normalized price of asset $\mu$ at time $t$ ($p_\mu(0)=1$)\\
$E_i(t)$ & Equity of bank $i$ at time $t$. \\
$\alpha$ & Inverse price elasticity\\%``Inverse market depth'' factor of price to a sale.\\
$\beta$ & Income elasticity of demand (rashness)\\
\hline
\end{tabular}
\end{table}
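To make the dynamics concrete, the following sketch Euler-integrates Eqs.~\eqref{eq:ddA}--\eqref{eq:ddE} for a single investor and a single asset, approximating the delta-function shock by a one-step rectangular pulse; the step size, horizon, and divergence thresholds are illustrative choices, not part of the model:

```python
def gipsi_1x1(alpha, beta, shock=0.01, tau_A=1.0, tau_B=1.0,
              dt=1e-3, t_max=60.0):
    """Explicit-Euler integration of the 1-investor/1-asset system.
    State: holdings A, price p, equity E and the first derivatives
    u = dA/dt, v = dp/dt.  The shock f(t) = c*delta(t) is replaced
    by a rectangular pulse of area `shock` in the first step.
    Returns (A, p, E, diverged)."""
    A, p, E = 1.0, 1.0, 1.0
    u = v = 0.0
    for k in range(int(t_max / dt)):
        f = shock / dt if k == 0 else 0.0
        dE = A * v + f                            # equity equation
        du = (beta * (dE / E) * A - u) / tau_B    # portfolio equation
        dv = (alpha * (u / A) * p - v) / tau_A    # supply/demand equation
        A += dt * u
        p += dt * v
        E += dt * dE
        u += dt * du
        v += dt * dv
        if max(A, p, E) > 1e6:                    # runaway growth (bubble)
            return A, p, E, True
        if min(A, p, E) <= 0.0:                   # collapse
            return A, p, E, True
    return A, p, E, False
```

With $\alpha=\beta=0.5$ (so $\gamma=0.25$) a small positive shock settles to a new equilibrium, while $\alpha=\beta=1.5$ (so $\gamma=2.25$) triggers runaway growth, matching the phases discussed below.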
\section{Confidence in the market based on market response}
The simple strategies devised above are expected to describe the linear response of the market in the mean-field approximation. The sign of $\beta$ may be considered an indicator for confidence in the market: positive $\beta$ means in response to a loss (i.e. $\delta E_i <0$) the investor sells assets (i.e. $\delta A_{i\mu} <0$). This can be interpreted as the investor fearing more losses and therefore reducing their holdings. By the same token negative $\beta$ may signal confidence in the market as it indicates a willingness to buy more shares when the investor has lost money.
Note that the GIPSI model equations \eqref{eq:ddA}--\eqref{eq:ddE} are response equations, i.e.\ they yield no dynamics if there is no change in the variables $E,A,p$.
To assess the behavior of this system we will introduce an initial shock.
We do this by assuming a delta-function change in equity, $f_i(t) = c\,\delta(t)$, for some investor $i$, meaning that at $t=0$ agent $i$ either gained or lost money from outside the market, and its decision to trade in the market triggers the response of the market \footnote{It turns out that the qualitative behavior of the final state after a shock to the system is only weakly sensitive to the value of the response times $\tau_A,\tau_B$. The response times only need to be nonzero. As we will show analytically below, the phase of the system is determined by the two behavioral couplings $\alpha$ and $\beta$. The plots shown below are for $\tau_A = \tau_B = 1$.}.
For the empirical analysis we will use the same Eurozone crisis data we used in the original paper \cite{dehmamy2014classical},
which consists of the sovereign bonds of five European countries, namely
Greece, Italy, Ireland, Portugal and Spain (GIIPS), as assets and the network of major banks and financial institutions who purchased and traded these bonds.
Many smaller players and the European Central Bank (ECB) that also participated in the trades are not in this data and their existence justifies the non-conservation of the total amount of holdings in the network.
They are also partly responsible for dissipation in the dynamics. The data is given in the appendix of \cite{dehmamy2014classical}.
\begin{figure*}
\centering
\includegraphics[width=.95\columnwidth]{plots/pEmergedEBAab060.pdf}
\includegraphics[width=.95\columnwidth]{plots/pEEBAanbn050.pdf}
\includegraphics[width=.95\columnwidth]{plots/{Sergey-pE-shk=0,a=-10.00,b=10.00,t_f=69}.pdf}
\includegraphics[width=.95\columnwidth]{plots/{Sergey-pE-shk=0,a=10.00,b=-10.00,t_f=69}.pdf}
\caption{Contrarian regimes: top, both $\alpha,\beta<0$. Here many banks
fail, even for relatively small $\alpha, \beta$. The losses are
devastating. Our model suggests that such a regime should be
avoided. The bottom two plots show the two points $\alpha=\pm \beta$,
$\beta=\pm 10$. The two results are almost identical. They also show
that no appreciable amount of profit or loss is generated in these
regimes, thus making them rather unfavorable for investors most of the
time, but because of their safety they could be a contingency plan
(buyout of bad assets by central banks is one such contrarian
behavior).
\label{fig:4}}
\end{figure*}
Fig. \ref{fig:4} shows 4 examples of choices for $\alpha$ and $\beta$ and the response of the prices and equities of banks to a shock to a random bank in the system
\footnote{It turns out that, similar to a system in the thermodynamic limit, the details of which bank is shocked through $f_i$ do not matter for the final state of the system \cite{dehmamy2014classical}.}.
As we will show below, all these choices fall in the ``stable'' regime, though the losses incurred from a negative shock
(i.e. $f_i< 0$) are much larger in magnitude when $\alpha$ and $\beta$ have the same sign than when they have opposite signs.
The upper left plot in Fig. \ref{fig:4} is for $\alpha$ and $\beta>0$. This is what one normally expects from this system: $\beta>0$
means if a bank incurs a loss, they try to make up for it by making
money from selling assets; $\alpha>0$ means if there is selling
pressure (more supply than demand) the prices will go down. There are,
however, cases where the opposite happens.
Negative values for $\alpha$ and $\beta$ are possible in the real world and are known as ``contrarian'' behavior.
For example, a contrarian investor is someone who invests more in asset $\mu$ when they actually lose money on asset $\mu$.
This happens a lot when there is confidence in the market and investors believe the price will rebound.
The market may also sometimes behave in a contrarian fashion, when there is an anticipation of good news that
overcomes the selling pressure, or when other investors outside our Eurozone GIIPS
network (such as smaller investors or the ECB) are
actually exerting a buying pressure.
As is seen in Fig. \ref{fig:4}, this results in the price oscillating slightly around its original value. This trust mechanism plays an important role in the stability of the market.
It requires that one side (either investor or the prices) behaves in a contrarian fashion and the other side behaves normally.
\section{Phases and Confidence}
As we have seen, the behavior of the prices $p_\mu$ and equities $E_i$ differs qualitatively between positive and negative values of the product $\gamma \equiv \alpha\beta$. When $\gamma >0$, negative shocks ($\ro_t E_i(0) <0$) cause a noticeable drop in prices and in some equities\footnote{All of $E,A,p$ are positive quantities and we do not allow them to go negative.}, whereas when $\gamma < 0$ there is only oscillation and the final values of $p_\mu$ and $E_i$ are very close to their original values.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{plots/{pEmergedEBAab150}.pdf}
\caption{When $\gamma = \alpha\beta >1$, although the shapes of the price and equity plots after a negative shock look similar to the case $1>\gamma>0$, the behavior of the system is qualitatively different. Many asset prices plummet to or close to zero. The system survives only because investors (banks) whose equity goes to zero stop existing and won't propagate the shock further. }
\label{fig:unstable}
\end{figure}
There is, however, another qualitatively different regime which is not shown in Fig. \ref{fig:4}. It arises when $\gamma >1$ and is shown in Fig. \ref{fig:unstable}. In fact, this qualitative difference becomes much clearer when we show the effect of a positive shock $\ro_t E_i(0)>0$ to the system in Fig. \ref{fig:1d}. There we see that when $\gamma<1$ the system eventually stops growing after a positive shock and reaches a new equilibrium, whereas when $\gamma>1$ all three variables $E,A,p$ grow exponentially, indefinitely. In other words, an economic ``bubble'' forms. So if $\gamma>1$ lasts for a long time, it either forms a bubble or results in a crash.
\begin{figure*}
\centering
\includegraphics[width=.5\columnwidth]{plots/{1pE-1by1,a=0.50,b=0.50,f=-0.1}.pdf}\includegraphics[width=.5\columnwidth]{plots/{1pE-1by1,a=0.50,b=0.50,f=0.1}.pdf}
\includegraphics[width=.5\columnwidth]{plots/{1pE-1by1,a=1.50,b=1.50,f=-0.1}.pdf}\includegraphics[width=.5\columnwidth]{plots/{1pE-1by1,a=1.50,b=1.50,f=0.1}.pdf}
\outNim{\includegraphics[width=1.7in]{1ddamped.pdf}\includegraphics[width=1.7in]{1d-damped.pdf}
\includegraphics[width=1.7in]{1ddiv.pdf}\includegraphics[width=1.7in]{1d-div.pdf}
}
\caption{Numerical solutions to the differential equations in a 1 bank
vs 1 asset system. The upper plots show a ``stable'' regime, where
after the shock none of the variables decays to zero or blows up, but
rather asymptotes to a new set of values. The lower plots are in the
``unstable'' regime, where positive or negative shocks result in some
variables blowing up or collapsing.
\label{fig:1d}}
\end{figure*}
This leads to the following conclusions:
\begin{enumerate}
\item When $\gamma=\alpha\beta <0$
no investor goes bankrupt, but also the amount of money lost or generated during the trading is negligible.
This makes these regimes (where either the investors or the market is contrarian,
but not both) good for preventing failures, but they are very
undesirable for profit making.
\item When $1>\gamma >0$ the system does not show oscillation but eventually settles into new equilibrium states not far from the original situation. Negative shocks may cause bankruptcy of some investors, depending on how their assets $(A\cdot p)_i$ compare to their equity $E_i$ initially.
\item When $\gamma >1$, negative shocks cause exponential drops in assets and equities. If $\gamma>1$ persists there will be a crisis. Positive shocks may form bubbles and periods of exponential growth.
\end{enumerate}
\subsection{The full phase diagram}
\begin{figure}
\centerline{
\includegraphics[trim = 1cm 0 20mm 0, clip,
width=.5\columnwidth]{plots/Sergeyphasediag2601.pdf}\includegraphics[trim = 1cm 0
20mm 0, clip,width=.5\columnwidth]{plots/phaseTimefit2601.pdf}}
\caption{{\bf Left}: Phase diagram of the GIIPS sovereign debt data, using the
sum of the final price ratios as the order parameter. We can see a
clear change in the phase diagram from the red phase, where the
average final price is high to the blue phase, where it drops to
zero. The drop to the blue phase is more sudden in the $\alpha<0,
\beta<0$ quadrant than in the first quadrant. {\bf Right}: The time it takes
for the system to reach the new equilibrium phase. This relaxation
time significantly increases around the transition region, which
supports the idea that a phase transition (apparently second order)
could be happening in the first and third quadrants. The dashed white
line shows the curve $\gamma=\alpha \beta =1$. It fits the red curves
of long relaxation time very well. This may suggest that $\gamma=1$ is
a critical value which separates two phases of the system.
\label{fig:phase}}
\end{figure}
Fig. \ref{fig:phase} shows an example of the average final prices and
relaxation time for the system for various values of $\alpha$ and
$\beta$. It seems the system has two prominent phases: One in which a
new equilibrium is reached without a significant depreciation in all of
the GIIPS holdings (upper left and lower right quadrants), and one where
all GIIPS holdings become worthless (above dashed line in the upper
right quadrant and all of lower left quadrant).
In both the first and the third
quadrants in the transition region the relaxation time becomes very
large, which means that the forces driving the dynamics become very
weak. Both the smoothness and the relaxation time growth seem to be
signalling the existence of a second order phase transition. The phase transition
seems to be described well by:
\[\gamma=\alpha \beta =1.\]
\outNim{082016
But this result is not exact and below we derive a more precise form for
this equation, which is:
\begin{equation}
\gamma = 1+ f_0,
\end{equation}
where $f_0$ is the magnitude of the initial shock $f_i(t)= f_0\delta(t)$
for a fixed $i$ that's being shocked.
082016}
Now we perform a systematic numerical analysis of the different phases of this phenomenological model. We identify two phases and what appears to be a second order phase transition between them.
We then modify the equations \eqref{eq:ddA}--\eqref{eq:ddE} and analytically derive the condition for the phase transition. The GIIPS data is very heterogeneous, in the sense that for each of these European countries there are generally less than a handful of investors who hold more than half of the total sovereign debt of that country.
If we make a mean field assumption and break the network apart, assuming there is one major investor for each asset $\mu$, we can treat that as a 1 investor 1 asset system and try to derive the phases in that case.
As we show below, the 1 by 1 system, despite not having the richness of the networked system in terms of the details of the final state, nevertheless exhibits the same phases. In addition we are able to analytically derive the phase transitions.
\section{Analytical derivation of the mean-field phase space}
In the mean-field approach described above, we are dealing with a 1 investor by 1 asset system with a single $E,A$ and $p$. We can combine Eqs. \eqref{eq:ddA}--\eqref{eq:ddE} in a 1 by 1 system by taking another $\ro_t$ derivative of \eqref{eq:ddp}. This way we can eliminate most occurrences of $E$ and $A$ and find an equation for $p$ (see Appendix \ref{ap:eqs1}) which has some non-linear terms in it.
Defining the ``return'' $u \equiv \ro_t p$ as the fundamental
variable, the nonlinearities are roughly of type $u^2 + a \ro_t u^2$.
In short, the equations are
\begin{align}
\left[\tau \ro_t^2+\ro_t+\omega^2 \right]u &={O\pr{u^2,\ro_t u^2}\over p}\cr
{1\over \tau}= {1\over \tau_B}+{1\over \tau_A},
\quad &\omega^2= {1-\gamma{Ap\over E}\over \tau_B+\tau_A}
\label{eq:d3p}
\end{align}
As we show in Appendix \ref{ap:eqs1} none of the nonlinearities on the r.h.s. can be large in the stable phase where $\gamma < 1$. We also show there that $Ap/E$ becomes its $t=0$ value plus other nonlinear terms that cannot be large when $\gamma < 1$ and
\({Ap\over E} \approx 1+f_0 +O(u^2). \)
For a small shock $f_0 = -\eps$ we may safely use $Ap/E = 1$.
Thus on time-scales where $\gamma <1$ and not changing much, we are essentially dealing with a damped harmonic oscillator. Notice that equation \eqref{eq:d3p} is almost identical to what Bouchaud proposes in \cite{bouchaud1998langevin} to explain the 1987 crash.
Although $\omega^2$ depends on $A,p$ and $E$, we can use an approximate
exponential ansatz $u\sim u_0 \exp[\lambda t]$. The
solutions for $\lambda$ are:
\[\lambda_\pm={-1\pm \sqrt{1- 4 \tau \omega^2}\over 2\tau}\]
Thus we get three regimes:
\begin{enumerate}
\item When $\omega^2>{1\over 4\tau}$
there will be oscillatory solutions. This happens when $\gamma < {-(\tau_A-\tau_B)^2\over \tau_A\tau_B}.$ For $\tau_A = \tau_B$ this is just the $\gamma < 0$ condition we observed for oscillations in our simulations.
\item When ${1\over 4\tau}>\omega^2
>0 $, i.e. ${-(\tau_A-\tau_B)^2\over \tau_A\tau_B } < \gamma < 1$, we have decaying solutions with both $\lambda_\pm <0$. Therefore the changes won't be large and eventually the system settles into a new equilibrium.
\item When $\omega^2<0$, i.e.
$\gamma>1$, we will have two real solutions with opposite signs. The presence of the positive root $\lambda_+>0$ signals an instability because this solution diverges.
\end{enumerate}
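As an illustration (our own sketch, not code from the paper; $\tau_A$, $\tau_B$ and the $(\alpha,\beta)$ values are placeholders), the three regimes above can be reproduced directly from the roots $\lambda_\pm$, taking $Ap/E\approx 1$ so that $\omega^2=(1-\gamma)/(\tau_A+\tau_B)$:

```python
import math

def regime(alpha, beta, tau_A=1.0, tau_B=1.0):
    """Classify the mean-field 1-investor/1-asset system from the roots
    lambda_pm = (-1 +/- sqrt(1 - 4*tau*omega^2)) / (2*tau)."""
    gamma = alpha * beta
    tau = 1.0 / (1.0 / tau_A + 1.0 / tau_B)
    omega2 = (1.0 - gamma) / (tau_A + tau_B)
    disc = 1.0 - 4.0 * tau * omega2
    if disc < 0:
        return "oscillatory"   # complex lambda: damped oscillations
    lam_plus = (-1.0 + math.sqrt(disc)) / (2.0 * tau)
    return "unstable" if lam_plus > 0 else "stable"

# With tau_A = tau_B the regime boundaries reduce to gamma = 0 and gamma = 1:
print(regime(-0.5, 0.5), regime(0.5, 0.5), regime(1.5, 1.5))
```

For $\tau_A=\tau_B$ the discriminant equals $\gamma$ itself, so the oscillatory boundary sits at $\gamma=0$ and the instability at $\gamma=1$, matching the simulations.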
Thus we have proven the existence of the three phases we had observed earlier and derived the transition conditions analytically. It is easy to prove that in the unstable phase both exponents $\lambda_\pm$ appear in the solution:
\outNim{082016
For
a delta function shock of magnitude $f$ at $t=0$ we found that:
\[E_0\to E_0(1+f)\]
Having initially scaled to $E_0=A_0=p_0=1$, the condition for existence of the positive root becomes:
\[t=0:\quad \gamma> {E\over Ap}= (1+f)\]
This dependence on the shock magnitude is normal, as a strong enough
kick can kick a particle out of a local minimum. The shock can be
arbitrarily small and therefore the absolute condition for stability is
as we anticipated
\begin{equation}
\large \mbox{\bf unstable at:} \quad \gamma >1 \label{eq:unstable}
\end{equation}
Now the question is, which solution does the system pick when it is shocked.
082016}
The return $\ro_t p$ is
\[\ro_t p(t) = u(t)= u_+ e^{\lambda_+ t}+u_- e^{\lambda_- t} \]
Since at $t=0$ the initial conditions dictated $\ro_t p(0)=0$ we have
\(u_+=-u_-\)
and therefore both solutions appear with equal strength. It follows that
whenever one of the exponents ($\lambda_+$ in our case) is positive the
solution diverges. When $f>0$ a bubble forms and grows exponentially and
when $f<0$, because our variables are non-negative, the price just
crashes to zero. This proves that the sufficient condition for
stability is $\gamma<1$. Also note that the nonlinear terms are all
proportional to $\ro_t p$ and therefore at $t=0$
\[O\pr{u^2(0),\ro_t u^2(0)}=0\]
and so the solution is exact at $t=0$. The details of the derivations as well as the nonlinear terms are given in Appendix \ref{ap:eqs1}.
In Appendix \ref{ap:exp} we provide another, direct proof for the phase transition being at $\gamma=1$ using exponential ansatz for all three variables $X=E,p,A$
\[X\sim X_0 +X_1\exp[w_{X1} t]+X_2\exp[w_{X2} t].\]
We then show that near the phase transition the exponents for all three variables $E,A,p$ need to be the same. Moreover, as can be seen in the right panel of Fig. \ref{fig:phase}, near the phase transition the time-scale of the system's evolution is very long compared to other time-scales in the system, and thus much longer than $\tau$ in \eqref{eq:d3p}, meaning
\[|w_X|\ll \tau^{-1} \]
This leads to a massive simplification of the equations and yields
\[\gamma = {1+f_0\over 1-\beta f_0} \]
as the phase transition point, which for infinitesimal shocks $f_0\to 0$ recovers $\gamma =1$.
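As a quick numerical sanity check (ours; the name \texttt{gamma\_c} and the value of $\beta$ are arbitrary), the transition point derived above indeed approaches $\gamma=1$ as the shock vanishes, with a correction that is first order in $f_0$ with slope $1+\beta$:

```python
def gamma_c(f0, beta):
    """Critical coupling for a finite shock f0, per the formula in the text."""
    return (1.0 + f0) / (1.0 - beta * f0)

# The deviation from gamma = 1 is first order in the shock, slope (1 + beta):
for f0 in (0.1, 0.01, 0.001):
    print(f0, gamma_c(f0, beta=0.5) - 1.0)
```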
\section{Conclusion}
We have shown that the GIPSI model \cite{dehmamy2014classical} for response dynamics of a bipartite investment market exhibits three phases: two stable phases, one of which is oscillatory and the other logistic-like; and one unstable phase with indefinite exponential decay or growth depending on the initial perturbation. The unstable phase can exist in the real world, though it must be short-lived. It can describe ``boom-bust'' periods: crisis periods (as was shown in \cite{dehmamy2014classical}) or periods of rapid growth, so-called ``bubbles''. We also derive these phases analytically from a mean-field version of the model. The mean-field equations become almost identical to the Langevin equations proposed by Bouchaud \cite{bouchaud1998langevin} which could well describe behavior of markets near crashes. The non-linear terms required for the transition in Bouchaud's model exist naturally in the GIPSI model and nothing new needs to be added by hand.
\section{acknowledgments}
We thank the European Commission FET Open Project ``FOC'' 255987 and
``FOC-INCO'' 297149, NSF (Grant SES-1452061), ONR (Grant
N00014-09-1-0380, Grant N00014-12-1-0548), DTRA (Grant HDTRA-1-10-1-0014,
Grant HDTRA-1-09-1-0035), NSF (Grant CMMI 1125290), the European
MULTIPLEX and LINC projects for financial support. We also thank
Stefano Battiston for useful discussions and providing us with part of
the data. The authors also wish to thank Matthias Randant and others for
helpful comments and discussions, and especially Fotios Siokis for
sharing important points about the data and the Eurozone crisis. S.V.B. thanks the Dr. Bernard W. Gamson Computational
Science Center at Yeshiva College for support.
\bibliographystyle{plain}
\section{\normalsize INTRODUCTION}\label{sec1}
Not long ago, multivariate analysis was mainly based on linear methods illustrated on
small to medium-sized data sets. However, many novel developments have permitted the
introduction of several innovative statistical and mathematical tools for high-dimensional data
analysis. Developments such as generalised multivariate analysis, latent variable analysis, DNA
microarray data, pattern recognition, multivariate nonlinear analysis, data mining, manifold
learning, shape theory, etc., have given a new and modern image to Multivariate Analysis.
One of the topics of statistical theory that is most commonly used in many fields of scientific
research is the theory of probabilistic sampling. From a multivariate point of view, diverse
authors have studied the problem of optimum allocation in multivariate stratified random
sampling. \cite{ad81} and \cite{sssa84}, among many others, proposed the problem of optimum
allocation in multivariate stratified random sampling as a deterministic multiobjective
mathematical programming problem, by considering as objective function a cost function subject
to restrictions on certain functions of variances or vice versa, i.e., considering the functions
of variances as objective and subject to restrictions on costs. Noting that, for the case when
the function of costs is taken as the objective function, the problem of optimum allocation in
multivariate stratified random sampling is reduced to a classical uniobjective mathematical
programming problem. Furthermore, \citet{dgu:08} propose the optimum allocation in multivariate
stratified random sampling as a deterministic nonlinear problem of matrix integer mathematical
programming constrained by a cost function or by a given sample size. Also, \cite{pre78} and
\cite{dggt:07} observe that the values of the population variances are in fact random variables
and formulate the corresponding problem of optimum allocation in multivariate stratified random
sampling as a stochastic mathematical programming problem.
In this paper, the optimum allocation in multivariate stratified random sampling is posed as a
stochastic matrix integer mathematical programming problem constrained by a cost function or by
a given sample size. Section \ref{sec2} provides notation and definitions on multivariate
stratified random sampling. Section \ref{sec3} studies in detail the asymptotic normality of
the sample mean vectors and covariance matrices. The optimum allocation in multivariate
stratified random sampling via stochastic matrix integer mathematical programming is given in
Section \ref{sec4}. Also, several particular solutions are derived for solving the proposed
stochastic mathematical programming problems. Finally, an example of the literature is given in
Section \ref{sec5}.
\section{\normalsize PRELIMINARY RESULTS ON MULTIVARIATE STRATIFIED RANDOM SAMPLING}\label{sec2}
Consider a population of size $N$, divided into $H$ sub-populations (strata). We wish to find a
representative sample of size $n$ and an optimum allocation in the strata meeting the following
requirements: i) to minimise the variance of the estimated mean subject to a budgetary
constraint; or ii) to minimise the cost subject to a constraint on the variances; this is the
classical problem in optimum allocation in univariate stratified sampling, see \cite{coc77},
\cite{sssa84} and \cite{t97}. However, if more than one characteristic (variable) is being considered
then the problem is known as optimum allocation in multivariate stratified sampling. For a
formal expression of the problem of optimum allocation in stratified sampling, consider the
following notation.
The subindex $h=1,2,\cdots,H$ denotes the stratum, $i=1,2,\cdots,N_{h} \mbox{ or } n_{h}$ the unit
within stratum $h$ and $j=1,2,\cdots,G$ denotes the characteristic (variable). Moreover:
\bigskip
\begin{footnotesize}
\begin{tabular}{ll}
$N_{h}$ & Total number of units within stratum $h$.\\
$n_{h}$ & Number of units from the sample in stratum $h$.\\
\hspace{-.4cm}
\begin{tabular}{lcl}
$\mathbf{Y}_{h}$ &=& $(\mathbf{Y}_{h}^{1}, \cdots ,\mathbf{Y}_{h}^{G})$ \\
&=& $(\mathbf{Y}_{h1}, \cdots ,\mathbf{Y}_{h N_{h}})'$
\end{tabular} &
\hspace{-.3cm}
\begin{tabular}{l}
$N_{h} \times G$ population matrix in stratum $h$; $\mathbf{Y}_{hi}$ is the\\
$G$-dimensional value of the $i$-th unit in stratum $h$.\\
\end{tabular}\\
\begin{tabular}{lcl}
$\mathbf{y}_{h}$ &=& $(\mathbf{y}_{h}^{1}, \cdots ,\mathbf{y}_{h}^{G})$ \\
&=& $(\mathbf{y}_{h1}, \cdots ,\mathbf{y}_{h n_{h}})'$
\end{tabular} &
\hspace{-.3cm}
\begin{tabular}{l}
$n_{h} \times G$ sample matrix in stratum $h$; $\mathbf{y}_{hi}$ is the\\
$G$-dimensional value of the $i$-th unit of the sample in stratum $h$.\\
\end{tabular}\\
$y_{hi}^{j}$ & Value obtained for the $i$-th unit in stratum $h$\\
& of the $j$-th characteristic\\[1ex]
$\mathbf{n} = ({n}_{1},\cdots, {n}_{H})'$ & Vector of the number of units in the sample\\
$\displaystyle{W_{h}} = \displaystyle{\frac{N_{h}}{N}}$ & Relative size of stratum $h$\\
$\displaystyle{\overline{Y}_{h}^{j}} = \frac{1}{N_{h}}\displaystyle
\sum_{i=1}^{N_{h}}y_{hi}^{j}$ & Population mean in stratum $h$ of the $j$-th characteristic.\\[3ex]
$\overline{\mathbf{Y}}_{h}= (\overline{Y}_{h}^{1}, \cdots,\overline{Y}_{h}^{G})'$
& Population mean vector in stratum $h$. \\[1ex]
$\displaystyle{\overline{y}_{h}^{j}}=\frac{1}{n_{h}}\displaystyle
\sum_{i=1}^{n_{h}}y_{hi}^{j}$ & Sample mean in stratum $h$ of the $j$-th characteristic.\\
$\overline{\mathbf{y}}_{h}= (\overline{y}_{h}^{1}, \cdots,\overline{y}_{h}^{G})'$
& Sample mean vector in stratum $h$. \\
\end{tabular}
\end{footnotesize}
\begin{footnotesize}
\begin{tabular}{ll}
$\displaystyle{\overline{y}_{_{ST}}^{j}= \sum_{h=1}^{H}W_{h}\overline{y}_{h}^{j}}$
& Estimator of the population mean in multivariate\\
& stratified sampling for the $j$-th characteristic.\\
$\overline{\mathbf{y}}_{_{ST}}=(\overline{y}_{_{ST}}^{1},\cdots,\overline{y}_{_{ST}}^{G})'$
& Estimator of the population mean vector in\\
& multivariate stratified sampling.\\
$\mathbf{S}_{h}$ & Covariance matrix in stratum $h$\\
& $\mathbf{S}_{h}=\displaystyle\frac{1}{N_{h}}
\sum_{i=1}^{N_{h}}(\mathbf{y}_{hi}-\overline{\mathbf{Y}}_{h})(\mathbf{y}_{hi}-\overline{\mathbf{Y}}_{h})'$\\[2ex]
& where $S_{h_{jk}}$ is the covariance in stratum $h$ of the\\
& $j$-th and $k$-th characteristics; furthermore\\
& $S_{h_{jk}}=\displaystyle\frac{1}{{N_{h}}}
\sum_{i=1}^{N_{h}}(y_{hi}^{j}-\overline{y}_{h}^{j})(y_{hi}^{k}-\overline{y}_{h}^{k})$, and \\
& $S_{h_{jj}}\equiv S_{hj}^{2} = \displaystyle\frac{1}{N_{h}}
\sum_{i=1}^{N_{h}}(y_{hi}^{j}-\overline{y}_{h}^{j})^2$.\\
$\mathbf{s}_{h}$ & Estimator of the covariance matrix in stratum\\
& $h$;\\
& $\mathbf{s}_{h}=\displaystyle\frac{1}{{n_{h}-1}}
\sum_{i=1}^{n_{h}}(\mathbf{y}_{hi}-\overline{\mathbf{y}}_{h})(\mathbf{y}_{hi}-\overline{\mathbf{y}}_{h})'$\\
& where $s_{h_{jk}}$ is the sample covariance in stratum $h$ of the\\
& $j$-th and $k$-th characteristics; furthermore\\
& $s_{h_{jk}}=\displaystyle\frac{1}{{n_{h}}-1}
\sum_{i=1}^{n_{h}}(y_{hi}^{j}-\overline{y}_{h}^{j})(y_{hi}^{k}-\overline{y}_{h}^{k})$, and \\
& $s_{h_{jj}}\equiv s_{hj}^{2} = \displaystyle\frac{1}{{n_{h}}-1}
\sum_{i=1}^{n_{h}}(y_{hi}^{j}-\overline{y}_{h}^{j})^2.$\\
$\Cov(\overline{\mathbf{y}}_{_{ST}})$
& Covariance matrix of $\overline{\mathbf{y}}_{_{ST}}.$\\
$
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$
& Estimator of the covariance matrix of $\overline{\mathbf{y}}_{_{ST}}$,\\
& it is denoted as $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \equiv
\widehat{\Cov(\overline{\mathbf{y}}_{_{ST}})}$, and defined as \\[1ex]
& $
= \left(
\begin{array}{cccc}
\widehat{\Var}(\overline{y}_{_{ST}}^{1}) & \widehat{\Cov}(\overline{y}_{_{ST}}^{1},\overline{y}_{_{ST}}^{2})
& \cdots & \widehat{\Cov}(\overline{y}_{_{ST}}^{1},\overline{y}_{_{ST}}^{G}) \\
\widehat{\Cov}(\overline{y}_{_{ST}}^{2},\overline{y}_{_{ST}}^{1}) & \widehat{\Var}(\overline{y}_{_{ST}}^{2}) & \cdots
& \widehat{\Cov}(\overline{y}_{_{ST}}^{2},\overline{y}_{_{ST}}^{G}) \\
\vdots & \vdots & \ddots & \vdots \\
\widehat{\Cov}(\overline{y}_{_{ST}}^{G},\overline{y}_{_{ST}}^{1}) & \widehat{\Cov}(\overline{y}_{_{ST}}^{G},
\overline{y}_{_{ST}}^{2}) & \cdots & \widehat{\Var}(\overline{y}_{_{ST}}^{G}) \\
\end{array}
\right )
$\\[3ex]
& = $\displaystyle{\sum_{h=1}^{H}\frac{{{W_{h}}^{2}}
\mathbf{s}_{h}}{n_{h}} - \sum_{h=1}^{H} \frac{{W_{h}}\mathbf{s}_{h}}{N}}$\\[1ex]
$\widehat{\Cov}(\overline{y}_{_{ST}}^{j},\overline{y}_{_{ST}}^{k}) $ &
Estimated covariance of $\overline{y}_{_{ST}}^{j}$ and $\overline{y}_{_{ST}}^{k}$ where \\[1ex]
& $\widehat{\Cov}(\overline{y}_{_{ST}}^{k},\overline{y}_{_{ST}}^{j}) \equiv
\widehat{\Cov(\overline{y}_{_{ST}}^{j},\overline{y}_{_{ST}}^{k})}$, with \\[1ex]
\hspace{3.5cm}
& $\widehat{\Cov}(\overline{y}_{_{ST}}^{j},\overline{y}_{_{ST}}^{k})= \displaystyle{\sum_{h=1}^{H}\frac{{{W_{h}}^{2}}
s_{h_{jk}}}{n_{h}} - \sum_{h=1}^{H} \frac{{W_{h}}s_{h_{jk}}}{N}}$, and \\[1ex]
&
$\widehat{\Cov}(\overline{y}_{_{ST}}^{j},\overline{y}_{_{ST}}^{j}) \equiv
\widehat{\Var}(\overline{y}_{_{ST}}^{j})= \displaystyle{\sum_{h=1}^{H}\frac{{{W_{h}}^{2}}
s_{hj}^{2}}{n_{h}} - \sum_{h=1}^{H} \frac{{W_{h}}s_{hj}^{2}}{N}}$. \\[2ex]
$c_{h}$ & Cost per $G$-dimensional sampling unit in stratum $h$; let\\
& $\mathbf{c} = (c_{1}, \dots, c_{H})'$.\\
\end{tabular}
\end{footnotesize}
\noindent Here, if $\mathbf{a} \in \Re^{G}$, then $\mathbf{a}'$ denotes the transpose of
$\mathbf{a}$.
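To make the notation concrete, the following NumPy sketch (ours, not part of the paper; the data and stratum sizes are made up) computes $\overline{\mathbf{y}}_{_{ST}}$ and $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) = \sum_h W_h^2\mathbf{s}_h/n_h - \sum_h W_h\mathbf{s}_h/N$ exactly as defined in the table above:

```python
import numpy as np

def stratified_estimates(samples, N_h):
    """samples: list of (n_h x G) arrays, one per stratum;
    N_h: stratum sizes.  Returns (ybar_ST, Cov_hat) per the
    formulas in the notation table."""
    N_h = np.asarray(N_h, dtype=float)
    N = N_h.sum()
    W = N_h / N                                # relative stratum sizes W_h
    ybar_ST = sum(W[h] * y.mean(axis=0) for h, y in enumerate(samples))
    G = samples[0].shape[1]
    Cov_hat = np.zeros((G, G))
    for h, y in enumerate(samples):
        n_h = y.shape[0]
        s_h = np.cov(y, rowvar=False, ddof=1)  # sample covariance s_h
        Cov_hat += W[h] ** 2 * s_h / n_h - W[h] * s_h / N
    return ybar_ST, Cov_hat

rng = np.random.default_rng(0)
samples = [rng.normal(size=(5, 2)), rng.normal(loc=1.0, size=(8, 2))]
ybar, C = stratified_estimates(samples, N_h=[100, 300])
print(ybar, C, sep="\n")
```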
\section{\normalsize LIMITING DISTRIBUTION OF SAMPLE MEANS AND COVARIANCE MATRICES}\label{sec3}
In this section the asymptotic distribution of the estimator of the covariance
matrix $\mathbf{s}_{h}$ and mean $\overline{\mathbf{y}}_{h}$ is considered. With this aim in mind,
the multivariate version of H\'ajek's theorem is proposed in the context of sampling theory in
terms of the extension stated in \citet{h:61}. First, consider the following notation and
definitions.
A detailed discussion of operator ``$\Vec$", ``$\Vech$", Moore-Penrose inverse, Kronecker
product, commutation matrix and duplication matrix may be found in \citet{mn:88}, among many
others. For convenience, some notations shall be introduced, although in general it adheres to
standard notations.
For every matrix $\mathbf{A}$, there exists a unique matrix $\mathbf{A}^{+}$, termed
the \emph{Moore-Penrose inverse} of $\mathbf{A}$.
Let $\mathbf{A}$ be an $m \times n$ matrix and $\mathbf{B}$ a $p \times q$ matrix. The $mp
\times nq$ matrix defined by
$$
\left[
\begin{array}{ccc}
a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\
\vdots & \ddots & \vdots \\
a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B}
\end{array}
\right]
$$
is termed the \emph{Kronecker product} (also termed tensor product or direct product) of
$\mathbf{A}$ and $\mathbf{B}$ and written $\mathbf{A} \otimes \mathbf{B}$. Let $\mathbf{C}$ be
an $m \times n$ matrix and $\mathbf{C}_{j}$ its $j$-th column, then $\Vec \mathbf{C}$ is the
$mn \times 1$ vector
$$
\Vec \mathbf{C} =
\left [
\begin{array}{c}
\mathbf{C}_{1} \\
\mathbf{C}_{2} \\
\vdots \\
\mathbf{C}_{n}
\end{array}
\right].
$$
The vectors $\Vec \mathbf{C}$ and $\Vec \mathbf{C}'$ clearly contain the same $mn$
components, but in a different order. Therefore there exists a unique $mn \times mn$ permutation
matrix which transforms $\Vec \mathbf{C}$ into $\Vec \mathbf{C}'$. This matrix is termed the
\emph{commutation matrix} and is denoted $\mathbf{K}_{mn}$. (If $m=n$, one often writes
$\mathbf{K}_{n}$ instead of $\mathbf{K}_{nn}$.) Hence
$$
\mathbf{K}_{mn} \Vec \mathbf{C} = \Vec \mathbf{C}'.
$$
Similarly, let $\mathbf{B}$ be a square $n \times n$ matrix. Then $\Vech \mathbf{B}$ (also
denoted as $\v(\mathbf{B})$) shall denote the $n(n+1)/2 \times 1$ vector that is obtained
from $\Vec \mathbf{B}$ by eliminating all supradiagonal elements of $\mathbf{B}$. If
$\mathbf{B} = \mathbf{B}'$, then $\Vech \mathbf{B}$ contains only the distinct elements of
$\mathbf{B}$, and there is a unique $n^{2} \times n(n+1)/2$ matrix, termed the \textit{duplication
matrix} and denoted by $\mathbf{D}_{n}$, such that $\mathbf{D}_{n}\Vech \mathbf{B} = \Vec
\mathbf{B}$ and $\mathbf{D}_{n}^{+}\Vec \mathbf{B} = \Vech \mathbf{B}$. Finally, denote
$(\Vech \mathbf{B})' \equiv \Vech' \mathbf{B}$.
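These constructions can be checked numerically. The sketch below (an illustration of ours, not part of the text) builds $\mathbf{K}_{mn}$ and $\mathbf{D}_{n}$ explicitly and verifies the defining identities $\mathbf{K}_{mn}\Vec\mathbf{C} = \Vec\mathbf{C}'$, $\mathbf{D}_{n}\Vech\mathbf{B} = \Vec\mathbf{B}$ and $\mathbf{D}_{n}^{+}\Vec\mathbf{B} = \Vech\mathbf{B}$, with $\Vec$ taken column-major as usual:

```python
import numpy as np

def commutation(m, n):
    """K_mn: permutation matrix with K_mn vec(C) = vec(C') for m x n C."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0   # C[i,j] moves from vec C to vec C'
    return K

def duplication(n):
    """D_n: n^2 x n(n+1)/2 matrix with D_n vech(B) = vec(B), B symmetric."""
    D = np.zeros((n * n, n * (n + 1) // 2))
    def vech_idx(i, j):  # position of B[i,j] (i >= j) within vech(B)
        return j * n - j * (j - 1) // 2 + (i - j)
    for i in range(n):
        for j in range(n):
            D[j * n + i, vech_idx(max(i, j), min(i, j))] = 1.0
    return D

vec = lambda A: A.flatten(order="F")        # column-major stacking

rng = np.random.default_rng(0)
C = rng.normal(size=(3, 2))
print(np.allclose(commutation(3, 2) @ vec(C), vec(C.T)))   # K_mn vec C = vec C'

B = rng.normal(size=(3, 3)); B = B + B.T                   # symmetric B
D = duplication(3)
vech_B = np.linalg.pinv(D) @ vec(B)                        # D^+ vec B = vech B
print(np.allclose(D @ vech_B, vec(B)))                     # D vech B = vec B
```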
In what follows, from Lemma \ref{lemma1} through Theorem \ref{teo2}, asymptotic results are stated
for a single stratum. The notation $N_{\nu}$ and $n_{\nu}$ denote the size of a generic stratum
and the size of a simple random sample from that stratum.
\begin{lem}\label{lemma1}
Let $\mathbf{\Xi}_{\nu}$ be a $G \times G$ symmetric random matrix defined as
$$
\mathbf{\Xi}_{\nu} = \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'.
$$
Suppose that for $\boldsymbol{\lambda} = (\lambda_{1}, \dots, \lambda_{k})'$, any vector of constants, $k = G(G+1)/2$
{\small
\begin{equation}\label{shc}
\boldsymbol{\lambda}'\left(\mathbf{M}_{\nu}^{4} - \Vech \mathbf{S}_{\nu}
\Vech' \mathbf{S}_{\nu}\right) \boldsymbol{\lambda} \geq \epsilon \build{\max}{}{1 \leq \alpha \leq k
} \left[\lambda_{\alpha}^{2} \mathbf{e}_{k}^{\alpha '}\left(\mathbf{M}_{\nu}^{4} - \Vech \mathbf{S}_{\nu}
\Vech' \mathbf{S}_{\nu}\right) \mathbf{e}_{k}^{\alpha }\right],
\end{equation}}
where $\mathbf{e}_{k}^{\alpha } = (0, \dots, 0, 1, 0, \dots, 0)'$ is the $\alpha$-th vector of
the canonical base of $\Re^{k}$, $\epsilon > 0$ and independent of $\nu
> 1$ and
$$
\mathbf{M}_{\nu}^{4} = \frac{1}{N_{\nu}}\mathbf{D}_{G}^{+}\left[\sum_{i = 1}^{N_{\nu}}
(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\otimes (\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\right]\mathbf{D}_{G}^{+'},
$$
is the fourth central moment. Assume that $n_{\nu}\rightarrow \infty$, $N_{\nu} - n_{\nu}
\rightarrow \infty$, $N_{\nu}\rightarrow \infty$, and that, for all $j = 1,\dots,G$
\begin{equation}\label{hcas}
\left[\build{\lim}{}{\nu \rightarrow \infty}\left(\frac{n_{\nu}}{N_{\nu}}\right) = 0\right] \Rightarrow
\build{\lim}{}{\nu \rightarrow \infty} \frac{\build{\max}{}{1 \leq i_{1} < \cdots < i_{n_{\nu}}\leq N_{\nu}}
\displaystyle\sum_{\beta = 1}^{n_{\nu}}\left[\left(y_{\nu i_{\beta}}^{j} - \overline{Y}_{\nu}^{j}\right)^{2} - S_{\nu j}^{2}\right]^{2}}
{N_{\nu}\left[m_{\nu j}^{4}-\left(S_{\nu j}^{2}\right)^{2}\right]} = 0,
\end{equation}
where
$$
m_{\nu j}^{4} =\frac{1}{N_{\nu}}\sum_{i = 1}^{N_{\nu}}\left(y_{\nu i}^{j} -
\overline{Y}_{\nu}^{j}\right)^{4}.
$$
Then, $\Vech\mathbf{\Xi}_{\nu}$ is asymptotically normally distributed as
$$
\Vech\mathbf{\Xi}_{\nu} \build{\rightarrow}{d}{} \mathcal{N}_{k}(\E(\Vech\mathbf{\Xi}_{\nu}),
\Cov(\Vech\mathbf{\Xi}_{\nu})),
$$
with
\begin{equation}\label{mXi}
\E(\Vech\mathbf{\Xi}_{\nu}) = \frac{n_{\nu}}{n_{\nu}-1}\Vech\mathbf{S}_{\nu},
\end{equation}
and
\begin{equation}\label{cmXi}
\Cov(\Vech\mathbf{\Xi}_{\nu}) = \frac{n_{\nu}}{(n_{\nu} - 1)^{2}}\left(\mathbf{M}_{\nu}^{4}
- \Vech \mathbf{S}_{\nu}\Vech' \mathbf{S}_{\nu}\right).
\end{equation}
$n_{\nu}$ is the sample size for a simple random sample from the $\nu$-th population of size
$N_{\nu}$.
\end{lem}
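The moments (\ref{mXi}) and (\ref{cmXi}) can be checked on a toy population (an illustration of ours, not part of the lemma; the population values are random placeholders). Enumerating every simple random sample shows that (\ref{mXi}) holds exactly, while the exact finite-population covariance equals (\ref{cmXi}) multiplied by $(N_{\nu}-n_{\nu})/(N_{\nu}-1)$, a factor that tends to $1$ in the lemma's asymptotic regime $n_{\nu}\rightarrow\infty$, $N_{\nu}-n_{\nu}\rightarrow\infty$:

```python
import numpy as np
from itertools import combinations

def vech(B):
    """Stack the lower triangle of B column by column."""
    return np.concatenate([B[j:, j] for j in range(B.shape[0])])

rng = np.random.default_rng(0)
N, n, G = 6, 3, 2                      # toy population, sample size, variables
Y = rng.normal(size=(N, G))
Yc = Y - Y.mean(axis=0)                # deviations from the population mean
S = Yc.T @ Yc / N                      # population covariance matrix (1/N form)
a = np.array([vech(np.outer(y, y)) for y in Yc])
M4 = a.T @ a / N                       # fourth central moment, as in (m4)

# enumerate all C(N, n) simple random samples and form vech(Xi) for each
xis = np.array([vech(Yc[list(idx)].T @ Yc[list(idx)] / (n - 1))
                for idx in combinations(range(N), n)])
mean_exact = xis.mean(axis=0)
cov_exact = np.cov(xis, rowvar=False, ddof=0)

mean_lemma = n / (n - 1) * vech(S)
cov_lemma = n / (n - 1) ** 2 * (M4 - np.outer(vech(S), vech(S)))
print(np.allclose(mean_exact, mean_lemma))
print(np.allclose(cov_exact, (N - n) / (N - 1) * cov_lemma))
```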
\begin{rem}\label{remark1}
Let
$$
\mathbf{\Xi}_{\nu} = \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'.
$$
Hence,
\begin{eqnarray*}
\Vec \mathbf{\Xi}_{\nu} &=& \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}\Vec(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})' \\
&=& \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}).
\end{eqnarray*}
From where
$$
\Vech \mathbf{\Xi}_{\nu} = \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}),
$$
where $k = G(G+1)/2$.
Taking $m = k$ and $\mathbf{a}_{\nu i} = (a_{\nu i}^{1}, \dots, a_{\nu i}^{k})' =
\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) \otimes (\mathbf{y}_{\nu
i}- \overline{\mathbf{Y}}_{\nu})$ in \citet{h:61}, it is obtained that:
\begin{enumerate}[i)]
\item $\Vech \mathbf{\Xi}_{\nu}$ can be expressed as
$$
\Vech \mathbf{\Xi}_{\nu} = \sum_{i =1}^{N_{\nu}} b_{\nu i} \mathbf{a}_{\nu R_{\nu i}}.
$$
with the $b$'s fixed; furthermore $b_{\nu 1} = \cdots = b_{\nu n_{\nu}} = 1/(n_{\nu}-1)$, $b_{\nu n_{\nu}+1} = \cdots = b_{\nu N_{\nu}} =
0$. Then
$$
\build{\lim}{}{\nu \rightarrow \infty} \frac{\build{\max}{}{1 \leq j \leq N_{\nu}} \left(b_{\nu j} -
\overline{b}_{\nu}\right)^{2}}{\displaystyle\sum_{i=1}^{N_{\nu}}\left(b_{\nu i} -
\overline{b}_{\nu}\right)^{2}} = 0, \quad \mbox{ where } \quad \overline{b}_{\nu} = \frac{1}{N_{\nu}}\sum_{i = 1}^{N_{\nu}}b_{\nu i}
$$
holds if $n_{\nu}\rightarrow \infty$, $N_{\nu} - n_{\nu} \rightarrow \infty$.
\item $\overline{\mathbf{a}}_{\nu} = (\overline{a}_{\nu}^{1} \cdots \overline{a}_{\nu}^{k})'$ is
\begin{eqnarray*}
\overline{\mathbf{a}}_{\nu} &=& \frac{1}{N_{\nu}} \sum_{i = 1}^{N_{\nu}}\mathbf{a}_{\nu i}\\
&=& \frac{1}{N_{\nu}} \sum_{i = 1}^{N_{\nu}} \mathbf{D}_{G}^{+} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})\\
&=& \Vech \frac{1}{N_{\nu}} \sum_{i = 1}^{N_{\nu}} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\\
&=& \Vech \mathbf{S}_{\nu}
\end{eqnarray*}
\item From (7.2) in \citet{h:61}
\begin{equation}\label{ohc}
\sum_{i = 1}^{N_{\nu}}\left[\sum_{\alpha = 1}^{k}\lambda_{\alpha} (a_{\nu i}^{\alpha} -
a_{\nu}^{\alpha})\right]^{2} \geq \epsilon \build{\max}{}{1 \leq \alpha \leq k}
\left[\lambda_{\alpha}^{2}\sum_{i = 1}^{N_{\nu}} (a_{\nu i}^{\alpha} -
a_{\nu}^{\alpha})^{2}\right].
\end{equation}
In the context of sampling theory the left side in (\ref{ohc}) can be written as
\begin{eqnarray*}
\hspace{-1cm}
\sum_{i = 1}^{N_{\nu}}\left[\sum_{\alpha = 1}^{k}\lambda_{\alpha} (a_{\nu i}^{\alpha} - a_{\nu}^{\alpha})\right]^{2}
&=& \sum_{i = 1}^{N_{\nu}}\left\{\boldsymbol{\lambda}' \left[\mathbf{D}_{G}^{+} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) - \Vech \mathbf{S}_{\nu}\right]\right\}^{2} \\
\end{eqnarray*}
\vspace{-1.5cm}
\begin{eqnarray}
\quad &=& \sum_{i = 1}^{N_{\nu}}\boldsymbol{\lambda}' \left[\mathbf{D}_{G}^{+} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) - \Vech \mathbf{S}_{\nu}\right]\nonumber\\
&& \left[(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})' \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\mathbf{D}_{G}^{+'} -
\Vech' \mathbf{S}_{\nu}\right]\boldsymbol{\lambda}\nonumber\\
&=& \boldsymbol{\lambda}' \left[\mathbf{D}_{G}^{+} \sum_{i = 1}^{N_{\nu}}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})' \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\mathbf{D}_{G}^{+'}\right.\nonumber\\
&& - \Vech \mathbf{S}_{\nu} \sum_{i = 1}^{N_{\nu}}(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu})' \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\mathbf{D}_{G}^{+'}\nonumber\\
&& \left. - \mathbf{D}_{G}^{+}\sum_{i = 1}^{N_{\nu}}(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu}) \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})\Vech' \mathbf{S}_{\nu}
+ N_{\nu}\Vech \mathbf{S}_{\nu}\Vech'
\mathbf{S}_{\nu}\right]\boldsymbol{\lambda}\nonumber\\ \label{lshc}
&=& N_{\nu}\boldsymbol{\lambda}'\left(\mathbf{M}_{\nu}^{4} - \Vech \mathbf{S}_{\nu}
\Vech' \mathbf{S}_{\nu}\right) \boldsymbol{\lambda},
\end{eqnarray}
where $\mathbf{M}_{\nu}^{4}$ is
{\small
\begin{equation}\label{m4}
\mathbf{M}_{\nu}^{4} = \frac{1}{N_{\nu}}\mathbf{D}_{G}^{+}\left[\sum_{i = 1}^{N_{\nu}}
(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\otimes (\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\right]\mathbf{D}_{G}^{+'}.
\end{equation}}
Similarly, the right side of (\ref{ohc}) is
\begin{eqnarray*}
\hspace{-1cm}
\lambda_{\alpha}^{2}\sum_{i = 1}^{N_{\nu}} (a_{\nu i}^{\alpha} - a_{\nu}^{\alpha})^{2}
&=& \sum_{i = 1}^{N_{\nu}}\left\{\boldsymbol{\lambda}' \mathbf{e}_{k}^{\alpha}\mathbf{e}_{k}^{\alpha'}
\left[\mathbf{D}_{G}^{+} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) - \Vech \mathbf{S}_{\nu}\right]\right\}^{2} \\
&=& \lambda_{\alpha}^{2}\sum_{i = 1}^{N_{\nu}}\left\{\mathbf{e}_{k}^{\alpha'}
\left[\mathbf{D}_{G}^{+} (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) - \Vech
\mathbf{S}_{\nu}\right]\right\}^{2}.
\end{eqnarray*}
Then, proceeding as in item 3,
\begin{equation}\label{rshc}
\lambda_{\alpha}^{2}\sum_{i = 1}^{N_{\nu}} (a_{\nu i}^{\alpha} - a_{\nu}^{\alpha})^{2} =
N_{\nu} \lambda_{\alpha}^{2} \mathbf{e}_{k}^{\alpha '}\left(\mathbf{M}_{\nu}^{4} -
\Vech \mathbf{S}_{\nu}\Vech' \mathbf{S}_{\nu}\right) \mathbf{e}_{k}^{\alpha}.
\end{equation}
Therefore, from (\ref{lshc}) and (\ref{rshc}), (\ref{shc}) is established.
\item The expression for (\ref{hcas}) is found analogously to the procedure described in item 3.
\item Finally,
\begin{eqnarray*}
\E(\Vech \mathbf{\Xi}) &=& \frac{1}{n_{\nu}-1} \sum_{i = 1}^{n_{\nu}}\E\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu}) \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) \\
&=& \frac{1}{n_{\nu}-1} \sum_{i = 1}^{n_{\nu}}\Vech \E(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})' \\
&=& \frac{1}{n_{\nu}-1} \sum_{i = 1}^{n_{\nu}} \Vech \mathbf{S}_{\nu}\\
&=& \frac{n_{\nu}}{n_{\nu}-1} \Vech \mathbf{S}_{\nu}.
\end{eqnarray*}
Similarly, by independence
$$\hspace{-2cm}
\Cov(\Vech \mathbf{\Xi}) = \frac{1}{(n_{\nu}-1)^{2}} \sum_{i = 1}^{n_{\nu}}\Cov\left[\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu}) \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})\right]
$$
\vspace{-.75cm}
{\small
\begin{eqnarray*}
\quad &=& \frac{1}{(n_{\nu}-1)^{2}} \sum_{i = 1}^{n_{\nu}}\left\{\E\left[\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu}) \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i}-
\overline{\mathbf{Y}}_{\nu})' \otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\mathbf{D}_{G}^{+'}\right]
\right .\\
&& \left. - \E\left[\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) \otimes
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})\right] \E \left[(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'
\otimes (\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\mathbf{D}_{G}^{+'}\right] \right\}\\
&=& \frac{1}{(n_{\nu}-1)^{2}} \sum_{i = 1}^{n_{\nu}} \left(\mathbf{M}_{\nu}^{4} - \Vech \mathbf{S}_{\nu}\Vech' \mathbf{S}_{\nu}\right)\\
&=& \frac{n_{\nu}}{(n_{\nu}-1)^{2}} \left(\mathbf{M}_{\nu}^{4} - \Vech
\mathbf{S}_{\nu}\Vech' \mathbf{S}_{\nu}\right),
\end{eqnarray*}}
where the last expression is obtained by observing that
$$
\E\left[\mathbf{D}_{G}^{+}(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu}) \otimes
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})\right] = \Vech \E\left[(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{Y}}_{\nu})'\right] = \Vech \mathbf{S}_{\nu}
$$
and that
$$
\E\left\{\mathbf{D}_{G}^{+}
(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\otimes (\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\mathbf{D}_{G}^{+'}\right\} = \mathbf{M}_{\nu}^{4}
$$
where $\mathbf{M}_{\nu}^{4}$ is defined in (\ref{m4}). \qed
\end{enumerate}
\end{rem}
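The moment quantities above can be computed directly from the population data. The following Python sketch (helper names are ours, not from the source) builds the duplication matrix $\mathbf{D}_{G}$ and its Moore--Penrose inverse $\mathbf{D}_{G}^{+}$, then returns $\Vech\mathbf{S}_{\nu}$ and $\mathbf{M}_{\nu}^{4}$ of (\ref{m4}), using the identity $\mathbf{D}_{G}^{+}(\mathbf{a}\otimes\mathbf{a}) = \Vech(\mathbf{a}\mathbf{a}')$ exploited in the remark:

```python
import numpy as np

def duplication_matrix(G):
    """D_G satisfies D_G @ vech(A) = vec(A) for symmetric G x G matrices A."""
    D = np.zeros((G * G, G * (G + 1) // 2))
    col = 0
    for j in range(G):            # vech stacks the columns of the
        for i in range(j, G):     # lower triangle, column by column
            D[j * G + i, col] = 1.0
            D[i * G + j, col] = 1.0
            col += 1
    return D

def population_moments(Y):
    """Y: (N, G) array holding the whole finite population.
    Returns vech(S) and M4 as in (m4), both with divisor N."""
    N, G = Y.shape
    Dplus = np.linalg.pinv(duplication_matrix(G))      # D_G^+
    d = Y - Y.mean(axis=0)                             # y_i - Ybar
    S = d.T @ d / N
    vechS = Dplus @ S.reshape(-1, order="F")           # vech S = D^+ vec S
    # M4 = (1/N) sum_i v_i v_i'  with  v_i = D^+ (d_i kron d_i) = vech(d_i d_i')
    V = np.array([Dplus @ np.outer(di, di).reshape(-1, order="F") for di in d])
    M4 = V.T @ V / N
    return vechS, M4
```

Writing $\mathbf{M}_{\nu}^{4}$ as an average of outer products $v_{i}v_{i}'$ also makes its positive semidefiniteness transparent.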
\begin{thm}\label{teo1}
Under the assumptions of Lemma \ref{lemma1}, the sequence of sample covariance matrices
$\mathbf{s}_{\nu}$ is such that $\Vech \mathbf{s}_{\nu}$ has an asymptotic normal distribution with
asymptotic mean and covariance matrix given by (\ref{mXi}) and (\ref{cmXi}), respectively.
\end{thm}
\begin{proof}
This follows immediately from Lemma \ref{lemma1}; simply observe that
\begin{eqnarray*}
\mathbf{s}_{\nu} &=& \frac{1}{n_{\nu}-1}\sum_{i = 1}^{n_{\nu}}(\mathbf{y}_{\nu i}- \overline{\mathbf{y}}_{\nu})
(\mathbf{y}_{\nu i}- \overline{\mathbf{y}}_{\nu})' \\
&=& \mathbf{\Xi} - \frac{n_{\nu}}{n_{\nu}-1} (\overline{\mathbf{y}}_{\nu}- \overline{\mathbf{Y}}_{\nu})
(\overline{\mathbf{y}}_{\nu }- \overline{\mathbf{Y}}_{\nu})',
\end{eqnarray*}
where
$$
\frac{n_{\nu}}{n_{\nu}-1} \rightarrow 1 \quad \mbox{ and } \quad (\overline{\mathbf{y}}_{\nu}- \overline{\mathbf{Y}}_{\nu})
(\overline{\mathbf{y}}_{\nu }- \overline{\mathbf{Y}}_{\nu})'\rightarrow 0 \quad \mbox{in
probability}. \qquad\mbox{\qed}
$$
\end{proof}
\begin{rem}
Observe that it is possible to find the asymptotic distribution of $\Vec \mathbf{s}_{\nu}$, but
this asymptotic normal distribution is singular, because $\Cov(\Vec\mathbf{s}_{\nu})$ is
singular. This is due to the fact that $\Cov(\Vec\mathbf{s}_{\nu})$ is the $G^{2} \times G^{2}$
covariance matrix in the asymptotic distribution of $\Vec \mathbf{s}_{\nu}$ and, because
$\mathbf{s}_{\nu}$ is symmetric, $\Vec \mathbf{s}_{\nu}$ has repeated elements. In this case,
$\Vec\mathbf{s}_{\nu}$ is asymptotically normally distributed as (see \citet{mh:82})
$$
\Vec\mathbf{s}_{\nu} \build{\rightarrow}{d}{} \mathcal{N}_{G^{2}}(\E(\Vec\mathbf{\Xi}_{\nu}),
\Cov(\Vec\mathbf{\Xi}_{\nu})),
$$
where
$$
\E(\Vec\mathbf{\Xi}_{\nu}) = \frac{n_{\nu}}{n_{\nu}-1}\Vec\mathbf{S}_{\nu},
$$
$$
\Cov(\Vec\mathbf{\Xi}_{\nu}) = \frac{n_{\nu}}{(n_{\nu} - 1)^{2}}\left(\mathfrak{M}_{\nu}^{4}
- \Vec \mathbf{S}_{\nu}\Vec' \mathbf{S}_{\nu}\right),
$$
and
$$
\mathfrak{M}_{\nu}^{4} = \frac{1}{N_{\nu}}\left[\sum_{i = 1}^{N_{\nu}}
(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\otimes (\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})(\mathbf{y}_{\nu i} - \overline{\mathbf{Y}}_{\nu})'
\right]. \quad\mbox{\qed}
$$
\end{rem}
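The singularity noted in the remark is easy to verify numerically: since $\Vec\mathbf{s}_{\nu}$ repeats the off-diagonal elements, the kernel $\mathfrak{M}_{\nu}^{4} - \Vec \mathbf{S}_{\nu}\Vec' \mathbf{S}_{\nu}$ has duplicated rows and columns and hence determinant zero. A minimal Python check (function name is ours; constant factors $n_{\nu}/(n_{\nu}-1)^{2}$ are omitted):

```python
import numpy as np

def vec_kernel(Y):
    """Kernel  frak_M4 - vec(S) vec(S)'  of Cov(vec s), up to constants.
    Y is the (N, G) finite population; divisor N throughout."""
    N, G = Y.shape
    d = Y - Y.mean(axis=0)
    S = d.T @ d / N
    frak = sum(np.kron(np.outer(x, x), np.outer(x, x)) for x in d) / N
    vecS = S.reshape(-1, order="F")
    return frak - np.outer(vecS, vecS)
```

For $G = 2$, rows 1 and 2 of the $4 \times 4$ kernel correspond to the two copies of the off-diagonal element of $\mathbf{s}_{\nu}$ and coincide, which is exactly why the asymptotic distribution of $\Vec\mathbf{s}_{\nu}$ is singular.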
Proceeding in an analogous way as in Lemma \ref{lemma1} and Remark \ref{remark1}, the following is
obtained:
\begin{thm}\label{teo2}
Suppose that for $\boldsymbol{\lambda} = (\lambda_{1}, \dots, \lambda_{G})'$, any vector of
constants,
\begin{equation}\label{shcm}
\boldsymbol{\lambda}'\mathbf{S}_{\nu}\boldsymbol{\lambda} \geq \epsilon \build{\max}{}{1 \leq \alpha
\leq G} \left[\lambda_{\alpha}^{2} S_{\nu \alpha}^{2}\right].
\end{equation}
Assume that $n_{\nu}\rightarrow \infty$, $N_{\nu} - n_{\nu}
\rightarrow \infty$, $N_{\nu}\rightarrow \infty$, and that
\begin{equation}\label{hcas1}
\left[\build{\lim}{}{\nu \rightarrow \infty}\left(\frac{n_{\nu}}{N_{\nu}}\right) = 0\right] \Rightarrow
\build{\lim}{}{\nu \rightarrow \infty} \frac{\build{\max}{}{1 \leq i_{1} < \cdots < i_{n_{\nu}}\leq N_{\nu}}
\displaystyle\sum_{\beta = 1}^{n_{\nu}}\left(y_{\nu i_{\beta}}^{j} - \overline{Y}_{\nu}^{j}\right)^{2}}
{N_{\nu}S_{\nu j}^{2}} = 0.
\end{equation}
Then, $\overline{\mathbf{y}}_{\nu}$ is asymptotically normally distributed as
$$
\overline{\mathbf{y}}_{\nu} \build{\rightarrow}{d}{} \mathcal{N}_{G}\left(\overline{\mathbf{Y}}_{\nu},
\mathbf{S}_{\nu}\right).
$$
Here, $n_{\nu}$ is the sample size for a simple random sample from the $\nu$-th population of size
$N_{\nu}$.
\end{thm}
As a direct consequence of Theorem \ref{teo1}, the following is obtained:
\begin{thm}
Let $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$ be the estimator of the covariance matrix
of $\overline{\mathbf{y}}_{ST}$, then
$$
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) = \sum_{h=1}^{H}\left(\frac{{{W_{h}}^{2}}}{n_{h}} -
\frac{{W_{h}}}{N} \right)\Vech \mathbf{s}_{h}
$$
is asymptotically normally distributed; furthermore
\begin{equation}\label{normal}
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{k}
\left(\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right),
\Cov\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right),
\end{equation}
where
\begin{equation}\label{ecyst}
\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) = \sum_{h=1}^{H}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right) \frac{n_{h}}{n_{h}-1}\Vech\mathbf{S}_{h},
\end{equation}
\begin{eqnarray}
\Cov\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) \hspace{8cm}\nonumber \\
\label{ccyst}
\phantom{xx}=\sum_{h=1}^{H}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right)^{2} \frac{n_{h}}{(n_{h}-1)^{2}}
\left(\mathbf{M}_{h}^{4} - \Vech \mathbf{S}_{h}\Vech' \mathbf{S}_{h}\right),
\end{eqnarray}
and
$$
\mathbf{M}_{h}^{4} = \frac{1}{N_{h}}\mathbf{D}_{G}^{+}\left[\sum_{i = 1}^{N_{h}}
(\mathbf{y}_{h i} - \overline{\mathbf{Y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{Y}}_{h})'
\otimes (\mathbf{y}_{h i} - \overline{\mathbf{Y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{Y}}_{h})'
\right]\mathbf{D}_{G}^{+'}.
$$
\end{thm}
Observe that the asymptotic means and covariance matrices of the asymptotically normal
distributions of $\overline{\mathbf{y}}_{h}$, $\Vech \mathbf{S}_{h}$, $\Vec
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$ and $ \Vech
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$ are in terms of the population parameters
$\overline{\mathbf{Y}}_{h}$, $\Vech \mathbf{S}_{h}$, $\mathfrak{M}_{h}^{4}$ and
$\mathbf{M}_{h}^{4}$; then, from \citet[iv), pp. 388-389]{r:73}, approximations of asymptotic
distributions can be obtained using consistent estimators instead of population parameters.
In what follows, the following substitutions are used:
\begin{equation}\label{sus}
\overline{\mathbf{Y}}_{h} \rightarrow \overline{\mathbf{y}}_{h}, \qquad \Vech \mathbf{S}_{h}
\rightarrow\Vech \mathbf{s}_{h}, \quad \mathfrak{M}_{h}^{4} \rightarrow \boldsymbol{\mathfrak{m}}_{h}^{4}
\quad \mbox{ and } \quad \mathbf{M}_{h}^{4} \rightarrow \mathbf{m}_{h}^{4}
\end{equation}
where
$$
\mathbf{m}_{h}^{4} = \frac{1}{n_{h}}\mathbf{D}_{G}^{+}\left[\sum_{i = 1}^{n_{h}}
(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\otimes (\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\right]\mathbf{D}_{G}^{+'},
$$
and
$$
\boldsymbol{\mathfrak{m}}_{h}^{4} = \frac{1}{n_{h}}\left[\sum_{i = 1}^{n_{h}}
(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\otimes (\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\right].
$$
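Note that, by construction, $\mathbf{m}_{h}^{4} = \mathbf{D}_{G}^{+}\,\boldsymbol{\mathfrak{m}}_{h}^{4}\,\mathbf{D}_{G}^{+'}$, so both sample fourth-moment estimators in (\ref{sus}) can be obtained in a single pass over the stratum sample. A Python sketch (helper names are ours):

```python
import numpy as np

def duplication_matrix(G):
    """D_G with D_G @ vech(A) = vec(A) for symmetric A."""
    D = np.zeros((G * G, G * (G + 1) // 2))
    col = 0
    for j in range(G):
        for i in range(j, G):
            D[j * G + i, col] = 1.0
            D[i * G + j, col] = 1.0
            col += 1
    return D

def fourth_moment_estimators(y):
    """y: (n_h, G) sample from stratum h.
    Returns (m4, frak_m4) following the substitutions (sus), divisor n_h."""
    n, G = y.shape
    d = y - y.mean(axis=0)
    frak = sum(np.kron(np.outer(x, x), np.outer(x, x)) for x in d) / n
    Dplus = np.linalg.pinv(duplication_matrix(G))
    return Dplus @ frak @ Dplus.T, frak
```

The $G^{2} \times G^{2}$ matrix $\boldsymbol{\mathfrak{m}}_{h}^{4}$ feeds the $\Vec$ version of the asymptotic covariance, while the compressed $\frac{1}{2}G(G+1) \times \frac{1}{2}G(G+1)$ matrix $\mathbf{m}_{h}^{4}$ feeds the $\Vech$ version.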
\section{\normalsize OPTIMUM ALLOCATION IN MULTIVARIATE STRATIFIED RANDOM SAMPLING VIA STOCHASTIC MATRIX MATHEMATICAL PROGRAMMING}\label{sec4}
When the variances are the objective functions, subject to certain cost function, the optimum
allocation in multivariate stratified random sampling can be expressed as the following matrix
mathematical programming using a deterministic approach
\begin{equation}\label{om1}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}{\widehat{\Cov}}(\overline{\mathbf{y}}_{_{ST}})\\
\mbox{subject to}\\
\mathbf{c}'\mathbf{n} + c_{0} = C \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, H\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
where $\mathbb{N}$ denotes the set of natural numbers. Problem (\ref{om1}) has been studied in
detail by \citet{dgu:08}.
Observing that $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$ is in terms of $s_{h_{jk}}$,
which are random variables, the optimum allocation of (\ref{om1}) via stochastic mathematical
programming can be stated as the following stochastic matrix mathematical programming, see
\citet{p:95} and \citet{sm:84},
\begin{equation}\label{smamp}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}{\widehat{\Cov}}(\overline{\mathbf{y}}_{_{ST}})\\
\mbox{subject to}\\
\mathbf{c}'\mathbf{n} + c_{0} = C \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, H\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{k}
\left(\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right),
\Cov\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
where $\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)$ and $\Cov\left(\Vech
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)$ are given by (\ref{ecyst}) and
(\ref{ccyst}) respectively.
Observe that $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$ is an explicit function of
$\mathbf{n}$, and so it must be denoted as $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})
\equiv \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n}))$. Also, assume that
$\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n}))$ is a positive definite matrix for
all $\mathbf{n}$, $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n})) > \mathbf{0}$.
Now, let $\mathbf{n_{1}}$ and $\mathbf{n_{2}}$ be two possible values of the vector
$\mathbf{n}$ and, recall that, for $\mathbf{A}$ and $\mathbf{B}$ positive definite matrices,
$\mathbf{A} > \mathbf{B} \Leftrightarrow \mathbf{A} - \mathbf{B} > \mathbf{0}$.
Then, proceeding as in \citet{dgu:08}, the stochastic solution of (\ref{smamp}) is reduced to the
following stochastic uniobjective mathematical programming problem
\begin{equation}\label{smauo}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\\
\mbox{subject to}\\
\mathbf{c}'\mathbf{n}+ c_{0}=C \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, H\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{k}
\left(\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right),
\Cov\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
where the function $f$ is such that: $f: \mathcal{S}\rightarrow \Re$,
\begin{equation}\label{citerio}
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n_{1}})) <
\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n_{2}}))
\Leftrightarrow
f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n_{1}}))\right)
< f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n_{2}}))\right),
\end{equation}
with $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n})) \in \mathcal{S}\subset
\Re^{G(G+1)/2}$ and $\mathcal{S}$ is the set of positive definite matrices.
Unfortunately or fortunately, the function $f(\cdot)$ is not unique. Some alternatives for
$f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}(\mathbf{n}))\right)$ are $ \tr
\left(\cdot\right)$, $ \left|\cdot\right|$, $\lambda_{\max}\left(\cdot\right)$, where
$\lambda_{\max}$ is the maximum eigenvalue, $ \lambda_{\min}\left(\cdot\right)$, where
$\lambda_{\min}$ is the minimum eigenvalue, $ \lambda_{j}\left(\cdot\right)$, where
$\lambda_{j}$ is the $j$-th eigenvalue, among others.
Note that (\ref{smauo}) is a stochastic uniobjective mathematical programming problem; hence any
technique of stochastic uniobjective mathematical programming can be applied. For example, the
point $\mathbf{n} \in \mathbb{N}^{H}$ is the modified expected value solution to (\ref{smauo})
if it is an efficient solution in the \textbf{Pareto}\footnote{For the sampling context, observe
that in matrix mathematical programming problems, there rarely exists a point $\mathbf{n^{*}}$
which is considered as a minimum. Alternatively, it is said that $f^{*}(\mathbf{n})$ is a
\textit{Pareto point} of $f(\mathbf{n}) = (f_{1}(\mathbf{n}), \dots, f_{G}(\mathbf{n}))'$, if
there is no other point $f^{1}(\mathbf{n})$ such that $f^{1}(\mathbf{n}) \leq
f^{*}(\mathbf{n})$, i.e. for all $j$, $f^{1}_{j}(\mathbf{n}) \leq f^{*}_{j}(\mathbf{n})$ and
$f^{1}(\mathbf{n}) \neq f^{*}(\mathbf{n})$.} sense to the following deterministic uniobjective
mathematical programming problem
\begin{equation}\label{emsma}
\begin{array}{c}
\build{\min}{}{\mathbf{n}} k_{1}\E\left(f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)
+ k_{2}\sqrt{\Var\left(f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)}\\
\mbox{subject to}\\
\mathbf{c}'\mathbf{n}+ c_{0}=C \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, H\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
Here $k_{1}$ and $k_{2}$ are non-negative constants, and their values reflect the relative
importance given to the expectation and to the variability of
$f\left(\widehat{\Cov}(\bar{\mathbf{y}}_{_{ST}})\right)$. Some authors suggest that $k_{1} + k_{2} =1$, see
\citet[p. 599]{rao78}. Observe that if $k_{1}$ and $k_{2}$ are such that $k_{1} = 1$ and $k_{2}
= 0$ in (\ref{emsma}), the resulting method is known as the E-model. Alternatively, if $k_{1} =
0$ and $k_{2} = 1$, the method is called the V-model, see \citet{chc:63}, \citet{p:95} and
\citet{up:01}.
Alternatively, the point $\mathbf{n} \in \mathbb{N}^{H}$ is a minimum risk solution of the
aspiration level $\tau$ to the problem (\ref{smauo}) (also termed P-model, see \citet{chc:63})
if it is an efficient solution in the Pareto sense of the uniobjective stochastic optimisation
problem
\begin{equation}\label{mrsma}
\begin{array}{c}
\build{\min}{}{\mathbf{n}} \P\left(f\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) \leq \tau\right)\\
\mbox{subject to}\\
\mathbf{c}'\mathbf{n}+ c_{0}=C \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, H\\
n_{h}\in \mathbb{N}.
\end{array}
\end{equation}
In Section \ref{sec5} the solution is studied for the case when $f =
\tr\left(\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)$ and the case when $f =
\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|$. These solutions are implemented in the context of
problems (\ref{emsma}) and (\ref{mrsma}).
Finally, note that so far, the cost constraint $\displaystyle\sum_{h=1}^{H}c_{h}n_{h}+
c_{0}=C$ has been used in every stochastic mathematical programming method. However, in diverse situations,
this cost restriction could represent existing restrictions on the availability of man-hours for
carrying out a survey, or restrictions on the total available time for performing the survey,
etc. These limitations can be established by using the following constraint, see \cite{ad81}:
$$
\sum_{h=1}^{H}n_{h} = n.
$$
\section{\normalsize APPLICATION}\label{sec5}
The input information was taken from \citet{aa81}, in which they describe a forest survey conducted in Humboldt
County, California. The population was subdivided into nine strata on the basis of the timber
volume per unit area, as determined from aerial photographs. The two variables included in this
example are the basal area (BA)\footnote{In forestry terminology, `Basal area' is the area of a
plant perpendicular to the longitudinal axis of a tree at 4.5 feet above ground.} in square
feet, and the net volume in cubic feet (Vol.), both expressed on a per acre basis. The
variances, covariances and the number of units within stratum $h$ are listed in Table 1.
\begin{table}
\caption{\small Variances, covariances and the number of units within each stratum}
\begin{center}
\begin{footnotesize}
\begin{tabular}{ c r r r r }
\hline\hline
\multicolumn{2}{c}{} & \multicolumn{2}{c}{Variance} \\
\cline{3-4}
Stratum & $N_{h}$ & \hspace{.5cm} BA \hspace{.5cm} & \hspace{.5cm} Vol. \hspace{.5cm} & \hspace{.5cm}Covariance \\
\hline\hline
1 & 11 131 & 1 557 & 554 830 & 28 980 \\
2 & 65 857 & 3 575 & 1 430 600 & 61 591\\
3 & 106 936 & 3 163 & 1 997 100 & 72 369 \\
4 & 72 872 & 6 095 & 5 587 900 & 166 120\\
5 & 78 260 & 10 470 & 10 603 000 & 293 960 \\
6 & 51 401 & 8 406 & 15 828 000 & 357 300\\
7 & 24 050 & 20 115 & 26 643 000 & 663 300 \\
8 & 46 113 & 9 718 & 13 603 000 & 346 810\\
9 & 102 985 & 2 478 & 1 061 800 & 39 872 \\
\hline\hline
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
For this example, the matrix optimisation problem under approach (\ref{smauo}) is
\begin{equation}\label{ej}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}
f\left(
\begin{array}{c c}
\widehat{\Var}(\overline{y}_{_{ST}}^{1}) & \widehat{\Cov}(\overline{y}_{_{ST}}^{1}, \overline{y}_{_{ST}}^{2})\\
\widehat{\Cov}(\overline{y}_{_{ST}}^{2}, \overline{y}_{_{ST}}^{1}) & \widehat{\Var}(\overline{y}_{_{ST}}^{2}) \\
\end{array}
\right)\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,\dots, 9\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{3}
\left(\E\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right),
\Cov\left(\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N}.
\end{array}
\end{equation}
\subsection{\normalsize Solution when $f(\cdot) \equiv \tr(\cdot)$}
Note that by (\ref{normal}), (\ref{ecyst}) and (\ref{ccyst})
$$
\tr \widehat{\Cov}\left(\overline{\mathbf{y}}_{_{ST}}\right) \sim \mathcal{N}\left(\E \left (\tr \widehat{\Cov}\left(\overline{\mathbf{y}}_{_{ST}}\right)\right),
\Var\left(\tr \widehat{\Cov}\left(\overline{\mathbf{y}}_{_{ST}}\right)\right)\right)
$$
where
$$
\E\left(\tr \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) = \sum_{j=1}^{G}\sum_{h=1}^{H}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right) \frac{n_{h}}{n_{h}-1}S_{h_{j}}^{2},
$$
$$
\Var\left(\tr \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)
=\sum_{j=1}^{G}\sum_{h=1}^{H}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right)^{2} \frac{n_{h}}{(n_{h}-1)^{2}}
\left(m_{h_{j}}^{4} - (S_{h_{j}}^{2})^{2}\right),
$$
and
$$
m_{h_{j}}^{4} = \frac{1}{N_{h}}\left[\sum_{i = 1}^{N_{h}} \left(y_{h i}^{j} - \overline{Y}_{h}^{j}\right)^{4} \right].
$$
Therefore, considering the substitutions (\ref{sus}), the equivalent deterministic uniobjective
mathematical programming problem to stochastic mathematical programming (\ref{ej}) via the
modified $E$-model is
$$
\begin{array}{c}
\build{\min}{}{\mathbf{n}} k_{1}\widehat{\E}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)
+ k_{2}\sqrt{\widehat{\Var}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)}\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
n_{h}\in \mathbb{N},
\end{array}
$$
where
\begin{equation}\label{esp}
\widehat{\E}\left(\tr \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) = \sum_{j=1}^{2}\sum_{h=1}^{9}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right) \frac{n_{h}}{n_{h}-1}s_{h_{j}}^{2},
\end{equation}
\begin{equation}\label{varia}
\widehat{\Var}\left(\tr \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)
=\sum_{j=1}^{2}\sum_{h=1}^{9}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right)^{2} \frac{n_{h}}{(n_{h}-1)^{2}}
\left(\mathfrak{m}_{h_{j}}^{4} - (s_{h_{j}}^{2})^{2}\right),
\end{equation}
and
\begin{equation}\label{impo}
\mathfrak{m}_{h_{j}}^{4} = \frac{1}{n_{h}}\left[\sum_{i = 1}^{n_{h}} \left(y_{h i}^{j} - \overline{y}_{h}^{j}\right)^{4} \right].
\end{equation}
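For a candidate allocation $\mathbf{n}$, the estimated mean (\ref{esp}) and variance (\ref{varia}) can be evaluated directly; the pilot-study quantities $s_{h_{j}}^{2}$ and $\mathfrak{m}_{h_{j}}^{4}$ enter as fixed inputs. A minimal Python sketch of the modified $E$-model criterion (function and argument names are ours, not from the source):

```python
import numpy as np

def e_model_objective(n, s2, m4, W, N, k1=0.5, k2=0.5):
    """Modified E-model criterion  k1*E^ + k2*sqrt(Var^)  for f = trace.
    n  : (H,) candidate allocation (the decision variables)
    s2 : (H, G) pilot estimates s_{h_j}^2
    m4 : (H, G) pilot estimates frak_m_{h_j}^4, as in (impo)
    W  : (H,) stratum weights N_h / N;  N : population size."""
    n = np.asarray(n, dtype=float)
    c = W ** 2 / n - W / N                       # (W_h^2/n_h - W_h/N)
    E_tr = np.sum(c[:, None] * (n / (n - 1.0))[:, None] * s2)
    V_tr = np.sum((c ** 2 * n / (n - 1.0) ** 2)[:, None] * (m4 - s2 ** 2))
    return k1 * E_tr + k2 * np.sqrt(max(V_tr, 0.0))
```

This criterion is then minimised over integer allocations subject to $\sum_{h} n_{h} = n$ and $2 \leq n_{h} \leq N_{h}$, e.g. by a branch-and-bound solver as described in Section \ref{sec5}.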
\begin{rem}\label{impo1}
Observe that the estimators $\overline{y}_{h}^{j}$, $s_{h_{j}}^{2}$ and $\mathfrak{m}_{h_{j}}^{4}$ of
$\overline{Y}_{h}^{j}$, $S_{h_{j}}^{2}$ and $m_{h_{j}}^{4}$ are initially obtained as
\begin{enumerate}[i)]
\item a consequence of a pilot study (or preliminary sample) or
\item using the corresponding values of the estimators of another variable $X$ correlated to the variable $Y$.
\end{enumerate}
It is important to keep this in mind in the minimisation step because, for example,
the $n_{h}$'s that appear in expression (\ref{impo}) are the fixed $n_{h}$ values used in the
pilot study. The same comment applies to the expressions of the estimators $\overline{y}_{h}^{j}$ and
$s_{h_{j}}^{2}$. In contrast, the $n_{h}$'s that appear in expressions (\ref{esp}) and (\ref{varia})
are the decision variables. \qed
\end{rem}
Similarly, proceeding as in \citet{dgrc:00}, if $\Phi$ denotes the
distribution function of the standard normal distribution, the objective function in (\ref{ej})
with $f(\cdot) \equiv \tr(\cdot)$ can be written as
$$
\build{\min}{}{\mathbf{n}}\quad \Phi\left( \frac{\tau
-\widehat{\E}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)}
{\sqrt{\widehat{\Var}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)}} \right ).
$$
In this way, since $\Phi$ is monotonically increasing, minimising the distribution function is
equivalent to minimising its argument; hence the equivalent deterministic problem
to the stochastic mathematical programming (\ref{ej}) via the $P$-model is
$$
\begin{array}{c}
\build{\min}{}{\mathbf{n}} \displaystyle\frac{\tau -\widehat{\E}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)}
{\sqrt{\widehat{\Var}\left(\tr\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)}}\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
n_{h}\in \mathbb{N}.
\end{array}
$$
\begin{rem}
When $f(\cdot) \equiv |\cdot|$, this approach considers the following alternative stochastic
matrix mathematical programming problem
\begin{equation}\label{det1}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{2 \times 2}
\left(\Vech \mathbf{0}_{2 \times 2},
\Cov\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
where $\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})
= \Vech^{-1}\left[\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})
- \E\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right]
$
and $\Vech^{-1}$ is the inverse function of function $\Vech$.
In this way (\ref{mrsma}) is
\begin{equation}\label{det2}
\begin{array}{c}
\build{\min}{}{\mathbf{n}}\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{2 \times 2}
\left(\Vech \mathbf{0}_{2 \times 2},
\Cov\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N},
\end{array}
\end{equation}
Thus, taking into account the substitutions (\ref{sus}), the equivalent deterministic uniobjective
mathematical programming problem to the stochastic mathematical programming (\ref{det2}) via the
modified $E$-model is
$$
\begin{array}{c}
\build{\min}{}{\mathbf{n}}
k_{1}\widehat{\E}\left(\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|\right)
+ k_{2}\sqrt{\widehat{\Var}\left(\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|\right)}\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
n_{h}\in \mathbb{N},
\end{array}
$$
where for $G = 2$ and assuming that
$\widehat{\Cov}\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)$ is such that
$$
\widehat{\Cov}\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right) = \mathbf{B}\otimes
\mathbf{B},
$$
it is obtained that, see \citet{dc:00},
$$
\widehat{\E}\left(\left| \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|\right) =
|\mathbf{N}|^{1/4} \frac{(-1)}{\sqrt{\pi}}\left(\Gamma[1/2]-\Gamma[3/2]\right),
$$
and
$$
\widehat{\Var}\left(\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|\right)
=|\mathbf{N}|^{1/2} \left[\frac{2}{\sqrt{\pi}}\left(\Gamma[1/2]-\Gamma[3/2] + \frac{\Gamma[5/2]}{2}\right)
-\frac{1}{\pi}\left(\Gamma[1/2]-\Gamma[3/2]\right)^{2}\right],
$$
where $\Gamma[\cdot]$ denotes the gamma function,
$$
\mathbf{N} = \sum_{h=1}^{H}\left(
\frac{{{W_{h}}^{2}}}{n_{h}} - \frac{{W_{h}}}{N} \right)^{2} \frac{n_{h}}{(n_{h}-1)^{2}}
\left(\boldsymbol{\mathfrak{m}}_{h}^{4} - \Vec \mathbf{s}_{h}\Vec' \mathbf{s}_{h}\right)
$$
and
$$
\boldsymbol{\mathfrak{m}}_{h}^{4} = \frac{1}{n_{h}}\left[\sum_{i = 1}^{n_{h}}
(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\otimes (\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})(\mathbf{y}_{h i} - \overline{\mathbf{y}}_{h})'
\right],
$$
see Remark \ref{impo1}.
Similarly, considering (\ref{det1}) and that $f(\cdot) \equiv |\cdot|$, (\ref{mrsma}) is
restated as
$$
\begin{array}{c}
\build{\min}{}{\mathbf{n}} \P\left(\left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right| \leq \tau\right)\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
\Vech \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}}) \build{\rightarrow}{d}{} \mathcal{N}_{2 \times 2}
\left(\Vech \mathbf{0}_{2 \times 2},
\Cov\left(\Vech\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right)\right)\\
n_{h}\in \mathbb{N}.
\end{array}
$$
Then, if $\Psi$ denotes the distribution function of the determinant of
$\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})$, the equivalent deterministic problem to
the stochastic mathematical programming (\ref{ej}) via the $P$-model is
$$
\begin{array}{c}
\build{\min}{}{\mathbf{n}}\quad \tau |\mathbf{N}|^{1/4}\\
\mbox{subject to}\\
\displaystyle\sum_{h=1}^{9}n_{h}=1000 \\
2\leq n_{h}\leq N_{h}, \ \ h=1,2,\dots, 9\\
n_{h}\in \mathbb{N},
\end{array}
$$
where the density of $Z = \left|\widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})\right|$ is, see \citet{dc:00},
$$
\frac{dG(z)}{dz} = g_{_{Z}}(z) = \frac{1}{\sqrt{2}} \exp (z)\left[1 - \erf\left(\sqrt{2z}\right)\right], \quad
z\geq 0,
$$
where $\erf(\cdot)$ is the usual error function defined as
$$
\erf(x) = \frac{2}{\sqrt{\pi}}\int_{0}^{x} \exp(-t^{2})dt.
$$ \qed
\end{rem}
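The density quoted in the remark can be evaluated with the standard error function. A minimal sketch, assuming the form $g_{_{Z}}(z) = \frac{1}{\sqrt{2}} e^{z}\left[1 - \erf(\sqrt{2z})\right]$ for $z \geq 0$ taken from \citet{dc:00} (function name is ours):

```python
import math

def g_Z(z):
    """Density g_Z(z) = (1/sqrt(2)) * exp(z) * [1 - erf(sqrt(2 z))], z >= 0,
    and 0 otherwise; math.erf is the standard error function."""
    if z < 0:
        return 0.0
    return math.exp(z) / math.sqrt(2.0) * (1.0 - math.erf(math.sqrt(2.0 * z)))
```

In this form $g_{_{Z}}$ starts at $1/\sqrt{2}$ and decays monotonically, which is what the $P$-model exploits when thresholding the determinant at the aspiration level $\tau$.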
Table 2 shows the optimisation solutions obtained by some of the methods described in Section
\ref{sec4}. Specifically, the solution is presented for the case when the value function is
defined as the trace function, $f(\cdot) = \tr(\cdot)$ and for the following stochastic solutions:
Modified $E-$model, $E-$model, $V-$model and the $P-$model. Also, the
optimum allocation is included for each characteristic, BA and Vol (the first two rows in Table 2). The
last two columns show the minimum values of the individual variances for the respective optimum
allocations identified by each method. The results were computed using the commercial software
Hyper LINGO/PC, release 6.0, see \citet{w95}. The default optimisation methods used by LINGO to
solve the nonlinear integer optimisation programs are Generalised Reduced Gradient (GRG) and
branch-and-bound methods, see \citet{bss06}. Some technical details of the computations are the
following: the maximum number of iterations of the methods presented in Table 2 was 2279
(modified $E$-model) and the mean execution time for all the programs was 4 seconds. Finally,
note that the greatest discrepancy found by the different methods among the sizes of the strata
occurred under the $P$-model. Beyond doubt, this is a consequence of the choice of the corresponding
value of $\tau$ needed for the $P$-model approach.
\begin{table}
\caption{\small Sample sizes and estimator of variances for the different allocations
calculated}
\begin{center}
\begin{minipage}[t]{400pt}
\begin{scriptsize}
\begin{tabular}{ c c c c c c c c c c c c}
\hline\hline Allocation\footnote{The estimated fourth moments $m_{h_{j}}^{4}$ were simulated.} &
$n_{1}$ & $n_{2}$ & $n_{3}$ & $n_{4}$ & $n_{5}$ & $n_{6}$ & $n_{7}$ & $n_{8}$ & $n_{9}$ &
$\widehat{\Var}(\overline{y}_{_{ST}}^{1})$ &
$\widehat{\Var}(\overline{y}_{_{ST}}^{2})$ \\
\hline\hline
BA & 10 & 94 & 144 & 136 & 191 & 113 & 81 & 109 & 122 & 5.591 & 5441.105\\
Vol & 7 & 62 & 119 & 136 & 200 & 161 & 98 & 134 & 83 & 5.953 & 5139.531\\
$\boldsymbol{\tr \widehat{\Cov}(\overline{\mathbf{y}}_{_{ST}})}$
& & & & & & && & & & \\
\begin{tabular}{c}
Modified \\
$E$-model \\
\end{tabular}
& 8 & 46 & 77 & 119 & 191 & 191 & 158 & 161 & 49 & 7.312 & 5593.494\\
$E$-model\footnote{Where $k_{1} = k_{2} = 0.5$.} & 7 & 63 & 119 & 135 & 200 & 160 & 98 & 134 & 84 & 5.937 & 5139.645\\
$V$-model & 8 & 46 & 77 & 119 & 191 & 191 & 158 & 161 & 49 & 7.312 & 5593.494\\
$P$-model\footnote{Where $\tau = 6000$.} & 632 & 9 & 117 & 29 & 46 & 54 & 52 & 49 & 7 & 29.746 & 20820.660\\
\hline\hline
\end{tabular}
\end{scriptsize}
\end{minipage}
\end{center}
\end{table}
\section*{\normalsize CONCLUSIONS}
It is difficult to suggest general rules for the selection of a method in stochastic matrix
mathematical programming (\ref{smamp}). This conclusion rests on several observations:
potentially, there is an infinite number of possible definitions of the value
function $f(\cdot)$; the value function approach is not the only way to restate
(\ref{smamp}); and there exist many ways to solve (\ref{smamp}) from a stochastic point of view.
We believe that this choice lies with the person skilled in the particular field and with
his or her capacity to discern which function or approach best reflects and meets the
objectives of the study.
In this paper, the problem of optimal allocation in multivariate stratified sampling
was considered. In all sample size problems there is uncertainty regarding the population parameters,
and in this work this uncertainty was incorporated via a stochastic matrix mathematical programming solution.
\section*{\normalsize ACKNOWLEDGMENTS}
This research work was partially supported by IDI-Spain, Grants No. FQM2006-2271 and
MTM2008-05785, supported also by CONACYT Grant CB2008 Ref. 105657. This paper was written during J. A. D\'{\i}az-Garc\'{\i}a's stay as a visiting
professor at the Department of Probability and Statistics of the Center of Mathematical Research,
Guanajuato, M\'exico.
\chapter{Processing Transient Alerts}
\label{chap:alerts}
\chaptoc{}
\section{Introduction}
\label{sec:alerts_intro}
\begin{colsection}
In this chapter I describe the software used by GOTO to process alerts generated from transient astronomical events, including gravitational-wave detections.
\begin{itemize}
\item In \nref{sec:transient_alerts} I give an overview of the established systems through which the LVC, NASA and other organisations publish and distribute astronomical alerts.
\item In \nref{sec:gotoalert} I describe the GOTO-alert Python package, and how transient alerts are received and processed.
\item In \nref{sec:strategy} I describe how the optimal follow-up strategy is determined for different types of alert, and how the targets are defined for GOTO to observe.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated, and has not been published elsewhere.
\end{colsection}
\section{Transient event alerts}
\label{sec:transient_alerts}
\begin{colsection}
For the last few decades the number of detections of transient astronomical sources has been rapidly increasing. From space-based gamma-ray burst monitors such as \textit{Fermi} and \textit{Swift} to wide-field survey telescopes such as the \glsfirst{ptf} and \glsfirst{asassn}, increasing numbers of time-critical events have been detected and rapidly sent to other observing partners for follow-up campaigns. Historically this was done, at a much slower pace, through physical post and telegrams --- hence the names of some of the current services offering modern, email-based alternatives: the Astronomer's Telegram \citep{ATel} and the \glsfirst{gcn} Circulars and Notices \citep{GCN}.
Today the global system has evolved to remove the human factor entirely, in order to reduce the delay between events being detected and follow-up observations being taken. Robotic telescopes are now common and can be triggered automatically by machine processing of alerts, which are themselves generated automatically by the detection instruments. The \glsfirst{ivoa} VOEvent protocol \citep{voevent} has become the standard language for such robotic communications, allowing telescopes around the world to respond within seconds to transient detections. New projects such as the \glsfirst{ztf}, itself paving the way for the forthcoming \glsfirst{lsst}, will produce millions of events per night requiring even faster and more efficient alert systems \citep{ZTF_alerts}.
GOTO's priority is, of course, detecting optical counterparts to gravitational-wave events. Such events are published by the LIGO-Virgo Collaboration as VOEvents through the GCN Notice system.
\newpage
\end{colsection}
\subsection{GCN alerts}
\label{sec:gcn}
\begin{colsection}
The Gamma-ray burst Coordinates Network, also known as the Transient Astronomy Network or together GCN/TAN, is a system hosted by NASA originally to publish alerts relating to gamma-ray burst detections \citep{GCN}. It publishes events from a variety of telescopes including \textit{INTEGRAL}, \textit{Fermi} and \textit{Swift}, and more recently has expanded to neutrino and gravitational-wave alerts --- including publishing alerts from the LIGO-Virgo Collaboration.
Alerts are produced by the various facilities in the form of GCN Notices, standard machine-readable text messages that are distributed by the network. Notices are designed to be written and sent out automatically by the facility without the need for human intervention, and likewise can be received and acted on by automated systems run by follow-up projects such as GOTO's sentinel (see \aref{sec:sentinel}). Transmitting notices is only done by the projects that are part of the network.
There is a second form of alerts distributed by the network called GCN Circulars. Unlike notices, circulars are intended to be written by humans and sent out by email to anyone subscribed to the distribution list. They act as a formal, citable way to share information about events, both the initial detection by the facility and any follow-up activity by other groups.
\end{colsection}
\subsection{VOEvents}
\label{sec:voevents}
\begin{colsection}
The GCN/TAN system broadcasts notices in multiple ways over many different channels, but the most useful for automated telescopes such as GOTO uses the IVOA VOEvent standard \citep{voevent}. VOEvents are a standard way to transmit information about transient astronomical events, in a structured format to make the reports easily machine-readable. Each event is assigned an \glsfirst{ivorn} and follows a defined schema. By defining a standard template to follow, diverse events can be automatically processed and robotic telescopes triggered without the need for human interpretation or vetting.
The structure to transmit these events is fairly flexible, but there are certain common roles. The names below are taken from \citet{voevent}:
\begin{itemize}
\item \textbf{Authors} are the projects, facilities or institutions that create the original data worthy of reporting in the VOEvent.
\item \textbf{Publishers} take the information about the astronomical event, put it into the VOEvent format and broadcast it from their servers.
\item \textbf{Brokers} act as nodes in the communication web that can take in events from multiple publishers and rebroadcast them in a single stream.
\item \textbf{Subscribers} are the end users that listen to VOEvent servers, either directly to the publishers or to a broker.
\end{itemize}
In some cases the above roles can be combined, but for the case of GOTO receiving gravitational-wave events there are distinct actors: the LIGO-Virgo Collaboration is the event's author, NASA and the GCN/TAN system are the publishers and the GOTO sentinel is the subscriber.
It is possible to listen directly to the GCN/TAN servers, in which case the sentinel would receive VOEvents from the NASA missions and other projects like LIGO.\@ However, there are other groups publishing their own VOEvents, separate to the GCN system, which we might want to receive. It would be possible to run multiple event listeners within the sentinel, each listening to a different server, but it is much easier to listen to a broker that already does that and provides a single point of access to these pipelines.
The broker listened to by the G-TeCS sentinel is the 4 Pi Sky VOEvents Hub \citep{4pisky}. 4 Pi Sky combines alerts from the GCN system\footnote{\url{https://gcn.gsfc.nasa.gov/burst_info.html}} as well as from the \textit{Gaia}\footnote{\url{http://gsaweb.ast.cam.ac.uk/alerts/alertsindex}} and ASAS-SN\footnote{\href{http://www.astronomy.ohio-state.edu/~assassin/transients}{\texttt{http://www.astronomy.ohio-state.edu/\raisebox{0.5ex}{\texttildelow}assassin/transients}}} projects. At the time of writing GOTO only follows up LVC gravitational-wave events and \textit{Fermi} and \textit{Swift} gamma-ray burst detections, all of which are published through GCNs, meaning there is technically no benefit of listening to 4 Pi Sky over listening directly to NASA.\@ However pt5m uses the 4 Pi Sky broker to receive and automatically follow up \textit{Gaia} transient detections, and it has been suggested GOTO could do the same in the future.
In order to receive VOEvents from any source it is necessary to set up a VOEvent client. The most common way to do this is using the Comet software \citep{comet}, which allows both sending and receiving of events. For the G-TeCS sentinel (see \aref{sec:sentinel}) all that was required was a simple way to listen to and download alerts, which is why it instead uses code based on the PyGCN Python package (\texttt{pygcn}\footnote{\url{https://github.com/lpsinger/pygcn}}). Despite the name, PyGCN can receive any VOEvents, not just those from the GCN servers. The sentinel uses PyGCN to open a socket to the 4 Pi Sky server and ingest binary packets, as well as sending the required receipt and ``\texttt{iamalive}'' responses to the server to ensure it keeps receiving events.
VOEvents take the form of a structured \glsfirst{xml} document. XML is a ``markup'' language similar to HTML, JSON or \LaTeX, meaning it is understandable by humans but follows a set schema and so can be easily read and processed by computers. A sample of a VOEvent is given in \aref{fig:voevent_xml}.
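As a minimal illustration of this machine-readability (the payload below is hand-written for this example and much simpler than a real LVC packet), the key fields of a VOEvent can be pulled out with the Python standard library's XML parser:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written VOEvent-like payload (illustrative only --
# real LVC packets contain many more groups and parameters).
PAYLOAD = b"""<?xml version="1.0" encoding="UTF-8"?>
<voe:VOEvent xmlns:voe="http://www.ivoa.net/xml/VOEvent/v2.0"
             role="observation" version="2.0"
             ivorn="ivo://example/LVC#MS181101ab-1-Preliminary">
  <What>
    <Param name="skymap_fits" value="https://example.org/skymap.fits.gz"/>
    <Param name="BNS" value="0.95"/>
  </What>
</voe:VOEvent>
"""

root = ET.fromstring(PAYLOAD)
ivorn = root.attrib["ivorn"]   # unique identifier of this event
role = root.attrib["role"]     # "observation" for real events, "test" otherwise
params = {p.attrib["name"]: p.attrib["value"] for p in root.iter("Param")}

assert role == "observation"
assert params["BNS"] == "0.95"
```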
\begin{figure}[p]
\lstinputlisting[language=xml,
tabsize=2,
breaklines=true,
keywordstyle={},
stringstyle=\color{red},
showstringspaces=false,
basicstyle=\ttfamily\scriptsize,
emph={voe,Who,What,WhereWhen,How,Citations},
emphstyle={\color{magenta}},
columns=fullflexible
]{images/voevent.xml}
\caption[VOEvent XML sample]{
A sample of VOEvent text from an LVC event, formatted so the core XML structure is visible. Some of the key pieces of information are the role and IVORN defined in the header, the skymap URL, and the event classification probabilities.
}\label{fig:voevent_xml}
\end{figure}
\newpage
\end{colsection}
\section{Processing alerts}
\label{sec:gotoalert}
\begin{colsection}
Once a VOEvent is received by the G-TeCS sentinel, the task of parsing and processing the event uses another Python package, GOTO-alert (\texttt{gotoalert}\footnote{\url{https://github.com/GOTO-OBS/goto-alert}}), which contains functions related to processing transient alerts. GOTO-alert was originally written by Alex Obradovic at Monash to listen for GRB alerts; when I took the code over I rewrote it to integrate it into G-TeCS, as well as adding the capability to process gravitational-wave alerts.
\end{colsection}
\subsection{Event classes}
\label{sec:event_classes}
\begin{colsection}
\begin{table}[t]
\begin{center}
\begin{tabular}{clll}
Packet type & Source & Notice type & Event subclass \\
\midrule
\texttt{61} & NASA/\textit{Swift} & \texttt{SWIFT\_BAT\_GRB\_POS} & \texttt{GRBEvent} \\
\texttt{115} & NASA/\textit{Fermi} & \texttt{FERMI\_GBM\_FIN\_POS} & \texttt{GRBEvent} \\
\texttt{150} & LVC & \texttt{LVC\_PRELIMINARY} & \texttt{GWEvent} \\
\texttt{151} & LVC & \texttt{LVC\_INITIAL} & \texttt{GWEvent} \\
\texttt{152} & LVC & \texttt{LVC\_UPDATE} & \texttt{GWEvent} \\
\texttt{164} & LVC & \texttt{LVC\_RETRACTION} & \texttt{GWRetractionEvent} \\
\end{tabular}
\end{center}
\caption[GCN notices recognised by the GOTO-alert event handler]{
GCN notices and corresponding classes recognised by the GOTO-alert event handler. The packet type is used by the GCN system to identify the class of event.
}\label{tab:events}
\end{table}
At the core of the GOTO-alert code is the \texttt{Event} object class. Events are Python classes created from the raw VOEvent XML payload received by the PyGCN listener, containing the basic event information (IVORN, type, source, etc.). Once the basic Event is created it is checked against an internal list of so-called ``interesting'' event packet types --- the ones we care about processing for GOTO.\@ At the time of writing these are \texttt{SWIFT\_BAT}, \texttt{FERMI\_GBM} and \texttt{LVC} events, as listed in \aref{tab:events}. If the event matches any of the recognised packet types then the Event is subclassed into a new object, which allows more specific properties and methods to be defined. The current subclasses are as follows:
\subsubsection{GRB Events}
The \texttt{GRBEvent} class is used for events relating to gamma-ray burst detections, specifically from \textit{Fermi} and \textit{Swift}. The VOEvents for these events contain a sky position in right ascension and declination as well as an error radius, so a HEALPix skymap is produced using the Gaussian method described in \aref{sec:grb_skymaps}. For \textit{Fermi} events the class also has an additional attribute extracted from the VOEvent: the duration of the burst (Long or Short).
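The essence of the Gaussian skymap construction can be sketched with a flat-sky approximation (this simplification, and the assumption that the reported error radius is a 68\%-containment radius, are mine; the actual construction is described in \aref{sec:grb_skymaps}):

```python
import math

def sigma_from_error_radius(r68_deg):
    """Convert a 68%-containment error radius into the Gaussian width sigma,
    using the 2D flat-sky relation P(<r) = 1 - exp(-r^2 / (2 sigma^2))."""
    return r68_deg / math.sqrt(2.0 * math.log(1.0 / (1.0 - 0.68)))

def gaussian_prob_within(r_deg, sigma_deg):
    """Probability enclosed within angular radius r of the burst position."""
    return 1.0 - math.exp(-r_deg**2 / (2.0 * sigma_deg**2))

sigma = sigma_from_error_radius(3.0)  # e.g. a 3-degree error radius
# By construction, 68% of the probability lies within the error radius.
assert abs(gaussian_prob_within(3.0, sigma) - 0.68) < 1e-12
```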
\subsubsection{GW Events}
The \texttt{GWEvent} class is used for LVC gravitational-wave events. LVC events have several stages: a ``Preliminary'' alert is released as soon as the signal is detected, then an ``Initial'' alert is issued once it has been human-vetted. From then on future versions are marked as ``Update'' alerts, unless the event itself is found to be non-physical or below certain thresholds, in which case a ``Retraction'' alert is issued. As these are events produced by LIGO-Virgo they should contain a \texttt{skymap\_fits} parameter (as shown in \aref{fig:voevent_xml}), which gives a URL pointing to where the skymap can be downloaded from GraceDB, the LIGO event database\footnote{\url{https://gracedb.ligo.org}}. The gravitational-wave VOEvents also contain a variety of properties that are stored in the event class and which can be used to determine the observing strategy (see \aref{sec:event_strategy}) to use. These include:
\begin{itemize}
\item \textbf{\glsfirst{far}}: an estimate of how often a signal at least this significant would arise as a false alarm, i.e.\ not from a real astronomical source. Given in the form of an expected frequency or rate, so an event with a false alarm rate of 1~per~year is much less significant than one with a FAR of 1~per~10,000 years.
\item \textbf{Instruments}: which of the active gravitational-wave detectors (currently LIGO-Livingston, LIGO-Hanford and Virgo) detected the signal. A non-detection in one or more instruments is also accounted for in the false alarm rate.
\item \textbf{Group}: which type of GW pipeline detected the event signal, either ``CBC'' (Compact Binary Coalescence)\glsadd{cbc} or ``Burst'' \citep[other, unmodelled detections, see][]{GW_burst}. The following parameters only apply to CBC events.
\item \textbf{Classification}: the VOEvents for CBC events include probabilities that the source falls into one of five categories: \glsfirst{bns} mergers, \glsfirst{nsbh} mergers, \glsfirst{bbh} mergers, ``MassGap'' mergers \citep[one or other of the components is in the hypothetical ``mass gap'' between neutron stars and black holes, defined as 3--\SI{5}{\solarmass};][]{GW_MassGap}, or ``Terrestrial'' (a non-astronomical source).
\item \textbf{Properties}: CBC events also contain two important properties: ``HasNS'', the probability that the mass of one or both of the components is consistent with a neutron star ($<$\SI{3}{\solarmass}); and ``HasRemnant'', the probability that a non-zero amount of material was ejected during coalescence and therefore an electromagnetic signal might be expected \citep{LVC_userguide}.
\item \textbf{Distance}: The skymaps produced by the LVC contain three-dimensional localisation information \citep{GW_distance}. For deciding on event strategy the mean distance, in megaparsec, is read from the skymap FITS header, along with the standard deviation.
\end{itemize}
\subsubsection{GW Retraction Events}
\texttt{GWRetractionEvent} is a special event subclass used to handle LVC notices that are retractions of earlier events. They are effectively just a more limited version of the \texttt{GWEvent} class, as the retraction VOEvents do not contain a skymap or any of the additional parameters listed above. Having retraction events occupy their own subclass makes it easier to identify and process them when sent through the event handler.
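The subclassing logic described above can be sketched as follows (class and function names here are an illustrative mirror of the design, not the actual GOTO-alert code):

```python
class Event:
    """Base class holding the information common to every VOEvent."""
    def __init__(self, ivorn, packet_type):
        self.ivorn = ivorn
        self.packet_type = packet_type

class GRBEvent(Event):
    pass

class GWEvent(Event):
    pass

class GWRetractionEvent(Event):
    pass

# Packet types recognised as "interesting", following Table tab:events.
EVENT_CLASSES = {
    61: GRBEvent,            # SWIFT_BAT_GRB_POS
    115: GRBEvent,           # FERMI_GBM_FIN_POS
    150: GWEvent,            # LVC_PRELIMINARY
    151: GWEvent,            # LVC_INITIAL
    152: GWEvent,            # LVC_UPDATE
    164: GWRetractionEvent,  # LVC_RETRACTION
}

def subclass_event(event):
    """Re-create the Event as the appropriate subclass, if recognised."""
    cls = EVENT_CLASSES.get(event.packet_type, Event)
    return cls(event.ivorn, event.packet_type)

event = subclass_event(Event("ivo://example#S190426c-1-Preliminary", 150))
assert type(event) is GWEvent
```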
\newpage
\end{colsection}
\subsection{The event handler}
\label{sec:event_handler}
\begin{colsection}
Once the correct \texttt{Event} class has been created the sentinel passes it to the GOTO-alert \texttt{event\_handler} function. Before processing the event, the handler first filters out any unwanted events which do not come under any of the above subclasses. These are primarily alerts from other facilities (\textit{INTEGRAL}, \textit{Gaia} etc) or other event classes that are not ``interesting'' to GOTO (\textit{Fermi} releases several types of alerts, but only the final GBM positions are processed). If the event passed to the event handler is not marked as ``interesting'' then it is rejected at this stage and the handler exits. The handler will also intentionally reject an event that is ``interesting'' if it has an incorrect role. LVC sends out test VOEvents to allow full testing of any follow-up systems; these are identical to real events (even including simulated skymaps) but are explicitly marked with the role of \texttt{test} rather than \texttt{observation}. These can be optionally processed by the event handler, but in the live sentinel system they are rejected at this point.
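The filtering step can be summarised in a few lines (an illustrative sketch of the logic described above, not the actual handler code):

```python
# Packet types the handler considers "interesting" (see Table tab:events).
INTERESTING_PACKET_TYPES = {61, 115, 150, 151, 152, 164}

def passes_filter(packet_type, role, process_test_events=False):
    """Return True if an event should be processed further.

    Events must be of an 'interesting' packet type, and LVC test events
    (role="test" rather than "observation") are rejected unless test
    processing is explicitly enabled.
    """
    if packet_type not in INTERESTING_PACKET_TYPES:
        return False
    if role == "test" and not process_test_events:
        return False
    return True

assert passes_filter(150, "observation")
assert not passes_filter(150, "test")        # LVC test event, rejected
assert not passes_filter(97, "observation")  # uninteresting packet type
```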
If the event passes the above filter the next step is to download the event's skymap (for GW events, see \aref{sec:skymaps}) or create a corresponding Gaussian skymap (for GRB events, see \aref{sec:grb_skymaps}). Doing this after filtering out uninteresting events avoids wasting time and space downloading or creating skymaps that will not be used, for example for LVC test events.
Once the event has a skymap the observing strategy for the event can be generated. This can only happen after the skymap is downloaded as some parameters, notably the distance for GW events, are only stored within the skymap headers instead of in the VOEvent XML.\@ The details of the different event strategies and how they are defined are given in \aref{sec:event_strategy}.
Finally, the event handler inserts pointings and other information into the observation database, with parameters depending on the strategy determined in the previous stage. This is detailed in \aref{sec:event_insert}.
\newpage
\end{colsection}
\subsection{Event reports}
\label{sec:event_slack}
\begin{colsection}
Throughout the event handling process, GOTO-alert has the option of sending confirmation messages to Slack in a similar way to the G-TeCS pilot (see \aref{sec:slack}). This allows human observers to be informed of any new alerts processed by the sentinel, as well as the expected outcome of upcoming observations. Alerts are deliberately spaced out at different stages within the event handler, rather than all sent at the end, to make it obvious if a problem occurs and one or more alerts are missing.
Four alerts are sent out in total:
\begin{enumerate}
\item An \textbf{initial} alert is sent out by the sentinel as soon as an interesting event is received, and contains only one line reporting the IVORN of the event to be processed. Sending this first means there is a record should an error subsequently occur.
\item Next, the \textbf{event} alert is sent out by GOTO-alert once the event has been created and the skymap downloaded. It contains the key information and properties of the event, as well as a plot of the skymap produced by GOTO-tile.
\item The \textbf{strategy} alert is sent out after the event observing strategy has been retrieved, and reports the contents of the strategy dictionaries (see \aref{sec:event_strategy}).
\item Finally, the \textbf{visibility} alert is sent out after the event tiles have been added to the observation database (see \aref{sec:event_insert}). As well as showing the number of tiles selected and their combined skymap coverage, this alert also predicts which tiles will be visible from the GOTO site on La Palma over the valid observing period. This is only based on tile altitudes during the night, and excludes other factors (weather conditions, the position of the Moon, any higher-priority pointings that would take precedence) considered by the G-TeCS just-in-time scheduler (see \aref{sec:ranking}).
\end{enumerate}
An example of alerts generated for a gravitational-wave event are shown in \aref{fig:gotoalert_slack}.
\begin{sidewaysfigure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/slack_alert_side2.png}
\end{center}
\caption[Slack alerts created by GOTO-alert for a GW event]{
Slack alerts generated by GOTO-alert for GW event S190426c \citep{S190426c}.
The event alert (left) includes key information about the event gathered from the GCN notice. For GW events that includes FAR, classification and distance information, a link to the GraceDB page and a skymap generated by GOTO-tile.
The strategy alert (centre) details the event observing strategy (see \aref{sec:event_strategy}).
The visibility alert (right) reports how many tiles were inserted into the observation database and what skymap probability they cover. The attached plot shows a prediction of which tiles will be visible during the valid observing period.
}\label{fig:gotoalert_slack}
\end{sidewaysfigure}
\end{colsection}
\section{Strategies for follow-up observations}
\label{sec:strategy}
\begin{colsection}
An important function of the GOTO-alert event handler is to determine the specific strategy to be used for follow-up observations of each interesting event. Through the G-TeCS observation database (see \aref{sec:obsdb}), observations can be tailored to the properties of the triggering event, either by altering the validity, priority and cadence of the pointings inserted into the scheduler queue (see \aref{sec:ranking}) or by customising the commands issued by the pilot when each pointing is selected (see \aref{sec:pilot}).
The term \emph{strategy} is used deliberately to differentiate from the actions carried out by the on-site pilot, which are better called \emph{tactics}. Properly defined, strategy considers long-term aims and objectives (consider generals directing a war far from the front lines, or the coach of a sports team), whereas tactics are the on-the-ground implementation details used to move towards those objectives (determined by the captain in the trenches or on the pitch). The sentinel decides the observing strategy for a particular event, using the structures and functions within GOTO-alert. These decisions are then communicated to the pilot through the observation database, which decides what to do independently using the scheduler and the local conditions. Only then are commands sent to the hardware daemons to put the plan into action: like the soldiers on the battlefield or players on the pitch, theirs is not to reason why but to carry out their orders as issued.
The importance of such a distinction is that the strategies and objectives decided by the sentinel can only ever be aspirational, for the best-case scenario. It can decide a follow-up plan for an event, but if the location is not visible from La Palma, or it is currently raining, then the pilot will be unable to implement it. The sentinel is designed to be aspirational and not consider smaller details such as these in order to make it independent of physical hardware. Ultimately GOTO is envisioned to occupy multiple sites across the world, as described in \aref{sec:goto_expansion}, but the intention is that they will all still be taking orders from a single central sentinel and database (see \aref{sec:gtecs_future}).
\end{colsection}
\subsection{Determining observation strategies for events}
\label{sec:event_strategy}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/strategy_flowchart.pdf}
\end{center}
\caption[Decision tree for determining event strategy]{
Decision tree for determining event strategy.
}\label{fig:strategy_flowchart}
\end{figure}
In order to decide the observation strategy for a given event, the GOTO-alert event handler uses the decision tree shown in \aref{fig:strategy_flowchart}. The codes at the end of the branches (\texttt{GW\_CLOSE\_NS}, \texttt{GRB\_FERMI}, etc) are the individual strategies, and they correspond to keys in the strategy dictionary defined within GOTO-alert as shown in \aref{tab:strategy_dict}. Each strategy corresponds to an integer rank as well as further keys relating to cadence, constraints and exposure sets which are keys in three additional dictionaries shown in \aref{tab:cadence_dict}, \aref{tab:constraints_dict} and \aref{tab:exposuresets_dict}. Through this structure all the values required for inserting an event into the database are defined, and it is also simple to modify strategies or add in new ones. The reasoning behind each strategy is outlined below.
\clearpage
\begin{table}[!p]
\begin{center}
\begin{tabular}{lclll}
Strategy & Rank & Cadence & Constraints & ExposureSets \\
\midrule
\texttt{GW\_CLOSE\_NS} & 2 & \texttt{NO\_DELAY} & \texttt{LENIENT} & \texttt{3x60L} \\ %
\texttt{GW\_FAR\_NS} & 13 & \texttt{NO\_DELAY} & \texttt{LENIENT} & \texttt{3x60L} \\ %
\texttt{GW\_CLOSE\_BH} & 24 & \texttt{TWO\_NIGHTS} & \texttt{LENIENT} & \texttt{3x60L} \\ %
\texttt{GW\_FAR\_BH} & 105 & \texttt{TWO\_NIGHTS} & \texttt{LENIENT} & \texttt{3x60L} \\ %
\texttt{GW\_BURST} & 52 & \texttt{NO\_DELAY} & \texttt{LENIENT} & \texttt{3x60L} \\ %
\texttt{GRB\_SWIFT} & 207 & \texttt{TWO\_FIRST\_ONE\_SECOND} & \texttt{NORMAL} & \texttt{3x60L} \\ %
\texttt{GRB\_FERMI} & 218 & \texttt{TWO\_FIRST\_ONE\_SECOND} & \texttt{NORMAL} & \texttt{3x60L} \\ %
\end{tabular}
\end{center}
\caption[Event strategy dictionary keys]{
Event strategy dictionary keys. The ranks are used by the scheduler to sort pointings (see \aref{sec:rank}). The cadence values are matched to \aref{tab:cadence_dict}, constraints values to \aref{tab:constraints_dict} and ExposureSets to \aref{tab:exposuresets_dict}.
}\label{tab:strategy_dict}
\end{table}
\begin{table}[!p]
\begin{center}
\begin{tabular}{lccc}
Cadence & Max visits & Visit delay (hours) & Valid days \\
\midrule
\texttt{NO\_DELAY} & 99 & 0 & 3 \\
\texttt{TWO\_NIGHTS} & 2 & 12 & 3 \\
\texttt{TWO\_FIRST\_ONE\_SECOND} & 3 & 4, 12 & 3 \\
\end{tabular}
\end{center}
\caption[Cadence strategy dictionary keys]{
Cadence strategy dictionary keys, used to define pointings in the observation database (see \aref{sec:obsdb}).
}\label{tab:cadence_dict}
\end{table}
\begin{table}[!p]
\begin{center}
\begin{tabular}{lcccc}
Constraints & Min Alt & Max Sun Alt & Min Moon Separation & Max Moon Phase \\
\midrule
\texttt{NORMAL} & \SI{30}{\degree} & \SI{-15}{\degree} & \SI{30}{\degree} & Bright \\
\texttt{LENIENT} & \SI{30}{\degree} & \SI{-12}{\degree} & \SI{30}{\degree} & Bright \\
\end{tabular}
\end{center}
\caption[Constraints strategy dictionary keys]{
Constraints strategy dictionary keys, used to define the constraints applied by the scheduler to determine if a pointing is valid (see \aref{sec:constraints}).
}\label{tab:constraints_dict}
\end{table}
\begin{table}[!p]
\begin{center}
\begin{tabular}{lcccc}
ExposureSets & Set position & Number of exposures & Exposure time & Filter \\
\midrule
\texttt{3x60L} & 1/1 & 3 & \SI{60}{\second} & \textit{L} \\ %
\texttt{3x60RGB} & 1/3 & 1 & \SI{60}{\second} & \textit{R} \\ %
& 2/3 & 1 & \SI{60}{\second} & \textit{G} \\ %
& 3/3 & 1 & \SI{60}{\second} & \textit{B} \\ %
\end{tabular}
\end{center}
\caption[ExposureSets strategy dictionary keys]{
ExposureSets strategy dictionary keys, used by the pilot when adding exposure sets to the exposure queue (see \aref{sec:exq}).
}\label{tab:exposuresets_dict}
\end{table}
\clearpage
\subsubsection{Event sources}
The first distinction to make is between gravitational-wave and gamma-ray burst events. GOTO's primary aim is searching for GW counterparts, with any GRB follow-up being a useful but decidedly lower-priority use of GOTO's time. Therefore all GW events have ranks considerably higher than GRB events and, due to their importance to the project, GW events also use more lenient observing constraints (defined in \aref{tab:constraints_dict}) than the normal ones used by GRB events and the all-sky survey.
\subsubsection{Gravitational-wave sources}
The highest priority for GOTO should always be GW events that are predicted to contain a neutron star component, as they are the ones that are expected to produce an electromagnetic counterpart that GOTO could detect (see \aref{sec:gw_sources}, and the definitions of ``HasNS'' and ``HasRemnant'' in \aref{sec:event_classes}).
Neutron star events (which can include neutron star-black hole binaries and MassGap binaries) have no delay between visits (see \aref{tab:cadence_dict}), meaning once a pointing is completed it will be immediately re-inserted into the queue (for each observation the effective rank will be increased by 10, as described in \aref{sec:rank}). For these events, once GOTO has observed all of the visible tiles once it will immediately start covering the skymap again, and by default this will continue until it reaches the stop time three days after the event (or until it reaches 99 observations of each tile, which is just inserted as a nominal maximum and is not expected to be reached within three nights).
Binary black hole (BBH) events, on the other hand, have a more limited \texttt{TWO\_NIGHTS} strategy, which only requires two observations of each tile with at least 12 hours between them. The 12~hour delay in practice means observing on two subsequent nights: as the GOTO site on La Palma cannot see the northern celestial pole, circumpolar targets do not need to be considered, and even in winter the longest nights are less than 12 hours.
\newpage
For both classes of events it is expected that GOTO will want to respond immediately and attempt to image the localisation area as quickly as possible. The sole detection to date of an electromagnetic counterpart to a gravitational-wave event, AT~2017gfo associated with GW170817 (see \aref{sec:followup}), was first observed by Swope 10.9~hours after the event, and there remains a lot of uncertainty in how kilonovae appear in their early stages \citep{GW_kilonova_early}. Therefore even an early non-detection by GOTO would provide valuable information to constrain the lightcurve.
\subsubsection{Gravitational-wave distances}
For both neutron star and black hole events a distinction is also made between ``close'' and ``far'' events based on their reported source distance: a close neutron star event is defined to be within \SI{400}{\mega\parsec} while a close binary black hole event is within \SI{100}{\mega\parsec}.
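Combining these thresholds with the event properties gives a compact version of the decision tree (a simplified sketch of the branches in \aref{fig:strategy_flowchart}; it omits retractions and other edge cases):

```python
def choose_strategy(source, group=None, has_ns=False, distance_mpc=None):
    """Pick a strategy key for an event.

    source:       'GW', 'FERMI' or 'SWIFT'
    group:        'CBC' or 'Burst' (GW events only)
    has_ns:       whether a neutron-star component is expected (CBC only)
    distance_mpc: mean luminosity distance from the skymap header
    """
    if source == 'FERMI':
        return 'GRB_FERMI'
    if source == 'SWIFT':
        return 'GRB_SWIFT'
    # Gravitational-wave events:
    if group == 'Burst':
        return 'GW_BURST'
    if has_ns:  # "close" NS events are within 400 Mpc
        return 'GW_CLOSE_NS' if distance_mpc <= 400 else 'GW_FAR_NS'
    # "close" BBH events are within 100 Mpc
    return 'GW_CLOSE_BH' if distance_mpc <= 100 else 'GW_FAR_BH'

assert choose_strategy('GW', 'CBC', has_ns=True, distance_mpc=40) == 'GW_CLOSE_NS'
assert choose_strategy('GW', 'CBC', has_ns=False, distance_mpc=500) == 'GW_FAR_BH'
```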
Swope observed AT~2017gfo at \textit{i}$=17.057\pm0.018$ mag \citep{GW170817_Swope}, which at a distance of \SI{40}{\mega\parsec} corresponds to an absolute magnitude of $-16$. Using the equation for apparent magnitude
\begin{equation}
m-M = -5 +5\log_{10}(d),
\label{eq:absolute_magnitude}
\end{equation}
AT~2017gfo would have peaked above 19th magnitude out to a distance of \SI{100}{\mega\parsec} and above 22nd magnitude out to a distance of \SI{400}{\mega\parsec}. Binary black hole mergers are not expected to produce the same amounts of ejected matter that would produce a kilonova, but some predictions suggest there may be material ejected from disks around one or more of the black holes which might reach 22nd magnitude if closer than \SI{100}{\mega\parsec} \citep{BBH_EM}. As a first approximation, therefore, the 22nd magnitude limits were adopted for the close/far distinction. Note that this is an arbitrary division, based not particularly on GOTO's capability but on a more general division into sources that could have a counterpart discoverable by existing wide-field follow-up projects (see the limiting magnitudes in \aref{tab:rivals} in \aref{sec:goto_motivation}).
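The magnitude limits quoted above follow directly from \aref{eq:absolute_magnitude}; they can be checked with a few lines (the distance must be in parsecs, as the equation requires):

```python
import math

def apparent_magnitude(abs_mag, distance_pc):
    """Distance modulus: m = M + 5*log10(d) - 5, with d in parsecs."""
    return abs_mag + 5 * math.log10(distance_pc) - 5

# An AT 2017gfo-like source with absolute magnitude -16
print(round(apparent_magnitude(-16, 40e6), 1))   # 17.0 at 40 Mpc
print(round(apparent_magnitude(-16, 100e6), 1))  # 19.0 at 100 Mpc
print(round(apparent_magnitude(-16, 400e6), 1))  # 22.0 at 400 Mpc
```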
\newpage
The only difference in strategy between ``close'' and ``far'' events is that they are assigned different initial ranks when inserted into the database (shown in \aref{tab:strategy_dict}). The rest of the strategy values, including cadence and constraints, are identical, and therefore the division is completely academic except in the case where \emph{multiple} events exist in the observing queue at the same time. Should this happen, and tiles for both events are visible at the same time, then the ranking system provides a quick method to prioritise events for observations. Using the ranks given in \aref{tab:strategy_dict}, events from close neutron star mergers would be inserted at rank 2, while far mergers of the same type would be inserted at rank 13. As the rank of a pointing is increased by 10 every time it is observed (see \aref{sec:rank}), this would prioritise two passes of the ``close'' skymap before the ``far'' skymap. The first observation would be at rank 2, the second at rank 12, and by the third it would be in the queue at rank 22. This is lower than the initial rank of 13 for the ``far'' event, so the latter would then be higher in the queue. Any following observations would alternate between the two events: ``close'' observation 3 at rank 22, ``far'' observation 2 at rank 23, ``close'' observation 4 at rank 32 etc. The ranks for the other events are likewise carefully chosen: close BBH events would be inserted at rank 24 and therefore fall behind the first three passes of a close NS or two passes of a far NS.\@ Far BBH events are very unlikely to produce any visible optical counterparts, so although GOTO will still follow-up the alerts they are inserted at rank 105, well below several passes of any more promising GW events.
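The interleaving of ``close'' and ``far'' events described above follows mechanically from the rank rules: the queue always takes the lowest effective rank, and each observation adds 10. A short simulation (illustrative only; the real scheduler is described in \aref{chap:scheduling}) reproduces the sequence in the text:

```python
import heapq

def simulate_queue(initial_ranks, n_observations):
    """Illustrative simulation of the effective-rank queue: the lowest-rank
    event is observed next, and each observation raises its rank by 10."""
    # heap entries are (effective_rank, insertion_order, event_name)
    heap = [(rank, i, name) for i, (name, rank) in enumerate(initial_ranks.items())]
    heapq.heapify(heap)
    order = []
    for _ in range(n_observations):
        rank, i, name = heapq.heappop(heap)
        order.append((name, rank))
        heapq.heappush(heap, (rank + 10, i, name))
    return order

# Close NS event inserted at rank 2, far NS event at rank 13
print(simulate_queue({'close': 2, 'far': 13}, 6))
# [('close', 2), ('close', 12), ('far', 13), ('close', 22), ('far', 23), ('close', 32)]
```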
What is currently not considered by the strategy outlined above is a \textit{maximum} valid distance, beyond which GOTO would not respond to the alert. LVC detections have already reached out to the gigaparsec scale, and at those distances the chance of there being any optical counterpart visible from Earth is very low. At the time of writing GW events are rare enough that there is little reason for GOTO not to follow up every alert, but in the future if necessary it would be easy to limit which events GOTO follows up based on distance (or another parameter, such as the false-alarm rate).
\newpage
\subsubsection{Gravitational-wave burst alerts}
Gravitational-wave events from the LVC burst pipelines \citep{GW_burst} are hard to categorise, as there is very little information to base any observation strategy on (as noted in \aref{sec:event_classes}, alerts from unmodelled burst detections do not contain source classifications or predicted distances). As a compromise, burst events use the same cadence strategy as neutron star events, but are inserted at rank 52, below any more promising GW events but above binary black hole events. As described in \aref{sec:gw_sources}, to date no burst alerts, e.g.\ from supernovae, have been released by the LVC.\@ The only detections from the burst pipelines (that have been made public) have corresponded to compact binary coalescence events detected by the other pipelines.
\subsubsection{Gamma-ray burst alerts}
Gamma-ray burst alerts from \textit{Fermi} and \textit{Swift} are also processed by the sentinel and inserted into the observation database. As shown in \aref{tab:strategy_dict}, pointings from GRB events are inserted at ranks above 200, ensuring that GOTO will always prioritise gravitational-wave follow-up. GRB events use a different cadence strategy of \texttt{TWO\_FIRST\_ONE\_SECOND}, which, as detailed in \aref{tab:cadence_dict}, consists of three observations: two in the first night separated by at least 4 hours and another on the second night. This cadence was recently implemented to attempt to account for the fast-fading nature of GRB afterglows, and is an example of more complex cadences allowed by the G-TeCS scheduler.
For gamma-ray burst events the only distinction made is between events originating from \textit{Swift} and those from \textit{Fermi}. Typically \textit{Swift}'s \glsfirst{bat} detections are much better localised than those from \textit{Fermi}'s \glsfirst{gbm}, and so, in cases where a source is detected by both, the \textit{Swift} detection should be prioritised. Further division could be made based on the burst being classified by \textit{Fermi} as Long or Short, but this is not currently implemented.
\subsubsection{Exposure sets}
Each strategy currently defined in \aref{tab:strategy_dict} uses the same exposure set definition: \texttt{3x60L}. From \aref{tab:exposuresets_dict} this comprises three sequential \SI{60}{\second} exposures in the \textit{L} filter, the same as the all-sky survey. A different set of exposures using the coloured filters instead is shown in \aref{tab:exposuresets_dict} as \texttt{3x60RGB}; this is a possible example of what could be defined using the G-TeCS system, but it is not used as part of any current strategy. %
\subsubsection{Subsequent strategy alterations}
The strategies detailed in the above sections are designed to inform the default, automatic reaction of GOTO to any incoming alert. They are not intended to be a perfect reaction for every case on their own, and later human-guided input is to be expected. For example, the default strategy for a gravitational-wave alert from a close neutron star source is to observe the inserted tiles over and over until the three-day limit has passed. In practice, it should be clear after the first few passes if there is any counterpart candidate, in which case a human could intervene and direct GOTO to take observations of particular tiles with promising candidates. Likewise, if after the first pass of a distant binary black hole event skymap no candidates are detected, and nothing has been reported from other facilities, the decision could be made to skip the second pass the day after and just return to the all-sky survey.
Another possible modification to a follow-up campaign could be the inclusion of feedback from the GOTOphoto detection pipeline (described in \aref{sec:gotophoto}). In the same way as a human could take over observations described above, should the pipeline detect a promising source it could be allowed to trigger GOTO to re-observe that tile, either automatically or after human vetting. This would require significant development of the pipeline and sentinel, which is not a current priority, and is given as an example of possible future work in \aref{sec:software_future}.
\newpage
\end{colsection}
\subsection{Inserting events into the observation database}
\label{sec:event_insert}
\begin{colsection}
Once the strategy has been determined for an event, based on the details described in \aref{sec:event_strategy}, then the sentinel needs to insert pointings into the observation database so they are visible to the scheduler (see \aref{sec:obsdb} and \aref{sec:scheduler}). This involves mapping the event skymap on to the all-sky grid, as well as accounting for any previous detections of the same event.
\subsubsection{Previous records}
Before inserting any new pointings, the sentinel event handler first checks for any existing records of the new event in the observation database \texttt{events} table. This is done to update event pointings as revised VOEvent alerts are received, or in order to process retraction events. As described in \aref{sec:event_classes} there are several types of LVC alerts: ``Preliminary'' alerts are released first, followed by ``Initial'' alerts when the detections are confirmed, updated notices are released as ``Update'' alerts, and ``Retraction'' alerts are issued if the detection is later retracted. At any of these stages the event skymap might be modified, shifting the area to observe, and the pointings in the observation database will therefore need to be updated. This is most common for gravitational-wave events, as the initial skymaps are typically created using the rapid BAYESTAR pipeline \citep{BAYESTAR}, while later updated skymaps are made using the slower LALInference code \citep{LALInference}. If a previous entry for a new event is detected then all of the old pointings that are still pending in the queue are deleted, before the new ones are added as described below. Should the event be of type \texttt{GWRetractionEvent} then this is where the event handler exits, as once the previous pointings are deleted there are none to replace them.
\subsubsection{Mapping onto the all-sky grid}
Once any existing pointings have been removed, new entries in the database need to be created. All GOTO pointings from the sentinel are defined \textit{on-grid}, meaning that they need to be mapped onto the current GOTO-tile sky grid (see \aref{sec:gototile}). The database \texttt{grids} table contains the field of view and overlap parameters of the current grid as well as the algorithm used (see \aref{sec:algorithms}), allowing it to be reconstructed using GOTO-tile within the event handler. Once a GOTO-tile \texttt{SkyGrid} class has been created, then a corresponding \texttt{SkyMap} class is made based on the information in the event. If the event was from a gravitational-wave alert then it should have a URL to download the LVC-created skymap (shown in \aref{fig:voevent_xml}). If instead it just has a coordinate and error radius, i.e.\ it is a GRB alert, then a new Gaussian skymap is constructed as described in \aref{sec:grb_skymaps}. Once both grid and map are ready then the skymap is mapped onto the grid as described in \aref{sec:mapping_skymaps}, using the class method \texttt{SkyGrid.apply\_skymap(SkyMap)}. This returns a table of tiles and associated contained probabilities, which are used to create the database pointings. %
\subsubsection{Selecting tiles}
The tile probability table created by GOTO-tile contains entries for every one of the thousands of tiles in the all-sky grid, of which the vast majority will contain only a very small amount of the overall probability for any reasonably-well located skymap. Adding an entry to the database for every tile would therefore be unnecessary, and even harmful to the follow-up observations. The scheduling system is designed to complete a full pass of the visible tiles before going on to re-observe those already completed, a consequence of the effective rank increasing by 10 each time a tile is observed (see \aref{sec:rank}), and so adding excess tiles would delay subsequent observations. On the other hand, it is still important to add in enough tiles covering enough of the probability area to maximise the chance of detecting the source. Therefore there is a balance required between adding too many or too few tiles into the database.
There are multiple ways to choose which tiles to select. Initially GOTO-alert used a simple cut-off in terms of each tile's contained probability, selecting tiles that contain greater than, for example, $1\%$ of the total probability. This hard limit however quickly proved to be unsuitable, as large, spread-out skymaps might have few if any tiles which reach the limit, while for well-localised events adding tiles down to the $1\%$ level is redundant and would waste time observing them compared to revisiting higher probability tiles. An attempt to correct this was to modify the probability cut-off for each skymap, making it a function of the highest tile probability. For example, a well-localised event (such as GW170817, see \aref{fig:170817_gw}) might result in the highest-probability tile containing $40\%$ of the probability; with a $P=0.1P_\text{max}$ cut-off all tiles containing 4\% or above would therefore be added to the database. On the other hand, a spread-out skymap might have a highest tile containing only 2\% of the probability, so then all tiles with 0.2\% or above would be added. Ultimately, a hard probability cut-off proved to be unresponsive to the spread of the skymap, and determining the relative limit (e.g.\ 0.1 in the above example) was hard to balance between large and small maps. The preferred method to select tiles is instead based on the 90\% probability contour; this method was discussed previously in \aref{sec:mapping_skymaps}.
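The contour-based selection can be expressed compactly: sort the tiles by contained probability and keep the smallest set whose cumulative probability reaches the 90\% level. The following is a sketch of the idea rather than the GOTO-tile code itself:

```python
def select_tiles(tile_probs, contour=0.9):
    """Keep the minimal set of highest-probability tiles whose summed
    probability reaches the given contour level (e.g. 90%)."""
    selected = []
    total = 0.0
    for name, prob in sorted(tile_probs.items(), key=lambda kv: -kv[1]):
        selected.append(name)
        total += prob
        if total >= contour:
            break
    return selected

# Well-localised skymap: a few tiles dominate, so only a few are selected
compact = {'T1': 0.40, 'T2': 0.30, 'T3': 0.21, 'T4': 0.05, 'T5': 0.04}
print(select_tiles(compact))  # ['T1', 'T2', 'T3']
```

Unlike a hard probability cut-off, this adapts automatically: a spread-out skymap with many low-probability tiles simply yields a longer list.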
\subsubsection{Adding database entries}
Once the tiles to add have been selected then entries can be inserted into the appropriate tables in the observation database (see \aref{sec:obsdb}).
First a new entry in the \texttt{events} table is added for the event, containing the unique VOEvent IVORN, the event source (LVC, \textit{Fermi} etc), type (GW or GRB), and an event ID (for example S190425z for an LVC event, \textit{Fermi} and \textit{Swift} have their own trigger IDs). Then an entry in the \texttt{surveys} table is also created, in order to group together all of the pointings from this particular event. The database does allow multiple surveys per event; for example, there could be a quick initial survey in a wide-passband filter that prioritises possible host galaxies, followed by a slower survey using the colour filters and longer exposure times that focuses on covering the skymap. However, at the time of writing each event only has a single survey defined.
Finally, the individual tiles are inserted as entries in the \texttt{mpointings} table. The most important entries for the mpointings are determined by the event strategy as described in \aref{sec:event_strategy}: the rank (taken from \aref{tab:strategy_dict}), cadence parameters (from \aref{tab:cadence_dict}), the target constraint values (minimum altitude, moon phase etc; from \aref{tab:constraints_dict}), and the exposure settings (exposure time, filter and number in each set; from \aref{tab:exposuresets_dict}). Each mpointing is connected to an entry in the \texttt{grid\_tiles} table for this particular grid, and the tile probabilities are stored as corresponding weights in the \texttt{survey\_tiles} table. Once the mpointings are defined, the first pointings are also created and added to the \texttt{pointings} table with the status \texttt{pending}, to ensure they are immediately valid in the queue (see \aref{sec:scheduler}).
Once all of the entries described above have been added to the observation database the event has been successfully handled. At this point the GOTO-alert event handler sends the final visibility report to Slack (see \aref{sec:event_slack}) and exits. In total the entire event handling process, from the alert being received to the pointings being added to the database, takes under 10 seconds. The majority of this time for a gravitational-wave event is spent downloading the often quite large skymap files from GraceDB, and actually processing the event only takes a few seconds. Once the event pointings are added to the database they should be ready to observe the next time the scheduler fetches the queue, and if they are valid the highest priority pointing will be sent to the pilot to immediately begin follow-up observations.
\end{colsection}
\section{Summary and Conclusions}
\label{sec:alerts_conclusion}
\begin{colsection}
In this chapter I described how the functions within the GOTO-alert package process astronomical transient event alerts.
The GOTO sentinel alert listener receives gravitational-wave alerts from the LIGO-Virgo Collaboration in the form of GCN Notices, which are formatted XML documents following the VOEvent schema. These VOEvents contain the key properties of the detection and a link to a skymap localisation file, which is then mapped onto the GOTO all-sky grid (see \aref{chap:tiling}).
One of the important features of the GOTO-alert event handler is the ability to automatically select the observation strategy for different alerts based on the contents of the VOEvent. Gravitational-wave detections predicted to come from nearby binary neutron star or neutron star-black hole binaries are the highest priority to follow up, followed by other classes of events. Gamma-ray burst events are also processed and added to the observations database by the G-TeCS sentinel (see \aref{chap:autonomous}), but always at a lower priority than the gravitational-wave alerts.
At the time of writing GOTO has been following up gravitational-wave events for several months since the start of the third LIGO-Virgo observing run. The results of the observing run so far are detailed as part of the overall conclusions in \aref{chap:conclusion}.
\end{colsection}
\chapter{Autonomous Observing}
\label{chap:autonomous}
\chaptoc{}
\section{Introduction}
\label{sec:autonomous_intro}
\begin{colsection}
Continuing the description of the GOTO Telescope Control System from \aref{chap:gtecs}, in this chapter I describe the higher level programs written to enable GOTO to operate as a robotic observatory.
\begin{itemize}
\item In \nref{sec:auto} I outline the additional functionality added to G-TeCS in order to allow the telescope to operate autonomously.
\item In \nref{sec:pilot} I describe the master control program that operates the telescope when in robotic mode.
\item In \nref{sec:conditions} I detail how G-TeCS monitors the local conditions, and list the different flags used to judge if it is safe to observe.
\item In \nref{sec:observing} I give an outline of how targets are observed by the robotic system, and introduce the scheduling system that is expanded further in \aref{chap:scheduling}.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated. As noted before, a description of the G-TeCS control system has previously been published as \citet{Dyer}.
\end{colsection}
\section{Automating telescope operations}
\label{sec:auto}
\begin{colsection}
The hardware control systems described in \aref{sec:hardware_control} provide the basic functions to control and operate GOTO.\@ A human observer could run through a series of simple commands to open the dome, slew the mount to a given target, take exposures once there, and then repeat with other targets for the rest of the night. There is a limited level of autonomy provided by the dome daemon, so the dome will close in bad weather without the delay from a human sending the command, but even that can be disabled if desired. Fundamentally, the software described in \aref{chap:gtecs} provides a perfectly usable human-operated telescope control system.
GOTO, however, was always designed as a fully robotic installation, as described in \aref{sec:goto_motivation}. Therefore an additional level of software was required, to take the place of the observer as the source of commands to the daemons.
\end{colsection}
\subsection{Robotic telescopes}
\label{sec:robotic_telescopes}
\begin{colsection}
One of the first robotic telescopes was the Wisconsin Automatic Photoelectric Telescope \citep{Wisconsin_APT}. Built in 1965, it could take routine observations unattended for several days. Today a huge number and variety of automated telescopes regularly take observations of the night sky with limited or no human involvement, ranging from wide-field survey projects like the All-Sky Automated Survey for Supernovae \glsadd{asassn} \citep[ASAS-SN,][]{ASAS-SN}, through large robotic telescopes like the \SI{2}{\metre} Liverpool Telescope \citep{Liverpool}, to countless small automated observatories around the world\footnote{A list of over 130 active robotic telescopes is available at \href{http://www.astro.physik.uni-goettingen.de/~hessman/MONET/links.html}{\texttt{http://www.astro.physik.uni-}} \\ \href{http://www.astro.physik.uni-goettingen.de/~hessman/MONET/links.html}{\texttt{goettingen.de/\raisebox{0.5ex}{\texttildelow}hessman/MONET/links.html}}.}. While larger facilities still tend to be manually operated, they often have multiple instances of automation in their hardware control or scheduling system; the planned conversion of the 50-year-old Isaac Newton Telescope for the automated HARPS3 survey \citep{INT_robotic} is a recent example of large, established telescopes exploiting the benefits of automation. Larger purely robotic telescopes are also being developed, such as the proposed \SI{4}{\metre} successor to the Liverpool telescope \citep{Liverpool2}. The opportunity for multiple robotic telescopes to be networked together into global observatories has also been exploited by projects like the Las Cumbres Observatory Global Telescope Network \citep{LCO} and the MASTER network \citep{MASTER}.
In G-TeCS, as in the pt5m system before it, the role of the observer is filled by a master control program called the pilot. The pilot sends commands to the daemons, monitors the hardware and attempts to fix any problems that arise. The intention is that the pilot will fully replicate anything a trained on-site observer would be required to do. In order to manage this there are several auxiliary systems and additional support daemons that the pilot confers with: the conditions daemon monitors weather and other system conditions, the sentinel daemon listens for alerts and enters new targets into the observation database, and the scheduler daemon reads the database and calculates which target the pilot should observe. Each of these systems is described in this chapter.
\end{colsection}
\subsection{System modes}
\label{sec:mode}
\begin{colsection}
Although GOTO is a robotic telescope, sometimes it is necessary for a human operator to take control if one of the automated scripts fails or a situation arises that is easier to deal with manually. One example was taking observations of the asteroid Phaethon \citep{Phaethon}: G-TeCS was not designed to observe solar system objects, and although the mount allows non-sidereal tracking there was no way to add a pointing into the database without fixed coordinates. Therefore, it was necessary for a human observer to determine and slew to the coordinates of the asteroid as it moved past the Earth. There are also cases when it is important that the automated systems are disabled: if work is being done to the hardware on-site it could be dangerous if the system still tried to move the mount or dome autonomously.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccccc} %
mode &
pilot &
day marshal &
dome autoclose &
dome alarm &
\texttt{hatch} flag
\\
\midrule
\texttt{robotic} &
\textcolor{Green}{active} &
\textcolor{Green}{active} &
\textcolor{Green}{enabled} &
\textcolor{Green}{enabled} &
\textcolor{Green}{active}
\\[5pt]
\texttt{manual} &
\textcolor{Orange}{paused} &
\textcolor{Green}{active} &
\textcolor{Orange}{adjustable} &
\textcolor{Orange}{adjustable} &
\textcolor{Red}{ignored}
\\[5pt]
\texttt{engineering} &
\textcolor{Red}{disabled} &
\textcolor{Red}{disabled} &
\textcolor{Red}{disabled} &
\textcolor{Red}{disabled} &
\textcolor{Red}{ignored}
\\
\end{tabular}
\end{center}
\caption[System mode comparison]{
A comparison of the three G-TeCS system modes. In \texttt{robotic} mode all automated systems are enabled, in \texttt{engineering} mode they are all disabled, and in \texttt{manual} mode the pilot is paused and the observer can disable other systems if desired.
}\label{tab:modes}
\end{table}
G-TeCS deals with manual operation by having an overall system mode flag stored in a datafile, which is checked by the automated systems before activating. There are three possible modes, outlined below and summarised in \aref{tab:modes}.
\begin{itemize}
\item \texttt{robotic} mode is the default. In this case it is assumed that the system is completely automated and therefore could move at any time. In this mode during the night the pilot will be in complete control of the telescope, and the dome will automatically close in bad weather. The dome entry hatch being open is also treated as a critical conditions flag (see \aref{sec:conditions_flags}).
\item \texttt{manual} mode is designed for manual observing, either on-site or remotely. In this mode the pilot will be paused and so will not interrupt commands sent by the observer. The dome will still sound the alarm when moving and autoclose in bad weather by default, but both can be disabled. It is intended that they should only be disabled if there is an observer physically present in the dome, otherwise the dome should still be able to close automatically when observing remotely.
\item \texttt{engineering} mode is designed to be used if there are workers on site, when the hardware moving automatically could be dangerous. All of the dome systems are automatically disabled, and the pilot and day marshal will refuse to start. Leaving the system in this state for long periods of time is undesirable, and so it should only be used while work is ongoing or the telescope is completely deactivated.
\end{itemize}
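A minimal sketch of how such a mode flag might be consulted is shown below. The file name, helper functions and fallback behaviour are all hypothetical; G-TeCS stores the flag in its own datafile, but the principle of defaulting to the safest mode when in doubt is the relevant point:

```python
from pathlib import Path

VALID_MODES = ('robotic', 'manual', 'engineering')

def get_mode(flag_file='/tmp/gtecs_mode.flag'):
    """Hypothetical helper: read the current system mode from a datafile,
    falling back to the safest option if the file is missing or invalid."""
    try:
        mode = Path(flag_file).read_text().strip()
    except OSError:
        return 'engineering'  # no flag file: assume the safest mode
    return mode if mode in VALID_MODES else 'engineering'

def pilot_should_run(mode):
    """The pilot only operates autonomously in robotic mode."""
    return mode == 'robotic'

Path('/tmp/gtecs_mode.flag').write_text('manual\n')
print(get_mode(), pilot_should_run(get_mode()))  # manual False
```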
\newpage
\end{colsection}
\subsection{Slack alerts}
\label{sec:slack}
\begin{colsection}
Although when in robotic mode GOTO is a completely autonomous system, it is still important that it does not operate completely unsupervised. As the GOTO collaboration has adopted the Slack messaging client\footnote{\url{https://slack.com}} for instant messaging and collaboration it was decided that the telescope control system should send reports automatically to a dedicated Slack channel. This was implemented through the Python Slack API package (\texttt{slackclient}\footnote{\url{https://python-slackclient.readthedocs.io}}), and has been widely adopted throughout G-TeCS.\@
The two most detailed Slack messages are the startup report, sent by the night marshal within the pilot (see \aref{sec:night_marshal}), and the morning report sent by the day marshal (see \aref{sec:day_marshal}). Examples of both are shown in \aref{fig:pilot_slack}. The startup report includes a summary of the current condition flags (see \aref{sec:conditions_flags}), links to the site weather pages, the external webcam view and the latest IR satellite image over La Palma. The morning report includes the internal webcam view and automatically-generated plots showing what the pilot observed last night and the current status of the all-sky survey.
Several other functions within the pilot send short messages to Slack when called. For example, as shown in \aref{fig:pilot_slack}, the pilot sends a message when the script starts and completes and when the dome opens or closes. A message will also be sent if the conditions turn bad, if the system mode changes, or if a hardware error is being fixed (see \aref{sec:monitors}). The pilot sends the majority of messages to Slack, but other daemons can also send their own alerts if necessary. For example, the dome daemon sends a message when it enters lockdown (see \aref{sec:dome}), and the sentinel sends a series of messages whenever it processes an interesting alert through GOTO-alert (see \aref{sec:sentinel} and \aref{sec:event_slack}).
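As an illustration, composing a startup report might look like the following. The flag names and message format here are invented for the example; the real reports are built within the pilot and delivered via the \texttt{slackclient} package:

```python
def format_startup_report(flags):
    """Hypothetical example: build a startup-report string from a dict of
    condition flags (True = good), ready to be posted to Slack."""
    lines = ['*GOTO startup report*']
    for name, good in sorted(flags.items()):
        lines.append(f'  {name}: {"OK" if good else "BAD"}')
    lines.append('All conditions good, opening dome.' if all(flags.values())
                 else 'Bad conditions, staying closed.')
    return '\n'.join(lines)

report = format_startup_report({'rain': True, 'windspeed': True, 'humidity': False})
print(report)
```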
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/slack2.png}
\end{center}
\caption[Slack messages sent by the pilot and day marshal]{
Slack messages sent by the pilot and day marshal on a typical night. The pilot reports when it starts automatically at 5pm, then the night marshal sends out the startup report when the STARTUP task has completed. The pilot also sends out messages when it is opening and closing the dome, and when it finishes in the morning. The day marshal later independently confirms the dome is closed and sends out its own morning report.
}\label{fig:pilot_slack}
\end{figure}
\end{colsection}
\section{The pilot}
\label{sec:pilot}
\begin{colsection}
The pilot is a Python script, \texttt{pilot.py}, not a daemon. It is run once each night; started automatically at 5pm by the Linux cron utility, it runs through to the morning, quits, and then is started again in the afternoon. This happens every day, unless the system is in engineering mode.
\end{colsection}
\subsection{Asynchronous programming}
\label{sec:async}
\begin{colsection}
The pilot is written as an \textit{asynchronous} program, using the AsyncIO package from the Python standard library (\texttt{asyncio}). An asynchronous program is one whose code is split into separate routines that a single thread switches between as required; this should not be confused with programs written using multiple processes or threads that genuinely run in parallel. See \aref{fig:async} for a graphical comparison between the two methods.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.89\linewidth]{images/async.pdf}
\end{center}
\caption[Multi-threaded versus asynchronous programming]{
A comparison of multi-threaded versus asynchronous programming. This example uses a \textcolorbf{NavyBlue}{blue} task that takes \SI{0.5}{\second} to execute and then waits for \SI{1.5}{\second}, a \textcolorbf{Green}{green} task that takes \SI{0.75}{\second} and waits for \SI{1.5}{\second} and a \textcolorbf{Red}{red} task that takes \SI{0.5}{\second} and waits for \SI{2.5}{\second}. These times are exaggerated; typically pilot tasks wait for between 10--\SI{60}{\second}.
The upper plot shows three tasks with different execution periods (solid blocks) and wait times (blank) running in a multi-threaded program. Each task is being run in an independent parallel thread on its own core, even though they rarely overlap and it is uncommon for multiple cores to be in use at the same time.
The lower plot shows the same three tasks running as coroutines in an asynchronous program. The event loop decides which coroutine to run on the single core, represented by the black arrows. This does lead to some coroutines being left waiting (lighter blocks) until the current one finishes, and, as routines can be delayed, it is not suitable for checks that need to happen at exact frequencies. However, the overall core usage is much more efficient.
}\label{fig:async}
\end{figure}
An example of a simple task might be monitoring a particular source of data, like a weather station. It would contain a function to download the current weather information from the external mast, and then a \texttt{sleep} command to wait for 10 seconds, which when put inside a loop will ensure that the weather information is queried and updated every 10 seconds. If this loop were called in a multi-threaded program then the thread would be held up for the majority of the time, doing nothing between checks. If there were multiple threads, for example checking different masts, then there would be no coordination between them and the whole program would end up being very inefficient. There are also other issues with multi-threaded programs, including input/output and sharing data between threads.
Asynchronous code contains multiple parallel \textit{coroutines}. The program itself runs an \textit{event loop}, which is a function with the job of choosing between the different coroutines to execute in the main thread. In an asynchronous version of the weather-monitoring program instead of a \texttt{sleep} function each coroutine would include an \texttt{await} function. When a routine reaches an \texttt{await} command it is suspended for the given time period and control is passed back to the event loop, which then chooses which of the other suspended routines should be run. Importantly, when the coroutine resumes it remembers where it stopped and continues from that point. The asynchronous style of writing code is ideally used with multiple coroutines that contain short functions with wait periods between when they need to be called again, and the pilot is a good example of this. The pilot runs a single-threaded event loop with multiple coroutines, which execute commands and then pause using the \texttt{await} command to allow other routines to be run.
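The weather-monitoring example above translates into AsyncIO code along the following lines. This is a self-contained sketch with stubbed query functions and short wait times, not the actual pilot code:

```python
import asyncio

results = []

async def monitor(name, query, period, repeats):
    """Coroutine: call query(), then yield control back to the event loop
    for `period` seconds before checking again."""
    for _ in range(repeats):
        results.append((name, query()))
        await asyncio.sleep(period)  # suspends this coroutine, not the thread

async def main():
    # two monitors with different check periods share a single thread;
    # the event loop switches between them whenever one awaits
    await asyncio.gather(
        monitor('mast1', lambda: 'dry', 0.01, 3),
        monitor('mast2', lambda: '5 m/s', 0.02, 2),
    )

asyncio.run(main())
print(results)
```

Each \texttt{await asyncio.sleep(...)} plays the role of the \texttt{await} command described above: the coroutine is suspended, the event loop runs whichever other coroutine is ready, and the suspended routine later resumes exactly where it stopped.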
One complication with the asynchronous model is handling errors within a coroutine. Should one coroutine block the thread, control will never be returned to the event loop, meaning tasks in other coroutines will never be carried out. Within the G-TeCS pilot each task has a set timeout which will trigger an error should it take too long to complete, and a failure in any of the coroutines will raise an error within the pilot.
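This per-task timeout pattern can be sketched with \texttt{asyncio.wait\_for} (the task and timeout values are illustrative):

```python
import asyncio

async def stuck_task():
    # A coroutine that takes far longer than its allowance
    await asyncio.sleep(1)

async def run_with_timeout():
    try:
        # Give the task a deadline, as the pilot does for each task
        await asyncio.wait_for(stuck_task(), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"   # the pilot would raise an error here
    return "ok"

result = asyncio.run(run_with_timeout())
```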
\end{colsection}
\subsection{Check routines}
\label{sec:checks}
\begin{colsection}
The coroutines within the pilot can be separated into two types: the check routines and the night marshal. Most of the coroutines are designed as monitors to regularly check different parts of the system, which fits well into the asynchronous model. These check routines are as follows:
\begin{itemize}
\item \texttt{check\_flags} is a routine that monitors the system flags, most notably those created by the conditions daemon (see \aref{sec:conditions}). If any of the conditions flags are bad then the dome daemon will enter lockdown and close the dome on its own (see \aref{sec:dome}), but the \texttt{check\_flags} routine will abort exposures, pause the pilot and ensure it is not resumed until the flag is cleared. When the pilot is paused the dome will close, the mount will park, and the night marshal (see below) will not trigger any more tasks. When conditions are clear again the pilot will reopen the dome and allow normal operations to be restored. The \texttt{check\_flags} routine also monitors the system mode and will pause the pilot if it is set to manual mode or exit if set to engineering mode (see \aref{sec:mode}).
\item \texttt{check\_scheduler} is a routine that queries the scheduler daemon (see \aref{sec:scheduler}) every 10 seconds to find the best job to observe. If the pilot is currently observing the scheduler will either return the database ID of the current pointing, in which case the pilot will continue with the current job, or a new ID which will lead to the pilot interrupting the current job and moving to observe the new one. If the pilot is not currently observing (either it is the start of the night, resuming from being paused or the previous pointing has just completed) then it will begin observing whatever the scheduler returns. The details of how the scheduler decides which target to observe are given in \aref{chap:scheduling}. The ID returned is then passed to the observe (OBS) task run by the night marshal.
\item \texttt{check\_hardware} monitors the hardware daemons (see \aref{sec:hardware_control}), checking every 60 seconds that they are all reporting their expected statuses. It does this using the hardware monitor functions described in \aref{sec:monitors}. If an abnormal status is returned then the pilot will pause, and a series of pre-set recovery commands generated by the monitor are executed in turn. While in recovery mode the pilot will check the monitors more frequently. If the commands work and the status returns to normal the pilot is resumed, but if the commands are exhausted without the problem being fixed then a Slack alert is issued reporting that the system requires human intervention and the pilot triggers an emergency shutdown.
\item \texttt{check\_dome} is a backup to the primary hardware check routine. \texttt{check\_hardware} does monitor the dome along with the other hardware daemons, but \texttt{check\_dome} provides a simple, dedicated backup to ensure the dome is closed when it should be and to raise the alarm if it is not.
\end{itemize}
\newpage
\end{colsection}
\subsection{Monitoring the hardware}
\label{sec:monitors}
\begin{colsection}
One of the important tasks that the pilot is required to do is monitoring the status of the various system daemons, and therefore the hardware units they are connected to. If any problems are detected (e.g.\ hardware not responding) the easiest automated response would be to shut down everything and send a message for a human to intervene. However this would be unnecessary in the case of small problems that could be easily fixed with one command, and it would be much better if the pilot could identify the problem and issue the command itself. The other benefit of this is a much faster reaction time than potentially needing to wake a human operator in the middle of the night; this is important both to minimise lost observing time and also potentially to save the hardware by, for example, making sure the dome is closed in bad weather.
Therefore, a system was created to enable the pilot to attempt to respond and fix any errors that occur itself. This is done within the \texttt{check\_hardware} coroutine through a series of hardware monitor Python classes, one for each of the daemons (i.e. \texttt{DomeMonitor}, \texttt{CamMonitor} etc.). Each daemon has a set of recognised statuses, representing the current hardware state, and a set of valid modes which represent the expected state. The current status is fetched from the hardware daemon, the mode is set by the pilot, and the hardware checks consist of comparing the two to discover if there are any inconsistencies. For example, the dome daemon can have current statuses of \texttt{OPEN}, \texttt{CLOSED} or \texttt{MOVING} (or \texttt{UNKNOWN}), and its valid modes are just \texttt{OPEN} and \texttt{CLOSED}. At the start of the night when the pilot starts the dome should be in \texttt{CLOSED} mode, and the pilot only switches it to \texttt{OPEN} mode when it is ready to open the dome. If when a check is carried out the dome is in \texttt{CLOSED} mode but the current status is reported as not \texttt{CLOSED} then that is a problem, and the hardware check function returns that it has detected an error with the dome. These checks can have timeouts associated with each status. For example, if the dome is in \texttt{CLOSED} mode and is reported as \texttt{MOVING} that is not necessarily an error, as it might be currently closing. The hardware monitor stores the time since the hardware status last changed, so if the dome reports that it has been in the \texttt{MOVING} state for longer than it should normally take to close ($\sim$\SI{90}{\second}) then that raises an error. This example used states specific to that hardware, but every daemon also has various other possible states and errors --- for example if the daemon is not running, or is running and not responding.
When one of the monitor checks returns an error then the pilot will take action as described within the \texttt{check\_hardware} routine: pause night marshal (see below), stop any current tasks and send a Slack alert to record the error. But instead of stopping there, the monitor goes on to attempt to recover from the error and fix the problem. In the same way that a human observer would run though a series of commands in order to solve the problem, each monitor has a defined set of recovery steps to be run through depending on the error reported. Continuing with the previous example, if the dome reports \texttt{OPEN} when in \texttt{CLOSED} mode then the first recovery step is simple: execute the command \texttt{dome~close}. Each step then has a timeout value and an expected state if the recovery command worked. If after 10 seconds the status of the dome has not changed from \texttt{OPEN} to \texttt{MOVING}, then the error is persisting and more actions need to be taken. If however the dome daemon reports that the dome is moving then the error is not cleared immediately, only when the status finally reaches \texttt{CLOSED}. As mentioned previously, should a monitor run out of recovery steps then the pilot will send out an alert that there is nothing more that it can do and will attempt an emergency shutdown.
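The status/mode comparison, transient-state timeout and recovery-step lookup described above might be sketched as follows (the class layout, attribute names and command strings are illustrative, not the real G-TeCS \texttt{DomeMonitor}):

```python
import time

class DomeMonitor:
    """Minimal sketch of a hardware monitor (a hypothetical interface)."""
    TIMEOUTS = {"MOVING": 90}            # seconds allowed in a transient state
    RECOVERY = {"OPEN": ["dome close"]}  # steps to try if stuck OPEN in CLOSED mode

    def __init__(self):
        self.mode = "CLOSED"    # expected state, set by the pilot
        self.status = "CLOSED"  # current state, fetched from the daemon
        self._changed = time.monotonic()

    def update(self, status):
        # Record the reported status, tracking when it last changed
        if status != self.status:
            self.status = status
            self._changed = time.monotonic()

    def check(self):
        """Return None if consistent, else an error description."""
        if self.status == self.mode:
            return None
        allowed = self.TIMEOUTS.get(self.status, 0)
        if time.monotonic() - self._changed < allowed:
            return None  # e.g. MOVING while closing is fine for up to 90 s
        return f"dome is {self.status} but mode is {self.mode}"

    def recovery_steps(self):
        # Pre-set commands to attempt, depending on the error state
        return self.RECOVERY.get(self.status, [])
```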
Using the above method, the vast majority of minor hardware issues can be solved by the pilot without the need for human intervention. Every time the recovery steps are triggered a message is sent to Slack (see \aref{sec:slack}) containing the error code and the steps required to fix it, so it is easy to then go back and examine why the error occurred and how to prevent it in the future.
\newpage
\end{colsection}
\subsection{The night marshal}
\label{sec:night_marshal}
\begin{colsection}
The check routines described in \aref{sec:checks} are support tasks for the primary routine, which is called the \texttt{night\_marshal}. Unlike the check routines, the night marshal does not contain a loop; instead it runs through a list of tasks as the night progresses, based on the altitude of the Sun. Each task is contained in a separate Python observation script, which contains the commands to send to the hardware daemons (see \aref{sec:scripts}). Each is run by spawning a new coroutine, meaning that while they are running the other routines --- such as the check tasks --- can continue. In the order they are performed during the night, the night marshal tasks are:
\begin{enumerate}
\item STARTUP, run immediately when the pilot starts. The \texttt{startup.py} script powers on the camera hardware, unparks the mount, homes the filter wheels and cools the CCDs down to their operating temperature of \SI{-20}{\celsius}. Once startup has finished the pilot will send a report of the current conditions to Slack (see \aref{fig:pilot_slack}).
\item DARKS, run after the system start up is complete before opening the dome. This executes the \texttt{take\_biases\_and\_darks.py} script to take bias and dark frames at the start of the night.
\item OPEN, run once the Sun reaches \SI{0}{\degree} altitude. It simply executes the \texttt{dome~open} command. If the pilot is paused due to bad weather or a hardware fault then the night marshal will wait and not open until the weather improves or the fault is fixed. If it is never resolved then the night marshal will remain at this point until the end of the night and the shutdown timer runs out (see below).
\item FLATS, run once the dome is open and the Sun reaches \SI{-1}{\degree}. This executes the \texttt{take\_flats.py} script, which moves the telescope into a position pointing away from the Sun and then takes flat fields in each filter, stepping in position between each exposure and automatically increasing the exposure time as the sky darkens. See \aref{sec:flats} for details of the flat field routine.
\item FOCUS, run once the Sun reaches \SI{-11}{\degree}. This executes the \texttt{autofocus.py} script, which finds the best focus position for each of the unit telescopes. See \aref{sec:autofocus} for details of the autofocus routine. If the routine fails for any reason the previous nights' focus positions are restored.
\item OBS (short for ``observing''), begun once autofocusing is finished and continuing for the majority of the night until the Sun reaches \SI{-12}{\degree} in the morning. When a database ID is received from the scheduler via the \texttt{check\_scheduler} routine the \texttt{observe.py} script is executed. The script queries the observation database (see \aref{sec:obsdb}) to get the coordinates and exposure settings for that pointing and then sends the commands to the mount and exposure queue daemons. Once a job is finished, either through completing all of its exposures or being interrupted, the entry in the database is updated and the routine starts observing the next job from the scheduler, starting the \texttt{observe.py} script again with the new pointing ID.\
\item FLATS is repeated once the Sun reaches \SI{-10}{\degree} in the morning, using the same script but this time decreasing the exposure times as the sky brightens.
\end{enumerate}
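The ordering logic above can be sketched as a simple altitude-gated sequence (thresholds copied from the list; the morning FLATS repeat is omitted for brevity, and the real night marshal awaits inside a coroutine rather than polling):

```python
# Illustrative Sun-altitude thresholds (degrees) for each task
TASKS = [
    ("STARTUP", None),  # run immediately when the pilot starts
    ("DARKS", None),    # run as soon as startup is complete
    ("OPEN", 0),        # once the Sun reaches 0 deg altitude
    ("FLATS", -1),      # once the dome is open and the Sun reaches -1 deg
    ("FOCUS", -11),     # once the Sun reaches -11 deg
    ("OBS", -11),       # begun once autofocusing finishes
]

def next_task(completed, sun_alt):
    """Return the next task to run, or None while waiting for the Sun."""
    for name, threshold in TASKS:
        if name in completed:
            continue
        if threshold is None or sun_alt <= threshold:
            return name
        return None  # tasks run strictly in order, so wait here
    return "SHUTDOWN"
```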
Once the night marshal has completed all of its tasks it exits and triggers the \texttt{shutdown.py} script, which powers off the cameras, parks the mount and ensures the dome is closed. Once this is finished the pilot quits. In addition there is a separate night countdown timer within the pilot, which will trigger the shutdown once the Sun reaches \SI{0}{\degree} in the morning. Normally the night marshal will have finished and triggered the shutdown long before that point, but the countdown acts as a backup ensuring that if there is a problem with the night marshal the pilot will still trigger a shutdown.
\newpage
It is also possible for the pilot to trigger an emergency shutdown during the night. This triggers the same \texttt{shutdown.py} observing script, with the only difference being that it ensures the dome is closed first. An emergency shutdown will be triggered by the pilot only in situations that it could not recover from without human intervention. Notably, this occurs if the hardware monitors called by the \texttt{check\_hardware} routine reach the end of a daemon's recovery steps without fixing the problem.
\end{colsection}
\subsection{The day marshal}
\label{sec:day_marshal}
\begin{colsection}
The day marshal is a completely separate script (\texttt{day\_marshal.py}) which provides a counterpart and backup to the pilot (the name is the mirror of the night marshal). The script is run as a cron job like the pilot, but starts in the early morning rather than the late afternoon. The day marshal is a much simpler script, with only one key task --- to wait until dawn and then check that the dome is closed. In this sense it is specifically a backup for the pilot's inbuilt night countdown timer, and as it is completely independent of the pilot it will run even if the pilot script has frozen or crashed during the night.
If the day marshal finds that the dome is still open when it runs it will send out Slack alerts that the system has failed, and then try closing the dome itself by sending commands to the dome daemon. So far this has not occurred, aside from deliberately during on-site tests. If all is well the day marshal will send out a Slack report as shown in \aref{fig:pilot_slack}, again as a mirror of the report that the night marshal sends after startup. This report will confirm that the dome is closed, and also contains some simple plots showing the targets the pilot observed last night.
\end{colsection}
\section{Conditions monitoring}
\label{sec:conditions}
\begin{colsection}
Perhaps the most important role of the autonomous systems is monitoring the on-site conditions. The weather at the site on La Palma is typically very good, however storms can affect the mountain-top observatory, especially in the winter months (see \aref{sec:challenges}). It is vital that the dome is closed whenever the weather turns bad, or in any other abnormal circumstances. For example, if the site loses power or internet connection it is better to stop observing and close the dome in case they are not restored quickly. The system had to be trusted to close in an emergency before it was allowed to run completely without human supervision.
\end{colsection}
\subsection{The conditions daemon}
\label{sec:conditions_daemon}
\begin{colsection}
The conditions daemon is a support daemon that runs on the central observatory server in the SuperWASP building on La Palma alongside the observation database (see \aref{fig:flow}). The daemon is run on the central server because it deals with site-wide values, so when the second GOTO telescope on La Palma is built it is envisioned that they will both share the same conditions daemon (see \aref{sec:gtecs_future}).
The daemon takes in readings from the three local weather stations next to the GOTO dome on La Palma (shown in \aref{fig:conditions}) every 10 seconds, as well as other sources such as internal sensors. The daemon processes these inputs into a series of output flags, which have a value of \texttt{0} (good), \texttt{1} (bad) or \texttt{2} (error). If any of the flags are marked as not good (i.e.\ the sum of all flags is $>0$) then the overall conditions are bad. The output flags are monitored by the dome daemon and the pilot: if the conditions are bad the dome will enter lockdown and automatically close if it is open (see \aref{sec:dome}), and the pilot's \texttt{check\_flags} routine will trigger the pilot to pause observations (see \aref{sec:pilot}).
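The flag combination rule amounts to a one-line check (a sketch; the flag names used here are hypothetical):

```python
def conditions_bad(flags):
    """Combine output flags: 0 = good, 1 = bad, 2 = error.
    Overall conditions are bad if the sum is greater than zero."""
    return sum(flags.values()) > 0
```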
\begin{figure}[t]
\begin{center}
\includegraphics[height=0.5\linewidth]{images/orm_warwick.png}
\includegraphics[height=0.5\linewidth]{images/conditions_photo.jpg}
\end{center}
\caption[Locations of the three weather masts on La Palma]{
On the left the locations of the three local weather masts on La Palma are marked by the \textcolorbf{Red}{red} stars (see \aref{fig:orm} for the context of the site). There are three masts around GOTO:\@ one next to the GOTO platform (shown on the right), one on the SuperWASP shed and a third on the liquid nitrogen plant next to W1m.
}\label{fig:conditions}
\end{figure}
\newpage
\end{colsection}
\subsection{Conditions flags}
\label{sec:conditions_flags}
\begin{colsection}
Each conditions flag has a limit below or above which the flag will turn from good to bad. For categories with multiple sources (for example the three local weather stations each give an independent external temperature reading) then the limit will be applied to each, and if \textit{any} are found to be bad then the flag is set. It follows therefore that \textit{all} the conditions sources must be good for the flag to be set to good. Each category also has two parameters: the bad delay and the good delay. These are the time the conditions daemon waits between an input going bad/good and setting the flag accordingly, which has the effect of smoothing out any sudden spikes in a value and ensures the dome will not be opening and closing too often.
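The bad/good delay behaviour might be sketched as follows (the class name and interface are invented for illustration; times are in seconds):

```python
class DelayedFlag:
    """Sketch of a conditions flag with bad and good delays.
    The flag only changes once the input has held its new value for the
    corresponding delay, smoothing out brief spikes in a reading."""

    def __init__(self, bad_delay, good_delay):
        self.bad_delay = bad_delay
        self.good_delay = good_delay
        self.flag = 0        # 0 = good, 1 = bad
        self._since = None   # time the input first disagreed with the flag

    def update(self, input_bad, now):
        if input_bad == bool(self.flag):
            self._since = None   # input agrees with the flag; reset the timer
            return self.flag
        if self._since is None:
            self._since = now
        delay = self.bad_delay if input_bad else self.good_delay
        if now - self._since >= delay:
            self.flag = int(input_bad)
            self._since = None
        return self.flag
```

With, say, the \texttt{humidity} values (a two-minute bad delay and a ten-minute good delay), a 30-second humidity spike would never set the flag, and the dome would not reopen until the readings had been good for a full ten minutes.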
The conditions flags can be grouped into three categories, divided according to severity. The latest version of G-TeCS contains 13 flags, listed in \aref{tab:conditions_flags}. An explanation of the different categories, and the flags within each, is given below.
\begin{table}[p]
\begin{center}
\begin{tabular}{c|cccc} %
Flag name & Criteria measured & Bad criteria & Good criteria & Category \\
\midrule
\texttt{dark} & Sun altitude
& > \SI{0}{\degree}
& < \SI{0}{\degree}
& info
\\[20pt]
\texttt{clouds} & IR opacity
& > 40\%
& < 40\%
& info
\\[20pt]
\texttt{rain} & Rain detectors
& \makecell{\texttt{True} \\ for \SI{30}{\second}}
& \makecell{\texttt{False} \\ for \SI{60}{\minute}}
& normal
\\[20pt]
\texttt{windspeed} & Wind speed
& \makecell{> \SI[per-mode=symbol]{35}{\kilo\metre\per\hour} \\ for \SI{2}{\minute}}
& \makecell{< \SI[per-mode=symbol]{35}{\kilo\metre\per\hour} \\ for \SI{10}{\minute}}
& normal
\\[20pt]
\texttt{humidity} & Humidity
& \makecell{> 75\% \\ for \SI{2}{\minute}}
& \makecell{< 75\% \\ for \SI{10}{\minute}}
& normal
\\[20pt]
\texttt{dew\_point} & \makecell{Dew point \\ above ambient \\ temperature}
& \makecell{< +\SI{4}{\degree} \\ for \SI{2}{\minute}}
& \makecell{> +\SI{4}{\degree} \\ for \SI{10}{\minute}}
& normal
\\[20pt]
\texttt{temperature} & Temperature
& \makecell{< \SI{-2}{\degree} \\ for \SI{2}{\minute}}
& \makecell{> \SI{-2}{\degree} \\ for \SI{10}{\minute}}
& normal
\\[20pt]
\texttt{ice} & Temperature
& \makecell{< \SI{0}{\degree} \\ for \SI{12}{\hour}}
& \makecell{> \SI{0}{\degree} \\ for \SI{12}{\hour}}
& critical
\\[20pt]
\texttt{internal} & \makecell{Internal \\ temperature \\ \& humidity}
& \makecell{< \SI{-2}{\degree} or > 75\% \\ for \SI{1}{\minute}}
& \makecell{> \SI{-2}{\degree} and < 75\% \\ for \SI{10}{\minute}}
& critical
\\[30pt]
\texttt{link} & \makecell{Network \\ connection}
& \makecell{ping fail \\ for \SI{10}{\minute}}
& \makecell{ping okay \\ for \SI{1}{\minute}}
& critical
\\[20pt]
\texttt{diskspace} & \makecell{Free space \\ remaining}
& < 5\%
& > 5\%
& critical
\\[20pt]
\texttt{ups} & \makecell{Battery power \\ remaining}
& < 99\%
& > 99\%
& critical
\\[20pt]
\texttt{hatch} & Hatch sensor
& \makecell{\texttt{open} \\ for \SI{30}{\minute}}
& \makecell{\texttt{closed} \\ for \SI{30}{\minute}}
& critical
\\
\end{tabular}
\end{center}
\caption[List of conditions flags and change criteria]{
A list of all the conditions flags, and the criteria for them to switch from good to bad and bad to good.
}\label{tab:conditions_flags}
\end{table}
\clearpage
\subsubsection{Information flags}
The first category contains the `information' flags. These are assigned values like the other flags, however they are purely for information purposes and do not contribute to the overall decision of whether the conditions are bad or not. In other words, an information flag can be bad but the overall system conditions still be considered good, because the flag is not included in the final calculation. An information flag being bad is not a reason to send the dome into lockdown, but it is still useful information to record. The two current information flags are described below:
\begin{itemize}
\item \texttt{dark}: A simple information flag that is bad when the Sun is above the \SI{0}{\degree} horizon and good when it is below. This has no effect on the robotic system, but is useful for human observers.
\item \texttt{clouds}: This information flag uses free IR satellite images downloaded from the sat24.com website\footnote{\url{https://en.sat24.com}} to measure a rough cloud coverage value, based on the methods of \citet{clouds}. Although initially trialled as a normal flag, meaning the dome would close when high cloud was detected, the results were not consistent enough and the presence of clouds was more reliably calculated by the zero point measured by the data processing pipeline. The flag remains a useful information source however, and the satellite cloud opacity is added to the image headers to assist in later data quality control checks.
\end{itemize}
Other information flags that have been proposed include seeing (from the ING or TNG seeing monitors on La Palma) and dust (from the TNG aerosols monitor). Both would be useful information to gather and store in image headers, but as they pose little threat to the hardware they are not valid reasons to close the dome unlike the other conditions flags described below.
\newpage
\subsubsection{Normal flags}
The second category contains the `normal' flags, and makes up the conditions flags relating to the external weather conditions. These flags going bad are valid grounds to close the dome, however as they relate to natural events they are not in any way unusual and the pilot can happily remain paused and wait for the flags to clear. The normal flags are described below:
\begin{itemize}
\item \texttt{rain}: This flag is set to bad if any of the weather stations report rain, and will only be cleared after 60 minutes of no more rain being reported. In practice rain usually coincides with high humidity, meaning the \texttt{rain} and \texttt{humidity} flags often overlap.
\item \texttt{windspeed}: This flag gets set if the windspeed is above \SI[per-mode=symbol]{35}{\kilo\meter\per\hour}, with a bad delay of two minutes and a good delay of ten minutes.
\item \texttt{humidity}: The humidity limit is 75\%, with a bad delay of two minutes and a good delay of ten minutes.
\item \texttt{dew\_point}: The dew point is related to the humidity, and has a limit of \SI{4}{\celsius} above the ambient external temperature (so if the dew point is \SI{2}{\celsius} then the flag is set to bad if the external temperature is \SI{6}{\celsius} or below).
\item \texttt{temperature}: The \texttt{temperature} flag is set if the temperature drops below \SI{-2}{\celsius} for two minutes, and also has a good delay of ten minutes. The telescope can operate in below-freezing temperatures for short amounts of time, but for longer cold periods when ice build-up is a concern see the critical \texttt{ice} flag below.
\end{itemize}
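For example, the dew-point criterion reduces to a margin check on the ambient temperature (a sketch of the comparison only; the real daemon also applies the bad and good delays):

```python
def dew_point_bad(temperature, dew_point, margin=4.0):
    # Bad if the ambient temperature is not more than `margin` degrees
    # Celsius above the dew point (condensation risk).
    return temperature - dew_point <= margin
```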
The limits for some of these flags and how they were determined is discussed further in \aref{sec:conditions_limits} below.
\subsubsection{Critical flags}
The final category contains the `critical' flags, covering more serious situations that might arise. In early versions of G-TeCS any of these flags turning bad was enough to trigger an emergency shutdown and stop the pilot for the night. However this proved to be an over-reaction, and there were no issues with having the pilot continue, albeit remaining paused while the flag was bad. The only difference now between `normal' and `critical' flags is that when a critical flag changes a Slack alert is sent out to ensure it is brought to the attention of the human monitors. The critical flags are described below:
\begin{itemize}
\item \texttt{ice}: A critical flag which uses the same input as the \texttt{temperature} flag, but is set to bad if the temperature is below \SI{0}{\celsius} for 12 hours and will only clear if it is constantly above freezing for another 12 hours. These longer timers mean this flag prevents the dome opening after a serious cold period until the temperature is regularly back above freezing, and also gives time for a manual inspection to be carried out to ensure the dome is free of ice.
\item \texttt{internal}: A combination flag for the two internal temperature and humidity sensors within the dome. These have very extreme limits, a humidity above 75\% or a temperature below \SI{-2}{\celsius}, which should never be reached inside under normal circumstances due to the internal dehumidifier. This flag therefore is a backup for an emergency case, when either the dehumidifier is not working or the dome has somehow opened in bad conditions (see \aref{sec:challenges}).
\item \texttt{link}: The conditions daemon also monitors the external internet link to the site, by pinging the Warwick server and other public internet sites. After 10 minutes of unsuccessful pings the flag is set to bad. It is technically possible for the system to observe without an internet link, and there is a backdoor into the system through the separate SuperWASP network, but it is an unnecessary risk: in an emergency alerts could not be sent out and external users would not be able to log in.
\item \texttt{diskspace}: The amount of free disk space on the image data drive is also monitored, with the flag being set to bad if there is less than 5\% of free space available. As images are immediately sent to Warwick and then regularly cleared from the local disk this should never be an issue, but this is a critical conditions flag as if the local disk was full it would prevent any more data being taken.
\item \texttt{ups}: The conditions daemon will set the \texttt{ups} flag if the observatory has lost power and the system UPSs are discharging (see \aref{sec:power}). Brief power cuts do occur on La Palma, but rarely for more than a few minutes as there are on-site backup generators that take over.
\item \texttt{hatch}: A critical flag to detect if the access hatch into the dome has been left open. This flag is unique in that it is only valid in robotic mode (see \aref{sec:mode}); when in manual or engineering mode it is assumed that the hatch being opened is a result of someone operating the telescope. But when the system is observing robotically the hatch being open is a problem, as there is no way to close it remotely and in bad weather damage could be caused to the telescope.
\end{itemize}
\subsubsection{Age flag}
There is a 14th pseudo-flag that is not set by the conditions monitor: the \texttt{age} flag. The output of the conditions daemon is saved in a datafile with a timestamp, and the dome daemon and the pilot monitor this file for changes, instead of querying the conditions daemon directly, to prevent any errors if the conditions daemon freezes or crashes. If the timestamp is out of date compared to the current time (2 minutes by default) then something must have happened to the conditions daemon and the flags are not reliable. The \texttt{age} flag is then created and set to bad, and it is then treated identically to the other 11 non-info flags when checking the conditions. Note that the \texttt{age} flag is included in the startup report sent to Slack in \aref{fig:pilot_slack} alongside the others given in \aref{tab:conditions_flags}.
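The staleness test itself is simply (a sketch, using the 2-minute default mentioned above):

```python
def flags_stale(file_timestamp, now, max_age=120):
    # The `age` pseudo-flag: bad if the conditions datafile has not
    # been updated within `max_age` seconds (2 minutes by default).
    return now - file_timestamp > max_age
```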
\end{colsection}
\subsection{Determining conditions limits}
\label{sec:conditions_limits}
\begin{colsection}
Setting the conditions flags based on the external weather is a balance between the loss of sky time and the potential risk to the hardware.
\subsubsection{Temperature}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/conditions/temperature.png}
\end{center}
\caption[Histogram of temperature readings]{
Histogram of temperature readings recorded for all nights in 2018. The temperature was recorded as below the \SI{-2}{\celsius} limit 1.1\% of this period (the \textcolorbf{Orange}{orange} bars) and below \SI{0}{\celsius} (the \textcolorbf{Orange}{orange} and \textcolorbf{Cyan}{cyan} bars) 5.7\% of the time.
}\label{fig:temperature}
\end{figure}
Originally the temperature limit was set to \SI{0}{\celsius}, meaning the dome would close if any of the conditions masts recorded the temperature as below freezing for more than two minutes (the standard `bad delay' for the normal flags). However the temperature on its own is not a risk to the hardware unless coupled with high humidity, meaning that the limit was later lowered to \SI{-2}{\celsius}. As shown in \aref{fig:temperature} this gained up to an extra 4.6\% of observing time that was previously lost, although note that this is an upper limit as other flags might be bad during that period.
\newpage
\subsubsection{Humidity}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/conditions/humidity.png}
\end{center}
\caption[Histogram of humidity readings]{
Histogram of humidity readings recorded for all nights in 2018. The humidity was recorded as above the 75\% limit 16.8\% of this period (\textcolorbf{Orange}{orange} bars).
}\label{fig:humidity}
\end{figure}
\end{colsection}
High humidity at the GOTO site on La Palma is associated with clouds forming in the caldera and spilling over the edge to cover the telescopes. There is a clear bimodal distribution in the site humidity readings shown in \aref{fig:humidity}, a result of fairly distinct high-humidity periods (mainly in the winter) and a range of lower humidities that pose no threat to the hardware. The humidity limit of 75\% is semi-arbitrary; surpassing it acts as a forewarning of a high-humidity period. Changing the limit to 80\% would not in practice gain much on-sky time, as the humidity tends to rise rapidly when clouds are forming, and it would come at a risk of condensation on the hardware (the related dew point is also an important measure of this).
\newpage
\subsubsection{Windspeed}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/conditions/windspeed.png}
\end{center}
\caption[Histogram of windspeed readings]{
Histogram of windspeed readings recorded for all nights in 2018. The windspeed was recorded as above the \SI[per-mode=symbol]{35}{\kilo\metre\per\hour} limit 2.3\% of this period (\textcolorbf{Orange}{orange} bars).
}\label{fig:windspeed}
\end{figure}
The \texttt{windspeed} flag is set to bad if the wind speed from any of the local masts is recorded as above \SI[per-mode=symbol]{35}{\kilo\meter\per\hour} for more than two minutes. This limit is well below the winds needed to damage the GOTO hardware, but is more a factor of the effect of wind shake on image quality. This is rarely the case, as shown in \aref{fig:windspeed}, and only tends to occur when storms are passing over the island; otherwise the conditions are fairly stable.
The wind limit was previously \SI[per-mode=symbol]{40}{\kilo\metre\per\hour}, but when the full four unit telescope array was installed, with the addition of the light shields, the wind sensitivity of the mount was increased and the high wind limit had to be lowered. It is possible the wind limit will also need to be revisited when the next four unit telescopes are added to the mount.
\section{Observing targets}
\label{sec:observing}
\begin{colsection}
In order for the pilot to function during the night it needs to know what targets to observe. This section describes the architecture within G-TeCS to allow targets to be defined, selected and passed to the pilot, while the details of the scheduling algorithms are discussed in more detail in \aref{chap:scheduling}.
\end{colsection}
\subsection{The observation database}
\label{sec:obsdb}
\begin{colsection}
\begin{sidewaysfigure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/schema.png}
\end{center}
\caption[Relationship diagram for the G-TeCS observation database]{
Relationship diagram for the G-TeCS observation database.
}\label{fig:schema}
\end{sidewaysfigure}
The scheduling system for G-TeCS is based around a database known as the observation database or ``ObsDB''. This database is located on the central observatory server hosted by SuperWASP, which not only is a faster machine than the control computer in the dome but in the future will allow a single database to be shared between mounts (see \aref{sec:multi_tel} and \aref{sec:gtecs_future}). The database is implemented using the MariaDB database management system\footnote{\url{https://mariadb.com}}, and is queried and modified using \glsfirst{sql} commands. In order to interact easily with the database within G-TeCS code a separate Python package, ObsDB (\texttt{obsdb}\footnote{\url{https://github.com/GOTO-OBS/goto-obsdb}}), was written as an \glsfirst{orm} package utilising the SQLAlchemy package (\texttt{sqlalchemy}\footnote{\url{https://sqlalchemy.org}}). An entity relationship diagram for the database schema is shown in \aref{fig:schema}.
The primary table in the database is for individual \texttt{pointings}. These each represent a single visit of the telescope, with defined RA and Dec coordinates and a valid time range for it to be observed within, as well as other observing constraints. Each pointing has a status value which is either \texttt{pending}, \texttt{running}, \texttt{completed} or some other terminal status (\texttt{aborted}, \texttt{interrupted}, \texttt{expired} or \texttt{deleted}). Ideally a pointing passes through three stages: it is created as \texttt{pending}, the scheduler selects it and the pilot marks it as \texttt{running}, then if all is well when it is finished it is marked as \texttt{completed}. If it stays in the database and never gets observed it will eventually pass its defined stop time (if it has one) and will be marked as \texttt{expired}. If the pointing is in the middle of being observed but is then cancelled before being completed it will be marked either \texttt{interrupted} (if the scheduler decided to observe another pointing of a higher priority) or \texttt{aborted} (in the case of a problem such as having to close for bad weather). The \texttt{deleted} status is reserved for pointings being removed from the queue before being observed, such as updated pointings being inserted by the sentinel and overwriting the previous ones (see \aref{sec:event_insert}). A representation of the relationship between the pointing statuses and how they progress is shown in \aref{fig:pointings}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/pointings_flowchart.pdf}
\end{center}
\caption[Pointing status progression flowchart]{
A flowchart showing how the status of an entry in the pointings table can change.
}\label{fig:pointings}
\end{figure}
As well as the target information (RA, Dec, name) a pointing entry contains constraints on when it can be observed. Each pointing can have set start and stop times; the scheduler will only select pointings where the current time is within their valid range (and once the stop time has passed they will be marked as \texttt{expired}). Limits can also be set on minimum target altitude, minimum distance from the Moon, maximum Moon brightness (in terms of Bright/Grey/Dark time) and maximum Sun altitude. These constraints are applied by the scheduler to each pointing when deciding which to observe (see \aref{sec:constraints}), and unless they all pass the pointing is deemed invalid. When created, a pointing is also assigned a rank, usually from 0--9, as well as a True/False flag marking it as a time-critical \glsfirst{too}. These are used when calculating the priority of the pointing, to compare with others in order to determine which is the highest priority to observe (see \aref{sec:ranking}).
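The "a pointing is invalid unless every constraint passes" rule can be sketched as a single function (an illustrative sketch; the field names are assumptions and the real constraint checks are made by the scheduler):

```python
def pointing_is_valid(pointing, now, target_alt, moon_dist, sun_alt):
    """Apply a pointing's observing constraints (illustrative sketch).

    A pointing is deemed invalid unless *all* of its constraints pass."""
    checks = [
        pointing['start_time'] <= now,
        pointing['stop_time'] is None or now <= pointing['stop_time'],
        target_alt >= pointing['min_alt'],
        moon_dist >= pointing['min_moon_dist'],
        sun_alt <= pointing['max_sun_alt'],
    ]
    return all(checks)
```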
The commands to be executed once the telescope is in position are stored in a separate \texttt{exposure\_sets} table. The table contains the number of exposures to take, the exposure time and the filter to use; so an observation requiring three \SI{60}{\second} exposures in the \textit{L} filter only requires one entry in the table. When the pointing is observed the pilot will read the set information and issue the appropriate commands to the exposure queue daemon (see \aref{sec:exq}), where each exposure is added to the queue and observed in turn.
\newpage
Each entry in the \texttt{pointings} table can only be observed once, after which it is marked as \texttt{completed} and is therefore excluded from future scheduler checks (which only consider \texttt{pending} pointings). For observing a target more than once there also exists the \texttt{mpointings} table, which contains information to dynamically re-generate pointings for a given target. An mpointing entry is defined with three key values: the requested number of observations, the time each should be valid in the queue and the minimum time to wait between each observation. Each time the database caretaker script is run it looks for any entries in the \texttt{mpointings} table that still have observations to do and it creates another entry in the \texttt{pointings} table for that target (this is tracked in a separate \texttt{time\_blocks} table). Setting the time values allows a lot of control over when pointings can be valid; for example, scheduling follow-up observations a set number of hours or days after an initial pointing is observed (see \aref{sec:event_strategy}).
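The caretaker's decision of whether to generate a fresh pointing from an mpointing can be sketched as below. This is a simplified sketch under assumed field names; the real logic also involves the \texttt{time\_blocks} table.

```python
def needs_new_pointing(mpointing, now):
    """Decide if the caretaker should create a fresh pointing for this
    mpointing (illustrative sketch; field names are assumptions)."""
    if mpointing['num_completed'] >= mpointing['num_requested']:
        return False        # all requested observations are done
    if mpointing['pending_pointing']:
        return False        # a pointing for this target is already queued
    last = mpointing['last_observed']
    if last is not None and now - last < mpointing['wait_time']:
        return False        # still within the minimum wait time
    return True
```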
The three tables described above (\texttt{pointings}, \texttt{exposure\_sets} and \texttt{mpointings}) are the core tables required for observation scheduling. However, there are several other tables defined in the database which are used to group pointings together and relate to GOTO's purpose as a survey instrument. As described in more detail in \aref{sec:gototile}, GOTO observes the sky divided into a fixed grid of individual tiles. The database therefore also contains a \texttt{grids} table and a \texttt{grid\_tiles} table, which define the current grid based on the field of view of the telescope. Mapping pointings to the grid is achieved through two more tables, \texttt{surveys} and \texttt{survey\_tiles}. A \textit{survey} in this context is a group of tiles that are being observed for a specific reason, one example being the pointings comprising the all-sky survey that GOTO carries out every night. Events that are processed by the alert sentinel might have a skymap that covers multiple tiles, and therefore the set of pointings required to cover it forms a survey within the database (the details of adding event pointings to the database are described in \aref{sec:event_insert}). Each pointing within the survey is linked to a survey tile, and each survey tile is linked to a grid tile of the current grid. The additional field added by the survey tile is a `weighting' column, which allows tiles within a survey to be weighted relative to each other. In the all-sky survey each tile is weighted equally, but in a survey coming from an event skymap the tiles will be weighted by the contained probability within that tile. The scheduler takes this weighting into account when deciding which pointing to observe (see \aref{sec:scheduler_tiebreaker}).
There are two additional tables in the database that are used to contain supporting information: the \texttt{events} and \texttt{users} tables. The \texttt{events} table contains fields such as the event type and source, and is filled by the sentinel when events are processed (see \aref{sec:event_insert}). The \texttt{users} table connects each pointing to the user who added it to the database. At the moment this is unused, and every pointing is linked to the single generic ``GOTO'' user, but in the future individuals might wish to insert and keep track of their own targets. Finally there is an \texttt{image\_logs} table that is populated by the camera daemon whenever an image is taken; this builds a record and allows individual images to be traced back to other database entries if required (all connected database IDs are also stored in the image FITS header).
\end{colsection}
\subsection{The sentinel}
\label{sec:sentinel}
\begin{colsection}
In order for targets to be observed by the pilot they must have entries defined in the \texttt{pointings} table in the observation database. These can be added manually, but for automated follow-up observations they have to be inserted whenever an alert is received. As shown in \aref{fig:flow} this is the job of the sentinel daemon.
In addition to the normal control loop, the sentinel daemon includes an independent alert listener loop that is continuously monitoring the transient alert stream output by the 4 Pi Sky event broker \citep{4pisky}, using functions from the PyGCN Python package (\texttt{pygcn}\footnote{\url{https://pypi.org/project/pygcn}}). Should the link to the server fail the daemon will automatically attempt to re-establish the connection every few seconds until it is restored. Alerts come in to the listener and are appended to an internal queue, and the sentinel also has an additional \texttt{ingest} command which can be used to manually insert test events or bypass the alert listener. Alerts are then removed from the queue and processed using the handler from the GOTO-alert Python package. The details of how alerts are processed are described in \aref{chap:alerts}, which includes how events are defined, processed, mapped onto the all-sky grid and ultimately added to the database.
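The decoupling of the listener loop from the processing loop via an internal queue can be sketched with the standard library (a minimal sketch; it does not use the real PyGCN or GOTO-alert APIs, and the alert names are placeholders):

```python
import queue
import threading

alert_queue = queue.Queue()
processed = []

def listen(alerts):
    """Stand-in for the listener thread: receive alerts and queue them."""
    for alert in alerts:
        alert_queue.put(alert)

def handle_alerts():
    """Sentinel-style loop: pull alerts off the queue and process in turn."""
    while True:
        try:
            alert = alert_queue.get(timeout=0.5)
        except queue.Empty:
            break
        processed.append(f'handled {alert}')

# the listener runs in its own thread, independent of the processing loop
listener = threading.Thread(target=listen, args=(['GW_alert', 'GRB_alert'],))
listener.start()
listener.join()
alert_queue.put('test_event')   # an 'ingest'-style manual insertion
handle_alerts()
```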
The alert listener is a key part of the automated system but was not initially planned to be assigned to its own independent daemon. The pt5m system uses the Comet software \citep{comet} in a separate script independent of any daemons. The advantages to including a dedicated alert listener daemon in G-TeCS, which became known as the sentinel, come from it being integrated into the pilot monitoring systems like the other daemons (described in \aref{sec:monitors}). Should the sentinel daemon crash or not respond to checks the pilot will notice and restart it like any other daemon.
\end{colsection}
\subsection{The scheduler}
\label{sec:scheduler}
\begin{colsection}
All entries in the observation database \texttt{pointings} table with status ``\texttt{pending}'' form the current queue, and the task of selecting which of these pointings the system should observe is the role of the scheduler. Within G-TeCS the \emph{scheduler} can refer to two linked concepts: the scheduling functions or the scheduler daemon itself. This section describes how the scheduler daemon operates; for how the scheduling functions choose which pointing to observe see \aref{chap:scheduling}.
The pt5m control system has no independent scheduler daemon; when the pilot needs to know what to observe it simply calls the scheduling functions to read the current queue, rank the pointings and find the one with the highest priority. When expanding the system for GOTO it was decided to farm these calculations off to a separate daemon, which the pilot queries just like the other hardware daemons. There are several advantages to this method. Firstly, just like with the sentinel alert monitor, having a dedicated daemon means it can be monitored by the pilot using the functions described in \aref{sec:monitors}. Furthermore, the scheduling commands can take a significant amount of time to run (several seconds), so splitting them out to a separate program saves time and frees up the pilot thread for other routines (recall the pilot is asynchronous but not multi-threaded). Also, having an independent scheduler daemon allows it to be run on the faster central server in SuperWASP that hosts the observation database, as shown in \aref{fig:flow}. Having the database queries run on the same machine as the database, instead of over the network, improves the speed of the scheduling functions. Finally, when GOTO moves to a multi-telescope system it is anticipated that the scheduler will be one of the common systems shared between telescopes (see \aref{sec:multi_tel_scheduling}), so it makes sense to have the daemon on the central server alongside the other shared systems.
The scheduler daemon contains the usual control loop, which runs the scheduling functions described in \aref{chap:scheduling} and internally stores the returned highest-priority pointing. The daemon exposes a single command, \texttt{check\_queue}, which returns the ID of that pointing. The pilot \texttt{check\_schedule} coroutine queries the daemon every 10 seconds using this command, and the scheduler returns one of three results: carry on with the current observation, switch to a new observation, or park the telescope (in the case that there are no valid targets). Most of the time the pilot will be observing a pointing previously given by the scheduler, and on the next check the scheduler will return the same pointing as it is still the highest priority --- in which case the pilot will continue observing it. Even if the scheduler finds that a different pointing now has a higher priority it will not tell the pilot to change targets whilst observing the current target, unless the new pointing has the \glsfirst{too} flag set. Otherwise the pilot will wait until it has finished the current job, mark it as complete in the database and ask the scheduler for the next target to observe. The different possible cases are summarised in \aref{tab:sched}.
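These cases can be written out as a single decision function (an illustrative sketch of the logic, not the actual pilot code; each pointing is represented here as a dict with assumed \texttt{id}, \texttt{valid} and \texttt{too} keys, or \texttt{None} if there is no pointing):

```python
def pilot_action(current, new):
    """Return the pilot's action given the current pointing and the
    scheduler's highest-priority pointing (illustrative sketch)."""
    if new is None or not new['valid']:
        return 'park'              # nothing valid to observe
    if current is None:
        return 'start_new'         # idle: start the new pointing
    if not current['valid']:
        return 'interrupt'         # current pointing is no longer valid
    if new['id'] == current['id']:
        return 'continue'          # same target is still highest priority
    # a new, valid pointing only interrupts if it is a ToO and we are not
    if new['too'] and not current['too']:
        return 'interrupt'
    return 'continue'
```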
\begin{sidewaystable}[p]
\begin{center}
\begin{tabular}{cc|cccc} %
&
& \multicolumn{4}{c}{Highest priority pointing is\ldots}
\\[0.5cm]
&
& \makecell{\ldots same as \\ current pointing}
& \makecell{\ldots a new, \\ valid pointing}
& \makecell{\ldots a new, \\ invalid pointing}
& \ldots None
\\[0.5cm]
\midrule
& & & & &
\\
\multirow{8}{*}{\rotatebox[origin=c]{90}{Current pointing is\ldots}}
& \ldots valid
& \makecell{\textcolor{Green}{Continue} \\ \textcolor{Green}{current pointing}}
& \makecell{\textcolor{BlueGreen}{Interrupt and start new pointing} \\ \textcolor{BlueGreen}{if it is a ToO and the current pointing is not,} \\ \textcolor{BlueGreen}{otherwise continue current pointing}}
& \textcolor{Red}{Park}
& \textcolor{Red}{Park}
\\[1.5cm]
& \ldots invalid
& \textcolor{Red}{Park}
& \textcolor{NavyBlue}{Interrupt and start new pointing}
& \textcolor{Red}{Park}
& \textcolor{Red}{Park}
\\[1.5cm]
& \makecell{\ldots N/A}
& ---
& \textcolor{NavyBlue}{Start new pointing}
& \textcolor{Red}{Park}
& \textcolor{Red}{Park}
\\[0.5cm]
\end{tabular}
\end{center}
\caption[Actions to take based on scheduler results]{
Actions the pilot will take based on the scheduler results. The scheduler can return one of three options as the highest priority pointing: either the current pointing, a different pointing or None (meaning the current queue is empty). Each pointing can either be valid or invalid. The pilot will either continue with the current pointing (\textcolorbf{Green}{green}), switch to the new pointing depending on the ToO flag (\textcolorbf{NavyBlue}{blue}, \textcolorbf{BlueGreen}{blue-green}) or park the telescope (\textcolorbf{Red}{red}).
}\label{tab:sched}
\end{sidewaystable}
\clearpage
\end{colsection}
\section{Summary and Conclusions}
\label{sec:autonomous_conclusion}
\begin{colsection}
In this chapter I have described the autonomous systems that allow GOTO to function as a robotic telescope.
The purpose of the programs described in this chapter is to add onto the core G-TeCS software as described in \aref{chap:gtecs}, and to replicate and replace every role of a human telescope operator. The core of the robotic control system is the pilot master control program, and I described how the pilot operates as an asynchronous program with multiple coroutines dedicated to monitoring or carrying out specific tasks throughout the observatory.
There are also several additional daemons added to support the pilot. The conditions daemon performs a vital role in the control system, and I described the different conditions flags and weather limits used for the telescope on La Palma. I then gave an outline of how targets are observed by the robotic system, with the sentinel daemon adding pointings to the observation database and the scheduler daemon selecting which ones to observe. How the scheduling functions determine which target is the highest priority is examined in the following chapter (\aref{chap:scheduling}), and how the sentinel daemon processes alerts is described in \aref{chap:alerts}.
\end{colsection}
\chapter{On-Site Commissioning}
\label{chap:commissioning}
\chaptoc{}
\section{Introduction}
\label{sec:commissioning_intro}
\begin{colsection}
In this chapter I describe commissioning the GOTO hardware and parallel software developments.
\begin{itemize}
\item In \nref{sec:hardware_commissioning} I give an outline of the commissioning period, focusing on my own involvement with trips to La Palma and building additional hardware for the dome.
\item In \nref{sec:software_commissioning} I describe how the control software was developed alongside the hardware, including creating nightly observing routines to take flat fields and focus the telescopes, and how challenges arising from hardware issues were overcome.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated, and has not been published elsewhere. Commissioning the GOTO hardware on La Palma was carried out along with several members of the GOTO collaboration; in particular Vik Dhillon and Stu Littlefair from Sheffield; Danny Steeghs, Krzysztof Ulaczyk and Paul Chote from Warwick; Kendall Ackley from Monash; and several undergraduate and postgraduate students from the GOTO member institutions.
\end{colsection}
\section{Deploying the hardware}
\label{sec:hardware_commissioning}
\begin{colsection}
GOTO commissioning began with the installation of the prototype telescope on La Palma in the spring of 2017. Over 2017 and 2018 I spent a total of nine weeks on-site, helping deploy the hardware as well as commissioning and developing the G-TeCS software described in previous chapters.
\end{colsection}
\subsection{Deployment timeline}
\label{sec:timeline}
\begin{colsection}
GOTO was envisioned as a quick, simple and cheap project that could provide a large field of view to cover the early gravitational-wave skymaps produced by LIGO.\@ When I first interviewed to join the project in February 2015 it was anticipated that GOTO would be up and running imminently, perhaps before the end of that year. Ultimately that did not happen, as can be seen in the project timeline given in \aref{tab:timeline}, and GOTO did not see first light until June 2017. In hindsight the delay was irrelevant, as the first (and, at the time of writing, only) gravitational-wave event to have an electromagnetic counterpart occurred two months later in August 2017 --- and was only visible from the southern hemisphere \citep{GW170817,GW170817_followup}.
GOTO's deployment date was repeatedly set back for a variety of reasons, including planning permission being held up by local tax disputes and delays in manufacturing the mount and optics. The site was ready months before the telescope was, with the first dome being built in November 2016. My first visit to La Palma took place in March 2017, while the telescopes and mount were still in the factory. Vik Dhillon and I went out to the site to develop the dome control and conditions monitoring systems as described in \aref{sec:dome} and \aref{sec:conditions}. During this trip we also installed the additional dome hardware systems I had built, which are described in \aref{sec:arduino}.
\begin{figure}[p]
\begin{center}
\begin{tabular}{cl|@{\hspace{-3.2pt}$\bullet$ \hspace{5pt}}l} %
2015 & July & Collaboration meeting in Warwick (29 Jul) \\
& & Site planning application submitted \\
& September & Research collaboration agreement signed \\
& & \textit{LIGO's first observing run (O1) begins} \\
& & \textit{First observation of gravitational waves (GW150914)} \\
\midrule
2016 & January & \textit{O1 ends} \\
& August & Planning permission granted \\
& September & Site construction begins \\
& November & First dome assembled \\
& & \textit{LIGO's second observing run (O2) begins} \\
\midrule
2017 & March & \textcolor{Blue}{Trip 1 (23--31 Mar) --- install dome systems} \\
& May & Telescope hardware shipped \\
& June & \textbf{GOTO first light (10 Jun)} \\
& & Collaboration meeting in Warwick (19--20 Jun) \\
& & \textcolor{Blue}{Trip 2 (22 Jun--7 Jul) --- install control software} \\
& July & Inauguration ceremony (3 July) \\
& & Dec axis encoder fails \\
& & \textcolor{Blue}{Trip 3 (20--28 Jul) --- pilot commissioning} \\
& & Robotic operations begin \\
& August & UT3 mirrors sent back to manufacturer \\
& & \textit{Virgo joins O2} \\
& & \textit{First gravitational-wave counterpart detected (GW170817)} \\
& & \textit{O2 ends} \\
& November & Drive motors upgraded, arm extensions installed \\
& & \textcolor{Blue}{Trip 4 (9--16 Nov) --- on-site monitoring} \\
& December & Second dome assembled \\
\midrule
2018 & January & \textcolor{Blue}{Trip 5 (14 Jan--5 Feb) --- on-site monitoring} \\
& April & Collaboration meeting in Warwick (11--13 Apr)\\
& May & On-site monitoring program ends \\
& June & Refurbished mirrors installed into UT4, old mirrors sent back \\
& July & \textcolor{Blue}{Trip 6 (5--13 Jul) --- software development} \\
& & New UT mounting brackets installed \\
& December & \textit{LIGO-Virgo Engineering Run 13 (14--18 Dec)} \\
\midrule
2019 & February & Refurbished mirrors reinstalled into UT3 \\
& & Current 4-UT all-sky survey begins \\
& April & \textit{LIGO-Virgo's third observing run (O3) begins} \\
\end{tabular}
\end{center}
\caption[Timeline of the GOTO project]{
A timeline of the GOTO project from when I joined up until the time of writing, including the six trips I made to La Palma during commissioning (in \textcolorbf{Blue}{blue}) and concurrent developments in the field of gravitational waves (in \textit{italics}).
}\label{tab:timeline}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/inauguration_photo.jpg}
\end{center}
\caption[Working in the GOTO dome prior to the inauguration in June 2017]{
Working in the GOTO dome prior to the inauguration in June 2017.
Photo taken looking west towards W1m and the WHT (with the orange CANARY laser visible). The second dome was installed on the empty platform to the right later in the year.
}\label{fig:inauguration}
\end{figure}
Ultimately, the mount and first four unit telescopes were shipped to La Palma in late May 2017, and GOTO officially saw first light on the 10th of June 2017. I went out to the site a few weeks later, in order to install the G-TeCS software (an image of the site at the time is shown in \aref{fig:inauguration}). By the time of the inauguration ceremony on the 3rd of July the hardware control system was in place and working well, and I was able to demonstrate the telescope that evening to the assembled dignitaries.
I returned to the site less than two weeks later with Stu Littlefair, in order to do further work on the control software. We commissioned the pilot and developed the observing routines described in \aref{sec:software_commissioning}, and oversaw the telescope's first fully-autonomous night on the 27th of July.
Unfortunately, in the months after the inauguration problems began to surface with the hardware. The first problem was the failure of the declination motor encoder shortly after the inauguration (prior to my second visit). We were able to operate GOTO in a limited RA survey mode (described in \aref{sec:challenges}), however this greatly limited the capability of the telescope. There were also other problems with the mounting brackets that hold the unit telescopes to the boom arm becoming loose, as well as the boom arms being short enough that the unit telescopes could hit the mount pier. These issues meant that for the first few months of commissioning someone always needed to be present in the dome to stop the mount moving if it was in danger of damaging itself. Once the second LIGO observing run (O2) finished at the end of August there was less of a reason to be observing in this limited mode, so GOTO was shut down during the autumn of 2017 until hardware upgrades could be installed at the start of November.
At the same time, problems with the optical performance of the unit telescopes had become apparent, which were blamed on the mirror quality and issues with collimation. A program of sending each set of mirrors back to the manufacturer one at a time was decided on, allowing GOTO to continue operating with the remaining three unit telescopes. The worst performing telescope, UT3, had its mirrors taken out and returned in August 2017. Once the telescope was reactivated in November, the remaining three unit telescopes were aligned to form a single 3$\times$1 footprint, shown in \aref{fig:3ut_footprint}. Counterweights were placed in the empty UT3 tube to allow the mount to maintain balance.
GOTO operated in this mode for over a year. The gap between LIGO runs gave time to fully test the control software, as well as develop the GOTOphoto image pipeline (see \aref{sec:gotophoto}). The first set of mirrors were returned to the site in June 2018 and were placed into UT4, which was the second-worst performing telescope. The old UT4 mirrors were then sent back to the manufacturer, and GOTO continued to operate with three unit telescopes until February 2019. At this point, based on the imminent start of the third LIGO-Virgo observing run (O3), it was decided to leave the UT1 and UT2 mirrors in place, and operate from then on in the 4-UT configuration. The resulting 2$\times$2 footprint is shown \aref{fig:4ut_footprint}. Note the unit telescopes are arranged with overlapping fields of view, to compensate for the poor image quality off-axis. With future optical improvements this overlap could be reduced, increasing the overall field of view.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/footprint_1_box.png}
\end{center}
\caption[The previous 3-UT GOTO footprint]{
The 3-UT GOTO footprint, used from August 2017 to February 2019.
The initial \SI{5.5}{\degree} $\times$ \SI{2.6}{\degree} tile area used by GOTO-tile (see \aref{chap:tiling}) is shown in \textcolorbf{Green}{green}. Note that a reasonable amount of space is left around the edge of the tile.
}\label{fig:3ut_footprint}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.55\linewidth]{images/footprint_2_box.png}
\end{center}
\caption[The current 4-UT GOTO footprint]{
The current 4-UT GOTO footprint, in use from February 2019 onwards.
The revised \SI{3.7}{\degree} $\times$ \SI{4.9}{\degree} tile area is shown in \textcolorbf{Green}{green}. The future four unit telescopes are expected to be arranged in two more columns on the left and right as shown in \aref{fig:fov}, creating an approximately \SI{7.8}{\degree}--wide footprint.
}\label{fig:4ut_footprint}
\end{figure}
\newpage
Another problem found during commissioning was excessive scattered light entering the system, in particular light from the Moon entering the corrector lens (see the optical design in \aref{fig:ota}). This was solved by adding covers around the telescope tubes, which prevented light from entering the corrector but made the system more susceptible to wind-shake (the reason that the tubes were open in the first place). Ultimately the covers were found to provide enough benefits, including protecting the mirrors from dust, that it has been decided that future unit telescopes will have closed tubes.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth]{images/commissioning_photo.jpg}
\includegraphics[width=0.4\linewidth]{images/commissioning_photo3.jpg}
\end{center}
\caption[Photos from GOTO commissioning]{
Photos from GOTO commissioning: monitoring the telescope from inside the dome in November 2017 on the left, and fitting the mirror counterweights in UT3 in January 2018 on the right.
}\label{fig:commissioning}
\end{figure}
The second commissioning period ran from the telescope being reactivated in November 2017 through to May 2018. During this time the telescope was typically running robotically each night, however there was always a member of the collaboration on site monitoring it in case of problems. Over that period the monitor moved from physically sitting in the dome (shown in \aref{fig:commissioning}), to sitting in the relative comfort of the neighbouring SuperWASP server room, until finally in the last few months being able to monitor from the observatory residencia or one of the other large telescopes on site.
I visited La Palma twice during this period, with the first trip in November 2017 covering the first week of the monitoring period immediately following the telescope being reactivated. Other volunteers monitored the system on-site until Christmas, then commissioning halted over the holiday period before I returned to the site in January for a three week visit. Kendall Ackley from Monash was also on site during the first week, and the first and third weeks overlapped with a team from Sheffield including Vik Dhillon and Stu Littlefair. During this period we replaced the counterweights (also shown in \aref{fig:commissioning}), rebalanced the mount and realigned the unit telescopes, and I continued the software work with a major update to the observation database. During the second week I monitored the telescope alone from SuperWASP, and continued developing the pilot so it was able to run automatically with no human supervision. In the third week I was due to remain on site and continue to monitor the telescope, however a severe snowstorm stopped all observing.
Due to the cold weather and ice build-up GOTO was unable to open throughout all of February 2018. It was during this period that issues arose from the weight of the ice on the dome shutters, described in \aref{sec:challenges}. On-site monitoring resumed in the spring, once the snow had melted, and monitors continued to be on site for several more months, in between hardware upgrade trips led by the Warwick team. Eventually in May the software was deemed robust enough to allow GOTO to run unsupervised. The pilot output is still regularly monitored remotely, especially from Australia by the Monash team, who have the benefit of a more convenient timezone.
By the time the 4-UT system was recommissioned, in February 2019, the G-TeCS pilot and hardware control systems had been fully tested and were operating reliably. By then my focus had shifted to the alert follow-up systems detailed in \aref{chap:alerts}, in advance of the start of O3 in April 2019. Since then GOTO has been reliably running and responding to gravitational-wave alerts, as detailed in \aref{sec:conclusion}.
\end{colsection}
\subsection{Additional dome systems}
\label{sec:arduino}
\begin{colsection}
GOTO uses a clamshell dome manufactured by Astrohaven, the same company that made the pt5m dome \citep{pt5m}. Based on experience with pt5m, there were several hardware systems which we decided to add to the GOTO dome. In fact, the entire pt5m dome control unit was replaced by a custom one designed and manufactured in Durham, but we wanted to avoid taking such a drastic step. Several limitations of the stock Astrohaven dome are outlined below.
\begin{itemize}
\item First, there was no easily-accessible emergency stop button to cut power to the dome in an emergency (e.g.\ something gets caught in the motors). This is a serious concern for pt5m, as when the dome is open the shutters completely cover the access hatch, making it dangerous for anyone to be passing through the hatch when the dome is moving. Therefore, one of the additions to pt5m was an emergency stop button within arm's reach of the hatch entrance. As the GOTO dome is taller the hatch is mostly uncovered when the dome is open, but installing an emergency stop button was still a priority for safety reasons.
\item The dome does not come with a siren to sound when it opens or closes. This is an important safety feature when operating a robotic observatory, as the dome will be operated entirely through software and it is important to warn anyone on site several seconds before it is about to move. When members of the GOTO team are on-site the robotic systems can be disabled entirely by going into engineering mode (see \aref{sec:mode}), however it is still important to make sure that there is no chance of the dome moving without prior warning. In addition, the GOTO site on La Palma is publicly accessible, and it is not unknown for tourists or hikers, or other astronomers, to be around the dome when it is unsupervised.
\item By default, the dome \glsfirst{plc} only provides limited information about the status of the dome shutters. As described in \aref{sec:dome}, the PLC only returns a single status byte in response to a query. This is not enough to distinguish whether the dome is fully or only partially open, and if one side is moving the status of the other side is unknown. Adding our own sensors would allow the complete status of the dome to be determined. The dome comes with two in-built magnetic sensors on each side, which should detect when the shutters are either fully closed or fully open. However these have been known to be unreliable and tricky to align. In some cases the switches failed to trigger when the shutter reached its open limit, leading to the dome continuing to drive the belts and the shutter embedding itself into the ground. Therefore having secondary, independent sensors was a priority to ensure this did not happen.
\item The dome does not include a sensor on the hatch door to detect if it is open, and there is no way to close the hatch remotely in case of bad weather. As GOTO will be operating without anyone on site, the hatch should normally remain closed at all times, and by adding a sensor to the hatch this could be confirmed and an alert issued if the hatch is detected as open (see \aref{sec:conditions_flags}).
\item One final proposed addition was a quick-close button, which acts as a simple and direct way to communicate with the dome daemon. The motivation is a practical one: in the case that the weather turns bad and the telescope is exposed, assuming the automatic monitoring systems fail and we can not connect to close it remotely, it is a lot easier and quicker to direct someone on site to access the dome and press the prominent ``close'' button than direct them to log on to the in-dome computer, open a terminal and type \texttt{dome~close}. This was also true during the commissioning phase, when the software was still being tested and someone had to be on-site all night. One requirement was that this button could not be easily confused with the emergency stop button (i.e.\ it should not be coloured red), as instead of stopping the dome this button will prompt it to move.
\end{itemize}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/button_photo.jpg}
\end{center}
\caption[Building the quick-close button]{
Building the quick-close button in the lab. The transmit (\textcolorbf{Brown}{brown}) and receive (\textcolorbf{Red}{red}) wires from the serial cable are attached to the button NC pins.
}\label{fig:quickclose_button}
\end{figure}
Creating a quick-close button involved attaching a ``normally closed'' (NC) push button in series between the transmit and receive wires of an RS 232 serial cable, as shown in \aref{fig:quickclose_button}. By doing this a simple feedback loop can be set up within the dome daemon, by sending a test signal out through the serial connection and listening for it to be returned to the same port. If after three tries the signal does not return then the button is assumed to have been pressed, and the dome daemon triggers a lockdown (see \aref{sec:dome}). By using a locking push button the loop will remain broken, and the dome closed, until the button is released. The bright yellow button was labelled and attached to the wall of the dome near the computer rack (shown in \aref{fig:arduino_button_dome}).
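The loop-back check is simple enough to sketch in a few lines of Python (an illustrative sketch only, not the actual dome daemon code; the test byte and function names are hypothetical):

```python
def button_pressed(send, recv, retries=3):
    """Serial loop-back check for the quick-close button.

    `send` writes a test byte out through the serial port and `recv`
    returns whatever came back (or b'' on timeout).  The button is
    normally closed, so the byte should echo straight back; if nothing
    returns after several tries, the loop is assumed to be broken and
    the button to have been pressed.
    """
    for _ in range(retries):
        send(b"\x06")
        if recv() == b"\x06":
            return False  # loop intact: button not pressed
    return True  # loop broken: trigger a dome lockdown
```

In the real daemon the \texttt{send} and \texttt{recv} callables would wrap the serial port connection; because the push button locks in place, the check keeps failing, and the dome stays in lockdown, until the button is released.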
Adding an emergency stop button to the dome was fairly simple. There was enough slack on the PLC power cable to install a prominent red button on the wall of the dome, as shown in \aref{fig:estop_plc}. When the button is pressed the power to the PLC and the dome motors is cut, which stops the dome moving.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/estop_photo.jpg}
\end{center}
\caption[The dome PLC and emergency stop button]{
The dome \glsfirst{plc} and emergency stop button. The green cable comes directly from the dome power supply at the top, passes through the button unit and into the PLC at the bottom.
}\label{fig:estop_plc}
\end{figure}
\newpage
Adding a siren and additional sensors to the dome required an additional system to power, monitor and (for the siren) activate them. This was done using a small Arduino Uno microcontroller\footnote{\url{https://www.arduino.cc}} running a simple HTTP server, which reports the status of the switches and can be queried in order to activate the siren. The circuit design for this system is shown in \aref{fig:arduino_circuit}. In order to power the siren from the Arduino a MOSFET (metal-oxide-semiconductor field-effect transistor)\glsadd{mosfet} was used to connect to one of the board input/output pins, with a large enough resistor to prevent the voltage from destroying the board. The Arduino and siren were mounted within a weatherproof case, with output connectors for the dome switches as well as for power and an ethernet connection. Photos of the box during construction are shown in \aref{fig:arduino_wip}, and its installation in the dome is shown in \aref{fig:arduino_installed} and \aref{fig:arduino_button_dome}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/arduino.png}
\end{center}
\caption[Circuit design for the Arduino box]{
Circuit design for the Arduino box.
}\label{fig:arduino_circuit}
\end{figure}
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/arduino_photo.jpg}
\includegraphics[width=\linewidth]{images/box_photo.jpg}
\end{center}
\caption[Building the dome Arduino box]{
Building the dome Arduino box in the lab in Sheffield. The top image shows the circuit design being tested, with the Arduino (circuit board in the back) and two of the dome switches: a magnetic proximity switch (grey blocks on the left) and a Honeywell limit switch (cyan unit in the centre, with the cover and arm detached). The lower image shows the completed weatherproof box with the siren, power and ethernet cables and one of the Honeywell switches attached.
}\label{fig:arduino_wip}
\end{figure}
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/box_installed_photo.jpg}
\end{center}
\caption[The Arduino box installed in the GOTO dome]{
The Arduino box installed in the GOTO dome during my first trip to La Palma in March 2017. This photo was taken before the cover and cables were attached.
}\label{fig:arduino_installed}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/box_installed_photo2.jpg}
\end{center}
\caption[The Arduino box and quick-close button in the GOTO dome]{
The Arduino box and yellow quick-close button in the GOTO dome, attached to the southern pillar under the dome drive. The cables run to the computer rack which is just off to the left of the photo.
}\label{fig:arduino_button_dome}
\end{figure}
\clearpage
Four additional sensors were added to the dome, each connected to a port on the Arduino through the connectors on the bottom of the weatherproof box. Two Honeywell limit switches were attached to the rim of the dome wall, set to be triggered when the dome was fully open; additional magnetic proximity switches were added to the two inner-most shutters, which trigger when the dome is fully closed; and a magnetic proximity switch was added to the dome hatch. Each of these are shown in \aref{fig:dome_switches}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/dome_sensor_1.jpg}
\includegraphics[width=0.8\linewidth]{images/dome_sensor_2.jpg}
\includegraphics[width=0.8\linewidth]{images/dome_sensor_3.jpg}
\end{center}
\caption[Additional sensors added to the dome]{
Additional sensors added to the dome. The top photo shows one of the two Honeywell limit switches added to the rim of the dome wall to detect when the dome is fully open, the middle photo shows magnetic switch added to the inner-most shutters to detect when the dome is fully closed, and the bottom photo shows the magnetic switch added to the dome hatch to detect when it is open.
}\label{fig:dome_switches}
\end{figure}
Using a combination of these new switches and the PLC output it is possible for the dome daemon to build a complete picture of the status of the dome, as described in \aref{sec:dome}. Having a sensor on the hatch also allowed it to be added as a conditions flag, as described in \aref{sec:conditions_flags}, meaning the pilot will stop and report if the hatch is open when in robotic mode. As yet, neither the hatch flag nor the in-dome buttons have been needed in an emergency, but they remain important as an insurance policy. It is anticipated that the same work will need to be done in the second GOTO dome when it is commissioned. Comments have also been passed on to the dome manufacturer to suggest that they could include some of the features described in this section in their own stock hardware.
One further addition to the GOTO dome should be mentioned: the ``heartbeat'' monitor designed and installed by Paul Chote at Warwick. As described in \aref{sec:dome}, in the event that the dome daemon crashes, or the control NUC itself fails for whatever reason, the dome would be left vulnerable --- especially if it is already open. As a backup, Paul created his own Arduino system that connects to the dome PLC and sends it commands to close in the event that it does not receive a regular signal from the dome daemon. This system was installed into the GOTO dome in April 2018, along with the other Warwick domes on the site, and was one of the final stages required before GOTO could safely leave the commissioning phase and not require an on-site monitor. The two Arduino systems may be merged when the second GOTO dome is commissioned.
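The heartbeat monitor amounts to a watchdog timer: the dome daemon sends regular pings, and if they stop arriving the Arduino issues close commands to the PLC. A minimal sketch of the logic (written here in Python for illustration, though the real monitor is Arduino firmware; the timeout value and names are assumptions):

```python
import time


class Heartbeat:
    """Watchdog: close the dome if no ping arrives within `timeout` seconds.

    `close_dome` is a callback that sends the close command to the dome
    PLC.  The `clock` is injected so the logic can be tested without
    waiting in real time.
    """

    def __init__(self, close_dome, timeout=30, clock=time.monotonic):
        self.close_dome = close_dome
        self.timeout = timeout
        self.clock = clock
        self.last_ping = self.clock()
        self.triggered = False

    def ping(self):
        # called whenever the regular signal from the dome daemon arrives
        self.last_ping = self.clock()
        self.triggered = False

    def check(self):
        # polled continuously in the monitor's main loop
        if not self.triggered and self.clock() - self.last_ping > self.timeout:
            self.close_dome()
            self.triggered = True
```

The key property is that a crash of the dome daemon, or of the control NUC itself, silently stops the pings and so fails safe: the dome closes without any software on the NUC needing to run.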
\end{colsection}
\section{Developing the software}
\label{sec:software_commissioning}
\begin{colsection}
While the GOTO hardware was being commissioned the G-TeCS control software was also being developed. There were several important parts of the software that could not reasonably be developed without access to the actual telescope, for example the observing routines for taking flat fields and autofocusing.
This section focuses on the software side of commissioning, and does not include pure hardware issues such as with the mount drive, mirrors or UT brackets outlined in \aref{sec:timeline}. These were dealt with by the core GOTO hardware team at Warwick, and although I spent many hours on La Palma balancing the mount, adjusting mirror positions and aligning unit telescopes, it had limited impact on the control software development.
\end{colsection}
\subsection{Taking flat fields}
\label{sec:flats}
\begin{colsection}
GOTO uses the twilight sky for taking flat fields. Some care has to be taken to design a reasonable flat-field routine, as taking sky flats is not simple for a wide-field instrument such as GOTO \citep{flats3, flats2}. As described in \aref{sec:night_marshal}, the night marshal runs the \texttt{take\_flats.py} observing script at twilight twice a day, once in the evening and again in the morning. In the evening the script begins after the dome is opened, when the Sun has set below \SI{0}{\degree} altitude, and in the morning the routine is run in reverse, starting after observations have finished and running until the Sun rises above \SI{0}{\degree}.
First, the telescope needs to slew to a chosen position. Based on the analysis of the twilight sky gradient in \citet{flats}, GOTO slews to the ``anti-sun'' position, which is at an azimuth of \SI{180}{\degree} opposite the position of the Sun and at an altitude of \SI{75}{\degree}. This should be the position where the sky gradient is minimised and therefore the field is flattest. In the pt5m version of the script, the telescope would slew to one of a predefined set of empty sky regions, however with GOTO's large field of view there are no large enough regions devoid of bright stars (to counter this, the mount moves slightly between images so that median stacking the frames will remove any stars).
Once the telescope is in position, glance exposures (see \aref{sec:cam}) are taken until the sky brightness has reached an appropriate level. In the evening, exposures start at $E_0=\SI{3}{\second}$ soon after sunset, and the first images will almost always be saturated. Images are taken until the mean count level has fallen below the target level of 25,000 counts per pixel. In the morning, exposures start at $E_0=\SI{40}{\second}$ while the sky is still dark, and exposures are taken until the mean count level is above the same target level.
Once the sky has reached the target level of brightness, exposures are taken at increasing exposure times in the evening, or decreasing in the morning. The exposure time sequence is determined using the method of \citet{flats3}, which builds the sequence of exposure timings iteratively from $t_0=0$ using
\begin{equation}
t_{i+1} = \frac{\ln{(a^{t_i+\Delta t} + a^{E_i} -1)}}{\ln{a}},
\label{eq:sky}
\end{equation}
where $\Delta t$ is the time between exposures (including readout time and any offset slew time) and $a$ is a scaling factor which depends on the twilight duration $\tau$ in minutes as
\begin{equation}
a = 10^{\pm 0.125/\tau}.
\label{eq:sky2}
\end{equation}
The twilight duration can be calculated easily using Astropy, and $a$ is taken as less than 1 in the evening (the sky darkens, so the exposure times lengthen) or greater than 1 in the morning (the sky brightens, so the exposure times shorten). Note that $t_i$ is the cumulative elapsed time at the end of exposure $i$, not the exposure time itself; the actual exposure time of each exposure is given by
\begin{equation}
E_{i+1} = t_{i+1} - (t_i + \Delta t).
\label{eq:sky3}
\end{equation}
Using this method a sequence of exposure times is determined iteratively either until a target number of flat fields have been taken (by default 5 in each filter) or the exposure times pass a given limit (greater than \SI{60}{\second} in the evening, less than \SI{1}{\second} in the morning). Between each exposure the telescope is stepped 10~arcminutes in both RA and declination, enough to ensure that any objects in the field do not fall on the same CCD pixels. This means any stars in the field can be removed by median combining the individual flat field images.
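The exposure-time iteration above can be sketched as follows (a simplified illustration; all times are assumed to be in consistent units, the limits are the defaults quoted in the text, and the helper name is mine):

```python
import math


def flat_exposure_times(tau, E0, dt, evening=True,
                        n_max=5, E_min=1.0, E_max=60.0):
    """Iterate the exposure-time sequence for twilight sky flats.

    tau -- twilight duration in minutes
    E0  -- starting exposure time
    dt  -- dead time between exposures (readout plus the offset slew)
    """
    # scaling factor: a < 1 in the evening, a > 1 in the morning
    a = 10 ** ((-0.125 if evening else 0.125) / tau)
    t, E = 0.0, E0
    exposures = [E0]
    while len(exposures) < n_max:
        # elapsed time at the end of the next exposure
        arg = a ** (t + dt) + a ** E - 1
        if arg <= 0:
            break  # sky too dark to reach the target counts again
        t_next = math.log(arg) / math.log(a)
        # the next exposure time itself
        E = t_next - (t + dt)
        if not E_min <= E <= E_max:
            break
        exposures.append(E)
        t = t_next
    return exposures
```

Each returned exposure should collect the same sky counts as the first: lengthening through the evening as the sky darkens, or shortening through the morning as it brightens, until the limits are hit.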
Every time the script is run, flat fields are taken in each of the Baader \textit{LRGB} filters used by GOTO (see \aref{sec:filters}). In the evening flats start in the \textit{B} filter (as the sky progressively reddens as the Sun sets), progresses through \textit{G} and \textit{R}, and finishes on \textit{L} (as the \textit{L} filter has the widest bandpass). In the morning the sequence is reversed. Once the first set of flats is taken in the starting filter (\textit{B} in the evening, \textit{L} in the morning) a new starting exposure time ($E_0$) is calculated based on the relative difference in the filter bandpasses (see \aref{sec:filters}).
This method allows a reasonable set of flat fields to be taken in each filter most nights. The GOTOphoto pipeline (\aref{sec:gotophoto}) creates new master flat frames each month (the same is true of bias and dark frames); this means that taking new flats each night is important but not critical. If, for example, the Moon is too close to the anti-Sun point then flats can be skipped without causing any disruption. The routine also assumes a clear night and does not account for the presence of clouds in the field, however any poor-quality images are rejected by the pipeline when creating the master frames, and by taking new flats twice each night there should always be enough to create a valid master frame each month.
\end{colsection}
\subsection{Focusing the telescopes}
\label{sec:autofocus}
\begin{colsection}
The GOTO unit telescopes are designed to keep a stable focus through the night, and use carbon-fibre trusses to minimise any changes due to temperature fluctuations. Based on images taken during commissioning this is generally true, and the pilot only has to refocus the telescope once at the start of each night. Once the flats routine has finished the night marshal within the pilot runs the \texttt{autofocus.py} observing script (see \aref{sec:night_marshal}). To save time, all of the unit telescopes are focused at the same time, although completely independently.
The autofocus routine is based on the V-curve method of \citet{autofocus}, which measures the focus using the \glsfirst{hfd}. The HFD is defined as the diameter of a circle centred on a star within which half of the total flux lies, with the other half outside. As this parameter depends on the total spread of the flux rather than the height of the peak, it is far less sensitive to seeing fluctuations. Importantly, the HFD should vary linearly with focus position, forming a V-shaped curve with a fixed gradient either side of the best focus (unlike the FWHM, which forms a U-shaped, non-linear curve). The gradients of this V-curve (shown in \aref{fig:autofocus}) are a function of the telescope hardware: changes in seeing move the curve up and down while changes in temperature move it side-to-side, but the shape should remain the same. Therefore, once the V-curve has been defined for a given telescope, finding which part of the curve you are on allows you to use the known gradient and intercept to move directly to the minimum point, which gives the best focus.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/autofocus.pdf}
\end{center}
\caption[Steps to find the best focus position using the HFD V-curve]{
Steps to find the best focus position using the HFD V-curve.
}\label{fig:autofocus}
\end{figure}
As GOTO is a wide-field instrument no particular focus star needs to be picked, and instead focusing takes place at zenith assuming that there are always enough stars to focus on within the field. Images are windowed to just a central 2000$\times$2000 pixel area, to avoid any distortions around the edge of the field. Sources are extracted and the half flux diameter measured using the SEP Python package (Source Extraction and Photometry, \texttt{sep}\footnote{\url{https://sep.readthedocs.io}}) which implements the Source Extractor algorithms in Python \citep{SE}. This is done for all objects in the frame with a signal of more than $5\sigma$ above the background sky, excluding any sources that do not match a Gaussian shape i.e.\ are not point sources. This typically results in several hundred sources in each unit telescope, and the mean of all of the points is taken as the representative HFD for that image.
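As an illustration of the HFD definition, a pure-Python sketch might look like the following (this is not the actual SEP-based implementation, which works from extracted source parameters; the function is mine):

```python
import math


def half_flux_diameter(image, x0, y0, r_max):
    """Half-flux diameter of a star centred at pixel (x0, y0).

    The HFD is the diameter of the circle, centred on the star, that
    contains half of the total flux found within a radius r_max.
    `image` is a 2D list (rows of pixel values).
    """
    # collect (radius, flux) for every pixel within r_max of the centre
    pixels = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            r = math.hypot(x - x0, y - y0)
            if r <= r_max:
                pixels.append((r, value))
    pixels.sort()  # order by distance from the star centre
    total = sum(value for _, value in pixels)
    # walk outwards until half of the total flux is enclosed
    running = 0.0
    for r, value in pixels:
        running += value
        if running >= total / 2:
            return 2 * r
```

For a Gaussian profile the HFD coincides with the FWHM, $2\sqrt{2\ln 2}\,\sigma$, which gives a simple sanity check on the measurement.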
At the start of the focus routine an initial image is taken and starting HFD values are measured. The starting focus position is also recorded at this point, so that if the script fails for any reason the initial focus can be restored. It is assumed that the initial value should be fairly close to the ideal focus position, but it is not known which side of the ideal position it is (i.e.\ if it is on the positive or negative gradient side of the V-curve). This starting point is shown by the \textbf{?} marker in \aref{fig:autofocus}.
The first step of the routine is to move the focuser by a large positive quantity, to point \textbf{A} in \aref{fig:autofocus}, and measure the HFD.\@ This is done to make sure that the image is completely de-focused, and we are on a known side of the best focus position. At this point the V-curve might not even be linear, but as long as the measured HFDs have increased compared to the starting value then we can proceed.
The second step is a small step back in the opposite direction, towards the best focus position (point \textbf{B}). The HFD is measured again, and it should now be smaller than when measured at point \textbf{A} but still larger than the starting value. If this is not true then the script returns an error, as it is not possible to determine if we are on the correct side of the V-curve.
The next stage is to continue taking small steps in the same (negative) direction, until the measured HFD is less than double the \emph{near-focus value} (NFV) \glsadd{nfv}. The NFV is chosen for each telescope to be an HFD value in pixels that is approximately equal to the expected best-focus value, based on previous measurements (for GOTO the NFV is 7 pixels). Once at this point (point \textbf{C}) we should be well within the linear portion of the V-curve. At this stage the exact HFD values are important, so three consecutive images are taken at this focus position and the smallest of the HFD values is taken as the first point on the V-curve. The HFD values between images will change due to external factors, such as seeing or windshake, but these will only ever make the HFD worse than the ``true'' value, never better. Therefore taking the minimum reduces the effect of these external factors on the measured HFD values.
Once the HFD value has been well measured at point \textbf{C} then the near-focus position (point \textbf{D}), the position that should produce an HFD equal to the near-focus value, can be found with
\begin{equation}
F_\text{NF} = F + \frac{\text{NFV} - D(F)}{m_\text{R}}
\label{eq:nearfocus}
\end{equation}
where $F$ is the current focus position, $D(F)$ is the current HFD and $m_\text{R}$ is the known gradient of the right-hand side of the V-curve --- this is just applying the equation of a straight line between two points.
Once the near-focus position has been found the focuser is moved to that position (point \textbf{D}) and the HFD is measured three times again. Now that we have a known $F_\text{NF}$ and $D(F_\text{NF})$ on the right-hand side of the V-curve the best focus position ($F_\text{BF}$) is given by the meeting point of the two lines shown in \aref{fig:autofocus}, which can be calculated using
\begin{equation}
\begin{split}
c_1 & = D(F_\text{NF}) - m_\text{R} F_\text{NF}, \\
c_2 & = m_\text{L}(\frac{c_1}{m_\text{R}} - \delta), \\
F_\text{BF} & = \frac{c_2 - c_1}{m_\text{R} - m_\text{L}},
\end{split}
\label{eq:bestfocus}
\end{equation}
where $m_\text{L}$ and $m_\text{R}$ are the gradients of the two lines and $\delta$ is the difference between the focus positions at which they intercept the horizontal axis. The focuser is then moved to the best focus position, the HFD values are recorded and the script is complete. This method has proven reliable for focusing the GOTO telescopes on a nightly basis, although the optical aberrations produce badly-focused regions in the corners of the frames (described in \aref{sec:timeline}).
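The two calculations above translate directly into code (an illustrative sketch; the gradients, positions and NFV used in the example below are placeholder values, not the calibrated GOTO ones):

```python
def near_focus_position(F, D, m_R, nfv):
    """Predict the focuser position that should give an HFD equal to the
    near-focus value, from a measured point (F, D) on the right-hand
    branch of the V-curve with gradient m_R (a straight-line step)."""
    return F + (nfv - D) / m_R


def best_focus_position(F_nf, D_nf, m_L, m_R, delta):
    """Intersect the two branches of the V-curve to find the best focus,
    given a measured point (F_nf, D_nf) on the right-hand branch.

    delta is the offset between the focus positions at which the two
    extrapolated lines reach zero HFD."""
    c1 = D_nf - m_R * F_nf          # intercept of the right-hand line
    c2 = m_L * (c1 / m_R - delta)   # intercept of the left-hand line
    return (c2 - c1) / (m_R - m_L)  # where the two lines meet
```

With a symmetric V-curve ($\delta = 0$) the two extrapolated lines cross zero HFD at the same focuser position, and the intersection recovers that position exactly.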
\aref{fig:focus_time} shows the focus values (half-flux diameter) measured from every image taken over a single night of observing. The HFD values vary between 3--5 pixels, and although there are some fluctuations there is no clear focus drift over the course of the night. \aref{fig:focus_temp} shows the same values plotted instead as a function of temperature; again, no trends are visible. This is just a single sample from one night, and over a longer term there will be shifts in the best focus position. However, refocusing just once in the evening appears to produce a stable-enough position to last through the night.
It has been suggested that future GOTO unit telescopes might use an enclosed, prime-focus tube instead of the current Newtonian design with carbon fibre trusses (see \aref{sec:optics}). Solid metal tubes are much more sensitive to temperature variations and therefore need to continually be refocused during the night. To account for this a refocus step could be added into the exposure queue daemon (see \aref{sec:exq}) at the same stage that the filters are changed.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/focus.png}
\end{center}
\caption[Measured focus values over a night of observations]{
Measured focus values (mean half-flux diameter) over a night of observations. For clarity, error bars are only plotted on every 10th point.
}\label{fig:focus_time}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/focus_temp.png}
\end{center}
\caption[Measured focus values against temperature]{
Measured focus values against temperature inside the dome.
}\label{fig:focus_temp}
\end{figure}
\clearpage
\end{colsection}
\subsection{Mount pointing and stability}
\label{sec:pointxp}
\begin{colsection}
As described in \aref{sec:mount}, using the SiTech-provided mount software (SiTechEXE) required a Windows computer for it to run on, as well as additional development effort to allow the rest of the software to interact with it. However once this was implemented it allowed us to use the various utilities built into SiTechEXE, including the pointing modelling software PointXP.\@ Using this software meant it was not necessary to create our own pointing model within the mount daemon, as once a model is created with PointXP any commands sent to SiTechEXE have the model applied before slewing.
In order to create a pointing model using PointXP the camera output from one of the telescopes needs to be connected to the Windows NUC running the software, and the rest of the G-TeCS software must be disabled to ensure PointXP has full control. The software creates a grid of equally spaced pointings at a range of altitudes and azimuths, as shown on the sky chart in \aref{fig:pointing_model}, then takes CCD images at each position, extracts the position of sources in each frame, and calculates the pointing model transformations to best convert requested coordinates to mount axis positions.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/pointing_model.png}
\end{center}
\caption[Creating a pointing model using PointXP]{
Creating a pointing model using PointXP.\@
}\label{fig:pointing_model}
\end{figure}
One of the complications when creating a pointing model for GOTO is that the model can only be based on the output of a single unit telescope (as PointXP only expects a single camera input). When aligned into the 3-UT configuration, as shown in \aref{fig:3ut_footprint}, it was simple to create the model using the central telescope (UT4), but in the 4-UT configuration (\aref{fig:4ut_footprint}) this is not an option. The chosen camera needs to be physically disconnected from the interface NUC and connected to the Windows mount NUC that PointXP is running on, which prevents creating a pointing model unless there is someone on-site. One suggestion has been to add a small guide telescope in the centre of the array which could be permanently connected to the Windows NUC; the pointing model could then be based on its images.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/pointing.png}
\end{center}
\caption[Pointing errors over a single night]{
Pointing errors (the difference between the target and actual image positions) taken from a single night of observations, 489 exposures in total. The error bars for each UT show the standard deviation of points in each axis.
}\label{fig:pointing}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/tracking.png}
\end{center}
\caption[Position drift over a 30 minute observation]{
Position drift of sources in a series of images taken over 30 minutes. The darker solid lines show the average of all the sources detected in each UT.\
}\label{fig:tracking}
\end{figure}
\aref{fig:pointing} shows the pointing error for each unit telescope from all images taken in a single night of normal observations after creating a new pointing model with PointXP, at various different altitudes and azimuths. The error is found as the difference between the target position reported by the mount and the actual image centre found by the GOTOphoto pipeline. As each UT has a unique offset each subplot is centred on the mean offset for that UT.\ The pointing errors vary between 8--\SI{10}{\arcmin} in right ascension and 1--\SI{3}{\arcmin} in declination, with clear variations between the unit telescopes. The patterns of points for the different unit telescopes also appear to be uncorrelated. This illustrates one of the major complications due to the GOTO mount design: each unit telescope will flex and shift slightly on its own bracket attaching it to the boom arm, in addition to the pointing of the overall mount. The original brackets mounting the unit telescopes to the boom arm were much worse and occasionally allowed unit telescopes to drift several degrees from their original position. These were replaced in July 2018, and since then the models created using PointXP have been perfectly good for use with GOTO.\@ In practice even being a few arcminutes off the desired pointing is minor compared to GOTO's large field of view, and can easily be accounted for when the images are processed by the GOTOphoto pipeline.
GOTO does not include an autoguiding system, although the proposed guide scope mentioned above could be used as one. In the absence of this the telescope must be able to track accurately. The initial mount drives suffered from tracking problems, and were very sensitive to any imbalances in the weight distribution. The motors were replaced in November 2017, and since then have been much more reliable, although it is still important to ensure that the mount is balanced. \aref{fig:tracking} shows the drift of sources taken from a series of exposures of the same target over 30 minutes, revealing a maximum drift of approximately 80 pixels/hour, or \SI{1.65}{\arcmin} (using the plate scale of \SI[per-mode=symbol]{1.24}{\arcsec\per\pixel}). Again each unit telescope has a slightly different drift in different directions (for example UT1 is very stable in the $x$ direction (right ascension) but has the biggest drift in $y$ (declination)). In practice GOTO usually switches targets every few minutes, so the long-term tracking performance over several hours is not a major concern. GOTO also typically only takes \SI{60}{\second} exposures, and no significant trailing is seen in these images.
\end{colsection}
\subsection{Other commissioning challenges}
\label{sec:challenges}
\begin{colsection}
In this section I outline a few of the changes that had to be made to the software based on experience with the hardware. This is not an exhaustive list, but gives some examples of the challenges that are typical when commissioning a facility such as GOTO.\@
\subsubsection{Filter wheel serial numbers}
One of the hardware issues that was identified early on concerned identifying the filter wheels when they were connected to the interface NUCs. The usual way to connect to specific hardware units through the FLI-API code is to search the connected USB devices for their unique FLI serial numbers (for example the serial numbers of the GOTO cameras given in \aref{tab:cameras}). However, the initial set of CFW9--5 filter wheels delivered to us by FLI did not have serial numbers defined in their firmware. Two filter wheels are connected to each interface NUC, and this problem made it impossible to tell them apart or to send a command to a particular filter wheel.
A solution was eventually found using the pyudev Python package (\texttt{pyudev}\footnote{\url{https://pyudev.readthedocs.io}}), which uses the Linux udev device manager to identify devices by their \texttt{/dev/} name, and therefore create a pseudo-serial number based on which port each device is connected to. Using this method it is possible to tell the filter wheels apart, as long as it is known which physical USB port each is plugged into. This is not an ideal solution, but as long as the USB cables remain connected it is not an issue even if the NUCs are rebooted.
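The idea behind the pseudo-serial can be illustrated as follows (a sketch only; the actual daemon works from \texttt{pyudev} device paths, and the naming scheme and port-to-wheel mapping here are hypothetical):

```python
def pseudo_serial(devpath: str) -> str:
    """Build a stable pseudo-serial number from a udev device path.

    The final component of the path encodes the physical USB topology
    (bus and port chain), which does not change across reboots as long
    as the cable stays plugged into the same socket.
    """
    port_chain = devpath.rstrip("/").rsplit("/", 1)[-1]
    return "USB-" + port_chain


# Map port chains to filter wheels, recorded when the cables are plugged in
# (hypothetical example values)
WHEEL_PORTS = {"USB-1-3.2": "FW1", "USB-1-3.4": "FW2"}
```

A lookup against this mapping then stands in for the missing firmware serial number when the daemon opens a connection to a particular wheel.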
\subsubsection{Downloading images from the cameras}
One of the more complicated parts of the camera control software is reading images from the FLI cameras. Once an exposure has finished, photo-electrons from the CCD are read out and stored as counts in a memory buffer on the camera, from where they can be downloaded by USB.\@ A last-minute change meant the GOTO cameras used new, larger detectors than the cameras were originally designed for, leaving not enough space in the camera buffer to store a full-frame image. This was discovered when corrupted images such as the one shown in \aref{fig:cam_readout} were being produced; the regions in the lower third of the image are just duplicates of the data in the upper third, meaning the original data in this section was lost.
The cameras return a \texttt{DataReady} status once the exposure has finished and the data is ready to be read out. However, when installing G-TeCS on site it was clear that the camera daemon was not able to reliably start downloading the data from the cameras quickly enough to clear space in the internal buffer before it started being overwritten. The solution was to add an internal image queue within the camera class, which would immediately begin downloading from the cameras as soon as they reported the exposure was finished. This means the camera data is stored within the memory on the interface NUCs, and the camera daemon then queries the interface (see \aref{sec:fli}) to download the image across the network and write it to disk.
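The queue design can be sketched as follows (illustrative Python, not the actual camera class; \texttt{fetch\_frame} stands in for the blocking FLI-API readout call):

```python
import queue
import threading

class CameraReader:
    """Illustrative sketch of the internal image queue: frames are
    pulled off the camera as soon as it reports DataReady, so the
    small on-camera buffer is freed before it can be overwritten.
    Writing to disk happens later, from the queue."""

    def __init__(self, camera):
        self.camera = camera               # object with a fetch_frame() method
        self.image_queue = queue.Queue()   # frames held in NUC memory
        self._thread = threading.Thread(target=self._reader, daemon=True)
        self._thread.start()

    def _reader(self):
        while True:
            frame = self.camera.fetch_frame()   # blocks until DataReady
            if frame is None:                   # camera shut down
                break
            self.image_queue.put(frame)

    def next_image(self):
        """Called by the camera daemon to retrieve a downloaded frame."""
        return self.image_queue.get()
```

The key property is that the reader thread never waits on the daemon: the camera buffer is drained at full USB speed regardless of how long the network transfer or disk write takes.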
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/cam_readout.png}
\end{center}
\caption[A corrupted image which was not read out fast enough]{
An example of a corrupted image from one of the FLI MicroLine cameras which was not read out fast enough.
}\label{fig:cam_readout}
\end{figure}
This solution did run into a few problems with a feature of the Python programming language called the \glsfirst{gil}, which prevents multiple threads from executing Python bytecode at the same time (the exact workings of the GIL are outside of the scope of this thesis). In short, this prevented reading out the cameras to the internal queue in parallel, which added an extra delay. Luckily, the FLI-API package was not written in standard Python (technically CPython) but in Cython, which interfaces between the FLI SDK written in C and the rest of the control system written in Python. Cython retains the GIL to maintain compatibility with CPython; however, it can be released around code that does not touch Python objects. Doing this allowed the cameras to be read out in parallel as intended.
\newpage
\subsubsection{Declination axis encoder failure}
Just a few weeks after the inauguration the mount declination axis encoder failed, preventing any automated slewing in the declination axis (the telescope could still be moved manually with the hand-pad). Of the two axes this was by far the better one to fail; had the RA axis failed instead, the telescope would not have been able to track and so no observations could have been taken. Instead, the telescope could at least still take on-sky images, and commissioning the optics and the camera control software was able to continue by manually moving the mount.
However, the SiTech control software was not able to cope with the disabled declination axis, which meant that the telescope could not be operated in robotic mode. When sent a command to slew to a position, the mount would move to the correct RA but would never reach the target (as it could not move in declination) and therefore would not start tracking. Even sending commands to move only in RA (i.e.\ keeping the same declination position) did not work: the mount would reach the correct position, but the declination encoder would never register reaching the target, so the slew was never marked as `complete' and the mount would not start tracking.
A workaround was therefore coded into the mount daemon: a separate thread which monitored the RA position and stopped the mount moving when the target RA coordinates were reached, regardless of the declination coordinates. This forced the SiTech slew command to reset, meaning it could start tracking. When this modification was in place GOTO was able to observe `normally', and was able to carry out a survey in a limited declination band of the sky. Had an important gravitational-wave alert come through during this period, the person on-site would have had to move the telescope to the correct declination and then manually carry out observations. Thankfully this was not needed, and, as described in \aref{sec:timeline}, once O2 ended GOTO was shut down until the motors could be replaced.
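The workaround can be sketched as follows (an illustrative simplification of the mount daemon thread; the \texttt{get\_ra} and \texttt{halt} methods stand in for the real SiTech interface):

```python
import threading
import time

def ra_watchdog(mount, target_ra, tolerance=0.01, poll=0.5):
    """Illustrative sketch of the declination-encoder workaround:
    monitor the RA axis during a slew and halt the mount once the
    target RA is reached, regardless of the (dead) declination axis,
    forcing SiTech's slew command to reset so tracking can start.

    `mount` is assumed to provide get_ra() and halt() methods;
    coordinates and tolerance are in degrees.
    """
    def _watch():
        while abs(mount.get_ra() - target_ra) > tolerance:
            time.sleep(poll)          # poll the RA encoder
        mount.halt()                  # stop the mount at the target RA
    thread = threading.Thread(target=_watch, daemon=True)
    thread.start()
    return thread
```

In the real system this ran alongside every slew command while the encoder was broken, and was removed once the motors were replaced.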
\subsubsection{Dome movement}
The Astrohaven clamshell dome is driven by internal belts attached to the dome shutters. It is important when moving the dome not to put undue stress on these belts, as should one of them dislodge or snap there is nothing preventing the dome from falling open. As described in \aref{sec:dome}, the dome motors are deliberately moved in short bursts rather than continually when opening. This prevents the shutters being pulled down too fast, which can cause the upper shutter to fall and put excess stress on the belts.
As mentioned in \aref{sec:arduino}, the dome has also occasionally opened past its limits when the in-built switches fail to trigger, meaning the motors drive the shutters into the ground. The extra limit switches we installed provide a backup that cuts the motors when they are triggered, and also give the dome daemon a way to detect when the shutters overshoot so it can move them back up.
One of the more serious hardware problems occurred a few days after I left La Palma in February 2018. During freezing conditions, ice had built up on the upper dome shutter, and eventually was heavy enough to partially pull the shutter open past its limits, exposing the telescope to the elements (see \aref{fig:ice_internal}). I was able to remotely move the dome and drag the shutter closed, but a large amount of ice fell into the dome. Luckily, Vik Dhillon and Stu Littlefair were still at the observatory, along with Tom Marsh, from Warwick, and my replacement GOTO monitor, Tom Watts from Armagh. As shown in \aref{fig:ice_external}, they were able to get up the mountain to clear ice from inside and outside the dome, as well as place a tarpaulin over the telescope.
Once informed about the event, Astrohaven manufactured brackets to fit inside the dome to prevent the upper shutter from being forced open in this way. The dome falling open was accompanied by a sharp drop in the internal temperature, which was the motivation to add the \texttt{internal} and \texttt{ice} flags into the conditions monitor (see \aref{sec:conditions}), to alert us should similar conditions occur in the future.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.45\linewidth]{images/ice_open.jpeg}
\includegraphics[width=0.45\linewidth]{images/ice_closed.jpeg}
\end{center}
\caption[Internal webcam images showing the dome open during a snowstorm]{
Internal webcam images showing the dome open during the 2018 snowstorm. The image on the left was taken when the opening was discovered, with the upper shutter (which normally closes on the south side, to the left of the image) having been pulled open by the weight of ice built up on the north side. The image on the right was taken after closing the shutter remotely. Moving the dome caused a large amount of ice to dislodge and fall into the dome, thankfully missing the mirrors and camera hardware.
}\label{fig:ice_internal}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.88\linewidth]{images/ice_outside.jpeg}
\end{center}
\caption[External webcam image showing the ice rescue team]{
External webcam image showing the ice rescue team. Note the build up of ice visible on the northern side of the upper shutter of the (empty) left-hand dome. A similar build up caused the upper shutter on the right-hand dome containing GOTO to be pulled open.
}\label{fig:ice_external}
\end{figure}
\clearpage
\subsubsection{Errors processing GW alerts}
Ensuring the stability of the GOTO-alert event handling system described in \aref{chap:alerts} was a high priority when commissioning the GOTO software in the run-up to the third LIGO-Virgo observing run (O3). The LVC began producing test GCN notices in December 2018 prior to the beginning of the run, which included mock skymaps similar to the ones used for the simulations in \aref{sec:scheduler_sims} and \aref{sec:gw_sims}. These events allowed a full system test of the GOTO-alert code, from the VOEvent being received to the pointings being added to the observation database.
Between the start of O3 in April 2019 and the end of August there were 32 gravitational-wave alerts, of which seven were ultimately retracted; the G-TeCS sentinel (see \aref{sec:sentinel}) received each event and processed it using the GOTO-alert event handler code. A full run-down of the response to each event is given in \aref{sec:gw_results}. A few complications did arise during O3 which required changes to the software, mostly due to problems at the LVC's end. These are outlined below.
\begin{itemize}
\item Initially each notice sent out by the LVC had to be approved manually by a LIGO-Virgo member, which led to some delays to follow-up observations being triggered. The initial alert for S190421ar was delayed until several hours after the event, even though the skymap had already been uploaded by the LVC to the GraceDB service. A possible addition to the G-TeCS sentinel was proposed: a separate thread that could query GraceDB to check for new skymaps. If it detected one, the skymap could quickly be processed to start GOTO observing, even before the ``official'' notice was sent out. However, since the first few events the delay between the gravitational-wave detection and the notice being sent out has been much shorter, and this modification was never implemented.
\item An updated skymap for the S190426c event was uploaded to GraceDB by the LVC with the wrong permissions, causing the sentinel to raise an error when it could not download it from the URL in the notice. Unfortunately the event handler (see \aref{sec:event_handler}) had already deleted the existing pointings from the observation database before crashing, preventing GOTO from observing the previous tiles. After this event the order of functions in the event handler was changed, so that existing pointings were only removed once the new skymap had been downloaded and processed.
\item The second skymap for the S190521g event was initially uploaded in an uncommon HEALPix format (see \aref{sec:healpix}) used internally in the LVC, which could not be read by GOTO-tile. Due to the above changes made following the S190426c event this did not interrupt GOTO observations, and the LVC have since clarified their filetypes.
\item Finally, the updated skymap for the S190814bv event was incorrectly uploaded by the LVC to GraceDB with the same filename (\texttt{bayestar.fits.gz}) as the initial skymap (typically updates are called \texttt{bayestar1.fits.gz}, etc). By default, the Astropy FITS download function used within the sentinel caches each file, and if asked to download a file from the same URL will instead use the cached version. This led to GOTO continuing to observe the large initial skymap instead of focusing on the smaller region given in the updated map. Again, the LVC have said that this will be prevented in the future, but just in case the sentinel was patched to disable the caching feature.
\end{itemize}
\end{colsection}
\section{Summary and Conclusions}
\label{sec:commissioning_conclusion}
\begin{colsection}
In this chapter I described work carried out during the GOTO commissioning period on La Palma.
The GOTO prototype suffered several delays before finally being deployed in the summer of 2017. After that a series of hardware issues and failures led to several elements being replaced, in particular two of the sets of mirrors. The final full prototype with four unit telescopes started reliable operations in February 2019, in time for the start of the third LIGO-Virgo observing run.
In amongst the hardware problems I installed, commissioned and developed the control software described in the previous chapters. The core G-TeCS hardware control systems (\aref{chap:gtecs}) were primarily developed before and during the delay in deployment; in particular, I built and integrated several hardware units in the dome to ensure the safety of the telescope and of any operators on site. The rest of the commissioning period was focused on developing the autonomous systems (\aref{chap:autonomous}), until in May 2018 the telescope was trusted to operate entirely robotically without full-time supervision. The G-TeCS software has proven itself to be reliable, and should provide a framework to build upon as GOTO expands.
\end{colsection}
\chapter{Conclusions and Future Work}
\label{chap:conclusion}
\chaptoc{}
\section{Summary and Conclusions}
\label{sec:conclusion}
\begin{colsection}
In this thesis I have described my work as part of the GOTO project, primarily working on the control software in order to create a fully-autonomous robotic telescope. After several years of development and commissioning the prototype GOTO telescope is fully operational, and observing from La Palma most nights with no human interaction.
\end{colsection}
\subsection{Telescope control}
\label{sec:control_results}
\begin{colsection}
The core of my work has been the \glsfirst{gtecs}, a Python software package that controls every aspect of the telescope. The hardware control daemons interface with the dome, mount and cameras (see \aref{chap:gtecs}), while the ``pilot'' master control program and its associated systems allow the telescope to function with no human involvement (see \aref{chap:autonomous}). GOTO has now been operating successfully for years with the pilot in full control. The conditions monitoring systems have proven robust enough to trust the dome to close in bad weather, and when the occasional unexpected hardware issue does occur the pilot recovery systems can fix the problem and resume observing in the majority of cases, often before a human even has time to log in. Of course, commissioning was not entirely without incident, as described in \aref{chap:commissioning}. However, all of the software challenges were overcome, and the majority of the delays to GOTO were due to hardware faults which were out of my purview.
Each set of exposures taken with the G-TeCS camera daemon are assigned an incremental run number. From the initial installation in the summer of 2017 up until September 2019, GOTO had taken over 185,000 such exposure sets, and produced many tens of terabytes of data. The current all-sky survey that began in February 2019 has almost completely covered the northern sky, and at the time of writing the GOTO photometry database contains approximately 642 million sources from almost 500,000 individual frames taken since the start of the survey.
\newpage
\end{colsection}
\subsection{Scheduling and alert follow-up}
\label{sec:gw_results}
\begin{colsection}
As outlined in \aref{sec:control_requirements}, GOTO needed an observation scheduling system that could deal with both the survey and gravitational-wave follow-up modes. The scheduler used by G-TeCS is a just-in-time system (see \aref{chap:scheduling}), where the highest priority target is recalculated every time the scheduler is called. This makes it very reactive to transient alerts, which was a requirement of the project.
As previously detailed in \aref{sec:gw_detections}, in the first five months of the third LIGO-Virgo observing run (O3) 32 gravitational-wave events were detected, all predicted to come from compact binary mergers. These events are listed in \aref{tab:gw_log}. Of the 32 alerts seven were ultimately retracted by the LVC, leaving 25 real events, and only three of these (S190425z, S190426c and S190814bv) are thought to have originated from sources that could produce visible optical counterparts (binary neutron stars or neutron star-black hole binaries). However, events are currently infrequent enough for GOTO to react to all of them, even if there was a very low chance of there being a visible counterpart.
The G-TeCS sentinel received and reacted to every one of these alerts. In a few cases the event handler initially failed to process the VOEvent or the skymap, as described in \aref{sec:challenges}; this was usually due to a problem on the LVC end, and after each occurrence changes were made to the GOTO-alert code to work around the problem should it happen again. For every alert the sentinel received the VOEvent packet and passed it to the event handler code as described in \aref{chap:alerts}, which added pointings to the observation database that could then be observed by the pilot (see \aref{chap:autonomous}). Details of GOTO's reaction to every event are given in \aref{tab:obs_log}. Observations were taken for 25 of the 32 events; of the remaining seven events, four were received during the day on La Palma and were then retracted before sunset, meaning the pointings were deleted from the database, and the last three events had no part of the skymap visible from La Palma.
\newpage
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{table}[t]
\begin{footnotesize}
\begin{center}
\begin{tabular}{l|ccrrl} %
& GW signal & Source & \multicolumn{1}{c}{Dist.} & \multicolumn{1}{c}{90\% area} & \multicolumn{1}{c}{False Alarm} \\
\multicolumn{1}{c|}{Event} & detection time & Classification & \multicolumn{1}{c}{(Mpc)} & \multicolumn{1}{c}{(sq deg)} & \multicolumn{1}{c}{Rate} \\
\midrule
\textcolor{Red}{S190405ar} & 2019--04--05 16:01:30 & Terrestrial & 268 & 2677 & 1 per 0.00015 yrs \\ %
S190408an & 2019--04--08 18:18:02 & \textcolorbf{BrickRed}{BBH} & 1473 & 386 & 1 per \SI{1.1e+10} yrs \\ %
S190412m & 2019--04--12 05:30:44 & \textcolorbf{BrickRed}{BBH} & 812 & 157 & 1 per \SI{1.9e+19} yrs \\ %
S190421ar & 2019--04--21 21:38:56 & \textcolorbf{BrickRed}{BBH} & 1628 & 1443 & 1 per 2.1 yrs \\ %
S190425z & 2019--04--25 08:18:05 & \textcolorbf{Cerulean}{BNS} & 156 & 7461 & 1 per 69882 yrs \\ %
S190426c & 2019--04--26 15:21:55 & \textcolorbf{Cerulean}{BNS}/\textcolorbf{Purple}{NSBH}/\textcolorbf{Green}{MG} & 377 & 1131 & 1 per 1.6 yrs \\ %
S190503bf & 2019--05--03 18:54:04 & \textcolorbf{BrickRed}{BBH} & 421 & 448 & 1 per 19.4 yrs \\ %
S190510g & 2019--05--10 02:59:39 & \textcolorbf{Cerulean}{BNS} & 227 & 1166 & 1 per 3.6 yrs \\ %
S190512at & 2019--05--12 18:07:14 & \textcolorbf{BrickRed}{BBH} & 1388 & 252 & 1 per 16.7 yrs \\ %
S190513bm & 2019--05--13 20:54:28 & \textcolorbf{BrickRed}{BBH} & 1987 & 691 & 1 per 84922 yrs \\ %
S190517h & 2019--05--17 05:51:01 & \textcolorbf{BrickRed}{BBH} & 2950 & 939 & 1 per 13.4 yrs \\ %
\textcolor{Red}{S190518bb} & 2019--05--18 19:19:19 & \textcolorbf{Cerulean}{BNS} & 28 & 136 & 1 per 3.2 yrs \\ %
S190519bj & 2019--05--19 15:35:44 & \textcolorbf{BrickRed}{BBH} & 3154 & 967 & 1 per 5.6 yrs \\ %
S190521g & 2019--05--21 03:02:29 & \textcolorbf{BrickRed}{BBH} & 3931 & 765 & 1 per 8.3 yrs \\ %
S190521r & 2019--05--21 07:43:59 & \textcolorbf{BrickRed}{BBH} & 1136 & 488 & 1 per 100 yrs \\ %
\textcolor{Red}{S190524q} & 2019--05--24 04:52:06 & \textcolorbf{Cerulean}{BNS} & 192 & 5685 & 1 per 4.5 yrs \\ %
S190602aq & 2019--06--02 17:59:27 & \textcolorbf{BrickRed}{BBH} & 797 & 1172 & 1 per 16.7 yrs \\ %
S190630ag & 2019--06--30 18:52:05 & \textcolorbf{BrickRed}{BBH} & 926 & 1483 & 1 per 220922 yrs \\ %
S190701ah & 2019--07--01 20:33:06 & \textcolorbf{BrickRed}{BBH} & 1849 & 49 & 1 per 1.7 yrs \\ %
S190706ai & 2019--07--06 22:26:41 & \textcolorbf{BrickRed}{BBH} & 5263 & 825 & 1 per 16.7 yrs \\ %
S190707q & 2019--07--07 09:33:26 & \textcolorbf{BrickRed}{BBH} & 781 & 921 & 1 per 6023 yrs \\ %
S190718y & 2019--07--18 14:35:12 & Terrestrial & 227 & 7246 & 1 per 0.9 yrs \\ %
S190720a & 2019--07--20 00:08:36 & \textcolorbf{BrickRed}{BBH} & 869 & 443 & 1 per 8.3 yrs \\ %
S190727h & 2019--07--27 06:03:33 & \textcolorbf{BrickRed}{BBH} & 2839 & 152 & 1 per 230 yrs \\ %
S190728q & 2019--07--28 06:45:10 & \textcolorbf{BrickRed}{BBH} & 874 & 105 & 1 per \SI{1.3e+15} yrs \\ %
\textcolor{Red}{S190808ae} & 2019--08--08 22:21:21 & \textcolorbf{Cerulean}{BNS} & 208 & 5365 & 1 per 0.9 yrs \\ %
S190814bv & 2019--08--14 21:10:39 & \textcolorbf{Purple}{NSBH} & 267 & 24 & 1 per \SI{1.6e+25} yrs \\ %
\textcolor{Red}{S190816i} & 2019--08--16 13:04:31 & \textcolorbf{Purple}{NSBH} & 261 & 1467 & 1 per 2.2 yrs \\ %
\textcolor{Red}{S190822c} & 2019--08--22 01:29:59 & \textcolorbf{Cerulean}{BNS} & 35 & 2767 & 1 per \SI{5.2e+09} yrs \\ %
S190828j & 2019--08--28 06:34:05 & \textcolorbf{BrickRed}{BBH} & 1946 & 228 & 1 per \SI{3.7e+13} yrs \\ %
S190828l & 2019--08--28 06:55:09 & \textcolorbf{BrickRed}{BBH} & 1528 & 358 & 1 per 685.0 yrs \\ %
\textcolor{Red}{S190829u} & 2019--08--29 21:05:56 & \textcolorbf{Green}{MG} & 157 & 8972 & 1 per 6.2 yrs \\ %
\end{tabular}
\end{center}
\end{footnotesize}
\caption[GW detections from O3 so far]{
All 32 detections of gravitational-wave signals made during O3, up to the end of August 2019. Events in \textcolorbf{Red}{red} were ultimately retracted by the LVC.\@ All the given values are from the latest issued alert and final skymaps. The distance is the peak of the distribution included in the alert, and the area given is the area contained within the 90\% skymap contour level (see \aref{sec:healpix}).
}\label{tab:gw_log}
\end{table}
\clearpage
\begin{table}[t]
\begin{footnotesize}
\begin{center}
\begin{tabular}{l|ccrrrr} %
& Time alert & Time of first & \multicolumn{1}{c}{Time} & & & \\
\multicolumn{1}{c|}{Event} & received & observation & \multicolumn{1}{c}{delay} & $N_\text{obs}$ & $N_\text{tiles}$ & \multicolumn{1}{c}{$P_\text{obs}$} \\
\midrule
\textcolor{Red}{S190405ar} & 2019--04--12 15:07:26 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Event retracted before becoming visible)}}} \\
S190408an & 2019--04--08 19:02:50 & 2019--04--09 05:40:39 & \SI{10.63}{\hour} & 17 & 9 & 22.5\% \\
S190412m & 2019--04--12 06:31:39 & 2019--04--12 20:28:35 & \SI{13.95}{\hour} & 36 & 18 & 96.1\% \\
S190421ar & 2019--04--22 16:26:24 & 2019--04--23 21:54:59 & \SI{29.48}{\hour} & 49 & 7 & 10.2\% \\
S190425z & 2019--04--25 09:00:56 & 2019--04--25 20:38:22 & \SI{11.62}{\hour} & 306 & 173 & 22.6\% \\
S190426c & 2019--04--26 15:47:11 & 2019--04--26 20:38:45 & \SI{ 4.86}{\hour} & 96 & 49 & 55.6\% \\
S190503bf & 2019--05--03 19:30:15 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Skymap not visible from La Palma)}}} \\
S190510g & 2019--05--10 04:21:59 & 2019--05--10 04:22:55 & \textcolorbf{NavyBlue}{56\,s} & 7 & 7 & 0.2\% \\
S190512at & 2019--05--12 18:59:01 & 2019--05--12 20:53:20 & \SI{ 1.91}{\hour} & 201 & 19 & 89.1\% \\
S190513bm & 2019--05--13 21:21:51 & 2019--05--13 21:26:19 & \textcolorbf{NavyBlue}{4\,min} & 38 & 7 & 30.2\% \\
S190517h & 2019--05--17 06:26:48 & 2019--05--17 21:42:06 & \SI{15.26}{\hour} & 9 & 7 & 15.4\% \\
\textcolor{Red}{S190518bb} & 2019--05--18 19:25:49 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Event retracted before becoming visible)}}} \\
S190519bj & 2019--05--19 17:01:40 & 2019--05--19 20:55:19 & \SI{ 3.89}{\hour} & 139 & 42 & 78.7\% \\
S190521g & 2019--05--21 03:08:49 & 2019--05--21 03:09:17 & \textcolorbf{NavyBlue}{28\,s} & 58 & 24 & 44.5\% \\
S190521r & 2019--05--21 07:50:27 & 2019--05--21 22:54:03 & \SI{15.06}{\hour} & 90 & 45 & 94.0\% \\
\textcolor{Red}{S190524q} & 2019--05--24 04:58:40 & 2019--05--24 04:59:33 & \textcolorbf{NavyBlue}{53\,s} & 2 & 2 & 14.2\% \\
S190602aq & 2019--06--02 18:06:01 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Skymap not visible from La Palma)}}} \\
S190630ag & 2019--06--30 18:55:47 & 2019--06--30 21:14:49 & \SI{ 2.32}{\hour} & 149 & 75 & 62.9\% \\
S190701ah & 2019--07--01 20:38:06 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Skymap not visible from La Palma)}}} \\
S190706ai & 2019--07--06 22:44:31 & 2019--07--06 22:45:09 & \textcolorbf{NavyBlue}{38\,s} & 70 & 35 & 27.0\% \\
S190707q & 2019--07--07 10:13:24 & 2019--07--07 21:54:47 & \SI{11.69}{\hour} & 116 & 58 & 41.0\% \\
S190718y & 2019--07--18 15:03:13 & 2019--07--18 21:08:53 & \SI{ 6.09}{\hour} & 135 & 15 & 61.5\% \\
S190720a & 2019--07--20 00:11:26 & 2019--07--20 00:11:57 & \textcolorbf{NavyBlue}{31\,s} & 175 & 87 & 83.9\% \\
S190727h & 2019--07--27 06:12:02 & 2019--07--27 21:03:40 & \SI{14.86}{\hour} & 94 & 47 & 42.4\% \\
S190728q & 2019--07--28 06:59:32 & 2019--07--28 21:29:58 & \SI{14.51}{\hour} & 36 & 9 & 90.5\% \\
\textcolor{Red}{S190808ae} & 2019--08--08 22:28:00 & 2019--08--08 22:28:31 & \textcolorbf{NavyBlue}{31\,s} & 75 & 31 & 17.3\% \\
S190814bv & 2019--08--14 21:31:44 & 2019--08--14 22:59:27 & \SI{ 1.46}{\hour} & 141 & 45 & 95.4\% \\
\textcolor{Red}{S190816i} & 2019--08--16 13:11:35 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Event retracted before becoming visible)}}} \\
\textcolor{Red}{S190822c} & 2019--08--22 01:37:00 & 2019--08--22 01:37:30 & \textcolorbf{NavyBlue}{30\,s} & 17 & 8 & 2.9\% \\
S190828j & 2019--08--28 06:50:14 & 2019--08--28 22:38:25 & \SI{15.80}{\hour} & 54 & 27 & 9.3\% \\
S190828l & 2019--08--28 07:17:46 & 2019--08--28 23:48:38 & \SI{16.51}{\hour} & 56 & 28 & 2.0\% \\
\textcolor{Red}{S190829u} & 2019--08--29 21:17:14 & \multicolumn{5}{l}{\textcolor{Thistle}{\textit{(Not observed --- Event retracted before becoming visible)}}} \\
\end{tabular}
\end{center}
\end{footnotesize}
\caption[GOTO observation log for O3 events so far]{
GOTO observation log for O3 events (from \aref{tab:gw_log}). Seven events were not observed by GOTO (in \textcolorbf{Thistle}{pink}); four were retracted before observations could begin and three had skymaps never visible from La Palma. The time delay is the delay between the sentinel receiving the alert and observations beginning (including event processing and slew time); events with a delay in \textcolorbf{NavyBlue}{blue} were received during the night on La Palma and had tiles immediately visible. $N_\text{obs}$ is the total number of pointings observed by GOTO for each event, $N_\text{tiles}$ is the number of tiles observed within those pointings and $P_\text{obs}$ is the total contained skymap probability within the observed tiles.
}\label{tab:obs_log}
\end{table}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/events_prob.png}
\end{center}
\caption[Histogram of probabilities covered for O3 events]{
Histogram of the skymap probability covered by GOTO for O3 events. \\
7 of the 32 events were never observed (see \aref{tab:obs_log}).
}\label{fig:events_prob}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/events_delay.png}
\end{center}
\caption[Histogram of post-event delay for O3 events]{
Histogram of the delay between the GW signal being detected by the LVC and GOTO commencing observations for O3 events. Events that GOTO did not observe are excluded. The outlier is S190421ar (due to a delay sending out the alert).
}\label{fig:events_delay}
\end{figure}
\clearpage
\aref{fig:events_prob} shows the percentage of each event skymap covered by GOTO.\@ These values depend on a variety of factors: for a particular event, if only a small fraction of the skymap was covered that might be because of the position of the skymap in the sky, or a period of bad conditions forcing the dome to close. The coverage level is relatively even across all the events, once the seven events that GOTO did not observe are removed. \aref{fig:events_delay} shows the delay between the event being detected by the gravitational-wave detectors (from \aref{tab:gw_log}) and GOTO starting observations (from \aref{tab:obs_log}). For seven events GOTO was able to start observing less than an hour after the event was detected. This value includes factors such as the delay between the event being detected and the LVC releasing their alert, which was the cause of observations of S190421ar being delayed for so long (see \aref{sec:challenges}). For this reason the time delay between the alert being received by the G-TeCS sentinel and the start of observations (given in \aref{tab:obs_log}) is a better indicator of the performance of the G-TeCS software.
Eight alerts were received while it was night time on La Palma, and the GOTO-alert event handling system allowed the pilot to immediately begin observations of the visible skymap. As shown in \aref{tab:obs_log}, in all but one of the eight cases the first exposure was started less than \SI{60}{\second} after the sentinel received the alert. The time delay varies between 28 and 56 seconds, primarily depending on how far the mount had to slew from its previous target. Of the remaining $\sim$\SI{25}{\second} delay, a significant amount is due to having to download the LVC skymaps, with the rest due to various small delays in the event handler, sentinel and pilot, such as the pilot needing to wait up to \SI{10}{\second} for the next scheduler check (see \aref{sec:checks}). Future optimisation could potentially reduce these delays further. The one exception was event S190513bm, which was immediately visible but observations were delayed by 4 minutes. At the time the alert was received the pilot was already observing a pointing from the S190512at event received the previous day; as both events were black hole binaries they were inserted at the same rank, and, as detailed in \aref{sec:toos}, equal-rank ToO pointings will not interrupt each other, so the new pointing had to wait until the previous one was completed. In all other cases the pilot was observing a lower-rank target, usually a survey tile, which was immediately aborted when the scheduler check returned the ToO gravitational-wave pointing.
\begin{table}[t] \begin{center}
\begin{tabular}{l|ccccc} %
& Post-detection & Probability & Area covered & $5\sigma$ limiting & GOTO \\
\multicolumn{1}{c|}{Event} & time delay & covered & (sq deg) & magnitude & GCN \\
\midrule
S190425z & \SI{12.3}{\hour} & 22.6\% & 2857 & $g=20.1$ & 24224 \tablefootnote{~~\citet{S190425z_GOTO}} \\
S190426c & \SI{5.3}{\hour} & 55.6\% & 841 & $g=19.9$ & 24291 \tablefootnote{~~\citet{S190426c_GOTO}} \\
S190814bv & \SI{1.8}{\hour} & 95.4\% & 811 & $g=18.9$ & 25337 \tablefootnote{~~\citet{S190814bv_GOTO}} \\
\end{tabular}
\end{center}
\caption[GOTO follow-up results for three key O3 events]{
GOTO follow-up results for three key O3 events.
}\label{tab:events_3key}
\end{table}
The three events that were the most likely to have a potential electromagnetic counterpart (originating from either a binary neutron star or neutron star-black hole binary) were S190425z, S190426c and S190814bv. The GOTO response to each was reported in a public GCN Notice, and the key values are given in \aref{tab:events_3key}.
S190425z was the second detection of gravitational waves from a binary neutron star after GW170817 \citep{S190425z}. Unlike GW170817 however, the signal was only detected by a single detector, LIGO-Livingston (LIGO-Hanford was offline at the time, and while Virgo did detect a signal it was below the valid signal-to-noise threshold). This resulted in a very large initial skymap, shown in \aref{fig:190425_goto}, with a 90\% contour area of 10,183 square degrees. Many other projects aside from GOTO followed-up this event, and the \glsfirst{ztf} efforts were described in \aref{sec:followup}. Unfortunately a large portion of the skymap was located behind the Sun at the time and was therefore unobservable (on the right of \aref{fig:190425_goto}). The alert was received at 09:00 UTC, a few hours after GOTO had closed in the morning, meaning observations from La Palma could not begin for just over 12 hours until sunset that evening. The final skymap reduced the search area to 7,416 square degrees but shifted the probability even further into the unobservable region of the sky, meaning in the end GOTO only covered 22.6\% of the final probability.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/190425_goto.pdf}
\end{center}
\caption[Follow-up observations of S190425z with GOTO]{
GOTO follow-up observations of GW event S190425z \citep{S190425z_GOTO}. The tiled observations are shown in \textcolorbf{NavyBlue}{blue} over the initial skymap. Compare to \aref{fig:ztf}, which shows ZTF's coverage of the same event.
}\label{fig:190425_goto}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/190426_goto.pdf}
\end{center}
\caption[Follow-up observations of S190426c with GOTO]{
GOTO follow-up observations of GW event S190426c \citep{S190426c_GOTO}. The tiled observations are shown in \textcolorbf{NavyBlue}{blue} over the initial skymap.
}\label{fig:190426_goto}
\end{figure}
S190426c was detected just 31 hours after S190425z \citep{S190425z}. This meant that on the night of the 26th GOTO was completing both the first pass of the S190426c tiles and the second pass of the S190425z tiles, which had first been observed the previous night. The initial S190426c skymap and the tiles observed are shown in \aref{fig:190426_goto}. This time the event was detected by all three detectors; the initial skymap had an area of 1,932 square degrees, and changed very little in the final skymap. The skymaps from the two events also did not overlap. Following up two events at once had been considered in the design of the G-TeCS scheduling system (described in \aref{chap:scheduling}), and as planned GOTO alternated between the two as tiles were observed (thereby lowering the effective rank as described in \aref{sec:rank}).
The latest event with a high chance of an optical counterpart was S190814bv \citep{S190814bv}, the first confirmed detection of gravitational waves from a neutron star-black hole binary. As shown in \aref{tab:gw_log}, several other binary neutron star and neutron star-black hole binary events have been detected but have since been retracted. The initial skymap only included the contribution from the LIGO-Livingston and Virgo detectors; it covered 772 square degrees and is shown in \aref{fig:190826_goto}. A few hours later a revised skymap including LIGO-Hanford data was released, which reduced the 90\% contour region to just 38 square degrees. Unfortunately, due to an error in how the LVC uploaded the skymap, this was not immediately processed by GOTO (see \aref{sec:challenges}). The lower plot of \aref{fig:190826_goto} shows the GOTO tiles over the final skymap, and although a portion was below the observable horizon from La Palma GOTO still covered 95.4\% of the probability in just five pointings. The Moon was full at the time, hence the worse limiting magnitude in \aref{tab:events_3key}.
No electromagnetic counterparts were found for any of the three events, either by GOTO or other projects. But the G-TeCS follow-up code has proven to be fast and reliable, and GOTO will continue to follow up LIGO-Virgo alerts. Based on the response times to previous alerts, if, or when, another GW170817-like event is detected, GOTO could potentially be observing the counterpart within seconds of the alert being received.
\newpage
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/190814_goto.pdf}
\includegraphics[width=0.8\linewidth]{images/190814_goto2.pdf}
\end{center}
\caption[Follow-up observations of S190814bv with GOTO]{
GOTO follow-up observations of GW event S190814bv \citep{S190814bv_GOTO}. In the upper plot the tiled observations are shown in \textcolorbf{NavyBlue}{blue} over the initial skymap. The final skymap for this event is shown in the lower plot, with the seven GOTO tiles needed to cover the 90\% probability region. The two tiles in \textcolorbf{Red}{red} were below the local horizon and so were not observed.
}\label{fig:190826_goto}
\end{figure}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\newpage
\end{colsection}
\section{Future work}
\label{sec:future}
\begin{colsection}
This thesis only details the beginning of the GOTO project, and the work described will need to be continued and built upon as the project expands in the future.
\end{colsection}
\subsection{The global control system}
\label{sec:gtecs_future}
\begin{colsection}
Stage 1 of the GOTO project, the first mount and four unit telescopes, is currently observing from La Palma. The obvious direction of future work will be adapting and expanding G-TeCS in order to match the expansion of GOTO, as described in \aref{sec:goto_expansion}.
\subsubsection{Stage 2}
Adding the second set of four unit telescopes to the existing mount on La Palma should require nothing more than a few configuration changes to handle the new interface daemons. On the scheduling side, the observation database will need to be reset with a new all-sky grid, based on the GOTO-8 field of view instead of the existing GOTO-4 tiles (see \aref{fig:fov}). As each tile will cover a larger area, the tile selection algorithm described in \aref{sec:mapping_skymaps} might need to be adjusted, and some of the observing strategy detailed in \aref{chap:alerts} could be revisited. Otherwise, no major changes are anticipated to be required, and the pilot should be able to resume observations immediately.
\subsubsection{Stage 3}
The addition of the second mount in the second dome on La Palma will require more control system development. With two telescopes of the same design it should be simple to copy the hardware control daemons, and some systems could be shared between the two domes (for example, there is no need to have two conditions daemons both monitoring the same weather masts). A proposed system diagram is shown in \aref{fig:flow2}. But, as described in \aref{sec:multi_tel_scheduling}, the great benefit of having two telescopes is having them share a common scheduling system. This could be as simple as the system adopted for the multi-telescope simulations in \aref{chap:multiscope}, marking one telescope as the primary that always observes the highest-priority pointing and having the other always observe the second-highest. But in reality the telescopes will never be perfectly in sync, and there are more benefits to be gained from a more advanced scheduling system.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/flow2.pdf}
\end{center}
\caption[Future G-TeCS system architecture for two telescopes]{
Proposed future G-TeCS system architecture for controlling two telescopes at the same site (``Stage 3''). Note that the two pilots share the same scheduler and conditions daemons but are otherwise independent.
}\label{fig:flow2}
\end{figure}
Just as the current scheduler described in \aref{chap:scheduling} has to decide what to observe based on various constraints, the next-generation scheduler will need to optimise which target is being observed by each telescope. Although several of the constraints will be the same for both mounts (e.g.\ the Moon phase or Sun altitude) it is possible that the two telescopes could have different artificial horizons and therefore altitude limits. The scheduler tie-breaking system will need to be revised, to account for distributing pointings to either telescope. One possible parameter to add to decide which telescope to send a pointing to would be the distance each mount would need to slew from the current target, and the scheduler could also account for the time left on the current observations (e.g.\ telescope 1 might be ready to observe while telescope 2 has 10 seconds left on the current target; if the difference in slew times to the new pointing is greater than \SI{10}{\second} then it would be better to wait for telescope 2 to finish).
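The slew-versus-wait trade-off described above can be expressed as a simple cost comparison: each mount's effective delay is the time until it is free plus the time it needs to slew. The following is a hypothetical sketch, not G-TeCS code; the assumed slew rate and the dictionary fields are illustrative:

```python
def best_telescope(telescopes, slew_rate=2.0):
    """Pick which telescope should take the next pointing.

    telescopes: list of dicts with
        'slew_deg' - angular distance to the new target, in degrees
        'busy_for' - seconds left on the current observation (0 if idle)
    slew_rate: assumed mount slew speed in degrees per second.

    Returns the index of the mount with the smallest effective delay.
    """
    def delay(t):
        # Time until the mount is free, plus time to slew to the target.
        return t['busy_for'] + t['slew_deg'] / slew_rate
    return min(range(len(telescopes)), key=lambda i: delay(telescopes[i]))

# Telescope 1 is idle but far from the target; telescope 2 finishes in 10 s
# but is close, so waiting for telescope 2 is quicker overall.
scopes = [{'slew_deg': 60.0, 'busy_for': 0.0},
          {'slew_deg': 10.0, 'busy_for': 10.0}]
print(best_telescope(scopes))  # -> 1 (the second telescope)
```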
The move from one 8-UT mount to two 8-UT mounts should mean the same all-sky grid can be used, as long as both have the same field of view (see \aref{sec:multi_grid_scheduling} for the problems inherent in observing with different grids). However, having multiple mounts opens up opportunities for more advanced observing strategies. They could both observe different parts of the sky to cover the survey or gravitational-wave skymaps more quickly, or they could observe the same tiles to achieve a greater depth when the images are stacked. The simulations in \aref{chap:multiscope} assumed coverage was the priority, but the future scheduling system should be designed to allow concurrent observations of the same tiles if required. The presence of the coloured filters adds even more possibilities. It should be possible to have the two telescopes observe the same target simultaneously but using different filters, to get immediate colour information on any sources. It might also be desirable to accept the impact on survey cadence and have each telescope carry out an independent survey in different filters, or perhaps have one taking rapid \SI{60}{\second} exposures while the other surveys the sky more slowly using the meridian scanning method detailed in \aref{sec:survey_sim_meridian}. The possibilities are endless, and although the ultimate decision will be made depending on the science requirements of the GOTO collaboration, ideally the future G-TeCS scheduling system should be able to handle whatever strategy is desired.
\subsubsection{Stage 4}
The final form of the GOTO project is intended to include multiple telescopes at different sites across the globe. This is unlikely to happen before the second mount is built on La Palma, so by the time an Australian site is added the advanced systems described under Stage 3 above should already be in place, and ideally the next-generation scheduler should be able to delegate observations to multiple telescopes wherever they are in the world. There are several existing projects that operate in this manner which GOTO can emulate, such as the \glsfirst{lco} network \citep{LCO_scheduling}.
\newpage
\end{colsection}
\subsection{Continued development}
\label{sec:software_future}
\begin{colsection}
The work described in \aref{sec:gtecs_future} will be important to carry out as GOTO expands, but the timescales on which it will be needed will be dictated by the hardware status of the project. There is still plenty of software development work to do that is less dependent on the GOTO funding situation.
\subsubsection{Pipeline integration}
One particular area of importance is better integration between the control system and the analysis pipeline and candidate marshal described in \aref{sec:gotophoto}. To achieve a fully autonomous system, the pipeline should be able to reschedule observations independently, for example if an image is affected by clouds. More excitingly, a future transient detection algorithm might be permitted to automatically schedule follow-up observations of promising candidates.
\subsubsection{Unifying scheduler targets}
The scheduler system described in \aref{chap:scheduling} works well for both the all-sky survey and gravitational-wave follow-up events, as discussed in \aref{sec:conclusion}. However, one aspect that could be improved is the integration of the two roles. For example, observing a particular tile as part of a gravitational-wave follow-up campaign should be counted within the observation database as an observation of that tile for the all-sky survey as well, assuming they use the same filter. The GOTOphoto difference imaging pipeline already looks for reference images in all prior observations of a tile, regardless of the purpose for which they were taken. In other words, observing tiles as part of a gravitational-wave skymap should also count towards the all-sky survey cadence: the same tiles are being observed, just in a different order.
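One way to implement this unification would be to record every observation in a single purpose-agnostic log, and have the survey-cadence query ignore the purpose field entirely. A minimal sketch, with a plain dictionary standing in for the real observation database (all names here are illustrative):

```python
from datetime import datetime, timezone

def record_observation(db, tile, filt, purpose):
    """Log one observation of a tile, whatever the reason it was taken.

    db: dict mapping (tile, filter) -> list of (time, purpose) entries,
        standing in for the observation database.
    """
    db.setdefault((tile, filt), []).append(
        (datetime.now(timezone.utc), purpose))

def last_observed(db, tile, filt):
    """Most recent observation of this tile in this filter, any purpose.

    This is the query a survey-cadence check would use: a tile observed
    for a gravitational-wave skymap counts just like a survey visit.
    """
    entries = db.get((tile, filt), [])
    return max(entries)[0] if entries else None

db = {}
record_observation(db, 'T0042', 'L', 'GW_FOLLOWUP')  # GW skymap visit
record_observation(db, 'T0042', 'L', 'ALL_SKY')      # later survey visit
print(last_observed(db, 'T0042', 'L'))               # time of the latest visit
```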
\newpage
Another proposed addition in the same vein is linking tile observations between events. It is not uncommon for the same gamma-ray burst event to be detected by both \textit{Fermi} and \textit{Swift}; as described in \aref{sec:event_strategy}, both alerts are processed by the GOTO-alert event handler, and the only difference is that \textit{Swift} events are inserted into the observation database at a higher rank. This is intentional, as \textit{Swift} events are typically very well-localised, easily within a single GOTO tile, while the large skymaps for \textit{Fermi} events cover many tiles (see \aref{sec:grb_skymaps}). Because of this, if the sentinel detects an alert from both facilities within a few minutes, and the two skymaps overlap, then covering the large \textit{Fermi} skymap should be unnecessary, as the event source should be given by the much better-localised \textit{Swift} position.
Independent detections of gamma-ray bursts are a common-enough example to test this behaviour, but where it could be very useful to GOTO is for coincident GRB and gravitational-wave events. The GW170817 event was notable as being also detected by \textit{Fermi} as GRB 170817A \citep{GW170817_Fermi}, and the LVC is investigating putting out automated alerts for future coincident GW-GRB events using the RAVEN pipeline \citep{RAVEN, LVC_userguide}. Were the G-TeCS sentinel able to achieve a similar result, simply prioritising GW skymap tiles that overlap with the GRB skymap, it could reduce the delay before observing the all-important tile that contains the counterpart kilonova.
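Such a prioritisation could be as simple as boosting the weight of any gravitational-wave tile whose footprint overlaps the GRB localisation. A hedged sketch of that idea (the tile and skymap representations here are simplified stand-ins, not the G-TeCS data structures):

```python
def prioritise_coincident(gw_tiles, grb_pixels, boost=10.0):
    """Reweight gravitational-wave tiles that overlap a GRB skymap.

    gw_tiles: dict mapping tile name -> (set of skymap pixel indices, weight)
    grb_pixels: set of pixel indices inside the GRB localisation region
    Returns tile names sorted by boosted weight, highest first.
    """
    scored = {}
    for name, (pixels, weight) in gw_tiles.items():
        # Tiles overlapping the GRB region get their weight multiplied up,
        # so they are observed before the rest of the GW skymap.
        scored[name] = weight * boost if pixels & grb_pixels else weight
    return sorted(scored, key=scored.get, reverse=True)

tiles = {'T001': ({1, 2, 3}, 0.30),
         'T002': ({4, 5, 6}, 0.25),
         'T003': ({7, 8, 9}, 0.05)}
# T003 carries little GW probability, but it overlaps the GRB region,
# so it jumps to the front of the queue.
print(prioritise_coincident(tiles, grb_pixels={8, 9}))
# -> ['T003', 'T001', 'T002']
```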
\subsubsection{Further simulations}
The test code written to simulate GOTO observations was a vital tool for optimising the G-TeCS scheduler (see \aref{sec:scheduler_sims}), and in \aref{chap:multiscope} it was used to model the benefits of GOTO's plans for future expansion. As described in \aref{sec:scheduler_sim_future}, it would be good to revisit the scheduler simulations with the benefit of subsequent code development and more-realistic simulation parameters based on the live GOTO system. Other scheduler simulations have also been proposed, for example to find optimal tile-selection limits for gravitational-wave skymaps (see \aref{sec:selecting_tiles}). A majority of the future work proposed in this chapter will also require further simulations, and making the simulation code as realistic as possible is a priority.
\subsubsection{Code generalisation and availability}
Another potential future project that is being considered is the generalisation of some or all of the G-TeCS code, removing the GOTO-specific parts and making it usable by other projects. For example, the GOTO-tile code for creating survey grids and mapping skymaps to them described in \aref{chap:tiling} is not at all GOTO-specific, and would only require a small amount of work to rewrite as a separate Astropy-compatible Python package (probably along with a new name). On a wider scale, the G-TeCS control system could be adapted for other telescopes. A parallel version is already being used by the other Warwick telescopes on La Palma, and using G-TeCS is also being considered for other robotic telescope projects, such as the SAMNET solar telescope network \citep{SAMNET}. Currently all GOTO code is private, restricted only to GOTO collaboration members. However, if my code were reconfigured to be usable by other projects, I would hope to make it publicly available and open-source.
\bigskip
Overall the GOTO Telescope Control System is still under active development, and this will continue as the GOTO project evolves. Based on the initial results from O3 the system has been working well and fulfilling its requirements, and it is therefore most likely only a matter of time until GOTO observes its first gravitational-wave counterpart.
\end{colsection}
\chapter[The GOTO Telescope Control System]{%
\protect\scalebox{0.96}[1.0]{\mbox{The GOTO Telescope Control System}}
}
\label{chap:gtecs}
\chaptoc{}
\section{Introduction}
\label{sec:gtecs_intro}
\begin{colsection}
Over the next three chapters I detail my work creating a software control system for GOTO.\@ This chapter includes the initial requirements and outline of the control system, and then focuses on the core programs to control the telescope hardware.
\begin{itemize}
\item In \nref{sec:control_systems} I go through the requirements for the GOTO control system and describe the different options considered.
\item In \nref{sec:gtecs} I give an overview of the software that makes up the GOTO Telescope Control System and how it was implemented.
\item In \nref{sec:hardware_control} I go through each category of hardware and describe how the G-TeCS programs were written to control them.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated. This and the following two chapters have been expanded from my SPIE conference paper on G-TeCS, \citet{Dyer}. G-TeCS is based on the pt5m control system \citep{pt5m}, written primarily by Tim Butterly at Durham, and Stu Littlefair and Vik Dhillon at Sheffield.
\newpage
\end{colsection}
\section{The telescope control system}
\label{sec:control_systems}
\begin{colsection}
The term \glsfirst{tcs} describes the various software packages and scripts required to operate a telescope. As described in \aref{sec:goto_design}, GOTO was designed to use standard, off-the-shelf hardware, the type used by high-end amateur astronomers. The control software for this hardware has increasingly been standardised, and there are many TCS software packages available on the market that can be used to operate all aspects of an observatory. However, GOTO has an unusual multi-telescope design and strict requirements for target scheduling, meaning a more customised control system was required. In this section I detail the requirements of the GOTO project and how the choice of TCS was made.
\end{colsection}
\subsection{Requirements}
\label{sec:control_requirements}
\begin{colsection}
My first task as part of the GOTO collaboration, in the summer of 2015 before I started my PhD in Sheffield, was to decide on what control system software to use. There were several requirements to consider.
First, the chosen system had to allow for remote and, most importantly, robotic operation of GOTO.\@ There are many telescope control software packages available, but the majority are designed for a human observer to operate. GOTO, however, was to be a fully autonomous telescope, operating nightly with no human intervention. The control system therefore had to contain routines for observing targets and standard tasks like taking calibration frames. On top of that, it was desirable for the system to be able to monitor itself, to detect and fix any errors as far as possible without the need for human intervention. Finally, it had to be able to monitor and react to external conditions, for example closing the dome if rain was detected.
Second, the system had to include an observation scheduler, which could decide what the telescope should observe during the night. A basic scheduler might be run in the evening to create a night plan, as observers typically do when operating a telescope. However, that alone would not meet the operations expected of GOTO:\@ normally carrying out an all-sky survey, but with a robust interrupt protocol for gravitational-wave follow-up. The system therefore had to be able to recalculate what to observe on-the-fly, and be able to react immediately to transient \glsfirst{too} events.
Furthermore, although the project was still at an early stage the idea of linking together multiple telescopes into a global network was also considered, and the chosen control system would ideally be expandable to facilitate this in the future.
There were also several physical considerations when it came to choosing between software systems. The telescope hardware described in \aref{sec:goto_design} had already been decided on: a clamshell dome from AstroHaven Enterprises, a custom mount with a \glsfirst{sitech} servo controller and multiple unit telescopes all equipped with \glsfirst{fli} cameras, focusers and filter wheels. Any control system would need to communicate with all of this hardware, so any software package with existing drivers would be desirable.
Two particular hardware-related challenges faced the control system project. The first was that the SiTech controller software, SiTechEXE, only ran on Microsoft Windows. The software did have an accessible \glsfirst{api} through the ASCOM standard\footnote{\url{www.ascom-standards.org}}, but that still required some form of the mount control system to be running on Windows. As most professional scientific software in astronomy runs on Linux systems, this led to two options: either have just a small interface running on the Windows machine and the rest of the system on Linux, or have the entire system run on Windows.
The second hardware-related challenge was to deal with the multiple-unit telescope design of GOTO.\@ A full array of eight unit telescopes (UTs) would require eight cameras, focusers and filter wheels. These would all need to be run in parallel; most importantly, there needed to be no delay between the exposures starting and finishing on each camera. The physical construction of the telescope also came into play. The FLI units all require a USB connection to the control computer. A single computer situated in the dome would therefore require 24 extra-long USB cables running up the mount. The suggested solution was to have small computers attached to the mount boom arms next to the unit telescopes, to act as intermediate interfaces to the hardware. The control system therefore needed to be able to run in a distributed manner across multiple computers, potentially even running different operating systems.
There were also practical details to consider when choosing the control software. GOTO was designed as a relatively inexpensive project that could be built quickly and copied across multiple sites, therefore any costly software licenses should ideally be avoided. Experience and support requirements should also be considered, and reusing a software system that members of the collaboration had experience with would provide benefits compared to a completely new system.
\end{colsection}
\subsection{Existing software options}
\label{sec:control_options}
\begin{colsection}
Four possible options for the GOTO control system were considered: the existing software packages ACP Expert, Talon and RTS2, or a custom system based on the code written at Durham and Sheffield for their \glsfirst{pt5m}. At the July 2015 GOTO meeting at Warwick University I gave a talk outlining the control system requirements and presenting the four options, and the decision taken was to adapt the pt5m system for use by GOTO.\@ The three rejected systems are described below, while the pt5m system is described in more detail in \aref{sec:pt5m}.
\subsubsection{ACP Expert}
ACP Expert\footnote{\url{http://acp.dc3.com}} is a commercial observatory control software system by DC3-Dreams. It is used by some advanced amateur astronomers and a few scientific and university telescopes, such as the Open University's PIRATE telescope \citep{PIRATE}. As a complete Windows software package with a web interface it is marketed as being straightforward to use, in either remote or fully robotic modes. It uses the ASCOM standard library and DC3-Dreams also provide professional support and updates. This, however, came at a cost: \$2495 for the base software, plus an additional \$599 for Maxim DL camera control and \$650 per year for continued support. At the time, GOTO was anticipated to be deployed in a matter of months, so the quick and simple pre-existing commercial solution was tempting. However, it was unclear if the ACP software would be able to cope with GOTO's unusual design, and its closed-source model would restrict our ability to make modifications.
\subsubsection{Talon}
The Talon observatory control system\footnote{\url{https://sourceforge.net/projects/observatory}} is a Linux-based, open-source system created by \glsfirst{omi}. It was included as an option primarily because, at the time, it was the control system of choice for the other observatories operated by Warwick University, such as SuperWASP \citep{SuperWASP}. OMI had built the SuperWASP mount and developed Talon alongside it, before later making it open source. However, development of Talon has been almost non-existent over the past decade, and when building the \glsfirst{ngts} a large amount of custom software was needed to allow Talon to work with its multiple telescopes \citep{ngts}. Warwick were already looking at replacing Talon for their \glsfirst{w1m} and when upgrading SuperWASP.\@ Adopting it for GOTO would therefore have been counter-productive, as whatever was chosen for GOTO was expected to (and ultimately did) influence and benefit the concurrent development of a new control system for the W1m.
\subsubsection{RTS2}
The \glsfirst{rts2}\footnote{\url{https://rts2.org}} \citep{RTS2, RTS2b} is another free and open-source Linux software package. Unlike Talon, RTS2 is under active development and is used by telescopes and observatories around the world \citep{BORAT, BOOTES-3, antarctic, ARTN}. There is a small but active user community, and drivers for the hardware GOTO would use had already been developed. The first version of RTS was written in Python, while the second version was rewritten in C++ but with a Python interface available. RTS2 was an attractive choice; however, like the others, it was unclear if it could be easily modified to meet the requirements of GOTO's multiple telescopes and Windows-controlled mount, and no one in the collaboration had prior experience of using or implementing it.
\end{colsection}
\subsection{The pt5m control system}
\label{sec:pt5m}
\begin{colsection}
Built and operated by Sheffield and Durham Universities, \emph{pt5m} is a \SI{0.5}{\metre} telescope located on the roof of the \SI{4.2}{\metre} \glsfirst{wht} on La Palma \citep{pt5m}. The telescope was originally developed as a \glsfirst{slodar} system for atmospheric turbulence profiling in support of the CANARY laser guide star project on the WHT \citep{SLODAR_LaPalma, CANARY}. There are several SLODAR telescopes around the world operated by Durham, including one in Korea that had just been commissioned at the time I joined the GOTO project \citep{SLODAR_Korea}. In order to make the most of the telescope when not being used for SLODAR observations, a science camera was added to pt5m by Sheffield, and in-house control software was written to enable robotic observations. It has successfully been used for automatic observations of transient events since 2012, as well as being used for undergraduate teaching at Sheffield and Durham. All of the SLODAR telescopes used a custom telescope control system developed at Durham, and a similar system was used by the teaching telescopes of the Durham Department of Physics that I worked with during my undergraduate degree. The pt5m control system had been modified by the team at Durham and Sheffield for robotic operation, which matched well with what we needed for GOTO.\@ For this reason, on top of the Sheffield group's existing experience, the pt5m software was chosen to be the base for the GOTO control system.
An overview of the pt5m control system architecture from \citet{pt5m} is shown in \aref{fig:pt5m_software}. The software is written in Python and was built around multiple independent background programs called \emph{daemons}. Each daemon controls one category of hardware, for example the dome, mount or CCD controller. A script called the \emph{pilot} sends commands to the daemons when the system is operating in robotic mode, and the decision of what to observe is taken by the \emph{scheduler} which picks targets out of a database and returns the highest priority to the pilot. Finally a separate script called the \emph{conditions monitor} checks the local weather conditions and tells the pilot to close the dome in bad weather.
The basic framework of the pt5m control system was adopted for GOTO, but with several changes. The major difference between pt5m and GOTO is the multiple unit telescopes, but the same software control system could be adapted through the creation of interface daemons, which allow communication with the unit telescopes over the internal dome network. In fact, the independent, distributed nature of the daemon system made it very easy to expand to have daemons running on physically separate machines but still communicating over the same local network, including on both Linux and Windows computers.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/pt5m_software.png}
\end{center}
\caption[The pt5m control system architecture]{
The pt5m control system architecture, taken from \citet{pt5m}. The hardware daemons are shown on the right; they communicate with the pilot which receives information from the observation scheduler and the conditions monitor. This basic framework was adapted for the GOTO control system, cf.\ \aref{fig:flow}.
}\label{fig:pt5m_software}
\end{figure}
\end{colsection}
\section{Overview of G-TeCS}
\label{sec:gtecs}
\begin{colsection}
The \glsfirst{gtecs} is the name given to the collection of programs that have been developed to fulfil the requirements of the GOTO project given in \aref{sec:control_requirements}. The pt5m control system as described in the previous section formed the basis for G-TeCS.\@ Its structure of multiple independent daemons was developed into the core system architecture of G-TeCS, shown in \aref{fig:flow}. This section gives an overview of the system and its implementation. There are two core branches of G-TeCS:\@ the base hardware control programs, described in \aref{sec:hardware_control}, and the autonomous systems built on top of them. The latter software is described in \aref{chap:autonomous} and \aref{chap:scheduling}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/flow.pdf}
\end{center}
\caption[The G-TeCS system architecture]{
The G-TeCS system architecture as deployed on La Palma, taken from \citet{Dyer}. The observation database as well as the sentinel, scheduler and conditions daemons shown to the left run on a central observatory-wide server located in the SuperWASP building next to the GOTO domes, while the pilot and hardware daemons are located on the telescope control computer within the dome. Control for the unit telescope hardware (focuser, filter wheel and camera) is sent via an interface daemon for each pair of UTs, running on computers attached to the mount. Only the system for the prototype instrument (one mount with four unit telescopes) is shown.
}\label{fig:flow}
\end{figure}
\end{colsection}
\subsection{Implementation}
\label{sec:implementation}
\begin{colsection}
The core G-TeCS code is contained in a Python package (\texttt{gtecs}\footnote{\url{https://github.com/GOTO-OBS/g-tecs}}). This includes all of the core daemons, scripts, associated modules and functions. One important module, containing code and functions to interact with the observation database (see \aref{sec:obsdb}), was split off into a separate Python package ObsDB;\@ this was done to allow other users to interact with the database without the need to install the entire G-TeCS package. In addition, the code for alert processing within the sentinel (see \aref{sec:sentinel}) is in a separate package, GOTO-alert. This is because it originated as a separate coding project written by Alex Obradovic at Monash, that I then took over and integrated with G-TeCS.\@ GOTO-alert is described in detail in \aref{chap:alerts}.
G-TeCS and the associated packages are written almost entirely in Python \citep{Python}. Python is a versatile programming language that is increasingly common in astronomy, helped by the popular open-source Astropy Project \citep{astropy}. Python version 3.0 was released in 2008 and was infamously not backwards-compatible with Python2. The code for pt5m was written in Python2, and therefore initially so was G-TeCS.\@ Over the subsequent years the G-TeCS code was re-written to be compatible with both Python2 and Python3, which was possible due to the standard \texttt{\_\_future\_\_} library in Python2 and the Six package (\texttt{six}\footnote{\url{https://six.readthedocs.io}}). Eventually, the addition of new features added to Python3, such as the AsyncIO library used heavily by the pilot (see \aref{sec:async}) in version 3.5, and the imminent end-of-life of Python2 in 2020, led to the dropping of Python2 support. This is in line with most other scientific Python packages including Astropy, which is no longer developed for Python2. %
The core G-TeCS packages have multiple dependencies. Some of the most critical external packages (not included in the Python standard library) are NumPy for mathematical and scientific structures \citep{NumPy}, Astropy for astronomical functions \citep{astropy}, Pyro for communicating between daemons (see \aref{sec:daemons} below), SQLAlchemy for database management (see \aref{sec:obsdb}), Astroplan for scheduling \citep[][see \aref{sec:scheduler}]{astroplan}, VOEvent-parse for handling VOEvents \citep[][see \aref{sec:voevents}]{voevent-parse} and GOTO-tile, a custom package written for GOTO (described in \aref{chap:tiling}).
\end{colsection}
\subsection{Daemons}
\label{sec:daemons}
\begin{colsection}
The core elements of the control system are the daemons. A \emph{daemon} is a type of computer program that runs as a background process, continually cycling and awaiting any input from the user. This is in contrast to a \emph{script} which is run once (either by the system or a user), carries out a series of tasks in the foreground and then exits once it is completed. Common examples of daemons on a Unix-based system are sshd, which listens for and creates \glsfirst{ssh} connections, and cron, which runs commands at predefined times. Incidentally both are used by G-TeCS:\@ SSH is used to execute commands on remote machines, and cron is used to run scripts like the pilot at a set time of day.
Daemons are an ideal model for hardware control software. Once started, each daemon runs continually as a background process, with a main control loop that repeats on a regular timescale (for the G-TeCS hardware daemons this is usually every \SI{0.1}{\second}). There are two primary tasks that are carried out within the loop by every daemon: monitoring the status of the hardware, and listening for and carrying out commands. The former is typically not carried out every time the loop runs, because attempting to request and process the hardware status every \SI{0.1}{\second} would overwhelm the daemon and delay the loop. Instead the status checks are typically carried out every \SI{2}{\second}, or sooner if requested. By continually requesting the status of the hardware the daemon will detect very quickly if there are any problems, and should it be unable to reach its hardware it will enter an error state. However, the daemons themselves will not attempt to self-diagnose and fix any problems that are detected, with the notable exception of the dome daemon (see \aref{sec:dome}). Instead, that is the job of the hardware monitors (see \aref{sec:monitors}); the daemons themselves will just report any problems to the pilot or user. The second task of the control loop is to listen for and carry out any commands issued to the daemon. Because these commands are dealt with within the loop, only one command is carried out at a time; the alternative of user input going directly to the hardware could cause problems with overlapping commands. These commands range from simply querying the cameras for how long is left until an exposure finishes, or the mount for the current position, to opening the dome, taking and saving an image, or calculating the current highest-priority pointing to observe.
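As an illustration, the two-task control loop described above could be sketched in Python as follows (the class and method names here are hypothetical, not the actual G-TeCS code):

```python
import threading
import time


class HardwareDaemon:
    """Illustrative sketch of a daemon control loop: it cycles every
    ``loop_time`` seconds, refreshes the hardware status every
    ``status_interval`` seconds, and processes at most one queued
    command per cycle so that commands can never overlap."""

    def __init__(self, loop_time=0.1, status_interval=2.0):
        self.loop_time = loop_time
        self.status_interval = status_interval
        self.status = None
        self.commands = []          # commands queued by control functions
        self.running = False
        self._last_check = 0.0

    def _get_hardware_status(self):
        # A real daemon would query the hardware here, and enter an
        # error state if the hardware could not be reached.
        return 'ok'

    def _run(self):
        while self.running:
            now = time.monotonic()
            # Task 1: monitor the hardware, but only every ~2 s so the
            # status requests do not overwhelm the 0.1 s loop.
            if now - self._last_check > self.status_interval:
                self.status = self._get_hardware_status()
                self._last_check = now
            # Task 2: carry out at most one command per cycle.
            if self.commands:
                command, args = self.commands.pop(0)
                command(*args)
            time.sleep(self.loop_time)

    def start(self):
        """Start the control loop running in its own thread."""
        self.running = True
        threading.Thread(target=self._run, daemon=True).start()

    def queue_command(self, func, *args):
        """Flag a command to be carried out by the loop."""
        self.commands.append((func, args))
```

In the real system a control function called over the network would set a flag for the loop in this way, rather than acting on the hardware directly.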
Within G-TeCS each category of hardware has a dedicated control daemon that acts as an interface to the hardware. For example, the mount daemon communicates with the SiTech mount controller, sending commands and reading the current status, while the camera daemon does the same for every camera attached to the telescope. There is therefore not necessarily a one-to-one correspondence between daemons and pieces of hardware. Having separate daemons for each hardware type allows them to operate independently, and allows the pilot, or a human operator, to send commands to each in turn without waiting for the others to complete. It also means that a failure in one daemon or its hardware is isolated from the others: should the mount develop a fault, for example, the dome daemon will still be able to communicate with, and close, the dome. Not every daemon within G-TeCS interacts with external hardware: there is the sentinel daemon which monitors alert channels and adds pointings to the observation database, and the scheduler daemon which selects which pointing should be observed at a given time.
Functionally, each daemon is built around a Python class which contains hardware control functions and a main loop. When the daemon starts, the loop is set running in its own thread, and when a control function is called it sets a flag within this loop to carry out the requested commands. The daemons are created using the Pyro Python package (Python Remote Objects, \texttt{Pyro4}\footnote{\url{https://pythonhosted.org/Pyro4}}). Each daemon is run as a Pyro server, so any client script can then access its functions and methods across the network using the associated server ID.\@ This system allows complicated interactions across the network between daemons and scripts with very simple code, and was one of the major benefits of adopting the pt5m system.
\end{colsection}
\subsection{Scripts}
\label{sec:scripts}
\begin{colsection}
As well as the daemons, the G-TeCS package includes multiple Python scripts. These scripts can be run on the control computer from the command line by a human user, called from within other scripts like the pilot, or started through utilities like cron.
In order to send commands to the daemons, each has an associated control script that can be called by a user from a terminal, or by the pilot in robotic mode (see \aref{sec:pilot}). The commands follow a simple format which was inherited from pt5m, first the short name of the daemon, then the command, and finally any arguments. There are several commands that are common to all daemons: \texttt{start}, \texttt{shutdown} and \texttt{restart} to control if the daemon is running; \texttt{ping} to see the current status of the daemon; \texttt{info} to see the current status of the hardware; \texttt{log} to print the daemon output log. Examples of daemon-specific commands include ``\texttt{dome~close}'' to close the dome, ``\texttt{mnt~slew~30.54~+62}'' to slew the mount to the given coordinates, ``\texttt{cam~image~60}'' to take a \SI{60}{\second} exposure with all connected cameras and ``\texttt{cam~image~2~60}'' to take a \SI{60}{\second} exposure with camera 2 only.
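Splitting such a command line into its daemon, command and argument parts is straightforward, and could be sketched as follows (a hypothetical helper, not the real control-script code):

```python
def parse_command(command_line):
    """Split a G-TeCS-style command line into (daemon, command, args),
    e.g. 'mnt slew 30.54 +62' -> ('mnt', 'slew', ['30.54', '+62']).
    Illustrative sketch only."""
    parts = command_line.split()
    if len(parts) < 2:
        raise ValueError('expected "<daemon> <command> [args...]"')
    # First the short name of the daemon, then the command, then args.
    return parts[0], parts[1], parts[2:]
```

The control script for each daemon would then forward the command and its arguments to the corresponding daemon function over the network.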
Every daemon can also be controlled in ``interactive mode'', which is a user-friendly way to save time sending multiple commands to the same daemon. Interactive mode is entered with \texttt{i} and exited with \texttt{q}.
There is also a utility script, \texttt{lilith.py}, which can send the same command to all the daemons. For example, to shutdown every daemon it is possible to call each directly (\texttt{cam~shutdown}, \texttt{foc~shutdown}, \texttt{mnt~shutdown} etc\ldots) but it is instead much easier to run \texttt{lilith~shutdown}. The name ``Lilith'' comes from the biblical ``mother of demons''.
The most important script for the robotic operation of the telescope is the pilot, detailed in \aref{sec:pilot}. The pilot is started every night using cron at 5pm, but can also be started manually with the command ``\texttt{pilot~start}'' (note that although this uses the same syntax as a daemon command, it simply runs the pilot script in the current terminal instead of starting a background process). There is also a daytime counterpart to the pilot, called the day marshal, which is run in the same way (see \aref{sec:day_marshal}). Finally, several of the more common observing tasks are separated off into ``observation scripts''. These contain lists of commands to send to the daemons to carry out tasks such as focusing the telescope, taking flat fields or starting/shutting down the hardware in the evening/morning respectively. These are run at specific times each night by the pilot's night marshal routine (see \aref{sec:night_marshal}), but they can also be run by human observers through the command line (for example ``\texttt{obs\_script~startup}'' to run the startup script, or ``\texttt{obs\_script~autofocus}'' to start the autofocus routine).
\end{colsection}
\section{Hardware Control}
\label{sec:hardware_control}
\begin{colsection}
The core programs of G-TeCS are the hardware daemons. There are seven primary daemons, as shown in the centre of \aref{fig:flow}. This section provides a summary of each of the hardware categories, describing how the daemons interact with them and the particular challenges and features unique to each.
\end{colsection}
\subsection{FLI interfaces}
\label{sec:fli}
\begin{colsection}
As described previously, GOTO uses off-the-shelf camera, focuser and filter-wheel hardware from \glsfirst{fli}. Each GOTO unit telescope has a MicroLine ML50100 camera, an Atlas focuser and a CFW9--5 filter wheel; these are connected to a small Intel \glsfirst{nuc} attached to the boom arm (one per pair of UTs, shown in \aref{fig:boomarm}). These NUCs run very basic daemons called the FLI interfaces, shown in \aref{fig:flow}. Barely daemons by the definition given in \aref{sec:daemons}, these interfaces have no control loop and exist only as a way to expose the serial connection of the hardware to the wider Pyro network. By using these interface daemons, the primary control daemons for the FLI hardware can run on the main control computer without being physically connected to the hardware (aside from via ethernet).
Communicating with the hardware has to be done using the \glsfirst{sdk} provided by FLI, which is written in C. In order to use this SDK with the control system written in Python, a separate wrapper package FLI-API (\texttt{fliapi}\footnote{\url{https://github.com/GOTO-OBS/fli-api}}) was written by Stu Littlefair in Cython, a programming language that provides a way for C code to be imported and run in Python.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/boomarm.png}
\end{center}
\caption[Photo of GOTO with hardware elements labelled]{
A photo of GOTO observing at night, with key hardware elements labelled. The backs of the four unit telescopes are visible, attached to the central boom arm. Each pair of UTs is connected to a NUC interface computer and a \glsfirst{pdu}. The mount declination drive is also visible with the cover removed; the right ascension drive is on the other side of the mount.
}\label{fig:boomarm}
\end{figure}
The interfaces and the Pyro network allow a single daemon to interact with multiple pieces of hardware across multiple computers. This means that the single camera daemon running on the primary control computer can interface with all of the cameras attached to the mount, and instead of sending commands to each camera individually the user can speak to all of them together through the daemon. As an example, the command ``\texttt{cam~image~60}'' will take a 60 second exposure on every attached camera simultaneously. Including a specific number (``\texttt{cam~image~2~60}'') will only start the exposure on the camera attached to UT2. Multiple selections can also be made using a simple comma-separated syntax, such as ``\texttt{cam~image~1,2,4~60}''. This notation and functionality is one of the major differences between G-TeCS and the pt5m control system, and in fact all of the other control systems considered in \aref{sec:control_options}, which typically can only communicate with a single telescope at a time.
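The comma-separated UT selection could be parsed with a small helper along these lines (an illustrative sketch; the real G-TeCS code differs):

```python
def parse_ut_selection(arg, all_uts=(1, 2, 3, 4)):
    """Parse the comma-separated UT selection used by commands such as
    'cam image 1,2,4 60'. If no selection is given, all connected UTs
    are used. Hypothetical helper, not the actual G-TeCS API."""
    if arg is None:
        return list(all_uts)
    uts = sorted({int(ut) for ut in arg.split(',')})
    for ut in uts:
        if ut not in all_uts:
            raise ValueError(f'unknown unit telescope: UT{ut}')
    return uts
```

The daemon would then forward the command only to the interfaces for the selected unit telescopes.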
\newpage
There are three control daemons that interact with the FLI interfaces: the camera, filter wheel and focuser daemons. There is also a fourth, the exposure queue daemon, which coordinates sets of exposures and communicates with both the cameras and filter wheels through their daemons, not the interfaces directly. Each of these four daemons are described in the following sections.
\end{colsection}
\subsection{Camera control}
\label{sec:cam}
\begin{colsection}
The camera daemon interacts with all of the FLI cameras on the GOTO mount, making it the most complicated daemon to design. The commands to the camera daemon, however, are fairly straightforward. There are four types of exposures that can be taken:
\begin{itemize}
\item Normal images, with the shutter opening and closing for the given exposure time.
\item ``Glance'' images, which are the same as normal images but are saved to a separate file that is overwritten each time a glance is taken.
\item Dark images, where the shutter remains closed during the exposure time.
\item Bias images, where the shutter remains closed and a zero-second exposure is taken.
\end{itemize}
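These four exposure types could be summarised in code along the following lines (a hypothetical helper, not the actual camera daemon implementation):

```python
# Map each exposure type to (shutter opens, exposure time forced to zero).
EXPOSURE_TYPES = {
    'normal': (True, False),
    'glance': (True, False),   # saved to a temporary file that is overwritten
    'dark':   (False, False),  # shutter stays closed for the exposure time
    'bias':   (False, True),   # shutter stays closed, zero-second exposure
}


def exposure_settings(image_type, exptime):
    """Return the settings implied by each exposure type (sketch only)."""
    shutter_open, zero_length = EXPOSURE_TYPES[image_type]
    return {
        'shutter_open': shutter_open,
        'exptime': 0.0 if zero_length else float(exptime),
        'overwrite_file': image_type == 'glance',
    }
```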
The FLI-API interface also gives other options for exposures aside from just the exposure time, including different binning factors and windowing the active area of the chip to read out. Although the camera daemon does offer commands to set these, they are never used during normal operations, and the GOTOphoto image processing pipeline is set up to only expect full-frame, unbinned images.
Once an exposure is completed, the image data needs to be downloaded from the cameras and sent through the interfaces to the camera daemon, before the frames can be saved as \glsfirst{fits} files. This is a disadvantage of the interface system, and consideration was given to instead having the interfaces write out the files to their local NUCs. Although this would have been faster to save the raw images, they would still need to be copied down from the NUCs to the primary archive on the control computer. Having the interfaces send the raw count arrays to the camera daemon for processing proved to save more time in the long run. The camera daemon also queries all the other hardware daemons at the start of the exposure, to get their current statuses to add to the FITS headers (for example, getting the current pointing position from the mount daemon).
The time taken by each exposure, from the command being received to the FITS images being written to disk, has been optimised to minimise the amount of ``dead time'' between exposures. One of the primary ways to save time was to have the two most time-dependent processes, downloading the images from the interfaces and writing them to disk, run as separate threads for each camera independently of the main daemon control loop. Other time-saving improvements included only fetching the status information from the other daemons once, just after starting the exposures (so it does not take any extra time in addition to the exposure time).
Images are written to FITS files by the camera daemon and are archived in different directories by date (e.g. \texttt{2019--09--30}). Each camera output is saved as a separate file, named by the current run number and the name of the unit telescope it originated from (e.g. \texttt{r000033\_UT2.fits} is the image from camera 2 for run 33). The run number is increased whenever a non-glance exposure is taken, even if the exposure is subsequently aborted. After being saved the images are copied at regular intervals from La Palma to Warwick University via a dedicated fibre link, where the GOTOphoto photometry pipeline is run (as described in \aref{sec:gotophoto}). GOTOphoto has been developed at Warwick and Monash separately from the control system, which means image calibration, astrometry and photometry are all out of the scope of this thesis.
\newpage
\end{colsection}
\subsection{Filter wheel control}
\label{sec:filt}
\begin{colsection}
The filter wheel daemon (sometimes shortened to just the filter daemon) controls the filter wheels on the GOTO unit telescopes. The FLI CFW9--5 filter wheels are fairly standard pieces of hardware, with 5 slots that contain the \SI{65}{\milli\metre} square Baader \textit{R}, \textit{G}, \textit{B}, \textit{L} and \textit{C} filters (see \aref{sec:filters}). Moving the filter wheel is usually done via the exposure queue daemon (see \aref{sec:exq}) but can be done individually. When powered-on the filter wheels must be homed to position 0 before moving. The \textit{L} filter was placed in the home position as the vast majority of GOTO observations are taken in this filter.
\end{colsection}
\subsection{Focuser control}
\label{sec:foc}
\begin{colsection}
The focuser daemon is the third of the three FLI hardware daemons. Each connected focuser can be set to a specific position or moved by a given offset by the daemon. The focuser daemon is usually only used when the pilot runs the autofocus routine at the start of the night (see \aref{sec:night_marshal} and \aref{sec:autofocus}).
\end{colsection}
\subsection{Exposure queue control}
\label{sec:exq}
\begin{colsection}
The exposure queue daemon (often abbreviated to `ExQ' or `exq') does not directly talk to hardware; instead it is the only daemon whose primary purpose is communicating with other daemons, specifically the camera and filter wheel daemons. The exposure queue daemon coordinates taking frames in sequence and setting the correct filters before each exposure starts. For example, consider needing a series of three \SI{30}{\second} exposures, one each in the \textit{R}, \textit{G} and \textit{B} filters. Through the camera and filter wheel daemons this would require six commands: \texttt{filt~set~R}, \texttt{cam~image~30}, \texttt{filt~set~G}, \texttt{cam~image~30}, \texttt{filt~set~B}, \texttt{cam~image~30}. The exposure queue daemon provides a shorter way to carry out the same commands, and these same exposures can be requested with a single command: ``\texttt{exq~mimage~30~R,G,B}'' (\texttt{mimage} is short for multiple-image).
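The expansion of an \texttt{mimage} request into the individual exposures the queue will execute could be sketched as follows (hypothetical helper names):

```python
def expand_mimage(exptime, filters, num_exposures=1):
    """Expand an 'exq mimage' request into the ordered (filter, exptime)
    pairs the exposure queue will execute, e.g. 'mimage 30 R,G,B' ->
    [('R', 30.0), ('G', 30.0), ('B', 30.0)]. Illustrative sketch only."""
    queue = []
    for filt in filters.split(','):
        for _ in range(num_exposures):
            queue.append((filt, float(exptime)))
    return queue
```

For each entry the daemon would command a filter change (if needed) before starting the exposure.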
\begin{figure}[t]
\begin{center}
\vspace{1cm}
\texttt{1111;30;R;1;normal;M101;SCIENCE;0;1;3;545}\\
\texttt{1111;30;G;1;normal;M101;SCIENCE;0;2;3;545}\\
\texttt{1111;30;B;1;normal;M101;SCIENCE;0;3;3;545}\\
\vspace{0cm}
\end{center}
\caption[A sample exposure queue file]{
A sample of an exposure queue file. Each line is a new exposure, and details of the exposure are separated by semicolons. In order, these are: the binary UT mask, exposure time in seconds, filter, binning factor, frame type, object name, image type, glance flag, set position, set total and database set ID number.
}\label{fig:exq_file}
\end{figure}
When a set of exposures is defined and passed to the exposure queue daemon they are added to the queue, which is stored in a text file written to and read by the daemon. An example of the contents of the file is given in \aref{fig:exq_file}. The details of each exposure are saved in this file, and adding more using the \texttt{exq} command adds more exposures to the end of the queue. When the queue is running (it can be paused and resumed, for example to allow slews between exposures) the daemon will select the first exposure in the queue, tell the filter wheel daemon to change filter if necessary and then tell the camera daemon to start the exposure.
As shown in \aref{fig:exq_file}, extra meta-data can be written for each exposure. The UT mask is simply a binary representation of the unit telescopes to use for this exposure, so \texttt{0101} would be exposing on UTs 1 and 3 only (counting from the right starting with UT1), while \texttt{1111} will be on all four. The frame type is a variable used within the FLI API: it is either \texttt{normal} or \texttt{dark}, depending on whether the shutter will open. Exposures taken through the exposure queue can also have a target name (e.g.\ the galaxy M101 in \aref{fig:exq_file}) and an image type (used to define the type of image, either SCIENCE, FOCUS, FLAT, DARK or BIAS). The glance flag is a boolean value, set to \texttt{1} (True) if the exposure is a glance or \texttt{0} (False) otherwise.
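Parsing one line of the queue file into these fields, including decoding the UT mask, could be sketched as follows (illustrative only; the real daemon's handling differs in detail):

```python
# Field order matches the sample exposure queue file shown above.
QUEUE_FIELDS = ['ut_mask', 'exptime', 'filter', 'binning', 'frame_type',
                'object_name', 'image_type', 'glance', 'set_pos',
                'set_total', 'set_id']


def parse_queue_line(line):
    """Parse one semicolon-separated line of the exposure queue file
    into a dictionary (sketch only)."""
    values = line.strip().split(';')
    if len(values) != len(QUEUE_FIELDS):
        raise ValueError(f'expected {len(QUEUE_FIELDS)} fields')
    entry = dict(zip(QUEUE_FIELDS, values))
    entry['exptime'] = float(entry['exptime'])
    entry['binning'] = int(entry['binning'])
    entry['glance'] = entry['glance'] == '1'
    # Decode the binary UT mask: the rightmost bit is UT1, then UT2...
    mask = entry['ut_mask']
    entry['uts'] = [i + 1 for i, bit in enumerate(reversed(mask))
                    if bit == '1']
    return entry
```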
When multiple exposures are defined using the \texttt{exq} commands, as in the previous \texttt{mimage} example, they are grouped into a ``set''. The set position and set total values shown in \aref{fig:exq_file} denote those exposures as 1 of (a set of) 3, 2 of 3, and 3 of 3. Including this information in the exposure metadata is necessary so the photometry pipeline knows if an exposure is part of a set and, if they are all in the same filter, whether they should be co-added to produce reference frames. Exposure sets are defined in the observation database (see \aref{sec:obsdb}), with each pointing having one or more sets to be added to the exposure queue by the pilot when that pointing is observed.
Similar to the camera daemon, the timing of code and functions within the exposure queue daemon has been optimised to minimise the ``dead time'' between exposures. However, the commands also need to be timed correctly to ensure that, for example, the exposure does not start while the filter wheel is still moving. This was one of the major reasons for having a separate exposure queue daemon to handle these timing concerns, while the camera and filter wheel daemons dealt only with individual commands. Incidentally, pt5m uses a QSI camera with an integrated filter wheel \citep{pt5m}, so what in G-TeCS are separate camera, filter wheel and exposure queue daemons are all combined into a single ``CCD'' daemon in \aref{fig:pt5m_software}.
\end{colsection}
\subsection{Dome control}
\label{sec:dome}
\begin{colsection}
The dome daemon is the primary interface to the dome. It is in effect the most critical of all of the hardware control systems, because a failure in the software resulting in the dome opening in bad weather could be catastrophic to the hardware inside. As such, the dome daemon includes multiple levels of internal checks and backup systems. It is also the only daemon with a small amount of autonomy built in, and therefore blurs the line between a pure hardware control daemon and the more complicated autonomous systems described in \aref{chap:autonomous}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/webcam_open.jpeg}
\end{center}
\caption[Webcam image of the GOTO site at night]{
A webcam image of the GOTO site taken in June 2018. The two GOTO clamshell domes are on the right; the northern, empty dome is closed while the other is open for observing. Note the SuperWASP shed roof on the left is also open.
}\label{fig:webcam}
\end{figure}
GOTO uses an Astrohaven clamshell dome, shown in \aref{fig:webcam}. The dome daemon communicates with the \glsfirst{plc} that comes with the dome through a simple serial (RS-232) connection. Moving the dome is achieved by sending a single character to the PLC:\@ \texttt{a} to open the south side, \texttt{A} to close it; \texttt{b}/\texttt{B} for the north side. The PLC will respond with another character: either returning the input while the dome is moving, \texttt{x}/\texttt{X} when the south side is fully open/closed and \texttt{y}/\texttt{Y} for the north side. This is a simplistic and quite limited interface. For example, while one side is moving there is no way to know the status of the other. Therefore, when commissioning it was decided to add additional independent limit switches, described in \aref{sec:arduino}. The Arduino system detailed in that section adds four additional inputs: one at the intersection of the two inner-most shutters to confirm the dome is fully closed, two on either side to confirm if either side is fully open, and one on the dome entrance hatch. Using all of these sensors, and the feedback from the dome PLC, it is possible to build up a complete picture of the current dome status. Inside the dome daemon each side has five possible statuses: \texttt{closed}, \texttt{part\_open}, \texttt{full\_open}, \texttt{opening} and \texttt{closing}. The dome as a whole is only considered confirmed closed if both sides report \texttt{closed}.
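Decoding the PLC's single-character replies could be sketched as the following mapping (illustrative only, not the actual \texttt{Astrohaven} class code):

```python
# Map each single-character PLC reply to (side, status). The input
# character is echoed back while a side is moving; 'x'/'X' report the
# south side fully open/closed, 'y'/'Y' the north side.
PLC_REPLIES = {
    'a': ('south', 'opening'),
    'A': ('south', 'closing'),
    'x': ('south', 'full_open'),
    'X': ('south', 'closed'),
    'b': ('north', 'opening'),
    'B': ('north', 'closing'),
    'y': ('north', 'full_open'),
    'Y': ('north', 'closed'),
}


def decode_plc_reply(char):
    """Return the (side, status) implied by a PLC reply character."""
    try:
        return PLC_REPLIES[char]
    except KeyError:
        raise ValueError(f'unexpected PLC reply: {char!r}')
```

The remaining statuses (\texttt{part\_open}, and confirmation of \texttt{closed}) come from combining these replies with the additional Arduino limit switches.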
As the interface functions of the Astrohaven PLC are very limited, any more advanced functionality had to be coded from scratch. The commands to the dome are contained within a custom Python class \texttt{Astrohaven}, which also returns the status of the dome and the additional sensors. The class has functions to open and close the dome, which include being able to move a specific side or both. Due to the five-shutter design of the GOTO dome (shown in \aref{fig:webcam}) the overlapping side (south) is always opened before the north and closed after it, as it is easier for the shutter casters to roll over the lower shutter than for the lower shutter to force itself under the casters. When opening the south side the motion is deliberately stepped (i.e.\ moving in short bursts) rather than in one smooth motion. This was added due to the design of the top overlapping shutter: if the move command is sent too quickly, slack will appear in the drive belts and the upper shutter will end up ``jerking'' the lower one, putting more stress on the belts. This sort of functionality is not included in the default Astrohaven software but is easy to do within the Python code by increasing the time between sending command characters.
As described in \aref{sec:arduino}, along with the extra dome sensors a small siren was attached to the Arduino to give an audible warning before the dome starts moving. This siren can be activated for a given number of seconds through an HTTP request to the Arduino, and this is called by the dome daemon whenever the dome is moved automatically. The siren can be disabled in manual mode and is automatically off in engineering mode (see \aref{sec:mode}). One slight complexity is if the system is in manual mode with the alarm disabled but the autoclose feature still enabled. In this case the dome alarm will not sound when manually sending move commands to the daemon, however if the dome is due to close automatically in bad conditions it will re-enable the alarm and make sure it sounds before moving. Forcing the siren to sound whenever the dome moves autonomously is an important safety feature when operating a robotic telescope such as GOTO, and pt5m also has a similar alarm.
As mentioned above, the dome daemon has an ``autoclose'' feature that is unlike any feature of the other daemons. The normal design philosophy of the daemons is that they should not take any action without explicit instructions, which could come from a user or a script like the pilot. The dome, however, is an exception since in the case of bad weather the survival of the hardware is considered to be of higher importance. Therefore, in addition to checking for input commands, the dome daemon control loop also monitors the output of the conditions daemon (see \aref{sec:conditions}). If any conditions flag is set to bad, and the dome autoclose option is enabled, then the dome daemon will automatically enter a ``lockdown'' state. In this state if the dome is currently open it will immediately send itself a close command. Once it is closed the lockdown will prevent any open commands, until either the lockdown is cleared or autoclose is disabled. Another of the hardware additions during commissioning (see \aref{sec:arduino}) was a ``quick-close'' button directly attached to the serial port of the control computer in the dome. The dome daemon automatically sends a signal through the serial connection every time the control loop runs, and if the signal is broken (i.e.\ the button has been pressed, breaking the circuit) then it will immediately trigger a lockdown.
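The lockdown behaviour could be sketched as the following small state machine (hypothetical class and flag names, not the actual dome daemon code):

```python
class DomeLockdown:
    """Sketch of the autoclose/lockdown behaviour: each loop cycle the
    daemon checks the conditions flags; any bad flag triggers a
    lockdown, which closes the dome and blocks any open commands until
    the lockdown is cleared or autoclose is disabled."""

    def __init__(self, autoclose=True):
        self.autoclose = autoclose
        self.lockdown = False
        self.dome_open = False

    def check_conditions(self, flags):
        """Called every loop cycle with the conditions daemon flags."""
        if self.autoclose and any(flags.values()):
            self.lockdown = True
            if self.dome_open:
                self.close()

    def open(self):
        if self.lockdown:
            raise RuntimeError('dome in lockdown: open command blocked')
        self.dome_open = True

    def close(self):
        self.dome_open = False
```

The quick-close button acts in the same way, triggering a lockdown immediately when the serial circuit is broken.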
The other custom hardware device added to the GOTO dome was a small backup ``heartbeat'' system, developed by Paul Chote at Warwick. A recognised flaw of the G-TeCS dome control architecture was that it was entirely reliant on the dome daemon, and by extension the master control computer, to close the dome in an emergency. Should the dome daemon or the control computer crash for any reason the dome would be completely disabled. This therefore presented a single point of failure, and a system was designed at Warwick to mitigate against this. An extra circuit, also powered by an Arduino, is connected over the serial port to the dome PLC, and the dome daemon has a separate thread which continuously sends a ping byte to this port. Should the Arduino not receive a signal from the dome daemon after a given timeout period (the default is \SI{5}{\second}) it will automatically start sending the close characters (\texttt{A}/\texttt{B}) to the dome PLC.\@ This system therefore provides a secure secondary backup to the other dome software, and although it has so far not been needed it is an important insurance policy.
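The Arduino-side heartbeat logic amounts to a simple watchdog, which could be sketched as follows (illustrative names; the real device is programmed as Arduino firmware, not Python):

```python
import time


class HeartbeatWatchdog:
    """Sketch of the backup heartbeat logic: if no ping byte arrives
    within the timeout, start sending the close characters ('A'/'B')
    to the dome PLC every cycle."""

    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock            # injectable for testing
        self.last_ping = self.clock()

    def ping(self):
        """Called whenever a heartbeat byte arrives from the dome daemon."""
        self.last_ping = self.clock()

    def characters_to_send(self):
        """Characters to write to the PLC this cycle ('' if all is well)."""
        if self.clock() - self.last_ping > self.timeout:
            return 'AB'   # close both the south and north sides
        return ''
```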
The dome daemon is also the hardware interface to the dehumidifier located within the dome. Like the dome, the dehumidifier requires automated control, as the unit uses a lot of power and can get clogged with dust if used excessively. The dome daemon will turn the dehumidifier on if the internal humidity gets too high or temperature gets too low, and will turn it off when they reach normal levels or if the dome is opened. This behaviour can also be overridden and, like all the automated G-TeCS systems, is disabled in engineering mode.
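The dehumidifier logic is a simple threshold check, sketched below with placeholder threshold values (the real G-TeCS configuration values may differ):

```python
def dehumidifier_should_run(humidity, temperature, dome_open,
                            max_humidity=75, min_temperature=5):
    """Sketch of the dehumidifier control logic: run when the internal
    humidity is too high or the temperature too low, but never while
    the dome is open. Threshold values here are placeholders."""
    if dome_open:
        return False
    return humidity > max_humidity or temperature < min_temperature
```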
\end{colsection}
\subsection{Mount control}
\label{sec:mount}
\begin{colsection}
The mount daemon sends commands to the GOTO mount through the \glsfirst{sitech} servo controller. As discussed in \aref{sec:control_requirements}, the software for the servo controller is a Windows program called SiTechEXE.\@ Therefore, enabling communication between SiTechEXE and the rest of the control system was a key requirement of G-TeCS.\@
Initially the only way to communicate with SiTechEXE was via the ASCOM software interface. It was possible to communicate directly with the servo controller through a serial interface, however this was a very low-level interface and would have required a lot of work to re-implement the array of commands and functions within SiTechEXE.\@ In particular, the PointXP pointing model software was essential to make a pointing model for the mount (see \aref{sec:pointxp}), and it would have been very difficult to implement using serial commands. ASCOM is so called because it uses the Microsoft \glsfirst{com} interface standard to provide a unified \glsfirst{api} for astronomical hardware. SiTech provides its own ASCOM driver for the servo controller, and through the Python for Windows package (\texttt{pywin32}\footnote{\url{https://github.com/mhammond/pywin32}}) Python code could interact with ASCOM and therefore SiTechEXE.\@ The ASCOM API gave access to a wide variety of commands and status functions, including being able to slew the telescope, start and stop tracking, parking and setting and clearing targets.
The ASCOM method did, however, require the Python daemon to be running on the Windows computer. The solution was to write a \texttt{sitech} interface, in the same manner that the FLI hardware connected to the boom-arm computers uses interface daemons running on the NUCs. The \texttt{sitech} interface acted purely as a way of routing commands sent through the Pyro network to the ASCOM equivalent. However, as it had to run on the Windows machine, it differed slightly in implementation from the FLI interfaces and other daemons, as Windows and Linux have different ways of defining ``daemon'' processes (Windows generally does not call them daemons, instead using terms like ``background processes''). Furthermore, the interface had to be able to be started, stopped and killed from the remote control computer using a \texttt{sitech} control script, which meant G-TeCS needed to include functions specifically to interact with Windows processes. The G-TeCS package therefore needed to be installable on Windows and to deal with Windows configuration file paths and parameters (compare Windows \texttt{C:\textbackslash{}Users\textbackslash{}goto\textbackslash{}} to Linux \texttt{/home/goto/}). This was simplified by the use of the Cygwin package\footnote{\url{https://www.cygwin.com}}, which provides Unix-like commands and behaviour on Windows, including mapping directories into Unix format. Once this was developed the system was reliable enough to correctly control the mount during commissioning.
In July 2017 the author of SiTechEXE, Dan Gray of Sidereal Technology, released an update to the software that enabled communication over a network using \glsfirst{tcpip} commands. This meant the mount daemon running on the control computer could communicate directly with SiTechEXE without the need for the \texttt{sitech} interface, ASCOM, Cygwin or maintaining any Windows-compatible code. Although the existing code was functioning reliably, removing the need for compatibility with ASCOM enabled the addition of several new features, such as more error feedback, whether the limit switches have been triggered, and turning on and off ``blinky mode'' (the error state the mount automatically enters when drawing too much current or one of the inbuilt limit switches is triggered). As such it was seen as a worthwhile update, and therefore the \texttt{sitech} interface and any Windows code were removed from G-TeCS when the La Palma system was updated in August 2017. The TCP/IP interface provides a much simpler way to communicate with the mount than the previous ASCOM commands. Commands are sent as binary strings of characters; for example to get the current mount status information you send `\texttt{ReadScopeStatus}', and to slew to given coordinates the command is `\texttt{GoTo~<ra>~<dec>}'.
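A minimal sketch of sending such a command over TCP/IP is given below; the newline framing and single-reply format shown here are assumptions for illustration, not the exact SiTechEXE protocol:

```python
import socket


def sitech_command(host, port, command, timeout=5.0):
    """Send a single command string to a SiTechEXE-style TCP server and
    return the reply. Sketch only: the real protocol's framing and
    reply format differ in detail."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command.encode('ascii') + b'\n')
        return sock.recv(4096).decode('ascii').strip()
```

For example, the mount daemon's status loop would send \texttt{ReadScopeStatus} in this way every cycle and parse the reply.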
One catch is that the SiTech software expects coordinates in the JNow epoch, where the right ascension and declination coordinate system is defined for the current time rather than a fixed date in the past such as used for the J2000 equinox. Conversion from J2000 coordinates, which most professional astronomers use and is used everywhere else in G-TeCS, to the JNow epoch required by SiTechEXE is done using Astropy's coordinates module.
\end{colsection}
\subsection{Power control}
\label{sec:power}
\begin{colsection}
Similar to the camera, focuser and filter wheel daemons, the power daemon acts as an interface to multiple pieces of hardware. In this case, the daemon is connected to three types of power unit in two locations within the GOTO dome:
\begin{itemize}
\item Two Power Distribution Units (PDUs)\glsadd{pdu} are located in the main computer rack within the dome. These are used to control and distribute power to a variety of sources, including the primary control computer and ethernet switches in the rack, the mount controller and Windows control NUC on the mount, the rack monitor, Wi-Fi router and LED lights within the dome.
\item Two additional power relay boxes are attached to the mount boom arms. In the same way that the boom-arm NUCs are used to provide control interfaces instead of running multiple USB cables down the mount, these relays are used to provide and control power to the NUC and hardware (cameras, focusers and filter wheels).
\item Two Uninterruptible Power Supplies (UPSs)\glsadd{ups} are also located in the rack. These are battery devices that provide backup power in the event of mains supply failure. The first of these is connected directly to the dome, so in case of a power failure the dome has its own supply to enable it to close. The second is connected to the other power units described above.
\end{itemize}
Each power outlet in any of the above units can be turned on, off or rebooted (switched off and then back on again after a short delay). Each outlet has a unique name assigned, and multiple outlets can be grouped together to be controlled using a single command similar to the commands for the exposure queue daemon (for example \texttt{power~off~cam1,cam2,cam3}). The FLI hardware (cameras, focusers and filter wheels) is usually powered down during the day; all other hardware, including the dome and mount, is left on. Power to the dehumidifier unit is controlled by the dome daemon as described in \aref{sec:dome}.
The rack PDUs and UPSs used by GOTO are manufactured by Schneider Electric (previously APC)\footnote{\url{https://www.apc.com}}, and are communicated with using \glsfirst{snmp} commands over the network using the Linux snmpget and snmpset utilities. The relay boxes were manufactured for GOTO using Devantech ETH8020 ethernet boards\footnote{\url{https://www.robot-electronics.co.uk}}, controlled through simple TCP/IP commands. All of these are surrounded by Python wrappers within the power daemon.
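As a sketch of what such a wrapper might look like (the OID, community string, hostname and action codes below are placeholders based on typical APC PDUs, not GOTO's actual configuration or the real G-TeCS code):

```python
import subprocess

# Placeholder values: the real OIDs, community strings and hostnames
# used by G-TeCS are not reproduced here.
OUTLET_OID = "1.3.6.1.4.1.318.1.1.4.4.2.1.3"  # example APC outlet-control OID
ACTIONS = {"on": "1", "off": "2", "reboot": "3"}

def build_snmpset_args(host, outlet, action, community="private"):
    """Construct the argument list for an snmpset call controlling one
    PDU outlet (integer-typed value selects on/off/reboot)."""
    oid = "{}.{}".format(OUTLET_OID, outlet)
    return ["snmpset", "-v1", "-c", community, host, oid, "i", ACTIONS[action]]

def set_outlet(host, outlet, action):
    """Run snmpset to switch a single outlet on or off, or reboot it."""
    subprocess.run(build_snmpset_args(host, outlet, action), check=True)
```

Grouped commands like \texttt{power~off~cam1,cam2,cam3} then reduce to looking up each outlet name and issuing one such call per outlet.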
\end{colsection}
\section{Summary and Conclusions}
\label{sec:gtecs_conclusion}
\begin{colsection}
In this chapter I have described the key elements of the GOTO Telescope Control System (G-TeCS).
Several options were considered for the GOTO control system, and based on the unique requirements of GOTO a custom system based on the pt5m software was ultimately decided on. I described the fundamental features of this new system, G-TeCS, in particular how the control system is built around standalone daemon programs. Adopting this system proved to be a successful decision, as it provided the flexibility required for GOTO's multi-telescope design. In the future the daemon-based system will provide the basis to expand the control system to multiple independent mounts (see \aref{sec:gtecs_future}).
I went on to describe the hardware control functionality of G-TeCS, and how each type of hardware (cameras, mount, dome etc) is controlled through the associated software daemons. This provides the foundations of the control system, but on its own it still requires an observer to operate the telescope. In the following chapter (\aref{chap:autonomous}) I describe the higher-level software within G-TeCS that replaces the human operator, and allows GOTO to function as a fully-robotic telescope.
\end{colsection}
\chapter{Hardware Characterisation}
\label{chap:hardware}
\chaptoc{}
\section{Introduction}
\label{sec:hardware_intro}
\begin{colsection}
In this chapter I detail my work characterising and modelling the GOTO hardware. This work was carried out predominantly in the first year and a half of my PhD, prior to GOTO's commissioning in 2017.
\begin{itemize}
\item In \nref{sec:detectors} I describe and give the results of the in-lab detector characterisation tests I ran on the GOTO CCD cameras.
\item In \nref{sec:throughput} I detail the throughput model of the GOTO optical system that I created.
\item In \nref{sec:photometry} I apply the results of the previous two sections to predict the photometric properties of GOTO images, before comparing them to real observations taken once GOTO was fully operational.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated, and has not been published elsewhere.
\newpage
\end{colsection}
\section{Detector properties}
\label{sec:detectors}
\begin{colsection}
CCD cameras have a variety of characteristic parameters advertised by the manufacturers, including the amount of detector noise. As described in \aref{sec:goto_design}, GOTO uses MicroLine ML50100 CCD cameras manufactured by \glsfirst{fli}, which contain KAF-50100 CCD sensors manufactured by ON Semiconductor. Both manufacturers produce specification sheets advertising expected parameters\footnote{ML50100 available at \url{http://www.flicamera.com/spec_sheets/ML50100.pdf}.}\textsuperscript{,}\footnote{KAF-50100 available at \href{http://www.onsemi.com/pub/Collateral/KAF-50100-D.PDF}{\texttt{http://www.onsemi.com/pub/Collateral/KAF-50100-D.pdf}}.}. Confirming these under laboratory conditions is important before the detectors are used to take scientific images on the telescope. FLI carried out a limited series of tests on the cameras before selling them, but carrying out our own tests ensures that our cameras meet the specifications, and also allows independent measurements of the key parameters.
\end{colsection}
\subsection{Sources of CCD noise}
\label{sec:noise}
\begin{colsection}
There are many sources of noise in images taken with CCD cameras. The most important noise sources for astronomical images are \citep{CCDs}:
\begin{itemize}
\item \emph{Shot noise} derived from counting photo-electrons from the source and background.
\item \emph{Dark current noise} from thermally generated electrons within the sensor.
\item \emph{Read-out noise} from the detector and CCD controller electronics.
\item \emph{Fixed-pattern noise} from different sensitivities between pixels.
\item \emph{Bias}, an offset in counts added to each pixel which can vary with time and position on the detector.
\end{itemize}
The shot noise ($\sigma_\text{N}$) arises as photons from the target object arrive at the sensor at irregular intervals. The number of photons arriving in a given interval follows a Poisson distribution, so if the number of electrons counted is $N$ the noise is $\sigma_\text{N} = \sqrt{N}$; for large $N$ the distribution tends towards a Gaussian with mean $N$ and standard deviation $\sqrt{N}$. When taking on-sky astronomical observations there are two sources of shot noise: from the target object ($\sigma_\text{obj}$) and from the background sky ($\sigma_\text{sky}$).
Dark current noise ($\sigma_\text{DC}$) is due to electrons produced by thermal excitations, which are indistinguishable from photo-electrons and increase with exposure time. This is also a photon counting measurement, so the noise $\sigma_\text{DC} = \sqrt{D}$ where $D$ is the dark current per pixel. The dark current depends on temperature: cooling the cameras reduces the thermal excitations and therefore reduces the dark current.
Read-out noise ($\sigma_\text{RO}$) depends on the quality of the CCD outputs and readout electronics, and on the speed data is read out from the CCD.\ The FLI MicroLine cameras read out at a fixed frequency of \SI{8}{\mega\hertz} per pixel, but other astronomical cameras have variable read-out speeds. Since read-out noise is a property of the output electronics, it is independent of signal or the exposure time used, and therefore it can be represented by a constant value for each frame, $\sigma_\text{RO} = R$, measured in electrons per pixel. The MicroLine cameras have two channels with independent readouts, so each will have an independent read-out noise (see \aref{sec:chip_layout}).
Fixed-pattern noise ($\sigma_\text{FP}$, also called flat-field noise) is due to the small differences in size and response between pixels. It increases linearly with the electron count, including source ($N$), background ($N_\text{sky}$) and dark ($D$) electrons (the fixed-pattern noise can be further broken down into the photo response non-uniformity and dark signal non-uniformity, but we will consider it as a single noise source). It can be parametrised as $\sigma_\text{FP} = k_\text{FP}(N+N_\text{sky}+D)$, where $k_\text{FP}$ is a dimensionless constant describing the fixed-pattern noise as a fraction of the full-well capacity. Scientific CCD cameras typically have very small non-uniformities between pixels, so $k_\text{FP}$ is usually $<1\%$, but this noise source can dominate when the signal count is high. However, as it is linearly related to the number of counts recorded, fixed-pattern noise can easily be removed by flat fielding.
Finally, the bias level is an offset in counts applied to each pixel independent of the input signal. A large bias level is applied to each pixel by CCD manufacturers to prevent negative counts from being recorded due to fluctuations in the read-out noise. Across a frame the bias level will sometimes show structure, but it is simple to remove by subtracting a master bias frame. The bias level can change by a few counts during a night due to changes in the temperature, and it should be measured regularly, as any large changes might indicate a problem with the detector. The MicroLine cameras also include an overscan region (see \aref{sec:chip_layout}), which gives an independent measurement of the typical bias level for every image.
The noise sources described above (aside from the bias) are all independent Gaussian random variables, and therefore are added in quadrature to get the total noise per pixel
\begin{equation}
\begin{split}
\sigma_\text{Total}^2 & = \sigma_\text{obj}^2 +
\sigma_\text{sky}^2 +
\sigma_\text{DC}^2 +
\sigma_\text{RO}^2 +
\sigma_\text{FP}^2 \\
& = N + N_\text{sky} + D + R^2 + k_\text{FP}^2{(N+N_\text{sky}+D)}^2.
\end{split}
\label{eq:noise}
\end{equation}
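The total noise in \aref{eq:noise} can be evaluated directly; the following minimal Python sketch (with illustrative parameter values) shows how the read-out noise floor dominates at low signal while the fixed-pattern term grows to dominate at high signal:

```python
import math

def total_noise(n_obj, n_sky, dark, read_noise, k_fp):
    """Total per-pixel noise in electrons: shot, dark-current, read-out
    and fixed-pattern terms added in quadrature, as in the total-noise
    equation above."""
    signal = n_obj + n_sky + dark
    variance = signal + read_noise**2 + (k_fp * signal) ** 2
    return math.sqrt(variance)

# With zero signal the total noise reduces to the read-out noise:
# total_noise(0, 0, 0, 12.0, 0.005) -> 12.0
```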
\end{colsection}
\subsection{In-lab tests}
\label{sec:camera_tests}
\begin{colsection}
The initial deployment of GOTO was delayed for several months, due to delays on-site and in manufacturing the unit telescopes (see \aref{sec:hardware_commissioning}). The first set of four cameras, however, had already been purchased from FLI, and the delay gave time to test them in the lab in Sheffield in 2016. The second set of cameras were also purchased before the second four unit telescopes; these were also brought to Sheffield in 2018 so the same tests could be repeated. A list of the nine FLI cameras bought for GOTO is given in \aref{tab:cameras}. Each camera is given a name (Camera 1, Camera 2 etc.) based on the order of their serial numbers. These names are used throughout this section but do not necessarily match which GOTO unit telescope they were assigned to, and the cameras on La Palma are sometimes swapped around to allow for repairs.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc} %
Name & Serial number & Set & Tested \\
\midrule
Camera 1 & ML0010316 & 1 & May--June 2016 \\
Camera 2 & ML0330316 & 1 & March--May 2016 \\
Camera 3 & ML0420516 & 1 & May--June 2016 \\
Camera 4 & ML0430516 & 1 & May--June 2016 \\
Camera 5 & ML5644917 & 2 & May--June 2018 \\
Camera 6 & ML6054917 & 2 & May--June 2018 \\
Camera 7 & ML6094917 & 2 & May--June 2018 \\
Camera 8 & ML6304917 & 2 & May--June 2018 \\
Camera 9 & ML6314917 & 2 & \textit{not tested} \\
\end{tabular}
\end{center}
\caption[List of GOTO cameras]{
A list of the 9 GOTO cameras, with assigned names, serial numbers and dates when the tests were carried out. The ninth camera (bought as a spare) was retained by Warwick for their own use, and was not tested in Sheffield.
}\label{tab:cameras}
\end{table}
The characterisation tests consisted of taking a series of calibration frames with each camera. Three types of images were needed:
\begin{itemize}
\item Zero-second dark exposures, to construct bias frames (see \aref{sec:bias}).
\item Long (30 minute) dark exposures at different temperatures, to measure the dark current (see \aref{sec:dc}).
\item Flat illuminated frames at different exposure times, to construct photon transfer curves (see \aref{sec:ptc}) and measure linearity (see \aref{sec:lin}).
\end{itemize}
The cameras were tested using two different test setups. \aref{fig:dark_photo} shows the setup for taking dark frames: the cameras are face down and covered by a sheet. The long dark exposures required were taken overnight to minimise the background light reaching the detectors. For flat fields a computer monitor was used as a flat source, shown in \aref{fig:flat_photo}. Sheets of paper were placed between the camera and the monitor to reduce the illumination and diffuse the light. The LCD monitor will produce polarised light; however, this should not affect the resulting images as long as the angle of the camera remains constant.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/dark_photo.jpg}
\end{center}
\caption[The dark frame test setup]{
A photo of the dark frame test setup in the lab in Sheffield. Dark frames were taken at night with the cover down to minimise the ambient light.
}\label{fig:dark_photo}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/flat_photo.jpg}
\end{center}
\caption[The flat field test setup]{
A photo of the flat field test setup. A spare computer monitor was used as a flat panel, with sheets of paper placed between it and the camera. The cover shown in \aref{fig:dark_photo} was also placed over the setup.
}\label{fig:flat_photo}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{CCD sensors}
\label{sec:chip_layout}
\begin{colsection}
As mentioned previously, the MicroLine ML50100 cameras used by GOTO contain KAF-50100 \glsfirst{ccd} sensors manufactured for FLI by ON Semiconductor\footnote{\url{http://www.onsemi.com}}. These are high-resolution, front-illuminated CCDs with two read-out channels. The \glsfirst{qe} curve for the detector is shown in \aref{fig:qe} in \aref{sec:qe}. The detector is covered in a multilayer anti-reflective coating, and includes a microlens array to focus light onto each pixel and improve the quantum efficiency. The microlenses limit the acceptance angle of the detector to approximately $\SI{\pm20}{\degree}$; for a \SI{40}{\centi\metre} aperture this corresponds to a maximum focal ratio of 2.8 (the \SI{40}{\centi\metre} GOTO unit telescopes are f/2.5, see \aref{sec:optics}).
The KAF-50100 sensor consists of a 50-megapixel CCD with $\SI{6}{\micro\metre} \times \SI{6}{\micro\metre}$ square pixels. The layout of the sensor is shown in \aref{fig:chip}, adapted from the ON Semiconductor sensor specification sheet. The sensor has $8282 \times 6220$ pixels with an imaging area of $8176 \times 6132$ pixels; when taking data in full-frame mode the camera outputs an $8304 \times 6220$ array. Surrounding the image area on each edge are 16 \emph{active buffer pixels}, which are light-sensitive but not considered part of the primary active region (they are not tested for deformities by the manufacturer). Around the edge of the active area is a border of light-shielded \emph{dark reference pixels} which do not respond to light and therefore can be used as a dark current reference. At the beginning and end of each row there is also a test column with 4 blank columns either side, as well as a test row at the end of each frame; these are used to test charge transfer efficiency during the manufacturing process. Finally, at the start of each row the register reads out a test pixel, used in the readout process, followed by 10 \emph{dummy pixels} which do not correspond to physical pixels on the sensor. These form an overscan region which can be used to measure the bias level. A sample flat frame highlighting these areas is shown in \aref{fig:frame}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.78\linewidth]{images/chip}
\end{center}
\caption[The layout of the KAF-50100 CCD sensor]{
The layout of the KAF-50100 CCD sensor. The central image area is not shown to scale, but the surrounding rows and columns are all in proportion.
}\label{fig:chip}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.78\linewidth]{images/sample3.pdf}
\end{center}
\caption[A sample bright frame from one of the MicroLine cameras]{
A sample bright frame from one of the MicroLine cameras. The highlighted corner shows some of the features described in \aref{fig:chip}. This image was taken as shown in \aref{fig:flat_photo} with no optical elements between the camera and the screen. As no optical elements were used the cause of the visible vignetting is unclear, but may come from the camera aperture.
}\label{fig:frame}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{Bias}
\label{sec:bias}
\begin{colsection}
The bias level in each pixel can be measured by taking a dark, zero-second exposure image. This image will not include any electrons from a source or background, so the shot noise is zero, and as the dark current is proportional to the exposure time this will also be minimised (there will still be a small component due to the time taken to read out the sensor). To account for the read-out noise, multiple images are taken, 50 for each of the eight cameras, and combined to form a master bias frame by taking the median value in each pixel (to eliminate cosmic rays).
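This median combination is straightforward with NumPy; a minimal sketch (the array sizes and the simulated cosmic-ray value below are illustrative):

```python
import numpy as np

def make_master_bias(frames):
    """Combine a stack of zero-second bias frames into a master bias by
    taking the per-pixel median, which rejects outliers such as
    cosmic-ray hits."""
    stack = np.asarray(frames, dtype=float)
    return np.median(stack, axis=0)
```

A mean combination would let a single cosmic-ray hit bias the affected pixel, whereas the median is unaffected as long as fewer than half the frames are contaminated.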
\aref{tab:bias} gives the median bias level from each master bias, measured within a 2000$\times$2000 pixel region in the centre of both channels, along with the standard deviation in the same region. The measured bias levels are all around 1000 counts, which would be typical for a bias level set by the manufacturer. Combining $n$ frames should reduce the noise by $\sqrt{n}$, and, as expected, the errors are equivalent to the read-out noise values given in \aref{tab:ptc} when converted into counts and reduced by a factor of $\sqrt{50}$ ($\approx 7$).
\begin{table}[t]
\begin{center}
\begin{tabular}{c|rr} %
& \multicolumn{2}{c}{Bias} \\
& \multicolumn{2}{c}{(ADU)} \\
& \multicolumn{1}{c}{L} & \multicolumn{1}{c}{R} \\
\midrule
Camera 1 & $971\pm3.6$ & $969\pm3.7$ \\
Camera 2 & $989\pm3.3$ & $983\pm3.3$ \\
Camera 3 & $1004\pm3.2$ & $991\pm3.1$ \\
Camera 4 & $974\pm3.4$ & $1008\pm3.8$ \\
\end{tabular}
\hspace{0.5cm}
\begin{tabular}{c|rr} %
& \multicolumn{2}{c}{Bias} \\
& \multicolumn{2}{c}{(ADU)} \\
& \multicolumn{1}{c}{L} & \multicolumn{1}{c}{R} \\
\midrule
Camera 5 & $994\pm2.9$ & $986\pm3.0$ \\
Camera 6 & $984\pm2.7$ & $991\pm3.0$ \\
Camera 7 & $992\pm3.1$ & $981\pm3.0$ \\
Camera 8 & $1008\pm3.3$ & $1012\pm2.9$ \\
\end{tabular}
\end{center}
\caption[Bias values]{
Bias values for each camera.
}\label{tab:bias}
\end{table}
\end{colsection}
\subsection{Gain, read-out noise and fixed-pattern noise}
\label{sec:ptc}
\begin{colsection}
The gain, read-out and fixed-pattern noise of a CCD camera can be measured using the \glsfirst{ptc} method \citep{CCDs, PTC}. A photon transfer curve is a log-log plot of a signal value against the noise in the signal. To construct a photon transfer curve a series of bright exposures of a flat light source were taken with varying exposure times. For these images there is no background signal, the cameras were cooled meaning dark current noise is negligible (see \aref{sec:dc}), and the master bias frames described in \aref{sec:bias} were subtracted from each frame. The total noise per pixel in electrons is therefore given by \aref{eq:noise} as
\begin{equation}
\sigma_\text{Total}^2 = N + R^2 + k_\text{FP}^2{(N)}^2.
\label{eq:noise_2}
\end{equation}
The signal and total noise in \aref{eq:noise} are all in electrons (\ensuremath{\textup{e}^-}), however the output of the camera's \glsfirst{adc} is a digital signal, $S$, measured in counts or \glsfirst{adu}. This signal is linearly related to the actual number of electrons detected, $N$, through the gain, $g$, in \ensuremath{\textup{e}^-}/ADU, as
\begin{equation}
N = g S.
\label{eq:gain}
\end{equation}
The gain is an important parameter of a CCD, and is set by the manufacturer based on the properties of the detector. For example, if a CCD has the gain set to 3 \ensuremath{\textup{e}^-}/ADU then pixels containing 0, 1 or 2 electrons would all have a measured value of 0 ADU;\@ this is a form of rounding error called quantisation error. If the same camera had a read-out noise of 1 \ensuremath{\textup{e}^-}{} per pixel then the readout-noise would be under-sampled, and setting a lower gain would be required. However, setting the gain too low results in the full-well capacity of each pixel being under-utilised. The KAF-50100 detectors have a full well capacity of 40,300 electrons, and the cameras have a 16-bit ADC (meaning the signal from each pixel can vary from 0 to 65535 ($2^{16}-1$) ADU). If the gain is set to $0.5$ \ensuremath{\textup{e}^-}/ADU then the ADC would saturate after reading 32,768 electrons, which is much less than the capacity of each pixel. Setting the gain therefore is a balance between these two effects.
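This trade-off can be checked with simple arithmetic; a minimal sketch using the 16-bit ADC and the 40,300~\ensuremath{\textup{e}^-}{} full-well capacity quoted above:

```python
def adc_saturation_electrons(gain, bits=16):
    """Electrons recorded when the ADC reaches its maximum output value
    of 2**bits - 1 counts."""
    return (2**bits - 1) * gain

FULL_WELL = 40300  # KAF-50100 full-well capacity in electrons

# A gain of 0.5 e-/ADU saturates the ADC well below the full well
# (~32,768 e-), while ~0.6 e-/ADU uses nearly the full dynamic range:
# adc_saturation_electrons(0.5) -> 32767.5
# adc_saturation_electrons(0.6) -> 39321.0
```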
As electrons and counts are proportional, the noise in both is also proportional (i.e.\ as $N=gS$ from \aref{eq:gain}, $\sigma_\text{Total} = g\sigma_S$). Using these relationships \aref{eq:noise_2} can be converted to give the noise in ADU,
\begin{equation}
\sigma_S^2 = \frac{1}{g} S + \frac{R^2}{g^2} + k_\text{FP}^2 S^2.
\label{eq:ptc}
\end{equation}
This is a quadratic equation which relates the measured total signal $S$ to the variance in the signal $\sigma_S^2$, and can be fitted to a photon transfer curve to determine values for the gain $g$ (in \ensuremath{\textup{e}^-}/ADU), read-out noise $R$ (still in \ensuremath{\textup{e}^-}{}) and fixed-pattern noise $k_\text{FP}$ (dimensionless).
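Since \aref{eq:ptc} is quadratic in $S$, the three parameters can be recovered with an ordinary polynomial fit. A minimal NumPy sketch using synthetic data (the "true" values in the test are illustrative, not measured ones):

```python
import numpy as np

def fit_ptc(signal, variance):
    """Fit the PTC model  var = S/g + R^2/g^2 + k^2 S^2  as a quadratic
    in the signal S, returning gain g (e-/ADU), read-out noise R (e-)
    and fixed-pattern noise k (dimensionless)."""
    a, b, c = np.polyfit(signal, variance, 2)  # var = a*S^2 + b*S + c
    g = 1.0 / b           # linear (shot-noise) term gives the gain
    r = g * np.sqrt(c)    # constant term is R^2/g^2
    k = np.sqrt(a)        # quadratic term is k^2
    return g, r, k
```

In practice the fit is restricted to points below the measured saturation level, where the model of \aref{eq:ptc} applies.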
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/ptc.pdf}
\end{center}
\caption[Key features of the photon transfer curve]{
The key features of a photon transfer curve, adapted from \citet{CCDs}.
}\label{fig:ptc_cartoon}
\end{figure}
The key features of a photon transfer curve are common for all CCDs, and are shown in cartoon form in \aref{fig:ptc_cartoon}. The first noise regime is when the signal is small: from \aref{eq:ptc}, for small $S$ the variance is constant and equal to $R^2/g^2$, so the noise floor is the read-out noise in ADU, $R/g$. At higher signals the noise is dominated by the shot noise which is proportional to $\sqrt{S}$, so this region has a gradient of \sfrac{1}{2} when plotted on the log-log axis. As the signal increases further the fixed-pattern noise begins to dominate, and, as this noise is proportional to the signal, this produces a gradient of 1 in the PTC.\@ Finally, the pixel reaches its full well capacity (assuming the gain has been set so this occurs before the ADC saturates), so the noise drops to zero.
\begin{table}[t]
\begin{center}
\begin{tabular}{l|cc|cc|cc|cc} %
&
\multicolumn{2}{c|}{Gain} &
\multicolumn{2}{c|}{RO noise} &
\multicolumn{2}{c|}{FP noise} &
\multicolumn{2}{c}{Saturation level} \\
&
\multicolumn{2}{c|}{(\ensuremath{\textup{e}^-}/ADU)} &
\multicolumn{2}{c|}{(\ensuremath{\textup{e}^-})} &
\multicolumn{2}{c|}{(\%)} &
\multicolumn{2}{c}{(ADU)} \\
& L & R & L & R & L & R & L & R \\
\midrule
Camera 1 & 0.53 & 0.53 & 12.4 & 12.0 & 0.46 & 0.45 & 64568 & 64585 \\
Camera 2 & 0.53 & 0.53 & 11.9 & 11.7 & 0.44 & 0.46 & 64552 & 64555 \\
Camera 3 & 0.57 & 0.57 & 12.6 & 11.8 & 0.45 & 0.42 & 64540 & 64552 \\
Camera 4 & 0.57 & 0.58 & 13.4 & 14.0 & 0.41 & 0.43 & 64577 & 64536 \\
Camera 5 & 0.62 & 0.63 & 12.3 & 12.8 & 0.40 & 0.40 & 64544 & 64550 \\
Camera 6 & 0.63 & 0.62 & 11.8 & 12.6 & 0.40 & 0.40 & 64554 & 64545 \\
Camera 7 & 0.62 & 0.62 & 13.1 & 12.5 & 0.41 & 0.39 & 64544 & 64552 \\
Camera 8 & 0.62 & 0.62 & 14.3 & 12.2 & 0.41 & 0.39 & 64529 & 64522 \\
\end{tabular}
\end{center}
\caption[Gain, read-out noise, fixed-pattern noise and saturation values]{
Gain, read-out noise, fixed-pattern noise and saturation values found by fitting photon transfer curves for each camera.
}\label{tab:ptc}
\end{table}
Photon transfer curves were constructed for all eight cameras by taking flat fields of varying exposure times between \SI{0.01}{\second} and \SI{90}{\second}. Twelve 50$\times$50 pixel regions were selected across each image, and the mean and standard deviation of the pixel values within each region were plotted to form the PTC for each camera, shown in \aref{fig:ptcs}. \aref{eq:ptc} was then fitted to the data, and the resulting values for the gain ($g$), read-out noise ($R$) and fixed-pattern noise ($k_\text{FP}$) parameters are given in \aref{tab:ptc}. The saturation level for each channel was also measured as the maximum signal (the point where the PTC turns over and the noise drops); these values are also given in \aref{tab:ptc}.
The gain values are all around 0.6 \ensuremath{\textup{e}^-}/ADU, and would have been set as such to maximise the dynamic range based on the full well capacity (65535\,$\times$\,0.6\,$\approx$\,40,000 \ensuremath{\textup{e}^-}). The saturation levels are all around 64550 ADU, but note these images were bias-subtracted and \aref{sec:bias} found the bias levels were around 980--1000 ADU.\@ The read-out noise values match the FLI specification of 12 \ensuremath{\textup{e}^-}{} for the MicroLine cameras, and also match the errors found in the master biases. Finally, the fixed-pattern noise is a very small fraction of the signal ($<0.5\%$) which suggests a low pixel non-uniformity.
\newpage
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_1.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_2.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_3.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_4.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_5.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_6.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_7.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/ptc_8.png}
\end{minipage}
\end{center}
\caption[Photon transfer curve plots]{
Photon transfer curve plots for each camera. The vertical dashed line shows the measured saturation level.
}\label{fig:ptcs}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{Dark current}
\label{sec:dc}
\begin{colsection}
Dark current noise is independent of the incoming signal but depends on the exposure time of the image. It also increases as a function of temperature, $T$. The dark current per second, $D$, increases at an exponential rate and is usually parametrised as doubling after a fixed increase in temperature called the doubling temperature, $T_d$, so that if the dark current $D(T_0) = D_0$ then $D(T_0 + T_d) = 2D_0$. The dark current as a function of temperature is therefore defined as
\begin{equation}
D(T) = D_0 e^{\frac{\ln2}{T_d}(T - T_0)}.
\label{eq:dc}
\end{equation}
The choice of $T_0$ is arbitrary, and is usually decided as a reasonable operating temperature by the CCD manufacturer. The FLI specifications give a value for the typical dark current at \SI{-25}{\celsius}, so that is the value of $T_0$ used in this test.
In order to find values for the dark current $D_0$ and doubling temperature $T_d$, a series of long (30 minute) dark exposures were taken with each camera at varying temperatures. The MicroLine cameras have an in-built air-cooled Peltier cooler which can reach \SI{40}{\celsius} below the ambient temperature. The laboratory the tests were carried out in was air-conditioned, but only to a typical office level, and the cameras were unable to reach below \SI{-26}{\celsius} even when taking images in the middle of the night. The median dark signal was then measured in a 2000$\times$2000 pixel region in the centre of each channel, and divided by 1800 (as each exposure was 30 minutes) to get the dark current in ADU/second. This value was plotted against temperature, as shown in \aref{fig:dcs}. The points were fitted by \aref{eq:dc}, and the resulting values for the dark current and doubling temperature are given in \aref{tab:dc}.
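Because \aref{eq:dc} becomes linear in $T$ after taking logarithms, $D_0$ and $T_d$ can be recovered with a straight-line fit. A minimal NumPy sketch (the synthetic values in the test are illustrative, not the measured data):

```python
import numpy as np

def fit_dark_current(temps, dark_rates, t0=-25.0):
    """Fit D(T) = D0 * exp(ln2/Td * (T - T0)) via a straight-line fit to
    ln(D) against T, returning D0 (the rate at T0) and the doubling
    temperature Td."""
    # ln D = [ln D0 - (ln2/Td)*T0] + (ln2/Td)*T, so the slope is ln2/Td
    slope, intercept = np.polyfit(temps, np.log(dark_rates), 1)
    td = np.log(2) / slope
    d0 = np.exp(intercept + slope * t0)
    return d0, td
```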
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc|cc|rr} %
&
\multicolumn{4}{c|}{Dark current per pixel} &
\multicolumn{2}{c}{Doubling} \\
&
\multicolumn{4}{c|}{at \SI{-25}{\celsius}} &
\multicolumn{2}{c}{temperature} \\
&
\multicolumn{2}{c|}{(ADU/s)} &
\multicolumn{2}{c|}{(\ensuremath{\textup{e}^-}/s)} &
\multicolumn{2}{c}{(\SI{}{\celsius})} \\
& L & R & L & R &
\multicolumn{1}{c}{L} & \multicolumn{1}{c}{R} \\
\midrule
Camera 1 & 0.0022 & 0.0017 & 0.0012 & 0.0009 & 7.9 & 6.7 \\
Camera 2 & 0.0030 & 0.0027 & 0.0016 & 0.0014 & 8.9 & 8.2 \\
Camera 3 & 0.0034 & 0.0036 & 0.0019 & 0.0020 & 10.7 & 10.9 \\
Camera 4 & 0.0026 & 0.0030 & 0.0015 & 0.0017 & 9.5 & 10.2 \\
Camera 5 & 0.0015 & 0.0017 & 0.0009 & 0.0011 & 6.6 & 7.2 \\
Camera 6 & 0.0020 & 0.0017 & 0.0013 & 0.0011 & 7.5 & 6.8 \\
Camera 7 & 0.0017 & 0.0014 & 0.0011 & 0.0008 & 7.6 & 6.5 \\
Camera 8 & 0.0019 & 0.0015 & 0.0012 & 0.0009 & 7.5 & 6.5 \\
\end{tabular}
\end{center}
\caption[Dark current values]{
Dark current values for each camera. The conversion from ADU/s to \ensuremath{\textup{e}^-}/s used the gain values given in \aref{tab:ptc}.
}\label{tab:dc}
\end{table}
The FLI specification for dark current changed between the two test periods: initially the company gave a typical per-pixel value of 0.002~\ensuremath{\textup{e}^-}/s at \SI{-25}{\celsius}, but for the second set of cameras this was increased to 0.008~\ensuremath{\textup{e}^-}/s. All the cameras were found to have a dark current well within the revised specification value, and all except Camera 3 are comfortably below the original 0.002~\ensuremath{\textup{e}^-}/s specification.
The KAF-50100 specification includes a value for the doubling temperature of \SI{5.7}{\celsius} but the measured values are all higher than this. In practice, the temperature dependence of the dark current is not important; the GOTO cameras are cooled to \SI{-20}{\celsius} in the evening and remain there through the night (\SI{-20}{\celsius} is used instead of \SI{-25}{\celsius} as during the summer on La Palma the ambient nightly temperature can reach higher than \SI{15}{\celsius}).
The dark current was also examined as a function of time since power on, as in some cameras there is a noticeable number of free electrons left trapped in the lattice which take time to dissipate \citep{Liam}. No such trend was visible using the FLI cameras. Since the MicroLine cameras have the detector and cooler integrated into the same body, there has to be some time spent waiting after power on for the camera to cool to the target temperature before any images can be taken, thus negating the effect.
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_1.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_2.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_3.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_4.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_5.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_6.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_7.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/dc_8.png}
\end{minipage}
\end{center}
\caption[Dark current plots]{
Dark current plots for each camera.
}\label{fig:dcs}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{Linearity}
\label{sec:lin}
\begin{colsection}
Linearity is a measure of the response of the CCD over its dynamic range. The output counts should ideally be linearly related to the input photons, i.e.\@ if the target doubles in brightness then double the counts should be recorded.
The non-linearity of each camera was measured using the same images taken for the photon transfer curves in \aref{sec:ptc} --- bright images of a flat field with increasing exposure times. The images were bias-subtracted, and the median counts of a 2000$\times$2000 pixel region in the centre of each channel were plotted against the exposure time, shown in \aref{fig:lin}. A linear relation was fitted to the central portion of the data, excluding the upper and lower 10\% of the dynamic range. Residuals from this fit are also plotted in \aref{fig:lin}, and the mean absolute deviation from the linear fit is given in \aref{tab:lin}.
The measured non-linearity values vary greatly between the cameras, and several are over 1\%. If these values were true this would be a major problem when making accurate photometric measurements. However, the FLI specification advertises a non-linearity of <1\%, and FLI's own tests of the cameras consistently report non-linearity of 0.2\% or less. Accurately measuring the response of a CCD requires a stable, uniform light source, which I had to approximate with an LCD screen as described in \aref{sec:camera_tests}. A better test would be to vary the screen brightness instead of the exposure time, which would prevent systematic effects such as the shutter closing time affecting the results.
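The procedure above amounts to a clipped least-squares fit followed by a residual statistic. A minimal sketch follows; the exact clipping fractions and the choice of the mean absolute relative residual as the statistic are assumptions about the procedure:

```python
def non_linearity(exp_times, counts, full_well, clip=0.10):
    """Fit counts = a*t + b over the central part of the dynamic range and
    return the mean absolute residual as a percentage of the fitted value."""
    # Keep only points between 10% and 90% of the dynamic range.
    pairs = [(t, c) for t, c in zip(exp_times, counts)
             if clip * full_well < c < (1 - clip) * full_well]
    n = len(pairs)
    st = sum(t for t, _ in pairs)
    sc = sum(c for _, c in pairs)
    stt = sum(t * t for t, _ in pairs)
    stc = sum(t * c for t, c in pairs)
    # Ordinary least-squares slope and intercept.
    a = (n * stc - st * sc) / (n * stt - st * st)
    b = (sc - a * st) / n
    resid = [abs(c - (a * t + b)) / (a * t + b) for t, c in pairs]
    return 100.0 * sum(resid) / len(resid)
```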
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc} %
& \multicolumn{2}{c}{Non-linearity} \\
& \multicolumn{2}{c}{(\%)} \\
& \multicolumn{1}{c}{L} & \multicolumn{1}{c}{R} \\
\midrule
Camera 1 & 2.29 & 2.00 \\
Camera 2 & 0.76 & 0.65 \\
Camera 3 & 0.34 & 0.39 \\
Camera 4 & 0.18 & 0.22 \\
\end{tabular}
\hspace{0.5cm}
\begin{tabular}{c|cc} %
& \multicolumn{2}{c}{Non-linearity} \\
& \multicolumn{2}{c}{(\%)} \\
& \multicolumn{1}{c}{L} & \multicolumn{1}{c}{R} \\
\midrule
Camera 5 & 1.25 & 1.20 \\
Camera 6 & 1.20 & 1.13 \\
Camera 7 & 0.70 & 0.68 \\
Camera 8 & 0.82 & 0.80 \\
\end{tabular}
\end{center}
\caption[Non-linearity values]{
Non-linearity values for each camera.
}\label{tab:lin}
\end{table}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_1.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_2.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_3.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_4.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_5.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_6.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_7.png}
\end{minipage}
\begin{minipage}[t]{0.47\linewidth}\vspace{10pt}
\includegraphics[width=\linewidth]{images/detectors/lin_8.png}
\end{minipage}
\end{center}
\caption[Linearity plots]{
Linearity plots for each camera. The horizontal dashed line in the top panel shows the saturation level from \aref{tab:ptc}, and the dashed lines in the lower panels show the target $\pm$1\% non-linearity range.
}\label{fig:lin}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{Defects}
\label{sec:defects}
\begin{colsection}
There are several possible defects in CCD sensors \citep{CCDs}: hot pixels, which have atypically high dark currents; dead pixels, which produce low or zero counts; and trap pixels, which ``trap'' electrons and prevent readout of that pixel and any pixels above it in the column. It is important to identify any bad pixels so that the GOTOphoto pipeline (see \aref{sec:gotophoto}) can mask them when reducing the images from each camera.
Single hot or dead pixels can be removed to some extent by subtracting dark frames and flat fielding. Trap pixels are more of an issue, as they can potentially take out a large fraction of a column. For each camera, a defect mask was made by taking the ratio of two flat field images with different exposure times, making any bad columns easy to pick out by comparing to the surrounding pixels. An example of a bad column caused by a trap pixel is shown in \aref{fig:itsatrap}. The positions of bad columns for each camera are given in \aref{tab:traps}. The KAF-50100 chip specification gives an allowed limit of less than 20 column defects per device, which the GOTO cameras are well within.
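The mask construction can be sketched as below, on toy row-major image lists rather than full frames. The bad columns were in practice picked out by comparing to the surrounding pixels by eye; the automated median comparison and the 5\% threshold here are assumptions for illustration:

```python
def find_bad_columns(flat_short, flat_long, threshold=0.05):
    """Flag columns whose mean ratio between two flats of different
    exposure times deviates from the detector-wide median column ratio
    by more than `threshold` (fractional). Images are lists of rows."""
    nrows, ncols = len(flat_short), len(flat_short[0])
    col_ratio = []
    for x in range(ncols):
        r = [flat_long[y][x] / flat_short[y][x] for y in range(nrows)]
        col_ratio.append(sum(r) / nrows)
    med = sorted(col_ratio)[ncols // 2]
    return [x for x, r in enumerate(col_ratio)
            if abs(r - med) / med > threshold]
```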
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc} %
& \multicolumn{2}{c}{Trap location} & Height \\
& x & y & \% \\
\midrule
Camera 1 & 7751 & 4361 & 30 \\
Camera 2 & 1658 & ~172 & 97 \\ %
Camera 3 & 1224 & 1844 & 70 \\
& 5058 & 5185 & 17 \\
Camera 4 & 5406 & 2607 & 58 \\
Camera 5 & 6293 & 1416 & 77 \\
Camera 6 & 5455 & 5036 & 19 \\
\end{tabular}
\hspace{0.5cm}
\begin{tabular}{c|ccc} %
& \multicolumn{2}{c}{Trap location} & Height \\
& x & y & \% \\
\midrule
Camera 7 & 1344 & 3037 & 51 \\
& 2326 & 2495 & 60 \\
& 2610 & 5688 & ~9 \\ %
& 7491 & 5120 & 18 \\
Camera 8 & 1184 & 3043 & 51 \\
& 5659 & 2778 & 55 \\
\multicolumn{4}{c}{} \\
\end{tabular}
\end{center}
\caption[Locations of bad columns]{
Locations and extent (as a percentage of the total column height) of bad columns for each camera.
}\label{tab:traps}
\end{table}
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/detectors/defect_plot.pdf}
\end{center}
\caption[An example of a column defect]{
A flat field for Camera 1 showing a bad column. The location of the trap pixel is magnified, showing that the pixels in the column above the trap have been prevented from being read out. The bad column is also clearly visible in the plot below, which shows the average counts in each column.
}\label{fig:itsatrap}
\end{figure}
\clearpage
\end{colsection}
\section{System throughput}
\label{sec:throughput}
\begin{colsection}
Unfortunately, not every photon emitted by a target object will be recorded by a telescope: photons will be lost due to absorption and scattering within the telescope's optics and camera, as well as in the Earth's atmosphere (for ground-based telescopes). Understanding each of these factors is required in order to produce a complete throughput model, which can then be compared to the real system (in \aref{sec:onsky_comparison}) to see if the hardware is performing as expected.
\end{colsection}
\subsection{Optical elements}
\label{sec:optics}
\begin{colsection}
As described in \aref{sec:goto_design}, the GOTO unit telescopes are Wynne-Newtonian astrographs: fast (f/2.5) Newtonian telescopes with a \SI{40}{\centi\meter} primary mirror, a flat elliptical secondary (\SI{19}{\centi\metre} short axis) and a Wynne corrector between the secondary and the camera. A drawing of the \glsfirst{ota} is shown in \aref{fig:ota}, and the five elements the light must pass through (the three corrector lenses, the filter in the filter wheel and the window in front of the detector) are shown in \aref{fig:wynne}. In order to model the throughput each element needed to be considered in turn.
\subsubsection{Mirrors}
The GOTO mirrors were manufactured by Orion Optics\footnote{\url{https://www.orionoptics.co.uk}}. Orion used their own ``HiLux'' high reflectivity aluminium coating, and while individual reflectance curves were not available for the GOTO mirrors at the time this work was carried out, Orion does have a representative curve on their website (shown in \aref{fig:trans_ota}). As there are two mirrors this curve will be included twice in the final throughput model. The difference in the angle of incidence of light on the two mirrors is accounted for in the coating applied to each mirror, so the reflectivity curves of the two are assumed to be identical.
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/throughput/OTA_optics.png}
\end{center}
\caption[GOTO optical telescope assembly]{
The \glsfirst{ota} design for one of the GOTO unit telescopes. Light enters from the left, and relevant elements have been highlighted: the primary mirror in \textcolorbf{Green}{green}, the secondary mirror in \textcolorbf{BlueGreen}{blue}, the Wynne corrector in \textcolorbf{Red}{red} and the FLI camera hardware in \textcolorbf{Purple}{purple}.
}\label{fig:ota}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/throughput/wynne.pdf}
\end{center}
\caption[Ray tracing the corrector elements]{
A ray trace through the optical elements after the primary and secondary mirrors. From left-to-right light passes through the three Wynne corrector lenses, the filter, and the camera window before reaching the detector located in the focal plane.
}\label{fig:wynne}
\end{figure}
\clearpage
\subsubsection{Lenses}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccccc|c} %
& & & \multicolumn{2}{c}{Radius of curvature} & On-axis & Glass \\
Lens & Shape & Diameter & Exterior & Interior & thickness & type \\
\midrule
1 & Meniscus & \SI{120}{\milli\metre} & \SI{89}{\milli\metre} & \SI{86}{\milli\metre} & \SI{12}{\milli\metre} & H-K9L \\
2 & Meniscus & \SI{90}{\milli\metre} & \SI{278}{\milli\metre} & \SI{71}{\milli\metre} & \SI{5}{\milli\metre} & H-K9L \\
3 & Biconvex & \SI{90}{\milli\metre} & \SI{77}{\milli\metre} & \SI{378}{\milli\metre} & \SI{20}{\milli\metre} & S-FPL53 \\
\end{tabular}
\end{center}
\caption[Wynne corrector lens properties]{
Properties of the three Wynne corrector lenses.
}\label{tab:lenses}
\end{table}
Each Wynne corrector contains three lenses, as shown in \aref{fig:wynne} --- the details of each lens are given in \aref{tab:lenses}. No complete transmission data was available, so a model throughput curve had to be created.
For each lens, the reflectivity of the front and rear surfaces and the internal transmittance of the glass need to be considered. Each surface is coated with an anti-reflection coating, the profile of which was included in the GOTO optical report. Transmittance curves for each lens were not available, but the glass types were included in the report and are given in \aref{tab:lenses}. Transmittance data provided by the glass manufacturers were retrieved from the online Refractive Index Database\footnote{\url{https://refractiveindex.info}}. For simplicity, each lens was modelled as having a constant thickness, using their on-axis thickness. As shown in \aref{fig:wynne}, this is a good approximation for lens 1 but will underestimate the absorption within lens 2 and overestimate the absorption within lens 3.
Throughput curves for the anti-reflection coatings and the glass for the three lenses are shown in \aref{fig:trans_lenses} along with the total throughput of the corrector, found by multiplying the contributions from the glass transmission and the coating on both surfaces of each lens:
\begin{equation}
T_\text{corrector} = T_\text{Lens1} \times
T_\text{Lens2} \times
T_\text{Lens3} \times
{(T_\text{coating})}^6.
\label{eq:corrector}
\end{equation}
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/trans_lenses.png}
\end{center}
\caption[Wynne corrector transmission curve]{
Transmission curve for the Wynne corrector (the \textcolorbf{Orange}{orange} solid line), comprised of the glass throughput of each lens (\textcolorbf{Red}{red}, \textcolorbf{Green}{green} and \textcolorbf{Purple}{purple} dashed lines) and an anti-reflection (AR) coating on all six surfaces (\textcolorbf{NavyBlue}{blue} dotted line).
}\label{fig:trans_lenses}
\end{figure}
\subsubsection{Filters}
The filter transmittance is included in their bandpass profiles, described below in \aref{sec:filters}. At this stage we will consider the OTA with no filter, so as to produce an unfiltered OTA transmission curve which can then be multiplied by the chosen filter bandpass in \aref{sec:total_throughput}.
\subsubsection{Camera window}
Finally, before reaching the detector, light must pass through a glass window in the camera which protects the CCD sensor. The window is made of F116 glass, and a transmission profile was provided by FLI.\@ This is shown in \aref{fig:trans_ota}.
\newpage
\subsubsection{Combined OTA throughput}
The combined throughput for the whole unfiltered OTA is shown in \aref{fig:trans_ota}. This was constructed by multiplying through the transmission curves for the two mirrors, the corrector (from \aref{eq:corrector}) and the camera window:
\begin{equation}
T_\text{OTA} = {(T_\text{mirror})}^2 \times T_\text{corrector} \times T_\text{window}.
\label{eq:ota}
\end{equation}
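Evaluated at a single wavelength, \aref{eq:corrector} and \aref{eq:ota} reduce to simple products; a sketch with illustrative per-element values (not taken from the measured curves):

```python
def corrector_throughput(t_lens1, t_lens2, t_lens3, t_coating):
    # T_corrector = T_Lens1 * T_Lens2 * T_Lens3 * T_coating^6
    # (three glasses, six anti-reflection-coated surfaces)
    return t_lens1 * t_lens2 * t_lens3 * t_coating ** 6

def ota_throughput(t_mirror, t_corrector, t_window):
    # T_OTA = T_mirror^2 * T_corrector * T_window (two mirror reflections)
    return t_mirror ** 2 * t_corrector * t_window

# Illustrative single-wavelength values only:
t_corr = corrector_throughput(0.98, 0.98, 0.97, 0.995)
t_ota = ota_throughput(0.95, t_corr, 0.97)
```

In the full model these products are evaluated wavelength by wavelength over the measured curves.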
In the 4000--\SI{7000}{\angstrom} visible region used by GOTO the throughput is typically 60\% or above, although all the elements have a sharp cut-off towards the blue.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/trans_ota.png}
\end{center}
\caption[Combined OTA transmission curve]{
Transmission curve for the unfiltered OTA (the \textcolorbf{Red}{red} solid line), which includes the two mirrors (\textcolorbf{NavyBlue}{blue} dashed line), the combination of all three corrector lenses (\textcolorbf{Orange}{orange} dashed line, from \aref{fig:trans_lenses}) and the camera window (\textcolorbf{Green}{green} dashed line).
}\label{fig:trans_ota}
\end{figure}
\newpage
\end{colsection}
\subsection{Filters}
\label{sec:filters}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/trans_filters.png}
\end{center}
\caption[Baader filter transmission curves]{
Transmission curves for the Baader \textit{LRGBC} filter set used by GOTO.\
}\label{fig:filters}
\end{figure}
Each GOTO unit telescope has a five-slot filter wheel containing a set of \SI{65}{\milli\metre} square filters from Baader Planetarium\footnote{\url{https://www.baader-planetarium.com}}: three coloured filters (\textit{R}, \textit{G}, \textit{B}), one wide ``luminance'' filter (\textit{L}) covering the whole visible range, and a clear glass filter ({\textit{C}}). Transmission curves for each filter are shown in \aref{fig:filters}. Each filter has a high throughput and steep cut-offs outside of the desired bandpasses. For the coloured filters the cut-offs were chosen so the [O\textsc{iii}] $\lambda 5007$ emission line falls within the overlap of the \textit{B} and \textit{G} filters and the region around \SI{5800}{\angstrom}, which contains emission lines from Mercury and Sodium vapour lamps, is excluded by the gap between the \textit{G} and \textit{R} filters.
Most GOTO observations are taken using the \textit{L} filter, however the \textit{RGB} filters have been used for manual follow-up observations. The clear filter is never used for scientific observations, so it is not considered as part of the throughput model going forward.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc} %
& Effective wavelength & Effective bandwidth\\
Filter & ($\lambda_\text{eff}$, \SI{}{\angstrom}) & ($\Delta\lambda$, \SI{}{\angstrom}) \\
\midrule
Baader \textit{L} & 5355 & 2942 \\
Baader \textit{R} & 6573 & 979 \\
Baader \textit{G} & 5373 & 813 \\
Baader \textit{B} & 4509 & 1188 \\
\end{tabular}
\end{center}
\caption[Baader filter properties]{
Properties of the Baader \textit{LRGB} filters.
}\label{tab:filters}
\end{table}
Properties of the \textit{LRGB} filters are given in \aref{tab:filters}. The effective wavelength ($\lambda_\text{eff}$) is the pivot wavelength as defined in \citet{HST_calibration} for HST filters:
\begin{equation}
\lambda_\text{eff}^2 = \frac{\int T\lambda~d\lambda}{\int T/\lambda~d\lambda},
\label{eq:pivot_wavelength}
\end{equation}
where $T$ is the filter transmission and the integrals are taken over all wavelengths $\lambda$. The effective bandwidth ($\Delta\lambda$) is found by calculating the equivalent width, the width of a rectangle that has a height equal to the maximum transmission (unity) and the same area as the area under the filter transmission curve, i.e.
\begin{equation}
\Delta\lambda = \int T~d\lambda.
\label{eq:bandwidth}
\end{equation}
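Both quantities are integrals over the measured transmission curve and can be evaluated numerically; a sketch using trapezoidal integration (the sampling here is illustrative):

```python
import math

def trapz(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
               for i in range(len(x) - 1))

def pivot_wavelength(wave, trans):
    # lambda_eff^2 = int(T * lambda dlambda) / int(T / lambda dlambda)
    num = trapz([t * w for t, w in zip(trans, wave)], wave)
    den = trapz([t / w for t, w in zip(trans, wave)], wave)
    return math.sqrt(num / den)

def equivalent_width(wave, trans):
    # Delta_lambda = int(T dlambda)
    return trapz(trans, wave)
```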
The Baader filters were designed for amateur astronomers and astro-photographers, and are less commonly used by professional instruments than other sets, such as the \textit{u'g'r'i'z'} set used by the Sloan Digital Sky Survey \glsadd{sdss} \citep{Sloan_filters} or the traditional Johnson-Cousins \textit{UBVRI} set redefined by \citet{Bessell_filters}. GOTO primarily uses Baader filters to reduce costs, as each unit telescope requires a full set. A comparison of the Baader \textit{LRGB} transmission curves with the Sloan set is shown in \aref{fig:filter_comparison1}, and with the Bessell set in \aref{fig:filter_comparison2}. The Baader \textit{L} filter approximately covers the Sloan \textit{g'} and \textit{r'} filters, the \textit{B} and \textit{G} filters cover \textit{g'}, and \textit{R} roughly matches \textit{r'}. Colour terms to compare GOTO \textit{RGB} observations with Sloan \textit{g'} and \textit{r'} observations were calculated by \citet{Phaethon}.
\newpage
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/filt_comp1.png}
\end{center}
\caption[Comparison of Baader and Sloan filters]{
A comparison of the Baader \textit{LRGB} filters to the Sloan \textit{u'g'r'i'z'} set.
}\label{fig:filter_comparison1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/filt_comp2.png}
\end{center}
\caption[Comparison of Baader and Bessell filters]{
A comparison of the Baader \textit{LRGB} filters to the Bessell \textit{UBVRI} set.
}\label{fig:filter_comparison2}
\end{figure}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\newpage
\end{colsection}
\subsection{Quantum efficiency}
\label{sec:qe}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/qe.png}
\end{center}
\caption[CCD quantum efficiency curve]{
QE curve for the KAF-50100 CCDs, both with and without microlensing.
}\label{fig:qe}
\end{figure}
After passing through the telescope photons are focused onto the CCD, where they interact with the photosensitive layer and produce electrons which are recorded by the detector \citep{CCDs}. The conversion from photons to electrons is the \glsfirst{qe} of the CCD, and is dependent on wavelength: short-wavelength photons will be absorbed before reaching the photosensitive layer, while long-wavelength photons will not have enough energy to create free electrons in the silicon. CCDs that are back-side illuminated have improved blue QE due to the photons not having to pass through the electrode layer and therefore having less chance of being absorbed, however these are more complicated and expensive to build. The QE can also change with temperature in the near IR, but this is negligible in the optical. The QE curve for the KAF-50100 CCDs is shown in \aref{fig:qe}. As described in \aref{sec:chip_layout}, these CCDs are front-illuminated, and include a microlens array in front of the sensor to improve the QE.\@
\end{colsection}
\subsection{Total throughput}
\label{sec:total_throughput}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/throughput.png}
\end{center}
\caption[Complete throughput model for the GOTO filters]{
The complete GOTO throughput model. The model elements (dashed lines) are the combined throughput of the OTA elements (from \aref{fig:trans_ota}) and the quantum efficiency of the microlensed CCD (from \aref{fig:qe}); and the Baader \textit{LRGB} filter bandpasses (from \aref{fig:filters}) are shown by the coloured dotted lines. The filled lines show the total throughput in each filter when the model is applied.
}\label{fig:throughput}
\end{figure}
The complete GOTO throughput is a combination of all of the elements discussed in the previous sections. Each source profile was linearly interpolated to the same wavelength range (3500--\SI{8500}{\angstrom}) and multiplied together to produce the total GOTO throughput model, shown in \aref{fig:throughput}. Since the quantum efficiency has been included, the total throughput describes the conversion from photons to electrons detected in the CCD, and using the gain values given in \aref{tab:ptc} the full conversion between photons and output counts can be made (this does not include photons lost to extinction in the atmosphere, see \aref{sec:atmosphere}). The mean throughput in each filter can be found by dividing the filled areas in \aref{fig:throughput} by the area of the filter bandpass, and is given in electrons per photon in \aref{tab:throughput_extinction}.
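The interpolate-and-multiply step can be sketched as follows. This is a stdlib-only stand-in for what would normally be done with \texttt{numpy.interp}; profiles are (wavelength, transmission) sample pairs:

```python
def interp(x, xp, fp):
    """Piecewise-linear interpolation of (xp, fp) at x, clamped at the ends."""
    if x <= xp[0]:
        return fp[0]
    if x >= xp[-1]:
        return fp[-1]
    for i in range(len(xp) - 1):
        if xp[i] <= x <= xp[i + 1]:
            frac = (x - xp[i]) / (xp[i + 1] - xp[i])
            return fp[i] + frac * (fp[i + 1] - fp[i])

def total_throughput(grid, profiles):
    """Resample each (wave, trans) profile onto a common wavelength grid
    and multiply them together."""
    total = [1.0] * len(grid)
    for xp, fp in profiles:
        for i, w in enumerate(grid):
            total[i] *= interp(w, xp, fp)
    return total
```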
\newpage
\end{colsection}
\subsection{Atmospheric extinction}
\label{sec:atmosphere}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/ext2.png}
\end{center}
\caption[Atmospheric extinction in the GOTO filters]{
Atmospheric extinction in the GOTO filters, in magnitude per airmass. The measured extinction curve from \citet{tn31} is shown by the black dashed line, and the filled lines show the extinction curve multiplied by the filter bandpasses from \aref{fig:filters} (shown by the coloured dotted lines).
}\label{fig:extinction}
\end{figure}
In order to model the entire light path from an astronomical source through to the CCD detector, the absorption of light by the Earth's atmosphere must also be considered. The atmosphere is close to transparent over most of the visible region; however, losses due to Rayleigh scattering begin to dominate closer to the UV \citep{atmosphere}.
The amount of light lost due to absorption and scattering in the atmosphere will depend on the altitude of the source, as light from sources closer to the horizon will pass through a thicker layer of atmosphere. The extinction of the atmosphere above La Palma has been measured in terms of magnitude per source airmass by \citet{tn31}, and is shown in \aref{fig:extinction}.
\newpage
Atmospheric extinction can be treated like the throughput elements considered in \aref{sec:total_throughput}, except it is not included in the throughput model as it is not a function of the GOTO hardware. For the \textit{LRGB} filters, the mean extinction can be found by multiplying the extinction curve by the filter bandpasses, as shown in \aref{fig:extinction}, and then dividing the filled areas under each curve by the area of the unmodified filter bandpass. These extinction coefficients are given in magnitudes per airmass in \aref{tab:throughput_extinction}.
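The bandpass-averaged extinction described above is a transmission-weighted mean of the extinction curve; a sketch (the three-point sampling in the test is illustrative):

```python
def trapz(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
               for i in range(len(x) - 1))

def mean_extinction(wave, ext, trans):
    """Bandpass-averaged extinction coefficient (mag/airmass): the area
    under the extinction curve weighted by the filter transmission,
    divided by the area of the unmodified bandpass."""
    weighted = [k * t for k, t in zip(ext, trans)]
    return trapz(weighted, wave) / trapz(trans, wave)
```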
Another atmospheric factor unique to observing from La Palma is the \textit{calima}, large quantities of dust from the Sahara Desert which can be carried over the site by easterly winds. The calima occurs most often in the summer, and analysis of dust-affected images has shown that the additional extinction is not wavelength dependent \citep{ORM_dust}. The extinction curve measured by \citet{tn31} was based on observations taken on dust-free nights, and the values in \aref{tab:throughput_extinction} do not include any additional extinction to model the effects of the calima. Based on analysing archival images over 20 years, \citet{ORM_dust} suggests heavy calima could increase the extinction by up to 0.04 mag/airmass.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc} %
& Throughput & Extinction \\
Filter & (\ensuremath{\textup{e}^-}/photon) & (mag/airmass) \\
\midrule
Baader \textit{L} & 0.43 & 0.13 \\
Baader \textit{R} & 0.46 & 0.07 \\
Baader \textit{G} & 0.51 & 0.11 \\
Baader \textit{B} & 0.36 & 0.20 \\
\end{tabular}
\end{center}
\caption[Theoretical throughput and extinction coefficients for the GOTO filters]{
Theoretical throughputs and extinction coefficients for the GOTO filters.
}\label{tab:throughput_extinction}
\end{table}
\end{colsection}
\section{Photometric modelling}
\label{sec:photometry}
\begin{colsection}
Using the throughput model created in \aref{sec:throughput} it was possible to simulate photometric observations with GOTO before the telescope was commissioned. This section applies the theoretical throughput model to two important photometric properties: the magnitude zeropoint (the correction required to convert between instrumental and calibrated magnitude values) and the limiting magnitude (the faintest magnitude a source can be to still produce a detectable signal above a given noise threshold). These theoretical values are then compared to values calculated from real GOTO observations, in order to check that the hardware is performing to specification.
\end{colsection}
\subsection{Magnitude zeropoints}
\label{sec:zeropoints}
\begin{colsection}
The flux of a source, $F$, is related to its magnitude, $m$, by
\begin{equation}
m = -2.5 \log_{10}(F).
\label{eq:apparent_magnitude}
\end{equation}
In practice, magnitudes are usually measured relative to a reference star using
\begin{equation}
m - m_\text{ref} = -2.5 \log_{10}\left(\frac{F}{F_\text{ref}}\right),
\label{eq:magnitude_ref}
\end{equation}
which requires a reference star of known magnitude $m_\text{ref}$ and flux $F_\text{ref}$. Traditionally Vega is used as a reference star as it has a magnitude very close to 0.
The instrumental magnitude measured from an image is related to the number of photo-electrons recorded, $N$, using the same magnitude definition
\begin{equation}
\begin{split}
m_\text{ins} & = -2.5 \log_{10}(N/t),
\end{split}
\label{eq:ins_mag}
\end{equation}
where $t$ is the exposure time. The number of photo-electrons recorded per second $N/t$ from a given source should be proportional to the source flux $F$ (assuming the camera has a low non-linearity, see \aref{sec:lin}). Relating the two through a constant $\kappa$, \aref{eq:ins_mag} becomes
\begin{equation}
\begin{split}
m_\text{ins} & = -2.5 \log_{10}\left(\kappa F\right) \\
& = -2.5 \log_{10}\left(F\right) - m_\text{ZP} \\
& = m - m_\text{ZP},
\end{split}
\label{eq:ins_mag2}
\end{equation}
where the constant $m_\text{ZP}$ is defined as the instrumental \emph{zeropoint}.
The zeropoint is so called because observing an object with a true magnitude equal to the zeropoint ($m = m_\text{ZP}$) will produce an instrumental magnitude of 0, which corresponds to one electron per second on the detector. The zeropoint is usually defined based on the electron rate that would be measured above the atmosphere, which allows zeropoints to be compared between telescopes (i.e.\ not including an atmospheric profile as discussed in \aref{sec:atmosphere}). Each telescope and filter combination will have a unique zeropoint, and once determined it can be used to convert instrumental magnitudes measured using that telescope to a calibrated magnitude using
\begin{equation}
m = m_\text{ins} + m_\text{ZP}.
\label{eq:zp}
\end{equation}
Therefore, were it possible to observe a star with $m=0$ (without saturating the detector) the zeropoint could be calculated as
\begin{equation}
\begin{split}
m_\text{ZP} & = 0 - m_\text{ins} \\
& = 2.5 \log_{10}(N/t).
\end{split}
\label{eq:zp2}
\end{equation}
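\aref{eq:zp} and \aref{eq:zp2} translate directly into code; a minimal sketch (the $1.26\times10^{9}$~\ensuremath{\textup{e}^-}/s example rate is the \textit{L}-band prediction from \aref{tab:zeropoints}):

```python
import math

def zeropoint_from_rate(electrons_per_second):
    # m_ZP = 2.5 log10(N/t), the rate from a (hypothetical) m = 0 source.
    return 2.5 * math.log10(electrons_per_second)

def calibrated_magnitude(m_ins, m_zp):
    # m = m_ins + m_ZP
    return m_ins + m_zp
```

Note that a rate of one electron per second gives a zeropoint of exactly 0, matching the definition above.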
\newpage
\end{colsection}
\subsection{Calculating theoretical zeropoints}
\label{sec:model_zeropoints}
\begin{colsection}
Consider taking an observation of a zero magnitude star, such as Vega. From \aref{eq:zp}, the instrumental magnitude will be equal to the negative zeropoint. In the AB magnitude system a zero magnitude star has a fixed flux density $F_\nu = $ \SI{3631}{\jansky} \citep{Sloan_filters}. Therefore, passing this flux through the throughput model for each filter created in \aref{sec:throughput} will produce a predicted signal in photo-electrons, which can be used to calculate a theoretical zeropoint.
\subsubsection{Estimating predicted counts from a 0 mag star}
First, the zero-magnitude flux density needs to be converted into a flux in photons. \SI{3631}{\jansky} is equal to \SI{3.631e-20}{\erg\per\second\per\centi\metre\squared\per\hertz}. To convert from $F_\nu$ to $F_\lambda$ this needs to be multiplied by a factor of $c/\lambda_\text{eff}^2$, where $c$ is the speed of light and $\lambda_\text{eff}$ is the wavelength of the photon, here taken to be the effective wavelength of the filter in question\footnote{The $c/\lambda_\text{eff}^2$ conversion factor comes from differentiating the relationship $\nu = c/\lambda$.}. This gives a flux in \si{\erg\per\second\per\centi\metre\squared\per\angstrom}, but to convert to a photon count it needs to be divided by the energy of each photon, given by
\begin{equation}
\begin{split}
E_\lambda = \frac{hc}{\lambda_\text{eff}},
\end{split}
\label{eq:photon_energy}
\end{equation}
where $h$ is Planck's constant. Again, at this stage it is assumed that all of the photons have the effective wavelength of the filter. Therefore, the expected flux in photons from a 0 magnitude star is given by
\begin{equation}
\begin{split}
F_\lambda = 5.5 \times 10^{6}/\lambda_\text{eff}~\si{\photon\per\second\per\centi\metre\squared\per\angstrom}
\end{split}
\label{eq:zero-mag_photons}
\end{equation}
where $\lambda_\text{eff}$ is given in angstroms.
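The numerical constant in \aref{eq:zero-mag_photons} follows from combining the two steps above, $F_\lambda / E_\lambda = F_\nu / (h\lambda)$; a sketch that recovers it from the physical constants (CGS units):

```python
H_PLANCK = 6.626e-27  # Planck's constant, erg s

def zero_mag_photon_flux(lambda_eff):
    """Photon flux (photon/s/cm^2/A) above the atmosphere from a 0 mag (AB)
    star at effective wavelength lambda_eff (in A): F_nu / (h * lambda)."""
    f_nu = 3.631e-20                       # erg/s/cm^2/Hz (3631 Jy)
    lam_cm = lambda_eff * 1e-8             # A -> cm
    per_cm = f_nu / (H_PLANCK * lam_cm)    # photon/s/cm^2/cm
    return per_cm * 1e-8                   # per cm -> per A
```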
\newpage
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc|c} %
& \multicolumn{2}{c|}{Zero-magnitude star} & \\
Filter & flux & predicted signal & Zeropoint \\
& (photon/s) & (\ensuremath{\textup{e}^-}/s) & (mag) \\
\midrule
Baader \textit{L} & \num{3.41e9} & \num{1.26e+09} & 22.75 \\
Baader \textit{R} & \num{9.24e8} & \num{3.67e+08} & 21.41 \\
Baader \textit{G} & \num{9.38e8} & \num{4.14e+08} & 21.54 \\
Baader \textit{B} & \num{1.63e9} & \num{5.04e+08} & 21.76 \\
\end{tabular}
\end{center}
\caption[Theoretical zeropoints for each of the GOTO filters]{
The flux from a zero-magnitude star in each of the GOTO filters, along with the predicted signal and corresponding theoretical zeropoint found using the throughput model from \aref{sec:throughput}.
}\label{tab:zeropoints}
\end{table}
Multiplying the value in \aref{eq:zero-mag_photons} by the effective filter bandwidth (in \si{\angstrom}) and the collecting area of the telescope (in \si{\centi\metre\squared}) will give the predicted photon flux in the detector. Each of GOTO's unit telescopes has a \SI{40}{\centi\metre} diameter primary mirror, with an area of \SI{1257}{\centi\metre\squared}. However, not all of this is available to collect photons due to the shadow cast by the secondary mirror. The secondary mirror measures \SI{19}{\centi\metre} on its short axis (see \aref{sec:optics}); modelling this as circular gives the GOTO unit telescopes an effective collecting area of \SI{973}{\centi\metre\squared} --- meaning approximately 23\% of light is blocked. Using this area and the filter bandwidths given in \aref{tab:filters}, the theoretical flux in photons per second expected above the atmosphere from a zero-magnitude star can be calculated for each filter. These values are given in \aref{tab:zeropoints}.
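The effective collecting area is the primary disc minus the circular shadow of the secondary:

```python
import math

def effective_area(d_primary_cm, d_secondary_cm):
    """Unobstructed collecting area (cm^2) and blocked fraction, modelling
    the secondary's shadow as a circle on its short axis."""
    a_primary = math.pi * (d_primary_cm / 2) ** 2
    a_secondary = math.pi * (d_secondary_cm / 2) ** 2
    return a_primary - a_secondary, a_secondary / a_primary

# GOTO unit telescope: 40 cm primary, 19 cm secondary short axis.
area, blocked = effective_area(40.0, 19.0)
```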
Finally, multiplying these theoretical fluxes by the mean throughput values for each filter from \aref{tab:throughput_extinction} gives the predicted signal on the detector in photo-electrons per second (again still excluding atmospheric extinction), and using \aref{eq:zp2} gives the theoretical zeropoint. The predicted signals and zeropoints are also given in \aref{tab:zeropoints}.
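The full chain (photon flux, bandwidth, collecting area, zeropoint) can be sketched in a few lines of Python. This is a sketch, not pipeline code: the function names are illustrative, and the zeropoint convention assumed here (the magnitude of a source producing \SI{1}{\electron\per\second}) is consistent with the tabulated values.

```python
import math

def photon_flux(lam_eff, bandwidth, area):
    """Photons/s from a zero-magnitude star: F = 5.5e6 / lam_eff
    photon/s/cm^2/A, times the filter bandwidth (A) and the
    effective collecting area (cm^2)."""
    return 5.5e6 / lam_eff * bandwidth * area

def zeropoint(signal_e_per_s):
    """Theoretical zeropoint: the magnitude of a source producing
    1 e-/s on the detector."""
    return 2.5 * math.log10(signal_e_per_s)

# Baader L predicted signal of 1.26e9 e-/s from tab:zeropoints
zp_L = zeropoint(1.26e9)  # ~22.75 mag
```

Running this for the Baader \textit{L} signal reproduces the 22.75 mag zeropoint in \aref{tab:zeropoints}.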
\newpage
\subsubsection{Modelling observations with pysynphot}
The above method is the typical way to calculate a theoretical zeropoint, but it only approximates the bandpass of each filter using the effective wavelength and bandwidth, and only considers the mean throughput rather than integrating over the whole bandpass. To account for the full bandpass a more robust model was created using the \texttt{pysynphot} (Python Synthetic Photometry) package\footnote{\url{https://pysynphot.readthedocs.io}}, which is based on the IRAF \texttt{SYNPHOT} package\footnote{\url{http://www.stsci.edu/institute/software_hardware/stsdas/synphot}}. Each of the throughput elements described in \aref{sec:throughput} was imported to create throughput profiles for each filter, and observations of a flat \SI{3631}{\jansky} spectrum (0 mag in the AB system) and the built-in Vega spectrum (0 mag in the Vega system) were simulated by multiplying the spectra with the bandpasses. The resulting spectra are shown in \aref{fig:pysynphot}; the area under each curve gives the predicted number of electrons produced by the zero-magnitude star in each system.
The predicted signals found using pysynphot, and the derived zeropoints, are given in \aref{tab:pysynphot_zeropoints}. The difference between the two photometric systems is visible: the AB spectrum gives more electrons in the red filter while the Vega spectrum is brighter in the blue (as shown in \aref{fig:pysynphot}).
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc|cc} %
& \multicolumn{2}{c|}{AB system} & \multicolumn{2}{c}{Vega system}\\
Filter & Signal & Zeropoint & Signal & Zeropoint\\
& (\ensuremath{\textup{e}^-}/s) & (mag) & (\ensuremath{\textup{e}^-}/s) & (mag) \\
\midrule
Baader \textit{L} & \num{1.25e+09} & 22.74 & \num{1.23e+09} & 22.72 \\
Baader \textit{R} & \num{3.84e+08} & 21.46 & \num{3.25e+08} & 21.28 \\
Baader \textit{G} & \num{4.19e+08} & 21.55 & \num{4.21e+08} & 21.56 \\
Baader \textit{B} & \num{4.98e+08} & 21.74 & \num{5.47e+08} & 21.84 \\
\end{tabular}
\end{center}
\caption[Zeropoints in the AB and Vega systems calculated using pysynphot]{
Zeropoints in the AB and Vega systems calculated using pysynphot.
}\label{tab:pysynphot_zeropoints}
\end{table}
\newpage
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/synphot_RGB.png}
\includegraphics[width=\linewidth]{images/throughput/synphot_L.png}
\end{center}
\caption[Simulating photometric observations using pysynphot]{
Simulating GOTO observations using pysynphot. The filled coloured areas show the theoretical throughputs from \aref{fig:throughput} for the \textit{RGB} filters in the upper plot and the \textit{L} filter in the lower plot. The coloured dotted lines show the throughputs multiplied by a flat \SI{3631}{\jansky} spectrum (dot-dashed black line), while the coloured dashed lines show the throughputs multiplied with the model Vega spectrum (solid black line).
}\label{fig:pysynphot}
\end{figure}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\newpage
\end{colsection}
\subsection{Limiting magnitude}
\label{sec:lim_mag}
\begin{colsection}
Using the CCD parameters determined in \aref{sec:detectors}, the throughput model created in \aref{sec:throughput} and the zeropoints calculated in \aref{sec:model_zeropoints}, a complete photometric model of the GOTO telescopes can be created. One use of this is to predict the system limiting magnitude for a target signal-to-noise ratio.
\subsubsection{Signal-to-noise}
The common sources of noise in CCDs are discussed in \aref{sec:noise}. Discounting the bias level and fixed-pattern noise, both properties of the detector that are easy to remove by subtracting a master bias and dividing by a flat field respectively, the major sources of noise in an astronomical image will be the dark current and read-out noise, as well as the shot noise from the target and the sky background. Accounting for these, the total noise in the image is given by
\begin{equation}
\sigma_\text{Total} = \sqrt{N + N_\text{sky} + D + R^2},
\label{eq:total_noise}
\end{equation}
where $N$ is the electron signal from the source object, $N_\text{sky}$ is the background signal from the sky, $D$ is the dark current and $R$ is the read-out noise. Noise is usually quantified as a fraction of the target signal $N$, known as the signal-to-noise ratio \glsadd{snr}:
\begin{equation}
\text{SNR} = \frac{N}{\sigma_\text{Total}} = \frac{N}{\sqrt{N + N_\text{sky} + D + R^2}}.
\label{eq:snr}
\end{equation}
To be confident of the detection of an astronomical source a signal-to-noise ratio of 5 or more is required, also known as a $5\sigma$ detection (as the signal will be more than 5 times the noise).
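The noise and SNR expressions above translate directly into code; a minimal sketch, with all quantities in electrons (the function name is illustrative):

```python
import math

def snr(N, N_sky=0.0, D=0.0, R=0.0):
    """Signal-to-noise ratio from eq. (snr): shot noise from the
    source N and sky N_sky, dark current D, and read-out noise R,
    all in electrons."""
    return N / math.sqrt(N + N_sky + D + R**2)
```

In the source-limited regime (negligible sky, dark and read noise) this reduces to $\text{SNR} = \sqrt{N}$, so a $5\sigma$ detection needs at least 25 source electrons.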
\newpage
\subsubsection{Sky background noise}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/background.png}
\end{center}
\caption[Simulating sky background observations]{
Simulating sky background observations using pysynphot.
}\label{fig:background}
\end{figure}
The one value in \aref{eq:snr} that has not yet been considered is the sky background noise, $N_\text{sky}$. The brightness of the sky will change most noticeably as a function of the Moon phase, with a full Moon creating a background noise several magnitudes brighter than during a new Moon or when the Moon is below the horizon. In order to model the background, sky spectra were taken from \citet{sky_background}, which were obtained from 6 years of VLT observations using the FORS1 instrument\footnote{Spectra available at \href{http://www.eso.org/~fpatat/science/skybright}{\texttt{http://www.eso.org/\raisebox{0.5ex}{\texttildelow}fpatat/science/skybright}}.} (no equivalent spectra taken from La Palma were available). Three sample spectra were selected to give a range of background signals: a ``Dark'' spectrum taken when the Moon was new and below the horizon, a ``Grey'' spectrum taken when the Moon was 60\% illuminated, and a ``Bright'' spectrum taken when the Moon was full. These spectra are shown in \aref{fig:background}.
In order to determine the sky background noise in the GOTO filters, the same pysynphot method as in \aref{sec:model_zeropoints} can be used. The spectra were again multiplied by the throughput curve for each filter from \aref{sec:total_throughput}, and the area under the curve was measured and multiplied by the collecting area of the telescope to get a predicted signal in photo-electrons. These were converted into instrumental magnitudes using \aref{eq:ins_mag} and then into calibrated magnitudes using \aref{eq:zp} and the AB zeropoints given in \aref{tab:pysynphot_zeropoints}. The resulting signals are given in \aref{tab:pysynphot_background}. Note that the values are given per square arcsecond; when calculating the sky background flux the signal must be multiplied by the squared plate scale of the camera to get the signal per pixel (the plate scale of the GOTO CCDs is \SI[per-mode=symbol]{1.24}{\arcsec\per\pixel}).
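The conversion from sky signal to surface brightness, and from per-arcsecond to per-pixel values, can be sketched using the Baader \textit{L} dark-sky numbers from the tables (the function name is illustrative):

```python
import math

def sky_mag(sky_signal, zeropoint):
    """Sky surface brightness in mag/arcsec^2 from a sky signal in
    e-/s/arcsec^2, using the calibrated-magnitude relation with
    t = 1 s."""
    return zeropoint - 2.5 * math.log10(sky_signal)

# Baader L dark sky from tab:pysynphot_background, AB zeropoint 22.74
m_dark = sky_mag(3.34, 22.74)      # ~21.43 mag/arcsec^2
sky_per_pixel = 3.34 * 1.24**2     # ~5.14 e-/s/pixel at 1.24"/pixel
```

This reproduces the 21.43 mag/arcsec\textsuperscript{2} dark-sky value in \aref{tab:pysynphot_background}.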
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc|ccc} %
& \multicolumn{6}{c}{Theoretical sky signal} \\
Filter &
\multicolumn{3}{c|}{(\ensuremath{\textup{e}^-}/s/arcsec$^2$)} &
\multicolumn{3}{c}{(mag/s/arcsec$^2$)} \\
& Dark & Grey & Bright & Dark & Grey & Bright \\
\midrule
Baader \textit{L} & 3.34 & 18.38 & 34.98 & 21.43 & 19.58 & 18.88 \\
Baader \textit{R} & 1.52 & 5.64 & 10.58 & 21.00 & 19.58 & 18.90 \\
Baader \textit{G} & 1.11 & 6.17 & 12.14 & 21.45 & 19.58 & 18.84 \\
Baader \textit{B} & 0.68 & 7.24 & 13.50 & 22.16 & 19.59 & 18.92 \\
\end{tabular}
\end{center}
\caption[Sky background signals calculated using pysynphot]{
Sky background signals calculated using pysynphot for different Moon phases, in AB magnitudes.
}\label{tab:pysynphot_background}
\end{table}
\subsubsection{Calculating limiting magnitudes}
The limiting magnitude of a telescope is the magnitude of the faintest source that can be detected at a particular SNR, typically $5\sigma$. \aref{eq:snr} can be rearranged into a quadratic equation in the limiting signal $N_\text{lim}$,
\begin{equation}
N_\text{lim}^2 - \text{SNR}^2 N_\text{lim} - \text{SNR}^2 (N_\text{sky} + D + R^2) = 0,
\label{eq:snr2}
\end{equation}
and this can be solved to find $N_\text{lim}$ for a given SNR (e.g.\ setting $\text{SNR}=5$).
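Taking the positive root of the quadratic gives the limiting signal explicitly; a minimal sketch (the function name is illustrative):

```python
import math

def limiting_signal(snr, N_sky=0.0, D=0.0, R=0.0):
    """Positive root of eq. (snr2): the signal N_lim (in electrons)
    needed to reach the requested signal-to-noise ratio, given the
    sky, dark and read-out noise contributions."""
    B = N_sky + D + R**2
    return 0.5 * (snr**2 + math.sqrt(snr**4 + 4 * snr**2 * B))
```

With no background ($B = 0$) this gives $N_\text{lim} = \text{SNR}^2$, i.e.\ 25 electrons for a $5\sigma$ detection, recovering the source-limited case.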
It is important to remember that $N_\text{lim}$, $N_\text{sky}$, $D$ and $R$ are usually given as a value per pixel. Each therefore needs to be multiplied by the number of pixels the source is spread across, which will be determined by the size of the seeing disk. A given seeing $s$ in arcseconds is defined as the \glsfirst{fwhm} of the seeing disk in the image, which using a Gaussian profile is given by $ 2\sqrt{2 \ln 2}~\sigma$. Taking the $3\sigma$ radius, the number of pixels the source will be spread across is
\begin{equation}
n = \pi {\left( \frac{3\sigma}{p} \right) }^2
= \pi {\left( \frac{3s}{2\sqrt{2 \ln 2}~p} \right) }^2,
\label{eq:seeing2}
\end{equation}
where $p$ is the plate scale in arcseconds/pixel.
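Evaluating this pixel count for GOTO's \SI[per-mode=symbol]{1.24}{\arcsec\per\pixel} plate scale and \SI{1.5}{\arcsecond} seeing (a sketch; the function name is illustrative):

```python
import math

def source_pixels(seeing_arcsec, plate_scale):
    """Number of pixels inside the 3-sigma radius of a Gaussian
    seeing disk whose FWHM equals the seeing (eq. seeing2)."""
    sigma = seeing_arcsec / (2 * math.sqrt(2 * math.log(2)))
    return math.pi * (3 * sigma / plate_scale)**2

n = source_pixels(1.5, 1.24)  # ~7.5 pixels
```

So for typical GOTO conditions each per-pixel noise term is multiplied by roughly 7.5.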
Finally, the limiting magnitude in each filter can be calculated for a range of exposure times. These are plotted in \aref{fig:lim_mags} for dark and bright skies, for each GOTO filter and camera and using a seeing of \SI{1.5}{\arcsecond}. Note that it is almost impossible to distinguish between the curves for each camera, as the differences between their dark and read-out noise values are very small. The limiting magnitudes for a \SI{60}{\second} image, the typical exposure time for GOTO observations, are given in \aref{tab:lim_mags}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/limiting_mag.png}
\end{center}
\caption[$5\sigma$ limiting magnitudes for GOTO]{
$5\sigma$ limiting magnitudes for GOTO plotted as a function of exposure time, assuming an airmass of 1 and seeing of \SI{1.5}{\arcsecond}. Solid lines give the limiting magnitude during dark time, dotted lines during bright time. The \textcolorbf{Purple}{purple} vertical line marks \SI{60}{\second}, the typical GOTO exposure time.
}\label{fig:lim_mags}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc} %
& \multicolumn{3}{c}{Limiting magnitude} \\
Filter & \multicolumn{3}{c}{(mag)} \\
& Dark & Grey & Bright \\
\midrule
Baader \textit{L} & 19.88 & 19.61 & 19.41 \\
Baader \textit{R} & 18.71 & 18.61 & 18.51 \\
Baader \textit{G} & 18.77 & 18.65 & 18.63 \\
Baader \textit{B} & 18.88 & 18.63 & 18.61 \\
\end{tabular}
\end{center}
\caption[$5\sigma$ limiting magnitudes for a \SI{60}{\second} exposure]{
$5\sigma$ limiting magnitudes for a \SI{60}{\second} exposure.
}\label{tab:lim_mags}
\end{table}
\newpage
\end{colsection}
\subsection{Comparison to on-sky observations}
\label{sec:onsky_comparison}
\begin{colsection}
The GOTO prototype finally reached a stable 4-UT configuration in February 2019 (see \aref{sec:timeline}). In order to determine if it was performing to expectations, the theoretical zeropoints calculated in \aref{sec:model_zeropoints} and limiting magnitudes calculated in \aref{sec:lim_mag} can be compared to those found from on-sky observations. Since GOTO is a wide-field survey instrument there was no need to observe a particular standard star or field --- each frame contains thousands of sources that can be matched to a photometric catalogue. A set of sample observations were used: three \SI{60}{\second} exposures in each of the four filters (so 12 in total) of the Virgo Cluster, taken on the 16th of March 2019. These observations were taken during dark time when the field was at a high altitude (airmass 1.08).
Each image was processed using the standard GOTOphoto pipeline described in \aref{sec:gotophoto}, which applied bias, dark and flat-field corrections to the frames and extracted sky-subtracted source counts using Source Extractor \citep{SE}. These counts were converted into instrumental magnitudes using \aref{eq:ins_mag}, with $t=\SI{60}{\second}$, and using the gain values for each camera calculated in \aref{sec:ptc}. As GOTO uses the non-standard Baader filters (see \aref{sec:filters}) there are no catalogue magnitudes to compare to. The GOTO pipeline instead makes do with the best available catalogues: the Pan-STARRS PS1 catalogue \citep{Pan-STARRS} and APASS, the AAVSO Photometric All-Sky Survey \citep{APASS}. The \textit{L} and \textit{G} Baader filters are matched to Pan-STARRS \textit{g}, \textit{R} to Pan-STARRS \textit{r} and \textit{B} to APASS \textit{B}.
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/zeropoint_real_res.png}
\end{center}
\caption[Finding the observed zeropoint from a GOTO image]{
Finding the observed zeropoint from a GOTO image. This particular frame was taken with UT1, the first of a set of three in the \textit{L} filter. It contains 6,126 matched sources, of which 6,117 have a signal-to-noise ratio (SNR) of 5 or better. Note the accuracy of the linear fit is much better than measured in \aref{sec:lin}.
}\label{fig:zeropoint}
\end{figure}
In order to find the zeropoint for each image a linear function was fitted to the measured instrumental magnitudes of each source as a function of the catalogue magnitude of the star it was matched against, with the $y$-intercept being equal to the zeropoint for that image. This is shown in \aref{fig:zeropoint} for one of the \textit{L}-band images. To exclude faint sources with large errors, only sources with a signal-to-noise ratio of 5 or above were included in the fit. This was repeated for every image, and the zeropoints for each are given in \aref{tab:zps_lms}.
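As a simplified alternative to the linear fit (a sketch, not the GOTOphoto implementation; the names are illustrative), the zeropoint can be estimated as the median per-source offset between catalogue and instrumental magnitudes, which is robust to outliers from mismatched or variable sources:

```python
import statistics

def estimate_zeropoint(mag_catalogue, mag_instrumental):
    """Median per-source offset m_cat - m_ins; for a slope-1 linear
    relation between the two magnitudes this equals the zeropoint."""
    offsets = [c - i for c, i in zip(mag_catalogue, mag_instrumental)]
    return statistics.median(offsets)
```

For clean data this agrees with the intercept of the linear fit, since the expected slope between catalogue and instrumental magnitudes is unity.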
The theoretical zeropoints found in \aref{sec:model_zeropoints} were calculated for zero-magnitude stars above the atmosphere, i.e.\ not including the effects of atmospheric extinction described in \aref{sec:atmosphere}. Obviously the real zeropoints measured from GOTO images will include this effect, and so in order to compare to the observed zeropoints the extinction coefficients from \aref{tab:throughput_extinction} were subtracted (multiplied by 1.08, the airmass of the source at the time it was observed) from the theoretical values. This was done for each filter using the AB magnitude zeropoints from \aref{tab:pysynphot_zeropoints}, as both the PS1 and APASS catalogues use AB magnitudes. The new theoretical zeropoints are given in \aref{tab:zps_comparison}, along with the best observed zeropoint from each set of three images.
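This correction is a one-liner. Note that the \textit{L}-band extinction coefficient of approximately 0.13 mag per unit airmass used below is back-computed from the 22.74 and 22.60 zeropoints quoted here, rather than taken from \aref{tab:throughput_extinction} directly:

```python
def corrected_zeropoint(zp_above_atmosphere, k_extinction, airmass):
    """Theoretical zeropoint corrected for atmospheric extinction:
    subtract the extinction coefficient times the airmass."""
    return zp_above_atmosphere - k_extinction * airmass

# Baader L: AB zeropoint 22.74, airmass 1.08 at the time of observation
zp_L = corrected_zeropoint(22.74, 0.13, 1.08)  # ~22.60 mag
```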
To measure the limiting magnitude from each image, the catalogue magnitude of each source was compared to the magnitude error measured by Source Extractor, plotted in \aref{fig:lim_mag}. A signal-to-noise ratio of 5 corresponds to a magnitude error of 0.198\footnote{An SNR of 5 means an error of $\pm20\%$, and $2.5\log(1.2)=0.198$. A common approximation is that the magnitude error $\approx 1/$SNR.}, and the limiting magnitude was taken as the lowest magnitude source with a magnitude error greater than 0.198. The best limiting magnitudes from each set of three are given in \aref{tab:lms_comparison}, along with the theoretical dark-time limiting magnitudes again accounting for the target airmass.
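The conversion between SNR and magnitude error used for this cut can be checked directly:

```python
import math

def mag_error(snr):
    """Magnitude error corresponding to a fractional flux error of
    1/SNR: 2.5 log10(1 + 1/SNR)."""
    return 2.5 * math.log10(1 + 1 / snr)

err_5sigma = mag_error(5)  # ~0.198 mag, the cut used above
```

For large SNR the error approaches $1/\text{SNR}$, which is the common approximation mentioned in the footnote.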
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/throughput/limiting_mag_real2.png}
\end{center}
\caption[Finding the limiting magnitude from a GOTO image]{
Finding the limiting magnitude from a GOTO image.
}\label{fig:lim_mag}
\end{figure}
\newpage
\begin{table}[p]
\begin{center}
\begin{tabular}{cc|cc|cc|cc|cc} %
& &
\multicolumn{2}{c|}{UT1} &
\multicolumn{2}{c|}{UT2} &
\multicolumn{2}{c|}{UT3} &
\multicolumn{2}{c}{UT4}
\\
& &
\multicolumn{2}{c|}{{\footnotesize(ML6094917)}} &
\multicolumn{2}{c|}{{\footnotesize(ML0010316)}} &
\multicolumn{2}{c|}{{\footnotesize(ML0420516)}} &
\multicolumn{2}{c}{{\footnotesize(ML5644917)}}
\\
\multicolumn{2}{c|}{Filter} &
ZP & LM & ZP & LM & ZP & LM & ZP & LM \\
\midrule
\textit{L} & 1 &
22.32 & 19.7 &
22.31 & 19.7 &
22.40 & 19.8 &
22.32 & 19.6
\\
& 2 &
22.26 & 19.6 &
22.27 & 19.7 &
22.44 & 19.7 &
22.37 & 19.7
\\
& 3 &
22.39 & 19.6 &
22.42 & 19.6 &
22.45 & 19.7 &
22.40 & 19.7
\\
\midrule
\textit{R} & 1 &
20.83 & 18.3 &
21.05 & 18.3 &
21.10 & 18.4 &
21.05 & 18.2
\\
& 2 &
20.84 & 18.4 &
21.11 & 18.4 &
21.13 & 18.5 &
21.06 & 18.2
\\
& 3 &
20.91 & 18.4 &
21.01 & 18.4 &
21.04 & 18.5 &
20.94 & 18.2
\\
\midrule
\textit{G} & 1 &
21.20 & 18.7 &
21.39 & 18.8 &
21.40 & 18.8 &
21.27 & 18.7
\\
& 2 &
21.16 & 18.7 &
21.43 & 18.8 &
21.46 & 18.7 &
21.36 & 18.6
\\
& 3 &
21.26 & 18.6 &
21.44 & 18.8 &
21.45 & 18.8 &
21.32 & 18.6
\\
\midrule
\textit{B} & 1 &
21.22 & 19.0 &
21.35 & 18.9 &
21.43 & 19.1 &
21.27 & 19.1
\\
& 2 &
21.22 & 19.0 &
21.32 & 19.2 &
21.44 & 19.1 &
21.22 & 19.0
\\
& 3 &
21.20 & 18.9 &
21.35 & 19.0 &
21.44 & 19.1 &
21.26 & 19.0
\\
\end{tabular}
\end{center}
\caption[Observed zeropoints and limiting magnitudes]{
Observed zeropoints (ZP) \glsadd{zp} and limiting magnitudes (LM) \glsadd{lm} from three \SI{60}{\second} exposures taken in each filter. The camera serial numbers can be matched to \aref{tab:cameras}.
}\label{tab:zps_lms}
\end{table}
\begin{table}[p]
\begin{center}
\begin{tabular}{c|c|cccc|cccc} %
&
Theoretical &
\multicolumn{4}{c|}{Best observed zeropoint} &
\multicolumn{4}{c}{Difference (obs-theory)}
\\
Filter & zeropoint & UT1 & UT2 & UT3 & UT4 & UT1 & UT2 & UT3 & UT4 \\
\midrule
\textit{L} & 22.60 & 22.39 & 22.42 & 22.45 & 22.40 & -0.21 & -0.18 & -0.15 & -0.20 \\
\textit{R} & 21.38 & 20.91 & 21.11 & 21.13 & 21.06 & -0.47 & -0.27 & -0.25 & -0.32 \\
\textit{G} & 21.43 & 21.20 & 21.44 & 21.46 & 21.36 & -0.23 & +0.01 & +0.03 & -0.07 \\
\textit{B} & 21.52 & 21.22 & 21.35 & 21.44 & 21.27 & -0.30 & -0.12 & -0.08 & -0.25 \\
\end{tabular}
\end{center}
\caption[Comparison between theoretical and observed zeropoints]{
Comparison between the theoretical zeropoints (accounting for extinction) and the best observed zeropoints.
}\label{tab:zps_comparison}
\end{table}
\begin{table}[p]
\begin{center}
\begin{tabular}{c|c|>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}} %
&
Theoretical &
\multicolumn{4}{c}{Best observed limiting magnitude}
\\
Filter & limiting magnitude & UT1 & UT2 & UT3 & UT4 \\
\midrule
\textit{L} & 19.87 & 19.7 & 19.7 & 19.8 & 19.7 \\
\textit{R} & 18.70 & 18.4 & 18.4 & 18.5 & 18.2 \\
\textit{G} & 18.76 & 18.7 & 18.8 & 18.8 & 18.7 \\
\textit{B} & 18.87 & 19.0 & 19.2 & 19.1 & 19.1 \\
\end{tabular}
\end{center}
\caption[Comparison between theoretical and observed limiting magnitudes]{
Comparison between the theoretical limiting magnitudes (for dark time) and the best observed limiting magnitudes.
}\label{tab:lms_comparison}
\end{table}
\clearpage
From \aref{tab:zps_comparison}, in most cases the theoretical zeropoints are 0.2--0.3 magnitudes higher than those measured from the sample images, which might suggest that the theoretical model is overestimating the throughput of the system. There is a clear difference between the four unit telescopes, with UT3 consistently performing better than the others. This might be because its mirrors were re-aluminised and returned to La Palma only a month before the images were taken (see \aref{sec:timeline}); the difference between the observed and theoretical zeropoints for the other unit telescopes may therefore be due to a lower mirror reflectivity than assumed in the throughput model in \aref{sec:optics} (e.g.\ due to dust build-up on the mirrors).
There is also a noticeable difference between filters, with all the unit telescopes performing worse in the \textit{R} filter but surpassing the predicted limiting magnitudes in \textit{B}. One limitation in the method used was having to match sources to existing catalogues taken in other filters, without correcting for colour terms, i.e.\ the differences in the filter bandpasses (see \aref{sec:filters}). This is something that should be integrated into the GOTOphoto pipeline. Further images taken over more nights would be needed to make any firm conclusions on the performance of the hardware.
\newpage
\end{colsection}
\section{Summary and Conclusions}
\label{sec:hardware_conclusion}
\begin{colsection}
In this chapter I have presented a description and analysis of the GOTO optical hardware.
I first detailed a series of in-lab tests I carried out on the GOTO cameras to determine their key characteristics. I confirmed that the camera properties met the manufacturer's specifications, and was able to independently calculate many of the key properties including the gain and noise levels. I then created a full throughput model of the GOTO unit telescopes, including the contribution of the optics, filters and atmospheric extinction.
I then used the throughput model and camera characteristics to predict values showing the performance of the GOTO telescopes, and then compared them to real on-sky observations. I confirmed that the model does a reasonable job at predicting the zeropoints and limiting magnitudes of real images, and it appears that the GOTO hardware is performing as expected. Future enhancements to the model would include a more detailed comparison of the Baader filters to other sets and calculation of colour terms, to better compare GOTO observations to existing catalogues.
\end{colsection}
\chapter{Introduction}
\label{chap:intro}
\chaptoc{}
\section{Gravitational Waves}
\label{sec:gw}
\begin{colsection}
Einstein's theory of General Relativity describes gravity as the curvature of spacetime \citep{Einstein1914}, and he went on to describe the propagation of distortions within the spacetime `fabric' \citep{Einstein1916}. These \emph{gravitational waves} (GWs) \glsadd{gw} are produced by the acceleration of matter within the field of spacetime and propagate at the speed of light \citep{GW170817_gravity}, analogous to electromagnetic (EM) \glsadd{em} waves being produced by an accelerating charge. The existence of gravitational waves is a consequence of the finite propagation time of gravity in general relativity; there is no analogue to gravitational waves in Newtonian gravity as Newton described a force propagating instantaneously.
The result of Einstein's theory is the quadrupole formula \citep{Einstein1916}, which describes gravity propagating as a transverse wave that alternately stretches and compresses spacetime along two orthogonal axes \citep{BIGcardiff}. A single object will never `observe' a gravitational wave, as it is itself embedded in the fabric; the only way to detect the passing of gravitational waves is to look for changes in the relative positions of two or more objects as the wave passes through. A thought experiment considering the effects of gravitational waves on free-floating masses is shown in \aref{fig:wave}, for the two wave polarisation states. These perturbations are quantified by the strain, the fractional change in distance, which even for astronomical-scale events will be incredibly small --- the first direct detection of gravitational waves involved measuring strains of the order of $10^{-21}$ (see \aref{fig:chirp}). It is the goal of gravitational-wave detectors to observe these minute spatial perturbations as the wave passes through.
A detailed discussion of general relativity and gravitational-wave science is beyond the scope of this thesis, so this section gives only a brief introduction to the topic in order to explain the core purpose of the GOTO project.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/waveimg2.pdf}
\end{center}
\caption[Gravitational-wave polarisations]{
Consider a 2-dimensional ring of free-floating particles in the $x$-$y$-plane. A gravitational-wave which passes through the ring in the $z$-direction (out of the page) will alternately stretch and compress the ring in two orthogonal axes out of phase. The upper image shows the effect of a plus-polarised wave, the lower image shows the effect of a cross-polarised wave. Adapted from \citet{BIGaustralia}.
}\label{fig:wave}
\end{figure}
\newpage
\end{colsection}
\subsection{Detecting gravitational waves}
\label{sec:gw_detecting}
\begin{colsection}
As described above, gravitational waves manifest as alternately stretching and compressing spacetime along perpendicular axes. Several methods of directly detecting gravitational waves have been proposed, but the most successful design uses a Michelson interferometer to observe how two test masses move relative to each other as a wave passes through \citep{BIGbirmingham}. As shown in \aref{fig:detector}, an input laser is split into two by a beam splitter, and each beam is sent into one of two long perpendicular arms. In order to detect the tiny strains from gravitational waves these arms need to be kilometres in length, and each arm acts as a laser cavity, reflecting the beam multiple times between two mirrored test masses, to further increase the effective distance. When they exit the arms, the beams are recombined to form a single output. Should the lengths of the arms change relative to each other, e.g.\ due to a gravitational wave passing through, the distance the beams travel will be different, which will produce a change in the resulting interference pattern produced when they recombine. The test masses are suspended by a complex vibration isolation system in order to reduce any outside interference, such as from man-made vibrations or seismic events.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/detector.pdf}
\end{center}
\caption[A Michelson interferometer used as a gravitational-wave detector]{
A Michelson interferometer used as a gravitational-wave detector. As a wave passes through, the relative lengths of the arms will change, as shown (highly exaggerated) in the inset. This will reduce or increase the distance the laser light travels through each arm, and therefore alter the output interference signal. Adapted from \citet{GW150914_detectors}.
}\label{fig:detector}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/global.pdf}
\end{center}
\caption[Locations of gravitational-wave detectors]{
Locations of current and proposed gravitational-wave detectors.
}\label{fig:global}
\end{figure}
Several of these gravitational-wave detectors have been built around the world, as shown in \aref{fig:global}. Having multiple detectors acting together provides redundancy, and allows the source of the signal to be localised (see \aref{sec:gw_localisation}). There are currently three active detectors: the two Advanced \glsfirst{ligo} detectors in the United States, at Hanford, Washington and Livingston, Louisiana \citep{LIGO}, and the \glsfirst{ego} Advanced Virgo detector near Pisa, Italy \citep{Virgo}. These three detectors form a global network known as the LIGO-Virgo Collaboration \glsadd{lvc} \citep[LVC,][]{LIGO-Virgo}. In addition, the older German-British GEO600 detector in Germany is still active, primarily as a technology test system \citep{GEO600}. The \glsfirst{kagra} is currently under construction in Japan \citep{KAGRA}, and is expected to join the global network before the end of 2019 \citep{LIGO-Virgo-KAGRA}. In the next decade, work should also begin on building a third LIGO detector, LIGO-India \citep{LIGO_India}, relocating what was previously a second interferometer at Hanford.
In the longer term, the next generation of larger and more sensitive gravitational-wave detectors is already being designed, including the Einstein Telescope \citep{EinsteinTelescope} and the Cosmic Explorer \citep{CosmicExplorer}. Space-based gravitational-wave detectors are also being planned, such as the Laser Interferometer Space Antenna \glsadd{lisa} \citep[LISA,][]{LISA}. Detectors in space would be free from the seismic noise that limits ground-based detectors at low frequencies, and could therefore detect lower-frequency gravitational waves. This could potentially include signals from supermassive black hole mergers, lower-mass white dwarf binaries within our own galaxy, and early detections of neutron star or black hole mergers that are subsequently observed by larger detectors on Earth.
\newpage
\end{colsection}
\subsection{Sources of gravitational waves}
\label{sec:gw_sources}
\begin{colsection}
Any accelerating mass will generate gravitational waves as it moves through spacetime, as long as the motion is not spherically symmetric \citep[such as a rotating disk or a uniformly expanding sphere;][]{BIGcardiff,BIGparis}. However, in practice it is only possible to detect gravitational waves from astronomical sources, as only they will produce large enough strains to be picked up by the detectors.
A continuous source of gravitational waves will be generated by two massive objects orbiting one another \citep{GW_sources}, and the loss of energy from the system in the form of gravitational waves will slowly cause the orbital separation of the two objects to shrink. The first binary pulsar was discovered in 1974 \citep{HulseTaylor}, and after repeated observations it was apparent that the orbital period of the two stars was decreasing in perfect agreement with the predictions given by general relativity. This was the first real evidence, albeit indirect, of the existence of gravitational waves, and the discovery of the Hulse-Taylor pulsar was deemed so significant that its discoverers were awarded the Nobel Prize in 1993 \citep{HulseTaylor2}.
The loss of energy in the form of gravitational radiation will cause binary orbits to slowly decay \citep[unless counteracted by another process, such as mass transfer;][]{binary_masstransfer}. As the orbital distance decreases, so will the period, resulting in the objects orbiting faster and the system emitting gravitational waves at higher frequencies. This will produce a characteristic `chirp' signal until the two objects collide \citep{GW_sources, BIGparis}. At the point of coalescence the system will release a huge burst of gravitational energy; at its peak the GW150914 signal reached a luminosity of \SI{3.6e49}{\watt} \citep[greater than the combined luminosity of all stars in the observable universe;][]{GW150914}. After the inspiral and merger, gravitational waves are still detectable in the ``ring-down'' phase, as the resulting object gradually settles to form a stable sphere \citep{GW_ringdown}. More massive objects produce stronger signals, and so the ideal sources for gravitational-wave detections are compact binary coalescence (CBC) \glsadd{cbc} events, which include binary neutron stars \glsadd{bns} (BNS), binary black holes \glsadd{bbh} (BBH) and neutron star-black hole (NSBH) \glsadd{nsbh} binaries.
Coalescing binaries are not the only predicted sources of gravitational-wave signals. Sources of gravitational waves are typically classed into three categories: bursts, continuous emission and the stochastic background. Along with coalescing binaries, core-collapse supernovae are predicted to produce bursts of gravitational radiation \citep{GW_supernovae}, as long as the explosions are not entirely symmetrical. Asymmetric, rapidly-spinning neutron stars (pulsars) should produce continuous, periodic emission of gravitational waves \citep{GW_pulsars}. A stochastic background of gravitational radiation from events throughout the history of the universe is also predicted, which, if measured, could provide insights into the physics of the early universe \citep{GW_background, GW_background2}. In this thesis, I will focus on gravitational-wave signals from compact binaries, as at the time of writing they are the only confirmed detections by the LIGO-Virgo interferometers.
\end{colsection}
\subsection{Gravitational-wave detection history}
\label{sec:gw_detections}
\begin{colsection}
The LIGO detectors became operational in 2002 and observed on-and-off until 2010 without detecting any gravitational-wave signals, after which they were taken offline in order to upgrade into the second-generation Advanced LIGO detectors \citep{LIGO_initial, LIGO_advanced}. The upgrade to the optics and lasers increased the strain sensitivity by a factor of 10, which, since the detectable volume scales with the cube of the range, corresponded to a factor-of-1000 increase in the volume of space from which a signal could be detected \citep{LIGO}.
The detectors were recommissioned in 2015, and the first direct detection of gravitational waves occurred on the 14th of September 2015, while the two LIGO detectors were still in engineering mode. The signal, GW150914\footnote{Confirmed gravitational-wave detections are named in the form GW\textit{YYMMDD}.}, was produced by the merger of a binary black hole system approximately \SI{440}{\mega\parsec} away with component masses of \SI{35}{\solarmass} and \SI{30}{\solarmass} \citep{GW150914}. The `chirp' signals recorded in the LIGO detectors are shown in \aref{fig:chirp}. Note the strain on the $y$-axis is of the order $10^{-21}$, meaning that, as the wave passed through, the \SI{4}{\kilo\metre}-long LIGO detector arms changed in length by approximately \SI{4e-18}{\metre}, a fraction of the size of a proton ($\approx$ \SI{e-15}{\metre}).
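As a quick check, the quoted arm-length change follows directly from the definition of strain as the fractional change in length, $h = \Delta L / L$:

```latex
\[
  \Delta L = h\,L \approx 10^{-21} \times \SI{4}{\kilo\metre}
           = \SI{4e-18}{\metre}.
\]
```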
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/chirp2.pdf}
\end{center}
\caption[The first detection of gravitational waves]{
The first detection of gravitational waves, recorded in the LIGO Hanford detector (left) and then approximately \SI{7}{\milli\second} later in the LIGO Livingston detector (right). Adapted from \citet{GW150914}.
}\label{fig:chirp}
\end{figure}
The first LIGO observing run (O1) \glsadd{o1} continued from September 2015 to January 2016, and during that time two further gravitational-wave signals were detected \citep{LIGO_O1}. All three detections were identified as being produced by coalescing black hole binaries, and although at the time one (LVT151012) was below the $5\sigma$ significance level, it has since been upgraded to a significant detection and reclassified as GW151012 \citep{GW_catalog}.
The second observing run (O2) \glsadd{o2} took place from November 2016 to August 2017. This run saw the first observation of gravitational waves from a binary neutron star, GW170817 \citep{GW170817}, as well as the addition of the Virgo detector to the network in the final month. In total eleven gravitational-wave events were detected during O1 and O2, ten from binary black holes and one (GW170817) from a binary neutron star. Together these eleven events form the first Gravitational-Wave Transient Catalogue \glsadd{gwtc} \citep[GWTC-1;][]{GW_catalog}. The source and remnant masses for each event are shown in \aref{fig:gw_masses}, compared to previous direct detections of neutron stars and stellar-mass black holes.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/gw_masses2.png}
\end{center}
\caption[Sources of gravitational waves detected during O1 and O2]{
Masses of the sources and remnant objects from the 11 signals detected during O1 and O2, compared to previous electromagnetic detections. Note the ``mass gap'' between 2--\SI{5}{\solarmass}. Image credit: LIGO/Virgo/Northwestern/Frank Elavsky.
}\label{fig:gw_masses}
\end{figure}
After a few short engineering runs the third observing run (O3) \glsadd{o3} began on 1 April 2019. At the time of writing, O3 is still ongoing, and after a short break in October 2019 is expected to run until May 2020. This is the first run to include the three LIGO-Virgo detectors from the beginning, and KAGRA is expected to join before the end of 2019 \citep{LIGO-Virgo-KAGRA}. O3 also marked the start of public alert releases\footnote{Public alerts are available at \url{https://gracedb.ligo.org/superevents/public/O3}.}; during O1 and O2 immediate alerts were only released to groups who had signed memoranda of understanding with the LVC (the GOTO Collaboration was one of these groups).
In the first 5 months of O3, from the start of April to the end of August 2019, the LVC released 32 alerts; 7 were ultimately retracted as false alarms, leaving 25 due to real astronomical signals. As O3 is currently ongoing the LVC has not yet published final values or mass estimates for any of these events. As such they are still treated as candidates, with provisional signal designations and preliminary classification probabilities. Of the 25 non-retracted events, 20 are currently classified as originating from binary black hole systems ($P_\text{BBH}>90\%$). Only one is likely from a binary neutron star \citep[S190425z;][]{S190425z}, one is classed as a likely neutron star-black hole binary \citep[S190814bv;][]{S190814bv}, and one \citep[S190426c;][]{S190426c} has an uncertain classification: a 49\% probability of originating from a binary neutron star, 24\% from a binary containing a `MassGap' object \citep[a theorised object with a mass between a neutron star and a black hole;][]{GW_MassGap}, and 13\% from a neutron star-black hole binary (the remaining 14\% is the probability that the signal is non-astrophysical, i.e.\ detector noise). The remaining two events both have over 50\% non-astrophysical probability but have not been formally retracted by the LVC.\@
\end{colsection}
\subsection{On-sky localisation}
\label{sec:gw_localisation}
\begin{colsection}
One limitation with using interferometers to detect gravitational-wave signals is that alone they are very poor at localising the direction a signal originates from. It is possible to estimate a rough direction from the polarisation of the signal, and the distance to the origin can be estimated from the signal strength, but multiple detectors are needed to obtain more accurate sky localisations \citep{GW_localisation, GW_localisation2}. With two detectors, the difference between the arrival time of a signal at each allows the direction to the source to be narrowed down, based on the distance between the two detectors and knowing that gravitational waves propagate at the speed of light. However, this will only constrain the source to within an annulus on the celestial sphere, perpendicular to the line between the detectors. As shown in \aref{fig:triangulate}, at least three detectors are needed to triangulate the source location, and even then only to two positions on opposite sides of the sky (in practice the polarisation will suggest which of these two points is the more likely origin). The localisation skymaps for the GW170817 event, showing regions on the sky that the source is predicted to be within, are shown in \aref{fig:170817_skymaps}. They show how the contributions from multiple detectors dramatically improved the on-sky localisation.
Even with the three current detectors, sources that are detected by all three can typically only be localised to areas of tens to hundreds of square degrees. This also requires all three detectors to be observing at the same time, with no redundancy for down time due to maintenance or hardware problems. This is why the full network is anticipated to include five detectors across the globe, as described in \aref{sec:gw_detecting}.
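As a rough sketch of the timing argument above: an arrival-time difference $\Delta t$ between two detectors separated by a baseline $d$ constrains the source to a ring of half-opening angle $\theta$ about the baseline, with $\cos\theta = c\,\Delta t / d$. The snippet below is illustrative only; the $\sim$\SI{3000}{\kilo\metre} Hanford--Livingston baseline is an assumed round figure, and the \SI{7}{\milli\second} delay is the GW150914 value quoted earlier.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def ring_half_angle(dt_s, baseline_m):
    """Half-opening angle (degrees) of the localisation ring about the
    detector baseline, given the signal arrival-time difference."""
    cos_theta = C * dt_s / baseline_m
    if abs(cos_theta) > 1.0:
        raise ValueError("delay exceeds the light travel time between detectors")
    return math.degrees(math.acos(cos_theta))

# GW150914-like numbers: ~7 ms delay over an assumed ~3000 km baseline.
print(f"{ring_half_angle(7e-3, 3.0e6):.1f} degrees")  # ~45.6 degrees
```

A zero delay places the ring in the plane perpendicular to the baseline ($\theta = 90\degree$), while a delay equal to the light travel time collapses it to a point along the baseline; real localisation pipelines also fold in amplitude and phase information, which this sketch omits.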
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth]{images/triangulate_1.pdf}
\includegraphics[width=0.4\linewidth]{images/triangulate_2.pdf}
\end{center}
\caption[Localising signals using gravitational-wave detectors]{
Localising signals using gravitational-wave detectors. With just two detectors sources can only be localised to a ring on the sky (shown on the left, for three different sources). The addition of a third detector means sources can be triangulated to where the rings intersect (shown on the right).
}\label{fig:triangulate}
\end{figure}
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/skymaps.pdf}
\end{center}
\caption[Skymaps for GW170817]{
Skymaps for GW170817, produced from the single Hanford detector (in \textcolorbf{NavyBlue}{blue}), both Hanford and Livingston (\textcolorbf{Orange}{orange}) and all three detectors (\textcolorbf{Green}{green}). The final \textit{Fermi} GBM skymap is also shown in \textcolorbf{Purple}{purple}, and the location of the counterpart source is marked by a \textcolorbf{Red}{red} star. Solid lines show 50\% confidence regions, dashed lines 90\% regions.
}\label{fig:170817_skymaps}
\end{figure}
\newpage
\end{colsection}
\section{Multi-Messenger Astronomy}
\label{sec:multi}
\begin{colsection}
\emph{Multi-messenger astronomy} refers to detecting multiple signals from the same astrophysical source using two or more different `messengers'. Such messengers can include electromagnetic waves/photons, gravitational waves, neutrinos or cosmic rays. An example of a multi-messenger event is supernova SN 1987A, which was detected by neutrino detectors several hours before becoming visible in the electromagnetic spectrum \citep{SN1987A}. This thesis concentrates on the search for electromagnetic counterparts to gravitational-wave detections, of which at the time of writing only one has been found \citep[GW170817;][]{GW170817}.
Prior to the GW170817 detection, it had long been theorised that some gravitational-wave detections might have electromagnetic counterparts. Binary mergers involving neutron stars (either neutron star binaries or neutron star-black hole mergers) were suggested as possible sources of short-duration gamma-ray bursts \citep{SGRBs}. Such events were expected to produce kilonovae, transient bursts of electromagnetic radiation that could also be visible in the optical \citep[these events were named ``kilo''-novae as they were predicted to reach luminosities approximately 1000 times that of a classical nova;][]{GW_kilonova}. Electromagnetic counterparts to binary black hole mergers were less expected; binary stellar-mass black holes may not be surrounded by much orbiting matter with which to interact, although certain systems with an orbiting disk might produce enough material for accretion and subsequent emission \citep{BBH_EM}.
\end{colsection}
\subsection{The benefits of multi-messenger observations}
\label{sec:mma_benefits}
\begin{colsection}
As explained in \aref{sec:gw_localisation}, gravitational-wave detectors have only a limited ability to localise the source of each detection. Wide-field electromagnetic monitors, such as the \textit{Fermi} \glsfirst{gbm}, can provide independent localisation skymaps to help reduce the search area (\cite{GW170817_GRB}; note also the \textit{Fermi} skymap included in \aref{fig:170817_skymaps}). Ideally, the direct detection of a kilonova would allow precise localisation to a host galaxy, as well as a measure of redshift and therefore the distance to the source.
Electromagnetic observations of a counterpart to a gravitational-wave detection can give additional insights into the nature of the source and its environment, as well as further scientific breakthroughs. For example, observations of the kilonova associated with GW170817 \citep{GW170817, GW170817_followup} allowed analysis of the equation of state of neutron star material \citep{GW170817_NSscience}, an insight into the origin of heavy metals in the universe \citep{GW170818_heavy}, constraints on the nature of gravity \citep{GW170817_gravity}, and a new, independent measurement of the Hubble constant \citep{GW170817_hubble}.
At the time of writing, GW170817 remains the only gravitational-wave signal with a confirmed electromagnetic counterpart. As observations continue and new detectors come online it is only a matter of time until similar objects are found, which will assuredly lead to further astrophysical and cosmological breakthroughs. However, future counterparts may not be as easy to find.
\end{colsection}
\subsection{Finding optical counterparts to GW detections}
\label{sec:followup}
\begin{colsection}
GW170817 was a remarkably lucky event in several ways. The gravitational-wave signal was observed by both LIGO detectors, and although it was not observed by Virgo it was active at the time so the non-detection still helped narrow down the localisation area. This produced a fairly small skymap, covering just $31~\text{deg}^2$ \citep[see \aref{fig:170817_skymaps}]{GW170817}, which was well positioned for telescopes in the southern hemisphere to observe (although close to the sun, the area was visible for a few hours after sunset). The gravitational-wave signal also yielded a low luminosity distance estimate of $40\pm\SI{8}{\mega\parsec}$, close enough to make an electromagnetic observation of the counterpart feasible.
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/sss17a.pdf}
\end{center}
\caption[Detection of the counterpart to GW170817]{
Detection of the counterpart to GW170817 with the Swope telescope. On the left is an archival image of NGC 4993 from HST, and on the right the position of the kilonova is marked in the Swope discovery image. Adapted from \citet{GW170817_Swope}.
}\label{fig:sss17a}
\end{figure}
The first of many observations of the counterpart transient, designated AT~2017gfo, was taken by the One-Meter, Two-Hemisphere collaboration using the 1-metre Swope telescope at Las Campanas Observatory in Chile. The discovery image is shown in \aref{fig:sss17a}, and the Swope team designated the transient SSS17a \citep{GW170817_Swope}. The Swope telescope only has a field of view of $\SI{30}{\arcmin}\times\SI{30}{\arcmin}$, but as the event was localised to a small, nearby region a list of potential host galaxies could be selected from the Gravitational Wave Galaxy Catalogue \glsadd{gwgc} \citep[GWGC;][]{GWGC}. The Swope observations were targeted to fields including these galaxies, as shown in \aref{fig:swope_decam}. The host galaxy, NGC 4993, was observed in the ninth pointing approximately 10 hours after the gravitational-wave detection, and as shown in \aref{fig:sss17a} the kilonova was clearly visible at the outer edge of the galaxy.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/170817_obs.pdf}
\end{center}
\caption[Follow-up observations of GW170817 with Swope and DECam]{
Follow-up observations of GW170817 with the Swope telescope \citep[left, adapted from][]{GW170817_Swope} and the Dark Energy Camera \citep[right, adapted from][]{GW170817_DECam}. The skymap contours are the same as in \aref{fig:170817_skymaps}; the Hanford-Livingston skymap is in \textcolorbf{Orange}{orange}, the final Hanford-Livingston-Virgo skymap is in \textcolorbf{Green}{green}, and the location of the counterpart is marked by the \textcolorbf{Red}{red} star. The Swope telescope has a small, $0.25~\text{deg}^2$ field of view, and so targeted its follow-up observations on concentrations of GWGC galaxies (\textcolorbf{darkgray}{grey} circles). The squares on the left show the fields observed by Swope, containing multiple galaxies (in \textcolorbf{NavyBlue}{blue}) or single galaxies (\textcolorbf{Purple}{purple}). Instead of targeting galaxies, DECam observed using a pre-defined grid of 18 pointings (\textcolorbf{NavyBlue}{blue} hexagons on the right), followed by a second set of offset observations (\textcolorbf{Purple}{purple}).
}\label{fig:swope_decam}
\end{figure}
Five other groups independently observed the same transient within an hour of the Swope observation: the Dark Energy Camera \glsadd{decam}\citep[DECam,][]{GW170817_DECam}, the Distance Less Than \SI{40}{\mega\parsec} survey \glsadd{dlt40}\citep[DLT40,][]{GW170817_DLT40}, Las Cumbres Observatory \glsadd{lco}\citep[LCO,][]{GW170817_LCO}, the MASTER Global Robotic Net \citep{GW170817_MASTER} and the \glsfirst{eso} Visible and Infrared Survey Telescope for Astronomy \glsadd{vista}\citep[VISTA,][]{GW170817_VISTA}. The observing strategy differed between groups depending on the field of view of the instruments. DLT40 was an existing supernova survey so targeted already-known galaxies, and the LCO and VISTA surveys both targeted their observations at possible host galaxies, just like Swope. On the other hand, DECam and MASTER had larger fields of view, and so could therefore cover the entire localisation region using a regular tiling pattern. The DECam tile pointings are shown on the right-hand plot of \aref{fig:swope_decam}.
The relatively small GW170817 skymap allowed telescopes with small fields of view, such as Swope, to efficiently cover the search area and locate the counterpart source. However, there is no guarantee that this will always be the case, and indeed subsequent events have not been as well localised. As previously mentioned in \aref{sec:gw_sources}, the second binary neutron star gravitational-wave detection, S190425z, occurred in April 2019, a few weeks into the O3 run \citep{S190425z}. Unlike GW170817, this was only a single-detector detection, observed only by LIGO-Livingston (Virgo was again observing at the time, but LIGO-Hanford was shut down for maintenance). The initial skymap for this event covered an area of approximately 10,000 square degrees, and the final skymap only reduced this to 7,500~sq~deg; still 250 times the size of the final GW170817 skymap (see \aref{fig:ztf}). For searching such large sky areas, containing thousands of galaxies, the method used by smaller telescopes such as Swope is impractical, and so dedicated wide-field survey telescopes are needed.
The \glsfirst{ztf} is one such telescope, with a field of view of 47~sq~deg \citep{ZTF}. Over the first two nights following the S190425z detection ZTF covered approximately 8,000~sq~deg of the initial skymap \citep{S190425z_ZTF}, shown in \aref{fig:ztf}. This still only corresponded to 46\% of the localisation probability, reduced to 21\% in the final skymap, as unfortunately a large fraction of the skymap was located too close to the Sun to observe.
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/190425_ztf.pdf}
\end{center}
\caption[Follow-up observations of S190425z with ZTF]{
Follow-up observations of S190425z with the Zwicky Transient Facility. The ZTF tiled observations are shown in \textcolorbf{NavyBlue}{blue} over the initial LVC skymap in \textcolorbf{Orange}{orange}. Adapted from \citet{S190425z_ZTF}. For a size comparison to \aref{fig:swope_decam} the position of the final GW170817 skymap is also shown in \textcolorbf{Green}{green}.
}\label{fig:ztf}
\end{figure}
If covering the large gravitational-wave skymap was not enough of a challenge, once electromagnetic observations have been taken a robust analysis pipeline is also needed in order to distinguish potential counterpart candidates from the large number of coincident transient and variable-star detections. ZTF found 340,000 transients when following up S190425z, which were narrowed down to just 12 potential candidates \citep{S190425z_ZTF}. Ultimately, each was shown to be unconnected with the gravitational-wave signal, and in the end no counterpart was identified for this event by ZTF or any other project.
As a single-detector event S190425z was something of an extreme example, and as more gravitational-wave detectors come online the typical skymap size should decrease. The time and effort required to follow up S190425z stands in contrast to the relative ease with which the GW170817 counterpart was found. Small telescopes like Swope can contribute with galaxy-focused observations when the skymap is small enough, but for events like S190425z, where large searches are required, it is clear that dedicated, wide-field survey telescopes are required to have the best chance of finding any counterpart.
\newpage
\end{colsection}
\hfuzz=6pt %
\section[The Gravitational-wave Optical Transient Observer]{%
\protect\scalebox{0.93}[1.0]{\mbox{The Gravitational-wave Optical Transient Observer}}
}
\hfuzz=.5pt %
\label{sec:goto}
\begin{colsection}
The \glsfirst{goto}\footnote{\url{https://goto-observatory.org}} is a project dedicated to detecting optical counterparts of gravitational-wave sources. The GOTO collaboration was founded in 2014 and, as of 2019, contains 10 institutions from the UK, Australia, Thailand, Spain and Finland\footnote{The GOTO collaboration includes the University of Warwick, Monash University, Armagh Observatory and Planetarium, the University of Leicester, the University of Sheffield, the National Astronomical Research Institute of Thailand, the Instituto de Astrofísica de Canarias, the University of Manchester, the University of Turku and the University of Portsmouth.}. The first prototype GOTO telescope was inaugurated at the \glsfirst{orm_lapalma} on La Palma, Canary Islands in July 2017, and is shown in \aref{fig:goto_photo}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/goto_photo.jpg}
\end{center}
\caption[The GOTO prototype on La Palma]{
The GOTO prototype on La Palma, with four unit telescopes.
}\label{fig:goto_photo}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/goto_render.png}
\end{center}
\caption[A rendering of a complete GOTO node]{
A rendering of a complete GOTO node with two independent mounts, each with eight unit telescopes.
}\label{fig:goto_render}
\end{figure}
\end{colsection}
\subsection{Motivation}
\label{sec:goto_motivation}
\begin{colsection}
Even before the first detection of gravitational waves in 2015 it was recognised that, due to the issues described in the previous section, the best chance of reliably detecting electromagnetic counterparts quickly was with a network of dedicated, robotic, wide-field telescopes \citep{Darren}. On most nights the telescopes would carry out an all-sky survey on a fixed grid, but when a gravitational-wave alert was received they could quickly change to covering the skymap. As robotic telescopes, they would be quicker to respond than human-operated telescopes, meaning follow-up observations could begin automatically just minutes after an alert was issued.
Rapidly covering large areas of the sky maximises the chance of finding any possible counterpart before it fades from view. However, any such observations are going to be contaminated by a huge number of unrelated transient and variable objects (recall the hundreds of thousands of detections by ZTF when searching for the S190425z counterpart as described in \aref{sec:followup}). Therefore, the difficulty in finding any counterpart is not just in covering the large search area, but also in being able to distinguish the needle from the haystack of other astronomical transients and variables. The best way to reduce the number of candidate detections is temporally: any sources with detections prior to the gravitational-wave event time could not be the transient associated with it. In order to perform this temporal filtering, the project needs an as-recent-as-possible image of the same patch of sky, which necessitates the telescope carrying out an all-sky survey with as low a cadence (the time between observing each point of the sky) as possible.
\citet{Darren} outlined the original GOTO proposal, including the requirements for each telescope. An instantaneous field of view of 50--100 square degrees and a limiting magnitude of $R\simeq21$ in 5 minutes was suggested to be able to have the best chance of observing electromagnetic counterparts, with the field of view ideally split between multiple independent mounts to allow for covering the irregularly-shaped gravitational-wave skymaps. These telescopes would need to respond quickly to alerts, covering the skymaps within a single night before any possible counterpart faded from view, necessitating the use of fast-slewing, robotic mounts. Based on the sensitivity regions of the LIGO-Virgo network, the best sites for these telescopes would be in the North Atlantic (e.g.\@ the Canary Islands) and Australia, at least until the detectors in Japan and India come online (as described in \aref{sec:gw_detecting}).
GOTO is, of course, not the only project searching for gravitational-wave counterparts, and the optimal follow-up strategy has been an area of much analysis in recent years \citep[see, for example,][]{BlackGEM_strategy, ZTF_strategy, GW_strategy}. Other contemporary projects include the Zwicky Transient Facility at the Palomar Observatory in California, previously mentioned in \aref{sec:followup}, which captures 47~square~degrees to a depth of $r=20.5$ \citep{ZTF}, and the BlackGEM project, currently under construction at La Silla Observatory in Chile, which aims to go deeper ($q=23$ in five minutes), albeit initially with a smaller footprint of $\sim$8~square~degrees. \aref{tab:rivals} shows a comparison of the planned GOTO network to other projects.
\newpage
\begin{table}[t]
\begin{center}
\begin{tabular}{r|ccccccl} %
& First & $D$ & FoV & Limiting & Cost & Etendue & \\
Name & light & (m) & (deg$^2$) & magnitude & (M\$) & (m$^2$deg$^2$) & \\
\midrule
ATLAS &
2015 &
0.5$\times$2 &
60 &
$g=19.3$ (\SI{30}{\second}) &
2 &
12 &
\tablefootnote{~~\citet{ATLAS}}
\\
Pan-STARRS1 &
2008 &
1.8 &
7 &
$g=22.0$ (\SI{43}{\second}) &
25 &
18 &
\tablefootnote{~~\citet{Pan-STARRS}}
\\
ZTF &
2017 &
1.2 &
47 &
$g=20.8$ (\SI{30}{\second}) &
24 &
54 &
\tablefootnote{~~\citet{ZTF}}
\\
\textit{BlackGEM} &
\textit{2019} &
0.65$\times$3 &
8.1 &
$q=23$ (\SI{300}{\second}) &
3 &
2.7 &
\tablefootnote{~~\citet{BlackGEM}}
\\
\textit{LSST} &
\textit{2020} &
8.4 &
9.6 &
$g=25.6$ (\SI{15}{\second}) &
500 &
532 &
\tablefootnote{~~\citet{LSST}}
\\
\\
GOTO-4 &
2017 &
(0.4$\times$4) &
18 &
$g=19.5$ (\SI{60}{\second}) &
1.0 &
2.3
\\
\textit{GOTO-8} &
\textit{2019} &
(0.4$\times$8) &
40 &
$g=19.5$ (\SI{60}{\second}) &
1.5 &
5
\\
\textit{2$\times$GOTO-8} &
\textit{2020} &
(0.4$\times$8$)\times$2 &
80 &
$g=19.5$ (\SI{60}{\second}) &
2.5 &
10
\\
\textit{4$\times$GOTO-8} &
\textit{2021} &
(0.4$\times$8$)\times$4 &
160 &
$g=19.5$ (\SI{60}{\second}) &
4.0 &
20
\\
\end{tabular}
\end{center}
\caption[Comparison of projects involved in gravitational-wave follow-up]{
A comparison of selected projects involved in following-up gravitational-wave detections. Given for each is the year it saw first light, the primary mirror diameter(s) (note GOTO has 4 or 8 small telescopes per mount, and ATLAS, BlackGEM and later GOTO stages include multiple mounts), the total instantaneous field of view, the limiting magnitude for a single exposure, estimated total cost, and etendue (the product of the primary mirror area(s) and the field of view). The projects in \textit{italics} are yet to be commissioned and so use predicted values. %
}\label{tab:rivals}
\end{table}
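The etendue column of the table can be reproduced from the other columns. A minimal sketch, under the assumption (mine, not stated in the table) that each aperture in an array tiles its own distinct patch of sky, so the total etendue is a single aperture's collecting area multiplied by the total field of view:

```python
import math

def etendue(aperture_diameter_m, total_fov_deg2):
    """Etendue (m^2 deg^2): collecting area of one aperture times the total
    field of view, for an array whose apertures tile distinct patches."""
    collecting_area = math.pi * (aperture_diameter_m / 2.0) ** 2
    return collecting_area * total_fov_deg2

# Spot-check against the table (values there are rounded):
print(round(etendue(0.40, 18), 1))  # GOTO-4 -> 2.3
print(round(etendue(8.4, 9.6)))     # LSST   -> 532
```

The same formula recovers the other rows to within rounding, e.g.\ ATLAS ($2\times\SI{0.5}{\metre}$ units sharing \SI{60}{\square\deg}) gives $\approx 12$.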
\end{colsection}
\subsection{Hardware design}
\label{sec:goto_design}
\begin{colsection}
The GOTO prototype, as shown in \aref{fig:goto_photo}, uses an array of four \SI{40}{\cm} Unit Telescopes (UTs)\glsadd{ut}, attached to a boom-arm on a single robotic mount with a slew speed of \SI{4}{\degree} per second. Using multiple smaller instruments on one mount is a design already used by several survey and wide-field telescopes, such as the All-Sky Automated Survey for Supernovae \glsadd{asassn} \citep[ASAS-SN,][]{ASAS-SN} and SuperWASP \glsadd{wasp} \citep[Wide Angle Search for Planets,][]{SuperWASP}. The array design provides a cost-effective way of reaching the desired wide field of view with multiple small telescopes instead of one large one, and the modular nature also allows more unit telescopes to be added to a mount as more funding becomes available.
\newpage
The prototype unit telescopes and mount were constructed by APM Telescopes\footnote{\url{https://www.professional-telescopes.com}}. Each UT is a fast Wynne-Newtonian astrograph (see \aref{sec:optics}) with a focal ratio of f/2.5, and each uses off-the-shelf camera hardware from \glsfirst{fli}\footnote{\url{https://www.flicamera.com}}. The FLI MicroLine cameras each use a 50 megapixel sensor (see \aref{sec:chip_layout}), which gives each UT a field of view of approximately 6 square degrees with a plate scale of \SI[per-mode=symbol]{1.24}{\arcsec\per\pixel}. Three \SI{60}{\second} GOTO images can reach a limiting magnitude of $g \approx 20$ \citep[see \aref{sec:onsky_comparison};][]{S190425z_GOTO}. A full GOTO telescope will have eight UTs on a GE-300 German equatorial (parallactic) mount, giving an overall field of view of \SI{40}{\square\deg} (accounting for some overlap between cameras). A sample frame taken with one UT is shown in \aref{fig:fov}, which also gives a comparison of the GOTO field of view to some of the other projects from \aref{tab:rivals}.
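The quoted per-UT field of view is consistent with the sensor pixel count and plate scale; a rough check, in which the difference between the raw four-camera coverage and the quoted \SI{18}{\square\deg} tile is accounted for by the overlap between cameras:

```python
def fov_deg2(n_pixels, plate_scale_arcsec_per_pixel):
    """Sky area (square degrees) imaged by a detector, from its pixel
    count and plate scale."""
    pixel_area_deg2 = (plate_scale_arcsec_per_pixel / 3600.0) ** 2
    return n_pixels * pixel_area_deg2

per_ut = fov_deg2(50e6, 1.24)
print(round(per_ut, 1))      # ~5.9 deg^2 per unit telescope
print(round(4 * per_ut, 1))  # ~23.7 deg^2 raw; ~18 deg^2 after camera overlap
```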
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/fov.pdf}
\end{center}
\caption[GOTO's field of view compared to other projects]{
GOTO's field of view compared to other projects. On the left is a commissioning image of M31 taken with one of GOTO's cameras, showing the wide field of view of a single unit telescope. Four of these UTs form the initial 18 square degree survey tile, to be increased to 40 square degrees in the full 8-UT system. On the right, the GOTO FoV is compared to two similar projects: the Zwicky Transient Facility \glsadd{ztf} \citep[ZTF,][]{ZTF} and the Large Synoptic Survey Telescope \glsadd{lsst} \citep[LSST,][]{LSST}.
}\label{fig:fov}
\end{figure}
Each unit telescope is fitted with a filter wheel with several wide-band coloured filters (see \aref{sec:filters}), which can be used for additional source identification. The telescopes are housed in an Astrohaven clamshell dome\footnote{\url{https://www.astrohaven.com}}, which when fully open allows an unrestricted view of the sky. This means that the telescope does not need to waste time waiting for the dome to move when slewing to a new position, and can instead quickly move from observing one portion of the sky to another.
Each GOTO site is anticipated to host two domes, as shown in \aref{fig:goto_render}, with a total of 16 unit telescopes giving an instantaneous field of view of approximately \SI{80}{\square\deg}. Having two independent mounts will allow the sky to be surveyed at a higher cadence (every 2--3 days, see \aref{sec:survey_sims}), and also gives more options for survey and transient follow-up strategies. For example, the two mounts could observe different patches of the sky in order to cover the skymap as fast as possible, or they could combine to observe the same field to a greater depth. Alternatively, each mount could observe using a different filter, to get immediate multi-colour information on any detected sources.
\newpage
\end{colsection}
\subsection{Image processing and candidate detection}
\label{sec:gotophoto}
\begin{colsection}
The GOTO project produces a huge amount of data to be handled and processed. Each image taken by the 50~megapixel cameras is approximately 100~MB;\@ GOTO typically takes three \SI{60}{\second} exposures per pointing and on average observes $\sim$130 targets each night. The prototype 4-UT system alone therefore produces approximately 150~GB of data each night, and a full multi-site GOTO system would produce close to 400~TB per year. For real-time transient detection, each set of images needs to be processed in the approximately three minutes between each observation, and due to the wide field of view each image will contain many thousands of sources. Processing the images is therefore not an easy task. In order to do this, a real-time data flow system has been developed called GOTOflow, which is used to run the GOTO pipeline GOTOphoto. The key components of the pipeline are shown in \aref{fig:gotoflow}.
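These data rates follow from the figures above. A back-of-envelope sketch; the 32-UT configuration used for the multi-site estimate (two sites with two 8-UT mounts each) is an assumption, and the yearly total ignores weather and other downtime, which brings it closer to the quoted 400~TB:

```python
MB_PER_IMAGE = 100           # one ~50-megapixel camera frame
EXPOSURES_PER_POINTING = 3
POINTINGS_PER_NIGHT = 130    # average quoted above

def nightly_volume_gb(n_uts):
    """Raw data volume (GB) produced per night by n_uts unit telescopes,
    each imaging every pointing."""
    images = n_uts * EXPOSURES_PER_POINTING * POINTINGS_PER_NIGHT
    return images * MB_PER_IMAGE / 1000.0

print(nightly_volume_gb(4))                  # prototype: 156 GB/night (~150 GB)
print(nightly_volume_gb(32) * 365 / 1000.0)  # 32 UTs: ~456 TB/yr
```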
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/pipeline.png}
\end{center}
\caption[The GOTO dataflow]{
A flowchart showing the key components of the GOTO dataflow.
}\label{fig:gotoflow}
\end{figure}
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/diffimg.png}
\end{center}
\caption[The detection of SN 2019bpc through difference imaging]{
The detection of supernova SN 2019bpc by the GOTOphoto difference imaging pipeline. The new exposure on the left (a) has the reference image (b) subtracted to give the difference image on the right (c), where the new source is clearly visible.
}\label{fig:diffimg}
\end{figure}
GOTOphoto calibrates each image and then combines each set of three to increase the effective depth. New or changed sources are detected using difference imaging, as shown in \aref{fig:diffimg}. This requires a master reference image at each position in the sky, which is continuously built up from the all-sky survey. Any apparently new sources are then added to a detection database, and are also checked against historic observations to discount any sources which had been observed previously (for example, variable stars on the edge of the detection depth might fade in and out of visibility). Candidate sources are also checked against other catalogues such as Pan-STARRS \citep{Pan-STARRS}, as well as against lists of known minor planets and other transients.
New candidates are presented for human vetting through a web interface called the GOTO Marshal. Collaboration members can check each candidate and flag them either as potential astrophysical sources or junk detections. Work is ongoing across the collaboration on machine-learning projects for automatic transient detection and identification, which would use the human responses from the Marshal to train an automatic classifier.
\newpage
\end{colsection}
\subsection{Deployment and future expansion}
\label{sec:goto_expansion}
\begin{colsection}
The 4-UT GOTO prototype shown in \aref{fig:goto_photo} was inaugurated in July 2017. The telescope is located at the \glsfirst{orm_lapalma} on La Palma in the Canary Islands, at the site shown in \aref{fig:orm}. After some hardware issues (see \aref{sec:hardware_commissioning}), the telescope is now fully operational: since February 2019 GOTO has been carrying out an all-sky survey, and since the start of the third LIGO-Virgo observing run (O3, see \aref{sec:gw_detections}) in April 2019 it has been following-up gravitational-wave events when they occur.
The next stage in the GOTO project will be the addition of the second set of four unit telescopes to the existing mount, due in late 2019. Funding for a second mount with another set of four unit telescopes has already been secured, and a second dome has already been constructed on La Palma. This second mount is expected to be commissioned in 2020.
La Palma is one of the best observing sites in the northern hemisphere, and is already home to several telescopes operated by GOTO collaboration members. It was therefore an obvious choice for the location of the first GOTO node. Ultimately, a second, complementary node is planned to be built in the southern hemisphere, most likely in Australia. Having a site in both hemispheres allows the entire sky to be surveyed, and, as Australia is on the opposite side of the Earth from La Palma, the two sites would provide almost 24-hour coverage of gravitational-wave alerts. This second node would also host two GOTO telescopes with 8 UTs each, and the two sites combined would be able to survey the entire visible sky every 1--2 days (see \aref{sec:survey_sims}). As GOTO grows, it is anticipated that the telescopes at both sites will be operated as a single observatory, meaning observation scheduling will be optimised for each site and the output data will be unified into a single detection database.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/orm.png}
\end{center}
\caption[The location of GOTO and other telescopes on La Palma]{
The location of GOTO and other telescopes at the Observatorio del Roque de los Muchachos on La Palma, including the sites of proposed and under-construction projects in \textcolorbf{YellowOrange}{yellow}. GOTO, marked in \textcolorbf{BlueGreen}{blue}, is located on the east side of the observatory, close to the edge of the caldera. The lower image shows the eastern part of the ORM and the telescopes surrounding GOTO, including the other Warwick-operated telescopes (W1m and SuperWASP) and the Sheffield/Durham-operated pt5m (on the roof of the WHT building) in \textcolorbf{ForestGreen}{green}. Satellite images taken from Google Maps.
}\label{fig:orm}
\end{figure}
\newpage
\end{colsection}
\section{Thesis outline}
\label{sec:outline}
\begin{colsection}
This thesis details my work on the GOTO project carried out between 2015 and 2019.
My primary role within the collaboration was to develop the software needed for GOTO to operate as an autonomous telescope. The hardware design of GOTO --- with multiple unit telescopes attached to each mount --- required custom software which could operate all of the cameras synchronously. Each unit telescope is also equipped with a filter wheel and a focuser, which also need to be controlled in parallel. Additional control software was required to move the mount, open and close the dome, and control any other pieces of on-site hardware.
The next stage of the control system development was writing the software that would operate the telescope without human involvement. The routine nightly operations (move to a target, take an exposure, repeat until sunrise) would need to be supported, and in addition standard observational tasks, such as taking calibration frames and focusing the telescopes, needed to be automated. Crucially, the control system required robust and reliable systems for monitoring the weather and the hardware status, and in the case of an emergency it had to be able to close the dome or recover from any problems without immediate human intervention.
In order for a telescope to observe autonomously it needs to be able to know what targets to observe and when. Therefore GOTO needed an automatic scheduling system, which had to be able to handle two distinct operational modes:\@ carrying out the all-sky survey for the majority of the time, but quickly switching to follow up any gravitational-wave alerts (or other transient events) when triggered. For the all-sky survey a series of tiled pointings had to be defined, and alert observations are mapped onto the same grid to enable the pipeline to carry out difference imaging. Reacting quickly to any alerts is vital, so GOTO needed a robust alert monitoring system that could determine the optimal follow-up strategy for each event.
Finally, as described in \aref{sec:goto_expansion}, GOTO is envisioned as a modular project with an increasing number of telescopes and sites, eventually operating as a multi-site observatory. This future expansion had to be taken into account in the design of both the hardware control and scheduling systems.
This thesis is arranged as follows:
\begin{itemize}
\item In \nref{chap:hardware} I describe my work characterising the GOTO hardware and optical systems prior to its deployment on La Palma.
\item In \nref{chap:gtecs} I introduce the software I developed to control the GOTO hardware.
\item In \nref{chap:autonomous} I describe the additional autonomous level of software I wrote to allow GOTO to operate as a robotic telescope.
\item In \nref{chap:scheduling} I examine in detail the functions used to prioritise and schedule GOTO observations.
\item In \nref{chap:tiling} I describe how GOTO observations are mapped onto an all-sky grid, and how it is used to observe gravitational-wave skymaps.
\item In \nref{chap:alerts} I describe the software systems used to receive and process astronomical alerts, including gravitational-wave detections.
\item In \nref{chap:commissioning} I detail my work during the deployment of the GOTO prototype on La Palma and subsequent control system development.
\item In \nref{chap:multiscope} I examine the future expansion plans of GOTO, and detail simulations I carried out to quantify the benefits of multiple telescopes and sites.
\item Finally, in \nref{chap:conclusion} I present concluding remarks and some suggestions for future project development.
\end{itemize}
\end{colsection}
\chapter{A Multi-Telescope Observatory}
\label{chap:multiscope}
\chaptoc{}
\section{Introduction}
\label{sec:multiscope_intro}
\begin{colsection}
In this chapter I describe the potential future expansion of the GOTO project, with additional telescopes at the current site on La Palma and a future second site in Australia.
\begin{itemize}
\item In \nref{sec:multi_tel} I give an outline of the additional work required to create a multi-site scheduling system, and how the existing simulation code can be modified to approximate the required functionality.
\item In \nref{sec:gw_sims} I describe simulations showing the benefits of additional telescopes observing gravitational-wave alerts in order to locate the counterpart source.
\item In \nref{sec:survey_sims} I describe further simulations detailing the effects of additional telescopes on the all-sky survey.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated, and has not been published elsewhere.
\end{colsection}
\section{Scheduling for multiple telescopes}
\label{sec:multi_tel}
\begin{colsection}
As described in \aref{sec:goto_expansion}, the ultimate aim of the GOTO project is to have multiple nodes around the world. Specifically, the plan calls for two full GOTO-8 systems on La Palma and another two at a second site in Australia (either at Siding Spring Observatory in New South Wales or Mt Kent Observatory in Queensland). It is anticipated that the G-TeCS scheduling system described in \aref{sec:observing} will be extended to cover all these telescopes, so that they each query a single observation database and a master scheduler decides what target each telescope should be observing at a given time. This will require a large amount of work to modify both the database structure and the scheduling functions and, as this is not currently implemented into the existing scheduler, several workarounds are needed in order to create realistic multi-telescope simulations.
\end{colsection}
\subsection{Multiple observing telescopes}
\label{sec:multi_tel_scheduling}
\begin{colsection}
One of the current restrictions in the scheduling functions (as described in \aref{chap:scheduling}) is that they only ever expect a single pointing in the observation database to be marked as \texttt{running} at any one time. It is explicitly coded into the scheduler that detecting multiple running pointings should raise a critical error, as certain bugs early in development could cause this undesired state to occur. Obviously once the system is expanded to multiple telescopes this restriction will have to be lifted, but for running simulations a simplification was required to work around it.
It is currently planned that each telescope will have its own pilot and hardware daemons completely independent of each other, with the only point of overlap being the shared scheduler (and, for each site, the conditions daemon). This makes the master scheduler even more complicated, as each pilot will be querying it completely out-of-sync. If telescope 1 has just finished observing and makes a scheduler check, the scheduler will need to know what telescope 2 is observing, so as not to return the same pointing to telescope 1 (although in some cases having both telescopes observe the same target might be desired, adding yet another level of complexity). But should both telescopes finish observing at the same time then the scheduler will need some way to decide which telescope is assigned which target, perhaps based on the slew time to each target from the telescope's current position.
As none of the above has yet been implemented into the existing code, a simplified system was required in order to simulate multiple telescopes. The existing fake pilot code (described in \aref{sec:goto_sims}) already contains calls to the real scheduling functions, which return the highest-priority pointing at any given time. The first simplification was to make the function instead return the top $N$ highest-priority pointings, where $N$ is the number of currently observing telescopes. In lieu of any better algorithm to decide which telescope observes which target, the code simply gives the highest-priority pointing to telescope 1, the second highest to telescope 2, and so on. Should only one valid pointing be returned then only telescope 1 will observe, while telescope 2 will remain ``parked'' until it is needed (in reality the second telescope would default to observing the all-sky survey until it also has something to do).
The second simplification was to ensure the telescopes always stay in sync when observing. This was achievable for the simulations described in this chapter because every pointing uses the same exposure set (three \SI{60}{\second} exposures), and therefore they take the same amount of time to observe. However, in reality each telescope would take a different amount of time to slew to its target, and so they would quickly get out of sync. The fake pilot code includes the slew time for each telescope to acquire its new target, and so in order to remain synchronised the simulations simply make every telescope wait for the one with the furthest distance to slew. This ensures both telescopes start and finish their observations at the same time, although it does mean a small amount of observing time is ``wasted'' while one telescope is waiting for the other to be in position.
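The two simplifications above can be sketched as follows. The function names and the \texttt{slew\_time} callable are illustrative stand-ins, not the actual G-TeCS API:

```python
def assign_pointings(ranked_pointings, telescopes):
    """Give the highest-priority pointing to telescope 1, the next to
    telescope 2, and so on; telescopes without a valid pointing are
    left parked (None)."""
    return {tel: (ranked_pointings[i] if i < len(ranked_pointings) else None)
            for i, tel in enumerate(telescopes)}


def synchronised_wait(assignments, slew_time):
    """Keep the telescopes in sync by waiting for the slowest slew.

    slew_time(telescope, pointing) is a hypothetical stand-in for
    however long that telescope takes to acquire its new target.
    """
    times = [slew_time(tel, p) for tel, p in assignments.items()
             if p is not None]
    return max(times, default=0)
```

All telescopes then start their next (fixed-length) exposure set together after the longest slew has completed.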
\newpage
\end{colsection}
\subsection{Multiple observing sites}
\label{sec:multi_site_scheduling}
\begin{colsection}
The modifications to the scheduler described in the previous section provide a good approximation of the response of an arbitrary number of telescopes observing at one site. However, expanding the code further to simulate observations from multiple sites adds further complexity.
The scheduler functions (see \aref{sec:ranking}) need to know which site observations are being made from in order to correctly sort pointings. The visibility constraints (see \aref{sec:constraints}) check if each target is above the local horizon, as well as the local Sun altitude, and the tiebreak parameter (see \aref{sec:breaking_ties}) takes into account the airmass of each target. These are simple parameters to calculate if you are only observing from one site, but once there are telescopes at multiple sites querying the scheduler at the same time then the responses will need to take the position of each into account.
This could lead to problems when returning the highest priority pointings. For example, with two telescopes observing from different sites the scheduler could return the highest priority pointing visible from each. If they are different then each telescope can then observe the best target for its site; however if both telescopes were observing at the same time, and the visible portions of the sky from both sites overlapped, then it is very possible that the same pointing would be the highest priority from both sites. Assuming they should not both observe the same target at the same time, the scheduler would need to choose which telescope to assign that pointing to and then recalculate a different target for the other telescope. What would be better is to use the same method described in \aref{sec:multi_tel_scheduling}, and have the scheduler always return the top $X$ pointings, where $X = N_\text{site1} + N_\text{site2} + \cdots$ is the total number of telescopes across the globe. In reality targets would need to be assigned to telescopes based on their airmass at each site, or the slew time from the current target, but as in \aref{sec:multi_tel_scheduling} for the simulations they can just be assigned to each telescope in order.
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/nights.pdf}
\end{center}
\caption[Night times throughout the year for GOTO sites]{
Night times throughout the year for GOTO sites. Night here is defined as when the Sun is \SI{12}{\degree} below the local horizon.
}\label{fig:nights}
\end{figure}
However, saying that the scheduler needs to find the top $X$ pointings, where $X$ is the total number of telescopes at all sites, is not strictly true --- it actually only needs to return enough pointings to satisfy the telescopes at the sites that are currently observing. In other words, if there are two sites but one is shut down, due to weather or because it is daytime there, the scheduler only needs to consider the single site. Conveniently, for simulating the proposed GOTO network this is always true: by defining night as when the Sun is below \SI{-12}{\degree} altitude, the periods of darkness between La Palma and either of the two proposed Australian sites never overlap. This is shown in \aref{fig:nights}, where there is a constant ``buffer zone'' between night ending at one site and beginning at the other. This case only applies for a very limited number of combinations of sites. As shown in \aref{fig:site_nights} there is a tear-drop-shaped area on the Earth's surface which contains the locations where the local night will never overlap with night on La Palma, comprising only eastern Australia, New Zealand and Melanesia. For a Sun altitude limit of \SI{-12}{\degree} this area contains just $6.6 \%$ of the Earth's surface.
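The non-overlapping-nights condition can be checked numerically with a crude solar-position model (circular-orbit declination, no equation of time or refraction). This is adequate here because the buffer between the two sites' nights is tens of minutes, much larger than the model error; the site coordinates below are approximate:

```python
import math


def sun_altitude(lat_deg, lon_deg, day_of_year, utc_hours):
    """Approximate solar altitude in degrees.

    Circular-orbit declination model; ignores the equation of time and
    refraction, which is good to roughly a degree."""
    decl = math.radians(
        23.44 * math.sin(2 * math.pi * (day_of_year - 81) / 365.25))
    # Local solar time is approximately UTC + longitude/15 (east positive)
    hour_angle = math.radians(15.0 * (utc_hours + lon_deg / 15.0 - 12.0))
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(max(-1.0, min(1.0, sin_alt))))


def nights_overlap(site1, site2, sun_limit=-12.0, step_hours=0.25):
    """True if 'night' (Sun below sun_limit) ever occurs at both sites
    at the same instant, sampled every step_hours over a full year."""
    for day in range(1, 366):
        for step in range(int(24 / step_hours)):
            t = step * step_hours
            if (sun_altitude(site1[0], site1[1], day, t) < sun_limit
                    and sun_altitude(site2[0], site2[1], day, t) < sun_limit):
                return True
    return False


LA_PALMA = (28.76, -17.88)        # (latitude, longitude) in degrees
SIDING_SPRING = (-31.27, 149.07)
```

Within this simplified model `nights_overlap(LA_PALMA, SIDING_SPRING)` is `False`, while a hypothetical Chilean site at `(-29.0, -70.7)` would share hours of darkness with La Palma.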
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/sites.pdf}
\end{center}
\caption[Locations on the Earth with non-overlapping night times]{
Finding locations on the Earth with non-overlapping night times. In the upper plot the filled areas show the locations on the Earth where local night never overlaps with night on La Palma, at any time of the year. The different colours denote different night time definitions, with the \SI{-12}{\degree} definition also being surrounded by a white dashed line. The location of the GOTO sites considered (\textcolor{NavyBlue}{La Palma}, \textcolor{Green}{Siding Spring} and \textcolor{Red}{Mt Kent}) are marked with stars and their antipodes are marked with hollow circles. The equivalent areas for Siding Spring and Mt Kent are also shown by the coloured dashed lines surrounding their antipodes, for the \SI{-12}{\degree} night definition only. The lower plots show how the region varies over the course of a year, from the solstices via the equinox (the plot is identical for the two equinoxes and therefore is only shown once).
}\label{fig:site_nights}
\end{figure}
The fortuitous location of the proposed Australian sites means implementing telescopes at multiple sites into the simulation code was fairly simple. Each simulation would consider either only La Palma, or La Palma and one of the Australian sites, as these are the only anticipated scenarios for GOTO.\@ In the two-site cases only the telescopes at one of the sites would ever be observing at any one time, and so the simulation only requires the multi-telescope implementation as described in \aref{sec:multi_tel_scheduling}.
Should simulations be desired for other sites on the Earth that are not within the small area with non-overlapping nights shown in \aref{fig:site_nights}, for example other potential GOTO-South sites in South Africa or Chile, then the simulation code would need to be modified to take this into account. This has not yet been done as it was not required for the scenarios described here. In principle, the fact the sites overlap could be ignored, and the simulations can be run for each site as a stand-alone observatory and then combined afterwards with the results from other, stand-alone simulations. This, however, removes the benefit of the sites acting together and using a common observation database, and would lead to multiple observations of the same targets from each site.
\end{colsection}
\subsection{Simulating different survey grids}
\label{sec:multi_grid_scheduling}
\begin{colsection}
One fundamental feature of the existing G-TeCS code is that observations are carried out on a fixed all-sky grid, as defined in \aref{chap:tiling}. When considering multiple telescopes this is both useful in some ways and limiting in others. Having a fixed grid that is common to all telescopes is vital for the GOTO image subtraction pipeline GOTOphoto, as it requires observations of the same part of the sky to create reference frames for difference imaging (see \aref{sec:gotophoto}). This is why a common grid is anticipated to form the base of the global system. By sharing the same tiles each telescope can contribute to the same all-sky survey grid, as well as efficiently coordinate mapping out a gravitational-wave skymap.
\newpage
However, sharing the grid requires all of the telescopes to have essentially the same field of view. There is some leeway in the exact field of view of each telescope array; the grid tiles are defined to leave a slight overlap around the edge (see \aref{fig:4ut_footprint}). But if the field of view of the telescope array is much larger than the tile size then the pointings will be too close together and therefore inefficient. Even worse, if the field of view of the telescope array is much smaller than the defined tile size it would lead to gaps in the sky coverage.
For the proposed GOTO system with near-identical GOTO-8 units around the world this is not an issue, but it should be recognised as a limitation of not just the simulations but the whole G-TeCS control system. One potential case where this may be an issue is when commissioning GOTO-South. If it spends time as a GOTO-4 system similar to La Palma before getting the second set of unit telescopes to bring it up to a full set of eight, then it will be observing concurrently with one or two GOTO-8 systems on La Palma. This is a likely enough situation that it was considered in the gravitational-wave simulations as described in \aref{sec:gw_sims}, using the workaround of two independent simulations mentioned previously. How this scenario would be dealt with within a real implementation of G-TeCS is a problem that needs development in the future, should it prove to be necessary.
\end{colsection}
\section{Gravitational-wave follow-up simulations}
\label{sec:gw_sims}
\begin{colsection}
As the primary mission of the GOTO project is to follow up gravitational-wave detections, it is important to consider what benefit additional telescopes will bring to the project. In order to do this, simulations were run on the LIGO First Two Years mock skymaps \citep{First2Years}, a small selection of which were previously used for the scheduler simulations described in \aref{sec:scheduler_sims}. The full sample contained 1105 events, each based on simulating a binary neutron star coalescence at a particular sky position and distance. Each event had two skymaps generated: the first using the rapid BAYESTAR pipeline \citep{BAYESTAR}, which is typically available minutes after the event, and the second using the LALInference code \citep{LALInference}, which can take hours or days to complete. For these simulations, therefore, only the BAYESTAR skymaps were considered in order to focus on GOTO's initial follow-up, although an extension to the simulations could include the effects of the second updated skymap being processed and added to the database some hours after the event.
\end{colsection}
\subsection{Event visibility}
\label{sec:gw_visability}
\begin{colsection}
The simulations were designed to begin at the time the event was detected, and then simulate the next 24 hours of observations. This guaranteed one night's worth of observing at each site, although split into two halves if the event occurred during the night. The time each event occurred was taken from the simulated skymaps, and does not account for the delay between the event being detected and the alert being issued and processed by the G-TeCS sentinel. Events were uniformly distributed in time of occurrence during the day, and they all occurred over a two month period spanning either side of the 2010 September equinox as shown in \aref{fig:f2y_times}. It is not clear why this range of dates was selected, although surrounding one of the equinoxes might have been an attempt to reduce bias towards observers from either hemisphere. However, the events are not entirely equally distributed either side of the equinox (03:09 UTC on 2010--09--23): the first event occurs 33 days before the equinox and the last 27 days after. Overall 64\% of events occurred before the September equinox and 36\% after, which leads to a slight bias in visibility towards southern telescopes as they experience longer nights before the equinox (in the southern winter).
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/f2y_times.pdf}
\end{center}
\caption[Date and time distribution of events in the First Two Years sample]{
Date and time distribution of the ``First Two Years'' events. The night periods are shown in \textcolorbf{NavyBlue}{blue} for La Palma and \textcolorbf{Green}{green} for Siding Spring, and the date of the equinox is marked by the \textcolorbf{Red}{red} dashed line.
}\label{fig:f2y_times}
\end{figure}
Events were uniformly distributed across the sky, and uniform in distance cubed \citep{First2Years}. Although each source included distance information this was not taken into account in the simulations, aside from determining the event strategy to use (all were well within the \SI{400}{\mega\parsec} definition for close neutron star events defined for GOTO-alert in \aref{sec:event_strategy}). Future simulations could use the distance to the event to estimate a light curve based on the observed kilonova for GW170817 \citep{GW170817_followup} and use it to predict how long each event would be visible to GOTO (the GW170817 transient AT~2017gfo faded below GOTO's 20 mag limit after 2.5 days).
\newpage
The first stage of simulating observations of each event was to determine if the source location was visible from the chosen site(s) in the 24 hours after the event occurred. In the cases where this was not true there was no point running the full simulation, as the source would never be observed. Each event was classified into one of four categories: %
\begin{itemize}
\item \textbf{Not visible --- too close to the Sun}. These events had sources that were too close to the Sun to observe within the 24 hour period after the event, regardless of the site considered. This was defined as the source location being within \SI{42}{\degree} of the Sun (\SI{-12}{\degree} from the definition of night time plus \SI{30}{\degree} from the altitude limit). It is a fixed fraction of the sky: 5280 sq deg, or 13\% of the celestial sphere\footnote{The area of a circle with radius $r$ on the surface of a sphere with radius $R$ is $2\pi R^2(1-\cos(r))$. The radius of the celestial sphere $R=\SI{360}{\degree}/2\pi \approx \SI{57.3}{\degree}$.}.
\item \textbf{Not visible --- below declination limit}. The sources for these events fell within the region of the sky that is never visible due to the limited declination range visible from a given site. For example, using the GOTO \SI{30}{\degree} altitude limit a telescope on La Palma (latitude \SI{28}{\degree} N) can see a band of sky between \SI{+88}{\degree} and \SI{-32}{\degree} declination. Sources outside of this region (that are not already excluded due to being too close to the Sun) would therefore never be observable from the site, but could be observed from other locations. At the equator this band covers 87\% of the sky over the course of a year, at latitudes of $\pm \SI{30}{\degree}$ 75\% of the sky is visible, falling to just 25\% at the poles\footnote{The area of a segment on a sphere between angles $\theta$ and $\phi$ is $2 \pi R^2 (\cos(\theta)-\cos(\phi))$.}.
\item \textbf{Not visible --- daytime}. These event sources are within the visible declination range, but are not observable from a given site during the 24 hour period after the event as they are only above the horizon during the day. Unlike the fraction of the sky within the circular \SI{42}{\degree} region around the Sun, these positions could still be observable from other sites at different latitudes.
\item \textbf{Visible}. The source for this event falls outside of any of the above three areas, and therefore is nominally above the \SI{30}{\degree} altitude limit at some point during night time within 24 hours after the event. The portion of the sky that is visible in one night from a given site depends on the latitude of the site and the time of year.
\end{itemize}
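The footnoted area formulas above are easy to check numerically. Written in terms of declination (with $\delta = \SI{90}{\degree} - \theta$), the fraction of the celestial sphere becomes $(1-\cos r)/2$ for a cap of radius $r$ and $(\sin\delta_\text{max} - \sin\delta_\text{min})/2$ for a declination band; a minimal sketch:

```python
import math


def cap_fraction(radius_deg):
    """Fraction of the celestial sphere within radius_deg of a point.

    Cap area = 2*pi*R^2*(1 - cos r); dividing by the full sphere area
    4*pi*R^2 gives (1 - cos r) / 2."""
    return (1 - math.cos(math.radians(radius_deg))) / 2


def band_fraction(dec_min_deg, dec_max_deg):
    """Fraction of the sphere in a declination band.

    The segment area 2*pi*R^2*(cos theta - cos phi) becomes
    2*pi*R^2*(sin dec_max - sin dec_min) in declination."""
    return (math.sin(math.radians(dec_max_deg))
            - math.sin(math.radians(dec_min_deg))) / 2


# 42 deg Sun-avoidance cap: ~13% of the sky (~5300 square degrees)
sun_cap = cap_fraction(42)

# Declination band visible with a 30 deg altitude limit from latitude
# phi: phi - 60 deg to phi + 60 deg, clipped at the poles
equator = band_fraction(-60, 60)   # ~87% from the equator
lat30 = band_fraction(-30, 90)     # 75% from latitude +/- 30 deg
pole = band_fraction(30, 90)       # 25% from the poles
```

These reproduce the quoted 13\%, 87\%, 75\% and 25\% figures.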
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc} %
\multirow{3}{*}{Night} & \multicolumn{3}{c}{Site} \\
& La Palma & Siding Spring & Mt Kent \\
& (\SI{28}{\degree} N) & (\SI{31}{\degree} S) & (\SI{27}{\degree} S) \\
\midrule
\\
March & \textcolorbf{Green}{57.1\% visible}
& \textcolorbf{Green}{56.3\% visible}
& \textcolorbf{Green}{59.6\% visible}
\\
equinox & {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{23.4\%} $\cdot$
\textcolorbf{Blue}{6.7\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{24.6\%} $\cdot$
\textcolorbf{Blue}{6.2\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{22.5\%} $\cdot$
\textcolorbf{Blue}{5.0\%})}
\\[0.5cm]
June & \textcolorbf{Green}{50.4\% visible}
& \textcolorbf{Green}{61.9\% visible}
& \textcolorbf{Green}{62.7\% visible}
\\
solstice & {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{24.2\%} $\cdot$
\textcolorbf{Blue}{12.4\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{20.9\%} $\cdot$
\textcolorbf{Blue}{4.3\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{19.0\%} $\cdot$
\textcolorbf{Blue}{5.4\%})}
\\[0.5cm]
September & \textcolorbf{Green}{57.0\% visible}
& \textcolorbf{Green}{56.4\% visible}
& \textcolorbf{Green}{57.7\% visible}
\\
equinox & {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{23.4\%} $\cdot$
\textcolorbf{Blue}{6.7\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{24.5\%} $\cdot$
\textcolorbf{Blue}{6.2\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{22.4\%} $\cdot$
\textcolorbf{Blue}{7.0\%})}
\\[0.5cm]
December & \textcolorbf{Green}{62.5\% visible}
& \textcolorbf{Green}{48.9\% visible}
& \textcolorbf{Green}{51.0\% visible}
\\
solstice & {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{19.7\%} $\cdot$
\textcolorbf{Blue}{4.8\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{25.8\%} $\cdot$
\textcolorbf{Blue}{12.4\%})}
& {\scriptsize(\textcolorbf{Orange}{12.9\%} $\cdot$
\textcolorbf{NavyBlue}{23.2\%} $\cdot$
\textcolorbf{Blue}{12.9\%})}
\\
\end{tabular}
\end{center}
\caption[Sky visibility over a year]{
Sky visibility over a year from the three different GOTO sites. The upper value in \textcolorbf{Green}{green} shows the fraction of the sky that is visible during the night. The lower values break down the remaining fraction of the sky into the three non-visible categories: too close to the Sun in \textcolorbf{Orange}{orange}, below the declination limit in \textcolorbf{NavyBlue}{light blue} and only visible during the day in \textcolorbf{Blue}{dark blue}.
}\label{tab:visibility}
\end{table}
The region of the sky visible during the night for a given site changes over the course of the year. \aref{tab:visibility} shows the fractions of the sky in each of the four categories above at the solstices and equinoxes. \aref{fig:visibility} plots the regions on the celestial sphere, in order to better visualise how they change depending on observing site and time of year.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/visibility.pdf}
\end{center}
\caption[Plotting sky visibility regions over a year]{
Changing sky visibility regions over a year, plotted on the celestial sphere.
Visibility is shown at the equinoxes and the solstices for two different sites: the left column shows visibility for an observer located on the Earth's equator, the right column shows visibility from La Palma. The \textcolorbf{Green}{green} regions are visible during the night. The area above the \SI{30}{\degree} altitude limit at sunset is shown by a white dashed line, with the local zenith marked with a white point, and the arrows show how the visible region moves in RA during the night until sunrise. The position of the Sun on the ecliptic is shown by the black star, the \textcolorbf{Orange}{orange} region is within \SI{42}{\degree} of the Sun and is therefore not visible from anywhere on Earth. The \textcolorbf{NavyBlue}{light blue} regions are permanently out of the visible declination range of the site, and the regions in \textcolorbf{Blue}{dark blue} would only be visible on that day when the Sun is above the horizon (but are visible from other sites).
}\label{fig:visibility}
\end{figure}
\clearpage
\end{colsection}
\subsection{Selecting event tiles}
\label{sec:gw_selecting}
\begin{colsection}
Even if the source of a gravitational-wave event is visible within 24 hours from a given site, or combination of sites, there is one further criterion that can prevent the source from being observed --- whether or not the source is located within any of the tile pointings added to the database. The issue of determining which tiles to add to the database is detailed in \aref{sec:selecting_tiles}, but is ultimately a matter of probability: if a telescope covers the 90\% confidence region for every gravitational-wave event then it would be expected to observe 90\% of the sources.
GOTO-alert uses the mean contour level method to select tiles, as described in \aref{sec:event_insert}. For the simulations described in this chapter a mean contour selection value of $0.9$ was used for the GOTO-4 grid and $0.95$ for the GOTO-8 grid. Using these values, 92\% of GW events had sources within at least one of the selected tiles for the GOTO-4 grid, and 95\% for the GOTO-8 grid. Two events where the source location lay outside the selected tiles are shown in \aref{fig:poor_selection}. In the following simulation results the tile selection was only considered after the visibility restrictions in the previous section had been applied: the visibility limits apply to any telescope at the relevant sites, while the tile selection is specific to GOTO\@. In other words, if the source location was visible but was not within the tiles selected by GOTO, the failure is GOTO's alone, and other telescopes might still have observed the source.
In order to find the optimal selection levels, further simulations could be run using the same sample of skymaps but altering the selection level. As discussed in \aref{sec:event_insert}, there is a trade-off between adding too few tiles and missing the source, and adding too many and increasing the time to cover them all. Additional telescopes in the GOTO network would lead to the skymap being covered faster, which could make adding less probable tiles worthwhile.
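As a concrete (and deliberately simplified) illustration of probability-based tile selection, the sketch below greedily accumulates the highest-probability tiles until a target confidence level is reached. This is a stand-in for illustration only: the actual mean contour level method used by GOTO-alert differs in detail, and the function and variable names here are invented for this example.

```python
import numpy as np

# Simplified stand-in for probability-based tile selection (the real
# GOTO-alert "mean contour level" method differs in detail): greedily keep
# the highest-probability tiles until their summed probability reaches the
# requested confidence level. All names here are invented for illustration.
def select_tiles(tile_probs, contour_level=0.9):
    order = np.argsort(tile_probs)[::-1]          # most probable tiles first
    cumulative = np.cumsum(tile_probs[order])
    # keep every tile up to and including the one that crosses the level
    n_keep = int(np.searchsorted(cumulative, contour_level)) + 1
    return order[:n_keep]

# toy skymap: five tiles holding 50/25/15/7/3% of the total probability
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])
print(select_tiles(probs, 0.8))   # -> [0 1 2]
```

The trade-off discussed above corresponds to the choice of `contour_level`: a higher value adds more low-probability tiles, increasing the chance of covering the source but also the time needed to observe them all.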
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/non_selected.pdf}
\end{center}
\caption[Examples of mock GW event sources falling outside of the selected tiles]{
Two examples of mock GW event sources (marked by the \textcolorbf{Red}{red} star) falling outside the selected tiles (the tiles highlighted in \textcolorbf{NavyBlue}{blue}). In the upper case (trigger ID 13630) the source fell just outside of the selected tiles, while in the lower case (trigger ID 930001) the source was on completely the other side of the sky.
}\label{fig:poor_selection}
\end{figure}
\clearpage
\end{colsection}
\subsection{Multi-telescope simulation results}
\label{sec:gw_sim_results}
\begin{colsection}
In order to simulate the response of different GOTO systems, each of the 1105 First Two Years skymaps \citep{First2Years} was simulated using a script \texttt{sim\_skymaps.py}. The objective of the simulations was to find how quickly the event source would be observed. For events that fell into one of the exception categories described previously --- either the source was not visible within 24 hours, or the source tile was not selected to be added to the database --- the simulation was aborted early and the result recorded. The remaining events were classified as ``observable'', and for these the full fake pilot simulation was run for up to 24 hours after the time the event occurred. The fake pilot knew which tiles the event source fell within, and once any of those tiles were recorded as being observed the simulation ended. The time of the observation and the alt/az position the tile was observed at were recorded. Any events which were simulated for the full 24 hours without the source being observed were counted as failures, and were classed as ``not observed''.
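The per-event decision flow described above can be summarised in a short sketch. The function and argument names here are invented for illustration, and the three non-visible sub-categories (too close to the Sun, below the declination limit, only visible during the day) are collapsed into one.

```python
# Hypothetical sketch of the per-event decision flow in the simulations;
# the function and argument names are invented, and the three non-visible
# sub-categories (Sun / declination / daytime) are collapsed into one here.
def classify_event(source_visible, tile_selected, observed_within_24h):
    if not source_visible:       # source never observable within 24 hours
        return "not visible"
    if not tile_selected:        # source outside the tiles in the database
        return "not selected"
    if observed_within_24h:      # fake pilot observed a source tile in time
        return "observed"
    return "not observed"        # ran the full 24 hours without success

print(classify_event(True, True, False))   # -> not observed
```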
Simulations were carried out for a variety of possible GOTO systems. Each simulation was assigned a code based on how many telescopes of each type were located at each site. Two possible GOTO ``models'' were considered: the GOTO-4 prototype with four unit telescopes and the intended GOTO-8 design with eight (see \aref{sec:goto_design}). In the following section the code \textbf{1N4} refers to one GOTO-4 mount on La Palma (the current system at the time of writing), \textbf{2N8+1S4} is two GOTO-8 telescopes on La Palma plus one GOTO-4 in Siding Spring, and \textbf{2N8+1K4} is the same but with the southern telescope at Mt Kent.
The results of the simulations for six key scenarios are given in the following plots: \aref{fig:gw_sim_1n4}, \aref{fig:gw_sim_1n8} and \aref{fig:gw_sim_2n8} show results for the evolving site on La Palma, while \aref{fig:gw_sim_2n8+1s4}, \aref{fig:gw_sim_2n8+2s8} and \aref{fig:gw_sim_2n8+2k8} show the effect of adding three different southern facilities. A summary of the key results from all of the simulations that were carried out is given in \aref{tab:gw_sim_results}.
\newpage
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/1n4_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 532 & 48.1\% \\
\textcolor{Red}{Not observed} & 16 & 1.4\% \\
\textcolor{darkgray}{Not selected} & 45 & 4.1\% \\
\textcolor{NavyBlue}{Never above dec limit} & 265 & 24.0\% \\
\textcolor{Blue}{Not visible at night} & 118 & 10.7\% \\
\textcolor{Orange}{Too close to Sun} & 129 & 11.7\% \\
\midrule
Visible events & 593 & 53.7\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 1N4}} \\
\midrule
%
Observing efficiency & 89.7\% \\
\midrule
Mean delay after & \multirow{2}{*}{9.96 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.58 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.64 \\
%
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 1N4 system]{
Simulation results for a 1N4 system. The pie chart and the table on the left shows which category each of the 1105 events fell into. ``Visible events'' include only the top three categories, and the ``observing efficiency'' is the fraction of these events which were subsequently observed. The table on the right gives the mean delay and airmass of the source observation, for events where the source was observed.
}\label{fig:gw_sim_1n4}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/1n8_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 553 & 50.0\% \\
\textcolor{Red}{Not observed} & 12 & 1.1\% \\
\textcolor{darkgray}{Not selected} & 29 & 2.6\% \\
\textcolor{NavyBlue}{Never above dec limit} & 285 & 25.8\% \\
\textcolor{Blue}{Not visible at night} & 85 & 7.7\% \\
\textcolor{Orange}{Too close to Sun} & 141 & 12.8\% \\
\midrule
Visible events & 594 & 53.8\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 1N8}} \\
\midrule
%
Observing efficiency & 93.1\% \\
\midrule
Mean delay after & \multirow{2}{*}{10.06 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.60 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.66 \\
%
& \\
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 1N8 system]{
Simulation results for a 1N8 system. Note the distribution of events changes due to the different grid used, and the biggest gain in events observed is from the decreased number of events with sources not included in the selected tiles.
}\label{fig:gw_sim_1n8}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/2n8_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 557 & 50.4\% \\
\textcolor{Red}{Not observed} & 8 & 0.7\% \\
\textcolor{darkgray}{Not selected} & 29 & 2.6\% \\
\textcolor{NavyBlue}{Never above dec limit} & 285 & 25.8\% \\
\textcolor{Blue}{Not visible at night} & 85 & 7.7\% \\
\textcolor{Orange}{Too close to Sun} & 141 & 12.8\% \\
\midrule
Visible events & 594 & 53.8\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 2N8}} \\
\midrule
%
Observing efficiency & 93.8\% \\
\midrule
Mean delay after & \multirow{2}{*}{9.89 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.53 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.67 \\
%
& \\
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 2N8 system]{
Simulation results for a 2N8 system. The improvements over the 1N8 system are a small gain in observing efficiency and a decrease in the mean delay time.
}\label{fig:gw_sim_2n8}
\end{figure}
\newpage
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/2n8+1s4_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 822 & 74.4\% \\
\textcolor{Red}{Not observed} & 17 & 1.5\% \\
\textcolor{darkgray}{Not selected} & 71 & 6.4\% \\
\textcolor{NavyBlue}{Never above dec limit} & 0 & 0.0\% \\
\textcolor{Blue}{Not visible at night} & 70 & 6.3\% \\
\textcolor{Orange}{Too close to Sun} & 125 & 11.3\% \\
\midrule
Visible events & 910 & 82.4\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 2N8+1S4}} \\
\midrule
%
Observing efficiency & 90.3\% \\
\midrule
Mean delay after & \multirow{2}{*}{8.16 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.66 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.63 \\
%
& \\
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 2N8+1S4 system]{
Simulation results for a 2N8+1S4 system. As these sites use different grids they were simulated independently and the results combined.
}\label{fig:gw_sim_2n8+1s4}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/2n8+2s8_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 839 & 75.9\% \\
\textcolor{Red}{Not observed} & 8 & 0.7\% \\
\textcolor{darkgray}{Not selected} & 47 & 4.3\% \\
\textcolor{NavyBlue}{Never above dec limit} & 0 & 0.0\% \\
\textcolor{Blue}{Not visible at night} & 70 & 6.3\% \\
\textcolor{Orange}{Too close to Sun} & 141 & 12.8\% \\
\midrule
Visible events & 894 & 80.9\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 2N8+2S8}} \\
\midrule
%
Observing efficiency & 93.8\% \\
\midrule
Mean delay after & \multirow{2}{*}{7.69 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.57 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.64 \\
%
& \\
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 2N8+2S8 system]{
Simulation results for a 2N8+2S8 system. The obvious improvement over the northern hemisphere-only 2N8 system (\aref{fig:gw_sim_2n8}) is the removal of the declination-limited events, meaning more event sources are visible. The observing efficiency remains the same, but there is a notable improvement in the post-event delay times.
}\label{fig:gw_sim_2n8+2s8}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.15\linewidth}\vspace{0.6cm}
\includegraphics[trim={.5cm 0 .5cm 0},clip,width=\linewidth]{images/gw_sims/2n8+2k8_pie.png}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}\vspace{0pt}
\begin{tabular}{lrr}
\multicolumn{3}{c}{\textbf{Simulation results}} \\
\midrule
%
\textcolor{Green}{Observed} & 828 & 74.9\% \\
\textcolor{Red}{Not observed} & 11 & 1.0\% \\
\textcolor{darkgray}{Not selected} & 48 & 4.3\% \\
\textcolor{NavyBlue}{Never above dec limit} & 0 & 0.0\% \\
\textcolor{Blue}{Not visible at night} & 77 & 7.0\% \\
\textcolor{Orange}{Too close to Sun} & 141 & 12.8\% \\
\midrule
Visible events & 887 & 80.3\% \\
%
\end{tabular}
\end{minipage}
\begin{minipage}[t]{0.37\linewidth}\vspace{0pt}
\begin{tabular}{lr}
\multicolumn{2}{c}{\textbf{System: 2N8+2K8}} \\
\midrule
%
Observing efficiency & 93.3\% \\
\midrule
Mean delay after & \multirow{2}{*}{7.69 h} \\
event time & \\
Mean delay after & \multirow{2}{*}{1.55 h} \\
becoming visible & \\
\midrule
Mean airmass & 1.64 \\
%
& \\
\end{tabular}
\end{minipage}
\end{center}
\caption[GW simulation results: 2N8+2K8 system]{
Simulation results for a 2N8+2K8 system. Comparing to \aref{fig:gw_sim_2n8+2s8} it makes very little difference to the results if the southern site is at Siding Spring or Mt Kent, when compared to the huge gain from having either available instead of just La Palma (\aref{fig:gw_sim_2n8}).
}\label{fig:gw_sim_2n8+2k8}
\end{figure}
\begin{table}[p]
\begin{center}
\begin{tabular}{c|cccc|c|cc} %
\multirow{2}{*}{System} &
\multicolumn{4}{c|}{Source observed within \ldots} &
{\small Observing} &
\multicolumn{2}{c}{Mean delay after \ldots} \\
& 1h & 6h & 12h & 24h & efficiency & {\small the event} & {\small becoming visible} \\
\midrule
1N4 & 5.9\% & 16.5\% & 26.2\% & 48.1\% & 89.7\% & 9.96 h & 1.58 h \\
1N8 & 6.8\% & 16.9\% & 27.1\% & 50.0\% & 93.1\% & 10.06 h & 1.60 h \\
2N8 & 7.2\% & 17.3\% & 27.9\% & 50.4\% & 93.8\% & 9.89 h & 1.53 h \\
&&&&&&&\\
1S4 & 8.8\% & 18.8\% & 27.9\% & 47.1\% & 87.7\% & 9.39 h & 1.64 h \\
1S8 & 10.3\% & 20.6\% & 31.1\% & 49.2\% & 91.0\% & 8.81 h & 1.52 h \\
2S8 & 11.1\% & 21.2\% & 31.8\% & 49.9\% & 92.1\% & 8.67 h & 1.47 h \\
&&&&&&&\\
2K8 & 10.8\% & 20.9\% & 31.3\% & 50.9\% & 92.3\% & 9.02 h & 1.46 h \\
&&&&&&&\\
1N4+1S4 & 14.7\% & 34.9\% & 48.9\% & 71.5\% & 89.6\% & 7.99 h & 1.64 h \\
1N8+1S8 & 17.1\% & 36.8\% & 52.5\% & 74.8\% & 92.5\% & 7.82 h & 1.63 h \\
2N8+1S8 & 17.6\% & 37.2\% & 52.9\% & 75.2\% & 93.0\% & 7.75 h & 1.59 h \\
2N8+2S8 & 18.4\% & 37.7\% & 53.8\% & 75.9\% & 93.8\% & 7.69 h & 1.57 h \\
&&&&&&&\\
2N8+2K8 & 18.2\% & 38.0\% & 52.8\% & 74.9\% & 93.3\% & 7.69 h & 1.55 h \\
&&&&&&&\\
2N8+1S4* & 16.0\% & 35.7\% & 50.3\% & 74.4\% & 90.3\% & 8.16 h & 1.66 h \\
2N8+1S8* & 17.6\% & 37.2\% & 53.0\% & 75.2\% & 93.0\% & 7.76 h & 1.60 h \\
2N8+2S8* & 18.4\% & 37.7\% & 53.7\% & 75.9\% & 93.8\% & 7.70 h & 1.58 h \\
\end{tabular}
\end{center}
\caption[GW simulation results summary table]{
Summary of simulation results. The fraction of events where the source was observed is given for different time delays after the event, along with the overall observing efficiency after 24 hours. The mean delay between the event and the source being observed is given, along with the mean time it took to observe the source tile after it became visible. Systems marked with an asterisk (*) were not simulated together, but were instead combined from the individual simulations for each site.
}\label{tab:gw_sim_results}
\end{table}
\clearpage
\end{colsection}
\subsection{Analysis of simulation results}
\label{sec:gw_sim_analysis}
\begin{colsection}
The results of the gravitational-wave follow-up simulations support two conclusions: the addition of the southern site provides a huge benefit to the number of sources that can be observed, while adding further telescopes at a single site provides a much more modest benefit. \aref{fig:gw_sim_results} summarises the simulated post-event delay times for the different possible stages of GOTO deployment.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/gw_sims/results.png}
\end{center}
\caption[Simulation delay time for different GOTO systems]{
Post-event delay in observing gravitational-wave event sources for six possible deployment stages of the GOTO system.
}\label{fig:gw_sim_results}
\end{figure}
The reason for the first conclusion is obvious: adding a site in the southern hemisphere opens up a large number of sources that are physically impossible to observe from La Palma. The second comes about essentially because the efficiency of a single GOTO system is already very high. \aref{fig:gw_sim_1n8} shows that the 1N8 system already observes 93.1\% of all sources visible from La Palma, and the majority of those missed fell outside of the selected tiles (29 events) rather than simply not being observed (12 events). The addition of the second GOTO-8 system, as shown in \aref{fig:gw_sim_2n8}, moves just 4 events from ``not observed'' to ``observed''; adding a second telescope cannot affect any of the other categories. The mean delay time does decrease, but again only by a small amount.
The above conclusions are most clearly seen by comparing the results in \aref{tab:gw_sim_results} for the 2N8 system to the 1N8+1S8 system, where there is a clear increase in the number of sources observed within 24 hours (from 50.4\% to 74.8\%). Based on these metrics alone, it would therefore be far better to prioritise deploying the second mount in Australia before adding another on La Palma. There are numerous practical reasons why this is not the priority of the collaboration, and \aref{sec:survey_sims} illustrates that multiple telescopes at a single site are much more important to the all-sky survey cadence (which in practice would benefit the counterpart search by producing more recent reference images).
Regarding the choice of southern site, there is very little difference between the results for Siding Spring and for Mt Kent. Comparing the 2S8 and 2K8 simulation results in \aref{tab:gw_sim_results} shows that Mt Kent has a small advantage in terms of the number of events observed, but Siding Spring has a lower mean delay time. Overall, there is no real difference between the two sites, and in most cases the ``S'' simulations can be taken as representative of either.
Another factor to emerge from these simulations is the difference between two independent systems in each hemisphere versus one combined system that uses a common database. This became important when simulating the 2N8+1S4 system, a plausible future stage of GOTO's deployment. As considered in \aref{sec:multi_grid_scheduling}, the simulations require all telescopes to observe using the same grid, as it is currently impossible to share common tiles between different grids. However, it was possible to simulate the two cases, 2N8 and 1S4, separately as independent simulations and then combine the results. For the event counts the logic is fairly straightforward: if an event is observed by \emph{either} site (or both) it counts as being observed, and in cases where the event source was observed by both sites independently, only the earlier observation is considered. Using this method the results shown in \aref{fig:gw_sim_2n8+1s4} were derived. The same method can also be applied in situations where the two sites could be simulated together; for example, comparing the 2N8+2S8 simulation to the combined results of the 2N8 and 2S8 simulations, given as 2N8+2S8* in \aref{tab:gw_sim_results}. The same events fell into the same categories as shown in \aref{fig:gw_sim_2n8+2s8}, and the only difference is a small increase in the delay time when the sites are not simulated together. For large skymaps with tiles in the band of sky visible from both sites (roughly $\pm$\SI{30}{\degree} declination), one site can complete observations of that region even if it cannot see the source, meaning that once the other telescope opens and starts observing, a large area of the skymap that does not contain the source has already been excluded. This is only possible when both sites observe using a shared database, as the second site needs to know what the first site has already observed.
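The logic for combining two independently simulated sites can be sketched as follows (illustrative names only): an event counts as observed if either site observed it, and the earlier observation is kept when both did.

```python
# Illustrative sketch of combining two independently simulated sites.
# Each input maps event ID -> observation delay in hours (None if the
# site never observed the source): an event counts as observed if either
# site observed it, and the earlier observation is kept when both did.
def combine_results(north, south):
    combined = {}
    for event in set(north) | set(south):
        delays = [d for d in (north.get(event), south.get(event))
                  if d is not None]
        combined[event] = min(delays) if delays else None
    return combined

north = {1: 3.2, 2: None, 3: 11.0}
south = {1: 5.0, 2: 7.5, 4: None}
combined = combine_results(north, south)
print(combined[1], combined[2], combined[4])   # -> 3.2 7.5 None
```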
\end{colsection}
\section{All-sky survey simulations}
\label{sec:survey_sims}
\begin{colsection}
As described in \aref{sec:goto_motivation}, carrying out the all-sky survey is just as critical to the GOTO project as the gravitational-wave follow-up operations, as up-to-date reference images will always be required to detect any counterpart sources. Therefore, in parallel to the gravitational-wave simulations, further simulations were carried out in order to quantify what benefit additional telescopes and sites will have on carrying out the all-sky survey. It was also an opportunity to consider different sky survey methods before implementing them in the real scheduling system. Unlike the gravitational-wave simulations, it is also possible to compare the simulated results for a single GOTO-4 system on La Palma to the actual observations the live GOTO system has taken since it began observing the current survey in February 2019.
\end{colsection}
\subsection{Simulating sky survey observations}
\label{sec:survey_sim_methods}
\begin{colsection}
Simulating the all-sky survey is more straightforward than the gravitational-wave simulations, as there is no added complication of processing the LVC skymap or checking the visibility of the source coordinates. Instead, all that is needed is to fill the observation database with the sky survey pointings (see \aref{sec:obsdb}) and run the fake pilot (see \aref{sec:goto_sims}).
The only drawback to this method is the time taken to perform the simulations. Unlike the gravitational-wave simulations, the sky survey simulation cannot be finished early if the source is not visible, or stopped once the source has been observed. Instead, the fake pilot needs to simulate the full 24 hours of observations, for however many days the simulation covers. The same simplifications detailed previously still apply, so each loop still skips approximately 4 minutes of simulation time until the observation of each tile has been completed. A full simulation of a year of observations including both sites (therefore observing for approximately 20 hours each day) requires approximately 110,000 steps, and with each simulation loop taking approximately 2 seconds of CPU time (the scheduler check takes the majority of this time), the full simulation takes approximately 60 hours. This compares to at most 16 hours for the multi-site gravitational-wave simulations.
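As a back-of-envelope check of these figures (assuming roughly 20 observing hours per day across both sites and a 4-minute simulation step, as stated above):

```python
# Back-of-envelope check of the sky-survey simulation runtime, using the
# approximate figures quoted above.
OBS_HOURS_PER_DAY = 20      # combined dark time across both sites
STEP_MINUTES = 4            # simulation time skipped per loop
CPU_SECONDS_PER_STEP = 2    # dominated by the scheduler check

steps = 365 * OBS_HOURS_PER_DAY * 60 // STEP_MINUTES
cpu_hours = steps * CPU_SECONDS_PER_STEP / 3600

print(steps)                 # -> 109500 (roughly 110,000 loops)
print(round(cpu_hours, 1))   # -> 60.8 (i.e. ~60 CPU-hours)
```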
Due to the full sky-survey simulations requiring a large time investment, only a few of them could be carried out in the time available. A simplified version of the simulation code was therefore developed, which could produce the same results much faster. This `lite' script did away with the scheduler and database code; instead, at each step it simply found the highest-altitude tiles that had been observed the fewest times. This is a major simplification of the scheduling functions described in \aref{chap:scheduling}, but the results are effectively the same, and can be obtained 15--20 times faster. Therefore, the majority of the simulations discussed in this section use this much faster `lite' script. The other benefit of this method was making it much easier to modify the scheduling function to test different surveying methods, as discussed in \aref{sec:survey_sim_meridian}, without needing to rewrite the actual G-TeCS scheduler.
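A minimal sketch of the `lite' scheduling step might look like the following. The names are illustrative only, and the tie-breaking rule (least-observed tiles first, then highest altitude) is the simplification described above, not the full G-TeCS scheduler.

```python
import numpy as np

# Minimal sketch of the 'lite' survey scheduler step: among tiles currently
# above the altitude limit, pick those with the fewest past observations,
# then take the one at the highest altitude. Names are illustrative only.
def pick_next_tile(altitudes, obs_counts, alt_limit=30.0):
    altitudes = np.asarray(altitudes, dtype=float)
    obs_counts = np.asarray(obs_counts)
    visible = altitudes >= alt_limit
    if not visible.any():
        return None                            # nothing observable right now
    # mask non-visible tiles with a count higher than any real one
    counts = np.where(visible, obs_counts, obs_counts.max() + 1)
    least_observed = counts == counts.min()    # ties: least-observed tiles
    best = np.where(least_observed, altitudes, -np.inf).argmax()
    return int(best)

alts = [55.0, 70.0, 20.0, 62.0]
counts = [3, 3, 0, 2]
print(pick_next_tile(alts, counts))   # -> 3 (tile 2 is below the limit)
```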
\end{colsection}
\subsection{Multi-telescope simulation results}
\label{sec:survey_sim_results}
\begin{colsection}
Sky-survey simulations were carried out for different combinations of GOTO telescopes and sites, similar to the gravitational-wave event simulations detailed in \aref{sec:gw_sims}. Simulations were run for 365 days starting semi-arbitrarily on the 21st of February 2019, which was the date that the current ongoing GOTO all-sky survey started on La Palma.
Fewer simulations were carried out than for the gravitational-wave case, partly because they take longer to run, but also because there were fewer possible cases to simulate. It is not possible to combine the results of telescopes surveying on different grids, as the results depend explicitly on the grid used. Therefore, unlike the gravitational-wave simulations, it was not possible to combine a GOTO-8 telescope in the north with a GOTO-4 telescope in the south.
The results of the sky-survey simulations are given in \aref{tab:survey_sim_results}. \aref{fig:survey_sim_1n4} shows the final tile-coverage map for the 1N4 system, in which each tile in the GOTO-4 grid is coloured by the number of times it was observed. \aref{fig:survey_sim_2n8+2s8} shows the same information for the final 2N8+2S8 system.
\begin{figure}[p]
\begin{center}
\includegraphics[height=190pt]{images/survey_sims/365_1N4_lite.png}
\end{center}
\caption[All-sky survey simulation results: 1N4 system]{
All-sky survey simulation coverage map for a 1N4 system, as currently deployed on La Palma. Tiles are coloured by the number of times they were observed over the 365 simulated nights. Tiles in white are those not visible from the northern site.
}\label{fig:survey_sim_1n4}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[height=190pt]{images/survey_sims/365_2N8+2S8_lite.png}
\end{center}
\caption[All-sky survey simulation results: 2N8+2S8 system]{
All-sky survey simulation coverage map for a 2N8+2S8 system, the ultimate design goal of the GOTO collaboration. Note the colour scale has changed from \aref{fig:survey_sim_1n4}, the grid has changed to the GOTO-8 tiles and the region which was previously not visible from just the north has been filled in.
}\label{fig:survey_sim_2n8+2s8}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc|c|c|c} %
\multirow{2}{*}{System} &
\multicolumn{2}{c|}{Fraction of sky observed} &
No.\ of times &
Mean cadence &
{\small Mean airmass}
\\
&
each night &
over 1y &
tiles observed &
(days) &
observed
\\
\midrule
1N4* & 4.3\%--6.5\% & 76.5\% & 26 (13--28) & $10.0\pm1.8$ & $1.6\pm0.4$ \\
&&&&&\\
1N4 & 4.3\%--6.4\% & 76.5\% & 26 (13--28) & $10.1\pm1.8$ & $1.6\pm0.3$ \\
1N8 & 9.5\%--14.0\% & 74.2\% & 58 (34--62) & $4.6\pm0.7$ & $1.6\pm0.4$ \\
2N8 & 19.0\%--28.1\% & 74.2\% & 117 (68--123) & $2.3\pm0.4$ & $1.6\pm0.4$ \\
&&&&&\\
1N4+1S4 & 10.5\%--11.0\% & 99.9\% & 39 (30--42) & $7.3\pm0.7$ & $1.5\pm0.4$ \\
1N8+1S8 & 23.2\%--24.1\% & 99.9\% & 87 (67--91) & $3.4\pm0.3$ & $1.5\pm0.4$ \\
2N8+1S8 & 33.0\%--37.3\% & 99.9\% & 130 (98--138) & $2.3\pm0.2$ & $1.6\pm0.4$ \\
2N8+2S8 & 46.3\%--48.1\% & 99.9\% & 173 (134--180) & $1.7\pm0.1$ & $1.5\pm0.4$ \\
&&&&&\\
2N8+2K8 & 46.8\%--48.3\% & 99.8\% & 174 (134--181) & $1.7\pm0.1$ & $1.6\pm0.4$ \\
\end{tabular}
\end{center}
\caption[All-sky survey simulation results summary table]{
Summary of all-sky survey simulation results. The first 1N4 simulation, marked with an asterisk (*), was the only one carried out using the full scheduler and database system; all the other simulations used the `lite' script. The fraction of the sky observed each night is given as a range over the course of a year, as well as the total fraction of the sky observed over the whole year. The number of times each tile was observed is given as an average over all tiles observed and, in parenthesis, the minimum and maximum. The mean cadence between observations of each tile is also given, along with the mean airmass of each observation, over the whole year.
}\label{tab:survey_sim_results}
\end{table}
\end{colsection}
\subsection{Analysis of simulation results}
\label{sec:survey_sim_analysis}
\begin{colsection}
The results of the all-sky survey simulations show, as expected, that the greatest benefit to the survey cadence comes from increasing the number of telescopes at each site. \aref{fig:survey_sim_results} plots the change in mean cadence and fraction of the sky observed each night for the planned stages of GOTO deployment.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/survey_sims/results.png}
\end{center}
\caption[Tile cadence and nightly sky observation for different GOTO systems]{
Mean tile cadence (\textcolorbf{NavyBlue}{blue}) and fraction of the sky observed each night (\textcolorbf{Red}{red}) for five deployment stages of the GOTO system. Error bars on the cadence show the standard deviation for all of the tiles observed across the sky, while the error bars on the observed fraction show the minimum and maximum nightly observed fraction arising from differing night lengths throughout the year.
}\label{fig:survey_sim_results}
\end{figure}
The improvement in tile observation cadence roughly follows the expected trend: doubling the instantaneous field of view doubles the number of observations carried out in one night, and therefore halves the time between observations of a given tile. With the current GOTO-4 system (1N4) the simulations predict approximately 10 days between tile observations, reducing to approximately 5 days with the upgrade to the full GOTO-8 system (1N8) and then halving again to approximately 2.5 days with the addition of the second GOTO-8 telescope on La Palma (2N8). Adding a single GOTO-8 telescope in Australia (2N8+1S8) leaves the tile cadence effectively unchanged: the southern site increases the total amount of the sky that is visible over the year from roughly 75\% to almost 100\%, an increase of one third, while adding one more GOTO-8 telescope corresponds to a one-third increase in observing capability, so the cadence remains roughly the same as in the 2N8 case. Adding the second telescope in Australia decreases the cadence further; as this increases the overall instantaneous field of view from three GOTO-8 units to four, the cadence is reduced by a quarter, from approximately 2.3 to 1.7 days.
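This scaling argument can be written as a toy model, with the cadence proportional to the visible sky fraction and inversely proportional to the number of GOTO-8-equivalent fields of view. The calibration here is illustrative, fixed so that the 1N8 case reproduces its simulated cadence of 4.6 days.

```python
# Toy cadence model for the scaling argument above: time between visits of a
# tile scales with the visible sky fraction and inversely with the number of
# GOTO-8-equivalent fields of view. The calibration constant is illustrative,
# chosen so the 1N8 case reproduces its simulated cadence of 4.6 days.
def cadence_days(n_fov_units, visible_fraction):
    K = 4.6 / 0.75   # calibrated to 1N8: one unit seeing ~75% of the sky
    return K * visible_fraction / n_fov_units

print(round(cadence_days(2, 0.75), 1))   # 2N8     -> 2.3
print(round(cadence_days(4, 1.00), 1))   # 2N8+2S8 -> 1.5 (simulated: 1.7)
```

The model slightly over-predicts the gain from the southern site compared to the simulated values in \aref{tab:survey_sim_results}, since it ignores the uneven overlap between the two sites' visible regions.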
\newpage
As also shown in \aref{fig:survey_sim_results}, the increase in the fraction of the sky observed each night as more telescopes are added is fairly linear. The variation over the course of the year, shown by the error bars, comes from seasonal variation in the length of the night; the variation increases as more telescopes are added in the north but is then reduced to effectively zero with equal numbers of telescopes in both hemispheres.
Together, the sky-survey simulations confirm, as expected, that having more telescopes shortens the survey cadence. Having a fast all-sky survey is critical for rapidly detecting candidate counterparts to gravitational-wave events, so it is necessary to consider the results of both sets of simulations together. Counter to the conclusions from \aref{sec:gw_sims}, the simulations in this section suggest it would not necessarily be best to prioritise adding a telescope in the south over adding a second telescope in the north. While going from the 1N8 case to the 2N8 case makes very little difference to the number of gravitational-wave events that can be observed, it would halve the survey cadence from 4.6 days to 2.3, thereby making it far easier to identify candidates for the events which are visible. The decision of what order to deploy the GOTO telescopes will therefore come down to more practical considerations. Having any telescopes in the southern hemisphere will increase the number of possible gravitational-wave sources that could be observed, but without a high-cadence sky survey, identifying the counterpart will be much more difficult.
Overall, the two sets of simulations together suggest that the proposed full GOTO network, the ``2N8+2S8'' system, should expect to observe the position of over 75\% of gravitational-wave sources within 24 hours, and over 50\% within 12 hours. On average there should be a reference image taken of the same position within the past 1.7 days, which will greatly help in narrowing down potential candidates. When fully deployed, GOTO would therefore be a powerful system for rapidly finding counterparts to gravitational-wave detections.
\newpage
\end{colsection}
\subsection{Comparison of simulations to real observations}
\label{sec:survey_sim_150}
\begin{colsection}
The results of the 1N4 simulation can be compared to the real observations carried out by the existing telescope on La Palma, in order to confirm how good a model it is of the real system. The current phase of the GOTO project began on the night of the 21st of February 2019 (see \aref{sec:timeline}); this was the first night of fully robotic observations with the set of four unit telescopes, and marks the start of the ongoing all-sky survey. The first 5 months of observations span 150 days up to the night ending on the 21st of July, and this provides the benchmark to compare with simulations of the same period.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/150.pdf}
\end{center}
\caption[Observations carried out in the first 150 days of the all-sky survey]{
Observations carried out in the first 150 days of the current all-sky survey. The number of observations each night is shown in the upper plot, with the number of all-sky survey tiles observed shown in \textcolorbf{NavyBlue}{blue}, and any extra observations (of gravitational-wave events, GRB triggers or manually-inserted pointings) shown in \textcolorbf{Orange}{orange}. The background \textcolorbf{Gray}{grey} bars show the number of tiles observed on the same nights by the 1N4 survey simulation. The lower ``barcode'' plot shows the periods when the conditions flags were recorded as bad, either due to weather (\textcolorbf{Red}{red}) or hardware errors (\textcolorbf{Purple}{purple}).
}\label{fig:150}
\end{figure}
\aref{fig:150} shows the number of tiles observed each night by the real GOTO on La Palma and the corresponding 1N4 simulation, restricted to the first 150 days. Over the 5-month period, 16,146 on-grid observations were carried out by the telescope on La Palma, of which 85\% were survey pointings, and over the same period the simulation produced 21,300 observations. The simulated observations can be considered the idealised case, and the real observations carried out differ from simulation in three ways.
First, the real system on La Palma is affected by bad conditions which prevent observations from being taken, shown in \aref{fig:150} by the red and purple bars below the main plot. There was one particularly bad period in late March and early April when the dome could not open for over a week. The simulations do not currently include the effects of bad weather, although the code exists to simulate periods of bad conditions, and future simulations could include the real weather conditions over the same period. There were also other reasons for observations to be stopped on some nights, for example switching to manual mode to carry out calibration tests or on-site work.
Secondly, the real system had to deal with multiple distractions from observing the all-sky survey. The orange bars in \aref{fig:150} show non-survey observations, which take up a significant amount of time (15\% of all observations). Some nights are almost entirely orange, corresponding to LVC gravitational-wave triggers with particularly large skymaps visible from La Palma (see \aref{sec:gw_results}). These include S190425z and S190426c in late April, multiple events during May and S190720a just before the end of the period in mid-July. Other orange patches represent observations of smaller gravitational-wave skymaps, gamma-ray burst triggers, or other manually-inserted targets; these were also not considered in the sky-survey simulations.
Finally, there is still a regular offset in \aref{fig:150} between the number of real observations taken in nights with clear conditions and the number predicted by the simulations. This discrepancy is due to the values used within the simulation for camera readout and slew time not matching up precisely with the actual times; future simulations will need to be calibrated more accurately against real data.
\aref{fig:survey_real_150} shows the sky coverage map for the real observations in the first 150 days, while \aref{fig:survey_sim_1n4_150} shows the same for the 1N4 simulation. The real sky coverage is very similar in extent to the simulations: the real observations cover 2135 of the 2913 GOTO-4 grid tiles at least once while the simulation covers 2187. The reason for the small discrepancy is that initially the real system used an altitude limit of \SI{35}{\degree} for the all-sky survey, which was lowered to \SI{30}{\degree} in May. This change is visible in the bottom row of tiles in \aref{fig:survey_real_150}. The simulations all assume a constant \SI{30}{\degree} limit.
The major difference in the coverage between the real and simulated results is in the number of times each tile was observed. \aref{fig:survey_real_150} shows the most a real tile was observed was nine times, while \aref{fig:survey_sim_1n4_150} shows that the simulated results included up to 13 observations of a single tile. The mean tile cadence of the real observations is $14\pm4$, compared to $10\pm2$ from the 1N4 simulation. It is clear that future simulations need to take into account the time lost to weather and other non-survey observations in order to accurately predict the output of the real system. Overall though, aside from the constant offset visible in \aref{fig:150}, the simulations do seem to provide a reasonable approximation of what GOTO could observe in this idealised case.
\begin{figure}[p]
\begin{center}
\includegraphics[height=190pt]{images/survey_sims/150_1N4_real_v2.png}
\end{center}
\caption[Real survey observations over 150 days]{
Real all-sky survey map of observations over the first 150 days, from 21st February to 21st July 2019. Tiles are coloured by the number of times they were observed during this period (the most a single tile was observed was 9 times), and white tiles were never observed.
}\label{fig:survey_real_150}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[height=190pt]{images/survey_sims/150_1N4_lite_v2.png}
\end{center}
\caption[1N4 survey simulation observations over 150 days]{
Simulated 1N4 all-sky survey map of observations over the first 150 days, using the same scale as \aref{fig:survey_real_150} above. Compare to \aref{fig:survey_sim_1n4} for the coverage over the entire 1-year simulation.
}\label{fig:survey_sim_1n4_150}
\end{figure}
\clearpage
\newpage
\end{colsection}
\subsection{An alternative, meridian-limited sky survey method}
\label{sec:survey_sim_meridian}
\begin{colsection}
One of the problems with the current scheduling system for the all-sky survey is that it leads to observations being carried out at high airmasses. Due to how the scheduler ranks tiles (see \aref{sec:ranking}), when all of the visible survey tiles have been observed the same number of times the scheduler will then choose between them based on the airmass tiebreak parameter (unlike tiles linked to skymaps, all survey tiles have equal weights, so the tiebreaking algorithm developed in \aref{sec:scheduler_tiebreaker} is simplified). This results in the scheduler always selecting tiles as soon as they rise if they have been observed fewer times than any others currently visible, and this means survey tiles are often observed at low altitudes and therefore high airmasses --- leading to poor data quality.
One possible method to fix this problem, and improve the data quality, is to implement stricter limits when observing survey tiles. This should not be based on altitude or airmass, because that would exclude tiles close to the site declination limits (such as near the north celestial pole from La Palma) which could never rise above the limit. Instead, observations should be limited based on distance from the observer's meridian, which in practice limits the target's hour angle. Limiting observations by hour angle defines a strip surrounding the observer's meridian within which survey tiles are valid and outside of which they are not.
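The meridian-strip test reduces to a simple hour-angle check. As a minimal sketch (a hypothetical stand-alone function working in degrees, not the actual G-TeCS code):

```python
def within_meridian_strip(ra_deg, lst_deg, ha_limit_deg):
    """True if a target lies within +/- ha_limit_deg of the observer's meridian.

    The hour angle is the local sidereal time minus the target's right
    ascension, wrapped into the range [-180, 180) degrees; a target on
    the meridian has an hour angle of zero.
    """
    ha = (lst_deg - ra_deg + 180.0) % 360.0 - 180.0
    return abs(ha) <= ha_limit_deg
```

Applying a check like this as an extra validity constraint on survey tiles, in place of a plain altitude limit, produces the meridian-limited behaviour simulated in this section.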
In order to see the consequences of this method several simulations were carried out using the 1N4 system, but modified to limit the hour angle of each target. The results of the simulations are given in \aref{tab:survey_sim_meridian}, for different hour angle limits and the unlimited case for comparison. \aref{fig:survey_sim_airmass_365} shows the change in distribution of airmasses between the existing unlimited method and when restricting observations to a \SI{20}{\degree} wide strip ($\pm\SI{10}{\degree}$) around the observer's meridian. \aref{fig:survey_sim_airmass_normal} shows the mean airmass each tile is observed in the first month of a survey using the existing method, while \aref{fig:survey_sim_airmass_meridian} shows the same thing but for a simulation using the hour angle limit.
By restricting observations to be closer to the observer's meridian the mean airmass of the observations is decreased, as expected. This is shown by the mean airmasses in \aref{tab:survey_sim_meridian} but is even clearer in the distributions shown in \aref{fig:survey_sim_airmass_365}. The optimal value of the hour angle limit will depend on several factors, including the number of telescopes being used (as the instantaneous field of view increases, the tiles within the meridian strip will be observed faster, and so the hour angle limit should be increased).
A side effect of this method is that, as the width of the meridian strip is decreased and the effective visible sky is reduced, each tile within the strip is observed more often, and therefore the mean cadence decreases. However, this also restricts the coverage area and leads to fewer unique tiles being observed, as shown in \aref{fig:survey_sim_airmass_meridian}. From \aref{tab:survey_sim_meridian} the fraction of sky observed within a single month reduces from almost 60\% using the unlimited method to below 40\% with the strictest hour angle limit. This would have a knock-on effect on the effectiveness of the sky survey, as although lower-airmass observations are desirable, so are more recent observations of the tiles for difference imaging. In practice it might be necessary to have two concurrent surveys, one optimised for minimum airmass and the other optimised for cadence.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cc|c|c} %
Survey &
\multicolumn{2}{c|}{Fraction of sky observed} &
Mean cadence &
Mean observed
\\
method &
1st month &
whole year &
(days) &
airmass
\\
\midrule
Meridian \SI{\pm5}{\degree} & 39.4\% & 76.5\% & $6.2\pm0.5$ & $1.2\pm0.2$ \\
Meridian \SI{\pm10}{\degree} & 41.3\% & 76.5\% & $6.6\pm0.7$ & $1.2\pm0.3$ \\
Meridian \SI{\pm30}{\degree} & 48.5\% & 76.5\% & $7.9\pm0.8$ & $1.3\pm0.3$ \\
Meridian \SI{\pm45}{\degree} & 53.3\% & 76.5\% & $8.8\pm0.9$ & $1.3\pm0.3$ \\
No limit & 57.0\% & 76.5\% & $10.1\pm1.8$ & $1.6\pm0.3$ \\
\end{tabular}
\end{center}
\caption[Comparison of survey simulations using a meridian limit]{
Comparison of 1N4 survey simulations using different meridian limits.
}\label{tab:survey_sim_meridian}
\end{table}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[height=140pt]{images/survey_sims/365_1N4_lite_airmass3.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}\vspace{10pt}
\includegraphics[height=140pt]{images/survey_sims/365_1N4_meridian_airmass3.png}
\end{minipage}
\end{center}
\caption[Airmass distribution over a year of observations]{
Airmass distribution over a year of observations with the 1N4 system, for the normal unlimited case (left) and limited to the observer's meridian $\pm\SI{10}{\degree}$ (right).
}\label{fig:survey_sim_airmass_365}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/survey_sims/30_1N4_lite_airmass.png}
\end{center}
\caption[Mean observation airmasses for the 1N4 survey simulation]{
Mean observation airmasses for the first month of the 1N4 survey simulation, with no hour angle limit.
}\label{fig:survey_sim_airmass_normal}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/survey_sims/30_1N4_meridian_airmass.png}
\end{center}
\caption[Mean observation airmasses using the meridian scanning method]{
Mean observation airmasses for the first month of a survey using the meridian scanning method, restricting observations to tiles within $\pm\SI{10}{\degree}$ of the observer's meridian. Using the hour angle limit it is possible to optimise the airmass of observations, but at the cost of sky coverage.
}\label{fig:survey_sim_airmass_meridian}
\end{figure}
\end{colsection}
\section{Summary and Conclusions}
\label{sec:multiscope_conclusion}
\begin{colsection}
In this chapter I examined different possible expansion options for the GOTO project, and their effect on the core science.
The intention of the GOTO project has always been to include two complementary sites in the northern and southern hemispheres: La Palma was the natural choice for the north, and a second site in Australia would provide an ideal counterpart. The plan is to have four telescopes in total, two at each site, and all four will operate together as a single global observatory. The core G-TeCS control software outlined in \aref{chap:gtecs} and \aref{chap:autonomous} can be fairly easily duplicated for each system, but in order to achieve optimal coordination between the sites the G-TeCS scheduling system detailed in \aref{chap:scheduling} will need to be expanded to schedule all the active telescopes at once. This is given as a major area of future work in \aref{chap:conclusion}.
In order to examine different possible GOTO configurations, and to make the case for the full deployment described above, I carried out two major series of simulations. The first focused on the combined system's ability to follow-up gravitational-wave alerts, and determined that expanding to the southern hemisphere provides a large improvement to the ability to observe counterparts --- purely by allowing more of the sky to be surveyed. The second set of simulations made the case for hosting two telescopes at each site, as only then will the fast cadences required to reject candidates be achieved. Together the full proposed GOTO system, four 40 square degree field of view mounts located across the two sites, will be a world-leading facility, able to observe the entire visible sky every 1--2 days and provide the best chance to locate and identify optical counterparts to future gravitational-wave detections.
\end{colsection}
\chapter{Scheduling Observations}
\label{chap:scheduling}
\chaptoc{}
\section{Introduction}
\label{sec:scheduling_intro}
\begin{colsection}
Completing the chapters describing the core functions of the GOTO Telescope Control System, in this chapter I detail how the robotic system decides which targets to observe.
\begin{itemize}
\item In \nref{sec:ranking} I describe the functions used by the G\nobreakdash-TeCS scheduler to choose between targets and decide which is the highest priority.
\item In \nref{sec:scheduler_tiebreaker} I examine how the ``tiebreak'' value is calculated to sort between equally-ranked targets.
\item In \nref{sec:scheduler_sims} I describe how optimal tiebreak weighting parameters were determined, by running simulations of the G-TeCS system observing gravitational-wave events.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated. The first two sections are based on the description of the scheduling functions in \citet{Dyer}.
\end{colsection}
\section{Determining target priorities}
\label{sec:ranking}
\begin{colsection}
GOTO operates under a ``just-in-time'' scheduling model \citep[see, for example,][]{LCO_scheduling}, rather than creating a plan at the beginning of the night of what to observe \citep[see, for example,][]{ZTF_scheduler}. Each time the pilot queries the scheduler the current queue of pointings is imported and the priority of each is calculated, with no explicit consideration for the past or future (aside from the ``mintime'' constraints, as described below). The highest priority pointing is then returned, as described in \aref{sec:scheduler}.
This system is very reactive to any incoming alerts, as the new pointings will immediately be included in the queue at the next check. This method also naturally works around any delay in observations due to poor conditions, unlike a fixed night plan. The just-in-time method can be less efficient than a night plan when observing predefined targets which can be deliberately optimised before the night starts. However the just-in-time system is perfectly reasonable for the all-sky survey GOTO is normally observing, and any other observations will be alerts entered by the sentinel daemon which could not be planned for, so it was determined to be the best option for GOTO.\@
Each time the scheduler functions are called several steps need to be carried out. The first of these is to fetch the current queue from the observation database (see \aref{sec:obsdb}). This is done by querying the database \texttt{pointings} table for any entries that have the \texttt{pending} status. Additional filters are also applied in order to reduce the number of invalid pointings imported: restricting the query to pointings within the visible region of the sky (based on the time and observatory location) and within the pointing's valid period (the start time has passed and stop time has not yet been reached). Any entries in the table that pass these filters make up the \emph{pointings queue}.
In order to find which pointing is the highest priority, the queue is sorted using a variety of parameters, with the pointing sorted at the top being returned by the scheduler. The sorting criteria are outlined in the following sections.
\end{colsection}
\subsection{Applying target constraints}
\label{sec:constraints}
\begin{colsection}
The first consideration is determining which pointings are currently valid. As described in \aref{sec:obsdb}, pointings have limits defined for physical constraints (minimum target altitude, minimum Moon separation, maximum Moon illumination, maximum Sun altitude). These constraints are calculated and applied to the pointings using the Astroplan Python package \citep[\texttt{astroplan}\footnote{\url{https://astroplan.readthedocs.io}},][]{astroplan}. The target altitude and Moon separation constraints depend on the position of the target, both the altitude constraints depend on the site the observations are being taken from, and all four constraints depend on the current time. Each constraint is applied both at the current time and after the minimum observing time defined for each pointing. This ensures that, for example, targets that are setting are visible throughout their observing period by checking the altitude is above the minimum both at the beginning and end of the observation. The minimum time constraints are not applied to the pointing currently being observed (if any), as the pointing will already be part way through and will have already been passed as valid. The validity of the pointings is a simple boolean flag (True or False), and invalid pointings are naturally sorted below valid ones.
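The check-at-both-ends logic can be illustrated with a minimal pure-Python sketch. The real system uses Astroplan constraint objects; the function and field names here are hypothetical, with constraints reduced to plain callables:

```python
def pointing_is_valid(constraints, pointing, now, currently_observing=False):
    """Sketch of the validity check described above.

    `constraints` is a list of functions (pointing, time) -> bool, standing
    in for the Astroplan constraint objects.  Each constraint is checked
    both at the current time and after the pointing's minimum observing
    time, so e.g. a setting target must stay above the altitude limit for
    the whole observation.  The mintime check is skipped for the pointing
    currently being observed, which has already been passed as valid.
    """
    times = [now] if currently_observing else [now, now + pointing["mintime"]]
    return all(check(pointing, t) for check in constraints for t in times)
```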
\end{colsection}
\subsection{Effective rank}
\label{sec:rank}
\begin{colsection}
The next order pointings are sorted by is the effective rank of the pointing, which is a combination of the integer starting rank the pointing was inserted with and the number of times it has since been observed.
The starting rank is fixed when the pointing is created in the observation database: every pointing is given an integer rank between 0 and 999. The highest and lowest ranks are reserved for particular classes of targets. Rank 0 is not intended to be used under normal circumstances: it is reserved for exceptional events, such as a local galactic supernova, as a pointing with rank 0 would outrank all other pointings including even gravitational-wave events. At the other end of the scale, rank 999 is reserved for the all-sky survey tiles, so that they are sorted below all other pointings. These pointings act as ``queue fillers'' in the system, ensuring there is always something for the telescope to observe. All other ranks are otherwise available, although by convention ranks ending in 1--5 are used for gravitational-wave events, 6--8 for other transient events (e.g. GRBs) and 9 for other fixed targets. See \aref{sec:event_strategy} for the details of determining the rank for different transient events.
Added to the starting rank is a count of the number of times that a target has been observed, based on the number of pointings previously associated with a given mpointing (see \aref{sec:obsdb}). This count only includes successful observations, so pointings that were interrupted or aborted are not included. The starting rank ($R_s$) and observation count ($n_\text{obs}$) are added to create the effective rank $R$ given by
\begin{equation}
R = R_s + 10\times n_\text{obs}.
\label{eq:effective_rank}
\end{equation}
This formula means a pointing with a starting rank of 2 that has been observed five times will have an effective rank of 52. Effective ranks are sorted in reverse order, so a rank-5 pointing that has been observed once (an effective rank of 15) will be a higher priority target than a rank-4 pointing that has been observed twice (an effective rank of 24). This system allows for a natural filtering of targets, as targets will move down the queue as they are observed. For example, pointings from a gravitational-wave event might be inserted into the database at rank 2, so will first appear in the queue with effective rank of $R=2$ ($n_\text{obs}=0$). The first pointing that is observed will reappear with $R=12$, and therefore be sorted below those tiles that have not yet been observed. Once all the pointings have been observed once they will all have effective rank 12 and the process repeats, with each pointing falling to effective rank 22, 32 etc. As the increase is by 10 each time, pointings from other events, or which were manually inserted, might also be in the queue and interweave between the event follow-up pointings. For example, a manual observation might be inserted at rank 9, meaning it will fall below the first observation of the gravitational-wave tiles at $R=2$ but will take priority over subsequent observations. To prevent this, the manual observation could be inserted at rank 19 to come after two gravitational-wave observations, or even 509 to completely ensure it does not interfere with the gravitational-wave follow-up targets.
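\aref{eq:effective_rank} and the reversed sort order translate directly into code. As a minimal illustrative sketch (the function name is hypothetical):

```python
def effective_rank(start_rank, n_obs):
    """Effective rank R = R_s + 10 * n_obs; lower values are higher priority."""
    return start_rank + 10 * n_obs

# The worked example from the text: a rank-5 pointing observed once
# outranks a rank-4 pointing observed twice (15 < 24).
assert effective_rank(5, 1) < effective_rank(4, 2)
```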
\end{colsection}
\subsection{Targets of Opportunity}
\label{sec:toos}
\begin{colsection}
For pointings with the same effective rank the next sorting parameter is the \glsfirst{too} flag assigned to the pointing when it was inserted into the database. The flag is simply a boolean value that is true if the target is a ToO and false if it is not, and pointings that have the flag as true are sorted higher than those of the same rank that are not ToOs. This ensures that time-sensitive targets are prioritised ahead of other targets at the same rank, although it is important to remember that the effective rank does still take priority (this means a ToO at rank 4 will be sorted above any other rank 4 pointings, but will still be a lower priority than a non-ToO at rank 3).
\end{colsection}
\subsection{Breaking ties}
\label{sec:breaking_ties}
\begin{colsection}
Finally, if there are multiple pointings with the same values for the above parameters then a single \emph{tiebreak} value is calculated for each. This value is based on the current airmass of the pointing and the weighting of the survey tile the pointing is linked to, if any; pointings at lower airmass (closer to the zenith) and with higher tile weightings are higher priority. For example, if two new gravitational-wave pointings with equal ranks both contain the same skymap probability (see \aref{sec:skymaps}), then the one at the lower airmass at the time of the check will be prioritised.
\newpage
To calculate the tiebreak value, both the tile weighting ($W$) and airmass ($X$) values need to be scaled between 0 and 1. This is true by definition for the tile weights, while the airmass is scaled so airmasses 1 and 2 are set to 1 and 0 respectively (airmasses greater than 2 are set to zero). The parameters are then combined to form the tiebreak value $V$ in a ratio 10:1 using
\begin{equation}
V = \frac{10}{11}~W + \frac{1}{11}~(2 - X).
\label{eq:tiebreak}
\end{equation}
This ensures the tiebreak value is also between 0 and 1, with higher values being preferred. The best possible scenario is a tile which contains 100\% of the skymap localisation probability ($W=1$) and is exactly at zenith ($X=1$) which gives a tiebreak value $V=1$. How this tiebreak formula was determined is described in \aref{sec:scheduler_tiebreaker}. Note that \aref{eq:tiebreak} is just \aref{eq:wa_ratio} using a ratio of 10:1, which was determined based on the scheduling simulations described in \aref{sec:scheduler_sims}.
In the unlikely event that two pointings are still tied, all other parameters (rank, ToO flag) being otherwise equal, and they have exactly the same tiebreak value, then whichever was inserted into the database first (and therefore has a lower database ID) by default comes first in the queue.
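The tiebreak calculation itself is compact. As a sketch (hypothetical names, with the clamping behaviour described above):

```python
def tiebreak(weight, airmass):
    """Combine tile weight W and the scaled airmass in a 10:1 ratio.

    W is already in [0, 1].  The airmass term (2 - X) maps airmass 1
    (zenith) to 1 and airmass 2 (the 30-degree horizon limit) to 0;
    anything beyond the limit is clamped to 0.
    """
    scaled_airmass = max(0.0, 2.0 - airmass)
    return (10.0 * weight + scaled_airmass) / 11.0

# The best case from the text: W = 1 at zenith (X = 1) gives V = 1.
```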
\end{colsection}
\subsection{Queue sorting example}
\label{sec:sorting_example}
\begin{colsection}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|l|c|ccc|c|ccc} %
& Name & Valid & $R_s$ & $n_\text{obs}$ & Eff.\ rank & ToO & $W$ & $X$ & Tiebreaker \\
\midrule
1 & GW191202 P3 & \textcolor{Green}{Y} & 2 & 0 & 2 & \textcolor{Green}{Y} & 0.10 & 1.1 & 0.173 \\
2 & GW191202 P4 & \textcolor{Green}{Y} & 2 & 0 & 2 & \textcolor{Green}{Y} & 0.05 & 1.1 & 0.127 \\
3 & M101 & \textcolor{Green}{Y} & 9 & 0 & 9 & \textcolor{Red}{N} & 1 & 1.5 & 0.955 \\
4 & GW191202 P2 & \textcolor{Green}{Y} & 2 & 1 & 12 & \textcolor{Green}{Y} & 0.30 & 1.1 & 0.355 \\
5 & AT 2019xyz & \textcolor{Green}{Y} & 6 & 2 & 26 & \textcolor{Green}{Y} & 1 & 1.4 & 0.964 \\
6 & M31 & \textcolor{Green}{Y} & 16 & 1 & 26 & \textcolor{Red}{N} & 1 & 1.2 & 0.982 \\
7 & All-sky T0042 & \textcolor{Green}{Y} & 999 & 0 & 999 & \textcolor{Red}{N} & 1 & 1.0 & 1.000 \\
\vdots & & & & & & \\
& GW191202 P1 & \textcolor{Red}{N} & 2 & 0 & 2 & \textcolor{Green}{Y} & 0.55 & 1.1 & 0.582 \\
& All-sky T0123 & \textcolor{Red}{N} & 999 & 0 & 999 & \textcolor{Red}{N} & 1 & 2.0 & 0.909 \\
\end{tabular}
\end{center}
\caption[Examples of sorting pointings by priority]{
Some examples of a queue of pointings sorted by priority. Pointings are first sorted by validity, with invalid pointings shown at the bottom of the queue. Then pointings are sorted by effective rank, which is comprised of the starting rank ($R_s$) and the observation count ($n_\text{obs}$). Pointings with the same effective rank are sorted based on if they are targets of opportunity or not, with ToOs being ranked higher. Finally pointings with all other factors being equal are ranked by the tiebreaker value, combining tile weighting ($W$) and the current airmass ($X$) using \aref{eq:tiebreak}.
}\label{tab:priority}
\end{table}
In order to show how the above sorting methods are applied in practice, an example queue of pointings is shown in \aref{tab:priority}. The current highest-priority pointing is one of four pointings from a fictional gravitational-wave event, GW191202. At the top of the queue are two of these pointings, marked as P3 and P4. Both are valid, both have the same starting rank (2) and neither have been observed yet ($n_\text{obs}=0$). They are also both at the same airmass (1.1), but as P3 has a higher tile weighting (containing 10\% of the skymap probability compared to 5\% for P4) it has a higher tiebreak value and is therefore sorted higher. Therefore P3 would be returned by the scheduler.
As a demonstration, the rest of the queue is also shown. The gravitational-wave pointing containing the highest probability, P1, is unfortunately not valid and is therefore at the bottom of the queue (but still above other invalid pointings). The second highest, P2, has already been observed once and therefore has an effective rank of 12. This puts it below a non-ToO pointing of M101 which has a lower starting rank, 9 compared to 2, but has not yet been observed and is therefore sorted higher. The other non-survey pointings in the queue are a pointing of a transient event, AT 2019xyz, and one of M31. Both are valid and have the same effective rank of 26, but the transient is a target of opportunity and therefore is sorted higher. This is true even though it is at a worse airmass, as the ToO sorting takes priority over the tiebreak; had they both (or neither) been ToOs then the M31 pointing would have been higher. Finally below those pointings is the first of the all-sky survey pointings. These will only be the highest priority if there are no other valid pointings above them, which for GOTO is actually most of the time.
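Put together, the whole ordering can be expressed as a single Python sort key. The following is a sketch with hypothetical dictionary fields, not the actual G-TeCS code:

```python
def sort_key(p):
    """Sort key reproducing the queue ordering described in this section."""
    return (not p["valid"],               # valid pointings first
            p["rank"] + 10 * p["n_obs"],  # effective rank, ascending
            not p["too"],                 # ToOs before non-ToOs at equal rank
            -p["tiebreak"],               # higher tiebreak values first
            p["id"])                      # earliest database entry wins a dead heat

# The top of the example table: P3 outranks P4 purely on its tiebreak value.
queue = [
    {"id": 4, "valid": True, "rank": 2, "n_obs": 0, "too": True, "tiebreak": 0.127},  # P4
    {"id": 3, "valid": True, "rank": 2, "n_obs": 0, "too": True, "tiebreak": 0.173},  # P3
]
queue.sort(key=sort_key)
```

Because Python compares tuples element by element, each criterion only comes into play when all the earlier ones are equal, exactly as in the prose description.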
\end{colsection}
\section{Calculating the tiebreaker}
\label{sec:scheduler_tiebreaker}
\begin{colsection}
The scheduler weights pointings in the current queue by several parameters, as described in the previous section: the assigned rank, the number of times it was previously observed, if it is a target of opportunity or not. But in practice most of the time the queue will contain a large number of pointings where these values are all the same. For example, when a new gravitational-wave event is processed by the sentinel the GOTO-alert event handler adds in a large number of pointings based on tiles from the skymap (see \aref{sec:event_insert}). On the next scheduler check the queue will be populated by a large number of pointings each with the same rank and ToO flag which have never been observed. Likewise, when observing the all-sky survey the queue will be filled with tiles that have all been observed the same number of times. This is why the scheduler then needs a further way to distinguish between pointings, which is known as the tiebreaker.
The older pt5m scheduling code that G-TeCS is based on (see \aref{sec:pt5m}) used only a single tiebreak parameter to decide between equally-ranked pointings: the airmass of each target at the midpoint of the observation. This works well to prioritise getting the best data quality, assuming the two targets are otherwise identical. However, when adapting the pt5m system for GOTO it was clear there was the need for an additional parameter in the scheduling functions, to encode the relative weights of a set of pointings.
\end{colsection}
\subsection{Tile weighting}
\label{sec:weights}
\begin{colsection}
All the pointings added from a gravitational-wave skymap (or similar event such as a gamma-ray burst) will have the same rank, but they will have different weights from the amount of skymap probability they each contain (how event skymaps are mapped onto the tile grid is described in \aref{chap:tiling}). It makes sense that, of all the tiles from a given event, the ones with higher probability should be the ones to prioritise and observe first.
However, unlike an integer parameter such as the rank or the True/False ToO flag, the tile weights cover a wide range and often there will only be a small amount of difference between the values for neighbouring tiles. Prioritising a tile that contains 2.71\% of the skymap over another that contains 2.70\% in all cases is not the best strategy, especially if the latter is close to zenith while the former is low down close to the horizon. Observing a high-airmass tile over a low-airmass one for a gain of only 0.01\% probability is a poor choice, especially if the former tile is currently rising and will be at a better altitude in a few hours. For these reasons it was decided that the skymap probability weighting should be considered at the same level as the airmass tiebreaker, meaning a lower-airmass tile with only a slightly lower probability will be prioritised over one further from zenith. This should only be true up to a reasonable limit, however, as in the case of two tiles where one contains a probability of 95\% and the other 3\% it should always be true that observing the former is the better choice, even if it has a slightly worse airmass.
It should be noted that skymap probability is not necessarily the only way for a group of tiles to be weighted. In the past GOTO has carried out more focused surveys: when carrying out a galaxy-focused survey, for example, tiles were weighted by the sum of the magnitudes of all galaxies within them. The only requirement is that each tile has a weighting of between 0 and 1, relative to the other tiles added in that survey. For non-survey pointings, such as a single observation of a particular target, the weight is set to 1 (this can be considered as that tile having a 100\% chance of containing the target). This is also true for survey pointings where all tiles are weighted equally, such as the all-sky survey. This can be seen in the example \aref{tab:priority} in the previous section, as all the non-gravitational-wave pointings have $W=1$.
\newpage
\end{colsection}
\subsection{Combining tile weight and airmass}
\label{sec:wa}
\begin{colsection}
As mentioned previously, the pt5m system uses airmass as the sole tiebreak parameter. For the G-TeCS scheduler it was instead decided to create a new tiebreak value, which would combine both the tile weight, as described above, and the airmass of the target at the time the scheduler check was carried out. This allows the scheduler to take into account both parameters when deciding between otherwise-equal pointings.
Airmass is usually modelled using a plane-parallel atmosphere, which gives
\begin{equation}
X = \sec{z},
\label{eq:airmass}
\end{equation}
where $X$ is airmass and $z$ is the zenith distance ($z=90-h$, where $h$ is the altitude of the target). Targets are best observed at low airmasses, in order to get the best data quality.
In order to combine both weight and airmass into a single tiebreak value it was decided to scale both between 0 and 1, and then combine them such that the final tiebreak value $V$ was also between 0 and 1. This would then be sorted so that higher values are prioritised, as described previously in \aref{sec:ranking}. Helpfully, the tile weights are already defined as being between 0 and 1. In order to scale the airmasses it was decided that the airmass of a target that is at or below the horizon limit should be set to 0, and any target that was at the zenith ($h=90$, so $X=1$) should be set to 1. For GOTO the horizon limit is \SI{30}{\degree}, which corresponds to an airmass limit of 2. The final tiebreak value is defined using
\begin{equation}
V = \frac{w}{w+a}~W + \frac{a}{w+a}~(2-X),
\label{eq:wa_ratio}
\end{equation}
where the balance of tile weight $W$ to airmass $X$ is described by the ratio $w$:$a$. Using a ratio of 10:1, i.e.\ prioritising the tile weight 10 times more than the airmass, produces \aref{eq:tiebreak}.
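As an illustration, the airmass scaling and the combined tiebreak value described above could be computed as follows (a minimal Python sketch, not the actual G-TeCS implementation; the function names are placeholders):

```python
import math

def scaled_airmass(altitude_deg, horizon_deg=30.0):
    """Scale airmass onto [0, 1]: 0 at or below the horizon limit, 1 at zenith.

    Uses the plane-parallel approximation X = sec(z); the 30-degree horizon
    limit corresponds to an airmass limit of 2.
    """
    if altitude_deg <= horizon_deg:
        return 0.0
    zenith_distance = math.radians(90.0 - altitude_deg)
    airmass = 1.0 / math.cos(zenith_distance)  # X = sec(z)
    return 2.0 - airmass                       # X = 1 -> 1, X = 2 -> 0

def tiebreak_value(weight, altitude_deg, w=10, a=1):
    """Combine tile weight W and scaled airmass in the ratio w:a (here 10:1)."""
    scaled = scaled_airmass(altitude_deg)
    return (w / (w + a)) * weight + (a / (w + a)) * scaled
```

With the 10:1 ratio, a tile of weight 1 at the zenith scores the maximum value of 1, while the same tile at the \SI{30}{\degree} horizon limit scores only $\frac{10}{11}$ of its weight.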
\newpage
\end{colsection}
\subsection{Time-to-set}
\label{sec:tts}
\begin{colsection}
The definition of airmass given in \aref{eq:airmass} is, as would be expected, symmetric around the zenith. Scheduling using this parameter is therefore a problem for the following reason: consider two targets with equal or similar contained skymap probabilities, but one is \SI{5}{\degree} above the horizon in the west and the other is \SI{5}{\degree} above the horizon in the east. They have the same airmass value, but due to the rotation of the Earth the one in the west will be setting while the one in the east is rising. A good scheduling system should prioritise observing the target in the west, as unless it is observed quickly it will pass below the horizon and no longer be visible for the remainder of the night (assuming it is not circumpolar; see below).
In order to address this problem, a new parameter was required that prioritises targets that are about to set. This new parameter is called `time-to-set', and is simply the time until the target sets below the defined horizon. The units are arbitrary, but as it repeats with a period of 24 hours the time-to-set value is normalised between 0 and 1, so that it is 1 when the target is at the horizon about to set, 0.5 when it is 12 hours from setting and 0 when it is 24 hours from setting (there is therefore a degeneracy at the horizon, where the value resets from 1 to 0). \aref{fig:airmass_tts} shows how the airmass and time-to-set values change between 0 and 1 over the course of a day for any non-circumpolar target. Circumpolar targets are ones that never set below the horizon, and for these targets the time-to-set is undefined. However, the North Celestial Pole is just below the \SI{30}{\degree} horizon limit of GOTO from La Palma, so this is not a concern at present.\@ This may need to be reconsidered depending on which site is picked for GOTO's future southern node, see \aref{sec:multi_site_scheduling}.
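Following the saw-tooth behaviour shown in \aref{fig:airmass_tts} (the scaled value increases linearly to 1 as the target approaches the horizon, then resets to 0), the normalisation could be sketched as (hypothetical code, not the G-TeCS source):

```python
def scaled_time_to_set(hours_until_set):
    """Normalise time-to-set onto [0, 1] over a 24-hour period.

    A target about to set scores close to 1; one that has just set (24 hours
    until the next setting) scores 0, giving the saw-tooth reset at the horizon.
    """
    hours = hours_until_set % 24.0
    return 1.0 - hours / 24.0
```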
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/airmass-tts.png}
\end{center}
\caption[Plotting scaled airmass and time-to-set values for a target]{
Plotting scaled airmass and time-to-set values for a target over 24 hours.
The altitude of the target is shown by the \textcolorbf{NavyBlue}{blue} dashed line, and the \textcolorbf{Gray}{grey} regions show the times when the target is below the \SI{30}{\degree} horizon altitude limit (the black dotted line).
The two tiebreak values are also plotted, scaled between 0 and 1: airmass (in \textcolorbf{Orange}{orange}) is set to 0 when the target is at airmass 2 or below, and the time-to-set (in \textcolorbf{Green}{green}) linearly increases to 1 until the target passes below the horizon and it is reset to 0.
}\label{fig:airmass_tts}
\end{figure}
Just replacing airmass in the tiebreaker calculation with the time-to-set would not produce good results, as the telescope would be prioritised to observe the western horizon continuously (as by design targets that are just about to set have the highest scaled time-to-set values). As when including the skymap probability, a weighted combination of both airmass and time-to-set would be best in order to take both parameters into account. This new parameter, $Z$, can be considered using the equation
\begin{equation}
Z = \frac{a}{a+t}~(2-X) + \frac{t}{a+t}~T,
\label{eq:at_ratio}
\end{equation}
where again the weighting factors $a$ and $t$ describe the relative balance between the airmass ($X$) and time-to-set ($T$) in the ratio $a$:$t$. \aref{fig:at_ratio} shows how different ratios produce different distributions: a ratio of 1:0 only considers the airmass, a ratio of 1:1 weights the two equally and a ratio of 0:1 only considers the time-to-set. As the ratio is increased in favour of time-to-set (i.e.\ $t$ is larger than $a$) the peak of the distribution shifts to the right, which will favour targets that are setting over those at the zenith. By using both airmass and time-to-set values in the tiebreaker formula the scheduler should prioritise observing targets that are about to set but are still at a reasonable airmass.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/at_ratio.png}
\end{center}
\caption[Combining airmass and time-to-set values]{
Combining airmass and time-to-set values in different ratios using \aref{eq:at_ratio} to create a new distribution. This plot uses the same target as \aref{fig:airmass_tts}, but focusing the $x$-axis on the time when the target is above the horizon.
}\label{fig:at_ratio}
\end{figure}
\newpage
Previously, the scheduler tiebreak value $V$ was calculated using equation \aref{eq:wa_ratio} with just the tile weight ($W$) and the airmass ($X$). In order to determine how including the time-to-set ($T$) would affect the scheduler this equation can be rewritten as
\begin{equation}
V = \frac{w}{w+a+t}~W + \frac{a}{w+a+t}~(2-X) + \frac{t}{w+a+t}~T,
\label{eq:wat}
\end{equation}
where $w$, $a$ and $t$ are the relative weighting factors for the tile weight, airmass and time-to-set respectively (usually written in the form $w$:$a$:$t$). Using this equation a series of simulations were carried out using the GOTO scheduling code to determine optimal values for $w$, $a$ and $t$, as described in \aref{sec:scheduler_sims}. Ultimately a ratio of 10:1:0 was selected; by setting $t=0$ the time-to-set value is ignored, as it was found to only hinder the scheduler performance. This is why \aref{eq:tiebreak} in \aref{sec:breaking_ties} only includes the tile weight and airmass values.
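The full tiebreak calculation in \aref{eq:wat} amounts to a weighted mean of the three scaled parameters; a minimal sketch (the function name is a placeholder, not the real scheduler code):

```python
def tiebreak_wat(W, X, T, w=10, a=1, t=0):
    """Weighted mean of tile weight W, scaled airmass (2 - X) and scaled
    time-to-set T, in the ratio w:a:t (default 10:1:0, the adopted values)."""
    total = w + a + t
    return (w * W + a * (2.0 - X) + t * T) / total
```

With the adopted $t=0$ this reduces to the two-parameter tile-weight and airmass formula used by the scheduler.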
\end{colsection}
\section{Scheduler simulations}
\label{sec:scheduler_sims}
\begin{colsection}
When the scheduling functions described in the previous sections were written, it was not clear how best to combine the selected parameters (tile weight, airmass and time-to-set) to create a single tiebreak value. In order to examine how different weightings of the three parameters affected the scheduler performance, a series of simulations were carried out. These simulations, and their results, are described in this section.
\end{colsection}
\subsection{Simulating GOTO}
\label{sec:goto_sims}
\begin{colsection}
The G-TeCS control system described in \aref{chap:gtecs} and \aref{chap:autonomous} contains all of the code used to operate the GOTO telescope, as well as test code down to the level of the individual daemons and hardware units. This makes it possible to simulate, for example, the camera daemon taking exposures using fake hardware code that waits in real time until the exposure time is completed (plus some readout time) and then creates a blank FITS image file with all of the expected header information. On top of this the real pilot can run without knowing these daemons are fake, and the real sentinel daemon can add real or simulated events into a copy of the observation database for the real scheduler to choose between. In this way the entire control system can be run without connecting it to any real hardware.
However, the fully-featured test suite described above is not necessary for the simulations described in this section: ideally they would run faster than real time, and it is not necessary to simulate the full hardware system down to fake images being created, but the intention is still to model the response of the real control system as closely as possible. For these simulations anything below the pilot (i.e.\ the hardware daemons, as shown in \aref{fig:flow}) is abstracted away, and the pilot itself is replaced by a new specialised script simply called the fake pilot. This mirrors the real code described in \aref{sec:pilot} in most ways; however, there are several important simplifications:
\begin{itemize}
\item The fake pilot does not call the scheduler daemon in order to find what pointing to observe, but instead imports and runs the scheduling functions itself. This was the original way the pilot ran before the scheduler was split into a separate daemon, as described in \aref{sec:scheduler}.
\item The fake pilot does not include the full conditions monitoring system described in \aref{sec:conditions}. The \texttt{check\_conditions} routine still exists in order to stop observations when the Sun has risen, but is just a single function check. Code was written to simulate weather closing the dome using random Gaussian processes; however, the scheduling simulations described here only consider a single night of observations, and including random weather effects only distracted from their purpose of modelling the scheduler response.
\item The night marshal and any observing tasks other than actually observing the scheduler target (e.g.\ autofocusing) have been removed from the fake pilot. Observations start immediately after sunset and continue to sunrise, and then when the dome is closed the pilot loop continues until the simulation has completed.
\item While the real pilot works using loops that sleep until a given time has passed, the fake pilot contains an internal time which is increased for each `step' in the simulation. At each step the script checks the scheduler using the normal commands. One important factor in speeding the simulations up is to increase this step size, up to the point that each observation takes a single step. This means that at each step the pilot will observe a new target, and increase the internal simulation time by the appropriate amount. If, for example, the target pointing asks for three \SI{60}{\second} exposures the simulation time would increase by 3 minutes, plus extra time for readout and slewing to the target before the exposures start.
\end{itemize}
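The stepping behaviour described in the final point can be sketched as follows (a simplified illustration, not the real fake pilot code; the overhead values and dictionary keys are assumptions):

```python
from datetime import datetime, timedelta

READOUT_TIME = timedelta(seconds=30)  # assumed per-exposure readout time
SLEW_TIME = timedelta(seconds=60)     # assumed mean slew time to the target

def observe_step(sim_time, pointing):
    """Observe one pointing in full and return the advanced simulation time."""
    n = pointing['num_exp']
    exposures = timedelta(seconds=n * pointing['exptime'])
    return sim_time + SLEW_TIME + n * READOUT_TIME + exposures
```

For the example in the text (three \SI{60}{\second} exposures) the internal clock would jump forward in a single step by the 3 minutes of exposures plus the assumed readout and slew overheads.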
\newpage
\end{colsection}
\subsection{Simulation results}
\label{sec:scheduler_sim_results}
\begin{colsection}
A series of simulations using the fake pilot code described above were carried out in order to find optimal values for the weights ($w$, $a$ and $t$) given in \aref{eq:wat}, and to see how the telescope response changes depending on the values used.
It should be noted that these simulations were carried out in 2016, early in the development of G-TeCS and before the first GOTO telescope had been commissioned on La Palma. They therefore included assumptions for values such as the field of view of the telescope (affecting the tile size), mount slew speed and readout time. In addition, most of the code to handle gravitational-wave skymaps detailed in \aref{chap:alerts} had not yet been written and the event follow-up strategy had not been fully defined.
In order to run the simulations, a selection of model skymaps from the LIGO First Two Years project \citep{First2Years} were manually processed to generate a series of tiled pointings, which were then added to the observation database. The fake pilot script was then run to simulate one night of observations using set values for the $w$:$a$:$t$ ratio. Once completed the tiles observed were recorded, and then the database was reset, the ratio changed and the simulations repeated.
Two metrics were used to judge the effectiveness of the scheduler response: the mean airmass of each tile when observed, and the fraction of the skymap probability covered (i.e.\ the total contained probability within all observed tiles). As simulated skymaps were being used the location of the source of the gravitational-wave signal was known, so it was possible to record whether or not the tile containing the source was observed. However, for these simulations the overall response, in terms of skymap coverage, was deemed a better indicator of the scheduler performance than simply whether the source was observed (for example, the source might not have even been visible from La Palma). The later, more advanced simulations described in \aref{chap:multiscope} go into more detail about the source location and the probability that the source position is observed.
\newpage
\end{colsection}
\subsection{Analysis of simulation results}
\label{sec:scheduler_sim_analysis}
\begin{colsection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/sched_sim1.png}
\end{center}
\caption[Skymap coverage versus mean airmass for different $w$:$a$:$t$ ratios]{
The fraction of the event skymap covered versus the mean observation airmass for three different $w$:$a$:$t$ ratios: 1:0:0 (\textcolorbf{Purple}{purple}), 0:1:0 (\textcolorbf{Orange}{orange}) and 0:0:1 (\textcolorbf{Green}{green}). Each point represents a simulation using one of the First Two Years skymaps. The stars show the average position for each ratio, with the error bars showing the standard deviation. The shaded region shows the area of \aref{fig:scheduler_sim_results2}.
}\label{fig:scheduler_sim_results1}
\end{figure}
The simulation results for three different ratios are plotted in \aref{fig:scheduler_sim_results1}. Each coloured point represents a single simulation of a night observing a single skymap. There is a large range of results: some simulations covered close to 100\% of the skymap while others covered almost none. Only simulations that included observing at least one skymap tile are included, otherwise the mean observed airmass would be undefined. The stars show the average position for each $w$:$a$:$t$ ratio. Although the errors are clearly large, some trends are clear. For example, the 0:1:0 ratio (only including the airmass weighting) on average produces a better mean airmass than the others, as would be expected. Likewise the 1:0:0 case (only including the tile weight) on average results in a slightly higher fraction of the skymap being observed.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/sched_sim2.png}
\end{center}
\caption[Scheduler simulation results for different $w$:$a$:$t$ ratios]{
The fraction of the event skymap covered versus airmass for multiple different $w$:$a$:$t$ ratios. Each point here shows the average over multiple simulations of different skymaps, and the ratios shown by stars are the same as plotted in \aref{fig:scheduler_sim_results1}. The dashed coloured lines join results with the same $a$:$t$ ratio; 1:0 in \textcolorbf{Orange}{orange}, 1:1 in \textcolorbf{NavyBlue}{blue}, and 0:1 in \textcolorbf{Green}{green}; and the \textcolorbf{Gray}{grey} dashed line joins ratios with $w=0$.
}\label{fig:scheduler_sim_results2}
\end{figure}
Simulations were repeated for multiple different $w$:$a$:$t$ ratios, and the averages for each are plotted in \aref{fig:scheduler_sim_results2}. This plot shows that, while there is a lot of underlying scatter between different skymaps as shown in \aref{fig:scheduler_sim_results1}, the mean positions for each ratio follow remarkably smooth trends (error bars are omitted from \aref{fig:scheduler_sim_results2} as they would spread off the page in both axes). For example, increasing the tile weight value $w$ relative to the other two parameters consistently increases the mean fraction of the skymap covered; however, this comes at a cost to the mean airmass of the observations.
Changing the relative ratios of airmass to time-to-set (from 1:0 through 1:1 to 0:1) shows an unexpected result. It is true that the observed airmasses would be worse as less weight is put on the airmass parameter; however, it was intended that introducing the time-to-set parameter would compensate by catching more setting tiles that would be missed by purely looking around the zenith, and therefore the skymap fraction covered would be higher. This is true if the tile weight is not included as a factor, as seen by the grey 0:1:0--0:0:1 line at the bottom of \aref{fig:scheduler_sim_results2}: when only airmass is considered (in the 0:1:0 case) the results produce the best average airmass per observation but the worst skymap coverage, and on the other hand only considering time-to-set (0:0:1) results in worse airmasses but better coverage. However, when the tile weight value $w$ is included this trend is counteracted, to the extent that the 50:0:1 point has both a higher mean airmass and a lower fraction of the skymap covered than the 50:1:0 point. In fact, based on the results which do not include the airmass parameter (the green line on the left of \aref{fig:scheduler_sim_results2}) the time-to-set value just suppresses the fraction of the skymap observed, while making almost no difference to the mean airmass.
The optimal scheduler result would, on average, produce the highest possible skymap coverage for the lowest average airmass. This would fall in the top-right region of \aref{fig:scheduler_sim_results2}. Based on these simulation results the best solution is to ignore the time-to-set value, and as described previously in \aref{sec:breaking_ties} the G-TeCS scheduler has been operating with a ratio of 10:1:0.
\end{colsection}
\subsection{Further simulations}
\label{sec:scheduler_sim_future}
\begin{colsection}
The conclusions from these scheduler simulations are not unreasonable if all that is needed is a one-size-fits-all set of weights that are hard-coded into the scheduler, as used by the current G-TeCS system. However, future work on these simulations should look in detail at other trends that might be hidden in the averages. For example, different $w$:$a$:$t$ ratios might be better suited for large skymaps versus smaller ones, or in cases where the whole skymap is visible at once compared to it slowly rising above the horizon during the night.
Although the time-to-set value seemed to only hinder the scheduler, other possible parameters could be considered. One idea is to convert time-to-set to time-visible, by including not only the time when the target sets below the horizon but also the time that the Sun rises. The existing time-to-set parameter prioritises observing targets later in the night when they are about to set, whereas airmass prioritises tiles near the zenith. Neither, however, considers the time remaining in the night, and while this is the same for every target, including it as a factor in the scheduler could be a relatively straightforward way to attempt to prioritise observations in the limited time available.
As mentioned previously, the simulations presented in this section were carried out before a lot of the G-TeCS code was finalised. Therefore it would also make sense to revisit the scheduler simulations with the newer simulation code, as used by the simulations in \aref{chap:multiscope}, in order to confirm if the 10:1:0 ratio is still found to be the best case. This would also allow the real parameters from the commissioned telescope to be included. When the next four unit telescopes are added to GOTO the effective field of view will be doubled, and the results in this section based on the 4-UT telescope might not necessarily be the same in the 8-UT case.
Finally, GOTO is ultimately planned to expand to multiple telescopes at several sites, as described in \aref{sec:goto_expansion}. Adapting the scheduling system described in this chapter to deal with multiple telescopes will be a major future project, which is considered in more detail in \aref{chap:multiscope}.
\end{colsection}
\section{Summary and Conclusions}
\label{sec:scheduling_conclusion}
\begin{colsection}
In this chapter I described how the G-TeCS automated scheduling system decides which pointings to observe.
The scheduler daemon is one of the autonomous support daemons as defined in \aref{chap:autonomous}. It has the task of taking the queue of possible targets from the observation database and sorting them based on several parameters to determine which is the highest priority. First, each pointing is tested using a series of physical constraints to remove any which are invalid. The queue sorting then depends on the properties of each pointing: its rank, the number of times it has already been observed, and whether or not it is defined as a target of opportunity. If there are still multiple pointings in the same position in the queue then a final tiebreaker value is calculated to choose between them, based on the skymap tile weighting (defined in \aref{chap:tiling}) and target airmass.
Determining how to calculate the tiebreaker and which parameters to base it on required a series of simulations of GOTO observations, to see how the choice of parameter weightings affected which pointings were observed. These simulations showed that a 10:1 ratio of tile weight to scaled airmass was closest to the preferred outcome, and that the introduction of a third parameter, time-to-set, could not improve on this. GOTO has been observing successfully using this ratio ever since. However, since those simulations were carried out further development of the control system and simulation code means that it would be worth revisiting them to see if the conclusions still hold.
\end{colsection}
\chapter{Tiling the Sky}
\label{chap:tiling}
\chaptoc{}
\section{Introduction}
\label{sec:tiling_intro}
\begin{colsection}
In this chapter I describe the software used by GOTO to create an all-sky survey grid, which gravitational-wave events are then mapped onto.
\begin{itemize}
\item In \nref{sec:gototile} I describe the GOTO-tile Python package, and the algorithms it uses to define the GOTO all-sky survey grid.
\item In \nref{sec:skymaps} I describe how transient alert localisations are defined using skymaps and how they are mapped onto the GOTO-tile grid.
\item In \nref{sec:custom_skymaps} I give some examples of how other skymaps can be used to direct GOTO observations.
\end{itemize}
All work described in this chapter is my own unless otherwise indicated, and has not been published elsewhere. The original GOTO-tile package was written by Darren White at Sheffield and Evert Rol at Monash, before I took over development and made substantial changes as described in this chapter.
\end{colsection}
\section{Defining the sky grid}
\label{sec:gototile}
\begin{colsection}
GOTO-tile is a Python package (\texttt{gototile}\footnote{\url{https://github.com/GOTO-OBS/goto-tile}}) created for the GOTO project to contain all of the functions related to tiling the sky and processing skymaps. It was originally developed by Darren White as a way to process gravitational-wave skymaps for GOTO, and then maintained by Evert Rol who rearranged it into a package usable for some other telescopes, including SuperWASP on La Palma and a proposed southern GOTO node. My contributions to the package have been extensive: reworking the foundations to improve how the sky grid is defined and how skymaps are applied, as well as adding additional code to create new skymaps (described in \aref{sec:custom_skymaps}).
\end{colsection}
\subsection{Creating sky grids}
\label{sec:grids}
\begin{colsection}
The core of GOTO-tile as it now exists is the \texttt{SkyGrid} Python class, which is used to define a sky grid: a collection of regularly-spaced points on the celestial sphere. These points are used as the centre of rectangular `tiles' aligned to the equatorial right ascension/declination coordinate system, which create a framework for survey observations to be mapped on to.
The most important parameter required when defining a sky grid is the field of view of the telescope, which is taken as the size of the tiles that make up the grid. The field of view is defined within GOTO-tile by giving a width and height value in degrees, meaning the tiles can only be square or rectangular. This is typically fine for the GOTO array, which has a total field of view comprising overlapping rectangles from each unit telescope (see \aref{fig:fov}). There was a period when having three unit telescopes in an `L'-shape was considered, but this was abandoned due mainly to the complexity of tiling the grid based on abstract shapes. For the prototype 4-UT GOTO system currently on La Palma a rectangular 18 square degree tile (\SI{3.7}{\degree} $\times$ \SI{4.9}{\degree}) was defined during the commissioning period (see \aref{sec:timeline}).
The second parameter required to define a sky grid is the desired overlap between the tiles. This is given as a value between zero and one in both the right ascension and declination axes, with zero meaning no overlap and one meaning all the tiles are completely overlapping (in practice the overlap is restricted to no more than $0.9$). The overlap is used to define the spacing between the tile centres, depending on the algorithm used. The current 4-UT grid uses an overlap of 0.1 (10\%) in both axes.
As GOTO-tile has developed, the algorithm used to define the grid has changed (see \aref{sec:algorithms}), but the basic method remained the same:
\begin{enumerate}
\item On the celestial sphere (\aref{fig:sphere}) equally spaced lines of constant declination are defined, separated by the value $\Delta\delta$ (\aref{fig:deltadelta}). These ``declination strips'' are the basis for the grid points, which the tiles are centred on.
\item Each declination strip is then filled with equally spaced points, separated by the value $\Delta\alpha$ (\aref{fig:deltaalpha}). This value is constant within each strip but is (in most algorithms) a function of declination, $\Delta\alpha(\delta)$, meaning that as one moves away from the equator towards the poles each strip will contain fewer points.
\item These points are then defined as the centres of the tiles, the size of which is given by the field of view (\aref{fig:tiledsphere}).
\end{enumerate}
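The three steps above can be sketched in code. This is a simplified stand-in: the $1/\cos\delta$ spacing used here is only one way to widen $\Delta\alpha$ towards the poles, and the real GOTO-tile algorithms differ in detail (see \aref{sec:algorithms}).

```python
import math

def make_grid(fov_ra, fov_dec, overlap_ra=0.1, overlap_dec=0.1):
    """Return a list of (RA, Dec) tile centres using the declination-strip method."""
    # Step 1: declination strips spaced by a constant delta-dec, always
    # including a strip on the equator and a single point at each pole.
    d_dec = fov_dec * (1.0 - overlap_dec)
    decs = [0.0]
    dec = d_dec
    while dec < 90.0:
        decs += [dec, -dec]
        dec += d_dec
    decs += [90.0, -90.0]

    # Step 2: fill each strip with points spaced by delta-ra(dec), here
    # widened by 1/cos(dec) so strips nearer the poles hold fewer points.
    base_d_ra = fov_ra * (1.0 - overlap_ra)
    centres = []
    for strip_dec in sorted(decs):
        if abs(strip_dec) == 90.0:
            centres.append((0.0, strip_dec))  # single polar tile
            continue
        n = max(1, math.floor(360.0 * math.cos(math.radians(strip_dec)) / base_d_ra))
        # Step 3: each point becomes a tile centre; every strip starts at RA = 0.
        centres.extend((360.0 * i / n, strip_dec) for i in range(n))
    return centres
```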
Once the grid has been created it is encapsulated within the GOTO-tile \texttt{SkyGrid} class. Each tile is defined by a coordinate at its centre, and each is also given a unique name of the form \texttt{T0001}. The grid itself is also given a name formed using the input field of view and overlap parameters, so the current grid (with a field of view of \SI{3.7}{\degree}$\times$\SI{4.9}{\degree} and overlap factor of 0.1 in both axes) is given the name \texttt{allsky-3.7x4.9--0.1--0.1}. In this way a given tile in a given grid can be recreated just from the grid and tile name, which is used when storing the details in the \texttt{grids} and \texttt{grid\_tiles} tables in the observation database (see \aref{sec:obsdb}). %
\newpage
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/globe1.pdf}
\end{center}
\caption[The celestial sphere]{
The celestial sphere. The northern and southern celestial poles are marked as \textbf{NCP} \glsadd{ncp} and \textbf{SCP} \glsadd{scp} respectively, and the celestial equator is marked in \textcolorbf{Red}{red}. The ecliptic (the path of the Sun) is marked in \textcolorbf{Green}{green}, the point where the ecliptic rises above the celestial equator (the vernal equinox) is marked with the symbol \Aries{}, and the meridian that intercepts the poles and the vernal equinox is marked in \textcolorbf{NavyBlue}{blue}. Traditionally the equatorial coordinate system is defined as shown: declination (\textcolorbf{NavyBlue}{Dec}, $\delta$) is the angle from the equator (between \SI{-90}{\degree} at the SCP to \SI{90}{\degree} at the NCP) and right ascension (\textcolorbf{Red}{RA}, $\alpha$) \glsadd{ra} is the angle east of the vernal equinox (between \SI{0}{\degree} and \SI{360}{\degree}). The modern \glsfirst{icrs} defines coordinates based on radio sources which approximately match the ones described here \citep{ICRF}.
}\label{fig:sphere}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/globe2.pdf}
\end{center}
\caption[Defining declination strips]{
Defining the declination strips (shown in \textcolorbf{Orange}{orange}). The full declination range (\SI{-90}{\degree} to \SI{90}{\degree}) is divided equally by a constant spacing value $\Delta\delta$ (in \textcolorbf{NavyBlue}{blue}). In this example $\Delta\delta =$ \SI{10}{\degree}, and so the centre of each strip is set at $\delta=$ \SI{0}{\degree}, $\pm$\SI{10}{\degree}, $\pm$\SI{20}{\degree} etc. This gives 19 strips, 9 in each hemisphere and one on the equator. There is always a strip centred on $\delta=0$, and using the ``minverlap'' algorithm (see \aref{sec:algorithms}) there is always a `strip' of tiles at the poles which will include a single point.
}\label{fig:deltadelta}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/globe3.pdf}
\end{center}
\caption[Defining grid points]{
Defining the grid points, based on the declination strips from \aref{fig:deltadelta}. Using the ``minverlap'' algorithm (see \aref{sec:algorithms}) points are uniformly distributed on each declination strip with a spacing $\Delta\alpha(\delta)$. Unlike $\Delta\delta$, which is fixed across the sphere, $\Delta\alpha$ varies as a function of declination, meaning strips closer to the poles will contain fewer points (and therefore fewer tiles). Two examples of defining grid points are shown, one at declination $\delta_1=+$\SI{50}{\degree} (in \textcolorbf{Purple}{purple}) in the northern hemisphere and another at $\delta_2=-$\SI{10}{\degree} (in \textcolorbf{BlueGreen}{cyan}) in the southern hemisphere. The survey tiles are then centred on the grid points, as shown for these two strips.
}\label{fig:deltaalpha}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/globe4.pdf}
\end{center}
\caption[A fully tiled celestial sphere]{
A fully tiled celestial sphere. The same two strips of tiles are coloured in \textcolorbf{Purple}{purple} and \textcolorbf{BlueGreen}{cyan} as in \aref{fig:deltaalpha}. As each strip starts with a tile at RA$=0$ there is a fully aligned column of tiles along the meridian through the vernal equinox, coloured in \textcolorbf{YellowOrange}{yellow}, and there is always a tile centred on the vernal equinox and each pole. This grid was defined using the ``minverlap'' algorithm (see \aref{sec:algorithms}), with each tile having a field of view of \SI{10}{\degree} $\times$ \SI{10}{\degree} and an overlap of zero for clarity. Note zero overlap can lead to gaps between tiles towards the poles, shown by the \textcolorbf{Red}{red} patches. In this case the complete grid contains 424 tiles.
}\label{fig:tiledsphere}
\end{figure}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\end{colsection}
\subsection{Different gridding algorithms}
\label{sec:algorithms}
\begin{colsection}
There have been three different algorithms used by GOTO-tile to define the grid.
\subsubsection{The product algorithm}
The first has since retroactively been called the ``\textbf{product}'' algorithm, and was used when Darren White first wrote GOTO-tile. It first defines the declination step size as
\begin{equation}
\Delta\delta = f_\text{dec}(1-v_\text{dec}),
\label{eq:product_deltadelta}
\end{equation}
where $f_\text{dec}$ and $v_\text{dec}$ are the field of view in degrees and the fractional overlap parameter, respectively, in the declination direction. The declination strips are then defined by taking steps of this size from the equator towards the poles, stopping when $|\delta| > \SI{90}{\degree}$. An equivalent formula is used to calculate the steps in right ascension
\begin{equation}
\Delta\alpha = f_\text{RA}(1-v_\text{RA}).
\label{eq:product_deltaalpha}
\end{equation}
The clear downside of this method is that $\Delta\alpha$ does not vary with declination. In effect this algorithm attempts to define the grid as if it were on a flat plane, where the tiles could be arranged in orthogonal rows and columns. In practice when applied to a sphere this leads to a vast number of redundant tiles at the poles, as shown in \aref{fig:product}.
\subsubsection{The cosine algorithm}
Due to the obvious problems with the product algorithm, a replacement was written by Evert Rol, which I have since called the ``\textbf{cosine}'' algorithm. It is a more refined version of the product algorithm, and the declination strips are calculated in the same manner using \aref{eq:product_deltadelta}. However \aref{eq:product_deltaalpha} is modified to depend on declination:
\begin{equation}
\Delta\alpha(\delta) = \frac{f_\text{RA}(1-v_\text{RA})}{\cos \delta}.
\label{eq:cosine_deltaalpha}
\end{equation}
This produces a more sensible grid with fewer redundant tiles at the poles, as shown in \aref{fig:cosine}. However, there remained an issue of asymmetry: the strips are arranged increasing and decreasing from $\delta=0$ and the tiles are then arranged within the strips starting from $\alpha=0$. This leads to varying levels of overlap when the tiles within the strips meet as $\alpha$ approaches \SI{360}{\degree}, as visible in \aref{fig:cosine}. Although more subtle, there are similar issues at the north and south celestial poles. It is also common for there to be small gaps between the tiles at high and low declinations.
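The difference between the two spacing formulas can be seen with a short Python sketch (not the GOTO-tile code itself), using the same \SI{13}{\degree} field of view and $0.2$ overlap as the examples in the text:

```python
import math

# Field of view and fractional overlap in the RA direction,
# matching the example values used in the text.
f_ra, v_ra = 13.0, 0.2

def delta_alpha_product():
    # Product algorithm: the RA spacing is constant at every declination.
    return f_ra * (1 - v_ra)

def delta_alpha_cosine(dec_deg):
    # Cosine algorithm: the RA spacing widens towards the poles.
    return f_ra * (1 - v_ra) / math.cos(math.radians(dec_deg))

print(delta_alpha_product())    # 10.4 deg everywhere
print(delta_alpha_cosine(0))    # 10.4 deg on the equator
print(delta_alpha_cosine(60))   # 20.8 deg at dec = +/-60 deg
```

At $\delta=\pm$\SI{60}{\degree} the cosine spacing is double the equatorial value, which is why the cosine grid needs far fewer tiles per strip near the poles.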
\subsubsection{The minverlap algorithm}
Due to these problems I created a new method to define the grid, called the ``\textbf{minverlap}'' (\emph{min}imum o\emph{verlap}) algorithm. A grid created with this algorithm is shown in \aref{fig:minverlap}. The intention of the new algorithm was to solve the issues with the product and cosine algorithms by dynamically adjusting the spacing between tiles. The previous two algorithms both treated the user-specified overlap parameter as fixed, and if the resulting spacings did not give an integer number of tiles within the ranges available then they produced uneven gaps at the edges. This is shown more clearly in \aref{fig:cosine_spacing}, where a particular spacing results in gaps at the celestial poles and variable overlaps across the RA$=0$ meridian.
The minverlap algorithm solves these problems by treating the overlap parameter not as fixed but as the \textit{minimum} required overlap between tiles. For example, if a grid is requested with an overlap of $0.2$ (20\%) but the field of view of the tiles does not neatly divide by \SI{90}{\degree} then the overlap can be increased until an integer number of tiles fit, as shown in \aref{fig:minverlap_spacing}.
\newpage
\begin{figure}[p]
\begin{minipage}[c]{0.46\linewidth}
\includegraphics[width=\linewidth]{images/algo_product.png}
\end{minipage}
\hfill
\begin{minipage}[c]{0.50\linewidth}
\caption[The product gridding algorithm]{
A sky grid of tiles defined using the ``product'' gridding algorithm. The inputs were a field of view of \SI{13}{\degree} $\times$ \SI{13}{\degree} and an overlap factor of $0.2$ in both axes. The colours show overlapping coverage: \textcolorbf{YellowOrange}{yellow} areas are within only one tile, \textcolorbf{ForestGreen}{green} two, \textcolorbf{cyan}{cyan} three, \textcolorbf{blue}{blue} four and \textcolorbf{RubineRed}{pink} five or more. This grid contains 595 tiles. Note the constant spacing of tiles in RA and the huge number of redundant tiles at the pole.
}\label{fig:product}
\end{minipage}
\end{figure}
\begin{figure}[p]
\begin{minipage}[c]{0.50\linewidth}
\caption[The cosine gridding algorithm]{
A sky grid of tiles defined using the ``cosine'' gridding algorithm. The input parameters and colours are the same as in \aref{fig:product}. This grid contains 393 tiles. Note the asymmetric ``seam'' along the $\alpha=0$ meridian, and the \textcolorbf{red}{red} areas near the pole that are not within the area of any tiles.
}\label{fig:cosine}
\end{minipage}
\hfill
\begin{minipage}[c]{0.46\linewidth}
\includegraphics[width=\linewidth]{images/algo_cosine.png}
\end{minipage}
\end{figure}
\begin{figure}[p]
\begin{minipage}[c]{0.46\linewidth}
\includegraphics[width=\linewidth]{images/algo_minverlap.png}
\end{minipage}
\hfill
\begin{minipage}[c]{0.50\linewidth}
\caption[The minverlap gridding algorithm]{
A sky grid of tiles defined using the ``minverlap'' gridding algorithm. The input parameters and colours are the same as in \aref{fig:product}. This grid contains 407 tiles. Note the even spacing of tiles even over the $\alpha=0$ meridian, and the better coverage at the pole.
}\label{fig:minverlap}
\end{minipage}
\end{figure}
\newpage
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/spacing_cosine.pdf}
\end{center}
\caption[Grid spacing with the cosine algorithm]{
Grid spacing with the cosine algorithm. Using a \SI{13}{\degree}$\times$\SI{13}{\degree} field of view and an overlap of $0.2$, \aref{eq:product_deltadelta} gives $\Delta\delta = $ \SI{10.4}{\degree}. 17 declination strips are defined moving away from $\delta=0$, as shown in the equatorial view on the left. The final strips are \SI{6.8}{\degree} from the poles; as this is more than half of the field of view (\SI{6.5}{\degree}), the poles themselves will not be within the area of any tile. \aref{eq:cosine_deltaalpha} gives $\Delta\alpha = $ \SI{10.4}{\degree} on the equator ($\delta=0$). This results in 35 points arranged as shown in the polar view on the right, and a reduced spacing of \SI{6.4}{\degree} to the west of the $\alpha=0$ meridian. This remainder will be different for each strip, as shown in \aref{fig:cosine}.
}\label{fig:cosine_spacing}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=\linewidth]{images/spacing_minverlap.pdf}
\end{center}
\caption[Grid spacing with the minverlap algorithm]{
Grid spacing with the minverlap algorithm. Using the same parameters as \aref{fig:cosine_spacing}, \aref{eq:minverlap_deltadelta} gives $\Delta\delta = $ \SI{10}{\degree}, therefore neatly arranging 19 declination strips between \SI{-90}{\degree} and \SI{90}{\degree}. \aref{eq:minverlap_deltaalpha} also gives $\Delta\alpha = $ \SI{10}{\degree} on the equator ($\delta=0$), so 36 points are uniformly arranged around the circumference.
}\label{fig:minverlap_spacing}
\end{figure}
\clearpage
To define the grid points using the minverlap algorithm, it is first necessary to find the number of tiles $n$ that would fit into the available range using the cosine algorithm spacing; if this is not an integer number of tiles then round it up to the next whole number. In declination this is calculated as
\begin{equation}
n_\text{dec} = \left \lceil \frac{90}{f_\text{dec}(1-v_\text{dec})} \right \rceil,
\label{eq:minverlap_ndec}
\end{equation}
where $\lceil x \rceil$ is the mathematical ceiling function. This is a modification of \aref{eq:product_deltadelta}, but one that will always find an integer number of tiles. For example, for a tile with a declination field of view $f_\text{dec} = $ \SI{13}{\degree} and overlap $v_\text{dec} = 0.2$, \aref{eq:product_deltadelta} gives $\Delta\delta = 13 \times (1-0.2) = $ \SI{10.4}{\degree}. This clearly does not divide evenly into \SI{90}{\degree}, leaving a remainder of \SI{6.8}{\degree}, as shown in \aref{fig:cosine_spacing}. The problem is that \SI{90}{\degree} $/$ \SI{10.4}{\degree} $= 8.65$, so the product and cosine algorithms will fit in 8 declination strips and then have over half the height of a tile remaining at the poles. Instead, the minverlap algorithm rounds this up to $n_\text{dec} = 9$ and then calculates the spacing using
\begin{equation}
\Delta\delta = \frac{90}{n_\text{dec}}.
\label{eq:minverlap_deltadelta}
\end{equation}
In this case the new $\Delta\delta = $ \SI{10}{\degree}, which gives an even arrangement of tiles from the equator to the poles, as shown in \aref{fig:minverlap_spacing}. The other benefit of this method is that, in addition to there always being a declination strip at $\delta=0$, there will always be ``strips'' at \SI{+90}{\degree} and \SI{-90}{\degree}, which results in a single tile located over each celestial pole and ensures there are no major gaps in coverage.
In the minverlap algorithm the spacing in right ascension is treated in a similar way. The integer number of tiles that can fit into a given declination strip is given by
\begin{equation}
n_\text{RA}(\delta) = \left \lceil \frac{360}{f_\text{RA}(1-v_\text{RA})/\cos \delta} \right \rceil + 1,
\label{eq:minverlap_nra}
\end{equation}
where the $+1$ is to account for tiles being located both at $\alpha=$\SI{0}{\degree} and $\alpha=$\SI{360}{\degree}. The logic is exactly the same as with declination, and the revised spacing is given by
\begin{equation}
\Delta\alpha(\delta) = \frac{360}{n_\text{RA}(\delta)}.
\label{eq:minverlap_deltaalpha}
\end{equation}
This spacing is also shown in \aref{fig:minverlap_spacing}, with the grid points uniformly spaced around the celestial equator. Note that the ceiling function means that, in some cases, strips at different declinations can have the same number of tiles. For example, using the same parameters as previously $\Delta\delta=$\SI{10}{\degree}, so declination strips start at \SI{0}{\degree} and continue to \SI{\pm10}{\degree}, \SI{\pm20}{\degree} \ldots (mirrored in both hemispheres). From \aref{eq:minverlap_nra} the number of tiles on the equator is $n_\text{RA}(\delta=\SI{0}{\degree}) = \lceil 360/(10.4/\cos \SI{0}{\degree}) \rceil + 1 = \lceil 34.6 \rceil + 1 = 36$. But on the next strip up (or down) $n_\text{RA}(\delta=\SI{\pm10}{\degree}) = \lceil 360/(10.4/\cos(\pm\SI{10}{\degree})) \rceil + 1 = \lceil 34.1 \rceil + 1 = 36$ as well. This occurs because there are only a limited number of ways to fit an integer number of fixed tiles into a given declination strip, and so, as shown in \aref{fig:minverlap}, the three strips around the equator align perfectly with the same number of tiles.
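The minverlap calculation described above can be sketched in a few lines of Python (a sketch, not the GOTO-tile implementation), using the same hypothetical \SI{13}{\degree} $\times$ \SI{13}{\degree} tiles with a $0.2$ minimum overlap:

```python
import math

# Field of view and minimum fractional overlap in each direction,
# matching the example values used in the text.
f_dec = f_ra = 13.0
v_dec = v_ra = 0.2

# Declination: round the cosine-algorithm tile count up to an integer,
# then recompute the spacing so the strips divide 90 degrees evenly.
n_dec = math.ceil(90 / (f_dec * (1 - v_dec)))   # ceil(8.65) = 9
delta_dec = 90 / n_dec                          # 10.0 deg

def n_ra(dec_deg):
    # Right ascension: same logic per strip; the +1 accounts for tiles
    # at both alpha = 0 deg and alpha = 360 deg.
    spacing = f_ra * (1 - v_ra) / math.cos(math.radians(dec_deg))
    return math.ceil(360 / spacing) + 1

def delta_alpha(dec_deg):
    return 360 / n_ra(dec_deg)

print(n_dec, delta_dec)          # 9 strips per hemisphere, 10 deg apart
print(n_ra(0), delta_alpha(0))   # 36 tiles, 10 deg apart on the equator
print(n_ra(10))                  # 36 again: same count as the equator
```

The last line reproduces the effect noted in the text: the ceiling function can give neighbouring strips the same number of tiles, so they align perfectly.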
\subsubsection{Limitations of the minverlap algorithm}
The new minverlap algorithm is a significant improvement on the previous gridding algorithms. In particular it reduces the gaps in coverage close to the poles that occur when using the cosine algorithm. However, gaps can still occur when using the minverlap algorithm with a particularly low overlap parameter. For example, \aref{fig:tiledsphere} shows a sphere tiled using the minverlap algorithm with an overlap parameter of 0, and in this case gaps are visible just below the northern celestial pole.
A proposed solution to this problem would be to force tiles to meet at their lower corners (in the northern hemisphere; upper corners in the south), therefore overlapping further and removing the possibility of gaps forming due to the angle between the tiles. An attempt to make this change and create an ``enhanced minverlap'' algorithm was tested; however, it ultimately proved unnecessary. Although the current minverlap algorithm is deficient at low overlap values, this is only an issue when used with large tiles. The \SI{10}{\degree} $\times$ \SI{10}{\degree} tiles and 0 overlap used for \aref{fig:tiledsphere} are extreme values, and even for the roughly \SI{8}{\degree} $\times$ \SI{5}{\degree} full field of view of GOTO with 8 unit telescopes the overlap has to be less than 0.1 before noticeable gaps start appearing.
A further possible improvement to the minverlap algorithm has also been identified since it was implemented. Instead of locating two grid points precisely at the celestial poles (i.e.\ using declination strips at $\pm$\SI{90}{\degree}) as shown in \aref{fig:minverlap_spacing}, it would instead be enough to have the highest/lowest declination strips exactly half of the field of view away from the poles (e.g.\ at $\pm$\SI{85}{\degree} if the tile was \SI{10}{\degree} tall). This would ensure the poles were still included within the tiled area, at the top/bottom of the highest/lowest strip, but could reduce the number of strips needed to cover the entire sphere.
As it happens, when viewed from La Palma, the northern celestial pole is below the GOTO altitude limit of \SI{30}{\degree}. This means that tiles closest to the pole are not visible, and so the issues described above are irrelevant. Should GOTO-tile be applied in the future to other telescopes at other sites then this issue would need to be revisited, but it was not a priority to fix within the context of this work.
\end{colsection}
\section{Probability skymaps}
\label{sec:skymaps}
\begin{colsection}
When identifying a particular target in the sky its coordinates can be given in right ascension and declination, and if there is some uncertainty in the position, then errors can be given on the coordinate values. For example, a \glsfirst{grb} event detected by the \textit{Fermi} \glsfirst{gbm} might have a central position and an error radius ranging from arcseconds to tens of degrees. However, multiple gravitational-wave detectors produce large and distinctly asymmetric localisation areas (see \aref{sec:gw_localisation}). For these cases, the \glsfirst{lvc} produce probability skymaps which map the localisation area onto the celestial sphere. This section describes how these skymaps are defined and how they are mapped onto the GOTO all-sky grid as described in \aref{sec:gototile}.
\end{colsection}
\subsection{Defining skymaps with HEALPix}
\label{sec:healpix}
\begin{colsection}
\glsfirst{healpix} is a system used to define pixelised data on the surface of a sphere~\citep{HEALPix}. Developed at NASA JPL for microwave background data, it is now widely used for other applications including for gravitational-wave skymaps produced by the LVC.\@ HEALPix divides the sphere into a series of nested (hierarchical), equal-area (although not equal-shape) pixels arranged in declination strips (``isoLatitude''). The first four orders of the partition are shown in \aref{fig:healpix}, starting from a base resolution with 12 pixels and increasing as each pixel is split into four.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{images/healpix.pdf}
\end{center}
\caption[HEALPix partitions of a sphere]{
The first four orders of the HEALPix partition of a sphere, with increasing $N_\text{side}$ resolution parameter. Note that as the resolution doubles each pixel on the previous sphere is split into four, and $N_\text{side}$ is the number of pixels along the side of a first-order pixel (two of which are highlighted). Adapted from \citet{HEALPix}.
}\label{fig:healpix}
\end{figure}
The resolution of the HEALPix grid is defined using the $N_\text{side}$ parameter, which for the given resolution is the number of pixels along each side of one of the 12 base pixels. At every resolution each first-order pixel contains $N_\text{side}^2$ pixels, so the total number of pixels in a sphere is given by
\begin{equation}
N_\text{pix} = 12 N_\text{side}^2.
\label{eq:healpix_npix}
\end{equation}
Each pixel therefore has an equal area of
\begin{equation}
\Omega_\text{pix} = \frac{4\pi}{12 N_\text{side}^2} = \frac{\pi}{3 N_\text{side}^2},
\label{eq:healpix_area}
\end{equation}
on a unit sphere where the radius $r=1$. Taking the celestial sphere, the circumference in degrees is $\SI{360}{\degree} = 2 \pi r$ meaning the area of the whole sky is given by
\begin{equation}
A_\text{sky} = 4 \pi r^2 = 4 \pi \left ( \frac{\SI{360}{\degree}}{2 \pi} \right )^2 = \frac{129600}{\pi}~\text{sq deg} \approx 41253~\text{sq deg} , %
\label{eq:sky_area}
\end{equation}
and therefore the area of each HEALPix pixel is
\begin{equation}
A_\text{pix} = \frac{129600}{12 \pi N_\text{side}^2}~\text{sq~deg} \approx \frac{3438}{N_\text{side}^2}~\text{sq~deg}.
\label{eq:healpix_area_degrees}
\end{equation}
\aref{fig:healpix} shows only the first four orders of HEALPix pixelisation, up to $N_\text{side} = 8$, where the sphere is split into 768 pixels, each with an area of 53.7~sq deg. An initial, low-resolution LVC skymap might use a grid with $N_\text{side} = 64$ (approximately 49 thousand pixels, each with an area of 0.84~sq~deg), whereas a final output skymap will have $N_\text{side} = 1024$ (12.5 million pixels, and a pixel area of $3.27 \times 10^{-3}$~sq~deg or 11.8~square~arcminutes).
In addition to being a way to divide the sphere, each HEALPix pixel has a unique index in one of two numbering schemes: the ring system (counting around each ring from north to south) or the nested system (based on the sub-pixel tree).
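The resolution formulas above translate directly into code. This short Python sketch (deliberately avoiding the \texttt{healpy} package so it is self-contained) reproduces the pixel counts and areas quoted in the text:

```python
import math

def healpix_npix(nside):
    # Total number of pixels on the sphere: N_pix = 12 * N_side^2.
    return 12 * nside ** 2

def healpix_pixel_area(nside):
    # Pixel area in square degrees: A_pix = 129600 / (12 pi N_side^2).
    return 129600 / (12 * math.pi * nside ** 2)

for nside in (8, 64, 1024):
    print(nside, healpix_npix(nside), healpix_pixel_area(nside))
# nside=8:    768 pixels of ~53.7 sq deg
# nside=64:   49152 pixels of ~0.84 sq deg
# nside=1024: 12582912 pixels of ~3.28e-3 sq deg
```

The equivalent values are available from \texttt{healpy} itself (e.g.\ its \texttt{nside2npix} function), which is what GOTO-tile uses in practice.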
HEALPix is used to provide localisation of sky probabilities for transient astronomical events, in the form of ``skymaps''. Each pixel on the HEALPix grid is assigned a probability between 0 and 1 that the counterpart object is located within that pixel, and the probabilities over the whole sphere sum to unity. \aref{fig:skymap_regrade} shows a typical LVC skymap, for the gravitational-wave event S190521r \citep{S190521r}, at various HEALPix $N_\text{side}$ parameters.
\begin{figure}[p]
\begin{center}
\begin{tabular}{cc}
$N_\text{side} = 1$ &
$N_\text{side} = 2$ \\
\includegraphics[width=0.45\linewidth]{images/regrade/1.png} &
\includegraphics[width=0.45\linewidth]{images/regrade/2.png} \\
\\
$N_\text{side} = 4$ &
$N_\text{side} = 8$ \\
\includegraphics[width=0.45\linewidth]{images/regrade/4.png} &
\includegraphics[width=0.45\linewidth]{images/regrade/8.png} \\
\\
$N_\text{side} = 16$ &
$N_\text{side} = 32$ \\
\includegraphics[width=0.45\linewidth]{images/regrade/16.png} &
\includegraphics[width=0.45\linewidth]{images/regrade/32.png} \\
\\
$N_\text{side} = 64$ &
$N_\text{side} = 128$ \\
\includegraphics[width=0.45\linewidth]{images/regrade/64.png} &
\includegraphics[width=0.45\linewidth]{images/regrade/128.png} \\
\end{tabular}
\end{center}
\caption[Regrading a gravitational-wave skymap]{
Changing the HEALPix resolution of a gravitational-wave skymap (also known as regrading). At every stage each pixel is assigned a probability value which indicates the probability the source is located within that pixel; here, darker colours denote higher probabilities. At lower $N_\text{side}$ values individual pixels are visible, but as the resolution increases the HEALPix structure is less visible.
}\label{fig:skymap_regrade}
\end{figure}
As well as the individual probabilities assigned to each pixel, it is also useful to consider the overall spread of the probability. This is done by considering the probability contour areas, typically at the 50\% and 90\% levels. The 50\% contour area of a skymap is defined by encircling the smallest number of pixels so that the total probability within the area is 50\% of the overall skymap probability. When a skymap is processed using GOTO-tile each pixel is assigned a contour value as well as its individual probability value. This is calculated by sorting all of the pixels by probability from highest to lowest, and the contour value for each pixel is then the cumulative sum of its own probability and that of every pixel above it in the sorted list. This contour value can be considered as the lowest contour area that each pixel is within, meaning the pixels that are contained within the 50\% contour area are those with contour values of less than 50\%. \aref{fig:sim_skymap_probs} shows a cartoon 2-dimensional skymap, and \aref{fig:sim_skymap_conts} illustrates how the 50\% and 90\% contours are calculated.
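The contour calculation described above can be sketched as follows, using a toy five-pixel skymap rather than a real HEALPix array:

```python
# Toy skymap: five pixels whose probabilities sum to 1.
probs = [0.05, 0.40, 0.25, 0.20, 0.10]

# Sort pixel indices by probability, highest first, then assign each
# pixel the cumulative sum down to and including itself.
order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
contour = [0.0] * len(probs)
running = 0.0
for i in order:
    running += probs[i]
    contour[i] = running

print(contour)
# The highest-probability pixel keeps its own value (0.40) as its
# contour value; the lowest-probability pixel ends at 1.0.
inside_50 = [i for i in range(len(probs)) if contour[i] < 0.5]
print(inside_50)   # only the 0.40 pixel lies inside the 50% contour
```

This mirrors the behaviour shown in the cartoon figures: the peak pixel's contour value equals its probability, and the faintest pixel's contour value is 100\%.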
\makeatletter
\setlength{\@fptop}{1cm}
\makeatother
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/sim/sim_skymap_probs.pdf}
\end{center}
\caption[An example 2D probability skymap]{
A cartoon 2-dimensional skymap. Each pixel (represented by one of the 81 squares) has an assigned probability, and together they all sum to 100\%. The \textcolorbf{BlueGreen}{blue} inner contour contains 50\% of the probability, while the \textcolorbf{Green}{green} outer contour contains 90\% of the probability. These contours are created based on the values shown in \aref{fig:sim_skymap_conts}.
}\label{fig:sim_skymap_probs}
\end{figure}
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/sim/sim_skymap_conts.pdf}
\end{center}
\caption[An example 2D skymap with pixel contour values]{
The same cartoon skymap as in \aref{fig:sim_skymap_probs}, but now each pixel contains its contour value calculated by sorting the pixels by probability and assigning each pixel the value of the cumulative sum. The pixel with the highest probability (E5) has the same contour value as its probability value in \aref{fig:sim_skymap_probs}, the second highest (F5) has the sum of the probability values of both E5 and F5, and this continues to the pixel with the lowest probability value (A9) which has a contour value of 100\%. The \textcolorbf{BlueGreen}{blue} area is the 50\% probability contour, which encloses all pixels with a contour value of less than 50\%. The \textcolorbf{Green}{green} area likewise encloses all pixels with a contour value of less than 90\%. Note in this example the contours are continuous, but it is possible to have multiple `islands' of probability within a single skymap.
}\label{fig:sim_skymap_conts}
\end{figure}
\clearpage
\makeatletter
\setlength{\@fptop}{0\p@ \@plus 1fil} %
\makeatother
\newpage
\end{colsection}
\subsection{Mapping skymaps onto the grid}
\label{sec:mapping_skymaps}
\begin{colsection}
When a gravitational-wave signal is detected the LVC analysis pipelines create HEALPix skymaps to describe the sky localisation, and these are then distributed with the public alert (see \aref{sec:voevents}). GOTO-tile is used to map the skymaps onto the grid used for the all-sky survey (defined in \aref{sec:gototile}). This requires finding which HEALPix pixels fall within each tile, which is done by defining polygons that match the projected tile areas and using the \texttt{query\_polygon} function from the \texttt{healpy} Python package\footnote{\url{https://healpy.readthedocs.io}}. For each tile it is then simple to sum the probability of all the HEALPix pixels within it, which gives the total contained probability. This is shown for a cartoon skymap in \aref{fig:sim_skymap_tiles}. In cases where grid tiles overlap a given HEALPix pixel could fall within the area of multiple tiles, and therefore that pixel would contribute to the total probability of more than one tile. This means the total contained probability within all tiles can add to more than 100\%.
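The probability-summing step can be sketched as below. In GOTO-tile the pixel indices for each tile come from healpy's \texttt{query\_polygon}; here a hypothetical hand-written pixel-to-tile mapping stands in for that call:

```python
# Toy skymap: HEALPix pixel index -> probability (sums to 1).
skymap = {0: 0.1, 1: 0.3, 2: 0.2, 3: 0.25, 4: 0.15}

# Hypothetical tile -> contained pixel indices; in the real code these
# lists would come from healpy.query_polygon on the tile footprints.
tile_pixels = {
    "T0001": [0, 1, 2],
    "T0002": [2, 3, 4],   # pixel 2 falls inside both overlapping tiles
}

# Total contained probability per tile.
tile_probs = {
    tile: sum(skymap[p] for p in pixels)
    for tile, pixels in tile_pixels.items()
}
print(tile_probs)
# Because pixel 2 is double-counted by the overlapping tiles, the
# summed tile probabilities can exceed 1.0.
print(sum(tile_probs.values()))
```

The tile names here are placeholders; the point is only that overlapping footprints share pixels, which is why the totals across all tiles can add to more than 100\%.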
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.46\linewidth]{images/sim/sim_skymap_pix_probs.pdf}
\includegraphics[width=0.46\linewidth]{images/sim/sim_skymap_tile_probs.pdf}
\end{center}
\caption[Mapping a 2D probability skymap onto grid tiles]{
On the left, the cartoon skymap from \aref{fig:sim_skymap_probs} has been divided into nine survey grid tiles (outlined in \textcolorbf{Red}{red}). On the right, the total contained probability for each tile is found by summing the probability of nine pixels within them.
}\label{fig:sim_skymap_tiles}
\end{figure}
\newpage
\end{colsection}
\subsection{Selecting tiles}
\label{sec:selecting_tiles}
\begin{colsection}
When a gravitational-wave event is processed, event pointings are added into the observation database as described in \aref{sec:event_insert}. However, only a certain number of pointings should be added to prevent GOTO wasting too much time observing low-probability areas. Each pointing is mapped to a grid tile, and only tiles with a reasonably high contained probability are worth observing. The GOTO-alert event handling code described in \aref{chap:alerts} selects tiles based on their contour level, meaning GOTO could, for example, choose to observe the 90\% contour of each skymap. However, determining which contour level each tile is within is not as simple as calculating the contained probability, as there are multiple ways to define the contour value for each tile.
For example, a tile could be defined as being within a given contour area if \textit{every} pixel contained within that tile is within that contour. However this is unreasonable for large tiles, such as GOTO's, as the tile areas are often wider than the long, stretched-out probability areas seen in typical gravitational-wave skymaps. An alternative then would be to say that a tile is within a contour if \textit{any} of its contained pixels are within the contour. However, this will find every tile covering the contour area even if only the smallest fraction of the tile's area is within that region, which leads to over-selecting tiles. Several alternative methods were considered, including taking the median or mean of the contained pixel contour levels within each tile. Some different selection methods applied to the S190521r skymap are shown in \aref{fig:selecting_tiles}.
The method used within the GOTO-alert event handler is to select all tiles which have a mean contour value within 90\%. However, more quantitative simulations of different skymaps could be used to determine if this is the optimal choice for all cases. For example, the selection level could be modified depending on the size of the skymap, and the event strategy might need to be modified as more GOTO telescopes are built. These possibilities are discussed in \aref{sec:event_insert}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.49\linewidth]{images/tiling/S190521r_1.pdf}
\includegraphics[width=0.49\linewidth]{images/tiling/S190521r_2.pdf}
\vspace{0.5cm}
\includegraphics[width=0.49\linewidth]{images/tiling/S190521r_3.pdf}
\includegraphics[width=0.49\linewidth]{images/tiling/S190521r_4.pdf}
\end{center}
\caption[Selecting tiles for a gravitational-wave skymap]{
Selecting tiles for the S190521r skymap \citep[also shown in \aref{fig:skymap_regrade}]{S190521r}.
In the upper left the skymap is plotted on the celestial sphere, forming the ``banana'' shape typical of gravitational-wave localisations, and the 50\% and 90\% probability contours are shown.
The other three plots show grid tiles selected using one of three methods: selecting tiles to cover the entire 90\% contour (\textcolorbf{NavyBlue}{blue}), selecting tiles with a median contour value of 90\% (\textcolorbf{Orange}{orange}) and selecting tiles with a mean contour value of 90\% (\textcolorbf{Green}{green}). The number of tiles selected ($n$) and total probability within them ($P$) is given.
The mean contour method provides a good compromise, selecting fewer than half of the tiles needed to cover the whole 90\% contour (31 compared to 63), but together they still contain nearly 87\% of the total probability.
}\label{fig:selecting_tiles}
\end{figure}
\newpage
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/tiling/170817.pdf}
\end{center}
\caption[GOTO tile probabilities for GW170817]{
GOTO tiling applied to the final GW170817 skymap \citep{GW170817}.
The grid shown is for the 4-UT GOTO field of view, with tiles of \SI{3.7}{\degree}$\times$\SI{4.9}{\degree} and an overlap of $0.1$. The inset shows the 10 tiles within the total 90\% contour (containing 97.4\% of the total probability) in \textcolorbf{NavyBlue}{blue} and the 2 selected by the 90\% mean contour (containing 79.5\%) in \textcolorbf{Green}{green}. Compare to \aref{fig:swope_decam}, which shows follow-up observations of GW170817 by the Swope and DECam projects.
}\label{fig:170817_gw}
\end{figure}
\aref{fig:170817_gw} shows the GOTO-tile selection code applied to the skymap for GW170817 \citep{GW170817}. As described in \aref{sec:followup} this is the only gravitational-wave detection so far with an identified counterpart, AT~2017gfo \citep{GW170817_followup}. In this case the mean contour selection method is perhaps too restrictive, only adding two tiles to the database, and for similarly well-localised events adding more tiles would be better. Still, based on the performance during O3 (see \aref{sec:gw_results}), had GOTO been able to observe this event then the counterpart could have been observed in tile T0932 within minutes of the alert notice being received.
\end{colsection}
\section{Creating and modifying skymaps}
\label{sec:custom_skymaps}
\begin{colsection}
The GOTO-tile skymap processing system described in \aref{sec:skymaps} provides a framework which allows any gravitational-wave skymap to be mapped onto the GOTO survey grid, from which pointings can be generated for the pilot to observe. However, there is no particular reason that the system has to be restricted to just the gravitational-wave skymaps produced by the LVC.\@ This section describes three further projects based on the GOTO-tile skymap code, from when I was working with Yik Lun Mong at Monash.
\end{colsection}
\subsection{Creating Gaussian skymaps for GRB events}
\label{sec:grb_skymaps}
\begin{colsection}
As part of the GOTO commissioning observations when the LIGO-Virgo detectors were not operating (see \aref{sec:timeline}), GOTO followed up \glsfirst{grb} events from the \textit{Fermi} satellite Gamma-ray Burst Monitor \citep[GBM;][]{Fermi_GBM}. At the time, the alert notices for GBM events did not include probability skymaps, only right ascension, declination and an error radius, and so code was developed to create a skymap from these details based on a 2D Gaussian profile, allowing them to be processed by GOTO-tile using the same methods already created for gravitational-wave events.
Taking the radius $r$ as half the full-width at half-maximum, the standard deviation of a 2D Gaussian distribution $\sigma$ is given by
\begin{equation}
\sigma = \frac{r}{\sqrt{2 \ln 2}}.
\label{eq:gaussian_sigma}
\end{equation}
The distance $d$ between a given point on the sphere ($\alpha, \delta$) and the central coordinates of the distribution ($\alpha_c, \delta_c$) is given by
\begin{equation}
\sin^2 \left ( \frac{1}{2} d \right )
= \sin^2 \left ( \frac{\delta-\delta_c}{2} \right)
+ \cos \delta \cos \delta_c \sin^2 \left ( \frac{\alpha-\alpha_c}{2} \right),
\label{eq:gaussian_distance}
\end{equation}
\noindent and the probability at each point for a 2D Gaussian is given by
\begin{equation}
P(\alpha, \delta) = \frac{1}{2\pi\sigma^2} \exp \left ( -\frac{d^2}{2\sigma^2} \right ).
\label{eq:gaussian_prob}
\end{equation}
This probability is calculated for the location of every HEALPix pixel on a sphere, which produces a skymap array that can then be processed using GOTO-tile.
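The skymap construction above can be sketched as follows. This is an illustrative NumPy version, not the GOTO-tile implementation itself; in practice the pixel coordinates would come from a HEALPix library (e.g.\ healpy's \texttt{pix2ang}), and here the map is normalised to sum to unity:

```python
import numpy as np

def gaussian_skymap(alpha, delta, alpha_c, delta_c, r):
    """Probability at pixel coordinates (alpha, delta), in radians, for a
    2D Gaussian centred on (alpha_c, delta_c) with radius r (half the FWHM).
    """
    # Standard deviation from the radius (half the FWHM)
    sigma = r / np.sqrt(2 * np.log(2))
    # Haversine formula for the angular distance to the centre
    sin2_half_d = (np.sin((delta - delta_c) / 2) ** 2
                   + np.cos(delta) * np.cos(delta_c)
                   * np.sin((alpha - alpha_c) / 2) ** 2)
    d = 2 * np.arcsin(np.sqrt(sin2_half_d))
    # 2D Gaussian probability density, then normalise the map to sum to 1
    prob = np.exp(-d ** 2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return prob / prob.sum()
```

Evaluating this function at the coordinates of every HEALPix pixel yields a skymap array in the same form as an LVC skymap.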
Using the above method, skymaps can be created for any single-target alert that has a given error radius. Several sources of transient events, such as \textit{Gaia} and \textit{Swift}, produce well-localised events with error circles much smaller than the GOTO tiles, so creating skymaps is less important. \textit{Fermi} GRB skymaps however cover much larger areas. For example, the GBM detection of GRB~170817A that helped localise the GW170817 gravitational-wave detection produced an initial alert with an error radius of \SI{17.45}{\degree}, later reduced to \SI{11.58}{\degree} in the final alert\footnote{GCN Notices available at \url{https://gcn.gsfc.nasa.gov/other/524666471.fermi}.}, which corresponded to a 50\% confidence region of $\sim$500~square~degrees \citep{GW170817_Fermi}.
The error values given in GBM notices only account for statistical errors for that event, not systematic errors. The GBM systematic errors are described in \citet{Fermi_localisation} to be well modelled by a core Gaussian with a radius (FWHM) of \SI{3.71}{\degree} and a non-Gaussian tail extending to \SI{14}{\degree}. For the purposes of GOTO tiling only the Gaussian portion is considered, with a radius obtained by combining the statistical radius ($r_\text{notice}$) and the systematic error in quadrature as
\begin{equation}
r = \sqrt{r_\text{notice}^2 + {(\SI{3.71}{\degree})}^2}.
\label{eq:fermi_radius}
\end{equation}
This is then used with the previous method to create a Gaussian skymap, which can be processed by GOTO-tile as described in \aref{sec:mapping_skymaps}. The skymap generated using this method for GRB~170817A is shown in \aref{fig:170817_grb}. Note that the location of AT~2017gfo falls quite far from the reported peak of the GRB skymap, and had there not been the coincident gravitational-wave detection it would have been unlikely that the source of the gamma-ray burst would have been observed.
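A minimal sketch of this quadrature combination (the function name and default value handling are illustrative):

```python
import numpy as np

def combined_radius(r_notice, r_systematic=3.71):
    """Combine the statistical radius from the alert notice with the
    3.71-degree systematic core Gaussian in quadrature (degrees)."""
    return np.sqrt(r_notice ** 2 + r_systematic ** 2)
```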
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{images/tiling/170817_fermi.pdf}
\end{center}
\caption[Gaussian skymap for GRB 170817A]{
A Gaussian skymap generated from the initial GBM alert for GRB 170817A, shown on the GOTO tile grid. The \textcolorbf{Red}{red} star shows the location of the counterpart AT~2017gfo. The final GRB skymap for this event is shown in \aref{fig:170817_skymaps}.
}\label{fig:170817_grb}
\end{figure}
It should be noted that the GBM localisation areas are actually not perfectly symmetric, but the above procedure works as a reasonable approximation. Since May 2019 \textit{Fermi} has started including HEALPix skymaps with alert notices \citep{Fermi_skymaps}; however, unlike the LVC, the GBM team do not guarantee that the skymap will have been generated by the time the alert notice is issued. Therefore the above procedure is still used when processing alerts if the official skymap is not yet available.
\newpage
\end{colsection}
\subsection{Weighting GW skymaps using galaxy positions}
\label{sec:galaxy_skymaps}
\begin{colsection}
As described in \aref{sec:followup}, telescopes with small fields of view can focus on observing possible host galaxies instead of covering an entire GW probability region \citep{GW_weighting}. The most recent catalogue of potential host galaxies is the \glsfirst{glade} catalogue \citep{GLADE}, which combines multiple prior catalogues including the Gravitational Wave Galaxy Catalogue \citep[GWGC,][]{GWGC} used by Swope to successfully find the GW170817 counterpart.
GOTO does not use a galaxy-focused strategy; due to its large field of view each GOTO pointing will contain tens of possible host galaxies. However, for skymaps that cover large numbers of tiles, the order in which GOTO observes could potentially be optimised by focusing on the tiles that contain the most potential host galaxies. One way of doing this using the existing G-TeCS scheduling framework (described in \aref{chap:scheduling}) is to adjust the weighting factor assigned to each tile, which can be done by multiplying the LVC localisation skymap with another skymap, containing the position of possible host galaxies, before applying the result to the tile grid as described in \aref{sec:mapping_skymaps}. Constructing weighted skymaps like this is a strategy used by several smaller field-of-view instruments, such as \textit{Swift} \citep{GW_Swift}.
In order to create such a weighted skymap the GLADE catalogue can be queried for the position of each galaxy within the event distance limits (each LVC event notice contains an estimate for the distance to the source, see \aref{sec:event_strategy} for how this is used to determine the follow-up strategy for each event). This is only possible for events within a few \SI{100}{\mega\parsec}, beyond which the GLADE catalogue is increasingly incomplete \citep{GLADE}. Once found, a HEALPix skymap can be constructed by weighting each pixel by the number of possible host galaxies located within it. As galaxies are not point sources, the resulting skymap is then passed through a Gaussian smoothing function with a default standard deviation of 15~arcseconds. This new skymap can then be normalised and multiplied with the gravitational-wave position skymap, to produce a new skymap which contains the information from both.
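The weighting procedure above can be sketched as follows. This is a simplified illustration that assumes the GLADE query has already returned the HEALPix pixel index of each candidate galaxy, and it omits the spherical Gaussian smoothing step (which in practice would use a HEALPix smoothing routine such as healpy's \texttt{smoothing}):

```python
import numpy as np

def galaxy_weighted_skymap(gw_skymap, galaxy_pixels):
    """Weight a GW probability skymap by galaxy counts per pixel.

    gw_skymap     : 1D array of HEALPix pixel probabilities
    galaxy_pixels : HEALPix pixel index of each candidate host galaxy
    """
    npix = gw_skymap.size
    # Count the number of possible host galaxies in each pixel
    galaxy_map = np.bincount(galaxy_pixels, minlength=npix).astype(float)
    # (A spherical Gaussian smoothing of galaxy_map would be applied here)
    # Normalise both maps, multiply, and renormalise the result
    galaxy_map /= galaxy_map.sum()
    combined = (gw_skymap / gw_skymap.sum()) * galaxy_map
    return combined / combined.sum()
```

The relative weighting of the two maps (here a straight multiplication) is one of the open choices mentioned below.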
\aref{fig:galaxy_skymap} shows this method applied to the large skymap for event S190425z \citep{S190425z}, which included a reported luminosity distance of $155\pm\SI{45}{\mega\parsec}$. The underlying pattern of the gravitational-wave localisation regions is still clearly visible in the final skymap, but by including the galaxy information the resulting tile pointings will be weighted towards regions with larger numbers of possible host galaxies. This method still needs more work before being implemented, in particular the relative weighting to apply to each skymap needs to be considered. However, it could prove beneficial by further prioritising GOTO observations towards regions more likely to include a counterpart source.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/tiling/gal_before.pdf}
\includegraphics[width=0.8\linewidth]{images/tiling/gal.pdf}
\includegraphics[width=0.8\linewidth]{images/tiling/gal_after.pdf}
\end{center}
\caption[Weighting a GW skymap using galaxy positions]{
Weighting a GW skymap (for event S190425z) using galaxy positions. The top plot shows the skymap distributed by the LVC, the central plot shows a skymap using GLADE galaxies within the estimated distance to the source ($155\pm\SI{45}{\mega\parsec}$), and the lower plot shows the result of multiplying the two together.
}\label{fig:galaxy_skymap}
\end{figure}
\end{colsection}
\subsection{Prioritising observations using dust extinction skymaps}
\label{sec:extinction_skymaps}
\begin{colsection}
In very crowded fields a single GOTO image can contain tens of thousands of sources, which makes it difficult for the GOTOphoto photometry pipeline (see \aref{sec:gotophoto}) to identify potential counterpart candidates. Observing high-interstellar-extinction areas, i.e.\ through the galactic plane, also makes it harder to observe extra-galactic sources such as counterparts to gravitational-wave events. Therefore another possible reason to modify localisation skymaps is to de-prioritise observations of the galactic plane.
One method to do this is shown in \aref{fig:extinction_skymap} --- multiplying the gravitational-wave skymap by an inverted thermal dust emission skymap from the \textit{Planck} observatory \citep{Planck_dust}. The effect is intended to be subtle, not enough to completely wipe out the skymap probability in the high-extinction regions, but instead to just reduce the weighting of those tiles so that GOTO first prioritises other areas. Again, this is not currently implemented in the scheduling system, but it is presented as another example of using skymaps to optimise observation priorities.
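A minimal sketch of this extinction weighting, assuming the dust emission map has already been regridded onto the same HEALPix pixels as the skymap (the \texttt{floor} parameter, which keeps the de-prioritisation subtle rather than wiping out high-extinction tiles entirely, is illustrative):

```python
import numpy as np

def extinction_weighted_skymap(gw_skymap, dust_map, floor=0.2):
    """De-prioritise high-extinction regions of a GW skymap.

    The dust map is inverted so that high dust emission gives a low
    weight; the floor stops any weight from reaching zero entirely.
    """
    dust_norm = dust_map / dust_map.max()            # scale to [0, 1]
    weight = floor + (1 - floor) * (1 - dust_norm)   # inverted, floored
    combined = gw_skymap * weight
    return combined / combined.sum()
```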
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/tiling/ext_before.pdf}
\includegraphics[width=0.8\linewidth]{images/tiling/ext.pdf}
\includegraphics[width=0.8\linewidth]{images/tiling/ext_after.pdf}
\end{center}
\caption[Weighting a GW skymap using galactic extinction]{
Weighting a GW skymap (for event S190521r) using galactic extinction. The top plot shows the skymap distributed by the LVC, the central plot shows an inverted thermal dust emission skymap from \textit{Planck}, and the lower plot shows the result of multiplying the two together.
}\label{fig:extinction_skymap}
\end{figure}
\end{colsection}
\section{Summary and Conclusions}
\label{sec:tiling_conclusion}
\begin{colsection}
In this chapter I described how the GOTO all-sky survey grid is defined, and how it is used for targeting gravitational-wave follow-up observations.
As a survey-based project all GOTO observations are taken aligned to a fixed grid. This all-sky grid is defined using the ``minverlap'' algorithm I developed for the GOTO-tile Python module, which produces a much more even grid compared to the previous algorithms used. Gravitational-wave localisation areas are produced in the form of HEALPix skymaps, which need to be mapped onto the grid used by GOTO before observations can be carried out. This mapping is done by the sentinel alert-listening daemon described in \aref{chap:autonomous}, and the resulting pointings are sorted and prioritised by the scheduler as described previously in \aref{chap:scheduling}.
The functions used by the sentinel to process transient alerts are described in the following chapter (\aref{chap:alerts}). It is at this stage that the relative weighting of each tile is determined based on the transient skymap, for both LVC gravitational-wave detections and other alerts such as gamma-ray bursts. In this chapter I also outlined some further possible additions to this weighting by including skymaps based on host galaxy catalogues or galactic extinction. By including these extra weightings it may be possible to further optimise the GOTO follow-up observations and increase the chance of taking a quick observation of a gravitational-wave counterpart.
\end{colsection}
\section{}
\renewcommand\section{\newpage\stdsection}
\newcommand\makequote[3]{
\begin{center}
{\Huge \textbf{\textcolor{GOTOlightblue}{``~~}}}\\
{\LARGE #1}\\
\vspace{0.3cm}
{\Huge \textbf{\textcolor{GOTOlightblue}{~~''}}}
\end{center}
\vspace{-0.7cm} \hspace{8cm}
{\large --- #2\ifthenelse{\isempty{#3}}{}{, \textit{#3}}}
}
\renewcommand*{\thefootnote}{[\arabic{footnote}]}
\newcommand{\hspace{-3.2pt}$\bullet$ \hspace{5pt}}{\hspace{-3.2pt}$\bullet$ \hspace{5pt}}
\makeatletter
\renewcommand{\@pnumwidth}{3em}
\renewcommand{\@tocrmarg}{4em}
\makeatother
\setlength{\cftparskip}{-7pt}
\renewcommand*{\glsclearpage}{}
\makeatletter
\patchcmd{\@makechapterhead}{\vskip 20}{\vskip 0}{}{} %
\makeatother
\mtcsettitle{minitoc}{}
\mtcsetrules{*}{off}
\mtcsetfont{minitoc}{section}{\normalsize\bfseries}
\mtcsetfont{minitoc}{subsection}{\normalsize}
\mtcsetoffset{minitoc}{-0.7em}
\setlength{\mtcindent}{0em}
\let\oldappendices\appendices
\def\appendices{\oldappendices\adjustmtc}
\newcommand\chaptoc{
\begin{singlespacing}
\begin{small}
\vspace{-1.4cm}
\noindent{\color{GOTOlightblue}\rule{\linewidth}{1.5pt}}
\vspace{-1cm}
{\hypersetup{linkcolor=black}
\minitoc{}
}
\vspace{-0.5cm}
\noindent{\color{GOTOlightblue}\rule{\linewidth}{1.5pt}}
\begin{center}
\hyperref[contents]{\textcolor{white}{\ding{115}}}
\end{center}
\end{small}
\end{singlespacing}
\newpage
}
\AtBeginDocument{%
\renewcommand\chapterautorefname{Chapter}%
\renewcommand\sectionautorefname{Section}%
\renewcommand\subsectionautorefname{Section}%
}
\newcommand\aref[1]{\autoref{#1}}
\newcommand\nref[1]{\autoref{#1} (\nameref{#1})}
\DeclareFieldFormat{citehyperref}{%
\DeclareFieldAlias{bibhyperref}{noformat}%
\bibhyperref{#1}}
\DeclareFieldFormat{textcitehyperref}{%
\DeclareFieldAlias{bibhyperref}{noformat}%
\bibhyperref{%
#1%
\ifbool{cbx:parens}
{\bibcloseparen\global\boolfalse{cbx:parens}}
{}}}
\savebibmacro{cite}
\savebibmacro{textcite}
\renewbibmacro*{cite}{%
\printtext[citehyperref]{%
\restorebibmacro{cite}%
\usebibmacro{cite}}}
\renewbibmacro*{textcite}{%
\ifboolexpr{
(not test {\iffieldundef{prenote}} and
test {\ifnumequal{\value{citecount}}{1}})
or
(not test {\iffieldundef{postnote}} and
test {\ifnumequal{\value{citecount}}{\value{citetotal}}})
}
{\DeclareFieldAlias{textcitehyperref}{noformat}}
{}%
\printtext[textcitehyperref]{%
\restorebibmacro{textcite}%
\usebibmacro{textcite}}}
\renewcommand*{\nameyeardelim}{\addspace}
\addbibresource{bibliography.bib}
\renewbibmacro{in:}{}
\AtEveryBibitem{%
\clearfield{booktitle}%
\clearfield{editor}%
\clearfield{title}%
\clearfield{number}%
\clearfield{eid}%
}
\renewcommand{\newunitpunct}{\addcomma\space}
\renewcommand{\labelnamepunct}{\addcomma\space}
\renewcommand{\finentrypunct}{}
\DeclareFieldFormat{journaltitle}{#1,}
\DeclareFieldFormat{pages}{#1}
\xpatchbibmacro{date+extradate}{\printtext[parens]}{\setunit{\addcomma\space}\printtext}{}{}
\chapter*{Acknowledgements}
\begin{onehalfspace}
This is the very last thing I'm writing before sending this to print, because I'm awful at writing these sort of things and I kept putting it off. Not that it really matters, of the very few people who will ever read this thesis I'm guessing most will skip over this bit. I always do.
\medskip
Anyway, I'll start by thanking my PhD supervisor Vik Dhillon, for being a fantastic supervisor and mentor over the past four years, as well as, when needed, a 5-star tour guide of La Palma. Thanks also to my second supervisor Ed Daw, whose Physical Computing class I enjoyed both teaching and learning from, as well as Stu, Dave, Steven, Liam, James, Lydia, Pablo and others in the group in Sheffield. I'd also like to thank members of the GOTO collaboration: Danny, Krzysztof, Paul, Joe, James and Ryan at Warwick; Duncan, Kendall, Evert, Travis and Alex at Monash; Gav, Mark and everyone else from the ever-increasing list of member institutions (especially the poor students who had to babysit GOTO for so long). Also my masters project supervisors at Durham, Richard and Tim, who first put me in touch with Vik --- without them I literally would not be where I am today.
\medskip
I'd like to thank all the various friends I've made over the years from Maidenhead, Borlase, Durham and Sheffield; including, but not at all limited to, Alex, Anna, Becky, Ben, Emma, Gemma, Harriet, Harry, Héloïse, James, Katie, Katherine, Laura, Mac, Oliver, Richard, Robin, Simon, Sophie and Varun (I wanted to include as many people as I could, apologies to anyone I forgot). A particular massive thanks to the other PhD students in my year at Sheffield: Becky, Héloïse and Katie. You were the best office-mates and compatriots I could have asked for, and while you all finished before me and went off to better things, it did mean I had my own office in the final few weeks --- thanks!
\medskip
Of course, I must also thank my Mum and Dad, and my brother Phil, for their unwavering support and encouragement, as well as my Grandad Norman, my cousin Nye, and the rest of my family.
\newpage
\medskip
Finally, I would like to acknowledge the people who did the most to encourage my love of physics and astronomy that eventually culminated in this thesis: my secondary-school physics teacher Malcolm Brownsell, the members of the Maidenhead Astronomical Society, and my grandfather Eric Snelling, aka Bubba. May you all have clear skies.
\smallskip
\begin{flushright}
--- Martin Dyer, Sheffield, 25th September 2019
\end{flushright}
\medskip
PS Although the above is dated the 25th, and I did try to print out and submit on that day, I failed to do so --- the office didn't have any binding combs large enough for this epic tome. So I'm going to get it done today, the 26th, at the Rymans in town, and I'll leave this postscript as a permanent record. Let's hope today is the day!
\medskip
PPS As my final correction (I'm writing this on the 20th of December) I'd also like to say thanks to my viva examiners, Iain and James. The viva was far more enjoyable than I expected, and their feedback was very useful. And now, at last, it is finished.
\end{onehalfspace}
\chapter*{Acronyms and Abbreviations}
\vspace{-3.5cm}
\setlength{\glsdescwidth}{0.8\linewidth}
\pdfbookmark[section]{Acronyms and Abbreviations}{aaa}
\begingroup
\let\clearpage\relax
\vspace{-12pt}
\begin{singlespacing}
\printglossary[title={},
type=\acronymtype,
style=super,
nopostdot,
nonumberlist,
%
]
\end{singlespacing}
\endgroup
\chapter*{Contents}
\renewcommand{\contentsname}{}
\vspace{-3.5cm}
\dominitoc{}
{\hypersetup{linkcolor=black}
\pdfbookmark[section]{Contents}{contents}
\tableofcontents
}
\label{contents}
\clearpage
\chapter*{Declaration}
\begin{onehalfspace}
\noindent I declare that, unless otherwise stated, the work presented in this thesis is my own.
\bigskip
\noindent No part of this thesis has been accepted or is currently being submitted for any other qualification at the University of Sheffield or elsewhere.
\bigskip
\noindent Some of my work on the GOTO Telescope Control System has previously been published as \citet{Dyer}. This paper forms the basis of Chapters 3, 4 and 5 of this thesis. The other chapters have not been published previously, although the work within them will contribute to future publications.
\end{onehalfspace}
\chapter*{Figures}
\renewcommand{\listfigurename}{}
\vspace{-3.5cm}
{\hypersetup{linkcolor=black}
\pdfbookmark[section]{Figures}{figures}
\listoffigures
}
\clearpage
\chapter*{Summary}
\begin{onehalfspace}
The detection of the first electromagnetic counterpart to a gravitational-wave signal in August 2017 marked the start of a new era of multi-messenger astrophysics. An unprecedented number of telescopes around the world were involved in hunting for the source of the signal, and although more gravitational-wave signals have since been detected, no further electromagnetic counterparts have been found.
\medskip
In this thesis, I present my work to help build a telescope dedicated to the hunt for these elusive sources: the Gravitational-wave Optical Transient Observer (GOTO). I detail the creation of the GOTO Telescope Control System, G-TeCS, which includes the software required to control multiple wide-field telescopes on a single robotic mount. G-TeCS also includes software that enables the telescope to complete a sky survey and transient alert follow-up observations completely autonomously, whilst monitoring the weather conditions and automatically fixing any hardware issues that arise. I go on to describe the routines used to determine target priorities, as well as how the all-sky survey grid is defined, how gravitational-wave and other transient alerts are received and processed, and how the optimum follow-up strategies for these events were determined.
\medskip
The first GOTO telescope, situated on La Palma in the Canary Islands, saw first light in June 2017. I detail the work I carried out on the site to help commission the prototype, and how the control software was developed during the commissioning phase. I also analyse the GOTO CCD cameras and optics, building a complete theoretical model of the system to confirm the performance of the prototype. Finally, I describe the results of simulations I carried out predicting the future of the GOTO project, with multiple robotic telescopes on La Palma and in Australia, and how the G-TeCS software might be modified to operate these telescopes as a single, global observatory.
\end{onehalfspace}
\chapter*{Tables}
\renewcommand{\listtablename}{}
\vspace{-3.5cm}
{\hypersetup{linkcolor=black}
\pdfbookmark[section]{Tables}{tables}
\listoftables
}
\clearpage
\section{Detecting Unexpected Objects}
The legend for the semantic class colors used throughout the article is given in Fig.~\ref{fig:supplement_semantic_colors}.
We present additional examples of the anomaly detection task in Fig.~\ref{fig:supplement_extra_samples}.
The synthetic training process alters only foreground objects. A potential failure mode could therefore be for the network to detect {\it all} foreground objects as anomalies, thus finding not only the true obstacles but also everything else.
In Fig.~\ref{fig:supplement_laf_foreground}, we show that this does not happen and that objects correctly labeled in the semantic segmentation are not detected as discrepancies.
In Fig.~\ref{fig:supplement_road_anomaly_variation}, we illustrate the fact that, sometimes, objects of known classes differ strongly in appearance from the instances of this class present in the training data, resulting in them being marked as unexpected.
We present a failure case of our method in Fig.~\ref{fig:supplement_anomaly_failure_case}:
Anomalies similar to an existing semantic class are sometimes not detected as discrepancies
if the semantic segmentation marks them as this similar class.
For example, an animal is assigned to the {\it person} class and missed by the discrepancy network.
In that case, however, the system as a whole is still aware of the obstacle because of its presence in the semantic map.
Our discrepancy network relies on the implementations of {\it PSP Net}~\cite{Zhao17b} and {\it SegNet}~\cite{Badrinarayanan15} kindly provided by Zijun Deng.
The detailed architecture of the discrepancy network is shown in Fig.~\ref{fig:supplement_discrepancy_arch}.
We utilize a pre-trained VGG16~\cite{Simonyan15} to extract features from images
and calculate their pointwise correlation, inspired by the co-segmentation network of~\cite{Li18b}.
The up-convolution part of the network contains SELU activation functions~\cite{Klambauer17}.
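A sketch of one plausible form of the pointwise correlation between the VGG16 feature maps of the input and resynthesized images (the cosine normalisation used here is an assumption, not necessarily the exact operation in the network):

```python
import numpy as np

def pointwise_correlation(f1, f2, eps=1e-8):
    """Correlate two feature maps of shape (C, H, W) at each location.

    Returns an (H, W) map containing the cosine similarity between the
    two C-dimensional feature vectors at every spatial position.
    """
    dot = (f1 * f2).sum(axis=0)
    n1 = np.sqrt((f1 ** 2).sum(axis=0))
    n2 = np.sqrt((f2 ** 2).sum(axis=0))
    return dot / (n1 * n2 + eps)
```

Low correlation values indicate locations where the resynthesized image disagrees with the input, which is the signal the discrepancy network exploits.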
The discrepancy network was trained for 50 epochs using the Cityscapes~\cite{Cordts16} training set with synthetically changed labels as described in Section~3.2 of the main paper.
We used the Adam~\cite{Kingma15} optimizer with a learning rate of 0.0001 and the per-pixel cross-entropy loss.
We utilized the class weighting scheme introduced in~\cite{Paszke16} to offset the unbalanced numbers of pixels belonging to each class.
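The inverse-log-frequency weighting proposed in~\cite{Paszke16} can be sketched as follows; the constant $c = 1.02$ is the value suggested in that work, and the function name is illustrative:

```python
import numpy as np

def class_weights(pixel_counts, c=1.02):
    """Weights w = 1 / ln(c + p_class), where p_class is the fraction
    of training pixels belonging to each class; rare classes receive
    larger weights, offsetting the class imbalance."""
    p = np.asarray(pixel_counts, dtype=float)
    p = p / p.sum()
    return 1.0 / np.log(c + p)
```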
\input{supplement/fig/semantic_colors.tex}
\input{supplement/fig/samples_extra.tex}
\input{supplement/fig/LAF_foreground.tex}
\input{supplement/fig/supplement_road_anomaly_variation.tex}
\input{supplement/fig/anomaly_failure_case.tex}
\input{supplement/fig/discrepancy_arch_detail.tex}
\section{ Detecting Adversarial Samples}
We show additional results on adversarial example detection on the Cityscapes and BDD datasets using the Houdini and DAG attack schemes in Figs.~\ref{fig:advsup1} and~\ref{fig:advsup2}.
To obtain these results, we set the maximal number of iterations to 200 in all settings and $L_{\infty}$ perturbation of 0.05 across each iteration of the attack. We randomly choose 80\% of the original validation samples to train the logistic detectors and the rest of the samples are used for evaluation. While evaluating the state-of-the-art Scale Consistency method~\cite{Xiao18}, we found by cross-validation that a patch size of $256\times256$ resulted in the best performance for an input image of size $1024\times512$.
\section{Image Attribution}
We used Wikimedia Commons images kindly provided under the Creative Commons Attribution license by the following authors:
Thomas R Machnitzki \href{https://commons.wikimedia.org/wiki/File:Goose_on_the_road_Memphis_TN_2013-03-17_001.jpg}{[link]},
Megan Beckett \href{https://commons.wikimedia.org/wiki/File:Rhino_crossing_road.JPG}{[link]},
Infrogmation \href{https://commons.wikimedia.org/wiki/File:Broadmoor9JanConesSkidloader.jpg}{[link]},
Kyah \href{https://commons.wikimedia.org/wiki/File:Federation_chantier_aout_2006_-_5.JPG}{[link]},
PIXNIO \href{https://commons.wikimedia.org/wiki/File:Bovine_catle_beside_road.jpg}{[link]},
Matt Buck \href{https://commons.wikimedia.org/wiki/File:Beeston_MMB_A6_Middle_Street.jpg}{[link]},
Luca Canepa \href{https://commons.wikimedia.org/wiki/File:Zebra_Crossing_Abbey_Road_Style_(63894353).jpeg}{[link]},
Jonas Buchholz \href{https://commons.wikimedia.org/wiki/File:Aihole-Pattadakal_road.JPG}{[link]}
and
Kelvin JM \href{https://commons.wikimedia.org/wiki/File:A_man_carrying_dry_grass_on_bicycle_for_domestic_animal_like_cows.jpg}{[link]}.
\section{Introduction}\label{sec:intro}
\input{fig/teaser.tex}
Semantic segmentation has progressed tremendously in recent years and state-of-the-art methods rely on deep learning~\cite{Chen17a,Chen18c,Zhao17b,Yu18b}.
Therefore, they typically operate under the assumption that all classes encountered at test time have been seen at training time. In reality, however, guaranteeing that all classes that can ever be found are represented in the database is impossible when dealing with complex outdoors scenes. For instance, in an autonomous driving scenario, one should expect to occasionally find the unexpected, in the form of animals, snow heaps, or lost cargo on the road, as shown in Fig.~\ref{fig:teaser}. %
Note that the corresponding labels are absent from standard segmentation training datasets~\cite{Cordts16,Yu18c,Huang18a}. Nevertheless, a self-driving vehicle should at least be able to detect that some image regions cannot be labeled properly and warrant further attention.
Recent approaches to addressing this problem follow two trends. The first one involves reasoning about the prediction uncertainty of the deep networks used to perform the segmentation~\cite{Kendall15b,Lakshminarayanan17,Kendall17,Gast18}.
In the driving scenario, we have observed that the uncertain regions tend not to coincide with unknown objects,
and, as illustrated by Fig.~\ref{fig:teaser}, these methods therefore fail to detect the unexpected.
The second trend consists of leveraging autoencoders to detect anomalies~\cite{Creusot15,Munawar17,Akcay18}, assuming that never-seen-before objects will be decoded poorly. We found, however, that autoencoders tend to learn to simply generate a lower-quality version of the input image. As such, as shown in Fig.~\ref{fig:teaser}, they also fail to find the unexpected objects.
In this paper, we therefore introduce a radically different approach to detecting the unexpected. Fig.~\ref{fig:pipeline} depicts our pipeline, built on the following intuition: In regions containing unknown classes, the segmentation network will make spurious predictions. Therefore, if one tries to resynthesize the input image from the semantic label map, the resynthesized unknown regions will look significantly different from the original ones. In other words, we reformulate the problem of segmenting unknown classes as one of identifying the differences between the original input image and the one resynthesized from the predicted semantic map. To this end, we leverage a generative network~\cite{Wang18c}
to learn a mapping from semantic maps back to images.
We then introduce a discrepancy network that, given as input the original image, the resynthesized one, and the predicted semantic map, produces a binary mask indicating unexpected objects.
To train this network \emph{without} ever observing unexpected objects, we simulate such objects by changing the semantic label of known object instances to other, randomly chosen classes.
This process, described in Section~\ref{sec:synthetic_training}, does {\it not} require seeing the unknown classes during training, which makes our approach applicable to detecting never-seen-before classes at test time.
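The label-swapping step can be sketched as follows for a per-pixel label map; the array representation and the way the instance is selected are illustrative:

```python
import numpy as np

def swap_instance_label(label_map, instance_mask, n_classes, rng):
    """Replace the class of one object instance with a different,
    randomly chosen class, simulating an 'unexpected' object."""
    new_map = label_map.copy()
    old_class = label_map[instance_mask][0]
    # Draw a random class different from the instance's true class
    choices = [c for c in range(n_classes) if c != old_class]
    new_map[instance_mask] = rng.choice(choices)
    return new_map
```

Training on pairs of (image, altered label map) teaches the discrepancy network that mismatches between appearance and semantics mark anomalous regions, without ever showing it a genuinely unknown class.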
\input{fig/pipeline.tex}
Our contribution is therefore a radically new approach to identifying regions that have been misclassified by a given semantic segmentation method, based on comparing the original image with a resynthesized one.
We demonstrate the ability of our approach to detect unexpected objects using the Lost and Found dataset~\cite{Pinggera16}. This dataset, however, only depicts a limited set of unexpected objects in a fairly constrained scenario. To palliate this lack of data, we create a new dataset depicting unexpected objects, such as animals, rocks, lost tires and construction equipment, on roads. Our method outperforms uncertainty-based baselines, as well as the state-of-the-art autoencoder-based method specifically designed to detect road obstacles~\cite{Creusot15}.
Furthermore, our approach to detecting anomalies by comparing the original image with a resynthesized one is generic and applies to other tasks than unexpected object detection. For example, deep learning segmentation algorithms are vulnerable to adversarial attacks~\cite{Xie17,cisse2017}, that is, maliciously crafted images that look normal to a human but cause the segmentation algorithm to fail catastrophically. As in the unexpected object detection case, re-synthesizing the image using the erroneous labels results in a synthetic image that looks nothing like the original one. Then, a simple non-differentiable detector, thus less prone to attacks, is sufficient to identify the attack. As shown by our experiments, our approach outperforms the state-of-the-art one of~\cite{Xiao18} for standard attacks, such as those introduced in~\cite{Xie17,cisse2017}.
\section{Related Work}\label{sec:related}
\subsection{Uncertainty in Semantic Segmentation}
Reasoning about uncertainty in neural networks can be traced back to the early 90s and Bayesian neural networks~\cite{Denker91,Mackay92,MacKay95}. Unfortunately, they are not easy to train and, in practice, {\it dropout}~\cite{Srivastava14} has often been used to approximate Bayesian inference~\cite{Gal16}. An approach relying on explicitly propagating activation uncertainties through the network was recently proposed~\cite{Gast18}. However, it has only been studied for a restricted set of distributions, such as the Gaussian one. Another alternative to modeling uncertainty is to replace a single network by an ensemble~\cite{Lakshminarayanan17}.
For semantic segmentation specifically, the standard approach is to use dropout, as in the {\it Bayesian SegNet}~\cite{Kendall15b}, a framework later extended in~\cite{Kendall17}. Leveraging such an approach to estimating label uncertainty then becomes an appealing way to detect unknown objects
because one would expect these objects to coincide with low confidence regions in the predicted semantic map. This approach was pursued in~\cite{Isobe17a, Isobe17b, Isobe17c}. These methods build upon the Bayesian SegNet and incorporate an uncertainty threshold to detect potentially mislabeled regions, including unknown objects. However, as shown in our experiments, uncertainty-based methods, such as the Bayesian SegNet~\cite{Kendall15b} and network ensembles~\cite{Lakshminarayanan17}, yield many false positives in irrelevant regions. %
By contrast, our resynthesis-based approach learns to focus on the regions depicting unexpected objects.
\subsection{Anomaly Detection via Resynthesis}
Image resynthesis and generation methods, such as autoencoder and GANs, have been used in the past for anomaly detection. The existing methods, however, mostly focus on finding behavioral anomalies in the temporal domain~\cite{Ravanbakhsh17,Kiran18}. For example,~\cite{Ravanbakhsh17} predicts the optical flow in a video, attempts to reconstruct the images from the flow, and treats significant differences from the original images as evidence for an anomaly. This method, however, was only demonstrated in scenes with a static background. Furthermore, as it relies on flow, it does not apply to single images.
To handle individual images, some algorithms compare the image to the output of a model trained to represent the distribution of the original images. For example, in~\cite{Akcay18}, the image is passed through an adversarial autoencoder and the feature loss between the output and input image is then measured. This can be used to classify whole images but not localize anomalies within the images. Similarly, given a GAN trained to represent an original distribution, the algorithm of~\cite{Schlegl17} searches for the latent vector that yields the image most similar to the input, which is computationally expensive and does not localize anomalies either.
In the context of road scenes, image resynthesis has been employed to detect traffic obstacles. For example,~\cite{Munawar15} relies on the previous frame to predict the non-anomalous appearance of the road in the current one. In~\cite{Creusot15,Munawar17}, input patches are compared to the output of a shallow autoencoder trained on the road texture, which makes it possible to localize the obstacle. These methods, however, are very specific to roads and lack generality. Furthermore, as shown in our experiments, patch-based approaches such as the one of~\cite{Creusot15} yield many false positives and our approach outperforms it.
Note that the approaches described above typically rely on autoencoders for image resynthesis. We have observed that autoencoders tend to learn to perform image compression, simply synthesizing a lower-quality version of the input image, independently of its content. By contrast, we resynthesize the image from the semantic label map, and thus incorrect class predictions yield appearance variations between the input and resynthesized image.
\subsection{Adversarial Attacks in Semantic Segmentation}
As mentioned before, we can also use the comparison of an original image with a resynthesized one for adversarial attack detection. The main focus of the adversarial attack literature has been on image classification~\cite{goodfellow2014a,carlini2017,moosavi2016}, leading to several defense strategies~\cite{kurakin2016, tramer2017} and detection methods~\cite{metzen2017,lee2018adv,ma18b}. Nevertheless, in~\cite{Xie17,cisse2017}, classification attack schemes were extended to semantic segmentation networks. However, as far as defense schemes are concerned, only~\cite{Xiao18} has proposed an attack detection method in this scenario. This was achieved by analyzing the spatial consistency of the predictions of overlapping image patches. We will show that our approach outperforms this technique.
\section{Approach}
\label{sec:method}
\input{fig/discrepancy_arch.tex}
\input{fig/synthetic_training.tex}
Our goal is to handle unexpected objects at test time in semantic segmentation and to predict the probability that a pixel belongs to a never-seen-before class. This is in contrast to most of the semantic segmentation literature, which focuses
on assigning to each pixel a probability to belong to classes it has seen in training, without explicit provision for the unexpected.
Fig.~\ref{fig:pipeline} summarizes our approach. We first use a given semantic segmentation algorithm, such as~\cite{Badrinarayanan15} and~\cite{Zhao17b}, to generate a semantic map.
We then pass this map to a generative network~\cite{Wang18c} that attempts to resynthesize the input image. If the image contains objects belonging to a class that the segmentation algorithm has not been trained for, the corresponding pixels will be mislabeled in the semantic map and therefore poorly resynthesized. We then identify these {\it unexpected} objects by detecting significant differences between the original image and the synthetic one. Below, we introduce our approach to detecting these discrepancies and assessing which differences are significant.
\subsection{Discrepancy Network}\label{sec:discrepancies}
Having synthesized a new image, we compare it to the original one to detect the meaningful differences that denote unexpected objects not captured by the semantic map. While the layout
of the known objects is preserved in the synthetic image, precise information about the scene's appearance is lost and simply differencing the images would not yield meaningful results. Instead, we train a second network, which we refer to as the {\it discrepancy network}, to detect the image discrepancies that {\it are} significant.
Fig.~\ref{fig:diff_net_arch} depicts the architecture of our discrepancy network. We drew our inspiration from the co-segmentation network of~\cite{Li18b} that uses feature correlations to detect objects co-occurring in two input images. Our network relies on a
three-stream architecture that first extracts features from the inputs. We use
a pre-trained VGG~\cite{Simonyan15} network for both the original and resynthesized image, and a custom CNN to process the one-hot representation of the predicted labels.
At each level of the feature pyramid, the features of all the streams are concatenated and passed through $1 \times 1$ convolution filters to reduce the number of channels.
In parallel, pointwise correlations between the features of the real image and the resynthesized one are computed and passed, along with the reduced concatenated features,
to an upconvolution pyramid that returns the final discrepancy score.
The details of this architecture are provided in the supplementary material.
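As an illustration, the pointwise correlation computed between the two image streams can be sketched as follows (a simplified NumPy version on hypothetical feature maps; the actual network computes it on learned VGG features within the pyramid):

```python
import numpy as np

def pointwise_correlation(feat_a, feat_b):
    """Per-pixel correlation between two (C, H, W) feature maps:
    the inner product of the l2-normalized channel vectors at each
    spatial location, yielding an (H, W) map in [-1, 1]."""
    a = feat_a / (np.linalg.norm(feat_a, axis=0, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=0, keepdims=True) + 1e-8)
    return (a * b).sum(axis=0)

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 8, 8))      # hypothetical VGG-like features
corr_same = pointwise_correlation(feats, feats)
corr_flip = pointwise_correlation(feats, -feats)
```

Regions where the real and resynthesized images agree produce correlations close to 1, while discrepancies pull the correlation down, which the upconvolution pyramid can then turn into a discrepancy score.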
\subsection{Training}\label{sec:synthetic_training}
When training our discrepancy network, we cannot observe the unknown classes. To address this, we therefore train it on synthetic data that mimics what happens in the presence of unexpected objects. In practice, the semantic segmentation network assigns incorrect class labels to the regions belonging to unknown classes. To simulate this, as illustrated in Fig.~\ref{fig:synthetic_training}, we therefore replace the label of randomly-chosen object instances with a different random one, sampled from the set of known classes. We then resynthesize the input image from this altered semantic map using the {\it pix2pixHD}~\cite{Wang18c} generator trained on the dataset of interest. This creates pairs of real and synthesized images from which we can train our discrepancy network. Note that this strategy does {\it not} require seeing unexpected objects during training.
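A minimal sketch of this label-swapping step, using a toy label map and hypothetical instance masks (the actual pipeline operates on Cityscapes instance annotations):

```python
import numpy as np

def swap_instance_labels(label_map, instance_map, num_classes, rng,
                         swap_fraction=0.5):
    """Replace the semantic label of randomly chosen object instances
    with a different random known class, mimicking the mislabeling a
    segmentation network produces on unknown objects."""
    altered = label_map.copy()
    instance_ids = [i for i in np.unique(instance_map) if i != 0]  # 0 = background
    n_swap = max(1, int(swap_fraction * len(instance_ids)))
    for inst in rng.choice(instance_ids, size=n_swap, replace=False):
        mask = instance_map == inst
        current = altered[mask][0]
        # sample a *different* known class for the whole instance
        new_label = rng.choice([c for c in range(num_classes) if c != current])
        altered[mask] = new_label
    return altered

rng = np.random.default_rng(0)
labels = np.zeros((4, 4), dtype=int)        # class 0 everywhere
instances = np.zeros((4, 4), dtype=int)
instances[1:3, 1:3] = 1                      # one object instance
altered = swap_instance_labels(labels, instances, num_classes=3, rng=rng)
```

The altered map is then fed to the {\it pix2pixHD} generator, and the (real image, resynthesized image) pair serves as a training example for the discrepancy network.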
\subsection{Detecting Adversarial Attacks}
As mentioned above, comparing an input image to a resynthesized one also allows us to detect adversarial attacks.
To this end, we rely on the following strategy. As for unexpected object detection, we first compute a semantic map from the input image, adversarial or not, and resynthesize the scene from this map using the {\it pix2pixHD} generator. Here, unlike in the unexpected object case, the semantic map predicted for an adversarial example is completely wrong, and the resynthesized image is therefore completely distorted. This makes attack detection a simpler problem than unexpected object detection. We can thus use a simple non-differentiable heuristic to compare the input image with the resynthesized one. Specifically, we use the $L^2$ distance between HOG~\cite{Dalal05} features computed on the input and resynthesized image. We then train a logistic regressor on these distances to predict whether the input image is adversarial or not. Note that this simple heuristic is much harder to attack than a more sophisticated, deep-learning-based one.
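A toy version of this heuristic can be sketched as follows, with a crude global orientation histogram standing in for the cell-wise HOG features of~\cite{Dalal05} and a raw distance standing in for the learned logistic regressor:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Crude stand-in for HOG: a global histogram of gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientations
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def attack_score(original, resynthesized):
    """L2 distance between descriptors; a large value suggests that the
    resynthesized image no longer resembles the input, i.e. an attack."""
    d = orientation_histogram(original) - orientation_histogram(resynthesized)
    return float(np.linalg.norm(d))

# horizontal stripes vs. a faithful / heavily distorted resynthesis
img = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32))[:, None], (1, 32))
benign = img + 0.01 * np.cos(np.linspace(0, 6 * np.pi, 32))[:, None]
distorted = img.T                                # gradients rotated by 90 degrees
```

In the full pipeline, the scores are not thresholded by hand; a logistic regressor is fit on them to decide whether the input is adversarial.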
\section{Experiments}\label{sec:experiments}
We first evaluate our approach on the task of detecting unexpected objects,
such as lost cargo, animals, and rocks, in traffic scenes, which constitute our target application domain and the central evaluation domain for semantic segmentation thanks to the availability of large datasets, such as Cityscapes~\cite{Cordts16} and BDD100K~\cite{Yu18c}. For this application,
all tested methods output a per-pixel {\it anomaly score},
and we compare the resulting maps with the ground-truth anomaly annotations
using ROC curves and the area under the ROC curve (AUROC) metric.
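The AUROC metric used throughout can be computed directly from the per-pixel scores via its rank-statistic form (a minimal sketch; standard libraries provide equivalent routines):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random anomalous pixel scores higher than a
    random normal one (ties count one half)."""
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels, dtype=bool).ravel()
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# perfectly separated anomaly scores give an AUROC of 1
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
```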
Then, we present our results on the task of adversarial attack detection.
We perform evaluations using the Bayesian SegNet~\cite{Kendall15b} and the PSP Net~\cite{Zhao17b},
both trained using the BDD100K dataset~\cite{Yu18c} (segmentation part)
chosen for its large number of diverse frames, allowing the networks to generalize to the anomaly datasets, whose images differ slightly and cannot be used during training.
To train the image synthesizer and discrepancy detector, we used the training set of Cityscapes~\cite{Cordts16}, downscaled to a resolution of $1024 \times 512$ because of GPU memory constraints.
\subsection{Baselines}
As a first baseline, we rely on an uncertainty-based semantic segmentation network. Specifically, we use
the Bayesian SegNet~\cite{Kendall15b}, which
samples the distribution of the network's results using random dropouts
--- the uncertainty measure is computed as the variance of the samples. We will refer to this method as {\it Uncertainty (Dropout)}.
It requires the semantic segmentation network to contain dropout layers,
which is not the case for most state-of-the-art networks, such as PSP~\cite{Zhao17b}, which is based on a ResNet backbone.
To calculate the uncertainty of the PSP network, we therefore use the ensemble-based method of~\cite{Lakshminarayanan17}:
We trained the PSP model four times, yielding different weights due to the random initialization.
We then use the variance of the outputs of these networks as a proxy for uncertainty.
We will refer to this method as {\it Uncertainty (Ensemble)}.
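The ensemble uncertainty can be sketched as follows, assuming the softmax outputs of the $K$ networks are stacked into a $(K, H, W, C)$ array (the exact reduction used in~\cite{Lakshminarayanan17} may differ):

```python
import numpy as np

def ensemble_uncertainty(prob_maps):
    """prob_maps: (K, H, W, C) softmax outputs of K independently
    trained networks. Returns an (H, W) uncertainty map: the variance
    across ensemble members, averaged over the C classes."""
    prob_maps = np.asarray(prob_maps, dtype=float)
    return prob_maps.var(axis=0).mean(axis=-1)

# four members agreeing vs. disagreeing on a single two-class pixel
agree = np.tile(np.array([1.0, 0.0]), (4, 1)).reshape(4, 1, 1, 2)
disagree = np.array([[1.0, 0.0], [0.0, 1.0],
                     [1.0, 0.0], [0.0, 1.0]]).reshape(4, 1, 1, 2)
```

Pixels on which the ensemble members agree receive low uncertainty, whereas disagreement drives the variance up.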
Finally, we also evaluate the road-specific approach of~\cite{Creusot15}, which relies on training
a shallow Restricted Boltzmann Machine autoencoder to resynthesize patches of road texture corrupted by Gaussian noise.
The regions whose appearance differs from the road are expected not to be reconstructed properly, and thus an
anomaly score for each patch can be obtained using the difference between the autoencoder's input and output.
The original implementation not being publicly available, we re-implemented it and will make our code publicly available for future comparisons.
As in the original article, we use $8\times 8$ patches with stride 6 and a hidden layer of size 20.
We extract the empty road patches required by this method for training from the Cityscapes images using the ground-truth labels to determine the road area.
We will refer to this approach as {\it RBM}.
The full version of our discrepancy detector takes as input the original image, the resynthesized one and the predicted semantic labels. To study the importance of using both of these information sources as input, we also report the results of variants of our approach that have access to only one of them. We will refer to these variants as
{\it Ours (Resynthesis only)} and {\it Ours (Labels only)}.
\subsection{Anomaly Detection Results}
We evaluate our method's ability to detect unexpected objects using two separate datasets described below.
We did not use any portion of these datasets during training, because we tackle the task of finding never-seen-before objects.
\subsubsection{Lost and Found}
\input{fig/rocs_all.tex}
The {\it Lost And Found}~\cite{Pinggera16} dataset contains images of small items, such as cargo and toys, left on the street,
with per-pixel annotations of the obstacle and the free-space in front of the car. We perform our evaluation using the test set, excluding 17 frames for which the annotations are missing.
We downscaled the images to $1024 \times 512$ to match the size of our training images and selected a region of interest which excludes the ego-vehicle and recording artifacts at the image boundaries.
We do not compare our results against the stereo-based ones introduced in~\cite{Pinggera16} because our study focuses on monocular approaches.
The ROC curves of our approach and of the baselines are shown in the left column of Fig.~\ref{fig:rocs_all}. %
Our method outperforms the baselines in both cases. The Labels-only and Resynthesis-only variants of our approach show lower accuracy but remain competitive. By contrast, the uncertainty-based methods prove to be ill-suited for this task. Qualitative examples are provided in Fig.~\ref{fig:samples_laf}.
Note that, while our method still produces false positives, albeit much fewer than the baselines, some of them are valid unexpected objects, such as the garbage bin in the first image. These objects, however, were not annotated as obstacles in the dataset.
Since the RBM method of~\cite{Creusot15} is specifically trained to reconstruct the road,
we further restricted the evaluation to the road area. To this end,
we defined the region of interest as the union of the {\it obstacle} and {\it freespace} annotations of {\it Lost And Found}.
The resulting ROC curves are shown in the middle column of Fig.~\ref{fig:rocs_all}.
The globally-higher scores in this scenario show that distinguishing anomalies from only the road is easier than finding them in the entire scene.
While the RBM approach significantly improves in this scenario, our method still outperforms it.
\input{fig/samples_LAF.tex}
\subsubsection{Our Road Anomaly Dataset}\label{sec:dataset_anomalies}
Motivated by the scarcity of available data for unexpected object detection,
we collected online images depicting anomalous objects, such as
animals, rocks, lost tires, trash cans, and construction equipment, located on or near the road.
We then produced per-pixel annotations of these unexpected objects manually, using the {\it Grab Cut} algorithm~\cite{Rother04} to speed up the process.
The dataset contains 60 images rescaled to a uniform size of $1280 \times 720$.
We will make this dataset and the labeling tool publicly available.
The results on this dataset are shown in the right column of Fig.~\ref{fig:rocs_all}, with example images in Fig.~\ref{fig:samples_road_anomaly}.
Our approach outperforms the baselines, demonstrating its ability to generalize to new environments. By contrast, the \textit{RBM} method's performance is strongly affected by the presence of road textures that differ significantly from the Cityscapes ones.
\input{fig/samples_road_anomaly.tex}
\subsection{Adversarial Attack Detection}
We now evaluate our approach to detecting attacks using the two types of attack that have been used in the context of semantic segmentation.
\noindent\textbf{Adversarial Attacks:} For semantic segmentation, the two state-of-the-art attack strategies are Dense Adversary Generation (DAG)~\cite{Xie17} and Houdini~\cite{cisse2017}. While DAG is an iterative gradient-based method, Houdini combines the standard task loss with an additional stochastic margin factor between the score of the actual and predicted semantic maps to yield less perturbed images. Following~\cite{Xiao18}, we generate adversarial examples with two different target semantic maps. In the first case (Shift), we shift the predicted label at each pixel by a constant offset and use the resulting label as target. In the second case (Pure), a single random label is chosen as target for all pixels, thus generating a pure semantic map. We generate adversarial samples on the validation sets of the Cityscapes and BDD100K datasets, yielding 500 and 1000 images, respectively, with every normal sample having an attacked counterpart.
\noindent\textbf{Results:}
We compare our method with the state-of-the-art spatial consistency (SC) work of~\cite{Xiao18}, which crops random overlapping patches and computes the mean Intersection over Union (mIoU) of the overlapping regions.
The results of this comparison are provided in Table~\ref{tbl:advresults}.
Our approach outperforms SC on Cityscapes and performs on par with it on BDD100K despite our use of a Cityscapes-trained generator to resynthesize the images.
Note that, in contrast with SC, which requires comparing 50 pairs of patches to detect the attack, our approach only requires a single forward pass through the segmentation and generator networks. In Fig.~\ref{fig:advresults}, we show the resynthesized images produced when using adversarial samples. Note that they differ massively from the input images. More examples are provided in the supplementary material.
\input{fig/adv_reconstructions.tex}
\input{fig/adv_results.tex}
\section{Conclusion}\label{sec:conclusion}
In this paper, we have introduced a drastically new approach to detecting the unexpected in images. Our method is built on the intuition that, because unexpected objects have not been seen during training, typical semantic segmentation networks will produce spurious labels in the corresponding regions. Therefore, resynthesizing an image from the semantic map will yield discrepancies with respect to the input image, and we introduced a network that learns to detect the meaningful ones. Our experiments have shown that our approach detects the unexpected objects much more reliably than uncertainty- and autoencoder-based techniques. We have also contributed a new dataset with annotated road anomalies, which we believe will facilitate research in this relatively unexplored field. Our approach still suffers from the presence of some false positives, which, in a real autonomous driving scenario would create a source of distraction. Reducing this false positive rate will therefore be the focus of our future research.
\section{Introduction}
\label{sec:intro}
Modern deep generative modeling paradigms offer a powerful approach for learning data distributions.
Pioneering models in this family such as generative adversarial networks (GANs) \citep{GoodfellowPMXWOCB14} and variational autoencoders (VAEs) \citep{KingmaW13}
propose
elegant solutions
to generate high quality photo-realistic images, which were later evolved to generate other modalities of data. Much of the success of attaining photo-realism in generated images is attributed to the \textit{adversarial} nature of training
in GANs.
Essentially, GANs are neural samplers
in which a deep neural network $G_\phi$ is trained to generate high dimensional samples
from some low dimensional noise input. During the training, the generator is pitched against a classifier: the classifier is trained to distinguish the generated
from the true data samples and the generator is simultaneously trained to generate samples that look like true data. Upon successful training, the classifier fails to distinguish between the generated and actual samples. Unlike VAE, GAN is an implicit generative model since its likelihood function is implicitly defined and is in general intractable. Therefore training and inference are carried out using likelihood-free techniques such as the one described above.
In its original formulation, GANs can be shown to approximately minimize an $f$-divergence measure between the true data distribution $p_x$ and the distribution $q_\phi$ induced by its generator $G_\phi$. The difficulty in training the generator using the $f$-divergence criterion is that the supports of the data and model distributions need to match perfectly. If at any time in the training phase the supports have non-overlapping portions, the divergence either saturates at its maximum
or becomes undefined. If the divergence or its gradient cannot be evaluated, it cannot, in turn, direct the weights of model towards matching distributions \citep{arjovsky2017wasserstein} and training fails.
In this work, we present a novel method,
BreGMN, for implicit adversarial and non-adversarial generative models that is based on \emph{scaled Bregman divergences} \citep{StummerV12} and does not suffer from the aforementioned problem of support mismatch. Unlike $f$-divergences, scaled Bregman divergences can be defined with respect to a base measure such that they stay well-defined even when the data and the model distributions do not have matching support.
Such an observation leads to a key contribution of our work, which is to identify base measures that can
play such a useful role. We find that measures whose support include the supports of data and model are the ones applicable.
In particular, we leverage Gaussian distributions to augment distributions of data and model into a base measure that guarantees the desired behavior. Finally we propose training algorithms for both adversarial and non-adversarial versions of the proposed model.
The proposed method facilitates a steady decrease of the objective function and hence progress of training. We empirically evaluate the advantage of the proposed model for generation of synthetic and real image data. First, we study simulated data in a simple 2D setting with mismatched supports and show the advantage of our method in terms of convergence.
Further, we evaluate BreGMN when used to train both adversarial and non-adversarial generative models. For this purpose, we provide illustrative results on the MNIST, CIFAR10, and CelebA datasets that are comparable in sample quality to state-of-the-art methods. In particular, our quantitative results on real datasets also demonstrate the effectiveness of the proposed method in terms of sample quality.
The remainder of this document is organized as follows. Section \ref{sec:related} outlines related work. We introduce the scaled Bregman divergence in Section \ref{sec:discrepancy}, demonstrate how it generalizes a wide variety of popular discrepancy measures, and show that with the right choice of base measure it can eliminate the support mismatch issue. Our application of the scaled Bregman divergence to generative modeling networks is described in Section~\ref{sec:method}, with empirical results
presented in Section \ref{sec:results}. Section \ref{sec:conc} concludes the paper.
\section{Related work}\label{sec:related}
Since the genesis of adversarial generative modeling,
there has been a flurry of work in this domain, e.g. \citep{NowozinCT16, SrivastavaVRGS17, mmdgan,arjovsky2017wasserstein} covering both practical and theoretical challenges in the field.
Within this, a line of research
addresses
the serious problem of \emph{support mismatch}
that makes training hopeless if not remedied. One proposed way to alleviate this problem and stabilize training is to match the distributions of the data and the model based on a different, well-behaved discrepancy measure that can be evaluated even if the distributions are not equally supported. Examples of this approach include Wasserstein GANs \citep{arjovsky2017wasserstein} that replace the $f$-divergence with Wasserstein distance between distributions and other integral probability metric (IPM) based methods such as MMD GANs \citep{mmdgan}, Fisher GAN \citep{mroueh2017fisher}, etc. While IPM based methods are better behaved with respect to the non-overlapping support issue, they have their own issues. For example, MMD-GAN requires several additional penalties such as feasible set reduction in order to successfully train the generator. Similarly, WGAN requires some ad-hoc method for ensuring the Lipschitz constraint on the critic via gradient clipping, etc.
Another approach to remedy the support mismatch issue comes from
\cite{NowozinCT16}.
They showed how GANs can be trained by optimizing a variational lower bound on the actual $f$-divergence underlying the original GAN formulation. They also showed how the original GAN loss minimizes a Jensen-Shannon divergence and how it can be modified to train the generator using an $f$-divergence of choice.
In parallel, works such as \citep{amari2010information} have studied the relation between the many different divergences available in the literature.
An important extension to \emph{Bregman} divergences, namely \emph{scaled Bregman divergences}, was proposed in the works of \cite{StummerV12,KisslingerS13} and generalizes both $f$-divergences and Bregman divergences.
The Bregman divergence in its various forms has long been used as the objective function for \emph{training} machine learning models. Supervised learning based on least squares (a Bregman divergence) is perhaps the earliest example. \cite{helmbold1995worst, auer1996exponentially, kivinen1998relative} study the use of Bregman divergences as the objective function for training single-layer neural networks for univariate and multivariate regression, along with elegant methods for matching the Bregman divergence with the network's nonlinear transfer function via the so-called \emph{matching loss} construct.
In unsupervised learning, Bregman divergences are unified as the objective for clustering in
\cite{BanerjeeMDG05},
while convex relaxations of Bregman clustering models are proposed in \cite{ChengZS13}.
Generative modeling
based on Bregman divergences is explored in \cite{uehara2016b,
uehara2016generative}, which rely on a duality relationship between Bregman divergences and $f$-divergences. These works retain the $f$-divergence based $f$-GAN objective, but use a Bregman divergence as a distance measure for estimating the density ratios needed in the $f$-divergence estimator. This contrasts with our approach, which uses the scaled Bregman divergence as the overall training objective itself.
\section{Generative modeling via discrepancy measures}
\label{sec:discrepancy}
The choice of distance measure between the data and the model distribution is critical, as the success of the training procedure largely depends on the ability of these distance measures to provide meaningful gradients to the optimizer.
Common choices for distances include the Jensen-Shannon divergence (vanilla GAN), the $f$-divergence ($f$-GAN) \citep{NowozinCT16}, and various integral probability metrics (IPMs, e.g., in Wasserstein-GAN and MMD-GAN) \citep{arjovsky2017wasserstein, mmdgan}. In this section, we consider
a generalization of the Bregman divergence that also subsumes the Jensen-Shannon and $f$-divergences as special cases, and can be shown to incorporate some geometric information in a way analogous to IPMs.
\subsection{Scaled Bregman divergence}
The \textbf{Bregman divergence} \citep{bregman1967relaxation} forms a measure of distance between two vectors $p,q \in \mathbb{R}^d$ using a convex function $F: \mathbb{R}^d \rightarrow \mathbb{R}$ as
\begin{align*}
B_F(p, q) = F(p) - F(q) - \nabla F(q) \cdot(p- q),
\end{align*}
which includes a variety of distances, such as the squared Euclidean distance and the KL divergence between finite-cardinality probability mass functions, as special cases.
More useful in our setting is the class of \textbf{separable} Bregman divergences of the form
\begin{equation}
B_f(P,Q) = \int_{\mathcal{X}} f(p(x)) - f(q(x)) - f'(q(x)) (p(x) - q(x)) dx
\label{eq:sepBreg}
\end{equation}
where $f: \mathbb{R}^+\rightarrow \mathbb{R}$ is a convex function, $f'$ is its right derivative and $P$ and $Q$ are measures on $\cal X$ with densities $p$ and $q$ respectively.
In this form the divergence is a discrepancy measure for distributions as desired.
In general, as the name divergence implies, the quantity is non-symmetric. It does not satisfy the triangle inequality either \citep{acharyya2013bregman}.
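For instance, with $f(t) = t\log t$, Eq.~\eqref{eq:sepBreg} reduces to the KL divergence for normalized densities, which can be checked numerically on a discretized grid (a sketch; the two Gaussian densities are only an example):

```python
import numpy as np

def separable_bregman(p, q, f, fprime, dx):
    """Discretized separable Bregman divergence B_f(P, Q)."""
    return float(np.sum((f(p) - f(q) - fprime(q) * (p - q)) * dx))

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)          # N(0, 1)
q = np.exp(-0.5 * (x - 1) ** 2) / np.sqrt(2 * np.pi)    # N(1, 1)

f = lambda t: t * np.log(t)
fprime = lambda t: np.log(t) + 1.0
breg = separable_bregman(p, q, f, fprime, dx)
kl = float(np.sum(p * np.log(p / q) * dx))               # KL(N(0,1) || N(1,1)) = 1/2
```

Expanding the integrand shows why: $f(p) - f(q) - f'(q)(p-q) = p\log(p/q) - (p - q)$, and the second term integrates to zero when both densities are normalized.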
While this is a valid discrepancy measure, the Bregman divergence does not yield meaningful gradients for training when the two distributions in question have non-overlapping portions in their support, similar to the case of $f$-divergences \citep{ArjovskyB17}.
We thus propose to use the \emph{scaled} Bregman divergence, which introduces a third measure $M$ with density $m$ that can depend on $P$ and $Q$ and uses it as a base measure for the Bregman divergence. Specifically, the \textbf{scaled Bregman divergence} \citep{StummerV12} is given by
\begin{align}
B_f(P,Q |M) = \int_{\cal X} f\left(\frac{p(x)}{m(x)}\right)- f\left(\frac{q(x)}{m(x)}\right) -f'\left(\frac{q(x)}{m(x)}\right) \left(\frac{p(x)}{m(x)} - \frac{q(x)}{m(x)} \right) d M.
\label{eq:scaledBreg}
\end{align}
This expression is equal to the separable Bregman divergence \eqref{eq:sepBreg} when $M$ is equal to the Lebesgue measure.
As shown in \citep{StummerV12}, the scaled Bregman divergence \eqref{eq:scaledBreg} contains many popular discrepancy measures as special cases. In particular, when $f(t) = t \log t$ it reduces to the \textbf{KL~divergence} for any choice of $M$ (as does the vanilla Bregman divergence).
Many classical criteria (including the KL and Jensen-Shannon divergences) belong to the family of \textbf{$f$-divergences}, defined as
\begin{align*}
D_f(P, Q) = \int_{\cal{X}} q(x) f\left(\frac{p(x)}{q(x)}\right)dx.
\end{align*}
where $f : \mathbb{R}_+ \to \mathbb{R}$ is a convex, lower semi-continuous function satisfying $f(1) = 0$, and the densities $p$ and $q$ are absolutely continuous with respect to each other. With the choice $M = Q$, the scaled Bregman divergence reduces to the $f$-divergence family:
\[
B_f(P,Q |Q) = \int_{\cal X} f\left(\frac{p(x)}{q(x)}\right) -f'\left(1\right) \left(\frac{p(x)}{q(x)} - 1 \right) d Q = \int_{\cal{X}} q(x) f\left(\frac{p(x)}{q(x)}\right)dx,
\]
which shows all $f$-divergences are special cases of the scaled Bregman divergence.
A more complete list of discrepancy measures included in the class of scaled Bregman divergences is found in \cite{StummerV12}.
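Both reductions are easy to verify numerically on discretized densities (a sketch; the squared Hellinger distance, $f(t) = (\sqrt{t}-1)^2$, is used as the example $f$-divergence):

```python
import numpy as np

def scaled_bregman(p, q, m, f, fprime, dx):
    """Discretized scaled Bregman divergence B_f(P, Q | M)."""
    rp, rq = p / m, q / m
    return float(np.sum((f(rp) - f(rq) - fprime(rq) * (rp - rq)) * m * dx))

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
q = np.exp(-0.5 * (x - 1) ** 2) / np.sqrt(2 * np.pi)

# (i) M = Q recovers the f-divergence (here the squared Hellinger distance)
f_h = lambda t: (np.sqrt(t) - 1.0) ** 2
fp_h = lambda t: 1.0 - 1.0 / np.sqrt(t)
hellinger = float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2 * dx))

# (ii) f(t) = t log t gives the KL divergence for *any* base measure M
f_kl = lambda t: t * np.log(t)
fp_kl = lambda t: np.log(t) + 1.0
m = 0.5 * (p + q)
kl = float(np.sum(p * np.log(p / q) * dx))
```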
\subsection{Noisy base measures and support mismatch}
A widely-known weakness of $f$-divergence measures is that when the supports of $p$ and $q$ are disjoint,
the value of the divergence is trivial or undefined. In the context of generative models, this issue is often tackled by adding noise to the model distribution which extends its support over the entire observed space such as in VAEs. However, adding noise to the observed space is not particularly well-suited for tasks such as image generation as it results in blurry images. In this work we propose choosing a base measure $M$ that in some sense incorporates geometric information in such a way that the gradients in the disjoint setting become informative without compromising the image quality.
For the scaled Bregman $B_f (P,Q |M)$, we propose choosing a ``noisy'' base measure $M$, specifically one formed by convolving some other measure with the Gaussian measure $\mathcal{N}(0, \Sigma)$. Recall that the convolution of two distributions corresponds to the addition of the associated random variables; hence, in this case we are in effect adding Gaussian noise to the variable generated by $M$. In addition to adding noise, we require a base measure $\tilde{M}$ that depends on $P$ and $Q$ to avoid the vanilla Bregman divergence's lack of informative gradients (see Section \ref{sec:BregEst} below). By analogy to the Jensen-Shannon divergence, we choose
\begin{equation}\label{eq:baseMeasure}
\tilde M = \alpha (P \ast \mathcal{N}(0, \Sigma_1)) + (1-\alpha) (Q \ast {\cal N}(0,\Sigma_2 ))
\end{equation}
for $0 \le \alpha\le 1$ and some covariances $\Sigma_1$ and $\Sigma_2$, where $\ast$ denotes the convolution of two distributions. Denote the density of $\tilde M$ as $\tilde m$.
Importantly, observe that each term of the corresponding scaled Bregman $B_f (P,Q |\tilde M)$ is always well defined and finite (with the exception of certain choices of $f$ such as $-\log$ that require numerical stabilization similar to the case of $f$-divergence) since $\tilde M$ has full support.
Furthermore, since $\tilde M$ is a noisy copy of $\alpha P + (1-\alpha) Q$, the ratio $\frac{p}{\tilde m}$ will be affected by $q$ even outside the support of $q$, and vice versa. This ensures that a training signal remains in the support mismatch case.
The presence of this training signal seems to indicate that geometric information is being used, since it varies with the distance between the supports.
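This behavior can be illustrated numerically: for two uniform densities with disjoint supports and the noisy base measure of Eq.~\eqref{eq:baseMeasure} (taking $\alpha = 1/2$, equal covariances, and $f(t) = t^2$ to avoid logarithms), the scaled Bregman divergence remains finite and grows as the supports move apart (a sketch):

```python
import numpy as np

def gauss_smooth(density, x, sigma):
    """Convolve a discretized density with N(0, sigma^2) and renormalize."""
    dx = x[1] - x[0]
    half = len(x) // 2
    k = np.arange(-half, half + 1) * dx
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()
    out = np.convolve(density, kernel, mode="same")
    return out / (out.sum() * dx)

def scaled_bregman(p, q, m, f, fprime, dx):
    rp, rq = p / m, q / m
    return float(np.sum((f(rp) - f(rq) - fprime(rq) * (rp - rq)) * m * dx))

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
box = lambda a, b: ((x >= a) & (x <= b)).astype(float) / (b - a)

p = box(-1.0, 0.0)                         # supported on [-1, 0]
f, fp = (lambda t: t ** 2), (lambda t: 2.0 * t)
vals = []
for shift in (0.5, 2.0, 4.0):              # q drifts away from p's support
    q = box(shift, shift + 1.0)
    m_tilde = 0.5 * (gauss_smooth(p, x, 1.0) + gauss_smooth(q, x, 1.0))
    vals.append(scaled_bregman(p, q, m_tilde, f, fp, dx))
```

With $f(t) = t^2$ the divergence equals $\int (p-q)^2 / \tilde{m}\, dx$, which stays finite because $\tilde{m}$ has full support, and increases with the separation, reflecting the geometric signal discussed above.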
To further explore this intuitive connection between noisy base measures and geometric information, we attempt to relate $B_f(P,Q|\tilde{M})$ to the $W_p$ distance. In what follows, for simplicity we focus on the case of $f(t) = t \log t$; analysis for more general choices of $f$ is left for future work. For the KL divergence for example, Pinsker's inequality states that
\[
D_{KL}(p||q) \geq 2(W_1(p,q))^2 .
\]
A similar lower bound for the $W_2$ distance and certain log-concave $q$ follows from Talagrand's inequality \citep{bobkov2000brunn}. These lower bounds are not surprising, since the KL divergence can go to infinity when Wasserstein-p is finite.
However, lower bounds of this type are not sufficient to imply that a divergence is using geometric information, since it can increase very quickly while $W_p$ increases only slightly.
Our use of a noisy $M_0$, however, allows us to obtain an upper bound for a symmetrized version of $B_f(P,Q|\tilde{M})$, which implies a continuity with respect to geometric information. While we found in our generative modeling experiments that a symmetrized version is unnecessary to use in practice, it is useful for comparison to IPMs.
Recall that the Jensen-Shannon divergence
constructs a symmetric measure by symmetrizing the KL divergence around $(P+Q)/2$. Any Bregman divergence can be similarly symmetrized (Eq. 16 in \cite{nielsen2011skew}).
For simplicity, we consider the special case of $\tilde{M}$, namely $M_0 = \frac{{P} + {Q}}{2}\ast \mathcal{N}_\sigma$ with density $m_0$, and use it to both scale and symmetrize the scaled Bregman divergence, obtaining the measure
$
{B_f(P,M_0 |M_0) + B_f(Q,M_0 |M_0)}
= {D_f(P || M_0) + D_f(Q||M_0)}$.
In Section \ref{supp:wass} of the Supplement we prove:
\begin{proposition}\label{prop:wass}
Assume that $\mathbb{E}_{U\sim P} \|U\|$ and $\mathbb{E}_{V\sim Q} \|V\|$ are bounded. Then
\[
\left|B_{t \log t}(P,M_0 |M_0) - B_{t\log t}(Q,M_0 |M_0)\right| \leq c W_2(P,Q) + |h(Q) - h(P)|,
\]
where $c$ is a constant given in the proof and $h(P)$ is the Shannon entropy of $P$.
\end{proposition}
While an $h(P) - h(Q)$ term remains, it is simple to rescale $Q$ to match the entropy of $P$, eliminating that term and leaving the Wasserstein distance.\footnote{Under certain smoothness conditions on $P$ and $Q$, $|h(P) - h(Q)|$ can itself be upper bounded by the Wasserstein distance (see \cite{Polyanskiy016a} for details).}
While not fully characterizing the geometric information in $B_f(P,Q|M_0)$, these observations seem to imply that the use of the noisy $\tilde{M}$ is capable of incorporating some geometric information without having to resort to IPMs with their associated training difficulties in the GAN context such as gradient clipping and feasible set reduction \citep{ArjovskyB17,mmdgan}.
\section{Model}
\label{sec:method}
Let $\{x_i \vert x_i \in \mathbb{R}^d\}_{i=1}^N$ be a set of $N$ samples drawn from the data generating distribution $p_x$ that we are interested in learning through a parametric model $G_\phi$.
The goal of generative modeling is to train $G_\phi$,
generally implemented as a deep neural network, to map samples from a $k$-dimensional easy-to-sample distribution
to the ambient $d$ dimensional data space, i.e. $G_\phi:\mathbb{R}^k \mapsto \mathbb{R}^d$. Letting $q_\phi$ be the distribution induced by the generator function $G_\phi$, almost all training criteria are of the form
\begin{align}
\label{eq:gen_loss}
\min_\phi
{D}(p_x \Vert q_\phi)
\end{align}
where ${D}(\cdot \Vert \cdot)$ is
a measure of discrepancy between the data and the model distributions.
We propose to use the
{scaled-Bregman divergence} as
${D}$ in Equation \eqref{eq:gen_loss}.
We will show
that unlike $f$-divergences, scaled-Bregman divergences can be easily estimated
with respect to a base measure using only samples from the distributions.
This is important when we aim to match distributions in very high dimensional spaces where they may not have any overlapping support \citep{arjovsky2017wasserstein}.
In order to compute the divergence between the data and model distributions, it is not required that both densities be known or evaluated at realizations of the distributions. Instead, being able to evaluate the ratio between them, i.e.\ \emph{density ratio estimation}, is typically all that is needed. For example, generative models based on $f$-divergences only need density ratio estimation. Importantly, just as for $f$-divergences, scaled-Bregman divergence estimation requires estimates of the density ratios only.
Below, we describe two methods of
density ratio estimation (DRE) between two distributions.
In what follows, suppose $r=\frac{p_x}{q_\phi}$ is the density ratio.
\paragraph{Discriminator-based DRE:}
This family of models uses a discriminator to estimate the density ratio.
Let $y=1$ if $x\sim p_x$ and $y=0$ if $x\sim q_\phi$. Further, let $\sigma(C(x)) = p(y=1\vert x)$, namely the discriminator, be a binary classifier trained on samples from $p_x$ and $q_\phi$, where $\sigma$ is the sigmoid function.
It is then easy to show that $C(x) = \log \frac{p_x(x)}{q_\phi(x)}= \log r(x)$ \citep{dre}, so $C$ is a function of the density ratio $r(x)$.
In fact, this is the underlying principle of adversarial generative models \citep{GoodfellowPMXWOCB14}. As such, most discriminator-based DREs result in adversarial training procedures when used in generative models.
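A minimal, self-contained sketch of this idea (ours, not the authors' implementation) fits a one-dimensional logistic regression by full-batch gradient descent on samples from $p=\mathcal{N}(0,1)$ and $q=\mathcal{N}(1,1)$; with $y=1$ for $p$-samples, the learned logit recovers the log density ratio, which here is $\log r(x) = 0.5 - x$:

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 2000)  # samples from p = N(0, 1)
xq = rng.normal(1.0, 1.0, 2000)  # samples from q = N(1, 1)

# Logistic regression C(x) = w*x + b with sigmoid(C(x)) ~ P(y=1 | x), y=1 for p.
x = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])
w, b = 0.0, 0.0
for _ in range(3000):                          # full-batch gradient descent
    grad = 1.0 / (1.0 + np.exp(-(w * x + b))) - y
    w -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)

log_ratio = lambda t: w * t + b                # estimates log p(t)/q(t) = 0.5 - t
```

The learned parameters approach $(w,b)\approx(-1, 0.5)$, i.e.\ the exact log ratio, without ever evaluating either density.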
\paragraph{MMD-based DRE:}
This family of models estimates the density ratio without the use of a discriminator.
In order to estimate the density ratio $r$ without training a classifier, thereby avoiding adversarial training of the generator later, we can employ the maximum mean discrepancy (MMD) \citep{mmd} criterion as in \cite{dre}.
By solving for $r$ in the RKHS in
\begin{align}
\label{eq:mmd-ratio}
\min_{r\in\mathcal{H}} \bigg \Vert \int k(x; .)p_x(x) dx - \int k(x; .)r(x)q_\phi(x) dx \bigg \Vert_{\mathcal{H}}^2,
\end{align}
where $k$ is a kernel function, we obtain a closed form estimator of the density ratio as
\begin{align}
\label{eq:ratio}
\hat{\mathbf{r}}_{p/q} = K_{q,q}^{-1} K_{q,p} \pmb 1.
\end{align}
Here $K_{q,q}$ and $K_{q,p}$ denote the
Gram matrices
corresponding to kernel $k$.
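The closed form \eqref{eq:ratio} is straightforward to implement; the sketch below (ours) assumes equal sample sizes from the two distributions, uses a Gaussian kernel, and adds a small ridge term $\epsilon I$ for numerical stability, which the closed form omits:

```python
import numpy as np

def gaussian_kernel(a, b, bw=1.0):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def mmd_density_ratio(xq, xp, bw=1.0, eps=1e-2):
    """Estimate r = p/q at the q-samples via r_hat = (K_qq + eps*I)^{-1} K_qp 1."""
    K_qq = gaussian_kernel(xq, xq, bw) + eps * np.eye(len(xq))
    K_qp = gaussian_kernel(xq, xp, bw)
    return np.linalg.solve(K_qq, K_qp @ np.ones(len(xp)))

# 1-D check: p = N(0,1), q = N(1,1); the true ratio p/q is larger for smaller x.
rng = np.random.default_rng(0)
xq = rng.normal(1.0, 1.0, (400, 1))
xp = rng.normal(0.0, 1.0, (400, 1))
r_hat = mmd_density_ratio(xq, xp)
```

No discriminator is trained here; the estimate follows from a single linear solve, which is what makes the resulting generator training non-adversarial.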
\subsection{Empirical estimation}\label{sec:BregEst}
Using the DRE estimators introduced above we create empirical estimators of the scaled-Bregman divergence \eqref{eq:scaledBreg} as
\begin{align}
\label{eq:bre_e}
\hat{B}_f(p_x,q_\phi|M) = \frac{1}{N}\sum_{i=1}^N \Big[ f\left(r_{p/m}(x_i)\right)- f\left(r_{q_\phi/m}(x_i)\right) \\ \nonumber
-f'\left(r_{q_\phi/m}(x_i)\right) \left(r_{p/m}(x_i) - r_{q_\phi/m}(x_i) \right) \Big]
where $r_{p/m}$ denotes a DRE of $p/m$ and the $x_i$ are i.i.d. samples from the base distribution $m$ with measure $M$.
Note that this empirical estimator $\hat{B}_f$ does not have gradients with respect to $\phi$ if we only evaluate the DRE estimators on samples from the base measure $m$.
Choices of $M$ that depend on $p$ and $q$, however, including our choice of $\tilde M$ \eqref{eq:baseMeasure} as well as the choice $M = Q$ ($f$-divergences), have informative gradients, allowing us to train the generator.
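The estimator \eqref{eq:bre_e} takes only a few lines given the two ratio estimators. The sketch below (ours, plugging in exact ratios in place of DREs) doubles as a sanity check: for $f(t)=t\log t$ one can verify that the scaled Bregman divergence reduces to $D_{KL}(p\,\Vert\,q)$ for any base $m$, which for $p=\mathcal{N}(0,1)$ and $q=\mathcal{N}(0.5,1)$ equals $0.5^2/2 = 0.125$:

```python
import numpy as np

def npdf(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def scaled_bregman_hat(r_pm, r_qm, xm, f, fprime):
    """Monte-Carlo estimate of B_f(p, q | m) from base samples xm ~ m,
    using density-ratio estimators r_pm ~ p/m and r_qm ~ q/m."""
    a, b = r_pm(xm), r_qm(xm)
    return np.mean(f(a) - f(b) - fprime(b) * (a - b))

f = lambda t: t * np.log(t)
fprime = lambda t: np.log(t) + 1.0

rng = np.random.default_rng(0)
xm = rng.normal(0.25, 1.2, 20000)  # base samples from m = N(0.25, 1.2^2)
r_pm = lambda x: npdf(x, 0.0, 1.0) / npdf(x, 0.25, 1.2)
r_qm = lambda x: npdf(x, 0.5, 1.0) / npdf(x, 0.25, 1.2)
est = scaled_bregman_hat(r_pm, r_qm, xm, f, fprime)  # close to KL = 0.125
```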
\subsection{Training}
Training the generator function $G_\phi$ using scaled-Bregman divergence (shown in Algorithm \ref{alg:training}) alternates the following two steps until convergence.
\paragraph{Step 1:} Estimate the density ratios $r_{p/m}$ and $r_{q_\phi/m}$ using either the adversarial discriminator-based method (as in a GAN) or the non-adversarial MMD-based method.
\paragraph{Step 2:} Train the generator by optimizing
\begin{align}
\label{eq:train}
\min_\phi \hat{B}_f(p_x,q_\phi|\tilde M).
\end{align}
\begin{algorithm}[!t]
\SetAlgoNoLine
\While{ not converged }{
\textbf{Step 1 } Estimate the density ratios $r_{p/m}$ and $r_{q_\phi/m}$,
using either the adversarial discriminator-based (GAN-like) or\\ non-adversarial two-sample-test-based (MMD-like) method. \\
\textbf{Step 2 } Train the generator by optimizing
\\
${\hspace{6.3cm}\min_\phi \hat{B}_f(p_x,q_\phi|\tilde M)}$.
}
\caption{Training Algorithm of BreGMN}
\label{alg:training}
\end{algorithm}
\section{Experiments}
\label{sec:results}
In this section we present a detailed evaluation of our proposed scaled-Bregman divergence based method for training generative models. Since most generative models aim to learn the data generating distribution, our method can be applied generically to train a large number of generative models, from simple to complex and deep. We demonstrate this by training a range of such models with our method.
\subsection{Synthetic data: support mismatch}
In this experiment, we evaluate our method in the regime where $p$ and $q_\phi$ have mismatched support, in order to validate the intuition that the noisy base measure $\tilde M$ aids learning in this setting. As shown in Figure \ref{fig:syn}(b), we start by training a simple probabilistic model (blue) to match the data distribution (red). The data distribution is a simple uniform distribution with finite support. Our model is therefore parameterized as a uniform distribution with one trainable parameter.
Figure \ref{fig:syn}(a) shows the effect of training this model with $f$-divergence and with our method. Clearly, neither the KL nor the JS divergence is able to provide any meaningful gradients for the training of this simple model. Our scaled-Bregman based training method, however, is indeed able to learn the model. Interestingly, as Figure \ref{fig:syn} shows, the choice of the function $f$ affects the empirical convergence rate of our method, with $f(t) = -\log t$ converging much faster than $f(t) = t^2$.
\begin{figure}[!tb]
\centering
\caption{$f$-divergence and scaled-Bregman divergence based training on a synthetic dataset of two non-overlapping 2D distributions.
\label{fig:syn}}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{2d.png}
\caption{Measure vs Training}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{syndata.png}
\caption{2D distributions.}
\end{subfigure}
\end{figure}
\subsection{Non-adversarial generative model}
Our training procedure is not intrinsically adversarial, i.e.\ it is not a saddle-point problem when the MMD-based DRE is used. To demonstrate the capability of the proposed approach for non-adversarial training, in this section we apply the MMD-based DRE to train a generative model on the MNIST dataset in a non-adversarial fashion. As shown in Figure \ref{fig:mnist}(a), our method can be used to successfully train generative models of a simple dataset without using adversarial techniques. While the sample quality is not optimal (better sample quality may be achievable by carefully tuning the kernel in the MMD criterion), the training procedure is remarkably stable, as shown in Figure \ref{fig:mnist}(b).
\begin{figure}[!tb]
\centering
\caption{Non-adversarial Training using scaled-Bregman Divergence and MMD based DRE.
\label{fig:mnist}}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{mnist.png}
\caption{Samples from the generator.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{mnist_opt.png}
\caption{Generator loss steadily decreases.}
\end{subfigure}
\end{figure}
\subsection{Adversarial generative model}
\begin{figure}[!tb]
\centering
\caption{Random samples from Adversarial BreGMN models (after 5 Epochs)
\label{fig:adv_samples}}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{cifar.png}
\caption{CIFAR10}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\linewidth]{celeb.png}
\caption{CELEB A}
\end{subfigure}
\end{figure}
Training generative models on complicated high-dimensional datasets such as those of natural images is preferably done with adversarial techniques, since they tend to yield better sample quality. One straightforward way to give our method this adversarial advantage is to use a discriminator-based DRE. To evaluate our training method on adversarial generation, in this section we compare the Fr\'echet Inception Distance (FID) \citep{fid} of MMD-GAN \citep{mmdgan} and GAN \citep{GoodfellowPMXWOCB14} against BreGMN on the CIFAR10 and CelebA datasets. FID measures the distance between the data and the model distributions by embedding their samples into a certain higher layer of a pre-trained Inception Net. We used a 4-layer DCGAN \citep{dcgan} architecture for all experiments and averaged the FID over multiple runs. Noise drawn from $\mathcal{N}(0,0.001)$ is used across all experiments. MMD-GAN trains a generator network using the maximum mean discrepancy \citep{mmd}, where the kernel is trained in an adversarial fashion. As shown in Table \ref{tab:celeb}, both BreGMN and GAN perform better than MMD-GAN in terms of sample quality. While BreGMN performs slightly better than GAN on average, their sample qualities are comparable.
\begin{table}[!tb]
\centering
\caption{Sample quality (measured by FID; lower is better) of BreGMN compared to GANs. }
\label{tab:celeb}
\begin{tabular}{@{}lcccc@{}}
\toprule
\multicolumn{1}{c}{\textbf{Architecture}} & \textbf{Dataset} & \textbf{MMD-GAN} & \textbf{GAN} & \textbf{BreGMN} \\ \midrule
\multicolumn{1}{l}{\textbf{DCGAN}} & \multicolumn{1}{c}{\textbf{CIFAR10}} & \multicolumn{1}{c}{40 } & \multicolumn{1}{c}{26.82 } & \multicolumn{1}{c}{\textbf{26.62}} \\
\multicolumn{1}{l}{\textbf{DCGAN}} & \multicolumn{1}{l}{\textbf{CelebA}} & \multicolumn{1}{c}{41.10} & \multicolumn{1}{c}{30.97 } & \multicolumn{1}{c}{\textbf{30.84}} \\ \bottomrule
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conc}
In this work, we proposed scaled-Bregman divergence based
generative models and identified base measures for them that facilitate effective training.
We showed that the proposed approach provides an advantageous training
criterion
for modeling the data distribution with deep generative networks in comparison to $f$-divergence based training methods. In particular, unlike $f$-divergence based training, our method does not fail to train even when the model and the data distributions initially have no overlapping support.
A future direction of research concerns the choice of the base measure and the effect of the noise level on the optimization. Another, more theoretical direction is to study and establish the relationship between the scaled-Bregman divergence and IPMs.
\end{document}
\section{Case Study}
\par From the experimental results presented in Section 5.2, the best performance is obtained when we select the nodes with the top 3\% of all centralities as central nodes and apply 1NN to train a model for malware detection. Therefore, we choose these three parameters\footnote{Classification model: 1NN, Centrality measure: all centrality, and N = 3.} to commence our case studies.
\subsection{Detection of Zero-Day Malware}
\par In our first case study, we evaluate the ability of \emph{IntDroid} on zero-day malware detection. To achieve this goal, we treat the nodes with the top 3\% of all centralities as central nodes and leverage our 8,253 samples used in Section 5 to train a classifier using the 1NN algorithm.
Next, we crawl 5,000 apps from the GooglePlay market and feed them to the trained 1NN classifier. Among these apps, \emph{IntDroid} reports 32 as malicious. To validate whether these 32 apps are indeed malware, we upload them to VirusTotal \cite{virustotal} to analyze each of them. Among these 32 samples, 27 are reported as malicious by at least one AntiVirus scanner. We then conduct a further investigation (\textit{i.e.,} obtaining more detailed information from the GooglePlay official website) of these 27 apps; the results show that one of them has been downloaded and installed by more than 10 million users. This app has also been flagged as malware by 6 AntiVirus scanners in VirusTotal \cite{virustotal}, one of which is \emph{Symantec Mobile Insight}.
\par We manually inspect the remaining 5 apps. Our manual checks show that 1 of these 5 apps is actually malware, as it exhibits highly suspicious behaviors. The app is a music downloader that requests 6 dangerous permissions and reads battery and memory information of the device. Moreover, the app collects a large amount of sensitive data (\textit{i.e.,} Android ID, serial number, IMSI, IMEI, phone number, contacts, emails, and location) and can even execute shell code.
In conclusion, \emph{IntDroid} is able to discover 28 zero-day malware samples\footnote{More detailed behaviors and information are available at: https://github.com/IntDroid/IntDroid} among 5,000 GooglePlay apps; 1 of them has been downloaded and installed by more than 10 million users, and 1 of them is not reported as malware by existing tools \cite{virustotal}.
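\par The central-node selection underlying these case studies can be sketched as follows (our own illustration using degree centrality only, whereas \emph{IntDroid} combines several centrality measures):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality of a call graph given as (caller, callee) pairs."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    return {fn: d / (n - 1) for fn, d in deg.items()}

def top_percent_central(edges, pct=3.0):
    """Nodes whose centrality lies in the top pct% (N = 3 in IntDroid)."""
    cent = degree_centrality(edges)
    k = max(1, round(len(cent) * pct / 100.0))
    return set(sorted(cent, key=cent.get, reverse=True)[:k])

# Toy call graph: 'main' calls 50 functions, so it dominates the top-3% set.
edges = [("main", f"f{i}") for i in range(50)] + [("f0", "f1"), ("f2", "f3")]
central = top_percent_central(edges)
```

Each app is then represented by the intimacies between sensitive API calls and this central-node set, and the 1NN classifier is trained on those feature vectors.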
\subsection{Detection of Malware Families}
\par There is another important aspect (the balance of malware families in the experimental dataset \cite{rossow2012prudent, arp2014drebin}) that should be considered when evaluating the detection performance of a malware detection method. Suppose that the number of samples of several malware families is much larger than that of other families; then the detection ability of the trained model will mainly depend on these families. In other words, a malware detection method should maintain high effectiveness across different families. To evaluate the performance of \emph{IntDroid} on malware detection for different families, we conduct a case study by detecting the 20 largest malware families in AndroZoo \cite{allix2016androzoo}. Some malicious APK files in AndroZoo \cite{allix2016androzoo} are labeled with their corresponding families using the method in \cite{hurier2017euphony}. We select the top 20 families as our test objectives. We randomly download 1,000 samples for each family, for a total of 20,000 downloaded samples. The family names and the average sample size for each family can be found in Table \ref{tab:FamilyAveSize}.
\begin{table}[htbp]
\caption{The average size (MB) of top 20 families in our dataset.}
\centering
\begin{tabular}{ccc|ccc}
\hline
\textbf{ID} & \textbf{Family} & \textbf{Ave Size} & \textbf{ID} & \textbf{Family} & \textbf{Ave Size} \\
\hline
A & admogo & 2.61 & K & leadbolt & 2.15 \\
B & adwo & 2.64 & L & plankton & 2.15 \\
C & airpush & 2.48 & M & smspay & 5.84 \\
D & appsgeyser & 0.53 & N & startapp & 2.69 \\
E & artemis & 3.41 & O & umeng & 2.38 \\
F & deng & 4.37 & P & utchi & 1.45 \\
G & domob & 1.95 & Q & waps & 2.47 \\
H & droidkungfu & 1.94 & R & wapsx & 2.41 \\
I & gingermaster & 4.11 & S & wooboo & 1.30 \\
J & kuguo & 3.38 & T & youmi & 2.11 \\
\hline
& Total & 2.62 & & & \\
\hline
\end{tabular}%
\label{tab:FamilyAveSize}%
\end{table}%
\par We leverage our 3,988 benign samples used in Section 5 and these 20,000 malware samples to commence our study. The detection performance of \emph{IntDroid} for each family is illustrated in Figure \ref{fig:malware_families_Acc}. \emph{IntDroid} maintains above 91\% accuracy for all 20 malware families. In particular, nine families show a detection rate of more than 98\% and two families achieve an accuracy of more than 99\%. The average accuracy of \emph{IntDroid} is 96.8\% when testing on these 23,988 samples; such high effectiveness suggests that \emph{IntDroid} is well suited to detecting malware across different families.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.48\textwidth]{pdf/MalwareFamilies_Acc.pdf}}
\caption{Detection performance of \emph{IntDroid} for 20 different malware families.}
\label{fig:malware_families_Acc}
\end{figure}
\subsection{Sensitive API calls in Central Nodes}
\par \emph{IntDroid} needs to obtain central nodes within a function call graph before conducting classification. In this part, we study the distribution of sensitive API calls in central nodes between benign apps and malicious apps.
\begin{table*}[htbp]
\centering
\caption{Sensitive API calls with the top 10 absolute values of distribution difference between benign and malicious apps.}
\begin{tabular}{c|cc|cc|c}
\hline
Sensitive API calls in Central Nodes & \#Benign Samples & Ratio1 & \#Malicious Samples & Ratio2 & |Ratio1 - Ratio2| \\
\hline
ConnectivityManager.getActiveNetworkInfo() & 273/3988 & 6.85\% & 10435/24265 & 43.00\% & 36.16\% \\
RelativeLayout.init() & 181/3988 & 4.54\% & 7212/24265 & 29.72\% & 25.18\% \\
Environment.getExternalStorageDirectory() & 504/3988 & 12.64\% & 9097/24265 & 37.49\% & 24.85\% \\
TelephonyManager.getDeviceId() & 69/3988 & 1.73\% & 6221/24265 & 25.64\% & 23.91\% \\
Environment.getExternalStorageState() & 337/3988 & 8.45\% & 7468/24265 & 30.78\% & 22.33\% \\
ViewPager.onTouchEvent() & 1820/3988 & 45.64\% & 6569/24265 & 27.07\% & 18.56\% \\
ImageView.init() & 259/3988 & 6.49\% & 5901/24265 & 24.32\% & 17.82\% \\
ViewPager.onInterceptTouchEvent() & 1472/3988 & 36.91\% & 4729/24265 & 19.49\% & 17.42\% \\
Resources.getString() & 1201/3988 & 30.12\% & 3189/24265 & 13.14\% & 16.97\% \\
LinearLayout.init() & 329/3988 & 8.25\% & 6053/24265 & 24.95\% & 16.70\% \\
\hline
\end{tabular}%
\label{tab:28253_ratio}%
\end{table*}%
\par We leverage the 8,253 samples (3,988 benign samples and 4,265 malicious samples) used in Section 5 and the 20,000 malicious samples used in Section 6.2 as our test dataset. The total numbers of benign and malicious samples in this case study are therefore 3,988 and 24,265, respectively. Given an app, we first collect the central nodes by centrality analysis. After obtaining all central nodes of the 28,253 samples, we then analyze whether these central nodes contain sensitive API calls and perform statistical analysis on these sensitive API calls. Table \ref{tab:28253_ratio} lists the 10 sensitive API calls whose absolute value of distribution difference between benign apps and malicious apps is largest. From Table \ref{tab:28253_ratio} we can observe that the number of sensitive API calls in benign central nodes and malicious central nodes differs. For instance, the number of benign samples whose central nodes contain \emph{TelephonyManager.getDeviceId()} is 69, accounting for only 1.73\% of the total number of benign samples. However, there are 6,221 malicious samples where \emph{TelephonyManager.getDeviceId()} is a central node, occupying 25.64\% of the total number of malicious samples.
\par \emph{IntDroid} considers two factors when calculating the intimacy between sensitive API calls and central nodes: the number of reachable paths and the average path distance. Average path distance and intimacy are negatively correlated: the longer the distance, the lower the intimacy. When a sensitive API call is itself a central node, its average path distance is the smallest possible value, 0, and the corresponding intimacy will be large. In addition, as can be seen from Figure \ref{fig:intimacy_distribution} and Table \ref{tab:ANOVA}, the average intimacy between \emph{TelephonyManager.getDeviceId()} and central nodes in malicious apps is more than four times larger than in benign apps. This result is consistent with the findings of this case study.
\section{Conclusion}
In this paper, we propose to use contrastive learning to resist code obfuscations of Android malware.
To demonstrate the effectiveness of contrastive learning, we take the Android malware classification as an example.
Specifically, we implement an obfuscation-resilient system (\textit{i.e.,} \emph{IFDroid}) and the extensive evaluation results show that \emph{IFDroid} is superior to seven state-of-the-art Android malware classification systems (\textit{i.e.,} \emph{Dendroid}~\cite{suarez2014dendroid}, \emph{Apposcopy}~\cite{feng2014apposcopy}, \emph{DroidSIFT}~\cite{zhang2014droidsift}, \emph{MudFlow}~\cite{avdiienko2015mudflow}, \emph{DroidLegacy}~\cite{deshotels2014droidlegacy}, \emph{Astroid}~\cite{feng2016astroid}, and \emph{FalDroid}~\cite{fan2018android}).
Moreover, when analyzing 8,112 obfuscated malware samples, \emph{IFDroid} can correctly classify 98.2\% of them into their families.
This result also suggests that \emph{IFDroid} can achieve obfuscation-resilient malware analysis.
\section{Definitions}
\par Before introducing our proposed method, we first describe the formal definitions that we use throughout the paper.
\subsection{Centrality}
\par Centrality concepts were first developed in social network analysis to quantify the importance of a node in a network. Centrality measures are very useful for network analysis, and many works have applied them in different areas, such as biological networks~\cite{jeong2001lethality}, co-authorship networks~\cite{liu2005co}, transportation networks~\cite{guimera2005worldwide}, criminal networks~\cite{coles2001s}, affiliation networks~\cite{faust1997centrality}, etc. Several definitions of centrality in a social network have been proposed, for example:
\par \textbf{\emph{Definition 1:}} \emph{Degree centrality}~\cite{freeman1978centrality} of a node is the fraction of nodes it is connected to. The degree centrality values are normalized by dividing by the maximum possible degree in a graph, $N-1$, where $N$ is the number of nodes in the graph.
\par \centerline{$C_{D}(v)=\frac{deg(v)}{N-1}$}
\par Note that $deg(v)$ is the degree of node $v$.
\par \textbf{\emph{Definition 2:}} \emph{Katz centrality}~\cite{katz1953new} is a generalization of degree centrality. Degree centrality measures the number of direct neighbors, and Katz centrality measures the number of all nodes that can be connected through a path, while the contributions of distant nodes are penalized. Let \emph{A} be the adjacency matrix of a graph under consideration.
\par \centerline{$C_{K}(i)=\sum\limits_{k=1}^{\infty }\sum\limits_{j=1}^{n}\alpha ^{k}(A^{k})_{ij}$}
\par Note that the above definition uses the fact that the element at location $(i,j)$ of $A^{k}$ reflects the total number of $k$ degree connections between nodes $i$ and $j$. The value of the attenuation factor $\alpha$ has to be chosen such that it is smaller than the reciprocal of the absolute value of the largest eigenvalue of $A$.
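\par As a concrete illustration of Definition 2, the following Python sketch evaluates a truncated version of the Katz series on a toy path graph. The graph, the attenuation factor, and the truncation depth are illustrative choices for this sketch, not values used by \emph{IntDroid}.

```python
# Hypothetical sketch: truncated power-series Katz centrality for a small
# graph, following Definition 2. The attenuation factor alpha must be smaller
# than the reciprocal of the largest eigenvalue of A for the series to
# converge; here we simply truncate the sum at k_max terms.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def katz_centrality(A, alpha=0.1, k_max=20):
    n = len(A)
    power = A            # holds A^k, starting from A^1
    scores = [0.0] * n
    for k in range(1, k_max + 1):
        for i in range(n):
            # alpha^k weighted count of length-k walks starting at node i
            scores[i] += (alpha ** k) * sum(power[i][j] for j in range(n))
        power = mat_mul(power, A)
    return scores

# Path graph 0 - 1 - 2 (symmetric adjacency matrix).
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
scores = katz_centrality(A)
# The middle node reaches every other node in one hop, so it scores highest.
print(scores)
```

The two endpoint nodes receive identical scores by symmetry, while the middle node dominates, matching the intuition that Katz centrality rewards nodes that reach many others through short walks.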
\par \textbf{\emph{Definition 3:}} \emph{Closeness centrality}~\cite{freeman1978centrality} of a node is the reciprocal of the sum of the shortest-path distances between the node and all other nodes in the graph. Its normalized form is generally given by the previous value multiplied by $N-1$, where $N$ is the number of nodes in the graph.
\par \centerline{$C_{C}(v)=\frac{N-1}{\sum _{t\neq v} d(t,v)}$}
\par Note that $d(t,v)$ is the distance between nodes $v$ and $t$.
\par \textbf{\emph{Definition 4:}} \emph{Harmonic centrality}~\cite{marchiori2000harmony} reverses the sum and reciprocal operations in the definition of closeness centrality.
\par \centerline{$C_{H}(v)=\frac{\sum \limits_{t\neq v}\frac{1}{d(t,v)}}{N-1}$}
\par Note that $d(t,v)$ is the distance between nodes $v$ and $t$ and $N$ is the number of nodes in the graph.
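\par To make Definitions 1, 3, and 4 concrete, the following sketch computes degree, normalized closeness, and harmonic centrality on a tiny undirected graph, using breadth-first search for shortest-path distances. The node names and the example graph are illustrative only.

```python
# Hedged sketch of Definitions 1, 3, and 4 on a toy undirected graph.
from collections import deque

graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
N = len(graph)

def distances(src):
    # BFS shortest-path distances from src to every reachable node.
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def degree_centrality(v):            # Definition 1
    return len(graph[v]) / (N - 1)

def closeness_centrality(v):         # Definition 3 (normalized form)
    d = distances(v)
    return (N - 1) / sum(d[t] for t in graph if t != v)

def harmonic_centrality(v):          # Definition 4
    d = distances(v)
    return sum(1 / d[t] for t in graph if t != v) / (N - 1)

# Node "c" is adjacent to every other node, so it maximizes all three.
for f in (degree_centrality, closeness_centrality, harmonic_centrality):
    print(f.__name__, {v: round(f(v), 3) for v in graph})
```

On this graph, node \emph{c} attains the maximum value 1.0 under all three measures, illustrating how the different definitions agree on a clearly central node while ranking peripheral nodes differently.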
\subsection{Intimacy}
\par On the one hand, if there are more reachable paths between two nodes, the `communication' between these two nodes can be considered frequent. On the other hand, if the distance between two nodes is shorter, the `communication' between them can be considered easy. In this paper, if the `communication' between two nodes is frequent and easy, they will be treated as a close pair.
\par \textbf{\emph{Definition 5:}} Given a graph \emph{G} = (\emph{V}, \emph{E}), and two nodes \emph{a}, \emph{b} $\in$ \emph{V}, then the \emph{intimacy} between \emph{a} and \emph{b} is defined as:
\par \centerline{$intimacy(a, b) = \frac{n}{ad(a, b)+1}$}
\par Note that \emph{n} denotes the number of reachable paths between \emph{a} and \emph{b}, \emph{ad(a, b)} denotes the average distance of these reachable paths.
For instance, suppose that there are two reachable paths between \emph{a} and \emph{b}: $a$ $\rightarrow$ $p$ $\rightarrow$ $q$ $\rightarrow$ $b$ and $a$ $\rightarrow$ $m$ $\rightarrow$ $b$. Then the number of reachable paths $n$ is 2 and the average distance $ad(a, b)$ is (3+2)/2=2.5. Therefore, the intimacy between $a$ and $b$ can be computed as 2/(2.5+1)=0.57.
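\par The worked example above can be sketched in Python: enumerate the simple paths between two nodes by depth-first search, then combine the path count with the average edge-count distance as in Definition 5. The graph below reproduces the example; the enumeration strategy is an illustrative choice.

```python
# Hedged sketch of Definition 5 on a small directed graph.
def simple_paths(graph, src, dst, path=None):
    # Yield every simple (cycle-free) path from src to dst.
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, ()):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def intimacy(graph, a, b):
    lengths = [len(p) - 1 for p in simple_paths(graph, a, b)]  # edges per path
    if not lengths:
        return 0.0
    n = len(lengths)
    avg_dist = sum(lengths) / n
    return n / (avg_dist + 1)

# Example from the text: a->p->q->b (distance 3) and a->m->b (distance 2).
graph = {"a": ["p", "m"], "p": ["q"], "q": ["b"], "m": ["b"]}
print(round(intimacy(graph, "a", "b"), 2))  # 2 / (2.5 + 1) = 0.57
```

Running the sketch reproduces the value 0.57 computed by hand above.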
\section{Discussions}
\subsection{Why is IFDroid obfuscation-resilient?}
The reasons are mainly three-fold.
First, \emph{IFDroid} uses sensitive API calls to form the features and API calls are not obfuscated.
Second, \emph{IFDroid} applies centrality analysis, which is robust against obfuscations, to preserve the graph details.
Last and most important, the learned encoder by contrastive learning in \emph{IFDroid} can extract robust features from generated images.
The goal of contrastive learning is to maximize the agreement between positive data and minimize the agreement between negative data.
Actually, the obfuscated malware can be regarded as one positive sample of the original malware since the inherent program semantics do not change after obfuscations.
Therefore, when we use contrastive learning to learn the encoder, it can enlarge the similarity between obfuscated malware and original malware, making it possible to classify the obfuscated malware into its correct family.
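The contrastive objective described above can be sketched with an NT-Xent-style loss: the embedding of an obfuscated sample is pulled toward its original sample and pushed away from unrelated samples. The toy embedding vectors and temperature below are illustrative assumptions; the real encoder output is learned.

```python
# Hedged sketch of a contrastive (NT-Xent-style) loss; vectors are toy values.
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    # Minimizing this loss maximizes agreement with the positive sample
    # and minimizes agreement with the negatives.
    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

original   = [1.0, 0.2, 0.1]     # embedding of a malware sample (toy)
obfuscated = [0.9, 0.25, 0.1]    # its obfuscated variant: a positive pair
others     = [[0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]  # unrelated samples

loss_good = contrastive_loss(original, obfuscated, others)
loss_bad  = contrastive_loss(original, others[0], [obfuscated, others[1]])
print(loss_good < loss_bad)  # an aligned positive pair yields a smaller loss
```

The sketch shows why obfuscated malware acts as a natural positive sample: when its embedding stays close to the original's, the loss is small, which is exactly what training the encoder enforces.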
\subsection{Limitations}
To achieve efficient static analysis, we leverage Androguard \cite{desnos2011androguard} to extract the function call graph of a malware sample.
This graph is a context- and flow-insensitive call graph.
Moreover, malware samples can use reflection \cite{rastogi2013catch} to invoke sensitive API calls; in this case, we may miss the call relationships between these methods.
To mitigate the inaccuracies caused by our constructed call graph, we plan to use advanced program analysis~\cite{li2016droidra} to generate a more suitable call graph and achieve a balance between efficiency and effectiveness in classifying malware.
The sensitive API calls used in \emph{IFDroid} consist of 426 API calls that are highly correlated with malicious operations \cite{Liangyi2020Experiences}.
They represent only a small fraction of all sensitive API calls.
We plan to conduct statistical analysis to select more valuable sensitive API calls and use them to generate our images.
Although \emph{IFDroid} can resist certain advanced code obfuscations, it is vulnerable to packers.
These packers can hide the actual Dex code to protect apps.
To address this limitation, we can use some unpacker systems such as \emph{Dexhunter}~\cite{ zhang2015dexhunter} to recover the actual Dex files, then static analysis can be applied to extract call graph.
\section{Experimental Evaluation}
\par In this section, we aim to answer the following research questions:
\begin{itemize}
\item \emph{RQ1: What is the effectiveness of IFDroid on classifying Android malware without and with obfuscations?}
\item \emph{RQ2: Can IFDroid interpret the familial classification results?}
\item \emph{RQ3: What is the runtime overhead of IFDroid on Android malware familial classification?}
\end{itemize}
\subsection{Datasets and Metrics}
\begin{table}[htbp]
\footnotesize
\centering
\caption{\small{Descriptions of datasets used in our experiments}}
\begin{tabular}{|c|ccccc|}
\hline
Dataset & \#Family & \#Malware & MaxSize & MinSize & AveSize \\
\hline
dataset-I & 33 & 1,247 & 15.4MB & 12KB & 1.3MB \\
dataset-II & 36 & 8,407 & 36.2MB & 12KB & 2MB \\
\hline
\end{tabular}%
\label{tab:dataset}%
\end{table}%
We select two widely used ground truth datasets (\textit{i.e.,} dataset-I~\cite{zhou2012dissecting} and dataset-II~\cite{fan2018android}) as our experimental datasets to evaluate \emph{IFDroid}.
The first dataset is provided by the Genome project~\cite{zhou2012dissecting} and consists of 1,247 malware samples from 33 different families.
The second dataset is created by \emph{FalDroid}~\cite{fan2018android} and comprises 8,407 malware samples classified into 36 different families.
Table \ref{tab:dataset} lists the summary of these two datasets.
\begin{table}
\footnotesize
\centering
\caption{\small{Descriptions of metrics used in our experiments}}
\begin{tabular}{|m{2.3cm}<{\centering}|m{0.6cm}<{\centering}|m{4.3cm}<{\centering}|}
\hline
\textbf{Metrics}&\textbf{Abbr}&\textbf{Definition}\\
\hline
True Positive & \textbf{TP} & \#malware in family \emph{f} are correctly classified into family \emph{f} \\
True Negative & \textbf{TN} & \#malware not in family \emph{f} are correctly not classified into family \emph{f} \\
False Positive & \textbf{FP} & \#malware not in family \emph{f} are incorrectly classified into family \emph{f} \\
False Negative & \textbf{FN} & \#malware in family \emph{f} are incorrectly not classified into family \emph{f} \\
True Positive Rate & \textbf{TPR} & TP/(TP+FN)\\
False Negative Rate & \textbf{FNR} & FN/(TP+FN)\\
True Negative Rate & \textbf{TNR}& TN/(TN+FP)\\
False Positive Rate & \textbf{FPR} & FP/(TN+FP)\\
Precision & \textbf{P} & TP/(TP+FP)\\
Recall & \textbf{R} & TP/(TP+FN)\\
F-measure & \textbf{F1} & 2*P*R/(P+R)\\
\hline
Classification Accuracy & \textbf{CA} & percentage of malware which are correctly classified into their families\\
\hline
\end{tabular}
\label{tab:metrics}
\end{table}
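The per-family metrics in Table \ref{tab:metrics} follow directly from the raw TP/TN/FP/FN counts; the sketch below computes them for one hypothetical family (the counts are invented for illustration).

```python
# Minimal sketch of the metrics in Table: TPR, FPR, precision, recall, F1,
# computed from raw confusion-matrix counts for a single family.
def family_metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)        # identical to TPR by definition
    return {
        "TPR": tp / (tp + fn),
        "FNR": fn / (tp + fn),
        "TNR": tn / (tn + fp),
        "FPR": fp / (tn + fp),
        "P":   precision,
        "R":   recall,
        "F1":  2 * precision * recall / (precision + recall),
    }

# Hypothetical counts for one family.
m = family_metrics(tp=95, tn=890, fp=5, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```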
To evaluate \emph{IFDroid}, we conduct experiments by performing ten-fold cross validations.
In other words, we first divide our dataset into ten subsets; nine of them are selected as the training set and the remaining subset is used for testing.
We repeat this ten times and report the average classification results.
Furthermore, to measure the effectiveness of \emph{IFDroid}, we leverage certain widely used metrics (Table \ref{tab:metrics}) to present the classification results.
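The ten-fold protocol above can be sketched as follows: shuffle the sample indices, partition them into ten folds, and iterate so that each fold serves once as the test set. The sample count matches dataset-II; the seed and splitting scheme are illustrative choices.

```python
# Sketch of ten-fold cross-validation splitting; accuracy aggregation over
# folds would follow the same loop.
import random

def ten_fold_splits(n_samples, seed=0):
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]   # ten disjoint folds
    for i in range(10):
        test = folds[i]
        train = [j for k, f in enumerate(folds) if k != i for j in f]
        yield train, test

n = 8407  # size of dataset-II
splits = list(ten_fold_splits(n))
# Every sample appears in exactly one test fold across the ten runs.
covered = sorted(j for _, test in splits for j in test)
print(covered == list(range(n)))
```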
\begin{table}[htbp]
\footnotesize
\centering
\caption{\small{Classification accuracy of \emph{IFDroid} and seven state-of-the-art comparative systems on dataset-I~\cite{zhou2012dissecting}}}
\begin{tabular}{|c|c|}
\hline
Baseline Approach & Classification Accuracy \\
\hline
Dendroid~\cite{suarez2014dendroid} & 0.942 \\
Apposcopy~\cite{feng2014apposcopy} & 0.900 \\
DroidSIFT~\cite{zhang2014droidsift} & 0.930 \\
MudFlow~\cite{avdiienko2015mudflow} & 0.881 \\
DroidLegacy~\cite{deshotels2014droidlegacy} & 0.929 \\
Astroid~\cite{feng2016astroid} & 0.938 \\
FalDroid~\cite{fan2018android} & 0.972 \\
\textbf{IFDroid} & \textbf{0.984} \\
\hline
\end{tabular}%
\label{tab:dataset1_result}%
\end{table}%
\subsection{RQ1: Effectiveness}
\subsubsection{\textbf{Effectiveness on Classifying General Malware}}
First, we conduct evaluations to check the ability of \emph{IFDroid} on classifying general Android malware.
We first use a widely used dataset (\textit{i.e.,} dataset-I~\cite{zhou2012dissecting}) to present the comparative results of \emph{IFDroid} and seven state-of-the-art related systems.
These systems include: 1) \emph{Dendroid}~\cite{suarez2014dendroid} applies text mining techniques to analyze the code structures of Android malware and classify them into corresponding families;
2) \emph{Apposcopy}~\cite{feng2014apposcopy} performs program analysis to extract both data-flow and control-flow information of malware samples to classify them;
3) \emph{DroidSIFT}~\cite{zhang2014droidsift} pays attention to construct API dependency graph by analyzing the program semantics to classify Android malware;
4) \emph{MudFlow}~\cite{avdiienko2015mudflow} extracts the source-and-sink pairs of malware samples and regards them as features to classify malware;
5) \emph{DroidLegacy}~\cite{deshotels2014droidlegacy} conducts app partition to divide the malware sample into sub-modules and labels the corresponding family by analyzing the malicious sub-module;
6) \emph{Astroid}~\cite{feng2016astroid} synthesizes a maximally suspicious common subgraph of each malware family as a signature to classify malware;
and 7) \emph{FalDroid}~\cite{fan2018android} analyzes frequent subgraphs to represent the common behaviors of each malware family and uses them to perform familial classification.
\begin{table}[htbp]
\footnotesize
\centering
\caption{\small{Familial classification results of \emph{FalDroid} (\emph{Fal} for short) and \emph{IFDroid} on dataset-II~\cite{fan2018android}, \emph{wo} and \emph{wi} denote \emph{IFDroid} without and with contrastive learning, respectively}}
\begin{tabular}{|c|m{0.35cm}<{\centering}m{0.35cm}<{\centering}m{0.35cm}<{\centering}|m{0.2cm}<{\centering}m{0.2cm}<{\centering}m{0.2cm}<{\centering}|m{0.35cm}<{\centering}m{0.35cm}<{\centering}m{0.35cm}<{\centering}|}
\hline
\multirow{2}{*}{Family} & \multicolumn{3}{c|}{TPR (\%)} & \multicolumn{3}{c|}{FPR (\%)} & \multicolumn{3}{c|}{F1 (\%)} \\
& \emph{Fal} & \emph{wo} & \emph{wi} & \emph{Fal} & \emph{wo} & \emph{wi} & \emph{Fal} & \emph{wo} & \emph{wi} \\
\hline
adrd & 100 & 100 & 100 & 0 & 0 & 0 & 100 & 97.9 & 98.9 \\
adwo & 89.6 & 92.6 & 95.9 & 0.3 & 0.4 & 0.4 & 90.9 & 91.3 & 93.5 \\
airpush & 72.4 & 71.1 & 85.5 & 0.2 & 0.1 & 0.1 & 74.3 & 78.8 & 89 \\
anserverbot & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
basebridge & 92.7 & 95.4 & 96 & 0.8 & 0.8 & 0.7 & 87.1 & 87.7 & 89.3 \\
boqx & 53.1 & 46.9 & 65.3 & 0.1 & 0.2 & 0.2 & 64.2 & 52.9 & 66.7 \\
boxer & 85.3 & 90.5 & 91.6 & 0.3 & 0.2 & 0.1 & 81 & 86.9 & 89.7 \\
clicker & 100 & 94.4 & 97.2 & 0 & 0 & 0 & 98.7 & 95.8 & 97.2 \\
dowgin & 94.7 & 96.5 & 97.6 & 0.6 & 0.4 & 0.3 & 94.5 & 96.7 & 97.6 \\
droiddreamlight & 88.1 & 95 & 97 & 0.1 & 0.1 & 0 & 91.3 & 95 & 97.5 \\
droidkungfu & 95.8 & 97.6 & 98 & 0.9 & 0.2 & 0 & 93.3 & 98 & 98.9 \\
droidsheep & 100 & 100 & 100 & 0 & 0 & 0 & 100 & 100 & 100 \\
fakeangry & 50 & 62.5 & 62.5 & 0 & 0 & 0 & 66.7 & 74.1 & 76.9 \\
fakedoc & 99.3 & 100 & 100 & 0 & 0 & 0 & 99 & 99 & 100 \\
fakeinst & 98 & 98.8 & 99 & 0.5 & 0.1 & 0.1 & 97.9 & 99.1 & 99.3 \\
fakeplay & 88.4 & 97.7 & 100 & 0 & 0 & 0 & 91.6 & 98.8 & 100 \\
geinimi & 100 & 91.3 & 95.7 & 0 & 0 & 0 & 99.5 & 91.3 & 97.8 \\
gingermaster & 91.4 & 96.6 & 96.8 & 0.4 & 0.1 & 0.1 & 91.7 & 97.6 & 97.7 \\
golddream & 93.8 & 91.1 & 93.7 & 0 & 0.1 & 0.1 & 95.5 & 92.3 & 93.7 \\
iconosys & 100 & 100 & 100 & 0 & 0 & 0 & 98.7 & 99.7 & 100 \\
imlog & 100 & 92.7 & 95.1 & 0 & 0 & 0 & 100 & 96.2 & 97.5 \\
jsmshider & 100 & 100 & 100 & 0 & 0 & 0 & 97.8 & 100 & 100 \\
kmin & 99.2 & 99.2 & 99.6 & 0 & 0 & 0 & 99 & 99.2 & 99.4 \\
kuguo & 93.6 & 93.9 & 94.4 & 0.4 & 0.4 & 0.3 & 92 & 92.6 & 93.5 \\
lovetrap & 100 & 100 & 100 & 0 & 0 & 0 & 92.7 & 100 & 100 \\
mobiletx & 100 & 100 & 100 & 0 & 0 & 0 & 100 & 100 & 100 \\
pjapps & 91.5 & 97.6 & 97.6 & 0 & 0 & 0 & 93.8 & 98.2 & 98.2 \\
plankton & 99 & 98.9 & 99.6 & 0.1 & 0.4 & 0.2 & 99 & 97.9 & 98.8 \\
smskey & 99.1 & 93.7 & 94.6 & 0 & 0.1 & 0.1 & 98.7 & 93.3 & 93.8 \\
smsreg & 83.2 & 86.5 & 92.6 & 0.2 & 0.2 & 0.2 & 86.7 & 88.3 & 91.6 \\
steek & 100 & 100 & 100 & 0 & 0 & 0 & 100 & 100 & 100 \\
utchi & 100 & 100 & 100 & 0 & 0 & 0 & 100 & 100 & 100 \\
waps & 92.9 & 95.6 & 96.8 & 0.7 & 0.7 & 0.5 & 93 & 94.6 & 96.2 \\
youmi & 76.1 & 81.4 & 84.1 & 0.3 & 0.2 & 0.2 & 77.1 & 83.3 & 85.2 \\
yzhc & 100 & 100 & 100 & 0 & 0 & 0 & 98 & 100 & 100 \\
zitmo & 93.1 & 100 & 100 & 0.1 & 0 & 0 & 87.5 & 96.8 & 98.4 \\
\hline
ALL & 94.2 & 95.5 & \textbf{96.6} & 0.4 & 0.1 & \textbf{0.1} & 93.9 & 95.6 & \textbf{96.6} \\
\hline
\end{tabular}%
\label{tab:dataset2_result2}%
\end{table}%
Because most of these systems are not publicly available and the dataset used to produce their evaluation results is the same (\textit{i.e.,} dataset-I~\cite{zhou2012dissecting}), we directly adopt the classification accuracy of these seven systems from their papers~\cite{suarez2014dendroid, feng2014apposcopy, zhang2014droidsift, avdiienko2015mudflow, deshotels2014droidlegacy, feng2016astroid, fan2018android}.
Through results in Table \ref{tab:dataset1_result}, we see that \emph{IFDroid} is superior to other comparative systems.
This happens because \emph{IFDroid} not only considers the program semantics of malware samples but also extracts effective features by a learned encoder.
Since \emph{FalDroid} performs best among the seven comparative systems and is open-source, we only compare \emph{IFDroid} with \emph{FalDroid} on dataset-II.
To further examine the contribution of contrastive learning to general malware classification, we implement another system that does not first use contrastive learning to learn an encoder but directly trains the encoder and classifier together in the training phase.
The detailed classification results of 36 families are shown in Table \ref{tab:dataset2_result2}.
Through the results, we observe that \emph{IFDroid} performs better than \emph{FalDroid} on classifying most malware families.
For example, \emph{FalDroid} can correctly classify 93.1\% of malware samples in the zitmo family as ``zitmo'' while \emph{IFDroid} has the ability to classify them into ``zitmo'' without any inaccuracies.
Additionally, when we use contrastive learning to learn our encoder first, the classification performance is always better than without contrastive learning.
However, \emph{IFDroid} performs poorly in certain families, such as anserverbot family.
After our manual analysis, we find that malware samples in anserverbot family are all classified into basebridge family.
In practice, it is reasonable because malware samples in anserverbot family evolved from samples in basebridge family~\cite{zhou2012dissecting}.
\subsubsection{\textbf{Effectiveness on Classifying Obfuscated Malware}}
Next, we evaluate the effectiveness of \emph{IFDroid} on classifying obfuscated Android malware.
For this purpose, we use an automatic Android apps obfuscation tool (\emph{Obfuscapk}~\cite{aonzo2020obfuscapk}) that provides certain obfuscators including typical obfuscations (\textit{e.g.,} class rename and method rename) and some advanced code obfuscations (\textit{e.g.,} call indirection and goto).
Specifically, we first learn an encoder and train a classifier by using 8,407 samples in dataset-II.
After completing the training phase, we randomly select 700 malware samples from 36 families to ensure that each family has at least one sample.
Next, we obfuscate these samples by using \emph{Obfuscapk} to obtain the corresponding obfuscated apps.
\begin{table}[htbp]
\footnotesize
\centering
\caption{\small{Descriptions of 12 obfuscators used in our experiments}}
\begin{tabular}{|c|m{4.8cm}<{\centering}|}
\hline
Obfuscators & Descriptions \\
\hline
ClassRename & Change the package name and rename classes \\
FieldRename & Rename fields \\
MethodRename & Rename methods \\
ConstStringEncryption & Encrypt constant strings in code \\
AssetEncryption & Encrypt asset files \\
LibEncryption & Encrypt native libs \\
ResStringEncryption & Encrypt strings in resources \\
ArithmeticBranch & Insert junk code that is composed by arithmetic computations and a branch instruction \\
CallIndirection & Modify the control-flow graph without changing the code semantics \\
Goto & Modify the control-flow graph by adding two new nodes \\
Nop & Insert random nop instructions within every method implementation \\
Reorder & Change the order of basic blocks of the control-flow graph \\
\hline
\end{tabular}%
\label{tab:obfuscators}%
\end{table}%
We select a total of 12 different obfuscators provided by \emph{Obfuscapk}, including three rename obfuscators, four encryption obfuscators, and five advanced code obfuscators.
Descriptions of these obfuscators are presented in Table \ref{tab:obfuscators}.
In practice, \emph{Obfuscapk} cannot obfuscate some of our malware samples due to certain errors.
Fortunately, the failed samples account for only a small fraction.
To further evaluate the effectiveness of \emph{IFDroid}, we apply 12 obfuscators together to generate 100 more complex obfuscated malware.
Finally, we obtain 8,112 obfuscated samples in total.
We also conduct comparative evaluations with \emph{FalDroid} on classifying these obfuscated malware.
The comparative results are shown in Table \ref{tab:obfuscation}, which includes the true positive rate of \emph{FalDroid} and \emph{IFDroid}.
\begin{table}[htbp]
\centering
\footnotesize
\caption{\small{True positive rates of \emph{FalDroid} (\emph{Fal} for short) and \emph{IFDroid} on classifying obfuscated apps. \emph{wo} and \emph{wi} denote \emph{IFDroid} without and with contrastive learning, respectively}}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Obfuscators} & \emph{Fal} & \emph{wo} & \emph{wi} \\
\hline
\multirow{3}[2]{*}{Rename} & ClassRename & 100.0 & 100.0 & 100.0 \\
& FieldRename & 100.0 & 100.0 & 100.0 \\
& MethodRename & 100.0 & 100.0 & 100.0 \\
\hline
\multirow{4}[2]{*}{Encryption} & AssetEncryption & 95.1 & 96.1 & 100.0 \\
& ConstStringEncryption & 82.3 & 80.8 & 97.0 \\
& LibEncryption & 94.1 & 96.8 & 100.0 \\
& ResStringEncryption & 94.2 & 96.5 & 99.7 \\
\hline
\multirow{5}[2]{*}{Code} & ArithmeticBranch & 95.3 & 96.7 & 100.0 \\
& CallIndirection & 73.8 & 72.0 & 88.3 \\
& Goto & 93.5 & 96.6 & 100.0 \\
& Nop & 94.2 & 96.7 & 100.0 \\
& Reorder & 95.1 & 96.5 & 100.0 \\
\hline
\multicolumn{2}{|c|}{Apply 12 obfuscators} & 67.9 & 69.1 & 87.8 \\
\hline
\multicolumn{2}{|c|}{TPRs of 8,112 generated samples } & 92.1 & 93.1 & 98.2 \\
\hline
\end{tabular}%
\label{tab:obfuscation}%
\vspace{-1em}
\end{table}%
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.95\textwidth]{pdf/interpret1.pdf}}
\caption{\small{Visualization of 12 malware familial classification results, four malware samples are displayed for each family}}
\label{fig:visualization12family}
\end{figure*}
Since the typical rename obfuscations (\textit{i.e.,} class rename, method rename, and field rename) do not change the call relationships between functions in an app, both \emph{FalDroid} and \emph{IFDroid} can correctly classify all such obfuscated apps into their corresponding families.
Furthermore, regardless of the obfuscation type, \emph{IFDroid} performs better when using contrastive learning.
However, the true positive rate of \emph{IFDroid} without contrastive learning is only 72.0\% when we classify apps obfuscated by \emph{CallIndirection}.
After our in-depth analysis, we find that the number of nodes and edges in a function call graph change a lot after applying \emph{CallIndirection}.
For example, a sample\footnote{2ec363bc3bb7b04694ccaa28d0bbdc6c} originally has 8,135 nodes and 19,725 edges.
After it is obfuscated by \emph{CallIndirection}, the number of nodes becomes 35,396, and the number of edges increases to 58,407.
This drastic change causes \emph{IFDroid} to misclassify such samples.
However, when we adopt contrastive learning, the true positive rate of \emph{IFDroid} increases from 72.0\% to 88.3\%.
This result also shows that the encoder learned by contrastive learning can extract more robust features to classify malware.
Moreover, when classifying samples that are obfuscated by all 12 obfuscators together, \emph{IFDroid} can still correctly classify 87.8\% of the samples into their families.
However, the true positive rate of \emph{FalDroid} is only 67.9\%.
On average, \emph{IFDroid} achieves a true positive rate of 98.2\% on classifying all generated obfuscated samples, while \emph{FalDroid} can only maintain a true positive rate of 92.1\%.
\emph{\textbf{Summary:}
IFDroid achieves higher accuracy than Dendroid, Apposcopy, DroidSIFT, MudFlow, DroidLegacy, Astroid, and FalDroid on Android malware classification.
Using contrastive learning can not only improve the accuracy but also can enhance the robustness of IFDroid.}
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.42\textwidth]{pdf/visualization2.pdf}}
\caption{\small{A real-world malware (com.gp.mahjongg) sample's visualization of classification result}}
\label{fig:visualization2}
\vspace{-1em}
\end{figure}
\subsection{RQ2: Interpretability}
As aforementioned, since \emph{Gradient-weighted Class Activation Mapping++} (Grad-CAM++)~\cite{selvaraju2017gradcam, chattopadhay2018gradcam} performs better than the original Grad-CAM and can generate visual explanations for any CNN-based network without changing the architecture or retraining, we use it as our visualization technique to interpret our classification results.
These interpretations can help security analysts understand why a malware sample is classified as this family.
Specifically, we use Grad-CAM++ to obtain the corresponding heatmaps of malware samples.
The red area in the heatmap indicates that features in this area play more important roles in classifying as this family.
In other words, sensitive API calls contained in these areas have higher weights in identifying the malware as the family.
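The heatmap mechanism can be sketched at the level of the underlying Grad-CAM idea: each channel's activation map is weighted by a gradient-derived importance score and the weighted sum is passed through a ReLU, so only regions that positively support the predicted family remain. The $2 \times 2$ maps and weights below are toy values; Grad-CAM++ refines how the weights are pooled from higher-order gradients.

```python
# Hedged sketch of the Grad-CAM weighting scheme behind the heatmaps.
def grad_cam(activation_maps, channel_weights):
    h, w = len(activation_maps[0]), len(activation_maps[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for amap, wgt in zip(activation_maps, channel_weights):
        for i in range(h):
            for j in range(w):
                heat[i][j] += wgt * amap[i][j]
    # ReLU keeps only regions that positively support the predicted class.
    return [[max(0.0, v) for v in row] for row in heat]

maps = [[[1.0, 0.0], [0.0, 2.0]],   # toy activation map, channel 1
        [[0.0, 3.0], [1.0, 0.0]]]   # toy activation map, channel 2
heatmap = grad_cam(maps, channel_weights=[0.7, -0.2])
print(heatmap)
```

In the toy output, cells driven by the negatively weighted channel are zeroed out by the ReLU, which is why the red regions of a heatmap correspond to features with high positive weight for the family.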
Figure \ref{fig:visualization2} shows the visualization of a real-world malware sample.
This sample is correctly classified into droidkungfu family by \emph{IFDroid}.
After applying Grad-CAM++ visualization technique on the generated image, we can obtain the corresponding heatmap as shown in Figure \ref{fig:visualization2}.
The redder the color in a heatmap, the more valuable the features in that area; therefore, we pay more attention to these red areas.
After completing the one-to-one correspondence of three red areas from the original image, we can collect seven sensitive API calls (\textit{i.e.,} \emph{getExternalStorageDirectory()}, \emph{write()}, \emph{getLine1Number()}, \emph{getDeviceID()}, \emph{createNewFile()}, \emph{getCellLocation()}, and \emph{connect()}) from these areas.
Through the result, we can see that this malware collects the user's private data and writes it into created files.
These files are then sent over the network or saved on other external devices.
Because of this behavior, it is classified into droidkungfu family.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.45\textwidth]{pdf/heatmap_sim.pdf}}
\caption{\small{The average similarity between heatmaps in 36 families}}
\label{fig:heatmap-sim}
\end{figure}
In practice, to further study the effectiveness of the interpretation of familial classification by heatmaps, we apply Grad-CAM++ on images of all malware samples in dataset-II.
After obtaining all corresponding heatmaps, we compute the similarity between them one by one.
The similarity between two heatmaps is calculated by using \emph{Structural SIMilarity} (SSIM)~\cite{wang2004ssim} technique which has been widely used to measure the similarity of two images.
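As a simplified illustration of the SSIM comparison, the sketch below computes a single-window (global) SSIM score between two grayscale images given as flat pixel lists. The full SSIM of Wang et al. slides a local window over the images, but the global form conveys the idea; the constants follow the common choice $C_1=(0.01L)^2$, $C_2=(0.03L)^2$ with dynamic range $L=255$, and the pixel values are toy data.

```python
# Hedged sketch of a global (single-window) SSIM score.
def global_ssim(x, y, L=255):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n

    def moment(u, v, mu, mv):
        # population covariance of u and v about their means
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

    vx  = moment(x, x, mx, mx)
    vy  = moment(y, y, my, my)
    cov = moment(x, y, mx, my)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [10, 200, 30, 90, 120, 250, 0, 60, 180]        # toy 3x3 image, flattened
print(global_ssim(img, img))                          # identical images: 1
print(global_ssim(img, [255 - p for p in img]))       # inverted image: lower
```

Identical heatmaps score 1, and structurally dissimilar ones score lower, which is the property used above to compare heatmaps within and across families.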
Due to space limitations, we randomly present the heatmaps of 12 families in Figure \ref{fig:visualization12family}; four malware samples are displayed for each family.
After analyzing all heatmaps and the average similarity between different families (Figure \ref{fig:heatmap-sim}), we observe several phenomena.
The first phenomenon is that the heatmaps of most malware in the same family are similar.
It is reasonable because malware samples of certain families are often just repackaged applications with slight modifications~\cite{arp2014drebin}.
In other words, most malware samples in the same family are polymorphic variants of other malware samples in this family.
Therefore, malware samples in the same family always perform similar malicious actions, resulting in similar heatmaps generated by visualization.
The second phenomenon is that the heatmaps of malware in different families are basically different.
This result is in line with expectations since malware samples in different families exhibit different malicious behaviors.
Because of this, they are classified into different malware families.
Moreover, we also find that heatmaps of malware samples in anserverbot family and basebridge family are almost the same.
This result is reasonable because malware samples in anserverbot family evolved from samples in basebridge family~\cite{zhou2012dissecting}.
Their heatmaps can also demonstrate the observation.
\emph{\textbf{Summary:}
IFDroid can interpret the familial classification results by using visualization techniques.
We can even distinguish the malware families directly through the heatmaps since heatmaps of most malware in the same family are similar and heatmaps of malware in different families are basically different.}
\subsection{RQ3: Efficiency}
\begin{figure}
\centering
\subfigure{
\begin{minipage}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf/staticanalysis_overhead.pdf}
\end{minipage}
}
\subfigure{
\begin{minipage}[t]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf/imagegeneration_overhead.pdf}
\end{minipage}
}
\subfigure{
\begin{minipage}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf/learning_cls_overhead.pdf}
\end{minipage}
}
\subfigure{
\begin{minipage}[t]{0.232\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf/visualization_overhead.pdf}
\end{minipage}
}
\caption{\small{The \emph{Cumulative Distribution Function} (CDF) of the runtime overhead of \emph{IFDroid} at different steps (seconds)}}
\label{fig:overhead}
\end{figure}
In this step, we aim to study the runtime overhead of \emph{IFDroid}.
To this end, we select dataset-II, which consists of 8,407 malware samples, as our test subject.
The average numbers of nodes and edges of these 8,407 samples are 3,007 and 7,339, respectively.
Given a new malware sample, \emph{IFDroid} performs four steps to classify it into its corresponding family and interpret the classification result.
\begin{table}[htbp]
\footnotesize
\centering
\caption{\small{Runtime overhead of different steps of \emph{IFDroid} on dataset-II (8,407 malware samples)}}
\begin{tabular}{|c|c|c|}
\hline
Different Steps & Total runtime (s) & Average runtime (s) \\
\hline
Static Analysis & 4,792 & 0.57 \\
Image Generation & 10,172 & 1.21 \\
Familial Classification & 1.93 & 0.00023 \\
Interpretation & 13,619 & 1.62 \\
\hline
ALL & 28,585 & 3.40 \\
\hline
\end{tabular}%
\label{tab:overhead}%
\end{table}%
\emph{1) Static Analysis.}
The first step of \emph{IFDroid} is to distill the program semantics of a sample into a function call graph based on static analysis.
Figure \ref{fig:overhead} and Table \ref{tab:overhead} show the runtime overhead of static analysis on dataset-II; for more than 80\% of the samples, the graph can be extracted within one second.
On average, it takes about 0.57 seconds for \emph{IFDroid} to complete static analysis on dataset-II.
\emph{2) Image Generation.}
The second step of \emph{IFDroid} is to transform the call graph into an image to avoid high-cost graph matching.
Specifically, we extract four different centralities (\textit{i.e.,} degree centrality, Katz centrality, closeness centrality, and harmonic centrality) of sensitive API calls within the graph to construct the image.
As shown in Figure \ref{fig:overhead} and Table \ref{tab:overhead}, \emph{IFDroid} requires about 1.21 seconds on average to analyze the graph and complete the image generation step.
\emph{3) Familial Classification.}
The third step of \emph{IFDroid} is to classify the image into its corresponding family.
We first perform contrastive learning to learn an encoder and then train a classifier on these 8,407 images.
The total runtime overhead of contrastive learning and model training on 8,407 images are shown in Figure \ref{fig:overhead}.
On average, it takes about 10.98 seconds and 5.94 seconds to finish a round of contrastive learning and model training, respectively.
After completing the training phase, we leverage the learned encoder and the trained classifier to complete the familial classification step of \emph{IFDroid}.
As shown in Figure \ref{fig:overhead} and Table \ref{tab:overhead}, this step is the fastest among all steps in \emph{IFDroid}.
It consumes about 1.93 seconds to classify all images (\textit{i.e.,} 8,407 images) into corresponding families.
\emph{4) Interpretation.}
The final step of \emph{IFDroid} is to interpret the classification results of 8,407 images.
To this end, we use Grad-CAM++ visualization technique to obtain the corresponding heatmaps of these images.
This step is the most time-consuming one, requiring 1.62 seconds on average to accomplish the visualization of an image.
In general, given a new malware sample, \emph{IFDroid} consumes about 1.78 seconds to complete the classification and 1.62 seconds to interpret the classification result.
As for \emph{FalDroid}, it takes about 4.6 seconds to complete the classification of a malware sample.
In other words, considering only the classification overhead, \emph{IFDroid} is about 2.6 times faster than \emph{FalDroid}.
Moreover, if we analyze ten samples in parallel, it only takes about 2,858 seconds (\textit{i.e.,} 0.79 hours) for \emph{IFDroid} to finish the analysis, classification, and interpretation of all samples in dataset-II.
Such high efficiency indicates that \emph{IFDroid} can achieve large-scale Android malware classification and interpretation.
\emph{\textbf{Summary:} On average, IFDroid requires about 1.78 seconds to complete the classification and 1.62 seconds to interpret the classification result of a malware sample.}
\section{Introduction}
As the most widely used mobile operating system~\cite{report1}, the Android platform has a security posture that is increasingly intertwined with personal privacy and financial security.
Meanwhile, due to the open-source nature and market openness of the Android operating system, it is more likely to be exploited by malware~\cite{report2}.
To hide their malicious tasks, attackers apply different code obfuscations~\cite{2018obfuscate1, 2018obfuscate2}.
After obfuscation, malware samples become more complex, and the features obtained from them contain many useless and camouflaged features.
These futile features make it difficult to perform accurate behavioral analysis of Android malware.
Therefore, it is important to provide obfuscation-resilient Android malware analysis.
Most traditional Android malware analysis methods~\cite{peng2012using,wang2014exploring,arp2014drebin,fan2018android} cannot resist code obfuscation.
For example, familial classification of Android malware can be roughly divided into two main categories~\cite{fan2018android}, namely string-based approaches (\textit{e.g.,} permissions~\cite{peng2012using}) and graph-based techniques (\textit{e.g.,} function call graphs~\cite{fan2019graph}).
Some methods~\cite{peng2012using,wang2014exploring,arp2014drebin} focus on the permissions requested by apps and search for the presence of certain strings (\textit{e.g.,} API calls) in the disassembled code to build models for analyzing Android malware.
However, they can be easily evaded by obfuscations because of the lack of structural and contextual information of the program behaviors.
To achieve more robust malware classification, studies~\cite{fan2018android, fan2019graph} distill the program semantics of apps into graph representations and apply graph matching to analyze the malware families.
For example, \emph{FalDroid}~\cite{fan2018android} extracts the function call graph of an app and applies frequent subgraph analysis to classify Android malware.
However, Hammad and Dong \emph{et al.}~\cite{2018obfuscate1, 2018obfuscate2} report that malware creators often perform complex obfuscations (\textit{e.g.,} control-flow obfuscation) to hide their malicious tasks.
In this case, features extracted from graphs obtained by \emph{FalDroid}~\cite{fan2018android} may not be accurate since graphs may change a lot after applying advanced code obfuscations.
In short, due to different code obfuscations, features obtained from malware samples may contain many useless and disguised features, making it difficult to achieve accurate behavioral analysis.
To address the issue, we propose to use contrastive learning on Android malware analysis.
Owing to its powerful high-level feature extraction, contrastive learning has been widely used in different areas, such as text representation learning~\cite{2020DeCLUTR} and language understanding~\cite{2020Cert}.
To the best of our knowledge, we are the first to use contrastive learning to resist code obfuscations.
To demonstrate the ability of contrastive learning on analyzing obfuscated Android malware, in this paper, we take the Android malware classification as an example and propose a novel approach that can achieve obfuscation-resilient Android malware classification.
Specifically, we first obtain the function call graph of an app and then apply centrality analysis to transform the graph into an image.
The generated images are used to train an encoder by contrastive learning.
The use of contrastive learning is to maximize the similarity between positive samples and minimize the similarity between negative samples.
In practice, although applying obfuscations may change an app's code, the inherent program semantics do not change.
In other words, the obfuscated app can be treated as one of the positive samples of the original app.
Therefore, we can leverage contrastive learning to reduce the differences introduced by code obfuscations while enlarging the differences between different types of malware, making it possible to correctly classify the obfuscated malware into the corresponding family.
To further show how contrastive learning improves the usability of malware analysis, we apply visualization techniques to visualize the valuable features extracted by contrastive learning.
Specifically, we apply \emph{Gradient-weighted Class Activation Mapping++} (Grad-CAM++)~\cite{selvaraju2017gradcam, chattopadhay2018gradcam} on our images to obtain the corresponding heatmaps.
Grad-CAM++ is a class-discriminative localization technique that generates visual explanations for any CNN-based network without changing the architecture or retraining.
According to the intensity of the color in the heatmap, we can know which features are more effective in classifying this malware as this family.
These valuable features can represent the essential behaviors to explain why the malware is classified as this family.
We implement \emph{IFDroid} and conduct evaluations on two widely used datasets.
Through the comparative experimental results, we find that \emph{IFDroid} is superior to seven state-of-the-art Android malware familial classification systems (\textit{i.e.,} \emph{Dendroid}~\cite{suarez2014dendroid}, \emph{Apposcopy}~\cite{feng2014apposcopy}, \emph{DroidSIFT}~\cite{zhang2014droidsift}, \emph{MudFlow}~\cite{avdiienko2015mudflow}, \emph{DroidLegacy}~\cite{deshotels2014droidlegacy}, \emph{Astroid}~\cite{feng2016astroid}, and \emph{FalDroid}~\cite{fan2018android}).
As for obfuscations, \emph{IFDroid} can maintain 98.2\% true positive rate on classifying 8,112 obfuscated malware samples.
As for interpretability, our experiments show that the heatmaps of most malware in the same family are similar, and the heatmaps of malware in different families are different.
This result is in line with expectations and it can help security analysts analyze the specific reasons why malware samples are classified as corresponding families.
As for runtime overhead, \emph{IFDroid} requires an average of 1.78 seconds to complete the classification and 1.62 seconds to interpret the classification result in our dataset.
This result indicates that \emph{IFDroid} is able to conduct large-scale malware analysis.
\par In summary, this paper makes the following contributions:
\begin{itemize}
\item{
To the best of our knowledge, we are the first to use contrastive learning to resist code obfuscations of Android malware.
Contrastive learning can not only improve the accuracy but also enhance the robustness of Android malware analysis.
}
\item{
We take the Android malware classification as an example to demonstrate the effectiveness of contrastive learning on analyzing obfuscated malware.
We design a novel system (\textit{i.e.,} \emph{IFDroid}) by transforming the function call graph of an app into an image and performing contrastive learning on generated images.
}
\item{
We conduct evaluations on two widely used datasets and results indicate that \emph{IFDroid} is superior to seven state-of-the-art Android malware classification systems (\textit{i.e.,} \emph{Dendroid}~\cite{suarez2014dendroid}, \emph{Apposcopy}~\cite{feng2014apposcopy}, \emph{DroidSIFT}~\cite{zhang2014droidsift}, \emph{MudFlow}~\cite{avdiienko2015mudflow}, \emph{DroidLegacy}~\cite{deshotels2014droidlegacy}, \emph{Astroid}~\cite{feng2016astroid}, and \emph{FalDroid}~\cite{fan2018android}).
}
\end{itemize}
\par \noindent \textbf{Paper organization.} The remainder of the paper is organized as follows.
Section II presents our motivation.
Section III introduces our system.
Section IV reports the experimental results.
Section V discusses the future work and limitations.
Section VI describes the related work. Section VII concludes the present paper.
\section{Motivation Scenario}
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.48\textwidth]{pdf/motivation2.pdf}}
\caption{\small{Motivation scenario of \emph{IFDroid}}}
\label{fig:motivation}
\end{figure}
Consider a security analyst whose daily task is to classify newly detected malware into corresponding families to enrich their malware family dataset.
The richer the family dataset, the more accurately the unknown malware behavior can be predicted.
\emph{Statista}~\cite{report3} reported that the average number of new malware detected per day is about 16,000.
It will be a very time-consuming project if the analyst conducts in-depth manual analysis on these malware samples one by one.
Therefore, the analyst decides to extract the semantic information of the samples by program analysis, and then classify them into their corresponding families through semantic similarity matching (\textit{e.g.,} graph matching).
In fact, however, obfuscation technology has become increasingly advanced, and it is used more and more frequently by attackers~\cite{2018obfuscate1, 2018obfuscate2}.
In other words, different obfuscation techniques may have been applied to these samples, resulting in extracted features that contain many useless and camouflage features.
At the same time, after classifying the samples into their families, the analyst wants to know which semantic features make these malware samples classified into corresponding families.
However, the classification method can only tell us which family the malware belongs to, and will not explain which features are used to determine that they are classified into this family.
The whole scenario is shown in Figure \ref{fig:motivation}.
To address the above challenges in the scenario, we first transform the program semantics of samples into images and then train a robust encoder by contrastive learning.
In the classification phase, we use a visualization technique to obtain the corresponding heatmaps of the generated images.
These heatmaps can help security analysts understand which features are more valuable in classifying them as corresponding families.
We implement \emph{IFDroid} to complete the whole analysis process automatically.
\section{Related Work}
\subsection{Malware Familial Classification}
Recently, many studies~\cite{suarez2014dendroid, feng2014apposcopy, zhang2014droidsift, avdiienko2015mudflow, deshotels2014droidlegacy, feng2016astroid, fan2018android, meng2016smart, cai2018droidcat, yang2014droidminer, garcia2018lightweight} have been proposed to classify malware samples into corresponding families.
For example, \emph{Dendroid}~\cite{suarez2014dendroid} applies text mining techniques to analyze the code structures of Android malware and classify them into corresponding families.
\emph{Apposcopy}~\cite{feng2014apposcopy} considers both data-flow and control-flow information of malware samples to classify them by performing heavy-weight program analysis.
\emph{MudFlow}~\cite{avdiienko2015mudflow} extracts the source-and-sink pairs of malware samples and regards them as features to classify malware.
\emph{FalDroid}~\cite{fan2018android} conducts frequent subgraph analysis to extract common subgraphs of each family and uses them to perform familial classification.
\emph{DroidSIFT}~\cite{zhang2014droidsift} extracts the weighted contextual API dependency graph to solve the malware deformation problem based on static analysis.
These proposed approaches consider different program information to achieve accurate malware classification.
However, heavy-weight program analysis results in low scalability, so these approaches cannot scale to the analysis of large numbers of malware samples.
Moreover, most of them only provide the corresponding labels (\textit{i.e.,} families) to users and can not interpret the classification results.
\subsection{Contrastive Learning}
Contrastive learning was first introduced by Mikolov \emph{et al.}~\cite{2013Distributed} in 2013 for \emph{natural language processing} (NLP).
In recent years, it has been more and more popular on different NLP tasks such as text representation learning~\cite{2020DeCLUTR}, language understanding~\cite{2020Cert}, and cross-lingual pre-training~\cite{2020InfoXLM}.
In practice, it has also been used in other domains.
For example, Dai \emph{et al.}~\cite{2017Contrastive} propose a new method for image caption through contrastive learning.
\emph{SimCLR}~\cite{2020A} is a simple framework to use contrastive learning on image classification.
Compared with previous work, the accuracy of \emph{SimCLR} is improved by 7\%.
\emph{COLA}~\cite{2020Contrastive} is a self-supervised pre-training approach for learning a general-purpose representation of audio and \emph{CVRL}~\cite{2020Spatiotemporal} uses contrastive learning to learn spatiotemporal visual representations from unlabeled videos.
The use of contrastive learning in most previous studies is self-supervised; however, Khosla \emph{et al.}~\cite{khosla2020supervised} find that the label information of the training dataset can improve the performance of the learned encoder.
Therefore, in this paper, to achieve better classification accuracy, we leverage supervised contrastive learning to conduct Android malware familial classification.
\section{System Architecture}
In this section, we introduce \emph{IFDroid}, a novel contrastive learning-based robust and interpretable Android malware classification system.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.47\textwidth]{pdf/overview2.pdf}}
\caption{\small{System overview of \emph{IFDroid}}}
\label{fig:overview}
\end{figure}
\subsection{Overview}
\par As shown in Figure \ref{fig:overview}, \emph{IFDroid} consists of two main phases: \emph{Training Phase} and \emph{Classification Phase}.
The goal of \emph{Training Phase} is to train a robust encoder and an accurate classifier.
This phase consists of four steps as follows.
\emph{1) Static Analysis}: This step aims to extract the function call graph of a malware sample based on static analysis where each node is an API call or a user-defined function.
\emph{2) Image Generation}: This step aims to transform the function call graph into an image based on centrality analysis.
\emph{3) Contrastive Learning}: This step aims to learn an encoder that can automatically extract robust features from an image.
\emph{4) Classifier Training}: This step aims to use vectors encoded by the learned encoder to train an accurate classifier.
The purpose of \emph{Classification Phase} is to classify unlabeled malware into their corresponding families.
This phase includes three steps: \emph{1) Static Analysis}, \emph{2) Image Generation}, and \emph{3) Family Classification and Interpretation}.
The first two steps are the same as in \emph{Training Phase}.
Given an image, it will be fed into a learned encoder in \emph{Training Phase} to obtain the vector representation.
The vector is then labeled as corresponding family by a trained classifier in \emph{Training Phase}.
To interpret the classification result, we use a deep visualization technique to obtain the heatmap of the image to help the security analyst understand why it is classified as this family.
\subsection{Static Analysis}
Empirical studies~\cite{fan2018android, zhang2014droidsift} have demonstrated that graph representation is more robust than string-based features.
In this paper, we aim to achieve efficient malware analysis.
Therefore, we perform low-cost program analysis (\textit{e.g.,} context- and flow-insensitive analysis) to distill the program semantics of a malware sample into a function call graph.
More specifically, we leverage a widely used Android reverse engineering tool namely Androguard~\cite{desnos2011androguard} to complete our static analysis.
To better describe the detailed steps in \emph{IFDroid}, we choose a real-world malware sample\footnote{485c85b5998bfceca88c6240e3bd5337} as our example.
Figure \ref{fig:imagegenaration} shows the sample's function call graph where each node is an API call or a user-defined function.
The numbers of nodes and edges are 117 and 170, respectively.
\subsection{Image Generation}
On the one hand, deep-learning-based image classification can process millions of images while maintaining high accuracy.
On the other hand, the output of image classification can be visualized to give a better intuition to users rather than giving a single decision.
Because of these advantages, image-based methods have been widely used in malware analysis.
However, most of these approaches \cite{malware-image1, malware-image2, malware-image3, malware-image4} only use simple mapping algorithms to transform malware samples to images and then apply deep learning to analyze them.
Thus the semantics of the malware samples may be ignored.
For example, Yang \emph{et al.}~\cite{malware-image4} read Dalvik bytecodes in hexadecimal format directly to convert the sequence of bytecode into an image file.
According to the report in their paper, its detection accuracy is only 93\%.
To achieve efficient and semantic malware analysis, we propose to transform the function call graph into an image by centrality analysis.
The centrality concept was first proposed in social network analysis, where it is used to identify the most important persons in a network.
It measures the importance of a node in a network and is very useful for network analysis.
In practice, many studies have applied centralities in different areas such as biological networks~\cite{jeong2001lethality}, co-authorship networks~\cite{liu2005co}, transportation networks~\cite{guimera2005worldwide}, and criminal networks~\cite{coles2001s}.
Different centralities analyze the importance of a node in a network by performing different network analyses, therefore, they have the potential to preserve different structural properties of a network.
In our paper, we select four widely used centrality measures (\textit{i.e.,} \emph{Degree centrality}~\cite{freeman1978centrality}, \emph{Katz centrality}~\cite{katz1953new}, \emph{Closeness centrality}~\cite{freeman1978centrality}, and \emph{Harmonic centrality}~\cite{marchiori2000harmony}) to commence our image generation.
These four centralities can represent graph details from four different aspects.
By this, we can achieve more complete preservation of a function call graph's semantics.
Specifically, the definitions of these four centralities are as follows.
\begin{itemize}
\item \textbf{\emph{Degree centrality}}~\cite{freeman1978centrality} of a node is the fraction of nodes it is connected to.
The degree centrality values are normalized by dividing by the maximum possible degree $N-1$, where $N$ is the number of nodes in the graph:
\par \centerline{$x_i=\frac{deg(i)}{N-1}$}
Note that $deg(i)$ is the degree of node $i$.
\item \textbf{\emph{Katz centrality}}~\cite{katz1953new} computes the centrality for a node based on the centrality of its neighbors.
The Katz centrality of node $i$ is
\par \centerline{$x_{i}=\alpha \sum_{j}^{}A_{ij}x_{j}+\beta$}
Note that $A$ is the adjacency matrix of the graph $G$ with eigenvalues $\lambda$.
The parameter $\beta$ controls the initial centrality and
\par \centerline{$\alpha < \frac{1}{_{\lambda_{max}}}$}
Katz centrality computes the relative influence of a node within a graph by measuring the number of its immediate neighbors (first-degree nodes) as well as all other nodes in the graph that connect to the node under consideration through these immediate neighbors.
\item \textbf{\emph{Closeness centrality}}~\cite{freeman1978centrality} indicates how close a node is to all other nodes in the network.
It is calculated as the average of the shortest path length from the node to every other node in the graph.
The smaller the average shortest distance of a node, the greater the closeness centrality of the node.
In other words, the average shortest distance and the corresponding closeness centrality are negatively correlated.
\par \centerline{$x_i=\frac{N-1}{\sum _{i\neq j} d(i,j)}$}
Note that $d(i,j)$ is the distance between nodes $i$ and $j$ and $N$ is the number of nodes in the graph.
\item \textbf{\emph{Harmonic centrality}}~\cite{marchiori2000harmony} reverses the sum and reciprocal operations in the definition of closeness centrality.
\par \centerline{$x_i=\frac{\sum \limits_{i\neq j}\frac{1}{d(i,j)}}{N-1}$}
Note that $d(i,j)$ is the distance between nodes $i$ and $j$ and $N$ is the number of nodes in the graph.
\end{itemize}
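As an illustrative sketch, the four centralities above can be computed on a toy undirected graph with plain breadth-first search and fixed-point iteration; the helper names below are our own, and libraries such as networkx provide equivalent measures:

```python
from collections import deque

def bfs_dists(adj, src):
    """Hop distances from src to every reachable node (unweighted BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def degree_centrality(adj):
    # Fraction of nodes a node is connected to, normalized by N-1.
    n = len(adj)
    return {u: len(nbrs) / (n - 1) for u, nbrs in adj.items()}

def closeness_centrality(adj):
    # (N-1) divided by the sum of shortest-path distances to other nodes.
    out = {}
    for u in adj:
        d = bfs_dists(adj, u)
        total = sum(d.values())
        out[u] = (len(d) - 1) / total if total > 0 else 0.0
    return out

def harmonic_centrality(adj):
    # Sum of reciprocal distances, normalized by N-1.
    n = len(adj)
    return {u: sum(1.0 / dd for v, dd in bfs_dists(adj, u).items() if v != u) / (n - 1)
            for u in adj}

def katz_centrality(adj, alpha=0.1, beta=1.0, iters=200):
    """Fixed-point iteration of x_i = alpha * sum_j A_ij x_j + beta,
    which converges when alpha < 1 / lambda_max."""
    x = {u: 1.0 for u in adj}
    for _ in range(iters):
        x = {u: beta + alpha * sum(x[v] for v in adj[u]) for u in adj}
    return x
```

On a star graph, for instance, the hub attains the maximum value 1.0 under degree, closeness, and harmonic centrality, while leaves score lower under all four measures.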
On the one hand, Android apps use API calls to access operating system functionality and system resources.
On the other hand, malware samples always invoke sensitive API calls to perform malicious tasks.
For example, \emph{getDeviceID} can get your phone's IMEI and \emph{getLine1Number} can obtain your phone number.
Therefore, we can leverage sensitive API calls to characterize the malicious behaviors of malware samples.
Specifically, we choose 426 sensitive API calls~\cite{Liangyi2020Experiences} as our concerned objectives, which comprise three different API call sets.
The first set is the top 260 API calls with the highest correlation with malware, the second set is 112 API calls that relate to restrictive permissions, and the third set is 70 API calls that are relevant to sensitive operations.
Finally, the 426 sensitive API calls are obtained by computing the union of these three sets.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.48\textwidth]{pdf/example2.pdf}}
\caption{\small{An example to illustrate the image generation step of \emph{IFDroid}}}
\label{fig:imagegenaration}
\end{figure}
Given a function call graph, we first apply centrality analysis to obtain four centrality values of sensitive API calls.
If sensitive API calls do not appear in the function call graph, the four centrality values will all be zero.
For example, the malware sample in Figure \ref{fig:imagegenaration} invokes a total of three sensitive API calls (\textit{i.e.,} \emph{sendTextMessage()}, \emph{getDefault()}, and \emph{setText()}).
Then the four centrality values of these three sensitive API calls can be computed by centrality analysis.
The other 423 sensitive API calls do not appear in the call graph; therefore, their centrality values are all zero.
After centrality analysis, we can obtain a 426$*$4 vector representation.
If we directly transform it into an image, the generated image will be too narrow, making it difficult to identify specific areas when using the heatmap for interpretation.
Moreover, it is not conducive to viewing.
Therefore, we pad the vector and turn it into a more square image.
Specifically, we add 60 zeros at the end and then reshape it into a 42$*$42 matrix (\textit{i.e.,} 426$*$4+60=42$*$42).
At the same time, in order to be able to see more clearly, we multiply all the values in the vector by 255 to brighten the pixels in the image.
The range of centrality values is between zero and one, and the range of image pixels is between 0 and 255.
After the two are multiplied, the range is also between 0 and 255.
Finally, we can obtain a 42$*$42 image.
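The padding-and-reshaping procedure above can be sketched in plain Python as follows; the row-major flattening order is our assumption, as the text does not fix it:

```python
def to_image(centrality_matrix):
    """Turn a 426x4 centrality matrix into a 42x42 grayscale image.

    Flattens the matrix, pads with 60 zeros (426*4 + 60 = 1764 = 42*42),
    and scales centrality values from [0, 1] to pixel values in [0, 255].
    """
    flat = [v for row in centrality_matrix for v in row]
    assert len(flat) == 426 * 4, "expected one row per sensitive API call"
    flat += [0.0] * 60          # padding to make the image square
    pixels = [round(v * 255) for v in flat]
    return [pixels[i * 42:(i + 1) * 42] for i in range(42)]
```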
\subsection{Contrastive Learning}
In our daily life, humans can recognize objects in the wild even if we do not remember their exact appearance.
This happens because we retain enough high-level features of an object to distinguish it from others and ignore pixel-level details.
For example, although we have seen what a dollar bill looks like many times, we can rarely draw one exactly.
However, although we cannot draw a lifelike dollar bill, we can easily recognize one~\cite{blog_contrastive}.
Therefore, researchers have asked a question: \emph{Can we build a representation learning algorithm that does not pay attention to pixel-level details and only encodes high-level features that are sufficient to distinguish different objects?}
To answer the question, contrastive learning is proposed.
The goal of contrastive learning is to maximize the agreement between positive data and minimize the agreement between negative data by using a contrastive loss in the vector space.
\centerline{
$s(f(x), f(x^{+})) \gg s(f(x), f(x^{-}))$
}
Note that $x$ is a sample, $x^{+}$ is a positive (\textit{i.e.,} similar) sample of $x$, and $x^{-}$ is a negative (\textit{i.e.,} dissimilar) sample of $x$.
Encoder $f$ can encode samples into vector representations.
$s$ is a function that computes the similarity between two vectors.
In self-supervised contrastive learning, the positive sample $x^{+}$ of an image $x$ is constructed by data augmentations such as image rotation and image cropping.
As for negative sample $x^{-}$, any other images can be selected as $x^{-}$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{pdf/contrastive.pdf}}
\caption{\small{Supervised contrastive learning in \emph{IFDroid}}}
\label{fig:contrastive}
\end{figure}
Many studies have demonstrated the high effectiveness of self-supervised contrastive learning~\cite{2020DeCLUTR, 2020Cert, 2020InfoXLM}, but Khosla \emph{et al.} \cite{khosla2020supervised} found that the label information of samples can be used to improve the accuracy of contrastive learning.
Therefore, in this paper, we select supervised contrastive learning to train our encoder.
In other words, our positive samples $x^{+}$ are selected from other malware samples in the same family, rather than being data augmentations of itself, as done in self-supervised contrastive learning.
As shown in Figure \ref{fig:contrastive}, samples $B_{i}$ and $B_{j}$ are both from the droidkungfu family while sample $A_{i}$ is from the dowgin family.
Images of these three malware samples (\textit{i.e.,} $A_{i}$, $B_{i}$, and $B_{j}$) are passed through an encoder to get their corresponding vector representations (\textit{i.e.,} $v_{1i}$, $v_{2i}$, and $v_{2j}$).
Then the goal of our contrastive learning is to maximize the similarity between positive samples (\textit{i.e.,} ($v_{2i}$, $v_{2j}$)) and minimize the similarity between negative samples (\textit{i.e.,} ($v_{1i}$, $v_{2i}$) and ($v_{1i}$, $v_{2j}$)).
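The supervised contrastive objective can be sketched in plain Python as a simplified, unbatched version of the SupCon loss of Khosla \emph{et al.}~\cite{khosla2020supervised}; the actual system computes it over ResNet-18 embeddings in mini-batches:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

def supcon_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive loss: for each anchor, pull embeddings with
    the same family label together and push all others apart."""
    z = [normalize(v) for v in embeddings]
    n = len(z)
    total, count = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:           # anchors without positives are skipped
            continue
        denom = sum(math.exp(dot(z[i], z[a]) / temperature)
                    for a in range(n) if a != i)
        loss_i = -sum(math.log(math.exp(dot(z[i], z[p]) / temperature) / denom)
                      for p in positives) / len(positives)
        total += loss_i
        count += 1
    return total / count
```

When same-family embeddings are nearly aligned and other families are orthogonal, the loss is close to zero; shuffling the labels so that positives point in different directions makes it much larger.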
\begin{table}[htbp]
\small
\centering
\caption{\small{Parameters used in our contrastive learning}}
\begin{tabular}{|c|c|}
\hline
Parameters & Settings \\
\hline
loss function & SupCon loss in~\cite{khosla2020supervised} \\
temperature & 0.07 \\
optimizer & SGD \\
momentum & 0.9 \\
weight decay & 0.0001 \\
learning rate & 0.05 \\
batch size & 64 \\
epoch & 100 \\
\hline
\end{tabular}%
\label{tab:parameters1}%
\end{table}%
After experimenting with several widely used neural networks, we finally choose ResNet-18~\cite{he2016resnet18} as our image encoder since it achieves a balance between accuracy and efficiency.
Table \ref{tab:parameters1} shows the details of parameters used in our contrastive learning.
The loss function is the same as in \emph{SupCon}~\cite{khosla2020supervised} namely \emph{Supervised Contrastive Loss}.
The whole procedure is trained using \emph{Stochastic Gradient Descent} (SGD) with 0.9 momentum and 0.0001 weight decay.
The output in this step is a learned encoder, that is, a learned ResNet-18.
It can convert an image into a vector whose dimension is 512.
\subsection{Classifier Training}
In this step, we first use our learned encoder (\textit{i.e.,} ResNet-18) to encode images into corresponding vectors, and then train a classifier (\textit{i.e.,} a one-layer fully connected layer) by using these vectors and their labels.
After training for 100 epochs, the classifier is selected as our final classifier.
Parameters used in classifier training differ from those in contrastive learning only in the loss function (\textit{i.e.,} cross entropy for classifier training); the others are the same.
\subsection{Family Classification and Interpretation}
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.4\textwidth]{pdf/classifier.pdf}}
\caption{\small{Familial classification of \emph{IFDroid}}}
\label{fig:cls}
\end{figure}
After the training phase, we obtain a learned encoder and a trained classifier, which are used to classify new unlabeled malware samples.
Specifically, given a new malware sample, we first perform static analysis to extract the function call graph.
Then the graph is transformed into an image by centrality analysis.
As shown in Figure \ref{fig:cls}, given an image, the encoder can encode it as a vector representation.
Finally, the classifier takes the input of the vector and predicts the corresponding family.
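As a toy illustration of the centrality step, the snippet below computes degree centrality (one common centrality measure; the paper's exact choice of measures and image layout may differ) over a small call graph and packs the scores into pixel intensities.

```python
import numpy as np

# Toy function call graph: adjacency[i][j] = 1 means function i calls j.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
])
n = adjacency.shape[0]

# Degree centrality: fraction of other functions each function is linked to
# (treating caller/callee edges as undirected links).
degree = ((adjacency + adjacency.T) > 0).sum(axis=1) / (n - 1)

# Map the scores to 8-bit pixel intensities and arrange them in a grid.
pixels = np.round(degree * 255).astype(np.uint8)
image = pixels.reshape(2, 2)
```

Functions that interact with more of the graph become brighter pixels, so the resulting image preserves a structural summary of the call graph that the CNN encoder can consume.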
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.4\textwidth]{pdf/visualization1.pdf}}
\caption{\small{The heatmap generated by Grad-CAM++ visualization technique to illustrate the classification result
}}
\label{fig:visualization1}
\end{figure}
To interpret the classification result, we apply a visualization technique to the trained network.
After trying several different methods, we select \emph{Gradient-weighted Class Activation Mapping++} (Grad-CAM++)~\cite{selvaraju2017gradcam, chattopadhay2018gradcam} as our final visualization technique since it performs better and can generate visual explanations for any CNN-based network without changing the architecture or retraining.
Figure \ref{fig:visualization1} shows the visualization of a real-world malware sample.
According to the intensity of the colors in the heatmap, we can identify which features are most effective in classifying the malware into its family.
For example, suppose a malware sample is classified into the droidkungfu family.
Applying the Grad-CAM++ visualization technique to its image yields the heatmap shown in Figure \ref{fig:visualization1}; redder areas indicate regions containing more effective features.
We can then map these areas back to the corresponding regions of the original image: the sensitive API calls contained in these regions carry higher weights in identifying the malware as a member of that family.
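To illustrate how such heatmaps arise, the sketch below implements plain Grad-CAM on synthetic activations; Grad-CAM++~\cite{chattopadhay2018gradcam} refines the channel weighting, but the heatmap construction is analogous. The activation and gradient tensors here are random stand-ins, not outputs of the actual ResNet-18.

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Plain Grad-CAM sketch: weight each feature map by its
    global-average-pooled gradient, sum the maps, and keep only the
    positive evidence for the predicted class."""
    # activations, gradients: (channels, H, W) from the last conv layer
    weights = gradients.mean(axis=(1, 2))             # per-channel importance
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0.0)                        # ReLU: positive influence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam

# Synthetic stand-ins for the last conv block's activations/gradients.
rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))
grads = rng.normal(size=(8, 7, 7))
heatmap = grad_cam_heatmap(acts, grads)
```

Upsampling the resulting low-resolution map to the input image size and overlaying it as color produces heatmaps like the one in Figure \ref{fig:visualization1}.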
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}
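Each author should instead be defined separately, with their own e-mail
address and affiliation; for example (the names, addresses, and
affiliation fields here are illustrative):
\begin{verbatim}
\author{Brooke Aster}
\email{brooke.aster@university.edu}
\affiliation{%
  \institution{University}
  \city{City}
  \country{Country}}

\author{David Mehldau}
\email{david.mehldau@university.edu}
\affiliation{%
  \institution{University}
  \city{City}
  \country{Country}}
\end{verbatim}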
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all short- and
full-length articles, and optional for two-page abstracts.
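In the \LaTeX\ source, the generated classifiers and the author keywords
take the following form (the concept and keywords shown are
illustrative):
\begin{verbatim}
\ccsdesc[500]{Security and privacy~Malware and
its mitigation}
\keywords{malware classification, contrastive
learning}
\end{verbatim}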
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader. Figure captions go below the figure. Your figures should
{\bfseries also} include a description suitable for screen readers, to
assist the visually-challenged to better understand your work.
Figure captions are placed {\itshape below} the figure.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}